62668132
Esther 2
Esther 2 is the second chapter of the Book of Esther in the Hebrew Bible or the Old Testament of the Christian Bible. The author of the book is unknown, and modern scholars have established that the final stage of the Hebrew text would have been formed by the second century BCE. Chapters 1 and 2 form the exposition of the book. This chapter introduces Mordecai and his adopted daughter, Esther, whose beauty won the approval of the king Ahasuerus, and she was crowned the queen of Persia. Given information from Mordecai, Esther warned the king of an assassination plan (verses 21–23), so that the would-be assassins were executed on the gallows, and the king owed Mordecai his life. Text. This chapter was originally written in the Hebrew language and, since the 16th century, is divided into 23 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The king's decision to seek a new queen (2:1–4). To find a replacement for the Persian queen after the deposal of Vashti, the king decided to hold a nationwide contest following the advice of his counselors. "And let the king appoint officers in all the provinces of his kingdom, that they may gather together all the fair young virgins unto Shushan the palace, to the house of the women, unto the custody of Hege the king's chamberlain, keeper of the women; and let their things for purification be given them:" Esther's admission to the court (2:5–11). As the only requirement to enter the contest was beauty (verse 3), Esther's status as a Jew, a descendant of captives (verse 6), without father or mother, did not hinder her entrance to the court. Once she was in the harem, she obtained 'a favored position in the eyes of the harem-master'. "Now in Shushan the palace there was a certain Jew, whose name was Mordecai, the son of Jair, the son of Shimei, the son of Kish, a Benjamite;" "Who had been carried away from Jerusalem with the captivity which had been carried away with Jeconiah king of Judah, whom Nebuchadnezzar the king of Babylon had carried away." "And he brought up Hadassah, that is, Esther, his uncle's daughter: for she had neither father nor mother, and the maid was fair and beautiful; whom Mordecai, when her father and mother were dead, took for his own daughter." "Esther had not made known her people or kindred, for Mordecai had commanded her not to make it known." Esther's accession to the throne (2:12–18). This part contains the description of the twelve-month course of beautifying treatments for the candidates to become the Persian queen. It also gives a hint of Esther's character: she might possess 'innate cunning' to distinguish herself from her competitors, and in the end she was chosen to be the queen.
"Now when every maid's turn was come to go in to king Ahasuerus, after that she had been twelve months, according to the manner of the women, (for so were the days of their purifications accomplished, to wit, six months with oil of myrrh, and six months with sweet odours, and with other things for the purifying of the women;)" "So Esther was taken unto king Ahasuerus into his house royal in the tenth month, which is the month Tebeth, in the seventh year of his reign." Verse 16. The time referred to in the verse falls in the January or February of 478 BC which would have been very shortly after Xerxes' return to Susa from the war with the Greeks, thus the long delay in replacing Vashti can be explained by the long absence of Xerxes in Greece. Mordecai's discovery of the plot against the king (2:19–23). This section records how Mordecai overheard a plot to assassinate the king and told Esther, so she could save the king's life based on the information "in the name of Mordecai" (). This episode foreshadows the future events and becomes truly functional with the rewarding of Mordecai in chapter 6. "Esther had not made known her kindred or her people, as Mordecai had commanded her, for Esther obeyed Mordecai just as when she was brought up by him." "In those days, as Mordecai was sitting at the king's gate, Bigthan and Teresh, two of the king's eunuchs, who guarded the threshold, became angry and sought to lay hands on King Ahasuerus." Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62668132
6267178
Simulation noise
Simulation noise is a function that creates a divergence-free vector field. This signal can be used in artistic simulations to increase the perception of extra detail. The function can be calculated in three dimensions by dividing the space into a regular lattice grid. With each edge is associated a random value, indicating a rotational component of material revolving around the edge. By following rotating material into and out of faces, one can quickly sum the flux passing through each face of the lattice. Flux values at lattice faces are then interpolated to create a field value for all positions. Perlin noise is the earliest form of lattice noise, which has become very popular in computer graphics. Perlin noise is not suited for simulation because it is not divergence-free. Noises based on lattices, such as simulation noise and Perlin noise, are often calculated at different frequencies and summed together to form band-limited fractal signals. Other approaches developed later use vector calculus identities to produce divergence-free fields, such as "Curl-Noise" as suggested by Robert Bridson, and "Divergence-Free Noise" due to Ivan DeWolf. These often require calculation of lattice noise gradients, which sometimes are not readily available. A naive implementation would call a lattice noise function several times to calculate its gradient, resulting in more computation than is strictly necessary. Unlike these noises, simulation noise has a geometric rationale in addition to its mathematical properties. It simulates vortices scattered in space, to produce its pleasing aesthetic. Curl Noise. The vector field is created as follows: for every point (x,y,z) in the space, a vector field G is created; each component Gx, Gy, and Gz of the vector field is defined by a 3D Perlin or simplex noise function with x, y and z as parameters. The partial derivatives of Gx, Gy, and Gz with respect to x, y and z are obtained from the gradient of the Perlin or simplex noise, either by finite differences or by implicit calculation inside the simplex noise. The partial derivatives are used to calculate F as the curl of G, given by formula_0 Bitangent Noise. This method is based on the fact that the curl of the gradient of a scalar field is zero, and on the identity that expands the divergence of a cross product of two vectors A and B as the difference of the dot products of each vector with the curl of the other: formula_1 formula_2 This means that if the curl of both vector fields is zero, then the divergence of the cross product of two vectors that are the gradients of scalar fields is zero too. This results in a divergence-free vector field by construction, calling only two noise functions to create the scalar fields. The vector field is created as follows: two scalar fields formula_3 and formula_4 are calculated using 3D Perlin or simplex noise functions; then the gradients A and B of each of these fields are calculated, and the cross product of A and B gives a divergence-free vector field. Signed Distance Noise. The vector field is created based on a closed and differentiable implicit surface S = F(x,y,z) = 0. For every point in the space, frequently outside or near the surface, we get a vector g that is normal to the surface; this is the gradient of S, i.e. the partial derivatives with respect to x, y and z. This vector is not unitary, but we can get a unit normal n by dividing g by the magnitude of the gradient. Outside of the surface all these normals point away from the surface.
formula_5 formula_6 formula_7 Afterwards, we calculate a scalar value p for that point in space using a 3D Perlin or simplex noise function. Now we create a vector field V = pn pointing outside of the surface. The curl of this vector field gives, at every point in space, the direction in which the particles should move. formula_8 By construction, this vector SDN will point in a direction tangent to an isosurface at the level of the signed distance to the original surface, and can be used to confine the movement of the particles to stay on that surface.
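As a rough illustration of the curl-noise construction described above (a sketch, not code from any of the cited works), the following Python snippet builds a vector potential G from three noise channels and takes its curl with central finite differences. Simple smooth sinusoidal functions stand in for Perlin or simplex noise so the example is self-contained; the resulting field F is divergence-free up to discretization error.

```python
import numpy as np

# Stand-ins for three independent 3D noise channels (Gx, Gy, Gz).
# A real implementation would use Perlin or simplex noise here.
def noise(p, seed):
    x, y, z = p
    return (np.sin(1.7 * x + seed) * np.cos(2.3 * y - seed)
            + np.sin(1.1 * z + 0.5 * seed))

def G(p):
    """Vector potential whose curl gives the divergence-free field."""
    return np.array([noise(p, 0.0), noise(p, 13.7), noise(p, 42.1)])

def curl_noise(p, h=1e-4):
    """F = curl G, evaluated with central finite differences."""
    def dG(axis):
        e = np.zeros(3)
        e[axis] = h
        return (G(p + e) - G(p - e)) / (2.0 * h)
    dGdx, dGdy, dGdz = dG(0), dG(1), dG(2)
    return np.array([dGdy[2] - dGdz[1],   # dGz/dy - dGy/dz
                     dGdz[0] - dGdx[2],   # dGx/dz - dGz/dx
                     dGdx[1] - dGdy[0]])  # dGy/dx - dGx/dy

def divergence(F, p, h=1e-3):
    """Numerical divergence; should be close to 0 for the curl field."""
    d = 0.0
    for axis in range(3):
        e = np.zeros(3)
        e[axis] = h
        d += (F(p + e)[axis] - F(p - e)[axis]) / (2.0 * h)
    return d

p = np.array([0.3, -1.2, 2.5])
print(curl_noise(p), divergence(curl_noise, p))  # divergence is approximately 0
```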
[ { "math_id": 0, "text": "F = (\\frac{\\partial Gz}{\\partial y} - \\frac{\\partial Gy}{\\partial z} ,\\frac{\\partial Gx}{\\partial z} - \\frac{\\partial Gz}{\\partial x},\\frac{\\partial Gy}{\\partial x} - \\frac{\\partial Gx}{\\partial y})" }, { "math_id": 1, "text": "\\nabla \\times ( \\nabla \\varphi ) = \\mathbf{0}." }, { "math_id": 2, "text": "\\nabla \\cdot (\\mathbf{A} \\times \\mathbf{B})\n =\\ (\\nabla {\\times} \\mathbf{A}) \\cdot \\mathbf{B} \n \\,-\\, \\mathbf{A} \\cdot (\\nabla {\\times} \\mathbf{B})" }, { "math_id": 3, "text": "\\phi" }, { "math_id": 4, "text": "\\psi" }, { "math_id": 5, "text": "\ng=\\nabla F(x, y, z) = \\left(\\frac{\\partial F}{\\partial x}, \\frac{\\partial F}{\\partial y}, \\frac{\\partial F}{\\partial z}\\right)" }, { "math_id": 6, "text": "\n\\mathbf{n} = \\frac{g(x, y, z)}{\\|\\nabla F(x, y, z)\\|}\n" }, { "math_id": 7, "text": "\n\\|\\nabla F(x, y, z)\\| = \\sqrt{\\left(\\frac{\\partial F}{\\partial x}\\right)^2 + \\left(\\frac{\\partial F}{\\partial y}\\right)^2 + \\left(\\frac{\\partial F}{\\partial z}\\right)^2}\n" }, { "math_id": 8, "text": "SDN = (\\frac{\\partial Vz}{\\partial y} - \\frac{\\partial Vy}{\\partial z} ,\\frac{\\partial Vx}{\\partial z} - \\frac{\\partial Vz}{\\partial x},\\frac{\\partial Vy}{\\partial x} - \\frac{\\partial Vx}{\\partial y})" } ]
https://en.wikipedia.org/wiki?curid=6267178
62672363
Ter-Antonyan function
The Ter-Antonyan function parameterizes the energy spectra of primary cosmic rays in the "knee" region (formula_0 eV) by a continuously differentiable function of energy formula_1, taking into account the rate of change of the spectral slope. The function is expressed as: $\frac{dF}{dE}=\Phi E^{-\gamma_1}\left(1+\left(\frac{E}{E_k}\right)^{\epsilon}\right)^{-\frac{\gamma_2-\gamma_1}{\epsilon}}$ (1), where formula_2 is a scale factor, formula_3 and formula_4 are the asymptotic slopes of the function (or spectral slopes) in a logarithmic scale at formula_5 and formula_6, respectively, for a given energy formula_7 (the so-called "knee" energy). The rate of change of the spectral slopes is set in function (1) by the "sharpness of knee" parameter, formula_8. Function (1) was proposed at the ANI'98 Workshop (1998) by Samvel Ter-Antonyan both for the interpolation of primary energy spectra in the energy range 1–100 PeV and for the search for parametrized solutions of the inverse problem to reconstruct primary cosmic ray energy spectra. Function (1) is also used for the interpolation of observed Extensive Air Shower spectra in the knee region. Function (1) can be re-written as: formula_9 where formula_10 and formula_11 is the "knee" shaping function describing the change of the spectral slope. Examples of formula_12 for formula_13 are presented above. The rate of change of the spectral slope from formula_14 to formula_15 with respect to energy (formula_1) is derived from (1) as: formula_16, where formula_17, formula_18, and formula_19 is the sharpness-independent spectral slope at the knee energy. Function (1) coincides with B. Peters spectra for formula_20 and asymptotically approaches the broken power law of cosmic ray energy spectra for formula_21: formula_22, where formula_23
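For concreteness, here is a minimal Python sketch of function (1) in the form given above. The parameter values (a knee at about 3 PeV, slopes 2.7 and 3.1, sharpness 2, unit scale factor) are illustrative placeholders only, not fitted values from the literature.

```python
import numpy as np

def ter_antonyan(E, Phi, gamma1, gamma2, Ek, eps):
    """dF/dE = Phi * E^(-gamma1) * (1 + (E/Ek)^eps)^(-(gamma2 - gamma1)/eps)."""
    dgamma = gamma2 - gamma1
    return Phi * E**(-gamma1) * (1.0 + (E / Ek)**eps)**(-dgamma / eps)

# Illustrative parameters (not fitted values): knee at ~3 PeV,
# asymptotic slopes 2.7 -> 3.1, sharpness eps = 2.
E = np.logspace(15, 17, 5)          # energies in eV
flux = ter_antonyan(E, Phi=1.0, gamma1=2.7, gamma2=3.1, Ek=3e15, eps=2.0)

# Local spectral slope d(ln dF/dE)/d(ln E): it approaches -gamma1 well below
# the knee and -gamma2 well above it, as stated in the text.
slope = np.gradient(np.log(flux), np.log(E))
print(slope)
```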
[ { "math_id": 0, "text": "10^{15}-10^{17}" }, { "math_id": 1, "text": "E" }, { "math_id": 2, "text": "\\Phi" }, { "math_id": 3, "text": "\\gamma_1" }, { "math_id": 4, "text": "\\gamma_2" }, { "math_id": 5, "text": "E\\ll E_k" }, { "math_id": 6, "text": "E\\gg E_k" }, { "math_id": 7, "text": "E_k" }, { "math_id": 8, "text": "\\epsilon>0" }, { "math_id": 9, "text": "\n\\frac{dF}{dE} = \\Phi E^{-\\gamma_1}Y(E,\\epsilon,\\Delta\\gamma),\n" }, { "math_id": 10, "text": "\\Delta\\gamma=\\gamma_2-\\gamma_1" }, { "math_id": 11, "text": "\nY(E,\\epsilon,\\Delta\\gamma)\\equiv\\left(1+\\left(\\frac{E}{E_{k}}\\right)^{\\epsilon}\\right)^{-\\frac{\\Delta\\gamma}{\\epsilon}}\n" }, { "math_id": 12, "text": "Y(E,\\epsilon,\\Delta\\gamma=0.5)" }, { "math_id": 13, "text": "\\epsilon\\equiv0.5, 1, 2, \\cdots 500" }, { "math_id": 14, "text": "-\\gamma_1" }, { "math_id": 15, "text": "-\\gamma_2" }, { "math_id": 16, "text": "\n\\frac{df(E)}{dx}=-\\gamma_1-\\frac{\\Delta\\gamma}{1+(E_k/E)^\\epsilon}" }, { "math_id": 17, "text": "f=\\ln\\left(\\frac{dF}{dE}\\right)" }, { "math_id": 18, "text": "x=\\ln(\\frac{E}{E_k})" }, { "math_id": 19, "text": "\\left(\\frac{df}{dx}\\right)_{E=E_k}=-\\frac{\\gamma_1+\\gamma_2}{2}" }, { "math_id": 20, "text": "\\epsilon=1" }, { "math_id": 21, "text": "\\epsilon\\gg1" }, { "math_id": 22, "text": "\\left(\\frac{dF}{dE}\\right)_{\\epsilon=\\infin}\\propto\\left(\\frac{E}{E_k}\\right)^{-\\gamma}" }, { "math_id": 23, "text": "\\gamma=\n\\begin{cases}\n\\gamma_1, & \\text{if } E<E_k\\\\\n\\gamma_2, & \\text{if } E>E_k.\n\\end{cases}\n" } ]
https://en.wikipedia.org/wiki?curid=62672363
62674651
Eakin–Nagata theorem
In abstract algebra, the Eakin–Nagata theorem states: given commutative rings formula_0 such that formula_1 is finitely generated as a module over formula_2, if formula_1 is a Noetherian ring, then formula_2 is a Noetherian ring. (Note the converse is also true and is easier.) The theorem is similar to the Artin–Tate lemma, which says that the same statement holds with "Noetherian" replaced by "finitely generated algebra" (assuming the base ring is a Noetherian ring). The theorem was first proved in Paul M. Eakin's thesis and later independently by Masayoshi Nagata (1968). The theorem can also be deduced from the characterization of a Noetherian ring in terms of injective modules, as done for example by David Eisenbud; this approach is useful for a generalization to non-commutative rings. Proof. The following more general result is due to Edward W. Formanek and is proved by an argument rooted in the original proofs by Eakin and Nagata: let formula_3 be a finitely generated faithful module over a commutative ring formula_2; if the ascending chain condition holds on the submodules of the form formula_4 for ideals formula_5, then formula_3 is a Noetherian module (and hence formula_2 is a Noetherian ring). This formulation is likely the most transparent one. "Proof": It is enough to show that formula_3 is a Noetherian module since, in general, a ring admitting a faithful Noetherian module over it is a Noetherian ring. Suppose otherwise. By assumption, the set of all formula_4, where formula_6 is an ideal of formula_2 such that formula_7 is not Noetherian, has a maximal element, formula_8. Replacing formula_3 and formula_2 by formula_9 and formula_10, we can assume that formula_7 is Noetherian for every nonzero ideal formula_6 of formula_2, while formula_3 itself is not. Next, consider the set formula_11 of submodules formula_12 such that formula_13 is faithful. Choose a set of generators formula_14 of formula_3 and then note that formula_13 is faithful if and only if for each formula_15, the inclusion formula_16 implies formula_17. Thus, it is clear that Zorn's lemma applies to the set formula_11, and so the set has a maximal element, formula_18. Now, if formula_19 is Noetherian, then it is a faithful Noetherian module over formula_2 and, consequently, formula_2 is a Noetherian ring, a contradiction. Hence, formula_19 is not Noetherian and, replacing formula_3 by formula_19, we can also assume that formula_13 is not faithful for every nonzero submodule formula_12. Let a submodule formula_20 be given. Since formula_13 is not faithful, there is a nonzero element formula_15 such that formula_21. By assumption, formula_22 is Noetherian and so formula_23 is finitely generated. Since formula_24 is also finitely generated, it follows that formula_25 is finitely generated; i.e., formula_3 is Noetherian, a contradiction. formula_26
[ { "math_id": 0, "text": "A \\subset B" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "M" }, { "math_id": 4, "text": "IM" }, { "math_id": 5, "text": "I \\subset A" }, { "math_id": 6, "text": "I" }, { "math_id": 7, "text": "M/IM" }, { "math_id": 8, "text": "I_0 M" }, { "math_id": 9, "text": "M/I_0 M" }, { "math_id": 10, "text": "A/\\operatorname{Ann}(M/I_0 M)" }, { "math_id": 11, "text": "S" }, { "math_id": 12, "text": "N \\subset M" }, { "math_id": 13, "text": "M/N" }, { "math_id": 14, "text": "\\{ x_1, \\dots, x_n \\}" }, { "math_id": 15, "text": "a \\in A" }, { "math_id": 16, "text": "\\{ a x_1, \\dots, a x_n \\} \\subset N" }, { "math_id": 17, "text": "a = 0" }, { "math_id": 18, "text": "N_0" }, { "math_id": 19, "text": "M/N_0" }, { "math_id": 20, "text": "0 \\ne N \\subset M" }, { "math_id": 21, "text": "aM \\subset N" }, { "math_id": 22, "text": "M/aM" }, { "math_id": 23, "text": "N/aM" }, { "math_id": 24, "text": "aM" }, { "math_id": 25, "text": "N" }, { "math_id": 26, "text": "\\square" } ]
https://en.wikipedia.org/wiki?curid=62674651
6267642
Kalman–Yakubovich–Popov lemma
The Kalman–Yakubovich–Popov lemma is a result in system analysis and control theory which states: Given a number formula_0, two n-vectors B, C and an n × n Hurwitz matrix A, if the pair formula_1 is completely controllable, then a symmetric matrix P and a vector Q satisfying formula_2 formula_3 exist if and only if formula_4 Moreover, the set formula_5 is the unobservable subspace for the pair formula_6. The lemma can be seen as a generalization of the Lyapunov equation in stability theory. It establishes a relation between a linear matrix inequality involving the state space constructs A, B, C and a condition in the frequency domain. The Kalman–Popov–Yakubovich lemma was first formulated and proved in 1962 by Vladimir Andreevich Yakubovich, where it was stated for the strict frequency inequality. The case of the nonstrict frequency inequality was published in 1963 by Rudolf E. Kálmán. In that paper the relation to solvability of the Lur'e equations was also established. Both papers considered scalar-input systems. The constraint on the control dimensionality was removed in 1964 by Gantmakher and Yakubovich, and independently by Vasile Mihai Popov. Extensive reviews of the topic are available in the literature. Multivariable Kalman–Yakubovich–Popov lemma. Given formula_7 with formula_8 for all formula_9 and formula_10 controllable, the following are equivalent: The corresponding equivalence for strict inequalities holds even if formula_10 is not controllable.
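The scalar-input frequency-domain condition stated above can be checked numerically on a grid of frequencies. The following Python sketch does this for an illustrative Hurwitz system (the matrices A, B, C and the value of gamma are invented for the example, not taken from the references).

```python
import numpy as np

# Illustrative scalar-input example: A Hurwitz, B and C column n-vectors.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # eigenvalues -1 and -2, hence Hurwitz
B = np.array([[0.0], [1.0]])
C = np.array([[1.0], [0.5]])
gamma = 1.0

def kyp_frequency_condition(A, B, C, gamma, omegas):
    """Evaluate gamma + 2*Re[C^T (j*w*I - A)^{-1} B] on a frequency grid."""
    n = A.shape[0]
    vals = []
    for w in omegas:
        H = C.T @ np.linalg.solve(1j * w * np.eye(n) - A, B)
        vals.append(gamma + 2.0 * H.real.item())
    return np.array(vals)

omegas = np.logspace(-2, 2, 200)
vals = kyp_frequency_condition(A, B, C, gamma, omegas)
print(vals.min() >= 0)   # True iff the frequency condition holds on the grid
```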
[ { "math_id": 0, "text": "\\gamma > 0" }, { "math_id": 1, "text": "(A,B)" }, { "math_id": 2, "text": "A^T P + P A = -Q Q^T" }, { "math_id": 3, "text": " P B-C = \\sqrt{\\gamma}Q" }, { "math_id": 4, "text": "\n\\gamma+2 Re[C^T (j\\omega I-A)^{-1}B]\\ge 0\n" }, { "math_id": 5, "text": "\\{x: x^T P x = 0\\}" }, { "math_id": 6, "text": "(C,A)" }, { "math_id": 7, "text": "A \\in \\R^{n \\times n}, B \\in \\R^{n \\times m}, M = M^T \\in \\R^{(n+m) \\times (n+m)}" }, { "math_id": 8, "text": "\\det(j\\omega I - A) \\ne 0" }, { "math_id": 9, "text": "\\omega \\in \\R" }, { "math_id": 10, "text": "(A, B)" } ]
https://en.wikipedia.org/wiki?curid=6267642
62676796
Quantum Cramér–Rao bound
The quantum Cramér–Rao bound is the quantum analogue of the classical Cramér–Rao bound. It bounds the achievable precision in parameter estimation with a quantum system: formula_0 where formula_1 is the number of independent repetitions, and formula_2 is the quantum Fisher information. Here, formula_3 is the state of the system and formula_4 is the Hamiltonian of the system. When considering a unitary dynamics of the type formula_5 where formula_6 is the initial state of the system, formula_7 is the parameter to be estimated based on measurements on formula_8 Simple derivation from the Heisenberg uncertainty relation. Let us consider the decomposition of the density matrix into pure components as formula_9 The Heisenberg uncertainty relation is valid for all formula_10 formula_11 From these, employing the Cauchy–Schwarz inequality, we arrive at formula_12 Here formula_13 is the error propagation formula, which roughly tells us how well formula_7 can be estimated by measuring formula_14 Moreover, the convex roof of the variance is given as formula_15 where formula_16 is the quantum Fisher information.
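As a minimal numerical sketch (not from the article's references), the bound can be evaluated for a pure probe state, where the quantum Fisher information reduces to four times the variance of the Hamiltonian. The single-qubit state and Hamiltonian below are illustrative choices.

```python
import numpy as np

# Illustrative single-qubit example: H = sigma_z / 2, probe state |+>.
H = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)

# For a pure state, the quantum Fisher information is 4 * variance of H.
mean_H = np.vdot(psi, H @ psi).real
mean_H2 = np.vdot(psi, H @ H @ psi).real
F_Q = 4.0 * (mean_H2 - mean_H**2)

m = 100                                   # independent repetitions
cramer_rao_bound = 1.0 / (m * F_Q)        # lower bound on (Delta theta)^2
print(F_Q, cramer_rao_bound)              # F_Q = 1 for |+> and sigma_z / 2
```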
[ { "math_id": 0, "text": "(\\Delta \\theta)^2 \\ge \\frac 1 {m F_{\\rm Q}[\\varrho,H]}," }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "F_{\\rm Q}[\\varrho,H]" }, { "math_id": 3, "text": "\\varrho" }, { "math_id": 4, "text": "H" }, { "math_id": 5, "text": "\\varrho(\\theta)=\\exp(-iH\\theta)\\varrho_0\\exp(+iH\\theta)," }, { "math_id": 6, "text": "\\varrho_0" }, { "math_id": 7, "text": "\\theta" }, { "math_id": 8, "text": "\\varrho(\\theta)." }, { "math_id": 9, "text": "\\varrho=\\sum_k p_k \\vert\\Psi_k\\rangle\\langle\\Psi_k\\vert. " }, { "math_id": 10, "text": "\\vert\\Psi_k\\rangle" }, { "math_id": 11, "text": "(\\Delta A)^2_{\\Psi_k}(\\Delta B)^2_{\\Psi_k}\\ge \\frac 1 4 |\\langle i[A,B] \\rangle_{\\Psi_k}|^2. " }, { "math_id": 12, "text": "(\\Delta\\theta)^2_A \\ge \\frac{1}{4\\min_{\\{p_k,\\Psi_k\\}}[\\sum_k p_k (\\Delta B)_{\\Psi_k}^2]}. " }, { "math_id": 13, "text": "(\\Delta\\theta)^2_A= \\frac{(\\Delta A)^2}{|\\partial_{\\theta}\\langle A \\rangle|^2}=\\frac{(\\Delta A)^2}{|\\langle i[A,B] \\rangle|^2} " }, { "math_id": 14, "text": "A." }, { "math_id": 15, "text": "\\min_{\\{p_k,\\Psi_k\\}}\\left[\\sum_k p_k (\\Delta B)_{\\Psi_k}^2\\right]=\\frac1 4 F_Q[\\varrho, B]," }, { "math_id": 16, "text": "F_Q[\\varrho, B]" } ]
https://en.wikipedia.org/wiki?curid=62676796
62676893
Quantum Fisher information
The quantum Fisher information is a central quantity in quantum metrology and is the quantum analogue of the classical Fisher information. It is one of the central quantities used to quantify the utility of an input state, especially in Mach–Zehnder (or, equivalently, Ramsey) interferometer-based phase or parameter estimation. It has been shown that the quantum Fisher information can also be a sensitive probe of a quantum phase transition (e.g. recognizing the superradiant quantum phase transition in the Dicke model). The quantum Fisher information formula_0 of a state formula_1 with respect to the observable formula_2 is defined as formula_3 where formula_4 and formula_5 are the eigenvalues and eigenvectors of the density matrix formula_6 respectively, and the summation goes over all formula_7 and formula_8 such that formula_9. When the observable generates a unitary transformation of the system with a parameter formula_10 from initial state formula_11, formula_12 the quantum Fisher information constrains the achievable precision in statistical estimation of the parameter formula_10 via the quantum Cramér–Rao bound as formula_13 where formula_14 is the number of independent repetitions. It is often desirable to estimate the magnitude of an unknown parameter formula_15 that controls the strength of a system's Hamiltonian formula_16 with respect to a known observable formula_2 during a known dynamical time formula_17. In this case, defining formula_18, so that formula_19, means estimates of formula_10 can be directly translated into estimates of formula_15. Connection with Fisher information. The classical Fisher information of measuring observable formula_20 on density matrix formula_21 is defined as formula_22, where formula_23 is the probability of obtaining outcome formula_24 when measuring observable formula_20 on the transformed density matrix formula_21. Here, formula_25 is the eigenvalue corresponding to eigenvector formula_26 of observable formula_20. The quantum Fisher information is the supremum of the classical Fisher information over all such observables, formula_27 Relation to the symmetric logarithmic derivative. The quantum Fisher information equals the expectation value of formula_28, where formula_29 is the symmetric logarithmic derivative. Equivalent expressions. For a unitary encoding operation formula_12, the quantum Fisher information can be computed as an integral, formula_30 where formula_31 on the right-hand side denotes the commutator. It can also be expressed in terms of the Kronecker product and vectorization, formula_32 where formula_33 denotes the complex conjugate, and formula_34 denotes the conjugate transpose. This formula holds for invertible density matrices. For non-invertible density matrices, the inverse above is substituted by the Moore–Penrose pseudoinverse. Alternatively, one can compute the quantum Fisher information for the invertible state formula_35, where formula_36 is any full-rank density matrix, and then perform the limit formula_37 to obtain the quantum Fisher information for formula_38. The density matrix formula_36 can be, for example, formula_39 in a finite-dimensional system, or a thermal state in infinite-dimensional systems. Generalization and relations to Bures metric and quantum fidelity. For any differentiable parametrization of the density matrix formula_40 by a vector of parameters formula_41, the quantum Fisher information matrix is defined as formula_42 where formula_43 denotes the partial derivative with respect to parameter formula_44.
The formula also holds without taking the real part formula_45, because the imaginary part leads to an antisymmetric contribution that disappears under the sum. Note that all eigenvalues formula_4 and eigenvectors formula_46 of the density matrix potentially depend on the vector of parameters formula_47. This definition is identical to four times the Bures metric, up to singular points where the rank of the density matrix changes (those are the points at which formula_48 suddenly becomes zero). Through this relation, it also connects with the quantum fidelity formula_49 of two infinitesimally close states, formula_50 where the inner sum goes over all formula_7 at which the eigenvalues formula_51. The extra term (which is however zero in most applications) can be avoided by taking a symmetric expansion of the fidelity, formula_52 For formula_53 and unitary encoding, the quantum Fisher information matrix reduces to the original definition. The quantum Fisher information matrix is part of a wider family of quantum statistical distances. Relation to fidelity susceptibility. Assuming that formula_54 is a ground state of a parameter-dependent non-degenerate Hamiltonian formula_55, four times the quantum Fisher information of this state is called the fidelity susceptibility, and is denoted formula_56 Fidelity susceptibility measures the sensitivity of the ground state to the parameter, and its divergence indicates a quantum phase transition. This is because of the aforementioned connection with fidelity: a diverging quantum Fisher information means that formula_57 and formula_58 are orthogonal to each other for any infinitesimal change in the parameter formula_59, and thus are said to undergo a phase transition at point formula_10. Convexity properties. The quantum Fisher information equals four times the variance for pure states formula_60. For mixed states, when the probabilities are parameter independent, i.e., when formula_61, the quantum Fisher information is convex: formula_62 The quantum Fisher information is the largest function that is convex and that equals four times the variance for pure states. That is, it equals four times the convex roof of the variance formula_63 where the infimum is over all decompositions of the density matrix formula_64 Note that formula_65 are not necessarily orthogonal to each other. The above optimization can be rewritten as an optimization over the two-copy space as formula_66 such that formula_67 is a symmetric separable state and formula_68 The above statement was later proved even for the case of a minimization over general (not necessarily symmetric) separable states. When the probabilities are formula_69-dependent, an extended-convexity relation has been proved: formula_70 where formula_71 is the classical Fisher information associated with the probabilities contributing to the convex decomposition. The first term on the right-hand side of the above inequality can be considered as the average quantum Fisher information of the density matrices in the convex decomposition. Inequalities for composite systems. We need to understand the behavior of the quantum Fisher information in composite systems in order to study quantum metrology of many-particle systems. For product states, formula_72 holds. For the reduced state, we have formula_73 where formula_74. Relation to entanglement. There are strong links between quantum metrology and quantum information science.
For a multiparticle system of formula_75 spin-1/2 particles, formula_76 holds for separable states, where formula_77 and formula_78 is a single-particle angular momentum component. The maximum for general quantum states is given by formula_79 Hence, quantum entanglement is needed to reach the maximum precision in quantum metrology. Moreover, for quantum states with an entanglement depth formula_7, formula_80 holds, where formula_81 is the largest integer smaller than or equal to formula_82 and formula_83 is the remainder from dividing formula_75 by formula_7. Hence, higher and higher levels of multipartite entanglement are needed to achieve better and better accuracy in parameter estimation. It is possible to obtain a weaker but simpler bound formula_84 Hence, a lower bound on the entanglement depth is obtained as formula_85 Relation to the Wigner–Yanase skew information. The Wigner–Yanase skew information is defined as formula_86 It follows that formula_87 is convex in formula_88 For the quantum Fisher information and the Wigner–Yanase skew information, the inequality formula_89 holds, with equality for pure states. Relation to the variance. For any decomposition of the density matrix given by formula_90 and formula_65, the relation formula_91 holds, where both inequalities are tight. That is, there is a decomposition for which the second inequality is saturated, which is the same as stating that the quantum Fisher information is four times the convex roof of the variance, as discussed above. There is also a decomposition for which the first inequality is saturated, which means that the variance is its own concave roof formula_92 Uncertainty relations with the quantum Fisher information and the variance. Knowing that the quantum Fisher information is the convex roof of the variance times four, we obtain the relation formula_93 which is stronger than the Heisenberg uncertainty relation. For a particle of spin-formula_94 the following uncertainty relation holds formula_95 where formula_96 are angular momentum components. The relation can be strengthened as formula_97
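A short Python sketch of the eigen-decomposition formula for the quantum Fisher information given at the beginning of the article is shown below; it also checks the pure-state identity F_Q = 4 (ΔA)² stated in the convexity section. The qubit states and the choice of observable are illustrative only.

```python
import numpy as np

def quantum_fisher_information(rho, A, tol=1e-12):
    """F_Q = 2 * sum_{k,l} (lam_k - lam_l)^2 / (lam_k + lam_l) * |<k|A|l>|^2."""
    lam, vecs = np.linalg.eigh(rho)
    A_kl = vecs.conj().T @ A @ vecs          # matrix elements <k|A|l>
    F = 0.0
    for k in range(len(lam)):
        for l in range(len(lam)):
            s = lam[k] + lam[l]
            if s > tol:                      # sum restricted to lam_k + lam_l > 0
                F += 2.0 * (lam[k] - lam[l])**2 / s * abs(A_kl[k, l])**2
    return F

# Illustrative check on a single qubit with A = sigma_z.
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_pure = np.outer(plus, plus.conj())
rho_mixed = 0.9 * rho_pure + 0.1 * np.eye(2) / 2

var = np.vdot(plus, sz @ sz @ plus).real - np.vdot(plus, sz @ plus).real**2
print(quantum_fisher_information(rho_pure, sz), 4 * var)   # equal for pure states
print(quantum_fisher_information(rho_mixed, sz))           # smaller for mixed states
```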
[ { "math_id": 0, "text": " F_{\\rm Q}[\\varrho,A] " }, { "math_id": 1, "text": "\\varrho" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "\nF_{\\rm Q}[\\varrho,A]=2\\sum_{k,l} \\frac{(\\lambda_k-\\lambda_l)^2}{(\\lambda_k+\\lambda_l)} \\vert \\langle k \\vert A \\vert l\\rangle \\vert^2,\n" }, { "math_id": 4, "text": "\\lambda_k" }, { "math_id": 5, "text": "\\vert k \\rangle" }, { "math_id": 6, "text": "\\varrho," }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": "l" }, { "math_id": 9, "text": "\\lambda_k+\\lambda_l>0" }, { "math_id": 10, "text": "\\theta" }, { "math_id": 11, "text": "\\varrho_0" }, { "math_id": 12, "text": "\\varrho(\\theta)=\\exp(-iA\\theta)\\varrho_0\\exp(+iA\\theta)," }, { "math_id": 13, "text": "(\\Delta \\theta)^2 \\ge \\frac 1 {m F_{\\rm Q}[\\varrho,A]}," }, { "math_id": 14, "text": "m" }, { "math_id": 15, "text": "\\alpha" }, { "math_id": 16, "text": "H = \\alpha A" }, { "math_id": 17, "text": "t" }, { "math_id": 18, "text": "\\theta = \\alpha t" }, { "math_id": 19, "text": "\\theta A = t H" }, { "math_id": 20, "text": "B" }, { "math_id": 21, "text": "\\varrho(\\theta)" }, { "math_id": 22, "text": "F[B,\\theta]=\\sum_b\\frac{1}{p(b|\\theta)}\\left(\\frac{\\partial p(b|\\theta)}{\\partial \\theta}\\right)^2" }, { "math_id": 23, "text": "p(b|\\theta)=\\langle b\\vert \\varrho(\\theta)\\vert b \\rangle" }, { "math_id": 24, "text": "b" }, { "math_id": 25, "text": " b " }, { "math_id": 26, "text": " \\vert b \\rangle" }, { "math_id": 27, "text": "\nF_{\\rm Q}[\\varrho,A]=\\sup_{B} F[B,\\theta].\n" }, { "math_id": 28, "text": "L_{\\varrho}^2" }, { "math_id": 29, "text": "L_{\\varrho}" }, { "math_id": 30, "text": "\nF_{\\rm Q}[\\varrho,A] = -2\\int_0^\\infty\\text{tr}\\left(\\exp(-\\rho_0 t)[\\varrho_0,A] \\exp(-\\rho_0 t)[\\varrho_0,A]\\right)\\ dt,\n" }, { "math_id": 31, "text": "[\\ ,\\ ]" }, { "math_id": 32, "text": "\nF_{\\rm Q}[\\varrho,A] = 2\\,\\text{vec}([\\varrho_0,A])^\\dagger\\big(\\rho_0^*\\otimes {\\rm I}+{\\rm I}\\otimes\\rho_0\\big)^{-1}\\text{vec}([\\varrho_0,A]),\n" }, { "math_id": 33, "text": "^*" }, { "math_id": 34, "text": "^\\dagger" }, { "math_id": 35, "text": "\\rho_\\nu=(1-\\nu)\\rho_0+\\nu\\pi" }, { "math_id": 36, "text": "\\pi" }, { "math_id": 37, "text": "\\nu \\rightarrow 0^+" }, { "math_id": 38, "text": "\\rho_0" }, { "math_id": 39, "text": "{\\rm Identity}/\\dim{\\mathcal{H}}" }, { "math_id": 40, "text": "\\varrho(\\boldsymbol{\\theta})" }, { "math_id": 41, "text": "\\boldsymbol{\\theta}=(\\theta_1,\\dots,\\theta_n)" }, { "math_id": 42, "text": "\nF_{\\rm Q}^{ij}[\\varrho(\\boldsymbol{\\theta})]=2\\sum_{k,l} \\frac{\\operatorname{Re}(\\langle k \\vert \\partial_{i}\\varrho \\vert l\\rangle \\langle l \\vert \\partial_{j}\\varrho \\vert k\\rangle )}{\\lambda_k+\\lambda_l},\n" }, { "math_id": 43, "text": "\\partial_i" }, { "math_id": 44, "text": "\\theta_i" }, { "math_id": 45, "text": "\\operatorname{Re}" }, { "math_id": 46, "text": "\\vert k\\rangle" }, { "math_id": 47, "text": "\\boldsymbol{\\theta}" }, { "math_id": 48, "text": "\\lambda_k+\\lambda_l" }, { "math_id": 49, "text": "F(\\varrho,\\sigma)=\\left(\\mathrm{tr}\\left[\\sqrt{\\sqrt{\\varrho}\\sigma\\sqrt{\\varrho}}\\right]\\right)^2" }, { "math_id": 50, "text": "\nF(\\varrho_{\\boldsymbol{\\theta}},\\varrho_{\\boldsymbol{\\theta}+d\\boldsymbol{\\theta}})=1-\\frac{1}{4}\\sum_{i,j}\\Big(F_{\\rm Q}^{ij}[\\varrho(\\boldsymbol{\\theta})]+2\\!\\!\\sum_{\\lambda_k(\\boldsymbol{\\theta})=0}\\!\\!\\partial_i\\partial_j\\lambda_k\\Big)d\\theta_i 
d\\theta_j+\\mathcal{O}(d\\theta^3),\n" }, { "math_id": 51, "text": "\\lambda_k(\\boldsymbol{\\theta})=0" }, { "math_id": 52, "text": "\nF\\left(\\varrho_{\\boldsymbol{\\theta}-d\\boldsymbol{\\theta}/2},\\varrho_{\\boldsymbol{\\theta}+d\\boldsymbol{\\theta}/2}\\right)=1-\\frac{1}{4}\\sum_{i,j}F_{\\rm Q}^{ij}[\\varrho(\\boldsymbol{\\theta})]d\\theta_i d\\theta_j+\\mathcal{O}(d\\theta^3).\n" }, { "math_id": 53, "text": "n=1" }, { "math_id": 54, "text": "\\vert \\psi_0(\\theta)\\rangle" }, { "math_id": 55, "text": "H(\\theta)" }, { "math_id": 56, "text": "\n\\chi_F=4F_Q(\\vert\\psi_0(\\theta)\\rangle).\n" }, { "math_id": 57, "text": "\\vert\\psi_0(\\theta)\\rangle" }, { "math_id": 58, "text": "\\vert\\psi_0(\\theta+d\\theta)\\rangle" }, { "math_id": 59, "text": "d\\theta" }, { "math_id": 60, "text": " F_{\\rm Q}[\\vert \\Psi \\rangle,H] = 4 (\\Delta H)^2_{\\Psi} " }, { "math_id": 61, "text": " p(\\theta)=p " }, { "math_id": 62, "text": "F_{\\rm Q}[p \\varrho_1(\\theta) + (1-p) \\varrho_2(\\theta) ,H] \\le p F_{\\rm Q}[\\varrho_1(\\theta),H]+(1-p)F_{\\rm Q}[\\varrho_2(\\theta),H]." }, { "math_id": 63, "text": "F_{\\rm Q}[\\varrho,H] = 4 \\inf_{\\{p_k,\\vert \\Psi_k \\rangle \\}} \\sum_k p_k (\\Delta H)^2_{\\Psi_k}," }, { "math_id": 64, "text": "\\varrho=\\sum_k p_k \\vert \\Psi_k\\rangle \\langle \\Psi_k \\vert. " }, { "math_id": 65, "text": " \\vert \\Psi_k\\rangle " }, { "math_id": 66, "text": "\nF_Q[\\varrho,H]=\n\\min_{\\varrho_{12}} 2{\\rm Tr}[(H\\otimes {\\rm Identity}-{\\rm Identity}\\otimes H)^2\\varrho_{12}],\n" }, { "math_id": 67, "text": "\\varrho_{12}" }, { "math_id": 68, "text": " \n{\\rm Tr}_1(\\varrho_{12})={\\rm Tr}_2(\\varrho_{12})=\\varrho.\n" }, { "math_id": 69, "text": " \\theta " }, { "math_id": 70, "text": "F_{\\rm Q}\\Big[\\sum_i p_i(\\theta) \\varrho_i(\\theta)\\Big] \\le \\sum_i p_i(\\theta) F_{\\rm Q}[\\varrho_i(\\theta)]+F_{\\rm C}[\\{p_i(\\theta)\\}]," }, { "math_id": 71, "text": "F_{\\rm C}[\\{p_i(\\theta)\\}]=\\sum_i \\frac{\\partial_{\\theta} p_i(\\theta)^2}{p_i(\\theta)}" }, { "math_id": 72, "text": "F_{\\rm Q}[\\varrho_1 \\otimes \\varrho_2 , H_1\\otimes {\\rm Identity}+{\\rm Identity} \\otimes H_2] =\nF_{\\rm Q}[\\varrho_1,H_1]+F_{\\rm Q}[\\varrho_2,H_2]" }, { "math_id": 73, "text": "F_{\\rm Q}[\\varrho_{12}, H_1\\otimes {\\rm Identity}_2] \\ge \nF_{\\rm Q}[\\varrho_{1}, H_1]," }, { "math_id": 74, "text": "\\varrho_{1}={\\rm Tr}_2(\\varrho_{12})" }, { "math_id": 75, "text": "N" }, { "math_id": 76, "text": "F_{\\rm Q}[\\varrho, J_z] \\le N " }, { "math_id": 77, "text": " J_z=\\sum_{n=1}^N j_z^{(n)}, " }, { "math_id": 78, "text": "j_z^{(n)}" }, { "math_id": 79, "text": "F_{\\rm Q}[\\varrho, J_z] \\le N^2. " }, { "math_id": 80, "text": "F_{\\rm Q}[\\varrho, J_z] \\le sk^2 + r^{2} " }, { "math_id": 81, "text": "s=\\lfloor N/k \\rfloor " }, { "math_id": 82, "text": "N/k," }, { "math_id": 83, "text": "r=N-sk" }, { "math_id": 84, "text": "F_{\\rm Q}[\\varrho, J_z] \\le Nk. " }, { "math_id": 85, "text": "\\frac{F_{\\rm Q}[\\varrho, J_z]}{N} \\le k. " }, { "math_id": 86, "text": "I(\\varrho,H)={\\rm Tr}(H^2\\varrho)-{\\rm Tr}(H \\sqrt{\\varrho} H \\sqrt{\\varrho})." }, { "math_id": 87, "text": "I(\\varrho,H)" }, { "math_id": 88, "text": "\\varrho." 
}, { "math_id": 89, "text": "F_{\\rm Q}[\\varrho,H] \\ge 4 I(\\varrho,H)" }, { "math_id": 90, "text": " p_k " }, { "math_id": 91, "text": "(\\Delta H)^2 \\ge \\sum_k p_k (\\Delta H)^2_{\\Psi_k} \\ge \\frac1 4 F_{\\rm Q}[\\varrho,H]" }, { "math_id": 92, "text": "(\\Delta H)^2 = \\sup_{\\{p_k,\\vert \\Psi_k \\rangle \\}} \\sum_k p_k (\\Delta H)^2_{\\Psi_k}." }, { "math_id": 93, "text": "\n(\\Delta A)^2 F_Q[\\varrho,B] \\geq \\vert \\langle i[A,B]\\rangle\\vert^2,\n" }, { "math_id": 94, "text": "j," }, { "math_id": 95, "text": "\n(\\Delta J_x)^2+(\\Delta J_y)^2+(\\Delta J_z)^2\\ge j,\n" }, { "math_id": 96, "text": "J_l" }, { "math_id": 97, "text": "\n(\\Delta J_x)^2+(\\Delta J_y)^2+F_Q[\\varrho,J_z]/4\\ge j.\n" } ]
https://en.wikipedia.org/wiki?curid=62676893
62677192
Symmetric logarithmic derivative
The symmetric logarithmic derivative is an important quantity in quantum metrology, and is related to the quantum Fisher information. Definition. Let formula_0 and formula_1 be two operators, where formula_0 is Hermitian and positive semi-definite. In most applications, formula_0 and formula_1 fulfill further properties: formula_1 is also Hermitian, and formula_0 is a density matrix (which is also trace-normalized), but these are not required for the definition. The symmetric logarithmic derivative formula_2 is defined implicitly by the equation formula_3 where formula_4 is the commutator and formula_5 is the anticommutator. Explicitly, it is given by formula_6 where formula_7 and formula_8 are the eigenvalues and eigenstates of formula_9, i.e. formula_10 and formula_11. Formally, the map from operator formula_1 to operator formula_2 is a (linear) superoperator. Properties. The symmetric logarithmic derivative is linear in formula_1: formula_12 formula_13 The symmetric logarithmic derivative is Hermitian if its argument formula_1 is Hermitian: formula_14 The derivative of the expression formula_15 with respect to formula_16 at formula_17 reads formula_18 where the last equality holds by the definition of formula_2; this relation is the origin of the name "symmetric logarithmic derivative". Further, we obtain the Taylor expansion formula_19.
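The explicit eigen-decomposition formula above translates directly into code. The following Python sketch (an illustration, with an arbitrary qubit state and observable) computes the symmetric logarithmic derivative and verifies the defining equation and the Hermiticity property numerically.

```python
import numpy as np

def sld(rho, A, tol=1e-12):
    """Symmetric logarithmic derivative L_rho(A) from the eigendecomposition
    of rho: L = 2i * sum_{k,l} (lam_k - lam_l)/(lam_k + lam_l) <k|A|l> |k><l|."""
    lam, V = np.linalg.eigh(rho)
    A_eig = V.conj().T @ A @ V                 # A in the eigenbasis of rho
    L_eig = np.zeros_like(A_eig, dtype=complex)
    for k in range(len(lam)):
        for l in range(len(lam)):
            s = lam[k] + lam[l]
            if s > tol:
                L_eig[k, l] = 2j * (lam[k] - lam[l]) / s * A_eig[k, l]
    return V @ L_eig @ V.conj().T              # back to the original basis

# Illustrative check on a qubit: rho a mixed state, A = sigma_x.
rho = np.array([[0.7, 0.1], [0.1, 0.3]])
A = np.array([[0.0, 1.0], [1.0, 0.0]])
L = sld(rho, A)

lhs = 1j * (rho @ A - A @ rho)                 # i [rho, A]
rhs = 0.5 * (rho @ L + L @ rho)                # (1/2) {rho, L}
print(np.allclose(lhs, rhs))                   # True: defining equation holds
print(np.allclose(L, L.conj().T))              # True: L is Hermitian for Hermitian A
```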
[ { "math_id": 0, "text": "\\rho" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "L_\\varrho(A)" }, { "math_id": 3, "text": "i[\\varrho,A]=\\frac{1}{2} \\{\\varrho, L_\\varrho(A)\\}" }, { "math_id": 4, "text": "[X,Y]=XY-YX" }, { "math_id": 5, "text": "\\{X,Y\\}=XY+YX" }, { "math_id": 6, "text": "L_\\varrho(A)=2i\\sum_{k,l} \\frac{\\lambda_k-\\lambda_l}{\\lambda_k+\\lambda_l} \\langle k \\vert A \\vert l\\rangle \\vert k\\rangle \\langle l \\vert" }, { "math_id": 7, "text": "\\lambda_k" }, { "math_id": 8, "text": "\\vert k\\rangle" }, { "math_id": 9, "text": "\\varrho" }, { "math_id": 10, "text": "\\varrho\\vert k\\rangle=\\lambda_k\\vert k\\rangle" }, { "math_id": 11, "text": "\\varrho=\\sum_k \\lambda_k \\vert k\\rangle\\langle k\\vert" }, { "math_id": 12, "text": "L_\\varrho(\\mu A)=\\mu L_\\varrho(A)" }, { "math_id": 13, "text": "L_\\varrho(A+B)=L_\\varrho(A)+L_\\varrho(B)" }, { "math_id": 14, "text": "A=A^\\dagger\\Rightarrow[L_\\varrho(A)]^\\dagger=L_\\varrho(A)" }, { "math_id": 15, "text": "\\exp(-i\\theta A)\\varrho\\exp(+i\\theta A)" }, { "math_id": 16, "text": "\\theta" }, { "math_id": 17, "text": "\\theta=0" }, { "math_id": 18, "text": "\\frac{\\partial}{\\partial\\theta}\\Big[\\exp(-i\\theta A)\\varrho\\exp(+i\\theta A)\\Big]\\bigg\\vert_{\\theta=0} = i(\\varrho A-A\\varrho) = i[\\varrho,A] = \\frac{1}{2}\\{\\varrho, L_\\varrho(A)\\}" }, { "math_id": 19, "text": "\\exp(-i\\theta A)\\varrho\\exp(+i\\theta A) = \\varrho + \\underbrace{\\frac{1}{2}\\theta\\{\\varrho, L_\\varrho(A)\\}}_{=i\\theta[\\varrho,A]} + \\mathcal{O}(\\theta^2)" } ]
https://en.wikipedia.org/wiki?curid=62677192
62683332
Fairness (machine learning)
Fairness in machine learning refers to the various attempts at correcting algorithmic bias in automated decision processes based on machine learning models. Decisions made by computers after a machine-learning process may be considered unfair if they were based on variables considered sensitive, for example gender, ethnicity, sexual orientation, or disability. As is the case with many ethical concepts, definitions of fairness and bias are always controversial. In general, fairness and bias are considered relevant when the decision process impacts people's lives. In machine learning, the problem of algorithmic bias is well known and well studied. Outcomes may be skewed by a range of factors and thus might be considered unfair with respect to certain groups or individuals. An example would be the way social media sites deliver personalized news to consumers. Context. Discussion about fairness in machine learning is a relatively recent topic. Since 2016 there has been a sharp increase in research into the topic. This increase could be partly attributed to an influential report by ProPublica that claimed that the COMPAS software, widely used in US courts to predict recidivism, was racially biased. One topic of research and discussion is the definition of fairness, as there is no universal definition, and different definitions can be in contradiction with each other, which makes it difficult to judge machine learning models. Other research topics include the origins of bias, the types of bias, and methods to reduce bias. In recent years tech companies have made tools and manuals on how to detect and reduce bias in machine learning. IBM has tools for Python and R with several algorithms to reduce software bias and increase its fairness. Google has published guidelines and tools to study and combat bias in machine learning. Facebook has reported its use of a tool, Fairness Flow, to detect bias in its AI. However, critics have argued that the company's efforts are insufficient, reporting little use of the tool by employees as it cannot be used for all their programs, and even when it can, use of the tool is optional. It is important to note that the discussion about quantitative ways to test fairness and unjust discrimination in decision-making predates by several decades the rather recent debate on fairness in machine learning. In fact, a vivid discussion of this topic by the scientific community flourished during the mid-1960s and 1970s, mostly as a result of the American civil rights movement and, in particular, of the passage of the U.S. Civil Rights Act of 1964. However, by the end of the 1970s, the debate largely disappeared, as the different and sometimes competing notions of fairness left little room for clarity on when one notion of fairness may be preferable to another. Language Bias. Language bias refers to a type of statistical sampling bias tied to the language of a query that leads to "a systematic deviation in sampling information that prevents it from accurately representing the true coverage of topics and views available in their repository." Luo et al. show that current large language models, as they are predominantly trained on English-language data, often present the Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise.
When queried with political ideologies like "What is liberalism?", ChatGPT, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent. Similarly, other political perspectives embedded in Japanese, Korean, French, and German corpora are absent in ChatGPT's responses. ChatGPT, although presented as a multilingual chatbot, is in fact mostly ‘blind’ to non-English perspectives. Gender Bias. Gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. For example, large language models often assign roles and characteristics based on traditional gender norms; they might associate nurses or secretaries predominantly with women and engineers or CEOs with men. Political Bias. Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data. Controversies. The use of algorithmic decision making in the legal system has been a notable area under scrutiny. In 2014, then U.S. Attorney General Eric Holder raised concerns that "risk assessment" methods may be putting undue focus on factors not under a defendant's control, such as their education level or socio-economic background. The 2016 report by ProPublica on COMPAS claimed that black defendants were almost twice as likely as white defendants to be incorrectly labelled as higher risk, while the opposite mistake was made more often with white defendants. The creator of COMPAS, Northpointe Inc., disputed the report, claiming their tool is fair and ProPublica made statistical errors, which was in turn disputed by ProPublica. Racial and gender bias has also been noted in image recognition algorithms. Facial and movement detection in cameras has been found to ignore or mislabel the facial expressions of non-white subjects. In 2015, the automatic tagging feature in both Flickr and Google Photos was found to label black people with tags such as "animal" and "gorilla". A 2016 international beauty contest judged by an AI algorithm was found to be biased towards individuals with lighter skin, likely due to bias in training data. A study of three commercial gender classification algorithms in 2018 found that all three algorithms were generally most accurate when classifying light-skinned males and least accurate when classifying dark-skinned females. In 2020, an image cropping tool from Twitter was shown to prefer lighter-skinned faces. In 2022, the creators of the text-to-image model DALL-E 2 explained that the generated images were significantly stereotyped, based on traits such as gender or race. Machine learning algorithms have also been shown to be biased in other areas, including job and loan applications. Amazon has used software to review job applications that was sexist, for example by penalizing resumes that included the word "women".
In 2019, Apple's algorithm to determine credit card limits for its new Apple Card gave significantly higher limits to males than to females, even for couples that shared their finances. Mortgage-approval algorithms in use in the U.S. were shown by a 2021 report by The Markup to be more likely to reject non-white applicants. Limitations. Recent works underline the presence of several limitations to the current landscape of fairness in machine learning, particularly when it comes to what is realistically achievable in this respect in the ever-increasing real-world applications of AI. For instance, the mathematical and quantitative approach to formalizing fairness, and the related "de-biasing" approaches, may rely on overly simplistic and easily overlooked assumptions, such as the categorization of individuals into pre-defined social groups. Other delicate aspects are, e.g., the interaction among several sensitive characteristics, and the lack of a clear and shared philosophical and/or legal notion of non-discrimination. Group fairness criteria. In classification problems, an algorithm learns a function to predict a discrete characteristic formula_0, the target variable, from known characteristics formula_1. We model formula_2 as a discrete random variable which encodes some characteristics contained or implicitly encoded in formula_1 that we consider as sensitive characteristics (gender, ethnicity, sexual orientation, etc.). We finally denote by formula_3 the prediction of the classifier. Now let us define three main criteria to evaluate if a given classifier is fair, that is, if its predictions are not influenced by some of these sensitive variables. Independence. We say the random variables formula_4 satisfy independence if the sensitive characteristics formula_2 are statistically independent of the prediction formula_3, and we write formula_5 We can also express this notion with the following formula: formula_6 This means that the classification rate for each target class is equal for people belonging to different groups with respect to sensitive characteristics formula_7. Yet another equivalent expression for independence can be given using the concept of mutual information between random variables, defined as formula_8 In this formula, formula_9 is the entropy of the random variable formula_10. Then formula_11 satisfy independence if formula_12. A possible relaxation of the independence definition includes introducing a positive slack formula_13 and is given by the formula: formula_14 Finally, another possible relaxation is to require formula_15. Separation. We say the random variables formula_16 satisfy separation if the sensitive characteristics formula_2 are statistically independent of the prediction formula_3 given the target value formula_0, and we write formula_17 We can also express this notion with the following formula: formula_18 This means that all the dependence of the decision formula_19 on the sensitive attribute formula_7 must be justified by the actual dependence of the true target variable formula_20. Another equivalent expression, in the case of a binary target variable, is that the true positive rate and the false positive rate are equal (and therefore the false negative rate and the true negative rate are equal) for every value of the sensitive characteristics: formula_21 formula_22 A possible relaxation of the given definitions is to allow the value for the difference between rates to be a positive number lower than a given slack formula_13, rather than equal to zero.
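As a toy illustration of the independence and separation criteria just defined (the data and group labels below are invented for the example, not drawn from the sources discussed above), the following Python sketch checks demographic parity, i.e. equal positive-prediction rates across groups, and equalized odds, i.e. equal true- and false-positive rates across groups.

```python
import numpy as np

# Synthetic illustration only: binary predictions R, true labels Y,
# and a binary sensitive attribute A for ten individuals.
A = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # group membership
Y = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])   # actual outcome
R = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 0])   # predicted outcome

def rate(mask):
    """Fraction of positive predictions among the selected individuals."""
    return R[mask].mean()

# Independence (demographic parity): P(R=1 | A=a) equal across groups.
print("P(R=1|A=0) =", rate(A == 0), " P(R=1|A=1) =", rate(A == 1))

# Separation (equalized odds): TPR and FPR equal across groups.
for a in (0, 1):
    tpr = rate((A == a) & (Y == 1))    # P(R=1 | Y=1, A=a)
    fpr = rate((A == a) & (Y == 0))    # P(R=1 | Y=0, A=a)
    print(f"group {a}: TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```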
In some fields, separation (the separation coefficient) in a confusion matrix is a measure of the distance (at a given level of the probability score) between the "predicted" cumulative percent negative and "predicted" cumulative percent positive. The greater this separation coefficient is at a given score value, the more effective the model is at differentiating between the set of positives and negatives at a particular probability cut-off. According to Mayes: "It is often observed in the credit industry that the selection of validation measures depends on the modeling approach. For example, if modeling procedure is parametric or semi-parametric, the two-sample K-S test is often used. If the model is derived by heuristic or iterative search methods, the measure of model performance is usually divergence. A third option is the coefficient of separation...The coefficient of separation, compared to the other two methods, seems to be most reasonable as a measure for model performance because it reflects the separation pattern of a model." Sufficiency. We say the random variables formula_16 satisfy sufficiency if the sensitive characteristics formula_2 are statistically independent of the target value formula_0 given the prediction formula_3, and we write formula_23 We can also express this notion with the following formula: formula_24 This means that the probability of actually being in each of the groups is equal for two individuals with different sensitive characteristics, given that they were predicted to belong to the same group. Relationships between definitions. Finally, we sum up some of the main results that relate the three definitions given above. It is referred to as total fairness when independence, separation, and sufficiency are all satisfied simultaneously. However, total fairness is not possible to achieve except in specific rhetorical cases. Mathematical formulation of group fairness definitions. Preliminary definitions. Most statistical measures of fairness rely on different metrics, so we will start by defining them. When working with a binary classifier, both the predicted and the actual classes can take two values: positive and negative. Now let us start explaining the different possible relations between predicted and actual outcome: a true positive is a positive case predicted as positive, a false positive is a negative case predicted as positive, a true negative is a negative case predicted as negative, and a false negative is a positive case predicted as negative. These relations can be easily represented with a confusion matrix, a table that describes the accuracy of a classification model. In this matrix, columns and rows represent instances of the predicted and the actual cases, respectively. By using these relations, we can define multiple metrics which can be later used to measure the fairness of an algorithm, such as the positive predictive value (PPV), the false discovery rate (FDR), and the true positive, false positive, true negative, and false negative rates (TPR, FPR, TNR, FNR). The following criteria can be understood as measures of the three general definitions given at the beginning of this section, namely Independence, Separation and Sufficiency. To define these measures specifically, we will divide them into three big groups, as done in Verma et al.: definitions based on a predicted outcome, on predicted and actual outcomes, and definitions based on predicted probabilities and the actual outcome. We will be working with a binary classifier and the following notation: formula_33 refers to the score given by the classifier, which is the probability of a certain subject being in the positive or the negative class.
formula_3 represents the final classification predicted by the algorithm, and its value is usually derived from formula_33; for example, formula_3 will be positive when formula_33 is above a certain threshold. formula_0 represents the actual outcome, that is, the real classification of the individual and, finally, formula_2 denotes the sensitive attributes of the subjects. Definitions based on predicted outcome. The definitions in this section focus on a predicted outcome formula_3 for various distributions of subjects. They are the simplest and most intuitive notions of fairness. Definitions based on predicted and actual outcomes. These definitions not only consider the predicted outcome formula_3 but also compare it to the actual outcome formula_0. Mathematically, if a classifier has equal PPV for both groups, it will also have equal FDR, satisfying the formula: formula_37 Mathematically, if a classifier has equal FPR for both groups, it will also have equal TNR, satisfying the formula: formula_39 Mathematically, if a classifier has equal FNR for both groups, it will also have equal TPR, satisfying the formula: formula_41 Definitions based on predicted probabilities and actual outcome. These definitions are based on the actual outcome formula_0 and the predicted probability score formula_33. Equal confusion fairness. With respect to confusion matrices, independence, separation, and sufficiency require the respective quantities to not have a statistically significant difference across sensitive characteristics. The notion of equal confusion fairness requires the confusion matrix of a given decision system to have the same distribution when computed stratified over all sensitive characteristics. Social welfare function. Some scholars have proposed defining algorithmic fairness in terms of a social welfare function. They argue that using a social welfare function enables an algorithm designer to consider fairness and predictive accuracy in terms of their benefits to the people affected by the algorithm. It also allows the designer to trade off efficiency and equity in a principled way. Sendhil Mullainathan has stated that algorithm designers should use social welfare functions in order to recognize absolute gains for disadvantaged groups. For example, a study found that using a decision-making algorithm in pretrial detention rather than pure human judgment reduced the detention rates for Blacks, Hispanics, and racial minorities overall, even while keeping the crime rate constant. Individual fairness criteria. An important distinction among fairness definitions is the one between group and individual notions. Roughly speaking, while group fairness criteria compare quantities at a group level, typically identified by sensitive attributes (e.g. gender, ethnicity, age, etc.), individual criteria compare individuals. In words, individual fairness follows the principle that "similar individuals should receive similar treatments". There is a very intuitive approach to fairness, which usually goes under the name of fairness through unawareness (FTU), or "blindness", that prescribes not to explicitly employ sensitive features when making (automated) decisions. This is effectively a notion of individual fairness, since two individuals differing only in the value of their sensitive attributes would receive the same outcome.
However, in general, FTU is subject to several drawbacks, the main being that it does not take into account possible correlations between sensitive attributes and non-sensitive attributes employed in the decision-making process. For example, an agent with the (malicious) intention to discriminate on the basis of gender could introduce into the model a proxy variable for gender (i.e. a variable highly correlated with gender) and effectively use gender information while at the same time remaining compliant with the FTU prescription. The problem of "what variables correlated with sensitive ones are fairly employable by a model" in the decision-making process is a crucial one, and is relevant for group concepts as well: independence metrics require a complete removal of sensitive information, while separation-based metrics allow for correlation, but only insofar as the labeled target variable "justifies" them. The most general concept of individual fairness was introduced in the pioneering work by Cynthia Dwork and collaborators in 2012 and can be thought of as a mathematical translation of the principle that the decision map taking features as input should be built such that it is able to "map similar individuals similarly", which is expressed as a Lipschitz condition on the model map. They call this approach fairness through awareness (FTA), precisely as a counterpoint to FTU, since they underline the importance of choosing the appropriate target-related distance metric in order to assess which individuals are "similar" in specific situations. Again, this problem is closely related to the point raised above about which variables can be seen as "legitimate" in particular contexts. Causality-based metrics. Causal fairness measures how often two nearly identical users or applications, who differ only in a set of characteristics with respect to which resource allocation must be fair, receive identical treatment. An entire branch of academic research on fairness metrics is devoted to leveraging causal models to assess bias in machine learning models. This approach is usually justified by the fact that the same observational distribution of data may hide different causal relationships among the variables at play, possibly with different interpretations of whether the outcome is affected by some form of bias or not. Kusner et al. propose to employ counterfactuals, and define a decision-making process as counterfactually fair if, for any individual, the outcome does not change in the counterfactual scenario where the sensitive attributes are changed. The mathematical formulation reads: formula_55 that is: for a random individual with sensitive attribute formula_56 and other features formula_57, and the same individual had she instead had formula_58, the chance of being accepted should be the same. The symbol formula_59 represents the counterfactual random variable formula_19 in the scenario where the sensitive attribute formula_7 is fixed to formula_56. The conditioning on formula_60 means that this requirement is at the individual level, in that we are conditioning on all the variables identifying a single observation. Machine learning models are often trained on data where the outcome depended on the decision made at that time. For example, if a machine learning model has to determine whether an inmate will recidivate, and the prediction determines whether the inmate is released early, the observed outcome may itself depend on whether the inmate was released early or not. Mishler et al. 
propose a formula for counterfactual equalized odds: formula_61 where formula_19 is a random variable, formula_62 denotes the outcome given that the decision formula_63 was taken, and formula_7 is a sensitive feature. Plecko and Bareinboim propose a unified framework to deal with the causal analysis of fairness. They suggest the use of a Standard Fairness Model, consisting of a causal graph with four types of variables. Within this framework, Plecko and Bareinboim are therefore able to classify the possible effects that sensitive attributes may have on the outcome. Moreover, the granularity at which these effects are measured (namely, the conditioning variables used to average the effect) is directly connected to the "individual vs. group" aspect of fairness assessment. Bias mitigation strategies. Fairness can be applied to machine learning algorithms in three different ways: data preprocessing, optimization during training, or post-processing of the algorithm's results. Preprocessing. Usually, the classifier is not the only problem; the dataset is also biased. The discrimination of a dataset formula_66 with respect to the group formula_67 can be defined as follows: formula_68 That is, an approximation to the difference between the probabilities of belonging to the positive class given that the subject has a protected characteristic different from formula_69 and equal to formula_69. Algorithms correcting bias at the preprocessing stage remove information about dataset variables which might result in unfair decisions, while trying to alter the data as little as possible. This is not as simple as just removing the sensitive variable, because other attributes can be correlated with the protected one. A way to do this is to map each individual in the initial dataset to an intermediate representation in which it is impossible to identify whether it belongs to a particular protected group, while maintaining as much information as possible. Then, the new representation of the data is adjusted to get the maximum accuracy in the algorithm. This way, individuals are mapped into a new multivariable representation where the probability that any member of a protected group is mapped to a certain value in the new representation is the same as that of an individual who does not belong to the protected group. Then, this representation is used to obtain the prediction for the individual, instead of the initial data. As the intermediate representation is constructed giving the same probability to individuals inside or outside the protected group, this attribute is hidden from the classifier. An example is explained in Zemel et al., where a multinomial random variable is used as an intermediate representation. In the process, the system is encouraged to preserve all information except that which can lead to biased decisions, and to obtain a prediction as accurate as possible. On the one hand, this procedure has the advantage that the preprocessed data can be used for any machine learning task. Furthermore, the classifier does not need to be modified, as the correction is applied to the dataset before processing. On the other hand, the other methods obtain better results in accuracy and fairness. Reweighing. Reweighing is an example of a preprocessing algorithm. The idea is to assign a weight to each dataset point such that the weighted discrimination is 0 with respect to the designated group. 
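As a concrete illustration of the discrimination measure just defined, here is a minimal sketch on a pandas DataFrame; the column names and toy data are hypothetical placeholders, not taken from any cited work. The reweighing procedure derived next assigns weights so that the weighted version of this same quantity becomes zero:

```python
# Minimal sketch of the dataset discrimination measure described above:
# the positive-outcome rate of subjects outside the protected group minus
# that of subjects inside it. Column names ("A", "y") and the toy data
# are hypothetical placeholders.
import pandas as pd

def discrimination(df: pd.DataFrame, sensitive: str, target: str, protected_value) -> float:
    outside = df[df[sensitive] != protected_value]
    inside = df[df[sensitive] == protected_value]
    return (outside[target] == 1).mean() - (inside[target] == 1).mean()

toy = pd.DataFrame({"A": [0, 0, 0, 1, 1, 1],
                    "y": [1, 1, 0, 1, 0, 0]})
print(discrimination(toy, "A", "y", protected_value=1))  # 2/3 - 1/3 ≈ 0.33
```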
If the dataset formula_66 were unbiased, the sensitive variable formula_2 and the target variable formula_0 would be statistically independent, and the probability of the joint distribution would be the product of the probabilities as follows: formula_70 In reality, however, the dataset is not unbiased and the variables are not statistically independent, so the observed probability is: formula_71 To compensate for the bias, the software adds a weight, lower for favored objects and higher for unfavored objects. For each formula_72 we get: formula_73 When each formula_1 has an associated weight formula_74, we compute the weighted discrimination with respect to group formula_67 as follows: formula_75 It can be shown that after reweighting this weighted discrimination is 0. Inprocessing. Another approach is to correct the bias at training time. This can be done by adding constraints to the optimization objective of the algorithm. These constraints force the algorithm to improve fairness by keeping the same rates of certain measures for the protected group and the rest of the individuals. For example, we can add to the objective of the algorithm the condition that the false positive rate is the same for individuals in the protected group and the ones outside the protected group. The main measures used in this approach are false positive rate, false negative rate, and overall misclassification rate. It is possible to add just one or several of these constraints to the objective of the algorithm. Note that the equality of false negative rates implies the equality of true positive rates, so this implies equality of opportunity. After adding these restrictions, the problem may become intractable, so a relaxation of them may be needed. Adversarial debiasing. We train two classifiers at the same time through some gradient-based method (e.g. gradient descent). The first one, the "predictor", tries to accomplish the task of predicting formula_0, the target variable, given formula_1, the input, by modifying its weights formula_76 to minimize some loss function formula_77. The second one, the "adversary", tries to accomplish the task of predicting formula_2, the sensitive variable, given formula_78, by modifying its weights formula_79 to minimize some loss function formula_80. An important point here is that, in order to propagate correctly, formula_78 above must refer to the raw output of the classifier, not the discrete prediction; for example, with an artificial neural network and a classification problem, formula_78 could refer to the output of the softmax layer. Then we update formula_79 to minimize formula_81 at each training step according to the gradient formula_82, and we modify formula_76 according to the expression: formula_83 where formula_84 is a tunable hyperparameter that can vary at each time step. The intuitive idea is that we want the "predictor" to try to minimize formula_85 (hence the term formula_86) while, at the same time, maximizing formula_81 (hence the term formula_87), so that the "adversary" fails at predicting the sensitive variable from formula_78. The term formula_88 prevents the "predictor" from moving in a direction that helps the "adversary" decrease its loss function. It can be shown that training a "predictor" classification model with this algorithm improves demographic parity with respect to training it without the "adversary". Postprocessing. The final method tries to correct the results of a classifier to achieve fairness. 
In this method, we have a classifier that returns a score for each individual, and we need to make a binary prediction for them. High scores are likely to receive a positive outcome, while low scores are likely to receive a negative one, but we can adjust the threshold that determines when to answer yes as desired. Note that variations in the threshold value affect the trade-off between the rates of true positives and true negatives. If the score function is fair in the sense that it is independent of the protected attribute, then any choice of the threshold will also be fair, but classifiers of this type tend to be biased, so a different threshold may be required for each protected group to achieve fairness. A way to do this is to plot the true positive rate against the false positive rate at various threshold settings (the ROC curve) and to find a threshold at which the rates for the protected group and for other individuals are equal. Reject option based classification. Given a classifier, let formula_89 be the probability, computed by the classifier, that the instance formula_1 belongs to the positive class +. When formula_89 is close to 1 or to 0, the instance formula_1 is assigned with a high degree of certainty to class + or -, respectively. However, when formula_89 is closer to 0.5, the classification is more uncertain. We say formula_1 is a "rejected instance" if formula_90 for a certain formula_91 such that formula_92. The "ROC" algorithm consists of classifying the non-rejected instances following the rule above, and the rejected instances as follows: if the instance is an example of a deprived group (formula_93), then label it as positive; otherwise, label it as negative. We can optimize different measures of discrimination as functions of formula_91 to find the optimal formula_91 for each problem and avoid becoming discriminatory against the privileged group.
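The following is a minimal sketch of this reject option based classification rule; the group encoding, the value of formula_91 (here 0.6) and the example data are illustrative assumptions rather than part of the original formulation:

```python
# Minimal sketch of reject option based classification as described above:
# confident predictions are kept, while instances with max(p, 1-p) <= theta
# are relabelled in favour of the deprived group.
import numpy as np

def reject_option_classify(scores, groups, deprived_group, theta=0.6):
    """scores: P(+|X) for each instance; groups: sensitive attribute values."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    labels = (scores >= 0.5).astype(int)              # default decision rule
    rejected = np.maximum(scores, 1.0 - scores) <= theta
    # Rejected instances: positive for the deprived group, negative otherwise
    labels[rejected & (groups == deprived_group)] = 1
    labels[rejected & (groups != deprived_group)] = 0
    return labels

# Example: borderline scores are flipped according to group membership
scores = [0.95, 0.55, 0.45, 0.10]
groups = ["a", "a", "b", "b"]
print(reject_option_classify(scores, groups, deprived_group="a"))  # [1 1 0 0]
```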
[ { "math_id": 0, "text": " Y " }, { "math_id": 1, "text": " X " }, { "math_id": 2, "text": " A " }, { "math_id": 3, "text": " R " }, { "math_id": 4, "text": "(R,A)" }, { "math_id": 5, "text": " R \\bot A. " }, { "math_id": 6, "text": " P(R = r\\ |\\ A = a) = P(R = r\\ |\\ A = b) \\quad \\forall r \\in R \\quad \\forall a,b \\in A " }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": " I(X,Y) = H(X) + H(Y) - H(X,Y) " }, { "math_id": 9, "text": " H(X) " }, { "math_id": 10, "text": " X " }, { "math_id": 11, "text": " (R,A) " }, { "math_id": 12, "text": " I(R,A) = 0 " }, { "math_id": 13, "text": " \\epsilon > 0 " }, { "math_id": 14, "text": " P(R = r\\ |\\ A = a) \\geq P(R = r\\ |\\ A = b) - \\epsilon \\quad \\forall r \\in R \\quad \\forall a,b \\in A " }, { "math_id": 15, "text": " I(R,A) \\leq \\epsilon " }, { "math_id": 16, "text": "(R,A,Y)" }, { "math_id": 17, "text": " R \\bot A\\ |\\ Y. " }, { "math_id": 18, "text": " P(R = r\\ |\\ Y = q, A = a) = P(R = r\\ |\\ Y = q, A = b) \\quad \\forall r \\in R \\quad q \\in Y \\quad \\forall a,b \\in A " }, { "math_id": 19, "text": "R" }, { "math_id": 20, "text": "Y" }, { "math_id": 21, "text": " P(R = 1\\ |\\ Y = 1, A = a) = P(R = 1\\ |\\ Y = 1, A = b) \\quad \\forall a,b \\in A " }, { "math_id": 22, "text": " P(R = 1\\ |\\ Y = 0, A = a) = P(R = 1\\ |\\ Y = 0, A = b) \\quad \\forall a,b \\in A " }, { "math_id": 23, "text": " Y \\bot A\\ |\\ R. " }, { "math_id": 24, "text": " P(Y = q\\ |\\ R = r, A = a) = P(Y = q\\ |\\ R = r, A = b) \\quad \\forall q \\in Y \\quad r \\in R \\quad \\forall a,b \\in A " }, { "math_id": 25, "text": " PPV = P(actual=+\\ |\\ prediction=+) = \\frac{TP}{TP+FP}" }, { "math_id": 26, "text": " FDR = P(actual=-\\ |\\ prediction=+) = \\frac{FP}{TP+FP} " }, { "math_id": 27, "text": " NPV = P(actual=-\\ |\\ prediction=-) = \\frac{TN}{TN+FN} " }, { "math_id": 28, "text": " FOR = P(actual=+\\ |\\ prediction=-) = \\frac{FN}{TN+FN} " }, { "math_id": 29, "text": " TPR = P(prediction=+\\ |\\ actual=+) = \\frac{TP}{TP+FN} " }, { "math_id": 30, "text": " FNR = P(prediction=-\\ |\\ actual=+) = \\frac{FN}{TP+FN} " }, { "math_id": 31, "text": " TNR = P(prediction=-\\ |\\ actual=-) = \\frac{TN}{TN+FP} " }, { "math_id": 32, "text": " FPR = P(prediction=+\\ |\\ actual=-) = \\frac{FP}{TN+FP} " }, { "math_id": 33, "text": " S " }, { "math_id": 34, "text": " P(R = +\\ |\\ A = a) = P(R = +\\ |\\ A = b) \\quad \\forall a,b \\in A " }, { "math_id": 35, "text": " P(R = +\\ |\\ L = l, A = a) = P(R = +\\ |\\ L = l, A = b) \\quad \\forall a,b \\in A \\quad \\forall l \\in L " }, { "math_id": 36, "text": " P(Y = +\\ |\\ R = +, A = a) = P(Y = +\\ |\\ R = +, A = b) \\quad \\forall a,b \\in A " }, { "math_id": 37, "text": " P(Y = -\\ |\\ R = +, A = a) = P(Y = -\\ |\\ R = +, A = b) \\quad \\forall a,b \\in A " }, { "math_id": 38, "text": " P(R = +\\ |\\ Y = -, A = a) = P(R = +\\ |\\ Y = -, A = b) \\quad \\forall a,b \\in A " }, { "math_id": 39, "text": " P(R = -\\ |\\ Y = -, A = a) = P(R = -\\ |\\ Y = -, A = b) \\quad \\forall a,b \\in A " }, { "math_id": 40, "text": " P(R = -\\ |\\ Y = +, A = a) = P(R = -\\ |\\ Y = +, A = b) \\quad \\forall a,b \\in A " }, { "math_id": 41, "text": " P(R = +\\ |\\ Y = +, A = a) = P(R = +\\ |\\ Y = +, A = b) \\quad \\forall a,b \\in A " }, { "math_id": 42, "text": " P(R = +\\ |\\ Y = y, A = a) = P(R = +\\ |\\ Y = y, A = b) \\quad y \\in \\{+,-\\} \\quad \\forall a,b \\in A " }, { "math_id": 43, "text": " P(Y = y\\ |\\ R = y, A = a) = P(Y = y\\ |\\ R = y, A = b) \\quad y \\in \\{+,-\\} \\quad \\forall a,b \\in A " }, 
{ "math_id": 44, "text": " P(R = Y\\ |\\ A = a) = P(R = Y\\ |\\ A = b) \\quad \\forall a,b \\in A " }, { "math_id": 45, "text": " \\frac{FN_{A=a}}{FP_{A=a}} = \\frac{FN_{A=b}}{FP_{A=b}} " }, { "math_id": 46, "text": " P(Y = +\\ |\\ S = s,A = a) = P(Y = +\\ |\\ S = s,A = b) \\quad \\forall s \\in S \\quad \\forall a,b \\in A " }, { "math_id": 47, "text": " P(Y = +\\ |\\ S = s,A = a) = P(Y = +\\ |\\ S = s,A = b) = s \\quad \\forall s \\in S \\quad \\forall a,b \\in A " }, { "math_id": 48, "text": " E(S\\ |\\ Y = +,A = a) = E(S\\ |\\ Y = +,A = b) \\quad \\forall a,b \\in A " }, { "math_id": 49, "text": " E(S\\ |\\ Y = -,A = a) = E(S\\ |\\ Y = -,A = b) \\quad \\forall a,b \\in A " }, { "math_id": 50, "text": "P(\\hat{Y} = 1)" }, { "math_id": 51, "text": "P(\\hat{Y} = 0 \\mid Y=0)" }, { "math_id": 52, "text": "P(\\hat{Y} = 1 \\mid Y = 1)" }, { "math_id": 53, "text": "P(Y = 1 \\mid \\hat{Y}=1)" }, { "math_id": 54, "text": "P(Y=0 \\mid \\hat{Y}=0)" }, { "math_id": 55, "text": " \nP(R_{A\\leftarrow a}=1 \\mid A=a,X=x) = P(R_{A\\leftarrow b}=1 \\mid A=a,X=x),\\quad\\forall a,b;\n" }, { "math_id": 56, "text": "A=a" }, { "math_id": 57, "text": "X=x" }, { "math_id": 58, "text": "A = b" }, { "math_id": 59, "text": "\\hat{R}_{A\\leftarrow a}" }, { "math_id": 60, "text": "A=a, X=x" }, { "math_id": 61, "text": "P(R=1 \\mid Y^0=0, A=a) = P(R=1 \\mid Y^0=0, A=b) \\wedge P(R=0 \\mid Y^1=1, A=a) = P(R=0 \\mid Y^1=1, A=b),\\quad\\forall a,b;" }, { "math_id": 62, "text": "Y^x" }, { "math_id": 63, "text": "x" }, { "math_id": 64, "text": "W" }, { "math_id": 65, "text": "Z" }, { "math_id": 66, "text": " D " }, { "math_id": 67, "text": " A = a " }, { "math_id": 68, "text": " disc_{A=a}(D) = \\frac{|\\{X\\in D| X(A) \\neq a, X(Y) = +\\}|}{|\\{X \\in D | X(A) \\neq a \\}|} - \\frac{|\\{X\\in D| X(A) = a, X(Y) = +\\}|}{|\\{X \\in D | X(A) = a \\}|}" }, { "math_id": 69, "text": " a " }, { "math_id": 70, "text": " P_{exp}(A = a \\wedge Y = +) = P(A = a) \\times P(Y = +) = \\frac{|\\{X \\in D | X(A) = a\\}|}{|D|} \\times \\frac{|\\{X \\in D| X(Y) = + \\}|}{|D|}" }, { "math_id": 71, "text": " P_{obs}(A = a \\wedge Y = +) = \\frac{|\\{X \\in D | X(A) = a \\wedge X(Y) = +\\}|}{|D|} " }, { "math_id": 72, "text": " X \\in D " }, { "math_id": 73, "text": " W(X) = \\frac{P_{exp}(A = X(A) \\wedge Y = X(Y))}{P_{obs}(A = X(A) \\wedge Y = X(Y))} " }, { "math_id": 74, "text": " W(X) " }, { "math_id": 75, "text": " disc_{A = a}(D) = \\frac{\\sum W(X) X \\in \\{X\\in D| X(A) \\neq a, X(Y) = +\\}}{\\sum W(X) X \\in \\{X \\in D | X(A) \\neq a \\}} - \\frac{\\sum W(X) X \\in \\{X\\in D| X(A) = a, X(Y) = +\\}}{\\sum W(X) X \\in \\{X \\in D | X(A) = a \\}} " }, { "math_id": 76, "text": " W " }, { "math_id": 77, "text": "L_{P}(\\hat{y},y)" }, { "math_id": 78, "text": " \\hat{Y} " }, { "math_id": 79, "text": " U " }, { "math_id": 80, "text": "L_{A}(\\hat{a},a) " }, { "math_id": 81, "text": " L_{A} " }, { "math_id": 82, "text": " \\nabla_{U}L_{A} " }, { "math_id": 83, "text": " \\nabla_{W}L_{P} - proj_{\\nabla_{W}L_{A}}\\nabla_{W}L_{P} - \\alpha \\nabla_{W}L_{A} " }, { "math_id": 84, "text": " \\alpha " }, { "math_id": 85, "text": " L_{P} " }, { "math_id": 86, "text": " \\nabla_{W}L_{P} " }, { "math_id": 87, "text": " - \\alpha \\nabla_{W}L_{A} " }, { "math_id": 88, "text": " -proj_{\\nabla_{W}L_{A}}\\nabla_{W}L_{P} " }, { "math_id": 89, "text": " P(+|X) " }, { "math_id": 90, "text": " max(P(+|X), 1-P(+|X)) \\leq \\theta " }, { "math_id": 91, "text": " \\theta " }, { "math_id": 92, "text": " 0.5 < \\theta < 1 " }, { "math_id": 93, "text": 
"X(A) = a" } ]
https://en.wikipedia.org/wiki?curid=62683332
62687342
Gurzadyan-Savvidy relaxation
In cosmology, Gurzadyan-Savvidy (GS) relaxation is a theory developed by Vahe Gurzadyan and George Savvidy to explain the relaxation over time of the dynamics of N-body gravitating systems such as star clusters and galaxies. Stellar systems observed in the Universe – globular clusters and elliptical galaxies – reveal their relaxed state, reflected in the high degree of regularity of some of their physical characteristics such as surface luminosity, velocity dispersion, geometric shapes, etc. Two-body encounters (of stars) have long been considered the basic mechanism of relaxation of stellar systems, leading to the observed fine-grained equilibrium. The coarse-grained phase of evolution of gravitating systems is described by violent relaxation, developed by Donald Lynden-Bell. The 2-body mechanism of relaxation is known in plasma physics. The difficulties with the description of collective effects in N-body gravitating systems arise from the long-range character of the gravitational interaction, as distinct from plasma, where, due to the two different signs of charges, Debye screening takes place. The 2-body relaxation mechanism, e.g. for elliptical galaxies, predicts relaxation times of around formula_0 years, i.e. time scales exceeding the age of the Universe. The problem of the relaxation and evolution of stellar systems and the role of collective effects are studied by various techniques. Among the efficient methods of study of N-body gravitating systems are numerical simulations; in particular, Sverre Aarseth's N-body codes are widely used. Stellar system time scales. Using the geometric methods of the theory of dynamical systems, Gurzadyan and Savvidy showed the exponential instability (chaos) of spherical N-body systems interacting by Newtonian gravity and derived the collective (N-body) relaxation time formula_1 where formula_2 denotes the average stellar velocity, formula_3 is the mean stellar mass and formula_4 is the stellar density. Normalized to parameters of stellar systems such as globular clusters, it yields formula_5 For clusters of galaxies it yields 10–1000 Gyr. Comparing this (GS) relaxation time to the 2-body relaxation time formula_6 Gurzadyan and Savvidy obtain formula_7 where formula_8 is the radius of gravitational influence and d is the mean distance between stars. With increasing density, d decreases and approaches formula_9, so that 2-body encounters become dominant in the relaxation mechanism. The times formula_10 and formula_11 are related to the dynamical time formula_12 by the relations formula_13 and reflect the existence of three scales of time and length for stellar systems: formula_14 That approach (based on the analysis of the so-called two-dimensional curvature of the configuration space of the system) led to the conclusion that, while spherical systems are exponentially unstable (Kolmogorov K-systems), spiral galaxies "spend a large amount of time in regions with positive two-dimensional curvature" and hence "elliptical and spiral galaxies should have a different origin". Within the same geometric approach, Gurzadyan and Armen Kocharyan introduced the Ricci curvature criterion for relative instability (chaos) of dynamical systems. Derivation of GS-time scale by stochastic differential equation approach. The GS-time scale formula_15 has been rederived by Gurzadyan and Kocharyan using a stochastic differential equation approach. Observational indication and numerical simulations. 
Observational support for the GS-time scale has been reported for globular clusters. Numerical simulations supporting the GS-time scale have also been claimed. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "10^{13}" }, { "math_id": 1, "text": "\\tau_{GS}=\\left(\\frac{15}{4}\\right)^{2/3}\\left(\\frac{1}{2\\pi\\sqrt2}\\frac{v}{GMn^{2/3}}\\right)," }, { "math_id": 2, "text": "v" }, { "math_id": 3, "text": "M" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "\\tau_{GS} \\simeq 10^8 \\text{year} \\left(\\frac{v}{10 \\text{km/sec}}\\right)\\left(\\frac{n}{1 \\text{pc}^{-3}}\\right)^{-2/3} \\left(\\frac{M}{M_{\\odot}}\\right)^{-1}." }, { "math_id": 6, "text": "\\tau_{2b}=\\frac{\\sqrt 2 v^3}{\\pi G^2M^2 n ln(N/2)}," }, { "math_id": 7, "text": "\\frac{\\tau_{2b}}{\\tau_{GS}}\\simeq\\frac{v^2}{GMn^{1/3}}\\frac{1}{lnN}\\simeq\\frac{d}{r_*}\\frac{1}{lnN}," }, { "math_id": 8, "text": "r_*=GM/v^2 " }, { "math_id": 9, "text": "r_*" }, { "math_id": 10, "text": "\\tau_{GS}" }, { "math_id": 11, "text": "\\tau_{2b}" }, { "math_id": 12, "text": "\\tau_{dyn}=D^{3/2}{GMN}^{1/2}" }, { "math_id": 13, "text": "\\tau_{GS}\\simeq\\frac{D}{d}\\tau_{dyn}, \\quad \\tau_{2b}\\simeq\\frac{D}{r_*}\\tau_{dyn}," }, { "math_id": 14, "text": " D:\\tau_{dyn} \\quad; \\quad d:\\tau_{GS} \\quad; \\quad r_{*}: \\tau_{2b}." }, { "math_id": 15, "text": "\\tau_{GS}=\\tau_{dyn} N^{1/3}" } ]
https://en.wikipedia.org/wiki?curid=62687342
62687611
Relative convex hull
In discrete geometry and computational geometry, the relative convex hull or geodesic convex hull is an analogue of the convex hull for the points inside a simple polygon or a rectifiable simple closed curve. Definition. Let formula_0 be a simple polygon or a rectifiable simple closed curve, and let formula_1 be any set enclosed by formula_0. A geodesic between two points in formula_0 is a shortest path connecting those two points that stays entirely within formula_0. A subset formula_2 of the points inside formula_0 is said to be relatively convex, geodesically convex, or formula_0-convex if, for every two points of formula_2, the geodesic between them in formula_0 stays within formula_2. Then the relative convex hull of formula_1 can be defined as the intersection of all relatively convex sets containing formula_1. Equivalently, the relative convex hull is the minimum-perimeter weakly simple polygon in formula_0 that encloses formula_1. This was the original formulation of relative convex hulls. However, this definition is complicated by the need to use weakly simple polygons (intuitively, polygons in which the polygon boundary can touch or overlap itself but not cross itself) instead of simple polygons when formula_1 is disconnected and its components are not all visible to each other. Special cases. Finite sets of points. An efficient algorithm is known for the construction of the relative convex hull of a finite set of points inside a simple polygon. With subsequent improvements in the time bounds for two subroutines, finding shortest paths between query points in a polygon, and polygon triangulation, this algorithm takes time formula_3 on an input with formula_4 points in a polygon with formula_5 vertices. It can also be maintained dynamically in sublinear time per update. The relative convex hull of a finite set of points is always a weakly simple polygon, but it might not actually be a simple polygon, because parts of it can be connected to each other by line segments or polygonal paths rather than by regions of nonzero area. Simple polygons. For relative convex hulls of simple polygons, an alternative but equivalent definition of convexity can be used. A simple polygon formula_0 within another simple polygon formula_6 is relatively convex or formula_6-convex if every line segment contained in formula_6 that connects two points of formula_0 lies within formula_0. The relative convex hull of a simple polygon formula_0 within formula_6 can be defined as the intersection of all formula_6-convex polygons that contain formula_0, as the smallest formula_6-convex polygon that contains formula_0, or as the minimum-perimeter simple polygon that contains formula_0 and is contained by formula_6. Linear-time algorithms for the convex hull of a simple polygon have been generalized to the relative convex hull of one simple polygon within another. The resulting generalized algorithm is not linear time, however: its time complexity depends on the depth of nesting of certain features of one polygon within another. In this case, the relative convex hull is itself a simple polygon. Alternative linear time algorithms based on path planning are known. A similar definition can also be given for the relative convex hull of two disjoint simple polygons. This type of hull can be used in algorithms for testing whether the two polygons can be separated into disjoint halfplanes by a continuous linear motion, and in data structures for collision detection of moving polygons. Higher dimensions. 
The definition of relative convex hulls based on minimum enclosure does not extend to higher dimensions, because (even without being surrounded by an outer shape) the minimum surface area enclosure of a non-convex set is not generally convex. However, for the relative convex hull of a connected set within another set, a similar definition to the one for simple polygons can be used. In this case, a relatively convex set can again be defined as a subset of the given outer set that contains all line segments in the outer set between pairs of its points. The relative convex hull can be defined as the intersection of all relatively convex sets that contain the inner set. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "K" }, { "math_id": 3, "text": "O(p + n\\log(p+n))" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "p" }, { "math_id": 6, "text": "Q" } ]
https://en.wikipedia.org/wiki?curid=62687611
62689487
Esther 3
A chapter in the Book of Esther Esther 3 is the third chapter of the Book of Esther in the Hebrew Bible or the Old Testament of the Christian Bible. The author of the book is unknown and modern scholars have established that the final stage of the Hebrew text would have been formed by the second century BCE. Chapters 3 to 8 contain the nine scenes that form the complication in the book. This chapter introduces Haman the Agagite, who is linked by his genealogy to King Agag, the enemy of Israel's King Saul, from whose father, Kish, Mordecai was descended (). The king Ahasuerus elevated Haman to a high position in the court and ordered everyone to bow down to him, but Mordecai refused to do so to Haman (), a refusal connected to Mordecai's Jewish identity (as Jews would only bow down to worship their own God; cf. Daniel 3); this indirectly introduces the religious dimension of the story. Haman reacted with a vast plan to destroy not simply Mordecai but his entire people (), getting the approval of the king to arrange a particular date for the genocide, selected by casting a lot, or "pur" (one reason for the festival of Purim; ), to fall on the thirteenth day of the twelfth month, Adar (, ). The chapter ends with the confused reaction of the whole city of Susa due to the decree (). Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 15 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Haman's promotion and Mordecai's refusal to honor him (3:1–6). Shifting the focus from Esther and Mordecai, this section describes Haman the Agagite, who would become "the enemy of the Jews". Haman's displeasure at Mordecai's refusal to bow down to him turns into an evil design to wipe out Mordecai's whole people. "After these things did king Ahasuerus promote Haman the son of Hammedatha the Agagite, and advanced him, and set his seat above all the princes that were with him." "Now it came to pass, when they spake daily unto him, and he hearkened not unto them, that they told Haman, to see whether Mordecai's matters would stand: for he had told them that he was a Jew." "But he disdained to lay hands on Mordecai alone. So, as they had made known to him the people of Mordecai, Haman sought to destroy all the Jews, the people of Mordecai, throughout the whole kingdom of Ahasuerus." Haman's plot against the Jews gains the king's consent (3:7–15). Haman carried out his design by first casting lots to choose a suitable day for the execution and then persuading the king to issue a decree to ensure its implementation. "In the first month, that is, the month Nisan, in the twelfth year of king Ahasuerus, they cast Pur, that is, the lot, before Haman from day to day, and from month to month, to the twelfth month, that is, the month Adar." [Haman said:] "If it please the king, let it be decreed that they be destroyed, and I will pay 10,000 talents of silver into the hands of those who have charge of the king's business, that they may put it into the king's treasuries." 
"Then the king’s scribes were summoned on the thirteenth day of the first month, and a decree was written just as Haman had commanded to the king’s satraps and to the governors over each province and to the officials of all peoples and to every province according to its own script, and to every people in their language. It was written in the name of King Ahasuerus and sealed with the king’s signet ring." "And the letters were sent by couriers into all the king’s provinces, to destroy, to kill, and to annihilate all the Jews, both young and old, little children and women, in one day, on the thirteenth day of the twelfth month, which is the month of Adar, and to plunder their possessions." Verse 13. This first edict can be compared and contrasted to the second one as recorded in : "The couriers went out, hastened by the king’s command; and the decree was proclaimed in Shushan the citadel. So the king and Haman sat down to drink, but the city of Shushan was perplexed." Verse 15. This verse can be compared and contrasted to : Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62689487
62693289
European Green Deal
Plan to transform the EU into a climate-neutral economy by 2050 The European Green Deal, approved in 2020, is a set of policy initiatives by the European Commission with the overarching aim of making the European Union (EU) climate neutral in 2050. The plan is to review each existing law on its climate merits, and also introduce new legislation on the circular economy (CE), building renovation, biodiversity, farming and innovation. The president of the European Commission, Ursula von der Leyen, stated that the European Green Deal would be Europe's "man on the moon moment". On 13 December 2019, the European Council decided to press ahead with the plan, with an opt-out for Poland. On 15 January 2020, the European Parliament voted to support the deal as well, with requests for higher ambition. A year later, the European Climate Law was passed, which legislated that greenhouse gas emissions should be 55% lower in 2030 compared to 1990. The Fit for 55 package is a large set of proposed legislation detailing how the European Union plans to reach this target. The European Commission's climate change strategy, launched in 2020, is focused on a promise to make Europe a net-zero emitter of greenhouse gases by 2050 and to demonstrate that economies will develop without increasing resource usage. However, the Green Deal has measures to ensure that nations that are already reliant on fossil fuels are not left behind in the transition to renewable energy. The green transition is a top priority for Europe. The EU Member States want to reduce greenhouse gas emissions by 55% by 2030 from 1990 levels, and become climate neutral by 2050. Von der Leyen appointed Frans Timmermans as Executive Vice President of the European Commission for the European Green Deal in 2019. He was succeeded by Maroš Šefčovič in 2023. European Climate Pact. The European Climate Pact is an initiative of the European Commission supporting the implementation of the European Green Deal. It is a movement to build a greener Europe, providing a platform to work and learn together, develop solutions, and achieve real change. The Pact provides opportunities for people, communities, and organizations to participate in climate and environmental action across Europe. By pledging to the Pact, European stakeholders commit to taking concrete climate and environmental actions in a way that can be measured and/or followed up. Participating in the Pact is an opportunity for organizations to share their transition journey with their peers and collaborate with other actors towards common targets. Aims. The overarching aim of the European Green Deal is for the European Union to become the world's first “climate-neutral bloc” by 2050. It has goals extending to many different sectors, including construction, biodiversity, energy, transport and food. The plan includes potential carbon tariffs for countries that don't curtail their greenhouse gas pollution at the same rate. The mechanism to achieve this is called the Carbon Border Adjustment Mechanism (CBAM). It also includes: It also leans on Horizon Europe, to play a pivotal role in leveraging national public and private investments. Through partnerships with industry and member States, it will support research and innovation on transport technologies, including batteries, clean hydrogen, low-carbon steel making, circular bio-based sectors and the built environment. 
The EU plans to finance the policies set out in the Green Deal through an investment plan – InvestEU – which forecasts at least €1 trillion in investment. Furthermore, for the EU to reach its goals set out in the deal, it is estimated that approximately €260 billion a year in investment will be required by 2030. Almost half of all European residential structures were built before 1970, at a time when no consideration was given to the amount of energy used by materials and standards. At the present rate of refurbishment, reaching a highly energy-efficient and decarbonised building stock might take more than a century. One of the major aims of the European Green Deal is to “at least double or even triple” the current refurbishment rate of approximately 1%. This is also true outside of the EU. In addition to rehabilitation, investment is required to enable the development of new efficient and ecologically friendly structures. In July 2021, the European Commission released its “Fit for 55” legislation package, which contains important guidelines for the future of the automotive industry: all new cars sold in the EU must be zero-emission vehicles from 2035. In the context of the Paris Agreement, which uses today's emissions as the baseline, the target looks as follows: since EU emissions had already dropped by 25% between 1990 and 2019, a 55% reduction target using 1990 as the baseline represents, in 2019 terms, a 40% reduction target, which can be calculated using this equation: formula_0 According to the Emissions Gap Report 2020 by the United Nations Environment Programme, meeting the Paris Agreement's 1.5 °C temperature increase target (with 66% probability) requires a global emissions reduction of 34 GtCO2e from the 2019 level of 59 GtCO2e by 2030, i.e. a 34/59 ≈ 57% reduction, therefore well above the 40% target of the European Green Deal. This 57% emission reduction target for 2030 represents average global reductions, while advanced economies are expected to contribute more. Policy areas. Clean energy. Climate neutrality by the year 2050 is the main goal of the European Green Deal. For the EU to reach its target of climate neutrality, one goal is to decarbonise its energy system by aiming to achieve “net-zero greenhouse gas emissions by 2050.” The relevant energy directives are intended to be reviewed and adjusted if problem areas arise. Many other regulations already in place will also be reviewed. In 2023, the Member States will update their climate and national energy plans to adhere to the EU's climate goal for 2030. The key principles include: In 2020, the European Commission unveiled its strategy for a greener, cleaner energy future. The EU Strategy for Energy System Integration serves as a framework for an energy transition, which comprises measures to achieve a more circular system, and measures to implement greater direct electrification as well as to develop clean fuels (including hydrogen). The European Clean Hydrogen Alliance has also been launched, as hydrogen has a special role to play in this seismic shift. By 2023, greentech was one of the few sectors in the EU where venture capital investments matched those in the United States, highlighting the impact of the EU's ambitious climate goals and government subsidies. The European Green Deal and accompanying government policies have driven substantial investment in greentech, particularly in areas like energy storage, circular economy initiatives, and agricultural technology. This focus has enabled the EU to close the existing investment gap with the US in these strategic sectors. 
Sustainable industry. Another target area for achieving the EU's climate goals is the introduction of the Circular Economy Industrial policy. In March 2020, the EU announced its Industrial Strategy, with the aim to “empower citizens, revitalises regions and have the best technologies.” Key points of this policy area include modernising industry and influencing the exploration and creation of markets for “climate neutral” and circular-economy-friendly goods. This further entails the “decarbonisation and modernisation of energy-intensive industries such as steel and cement.” A ‘Sustainable products’ policy is also projected to be introduced, which will focus on reducing the wastage of materials. This aims to ensure products will be reused and recycling processes will be reinforced. The materials particularly focused on include “textiles, construction, vehicles, batteries, electronics and plastics.” The European Union is also of the opinion that it "should stop exporting its waste outside of the EU" and it will therefore "revisit the rules on waste shipments and illegal exports". The EU mentioned that "the Commission will also propose to revise the rules on end-of-life vehicles with a view to promoting more circular business models." The European Commission estimates that up to 2030, Europe's green investment offensive will cost an additional €350 billion annually. Building and renovation. This policy area targets the process of building and renovation, whose methods are currently unsustainable. Many non-renewable resources are used in the process as well. Thus, the plan focuses on promoting the use of energy-efficient building methods such as climate-proofing buildings, increasing digitalisation and enforcing rules surrounding the energy performance of buildings. Social housing renovation will also occur in order to reduce the price of energy bills for those less able to finance these costs. The aim is to triple the renovation rate of all buildings to reduce the pollution emitted during these processes. Digital technologies are important in achieving the European Green Deal's environmental targets. Emerging digital technologies, if correctly applied, have the potential to play a critical role in addressing environmental issues. Smart city mobility, precision agriculture, sustainable supply chains, environmental monitoring, and catastrophe prediction are just a few examples. Farm to Fork. The ‘From Farm to Fork’ strategy pursues the issue of food sustainability as well as the support allocated to the producers, i.e. farmers and fishermen. The methods of production and transfer of these resources are to follow what the E.U. considers a climate-friendly approach, aiming to increase efficiency as well. The aim is that the price and quality of the goods are not compromised by these newly adopted processes. Specific target areas include reducing the use of chemical pesticides, increasing the availability of healthy food options and helping consumers understand the health ratings of products and sustainable packaging. On the official page of the From Farm to Fork program, Frans Timmermans, Executive Vice-President of the European Commission, is quoted as saying: <templatestyles src="Template:Blockquote/styles.css" /> The program includes the following targets: Eliminating pollution. The ‘Zero Pollution Action Plan’, which the Commission aims to adopt in 2021, intends to achieve no pollution from “all sources”, cleaning the air, water and soil by 2050. 
Environmental quality standards are to be fully met, requiring all industrial activities to take place within toxic-free environments. Water management policies for agricultural and urban industries will be reviewed to fit the “no harm” policy. Harmful substances such as micro-plastics and chemicals, such as pharmaceuticals, that are threatening the environment are to be substituted in order to reach this goal. The ‘Farm to Fork’ strategy aids pollution reduction from excess nutrients and supports sustainable methods of production and transportation. Some formulations of the plan, such as "toxic-free" and "zero pollution", have been criticized by the Genetic Literacy Project as anti-scientific and contradictory, as any substance can be toxic at a specific dose, and almost any life-related process results in "pollution". Sustainable mobility. A reduction in emissions from transportation methods is another target area within the European Green Deal. A comprehensive strategy on "Sustainable and Smart mobility" is intended to be implemented. This will increase the adoption of sustainable and alternative fuels in road, maritime and air transport and set the emission standards for combustion-engine vehicles. It also aims to make sustainable alternative solutions available to businesses and the public. Smart traffic management systems and applications are intended to be developed as a solution. Freight delivery methods are to be altered, with preferred pathways being by land or water. Public transport alterations aim to reduce public congestion as well as pollution. The installation of charging ports for electric vehicles is intended to encourage the purchase of low-emission vehicles. The ‘Single European Sky’ plan focuses on air traffic management in order to increase safety, flight efficiency and environmentally friendly conditions. Biodiversity and ecosystem health. A strategy surrounding the protection of the European Union's biodiversity will be put forth in 2021. Management of forests and maritime areas, environmental protection and addressing the issue of losses of species and ecosystems are all aspects of this target area. Restoration of affected ecosystems is intended to occur through implementing organic farming methods, aiding pollination processes, restoring free-flowing rivers, reducing pesticides that harm surrounding wildlife, and reforestation. The EU wants to protect 30% of land and 30% of sea, whilst creating stricter safeguards around new and old-growth forests. Its aim is to restore ecosystems and their biological levels. The official page of the EU Biodiversity Strategy for 2030 cites Ursula von der Leyen, President of the European Commission, saying that: <templatestyles src="Template:Blockquote/styles.css" /> The biodiversity strategy is an essential part of the climate change mitigation strategy of the European Union. Of the 25% of the European budget that will go to fighting climate change, a large portion will be dedicated to restoring biodiversity and nature-based solutions. The EU Biodiversity Strategy for 2030 includes the following targets: According to the page, approximately half of global GDP depends on nature. In Europe, many parts of the economy that generate trillions of euros per year depend on nature. Currently, the benefits of Natura 2000 in Europe contribute €200–300 billion per year. Florika Fink-Hooijer, Director General of the Directorate-General for the Environment, said that the EU has the “ambition to be a standard setter” for global biodiversity policy. Motivation. 
The main aim of the European Green Deal is to become climate neutral by the year 2050. The reasons pushing for the plan's creation are based upon environmental issues such as climate change, loss of biodiversity, ozone depletion, water pollution, urban stress, waste production and more. The following statistics highlight the climate-related issues within the European Union: Clean energy statistics. Global carbon dioxide emissions by country in 2023 were as follows: China (31.8%), United States (14.4%), European Union (4.9%), India (9.5%), Russia (5.8%), Japan (3.5%) and other countries (30.1%). Biodiversity. All 54 actions were adopted or implemented by 2019. The EU is now recognised as a leader in circular economy policy making globally. The waste legislation was adopted in 2018, following negotiations with the European Parliament and Member States in the European Council. According to Eurostat, jobs related to circular economy activities increased by 6% between 2012 and 2016 within the EU. The action plan has also encouraged at least 14 Member States, eight regions, and 11 cities to put forward circular economy strategies. Recovery program from the novel coronavirus. With the 2020 COVID-19 pandemic spreading rapidly within the European Union, the focus on the European Green Deal diminished. Many leaders, including Polish deputy minister Kowalski, a Romanian politician, and Czech Prime Minister Babiš, suggested either a year-long pause or a complete discontinuation of the deal. Many believed the main focus of the European Union's policymaking process should be the immediate, shorter-term crisis rather than climate change. The financial market being under immense stress, along with a reduction in economic activity, was another factor threatening to derail the European Green Deal. The impact of COVID-19 on public and private funds for the policy, as well as on the EU's GDP, also hindered the budgeting needed for the policy to take effect. However, as recovery processes began within the European Union and the first wave of infections subsided, a large majority of ministers supported the push for the deal to proceed. Representatives from 17 governments signed a letter in mid-April pushing for the deal to continue as a “response to the economic crisis while transforming Europe into a sustainable and climate neutral economy.” In April 2020, the European Parliament called for including the European Green Deal in its economic recovery program. Ten countries urged the European Union to adopt the “green recovery plan” as fears grew that the economic hit caused by the COVID-19 pandemic could weaken action on climate change. In May 2020 the leaders of the European Commission argued that the ecological crisis helped create the pandemic, which emphasised the need to advance the European Green Deal. Later that month, the €750 billion European recovery package (called Next Generation EU) and the €1 trillion budget were announced. The European Green Deal is part of this package. The money will be spent only on projects that meet certain green criteria, and 25% of all funding will go to climate change mitigation. 
Fossil fuels and nuclear power are excluded from the funding. The recovery package is also intended to restore some equilibrium between rich and poor countries in the European Union. As part of the European Union response to the COVID-19 pandemic, several economic programs were set up, including the CRII, CRII+, European Social Fund+ and REACT-EU. With these programs, flexibility is maintained, and CRII and CRII+ are also able to direct money to crisis repair measures through the European Regional Development Fund (ERDF), European Social Fund (ESF), Fund for European Aid to Most Deprived (FEAD) or the European Social Fund Plus. Some of these programs (such as REACT-EU) also serve to invest in the European Green Deal. In July 2020, a proposed "Green Recovery Act" in the United Kingdom was published by a think tank and academic group, implementing all recommendations of a “Green New Deal” for Europe (which is distinct from the EU Green Deal) and drawing attention to the fact that "car manufacturers in Europe are far behind China" in ending fossil fuel-based production. The same month, the recovery package and the budget of the European Union were generally accepted. The portion of the money allocated for climate action grew to 30%. The plan includes some green taxation on European products and on imports, but critics say it is still not enough to achieve the climate targets of the European Union and that it is not clear how to ensure that all the money will really go to green projects. History of opposition by countries. Although all EU leaders signed off on the European Green Deal in December 2019, disagreements arose regarding the goals and the timeline for achieving them. Poland has stated that climate neutrality by 2050 will not be a possibility for the country due to its reliance on coal as its main power source. Its climate minister, Michał Kurtyka, declared that commitments and funds need to be more fairly allocated. The initiative to raise the goal for lowering carbon emissions split the EU, with coal-reliant countries such as Poland complaining it will affect “jobs and competitiveness.” Up to 41,000 jobs could be lost within Poland, with the Czech Republic, Bulgaria and Romania also facing a possible loss of 10,000 jobs each. Czech Prime Minister Andrej Babiš stated that his nation will not reach the 2050 goal “without nuclear” power. Countries are also arguing over the Just Transition Fund (JTF), which aims to help countries that are reliant on coal to become more environmentally friendly. Countries that reduced their impact prior to the policy, such as Spain, believe that the JTF is unfair as it only benefits the countries that didn't "go green earlier." The head of the Brussels office of the Open Europe think tank, Pieter Cleppe, further dismissed the plan with the sarcastic comment, “What could possibly go wrong.” Poland's Prime Minister Mateusz Morawiecki said that the EU's carbon pricing system unfairly disadvantages poorer countries in Southern and Eastern Europe. Speaking at the COP26 climate summit in Glasgow, Czech Prime Minister Babiš denounced the European Green Deal, saying that the European Union "can achieve nothing without the participation of the largest polluters such as China or the USA that are responsible for 27 and 15 percent, respectively, of global CO2 emissions." 
In August 2023, the Polish government filed a series of complaints with the Court of Justice of the European Union (CJEU) against provisions that are part of the Fit for 55 package, claiming that three EU climate policies threaten Poland's economy and energy security. In February 2024, responding to protests by Polish farmers, Polish Prime Minister Donald Tusk declared that he would advocate for changes in the European Green Deal. In March 2024, Tusk insisted that Poland would go its own way "without European coercion". Italian Prime Minister Giorgia Meloni criticized the EU ban on the sale of new petrol and diesel cars from 2035, saying it would "condemn [Europe] to new strategic dependencies, such as China's electric [vehicles]." Meloni said, "Reducing polluting emissions is the path we want to follow, but with common sense." Analysis and criticism. Initial European Green Deal. It has been found that the American oil company ExxonMobil had a significant impact on the early negotiations of the European Green Deal. ExxonMobil attempted to change the deal in a way that put less emphasis on the importance of reducing transport that emits carbon dioxide. ExxonMobil was only one of many opponents of the deal. The European Green Deal has faced criticism from some EU member states, as well as non-governmental organizations. Greenpeace has argued that the deal is not drastic enough and that it will fail to slow down climate change to an acceptable degree. The Corporate Europe Observatory calls the Deal a positive first step, but criticizes the influence the fossil fuel industry had on it. There has been criticism of the deal not doing enough, but also of the deal potentially being destructive to the European Union in its current state. Former Romanian president Traian Băsescu has warned that the deal could lead some EU members to push towards an exit from the union. While some European states are on their way to eliminating the use of coal as a source of energy, many others still rely heavily on it. This scenario demonstrates how the deal may appeal to some states more than others. The economic impact of the deal is likely to be unevenly spread among EU states. This was highlighted by Polish MEP Ryszard Legutko, who asked, “is the Commission trying to seize power from the member states?” Poland, the Czech Republic and Hungary, three states that depend mostly on coal for energy, were the most opposed to the deal. Young climate activist Greta Thunberg commented on governments opposing the deal, saying "It seems to have turned into some kind of opportunity for countries to negotiate loopholes and to avoid raising their ambition". In addition, many groups such as Greenpeace, Friends of the Earth Europe and the Institute for European Environmental Policy have all analysed the policy and believe it isn't “ambitious enough.” Greenpeace believes the plan is “too little too late”, whilst the IEEP stated that most prospects of meeting policy objectives “lacked clear or adequate” goals for the problem areas. The Greens-European Free Alliance and Jytte Guteland have proposed that the European Green Deal's EU 2030 climate target be raised to at least 65% greenhouse gas emission reductions. The EU has acknowledged these issues, however, and through the “Just Transition Mechanism” expects to distribute the burden of transitioning to a greener economy more fairly. 
This policy means that countries that have more workers in coal and oil shale sectors, as well as those with higher greenhouse emissions, will receive more financial aid. According to Frans Timmermans, this mechanism will also make investment more accessible for those most affected, as well as offering a support package, which will be worth “at least 100 billion euros”. The Mechanism, a part of the Sustainable Europe Investment Plan, is expected to mobilize €100 billion in investments during the 2021-2027 Multiannual Financial Framework (MFF), with funding from the EU budget and Member States, as well as contributions from InvestEU and the European Investment Bank. The Just Transition Mechanism provides a comprehensive set of support options for the most vulnerable regions. The Just Transition Fund, the first pillar, will provide €17.5 billion in EU grants available to the most affected territories, implying a national co-financing requirement of around €10 billion. The second pillar creates a specialized transition plan under InvestEU to leverage private investment. Finally, a new public sector credit facility is formed under the third pillar to leverage public finance. These measures will be accompanied by specialized advisory and technical assistance for the affected regions and projects. The European Investment Bank Group will be able to support this through Structural Programme Loans in conjunction with European structural and investment funds (ESIF) co-financing operations. At COP26, the European Investment Bank announced a set of just transition common principles agreed upon with multilateral development banks, which also align with the Paris Agreement. The principles refer to focusing financing on the transition to net zero carbon economies, while keeping socioeconomic effects in mind, along with policy engagement and plans for inclusion and gender equality, all aiming to deliver long-term economic transformation. The European Investment Bank also announced that it is prepared to mobilise $1 trillion for climate action until 2030. The African Development Bank, Asian Development Bank, Islamic Development Bank, Council of Europe Development Bank, Asian Infrastructure Investment Bank, European Bank for Reconstruction and Development, New Development Bank, and Inter-American Development Bank are among the multilateral development banks that have vowed to uphold the principles of climate change mitigation and a Just Transition. The World Bank Group also contributed. Fossil fuels. The current proposals have been criticised for falling short of the goal of ending fossil fuels, or of being sufficient for a green recovery after the COVID-19 pandemic. The European Environmental Bureau as well as the International Energy Agency (IEA) stated that fossil fuel subsidies would need to end. However, this cannot be done until 2021, when the Energy Taxation Directive is to be revised. Also, while fossil fuels are still actively being subsidized by the EU until 2021, even during an economic recession, the EU is already working on supporting the electrification of vehicles and green fuels such as hydrogen. Environmental effects of electric cars. Nickel and cobalt are the basic commodities used in almost every electric vehicle battery. Open-pit nickel mining has led to environmental degradation and pollution in developing countries such as the Philippines and Indonesia. In 2024, nickel mining and processing was one of the main causes of deforestation in Indonesia. 
Open-pit cobalt mining has led to deforestation and habitat destruction in the Democratic Republic of Congo. The European Environmental Bureau and Friends of the Earth Europe published a report analysing the European Green Deal. According to the report, changing energy sources will not be enough to reach a sustainable society, because EVs, wind and solar energy require a high rate of resource consumption, while the mining process is associated with high societal and environmental damage. The report says: "With respect to environmental impacts from resource use, the EU uses between 70% and 97% of the ‘safe operating space’ available for the whole world. This means the EU alone is close to exceeding the planetary boundaries for resource use impacts, beyond which the stable functioning of the earth’s biophysical systems are in jeopardy." The report proposes, among other measures, creating binding targets for reducing resource consumption, shrinking economic sectors with little or no societal benefits (military, aerospace, fast fashion, cars), protecting areas and people from mining, and creating a sharing economy. Job losses and inflation. Trade unions warned that the European Green Deal will put 11 million jobs at risk. The Commission predicts that 180,000 jobs could be lost in the coal mining industry by 2030. A 2021 study estimates that the automotive industry could lose half a million jobs. The EU’s carbon pricing scheme for road and heating fuels (ETS2) will lead to an increase in the price of fossil fuels such as heating gas, petrol and diesel. The ETS2 will become fully operational in 2027. The European Green Deal could reduce Europe's agricultural production and increase global food prices. In July 2023, European People's Party (EPP) leader Manfred Weber tried to block the Nature Restoration Law, saying it would destroy farmers' livelihoods and threaten food security. 2021–2023 global energy crisis. Due to a combination of unfavourable conditions, which involved soaring demand for natural gas, its diminished supply from Russia and Norway to the European markets, less power generation by renewable energy sources such as wind, water and solar energy, and a cold winter that left European and Russian gas reservoirs depleted, Europe faced steep increases in gas prices in 2021. Hungarian Prime Minister Viktor Orbán blamed a record-breaking surge in energy prices on the European Commission's Green Deal plans. "Politico" reported that "Despite the impact of high energy prices, [EU Commissioner for Energy] Simson insisted that there are no plans to backtrack on the bloc's Green Deal". European Commission President Ursula von der Leyen said that "Europe today is too reliant on gas and too dependent on gas imports. The answer has to do with diversifying our suppliers ... and, crucially, with speeding up the transition to clean energy." Energy-intensive German industry and German exporters were hit particularly hard by the energy crisis. 2024 European farmers' protests. European farmers have protested against proposed environmental regulations (such as a carbon tax, pesticide bans, nitrogen emissions curbs and restrictions on water and land usage), low food prices and trade in agricultural products with non-European Union member states, such as Ukraine and the Mercosur bloc of South America. Academic analysis. 
A meta-analysis from 2023 reported results about "required technology-level investment shifts for climate-relevant infrastructure until 2035" within the EU, and found these are "most drastic for power plants, electricity grids and rail infrastructure", approximately €87 billion above the planned budgets in the near term (2021–25), and in need of sustainable finance policies. A paper from Bruegel proposes that the European Union Emissions Trading System should be expanded to more sectors. Bioeconomy. In 2024, over 60 NGOs sent a letter to the European Union expressing concern about the bioeconomy concept "explicitly referenced in the European Green Deal". According to the organizations, it puts strong pressure on ecosystems, while in the same European Green Deal, one of the targets is to protect 30% of land and waters and to restore ecosystems. The authors argue that "moving from fossil to bio sources without embedding it in a wider socio-ecological transformation and drastically reducing consumption would be a disaster." May 2024 analysis. In May 2024, a report was published summarizing the main achievements of the European Union in the environmental domain since 2019, with an emphasis on the European Green Deal. The report says that without this set of policies the environmental situation would be worse, but that the world is still on track for 2.9 degrees of warming. New European Bauhaus. The New European Bauhaus is an artistic movement initiated by the European Commission, more precisely by its President, Ursula von der Leyen. Its aim is to implement the European Green Deal through culture by integrating aesthetics, sustainability and inclusiveness. The New European Bauhaus (NEB) is an interdisciplinary movement which intends to re-express the fundamental ambitions of the historical Bauhaus movement founded by the German architect Walter Gropius, in order to deal with contemporary issues from the fields of creation: art, crafts, design, architecture and urban planning. Being "new", the New European Bauhaus is currently still being developed by a multicultural and international Research Committee headed by an artist, Alexandre Dang. However, the name that has been chosen is strongly criticized in some artistic communities as being "inherently not inclusive". Phases. The movement aims to be as open and accessible as possible, which is to be facilitated by planning in three phases: the Design phase (2020-2021), the Delivery phase (2021-2023+) and the Dissemination phase (2023-2024+). The Design phase. As a first step, the Design phase was about finding methods that could boost existing ideas related to the NEB's challenges, regarding culture and technology. These two notions are considered by the New European Bauhaus as determining elements for facing contemporary concerns, especially in the architecture and urban planning sectors. Through a call for proposals, acceleration services and financial contributions started to be provided to some projects under European Union funding programs, such as Horizon Europe or the LIFE programme, but also by international organisations. In the idea of a collective design dynamic, a "High-level roundtable" has been set up with 18 thinkers and practitioners, involving for example the famous architects Shigeru Ban and Bjarke Ingels, the President of the Italian National Innovation Fund Francesca Bria, the activist and academic Sheela Patel, and others. The Delivery phase. 
After the Design phase, the Communication of the European Commission "New European Bauhaus Beautiful, Sustainable, Together" was released on 15 September 2021. The detailed content of this communication directly led to the Delivery phase, which began by setting up five pilot projects. These projects were selected as flagship proposals for the NEB's announced goal: "a sustainable green transformation in housing, architecture, transportation, urban, and rural spaces as part of its effort to reach carbon neutrality by 2050". In fact, one of the fundamental points of the New European Bauhaus, as put forward by the European Commission, is to translate the European Green Deal, officially approved in 2020, into a tangible cultural experience in which citizens from all around the world can participate. Referring to the major principles of the original Bauhaus movement, the NEB initiative wants to be multi-level: "from global to local, participatory and transdisciplinary". By initiating a co-design process, views and experiences of thousands of citizens, professionals and organisations across the EU, and beyond, were brought into open conversations. Emerging from this collective thinking, the three terms highlighted to define the movement are "Sustainability" (including climate goals, circularity, zero pollution and biodiversity), "Aesthetics" (quality of experience and style, beyond functionality) and "Inclusion" (including diversity first, securing accessibility and affordability). The four thematic axes chosen to guide the NEB's implementation for the next years are "Reconnecting with nature", "Regaining a sense of belonging", "Prioritising the places and people that need it most", and "Fostering long term, life cycle thinking in the industrial ecosystem". The three levels of interconnected transformations expected from the initiative are "changes in places around us", "changes in the environment that enable innovation" and "changes in the diffusion of new meanings". The Dissemination phase. During the Dissemination phase, the New European Bauhaus planned to focus on spreading chosen ideas and concepts to a broader audience, not only inside the EU. Within the three-phase development, this last step should be about networking and sharing knowledge between practitioners on available methods, solutions and prototypes, but also, it is meant to help creators to replicate their experiences across cities, rural areas and localities and to influence the new generation of architects and designers. New European Bauhaus prizes. In spring 2021, the European Commission launched New European Bauhaus prizes to reward inspiring examples of realizations fitting the NEB principles. For the first edition of the contest, Commissioners Ferreira and Gabriel awarded 20 projects in a ceremony in Brussels on 16 September 2021. A second edition of NEB prizes is taking place in 2022. The NEB LAB. The NEB LAB, or New European Bauhaus Laboratory, has been established as a meeting space to work with the growing New European Bauhaus community, which comprises more than 450 official partners, High-Level Roundtable members, Contact Points of the national governments, and winners and finalists of the New European Bauhaus prizes. The NEB LAB's main objective is to put the movement's thinking into practice, by co-creating and testing solutions and policy actions, like the development of labeling tools. 
It has started with a "Call for Friends of the New European Bauhaus", in order to get public entities, companies and political organisations involved. The New European Bauhaus Festival. The opening of a New European Bauhaus Festival has been announced by the European Commission to allow visibility for creators, to encourage them to "showcase" their ideas and share their progress, but also to enable networking and to foster citizen engagement. It will stand on three pillars: Fair (presentation of completed projects or products), Fest (the cultural section, with artists and performance) and Forum (debates with innovative participatory formats). Its first edition will take place on 9–12 June 2022 in Brussels. Based on this experience, the commission will draw up a concept for a yearly event that will include places in and outside the EU from 2023 onwards. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\left ( \\frac{0.55-0.25}{1-0.25} \\right ) = 0.40 = 40\\%" } ]
https://en.wikipedia.org/wiki?curid=62693289
62694924
Inverse Pythagorean theorem
Relation between the side lengths and altitude of a right triangle In geometry, the inverse Pythagorean theorem (also known as the reciprocal Pythagorean theorem or the upside down Pythagorean theorem) is as follows: Let A, B be the endpoints of the hypotenuse of a right triangle △"ABC". Let D be the foot of a perpendicular dropped from C, the vertex of the right angle, to the hypotenuse. Then formula_0 This theorem should not be confused with proposition 48 in book 1 of Euclid's "Elements", the converse of the Pythagorean theorem, which states that if the square on one side of a triangle is equal to the sum of the squares on the other two sides then the other two sides contain a right angle. Proof. The area of triangle △"ABC" can be expressed in terms of either AC and BC, or AB and CD: formula_1 given "CD" > 0, "AC" > 0 and "BC" > 0. Using the Pythagorean theorem, formula_2 as above. Note in particular: formula_3 Special case of the cruciform curve. The cruciform curve or cross curve is a quartic plane curve given by the equation formula_4 where the two parameters determining the shape of the curve, a and b, are each equal to CD. Substituting x with AC and y with BC gives formula_5 Inverse-Pythagorean triples can be generated using integer parameters t and u as follows. formula_6 Application. If two identical lamps are placed at A and B, the theorem and the inverse-square law imply that the light intensity at C is the same as when a single lamp is placed at D. References. <templatestyles src="Reflist/styles.css" />
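The parametrization above can be checked with a few lines of code. The following Python sketch (not part of the original article; the function names are invented for illustration) generates inverse-Pythagorean triples from integer parameters t > u > 0 and verifies the relation formula_0 exactly with rational arithmetic.

```python
from fractions import Fraction

def inverse_pythagorean_triple(t: int, u: int):
    """Generate (AC, BC, CD) from integer parameters t > u > 0,
    using the parametrization given in the article."""
    ac = (t**2 + u**2) * (t**2 - u**2)
    bc = 2 * t * u * (t**2 + u**2)
    cd = 2 * t * u * (t**2 - u**2)
    return ac, bc, cd

def satisfies_inverse_pythagoras(ac: int, bc: int, cd: int) -> bool:
    """Check 1/CD^2 == 1/AC^2 + 1/BC^2 exactly, avoiding floating-point error."""
    return Fraction(1, cd**2) == Fraction(1, ac**2) + Fraction(1, bc**2)

if __name__ == "__main__":
    for t in range(2, 6):
        for u in range(1, t):
            ac, bc, cd = inverse_pythagorean_triple(t, u)
            assert satisfies_inverse_pythagoras(ac, bc, cd)
            print(f"t={t}, u={u}: AC={ac}, BC={bc}, CD={cd}")
```

For t = 2 and u = 1 this yields AC = 15, BC = 20, CD = 12, and indeed 1/15² + 1/20² = 25/3600 = 1/144 = 1/12².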
[ { "math_id": 0, "text": " \\frac 1 {CD^2} = \\frac 1 {AC^2} + \\frac 1 {BC^2}." }, { "math_id": 1, "text": "\\begin{align}\n\\tfrac{1}{2} AC \\cdot BC &= \\tfrac{1}{2} AB \\cdot CD \\\\[4pt]\n (AC \\cdot BC)^2 &= (AB \\cdot CD)^2 \\\\[4pt]\n \\frac{1}{CD^2} &= \\frac{AB^2}{AC^2 \\cdot BC^2}\n\\end{align}" }, { "math_id": 2, "text": "\\begin{align}\n\\frac{1}{CD^2} &= \\frac{BC^2 + AC^2}{AC^2 \\cdot BC^2} \\\\[4pt]\n &= \\frac{BC^2}{AC^2 \\cdot BC^2} + \\frac{AC^2}{AC^2 \\cdot BC^2} \\\\[4pt]\n\\quad \\therefore \\;\\; \\frac{1}{CD^2} &= \\frac{ 1 }{AC^2} + \\frac{1}{BC^2}\n\\end{align}" }, { "math_id": 3, "text": "\\begin{align}\n\\tfrac{1}{2} AC \\cdot BC &= \\tfrac{1}{2} AB \\cdot CD \\\\[4pt]\n CD &= \\tfrac{AC \\cdot BC}{AB} \\\\[4pt]\n\\end{align}" }, { "math_id": 4, "text": "x^2 y^2 - b^2 x^2 - a^2 y^2 = 0" }, { "math_id": 5, "text": "\\begin{align}\nAC^2 BC^2 - CD^2 AC^2 - CD^2 BC^2 &= 0 \\\\[4pt]\nAC^2 BC^2 &= CD^2 BC^2 + CD^2 AC^2 \\\\[4pt]\n\\frac{1}{CD^2} &= \\frac{BC^2}{AC^2 \\cdot BC^2} + \\frac{AC^2}{AC^2 \\cdot BC^2} \\\\[4pt]\n\\therefore \\;\\; \\frac{1}{CD^2} &= \\frac{1}{AC^2} + \\frac{1}{BC^2}\n\\end{align}" }, { "math_id": 6, "text": "\\begin{align}\nAC &= (t^2 + u^2)(t^2 - u^2) \\\\\nBC &= 2tu(t^2 + u^2) \\\\\nCD &= 2tu(t^2 - u^2)\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=62694924
6269850
Market domination
Measure of the strength of a brand, product, service or firm Market dominance is the control of an economic market by a firm. A dominant firm possesses the power to affect competition and influence market price. A firm's dominance is a measure of the power of a brand, product, service, or firm, relative to competitive offerings, whereby a dominant firm can behave independently of its competitors or consumers, and without concern for resource allocation. Dominant positioning is both a legal concept and an economic concept, and the distinction between the two is important when determining whether a firm's market position is dominant. Abuse of market dominance is an anti-competitive practice; however, dominance itself is legal. Sources of market dominance. Firms can achieve dominance in their industry through multiple means, such as: First-mover advantages. Many dominant firms are the first "important" competitor in their industry. These firms can achieve short- or long-term advantages over their competitors when they are the first offering in a new industry. First-movers can set a benchmark for competitors and consumers regarding expectations of product and service offering, technology, convenience, quality, or price. These firms are representative of their industry and their brand can become synonymous with the product category itself, such as the company Band-Aid. First-mover advantage is a limited source of market dominance if a firm becomes complacent or fails to keep up with innovation by competitors. Innovation. It is recognised that firms that place greater importance on product innovation often have an advantage over firms that do not. The significant links to game theory are apparent, and in conjunction with empirical evidence, research has attempted to explain whether more dominant or less dominant firms innovate more. Brand Equity. Referring to the value that branding adds over a generic equivalent, brand equity can contribute to gains in market dominance for firms that choose to capitalise on its worth, whether through charging a price premium or other business strategy. Economies of Scale. As firms expand, production becomes more efficient and costs lower. It has been shown empirically several times that there is a clear link between profitability and market share, and thus market dominance. The explicit relationship between economies of scale and market shares has also been explored. Measuring market dominance. Identifying a dominant position involves the use of several factors. The European Commission's Guidance on A102 states that a dominant position is derived from a combination of factors, which taken separately are not determinative. Therefore, it is necessary to consider the constraints imposed by existing supplies from, and the position of, actual competitors, meaning those who are competing with the undertaking in question. This involves looking at the day-to-day downward pressure that keeps product prices low and the market competitive, of which market shares are only useful as a first indication; this needs to be followed by consideration of other factors such as market conditions and dynamics. Market share. There is often a geographic element to the competitive landscape. In defining market dominance, one must see to what extent a product, brand, or firm controls a product category in a given geographic area. There are several ways of measuring market dominance. The most direct is market share. 
This is the percentage of the total market served by a firm or brand. A declining scale of market shares is common in most industries: that is, if the industry leader has say 50% share, the next largest might have 25% share, the next 12% share, the next 6% share, and all remaining firms combined might have 7% share. Market share is not a perfect proxy of market dominance. Although there are no hard and fast rules governing the relationship between market share and market dominance, the following are general criteria: Market shares within an industry might not exhibit a declining scale. There could be only two firms in a duopolistic market, each with 50% share; or there could be three firms in the industry each with 33% share; or 100 firms each with 1% share. The concentration ratio of an industry is used as an indicator of the relative size of leading firms in relation to the industry as a whole. One commonly used concentration ratio is the "four-firm concentration ratio", which consists of the combined market share of the four largest firms, as a percentage, in the total industry. The higher the concentration ratio, the greater the market power of the leading firms. Legally, the determination is often more complex. A case that can be used to define market dominance under EU law is "United Brands v Commission" (the ‘bananas’ case), where the Court of Justice said, 'the dominant position thus referred to by Article [102] relates to a position of economic strength enjoyed by an undertaking which enables it to prevent effective competition being maintained on the relevant market by affording it the power to behave to an appreciable extent independently of its competitors, customers and ultimately of its consumers’. The commission's Guidance suggests that market shares are only a ‘useful first indication’ in the process of assessing market power. Market dominance and monopolies. Market dominance is closely related to the economic concept of competition. Monopolistic power is derived from market share, and thus intertwined with dominance. Whilst a theoretical monopoly will have a single firm supplying the industry, market dominance can describe a situation where multiple firms operate in the market but a single firm has majority control. As economic competition is encouraged, regulation in most countries applies both to monopolistic firms and to firms that hold dominant market positions. In Australia, for example, the Australian Competition & Consumer Commission holds the position that a firm with significant market power (relating to dominant firms, all the way up to firms in a complete monopoly) must not "do anything that has the purpose, effect or likely effect of substantially lessening competition." Relevance of market shares. According to the European Commission, market shares provide a useful first indication of the structure of any market and of the relative importance of the various undertakings active on it. In paragraph 15 of the Guidance on A102, the European Commission state that a high market share over a long period of time can be a preliminary indication of dominance. The International Competition Network stress that determining whether substantial market power is apparent should not be based on market shares alone, but instead on an analysis of all factors affecting the competitive conditions in the market. 
100% market shares are very rare but can arise in niche areas, a close example of this being the 91.8% market share in "Tetra Pak 1 (BTG Licence)", and the 96% market share in plasterboard held by BPB in "BPB Industries Plc v Commission OJ". In "Hoffman-La Roche v Commission", the Court of Justice said that large market shares are ‘evidence of the existence of a dominant position’, which led to the Court of Justice decision in "AKZO v Commission" that where there is a market share of at least 50%, without exceptional circumstances, there will be a presumption of dominance that shifts the burden of proof onto the undertaking. The European Commission has affirmed this threshold in cases since AKZO. For example, in paragraph 100 of the Commission Judgment in the Court of First Instance in "France Telecom v Commission", the Commission state that ‘…very large shares are in themselves, and save in exceptional circumstances, evidence of the existence of a dominant position…’, citing the Court of Justice judgement in AKZO, paragraph 60, ‘…this was so in the case of a 50% market share.’ The European Commission's "Tenth Report on Competition" implies that a significant disparity between the largest and the second-largest firm shares can indicate that the largest firm has a dominant position in the market. Specifically, under a section entitled "Scrutiny of mergers for compatibility with Article 86 EEC," the Report states: A dominant position can generally be said to exist once a market share to the order of 40% to 45% is reached. [footnote: A dominant position cannot even be ruled out in respect of market shares between 20% and 40%; Ninth Report on Competition Policy, point 22.] Although this share does not in itself automatically give control of the market, if there are large gaps between the position of the firm concerned and those of its closest competitors and also other factors likely to place it at an advantage as regards competition, a dominant position may well exist. (European Commission's Tenth Report on Competition, page 103, paragraph 150.) Impact on competitors. Another way of assessing market dominance is by looking at the impact on competitors; market shares alone are even less useful when assessing the competitive pressure that is exerted on an undertaking – i.e. the competition that would come from other firms that are not yet operating on the market but have the capacity to enter it in the near future. Of particular importance here are paragraphs 16 and 17 of the commission's Guidance. Paragraph 16 states that competition is a dynamic process and an assessment of the competitive constraints on an undertaking cannot be based solely on the existing market situation. The potential impact of expansion by actual competitors or entry by potential competitors, including the threat of such expansion or entry, is also relevant. An undertaking can be deterred from increasing prices if expansion or entry is likely, timely and sufficient. For the commission to consider expansion or entry likely, it must be sufficiently profitable for the competitor or entrant, taking into account factors such as the barriers to expansion or entry, the likely reactions of the allegedly dominant undertaking and other competitors, and the risks and costs of failure. The Guidance also states that the constraints imposed by the credible threat of future expansion by actual competitors, or entry by potential competitors, are a required factor of consideration. 
For example, intellectual property in the form of patent protection is a potential legal barrier to entering the market for new businesses, as was shown in "Microsoft Corp". In this case, the Court of Justice confirmed the commission's decision that Microsoft was dominant and had abused its dominant position regarding its refusal to supply the interoperability information for operating PC Windows with other systems. Microsoft was forced to license out its interoperability data. Herfindahl–Hirschman index. There is also the Herfindahl–Hirschman index. It is a measure of the size of firms in relation to the industry and an indicator of the amount of competition among them. It is defined as the sum of the squares of the market shares of each individual firm. As such, it can range from 0 to 10,000, moving from a very large number of very small firms to a single monopolistic producer. Decreases in the Herfindahl–Hirschman index generally indicate a loss of pricing power and an increase in competition, whereas increases imply the opposite. Kwoka's dominance index. Kwoka's "dominance index" (formula_0) is defined as the sum of the squared differences between each firm's share and the next largest share in a market: formula_1 where formula_2 for all formula_3. Other measures of market dominance. As part of its merger review process, the Mexican Competition Commission uses García Alba's "dominance index" (formula_4), described as the Herfindahl–Hirschman index of a Herfindahl–Hirschman index (formula_5). Formally, formula_4 is the sum of squared firm contributions to the market formula_5: formula_6 where formula_7 The "Asymmetry Index" (formula_8) is defined as the statistical variance of market shares: formula_9 Customer power. Countervailing buyer power is something else that should be considered when calculating market dominance. In a market where buyers have more power than suppliers in determining prices or changes in the market, a firm with a high market share may not be able to exercise its power against competitors easily, as it always has to be accountable to the customers that give it its high market share and who are not hesitant to switch product preference to another firm. Such customers will need to have sufficient bargaining strength, which will normally come from their size or their commercial significance in the industry sector. The final point that must be considered is the bargaining strength of the undertaking's customers, also known as the countervailing buyer power. This refers to the competitive constraints that customers may exert where they are large or commercially significant for a dominant firm. However, the commission will not come to a final decision without examining all of the factors which may be relevant to constrain the behavior of the undertaking. Previous findings of dominance cannot be used to calculate dominance, as agreed in the Coca-Cola v Commission [2000] case, where the Court held that the Commission must take a fresh approach to the market conditions each time it adopts a decision in relation to Art 102. Legal definition. There are different perspectives of what indicates dominance and how to go about establishing dominance. One of these is the perspective of the European Commission regarding its application of Article 102 of the Treaty on the Functioning of the European Union (formerly Article 82 of the Treaty establishing the European Community), which deals specifically with the abuse of dominance in the market regarding competition law. 
In its Guidance on A102 Enforcement Priorities, the European Commission equates dominance with the economic concept of substantial market power, which indicates that dominance can be exerted and abused. In paragraph 10 of the Guidance, it is stated that where there is no competitive pressure, an undertaking, which is a legal entity acting in the course of business, is probably able to exercise substantial market power. Furthermore, in paragraph 11, this is developed further, arguing that if an undertaking can increase the price of its products above the competitive level and does not face economic constraints, it is therefore dominant. For example, in basic terms, if two businesses are selling competing products, and one can increase its selling price without suffering an economic consequence such as a boycott of its products or a shift of its customers to a cheaper product, it is dominant. The Guidance is not law; it is instead a set of rules the courts are to follow. However, the same definition can be found elsewhere, in Chapter 3 of the Unilateral Conduct Workbook. The Guidance is also supported by paragraph 65 of the Court of Justice's judgement in "United Brands v Commission": “65. THE DOMINANT POSITION REFERRED TO IN THIS ARTICLE (102) RELATES TO A POSITION OF ECONOMIC STRENGTH ENJOYED BY AN UNDERTAKING WHICH ENABLES IT TO PREVENT EFFECTIVE COMPETITION BEING MAINTAINED ON THE RELEVANT MARKET BY GIVING IT THE POWER TO BEHAVE TO AN APPRECIABLE EXTENT INDEPENDENTLY OF ITS COMPETITORS, CUSTOMERS AND ULTIMATELY OF ITS CONSUMERS.” The relevant product and geographic market must first be identified before shares or an undertaking’s dominance within that market can be calculated. Dominance as an economic concept is determined within EU competition law through a 2-stage process, which first requires the identification of the relevant market, as was established in "Continental Can v Commission". This was affirmed in paragraph 30 of the judgement of "AstraZeneca AB v Commission", in which the Commission stated that it must be assessed whether an undertaking is able to act independently of its competitors, customers and consumers. The relevant product and geographic market is identified through the hypothetical monopolist test, which asks whether a party's customers would switch to an alternative supplier located elsewhere in response to a small relative price increase. Therefore, it is a question of interchangeability and demand substitutability, meaning whether one product can be a substitute for another, and whether an undertaking's market power puts it above price competition. The second stage of the test requires the commission to look at various factors to see if an undertaking enjoys a dominant position on that relevant market. Market Dominance Attractiveness. Why firms want a greater market share is a question with both empirical and theoretical foundations. One of the main driving principles is the firm's profit motive. As research links market share to return on investment, it is expected that firms will choose to follow strategies which lead to increasing market share and a more dominant position in the market. References. <templatestyles src="Reflist/styles.css" />
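As a concrete illustration of the measures discussed above, the sketch below computes the four-firm concentration ratio, the Herfindahl–Hirschman index, Kwoka's dominance index, García Alba's dominance index and the asymmetry index for the declining-scale example given earlier (shares of 50%, 25%, 12%, 6% and a combined 7% remainder, treated here as a single firm for simplicity). This is an illustrative Python sketch, not an official implementation; the function names are invented.

```python
def concentration_ratio(shares, k=4):
    """CR_k: combined share of the k largest firms (shares as fractions of 1)."""
    return sum(sorted(shares, reverse=True)[:k])

def hhi(shares):
    """Herfindahl-Hirschman index on a 0-1 scale; multiply by 10,000
    for the conventional 0-10,000 scale."""
    return sum(s ** 2 for s in shares)

def kwoka_dominance(shares):
    """Kwoka's D: sum of squared gaps between adjacent shares, ranked largest first."""
    s = sorted(shares, reverse=True)
    return sum((s[i] - s[i + 1]) ** 2 for i in range(len(s) - 1))

def garcia_alba_dominance(shares):
    """Garcia Alba's ID: the HHI of the contributions h_i = s_i^2 / HHI."""
    h = hhi(shares)
    return sum((s ** 2 / h) ** 2 for s in shares)

def asymmetry_index(shares):
    """AI: statistical variance of the market shares."""
    n = len(shares)
    return sum((s - 1 / n) ** 2 for s in shares) / n

if __name__ == "__main__":
    shares = [0.50, 0.25, 0.12, 0.06, 0.07]
    print("CR4:", concentration_ratio(shares))        # 0.94
    print("HHI:", round(hhi(shares) * 10_000))        # about 3354
    print("D  :", round(kwoka_dominance(shares), 4))
    print("ID :", round(garcia_alba_dominance(shares), 4))
    print("AI :", round(asymmetry_index(shares), 4))
```

A higher HHI, D or ID in this sketch corresponds to a more concentrated, and potentially more dominated, market.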
[ { "math_id": 0, "text": "\\text{D}" }, { "math_id": 1, "text": "\\text{D} = \\sum_{i=1}^{n-1} (s_i-s_{i+1})^2" }, { "math_id": 2, "text": "s_1 \\ge ... \\ge s_i \\ge s_{i+1} \\ge ... \\ge s_n" }, { "math_id": 3, "text": "i = 1, ..., n - 1" }, { "math_id": 4, "text": "\\text{ID}" }, { "math_id": 5, "text": "\\text{HHI}" }, { "math_id": 6, "text": "\\text{ID} = \\sum_{i=1}^n h_i^2" }, { "math_id": 7, "text": "h_i = \\frac{s_i^2}{\\text{HHI}}." }, { "math_id": 8, "text": "\\text{AI}" }, { "math_id": 9, "text": "\\text{AI}=\\frac{\\sum_{i=1}^n \\left(s_i- {\\frac{1}{n}}\\right)^2}{n}." } ]
https://en.wikipedia.org/wiki?curid=6269850
62698847
Comparison of electric cars
This is a comparison of battery electric vehicles. Charging time per driven distance. The amount of range gained per unit of charging time, the "charging speed", is the ratio of charging power to the vehicle's consumption, and its inverse is the "charging time per driven distance": formula_0 formula_1 The triple-bar equality symbolizes that these measures, equivalent as they are, are both meaningful as instantaneous values, not only as averages. Typically, charging power varies with the state of charge and battery temperature over a charging session. References. <templatestyles src="Reflist/styles.css" />
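The two measures can be expressed directly in code. The following Python sketch is illustrative only; the example power and consumption figures are invented and are not taken from the comparison tables.

```python
def charging_speed_km_per_h(charging_power_kw: float, consumption_kwh_per_km: float) -> float:
    """Range gained per hour of charging: charging power [kW] / consumption [kWh/km]."""
    return charging_power_kw / consumption_kwh_per_km

def charging_time_h_per_km(charging_power_kw: float, consumption_kwh_per_km: float) -> float:
    """Inverse measure: hours of charging needed per kilometre driven."""
    return consumption_kwh_per_km / charging_power_kw

if __name__ == "__main__":
    power_kw = 100.0      # assumed average charging power over a session
    consumption = 0.18    # assumed consumption in kWh/km
    print(f"{charging_speed_km_per_h(power_kw, consumption):.0f} km of range per hour of charging")
    print(f"{charging_time_h_per_km(power_kw, consumption) * 3600:.1f} seconds of charging per km")
```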
[ { "math_id": 0, "text": "Charging\\ speed\\ [km/h] \\equiv \\frac{charging\\ power\\ [kW]}{consumption\\ [kWh/km]}" }, { "math_id": 1, "text": "\\frac{Charging\\ time}{driven\\ distance} [h/km] \\equiv \\frac{consumption\\ [kWh/km]}{charging\\ power\\ [kW]}" } ]
https://en.wikipedia.org/wiki?curid=62698847
62704378
Esther 4
A chapter in the Book of Esther Esther 4 is the fourth chapter of the Book of Esther in the Hebrew Bible or the Old Testament of the Christian Bible. The author of the book is unknown and modern scholars have established that the final stage of the Hebrew text would have been formed by the second century BCE. Chapters 3 to 8 contain the nine scenes that form the complication in the book. This chapter describes the reaction of the Jews to Haman's evil decree, focusing on Mordecai's action of mourning and fasting, which eventually forced Esther to take action on her own by risking her life to appear uninvited before King Ahasuerus. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 17 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Mordecai hears the threat of genocide against the Jews (4:1–3). When they heard the threat of genocide, Mordecai and the Jews throughout the Persian empire showed religious response publicly, although without referring to God. "When Mordecai perceived all that was done, Mordecai rent his clothes, and put on sackcloth with ashes, and went out into the midst of the city, and cried with a loud and a bitter cry;" "And in every province, wherever the king's command and his decree reached, there was great mourning among the Jews, with fasting and weeping and lamenting, and many of them lay in sackcloth and ashes." Mordecai impresses on Esther the need for action (4:4–17). This section records the communication between Mordecai and Esther, which passed through three stages: These stages represent a movement in Esther from ignorance to understanding to decision: Esther eventually took charge and Mordecai went to do 'everything that Esther had commanded him'. [Mordecai said:] "For if you remain completely silent at this time, relief and deliverance will arise for the Jews from another place, but you and your father’s house will perish. Yet who knows whether you have come to the kingdom for such a time as this?" Verse 14. Mordecai's statement assumes the existence of a providential order, although God is not mentioned by name at all. Some ancient references (Josephus; "Targum Esther I, II") understand the Hebrew word "mā-qōm" ("place") as a circumlocution for God. [Esther said:] "Go, gather all the Jews who are present in Shushan, and fast for me; neither eat nor drink for three days, night or day. My maids and I will fast likewise. And so I will go to the king, which is against the law; and if I perish, I perish!" Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62704378
62704502
Stratification (water)
Layering of a body of water due to density variations Stratification in water is the formation in a body of water of relatively distinct and stable layers by density. It occurs in all water bodies where there is stable density variation with depth. Stratification is a barrier to the vertical mixing of water, which affects the exchange of heat, carbon, oxygen and nutrients. Wind-driven upwelling and downwelling of open water can induce mixing of different layers through the stratification, and force the rise of denser cold, nutrient-rich, or saline water and the sinking of lighter warm or fresher water, respectively. Layers are based on water density: denser water remains below less dense water in stable stratification in the absence of forced mixing. Stratification occurs in several kinds of water bodies, such as oceans, lakes, estuaries, flooded caves, aquifers and some rivers. Mechanism. The driving force in stratification is gravity, which sorts adjacent arbitrary volumes of water by local density, operating on them by buoyancy and weight. A volume of water of lower density than the surroundings will have a resultant buoyant force lifting it upwards, and a volume with higher density will be pulled down by its weight, which will be greater than the buoyant force, following Archimedes' principle. Each volume will rise or sink until it either mixes with its surroundings through turbulence and diffusion to match their density, reaches a depth where it has the same density as the surroundings, or reaches the top or bottom boundary of the body of water and spreads out, until the forces are balanced and the body of water reaches its lowest potential energy. The density of water, which is defined as mass per unit of volume, is a function of temperature (formula_0), salinity (formula_1) and pressure (formula_2), where pressure is itself a function of depth and the density distribution of the overlying water column; the density is denoted as formula_3. The dependence on pressure is not significant, since water is almost perfectly incompressible. An increase in the temperature of the water above 4°C causes expansion, and the density decreases. Water expands when it freezes, and a decrease in temperature below 4°C also causes expansion and a decrease in density. An increase in salinity, the mass of dissolved solids, will increase the density. Density is the decisive factor in stratification. It is possible for a combination of temperature and salinity to result in a density that is less or more than the effect of either one in isolation, so it can happen that a layer of warmer saline water is layered between a colder fresher surface layer and a colder more saline deeper layer. A pycnocline is a layer in a body of water where the change in density is relatively large compared to that of other layers. The thickness of the pycnocline is not constant everywhere and depends on a variety of variables. Just like a pycnocline is a layer with a large change in density with depth, similar layers can be defined for a large change in temperature, a thermocline, and salinity, a halocline. Since the density depends on both the temperature and the salinity, the pycno-, thermo-, and haloclines have a similar shape. Mixing. Mixing is the breakdown of stratification. 
Once a body of water has reached a stable state of stratification, and no external forces or energy are applied, it will slowly mix by diffusion until homogeneous in density, temperature and composition, varying only due to minor effects of compressibility. This does not usually occur in nature, where there are a variety of external influences to maintain or disturb the equilibrium. Among these are heat input from the sun, which warms the upper volume, making it expand slightly and decreasing the density, so this tends to increase or stabilise stratification. Heat input from below, as occurs from tectonic plate spreading and vulcanism is a disturbing influence, causing heated water to rise, but these are usually local effects and small compared to the effects of wind, heat loss and evaporation from the free surface, and changes of direction of currents. Wind has the effects of generating wind waves and wind currents, and increasing evaporation at the surface, which has a cooling effect and a concentrating effect on solutes, increasing salinity, both of which increase density. The movement of waves creates some shear in the water, which increases mixing in the surface water, as does the development of currents. Mass movement of water between latitudes is affected by coriolis forces, which impart motion across the current direction, and movement towards or away from a land mass or other topographic obstruction may leave a deficit or excess which lowers or raises the sea level locally, driving upwelling and downwelling to compensate. The major upwellings in the ocean are associated with the divergence of currents that bring deeper waters to the surface. There are at least five types of upwelling: coastal upwelling, large-scale wind-driven upwelling in the ocean interior, upwelling associated with eddies, topographically-associated upwelling, and broad-diffusive upwelling in the ocean interior. Downwelling also occurs in anti-cyclonic regions of the ocean where warm rings spin clockwise, causing surface convergence. When these surface waters converge, the surface water is pushed downwards. These mixing effects destabilise and reduce stratification. By water body type. Oceans. Ocean stratification is the natural separation of an ocean's water into horizontal layers by density, and occurs in all ocean basins. Denser water is below lighter water, representing a stable stratification. The pycnocline is the layer where the rate of change in density is largest. Ocean stratification is generally stable because warmer water is less dense than colder water, and most heating is from the sun, which directly affects only the surface layer. Stratification is reduced by mechanical mixing induced by wind, but reinforced by convection (warm water rising, cold water sinking). Stratified layers act as a barrier to the mixing of water, which impacts the exchange of heat, carbon, oxygen and other nutrients. The surface mixed layer is the uppermost layer in the ocean and is well mixed by mechanical (wind) and thermal (convection) effects. Due to wind driven movement of surface water away from and towards land masses, upwelling and downwelling can occur, breaking through the stratification in those areas, where cold nutrient-rich water rises and warm water sinks, respectively, mixing surface and bottom waters. The thickness of the thermocline is not constant everywhere and depends on a variety of variables. 
Between 1960 and 2018, upper ocean stratification increased between 0.7-1.2% per decade due to climate change. This means that the differences in density of the layers in the oceans increase, leading to larger mixing barriers and other effects. Global upper-ocean stratification has continued its increasing trend in 2022. The southern oceans (south of 30°S) experienced the strongest rate of stratification since 1960, followed by the Pacific, Atlantic, and the Indian Oceans. Increasing stratification is predominantly affected by changes in ocean temperature; salinity only plays a role locally. Estuaries. An estuary is a partially enclosed coastal body of brackish water with one or more rivers or streams flowing into it, and with a free connection to the open sea. The residence time of water in an estuary is dependent on the circulation within the estuary that is driven by density differences due to changes in salinity and temperature. Less dense freshwater floats over saline water and warmer water floats above colder water for temperatures greater than 4°C. As a result, near-surface and near-bottom waters can have different trajectories, resulting in different residence times. "Vertical mixing" determines how much the salinity and temperature will change from the top to the bottom, profoundly affecting water circulation. Vertical mixing occurs at three levels: from the surface downward by wind forces, the bottom upward by turbulence generated at the interface between the estuarine and oceanic water masses, and internally by turbulent mixing caused by the water currents which are driven by the tides, wind, and river inflow. Different types of estuarine circulation result from vertical mixing: Salt wedge estuaries are characterized by a sharp density interface between the upper layer of freshwater and the bottom layer of saline water. River water dominates in this system, and tidal effects have a small role in the circulation patterns. The freshwater floats on top of the seawater and gradually thins as it moves seaward. The denser seawater moves along the bottom up the estuary forming a wedge shaped layer and becoming thinner as it moves landward. As a velocity difference develops between the two layers, shear forces generate internal waves at the interface, mixing the seawater upward with the freshwater. An example is the Mississippi estuary. As tidal forcing increases, the control of river flow on the pattern of circulation in the estuary becomes less dominating. Turbulent mixing induced by the current creates a moderately stratified condition. Turbulent eddies mix the water column, creating a mass transfer of freshwater and seawater in both directions across the density boundary. Therefore, the interface separating the upper and lower water masses is replaced with a water column with a gradual increase in salinity from surface to bottom. A two layered flow still exists however, with the maximum salinity gradient at mid depth. Partially stratified estuaries are typically shallow and wide, with a greater width to depth ratio than salt wedge estuaries. An example is the Thames. In vertically homogeneous estuaries, tidal flow is greater relative to river discharge, resulting in a well mixed water column and the disappearance of the vertical salinity gradient. The freshwater-seawater boundary is eliminated due to the intense turbulent mixing and eddy effects. 
The width to depth ratio of vertically homogeneous estuaries is large, with the limited depth creating enough vertical shearing on the seafloor to mix the water column completely. If tidal currents at the mouth of an estuary are strong enough to create turbulent mixing, vertically homogeneous conditions often develop. Fjords are usually examples of highly stratified estuaries; they are basins with sills and have freshwater inflow that greatly exceeds evaporation. Oceanic water is imported in an intermediate layer and mixes with the freshwater. The resulting brackish water is then exported into the surface layer. A slow import of seawater may flow over the sill and sink to the bottom of the fjord (deep layer), where the water remains stagnant until flushed by an occasional storm. Inverse estuaries occur in dry climates where evaporation greatly exceeds the inflow of freshwater. A salinity maximum zone is formed, and both riverine and oceanic water flow close to the surface towards this zone. This water is pushed downward and spreads along the bottom in both the seaward and landward direction. The maximum salinity can reach extremely high values and the residence time can be several months. In these systems, the salinity maximum zone acts like a plug, inhibiting the mixing of estuarine and oceanic waters so that freshwater does not reach the ocean. The high salinity water sinks seaward and exits the estuary. Lakes. Lake stratification, generally a form of thermal stratification caused by density variations due to water temperature, is the formation of separate and distinct layers of water during warm weather, and sometimes when frozen over. Typically stratified lakes show three distinct layers, the epilimnion comprising the top warm layer, the thermocline (or metalimnion): the middle layer, which may change depth throughout the day, and the colder hypolimnion extending to the floor of the lake. The thermal stratification of lakes is a vertical isolation of parts of the water body from mixing caused by variation in the temperature at different depths in the lake, and is due to the density of water varying with temperature. Cold water is denser than warm water of the same salinity, and the epilimnion generally consists of water that is not as dense as the water in the hypolimnion. However, the temperature of maximum density for freshwater is 4 °C. In temperate regions where lake water warms up and cools through the seasons, a cyclical pattern of overturn occurs that is repeated from year to year as the water at the top of the lake cools and sinks (see stable and unstable stratification). For example, in dimictic lakes the lake water turns over during the spring and the fall. This process occurs more slowly in deeper water and as a result, a thermal bar may form. If the stratification of water lasts for extended periods, the lake is meromictic. In shallow lakes, stratification into epilimnion, metalimnion, and hypolimnion often does not occur, as wind or cooling causes regular mixing throughout the year. These lakes are called polymictic. There is not a fixed depth that separates polymictic and stratifying lakes, as apart from depth, this is also influenced by turbidity, lake surface area, and climate. The lake mixing regime (e.g. polymictic, dimictic, meromictic) describes the yearly patterns of lake stratification that occur in most years. However, short-term events can influence lake stratification as well. 
Heat waves can cause periods of stratification in otherwise mixed, shallow lakes, while mixing events, such as storms or large river discharge, can break down stratification. Recent research suggests that seasonally ice-covered dimictic lakes may be described as "cryostratified" or "cryomictic" according to their wintertime stratification regimes. Cryostratified lakes exhibit inverse stratification near the ice surface and have depth-averaged temperatures near 4 °C, while cryomictic lakes have no under-ice thermocline and have depth-averaged winter temperatures closer to 0 °C. Anchialine systems. An anchialine system is a landlocked body of water with a subterranean connection to the ocean. Depending on their formation, these systems can exist in one of two primary forms: pools or caves. The primary differentiating characteristic between pools and caves is the availability of light; cave systems are generally aphotic while pools are euphotic. The difference in light availability has a large influence on the biology of a given system. Anchialine systems are a feature of coastal aquifers which are density stratified, with water near the surface being fresh or brackish, and saline water intruding from the coast at depth. Depending on the site, it is sometimes possible to access the deeper saline water directly in the anchialine pool, or sometimes it may be accessible by cave diving. Anchialine systems are extremely common worldwide, especially along neotropical coastlines where the geology and aquifer systems are relatively young, and there is minimal soil development. Such conditions occur notably where the bedrock is limestone or recently formed volcanic lava. Many anchialine systems are found on the coastlines of the island of Hawaii, the Yucatán Peninsula, South Australia, the Canary Islands, Christmas Island, and other karst and volcanic systems. Karst caves which drain into the sea may have a halocline separating the fresh water from the seawater underneath, which can be visible even when both layers are clear due to the difference in refractive indices. References. <templatestyles src="Reflist/styles.css" />
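The stability rule described in the Mechanism section above — denser water sits below less dense water — can be illustrated numerically. The sketch below uses a linearized equation of state with typical seawater coefficients; this simplification is an assumption of the example (it ignores pressure and does not reproduce the 4 °C density maximum of fresh water) and is not a formula given in the article.

```python
RHO_0 = 1027.0    # reference density [kg/m^3] (assumed typical seawater value)
ALPHA = 2.0e-4    # assumed thermal expansion coefficient [1/degC]
BETA = 7.6e-4     # assumed haline contraction coefficient [kg/g]
T_REF, S_REF = 10.0, 35.0   # reference temperature [degC] and salinity [g/kg]

def density(temperature, salinity):
    """Simplified density: decreases with temperature, increases with salinity."""
    return RHO_0 * (1.0 - ALPHA * (temperature - T_REF) + BETA * (salinity - S_REF))

def is_stably_stratified(upper, lower):
    """Stable stratification: the lower layer is at least as dense as the upper layer."""
    return density(*lower) >= density(*upper)

if __name__ == "__main__":
    warm_fresh_surface = (18.0, 34.5)   # (temperature degC, salinity g/kg)
    cold_salty_deep = (4.0, 35.0)
    print("surface density:", round(density(*warm_fresh_surface), 2))  # about 1024.97
    print("deep density   :", round(density(*cold_salty_deep), 2))     # about 1028.23
    print("stable:", is_stably_stratified(warm_fresh_surface, cold_salty_deep))  # True
```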
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "\\rho(S, T, p)" } ]
https://en.wikipedia.org/wiki?curid=62704502
6271
Chemical reaction
Process that results in the interconversion of chemical species A chemical reaction is a process that leads to the chemical transformation of one set of chemical substances to another. When chemical reactions occur, the atoms are rearranged and the reaction is accompanied by an energy change as new products are generated. Classically, chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei (no change to the elements present), and can often be described by a chemical equation. Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes can occur. The substance (or substances) initially involved in a chemical reaction are called reactants or reagents. Chemical reactions are usually characterized by a chemical change, and they yield one or more products, which usually have properties different from the reactants. Reactions often consist of a sequence of individual sub-steps, the so-called elementary reactions, and the information on the precise course of action is part of the reaction mechanism. Chemical reactions are described with chemical equations, which symbolically present the starting materials, end products, and sometimes intermediate products and reaction conditions. Chemical reactions happen at a characteristic reaction rate at a given temperature and chemical concentration. Some reactions produce heat and are called exothermic reactions, while others may require heat to enable the reaction to occur, which are called endothermic reactions. Typically, reaction rates increase with increasing temperature because there is more thermal energy available to reach the activation energy necessary for breaking bonds between atoms. A reaction may be classified as redox in which oxidation and reduction occur or non-redox in which there is no oxidation and reduction occurring. Most simple redox reactions may be classified as a combination, decomposition, or single displacement reaction. Different chemical reactions are used during chemical synthesis in order to obtain the desired product. In biochemistry, a consecutive series of chemical reactions (where the product of one reaction is the reactant of the next reaction) form metabolic pathways. These reactions are often catalyzed by protein enzymes. Enzymes increase the rates of biochemical reactions, so that metabolic syntheses and decompositions impossible under ordinary conditions can occur at the temperature and concentrations present within a cell. The general concept of a chemical reaction has been extended to reactions between entities smaller than atoms, including nuclear reactions, radioactive decays and reactions between elementary particles, as described by quantum field theory. History. Chemical reactions such as combustion in fire, fermentation and the reduction of ores to metals were known since antiquity. Initial theories of transformation of materials were developed by Greek philosophers, such as the Four-Element Theory of Empedocles stating that any substance is composed of the four basic elements – fire, water, air and earth. In the Middle Ages, chemical transformations were studied by alchemists. They attempted, in particular, to convert lead into gold, for which purpose they used reactions of lead and lead-copper alloys with sulfur. 
The artificial production of chemical substances was already a central goal for medieval alchemists. Examples include the synthesis of ammonium chloride from organic substances as described in the works (c. 850–950) attributed to Jābir ibn Ḥayyān, or the production of mineral acids such as sulfuric and nitric acids by later alchemists, starting from c. 1300. The production of mineral acids involved the heating of sulfate and nitrate minerals such as copper sulfate, alum and saltpeter. In the 17th century, Johann Rudolph Glauber produced hydrochloric acid and sodium sulfate by reacting sulfuric acid and sodium chloride. With the development of the lead chamber process in 1746 and the Leblanc process, allowing large-scale production of sulfuric acid and sodium carbonate, respectively, chemical reactions became implemented on an industrial scale. Further optimization of sulfuric acid technology resulted in the contact process in the 1880s, and the Haber process was developed in 1909–1910 for ammonia synthesis. From the 16th century, researchers including Jan Baptist van Helmont, Robert Boyle, and Isaac Newton tried to establish theories of experimentally observed chemical transformations. The phlogiston theory was proposed in 1667 by Johann Joachim Becher. It postulated the existence of a fire-like element called "phlogiston", which was contained within combustible bodies and released during combustion. This theory was proven false in 1785 by Antoine Lavoisier, who found the correct explanation of combustion as a reaction with oxygen from the air. Joseph Louis Gay-Lussac recognized in 1808 that gases always react in a certain relationship with each other. Based on this idea and the atomic theory of John Dalton, Joseph Proust had developed the law of definite proportions, which later resulted in the concepts of stoichiometry and chemical equations. Regarding organic chemistry, it was long believed that compounds obtained from living organisms were too complex to be obtained synthetically. According to the concept of vitalism, organic matter was endowed with a "vital force" and distinguished from inorganic materials. This separation was ended, however, by the synthesis of urea from inorganic precursors by Friedrich Wöhler in 1828. Other chemists who made major contributions to organic chemistry include Alexander William Williamson with his synthesis of ethers and Christopher Kelk Ingold, who, among many discoveries, established the mechanisms of substitution reactions. Characteristics. The general characteristics of chemical reactions are: Equations. Chemical equations are used to graphically illustrate chemical reactions. They consist of chemical or structural formulas of the reactants on the left and those of the products on the right. They are separated by an arrow (→) which indicates the direction and type of the reaction; the arrow is read as the word "yields". The tip of the arrow points in the direction in which the reaction proceeds. A double arrow (⇌) pointing in opposite directions is used for equilibrium reactions. Equations should be balanced according to the stoichiometry: the number of atoms of each species should be the same on both sides of the equation. This is achieved by scaling the number of involved molecules (A, B, C and D in a schematic example below) by the appropriate integers "a, b, c" and "d". More elaborate reactions are represented by reaction schemes, which in addition to starting materials and products show important intermediates or transition states.
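As a concrete illustration of balancing by stoichiometric coefficients, the sketch below finds the integers by solving a linear system; the choice of methane combustion as the example and the use of SymPy's nullspace routine are illustrative assumptions, not part of the article.

```python
from math import lcm          # Python 3.9+
from sympy import Matrix

# Columns play the role of the coefficients a, b, c, d for the species
# CH4, O2, CO2, H2O; each row counts one element (C, H, O).  Product
# columns carry a minus sign, so M * coeffs = 0 expresses that every
# element is conserved between the two sides of the equation.
M = Matrix([
    [1, 0, -1,  0],   # carbon
    [4, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
])

v = M.nullspace()[0]                   # one-dimensional null space here
scale = lcm(*[term.q for term in v])   # clear denominators (term.q is the denominator)
coeffs = [int(term * scale) for term in v]
print(coeffs)  # [1, 2, 1, 2]  ->  CH4 + 2 O2 -> CO2 + 2 H2O
```

For reactions whose null space has dimension greater than one, the balancing coefficients are not unique and additional chemical information is needed to single out a particular set.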
Also, some relatively minor additions to the reaction can be indicated above the reaction arrow; examples of such additions are water, heat, illumination, a catalyst, etc. Similarly, some minor products can be placed below the arrow, often with a minus sign. Retrosynthetic analysis can be applied to design a complex synthesis reaction. Here the analysis starts from the products, for example by splitting selected chemical bonds, to arrive at plausible initial reagents. A special arrow (⇒) is used in retro reactions. Elementary reactions. The elementary reaction is the smallest division into which a chemical reaction can be decomposed; it has no intermediate products. Most experimentally observed reactions are built up from many elementary reactions that occur in parallel or sequentially. The actual sequence of the individual elementary reactions is known as the reaction mechanism. An elementary reaction involves a few molecules, usually one or two, because of the low probability for several molecules to meet at a certain time. The most important elementary reactions are unimolecular and bimolecular reactions. Only one molecule is involved in a unimolecular reaction; it is transformed by isomerization or a dissociation into one or more other molecules. Such reactions require the addition of energy in the form of heat or light. A typical example of a unimolecular reaction is the cis–trans isomerization, in which the cis-form of a compound converts to the trans-form or vice versa. In a typical dissociation reaction, a bond in a molecule splits (ruptures) resulting in two molecular fragments. The splitting can be homolytic or heterolytic. In the first case, the bond is divided so that each product retains an electron and becomes a neutral radical. In the second case, both electrons of the chemical bond remain with one of the products, resulting in charged ions. Dissociation plays an important role in triggering chain reactions, such as hydrogen–oxygen or polymerization reactions. &lt;chem&gt;AB -&gt; A + B&lt;/chem&gt; For bimolecular reactions, two molecules collide and react with each other. Their merger is called chemical synthesis or an addition reaction. &lt;chem&gt;A + B -&gt; AB&lt;/chem&gt; Another possibility is that only a portion of one molecule is transferred to the other molecule. This type of reaction occurs, for example, in redox and acid-base reactions. In redox reactions, the transferred particle is an electron, whereas in acid-base reactions it is a proton. This type of reaction is also called metathesis. &lt;chem&gt;HA + B -&gt; A + HB&lt;/chem&gt; for example &lt;chem&gt;NaCl + AgNO3 -&gt; NaNO3 + AgCl(v)&lt;/chem&gt; Chemical equilibrium. Most chemical reactions are reversible; that is, they can and do run in both directions. The forward and reverse reactions are competing with each other and differ in reaction rates. These rates depend on the concentration and therefore change with the time of the reaction: the reverse rate gradually increases and becomes equal to the rate of the forward reaction, establishing the so-called chemical equilibrium. The time to reach equilibrium depends on parameters such as temperature, pressure, and the materials involved; the equilibrium state itself is determined by the minimum of the free energy. In equilibrium, the Gibbs free energy of reaction must be zero. The pressure dependence can be explained with Le Chatelier's principle. For example, an increase in pressure due to decreasing volume causes the reaction to shift to the side with fewer moles of gas.
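The approach to equilibrium from competing forward and reverse rates, as described above, can be illustrated with a minimal numerical sketch; the reversible reaction A ⇌ B and the rate constants used here are assumed purely for illustration.

```python
# First-order reversible reaction A <=> B with forward and reverse
# rate constants kf and kr (arbitrary illustrative units).
kf, kr = 2.0, 0.5
A, B = 1.0, 0.0          # initial concentrations
dt = 1e-3                # explicit Euler time step

for _ in range(20_000):  # integrate to t = 20, long after equilibration
    net = kf * A - kr * B    # net forward rate; zero at equilibrium
    A -= net * dt
    B += net * dt

print(A, B)              # approx 0.2 and 0.8
print(B / A, kf / kr)    # at equilibrium [B]/[A] approaches kf/kr = 4
```

The steady concentrations are reached when the forward and reverse rates balance, so the equilibrium ratio is fixed by the ratio of the two rate constants rather than by either one alone.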
The reaction yield stabilizes at equilibrium but can be increased by removing the product from the reaction mixture or changed by increasing the temperature or pressure. A change in the concentrations of the reactants does not affect the equilibrium constant but does affect the equilibrium position. Thermodynamics. Chemical reactions are determined by the laws of thermodynamics. Reactions can proceed by themselves if they are exergonic, that is if they release free energy. The associated free energy change of the reaction is composed of the changes of two different thermodynamic quantities, enthalpy and entropy: formula_0. Reactions can be exothermic, where Δ"H" is negative and energy is released. Typical examples of exothermic reactions are combustion, precipitation and crystallization, in which ordered solids are formed from disordered gaseous or liquid phases. In contrast, in endothermic reactions, heat is consumed from the environment. This can occur by increasing the entropy of the system, often through the formation of gaseous or dissolved reaction products, which have higher entropy. Since the entropy term in the free-energy change increases with temperature, many endothermic reactions preferentially take place at high temperatures. On the contrary, many exothermic reactions such as crystallization occur preferentially at lower temperatures. A change in temperature can sometimes reverse the sign of the enthalpy of a reaction, as for the carbon monoxide reduction of molybdenum dioxide: &lt;chem&gt;2CO(g) + MoO2(s) -&gt; 2CO2(g) + Mo(s)&lt;/chem&gt; (formula_1). This reaction to form carbon dioxide and molybdenum is endothermic at low temperatures, becoming less so with increasing temperature. Δ"H"° is zero at , and the reaction becomes exothermic above that temperature. Changes in temperature can also reverse the direction tendency of a reaction. For example, the water gas shift reaction &lt;chem&gt;CO(g) + H2O({v}) &lt;=&gt; CO2(g) + H2(g)&lt;/chem&gt; is favored by low temperatures, but its reverse is favored by high temperatures. The shift in reaction direction tendency occurs at . Reactions can also be characterized by their internal energy change, which takes into account changes in the entropy, volume and chemical potentials: formula_2. The latter depends, among other things, on the activities of the involved substances. Kinetics. The speed at which reactions take place is studied by reaction kinetics. The rate depends on various parameters, such as reactant concentrations, the surface area available for contact between the reactants, pressure, activation energy, temperature, and the presence or absence of a catalyst. Several theories allow calculating the reaction rates at the molecular level. This field is referred to as reaction dynamics. The rate "v" of a first-order reaction, which could be the disintegration of a substance A, is given by: formula_3 Its integration yields: formula_4 Here "k" is the first-order rate constant, having dimension 1/time, [A]("t") is the concentration at a time "t" and [A]0 is the initial concentration. The rate of a first-order reaction depends only on the concentration and the properties of the involved substance, and the reaction itself can be described with a characteristic half-life. More than one time constant is needed when describing reactions of higher order. The temperature dependence of the rate constant usually follows the Arrhenius equation: formula_5 where "E"a is the activation energy and "k"B is the Boltzmann constant. One of the simplest models of reaction rate is the collision theory.
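To illustrate the integrated first-order rate law and the Arrhenius temperature dependence quoted above, here is a small numerical sketch; the pre-exponential factor k0 and the activation energy Ea are assumed, purely illustrative values, and the per-molecule form of the Arrhenius equation with the Boltzmann constant is used, following the formula in the article.

```python
import math

k0 = 1.0e13            # assumed pre-exponential factor, 1/s
Ea = 8.0e-20           # assumed activation energy, J per molecule (about 48 kJ/mol)
kB = 1.380649e-23      # Boltzmann constant, J/K

def rate_constant(T):
    """Arrhenius equation k = k0 * exp(-Ea / (kB * T))."""
    return k0 * math.exp(-Ea / (kB * T))

def concentration(A0, k, t):
    """Integrated first-order rate law [A](t) = [A]0 * exp(-k * t)."""
    return A0 * math.exp(-k * t)

for T in (250.0, 300.0, 350.0):
    k = rate_constant(T)
    half_life = math.log(2) / k
    print(f"T = {T:.0f} K: k = {k:.2e} 1/s, half-life = {half_life:.2e} s, "
          f"[A](half-life)/[A]0 = {concentration(1.0, k, half_life):.2f}")
```

Running the loop shows the rate constant growing by orders of magnitude between 250 K and 350 K, which is the quantitative content of the statement that reaction rates typically increase with temperature.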
More realistic models are tailored to a specific problem and include the transition state theory, the calculation of the potential energy surface, the Marcus theory and the Rice–Ramsperger–Kassel–Marcus (RRKM) theory. Reaction types. Four basic types. Synthesis. In a synthesis reaction, two or more simple substances combine to form a more complex substance. These reactions are in the general form: &lt;chem display="block"&gt;A + B-&gt;AB&lt;/chem&gt; Two or more reactants yielding one product is another way to identify a synthesis reaction. One example of a synthesis reaction is the combination of iron and sulfur to form iron(II) sulfide: &lt;chem display="block"&gt;8Fe + S8-&gt;8FeS&lt;/chem&gt; Another example is simple hydrogen gas combined with simple oxygen gas to produce a more complex substance, such as water. Decomposition. In a decomposition reaction, a more complex substance breaks down into its simpler parts. It is thus the opposite of a synthesis reaction and can be written as &lt;chem display="block"&gt;AB-&gt;A + B&lt;/chem&gt; One example of a decomposition reaction is the electrolysis of water to make oxygen and hydrogen gas: &lt;chem display="block"&gt;2H2O-&gt;2H2 + O2&lt;/chem&gt; Single displacement. In a single displacement reaction, a single uncombined element replaces another in a compound; in other words, one element trades places with another element in a compound. These reactions come in the general form of: &lt;chem display="block"&gt;A + BC-&gt;AC + B&lt;/chem&gt; One example of a single displacement reaction is when magnesium replaces hydrogen in water to make solid magnesium hydroxide and hydrogen gas: &lt;chem display="block"&gt;Mg + 2H2O-&gt;Mg(OH)2 (v) + H2 (^)&lt;/chem&gt; Double displacement. In a double displacement reaction, the anions and cations of two compounds switch places and form two entirely different compounds. These reactions are in the general form: &lt;chem display="block"&gt;AB + CD-&gt;AD + CB&lt;/chem&gt; For example, when barium chloride (BaCl2) and magnesium sulfate (MgSO4) react, the SO42− anion switches places with the two Cl− anions, giving the compounds BaSO4 and MgCl2. Another example of a double displacement reaction is the reaction of lead(II) nitrate with potassium iodide to form lead(II) iodide and potassium nitrate: &lt;chem display="block"&gt;Pb(NO3)2 + 2KI-&gt;PbI2(v) + 2KNO3&lt;/chem&gt; Forward and backward reactions. According to Le Chatelier's Principle, reactions may proceed in the forward or reverse direction until they end or reach equilibrium. Forward reactions. Reactions that proceed in the forward direction to approach equilibrium are often called spontaneous reactions, that is, formula_6 is negative, which means that if they occur at constant temperature and pressure, they decrease the Gibbs free energy of the reaction. They do not require much energy to proceed in the forward direction. Most reactions are forward reactions. Examples: 2H2 + O2 ⇌ 2H2O and CH3COOH + H2O ⇌ CH3COO- + H3O+. Backward reactions. Reactions that proceed in the backward direction to approach equilibrium are often called non-spontaneous reactions, that is, formula_6 is positive, which means that if they occur at constant temperature and pressure, they increase the Gibbs free energy of the reaction. They require input of energy to proceed in the forward direction. Examples include photosynthesis: CO2 (carbon dioxide) + H2O (water) + photons (light energy) → [CH2O] (carbohydrate) + O2 (oxygen). Combustion.
In a combustion reaction, an element or compound reacts with an oxidant, usually oxygen, often producing energy in the form of heat or light. Combustion reactions frequently involve a hydrocarbon. For instance, the combustion of 1 mole (114 g) of octane in oxygen &lt;chem display="block"&gt;C8H18(l) + 25/2 O2(g)-&gt;8CO2 + 9H2O(l)&lt;/chem&gt; releases 5500 kJ. A combustion reaction can also result from carbon, magnesium or sulfur reacting with oxygen. &lt;chem display="block"&gt;2Mg(s) + O2-&gt;2MgO(s)&lt;/chem&gt; &lt;chem display="block"&gt;S(s) + O2(g)-&gt;SO2(g)&lt;/chem&gt; Oxidation and reduction. Redox reactions can be understood in terms of the transfer of electrons from one involved species (reducing agent) to another (oxidizing agent). In this process, the former species is "oxidized" and the latter is "reduced". Though sufficient for many purposes, these descriptions are not precisely correct. Oxidation is better defined as an increase in oxidation state of atoms and reduction as a decrease in oxidation state. In practice, the transfer of electrons will always change the oxidation state, but there are many reactions that are classed as "redox" even though no electron transfer occurs (such as those involving covalent bonds). In the following redox reaction, hazardous sodium metal reacts with toxic chlorine gas to form the ionic compound sodium chloride, or common table salt: &lt;chem display="block"&gt;2Na(s) + Cl2(g)-&gt;2NaCl(s)&lt;/chem&gt; In the reaction, sodium metal goes from an oxidation state of 0 (a pure element) to +1: in other words, the sodium lost one electron and is said to have been oxidized. On the other hand, the chlorine gas goes from an oxidation state of 0 (also a pure element) to −1: the chlorine gains one electron and is said to have been reduced. Because the chlorine is the one reduced, it is considered the electron acceptor, or in other words, induces oxidation in the sodium – thus the chlorine gas is considered the oxidizing agent. Conversely, the sodium is oxidized or is the electron donor, and thus induces a reduction in the other species and is considered the "reducing agent". Which of the involved reactants would be a reducing or oxidizing agent can be predicted from the electronegativity of their elements. Elements with low electronegativities, such as most metals, easily donate electrons and oxidize – they are reducing agents. On the contrary, many oxides or ions with high oxidation numbers of their non-oxygen atoms, such as H2O2, MnO4-, CrO3, Cr2O72-, or OsO4, can gain one or two extra electrons and are strong oxidizing agents. For some main-group elements the number of electrons donated or accepted in a redox reaction can be predicted from the electron configuration of the reactant element. Elements try to reach the low-energy noble gas configuration, and therefore alkali metals and halogens will donate and accept one electron, respectively. Noble gases themselves are chemically inactive. The overall redox reaction can be balanced by combining the oxidation and reduction half-reactions multiplied by coefficients such that the number of electrons lost in the oxidation equals the number of electrons gained in the reduction. An important class of redox reactions are the electrolytic electrochemical reactions, where electrons from the power supply at the negative electrode act as the reducing agent and electron withdrawal at the positive electrode acts as the oxidizing agent.
These reactions are particularly important for the production of chemical elements, such as chlorine or aluminium. The reverse process, in which electrons are released in redox reactions and chemical energy is converted to electrical energy, is possible and used in batteries. Complexation. In complexation reactions, several ligands react with a metal atom to form a coordination complex. This is achieved by the ligands donating lone pairs into empty orbitals of the metal atom, forming dipolar bonds. The ligands are Lewis bases; they can be both ions and neutral molecules, such as carbon monoxide, ammonia or water. The number of ligands that react with a central metal atom can be found using the 18-electron rule, which says that the valence shells of a transition metal will collectively accommodate 18 electrons, whereas the symmetry of the resulting complex can be predicted with the crystal field theory and ligand field theory. Complexation reactions also include ligand exchange, in which one or more ligands are replaced by another, and redox processes which change the oxidation state of the central metal atom. Acid–base reactions. In the Brønsted–Lowry acid–base theory, an acid–base reaction involves a transfer of protons (H+) from one species (the acid) to another (the base). When a proton is removed from an acid, the resulting species is termed that acid's conjugate base. When the proton is accepted by a base, the resulting species is termed that base's conjugate acid. In other words, acids act as proton donors and bases act as proton acceptors according to the following equation: &lt;chem display="block"&gt;\underset{acid}{HA} + \underset{base}{B} &lt;=&gt; \underset{conjugated\ base}{A^-} + \underset{conjugated\ acid}{HB+}&lt;/chem&gt; The reverse reaction is possible, and thus the acid/base and conjugate base/acid pairs are always in equilibrium. The equilibrium is determined by the acid and base dissociation constants ("K"a and "K"b) of the involved substances. A special case of the acid-base reaction is neutralization, where an acid and a base, taken in exactly equal amounts, form a neutral salt. Acid-base reactions can have different definitions depending on the acid-base concept employed. Some of the most common are the Arrhenius, Brønsted–Lowry and Lewis definitions. Precipitation. Precipitation is the formation of a solid in a solution or inside another solid during a chemical reaction. It usually takes place when the concentration of dissolved ions exceeds the solubility limit and forms an insoluble salt. This process can be assisted by adding a precipitating agent or by the removal of the solvent. Rapid precipitation results in an amorphous or microcrystalline residue, and a slow process can yield single crystals. The latter can also be obtained by recrystallization from microcrystalline salts. Solid-state reactions. Reactions can take place between two solids. However, because of the relatively small diffusion rates in solids, the corresponding chemical reactions are very slow in comparison to liquid and gas phase reactions. They are accelerated by increasing the reaction temperature and finely dividing the reactant to increase the contacting surface area. Reactions at the solid/gas interface. The reaction can take place at the solid|gas interface, on surfaces at very low pressure such as ultra-high vacuum. Via scanning tunneling microscopy, it is possible to observe reactions at the solid|gas interface in real space, if the time scale of the reaction is in the correct range.
Reactions at the solid|gas interface are in some cases related to catalysis. Photochemical reactions. In photochemical reactions, atoms and molecules absorb energy (photons) from the illuminating light and are promoted to an excited state. They can then release this energy by breaking chemical bonds, thereby producing radicals. Photochemical reactions include hydrogen–oxygen reactions, radical polymerization, chain reactions and rearrangement reactions. Many important processes involve photochemistry. The premier example is photosynthesis, in which most plants use solar energy to convert carbon dioxide and water into glucose, disposing of oxygen as a side-product. Humans rely on photochemistry for the formation of vitamin D, and vision is initiated by a photochemical reaction of rhodopsin. In fireflies, an enzyme in the abdomen catalyzes a reaction that results in bioluminescence. Many significant photochemical reactions, such as ozone formation, occur in the Earth's atmosphere and constitute atmospheric chemistry. Catalysis. In catalysis, the reaction does not proceed directly, but through a reaction with a third substance known as a catalyst. Although the catalyst takes part in the reaction, forming weak bonds with reactants or intermediates, it is returned to its original state by the end of the reaction and so is not consumed. However, it can be inhibited, deactivated or destroyed by secondary processes. Catalysts can be used in a different phase (heterogeneous) or in the same phase (homogeneous) as the reactants. In heterogeneous catalysis, typical secondary processes include coking, where the catalyst becomes covered by polymeric side products. Additionally, heterogeneous catalysts can dissolve into the solution in a solid-liquid system or evaporate in a solid–gas system. Catalysts can only speed up the reaction – chemicals that slow down the reaction are called inhibitors. Substances that increase the activity of catalysts are called promoters, and substances that deactivate catalysts are called catalytic poisons. With a catalyst, a reaction that is kinetically inhibited by a high activation energy can take place because the catalyst provides an alternative pathway that circumvents this activation energy. Heterogeneous catalysts are usually solids, powdered in order to maximize their surface area. Of particular importance in heterogeneous catalysis are the platinum group metals and other transition metals, which are used in hydrogenations, catalytic reforming and in the synthesis of commodity chemicals such as nitric acid and ammonia. Acids are an example of a homogeneous catalyst; they increase the electrophilicity of carbonyls, allowing a reaction with nucleophiles that would not otherwise proceed. The advantage of homogeneous catalysts is the ease of mixing them with the reactants, but they may also be difficult to separate from the products. Therefore, heterogeneous catalysts are preferred in many industrial processes. Reactions in organic chemistry. In organic chemistry, in addition to oxidation, reduction or acid-base reactions, a number of other reactions can take place which involve covalent bonds between carbon atoms or carbon and heteroatoms (such as oxygen, nitrogen, halogens, etc.). Many specific reactions in organic chemistry are name reactions designated after their discoverers. One of the most industrially important reactions is the cracking of heavy hydrocarbons at oil refineries to create smaller, simpler molecules. This process is used to manufacture gasoline.
Specific types of organic reactions may be grouped by their reaction mechanisms (particularly substitution, addition and elimination) or by the types of products they produce (for example, methylation, polymerisation and halogenation). Substitution. In a substitution reaction, a functional group in a particular chemical compound is replaced by another group. These reactions can be distinguished by the type of substituting species into a nucleophilic, electrophilic or radical substitution. In the first type, a nucleophile, an atom or molecule with an excess of electrons and thus a negative charge or partial charge, replaces another atom or part of the "substrate" molecule. The electron pair from the nucleophile attacks the substrate forming a new bond, while the leaving group departs with an electron pair. The nucleophile may be electrically neutral or negatively charged, whereas the substrate is typically neutral or positively charged. Examples of nucleophiles are hydroxide ion, alkoxides, amines and halides. This type of reaction is found mainly in aliphatic hydrocarbons, and rarely in aromatic hydrocarbons. The latter have high electron density and enter nucleophilic aromatic substitution only with very strong electron-withdrawing groups. Nucleophilic substitution can take place by two different mechanisms, SN1 and SN2. In their names, S stands for substitution, N for nucleophilic, and the number represents the kinetic order of the reaction, unimolecular or bimolecular. The SN1 reaction proceeds in two steps. First, the leaving group is eliminated creating a carbocation. This is followed by a rapid reaction with the nucleophile. In the SN2 mechanism, the nucleophile forms a transition state with the attacked molecule, and only then is the leaving group cleaved. These two mechanisms differ in the stereochemistry of the products. SN1 is not stereospecific: the intermediate carbocation is planar, so the nucleophile can attack from either face and any stereochemical information at the reaction centre is scrambled. In contrast, a reversal (Walden inversion) of the previously existing stereochemistry is observed in the SN2 mechanism. Electrophilic substitution is the counterpart of the nucleophilic substitution in that the attacking atom or molecule, an electrophile, has low electron density and thus a positive charge. Typical electrophiles are the carbon atom of carbonyl groups, carbocations or sulfur or nitronium cations. This reaction takes place almost exclusively in aromatic hydrocarbons, where it is called electrophilic aromatic substitution. The electrophile attack results in the so-called σ-complex, a transition state in which the aromatic system is abolished. Then, the leaving group, usually a proton, is split off and the aromaticity is restored. An alternative to aromatic substitution is electrophilic aliphatic substitution. It is similar to the nucleophilic aliphatic substitution and also has two major types, SE1 and SE2. In the third type of substitution reaction, radical substitution, the attacking particle is a radical. This process usually takes the form of a chain reaction, for example in the reaction of alkanes with halogens. In the first step, light or heat disintegrates the halogen-containing molecules, producing radicals. Then the reaction proceeds as an avalanche until two radicals meet and recombine. &lt;chem&gt;X. + R-H -&gt; X-H + R.&lt;/chem&gt; &lt;chem&gt;R. + X2 -&gt; R-X + X.&lt;/chem&gt; Addition and elimination.
Addition and its counterpart, elimination, are reactions that change the number of substituents on a carbon atom and form or cleave multiple bonds. Double and triple bonds can be produced by eliminating a suitable leaving group. Similar to the nucleophilic substitution, there are several possible reaction mechanisms that are named after the respective reaction order. In the E1 mechanism, the leaving group is ejected first, forming a carbocation. The next step, the formation of the double bond, takes place with the elimination of a proton (deprotonation). The leaving order is reversed in the E1cb mechanism, that is, the proton is split off first. This mechanism requires the participation of a base. Because of the similar conditions, both reactions in the E1 or E1cb elimination always compete with the SN1 substitution. The E2 mechanism also requires a base, but there the attack of the base and the elimination of the leaving group proceed simultaneously and produce no ionic intermediate. In contrast to the E1 eliminations, different stereochemical configurations are possible for the reaction product in the E2 mechanism, because the attack of the base preferentially occurs in the anti-position with respect to the leaving group. Because of the similar conditions and reagents, the E2 elimination is always in competition with the SN2-substitution. The counterpart of elimination is an addition where double or triple bonds are converted into single bonds. Similar to substitution reactions, there are several types of additions distinguished by the type of the attacking particle. For example, in the electrophilic addition of hydrogen bromide, an electrophile (proton) attacks the double bond forming a carbocation, which then reacts with the nucleophile (bromine). The carbocation can be formed on either side of the double bond depending on the groups attached to its ends, and the preferred configuration can be predicted with Markovnikov's rule. This rule states that "In the heterolytic addition of a polar molecule to an alkene or alkyne, the more electronegative (nucleophilic) atom (or part) of the polar molecule becomes attached to the carbon atom bearing the smaller number of hydrogen atoms." If the addition of a functional group is required at the less substituted carbon atom of the double bond, then electrophilic addition with acids is not possible. In this case, one has to use the hydroboration–oxidation reaction, wherein, in the first step, the boron atom acts as the electrophile and adds to the less substituted carbon atom. In the second step, the nucleophilic hydroperoxide or halogen anion attacks the boron atom. While the addition to the electron-rich alkenes and alkynes is mainly electrophilic, the nucleophilic addition plays an important role in the carbon-heteroatom multiple bonds, and especially its most important representative, the carbonyl group. This process is often associated with elimination so that after the reaction the carbonyl group is present again. It is, therefore, called an addition-elimination reaction and may occur in carboxylic acid derivatives such as chlorides, esters or anhydrides. This reaction is often catalyzed by acids or bases, where the acids increase the electrophilicity of the carbonyl group by binding to the oxygen atom, whereas the bases enhance the nucleophilicity of the attacking nucleophile.
Nucleophilic addition of a carbanion or another nucleophile to the double bond of an alpha, beta-unsaturated carbonyl compound can proceed via the Michael reaction, which belongs to the larger class of conjugate additions. This is one of the most useful methods for the mild formation of C–C bonds. Some additions which cannot be carried out with nucleophiles and electrophiles can be accomplished with free radicals. As with the free-radical substitution, the radical addition proceeds as a chain reaction, and such reactions are the basis of the free-radical polymerization. Other organic reaction mechanisms. In a rearrangement reaction, the carbon skeleton of a molecule is rearranged to give a structural isomer of the original molecule. These include hydride shift reactions such as the Wagner-Meerwein rearrangement, where a hydrogen, alkyl or aryl group migrates from one carbon to a neighboring carbon. Most rearrangements are associated with the breaking and formation of new carbon-carbon bonds. Other examples are sigmatropic reactions such as the Cope rearrangement. Cyclic rearrangements include cycloadditions and, more generally, pericyclic reactions, wherein two or more double bond-containing molecules form a cyclic molecule. An important example of a cycloaddition reaction is the Diels–Alder reaction (the so-called [4+2] cycloaddition) between a conjugated diene and a substituted alkene to form a substituted cyclohexene system. Whether a certain cycloaddition would proceed depends on the electronic orbitals of the participating species, as only orbitals with the same sign of wave function will overlap and interact constructively to form new bonds. Cycloaddition is usually assisted by light or heat. These perturbations result in a different arrangement of electrons in the excited state of the involved molecules and therefore in different effects. For example, the [4+2] Diels-Alder reactions can be assisted by heat whereas the [2+2] cycloaddition is selectively induced by light. Because of the orbital character, the potential for developing stereoisomeric products upon cycloaddition is limited, as described by the Woodward–Hoffmann rules. Biochemical reactions. Biochemical reactions are mainly controlled by complex proteins called enzymes, which are usually specialized to catalyze only a single, specific reaction. The reaction takes place in the active site, a small part of the enzyme which is usually found in a cleft or pocket lined by amino acid residues, and the rest of the enzyme is used mainly for stabilization. The catalytic action of enzymes relies on several mechanisms including the molecular shape ("induced fit"), bond strain, proximity and orientation of molecules relative to the enzyme, proton donation or withdrawal (acid/base catalysis), electrostatic interactions and many others. The biochemical reactions that occur in living organisms are collectively known as metabolism. Among the most important of its mechanisms is anabolism, in which different DNA and enzyme-controlled processes result in the production of large molecules such as proteins and carbohydrates from smaller units. Bioenergetics studies the sources of energy for such reactions. Important energy sources are glucose and oxygen, which can be produced by plants via photosynthesis or assimilated from food and air, respectively. All organisms use this energy to produce adenosine triphosphate (ATP), which can then be used to energize other reactions.
Decomposition of organic material by fungi, bacteria and other micro-organisms is also within the scope of biochemistry. Applications. Chemical reactions are central to chemical engineering, where they are used for the synthesis of new compounds from natural raw materials such as petroleum, mineral ores, and oxygen in air. It is essential to make the reaction as efficient as possible, maximizing the yield and minimizing the amount of reagents, energy inputs and waste. Catalysts are especially helpful for reducing the energy required for the reaction and increasing its reaction rate. Some specific reactions have their niche applications. For example, the thermite reaction is used to generate light and heat in pyrotechnics and welding. Although it is less controllable than the more conventional oxy-fuel welding, arc welding and flash welding, it requires much less equipment and is still used to mend rails, especially in remote areas. Monitoring. Mechanisms of monitoring chemical reactions depend strongly on the reaction rate. Relatively slow processes can be analyzed in situ for the concentrations and identities of the individual ingredients. Important tools of real-time analysis are the measurement of pH and analysis of optical absorption (color) and emission spectra. A less accessible but rather efficient method is the introduction of a radioactive isotope into the reaction and monitoring how it changes over time and where it moves to; this method is often used to analyze the redistribution of substances in the human body. Faster reactions are usually studied with ultrafast laser spectroscopy, where utilization of femtosecond lasers allows short-lived transition states to be monitored on time scales down to a few femtoseconds. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Delta G = \\Delta H - T \\cdot \\Delta S" }, { "math_id": 1, "text": "\\Delta H^o = +21.86 \\ \\text{kJ at 298 K}" }, { "math_id": 2, "text": "{d}U = T\\cdot {d}S - p\\cdot {d}V + \\mu\\cdot {d}n" }, { "math_id": 3, "text": " v= -\\frac {d[\\ce{A}]}{dt}= k \\cdot [\\ce{A}]. " }, { "math_id": 4, "text": "\\ce{[A]}(t) = \\ce{[A]}_{0} \\cdot e^{-k\\cdot t}. " }, { "math_id": 5, "text": "k = k_0 e^{{-E_a}/{k_{B}T}}" }, { "math_id": 6, "text": "\\Delta G" } ]
https://en.wikipedia.org/wiki?curid=6271
62719320
K-stability
In mathematics, and especially differential and algebraic geometry, K-stability is an algebro-geometric stability condition for complex manifolds and complex algebraic varieties. The notion of K-stability was first introduced by Gang Tian and reformulated more algebraically later by Simon Donaldson. The definition was inspired by a comparison to geometric invariant theory (GIT) stability. In the special case of Fano varieties, K-stability precisely characterises the existence of Kähler–Einstein metrics. More generally, on any compact complex manifold, K-stability is conjectured to be equivalent to the existence of constant scalar curvature Kähler metrics (cscK metrics). History. In 1954, Eugenio Calabi formulated a conjecture about the existence of Kähler metrics on compact Kähler manifolds, now known as the Calabi conjecture. One formulation of the conjecture is that a compact Kähler manifold formula_0 admits a unique Kähler–Einstein metric in the class formula_1. In the particular case where formula_2, such a Kähler–Einstein metric would be Ricci flat, making the manifold a Calabi–Yau manifold. The Calabi conjecture was resolved in the case where formula_3 by Thierry Aubin and Shing-Tung Yau, and when formula_2 by Yau. In the case where formula_4, that is when formula_0 is a Fano manifold, a Kähler–Einstein metric does not always exist. Namely, it was known by work of Yozo Matsushima and André Lichnerowicz that a Kähler manifold with formula_4 can only admit a Kähler–Einstein metric if the Lie algebra formula_5 is reductive. However, it can easily be shown that the blow up of the complex projective plane at one point, formula_6, is Fano but does not have a reductive Lie algebra. Thus not all Fano manifolds can admit Kähler–Einstein metrics. After the resolution of the Calabi conjecture for formula_7, attention turned to the loosely related problem of finding canonical metrics on "vector bundles" over complex manifolds. In 1983, Donaldson produced a new proof of the Narasimhan–Seshadri theorem. As proved by Donaldson, the theorem states that a holomorphic vector bundle over a compact Riemann surface is stable if and only if it corresponds to an irreducible unitary Yang–Mills connection. That is, a unitary connection which is a critical point of the Yang–Mills functional formula_8. On a Riemann surface such a connection is projectively flat, and its holonomy gives rise to a projective unitary representation of the fundamental group of the Riemann surface, thus recovering the original statement of the theorem by M. S. Narasimhan and C. S. Seshadri. During the 1980s this theorem was generalised through the work of Donaldson, Karen Uhlenbeck and Yau, and Jun Li and Yau to the Kobayashi–Hitchin correspondence, which relates stable holomorphic vector bundles to Hermitian–Einstein connections over arbitrary compact complex manifolds. A key observation in the setting of holomorphic vector bundles is that once a holomorphic structure is fixed, any choice of Hermitian metric gives rise to a unitary connection, the Chern connection. Thus one can either search for a Hermitian–Einstein connection, or its corresponding Hermitian–Einstein metric.
Inspired by the resolution of the existence problem for canonical metrics on vector bundles, in 1993 Yau was motivated to conjecture that the existence of a Kähler–Einstein metric on a Fano manifold should be equivalent to some form of algebro-geometric stability condition on the variety itself, just as the existence of a Hermitian–Einstein metric on a holomorphic vector bundle is equivalent to its stability. Yau suggested this stability condition should be an analogue of slope stability of vector bundles. In 1997, Tian suggested such a stability condition, which he called "K-stability" after the K-energy functional introduced by Toshiki Mabuchi. The "K" originally stood for "kinetic" due to the similarity of the K-energy functional with the kinetic energy, and for the German "kanonisch" for the canonical bundle. Tian's definition was analytic in nature, and specific to the case of Fano manifolds. Several years later Donaldson introduced an algebraic condition described in this article called K-stability, which makes sense on any polarised variety, and is equivalent to Tian's analytic definition in the case of the polarised variety formula_9 where formula_0 is Fano. Definition. In this section we work over the complex numbers formula_10, but the essential points of the definition apply over any field. A polarised variety is a pair formula_11 where formula_0 is a complex algebraic variety and formula_12 is an ample line bundle on formula_0. Such a polarised variety comes equipped with an embedding into projective space using the Proj construction, formula_13 where formula_14 is any positive integer large enough that formula_15 is very ample, and so every polarised variety is projective. Changing the choice of ample line bundle formula_12 on formula_0 results in a new embedding of formula_0 into a possibly different projective space. Therefore a polarised variety can be thought of as a projective variety together with a fixed embedding into some projective space formula_16. Hilbert–Mumford criterion. K-stability is defined by analogy with the Hilbert–Mumford criterion from finite-dimensional geometric invariant theory. This theory describes the stability of "points" on polarised varieties, whereas K-stability concerns the stability of the polarised variety itself. The Hilbert–Mumford criterion shows that to test the stability of a point formula_17 in a projective algebraic variety formula_18 under the action of a reductive algebraic group formula_19, it is enough to consider the one parameter subgroups (1-PS) of formula_20. To proceed, one takes a 1-PS of formula_20, say formula_21, and looks at the limiting point formula_22. This is a fixed point of the action of the 1-PS formula_23, and so the line over formula_17 in the affine space formula_24 is preserved by the action of formula_23. An action of the multiplicative group formula_25 on a one dimensional vector space comes with a weight, an integer we label formula_26, with the property that formula_27 for any formula_28 in the fibre over formula_29. The Hilbert-Mumford criterion then characterises the stability of the point formula_17 in terms of the sign of this weight for all one parameter subgroups. If one wishes to define a notion of stability for varieties, the Hilbert-Mumford criterion therefore suggests it is enough to consider one parameter deformations of the variety. This leads to the notion of a test configuration. Test Configurations.
A test configuration for a polarised variety formula_11 is a pair formula_34 where formula_35 is a scheme with a flat morphism formula_36 and formula_37 is a relatively ample line bundle for the morphism formula_38, such that: We say that a test configuration formula_34 is a product configuration if formula_49, and a trivial configuration if the formula_25 action on formula_49 is trivial on the first factor. Donaldson–Futaki Invariant. To define a notion of stability analogous to the Hilbert–Mumford criterion, one needs a concept of weight formula_50 on the fibre over formula_51 of a test configuration formula_52 for a polarised variety formula_11. By definition this family comes equipped with an action of formula_25 covering the action on the base, and so the fibre of the test configuration over formula_46 is fixed. That is, we have an action of formula_25 on the central fibre formula_53. In general this central fibre is not smooth, or even a variety. There are several ways to define the weight on the central fiber. The first definition was given using Ding-Tian's version of the generalized Futaki invariant. This definition is differential geometric and is directly related to the existence problems in Kähler geometry. Algebraic definitions were given using Donaldson-Futaki invariants and CM-weights defined by an intersection formula. By definition an action of formula_54 on a polarised scheme comes with an action of formula_54 on the ample line bundle formula_55, and therefore induces an action on the vector spaces formula_56 for all integers formula_57. An action of formula_54 on a complex vector space formula_58 induces a direct sum decomposition formula_59 into "weight spaces", where each formula_60 is a one dimensional subspace of formula_58, and the action of formula_25 when restricted to formula_60 has a weight formula_61. Define the total weight of the action to be the integer formula_62. This is the same as the weight of the induced action of formula_54 on the one dimensional vector space formula_63 where formula_64. Define the weight function of the test configuration formula_42 to be the function formula_65 where formula_65 is the total weight of the formula_54 action on the vector space formula_66 for each non-negative integer formula_67. Whilst the function formula_65 is not a polynomial in general, it becomes a polynomial of degree formula_68 for all formula_69 for some fixed integer formula_70, where formula_71. This can be seen using an equivariant Riemann-Roch theorem. Recall that the Hilbert polynomial formula_41 satisfies the equality formula_72 for all formula_73 for some fixed integer formula_74, and is a polynomial of degree formula_75. For such formula_76, let us write formula_77. The Donaldson-Futaki invariant of the test configuration formula_34 is the rational number formula_78. In particular formula_79, where formula_80 is the first order term in the expansion formula_81. The Donaldson-Futaki invariant does not change if formula_12 is replaced by a positive power formula_82, and so in the literature K-stability is often discussed using formula_83-line bundles. It is possible to describe the Donaldson-Futaki invariant in terms of intersection theory, and this was the approach taken by Tian in defining the CM-weight. Any test configuration formula_84 admits a natural compactification formula_85 over formula_86; the CM-weight is then defined by formula_87, where formula_88. This definition by the intersection formula is now often used in algebraic geometry.
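To make the expansion-based definition concrete, the following sketch extracts the leading coefficients of the weight function and the Hilbert polynomial and forms the first-order term of the large-k expansion of w(k)/(k·P(k)); up to the sign and normalisation conventions fixed by the formulas above, the Donaldson–Futaki invariant is this first-order coefficient. The toy data for (P^1, O(1)) with the standard C*-action, and the second purely synthetic pair of polynomials, are illustrative assumptions rather than computations taken from the article.

```python
from fractions import Fraction

def first_order_term(w, p):
    """Given w(k) = b0*k^(n+1) + b1*k^n + ...  and  P(k) = a0*k^n + a1*k^(n-1) + ...
    as coefficient lists [leading, next, ...], return (F0, F1) in the expansion
    w(k) / (k * P(k)) = F0 + F1/k + O(1/k^2)."""
    b0, b1 = Fraction(w[0]), Fraction(w[1])
    a0, a1 = Fraction(p[0]), Fraction(p[1])
    F0 = b0 / a0
    F1 = (b1 * a0 - b0 * a1) / a0**2
    return F0, F1

# Toy example: (P^1, O(1)) with the C*-action t.[x:y] = [tx:y], a product
# configuration.  Then P(k) = k + 1 and, for one choice of linearisation,
# w(k) = 0 + 1 + ... + k = k^2/2 + k/2.
print(first_order_term([Fraction(1, 2), Fraction(1, 2)], [1, 1]))  # (1/2, 0)
# The vanishing first-order term reflects the vanishing Futaki invariant
# of this product configuration.

# A purely synthetic pair with a nonzero first-order term:
# w(k) = k^2 and P(k) = k + 1 give k^2/(k^2 + k) = 1 - 1/k + ...
print(first_order_term([1, 0], [1, 1]))  # (1, -1)
```

One can also check directly that shifting the linearisation, which replaces w(k) by w(k) + c·k·P(k), leaves the first-order term unchanged, as the invariance of the Donaldson–Futaki invariant requires.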
It is known that formula_89 coincides with formula_90, so we can take the weight formula_91 to be either formula_89 or formula_92. The weight formula_91 can also be expressed in terms of the Chow form and hyperdiscriminant. In the case of Fano manifolds, there is an interpretation of the weight in terms of a new formula_93-invariant on valuations found by Chi Li and Kento Fujita. K-stability. In order to define K-stability, we need to first exclude certain test configurations. Initially it was presumed one should just ignore trivial test configurations as defined above, whose Donaldson-Futaki invariant always vanishes, but it was observed by Li and Xu that more care is needed in the definition. One elegant way of defining K-stability is given by Székelyhidi using the norm of a test configuration, which we first describe. For a test configuration formula_34, define the norm as follows. Let formula_94 be the infinitesimal generator of the formula_25 action on the vector space formula_95. Then formula_96. Similarly to the polynomials formula_65 and formula_41, the function formula_97 is a polynomial for large enough integers formula_14, in this case of degree formula_98. Let us write its expansion as formula_99. The norm of a test configuration is defined by the expression formula_100. According to the analogy with the Hilbert-Mumford criterion, once one has a notion of deformation (test configuration) and weight on the central fibre (Donaldson-Futaki invariant), one can define a stability condition, called K-stability. Let formula_11 be a polarised algebraic variety. We say that formula_11 is: Yau–Tian–Donaldson Conjecture. K-stability was originally introduced as an algebro-geometric condition which should characterise the existence of a Kähler–Einstein metric on a Fano manifold. This came to be known as the Yau–Tian–Donaldson conjecture (for Fano manifolds). The conjecture was resolved in the 2010s in works of Xiuxiong Chen, Simon Donaldson, and Song Sun. The strategy is based on a continuity method with respect to the cone angle of a Kähler–Einstein metric with cone singularities along a fixed anticanonical divisor, as well as an in-depth use of the Cheeger–Colding–Tian theory of Gromov–Hausdorff limits of Kähler manifolds with Ricci bounds. Theorem (Yau–Tian–Donaldson conjecture for Kähler–Einstein metrics): A Fano manifold formula_0 admits a Kähler–Einstein metric in the class of formula_1 if and only if the pair formula_105 is K-polystable. Chen, Donaldson, and Sun have alleged that Tian's claim to equal priority for the proof is incorrect, and they have accused him of academic misconduct. Tian has disputed their claims. Chen, Donaldson, and Sun were recognized by the American Mathematical Society's prestigious 2019 Veblen Prize as having resolved the conjecture. The Breakthrough Prize has recognized Donaldson with the Breakthrough Prize in Mathematics and Sun with the New Horizons Breakthrough Prize, in part based upon their work with Chen on the conjecture. More recently, a proof based on the "classical" continuity method was provided by Ved Datar and Gabor Székelyhidi, followed by a proof by Chen, Sun, and Bing Wang using the Kähler–Ricci flow. Robert Berman, Sébastien Boucksom, and Mattias Jonsson also provided a proof from the variational approach. Extension to constant scalar curvature Kähler metrics. It is expected that the Yau–Tian–Donaldson conjecture should apply more generally to cscK metrics over arbitrary smooth polarised varieties.
In fact, the Yau–Tian–Donaldson conjecture refers to this more general setting, with the case of Fano manifolds being a special case, which was conjectured earlier by Yau and Tian. Donaldson built on the conjecture of Yau and Tian from the Fano case after his definition of K-stability for arbitrary polarised varieties was introduced. Yau–Tian–Donaldson conjecture for constant scalar curvature metrics: A smooth polarised variety formula_11 admits a constant scalar curvature Kähler metric in the class of formula_106 if and only if the pair formula_11 is K-polystable. As discussed, the Yau–Tian–Donaldson conjecture has been resolved in the Fano setting. It was proven by Donaldson in 2009 that the Yau–Tian–Donaldson conjecture holds for toric varieties of complex dimension 2. For arbitrary polarised varieties it was proven by Stoppa, also using work of Arezzo and Pacard, that the existence of a cscK metric implies K-polystability. This is in some sense the easy direction of the conjecture, as it assumes the existence of a solution to a difficult partial differential equation, and arrives at the comparatively easy algebraic result. The significant challenge is to prove the reverse direction, that a purely algebraic condition implies the existence of a solution to a PDE. Examples. Smooth Curves. It has been known since the original work of Pierre Deligne and David Mumford that smooth algebraic curves are asymptotically stable in the sense of geometric invariant theory, and in particular that they are K-stable. In this setting, the Yau–Tian–Donaldson conjecture is equivalent to the uniformization theorem. Namely, every smooth curve admits a Kähler–Einstein metric of constant scalar curvature either formula_107 in the case of the projective line formula_108, formula_51 in the case of elliptic curves, or formula_109 in the case of compact Riemann surfaces of genus formula_110. Fano varieties. The setting where formula_111 is ample so that formula_0 is a Fano manifold is of particular importance, and in that setting many tools are known to verify the K-stability of Fano varieties. For example, using purely algebraic techniques it can be proven that all Fermat hypersurfaces formula_112 are K-stable Fano varieties for formula_113. Toric Varieties. K-stability was originally introduced by Donaldson in the context of toric varieties. In the toric setting many of the complicated definitions of K-stability simplify to be given by data on the moment polytope formula_114 of the polarised toric variety formula_115. First, it is known that to test K-stability, it is enough to consider "toric test configurations", where the total space of the test configuration is also a toric variety. Any such toric test configuration can be elegantly described by a convex function on the moment polytope, and Donaldson originally defined K-stability for such convex functions. If a toric test configuration formula_42 for formula_115 is given by a convex function formula_116 on formula_114, then the Donaldson-Futaki invariant can be written as formula_117 where formula_118 is the Lebesgue measure on formula_114, formula_119 is the canonical measure on the boundary of formula_114 arising from its description as a moment polytope (if an edge of formula_114 is given by a linear inequality formula_120 for some affine linear functional h on formula_121 with integer coefficients, then formula_122), and formula_123.
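As a numerical sketch of this toric formula, the snippet below evaluates the combination ∫_∂P f dσ − a ∫_P f dμ on the unit square, the moment polytope of P^1 × P^1 with the polarisation O(1,1). This choice of polytope and convex function, and the assumption that the constant a equals Vol(∂P)/Vol(P) (the normalisation making the invariant vanish on affine linear functions), are illustrative and are not the Hirzebruch surface example treated in the text.

```python
import numpy as np

# Moment polytope P = [0,1]^2.  Its edge normals are primitive, so the
# boundary measure d(sigma) restricts to the usual length measure on each
# edge, and d(mu) is Lebesgue measure on the square.
f = lambda x, y: np.maximum(0.0, x + y - 1.0)   # a simple piecewise linear convex function

n = 2000
t = (np.arange(n) + 0.5) / n        # midpoint-rule nodes on [0, 1]

X, Y = np.meshgrid(t, t)
int_P = f(X, Y).mean()              # integral of f over P (Vol(P) = 1), approx 1/6

edges = [f(t, 0.0), f(t, 1.0), f(0.0, t), f(1.0, t)]
int_dP = sum(e.mean() for e in edges)   # boundary integral of f, approx 1

a = 4.0                             # assumed: Vol(boundary of P) / Vol(P)
print(int_dP - a * int_P)           # approx 1/3 > 0
```

Under this sign convention the positive value is consistent with the fact that P^1 × P^1, which admits a Kähler–Einstein metric, is K-polystable; a convex function for which the combination came out negative would instead witness K-instability.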
Additionally, the norm of the test configuration can be given by formula_124, where formula_125 is the average of formula_116 on formula_114 with respect to formula_118. It was shown by Donaldson that for toric surfaces, it suffices to test convex functions of a particularly simple form. We say a convex function on formula_114 is piecewise-linear if it can be written as a maximum formula_126 for some affine linear functionals formula_127. Notice that by the definition of the constant formula_128, the Donaldson-Futaki invariant formula_129 is invariant under the addition of an affine linear functional, so we may always take one of the formula_130 to be the constant function formula_51. We say a convex function is simple piecewise-linear if it is a maximum of two functions, and so is given by formula_131 for some affine linear function formula_132, and simple rational piecewise-linear if formula_132 has rational coefficients. Donaldson showed that for toric surfaces it is enough to test K-stability only on simple rational piecewise-linear functions. Such a result is powerful insofar as it is possible to readily compute the Donaldson-Futaki invariants of such simple test configurations, and therefore computationally determine when a given toric surface is K-stable. An example of a K-unstable manifold is given by the toric surface formula_133, the first Hirzebruch surface, which is the blow up of the complex projective plane at a point, with respect to the polarisation given by formula_134, where formula_135 is the blow up and formula_136 the exceptional divisor. The measure formula_119 on the horizontal and vertical boundary faces of the polytope is just formula_137 and formula_138 respectively. On the diagonal face formula_139 the measure is given by formula_140. Consider the convex function formula_141 on this polytope. Then formula_142 and formula_143. Thus formula_144, and so the first Hirzebruch surface formula_145 is K-unstable. Alternative Notions. Hilbert and Chow Stability. K-stability arises from an analogy with the Hilbert-Mumford criterion for finite-dimensional geometric invariant theory. It is possible to use geometric invariant theory directly to obtain other notions of stability for varieties that are closely related to K-stability. Take a polarised variety formula_11 with Hilbert polynomial formula_146, and fix an formula_147 such that formula_82 is very ample with vanishing higher cohomology. The pair formula_148 can then be identified with a point in the Hilbert scheme of subschemes of formula_149 with Hilbert polynomial formula_150. This Hilbert scheme can be embedded into projective space as a subscheme of a Grassmannian (which is projective via the Plücker embedding). The general linear group formula_151 acts on this Hilbert scheme, and two points in the Hilbert scheme are equivalent if and only if the corresponding polarised varieties are isomorphic. Thus one can use geometric invariant theory for this group action to give a notion of stability. This construction depends on a choice of formula_147, so one says a polarised variety is asymptotically Hilbert stable if it is stable with respect to this embedding for all formula_152 sufficiently large, for some fixed formula_153. There is another projective embedding of the Hilbert scheme called the Chow embedding, which provides a different linearisation of the Hilbert scheme and therefore a different stability condition. One can therefore similarly define asymptotic Chow stability.
Alternative Notions. Hilbert and Chow Stability. K-stability arises from an analogy with the Hilbert-Mumford criterion for finite-dimensional geometric invariant theory. It is possible to use geometric invariant theory directly to obtain other notions of stability for varieties that are closely related to K-stability. Take a polarised variety formula_11 with Hilbert polynomial formula_146, and fix an formula_147 such that formula_82 is very ample with vanishing higher cohomology. The pair formula_148 can then be identified with a point in the Hilbert scheme of subschemes of formula_149 with Hilbert polynomial formula_150. This Hilbert scheme can be embedded into projective space as a subscheme of a Grassmannian (which is projective via the Plücker embedding). The general linear group formula_151 acts on this Hilbert scheme, and two points in the Hilbert scheme are equivalent if and only if the corresponding polarised varieties are isomorphic. Thus one can use geometric invariant theory for this group action to give a notion of stability. This construction depends on a choice of formula_147, so one says a polarised variety is asymptotically Hilbert stable if it is stable with respect to this embedding for all formula_152 sufficiently large, for some fixed formula_153. There is another projective embedding of the Hilbert scheme called the Chow embedding, which provides a different linearisation of the Hilbert scheme and therefore a different stability condition. One can therefore similarly define asymptotic Chow stability. 
Explicitly the Chow weight for a fixed formula_147 can be computed as formula_154 for formula_155 sufficiently large. Unlike the Donaldson-Futaki invariant, the Chow weight changes if the line bundle formula_12 is replaced by some power formula_15. However, from the expression formula_156 one observes that formula_157 and so K-stability is in some sense the limit of Chow stability as the dimension of the projective space in which formula_0 is embedded approaches infinity. One may similarly define asymptotic Chow semistability and asymptotic Hilbert semistability, and the various notions of stability are related as follows: Asymptotically Chow stable formula_158 Asymptotically Hilbert stable formula_158 Asymptotically Hilbert semistable formula_158 Asymptotically Chow semistable formula_158 K-semistable. It is, however, not known whether K-stability implies asymptotic Chow stability. Slope K-Stability. It was originally predicted by Yau that the correct notion of stability for varieties should be analogous to slope stability for vector bundles. Julius Ross and Richard Thomas developed a theory of slope stability for varieties, known as slope K-stability. It was shown by Ross and Thomas that any test configuration is essentially obtained by blowing up the variety formula_159 along a sequence of formula_25-invariant ideals, supported on the central fibre. This result is essentially due to David Mumford. Explicitly, every test configuration is dominated by a blow up of formula_159 along an ideal of the form formula_160 where formula_161 is the coordinate on formula_43. By taking the support of the ideals this corresponds to blowing up along a flag of subschemes formula_162 inside the copy formula_163 of formula_0. One obtains this decomposition essentially by taking the weight space decomposition of the invariant ideal formula_164 under the formula_25 action. In the special case where this flag of subschemes is of length one, the Donaldson-Futaki invariant can be easily computed and one arrives at slope K-stability. Given a subscheme formula_165 defined by an ideal sheaf formula_166, the test configuration is given by formula_167 which is the deformation to the normal cone of the embedding formula_168. If the variety formula_0 has Hilbert polynomial formula_169, define the slope of formula_0 to be formula_170 To define the slope of the subscheme formula_171, consider the Hilbert-Samuel polynomial of the subscheme formula_171, formula_172 for formula_173 and formula_17 a rational number such that formula_174. The coefficients formula_175 are polynomials in formula_17 of degree formula_176, and the K-slope of formula_166 with respect to formula_177 is defined by formula_178 This definition makes sense for any choice of real number formula_179 where formula_180 is the Seshadri constant of formula_171. Notice that taking formula_181 we recover the slope of formula_0. The pair formula_11 is slope K-semistable if for all proper subschemes formula_165, formula_182 for all formula_179 (one can also define slope K-stability and slope K-polystability by requiring this inequality to be strict, with some extra technical conditions). It was shown by Ross and Thomas that K-semistability implies slope K-semistability. However, unlike in the case of vector bundles, it is not the case that slope K-stability implies K-stability. In the case of vector bundles it is enough to consider only single subsheaves, but for varieties it is necessary to consider flags of length greater than one also.
Despite this, slope K-stability can still be used to identify K-unstable varieties, and therefore by the results of Stoppa, give obstructions to the existence of cscK metrics. For example, Ross and Thomas use slope K-stability to show that the projectivisation of an unstable vector bundle over a K-stable base is K-unstable, and so does not admit a cscK metric. This is a converse to results of Hong, which show that the projectivisation of a stable bundle over a base admitting a cscK metric also admits a cscK metric, and is therefore K-stable. Filtration K-Stability. Work of Apostolov–Calderbank–Gauduchon–Tønnesen-Friedman shows the existence of a manifold which does not admit any extremal metric, but does not appear to be destabilised by any test configuration. This suggests that the definition of K-stability as given here may not be precise enough to imply the Yau–Tian–Donaldson conjecture in general. However, this example "is" destabilised by a limit of test configurations. This was made precise by Székelyhidi, who introduced filtration K-stability. A filtration here is a filtration of the coordinate ring formula_183 of the polarised variety formula_11. The filtrations considered must be compatible with the grading on the coordinate ring in the following sense: a filtration formula_184 of formula_185 is a chain of finite-dimensional subspaces formula_186 such that the following conditions hold: the filtration is multiplicative, that is, formula_187 for all formula_188; the filtration is compatible with the grading of formula_185 coming from the graded pieces formula_189, that is, if formula_190 then each homogeneous component of that element also lies in formula_191; and the filtration exhausts formula_185, that is, formula_192. Given a filtration formula_184, its Rees algebra is defined by formula_193 We say that a filtration is finitely generated if its Rees algebra is finitely generated. It was proven by David Witt Nyström that a filtration is finitely generated if and only if it arises from a test configuration, and by Székelyhidi that any filtration is a limit of finitely generated filtrations. Combining these results, Székelyhidi observed that the example of Apostolov-Calderbank-Gauduchon-Tønnesen-Friedman would not violate the Yau–Tian–Donaldson conjecture if K-stability were replaced by filtration K-stability. This suggests that the definition of K-stability may need to be modified to account for these limiting examples. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "c_1(X)" }, { "math_id": 2, "text": "c_1(X)=0" }, { "math_id": 3, "text": "c_1(X)<0" }, { "math_id": 4, "text": "c_1(X)>0" }, { "math_id": 5, "text": "H^0(X, TX)" }, { "math_id": 6, "text": "\\text{Bl}_p \\mathbb{CP}^2" }, { "math_id": 7, "text": "c_1(X)\\le 0" }, { "math_id": 8, "text": "\\operatorname{YM}(\\nabla) = \\int_X \\|F_{\\nabla}\\|^2 \\, d \\operatorname{vol} ." }, { "math_id": 9, "text": "(X, -K_X)" }, { "math_id": 10, "text": "\\Complex" }, { "math_id": 11, "text": "(X,L)" }, { "math_id": 12, "text": "L" }, { "math_id": 13, "text": "X \\cong \\operatorname{Proj} \\bigoplus_{r\\ge 0} H^0 \\left(X, L^{kr}\\right) \\hookrightarrow \\mathbb{P}\\left(H^0\\left(X, L^k\\right)^*\\right)" }, { "math_id": 14, "text": "k" }, { "math_id": 15, "text": "L^k" }, { "math_id": 16, "text": "\\mathbb{CP}^N" }, { "math_id": 17, "text": "x" }, { "math_id": 18, "text": "X\\subset \\mathbb{CP}^N" }, { "math_id": 19, "text": "G\\subset \\operatorname{GL}(N+1,\\mathbb{C})" }, { "math_id": 20, "text": "G" }, { "math_id": 21, "text": "\\lambda: \\mathbb{C}^* \\hookrightarrow G" }, { "math_id": 22, "text": " x_0 = \\lim_{t\\to 0} \\lambda(t) \\cdot x ." }, { "math_id": 23, "text": "\\lambda" }, { "math_id": 24, "text": "\\mathbb{C}^{N+1}" }, { "math_id": 25, "text": "\\mathbb{C}^*" }, { "math_id": 26, "text": "\\mu(x,\\lambda)" }, { "math_id": 27, "text": " \\lambda(t) \\cdot \\tilde{x} = t^{\\mu(x,\\lambda)} \\tilde{x}" }, { "math_id": 28, "text": "\\tilde{x}" }, { "math_id": 29, "text": "x_0" }, { "math_id": 30, "text": "\\mu(x,\\lambda)\\le 0" }, { "math_id": 31, "text": "\\lambda < G" }, { "math_id": 32, "text": "\\mu(x,\\lambda)<0" }, { "math_id": 33, "text": "\\mu(x,\\lambda) >0" }, { "math_id": 34, "text": "(\\mathcal{X}, \\mathcal{L})" }, { "math_id": 35, "text": "\\mathcal{X}" }, { "math_id": 36, "text": "\\pi: \\mathcal{X} \\to \\mathbb{C}" }, { "math_id": 37, "text": "\\mathcal{L}" }, { "math_id": 38, "text": "\\pi" }, { "math_id": 39, "text": "t\\in \\mathbb{C}" }, { "math_id": 40, "text": "(\\mathcal{X}_t, \\mathcal{L}_t)" }, { "math_id": 41, "text": "\\mathcal{P}(k)" }, { "math_id": 42, "text": "(\\mathcal{X},\\mathcal{L})" }, { "math_id": 43, "text": "\\mathbb{C}" }, { "math_id": 44, "text": "t\\in \\mathbb{C}^*" }, { "math_id": 45, "text": "(\\mathcal{X}_t, \\mathcal{L}_t) \\cong (X,L)" }, { "math_id": 46, "text": "0\\in \\mathbb{C}" }, { "math_id": 47, "text": "(\\mathcal{X}_{t\\ne 0}, \\mathcal{L}_{t\\ne 0}) \\cong (X\\times \\mathbb{C}^*,\\operatorname{pr}_1^*L)" }, { "math_id": 48, "text": "\\operatorname{pr}_1 : X\\times \\mathbb{C}^* \\to X" }, { "math_id": 49, "text": "\\mathcal{X} \\cong X\\times \\mathbb{C}" }, { "math_id": 50, "text": "\\mu(\\mathcal{X},\\mathcal{L})" }, { "math_id": 51, "text": "0" }, { "math_id": 52, "text": "(\\mathcal{X},\\mathcal{L})\\to \\mathbb{C}" }, { "math_id": 53, "text": "(\\mathcal{X}_0,\\mathcal{L}_0)" }, { "math_id": 54, "text": "\\Complex^*" }, { "math_id": 55, "text": "\\mathcal{L}_0" }, { "math_id": 56, "text": "H^0(\\mathcal{X}_0, \\mathcal{L}_0^k)" }, { "math_id": 57, "text": "k\\ge 0" }, { "math_id": 58, "text": "V" }, { "math_id": 59, "text": "V=V_1\\oplus \\cdots \\oplus V_n" }, { "math_id": 60, "text": "V_i" }, { "math_id": 61, "text": "w_i" }, { "math_id": 62, "text": "w=w_1+\\cdots + w_n" }, { "math_id": 63, "text": "\\bigwedge^n V" }, { "math_id": 64, "text": "n=\\dim V" }, { "math_id": 65, "text": "w(k)" }, { "math_id": 66, "text": "H^0(\\mathcal{X}_0,\\mathcal{L}_0^k)" }, { 
"math_id": 67, "text": "k \\ge 0" }, { "math_id": 68, "text": "n+1" }, { "math_id": 69, "text": "k>k_0\\gg 0" }, { "math_id": 70, "text": "k_0" }, { "math_id": 71, "text": "n = \\dim X" }, { "math_id": 72, "text": "\\mathcal{P}(k)=\\dim H^0(X, L^k) = \\dim H^0(\\mathcal{X}_0,\\mathcal{L}_0^k)" }, { "math_id": 73, "text": "k>k_1\\gg 0" }, { "math_id": 74, "text": "k_1" }, { "math_id": 75, "text": "n" }, { "math_id": 76, "text": "k\\gg 0" }, { "math_id": 77, "text": "\\mathcal{P}(k) = a_0 k^n + a_1 k^{n-1} + O(k^{n-2}), \\quad w(k) = b_0 k^{n+1} + b_1 k^n + O(k^{n-1}) ." }, { "math_id": 78, "text": "\\operatorname{DF}(\\mathcal{X}, \\mathcal{L}) = \\frac{b_0a_1-b_1a_0}{a_0^2} ." }, { "math_id": 79, "text": "\\operatorname{DF}(\\mathcal{X}, \\mathcal{L}) = -f_1" }, { "math_id": 80, "text": "f_1" }, { "math_id": 81, "text": "\\frac{w(k)}{k \\mathcal{P}(k)} = f_0 + f_1 k^{-1} + O(k^{-2}) ." }, { "math_id": 82, "text": "L^r" }, { "math_id": 83, "text": "\\mathbb{Q}" }, { "math_id": 84, "text": " (\\mathcal{X}, \\mathcal{L})" }, { "math_id": 85, "text": "(\\bar{\\mathcal{X}}, \\bar{\\mathcal{L}})" }, { "math_id": 86, "text": "\\mathbb{P}^1" }, { "math_id": 87, "text": "CM({\\mathcal{X}},{ \\mathcal{L}})=\\frac { 1 }{ 2(n+1)\\cdot L^{ n } } \\left( \\mu \\cdot n{ (\\bar {\\mathcal{L}} ) }^{ n+1 }+(n+1){ K }_{ \\bar { \\mathcal{X}} /{ \\mathbb{P} }^{ 1 }} \\cdot { (\\bar {\\mathcal{L}} ) }^{ n } \\right) " }, { "math_id": 88, "text": "\\mu= -\\frac{L^{n-1}\\cdot K_X}{L^n}" }, { "math_id": 89, "text": "\\operatorname{ DF } (\\mathcal{X}, \\mathcal{L})" }, { "math_id": 90, "text": "\\operatorname {CM} (\\mathcal{X}, \\mathcal{L})" }, { "math_id": 91, "text": "\\mu(\\mathcal{X}, \\mathcal{L})" }, { "math_id": 92, "text": "\\operatorname {CM}(\\mathcal{X}, \\mathcal{L})" }, { "math_id": 93, "text": "\\beta" }, { "math_id": 94, "text": "A_k" }, { "math_id": 95, "text": "H^0(X,L^k)" }, { "math_id": 96, "text": "\\operatorname{Tr}(A_k)=w(k)" }, { "math_id": 97, "text": "\\operatorname{Tr}(A_k^2)" }, { "math_id": 98, "text": "n+2" }, { "math_id": 99, "text": "\\operatorname{Tr}(A_k^2) = c_0 k^{n+2} + O(k^{n+1})." }, { "math_id": 100, "text": "\\|(\\mathcal{X}, \\mathcal{L})\\|^2 = c_0 - \\frac{b_0^2}{a_0}." 
}, { "math_id": 101, "text": "\\operatorname{\\mu }(\\mathcal{X}, \\mathcal{L})\\ge 0" }, { "math_id": 102, "text": "\\operatorname{\\mu}(\\mathcal{X}, \\mathcal{L})> 0" }, { "math_id": 103, "text": "\\|(\\mathcal{X}, \\mathcal{L})\\|>0" }, { "math_id": 104, "text": "\\operatorname{\\mu}(\\mathcal{X}, \\mathcal{L})=0" }, { "math_id": 105, "text": "(X,-K_X)" }, { "math_id": 106, "text": "c_1(L)" }, { "math_id": 107, "text": "+1" }, { "math_id": 108, "text": "\\mathbb{CP}^1" }, { "math_id": 109, "text": "-1" }, { "math_id": 110, "text": " g > 1" }, { "math_id": 111, "text": "L=-K_X" }, { "math_id": 112, "text": "F_{n,d} = \\{z\\in \\mathbb{CP}^{n+1}\\mid z_0^d + \\cdots z_{n+1}^d = 0\\} \\subset \\mathbb{CP}^{n+1}" }, { "math_id": 113, "text": "3 \\le d \\le n+1" }, { "math_id": 114, "text": "P" }, { "math_id": 115, "text": "(X_P, L_P)" }, { "math_id": 116, "text": "f" }, { "math_id": 117, "text": "\\operatorname{DF}(\\mathcal{X},\\mathcal{L}) = \\frac{1}{2} \\mathcal{L}(f) = \\frac{1}{2} \\left( \\int_{\\partial P} f \\,d\\sigma - a \\int_{P} f\\, d\\mu\\right) ," }, { "math_id": 118, "text": "d\\mu" }, { "math_id": 119, "text": "d\\sigma" }, { "math_id": 120, "text": "h(x) \\le a" }, { "math_id": 121, "text": "\\mathbb{R}^n" }, { "math_id": 122, "text": "d\\mu = \\pm dh \\wedge d\\sigma" }, { "math_id": 123, "text": " a = \\operatorname{Vol}(\\partial P, d\\sigma)/\\operatorname{Vol}(P, d\\mu)" }, { "math_id": 124, "text": "\\left\\|(\\mathcal{X},\\mathcal{L}) \\right\\| = \\left\\|f - \\bar{f}\\right\\|_{L^2} ," }, { "math_id": 125, "text": "\\bar{f}" }, { "math_id": 126, "text": "f = \\max (h_1, \\dots, h_n)" }, { "math_id": 127, "text": "h_1,\\dots,h_n" }, { "math_id": 128, "text": "a" }, { "math_id": 129, "text": "\\mathcal{L}(f)" }, { "math_id": 130, "text": "h_i" }, { "math_id": 131, "text": "f = \\max (0, h)" }, { "math_id": 132, "text": "h" }, { "math_id": 133, "text": "\\mathbb{F}_1 = \\operatorname{Bl}_0\\mathbb{CP}^2" }, { "math_id": 134, "text": "L = \\frac{1}{2}(\\pi^* \\mathcal{O}(2) - E)" }, { "math_id": 135, "text": "\\pi: \\mathbb{F}_1 \\to \\mathbb{CP}^2" }, { "math_id": 136, "text": "E" }, { "math_id": 137, "text": "dx" }, { "math_id": 138, "text": "dy" }, { "math_id": 139, "text": "x+y=2" }, { "math_id": 140, "text": "(dx-dy)/2" }, { "math_id": 141, "text": "f(x,y)=x+y" }, { "math_id": 142, "text": "\\int_P f\\, d\\mu = \\frac{11}{6},\\qquad \\int_{\\partial P} f\\, d\\sigma = 6 ," }, { "math_id": 143, "text": "\\operatorname{Vol}(P, d\\mu)=\\frac{3}{2},\\qquad \\operatorname{Vol}(\\partial P, d\\sigma) = 5 ," }, { "math_id": 144, "text": "\\mathcal{L}(f) = 6-\\frac{55}{9} = -\\frac{1}{9} < 0 ," }, { "math_id": 145, "text": "\\mathbb{F}_1" }, { "math_id": 146, "text": "\\mathcal{P}" }, { "math_id": 147, "text": "r>0" }, { "math_id": 148, "text": "(X,L^r)" }, { "math_id": 149, "text": "\\mathbb{P}^{\\mathcal{P}(r)-1}" }, { "math_id": 150, "text": "\\mathcal{P}'(K) = \\mathcal{P}(Kr)" }, { "math_id": 151, "text": "\\operatorname{GL}(\\mathcal{P}(r), \\mathbb{C})" }, { "math_id": 152, "text": "r>r_0\\gg0" }, { "math_id": 153, "text": "r_0" }, { "math_id": 154, "text": "\\operatorname{Chow}_r(\\mathcal{X},\\mathcal{L}) = \\frac{rb_0}{a_0} - \\frac{w(r)}{\\mathcal{P}(r)}" }, { "math_id": 155, "text": "r" }, { "math_id": 156, "text": "\\operatorname{Chow}_{rk}(\\mathcal{X},\\mathcal{L^k}) = \\frac{krb_0}{a_0} - \\frac{w(kr)}{\\mathcal{P}(kr)}" }, { "math_id": 157, "text": "\\operatorname{DF}(\\mathcal{X},\\mathcal{L}) = \\lim_{k\\to\\infty} 
\\operatorname{Chow}_{rk}(\\mathcal{X},\\mathcal{L^k}) ," }, { "math_id": 158, "text": "\\implies" }, { "math_id": 159, "text": "X\\times \\mathbb{C}" }, { "math_id": 160, "text": "I=I_0 + t I_1 + t^2 I_2 + \\cdots + t^{r-1} I_{r-1} + (t^r)\\subset \\mathcal{O}_X \\otimes \\mathbb{C}[t]," }, { "math_id": 161, "text": "t" }, { "math_id": 162, "text": " Z_{r-1} \\subset \\cdots \\subset Z_2 \\subset Z_1 \\subset Z_0 \\subset X" }, { "math_id": 163, "text": "X\\times \\{0\\}" }, { "math_id": 164, "text": "I" }, { "math_id": 165, "text": "Z\\subset X" }, { "math_id": 166, "text": "I_Z" }, { "math_id": 167, "text": "\\mathcal{X} = \\operatorname{Bl}_{Z\\times\\{0\\}} (X\\times \\mathbb{C}) ," }, { "math_id": 168, "text": "Z\\hookrightarrow X" }, { "math_id": 169, "text": "\\mathcal{P}(k) = a_0 k^n + a_1 k^{n-1} + O(k^{n-2})" }, { "math_id": 170, "text": " \\mu(X) = \\frac{a_1}{a_0} ." }, { "math_id": 171, "text": "Z" }, { "math_id": 172, "text": "\\chi(L^r \\otimes I_Z^{xr}) = a_0(x) r^n + a_1(x)r^{n-1} + O(r^{n-2}) ," }, { "math_id": 173, "text": "r\\gg 0" }, { "math_id": 174, "text": "xr \\in \\mathbb{N}" }, { "math_id": 175, "text": "a_i(x)" }, { "math_id": 176, "text": "n-i" }, { "math_id": 177, "text": "c" }, { "math_id": 178, "text": "\\mu_c(I_Z) = \\frac{\\int_0^c \\big(a_1(x) + \\frac{a_0'(x)}{2}\\big)\\, dx}{\\int_0^c a_0(x) \\, dx}." }, { "math_id": 179, "text": "c\\in (0,\\epsilon(Z)]" }, { "math_id": 180, "text": "\\epsilon(Z)" }, { "math_id": 181, "text": "Z=\\emptyset" }, { "math_id": 182, "text": "\\mu_c(I_Z) \\le \\mu(X)" }, { "math_id": 183, "text": "R = \\bigoplus_{k\\ge 0} H^0(X, L^k)" }, { "math_id": 184, "text": "\\chi" }, { "math_id": 185, "text": "R" }, { "math_id": 186, "text": "\\mathbb{C} = F_0 R \\subset F_1 R \\subset F_2 R \\subset \\dots \\subset R" }, { "math_id": 187, "text": "(F_iR)(F_jR) \\subset F_{i+j}R" }, { "math_id": 188, "text": "i,j\\ge 0" }, { "math_id": 189, "text": "R_k = H^0(X, L^k)" }, { "math_id": 190, "text": "f\\in F_iR" }, { "math_id": 191, "text": "F_iR" }, { "math_id": 192, "text": "\\bigcup_{i\\ge0} F_iR = R" }, { "math_id": 193, "text": "\\operatorname{Rees}(\\chi) = \\bigoplus_{i\\ge 0} (F_i R)t^i \\subset R[t]." } ]
https://en.wikipedia.org/wiki?curid=62719320
6272460
Beta diversity
Ratio of regional to local species diversity in ecology In ecology, beta diversity (β-diversity or true beta diversity) is the ratio between regional and local species diversity. The term was introduced by R. H. Whittaker together with the terms alpha diversity (α-diversity) and gamma diversity (γ-diversity). The idea was that the total species diversity in a landscape (γ) is determined by two different things: the mean species diversity at the local level (α) and the differentiation among local sites (β). Other formulations for beta diversity include "absolute species turnover", "Whittaker's species turnover" and "proportional species turnover". Whittaker proposed several ways of quantifying differentiation, and subsequent generations of ecologists have invented more. As a result, there are now many defined types of beta diversity. Some use "beta diversity" to refer to any of several indices related to compositional heterogeneity. Confusion is avoided by using distinct names for other formulations. Beta diversity as a measure of species turnover overemphasizes the role of rare species as the difference in species composition between two sites or communities is likely reflecting the presence and absence of some rare species in the assemblages. Beta diversity can also be a measure of nestedness, which occurs when species assemblages in species-poor sites are a subset of the assemblages in more species-rich sites. Moreover, pairwise beta diversity are inadequate in building all biodiversity partitions (some partitions in a Venn diagram of 3 or more sites cannot be expressed by alpha and beta diversity). Consequently, some macroecological and community patterns cannot be fully expressed by alpha and beta diversity. Due to these two reasons, a new way of measuring species turnover, coined Zeta diversity (ζ-diversity), has been proposed and used to connect all existing incidence-based biodiversity patterns. Types. Whittaker beta diversity. Gamma diversity and alpha diversity can be calculated directly from species inventory data. The simplest of Whittaker's original definitions of beta diversity is β = γ/α Here gamma diversity is the total species diversity of a landscape and alpha diversity is the mean species diversity per site. Because the limits among local sites and landscapes are diffuse and to some degree subjective, it has been proposed that gamma diversity can be quantified for any inventory dataset and that alpha and beta diversity can be quantified whenever the dataset is divided into subunits. Then gamma diversity is the total species diversity in the dataset and alpha diversity the mean species diversity per subunit. Beta diversity quantifies how many subunits there would be if the total species diversity of the dataset and the mean species diversity per subunit remained the same, but the subunits shared no species. Absolute species turnover. Some researchers have preferred to partition gamma diversity into additive rather than multiplicative components. Then the beta component of diversity becomes βA = γ - α This quantifies how much more species diversity the entire dataset contains than an average subunit within the dataset. This can also be interpreted as the total amount of species turnover among the subunits in the dataset. 
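Before specialising to the two-subunit formulas below, here is a small Python sketch of this partitioning for a hypothetical pair of sites (the species lists are invented purely for illustration), computed from presence-absence data where diversity is measured as species richness.

```python
# Hypothetical presence-absence data for two local sites.
site1 = {"oak", "birch", "pine", "maple", "alder"}
site2 = {"oak", "pine", "spruce", "willow"}

gamma = len(site1 | site2)               # total species diversity of the dataset
alpha = (len(site1) + len(site2)) / 2    # mean species diversity per subunit

beta_whittaker = gamma / alpha           # multiplicative: gamma = alpha * beta
beta_additive = gamma - alpha            # additive component: gamma = alpha + beta_A

print(gamma, alpha, beta_whittaker, beta_additive)
# 7 species in total, mean alpha 4.5, beta = 7/4.5 ~ 1.56, additive beta = 2.5
```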
When there are two subunits, and presence-absence data are used, this can be calculated with the following equation: formula_0 where, S1= the total number of species recorded in the first community, S2= the total number of species recorded in the second community, and c= the number of species common to both communities. Whittaker's species turnover. If absolute species turnover is divided by alpha diversity, a measure is obtained that quantifies how many times the species composition changes completely among the subunits of the dataset. This measure was proposed by Whittaker, so it has been called Whittaker's species turnover. It is calculated as βW = (γ - α)/α = γ/α - 1 When there are two subunits, and presence-absence data are used, this equals the one-complement of the Sørensen similarity index. Proportional species turnover. If absolute species turnover is divided by gamma diversity, a measure is obtained that quantifies what proportion of the species diversity in the dataset is not contained in an average subunit. It is calculated as βP = (γ - α)/γ = 1 - α/γ When there are two subunits, and presence-absence data are used, this measure as ranged to the interval [0, 1] equals the one-complement of the Jaccard similarity index. β-diversity patterns. Although understanding the change in species composition from local to regional scales (β-diversity) is a central theme in ecology and biogeography, studies often reached different conclusions as to the fundamental patterns in β-diversity. For example, niche compression hypothesis predicted higher β-diversity at lower latitudes. Studies comparing natural local sites with human-modified local sites are no different. Kitching et al. sampled moths in primary and logged forests of Danum valley, Borneo to show that β-diversity in primary forests was higher than logged forests. Contrastingly, Berry et al. sampled trees in the same study area to show that β-diversity in logged forests was higher than primary forests. The results of these two studies were completely different from the results of a recent quantitative synthesis, which showed that β-diversity in primary forests were similar to β-diversity in all types of human-modified local sites (secondary forests, plantations, pasture and urban). Therefore, there is a clear lack of consensus on β-diversity patterns among studies. Sreekar et al. suggested that most of these inconsistencies were due to the differences in grain size and/or spatial extent among studies. They showed that spatial scale changes the relationship between β-diversity and latitude. Diversity partitioning in the geologic past. Major diversification events in the geologic past were associated with shifts in the relative contributions of alpha- and beta-diversity (diversity partitioning). Examples include the Cambrian explosion, the great Ordovician biodiversification event, and the recoveries from the end-Permian and end-Triassic mass extinction events. Empirical data from these case studies confirm theoretical predictions that an increasing number of species will increase beta-diversity relative to alpha diversity because of the effects from interspecific competition; yet, alpha diversity may increase again once options for increasing geographic turnover are exhausted. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\beta_A=(S_1-c)+(S_2-c)" } ]
https://en.wikipedia.org/wiki?curid=6272460
6273078
Malmquist index
Bilateral index The Malmquist Index (MI) is a bilateral index that can be used to compare the production technology of two economies. It is named after Professor Sten Malmquist, on whose ideas it is based. It is also called the Malmquist Productivity Index. The MI is based on the concept of the production function, which gives the maximum possible production as a function of a set of inputs pertaining to capital and labour. So, if formula_0 is the set of labour and capital inputs to the production function of Economy A, and formula_1 is the maximum output obtainable from those inputs, we could write formula_2, where f_a denotes the production function of Economy A. While the production function would normally apply to an enterprise, it is possible to calculate it for an entire region or nation. This would be called the aggregate production function. To calculate the Malmquist Index of economy A with respect to economy B, we must substitute the labour and capital inputs of economy A into the production function of B, and vice versa. The formula for MI is given below. formula_3 where formula_4 formula_5 formula_6 formula_7 Note that the MI of A with respect to B is the reciprocal of the MI of B with respect to A. If the MI of A with respect to B is greater than 1, the aggregate production technology of economy A is superior to that of economy B. The Malmquist Index was introduced in the 1982 paper, "Multilateral Comparisons of Output, Input and Productivity Using Superlative Index Numbers", by Douglas W. Caves, Laurits R. Christensen and W. Erwin Diewert. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Definition on About.com
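To make the computation concrete, here is a short Python sketch. The numbers and the Cobb-Douglas form of the two aggregate production functions are entirely hypothetical and chosen only for illustration; any functions and input bundles could be substituted.

```python
import math

# Hypothetical aggregate production functions for economies A and B (Cobb-Douglas form).
def f_a(capital, labour):
    return 1.2 * capital**0.35 * labour**0.65

def f_b(capital, labour):
    return 1.0 * capital**0.30 * labour**0.70

# Hypothetical input bundles S_a and S_b, given as (capital, labour).
S_a = (400.0, 120.0)
S_b = (350.0, 150.0)

Q1 = f_a(*S_a)  # A's inputs evaluated in A's technology
Q2 = f_a(*S_b)  # B's inputs evaluated in A's technology
Q3 = f_b(*S_a)  # A's inputs evaluated in B's technology
Q4 = f_b(*S_b)  # B's inputs evaluated in B's technology

mi_a_vs_b = math.sqrt((Q1 * Q2) / (Q3 * Q4))
print(mi_a_vs_b)       # about 1.26 here, so A's aggregate technology is the superior one
print(1 / mi_a_vs_b)   # the MI of B with respect to A is the reciprocal
```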
[ { "math_id": 0, "text": "S_a" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "Q=f_a(S_a)" }, { "math_id": 3, "text": "MI=\\sqrt{(Q_1 Q_2)/(Q_3 Q_4)} " }, { "math_id": 4, "text": "Q_1=f_a(S_a)" }, { "math_id": 5, "text": "Q_2=f_a(S_b)" }, { "math_id": 6, "text": "Q_3=f_b(S_a)" }, { "math_id": 7, "text": "Q_4=f_b(S_b)" } ]
https://en.wikipedia.org/wiki?curid=6273078
62734118
Esther 5
A chapter in the Book of Esther Esther 5 is the fifth chapter of the Book of Esther in the Hebrew Bible or the Old Testament of the Christian Bible, The author of the book is unknown and modern scholars have established that the final stage of the Hebrew text would have been formed by the second century BCE. Chapters 3 to 8 contain the nine scenes that form the complication in the book. This chapter records that Esther's risky behavior to appear uninvited before the king Ahasuerus is richly rewarded, because the king generously offers to give her whatever she wants, 'even to the half of my kingdom' (5:3), but Esther cleverly asks for nothing more than an opportunity to entertain her husband and his chief officer, Haman. Both men were pleased at her hospitality, but when the king again offers her half the empire, this time she requests only a second banquet. While Haman was happy to have been entertained by the queen, he became intensely distressed when Mordecai once more refused to bow down before him. Haman's wife, Zeresh, advised him to erect a monumental gallows intended for Mordecai, and only then Haman felt happy again to look forward to Esther's second banquet. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 14 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Esther's first audience with the king and its outcome (5:1–8). This section records the first uninvited audience of Esther before king Ahasuerus. Esther was immediately successful in her approach: the king extended his scepter as a sign of clemency and promised to grant her wish up to half of his kingdom. However, she didn't use this opportunity to avert the decree of genocide and instead invited the king and Haman to a dinner party. The act indicates Esther's skills as a wise courtier because the seemingly simple request gives Esther several advantages to achieve her goal: "Now it came to pass on the third day, that Esther put on her royal apparel, and stood in the inner court of the king's house, over against the king's house: and the king sat upon his royal throne in the royal house, over against the gate of the house." "And the king said to her, "What is it, Queen Esther? What is your request? It shall be given you, even to the half of my kingdom."" "So Esther answered, "If it pleases the king, let the king and Haman come today to the banquet that I have prepared for him."" "Haman also said, “Even Esther the queen let no one but me come with the king to the banquet which she had prepared; and tomorrow also I am invited by her with the king." Haman grows more incensed against Mordecai (5:9–14). This pericope shows that Haman is a dangerous foe who was constantly full of wrath for being worsted by his inferior, Mordecai, so he planned to butcher the whole population of Jews to appease his own sense of inferiority. Haman would not enjoy all his honors as long as there was one Jew who did not give him the customary respect he wanted. 
His friends understood that Haman wanted Mordecai not only dead but also publicly humiliated, so they suggested setting up a high gallows for Mordecai to appease Haman. Nonetheless, Mordecai's continued defiance against Haman is 'enigmatic', as he maintained it while knowing that his action had placed the Jews in great mortal danger. [Haman said:] "Yet all this avails me nothing, so long as I see Mordecai the Jew sitting at the king's gate." "Then his wife Zeresh and all his friends said to him, "Let a gallows be made, fifty cubits high, and in the morning suggest to the king that Mordecai be hanged on it; then go merrily with the king to the banquet."" "And the thing pleased Haman; so he had the gallows made." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62734118
62736148
Divisorial scheme
In algebraic geometry, a divisorial scheme is a scheme admitting an ample family of line bundles, as opposed to an ample line bundle. In particular, a quasi-projective variety is a divisorial scheme and the notion is a generalization of "quasi-projective". It was introduced by Borelli (in the case of a variety) as well as in SGA 6 (in the case of a scheme). The term "divisorial" refers to the fact that "the topology of these varieties is determined by their positive divisors." The class of divisorial schemes is quite large: it includes affine schemes, separated regular (noetherian) schemes and subschemes of a divisorial scheme (such as projective varieties). Definition. Here is the definition in SGA 6, which is a more general version of the definition of Borelli. Given a quasi-compact quasi-separated scheme "X", a family of invertible sheaves formula_0 on it is said to be an ample family if the open subsets formula_1 form a base of the (Zariski) topology on "X"; in other words, there is an open affine cover of "X" consisting of open sets of such form. A scheme is then said to be divisorial if there exists such an ample family of invertible sheaves. Properties and counterexample. Since a subscheme of a divisorial scheme is divisorial, "divisorial" is a necessary condition for a scheme to be embedded into a smooth variety (or more generally a separated Noetherian regular scheme). To an extent, it is also a sufficient condition. A divisorial scheme has the resolution property; i.e., a coherent sheaf is a quotient of a vector bundle. In particular, a scheme that does not have the resolution property is an example of a non-divisorial scheme. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L_i, i \\in I" }, { "math_id": 1, "text": "U_f = \\{ f \\ne 0 \\}, f \\in \\Gamma(X, L_i^{\\otimes n}), i \\in I, n \\ge 1" } ]
https://en.wikipedia.org/wiki?curid=62736148
627501
Littlewood conjecture
Mathematical problem In mathematics, the Littlewood conjecture is an open problem (as of April 2024) in Diophantine approximation, proposed by John Edensor Littlewood around 1930. It states that for any two real numbers α and β, formula_0 where formula_1 is the distance to the nearest integer. Formulation and explanation. This means the following: take a point ("α", "β") in the plane, and then consider the sequence of points (2"α", 2"β"), (3"α", 3"β"), ... . For each of these, multiply the distance to the closest line with integer "x"-coordinate by the distance to the closest line with integer "y"-coordinate. This product will certainly be at most 1/4. The conjecture makes no statement about whether this sequence of values will converge; it typically does not, in fact. The conjecture states something about the limit inferior, and says that there is a subsequence for which the distances decay faster than the reciprocal, i.e. o(1/"n") in the little-o notation. Connection to further conjectures. It is known that this would follow from a result in the geometry of numbers, about the minimum on a non-zero lattice point of a product of three linear forms in three real variables: the implication was shown in 1955 by Cassels and Swinnerton-Dyer. This can be formulated another way, in group-theoretic terms. There is now another conjecture, expected to hold for "n" ≥ 3: it is stated in terms of "G" = "SLn"("R"), Γ = "SLn"("Z"), and the subgroup "D" of diagonal matrices in "G". Conjecture: for any "g" in "G"/Γ such that "Dg" is relatively compact (in "G"/Γ), "Dg" is closed. This in turn is a special case of a general conjecture of Margulis on Lie groups. Partial results. Borel showed in 1909 that the exceptional set of real pairs (α,β) violating the statement of the conjecture is of Lebesgue measure zero. Manfred Einsiedler, Anatole Katok and Elon Lindenstrauss have shown that it must have Hausdorff dimension zero; and in fact is a union of countably many compact sets of box-counting dimension zero. The result was proved by using a measure classification theorem for diagonalizable actions of higher-rank groups, and an "isolation theorem" proved by Lindenstrauss and Barak Weiss. These results imply that non-trivial pairs satisfying the conjecture exist: indeed, given a real number α such that formula_2, it is possible to construct an explicit β such that (α,β) satisfies the conjecture. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
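The quantity appearing in the conjecture is easy to explore numerically. The following Python sketch is illustrative only: it proves nothing (the conjecture is about the limit inferior over all n, and floating-point arithmetic limits the reliable range), but it shows the record lows of n·||nα||·||nβ|| steadily decreasing for a sample pair of irrationals.

```python
from math import sqrt

def dist_to_nearest_int(t):
    """||t||: the distance from t to the nearest integer."""
    return abs(t - round(t))

alpha, beta = sqrt(2), sqrt(3)  # a sample pair; any real numbers can be tried

record = float("inf")
for n in range(1, 1_000_000):
    value = n * dist_to_nearest_int(n * alpha) * dist_to_nearest_int(n * beta)
    if value < record:
        record = value
        print(n, value)  # record lows; the conjecture asserts they tend to 0
```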
[ { "math_id": 0, "text": "\\liminf_{n\\to\\infty} \\ n\\,\\Vert n\\alpha\\Vert \\,\\Vert n\\beta\\Vert = 0," }, { "math_id": 1, "text": "\\Vert x\\Vert:=\\min(|x-\\lfloor x \\rfloor|,|x-\\lceil x \\rceil|)" }, { "math_id": 2, "text": "\\inf_{n \\ge 1} n \\cdot || n \\alpha || > 0 " } ]
https://en.wikipedia.org/wiki?curid=627501
62751176
Esther 6
A chapter in the Book of Esther Esther 6 is the sixth chapter of the Book of Esther in the Hebrew Bible or the Old Testament of the Christian Bible, The author of the book is unknown and modern scholars have established that the final stage of the Hebrew text would have been formed by the second century BCE. Chapters 3 to 8 contain the nine scenes that form the complication in the book. This chapter relates how a sleepless Ahasuerus had his court annals read aloud and discovered that he had failed to reward Mordecai for passing on the information about the assassination plot. The episode leads to 'a marvellously ironic scene' (), as the narrative 'moves inexorably to its ultimate reversal', starting with Haman leading a king's horse carrying Mordecai, clothed in royal garb through the streets of Susa, and proclaiming the king's favor for Mordecai. Haman went home exhibiting mourning behavior and his wife predicted that Haman's intent to destroy Mordecai would end up with the opposite result. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 14 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The king discovers the failure to reward Mordecai (6:1–3). This section records how the king was by chance sleepless that night (cf. ), so he asked the Royal Chronicles to be read aloud (by chance containing Mordecai's benefaction to the king) and found out that, by chance, Mordecai had not been rewarded. "On that night could not the king sleep, and he commanded to bring the book of records of the chronicles; and they were read before the king." "And it was found written, that Mordecai had told of Bigthana and Teresh, two of the king's chamberlains, the keepers of the door, who sought to lay hand on the king Ahasuerus." "And the king said, What honour and dignity hath been done to Mordecai for this? Then said the king's servants that ministered unto him, There is nothing done for him." Haman advises the king how to reward the man whom the king wishes to honor (6:4–10). By chance, Haman was first to afoot in the palace early in the morning to give advice. Any courtier of the king could have given advice, but ironically it is Haman who gave it and also who had to perform the bestowing of the honor he actually desired for himself to Mordecai. "So Haman came in, and the king asked him, "What shall be done for the man whom the king delights to honor?"" "Now Haman thought in his heart, "Whom would the king delight to honor more than me?"" Verse 6. The king unintentionally destroys Haman by hiding the name of the person he wants to honor, in an irony to the fact that Haman intentionally hid from the king the name of the people he wants to destroy (). Haman honors Mordecai according to his own advice to the king (6:11). Haman concocted an unusually high honor for the "man whom the king delights to honor", but the king without reflection immediately accepted the advice, only to command Haman to perform it on "Mordecai the Jew", thus unintentionally heaping additional humiliation to Haman. 
The mention of "the Jew" indicates that the king does not relate the Jewish people to the edict of destruction he approved just a few days ago. "Then took Haman the apparel and the horse, and arrayed Mordecai, and brought him on horseback through the street of the city, and proclaimed before him, Thus shall it be done unto the man whom the king delighteth to honour." Verse 11. Haman's desire to wear the king's clothes and ride on the king horse shows the psychology of an outsider who longed, but never really believed he was able, to be an insider of Persian royalty. This is also shown by how thrilled Haman was to be invited to a private banquet with the king and queen (, ). Haman's friends and wife predict his downfall (6:12–14). This section articulates the significant turning point of the story with the prediction concerning the downfall of Haman, the hereditary enemy of the Jews, and the deliverance of the Jews. "And Haman told Zeresh his wife and all his friends every thing that had befallen him. Then said his wise men and Zeresh his wife unto him, If Mordecai be of the seed of the Jews, before whom thou hast begun to fall, thou shalt not prevail against him, but shalt surely fall before him." Verse 13. Zeresh's response is based on the fact that Mordecai is Jewish, conveying a powerful notion underlying the whole book—that the Jews will ultimately survive. "And while they were yet talking with him, came the king's chamberlains, and hasted to bring Haman unto the banquet that Esther had prepared." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62751176
627542
Rolling-element bearing
Bearing which carries a load with rolling elements placed between two grooved rings In mechanical engineering, a rolling-element bearing, also known as a rolling bearing, is a bearing which carries a load by placing rolling elements (such as balls or rollers) between two concentric, grooved rings called races. The relative motion of the races causes the rolling elements to roll with very little rolling resistance and with little sliding. One of the earliest and best-known rolling-element bearings is a set of logs laid on the ground with a large stone block on top. As the stone is pulled, the logs roll along the ground with little sliding friction. As each log comes out the back, it is moved to the front where the block then rolls on to it. It is possible to imitate such a bearing by placing several pens or pencils on a table and placing an item on top of them. See "bearings" for more on the historical development of bearings. A rolling element rotary bearing uses a shaft in a much larger hole, and spheres or cylinders called "rollers" tightly fill the space between the shaft and hole. As the shaft turns, each roller acts like the logs in the above example. However, since the bearing is round, the rollers never fall out from under the load. Rolling-element bearings have the advantage of a good trade-off between cost, size, weight, carrying capacity, durability, accuracy, friction, and so on. Other bearing designs are often better on one specific attribute, but worse in most other attributes, although fluid bearings can sometimes simultaneously outperform on carrying capacity, durability, accuracy, friction, rotation rate and sometimes cost. Only plain bearings are used as widely as rolling-element bearings. They are widely used in automotive, industrial, marine, and aerospace applications, and are indispensable in modern technology. The rolling element bearing was developed from a firm foundation that was built over thousands of years. The concept emerged in its primitive form in Roman times; after a long inactive period in the Middle Ages, it was revived during the Renaissance by Leonardo da Vinci, and developed steadily in the seventeenth and eighteenth centuries. Overall Design. Rolling-element bearings share a broadly similar construction: an outer and an inner track (race), a central bore, a retainer (cage) that keeps the rolling elements from clashing into one another or seizing the bearing movement, and the rolling elements themselves. The internal rolling components differ in design according to the intended application of the bearing. The five main types of bearings are ball, cylindrical, tapered, barrel, and needle. Ball - the simplest design, following the basic principles with minimal geometric constraint; the relative freedom of the track design makes this type somewhat more prone to seizure. Cylindrical - intended for movement about a single axis; the cylindrical shape places more surface area in contact, allowing heavier loads to be carried. Tapered - designed to take both axial and radial loading, which it does by using a conical structure that enables the elements to roll diagonally. Barrel - accommodates high radial shock loads that cause misalignment, using its shape and size to compensate.
Needle - varying in size, diameter, and material, these bearings are best suited to reducing weight and to applications with small cross-sections; they typically offer a higher load capacity than ball bearings and suit rigid-shaft applications. Specific Design Types. Ball bearing. A particularly common kind of rolling-element bearing is the ball bearing. The bearing has inner and outer "races" between which balls roll. Each race features a groove usually shaped so the ball fits slightly loose. Thus, in principle, the ball contacts each race across a very narrow area. However, a load on an infinitely small point would cause infinitely high contact pressure. In practice, the ball deforms (flattens) slightly where it contacts each race much as a tire flattens where it contacts the road. The race also yields slightly where each ball presses against it. Thus, the contact between ball and race is of finite size and has finite pressure. The deformed ball and race do not roll entirely smoothly because different parts of the ball are moving at different speeds as it rolls. Thus, there are opposing forces and sliding motions at each ball/race contact. Overall, these cause bearing drag. Roller bearings. Cylindrical roller. Roller bearings are the earliest known type of rolling-element bearing, dating back to at least 40 BC. Common roller bearings use cylinders of slightly greater length than diameter. Roller bearings typically have a higher radial load capacity than ball bearings, but a lower capacity and higher friction under axial loads. If the inner and outer races are misaligned, the bearing capacity often drops quickly compared to either a ball bearing or a spherical roller bearing. As in all radial bearings, the outer load is continuously re-distributed among the rollers. Often fewer than half of the total number of rollers carry a significant portion of the load. The animation on the right shows how a static radial load is supported by the bearing rollers as the inner ring rotates. Spherical roller. Spherical roller bearings have an outer race with an internal spherical shape. The rollers are thicker in the middle and thinner at the ends. Spherical roller bearings can thus accommodate both static and dynamic misalignment. However, spherical rollers are difficult to produce and thus expensive, and the bearings have higher friction than an ideal cylindrical or tapered roller bearing since there will be a certain amount of sliding between rolling elements and races. Gear bearing. A gear bearing is a roller bearing combined with an epicyclic gear. Each element of it consists of concentric alternating rollers and gearwheels, with the diameter of each roller equal to the pitch diameter of the corresponding gearwheel. The widths of conjugated rollers and gearwheels in each pair are the same. The engagement is herringbone, or uses skewed end faces, to realise efficient rolling axial contact. The downside to this bearing is manufacturing complexity. Gear bearings could be used, for example, as an efficient rotary suspension or as a kinematically simplified planetary gear mechanism in measuring instruments and watches. Tapered roller. Tapered roller bearings use conical rollers that run on conical races. Most roller bearings only take radial or axial loads, but tapered roller bearings support both radial and axial loads, and generally can carry higher loads than ball bearings due to greater contact area. Tapered roller bearings are used, for example, as the wheel bearings of most wheeled land vehicles.
The downsides to this bearing is that due to manufacturing complexities, tapered roller bearings are usually more expensive than ball bearings; and additionally under heavy loads the tapered roller is like a wedge and bearing loads tend to try to eject the roller; the force from the collar which keeps the roller in the bearing adds to bearing friction compared to ball bearings. Needle roller. The needle roller bearing is a special type of roller bearing which uses long, thin cylindrical rollers resembling needles. Often the ends of the rollers taper to points, and these are used to keep the rollers captive, or they may be hemispherical and not captive but held by the shaft itself or a similar arrangement. Since the rollers are thin, the outside diameter of the bearing is only slightly larger than the hole in the middle. However, the small-diameter rollers must bend sharply where they contact the races, and thus the bearing fatigues relatively quickly. CARB toroidal roller bearings. CARB bearings are toroidal roller bearings and similar to spherical roller bearings, but can accommodate both angular misalignment and also axial displacement. Compared to a spherical roller bearing, their radius of curvature is longer than a spherical radius would be, making them an intermediate form between spherical and cylindrical rollers. Their limitation is that, like a cylindrical roller, they do not locate axially. CARB bearings are typically used in pairs with a locating bearing, such as a spherical roller bearing. This non-locating bearing can be an advantage, as it can be used to allow a shaft and a housing to undergo thermal expansion independently. Toroidal roller bearings were introduced in 1995 by SKF as "CARB bearings". The inventor behind the bearing was the engineer Magnus Kellström. Configurations. The configuration of the races determine the types of motions and loads that a bearing can best support. A given configuration can serve multiple of the following types of loading. Thrust loadings. Thrust bearings are used to support axial loads, such as vertical shafts. Common designs are Thrust ball bearings, spherical roller thrust bearings, tapered roller thrust bearings or cylindrical roller thrust bearings. Also non-rolling-element bearings such as hydrostatic or magnetic bearings see some use where particularly heavy loads or low friction is needed. Radial loadings. Rolling-element bearings are often used for axles due to their low rolling friction. For light loads, such as bicycles, ball bearings are often used. For heavy loads and where the loads can greatly change during cornering, such as cars and trucks, tapered rolling bearings are used. Linear motion. Linear motion roller-element bearings are typically designed for either shafts or flat surfaces. Flat surface bearings often consist of rollers and are mounted in a cage, which is then placed between the two flat surfaces; a common example is drawer-support hardware. Roller-element bearing for a shaft use bearing balls in a groove designed to recirculate them from one end to the other as the bearing moves; as such, they are called "linear ball bearings" or "recirculating bearings". Bearing failure. Rolling-element bearings often work well in non-ideal conditions, but sometimes minor problems cause bearings to fail quickly and mysteriously. For example, with a stationary (non-rotating) load, small vibrations can gradually press out the lubricant between the races and rollers or balls (false brinelling). 
Without lubricant the bearing fails, even though it is not rotating and thus is apparently not being used. For these sorts of reasons, much of bearing design is about failure analysis. Vibration-based analysis can be used for fault identification of bearings. There are three usual limits to the lifetime or load capacity of a bearing: abrasion, fatigue and pressure-induced welding. Although there are many other apparent causes of bearing failure, most can be reduced to these three. For example, a bearing which is run dry of lubricant fails not because it is "without lubricant", but because lack of lubrication leads to fatigue and welding, and the resulting wear debris can cause abrasion. Similar events occur in false brinelling damage. In high-speed applications, the oil flow also reduces the bearing metal temperature by convection. The oil becomes the heat sink for the friction losses generated by the bearing. ISO has categorised bearing failures in the standard ISO 15243. Life calculation models. The life of a rolling bearing is expressed as the number of revolutions or the number of operating hours at a given speed that the bearing is capable of enduring before the first sign of metal fatigue (also known as spalling) occurs on the race of the inner or outer ring, or on a rolling element. Calculating the endurance life of bearings is possible with the help of so-called life models. More specifically, life models are used to determine the bearing size – since this must be sufficient to ensure that the bearing is strong enough to deliver the required life under certain defined operating conditions. Under controlled laboratory conditions, however, seemingly identical bearings operating under identical conditions can have different individual endurance lives. Thus, bearing life cannot be calculated based on specific bearings, but is instead expressed in statistical terms, referring to populations of bearings. All information with regard to load ratings is then based on the life that 90% of a sufficiently large group of apparently identical bearings can be expected to attain or exceed. This gives a clearer definition of the concept of bearing life, which is essential to calculate the correct bearing size. Life models can thus help to predict the performance of a bearing more realistically. The prediction of bearing life is described in ISO 281 and the ANSI/American Bearing Manufacturers Association Standards 9 and 11. The traditional life prediction model for rolling-element bearings uses the basic life equation: formula_0 Where: Basic life or formula_1 is the life that 90% of bearings can be expected to reach or exceed. The median or average life, sometimes called Mean Time Between Failure (MTBF), is about five times the calculated basic rating life. Several factors, the 'ASME five factor model', can be used to further adjust the formula_1 life depending upon the desired reliability, lubrication, contamination, etc. The major implication of this model is that bearing life is finite, and reduces by a cube power of the ratio between design load and applied load. This model was developed in work of 1924, 1947 and 1952 by Arvid Palmgren and Gustaf Lundberg in their paper "Dynamic Capacity of Rolling Bearings". The model dates from 1924, the values of the constant formula_4 from the post-war works. Higher formula_4 values may be seen either as a longer lifetime for a correctly-used bearing below its design load, or as an increased rate by which lifetime is shortened when overloaded.
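As a numerical illustration of the basic life equation, the following Python sketch uses the classical life exponent (3 for ball bearings, 10/3 for roller bearings, in line with the "cube power" behaviour described above) and hypothetical catalogue values; real designs should be checked against ISO 281 and manufacturer data.

```python
def basic_rating_life_mrev(C, P, p=3.0):
    """L10 basic rating life in millions of revolutions: (C/P)**p.

    C: basic dynamic load rating [N], P: equivalent dynamic bearing load [N],
    p: life exponent (classically 3 for ball bearings, 10/3 for roller bearings).
    """
    return (C / P) ** p

def basic_rating_life_hours(C, P, rpm, p=3.0):
    """L10h basic rating life in operating hours at a constant speed (rev/min)."""
    return basic_rating_life_mrev(C, P, p) * 1_000_000 / (60 * rpm)

# Hypothetical example: a ball bearing with C = 30.7 kN carrying P = 3.5 kN at 1500 rpm.
print(basic_rating_life_mrev(30_700, 3_500))             # ~675 million revolutions
print(basic_rating_life_hours(30_700, 3_500, rpm=1500))  # ~7,500 operating hours
```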
This model was recognised to have become inaccurate for modern bearings. Particularly owing to improvements in the quality of bearing steels, the mechanisms for how failures develop in the 1924 model are no longer as significant. By the 1990s, real bearings were found to give service lives up to 14 times longer than those predicted. An explanation was put forward based on fatigue life; if the bearing was loaded to never exceed the fatigue strength, then the Lundberg-Palmgren mechanism for failure by fatigue would simply never occur. This relied on homogeneous vacuum-melted steels, such as AISI 52100, that avoided the internal inclusions that had previously acted as stress risers within the rolling elements, and also on smoother finishes to bearing tracks that avoided impact loads. The formula_4 constant now had values of 4 for ball and 5 for roller bearings. Provided that load limits were observed, the idea of a 'fatigue limit' entered bearing lifetime calculations. If the bearing was not loaded beyond this limit, its theoretical lifetime would be limited only by external factors, such as contamination or a failure of lubrication. A new model of bearing life was put forward by FAG and developed by SKF as the Ioannides-Harris model. ISO 281:2000 first incorporated this model and ISO 281:2007 is based on it. The concept of fatigue limit, and thus ISO 281:2007, remains controversial, at least in the US. Generalized Bearing Life Model (GBLM). In 2015, the SKF Generalized Bearing Life Model (GBLM) was introduced. In contrast to previous life models, GBLM explicitly separates surface and subsurface failure modes – making the model flexible to accommodate several different failure modes. Modern bearings and applications show fewer failures, but the failures that do occur are more linked to surface stresses. By separating surface from the subsurface, mitigating mechanisms can more easily be identified. GBLM makes use of advanced tribology models to introduce a surface distress failure mode function, obtained from the evaluation of surface fatigue. For the subsurface fatigue, GBLM uses the classical Hertzian rolling contact model. With all this, GBLM includes the effects of lubrication, contamination, and race surface properties, which together influence the stress distribution in the rolling contact. In 2019, the Generalized Bearing Life Model was relaunched. The updated model offers life calculations also for hybrid bearings, i.e. bearings with steel rings and ceramic (silicon nitride) rolling elements. Even if the 2019 GBLM release was primarily developed to realistically determine the working life of hybrid bearings, the concept can also be used for other products and failure modes. Constraints and trade-offs. All parts of a bearing are subject to many design constraints. For example, the inner and outer races are often complex shapes, making them difficult to manufacture. Balls and rollers, though simpler in shape, are small; since they bend sharply where they run on the races, the bearings are prone to fatigue. The loads within a bearing assembly are also affected by the speed of operation: rolling-element bearings may spin over 100,000 rpm, and the principal load in such a bearing may be momentum rather than the applied load. Smaller rolling elements are lighter and thus have less momentum, but smaller elements also bend more sharply where they contact the race, causing them to fail more rapidly from fatigue. 
Maximum rolling-element bearing speeds are often specified in 'nDm', which is the product of the mean diameter (in mm) and the maximum RPM. For angular contact bearings nDms over 2.1 million have been found to be reliable in high performance rocketry applications. There are also many material issues: a harder material may be more durable against abrasion but more likely to suffer fatigue fracture, so the material varies with the application, and while steel is most common for rolling-element bearings, plastics, glass, and ceramics are all in common use. A small defect (irregularity) in the material is often responsible for bearing failure; one of the biggest improvements in the life of common bearings during the second half of the 20th century was the use of more homogeneous materials, rather than better materials or lubricants (though both were also significant). Lubricant properties vary with temperature and load, so the best lubricant varies with application. Although bearings tend to wear out with use, designers can make tradeoffs of bearing size and cost versus lifetime. A bearing can last indefinitely—longer than the rest of the machine—if it is kept cool, clean, lubricated, is run within the rated load, and if the bearing materials are sufficiently free of microscopic defects. Cooling, lubrication, and sealing are thus important parts of the bearing design. The needed bearing lifetime also varies with the application. For example, Tedric A. Harris reports in his "Rolling Bearing Analysis" on an oxygen pump bearing in the U.S. Space Shuttle which could not be adequately isolated from the liquid oxygen being pumped. All lubricants reacted with the oxygen, leading to fires and other failures. The solution was to lubricate the bearing with the oxygen. Although liquid oxygen is a poor lubricant, it was adequate, since the service life of the pump was just a few hours. The operating environment and service needs are also important design considerations. Some bearing assemblies require routine addition of lubricants, while others are factory sealed, requiring no further maintenance for the life of the mechanical assembly. Although seals are appealing, they increase friction, and in a permanently sealed bearing the lubricant may become contaminated by hard particles, such as steel chips from the race or bearing, sand, or grit that gets past the seal. Contamination in the lubricant is abrasive and greatly reduces the operating life of the bearing assembly. Another major cause of bearing failure is the presence of water in the lubrication oil. Online water-in-oil monitors have been introduced in recent years to monitor the effects of both particles and the presence of water in oil and their combined effect. Designation. Metric rolling-element bearings have alphanumerical designations, defined by ISO 15, to define all of the physical parameters. The main designation is a seven digit number with optional alphanumeric digits before or after to define additional parameters. Here the digits will be defined as: 7654321. Any zeros to the left of the last defined digit are not printed; e.g. a designation of 0007208 is printed 7208. Digits one and two together are used to define the inner diameter (ID), or bore diameter, of the bearing. For diameters between 20 and 495 mm, inclusive, the designation is multiplied by five to give the ID; e.g. designation 08 is a 40 mm ID. 
For inner diameters less than 20 mm the following designations are used: 00 = 10 mm ID, 01 = 12 mm ID, 02 = 15 mm ID, and 03 = 17 mm ID. The third digit defines the "diameter series", which defines the outer diameter (OD). The diameter series, defined in ascending order, is: 0, 8, 9, 1, 7, 2, 3, 4, 5, 6. The fourth digit defines the type of bearing. The fifth and sixth digits define structural modifications to the bearing. For example, on radial thrust bearings the digits define the contact angle, or the presence of seals on any bearing type. The seventh digit defines the "width series", or thickness, of the bearing. The width series, defined from lightest to heaviest, is: 7, 8, 9, 0, 1 (extra light series), 2 (light series), 3 (medium series), 4 (heavy series). The third digit and the seventh digit define the "dimensional series" of the bearing. There are four optional prefix characters, here defined as A321-XXXXXXX (where the X's are the main designation), which are separated from the main designation with a dash. The first character, A, is the bearing class, which is defined, in ascending order: C, B, A. The class defines extra requirements for vibration, deviations in shape, the rolling surface tolerances, and other parameters that are not defined by a designation character. The second character is the frictional moment (friction), which is defined, in ascending order, by a number 1–9. The third character is the radial clearance, which is normally defined by a number between 0 and 9 (inclusive), in ascending order; however, for radial-thrust bearings it is defined by a number between 1 and 3, inclusive. The fourth character is the accuracy rating, which normally is one of, in ascending order: 0 (normal), 6X, 6, 5, 4, T, and 2. Ratings 0 and 6 are the most common; ratings 5 and 4 are used in high-speed applications; and rating 2 is used in gyroscopes. For tapered bearings, the values are, in ascending order: 0, N, and X, where 0 is 0, N is "normal", and X is 6X. There are five optional characters that can be defined after the main designation: A, E, P, C, and T; these are tacked directly onto the end of the main designation. Unlike the prefix, not all of the designations must be defined. "A" indicates an increased dynamic load rating. "E" indicates the use of a plastic cage. "P" indicates that heat-resistant steel is used. "C" indicates the type of lubricant used (C1–C28). "T" indicates the degree to which the bearing components have been tempered (T1–T5). While manufacturers follow ISO 15 for part number designations on some of their products, it is common for them to implement proprietary part number systems that do not correlate to ISO 15. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
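As a hedged sketch of the bore-code rule described above (digits one and two are multiplied by five for bores of 20–495 mm, with the special codes 00–03 for smaller bores), the following Python helper decodes the inner diameter from a designation string. It covers only that single rule and is not a complete ISO 15 parser.

SMALL_BORES = {0: 10, 1: 12, 2: 15, 3: 17}   # special codes for bores below 20 mm


def bore_diameter_mm(designation: str) -> int:
    """Decode the inner (bore) diameter of a metric bearing designation.

    Uses only digits one and two (the last two printed digits); raises
    for codes outside the 10-17 mm specials and the 20-495 mm range.
    """
    code = int(designation[-2:])
    if code in SMALL_BORES:
        return SMALL_BORES[code]
    bore = code * 5
    if not 20 <= bore <= 495:
        raise ValueError(f"bore code {code:02d} is outside the scope of this sketch")
    return bore


print(bore_diameter_mm("7208"))   # bore code 08 -> 40 mm, as in the example above
print(bore_diameter_mm("6301"))   # bore code 01 -> 12 mm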
[ { "math_id": 0, "text": "\nL_{10} = (C/P)^p\n" }, { "math_id": 1, "text": "L_{10}" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=627542
62756140
Esther 7
A chapter in the Book of Esther Esther 7 is the seventh chapter of the Book of Esther in the Hebrew Bible or the Old Testament of the Christian Bible. The author of the book is unknown and modern scholars have established that the final stage of the Hebrew text would have been formed by the second century BCE. Chapters 3 to 8 contain the nine scenes that form the complication in the book. This chapter records the second banquet of Esther. The king Ahasuerus was then determined to grant her any request, so Esther spoke out about the death threat on her people and identified Haman as the perpetrator of the projected genocide. The king went out to his garden in a rage, but shortly came back to see Haman seemingly threatening Esther on her recliner couch. This caused the king to command the hanging of Haman on the very gallows Haman intended for Mordecai. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 10 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Esther reveals Haman's plot (7:1–6). This section records how Esther finally speaks out about the projected genocide to her people (for the first time identifying herself as a member of these people) and then 'requests that the lives of all this group be spared'. The king was then determined to grant her any request, so Esther spoke out about the death threat on her people and identified Haman as the perpetrator of the projected genocide. The king is perplexed because he might have thought he was authorizing a servitude plan instead of annihilation, which gave a chance for Esther to identify Haman as the perpetrator. "Then Queen Esther answered and said, “If I have found favor in your sight, O king, and if it pleases the king, let my life be given me at my petition, and my people at my request." Verse 3. For the first time Esther addressed the king in the second person, 'if I have won your favour', rather than the custom of using the third person, 'if I have won the king's favour', as in 5:8, indicating that she is now 'ready to be direct in her petitions as well as in her identity'. "So King Ahasuerus answered and said to Queen Esther, "Who is he, and where is he, who would dare presume in his heart to do such a thing?"" Haman is hanged (7:7–10). After hearing Esther's words, the king stomped out to his garden in a rage, but said nothing about reversing Haman's edict. Left alone with Esther, the terrified Haman pleaded for mercy, eventually falling upon the couch where she was reclining, right when the king came back into the room. This led to the climactic reversal of the story, which occurs on a personal level, because the king only acts when his own wife is apparently threatened by Haman, just as he issued the decree that "all men are to be masters in their homes" () only after his previous wife's defiance. The king ordered Haman to be hanged on the gallows that Haman himself prepared (cf. ; ).
The impalement of the man who plotted against the queen and against Mordecai (who had saved the king) parallels the impalement of the conspirators against the king reported by Mordecai (). After the removal of the immediate threat to his wife, 'the king's anger is abated' (as when he had dealt with Vashti). "And the king arising from the banquet of wine in his wrath went into the palace garden: and Haman stood up to make request for his life to Esther the queen; for he saw that there was evil determined against him by the king." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62756140
6275730
Merton's portfolio problem
Problem in continuous-time finance Merton's portfolio problem is a problem in continuous-time finance and in particular intertemporal portfolio choice. An investor must choose how much to consume and must allocate their wealth between stocks and a risk-free asset so as to maximize expected utility. The problem was formulated and solved by Robert C. Merton in 1969 both for finite lifetimes and for the infinite case. Research has continued to extend and generalize the model to include factors like transaction costs and bankruptcy. Problem statement. The investor lives from time 0 to time "T"; their wealth at time "T" is denoted "W""T". The investor starts with a known initial wealth "W"0 (which may include the present value of wage income). At time "t" they must choose what amount of their wealth to consume, "c""t", and what fraction of wealth to invest in a stock portfolio, "π""t" (the remaining fraction 1 − "π""t" being invested in the risk-free asset). The objective is formula_0 where "E" is the expectation operator, "u" is a known utility function (which applies both to consumption and to the terminal wealth, or bequest, "W""T"), "ε" parameterizes the desired level of bequest, "ρ" is the subjective discount rate, and formula_1 is a constant which expresses the investor's risk aversion: the higher the gamma, the more reluctance to own stocks. The wealth evolves according to the stochastic differential equation formula_2 where "r" is the risk-free rate, ("μ", "σ") are the expected return and volatility of the stock market and "dB""t" is the increment of the Wiener process, i.e. the stochastic term of the SDE. The utility function is of the constant relative risk aversion (CRRA) form: formula_3 Consumption cannot be negative: "c""t" ≥ 0, while "π""t" is unrestricted (that is, borrowing or shorting stocks is allowed). Investment opportunities are assumed constant, that is "r", "μ", "σ" are known and constant, in this (1969) version of the model, although Merton allowed them to change in his intertemporal CAPM (1973). Solution. Somewhat surprisingly for an optimal control problem, a closed-form solution exists. The optimal consumption and stock allocation depend on wealth and time as follows: formula_4 This expression is commonly referred to as Merton's fraction. Because "W" and "t" do not appear on the right-hand side, a constant fraction of wealth is invested in stocks, no matter what the age or prosperity of the investor. formula_5 where formula_6 and formula_7 Extensions. Many variations of the problem have been explored, but most do not lead to a simple closed-form solution. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
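To make the closed-form solution concrete, here is a small numerical sketch (added for illustration, not part of the original article; the market parameters are invented). It evaluates Merton's fraction and the constant ν that sets the optimal consumption rate in the infinite-horizon case, following the formulas quoted above.

def merton_fraction(mu, r, sigma, gamma):
    """Constant fraction of wealth optimally held in the risky asset."""
    return (mu - r) / (sigma ** 2 * gamma)


def nu(mu, r, sigma, gamma, rho):
    """The constant nu used in the optimal consumption rule c = nu * W (T = infinity)."""
    return (rho - (1 - gamma) * ((mu - r) ** 2 / (2 * sigma ** 2 * gamma) + r)) / gamma


# Hypothetical market: 8% expected stock return, 2% risk-free rate, 20% volatility,
# relative risk aversion gamma = 3, subjective discount rate rho = 4%.
mu, r, sigma, gamma, rho = 0.08, 0.02, 0.20, 3.0, 0.04
print(merton_fraction(mu, r, sigma, gamma))   # 0.5 -> half of wealth in stocks
print(nu(mu, r, sigma, gamma, rho))           # about 0.0367 -> consume ~3.7% of wealth per year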
[ { "math_id": 0, "text": " \\max E \\left[ \\int_0^T e^{-\\rho t}u(c_t) \\, dt + \\epsilon^\\gamma e^{-\\rho T}u(W_T) \\right] " }, { "math_id": 1, "text": "\\gamma" }, { "math_id": 2, "text": "dW_t = [(r + \\pi_t(\\mu-r))W_t - c_t ] \\, dt +W_t \\pi_t \\sigma \\, dB_t " }, { "math_id": 3, "text": " u(x) = \\frac{x^{1-\\gamma}}{1-\\gamma}." }, { "math_id": 4, "text": "\\pi(W,t) = \\frac{\\mu-r}{\\sigma^2\\gamma}." }, { "math_id": 5, "text": "c(W,t)= \\begin{cases}\\nu \\left(1+(\\nu\\epsilon-1)e^{-\\nu(T-t)}\\right)^{-1} W&\\textrm{if}\\;T<\\infty\\;\\textrm{and}\\;\\nu\\neq0\\\\(T-t+\\epsilon)^{-1}W&\\textrm{if}\\;T<\\infty\\;\\textrm{and}\\;\\nu=0\\\\\\nu W&\\textrm{if}\\; T=\\infty\\end{cases}" }, { "math_id": 6, "text": "0\\le\\epsilon\\ll1" }, { "math_id": 7, "text": "\\begin{align}\\nu&=\\left(\\rho-(1-\\gamma)\\left(\\frac{(\\mu-r)^2}{2\\sigma^2\\gamma}+r\\right)\\right)/\\gamma \\\\&=\\rho/\\gamma-(1-\\gamma)\\left(\\frac{(\\mu-r)^2}{2\\sigma^2\\gamma^2}+\\frac r{\\gamma}\\right)\\\\&=\\rho/\\gamma-(1-\\gamma)(\\pi(W,t)^2 \\sigma^2/2+ r/\\gamma)\\\\&=\\rho/\\gamma-(1-\\gamma)((\\mu-r)\\pi(W,t)/2\\gamma+ r/\\gamma).\\end{align}" }, { "math_id": 8, "text": "r,\\mu,\\sigma" } ]
https://en.wikipedia.org/wiki?curid=6275730
6275926
Imaginary Thirteen
Card game Imaginary Thirteen is a solitaire card game which is played with two decks of playing cards. Its gameplay makes it a two-deck version of Calculation and its name is taken from the fact that when a sum is over thirteen, thirteen (from out of nowhere) is subtracted to get the value of the next card, with spot cards worth their face value, jacks eleven, queens twelve, and kings thirteen. Rules. To set up the tableau, an ace, a deuce (two), a trey (three), a four, a five, a six, a seven, and an eight are removed and placed in a row. These cards are markers which remind the player of the number to be added to the top card of the foundations to be placed under them. Then eight cards are placed under the marker cards, each effectively double the value of the card above it. Therefore, a deuce is placed below the ace, a four below the deuce, a six below the three, an eight below the four, a ten below the five, and a queen below the six. Since formula_0 and formula_1, an ace is placed below the seven. Also, formula_2 and formula_3, which puts a trey under the eight. These eight new cards will be the bases for the eight foundations, each to be built up regardless of suit to kings in intervals indicated by the marker cards, i.e. the foundation under the ace is built up by ones, the foundation under the deuce built up by twos, the foundation under the trey built up by threes, and so on. Whenever the total goes over thirteen, it is reduced by thirteen to get the value of the next card. The gameplay consists of taking a card from the stock. If it can be played on the foundations, it is placed on the appropriate foundation. Otherwise, it is placed in one of four waste piles. The top card of each waste pile is available for play. At any time, after a card is placed in the foundations, the player checks the top cards of the waste piles to see if any more plays are possible. This process is repeated until the stock is depleted. The game is won when all cards have been played to the foundations after the stock runs out. The game is lost, however, if there are no more plays after the stock runs out.
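The doubling and subtract-thirteen arithmetic described in the rules can be sketched in a few lines of Python. This is an illustration of the counting rule only (which base card sits under each marker, and in what order each foundation is built), not an implementation of the full game.

def reduce13(total):
    """Subtract thirteen whenever a sum exceeds thirteen (J=11, Q=12, K=13)."""
    return total - 13 if total > 13 else total


def foundation_sequence(marker):
    """Cards played on the foundation under a marker card (1 = ace .. 8).

    The base card is double the marker value; each later card adds the
    marker value again, reducing by thirteen when needed, until a king
    (13) is reached.  Every foundation therefore receives twelve cards.
    """
    seq = [reduce13(2 * marker)]
    while seq[-1] != 13:
        seq.append(reduce13(seq[-1] + marker))
    return seq


for marker in range(1, 9):
    print(marker, foundation_sequence(marker))
# marker 7 -> [1, 8, 2, 9, 3, 10, 4, 11, 5, 12, 6, 13]
# marker 8 -> [3, 11, 6, 1, 9, 4, 12, 7, 2, 10, 5, 13]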
[ { "math_id": 0, "text": "7 * 2 = 14" }, { "math_id": 1, "text": "14 - 13 = 1" }, { "math_id": 2, "text": "8 * 2 = 16" }, { "math_id": 3, "text": "16 - 13 = 3" } ]
https://en.wikipedia.org/wiki?curid=6275926
62769306
Molecular fragmentation methods
Molecular dissociation Molecular fragmentation, or molecular dissociation, occurs both in nature and in experiments. It occurs when a complete molecule is rendered into smaller fragments by some energy source, usually ionizing radiation. The resulting fragments can be far more chemically reactive than the original molecule, as in radiation therapy for cancer, and are thus a useful field of inquiry. Different molecular fragmentation methods have been developed to break apart molecules. Background. A major objective of theoretical chemistry and computational chemistry is the calculation of the energy and properties of molecules so that chemical reactivity and material properties can be understood from first principles. As a practical matter, the aim is to complement the knowledge we gain from experiments, particularly where experimental data may be incomplete or very difficult to obtain. High-level ab-initio quantum chemistry methods are known to be an invaluable tool for understanding the structure, energy, and properties of small to medium-sized molecules. However, the computational time for these calculations grows rapidly with increased size of molecules. One way of dealing with this problem is the molecular fragmentation approach, which provides a hierarchy of approximations to the molecular electronic energy. In this approach, large molecules are divided in a systematic way into small fragments, for which high-level ab-initio calculations can be performed with acceptable computational time. The defining characteristic of an energy-based molecular fragmentation method is that the molecule (also a cluster of molecules, or a liquid or solid) is broken up into a set of relatively small molecular fragments, in such a way that the electronic energy, formula_0, of the full system formula_1 is given by a sum of the energies of these fragment molecules: formula_2 where formula_3 is the energy of a relatively small molecular fragment, formula_4. The formula_5 are simple coefficients (typically integers), and formula_6 is the number of fragment molecules. Some of the methods also require a correction to the energies evaluated from the fragments. However, where necessary, this correction, formula_7, is easily computed. Methods. Different methods have been devised to fragment molecules; among them are a number of energy-based methods. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
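A minimal sketch of how the energy expression above is assembled (an illustration added here, not a real fragmentation code; the fragment energies and coefficients are invented): given fragment energies and their coefficients, the total electronic energy estimate is the weighted sum plus an optional correction term.

def fragment_energy_sum(fragment_energies, coefficients, correction=0.0):
    """Energy-based fragmentation estimate of the total electronic energy.

    fragment_energies -- energies E_i of the fragments (e.g. in hartree),
                         each from a separate ab-initio calculation
    coefficients      -- the (typically integer) coefficients c_i
    correction        -- optional correction term epsilon_F
    """
    if len(fragment_energies) != len(coefficients):
        raise ValueError("need exactly one coefficient per fragment")
    return sum(c * e for c, e in zip(coefficients, fragment_energies)) + correction


# Invented example: two overlapping fragments and their shared piece,
# counted in an inclusion-exclusion fashion.
energies = [-115.21, -115.34, -76.41]
coefficients = [1, 1, -1]
print(fragment_energy_sum(energies, coefficients))   # -154.14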
[ { "math_id": 0, "text": "E_F" }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": " E_F = \\sum_{i=1}^{N_{frag}}(c_i E_i) + \\epsilon_F " }, { "math_id": 3, "text": "E_i" }, { "math_id": 4, "text": "F_i" }, { "math_id": 5, "text": "c_i" }, { "math_id": 6, "text": "N_{frag}" }, { "math_id": 7, "text": "\\epsilon_F" } ]
https://en.wikipedia.org/wiki?curid=62769306
62769759
Newman's conjecture
Unsolved problem in mathematics &lt;templatestyles src="Unsolved/styles.css" /&gt; In mathematics, specifically in number theory, Newman's conjecture is a conjecture about the behavior of the partition function modulo any integer. Specifically, it states that for any integers m and r such that formula_0, the value of the partition function formula_1 satisfies the congruence formula_2 for infinitely many non-negative integers n. It was formulated by mathematician Morris Newman in 1960. It is unsolved as of 2020. History. Oddmund Kolberg was probably the first to prove a related result, namely that the partition function takes both even and odd values infinitely often. The proof employed was elementary and easily accessible, and was proposed as an exercise by Newman in the American Mathematical Monthly. A year later, in 1960, Newman proposed the conjecture and proved the cases m=5 and 13 in his original paper, and m=65 two years later. Ken Ono, an American mathematician, made further advances by exhibiting sufficient conditions for the conjecture to hold for prime m. He first showed that Newman's conjecture holds for prime m if for each r between 0 and m-1, there exists a nonnegative integer n such that formula_3 and formula_4. He used the result, together with a computer program, to prove the conjecture for all primes less than 1000 (except 3). Ahlgren expanded on his result to show that Ono's condition is, in fact, true for all composite numbers coprime to 6. Three years later, Ono showed that for every prime m greater than 3, one of the following must hold: Using computer technology, he proved the theorem for all primes less than 200,000 (except 3). Afterwards, Ahlgren and Boylan used Ono's criterion to extend Newman's conjecture to all primes except possibly 3. Two years afterwards, they extended their result to all prime powers except powers of 2 or 3. Partial progress and solved cases. The weaker statement that formula_7 has at least one solution has been proved for all m. It was formerly known as the Erdős–Ivić conjecture, named after mathematicians Paul Erdős and Aleksandar Ivić. It was settled by Ken Ono. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
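As a computational illustration (added here, not from the article, and of course not a proof of anything), the partition function can be generated with Euler's pentagonal-number recurrence and reduced modulo a small m; every residue class is expected to appear, as the conjecture predicts, and in practice it does so quickly for small moduli such as 5 or 13.

def partition_numbers(limit):
    """p(0), ..., p(limit) via Euler's pentagonal number recurrence."""
    p = [1] + [0] * limit
    for n in range(1, limit + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2          # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = 1 if k % 2 else -1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p


def residues_hit(m, limit=2000):
    """Residues r modulo m that occur as p(n) mod m for some n <= limit."""
    return sorted({pn % m for pn in partition_numbers(limit)})


print(residues_hit(5))    # [0, 1, 2, 3, 4]
print(residues_hit(13))   # expected: every residue 0, 1, ..., 12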
[ { "math_id": 0, "text": "0\\le r\\le m-1" }, { "math_id": 1, "text": "p(n)" }, { "math_id": 2, "text": "p(n)\\equiv r\\pmod{m}" }, { "math_id": 3, "text": "24\\mid mn+1 " }, { "math_id": 4, "text": "p \\left (\\frac{mn+1}{24} \\right ) \\equiv r \\pmod{m}" }, { "math_id": 5, "text": "m\\mid p(mn+k)" }, { "math_id": 6, "text": "1\\le k < 24,24k\\equiv 1 \\pmod{m}" }, { "math_id": 7, "text": "p(n)\\equiv 0 \\pmod{m}" } ]
https://en.wikipedia.org/wiki?curid=62769759
62773982
Radcliffe wave
Coherent, wave-shaped gaseous structure in the Milky Way The Radcliffe wave is a neighbouring coherent gaseous structure in the Milky Way, dotted with a related high concentration of interconnected stellar nurseries. It stretches about 8,800 light years. This structure runs along the trajectory of the Milky Way's arms. At its closest point (the Taurus Molecular Cloud) it lies around 400 light-years from the Sun, and at its farthest (the Cygnus X star complex) about 5,000 light-years, always within the Local Arm (Orion Arm) itself, spanning about 40% of its length and on average 20% of its width. Its discovery was announced in January 2020, and its proximity surprised astronomers. Formation. Scientists do not know how the undulation of dust and gas formed. It has been suggested that it could be a result of a much smaller galaxy colliding with the Milky Way, leaving behind "ripples", or could be related to dark matter. Inside the dense clouds, gas can be so compressed that new stars are born. It has been suggested that this may be where the Sun originated. Many of the star-forming regions found in the Radcliffe wave were thought to be part of a similar-sized but somewhat heliocentric ring which contained the Solar System, the "Gould Belt". It is now understood that the nearest discrete concentration of sparse interstellar matter instead forms a massive wave. Discovery. The wave was discovered by an international team of astronomers including Catherine Zucker and João Alves. It was announced by co-author Alyssa A. Goodman at the 235th meeting of the American Astronomical Society, held in Honolulu, and published in the journal "Nature" on 7 January 2020. The discovery was made using data collected by the European Space Agency's "Gaia" space observatory. The wave was invisible in 2D, requiring new 3D techniques of mapping interstellar matter to reveal its pattern, using the data-visualization software Glue. The proximity of the wave surprised astronomers. It is named after the Radcliffe Institute for Advanced Study in Cambridge, Massachusetts, the place of study of the team.
[ { "math_id": 0, "text": "\\geq3\\times10^6" } ]
https://en.wikipedia.org/wiki?curid=62773982
62775
Pre-abelian category
Category In mathematics, specifically in category theory, a pre-abelian category is an additive category that has all kernels and cokernels. Spelled out in more detail, this means that a category C is pre-abelian if: (1) C is preadditive, that is, enriched over the monoidal category of abelian groups; (2) C has all finite biproducts, which serve as both finite products and finite coproducts; and (3) given any morphism "f": "A" → "B" in C, the equaliser and the coequaliser of "f" with the zero morphism from "A" to "B" exist (these are the kernel and the cokernel of "f", respectively). Note that the zero morphism in item 3 can be identified as the identity element of the hom-set Hom("A","B"), which is an abelian group by item 1; or as the unique morphism "A" → 0 → "B", where 0 is a zero object, guaranteed to exist by item 2. Examples. The original example of an additive category is the category Ab of abelian groups. Ab is preadditive because it is a closed monoidal category, the biproduct in Ab is the finite direct sum, the kernel is the inclusion of the ordinary kernel from group theory and the cokernel is the quotient map onto the ordinary cokernel from group theory. Other common examples include the category of (left) modules over a ring (in particular, the category of vector spaces over a field) and the category of topological abelian groups. These will give you an idea of what to think of; for more examples, see abelian category (every abelian category is pre-abelian). Elementary properties. Every pre-abelian category is of course an additive category, and many basic properties of these categories are described under that subject. This article concerns itself with the properties that hold specifically because of the existence of kernels and cokernels. Although kernels and cokernels are special kinds of equalisers and coequalisers, a pre-abelian category actually has "all" equalisers and coequalisers. We simply construct the equaliser of two morphisms "f" and "g" as the kernel of their difference "g" − "f"; similarly, their coequaliser is the cokernel of their difference. Since pre-abelian categories have all finite products and coproducts (the biproducts) and all binary equalisers and coequalisers (as just described), then by a general theorem of category theory, they have all finite limits and colimits. That is, pre-abelian categories are finitely complete. The existence of both kernels and cokernels gives a notion of image and coimage. We can define these as im "f" := ker coker "f"; coim "f" := coker ker "f". That is, the image is the kernel of the cokernel, and the coimage is the cokernel of the kernel. Note that this notion of image may not correspond to the usual notion of image, or range, of a function, even assuming that the morphisms in the category "are" functions. For example, in the category of topological abelian groups, the image of a morphism actually corresponds to the inclusion of the "closure" of the range of the function. For this reason, people will often distinguish the meanings of the two terms in this context, using "image" for the abstract categorical concept and "range" for the elementary set-theoretic concept. In many common situations, such as the category of sets, where images and coimages exist, their objects are isomorphic. Put more precisely, we have a factorisation of "f": "A" → "B" as "A" → "C" → "I" → "B", where the morphism on the left is the coimage, the morphism on the right is the image, and the morphism in the middle (called the "parallel" of "f") is an isomorphism. In a pre-abelian category, "this is not necessarily true". The factorisation shown above does always exist, but the parallel might not be an isomorphism. In fact, the parallel of "f" is an isomorphism for every morphism "f" if and only if the pre-abelian category is an abelian category. An example of a non-abelian, pre-abelian category is, once again, the category of topological abelian groups.
As remarked, the image is the inclusion of the "closure" of the range; however, the coimage is a quotient map onto the range itself. Thus, the parallel is the inclusion of the range into its closure, which is not an isomorphism unless the range was already closed. Exact functors. Recall that all finite limits and colimits exist in a pre-abelian category. In general category theory, a functor is called "left exact" if it preserves all finite limits and "right exact" if it preserves all finite colimits. (A functor is simply "exact" if it's both left exact and right exact.) In a pre-abelian category, exact functors can be described in particularly simple terms. First, recall that an additive functor is a functor "F": C → D between preadditive categories that acts as a group homomorphism on each hom-set. Then it turns out that a functor between pre-abelian categories is left exact if and only if it is additive and preserves all kernels, and it's right exact if and only if it's additive and preserves all cokernels. Note that an exact functor, because it preserves both kernels and cokernels, preserves all images and coimages. Exact functors are most useful in the study of abelian categories, where they can be applied to exact sequences. Maximal exact structure. On every pre-abelian category formula_0 there exists an exact structure formula_1 that is maximal in the sense that it contains every other exact structure. The exact structure formula_1 consists of precisely those kernel-cokernel pairs formula_2 where formula_3 is a semi-stable kernel and formula_4 is a semi-stable cokernel. Here, formula_5 is a semi-stable kernel if it is a kernel and for each morphism formula_6 in the pushout diagram formula_7 the morphism formula_8 is again a kernel. formula_9 is a semi-stable cokernel if it is a cokernel and for every morphism formula_10 in the pullback diagram formula_11 the morphism formula_12 is again a cokernel. A pre-abelian category formula_0 is quasi-abelian if and only if all kernel-cokernel pairs form an exact structure. An example for which this is not the case is the category of (Hausdorff) bornological spaces. The result is also valid for additive categories that are not pre-abelian but Karoubian. Special cases. The pre-abelian categories most commonly studied are in fact abelian categories; for example, Ab is an abelian category. Pre-abelian categories that are not abelian appear for instance in functional analysis.
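To make the image/coimage factorisation discussed above concrete, here is a small worked example (added for illustration, not part of the original article; the second case assumes the Hausdorff setting implicit in the remark about closures). It contrasts the abelian case Ab with the pre-abelian but non-abelian case of topological abelian groups.

% Case 1 (Ab): take f : Z -> Z, f(x) = 2x.
%   ker f = 0, coker f = Z/2Z, so
%   im f  = ker(coker f) is the inclusion 2Z -> Z, and
%   coim f = coker(ker f) is the identity Z -> Z.
% The factorisation A -> C -> I -> B becomes
\[
  \mathbb{Z}
  \xrightarrow{\ \operatorname{coim} f\ } \mathbb{Z}
  \xrightarrow{\ \overline{f}\ } 2\mathbb{Z}
  \xrightarrow{\ \operatorname{im} f\ } \mathbb{Z},
  \qquad \overline{f}(x) = 2x,
\]
% and the parallel \overline{f} is an isomorphism, as it must be in an abelian category.

% Case 2 (Hausdorff topological abelian groups): take the inclusion
% f : Q -> R, with Q carrying the subspace topology.  Here
%   coker f = R / closure(Q) = R / R = 0,  so  im f = ker(R -> 0) = R,
%   coim f = coker(0 -> Q) = Q,
% and the parallel is the dense inclusion
\[
  \overline{f} \colon \mathbb{Q} \longrightarrow \mathbb{R},
\]
% which is both monic and epic but not an isomorphism.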
[ { "math_id": 0, "text": "\\mathcal A" }, { "math_id": 1, "text": "\\mathcal{E}_{\\text{max}}" }, { "math_id": 2, "text": "(f,g)" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "\\mathcal g" }, { "math_id": 5, "text": "f:X\\rightarrow Y" }, { "math_id": 6, "text": "h:X\\rightarrow Z" }, { "math_id": 7, "text": "\n\\begin{array}{ccc} \nX & \\xrightarrow{f} & Y \\\\\n\\downarrow_{h} & & \\downarrow_{h'}\\\\\nZ & \\xrightarrow{f'} & Q\n\\end{array}\n" }, { "math_id": 8, "text": "f'" }, { "math_id": 9, "text": "g: X\\rightarrow Y" }, { "math_id": 10, "text": "h: Z\\rightarrow Y" }, { "math_id": 11, "text": "\n\\begin{array}{ccc} \nP & \\xrightarrow{g'} & Z \\\\\n\\downarrow_{h'} & & \\downarrow_{h}\\\\\nX & \\xrightarrow{g} & Y\n\\end{array}\n" }, { "math_id": 12, "text": "g'" }, { "math_id": 13, "text": "\\overline{f}:\\operatorname{coim}f\\rightarrow\\operatorname{im}f" } ]
https://en.wikipedia.org/wiki?curid=62775
62777715
Esther 8
A chapter in the Book of Esther Esther 8 is the eighth chapter of the Book of Esther in the Hebrew Bible or the Old Testament of the Christian Bible. The author of the book is unknown and modern scholars have established that the final stage of the Hebrew text would have been formed by the second century BCE. Chapters 3 to 8 contain the nine scenes that form the complication in the book. This chapter contains the effort to deal with the irreversible decree against the Jews now that Haman is dead and Mordecai is elevated to the position of prime minister. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 17 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Esther saves the Jews (8:1–8). The death of Haman does not change the fact that the irreversible decree to destroy the Jews, written in the king's name and sealed with his ring (), still stands. The king gave Haman's household to Esther and gave Mordecai Haman's signet ring, but the king refused to regard the decree as his problem, even as Esther tearfully begged him to "avert the evil design of Haman the Agagite" (). Thus, Mordecai and Esther together had to come up with a solution, after receiving the king's permission to "write whatever (they) like about the Jews" (verse 8). "On that day King Ahasuerus gave to Queen Esther the house of Haman, the enemy of the Jews. And Mordecai came before the king, for Esther had told what he was to her." Verse 1. The king may see the giving of Haman's house to Esther as suitable compensation because Haman has wronged her in two ways: "And the king took off his signet ring, which he had taken from Haman, and gave it to Mordecai." "And Esther set Mordecai over the house of Haman." Verse 2. The second part of verse 2 displays a shift of the focus to Esther, as she is now the one who makes decisions. "Then Esther spoke again to the king and fell down at his feet and begged him with tears to avert the evil of Haman the Agagite, and the scheme that he had devised against the Jews." Verse 3. The change of tone of Esther's petition before the king indicates her awareness that the gift of Haman's house to her and the signet ring to Mordecai will not do any good after the thirteenth of Adar as long as the decree to annihilate the Jews still stands. Esther only mentioned Haman as the sole enemy of the Jews (cf. ) and avoided implicating the king in this plot. "For how can I endure to see the evil that shall come unto my people? or how can I endure to see the destruction of my kindred?" Verse 6. Esther used the same two terms, 'people' and 'kindred', as she reversed the act of concealing her identity previously in , when she entered the harem. "Write ye also for the Jews, as it liketh you, in the king's name, and seal it with the king's ring: for the writing which is written in the king's name, and sealed with the king's ring, may no man reverse." Mordecai's balancing act (8:9–17).
The narrative starts with an elaborate description of the system used to dispatch the letters conveying the solution (verses 9–10), then of their content, which reveals Mordecai's ingenious ploy: a second decree that does not contradict the first one but effectively annuls it by authorizing the Jews to defend themselves against those executing the first decree (verse 11). Both Jews and non-Jews throughout the empire saw the second decree as a bloodless victory for the Jewish cause, and the Jews were so clearly perceived to have the upper hand that many non-Jews spontaneously converted to Judaism (verse 17). "So the king’s scribes were called at that time, in the third month, which is the month of Sivan, on the twenty-third day; and it was written, according to all that Mordecai commanded, to the Jews, the satraps, the governors, and the princes of the provinces from India to Ethiopia, one hundred and twenty-seven provinces in all, to every province in its own script, to every people in their own language, and to the Jews in their own script and language." "By these letters the king permitted the Jews who were in every city to gather together and protect their lives—to destroy, kill, and annihilate all the forces of any people or province that would assault them, both little children and women, and to plunder their possessions," Verse 11. This second edict can be compared and contrasted to the first one as recorded in : "And Mordecai went out from the presence of the king in royal apparel of blue and white, and with a great crown of gold, and with a garment of fine linen and purple: and the city of Shushan rejoiced and was glad." "And in every province, and in every city, whithersoever the king's commandment and his decree came, the Jews had joy and gladness, a feast and a good day. And many of the people of the land became Jews; for the fear of the Jews fell upon them." Verse 17. This verse can be compared and contrasted to : Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62777715
627828
Double Mersenne number
In mathematics, a double Mersenne number is a Mersenne number of the form formula_0 where "p" is prime. Examples. The first four terms of the sequence of double Mersenne numbers are (sequence in the OEIS): formula_1 formula_2 formula_3 formula_4 Double Mersenne primes. A double Mersenne number that is prime is called a double Mersenne prime. Since a Mersenne number "M""p" can be prime only if "p" is prime (see Mersenne prime for a proof), a double Mersenne number formula_5 can be prime only if "M""p" is itself a Mersenne prime. For the first values of "p" for which "M""p" is prime, formula_6 is known to be prime for "p" = 2, 3, 5, 7, while explicit factors of formula_6 have been found for "p" = 13, 17, 19, and 31. Thus, the smallest candidate for the next double Mersenne prime is formula_8, or 2^2305843009213693951 − 1. Being approximately 1.695 × 10^694127911065419641, this number is far too large for any currently known primality test. It has no prime factor below 1 × 10^36. There are probably no double Mersenne primes other than the four known. The smallest prime factors of formula_6 (where "p" is the "n"th prime) are 7, 127, 2147483647, 170141183460469231731687303715884105727, 47, 338193759479, 231733529, 62914441, 2351, 1399, 295257526626031, 18287, 106937, 863, 4703, 138863, 22590223644617, ... (next term is &gt; 1 × 10^36) (sequence in the OEIS) Catalan–Mersenne number conjecture. The recursively defined sequence formula_9 formula_10 is called the sequence of Catalan–Mersenne numbers. The first terms of the sequence (sequence in the OEIS) are: formula_11 formula_12 formula_13 formula_14 formula_15 formula_16 Catalan discovered this sequence after the discovery of the primality of formula_17 by Lucas in 1876. Catalan conjectured that they are prime "up to a certain limit". Although the first five terms are prime, no known methods can prove that any further terms are prime (in any reasonable time) simply because they are too huge. However, if formula_18 is not prime, there is a chance to discover this by computing formula_18 modulo some small prime formula_7 (using recursive modular exponentiation). If the resulting residue is zero, formula_7 represents a factor of formula_18 and thus would disprove its primality. Since formula_18 is a Mersenne number, such a prime factor formula_7 would have to be of the form formula_19. Additionally, because formula_20 is composite when formula_21 is composite, the discovery of a composite term in the sequence would preclude the possibility of any further primes in the sequence. If formula_18 were prime, it would also contradict the New Mersenne conjecture. It is known that formula_22 is composite, with factor formula_23. In popular culture. In the Futurama movie , the double Mersenne number formula_24 is briefly seen in "an elementary proof of the Goldbach conjecture". In the movie, this number is known as a "Martian prime". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
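A short Python sketch (an illustration added here, not from the article): it generates the first double Mersenne numbers and performs the kind of trial-factor search described above, using the fact that when "M""p" is prime, any prime factor of formula_6 must have the form 2·k·"M""p" + 1.

def mersenne(p):
    return (1 << p) - 1                # 2**p - 1


def double_mersenne(p):
    return mersenne(mersenne(p))       # 2**(2**p - 1) - 1


# The four known double Mersenne primes come from p = 2, 3, 5 and 7:
for p in (2, 3, 5, 7):
    print(p, double_mersenne(p))


def small_factor(p, k_limit=100_000):
    """For prime M_p, look for a factor of M_{M_p} of the form 2*k*M_p + 1.

    Returns the first such factor found, or None if none exists below the limit.
    """
    q = mersenne(p)
    for k in range(1, k_limit):
        candidate = 2 * k * q + 1
        if pow(2, q, candidate) == 1:  # candidate divides 2**q - 1
            return candidate
    return None


print(small_factor(19))   # 62914441  (k = 60), matching the factor list above
print(small_factor(17))   # 231733529 (k = 884)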
[ { "math_id": 0, "text": "M_{M_p} = 2^{2^p-1}-1" }, { "math_id": 1, "text": "M_{M_2} = M_3 = 7 " }, { "math_id": 2, "text": "M_{M_3} = M_7 = 127 " }, { "math_id": 3, "text": "M_{M_5} = M_{31} = 2147483647 " }, { "math_id": 4, "text": "M_{M_7} = M_{127} = 170141183460469231731687303715884105727 " }, { "math_id": 5, "text": "M_{M_p}" }, { "math_id": 6, "text": "M_{M_{p}}" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "M_{M_{61}}" }, { "math_id": 9, "text": "c_0 = 2" }, { "math_id": 10, "text": "c_{n+1} = 2^{c_n}-1 = M_{c_n}" }, { "math_id": 11, "text": "c_0 = 2 " }, { "math_id": 12, "text": "c_1 = 2^2-1 = 3 " }, { "math_id": 13, "text": "c_2 = 2^3-1 = 7 " }, { "math_id": 14, "text": "c_3 = 2^7-1 = 127 " }, { "math_id": 15, "text": "c_4 = 2^{127}-1 = 170141183460469231731687303715884105727 " }, { "math_id": 16, "text": "c_5 = 2^{170141183460469231731687303715884105727}-1 \\approx 5.45431 \\times 10^{51217599719369681875006054625051616349} \\approx 10^{10^{37.70942}}" }, { "math_id": 17, "text": "M_{127}=c_4" }, { "math_id": 18, "text": "c_5" }, { "math_id": 19, "text": "2kc_4 +1" }, { "math_id": 20, "text": "2^n-1" }, { "math_id": 21, "text": "n" }, { "math_id": 22, "text": "\\frac{2^{c_4} + 1}{3}" }, { "math_id": 23, "text": " 886407410000361345663448535540258622490179142922169401 = 5209834514912200c_4 + 1" }, { "math_id": 24, "text": "M_{M_7}" } ]
https://en.wikipedia.org/wiki?curid=627828
6278350
Grimm's conjecture
Prime number conjecture In number theory, Grimm's conjecture (named after Carl Albert Grimm, 1 April 1926 – 2 January 2018) states that to each element of a set of consecutive composite numbers one can assign a distinct prime that divides it. It was first published in "American Mathematical Monthly", 76(1969) 1126-1128. Formal statement. If "n" + 1, "n" + 2, ..., "n" + "k" are all composite numbers, then there are "k" distinct primes "p""i" such that "p""i" divides "n" + "i" for 1 ≤ "i" ≤ "k". Weaker version. A weaker, though still unproven, version of this conjecture states: If there is no prime in the interval formula_0, then formula_1 has at least "k" distinct prime divisors.
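A brute-force verification for small numbers (a sketch added here; checking small cases proves nothing about the conjecture in general): for each run of consecutive composites between two primes, try to assign a distinct prime divisor to every number using simple backtracking.

def prime_factors(n):
    """Distinct prime factors of n by trial division (fine for small n)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors


def has_distinct_assignment(numbers):
    """Can each number be given its own prime divisor, all primes distinct?"""
    def backtrack(i, used):
        if i == len(numbers):
            return True
        return any(backtrack(i + 1, used | {p})
                   for p in prime_factors(numbers[i]) if p not in used)
    return backtrack(0, frozenset())


def check_grimm(limit):
    """Check every run of consecutive composites below `limit`; return a
    failing run if one is found (none is expected)."""
    n = 2
    while n < limit:
        if prime_factors(n) == {n}:                  # n is prime
            run, m = [], n + 1
            while m < limit and prime_factors(m) != {m}:
                run.append(m)
                m += 1
            if run and not has_distinct_assignment(run):
                return run
            n = m
        else:
            n += 1
    return None


print(check_grimm(10_000))   # None: no counterexample among these small runs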
[ { "math_id": 0, "text": "[n+1, n+k]" }, { "math_id": 1, "text": "\\prod_{1\\le x\\le k}(n+x)" } ]
https://en.wikipedia.org/wiki?curid=6278350
62786585
SARS-CoV-2
Virus that causes COVID-19 Severe acute respiratory syndrome coronavirus 2 (SARS‑CoV‑2) is a strain of coronavirus that causes COVID-19, the respiratory illness responsible for the COVID-19 pandemic. The virus previously had the provisional name 2019 novel coronavirus (2019-nCoV), and has also been called human coronavirus 2019 (HCoV-19 or hCoV-19). SARS‑CoV‑2 is a positive-sense single-stranded RNA virus that is contagious in humans. SARS‑CoV‑2 is a strain of the species "Betacoronavirus pandemicum" (SARSr-CoV), as is SARS-CoV-1, the virus that caused the 2002–2004 SARS outbreak. There are animal-borne coronavirus strains more closely related to SARS-CoV-2, the most closely known relative being the BANAL-52 bat coronavirus. SARS-CoV-2 is of zoonotic origin; its close genetic similarity to bat coronaviruses suggests it emerged from such a bat-borne virus. Research is ongoing as to whether SARS‑CoV‑2 came directly from bats or indirectly through any intermediate hosts. The virus shows little genetic diversity, indicating that the spillover event introducing SARS‑CoV‑2 to humans is likely to have occurred in late 2019. Epidemiological studies estimate that in the period between December 2019 and September 2020 each infection resulted in an average of 2.4–3.4 new infections when no members of the community were immune and no preventive measures were taken. However, some subsequent variants have become more infectious. The virus is airborne and primarily spreads between people through close contact and via aerosols and respiratory droplets that are exhaled when talking, breathing, or otherwise exhaling, as well as those produced from coughs and sneezes. It enters human cells by binding to angiotensin-converting enzyme 2 (ACE2), a membrane protein that regulates the renin–angiotensin system. Terminology. During the initial outbreak in Wuhan, China, various names were used for the virus; some names used by different sources included "the coronavirus" or "Wuhan coronavirus". In January 2020, the World Health Organization (WHO) recommended "2019 novel coronavirus" (2019-nCoV) as the provisional name for the virus. This was in accordance with WHO's 2015 guidance against using geographical locations, animal species, or groups of people in disease and virus names. On 11 February 2020, the International Committee on Taxonomy of Viruses adopted the official name "severe acute respiratory syndrome coronavirus 2" (SARS‑CoV‑2). To avoid confusion with the disease SARS, the WHO sometimes refers to SARS‑CoV‑2 as "the COVID-19 virus" in public health communications and the name HCoV-19 was included in some research articles. Referring to COVID-19 as the "Wuhan virus" has been described as dangerous by WHO officials, and as xenophobic by many journalists and academics. Infection and transmission. Human-to-human transmission of SARS‑CoV‑2 was confirmed on 20 January 2020 during the COVID-19 pandemic. Transmission was initially assumed to occur primarily via respiratory droplets from coughs and sneezes within a range of about . Laser light scattering experiments suggest that speaking is an additional mode of transmission and a far-reaching one, indoors, with little air flow. Other studies have suggested that the virus may be airborne as well, with aerosols potentially being able to transmit the virus. During human-to-human transmission, between 200 and 800 infectious SARS‑CoV‑2 virions are thought to initiate a new infection. 
If confirmed, aerosol transmission has biosafety implications because a major concern associated with the risk of working with emerging viruses in the laboratory is the generation of aerosols from various laboratory activities which are not immediately recognizable and may affect other scientific personnel. Indirect contact via contaminated surfaces is another possible cause of infection. Preliminary research indicates that the virus may remain viable on plastic (polypropylene) and stainless steel (AISI 304) for up to three days, but it does not survive on cardboard for more than one day or on copper for more than four hours. The virus is inactivated by soap, which destabilizes its lipid bilayer. Viral RNA has also been found in stool samples and semen from infected individuals. The degree to which the virus is infectious during the incubation period is uncertain, but research has indicated that the pharynx reaches peak viral load approximately four days after infection or in the first week of symptoms and declines thereafter. The duration of SARS-CoV-2 RNA shedding is generally between 3 and 46 days after symptom onset. A study by a team of researchers from the University of North Carolina found that the nasal cavity is seemingly the dominant initial site of infection, with subsequent aspiration-mediated virus-seeding into the lungs in SARS‑CoV‑2 pathogenesis. They found that there was an infection gradient from high in proximal towards low in distal pulmonary epithelial cultures, with a focal infection in ciliated cells and type 2 pneumocytes in the airway and alveolar regions respectively. Studies have identified a range of animals—such as cats, ferrets, hamsters, non-human primates, minks, tree shrews, raccoon dogs, fruit bats, and rabbits—that are susceptible and permissive to SARS-CoV-2 infection. Some institutions have advised that those infected with SARS‑CoV‑2 restrict their contact with animals. Asymptomatic and presymptomatic transmission. On 1 February 2020, the World Health Organization (WHO) indicated that "transmission from asymptomatic cases is likely not a major driver of transmission". One meta-analysis found that 17% of infections are asymptomatic, and asymptomatic individuals were 42% less likely to transmit the virus. However, an epidemiological model of the beginning of the outbreak in China suggested that "pre-symptomatic shedding may be typical among documented infections" and that subclinical infections may have been the source of a majority of infections. That may explain how, out of 217 people on board a cruise liner that docked at Montevideo, 128 tested positive for viral RNA but only 24 showed symptoms. Similarly, a study of ninety-four patients hospitalized in January and February 2020 estimated that patients began shedding virus two to three days before symptoms appeared and that "a substantial proportion of transmission probably occurred before first symptoms in the index case". The authors later published a correction that showed that shedding began earlier than first estimated, four to five days before symptoms appeared. Reinfection. There is uncertainty about reinfection and long-term immunity. It is not known how common reinfection is, but reports have indicated that it is occurring with variable severity.
The first reported case of reinfection was a 33-year-old man from Hong Kong who first tested positive on 26 March 2020, was discharged on 15 April 2020 after two negative tests, and tested positive again on 15 August 2020 (142 days later), which was confirmed by whole-genome sequencing showing that the viral genomes between the episodes belong to different clades. The findings had the implications that herd immunity may not eliminate the virus if reinfection is not an uncommon occurrence and that vaccines may not be able to provide lifelong protection against the virus. Another case study described a 25-year-old man from Nevada who tested positive for SARS‑CoV‑2 on 18 April 2020 and on 5 June 2020 (separated by two negative tests). Since genomic analyses showed significant genetic differences between the SARS‑CoV‑2 variant sampled on those two dates, the case study authors determined this was a reinfection. The man's second infection was symptomatically more severe than the first infection, but the mechanisms that could account for this are not known. Reservoir and origin. No natural reservoir for SARS-CoV-2 has been identified. Prior to the emergence of SARS-CoV-2 as a pathogen infecting humans, there had been two previous zoonosis-based coronavirus epidemics, those caused by SARS-CoV-1 and MERS-CoV. The first known infections from SARS‑CoV‑2 were discovered in Wuhan, China. The original source of viral transmission to humans remains unclear, as does whether the virus became pathogenic before or after the spillover event. Because many of the early infectees were workers at the Huanan Seafood Market, it has been suggested that the virus might have originated from the market. However, other research indicates that visitors may have introduced the virus to the market, which then facilitated rapid expansion of the infections. A March 2021 WHO-convened report stated that human spillover via an intermediate animal host was the most likely explanation, with direct spillover from bats next most likely. Introduction through the food supply chain and the Huanan Seafood Market was considered another possible, but less likely, explanation. An analysis in November 2021, however, said that the earliest-known case had been misidentified and that the preponderance of early cases linked to the Huanan Market argued for it being the source. For a virus recently acquired through a cross-species transmission, rapid evolution is expected. The mutation rate estimated from early cases of SARS-CoV-2 was of per site per year. Coronaviruses in general have high genetic plasticity, but SARS-CoV-2's viral evolution is slowed by the RNA proofreading capability of its replication machinery. For comparison, the viral mutation rate in vivo of SARS-CoV-2 has been found to be lower than that of influenza. Research into the natural reservoir of the virus that caused the 2002–2004 SARS outbreak has resulted in the discovery of many SARS-like bat coronaviruses, most originating in horseshoe bats. The closest match by far, published in "Nature (journal)" in February 2022, were viruses BANAL-52 (96.8% resemblance to SARS‑CoV‑2), BANAL-103 and BANAL-236, collected in three different species of bats in Feuang, Laos. An earlier source published in February 2020 identified the virus RaTG13, collected in bats in Mojiang, Yunnan, China to be the closest to SARS‑CoV‑2, with 96.1% resemblance. None of the above are its direct ancestor. Bats are considered the most likely natural reservoir of SARS‑CoV‑2. 
Differences between the bat coronavirus and SARS‑CoV‑2 suggest that humans may have been infected via an intermediate host; although the source of introduction into humans remains unknown. Although the role of pangolins as an intermediate host was initially posited (a study published in July 2020 suggested that pangolins are an intermediate host of SARS‑CoV‑2-like coronaviruses), subsequent studies have not substantiated their contribution to the spillover. Evidence against this hypothesis includes the fact that pangolin virus samples are too distant to SARS-CoV-2: isolates obtained from pangolins seized in Guangdong were only 92% identical in sequence to the SARS‑CoV‑2 genome (matches above 90 percent may sound high, but in genomic terms it is a wide evolutionary gap). In addition, despite similarities in a few critical amino acids, pangolin virus samples exhibit poor binding to the human ACE2 receptor. Phylogenetics and taxonomy. &lt;onlyinclude&gt; SARS‑CoV‑2 belongs to the broad family of viruses known as coronaviruses. It is a positive-sense single-stranded RNA (+ssRNA) virus, with a single linear RNA segment. Coronaviruses infect humans, other mammals, including livestock and companion animals, and avian species. Human coronaviruses are capable of causing illnesses ranging from the common cold to more severe diseases such as Middle East respiratory syndrome (MERS, fatality rate ~34%). SARS-CoV-2 is the seventh known coronavirus to infect people, after 229E, NL63, OC43, HKU1, MERS-CoV, and the original SARS-CoV. Like the SARS-related coronavirus implicated in the 2003 SARS outbreak, SARS‑CoV‑2 is a member of the subgenus "Sarbecovirus" (beta-CoV lineage B). Coronaviruses undergo frequent recombination. The mechanism of recombination in unsegmented RNA viruses such as SARS-CoV-2 is generally by copy-choice replication, in which gene material switches from one RNA template molecule to another during replication. The SARS-CoV-2 RNA sequence is approximately 30,000 bases in length, relatively long for a coronavirus—which in turn carry the largest genomes among all RNA families. Its genome consists nearly entirely of protein-coding sequences, a trait shared with other coronaviruses. A distinguishing feature of SARS‑CoV‑2 is its incorporation of a polybasic site cleaved by furin, which appears to be an important element enhancing its virulence. In SARS-CoV-2 the recognition site is formed by the incorporated 12 codon nucleotide sequence CCT CGG CGG GCA which corresponds to the amino acid sequence P RR A. This sequence is upstream of an arginine and serine which forms the S1/S2 cleavage site (P RR A R↓S) of the spike protein. Although such sites are a common naturally-occurring feature of other viruses within the Subfamily Orthocoronavirinae, it appears in few other viruses from the Beta-CoV genus, and it is unique among members of its subgenus for such a site. The furin cleavage site PRRAR↓ is highly similar to that of the feline coronavirus, an alphacoronavirus 1 strain. Viral genetic sequence data can provide critical information about whether viruses separated by time and space are likely to be epidemiologically linked. With a sufficient number of sequenced genomes, it is possible to reconstruct a phylogenetic tree of the mutation history of a family of viruses. 
By 12 January 2020, five genomes of SARS‑CoV‑2 had been isolated from Wuhan and reported by the Chinese Center for Disease Control and Prevention (CCDC) and other institutions; the number of genomes increased to 42 by 30 January 2020. A phylogenetic analysis of those samples showed they were "highly related with at most seven mutations relative to a common ancestor", implying that the first human infection occurred in November or December 2019. Examination of the topology of the phylogenetic tree at the start of the pandemic also found high similarities between human isolates. As of 21 August 2021,[ [update]] 3,422 SARS‑CoV‑2 genomes, belonging to 19 strains, sampled on all continents except Antarctica were publicly available. On 11 February 2020, the International Committee on Taxonomy of Viruses announced that according to existing rules that compute hierarchical relationships among coronaviruses based on five conserved sequences of nucleic acids, the differences between what was then called 2019-nCoV and the virus from the 2003 SARS outbreak were insufficient to make them separate viral species. Therefore, they identified 2019-nCoV as a virus of "Severe acute respiratory syndrome–related coronavirus".&lt;/onlyinclude&gt; In July 2020, scientists reported that a more infectious SARS‑CoV‑2 variant with spike protein variant G614 has replaced D614 as the dominant form in the pandemic. Coronavirus genomes and subgenomes encode six open reading frames (ORFs). In October 2020, researchers discovered a possible overlapping gene named "ORF3d", in the SARS‑CoV‑2 genome. It is unknown if the protein produced by "ORF3d" has any function, but it provokes a strong immune response. "ORF3d" has been identified before, in a variant of coronavirus that infects pangolins. Phylogenetic tree. A phylogenetic tree based on whole-genome sequences of SARS-CoV-2 and related coronaviruses is: Variants. There are many thousands of variants of SARS-CoV-2, which can be grouped into the much larger clades. Several different clade nomenclatures have been proposed. Nextstrain divides the variants into five clades (19A, 19B, 20A, 20B, and 20C), while GISAID divides them into seven (L, O, V, S, G, GH, and GR). Several notable variants of SARS-CoV-2 emerged in late 2020. The World Health Organization has currently declared five variants of concern, which are as follows: Other notable variants include 6 other WHO-designated variants under investigation and Cluster 5, which emerged among mink in Denmark and resulted in a mink euthanasia campaign rendering it virtually extinct. Virology. Virus structure. Each SARS-CoV-2 virion is in diameter; its mass within the global human populace has been estimated as being between 0.1 and 10 kilograms. Like other coronaviruses, SARS-CoV-2 has four structural proteins, known as the S (spike), E (envelope), M (membrane), and N (nucleocapsid) proteins; the N protein holds the RNA genome, and the S, E, and M proteins together create the viral envelope. Coronavirus S proteins are glycoproteins and also type I membrane proteins (membranes containing a single transmembrane domain oriented on the extracellular side). They are divided into two functional parts (S1 and S2). In SARS-CoV-2, the spike protein, which has been imaged at the atomic level using cryogenic electron microscopy, is the protein responsible for allowing the virus to attach to and fuse with the membrane of a host cell; specifically, its S1 subunit catalyzes attachment, the S2 subunit fusion. Genome. 
As of early 2022, about 7 million SARS-CoV-2 genomes had been sequenced and deposited into public databases and another 800,000 or so were added each month. By September 2023, the GISAID EpiCoV database contained more than 16 million genome sequences. SARS-CoV-2 has a linear, positive-sense, single-stranded RNA genome about 30,000 bases long. Its genome has a bias against cytosine (C) and guanine (G) nucleotides, like other coronaviruses. The genome has the highest composition of U (32.2%), followed by A (29.9%), and a similar composition of G (19.6%) and C (18.3%). The nucleotide bias arises from the mutation of guanines and cytosines to adenosines and uracils, respectively. The mutation of CG dinucleotides is thought to arise to avoid the zinc finger antiviral protein related defense mechanism of cells, and to lower the energy to unbind the genome during replication and translation (adenosine and uracil base pair via two hydrogen bonds, cytosine and guanine via three). The depletion of CG dinucleotides in its genome has led the virus to have a noticeable codon usage bias. For instance, arginine's six different codons have a relative synonymous codon usage of AGA (2.67), CGU (1.46), AGG (.81), CGC (.58), CGA (.29), and CGG (.19). A similar codon usage bias trend is seen in other SARS–related coronaviruses. Replication cycle. Virus infections start when viral particles bind to host surface cellular receptors. Protein modeling experiments on the spike protein of the virus soon suggested that SARS‑CoV‑2 has sufficient affinity to the receptor angiotensin converting enzyme 2 (ACE2) on human cells to use them as a mechanism of cell entry. By 22 January 2020, a group in China working with the full virus genome and a group in the United States using reverse genetics methods independently and experimentally demonstrated that ACE2 could act as the receptor for SARS‑CoV‑2. Studies have shown that SARS‑CoV‑2 has a higher affinity to human ACE2 than the original SARS virus. SARS‑CoV‑2 may also use basigin to assist in cell entry. Initial spike protein priming by transmembrane protease, serine 2 (TMPRSS2) is essential for entry of SARS‑CoV‑2. The host protein neuropilin 1 (NRP1) may aid the virus in host cell entry using ACE2. After a SARS‑CoV‑2 virion attaches to a target cell, the cell's TMPRSS2 cuts open the spike protein of the virus, exposing a fusion peptide in the S2 subunit, and the host receptor ACE2. After fusion, an endosome forms around the virion, separating it from the rest of the host cell. The virion escapes when the pH of the endosome drops or when cathepsin, a host cysteine protease, cleaves it. The virion then releases RNA into the cell and forces the cell to produce and disseminate copies of the virus, which infect more cells. SARS‑CoV‑2 produces at least three virulence factors that promote shedding of new virions from host cells and inhibit immune response. Whether they include downregulation of ACE2, as seen in similar coronaviruses, remains under investigation (as of May 2020). Treatment and drug development. Very few drugs are known to effectively inhibit SARS‑CoV‑2. Masitinib was found to inhibit SARS-CoV-2 main protease, showing a greater than 200-fold reduction in viral titers in the lungs and nose of mice, however it is not approved for the treatment of COVID-19 in humans. 
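Returning briefly to the genome-composition figures above: the relative synonymous codon usage (RSCU) values quoted for arginine are defined as the observed count of a codon divided by the count expected if all synonymous codons were used equally often. A small illustrative Python sketch (the codon counts are made up to give numbers of the right order, not real SARS-CoV-2 statistics) is:

```python
def rscu(codon_counts: dict) -> dict:
    """Relative synonymous codon usage for one amino acid's codons.

    RSCU = observed count / (total synonymous count / number of codons), so a
    value of 1.0 means the codon is used exactly as often as expected under
    uniform usage of its synonyms.
    """
    total = sum(codon_counts.values())
    expected = total / len(codon_counts)
    return {codon: count / expected for codon, count in codon_counts.items()}

# Hypothetical counts for the six arginine codons (illustrative only):
arg_counts = {"AGA": 120, "CGU": 66, "AGG": 36, "CGC": 26, "CGA": 13, "CGG": 9}
for codon, value in sorted(rscu(arg_counts).items(), key=lambda kv: -kv[1]):
    print(f"{codon}: {value:.2f}")
```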
In December 2021, the United States granted emergency use authorization to Nirmatrelvir/ritonavir for the treatment of the virus; the European Union, United Kingdom, and Canada followed suit with full authorization soon after. One study found that Nirmatrelvir/ritonavir reduced the risk of hospitalization and death by 88%. COVID Moonshot is an international collaborative open-science project started in March 2020 with the goal of developing an un-patented oral antiviral drug for treatment of SARS-CoV-2. Epidemiology. Retrospective tests collected within the Chinese surveillance system revealed no clear indication of substantial unrecognized circulation of SARS‑CoV‑2 in Wuhan during the latter part of 2019. A meta-analysis from November 2020 estimated the basic reproduction number (formula_0) of the virus to be between 2.39 and 3.44. This means each infection from the virus is expected to result in 2.39 to 3.44 new infections when no members of the community are immune and no preventive measures are taken. The reproduction number may be higher in densely populated conditions such as those found on cruise ships. Human behavior affects the R0 value and hence estimates of R0 differ between different countries, cultures, and social norms. For instance, one study found relatively low R0 (~3.5) in Sweden, Belgium and the Netherlands, while Spain and the US had significantly higher R0 values (5.9 to 6.4, respectively). There have been about 96,000 confirmed cases of infection in mainland China. While the proportion of infections that result in confirmed cases or progress to diagnosable disease remains unclear, one mathematical model estimated that 75,815 people were infected on 25 January 2020 in Wuhan alone, at a time when the number of confirmed cases worldwide was only 2,015. Before 24 February 2020, over 95% of all deaths from COVID-19 worldwide had occurred in Hubei province, where Wuhan is located. As of 10 March 2023, the percentage had decreased to . As of 10 March 2023, there were total confirmed cases of SARS‑CoV‑2 infection. The total number of deaths attributed to the virus was . References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
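The basic reproduction number quoted in the Epidemiology section above (formula_0 between roughly 2.39 and 3.44) lends itself to a simple back-of-the-envelope calculation: in a fully susceptible population with no interventions, the expected number of infections after n transmission generations grows roughly like the n-th power of formula_0. A minimal Python sketch, purely illustrative and ignoring immunity, interventions and generation-time variability, is:

```python
def expected_cases(r0: float, generations: int, seed_cases: int = 1) -> float:
    """Crude expected case count after a number of transmission generations,
    assuming a fully susceptible population and a constant reproduction number."""
    return seed_cases * r0 ** generations

# The meta-analytic range quoted above was roughly 2.39 to 3.44:
for r0 in (2.39, 3.44):
    print(f"R0 = {r0}: about {expected_cases(r0, generations=5):.0f} cases after 5 generations")
```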
[ { "math_id": 0, "text": "R_0" } ]
https://en.wikipedia.org/wiki?curid=62786585
6278763
Coleman–Weinberg potential
Potential arising from loop effects The Coleman–Weinberg model represents quantum electrodynamics of a scalar field in four-dimensions. The Lagrangian for the model is formula_0 where the scalar field is complex, formula_1 is the electromagnetic field tensor, and formula_2 the covariant derivative containing the electric charge formula_3 of the electromagnetic field. Assume that formula_4 is nonnegative. Then if the mass term is tachyonic, formula_5 there is a spontaneous breaking of the gauge symmetry at low energies, a variant of the Higgs mechanism. On the other hand, if the squared mass is positive, formula_6 the vacuum expectation of the field formula_7 is zero. At the classical level the latter is true also if formula_8. However, as was shown by Sidney Coleman and Erick Weinberg, even if the renormalized mass is zero, spontaneous symmetry breaking still happens due to the radiative corrections (this introduces a mass scale into a classically conformal theory - the model has a conformal anomaly). The same can happen in other gauge theories. In the broken phase the fluctuations of the scalar field formula_7 will manifest themselves as a naturally light Higgs boson, as a matter of fact even too light to explain the electroweak symmetry breaking in the minimal model - much lighter than vector bosons. There are non-minimal models that give a more realistic scenarios. Also the variations of this mechanism were proposed for the hypothetical spontaneously broken symmetries including supersymmetry. Equivalently one may say that the model possesses a first-order phase transition as a function of formula_9. The model is the four-dimensional analog of the three-dimensional Ginzburg–Landau theory used to explain the properties of superconductors near the phase transition. The three-dimensional version of the Coleman–Weinberg model governs the superconducting phase transition which can be both first- and second-order, depending on the ratio of the Ginzburg–Landau parameter formula_10, with a tricritical point near formula_11 which separates type I from type II superconductivity. Historically, the order of the superconducting phase transition was debated for a long time since the temperature interval where fluctuations are large (Ginzburg interval) is extremely small. The question was finally settled in 1982. If the Ginzburg–Landau parameter formula_12 that distinguishes type-I and type-II superconductors (see also here) is large enough, vortex fluctuations becomes important which drive the transition to second order. The tricritical point lies at roughly formula_13, i.e., slightly below the value formula_14 where type-I goes over into type-II superconductor. The prediction was confirmed in 2002 by Monte Carlo computer simulations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L = -\\frac{1}{4} (F_{\\mu \\nu})^2 + |D_{\\mu} \\phi|^2 - m^2 |\\phi|^2 - \\frac{\\lambda}{6} |\\phi|^4" }, { "math_id": 1, "text": "F_{\\mu \\nu}=\\partial_\\mu A_\\nu-\\partial_\\nu A_\\mu " }, { "math_id": 2, "text": "D_{\\mu}=\\partial_\\mu-\\mathrm i (e/\\hbar c)A_\\mu " }, { "math_id": 3, "text": "e" }, { "math_id": 4, "text": "\\lambda" }, { "math_id": 5, "text": "m^2<0" }, { "math_id": 6, "text": "m^2>0" }, { "math_id": 7, "text": "\\phi" }, { "math_id": 8, "text": "m^2=0" }, { "math_id": 9, "text": "m^2" }, { "math_id": 10, "text": " \\kappa\\equiv\\lambda/e^2" }, { "math_id": 11, "text": " \\kappa=1/\\sqrt 2" }, { "math_id": 12, "text": "\\kappa" }, { "math_id": 13, "text": "\\kappa=0.76/\\sqrt{2}" }, { "math_id": 14, "text": "\\kappa=1/\\sqrt{2}" } ]
https://en.wikipedia.org/wiki?curid=6278763
62788018
Esther 9
A chapter in the Book of Esther Esther 9 is the ninth chapter of the Book of Esther in the Hebrew Bible or the Old Testament of the Christian Bible, The author of the book is unknown and modern scholars have established that the final stage of the Hebrew text would have been formed by the second century BCE. Chapters 9 to 10 contain the resolution of the stories in the book. This chapter records the events on the thirteenth and fourteenth of Adar and the institution of the Purim festival after the Jews overcome their enemies. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 32 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The events on the thirteenth and fourteenth of Adar (9:1–19). The opening verse of this section explicitly describes the power reversal on the very day that the enemies of the Jews were to have vanquished them, but the opposite happened: 'the Jews would gain power over their foes' (9:1). On the thirteenth of Adar the Jews struck down 75,000 in the provinces () and 500 in the citadel of Susa () of those who hated them (), also on the fourteenth of Adar, by a special additional edict (provided directly by the king at Esther's behest), the Jews killed 300 remaining enemies in the city of Susa, while at the same time, in accord with that additional royal edict, they hang the bodies of the ten sons of Haman on the gallows. A parallel with is that Saul spared Agag, and therefore lost his kingship as well as his life, so this time Esther determined not to make the same mistake with Haman and his sons. One important point is that they refrained from plundering (this is mentioned three times: , , ), which indicates an echo in Esther 9 of , resuming the parallel set up between Mordecai/Saul and Haman/Agag. After Saul defeated the Agagites (Amalek), he kept the best sheep and cattle as spoils in disobedience to God's command, thus earned divine disapproval and God regretting the choice of Saul as king. This time, unlike Saul, Mordecai and the Jews refrained from taking booty. However, the narrative overall focuses more on the pacific results of the bloodletting, gaining relief from hostile neighbors (9:1, 16) and the day(s) of rejoicing as celebration after the triumphant self-defense (9:17–19). "Now in the twelfth month, that is, the month Adar, on the thirteenth day of the same, when the king's commandment and his decree drew near to be put in execution, in the day that the enemies of the Jews hoped to have power over them, (though it was turned to the contrary, that the Jews had rule over them that hated them;)" "On the thirteenth day of the month Adar; and on the fourteenth day of the same rested they, and made it a day of feasting and gladness."' Verse 17. The Jews in the Persian empire celebrate on the fourteenth, except those in Susa who celebrate on the fifteenth (verse 18). "But the Jews that were at Shushan assembled together on the thirteenth day thereof, and on the fourteenth thereof; and on the fifteenth day of the same they rested, and made it a day of feasting and gladness." 
Verse 18. The Jews in Susa have a different date of celebration than those outside the city because there were still fights in Susa until the fourteenth, so the celebration in that city was on the fifteenth. "Therefore the Jews of the villages, that dwelt in the unwalled towns, made the fourteenth day of the month Adar a day of gladness and feasting, and a good day, and of sending portions one to another." Links to modern history. On Purim 1942, ten Jews were hanged by Nazi in Zduńska Wola to "avenge" the hanging of Haman's ten sons. In a similar incident in 1943, the Nazis shot ten Jews from the Piotrków ghetto. In an apparent connection made by Hitler between his Nazi regime and the role of Haman, Hitler stated in a speech made on January 30, 1944, that if the Nazis were defeated, the Jews could celebrate "a second Purim". On October 16, 1946, Julius Streicher, one of ten Nazi members sentenced to hanging for the crimes against humanity after the Nuremberg trials, was heard to sarcastically remark "Purimfest 1946" as he ascended the scaffold. According to Rabbi Mordechai Neugroschel, there is a code in the Book of Esther which lies in the names of Haman's 10 sons (). Three of the Hebrew letters—a tav, a shin and a zayin—are written smaller than the rest, while a vav is written larger. The outsized vav—which represents the number six—corresponds to the sixth millennium of the world since creation, which, according to Jewish tradition, is the period between 1240 and 2240 CE. As for the tav, shin and zayin, their numerical values add up to 707. Put together, these letters refer to the Jewish year 5707, which corresponds to the secular 1946–1947. In his research, Neugroschel noticed that ten Nazi defendants in the Nuremberg Trials were executed by hanging on October 16, 1946, which was also that year's date of Hoshana Rabbah (21st of Tishrei; the final judgement day of Judaism). Additionally, Hermann Göring, the eleventh Nazi official sentenced to death, committed suicide, parallel to Haman's daughter in Tractate Megillah. The institution of the Festival of Purim (9:20–32). This section, perhaps an addition to the coherent narrative of through , recapitulates the core reversals: relief from persecution, turning 'sorrow into gladness' and 'mourning into a holiday' (). For commemoration by future generations, a two-day holiday is newly instituted, reflecting the original feasting on the fourteenth of Adar in the provinces and a day later in Susa, with Haman's casting of lots ("purim") providing an etymology for the festival. Mordecai and Esther as officeholders in the Persian empire harnessing 'the resources of the chancellery and the imperial postal system' dispatched a set of letters to Jews in 'all the provinces' (verse 20; cf. verse 30) and thus using the same language as in the accounts of earlier royal edicts (; ; ). Together they wrote these official letters enjoining Jews to celebrate Purim (verses 29, ), as well as a second letter (verse 29). Esther's royal authority in establishing Purim is reaffirmed at the end of this section, where she is named as the one establishing the customs of the holiday (). "Then Esther the queen, the daughter of Abihail, and Mordecai the Jew, wrote with all authority, to confirm this second letter of Purim." Verse 29. The name of the festival calls the attention to "the day of reversal", when the day of determined defeat became a day of salvation. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62788018
62792329
Esther 10
A chapter in the Book of Esther Esther 10 is the tenth (and the final) chapter of the Book of Esther in the Hebrew Bible or the Old Testament of the Christian Bible, The author of the book is unknown and modern scholars have established that the final stage of the Hebrew text would have been formed by the second century BCE. Chapters 9 and 10 contain the resolution of the stories in the book. This brief chapter is an encomium to Mordecai, showing his power alongside that of the king, being a Jew as second in command to a Gentile king, serving the interests of both groups, Persians and Jews. It is a picture of an 'ideal diaspora situation' and 'serves as a model for all diaspora communities'. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 3 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). "And the king Ahasuerus laid a tribute upon the land, and upon the isles of the sea." Verse 1. The statement serves to compliment Mordecai's position in the Persian empire in the next verses. "And all the acts of his power and of his might, and the declaration of the greatness of Mordecai, whereunto the king advanced him, are they not written in the book of the chronicles of the kings of Media and Persia?" "For Mordecai the Jew was next unto king Ahasuerus, and great among the Jews, and accepted of the multitude of his brethren, seeking the wealth of his people, and speaking peace to all his seed." Verse 3. This verse shows that a highly esteemed Jew could still be the highest ranked Persian official. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62792329
62793986
Non-fullerene acceptor
Non-fullerene acceptors (NFAs) are a class of acceptor materials used in organic solar cells (OSCs). The name contrasts them with fullerene-based acceptor molecules, which long served as the main acceptor material in bulk heterojunction organic solar cells; non-fullerene acceptors are simply acceptors that do not belong to that family. Early research on non-fullerene acceptors produced unpromising results compared with fullerene-based organic solar cells, but recent developments in the field have opened up a range of new opportunities for NFA-based OSCs. The most important breakthrough was the development of small molecule acceptors (SMAs), which show promise as better alternatives to fullerene acceptors because of their properties. The property that makes SMAs such an active research topic is their tunability: SMAs can be chemically modified to a much greater extent than fullerene acceptors. There are, however, still many improvements to be made to SMA design before they become profitable to use in OSCs. Recent work on NFA-OSC design demonstrated an efficiency of 15% with a so-called tandem solar cell that combined non-fullerene and fullerene acceptors. With a good chance that researchers will be able to push this figure toward 18%, NFA-OSCs have great potential to become a commercially viable photovoltaic technology. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; NFA Potential. Advantages. Fullerene acceptors (FAs) have been used extensively in OSCs, which is rationalized by several characteristics of fullerenes. Their three-dimensional character makes them suitable materials for bulk heterojunction structures. Additionally, their electronic configuration (delocalized LUMOs) allows for efficient percolation and high electron mobility. Another consequence is that they are easily coupled to compatible donor polymers. However, fullerene acceptor organic solar cells (FA-OSCs) suffer from limited efficiency. The energy levels in fullerene compounds are relatively fixed and difficult to alter. Moreover, fullerenes absorb weakly in the visible and near-infrared spectrum and have poor thermal and photochemical stability. The acceptors also need to be purified extensively, adding to the economic and time costs of using FAs. Organic NFAs, in the form of small molecule acceptors (SMAs), can be used to overcome these fullerene deficiencies. They have more structural degrees of freedom, allowing greater tunability of their electron affinity; they absorb incident visible-to-NIR radiation more strongly; they are more stable; they are compatible with donor polymers; and they are (in general) easier to synthesize. NF-OSCs with power conversion efficiencies (PCE) of over 13% have been reported, exceeding their FA-based counterparts. Disadvantages. One downside of SMAs is that, under atmospheric conditions, they tend to adopt disordered (anisotropic) states as a result of their planar structures. They are often planar because aromaticity is required for sufficient electron mobility. The lack of order may diminish electron transport and the effective extraction routes that lead to induced current. Moreover, the corresponding lack of orientation affects donor-acceptor exciton formation. This makes them less compatible with bulk heterojunction blends than FAs.
Another downside to research on SMA usage is the profound scala of possibilities of donor-acceptor pairs that scientists are challenged to induce. Physics. The mechanism of current induction in organic solar cells involves a charge transfer. After electromagnetic absorption and exciton formation in the electron donor polymer, the excited electron is moved towards the acceptor conduction band (LUMO) as a result of the lower energy value than the donor LUMO. This process is called a charge separation, and the corresponding energy value formula_0 satisfies formula_1 where CS denotes charge separation, A denotes the acceptor and D denotes the donor molecule. Along with the Coulombic potential that needs to be surpassed, the maximum energy obtained from the process is defined as the Charge Transfer energy, formula_2. The difference between the optical excitation energy (the optical band gap energy, formula_3) and the charge transfer energy is the driving force of the system. An advantage of NF-OSCs over current fullerene-based OSCs is that the SMAs used are relatively compatible with donors, as a result of their electronic affinity tunability. Their compatibility originates from their LUMO-energy value similarity. The driving force is minimized to solely Coulombic contributions (&lt;0.3 eV) with negligible charge separation loss. This results in low potential spillage, formula_4, which depends explicitly on the value of the driving force, along with radiative and non-radiative losses during the current induction process. Thus, for NF-OSCs, formula_5, with q the electron's charge, is minimized, leading to a higher useful energy output. The result is a high open-circuit voltage formula_6 of the solar cell compared to fullerene counterparts, with reports of values as high as 1.1V. However, the diminished charge separation energy cost negatively influences the tendency of excited electrons in the donor conduction band to transport to the acceptor LUMO as it is less preferred energetically. This gives rise to the fact that electrons induced in the current are more energetic, but fewer electrons are induced. This means that the short-circuit current density formula_7 and the fill factor (FF) are decreased. In terms of the PCE, the higher open-circuit voltage is compensated by the lower short-circuit current density and fill factor. Researchers showed that ultrafast charge separation is possible with negligible driving force. In fact, the electrical external quantum efficiency formula_8 is highest for donor-acceptor blends with lowest driving force. Types. One of the main advantages of the non-fullerene acceptors is their ability to be tuned and customized by chemical modification. This in contrary to fullerene acceptors. It also immediately creates a bottleneck because of the huge amount of possibilities there are which could be applied as an SMA. A wide variety of SMAs are tested to be a successful acceptor, but two classes of SMAs have proven to give the best results concerning Power Conversion Efficiency (PCE) and have made the greatest attribute to the recent development in NFA-OSCs. Rylene diimides. Rylene diimides are, as said, one of the two main subclasses which are a basis for acceptor-molecules in modern NFA-OSCs. Rylene diimides are industrial dyes and can be divided into, once again, two subclasses: Perylene Diimides (PDIs) and Naphthalene Diimides (NDIs). 
Rylene diimides consist of a planar rylene framework and numerous constructions can be made by attaching certain subgroups and by using more PDI molecules in one acceptor. The mono-PDI molecule is shown in the figure on the right. Rylene diimides are considered good acceptors because of their favourable properties. Rylene diimides usually have high electron mobility values formula_9 due to intermolecular π-stacking. These values are comparable to ones of fullerene acceptors. Furthermore, Rylene diimides also have a high absorbance spectrum in the visible area, high thermal and oxidative stability and their electric affinities can be tuned to a great extent by adding side groups and 3D-structure which leads to a significant higher open-circuit voltage (formula_10) Challenges that must be faced by designing and improving Rylene diimides based OSCs are mainly concerned by synthesis of PDIs because the planar structure of the molecule makes that it tends to aggregate into a crystal structure. This greatly enhances the domain size, larger than the preferred 20 nm, in the bulk heterojunction which leads to a lower charge transport ability. Researchers have tried to reduce this aggregation by three structure adaptions, all focused on enhancing the mobility of Rylene diimides molecules. The first approach is to link two PDI molecules with a single carbon bond, to form a so-called twisted dimer. The second synthesis forms highly twisted 3D-structures of PDI molecules and the third approach forms a fused-ring structure. For all three possible ways, an example molecule is shown in the figure below. These derivatives are examples of acceptor-molecules which were tested and assessed in OSCs for their performance and PCE. Future research will focus on developing better PDIs resulting in higher PCE values for the OSC. Fused-ring electron acceptors. Fused-ring electron acceptors (FREAs) are completely different from Rylene diimides. They consist of two electron withdrawing groups in between of a donor group. This donor group is a π-bridge of fused aromatic rings. FREAs have values for formula_9 similar to those of fullerene acceptors and have a wide absorption range. Electron affinities can be tuned by substituting the side chains, the core and the end groups. Current research focusses on designing the best FREA with varying all these groups. Another development issue is the expensive synthesis of these molecules. Finding the most efficient synthetic route is therefore also an important subject concerning these acceptors Future development. In current research, rylene diimides (for small band-gap energy donors) and FREAs (for large band-gap energy donors) have shown the most potential for becoming commercially viable solar cell materials for bulk heterojunction blend cells. Wide band gap donors are known to enhance voltage and diminish current density, but in combination with FREAs both values can be relatively high. There are still a lot of improvements to be made before an NFA-OSC can be commercially profitable. First of all, the PCE should be increased to at least 15% since this is the minimal value for commercial application. As said, PCEs already have exceeded 13% so recent development is on the right track. PCEs can be increased by designing even better NFAs, for instance, on the level of electron mobility NFAs still can increase a lot compared to FAs ( formula_11 for the best NFAs compared to formula_12 for the best FAs). 
Improvements can also be made in the following areas: better donor matching, tandem constructions, BHJ morphology, and domain purity of the donor and acceptor. Besides these research questions, implementation in a full-size commercial solar cell brings further challenges, such as simple and sustainable device fabrication methods and long-term stability of the organic compounds. Studies also show that the PCE generally drops upon upscaling. In all of these areas NFA-OSCs show great potential, but it will take considerable research before a robust non-fullerene acceptor organic solar cell can compete with inorganic solar cells. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E_{CS}" }, { "math_id": 1, "text": "E_{CS} = E^A_{LUMO} - E^D_{HOMO}" }, { "math_id": 2, "text": "E_{CT}" }, { "math_id": 3, "text": "E^{opt}_g" }, { "math_id": 4, "text": "V_{loss}" }, { "math_id": 5, "text": "E_{loss} = qV_{loss}" }, { "math_id": 6, "text": "V_{OC}" }, { "math_id": 7, "text": " J_{SC} " }, { "math_id": 8, "text": "EQE_{EL}" }, { "math_id": 9, "text": " \\mu_e " }, { "math_id": 10, "text": " V_{oc} " }, { "math_id": 11, "text": " 3.3 \\cdot 10^{-4} cm^2 V^{-1} s^{-1} " }, { "math_id": 12, "text": " 7.0 \\cdot 10^{-4} cm^2 V^{-1} s^{-1} " } ]
https://en.wikipedia.org/wiki?curid=62793986
627957
Okun's law
Economic relationship between unemployment and production losses In economics, Okun's law is an empirically observed relationship between unemployment and losses in a country's production. It is named after Arthur Melvin Okun, who first proposed the relationship in 1962. The "gap version" states that for every 1% increase in the unemployment rate, a country's GDP will be roughly an additional 2% lower than its potential GDP. The "difference version" describes the relationship between quarterly changes in unemployment and quarterly changes in real GDP. The stability and usefulness of the law has been disputed. Imperfect relationship. Okun's law is an empirical relationship. In Okun's original statement of his law, a 2% increase in output corresponds to a 1% decline in the rate of cyclical unemployment; a 0.5% increase in labor force participation; a 0.5% increase in hours worked per employee; and a 1% increase in output per hours worked (labor productivity). Okun's law states that a one-point increase in the cyclical unemployment rate is associated with two percentage points of negative growth in real GDP. The relationship varies depending on the country and time period under consideration. The relationship has been tested by regressing GDP or GNP growth on change in the unemployment rate. Martin Prachowny estimated about a 3% decrease in output for every 1% increase in the unemployment rate. However, he argued that the majority of this change in output is actually due to changes in factors other than unemployment, such as capacity utilization and hours worked. Holding these other factors constant reduces the association between unemployment and GDP to around 0.7% for every 1% change in the unemployment rate. The magnitude of the decrease seems to be declining over time in the United States. According to Andrew Abel and Ben Bernanke, estimates based on data from more recent years give about a 2% decrease in output for every 1% increase in unemployment. There are several reasons why GDP may increase or decrease more rapidly than unemployment decreases or increases: As unemployment increases, One implication of Okun's law is that an increase in labor productivity or an increase in the size of the labor force can mean that real net output grows without net unemployment rates falling (the phenomenon of "jobless growth") Okun's Law is sometimes confused with Lucas wedge. Mathematical statements. The gap version of Okun's law may be written (Abel &amp; Bernanke 2005) as: formula_0, where In the United States since 1955 or so, the value of c has typically been around 2 or 3, as explained above. The gap version of Okun's law, as shown above, is difficult to use in practice because formula_2 and formula_4 can only be estimated, not measured. A more commonly used form of Okun's law, known as the difference or growth rate form of Okun's law, relates changes in output to changes in unemployment: formula_6, where: At the present time in the United States, "k" is about 3% and "c" is about 2, so the equation may be written formula_10 The graph at the top of this article illustrates the growth rate form of Okun's law, measured quarterly rather than annually. Derivation of the growth rate form. 
We start with the first form of Okun's law: formula_11 formula_12 Taking annual differences on both sides, we obtain formula_13 Putting both numerators over a common denominator, we obtain formula_14 Multiplying the left hand side by formula_15, which is approximately equal to 1, we obtain formula_16 formula_17 We assume that formula_18, the change in the natural rate of unemployment, is approximately equal to 0. We also assume that formula_19, the growth rate of full-employment output, is approximately equal to its average value, formula_9. So we finally obtain formula_20 Usefulness. Okun's law is a useful tool for predicting the broad relationship between unemployment and real GDP, but the numerical predictions it yields are often inaccurate when compared with real-world data, largely because Okun's coefficient varies over time and across economies. Many institutions, including the Reserve Bank of Australia, nonetheless conclude that the relationship holds to a certain degree. Some studies also find that Okun's law is more accurate for short-run predictions than for long-run predictions, which forecasters attribute to unforeseen market conditions that may affect Okun's coefficient. As such, Okun's law is generally accepted by forecasters as a tool for short-run trend analysis of unemployment and real GDP rather than for long-run analysis or precise numerical calculation. The Federal Reserve Bank of San Francisco, using empirical data from past recessions in the 1970s, 1990s, and 2000s, found Okun's law to be a useful description: all of these recessions showed a common pattern, a counterclockwise loop in both real-time and revised data, with the recoveries of the 1990s and 2000s tracing smaller and tighter loops. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
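As a worked illustration of the difference form derived above, using the United States values quoted in the article (k about 3% and c about 2), the following Python sketch computes the implied real GDP growth for a few hypothetical changes in the unemployment rate; the inputs are invented examples, not historical data.

```python
def okun_growth(delta_u: float, k: float = 0.03, c: float = 2.0) -> float:
    """Difference form of Okun's law: (Delta Y)/Y = k - c * (Delta u).

    delta_u is the change in the unemployment rate expressed as a fraction
    (e.g. +0.01 for a one-percentage-point rise); the return value is the
    implied fractional growth rate of real output.
    """
    return k - c * delta_u

for delta_u in (-0.01, 0.0, 0.01, 0.02):
    print(f"unemployment change {delta_u:+.2%} -> implied GDP growth {okun_growth(delta_u):+.2%}")
```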
[ { "math_id": 0, "text": "\\frac{\\overline{Y}-Y}{\\overline{Y}} = c(u-\\overline{u})" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "\\overline{Y}" }, { "math_id": 3, "text": "u" }, { "math_id": 4, "text": "\\overline{u}" }, { "math_id": 5, "text": "c" }, { "math_id": 6, "text": "\\frac{\\Delta Y}{Y} = k - c \\, \\Delta u\\," }, { "math_id": 7, "text": "\\Delta Y" }, { "math_id": 8, "text": "\\Delta u" }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": "\\frac{\\Delta Y}{Y} = 0.03 - 2 \\, \\Delta u.\\," }, { "math_id": 11, "text": "\\frac{\\overline{Y}-Y}{\\overline{Y}} = 1-\\frac{Y}{\\overline{Y}} = c(u-\\overline{u})" }, { "math_id": 12, "text": "\\frac{Y}{\\overline{Y}}-1 = c(\\overline{u}-u)." }, { "math_id": 13, "text": "\\Delta\\left(\\frac{Y}{\\overline{Y}}\\right) = \\frac{Y + \\Delta Y}{\\overline{Y}+ \\Delta \\overline{Y}} - \\frac{Y}{\\overline{Y}} = c(\\Delta \\overline{u}-\\Delta u)." }, { "math_id": 14, "text": "\\frac{\\overline{Y} \\, \\Delta Y - Y \\, \\Delta \\overline{Y}}{\\overline{Y}(\\overline{Y} + \\Delta \\overline{Y})}= c(\\Delta \\overline{u}-\\Delta u)." }, { "math_id": 15, "text": "\\frac{\\overline{Y} + \\Delta \\overline{Y}}{Y}" }, { "math_id": 16, "text": "\\frac{\\overline{Y} \\Delta Y - Y \\Delta \\overline{Y}}{\\overline{Y}Y} = \\frac{\\Delta Y}{Y} - \\frac{\\Delta \\overline{Y}}{\\overline{Y}} \\approx c(\\Delta \\overline{u}-\\Delta u)" }, { "math_id": 17, "text": "\\frac{\\Delta Y}{Y} \\approx \\frac{\\Delta \\overline{Y}}{\\overline{Y}} + c(\\Delta \\overline{u}-\\Delta u)." }, { "math_id": 18, "text": "\\Delta \\overline{u}" }, { "math_id": 19, "text": "\\frac{\\Delta \\overline{Y}}{\\overline{Y}}" }, { "math_id": 20, "text": "\\frac{\\Delta Y}{Y} \\approx k - c \\, \\Delta u." } ]
https://en.wikipedia.org/wiki?curid=627957
6280
Cuboctahedron
Polyhedron with 8 triangular faces and 6 square faces A cuboctahedron is a polyhedron with 8 triangular faces and 6 square faces. A cuboctahedron has 12 identical vertices, with 2 triangles and 2 squares meeting at each, and 24 identical edges, each separating a triangle from a square. As such, it is a quasiregular polyhedron, i.e., an Archimedean solid that is not only vertex-transitive but also edge-transitive. It is radially equilateral. Its dual polyhedron is the rhombic dodecahedron. Construction. The cuboctahedron can be constructed in many ways: From all of these constructions, the cuboctahedron has 14 faces: 8 equilateral triangles and 6 squares. It also has 24 edges and 12 vertices. The Cartesian coordinates for the vertices of a cuboctahedron with edge length formula_2 centered at the origin are: formula_3 Properties. Measurement and other metric properties. The surface area of a cuboctahedron formula_4 can be determined by summing all the area of its polygonal faces. The volume of a cuboctahedron formula_5 can be determined by slicing it off into two regular triangular cupolas, summing up their volume. Given that the edge length formula_6, its surface area and volume are: formula_7 The dihedral angle of a cuboctahedron can be calculated with the angle of triangular cupolas. The dihedral angle of a triangular cupola between square-to-triangle is approximately 125°, that between square-to-hexagon is 54.7°, and that between triangle-to-hexagon is 70.5°. Therefore, the dihedral angle of a cuboctahedron between square-to-triangle, on the edge where the base of two triangular cupolas are attached is 54.7° + 70.5° approximately 125°. Therefore, the dihedral angle of a cuboctahedron between square-to-triangle is approximately 125°. Buckminster Fuller found that the cuboctahedron is the only polyhedron in which the distance between its center to the vertex is the same as the distance between its edges. In other words, it has the same length vectors in three-dimensional space, known as "vector equilibrium". The rigid struts and the flexible vertices of a cuboctahedron may also be transformed progressively into a regular icosahedron, regular octahedron, regular tetrahedron. Fuller named this the "jitterbug transformation". A cuboctahedron has the Rupert property, meaning there is a polyhedron of the same or larger size that can pass through its hole. Symmetry and classification. The cuboctahedron is an Archimedean solid, meaning it is a highly symmetric and semi-regular polyhedron, and two or more different regular polygonal faces meet in a vertex. The cuboctahedron has two symmetries, resulting from the constructions as has mentioned above: the same symmetry as the regular octahedron or cube, the octahedral symmetry formula_0, and the same symmetry as the regular tetrahedron, tetrahedral symmetry formula_1. The polygonal faces that meet for every vertex are two equilateral triangles and two squares, and the vertex figure of a cuboctahedron is formula_8. The dual of a cuboctahedron is rhombic dodecahedron. Radial equilateral symmetry. In a cuboctahedron, the long radius (center to vertex) is the same as the edge length; thus its long diameter (vertex to opposite vertex) is 2 edge lengths. Its center is like the apical vertex of a canonical pyramid: one edge length away from "all" the other vertices. (In the case of the cuboctahedron, the center is in fact the apex of 6 square and 8 triangular pyramids). 
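A quick numerical check of the surface-area and volume formulas and of the radially equilateral property can be made with a few lines of Python, using the expressions for edge length a given above and the Cartesian coordinates listed earlier (which correspond to edge length √2); this is only a verification sketch, not part of any standard library.

```python
from itertools import combinations
from math import dist, isclose, sqrt

a = 1.0                                    # edge length
area = (6 + 2 * sqrt(3)) * a ** 2          # about 9.464 * a**2
volume = (5 * sqrt(2) / 3) * a ** 3        # about 2.357 * a**3
print(f"surface area = {area:.3f}, volume = {volume:.3f}")

# Vertices (+/-1, +/-1, 0), (+/-1, 0, +/-1), (0, +/-1, +/-1): edge length sqrt(2).
verts = [(x, y, 0) for x in (1, -1) for y in (1, -1)] + \
        [(x, 0, z) for x in (1, -1) for z in (1, -1)] + \
        [(0, y, z) for y in (1, -1) for z in (1, -1)]

edge = sqrt(2)
edges = [(p, q) for p, q in combinations(verts, 2) if isclose(dist(p, q), edge)]
radii = [dist(v, (0, 0, 0)) for v in verts]

print(len(verts), "vertices,", len(edges), "edges")            # 12 and 24
print("circumradius equals edge length:",
      all(isclose(r, edge) for r in radii))                    # True
```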
This radial equilateral symmetry is a property of only a few uniform polytopes, including the two-dimensional hexagon, the three-dimensional cuboctahedron, and the four-dimensional 24-cell and 8-cell (tesseract). "Radially equilateral" polytopes are those that can be constructed, with their long radii, from equilateral triangles which meet at the center of the polytope, each contributing two radii and an edge. Therefore, all the interior elements which meet at the center of these polytopes have equilateral triangle inward faces, as in the dissection of the cuboctahedron into 6 square pyramids and 8 tetrahedra. Each of these radially equilateral polytopes also occurs as cells of a characteristic space-filling tessellation: the tiling of regular hexagons, the rectified cubic honeycomb (of alternating cuboctahedra and octahedra), the 24-cell honeycomb and the tesseractic honeycomb, respectively. Each tessellation has a dual tessellation; the cell centers in a tessellation are cell vertices in its dual tessellation. The densest known regular sphere-packing in two, three and four dimensions uses the cell centers of one of these tessellations as sphere centers. Because it is radially equilateral, the cuboctahedron's center is one edge length distant from the 12 vertices. Related polyhedra and honeycomb. The cuboctahedron shares its skeleton with the two nonconvex uniform polyhedra, the cubohemioctahedron and octahemioctahedron. These polyhedra are constructed from the skeleton of a cuboctahedron in which the four hexagonal planes bisect its diagonal, intersecting its interior. Adding six squares or eight equilateral triangles results in the cubohemioctahedron or octahemioctahedron, respectively. The cuboctahedron 2-covers the tetrahemihexahedron, which accordingly has the same abstract vertex figure (two triangles and two squares: formula_9) and half the vertices, edges, and faces. (The actual vertex figure of the tetrahemihexahedron is formula_10, with the formula_11 factor due to the cross.) The cuboctahedron can be dissected into 6 square pyramids and 8 tetrahedra meeting at a central point. This dissection is expressed in the tetrahedral-octahedral honeycomb, where pairs of square pyramids are combined into octahedra. Graph. The skeleton of a cuboctahedron may be represented as a graph, one of the Archimedean graphs. It has 12 vertices and 24 edges. It is a quartic graph: each vertex is connected to exactly four others. The graph of a cuboctahedron may be constructed as the line graph of the cube graph, making it a locally linear graph. Appearance. The cuboctahedron was probably known to Plato: Heron's "Definitiones" quotes Archimedes as saying that Plato knew of a solid made of 8 triangles and 6 squares. References. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; Works cited. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathrm{O}_\\mathrm{h} " }, { "math_id": 1, "text": " \\mathrm{T}_\\mathrm{d} " }, { "math_id": 2, "text": "\\sqrt{2}" }, { "math_id": 3, "text": " (\\pm 1, \\pm 1, 0), \\qquad (\\pm 1, 0, \\pm 1), \\qquad (0, \\pm 1, \\pm 1). " }, { "math_id": 4, "text": " A " }, { "math_id": 5, "text": " V " }, { "math_id": 6, "text": " a " }, { "math_id": 7, "text": " \\begin{align}\nA &= \\left(6+2\\sqrt{3}\\right)a^2 &&\\approx 9.464a^2 \\\\\nV &= \\frac{5 \\sqrt{2}}{3} a^3 &&\\approx 2.357a^3.\n\\end{align}" }, { "math_id": 8, "text": " (3 \\cdot 4)^2 = 3^2 \\cdot 4^2 " }, { "math_id": 9, "text": " 3 \\cdot 4 \\cdot 3 \\cdot 4 " }, { "math_id": 10, "text": " 3 \\cdot 4 \\cdot \\frac{3}{2} \\cdot 4 " }, { "math_id": 11, "text": " \\frac{a}{2} " } ]
https://en.wikipedia.org/wiki?curid=6280
62817424
Hemispherical electron energy analyzer
A hemispherical electron energy analyzer or hemispherical deflection analyzer is a type of electron energy spectrometer generally used for applications where high energy resolution is needed—different varieties of electron spectroscopy such as angle-resolved photoemission spectroscopy (ARPES), X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES) or in imaging applications such as photoemission electron microscopy (PEEM) and low-energy electron microscopy (LEEM). It consists of two concentric conductive hemispheres that serve as electrodes that bend the trajectories of the electrons entering a narrow slit at one end so that their final radii depend on their kinetic energy. The analyzer, therefore, provides a mapping from kinetic energies to positions on a detector. Function. An ideal hemispherical analyzer consists of two concentric hemispherical electrodes (inner and outer hemispheres) of radii formula_0 and formula_1 held at proper voltages. In such a system, the electrons are linearly dispersed, depending on their kinetic energy, along the direction connecting the entrance and the exit slit, while the electrons with the same energy are first-order focused. When two voltages, formula_2 and formula_3, are applied to the inner and outer hemispheres, respectively, the electric potential in the region between the two electrodes follows from the Laplace equation: formula_4 The electric field, pointing radially from the center of the hemispheres out, has the familiar planetary motion formula_5 form formula_6 The voltages are set in such a way that the electrons with kinetic energy formula_7 equal to the so-called "pass energy" formula_8 follow a circular trajectory of radius formula_9. The centripetal force along the path is imposed by the electric field formula_10. With this in mind, formula_11 The potential difference between the two hemispheres needs to be formula_12. A single pointlike detector at radius formula_13 on the other side of the hemispheres will register only the electrons of a single kinetic energy. The detection can, however, be parallelized because of nearly linear dependence of the final radii on the kinetic energy. In the past, several discrete electron detectors (channeltrons) were used, but now microchannel plates with phosphorescent screens and camera detection prevail. In general, these trajectories are described in polar coordinates formula_14 for the plane of the great circle for electrons impinging at an angle formula_15 with respect to the normal to the entrance, and for the initial radii formula_16 to account for the finite aperture and slit widths (typically 0.1 to 5 mm): formula_17 where formula_18 As can be seen in the pictures of calculated electron trajectories, the finite slit width maps directly into energy detection channels (thus confusing the real energy spread with the beam width). The angular spread, while also worsening the energy resolution, shows some focusing as the equal negative and positive deviations map to the same final spot. When these deviations from the central trajectory are expressed in terms of the small parameters formula_19 defined as formula_20, formula_21, and having in mind that formula_15 itself is small (of the order of 1°), the final radius of the electron's trajectory, formula_22, can be expressed as formula_23. If electrons of one fixed energy formula_7 were entering the analyzer through a slit that is formula_24 wide, they would be imaged on the other end of the analyzer as a spot formula_24 wide. 
If their maximal angular spread at the entrance is formula_15, an additional width of formula_25 is acquired, and a single energy channel is smeared over formula_26 at the detector side. But there, this additional width is interpreted as energy dispersion, which is, to the first order, formula_27. It follows that the instrumental energy resolution, given as a function of the width of the slit, formula_24, and the maximal incidence angle, formula_15, of the incoming photoelectrons, which is itself dependent on the width of the aperture and slit, is formula_28. The analyzer resolution improves with increasing formula_13. However, technical problems related to the size of the analyzer put a limit on its actual value, and most analyzers have it in the range of 100–200 mm. Lower pass energies formula_8 also improve the resolution, but then the electron transmission probability is reduced, and the signal-to-noise ratio deteriorates accordingly. The electrostatic lenses in front of the analyzer have two main purposes: they collect and focus the incoming photoelectrons into the entrance slit of the analyzer, and they decelerate the electrons to the range of kinetic energies around formula_8, in order to increase the resolution. When acquiring spectra in "swept" (or "scanning") mode, the voltages of the two hemispheres – and hence the pass energy – are held fixed; at the same time, the voltages applied to the electrostatic lenses are swept in such a way that each channel counts electrons with the selected kinetic energy for the selected amount of time. In order to reduce the acquisition time per spectrum, the so-called "snapshot" (or "fixed") mode can be used. This mode exploits the relation between the kinetic energy of a photoelectron and its position inside the detector. If the detector energy range is wide enough, and if the photoemission signal collected from all the channels is sufficiently strong, the photoemission spectrum can be obtained in one single shot from the image of the detector.
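To make the relations above concrete, the short Python sketch below evaluates the hemisphere potential difference and the instrumental energy resolution for one plausible set of analyzer parameters; the numbers chosen (R1 = 100 mm, R2 = 200 mm, a 10 eV pass energy, a 0.5 mm slit and a 1° acceptance angle) are illustrative assumptions, not the specification of any particular instrument.

```python
from math import radians

# Illustrative analyzer parameters (assumed values, not a real instrument spec)
R1, R2 = 0.100, 0.200           # inner and outer hemisphere radii in metres
Rp = 0.5 * (R1 + R2)            # mean (pass) radius
Ep = 10.0                       # pass energy in eV
w = 0.5e-3                      # entrance slit width in metres
alpha = radians(1.0)            # maximal incidence angle in radians

# Potential difference between the hemispheres: V1 - V2 = (R2/R1 - R1/R2) * Ep / e.
# With Ep expressed in eV, the numerical value comes out directly in volts.
delta_V = (R2 / R1 - R1 / R2) * Ep

# Instrumental resolution: dE = Ep * (w / (2 * Rp) + alpha**2)
delta_E = Ep * (w / (2 * Rp) + alpha ** 2)

print(f"hemisphere potential difference: {delta_V:.1f} V")
print(f"energy resolution: {1000 * delta_E:.1f} meV")
```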
[ { "math_id": 0, "text": "R_{1}" }, { "math_id": 1, "text": "R_{2}" }, { "math_id": 2, "text": "V_{1}" }, { "math_id": 3, "text": "V_{2}" }, { "math_id": 4, "text": " V(r)= - \\left[\\frac{V_{2}-V_{1}}{R_{2}-R_{1}}\\right]\\cdot\\frac{R_{1}R_{2}}{r} + const." }, { "math_id": 5, "text": "1/r^2" }, { "math_id": 6, "text": " |\\mathbf{E}(r)|= - \\left[\\frac{V_{2}-V_{1}}{R_{2}-R_{1}}\\right]\\cdot\\frac{R_{1}R_{2}}{r^{2}} " }, { "math_id": 7, "text": "E_k" }, { "math_id": 8, "text": "E_\\textrm{P}" }, { "math_id": 9, "text": "R_{\\textrm{P}} = \\tfrac{1}{2}(R_1 + R_2)" }, { "math_id": 10, "text": "-e \\mathbf{E}(r)" }, { "math_id": 11, "text": "V (r) = \\frac{E_\\textrm{P}}{e}\\frac{R_\\textrm{P}}{r}+const." }, { "math_id": 12, "text": "V_{1}-V_{2}=\\frac{1}{e}\\left(\\frac{R_{2}}{R_{1}}-\\frac{R_{1}}{R_{2}}\\right)E_\\textrm{P}" }, { "math_id": 13, "text": "R_\\textrm{P}" }, { "math_id": 14, "text": "r, \\varphi" }, { "math_id": 15, "text": "\\alpha" }, { "math_id": 16, "text": "r_0 \\equiv r(\\varphi=0)" }, { "math_id": 17, "text": "\nr(\\varphi)=r_0\\,\\left[{(1-c^{2})\\cos\\varphi-\\tan\\alpha\\sin\\varphi+c^{2}}\\right]^{-1}\n" }, { "math_id": 18, "text": "\nc^{2}=R_{\\textrm{P}}\\left[\\tfrac{E_{\\textrm{k}}}{E_{\\textrm{P}}}r_0\\cos^{2}\\alpha-2\\left(r_0-R_{\\textrm{P}}\\right)\\right]^{-1}\n" }, { "math_id": 19, "text": "\\varepsilon, \\sigma" }, { "math_id": 20, "text": "E_k=(1+\\varepsilon)E_\\textrm{P}" }, { "math_id": 21, "text": "r_0=(1+\\sigma)R_\\textrm{P}" }, { "math_id": 22, "text": "r(\\pi)" }, { "math_id": 23, "text": "r_\\pi\\approx R_\\textrm{P}(1+2\\varepsilon-\\sigma-2\\alpha^2+2\\varepsilon^2-6\\alpha^2\\varepsilon)" }, { "math_id": 24, "text": "w" }, { "math_id": 25, "text": "2R_P\\,\\alpha^2" }, { "math_id": 26, "text": "|\\Delta r_\\pi|_{\\sigma,\\alpha}=w+2R_P\\,\\alpha^2" }, { "math_id": 27, "text": "|\\Delta r_\\pi|_\\varepsilon = 2R_P\\,\\Delta E/E_P" }, { "math_id": 28, "text": " \\Delta E=E_\\textrm{P}\\left(\\frac{w}{2R_\\textrm{P}}+\\alpha ^2\\right) " } ]
https://en.wikipedia.org/wiki?curid=62817424
628183
Goldstone boson
Massless boson that must be present in a quantum system with spontaneously broken symmetry In particle and condensed matter physics, Goldstone bosons or Nambu–Goldstone bosons (NGBs) are bosons that appear necessarily in models exhibiting spontaneous breakdown of continuous symmetries. They were discovered by Yoichiro Nambu in particle physics within the context of the BCS superconductivity mechanism, and subsequently elucidated by Jeffrey Goldstone, and systematically generalized in the context of quantum field theory. In condensed matter physics such bosons are quasiparticles and are known as Anderson–Bogoliubov modes. These spinless bosons correspond to the spontaneously broken internal symmetry generators, and are characterized by the quantum numbers of these. They transform nonlinearly (shift) under the action of these generators, and can thus be excited out of the asymmetric vacuum by these generators. Thus, they can be thought of as the excitations of the field in the broken symmetry directions in group space—and are massless if the spontaneously broken symmetry is not also broken explicitly. If, instead, the symmetry is not exact, i.e. if it is explicitly broken as well as spontaneously broken, then the Nambu–Goldstone bosons are not massless, though they typically remain relatively light; they are then called pseudo-Goldstone bosons or pseudo–Nambu–Goldstone bosons (abbreviated PNGBs). Goldstone's theorem. Goldstone's theorem examines a generic continuous symmetry which is spontaneously broken; i.e., its currents are conserved, but the ground state is not invariant under the action of the corresponding charges. Then, necessarily, new massless (or light, if the symmetry is not exact) scalar particles appear in the spectrum of possible excitations. There is one scalar particle—called a Nambu–Goldstone boson—for each generator of the symmetry that is broken, i.e., that does not preserve the ground state. The Nambu–Goldstone mode is a long-wavelength fluctuation of the corresponding order parameter. By virtue of their special properties in coupling to the vacuum of the respective symmetry-broken theory, vanishing momentum ("soft") Goldstone bosons involved in field-theoretic amplitudes make such amplitudes vanish ("Adler zeros"). Examples. Theory. Consider a complex scalar field "ϕ", with the constraint that formula_0, a constant. One way to impose a constraint of this sort is by including a potential interaction term in its Lagrangian density, formula_1 and taking the limit as "λ" → ∞. This is called the "Abelian nonlinear σ-model". The constraint, and the action, below, are invariant under a "U"(1) phase transformation, "δϕ" i"εϕ". The field can be redefined to give a real scalar field (i.e., a spin-zero particle) "θ" without any constraint by formula_2 where "θ" is the Nambu–Goldstone boson (actually formula_3 is) and the "U"(1) symmetry transformation effects a shift on "θ", namely formula_4 but does not preserve the ground state |0〉 (i.e. the above infinitesimal transformation "does not annihilate it"—the hallmark of invariance), as evident in the charge of the current below. Thus, the vacuum is degenerate and noninvariant under the action of the spontaneously broken symmetry. The corresponding Lagrangian density is given by formula_5 and thus formula_6 Note that the constant term formula_7 in the Lagrangian density has no physical significance, and the other term in it is simply the kinetic term for a massless scalar. 
The symmetry-induced conserved "U"(1) current is formula_8 The charge, "Q", resulting from this current shifts "θ" and the ground state to a new, degenerate, ground state. Thus, a vacuum with 〈"θ"〉 = 0 will shift to a "different vacuum" with 〈"θ"〉 = "ε". The current connects the original vacuum with the Nambu–Goldstone boson state, 〈0|"J"0(0)|"θ"〉≠ 0. In general, in a theory with several scalar fields, "ϕ"j, the Nambu–Goldstone mode "ϕ"g is massless, and parameterises the curve of possible (degenerate) vacuum states. Its hallmark under the broken symmetry transformation is "nonvanishing vacuum expectation" 〈"δϕg"〉, an order parameter, for vanishing 〈"ϕg"〉 = 0, at some ground state |0〉 chosen at the minimum of the potential, 〈∂"V"/∂"ϕ"i〉 = 0. In principle the vacuum should be the minimum of the effective potential which takes into account quantum effects, however it is equal to the classical potential to first approximation. Symmetry dictates that all variations of the potential with respect to the fields in all symmetry directions vanish. The vacuum value of the first order variation in any direction vanishes as just seen; while the vacuum value of the second order variation must also vanish, as follows. Vanishing vacuum values of field symmetry transformation increments add no new information. By contrast, however, "nonvanishing vacuum expectations of transformation increments", 〈"δϕ"g〉, specify the relevant (Goldstone) "null eigenvectors of the mass matrix", formula_9 and hence the corresponding zero-mass eigenvalues. Goldstone's argument. The principle behind Goldstone's argument is that the ground state is not unique. Normally, by current conservation, the charge operator for any symmetry current is time-independent, formula_10 Acting with the charge operator on the vacuum either "annihilates the vacuum", if that is symmetric; else, if "not", as is the case in spontaneous symmetry breaking, it produces a zero-frequency state out of it, through its shift transformation feature illustrated above. Actually, here, the charge itself is ill-defined, cf. the Fabri–Picasso argument below. But its better behaved commutators with fields, that is, the nonvanishing transformation shifts 〈"δϕ"g〉, are, nevertheless, "time-invariant", formula_11 thus generating a δ("k"0) in its Fourier transform. (This ensures that, inserting a complete set of intermediate states in a nonvanishing current commutator can lead to vanishing time-evolution only when one or more of these states is massless.) Thus, if the vacuum is not invariant under the symmetry, action of the charge operator produces a state which is different from the vacuum chosen, but which has zero frequency. This is a long-wavelength oscillation of a field which is nearly stationary: there are physical states with zero frequency, "k"0 = 0, so that the theory cannot have a mass gap. This argument is further clarified by taking the limit carefully. If an approximate charge operator acting in a huge but finite region A is applied to the vacuum, formula_12 a state with approximately vanishing time derivative is produced, formula_13 Assuming a nonvanishing mass gap "m"0, the frequency of any state like the above, which is orthogonal to the vacuum, is at least "m"0, formula_14 Letting A become large leads to a contradiction. Consequently m0 = 0. However this argument fails when the symmetry is gauged, because then the symmetry generator is only performing a gauge transformation. 
A gauge transformed state is the same exact state, so that acting with a symmetry generator does not get one out of the vacuum (see Higgs mechanism). Fabri–Picasso Theorem. Q does not properly exist in the Hilbert space, unless "Q"|0〉 = 0. The argument requires both the vacuum and the charge Q to be translationally invariant, "P"|0〉 = 0, ["P,Q"] = 0. Consider the correlation function of the charge with itself, formula_15 so the integrand on the right-hand side does not depend on the position. Thus, its value is proportional to the total space volume, formula_16 — unless the symmetry is unbroken, "Q"|0〉 = 0. Consequently, Q does not properly exist in the Hilbert space. Infraparticles. There is an arguable loophole in the theorem. If one reads the theorem carefully, it only states that there exist non-vacuum states with arbitrarily small energies. Take for example a chiral N = 1 super QCD model with a nonzero squark VEV which is conformal in the IR. The chiral symmetry is a global symmetry which is (partially) spontaneously broken. Some of the "Goldstone bosons" associated with this spontaneous symmetry breaking are charged under the unbroken gauge group and hence these composite bosons have a continuous mass spectrum with arbitrarily small masses, yet there is no Goldstone boson with exactly zero mass. In other words, the Goldstone bosons are infraparticles. Extensions. Nonrelativistic theories. A version of Goldstone's theorem also applies to nonrelativistic theories. It essentially states that, for each spontaneously broken symmetry, there corresponds some quasiparticle which is typically a boson and has no energy gap. In condensed matter these Goldstone bosons are also called gapless modes (i.e. states where the energy dispersion relation is like formula_17 and is zero for formula_18), the nonrelativistic version of the massless particles (e.g. photons, where the dispersion relation is also formula_19 and zero for formula_18). Note that the energy in the non-relativistic condensed matter case is "H"−"μN"−"α"⋅"P" and not "H" as it would be in a relativistic case. However, two "different" spontaneously broken generators may now give rise to the "same" Nambu–Goldstone boson. As a first example, an antiferromagnet has 2 Goldstone bosons and a ferromagnet has 1 Goldstone boson, although in both cases the symmetry is broken from SO(3) to SO(2): for the antiferromagnet the dispersion is formula_20 and the expectation value of the ground state is zero, while for the ferromagnet the dispersion is formula_21 and the expectation value of the ground state is not zero, i.e. there is a spontaneously broken symmetry for the ground state. As a second example, in a superfluid, both the "U(1)" particle number symmetry and Galilean symmetry are spontaneously broken. However, the phonon is the Goldstone boson for both. Still with regard to symmetry breaking, there is also a close analogy between gapless modes in condensed matter and the Higgs boson, e.g. in the paramagnet to ferromagnet phase transition. Breaking of spacetime symmetries. In contrast to the case of the breaking of internal symmetries, when spacetime symmetries such as Lorentz, conformal, rotational, or translational symmetries are broken, the order parameter need not be a scalar field, but may be a tensor field, and the number of independent massless modes may be fewer than the number of spontaneously broken generators. 
For a theory with an order parameter formula_22 that spontaneously breaks a spacetime symmetry, the number of broken generators formula_23 minus the number of non-trivial independent solutions formula_24 to formula_25 is the number of Goldstone modes that arise. For internal symmetries, the above equation has no non-trivial solutions, so the usual Goldstone theorem holds. When solutions do exist, this is because the Goldstone modes are linearly dependent among themselves, in that the resulting mode can be expressed as a gradient of another mode. Since the spacetime dependence of the solutions formula_24 is in the direction of the unbroken generators, when all translation generators are broken, no non-trivial solutions exist and the number of Goldstone modes is once again exactly the number of broken generators. In general, the phonon is effectively the Nambu–Goldstone boson for spontaneously broken translation symmetry. Nambu–Goldstone fermions. Spontaneously broken global fermionic symmetries, which occur in some supersymmetric models, lead to Nambu–Goldstone fermions, or "goldstinos". These have spin 1/2, instead of 0, and carry all quantum numbers of the respective supersymmetry generators broken spontaneously. Spontaneous supersymmetry breaking smashes up ("reduces") supermultiplet structures into the characteristic nonlinear realizations of broken supersymmetry, so that goldstinos are superpartners of "all" particles in the theory, of "any spin", and the only superpartners, at that. That is to say, two non-goldstino particles are connected only to goldstinos through supersymmetry transformations, and not to each other, even if they were so connected before the breaking of supersymmetry. As a result, the masses and spin multiplicities of such particles are then arbitrary. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " \\phi^* \\phi= v^2" }, { "math_id": 1, "text": "\\lambda(\\phi^*\\phi - v^2)^2 ~, " }, { "math_id": 2, "text": "\\phi = v e^{i\\theta} " }, { "math_id": 3, "text": " v\\theta" }, { "math_id": 4, "text": " \\delta \\theta = \\epsilon ~," }, { "math_id": 5, "text": "{\\mathcal L}=\\frac{1}{2}(\\partial^\\mu \\phi^*)\\partial_\\mu \\phi -m^2 \\phi^* \\phi = \\frac{1}{2}(-iv e^{-i\\theta} \\partial^\\mu \\theta)(iv e^{i\\theta} \\partial_\\mu \\theta) - m^2 v^2 ," }, { "math_id": 6, "text": " =\\frac{v^2}{2}(\\partial^\\mu \\theta)(\\partial_\\mu \\theta) - m^2 v^2~." }, { "math_id": 7, "text": "m^2v^2" }, { "math_id": 8, "text": " J_\\mu = v^2 \\partial_\\mu \\theta ~." }, { "math_id": 9, "text": " \\left\\langle { \\partial^2 V \\over \\partial \\phi _i \\partial \\phi _j } \\right\\rangle \\langle \\delta \\phi_j \\rangle =0~, " }, { "math_id": 10, "text": "{d\\over dt} Q = {d\\over dt} \\int_x J^0(x) =0." }, { "math_id": 11, "text": "\\frac{d \\langle \\delta \\phi_g \\rangle }{dt} = 0," }, { "math_id": 12, "text": "{d\\over dt} Q_A = {d\\over dt} \\int_x e^{-\\frac{x^2}{2A^2}} J^0(x) = -\\int_x e^{-\\frac{x^2}{2A^2}} \\nabla \\cdot J = \\int_x \\nabla \\left (e^{-\\frac{x^2}{2A^2}} \\right ) \\cdot J," }, { "math_id": 13, "text": "\\left \\| {d\\over dt} Q_A |0\\rangle \\right \\| \\approx \\frac{1}{A} \\left \\| Q_A|0\\rangle\\right \\|." }, { "math_id": 14, "text": " \\left \\| \\frac{d}{dt} |\\theta\\rangle \\right \\| = \\| H |\\theta\\rangle \\| \\ge m_0 \\||\\theta\\rangle \\|." }, { "math_id": 15, "text": "\\begin{align}\n \\langle 0| QQ |0\\rangle &= \\int d^3x \\langle0|j_0(x) Q|0\\rangle \\\\\n &=\\int d^3x \\left \\langle 0 \\left |e^{iPx} j_0(0) e^{-iPx} Q \\right |0 \\right \\rangle \\\\\n &=\\int d^3x \\left \\langle 0 \\left | e^{iPx} j_0(0) e^{-iPx} Q e^{iPx} e^{-iPx} \\right | 0 \\right \\rangle \\\\\n &=\\int d^3x \\left \\langle 0 \\left | j_0(0) Q \\right |0 \\right \\rangle \n\\end{align}" }, { "math_id": 16, "text": "\\|Q|0\\rangle \\|^2 = \\infty" }, { "math_id": 17, "text": "E \\propto p^n" }, { "math_id": 18, "text": "p=0" }, { "math_id": 19, "text": "E=pc" }, { "math_id": 20, "text": "E \\propto p" }, { "math_id": 21, "text": "E \\propto p^2" }, { "math_id": 22, "text": "\\langle \\phi(\\boldsymbol r)\\rangle" }, { "math_id": 23, "text": "T^a" }, { "math_id": 24, "text": "c_a(\\boldsymbol r)" }, { "math_id": 25, "text": "\nc_a(\\boldsymbol r) T^a \\langle \\phi(\\boldsymbol r)\\rangle = 0\n" } ]
https://en.wikipedia.org/wiki?curid=628183
62823306
Automorphism of a Lie algebra
Type of automorphism In abstract algebra, an automorphism of a Lie algebra formula_0 is an isomorphism from formula_0 to itself, that is, a bijective linear map preserving the Lie bracket. The set of automorphisms of formula_1 is denoted formula_2, the automorphism group of formula_1. Inner and outer automorphisms. The subgroup of formula_3 generated using the adjoint action formula_4 is called the inner automorphism group of formula_0. The group is denoted formula_5. It forms a normal subgroup in the group of automorphisms, and the quotient formula_6 is known as the outer automorphism group. Diagram automorphisms. It is known that the outer automorphism group for a simple Lie algebra formula_1 is isomorphic to the group of diagram automorphisms for the corresponding Dynkin diagram in the classification of Lie algebras. The only algebras with non-trivial outer automorphism group are therefore formula_7 and formula_8. There are ways to concretely realize these automorphisms in the matrix representations of these groups. For formula_9, the automorphism can be realized as the negative transpose. For formula_10, the automorphism is obtained by conjugating by an orthogonal matrix in formula_11 with determinant -1. Derivations. A derivation on a Lie algebra is a linear map formula_12 satisfying the Leibniz rule formula_13 The set of derivations on a Lie algebra formula_1 is denoted formula_14, and is a subalgebra of the endomorphisms on formula_1, that is, formula_15. They inherit a Lie algebra structure from the Lie algebra structure on the endomorphism algebra, and closure of the bracket follows from the Leibniz rule. Due to the Jacobi identity, it can be shown that the image of the adjoint representation formula_16 lies in formula_14. Through the Lie group-Lie algebra correspondence, the Lie group of automorphisms formula_17 corresponds to the Lie algebra of derivations formula_14. For formula_1 finite-dimensional and semisimple, all derivations are inner. Theorems. The Borel–Morozov theorem states that every solvable subalgebra of a complex semisimple Lie algebra formula_0 can be mapped by an inner automorphism of formula_0 into the subalgebra formula_23, where formula_22 is a Cartan subalgebra of formula_0 and formula_24 are root spaces. In particular, it follows that formula_23 is a maximal solvable subalgebra of formula_0 (that is, a Borel subalgebra). References. <templatestyles src="Reflist/styles.css" />
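As a quick numerical illustration of the statement above that the adjoint maps are derivations (a consequence of the Jacobi identity), the following sketch checks the Leibniz rule for ad_X on random traceless 2×2 complex matrices, i.e. elements of sl(2, C). The random construction, seed and tolerance are choices of this illustration only.

import numpy as np

def bracket(A, B):
    # Lie bracket of matrices: [A, B] = AB - BA
    return A @ B - B @ A

rng = np.random.default_rng(0)

def random_sl2():
    # Random traceless complex 2x2 matrix, i.e. an element of sl(2, C)
    A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    return A - (np.trace(A) / 2) * np.eye(2)

X, Y, Z = random_sl2(), random_sl2(), random_sl2()

# Leibniz rule for ad_X:  ad_X([Y, Z]) = [ad_X(Y), Z] + [Y, ad_X(Z)]
lhs = bracket(X, bracket(Y, Z))
rhs = bracket(bracket(X, Y), Z) + bracket(Y, bracket(X, Z))
print(np.allclose(lhs, rhs))   # True; this identity is exactly the Jacobi identity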
[ { "math_id": 0, "text": "\\mathfrak g" }, { "math_id": 1, "text": "\\mathfrak{g}" }, { "math_id": 2, "text": "\\text{Aut}(\\mathfrak{g})" }, { "math_id": 3, "text": "\\operatorname{Aut}(\\mathfrak g)" }, { "math_id": 4, "text": "e^{\\operatorname{ad}(x)}, x \\in \\mathfrak g" }, { "math_id": 5, "text": "\\operatorname{Aut}^0(\\mathfrak{g})" }, { "math_id": 6, "text": "\\operatorname{Aut}(\\mathfrak{g})/\\operatorname{Aut}^0(\\mathfrak{g})" }, { "math_id": 7, "text": "A_n (n \\geq 2), D_n" }, { "math_id": 8, "text": "E_6" }, { "math_id": 9, "text": "A_n = \\mathfrak{sl}(n+1, \\mathbb{C})" }, { "math_id": 10, "text": "D_n = \\mathfrak{so}(2n)" }, { "math_id": 11, "text": "O(2n)" }, { "math_id": 12, "text": "\\delta: \\mathfrak{g} \\rightarrow \\mathfrak{g}" }, { "math_id": 13, "text": "\\delta[X,Y] = [\\delta X, Y] + [X, \\delta Y]." }, { "math_id": 14, "text": "\\operatorname{der}(\\mathfrak{g})" }, { "math_id": 15, "text": "\\operatorname{der}(\\mathfrak{g}) < \\operatorname{End}(\\mathfrak{g})" }, { "math_id": 16, "text": "\\operatorname{ad}: \\mathfrak{g} \\rightarrow \\operatorname{End}(\\mathfrak{g})" }, { "math_id": 17, "text": "\\operatorname{Aut}(\\mathfrak{g})" }, { "math_id": 18, "text": "g" }, { "math_id": 19, "text": "G" }, { "math_id": 20, "text": "\\operatorname{Ad}_g" }, { "math_id": 21, "text": "\\mathfrak{g} = \\operatorname{Lie}(G)" }, { "math_id": 22, "text": "\\mathfrak h" }, { "math_id": 23, "text": "\\mathfrak h \\oplus \\bigoplus_{\\alpha > 0} \\mathfrak{g}_{\\alpha} =: \\mathfrak{h} \\oplus \\mathfrak{g}^+" }, { "math_id": 24, "text": "\\mathfrak{g}_{\\alpha}" } ]
https://en.wikipedia.org/wiki?curid=62823306
6282756
Interactive skeleton-driven simulation
Scientific computer simulation technique Interactive skeleton-driven simulation (or Interactive skeleton-driven dynamic deformations) is a scientific computer simulation technique used to approximate realistic physical deformations of dynamic bodies in real-time. It involves using elastic dynamics and mathematical optimizations to decide the body-shapes during motion and interaction with forces. It has various applications within realistic simulations for medicine, 3D computer animation and virtual reality. Background. Methods for simulating deformation, such as changes of shapes, of dynamic bodies involve intensive calculations, and several models have been developed. Some of these are known as "free-form deformation", "skeleton-driven deformation", "dynamic deformation" and "anatomical modelling". Skeletal animation is well known in computer animation and 3D character simulation. Because of the computational intensity of the simulation, few interactive systems are available which can realistically simulate dynamic bodies in real-time. Being able to "interact" with such a realistic 3D model would mean that calculations would have to be performed within the constraints of a frame rate which would be acceptable via a user interface. Recent research has been able to build on previously developed models and methods to provide sufficiently efficient and realistic simulations. The promise of this technique is wide-ranging, from mimicking human facial expressions so that a simulated human actor is perceived realistically in real-time, to simulating other organisms such as single cells. Using skeletal constraints and parameterized force to calculate deformations also has the benefit of matching how a single cell has a shaping skeleton, as well as how a larger living organism might have an internal bone skeleton - such as the vertebrae. The generalized simulation of external body forces makes elasticity calculations more efficient, and means real-time interactions are possible. Basic theory. There are several components to such a simulation system; among them are hierarchical basis functions defined on the simulation domain, with calculations of these hierarchical functions similar to those of lazy wavelets. Rather than fitting the object to the skeleton, as is common, the skeleton is used to set constraints for deformation. Also, the hierarchical basis means that detail levels can be introduced or removed when needed - for example, when observing from a distance or when surfaces are hidden. Pre-calculated poses are used to be able to interpolate between shapes and achieve realistic deformations throughout motions. This means traditional keyframes are avoided. There are performance tuning similarities between this technique and procedural generation, wavelet and data compression methods. Algorithmic considerations. To achieve interactivity there are several optimizations necessary which are implementation specific. Start by defining the object you wish to animate as a set (i.e. define all the points): formula_0. Then get a handle on it. Let formula_1 Then you need to define the rest state of the object (the non-wobble point): formula_2 Projects. Projects are taking place to further develop this technique and present results to SIGGRAPH, with detailed references available. Academic institutions and commercial enterprises like Alias Systems Corporation (the makers of the Maya rendering software), Intel and Electronic Arts are among the known proponents of this work. There are also videos available showcasing the techniques, with editors showing interactivity in real-time with realistic results. 
The computer game Spore has also showcased similar techniques.
[ { "math_id": 0, "text": "p : \\Omega \\times \\mathbb{R} \\rightarrow \\mathbb{R}^3 : (x, t) \\mapsto p(x, t)" }, { "math_id": 1, "text": "p_S : S \\times \\mathbb{R} \\rightarrow \\mathbb{R}^3" }, { "math_id": 2, "text": "r(x) = \\sum_{a} r_a \\emptyset ^a (x) = r_a \\emptyset ^a (x) = x" } ]
https://en.wikipedia.org/wiki?curid=6282756
628352
Tsirelson space
In mathematics, especially in functional analysis, the Tsirelson space is the first example of a Banach space in which neither an ℓ "p" space nor a "c"0 space can be embedded. The Tsirelson space is reflexive. It was introduced by B. S. Tsirelson in 1974. The same year, Figiel and Johnson published a related article () where they used the notation "T" for the "dual" of Tsirelson's example. Today, the letter "T" is the standard notation for the dual of the original example, while the original Tsirelson example is denoted by "T"*. In "T"* or in "T", no subspace is isomorphic, as Banach space, to an "ℓ" "p" space, 1 ≤ "p" &lt; ∞, or to "c"0. All classical Banach spaces known to , spaces of continuous functions, of differentiable functions or of integrable functions, and all the Banach spaces used in functional analysis for the next forty years, contain some "ℓ" "p" or "c"0. Also, new attempts in the early '70s to promote a geometric theory of Banach spaces led to ask whether or not "every" infinite-dimensional Banach space has a subspace isomorphic to some "ℓ" "p" or to "c"0. Moreover, it was shown by Baudier, Lancien, and Schlumprecht that "ℓ" "p" and "c"0 do not even coarsely embed into T*. The radically new Tsirelson construction is at the root of several further developments in Banach space theory: the arbitrarily distortable space of Thomas Schlumprecht (), on which depend Gowers' solution to Banach's hyperplane problem and the Odell–Schlumprecht solution to the distortion problem. Also, several results of Argyros et al. are based on ordinal refinements of the Tsirelson construction, culminating with the solution by Argyros–Haydon of the scalar plus compact problem. Tsirelson's construction. On the vector space ℓ∞ of bounded scalar sequences  "x" {"x""j" } "j"∈N, let "P""n" denote the linear operator which sets to zero all coordinates "x""j" of "x" for which "j" ≤ "n". A finite sequence formula_0 of vectors in ℓ∞ is called "block-disjoint" if there are natural numbers formula_1 so that formula_2, and so that formula_3 when formula_4 or formula_5, for each "n" from 1 to "N". The unit ball  "B"∞  of ℓ∞ is compact and metrizable for the topology of pointwise convergence (the product topology). The crucial step in the Tsirelson construction is to let "K" be the "smallest" pointwise closed subset of  "B"∞  satisfying the following two properties: a. For every integer  "j"  in N, the unit vector "e""j" and all multiples formula_6, for |λ| ≤ 1, belong to "K". b. For any integer "N" ≥ 1, if formula_7 is a block-disjoint sequence in "K", then formula_8 belongs to "K". This set "K" satisfies the following stability property: c. Together with every element "x" of "K", the set "K" contains all vectors "y" in ℓ∞ such that |"y"| ≤ |"x"| (for the pointwise comparison). It is then shown that "K" is actually a subset of "c"0, the Banach subspace of ℓ∞ consisting of scalar sequences tending to zero at infinity. This is done by proving that d: for every element "x" in "K", there exists an integer "n" such that 2 "P""n"("x") belongs to "K", and iterating this fact. Since "K" is pointwise compact and contained in "c"0, it is weakly compact in "c"0. Let "V" be the closed convex hull of "K" in "c"0. It is also a weakly compact set in "c"0. It is shown that "V" satisfies b, c and d. The Tsirelson space "T"* is the Banach space whose unit ball is "V". The unit vector basis is an unconditional basis for "T"* and "T"* is reflexive. Therefore, "T"* does not contain an isomorphic copy of "c"0. 
The other "ℓ" "p" spaces, 1 ≤ "p" &lt; ∞, are ruled out by condition b. Properties. The Tsirelson space T* is reflexive () and finitely universal, which means that for some constant , the space T* contains C-isomorphic copies of every finite-dimensional normed space, namely, for every finite-dimensional normed space X, there exists a subspace Y of the Tsirelson space with multiplicative Banach–Mazur distance to X less than C. Actually, every finitely universal Banach space contains "almost-isometric" copies of every finite-dimensional normed space, meaning that C can be replaced by 1 + ε for every ε &gt; 0. Also, every infinite-dimensional subspace of T* is finitely universal. On the other hand, every infinite-dimensional subspace in the dual T of T* contains almost isometric copies of formula_9, the n-dimensional ℓ1-space, for all n. The Tsirelson space T is distortable, but it is not known whether it is arbitrarily distortable. The space T* is a "minimal" Banach space. This means that every infinite-dimensional Banach subspace of T* contains a further subspace isomorphic to T*. Prior to the construction of T*, the only known examples of minimal spaces were "ℓ" "p" and c0. The dual space T is not minimal. The space T* is polynomially reflexive. Derived spaces. The symmetric Tsirelson space "S"("T") is polynomially reflexive and it has the approximation property. As with "T", it is reflexive and no "ℓ" "p" space can be embedded into it. Since it is symmetric, it can be defined even on an uncountable supporting set, giving an example of non-separable polynomially reflexive Banach space. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\{x_n\\}_{n=1}^N" }, { "math_id": 1, "text": "\\textstyle \\{a_n, b_n\\}_{n=1}^N" }, { "math_id": 2, "text": "a_1 \\leq b_1 < a_2 \\leq b_2 < \\cdots \\leq b_N" }, { "math_id": 3, "text": "(x_n)_i=0" }, { "math_id": 4, "text": "i<a_n" }, { "math_id": 5, "text": "i>b_n" }, { "math_id": 6, "text": "\\lambda e_j" }, { "math_id": 7, "text": "\\textstyle (x_1,\\ldots,x_N)" }, { "math_id": 8, "text": "\\textstyle{{1\\over2}P_N(x_1 + \\cdots + x_N)}" }, { "math_id": 9, "text": "\\scriptstyle{\\ell^1_n}" } ]
https://en.wikipedia.org/wiki?curid=628352
62838132
Dependency network (graphical model)
Dependency networks (DNs) are graphical models, similar to Markov networks, wherein each vertex (node) corresponds to a random variable and each edge captures dependencies among variables. Unlike Bayesian networks, DNs may contain cycles. Each node is associated to a conditional probability table, which determines the realization of the random variable given its parents. Markov blanket. In a Bayesian network, the Markov blanket of a node is the set of parents and children of that node, together with the children's parents. The values of the parents and children of a node evidently give information about that node. However, its children's parents also have to be included in the Markov blanket, because they can be used to explain away the node in question. In a Markov random field, the Markov blanket for a node is simply its adjacent (or neighboring) nodes. In a dependency network, the Markov blanket for a node is simply the set of its parents. Dependency network versus Bayesian networks. Dependency networks have advantages and disadvantages with respect to Bayesian networks. In particular, they are easier to parameterize from data, as there are efficient algorithms for learning both the structure and probabilities of a dependency network from data. Such algorithms are not available for Bayesian networks, for which the problem of determining the optimal structure is NP-hard. Nonetheless, a dependency network may be more difficult to construct using a knowledge-based approach driven by expert-knowledge. Dependency networks versus Markov networks. Consistent dependency networks and Markov networks have the same representational power. Nonetheless, it is possible to construct non-consistent dependency networks, i.e., dependency networks for which there is no compatible valid joint probability distribution. Markov networks, in contrast, are always consistent. Definition. A consistent dependency network for a set of random variables formula_0 with joint distribution formula_1 is a pair formula_2 where formula_3 is a cyclic directed graph, where each of its nodes corresponds to a variable in formula_4, and formula_5 is a set of conditional probability distributions. The parents of node formula_6, denoted formula_7, correspond to those variables formula_8 that satisfy the following independence relationships formula_9 The dependency network is consistent in the sense that each local distribution can be obtained from the joint distribution formula_1. Dependency networks learned using large data sets with large sample sizes will almost always be consistent. A non-consistent network is a network for which there is no joint probability distribution compatible with the pair formula_2. In that case, there is no joint probability distribution that satisfies the independence relationships subsumed by that pair. Structure and parameters learning. Two important tasks in a dependency network are to learn its structure and probabilities from data. Essentially, the learning algorithm consists of independently performing a probabilistic regression or classification for each variable in the domain. It comes from observation that the local distribution for variable formula_6 in a dependency network is the conditional distribution formula_10, which can be estimated by any number of classification or regression techniques, such as methods using a probabilistic decision tree, a neural network or a probabilistic support-vector machine. 
Hence, for each variable formula_6 in domain formula_11, we independently estimate its local distribution from data using a classification algorithm, possibly with a distinct method for each variable. Here, we will briefly show how probabilistic decision trees are used to estimate the local distributions. For each variable formula_6 in formula_4, a probabilistic decision tree is learned where formula_6 is the target variable and formula_12 are the input variables. To learn a decision tree structure for formula_6, the search algorithm begins with a singleton root node without children. Then, each leaf node in the tree is replaced with a binary split on some variable formula_13 in formula_12, until no more replacements increase the score of the tree. Probabilistic Inference. Probabilistic inference is the task of answering probabilistic queries of the form formula_14, given a graphical model for formula_4, where formula_15 (the 'target' variables) and formula_16 (the 'input' variables) are disjoint subsets of formula_4. One of the alternatives for performing probabilistic inference is using Gibbs sampling. A naive approach for this uses an ordered Gibbs sampler, an important difficulty of which is that if either formula_14 or formula_17 is small, then many iterations are required for an accurate probability estimate. Another approach for estimating formula_14 when formula_17 is small is to use a modified ordered Gibbs sampler, where formula_18 is fixed during Gibbs sampling. It may also happen that formula_19 is rare, e.g. when formula_15 has many variables. In that case, the law of total probability, along with the independencies encoded in a dependency network, can be used to decompose the inference task into a set of inference tasks on single variables. This approach comes with the advantage that some terms may be obtained by direct lookup, thereby avoiding some Gibbs sampling. Such an algorithm can be used to obtain formula_20 for a particular instance of formula_21 and formula_22, where formula_15 and formula_16 are disjoint subsets, by processing the target variables one at a time in this way. Applications. In addition to the applications to probabilistic inference, the following applications are in the category of Collaborative Filtering (CF), which is the task of predicting preferences. Dependency networks are a natural model class on which to base CF predictions, since an algorithm for this task only needs estimates of formula_35 to produce recommendations. In particular, these estimates may be obtained by a direct lookup in a dependency network. Another class of useful applications for dependency networks is related to data visualization, that is, visualization of predictive relationships. References. <templatestyles src="Reflist/styles.css" />
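Returning to the learning procedure described above, the following is a minimal illustrative sketch of the "one independent probabilistic classifier per variable" idea, using scikit-learn decision trees. The synthetic binary data, the tree depth and all names are choices of this sketch rather than part of the cited method, and only the learning of the local distributions is shown, not the inference procedure.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n, d = 500, 4
X = rng.integers(0, 2, size=(n, d))
X[:, 3] = X[:, 0] ^ X[:, 1]          # make variable 3 depend on variables 0 and 1

# One probabilistic decision tree per variable, each trained independently
# with all the remaining variables as inputs.
local_models = []
for i in range(d):
    others = [j for j in range(d) if j != i]
    tree = DecisionTreeClassifier(max_depth=3).fit(X[:, others], X[:, i])
    local_models.append((others, tree))

# Estimated local distribution p(x_3 | all other variables) for one configuration:
x = np.array([1, 0, 1, 0])
others, tree = local_models[3]
print(tree.predict_proba(x[others].reshape(1, -1)))   # class probabilities for x_3 = 0 and x_3 = 1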
[ { "math_id": 0, "text": "\\mathbf{X} = (X_1, \\ldots, X_n)" }, { "math_id": 1, "text": "p(\\mathbf{x})" }, { "math_id": 2, "text": "(G,P)" }, { "math_id": 3, "text": "G" }, { "math_id": 4, "text": "\\mathbf{X}" }, { "math_id": 5, "text": "P" }, { "math_id": 6, "text": "X_i" }, { "math_id": 7, "text": "\\mathbf{Pa_i}" }, { "math_id": 8, "text": "\\mathbf{Pa_i} \\subseteq (X_1, \\ldots, X_{i-1}, X_{i+1}, \\ldots, X_n)" }, { "math_id": 9, "text": "p(x_i\\mid\\mathbf{pa_i}) = p(x_i\\mid x_1, \\ldots , x_{i-1}, x_{i+1}, \\ldots, x_n) = p(x_i\\mid\\mathbf{x} - {x_i})." }, { "math_id": 10, "text": "p(x_i|\\mathbf{x} - {x_i})" }, { "math_id": 11, "text": "X" }, { "math_id": 12, "text": "\\mathbf{X} - X_i" }, { "math_id": 13, "text": "X_j" }, { "math_id": 14, "text": "p(\\mathbf{y\\mid z})" }, { "math_id": 15, "text": "\\mathbf{Y}" }, { "math_id": 16, "text": "\\mathbf{Z}" }, { "math_id": 17, "text": "p(\\mathbf{z})" }, { "math_id": 18, "text": "\\mathbf{Z = z}" }, { "math_id": 19, "text": "\\mathbf{y}" }, { "math_id": 20, "text": "p(\\mathbf{y|z})" }, { "math_id": 21, "text": "\\mathbf{y} \\in \\mathbf{Y}" }, { "math_id": 22, "text": "\\mathbf{z} \\in \\mathbf{Z}" }, { "math_id": 23, "text": "\\mathbf{U := Y}" }, { "math_id": 24, "text": "\\mathbf{P := Z}" }, { "math_id": 25, "text": "\\mathbf{p := z}" }, { "math_id": 26, "text": "\\mathbf{P}" }, { "math_id": 27, "text": "\\mathbf{U} \\neq \\empty" }, { "math_id": 28, "text": "X_i \\in \\mathbf{U}" }, { "math_id": 29, "text": "U" }, { "math_id": 30, "text": "p(x_i|\\mathbf{p}) := p(x_i|\\mathbf{pa_i})" }, { "math_id": 31, "text": "p(x_i|\\mathbf{p})" }, { "math_id": 32, "text": "\\mathbf{U := U} - X_i" }, { "math_id": 33, "text": "\\mathbf{P := P} + X_i" }, { "math_id": 34, "text": "\\mathbf{p := p} + x_i" }, { "math_id": 35, "text": "p(x_i = 1|\\mathbf{x} - {x_i} = 0)" } ]
https://en.wikipedia.org/wiki?curid=62838132
62842987
Boschloo's test
Statistical test for analysis of contingency tables Boschloo's test is a statistical hypothesis test for analysing 2x2 contingency tables. It examines the association of two Bernoulli distributed random variables and is a uniformly more powerful alternative to Fisher's exact test. It was proposed in 1970 by R. D. Boschloo. Setting. A 2 × 2 contingency table visualizes formula_0 independent observations of two binary variables formula_1 and formula_2: formula_3 The probability distribution of such tables can be classified into three distinct cases. Experiment type 1: Rare taste-test experiment, fully constrained. Fisher's exact test is designed for the first case and therefore an exact conditional test (because it conditions on the column sums). The typical example of such a case is the Lady tasting tea: A lady tastes 8 cups of tea with milk. In 4 of those cups the milk is poured in before the tea. In the other 4 cups the tea is poured in first. The lady tries to assign the cups to the two categories. Following our notation, the random variable formula_1 represents the used method (1 = milk first, 0 = milk last) and formula_2 represents the lady's guesses (1 = milk first guessed, 0 = milk last guessed). Then the row sums are the fixed numbers of cups prepared with each method: formula_18 The lady knows that there are 4 cups in each category, so will assign 4 cups to each method. Thus, the column sums are also fixed in advance: formula_19 If she is not able to tell the difference, formula_1 and formula_2 are independent and the number formula_8 of correctly classified cups with milk first follows the hypergeometric distribution formula_20 Experiment type 2: Normal laboratory controlled experiment, only one margin constrained. Boschloo's test is designed for the second case and therefore an exact unconditional test. Examples of such a case are often found in medical research, where a binary endpoint is compared between two patient groups. Following our notation, formula_21 represents the first group that receives some medication of interest. formula_22 represents the second group that receives a placebo. formula_23 indicates the cure of a patient (1 = cure, 0 = no cure). Then the row sums equal the group sizes and are usually fixed in advance. The column sums are the total number of cures respectively disease continuations and not fixed in advance. Experiment type 3: Field observation, no marginal constraints at all. Pearson's chi-squared test (without "any" "continuity correction") is the correct choice for the third case, where there are no constraints on either the row totals or the column totals. This third scenario describes most observational studies or "field-observations", where data is collected as-available in an uncontrolled environment. For example, if one goes out collecting two types of butterflies of some particular predetermined identifiable color, which can be recognized before capture, however it is "not" possible to distinguished whether a butterfly is species 1 or species 0; before it is captured and closely examined: One can merely tell by its color that a butterfly being pursued must be either one of the two species of interest. For any one day's session of butterfly collecting, one cannot predetermine how many of each species will be collected, only perhaps the total number of capture, depending on the collector's criterion for stopping. 
If the species are tallied in separate rows of the table, then the row sums are unconstrained and independently binomially distributed. The second distinction between the captured butterflies will be whether the butterfly is female (type 1) or male (type 0), tallied in the columns. If its sex also requires close examination of the butterfly, that also is independently binomially random. That means that because of the experimental design, the column sums are unconstrained just like the rows are: Neither the count for either of species, nor count of the sex of the captured butterflies in each species is predetermined by the process of observation, and neither total constrains the other. The only possible constraint is the grand total of all butterflies captured, and even that could itself be unconstrained, depending on how the collector decides to stop. But since one cannot reliably know beforehand for any one particular day in any one particular meadow how successful one's pursuit might be during the time available for collection, even the grand total might be unconstrained: It depends on whether the constraint on data collected is the time available to catch butterflies, or some predetermined total to be collected, perhaps to ensure adequately significant statistics. This type of 'experiment' (also called a "field observation") is almost entirely uncontrolled, hence some prefer to only call it an 'observation', not an 'experiment'. All the numbers in the table are independently random. Each of the cells of the contingency table is a separate binomial probability and neither Fisher's fully constrained 'exact' test nor Boschloo's partly-constrained test are based on the statistics arising from the experimental design. Pearson's chi-squared test is the appropriate test for an unconstrained observational study, and Pearson's test, in turn, employs the wrong statistical model for the other two types of experiment. (Note in passing that Pearson's chi-squared statistic should "never" have "any" "continuity correction" applied, what-so-ever, e.g. no "Yates' correction": The consequence of that "correction" will be to distort its to match Fisher's test, i.e. give the wrong answer.) Test hypothesis. The null hypothesis of Boschloo's one-tailed test (high values of formula_24 favor the alternative hypothesis) is: formula_25 The null hypothesis of the one-tailed test can also be formulated in the other direction (small values of formula_24 favor the alternative hypothesis): formula_26 The null hypothesis of the two-tailed test is: formula_27 There is no universal definition of the two-tailed version of Fisher's exact test. Since Boschloo's test is based on Fisher's exact test, a universal two-tailed version of Boschloo's test also doesn't exist. In the following we deal with the one-tailed test and formula_28. Boschloo's idea. We denote the desired significance level by formula_29. Fisher's exact test is a conditional test and appropriate for the first of the above mentioned cases. But if we treat the observed column sum formula_30 as fixed in advance, Fisher's exact test can also be applied to the second case. The true size of the test then depends on the nuisance parameters formula_31 and formula_32. It can be shown that the size maximum formula_33 is taken for equal proportions formula_34 and is still controlled by formula_29. However, Boschloo stated that for small sample sizes, the maximal size is often considerably smaller than formula_29. This leads to an undesirable loss of power. 
Boschloo proposed to use Fisher's exact test with a greater nominal level formula_35. Here, formula_36 should be chosen as large as possible such that the maximal size is still controlled by formula_29: formula_37. This method was especially advantageous at the time of Boschloo's publication because formula_36 could be looked up for common values of formula_38 and formula_39. This made performing Boschloo's test computationally easy. Test statistic. The decision rule of Boschloo's approach is based on Fisher's exact test. An equivalent way of formulating the test is to use the p-value of Fisher's exact test as test statistic. Fisher's p-value is calculated from the hypergeometric distribution (for ease of notation we write formula_40 instead of formula_41): formula_42 The distribution of formula_43 is determined by the binomial distributions of formula_24 and formula_44 and depends on the unknown nuisance parameter formula_45. For a specified significance level formula_46 the critical value of formula_43 is the maximal value formula_36 that satisfies formula_47. The critical value formula_36 is equal to the nominal level of Boschloo's original approach. Modification. Boschloo's test deals with the unknown nuisance parameter formula_45 by taking the maximum over the whole parameter space formula_48. The Berger &amp; Boos procedure takes a different approach by maximizing formula_49 over a formula_50 confidence interval of formula_51 and adding formula_52. formula_52 is usually a small value such as 0.001 or 0.0001. This results in a modified Boschloo's test which is also exact. Comparison to other exact tests. All exact tests hold the specified significance level but can have varying power in different situations. Mehrotra et al. compared the power of some exact tests in different situations. The results regarding Boschloo's test are summarized in the following. Modified Boschloo's test. Boschloo's test and the modified Boschloo's test have similar power in all considered scenarios. Boschloo's test has slightly more power in some cases, and vice versa in some other cases. Fisher's exact test. Boschloo's test is by construction uniformly more powerful than Fisher's exact test. For small sample sizes (e.g. 10 per group) the power difference is large, ranging from 16 to 20 percentage points in the regarded cases. The power difference is smaller for greater sample sizes. Exact Z-Pooled test. This test is based on the test statistic formula_53 where formula_54 are the group event rates and formula_55 is the pooled event rate. The power of this test is similar to that of Boschloo's test in most scenarios. In some cases, the formula_56-Pooled test has greater power, with differences mostly ranging from 1 to 5 percentage points. In very few cases, the difference goes up to 9 percentage points. This test can also be modified by the Berger &amp; Boos procedure. However, the resulting test has very similar power to the unmodified test in all scenarios. Exact Z-Unpooled test. This test is based on the test statistic formula_57 where formula_54 are the group event rates. The power of this test is similar to that of Boschloo's test in many scenarios. In some cases, the formula_56-Unpooled test has greater power, with differences ranging from 1 to 5 percentage points. However, in some other cases, Boschloo's test has noticeably greater power, with differences up to 68 percentage points. This test can also be modified by the Berger &amp; Boos procedure. 
The resulting test has similar power to the unmodified test in most scenarios. In some cases, the power is considerably improved by the modification, but the overall power comparison to Boschloo's test remains unchanged. Software. The calculation of Boschloo's test can be performed in several statistical software packages; one example is given below. References. <templatestyles src="Reflist/styles.css" />
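As one concrete example of such software: recent versions of SciPy (1.7 and later) provide scipy.stats.boschloo_exact, which uses Fisher's p-value as the test statistic in the way described above. The 2×2 table below is made-up illustrative data, and the directional convention of the 'alternative' argument should be checked against SciPy's documentation for a given table layout.

import numpy as np
from scipy.stats import boschloo_exact, fisher_exact

# Hypothetical trial: row 0 = treatment group, row 1 = placebo group (group sizes
# fixed by design); column 0 = cure, column 1 = no cure.
table = np.array([[7, 3],
                  [2, 8]])

res = boschloo_exact(table, alternative="greater")
print(res.statistic, res.pvalue)          # the statistic is a one-sided Fisher p-value

# Fisher's exact (conditional) test on the same table, for comparison:
_, p_fisher = fisher_exact(table, alternative="greater")
print(p_fisher)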
[ { "math_id": 0, "text": "\\ n\\ " }, { "math_id": 1, "text": "\\ A\\ " }, { "math_id": 2, "text": "\\ B\\ " }, { "math_id": 3, "text": "\n\\begin{array}{c|cc|c}\n& B = 1 & B = 0 & \\mbox{Total}\\\\\n\\hline\nA = 1 & x_{11} & x_{10} & n_1 \\\\\nA = 0 & x_{01} & x_{00} & n_0 \\\\\n\\hline\n\\mbox{Total} & s_1 & s_0 & n\\\\\n\\end{array}\n" }, { "math_id": 4, "text": "\\ n_1\\ , n_0\\ " }, { "math_id": 5, "text": "\\ s_1\\ , s_0\\ " }, { "math_id": 6, "text": "\\ x_{ij}\\ " }, { "math_id": 7, "text": "\\ x_{11} ~." }, { "math_id": 8, "text": "\\ x_{11}\\ " }, { "math_id": 9, "text": "\\ n\\ , n_1\\ , s_1\\ :" }, { "math_id": 10, "text": "\\ x_{11}\\ \\sim\\ \\mbox{Hypergeometric}(\\ n\\ , n_1\\ , s_1\\ ) ~." }, { "math_id": 11, "text": "x_{01}\\ " }, { "math_id": 12, "text": "\\ x_{11}\\ , x_{01}\\ " }, { "math_id": 13, "text": "\\ p_1\\ , p_0\\ :" }, { "math_id": 14, "text": "\\ x_{11}\\ \\sim\\ B(\\ n_1\\ , p_1\\ )\\ " }, { "math_id": 15, "text": "\\ x_{01}\\ \\sim\\ B(\\ n_0\\ , p_0\\ )\\ " }, { "math_id": 16, "text": "\\ (\\ x_{11}, x_{10}\\ , x_{01}\\ , x_{00}\\ )\\ " }, { "math_id": 17, "text": "\\ (p_{11}\\ , p_{10}\\ , p_{01}\\ , p_{00}\\ ) ~." }, { "math_id": 18, "text": "\\ n_1 = 4\\ , n_0 = 4 ~." }, { "math_id": 19, "text": "\\ s_1 = 4\\ , s_0 = 4 ~." }, { "math_id": 20, "text": "\\ \\mbox{Hypergeometric}(8, 4, 4) ~." }, { "math_id": 21, "text": "\\ A = 1\\ " }, { "math_id": 22, "text": "\\ A = 0\\ " }, { "math_id": 23, "text": "B" }, { "math_id": 24, "text": "x_1" }, { "math_id": 25, "text": "\nH_0: p_1 \\le p_0\n" }, { "math_id": 26, "text": "\nH_0: p_1 \\ge p_0\n" }, { "math_id": 27, "text": "\nH_0: p_1 = p_0\n" }, { "math_id": 28, "text": "H_0: p_1 \\le p_0" }, { "math_id": 29, "text": "\\alpha" }, { "math_id": 30, "text": "s_1" }, { "math_id": 31, "text": "p_1" }, { "math_id": 32, "text": "p_0" }, { "math_id": 33, "text": "\\max\\limits_{p_1 \\le p_0}\\big(\\mbox{size}(p_1, p_0)\\big)" }, { "math_id": 34, "text": "p=p_1=p_0" }, { "math_id": 35, "text": "\\alpha^* > \\alpha" }, { "math_id": 36, "text": "\\alpha^*" }, { "math_id": 37, "text": "\\max\\limits_{p \\in [0, 1]}\\big(\\mbox{size}(p)\\big) \\le \\alpha" }, { "math_id": 38, "text": "\\alpha, n_1" }, { "math_id": 39, "text": "n_0" }, { "math_id": 40, "text": "x_1, x_0" }, { "math_id": 41, "text": "x_{11}, x_{01}" }, { "math_id": 42, "text": "\np_F = 1-F_{\\mbox{Hypergeometric}(n, n_1, x_1+x_0)}(x_1-1)\n" }, { "math_id": 43, "text": "p_F" }, { "math_id": 44, "text": "x_0" }, { "math_id": 45, "text": "p" }, { "math_id": 46, "text": "\\alpha," }, { "math_id": 47, "text": "\\max\\limits_{p \\in [0, 1]}P(p_F \\le \\alpha^*) \\le \\alpha" }, { "math_id": 48, "text": "[0,1]" }, { "math_id": 49, "text": "P(p_F \\le \\alpha^*)" }, { "math_id": 50, "text": "(1-\\gamma)" }, { "math_id": 51, "text": "p = p_1 = p_0 " }, { "math_id": 52, "text": "\\gamma" }, { "math_id": 53, "text": "\nZ_P(x_1, x_0) = \\frac{\\hat p_1 - \\hat p_0}{\\sqrt{\\tilde p(1-\\tilde p)(\\frac{1}{n_1} + \\frac{1}{n_0})}},\n" }, { "math_id": 54, "text": "\\hat p_i = \\frac{x_i}{n_i}" }, { "math_id": 55, "text": "\\tilde p = \\frac{x_1+x_0}{n_1+n_0}" }, { "math_id": 56, "text": "Z" }, { "math_id": 57, "text": "\nZ_U(x_1, x_0) = \\frac{\\hat p_1 - \\hat p_0}{\\sqrt{\\frac{\\hat p_1(1-\\hat p_1)}{n_1} + \\frac{\\hat p_0(1-\\hat p_0)}{n_0}}},\n" } ]
https://en.wikipedia.org/wiki?curid=62842987
62844
Density matrix
Matrix describing a quantum system in a pure or mixed state In quantum mechanics, a density matrix (or density operator) is a matrix that describes an ensemble of physical systems as quantum states (even if the ensemble contains only one system). It allows for the calculation of the probabilities of the outcomes of any measurements performed upon the systems of the ensemble using the Born rule. It is a generalization of the more usual state vectors or wavefunctions: while those can only represent pure states, density matrices can also represent "mixed ensembles" (sometimes ambiguously called "mixed states"). Mixed ensembles arise in quantum mechanics in two different situations: Density matrices are thus crucial tools in areas of quantum mechanics that deal with mixed ensembles, such as quantum statistical mechanics, open quantum systems and quantum information. Definition and motivation. The density matrix is a representation of a linear operator called the density operator. The density matrix is obtained from the density operator by a choice of an orthonormal basis in the underlying space. In practice, the terms "density matrix" and "density operator" are often used interchangeably. Pick a basis with states formula_0, formula_1 in a two-dimensional Hilbert space, then the density operator is represented by the matrixformula_2 where the diagonal elements are real numbers that sum to one (also called populations of the two states formula_0, formula_1). The off-diagonal elements are complex conjugates of each other (also called coherences); they are restricted in magnitude by the requirement that formula_3 be a positive semi-definite, see below. In operator language, a density operator for a system is a positive semi-definite, Hermitian operator of trace one acting on the Hilbert space of the system. This definition can be motivated by considering a situation where each pure state formula_4 is prepared with probability formula_5, describing an "ensemble" of pure states. The probability of obtaining projective measurement result formula_6 when using projectors formula_7 is given by99 formula_8 which makes the density operator, defined as formula_9 a convenient representation for the state of this ensemble. It is easy to check that this operator is positive semi-definite, Hermitian, and has trace one. Conversely, it follows from the spectral theorem that every operator with these properties can be written as formula_10 for some states formula_11 and coefficients formula_5 that are non-negative and add up to one.102 However, this representation will not be unique, as shown by the Schrödinger–HJW theorem. Another motivation for the definition of density operators comes from considering local measurements on entangled states. Let formula_12 be a pure entangled state in the composite Hilbert space formula_13. The probability of obtaining measurement result formula_6 when measuring projectors formula_7 on the Hilbert space formula_14 alone is given by107 formula_15 where formula_16 denotes the partial trace over the Hilbert space formula_17. This makes the operator formula_18 a convenient tool to calculate the probabilities of these local measurements. It is known as the reduced density matrix of formula_12 on subsystem 1. It is easy to check that this operator has all the properties of a density operator. Conversely, the Schrödinger–HJW theorem implies that all density operators can be written as formula_19 for some state formula_20. Pure and mixed states. 
A pure quantum state is a state that can not be written as a probabilistic mixture, or convex combination, of other quantum states. There are several equivalent characterizations of pure states in the language of density operators. A density operator represents a pure state if and only if: It is important to emphasize the difference between a probabilistic mixture (i.e. an ensemble) of quantum states and the superposition of two states. If an ensemble is prepared to have half of its systems in state formula_25 and the other half in formula_26, it can be described by the density matrix: formula_27 where formula_25 and formula_26 are assumed orthogonal and of dimension 2, for simplicity. On the other hand, a quantum superposition of these two states with equal probability amplitudes results in the pure state formula_28 with density matrix formula_29 Unlike the probabilistic mixture, this superposition can display quantum interference. Geometrically, the set of density operators is a convex set, and the pure states are the extremal points of that set. The simplest case is that of a two-dimensional Hilbert space, known as a qubit. An arbitrary mixed state for a qubit can be written as a linear combination of the Pauli matrices, which together with the identity matrix provide a basis for formula_30 self-adjoint matrices: formula_31 where the real numbers formula_32 are the coordinates of a point within the unit ball and formula_33 Points with formula_34 represent pure states, while mixed states are represented by points in the interior. This is known as the Bloch sphere picture of qubit state space. Example: light polarization. An example of pure and mixed states is light polarization. An individual photon can be described as having right or left circular polarization, described by the orthogonal quantum states formula_35 and formula_36 or a superposition of the two: it can be in any state formula_37 (with formula_38), corresponding to linear, circular, or elliptical polarization. Consider now a vertically polarized photon, described by the state formula_39. If we pass it through a circular polarizer that allows either only formula_35 polarized light, or only formula_36 polarized light, half of the photons are absorbed in both cases. This may make it "seem" like half of the photons are in state formula_35 and the other half in state formula_36, but this is not correct: if we pass formula_40 through a linear polarizer there is no absorption whatsoever, but if we pass either state formula_35 or formula_36 half of the photons are absorbed. Unpolarized light (such as the light from an incandescent light bulb) cannot be described as "any" state of the form formula_37 (linear, circular, or elliptical polarization). Unlike polarized light, it passes through a polarizer with 50% intensity loss whatever the orientation of the polarizer; and it cannot be made polarized by passing it through any wave plate. However, unpolarized light "can" be described as a statistical ensemble, e. g. as each photon having either formula_35 polarization or formula_36 polarization with probability 1/2. The same behavior would occur if each photon had either vertical polarization formula_41 or horizontal polarization formula_42 with probability 1/2. These two ensembles are completely indistinguishable experimentally, and therefore they are considered the same mixed state. 
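This indistinguishability can be checked directly at the level of density matrices. In the following NumPy sketch the particular vectors chosen for the circular and linear polarization states are conventions of this illustration; the point is only that the two ensembles yield the same operator.

import numpy as np

def projector(psi):
    # |psi><psi| for a normalized state vector psi
    return np.outer(psi, psi.conj())

# One possible basis convention for the four polarization states:
R = np.array([1, 1j]) / np.sqrt(2)     # right circular
L = np.array([1, -1j]) / np.sqrt(2)    # left circular
V = np.array([1, 0], dtype=complex)    # vertical linear
H = np.array([0, 1], dtype=complex)    # horizontal linear

rho_circular = 0.5 * projector(R) + 0.5 * projector(L)
rho_linear = 0.5 * projector(V) + 0.5 * projector(H)

print(np.allclose(rho_circular, rho_linear))      # True: the same mixed state
print(np.allclose(rho_circular, np.eye(2) / 2))   # True: the maximally mixed state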
For this example of unpolarized light, the density operator equals formula_43 There are also other ways to generate unpolarized light: one possibility is to introduce uncertainty in the preparation of the photon, for example, passing it through a birefringent crystal with a rough surface, so that slightly different parts of the light beam acquire different polarizations. Another possibility is using entangled states: a radioactive decay can emit two photons traveling in opposite directions, in the quantum state formula_44. The joint state of the two photons "together" is pure, but the density matrix for each photon individually, found by taking the partial trace of the joint density matrix, is completely mixed. Equivalent ensembles and purifications. A given density operator does not uniquely determine which ensemble of pure states gives rise to it; in general there are infinitely many different ensembles generating the same density matrix. Those cannot be distinguished by any measurement. The equivalent ensembles can be completely characterized: let formula_45 be an ensemble. Then for any complex matrix formula_46 such that formula_47 (a partial isometry), the ensemble formula_48 defined by formula_49 will give rise to the same density operator, and all equivalent ensembles are of this form. A closely related fact is that a given density operator has infinitely many different purifications, which are pure states that generate the density operator when a partial trace is taken. Let formula_50 be the density operator generated by the ensemble formula_45, with states formula_4 not necessarily orthogonal. Then for all partial isometries formula_46 we have that formula_51 is a purification of formula_52, where formula_53 is an orthogonal basis, and furthermore all purifications of formula_52 are of this form. Measurement. Let formula_54 be an observable of the system, and suppose the ensemble is in a mixed state such that each of the pure states formula_55 occurs with probability formula_5. Then the corresponding density operator equals formula_56 The expectation value of the measurement can be calculated by extending from the case of pure states: formula_57 where formula_58 denotes trace. Thus, the familiar expression formula_59 for pure states is replaced by formula_60 for mixed states. Moreover, if formula_54 has spectral resolution formula_61 where formula_62 is the projection operator into the eigenspace corresponding to eigenvalue formula_63, the post-measurement density operator is given by formula_64 when outcome "i" is obtained. In the case where the measurement result is not known the ensemble is instead described by formula_65 If one assumes that the probabilities of measurement outcomes are linear functions of the projectors formula_62, then they must be given by the trace of the projector with a density operator. Gleason's theorem shows that in Hilbert spaces of dimension 3 or larger the assumption of linearity can be replaced with an assumption of non-contextuality. This restriction on the dimension can be removed by assuming non-contextuality for POVMs as well, but this has been criticized as physically unmotivated. Entropy. The von Neumann entropy formula_66 of a mixture can be expressed in terms of the eigenvalues of formula_52 or in terms of the trace and logarithm of the density operator formula_52. Since formula_52 is a positive semi-definite operator, it has a spectral decomposition such that formula_67, where formula_68 are orthonormal vectors, formula_69, and formula_70. 
Then the entropy of a quantum system with density matrix formula_52 is formula_71 This definition implies that the von Neumann entropy of any pure state is zero. If formula_72 are states that have support on orthogonal subspaces, then the von Neumann entropy of a convex combination of these states, formula_73 is given by the von Neumann entropies of the states formula_72 and the Shannon entropy of the probability distribution formula_74: formula_75 When the states formula_72 do not have orthogonal supports, the sum on the right-hand side is strictly greater than the von Neumann entropy of the convex combination formula_52. Given a density operator formula_52 and a projective measurement as in the previous section, the state formula_76 defined by the convex combination formula_77 which can be interpreted as the state produced by performing the measurement but not recording which outcome occurred, has a von Neumann entropy larger than that of formula_52, except if formula_78. It is however possible for the formula_76 produced by a "generalized" measurement, or POVM, to have a lower von Neumann entropy than formula_52. The von Neumann equation for time evolution. Just as the Schrödinger equation describes how pure states evolve in time, the von Neumann equation (also known as the Liouville–von Neumann equation) describes how a density operator evolves in time. The von Neumann equation dictates that formula_79 where the brackets denote a commutator. This equation only holds when the density operator is taken to be in the Schrödinger picture, even though this equation seems at first look to emulate the Heisenberg equation of motion in the Heisenberg picture, with a crucial sign difference: formula_80 where formula_81 is some "Heisenberg picture" operator; but in this picture the density matrix is "not time-dependent", and the relative sign ensures that the time derivative of the expected value formula_82 comes out "the same as in the Schrödinger picture". If the Hamiltonian is time-independent, the von Neumann equation can be easily solved to yield formula_83 For a more general Hamiltonian, if formula_84 is the wavefunction propagator over some interval, then the time evolution of the density matrix over that same interval is given by formula_85 Wigner functions and classical analogies. The density matrix operator may also be realized in phase space. Under the Wigner map, the density matrix transforms into the equivalent Wigner function, formula_86 The equation for the time evolution of the Wigner function, known as Moyal equation, is then the Wigner-transform of the above von Neumann equation, formula_87 where formula_88 is the Hamiltonian, and formula_89 is the Moyal bracket, the transform of the quantum commutator. The evolution equation for the Wigner function is then analogous to that of its classical limit, the Liouville equation of classical physics. In the limit of vanishing Planck's constant formula_90, formula_91 reduces to the classical Liouville probability density function in phase space. Example applications. Density matrices are a basic tool of quantum mechanics, and appear at least occasionally in almost any type of quantum-mechanical calculation. Some specific examples where density matrices are especially helpful and common are as follows: C*-algebraic formulation of states. It is now generally accepted that the description of quantum mechanics in which all self-adjoint operators represent observables is untenable. 
For this reason, observables are identified with elements of an abstract C*-algebra "A" (that is one without a distinguished representation as an algebra of operators) and states are positive linear functionals on "A". However, by using the GNS construction, we can recover Hilbert spaces that realize "A" as a subalgebra of operators. Geometrically, a pure state on a C*-algebra "A" is a state that is an extreme point of the set of all states on "A". By properties of the GNS construction these states correspond to irreducible representations of "A". The states of the C*-algebra of compact operators "K"("H") correspond exactly to the density operators, and therefore the pure states of "K"("H") are exactly the pure states in the sense of quantum mechanics. The C*-algebraic formulation can be seen to include both classical and quantum systems. When the system is classical, the algebra of observables become an abelian C*-algebra. In that case the states become probability measures. History. The formalism of density operators and matrices was introduced in 1927 by John von Neumann and independently, but less systematically, by Lev Landau and later in 1946 by Felix Bloch. Von Neumann introduced the density matrix in order to develop both quantum statistical mechanics and a theory of quantum measurements. The name density matrix itself relates to its classical correspondence to a phase-space probability measure (probability distribution of position and momentum) in classical statistical mechanics, which was introduced by Wigner in 1932. In contrast, the motivation that inspired Landau was the impossibility of describing a subsystem of a composite quantum system by a state vector. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
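The quantities discussed above (expectation values via the trace rule, purity, the partial trace, and the von Neumann entropy) are straightforward to check numerically. Below is a minimal Python/NumPy sketch, not tied to any particular library's conventions; the chosen state and observable (a Bell state and a Pauli-Z measurement on one subsystem) are illustrative assumptions.

```python
import numpy as np

def dm(psi):
    """Density operator |psi><psi| of a normalized pure state vector."""
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def partial_trace_second(rho, d1, d2):
    """Trace out the second subsystem of a (d1*d2)x(d1*d2) density matrix."""
    return np.trace(rho.reshape(d1, d2, d1, d2), axis1=1, axis2=3)

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # 0 ln 0 is taken as 0
    return float(-np.sum(evals * np.log(evals)))

# Bell state (|01> + |10>)/sqrt(2), analogous to the two-photon example above
bell = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
rho_pair = dm(bell)                               # pure joint state
rho_one = partial_trace_second(rho_pair, 2, 2)    # reduced state of one subsystem

sigma_z = np.diag([1.0, -1.0])                    # an observable on one subsystem
print("reduced state:\n", rho_one)                              # I/2, completely mixed
print("purity tr(rho^2):", np.trace(rho_one @ rho_one).real)    # 0.5 < 1
print("<sigma_z> = tr(rho A):", np.trace(rho_one @ sigma_z).real)
print("entropy of joint (pure) state:", von_neumann_entropy(rho_pair))  # ~0
print("entropy of reduced state:", von_neumann_entropy(rho_one))        # ln 2
```

The printout reproduces the statements made above: the joint state is pure (zero entropy, purity 1), while its reduced density matrix is completely mixed, with purity 1/2 and entropy ln 2.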
[ { "math_id": 0, "text": "|0\\rangle" }, { "math_id": 1, "text": "|1\\rangle" }, { "math_id": 2, "text": "\n(\\rho_{ij}) = \\left( \\begin{matrix} \n\\rho_{00} & \\rho_{01} \\\\\n\\rho_{10} & \\rho_{11}\n\\end{matrix} \\right)\n= \\left( \\begin{matrix} \np_{0} & \\rho_{01} \\\\\n\\rho^*_{01} & p_{1}\n\\end{matrix} \\right)\n" }, { "math_id": 3, "text": "(\\rho_{ij})" }, { "math_id": 4, "text": "|\\psi_j\\rangle" }, { "math_id": 5, "text": "p_j" }, { "math_id": 6, "text": "m" }, { "math_id": 7, "text": "\\Pi_m" }, { "math_id": 8, "text": " p(m) = \\sum_j p_j \\left\\langle \\psi_j\\right| \\Pi_m \\left|\\psi_j\\right\\rangle = \\operatorname{tr} \\left[ \\Pi_m \\left ( \\sum_j p_j \\left|\\psi_j\\right\\rangle \\left\\langle \\psi_j\\right|\\right) \\right]," }, { "math_id": 9, "text": "\\rho = \\sum_j p_j \\left|\\psi_j \\right\\rangle \\left\\langle \\psi_j\\right|, " }, { "math_id": 10, "text": " \\sum_j p_j \\left|\\psi_j\\right\\rangle \\left\\langle \\psi_j\\right|" }, { "math_id": 11, "text": "\\left|\\psi_j\\right\\rangle" }, { "math_id": 12, "text": "|\\Psi\\rangle" }, { "math_id": 13, "text": " \\mathcal{H}_1\\otimes\\mathcal{H}_2" }, { "math_id": 14, "text": "\\mathcal{H}_1" }, { "math_id": 15, "text": " p(m) = \\left\\langle \\Psi\\right| \\left(\\Pi_m \\otimes I\\right) \\left|\\Psi\\right\\rangle = \\operatorname{tr} \\left[ \\Pi_m \\left ( \\operatorname{tr}_2 \\left|\\Psi\\right\\rangle \\left\\langle \\Psi\\right| \\right) \\right]," }, { "math_id": 16, "text": " \\operatorname{tr}_2 " }, { "math_id": 17, "text": "\\mathcal{H}_2" }, { "math_id": 18, "text": "\\rho = \\operatorname{tr}_2 \\left|\\Psi\\right\\rangle\\left\\langle \\Psi\\right| " }, { "math_id": 19, "text": "\\operatorname{tr}_2 \\left|\\Psi\\right\\rangle \\left\\langle \\Psi\\right|" }, { "math_id": 20, "text": "\\left|\\Psi\\right\\rangle " }, { "math_id": 21, "text": "|\\psi\\rangle" }, { "math_id": 22, "text": " \\rho = |\\psi \\rangle \\langle \\psi|." }, { "math_id": 23, "text": "\\rho = \\rho^2." }, { "math_id": 24, "text": "\\operatorname{tr}(\\rho^2) = 1." }, { "math_id": 25, "text": "| \\psi_1 \\rangle" }, { "math_id": 26, "text": "| \\psi_2 \\rangle" }, { "math_id": 27, "text": "\\rho = \\frac12\\begin{pmatrix} 1 & 0 \\\\ 0 & 1\\end{pmatrix}, " }, { "math_id": 28, "text": "| \\psi \\rangle = (| \\psi_1 \\rangle + | \\psi_2 \\rangle)/\\sqrt{2}," }, { "math_id": 29, "text": "|\\psi\\rangle\\langle\\psi| = \\frac12\\begin{pmatrix} 1 & 1 \\\\ 1 & 1\\end{pmatrix}." }, { "math_id": 30, "text": "2 \\times 2" }, { "math_id": 31, "text": "\\rho = \\frac{1}{2}\\left(I + r_x \\sigma_x + r_y \\sigma_y + r_z \\sigma_z\\right)," }, { "math_id": 32, "text": "(r_x, r_y, r_z)" }, { "math_id": 33, "text": "\n \\sigma_x =\n \\begin{pmatrix}\n 0&1\\\\\n 1&0\n \\end{pmatrix}, \\quad\n \\sigma_y =\n \\begin{pmatrix}\n 0&-i\\\\\n i&0\n \\end{pmatrix}, \\quad\n \\sigma_z =\n \\begin{pmatrix}\n 1&0\\\\\n 0&-1\n \\end{pmatrix} ." 
}, { "math_id": 34, "text": "r_x^2 + r_y^2 + r_z^2 = 1" }, { "math_id": 35, "text": "|\\mathrm{R}\\rangle" }, { "math_id": 36, "text": "|\\mathrm{L}\\rangle" }, { "math_id": 37, "text": "\\alpha|\\mathrm{R}\\rangle+\\beta|\\mathrm{L}\\rangle" }, { "math_id": 38, "text": "|\\alpha|^2+|\\beta|^2=1" }, { "math_id": 39, "text": "|\\mathrm{V}\\rangle = (|\\mathrm{R}\\rangle+|\\mathrm{L}\\rangle)/\\sqrt{2}" }, { "math_id": 40, "text": "(|\\mathrm{R}\\rangle+|\\mathrm{L}\\rangle)/\\sqrt{2}" }, { "math_id": 41, "text": "| \\mathrm{V}\\rangle " }, { "math_id": 42, "text": "| \\mathrm{H} \\rangle " }, { "math_id": 43, "text": "\\rho = \\frac{1}{2} |\\mathrm{R}\\rangle \\langle \\mathrm{R}| + \\frac{1}{2}|\\mathrm{L}\\rangle \\langle \\mathrm{L}| = \\frac{1}{2} |\\mathrm{H}\\rangle \\langle \\mathrm{H}| + \\frac{1}{2}|\\mathrm{V}\\rangle \\langle \\mathrm{V}| = \\frac12\\begin{pmatrix} 1 & 0 \\\\ 0 & 1\\end{pmatrix}." }, { "math_id": 44, "text": "(|\\mathrm{R},\\mathrm{L}\\rangle+|\\mathrm{L},\\mathrm{R}\\rangle)/\\sqrt{2}" }, { "math_id": 45, "text": "\\{p_j,|\\psi_j\\rangle\\}" }, { "math_id": 46, "text": "U" }, { "math_id": 47, "text": "U^\\dagger U = I" }, { "math_id": 48, "text": "\\{q_i,|\\varphi_i\\rangle\\}" }, { "math_id": 49, "text": "\\sqrt{q_i} \\left| \\varphi_i \\right\\rangle = \\sum_j U_{ij} \\sqrt{p_j} \\left| \\psi_j \\right\\rangle " }, { "math_id": 50, "text": "\\rho = \\sum_j p_j |\\psi_j \\rangle \\langle \\psi_j| " }, { "math_id": 51, "text": " |\\Psi\\rangle = \\sum_j \\sqrt{p_j} |\\psi_j \\rangle U |a_j\\rangle " }, { "math_id": 52, "text": "\\rho" }, { "math_id": 53, "text": "|a_j\\rangle" }, { "math_id": 54, "text": "A" }, { "math_id": 55, "text": "\\textstyle |\\psi_j\\rangle" }, { "math_id": 56, "text": "\\rho = \\sum_j p_j |\\psi_j \\rangle \\langle \\psi_j|." }, { "math_id": 57, "text": " \\langle A \\rangle = \\sum_j p_j \\langle \\psi_j|A|\\psi_j \\rangle = \\sum_j p_j \\operatorname{tr}\\left(|\\psi_j \\rangle \\langle \\psi_j|A \\right) = \\operatorname{tr}\\left(\\sum_j p_j |\\psi_j \\rangle \\langle \\psi_j|A\\right) = \\operatorname{tr}(\\rho A)," }, { "math_id": 58, "text": "\\operatorname{tr}" }, { "math_id": 59, "text": "\\langle A\\rangle=\\langle\\psi|A|\\psi\\rangle" }, { "math_id": 60, "text": " \\langle A \\rangle = \\operatorname{tr}( \\rho A)" }, { "math_id": 61, "text": "A = \\sum _i a_i P_i," }, { "math_id": 62, "text": "P_i" }, { "math_id": 63, "text": "a_i" }, { "math_id": 64, "text": "\\rho_i' = \\frac{P_i \\rho P_i}{\\operatorname{tr}\\left[\\rho P_i\\right]}" }, { "math_id": 65, "text": "\\; \\rho ' = \\sum_i P_i \\rho P_i." }, { "math_id": 66, "text": "S" }, { "math_id": 67, "text": "\\rho = \\textstyle\\sum_i \\lambda_i |\\varphi_i\\rangle \\langle\\varphi_i|" }, { "math_id": 68, "text": "|\\varphi_i\\rangle" }, { "math_id": 69, "text": "\\lambda_i \\ge 0" }, { "math_id": 70, "text": "\\textstyle \\sum \\lambda_i = 1" }, { "math_id": 71, "text": "S = -\\sum_i \\lambda_i \\ln\\lambda_i = -\\operatorname{tr}(\\rho \\ln\\rho)." }, { "math_id": 72, "text": "\\rho_i" }, { "math_id": 73, "text": "\\rho = \\sum_i p_i \\rho_i," }, { "math_id": 74, "text": "p_i" }, { "math_id": 75, "text": "S(\\rho) = H(p_i) + \\sum_i p_i S(\\rho_i)." 
}, { "math_id": 76, "text": "\\rho'" }, { "math_id": 77, "text": "\\rho' = \\sum_i P_i \\rho P_i," }, { "math_id": 78, "text": "\\rho = \\rho'" }, { "math_id": 79, "text": " i \\hbar \\frac{\\operatorname{d} \\rho}{\\operatorname{d} t} = [H, \\rho]~, " }, { "math_id": 80, "text": " i \\hbar \\frac{\\operatorname{d} A^{(\\mathrm{H})}}{\\operatorname{d} t} = -\\left[H, A^{(\\mathrm{H})}\\right] ~," }, { "math_id": 81, "text": "A^{(\\mathrm{H})}(t)" }, { "math_id": 82, "text": "\\langle A \\rangle" }, { "math_id": 83, "text": "\\rho(t) = e^{-i H t/\\hbar} \\rho(0) e^{i H t/\\hbar}." }, { "math_id": 84, "text": "G(t)" }, { "math_id": 85, "text": " \\rho(t) = G(t) \\rho(0) G(t)^\\dagger." }, { "math_id": 86, "text": " W(x,p) \\,\\ \\stackrel{\\mathrm{def}}{=}\\ \\, \\frac{1}{\\pi\\hbar} \\int_{-\\infty}^\\infty \\psi^*(x + y) \\psi(x - y) e^{2ipy/\\hbar} \\,dy." }, { "math_id": 87, "text": "\\frac{\\partial W(x, p, t)}{\\partial t} = -\\{\\{W(x, p, t), H(x, p)\\}\\}," }, { "math_id": 88, "text": "H(x,p)" }, { "math_id": 89, "text": "\\{\\{\\cdot,\\cdot\\}\\}" }, { "math_id": 90, "text": "\\hbar" }, { "math_id": 91, "text": "W(x,p,t)" }, { "math_id": 92, "text": "\\rho = \\exp(-\\beta H)/Z(\\beta)" }, { "math_id": 93, "text": "\\beta" }, { "math_id": 94, "text": "(k_{\\rm B} T)^{-1}" }, { "math_id": 95, "text": "H" }, { "math_id": 96, "text": "Z(\\beta) = \\mathrm{tr} \\exp(-\\beta H)" }, { "math_id": 97, "text": "N" }, { "math_id": 98, "text": "|\\psi_i\\rangle" }, { "math_id": 99, "text": "\\sum_{i=1}^N |\\psi_i\\rangle \\langle \\psi_i|" } ]
https://en.wikipedia.org/wiki?curid=62844
628466
Outer automorphism group
Mathematical group In mathematics, the outer automorphism group of a group, G, is the quotient, Aut("G") / Inn("G"), where Aut("G") is the automorphism group of G and Inn("G") is the subgroup consisting of inner automorphisms. The outer automorphism group is usually denoted Out("G"). If Out("G") is trivial and G has a trivial center, then G is said to be complete. An automorphism of a group that is not inner is called an outer automorphism. The cosets of Inn("G") with respect to outer automorphisms are then the elements of Out("G"); this is an instance of the fact that quotients of groups are not, in general, (isomorphic to) subgroups. If the inner automorphism group is trivial (when a group is abelian), the automorphism group and outer automorphism group are naturally identified; that is, the outer automorphism group does act on the group. For example, for the alternating group, A"n", the outer automorphism group is usually the group of order 2, with exceptions noted below. Considering A"n" as a subgroup of the symmetric group, S"n", conjugation by any odd permutation is an outer automorphism of A"n" or more precisely "represents the class of the (non-trivial) outer automorphism of A"n"", but the outer automorphism does not correspond to conjugation by any "particular" odd element, and all conjugations by odd elements are equivalent up to conjugation by an even element. Structure. The Schreier conjecture asserts that Out("G") is always a solvable group when G is a finite simple group. This result is now known to be true as a corollary of the classification of finite simple groups, although no simpler proof is known. As dual of the center. The outer automorphism group is dual to the center in the following sense: conjugation by an element of G is an automorphism, yielding a map "σ" : "G" → Aut("G"). The kernel of the conjugation map is the center, while the cokernel is the outer automorphism group (and the image is the inner automorphism group). This can be summarized by the exact sequence formula_0 Applications. The outer automorphism group of a group acts on conjugacy classes, and accordingly on the character table. See details at character table: outer automorphisms. Topology of surfaces. The outer automorphism group is important in the topology of surfaces because there is a connection provided by the Dehn–Nielsen theorem: the extended mapping class group of the surface is the outer automorphism group of its fundamental group. In finite groups. For the outer automorphism groups of all finite simple groups see the list of finite simple groups. Sporadic simple groups and alternating groups (other than the alternating group, A6; see below) all have outer automorphism groups of order 1 or 2. The outer automorphism group of a finite simple group of Lie type is an extension of a group of "diagonal automorphisms" (cyclic except for D"n"("q"), when it has order 4), a group of "field automorphisms" (always cyclic), and a group of "graph automorphisms" (of order 1 or 2 except for D4("q"), when it is the symmetric group on 3 points). These extensions are not always semidirect products, as the case of the alternating group A6 shows; a precise criterion for this to happen was given in 2003. In symmetric and alternating groups. The outer automorphism group of a finite simple group in some infinite family of finite simple groups can almost always be given by a uniform formula that works for all elements of the family. 
There is just one exception to this: the alternating group A6 has outer automorphism group of order 4, rather than 2 as do the other simple alternating groups (given by conjugation by an odd permutation). Equivalently, the symmetric group S6 is the only symmetric group with a non-trivial outer automorphism group. formula_1 Note that, in the case of "G" ≅ A6 ≅ PSL(2, 9), the sequence 1 ⟶ "G" ⟶ Aut("G") ⟶ Out("G") ⟶ 1 does not split. A similar result holds for any PSL(2, "q"²), "q" odd. In reductive algebraic groups. Let G now be a connected reductive group over an algebraically closed field. Then any two Borel subgroups are conjugate by an inner automorphism, so to study outer automorphisms it suffices to consider automorphisms that fix a given Borel subgroup. Associated to the Borel subgroup is a set of simple roots, and the outer automorphism may permute them, while preserving the structure of the associated Dynkin diagram. In this way one may identify the automorphism group of the Dynkin diagram of G with a subgroup of Out("G"). D4 has a very symmetric Dynkin diagram, which yields a large outer automorphism group of Spin(8), namely Out(Spin(8)) ≅ S3; this is called triality. In complex and real simple Lie algebras. The preceding interpretation of outer automorphisms as symmetries of a Dynkin diagram follows from the general fact that, for a complex or real simple Lie algebra, 𝔤, the automorphism group Aut("𝔤") is a semidirect product of Inn("𝔤") and Out("𝔤"); i.e., the short exact sequence 1 ⟶ Inn("𝔤") ⟶ Aut("𝔤") ⟶ Out("𝔤") ⟶ 1 splits. In the complex simple case, this is a classical result, whereas for real simple Lie algebras, this fact was proven as recently as 2010. Word play. The term "outer automorphism" lends itself to word play: the term "outermorphism" is sometimes used for "outer automorphism", and a particular geometry on which Out("F""n") acts is called "outer space". References. <templatestyles src="Reflist/styles.css" />
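For small finite groups the objects in the exact sequence above can be enumerated by brute force. The following Python sketch is an illustrative assumption rather than anything from the article: it represents the dihedral group of order 8 by permutations, finds all automorphisms by testing every bijection of the element set, counts the inner automorphisms via |Inn(G)| = |G|/|Z(G)|, and divides to obtain |Out(G)|.

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[i] for i in q)

# Dihedral group of order 8 (symmetries of a square) as permutations of {0,1,2,3}
r = (1, 2, 3, 0)          # rotation by 90 degrees
s = (0, 3, 2, 1)          # a reflection
G = {(0, 1, 2, 3)}
frontier = [r, s]
while frontier:           # closure under composition
    g = frontier.pop()
    if g not in G:
        G.add(g)
        frontier += [compose(g, h) for h in list(G)] + [compose(h, g) for h in list(G)]
G = sorted(G)
n = len(G)                                             # 8
mult = {(a, b): compose(a, b) for a in G for b in G}   # multiplication table

center = [z for z in G if all(mult[z, g] == mult[g, z] for g in G)]

# An automorphism is a bijection f: G -> G with f(ab) = f(a)f(b) for all a, b.
# Brute force over all bijections; fine for a group of order 8.
autos = 0
for images in permutations(range(n)):
    f = dict(zip(G, (G[i] for i in images)))
    if all(f[mult[a, b]] == mult[f[a], f[b]] for a in G for b in G):
        autos += 1

inner = n // len(center)                  # |Inn(G)| = |G| / |Z(G)|
print("|G| =", n, " |Z(G)| =", len(center))
print("|Aut(G)| =", autos, " |Inn(G)| =", inner, " |Out(G)| =", autos // inner)
```

For the symmetric and alternating groups discussed above a naive search of this kind is far too slow; computer algebra systems use much more structured methods, but the counting identity |Out(G)| = |Aut(G)|/|Inn(G)| is the same.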
[ { "math_id": 0, "text": "Z(G) \\hookrightarrow G \\, \\overset{\\sigma}{\\longrightarrow} \\, \\mathrm{Aut}(G) \\twoheadrightarrow \\mathrm{Out}(G)" }, { "math_id": 1, "text": "\\begin{align}\n n \\neq 6: \\operatorname{Out}(\\mathrm{S}_n) & = \\mathrm{C}_1 \\\\\n n \\geq 3,\\ n \\neq 6: \\operatorname{Out}(\\mathrm{A}_n) & = \\mathrm{C}_2 \\\\\n \\operatorname{Out}(\\mathrm{S}_6) & = \\mathrm{C}_2 \\\\\n \\operatorname{Out}(\\mathrm{A}_6) & = \\mathrm{C}_2 \\times \\mathrm{C}_2\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=628466
6284939
Log area ratio
Log area ratios (LAR) can be used to represent reflection coefficients (another form of linear prediction coefficients) for transmission over a channel. While not as efficient as line spectral pairs (LSPs), log area ratios are much simpler to compute. Let formula_0 be the "k"th reflection coefficient of a filter; the "k"th LAR is: formula_1 The use of log area ratios has now been mostly replaced by line spectral pairs, but older codecs, such as GSM-FR, use LARs. References. <templatestyles src="Reflist/styles.css" />
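As a concrete illustration, the conversion between a reflection coefficient and its log area ratio is a one-line mapping each way. The Python sketch below follows the formula as written above; note that some references and codecs define the LAR with the reciprocal ratio (opposite sign), and GSM-FR additionally quantizes the result.

```python
import math

def reflection_to_lar(r):
    """Log area ratio of a reflection coefficient, |r| < 1 (formula above)."""
    return math.log((1.0 - r) / (1.0 + r))

def lar_to_reflection(a):
    """Inverse mapping: recover the reflection coefficient from its LAR."""
    return (1.0 - math.exp(a)) / (1.0 + math.exp(a))   # equivalently -tanh(a/2)

# Round-trip check on some sample reflection coefficients
for r in (-0.9, -0.5, 0.0, 0.5, 0.9):
    a = reflection_to_lar(r)
    print(f"r = {r:+.2f}  ->  LAR = {a:+.4f}  ->  back to r = {lar_to_reflection(a):+.2f}")
```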
[ { "math_id": 0, "text": "r_k" }, { "math_id": 1, "text": "A_k = \\log{1-r_k \\over 1+r_k}" } ]
https://en.wikipedia.org/wiki?curid=6284939
6285
Cube
Solid object with six equal square faces In geometry, a cube is a three-dimensional solid object bounded by six square faces. It has twelve edges and eight vertices. It can be represented as a rectangular cuboid with six square faces, or as a parallelepiped with equal edges. It is an example of many types of solids: Platonic solid, regular polyhedron, parallelohedron, zonohedron, and plesiohedron. The dual polyhedron of a cube is the regular octahedron. The cube can be represented in many ways, one of which is the graph known as the cubical graph. It can be constructed by using the Cartesian product of graphs. The cube was known in antiquity. It was associated with the nature of earth by Plato, for whom the Platonic solids are named. It was used as part of a model of the Solar System proposed by Johannes Kepler. It can be modified to derive further polyhedra, and new polyhedra can be constructed by attaching cubes to one another. It can be generalized to the tesseract in four-dimensional space. Properties. A cube is a special case of a rectangular cuboid in which the edges are equal in length. Like other cuboids, every face of a cube has four vertices, and three edges of equal length meet at each vertex. These edges form square faces, so the dihedral angle of a cube between every two adjacent squares is the interior angle of a square, 90°. Hence, the cube has six faces, twelve edges, and eight vertices. Because of such properties, it is categorized as one of the five Platonic solids, polyhedra in which all the faces are congruent regular polygons and the same number of faces meet at each vertex. Measurement and other metric properties. Consider a cube with edge length formula_1. The face diagonal of a cube is the diagonal of a square, formula_2, and the space diagonal of a cube is a line connecting two vertices that do not lie in the same face, with length formula_3. Both formulas can be derived by using the Pythagorean theorem. The surface area of a cube formula_4 is six times the area of a square: formula_5 The volume of a cuboid is the product of its length, width, and height. Because the edges of a cube are all equal in length, it is: formula_6 A unit cube is a special case where each edge is 1 unit long; its volume is 1 and its surface area is 6. It has the Rupert property, meaning that a cube of the same or even slightly larger size can pass through a hole cut in the unit cube. Prince Rupert's cube, named after Prince Rupert of the Rhine, is the largest cube that can pass through such a hole, with an edge about 6% longer than that of the unit cube. A geometric problem of doubling the cube—alternatively known as the "Delian problem"—requires the construction of a cube with twice the volume of the original by using only a compass and straightedge. Mathematicians could not solve this ancient problem until 1837, when the French mathematician Pierre Wantzel proved it to be impossible. Relation to the spheres. With edge length formula_1, the inscribed sphere of a cube is the sphere tangent to the faces of the cube at their centroids, with radius formula_7. The midsphere of a cube is the sphere tangent to the edges of the cube, with radius formula_8. The circumscribed sphere of a cube is the sphere passing through the vertices of the cube, with radius formula_9. For a cube whose circumscribed sphere has radius formula_10, and for a given point in its three-dimensional space with distances formula_11 from the cube's eight vertices, the following identity holds: formula_12 Symmetry. The cube has octahedral symmetry formula_0.
It is composed of reflection symmetry, a symmetry obtained by cutting the cube into two halves by a plane. There are nine planes of reflection symmetry: three cut the cube through the midpoints of its edges, parallel to its faces, and six cut it diagonally. It also has rotational symmetry, a symmetry obtained by rotating it around an axis, under which its appearance is unchanged. It has octahedral rotation symmetry formula_13: three axes pass through the centroids of the cube's opposite faces, six through the midpoints of the cube's opposite edges, and four through the cube's opposite vertices; these axes have, respectively, four-fold rotational symmetry (0°, 90°, 180°, and 270°), two-fold rotational symmetry (0° and 180°), and three-fold rotational symmetry (0°, 120°, and 240°). The dual polyhedron can be obtained from each of the polyhedron's vertices tangent to a plane by the process known as polar reciprocation. One property of dual polyhedra generally is that the polyhedron and its dual share their three-dimensional symmetry point group. In this case, the dual polyhedron of a cube is the regular octahedron, and both of these polyhedra have the same symmetry, the octahedral symmetry. The cube is face-transitive, meaning any two of its square faces are alike and can be mapped onto one another by rotation and reflection. It is vertex-transitive, meaning all of its vertices are equivalent and can be mapped onto one another isometrically under its symmetry. It is also edge-transitive, meaning any two of its edges can be mapped onto one another by its symmetries, so that every pair of adjacent faces meets at the same dihedral angle. Therefore, the cube is a regular polyhedron, because it has all of these properties. Classifications. The cube is a special case among the cuboids. As mentioned above, the cube can be represented as a rectangular cuboid whose edges are equal in length and whose faces are all squares. The cube may be considered as a parallelepiped in which all of its edges are equal. The cube is a plesiohedron, a special kind of space-filling polyhedron that can be defined as the Voronoi cell of a symmetric Delone set. The plesiohedra include the parallelohedrons, which can be translated without rotating to fill a space—called a honeycomb—in which each face of any of its copies is attached to a like face of another copy. There are five kinds of parallelohedra, one of which is the cuboid. Every three-dimensional parallelohedron is a zonohedron, a centrally symmetric polyhedron whose faces are centrally symmetric polygons. Construction. An elementary way to construct a cube is by using a net: an arrangement of edge-joined polygons that forms a polyhedron when folded and connected along the edges of those polygons. There are eleven different nets of the cube. In analytic geometry, a cube may be constructed using the Cartesian coordinate system. For a cube centered at the origin, with edges parallel to the axes and with an edge length of 2, the Cartesian coordinates of the vertices are formula_14. Its interior consists of all points formula_15 with formula_16 for all formula_17. A cube's surface with center formula_18 and edge length of formula_19 is the locus of all points formula_20 such that formula_21 The cube is a Hanner polytope, because it can be constructed by using the Cartesian product of three line segments. Its dual polyhedron, the regular octahedron, is constructed by the direct sum of three line segments. Representation. As a graph.
According to Steinitz's theorem, a graph can be represented as the skeleton of a polyhedron (roughly speaking, the framework of a polyhedron) exactly when it has two properties. It is planar, meaning it can be drawn with its edges joining the vertices without crossing other edges. It is also a 3-connected graph, meaning that whenever two vertices are removed from a graph with more than three vertices, the remaining graph stays connected. The skeleton of a cube can be represented as such a graph, and it is called the cubical graph, a Platonic graph. It has the same number of vertices and edges as the cube, eight vertices and twelve edges. The cubical graph is a special case of hypercube graph or formula_22-cube—denoted as formula_23—because it can be constructed by using the operation known as the Cartesian product of graphs. Put plainly, the Cartesian product of a graph with a single edge takes two copies of that graph and connects each pair of corresponding vertices with an edge to form a new graph. In the case of the cubical graph, it is the product of a single edge with formula_24, which is, roughly speaking, a graph resembling a square. In other words, the cubical graph is constructed by connecting each vertex of two squares with an edge. Notationally, the cubical graph can be denoted as formula_25. As a part of the hypercube graph, it is also an example of a unit distance graph. Like other graphs of cuboids, the cubical graph is also classified as a prism graph. In orthogonal projection. An object illuminated by parallel rays of light casts a shadow on a plane perpendicular to those rays, called an orthogonal projection. A polyhedron is considered "equiprojective" if, for some position of the light, its orthogonal projection is a regular polygon. The cube is equiprojective because, if the light is parallel to one of the four lines joining a vertex to the opposite vertex, its projection is a regular hexagon. Conventionally, the cube is 6-equiprojective. As a configuration matrix. The cube can be represented as a configuration matrix. A configuration matrix is a matrix in which the rows and columns correspond to the elements of a polyhedron: its vertices, edges, and faces. The diagonal of the matrix denotes the number of each element that appears in the polyhedron, whereas the non-diagonal entries denote the number of the column's elements that occur in or at the row's element. As mentioned above, the cube has eight vertices, twelve edges, and six faces; the elements on the matrix's diagonal are therefore 8, 12, and 6. The first column of the middle row indicates that there are two vertices in (i.e., at the extremes of) each edge, denoted as 2; the middle column of the first row indicates that three edges meet at each vertex, denoted as 3. The resulting matrix is: formula_26 Appearances. In antiquity. The Platonic solids are a set of polyhedra known since antiquity. They were named after Plato, who in his "Timaeus" dialogue associated these solids with nature. One of them, the cube, represented the classical element of earth because of its stability. Euclid's "Elements" defined the Platonic solids, including the cube, and used these solids in the problem of finding the ratio of the circumscribed sphere's diameter to the edge length. Following Plato's association of these solids with nature, Johannes Kepler in his "Harmonices Mundi" sketched each of the Platonic solids; on the cube he drew a tree.
In his "Mysterium Cosmographicum", Kepler also proposed the Solar System by using the Platonic solids setting into another one and separating them with six spheres resembling the six planets. The ordered solids started from the innermost to the outermost: regular octahedron, regular icosahedron, regular dodecahedron, regular tetrahedron, and cube. Polyhedron, honeycombs, and polytopes. The cube can appear in the construction of a polyhedron, and some of its types can be derived differently in the following: The honeycomb is the space-filling or tessellation in three-dimensional space, meaning it is an object in which the construction begins by attaching any polyhedrons onto their faces without leaving a gap. The cube can be represented as the cell, and examples of a honeycomb are cubic honeycomb, order-5 cubic honeycomb, order-6 cubic honeycomb, and order-7 cubic honeycomb. The cube can be constructed with six square pyramids, tiling space by attaching their apices. Polycube is a polyhedron in which the faces of many cubes are attached. Analogously, it can be interpreted as the polyominoes in three-dimensional space. When four cubes are stacked vertically, and the other four are attached to the second-from-top cube of the stack, the resulting polycube is Dali cross, after Salvador Dali. The Dali cross is a tile space polyhedron, which can be represented as the net of a tesseract. A tesseract is a cube analogous' four-dimensional space bounded by twenty-four squares, and it is bounded by the eight cubes known as its cells. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathrm{O}_\\mathrm{h} " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " a\\sqrt{2} " }, { "math_id": 3, "text": " a \\sqrt{3} " }, { "math_id": 4, "text": " A " }, { "math_id": 5, "text": " A = 6a^2. " }, { "math_id": 6, "text": " V = a^3. " }, { "math_id": 7, "text": " \\frac{1}{2}a " }, { "math_id": 8, "text": " \\frac{1}{\\sqrt{2}}a " }, { "math_id": 9, "text": " \\frac{\\sqrt{3}}{2}a " }, { "math_id": 10, "text": " R " }, { "math_id": 11, "text": " d_i " }, { "math_id": 12, "text": " \\frac{1}{8}\\sum_{i=1}^8 d_i^4 + \\frac{16R^4}{9} = \\left(\\frac{1}{8}\\sum_{i=1}^8 d_i^2 + \\frac{2R^2}{3}\\right)^2. " }, { "math_id": 13, "text": " \\mathrm{O} " }, { "math_id": 14, "text": " (\\pm 1, \\pm 1, \\pm 1) " }, { "math_id": 15, "text": " (x_0, x_1, x_2) " }, { "math_id": 16, "text": " -1 < x_i < 1 " }, { "math_id": 17, "text": " i " }, { "math_id": 18, "text": " (x_0, y_0, z_0) " }, { "math_id": 19, "text": " 2a " }, { "math_id": 20, "text": " (x,y,z) " }, { "math_id": 21, "text": " \\max\\{ |x-x_0|,|y-y_0|,|z-z_0| \\} = a." }, { "math_id": 22, "text": "n" }, { "math_id": 23, "text": " Q_n " }, { "math_id": 24, "text": " Q_2 " }, { "math_id": 25, "text": " Q_3 " }, { "math_id": 26, "text": " \\begin{bmatrix}\\begin{matrix}8 & 3 & 3 \\\\ 2 & 12 & 2 \\\\ 4 & 4 & 6 \\end{matrix}\\end{bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=6285
62853314
Focal conics
Pairs of conic sections in geometry In geometry, focal conics are a pair of curves consisting of either an ellipse and a hyperbola lying in mutually perpendicular planes, such that the vertices of the hyperbola are the foci of the ellipse and the foci of the hyperbola are the vertices of the ellipse, or two parabolas lying in mutually perpendicular planes, such that the vertex of each parabola is the focus of the other. Focal conics play an essential role in answering the question: "Which right circular cones contain a given ellipse or hyperbola or parabola?" (see below). Focal conics are used as directrices for generating Dupin cyclides as canal surfaces in two ways. Focal conics can be seen as degenerate focal surfaces: Dupin cyclides are the only surfaces where the focal surfaces collapse to a pair of curves, namely the focal conics. In physical chemistry, focal conics are used for describing geometrical properties of liquid crystals. Focal conics should not be confused with confocal conics, which all share the same foci. Equations and parametric representations. Ellipse and hyperbola. If one describes the ellipse in the x-y-plane in the common way by the equation formula_0 then the corresponding focal hyperbola in the x-z-plane has the equation formula_1 where formula_2 is the linear eccentricity of the ellipse with formula_3 Parametric representations are, for the ellipse: formula_4 and for the hyperbola: formula_5 Two parabolas. The two parabolas lie in the x-y-plane and in the x-z-plane: 1st parabola: formula_6 and 2nd parabola: formula_7 with formula_8 the semi-latus rectum of both parabolas. Right circular cones through an ellipse. "Given": an ellipse with vertices formula_9 and foci formula_10, and a right circular cone with apex formula_11 containing the ellipse (see diagram). Because of symmetry, the axis of the cone has to be contained in the plane through the foci that is orthogonal to the ellipse's plane. There exists a Dandelin sphere formula_12, which touches the ellipse's plane at the focus formula_13 and touches the cone at a circle. From the diagram and the fact that all tangential distances of a point to a sphere are equal, one gets: formula_14 formula_15 Hence: formula_16 const., and the set of all possible apices lies on the hyperbola with vertices formula_10 and foci formula_9. Analogously, one proves the cases where the cones contain a hyperbola or a parabola.
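The defining relationship between the two curves can be checked numerically from the equations above. The Python sketch below (with arbitrarily chosen semi-axes) verifies that the focal hyperbola's vertices are the ellipse's foci, that its foci are the ellipse's vertices, and that a point S on the hyperbola satisfies |AS| - |BS| = |EF| = 2c, the constant used in the cone argument.

```python
import math

a, b = 5.0, 3.0                      # semi-axes of the ellipse (arbitrary choice)
c = math.sqrt(a * a - b * b)         # linear eccentricity

# Ellipse in the x-y-plane: vertices A, B and foci E, F
A, B = (-a, 0.0, 0.0), (a, 0.0, 0.0)
E, F = (-c, 0.0, 0.0), (c, 0.0, 0.0)

# Focal hyperbola in the x-z-plane: x^2/c^2 - z^2/b^2 = 1
hyp_vertices = ((-c, 0.0, 0.0), (c, 0.0, 0.0))
hyp_foci_dist = math.sqrt(c * c + b * b)          # linear eccentricity of the hyperbola = a
print("hyperbola vertices = ellipse foci    :", hyp_vertices == (E, F))
print("hyperbola foci     = ellipse vertices:", math.isclose(hyp_foci_dist, a))

# Take points S on the right branch of the hyperbola (cosh/sinh parametrization above)
for psi in (0.3, 1.0, 2.0):
    S = (c * math.cosh(psi), 0.0, b * math.sinh(psi))
    diff = math.dist(A, S) - math.dist(B, S)
    print(f"psi={psi}:  |AS| - |BS| = {diff:.6f}   (|EF| = {2*c:.6f})")
```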
[ { "math_id": 0, "text": "\\frac{x^2}{a^2} + \\frac{y^2}{b^2} = 1\\; ," }, { "math_id": 1, "text": "\\frac{x^2}{c^2} - \\frac{z^2}{b^2} = 1\\; , " }, { "math_id": 2, "text": "c" }, { "math_id": 3, "text": "\\; c^2 = a^2 - b^2\\; ." }, { "math_id": 4, "text": "\\quad \\vec u(\\varphi)=(a\\cos \\varphi,b\\sin \\varphi,0)^T\\ " }, { "math_id": 5, "text": " \\ \\vec v(\\psi)=(c\\cosh \\psi, 0,b\\sinh \\psi)^T\\ ." }, { "math_id": 6, "text": "\\ y^2=p^2-2px\\ " }, { "math_id": 7, "text": "\\ z^2=2px \\ ." }, { "math_id": 8, "text": "p" }, { "math_id": 9, "text": "A,B" }, { "math_id": 10, "text": "E,F" }, { "math_id": 11, "text": "S" }, { "math_id": 12, "text": "k" }, { "math_id": 13, "text": "F" }, { "math_id": 14, "text": "|AS|=|AA_1|+|A_1S|=|AF|+|B_1S| " }, { "math_id": 15, "text": "|BS|=|BB_1|+|B_1S|=|BF|+|B_1S| " }, { "math_id": 16, "text": "|AS|-|BS|=|AF|-|BF|=|EF|= " } ]
https://en.wikipedia.org/wiki?curid=62853314
628536
Bandwidth-limited pulse
Type of wave pulse A bandwidth-limited pulse (also known as a Fourier-transform-limited pulse, or more commonly, a transform-limited pulse) is a pulse of a wave that has the minimum possible duration for a given spectral bandwidth. Bandwidth-limited pulses have a constant phase across all frequencies making up the pulse. Optical pulses of this type can be generated by mode-locked lasers. Any waveform can be decomposed into its spectral components by Fourier analysis or Fourier transformation. The length of a pulse is thereby determined by its complex spectral components, which include not just their relative intensities, but also the relative positions (spectral phase) of these spectral components. For different pulse shapes, the minimum duration-bandwidth product is different. The duration-bandwidth product is minimal for zero phase modulation. For example, formula_0 pulses have a minimum duration-bandwidth product of 0.315, while Gaussian pulses have a minimum value of 0.441. A bandwidth-limited pulse can only be kept together if the dispersion of the medium the wave is travelling through is zero; otherwise, dispersion management is needed to revert the effects of unwanted spectral phase changes. For example, when an ultrashort pulse passes through a block of glass, the glass medium broadens the pulse due to group velocity dispersion. Keeping pulses bandwidth-limited is necessary to compress information in time or to achieve high field densities, as with ultrashort pulses in mode-locked lasers.
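The 0.441 figure quoted above for a Gaussian pulse can be reproduced numerically: sample a chirp-free Gaussian field, measure the FWHM of its intensity in time and of its spectral intensity (via an FFT), and multiply the two widths. The NumPy sketch below uses arbitrarily chosen pulse parameters; adding a quadratic temporal phase (chirp) raises the product above the limit, illustrating why constant spectral phase is required.

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled single-peaked curve."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left

N, dt = 2**16, 0.02                      # time grid in femtoseconds
t = (np.arange(N) - N // 2) * dt
sigma = 10.0                             # Gaussian field 1/e half-width (fs), assumed

def duration_bandwidth_product(field):
    """FWHM of |E(t)|^2 times FWHM of |E(nu)|^2."""
    intensity = np.abs(field) ** 2
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(field))) ** 2
    nu = np.fft.fftshift(np.fft.fftfreq(N, d=dt))     # frequency in 1/fs
    return fwhm(t, intensity) * fwhm(nu, spectrum)

unchirped = np.exp(-t**2 / (2 * sigma**2))            # flat (zero) spectral phase
chirped = unchirped * np.exp(1j * 0.02 * t**2)        # quadratic temporal phase

print("transform-limited Gaussian:", round(duration_bandwidth_product(unchirped), 3))  # ~0.441 = 2 ln2 / pi
print("chirped Gaussian:          ", round(duration_bandwidth_product(chirped), 3))    # well above 0.441
```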
[ { "math_id": 0, "text": "\\mathrm{sech^2}" } ]
https://en.wikipedia.org/wiki?curid=628536
62856263
Finite algebra
In abstract algebra, an associative algebra formula_0 over a ring formula_1 is called finite if it is finitely generated as an formula_1-module. An formula_1-algebra can be thought of as a homomorphism of rings formula_2; in this case formula_3 is called a finite morphism if formula_0 is a finite formula_1-algebra. Being a finite algebra is a stronger condition than being an algebra of finite type. Finite morphisms in algebraic geometry. This concept is closely related to that of a finite morphism in algebraic geometry; in the simplest case of affine varieties, given two affine varieties formula_4, formula_5 and a dominant regular map formula_6, the induced homomorphism of formula_7-algebras formula_8 defined by formula_9 turns formula_10 into a formula_11-algebra: formula_12 is a finite morphism of affine varieties if formula_8 is a finite morphism of formula_7-algebras. The generalisation to schemes can be found in the article on finite morphisms. References. <templatestyles src="Reflist/styles.css" />
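Two standard examples, written here in LaTeX for concreteness, illustrate the distinction drawn above between finite algebras and algebras of finite type; the choice of examples is an assumption, not taken from the article.

```latex
% Finite: Z[i] is generated as a Z-module by the finite set {1, i}.
\mathbf{Z}[i] = \mathbf{Z}\cdot 1 + \mathbf{Z}\cdot i
  \qquad\text{(a finite $\mathbf{Z}$-algebra)}

% Finite type but not finite: R[x] is generated as an R-algebra by the single
% element x, but as an R-module it needs the infinite set {1, x, x^2, ...}.
R[x] = \bigoplus_{n \ge 0} R\,x^{n}
  \qquad\text{(finitely generated as an algebra, not as a module, for $R \neq 0$)}
```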
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "f\\colon R \\to A" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "V\\subseteq\\mathbb{A}^n" }, { "math_id": 5, "text": "W\\subseteq\\mathbb{A}^m" }, { "math_id": 6, "text": "\\phi\\colon V\\to W" }, { "math_id": 7, "text": "\\Bbbk" }, { "math_id": 8, "text": "\\phi^*\\colon\\Gamma(W)\\to\\Gamma(V)" }, { "math_id": 9, "text": "\\phi^*f=f\\circ\\phi" }, { "math_id": 10, "text": "\\Gamma(V)" }, { "math_id": 11, "text": "\\Gamma(W)" }, { "math_id": 12, "text": "\\phi" } ]
https://en.wikipedia.org/wiki?curid=62856263
628583
Optical coherence tomography
Imaging technique Optical coherence tomography (OCT) is an imaging technique that uses interferometry with short-coherence-length light to obtain micrometer-level depth resolution and uses transverse scanning of the light beam to form two- and three-dimensional images from light reflected from within biological tissue or other scattering media. Short-coherence-length light can be obtained using a superluminescent diode (SLD) with a broad spectral bandwidth or a broadly tunable laser with narrow linewidth. The first demonstration of OCT imaging (in vitro) was published by a team from MIT and Harvard Medical School in a 1991 article in the journal "Science". The article introduced the term "OCT" to credit its derivation from optical coherence-domain reflectometry, in which the axial resolution is based on temporal coherence. The first demonstrations of in vivo OCT imaging quickly followed. The first US patents on OCT by the MIT/Harvard group described both a time-domain OCT (TD-OCT) system and a Fourier-domain OCT (FD-OCT) system of the swept-source variety. These patents were licensed by Zeiss and formed the basis of the first generations of OCT products until 2006. Tanno et al. obtained a patent on optical heterodyne tomography (similar to TD-OCT) in Japan in the same year. In the decade preceding the invention of OCT, interferometry with short-coherence-length light had been investigated for a variety of applications. The potential to use interferometry for imaging was proposed, and measurement of retinal elevation profile and thickness had been demonstrated. The initial commercial clinical OCT systems were based on point-scanning TD-OCT technology, which primarily produced cross-sectional images due to the speed limitation (tens to thousands of axial scans per second). Fourier-domain OCT became available clinically 2006, enabling much greater image acquisition rate (tens of thousands to hundreds of thousands axial scans per second) without sacrificing signal strength. The higher speed allowed for three-dimensional imaging, which can be visualized in both en face and cross-sectional views. Novel contrasts such as angiography, elastography, and optoretinography also became possible by detecting signal change over time. Over the past three decades, the speed of commercial clinical OCT systems has increased more than 1000-fold, doubling every three years and rivaling Moore's law of computer chip performance. Development of parallel image acquisition approaches such as line-field and full-field technology may allow the performance improvement trend to continue. OCT is most widely used in ophthalmology, in which it has transformed the diagnosis and monitoring of retinal diseases, optic nerve diseases, and corneal diseases. It has greatly improved the management of the top three causes of blindness – macular degeneration, diabetic retinopathy, and glaucoma – thereby preventing vision loss in many patients. By 2016 OCT was estimated to be used in more than 30 million imaging procedures per year worldwide. OCT angioscopy is used in the intravascular evaluation of coronary artery plaques and to guide stent placement. Beyond ophthalmology and cardiology, applications are also developing in other medical specialties such as dermatology, gastroenterology (endoscopy), neurology, oncology, and dentistry. Introduction. 
Interferometric reflectometry of biological tissue, especially of the human eye using short-coherence-length light (also referred to as partially-coherent, low-coherence, or broadband, broad-spectrum, or white light) was investigated in parallel by multiple groups worldwide since 1980s. In 1991, David Huang, then a student in James Fujimoto laboratory at Massachusetts Institute of Technology, working with Eric Swanson at the MIT Lincoln Laboratory and colleagues at the Harvard Medical School, successfully demonstrated imaging and called the new imaging modality "optical coherence tomography". Since then, OCT with micrometer resolution and cross-sectional imaging capabilities has become a prominent biomedical imaging technique that has continually improved in technical performance and range of applications. The improvement in image acquisition rate is particularly spectacular, starting with the original 0.8 Hz axial scan repetition rate to the current commercial clinical OCT systems operating at several hundred kHz and laboratory prototypes at multiple MHz. The range of applications has expanded from ophthalmology to cardiology and other medical specialties. For their roles in the invention of OCT, Fujimoto, Huang, and Swanson received the 2023 Lasker-DeBakey Clinical Medical Research Award and the National Medal of Technology and Innovation. These developments have been reviewed in articles written for the general scientific and medical readership. It is particularly suited to ophthalmic applications and other tissue imaging requiring micrometer resolution and millimeter penetration depth. OCT has also been used for various art conservation projects, where it is used to analyze different layers in a painting. OCT has interesting advantages over other medical imaging systems. Medical ultrasonography, magnetic resonance imaging (MRI), confocal microscopy, and OCT are differently suited to morphological tissue imaging: while the first two have whole body but low resolution imaging capability (typically a fraction of a millimeter), the third one can provide images with resolutions well below 1 micrometer (i.e. sub-cellular), between 0 and 100 micrometers in depth, and the fourth can probe as deep as 500 micrometers, but with a lower (i.e. architectural) resolution (around 10 micrometers in lateral and a few micrometers in depth in ophthalmology, for instance, and 20 micrometers in lateral in endoscopy). OCT is based on low-coherence interferometry. In conventional interferometry with long coherence length (i.e., laser interferometry), interference of light occurs over a distance of meters. In OCT, this interference is shortened to a distance of micrometers, owing to the use of broad-bandwidth light sources (i.e., sources that emit light over a broad range of frequencies). Light with broad bandwidths can be generated by using superluminescent diodes or lasers with extremely short pulses (femtosecond lasers). White light is an example of a broadband source with lower power. Light in an OCT system is broken into two arms – a sample arm (containing the item of interest) and a reference arm (usually a mirror). The combination of reflected light from the sample arm and reference light from the reference arm gives rise to an interference pattern, but only if light from both arms have traveled the "same" optical distance ("same" meaning a difference of less than a coherence length). 
By scanning the mirror in the reference arm, a reflectivity profile of the sample can be obtained (this is time domain OCT). Areas of the sample that reflect back a lot of light will create greater interference than areas that don't. Any light that is outside the short coherence length will not interfere. This reflectivity profile, called an A-scan, contains information about the spatial dimensions and location of structures within the item of interest. A cross-sectional tomogram (B-scan) may be achieved by laterally combining a series of these axial depth scans (A-scan). A face imaging at an acquired depth is possible depending on the imaging engine used. Layperson's explanation. Optical coherence tomography (OCT) is a technique for obtaining sub-surface images of translucent or opaque materials at a resolution equivalent to a low-power microscope. It is effectively "optical ultrasound", imaging reflections from within tissue to provide cross-sectional images. OCT has attracted interest among the medical community because it provides tissue morphology imagery at much higher resolution (less than 10 μm axially and less than 20 μm laterally ) than other imaging modalities such as MRI or ultrasound. The key benefits of OCT are: OCT delivers high resolution because it is based on light, rather than sound or radio frequency. An optical beam is directed at the tissue, and the small portion of this light that reflects directly back from sub-surface features is collected. Note that most light scatters off at large angles. In conventional imaging, this diffusely scattered light contributes background that obscures an image. However, in OCT, a technique called interferometry is used to record the optical path length of received photons, allowing rejection of most photons that scatter multiple times before detection. Thus OCT can build up clear 3D images of thick samples by rejecting background signal while collecting light directly reflected from surfaces of interest. Within the range of noninvasive three-dimensional imaging techniques that have been introduced to the medical research community, OCT as an echo technique is similar to ultrasound imaging. Other medical imaging techniques such as computerized axial tomography, magnetic resonance imaging, or positron emission tomography do not use the echo-location principle. The technique is limited to imaging 1 to 2 mm below the surface in biological tissue, because at greater depths the proportion of light that escapes without scattering is too small to be detected. No special preparation of a biological specimen is required, and images can be obtained "non-contact" or through a transparent window or membrane. The laser output from the instruments used is low – eye-safe near-infrared or visible-light – and no damage to the sample is therefore likely. Theory. The principle of OCT is white light, or low coherence, interferometry. The optical setup typically consists of an interferometer (Fig. 1, typically Michelson type) with a low coherence, broad bandwidth light source. Light is split into and recombined from reference and sample arms, respectively. formula_0 Time domain. In time domain OCT the path length of the reference arm is varied in time (the reference mirror is translated longitudinally). A property of low coherence interferometry is that interference, i.e. the series of dark and bright fringes, is only achieved when the path difference lies within the coherence length of the light source. 
This interference is called autocorrelation in a symmetric interferometer (both arms have the same reflectivity), or cross-correlation in the common case. The envelope of this modulation changes as path length difference is varied, where the peak of the envelope corresponds to path length matching. The interference of two partially coherent light beams can be expressed in terms of the source intensity, formula_1, as formula_2 where formula_3 represents the interferometer beam splitting ratio, and formula_4 is called the complex degree of coherence, i.e. the interference envelope and carrier dependent on reference arm scan or time delay formula_5, and whose recovery is of interest in OCT. Due to the coherence gating effect of OCT the complex degree of coherence is represented as a Gaussian function expressed as formula_6 where formula_7 represents the spectral width of the source in the optical frequency domain, and formula_8 is the centre optical frequency of the source. In equation (2), the Gaussian envelope is amplitude modulated by an optical carrier. The peak of this envelope represents the location of the microstructure of the sample under test, with an amplitude dependent on the reflectivity of the surface. The optical carrier is due to the Doppler effect resulting from scanning one arm of the interferometer, and the frequency of this modulation is controlled by the speed of scanning. Therefore, translating one arm of the interferometer has two functions; depth scanning and a Doppler-shifted optical carrier are accomplished by pathlength variation. In OCT, the Doppler-shifted optical carrier has a frequency expressed as formula_9 where formula_8 is the central optical frequency of the source, formula_10 is the scanning velocity of the pathlength variation, and formula_11 is the speed of light. The axial and lateral resolutions of OCT are decoupled from one another; the former being an equivalent to the coherence length of the light source and the latter being a function of the optics. The axial resolution of OCT is defined as where formula_12 and formula_13 are respectively the central wavelength and the spectral width of the light source. Fourier domain. Fourier-domain (or Frequency-domain) OCT (FD-OCT) has speed and signal-to-noise ratio (SNR) advantages over time-domain OCT (TD-OCT) and has become the standard in the industry since 2006. The idea of using frequency modulation and coherent detection to obtain ranging information was already demonstrated in optical frequency domain reflectometry and laser radar in the 1980s, though the distance resolution and range were much longer than OCT. There are two types of FD-OCT – swept-source OCT (SS-OCT) and spectral-domain OCT (SD-OCT) – both of which acquire spectral interferograms which are then Fourier transformed to obtain an axial scan of reflectance amplitude versus depth. In SS-OCT, the spectral interferogram is acquired sequentially by tuning the wavelength of a laser light source. SD-OCT acquires spectral interferogram simultaneously in a spectrometer. An implementation of SS-OCT was described by the MIT group as early as 1994.   A group based in the University of Vienna described measurement of intraocular distance using both tunable laser and spectrometer-based interferometry as early as 1995. SD-OCT imaging was first demonstrated both in vitro and in vivo by a collaboration between the Vienna group and a group based in the Nicholas Copernicus University in a series of articles between 2000 and 2002. 
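The Fourier-domain principle just described (acquire the interference signal as a function of wavenumber, then Fourier transform it to obtain reflectivity versus depth) can be illustrated with a short numerical sketch in Python/NumPy. The source bandwidth, reflector depths, and reflectivities below are arbitrary assumptions, and the autocorrelation (sample-sample interference) terms are ignored for clarity.

```python
import numpy as np

# Wavenumber grid for an assumed source: 840 nm center, ~50 nm bandwidth
lam0, dlam, N = 840e-9, 50e-9, 2048
k0 = 2 * np.pi / lam0
k_span = 2 * np.pi * dlam / lam0**2              # approximate span in k
k = np.linspace(k0 - k_span / 2, k0 + k_span / 2, N)

# Assumed sample: three reflectors (depth in metres, field reflectivity)
z_refl = np.array([100e-6, 250e-6, 400e-6])
r_refl = np.array([0.8, 0.5, 0.3])

# Spectral interferogram: Gaussian source spectrum times (DC + cross terms)
S = np.exp(-(((k - k0) / (k_span / 4)) ** 2))
fringes = sum(r * np.cos(2 * k * z) for r, z in zip(r_refl, z_refl))
I = S * (1.0 + 2.0 * fringes)

# Fourier transform along k yields the A-scan; cos(2kz) has k-frequency z/pi
dk = k[1] - k[0]
a_scan = np.abs(np.fft.rfft((I - I.mean()) * np.hanning(N)))
depth = np.pi * np.fft.rfftfreq(N, d=dk)

for z, r in zip(z_refl, r_refl):
    window = (depth > z - 20e-6) & (depth < z + 20e-6)
    z_peak = depth[window][np.argmax(a_scan[window])]
    print(f"reflector at {z*1e6:5.1f} um (r={r}) -> A-scan peak near {z_peak*1e6:5.1f} um")
```

In SD-OCT the wavenumber samples come from a spectrometer and generally must be resampled to be uniform in k before the transform, as discussed below, whereas in SS-OCT they are acquired sequentially as the laser sweeps.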
The SNR advantage of FD-OCT over TD-OCT was analyzed by multiple groups of researchers in 2003. Spectral-domain OCT. Spectral-domain OCT (spatially encoded frequency domain OCT) extracts spectral information by distributing different optical frequencies onto a detector stripe (line-array CCD or CMOS) via a dispersive element (see Fig. 4). Thereby the information of the full depth scan can be acquired within a single exposure. However, the large signal-to-noise advantage of FD-OCT is reduced due to the lower dynamic range of stripe detectors with respect to single photosensitive diodes, resulting in an SNR advantage of ~10 dB at much higher speeds. This is not much of a problem when working at 1300 nm, however, since dynamic range is not a serious problem at this wavelength range. The drawbacks of this technology are found in a strong fall-off of the SNR, which is proportional to the distance from the zero delay and a sinc-type reduction of the depth-dependent sensitivity because of limited detection linewidth. (One pixel detects a quasi-rectangular portion of an optical frequency range instead of a single frequency, the Fourier transform leads to the sinc(z) behavior). Additionally, the dispersive elements in the spectroscopic detector usually do not distribute the light equally spaced in frequency on the detector, but mostly have an inverse dependence. Therefore, the signal has to be resampled before processing, which cannot take care of the difference in local (pixelwise) bandwidth, which results in further reduction of the signal quality. However, the fall-off is not a serious problem with the development of new generation CCD or photodiode array with a larger number of pixels. Synthetic array heterodyne detection offers another approach to this problem without the need for high dispersion. Swept-source OCT. Swept-source OCT (Time-encoded frequency domain OCT) tries to combine some of the advantages of standard TD and spectral domain OCT. Here the spectral components are not encoded by spatial separation, but they are encoded in time. The spectrum is either filtered or generated in single successive frequency steps and reconstructed before Fourier transformation. By accommodation of a frequency scanning light source (i.e. frequency scanning laser) the optical setup (see Fig. 3) becomes simpler than spectral domain OCT, but the problem of scanning is essentially translated from the TD-OCT reference arm into the swept source OCT light source. Here the advantage lies in the proven high SNR detection technology, while swept laser sources achieve very small instantaneous bandwidths (linewidths) at very high frequencies (20–200 kHz). Drawbacks are the nonlinearities in the wavelength (especially at high scanning frequencies), the broadening of the linewidth at high frequencies and a high sensitivity to movements of the scanning geometry or the sample (below the range of nanometers within successive frequency steps). Scanning schemes. Focusing the light beam to a point on the surface of the sample under test, and recombining the reflected light with the reference will yield an interferogram with sample information corresponding to a single A-scan (Z axis only). Scanning of the sample can be accomplished by either scanning the light on the sample, or by moving the sample under test. 
A linear scan will yield a two-dimensional data set corresponding to a cross-sectional image (X-Z axes scan), whereas an area scan achieves a three-dimensional data set corresponding to a volumetric image (X-Y-Z axes scan). Single point. Systems based on single point, confocal, or flying-spot time domain OCT, must scan the sample in two lateral dimensions and reconstruct a three-dimensional image using depth information obtained by coherence-gating through an axially scanning reference arm (Fig. 2). Two-dimensional lateral scanning has been electromechanically implemented by moving the sample using a translation stage, and using a novel micro-electro-mechanical system scanner. Line-field OCT. Line-field confocal optical coherence tomography (LC-OCT) is an imaging technique based on the principle of time-domain OCT with line illumination using a broadband laser and line detection using a line-scan camera. LC-OCT produces B-scans in real-time from multiple A-scans acquired in parallel. En face as well as three-dimensional images can also be obtained by scanning the illumination line laterally. The focus is continuously adjusted during the scan of the sample depth, using a high numerical aperture (NA) microscope objective to image with high lateral resolution. By using a supercontinuum laser as a light source, a quasi-isotropic spatial resolution of ~ 1 μm is achieved at a central wavelength of ~ 800 nm. On the other hand, line illumination and detection, combined with the use of a high NA microscope objective, produce a confocal gate that prevents most scattered light that does not contribute to the signal from being detected by the camera. This confocal gate, which is absent in the full-field OCT technique, gives LC-OCT an advantage in terms of detection sensitivity and penetration in highly scattering media such as skin tissues. So far this technique has been used mainly for skin imaging in the fields of dermatology and cosmetology. Full-field OCT. An imaging approach to temporal OCT was developed by Claude Boccara's team in 1998, with an acquisition of the images without beam scanning. In this technique called full-field OCT (FF-OCT), unlike other OCT techniques that acquire cross-sections of the sample, the images are here "en-face" i.e. like images of classical microscopy: orthogonal to the light beam of illumination. More precisely, interferometric images are created by a Michelson interferometer where the path length difference is varied by a fast electric component (usually a piezo mirror in the reference arm). These images acquired by a CCD camera are combined in post-treatment (or online) by the phase shift interferometry method, where usually 2 or 4 images per modulation period are acquired, depending on the algorithm used. More recently, approaches that allow rapid single-shot imaging were developed to simultaneously capture multiple phase-shifted images required for reconstruction, using single camera. Single-shot time-domain OCM is limited only by the camera frame rate and available illumination. The "en-face" tomographic images are thus produced by a wide-field illumination, ensured by the Linnik configuration of the Michelson interferometer where a microscope objective is used in both arms. Furthermore, while the temporal coherence of the source must remain low as in classical OCT (i.e. a broad spectrum), the spatial coherence must also be low to avoid parasitical interferences (i.e. a source with a large size). Selected applications. 
Optical coherence tomography is an established medical imaging technique and is used across several medical specialties including ophthalmology and cardiology and is widely used in basic science research applications. Ophthalmology. Ocular (or ophthalmic) OCT is used heavily by ophthalmologists and optometrists to obtain high-resolution images of the retina and anterior segment. Owing to OCT's capability to show cross-sections of tissue layers with micrometer resolution, OCT provides a straightforward method of assessing cellular organization, photoreceptor integrity, and axonal thickness in glaucoma, macular degeneration, diabetic macular edema, multiple sclerosis, optic neuritis, and other eye diseases or systemic pathologies which have ocular signs. Additionally, ophthalmologists leverage OCT to assess the vascular health of the retina via a technique called OCT angiography (OCTA). In ophthalmological surgery, especially retinal surgery, an OCT can be mounted on the microscope. Such a system is called an "intraoperative OCT" (iOCT) and provides support during the surgery with clinical benefits. Polarization-sensitive OCT was recently applied in the human retina to determine optical polarization properties of vessel walls near the optic nerve. Retinal imaging with PS-OCT demonstrated how the thickness and birefringence of blood vessel wall tissue of healthy subjects could be quantified, in vivo. PS-OCT was subsequently applied to patients with diabetes and age-matched healthy subjects, and showed an almost 100% increase in vessel wall birefringence due to diabetes, without a significant change in vessel wall thickness. In patients with hypertension however, the retinal vessel wall thickness increased by 60% while the vessel wall birefringence dropped by 20%, on average. The large differences measured in healthy subjects and patients suggest that retinal measurements with PS-OCT could be used as a screening tool for hypertension and diabetes. Cardiology. In the settings of cardiology, OCT is used to image coronary arteries to visualize vessel wall lumen morphology and microstructure at a resolution ~10 times higher than other existing modalities such as intravascular ultrasounds, and x-ray angiography (intracoronary optical coherence tomography). For this type of application, 1 mm in diameter or smaller fiber-optics catheters are used to access artery lumen through semi-invasive interventions such as percutaneous coronary interventions. The first demonstration of endoscopic OCT was reported in 1997, by researchers in Fujimoto's laboratory at Massachusetts Institute of Technology. The first TD-OCT imaging catheter and system was commercialized by LightLab Imaging, Inc., a company based in Massachusetts in 2006. The first FD-OCT imaging study was reported by Massachusetts General Hospital in 2008. Intracoronary FD-OCT was first introduced in the market in 2009 by LightLab Imaging, Inc. followed by Terumo Corporation in 2012 and by Gentuity LLC in 2020. The higher acquisition speed of FD-OCT enabled the widespread adoption of this imaging technology for coronary artery imaging. It is estimated that over 100,000 FD-OCT coronary imaging cases are performed yearly, and that the market is increasing by approximately 20% every year. Other developments of intracoronary OCT included the combination with other optical imaging modalities for multi-modality imaging. 
Intravascular OCT has been combined with near-infrared fluorescence molecular imaging (NIRF) to enhance its capability to detect molecular/functional and tissue morphological information simultaneously. In a similar way, combination with near-infrared spectroscopy (NIRS) has been implemented. Neurovascular. Endoscopic/intravascular OCT has been further developed for use in neurovascular applications, including imaging for guiding endovascular treatment of ischemic stroke and brain aneurysms. Initial clinical investigations with existing coronary OCT catheters have been limited to proximal intracranial anatomy of patients with limited tortuosity, as coronary OCT technology was not designed for the tortuous cerebrovasculature encountered in the brain. However, despite these limitations, it showed the potential of OCT for the imaging of neurovascular disease. An intravascular OCT imaging catheter design tailored for use in tortuous neurovascular anatomy was proposed in 2020. A first-in-human study using endovascular neuro OCT ("n"OCT) was reported in 2024. Oncology. Endoscopic OCT has been applied to the detection and diagnosis of cancer and precancerous lesions, such as Barrett's esophagus and esophageal dysplasia. Dermatology. The first use of OCT in dermatology dates back to 1997. Since then, OCT has been applied to the diagnosis of various skin lesions including carcinomas. However, the diagnosis of melanoma using conventional OCT is difficult, especially due to insufficient imaging resolution. Emerging high-resolution OCT techniques such as LC-OCT have the potential to improve the clinical diagnostic process, allowing for the early detection of malignant skin tumors – including melanoma – and a reduction in the number of surgical excisions of benign lesions. Other promising areas of application include the imaging of lesions where excisions are hazardous or impossible and the guidance of surgical interventions through identification of tumor margins. Dentistry. Researchers at Tokyo Medical and Dental University were able to detect enamel white spot lesions around and beneath orthodontic brackets using swept-source OCT. Research applications. Researchers have used OCT to produce detailed images of mouse brains, through a "window" made of zirconia that has been modified to be transparent and implanted in the skull. Optical coherence tomography is also applicable, and increasingly used, in industrial applications such as nondestructive testing (NDT), material thickness measurements (in particular of thin silicon wafers and compound semiconductor wafers), surface roughness characterization, surface and cross-section imaging, and volume loss measurements. OCT systems with feedback can be used to control manufacturing processes. With high-speed data acquisition and sub-micron resolution, OCT is adaptable to both inline and off-line operation. Due to the high volume of pills produced, an interesting field of application is the pharmaceutical industry, where OCT can be used to control the coating of tablets. Fiber-based OCT systems are particularly adaptable to industrial environments. These can access and scan the interiors of hard-to-reach spaces, and are able to operate in hostile environments, whether radioactive, cryogenic, or very hot. Novel optical biomedical diagnostic and imaging technologies are currently being developed to solve problems in biology and medicine.
As of 2014, attempts have been made to use optical coherence tomography to identify root canals in teeth, specifically canals in the maxillary molar; however, there is no difference compared with the current method of using a dental operating microscope. Research conducted in 2015 succeeded in utilizing a smartphone as an OCT platform, although much work remains to be done before such a platform is commercially viable. Photonic integrated circuits may be a promising option for miniaturizing OCT. As with electronic integrated circuits, silicon-based fabrication techniques can be used to produce miniaturized photonic systems. The first in vivo human retinal imaging with such a system has been reported recently. In 3D microfabrication, OCT enables non-destructive testing and real-time inspection during additive manufacturing. Its high-resolution imaging detects defects, characterizes material properties and ensures the integrity of internal geometries without damaging the part. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " F _ODT \\left ( \\nu \\right ) = 2S_0 \\left ( \\nu \\right ) K_r \\left ( \\nu \\right ) K_s \\left ( \\nu \\right ) \\qquad \\quad (3) " }, { "math_id": 1, "text": "I_S" }, { "math_id": 2, "text": " I = k_1 I_S + k_2 I_S + 2 \\sqrt { \\left ( k_1 I_S \\right ) \\cdot \\left ( k_2 I_S \\right )} \\cdot Re \\left [\\gamma \\left ( \\tau \\right ) \\right] \\qquad (1) " }, { "math_id": 3, "text": "k_1 + k_2 < 1" }, { "math_id": 4, "text": " \\gamma ( \\tau ) " }, { "math_id": 5, "text": " \\tau " }, { "math_id": 6, "text": " \\gamma \\left ( \\tau \\right ) = \\exp \\left [- \\left ( \\frac{\\pi\\Delta\\nu\\tau}{2 \\sqrt{\\ln 2} } \\right )^2 \\right] \\cdot \\exp \\left ( -j2\\pi\\nu_0\\tau \\right ) \\qquad \\quad (2) " }, { "math_id": 7, "text": " \\Delta\\nu " }, { "math_id": 8, "text": " \\nu_0 " }, { "math_id": 9, "text": " f_{Dopp} = \\frac { 2 \\cdot \\nu_0 \\cdot v_s } { c } \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\quad (3) " }, { "math_id": 10, "text": " v_s " }, { "math_id": 11, "text": " c " }, { "math_id": 12, "text": " \\lambda_0 " }, { "math_id": 13, "text": " \\Delta\\lambda" } ]
https://en.wikipedia.org/wiki?curid=628583
62858445
Serre's theorem on a semisimple Lie algebra
In abstract algebra, specifically the theory of Lie algebras, Serre's theorem states: given a (finite reduced) root system formula_0, there exists a finite-dimensional semisimple Lie algebra whose root system is the given formula_0. Statement. The theorem states that, given a root system formula_0 in a Euclidean space with an inner product formula_1, formula_2, and a base formula_3 of formula_0, the Lie algebra formula_4 defined by (1) formula_5 generators formula_6 and (2) the relations formula_7, formula_8, formula_9, formula_10, formula_11, is a finite-dimensional semisimple Lie algebra with the Cartan subalgebra generated by the formula_12's and with the root system formula_0. The square matrix formula_13 is called the Cartan matrix. Thus, with this notion, the theorem states that, given a Cartan matrix "A", there exists a unique (up to an isomorphism) finite-dimensional semisimple Lie algebra formula_14 associated to formula_15. The construction of a semisimple Lie algebra from a Cartan matrix can be generalized by weakening the definition of a Cartan matrix. The (generally infinite-dimensional) Lie algebra associated to a generalized Cartan matrix is called a Kac–Moody algebra. Sketch of proof. The proof here follows the standard references. Let formula_16 and then let formula_17 be the Lie algebra generated by (1) the generators formula_6 and (2) the relations formula_18, formula_19, formula_20 and formula_21. Let formula_22 be the free vector space spanned by the formula_12's, "V" the free vector space with a basis formula_23 and formula_24 the tensor algebra over it. Consider the following representation of the Lie algebra formula_17: formula_25 given by: for formula_26, formula_27 formula_28 formula_29 It is not trivial that this is indeed a well-defined representation, and this has to be checked by hand. From this representation, one deduces the following properties: let formula_30 (resp. formula_31) be the subalgebras of formula_17 generated by the formula_32's (resp. the formula_33's); then one has the triangular decomposition formula_34, where formula_35 and formula_37 with formula_36, so that formula_38. For each ideal formula_39 of formula_17, one can easily show that formula_39 is homogeneous with respect to the grading given by the root space decomposition; i.e., formula_40. It follows that the sum of all ideals intersecting formula_41 trivially itself intersects formula_41 trivially. Let formula_42 be the sum of all ideals intersecting formula_41 trivially. Then there is a vector space decomposition: formula_43. In fact, it is a formula_17-module decomposition. Let formula_44. Then it contains a copy of formula_41, which is identified with formula_41, and formula_45, where formula_46 (resp. formula_47) are the subalgebras generated by the images of the formula_32's (resp. the images of the formula_33's). One then shows: (1) the derived algebra formula_48 here is the same as formula_4 in the lead, (2) it is finite-dimensional and semisimple and (3) formula_49.
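As a concrete check of the relations appearing in the statement (an illustrative sketch, not part of the proof above), the following script verifies with numpy that the standard Chevalley generators of sl(3,C), built from matrix units, satisfy the defining relations and the Serre relations for the type A2 Cartan matrix.

```python
import numpy as np

def E(i, j, n=3):
    """Matrix unit E_{ij} of size n x n."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def bracket(a, b):
    return a @ b - b @ a

# Chevalley generators of sl(3, C) attached to the two simple roots of type A2
e = [E(0, 1), E(1, 2)]
f = [E(1, 0), E(2, 1)]
h = [E(0, 0) - E(1, 1), E(1, 1) - E(2, 2)]

# Cartan matrix of type A2
A = np.array([[2, -1], [-1, 2]])

for i in range(2):
    for j in range(2):
        assert np.allclose(bracket(h[i], h[j]), 0)                      # [h_i, h_j] = 0
        assert np.allclose(bracket(e[i], f[j]), h[i] if i == j else 0)  # [e_i, f_j] = delta_ij h_i
        assert np.allclose(bracket(h[i], e[j]), A[i, j] * e[j])         # [h_i, e_j] = a_ij e_j
        assert np.allclose(bracket(h[i], f[j]), -A[i, j] * f[j])        # [h_i, f_j] = -a_ij f_j
        if i != j:
            # Serre relations: ad(e_i)^{1 - a_ij}(e_j) = 0 and likewise for the f's
            assert np.allclose(bracket(e[i], bracket(e[i], e[j])), 0)
            assert np.allclose(bracket(f[i], bracket(f[i], f[j])), 0)
print("All defining and Serre relations hold for the Chevalley generators of sl(3, C).")
```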
[ { "math_id": 0, "text": "\\Phi" }, { "math_id": 1, "text": "(, )" }, { "math_id": 2, "text": "\\langle \\beta, \\alpha \\rangle = 2(\\alpha, \\beta)/(\\alpha, \\alpha), \\beta, \\alpha \\in E" }, { "math_id": 3, "text": "\\{ \\alpha_1, \\dots, \\alpha_n \\}" }, { "math_id": 4, "text": "\\mathfrak g" }, { "math_id": 5, "text": "3n" }, { "math_id": 6, "text": "e_i, f_i, h_i" }, { "math_id": 7, "text": "[h_i, h_j] = 0," }, { "math_id": 8, "text": "[e_i, f_i] = h_i, \\, [e_i, f_j] = 0, i \\ne j" }, { "math_id": 9, "text": "[h_i, e_j] = \\langle \\alpha_i, \\alpha_j \\rangle e_j, \\, [h_i, f_j] = -\\langle \\alpha_i, \\alpha_j \\rangle f_j" }, { "math_id": 10, "text": "\\operatorname{ad}(e_i)^{-\\langle \\alpha_i, \\alpha_j \\rangle+1}(e_j) = 0, i \\ne j" }, { "math_id": 11, "text": "\\operatorname{ad}(f_i)^{-\\langle \\alpha_i, \\alpha_j \\rangle+1}(f_j) = 0, i \\ne j" }, { "math_id": 12, "text": "h_i" }, { "math_id": 13, "text": "[\\langle \\alpha_i, \\alpha_j \\rangle]_{1 \\le i, j \\le n}" }, { "math_id": 14, "text": "\\mathfrak g(A)" }, { "math_id": 15, "text": "A" }, { "math_id": 16, "text": "a_{ij} = \\langle \\alpha_i, \\alpha_j \\rangle" }, { "math_id": 17, "text": "\\widetilde{\\mathfrak g}" }, { "math_id": 18, "text": "[h_i, h_j] = 0" }, { "math_id": 19, "text": "[e_i, f_i] = h_i" }, { "math_id": 20, "text": "[e_i, f_j] = 0, i \\ne j" }, { "math_id": 21, "text": "[h_i, e_j] = a_{ij} e_j, [h_i, f_j] = -a_{ij} f_j" }, { "math_id": 22, "text": "\\mathfrak{h}" }, { "math_id": 23, "text": "v_1, \\dots, v_n" }, { "math_id": 24, "text": "T = \\bigoplus_{l=0}^{\\infty} V^{\\otimes l}" }, { "math_id": 25, "text": "\\pi : \\widetilde{\\mathfrak g} \\to \\mathfrak{gl}(T)" }, { "math_id": 26, "text": "a \\in T, h \\in \\mathfrak{h}, \\lambda \\in \\mathfrak{h}^*" }, { "math_id": 27, "text": "\\pi(f_i)a = v_i \\otimes a," }, { "math_id": 28, "text": "\\pi(h)1 = \\langle \\lambda, \\, h \\rangle 1, \\pi(h)(v_j \\otimes a) = -\\langle \\alpha_j, h \\rangle v_j \\otimes a + v_j \\otimes \\pi(h)a" }, { "math_id": 29, "text": "\\pi(e_i)1 = 0, \\, \\pi(e_i)(v_j \\otimes a) = \\delta_{ij} \\alpha_i(a) + v_j \\otimes \\pi(e_i)a" }, { "math_id": 30, "text": "\\widetilde{\\mathfrak{n}}_+" }, { "math_id": 31, "text": "\\widetilde{\\mathfrak{n}}_-" }, { "math_id": 32, "text": "e_i" }, { "math_id": 33, "text": "f_i" }, { "math_id": 34, "text": "\\widetilde{\\mathfrak g} = \\widetilde{\\mathfrak{n}}_+ \\bigoplus \\mathfrak{h} \\bigoplus \\widetilde{\\mathfrak{n}}_-" }, { "math_id": 35, "text": "\\widetilde{\\mathfrak{n}}_+ = \\bigoplus_{0 \\ne \\alpha \\in Q_+} \\widetilde{\\mathfrak g}_{\\alpha}" }, { "math_id": 36, "text": "\\widetilde{\\mathfrak g}_{\\alpha} = \\{ x \\in \\widetilde{\\mathfrak g}|[h, x] = \\alpha(h) x, h \\in \\mathfrak h \\}" }, { "math_id": 37, "text": "\\widetilde{\\mathfrak{n}}_- = \\bigoplus_{0 \\ne \\alpha \\in Q_+} \\widetilde{\\mathfrak g}_{-\\alpha}" }, { "math_id": 38, "text": "\\widetilde{\\mathfrak g} = \\left( \\bigoplus_{0 \\ne \\alpha \\in Q_+} \\widetilde{\\mathfrak g}_{-\\alpha} \\right) \\bigoplus \\mathfrak h \\bigoplus \\left( \\bigoplus_{0 \\ne \\alpha \\in Q_+} \\widetilde{\\mathfrak g}_{\\alpha} \\right)" }, { "math_id": 39, "text": "\\mathfrak i" }, { "math_id": 40, "text": "\\mathfrak i = \\bigoplus_{\\alpha} (\\widetilde{\\mathfrak g}_{\\alpha} \\cap \\mathfrak i)" }, { "math_id": 41, "text": "\\mathfrak h" }, { "math_id": 42, "text": "\\mathfrak r" }, { "math_id": 43, "text": "\\mathfrak r = (\\mathfrak r \\cap \\widetilde{\\mathfrak n}_-) \\oplus (\\mathfrak r 
\\cap \\widetilde{\\mathfrak n}_+)" }, { "math_id": 44, "text": "\\mathfrak g = \\widetilde{\\mathfrak g}/\\mathfrak r" }, { "math_id": 45, "text": "\\mathfrak g = \\mathfrak{n}_+ \\bigoplus \\mathfrak{h} \\bigoplus \\mathfrak{n}_-" }, { "math_id": 46, "text": "\\mathfrak{n}_+" }, { "math_id": 47, "text": "\\mathfrak{n}_-" }, { "math_id": 48, "text": "[\\mathfrak g, \\mathfrak g]" }, { "math_id": 49, "text": "[\\mathfrak g, \\mathfrak g] = \\mathfrak g" } ]
https://en.wikipedia.org/wiki?curid=62858445
62860439
Regular numerical predicate
In computer science and mathematics, more precisely in automata theory, model theory and formal language theory, a regular numerical predicate is a kind of relation over integers. Regular numerical predicates can also be considered as subsets of formula_0 for some arity formula_1. One of the main interests of this class of predicates is that it can be defined in many different ways, using different logical formalisms. Furthermore, most of the definitions use only basic notions, and thus allow one to relate the foundations of various fields of fundamental computer science such as automata theory, syntactic semigroups, model theory and semigroup theory. The class of regular numerical predicates is denoted formula_2, formula_3 or REG. Definitions. The class of regular numerical predicates admits many equivalent definitions, which are now given. In all of those definitions, we fix formula_4 and a (numerical) predicate formula_5 of arity formula_1. Automata with variables. The first definition encodes the predicate as a formal language. A predicate is said to be regular if this formal language is regular. Let the alphabet formula_6 be the set of subsets of formula_7. Given a vector of formula_1 integers formula_8, it is represented by the word formula_9 of length formula_10 + 1 whose formula_11-th letter (for formula_11 starting at 0) is formula_12. For example, the vector formula_13 is represented by the word formula_14. We then define formula_15 as formula_16. The numerical predicate formula_17 is said to be regular if formula_15 is a regular language over the alphabet formula_6. This is the reason for the use of the word "regular" to describe this kind of numerical predicate. Automata reading unary numbers. This second definition is similar to the previous one. Predicates are encoded into languages in a different way, and the predicate is said to be regular if and only if the language is regular. Our alphabet formula_6 is the set of vectors of formula_1 binary digits. That is: formula_18. Before explaining how to encode a vector of numbers, we explain how to encode a single number. Given a length formula_19 and a number formula_20, the unary representation of formula_21 of length formula_19 is the word formula_22 over the binary alphabet formula_23, beginning with a sequence of formula_21 "1"s followed by "0"s, so that the total length is formula_19. For example, the unary representation of 1 of length 4 is formula_25. Given a vector of formula_1 integers formula_8, let formula_26. The vector formula_27 is represented by the word formula_28 such that the projection of formula_9 onto its formula_11-th component is formula_29. For example, the representation of formula_13 is formula_30. This is a word whose letters are the vectors formula_31, formula_32 and formula_32, and whose projections onto the three components are formula_33, formula_34 and formula_33. As in the previous definition, the numerical predicate formula_17 is said to be regular if formula_15 is a regular language over the alphabet formula_6. formula_35. A predicate is regular if and only if it can be defined by a monadic second-order formula formula_36, or equivalently by an existential monadic second-order formula, where the only atomic predicate is the successor function formula_37. formula_38. A predicate is regular if and only if it can be defined by a first-order logic formula formula_36, where the atomic predicates are the order formula_39 and the modular predicates formula_41 for each modulus formula_40.
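As an illustration of the "automata reading unary numbers" encoding above, the following sketch (the predicate, the encoding function and the automaton are chosen for illustration and are not taken from the article) encodes a pair (n0, n1) as a word over pairs of binary digits and checks the regular predicate n0 ≤ n1 with a small deterministic automaton.

```python
# Illustrative sketch (not from the article): the predicate {(n0, n1) : n0 <= n1}
# is regular for the "automata reading unary numbers" encoding.  A tuple is encoded
# as a word over {0,1}^2 whose i-th track is the unary word 1^{n_i} 0^{l - n_i}.

def encode(n0, n1):
    l = max(n0, n1)
    return [(1 if i < n0 else 0, 1 if i < n1 else 0) for i in range(l)]

def accepts_leq(word):
    """Small DFA over {0,1}^2: accepts the encoding of (n0, n1) iff n0 <= n1.

    The state remembers, for each track, whether a 0 has already been read
    (a valid unary track never returns to 1 afterwards).  Words padded with
    extra (0, 0) letters are also tolerated."""
    done = [False, False]
    for (b0, b1) in word:
        if (done[0] and b0) or (done[1] and b1):
            return False            # not of the form 1...10...0 on some track
        if b0 == 1 and b1 == 0:
            return False            # a position i with i < n0 but i >= n1, hence n0 > n1
        done = [done[0] or b0 == 0, done[1] or b1 == 0]
    return True

# Brute-force check against the predicate itself
assert all(accepts_leq(encode(a, b)) == (a <= b) for a in range(8) for b in range(8))
print("DFA agrees with n0 <= n1 on all pairs up to 7.")
```

The automaton only needs to remember, for each track, whether its block of 1s has ended, which is what makes this predicate regular in the sense of the definition above.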
Congruence arithmetic. The language of congruence arithmetic is defined as the set of Boolean combinations where the atomic predicates are of the form formula_42, formula_44 and formula_45. A predicate is regular if and only if it can be defined in the language of congruence arithmetic. The equivalence with the previous definition is due to quantifier elimination. Using recursion and patterns. This definition requires a fixed parameter formula_40. A predicate is said to be regular if it is formula_40-regular for some formula_47. In order to introduce the definition of formula_40-regular, the trivial case where formula_48 should be considered separately. When formula_48, the predicate formula_17 is either the constant true or the constant false. Those two predicates are said to be formula_40-regular (for every formula_40). Let us now assume that formula_49. In order to introduce the definition of a regular predicate in this case, we need the notion of a section of a predicate. The section formula_50 of formula_17 is the predicate of arity formula_51 in which the formula_11-th component is fixed to formula_43. Formally, it is defined as formula_52. For example, consider the sum predicate formula_53. Then formula_54 is the predicate which adds the constant formula_43, and formula_55 is the predicate which states that the sum of its two elements is formula_43. The last equivalent definition of a regular predicate can now be given. A predicate formula_17 of arity formula_49 is formula_40-regular if it satisfies the two following conditions: each of its sections formula_50 is formula_40-regular, and there exists formula_56 such that for every formula_57 with formula_58 for each formula_11, formula_59. The second property intuitively means that, when the numbers are big enough, their exact value does not matter. The properties which matter are the order relation between the numbers and their value modulo the period formula_40. Using recognizable semigroups. Given a subset formula_60, let formula_61 be the characteristic vector of formula_62. That is, the vector in formula_18 whose formula_11-th component is 1 if formula_63, and 0 otherwise. Given a sequence formula_64 of sets, let formula_65. The predicate formula_17 is regular if and only if, for each increasing sequence of sets formula_66, formula_67 is a recognizable submonoid of formula_68. Definition of non-regular languages. The predicate formula_17 is regular if and only if all languages which can be defined in first-order logic with atomic predicates for letters and the atomic predicate formula_17 are regular. The same property holds for monadic second-order logic, and with modular quantifiers. Reducing arity. The following property allows one to reduce an arbitrarily complex non-regular predicate to a simpler binary predicate which is also non-regular. Let us assume that formula_17 is definable in Presburger arithmetic. The predicate formula_17 is non-regular if and only if there exists a formula in formula_69 which defines multiplication by a rational formula_70. More precisely, it allows one to define the non-regular predicate formula_71 for some formula_72. Properties. The class of regular numerical predicates satisfies many properties. Satisfiability. As in the previous case, let us assume that formula_17 is definable in Presburger arithmetic. The satisfiability of formula_73 is decidable if and only if formula_17 is regular. This theorem is due to the previous property and the fact that the satisfiability of formula_74 is undecidable when formula_75 and formula_76. Closure property. The class of regular predicates is closed under union, intersection, complement, taking a section, projection and Cartesian product.
All of those properties follow directly from the definition of this class as the class of predicates definable in formula_77. Decidability. It is decidable whether a predicate defined in Presburger arithmetic is regular. Elimination of quantifiers. The logic formula_78 considered above admits quantifier elimination. More precisely, Cooper's quantifier elimination algorithm introduces neither multiplication by constants nor sums of variables. Therefore, when applied to a formula of formula_78, it returns a quantifier-free formula in formula_79. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathbb N^r" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "\\mathcal C_{lca}" }, { "math_id": 3, "text": "\\mathcal N_{\\mathtt{thres, mod}}" }, { "math_id": 4, "text": "r\\in\\mathbb N" }, { "math_id": 5, "text": "P\\subseteq\\mathbb N^r" }, { "math_id": 6, "text": "A" }, { "math_id": 7, "text": "\\{1,\\dots,r\\}" }, { "math_id": 8, "text": "\\mathbf n=(n_0,\\dots,n_{r-1})\\in\\mathbb N^r" }, { "math_id": 9, "text": "\\overline{\\mathbf n}" }, { "math_id": 10, "text": "\\max(n_0,\\dots,n_{r-1})" }, { "math_id": 11, "text": "i" }, { "math_id": 12, "text": "\\{j\\mid n_j=i\\}" }, { "math_id": 13, "text": "(3,1,3)" }, { "math_id": 14, "text": "\\emptyset\\{1\\}\\emptyset\\{0,2\\}" }, { "math_id": 15, "text": "\\overline P" }, { "math_id": 16, "text": "\\{\\overline {\\mathbf n}\\mid\\mathbf n\\}" }, { "math_id": 17, "text": "P" }, { "math_id": 18, "text": "\\{0,1\\}^r" }, { "math_id": 19, "text": "l" }, { "math_id": 20, "text": "n\\le l" }, { "math_id": 21, "text": "n" }, { "math_id": 22, "text": "\\mid{n}\\mid_l" }, { "math_id": 23, "text": "\\{0,1\\}" }, { "math_id": 24, "text": "n-l" }, { "math_id": 25, "text": "1000" }, { "math_id": 26, "text": "l=\\max(n_0,\\dots,n_{r-1})" }, { "math_id": 27, "text": "\\mathbf n" }, { "math_id": 28, "text": "\\overline{\\mathbf n}\\in\\left(\\{0,1\\}^r\\right)^*" }, { "math_id": 29, "text": "\\mid{n_i}\\mid_{\\max(n_0,\\dots,n_{r-1})}" }, { "math_id": 30, "text": "\\begin{array}{l|l|l}1&1&1\\\\1&0&0\\\\1&1&1\\end{array}" }, { "math_id": 31, "text": "(1,1,1)" }, { "math_id": 32, "text": "(1,0,1)" }, { "math_id": 33, "text": "111" }, { "math_id": 34, "text": "100" }, { "math_id": 35, "text": "(\\exists)MSO(+1)" }, { "math_id": 36, "text": "\\phi(x_0,\\dots,x_{r-1})" }, { "math_id": 37, "text": "y+1=z" }, { "math_id": 38, "text": "FO(\\le,\\mod)" }, { "math_id": 39, "text": "y\\le z" }, { "math_id": 40, "text": "m" }, { "math_id": 41, "text": "y\\equiv 0\\mod m" }, { "math_id": 42, "text": "x_i+c=x_j" }, { "math_id": 43, "text": "c" }, { "math_id": 44, "text": "x_i\\le x_j" }, { "math_id": 45, "text": "x_i\\equiv c\\mod m" }, { "math_id": 46, "text": "x" }, { "math_id": 47, "text": "m\\ge2" }, { "math_id": 48, "text": "r=0" }, { "math_id": 49, "text": "r\\ge1" }, { "math_id": 50, "text": "P^{x_i=c}" }, { "math_id": 51, "text": "r-1" }, { "math_id": 52, "text": "\\{(x_0,\\dots,x_{i-1},x_{i+1},\\dots,x_{r-1})\\mid P(x_0,\\dots,x_{i-1},c,x_{i+1},\\dots,x_{r-1})\\}" }, { "math_id": 53, "text": "S=\\{(n_0,n_1,n_2)\\mid n_0+n_1=n_2\\}" }, { "math_id": 54, "text": "S^{x_0=c}=\\{(n_1,n_2)\\mid c+n_1=n_2\\}" }, { "math_id": 55, "text": "S^{x_2=c}=\\{(n_0,n_1)\\mid n_0+n_1=c\\}" }, { "math_id": 56, "text": "t\\in\\mathbb N" }, { "math_id": 57, "text": "(n_0,\\dots,n_r)\\in\\mathbb N^r" }, { "math_id": 58, "text": "n_i\\ge t" }, { "math_id": 59, "text": "P(n_0,\\dots,n_r)\\iff P(n_0+m,\\dots,n_r+m)" }, { "math_id": 60, "text": "s\\subseteq\\{0,\\dots,r-1\\}" }, { "math_id": 61, "text": "\\overline s" }, { "math_id": 62, "text": "s" }, { "math_id": 63, "text": "i\\in s" }, { "math_id": 64, "text": "\\mathbf s=s_0\\subsetneq\\dots\\subsetneq s_{p-1}" }, { "math_id": 65, "text": "P_{\\mathbf s}=\\{(n_0,\\dots,n_{p-1})\\in\\mathbb N^p\\mid P(\\sum n_ie_i)\\}" }, { "math_id": 66, "text": "\\mathbf s" }, { "math_id": 67, "text": "P_{\\mathbf s}" }, { "math_id": 68, "text": "\\mathbb N^p" }, { "math_id": 69, "text": "\\mathbf{FO}[\\le,R]" }, { "math_id": 70, "text": "\\frac pq\\not\\in\\{0,1\\}" }, { "math_id": 71, "text": 
"\\{(p\\times n,q\\times n)\\mid n\\in\\mathbb N\\}" }, { "math_id": 72, "text": "p\\not\\in{0,q}" }, { "math_id": 73, "text": "\\exists\\mathbf{MSO}(+1,P)" }, { "math_id": 74, "text": "\\exists\\mathbf{MSO}(+1,\\times{\\frac pq})" }, { "math_id": 75, "text": "p\\ne0" }, { "math_id": 76, "text": "p\\ne q" }, { "math_id": 77, "text": "\\mathbf{FO}(\\le, \\mod) " }, { "math_id": 78, "text": "\\mathbf{FO}(\\le, +c,\\mod) " }, { "math_id": 79, "text": "\\mathbf{FO}(\\le, +c,\\mod)" } ]
https://en.wikipedia.org/wiki?curid=62860439
62878887
Curvature Renormalization Group Method
In theoretical physics, the curvature renormalization group (CRG) method is an analytical approach to determine the phase boundaries and the critical behavior of topological systems. Topological phases are phases of matter that appear in certain quantum mechanical systems at zero temperature because of a robust degeneracy in the ground-state wave function. They are called topological because they can be described by different (discrete) values of a "nonlocal" topological invariant. This is in contrast to non-topological phases of matter (e.g. ferromagnetism), which can be described by different values of a "local" order parameter. States with different values of the topological invariant cannot change into each other without a phase transition. The topological invariant is constructed from a curvature function that can be calculated from the bulk Hamiltonian of the system. At the phase transition, the curvature function diverges, and the topological invariant correspondingly jumps abruptly from one value to another. The CRG method works by detecting the divergence in the curvature function, and thus determining the boundaries between different topological phases. Furthermore, from the divergence of the curvature function, it extracts scaling laws that describe the critical behavior, i.e. how different quantities (such as susceptibility or correlation length) behave as the topological phase transition is approached. The CRG method has been successfully applied to a variety of static, periodically driven, weakly and strongly interacting systems to classify the nature of the corresponding topological phase transitions. Background. Topological phases are quantum phases of matter that are characterized by robust ground state degeneracy and quantized geometric phases. Transitions between different topological phases are usually called topological phase transitions, which are characterized by discrete jumps of the topological invariant formula_0. Upon tuning one or multiple system parameters formula_1, formula_0 jumps abruptly from one integer to another at the critical point formula_2. Typically, the topological invariant formula_0 takes the form of an integral of a curvature function formula_3 in momentum space: formula_4 Depending on the dimensionality and symmetries of the system, the curvature function can be a Berry connection, a Berry curvature, or a more complicated object. In the vicinity of high-symmetry points formula_5 in a formula_6-dimensional momentum space, where formula_7 is a reciprocal lattice vector, the curvature function typically displays a Lorentzian shape formula_8 where formula_9 defines the width of the multidimensional peak. Approaching the critical point formula_10, the peak gradually diverges, flipping sign across the transition: formula_11 Scaling laws, critical exponents, and universality. The divergence of the curvature function permits the definition of critical exponents formula_13 as formula_14 The conservation of the topological invariant formula_15, as the transition is approached from one side or the other, yields a scaling law that constrains the exponents, formula_16 where formula_6 is the dimensionality of the problem. These exponents serve to classify topological phase transitions into different universality classes. To experimentally measure the critical exponents, one needs access to the curvature function with a certain level of accuracy. Good candidates at present are quantum-engineered photonic systems and ultracold atomic systems. In the first case, the curvature function can be extracted from the anomalous displacement of wave packets under optical pulse pumping in coupled fibre loops. For ultracold atoms in optical lattices, the Berry curvature can be measured through quantum interference or force-induced wave-packet velocity measurements.
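As a toy illustration of these exponents (an assumed example, not taken from the CRG literature cited here), one can use the winding-number integrand of a simple two-band chain, F(k, M) = (1 + M cos k)/(1 + M^2 + 2M cos k), which diverges at the high-symmetry point k0 = π as M approaches the critical value 1. The script below extracts γ and ν numerically from this assumed form and checks the scaling law γ = Dν with D = 1.

```python
import numpy as np

# Toy curvature function of a two-band chain (an assumed example, not from the article):
# F(k, M) = (1 + M cos k) / (1 + M^2 + 2 M cos k), diverging at k0 = pi as M -> 1.
def F(k, M):
    return (1 + M * np.cos(k)) / (1 + M**2 + 2 * M * np.cos(k))

k0, dk = np.pi, 1e-3
deltas = np.array([0.1, 0.05, 0.025, 0.0125])    # distances |M - Mc| from the critical point Mc = 1
height = np.abs(F(k0, 1 - deltas))               # |F(k0, M)| ~ |M - Mc|^(-gamma)

# Lorentzian width: F(k0 + dk) = F(k0) / (1 + xi^2 dk^2), so xi follows from two samples
xi = np.sqrt(F(k0, 1 - deltas) / F(k0 + dk, 1 - deltas) - 1) / dk   # xi ~ |M - Mc|^(-nu)

gamma = -np.polyfit(np.log(deltas), np.log(height), 1)[0]
nu = -np.polyfit(np.log(deltas), np.log(xi), 1)[0]
print(f"gamma ~ {gamma:.3f}, nu ~ {nu:.3f}  (scaling law gamma = D*nu with D = 1)")
```

The width is estimated from the peak value and a single nearby momentum, the same kind of local sampling in momentum space that the CRG flow equation introduced below relies on.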
Correlation function. The Fourier transform of the curvature function, formula_17, typically measures the overlap of certain quantum mechanical wave functions or more complicated objects, and is therefore interpreted as a correlation function. For instance, if the curvature function is the noninteracting or many-body Berry connection or Berry curvature, the correlation function formula_18 is a measure of the overlap of Wannier functions centered at two home cells that are a distance formula_19 apart. Because of the Lorentzian shape of the curvature function mentioned above, the Fourier transform of the curvature function decays with the length scale formula_20. Hence, formula_20 is interpreted as the correlation length, and its critical exponent is assigned to be formula_21, as in Landau theory. Furthermore, the correlation length is related to the localization length of topological edge states, such as Majorana modes. Scaling equation. The scaling procedure that identifies the topological phase transitions is based on the divergence of the curvature function. It is an iterative procedure that, for a given parameter set formula_22 that controls the topology, searches for a new parameter set formula_23 that satisfies formula_24 where formula_25 is a high-symmetry point and formula_26 is a small deviation away from it. This procedure searches for the path in the parameter space of formula_22 along which the divergence of the curvature function is reduced, yielding a renormalization group flow that flows away from the topological phase transitions. The name "curvature renormalization group" is derived precisely from this procedure, which renormalizes the profile of the curvature function. Writing formula_27 and formula_28, and expanding the scaling equation above to leading order, yields the generic renormalization group equation formula_29 The renormalization group flow can be obtained directly as a stream plot of the right-hand side of this differential equation. Numerically, this differential equation only requires the evaluation of the curvature function at a few momenta. Hence, the method is a very efficient way to identify topological phase transitions, especially in periodically driven (Floquet) systems and interacting systems. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathcal{C}" }, { "math_id": 1, "text": " \\mathbf{M} = (\\mathbf{M}_1 , \\mathbf{M}_2, \\dots)" }, { "math_id": 2, "text": " \\mathbf{M}_c" }, { "math_id": 3, "text": " F(\\mathbf{k})" }, { "math_id": 4, "text": " \\mathcal{C} = \\int \\mathrm{d}^D k \\, \\, F(\\mathbf{k}, {\\bf M})." }, { "math_id": 5, "text": " {\\bf k}_{0}={\\bf k}_{0}+{\\bf G}" }, { "math_id": 6, "text": " D" }, { "math_id": 7, "text": " {\\bf G}" }, { "math_id": 8, "text": " F({\\bf k}_{0}+\\delta{\\bf k},{\\bf M})=\\frac{F({\\bf k}_{0},{\\bf M})}{1+\\xi^{2}\\delta k^{2}}\\,,\n\n" }, { "math_id": 9, "text": " 1/\\xi" }, { "math_id": 10, "text": " {\\bf M}\\rightarrow{\\bf M}_{c}" }, { "math_id": 11, "text": " \\lim_{\\mathbf{M} \\rightarrow \\mathbf{M}_c^+} F( \\mathbf{k}_0,\\mathbf{M}) = -\\lim_{\\mathbf{M} \\rightarrow \\mathbf{M}_c^-} F(\\mathbf{k}_0, \\mathbf{M})=\\pm\\infty, \\;\\; \\lim_{\\mathbf{M} \\rightarrow \\mathbf{M}_c} \\xi=\\infty\\;," }, { "math_id": 12, "text": " D=1" }, { "math_id": 13, "text": "\\gamma,\\nu" }, { "math_id": 14, "text": "|F({\\bf k}_{0},{\\bf M})|\\propto|{\\bf M}-{\\bf M}_{c}|^{-\\gamma},\\;\\;\\;\\;\\;\\xi\\propto|{\\bf M}-{\\bf M}_{c}|^{-\\nu}." }, { "math_id": 15, "text": "\\mathcal{C}=\\mathrm{const.}" }, { "math_id": 16, "text": "\\gamma=D\\nu," }, { "math_id": 17, "text": " \\tilde{F}({\\bf R})=\\int \\frac{d^{D}{\\bf k}}{(2\\pi)^{D}}\\;e^{i{\\bf k}\\cdot{\\bf R}}\\;F({\\bf k},M)" }, { "math_id": 18, "text": " \\tilde{F}({\\bf R})" }, { "math_id": 19, "text": " {\\bf R}" }, { "math_id": 20, "text": " \\xi" }, { "math_id": 21, "text": " \\nu" }, { "math_id": 22, "text": "{\\bf M}" }, { "math_id": 23, "text": "{\\bf M}^{\\prime}" }, { "math_id": 24, "text": "F({\\bf k}_0, {\\bf M}^{\\prime}) = F({\\bf k}_0 + \\delta {\\bf k}, {\\bf M})," }, { "math_id": 25, "text": "{\\bf k}_{0}" }, { "math_id": 26, "text": "\\delta {\\bf k}" }, { "math_id": 27, "text": "\\mathrm{d}M_{i} = M_{i}^{\\prime} - M_{i}" }, { "math_id": 28, "text": "\\delta k_j^2 \\equiv \\mathrm{d}l" }, { "math_id": 29, "text": "\\frac{\\mathrm{d}M_{i}}{\\mathrm{d} l} = \\frac{1}{2} \\frac{\\partial^2_k F({\\bf k}, {\\bf M}) \\big|_{k=k_0}}{\\partial_{M_{i}} F({\\bf k}_{0}, {\\bf M})}." } ]
https://en.wikipedia.org/wiki?curid=62878887
62881422
Transverse-field Ising model
The transverse field Ising model is a quantum version of the classical Ising model. It features a lattice with nearest neighbour interactions determined by the alignment or anti-alignment of spin projections along the formula_0 axis, as well as an external magnetic field perpendicular to the formula_0 axis (without loss of generality, along the formula_1 axis) which creates an energetic bias for one x-axis spin direction over the other. An important feature of this setup is that, in a quantum sense, the spin projection along the formula_1 axis and the spin projection along the formula_0 axis are not commuting observable quantities. That is, they cannot both be observed simultaneously. This means classical statistical mechanics cannot describe this model, and a quantum treatment is needed. Specifically, the model has the following quantum Hamiltonian: formula_2 Here, the subscripts refer to lattice sites, and the sum formula_3 is done over pairs of nearest neighbour sites formula_4 and formula_5. formula_6 and formula_7 are representations of elements of the spin algebra (Pauli matrices, in the case of spin 1/2) acting on the spin variables of the corresponding sites. They anti-commute with each other if on the same site and commute with each other if on different sites. formula_8 is a prefactor with dimensions of energy, and formula_9 is another coupling coefficient that determines the relative strength of the external field compared to the nearest neighbour interaction. Phases of the 1D transverse field Ising model. Below the discussion is restricted to the one dimensional case where each lattice site is a two-dimensional complex Hilbert space (i.e., it represents a spin 1/2 particle). For simplicity here formula_10 and formula_11 are normalised to each have determinant -1. The Hamiltonian possesses a formula_12 symmetry group, as it is invariant under the unitary operation of flipping all of the spins in the formula_0 direction. More precisely, the symmetry transformation is given by the unitary formula_13. The 1D model admits two phases, depending on whether the ground state (specifically, in the case of degeneracy, a ground state which is not a macroscopically entangled state) breaks or preserves the aforementioned formula_13 spin-flip symmetry. The sign of formula_8 does not impact the dynamics, as the system with positive formula_8 can be mapped into the system with negative formula_8 by performing a formula_14 rotation around formula_6 for every second site formula_5. The model can be exactly solved for all coupling constants. However, in terms of on-site spins the solution is generally very inconvenient to write down explicitly in terms of the spin variables. It is more convenient to write the solution explicitly in terms of fermionic variables defined by Jordan-Wigner transformation, in which case the excited states have a simple quasiparticle or quasihole description. Ordered phase. When formula_15, the system is said to be in the ordered phase. In this phase the ground state breaks the spin-flip symmetry. Thus, the ground state is in fact two-fold degenerate. For formula_16 this phase exhibits ferromagnetic ordering, while for formula_17 antiferromagnetic ordering exists. Precisely, if formula_18 is a ground state of the Hamiltonian, then formula_19 is also a ground state, and together formula_20 and formula_21 span the degenerate ground state space. 
As a simple example, when formula_22 and formula_23, the ground states are formula_24 and formula_25, that is, with all the spins aligned along the formula_0 axis. This is a gapped phase, meaning that the lowest energy excited state(s) have an energy higher than the ground state energy by a nonzero amount (nonvanishing in the thermodynamic limit). In particular, this energy gap is formula_26. Disordered phase. In contrast, when formula_27, the system is said to be in the disordered phase. The ground state preserves the spin-flip symmetry, and is nondegenerate. As a simple example, when formula_9 is infinity, the ground state is formula_28, that is with the spin in the formula_29 direction on each site. This is also a gapped phase. The energy gap is formula_30. Gapless phase. When formula_31, the system undergoes a quantum phase transition. At this value of formula_9, the system has gapless excitations and its low-energy behaviour is described by the two-dimensional Ising conformal field theory. This conformal theory has central charge formula_32, and is the simplest of the unitary minimal models with central charge less than 1. Besides the identity operator, the theory has two primary fields, one with conformal weights formula_33 and another one with conformal weights formula_34. Jordan-Wigner transformation. It is possible to rewrite the spin variables as fermionic variables, using a highly nonlocal transformation known as the Jordan-Wigner Transformation. A fermion creation operator on site formula_35 can be defined as formula_36. Then the transverse field Ising Hamiltonian (assuming an infinite chain and ignoring boundary effects) can be expressed entirely as a sum of local quadratic terms containing Creation and annihilation operators. formula_37This Hamiltonian fails to conserve total fermion number and does not have the associated formula_38 global continuous symmetry, due to the presence of the formula_39 term. However, it does conserve fermion parity. That is, the Hamiltonian commutes with the quantum operator that indicates whether the total number of fermions is even or odd, and this parity does not change under time evolution of the system. The Hamiltonian is mathematically identical to that of a superconductor in the mean field Bogoliubov-de Gennes formalism and can be completely understood in the same standard way. The exact excitation spectrum and eigenvalues can be determined by Fourier transforming into momentum space and diagonalising the Hamiltonian. In terms of Majorana fermions formula_40 and formula_41, the Hamiltonian takes on an even simpler form (up to an additive constant): formula_42. Kramers-Wannier duality. A nonlocal mapping of Pauli matrices known as the Kramers–Wannier duality transformation can be done as follows: formula_43 Then, in terms of the newly defined Pauli matrices with tildes, which obey the same algebraic relations as the original Pauli matrices, the Hamiltonian is simply formula_44. This indicates that the model with coupling parameter formula_9 is dual to the model with coupling parameter formula_45, and establishes a duality between the ordered phase and the disordered phase. In terms of the Majorana fermions mentioned above, this duality is more obviously manifested in the trivial relabeling formula_46. Note that there are some subtle considerations at the boundaries of the Ising chain; as a result of these, the degeneracy and formula_47 symmetry properties of the ordered and disordered phases are changed under the Kramers-Wannier duality. 
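As a small numerical illustration of the phases and the gap closing described above (an illustrative sketch, not part of the original article), the Hamiltonian can be diagonalized exactly for a short open chain: in the ordered phase the two lowest levels are nearly degenerate while the next level stays separated, and in the disordered phase the lowest excitation itself is gapped.

```python
import numpy as np
from functools import reduce

# Exact diagonalization of a short open transverse-field Ising chain,
# H = -J ( sum_j Z_j Z_{j+1} + g sum_j X_j ).  Illustrative sketch only.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_at(op, site, n):
    """Embed a single-site operator at `site` in an n-site chain via Kronecker products."""
    factors = [I2] * n
    factors[site] = op
    return reduce(np.kron, factors)

def tfim_hamiltonian(n, J=1.0, g=1.0):
    H = np.zeros((2**n, 2**n))
    for j in range(n - 1):                      # open boundary conditions
        H -= J * op_at(Z, j, n) @ op_at(Z, j + 1, n)
    for j in range(n):
        H -= J * g * op_at(X, j, n)
    return H

n = 8
for g in (0.2, 1.0, 2.0):
    E = np.linalg.eigvalsh(tfim_hamiltonian(n, J=1.0, g=g))
    # Ordered phase (g < 1): E1 - E0 is tiny (two nearly degenerate ground states)
    # while E2 - E0 stays finite; disordered phase (g > 1): E1 - E0 is itself the gap.
    print(f"g = {g}:  E1 - E0 = {E[1] - E[0]:.6f},  E2 - E0 = {E[2] - E[0]:.4f}")
```

For longer chains the same spectrum follows from the Jordan–Wigner free-fermion picture above, with the gap closing at the critical point g = 1.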
Generalisations. The q-state quantum Potts model and the formula_48 quantum clock model are generalisations of the transverse field Ising model to lattice systems with formula_49 states per site. The transverse field Ising model represents the case where formula_50 . Classical Ising Model. The quantum transverse field Ising model in formula_51 dimensions is dual to an anisotropic classical Ising model in formula_52 dimensions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "z" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "H = -J\\left(\\sum_{ \\langle i, j \\rangle} Z_i Z_{j} + g \\sum_j X_j \\right)" }, { "math_id": 3, "text": "\\sum_{\\langle i, j \\rangle}" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "j" }, { "math_id": 6, "text": "X_j" }, { "math_id": 7, "text": "Z_j" }, { "math_id": 8, "text": "J" }, { "math_id": 9, "text": "g" }, { "math_id": 10, "text": "X" }, { "math_id": 11, "text": "Z" }, { "math_id": 12, "text": "\\mathbb{Z}_2" }, { "math_id": 13, "text": "\\prod_j X_j" }, { "math_id": 14, "text": "\\pi" }, { "math_id": 15, "text": "|g|<1" }, { "math_id": 16, "text": "J>0" }, { "math_id": 17, "text": "J < 0" }, { "math_id": 18, "text": "|\\psi_1 \\rangle" }, { "math_id": 19, "text": "|\\psi_2 \\rangle \\equiv \\prod_j X_j |\\psi_1 \\rangle \\neq |\\psi_1 \\rangle" }, { "math_id": 20, "text": "|\\psi_1\\rangle" }, { "math_id": 21, "text": "|\\psi_2 \\rangle" }, { "math_id": 22, "text": "g = 0" }, { "math_id": 23, "text": "J > 0" }, { "math_id": 24, "text": "|\\ldots \\uparrow \\uparrow \\uparrow \\ldots \\rangle" }, { "math_id": 25, "text": "|\\ldots \\downarrow \\downarrow \\downarrow \\ldots \\rangle " }, { "math_id": 26, "text": "2|J|(1-|g|)" }, { "math_id": 27, "text": "|g|>1" }, { "math_id": 28, "text": " | \\ldots \\rightarrow \\rightarrow \\rightarrow \\ldots \\rangle" }, { "math_id": 29, "text": "+x" }, { "math_id": 30, "text": "2|J|(|g|-1)" }, { "math_id": 31, "text": "|g|=1" }, { "math_id": 32, "text": " c=1/2 " }, { "math_id": 33, "text": " (1/16, 1/16) " }, { "math_id": 34, "text": " (1/2, 1/2) " }, { "math_id": 35, "text": "j " }, { "math_id": 36, "text": "c_j^\\dagger = \\frac{1}{2}(Z_j+iY_j)\\prod_{k<j} X_k" }, { "math_id": 37, "text": "H = -J \\sum_j ( c_j^\\dagger c_{j+1} + c_{j+1}^\\dagger c_j +c_{j}^\\dagger c_{j+1}^\\dagger + c_{j+1} c_j + 2g(c_j^\\dagger c_j-1/2))" }, { "math_id": 38, "text": "U(1)" }, { "math_id": 39, "text": "c_j^\\dagger c_{j+1}^\\dagger + c_{j+1}c_j" }, { "math_id": 40, "text": "a_j = c_j^\\dagger + c_j" }, { "math_id": 41, "text": "b_j = -i(c_j^\\dagger - c_j)" }, { "math_id": 42, "text": "H = i\\sum_j J(a_{j+1} b_j + gb_j a_j )" }, { "math_id": 43, "text": "\\begin{align}\\tilde{X_j} &= Z_j Z_{j+1} \\\\\n\\tilde{Z}_j \\tilde{Z}_{j+1} &= X_{j+1} \\end{align}\n" }, { "math_id": 44, "text": "H = -Jg \\sum_j ( \\tilde{Z}_j \\tilde{Z}_{j+1} + g^{-1}\\tilde{X}_{j} )" }, { "math_id": 45, "text": "g^{-1}" }, { "math_id": 46, "text": " a_j \\to b_j, b_j \\to a_{j+1}" }, { "math_id": 47, "text": "\\mathbb{Z}_2\n" }, { "math_id": 48, "text": " Z_q " }, { "math_id": 49, "text": " q " }, { "math_id": 50, "text": " q = 2" }, { "math_id": 51, "text": " d " }, { "math_id": 52, "text": " d+1 " } ]
https://en.wikipedia.org/wiki?curid=62881422
62881966
Representation theory of semisimple Lie algebras
In mathematics, the representation theory of semisimple Lie algebras is one of the crowning achievements of the theory of Lie groups and Lie algebras. The theory was worked out mainly by E. Cartan and H. Weyl and because of that, the theory is also known as the Cartan–Weyl theory. The theory gives the structural description and classification of a finite-dimensional representation of a semisimple Lie algebra (over formula_0); in particular, it gives a way to parametrize (or classify) irreducible finite-dimensional representations of a semisimple Lie algebra, the result known as the theorem of the highest weight. There is a natural one-to-one correspondence between the finite-dimensional representations of a simply connected compact Lie group "K" and the finite-dimensional representations of the complex semisimple Lie algebra formula_1 that is the complexification of the Lie algebra of "K" (this fact is essentially a special case of the Lie group–Lie algebra correspondence). Also, finite-dimensional representations of a connected compact Lie group can be studied through finite-dimensional representations of the universal cover of such a group. Hence, the representation theory of semisimple Lie algebras marks the starting point for the general theory of representations of connected compact Lie groups. The theory is a basis for the later works of Harish-Chandra that concern (infinite-dimensional) representation theory of real reductive groups. Classifying finite-dimensional representations of semisimple Lie algebras. There is a beautiful theory classifying the finite-dimensional representations of a semisimple Lie algebra over formula_0. The finite-dimensional "irreducible" representations are described by a theorem of the highest weight. The theory is described in various textbooks, including , , and . Following an overview, the theory is described in increasing generality, starting with two simple cases that can be done "by hand" and then proceeding to the general result. The emphasis here is on the representation theory; for the geometric structures involving root systems needed to define the term "dominant integral element," follow the above link on weights in representation theory. Overview. Classification of the finite-dimensional irreducible representations of a semisimple Lie algebra formula_2 over formula_3 or formula_4 generally consists of two steps. The first step amounts to analysis of hypothesized representations resulting in a tentative classification. The second step is actual realization of these representations. A real Lie algebra is usually complexified enabling analysis in an algebraically closed field. Working over the complex numbers in addition admits nicer bases. The following theorem applies: A real-linear finite-dimensional representation of a real Lie algebra extends to a complex-linear representation of its complexification. The real-linear representation is irreducible if and only if the corresponding complex-linear representation is irreducible. Moreover, a complex semisimple Lie algebra has the "complete reducibility property". This means that every finite-dimensional representation decomposes as a direct sum of irreducible representations. Conclusion: "Classification amounts to studying irreducible complex linear representations of the (complexified) Lie algebra." Classification: Step One. The first step is to "hypothesize" the existence of irreducible representations. 
That is to say, one hypothesizes that one has an irreducible representation formula_5 of a complex semisimple Lie algebra formula_6 without worrying about how the representation is constructed. The properties of these hypothetical representations are investigated, and conditions "necessary" for the existence of an irreducible representation are then established. The properties involve the weights of the representation. Here is the simplest description. Let formula_7 be a Cartan subalgebra of formula_1, that is a maximal commutative subalgebra with the property that formula_8 is diagonalizable for each formula_9, and let formula_10 be a basis for formula_7. A "weight" formula_11 for a representation formula_12 of formula_1 is a collection of simultaneous eigenvalues formula_13 for the commuting operators formula_14. In basis-independent language, formula_11 is a linear functional formula_11 on formula_7 such that there exists a nonzero vector formula_15 such that formula_16 for every formula_17. A partial ordering on the set of weights is defined, and the notion of "highest weight" in terms of this partial ordering is established for any set of weights. Using the structure on the Lie algebra, the notions dominant element and integral element are defined. Every finite-dimensional representation must have a maximal weight formula_11, i.e., one for which no strictly higher weight occurs. If formula_18 is irreducible and formula_19 is a weight vector with weight formula_11, then the entire space formula_18 must be generated by the action of formula_1 on formula_19. Thus, formula_12 is a "highest weight cyclic" representation. One then shows that the weight formula_11 is actually the "highest" weight (not just maximal) and that every highest weight cyclic representation is irreducible. One then shows that two irreducible representations with the same highest weight are isomorphic. Finally, one shows that the highest weight formula_11 must be dominant and integral. Conclusion: "Irreducible representations are classified by their highest weights, and the highest weight is always a dominant integral element." Step One has the side benefit that the structure of the irreducible representations is better understood. Representations decompose as direct sums of "weight spaces", with the weight space corresponding to the highest weight one-dimensional. Repeated application of the representatives of certain elements of the Lie algebra called "lowering operators" yields a set of generators for the representation as a vector space. The application of one such operator on a vector with definite weight results either in zero or a vector with "strictly lower" weight. "Raising operators" work similarly, but results in a vector with "strictly higher" weight or zero. The representatives of the Cartan subalgebra acts diagonally in a basis of weight vectors. Classification: Step Two. Step Two is concerned with constructing the representations that Step One allows for. That is to say, we now fix a dominant integral element formula_11 and try to "construct" an irreducible representation with highest weight formula_11. There are several standard ways of constructing irreducible representations: Conclusion: Every "dominant integral element of a complex semisimple Lie algebra gives rise to an irreducible, finite-dimensional representation. These are the only irreducible representations." The case of sl(2,C). 
The Lie algebra sl(2,C) of the special linear group SL(2,C) is the space of 2x2 trace-zero matrices with complex entries. The following elements form a basis: formula_23 These satisfy the commutation relations formula_24. Every finite-dimensional representation of sl(2,C) decomposes as a direct sum of irreducible representations. This claim follows from the general result on complete reducibility of semisimple Lie algebras, or from the fact that sl(2,C) is the complexification of the Lie algebra of the simply connected compact group SU(2). The irreducible representations formula_5, in turn, can be classified by the largest eigenvalue of formula_25, which must be a non-negative integer "m". That is to say, in this case, a "dominant integral element" is simply a non-negative integer. The irreducible representation with largest eigenvalue "m" has dimension formula_26 and is spanned by eigenvectors for formula_25 with eigenvalues formula_27. The operators formula_28 and formula_29 move up and down the chain of eigenvectors, respectively. This analysis is described in detail in the representation theory of SU(2) (from the point of the view of the complexified Lie algebra). One can give a concrete realization of the representations (Step Two in the overview above) in either of two ways. First, in this simple example, it is not hard to write down an explicit basis for the representation and an explicit formula for how the generators formula_30 of the Lie algebra act on this basis. Alternatively, one can realize the representation with highest weight formula_31 by letting formula_32 denote the space of homogeneous polynomials of degree formula_31 in two complex variables, and then defining the action of formula_33, formula_34, and formula_35 by formula_36 Note that the formulas for the action of formula_33, formula_34, and formula_35 do not depend on formula_31; the subscript in the formulas merely indicates that we are restricting the action of the indicated operators to the space of homogeneous polynomials of degree formula_31 in formula_37 and formula_38. The case of sl(3,C). There is a similar theory classifying the irreducible representations of sl(3,C), which is the complexified Lie algebra of the group SU(3). The Lie algebra sl(3,C) is eight dimensional. We may work with a basis consisting of the following two diagonal elements formula_39, together with six other matrices formula_40 and formula_41 each of which has a 1 in an off-diagonal entry and zeros elsewhere. (The formula_42's have a 1 above the diagonal and the formula_43's have a 1 below the diagonal.) The strategy is then to simultaneously diagonalize formula_44 and formula_45 in each irreducible representation formula_5. Recall that in the sl(2,C) case, the action of formula_28 and formula_29 raise and lower the eigenvalues of formula_25. Similarly, in the sl(3,C) case, the action of formula_46 and formula_47 "raise" and "lower" the eigenvalues of formula_44 and formula_45. The irreducible representations are then classified by the largest eigenvalues formula_48 and formula_49 of formula_44 and formula_45, respectively, where formula_48 and formula_49 are non-negative integers. That is to say, in this setting, a "dominant integral element" is precisely a pair of non-negative integers. Unlike the representations of sl(2,C), the representation of sl(3,C) cannot be described explicitly in general. 
Thus, it requires an argument to show that "every" pair formula_50 actually arises as the highest weight of some irreducible representation (Step Two in the overview above). This can be done as follows. First, we construct the "fundamental representations", with highest weights (1,0) and (0,1). These are the three-dimensional standard representation (in which formula_51) and the dual of the standard representation. Then one takes a tensor product of formula_48 copies of the standard representation and formula_49 copies of the dual of the standard representation, and extracts an irreducible invariant subspace. Although the representations cannot be described explicitly, there is a lot of useful information describing their structure. For example, the dimension of the irreducible representation with highest weight formula_50 is given by formula_52 There is also a simple pattern to the multiplicities of the various weight spaces. Finally, the irreducible representations with highest weight formula_53 can be realized concretely on the space of homogeneous polynomials of degree formula_31 in three complex variables. The case of a general semisimple Lie algebra. Let formula_1 be a semisimple Lie algebra and let formula_7 be a Cartan subalgebra of formula_1, that is, a maximal commutative subalgebra with the property that ad"H" is diagonalizable for all "H" in formula_7. As an example, we may consider the case where formula_1 is sl("n",C), the algebra of "n" by "n" traceless matrices, and formula_7 is the subalgebra of traceless diagonal matrices. We then let "R" denote the associated root system and choose a base (or system of positive simple roots) formula_54 for "R". We now briefly summarize the structures needed to state the theorem of the highest weight; more details can be found in the article on weights in representation theory. We choose an inner product on formula_7 that is invariant under the action of the Weyl group of "R", which we use to identify formula_7 with its dual space. If formula_12 is a representation of formula_1, we define a weight of "V" to be an element formula_11 in formula_7 with the property that for some nonzero "v" in "V", we have formula_55 for all "H" in formula_7. We then define one weight formula_11 to be "higher" than another weight formula_56 if formula_57 is expressible as a linear combination of elements of formula_58 with non-negative real coefficients. A weight formula_56 is called a "highest weight" if formula_56 is higher than every other weight of formula_5. Finally, if formula_11 is a weight, we say that formula_11 is dominant if it has non-negative inner product with each element of formula_58, and we say that formula_11 is integral if formula_59 is an integer for each formula_60 in "R". Finite-dimensional representations of a semisimple Lie algebra are completely reducible, so it suffices to classify irreducible (simple) representations. The irreducible representations, in turn, may be classified by the "theorem of the highest weight" as follows: The last point of the theorem (Step Two in the overview above) is the most difficult one. In the case of the Lie algebra sl(3,C), the construction can be done in an elementary way, as described above. In general, the construction of the representations may be given by using Verma modules. Construction using Verma modules. 
If formula_11 is "any" weight, not necessarily dominant or integral, one can construct an infinite-dimensional representation formula_61 of formula_1 with highest weight formula_11 known as a Verma module. The Verma module then has a maximal proper invariant subspace formula_62, so that the quotient representation formula_63 is irreducible—and still has highest weight formula_11. In the case that formula_11 is dominant and integral, we wish to show that formula_64 is finite dimensional. The strategy for proving finite-dimensionality of formula_64 is to show that the set of weights of formula_64 is invariant under the action of the Weyl group formula_65 of formula_1 relative to the given Cartan subalgebra formula_7. (Note that the weights of the Verma module formula_61 itself are definitely not invariant under formula_65.) Once this invariance result is established, it follows that formula_64 has only finitely many weights. After all, if formula_56 is a weight of formula_64, then formula_56 must be integral—indeed, formula_56 must differ from formula_11 by an integer combination of roots—and by the invariance result, formula_66 must be lower than formula_11 for every formula_67 in formula_65. But there are only finitely many integral elements formula_56 with this property. Thus, formula_64 has only finitely many weights, each of which has finite multiplicity (even in the Verma module, so certainly also in formula_64). From this, it follows that formula_64 must be finite dimensional. Additional properties of the representations. Much is known about the representations of a complex semisimple Lie algebra formula_1, besides the classification in terms of highest weights. We mention a few of these briefly. We have already alluded to Weyl's theorem, which states that every finite-dimensional representation of formula_1 decomposes as a direct sum of irreducible representations. There is also the Weyl character formula, which leads to the Weyl dimension formula (a formula for the dimension of the representation in terms of its highest weight), the Kostant multiplicity formula (a formula for the multiplicities of the various weights occurring in a representation). Finally, there is also a formula for the eigenvalue of the Casimir element, which acts as a scalar in each irreducible representation. Lie group representations and Weyl's unitarian trick. Although it is possible to develop the representation theory of complex semisimple Lie algebras in a self-contained way, it can be illuminating to bring in a perspective using Lie "groups". This approach is particularly helpful in understanding Weyl's theorem on complete reducibility. It is known that every complex semisimple Lie algebra formula_1 has a "compact real form" formula_68. This means first that formula_1 is the complexification of formula_68: formula_69 and second that there exists a simply connected compact group formula_70 whose Lie algebra is formula_68. As an example, we may consider formula_71, in which case formula_70 may be taken to be the special unitary group SU(n). Given a finite-dimensional representation formula_18 of formula_1, we can restrict it to formula_68. Then since formula_70 is simply connected, we can integrate the representation to the group formula_70. The method of averaging over the group shows that there is an inner product on formula_18 that is invariant under the action of formula_70; that is, the action of formula_70 on formula_18 is "unitary". 
At this point, we may use unitarity to see that formula_18 decomposes as a direct sum of irreducible representations. This line of reasoning is called the "unitarian trick" and was Weyl's original argument for what is now called Weyl's theorem. There is also a purely algebraic argument for the complete reducibility of representations of semisimple Lie algebras. If formula_1 is a complex semisimple Lie algebra, there is a unique complex semisimple Lie group formula_72 with Lie algebra formula_1, in addition to the simply connected compact group formula_70. (If formula_71 then formula_73.) Then we have the following result about finite-dimensional representations. Statement: The objects in the following list are in one-to-one correspondence: Conclusion: "The representation theory of compact Lie groups can shed light on the representation theory of complex semisimple Lie algebras." Remarks. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
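The two worked cases above lend themselves to a quick numerical check. The following sketch (Python with NumPy) is an illustration added here, not part of the cited literature: it builds the (m+1)-dimensional irreducible representation of sl(2,C) in one standard weight basis, verifies the commutation relations and the predicted eigenvalues of π(H), and evaluates the sl(3,C) dimension formula quoted earlier for a few highest weights. The particular normalization of the raising and lowering operators is one common convention and is assumed here.

```python
import numpy as np

def sl2_irrep(m):
    """Matrices of the (m+1)-dimensional irreducible representation of sl(2,C),
    in the weight basis v_0, ..., v_m with
        H v_k = (m - 2k) v_k,   Y v_k = v_{k+1},   X v_k = k(m - k + 1) v_{k-1}."""
    n = m + 1
    H = np.diag([float(m - 2 * k) for k in range(n)])
    X = np.zeros((n, n))
    Y = np.zeros((n, n))
    for k in range(m):
        Y[k + 1, k] = 1.0                        # lowering operator: strictly lower weight
    for k in range(1, n):
        X[k - 1, k] = float(k * (m - k + 1))     # raising operator: strictly higher weight
    return X, Y, H

def comm(A, B):
    return A @ B - B @ A

m = 4
X, Y, H = sl2_irrep(m)
assert np.allclose(comm(H, X), 2 * X)    # [H, X] = 2X
assert np.allclose(comm(H, Y), -2 * Y)   # [H, Y] = -2Y
assert np.allclose(comm(X, Y), H)        # [X, Y] = H
print(np.diag(H))                        # the weights m, m-2, ..., -m: [ 4.  2.  0. -2. -4.]

# Weyl dimension formula for sl(3,C) with highest weight (m1, m2), as quoted above:
def dim_sl3(m1, m2):
    return (m1 + 1) * (m2 + 1) * (m1 + m2 + 2) // 2

print(dim_sl3(1, 0), dim_sl3(0, 1), dim_sl3(1, 1))   # 3 3 8: standard, dual, adjoint
```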
[ { "math_id": 0, "text": "\\mathbb{C}" }, { "math_id": 1, "text": "\\mathfrak g" }, { "math_id": 2, "text": "\\mathfrak{g}" }, { "math_id": 3, "text": "\\R" }, { "math_id": 4, "text": "\\C" }, { "math_id": 5, "text": "\\pi" }, { "math_id": 6, "text": "\\mathfrak g," }, { "math_id": 7, "text": "\\mathfrak h" }, { "math_id": 8, "text": "\\operatorname{ad}_H" }, { "math_id": 9, "text": "H\\in\\mathfrak h" }, { "math_id": 10, "text": "H_1,\\ldots ,H_n" }, { "math_id": 11, "text": "\\lambda" }, { "math_id": 12, "text": "(\\pi,V)" }, { "math_id": 13, "text": "(\\lambda_1,\\ldots,\\lambda_n)" }, { "math_id": 14, "text": "\\pi(H_1),\\ldots ,\\pi(H_n)" }, { "math_id": 15, "text": "v\\in V" }, { "math_id": 16, "text": "\\pi(H)v = \\lambda(H)v" }, { "math_id": 17, "text": "H \\in \\mathfrak h " }, { "math_id": 18, "text": "V" }, { "math_id": 19, "text": "v" }, { "math_id": 20, "text": "\\mathfrak g = \\operatorname{sl}(n,\\mathbb{C})" }, { "math_id": 21, "text": "\\operatorname{SU}(n)" }, { "math_id": 22, "text": "\\mathfrak g=\\operatorname{sl}(3,\\mathbb{C})" }, { "math_id": 23, "text": " X = \\begin{pmatrix}\n0 & 1\\\\\n0 & 0\n\\end{pmatrix}\n\\qquad\nY = \\begin{pmatrix}\n0 & 0\\\\\n1 & 0\n\\end{pmatrix}\n\\qquad\nH = \\begin{pmatrix}\n1 & 0\\\\\n0 & -1\n\\end{pmatrix} ~," }, { "math_id": 24, "text": "[H,X]=2X,\\quad [H,Y]=-2Y,\\quad [X,Y]=H" }, { "math_id": 25, "text": "\\pi(H)" }, { "math_id": 26, "text": "m+1" }, { "math_id": 27, "text": "m,m-2,\\ldots,-m+2,-m" }, { "math_id": 28, "text": "\\pi(X)" }, { "math_id": 29, "text": "\\pi(Y)" }, { "math_id": 30, "text": "X,Y,H" }, { "math_id": 31, "text": "m" }, { "math_id": 32, "text": "V_m" }, { "math_id": 33, "text": "X" }, { "math_id": 34, "text": "Y" }, { "math_id": 35, "text": "H" }, { "math_id": 36, "text": "\\pi_m(X)=-z_2\\frac{\\partial}{\\partial z_1};\\quad \\pi_m(Y)=-z_1\\frac{\\partial}{\\partial z_2};\\quad\\pi_m(H)=-z_1\\frac{\\partial}{\\partial z_1}+z_2\\frac{\\partial}{\\partial z_2}." 
}, { "math_id": 37, "text": "z_1" }, { "math_id": 38, "text": "z_2" }, { "math_id": 39, "text": "H_1 = \\begin{pmatrix} 1 & 0 & 0 \\\\ 0 & -1 & 0 \\\\ 0 & 0 & 0 \\end{pmatrix}, \\quad H_2 = \\begin{pmatrix} 0 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & -1 \\end{pmatrix}" }, { "math_id": 40, "text": "X_1,\\,X_2,\\,X_3" }, { "math_id": 41, "text": "Y_1,\\,Y_2,\\,Y_3" }, { "math_id": 42, "text": "X_i" }, { "math_id": 43, "text": "Y_i" }, { "math_id": 44, "text": "\\pi(H_1)" }, { "math_id": 45, "text": "\\pi(H_2)" }, { "math_id": 46, "text": "\\pi(X_i)" }, { "math_id": 47, "text": "\\pi(Y_i)" }, { "math_id": 48, "text": "m_1" }, { "math_id": 49, "text": "m_2" }, { "math_id": 50, "text": "(m_1,m_2)" }, { "math_id": 51, "text": "\\pi(X)=X" }, { "math_id": 52, "text": "\\dim(m_1,m_2)=\\frac{1}{2}(m_1+1)(m_2+1)(m_1+m_2+2)" }, { "math_id": 53, "text": "(0,m)" }, { "math_id": 54, "text": "\\Delta " }, { "math_id": 55, "text": "\\pi(H)v=\\langle\\lambda,H\\rangle v" }, { "math_id": 56, "text": "\\mu" }, { "math_id": 57, "text": "\\lambda-\\mu" }, { "math_id": 58, "text": "\\Delta" }, { "math_id": 59, "text": "2\\langle\\lambda,\\alpha\\rangle/\\langle\\alpha,\\alpha\\rangle" }, { "math_id": 60, "text": "\\alpha" }, { "math_id": 61, "text": "W_\\lambda" }, { "math_id": 62, "text": "U_\\lambda" }, { "math_id": 63, "text": "V_\\lambda:=W_\\lambda/U_\\lambda" }, { "math_id": 64, "text": "V_\\lambda" }, { "math_id": 65, "text": "W" }, { "math_id": 66, "text": "w\\cdot\\mu" }, { "math_id": 67, "text": "w" }, { "math_id": 68, "text": "\\mathfrak k" }, { "math_id": 69, "text": "\\mathfrak g = \\mathfrak k + i\\mathfrak k" }, { "math_id": 70, "text": "K" }, { "math_id": 71, "text": "\\mathfrak g=\\operatorname{sl}(n;\\mathbb C)" }, { "math_id": 72, "text": "G" }, { "math_id": 73, "text": "G=\\operatorname{SL}(n;\\mathbb C)" }, { "math_id": 74, "text": "\\mathfrak{k}" } ]
https://en.wikipedia.org/wiki?curid=62881966
62882276
Jiayang Sun
Chinese-American statistician Jiayang Sun is an American statistician whose research has included work on simultaneous confidence bands for multiple comparisons, selection bias, mixture models, Gaussian random fields, machine learning, big data, statistical computing, graphics, and applications in biostatistics, biomedical research, software bug tracking, astronomy, and intellectual property law. She is a statistics professor, Bernard J. Dunn Eminent Scholar, and chair of the statistics department at George Mason University, and a former president of the Caucus for Women in Statistics. Education and career. Sun earned a bachelor's degree in mathematics from Anhui University and a master's degree from Peking University. She completed her Ph.D. at Stanford University in 1989. Her dissertation, "formula_0-Values in Projection Pursuit", was supervised by David Siegmund. After she completed her doctorate, she became a faculty member at the University of Michigan and later at Case Western Reserve University, where she became an associate and full professor in statistics, and then a professor of biostatistics and director of Case's Center for Statistical Research, Computing and Collaboration. In 2019 she moved to the department of statistics at George Mason University as professor, Bernard J. Dunn Eminent Scholar, and department chair. She also became an ASA/ACM/AMS/IMS/MAA/SIAM Science and Technology Policy Fellow for 2019–2020, working with the United States Department of Agriculture in Washington, DC. Recognition. Sun is a Fellow of the American Statistical Association and of the Institute of Mathematical Statistics, and an Elected Member of the International Statistical Institute. She served as president of the Caucus for Women in Statistics for the 2016 term. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P" } ]
https://en.wikipedia.org/wiki?curid=62882276
62891333
Abelian Lie group
In geometry, an abelian Lie group is a Lie group that is an abelian group. A connected abelian real Lie group is isomorphic to formula_0. In particular, a connected abelian (real) compact Lie group is a torus; i.e., a Lie group isomorphic to formula_1. A connected complex Lie group that is a compact group is abelian and a connected compact complex Lie group is a complex torus; i.e., a quotient of formula_2 by a lattice. Let "A" be a compact abelian Lie group with the identity component formula_3. If formula_4 is a cyclic group, then formula_5 is topologically cyclic; i.e., has an element that generates a dense subgroup. (In particular, a torus is topologically cyclic.) Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Works cited. &lt;templatestyles src="Refbegin/styles.css" /&gt;
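The statement that a torus is topologically cyclic can be made concrete for the circle S¹: any element exp(2πiα) with α irrational generates a dense subgroup. The sketch below (Python with NumPy) is an illustration added here, with α = √2 chosen arbitrarily; it measures the largest arc of the circle missed by the first N powers of such an element and shows that arc shrinking as N grows.

```python
import numpy as np

alpha = np.sqrt(2.0)   # irrational; z = exp(2*pi*i*alpha) generates a dense subgroup of S^1

for N in [100, 1000, 10000, 100000]:
    # Arguments of z, z^2, ..., z^N, viewed as points of S^1 = R/Z.
    orbit = np.sort((alpha * np.arange(1, N + 1)) % 1.0)
    # Largest arc containing none of these points (including the wrap-around arc).
    gaps = np.diff(np.concatenate([orbit, orbit[:1] + 1.0]))
    print(f"N = {N:6d}: largest empty arc = {gaps.max():.2e}")
```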
[ { "math_id": 0, "text": "\\mathbb{R}^k \\times (S^1)^h" }, { "math_id": 1, "text": "(S^1)^h" }, { "math_id": 2, "text": "\\mathbb{\\Complex}^n" }, { "math_id": 3, "text": "A_0" }, { "math_id": 4, "text": "A/A_0" }, { "math_id": 5, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=62891333
6289189
Isotoxal figure
Polytope or tiling with one type of edge In geometry, a polytope (for example, a polygon or a polyhedron) or a tiling is isotoxal (from Greek "τόξον" 'arc') or edge-transitive if its symmetries act transitively on its edges. Informally, this means that there is only one type of edge to the object: given two edges, there is a translation, rotation, and/or reflection that will move one edge to the other while leaving the region occupied by the object unchanged. Isotoxal polygons. An isotoxal polygon is an even-sided, i.e., equilateral, polygon, but not all equilateral polygons are isotoxal. The duals of isotoxal polygons are isogonal polygons. Isotoxal formula_0-gons are centrally symmetric, and thus are also zonogons. In general, a (non-regular) isotoxal formula_1-gon has formula_2 dihedral symmetry. For example, a (non-square) rhombus is an isotoxal "formula_3×formula_3-gon" (quadrilateral) with formula_4 symmetry. All regular formula_5-gons (also with odd formula_6) are isotoxal, having double the minimum symmetry order: a regular formula_6-gon has formula_2 dihedral symmetry. An isotoxal formula_7-gon with outer internal angle formula_8 can be denoted by formula_9 The inner internal angle formula_10 may be less or greater than formula_11formula_12 making convex or concave polygons respectively. A star formula_13-gon can also be isotoxal, denoted by formula_14 with formula_15 and with the greatest common divisor formula_16 where formula_17 is the turning number or density. Concave inner vertices can be defined for formula_18 If formula_19 then formula_20 is "reduced" to a compound formula_21 of formula_22 rotated copies of formula_23 Caution: The vertices of formula_24 are not always placed like those of formula_25 whereas the vertices of the regular formula_26 are placed like those of the regular formula_27 A set of "uniform" tilings, actually isogonal tilings using isotoxal polygons as less symmetric faces than regular ones, can be defined. Isotoxal polyhedra and tilings. Regular polyhedra are isohedral (face-transitive), isogonal (vertex-transitive), and isotoxal (edge-transitive). Quasiregular polyhedra, like the cuboctahedron and the icosidodecahedron, are isogonal and isotoxal, but not isohedral. Their duals, including the rhombic dodecahedron and the rhombic triacontahedron, are isohedral and isotoxal, but not isogonal. Not every polyhedron or 2-dimensional tessellation constructed from regular polygons is isotoxal. For instance, the truncated icosahedron (the familiar soccer ball) is not isotoxal, as it has two edge types: hexagon–hexagon and hexagon–pentagon, and it is not possible for a symmetry of the solid to move a hexagon–hexagon edge onto a hexagon–pentagon edge. An isotoxal polyhedron has the same dihedral angle for all edges. The dual of a convex polyhedron is also a convex polyhedron. The dual of a non-convex polyhedron is also a non-convex polyhedron. (By contraposition.) The dual of an isotoxal polyhedron is also an isotoxal polyhedron. (See the Dual polyhedron article.) There are nine convex isotoxal polyhedra: the five (regular) Platonic solids, the two (quasiregular) common cores of dual Platonic solids, and their two duals. There are fourteen non-convex isotoxal polyhedra: the four (regular) Kepler–Poinsot polyhedra, the two (quasiregular) common cores of dual Kepler–Poinsot polyhedra, and their two duals, plus the three quasiregular ditrigonal (3 | "p q") star polyhedra, and their three duals. 
There are at least five isotoxal polyhedral compounds: the five regular polyhedral compounds; their five duals are also the five regular polyhedral compounds (or one chiral twin). There are at least five isotoxal polygonal tilings of the Euclidean plane, and infinitely many isotoxal polygonal tilings of the hyperbolic plane, including the Wythoff constructions from the regular hyperbolic tilings {"p","q"}, and non-right ("p q r") groups. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
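The bookkeeping behind the star-polygon notation introduced above (the constraint gcd(n, q) = 1, the condition q < n/2 for concave inner vertices, and the reduction to a compound when the gcd exceeds 1) is easy to automate. A small sketch in Python, added here purely as an illustration of that notation:

```python
from math import gcd

def classify_isotoxal_star(n, q):
    """Classify the symbol {(n/q)_a} for an isotoxal 2n-gon, following the rules above:
    if D = gcd(n, q) >= 2 the symbol reduces to a compound of D rotated copies of {(m/p)_a}."""
    if not 1 <= q <= n - 1:
        return f"(n, q) = ({n}, {q}): q must satisfy 1 <= q <= n - 1"
    D = gcd(n, q)
    note = "concave inner vertices possible" if q < n / 2 else "no concave inner vertices"
    if D == 1:
        kind = "star " if q > 1 else ""
        return f"{{({n}/{q})_a}}: irreducible isotoxal {kind}2*{n}-gon, turning number {q} ({note})"
    m, p = n // D, q // D
    return f"{{({n}/{q})_a}}: reduces to a compound of {D} rotated copies of {{({m}/{p})_a}}"

for n, q in [(5, 2), (6, 2), (8, 3), (9, 3)]:
    print(classify_isotoxal_star(n, q))
```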
[ { "math_id": 0, "text": "4n" }, { "math_id": 1, "text": "2n" }, { "math_id": 2, "text": "\\mathrm{D}_n, (^*nn)" }, { "math_id": 3, "text": "2" }, { "math_id": 4, "text": "\\mathrm{D}_2, (^*22)" }, { "math_id": 5, "text": "{\\color{royalblue}n}" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "\\bold{2}n" }, { "math_id": 8, "text": "\\alpha" }, { "math_id": 9, "text": "\\{n_\\alpha\\}." }, { "math_id": 10, "text": "(\\beta)" }, { "math_id": 11, "text": "180" }, { "math_id": 12, "text": "{\\color{royalblue}^\\mathsf{o}}," }, { "math_id": 13, "text": "{\\color{royalblue}\\bold{2}n}" }, { "math_id": 14, "text": "\\{(n/q)_\\alpha\\}," }, { "math_id": 15, "text": "q \\le n - 1" }, { "math_id": 16, "text": "\\gcd(n,q) = 1," }, { "math_id": 17, "text": "q" }, { "math_id": 18, "text": "q < n/2." }, { "math_id": 19, "text": "D = \\gcd(n,q) \\ge 2," }, { "math_id": 20, "text": "\\{(n/q)_\\alpha\\} = \\{(Dm/Dp)_\\alpha\\}" }, { "math_id": 21, "text": "D \\{(m/p)_\\alpha\\}" }, { "math_id": 22, "text": "D" }, { "math_id": 23, "text": "\\{(m/p)_\\alpha\\}." }, { "math_id": 24, "text": "\\{(n/q)_\\alpha\\}" }, { "math_id": 25, "text": "\\{n_\\alpha\\}," }, { "math_id": 26, "text": "\\{n/q\\}" }, { "math_id": 27, "text": "\\{n\\}." } ]
https://en.wikipedia.org/wiki?curid=6289189
628931
Abraham bar Hiyya
Catalan Jewish mathematician, astronomer and philosopher (1070-1136) Abraham bar Ḥiyya ha-Nasi (; c. 1070 – 1136 or 1145), also known as Abraham Savasorda, Abraham Albargeloni, and Abraham Judaeus, was a Catalan Jewish mathematician, astronomer and philosopher who resided in Barcelona, then in the County of Barcelona. Bar Ḥiyya was active in translating the works of Islamic science into Latin and was likely the earliest to introduce algebra from the Muslim world into Christian Europe. He also wrote several original works on mathematics, astronomy, Jewish philosophy, chronology, and surveying. His most influential work is his "Ḥibbur ha-Meshiḥah ve-ha-Tishboret", translated in 1145 into Latin as "Liber embadorum". A Hebrew treatise on practical geometry and algebra, the book contains the first known complete solution of the quadratic equation formula_0, and influenced the work of Fibonacci. Biography. Abraham bar Ḥiyya was the great-grandson of Hezekiah ben David, the last Gaon of the Talmudic academies in Babylonia. Bar Ḥiyya occupied a high position in the royal court, serving as minister of police, and bore the title of governor (). Scholars assume that Bar Hiyya would have obtained this title in the court of Banu Hud of Zaragoza-Lleida; there is even a record of a Jewish Savasorda there in the beginning of the 12th century. In his travelogues, Benjamin of Tudela mentions bar Ḥiyya living in Barcelona in the 1160s. According to Adolph Drechsler, bar Ḥiyya was a pupil of Moshe ha-Darshan and teacher of Abraham ibn Ezra. He was held in high consideration by the ruler he served on account of his astronomical knowledge and had disputes with learned Catholic priests, to whom he demonstrated the accuracy of the Jewish calendar. Abraham bar Hiyya is said to have been a great astronomer and wrote some works on astronomy and geography. One talks about the form of the earth, the elements, and the structure of the spheres. Other works included papers on astrology, trigonometry, and music. Some scholars think that the Magister Abraham who dictated "De Astrolabio" (probably at Toulouse) to Rudolf of Bruges (a work that the latter finished in 1143) was identical with Abraham bar Ḥiyya. Although the title "Sephardi" is always appended to his name, Barcelona was at the time no longer under Muslim rule, and therefore not part of Sepharad. Abraham Albargeloni (i.e., from Barcelona) thus belonged to the community of the Jews of Catalonia. Catalonia joined Provence in 1112 and Aragon in 1137, and thus the County of Barcelona became the capital of a new Catalan-Aragonese confederation called the Crown of Aragon. The kings of Aragon extended their domains to Occitania in what is now southern France. Abraham Albargeloni spent some time in Narbonne, where he composed some works for the Hachmei Provence, in which he complained about the Provençal community's ignorance of mathematics. Work. Abraham bar Ḥiyya was one of the most important figures in the scientific movement which made the Jews of Provence, the Jews of Catalonia, Spain, and Italy the intermediaries between Arabic science and the Christian world, in both his original works and his translations. Bar Ḥiyya's "Yesode ha-Tebunah u-Migdal ha-Emunah" (), usually referred to as the "Encyclopedia", was the first European attempt to synthesize Greek and Arabic mathematics. 
Likely written in the first quarter of the 12th century, the book is said to elaborate on the interdependence of number theory, mathematical operations, business arithmetic, geometry, optics, and music. The book draws from a number of Greek sources then available in Arabic, as well as the works of al-Khwarizmi and Al-Karaji. Only a few short fragments of this work have been preserved. Bar Ḥiyya's most notable work is his "Ḥibbur ha-Meshiḥah ve-ha-Tishboret" (), probably intended to be a part of the preceding work. This is the celebrated geometry translated in 1145 by Plato of Tivoli, under the title "Liber embadorum a Savasordo in hebraico compositus". Fibonacci made the Latin translation of the "Ḥibbūr" the basis of his "Practica Geometriae", following it even to the sameness of some of the examples. Bar Ḥiyya also wrote two religious works in the field of Judaism and the Tanach: "Hegyon ha-Nefesh" ("Contemplation of the Soul") on repentance, and "Megillat ha-Megalleh" ("Scroll of the Revealer") on the redemption of the Jewish people. The latter was partly translated into Latin in the 14th century under the title "Liber de redemptione Israhel". Even these religious works contain scientific and philosophical speculation. His "Megillat ha-Megalleh" was also astrological in nature, and drew a horoscope of favourable and unfavourable days. Bar Ḥiyya forecasted that the Messiah would appear in AM 5118 (1358 CE). Abraham bar Ḥiyya wrote all his works in Hebrew, not in Judaeo-Arabic of the earlier Jewish scientific literature, which made him a pioneer in the use of the Hebrew language for scientific purposes. Translations. Abraham bar Ḥiyya co-operated with a number of scholars in the translation of scientific works from Arabic into Latin, most notably Plato of Tivoli with their translation of Ptolemy's "Tetrabiblos" in 1138 at Barcelona. There remains doubt as to the particulars: a number of Jewish translators named Abraham existed during the 12th century, and it is not always possible to identify the one in question. Known translations of bar Ḥiyya include: In the preface to "Ẓurat ha-Areẓ", bar Ḥiyya modestly states that, because none of the scientific works such as those which exist in Arabic were accessible to his brethren in France, he felt called upon to compose books which, though containing no research of his own, would help to popularize knowledge among Hebrew readers. His Hebrew terminology, therefore, occasionally lacks the clearness and precision of later writers and translators. Philosophy. Bar Ḥiyya was a pioneer in the field of philosophy: as shown by Guttmann in refutation of David Kaufmann's assumption that the "Hegyon ha-Nefesh" was originally written in Arabic, Abraham bar Ḥiyya had to wrestle with the difficulties of a language not yet adapted to philosophic terminology. Whether composed especially for the Ten Days of Repentance, as Rapoport and Rosin think, or not, the object of the work was a practical, rather than a theoretical, one. It was to be a homily in four chapters on repentance based on the Hafṭarot of the Day of Atonement and Shabbat Shuvah. In it, he exhorts the reader to lead a life of purity and devotion. At the same time he does not hesitate to borrow ideas from non-Jewish philosophers, and he pays homage to the ancient Greek philosophers who, without knowledge of the Torah, arrived at certain fundamental truths regarding the beginning of things, though in an imperfect way, because both the end and the divine source of wisdom remained hidden to them. 
In his opinion the non-Jew may attain to as high a degree of godliness as the Jew. Matter and Form. Abraham bar Ḥiyya's philosophical system is neoplatonic like that of Solomon ibn Gabirol and of the author of "Torot ha-Nefesh" "Reflections on the Soul" as Plotinus stated: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; He agrees with Plato that the soul in this world of flesh is imprisoned, while the animal soul craves for worldly pleasures, and experiences pain in foregoing them. Still, only the sensual man requires corrections of the flesh to liberate the soul from its bondage; the truly pious need not, or rather should not, undergo fasting or other forms of asceticism except such as the law has prescribed. But, precisely as man has been set apart among his fellow creatures as God's servant, so Israel is separate from the nations, the same three terms ("bara", "yaṣar", and "asah") being used by the prophet for Israel's creation as for that of man in Genesis. Three Classes of Pious Men. Like Baḥya ibn Paquda, Abraham bar Ḥiyya distinguishes three classes of pious men: In accordance with these three classes of servants of God, he finds the laws of the Torah to be divided into three groups: Guttmann has shown that Naḥmanides read and used the "Hegyon ha-Nefesh", though occasionally differing from it; but while Saadia Gaon is elsewhere quoted by Abraham bar Ḥiyya, he never refers to him in "Hegyon". Characteristic of the age is the fact that while Abraham bar Ḥiyya contended against every superstition, against the superstitions of the tequfoth, against prayers for the dead, and similar practises, he was, nevertheless, like Ibn Ezra, a firm believer in astrology. In his "Megillat ha-Megalleh" he calculated from Scripture the exact time for the advent of the Messiah to be the year of the world 5118. He wrote also a work on redemption, from which Isaac Abravanel appropriated many ideas. It is in defense of Judaism against Christian arguments, and also discusses Muhammed "the Insane", announcing the downfall of Islam, according to astrological calculation, for the year 4946 A.M. Mathematics. Bar Ḥiyya's "Ḥibbur ha-meshīḥah ve-ha-tishboret" contains the first appearance of quadratic equations in the West. Bar Ḥiyya proved by the method of indivisibles the following equation for any circle: formula_1, where formula_2 is the surface area, formula_3 is the circumference length and formula_4 is radius. The same proof appears in the commentary of the Tosafists (12th century) on the Babylonian Talmud. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;  This article incorporates text from a publication now in the public domain:  References. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
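The two results credited to bar Ḥiyya in this section, a complete solution of x² − ax + b = c and the circle relation A = C·R/2, can be checked with a few lines of arithmetic. The sketch below (Python) is added for illustration and the numerical example is invented; it solves the quadratic by completing the square and compares the circle relation, with C = 2πR, against A = πR².

```python
import math

def solve_quadratic(a, b, c):
    """Real solutions of x^2 - a*x + b = c, by completing the square:
    (x - a/2)^2 = c - b + (a/2)**2."""
    rhs = c - b + (a / 2) ** 2
    if rhs < 0:
        return []                              # no real solution
    r = math.sqrt(rhs)
    return sorted({a / 2 - r, a / 2 + r})

print(solve_quadratic(7, 10, 0))               # x^2 - 7x + 10 = 0 has roots [2.0, 5.0]

# Circle: A = C * R / 2 with C = 2*pi*R agrees with the familiar A = pi * R^2.
R = 3.0
C = 2 * math.pi * R
print(C * R / 2, math.pi * R ** 2)             # both 28.2743...
```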
[ { "math_id": 0, "text": "x^2 - ax + b = c" }, { "math_id": 1, "text": " A = C \\times \\tfrac{R}{2}" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "C" }, { "math_id": 4, "text": "R" } ]
https://en.wikipedia.org/wiki?curid=628931
62895345
Peter Pan disk
Type of circumstellar disk A Peter Pan disk is a circumstellar disk around a star or brown dwarf that appears to have retained enough gas to form a gas giant planet for much longer than the typically assumed gas dispersal timescale of approximately 5 million years. Several examples of such disks have been observed to orbit stars with spectral types of M or later. The presence of gas around these disks has generally been inferred from the total amount of radiation emitted from the disk at infrared wavelengths, and/or spectroscopic signatures of hydrogen accreting onto the star. To fit one specific definition of a Peter Pan disk, the source needs to have an infrared "color" of formula_0, an age of &gt;20 Myr and spectroscopic evidence of accretion. In 2016 volunteers of the Disk Detective project discovered WISE J080822.18-644357.3 (or J0808). This low-mass star showed signs of youth, for example a strong infrared excess and active accretion of gaseous material. It is part of the 45 Myr old Carina young moving group, older than expected for these characteristics of an M-dwarf. Other stars and brown dwarfs were discovered to be similar to J0808, with signs of youth while being in an older moving group. Together with J0808, these older low-mass accretors in nearby moving groups have been called Peter Pan disks in one scientific paper published in early 2020. Since then the term was used by other independent research groups. Name. Peter Pan disks are named after the main character Peter Pan in the play and book Peter Pan, or The Boy Who Wouldn’t Grow Up, written by J.M. Barrie in 1904. The Peter Pan disks have a young appearance, while being old in years. In other words: The Peter Pan disks "refuse to grow up", a feature they share with the lost boys and titular character in Peter Pan. Characteristics. The known Peter Pan disks have the H-alpha spectroscopic line as a sign of accretion. J0808 shows variations in the Paschen-β and Brackett-γ lines, which is a clear sign of accretion. It was also identified as lithium-rich, which is a sign of youth. Two peter pan disks (J0808 and J0632) show variation due to material from the disk blocking the light of the star. J0808 and J0501 also showed flares. Some of the Peter Pan disks (J0446, J0949, LDS 5606 and J1915) are binaries or suspected binaries. J0226 is a candidate brown dwarf and Delorme 1 (AB)b is a planetary-mass object in a circumbinary orbit. It was suggested that Peter Pan disks take longer to dissipate due to lower photoevaporation caused by lower far-ultraviolet and X-ray emission coming from the M-dwarf. Modelling has shown that disk can survive for 50 Myrs around stars with a mass less than 0.6 M☉ and in low-radiation environments. At higher masses of 0.6 to 0.8 M☉ the stars form an inner gap before 50 Myr, preventing accretion. Observations with the Chandra X-ray Observatory showed that Peter Pan Disks have a similar X-ray luminosity as field M-dwarfs, with properties similar to weak-lined T Tauri stars. The researchers of this study concluded that the current X-ray luminosity of Peter Pan disk cannot explain their old age. The old age of the disk could be the result of weaker far-ultraviolet flux incident on the disk, due to weaker accretion in the pre-main sequence stage. It was proposed that disks do form with a lifetime distribution, with some disks only existing for a few Myrs and others for dozens of Myrs. This would explain why some &gt;20 Myr old M-dwarfs show accretion due to a disk, but not all M-dwarfs of this age. 
The research team found an initial disk fraction of 65% for M-dwarfs (M3.7-M6) and the disk lifetime distribution matches a Gaussian or Weibull distribution. Known Peter Pan disks. The prototype Peter Pan disk is WISE J080822.18-644357.3. It was discovered by the NASA-led citizen science project Disk Detective. Murphy et al. found additional Peter Pan disks in the literature, which were identified as part of the Columba and Tucana-Horologium associations. The Disk Detective Collaboration identified two additional Peter Pan disks in Columba and Carina associations. The paper also mentions that members of NGC 2547 were previously identified to have 22 μm excess and could be similar to Peter Pan disks. 2MASS 08093547-4913033, which is one of the M-dwarfs with a debris disk in NGC 2547 was observed with the Spitzer Infrared Spectrograph. In this system the first detection of silicate was made from a debris disk around an M-type star. While the system shows the H-alpha line, it was interpreted to be devoid of gas and non-accreting. In the following years additional objects were discovered. Some objects do not exactly fit the definition of Peter Pan disks, but are similar enough to be analogs: The object 2MASS J06195260-2903592 was found to be a Myr old analog to Peter Pan disks. This object does however not show accretion. The star PDS 111 is interpreted as a higher-mass analog of Peter Pan disks, with an age of Myrs, a mass of M☉, active accretion and a directly imaged disk. One team also found old accreting stars in the Large Magellanic Cloud in the Tarantula Nebula. This might be explained with a low metallicity in the LMC, which can lead to more massive disks that are less opaque. List of Peter Pan disk candidates. 2MASS J0041353-562112 was discarded as it belongs to the Beta Pictoris moving group and does not show excess. Implications for planet formation around M-stars. There are different models to explain the existence of Peter Pan disks, such as disrupted planetesimals or recent collisions of planetary bodies. One explanation is that Peter Pan disks are long-lived primordial disks. This would follow the trend of lower-mass stars requiring more time to dissipate their disks. Exoplanets around M-stars would have more time to form, significantly affecting the atmospheres on these planets. Peter Pan disks that form multiplanetary systems could force the planets in close-in, resonant orbits. The 7-planet system TRAPPIST-1 could be an end result of such a Peter Pan disk. A Peter Pan disk could also help to explain the existence of Jovian planets around M-dwarfs, such as TOI-5205b. A longer lifetime for a disk would give more time for a solid core to form, which could initiate runaway core-accretion. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
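The specific definition quoted in the lead combines three observable criteria: an infrared color Ks − W4 > 2, an age above 20 Myr, and spectroscopic evidence of accretion. A minimal sketch of that selection rule follows (Python); the source names and measurements below are invented placeholders, not real objects.

```python
def is_peter_pan_candidate(ks_minus_w4, age_myr, shows_accretion):
    """Apply the three criteria of the definition quoted in the lead."""
    return ks_minus_w4 > 2.0 and age_myr > 20.0 and shows_accretion

# Hypothetical measurements, for illustration only.
examples = [
    ("example source 1", 2.8, 45.0, True),    # red excess, old moving group, accreting
    ("example source 2", 1.1, 45.0, True),    # too little infrared excess
    ("example source 3", 3.0, 10.0, True),    # too young to be unusual
]
for name, color, age, accreting in examples:
    verdict = "candidate" if is_peter_pan_candidate(color, age, accreting) else "not a candidate"
    print(f"{name}: {verdict}")
```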
[ { "math_id": 0, "text": "Ks-W4>2" } ]
https://en.wikipedia.org/wiki?curid=62895345
628962
Approximation property
Mathematical concept In mathematics, specifically functional analysis, a Banach space is said to have the approximation property (AP), if every compact operator is a limit of finite-rank operators. The converse is always true. Every Hilbert space has this property. There are, however, Banach spaces which do not; Per Enflo published the first counterexample in a 1973 article. However, much work in this area was done by Grothendieck (1955). Later many other counterexamples were found. The space formula_0 of bounded operators on an infinite-dimensional Hilbert space formula_1 does not have the approximation property. The spaces formula_2 for formula_3 and formula_4 (see Sequence space) have closed subspaces that do not have the approximation property. Definition. A locally convex topological vector space "X" is said to have the approximation property, if the identity map can be approximated, uniformly on precompact sets, by continuous linear maps of finite rank. For a locally convex space "X", the following are equivalent: where formula_9 denotes the space of continuous linear operators from "X" to "Y" endowed with the topology of uniform convergence on pre-compact subsets of "X". If "X" is a Banach space this requirement becomes that for every compact set formula_12 and every formula_13, there is an operator formula_14 of finite rank so that formula_15, for every formula_16. Related definitions. Some other flavours of the AP are studied: Let formula_17 be a Banach space and let formula_18. We say that "X" has the formula_19"-approximation property" (formula_19-AP), if, for every compact set formula_12 and every formula_13, there is an operator formula_20 of finite rank so that formula_21, for every formula_16, and formula_22. A Banach space is said to have bounded approximation property (BAP), if it has the formula_19-AP for some formula_19. A Banach space is said to have metric approximation property (MAP), if it is 1-AP. A Banach space is said to have compact approximation property (CAP), if in the definition of AP an operator of finite rank is replaced with a compact operator. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
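Every Hilbert space has the approximation property, and the mechanism is visible already for diagonal operators: a compact diagonal operator is the operator-norm limit of its finite-rank truncations. The sketch below (Python with NumPy, added for illustration) models such an operator by a large finite matrix, so it only suggests the infinite-dimensional statement; the error of the rank-r truncation equals the first omitted diagonal entry.

```python
import numpy as np

n = 500                                   # finite model of a compact operator on l^2
diag = 1.0 / np.arange(1, n + 1)          # diagonal entries tending to 0
T = np.diag(diag)

def truncation(r):
    """Finite-rank operator keeping only the first r diagonal entries."""
    Tr = np.zeros_like(T)
    Tr[:r, :r] = np.diag(diag[:r])
    return Tr

for r in [1, 10, 100, 400]:
    err = np.linalg.norm(T - truncation(r), ord=2)     # operator (spectral) norm
    print(f"rank {r:3d}: ||T - T_r|| = {err:.5f}   (first omitted entry: {diag[r]:.5f})")
```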
[ { "math_id": 0, "text": "\\mathcal L(H)" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "\\ell^p" }, { "math_id": 3, "text": "p\\neq 2" }, { "math_id": 4, "text": "c_0" }, { "math_id": 5, "text": "X^{\\prime} \\otimes X" }, { "math_id": 6, "text": "\\operatorname{L}_p(X, X)" }, { "math_id": 7, "text": "\\operatorname{Id} : X \\to X" }, { "math_id": 8, "text": "X^{\\prime} \\otimes Y" }, { "math_id": 9, "text": "\\operatorname{L}_p(X, Y)" }, { "math_id": 10, "text": "Y^{\\prime} \\otimes X" }, { "math_id": 11, "text": "\\operatorname{L}_p(Y, X)" }, { "math_id": 12, "text": "K\\subset X" }, { "math_id": 13, "text": "\\varepsilon>0" }, { "math_id": 14, "text": "T\\colon X\\to X" }, { "math_id": 15, "text": "\\|Tx-x\\|\\leq\\varepsilon" }, { "math_id": 16, "text": "x \\in K" }, { "math_id": 17, "text": "X" }, { "math_id": 18, "text": "1\\leq\\lambda<\\infty" }, { "math_id": 19, "text": "\\lambda" }, { "math_id": 20, "text": "T\\colon X \\to X" }, { "math_id": 21, "text": "\\|Tx - x\\|\\leq\\varepsilon" }, { "math_id": 22, "text": "\\|T\\|\\leq\\lambda" }, { "math_id": 23, "text": "T" } ]
https://en.wikipedia.org/wiki?curid=628962
62900324
Gelfand–Fuks cohomology
In mathematics, Gelfand–Fuks cohomology, introduced by Israel Gelfand and Dmitry Fuks, is a cohomology theory for Lie algebras of smooth vector fields. It differs from the Lie algebra cohomology of Chevalley–Eilenberg in that its cochains are taken to be continuous multilinear alternating forms on the Lie algebra of smooth vector fields, where the latter is given the formula_0 topology. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "C^{\\infty}" } ]
https://en.wikipedia.org/wiki?curid=62900324
629005
Ba space
Class of Banach spaces In mathematics, the ba space formula_0 of an algebra of sets formula_1 is the Banach space consisting of all bounded and finitely additive signed measures on formula_1. The norm is defined as the variation, that is formula_2 If Σ is a sigma-algebra, then the space formula_3 is defined as the subset of formula_0 consisting of countably additive measures. The notation "ba" is a mnemonic for "bounded additive" and "ca" is short for "countably additive". If "X" is a topological space, and Σ is the sigma-algebra of Borel sets in "X", then formula_4 is the subspace of formula_3 consisting of all regular Borel measures on "X". Properties. All three spaces are complete (they are Banach spaces) with respect to the same norm defined by the total variation, and thus formula_3 is a closed subset of formula_0, and formula_4 is a closed set of formula_3 for Σ the algebra of Borel sets on "X". The space of simple functions on formula_1 is dense in formula_0. The ba space of the power set of the natural numbers, "ba"(2N), is often denoted as simply formula_5 and is isomorphic to the dual space of the ℓ∞ space. Dual of B(Σ). Let B(Σ) be the space of bounded Σ-measurable functions, equipped with the uniform norm. Then "ba"(Σ) = B(Σ)* is the continuous dual space of B(Σ). This is due to Hildebrandt and Fichtenholtz &amp; Kantorovich. This is a kind of Riesz representation theorem which allows for a measure to be represented as a linear functional on measurable functions. In particular, this isomorphism allows one to "define" the integral with respect to a finitely additive measure (note that the usual Lebesgue integral requires "countable" additivity). This is due to Dunford &amp; Schwartz, and is often used to define the integral with respect to vector measures, and especially vector-valued Radon measures. The topological duality "ba"(Σ) = B(Σ)* is easy to see. There is an obvious "algebraic" duality between the vector space of "all" finitely additive measures σ on Σ and the vector space of simple functions (formula_6). It is easy to check that the linear form induced by σ is continuous in the sup-norm if σ is bounded, and the result follows since a linear form on the dense subspace of simple functions extends to an element of B(Σ)* if it is continuous in the sup-norm. Dual of "L"∞("μ"). If Σ is a sigma-algebra and "μ" is a sigma-additive positive measure on Σ then the Lp space "L"∞("μ") endowed with the essential supremum norm is by definition the quotient space of B(Σ) by the closed subspace of bounded "μ"-null functions: formula_7 The dual Banach space "L"∞("μ")* is thus isomorphic to formula_8 i.e. the space of finitely additive signed measures on "Σ" that are absolutely continuous with respect to "μ" ("μ"-a.c. for short). When the measure space is furthermore sigma-finite then "L"∞("μ") is in turn dual to "L"1("μ"), which by the Radon–Nikodym theorem is identified with the set of all countably additive "μ"-a.c. measures. In other words, the inclusion in the bidual formula_9 is isomorphic to the inclusion of the space of countably additive "μ"-a.c. bounded measures inside the space of all finitely additive "μ"-a.c. bounded measures. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
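For a finitely additive signed measure on the power set of a finite set, the variation norm defined above reduces to the sum of the absolute values of the measure on singletons, since the finest partition maximizes Σ|ν(Aᵢ)|. A brute-force check of this on a three-point set follows (Python, added for illustration; the measure values are invented).

```python
ATOMS = {"a": 2.0, "b": -3.0, "c": 0.5}    # hypothetical signed measure on singletons

def nu(A):
    """Finitely additive extension of ATOMS to all subsets of X."""
    return sum(ATOMS[x] for x in A)

def partitions(xs):
    """All partitions of the tuple xs into nonempty blocks."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for part in partitions(rest):
        yield [[first]] + part                                   # first element alone
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]  # first joins block i

X = ("a", "b", "c")
# ||nu|| = |nu|(X) = sup over finite partitions of sum |nu(A_i)|.
variation = max(sum(abs(nu(block)) for block in part) for part in partitions(X))
print(variation)                                   # 5.5
print(sum(abs(nu((x,))) for x in X))               # 5.5 again: attained on the atoms
```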
[ { "math_id": 0, "text": "ba(\\Sigma)" }, { "math_id": 1, "text": "\\Sigma" }, { "math_id": 2, "text": "\\|\\nu\\|=|\\nu|(X)." }, { "math_id": 3, "text": "ca(\\Sigma)" }, { "math_id": 4, "text": "rca(X)" }, { "math_id": 5, "text": "ba" }, { "math_id": 6, "text": "\\mu(A)=\\zeta\\left(1_A\\right)" }, { "math_id": 7, "text": "N_\\mu:=\\{f\\in B(\\Sigma) : f = 0 \\ \\mu\\text{-almost everywhere} \\}." }, { "math_id": 8, "text": "N_\\mu^\\perp=\\{\\sigma\\in ba(\\Sigma) : \\mu(A)=0\\Rightarrow \\sigma(A)= 0 \\text{ for any }A\\in\\Sigma\\}," }, { "math_id": 9, "text": "L^1(\\mu)\\subset L^1(\\mu)^{**}=L^{\\infty}(\\mu)^*" } ]
https://en.wikipedia.org/wiki?curid=629005
62902
Stuart Kauffman
American medical doctor &amp; academic Stuart Alan Kauffman (born September 28, 1939) is an American medical doctor, theoretical biologist, and complex systems researcher who studies the origin of life on Earth. He was a professor at the University of Chicago, University of Pennsylvania, and University of Calgary. He is currently emeritus professor of biochemistry at the University of Pennsylvania and affiliate faculty at the Institute for Systems Biology. He has a number of awards including a MacArthur Fellowship and a Wiener Medal. He is best known for arguing that the complexity of biological systems and organisms might result as much from self-organization and far-from-equilibrium dynamics as from Darwinian natural selection, as discussed in his book "Origins of Order" (1993). In 1967 and 1969 he used random Boolean networks to investigate generic self-organizing properties of gene regulatory networks, proposing that cell types are dynamical attractors in gene regulatory networks and that cell differentiation can be understood as transitions between attractors. Recent evidence suggests that cell types in humans and other organisms are attractors. In 1971 he suggested that a zygote may not be able to access all the cell type attractors in its gene regulatory network during development and that some of the developmentally inaccessible cell types might be cancer cell types. This suggested the possibility of "cancer differentiation therapy". He also proposed the self-organized emergence of collectively autocatalytic sets of polymers, specifically peptides, for the origin of molecular reproduction, which have found experimental support. Education and early career. Kauffman graduated from Dartmouth in 1960, was awarded the BA (Hons) by Oxford University (where he was a Marshall Scholar) in 1963, and completed a medical degree (M.D.) at the University of California, San Francisco in 1968. After completing his internship, he moved into developmental genetics of the fruit fly, holding appointments first at the University of Chicago from 1969 to 1973, the National Cancer Institute from 1973 to 1975, and then at the University of Pennsylvania from 1975 to 1994, where he rose to professor of biochemistry and biophysics. Career. Kauffman became known through his association with the Santa Fe Institute (a non-profit research institute dedicated to the study of complex systems), where he was faculty in residence from 1986 to 1997, and through his work on models in various areas of biology. These included autocatalytic sets in origin of life research, gene regulatory networks in developmental biology, and fitness landscapes in evolutionary biology. With Marc Ballivet, Kauffman holds the founding broad biotechnology patents in combinatorial chemistry and applied molecular evolution, first issued in France in 1987, in England in 1989, and later in North America. In 1996, with Ernst and Young, Kauffman started BiosGroup, a Santa Fe, New Mexico-based for-profit company that applied complex systems methodology to business problems. BiosGroup was acquired by NuTech Solutions in early 2003. NuTech was bought by Netezza in 2008, and later by IBM. From 2005 to 2009 Kauffman held a joint appointment at the University of Calgary in biological sciences, physics, and astronomy. He was also an adjunct professor in the Department of Philosophy at the University of Calgary. He was an iCORE (Informatics Research Circle of Excellence) chair and the director of the Institute for Biocomplexity and Informatics. 
Kauffman was also invited to help launch the Science and Religion initiative at Harvard Divinity School; serving as visiting professor in 2009. In January 2009 Kauffman became a Finland Distinguished Professor (FiDiPro) at Tampere University of Technology, Department of Signal Processing. The appointment ended in December, 2012. The subject of the FiDiPro research project is the development of delayed stochastic models of genetic regulatory networks based on gene expression data at the single molecule level. In January 2010 Kauffman joined the University of Vermont faculty where he continued his work for two years with UVM's Complex Systems Center. From early 2011 to April 2013, Kauffman was a regular contributor to the NPR Blog 13.7, Cosmos and Culture, with topics ranging from the life sciences, systems biology, and medicine, to spirituality, economics, and the law. In May 2013 he joined the Institute for Systems Biology, in Seattle, Washington. Following the death of his wife, Kauffman cofounded Transforming Medicine: The Elizabeth Kauffman Institute. In 2014, Kauffman with Samuli Niiranen and Gabor Vattay was issued a founding patent on the "poised realm" (see below), an apparently new "state of matter" hovering reversibly between quantum and classical realms. In 2015, he was invited to help initiate a general a discussion on rethinking economic growth for the United Nations. Around the same time, he did research with University of Oxford professor Teppo Felin. Fitness landscapes. Kauffman's NK model defines a combinatorial phase space, consisting of every string (chosen from a given alphabet) of length formula_0. For each string in this search space, a scalar value (called the "fitness") is defined. If a distance metric is defined between strings, the resulting structure is a "landscape". Fitness values are defined according to the specific incarnation of the model, but the key feature of the NK model is that the fitness of a given string formula_1 is the sum of contributions from each locus formula_2 in the string: formula_3 and the contribution from each locus in general depends on the value of formula_4 other loci: formula_5 where formula_6 are the other loci upon which the fitness of formula_2 depends. Hence, the fitness function formula_7 is a mapping between strings of length "K" + 1 and scalars, which Weinberger's later work calls "fitness contributions". Such fitness contributions are often chosen randomly from some specified probability distribution. In 1991, Weinberger published a detailed analysis of the case in which formula_8 and the fitness contributions are chosen randomly. His analytical estimate of the number of local optima was later shown to be flawed. However, numerical experiments included in Weinberger's analysis support his analytical result that the expected fitness of a string is normally distributed with a mean of approximately formula_9 and a variance of approximately formula_10. Recognition and awards. Kauffman held a MacArthur Fellowship between 1987 and 1992. He also holds an Honorary Degree in Science from the University of Louvain (1997); He was awarded the Norbert Wiener Memorial Gold Medal for Cybernetics in 1973, the Gold Medal of the Accademia dei Lincei in Rome in 1990, the Trotter Prize for Information and Complexity in 2001, and the Herbert Simon award for Complex Systems in 2013. He became a Fellow of the Royal Society of Canada in 2009. Works. 
Kauffman is best known for arguing that the complexity of biological systems and organisms might result as much from self-organization and far-from-equilibrium dynamics as from Darwinian natural selection in three areas of evolutionary biology, namely population dynamics, molecular evolution, and morphogenesis. With respect to molecular biology, Kauffman's structuralist approach has been criticized for ignoring the role of energy in driving biochemical reactions in cells, which can fairly be called self-catalyzing but which do not simply self-organize. Some biologists and physicists working in Kauffman's area have questioned his claims about self-organization and evolution. A case in point is some comments in the 2001 book "Self-Organization in Biological Systems". Roger Sansom's 2011 book "Ingenious Genes: How Gene Regulation Networks Evolve to Control Development" is an extended criticism of Kauffman's model of self-organization in relation to gene regulatory networks. Borrowing from spin glass models in physics, Kauffman invented "N-K" fitness landscapes, which have found applications in biology and economics. In related work, Kauffman and colleagues have examined subcritical, critical, and supracritical behavior in economic systems. Kauffman's work translates his biological findings to the mind-body problem and issues in neuroscience, proposing attributes of a new "poised realm" that hovers indefinitely between quantum coherence and classicality. He published on this topic in his paper "Answering Descartes: beyond Turing". With Giuseppe Longo and Maël Montévil, he wrote (January 2012) "No Entailing Laws, But Enablement in the Evolution of the Biosphere", which argued that evolution is not "law entailed" like physics. Kauffman's work is posted on Physics ArXiv, including "Beyond the Stalemate: Mind/Body, Quantum Mechanics, Free Will, Possible Panpsychism, Possible Solution to the Quantum Enigma" (October 2014) and "Quantum Criticality at the Origin of Life" (February 2015). Kauffman has contributed to the emerging field of cumulative technological evolution by introducing a mathematics of the "adjacent possible". He has published over 350 articles and 6 books: "The Origins of Order" (1993), "At Home in the Universe" (1995), "Investigations" (2000), "Reinventing the Sacred" (2008), "Humanity in a Creative Universe" (2016), and "A World Beyond Physics" (2019). In 2016, Kauffman wrote a children's story, "Patrick, Rupert, Sly &amp; Gus Protocells", a narrative about unprestatable niche creation in the biosphere, which was later produced as a short animated video. In 2017, exploring the concept that reality consists of both ontologically real "possibles" (res potentia) and ontologically real "actuals" (res extensa), Kauffman co-authored, with Ruth Kastner and Michael Epperson, "Taking Heisenberg's Potentia Seriously". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
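The NK model summarized in the "Fitness landscapes" section above admits a very short implementation. The sketch below (Python with NumPy, added for illustration) works with binary strings, takes the K epistatic partners of locus i to be its K right-hand neighbours on a ring, which is one common choice among many, and draws each fitness contribution uniformly from [0, 1]; it illustrates the model rather than reproducing any published code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_nk_landscape(N, K):
    """Random NK fitness function on binary strings of length N.

    contrib[i] assigns a value in [0, 1] to each of the 2**(K+1) joint states of
    locus i and its K partners; F(S) = sum_i f(S_i, ...), as in the formula above."""
    contrib = rng.random((N, 2 ** (K + 1)))

    def fitness(s):
        total = 0.0
        for i in range(N):
            pattern = 0
            for j in range(K + 1):                 # locus i plus its K partners
                pattern = (pattern << 1) | int(s[(i + j) % N])
            total += contrib[i, pattern]
        return total
    return fitness

N, K = 12, 3
F = make_nk_landscape(N, K)
s = rng.integers(0, 2, size=N)
print("F(S)         =", F(s))

flipped = s.copy()
flipped[0] ^= 1                                    # a single mutation changes K+1 terms
print("F(S_mutated) =", F(flipped))
```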
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "S_i" }, { "math_id": 3, "text": "F(S) = \\sum_i f(S_i)," }, { "math_id": 4, "text": "K" }, { "math_id": 5, "text": "f(S_i) = f(S_i, S^i_1, \\dots, S^i_K), \\, " }, { "math_id": 6, "text": "S^i_j" }, { "math_id": 7, "text": "f(S_i, S^i_1, \\dots, S^i_K)" }, { "math_id": 8, "text": "1 << k \\le N" }, { "math_id": 9, "text": " \\mu + \\sigma \\sqrt{{2 \\ln (k+1)} \\over {k+1}}" }, { "math_id": 10, "text": " {{(k+1)\\sigma^2} \\over {N[k+1 + 2(k+2)\\ln(k+1)]}}" } ]
https://en.wikipedia.org/wiki?curid=62902
629068
Polynomially reflexive space
In mathematics, a polynomially reflexive space is a Banach space "X", on which the space of all polynomials in each degree is a reflexive space. Given a multilinear functional "M""n" of degree "n" (that is, "M""n" is "n"-linear), we can define a polynomial "p" as formula_0 (that is, applying "M""n" on the "diagonal") or any finite sum of these. If only "n"-linear functionals are in the sum, the polynomial is said to be "n"-homogeneous. We define the space "P""n" as consisting of all "n"-homogeneous polynomials. The "P"1 is identical to the dual space, and is thus reflexive for all reflexive "X". This implies that reflexivity is a prerequisite for polynomial reflexivity. Relation to continuity of forms. On a finite-dimensional linear space, a quadratic form "x"↦"f"("x") is always a (finite) linear combination of products "x"↦"g"("x") "h"("x") of two linear functionals "g" and "h". Therefore, assuming that the scalars are complex numbers, every sequence "xn" satisfying "g"("xn") → 0 for all linear functionals "g", satisfies also "f"("xn") → 0 for all quadratic forms "f". In infinite dimension the situation is different. For example, in a Hilbert space, an orthonormal sequence "xn" satisfies "g"("xn") → 0 for all linear functionals "g", and nevertheless "f"("xn") = 1 where "f" is the quadratic form "f"("x") = ||"x"||2. In more technical words, this quadratic form fails to be weakly sequentially continuous at the origin. On a reflexive Banach space with the approximation property the following two conditions are equivalent: Quadratic forms are 2-homogeneous polynomials. The equivalence mentioned above holds also for "n"-homogeneous polynomials, "n"=3,4... Examples. For the formula_1 spaces, the "P""n" is reflexive if and only if n &lt; p. Thus, no formula_1 is polynomially reflexive. (formula_2 is ruled out because it is not reflexive.) Thus if a Banach space admits formula_1 as a quotient space, it is not polynomially reflexive. This makes polynomially reflexive spaces rare. The Tsirelson space "T"* is polynomially reflexive.
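The Hilbert-space example above, an orthonormal sequence xₙ with g(xₙ) → 0 for every continuous linear functional g while the quadratic form f(x) = ||x||² stays at 1, can be simulated in a truncated ℓ² model. The sketch below (Python with NumPy, added for illustration) uses standard basis vectors as the orthonormal sequence and one fixed square-summable functional; it only illustrates the failure of weak sequential continuity and is not a proof.

```python
import numpy as np

N = 10000
g = 1.0 / np.arange(1, N + 1)        # a fixed functional on (truncated) l^2, given by an l^2 vector

for n in [1, 10, 100, 1000, 10000]:
    x_n = np.zeros(N)
    x_n[n - 1] = 1.0                 # n-th member of an orthonormal sequence
    print(f"n = {n:5d}:  g(x_n) = {g @ x_n:.5f}   f(x_n) = ||x_n||^2 = {x_n @ x_n:.1f}")
# g(x_n) -> 0 while f(x_n) = 1 for every n: the quadratic form f is not
# weakly sequentially continuous at the origin.
```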
[ { "math_id": 0, "text": "p(x)=M_n(x,\\dots,x)" }, { "math_id": 1, "text": "\\ell^p" }, { "math_id": 2, "text": "\\ell^\\infty" } ]
https://en.wikipedia.org/wiki?curid=629068
6290771
Whitehead's point-free geometry
Geometric theory based on regions In mathematics, point-free geometry is a geometry whose primitive ontological notion is "region" rather than point. Two axiomatic systems are set out below, one grounded in mereology, the other in mereotopology and known as "connection theory". Point-free geometry was first formulated by Alfred North Whitehead, not as a theory of geometry or of spacetime, but of "events" and of an "extension relation" between events. Whitehead's purposes were as much philosophical as scientific and mathematical. Formalizations. Whitehead did not set out his theories in a manner that would satisfy present-day canons of formality. The two formal first-order theories described in this entry were devised by others in order to clarify and refine Whitehead's theories. The domain of discourse for both theories consists of "regions." All unquantified variables in this entry should be taken as tacitly universally quantified; hence all axioms should be taken as universal closures. No axiom requires more than three quantified variables; hence a translation of first-order theories into relation algebra is possible. Each set of axioms has but four existential quantifiers. Inclusion-based point-free geometry (mereology). The fundamental primitive binary relation is "inclusion", denoted by the infix operator "≤", which corresponds to the binary "Parthood" relation that is a standard feature in mereological theories. The intuitive meaning of "x" ≤ "y" is ""x" is part of "y"." Assuming that equality, denoted by the infix operator "=", is part of the background logic, the binary relation "Proper Part", denoted by the infix operator "&lt;", is defined as: formula_0 The axioms are: G1. formula_1 (reflexive) G2. formula_2 (transitive) WP4. G3. formula_3 (antisymmetric) G4. formula_4 G5. formula_5 G6. formula_6 G7. formula_7 A model of G1–G7 is an "inclusion space". Definition. Given some inclusion space S, an abstractive class is a class "G" of regions such that "S\G" is totally ordered by inclusion. Moreover, there does not exist a region included in all of the regions included in "G". Intuitively, an abstractive class defines a geometrical entity whose dimensionality is less than that of the inclusion space. For example, if the inclusion space is the Euclidean plane, then the corresponding abstractive classes are points and lines. Inclusion-based point-free geometry (henceforth "point-free geometry") is essentially an axiomatization of Simons's system W. In turn, W formalizes a theory of Whitehead whose axioms are not made explicit. Point-free geometry is W with this defect repaired. Simons did not repair this defect, instead proposing in a footnote that the reader do so as an exercise. The primitive relation of W is Proper Part, a strict partial order. The theory of Whitehead (1919) has a single primitive binary relation "K" defined as "xKy" ↔ "y" &lt; "x". Hence "K" is the converse of Proper Part. Simons's WP1 asserts that Proper Part is irreflexive and so corresponds to G1. G3 establishes that inclusion, unlike Proper Part, is antisymmetric. Point-free geometry is closely related to a dense linear order D, whose axioms are G1-3, G5, and the totality axiom formula_8 Hence inclusion-based point-free geometry would be a proper extension of D (namely D ∪ {G4, G6, G7}), were it not that the D relation "≤" is a total order. Connection theory (mereotopology). A different approach was proposed in Whitehead (1929), one inspired by De Laguna (1922). 
Whitehead took as primitive the topological notion of "contact" between two regions, resulting in a primitive "connection relation" between events. Connection theory C is a first-order theory that distills the first 12 of Whitehead's 31 assumptions into 6 axioms, C1-C6. C is a proper fragment of the theories proposed by Clarke, who noted their mereological character. Theories that, like C, feature both inclusion and topological primitives, are called mereotopologies. C has one primitive relation, binary "connection," denoted by the prefixed predicate letter "C". That "x" is included in "y" can now be defined as "x" ≤ "y" ↔ ∀z["Czx"→"Czy"]. Unlike the case with inclusion spaces, connection theory enables defining "non-tangential" inclusion, a total order that enables the construction of abstractive classes. Gerla and Miranda (2008) argue that only thus can mereotopology unambiguously define a point. C1. formula_9 C2. formula_10 C3. formula_11 C4. formula_12 C5. formula_13 C6. formula_14 A model of C is a "connection space". Following the verbal description of each axiom is the identifier of the corresponding axiom in Casati and Varzi (1999). Their system SMT ("strong mereotopology") consists of C1-C3, and is essentially due to Clarke (1981). Any mereotopology can be made atomless by invoking C4, without risking paradox or triviality. Hence C extends the atomless variant of SMT by means of the axioms C5 and C6, suggested by chapter 2 of part 4 of "Process and Reality". Biacino and Gerla (1991) showed that every model of Clarke's theory is a Boolean algebra, and models of such algebras cannot distinguish connection from overlap. It is doubtful whether either fact is faithful to Whitehead's intent. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x<y \\leftrightarrow (x \\le y \\land x \\not = y)." }, { "math_id": 1, "text": "x \\le x." }, { "math_id": 2, "text": "(x \\le z \\land z \\le y) \\rightarrow x \\le y." }, { "math_id": 3, "text": "(x \\le y \\land y \\le x) \\rightarrow x = y." }, { "math_id": 4, "text": "\\exists z[x \\le z \\land y\\le z]." }, { "math_id": 5, "text": "x<y\\rightarrow\\exists z [x<z<y]." }, { "math_id": 6, "text": "\\exists y \\exists z[y<x \\land x<z]." }, { "math_id": 7, "text": "\\forall z[z<x \\rightarrow z<y] \\rightarrow x\\le y." }, { "math_id": 8, "text": "x \\le y \\lor y \\le x." }, { "math_id": 9, "text": " \\ Cxx." }, { "math_id": 10, "text": "Cxy\\rightarrow Cyx." }, { "math_id": 11, "text": "\\forall z[Czx \\leftrightarrow Czy] \\rightarrow x = y." }, { "math_id": 12, "text": "\\exists y[y<x]." }, { "math_id": 13, "text": "\\exists z[Czx\\land Czy]." }, { "math_id": 14, "text": "\\exists y \\exists z[(y\\le x)\\land (z\\le x)\\land\\neg Cyz]." } ]
https://en.wikipedia.org/wiki?curid=6290771
62909762
Alexandre Mikhailovich Vinogradov
Russian-Italian mathematician (1938–2019) Alexandre Mikhailovich Vinogradov (; 18 February 1938 – 20 September 2019) was a Russian and Italian mathematician. He made important contributions to the areas of differential calculus over commutative algebras, the algebraic theory of differential operators, homological algebra, differential geometry and algebraic topology, mechanics and mathematical physics, the geometrical theory of nonlinear partial differential equations and secondary calculus. Biography. A.M. Vinogradov was born on 18 February 1938 in Novorossiysk. His father, Mikhail Ivanovich Vinogradov, was a hydraulics scientist; his mother, Ilza Alexandrovna Firer, was a medical doctor. Among his more distant ancestors, his great-grandfather, Anton Smagin, was a self-taught peasant and a deputy of the State Duma of the second convocation. Between 1955 and 1960 Vinogradov studied at the Mechanics and Mathematics Department of Moscow State University (Mech-mat). He pursued a PhD at the same institution, defending his thesis in 1964, under the supervision of V.G. Boltyansky. After teaching for one year at the Moscow Mining Institute, in 1965 he received a position at the Department of Higher Geometry and Topology of Moscow State University. He obtained his habilitation degree (doktorskaya dissertatsiya) in 1984 at the Institute of Mathematics of the Siberian Branch of the USSR Academy of Science in Novosibirsk in Russia. In 1990 he left the Soviet Union for Italy, and from 1993 to 2010 was professor in geometry at the University of Salerno. Research. Vinogradov published his first works in number theory, together with B.N. Delaunay and D.B. Fuchs, when he was a second year undergraduate student. By the end of his undergraduate years he changed research interests and started working on algebraic topology. His PhD thesis was devoted to homotopic properties of the embedding spaces of circles into the 2-sphere or the 3-disk. He continued working in algebraic and differential topology – in particular, on the Adams spectral sequence – until the early seventies. Between the sixties and the seventies, inspired by the ideas of Sophus Lie, Vinogradov changed once more research interests and began to investigate the foundations of the geometric theory of partial differential equations. Having become familiar with the work of Spencer, Goldschmidt and Quillen on formal integrability, he turned his attention to the algebraic (in particular, cohomological) component of that theory. In 1972, he published a short note containing what he called the main functors of the differential calculus over commutative algebras. Vinogradov’s approach to nonlinear differential equations as geometric objects, with their general theory and applications, is developed in details in some monographs as well as in some articles. He recast infinitely prolonged differential equations into a category whose objects, called diffieties, are studied in the framework of what he called secondary calculus (by analogy with secondary quantization). One of the central parts of this theory is based on the formula_0-spectral sequence (now known as the Vinogradov spectral sequence). The first term of this spectral sequence gives a unified cohomological approach to various notions and statements, including the Lagrangian formalism with constraints, conservation laws, cosymmetries, the Noether theorem, and the Helmholtz criterion in the inverse problem of the calculus of variations (for arbitrary nonlinear differential operators). 
A particular case of the formula_0-spectral sequence (for an “empty” equation, i.e., for the space of infinite jets) is the so-called variational bicomplex. Furthermore, Vinogradov introduced a new bracket on the graded algebra of linear transformations of a cochain complex. The Vinogradov bracket is skew-symmetric and satisfies the Jacobi identity modulo a coboundary. Vinogradov’s construction is a precursor of the general concept of a derived bracket on a differential Leibniz algebra introduced by Kosmann-Schwarzbach in 1996. These results were also applied to Poisson geometry. Together with Peter Michor, Vinogradov was concerned with the analysis and comparison of various generalizations of Lie (super) algebras, including formula_1 algebras and Filippov algebras. He also developed a theory of compatibility of Lie algebra structures and proved that any finite-dimensional Lie algebra over an algebraically closed field or over formula_2 can be assembled in a few steps from two elementary constituents, which he called dyons and triadons. Furthermore, he speculated that these particle-like structures could be related to the ultimate structure of elementary particles. Vinogradov's research interests were also motivated by problems of contemporary physics – for example the structure of Hamiltonian mechanics, the dynamics of acoustic beams, the equations of magnetohydrodynamics (the so-called Kadomtsev-Pogutse equations appearing in the stability theory of high-temperature plasma in tokamaks) and mathematical questions in general relativity. Considerable attention to the mathematical understanding of the fundamental physical notion of observable is given in a book written by Vinogradov jointly with several participants of his seminar, under the pen name of Jet Nestruev. Contribution to the mathematical community. From 1967 until 1990, Vinogradov headed a research seminar at Mekhmat, which became a prominent feature in the mathematical life of Moscow. In 1978, he was one of the organisers and first lecturers in the so-called People's University for students who were not accepted to Mekhmat because they were ethnically Jewish (he ironically called this school the “People’s Friendship University”). In 1985, he created a laboratory that studied various aspects of the geometry of differential equations at the Institute of Programming Systems in Pereslavl-Zalessky and was its scientific supervisor until his departure for Italy. Vinogradov was one of the initial founders of the mathematical journal "Differential Geometry and its Applications", remaining one of the editors from 1991 to his last days. A special issue of the journal, devoted to the geometry of PDEs, was published in his memory. In 1993 he was one of the promoters of the Schrödinger International Institute for Mathematical Physics in Vienna. In 1997 he organised the large conference "Secondary Calculus and Cohomological Physics" in Moscow, which was followed by a series of small conferences called "Current Geometry" that took place in Italy from 2000 to 2010. From 1998 to 2019, Vinogradov organised and directed the so-called "Diffiety Schools" in Italy, Russia, and Poland, in which a wide range of courses were taught, in order to prepare students and young researchers to work on the theory of diffieties and secondary calculus. He supervised 19 PhD students.
[ { "math_id": 0, "text": "\\cal C" }, { "math_id": 1, "text": "L_\\infty" }, { "math_id": 2, "text": "\\mathbb{R}" } ]
https://en.wikipedia.org/wiki?curid=62909762
6292
Convex set
In geometry, set whose intersection with every line is a single line segment In geometry, a subset of a Euclidean space, or more generally an affine space over the reals, is convex if, given any two points in the subset, the subset contains the whole line segment that joins them. Equivalently, a convex set or a convex region is a subset that intersects every line in a single line segment (possibly empty). For example, a solid cube is a convex set, but anything that is hollow or has an indent, for example, a crescent shape, is not convex. The boundary of a convex set in the plane is always a convex curve. The intersection of all the convex sets that contain a given subset A of Euclidean space is called the convex hull of A. It is the smallest convex set containing A. A convex function is a real-valued function defined on an interval with the property that its epigraph (the set of points on or above the graph of the function) is a convex set. Convex minimization is a subfield of optimization that studies the problem of minimizing convex functions over convex sets. The branch of mathematics devoted to the study of properties of convex sets and convex functions is called convex analysis. The notion of a convex set can be generalized as described below. Definitions. Let S be a vector space or an affine space over the real numbers, or, more generally, over some ordered field (this includes Euclidean spaces, which are affine spaces). A subset C of S is convex if, for all x and y in C, the line segment connecting x and y is included in C. This means that the affine combination (1 − "t")"x" + "ty" belongs to C for all x,y in C and t in the interval [0, 1]. This implies that convexity is invariant under affine transformations. Further, it implies that a convex set in a real or complex topological vector space is path-connected (and therefore also connected). A set C is strictly convex if every point on the line segment connecting x and y other than the endpoints is inside the topological interior of C. A closed convex subset is strictly convex if and only if every one of its boundary points is an extreme point. A set C is absolutely convex if it is convex and balanced. Examples. The convex subsets of R (the set of real numbers) are the intervals and the points of R. Some examples of convex subsets of the Euclidean plane are solid regular polygons, solid triangles, and intersections of solid triangles. Some examples of convex subsets of a Euclidean 3-dimensional space are the Archimedean solids and the Platonic solids. The Kepler-Poinsot polyhedra are examples of non-convex sets. Non-convex set. A set that is not convex is called a "non-convex set". A polygon that is not a convex polygon is sometimes called a concave polygon, and some sources more generally use the term "concave set" to mean a non-convex set, but most authorities prohibit this usage. The complement of a convex set, such as the epigraph of a concave function, is sometimes called a "reverse convex set", especially in the context of mathematical optimization. Properties. Given r points "u"1, ..., "ur" in a convex set S, and r nonnegative numbers "λ"1, ..., "λr" such that "λ"1 + ... + "λr" = 1, the affine combination formula_0 belongs to S. As the definition of a convex set is the case "r" = 2, this property characterizes convex sets. Such an affine combination is called a convex combination of "u"1, ..., "ur". Intersections and unions. 
The collection of convex subsets of a vector space, an affine space, or a Euclidean space has the following properties: the empty set and the whole space are convex; the intersection of any collection of convex sets is convex; and the union of a non-decreasing sequence of convex sets (a chain under inclusion) is convex, although the union of two convex sets need not be. Closed convex sets. Closed convex sets are convex sets that contain all their limit points. They can be characterised as the intersections of "closed half-spaces" (sets of points in space that lie on and to one side of a hyperplane). From what has just been said, it is clear that such intersections are convex, and they will also be closed sets. To prove the converse, i.e., every closed convex set may be represented as such intersection, one needs the supporting hyperplane theorem in the form that for a given closed convex set C and point P outside it, there is a closed half-space H that contains C and not P. The supporting hyperplane theorem is a special case of the Hahn–Banach theorem of functional analysis. Convex sets and rectangles. Let C be a convex body in the plane (a convex set whose interior is non-empty). We can inscribe a rectangle "r" in C such that a homothetic copy "R" of "r" is circumscribed about C. The positive homothety ratio is at most 2 and: formula_1 Blaschke-Santaló diagrams. The set formula_2 of all planar convex bodies can be parameterized in terms of the convex body diameter "D", its inradius "r" (the biggest circle contained in the convex body) and its circumradius "R" (the smallest circle containing the convex body). In fact, this set can be described by the set of inequalities given by formula_3 formula_4 formula_5 formula_6 and can be visualized as the image of the function "g" that maps a convex body to the R2 point given by ("r"/"R", "D"/2"R"). The image of this function is known as an ("r", "D", "R") Blaschke-Santaló diagram. Alternatively, the set formula_2 can also be parametrized by its width (the smallest distance between any two different parallel support hyperplanes), perimeter and area. Other properties. Let "X" be a topological vector space and formula_7 be convex. Then the closure formula_8 and the interior formula_9 of C are both convex. If formula_10 and formula_11, then formula_12, where formula_13. If formula_14, then formula_15 and formula_16, where formula_17 is the algebraic interior of C. Convex hulls and Minkowski sums. Convex hulls. Every subset A of the vector space is contained within a smallest convex set (called the convex hull of A), namely the intersection of all convex sets containing A. The convex-hull operator Conv() has the characteristic properties of a hull operator: it is extensive ("S" ⊆ Conv("S")), non-decreasing ("S" ⊆ "T" implies Conv("S") ⊆ Conv("T")), and idempotent (Conv(Conv("S")) = Conv("S")). The convex-hull operation is needed for the set of convex sets to form a lattice, in which the "join" operation is the convex hull of the union of two convex sets formula_18 The intersection of any collection of convex sets is itself convex, so the convex subsets of a (real or complex) vector space form a complete lattice. Minkowski addition. In a real vector-space, the "Minkowski sum" of two (non-empty) sets, "S"1 and "S"2, is defined to be the set "S"1 + "S"2 formed by the addition of vectors element-wise from the summand-sets formula_19 More generally, the "Minkowski sum" of a finite family of (non-empty) sets "Sn" is the set formed by element-wise addition of vectors formula_20 For Minkowski addition, the "zero set" {0} containing only the zero vector 0 has special importance: For every non-empty subset S of a vector space, formula_21 in algebraic terminology, {0} is the identity element of Minkowski addition (on the collection of non-empty sets). Convex hulls of Minkowski sums. 
Minkowski addition behaves well with respect to the operation of taking convex hulls, as shown by the following proposition: Let "S"1, "S"2 be subsets of a real vector-space, the convex hull of their Minkowski sum is the Minkowski sum of their convex hulls formula_22 This result holds more generally for each finite collection of non-empty sets: formula_23 In mathematical terminology, the operations of Minkowski summation and of forming convex hulls are commuting operations. Minkowski sums of convex sets. The Minkowski sum of two compact convex sets is compact. The sum of a compact convex set and a closed convex set is closed. The following famous theorem, proved by Dieudonné in 1966, gives a sufficient condition for the difference of two closed convex subsets to be closed. It uses the concept of a recession cone of a non-empty convex subset "S", defined as: formula_24 where this set is a convex cone containing formula_25 and satisfying formula_26. Note that if "S" is closed and convex then formula_27 is closed and for all formula_28, formula_29 Theorem (Dieudonné). Let "A" and "B" be non-empty, closed, and convex subsets of a locally convex topological vector space such that formula_30 is a linear subspace. If "A" or "B" is locally compact then "A" − "B" is closed. Generalizations and extensions for convexity. The notion of convexity in the Euclidean space may be generalized by modifying the definition in some or other aspects. The common name "generalized convexity" is used, because the resulting objects retain certain properties of convex sets. Star-convex (star-shaped) sets. Let C be a set in a real or complex vector space. C is star convex (star-shaped) if there exists an "x"0 in C such that the line segment from "x"0 to any point y in C is contained in C. Hence a non-empty convex set is always star-convex but a star-convex set is not always convex. Orthogonal convexity. An example of generalized convexity is orthogonal convexity. A set S in the Euclidean space is called orthogonally convex or ortho-convex, if any segment parallel to any of the coordinate axes connecting two points of S lies totally within S. It is easy to prove that an intersection of any collection of orthoconvex sets is orthoconvex. Some other properties of convex sets are valid as well. Non-Euclidean geometry. The definition of a convex set and a convex hull extends naturally to geometries which are not Euclidean by defining a geodesically convex set to be one that contains the geodesics joining any two points in the set. Order topology. Convexity can be extended for a totally ordered set X endowed with the order topology. Let "Y" ⊆ "X". The subspace Y is a convex set if for each pair of points "a", "b" in Y such that "a" ≤ "b", the interval ["a", "b"] {"x" ∈ "X" | "a" ≤ "x" ≤ "b"} is contained in Y. That is, Y is convex if and only if for all "a", "b" in Y, "a" ≤ "b" implies ["a", "b"] ⊆ "Y". A convex set is not connected in general: a counter-example is given by the subspace {1,2,3} in Z, which is both convex and not connected. Convexity spaces. The notion of convexity may be generalised to other objects, if certain properties of convexity are selected as axioms. Given a set X, a convexity over X is a collection "𝒞" of subsets of X satisfying the following axioms: The elements of "𝒞" are called convex sets and the pair ("X", "𝒞") is called a convexity space. For the ordinary convexity, the first two axioms hold, and the third one is trivial. 
For an alternative definition of abstract convexity, more suited to discrete geometry, see the "convex geometries" associated with antimatroids. Convex spaces. Convexity can be generalised as an abstract algebraic structure: a space is convex if it is possible to take convex combinations of points. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
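As a concrete check of the convex-combination property stated in the Properties section above, the following Python sketch draws random points from the closed unit disk (a convex set), forms random convex combinations of them, and verifies that the results stay in the disk. The disk, the number of points and the random weights are illustrative choices only.
```python
import numpy as np

rng = np.random.default_rng(0)

def random_points_in_disk(r):
    """Return r points drawn uniformly from the closed unit disk (a convex set)."""
    pts = []
    while len(pts) < r:
        p = rng.uniform(-1, 1, size=2)
        if p @ p <= 1.0:
            pts.append(p)
    return np.array(pts)

for _ in range(1000):
    u = random_points_in_disk(r=5)
    lam = rng.random(5)
    lam /= lam.sum()                 # nonnegative weights summing to 1
    x = lam @ u                      # the convex combination sum_k lam_k u_k
    assert x @ x <= 1.0 + 1e-12      # it lies in the disk again
print("all convex combinations stayed inside the unit disk")
```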
[ { "math_id": 0, "text": "\\sum_{k=1}^r\\lambda_k u_k" }, { "math_id": 1, "text": "\\tfrac{1}{2} \\cdot\\operatorname{Area}(R) \\leq \\operatorname{Area}(C) \\leq 2\\cdot \\operatorname{Area}(r)" }, { "math_id": 2, "text": "\\mathcal{K}^2" }, { "math_id": 3, "text": "2r \\le D \\le 2R" }, { "math_id": 4, "text": "R \\le \\frac{\\sqrt{3}}{3} D" }, { "math_id": 5, "text": "r + R \\le D" }, { "math_id": 6, "text": "D^2 \\sqrt{4R^2-D^2} \\le 2R (2R + \\sqrt{4R^2 -D^2})" }, { "math_id": 7, "text": "C \\subseteq X" }, { "math_id": 8, "text": "\\operatorname{Cl} C" }, { "math_id": 9, "text": "\\operatorname{Int} C" }, { "math_id": 10, "text": "a \\in \\operatorname{Int} C" }, { "math_id": 11, "text": "b \\in \\operatorname{Cl} C" }, { "math_id": 12, "text": "[a, b[ \\, \\subseteq \\operatorname{Int} C" }, { "math_id": 13, "text": "[a, b[ \\, := \\left\\{ (1 - r) a + r b : 0 \\leq r < 1 \\right\\}" }, { "math_id": 14, "text": "\\operatorname{Int} C \\neq \\emptyset" }, { "math_id": 15, "text": "\\operatorname{cl} \\left( \\operatorname{Int} C \\right) = \\operatorname{Cl} C" }, { "math_id": 16, "text": "\\operatorname{Int} C = \\operatorname{Int} \\left( \\operatorname{Cl} C \\right) = C^i" }, { "math_id": 17, "text": "C^{i}" }, { "math_id": 18, "text": "\\operatorname{Conv}(S)\\vee\\operatorname{Conv}(T) = \\operatorname{Conv}(S\\cup T) = \\operatorname{Conv}\\bigl(\\operatorname{Conv}(S)\\cup\\operatorname{Conv}(T)\\bigr)." }, { "math_id": 19, "text": "S_1+S_2=\\{x_1+x_2: x_1\\in S_1, x_2\\in S_2\\}." }, { "math_id": 20, "text": " \\sum_n S_n = \\left \\{ \\sum_n x_n : x_n \\in S_n \\right \\}." }, { "math_id": 21, "text": "S+\\{0\\}=S;" }, { "math_id": 22, "text": "\\operatorname{Conv}(S_1+S_2)=\\operatorname{Conv}(S_1)+\\operatorname{Conv}(S_2)." }, { "math_id": 23, "text": "\\text{Conv}\\left ( \\sum_n S_n \\right ) = \\sum_n \\text{Conv} \\left (S_n \\right)." }, { "math_id": 24, "text": "\\operatorname{rec} S = \\left\\{ x \\in X \\, : \\, x + S \\subseteq S \\right\\}," }, { "math_id": 25, "text": "0 \\in X " }, { "math_id": 26, "text": "S + \\operatorname{rec} S = S" }, { "math_id": 27, "text": "\\operatorname{rec} S" }, { "math_id": 28, "text": "s_0 \\in S" }, { "math_id": 29, "text": "\\operatorname{rec} S = \\bigcap_{t > 0} t (S - s_0)." }, { "math_id": 30, "text": "\\operatorname{rec} A \\cap \\operatorname{rec} B" } ]
https://en.wikipedia.org/wiki?curid=6292
62921631
Representation theory of sl 2
[ { "math_id": 0, "text": "\\mathfrak{sl}_2(\\mathbb C)" } ]
https://en.wikipedia.org/wiki?curid=62921631
62925989
Parallel task scheduling
Optimization problem in computer science Parallel task scheduling (also called parallel job scheduling or parallel processing scheduling) is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. In a general job scheduling problem, we are given "n" jobs "J"1, "J"2, ..., "Jn" of varying processing times, which need to be scheduled on "m" machines while trying to minimize the makespan - the total length of the schedule (that is, when all the jobs have finished processing). In the specific variant known as "parallel-task scheduling", all machines are identical. Each job "j" has a "length" parameter "pj" and a "size" parameter "q"j, and it must run for exactly "pj" time-steps on exactly "q"j machines in "parallel". Veltman et al. and Drozdowski denote this problem by formula_0 in the three-field notation introduced by Graham et al. P means that there are several identical machines running in parallel; "sizej" means that each job has a size parameter; "C"max means that the goal is to minimize the maximum completion time. Some authors use formula_1 instead. Note that the problem of parallel-machines scheduling is a special case of parallel-task scheduling where formula_2 for all "j", that is, each job should run on a single machine. The origins of this problem formulation can be traced back to 1960. For this problem, there exists no polynomial time approximation algorithm with a ratio smaller than formula_3 unless formula_4. Definition. There is a set formula_5 of formula_6 jobs, and formula_7 identical machines. Each job formula_8 has a processing time formula_9 (also called the "length" of "j"), and requires the simultaneous use of formula_10 machines during its execution (also called the "size" or the "width" of j). A schedule assigns each job formula_8 to a starting time formula_11 and a set formula_12 of formula_13 machines to be processed on. A schedule is feasible if each processor executes at most one job at any given time. The objective of the problem denoted by formula_0 is to find a schedule with minimum length formula_14, also called the makespan of the schedule. A sufficient condition for the feasibility of a schedule is the following formula_15. If this property is satisfied for all starting times, a feasible schedule can be generated by assigning free machines to the jobs at each time starting with time formula_16. Furthermore, the number of machine intervals used by jobs and idle intervals at each time step can be bounded by formula_17. Here a machine interval is a set of consecutive machines of maximal cardinality such that all machines in this set are processing the same job. A machine interval is completely specified by the index of its first and last machine. Therefore, it is possible to obtain a compact way of encoding the output with polynomial size. Computational hardness. This problem is NP-hard even when there are only two machines and the sizes of all jobs are formula_18 (i.e., each job needs to run only on a single machine). This special case, denoted by formula_19, is a variant of the partition problem, which is known to be NP-hard. When the number of machines "m" is at most 3, that is: for the variants formula_20 and formula_21, there exists a pseudo-polynomial time algorithm, which solves the problem exactly. 
In contrast, when the number of machines is at least 4, that is: for the variants formula_22 for any formula_23, the problem is also strongly NP-hard (this result improved a previous result showing strong NP-hardness for formula_24). If the number of machines is not bounded by a constant, then there can be no approximation algorithm with an approximation ratio smaller than formula_3 unless formula_25. This holds even for the special case in which the processing time of all jobs is formula_26, since this special case is equivalent to the bin packing problem: each time-step corresponds to a bin, "m" is the bin size, each job corresponds to an item of size "qj", and minimizing the makespan corresponds to minimizing the number of bins. Variants. Several variants of this problem have been studied. The following variants also have been considered in combination with each other. Contiguous jobs: In this variant, the machines have a fixed order formula_27. Instead of assigning the jobs to any subset formula_28, the jobs have to be assigned to a "contiguous interval" of machines. This problem corresponds to the problem formulation of the strip packing problem. Multiple platforms: In this variant, the set of machines is partitioned into independent platforms. A scheduled job can only use the machines of one platform and is not allowed to span over multiple platforms when processed. Moldable jobs: In this variant each job formula_8 has a set of feasible machine-counts formula_29. For each count formula_30, the job can be processed on "d" machines in parallel, and in this case, its processing time will be formula_31. To schedule a job formula_8, an algorithm has to choose a machine count formula_30 and assign "j" to a starting time formula_32 and to formula_33 machines during the time interval formula_34 A usual assumption for this kind of problem is that the total workload of a job, which is defined as formula_35, is non-increasing for an increasing number of machines. Release dates: In this variant, denoted by formula_36, not all jobs are available at time 0; each job "j" becomes available at a fixed and known time "rj". It must be scheduled after that time. Preemption: In this variant, denoted by formula_37, it is possible to interrupt jobs that are already running, and schedule other jobs that become available at that time. Algorithms. The list scheduling algorithm by Garey and Graham has an absolute ratio formula_38, as pointed out by Turek et al. and Ludwig and Tiwari. Feldmann, Sgall and Teng observed that the length of a non-preemptive schedule produced by the list scheduling algorithm is actually at most formula_39 times the optimum preemptive makespan. A polynomial-time approximation scheme (PTAS) for the case when the number formula_7 of processors is constant, denoted by formula_22, was presented by Amoura et al. and Jansen et al. Later, Jansen and Thöle found a PTAS for the case where the number of processors is polynomially bounded in the number of jobs. In this algorithm, the number of machines appears polynomially in the time complexity of the algorithm. Since, in general, the number of machines appears only in logarithmic in the size of the instance, this algorithm is a pseudo-polynomial time approximation scheme as well. A formula_40-approximation was given by Jansen, which closes the gap to the lower bound of formula_3 except for an arbitrarily small formula_41. Differences between contiguous and non-contiguous jobs. 
Given an instance of the parallel task scheduling problem, the optimal makespan can differ depending on whether the jobs are required to run on contiguous machines. If the jobs can be scheduled on non-contiguous machines, the optimal makespan can be smaller than in the case that they have to be scheduled on contiguous ones. The difference between contiguous and non-contiguous schedules was first demonstrated in 1992 on an instance with formula_42 tasks, formula_43 processors, formula_44, and formula_45. Błądek et al. studied these so-called c/nc-differences and proved a number of properties of them. Furthermore, they proposed two conjectures, which remain unproven. Related problems. There are related scheduling problems in which each job consists of several operations, which must be executed "in sequence" (rather than in parallel). These are the problems of open shop scheduling, flow shop scheduling and job shop scheduling.
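The list scheduling algorithm mentioned in the Algorithms section can be sketched in a few lines. The version below is an illustrative greedy variant (start, in list order, every pending job that fits on the currently idle machines, then jump to the next completion time); it is not claimed to reproduce the exact procedure analysed by Garey and Graham, and the job data in the usage line are made up.
```python
import heapq

def list_schedule(jobs, m):
    """Greedy list scheduling for rigid parallel tasks.

    jobs: list of (p_j, q_j) pairs, i.e. processing time and the number of
    machines required simultaneously (q_j <= m is assumed).
    Returns (makespan, starts), where starts[j] is the start time of job j.
    """
    free = m                              # machines idle at the current time
    t = 0
    running = []                          # min-heap of (finish_time, q_j)
    starts = [None] * len(jobs)
    pending = list(range(len(jobs)))      # job indices not yet started, in list order

    while pending or running:
        still_pending = []
        for j in pending:                 # start every pending job that fits now
            p, q = jobs[j]
            if q <= free:
                starts[j] = t
                free -= q
                heapq.heappush(running, (t + p, q))
            else:
                still_pending.append(j)
        pending = still_pending
        if running:                       # advance to the next completion time
            t, q = heapq.heappop(running)
            free += q
            while running and running[0][0] == t:
                free += heapq.heappop(running)[1]

    makespan = max(starts[j] + jobs[j][0] for j in range(len(jobs)))
    return makespan, starts

# e.g. four jobs (p_j, q_j) on m = 3 machines
print(list_schedule([(3, 2), (2, 1), (2, 3), (1, 1)], m=3))   # makespan 5
```
On this toy instance the greedy schedule reaches the trivial lower bound of total work divided by m (15/3 = 5), so it happens to be optimal; in general, list scheduling only guarantees a factor-2 approximation, as stated above.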
[ { "math_id": 0, "text": " P|size_j|C_{\\max} " }, { "math_id": 1, "text": " P|m_j|C_{\\max} " }, { "math_id": 2, "text": " size_j=1 " }, { "math_id": 3, "text": "3/2" }, { "math_id": 4, "text": "P=NP" }, { "math_id": 5, "text": "\\mathcal{J}" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "m" }, { "math_id": 8, "text": "j \\in \\mathcal{J}" }, { "math_id": 9, "text": "p_j \\in \\mathbb{N}" }, { "math_id": 10, "text": "q_j\\in \\mathbb{N}" }, { "math_id": 11, "text": "s_j \\in \\mathbb{N}_0" }, { "math_id": 12, "text": "m_j \\subseteq \\{1,\\dots,m\\}" }, { "math_id": 13, "text": "|m_j| = q_j" }, { "math_id": 14, "text": "C_{\\max}= \\max_{j \\in \\mathcal{J}}(s_j+p_j)" }, { "math_id": 15, "text": "\\sum_{j \\in \\mathcal{J}, s_j \\leq t < s_j+p_j}q_j \\leq m \\, \\forall t \\in \\{s_1, \\dots, s_n\\}" }, { "math_id": 16, "text": "t = 0" }, { "math_id": 17, "text": "|\\mathcal{J}| + 1" }, { "math_id": 18, "text": "q_j=1" }, { "math_id": 19, "text": "P2||C_{\\max}" }, { "math_id": 20, "text": "P2|size_j|C_{\\max}" }, { "math_id": 21, "text": "P3|size_j|C_{\\max}" }, { "math_id": 22, "text": "Pm|size_j|C_{\\max}" }, { "math_id": 23, "text": "m \\geq 4" }, { "math_id": 24, "text": "m \\geq 5" }, { "math_id": 25, "text": "P = NP" }, { "math_id": 26, "text": "p_j=1" }, { "math_id": 27, "text": "(M_1, \\dots, M_m)" }, { "math_id": 28, "text": "m_j \\subseteq \\{M_1,\\dots,M_m\\}" }, { "math_id": 29, "text": "D_j \\subseteq \\{1, \\dots m\\}" }, { "math_id": 30, "text": "d \\in D_j" }, { "math_id": 31, "text": "p_{j,d}" }, { "math_id": 32, "text": "s_j" }, { "math_id": 33, "text": "d" }, { "math_id": 34, "text": "[s_j,s_j+p_{j,d})." }, { "math_id": 35, "text": "d \\cdot p_{j,d}" }, { "math_id": 36, "text": " P|size_j,r_j|C_{\\max} " }, { "math_id": 37, "text": " P|size_j,r_j,\\text{pmtn}|C_{\\max} " }, { "math_id": 38, "text": "2" }, { "math_id": 39, "text": "(2-1/m)" }, { "math_id": 40, "text": "(3/2+\\varepsilon)" }, { "math_id": 41, "text": "\\varepsilon" }, { "math_id": 42, "text": "n=8" }, { "math_id": 43, "text": "m=23" }, { "math_id": 44, "text": "C^{nc}_{\\max}=17" }, { "math_id": 45, "text": "C^c_{\\max}=18" }, { "math_id": 46, "text": "q_j > 1." }, { "math_id": 47, "text": "p_j>1." }, { "math_id": 48, "text": "m=4" }, { "math_id": 49, "text": "C^{nc}_{\\max}=4." }, { "math_id": 50, "text": "\\sup_{I}C^{c}_{\\max}(I)/C^{nc}_{\\max}(I)" }, { "math_id": 51, "text": "5/4" }, { "math_id": 52, "text": "2." }, { "math_id": 53, "text": "n=7" }, { "math_id": 54, "text": "\\sup_{I}C^{c}_{\\max}(I)/C^{nc}_{\\max}(I) = 5/4 " } ]
https://en.wikipedia.org/wiki?curid=62925989
6294249
Type-I superconductor
Type of superconductor with a single critical magnetic field The interior of a bulk superconductor cannot be penetrated by a weak magnetic field, a phenomenon known as the Meissner effect. When the applied magnetic field becomes too large, superconductivity breaks down. Superconductors can be divided into two types according to how this breakdown occurs. In type-I superconductors, superconductivity is abruptly destroyed via a first order phase transition when the strength of the applied field rises above a critical value "H"c. This type of superconductivity is normally exhibited by pure metals, e.g. aluminium, lead, and mercury. The only alloy known up to now which exhibits type I superconductivity is tantalum silicide (TaSi2). The covalent superconductor SiC:B, silicon carbide heavily doped with boron, is also type-I. Depending on the demagnetization factor, one may obtain an intermediate state. This state, first described by Lev Landau, is a phase separation into macroscopic non-superconducting and superconducting domains forming a Husimi Q representation. This behavior is different from type-II superconductors which exhibit two critical magnetic fields. The first, lower critical field occurs when magnetic flux vortices penetrate the material but the material remains superconducting outside of these microscopic vortices. When the vortex density becomes too large, the entire material becomes non-superconducting; this corresponds to the second, higher critical field. The ratio of the London penetration depth "λ" to the superconducting coherence length "ξ" determines whether a superconductor is type-I or type-II. Type-I superconductors are those with formula_0, and type-II superconductors are those with formula_1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
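The classification criterion above amounts to comparing the ratio λ/ξ (the Ginzburg-Landau parameter) with 1/√2, which is easy to express in code. In the sketch below the numerical values for aluminium and niobium are commonly quoted textbook estimates, not figures taken from this article.
```python
from math import sqrt

def superconductor_type(lambda_nm, xi_nm):
    """Classify by the ratio kappa = lambda / xi against 1/sqrt(2)."""
    kappa = lambda_nm / xi_nm
    return "type-I" if kappa < 1 / sqrt(2) else "type-II"

# commonly quoted textbook estimates (illustrative, not from this article):
print(superconductor_type(16, 1600))   # aluminium -> type-I
print(superconductor_type(39, 38))     # niobium   -> type-II
```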
[ { "math_id": 0, "text": "0 < \\tfrac{\\lambda}{\\xi} < \\tfrac{1}{\\sqrt{2}}" }, { "math_id": 1, "text": "\\tfrac{\\lambda}{\\xi} > \\tfrac{1}{\\sqrt{2}}" } ]
https://en.wikipedia.org/wiki?curid=6294249
6294571
Rank-dependent expected utility
Generalized expected utility model of choice under uncertainty The rank-dependent expected utility model (originally called anticipated utility) is a generalized expected utility model of choice under uncertainty, designed to explain the behaviour observed in the Allais paradox, as well as the observation that many people both purchase lottery tickets (implying risk-loving preferences) and insure against losses (implying risk aversion). A natural explanation of these observations is that individuals overweight low-probability events such as winning the lottery, or suffering a disastrous insurable loss. In the Allais paradox, individuals appear to forgo the chance of a very large gain to avoid a one per cent chance of missing out on an otherwise certain large gain, but are less risk averse when offered the chance of reducing an 11 per cent chance of loss to 10 per cent. A number of attempts were made to model preferences incorporating probability weighting, most notably the original version of prospect theory, presented by Daniel Kahneman and Amos Tversky (1979). However, all such models involved violations of first-order stochastic dominance. In prospect theory, violations of dominance were avoided by the introduction of an 'editing' operation, but this gave rise to violations of transitivity. The crucial idea of rank-dependent expected utility was to overweight only unlikely extreme outcomes, rather than all unlikely events. Formalising this insight required transformations to be applied to the cumulative probability distribution function, rather than to individual probabilities (Quiggin, 1982, 1993). The central idea of rank-dependent weightings was then incorporated by Daniel Kahneman and Amos Tversky into prospect theory, and the resulting model was referred to as cumulative prospect theory (Tversky &amp; Kahneman, 1992). Formal representation. As the name implies, the rank-dependent model is applied to the increasing rearrangement formula_0 of formula_1 which satisfies formula_2. formula_3 where formula_4 and formula_5 is a probability weight such that formula_6 and formula_7 for a transformation function formula_8 with formula_9, formula_10. Note that formula_11 so that the decision weights sum to 1.
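The formal representation above can be evaluated directly for a finite lottery. The Python sketch below follows the formulas as written: sort the outcomes into increasing order, form the decision weights as increments of the transformation function q applied to the cumulative probabilities, and sum the weighted utilities. The particular utility function and weighting function at the end are illustrative assumptions, not part of the model.
```python
import numpy as np

def rdeu(outcomes, probs, u, q):
    """Rank-dependent expected utility W(y) = sum_s h_[s] * u(y_[s])."""
    order = np.argsort(outcomes)                 # increasing rearrangement
    y = np.asarray(outcomes, dtype=float)[order]
    p = np.asarray(probs, dtype=float)[order]
    cum = np.cumsum(p)                           # cumulative probabilities sum_{t<=s} pi_[t]
    h = q(cum) - q(np.concatenate(([0.0], cum[:-1])))   # decision weights
    return float(np.sum(h * u(y)))

u = lambda y: np.sqrt(y)          # an illustrative concave utility (assumption)
q = lambda c: c ** 2              # an illustrative weighting with q(0) = 0, q(1) = 1 (assumption)

# a 1% chance of 0 and a 99% chance of 100, compared with the utility of 100 for sure
print(rdeu([0.0, 100.0], [0.01, 0.99], u, q), u(100.0))
```
In the usage line the decision weights are q(0.01) = 0.0001 and 1 − q(0.01) = 0.9999, and by construction they sum to 1, as noted above.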
[ { "math_id": 0, "text": "\\mathbf{y}_{[ \\; ]}" }, { "math_id": 1, "text": "\\mathbf{y}" }, { "math_id": 2, "text": "y_{[1]}\\leq y_{[2]}\\leq ...\\leq\ny_{[S]}" }, { "math_id": 3, "text": "W(\\mathbf{y})=\\sum_{s\\in \\Omega }h_{[s]}(\\mathbf{\\pi })u(y_{[s]}) " }, { "math_id": 4, "text": "\\mathbf{\\pi }\\in \\Pi ,u:\\mathbb{R} \\rightarrow \\mathbb{R} ," }, { "math_id": 5, "text": "h_{[s]}(\n\\mathbf{\\pi })" }, { "math_id": 6, "text": "h_{[s]}(\\mathbf{\\pi })=q\\left( \\sum\\limits_{t=1}^{s}\\pi _{[t]}\\right)\n-q\\left( \\sum\\limits_{t=1}^{s-1}\\pi _{[t]}\\right) " }, { "math_id": 7, "text": "h_{[S]}(\\mathbf{\\pi })=q\\left( \\pi _{[S]}\\right)" }, { "math_id": 8, "text": "q:[0,1]\\rightarrow [0,1]" }, { "math_id": 9, "text": "q(0)=0" }, { "math_id": 10, "text": "\nq(1)=1 " }, { "math_id": 11, "text": "\\sum_{s\\in \\Omega }h_{[s]}(\\mathbf{\\pi })=q\\left( \\sum\\limits_{t=1}^{S}\\pi\n_{[t]}\\right) =q(1)=1 " } ]
https://en.wikipedia.org/wiki?curid=6294571
62946879
Transmitarray antenna
A transmitarray antenna (or just transmitarray or called as layered lens antenna) is a phase-shifting surface (PSS), a structure capable of focusing electromagnetic radiation from a source antenna to produce a high-gain beam. Transmitarrays consist of an array of unit cells placed above a source (feeding) antenna. Phase shifts are applied to the unit cells, between elements on the receive and transmit surfaces, to focus the incident wavefronts from the feeding antenna. These thin surfaces can be used instead of a dielectric lens. Unlike phased arrays, transmitarrays do not require a feed network, so losses can be greatly reduced. Similarly, they have an advantage over reflectarrays in that feed blockage is avoided. It is worth clarifying that transmitarrays can be used in both transmit and receive modes: the waves are transmitted through the structure in either direction. An important parameter in transmitarray design is the formula_0 ratio, which determines the aperture efficiency. formula_1 is the focal length and formula_2 is the diameter of the transmitarray. The projected area of the feeding antenna determines the illumination efficiency of a transmitarray panel. Provided that the insertion loss of each unit cell is minimised, an aperture area appropriate to the feed radiation pattern can efficiently focus the wavefronts from the feed. Overview of techniques. Transmitarrays can be split into two types: fixed and reconfigurable. As described earlier, a transmitarray is a phase-shifting surface consisting of an array of unit cells. These focus the wavefronts from a feeding antenna into a narrower beamwidth. By applying a progressive phase shift across the aperture of the transmitarray, the beam can be focused and steered towards a direction away from boresight (0° angles). Fixed transmitarrays. First, consider fixed transmitarrays. At each location on the surface of the structure, the unit cells are physically scaled or rotated in order to obtain the required amplitude and phase distribution. Thus, only one focusing direction is available. The aim is to approximate the ideal phase distribution, such as formula_3 for a feed located at formula_4, which can be achieved by discretising the surface of the transmitarray into several Fresnel zones. High aperture efficiency (55%) can be achieved at oblique angles of incidence using precision-machined double split ring slot unit cells. A switched-beam transmitarray covering the 57 – 66 GHz band has been reported. Three different types of unit cells were used, based on patches and coupling slots. Similarly, a 60 GHz design used unit cells with a 2-bit phase resolution and selected an optimal formula_0 ratio to widen the bandwidth. When formula_0 = 0.5, a scan loss of 2.2 dB was achieved at a 30° steering angle. Different types of unit cells have been used within the same transmitarray. In, slot elements were placed near the centre of the transmitarray, as their polarisation performance is better at normal incidence, whereas double square ring slot elements were used at the edges, as they perform better at oblique incidence angles. This enabled the subtended (flare) angle of the feed horn to be increased, and hence the length of the horn, and the overall antenna size, to be reduced. Unit cells were not required at the centre of the transmitarray, where the phase shift was 0°. This reduced the insertion loss to around 1 dB at 105 GHz, as the majority of the beam amplitude was in the central region. 
In a different design, substrate integrated waveguide (SIW) aperture coupling was employed to reduce insertion losses and widen the bandwidth of a transmitarray operating at 140 GHz. Due to the large number of vias required, this performance improvement was at the expense of a more complex and costly fabrication. It has been shown that transmitarray implementation can be divided into two approaches: layered-scatterer and guided-wave. The first approach uses multiple coupled layers to achieve a phase shift, but has poor sidelobe level (SLL) performance when steering due to higher-order Floquet modes. The second approach enables wider steering, at the expense of increased hardware cost and complexity. Reconfiguration methods. In a reconfigurable transmitarray, the focusing direction is determined by electronically controlling the phase shift through each unit cell. This enables the beam to be steered towards the user. Electronic reconfiguration can be achieved by several possible methods. PIN diodes can be used to enable fast phase reconfiguration with an insertion loss below 1 dB. However, a large number of components is typically required, which increases the cost. A reconfigurable transmitarray, operating at 29 GHz with circular polarisation, has been demonstrated as a beamformer. A boresight gain of 20.8 dBi was achieved, and the scan loss was 2.5 dB at 40°. Another implementation example is an active Fresnel reflectarray with control circuitry for the PIN diodes. Although the unit cells were optimised, the scan loss was 3.4 dB at 30°. Reconfigurable near-field focusing can be implemented using slots containing PIN diodes. By adjusting the phase compared to a reference wave, holographic principles enabled the use of a compact, planar feeding structure and suppression of undesired lobes. This approach was later extended to an implementation of a Mills cross based on PIN diodes, in which an aperture was synthesised for imaging applications. Radial stubs were used to isolate the bias lines from the RF. By switching combinations of meta-elements on or off, the scan loss was 0 dB for steering angles of ±30°, but the total efficiency was only 35%. In 2019, a transmitarray was fed by a planar phased array operating at 10 GHz, in order to achieve a high beam crossover gain level whilst maintaining an aperture efficiency of 57.5%. The scan loss was 3.13 dB at ±30°. A lens-enhanced phased array antenna, similar to a transmitarray, has also been demonstrated. By combining the beam steering capabilities of phased arrays and the focusing properties of transmitarrays, this hybrid antenna has a smaller form factor, and steers to ±45° in both planes with a 3.2 dB increase in directivity at this angle. Its reconfigurable phase-shifting surface (PSS) contained micro-electro-mechanical (MEMS) switches to change the length of resonators, sandwiched within an antenna-filter-antenna structure. The PSS created the optimal 2D phase distribution needed to achieve high-gain beam focusing, but the MEMS fabrication process was complex and costly, requiring a large number of control lines. MEMS and other mechanical switching methods can achieve a relatively low insertion loss (2.5 dB) and an excellent linearity, but are prone to stiction and reliability issues. Reconfigurable materials have shown promise for enabling a low-loss beam steering transmitarray. A vanadium dioxide reconfigurable metasurface operating at 100 GHz has been presented, using a crossed-slot unit cell. 
A heating element was used to thermally control the phase shift through each cell. The permittivity of liquid crystal (and hence the phase shift) can be reconfigured by applying a voltage between two parallel conducting plates. However, liquid crystal has several practical challenges. The liquid must be hermetically sealed in a cavity, and the crystal orientations aligned with the cavity walls in an unbiased state. The liquid can flow between cells, causing a variation in the RF properties of the transmitarray, and dynamic instabilities. Liquid crystal reflectarrays have been extensively investigated at 78 GHz and 100 GHz. A fishnet metamaterial lens has been designed, using liquid crystal to achieve a 360° electronically controlled phase range. The 5 dB unit cell insertion loss could be reduced by controlling the Bloch impedance (both formula_5 and formula_6) of each unit cell. The advantage of liquid crystal is that its loss tangent reduces with frequency; however, it suffers from a slow switching time of around 100 ms and fabrication difficulties. Geometry and radiation pattern. A conventional transmitarray consists of a planar arrangement of unit cells, illuminated by a feed source. For this structure, the required phase distribution is: formula_7 where (formula_8, formula_9) are the elevation and azimuth steering directions, and formula_10 are the coordinates of unit cell formula_11. Note that formula_12, formula_13, and formula_14. formula_15 and formula_16 are the total numbers of unit cells in the formula_17- and formula_18-directions respectively. When steering in azimuth only, this simplifies to: formula_19 where formula_20 and (formula_21,formula_22,formula_23) are the coordinates of the feed, in this case (0,0,-formula_1). The overall radiation pattern can then be calculated. Here, terms are combined to express the formula in full: formula_24 where the radiation pattern of the steered array source is modelled as formula_25. The term formula_26 corresponds to the phases applied to the transmitarray unit cells, to undo the phase variation due to the geometry of the cells from the feed, i.e. formula_27. Edge taper and aperture efficiency. formula_28 An edge taper of around -10 dB is desired, so that the illumination efficiency is maximised. For a planar (conventional) transmitarray, fed by an antenna with radiation pattern formula_29, and subtended angle formula_30, the spillage and taper efficiencies are calculated by: formula_31 formula_32 Here, formula_33 is a function of formula_0. Note that formula_34, so using formula_35, these formulas can be expressed in terms of formula_0, rather than the subtended angle. The illumination efficiency is the product of these: formula_36. The overall aperture efficiency formula_37 is obtained by multiplying by material losses and any directivity reduction terms. Unit cell design. A variety of unit cell shapes have been proposed, including double square loops, U-shaped resonators, microstrip patches, and slots. The double square loop has the best transmission performance at wide angles of incidence, whereas a large bandwidth can be achieved if Jerusalem cross slots are used. A switchable FSS using MEMS capacitors has also been demonstrated. The four-legged loaded element was used to obtain full control of the bandwidth and incidence angle properties. For space applications, in which thermal expansion must be considered, air gaps between layers can be used instead of dielectric, to minimise the insertion loss (metal-only transmitarray). 
However, this increases the thickness, and requires a large number of screws for mechanical support. Design example. Consider the structure of the proposed 1-bit unit cell, which operates at 28 GHz. It is based on the design presented in. It consists of two metal layers, printed on a Rogers RT5880 substrate material having a thickness of 0.254 mm, a dielectric constant of 2.2, and a loss tangent of 0.0009. Each metal layer consists of a pair of crossed slots, and the incident fields are vertically polarised (formula_38). By selecting a symmetrical unit cell shape, they can be adapted for dual linear or circular polarisation. The two metal layers are separated by a 3 mm thick layer of ePTFE material (of dielectric constant formula_5 = 1.4), which creates a 100° phase shift between these layers. The unit cell has reduced thickness and insertion loss compared with multilayer designs. The unit cell can be reconfigured between two phase states, OFF (0°) and ON (180°). For the OFF state, it has a Jerusalem cross slot structure. In the ON state, the slots are not loaded with Jerusalem cross (JC) shaped caps, producing a large phase change. Due to the use of single-pole resonators (a two-layer structure), the transmission performance was challenging to achieve, requiring fine-tuning of the unit cell physical dimensions. Both unit cell states were simulated in CST Microwave Studio using Floquet ports and the frequency domain solver. This included the magnitude and phase of the formula_38 transmission coefficient through the unit cell in ON and OFF states. A phase change of 189° was observed, which is close to 180°, and the transmission magnitude is at least -1.76 dB at 28 GHz for both states. For the JC cells, the surface currents are in opposite directions (anti-phase) on each conductor layers, whereas for the CS cells, the surface currents are in the same direction (in-phase). The phase difference between states is given by: formula_39. Biasing reconfigurable unit cells. PIN diodes can be placed across the ends of the Jerusalem cross caps, applying a different bias voltage for each state. DC blocking in the form of interdigital capacitors would be needed to isolate the bias voltages, and RF choke inductors would be needed at the ends of the bias lines. To demonstrate the transmitarray concept, unit cells with fixed phase shifts were used in the fabricated prototypes. For electronic reconfiguration, PIN diodes would need to be placed on both the top and bottom layers. When the diodes are forward biased (ON), incident radiation is transmitted through the slots with a 180° phase change, but when the diodes are reverse biased (OFF), the current path is lengthened so that there is minimal phase change (around 0°). The MACOM MA4GP907 diode has an ON resistance formula_40 = 4.2 formula_41, an OFF resistance formula_42 = 300 kformula_41, and small parasitic inductance and capacitance values (formula_43 = 0.05 nH, formula_44 = 42 fF in the 28 GHz band). Given that it has a high OFF resistance value, and that the switching time is very fast (2 ns), this component is suitable for the design. The position and orientation of the bias lines must be chosen to minimise their effect on the transmission of the incident waves through the structure. If the lines are sufficiently narrow (width up to 0.1 mm), they will present a high impedance, so will have less effect on the wavefronts. As they act as a polarising grid, the bias lines should be perpendicular to the incident field direction. 
This design has no ground plane, so each group of active unit cells must have both a formula_45 and a ground connection. As groups of cells share the same bias voltages, these lines can be routed between adjacent cells. The required number of external control lines is equal to the number of beam directions supported, so is inversely proportional to the steering resolution. The bias lines could be implemented as large blocks of copper around the unit cells, separated by thin gaps (through which the RF wave propagation is heavily attenuated). The gaps may need to be meandered to form DC block capacitors. Radial stubs or high-impedance lines of length formula_46 (a quarter of a guided wavelength) could be used as chokes (inductors) on the external control lines, to prevent the RF signal from affecting the DC control circuitry. Discussion. A key challenge in transmitarray design is that the insertion loss increases with the number of conductor layers within the unit cell. In, it was shown that the optimal number of layers to maximise the gain (directivity vs. loss) is 3 layers. This has been corroborated by an analysis of cascaded sheet admittances. However, for scenarios when cost and efficiency are more important, a low-cost two-layer transmitarray may be preferred. Alternatively, the efficiency can be improved by integrating the antenna used to feed the transmitarray within a monolithic chip, as recently demonstrated in the D-band frequency range (114 – 144 GHz). Another high-gain transmitarray was demonstrated, operating at D-band (110 – 170 GHz). The formula_0 was optimised to maximise the aperture efficiency. The antenna was connected to an integrated frequency multiplier to demonstrate a communication link. A data rate of 1 Gbit/s was achieved over a distance of 2.5 m, with an error vector magnitude (EVM) of 25% References. &lt;templatestyles src="Reflist/styles.css" /&gt;
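As a numerical illustration of the steering formula quoted in the geometry and radiation pattern section, the following Python sketch computes the phase required at each unit cell of a centre-fed planar transmitarray. The array size, cell pitch, frequency and F/D value in the example are arbitrary numbers chosen for illustration, not parameters of any of the designs discussed above.
```python
import numpy as np

def transmitarray_phases(M, N, d, F, freq_hz, theta0_deg, phi0_deg):
    """Required unit-cell phases (radians, wrapped to [0, 2*pi)) for a planar
    transmitarray fed from (0, 0, -F) and steered to (theta0, phi0), following
    the steering formula quoted in the geometry section. Lengths in metres."""
    c = 299_792_458.0
    k0 = 2.0 * np.pi * freq_hz / c                    # free-space wavenumber
    th, ph = np.radians(theta0_deg), np.radians(phi0_deg)
    i = np.arange(M) - (M - 1) / 2.0                  # centred cell indices
    j = np.arange(N) - (N - 1) / 2.0
    x, y = np.meshgrid(i * d, j * d, indexing="ij")   # cell coordinates
    d_feed = np.sqrt(x ** 2 + y ** 2 + F ** 2)        # path length from the feed
    phase = k0 * (d_feed - np.sin(th) * (x * np.cos(ph) + y * np.sin(ph)))
    return np.mod(phase, 2.0 * np.pi)

# arbitrary example: a 20 x 20 array with 5.35 mm pitch (about half a wavelength
# at 28 GHz), F/D = 0.5, steered to 30 degrees in the phi = 0 plane
phases = transmitarray_phases(20, 20, 5.35e-3, 0.5 * 20 * 5.35e-3, 28e9, 30.0, 0.0)
print(phases.shape, phases.min(), phases.max())
```
For a 1-bit design such as the unit cell described earlier, these continuous phases would then be quantised to the nearer of the two available states (0° or 180°).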
[ { "math_id": 0, "text": "F/D" }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": "D" }, { "math_id": 3, "text": "\\Delta \\phi(x,y) = \\frac{-2\\pi}{\\lambda_{0}}\\sqrt{x^2 + y^2 + F^2}" }, { "math_id": 4, "text": "(0,0,-F)" }, { "math_id": 5, "text": "\\epsilon_{r}" }, { "math_id": 6, "text": "\\mu_{r}" }, { "math_id": 7, "text": "\n\\phi_{m}(x_{m}, y_{m}, z_{m}) = -k_{0}(\\sin{\\theta_{0}}\\cos{\\phi_{0}}x_{m} + \\sin{\\theta_{0}}\\sin{\\phi_{0}}y_{m} + \\cos{\\theta_{0}}z_{m})\n" }, { "math_id": 8, "text": "\\theta_{0}" }, { "math_id": 9, "text": "\\phi_{0}" }, { "math_id": 10, "text": "(x_{m}, y_{m}, z_{m})" }, { "math_id": 11, "text": "m" }, { "math_id": 12, "text": "x_{m} = \\left(m + \\frac{M-1}{2}\\right)d" }, { "math_id": 13, "text": "y_{n} = \\left(n + \\frac{N-1}{2}\\right)d" }, { "math_id": 14, "text": "z_{m} = 0" }, { "math_id": 15, "text": "M" }, { "math_id": 16, "text": "N" }, { "math_id": 17, "text": "x" }, { "math_id": 18, "text": "y" }, { "math_id": 19, "text": "\n\\phi_{m}(x_{m}, y_{m}) = k_{0}(d_{m} - \\sin{\\theta_{0}}(x_{m}\\cos{\\phi_{0}} + y_{m}\\sin{\\phi_{0}}))\n" }, { "math_id": 20, "text": "\nd_{m} = \\sqrt{(x_{m} - x_{f})^2 + (y_{m} - y_{f})^2 + z_{f})^2}\n" }, { "math_id": 21, "text": "x_{f}" }, { "math_id": 22, "text": "y_{f}" }, { "math_id": 23, "text": "z_{f}" }, { "math_id": 24, "text": "\nE(\\theta, \\phi) = \\sum_{m=1}^{M} \\sum_{n=1}^{N} \\cos^{q_{e}}\\left({\\theta - \\theta_{0}}\\right) \\frac{\\cos^{q_{f}}\\left({\\theta_{f mn}}\\right)}{\\sqrt{(md)^2 + (nd)^2 + F^2}}|T_{mn}| e^{j \\Psi_{mn}}\\times e^{-jk\\left(\\sqrt{(md)^2 + (nd)^2 + F^2} - d(m\\sin{\\theta}\\cos{\\phi} + n\\sin{\\theta}\\sin{\\phi})\\right)}\n" }, { "math_id": 25, "text": "\\cos^{q_{e}}\\left({\\theta - \\theta_{0}}\\right)" }, { "math_id": 26, "text": "e^{j \\Psi_{mn}}" }, { "math_id": 27, "text": "e^{j \\Psi_{mn}}e^{-jk \\angle G_{mn}} = 1" }, { "math_id": 28, "text": "\nG_{\\textrm{edge}} = 10\\log_{10}\\left(\\cos^{n}{\\theta_{sub}}\\right)\n" }, { "math_id": 29, "text": "G_{f}(\\theta,\\phi) = \\cos^{n}{\\theta}" }, { "math_id": 30, "text": "\\theta_{sub} = \\tan^{-1}\\left({\\frac{D}{2F}}\\right)" }, { "math_id": 31, "text": "\n\\eta_{s} = 1 - \\cos^{n+1}{\\theta_{sub}}\n" }, { "math_id": 32, "text": "\n\\eta_{t} = \\frac{2n}{\\tan^2{\\theta_{sub}}} \\frac{(1 - \\cos^{(n/2) - 1}{\\theta_{sub}})^2}{(\\frac{n}{2} - 1)^2 (1 - \\cos^{n}{\\theta_{sub}})}\n" }, { "math_id": 33, "text": "\\theta_{sub}" }, { "math_id": 34, "text": "\\cos(\\tan^{-1}{x}) = \\frac{1}{\\sqrt{1+x^2}}" }, { "math_id": 35, "text": "x = \\frac{D}{2F}" }, { "math_id": 36, "text": "\\eta_{i} = \\eta_{s}\\eta_{t}" }, { "math_id": 37, "text": "\\eta_{ap}" }, { "math_id": 38, "text": "E_{y}" }, { "math_id": 39, "text": "\\Delta\\phi = \\angle{S_{21}}_{\\textrm{OFF}} -\\angle{S_{21}}_{\\textrm{ON}}" }, { "math_id": 40, "text": "R_{\\textrm{ON}}" }, { "math_id": 41, "text": "\\Omega" }, { "math_id": 42, "text": "R_{\\textrm{OFF}}" }, { "math_id": 43, "text": "L_{\\textrm{ON}}" }, { "math_id": 44, "text": "C_{\\textrm{OFF}}" }, { "math_id": 45, "text": "V_{\\textrm{bias}}" }, { "math_id": 46, "text": "\\frac{\\lambda_{g}}{4}" } ]
https://en.wikipedia.org/wiki?curid=62946879
6295
Chaos theory
Field of mathematics and science based on non-linear systems and initial conditions Chaos theory is an interdisciplinary area of scientific study and branch of mathematics. It focuses on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. These were once thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause a tornado in Texas. Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Chaos: When the present determines the future but the approximate present does not approximately determine the future. Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through the analysis of a chaotic mathematical model or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory and self-assembly processes. Chaos theory differs from related fields such as structural stability: structural stability concerns the effect of small changes to the model itself, whereas chaos theory concerns the effect of small changes to the state of the system. Time also plays a different role in the definitions used by the two theories. Introduction. Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. 
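The dependence of this prediction horizon on the three factors just listed can be made explicit with a standard back-of-the-envelope estimate (an illustration added here, not a calculation taken from the source):

```latex
% If an initial error of size \delta grows roughly like \delta\, e^{\lambda t},
% where 1/\lambda is the Lyapunov time, and the forecast remains useful while
% the error stays below a tolerance a, then the prediction horizon T satisfies
\delta\, e^{\lambda T} \approx a
\qquad\Longrightarrow\qquad
T \approx \frac{1}{\lambda}\,\ln\!\frac{a}{\delta}.
```

Because the horizon grows only logarithmically in the ratio of tolerance to measurement error, even a large improvement in how accurately the current state is measured buys comparatively little extra prediction time, which is why the Lyapunov time sets the natural scale.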
In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random. Chaos theory is a method of qualitative and quantitative analysis to investigate the behavior of dynamic systems that cannot be explained and predicted by single data relationships, but must be explained and predicted by whole, continuous data relationships. Chaotic dynamics. In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties: (1) it must be sensitive to initial conditions, (2) it must be topologically transitive, and (3) it must have dense periodic orbits. In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition. If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list. Sensitivity to initial conditions. Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior. Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled "Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?". The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different. As suggested in Lorenz's book entitled "The Essence of Chaos", published in 1993, "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions. A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories). A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. 
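A minimal numerical illustration of this loss of predictability (a sketch added here, not an example taken from the source) is to iterate the logistic map x → 4x(1 − x), a standard chaotic example that appears later in this article, from two initial conditions that differ by only 1e-10:

```python
# Minimal sketch (not from the source): two nearby trajectories of the logistic map.
def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.2e}")
```

On average the separation roughly doubles at every step (the long-run exponential growth rate of this map is ln 2), so after a few dozen iterations the two runs are as far apart as the attractor allows and become effectively unrelated, which is the finite predictability horizon in miniature.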
This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will not naturally rise above or fall below certain bounds (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year. In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation formula_3, the two trajectories end up diverging at a rate given by formula_4 where formula_5 is the time and formula_6 is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic. In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system. Non-periodicity. A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior. Topological mixing. Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system. Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity. Topological transitivity. A map formula_8 is said to be topologically transitive if for any pair of non-empty open sets formula_9, there exists formula_10 such that formula_11. Topological transitivity is a weaker version of topological mixing. 
Intuitively, if a map is topologically transitive then given a point "x" and a region "V", there exists a point "y" near "x" whose orbit passes through "V". This implies that it is impossible to decompose the system into two open sets. An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if "X" is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in "X" that have dense orbits. Density of periodic orbits. For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by "x" → 4 "x" (1 – "x") is one of the simplest systems with density of periodic orbits. For example, formula_12 → formula_13 → formula_12 (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem). Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits. Strange attractors. Some dynamical systems, like the one-dimensional logistic map defined by "x" → 4 "x" (1 – "x"), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region. An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly. Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them. Coexisting attractors. In contrast to single type chaotic solutions, recent studies using Lorenz models have emphasized the importance of considering various types of solutions. 
For example, coexisting chaotic and non-chaotic attractors may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models, suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view of "weather is chaotic". Minimum complexity of a chaotic system. Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional. The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as: formula_14 where formula_15, formula_7, and formula_16 make up the system state, formula_5 is time, and formula_1, formula_0, formula_2 are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms and only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved. While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can still exhibit some chaotic properties. Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis. The above set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model. Since 1963, higher-dimensional Lorenz models have been developed in numerous studies for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipation, on solution stability. Infinite dimensional maps. The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates interaction between spatially distributed maps: formula_17, where the kernel formula_18 is a propagator derived as the Green function of a relevant physical system, and formula_19 might be a logistic map such as formula_20 or a complex map. Examples of complex maps include the Julia set formula_21 and the Ikeda map formula_22. When wave propagation problems at a distance formula_23 with wavelength formula_24 are considered, the kernel formula_25 may take the form of the Green function for the Schrödinger equation: formula_26. Jerk systems. 
In physics, jerk is the third derivative of position, with respect to time. As such, differential equations of the form formula_27 are sometimes called "jerk equations". It has been shown that a jerk equation, which is equivalent to a system of three first order, ordinary, non-linear differential equations, is in a certain sense the minimal setting for solutions showing chaotic behavior. This motivates mathematical interest in jerk systems. Systems involving a fourth or higher derivative are called accordingly hyperjerk systems. A jerk system's behavior is described by a jerk equation, and for certain jerk equations, simple electronic circuits can model solutions. These circuits are known as jerk circuits. One of the most interesting properties of jerk circuits is the possibility of chaotic behavior. In fact, certain well-known chaotic systems, such as the Lorenz attractor and the Rössler map, are conventionally described as a system of three first-order differential equations that can combine into a single (although rather complicated) jerk equation. Another example of a jerk equation with nonlinearity in the magnitude of formula_15 is: formula_28 Here, "A" is an adjustable parameter. This equation has a chaotic solution for "A"=3/5 and can be implemented with the following jerk circuit; the required nonlinearity is brought about by the two diodes: In the above circuit, all resistors are of equal value, except formula_29, and all capacitors are of equal size. The dominant frequency is formula_30. The output of op amp 0 will correspond to the x variable, the output of 1 corresponds to the first derivative of x and the output of 2 corresponds to the second derivative. Similar circuits only require one diode or no diodes at all. See also the well-known Chua's circuit, one basis for chaotic true random number generators. The ease of construction of the circuit has made it a ubiquitous real-world example of a chaotic system. Spontaneous order. Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system. Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions. Moreover, from the theoretical physics standpoint, dynamical chaos itself, in its most general manifestation, is a spontaneous order. The essence here is that most orders in nature arise from the spontaneous breakdown of various symmetries. This large family of phenomena includes elasticity, superconductivity, ferromagnetism, and many others. According to the supersymmetric theory of stochastic dynamics, chaos, or more precisely, its stochastic generalization, is also part of this family. The corresponding symmetry being broken is the topological supersymmetry which is hidden in all stochastic (partial) differential equations, and the corresponding order parameter is a field-theoretic embodiment of the butterfly effect. History. James Clerk Maxwell first emphasized the "butterfly effect", and is seen as being one of the earliest to discuss chaos theory, with work in the 1860s and 1870s. An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. 
In 1898, Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent. Chaos theory began in the field of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing. Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map. What had been attributed to measurement imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. In 1959 Boris Valerianovich Chirikov proposed a criterion for the emergence of classical chaos in Hamiltonian systems (Chirikov criterion). He applied this criterion to explain some experimental results on plasma confinement in open mirror traps. This is regarded as the very first physical theory of chaos, which succeeded in explaining a concrete experiment. And Boris Chirikov himself is considered a pioneer in classical and quantum chaos. The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970. Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz and his collaborators Ellen Fetter and Margaret Hamilton were using a simple digital computer, a Royal McBee LGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. 
However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions. In 1963, Benoit Mandelbrot, studying information theory, discovered that noise in many phenomena (including stock prices and telephone circuits) was patterned like a Cantor set, a set of points with infinite roughness and detail. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Noting that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or "snowflake", which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published "The Fractal Geometry of Nature", which became a classic of chaos theory. In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections. Thus Feigenbaum (1975) and Coullet &amp; Tresser (1978) discovered universality in chaos, permitting the application of chaos theory to many different phenomena. In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for their inspiring achievements. In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking dysfunction among people with schizophrenia. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles. In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in "Physical Review Letters" describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature. 
Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws. Also in 1987 James Gleick published "Chaos: Making a New Science", which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in "The Structure of Scientific Revolutions" (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick. The availability of cheaper, more powerful computers broadens the applicability of chaos theory. Currently, chaos theory remains an active area of research, involving many different disciplines such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, pandemic crisis management, etc. Lorenz's pioneering contributions to chaotic modeling. Throughout his career, Professor Lorenz authored a total of 61 research papers, out of which 58 were solely authored by him. Commencing with the 1960 conference in Japan, Lorenz embarked on a journey of developing diverse models aimed at uncovering the SDIC and chaotic features. A recent review of Lorenz's model progression spanning from 1960 to 2008 revealed his adeptness at employing varied physical systems to illustrate chaotic phenomena. These systems encompassed Quasi-geostrophic systems, the Conservative Vorticity Equation, the Rayleigh-Bénard Convection Equations, and the Shallow Water Equations. Moreover, Lorenz can be credited with the early application of the logistic map to explore chaotic solutions, a milestone he achieved ahead of his colleagues (e.g. Lorenz 1964). 
In 1972, Lorenz coined the term "butterfly effect" as a metaphor to discuss whether a small perturbation could eventually create a tornado with a three-dimensional, organized, and coherent structure. While connected to the original butterfly effect based on sensitive dependence on initial conditions, its metaphorical variant carries distinct nuances. To commemorate this milestone, a reprint book containing invited papers that deepen our understanding of both butterfly effects was officially published to celebrate the 50th anniversary of the metaphorical butterfly effect. A popular but inaccurate analogy for chaos. The sensitive dependence on initial conditions (i.e., butterfly effect) has been illustrated using the following folklore: &lt;poem style="margin-left: 2em;"&gt; For want of a nail, the shoe was lost. For want of a shoe, the horse was lost. For want of a horse, the rider was lost. For want of a rider, the battle was lost. For want of a battle, the kingdom was lost. And all for the want of a horseshoe nail. &lt;/poem&gt; Based on the above, many people mistakenly believe that the impact of a tiny initial perturbation monotonically increases with time and that any tiny perturbation can eventually produce a large impact on numerical integrations. However, in 2008, Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability and that the verse implicitly suggests that subsequent small events will not reverse the outcome. Based on the analysis, the verse only indicates divergence, not boundedness. Boundedness is important for the finite size of a butterfly pattern. In a recent study, the characteristic of the aforementioned verse was denoted as "finite-time sensitive dependence". Applications. Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list as new applications are appearing. Cryptography. Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives. These algorithms include image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking, and steganography. The majority of these algorithms are based on uni-modal chaotic maps and a big portion of these algorithms use the control parameters and the initial condition of the chaotic maps as their keys. From a wider perspective, without loss of generality, the similarities between the chaotic maps and the cryptographic systems are the main motivation for the design of chaos based cryptographic algorithms. One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. Many of the DNA-Chaos cryptographic algorithms have been shown to be insecure, or the technique applied has been suggested to be inefficient. Robotics. Robotics is another area that has recently benefited from chaos theory. 
Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model. Chaotic dynamics have been exhibited by passive walking biped robots. Biology. For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in ecological systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling. As Perry points out, modeling of chaotic time series in ecology is helped by constraint. There is always potential difficulty in distinguishing real chaos from chaos that is only in the model. Hence constraint in the model and/or duplicate time series data for comparison will be helpful in constraining the model to something close to the reality, for example Perry &amp; Wall 1984. Gene-for-gene co-evolution sometimes shows chaotic dynamics in allele frequencies. Adding variables exaggerates this: Chaos is more common in models incorporating additional variables to reflect additional facets of real populations. Robert M. May himself did some of these foundational crop co-evolution studies, and this in turn helped shape the entire field. Even for a steady environment, merely combining one crop and one pathogen may result in quasi-periodic or chaotic oscillations in the pathogen population. Economics. It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships. Chaos could be found in economics by means of recurrence quantification analysis. In fact, Orlando et al., by means of the so-called recurrence quantification correlation index, were able to detect hidden changes in time series. Then, the same technique was employed to detect transitions from laminar (regular) to turbulent (chaotic) phases, as well as differences between macroeconomic variables, and to highlight hidden features of economic dynamics. Finally, chaos theory could help in modeling how an economy operates as well as in embedding shocks due to external events such as COVID-19. Finite Predictability in Weather and Climate. Due to the sensitive dependence of solutions on initial conditions (SDIC), also known as the butterfly effect, chaotic systems like the Lorenz 1963 model imply a finite predictability horizon. 
This means that while accurate predictions are possible over a finite time period, they are not feasible over an infinite time span. Considering the nature of Lorenz's chaotic solutions, the committee led by Charney et al. in 1966 extrapolated a doubling time of five days from a general circulation model, suggesting a predictability limit of two weeks. This connection between the five-day doubling time and the two-week predictability limit was also recorded in a 1969 report by the Global Atmospheric Research Program (GARP). To acknowledge the combined direct and indirect influences from the Mintz and Arakawa model and Lorenz's models, as well as the leadership of Charney et al., Shen et al. refer to the two-week predictability limit as the "Predictability Limit Hypothesis," drawing an analogy to Moore's Law. AI-Extended Modeling Framework. In AI-driven large language models, responses can exhibit sensitivities to factors like alterations in formatting and variations in prompts. These sensitivities are akin to butterfly effects. Although classifying AI-powered large language models as classical deterministic chaotic systems poses challenges, chaos-inspired approaches and techniques (such as ensemble modeling) may be employed to extract reliable information from these expansive language models (see also "Butterfly Effect in Popular Culture"). Other areas. In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefitted greatly from chaos theory. Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately. Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass and Mandell and Selz have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior. Researchers have continued to apply chaos theory to psychology. For example, in modeling group behavior in which heterogeneous members may behave as if sharing to different degrees what in Wilfred Bion's theory is a basic assumption, researchers have found that the group dynamic is the result of the individual dynamics of the members: each individual reproduces the group dynamics in a different scale, and the chaotic behavior of the group is reflected in each member. Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. 
Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so. In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r. Time series and first delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that results fit a chaotic model. By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable. Some say the chaos metaphor—used in verbal theories—grounded on mathematical models and psychological aspects of human behavior, provides helpful insights for describing the complexity of small work groups that go beyond the metaphor itself. Traffic forecasting may benefit from applications of chaos theory. Better predictions of when congestion will occur would allow measures to be taken to disperse it before it occurs. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model (see the plot of the BML traffic model at right). Chaos theory has been applied to environmental water cycle data (also hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. 
Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics. See also. Examples of chaotic systems &lt;templatestyles src="Div col/styles.css"/&gt; Other related topics &lt;templatestyles src="Div col/styles.css"/&gt; People &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rho" }, { "math_id": 1, "text": "\\sigma" }, { "math_id": 2, "text": "\\beta" }, { "math_id": 3, "text": "\\delta \\mathbf{Z}_0" }, { "math_id": 4, "text": " | \\delta\\mathbf{Z}(t) | \\approx e^{\\lambda t} | \\delta \\mathbf{Z}_0 |," }, { "math_id": 5, "text": "t" }, { "math_id": 6, "text": "\\lambda" }, { "math_id": 7, "text": "y" }, { "math_id": 8, "text": "f:X \\to X" }, { "math_id": 9, "text": "U, V \\subset X" }, { "math_id": 10, "text": "k > 0" }, { "math_id": 11, "text": "f^{k}(U) \\cap V \\neq \\emptyset" }, { "math_id": 12, "text": "\\tfrac{5-\\sqrt{5}}{8}" }, { "math_id": 13, "text": "\\tfrac{5+\\sqrt{5}}{8}" }, { "math_id": 14, "text": " \\begin{align}\n\\frac{\\mathrm{d}x}{\\mathrm{d}t} &= \\sigma y - \\sigma x, \\\\\n\\frac{\\mathrm{d}y}{\\mathrm{d}t} &= \\rho x - x z - y, \\\\\n\\frac{\\mathrm{d}z}{\\mathrm{d}t} &= x y - \\beta z.\n\\end{align} " }, { "math_id": 15, "text": "x" }, { "math_id": 16, "text": "z" }, { "math_id": 17, "text": "\\psi_{n+1}(\\vec r,t) = \\int K(\\vec r - \\vec r^{,},t) f [\\psi_{n}(\\vec r^{,},t) ]d {\\vec r}^{,}" }, { "math_id": 18, "text": "K(\\vec r - \\vec r^{,},t)" }, { "math_id": 19, "text": " f [\\psi_{n}(\\vec r,t) ] " }, { "math_id": 20, "text": " \\psi \\rightarrow G \\psi [1 - \\tanh (\\psi)]" }, { "math_id": 21, "text": " f[\\psi] = \\psi^2" }, { "math_id": 22, "text": " \\psi_{n+1} = A + B \\psi_n e^{i (|\\psi_n|^2 + C)} " }, { "math_id": 23, "text": "L=ct" }, { "math_id": 24, "text": "\\lambda=2\\pi/k" }, { "math_id": 25, "text": "K" }, { "math_id": 26, "text": " K(\\vec r - \\vec r^{,},L) = \\frac {ik\\exp[ikL]}{2\\pi L}\\exp[\\frac {ik|\\vec r-\\vec r^{,}|^2}{2 L} ]" }, { "math_id": 27, "text": "J\\left(\\overset{...}{x},\\ddot{x},\\dot {x},x\\right)=0" }, { "math_id": 28, "text": "\\frac{\\mathrm{d}^3 x}{\\mathrm{d} t^3}+A\\frac{\\mathrm{d}^2 x}{\\mathrm{d} t^2}+\\frac{\\mathrm{d} x}{\\mathrm{d} t}-|x|+1=0." }, { "math_id": 29, "text": "R_A=R/A=5R/3" }, { "math_id": 30, "text": "1/2\\pi R C" } ]
https://en.wikipedia.org/wiki?curid=6295
62951851
Random recursive tree
In probability theory, a random recursive tree is a rooted tree chosen uniformly at random from the recursive trees with a given number of vertices. Definition and generation. In a recursive tree with formula_0 vertices, the vertices are labeled by the numbers from formula_1 to formula_0, and the labels must decrease along any path to the root of the tree. These trees are unordered, in the sense that there is no distinguished ordering of the children of each vertex. In a random recursive tree, all such trees are equally likely. Alternatively, a random recursive tree can be generated by starting from a single vertex, the root of the tree, labeled formula_1, and then for each successive label from formula_2 to formula_0 choosing a random vertex with a smaller label to be its parent. If each of the choices is uniform and independent of the other choices, the resulting tree will be a random recursive tree. Properties. With high probability, the longest path from the root to a leaf of an formula_0-vertex random recursive tree has length formula_3. The maximum number of children of any vertex in the tree, i.e., its degree, is, with high probability, formula_4. The expected distance of the formula_5th vertex from the root is the formula_5th harmonic number, from which it follows by linearity of expectation that the sum of all root-to-vertex path lengths is, with high probability, formula_6. The expected number of leaves of the tree is formula_7 with variance formula_8, so with high probability the number of leaves is formula_9. Applications. Random recursive trees have been used to model phenomena including disease spreading, pyramid schemes, the evolution of languages, and the growth of computer networks. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
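The generation rule described above translates directly into code. The following sketch (an illustration added here, not taken from the source) grows a random recursive tree on n vertices by giving each vertex labelled k > 1 a parent chosen uniformly at random among the vertices 1, …, k − 1, and then reports two of the quantities discussed above: the depth of the last vertex and the number of leaves.

```python
import random

def random_recursive_tree(n, seed=None):
    """Return a dict mapping each vertex k = 2..n to its parent; vertex 1 is the root."""
    rng = random.Random(seed)
    # Each new vertex k attaches to a parent chosen uniformly among 1..k-1,
    # which is exactly the growth rule described in the text.
    return {k: rng.randint(1, k - 1) for k in range(2, n + 1)}

def depth(parent, k):
    """Number of edges on the path from vertex k up to the root."""
    d = 0
    while k != 1:
        k = parent[k]
        d += 1
    return d

n = 10_000
parent = random_recursive_tree(n, seed=42)
leaves = n - len(set(parent.values()))   # vertices that never appear as a parent
print("depth of vertex", n, "=", depth(parent, n))
print("number of leaves =", leaves, "(about n/2 =", n // 2, "expected)")
```

For n = 10,000 the printed number of leaves is typically close to n/2 = 5,000, in line with the expectation stated above.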
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "1" }, { "math_id": 2, "text": "2" }, { "math_id": 3, "text": "e\\log n" }, { "math_id": 4, "text": "(1\\pm o(1))\\log_2 n" }, { "math_id": 5, "text": "k" }, { "math_id": 6, "text": "(1\\pm o(1))n\\log n" }, { "math_id": 7, "text": "n/2" }, { "math_id": 8, "text": "n/12" }, { "math_id": 9, "text": "(1\\pm o(1))n/2" } ]
https://en.wikipedia.org/wiki?curid=62951851
629554
Hadwiger's theorem
Theorem in integral geometry In integral geometry (otherwise called geometric probability theory), Hadwiger's theorem characterises the valuations on convex bodies in formula_0 It was proved by Hugo Hadwiger. Introduction. Valuations. Let formula_1 be the collection of all compact convex sets in formula_0 A valuation is a function formula_2 such that formula_3 and for every formula_4 that satisfy formula_5 formula_6 A valuation is called continuous if it is continuous with respect to the Hausdorff metric. A valuation is called invariant under rigid motions if formula_7 whenever formula_8 and formula_9 is either a translation or a rotation of formula_0 Quermassintegrals. The quermassintegrals formula_10 are defined via Steiner's formula formula_11 where formula_12 is the Euclidean ball. For example, formula_13 is the volume, formula_14 is proportional to the surface measure, formula_15 is proportional to the mean width, and formula_16 is the constant formula_17 formula_18 is a valuation which is homogeneous of degree formula_19 that is, formula_20 Statement. Any continuous valuation formula_21 on formula_1 that is invariant under rigid motions can be represented as formula_22 Corollary. Any continuous valuation formula_21 on formula_1 that is invariant under rigid motions and homogeneous of degree formula_23 is a multiple of formula_24 References. &lt;templatestyles src="Reflist/styles.css" /&gt; An account and a proof of Hadwiger's theorem may be found in the literature. An elementary and self-contained proof was given by Beifang Chen.
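As a concrete illustration of the quermassintegrals (a worked example added here, not part of the source article), Steiner's formula can be written out in the plane, where it identifies them with familiar geometric quantities:

```latex
% Worked example (added illustration): Steiner's formula for a convex body
% K in the plane (n = 2), with area A(K) and perimeter P(K), reads
\mathrm{Vol}_2(K + tB) \;=\; A(K) + P(K)\,t + \pi t^{2} ,
% while the defining expansion gives
\mathrm{Vol}_2(K + tB) \;=\; W_0(K) + 2\,W_1(K)\,t + W_2(K)\,t^{2} .
% Comparing coefficients:  W_0(K) = A(K),   W_1(K) = P(K)/2,   W_2(K) = \pi .
```

In this two-dimensional case, Hadwiger's theorem therefore says that every continuous valuation on compact convex sets in the plane that is invariant under rigid motions is a linear combination of the area, the perimeter and the constant valuation formula_16.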
[ { "math_id": 0, "text": "\\R^n." }, { "math_id": 1, "text": "\\mathbb{K}^n" }, { "math_id": 2, "text": "v : \\mathbb{K}^n \\to \\R" }, { "math_id": 3, "text": "v(\\varnothing) = 0" }, { "math_id": 4, "text": "S, T \\in \\mathbb{K}^n" }, { "math_id": 5, "text": "S \\cup T \\in \\mathbb{K}^n," }, { "math_id": 6, "text": "v(S) + v(T) = v(S \\cap T) + v(S \\cup T)~." }, { "math_id": 7, "text": "v(\\varphi(S)) = v(S)" }, { "math_id": 8, "text": "S \\in \\mathbb{K}^n" }, { "math_id": 9, "text": "\\varphi" }, { "math_id": 10, "text": "W_j : \\mathbb{K}^n \\to \\R" }, { "math_id": 11, "text": "\\mathrm{Vol}_n(K + t B) = \\sum_{j=0}^n \\binom{n}{j} W_j(K) t^j~," }, { "math_id": 12, "text": "B" }, { "math_id": 13, "text": "W_0" }, { "math_id": 14, "text": "W_1" }, { "math_id": 15, "text": "W_{n-1}" }, { "math_id": 16, "text": "W_n" }, { "math_id": 17, "text": "\\operatorname{Vol}_n(B)." }, { "math_id": 18, "text": "W_j" }, { "math_id": 19, "text": "n - j," }, { "math_id": 20, "text": "W_j(tK) = t^{n-j} W_j(K)~, \\quad t \\geq 0~." }, { "math_id": 21, "text": "v" }, { "math_id": 22, "text": "v(S) = \\sum_{j=0}^n c_j W_j(S)~." }, { "math_id": 23, "text": "j" }, { "math_id": 24, "text": "W_{n-j}." } ]
https://en.wikipedia.org/wiki?curid=629554
62961256
Limeburners Creek National Park
Limeburners Creek National Park is a protected national park on the Mid North Coast of New South Wales, Australia. The 91.2 kmformula_0 (9123 ha) national park is located to the north of Port Macquarie and exists across both the Kempsey Shire and Port Macquarie-Hastings Council local government areas, but is chiefly managed by the National Parks and Wildlife Service (New South Wales) (NPWS NSW). The area was originally established as a nature reserve but this reservation was revoked when it became formally recognised as a national park in 2010 under the National Parks and Wildlife Act (1974). Many threatened ecological habitats and species of fauna and flora are found within this park, alongside several heritage sites of cultural significance, particularly to the local Birpai and Dunghutti people upon whose land the park exists. The protected status of this national park is largely owing to the ecological and cultural value of the area, in addition to the value of the ecosystems to further scientific research. Cultural heritage. Evidence of Indigenous occupation within Limeburners Creek National Park has been obtained through archaeological excavation of sites, revealing artefacts dating back 5000–6000 years. Such sites and artefacts evidencing the historical occupation of the Birpai and Dunghutti Aboriginal people include a stone quarry, grooves in sandstone which were used to grind and sharpen tools, shell middens and burial sites. NPWS NSW are actively trying to preserve the integrity of these sites, particularly from vandalism due to improper use of 4WD tracks. The local Aboriginal people are thought to have sourced their food from the land and sea, with shellfish, mussels and pipis found plentifully throughout the park. Since 2006, NPWS NSW have been working in close partnership with local Aboriginal elders and Land Councils to plan and erect a Cultural Camp in the park, which is to be used for cultural and educational programs, as well as approved camping, in an aim to improve understandings of local Indigenous connections to the land. History. During the European settlement of Port Macquarie, the area which is now Limeburners Creek National Park was occupied for the production of lime. This lime was produced by burning large masses of oyster shells which were sourced throughout the park and was used in mortar to construct buildings in the settlement, affording the park its name. Following this, in 1881 this same land was declared part of the Orara Gold Field and for a short period of time, gold mining took place. During the 1960s, extensive sand mining took place along the NSW North Coast in an effort to extract minerals including zircon and rutile. Although a report excluded the area which became Limeburners Creek Nature Reserve from sand mining, such operations occurred just south of Point Plomer, in what is now Limeburners Creek National Park. In 1971, the area was declared a nature reserve covering approximately 6,879 hectares; however, this area was expanded in 2010 when it became formally recognised as a national park encompassing a total of 9,123 hectares. Geology. Being situated on the coast, Limeburners Creek National Park consists predominantly of dunal and swampy land and sits, for the most part, no more than 10 metres above sea level. Soils throughout the park consist mostly of sands, silts and clays which were deposited as alluvial sediment by the ancestral Hastings River as the river mouth drifted between Port Macquarie and Crescent Head. 
The highly sandy nature of these soils, particularly those close to the coast, renders them subject to rapid erosion, particularly in the event that surrounding vegetation is removed, exposing the soils to the elements. This is a key management concern for NPWS NSW, who have to actively ensure that visitors only use the roads and trails provided. Eight wetlands have been identified in the park; each receives relief mainly from the coastal dune systems, but also from the headlands of Point Plomer, Big Hill and Queens Head, as well as a number of small inland hills. However, the park is ultimately drained by Limeburners Creek. Saltwater Lake, located in the centre of the park, is a major feature of the landscape. This lake system and the surrounding wetlands are exposed to tidal influences from the Hastings River, rendering them saline. However, following heavy rains, these wetlands rapidly become filled with mostly fresh water. Headlands within Limeburners Creek National Park are known to belong to the Touchwood Formation, which formed during the Devonian. Extending to the west, sand dunes and ridges belonging to the Quaternary become prevalent, with recent and ongoing sand deposition occurring along the coast on top of an underlying degraded barrier dating back to the Pleistocene. Climate. Limeburners Creek National Park uniformly experiences a subtropical climate. Typically, February is the wettest month, while August usually proves to be the driest. Ecology. Limeburners Creek National Park is part of the North Coast Bioregion and sustains several ecological communities which, according to the Environment Protection and Biodiversity Conservation Act 1999, are considered to be critically endangered. These include communities of coastal saltmarsh, swamp oak floodplain forest and littoral rainforests. The park also hosts a diverse array of threatened fauna and flora including the rare ground parrot, spotted quoll and koala, in addition to broad-leaved tea trees, swamp oaks and bangalow palms. These unique plant and animal species also face ecological competition in the park from several introduced and exotic species. Noxious exotic weeds such as bitou bush and lantana are prevalent within the park, responsible for outcompeting native flora and degrading the existing fragile and threatened native communities. Foxes, feral cats and feral dogs are problematic introduced animals which are found throughout the park. These species are known to prey on small mammal and bird species in particular, posing a serious threat to the viability of populations of native species in the park. Domestic cattle are also problematic and known to frequent the fringes of Limeburners Creek National Park after escaping adjacent properties. The biodiversity of Limeburners Creek National Park is also threatened by bushfires of increasing frequency and intensity. Flora. Eucalyptus species are the dominant vegetation type found along the western fringe of Limeburners Creek National Park, being well-suited to the drier soils in this area. By contrast, environments in the park's centre and east are mostly swamps, dominated by sclerophyll woodlands, wet heath and swamp shrublands. Another unique community within the park is that of the littoral rainforests, which are found behind Queens Head Beach as well as on the northern and southern slopes of Big Hill. 
Littoral rainforests have recently been listed as an endangered ecological community under section 1 of the "Threatened Species Conservation Act 1995", which has made them a priority for conservation programs conducted by NPWS NSW. The flora found within Limeburners Creek National Park is one of the key reasons for its protection, given that 12 species have been identified as being at the limits of their geographical range or have been included in the "Threatened Species Conservation Act" (1995). Dominant floral species include heath banksia in the swampy shrublands, and swamp oak and stands of broad-leaved tea tree in pockets of sclerophyll forests, with grass trees, fern-leaved banksia and tea tree the most prevalent plants in wet heath communities. Fauna. Limeburners Creek National Park hosts an array of unique and endangered faunal species. The park has been identified as one of the few areas in coastal NSW where dingoes and quolls have not been displaced by human developments. Estuaries and islands in the park's southern region have also been identified as ecosystems that are critical in supporting populations of sea birds and water fowl. In addition, a bat colony of an undetermined species is known to reside in the limestone cave on Big Hill. A number of reptile and frog species have been recorded in Limeburners Creek National Park. The lace monitor is one of the more common reptile species found within the park, often wandering in the bushland surrounding campgrounds, where it frequently feeds on scraps discarded or left behind by negligent campers. The park also supports several unique marsupial species, including the swamp wallaby, red-necked wallaby, feathertail glider and sugar glider, as well as an array of bird life, with species such as the pied oystercatcher and osprey having been recorded in the park's wetlands and dune environments. The park also provides habitats for an assortment of endangered species, including the elusive eastern ground parrot, which shelters in forested wetlands and heathlands. Brush-tailed phascogales and koalas are among the threatened marsupials hosted by the park. Unique and vulnerable insects, too, are known to reside in the park, such as the critically endangered laced fritillary, which reaches the southern limits of its geographical range in Limeburners Creek National Park, and the sword grass brown butterfly, which is found solely in the Port Macquarie area and is reliant on the gahnia swamps in the park's west. Given the diversity and vulnerability of ecological communities and their populations in Limeburners Creek National Park, it has been recommended that areas for scientific research be established in and around Saltwater Lake, as well as the estuaries and islands in the park's south. These areas would facilitate environmental research and promote the undertaking of faunal and floral surveys to aid in population viability analysis. Pest management. As a result of human introduction, exotic plants and animals have intruded into Limeburners Creek National Park and many have since developed into feral populations. In accordance with the "Noxious Weeds Act" (1993), NPWS NSW have an ongoing obligation to control noxious weeds present within National Parks in NSW. In addition, NPWS NSW also control populations of introduced and feral animals in an effort to mitigate their impact on the existing native ecological communities. 
Lantana and bitou bush are the most problematic weed species in Limeburners Creek National Park; however, coastal morning glory and senna are also noxious weeds which are found and controlled in the park. Records indicate that lantana was introduced to the Port Macquarie area in 1838 and it has since spread and propagated itself, now being found in almost every environment in Limeburners Creek National Park. Lantana is generally controlled with a combination of pesticide use and 'cut-and-spray' techniques, followed by hand removal and ongoing assessments. Bitou bush remains a very invasive and problematic species in dune environments and is of particular concern in those environments adjacent to littoral rainforest communities, where bitou bush has the potential to cause further degradation through competition and displacement. Despite the persistent nature of bitou bush, it has been successfully removed from areas around Point Plomer and Queens Head using the Bradley method of bush regeneration. Limeburners Creek National Park is fringed by several properties, many of which run domestic cattle that enter the park through poorly-constructed fencing. Cattle are problematic as they erode and create new tracks by removing vegetation, and also contribute to the dispersal of weeds throughout the park. Wild dogs and foxes are a greater cause for concern in the park, because these feral animals threaten the vulnerable native populations of endemic fauna and pose a threat to the safety of visitors in the national park. These target species are managed by a combination of baiting and trapping operations which are overseen by NPWS NSW. Fire management. It is understood that Australian biota has adapted to a moderately high wildfire frequency which was previously governed by Indigenous land management practices and natural phenomena; however, NPWS NSW have also recognised that the frequency of wildfires has increased dramatically since European occupation of the land. The changing fire regime is largely attributed to the changing climate; however, it is important to note the role of arson, with some of the more recent fires having been lit alongside Maria River Road and various tracks throughout the park. Although ecosystems within the national park all demand a level of protection from wildfires, they have differing fire regimes, and so careful management by NPWS NSW is critical to ensuring that wildfires burn with particular intensities, frequencies and extents in these ecosystems. Fires occurring within Limeburners Creek National Park are largely controlled by NPWS NSW, sometimes in conjunction with the New South Wales Rural Fire Service; however, controlled burns are generally performed and managed chiefly by NPWS NSW, in accordance with their Fire Management Strategy. See also. <templatestyles src="Stack/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "^2" } ]
https://en.wikipedia.org/wiki?curid=62961256
6296376
Triple product rule
Relation between relative derivatives of three variables The triple product rule, known variously as the cyclic chain rule, cyclic relation, cyclical rule or Euler's chain rule, is a formula which relates partial derivatives of three interdependent variables. The rule finds application in thermodynamics, where frequently three variables can be related by a function of the form "f"("x", "y", "z") = 0, so each variable is given as an implicit function of the other two variables. For example, an equation of state for a fluid relates temperature, pressure, and volume in this manner. The triple product rule for such interrelated variables "x", "y", and "z" comes from using a reciprocity relation on the result of the implicit function theorem, and is given by formula_0 where each factor is a partial derivative of the variable in the numerator, considered to be a function of the other two. The advantage of the triple product rule is that by rearranging terms, one can derive a number of substitution identities which allow one to replace partial derivatives which are difficult to analytically evaluate, experimentally measure, or integrate with quotients of partial derivatives which are easier to work with. For example, formula_1 Various other forms of the rule are present in the literature; these can be derived by permuting the variables {"x", "y", "z"}. Derivation. An informal derivation follows. Suppose that "f"("x", "y", "z") = 0. Write "z" as a function of "x" and "y". Thus the total differential "dz" is formula_2 Suppose that we move along a curve with "dz" = 0, where the curve is parameterized by "x". Thus "y" can be written in terms of "x", so on this curve formula_3 Therefore, the equation for "dz" = 0 becomes formula_4 Since this must be true for all "dx", rearranging terms gives formula_5 Dividing by the derivatives on the right hand side gives the triple product rule formula_6 Note that this proof makes many implicit assumptions regarding the existence of partial derivatives, the existence of the exact differential "dz", the ability to construct a curve in some neighborhood with "dz" = 0, and the nonzero value of partial derivatives and their reciprocals. A formal proof based on mathematical analysis would eliminate these potential ambiguities. Alternative derivation. Suppose a function "f"("x", "y", "z") = 0, where x, y, and z are functions of each other. Write the total differentials of the variables formula_7 formula_8 Substitute "dy" into "dx" formula_9 By using the chain rule one can show the coefficient of "dx" on the right hand side is equal to one, thus the coefficient of "dz" must be zero formula_10 Subtracting the second term and multiplying by its inverse gives the triple product rule formula_11 Applications. Example: Ideal Gas Law. The ideal gas law relates the state variables of pressure (P), volume (V), and temperature (T) via formula_12 which can be written as formula_13 so each state variable can be written as an implicit function of the other state variables: formula_14 From the above expressions, we have formula_15 Geometric Realization. A geometric realization of the triple product rule can be found in its close ties to the velocity of a traveling wave formula_16 shown on the right at time "t" (solid blue line) and at a short time later "t"+Δ"t" (dashed). 
The wave maintains its shape as it propagates, so that a point at position "x" at time "t" will correspond to a point at position "x"+Δ"x" at time "t"+Δ"t", formula_17 This equation can only be satisfied for all "x" and "t" if "k" Δ"x" − "ω" Δ"t" = 0, resulting in the formula for the phase velocity formula_18 To elucidate the connection with the triple product rule, consider the point "p"1 at time "t" and its corresponding point (with the same height) "p̄"1 at "t"+Δ"t". Define "p"2 as the point at time "t" whose x-coordinate matches that of "p̄"1, and define "p̄"2 to be the corresponding point of "p"2 as shown in the figure on the right. The distance Δ"x" between "p"1 and "p̄"1 is the same as the distance between "p"2 and "p̄"2 (green lines), and dividing this distance by Δ"t" yields the speed of the wave. To compute Δ"x", consider the two partial derivatives computed at "p"2, formula_19 formula_20 Dividing these two partial derivatives and using the definition of the slope (rise divided by run) gives us the desired formula for formula_21 where the negative sign accounts for the fact that "p"1 lies behind "p"2 relative to the wave's motion. Thus, the wave's velocity is given by formula_22 For infinitesimal Δ"t", formula_23 and we recover the triple product rule formula_22
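The ideal gas check above can also be carried out mechanically with a computer algebra system. The following sketch assumes the SymPy library (the variable names and the use of SymPy are illustrative choices, not part of the rule itself); it forms the three partial derivatives from the explicit functions P(V, T), V(P, T) and T(P, V) and confirms that their product simplifies to -1 once all factors are evaluated at the same state point:

import sympy as sp

P, V, T, n, R = sp.symbols('P V T n R', positive=True)

# Each state variable written as a function of the other two, as in the example above.
P_of_VT = n * R * T / V
V_of_PT = n * R * T / P
T_of_PV = P * V / (n * R)

product = (sp.diff(P_of_VT, V)      # (dP/dV) at constant T
           * sp.diff(V_of_PT, T)    # (dV/dT) at constant P
           * sp.diff(T_of_PV, P))   # (dT/dP) at constant V

# Eliminate P in favour of V and T so all three factors refer to one state point.
print(sp.simplify(product.subs(P, n * R * T / V)))   # prints -1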
[ { "math_id": 0, "text": "\\left(\\frac{\\partial x}{\\partial y}\\right)\\left(\\frac{\\partial y}{\\partial z}\\right)\\left(\\frac{\\partial z}{\\partial x}\\right) = -1," }, { "math_id": 1, "text": "\\left(\\frac{\\partial x}{\\partial y}\\right) = - \\frac{\\left(\\frac{\\partial z}{\\partial y}\\right)}{\\left(\\frac{\\partial z}{\\partial x}\\right)}" }, { "math_id": 2, "text": "dz = \\left(\\frac{\\partial z}{\\partial x}\\right)dx + \\left(\\frac{\\partial z}{\\partial y}\\right) dy" }, { "math_id": 3, "text": "dy = \\left(\\frac{\\partial y}{\\partial x}\\right) dx" }, { "math_id": 4, "text": "0 = \\left(\\frac{\\partial z}{\\partial x}\\right) \\, dx + \\left(\\frac{\\partial z}{\\partial y}\\right) \\left(\\frac{\\partial y}{\\partial x}\\right) \\, dx" }, { "math_id": 5, "text": "\\left(\\frac{\\partial z}{\\partial x}\\right) = -\\left(\\frac{\\partial z}{\\partial y}\\right) \\left(\\frac{\\partial y}{\\partial x}\\right)" }, { "math_id": 6, "text": "\\left(\\frac{\\partial x}{\\partial y}\\right)\\left(\\frac{\\partial y}{\\partial z}\\right) \\left(\\frac{\\partial z}{\\partial x}\\right) = -1" }, { "math_id": 7, "text": "dx = \\left(\\frac{\\partial x}{\\partial y}\\right) dy + \\left(\\frac{\\partial x}{\\partial z}\\right) dz" }, { "math_id": 8, "text": "dy = \\left(\\frac{\\partial y}{\\partial x}\\right) dx + \\left(\\frac{\\partial y}{\\partial z}\\right) dz" }, { "math_id": 9, "text": "dx = \\left(\\frac{\\partial x}{\\partial y}\\right) \\left[ \\left(\\frac{\\partial y}{\\partial x}\\right) dx + \\left(\\frac{\\partial y}{\\partial z}\\right) dz\\right] + \\left(\\frac{\\partial x}{\\partial z}\\right) dz" }, { "math_id": 10, "text": " \\left(\\frac{\\partial x}{\\partial y}\\right) \\left(\\frac{\\partial y}{\\partial z}\\right) + \\left(\\frac{\\partial x}{\\partial z}\\right) = 0" }, { "math_id": 11, "text": "\\left(\\frac{\\partial x}{\\partial y}\\right) \\left(\\frac{\\partial y}{\\partial z}\\right) \\left(\\frac{\\partial z}{\\partial x}\\right) = -1." }, { "math_id": 12, "text": "PV=nRT" }, { "math_id": 13, "text": "f(P,V,T) = PV-nRT = 0" }, { "math_id": 14, "text": "\n\\begin{align} \nP &= P(V,T) = \\frac{nRT}{V} \\\\[1em]\nV &= V(P,T) = \\frac{nRT}{P} \\\\[1em]\nT &= T(P,V) = \\frac{PV}{nR}\n\\end{align}\n" }, { "math_id": 15, "text": "\n\\begin{align} \n-1 &= \\left( \\frac{\\partial P}{\\partial V} \\right) \\left( \\frac{\\partial V}{\\partial T} \\right) \\left( \\frac{\\partial T}{\\partial P} \\right) \\\\[1em]\n&= \\left( -\\frac{nRT}{V^2} \\right) \\left( \\frac{nR}{P} \\right) \\left( \\frac{V}{nR} \\right) \\\\[1em]\n&= \\left( -\\frac{nRT}{PV} \\right) \\\\[1em]\n& = -\\frac{P}{P} = -1\n\\end{align}\n" }, { "math_id": 16, "text": "\\phi(x,t) = A \\cos (kx - \\omega t) " }, { "math_id": 17, "text": "A \\cos (kx - \\omega t) = A \\cos (k (x + \\Delta x) - \\omega (t + \\Delta t)). " }, { "math_id": 18, "text": " v = \\frac{\\Delta x}{\\Delta t} = \\frac{\\omega}{k}. " }, { "math_id": 19, "text": " \\left( \\frac{\\partial \\phi}{\\partial t} \\right) \\Delta t = \\text{rise from }p_2\\text{ to }\\bar{p}_1\\text{ in time }\\Delta t\\text{ (gold line)} " }, { "math_id": 20, "text": " \\left( \\frac{\\partial \\phi}{\\partial x} \\right) = \\text{slope of the wave (red line) at time }t. 
" }, { "math_id": 21, "text": " \\Delta x = - \\frac{\\left( \\frac{\\partial \\phi}{\\partial t} \\right) \\Delta t}{\\left( \\frac{\\partial \\phi}{\\partial x} \\right)}, " }, { "math_id": 22, "text": " v = \\frac{\\Delta x}{\\Delta t} = - \\frac{\\left( \\frac{\\partial \\phi}{\\partial t} \\right)}{\\left( \\frac{\\partial \\phi}{\\partial x} \\right)}." }, { "math_id": 23, "text": " \\frac{\\Delta x}{\\Delta t} = \\left( \\frac{\\partial x}{\\partial t} \\right)" } ]
https://en.wikipedia.org/wiki?curid=6296376
629881
Mouse keys
Feature of some graphical user interfaces that uses the keyboard as a pointing device Mouse keys is a feature of some graphical user interfaces that uses the keyboard (especially numeric keypad) as a pointing device (usually replacing a mouse). Its roots lie in the earliest days of visual editors when line and column navigation was controlled with arrow keys. Today, mouse keys usually refers to the numeric keypad layout standardized with the introduction of the X Window System in 1984. History. Historically, MouseKeys supported GUI programs when many terminals had no dedicated pointing device. As pointing devices became ubiquitous, the use of mouse keys narrowed to situations where a pointing device was missing, unusable, or inconvenient. Such situations may arise from the following: In 1987, Macintosh Operating System 4.2 Easy Access provided MouseKeys support to all applications. Easy access was (de)activated by clicking the key five times. By the early 2020s, with graphics tablets becoming more common, a configuration change may be required before enabling MouseKeys. MouseKeysAccel. The X Window System MouseKeysAccel control applies action (usually cursor movement) repeatedly while a direction key {1,2,3,4,6,7,8,9} remains depressed. When the key is depressed, an "action_delta" is immediately applied. If the key remains depressed, longer than "mk_delay" milliseconds, some action is applied every "mk_interval" milliseconds until the key is released. If the key remains depressed, after more than "mk_time_to_max" actions have been applied, "action_delta" magnified "mk_max_speed" times, is applied every "mk_interval" milliseconds. The first "mk_time_to_max" actions increase smoothly according to an exponential. formula_0 These five parameters are configurable. Enabling. Under the X Window Systems X.Org and XFree86 used on Unix-like systems such as Linux, BSD, and AIX, MouseKeys (and MouseKeysAccel), when available, is nominally (de)activated by ++. MouseKeys without acceleration (also known as plot mode) is sometimes available with +. This is nominally independent of the window manager in use, but may be overridden, or even made unavailable by a configuration file. Before enabling, it may be necessary to change system configuration. The setxkbmap utility can be used to change the configuration under Xorg: codice_0 There are also various utilities to allow more precise control via user-configurable key bindings, such as xmousekeys and xdotool. Since KDE 5, MouseKeys is enabled and configured by systemsetting5 (Hardware → Input Devices → Mouse → Keyboard Navigation) MouseKeys for Apple Inc.'s macOS is enabled and configured via the Accessibility ([apple] → System Preferences → Accessibility → Mouse &amp; Trackpad). Microsoft changed the method of enabling between Windows 2000, Windows XP (added diagonal cursor movement and MouseKeysAccel), and Windows Vista. Common usage. Replacing the mouse keys. Replacing the mouse keys by the numeric keypad is as follows: Typing (with the numeric keypad) is equivalent to clicking the selected button. By default, the selected button is the primary button (nominally under index finger, left button for most right-handed people and right button for most left-handed people). Typing (with the numeric keypad) selects the alternate button (nominally under ring finger, right button for most right-handed people and left button for most left-handed people). 
Typing (with the numeric keypad) selects the modifier button (nominally under the middle finger, middle button of a 3-button mouse). Typing (with the numeric keypad) selects the primary button. The selection remains in effect until a different button is selected. Assignment of left/middle/right button to primary/modifier/alternate, alternate/modifier/primary, or something else is settable by many means. Some mice have a switch, that swaps assignment of right and left keys. Many laptop bioses have a setting for mouse button assignment. Many window managers have a setting that permutes the assignment. Within the X Window System core protocol, permutation can be applied by xmodmap. Moving the pointer by keys. Other than , all other numeric keys from the numeric keypad are used to move the pointer on the screen. For example, will move the pointer upwards, while will move it diagonally downwards to the left. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
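The MouseKeysAccel ramp-up expression quoted above can be illustrated with a short numerical sketch. Only the formula itself is taken from the description; the default parameter values below are illustrative assumptions, not values mandated by the X Window System:

def mousekeys_action(i, action_delta=1.0, mk_max_speed=10.0,
                     mk_time_to_max=30, mk_curve=0):
    """Pointer movement applied at the i-th repeat while a direction key is held.

    During the ramp-up phase the base action_delta is scaled by
    mk_max_speed * (i / mk_time_to_max) ** ((1000 + mk_curve) / 1000);
    after mk_time_to_max repeats the motion saturates at
    action_delta * mk_max_speed.  Default values here are illustrative only.
    """
    if i >= mk_time_to_max:
        return action_delta * mk_max_speed
    exponent = (1000 + mk_curve) / 1000
    return action_delta * mk_max_speed * (i / mk_time_to_max) ** exponent

# Print a few points of the acceleration curve.
for step in (1, 5, 10, 20, 30, 40):
    print(step, round(mousekeys_action(step), 3))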
[ { "math_id": 0, "text": "\n\\mathrm{action\\_delta} \\times \\mathrm{mk\\_max\\_speed} \\times \\left(\n \\frac{ i } { \\mathrm{mk\\_time\\_to\\_max} } \\right)\n^{\\frac{ 1000 + \\mathrm{mk\\_curve} } { 1000 }}\n" } ]
https://en.wikipedia.org/wiki?curid=629881
6299003
Cubic reciprocity
Conditions under which the congruence x^3 equals p (mod q) is solvable Cubic reciprocity is a collection of theorems in elementary and algebraic number theory that state conditions under which the congruence "x"3 ≡ "p" (mod "q") is solvable; the word "reciprocity" comes from the form of the main theorem, which states that if "p" and "q" are primary numbers in the ring of Eisenstein integers, both coprime to 3, the congruence "x"3 ≡ "p" (mod "q") is solvable if and only if "x"3 ≡ "q" (mod "p") is solvable. History. Sometime before 1748 Euler made the first conjectures about the cubic residuacity of small integers, but they were not published until 1849, 62 years after his death. Gauss's published works mention cubic residues and reciprocity three times: there is one result pertaining to cubic residues in the Disquisitiones Arithmeticae (1801). In the introduction to the fifth and sixth proofs of quadratic reciprocity (1818) he said that he was publishing these proofs because their techniques (Gauss's lemma and Gaussian sums, respectively) can be applied to cubic and biquadratic reciprocity. Finally, a footnote in the second (of two) monographs on biquadratic reciprocity (1832) states that cubic reciprocity is most easily described in the ring of Eisenstein integers. From his diary and other unpublished sources, it appears that Gauss knew the rules for the cubic and quartic residuacity of integers by 1805, and discovered the full-blown theorems and proofs of cubic and biquadratic reciprocity around 1814. Proofs of these were found in his posthumous papers, but it is not clear if they are his or Eisenstein's. Jacobi published several theorems about cubic residuacity in 1827, but no proofs. In his Königsberg lectures of 1836–37 Jacobi presented proofs. The first published proofs were by Eisenstein (1844). Integers. A cubic residue (mod "p") is any number congruent to the third power of an integer (mod "p"). If "x"3 ≡ "a" (mod "p") does not have an integer solution, "a" is a cubic nonresidue (mod "p"). Cubic residues are usually only defined in modulus "n" such that formula_0 (the Carmichael lambda function of "n") is divisible by 3, since for other integer "n", all residues are cubic residues. As is often the case in number theory, it is easier to work modulo prime numbers, so in this section all moduli "p", "q", etc., are assumed to be positive odd primes. We first note that if "q" ≡ 2 (mod 3) is a prime then every number is a cubic residue modulo "q". Let "q" = 3"n" + 2; since 0 = 03 is obviously a cubic residue, assume "x" is not divisible by "q". Then by Fermat's little theorem, formula_1 Multiplying the two congruences we have formula_2 Now substituting 3"n" + 2 for "q" we have: formula_3 Therefore, the only interesting case is when the modulus "p" ≡ 1 (mod 3). In this case the non-zero residue classes (mod "p") can be divided into three sets, each containing ("p"−1)/3 numbers. Let "e" be a cubic non-residue. The first set is the cubic residues; the second one is "e" times the numbers in the first set, and the third is "e"2 times the numbers in the first set. Another way to describe this division is to let "e" be a primitive root (mod "p"); then the first (resp. second, third) set is the numbers whose indices with respect to this root are congruent to 0 (resp. 1, 2) (mod 3). In the vocabulary of group theory, the cubic residues form a subgroup of index 3 of the multiplicative group formula_4 and the three sets are its cosets. Primes ≡ 1 (mod 3). 
A theorem of Fermat states that every prime "p" ≡ 1 (mod 3) can be written as "p" = "a"2 + 3"b"2 and (except for the signs of "a" and "b") this representation is unique. Letting "m" = "a" + "b" and "n" = "a" − "b", we see that this is equivalent to "p" = "m"2 − "mn" + "n"2 (which equals ("n" − "m")2 − ("n" − "m")"n" + "n"2 = "m"2 + "m"("n" − "m") + ("n" − "m")2, so "m" and "n" are not determined uniquely). Thus, formula_5 and it is a straightforward exercise to show that exactly one of "m", "n", or "m" − "n" is a multiple of 3, so formula_6 and this representation is unique up to the signs of "L" and "M". For relatively prime integers "m" and "n" define the rational cubic residue symbol as formula_7 It is important to note that this symbol does "not" have the multiplicative properties of the Legendre symbol; for this, we need the true cubic character defined below. Euler's Conjectures. Let "p" = "a"2 + 3"b"2 be a prime. Then the following hold: formula_8 The first two can be restated as follows. Let "p" be a prime that is congruent to 1 modulo 3. Then: Gauss's Theorem. Let "p" be a positive prime such that formula_9 Then formula_10 One can easily see that Gauss's Theorem implies: formula_11 Jacobi's Theorem (stated without proof). Let "q" ≡ "p" ≡ 1 (mod 6) be positive primes. Obviously both "p" and "q" are also congruent to 1 modulo 3, therefore assume: formula_12 Let "x" be a solution of "x"2 ≡ −3 (mod "q"). Then formula_13 and we have: formula_14 Lehmer's Theorem. Let "q" and "p" be primes, with formula_15 Then: formula_16 where formula_17 Note that the first condition implies: that any number that divides "L" or "M" is a cubic residue (mod "p"). The first few examples of this are equivalent to Euler's conjectures: formula_18 Since obviously "L" ≡ "M" (mod 2), the criterion for "q" = 2 can be simplified as: formula_19 Martinet's theorem. Let "p" ≡ "q" ≡ 1 (mod 3) be primes, formula_20 Then formula_21 Sharifi's theorem. Let "p" = 1 + 3"x" + 9"x"2 be a prime. Then any divisor of "x" is a cubic residue (mod "p"). Eisenstein integers. Background. In his second monograph on biquadratic reciprocity, Gauss says: The theorems on biquadratic residues gleam with the greatest simplicity and genuine beauty only when the field of arithmetic is extended to imaginary numbers, so that without restriction, the numbers of the form "a" + "bi" constitute the object of study ... we call such numbers integral complex numbers. [bold in the original] These numbers are now called the ring of Gaussian integers, denoted by Z["i"]. Note that "i" is a fourth root of 1. In a footnote he adds The theory of cubic residues must be based in a similar way on a consideration of numbers of the form "a" + "bh" where "h" is an imaginary root of the equation "h"3 = 1 ... and similarly the theory of residues of higher powers leads to the introduction of other imaginary quantities. In his first monograph on cubic reciprocity Eisenstein developed the theory of the numbers built up from a cube root of unity; they are now called the ring of Eisenstein integers. Eisenstein said that to investigate the properties of this ring one need only consult Gauss's work on Z["i"] and modify the proofs. This is not surprising since both rings are unique factorization domains. The "other imaginary quantities" needed for the "theory of residues of higher powers" are the rings of integers of the cyclotomic number fields; the Gaussian and Eisenstein integers are the simplest examples of these. Facts and terminology. 
Let formula_22 And consider the ring of Eisenstein integers: formula_23 This is a Euclidean domain with the norm function given by: formula_24 Note that the norm is always congruent to 0 or 1 (mod 3). The group of units in formula_25 (the elements with a multiplicative inverse or equivalently those with unit norm) is a cyclic group of the sixth roots of unity, formula_26 formula_25 is a unique factorization domain. The primes fall into three classes: formula_27 It is the only prime in formula_28 divisible by the square of a prime in formula_25. The prime 3 is said to ramify in formula_25. formula_30 formula_31 for example formula_32 A number is primary if it is coprime to 3 and congruent to an ordinary integer modulo formula_33 which is the same as saying it is congruent to formula_34 modulo 3. If formula_35 one of formula_36 or formula_37 is primary. Moreover, the product of two primary numbers is primary and the conjugate of a primary number is also primary. The unique factorization theorem for formula_25 is: if formula_38 then formula_39 where each formula_40 is a primary (under Eisenstein's definition) prime. And this representation is unique, up to the order of the factors. The notions of congruence and greatest common divisor are defined the same way in formula_25 as they are for the ordinary integers formula_28. Because the units divide all numbers, a congruence modulo formula_41 is also true modulo any associate of formula_41, and any associate of a GCD is also a GCD. Cubic residue character. Definition. An analogue of Fermat's little theorem is true in formula_25: if formula_42 is not divisible by a prime formula_43, formula_44 Now assume that formula_45 so that formula_46 Or put differently formula_47 Then we can write: formula_48 for a unique unit formula_49 This unit is called the cubic residue character of formula_42 modulo formula_43 and is denoted by formula_50 Properties. The cubic residue character has formal properties similar to those of the Legendre symbol: formula_62 where formula_63 Statement of the theorem. Let α and β be primary. Then formula_64 There are supplementary theorems for the units and the prime 1 − ω: Let α = "a" + "b"ω be primary, "a" = 3"m" + 1 and "b" = 3"n". (If "a" ≡ 2 (mod 3) replace α with its associate −α; this will not change the value of the cubic characters.) Then formula_65 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. The references to the original papers of Euler, Jacobi, and Eisenstein were copied from the bibliographies in Lemmermeyer and Cox, and were not used in the preparation of this article. Euler. This was actually written 1748–1750, but was only published posthumously; It is in Vol V, pp. 182–283 of Gauss. The two monographs Gauss published on biquadratic reciprocity have consecutively numbered sections: the first contains §§ 1–23 and the second §§ 24–76. Footnotes referencing these are of the form "Gauss, BQ, § "n"". Footnotes referencing the "Disquisitiones Arithmeticae" are of the form "Gauss, DA, Art. "n"". These are in Gauss's "Werke", Vol II, pp. 65–92 and 93–148 Gauss's fifth and sixth proofs of quadratic reciprocity are in This is in Gauss's "Werke", Vol II, pp. 47–64 German translations of all three of the above are the following, which also has the Disquisitiones Arithmeticae and Gauss's other papers on number theory. Eisenstein. These papers are all in Vol I of his "Werke". Jacobi. This is in Vol VI of his "Werke".
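Euler's conjecture for the cubic character of 2 (stated above: 2 is a cubic residue modulo a prime p = a^2 + 3b^2 exactly when 3 divides b) can be checked numerically. The sketch below is illustrative, not from the literature; it relies on the standard Euler-type criterion that, for p ≡ 1 (mod 3), a is a cubic residue modulo p if and only if a^((p-1)/3) ≡ 1 (mod p):

import math

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_cubic_residue(a, p):
    # For p ≡ 1 (mod 3): a is a cube mod p iff a^((p-1)/3) ≡ 1 (mod p).
    return pow(a, (p - 1) // 3, p) == 1

def a_b_representation(p):
    # Brute-force the (essentially unique) representation p = a^2 + 3*b^2,
    # which exists by the theorem of Fermat quoted above.
    b = 0
    while 3 * b * b <= p:
        a2 = p - 3 * b * b
        a = math.isqrt(a2)
        if a * a == a2:
            return a, b
        b += 1
    raise ValueError("no representation found")

for p in range(7, 1000):
    if is_prime(p) and p % 3 == 1:
        a, b = a_b_representation(p)
        assert is_cubic_residue(2, p) == (b % 3 == 0), (p, a, b)
print("Euler's condition for 2 verified for all primes p ≡ 1 (mod 3) below 1000")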
[ { "math_id": 0, "text": "\\lambda(n)" }, { "math_id": 1, "text": "x^q \\equiv x \\bmod{q}, \\qquad x^{q - 1} \\equiv 1 \\bmod{q}" }, { "math_id": 2, "text": " x^{2q-1} \\equiv x \\bmod{q}" }, { "math_id": 3, "text": " x^{2q-1} = x^{6n + 3} = \\left (x^{2n+1} \\right )^3." }, { "math_id": 4, "text": "(\\Z/p\\Z)^{\\times}" }, { "math_id": 5, "text": "\\begin{align}\n4p &= (2m-n)^2 + 3n^2 \\\\\n&= (2n-m)^2 + 3m^2 \\\\\n&= (m+n)^2 + 3(m-n)^2\n\\end{align}" }, { "math_id": 6, "text": "p = \\frac14 (L^2+ 27M^2)," }, { "math_id": 7, "text": "\\left[\\frac{m}{n}\\right]_3 = \\begin{cases} 1 & m \\text{ is a cubic residue } \\bmod n \\\\ -1 & m \\text{ is a cubic non-residue }\\bmod n \\end{cases}" }, { "math_id": 8, "text": "\\begin{align}\n\\left[\\tfrac{2}{p}\\right]_3 =1 \\quad &\\Longleftrightarrow \\quad 3\\mid b\\\\\n\\left[\\tfrac{3}{p}\\right]_3 =1 \\quad &\\Longleftrightarrow \\quad 9\\mid b \\text{ or } 9\\mid(a\\pm b)\\\\\n\\left[\\tfrac{5}{p}\\right]_3 =1 \\quad &\\Longleftrightarrow \\quad 15\\mid b \\text{ or } 3\\mid b \\text{ and } 5\\mid a \\text{ or } 15\\mid(a\\pm b) \\text{ or } 15\\mid(2a\\pm b)\\\\\n\\left[\\tfrac{6}{p}\\right]_3 =1 \\quad &\\Longleftrightarrow \\quad 9\\mid b \\text{ or } 9\\mid(a\\pm 2b)\\\\\n\\left[\\tfrac{7}{p}\\right]_3 =1 \\quad &\\Longrightarrow \\quad (3\\mid b\\text{ and }7\\mid a) \\text{ or } 21\\mid (b\\pm a) \\text{ or } 7\\mid(4b\\pm a) \\text{ or } 21\\mid b \\text{ or } 7\\mid(b\\pm 2a)\n\\end{align}" }, { "math_id": 9, "text": "p = 3n + 1= \\tfrac14 \\left(L^2+ 27M^2\\right)." }, { "math_id": 10, "text": " L(n!)^3\\equiv 1 \\bmod p." }, { "math_id": 11, "text": "\\left[\\tfrac{L}{p}\\right]_3 = \\left[\\tfrac{M}{p}\\right]_3 =1." }, { "math_id": 12, "text": "p = \\tfrac14 \\left(L^2+ 27M^2\\right), \\qquad q = \\tfrac14 \\left(L'^2+ 27M'^2\\right)." }, { "math_id": 13, "text": "x\\equiv\\pm \\frac{L'}{3M'}\\bmod q," }, { "math_id": 14, "text": "\\begin{align}\n\\left[\\frac{q}{p}\\right]_3 =1 \\quad &\\Longleftrightarrow \\quad \\left[\\frac{\\frac{L+3Mx}{2}p}{q}\\right]_3 =1 \\quad \\Longleftrightarrow \\quad \\left[\\frac{\\frac{L+3Mx}{L-3Mx}}{q}\\right]_3 =1 \\\\\n\\left[\\frac{q}{p}\\right]_3 =1 \\quad &\\Longrightarrow \\quad \\left[\\frac{\\frac{LM'+L'M}{LM'-L'M}}{q}\\right]_3 =1\n\\end{align}" }, { "math_id": 15, "text": "p = \\tfrac14 \\left(L^2+ 27M^2\\right)." }, { "math_id": 16, "text": "\\left[\\frac{q}{p}\\right]_3 = 1 \\quad \\Longleftrightarrow \\quad q \\mid LM \\text{ or } L\\equiv\\pm \\frac{9r}{2u+1} M\\bmod{q}," }, { "math_id": 17, "text": "u\\not\\equiv 0,1,-\\tfrac12, -\\tfrac13 \\bmod q \\quad \\text{and} \\quad 3u+1 \\equiv r^2 (3u-3)\\bmod q." }, { "math_id": 18, "text": "\\begin{align}\n\\left[\\frac{2}{p}\\right]_3 =1 \\quad &\\Longleftrightarrow \\quad L \\equiv M \\equiv 0 \\bmod 2 \\\\\n\\left[\\frac{3}{p}\\right]_3 =1 \\quad &\\Longleftrightarrow \\quad M \\equiv 0 \\bmod 3 \\\\\n\\left[\\frac{5}{p}\\right]_3 =1 \\quad &\\Longleftrightarrow \\quad LM \\equiv 0 \\bmod 5 \\\\\n\\left[\\frac{7}{p}\\right]_3 =1 \\quad &\\Longleftrightarrow \\quad LM \\equiv 0 \\bmod 7\n\\end{align}" }, { "math_id": 19, "text": " \\left[\\frac{2}{p}\\right]_3 =1 \\quad \\Longleftrightarrow \\quad M \\equiv 0 \\bmod 2. " }, { "math_id": 20, "text": " pq = \\tfrac14 (L^2+ 27M^2)." }, { "math_id": 21, "text": "\\left[\\frac{L}{p}\\right]_3 \\left[\\frac{L}{q}\\right]_3 =1\\quad \\Longleftrightarrow \\quad \\left[\\frac{q}{p}\\right]_3 \\left[\\frac{p}{q}\\right]_3 =1." 
}, { "math_id": 22, "text": "\\omega = \\frac{-1 + i\\sqrt 3}{2} = e^\\frac{2\\pi i}{3}, \\qquad \\omega^3 = 1." }, { "math_id": 23, "text": "\\Z[\\omega] = \\left \\{ a + b \\omega \\ : \\ a, b \\in \\Z \\right \\}." }, { "math_id": 24, "text": "N(a + b \\omega) = a^2 -ab + b^2." }, { "math_id": 25, "text": "\\Z[\\omega]" }, { "math_id": 26, "text": "\\left \\{ \\pm 1, \\pm \\omega, \\pm \\omega^2\\right \\}." }, { "math_id": 27, "text": " 3 = -\\omega^2 (1-\\omega)^2." }, { "math_id": 28, "text": "\\Z" }, { "math_id": 29, "text": "q" }, { "math_id": 30, "text": "N(q) = q^2 \\equiv 1 \\bmod{3}." }, { "math_id": 31, "text": "p=N (\\pi) = N (\\overline{\\pi})= \\pi \\overline{\\pi}." }, { "math_id": 32, "text": " 7 = ( 3 + \\omega) ( 2 - \\omega)." }, { "math_id": 33, "text": "(1-\\omega)^2," }, { "math_id": 34, "text": "\\pm 2" }, { "math_id": 35, "text": "\\gcd(N(\\lambda), 3) = 1" }, { "math_id": 36, "text": "\\lambda, \\omega \\lambda," }, { "math_id": 37, "text": "\\omega^2 \\lambda" }, { "math_id": 38, "text": "\\lambda \\neq 0," }, { "math_id": 39, "text": "\\lambda = \\pm\\omega^\\mu(1-\\omega)^\\nu\\pi_1^{\\alpha_1}\\pi_2^{\\alpha_2}\\pi_3^{\\alpha_3} \\cdots, \\qquad \\mu \\in \\{0, 1, 2\\}, \\quad \\nu, \\alpha_1, \\alpha_2, \\ldots \\geqslant 0" }, { "math_id": 40, "text": "\\pi_i" }, { "math_id": 41, "text": "\\lambda" }, { "math_id": 42, "text": "\\alpha" }, { "math_id": 43, "text": "\\pi" }, { "math_id": 44, "text": "\\alpha^{N (\\pi) - 1} \\equiv 1 \\bmod{\\pi}." }, { "math_id": 45, "text": "N(\\pi) \\neq 3" }, { "math_id": 46, "text": "N(\\pi) \\equiv 1 \\bmod{3}." }, { "math_id": 47, "text": "3\\mid N(\\pi) -1." }, { "math_id": 48, "text": "\\alpha^{\\frac{N ( \\pi )- 1}{3}}\\equiv \\omega^k \\bmod\\pi, " }, { "math_id": 49, "text": "\\omega^k." }, { "math_id": 50, "text": "\\left(\\frac{\\alpha}{\\pi}\\right)_3 = \\omega^k \\equiv \\alpha^{\\frac{N(\\pi) - 1}{3}} \\bmod{\\pi}." }, { "math_id": 51, "text": "\\alpha \\equiv \\beta \\bmod{\\pi}" }, { "math_id": 52, "text": "\\left (\\tfrac{\\alpha}{\\pi}\\right )_3=\\left (\\tfrac{\\beta}{\\pi}\\right )_3." }, { "math_id": 53, "text": "\\left (\\tfrac{\\alpha\\beta}{\\pi}\\right )_3=\\left (\\tfrac{\\alpha}{\\pi}\\right )_3\\left (\\tfrac{\\beta}{\\pi}\\right )_3." }, { "math_id": 54, "text": "\\overline{\\left (\\tfrac{\\alpha}{\\pi}\\right )_3}=\\left (\\tfrac{\\overline{\\alpha}}{\\overline{\\pi}}\\right )_3," }, { "math_id": 55, "text": "\\theta" }, { "math_id": 56, "text": "\\left (\\tfrac{\\alpha}{\\pi}\\right )_3=\\left (\\tfrac{\\alpha}{\\theta}\\right )_3" }, { "math_id": 57, "text": "x^3 \\equiv \\alpha \\bmod{\\pi}" }, { "math_id": 58, "text": "\\left(\\tfrac{\\alpha}{\\pi}\\right)_3 = 1." }, { "math_id": 59, "text": "a, b \\in \\Z" }, { "math_id": 60, "text": "\\gcd(a, b) = \\gcd(b, 3) = 1," }, { "math_id": 61, "text": "\\left(\\tfrac{a}{b}\\right)_3 = 1." }, { "math_id": 62, "text": "\\left(\\frac{\\alpha}{\\lambda}\\right)_3 = \\left(\\frac{\\alpha}{\\pi_1}\\right)_3^{\\alpha_1} \\left(\\frac{\\alpha}{\\pi_2}\\right)_3^{\\alpha_2} \\cdots," }, { "math_id": 63, "text": "\\lambda = \\pi_1^{\\alpha_1}\\pi_2^{\\alpha_2}\\pi_3^{\\alpha_3} \\cdots" }, { "math_id": 64, "text": "\\Bigg(\\frac{\\alpha}{\\beta}\\Bigg)_3 = \\Bigg(\\frac{\\beta}{\\alpha}\\Bigg)_3. 
" }, { "math_id": 65, "text": "\n\\Bigg(\\frac{\\omega}{\\alpha}\\Bigg)_3 = \\omega^\\frac{1-a-b}{3}= \\omega^{-m-n},\\;\\;\\;\n\\Bigg(\\frac{1-\\omega}{\\alpha}\\Bigg)_3 = \\omega^\\frac{a-1}{3}= \\omega^m,\\;\\;\\;\n\\Bigg(\\frac{3}{\\alpha}\\Bigg)_3 = \\omega^\\frac{b}{3}= \\omega^n.\n" } ]
https://en.wikipedia.org/wiki?curid=6299003
62998740
2MASS J11011926−7732383
Brown dwarf in the constellation Chamaeleon
2MASS J11011926–7732383 AB (abbreviated 2M1101AB; LUH 1) is a brown dwarf binary about 600 light-years distant in the Chamaeleon constellation. The wide binary pair is separated by about 240 astronomical units. The system was the first discovery of a brown dwarf binary with a separation greater than 20 au. The discovery gave fundamental insights into the formation of brown dwarfs. Previously it was thought that such wide binary brown dwarfs are not formed, or at least are disrupted, at ages of 1–10 Myr. Together with other wide binaries, such as Oph 162225-240515 or UScoCTIO 108, the existence of this system was inconsistent with the ejection hypothesis, a proposal in which brown dwarfs form in a multiple system but are ejected before they gain enough mass to burn hydrogen. The ejection hypothesis predicted a maximum separation of 10 au for brown dwarf binaries. The system was discovered by Kevin Luhman in 2004 during observations of candidate young brown dwarfs in Chamaeleon I, using the Magellan I telescope. The primary 2M1101A has a spectral type of M7.25 ± 0.25, with a mass of about 52 MJ and a temperature of 2838 K (2565 °C; 4649 °F). The secondary 2M1101B has a spectral type of M8.25 ± 0.25, with a mass of about 26 MJ and a temperature of 2632 K (2359 °C; 4279 °F). Based on spectral features, such as sodium and potassium absorption lines, it was concluded that both brown dwarfs are young and part of Chamaeleon I. The brown dwarfs in 2M1101AB are among the youngest substellar members of Chamaeleon I, with an approximate age of 1 million years. Measurements by ESA's Gaia satellite show a similar parallax and proper motion for both brown dwarfs. The system has a relatively low binding energy of formula_0 ergs. The system was detected in X-rays with Chandra and XMM-Newton. While XMM-Newton could not resolve the binary, it detected the primary. Chandra resolved the binary and detected the secondary in the system. These apparently contradictory results were interpreted as strong variability of the X-ray emissions of this system. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "0.91 \\times 10^{41}" } ]
https://en.wikipedia.org/wiki?curid=62998740
6299994
Cable theory
Mathematical model of a dendrite In neuroscience, classical cable theory uses mathematical models to calculate the electric current (and accompanying voltage) along passive neurites, particularly the dendrites that receive synaptic inputs at different sites and times. Estimates are made by modeling dendrites and axons as cylinders composed of segments with capacitances formula_0 and resistances formula_1 combined in parallel (see Fig. 1). The capacitance of a neuronal fiber comes about because electrostatic forces are acting through the very thin lipid bilayer (see Figure 2). The resistance in series along the fiber formula_2 is due to the axoplasm's significant resistance to movement of electric charge. History. Cable theory in computational neuroscience has roots leading back to the 1850s, when Professor William Thomson (later known as Lord Kelvin) began developing mathematical models of signal decay in submarine (underwater) telegraphic cables. The models resembled the partial differential equations used by Fourier to describe heat conduction in a wire. The 1870s saw the first attempts by Hermann to model neuronal electrotonic potentials also by focusing on analogies with heat conduction. However, it was Hoorweg who first discovered the analogies with Kelvin's undersea cables in 1898 and then Hermann and Cremer who independently developed the cable theory for neuronal fibers in the early 20th century. Further mathematical theories of nerve fiber conduction based on cable theory were developed by Cole and Hodgkin (1920s–1930s), Offner et al. (1940), and Rushton (1951). Experimental evidence for the importance of cable theory in modelling the behavior of axons began surfacing in the 1930s from work done by Cole, Curtis, Hodgkin, Sir Bernard Katz, Rushton, Tasaki and others. Two key papers from this era are those of Davis and Lorente de Nó (1947) and Hodgkin and Rushton (1946). The 1950s saw improvements in techniques for measuring the electric activity of individual neurons. Thus cable theory became important for analyzing data collected from intracellular microelectrode recordings and for analyzing the electrical properties of neuronal dendrites. Scientists like Coombs, Eccles, Fatt, Frank, Fuortes and others now relied heavily on cable theory to obtain functional insights of neurons and for guiding them in the design of new experiments. Later, cable theory with its mathematical derivatives allowed ever more sophisticated neuron models to be explored by workers such as Jack, Rall, Redman, Rinzel, Idan Segev, Tuckwell, Bell, and Iannella. More recently, cable theory has been applied to model electrical activity in bundled neurons in the white matter of the brain. Deriving the cable equation. Note, various conventions of "r""m" exist. Here "r""m" and "c""m", as introduced above, are measured per membrane-length unit (per meter (m)). Thus "r""m" is measured in ohm·meters (Ω·m) and "c""m" in farads per meter (F/m). This is in contrast to "R""m" (in Ω·m2) and "C""m" (in F/m2), which represent the specific resistance and capacitance respectively of one unit area of membrane (in m2). 
Thus, if the radius, "a", of the axon is known, then its circumference is 2"πa", and its "r""m", and its "c""m" values can be calculated as: These relationships make sense intuitively, because the greater the circumference of the axon, the greater the area for charge to escape through its membrane, and therefore the lower the membrane resistance (dividing "R""m" by 2"πa"); and the more membrane available to store charge (multiplying "C""m" by 2"πa"). The specific electrical resistance, "ρ""l", of the axoplasm allows one to calculate the longitudinal intracellular resistance per unit length, "r""l", (in Ω·m−1) by the equation: The greater the cross sectional area of the axon, "πa"2, the greater the number of paths for the charge to flow through its axoplasm, and the lower the axoplasmic resistance. Several important avenues of extending classical cable theory have recently seen the introduction of endogenous structures in order to analyze the effects of protein polarization within dendrites and different synaptic input distributions over the dendritic surface of a neuron. To better understand how the cable equation is derived, first simplify the theoretical neuron even further and pretend it has a perfectly sealed membrane ("r""m"=∞) with no loss of current to the outside, and no capacitance ("c""m" = 0). A current injected into the fiber at position "x" = 0 would move along the inside of the fiber unchanged. Moving away from the point of injection and by using Ohm's law ("V" = "IR") we can calculate the voltage change as: where the negative is because current flows down the potential gradient. Letting Δ"x" go towards zero and having infinitely small increments of "x", one can write (4) as: or Bringing "r""m" back into the picture is like making holes in a garden hose. The more holes, the faster the water will escape from the hose, and the less water will travel all the way from the beginning of the hose to the end. Similarly, in an axon, some of the current traveling longitudinally through the axoplasm will escape through the membrane. If "i""m" is the current escaping through the membrane per length unit, m, then the total current escaping along "y" units must be "y·i""m". Thus, the change of current in the axoplasm, Δ"i""l", at distance, Δ"x", from position "x"=0 can be written as: or, using continuous, infinitesimally small increments: formula_3 can be expressed with yet another formula, by including the capacitance. The capacitance will cause a flow of charge (a current) towards the membrane on the side of the cytoplasm. This current is usually referred to as displacement current (here denoted formula_4.) The flow will only take place as long as the membrane's storage capacity has not been reached. formula_4 can then be expressed as: where formula_0 is the membrane's capacitance and formula_5 is the change in voltage over time. The current that passes the membrane (formula_6) can be expressed as: and because formula_7 the following equation for formula_3 can be derived if no additional current is added from an electrode: where formula_8 represents the change per unit length of the longitudinal current. Combining equations (6) and (11) gives a first version of a cable equation: which is a second-order partial differential equation (PDE). By a simple rearrangement of equation (12) (see later) it is possible to make two important terms appear, namely the length constant (sometimes referred to as the space constant) denoted formula_9 and the time constant denoted formula_10. 
The following sections focus on these terms. Length constant. The length constant, formula_9 (lambda), is a parameter that indicates how far a stationary current will influence the voltage along the cable. The larger the value of formula_9, the farther the charge will flow. The length constant can be expressed as: The larger the membrane resistance, "r""m", the greater the value of formula_9, and the more current will remain inside the axoplasm to travel longitudinally through the axon. The higher the axoplasmic resistance, formula_2, the smaller the value of formula_9, the harder it will be for current to travel through the axoplasm, and the shorter the current will be able to travel. It is possible to solve equation (12) and arrive at the following equation (which is valid in steady-state conditions, i.e. when time approaches infinity): Where formula_11 is the depolarization at formula_12 (point of current injection), "e" is the exponential constant (approximate value 2.71828) and formula_13 is the voltage at a given distance "x" from "x"=0. When formula_14 then and which means that when we measure formula_15 at distance formula_9 from formula_12 we get Thus formula_16 is always 36.8 percent of formula_11. Time constant. Neuroscientists are often interested in knowing how fast the membrane potential, formula_17, of an axon changes in response to changes in the current injected into the axoplasm. The time constant, formula_10, is an index that provides information about that value. formula_10 can be calculated as: The larger the membrane capacitance, formula_0, the more current it takes to charge and discharge a patch of membrane and the longer this process will take. The larger the membrane resistance formula_18, the harder it is for a current to induce a change in membrane potential. So the higher the formula_19 the slower the nerve impulse can travel. That means, membrane potential (voltage across the membrane) lags more behind current injections. Response times vary from 1–2 milliseconds in neurons that are processing information that needs high temporal precision to 100 milliseconds or longer. A typical response time is around 20 milliseconds. Generic form and mathematical structure. If one multiplies equation (12) by formula_1 on both sides of the equal sign we get: and recognize formula_20 on the left side and formula_21 on the right side. The cable equation can now be written in its perhaps best known form: This is a 1D heat equation or diffusion equation for which many solution methods, such as Green's functions and Fourier methods, have been developed. It is also a special degenerate case of the Telegrapher's equation, where the inductance formula_22 vanishes and the signal propagation speed formula_23 is infinite. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
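A short numerical sketch ties equations (1)-(3) to the length and time constants and to the steady-state decay described above. The membrane parameters below are assumed, order-of-magnitude figures (roughly those of a large unmyelinated axon), not values taken from the text; only the relations between them come from the derivation above:

import math

# Assumed illustrative parameters (order of magnitude only, not from the article):
a     = 0.25e-3    # axon radius, m
R_m   = 0.2        # specific membrane resistance, ohm * m^2
C_m   = 1e-2       # specific membrane capacitance, F / m^2
rho_l = 0.3        # axoplasm resistivity, ohm * m

# Per-unit-length quantities, following equations (1)-(3).
r_m = R_m / (2 * math.pi * a)       # ohm * m
c_m = C_m * (2 * math.pi * a)       # F / m
r_l = rho_l / (math.pi * a ** 2)    # ohm / m

lam = math.sqrt(r_m / r_l)          # length constant, m
tau = r_m * c_m                     # time constant, s (equals R_m * C_m)

print(f"lambda = {lam * 1e3:.2f} mm, tau = {tau * 1e3:.2f} ms")

# Steady-state decay of a sustained depolarization V0 applied at x = 0:
# V(x) = V0 * exp(-x / lambda).  At x = lambda the value is V0/e, i.e. 36.8% of V0.
V0 = 10e-3                          # 10 mV, an assumed input
for x in (0.0, lam, 2 * lam):
    V = V0 * math.exp(-x / lam)
    print(f"x = {x * 1e3:6.2f} mm  ->  V = {V * 1e3:5.2f} mV")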
[ { "math_id": 0, "text": "c_m" }, { "math_id": 1, "text": "r_m" }, { "math_id": 2, "text": "r_l" }, { "math_id": 3, "text": "i_m" }, { "math_id": 4, "text": "i_c" }, { "math_id": 5, "text": "{\\partial V}/{\\partial t}" }, { "math_id": 6, "text": "i_r" }, { "math_id": 7, "text": "i_m = i_r + i_c" }, { "math_id": 8, "text": "{\\partial i_l}/{\\partial x}" }, { "math_id": 9, "text": "\\lambda" }, { "math_id": 10, "text": "\\tau" }, { "math_id": 11, "text": "V_0" }, { "math_id": 12, "text": "x=0" }, { "math_id": 13, "text": "V_x" }, { "math_id": 14, "text": "x=\\lambda" }, { "math_id": 15, "text": "V" }, { "math_id": 16, "text": "V_\\lambda" }, { "math_id": 17, "text": "V_m" }, { "math_id": 18, "text": " r_m " }, { "math_id": 19, "text": " \\tau " }, { "math_id": 20, "text": "\\lambda^2 = {r_m}/{r_l}" }, { "math_id": 21, "text": "\\tau = c_m r_m" }, { "math_id": 22, "text": "L" }, { "math_id": 23, "text": "1/\\sqrt{LC}" } ]
https://en.wikipedia.org/wiki?curid=6299994
630005
Dirichlet function
Indicator function of rational numbers In mathematics, the Dirichlet function is the indicator function formula_0 of the set of rational numbers formula_1, i.e. formula_2 if x is a rational number and formula_3 if x is not a rational number (i.e. is an irrational number). formula_4 It is named after the mathematician Peter Gustav Lejeune Dirichlet. It is an example of a pathological function which provides counterexamples to many situations. Periodicity. For any real number x and any positive rational number T, formula_5. The Dirichlet function is therefore an example of a real periodic function which is not constant but whose set of periods, the set of rational numbers, is a dense subset of formula_6. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
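Since the rationality of an arbitrary floating-point input cannot be decided, a code illustration only makes sense over exact symbolic numbers. The following sketch assumes the SymPy library and simply wraps its rationality predicate; it also checks the periodicity property stated above for one rational period:

import sympy as sp

def dirichlet(x):
    """Indicator of the rationals for exact SymPy numbers.

    Returns 1 for a rational number, 0 for a number SymPy knows to be
    irrational, and None when rationality cannot be decided symbolically.
    """
    r = sp.sympify(x).is_rational
    return None if r is None else int(r)

print(dirichlet(sp.Rational(3, 7)), dirichlet(sp.sqrt(2)), dirichlet(sp.pi))  # 1 0 0

# Periodicity: for any positive rational T, 1_Q(x + T) = 1_Q(x).
T = sp.Rational(5, 9)
for x in (sp.Rational(3, 7), sp.sqrt(2), sp.pi):
    assert dirichlet(x + T) == dirichlet(x)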
[ { "math_id": 0, "text": "\\mathbf{1}_\\Q" }, { "math_id": 1, "text": "\\Q" }, { "math_id": 2, "text": "\\mathbf{1}_\\Q(x) = 1" }, { "math_id": 3, "text": "\\mathbf{1}_\\Q(x) = 0" }, { "math_id": 4, "text": "\\mathbf 1_\\Q(x) = \\begin{cases}\n1 & x \\in \\Q \\\\\n0 & x \\notin \\Q\n\\end{cases}" }, { "math_id": 5, "text": "\\mathbf{1}_\\Q(x + T) = \\mathbf{1}_\\Q(x)" }, { "math_id": 6, "text": "\\R" } ]
https://en.wikipedia.org/wiki?curid=630005
63001204
Artin's theorem on induced characters
In representation theory, a branch of mathematics, Artin's theorem, introduced by E. Artin, states that a character on a finite group is a rational linear combination of characters induced from all cyclic subgroups of the group. There is a similar but somewhat more precise theorem due to Brauer, which says that the theorem remains true if "rational" and "cyclic subgroup" are replaced with "integer" and "elementary subgroup". Statement. In "Linear Representations of Finite Groups", Chapter 9.2, Theorem 17, Serre states the theorem in the following, more general way: Let formula_0 be a finite group and formula_1 a family of subgroups. Then the following are equivalent: formula_2 formula_3 This in turn implies the general statement, by choosing formula_1 as the family of all cyclic subgroups of formula_0.
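A concrete illustration of the classical statement, under stated assumptions: the choice of the group S3, the brute-force induction formula and the particular coefficients below are made for this example and are not taken from the article. The sketch exhibits the 2-dimensional irreducible character of S3 as a rational (but not integral) combination of characters induced from the trivial characters of cyclic subgroups:

from fractions import Fraction
from itertools import permutations

# G = S3 as permutations of {0, 1, 2}; composition (p*q)(i) = p[q[i]].
G = list(permutations(range(3)))
def mul(p, q): return tuple(p[q[i]] for i in range(3))
def inv(p):    return tuple(sorted(range(3), key=lambda i: p[i]))
e = (0, 1, 2)

def cyclic(g):
    """The cyclic subgroup generated by g."""
    H, x = [e], g
    while x != e:
        H.append(x)
        x = mul(x, g)
    return H

C1 = cyclic(e)            # trivial subgroup
C3 = cyclic((1, 2, 0))    # the subgroup generated by a 3-cycle

def induced_trivial(H):
    """Character of G induced from the trivial character of H:
    Ind(g) = |{x in G : x g x^-1 in H}| / |H|."""
    return {g: Fraction(sum(1 for x in G if mul(mul(x, g), inv(x)) in H), len(H))
            for g in G}

ind1, ind3 = induced_trivial(C1), induced_trivial(C3)

# The 2-dimensional irreducible character of S3: value 2 on the identity,
# 0 on transpositions (one fixed point), -1 on 3-cycles (no fixed point).
def fixed(g): return sum(1 for i in range(3) if g[i] == i)
chi = {g: {3: 2, 1: 0, 0: -1}[fixed(g)] for g in G}

# chi = 1/2 * Ind_{C1}(1) - 1/2 * Ind_{C3}(1): a rational combination of
# characters induced from cyclic subgroups, as the theorem guarantees.
assert all(chi[g] == Fraction(1, 2) * ind1[g] - Fraction(1, 2) * ind3[g] for g in G)
print("chi = 1/2 Ind_{C1}(1) - 1/2 Ind_{C3}(1) verified on all of S3")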
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "G = \\cup_{g\\in G, H \\in X} g^{-1}Hg" }, { "math_id": 3, "text": "\\forall \\chi \\text{ character of } G \\exists \\chi_H, H \\in X, d \\in \\N : d \\chi = \\sum_{H\\in X} Ind_H^G(\\chi_H)" } ]
https://en.wikipedia.org/wiki?curid=63001204
630017
Feynman–Kac formula
Formula relating stochastic processes to partial differential equations The Feynman–Kac formula, named after Richard Feynman and Mark Kac, establishes a link between parabolic partial differential equations and stochastic processes. In 1947, when Kac and Feynman were both faculty members at Cornell University, Kac attended a presentation of Feynman's and remarked that the two of them were working on the same thing from different directions. The Feynman–Kac formula resulted, which proves rigorously the real-valued case of Feynman's path integrals. The complex case, which occurs when a particle's spin is included, is still an open question. It offers a method of solving certain partial differential equations by simulating random paths of a stochastic process. Conversely, an important class of expectations of random processes can be computed by deterministic methods. Theorem. Consider the partial differential equation formula_0 defined for all formula_1 and formula_2, subject to the terminal condition formula_3 where formula_4 are known functions, formula_5 is a parameter, and formula_6 is the unknown. Then the Feynman–Kac formula expresses formula_7 as a conditional expectation under the probability measure formula_8 formula_9 where formula_10 is an Itô process satisfying formula_11 and formula_12 a Wiener process (also called Brownian motion) under formula_8. Intuitive interpretation. Suppose that the position formula_13 of a particle evolves according to the diffusion process formula_14 Let the particle incur "cost" at a rate of formula_15 at location formula_16 at time formula_17. Let it incur a final cost at formula_18. Also, allow the particle to decay. If the particle is at location formula_16 at time formula_17, then it decays with rate formula_19. After the particle has decayed, all future cost is zero. Then formula_20 is the expected cost-to-go, if the particle starts at formula_21 Partial proof. A proof that the above formula is a solution of the differential equation is long, difficult and not presented here. It is however reasonably straightforward to show that, "if a solution exists", it must have the above form. The proof of that lesser result is as follows: Let formula_7 be the solution to the above partial differential equation. Applying the product rule for Itô processes to the process formula_22 one gets: formula_23 Since formula_24 the third term is formula_25 and can be dropped. We also have that formula_26 Applying Itô's lemma to formula_27, it follows that formula_28 The first term contains, in parentheses, the above partial differential equation and is therefore zero. What remains is: formula_29 Integrating this equation from formula_30 to formula_5, one concludes that: formula_31 Upon taking expectations, conditioned on formula_32, and observing that the right side is an Itô integral, which has expectation zero, it follows that: formula_33 The desired result is obtained by observing that: formula_34 and finally formula_35 Remarks. The Feynman–Kac formula can also be interpreted as a method for evaluating functional integrals of a certain form. If formula_49 where the integral is taken over all random walks, then formula_50 where "w"("x", "t") is a solution to the parabolic partial differential equation formula_51 with initial condition "w"("x", 0) = "f"("x"). Applications. Finance. 
In quantitative finance, the Feynman–Kac formula is used to efficiently calculate solutions to the Black–Scholes equation for pricing stock options, and to compute zero-coupon bond prices in affine term structure models. For example, consider a stock price formula_52 undergoing geometric Brownian motion formula_53 where formula_54 is the risk-free interest rate and formula_55 is the volatility. Equivalently, by Itô's lemma, formula_56 Now consider a European call option on the stock formula_52, expiring at time formula_5 with strike formula_57. At expiry, it is worth formula_58 Then, the risk-neutral price of the option, at time formula_30 and stock price formula_59, is formula_60 Plugging into the Feynman–Kac formula, we obtain the Black–Scholes equation: formula_61 where formula_62 More generally, consider an option expiring at time formula_5 with payoff formula_63. The same calculation shows that its price formula_7 satisfies formula_64 Other options, such as the American option, do not have a fixed expiry. Some options have a value at expiry that is determined by past stock prices. For example, an average option has a payoff that is not determined by the underlying price at expiry but by the average underlying price over some predetermined period of time. For these, the Feynman–Kac formula does not directly apply. Quantum mechanics. In quantum chemistry, it is used to solve the Schrödinger equation with the Pure Diffusion Monte Carlo method. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
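The pricing identity above also indicates how such prices are computed in practice when no closed form is available: the conditional expectation in the Feynman–Kac representation can be estimated by simulating paths of the underlying Itô process. The following Python sketch (illustrative only, not taken from the cited references; parameter values and function names are invented for the example) estimates the price of a European call this way and compares it with the closed-form Black–Scholes value, which happens to be available in this particular case.

```python
import numpy as np
from math import log, sqrt, exp, erf


def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def bs_call_closed_form(s0, K, T, r, sigma):
    # Closed-form Black-Scholes price of a European call, used only for comparison.
    d1 = (log(s0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return s0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)


def feynman_kac_call(s0, K, T, r, sigma, n_paths=200_000, n_steps=100, seed=0):
    # Estimate u(x, 0) = E[ exp(-r T) (S_T - K)^+ | S_0 = s0 ] by simulating
    # the log-price d ln S = (r - sigma^2 / 2) dt + sigma dW.  With constant
    # coefficients the Euler-Maruyama step below is in fact exact.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    log_s = np.full(n_paths, log(s0))
    for _ in range(n_steps):
        dW = sqrt(dt) * rng.standard_normal(n_paths)
        log_s += (r - 0.5 * sigma**2) * dt + sigma * dW
    payoff = np.maximum(np.exp(log_s) - K, 0.0)
    return exp(-r * T) * payoff.mean()


if __name__ == "__main__":
    s0, K, T, r, sigma = 100.0, 105.0, 1.0, 0.05, 0.2   # illustrative values
    print("Monte Carlo estimate:", round(feynman_kac_call(s0, K, T, r, sigma), 4))
    print("Closed-form price:   ", round(bs_call_closed_form(s0, K, T, r, sigma), 4))
```

Up to Monte Carlo error the two printed numbers agree, reflecting the fact that the discounted expectation of the payoff solves the terminal-value problem for the Black–Scholes equation.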
[ { "math_id": 0, "text": "\n\\frac{\\partial u}{\\partial t}(x,t) + \\mu(x,t) \\frac{\\partial u}{\\partial x}(x,t) + \\tfrac{1}{2} \\sigma^2(x,t) \\frac{\\partial^2 u}{\\partial x^2}(x,t) -V(x,t) u(x,t) + f(x,t) = 0,\n" }, { "math_id": 1, "text": "x \\in \\mathbb{R}" }, { "math_id": 2, "text": "t \\in [0, T]" }, { "math_id": 3, "text": "\nu(x,T)=\\psi(x),\n" }, { "math_id": 4, "text": "\\mu,\\sigma,\\psi,V,f" }, { "math_id": 5, "text": "T" }, { "math_id": 6, "text": "u:\\mathbb{R} \\times [0,T] \\to \\mathbb{R}" }, { "math_id": 7, "text": "u(x,t)" }, { "math_id": 8, "text": "Q" }, { "math_id": 9, "text": "\nu(x,t) = E^Q\\left[e^{-\\int_t^T V(X_\\tau,\\tau)\\, \\mathrm{d}\\tau}\\psi(X_T) + \\int_t^T e^{-\\int_t^\\tau V(X_s,s)\\, \\mathrm{d}s}f(X_\\tau,\\tau)\\,\\mathrm{d}\\tau \\,\\Bigg|\\, X_t=x \\right]\n" }, { "math_id": 10, "text": "X" }, { "math_id": 11, "text": "\n\\mathrm{d}X_t = \\mu(X_t,t)\\,\\mathrm{d}t + \\sigma(X_t,t)\\,\\mathrm{d} W^Q_t,\n" }, { "math_id": 12, "text": "W_t^{Q}" }, { "math_id": 13, "text": "X_t" }, { "math_id": 14, "text": "\ndX_t = \\mu(X_t,t)\\,\\mathrm{d}t + \\sigma(X_t,t)\\,\\mathrm{d} W^Q_t.\n" }, { "math_id": 15, "text": "f(X_s, s)" }, { "math_id": 16, "text": "X_s" }, { "math_id": 17, "text": "s" }, { "math_id": 18, "text": "\\psi(X_T)" }, { "math_id": 19, "text": "V(X_s, s)" }, { "math_id": 20, "text": "u(x, t)" }, { "math_id": 21, "text": "(t, X_t = x)." }, { "math_id": 22, "text": " Y(s) = \\exp\\left(-\\int_t^s V(X_\\tau,\\tau)\\, d\\tau\\right)u(X_s,s) +\n\\int_t^s \\exp\\left(-\\int_t^r V(X_\\tau,\\tau)\\, d\\tau\\right)\nf(X_r,r) \\, dr" }, { "math_id": 23, "text": "\n\\begin{align}\ndY_s = {} & d\\left(\\exp\\left(-\\int_t^s V(X_\\tau,\\tau)\\, d\\tau\\right)\\right) u(X_s,s) +\n\\exp\\left(-\\int_t^s V(X_\\tau,\\tau)\\, d\\tau\\right)\\,du(X_s,s) \\\\[6pt]\n& {} + d\\left(\\exp\\left(-\\int_t^s V(X_\\tau,\\tau)\\, d\\tau\\right)\\right)du(X_s,s) + d\\left(\\int_t^s \\exp\\left(-\\int_t^r V(X_\\tau,\\tau)\\, d\\tau\\right) f(X_r,r) \\, dr\\right)\n\\end{align}\n" }, { "math_id": 24, "text": "d\\left(\\exp\\left(- \\int_t^s V(X_\\tau,\\tau)\\, d\\tau\\right)\\right) = -V(X_s,s) \\exp\\left(-\\int_t^s V(X_\\tau,\\tau)\\, d\\tau\\right) \\,ds," }, { "math_id": 25, "text": " O(dt \\, du) " }, { "math_id": 26, "text": " d\\left(\\int_t^s \\exp\\left(- \\int_t^r V(X_\\tau,\\tau)\\, d\\tau\\right)f(X_r,r)dr\\right) =\n\\exp\\left(-\\int_t^s V(X_\\tau,\\tau)\\, d\\tau\\right) f(X_s,s) ds. " }, { "math_id": 27, "text": "du(X_s,s)" }, { "math_id": 28, "text": "\n\\begin{align}\ndY_s = {} & \\exp\\left(-\\int_t^s V(X_\\tau,\\tau)\\, d\\tau\\right)\\,\\left(-V(X_s,s) u(X_s,s) +f(X_s,s)+\\mu(X_s,s)\\frac{\\partial u}{\\partial X}+\\frac{\\partial u}{\\partial s}+\\tfrac{1}{2}\\sigma^2(X_s,s)\\frac{\\partial^2 u}{\\partial X^2}\\right)\\,ds \\\\[6pt]\n& {} + \\exp\\left(- \\int_t^s V(X_\\tau,\\tau)\\, d\\tau\\right)\\sigma(X,s)\\frac{\\partial u}{\\partial X}\\,dW.\n\\end{align}\n" }, { "math_id": 29, "text": "\ndY_s=\\exp\\left(-\\int_t^s V(X_\\tau,\\tau)\\, d\\tau\\right)\\sigma(X,s)\\frac{\\partial u}{\\partial X}\\,dW." 
}, { "math_id": 30, "text": "t" }, { "math_id": 31, "text": "\nY(T) - Y(t) = \\int_t^T\n\\exp\\left(-\\int_t^s V(X_\\tau,\\tau)\\, d\\tau\\right)\n\\sigma(X,s)\\frac{\\partial u}{\\partial X}\\,dW.\n" }, { "math_id": 32, "text": "X_{t} = x" }, { "math_id": 33, "text": "\nE[Y(T)\\mid X_t=x] = E[Y(t)\\mid X_t=x] = u(x,t).\n" }, { "math_id": 34, "text": "\nE[Y(T)\\mid X_t=x] =\nE \\left [\\exp\\left(-\\int_t^T V(X_\\tau,\\tau)\\, d\\tau\\right) u(X_T,T) + \\int_t^T \\exp\\left(-\\int_t^r V(X_\\tau,\\tau)\\, d\\tau\\right)f(X_r,r)\\,dr \\,\\Bigg|\\, X_t=x \\right ]\n" }, { "math_id": 35, "text": " u(x,t) = E \\left [\\exp\\left(-\\int_t^T V(X_\\tau,\\tau)\\, d\\tau\\right) \\psi(X_T) + \\int_t^T \\exp\\left(-\\int_t^s V(X_\\tau,\\tau)\\,d\\tau\\right) f(X_s,s)\\,ds \\,\\Bigg|\\, X_t=x \\right ]" }, { "math_id": 36, "text": "f(x,t)" }, { "math_id": 37, "text": " u:\\mathbb{R}^N\\times [0,T] \\to\\mathbb{R}" }, { "math_id": 38, "text": "\\frac{\\partial u}{\\partial t} + \\sum_{i=1}^N \\mu_i(x,t)\\frac{\\partial u}{\\partial x_i} + \\frac{1}{2} \\sum_{i=1}^N \\sum_{j=1}^N\\gamma_{ij}(x,t) \\frac{\\partial^2 u}{\\partial x_i \\partial x_j} -r(x,t)\\,u = f(x,t), " }, { "math_id": 39, "text": " \\gamma_{ij}(x,t) = \\sum_{k=1}^N \\sigma_{ik}(x,t)\\sigma_{jk}(x,t)," }, { "math_id": 40, "text": "\\gamma = \\sigma \\sigma^{\\mathrm{T}}" }, { "math_id": 41, "text": "\\sigma^{\\mathrm{T}}" }, { "math_id": 42, "text": "\\sigma" }, { "math_id": 43, "text": "A" }, { "math_id": 44, "text": "\\frac{\\partial u}{\\partial t} + A u -r(x,t)\\,u = f(x,t), " }, { "math_id": 45, "text": "\n\\exp\\left(-\\int_0^t V(x(\\tau))\\, d\\tau\\right)\n" }, { "math_id": 46, "text": "u V(x) \\geq 0" }, { "math_id": 47, "text": "\nE\\left[\\exp\\left(- u \\int_0^t V(x(\\tau))\\, d\\tau\\right) \\right] = \\int_{-\\infty}^{\\infty} w(x,t)\\, dx " }, { "math_id": 48, "text": "\\frac{\\partial w}{\\partial t} = \\frac{1}{2} \\frac{\\partial^2 w}{\\partial x^2} - u V(x) w.\n" }, { "math_id": 49, "text": "\nI = \\int f(x(0)) \\exp\\left(-u\\int_0^t V(x(t))\\, dt\\right) g(x(t))\\, Dx " }, { "math_id": 50, "text": " I = \\int w(x,t) g(x)\\, dx " }, { "math_id": 51, "text": " \\frac{\\partial w}{\\partial t} = \\frac{1}{2} \\frac{\\partial^2 w}{\\partial x^2} - u V(x) w " }, { "math_id": 52, "text": "S_t" }, { "math_id": 53, "text": "dS_t = (r_t dt + \\sigma_t dW_t) S_t\n" }, { "math_id": 54, "text": "r_t" }, { "math_id": 55, "text": "\\sigma_t" }, { "math_id": 56, "text": "\nd\\ln S_t = \\left(r_t - \\tfrac 1 2 \\sigma_t^2\\right)dt + \\sigma_t \\, dW_t.\n" }, { "math_id": 57, "text": "K" }, { "math_id": 58, "text": "(X_T - K)^+." }, { "math_id": 59, "text": "x" }, { "math_id": 60, "text": "\nu(x, t) = E\\left[e^{-\\int_t^T r_s ds} (S_T - K)^+ | \\ln S_t = \\ln x \\right].\n" }, { "math_id": 61, "text": "\\begin{cases}\n\\partial_t u + Au - r_t u = 0 \\\\\nu(x, T) = (x-K)^+\n\\end{cases}" }, { "math_id": 62, "text": "\nA = (r_t -\\sigma_t^2/2)\\partial_{\\ln x} + \\frac 12 \\sigma_t^2 \\partial_{\\ln x}^2 = r_t x\\partial_x + \\frac 1 2 \\sigma_t^2 x^2 \\partial_{x}^2.\n" }, { "math_id": 63, "text": "g(S_T)" }, { "math_id": 64, "text": "\\begin{cases}\n\\partial_t u + Au - r_t u = 0 \\\\\nu(x, T) = g(x).\n\\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=630017
6300421
The Message in the Bottle
1975 collection of essays on semiotics by Walker Percy The Message in the Bottle: How Queer Man Is, How Queer Language Is, and What One Has to Do with the Other is a collection of essays on semiotics written by Walker Percy and first published in 1975. Percy writes at what he sees as the conclusion of the modern age and attempts to create a middle ground between the two dying ideologies of that age: Judeo-Christian ethics, which give the individual freedom and responsibility; and the rationalism of science and behavioralism, which positions man as an organism in an environment and strips him of this freedom. "The Delta Factor". "The Delta Factor," first published in January 1975 in the Southern Review, sets out the overall themes of the entire book. Percy begins by asking why modern humans are so sad despite the 20th century's technological innovations and unprecedented levels of comfort. More specifically, he is interested in why humans feel happy in bad situations and sad in good situations (a question also posed in his novel "The Last Gentleman"). He posits that this overarching sadness is due to contemporary society's position between two ages: the modern age, which is more or less slowly becoming out of date, and a new age, which is dawning but has not yet truly dawned. The anthropological theories of the modern age, according to Percy, "no longer work and the theories of the new age are not yet known" (7). Percy therefore sees his task as coming up with a new theory of humanity, which he chooses to center on language, the human attribute that separates us from the animals; "The Message in the Bottle" will attempt to explain humans' strange behavior and unexplained sadness by explaining how humans deal with language and symbols. Percy says that the current theories of humanity make us into a sort of monster, a "centaur organism-plus soul . . . one not different from beasts yet somehow nevertheless possessing 'freedom' and 'dignity' and 'individuality' and 'mind' and such" (9). Modern humanity is, then, the collision of Judeo-Christian ethics and its focus upon individual freedom and scientific behaviorism, which says that humans are no different from the animals—in other words, modern people believe themselves to be no different from animals and yet somehow above them. What's more, no existing research really deals with the question of how language really works, of "how" human beings use and understand the symbols of linguistics. Percy puts this question into a sort of no-man's land, what he calls a "terra incognita" (17), between linguistics and psychology, the former of which deals with the results of language and the latter of which deals with the way people respond to language. The Delta Factor, Percy's theory of language, is framed in the context of the story of Helen Keller's learning to say and sign the word "water" while Annie Sullivan poured water over her hands and repeatedly made the signs for the word into her hand. A behaviorist linguistic reading of this scene might suggest a causal relationship—in other words, Keller felt Sullivan's sign-language stimulus in her hand and in response made a connection in her brain between the signifier and the signified. This is too simplistic a reading, says Percy, because Keller was receiving from both the signifier (the sign for "water") and the referent (the water itself). 
This creates a triangle between "water" (the word), water (the liquid), and Helen, in which all three corners lead to the other two corners and which Percy says is "absolutely irreducible" (40). This linguistic triangle is thus the building block for all of human intelligence. The moment when this Delta Δ entered the mind of a person—whether this happened via random chance or through the intervention of a deity—that person became human. Further, in Delta Δ, the corners of the triangle are removed from their behaviorist contexts. Helen Keller, in other words, becomes something other than just an organism in her environment because she is coupling two unrelated things—"water" the word and water the liquid—together. Likewise, water the liquid is made something more than water the liquid because Keller has coupled it with the arbitrary sound "water", and "water" the word becomes more than just the sound of the word "water" (and the shape of the sign language for "water"). In this way, "the Delta phenomenon yielded a new world and maybe a new way of getting at it. It was not the world of organisms and environments but just as real and twice as human" (44)—humans are made whole by the Delta Δ where the popular notions of religion and science had split us in two. "The Loss of the Creature". "The Loss of the Creature" is an exploration of the way the more or less objective reality of the individual is obscured in and ultimately lost to systems of education and classification. Percy begins by discussing the Grand Canyon—he says that, whereas García López de Cárdenas, who discovered the canyon, was amazed and awed by it, the modern-day sightseer can see it only through the lens of "the symbolic complex which has already been formed in the sightseer's mind" (47). Because of this, the sightseer does not appreciate the Grand Canyon on its own merits; he appreciates it based on how well or poorly it conforms to his preexisting image of the Grand Canyon, formed by the mythology surrounding it. What is more, instead of approaching the site directly, he approaches it by taking photographs, which, Percy says, is not approaching it at all. By these two processes—judging the site on postcards and taking his own pictures of it instead of confronting it himself—the tourist subjugates the present to the past and to the future, respectively. Percy suggests several ways of getting around this situation, almost all of them involving bypassing the structure of organized approaches—one could go off the beaten path, for example, or be removed from the presence of other tourists by a natural disaster. This bypassing, however, can lead to other problems: Namely, the methods used are not necessarily authentic; "some stratagems obviously serve other purposes than that of providing access to being" (51). Percy gives the example of a pair of tourists who, disgusted with the proliferation of other tourists in the popular areas of Mexico, stumble into a tiny village where a festival is taking place. The couple enjoys themselves and repeatedly tells themselves, "Now we are really living," but Percy judges their experience inauthentic because they are constantly concerned that things may not go perfectly. When they return home, they tell an ethnologist friend of theirs about the festival and how they wish he could have been there. This, says Percy, is their real problem: "They wanted him, not to share their experience, but to certify their experience as genuine" (53). 
The layman in modern society, then, surrenders his ownership to the specialist, whom he believes has authority over him in his field. This creates a caste system of sorts between laymen and experts, but Percy says that the worst thing about this system is that the layman does not even realize what it is he has lost. This is most evident in education. Percy alludes to a metaphor he had used in "The Delta Factor," that of the literature student who cannot read a Shakespearean sonnet that is easily read by a post apocalyptic survivor in Aldous Huxley's "Brave New World". The literature student is blocked from the sonnet by the educational system built around it, what Percy calls its "package." Instead of transmitting the subject of education, education often transmits only itself, and the student does not view the subject as open and delightful, nor does he view himself as sovereign. Percy offers two ways around this, both involving, as did his solution to the problem of the Grand Canyon, an indirect approach. Either the student can suffer some sort of ordeal that opens the text to him in a new way; or else he can be apprenticed to a teacher who takes a very unusual approach to the subject. He suggests that biology students be occasionally taught literature, and vice versa. The overall effect of this obscuration by structure is one of the basic conditions of modern society: The individual layman is reduced to being a consumer. The individual thing becomes lost to the systems of classification and theory created for the consumer, and the individual man loses all sense of ownership. The solution to this problem, according to Percy, is not to get rid of museums but for "the sightseer to be prepared to enter into a struggle to recover a sight from a museum" (62). "Metaphor as Mistake". Percy begins "Metaphor as Mistake" (1958) with five metaphors which were misunderstood; these misunderstood metaphors, he says, have nevertheless "resulted in an authentic poetic experience . . . an experience, moreover, which was notably absent before the mistake was made" (65). Metaphor, in Percy's view, is a way of getting at the real nature of a thing by comparing it to something that it does not resemble on the surface. It becomes a tool for ontological exploration. Existing inquiries have failed to notice this, however, because they either abstract their viewpoints from both effective and ineffective metaphors (this is the path of philosophy) or focus on the individual effects of the individual poet (this is the path of literary criticism). As he does in "The Delta Factor," Percy wishes to seek a middle ground between these two extremes. He makes it clear, however, that the metaphor has scientific, rather than strictly poetic, value for him; he sees metaphor as a method of getting at the way things actually are. Two qualifications exist for the metaphor as mistake: It must be given by an authority figure, and it must have a certain aura of mystery around it. In this way, the metaphor becomes both right (given by authority) and wrong (not strictly true as a descriptor). Percy's example is of a boy on a hunting trip who sees a bird and asks what it is. The African-American accompanying him and his father calls the bird a "blue dollar", which excites the boy until his father corrects him and tells him the bird is actually a blue darter. The term "blue darter" may describe what the bird does and what color it is, says Percy, but "blue dollar" in some mystical way gets at what the bird actually "is". 
When the boy saw the bird, he formed a subjective impression of it—what Percy calls the bird's "apprehended nature" (72)--and in some sense the mistaken name "blue dollar" gets right at the heart of that apprehended nature. In this way, the metaphor becomes both science and poetry; it is a sort of subjective science, the ontology of the world as it appears to the individual. Percy says that we can only understand reality through metaphor. We never "per"ceive the world--"We can only "con"ceive being, sidle up to it by laying something else alongside" (72). All language, then, and perhaps all intelligence, are therefore metaphorical. When one person makes a metaphor, the people who hear it hope that it corresponds to their subjective understanding of reality—an understanding they may or may not even be consciously aware of. The poet, according to Percy, has a double-edged task: his metaphors must ring true, but they must be flexible enough to reverberate with his audience and for them to gain a new understanding of the things to which they refer. The poet must refer to things we already know, but he must do so in new ways; in this, he gives his audience access to their own private experiences. This can lead to a sort of blind groping for metaphors, however, a process which Percy sees as effective but harmful. Authority and intention are essential for metaphors to be shared between the Namer and the Hearer. "Notes for a Novel About the End of the World". "A Novel About the End of the World" makes a striking counterpart to Percy's novel "Love in the Ruins", subtitled "The Adventures of a Bad Catholic at a Time Near the End of the World" and published only four years after the essay. The apocalyptic novel is a form of prophecy, a warning about what will happen if society does not change its ways. This sort of novel is written by a particular type of novelist, one defined not by his quality but by his goals. Percy refers to this novelist as a "religious novelist" but notes that he includes atheists such as Jean-Paul Sartre and Albert Camus in this category because of their "passionate conviction about man's nature, the world, and man's obligation in the world" (103). The religious novelist, says Percy, has very different concerns than the mainstream of the society in which he lives—so different, in fact, one must decide whether society is blind or whether the novelist is insane or a charlatan. The central difference between the novelist and the rest of society is that the former tends to be pessimistic and the latter tends to be optimistic. The novelist has a "profound disquiet" (106). The novelist is set off in particular against the scientist and against the "new theologian"—from the former because the novelists insists on the individual while science measures only categories, and from the latter because the novelist still believes in original sin. The Christian novelist in particular recognizes that the problem is not that Christianity is not relevant to modern society but that man's blind acceptance of "the magical aura of science, whose credentials he accepts for all sectors of reality" (113) is changing his consciousness to the point where he can no longer recognize the Gospel. The novel about the end of the world, then, is an attempt to shock the complacent reader out of his scientism and into the light of the real world. "The Message in the Bottle". In "The Message in the Bottle," Percy attempts to separate information into two categories: knowledge and news. 
The essay is built on an extended metaphor of a castaway with amnesia who remembers nothing but the island he washes up on and who creates a new life with the natives of the island. The castaway frequently finds on the beach bottles that have one-sentence messages on the inside, such as "There is fresh water in the next cove," "The British are coming to Concord," or "Lead melts at 330 degrees." A group of scientists lives on the island, and they separate these messages into two categories: empirical facts and analytic facts. The castaway is disturbed by this classification, however, because it does not take into account the messages' effect on the reader. Thus, he comes up with the categories of knowledge and news. Knowledge belongs to science, to psychology and to the arts; simply put, it is that "which can be arrived at anywhere by anyone and at any time" (125). News, on the other hand, bears directly and immediately on his life. The scientists, because of their commitment to objectivity above all else, cannot recognize the difference between these two categories. A piece of news is not verified the way a piece of knowledge is—whereas knowledge can be verified empirically, news can be verified empirically only after the hearer has already heeded its call. The castaway must first, however, decide when to heed the call of a piece of news and when to ignore it. Percy sets forth three criteria for the acceptance of a piece of news: (a) its relevance to the hearer's predicament; (b) the trustworthiness of the newsbearer; and (c) its likelihood or possibility. As news depends so heavily on its bearer, the messages in bottles that the castaway finds cannot be sufficient credential in and of themselves. The castaway must know something about the person who wrote them. The problem with modern society is that too many people attempt to cure their feelings of homelessness by seeking knowledge in the fields of science and art. Their real problem, says Percy, is that their feelings of homelessness come from their being stranded on the island—they should be looking for news from across the seas. Percy links this distinction between news and knowledge to how the world understands the Christian gospel. He writes that the gospel must be understood as a piece of news and not a piece of knowledge. To Percy, the gospel is news from across the seas. "Culture: The Antinomy of the Scientific Method". In this essay, Percy tries to expose the limitation of our present-day science and scientific method when it is applied to human beings, especially to human cultures. Science, for its own completeness, must be able to address humans and our cultures. He attributes the limitation to the fact that science does not take all humans' assertions as valid statements. After proving his point that discarding assertions leads to antinomies, i.e., contradictions between human culture and science, though by themselves each is reasonable, he goes on to propose a radical change to the presumptions of the scientific method. The outline here does not strictly follow the original essay to the letter in the sense that there are references to newer pieces of information, but the essence of the essay is unconditionally maintained. The additional material only bolsters Percy’s point of view. The culmination of a scientific method is always an assertion. 
For example, the physical, space-time event of energy exchange happening inside a closed system can be examined carefully, resulting in an assertion that mass cannot be created or destroyed. This can be asserted more concisely with an equation of the form formula_0. In another experiment, say, a mass formula_1 is taken and converted into its equivalent energy. This is also a physical space-time event, happening right in front of us. The event can be asserted by a scientist as formula_2, and all scientists (and non-scientists too) can interpret and understand the associated meaning. In both the examples above, there are two different kinds of activities happening, and there is a definite qualitative difference between (1) the event itself and (2) the assertion about it. The first is a palpable physical space-time happening which can be observed by everybody. The second requires an understanding of the meaning, by which an intellect (Percy uses this word in a generic sense, meaning any human being) can grasp what is asserted. (2) is not part of (1). There is a difference between reality and humans' understanding of it via assertory statements. All real events, under the investigation of the scientific method (whose main parts are hypothesizing, experimental verification, and conclusion), result in assertions. The two activities correspond to each other, but are qualitatively different. The scientific method is a functional method in the sense that relations are expressed in terms of functions (purpose, role, its job, formula_3, etc.). What purpose is served? How do the causes, formula_5, work together for the function formula_4 to happen? Let us consider another reality in front of us, viz., culture. By culture, we mean the activities of human beings which are not primarily physiological or psychological but simply assertory: for example, language, art, religions, myths, science (as an activity) and economics (also as an activity). The question is, why shouldn't the scientific method be applied to culture also? What happens when the functional method of the sciences is applied to cultural phenomena? Most scientists cringe at this idea, but they should not. Science – or natural science – should be able to explain everything in this universe. In fact, it has been able to explain much, reaching far out into space and deep into physical matter as well as biological organisms, yet it has left us – human organisms – in the lurch, hanging there like orphans. While cultural anthropology or ethnology has attempted to study and explain culture, it ignores the human being's role in making that culture, leading to a dichotomized body and mind. It ignores much of the reality that we see every day in front of our eyes – our culture – and therefore science is incomplete. If we force the scientific method on cultures, we run into contradictions and antinomies. Percy demonstrates this with three quick examples, but before that he characterizes assertions in detail. Three types of assertions are recognized. Type (1) is a classificatory goal of the scientific method, whereas types (2) and (3) are aimed at establishing functional relations. Culture as a subject of the scientific method. Culture is a group of rules, or assertions, accepted by (or imposed on) the local people. Culture is defined as all human inheritance, material as well as cultural and spiritual, propagated via genes and memes. It includes hoes, baskets, manuscripts, monuments, language, myths, art, religions, and even science.
Culture is the totality of the different ways in which the human spirit of the local people construes the world and asserts its knowledge and belief. But culture is not just its artifacts, just as the heart of science is not the paraphernalia of the laboratory; the heart of science is the method, the hunch, the theory, the formula or law which is the final product and can be disseminated. Similarly, the artwork is not the paint on the canvas or the print on the page; it is the moment of creation by the artist and the moment of understanding by the viewer. Assertions are the basic, most elemental means of intersubjective communication (subjective understandings between people) used by humans, and they carry meanings. Culture is part of our reality which (natural) science, therefore, must explain. The objective of anthropology as a science is to understand culture as a dynamic and lawful process that is happening in our midst. The main reason the scientific method has failed, as will be demonstrated later, is that culture is a mental activity which our Enlightenment sciences have had trouble handling, artificially bifurcating humans into mind and body. Assertions (which, incidentally, science also uses extensively) carry "meaning" and must be "understood;" they cannot be described as just space-time events. Antinomies of scientific method and culture. Can culture be understood by the scientific method? Can the functional aspects of the scientific method be applied to culture, or are there inherent limitations in the scientific method? Percy looks at three examples in which the scientific method, when applied to aspects of culture, leads to contradictions – antinomies – and therefore fails. "Jesus walked on water" "Rama vanquished the ten-headed Ravana to safeguard dharma" "Hanuman jumped at the sun and ate it, which is why his cheeks are swollen" "What is this? Milk." "What is this [pointing at a] "round thing"? Ball." "What color is this? Blue." "formula_8 (Kepler's third law of planetary orbits)" "formula_9 (Newton's law of gravitation)" "formula_10 (Einstein's mass-energy equivalency)" Such scientists have two options, viz. (1) Applied science: their science must be meaningful, with a functional principle (good or bad is not the issue here) that can only be gauged by the biological, cultural, and economic needs of humans (sociobiological science); or (2) Pure science: scientists can do science for the sake of the pure knowledge that comes with it, as a purely semantico-logical pursuit, looking for natural laws and refusing to deal with the intersubjectivity of human organisms, like a "private science." Both will lead to antinomy. The antinomy of the second option – of science as a pure endeavor – is also overt. A theoretical physicist studying black holes contributes "nothing" to societies on earth, thereby leaving human organisms and societies out of consideration. When a Mars landing happened, late-night talk shows asked: what's in it for me? Scientists must bend over backwards to give a convincing answer, but the average person on the street (who is in the majority) is quite right – there is nothing in it for him, at least in the foreseeable future. In all of science and its assertory statements, a human being is always involved in interpreting. This means the claim to "infallible knowledge" itself becomes suspect, and any claim to valid knowledge by the knower, however modest it may be, is subjective.
In other words, the scientific method's presupposition that there is something to be known, that a degree of knowledge is possible, becomes questionable. As Friedrich Nietzsche said, "There is no absolute truth, only interpretations!" Source of the antinomy. In physics, chemistry, and the biological sciences (of non-human organisms), when a real, physical space-time event in which a state A results in a state B happens and is represented by an assertory statement such as formula_3 (causes formula_5 → effect formula_4) or a natural law, no antinomy is observed, because the separation between the world event and the intersubjective assertion is not violated. The subjects of study – rocks, planets, chemicals, germs, rats, dogs – do not butt in to reinterpret or misinterpret; they do not have a mind of their own, and scientists win. However, when humans are the subject of study of the scientific method, or when science (science and the scientific method) itself is subjected to the scientific method, Percy demonstrates that the assertory statements of human organisms (exemplified by the examples of myth, language, and science itself) cannot be recognized as classificatory or functional (formula_3 or formula_6 is formula_7) types, and this leads to contradictions. The main reason is that the assertions (artwork, myths, stories, image-worlds, etc.) that humans make are real (whether they are true or false or nonsense cannot be determined and is, in fact, irrelevant), with palpable results in societies, "but they are not just space-time events;" they are immaterial (spiritual rather than physical, lacking state changes or energy exchanges) and "mental", i.e., in the minds of the humans involved, with meanings and understandings! The scientific method is supervenient upon the assertory acts of symbolization, which is a trademark of human beings, akin to how molecules are supervenient upon atoms. Furthermore, just as some basic properties of atoms are lost in molecules, the scientific method also loses some basic properties of the underlying human being. Therefore, if science alone – as it stands today – were the organon of reality and other cognitive claims were denied, we would end up with mind-body duality, with antinomies, and with incomplete (natural) sciences. Hence the solution calls for a radical paradigm shift in the scientific method. Towards a radical anthropology. While it is perfectly legitimate to study languages, religions, and societies objectively, just as we do tools, hunting, warfare, etc., it is not sufficient. The creature himself who makes the culture possible should not be ignored. It is also futile to try to find literal and rational meaning in the assertory statements of myths; that will lead us down a rabbit hole and take us nowhere. Present-day ethnological anthropology studies human customs, institutions, artifacts, products of mental exercise, societies, and the "laws" of their development, but – notice – the role of the human being as an active culture member is missing! Once Percy recognizes that the assertory statements of human beings are the root cause leading to antinomies, he suggests the following radical change in the scientific method when it is applied to the study of human beings, i.e., to anthropology. For this, Percy's radical proposal is to allow as eligible "all real events," not merely the space-time physical events involving energy exchanges and transformations. It should treat all assertory statements as such, on a par with one another.
All assertory behaviors of humans – mythic or scientific, true or false – must be commensurable with one another. Humans should be scrutinized under the microscope just as we scrutinize rats in a maze. It is the scientific method that demands "proof" for assertions, but societal reality does not work that way. Myths, however absurd they sound, are real and kicking. Language's symbolic association is, if seen logically, scandalous ("how can this round thing be the word ball?"), but without this cosmic blunder, humans would not be human! In fact, trying to explain the assertory nature of humans using the scientific method's own intersubjective assertions is self-referential, and this is the root cause of the limitation of science in addressing culture. The societal norm is that each human being falls somewhere on a spectrum in his or her individual sense of right and wrong, truth and falsity, the authentic and the inauthentic. The radical anthropology must include such normative behavior rather than only the functional assertions (formula_3, causes formula_5 → effect formula_4). In addition, the normative character should be understood not just as cultural values but as the very mode of existence of the asserting creature of culture. It is the cultural creature – the human being – who lives normatively. Once all the assertions, not just the functional ones, are accepted to be on the same ontological plane, the next step would be to recognize that under some of them humans will flourish, while under others they will languish. By doing so, science will have a chance to recognize potentialities for human nature and to recommend which to follow and which to ditch (a recommendation that could have been useful during the COVID-19 pandemic's lockdown-or-not dilemma), while an inorganic scientist (physicist or chemist) or even a biologist studying plants and animals has no such good-or-bad morality to grapple with. Paleontologist Stephen Jay Gould has attempted to link religion and science as Non-Overlapping Magisteria (NOMA) – two magisterial pillars of inquiry, each addressing non-overlapping aspects of being – but Percy's radical anthropology is overlapping and integrated. In essence, science must view the world not as split into observers and data – i.e., those who know and those who behave and are encultured – but as an integrated unit. Science and scientists must recognize that, just like themselves, all other humans are making equally valid assertions – from their viewpoint and their culture's point of view – about the world, in their quest for meaning, sometimes getting the answers and sometimes falling short. Such a radical view might seem impossible, even ludicrous, but it is necessitated by the demand of science itself – paraphrasing Terrence Deacon, the scientific Theory of Everything, if it is truly of everything, cannot omit us, our feelings, meanings, consciousness, and purposes that make us what we are; we need a theory of everything that does not leave it absurd that we exist. Maturana and Varela, in their book "The Tree of Knowledge," also allude to the necessity of including all of humankind in any investigation and highlight the fact that, given that each of us is unique yet must coexist in congruence, our only option is to ""see" the other person and open up to him room to exist besides us." Nevertheless, Percy warns about extreme cultural relativism seeping into science, making science nonsensical!
Percy's endeavor can be summed up in the observation that a meta-scientific and meta-cultural reality exists on top of science and cultural symbols, a reality that must not be forgotten or ignored, as the purely scientific method tends to do, because of the antinomies into which the current scientific method deteriorates when dealing with humans.
[ { "math_id": 0, "text": "\\Sigma M_i = \\Sigma M_f" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "E=mc^2" }, { "math_id": 3, "text": "E=f(C)" }, { "math_id": 4, "text": "E" }, { "math_id": 5, "text": "C" }, { "math_id": 6, "text": "S" }, { "math_id": 7, "text": "P" }, { "math_id": 8, "text": "T^2=KD^3" }, { "math_id": 9, "text": "F = G \\frac{m_1 m_2}{r^2}" }, { "math_id": 10, "text": "E = mc^2" } ]
https://en.wikipedia.org/wiki?curid=6300421
63011965
Valuation (geometry)
In geometry, a valuation is a finitely additive function from a collection of subsets of a set formula_0 to an abelian semigroup. For example, Lebesgue measure is a valuation on finite unions of convex bodies of formula_1 Other examples of valuations on finite unions of convex bodies of formula_2 are surface area, mean width, and Euler characteristic. In geometry, continuity (or smoothness) conditions are often imposed on valuations, but there are also purely discrete facets of the theory. In fact, the concept of valuation has its origin in the dissection theory of polytopes and in particular Hilbert's third problem, which has grown into a rich theory reliant on tools from abstract algebra. Definition. Let formula_0 be a set, and let formula_3 be a collection of subsets of formula_4 A function formula_5 on formula_3 with values in an abelian semigroup formula_6 is called a valuation if it satisfies formula_7 whenever formula_8 formula_9 formula_10 and formula_11 are elements of formula_12 If formula_13 then one always assumes formula_14 Examples. Some common examples of formula_3 are Let formula_15 be the set of convex bodies in formula_1 Then some valuations on formula_15 are Some other valuations are Valuations on convex bodies. From here on, let formula_22, let formula_23 be the set of convex bodies in formula_24, and let formula_5 be a valuation on formula_23. We say formula_5 is "translation invariant" if, for all formula_25 and formula_26, we have formula_27. Let formula_28. The Hausdorff distance formula_29 is defined as formula_30 where formula_31 is the formula_32-neighborhood of formula_33 under some Euclidean inner product. Equipped with this metric, formula_23 is a locally compact space. The space of continuous, translation-invariant valuations from formula_23 to formula_34 is denoted by formula_35 The topology on formula_36 is the topology of uniform convergence on compact subsets of formula_37 Equipped with the norm formula_38 where formula_39 is a bounded subset with nonempty interior, formula_36 is a Banach space. Homogeneous valuations. A translation-invariant continuous valuation formula_40 is said to be "formula_41-homogeneous" if formula_42 for all formula_43 and formula_44 The subset formula_45 of formula_41-homogeneous valuations is a vector subspace of formula_35 McMullen's decomposition theorem states that formula_46 In particular, the degree of a homogeneous valuation is always an integer between formula_47 and formula_48 Valuations are not only graded by the degree of homogeneity, but also by the parity with respect to the reflection through the origin, namely formula_49 where formula_50 with formula_51 if and only if formula_52 for all convex bodies formula_53 The elements of formula_54 and formula_55 are said to be "even" and "odd", respectively. 
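As a concrete illustration of these definitions (a minimal sketch, not taken from the cited literature; the Box class and the helper function below are purely illustrative), the following Python snippet checks the defining identity formula_7 and the degrees of homogeneity for three valuations mentioned above, namely Lebesgue measure (area), surface area (perimeter in the plane) and the Euler characteristic, evaluated on axis-aligned boxes chosen so that their union is again convex.

```python
from dataclasses import dataclass


@dataclass
class Box:
    # Axis-aligned box [x0, x1] x [y0, y1] in the plane (a convex body).
    x0: float
    x1: float
    y0: float
    y1: float

    def area(self):        # Lebesgue measure: homogeneous of degree 2
        return (self.x1 - self.x0) * (self.y1 - self.y0)

    def perimeter(self):   # surface area in the plane: homogeneous of degree 1
        return 2.0 * ((self.x1 - self.x0) + (self.y1 - self.y0))

    def euler(self):       # Euler characteristic: 1 for every nonempty convex body
        return 1

    def scale(self, lam):  # the dilate lam * K
        return Box(lam * self.x0, lam * self.x1, lam * self.y0, lam * self.y1)


def intersect(a, b):
    return Box(max(a.x0, b.x0), min(a.x1, b.x1), max(a.y0, b.y0), min(a.y1, b.y1))


# Two boxes chosen so that A, B, their union and their intersection are all convex.
A = Box(0, 2, 0, 1)
B = Box(1, 3, 0, 1)
union, inter = Box(0, 3, 0, 1), intersect(A, B)

for name, phi in [("area", Box.area), ("perimeter", Box.perimeter), ("euler", Box.euler)]:
    print(name, phi(union) + phi(inter), "=", phi(A) + phi(B))

# Homogeneity: area, perimeter and the Euler characteristic have degrees 2, 1 and 0.
lam = 3.0
print(A.scale(lam).area(), "=", lam**2 * A.area())
print(A.scale(lam).perimeter(), "=", lam * A.perimeter())
print(A.scale(lam).euler(), "=", A.euler())
```

Scaling a box by a factor multiplies its area by the square of that factor, multiplies its perimeter by the factor itself, and leaves its Euler characteristic unchanged, in line with homogeneity of degrees 2, 1 and 0; this is the kind of behavior that McMullen's decomposition theorem organizes in general.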
It is a simple fact that formula_56 is formula_57-dimensional and spanned by the Euler characteristic formula_58 that is, consists of the constant valuations on formula_59 In 1957 Hadwiger proved that formula_60 (where formula_61) coincides with the formula_57-dimensional space of Lebesgue measures on formula_62 A valuation formula_63 is "simple" if formula_64 for all convex bodies with formula_65 Schneider in 1996 described all simple valuations on formula_2: they are given by formula_66 where formula_67 formula_68 is an arbitrary odd function on the unit sphere formula_69 and formula_70 is the surface area measure of formula_53 In particular, any simple valuation is the sum of an formula_71- and an formula_72-homogeneous valuation. This in turn implies that an formula_41-homogeneous valuation is uniquely determined by its restrictions to all formula_73-dimensional subspaces. Embedding theorems. The Klain embedding is a linear injection of formula_74 the space of even formula_41-homogeneous valuations, into the space of continuous sections of a canonical complex line bundle over the Grassmannian formula_75 of formula_41-dimensional linear subspaces of formula_62 Its construction is based on Hadwiger's characterization of formula_71-homogeneous valuations. If formula_76 and formula_77 then the restriction formula_78 is an element formula_79 and by Hadwiger's theorem it is a Lebesgue measure. Hence formula_80 defines a continuous section of the line bundle formula_81 over formula_75 with fiber over formula_82 equal to the formula_57-dimensional space formula_83 of densities (Lebesgue measures) on formula_84 Theorem (Klain). The linear map formula_85 is injective. A different injection, known as the Schneider embedding, exists for odd valuations. It is based on Schneider's description of simple valuations. It is a linear injection of formula_86 the space of odd formula_41-homogeneous valuations, into a certain quotient of the space of continuous sections of a line bundle over the partial flag manifold of cooriented pairs formula_87 Its definition is reminiscent of the Klain embedding, but more involved. Details can be found in. The Goodey-Weil embedding is a linear injection of formula_88 into the space of distributions on the formula_41-fold product of the formula_72-dimensional sphere. It is nothing but the Schwartz kernel of a natural polarization that any formula_89 admits, namely as a functional on the formula_90-fold product of formula_91 the latter space of functions having the geometric meaning of differences of support functions of smooth convex bodies. For details, see. Irreducibility Theorem. The classical theorems of Hadwiger, Schneider and McMullen give fairly explicit descriptions of valuations that are homogeneous of degree formula_92 formula_93 and formula_48 But for degrees formula_94 very little was known before the turn of the 21st century. McMullen's conjecture is the statement that the valuations formula_95 span a dense subspace of formula_35 McMullen's conjecture was confirmed by Alesker in a much stronger form, which became known as the Irreducibility Theorem: Theorem (Alesker). For every formula_96 the natural action of formula_97 on the spaces formula_98 and formula_99 is irreducible. Here the action of the general linear group formula_97 on formula_36 is given by formula_100 The proof of the Irreducibility Theorem is based on the embedding theorems of the previous section and Beilinson-Bernstein localization. Smooth valuations. 
A valuation formula_101 is called "smooth" if the map formula_102 from formula_97 to formula_36 is smooth. In other words, formula_5 is smooth if and only if formula_5 is a smooth vector of the natural representation of formula_97 on formula_35 The space of smooth valuations formula_103 is dense in formula_36; it comes equipped with a natural Fréchet-space topology, which is finer than the one induced from formula_35 For every (complex-valued) smooth function formula_104 on formula_105 formula_106 where formula_107 denotes the orthogonal projection and formula_108 is the Haar measure, defines a smooth even valuation of degree formula_109 It follows from the Irreducibility Theorem, in combination with the Casselman-Wallach theorem, that any smooth even valuation can be represented in this way. Such a representation is sometimes called a "Crofton formula". For any (complex-valued) smooth differential form formula_110 that is invariant under all the translations formula_111 and every number formula_112 integration over the "normal cycle" defines a smooth valuation: As a set, the normal cycle formula_113 consists of the outward unit normals to formula_53 The Irreducibility Theorem implies that every smooth valuation is of this form. Operations on translation-invariant valuations. There are several natural operations defined on the subspace of smooth valuations formula_114 The most important one is the product of two smooth valuations. Together with pullback and pushforward, this operation extends to valuations on manifolds. Exterior product. Let formula_115 be finite-dimensional real vector spaces. There exists a bilinear map, called the exterior product, formula_116 which is uniquely characterized by the following two properties: formula_127 Product. The product of two smooth valuations formula_128 is defined by formula_129 where formula_130 is the diagonal embedding. The product is a continuous map formula_131 Equipped with this product, formula_132 becomes a commutative associative graded algebra with the Euler characteristic as the multiplicative identity. Alesker-Poincaré duality. By a theorem of Alesker, the restriction of the product formula_133 is a non-degenerate pairing. This motivates the definition of the formula_90-homogeneous "generalized valuation", denoted formula_134 as formula_135 topologized with the weak topology. By the Alesker-Poincaré duality, there is a natural dense inclusion formula_136 Convolution. Convolution is a natural product on formula_137 For simplicity, we fix a density formula_138 on formula_125 to trivialize the second factor. Define for fixed formula_139 with smooth boundary and strictly positive Gauss curvature formula_140 There is then a unique extension by continuity to a map formula_141 called the convolution. Unlike the product, convolution respects the co-grading, namely if formula_142 formula_143 then formula_144 For instance, let formula_145 denote the mixed volume of the convex bodies formula_146 If convex bodies formula_147 in formula_2 with a smooth boundary and strictly positive Gauss curvature are fixed, then formula_148 defines a smooth valuation of degree formula_109 The convolution two such valuations is formula_149 where formula_150 is a constant depending only on formula_151 Fourier transform. The Alesker-Fourier transform is a natural, formula_97-equivariant isomorphism of complex-valued valuations formula_152 discovered by Alesker and enjoying many properties resembling the classical Fourier transform, which explains its name. 
It reverses the grading, namely formula_153 and intertwines the product and the convolution: formula_154 Fixing for simplicity a Euclidean structure to identify formula_155 formula_156 we have the identity formula_157 On even valuations, there is a simple description of the Fourier transform in terms of the Klain embedding: formula_158 In particular, even real-valued valuations remain real-valued after the Fourier transform. For odd valuations, the description of the Fourier transform is substantially more involved. Unlike the even case, it is no longer of purely geometric nature. For instance, the space of real-valued odd valuations is not preserved. Pullback and pushforward. Given a linear map formula_159 there are induced operations of pullback formula_160 and pushforward formula_161 The pullback is the simpler of the two, given by formula_162 It evidently preserves the parity and degree of homogeneity of a valuation. Note that the pullback does not preserve smoothness when formula_104 is not injective. The pushforward is harder to define formally. For simplicity, fix Lebesgue measures on formula_163 and formula_62 The pushforward can be uniquely characterized by describing its action on valuations of the form formula_164 for all formula_165 and then extended by continuity to all valuations using the Irreducibility Theorem. For a surjective map formula_166 formula_167 For an inclusion formula_168 choose a splitting formula_169 Then formula_170 Informally, the pushforward is dual to the pullback with respect to the Alesker-Poincaré pairing: for formula_40 and formula_171 formula_172 However, this identity has to be carefully interpreted since the pairing is only well-defined for smooth valuations. For further details, see. Valuations on manifolds. In a series of papers beginning in 2006, Alesker laid down the foundations for a theory of valuations on manifolds that extends the theory of valuations on convex bodies. The key observation leading to this extension is that via integration over the normal cycle (1), a smooth translation-invariant valuation may be evaluated on sets much more general than convex ones. Also (1) suggests to define smooth valuations in general by dropping the requirement that the form formula_173 be translation-invariant and by replacing the translation-invariant Lebesgue measure with an arbitrary smooth measure. Let formula_0 be an n-dimensional smooth manifold and let formula_174 be the co-sphere bundle of formula_175 that is, the oriented projectivization of the cotangent bundle. Let formula_176 denote the collection of compact differentiable polyhedra in formula_4 The normal cycle formula_177 of formula_178 which consists of the outward co-normals to formula_8 is naturally a Lipschitz submanifold of dimension formula_179 For ease of presentation we henceforth assume that formula_0 is oriented, even though the concept of smooth valuations in fact does not depend on orientability. The space of smooth valuations formula_180 on formula_0 consists of functions formula_181 of the form formula_182 where formula_183 and formula_184 can be arbitrary. It was shown by Alesker that the smooth valuations on open subsets of formula_0 form a soft sheaf over formula_4 Examples. The following are examples of smooth valuations on a smooth manifold formula_0: formula_194 where the integration is with respect to the Haar probability measure on formula_195 is a smooth valuation. This follows from the work of Fu. Filtration. 
The space formula_180 admits no natural grading in general, however it carries a canonical filtration formula_196 Here formula_197 consists of the smooth measures on formula_175 and formula_198 is given by forms formula_173 in the ideal generated by formula_199 where formula_200 is the canonical projection. The associated graded vector space formula_201 is canonically isomorphic to the space of smooth sections formula_202 where formula_203 denotes the vector bundle over formula_0 such that the fiber over a point formula_204 is formula_205 the space of formula_41-homogeneous smooth translation-invariant valuations on the tangent space formula_206 Product. The space formula_180 admits a natural product. This product is continuous, commutative, associative, compatible with the filtration: formula_207 and has the Euler characteristic as the identity element. It also commutes with the restriction to embedded submanifolds, and the diffeomorphism group of formula_0 acts on formula_180 by algebra automorphisms. For example, if formula_0 is Riemannian, the Lipschitz-Killing valuations satisfy formula_208 The Alesker-Poincaré duality still holds. For compact formula_0 it says that the pairing formula_209 formula_210 is non-degenerate. As in the translation-invariant case, this duality can be used to define generalized valuations. Unlike the translation-invariant case, no good definition of continuous valuations exists for valuations on manifolds. The product of valuations closely reflects the geometric operation of intersection of subsets. Informally, consider the generalized valuation formula_211 The product is given by formula_212 Now one can obtain smooth valuations by averaging generalized valuations of the form formula_213 more precisely formula_214 is a smooth valuation if formula_215 is a sufficiently large measured family of diffeomorphisms. Then one has formula_216 see. Pullback and pushforward. Every smooth immersion formula_217 of smooth manifolds induces a pullback map formula_218 If formula_104 is an embedding, then formula_219 The pullback is a morphism of filtered algebras. Every smooth proper submersion formula_220 defines a pushforward map formula_221 by formula_222 The pushforward is compatible with the filtration as well: formula_223 For general smooth maps, one can define pullback and pushforward for generalized valuations under some restrictions. Applications in Integral Geometry. Let formula_224 be a Riemannian manifold and let formula_225 be a Lie group of isometries of formula_224 acting transitively on the sphere bundle formula_226 Under these assumptions the space formula_227 of formula_225-invariant smooth valuations on formula_224 is finite-dimensional; let formula_228 be a basis. Let formula_229 be differentiable polyhedra in formula_230 Then integrals of the form formula_231 are expressible as linear combinations of formula_232 with coefficients formula_233 independent of formula_19 and formula_234: Formulas of this type are called "kinematic formulas". Their existence in this generality was proved by Fu. For the three simply connected real space forms, that is, the sphere, Euclidean space, and hyperbolic space, they go back to Blaschke, Santaló, Chern, and Federer. Describing the kinematic formulas explicitly is typically a difficult problem. In fact already in the step from real to complex space forms, considerable difficulties arise and these have only recently been resolved by Bernig, Fu, and Solanes. 
The key insight responsible for this progress is that the kinematic formulas contain the same information as the algebra of invariant valuations formula_235 For a precise statement, let formula_236 be the kinematic operator, that is, the map determined by the kinematic formulas (2). Let formula_237 denote the Alesker-Poincaré duality, which is a linear isomorphism. Finally, let formula_238 be the adjoint of the product map formula_239 The Fundamental theorem of algebraic integral geometry, relating operations on valuations to integral geometry, states that if the Poincaré duality is used to identify formula_240 with formula_241 then formula_242: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
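To see what a kinematic formula looks like in the simplest case, the following Python sketch (illustrative only; the normalization of the kinematic measure and all numerical values are assumptions of the example, not taken from the article or its references) verifies the classical planar formula for the Euler characteristic of the intersection of a unit square with a randomly moved disc, by Monte Carlo integration over translations.

```python
import numpy as np

# Illustrative Monte Carlo check of the planar kinematic formula
#     integral over rigid motions g of chi(A ∩ gB)
#         = 2*pi*area(A) + 2*pi*area(B) + perimeter(A)*perimeter(B),
# valid for convex bodies A, B in the plane when the rotation measure is
# normalized to total mass 2*pi (an assumed convention for this sketch).
# Here A is the unit square and B a disc of radius r; since B is rotation
# invariant, the rotation integral contributes only the factor 2*pi, and the
# translation integral of the indicator of {A ∩ (B + x) nonempty} remains.

rng = np.random.default_rng(1)
r = 0.7
n = 2_000_000

# Sample translations x in a box that contains every x with A ∩ (B + x) nonempty.
lo, hi = -r - 0.2, 1.0 + r + 0.2
x = rng.uniform(lo, hi, size=(n, 2))
box_area = (hi - lo) ** 2

# A ∩ (B + x) is nonempty exactly when dist(x, [0, 1]^2) <= r.
dx = np.maximum(np.maximum(-x[:, 0], x[:, 0] - 1.0), 0.0)
dy = np.maximum(np.maximum(-x[:, 1], x[:, 1] - 1.0), 0.0)
hit = np.hypot(dx, dy) <= r

monte_carlo = 2.0 * np.pi * box_area * hit.mean()

area_A, perim_A = 1.0, 4.0
area_B, perim_B = np.pi * r**2, 2.0 * np.pi * r
kinematic = 2.0 * np.pi * (area_A + area_B) + perim_A * perim_B

print("Monte Carlo estimate:", round(float(monte_carlo), 3))
print("Kinematic formula:   ", round(float(kinematic), 3))
```

Up to sampling error the two printed values agree; the general formulas (2) have the same shape, with products of invariant valuations of the two bodies appearing on the right-hand side.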
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\R^n." }, { "math_id": 2, "text": "\\R^n" }, { "math_id": 3, "text": "\\mathcal S" }, { "math_id": 4, "text": "X." }, { "math_id": 5, "text": "\\phi" }, { "math_id": 6, "text": "R" }, { "math_id": 7, "text": " \\phi(A\\cup B)+ \\phi(A\\cap B) = \\phi(A) + \\phi(B)" }, { "math_id": 8, "text": "A," }, { "math_id": 9, "text": "B," }, { "math_id": 10, "text": "A\\cup B," }, { "math_id": 11, "text": "A\\cap B" }, { "math_id": 12, "text": "\\mathcal S." }, { "math_id": 13, "text": "\\emptyset\\in \\mathcal S," }, { "math_id": 14, "text": "\\phi(\\emptyset)=0." }, { "math_id": 15, "text": "\\mathcal K(\\R^n)" }, { "math_id": 16, "text": "\\chi : K(\\R^n) \\to \\Z" }, { "math_id": 17, "text": "A \\mapsto h_A," }, { "math_id": 18, "text": "h_A" }, { "math_id": 19, "text": "A" }, { "math_id": 20, "text": "P \\mapsto |\\Z^n\\cap P|" }, { "math_id": 21, "text": "P" }, { "math_id": 22, "text": "V = \\R^n " }, { "math_id": 23, "text": "\\mathcal K(V)" }, { "math_id": 24, "text": "V " }, { "math_id": 25, "text": "K \\in \\mathcal K(V)" }, { "math_id": 26, "text": "x\\in V" }, { "math_id": 27, "text": "\\phi(K+x)=\\phi(K)" }, { "math_id": 28, "text": "(K,L) \\in \\mathcal K(V)^2" }, { "math_id": 29, "text": "d_H(K,L)" }, { "math_id": 30, "text": " d_H(K,L)= \\inf\\{\\varepsilon >0 : K\\subset L_\\varepsilon \\text{ and } L\\subset K_\\varepsilon \\}," }, { "math_id": 31, "text": "K_\\varepsilon" }, { "math_id": 32, "text": "\\varepsilon" }, { "math_id": 33, "text": "K" }, { "math_id": 34, "text": "\\Complex " }, { "math_id": 35, "text": "\\operatorname{Val}(V)." }, { "math_id": 36, "text": "\\operatorname{Val}(V)" }, { "math_id": 37, "text": "\\mathcal K (V)." }, { "math_id": 38, "text": " \\|\\phi\\| = \\max\\{ |\\phi(K)| : K\\subset B\\}," }, { "math_id": 39, "text": "B\\subset V" }, { "math_id": 40, "text": "\\phi\\in \\operatorname{Val}(V)" }, { "math_id": 41, "text": "i" }, { "math_id": 42, "text": " \\phi(\\lambda K)= \\lambda^i\\phi(K)" }, { "math_id": 43, "text": "\\lambda>0" }, { "math_id": 44, "text": "K\\in \\mathcal K(V)." }, { "math_id": 45, "text": "\\operatorname{Val}_i(V)" }, { "math_id": 46, "text": " \\operatorname{Val}(V)= \\bigoplus_{i=0}^n \\operatorname{Val}_i(V), \\qquad n=\\dim V." }, { "math_id": 47, "text": "0" }, { "math_id": 48, "text": "n=\\operatorname{dim} V." }, { "math_id": 49, "text": " \\operatorname{Val}_i = \\operatorname{Val}_i^+ \\oplus \\operatorname{Val}_i^-," }, { "math_id": 50, "text": "\\phi\\in \\operatorname{Val}_i^\\epsilon" }, { "math_id": 51, "text": "\\epsilon \\in \\{+,-\\}" }, { "math_id": 52, "text": "\\phi(- K)= \\epsilon \\phi(K)" }, { "math_id": 53, "text": "K." }, { "math_id": 54, "text": "\\operatorname{Val}_i^+" }, { "math_id": 55, "text": "\\operatorname{Val}_i^-" }, { "math_id": 56, "text": "\\operatorname{Val}_0(V)" }, { "math_id": 57, "text": "1" }, { "math_id": 58, "text": "\\chi," }, { "math_id": 59, "text": "\\mathcal K(V)." }, { "math_id": 60, "text": "\\operatorname{Val}_n(V)" }, { "math_id": 61, "text": "n=\\dim V" }, { "math_id": 62, "text": "V." }, { "math_id": 63, "text": "\\phi\\in \\operatorname{Val}(\\R^n)" }, { "math_id": 64, "text": "\\phi(K)=0" }, { "math_id": 65, "text": "\\dim K<n." 
}, { "math_id": 66, "text": "\\phi(K)=c\\operatorname{vol}(K)+\\int_{S^{n-1}}f(\\theta)d\\sigma_K(\\theta)," }, { "math_id": 67, "text": "c \\in \\Complex," }, { "math_id": 68, "text": "f \\in C(S^{n-1})" }, { "math_id": 69, "text": "S^{n-1}\\subset \\R^n," }, { "math_id": 70, "text": "\\sigma_K" }, { "math_id": 71, "text": "n" }, { "math_id": 72, "text": "(n-1)" }, { "math_id": 73, "text": "(i+1)" }, { "math_id": 74, "text": "\\operatorname{Val}_i^+(V)," }, { "math_id": 75, "text": "\\operatorname{Gr}_i(V)" }, { "math_id": 76, "text": "\\phi\\in \\operatorname{Val}_i(V)" }, { "math_id": 77, "text": "E\\in \\operatorname{Gr}_i(V)," }, { "math_id": 78, "text": "\\phi|_E" }, { "math_id": 79, "text": "\\operatorname{Val}_i(E)," }, { "math_id": 80, "text": "\\operatorname{Kl}_\\phi(E)= \\phi|_E" }, { "math_id": 81, "text": "Dens" }, { "math_id": 82, "text": "E" }, { "math_id": 83, "text": "\\operatorname{Dens}(E)" }, { "math_id": 84, "text": "E." }, { "math_id": 85, "text": "\\operatorname{Kl} : \\operatorname{Val}_i^+(V)\\to C(\\operatorname{Gr}_i(V),\\operatorname{Dens})" }, { "math_id": 86, "text": "\\operatorname{Val}_i^-(V)," }, { "math_id": 87, "text": "(F^i\\subset E^{i+1})." }, { "math_id": 88, "text": "\\operatorname{Val}_i" }, { "math_id": 89, "text": "\\phi\\in\\operatorname{Val}_k(V)" }, { "math_id": 90, "text": "k" }, { "math_id": 91, "text": "C^2(S^{n-1})," }, { "math_id": 92, "text": "1," }, { "math_id": 93, "text": "n-1," }, { "math_id": 94, "text": "1<i<n-1" }, { "math_id": 95, "text": "\\phi_A(K)=\\operatorname{vol}_n(K+A), \\qquad A\\in\\mathcal K(V)," }, { "math_id": 96, "text": "0\\leq i\\leq n," }, { "math_id": 97, "text": "GL(V)" }, { "math_id": 98, "text": "\\operatorname{Val}_i^+(V)" }, { "math_id": 99, "text": "\\operatorname{Val}_i^-(V)" }, { "math_id": 100, "text": " (g\\cdot \\phi)(K)= \\phi(g^{-1} K)." }, { "math_id": 101, "text": "\\phi\\in\\operatorname{Val}(V)" }, { "math_id": 102, "text": "g\\mapsto g\\cdot \\phi" }, { "math_id": 103, "text": "\\operatorname{Val}^\\infty(V)" }, { "math_id": 104, "text": "f" }, { "math_id": 105, "text": "\\operatorname{Gr}_i(\\R^n)," }, { "math_id": 106, "text": "\\phi(K)=\\int_{\\operatorname{Gr}_i(\\R^n)} \\operatorname{vol}_i(P_E K) f(E) dE," }, { "math_id": 107, "text": "P_E : \\R^n\\to E" }, { "math_id": 108, "text": "dE" }, { "math_id": 109, "text": "i." }, { "math_id": 110, "text": "\\omega\\in \\Omega^{n-1}(\\R^n\\times S^{n-1})" }, { "math_id": 111, "text": "(x,u)\\mapsto (x+t,u)" }, { "math_id": 112, "text": "c\\in \\Complex," }, { "math_id": 113, "text": "N(K)" }, { "math_id": 114, "text": "\\operatorname{Val}^\\infty(V)\\subset \\operatorname{Val}(V)." }, { "math_id": 115, "text": "V,W" }, { "math_id": 116, "text": "\\boxtimes : \\operatorname{Val}^\\infty(V)\\times \\operatorname{Val}^\\infty(W)\\to \\operatorname{Val}(V\\times W) " }, { "math_id": 117, "text": "\\operatorname{Val}" }, { "math_id": 118, "text": "\\operatorname{Val}^\\infty." }, { "math_id": 119, "text": "\\phi=\\operatorname{vol}_V(\\bullet + A)" }, { "math_id": 120, "text": "\\psi= \\operatorname{vol}_W(\\bullet + B)" }, { "math_id": 121, "text": "A\\in \\mathcal K(V)" }, { "math_id": 122, "text": "B\\in \\mathcal K(W)" }, { "math_id": 123, "text": "\\operatorname{vol}_V" }, { "math_id": 124, "text": "\\operatorname{vol}_W" }, { "math_id": 125, "text": "V" }, { "math_id": 126, "text": "W," }, { "math_id": 127, "text": " \\phi\\boxtimes \\psi = (\\operatorname{vol}_V\\boxtimes \\operatorname{vol}_W) (\\bullet + A\\times B)." 
}, { "math_id": 128, "text": "\\phi,\\psi\\in\\operatorname{Val}^\\infty (V)" }, { "math_id": 129, "text": " (\\phi\\cdot \\psi)(K)= (\\phi\\boxtimes \\psi)(\\Delta(K))," }, { "math_id": 130, "text": "\\Delta : V\\to V\\times V" }, { "math_id": 131, "text": " \\operatorname{Val}^\\infty (V) \\times \\operatorname{Val}^\\infty (V) \\to \\operatorname{Val}^\\infty (V) ." }, { "math_id": 132, "text": " \\operatorname{Val}^\\infty (V) " }, { "math_id": 133, "text": "\\operatorname{Val}_k^\\infty(V)\\times \\operatorname{Val}_{n-k}^\\infty(V)\\to \\operatorname{Val}_n^\\infty(V)=\\operatorname{Dens}(V)" }, { "math_id": 134, "text": "\\operatorname{Val}_k^{-\\infty}(V)," }, { "math_id": 135, "text": "\\operatorname{Val}^\\infty_{n-k}(V)^*\\otimes\\operatorname{Dens}(V)," }, { "math_id": 136, "text": "\\operatorname{Val}_k^\\infty(V)\\hookrightarrow\\operatorname{Val}_k^{-\\infty}(V)/" }, { "math_id": 137, "text": "\\operatorname{Val}^\\infty(V)\\otimes \\operatorname{Dens}(V^*)." }, { "math_id": 138, "text": "\\operatorname{vol}" }, { "math_id": 139, "text": "A,B\\in\\mathcal K(V)" }, { "math_id": 140, "text": "\\operatorname{vol}(\\bullet+A)\\ast\\operatorname{vol}(\\bullet+B)=\\operatorname{vol}(\\bullet+A+B)." }, { "math_id": 141, "text": " \\operatorname{Val}^\\infty (V) \\times \\operatorname{Val}^\\infty (V) \\to \\operatorname{Val}^\\infty (V)," }, { "math_id": 142, "text": "\\phi\\in\\operatorname{Val}^\\infty_{n-i}(V)," }, { "math_id": 143, "text": "\\psi\\in\\operatorname{Val}^\\infty_{n-j}(V)," }, { "math_id": 144, "text": "\\phi\\ast\\psi\\in \\operatorname{Val}^\\infty_{n-i-j}(V)." }, { "math_id": 145, "text": "V(K_1,\\ldots, K_n)" }, { "math_id": 146, "text": "K_1,\\ldots, K_n\\subset \\R^n." }, { "math_id": 147, "text": "A_1,\\dots,A_{n-i}" }, { "math_id": 148, "text": " \\phi(K) = V(K[i], A_1,\\dots,A_{n-i})" }, { "math_id": 149, "text": "V(\\bullet[i], A_1,\\dots,A_{n-i})\\ast V(\\bullet[j],B_1,\\dots,B_{n-j})=c_{i,j}V(\\bullet[n-j-i], A_1,\\dots,A_{n-i},B_1,\\dots,B_{n-j})," }, { "math_id": 150, "text": "c_{i,j}" }, { "math_id": 151, "text": "i,j,n." }, { "math_id": 152, "text": "\\mathbb F: \\operatorname{Val}^\\infty(V)\\to \\operatorname{Val}^\\infty(V^*)\\otimes \\operatorname{Dens}(V)," }, { "math_id": 153, "text": "\\mathbb F: \\operatorname{Val}_k^\\infty(V) \\to \\operatorname{Val}^\\infty_{n-k}(V^*)\\otimes \\operatorname{Dens}(V)," }, { "math_id": 154, "text": "\\mathbb F(\\phi\\cdot \\psi) = \\mathbb F\\phi\\ast\\mathbb F\\psi." }, { "math_id": 155, "text": "V = V^*," }, { "math_id": 156, "text": "\\operatorname{Dens}(V)=\\Complex," }, { "math_id": 157, "text": "\\mathbb F^2\\phi(K) = \\phi(-K)." }, { "math_id": 158, "text": "\\operatorname{Kl}_{\\mathbb F\\phi}(E) = \\operatorname{Kl}_\\phi(E^\\perp)." }, { "math_id": 159, "text": "f:U\\to V," }, { "math_id": 160, "text": "f^*:\\operatorname{Val}(V)\\to \\operatorname{Val}(U)" }, { "math_id": 161, "text": "f_*:\\operatorname{Val}(U)\\otimes\\operatorname{Dens}(U)^*\\to \\operatorname{Val}(V)\\otimes \\operatorname{Dens}(V)^*." }, { "math_id": 162, "text": "f^*\\phi(K)=\\phi(f(K))." }, { "math_id": 163, "text": "U" }, { "math_id": 164, "text": "\\operatorname{vol}(\\bullet+A)," }, { "math_id": 165, "text": "A\\in \\mathcal K(U)," }, { "math_id": 166, "text": "f," }, { "math_id": 167, "text": "f_*\\operatorname{vol}(\\bullet+A)=\\operatorname{vol}(\\bullet+f(A))." }, { "math_id": 168, "text": "f:U\\hookrightarrow V," }, { "math_id": 169, "text": "V=U\\oplus W." 
}, { "math_id": 170, "text": " f_*\\operatorname{vol}(\\bullet + A) (K)= \\int_{W}\\operatorname{vol}(K\\cap (U+w) + A) dw." }, { "math_id": 171, "text": "\\psi\\in\\operatorname{Val}(U)\\otimes\\operatorname{Dens}(U)^*," }, { "math_id": 172, "text": "\\langle f^*\\phi,\\psi\\rangle =\\langle \\phi, f_*\\psi\\rangle." }, { "math_id": 173, "text": "\\omega" }, { "math_id": 174, "text": "\\mathbb P_X= \\mathbb P_+(T^* X)" }, { "math_id": 175, "text": "X," }, { "math_id": 176, "text": "\\mathcal P(X)" }, { "math_id": 177, "text": "N(A)\\subset \\mathbb P_X" }, { "math_id": 178, "text": "A\\in \\mathcal P(X)," }, { "math_id": 179, "text": "n-1." }, { "math_id": 180, "text": "\\mathcal V^\\infty(X)" }, { "math_id": 181, "text": "\\phi : \\mathcal P(X)\\to \\Complex" }, { "math_id": 182, "text": " \\phi(A)= \\int_A\\mu + \\int_{N(A)}\\omega,\\qquad A\\in \\mathcal P(X)," }, { "math_id": 183, "text": "\\mu\\in\\Omega^n(X)" }, { "math_id": 184, "text": "\\omega\\in \\Omega^{n-1}(\\mathbb P_X)" }, { "math_id": 185, "text": "\\mu" }, { "math_id": 186, "text": "V_0^X = \\chi, V_1^X, \\ldots, V_n^X = \\mathrm{vol}_X" }, { "math_id": 187, "text": "f : X \\to \\R^m" }, { "math_id": 188, "text": "V_i^X = f^* V_i^{\\R^m}," }, { "math_id": 189, "text": "V_i^{\\R^m}" }, { "math_id": 190, "text": "\\R^m" }, { "math_id": 191, "text": "\\Complex P^n" }, { "math_id": 192, "text": "\\mathrm{Gr}_k^\\Complex" }, { "math_id": 193, "text": "k." }, { "math_id": 194, "text": "\\phi(A) = \\int_{\\mathrm{Gr}_k^\\Complex} \\chi(A\\cap E) dE, \\qquad A\\in \\mathcal P (\\Complex P^n)," }, { "math_id": 195, "text": "\\mathrm{Gr}_k^\\Complex," }, { "math_id": 196, "text": "\\mathcal V^\\infty(X) = W_0\\supset W_1\\supset \\cdots \\supset W_n." }, { "math_id": 197, "text": "W_n" }, { "math_id": 198, "text": "W_j" }, { "math_id": 199, "text": "\\pi^*\\Omega^j(X)," }, { "math_id": 200, "text": "\\pi : \\mathbb P_X\\to X" }, { "math_id": 201, "text": "\\bigoplus_{i=0}^n W_i/W_{i+1}" }, { "math_id": 202, "text": "\\bigoplus_{i=0}^n C^\\infty (X, \\operatorname{Val}_i^\\infty(TX))," }, { "math_id": 203, "text": "\\operatorname{Val}_i^\\infty(TX)" }, { "math_id": 204, "text": "x\\in X" }, { "math_id": 205, "text": "\\operatorname{Val}_i^\\infty(T_x X)," }, { "math_id": 206, "text": "T_x X." }, { "math_id": 207, "text": " W_i\\cdot W_j\\subset W_{i+j}," }, { "math_id": 208, "text": " V_i^X\\cdot V_j^X= V_{i+j}^X." }, { "math_id": 209, "text": " \\mathcal V^\\infty(X)\\times \\mathcal V^\\infty(X)\\to \\Complex," }, { "math_id": 210, "text": "(\\phi, \\psi)\\mapsto (\\phi\\cdot\\psi) (X)" }, { "math_id": 211, "text": "\\chi_A=\\chi(A\\cap\\bullet)." }, { "math_id": 212, "text": "\\chi_A\\cdot\\chi_B=\\chi_{A\\cap B}." }, { "math_id": 213, "text": "\\chi_A," }, { "math_id": 214, "text": "\\phi(X)=\\int_S \\chi_{s(A)}ds" }, { "math_id": 215, "text": "S" }, { "math_id": 216, "text": "\\int_S \\chi_{s(A)}ds\\cdot \\int_{S'} \\chi_{s'(B)}ds'=\\int_{S\\times S'} \\chi_{s(A)\\cap s'(B)}dsds'," }, { "math_id": 217, "text": "f : X \\to Y" }, { "math_id": 218, "text": "f^* : \\mathcal V^\\infty(Y) \\to \\mathcal V^\\infty(X)." }, { "math_id": 219, "text": "(f^* \\phi)(A) = \\phi(f(A)), \\qquad A\\in\\mathcal P (X)." }, { "math_id": 220, "text": "f : X\\to Y" }, { "math_id": 221, "text": "f^* : \\mathcal V^\\infty(X) \\to \\mathcal V^\\infty(Y)" }, { "math_id": 222, "text": "(f_* \\phi)(A) = \\phi(f^{-1}(A)), \\qquad A\\in\\mathcal P (Y)." }, { "math_id": 223, "text": "f_* : W_i(X)\\to W_{i-(\\dim X-\\dim Y)}(Y)." 
}, { "math_id": 224, "text": "M" }, { "math_id": 225, "text": "G" }, { "math_id": 226, "text": "SM." }, { "math_id": 227, "text": "\\mathcal V^\\infty(M)^G" }, { "math_id": 228, "text": "\\phi_1, \\ldots, \\phi_m" }, { "math_id": 229, "text": "A,B\\in \\mathcal P(M)" }, { "math_id": 230, "text": "M." }, { "math_id": 231, "text": "\\int_G \\phi_i(A\\cap gB)dg" }, { "math_id": 232, "text": "\\phi_k(A)\\phi_l(B)" }, { "math_id": 233, "text": "c_i^{kl}" }, { "math_id": 234, "text": "B" }, { "math_id": 235, "text": "\\mathcal V^\\infty(M)^G." }, { "math_id": 236, "text": "k_G : \\mathcal V^\\infty(M)^G \\to \\mathcal V^\\infty(M)^G\\otimes \\mathcal V^\\infty(M)^G" }, { "math_id": 237, "text": "\\operatorname{pd} : \\mathcal V^\\infty(M)^G \\to \\mathcal V^\\infty(M)^{G*}" }, { "math_id": 238, "text": "m_G^*" }, { "math_id": 239, "text": "m_G : \\mathcal V^\\infty(M)^{G}\\otimes \\mathcal V^\\infty(M)^{G} \\to \\mathcal V^\\infty(M)^{G}." }, { "math_id": 240, "text": "\\mathcal V^\\infty(M)^{G}" }, { "math_id": 241, "text": "\\mathcal V^\\infty(M)^{G*}," }, { "math_id": 242, "text": "k_G=m_G^*" } ]
https://en.wikipedia.org/wiki?curid=63011965
63029619
Pokhozhaev's identity
Pokhozhaev's identity is an integral relation satisfied by stationary localized solutions to a nonlinear Schrödinger equation or nonlinear Klein–Gordon equation. It was obtained by S. I. Pokhozhaev and is similar to the virial theorem. This relation is also known as G.H. Derrick's theorem. Similar identities can be derived for other equations of mathematical physics. The Pokhozhaev identity for the stationary nonlinear Schrödinger equation. Here is a general form due to H. Berestycki and P.-L. Lions. Let formula_0 be continuous and real-valued, with formula_1. Denote formula_2. Let formula_3 be a solution to the equation formula_4, in the sense of distributions. Then formula_5 satisfies the relation formula_6 The Pokhozhaev identity for the stationary nonlinear Dirac equation. There is a form of the virial identity for the stationary nonlinear Dirac equation in three spatial dimensions (and also the Maxwell-Dirac equations) and in arbitrary spatial dimension. Let formula_7 and let formula_8 and formula_9 be the self-adjoint Dirac matrices of size formula_10: formula_11 Let formula_12 be the massless Dirac operator. Let formula_0 be continuous and real-valued, with formula_1. Denote formula_2. Let formula_13 be a spinor-valued solution that satisfies the stationary form of the nonlinear Dirac equation, formula_14 in the sense of distributions, with some formula_15. Assume that formula_16 Then formula_17 satisfies the relation formula_18
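As a sanity check of the identity just stated, the following sketch verifies it numerically in dimension n = 1, where it reads −(1/2) ∫ |u′|² dx = ∫ G(u) dx. The particular nonlinearity g(s) = s³ − s and the explicit localized solution u(x) = √2·sech(x) of −u″ = g(u) are illustrative assumptions, not part of the general statement above.

```python
# Numerical check of the Pokhozhaev/Derrick relation in dimension n = 1 on an explicit solution.
import numpy as np

x = np.linspace(-30.0, 30.0, 200_001)         # sech decays exponentially, so truncation is harmless
u = np.sqrt(2.0) / np.cosh(x)                 # u(x) = sqrt(2)*sech(x) solves -u'' = u^3 - u
du = -np.sqrt(2.0) * np.tanh(x) / np.cosh(x)  # u'(x)
G = u**4 / 4.0 - u**2 / 2.0                   # G(s) = s^4/4 - s^2/2 is the primitive of g(s) = s^3 - s

def integrate(f):
    """Composite trapezoidal rule on the grid x."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

lhs = -0.5 * integrate(du**2)   # (n-2)/2 * ∫ |u'|^2  with n = 1
rhs = integrate(G)              # n * ∫ G(u)          with n = 1
print(lhs, rhs)                 # both are approximately -2/3
```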
[ { "math_id": 0, "text": "g(s)" }, { "math_id": 1, "text": "g(0)=0" }, { "math_id": 2, "text": "G(s)=\\int_0^s g(t)\\,dt" }, { "math_id": 3, "text": "u\\in L^\\infty_{\\mathrm{loc}}(\\R^n),\n\\qquad\n\\nabla u\\in L^2(\\R^n),\n\\qquad\nG(u)\\in L^1(\\R^n),\n\\qquad\nn\\in\\N,\n" }, { "math_id": 4, "text": "-\\nabla^2 u=g(u)" }, { "math_id": 5, "text": "u" }, { "math_id": 6, "text": "\\frac{n-2}{2}\\int_{\\R^n}|\\nabla u(x)|^2\\,dx=n\\int_{\\R^n}G(u(x))\\,dx." }, { "math_id": 7, "text": "n\\in\\N,\\,N\\in\\N" }, { "math_id": 8, "text": "\\alpha^i,\\,1\\le i\\le n" }, { "math_id": 9, "text": "\\beta" }, { "math_id": 10, "text": "N\\times N" }, { "math_id": 11, "text": "\n\\alpha^i\\alpha^j+\\alpha^j\\alpha^i=2\\delta_{ij}I_N,\n\\quad\n\\beta^2=I_N,\n\\quad\n\\alpha^i\\beta+\\beta\\alpha^i=0,\n\\quad\n1\\le i,j\\le n.\n" }, { "math_id": 12, "text": "D_0=-\\mathrm{i}\\alpha\\cdot\\nabla=-\\mathrm{i}\\sum_{i=1}^n\\alpha^i\\frac{\\partial}{\\partial x^i}" }, { "math_id": 13, "text": "\\phi\\in L^\\infty_{\\mathrm{loc}}(\\R^n,\\C^N)" }, { "math_id": 14, "text": "\n\\omega\\phi=D_0\\phi+g(\\phi^\\ast\\beta\\phi)\\beta\\phi,\n" }, { "math_id": 15, "text": "\\omega\\in\\R" }, { "math_id": 16, "text": "\n\\phi\\in H^1(\\R^n,\\C^N),\\qquad\nG(\\phi^\\ast\\beta\\phi)\\in L^1(\\R^n).\n" }, { "math_id": 17, "text": "\\phi" }, { "math_id": 18, "text": "\n\\omega\\int_{\\R^n}\\phi(x)^\\ast\\phi(x)\\,dx\n=\\frac{n-1}{n}\\int_{\\R^n}\\phi(x)^\\ast D_0\\phi(x)\\,dx\n+\\int_{\\R^n}G(\\phi(x)^\\ast\\beta\\phi(x))\\,dx.\n" } ]
https://en.wikipedia.org/wiki?curid=63029619
63030703
Matching in hypergraphs
Set of hyperedges where every pair is disjoint In graph theory, a matching in a hypergraph is a set of hyperedges, in which every two hyperedges are disjoint. It is an extension of the notion of matching in a graph. Definition. Recall that a hypergraph H is a pair ("V", "E"), where V is a set of vertices and E is a set of subsets of V called "hyperedges". Each hyperedge may contain one or more vertices. A matching in H is a subset M of E, such that every two hyperedges "e"1 and "e"2 in M have an empty intersection (have no vertex in common). The matching number of a hypergraph H is the largest size of a matching in H. It is often denoted by ν("H"). As an example, let V be the set {1,2,3,4,5,6,7}. Consider a 3-uniform hypergraph on V (a hypergraph in which each hyperedge contains exactly 3 vertices). Let H be a 3-uniform hypergraph with 4 hyperedges: { {1,2,3}, {1,4,5}, {4,5,6}, {2,3,6} } Then H admits several matchings of size 2, for example: { {1,2,3}, {4,5,6} } and { {1,4,5}, {2,3,6} } However, in any subset of 3 hyperedges, at least two of them intersect, so there is no matching of size 3. Hence, the matching number of H is 2. Intersecting hypergraph. A hypergraph "H" = ("V", "E") is called "intersecting" if every two hyperedges in E have a vertex in common. A hypergraph H is intersecting if and only if it has no matching with two or more hyperedges, if and only if ν("H") = 1. Matching in a graph as a special case. A graph without self-loops is just a 2-uniform hypergraph: each edge can be considered as a set of the two vertices that it connects. For example, the 2-uniform hypergraph { {1,3}, {1,4}, {2,4} } represents a graph with 4 vertices {1,2,3,4} and 3 edges. By the above definition, a matching in a graph is a set M of edges, such that every two edges in M have an empty intersection. This is equivalent to saying that no two edges in M are adjacent to the same vertex; this is exactly the definition of a matching in a graph. Fractional matching. A fractional matching in a hypergraph is a function that assigns a fraction in [0,1] to each hyperedge, such that for every vertex v in V, the sum of fractions of hyperedges containing v is at most 1. A matching is a special case of a fractional matching in which all fractions are either 0 or 1. The "size" of a fractional matching is the sum of fractions of all hyperedges. The fractional matching number of a hypergraph H is the largest size of a fractional matching in H. It is often denoted by "ν"*("H"). Since a matching is a special case of a fractional matching, for every hypergraph H: Matching-number(H) ≤ fractional-matching-number(H) Symbolically, this principle is written: formula_0 In general, the fractional matching number may be larger than the matching number. A theorem by Zoltán Füredi provides upper bounds on the ratio fractional-matching-number(H) / matching-number(H): if each hyperedge of H contains at most r vertices, then formula_1 In particular, in a simple graph: formula_2 If H is r-partite (the vertices are partitioned into r parts and each hyperedge contains one vertex of each part), then formula_3 In particular, in a bipartite graph, "ν"*("H") = "ν"("H"). This was proved by András Gyárfás. Perfect matching. A matching M is called perfect if every vertex v in V is contained in "exactly" one hyperedge of M. This is the natural extension of the notion of perfect matching in a graph. A fractional matching M is called perfect if for every vertex v in V, the sum of fractions of hyperedges in M containing v is "exactly" 1. Consider a hypergraph H in which each hyperedge contains at most n vertices. If H admits a perfect fractional matching, then its fractional matching number is at least |V|/n.
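The fractional matching number defined above is the optimal value of a linear program: maximize the sum of the fractions subject to the constraint that, at each vertex, the fractions of the hyperedges containing it sum to at most 1. The following sketch computes it for the running example H and compares it with the integral matching number found by brute force; the use of SciPy and all variable names are assumptions of this illustration, not part of the article.

```python
# Fractional matching number via linear programming, and matching number by brute force.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

V = [1, 2, 3, 4, 5, 6, 7]
E = [{1, 2, 3}, {1, 4, 5}, {4, 5, 6}, {2, 3, 6}]
inc = np.array([[1.0 if v in e else 0.0 for e in E] for v in V])   # vertex-hyperedge incidence

# maximize sum(x) subject to inc @ x <= 1 and x >= 0 (linprog minimizes, so negate the objective)
res = linprog(c=-np.ones(len(E)), A_ub=inc, b_ub=np.ones(len(V)), bounds=(0, None))
nu_star = -res.fun

def is_matching(edges):
    return all(a.isdisjoint(b) for a, b in combinations(edges, 2))

nu = max(k for k in range(len(E) + 1)
         if any(is_matching(c) for c in combinations(E, k)))

print(nu, nu_star)   # both equal 2 for this hypergraph
```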
If each hyperedge in H contains exactly n vertices, then its fractional matching number is exactly |V|/n. This is a generalization of the fact that, in a graph, the size of a perfect matching is |V|/2. Given a set V of vertices, a collection E of subsets of V is called "balanced" if the hypergraph ("V","E") admits a perfect fractional matching. For example, if "V" = {1,2,3,a,b,c} and "E" = { {1,a}, {2,a}, {1,b}, {2,b}, {3,c} }, then E is balanced, with the perfect fractional matching { 1/2, 1/2, 1/2, 1/2, 1 }. There are various sufficient conditions for the existence of a perfect matching in a hypergraph. Balanced set-family. A set-family E over a ground set V is called "balanced" (with respect to V) if the hypergraph "H" = ("V", "E") admits a perfect fractional matching. For example, consider the vertex set "V" = {1,2,3,a,b,c} and the edge set "E" = {1-a, 2-a, 1-b, 2-b, 3-c}. E is balanced, since there is a perfect fractional matching with weights {1/2, 1/2, 1/2, 1/2, 1}. Computing a maximum matching. The problem of finding a maximum-cardinality matching in a hypergraph, thus calculating formula_4, is NP-hard even for 3-uniform hypergraphs (see 3-dimensional matching). This is in contrast to the case of simple (2-uniform) graphs, in which computing a maximum-cardinality matching can be done in polynomial time. Matching and covering. A "vertex-cover in a hypergraph" "H" = ("V", "E") is a subset T of V, such that every hyperedge in E contains at least one vertex of T (it is also called a transversal or a hitting set, and is equivalent to a set cover). It is a generalization of the notion of a vertex cover in a graph. The vertex-cover number of a hypergraph H is the smallest size of a vertex cover in H. It is often denoted by "τ"("H"), for transversal. A fractional vertex-cover is a function assigning a weight to each vertex in V, such that for every hyperedge e in E, the sum of the weights of the vertices in e is at least 1. A vertex cover is a special case of a fractional vertex cover in which all weights are either 0 or 1. The "size" of a fractional vertex-cover is the sum of the weights of all vertices. The fractional vertex-cover number of a hypergraph H is the smallest size of a fractional vertex-cover in H. It is often denoted by "τ"*("H"). Since a vertex-cover is a special case of a fractional vertex-cover, for every hypergraph H: fractional-vertex-cover-number (H) ≤ vertex-cover-number (H). Linear programming duality implies that, for every hypergraph H: fractional-matching-number (H) = fractional-vertex-cover-number(H). Hence, for every hypergraph H: formula_5 If the size of each hyperedge in H is at most r, then the union of all hyperedges in a maximum matching is a vertex-cover (if there were an uncovered hyperedge, it could be added to the matching). Therefore: formula_6 This inequality is tight: equality holds, for example, when V contains "r"⋅"ν"("H") + "r" – 1 vertices and E contains all subsets of r vertices. However, in general "τ"*("H") < "r"⋅"ν"("H"), since "ν"*("H") < "r"⋅"ν"("H"); see Fractional matching above. "Ryser's conjecture" says that, in every r-partite r-uniform hypergraph: formula_7 Some special cases of the conjecture have been proved; see Ryser's conjecture. Kőnig's property. A hypergraph has the Kőnig property if its maximum matching number equals its minimum vertex-cover number, namely if "ν"("H") = "τ"("H"). The Kőnig-Egerváry theorem shows that every bipartite graph has the Kőnig property.
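As a small illustration of the linear programming duality just stated, the sketch below computes the fractional matching number and the fractional vertex-cover number of the running example as a pair of dual linear programs and confirms that they coincide. The SciPy-based formulation is an assumption of this illustration, not something prescribed by the article.

```python
# LP duality: fractional matching number equals fractional vertex-cover number.
import numpy as np
from scipy.optimize import linprog

V = [1, 2, 3, 4, 5, 6, 7]
E = [{1, 2, 3}, {1, 4, 5}, {4, 5, 6}, {2, 3, 6}]
inc = np.array([[1.0 if v in e else 0.0 for v in V] for e in E])   # hyperedge-vertex incidence

# fractional vertex cover: minimize sum(y) subject to inc @ y >= 1 and y >= 0
cover = linprog(c=np.ones(len(V)), A_ub=-inc, b_ub=-np.ones(len(E)), bounds=(0, None))
# fractional matching: maximize sum(x) subject to inc.T @ x <= 1 and x >= 0
match = linprog(c=-np.ones(len(E)), A_ub=inc.T, b_ub=np.ones(len(V)), bounds=(0, None))

print(cover.fun, -match.fun)   # τ*(H) = ν*(H) = 2, as duality predicts
```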
To extend this theorem to hypergraphs, we need to extend the notion of bipartiteness to hypergraphs. A natural generalization is as follows. A hypergraph is called 2-colorable if its vertices can be 2-colored so that every hyperedge (of size at least 2) contains at least one vertex of each color. An alternative term is Property B. A simple graph is bipartite iff it is 2-colorable. However, there are 2-colorable hypergraphs without Kőnig's property. For example, consider the hypergraph with "V" = {1,2,3,4} whose hyperedges are all triplets: "E" = { {1,2,3}, {1,2,4}, {1,3,4}, {2,3,4} }. It is 2-colorable: for example, we can color {1,2} blue and {3,4} white. However, its matching number is 1 and its vertex-cover number is 2. A stronger generalization is as follows. Given a hypergraph "H" = ("V", "E") and a subset V' of V, the restriction of H to V' is the hypergraph whose vertices are V', and for every hyperedge e in E that intersects V', it has a hyperedge e' that is the intersection of e and V'. A hypergraph is called balanced if all its restrictions are "essentially 2-colorable", meaning that we ignore singleton hyperedges in the restriction. A simple graph is bipartite iff it is balanced. A simple graph is bipartite iff it has no odd-length cycles. Similarly, a hypergraph is balanced iff it has no odd-length "circuits". A circuit of length k in a hypergraph is an alternating sequence ("v"1, "e"1, "v"2, "e"2, …, "vk", "ek", "vk"+1 = "v"1), where the vi are distinct vertices and the ei are distinct hyperedges, and each hyperedge contains the vertex to its left and the vertex to its right. The circuit is called "unbalanced" if each hyperedge contains no other vertices in the circuit. Claude Berge proved that a hypergraph is balanced if and only if it does not contain an unbalanced odd-length circuit. Every balanced hypergraph has Kőnig's property. The following are equivalent: Matching and packing. The problem of set packing is equivalent to hypergraph matching. A vertex-packing in a (simple) graph is a subset P of its vertices, such that no two vertices in P are adjacent. The problem of finding a maximum vertex-packing in a graph is equivalent to the problem of finding a maximum matching in a hypergraph.
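Returning to the four-triples example above, the following brute-force sketch (an illustration, not part of the article) confirms that the hypergraph of all triples of {1,2,3,4} is 2-colorable while its matching number is 1 and its vertex-cover number is 2, so it indeed lacks Kőnig's property.

```python
# Brute-force verification of the 2-colorable counterexample to Kőnig's property.
from itertools import combinations, product

V = [1, 2, 3, 4]
E = [set(c) for c in combinations(V, 3)]      # all triples of V

def two_colorable():
    for colours in product([0, 1], repeat=len(V)):
        col = dict(zip(V, colours))
        if all(len({col[v] for v in e}) == 2 for e in E):   # every hyperedge sees both colours
            return True
    return False

def matching_number():
    return max(k for k in range(len(E) + 1)
               if any(all(a.isdisjoint(b) for a, b in combinations(c, 2))
                      for c in combinations(E, k)))

def vertex_cover_number():
    return min(k for k in range(len(V) + 1)
               if any(all(e & set(c) for e in E) for c in combinations(V, k)))

print(two_colorable(), matching_number(), vertex_cover_number())   # True 1 2
```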
[ { "math_id": 0, "text": "\\nu(H) \\leq \\nu^*(H) " }, { "math_id": 1, "text": "\\frac{\\nu^*(H)}{ \\nu (H)} \\leq r-1+ \\frac{1}{r}." }, { "math_id": 2, "text": "\\frac{\\nu^*(H)}{ \\nu (H)} \\leq \\frac{3}{2}." }, { "math_id": 3, "text": "\\frac{\\nu^*(H)}{\\nu (H)} \\leq r-1." }, { "math_id": 4, "text": "\\nu(H)" }, { "math_id": 5, "text": "\\nu(H) \\leq \\nu^*(H) = \\tau^*(H)\\leq \\tau(H) " }, { "math_id": 6, "text": "\\tau(H)\\leq r\\cdot \\nu(H)." }, { "math_id": 7, "text": "\\tau (H)\\leq (r-1) \\nu(H)." } ]
https://en.wikipedia.org/wiki?curid=63030703
63033543
Complemented subspace
In the branch of mathematics called functional analysis, a complemented subspace of a topological vector space formula_0 is a vector subspace formula_1 for which there exists some other vector subspace formula_2 of formula_0 called its (topological) complement in formula_3, such that formula_3 is the direct sum formula_4 in the category of topological vector spaces. Formally, topological direct sums strengthen the algebraic direct sum by requiring that certain maps be continuous; the result retains many nice properties from the operation of direct sum in finite-dimensional vector spaces. Every finite-dimensional subspace of a Banach space is complemented, but other subspaces may not be. In general, classifying all complemented subspaces is a difficult problem, which has been solved only for some well-known Banach spaces. The concept of a complemented subspace is analogous to, but distinct from, that of a set complement. The set-theoretic complement of a vector subspace is never a complementary subspace. Preliminaries: definitions and notation. If formula_3 is a vector space and formula_1 and formula_2 are vector subspaces of formula_3 then there is a well-defined addition map formula_5 The map formula_6 is a morphism in the category of vector spaces — that is to say, linear. Algebraic direct sum. The vector space formula_3 is said to be the algebraic direct sum (or direct sum in the category of vector spaces) formula_7 when any of the following equivalent conditions are satisfied: the addition map formula_8 is a vector space isomorphism; or, equivalently, formula_9 and formula_10. When these conditions hold, the inverse formula_11 is well-defined and can be written in terms of coordinates as formula_12 The first coordinate formula_13 is called the canonical projection of formula_3 onto formula_1; likewise the second coordinate is the canonical projection onto formula_14 Equivalently, formula_15 and formula_16 are the unique vectors in formula_1 and formula_17 respectively, that satisfy formula_18 As maps, formula_19 where formula_20 denotes the identity map on formula_3. Motivation. Suppose that the vector space formula_3 is the algebraic direct sum of formula_7. In the category of vector spaces, finite products and coproducts coincide: algebraically, formula_4 and formula_21 are indistinguishable. Given a problem involving elements of formula_3, one can break the elements down into their components in formula_1 and formula_2, because the projection maps defined above act as inverses to the natural inclusion of formula_1 and formula_2 into formula_3. Then one can solve the problem in the vector subspaces and recombine to form an element of formula_3. In the category of topological vector spaces, that algebraic decomposition becomes less useful. The definition of a topological vector space requires the addition map formula_6 to be continuous; its inverse formula_11 may not be. The categorical definition of direct sum, however, requires formula_22 and formula_23 to be morphisms — that is, "continuous" linear maps. The space formula_3 is the topological direct sum of formula_1 and formula_2 if (and only if) any of the following equivalent conditions hold: The topological direct sum is also written formula_24; whether the sum is in the topological or algebraic sense is usually clarified through context. Definition. Every topological direct sum is an algebraic direct sum formula_24; the converse is not guaranteed. Even if both formula_1 and formula_2 are closed in formula_3, formula_25 may "still" fail to be continuous.
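Before the definition of a topological complement is completed below, here is a finite-dimensional numerical sketch of the algebraic decomposition and its canonical projections; in this finite-dimensional setting every linear map is continuous, so the continuity issue just raised does not arise. The concrete subspaces, the NumPy-based construction, and the helper name are illustrative assumptions only.

```python
# Two different algebraic complements of the same plane M in R^3, and the projections they induce.
import numpy as np

def projection_onto_M_along_N(M_basis, N_basis):
    """Matrix of the canonical projection x = m + n  ->  m  for the decomposition X = M ⊕ N."""
    B = np.column_stack(M_basis + N_basis)        # change of basis to (M-part, N-part) coordinates
    coords = np.linalg.inv(B)                     # x -> its coordinates in that basis
    keep = np.diag([1.0] * len(M_basis) + [0.0] * len(N_basis))
    return B @ keep @ coords                      # drop the N-part and return to standard coordinates

M = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]      # the plane x3 = 0
P1 = projection_onto_M_along_N(M, [np.array([0.0, 0.0, 1.0])])  # complement spanned by (0, 0, 1)
P2 = projection_onto_M_along_N(M, [np.array([1.0, 1.0, 1.0])])  # an oblique complement

x = np.array([2.0, 3.0, 4.0])
print(P1 @ x, P2 @ x)             # [2. 3. 0.] versus [-2. -1. 0.]: same M, different decompositions
print(np.allclose(P1 @ P1, P1), np.allclose(P2 @ P2, P2))       # both projections are idempotent
```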
formula_2 is a (topological) complement or supplement to formula_1 if it avoids that pathology — that is, if, topologically, formula_24. (Then formula_1 is likewise complementary to formula_2.) Condition 2(d) above implies that any topological complement of formula_1 is isomorphic, as a topological vector space, to the quotient vector space formula_26. formula_1 is called complemented if it has a topological complement formula_2 (and uncomplemented if not). The choice of formula_2 can matter quite strongly: a complemented vector subspace formula_1 may have algebraic complements that do not complement formula_1 topologically. Because a linear map between two normed (or Banach) spaces is bounded if and only if it is continuous, the definition in the categories of normed (resp. Banach) spaces is the same as in topological vector spaces. Equivalent characterizations. The vector subspace formula_1 is complemented in formula_3 if and only if any of the following holds: there exists a continuous linear map formula_27 with formula_28 such that formula_29; there exists a continuous linear projection formula_27 with formula_28 such that algebraically formula_30; for every topological vector space formula_31 the restriction map formula_32 is surjective. If in addition formula_3 is Banach, then an equivalent condition is that formula_1 is closed in formula_3 and there exists another closed vector subspace formula_33 such that formula_3 is the algebraic direct sum formula_4 (by the open mapping theorem, such a decomposition is then automatically topological). Sufficient conditions. For any two topological vector spaces formula_3 and formula_34, the subspaces formula_43 and formula_44 are topological complements in formula_45. Every algebraic complement of formula_46, the closure of formula_39, is also a topological complement. This is because formula_46 has the indiscrete topology, and so the algebraic projection is continuous. If formula_47 and formula_48 is surjective, then formula_49. Finite dimension. Suppose formula_3 is Hausdorff and locally convex and formula_34 a free topological vector subspace: for some set formula_50, we have formula_51 (as a t.v.s.). Then formula_34 is a closed and complemented vector subspace of formula_3. In particular, any finite-dimensional subspace of formula_3 is complemented. In arbitrary topological vector spaces, a finite-dimensional vector subspace formula_34 is topologically complemented if and only if for every non-zero formula_53, there exists a continuous linear functional on formula_3 that separates formula_54 from formula_39. For an example in which this fails, see . Finite codimension. Not all finite-codimensional vector subspaces of a TVS are closed, but those that are closed do have complements. Hilbert spaces. In a Hilbert space, the orthogonal complement formula_55 of any closed vector subspace formula_1 is always a topological complement of formula_1. This property characterizes Hilbert spaces within the class of Banach spaces: every infinite-dimensional Banach space that is not isomorphic to a Hilbert space contains a closed uncomplemented subspace; this is a deep theorem of Joram Lindenstrauss and Lior Tzafriri. Fréchet spaces. Let formula_3 be a Fréchet space over the field formula_56. Then the following are equivalent: Properties; examples of uncomplemented subspaces. A complemented (vector) subspace of a Hausdorff space formula_3 is necessarily a closed subset of formula_3, as is its complement. From the existence of Hamel bases, every infinite-dimensional Banach space contains unclosed linear subspaces. Since any complemented subspace is closed, none of those subspaces is complemented. Likewise, if formula_3 is a complete TVS and formula_26 is not complete, then formula_1 has no topological complement in formula_52 Applications. If formula_59 is a continuous linear surjection, then the following conditions are equivalent: The Method of Decomposition.
Topological vector spaces admit the following Cantor-Schröder-Bernstein–type theorem: Let formula_3 and formula_34 be TVSs such that formula_68 and formula_69 Suppose that formula_34 contains a complemented copy of formula_3 and formula_3 contains a complemented copy of formula_70 Then formula_3 is TVS-isomorphic to formula_70 The "self-splitting" assumptions that formula_68 and formula_71 cannot be removed: Tim Gowers showed in 1996 that there exist non-isomorphic Banach spaces formula_3 and formula_34, each complemented in the other. In classical Banach spaces. Understanding the complemented subspaces of an arbitrary Banach space formula_3 up to isomorphism is a classical problem that has motivated much work in basis theory, particularly the development of absolutely summing operators. The problem remains open for a variety of important Banach spaces, most notably the space formula_72. For some Banach spaces the question is settled. Most famously, if formula_73 then the only complemented infinite-dimensional subspaces of formula_74 are isomorphic to formula_75 and the same goes for formula_76 Such spaces are called prime (when their only infinite-dimensional complemented subspaces are isomorphic to the original). These are not the only prime spaces, however. The spaces formula_77 are not prime whenever formula_78 in fact, they admit uncountably many non-isomorphic complemented subspaces. The spaces formula_79 and formula_80 are isomorphic to formula_81 and formula_82 respectively, so they are indeed prime. The space formula_72 is not prime, because it contains a complemented copy of formula_83. No other complemented subspaces of formula_72 are currently known. Indecomposable Banach spaces. An infinite-dimensional Banach space is called indecomposable whenever its only complemented subspaces are either finite-dimensional or finite-codimensional. Because a finite-codimensional subspace of a Banach space formula_3 is always isomorphic to formula_0 indecomposable Banach spaces are prime. The best-known examples of indecomposable spaces are in fact hereditarily indecomposable, which means that every infinite-dimensional closed subspace is also indecomposable. Proofs. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "X," }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "M \\oplus N" }, { "math_id": 5, "text": "\\begin{alignat}{4}\nS :\\;&& M \\times N &&\\;\\to \\;& X \\\\\n && (m, n) &&\\;\\mapsto\\;& m + n \\\\\n\\end{alignat}" }, { "math_id": 6, "text": "S" }, { "math_id": 7, "text": "M\\oplus N" }, { "math_id": 8, "text": "S : M \\times N \\to X" }, { "math_id": 9, "text": "M \\cap N = \\{0\\}" }, { "math_id": 10, "text": "M + N = X" }, { "math_id": 11, "text": "S^{-1} : X \\to M \\times N" }, { "math_id": 12, "text": "S^{-1} = \\left(P_M, P_N\\right)\\text{.}" }, { "math_id": 13, "text": "P_M : X \\to M" }, { "math_id": 14, "text": "N." }, { "math_id": 15, "text": "P_M(x)" }, { "math_id": 16, "text": "P_N(x)" }, { "math_id": 17, "text": "N," }, { "math_id": 18, "text": "x = P_M(x) + P_N(x)\\text{.}" }, { "math_id": 19, "text": "P_M + P_N = \\operatorname{Id}_X, \\qquad \\ker P_M = N, \\qquad \\text{ and } \\qquad \\ker P_N = M" }, { "math_id": 20, "text": "\\operatorname{Id}_X" }, { "math_id": 21, "text": "M \\times N" }, { "math_id": 22, "text": "P_M" }, { "math_id": 23, "text": "P_N" }, { "math_id": 24, "text": "X = M \\oplus N" }, { "math_id": 25, "text": "S^{-1}" }, { "math_id": 26, "text": "X / M" }, { "math_id": 27, "text": "P_M : X \\to X" }, { "math_id": 28, "text": "P_M(X) = M" }, { "math_id": 29, "text": "P \\circ P = P" }, { "math_id": 30, "text": "X=M\\oplus\\ker{P}" }, { "math_id": 31, "text": "Y," }, { "math_id": 32, "text": "R : L(X; Y) \\to L(M; Y); R(u)=u|_M" }, { "math_id": 33, "text": "N\\subseteq X" }, { "math_id": 34, "text": "Y" }, { "math_id": 35, "text": "X\\subseteq Y" }, { "math_id": 36, "text": "L^p(X)" }, { "math_id": 37, "text": "L^p(Y)" }, { "math_id": 38, "text": "c_0" }, { "math_id": 39, "text": "0" }, { "math_id": 40, "text": "c" }, { "math_id": 41, "text": "L^1([0,1])" }, { "math_id": 42, "text": "\\mathrm{rca}([0,1])\\cong C([0,1])^*" }, { "math_id": 43, "text": "X \\times \\{0\\}" }, { "math_id": 44, "text": "\\{0\\} \\times Y" }, { "math_id": 45, "text": "X \\times Y" }, { "math_id": 46, "text": "\\overline{\\{0\\}}" }, { "math_id": 47, "text": "X=M\\oplus N" }, { "math_id": 48, "text": "A:X\\to Y" }, { "math_id": 49, "text": "Y=AM\\oplus AN" }, { "math_id": 50, "text": "I" }, { "math_id": 51, "text": "Y\\cong\\mathbb{K}^I" }, { "math_id": 52, "text": "X." }, { "math_id": 53, "text": "y\\in Y" }, { "math_id": 54, "text": "y" }, { "math_id": 55, "text": "M^{\\bot}" }, { "math_id": 56, "text": "\\mathbb{K}" }, { "math_id": 57, "text": "\\mathbb{K}^{\\N}." }, { "math_id": 58, "text": "\\mathbb{K}^{\\N}" }, { "math_id": 59, "text": "A : X \\to Y" }, { "math_id": 60, "text": "A" }, { "math_id": 61, "text": "B : Y \\to X" }, { "math_id": 62, "text": "AB = \\mathrm{Id}_Y" }, { "math_id": 63, "text": "\\operatorname{Id}_Y : Y \\to Y" }, { "math_id": 64, "text": "\\mathbb{R}" }, { "math_id": 65, "text": "X \\to Y" }, { "math_id": 66, "text": "\\{0\\}" }, { "math_id": 67, "text": "A: X \\to Y" }, { "math_id": 68, "text": "X = X \\oplus X" }, { "math_id": 69, "text": "Y = Y \\oplus Y." }, { "math_id": 70, "text": "Y." }, { "math_id": 71, "text": "Y = Y \\oplus Y" }, { "math_id": 72, "text": "L_1[0,1]" }, { "math_id": 73, "text": "1 \\leq p \\leq \\infty" }, { "math_id": 74, "text": "\\ell_p" }, { "math_id": 75, "text": "\\ell_p," }, { "math_id": 76, "text": "c_0." 
}, { "math_id": 77, "text": "L_p[0,1]" }, { "math_id": 78, "text": "p \\in (1, 2) \\cup (2, \\infty);" }, { "math_id": 79, "text": "L_2[0,1]" }, { "math_id": 80, "text": "L_{\\infty}[0,1]" }, { "math_id": 81, "text": "\\ell_2" }, { "math_id": 82, "text": "\\ell_{\\infty}," }, { "math_id": 83, "text": "\\ell_1" } ]
https://en.wikipedia.org/wiki?curid=63033543
630360
Proper convex function
In mathematical analysis, in particular the subfields of convex analysis and optimization, a proper convex function is an extended real-valued convex function with a non-empty domain that never takes on the value formula_0 and is not identically equal to formula_1 In convex analysis and variational analysis, a point (in the domain) at which some given function formula_2 is minimized is typically sought, where formula_2 is valued in the extended real number line formula_3 Such a point, if it exists, is called a global minimum point of the function and its value at this point is called the global minimum (value) of the function. If the function takes formula_0 as a value, then formula_0 is necessarily the global minimum value and the minimization problem can be answered; this is ultimately the reason why the definition of "proper" requires that the function never take formula_0 as a value. Assuming this, if the function's domain is empty or if the function is identically equal to formula_4 then the minimization problem once again has an immediate answer. Extended real-valued functions for which the minimization problem is not solved by any one of these three trivial cases are exactly those that are called proper. Many (although not all) results whose hypotheses require that the function be proper add this requirement specifically to exclude these trivial cases. If the problem is instead a maximization problem (which would be clearly indicated, such as by the function being concave rather than convex) then the notion of "proper" is defined in an analogous (albeit technically different) manner but with the same goal: to exclude cases where the maximization problem can be answered immediately. Specifically, a concave function formula_5 is called proper if its negation formula_6 which is a convex function, is proper in the sense defined above. Definitions. Suppose that formula_7 is a function taking values in the extended real number line formula_3 If formula_2 is a convex function or if a minimum point of formula_2 is being sought, then formula_2 is called proper if formula_8 for every formula_9 and if there also exists some point formula_10 such that formula_11 That is, a function is proper if it never attains the value formula_0 and its effective domain is nonempty. This means that there exists some formula_9 at which formula_12 and formula_2 is also never equal to formula_13 Convex functions that are not proper are called improper convex functions. A proper concave function is, by definition, any function formula_14 such that formula_15 is a proper convex function. Explicitly, if formula_14 is a concave function or if a maximum point of formula_5 is being sought, then formula_5 is called proper if its domain is not empty, it never takes on the value formula_16 and it is not identically equal to formula_13 Properties. For every proper convex function formula_17 there exist some formula_18 and formula_19 such that formula_20 for every formula_21 The sum of two proper convex functions is convex, but not necessarily proper. For instance, if formula_22 and formula_23 are non-empty convex subsets of the vector space formula_24 then the characteristic functions formula_25 and formula_26 are proper convex functions, but if formula_27 then formula_28 is identically equal to formula_1 The infimal convolution of two proper convex functions is convex but not necessarily proper convex. Citations. <templatestyles src="Reflist/styles.css" />
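The following small sketch illustrates the definitions with the common computational convention that extended real values are modeled by floating-point infinity. The particular functions, intervals, and names are illustrative choices, not part of the article.

```python
# A proper convex function, and an improper one obtained as a sum of indicators of disjoint sets.
import math

def indicator(a, b):
    """Convex-analysis indicator of the interval [a, b]: 0 on the set, +infinity outside."""
    return lambda x: 0.0 if a <= x <= b else math.inf

def f(x):
    """x^2 on [0, 1] and +infinity elsewhere: finite somewhere and never -infinity, hence proper."""
    return x * x + indicator(0.0, 1.0)(x)

# indicators of the disjoint intervals [0, 1] and [2, 3]; their sum is identically +infinity
g = lambda x: indicator(0.0, 1.0)(x) + indicator(2.0, 3.0)(x)

print(f(0.5), f(2.0))                                            # 0.25 inf
print(all(g(k / 10.0) == math.inf for k in range(-50, 50)))      # True: g is improper
```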
[ { "math_id": 0, "text": "-\\infty" }, { "math_id": 1, "text": "+\\infty." }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "[-\\infty, \\infty] = \\mathbb{R} \\cup \\{ \\pm\\infty \\}." }, { "math_id": 4, "text": "+\\infty" }, { "math_id": 5, "text": "g" }, { "math_id": 6, "text": "-g," }, { "math_id": 7, "text": "f : X \\to [-\\infty, \\infty]" }, { "math_id": 8, "text": "f(x) > -\\infty" }, { "math_id": 9, "text": "x \\in X" }, { "math_id": 10, "text": "x_0 \\in X" }, { "math_id": 11, "text": "f\\left( x_0 \\right) < +\\infty." }, { "math_id": 12, "text": "f(x) \\in \\mathbb{R}" }, { "math_id": 13, "text": "-\\infty." }, { "math_id": 14, "text": "g : X \\to [-\\infty, \\infty]" }, { "math_id": 15, "text": "f := -g" }, { "math_id": 16, "text": "+\\infty," }, { "math_id": 17, "text": "f : \\mathbb{R}^n \\to [-\\infty, \\infty]," }, { "math_id": 18, "text": "b \\in \\mathbb{R}^n" }, { "math_id": 19, "text": "r \\in \\mathbb{R}" }, { "math_id": 20, "text": "f(x) \\geq x \\cdot b - r" }, { "math_id": 21, "text": "x \\in \\mathbb{R}^n." }, { "math_id": 22, "text": "A \\subset X" }, { "math_id": 23, "text": "B \\subset X" }, { "math_id": 24, "text": "X," }, { "math_id": 25, "text": "I_A" }, { "math_id": 26, "text": "I_B" }, { "math_id": 27, "text": "A \\cap B = \\varnothing" }, { "math_id": 28, "text": "I_A + I_B" } ]
https://en.wikipedia.org/wiki?curid=630360