64905081
Integrable module
In algebra, an integrable module (or integrable representation) of a Kac–Moody algebra formula_0 (a certain infinite-dimensional Lie algebra) is a representation of formula_0 such that (1) it is a sum of weight spaces and (2) the Chevalley generators formula_1 of formula_0 act locally nilpotently on it. For example, the adjoint representation of a Kac–Moody algebra is integrable.
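As a concrete check of these two conditions in the simplest case, the following sketch works out the adjoint representation of sl2, viewed as the Kac–Moody algebra of type A1; the basis labels e, h, f are the standard Chevalley basis and are used here only for illustration.

```latex
% sl_2 with Chevalley basis e, h, f:  [h,e] = 2e,  [h,f] = -2f,  [e,f] = h.
% In the adjoint representation, ad(e) raises the h-weight by 2, so on each
% basis vector it vanishes after finitely many steps:
\[
(\operatorname{ad} e)(f) = h,\qquad
(\operatorname{ad} e)^2(f) = [e,h] = -2e,\qquad
(\operatorname{ad} e)^3(f) = -2[e,e] = 0,
\]
\[
(\operatorname{ad} e)(h) = -2e,\qquad
(\operatorname{ad} e)^2(h) = 0,\qquad
(\operatorname{ad} e)(e) = 0 .
\]
% Hence ad(e), and symmetrically ad(f), acts locally nilpotently, and
% sl_2 = k f \oplus k h \oplus k e is the sum of its weight spaces
% (of weights -2, 0, 2), so the adjoint module is integrable.
```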
[ { "math_id": 0, "text": "\\mathfrak g" }, { "math_id": 1, "text": "e_i, f_i" } ]
https://en.wikipedia.org/wiki?curid=64905081
649115
E7 (mathematics)
133-dimensional exceptional simple Lie group In mathematics, E7 is the name of several closely related Lie groups, linear algebraic groups or their Lie algebras e7, all of which have dimension 133; the same notation E7 is used for the corresponding root lattice, which has rank 7. The designation E7 comes from the Cartan–Killing classification of the complex simple Lie algebras, which fall into four infinite series labeled A"n", B"n", C"n", D"n", and five exceptional cases labeled E6, E7, E8, F4, and G2. The E7 algebra is thus one of the five exceptional cases. The fundamental group of the (adjoint) complex form, compact real form, or any algebraic version of E7 is the cyclic group Z/2Z, and its outer automorphism group is the trivial group. The dimension of its fundamental representation is 56. Real and complex forms. There is a unique complex Lie algebra of type E7, corresponding to a complex group of complex dimension 133. The complex adjoint Lie group E7 of complex dimension 133 can be considered as a simple real Lie group of real dimension 266. This has fundamental group Z/2Z, has maximal compact subgroup the compact form (see below) of E7, and has an outer automorphism group of order 2 generated by complex conjugation. As well as the complex Lie group of type E7, there are four real forms of the Lie algebra, and correspondingly four real forms of the group with trivial center (all of which have an algebraic double cover, and three of which have further non-algebraic covers, giving further real forms), all of real dimension 133, as follows: For a complete list of real forms of simple Lie algebras, see the list of simple Lie groups. The compact real form of E7 is the isometry group of the 64-dimensional exceptional compact Riemannian symmetric space EVI (in Cartan's classification). It is known informally as the "quateroctonionic projective plane" because it can be built using an algebra that is the tensor product of the quaternions and the octonions, and is also known as a Rosenfeld projective plane, though it does not obey the usual axioms of a projective plane. This can be seen systematically using a construction known as the "magic square", due to Hans Freudenthal and Jacques Tits. The Tits–Koecher construction produces forms of the E7 Lie algebra from Albert algebras, 27-dimensional exceptional Jordan algebras. E7 as an algebraic group. By means of a Chevalley basis for the Lie algebra, one can define E7 as a linear algebraic group over the integers and, consequently, over any commutative ring and in particular over any field: this defines the so-called split (sometimes also known as "untwisted") adjoint form of E7. Over an algebraically closed field, this and its double cover are the only forms; however, over other fields, there are often many other forms, or "twists" of E7, which are classified in the general framework of Galois cohomology (over a perfect field "k") by the set "H"1("k", Aut(E7)) which, because the Dynkin diagram of E7 (see below) has no automorphisms, coincides with "H"1("k", E7, ad). 
Over the field of real numbers, the identity components of the real points of these algebraically twisted forms of E7 coincide with the three real Lie groups mentioned above, but with a subtlety concerning the fundamental group: all adjoint forms of E7 have fundamental group Z/2Z in the sense of algebraic geometry, meaning that they admit exactly one double cover; the further non-compact real Lie group forms of E7 are therefore not algebraic and admit no faithful finite-dimensional representations. Over finite fields, the Lang–Steinberg theorem implies that "H"1("k", E7) = 0, meaning that E7 has no twisted forms: see below. Algebra. Dynkin diagram. The Dynkin diagram for E7 consists of a chain of six nodes with a seventh node attached to the third node of the chain. Root system. Even though the roots span a 7-dimensional space, it is more symmetric and convenient to represent them as vectors lying in a 7-dimensional subspace of an 8-dimensional vector space. The roots are all the 8×7 permutations of (1,−1,0,0,0,0,0,0) and all the formula_0 permutations of (½,½,½,½,−½,−½,−½,−½). Note that the 7-dimensional subspace is the subspace where the sum of all the eight coordinates is zero. There are 126 roots. The simple roots are (0,−1,1,0,0,0,0,0) (0,0,−1,1,0,0,0,0) (0,0,0,−1,1,0,0,0) (0,0,0,0,−1,1,0,0) (0,0,0,0,0,−1,1,0) (0,0,0,0,0,0,−1,1) (½,½,½,½,−½,−½,−½,−½) They are listed so that their corresponding nodes in the Dynkin diagram are ordered from left to right (in the diagram depicted above) with the side node last. An alternative description. An alternative (7-dimensional) description of the root system, which is useful in considering E7 × SU(2) as a subgroup of E8, is the following: All formula_1 permutations of (±1,±1,0,0,0,0,0) preserving the zero at the last entry, all of the following roots with an even number of +½: formula_2 and the two following roots formula_3 Thus the generators consist of a 66-dimensional so(12) subalgebra as well as 64 generators that transform as two self-conjugate Weyl spinors of spin(12) of opposite chirality, and their chirality generator, and two other generators of chiralities formula_4. Given the E7 Cartan matrix (below) and a Dynkin diagram node ordering of: one choice of simple roots is given by the rows of the following matrix: formula_5 Weyl group. The Weyl group of E7 is of order 2903040: it is the direct product of the cyclic group of order 2 and the unique simple group of order 1451520 (which can be described as PSp6(2) or PSΩ7(2)). Cartan matrix. formula_6 Important subalgebras and representations. E7 has an SU(8) subalgebra, as is evident by noting that in the 8-dimensional description of the root system, the first group of roots is identical to the roots of SU(8) (with the same Cartan subalgebra as in the E7). In addition to the 133-dimensional adjoint representation, there is a 56-dimensional "vector" representation, to be found in the E8 adjoint representation. The characters of finite dimensional representations of the real and complex Lie algebras and Lie groups are all given by the Weyl character formula. The dimensions of the smallest irreducible representations are (sequence in the OEIS): 1, 56, 133, 912, 1463, 1539, 6480, 7371, 8645, 24320, 27664, 40755, 51072, 86184, 150822, 152152, 238602, 253935, 293930, 320112, 362880, 365750, 573440, 617253, 861840, 885248, 915705, 980343, 2273920, 2282280, 2785552, 3424256, 3635840...
The underlined terms in the sequence above are the dimensions of those irreducible representations possessed by the adjoint form of E7 (equivalently, those whose weights belong to the root lattice of E7), whereas the full sequence gives the dimensions of the irreducible representations of the simply connected form of E7. There exist non-isomorphic irreducible representations of dimensions 1903725824, 16349520330, etc. The fundamental representations are those with dimensions 133, 8645, 365750, 27664, 1539, 56 and 912 (corresponding to the seven nodes in the Dynkin diagram in the order chosen for the Cartan matrix above, i.e., the nodes are read in the six-node chain first, with the last node being connected to the third). The embeddings of the maximal subgroups of E7 up to dimension 133 are shown to the right. E7 polynomial invariants. E7 is the automorphism group of the following pair of polynomials in 56 non-commutative variables. We divide the variables into two groups of 28, ("p", "P") and ("q", "Q") where "p" and "q" are real variables and "P" and "Q" are 3×3 octonion hermitian matrices. Then the first invariant is the symplectic invariant of Sp(56, R): formula_7 The second more complicated invariant is a symmetric quartic polynomial: formula_8 where formula_9 and the binary circle operator is defined by formula_10. An alternative quartic polynomial invariant constructed by Cartan uses two anti-symmetric 8×8 matrices each with 28 components. formula_11 Chevalley groups of type E7. The points over a finite field with "q" elements of the (split) algebraic group E7 (see above), whether of the adjoint (centerless) or simply connected form (its algebraic universal cover), give a finite Chevalley group. This is closely connected to the group written E7("q"); however, there is ambiguity in this notation, which can stand for several things: From the finite group perspective, the relation between these three groups, which is quite analogous to that between SL("n", "q"), PGL("n", "q") and PSL("n", "q"), can be summarized as follows: E7("q") is simple for any "q", E7,sc("q") is its Schur cover, and E7,ad("q") lies in its automorphism group; furthermore, when "q" is a power of 2, all three coincide, and otherwise (when "q" is odd), the Schur multiplier of E7("q") is 2 and E7("q") is of index 2 in E7,ad("q"), which explains why E7,sc("q") and E7,ad("q") are often written as 2·E7("q") and E7("q")·2. From the algebraic group perspective, it is less common for E7("q") to refer to the finite simple group, because the latter is not in a natural way the set of points of an algebraic group over F"q", unlike E7,sc("q") and E7,ad("q"). As mentioned above, E7("q") is simple for any "q", and it constitutes one of the infinite families addressed by the classification of finite simple groups. Its number of elements is given by the formula (sequence in the OEIS): formula_12 The order of E7,sc("q") or E7,ad("q") (both are equal) can be obtained by removing the dividing factor gcd(2, "q"−1) (sequence in the OEIS). The Schur multiplier of E7("q") is gcd(2, "q"−1), and its outer automorphism group is the product of the diagonal automorphism group Z/gcd(2, "q"−1)Z (given by the action of E7,ad("q")) and the group of field automorphisms (i.e., cyclic of order "f" if "q" = "p"^"f" where "p" is prime). Importance in physics. "N" = 8 supergravity in four dimensions, which is a dimensional reduction from eleven-dimensional supergravity, admits an E7 bosonic global symmetry and an SU(8) bosonic local symmetry.
The fermions are in representations of SU(8), the gauge fields are in a representation of E7, and the scalars are in a representation of both (gravitons are singlets with respect to both). Physical states are in representations of the coset E7 / SU(8). In string theory, E7 appears as a part of the gauge group of one of the (unstable and non-supersymmetric) versions of the heterotic string. It can also appear in the unbroken gauge group E8 × E7 in six-dimensional compactifications of heterotic string theory, for instance on the four-dimensional surface K3. References. E. Cremmer and B. Julia, ""N" = 8 Supergravity Theory. 1. The Lagrangian", Phys. Lett. B80:48, 1978. Online scanned version at http://ac.els-cdn.com/0370269378903039/1-s2.0-0370269378903039-main.pdf?_tid=79273f80-539d-11e4-a133-00000aab0f6c&acdnat=1413289833_5f3539a6365149b108ddcec889200964.
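To make the eight-coordinate description of the root system above concrete, here is a minimal sketch (not part of the article's sources) that enumerates the roots and checks that there are 126 of them, each lying in the hyperplane where the coordinates sum to zero and each of squared length 2.

```python
from fractions import Fraction
from itertools import permutations, product

# The 8 * 7 = 56 integer roots: all distinct permutations of one +1 and one -1.
integer_roots = set(permutations((1, -1, 0, 0, 0, 0, 0, 0)))

# The C(8,4) = 70 half-integer roots: four entries +1/2 and four entries -1/2.
half = Fraction(1, 2)
half_roots = {v for v in product((half, -half), repeat=8) if sum(v) == 0}

roots = integer_roots | half_roots
assert len(integer_roots) == 56
assert len(half_roots) == 70
assert len(roots) == 126

# Each root lies in the hyperplane of coordinate sum zero and has squared length 2.
assert all(sum(r) == 0 and sum(x * x for x in r) == 2 for r in roots)
print(len(roots), "roots of E7 in the 8-dimensional description")
```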
[ { "math_id": 0, "text": "\\begin{pmatrix}8\\\\4\\end{pmatrix}" }, { "math_id": 1, "text": "4\\times\\begin{pmatrix}6\\\\2\\end{pmatrix}" }, { "math_id": 2, "text": "\\left(\\pm{1\\over 2},\\pm{1\\over 2},\\pm{1\\over 2},\\pm{1\\over 2},\\pm{1\\over 2},\\pm{1\\over 2},\\pm{1\\over \\sqrt{2}}\\right)" }, { "math_id": 3, "text": "\\left(0,0,0,0,0,0,\\pm \\sqrt{2}\\right)." }, { "math_id": 4, "text": "\\pm \\sqrt{2}" }, { "math_id": 5, "text": "\\begin{bmatrix}\n1&-1&0&0&0&0&0 \\\\\n0&1&-1&0&0&0&0 \\\\\n0&0&1&-1&0&0&0 \\\\\n0&0&0&1&-1&0&0 \\\\\n0&0&0&0&1&1&0 \\\\\n-\\frac{1}{2}&-\\frac{1}{2}&-\\frac{1}{2}&-\\frac{1}{2}&-\\frac{1}{2}&-\\frac{1}{2}&\\frac{\\sqrt{2}}{2}\\\\\n0&0&0&0&1&-1&0 \\\\\n\\end{bmatrix}." }, { "math_id": 6, "text": "\\begin{bmatrix}\n 2 & -1 & 0 & 0 & 0 & 0 & 0 \\\\\n-1 & 2 & -1 & 0 & 0 & 0 & 0 \\\\\n 0 & -1 & 2 & -1 & 0 & 0 & 0 \\\\\n 0 & 0 & -1 & 2 & -1 & 0 & -1 \\\\\n 0 & 0 & 0 & -1 & 2 & -1 & 0 \\\\\n 0 & 0 & 0 & 0 & -1 & 2 & 0 \\\\\n 0 & 0 & 0 & -1 & 0 & 0 & 2\n\\end{bmatrix}." }, { "math_id": 7, "text": "C_1 = pq - qp + Tr[PQ] - Tr[QP]" }, { "math_id": 8, "text": "C_2 = (pq + Tr[P\\circ Q])^2 + p Tr[Q\\circ \\tilde{Q}]+q Tr[P\\circ \\tilde{P}]+Tr[\\tilde{P}\\circ \\tilde{Q}] " }, { "math_id": 9, "text": "\\tilde{P} \\equiv \\det(P) P^{-1}" }, { "math_id": 10, "text": "A\\circ B = (AB+BA)/2" }, { "math_id": 11, "text": " C_2 = Tr[(XY)^2] - \\dfrac{1}{4} Tr[XY]^2 +\\frac{1}{96}\\epsilon_{ijklmnop}\\left( X^{ij}X^{kl}X^{mn}X^{op} + Y^{ij}Y^{kl}Y^{mn}Y^{op} \\right)" }, { "math_id": 12, "text": "\\frac{1}{\\mathrm{gcd}(2,q-1)}q^{63}(q^{18}-1)(q^{14}-1)(q^{12}-1)(q^{10}-1)(q^8-1)(q^6-1)(q^2-1)" } ]
https://en.wikipedia.org/wiki?curid=649115
649128
Modulatory space
The spaces described in this article are pitch class spaces which model the relationships between pitch classes in some musical system. These models are often graphs, groups or lattices. Closely related to pitch class space is pitch space, which represents pitches rather than pitch classes, and chordal space, which models relationships between chords. Circular pitch class space. The simplest pitch space model is the real line. In the MIDI Tuning Standard, for example, fundamental frequencies "f" are mapped to numbers "p" according to the equation formula_0 This creates a linear space in which octaves have size 12, semitones (the distance between adjacent keys on the piano keyboard) have size 1, and A440 is assigned the number 69 (meaning middle C is assigned the number 60). To create circular pitch class space we identify or "glue together" pitches "p" and "p" + 12. The result is a continuous, circular pitch class space that mathematicians call R/12Z; its twelve equal-tempered pitch classes form the finite cyclic group Z/12Z. Circles of generators. Other models of pitch class space, such as the circle of fifths, attempt to describe the special relationship between pitch classes related by perfect fifth. In equal temperament, twelve successive fifths equate to seven octaves exactly, and hence the chain of fifths closes back on itself in terms of pitch classes, forming a circle. We say that the pitch class of the fifth generates – or is a generator of – the space of twelve pitch classes. By dividing the octave into n equal parts, and choosing an integer m<n such that m and n are relatively prime – that is, have no common divisor – we obtain similar circles, which all have the structure of finite cyclic groups. By drawing a line between two pitch classes when they differ by a generator, we can depict the circle of generators as a cycle graph, in the shape of a regular polygon. Toroidal modulatory spaces. If we divide the octave into n parts, where n = rs is the product of two relatively prime integers r and s, we may represent every element of the tone space as the product of a certain number of "r" generators times a certain number of "s" generators; in other words, as the direct sum of two cyclic groups of orders r and s. We may now define a graph with n vertices on which the group acts, by adding an edge between two pitch classes whenever they differ by either an "r" generator or an "s" generator (the so-called Cayley graph of formula_1 with generators "r" and "s"). The result is a graph of genus one, which is to say, a graph with a donut or torus shape. Such a graph is called a toroidal graph. An example is equal temperament; twelve is the product of 3 and 4, and we may represent any pitch class as a combination of thirds of an octave, or major thirds, and fourths of an octave, or minor thirds, and then draw a toroidal graph by drawing an edge whenever two pitch classes differ by a major or minor third. We may generalize immediately to any number of relatively prime factors, producing graphs that can be drawn in a regular manner on an n-torus. Chains of generators. A linear temperament is a regular temperament of rank two generated by the octave and another interval, commonly called "the" generator. The most familiar example by far is meantone temperament, whose generator is a flattened, meantone fifth. The pitch classes of any linear temperament can be represented as lying along an infinite chain of generators; in meantone for instance this would be -F-C-G-D-A- etc. This defines a linear modulatory space. Cylindrical modulatory spaces. 
A temperament of rank two which is not linear has one generator which is a fraction of an octave, called the period. We may represent the modulatory space of such a temperament as n chains of generators in a circle, forming a cylinder. Here n is the number of periods in an octave. For example, diaschismic temperament is the temperament which tempers out the diaschisma, or 2048/2025. It can be represented as two chains of slightly (3.25 to 3.55 cents) sharp fifths a half-octave apart, which can be depicted as two chains perpendicular to a circle and at opposite sides of it. The cylindrical appearance of this sort of modulatory space becomes more apparent when the period is a smaller fraction of an octave; for example, ennealimmal temperament has a modulatory space consisting of nine chains of minor thirds in a circle (where the thirds may be only 0.02 to 0.03 cents sharp). Five-limit modulatory space. Five-limit just intonation has a modulatory space based on the fact that its pitch classes can be represented by 3^a 5^b, where a and b are integers. It is therefore a free abelian group with the two generators 3 and 5, and can be represented in terms of a square lattice with fifths along the horizontal axis, and major thirds along the vertical axis. In many ways a more enlightening picture emerges if we represent it in terms of a hexagonal lattice instead; this is the Tonnetz of Hugo Riemann, discovered independently around the same time by Shohé Tanaka. The fifths are along the horizontal axis, and the major thirds point off to the right at an angle of sixty degrees. Another sixty degrees gives us the axis of major sixths, pointing off to the left. The non-unison elements of the 5-limit tonality diamond, 3/2, 5/4, 5/3, 4/3, 8/5, 6/5 are now arranged in a regular hexagon around 1. The triads are the equilateral triangles of this lattice, with the upwards-pointing triangles being major triads, and downward-pointing triangles being minor triads. This picture of five-limit modulatory space is generally preferable since it treats the consonances in a uniform way, and does not suggest that, for instance, a major third is more of a consonance than a major sixth. When two lattice points are as close as possible, a unit distance apart, then and only then are they separated by a consonant interval. Hence the hexagonal lattice provides a superior picture of the structure of the five-limit modulatory space. In more abstract mathematical terms, we can describe this lattice as the integer pairs (a, b), where the distance is not the usual Euclidean distance but rather the one defined by the vector space norm formula_2 Seven-limit modulatory space. In similar fashion, we can define a modulatory space for seven-limit just intonation, by representing 3^a 5^b 7^c in terms of a corresponding cubic lattice. Once again, however, a more enlightening picture emerges if we represent it instead in terms of the three-dimensional analog of the hexagonal lattice, a lattice called A3, which is equivalent to the face centered cubic lattice, or D3. Abstractly, it can be defined as the integer triples (a, b, c), associated to 3^a 5^b 7^c, where the distance measure is not the usual Euclidean distance but rather the Euclidean distance deriving from the vector space norm formula_3 In this picture, the twelve non-unison elements of the seven-limit tonality diamond are arranged around 1 in the shape of a cuboctahedron.
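As a small illustration of the MIDI mapping and the octave identification described under "Circular pitch class space", the following sketch is a hypothetical helper (the function names are illustrative, not part of the MIDI standard) that sends a frequency to its MIDI number and then to circular pitch class space by reducing modulo 12.

```python
import math

def midi_number(freq_hz):
    """Linear pitch space: p = 69 + 12*log2(f/440), so A440 -> 69 and octaves have size 12."""
    return 69 + 12 * math.log2(freq_hz / 440.0)

def pitch_class(freq_hz):
    """Circular pitch class space: identify p with p + 12 by reducing modulo the octave."""
    return midi_number(freq_hz) % 12

print(midi_number(440.0))               # 69.0  (A above middle C)
print(round(midi_number(261.6256), 3))  # ~60.0 (middle C)
print(pitch_class(880.0))               # 9.0 -- an octave above A440, same pitch class
```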
[ { "math_id": 0, "text": "\np = 69 + 12\\log_2 {(f/440)}\n" }, { "math_id": 1, "text": "\\mathbb{Z}_{12}" }, { "math_id": 2, "text": "||(a, b)|| = \\sqrt{a^2 + ab + b^2}." }, { "math_id": 3, "text": "||(a, b, c)|| = \\sqrt{a^2 + b^2 + c^2 + ab + bc + ca}." } ]
https://en.wikipedia.org/wiki?curid=649128
64921659
Hooper's paradox
Optical illusion Hooper's paradox is a falsidical paradox based on an optical illusion. A geometric shape with an area of 32 units is dissected into four parts, which afterwards get assembled into a rectangle with an area of only 30 units. Explanation. Upon close inspection one can notice that the triangles of the dissected shape are not identical to the triangles in the rectangle. The length of the shorter side at the right angle measures 2 units in the original shape but only 1.8 units in the rectangle. This means that the real triangles of the original shape overlap in the rectangle. The overlapping area is a parallelogram, the diagonals and sides of which can be computed via the Pythagorean theorem. formula_0 formula_1 formula_2 formula_3 The area of this parallelogram can be determined using Heron's formula for triangles. This yields formula_4 for the semiperimeter of the triangle (half of the parallelogram) and with that formula_5 for the area of the parallelogram. So the overlapping area of the two triangles accounts exactly for the vanished area of 2 units. History. William Hooper published the paradox in 1774 in his book "Rational Recreations", calling it "The geometric money". The 1774 edition of his book contained an incorrect drawing, which was corrected in the 1782 edition. However, Hooper was not the first to publish this geometric fallacy, since his book was largely an adaptation of Edmé-Gilles Guyot's "Nouvelles récréations physiques et mathématiques", which had been published in France in 1769. The description in this book contains the same incorrect drawing as in Hooper's book, but it was corrected in a later edition as well.
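The computation in the explanation above can be checked numerically; the following sketch simply re-evaluates the Pythagorean lengths and Heron's formula in floating point and recovers the overlapping area of 2 units.

```python
import math

# Lengths of one diagonal and the two sides of the overlap parallelogram,
# obtained with the Pythagorean theorem as in the explanation above.
d1 = math.sqrt(2**2 + 1**2)   # sqrt(5)
s1 = math.sqrt(2**2 + 6**2)   # sqrt(40)
s2 = math.sqrt(1**2 + 4**2)   # sqrt(17)

# Heron's formula for the triangle that makes up half of the parallelogram.
s = (d1 + s1 + s2) / 2
triangle_area = math.sqrt(s * (s - s1) * (s - s2) * (s - d1))

parallelogram_area = 2 * triangle_area
print(parallelogram_area)     # 2.0 (up to rounding): exactly the vanished area
```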
[ { "math_id": 0, "text": " d_1=\\sqrt{2^2+1^2}=\\sqrt{5} " }, { "math_id": 1, "text": " d_2=\\sqrt{3^2+10^2}=\\sqrt{109} " }, { "math_id": 2, "text": " s_1=\\sqrt{2^2+6^2}=\\sqrt{40} " }, { "math_id": 3, "text": " s_2=\\sqrt{1^2+4^2}=\\sqrt{17} " }, { "math_id": 4, "text": " s=\\frac{d_1+s_1+s_2}{2}=\\frac{\\sqrt{5}+\\sqrt{17}+\\sqrt{40}}{2} " }, { "math_id": 5, "text": "\n\\begin{align} \nF&=2\\cdot \\sqrt{s\\cdot (s-s_1) \\cdot (s-s_2) \\cdot (s-d_1)} \\\\[5pt]\n &=2\\cdot\\frac{1}{4}\\cdot\\sqrt{(\\sqrt{5}+\\sqrt{17}+\\sqrt{40})\\cdot(-\\sqrt{5}+\\sqrt{17}+\\sqrt{40})\\cdot(\\sqrt{5}-\\sqrt{17}+\\sqrt{40})\\cdot(\\sqrt{5}+\\sqrt{17}-\\sqrt{40})} \\\\[5pt]\n &=2\\cdot\\frac{1}{4}\\cdot\\sqrt{16} \\\\[5pt]\n &=2\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=64921659
64926134
Quadric (algebraic geometry)
In mathematics, a quadric or quadric hypersurface is the subspace of "N"-dimensional space defined by a polynomial equation of degree 2 over a field. Quadrics are fundamental examples in algebraic geometry. The theory is simplified by working in projective space rather than affine space. An example is the quadric surface formula_0 in projective space formula_1 over the complex numbers C. A quadric has a natural action of the orthogonal group, and so the study of quadrics can be considered as a descendant of Euclidean geometry. Many properties of quadrics hold more generally for projective homogeneous varieties. Another generalization of quadrics is provided by Fano varieties. Property of quadric. By definition, a quadric "X" of dimension "n" over a field "k" is the subspace of formula_2 defined by "q" = 0, where "q" is a nonzero homogeneous polynomial of degree 2 over "k" in variables formula_3. (A homogeneous polynomial is also called a form, and so "q" may be called a quadratic form.) If "q" is the product of two linear forms, then "X" is the union of two hyperplanes. It is common to assume that formula_4 and "q" is irreducible, which excludes that special case. Here algebraic varieties over a field "k" are considered as a special class of schemes over "k". When "k" is algebraically closed, one can also think of a projective variety in a more elementary way, as a subset of formula_5 defined by homogeneous polynomial equations with coefficients in "k". If "q" can be written (after some linear change of coordinates) as a polynomial in a proper subset of the variables, then "X" is the projective cone over a lower-dimensional quadric. It is reasonable to focus attention on the case where "X" is not a cone. For "k" of characteristic not 2, "X" is not a cone if and only if "X" is smooth over "k". When "k" has characteristic not 2, smoothness of a quadric is also equivalent to the Hessian matrix of "q" having nonzero determinant, or to the associated bilinear form "b"("x","y") = "q"("x"+"y") – "q"("x") – "q"("y") being nondegenerate. In general, for "k" of characteristic not 2, the rank of a quadric means the rank of the Hessian matrix. A quadric of rank "r" is an iterated cone over a smooth quadric of dimension "r" − 2. It is a fundamental result that a smooth quadric "X" over a field "k" is rational over "k" if and only if "X" has a "k"-rational point. That is, if there is a solution of the equation "q" = 0 of the form formula_6 with formula_7 in "k", not all zero (hence corresponding to a point in projective space), then there is a one-to-one correspondence defined by rational functions over "k" between formula_8 minus a lower-dimensional subset and "X" minus a lower-dimensional subset. For example, if "k" is infinite, it follows that if "X" has one "k"-rational point then it has infinitely many. This equivalence is proved by stereographic projection. In particular, every quadric over an algebraically closed field is rational. A quadric over a field "k" is called isotropic if it has a "k"-rational point. An example of an anisotropic quadric is the quadric formula_9 in projective space formula_10 over the real numbers R. Linear subspaces of quadrics. A central part of the geometry of quadrics is the study of the linear spaces that they contain. (In the context of projective geometry, a linear subspace of formula_11 is isomorphic to formula_12 for some formula_13.) A key point is that every linear space contained in a smooth quadric has dimension at most half the dimension of the quadric. 
Moreover, when "k" is algebraically closed, this is an optimal bound, meaning that every smooth quadric of dimension "n" over "k" contains a linear subspace of dimension formula_14. Over any field "k", a smooth quadric of dimension "n" is called split if it contains a linear space of dimension formula_14 over "k". Thus every smooth quadric over an algebraically closed field is split. If a quadric "X" over a field "k" is split, then it can be written (after a linear change of coordinates) as formula_15 if "X" has dimension 2"m" − 1, or formula_16 if "X" has dimension 2"m". In particular, over an algebraically closed field, there is only one smooth quadric of each dimension, up to isomorphism. For many applications, it is important to describe the space "Y" of all linear subspaces of maximal dimension in a given smooth quadric "X". (For clarity, assume that "X" is split over "k".) A striking phenomenon is that "Y" is connected if "X" has odd dimension, whereas it has two connected components if "X" has even dimension. That is, there are two different "types" of maximal linear spaces in "X" when "X" has even dimension. The two families can be described by: for a smooth quadric "X" of dimension 2"m", fix one "m"-plane "Q" contained in "X". Then the two types of "m"-planes "P" contained in "X" are distinguished by whether the dimension of the intersection formula_17 is even or odd. (The dimension of the empty set is taken to be −1 here.) Low-dimensional quadrics. Let "X" be a split quadric over a field "k". (In particular, "X" can be any smooth quadric over an algebraically closed field.) In low dimensions, "X" and the linear spaces it contains can be described as follows. As these examples suggest, the space of "m"-planes in a split quadric of dimension 2"m" always has two connected components, each isomorphic to the isotropic Grassmannian of ("m" − 1)-planes in a split quadric of dimension 2"m" − 1. Any reflection in the orthogonal group maps one component isomorphically to the other. The Bruhat decomposition. A smooth quadric over a field "k" is a projective homogeneous variety for the orthogonal group (and for the special orthogonal group), viewed as linear algebraic groups over "k". Like any projective homogeneous variety for a split reductive group, a split quadric "X" has an algebraic cell decomposition, known as the Bruhat decomposition. (In particular, this applies to every smooth quadric over an algebraically closed field.) That is, "X" can be written as a finite union of disjoint subsets that are isomorphic to affine spaces over "k" of various dimensions. (For projective homogeneous varieties, the cells are called Schubert cells, and their closures are called Schubert varieties.) Cellular varieties are very special among all algebraic varieties. For example, a cellular variety is rational, and (for "k" = C) the Hodge theory of a smooth projective cellular variety is trivial, in the sense that formula_24 for formula_25. For a cellular variety, the Chow group of algebraic cycles on "X" is the free abelian group on the set of cells, as is the integral homology of "X" (if "k" = C). A split quadric "X" of dimension "n" has only one cell of each dimension "r", except in the middle dimension of an even-dimensional quadric, where there are two cells. The corresponding cell closures (Schubert varieties) are: Using the Bruhat decomposition, it is straightforward to compute the Chow ring of a split quadric of dimension "n" over a field, as follows. 
When the base field is the complex numbers, this is also the integral cohomology ring of a smooth quadric, with formula_29 mapping isomorphically to formula_30. (The cohomology in odd degrees is zero.) Here "h" is the class of a hyperplane section and "l" is the class of a maximal linear subspace of "X". (For "n" = 2"m", the class of the other type of maximal linear subspace is formula_33.) This calculation shows the importance of the linear subspaces of a quadric: the Chow ring of all algebraic cycles on "X" is generated by the "obvious" element "h" (pulled back from the class formula_34 of a hyperplane in formula_10) together with the class of a maximal linear subspace of "X". Isotropic Grassmannians and the projective pure spinor variety. The space of "r"-planes in a smooth "n"-dimensional quadric (like the quadric itself) is a projective homogeneous variety, known as the isotropic Grassmannian or orthogonal Grassmannian OGr("r" + 1, "n" + 2). (The numbering refers to the dimensions of the corresponding vector spaces. In the case of middle-dimensional linear subspaces of a quadric of even dimension 2"m", one writes formula_35 for one of the two connected components.) As a result, the isotropic Grassmannians of a split quadric over a field also have algebraic cell decompositions. The isotropic Grassmannian "W" = OGr("m",2"m" + 1) of ("m" − 1)-planes in a smooth quadric of dimension 2"m" − 1 may also be viewed as the variety of Projective pure spinors, or simple spinor variety, of dimension "m"("m" + 1)/2. (Another description of the pure spinor variety is as formula_35.) To explain the name: the smallest SO(2"m" + 1)-equivariant projective embedding of "W" lands in projective space of dimension formula_36. The action of SO(2"m" + 1) on this projective space does not come from a linear representation of SO(2"m"+1) over "k", but rather from a representation of its simply connected double cover, the spin group Spin(2"m" + 1) over "k". This is called the spin representation of Spin(2"m" + 1), of dimension formula_37. Over the complex numbers, the isotropic Grassmannian OGr("r" + 1, "n" + 2) of "r"-planes in an "n"-dimensional quadric "X" is a homogeneous space for the complex algebraic group formula_38, and also for its maximal compact subgroup, the compact Lie group SO("n" + 2). From the latter point of view, this isotropic Grassmannian is formula_39 where U("r"+1) is the unitary group. For "r" = 0, the isotropic Grassmannian is the quadric itself, which can therefore be viewed as formula_40 For example, the complex projectivized pure spinor variety OGr("m", 2"m" + 1) can be viewed as SO(2"m" + 1)/U("m"), and also as SO(2"m"+2)/U("m"+1). These descriptions can be used to compute the cohomology ring (or equivalently the Chow ring) of the spinor variety: formula_41 where the Chern classes formula_42 of the natural rank-"m" vector bundle are equal to formula_43. Here formula_44 is understood to mean 0 for "j" > "m". Spinor bundles on quadrics. The spinor bundles play a special role among all vector bundles on a quadric, analogous to the maximal linear subspaces among all subvarieties of a quadric. To describe these bundles, let "X" be a split quadric of dimension "n" over a field "k". The special orthogonal group SO("n"+2) over "k" acts on "X", and therefore so does its double cover, the spin group "G" = Spin("n"+2) over "k". In these terms, "X" is a homogeneous space "G"/"P", where "P" is a maximal parabolic subgroup of "G". 
The semisimple part of "P" is the spin group Spin("n"), and there is a standard way to extend the spin representations of Spin("n") to representations of "P". (There are two spin representations formula_45 for "n" = 2"m", each of dimension formula_46, and one spin representation "V" for "n" = 2"m" − 1, of dimension formula_46.) Then the spinor bundles on the quadric "X" = "G"/"P" are defined as the "G"-equivariant vector bundles associated to these representations of "P". So there are two spinor bundles formula_47 of rank formula_46 for "n" = 2"m", and one spinor bundle "S" of rank formula_46 for "n" = 2"m" − 1. For "n" even, any reflection in the orthogonal group switches the two spinor bundles on "X". For example, the two spinor bundles on a quadric surface formula_48 are the line bundles O(−1,0) and O(0,−1). The spinor bundle on a quadric 3-fold "X" is the natural rank-2 subbundle on "X" viewed as the isotropic Grassmannian of 2-planes in a 4-dimensional symplectic vector space. To indicate the significance of the spinor bundles: Mikhail Kapranov showed that the bounded derived category of coherent sheaves on a split quadric "X" over a field "k" has a full exceptional collection involving the spinor bundles, along with the "obvious" line bundles "O"("j") restricted from projective space: formula_49 if "n" is even, and formula_50 if "n" is odd. Concretely, this implies the split case of Richard Swan's calculation of the Grothendieck group of algebraic vector bundles on a smooth quadric; it is the free abelian group formula_51 for "n" even, and formula_52 for "n" odd. When "k" = C, the topological K-group formula_53 (of continuous complex vector bundles on the quadric "X") is given by the same formula, and formula_54 is zero. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "xy=zw" }, { "math_id": 1, "text": "{\\mathbf P}^3" }, { "math_id": 2, "text": "\\mathbf{P}^{n+1}" }, { "math_id": 3, "text": "x_0,\\ldots,x_{n+1}" }, { "math_id": 4, "text": "n\\geq 1" }, { "math_id": 5, "text": "{\\mathbf P}^N(k)=(k^{N+1}-0)/k^*" }, { "math_id": 6, "text": "(a_0,\\ldots,a_{n+1})" }, { "math_id": 7, "text": "a_0,\\ldots,a_{n+1}" }, { "math_id": 8, "text": "{\\mathbf P}^n" }, { "math_id": 9, "text": "x_0^2+x_1^2+\\cdots+x_{n+1}^2=0" }, { "math_id": 10, "text": "{\\mathbf P}^{n+1}" }, { "math_id": 11, "text": "{\\mathbf P}^N" }, { "math_id": 12, "text": "{\\mathbf P}^a" }, { "math_id": 13, "text": "a\\leq N" }, { "math_id": 14, "text": "\\lfloor n/2\\rfloor" }, { "math_id": 15, "text": "x_0x_1+x_2x_3+\\cdots+x_{2m-2}x_{2m-1}+x_{2m}^2=0" }, { "math_id": 16, "text": "x_0x_1+x_2x_3+\\cdots+x_{2m}x_{2m+1}=0" }, { "math_id": 17, "text": "P\\cap Q" }, { "math_id": 18, "text": "\\mathbf{P}^2" }, { "math_id": 19, "text": "\\mathbf{P}^1" }, { "math_id": 20, "text": "\\mathbf{P}^1\\times \\mathbf{P}^1" }, { "math_id": 21, "text": "\\mathbf{P}^3" }, { "math_id": 22, "text": "\\operatorname{Sp}(4,k)/\\{\\pm 1\\}" }, { "math_id": 23, "text": "\\operatorname{SL}(4,k)/\\{\\pm 1\\}" }, { "math_id": 24, "text": "h^{p,q}(X)=0" }, { "math_id": 25, "text": "p\\neq q" }, { "math_id": 26, "text": "0\\leq r<n/2" }, { "math_id": 27, "text": "\\mathbf{P}^r" }, { "math_id": 28, "text": "n/2<r\\leq n" }, { "math_id": 29, "text": "CH^j" }, { "math_id": 30, "text": "H^{2j}" }, { "math_id": 31, "text": "CH^*(X)\\cong \\Z[h,l]/(h^m-2l, l^2)" }, { "math_id": 32, "text": "CH^*(X)\\cong \\Z[h,l]/(h^{m+1}-2hl, l^2-ah^ml)" }, { "math_id": 33, "text": "h^m-l" }, { "math_id": 34, "text": "c_1O(1)" }, { "math_id": 35, "text": "\\operatorname{OGr}_{+}(m+1,2m+2)" }, { "math_id": 36, "text": "2^m-1" }, { "math_id": 37, "text": "2^m" }, { "math_id": 38, "text": "G=\\operatorname{SO}(n+2,\\mathbf{C})" }, { "math_id": 39, "text": "\\operatorname{SO}(n+2)/(\\operatorname{U}(r+1)\\times \\operatorname{SO}(n-2r))," }, { "math_id": 40, "text": "\\operatorname{SO}(n+2)/(\\operatorname{U}(1)\\times \\operatorname{SO}(n))." }, { "math_id": 41, "text": "CH^*\\operatorname{OGr}(m,2m+1)\\cong \\Z[e_1,\\ldots,e_m]/(e_j^2-2e_{j-1}e_{j+1}+2e_{j-2}e_{j+2}-\\cdots+(-1)^je_{2j}=0\\text{ for all }j)," }, { "math_id": 42, "text": "c_j" }, { "math_id": 43, "text": "2e_j" }, { "math_id": 44, "text": "e_j" }, { "math_id": 45, "text": "V_{+}, V_{-}" }, { "math_id": 46, "text": "2^{m-1}" }, { "math_id": 47, "text": "S_{+},S_{-}" }, { "math_id": 48, "text": "X\\cong \\mathbf{P}^1\\times\\mathbf{P}^1" }, { "math_id": 49, "text": "D^b(X)=\\langle S_{+},S_{-},O,O(1),\\ldots,O(n-1)\\rangle" }, { "math_id": 50, "text": "D^b(X)=\\langle S,O,O(1),\\ldots,O(n-1)\\rangle" }, { "math_id": 51, "text": "K_0(X)=\\Z\\{S_{+},S_{-},O,O(1),\\ldots,O(n-1)\\}" }, { "math_id": 52, "text": "K_0(X)=\\Z\\{S,O,O(1),\\ldots,O(n-1)\\}" }, { "math_id": 53, "text": "K^0(X)" }, { "math_id": 54, "text": "K^1(X)" } ]
https://en.wikipedia.org/wiki?curid=64926134
6494256
CTL*
Branching-time logic that is a superset of LTL and CTL CTL* is a superset of computational tree logic (CTL) and linear temporal logic (LTL). It freely combines path quantifiers and temporal operators. Like CTL, CTL* is a branching-time logic. The formal semantics of CTL* formulae are defined with respect to a given Kripke structure. History. LTL was first proposed for the verification of computer programs by Amir Pnueli in 1977. Four years later, in 1981, E. M. Clarke and E. A. Emerson invented CTL and CTL model checking. CTL* was defined by E. A. Emerson and Joseph Y. Halpern in 1983. CTL and LTL were developed independently before CTL*. Both sublogics have become standards in the model checking community, while CTL* is of practical importance because it provides an expressive testbed for representing and comparing these and other logics. This is surprising because the computational complexity of model checking in CTL* is not worse than that of LTL: they both lie in PSPACE. Syntax. The language of well-formed CTL* formulae is generated by the following unambiguous (with respect to bracketing) context-free grammar: formula_0 formula_1 where formula_2 ranges over a set of atomic formulas. Valid CTL*-formulae are built using the nonterminal formula_3. These formulae are called "state formulae", while those created by the symbol formula_4 are called "path formulae". (The above grammar contains some redundancies; for example, formula_5, as well as implication and equivalence, can be defined from negation and conjunction just as for Boolean algebras (or propositional logic), and the temporal operators X and U are sufficient to define the other two.) The operators are basically the same as in CTL. However, in CTL, every temporal operator (formula_6) has to be directly preceded by a quantifier, while in CTL* this is not required. The universal path quantifier may be defined in CTL* in the same way as in classical predicate calculus: formula_7, although this is not possible in the CTL fragment. Examples of formulae. Remark: When taking LTL as a subset of CTL*, any LTL formula is implicitly prefixed with the universal path quantifier formula_12. Semantics. The semantics of CTL* are defined with respect to some Kripke structure. As the names imply, state formulae are interpreted with respect to the states of this structure, while path formulae are interpreted over paths on it. State formulae. If a state formula_13 of the Kripke structure satisfies a state formula formula_3, it is denoted formula_14. This relation is defined inductively as follows: Path formulae. The satisfaction relation formula_26 for path formulae formula_27 and a path formula_28 is also defined inductively. For this, let formula_29 denote the sub-path formula_30: Decision problems. CTL* model checking (of an input formula on a fixed model) is PSPACE-complete and the satisfiability problem is 2EXPTIME-complete.
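The two-level grammar above (state formulae built with A and E over path formulae, which in turn extend state formulae with temporal operators) can be mirrored directly in code. The following sketch is only illustrative: the class names are made up for this example, and the Boolean connectives and the derived operators F and G are omitted for brevity.

```python
from dataclasses import dataclass
from typing import Union

# Two mutually referring syntactic categories, as in the grammar above:
# state formulae (atoms, A, E) and path formulae (state formulae plus X, U).

@dataclass
class Atom:                       # atomic proposition p      (state formula)
    name: str

@dataclass
class A:                          # "for all paths"           (state formula)
    path: "PathFormula"

@dataclass
class E:                          # "for some path"           (state formula)
    path: "PathFormula"

@dataclass
class X:                          # "next"                    (path formula)
    sub: "PathFormula"

@dataclass
class U:                          # "until"                   (path formula)
    left: "PathFormula"
    right: "PathFormula"

StateFormula = Union[Atom, A, E]
PathFormula = Union[StateFormula, X, U]   # every state formula is a path formula

# E X p is a state formula; A [p U q] quantifies a path formula over all paths.
ex1 = E(X(Atom("p")))
ex2 = A(U(Atom("p"), Atom("q")))
```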
[ { "math_id": 0, "text": "\\Phi::=\\bot \\mid \\top \\mid p \\mid (\\neg\\Phi) \\mid (\\Phi\\land\\Phi) \\mid (\\Phi\\lor\\Phi) \\mid \n(\\Phi\\Rightarrow\\Phi) \\mid (\\Phi\\Leftrightarrow\\Phi) \\mid A\\phi \\mid E\\phi" }, { "math_id": 1, "text": "\\phi::=\\Phi \\mid (\\neg\\phi) \\mid (\\phi\\land\\phi) \\mid (\\phi\\lor\\phi) \\mid \n(\\phi\\Rightarrow\\phi) \\mid (\\phi\\Leftrightarrow\\phi) \\mid X\\phi \\mid F\\phi \\mid G\\phi \\mid [\\phi U \\phi] \\mid [\\phi R \\phi]\n" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "\\Phi" }, { "math_id": 4, "text": "\\phi" }, { "math_id": 5, "text": "\\Phi\\lor\\Phi" }, { "math_id": 6, "text": "X, F, G, U" }, { "math_id": 7, "text": "A\\phi = \\neg E \\neg \\phi" }, { "math_id": 8, "text": "EX(p) \\land AFG(p)" }, { "math_id": 9, "text": "\\ AFG(p)" }, { "math_id": 10, "text": "\\ EX(p)" }, { "math_id": 11, "text": "\\ AG(p)" }, { "math_id": 12, "text": "A" }, { "math_id": 13, "text": "s" }, { "math_id": 14, "text": "s\\models\\Phi" }, { "math_id": 15, "text": "\\Big( (\\mathcal{M}, s) \\models \\top \\Big) \\land \\Big( (\\mathcal{M}, s) \\not\\models \\bot \\Big)" }, { "math_id": 16, "text": "\\Big( (\\mathcal{M}, s) \\models p \\Big) \\Leftrightarrow \\Big( p \\in L(s) \\Big)" }, { "math_id": 17, "text": "\\Big( (\\mathcal{M}, s) \\models \\neg\\Phi \\Big) \\Leftrightarrow \\Big( (\\mathcal{M}, s) \\not\\models \\Phi \\Big)" }, { "math_id": 18, "text": "\\Big( (\\mathcal{M}, s) \\models \\Phi_1 \\land \\Phi_2 \\Big) \\Leftrightarrow \\Big( \\big((\\mathcal{M}, s) \\models \\Phi_1 \\big) \\land \\big((\\mathcal{M}, s) \\models \\Phi_2 \\big) \\Big)" }, { "math_id": 19, "text": "\\Big( (\\mathcal{M}, s) \\models \\Phi_1 \\lor \\Phi_2 \\Big) \\Leftrightarrow \\Big( \\big((\\mathcal{M}, s) \\models \\Phi_1 \\big) \\lor \\big((\\mathcal{M}, s) \\models \\Phi_2 \\big) \\Big)" }, { "math_id": 20, "text": "\\Big( (\\mathcal{M}, s) \\models \\Phi_1 \\Rightarrow \\Phi_2 \\Big) \\Leftrightarrow \\Big( \\big((\\mathcal{M}, s) \\not\\models \\Phi_1 \\big) \\lor \\big((\\mathcal{M}, s) \\models \\Phi_2 \\big) \\Big)" }, { "math_id": 21, "text": "\\bigg( (\\mathcal{M}, s) \\models \\Phi_1 \\Leftrightarrow \\Phi_2 \\bigg) \\Leftrightarrow \\bigg( \\Big( \\big((\\mathcal{M}, s) \\models \\Phi_1 \\big) \\land \\big((\\mathcal{M}, s) \\models \\Phi_2 \\big) \\Big) \\lor \\Big( \\neg \\big((\\mathcal{M}, s) \\models \\Phi_1 \\big) \\land \\neg \\big((\\mathcal{M}, s) \\models \\Phi_2 \\big) \\Big) \\bigg)" }, { "math_id": 22, "text": "\\Big( (\\mathcal{M}, s) \\models A\\phi \\Big) \\Leftrightarrow \\Big(\\pi\\models\\phi" }, { "math_id": 23, "text": "\\ \\pi" }, { "math_id": 24, "text": "s\\Big)" }, { "math_id": 25, "text": "\\Big( (\\mathcal{M}, s) \\models E\\phi \\Big) \\Leftrightarrow \\Big(\\pi\\models\\phi" }, { "math_id": 26, "text": "\\pi\\models\\phi" }, { "math_id": 27, "text": "\\ \\phi" }, { "math_id": 28, "text": "\\pi = s_0 \\to s_1 \\to \\cdots" }, { "math_id": 29, "text": "\\ \\pi[n]" }, { "math_id": 30, "text": "s_n \\to s_{n+1} \\to \\cdots" }, { "math_id": 31, "text": "\\Big( \\pi \\models \\Phi \\Big) \\Leftrightarrow \\Big((\\mathcal{M}, s_0) \\models \\Phi\\Big)" }, { "math_id": 32, "text": "\\Big( \\pi \\models \\neg\\phi \\Big) \\Leftrightarrow \\Big( \\pi \\not\\models \\phi \\Big)" }, { "math_id": 33, "text": "\\Big( \\pi \\models \\phi_1 \\land \\phi_2 \\Big) \\Leftrightarrow \\Big( \\big(\\pi \\models \\phi_1 \\big) \\land \\big(\\pi \\models \\phi_2 \\big) \\Big)" }, { "math_id": 34, "text": "\\Big( \\pi \\models 
\\phi_1 \\lor \\phi_2 \\Big) \\Leftrightarrow \\Big( \\big(\\pi \\models \\phi_1 \\big) \\lor \\big(\\pi \\models \\phi_2 \\big) \\Big)" }, { "math_id": 35, "text": "\\Big( \\pi \\models \\phi_1 \\Rightarrow \\phi_2 \\Big) \\Leftrightarrow \\Big( \\big(\\pi \\not\\models \\phi_1 \\big) \\lor \\big(\\pi \\models \\phi_2 \\big) \\Big)" }, { "math_id": 36, "text": "\\bigg( \\pi \\models \\phi_1 \\Leftrightarrow \\phi_2 \\bigg) \\Leftrightarrow \\bigg( \\Big( \\big(\\pi \\models \\phi_1 \\big) \\land \\big(\\pi \\models \\phi_2 \\big) \\Big) \\lor \\Big( \\neg \\big(\\pi \\models \\phi_1 \\big) \\land \\neg \\big(\\pi \\models \\phi_2 \\big) \\Big) \\bigg)" }, { "math_id": 37, "text": "\\Big( \\pi \\models X\\phi \\Big) \\Leftrightarrow \\Big( \\pi[1] \\models \\phi \\Big)" }, { "math_id": 38, "text": "\\Big( \\pi \\models F\\phi \\Big) \\Leftrightarrow \\Big( \\exists n\\geqslant 0: \\pi[n] \\models \\phi \\Big)" }, { "math_id": 39, "text": "\\Big( \\pi \\models G\\phi \\Big) \\Leftrightarrow \\Big( \\forall n\\geqslant 0: \\pi[n] \\models \\phi \\Big)" }, { "math_id": 40, "text": "\\Big( \\pi \\models [\\phi_1U\\phi_2] \\Big) \\Leftrightarrow \\Big( \\exists n\\geqslant 0: \\big(\\pi[n] \\models \\phi_2 \\land \\forall 0\\leqslant k < n:~ \\pi[k] \\models \\phi_1 \\big)\\Big)" } ]
https://en.wikipedia.org/wiki?curid=6494256
6494433
Curing (chemistry)
Chemical process by which polymeric materials are hardened Curing is a chemical process employed in polymer chemistry and process engineering that produces the toughening or hardening of a polymer material by cross-linking of polymer chains. Although it is strongly associated with the production of thermosetting polymers, the term "curing" can be used for all the processes where a solid product is obtained from a liquid solution, such as with PVC plastisols. Curing process. During the curing process, single monomers and oligomers, mixed with or without a curing agent, react to form a tridimensional polymeric network. In the first part of the reaction, branched molecules with various architectures are formed, and their molecular weight increases in time with the extent of the reaction until the network size equals the size of the system. At this point the system loses its solubility and its viscosity tends to infinity. The remaining molecules start to coexist with the macroscopic network until they react with the network, creating further crosslinks. The crosslink density increases until the system reaches the end of the chemical reaction. Curing can be induced by heat, radiation, electron beams, or chemical additives. To quote from IUPAC: curing "might or might not require mixing with a chemical curing agent". Thus, two broad classes are curing induced by chemical additives (also called curing agents, hardeners) and curing in the absence of additives. An intermediate case involves a mixture of resin and additives that requires external stimulus (light, heat, radiation) to induce curing. The curing methodology depends on the resin and the application. Particular attention is paid to the shrinkage induced by the curing. Usually small values of shrinkage (2–3%) are desirable. Curing induced by additives. Epoxy resins are typically cured by the use of additives, often called hardeners. Polyamines are often used. The amine groups ring-open the epoxide rings. In rubber, curing is also induced by the addition of a crosslinker. The resulting process is called sulfur vulcanization. Sulfur breaks down to form polysulfide cross-links (bridges) between sections of the polymer chains. The degree of crosslinking determines the rigidity and durability, as well as other properties of the material. Paints and varnishes commonly contain oil drying agents, usually metallic soaps that catalyze cross-linking of the unsaturated drying oils that largely constitute them. When paint is described as "drying", it is in fact hardening by crosslinking. Oxygen atoms serve as the crosslinks, analogous to the role played by sulfur in the vulcanization of rubber. Curing without additives. In the case of concrete, curing entails the formation of silicate crosslinks. The process is not induced by additives. In many cases, the resin is provided as a solution or mixture with a thermally-activated catalyst, which induces crosslinking but only upon heating. For example, some acrylate-based resins are formulated with dibenzoyl peroxide. Upon heating the mixture, the peroxide decomposes into free radicals, which add to the acrylate, initiating crosslinking. Some organic resins are cured with heat. As heat is applied, the viscosity of the resin drops before the onset of crosslinking, whereupon it increases as the constituent oligomers interconnect. This process continues until a tridimensional network of oligomer chains is created – this stage is termed gelation. 
In terms of the processability of the resin, this marks an important stage: before gelation the system is relatively mobile; after it, mobility is very limited, the micro-structure of the resin and of the composite material is fixed, and severe diffusion limitations to further cure are created. Thus, in order to achieve vitrification in the resin, it is usually necessary to increase the process temperature after gelation. When catalysts are activated by ultraviolet radiation, the process is called UV cure. Monitoring methods. Cure monitoring is, for example, an essential component of the control of the manufacturing process of composite materials. The material, initially liquid, will be solid at the end of the process: viscosity is the most important property that changes during the process. Cure monitoring relies on monitoring various physical or chemical properties. Rheological analysis. A simple way to monitor the change in viscosity, and thus the extent of the reaction, in a curing process is to measure the variation of the elastic modulus. To measure the elastic modulus of a system during curing, a rheometer can be used. With dynamic mechanical analysis, the storage modulus (G') and the loss modulus (G") can be measured. The variation of G' and G" in time can indicate the extent of the curing reaction. As shown in Figure 4, after an "induction time", G' and G" start to increase, with an abrupt change in slope. At a certain point they cross each other; afterwards, the rates of G' and G" decrease, and the moduli tend to a plateau. When they reach the plateau the reaction is concluded. When the system is liquid, the storage modulus is very low. As the reaction continues, the system starts to behave more like a solid: the storage modulus increases. The degree of curing, formula_0, can be defined as follows: formula_1 The degree of curing starts from zero (at the beginning of the reaction) and grows to one (at the end of the reaction). The slope of the curve changes with time and has its maximum at about the midpoint of the reaction. Thermal analysis. If the reactions occurring during crosslinking are exothermic, the crosslinking rate can be related to the heat released during the process: the higher the number of bonds created, the more heat is released in the reaction. At the end of the reaction, no more heat will be released. To measure the heat flow, differential scanning calorimetry can be used. Assuming that each bond formed during the crosslinking releases the same amount of energy, the degree of curing, formula_0, can be defined as follows: formula_2 where formula_3 is the heat released up to a certain time formula_4, formula_5 is the instantaneous rate of heat release, and formula_6 is the total amount of heat released in formula_7, when the reaction finishes. Also in this case the degree of curing goes from zero (no bonds created) to one (no more reactions occur) with a slope that changes in time and has its maximum at about the midpoint of the reaction. Dielectrometric analysis. Conventional dielectrometry is typically carried out in a parallel-plate configuration of the dielectric sensor (capacitance probe) and has the capability of monitoring the resin cure throughout the entire cycle, from the liquid to the rubber to the solid state. It is also capable of monitoring phase separation in complex resin blends curing within a fibrous preform. The same attributes belong to the more recent development of the dielectric technique, namely microdielectrometry. 
Several versions of dielectric sensors are available commercially. The most suitable formats for use in cure monitoring applications are flat interdigital capacitive structures bearing a sensing grid on their surface. Depending on their design (specifically, those on durable substrates), they offer some reusability, while flexible-substrate sensors can also be used in the bulk of the resin system as embedded sensors. Spectroscopic analysis. The curing process can be monitored by measuring changes in various parameters: Ultrasonic analysis. Ultrasonic cure monitoring methods are based on the relationships between changes in the characteristics of propagating ultrasound and the real-time mechanical properties of a component, by measuring:
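The calorimetric degree of cure defined under "Thermal analysis" lends itself to a simple numerical treatment. The following sketch assumes a hypothetical DSC heat-flow trace (the synthetic data and variable names are illustrative only) and integrates it with the trapezoidal rule to obtain the degree of cure as a function of time.

```python
import numpy as np

# Hypothetical DSC data: time (s) and exothermic heat flow (W/g) during cure.
time = np.linspace(0.0, 1800.0, 181)                       # 30 min, 10 s steps
heat_flow = 0.5 * np.exp(-((time - 600.0) / 300.0) ** 2)   # synthetic exotherm

# Cumulative heat released Q(t) by trapezoidal integration of the heat flow.
dt = np.diff(time)
increments = 0.5 * (heat_flow[1:] + heat_flow[:-1]) * dt
Q = np.concatenate(([0.0], np.cumsum(increments)))         # same length as time

Q_total = Q[-1]          # total heat of reaction, released when cure is complete
alpha = Q / Q_total      # degree of cure, rising from 0 toward 1

print(f"total heat of reaction: {Q_total:.1f} J/g")
print(f"degree of cure at t = 600 s: {alpha[60]:.2f}")
```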
[ { "math_id": 0, "text": " \\alpha " }, { "math_id": 1, "text": " \\alpha = \\frac {G'(t) - G'_{min}} {G'_{max} - G'_{min}} " }, { "math_id": 2, "text": " \\alpha = \\frac {Q} {Q_T} = \\frac\t{\\int_{0}^{s} \\dot Q\\, dt} {\\int_{0}^{s_f} \\dot Q\\, dt} " }, { "math_id": 3, "text": " Q " }, { "math_id": 4, "text": " s " }, { "math_id": 5, "text": " \\dot Q " }, { "math_id": 6, "text": " Q_T " }, { "math_id": 7, "text": " s_f " } ]
https://en.wikipedia.org/wiki?curid=6494433
64956453
Leavitt path algebra
In mathematics, a Leavitt path algebra is a universal algebra constructed from a directed graph. Leavitt path algebras generalize Leavitt algebras and may be considered as algebraic analogues of graph C*-algebras. History. Leavitt path algebras were simultaneously introduced in 2005 by Gene Abrams and Gonzalo Aranda Pino as well as by Pere Ara, María Moreno, and Enrique Pardo, with neither of the two groups aware of the other's work. Leavitt path algebras have been investigated by dozens of mathematicians since their introduction, and in 2020 Leavitt path algebras were added to the Mathematics Subject Classification with code 16S88 under the general discipline of Associative Rings and Algebras. The basic reference is the book "Leavitt Path Algebras". Graph terminology. The theory of Leavitt path algebras uses terminology for graphs similar to that of C*-algebraists, which differs slightly from that used by graph theorists. The term graph is typically taken to mean a directed graph formula_0 consisting of a countable set of vertices formula_1, a countable set of edges formula_2, and maps formula_3 identifying the range and source of each edge, respectively. A vertex formula_4 is called a sink when formula_5; i.e., there are no edges in formula_6 with source formula_7. A vertex formula_4 is called an infinite emitter when formula_8 is infinite; i.e., there are infinitely many edges in formula_6 with source formula_7. A vertex is called a singular vertex if it is either a sink or an infinite emitter, and a vertex is called a regular vertex if it is not a singular vertex. Note that a vertex formula_7 is regular if and only if the number of edges in formula_6 with source formula_7 is finite and nonzero. A graph is called row-finite if it has no infinite emitters; i.e., if every vertex is either a regular vertex or a sink. A path is a finite sequence of edges formula_9 with formula_10 for all formula_11. An infinite path is a countably infinite sequence of edges formula_12 with formula_10 for all formula_13. A cycle is a path formula_9 with formula_14, and an exit for a cycle formula_9 is an edge formula_15 such that formula_16 and formula_17 for some formula_18. A cycle formula_9 is called a simple cycle if formula_19 for all formula_20. The following are two important graph conditions that arise in the study of Leavitt path algebras. Condition (L): Every cycle in the graph has an exit. Condition (K): There is no vertex in the graph that is on exactly one simple cycle. Equivalently, a graph satisfies Condition (K) if and only if each vertex in the graph is either on no cycles or on two or more simple cycles. The Cuntz–Krieger relations and the universal property. Fix a field formula_21. A Cuntz–Krieger formula_6-family is a collection formula_22 in a formula_21-algebra such that the following relations (called the Cuntz–Krieger relations) are satisfied: (CK0) formula_23 for all formula_24, (CK1) formula_25 for all formula_26, (CK2) formula_27 whenever formula_7 is a regular vertex, and (CK3) formula_28 for all formula_29. The Leavitt path algebra corresponding to formula_6, denoted by formula_30, is defined to be the formula_21-algebra generated by a Cuntz–Krieger formula_6-family that is universal in the sense that whenever formula_31 is a Cuntz–Krieger formula_6-family in a formula_21-algebra formula_32, there exists a formula_21-algebra homomorphism formula_33 with formula_34 for all formula_29, formula_35 for all formula_29, and formula_36 for all formula_4. 
We define formula_37 for formula_4, and for a path formula_38 we define formula_39 and formula_40. Using the Cuntz–Krieger relations, one can show that formula_41 Thus a typical element of formula_30 has the form formula_42 for scalars formula_43 and paths formula_44 in formula_6. If formula_21 is a field with an involution formula_45 (e.g., when formula_46), then one can define a *-operation on formula_30 by formula_47 that makes formula_30 into a *-algebra. Moreover, one can show that for any graph formula_6, the Leavitt path algebra formula_48 is isomorphic to a dense *-subalgebra of the graph C*-algebra formula_49. Examples. Leavitt path algebras has been computed for many graphs, and the following table shows some particular graphs and their Leavitt path algebras. We use the convention that a double arrow drawn from one vertex to another and labeled formula_50 indicates that there are a countably infinite number of edges from the first vertex to the second. Correspondence between graph and algebraic properties. As with graph C*-algebras, graph-theoretic properties of formula_6 correspond to algebraic properties of formula_30. Interestingly, it is often the case that the graph properties of formula_6 that are equivalent to an algebraic property of formula_30 are the same graph properties of formula_6 that are equivalent to corresponding C*-algebraic property of formula_49, and moreover, many of the properties for formula_30 are independent of the field formula_21. The following table provides a short list of some of the more well-known equivalences. The reader may wish to compare this table with the corresponding table for graph C*-algebras. The grading. For a path formula_38 we let formula_52 denote the length of formula_51. For each integer formula_53 we define formula_54. One can show that this defines a formula_55-grading on the Leavitt path algebra formula_30 and that formula_56 with formula_57 being the component of homogeneous elements of degree formula_58. It is important to note that the grading depends on the choice of the generating Cuntz-Krieger formula_6-family formula_59. The grading on the Leavitt path algebra formula_30 is the algebraic analogue of the gauge action on the graph C*-algebra formula_60, and it is a fundamental tool in analyzing the structure of formula_30. The uniqueness theorems. There are two well-known uniqueness theorems for Leavitt path algebras: the graded uniqueness theorem and the Cuntz-Krieger uniqueness theorem. These are analogous, respectively, to the gauge-invariant uniqueness theorem and Cuntz-Krieger uniqueness theorem for graph C*-algebras. Formal statements of the uniqueness theorems are as follows: The Graded Uniqueness Theorem: Fix a field formula_21. Let formula_6 be a graph, and let formula_30 be the associated Leavitt path algebra. If formula_32 is a graded formula_21-algebra and formula_33 is a graded algebra homomorphism with formula_61 for all formula_4, then formula_62 is injective. The Cuntz-Krieger Uniqueness Theorem: Fix a field formula_21. Let formula_6 be a graph satisfying Condition (L), and let formula_30 be the associated Leavitt path algebra. If formula_32 is a formula_21-algebra and formula_33 is an algebra homomorphism with formula_61 for all formula_4, then formula_62 is injective. Ideal structure. We use the term ideal to mean "two-sided ideal" in our Leavitt path algebras. The ideal structure of formula_30 can be determined from formula_6. 
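Before turning to the ideal structure, the simplest concrete illustration of the grading described above is the graph with one vertex v and one loop e. Sending p_v to 1, s_e to x and s_e^* to x^{-1} gives a Cuntz–Krieger family in the Laurent polynomial algebra K[x, x^{-1}] (the standard one-loop example), and an element s_α s_β^* is sent to x^{|α|−|β|}, so the degree-n homogeneous component corresponds to multiples of x^n. A small symbolic sketch of this (illustrative only):

```python
import sympy as sp

x = sp.symbols('x')
p_v, s_e, s_e_star = sp.Integer(1), x, 1 / x   # images in the Laurent polynomials

# (CK1): s_e* s_e = p_{r(e)} = p_v, and (CK2): p_v = s_e s_e* since v emits only e.
assert sp.simplify(s_e_star * s_e - p_v) == 0
assert sp.simplify(s_e * s_e_star - p_v) == 0

# Grading: s_alpha s_beta* with |alpha| = 3 and |beta| = 1 lands in degree 2.
elem = s_e**3 * s_e_star
assert sp.simplify(elem - x**2) == 0
```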
A subset of vertices formula_63 is called hereditary if for all formula_29, formula_64 implies formula_65. A hereditary subset formula_66 is called saturated if whenever formula_7 is a regular vertex with formula_67, then formula_68. The saturated hereditary subsets of formula_6 are partially ordered by inclusion, and they form a lattice with meet formula_69 and join formula_70 defined to be the smallest saturated hereditary subset containing formula_71. If formula_66 is a saturated hereditary subset, formula_72 is defined to be two-sided ideal in formula_30 generated by formula_73. A two-sided ideal formula_74 of formula_30 is called a graded ideal if the formula_74 has a formula_55-grading formula_75 and formula_76 for all formula_53. The graded ideals are partially ordered by inclusion and form a lattice with meet formula_77 and joint formula_78 defined to be the ideal generated by formula_79. For any saturated hereditary subset formula_66, the ideal formula_72 is graded. The following theorem describes how graded ideals of formula_30 correspond to saturated hereditary subsets of formula_6. Theorem: Fix a field formula_21, and let formula_6 be a row-finite graph. Then the following hold:
[ { "math_id": 0, "text": "E=(E^0, E^1, r, s)" }, { "math_id": 1, "text": "E^0" }, { "math_id": 2, "text": "E^1" }, { "math_id": 3, "text": "r, s : E^1 \\rightarrow E^0" }, { "math_id": 4, "text": "v \\in E^0" }, { "math_id": 5, "text": "s^{-1}(v) = \\emptyset" }, { "math_id": 6, "text": "E" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "s^{-1}(v)" }, { "math_id": 9, "text": "e_1 e_2 \\ldots e_n" }, { "math_id": 10, "text": "r(e_i) = s(e_{i+1})" }, { "math_id": 11, "text": "1 \\leq i \\leq n-1" }, { "math_id": 12, "text": "e_1 e_2 \\ldots " }, { "math_id": 13, "text": "i \\in \\mathbb{N}" }, { "math_id": 14, "text": "r(e_n) = s(e_1)" }, { "math_id": 15, "text": "f \\in E^1" }, { "math_id": 16, "text": "s(f) = s(e_i)" }, { "math_id": 17, "text": "f \\neq e_i" }, { "math_id": 18, "text": "1 \\leq i \\leq n" }, { "math_id": 19, "text": "s(e_i) \\neq s(e_1)" }, { "math_id": 20, "text": "2 \\leq i \\leq n" }, { "math_id": 21, "text": "K" }, { "math_id": 22, "text": "\\{ s_e^*, s_e, p_v : e \\in E^1, v \\in E^0 \\}" }, { "math_id": 23, "text": " p_v p_w = \\begin{cases} p_v & \\text{if } v=w \\\\ 0 & \\text{if } v\\neq w \\end{cases} \\quad" }, { "math_id": 24, "text": "v, w \\in E^0" }, { "math_id": 25, "text": " s_e^* s_f = \\begin{cases} p_{r(e)} & \\text{if } e=f \\\\ 0 & \\text{if } e\\neq f \\end{cases} \\quad" }, { "math_id": 26, "text": "e, f \\in E^0" }, { "math_id": 27, "text": "p_v = \\sum_{s(e)=v} s_e s_e^*" }, { "math_id": 28, "text": "p_{s(e)} s_e = s_e" }, { "math_id": 29, "text": "e \\in E^1" }, { "math_id": 30, "text": "L_K(E)" }, { "math_id": 31, "text": "\\{ t_e, t_e^*, q_v : e \\in E^1, v \\in E^0 \\}" }, { "math_id": 32, "text": "A" }, { "math_id": 33, "text": "\\phi : L_K(E) \\to A" }, { "math_id": 34, "text": "\\phi(s_e) = t_e" }, { "math_id": 35, "text": "\\phi(s_e^*) = t_e^*" }, { "math_id": 36, "text": "\\phi(p_v)=q_v" }, { "math_id": 37, "text": "p_v^* := p_v" }, { "math_id": 38, "text": "\\alpha := e_1 \\ldots e_n" }, { "math_id": 39, "text": "s_\\alpha := s_{e_1} \\ldots s_{e_n}" }, { "math_id": 40, "text": "s_\\alpha^* := s_{e_n}^* \\ldots s_{e_1}^*" }, { "math_id": 41, "text": "L_K(E) = \\operatorname{span}_K \\{ s_\\alpha s_\\beta^* : \\alpha \\text{ and } \\beta \\text{ are paths in } E \\}." 
}, { "math_id": 42, "text": "\\sum_{i=1}^n \\lambda_i s_{\\alpha_i} s_{\\beta_i}^*" }, { "math_id": 43, "text": "\\lambda_1, \\ldots,\\lambda_n \\in K" }, { "math_id": 44, "text": "\\alpha_1, \\ldots, \\alpha_n, \\beta_1, \\ldots, \\beta_n" }, { "math_id": 45, "text": "\\lambda \\mapsto \\overline{\\lambda}" }, { "math_id": 46, "text": "K=\\mathbb{C}" }, { "math_id": 47, "text": "\\sum_{i=1}^n \\lambda_i s_{\\alpha_i} s_{\\beta_i}^* \\mapsto \\sum_{i=1}^n \\overline{\\lambda_i} s_{\\beta_i} s_{\\alpha_i}^*" }, { "math_id": 48, "text": "L_{\\mathbb{C}}(E)" }, { "math_id": 49, "text": "C^*(E)" }, { "math_id": 50, "text": "\\infty" }, { "math_id": 51, "text": "\\alpha" }, { "math_id": 52, "text": "|\\alpha| := n" }, { "math_id": 53, "text": "n \\in \\mathbb{Z}" }, { "math_id": 54, "text": "L_K(E)_n := \\operatorname{span}_K \\{ s_\\alpha s_\\beta^* : |\\alpha|-|\\beta| = n \\}" }, { "math_id": 55, "text": "\\mathbb{Z}" }, { "math_id": 56, "text": "L_K(E) = \\bigoplus_{n \\in \\mathbb{Z}} L_K(E)_n" }, { "math_id": 57, "text": "L_K(E)_n" }, { "math_id": 58, "text": "n" }, { "math_id": 59, "text": "\\{ s_e, s_e^*, p_v : e \\in E^1, v \\in E^0 \\}" }, { "math_id": 60, "text": "C*(E)" }, { "math_id": 61, "text": "\\phi(p_v) \\neq 0" }, { "math_id": 62, "text": "\\phi" }, { "math_id": 63, "text": "H \\subseteq E^0" }, { "math_id": 64, "text": "s(e) \\in H" }, { "math_id": 65, "text": "r(e) \\in H" }, { "math_id": 66, "text": "H" }, { "math_id": 67, "text": "r(s^{-1}(v)) \\subseteq H" }, { "math_id": 68, "text": "v \\in H" }, { "math_id": 69, "text": "H_1 \\wedge H_2 := H_1 \\cap H_2" }, { "math_id": 70, "text": "H_1 \\vee H_2" }, { "math_id": 71, "text": "H_1 \\cup H_2" }, { "math_id": 72, "text": "I_H" }, { "math_id": 73, "text": "\\{ p_v : v \\in H \\}" }, { "math_id": 74, "text": "I" }, { "math_id": 75, "text": "I = \\bigoplus_{n \\in \\mathbb{Z}} I_n" }, { "math_id": 76, "text": "I_n = L_K(E)_n \\cap I" }, { "math_id": 77, "text": "I_1 \\wedge I_2 := I_1 \\cap I_2" }, { "math_id": 78, "text": "I_1 \\vee I_2" }, { "math_id": 79, "text": "I_1 \\cup I_2" }, { "math_id": 80, "text": "H \\mapsto I_H" }, { "math_id": 81, "text": "I \\mapsto \\{ v \\in E^0 : p_v \\in I \\}" }, { "math_id": 82, "text": "L_K(E)/I_H" }, { "math_id": 83, "text": "*" }, { "math_id": 84, "text": "L_K(E \\setminus H)" }, { "math_id": 85, "text": "E \\setminus H" }, { "math_id": 86, "text": "(E \\setminus H)^0 := E^0 \\setminus H" }, { "math_id": 87, "text": "(E \\setminus H)^1 := E^1 \\setminus r^{-1}(H)" }, { "math_id": 88, "text": "L_K(E_H)" }, { "math_id": 89, "text": "E_H" }, { "math_id": 90, "text": "E_H^0 := H" }, { "math_id": 91, "text": "E_H^1 := s^{-1}(H)" } ]
https://en.wikipedia.org/wiki?curid=64956453
6495714
Rentcharge
In English law, payment by a freeholder In English land law, a rentcharge is an annual sum paid by the owner of freehold land (terre-tenant) to the owner of the rentcharge (rentcharger), a person who need have no other legal interest in the land. They are often known as chief rents in the north west of England, but the term "ground rent" is used in many parts of the country to refer to either a rentcharge or a rent payable on leasehold land. This is confusing because a true ground rent is a sum payable in relation to land held under a lease rather than freehold land. As a result, the first thing a conveyancer or other adviser, such as the free Rentcharges Unit, will require is information from the Land Registry, which the public can also obtain cheaply, as to whether the land in question is freehold or held on a lease (a leasehold estate). History. The rentcharge is a legal device which permits an annual payment to be continually levied on a freehold property. A deed made with the parties' knowledge is legally effective against the land for this purpose, and such deeds have been lawful since the 1290 Statute of Quia Emptores. Such sums were originally payable to the local lord of the manor in perpetuity; however, a more common system used by such lords was copyhold. Function. Rentcharges provided a regular income for landowners who were prepared to release land for development; the sums might be payable to the original landowner, to the builder, or in some cases to a third party. The payments due are typically between £2 and £5 per annum, which are no longer a significant burden due to past price inflation. Sometimes the land was released without any capital sum being paid, the rentcharge being the only payment. Once imposed, a rentcharge continues to bind all the land even when the land is later divided and sold off in plots. In such cases one terre-tenant can be made responsible for paying the whole rent. That person is then left to collect the appropriate portion from the other terre-tenants whose land is subject to the rentcharge. Rentcharges Act 1977. Section 2 of the Rentcharges Act 1977 (c. 30) prohibits the creation of most new rentcharges except for 'estate rentcharges' (those for communal/own-property benefit). Other parts of the Rentcharges Act 1977 will eventually abolish rentcharges of a feudal nature and, in the meantime, provide a financial means for a freeholder to extinguish (redeem) any such rentcharge on a freehold, with or without the free assistance of the Rentcharges Unit of the Ministry of Housing, Communities and Local Government, Birkenhead. A rentcharge redeemed (ended) by the freeholder owning land subject to it works out at very roughly 10 to 11 times (based on a capitalisation rate of 4.6%) the annual rentcharge set out in the property's title, calculated according to the Act's formula. The Act provides an optional procedure, enabled by the Rentcharges (Redemption Price) (England) Regulations 2016, in which the government's Rentcharges Unit assists without charge. For the redemption to be confirmed as effective, the rentcharge must no longer appear on the affected freehold's Register of Title. There is a rare exception to this simple proof: very few non-agricultural freeholds remain unregistered land, and those that do may consider voluntary first registration. Many rentchargers (owners of rentcharges) can make a private settlement with the freeholder (who is technically a terre-tenant: a rentcharge payer and a mildly encumbered freeholder until the rentcharge is redeemed, expires, or ends at abolition).
The Act has a formula which enables the Department for Communities and Local Government (DCLG) to calculate the redemption figure that the terre-tenant has to pay to redeem their rentcharge. This is per the Rentcharges (Redemption Price) (England) Regulations 2016: formula_0 where P = the redemption price; R = the annual amount of the rentcharge (or, as the case may be, the rent to which section 20(1) of the Landlord and Tenant Act 1927 applies) to be redeemed; Y = the maturity rate, expressed as a decimal fraction, of the “over-30-, not over 30.5-year” National Loans Fund interest rate published by the UK Debt Management Office; and N = the period, expressed in years (rounding up any part of a year to a whole year), for which the rentcharge (or, as the case may be, the rent to which section 20(1) of the Landlord and Tenant Act 1927 applies) would remain payable if it were not redeemed. The longstop (the latest date for this) is 22 August 2037, so the longest possible period is the number of years remaining until that date, with any part of a year rounded up. Rentcharges are extinguished on 22 August 2037; in mid-2023 their remaining lifetime was therefore 14 years, and entering this into the formula above with a yield of 4.6% (the rate applicable in mid-2023) gives a redemption price of approximately 10.26 times the annual rent. When the transaction has been completed, DCLG, on behalf of the Secretary of State, issues a redemption certificate to the terre-tenant. Any existing rentcharges other than estate rentcharges will be extinguished on 22 August 2037. Estate rentcharges. The 1977 Act retained, and continues to allow, the creation by developers on sale of appropriate "estate rentcharges". These are, thus, still a means of paying for/requiring desirable upkeep, such as ancillary communal maintenance and shared infrastructure. Within the UK, the regulation surrounding potential abuses (known as "fleecehold") has not kept pace with developments in the housing sector, but the government has consulted on potential reforms. As to leasehold land (such as flats), ground rent is the equivalent of the feudal-style rentcharge; service charge is the equivalent of the estate rentcharge. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
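As a numerical illustration of the redemption formula above (a sketch only; the annual rentcharge, maturity rate and remaining term used below are illustrative figures, not values prescribed by the Regulations):

```python
def redemption_price(R: float, Y: float, N: int) -> float:
    """P = R/Y - R/(Y*(1+Y)**N), the formula set out in the 2016 Regulations."""
    return R / Y - R / (Y * (1 + Y) ** N)

# Illustrative figures: a £5 annual rentcharge, a 4.6% maturity rate,
# and 14 whole years remaining until the 22 August 2037 extinguishment date.
R, Y, N = 5.00, 0.046, 14
P = redemption_price(R, Y, N)
print(f"Redemption price: £{P:.2f} (about {P / R:.1f} years' rent)")
```

which is consistent with the rough "10 to 11 times the annual rentcharge" figure quoted above.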
[ { "math_id": 0, "text": "P=\\frac{R}{Y}-\\frac{R}{Y(1+Y)^N}" } ]
https://en.wikipedia.org/wiki?curid=6495714
6495737
Borel–Moore homology
Homology theory for locally compact spaces In topology, Borel−Moore homology or homology with closed support is a homology theory for locally compact spaces, introduced by Armand Borel and John Moore in 1960. For reasonable compact spaces, Borel−Moore homology coincides with the usual singular homology. For non-compact spaces, each theory has its own advantages. In particular, a closed oriented submanifold defines a class in Borel–Moore homology, but not in ordinary homology unless the submanifold is compact. Note: Borel equivariant cohomology is an invariant of spaces with an action of a group "G"; it is defined as formula_0 That is not related to the subject of this article. Definition. There are several ways to define Borel−Moore homology. They all coincide for reasonable spaces such as manifolds and locally finite CW complexes. Definition via sheaf cohomology. For any locally compact space "X", Borel–Moore homology with integral coefficients is defined as the cohomology of the dual of the chain complex which computes sheaf cohomology with compact support. As a result, there is a short exact sequence analogous to the universal coefficient theorem: formula_1 In what follows, the coefficients formula_2 are not written. Definition via locally finite chains. The singular homology of a topological space "X" is defined as the homology of the chain complex of singular chains, that is, finite linear combinations of continuous maps from the simplex to "X". The Borel−Moore homology of a reasonable locally compact space "X", on the other hand, is isomorphic to the homology of the chain complex of locally finite singular chains. Here "reasonable" means "X" is locally contractible, σ-compact, and of finite dimension. In more detail, let formula_3 be the abelian group of formal (infinite) sums formula_4 where "σ" runs over the set of all continuous maps from the standard "i"-simplex Δ"i" to "X" and each "aσ" is an integer, such that for each compact subset "K" of "X", we have formula_5 for only finitely many "σ" whose image meets "K". Then the usual definition of the boundary ∂ of a singular chain makes these abelian groups into a chain complex: formula_6 The Borel−Moore homology groups formula_7 are the homology groups of this chain complex. That is, formula_8 If "X" is compact, then every locally finite chain is in fact finite. So, given that "X" is "reasonable" in the sense above, Borel−Moore homology formula_7 coincides with the usual singular homology formula_9 for "X" compact. Definition via compactifications. Suppose that "X" is homeomorphic to the complement of a closed subcomplex "S" in a finite CW complex "Y". Then Borel–Moore homology formula_7 is isomorphic to the relative homology "H""i"("Y", "S"). Under the same assumption on "X", the one-point compactification of "X" is homeomorphic to a finite CW complex. As a result, Borel–Moore homology can be viewed as the relative homology of the one-point compactification with respect to the added point. Definition via Poincaré duality. Let "X" be any locally compact space with a closed embedding into an oriented manifold "M" of dimension "m". Then formula_10 where in the right hand side, relative cohomology is meant. Definition via the dualizing complex. For any locally compact space "X" of finite dimension, let "DX" be the dualizing complex of X. Then formula_11 where in the right hand side, hypercohomology is meant. Properties. Borel−Moore homology is a covariant functor with respect to proper maps. 
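As a quick illustration of the definition via compactifications (a worked example; the only input is that the one-point compactification of formula_18 is the "n"-sphere):

H^{BM}_i(\R^n) \cong H_i(S^n, \mathrm{pt}) \cong \tilde{H}_i(S^n) \cong \begin{cases} \Z & i = n \\ 0 & i \neq n \end{cases}

which agrees with the explicit locally finite chain computations for formula_18 given in the Examples below.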
That is, a proper map "f": "X" → "Y" induces a pushforward homomorphism formula_12 for all integers "i". In contrast to ordinary homology, there is no pushforward on Borel−Moore homology for an arbitrary continuous map "f". As a counterexample, one can consider the non-proper inclusion formula_13 Borel−Moore homology is a contravariant functor with respect to inclusions of open subsets. That is, for "U" open in "X", there is a natural pullback or restriction homomorphism formula_14 For any locally compact space "X" and any closed subset "F", with formula_15 the complement, there is a long exact localization sequence: formula_16 Borel−Moore homology is homotopy invariant in the sense that for any space "X", there is an isomorphism formula_17 The shift in dimension means that Borel−Moore homology is not homotopy invariant in the naive sense. For example, the Borel−Moore homology of Euclidean space formula_18 is isomorphic to formula_2 in degree "n" and is otherwise zero. Poincaré duality extends to non-compact manifolds using Borel–Moore homology. Namely, for an oriented "n"-manifold "X", Poincaré duality is an isomorphism from singular cohomology to Borel−Moore homology, formula_19 for all integers "i". A different version of Poincaré duality for non-compact manifolds is the isomorphism from cohomology with compact support to usual homology: formula_20 A key advantage of Borel−Moore homology is that every oriented manifold "M" of dimension "n" (in particular, every smooth complex algebraic variety), not necessarily compact, has a fundamental class formula_21 If the manifold "M" has a triangulation, then its fundamental class is represented by the sum of all the top dimensional simplices. In fact, in Borel−Moore homology, one can define a fundamental class for arbitrary (possibly singular) complex varieties. In this case the complement of the set of smooth points formula_22 has (real) codimension at least 2, and by the long exact sequence above the top dimensional homologies of M and formula_23 are canonically isomorphic. The fundamental class of M is then defined to be the fundamental class of formula_23. Examples. Compact Spaces. Given a compact topological space formula_24 its Borel-Moore homology agrees with its standard homology; that is, formula_25 Real line. The first non-trivial calculation of Borel-Moore homology is of the real line. First observe that any formula_26-chain is cohomologous to formula_26. Since this reduces to the case of a point formula_27, notice that we can take the Borel-Moore chain formula_28 since the boundary of this chain is formula_29 and the non-existent point at infinity, the point is cohomologous to zero. Now, we can take the Borel-Moore chain formula_30 which has no boundary, hence is a homology class. This shows that formula_31 Real n-space. The previous computation can be generalized to the case formula_32 We get formula_33 Infinite Cylinder. Using the Kunneth decomposition, we can see that the infinite cylinder formula_34 has homology formula_35 Real n-space minus a point. 
Using the long exact sequence in Borel-Moore homology, we get (for formula_36) the non-zero exact sequences formula_37 and formula_38 From the first sequence we get that formula_39 and from the second we get that formula_40 and formula_41 We can interpret these non-zero homology classes using the following observations: hence we can use the computation for the infinite cylinder to interpret formula_44 as the homology class represented by formula_45 and formula_46 as formula_47 Plane with Points Removed. Let formula_48 have formula_49-distinct points removed. Notice the previous computation with the fact that Borel-Moore homology is an isomorphism invariant gives this computation for the case formula_50. In general, we will find a formula_51-class corresponding to a loop around a point, and the fundamental class formula_52 in formula_53. Double Cone. Consider the double cone formula_54. If we take formula_55 then the long exact sequence shows formula_56 Genus Two Curve with Three Points Removed. Given a genus two curve (Riemann surface) formula_24 and three points formula_57, we can use the long exact sequence to compute the Borel-Moore homology of formula_58 This gives formula_59 Since formula_57 is only three points we have formula_60 This gives us that formula_61 Using Poincare-duality we can compute formula_62 since formula_63 deformation retracts to a one-dimensional CW-complex. Finally, using the computation for the homology of a compact genus 2 curve we are left with the exact sequence formula_64 showing formula_65 since we have the short exact sequence of free abelian groups formula_66 from the previous sequence. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "H^*_G(X) = H^*((EG \\times X)/G)." }, { "math_id": 1, "text": "0 \\to \\text{Ext}^1_{\\Z}(H^{i+1}_c(X,\\Z),\\Z) \\to H_i^{BM}(X,\\Z) \\to \\text{Hom}(H^i_c(X,\\Z),\\Z) \\to 0." }, { "math_id": 2, "text": "\\Z" }, { "math_id": 3, "text": "C_i^{BM}(X)" }, { "math_id": 4, "text": " u = \\sum_{\\sigma} a_{\\sigma } \\sigma, " }, { "math_id": 5, "text": "a_\\sigma\\neq 0" }, { "math_id": 6, "text": " \\cdots \\to C_2^{BM}(X) \\to C_1^{BM}(X) \\to C_0^{BM}(X) \\to 0." }, { "math_id": 7, "text": "H_i^{BM}(X)" }, { "math_id": 8, "text": "H^{BM}_i (X) = \\ker \\left (\\partial : C_i^{BM}(X) \\to C_{i-1}^{BM}(X) \\right )/ \\text{im} \\left (\\partial :C_{i+1}^{BM}(X) \\to C_i^{BM}(X) \\right ). " }, { "math_id": 9, "text": "H_i(X)" }, { "math_id": 10, "text": "H^{BM}_i(X)= H^{m-i}(M,M\\setminus X)," }, { "math_id": 11, "text": "H^{BM}_i (X)=\\mathbb{H}^{-i} (X, D_X), " }, { "math_id": 12, "text": "H_i^{BM}(X) \\to H_i^{BM}(Y)" }, { "math_id": 13, "text": "\\R^2 \\setminus \\{0\\} \\to \\R^2." }, { "math_id": 14, "text": "H_i^{BM}(X) \\to H_i^{BM}(U)." }, { "math_id": 15, "text": "U = X\\setminus F" }, { "math_id": 16, "text": " \\cdots \\to H^{BM}_i (F) \\to H^{BM}_i (X) \\to H^{BM}_i (U) \\to H^{BM}_{i-1} (F) \\to \\cdots " }, { "math_id": 17, "text": "H_i^{BM}(X) \\to H_{i+1}^{BM}(X\\times \\R)." }, { "math_id": 18, "text": "\\R^n" }, { "math_id": 19, "text": "H^i(X) \\stackrel{\\cong}{\\to} H_{n-i}^{BM}(X)" }, { "math_id": 20, "text": "H^i_c(X) \\stackrel{\\cong}{\\to} H_{n-i}(X)." }, { "math_id": 21, "text": "[M] \\in H_n^{BM}(M)." }, { "math_id": 22, "text": "M^{\\text{reg}} \\subset M" }, { "math_id": 23, "text": "M^{\\text{reg}}" }, { "math_id": 24, "text": "X" }, { "math_id": 25, "text": "H^{BM}_*(X) \\cong H_*(X)" }, { "math_id": 26, "text": "0" }, { "math_id": 27, "text": "p" }, { "math_id": 28, "text": "\\sigma = \\sum_{i=0}^\\infty 1\\cdot [p+i,p+i+1]" }, { "math_id": 29, "text": "\\partial\\sigma = p" }, { "math_id": 30, "text": "\\sigma = \\sum_{-\\infty<k<\\infty} [k, k+1]" }, { "math_id": 31, "text": " H_k^{BM}(\\R) = \\begin{cases} \\Z & k = 1 \\\\ 0 & \\text{otherwise} \\end{cases} " }, { "math_id": 32, "text": "\\R^n." }, { "math_id": 33, "text": " H_k^{BM}(\\R^n) = \\begin{cases} \\Z & k = n \\\\ 0 & \\text{otherwise} \\end{cases} " }, { "math_id": 34, "text": "S^1\\times\\R" }, { "math_id": 35, "text": " H_k^{BM}(S^1\\times \\R ) = \\begin{cases}\n\\Z & k = 1 \\\\\n\\Z & k = 2 \\\\\n0 & \\text{otherwise}\n\\end{cases}\n" }, { "math_id": 36, "text": "n>1" }, { "math_id": 37, "text": "0 \\to H_n^{BM}(\\{0\\}) \\to H_n^{BM}(\\R ^n) \\to H_n^{BM}(\\R ^n-\\{0\\}) \\to 0" }, { "math_id": 38, "text": "0 \\to H_1^{BM}(\\R ^n-\\{0\\}) \\to H_0^{BM}(\\{0\\}) \\to H_0^{BM}(\\R ^n) \\to H_0^{BM}(\\R ^n-\\{0\\}) \\to 0" }, { "math_id": 39, "text": "H_n^{BM}(\\R ^n) \\cong H_n^{BM}(\\R ^n-\\{0\\})" }, { "math_id": 40, "text": "H_1^{BM}(\\R ^n-\\{0\\}) \\cong H_0^{BM}(\\{0\\})" }, { "math_id": 41, "text": "0 \\cong H_0^{BM}(\\R ^n) \\cong H_0^{BM}(\\R ^n-\\{0\\})" }, { "math_id": 42, "text": "\\R ^n-\\{0\\} \\simeq S^{n-1}." }, { "math_id": 43, "text": "\\R ^n-\\{0\\} \\cong S^{n-1} \\times \\R _{>0}." }, { "math_id": 44, "text": "H_n^{BM}" }, { "math_id": 45, "text": "S^{n-1}\\times\\R _{>0}" }, { "math_id": 46, "text": "H_1^{BM}" }, { "math_id": 47, "text": "\\R _{>0}." 
}, { "math_id": 48, "text": "X = \\R^2 - \\{p_1,\\ldots, p_k \\}" }, { "math_id": 49, "text": "k" }, { "math_id": 50, "text": "k = 1" }, { "math_id": 51, "text": "1" }, { "math_id": 52, "text": "[X]" }, { "math_id": 53, "text": "H_2^{BM}" }, { "math_id": 54, "text": "X = \\mathbb{V}(x^2 + y^2 - z^2) \\subset \\R ^3" }, { "math_id": 55, "text": "U = X \\setminus \\{0\\}" }, { "math_id": 56, "text": "\\begin{align}\nH_2^{BM}(X) &= \\Z^{\\oplus 2} \\\\\nH_1^{BM}(X) &= \\Z \\\\\nH_k^{BM}(X) &= 0 && \\text{ for } k \\not\\in \\{1,2\\}\n\\end{align}" }, { "math_id": 57, "text": "F" }, { "math_id": 58, "text": "U = X \\setminus F." }, { "math_id": 59, "text": "\\begin{align}\n H_2^{BM}(F) \\to & H_2^{BM}(X) \\to H_2^{BM}(U) \\\\\n\\to H_1^{BM}(F) \\to & H_1^{BM}(X) \\to H_1^{BM}(U) \\\\\n\\to H_0^{BM}(F) \\to & H_0^{BM}(X) \\to H_0^{BM}(U) \\to 0\n\\end{align}" }, { "math_id": 60, "text": "H_1^{BM}(F) = H_2^{BM}(F) =0." }, { "math_id": 61, "text": "H_2^{BM}(U) = \\Z." }, { "math_id": 62, "text": "H_0^{BM}(U) = H^2(U) = 0," }, { "math_id": 63, "text": "U" }, { "math_id": 64, "text": "0 \\to \\Z ^{\\oplus 4} \\to H_1^{BM}(U) \\to \\Z ^{\\oplus 3} \\to \\Z \\to 0" }, { "math_id": 65, "text": "H_1^{BM}(U) \\cong \\Z ^{\\oplus 6}" }, { "math_id": 66, "text": "0 \\to \\Z ^{\\oplus 4} \\to H_1^{BM}(U) \\to \\Z ^{\\oplus 2} \\to 0" } ]
https://en.wikipedia.org/wiki?curid=6495737
64958837
Ultragraph C*-algebra
In mathematics, an ultragraph C*-algebra is a universal C*-algebra generated by partial isometries on a collection of Hilbert spaces constructed from ultragraphs. These C*-algebras were created in order to simultaneously generalize the classes of graph C*-algebras and Exel–Laca algebras, giving a unified framework for studying these objects. This is because every graph can be encoded as an ultragraph, and similarly, every infinite square matrix giving an Exel–Laca algebra can also be encoded as an ultragraph. Definitions. Ultragraphs. An ultragraph formula_0 consists of a set of vertices formula_1, a set of edges formula_2, a source map formula_3, and a range map formula_4 taking values in the collection formula_5 of nonempty subsets of the vertex set. A directed graph is the special case of an ultragraph in which the range of each edge is a singleton, and ultragraphs may be thought of as generalized directed graphs in which each edge starts at a single vertex and points to a nonempty subset of vertices. Example. An easy way to visualize an ultragraph is to consider a directed graph with a set of labelled vertices, where each label corresponds to a subset in the image of an element of the range map. For example, an ultragraph with vertices formula_6 and edges formula_7, with source and range maps given by formula_8, can be visualized as the image on the right. Ultragraph algebras. Given an ultragraph formula_0, we define formula_9 to be the smallest subset of formula_10 containing the singleton sets formula_11, containing the range sets formula_12, and closed under intersections, unions, and relative complements. A Cuntz–Krieger formula_13-family is a collection of projections formula_14 together with a collection of partial isometries formula_15 with mutually orthogonal ranges satisfying appropriate Cuntz–Krieger relations. The ultragraph C*-algebra formula_25 is the universal C*-algebra generated by a Cuntz–Krieger formula_13-family. Properties. Every graph C*-algebra is seen to be an ultragraph algebra by simply considering the graph as a special case of an ultragraph, and realizing that formula_9 is the collection of all finite subsets of formula_1 and formula_26 for each formula_19. Every Exel–Laca algebra is also an ultragraph C*-algebra: if formula_27 is an infinite square matrix with index set formula_28 and entries in formula_29, one can define an ultragraph by formula_30, formula_31, formula_32, and formula_33. It can be shown that formula_25 is isomorphic to the Exel–Laca algebra formula_34. Ultragraph C*-algebras are useful tools for studying both graph C*-algebras and Exel–Laca algebras. Among other benefits, modeling an Exel–Laca algebra as an ultragraph C*-algebra allows one to use the ultragraph as a tool to study the associated C*-algebra, thereby providing the option to use graph-theoretic techniques, rather than matrix techniques, when studying the Exel–Laca algebra. Ultragraph C*-algebras have been used to show that every simple AF-algebra is isomorphic to either a graph C*-algebra or an Exel–Laca algebra. They have also been used to prove that every AF-algebra with no (nonzero) finite-dimensional quotient is isomorphic to an Exel–Laca algebra. While the classes of graph C*-algebras, Exel–Laca algebras, and ultragraph C*-algebras each contain C*-algebras not isomorphic to any C*-algebra in the other two classes, the three classes have been shown to coincide up to Morita equivalence. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
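To make the Exel–Laca encoding above concrete, the following sketch builds the ultragraph data (vertices, edges, source and range maps) from a {0,1}-matrix exactly as described. The particular matrix is an arbitrary illustration, and it is finite only for readability, whereas the Exel–Laca setting uses infinite matrices.

```python
def ultragraph_from_matrix(A, I):
    """Encode a {0,1}-matrix A over the index set I as an ultragraph:
    vertices G0 = I, edges G1 = I, s(i) = i, r(i) = {j : A[i][j] == 1}."""
    G0 = set(I)
    G1 = set(I)
    s = {i: i for i in I}
    r = {i: {j for j in I if A[i][j] == 1} for i in I}
    return G0, G1, s, r

I = ["u", "v", "w"]
A = {"u": {"u": 1, "v": 1, "w": 0},
     "v": {"u": 0, "v": 0, "w": 1},
     "w": {"u": 1, "v": 1, "w": 1}}

G0, G1, s, r = ultragraph_from_matrix(A, I)
print(r)   # edge "u" has range {"u", "v"}: a genuinely ultragraph-style (non-singleton) range
```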
[ { "math_id": 0, "text": "\\mathcal{G} = (G^0, \\mathcal{G}^1, r, s)" }, { "math_id": 1, "text": "G^0" }, { "math_id": 2, "text": "\\mathcal{G}^1" }, { "math_id": 3, "text": "s:\\mathcal{G}^1 \\to G^0" }, { "math_id": 4, "text": "r : \\mathcal{G}^1 \\to P(G^0) \\setminus \\{ \\emptyset \\}" }, { "math_id": 5, "text": "P(G^0) \\setminus \\{ \\emptyset \\}" }, { "math_id": 6, "text": "\\mathcal{G}^0 = \\{v,w,x \\}" }, { "math_id": 7, "text": "\\mathcal{G}^1 = \\{e,f,g \\}" }, { "math_id": 8, "text": "\\begin{matrix}\ns(e) = v & s(f) = w & s(g) = x \\\\\nr(e) = \\{v,w,x \\} & r(f) = \\{x \\} & r(g) = \\{v,w \\}\n\\end{matrix}" }, { "math_id": 9, "text": "\\mathcal{G}^0" }, { "math_id": 10, "text": "P(G^0)" }, { "math_id": 11, "text": "\\{ \\{ v \\} : v \\in G^0 \\}" }, { "math_id": 12, "text": "\\{ r(e) : e \\in \\mathcal{G}^1 \\}" }, { "math_id": 13, "text": "\\mathcal{G}" }, { "math_id": 14, "text": "\\{ p_A : A \\in \\mathcal{G}^0 \\}" }, { "math_id": 15, "text": "\\{ s_e : e \\in \\mathcal{G}^1 \\}" }, { "math_id": 16, "text": "p_{\\emptyset}" }, { "math_id": 17, "text": " p_A p_B = p_{A \\cap B}" }, { "math_id": 18, "text": "p_A + p_B - p_{A \\cap B} = p_{A \\cup B}" }, { "math_id": 19, "text": "A \\in \\mathcal{G}^0" }, { "math_id": 20, "text": "s_e^*s_e = p_{r(e)}" }, { "math_id": 21, "text": "e \\in \\mathcal{G}^1" }, { "math_id": 22, "text": "p_v = \\sum_{s(e)=v} s_e s_e^*" }, { "math_id": 23, "text": "v \\in G^0" }, { "math_id": 24, "text": "s_e s_e^* \\le p_{s(e)}" }, { "math_id": 25, "text": "C^*(\\mathcal{G})" }, { "math_id": 26, "text": "p_A = \\sum_{v \\in A} p_v" }, { "math_id": 27, "text": "A" }, { "math_id": 28, "text": "I" }, { "math_id": 29, "text": "\\{ 0, 1 \\}" }, { "math_id": 30, "text": "G^0 :=" }, { "math_id": 31, "text": "G^1 := I" }, { "math_id": 32, "text": "s(i) = i" }, { "math_id": 33, "text": "r(i) = \\{ j \\in I : A(i,j)=1 \\}" }, { "math_id": 34, "text": "\\mathcal{O}_A" } ]
https://en.wikipedia.org/wiki?curid=64958837
649616
Positive-definite function
In mathematics, a positive-definite function is, depending on the context, either of two types of function. Definition 1. Let formula_0 be the set of real numbers and formula_1 be the set of complex numbers. A function formula_2 is called "positive semi-definite" if for any real numbers "x"1, …, "x""n" the "n" × "n" matrix formula_3 is a positive "semi-"definite matrix. By definition, a positive semi-definite matrix, such as formula_4, is Hermitian; therefore "f"(−"x") is the complex conjugate of "f"("x"). In particular, it is necessary (but not sufficient) that formula_5 A function is "negative semi-definite" if the inequality is reversed. A function is "definite" if the weak inequality is replaced with a strict one (<, > 0). Examples. If formula_6 is a real inner product space, then formula_7, formula_8 is positive definite for every formula_9: for all formula_10 and all formula_11 we have formula_12 As nonnegative linear combinations of positive definite functions are again positive definite, the cosine function is positive definite as a nonnegative linear combination of the above functions: formula_13 One can easily create a positive definite function formula_14 from a positive definite function formula_15 for any vector space formula_16: choose a linear function formula_17 and define formula_18. Then formula_19 where formula_20, and where formula_21 are distinct as formula_22 is linear. Bochner's theorem. Positive-definiteness arises naturally in the theory of the Fourier transform; it can be seen directly that to be positive-definite it is sufficient for "f" to be the Fourier transform of a function "g" on the real line with "g"("y") ≥ 0. The converse result is "Bochner's theorem", stating that any continuous positive-definite function on the real line is the Fourier transform of a (positive) measure. Applications. In statistics, and especially Bayesian statistics, the theorem is usually applied to real functions. Typically, "n" scalar measurements of some scalar value at points in formula_23 are taken, and points that are mutually close are required to have measurements that are highly correlated. In practice, one must be careful to ensure that the resulting covariance matrix (an "n" × "n" matrix) is always positive-definite. One strategy is to define a correlation matrix "A" which is then multiplied by a scalar to give a covariance matrix: this must be positive-definite. Bochner's theorem states that if the correlation between two points depends only upon the distance between them (via a function "f"), then that function "f" must be positive-definite to ensure the covariance matrix "A" is positive-definite. See Kriging. In this context, Fourier terminology is not normally used and instead it is stated that "f"("x") is the characteristic function of a symmetric probability density function (PDF). Generalization. One can define positive-definite functions on any locally compact abelian topological group; Bochner's theorem extends to this context. Positive-definite functions on groups occur naturally in the representation theory of groups on Hilbert spaces (i.e. the theory of unitary representations). Definition 2. Alternatively, a function formula_24 is called "positive-definite" on a neighborhood "D" of the origin if formula_25 and formula_26 for every non-zero formula_27. Note that this definition conflicts with definition 1, given above. In physics, the requirement that formula_25 is sometimes dropped (see, e.g., Corney and Olsen).
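A quick numerical illustration of Definition 1 (a sketch; the sample points are arbitrary, and a finite eigenvalue check is evidence rather than a proof): taking "f" = cos, which is positive semi-definite by the cosine example above, the matrix with entries "f"("x""i" − "x""j") built from any finite set of real points should have no negative eigenvalues.

```python
import numpy as np

f = np.cos                                  # positive semi-definite by the cosine example above
x = np.array([-1.3, 0.0, 0.4, 2.5, 3.1])    # arbitrary sample points
A = f(x[:, None] - x[None, :])              # A_ij = f(x_i - x_j)

eigvals = np.linalg.eigvalsh(A)             # A is real symmetric here since cos is even
print(eigvals)
assert eigvals.min() > -1e-9                # no (numerically) negative eigenvalues
```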
[ { "math_id": 0, "text": "\\mathbb{R}" }, { "math_id": 1, "text": "\\mathbb{C}" }, { "math_id": 2, "text": " f: \\mathbb{R} \\to \\mathbb{C} " }, { "math_id": 3, "text": " A = \\left(a_{ij}\\right)_{i,j=1}^n~, \\quad a_{ij} = f(x_i - x_j) " }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": " f(0) \\geq 0~, \\quad |f(x)| \\leq f(0) " }, { "math_id": 6, "text": "(X, \\langle \\cdot, \\cdot \\rangle)" }, { "math_id": 7, "text": "g_y \\colon X \\to \\mathbb{C}" }, { "math_id": 8, "text": "x \\mapsto \\exp(i \\langle y, x \\rangle)" }, { "math_id": 9, "text": "y \\in X" }, { "math_id": 10, "text": "u \\in \\mathbb{C}^n" }, { "math_id": 11, "text": "x_1, \\ldots, x_n" }, { "math_id": 12, "text": "\nu^* A^{(g_y)} u\n= \\sum_{j, k = 1}^{n} \\overline{u_k} u_j e^{i \\langle y, x_k - x_j \\rangle}\n= \\sum_{k = 1}^{n} \\overline{u_k} e^{i \\langle y, x_k \\rangle} \\sum_{j = 1}^{n} u_j e^{- i \\langle y, x_j \\rangle}\n= \\left| \\sum_{j = 1}^{n} \\overline{u_j} e^{i \\langle y, x_j \\rangle} \\right|^2\n\\ge 0.\n" }, { "math_id": 13, "text": "\n\\cos(x) = \\frac{1}{2} ( e^{i x} + e^{- i x}) = \\frac{1}{2}(g_{1} + g_{-1}).\n" }, { "math_id": 14, "text": "f \\colon X \\to \\mathbb{C}" }, { "math_id": 15, "text": "f \\colon \\R \\to \\mathbb C" }, { "math_id": 16, "text": "X" }, { "math_id": 17, "text": "\\phi \\colon X \\to \\R" }, { "math_id": 18, "text": "f^* := f \\circ \\phi" }, { "math_id": 19, "text": "\nu^* A^{(f^*)} u\n= \\sum_{j, k = 1}^{n} \\overline{u_k} u_j f^*(x_k - x_j) \n= \\sum_{j, k = 1}^{n} \\overline{u_k} u_j f(\\phi(x_k) - \\phi(x_j)) \n= u^* \\tilde{A}^{(f)} u\n\\ge 0,\n" }, { "math_id": 20, "text": "\\tilde{A}^{(f)} = \\big( f(\\phi(x_i) - \\phi(x_j)) = f(\\tilde{x}_i - \\tilde{x}_j) \\big)_{i, j}" }, { "math_id": 21, "text": "\\tilde{x}_k := \\phi(x_k)" }, { "math_id": 22, "text": "\\phi" }, { "math_id": 23, "text": "R^d" }, { "math_id": 24, "text": "f : \\reals^n \\to \\reals" }, { "math_id": 25, "text": "f(0) = 0" }, { "math_id": 26, "text": "f(x) > 0" }, { "math_id": 27, "text": "x \\in D" } ]
https://en.wikipedia.org/wiki?curid=649616
64961918
Quasitrace
In mathematics, especially functional analysis, a quasitrace is a not necessarily additive tracial functional on a C*-algebra. An additive quasitrace is called a trace. It is a major open problem whether every quasitrace is a trace. Definition. A quasitrace on a C*-algebra "A" is a map formula_0 such that: formula_2 for every formula_3 and formula_4; formula_5 for every formula_6; formula_7 for every formula_8 that satisfy formula_9; and, for every formula_10, the map formula_11 has the same properties. A quasitrace formula_1 is called bounded if formula_12, normalized if formula_13, and lower semicontinuous if formula_14 is closed for each formula_15. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
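Since an additive quasitrace is a trace, the simplest example is the normalized trace on a matrix algebra. The following sketch checks the properties listed above numerically for the normalized trace on the 3 × 3 complex matrices (illustrative only; the random matrices and tolerances are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
tau = lambda a: np.trace(a).real / 3          # the normalized trace on the 3x3 matrices

x = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
a = x @ x.conj().T                            # a positive element
lam = 2.7

# homogeneity and the tracial property: tau(lam*a) = lam*tau(a), tau(x x*) = tau(x* x)
assert np.isclose(tau(lam * a), lam * tau(a))
assert np.isclose(tau(x @ x.conj().T), tau(x.conj().T @ x))

# additivity on commuting positive elements, e.g. a and b = a @ a
b = a @ a
assert np.allclose(a @ b, b @ a)
assert np.isclose(tau(a + b), tau(a) + tau(b))

# normalized: the supremum over positive contractions is attained at the identity
assert np.isclose(tau(np.eye(3)), 1.0)
```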
[ { "math_id": 0, "text": "\\tau\\colon A_+\\to[0,\\infty]" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "\\tau(\\lambda a)=\\lambda\\tau(a)" }, { "math_id": 3, "text": "a\\in A_+" }, { "math_id": 4, "text": "\\lambda\\in[0,\\infty)" }, { "math_id": 5, "text": "\\tau(xx^*)=\\tau(x^*x)" }, { "math_id": 6, "text": "x\\in A" }, { "math_id": 7, "text": "\\tau(a+b)=\\tau(a)+\\tau(b)" }, { "math_id": 8, "text": "a,b\\in A_+" }, { "math_id": 9, "text": "ab=ba" }, { "math_id": 10, "text": "n\\geq 1" }, { "math_id": 11, "text": "\\tau_n\\colon M_n(A)_+\\to[0,\\infty], (a_{j,k})_{j,k=1,...,n}\\mapsto\\tau(a_{11})+...\\tau(a_{nn})" }, { "math_id": 12, "text": "\\sup\\{\\tau(a):a\\in A_+, \\|a\\|\\leq 1\\} < \\infty." }, { "math_id": 13, "text": "\\sup\\{\\tau(a):a\\in A_+, \\|a\\|\\leq 1\\} = 1." }, { "math_id": 14, "text": "\\{a\\in A_+ : \\tau(a)\\leq t\\}" }, { "math_id": 15, "text": "t\\in[0,\\infty)" }, { "math_id": 16, "text": "A_+\\to[0,\\infty]" }, { "math_id": 17, "text": "M_n(A)" } ]
https://en.wikipedia.org/wiki?curid=64961918
64963843
Joshua 2
Book of Joshua, chapter 2 Joshua 2 is the second chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition, the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter focuses on the spies sent by Joshua to Jericho and their encounter with Rahab, a part of a section comprising Joshua 1:1–5:12 about the entry to the land of Canaan. Text. This chapter was originally written in the Hebrew language. It is divided into 24 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including XJoshua (XJosh, X1; 50 BCE) with extant verses 4–5. and 4Q48 (4QJoshb; 100–50 BCE) with extant verses 11–12. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Fragments of the Septuagint Greek text containing this chapter is found in manuscripts such as Washington Manuscript I (5th century CE), and a reduced version of the Septuagint text is found in the illustrated Joshua Roll. Analysis. The narrative of Israelites entering the land of Canaan comprises verses 1:1 to 5:12 of the Book of Joshua and has the following outline: A. Preparations for Entering the Land (1:1–18) 1. Directives to Joshua (1:1–9) 2. Directives to the Leaders (1:10–11) 3. Discussions with the Eastern Tribes (1:12–18) B. Rahab and the Spies in Jericho (2:1–24) 1. Directives to the Spies (2:1a) 2. Deceiving the King of Jericho (2:1b–7) 3. The Oath with Rahab (2:8–21) 4. The Report to Joshua (2:22–24) C. Crossing the Jordan (3:1–4:24) 1. Initial Preparations for Crossing (3:1–6) 2. Directives for Crossing (3:7–13) 3. A Miraculous Crossing: Part 1 (3:14–17) 4. Twelve-Stone Memorial: Part 1 (4:1–10a) 5. A Miraculous Crossing: Part 2 (4:10b–18) 6. Twelve-Stone Memorial: Part 2 (4:19–24) D. Circumcision and Passover (5:1–12) 1. Canaanite Fear (5:1) 2. Circumcision (5:2–9) 3. Passover (5:10–12) Rahab welcomes the spies (2:1–7). The narrative in this chapter seems to be an interruption, but actually provides a background material for the stories of the crossing of the Jordan River and the Battle of Jericho. The sending out of spies follows Moses's example (Numbers 13, Deuteronomy 1:21–23; cf. Joshua 7:2–3), but unlike the earlier mission, which had resulted in failure to take the promised land because of fear (Numbers 13–14), this time the spies encouraged the people to march forward (verse 24; contrast Numbers 13:31–33). Rahab becomes the center of the narrative, being the only named person in the whole story, and practically in control of the whole actions in the narrative: she provides the spies information, protection and advice for their safety, whereas the spies, the king of Jericho and his officers were as passive objects. 
"And Joshua the son of Nun sent out of Shittim two men to spy secretly, saying, Go view the land, even Jericho. And they went, and came into an harlot's house, named Rahab, and lodged there." The promise to Rahab (2:8–24). Rahab's confessions of faith (verses 8–11) encouraged the spies of God's promise (cf. Exodus 23:27; Numbers 22:3), as she reminds them of the victories in Transjordan as evidence that they will succeed in Canaan (Deuteronomy 3:21-2), and therefore she demands the life of herself and her family to be 'dealt kindly' (Hebrew: "hesed"; verse 12) with the expected loyalty in a covenant relationship (cf. 1 Samuel 20:8). The spies agrees, swearing on their own lives to guarantee those of Rahab and family (verse 14,19), provided she does not 'tell this business of ours' (verses 14, 20), in spite of the Holy War concept which demands the killing of every people in Jericho (Deuteronomy 2:32–37; 7:1–5; 20:16–18). "And the men said to her, “Our life for yours even to death! If you do not tell this business of ours, then when the LORD gives us the land we will deal kindly and faithfully with you.”" Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=64963843
64965378
Calculus on Euclidean space
In mathematics, calculus on Euclidean space is a generalization of calculus of functions in one or several variables to calculus of functions on Euclidean space formula_0 as well as a finite-dimensional real vector space. This calculus is also known as advanced calculus, especially in the United States. It is similar to multivariable calculus but is somewhat more sophisticated in that it uses linear algebra (or some functional analysis) more extensively and covers some concepts from differential geometry such as differential forms and Stokes' formula in terms of differential forms. This extensive use of linear algebra also allows a natural generalization of multivariable calculus to calculus on Banach spaces or topological vector spaces. Calculus on Euclidean space is also a local model of calculus on manifolds, a theory of functions on manifolds. Basic notions. Functions in one real variable. This section is a brief review of function theory in one-variable calculus. A real-valued function formula_1 is continuous at formula_2 if it is "approximately constant" near formula_2; i.e., formula_3 In contrast, the function formula_4 is differentiable at formula_2 if it is "approximately linear" near formula_2; i.e., there is some real number formula_5 such that formula_6 The number formula_5 depends on formula_2 and thus is denoted as formula_12. If formula_4 is differentiable on an open interval formula_13 and if formula_14 is a continuous function on formula_13, then formula_4 is called a "C"1 function. More generally, formula_4 is called a "C"k function if its derivative formula_14 is "C"k-1 function. Taylor's theorem states that a "C"k function is precisely a function that can be approximated by a polynomial of degree "k". If formula_1 is a "C"1 function and formula_15 for some formula_2, then either formula_16 or formula_17; i.e., either formula_4 is strictly increasing or strictly decreasing in some open interval containing "a". In particular, formula_18 is bijective for some open interval formula_13 containing formula_19. The inverse function theorem then says that the inverse function formula_20 is differentiable on "U" with the derivatives: for formula_21 formula_22 Derivative of a map and chain rule. For functions formula_4 defined in the plane or more generally on an Euclidean space formula_0, it is necessary to consider functions that are vector-valued or matrix-valued. It is also conceptually helpful to do this in an invariant manner (i.e., a coordinate-free way). Derivatives of such maps at a point are then vectors or linear maps, not real numbers. Let formula_23 be a map from an open subset formula_24 of formula_0 to an open subset formula_25 of formula_26. Then the map formula_4 is said to be differentiable at a point formula_27 in formula_24 if there exists a (necessarily unique) linear transformation formula_28, called the derivative of formula_4 at formula_27, such that formula_29 where formula_30 is the application of the linear transformation formula_31 to formula_32. If formula_4 is differentiable at formula_27, then it is continuous at formula_27 since formula_33 as formula_34. As in the one-variable case, there is This is proved exactly as for functions in one variable. Indeed, with the notation formula_36, we have: formula_37 Here, since formula_4 is differentiable at formula_27, the second term on the right goes to zero as formula_34. 
As for the first term, it can be written as: formula_38 Now, by the argument showing the continuity of formula_4 at formula_27, we see formula_39 is bounded. Also, formula_40 as formula_34 since formula_4 is continuous at formula_27. Hence, the first term also goes to zero as formula_34 by the differentiability of formula_35 at formula_41. formula_42 The map formula_4 as above is called continuously differentiable or formula_43 if it is differentiable on the domain and also the derivatives vary continuously; i.e., formula_44 is continuous. As a linear transformation, formula_31 is represented by an formula_45-matrix, called the Jacobian matrix formula_46 of formula_4 at formula_27 and we write it as: formula_47 Taking formula_32 to be formula_48, formula_32 a real number and formula_49 the "j"-th standard basis element, we see that the differentiability of formula_4 at formula_27 implies: formula_50 where formula_51 denotes the "i"-th component of formula_4. That is, each component of formula_4 is differentiable at formula_27 in each variable with the derivative formula_52. In terms of Jacobian matrices, the chain rule says formula_53; i.e., as formula_54, formula_55 which is the form of the chain rule that is often stated. A partial converse to the above holds. Namely, if the partial derivatives formula_56 are all defined and continuous, then formula_4 is continuously differentiable. This is a consequence of the mean value inequality: Indeed, let formula_58. We note that, if formula_59, then formula_60 For simplicity, assume formula_61 (the argument for the general case is similar). Then, by mean value inequality, with the operator norm formula_62, formula_63 which implies formula_64 as required. formula_42 Example: Let formula_13 be the set of all invertible real square matrices of size "n". Note formula_13 can be identified as an open subset of formula_65 with coordinates formula_66. Consider the function formula_67 = the inverse matrix of formula_35 defined on formula_13. To guess its derivatives, assume formula_4 is differentiable and consider the curve formula_68 where formula_69 means the matrix exponential of formula_70. By the chain rule applied to formula_71, we have: formula_72. Taking formula_73, we get: formula_74. Now, we then have: formula_75 Since the operator norm is equivalent to the Euclidean norm on formula_65 (any norms are equivalent to each other), this implies formula_4 is differentiable. Finally, from the formula for formula_14, we see the partial derivatives of formula_4 are smooth (infinitely differentiable); whence, formula_4 is smooth too. Higher derivatives and Taylor formula. If formula_76 is differentiable where formula_77 is an open subset, then the derivatives determine the map formula_78, where formula_79 stands for homomorphisms between vector spaces; i.e., linear maps. If formula_14 is differentiable, then formula_80. Here, the codomain of formula_81 can be identified with the space of bilinear maps by: formula_82 where formula_83 and formula_84 is bijective with the inverse formula_85 given by formula_86. In general, formula_87 is a map from formula_24 to the space of formula_88-multilinear maps formula_89. 
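Returning briefly to the matrix-inversion example above, the derivative found there can be checked numerically by finite differences (a sketch; the matrix, the direction and the step size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.eye(3) + 0.1 * rng.normal(size=(3, 3))    # an invertible matrix
H = rng.normal(size=(3, 3))                      # an arbitrary direction
t = 1e-6

inv = np.linalg.inv
difference_quotient = (inv(g + t * H) - inv(g)) / t   # of the map f(g) = g^{-1}
derivative = -inv(g) @ H @ inv(g)                     # the formula obtained above

print(np.max(np.abs(difference_quotient - derivative)))   # small, of order t
```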
Just as formula_31 is represented by a matrix (Jacobian matrix), when formula_90 (a bilinear map is a bilinear form), the bilinear form formula_91 is represented by a matrix called the Hessian matrix of formula_4 at formula_27; namely, the square matrix formula_92 of size formula_93 such that formula_94, where the paring refers to an inner product of formula_0, and formula_92 is none other than the Jacobian matrix of formula_95. The formula_96-th entry of formula_92 is thus given explicitly as formula_97. Moreover, if formula_81 exists and is continuous, then the matrix formula_92 is symmetric, the fact known as the symmetry of second derivatives. This is seen using the mean value inequality. For vectors formula_98 in formula_0, using mean value inequality twice, we have: formula_99 which says formula_100 Since the right-hand side is symmetric in formula_98, so is the left-hand side: formula_101. By induction, if formula_4 is formula_102, then the "k"-multilinear map formula_103 is symmetric; i.e., the order of taking partial derivatives does not matter. As in the case of one variable, the Taylor series expansion can then be proved by integration by parts: formula_104 Taylor's formula has an effect of dividing a function by variables, which can be illustrated by the next typical theoretical use of the formula. Example: Let formula_105 be a linear map between the vector space formula_106 of smooth functions on formula_0 with rapidly decreasing derivatives; i.e., formula_107 for any multi-index formula_108. (The space formula_106 is called a Schwartz space.) For each formula_84 in formula_106, Taylor's formula implies we can write: formula_109 with formula_110, where formula_85 is a smooth function with compact support and formula_111. Now, assume formula_112 commutes with coordinates; i.e., formula_113. Then formula_114. Evaluating the above at formula_41, we get formula_115 In other words, formula_112 is a multiplication by some function formula_116; i.e., formula_117. Now, assume further that formula_112 commutes with partial differentiations. We then easily see that formula_116 is a constant; formula_112 is a multiplication by a constant. A partial converse to the Taylor formula also holds; see Borel's lemma and Whitney extension theorem. Inverse function theorem and submersion theorem. A formula_102-map with the formula_102-inverse is called a formula_102-diffeomorphism. Thus, the theorem says that, for a map formula_4 satisfying the hypothesis at a point formula_27, formula_4 is a diffeomorphism near formula_122 For a proof, see . The implicit function theorem says: given a map formula_123, if formula_124, formula_4 is formula_102 in a neighborhood of formula_125 and the derivative of formula_126 at formula_127 is invertible, then there exists a differentiable map formula_128 for some neighborhoods formula_121 of formula_129 such that formula_130. The theorem follows from the inverse function theorem; see . Another consequence is the submersion theorem. Integrable functions on Euclidean spaces. A partition of an interval formula_131 is a finite sequence formula_132. A partition formula_133 of a rectangle formula_134 (product of intervals) in formula_0 then consists of partitions of the sides of formula_134; i.e., if formula_135, then formula_133 consists of formula_136 such that formula_137 is a partition of formula_138. 
Given a function formula_4 on formula_134, we then define the upper Riemann sum of it as: formula_139 where The lower Riemann sum formula_145 of formula_4 is then defined by replacing formula_146 by formula_147. Finally, the function formula_4 is called integrable if it is bounded and formula_148. In that case, the common value is denoted as formula_149. A subset of formula_0 is said to have measure zero if for each formula_150, there are some possibly infinitely many rectangles formula_151 whose union contains the set and formula_152 A key theorem is The next theorem allows us to compute the integral of a function as the iteration of the integrals of the function in one-variables: In particular, the order of integrations can be changed. Finally, if formula_153 is a bounded open subset and formula_4 a function on formula_154, then we define formula_155 where formula_134 is a closed rectangle containing formula_154 and formula_156 is the characteristic function on formula_154; i.e., formula_157 if formula_158 and formula_159 if formula_160 provided formula_161 is integrable. Surface integral. If a bounded surface formula_154 in formula_162 is parametrized by formula_163 with domain formula_134, then the surface integral of a measurable function formula_164 on formula_154 is defined and denoted as: formula_165 If formula_166 is vector-valued, then we define formula_167 where formula_168 is an outward unit normal vector to formula_154. Since formula_169, we have: formula_170 Vector analysis. Tangent vectors and vector fields. Let formula_171 be a differentiable curve. Then the tangent vector to the curve formula_172 at formula_173 is a vector formula_174 at the point formula_175 whose components are given as: formula_176. For example, if formula_177 is a helix, then the tangent vector at "t" is: formula_178 It corresponds to the intuition that the a point on the helix moves up in a constant speed. If formula_153 is a differentiable curve or surface, then the tangent space to formula_154 at a point "p" is the set of all tangent vectors to the differentiable curves formula_179 with formula_180. A vector field "X" is an assignment to each point "p" in "M" a tangent vector formula_181 to "M" at "p" such that the assignment varies smoothly. Differential forms. The dual notion of a vector field is a differential form. Given an open subset formula_154 in formula_0, by definition, a differential 1-form (often just 1-form) formula_182 is an assignment to a point formula_183 in formula_154 a linear functional formula_184 on the tangent space formula_185 to formula_154 at formula_183 such that the assignment varies smoothly. For a (real or complex-valued) smooth function formula_4, define the 1-form formula_186 by: for a tangent vector formula_174 at formula_183, formula_187 where formula_188 denotes the directional derivative of formula_4 in the direction formula_174 at formula_183. For example, if formula_189 is the formula_190-th coordinate function, then formula_191; i.e., formula_192 are the dual basis to the standard basis on formula_185. Then every differential 1-form formula_182 can be written uniquely as formula_193 for some smooth functions formula_194 on formula_154 (since, for every point formula_183, the linear functional formula_184 is a unique linear combination of formula_195 over real numbers). 
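As a small numerical illustration of the 1-form "df" just defined (a sketch; the function, the point and the tangent vector are arbitrary choices), the value of "df" on a tangent vector is a directional derivative, and it agrees with the corresponding combination of partial derivatives:

```python
import numpy as np

f = lambda p: p[0] ** 2 * p[1]          # f(x, y) = x^2 y, a smooth function
p = np.array([1.0, 2.0])                # base point
v = np.array([0.3, -0.7])               # a tangent vector at p
h = 1e-6

# df_p(v): the directional derivative of f at p in the direction v
df_p_v = (f(p + h * v) - f(p)) / h

# the same value from the partial derivatives 2xy and x^2 evaluated on v
dfdx, dfdy = 2 * p[0] * p[1], p[0] ** 2
print(df_p_v, dfdx * v[0] + dfdy * v[1])   # both approximately 0.5
```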
More generally, a differential "k"-form is an assignment to a point formula_183 in formula_154 of a vector formula_184 in the formula_88-th exterior power formula_196 of the dual space formula_197 of formula_185 such that the assignment varies smoothly. In particular, a 0-form is the same as a smooth function. Also, any formula_88-form formula_182 can be written uniquely as: formula_198 for some smooth functions formula_199. As with smooth functions, we can differentiate and integrate differential forms. If formula_4 is a smooth function, then formula_186 can be written as: formula_200 since, for formula_201, we have: formula_202. Note that, in the above expression, the left-hand side (and hence the right-hand side) is independent of the coordinates formula_203; this property is called the invariance of the differential. The operation formula_204 is called the exterior derivative, and it extends to arbitrary differential forms inductively by the requirement (Leibniz rule) formula_205 where formula_108 are a "p"-form and a "q"-form. The exterior derivative has the important property that formula_206; that is, the exterior derivative formula_204 of an exact form formula_207 is zero. This property is a consequence of the symmetry of second derivatives (mixed partials are equal). Boundary and orientation. A circle can be oriented clockwise or counterclockwise. Mathematically, we say that a subset formula_154 of formula_0 is oriented if there is a consistent choice of normal vectors to formula_154 that varies continuously. For example, a circle or, more generally, an "n"-sphere can be oriented; i.e., it is orientable. On the other hand, a Möbius strip (a surface obtained by identifying two opposite sides of a rectangle in a twisted way) cannot be oriented: if we start with a normal vector and travel around the strip, the normal vector at the end will point in the opposite direction. The proposition is useful because it allows us to give an orientation by giving a volume form. Integration of differential forms. If formula_208 is a differential "n"-form on an open subset "M" in formula_0 (any "n"-form can be written in that form), then its integration over formula_154 with the standard orientation is defined as: formula_209 If "M" is given the orientation opposite to the standard one, then formula_210 is defined as the negative of the right-hand side. Then we have the fundamental formula relating the exterior derivative and integration (Stokes' formula): the integral of formula_207 over formula_154 equals the integral of formula_182 over the boundary formula_211. Here is a sketch of the proof of the formula. If formula_4 is a smooth function on formula_0 with compact support, then we have: formula_212 (since, by the fundamental theorem of calculus, the above can be evaluated on the boundary of a set containing the support.) On the other hand, formula_213 Let formula_4 approach the characteristic function on formula_154. Then the second term on the right goes to formula_214 while the first goes to formula_215, by an argument similar to the proof of the fundamental theorem of calculus. formula_42 The formula generalizes the fundamental theorem of calculus as well as Stokes' theorem in multivariable calculus. Indeed, if formula_216 is an interval and formula_217, then formula_218 and the formula says: formula_219. Similarly, if formula_154 is an oriented bounded surface in formula_162 and formula_220, then formula_221 and similarly for formula_222 and for the term involving "h". Collecting the terms, we thus get: formula_223 Then, from the definition of the integration of formula_182, we have formula_224 where formula_225 is the associated vector-valued function and formula_226.
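The computation just carried out can be checked symbolically. The following Python sketch (using the sympy library, with arbitrary sample choices for the component functions) computes the coefficients of formula_207 for formula_220, i.e., the components of the curl of formula_225, and then verifies that applying the exterior derivative once more gives zero, in accordance with formula_206 (for a 1-form in three variables this amounts to the vanishing of the divergence of the curl):

import sympy as sp

x, y, z = sp.symbols('x y z')
f = x * y * z                  # sample components of omega = f dx + g dy + h dz (arbitrary choices)
g = sp.sin(x) + z**2
h = x * sp.exp(y)

# Coefficients of d(omega), i.e., the components of curl F for F = (f, g, h).
curl = (sp.diff(h, y) - sp.diff(g, z),
        sp.diff(f, z) - sp.diff(h, x),
        sp.diff(g, x) - sp.diff(f, y))
print(curl)

# d(d(omega)) = 0 corresponds to div(curl F) = 0.
print(sp.simplify(sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)))   # 0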
Hence, Stokes' formula becomes formula_227 which is the usual form of Stokes' theorem on surfaces. Green's theorem is also a special case of Stokes' formula. Stokes' formula also yields a general version of Cauchy's integral formula. To state and prove it, for the complex variable formula_228 and the conjugate formula_229, let us introduce the operators formula_230 In this notation, a function formula_4 is holomorphic (complex-analytic) if and only if formula_231 (the Cauchy–Riemann equations). Also, we have: formula_232 Let formula_233 be a punctured disk with center formula_234. Since formula_235 is holomorphic on formula_236, we have: formula_237. By Stokes' formula, formula_238 Letting formula_239 we then get: formula_240 Winding numbers and Poincaré lemma. A differential form formula_182 is called closed if formula_241 and is called exact if formula_242 for some differential form formula_243 (often called a potential). Since formula_206, an exact form is closed. But the converse does not hold in general; there can be non-exact closed forms. A classic example of such a form is: formula_244, which is a differential form on formula_245. Suppose we switch to polar coordinates: formula_246 where formula_247. Then formula_248 This does not show that formula_182 is exact: the trouble is that formula_249 is not a well-defined continuous function on formula_245. Since any function formula_4 on formula_245 with formula_250 would differ from formula_249 by a constant, this means that formula_182 is not exact. The calculation, however, shows that formula_182 is exact, for example, on formula_251 since we can take formula_252 there. There is a result (the Poincaré lemma) that gives a condition guaranteeing that closed forms are exact. To state it, we need some notions from topology. Given two continuous maps formula_253 between subsets of formula_254 (or more generally topological spaces), a homotopy from formula_4 to formula_35 is a continuous function formula_255 such that formula_256 and formula_257. Intuitively, a homotopy is a continuous variation of one function into another. A loop in a set formula_24 is a curve whose starting point coincides with the end point; i.e., formula_258 such that formula_259. Then a subset of formula_0 is called simply connected if every loop is homotopic to a constant function. A typical example of a simply connected set is a disk formula_260. Indeed, given a loop formula_261, we have the homotopy formula_262 from formula_172 to the constant function formula_263. A punctured disk, on the other hand, is not simply connected. Geometry of curves and surfaces. Moving frame. Vector fields formula_264 on formula_162 are called a frame field if they are orthonormal at each point; i.e., formula_265 at each point. The basic example is the standard frame formula_266; i.e., formula_267 is the standard basis at each point formula_27 in formula_162. Another example is the cylindrical frame formula_268 For the study of the geometry of a curve, the important frame to use is a Frenet frame formula_269 on a unit-speed curve formula_270, consisting of the unit tangent vector "T" (the derivative of formula_270), the principal normal "N" (the normalized derivative of "T"), and the binormal "B" (the cross product of "T" and "N"). The Gauss–Bonnet theorem. The Gauss–Bonnet theorem relates the "topology" of a surface and its geometry. Calculus of variations. Method of Lagrange multiplier. Given a differentiable map formula_35, the set formula_271 is usually called a constraint. Example: Suppose we want to find the minimum distance between the circle formula_272 and the line formula_273.
That means that we want to minimize the function formula_274, the squared distance between a point formula_275 on the circle and a point formula_276 on the line, under the constraint formula_277. We have: formula_278 formula_279 Since the Jacobian matrix of formula_35 has rank 2 everywhere on formula_271, the method of Lagrange multipliers gives: formula_280 If formula_281, then formula_282, which is not possible. Thus, formula_283 and formula_284 From this, it easily follows that formula_285 and formula_286. Hence, the minimum distance is formula_287 (as a minimum distance clearly exists). Here is an application to linear algebra. Let formula_288 be a finite-dimensional real vector space with an inner product and formula_289 a self-adjoint operator. We shall show that formula_288 has a basis consisting of eigenvectors of formula_112 (i.e., formula_112 is diagonalizable) by induction on the dimension of formula_288. Choosing an orthonormal basis of formula_288, we can identify formula_290, and formula_112 is represented by the symmetric matrix formula_291. Consider the function formula_292, where the bracket means the inner product. Then formula_293. On the other hand, for formula_294, since formula_271 is compact, formula_4 attains a maximum or minimum at a point formula_295 in formula_271. Since formula_296, by the method of Lagrange multipliers, we find a real number formula_5 such that formula_297 But that means formula_298. By the inductive hypothesis, the self-adjoint operator formula_299, where formula_300 is the orthogonal complement of formula_295, has a basis consisting of eigenvectors. Hence, we are done. formula_42. Weak derivatives. Up to measure-zero sets, two functions can be determined to be equal or not by means of integration against other functions (called test functions). Namely, the following fact, sometimes called the fundamental lemma of the calculus of variations, holds: if a locally integrable function integrates to zero against every test function formula_301, then it is zero up to a measure-zero set. Given a continuous function formula_4, by the lemma, a continuously differentiable function formula_295 is such that formula_302 if and only if formula_303 for every formula_301. But, by integration by parts, the partial derivative can be moved from formula_295 to formula_84; i.e., formula_304 where there is no boundary term since formula_84 has compact support. Now, the key point is that this expression makes sense even if formula_295 is not differentiable, and thus it can be used to give a sense to the derivative of such a function. Note that each locally integrable function formula_295 defines the linear functional formula_305 on formula_306 and, moreover, each locally integrable function can be identified with such a linear functional, because of the earlier lemma. Hence, quite generally, if formula_295 is a linear functional on formula_306, then we define formula_307 to be the linear functional formula_308 where the bracket means formula_309. It is then called the weak derivative of formula_295 with respect to formula_189. If formula_295 is continuously differentiable, then its weak derivative coincides with the usual one; i.e., the linear functional formula_307 is the same as the linear functional determined by the usual partial derivative of formula_295 with respect to formula_189. A usual derivative is then often called a classical derivative. When a linear functional on formula_306 is continuous with respect to a certain topology on formula_306, such a linear functional is called a distribution, an example of a generalized function. A classic example of a weak derivative is that of the Heaviside function formula_92, the characteristic function on the interval formula_310.
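Before computing the weak derivative of formula_92, here is a small numerical sketch in Python of the one-variable integration-by-parts identity underlying the definition (the choice of formula_295 and of the bump-shaped test function below is arbitrary and purely illustrative):

import numpy as np

x = np.linspace(-0.999, 0.999, 20001)
dx = x[1] - x[0]
phi = np.exp(-1.0 / (1.0 - x**2))      # a smooth test function with (essentially) compact support in (-1, 1)
u = np.sin(3.0 * x)                    # a smooth choice of u; its classical derivative is 3 cos(3x)
du = 3.0 * np.cos(3.0 * x)
dphi = np.gradient(phi, x)             # numerical derivative of the test function

lhs = np.sum(du * phi) * dx            # integral of u' * phi
rhs = -np.sum(u * dphi) * dx           # minus the integral of u * phi'
print(lhs, rhs)                        # the two values agree up to discretization error

Since the right-hand pairing only involves formula_295 itself (and the derivative of the test function), the same recipe applies to non-differentiable choices of formula_295, which is how the weak derivative of formula_92 is computed next.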
For every test function formula_84, we have: formula_311 Let formula_312 denote the linear functional formula_313, called the Dirac delta function (although not exactly a function). Then the above can be written as: formula_314 Cauchy's integral formula has a similar interpretation in terms of weak derivatives. For the complex variable formula_228, let formula_315. For a test function formula_84, if the disk formula_316 contains the support of formula_84, by Cauchy's integral formula, we have: formula_317 Since formula_318, this means: formula_319 or formula_320 In general, a generalized function is called a fundamental solution for a linear partial differential operator if the application of the operator to it is the Dirac delta. Hence, the above says that formula_321 is a fundamental solution for the differential operator formula_322. "This section requires some background in general topology." Calculus on manifolds. Definition of a manifold. A manifold is a Hausdorff topological space that is locally modeled on a Euclidean space. By definition, an atlas of a topological space formula_154 is a set of maps formula_323, called charts, such that formula_324, each formula_325 is a homeomorphism onto an open subset of formula_0, and the transition maps formula_326 are smooth. By definition, a manifold is a second-countable Hausdorff topological space with a maximal atlas (called a differentiable structure); "maximal" means that it is not contained in a strictly larger atlas. The dimension of the manifold formula_154 is the dimension of the model Euclidean space formula_0; namely, formula_93, and a manifold is called an "n"-manifold when it has dimension "n". A function on a manifold formula_154 is said to be smooth if formula_327 is smooth on formula_328 for each chart formula_329 in the differentiable structure. A manifold is paracompact; one implication is that it admits a partition of unity subordinate to any given open cover. If formula_0 is replaced by an upper half-space formula_330, then we get the notion of a manifold-with-boundary. The set of points that map to the boundary of formula_330 under charts is denoted by formula_211 and is called the boundary of formula_154. This boundary may not be the topological boundary of formula_154. Since the interior of formula_330 is diffeomorphic to formula_0, a manifold is a manifold-with-boundary with empty boundary. The next theorem furnishes many examples of manifolds: the zero set of a smooth map is a manifold provided that the derivative of the map has full rank at every point of the zero set. For example, for formula_331, the derivative formula_332 has rank one at every point formula_183 in formula_271. Hence, the "n"-sphere formula_271 is an "n"-manifold. The theorem is proved as a corollary of the inverse function theorem. Many familiar manifolds are subsets of formula_0. The next theoretically important result (Whitney's embedding theorem) says that there is no other kind of manifold: every manifold can be embedded into a Euclidean space. An immersion is a smooth map whose differential is injective. An embedding is an immersion that is a homeomorphism (thus a diffeomorphism) onto its image. The proof that a manifold can be embedded into formula_333 for some "N" is considerably easier and can readily be given here. It is known that a manifold has a finite atlas formula_334. Let formula_335 be smooth functions such that formula_336 and such that the sets formula_337 cover formula_154 (e.g., a partition of unity). Consider the map formula_338 It is easy to see that formula_4 is an injective immersion, but it may not be an embedding. To fix that, we shall use: formula_339 where formula_35 is a smooth proper map. The existence of a smooth proper map is a consequence of a partition of unity. See for the rest of the proof in the case of an immersion. formula_42
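To illustrate the chart conditions in the definition of an atlas above, here is a minimal Python sketch of a two-chart atlas on the unit circle, using the standard stereographic projections from the north and south poles (the function names are ad hoc); it checks numerically that the transition map on the overlap sends a nonzero coordinate to its reciprocal, a smooth map:

import numpy as np

def phi_north(p):                      # chart defined away from the north pole (0, 1)
    x, y = p
    return x / (1.0 - y)

def phi_south(p):                      # chart defined away from the south pole (0, -1)
    x, y = p
    return x / (1.0 + y)

def phi_north_inverse(u):              # the point of the unit circle with north-chart coordinate u
    return np.array([2.0 * u, u**2 - 1.0]) / (u**2 + 1.0)

u = np.linspace(0.5, 3.0, 6)           # coordinates lying in the overlap of the two charts
transition = np.array([phi_south(phi_north_inverse(t)) for t in u])
print(np.allclose(transition, 1.0 / u))   # True: the transition map is u -> 1/u, smooth for u != 0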
Nash's embedding theorem says that, if formula_154 is equipped with a Riemannian metric, then the embedding can be taken to be isometric at the expense of increasing formula_340; for this, see T. Tao's blog. Tubular neighborhood and transversality. A technically important result is the tubular neighborhood theorem: an embedded submanifold formula_341 of formula_154 has a neighborhood in formula_154 that is diffeomorphic to a neighborhood of the zero section in the normal bundle formula_345. This can be proved by putting a Riemannian metric on the manifold formula_154. Indeed, the choice of metric makes the normal bundle formula_342 a complementary bundle to formula_343; i.e., formula_344 is the direct sum of formula_343 and formula_345. Then, using the metric, we have the exponential map formula_346 from some neighborhood formula_13 of formula_341 in the normal bundle formula_345 to some neighborhood formula_288 of formula_341 in formula_154. The exponential map here may not be injective, but it is possible to make it injective (thus a diffeomorphism) by shrinking formula_13. Integration on manifolds and distribution densities. The starting point for the topic of integration on manifolds is that there is no "invariant way" to integrate functions on manifolds. This may become obvious if we ask: what is an integral of a function on a finite-dimensional real vector space? (In contrast, there is an invariant way to do differentiation since, by definition, a manifold comes with a differentiable structure.) There are several ways to introduce integration theory on manifolds. For example, if a manifold is embedded into a Euclidean space formula_0, then it acquires the Lebesgue measure by restriction from the ambient Euclidean space, and then the second approach works. The first approach is fine in many situations, but it requires the manifold to be oriented (and there are non-orientable manifolds that are not pathological). The third approach generalizes, and that gives rise to the notion of a density. Generalizations. Extensions to infinite-dimensional normed spaces. Notions like differentiability extend to normed spaces. Notes. <templatestyles src="Reflist/styles.css" /> Citations. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathbb{R}^n" }, { "math_id": 1, "text": "f : \\mathbb{R} \\to \\mathbb{R}" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "\\lim_{h \\to 0} (f(a + h) - f(a)) = 0." }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "\\lambda" }, { "math_id": 6, "text": "\\lim_{h \\to 0} \\frac{f(a + h) - f(a) - \\lambda h}{h} = 0." }, { "math_id": 7, "text": "f(a) = 0" }, { "math_id": 8, "text": "f(a + h) = \\lambda h + g(a, h)" }, { "math_id": 9, "text": "g(a, h)" }, { "math_id": 10, "text": "f(a + h)" }, { "math_id": 11, "text": "\\lambda h" }, { "math_id": 12, "text": "f'(a)" }, { "math_id": 13, "text": "U" }, { "math_id": 14, "text": "f'" }, { "math_id": 15, "text": "f'(a) \\ne 0" }, { "math_id": 16, "text": "f'(a) > 0" }, { "math_id": 17, "text": "f'(a) < 0" }, { "math_id": 18, "text": "f : f^{-1}(U) \\to U" }, { "math_id": 19, "text": "f(a)" }, { "math_id": 20, "text": "f^{-1}" }, { "math_id": 21, "text": "y \\in U" }, { "math_id": 22, "text": "(f^{-1})'(y) = {1 \\over f'(f^{-1}(y))}." }, { "math_id": 23, "text": "f : X \\to Y" }, { "math_id": 24, "text": "X" }, { "math_id": 25, "text": "Y" }, { "math_id": 26, "text": "\\mathbb{R}^m" }, { "math_id": 27, "text": "x" }, { "math_id": 28, "text": "f'(x) : \\mathbb{R}^n \\to \\mathbb{R}^m" }, { "math_id": 29, "text": "\\lim_{ h \\to 0 } \\frac{1}{|h|} |f(x + h) - f(x) - f'(x)h| = 0" }, { "math_id": 30, "text": "f'(x)h" }, { "math_id": 31, "text": "f'(x)" }, { "math_id": 32, "text": "h" }, { "math_id": 33, "text": "|f(x + h) - f(x)| \\le (|h|^{-1}|f(x + h) - f(x) - f'(x)h|) |h| + |f'(x)h| \\to 0" }, { "math_id": 34, "text": "h \\to 0" }, { "math_id": 35, "text": "g" }, { "math_id": 36, "text": "\\widetilde{h} = f(x + h) - f(x)" }, { "math_id": 37, "text": "\\begin{align}\n& \\frac{1}{|h|} |g(f(x + h)) - g(y) - g'(y) f'(x) h| \\\\\n& \\le \\frac{1}{|h|} |g(y + \\widetilde{h}) - g(y) - g'(y)\\widetilde{h}| + \\frac{1}{|h|} |g'(y)(f(x+h) - f(x) - f'(x) h)|.\n\\end{align}" }, { "math_id": 38, "text": "\\begin{cases}\n\\frac{|\\widetilde{h}|}{|h|} |g(y+ \\widetilde{h}) - g(y) - g'(y)\\widetilde{h}|/|\\widetilde{h}|, & \\widetilde{h} \\neq 0, \\\\\n0, & \\widetilde{h} = 0.\n\\end{cases}" }, { "math_id": 39, "text": "\\frac{|\\widetilde{h}|}{|h|}" }, { "math_id": 40, "text": "\\widetilde{h} \\to 0" }, { "math_id": 41, "text": "y" }, { "math_id": 42, "text": "\\square" }, { "math_id": 43, "text": "C^1" }, { "math_id": 44, "text": "x \\mapsto f'(x)" }, { "math_id": 45, "text": "m \\times n" }, { "math_id": 46, "text": "Jf(x)" }, { "math_id": 47, "text": "(Jf)(x) = \\begin{bmatrix}\n\\frac{\\partial f_1}{\\partial x_1}(x) & \\cdots & \\frac{\\partial f_1}{\\partial x_n}(x) \\\\\n\\vdots & \\ddots & \\vdots \\\\\n\\frac{\\partial f_m}{\\partial x_1}(x) & \\cdots & \\frac{\\partial f_m}{\\partial x_n}(x)\n\\end{bmatrix}." 
}, { "math_id": 48, "text": "h e_j" }, { "math_id": 49, "text": "e_j = (0, \\cdots, 1, \\cdots, 0)" }, { "math_id": 50, "text": "\\lim_{h \\to 0} \\frac{f_i(x + h e_j) - f_i(x)}{h} = \\frac{\\partial f_i}{\\partial x_j}(x)" }, { "math_id": 51, "text": "f_i" }, { "math_id": 52, "text": "\\frac{\\partial f_i}{\\partial x_j}(x)" }, { "math_id": 53, "text": "J(g \\circ f)(x) = Jg(y) Jf(x)" }, { "math_id": 54, "text": "(g \\circ f)_i = g_i \\circ f" }, { "math_id": 55, "text": "\\frac{\\partial (g_i \\circ f)}{\\partial x_j}(x) = \\frac{\\partial g_i}{\\partial y_1} (y) \\frac{\\partial f_1}{\\partial x_j}(x) + \\cdots + \\frac{\\partial g_i}{\\partial y_m} (y) \\frac{\\partial f_m}{\\partial x_j}(x)," }, { "math_id": 56, "text": "{\\partial f_i}/{\\partial x_j}" }, { "math_id": 57, "text": "[0, 1] \\to \\mathbb{R}^m, \\, t \\mapsto f(x + ty) - tv" }, { "math_id": 58, "text": "g(x) = (Jf)(x)" }, { "math_id": 59, "text": "y = y_i e_i" }, { "math_id": 60, "text": "\\frac{d}{dt}f(x + ty) = \\frac{\\partial f}{\\partial x_i}(x+ty)y = g(x + ty)(y_i e_i)." }, { "math_id": 61, "text": "n = 2" }, { "math_id": 62, "text": "\\| \\cdot \\|" }, { "math_id": 63, "text": "\\begin{align}\n&|\\Delta_y f (x) - g(x)y| \\\\\n&\\le |\\Delta_{y_1 e_1} f(x_1, x_2 + y_2) - g(x)(y_1 e_1)| + |\\Delta_{y_2 e_2} f(x_1, x_2) - g(x)(y_2 e_2)| \\\\\n&\\le |y_1| \\sup_{0 < t < 1}\\|g(x_1 + t y_1, x_2 + y_2) - g(x)\\| + |y_2| \\sup_{0 < t < 1}\\|g(x_1, x_2 + ty_2) - g(x)\\|,\n\\end{align}\n" }, { "math_id": 64, "text": "|\\Delta_y f (x) - g(x)y|/|y| \\to 0" }, { "math_id": 65, "text": "\\mathbb{R}^{n^2}" }, { "math_id": 66, "text": "x_{ij}, 0 \\le i, j \\ne n" }, { "math_id": 67, "text": "f(g) = g^{-1}" }, { "math_id": 68, "text": "c(t) = ge^{tg^{-1}h}" }, { "math_id": 69, "text": "e^A" }, { "math_id": 70, "text": "A" }, { "math_id": 71, "text": "f(c(t)) = e^{-t g^{-1}h} g^{-1} " }, { "math_id": 72, "text": "f'(c(t)) \\circ c'(t) = -g^{-1}h e^{-t g^{-1}h} g^{-1}" }, { "math_id": 73, "text": "t = 0" }, { "math_id": 74, "text": "f'(g) h = -g^{-1}h g^{-1}" }, { "math_id": 75, "text": "\\|(g+h)^{-1} - g^{-1} + g^{-1}h g^{-1}\\| \\le \\| (g+h)^{-1} \\| \\|h\\| \\|g^{-1} h g^{-1}\\|." 
}, { "math_id": 76, "text": "f : X \\to \\mathbb{R}^m" }, { "math_id": 77, "text": "X \\subset \\mathbb{R}^n" }, { "math_id": 78, "text": "f' : X \\to \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m)" }, { "math_id": 79, "text": "\\operatorname{Hom}" }, { "math_id": 80, "text": "f'' : X \\to \\operatorname{Hom}(\\mathbb{R}^n, \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m))" }, { "math_id": 81, "text": "f''" }, { "math_id": 82, "text": "\\operatorname{Hom}(\\mathbb{R}^n, \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m)) \\overset{\\varphi}\\underset{\\sim}\\to \\{ (\\mathbb{R}^n)^2 \\to \\mathbb{R}^m \\text{ bilinear}\\}" }, { "math_id": 83, "text": "\\varphi(g)(x, y) = g(x)y" }, { "math_id": 84, "text": "\\varphi" }, { "math_id": 85, "text": "\\psi" }, { "math_id": 86, "text": "(\\psi(g)x)y = g(x, y)" }, { "math_id": 87, "text": "f^{(k)} = (f^{(k-1)})'" }, { "math_id": 88, "text": "k" }, { "math_id": 89, "text": "(\\mathbb{R}^n)^k \\to \\mathbb{R}^m" }, { "math_id": 90, "text": "m = 1" }, { "math_id": 91, "text": "f''(x)" }, { "math_id": 92, "text": "H" }, { "math_id": 93, "text": "n" }, { "math_id": 94, "text": "f''(x)(y, z) = (Hy, z)" }, { "math_id": 95, "text": "f' : X \\to (\\mathbb{R}^n)^* \\simeq \\mathbb{R}^n" }, { "math_id": 96, "text": "(i, j)" }, { "math_id": 97, "text": "H_{ij} = \\frac{\\partial^2 f}{\\partial x_i \\partial x_j}(x)" }, { "math_id": 98, "text": "u, v" }, { "math_id": 99, "text": "|\\Delta_v \\Delta_u f(x) - f''(x)(u, v)| \\le \\sup_{0 < t_1, t_2 < 1} | f''(x + t_1 u + t_2 v)(u, v) - f''(x)(u, v) |," }, { "math_id": 100, "text": "f''(x)(u, v) = \\lim_{s, t \\to 0} (\\Delta_{tv} \\Delta_{su} f(x) - f(x))/(st)." }, { "math_id": 101, "text": "f''(x)(u, v) = f''(x)(v, u)" }, { "math_id": 102, "text": "C^k" }, { "math_id": 103, "text": "f^{(k)}(x)" }, { "math_id": 104, "text": "f(z+(h,k))=\\sum_{a+b<n} \\partial_x^a\\partial_y^b f(z){h^a k^b\\over a! b!} + n\\int_0^1 (1-t)^{n-1} \\sum_{a+b=n} \\partial_x^a\\partial_y^b f(z+t(h,k)){h^a k^b\\over a! b!} \\, dt." }, { "math_id": 105, "text": "T : \\mathcal{S} \\to \\mathcal{S}" }, { "math_id": 106, "text": "\\mathcal{S}" }, { "math_id": 107, "text": "\\sup |x^{\\beta} \\partial^{\\alpha} \\varphi| < \\infty" }, { "math_id": 108, "text": "\\alpha, \\beta" }, { "math_id": 109, "text": "\\varphi - \\psi \\varphi(y) = \\sum_{j=1}^n (x_j - y_j) \\varphi_j" }, { "math_id": 110, "text": "\\varphi_j \\in \\mathcal{S}" }, { "math_id": 111, "text": "\\psi(y) = 1" }, { "math_id": 112, "text": "T" }, { "math_id": 113, "text": "T (x_j \\varphi) = x_j T\\varphi" }, { "math_id": 114, "text": "T\\varphi - \\varphi(y) T\\psi = \\sum_{j=1}^n (x_j - y_j) T\\varphi_j" }, { "math_id": 115, "text": "T\\varphi(y) = \\varphi(y) T\\psi(y)." }, { "math_id": 116, "text": "m" }, { "math_id": 117, "text": "T\\varphi = m \\varphi" }, { "math_id": 118, "text": "F, R : \\mathcal{S} \\to \\mathcal{S}" }, { "math_id": 119, "text": "(R \\varphi)(x) = \\varphi(-x)" }, { "math_id": 120, "text": "T = RF^2" }, { "math_id": 121, "text": "U, V" }, { "math_id": 122, "text": "x, f(x)." 
}, { "math_id": 123, "text": "f : \\mathbb{R}^n \\times \\mathbb{R}^m \\to \\mathbb{R}^m" }, { "math_id": 124, "text": "f(a, b) = 0" }, { "math_id": 125, "text": "(a, b)" }, { "math_id": 126, "text": "y \\mapsto f(a, y)" }, { "math_id": 127, "text": "b" }, { "math_id": 128, "text": "g : U \\to V" }, { "math_id": 129, "text": "a, b" }, { "math_id": 130, "text": "f(x, g(x)) = 0" }, { "math_id": 131, "text": "[a, b]" }, { "math_id": 132, "text": "a = t_0 \\le t_1 \\le \\cdots \\le t_k = b" }, { "math_id": 133, "text": "P" }, { "math_id": 134, "text": "D" }, { "math_id": 135, "text": "D = \\prod_1^n [a_i, b_i]" }, { "math_id": 136, "text": "P_1, \\dots, P_n" }, { "math_id": 137, "text": "P_i" }, { "math_id": 138, "text": "[a_i, b_i]" }, { "math_id": 139, "text": "U(f, P) = \\sum_{Q \\in P} (\\sup_Q f) \\operatorname{vol}(Q)" }, { "math_id": 140, "text": "Q" }, { "math_id": 141, "text": "Q = \\prod_{i = 1}^n [t_{i, j_i}, t_{i, j_i+1}]" }, { "math_id": 142, "text": "P_i : a_i = t_{i, 0} \\le \\dots \\cdots \\le t_{i, k_i} = b_i" }, { "math_id": 143, "text": "\\operatorname{vol}(Q)" }, { "math_id": 144, "text": "\\operatorname{vol}(Q) = \\prod_1^n (t_{i, j_i+1} - t_{i, j_i})" }, { "math_id": 145, "text": "L(f, P)" }, { "math_id": 146, "text": "\\sup" }, { "math_id": 147, "text": "\\inf" }, { "math_id": 148, "text": "\\sup \\{ L(f, P) \\mid P \\} = \\inf \\{ U(f, P) \\mid P \\}" }, { "math_id": 149, "text": "\\int_D f \\, dx" }, { "math_id": 150, "text": "\\epsilon > 0" }, { "math_id": 151, "text": "D_1, D_2, \\dots, " }, { "math_id": 152, "text": "\\sum_i \\operatorname{vol}(D_i) < \\epsilon." }, { "math_id": 153, "text": "M \\subset \\mathbb{R}^n" }, { "math_id": 154, "text": "M" }, { "math_id": 155, "text": "\\int_M f \\, dx := \\int_D \\chi_M f \\, dx" }, { "math_id": 156, "text": "\\chi_M" }, { "math_id": 157, "text": "\\chi_M(x) = 1" }, { "math_id": 158, "text": "x \\in M" }, { "math_id": 159, "text": "=0" }, { "math_id": 160, "text": "x \\not\\in M," }, { "math_id": 161, "text": "\\chi_M f" }, { "math_id": 162, "text": "\\mathbb{R}^3" }, { "math_id": 163, "text": "\\textbf{r} = \\textbf{r}(u, v)" }, { "math_id": 164, "text": "F" }, { "math_id": 165, "text": "\\int_M F \\, dS := \\int \\int_D (F \\circ \\textbf{r}) | \\textbf{r}_u \\times \\textbf{r}_v | \\, du dv" }, { "math_id": 166, "text": "F : M \\to \\mathbb{R}^3" }, { "math_id": 167, "text": "\\int_M F \\cdot dS := \\int_M (F \\cdot \\textbf{n}) \\, dS" }, { "math_id": 168, "text": "\\textbf{n}" }, { "math_id": 169, "text": "\\textbf{n} = \\frac{\\textbf{r}_u \\times \\textbf{r}_v}{|\\textbf{r}_u \\times \\textbf{r}_v|}" }, { "math_id": 170, "text": "\\int_M F \\cdot dS = \\int \\int_D (F \\circ \\textbf{r}) \\cdot (\\textbf{r}_u \\times \\textbf{r}_v) \\, du dv = \\int \\int_D \\det(F \\circ \\textbf{r}, \\textbf{r}_u, \\textbf{r}_v) \\, dudv." }, { "math_id": 171, "text": "c : [0, 1] \\to \\mathbb{R}^n" }, { "math_id": 172, "text": "c" }, { "math_id": 173, "text": "t" }, { "math_id": 174, "text": "v" }, { "math_id": 175, "text": "c(t)" }, { "math_id": 176, "text": "v = (c_1'(t), \\dots, c_n'(t))" }, { "math_id": 177, "text": "c(t) = (a \\cos(t), a \\sin(t), bt), a > 0, b > 0" }, { "math_id": 178, "text": "c'(t) = (-a \\sin(t), a \\cos(t), b)." 
}, { "math_id": 179, "text": "c: [0, 1] \\to M" }, { "math_id": 180, "text": "c(0) = p" }, { "math_id": 181, "text": "X_p" }, { "math_id": 182, "text": "\\omega" }, { "math_id": 183, "text": "p" }, { "math_id": 184, "text": "\\omega_p" }, { "math_id": 185, "text": "T_p M" }, { "math_id": 186, "text": "df" }, { "math_id": 187, "text": "df_p(v) = v(f)" }, { "math_id": 188, "text": "v(f)" }, { "math_id": 189, "text": "x_i" }, { "math_id": 190, "text": "i" }, { "math_id": 191, "text": "dx_{i, p}(v) = v_i" }, { "math_id": 192, "text": "dx_{i,p}" }, { "math_id": 193, "text": "\\omega = f_1 \\, dx_1 + \\cdots + f_n \\, dx_n" }, { "math_id": 194, "text": "f_1, \\dots, f_n" }, { "math_id": 195, "text": "dx_i" }, { "math_id": 196, "text": "\\bigwedge^k T^*_p M" }, { "math_id": 197, "text": "T^*_p M" }, { "math_id": 198, "text": "\\omega = \\sum_{i_1 < \\cdots < i_k} f_{i_1 \\dots i_k} \\, dx_{i_1} \\wedge \\cdots \\wedge dx_{i_k}" }, { "math_id": 199, "text": "f_{i_1 \\dots i_k}" }, { "math_id": 200, "text": "df = \\sum_{i=1}^n \\frac{\\partial f}{\\partial x_i} \\, dx_i" }, { "math_id": 201, "text": "v = \\partial / \\partial x_j |_p" }, { "math_id": 202, "text": "df_p(v) = \\frac{\\partial f}{\\partial x_j}(p) = \\sum_{i=1}^n \\frac{\\partial f}{\\partial x_i}(p) \\, dx_i(v)" }, { "math_id": 203, "text": "x_1, \\dots, x_n" }, { "math_id": 204, "text": "d" }, { "math_id": 205, "text": "d(\\alpha \\wedge \\beta) = d \\alpha \\wedge \\beta + (-1)^p \\alpha \\wedge d \\beta." }, { "math_id": 206, "text": "d \\circ d = 0" }, { "math_id": 207, "text": "d \\omega" }, { "math_id": 208, "text": "\\omega = f \\, dx_1 \\wedge \\cdots \\wedge dx_n" }, { "math_id": 209, "text": "\\int_M \\omega = \\int_M f \\, dx_1 \\cdots dx_n." }, { "math_id": 210, "text": "\\int_M \\omega" }, { "math_id": 211, "text": "\\partial M" }, { "math_id": 212, "text": "\\int d(f \\omega) = 0" }, { "math_id": 213, "text": "\\int d(f \\omega) = \\int df \\wedge \\omega + \\int f \\, d\\omega." }, { "math_id": 214, "text": "\\int_M d \\omega" }, { "math_id": 215, "text": "-\\int_{\\partial M} \\omega" }, { "math_id": 216, "text": "M = [a, b]" }, { "math_id": 217, "text": "\\omega = f" }, { "math_id": 218, "text": "d\\omega = f' \\, dx" }, { "math_id": 219, "text": "\\int_M f' \\, dx = f(b) - f(a)" }, { "math_id": 220, "text": "\\omega = f\\,dx + g\\,dy + h\\,dz" }, { "math_id": 221, "text": "d(f\\,dx) = df \\wedge dx = \\frac{\\partial f}{\\partial y} \\, dy \\wedge dx + \\frac{\\partial f}{\\partial z} \\,dz \\wedge dx" }, { "math_id": 222, "text": "d(g\\,dy)" }, { "math_id": 223, "text": "d\\omega = \\left( \\frac{\\partial h}{\\partial y} - \\frac{\\partial g}{\\partial z} \\right) dy \\wedge dz + \\left( \\frac{\\partial f}{\\partial z} - \\frac{\\partial h}{\\partial x} \\right) dz \\wedge dx + \\left( \\frac{\\partial g}{\\partial x} - \\frac{\\partial f}{\\partial y} \\right) dx \\wedge dy." 
}, { "math_id": 224, "text": "\\int_M d \\omega = \\int_M (\\nabla \\times F) \\cdot dS" }, { "math_id": 225, "text": "F = (f, g, h)" }, { "math_id": 226, "text": "\\nabla = \\left( \\frac{\\partial}{\\partial x}, \\frac{\\partial}{\\partial y}, \\frac{\\partial}{\\partial z} \\right)" }, { "math_id": 227, "text": "\\int_M (\\nabla \\times F) \\cdot dS = \\int_{\\partial M} (f\\,dx + g\\,dy + h\\,dz)," }, { "math_id": 228, "text": "z = x + iy" }, { "math_id": 229, "text": "\\bar z" }, { "math_id": 230, "text": "\\frac{\\partial}{\\partial z} = \\frac{1}{2}\\left( \\frac{\\partial}{\\partial x} - i \\frac{\\partial}{\\partial y} \\right), \\, \\frac{\\partial}{\\partial \\bar{z}} = \\frac{1}{2}\\left( \\frac{\\partial}{\\partial x} + i \\frac{\\partial}{\\partial y} \\right)." }, { "math_id": 231, "text": "\\frac{\\partial f}{\\partial \\bar z} = 0" }, { "math_id": 232, "text": "df = \\frac{\\partial f}{\\partial z}dz + \\frac{\\partial f}{\\partial \\bar{z}}d \\bar{z}." }, { "math_id": 233, "text": "D_{\\epsilon} = \\{ z \\in \\mathbb{C} \\mid \\epsilon < |z - z_0| < r \\}" }, { "math_id": 234, "text": "z_0" }, { "math_id": 235, "text": "1/(z - z_0)" }, { "math_id": 236, "text": "D_{\\epsilon}" }, { "math_id": 237, "text": "d \\left( \\frac{f}{z-z_0} dz \\right) = \\frac{\\partial f}{\\partial \\bar z} \\frac{d \\bar{z} \\wedge dz}{z - z_0} " }, { "math_id": 238, "text": "\\int_{D_{\\epsilon}} \\frac{\\partial f}{\\partial \\bar z} \\frac{d \\bar{z} \\wedge dz}{z - z_0} = \\left( \\int_{|z - z_0| = r} - \\int_{|z - z_0| = \\epsilon} \\right) \\frac{f}{z-z_0} dz." }, { "math_id": 239, "text": "\\epsilon \\to 0" }, { "math_id": 240, "text": "2\\pi i \\, f(z_0) = \\int_{|z - z_0| = r} \\frac{f}{z-z_0} dz + \\int_{|z - z_0| \\le r} \\frac{\\partial f}{\\partial \\bar z} \\frac{dz \\wedge d \\bar z}{z - z_0}." }, { "math_id": 241, "text": "d\\omega = 0" }, { "math_id": 242, "text": "\\omega = d\\eta" }, { "math_id": 243, "text": "\\eta" }, { "math_id": 244, "text": "\\omega = \\frac{-y}{x^2 + y^2} + \\frac{x}{x^2 + y^2}" }, { "math_id": 245, "text": "\\mathbb{R}^2 - 0" }, { "math_id": 246, "text": "x = r \\cos \\theta, y = r \\sin \\theta" }, { "math_id": 247, "text": " r = \\sqrt{x^2 + y^2}" }, { "math_id": 248, "text": "\\omega = r^{-2}(-r \\sin \\theta \\, dx + r \\cos \\theta \\, dy) = d \\theta." }, { "math_id": 249, "text": "\\theta" }, { "math_id": 250, "text": "df = \\omega" }, { "math_id": 251, "text": "\\mathbb{R}^2 - \\{ x = 0 \\}" }, { "math_id": 252, "text": "\\theta = \\arctan(y/x)" }, { "math_id": 253, "text": "f, g : X \\to Y" }, { "math_id": 254, "text": "\\mathbb{R}^m, \\mathbb{R}^n" }, { "math_id": 255, "text": "H : X \\times [0, 1] \\to Y" }, { "math_id": 256, "text": "f(x) = H(x, 0)" }, { "math_id": 257, "text": "g(x) = H(x, 1)" }, { "math_id": 258, "text": "c : [0, 1] \\to X" }, { "math_id": 259, "text": "c(0) = c(1)" }, { "math_id": 260, "text": "D = \\{ (x, y) \\mid \\sqrt{x^2 + y^2} \\le r \\} \\subset \\mathbb{R}^2" }, { "math_id": 261, "text": "c : [0, 1] \\to D" }, { "math_id": 262, "text": "H : [0, 1]^2 \\to D, \\, H(x, t) = (1-t) c(x) + t c(0)" }, { "math_id": 263, "text": "c(0)" }, { "math_id": 264, "text": "E_1, \\dots, E_3" }, { "math_id": 265, "text": "E_i \\cdot E_j = \\delta_{ij}" }, { "math_id": 266, "text": "U_i" }, { "math_id": 267, "text": "U_i(x)" }, { "math_id": 268, "text": "E_1 = \\cos \\theta U_1 + \\sin \\theta U_2, \\, E_2 = -\\sin \\theta U_1 + \\cos \\theta U_2, \\, E_3 = U_3." 
}, { "math_id": 269, "text": "T, N, B" }, { "math_id": 270, "text": "\\beta : I \\to \\mathbb{R}^3" }, { "math_id": 271, "text": "g^{-1}(0)" }, { "math_id": 272, "text": "x^2 + y^2 = 1" }, { "math_id": 273, "text": "x + y = 4" }, { "math_id": 274, "text": "f(x, y, u, v) = (x - u)^2 + (y - v)^2" }, { "math_id": 275, "text": "(x, y)" }, { "math_id": 276, "text": "(u, v)" }, { "math_id": 277, "text": "g = (x^2 + y^2 - 1, u + v - 4)" }, { "math_id": 278, "text": "\\nabla f = (2(x - u), 2(y - v), -2(x - u), -2(y - v))." }, { "math_id": 279, "text": "\\nabla g_1 = (2x, 2y, 0, 0), \\nabla g_2 = (0, 0, 1, 1)." }, { "math_id": 280, "text": "x - u = \\lambda_1 x, \\, y - v = \\lambda_1 y, \\, 2(x-u) = -\\lambda_2, \\, 2(y-v) = -\\lambda_2." }, { "math_id": 281, "text": "\\lambda_1 = 0" }, { "math_id": 282, "text": "x = u, y = v" }, { "math_id": 283, "text": "\\lambda_1 \\ne 0" }, { "math_id": 284, "text": "x = \\frac{x-u}{\\lambda_1}, \\, y = \\frac{y-v}{\\lambda_1}." }, { "math_id": 285, "text": "x = y = 1/\\sqrt{2}" }, { "math_id": 286, "text": "u = v = 2" }, { "math_id": 287, "text": "2\\sqrt{2} - 1" }, { "math_id": 288, "text": "V" }, { "math_id": 289, "text": "T : V \\to V" }, { "math_id": 290, "text": "V = \\mathbb{R}^n" }, { "math_id": 291, "text": "[a_{ij}]" }, { "math_id": 292, "text": "f(x) = (Tx, x)" }, { "math_id": 293, "text": "\\nabla f = 2(\\sum a_{1i} x_i, \\dots, \\sum a_{ni} x_i)" }, { "math_id": 294, "text": "g = \\sum x_i^2 - 1" }, { "math_id": 295, "text": "u" }, { "math_id": 296, "text": "\\nabla g = 2(x_1, \\dots, x_n)" }, { "math_id": 297, "text": "2 \\sum_i a_{ji} u_i = 2 \\lambda u_j, 1 \\le j \\le n." }, { "math_id": 298, "text": "Tu = \\lambda u" }, { "math_id": 299, "text": "T : W \\to W" }, { "math_id": 300, "text": "W" }, { "math_id": 301, "text": "\\varphi \\in C_c^{\\infty}(M)" }, { "math_id": 302, "text": "\\frac{\\partial u}{\\partial x_i} = f" }, { "math_id": 303, "text": "\\int \\frac{\\partial u}{\\partial x_i} \\varphi \\, dx = \\int f \\varphi \\, dx" }, { "math_id": 304, "text": "-\\int u \\frac{\\partial \\varphi}{\\partial x_i} \\, dx = \\int f \\varphi \\, dx" }, { "math_id": 305, "text": "\\varphi \\mapsto \\int u \\varphi \\, dx" }, { "math_id": 306, "text": "C_c^{\\infty}(M)" }, { "math_id": 307, "text": "\\frac{\\partial u}{\\partial x_i}" }, { "math_id": 308, "text": "\\varphi \\mapsto -\\left \\langle u, \\frac{\\partial \\varphi}{\\partial x_i} \\right\\rangle" }, { "math_id": 309, "text": "\\langle \\alpha, \\varphi \\rangle = \\alpha(\\varphi)" }, { "math_id": 310, "text": "(0, \\infty)" }, { "math_id": 311, "text": "\\langle H', \\varphi \\rangle = -\\int_0^{\\infty} \\varphi' \\, dx = \\varphi(0)." }, { "math_id": 312, "text": "\\delta_a" }, { "math_id": 313, "text": "\\varphi \\mapsto \\varphi(a)" }, { "math_id": 314, "text": "H' = \\delta_0." }, { "math_id": 315, "text": "E_{z_0}(z) = \\frac{1}{\\pi (z - z_0)}" }, { "math_id": 316, "text": "| z - z_0 | \\le r" }, { "math_id": 317, "text": "\\varphi(z_0) = {1 \\over 2 \\pi i} \\int \\frac{\\partial \\varphi}{\\partial \\bar z} \\frac{dz \\wedge d \\bar z}{z - z_0}." }, { "math_id": 318, "text": "dz \\wedge d \\bar z = -2i dx \\wedge dy" }, { "math_id": 319, "text": "\\varphi(z_0) = -\\int E_{z_0} \\frac{\\partial \\varphi}{\\partial \\bar z} dxdy = \\left\\langle \\frac{\\partial E_{z_0}}{\\partial \\bar z}, \\varphi \\right \\rangle," }, { "math_id": 320, "text": "\\frac{\\partial E_{z_0}}{\\partial \\bar z} = \\delta_{z_0}." 
}, { "math_id": 321, "text": "E_{z_0}" }, { "math_id": 322, "text": "\\partial/\\partial \\bar z" }, { "math_id": 323, "text": "\\varphi_i : U_i \\to \\mathbb{R}^n" }, { "math_id": 324, "text": "M = \\cup_i U_i" }, { "math_id": 325, "text": "\\varphi_i : U_i \\to \\varphi_i(U_i)" }, { "math_id": 326, "text": "\\varphi_j \\circ \\varphi_i^{-1} : \\varphi_i(U_i \\cap U_j) \\to \\varphi_j(U_i \\cap U_j)" }, { "math_id": 327, "text": "f|_U \\circ \\varphi^{-1}" }, { "math_id": 328, "text": "\\varphi(U)" }, { "math_id": 329, "text": "\\varphi : U \\to \\mathbb{R}^n" }, { "math_id": 330, "text": "\\mathbb{H}^n" }, { "math_id": 331, "text": "g(x) = x_1^2 + \\cdots + x_{n+1}^2 - 1" }, { "math_id": 332, "text": "g'(x) = \\begin{bmatrix}2 x_1 & 2 x_2 & \\cdots & 2 x_{n+1}\\end{bmatrix}" }, { "math_id": 333, "text": "\\mathbb{R}^N" }, { "math_id": 334, "text": "\\{ \\varphi_i : U_i \\to \\mathbb{R}^n \\mid 1 \\le i \\le r \\}" }, { "math_id": 335, "text": "\\lambda_i" }, { "math_id": 336, "text": "\\operatorname{Supp}(\\lambda_i) \\subset U_i" }, { "math_id": 337, "text": "\\{ \\lambda_i = 1 \\}" }, { "math_id": 338, "text": "f = (\\lambda_1 \\varphi_1, \\dots, \\lambda_r \\varphi_r, \\lambda_1, \\dots, \\lambda_r) : M \\to \\mathbb{R}^{(k+1)r}" }, { "math_id": 339, "text": "(f, g) : M \\to \\mathbb{R}^{(k+1)r+1}" }, { "math_id": 340, "text": "2k" }, { "math_id": 341, "text": "N" }, { "math_id": 342, "text": "\\nu_i" }, { "math_id": 343, "text": "TN" }, { "math_id": 344, "text": "TM|_N" }, { "math_id": 345, "text": "\\nu_N" }, { "math_id": 346, "text": "\\exp : U \\to V" } ]
https://en.wikipedia.org/wiki?curid=64965378
64966180
Timeline of the COVID-19 pandemic in Massachusetts
The following is a timeline of the COVID-19 pandemic in Massachusetts. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; 2020. February. The first case of COVID-19 was confirmed by state health officials on February 1. Massachusetts became the fifth state in the U.S. to report a case of COVID-19. The individual, a University of Massachusetts Boston student, had returned to Boston from Wuhan, China. Upon returning to Boston he began experiencing symptoms and sought medical care. 175 executives of Biogen, a biotechnology company based in Cambridge, held a two-day leadership conference from February 26–28 at the Boston Marriott Long Wharf hotel. On February 29, a Biogen executive began to develop symptoms and sought treatment at a Boston area hospital. Suspecting COVID-19 was the cause of the illness, the executive requested a test, but was told by hospital staff that it was not necessary. March. March 1–7. On March 2, the second confirmed case in Massachusetts was reported. The patient was a woman in her 20s from Norfolk County. She had recently traveled to Italy with a school group from Saint Raphael Academy in Pawtucket, Rhode Island. She was the third person from the trip to test positive, with two people from Rhode Island who had gone on the trip also testing positive. On March 4, staff from Biogen contacted the Massachusetts Department of Public Health (MDPH) to report that two executives who had recently traveled from Europe to Boston and had attended the February employee meeting had tested positive for SARS-CoV-2 upon returning home. The same day, a "significant number" of Biogen employees asked to be tested for the virus at Massachusetts General Hospital (MGH), which had not been informed that anyone at the company had been exposed. The state police announced Shattuck Street would be closed because a group of 60 individuals were being transported along the route to Brigham and Women's Hospital. On March5, Biogen reported that three individuals who had attended the company event in Boston the previous week had tested positive for SARS-CoV-2. On March 6, public health officials reported five new cases bringing the state total to eight. Four cases were in Suffolk County, three in Norfolk County, and one in Middlesex County. Two cases were associated with travel to Italy and one to Wuhan. All five new cases were associated with the Biogen meeting. On March 7, five more presumptive positive cases of COVID-19 were reported, bringing the total to 13. Among those cases was the index case in Berkshire County, a man in his 60s from Clarksburg whose infection could not be traced. March 8–14. On March 8, the MDPH reported 15 more presumptive cases of COVID-19, all of which were individuals present at the Biogen conference, bringing the total to 28. In response to the outbreak, Biogen instituted remote work. The fifteen new presumptive cases included five from Suffolk County, five from Middlesex County, four from Norfolk County, and one whose county of residence was unknown. Officials in North Carolina reported that five residents of Wake County tested positive for COVID-19; all five were participants in the previous week's Biogen meeting in Boston. On March 10, the first evidence of community transmission, also known as community spread, was found in a handful of cases in the Berkshires. A man in Sudbury tested presumptive positive for COVID-19, and the first case in Essex County was also reported. 
On March 12, there were 108 people in Massachusetts with confirmed or presumptive cases of COVID-19. Among those cases, 82 (75% of the total) were associates or employees of Biogen. Governor Baker said the state had tested more than 200 patients and had the capacity to test up to 5,000. The Boston Marriott Long Wharf hotel, which had hosted the Biogen company gathering, closed temporarily. In a letter to their guests, the hotel said it made the decision in cooperation with the Boston Public Health Commission. Acton-Boxborough announced school closures from March 13 until March 20. On March 13, the Boston Marathon was postponed from April 20 until September 14. A few hours later, Governor Baker prohibited gatherings of more than 250 people. The measure was targeted at large events and exempted most workplaces, transit buildings, polling locations, government buildings, and schools. Cardinal O'Malley, the Roman Catholic Archbishop of Boston, announced that all daily and Sunday masses and other religious services would be suspended in the Archdiocese of Boston until further notice. Boston Mayor Marty Walsh announced that Boston Public Schools would be closed starting on March 17 until April 27. Woburn announced that a presumptive positive case in the city had been confirmed as negative. On March 14, Cape Cod (Barnstable County) confirmed its first case, a man in his 60s from Sandwich. Officials in Worcester and Malden both announced their respective cities' first confirmed case of COVID-19, both linked to Biogen. Of the state's 138 cases, 104 (75%) could be traced to employees or contacts of Biogen. A 59-year-old Worcester man died on a flight from Dubai to Boston, sparking speculation that he had died from COVID-19. He had been sick with gastrointestinal problems and was in cardiac arrest during the flight. On March 16, Massachusetts State Police said an autopsy revealed he did not have COVID-19. March 15–21. On March 15, Baker ordered all public and private schools in Massachusetts to close for three weeks, from March 17 through April7. The same day, he also banned eating at restaurants, banned gatherings of more than 25 people, relaxed unemployment claim requirements, and enacted other interventions to try to slow the spread of COVID-19. Hampden and Plymouth counties had their first cases. Plymouth County's first case, in Hanover, resulted from travel. Hampden County's first case tested positive at Baystate Medical Center in Springfield; the hospital noted an additional 23 suspected cases. On March 16, Brockton announced its first case, and the mayor declared a state of emergency for the city. Boston Mayor Marty Walsh ordered construction projects to shut down by March 23, maintaining only minimal staff for security. He also announced that all branches of the Boston Public Library would close beginning that night. The Massachusetts Bay Transportation Authority (MBTA) announced that, starting March 17, it would run the subway and buses at Saturday levels of service during the week, with express buses still running, ferries not running, and commuter rail running on a modified schedule. The next day, service was increased on the Blue Line, Green Line E branch (which serves Longwood Medical Area), and some bus lines to reduce crowding. Frequency on Massport shuttles to Logan International Airport was reduced or canceled. The number of hospitalized patients with suspected or known infections quadrupled to 53 between March 16 and 17. 
Major hospitals began reusing protective gear or asking the public for donations of masks. The number of cases where initial exposure was under investigation began to rise rapidly, whereas cases tracked to Biogen attendees and household contacts continued an overall mild decline. On March 19, Governor Baker activated up to 2,000 Massachusetts National Guard to assist in the management of the pandemic. The number of cases increased by 72, putting the total at 328, with 119 in Middlesex County. Franklin and Hampshire counties – both in Western Massachusetts and the last non-island counties – had their first confirmed cases of COVID-19. On March 20, Massachusetts experienced its first death due to COVID-19. The fatality was an 87-year-old man from Suffolk County, who was hospitalized and who had preexisting health conditions. Martha's Vineyard in Dukes County had its first case, a 50-year-old man in Tisbury. This was the thirteenth of 14 counties in Massachusetts to report a case of COVID-19. The cities of Somerville and Cambridge closed non-essential businesses. Governor Baker announced that 5,207 people had been tested for COVID-19 in Massachusetts through state and commercial laboratories. That night the state announced its second death due to COVID-19, a woman from Middlesex County in her 50s who had a preexisting health condition. Nantucket County, the last county to have no cases of the virus, reported its first COVID-19 case. In order to reduce contact between drivers and customers, the MBTA began rear-door boarding on above-ground stops for buses, the Green Line, and the Mattapan Trolley, except for passengers with disabilities who need to use the front door. March 22–31. On March 22, Nantucket issued a shelter-in-place order, to start March 23 and end on April 6. Exceptions were made for essential services to remain open. Governor Baker instructed people in mainland Massachusetts with second homes in Nantucket and Dukes County to stay on the mainland. Three new deaths were reported by Massachusetts DPH, two men, both in their 70s, from Hampden and Berkshire counties, and a man in his 90s from Suffolk County. On March 23, Governor Baker announced a stay-at-home advisory effective from noon March 24 until noon April 7. Nonessential businesses were ordered to close physical workplaces, and restaurants and bars were restricted to offering takeout and delivery. People were told they could go out to obtain essential goods and services, such as groceries and medicine, but should follow social distancing protocols. On March 24, the number of cases jumped by 382 to 1,159, with two new deaths attributed to COVID-19. This unusually large jump in cases (49%, versus 20–28% in the previous five days) was attributable to Quest Diagnostics processing 3,843 tests in one day, yielding 267 of the state's 382 new positive results. On March 25, the Commissioner of Public Health issued emergency regulations for grocery stores and pharmacies, requiring them to designate a daily shopping hour for senior citizens and provide checkout line distancing markers, hand washing and sanitizer for employees, disinfecting wipes for customers to use on carts. A ban on reusable bags became mandatory, overriding local bans on single-use plastic bags and eliminating fees for store-provided bags. Self-service food stations were ordered to be closed, and regular sanitization was required. On March 27, the state extended the tax filing deadline to July 15 and announced new travel guidelines. 
State officials announced that the Massachusetts Department of Public Health Commissioner, Monica Bharel, had tested positive for SARS-CoV-2; she had mild symptoms and planned to recover at home. On March 30, the state announced that it had conducted almost 43,000 tests of Massachusetts residents, with Quest Diagnostics having conducted 21,321 (almost half) of the total tests administered. Later that evening, the MBTA announced that 18 transit workers had tested positive for the virus. In addition, the Boston Police Department confirmed that 19 officers and three civilian employees had all tested positive. April. April 1–7. The Archdiocese of Boston announced that eight priests had tested positive for the disease. On April2 Boston Mayor Walsh announced plans to convert the Boston Convention and Exhibition Center (BCEC) into a field hospital, later named Boston Hope, with 500 beds assigned to the homeless and 500 to accept COVID-19 patients from city hospitals. On April 2, more than 500 healthcare workers in Boston hospitals were reported to have tested positive for COVID-19. On April 5, Boston Mayor Walsh announced a voluntary city-wide curfew for non-emergency workers in Boston from 9p.m. to 6a.m., and asked all Bostonians to wear face coverings in public. April 8–14. On April 9, the Massachusetts Institute of Technology published a preliminary study of sewage samples taken in the Boston area on March 25, in an effort to determine the extent of COVID-19 infections. Based on concentrations of the virus found in the samples, the study suggested that approximately 115,000 of the Boston region's 2.3 million people were infected. At the time of sampling, Massachusetts had only 646 confirmed cases in the area. Starting the evening of Friday April 10, the Massachusetts Department of Conservation and Recreation closed some parkways to vehicle traffic to allow recreational pedestrians to spread out, and reduced parking availability at some state parks. The city of Boston also reduced parking near the Arnold Arboretum. The Massachusetts Education Commissioner canceled MCAS standardized tests for the first time, taking advantage of a federal waiver. On April 12, there were 25,475 total cases, with 2,615 new cases, making Massachusetts the state with the third-most cases in the United States, behind only New York and New Jersey. Massachusetts officials warned of ebb and flow of the spread of COVID-19. April 15–21. On April 15, the Massachusetts DPH announced a plan to release town-by-town infection rates. This was a reversal from the earlier policy of discouraging the release of town-specific information concerning the number of infected in each particular community. On April 18, Baker announced that a third field hospital has opened in Cape Cod. On April 20, Governor Baker signed a law banning residential and small business evictions and foreclosures on homeowners (other than emergencies), for four months or until the state of emergency is ended. As of July 21, the moratorium expires on October 17, 2020. On April 21, Governor Baker announced that Massachusetts schools would not return to in-person learning for the remainder of the academic year. He also extended through June 29 a previous order to close non-emergency childcare services. April 22–30. On April 22, former 2020 Democratic presidential candidate and U.S. Senator Elizabeth Warren from Massachusetts announced that her oldest brother had died from COVID-19 in Oklahoma. 
On April 24, Governor Baker announced that while COVID-19 cases and testing were up in Massachusetts, hospitalizations have started to decrease and reached the lowest point since early April. Massachusetts recorded 4,946 new cases partially due to an error by Quest Diagnostics in missing more than 10,000 test results, both positive and negative, recorded in April 24 data. On April 25, Governor Baker addressed the topic of when stay-at-home measures and closures of non-essential businesses would end. When restrictions were originally announced in mid-March, they were slated to end at noon on April7; later their projected end date was pushed to May4. Baker said it was unlikely restrictions would be lifted by then because the surge of cases had hit later than expected – May4 presumed a surge in "early" April. Baker said the process of reopening will begin when hospitalizations start to decline consistently, and when there is "some evidence that we are in fact over the hump... with respect to the surge." On April 28, Governor Baker extended the statewide stay-at-home advisory by two weeks, to May 18. He also said that once the advisory expires, the process of reopening will begin in stages, and not happen all at once. Also on April 28, it was reported that at the Soldiers' Home in Holyoke, at least 68 veterans – nearly 30 percent of the home's residents – had died of COVID-19 in what is believed to be the deadliest outbreak at a long-term care facility in the United States during the COVID-19 pandemic. May. May 1–7. On May 1, Governor Baker issued an order, effective May 6, to require people to cover their faces in public when in situations where they are unable to keep six feet away from others. On May 4, a group of several hundred anti-lockdown protesters gathered outside the Massachusetts State House to urge Governor Baker to lift the state's stay-at-home advisory and reopen businesses. Organizers had planned to hold the protest, named the "Liberty Rally", if businesses were not reopened by May 1. The event was promoted by conservative talk radio host Jeffrey Kuhner and Super Happy Fun America, the group responsible for organizing the controversial 2019 Boston Straight Pride Parade. May 8–14. On May 8, Boston Mayor Walsh announced that parades and festivals would not take place in Boston at least until Labor Day (September 7). On May 11, Governor Baker announced a four-phased plan to reopen the state. In phase one, a small number of industries that do not involve much face-to-face interaction will be allowed to return to operating, with strict restrictions in place. In phase two, more industries will be allowed to open, with restrictions including limits on the number of people allowed to gather in one place. In phase three, more industries will open, with guidance on how to operate safely. Phase four is set to occur if a vaccine or therapy is developed allowing restrictions to be loosened. The state also published Mandatory Workplace Safety Standards to be followed by industries that will open as a part of phase one. These standards include requirements for social distancing, hygiene, staffing policies, and cleaning and disinfecting. May 15–21. On May 18, Governor Baker released the details of the plan to reopen businesses in Massachusetts and renamed the stay-at-home advisory to a "safer at home" advisory. The plan allows places of worship, essential businesses, manufacturing businesses, and construction sites to reopen with strict restrictions on May 18. 
Also, as of May 18, hospitals and health centers may begin providing urgent preventative care and treatment services to high-risk patients. Baker also announced that people who choose to ride the MBTA would be required to wear masks. Beginning on May 25, other businesses will be able to open, also with restrictions. Although Baker's plan includes office buildings in the list of businesses allowed to open on May 25, offices within Boston will not be allowed to open until June 1. May 22–31. On May 26, Baker said in a press conference that the surge in COVID-19 cases in Massachusetts is over, as evidenced by declining numbers of people hospitalized by the disease. He announced that the Boston Hope field hospital, located in the Boston Convention and Exhibition Center, would no longer be accepting new patients. The facility has treated more than 700 people with COVID-19, and has also provided shelter to some of Boston's homeless community. Baker also said that other field hospitals around the state would begin to close. Baker also announced $6million in grants to go to small businesses to help them purchase protective equipment and implement the safety precautions indicated in the reopening plan. The Boston Athletic Association announced on May 28 that the 2020 Boston Marathon, which had already been postponed to September, would be canceled. June. On June 1, Massachusetts began reporting probable cases of and deaths due to COVID-19 in their data, when previously they had only been reporting confirmed cases and deaths. This change follows guidance from the U.S. CDC. The Massachusetts Department of Public Health said in a statement that probable cases are recorded for people who "have either 1) had a positive antibody test and either had COVID symptoms or were likely to be exposed to a positive case or 2) did not have an antibody test but had COVID symptoms and were known to be exposed to a positive case." Probable deaths are defined as deaths where COVID-19 was listed on the death certificate as the cause of death, but where no test was administered. Due to these changes in reporting, Massachusetts became the 5th state in the U.S. to report over 100,000 cases of the contagious disease. On June 3, Massachusetts began reporting recoveries in their weekly data report. Previously, they had not been reporting the number of cumulative recoveries in their data. A patient is considered to be recovered if they have either been sick for 21 days or 21 days have passed since they tested positive. Governor Baker announced on June 6 that Massachusetts would begin entering phase two of the reopening plan starting on Monday, June 8, following positive trends in access to testing and decreasing hospitalizations. The first portion of the phase will allow childcare, day camps, lodging retail stores, outdoor seating at restaurants, and children's sports programs to reopen with strict precautions. Additional services, including indoor dining and nail and tanning salons, will be allowed to reopen at an unspecified later date as a part of phase two if the positive trends in COVID-19 cases continue. Boston entered phase two of their reopening plan on June 8. Amid ongoing protests over the May murder of George Floyd, Governor Baker announced pop-up testing sites would open throughout the state on June 17 and 18 to provide free tests to protesters and anyone else who wished to be tested. During the period 11–17 June, Worcester county had the second highest number of deaths in the state with 39. 
On June 22, WBUR reported that Massachusetts had become the state with the lowest COVID-19 transmission rate in the country. The June 22 Rt, a value measuring the average rate of transmission of a virus at a point in time, was 0.67 in Massachusetts. According to rt.live, the website calculating the values, an Rt value of 1.0 or above is considered to signify "rapid spread". The June unemployment rate was later calculated at 17.4%, a record high and the worst of any U.S. state at the time. July. On July 2, Governor Baker announced that Massachusetts would enter the first stage of phase three of its reopening plan starting on Monday, July 6. Phase three allows companies including gyms, casinos, and museums to open with safety precautions. Boston will delay entering phase three until July 13. Governor Baker announced on July 8 that free testing centers would open in eight communities that are seeing high viral spread: Chelsea, Everett, Fall River, Lawrence, Lowell, Lynn, Marlborough, and New Bedford. These areas are experiencing considerably higher positive test rates than the state average, and the testing rate there has been declining. The Massachusetts State Collegiate Athletic Conference announced on July 16 that it was suspending the fall 2020 sports season. Applying to both indoor and outdoor sports, the decision impacts Bridgewater State University, Fitchburg State University, Framingham State University, Massachusetts College of Liberal Arts, Massachusetts Maritime Academy, Salem State University, Westfield State University, and Worcester State University, as well as other universities which are affiliate members of the conference for football or golf. To improve revenues for restaurants with liquor licenses, Governor Baker signed a law on July 21 allowing restaurants to serve cocktails to go in sealed containers until at least February. Also on July 21, Baker extended the moratorium on evictions and foreclosures in the state through October 17, 2020. The MBTA resumed collecting fares and requiring front-door boarding on buses and trolleys on July 20, having installed plexiglass shields for drivers. Towards the end of the month, Massachusetts began to experience a slight reversal in what had previously been positive trends in case data. Governor Baker attributed the uptick in cases to 'disturbing reports of large gatherings', describing the trend as the result of people not following guidelines rather than of moving forward in the state's reopening plan. On July 26, president of the Massachusetts Medical Society Dr. David Rosman tweeted, "Pay attention #Massachusetts — #COVID19 is on the rise. The numbers show it. The anecdotes show it." Rosman is among a group of people who have pushed Baker to pause the reopening plan, or move back from stage three to stage two, if case data continues to show a negative trend. The city of Somerville, which was the only city or town in Massachusetts that had not entered phase three of the reopening plan by the end of July, announced on July 31 that they would be further delaying entering the third phase. Officials said they had based the decision on concerns about case trends, issues with testing and contact tracing, and the possibility of another surge in cases. August. Beginning on August 1, visitors and Massachusetts residents returning from out of state need to fill out a form and quarantine for two weeks, unless they are coming from an exempt state or have tested negative for COVID-19 in the past 72 hours. 
State exemptions are based on a threshold for rolling averages of daily cases and positive test rates; individuals can also be exempted if they are commuting for work, coming to the state for medical treatment, following military orders, or traveling to work in federally identified critical sectors. Governor Baker announced on August 7 that Massachusetts would postpone entering the second portion of phase three of the state's reopening plan, intensify enforcement of COVID-19 regulation violations, and reduce the limit on the number of people allowed at public and private outdoor events from 100 to 50. The changes were announced after several incidents in which large parties were found to be violating the state guidelines on the numbers of people allowed to gather, as well as on masks and physical distancing. Massachusetts school districts were required to submit their final plans for teaching in the fall, along with detailed safety protocols, by August 14. As of August 14, at least 32 of Massachusetts' 289 school districts had announced they would be teaching completely remotely to start the school year. Boston Public Schools announced on August 13 they had requested a waiver to delay the start of the school year from September 10 to September 21, saying they planned to use the time to provide training to staff. On August 21, after several weeks of pressure from the Boston Teachers Union and other Massachusetts unions, BPS announced they would begin the school year remotely, with classes returning to in-person learning based on need beginning in October. Some colleges in Massachusetts began moving students into dorms in mid-August. As of August 16, students were beginning to move into on-campus housing at Boston University in Boston and Clark University in Worcester. On August 19, the Massachusetts Department of Public Health announced that all children aged six months and older would need to receive a flu vaccine by December 31, 2020 in order to attend childcare, K–12 schools, and colleges and universities in the state. There is an allowance for medical or religious exemptions, and homeschooled K–12 students or higher education students who are not going on campus will not be required to receive the vaccination. This spurred some protests from parents and others who believed the decision was governmental overreach, including a rally of several hundred outside the State House on August 30. In the Andover School District, which said it would start the year using a hybrid model of in-person and remote learning, teachers said they would only work remotely due to safety concerns. The Andover School Committee described the teachers' decision as an 'illegal work stoppage'. When 45% of the members of the Andover Education Association refused to enter the building on August 31 for training, the Andover School Committee voted to take legal action against the educators. The teachers said they would "reluctantly" and "under duress" return to work, with the "hope that the School Committee will begin to negotiate reasonable health and safety benchmarks with us in good faith." September. On September 3, Governor Baker announced a community messaging campaign targeted towards Chelsea, Everett, Lawrence, Lynn, and Revere: communities that were still experiencing very high rates of COVID-19. By early September, some Massachusetts universities had received media attention when students disregarded social distancing rules imposed by the schools. 
In mid-August, more than 20 College of the Holy Cross students tested positive for SARS-CoV-2 after partying off-campus in Worcester, Massachusetts. The school said the students who organized the party would be punished for breaking an agreement they had made before returning to school. Northeastern University announced on September 4 that they had dismissed eleven students caught violating social distancing rules within a day or two of many students moving in. Northeastern reported they would not be reimbursing the students' tuition or housing payments. Later in September, Middlebury College barred 22 students from campus for not following the campus COVID guidelines. Following seventeen cases of COVID-19 in one dorm, Merrimack College quarantined all 266 residents of the dorm. On September 29, Governor Baker announced that communities classified by the state as "lower risk" would be allowed to move into step two of the third phase of the state's reopening plan beginning on October 5. This step includes allowing both indoor and outdoor performance venues to open at 50% capacity (up to 250 people); fitting rooms to open in retail stores; and gyms, museums, libraries, and driving and flight schools to increase capacity to 50%. The Massachusetts Coalition for Health Equity criticized Baker's choice to continue reopening the state, citing concerns over increasing cases and positive test rates. October. The Massachusetts Department of Elementary and Secondary Education announced on October 2 that they would begin providing weekly reports on the number of COVID-19 cases detected in schools. In the first report, covering the period from September 24 to September 30, 63 cases were found among students and 34 among staff. Student cases were spread across 41 districts, with Plymouth having the highest number with four cases reported. The Massachusetts Department of Health reported in early October that there had been outbreaks of COVID-19 at a substance abuse treatment center in Plymouth, as well as an outbreak at a correctional center in Middleton. Almost a third of the patient population of the Plymouth treatment facility tested positive, as did a dozen staff. On October 7, Boston Mayor Walsh announced that plans to allow additional students in the Boston Public School system to return to fully in-person or hybrid learning models would be further delayed after Boston's coronavirus positivity rate exceeded 4%. Also on October 7, Governor Baker announced his administration would be forming an advisory group to consult on plans to distribute a vaccine in Massachusetts when one is developed. On October 13, Health and Human Services Secretary Marylou Sudders announced at a press conference that they would be extending through December the "Stop the Spread" initiative, which provides free testing to high-risk communities. On October 22, the Baker administration announced a $774million plan to bolster economic recovery among businesses in the state. The same day, the Department of Public Health announced that for two weeks they would no longer be allowing indoor ice rinks to operate, following clusters of COVID-19 at various rinks throughout the state connected to indoor ice hockey. "The Boston Globe" reported on October 26 that coronavirus cases in the state had risen sharply on October 22 and "have been maintaining levels we haven't seen in months". 
The previous day, the "Globe" reported that the state had acknowledged not knowing the source of infection in approximately half of the known cases in the state, raising concerns with the state's ability to identify and quickly reduce the impact of pockets of infection. Thirteen communities in Massachusetts returned to step 1 of phase 3 of the state's reopening plan after spending three weeks in the "high risk" designation, including Brockton, Malden, Waltham, and Woburn. November. On November 2, Governor Baker announced a statewide curfew for businesses, a tighter limit on the number of people allowed to gather indoors, and stricter face mask requirements. The curfew requires some businesses such as theaters and casinos to close at 9:30 p.m., and requires restaurants to stop providing table service at that same time. Baker also implemented a stay-at-home advisory, to begin on November 6, to encourage people to stay home between the hours of 10 p.m. and 5 a.m. In a press conference on November 12, Boston Mayor Walsh warned that if Boston saw surges in cases similar to those occurring in Tennessee and elsewhere in the country, 'we're going to have to shut everything down again. The first one was bad on business. I think the second one will be far worse.' December. On December 3, Massachusetts' average positive COVID-19 test rate exceeded 4.9% for the first time since June. Total daily case numbers in the first few days of December began to surpass those seen at the April peak of the first wave of COVID-19 in the state. On December 8, Governor Baker announced that all cities and towns in Massachusetts would be required to roll back to Phase 3, Step 1 of the state's reopening plan. On December 9, Baker announced an estimated timeline for distribution of a COVID-19 vaccine. The first doses of the vaccine arrived in Massachusetts on December 14. 2021. January. Governor Baker announced on January 4, 2021 that first responders would begin to receive doses of the COVID-19 vaccine on January 11. The following day, Baker warned that it was likely that the highly contagious variant of COVID-19 first discovered in the United Kingdom had made its way to Massachusetts, and urged state residents to 'be very vigilant and careful and cautious about [their] physical engagement with other people'. On January 17, the first case of the variant in Massachusetts was confirmed. February. On February 1, Massachusetts entered phase two of its vaccine program, making residents 75 years of age and older eligible for the vaccine. A mass vaccination site opened at Boston's Fenway Park on the same day; it is one of two such sites operating in the state, along with one at Gillette Stadium in Foxborough. March. Baker announced on March 3 that teachers, tutors and day-care providers would be eligible to begin signing up for appointments to receive the vaccine beginning March 11. One year earlier, on March 10, 2020, Governor Charlie Baker had declared the state of emergency that gave the administration more flexibility to respond to the coronavirus outbreak. April. MassHealth issued a document titled 'Authorized Providers for Coronavirus Disease 2019,' which added "dental providers, including but not limited to dentists, public health dental hygienists, and dental clinics," as well as "Home Health and Hospice Providers," to the providers previously authorized to serve residents 75 years of age or older. The state also adopted a policy making everyone over the age of 12 eligible to be vaccinated. May. 
The Department of Public Health released a series of public information graphics intended to help prevent the spread of COVID-19 and to reduce transmission across the municipalities of Massachusetts. As part of the vaccine program, the Baker administration also released a poster advising unvaccinated members of the population to continue practicing social distancing. The Massachusetts Department of Elementary and Secondary Education stated that in-person lessons would be required starting in fall 2021: 'All Massachusetts schools and districts will be required to hold classes in-person next fall and health and safety requirements imposed by the department of Elementary and Secondary Education will be lifted for the new school year, the department said in new guidance sent to superintendents Thursday evening.' June. On June 15, 2021, the COVID-19 State of Emergency was terminated, ending the emergency orders and guidance previously issued under it by Governor Baker, the DPH, and other associated state agencies such as MassHealth. Further guidance on how to keep safe during the COVID-19 pandemic was also published during the same month. July. The spread of the SARS-CoV-2 virus increased sharply in Massachusetts during July 2021, driven most notably by the B.1.617.2 (Delta) variant. 'During July 2021, 469 cases of COVID-19 associated with multiple summer events and large public gatherings in a town in Barnstable County, Massachusetts, were identified among Massachusetts residents; vaccination coverage among eligible Massachusetts residents was 69%.' August. The 'Coverage and Payment Policy for Services Related to COVID-19, Vaccine Counseling and 3rd Dose of Pfizer-BioNTech and Moderna, Vaccines for Immunocompromised Individuals' document was published, making children under 12 with immunodeficiency conditions eligible to be vaccinated; amid unprecedented increases in infections across the United States, vaccine use also came under further federal regulation affecting both the country as a whole and Massachusetts. The Delta and C.1.2 variants emerged as being among the most contagious variants circulating in Massachusetts. September. On September 1, Marc Lipsitch addressed whether the new Delta variant might be more contagious and severe in infants and minors than previous variants: 'There's every reason to believe that (the delta variant) is more contagious to children and from children than the older variants, and that means that at a societal level, we're seeing higher numbers of cases in all age groups, including in children.' In response to the new infections, Mass.gov published new documents on how to quarantine, and debate continued over whether public officials and public servants should be required to receive the vaccines and the booster in order to help curb the spread of the new variants in Massachusetts. On September 28, debate continued over who should receive booster shots, prompted by the recent spread of the new variants of COVID-19. 
COVID-19 affected not only the general global population but also the scientific researchers studying the disease and healthcare providers, underscoring how severe its consequences were over the course of the pandemic. October. In an October 1 New York Times opinion piece highlighted by the Harvard T.H. Chan School of Public Health, Michael Mina, assistant professor of epidemiology, and Stephen Phillips of the COVID Collaborative argued that President Biden should take executive action to change the U.S. regulatory structure to help bring more rapid COVID-19 tests into the U.S. market. They wrote that "the White House should also treat rapid testing with the same urgency and private sector partnership approach that Operation Warp Speed pioneered for vaccines." They noted that, "for public health purposes, we need fast, accessible tests that answer the question, 'Am I infectious now?' Rapid tests can help prevent spread to your children, spouse, friend, colleague, classmate or the stranger sitting next to you at dinner." Rapid tests were widely discussed during October 2021 as a means of limiting spread and of guiding quarantine for people who were infected or potentially infected. The 'Update to Caring for Long-Term Care Residents during the COVID-19 Response, including Visitation Conditions, Communal Dining, and Congregate Activities' document was published on Mass.gov by the DPH and the BHCSQ, in accordance with new regulatory guidelines from the federal Centers for Medicare and Medicaid Services (CMS). On October 4, William Hanage, associate professor of epidemiology, noted that the cloth masks used by the average American are not sufficient against COVID-19. The same day, Hanage said that "[experts] don't expect another coronavirus surge in the U.S. as big as previous ones during the pandemic". November. On November 19, 2021, the Providence Journal published an article on an interview with Shekhar Saxena, professor of the practice of global mental health, who appeared as a guest on "Story in the Public Square" and spoke about COVID-19 affecting mental as well as physical health, saying that no one is immune to the detrimental nature of the disease. On the same day, The Boston Globe published a transcript of an interview with Howard Koh, Harvey V. Fineberg Professor of the Practice of Public Health and former assistant U.S. secretary of health and Massachusetts public health commissioner, in which he said, "We are still in a purgatory, unfortunately, and no one wants to hear it, but we have to double down on our public health commitment." On November 21, Hanage called the B.1.617.2 Delta variant a "super variant"; Newsweek likewise described it as a "COVID Variant That Spreads Easily, Evades Vaccines". On November 24, the newly discovered Omicron variant, B.1.1.529, first reported from South Africa, was described in scientific research articles. December. News of the Omicron variant spread worldwide in mid-December 2021, shortly after its discovery, as the variant itself spread internationally at an unprecedented rate. 
On December 4, 2021, Sikhulile Moyo, director of the Botswana Harvard HIV Reference Laboratory and a researcher in immunology and infectious diseases, discussed the newly discovered B.1.1.529 variant of COVID-19, saying that its rapid mutation was worrying for the public. The B.1.1.529 (Omicron), B.1.617.2 (Delta), and C.1.2 variants were the most prominent COVID-19 variants at the time, and were of concern to both the public and healthcare providers for their contagiousness and severity. 2022. Compared with the preceding years of the pandemic, 2022 began on a more hopeful note. January. Through late 2021 the total number of new cases had gradually begun to decrease from week to week, but as of January 19, 2022 there was still a cumulative total of 1,389,830 confirmed COVID-19 cases out of 38,031,854 tests, a positivity rate of approximately 3.65% (formula_0), along with 14,647 newly reported cases and 199 newly reported deaths related to COVID-19 and its variants. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(1,389,830\\div38,031,854)\\times100=3.654384032 \\therefore " } ]
https://en.wikipedia.org/wiki?curid=64966180
649721
Cubic plane curve
Type of a mathematical curve In mathematics, a cubic plane curve is a plane algebraic curve C defined by a cubic equation "F"("x", "y", "z") = 0 applied to homogeneous coordinates ("x" : "y" : "z") for the projective plane; or the inhomogeneous version for the affine space determined by setting "z" = 1 in such an equation. Here F is a non-zero linear combination of the third-degree monomials in "x", "y", "z". These are ten in number; therefore the cubic curves form a projective space of dimension 9, over any given field K. Each point P imposes a single linear condition on F, if we ask that C pass through P. Therefore, we can find some cubic curve through any nine given points, which may be degenerate, and may not be unique, but will be unique and non-degenerate if the points are in general position; compare how two points determine a line and five points determine a conic. If two cubics pass through a given set of nine points, then in fact a pencil of cubics does, and the points satisfy additional properties; see Cayley–Bacharach theorem. A cubic curve may have a singular point, in which case it has a parametrization in terms of a projective line. Otherwise a "non-singular" cubic curve is known to have nine points of inflection, over an algebraically closed field such as the complex numbers. This can be shown by taking the homogeneous version of the Hessian matrix, which again defines a cubic, and intersecting it with C; the intersections are then counted by Bézout's theorem. However, only three of these points may be real, so that the others cannot be seen in the real projective plane by drawing the curve. The nine inflection points of a non-singular cubic have the property that every line passing through two of them contains exactly three inflection points. The real points of cubic curves were studied by Isaac Newton. The real points of a non-singular projective cubic fall into one or two 'ovals'. One of these ovals crosses every real projective line, and thus is never bounded when the cubic is drawn in the Euclidean plane; it appears as one or three infinite branches, containing the three real inflection points. The other oval, if it exists, does not contain any real inflection point and appears either as an oval or as two infinite branches. As with conic sections, a line cuts this oval at, at most, two points. A non-singular plane cubic defines an elliptic curve, over any field K for which it has a point defined. Elliptic curves are now normally studied in some variant of Weierstrass's elliptic functions, defining a quadratic extension of the field of rational functions made by extracting the square root of a cubic. This does depend on having a K-rational point, which serves as the point at infinity in Weierstrass form. There are many cubic curves that have no such point, for example when K is the rational number field. The singular points of an irreducible plane cubic curve are quite limited: one double point, or one cusp. A reducible plane cubic curve is either a conic and a line or three lines, and accordingly has two double points or a tacnode (if a conic and a line), or up to three double points or a single triple point (concurrent lines) if three lines. Cubic curves in the plane of a triangle. Suppose that △"ABC" is a triangle with sidelengths formula_0 formula_1 formula_2 Relative to △"ABC", many named cubics pass through well-known points. Examples shown below use two kinds of homogeneous coordinates: trilinear and barycentric. 
To convert from trilinear to barycentric in a cubic equation, substitute as follows: formula_3 to convert from barycentric to trilinear, use formula_4 Many equations for cubics have the form formula_5 In the examples below, such equations are written more succinctly in "cyclic sum notation", like this: formula_6. The cubics listed below can be defined in terms of the isogonal conjugate, denoted by X*, of a point X not on a sideline of △"ABC". A construction of X* follows. Let LA be the reflection of line XA about the internal angle bisector of angle A, and define LB and LC analogously. Then the three reflected lines concur in X*. In trilinear coordinates, if formula_7 then formula_8 Neuberg cubic. Trilinear equation: formula_9 Barycentric equation: formula_10 The Neuberg cubic (named after Joseph Jean Baptiste Neuberg) is the locus of a point X such that X* is on the line EX, where E is the Euler infinity point ("X"(30) in the Encyclopedia of Triangle Centers). Also, this cubic is the locus of X such that the triangle △"XAXBXC" is perspective to △"ABC", where △"XAXBXC" is formed by the reflections of X in the lines BC, CA, AB, respectively. The Neuberg cubic passes through the following points: incenter, circumcenter, orthocenter, both Fermat points, both isodynamic points, the Euler infinity point, other triangle centers, the excenters, the reflections of A, B, C in the sidelines of △"ABC", and the vertices of the six equilateral triangles erected on the sides of △"ABC". For a graphical representation and extensive list of properties of the Neuberg cubic, see K001 at Bernhard Gibert's Cubics in the Triangle Plane. Thomson cubic. Trilinear equation: formula_11 Barycentric equation: formula_12 The Thomson cubic is the locus of a point X such that X* is on the line GX, where G is the centroid. The Thomson cubic passes through the following points: incenter, centroid, circumcenter, orthocenter, symmedian point, other triangle centers, the vertices A, B, C, the excenters, the midpoints of sides BC, CA, AB, and the midpoints of the altitudes of △"ABC". For each point P on the cubic but not on a sideline of the cubic, the isogonal conjugate of P is also on the cubic. For graphs and properties, see K002 at Cubics in the Triangle Plane. Darboux cubic. Trilinear equation: formula_13 Barycentric equation: formula_14 The Darboux cubic is the locus of a point X such that X* is on the line LX, where L is the de Longchamps point. Also, this cubic is the locus of X such that the pedal triangle of X is the cevian triangle of some point (which lies on the Lucas cubic). Also, this cubic is the locus of a point X such that the pedal triangle of X and the anticevian triangle of X are perspective; the perspector lies on the Thomson cubic. The Darboux cubic passes through the incenter, circumcenter, orthocenter, de Longchamps point, other triangle centers, the vertices A, B, C, the excenters, and the antipodes of A, B, C on the circumcircle. For each point P on the cubic but not on a sideline of the cubic, the isogonal conjugate of P is also on the cubic. For graphics and properties, see K004 at Cubics in the Triangle Plane. Napoleon–Feuerbach cubic. Trilinear equation: formula_15 Barycentric equation: formula_16 The Napoleon–Feuerbach cubic is the locus of a point X such that X* is on the line NX, where N is the nine-point center ("N" = "X"(5) in the Encyclopedia of Triangle Centers). 
The Napoleon–Feuerbach cubic passes through the incenter, circumcenter, orthocenter, 1st and 2nd Napoleon points, other triangle centers, the vertices A, B, C, the excenters, the projections of the centroid on the altitudes, and the centers of the 6 equilateral triangles erected on the sides of △"ABC". For graphics and properties, see K005 at Cubics in the Triangle Plane. Lucas cubic. Trilinear equation: formula_17 Barycentric equation: formula_18 The Lucas cubic is the locus of a point X such that the cevian triangle of X is the pedal triangle of some point; the point lies on the Darboux cubic. The Lucas cubic passes through the centroid, orthocenter, Gergonne point, Nagel point, de Longchamps point, other triangle centers, the vertices of the anticomplementary triangle, and the foci of the Steiner circumellipse. For graphics and properties, see K007 at Cubics in the Triangle Plane. 1st Brocard cubic. Trilinear equation: formula_19 Barycentric equation: formula_20 Let △"A'B'C"' be the 1st Brocard triangle. For arbitrary point X, let XA, XB, XC be the intersections of the lines XA′, XB′, XC′ with the sidelines BC, CA, AB, respectively. The 1st Brocard cubic is the locus of X for which the points XA, XB, XC are collinear. The 1st Brocard cubic passes through the centroid, symmedian point, Steiner point, other triangle centers, and the vertices of the 1st and 3rd Brocard triangles. For graphics and properties, see K017 at Cubics in the Triangle Plane. 2nd Brocard cubic. Trilinear equation: formula_21 Barycentric equation: formula_22 The 2nd Brocard cubic is the locus of a point X for which the pole of the line XX* in the circumconic through X and X* lies on the line of the circumcenter and the symmedian point (i.e., the Brocard axis). The cubic passes through the centroid, symmedian point, both Fermat points, both isodynamic points, the Parry point, other triangle centers, and the vertices of the 2nd and 4th Brocard triangles. For graphics and properties, see K018 at Cubics in the Triangle Plane. 1st equal areas cubic. Trilinear equation: formula_23 Barycentric equation: formula_24 The 1st equal areas cubic is the locus of a point X such that the area of the cevian triangle of X equals the area of the cevian triangle of X*. Also, this cubic is the locus of X for which X* is on the line S*X, where S is the Steiner point. ("S" = "X"(99) in the Encyclopedia of Triangle Centers). The 1st equal areas cubic passes through the incenter, Steiner point, other triangle centers, the 1st and 2nd Brocard points, and the excenters. For graphics and properties, see K021 at Cubics in the Triangle Plane. 2nd equal areas cubic. Trilinear equation: formula_25 Barycentric equation: formula_26 For any point formula_27 (trilinears), let formula_28 and formula_29 The 2nd equal areas cubic is the locus of X such that the area of the cevian triangle of XY equals the area of the cevian triangle of XZ. The 2nd equal areas cubic passes through the incenter, centroid, symmedian point, and points in Encyclopedia of Triangle Centers indexed as "X"(31), "X"(105), "X"(238), "X"(292), "X"(365), "X"(672), "X"(1453), "X"(1931), "X"(2053), and others. For graphics and properties, see K155 at Cubics in the Triangle Plane.
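The cyclic-sum equations above lend themselves to direct numerical checking. The short Python sketch below is illustrative only and is not part of the article: it evaluates the trilinear equation of the Thomson cubic for an assumed 3-4-5 triangle and confirms that the incenter, centroid, circumcenter, and symmedian point all satisfy it, while a generic point does not.
```python
from fractions import Fraction

def thomson_cubic(x, y, z, a, b, c):
    """Cyclic sum b*c*x*(y^2 - z^2) + c*a*y*(z^2 - x^2) + a*b*z*(x^2 - y^2),
    the trilinear equation of the Thomson cubic; zero means the point lies on it."""
    return b*c*x*(y**2 - z**2) + c*a*y*(z**2 - x**2) + a*b*z*(x**2 - y**2)

# Example triangle with side lengths a = |BC|, b = |CA|, c = |AB| (a 3-4-5 right triangle).
a, b, c = Fraction(3), Fraction(4), Fraction(5)

# Some triangle centers in trilinear coordinates x : y : z
incenter = (Fraction(1), Fraction(1), Fraction(1))
symmedian = (a, b, c)
centroid = (b*c, c*a, a*b)                      # 1/a : 1/b : 1/c, scaled by abc
circumcenter = ((b*b + c*c - a*a) / (2*b*c),     # cos A : cos B : cos C
                (c*c + a*a - b*b) / (2*c*a),
                (a*a + b*b - c*c) / (2*a*b))

for name, point in [("incenter", incenter), ("symmedian point", symmedian),
                    ("centroid", centroid), ("circumcenter", circumcenter)]:
    print(name, thomson_cubic(*point, a, b, c))   # all print 0

# A generic point is (almost always) not on the cubic:
print("generic point", thomson_cubic(Fraction(1), Fraction(2), Fraction(3), a, b, c))  # nonzero
```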
[ { "math_id": 0, "text": "a = |BC|," }, { "math_id": 1, "text": "b = |CA|," }, { "math_id": 2, "text": "c = |AB|." }, { "math_id": 3, "text": "x \\to bcx, \\quad y \\to cay, \\quad z \\to abz;" }, { "math_id": 4, "text": "x \\to ax, \\quad y \\to by, \\quad z \\to cz." }, { "math_id": 5, "text": "f(a,b,c,x,y,z) + f(b,c,a,y,z,x) + f(c,a,b,z,x,y) = 0." }, { "math_id": 6, "text": "\\sum_{\\text{cyclic}} f(x,y,z,a,b,c) = 0 " }, { "math_id": 7, "text": "X = x:y:z," }, { "math_id": 8, "text": "X^* = \\tfrac{1}{x}:\\tfrac{1}{y}:\\tfrac{1}{z}." }, { "math_id": 9, "text": "\\sum_{\\text{cyclic}} (\\cos{A} - 2\\cos{B}\\cos{C})x(y^2-z^2)= 0 " }, { "math_id": 10, "text": "\\sum_{\\text{cyclic}} (a^2(b^2 + c^2) + (b^2 - c^2)^2 - 2a^4)x(c^2y^2-b^2z^2) = 0 " }, { "math_id": 11, "text": "\\sum_{\\text{cyclic}} bcx(y^2-z^2)= 0 " }, { "math_id": 12, "text": "\\sum_{\\text{cyclic}} x(c^2y^2-b^2z^2)= 0 " }, { "math_id": 13, "text": "\\sum_{\\text{cyclic}} (\\cos{A} - \\cos{B}\\cos{C})x(y^2-z^2)= 0 " }, { "math_id": 14, "text": "\\sum_{\\text{cyclic}} (2a^2(b^2 + c^2) + (b^2 - c^2)^2 - 3a^4)x(c^2y^2-b^2z^2) = 0 " }, { "math_id": 15, "text": "\\sum_{\\text{cyclic}} \\cos(B-C)x(y^2-z^2)= 0 " }, { "math_id": 16, "text": "\\sum_{\\text{cyclic}} (a^2(b^2 + c^2) + (b^2 - c^2)^2)x(c^2y^2-b^2z^2) = 0 " }, { "math_id": 17, "text": "\\sum_{\\text{cyclic}} \\cos(A)x(b^2y^2- c^2z^2)= 0 " }, { "math_id": 18, "text": "\\sum_{\\text{cyclic}} (b^2+c^2-a^2)x(y^2-z^2)= 0 " }, { "math_id": 19, "text": "\\sum_{\\text{cyclic}} bc(a^4-b^2c^2)x(y^2+z^2)= 0 " }, { "math_id": 20, "text": "\\sum_{\\text{cyclic}} (a^4-b^2c^2)x(c^2y^2+b^2z^2)= 0 " }, { "math_id": 21, "text": "\\sum_{\\text{cyclic}} bc(b^2-c^2)x(y^2+z^2)= 0 " }, { "math_id": 22, "text": "\\sum_{\\text{cyclic}} (b^2-c^2)x(c^2y^2+b^2z^2)= 0 " }, { "math_id": 23, "text": "\\sum_{\\text{cyclic}} a(b^2-c^2)x(y^2-z^2)= 0 " }, { "math_id": 24, "text": "\\sum_{\\text{cyclic}} a^2(b^2-c^2)x(c^2y^2-b^2z^2)= 0 " }, { "math_id": 25, "text": "(bz+cx)(cx+ay)(ay+bz) = (bx+cy)(cy +az)(az+bx) " }, { "math_id": 26, "text": "\\sum_{\\text{cyclic}} a(a^2-bc)x(c^3y^2 - b^3z^2) = 0 " }, { "math_id": 27, "text": "X = x:y:z" }, { "math_id": 28, "text": "X_Y = y:z:x" }, { "math_id": 29, "text": "X_Z = z:x:y." } ]
https://en.wikipedia.org/wiki?curid=649721
64972148
Albert L. Allred
American chemist (born 1931) Albert Louis Allred (born September 19, 1931) is an American chemist accomplished in the field of inorganic chemistry and known for his work on electronegativity. He was born in Mount Airy, North Carolina, United States. Education and career. Allred studied chemistry at the University of North Carolina and earned a bachelor's degree in 1953. He studied at Harvard University and earned a master's degree in 1955, followed by a doctorate in 1957. He became an instructor in 1956, an assistant professor in 1958, and a professor in 1969 at the College of Arts and Sciences of Northwestern University. From 1980 to 1986, he was chairman of the chemistry department. In 1992, he became acting vice president for research as well as dean of the graduate school. He was a visiting scholar at Cambridge University in 1987, and an Honorary Research Associate at University College London in 1965 and at the University of Rome in 1967. From 1963 to 1965, he was a Sloan Research Fellow. He has been a Fellow of the American Association for the Advancement of Science since 1981. Allred introduced the Allred-Rochow scale of electronegativity with Eugene G. Rochow in 1958. They predicted that electronegativity should be related to the charge experienced by an electron on the "surface" of an atom. They calculated this formula for the electronegativity, "χ", where "formula_0" is equal to the effective nuclear charge and "formula_1" is the covalent radius. When the covalent radius is expressed in picometers: formula_2. When it is expressed in angstroms, however, the constant 3590 becomes 0.359: formula_3 Allred has also since worked in synthetic inorganic and organometallic chemistry and in electrochemistry. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
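As a rough illustration of the scale (not taken from the article), the following Python sketch evaluates the Allred–Rochow formula in both unit conventions. The chlorine values used, an effective nuclear charge of about 5.75 from Slater screening and a covalent radius of about 0.99 Å, are assumptions chosen only for the example.
```python
def allred_rochow(z_eff, r_cov_angstrom):
    """Allred-Rochow electronegativity with the covalent radius in angstroms."""
    return 0.359 * z_eff / r_cov_angstrom**2 + 0.744

def allred_rochow_pm(z_eff, r_cov_pm):
    """Same formula with the covalent radius expressed in picometers."""
    return 3590.0 * z_eff / r_cov_pm**2 + 0.744

# Illustrative (assumed) values for chlorine: Z_eff ~ 5.75 and r_cov ~ 0.99 angstrom (99 pm).
print(round(allred_rochow(5.75, 0.99), 2))      # ~2.85, in the neighborhood of tabulated values
print(round(allred_rochow_pm(5.75, 99.0), 2))   # same result using picometers
```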
[ { "math_id": 0, "text": " Z_{\\rm eff}" }, { "math_id": 1, "text": "r_{\\rm cov}" }, { "math_id": 2, "text": "\\chi = 3590{{Z_{\\rm eff}}\\over{r^2_{\\rm cov}}} + 0.744" }, { "math_id": 3, "text": "\\chi = .359{{Z_{\\rm eff}}\\over{r^2_{\\rm cov}}} + 0.744" } ]
https://en.wikipedia.org/wiki?curid=64972148
6497220
Computational complexity of mathematical operations
Algorithmic runtime requirements for common math procedures The following tables list the computational complexity of various algorithms for common mathematical operations. Here, complexity refers to the time complexity of performing computations on a multitape Turing machine. See big O notation for an explanation of the notation used. Note: Due to the variety of multiplication algorithms, formula_1 below stands in for the complexity of the chosen multiplication algorithm. Arithmetic functions. This table lists the complexity of mathematical operations on integers. On stronger computational models, specifically a pointer machine and consequently also a unit-cost random-access machine it is possible to multiply two n-bit numbers in time "O"("n"). Algebraic functions. Here we consider operations over polynomials and n denotes their degree; for the coefficients we use a unit-cost model, ignoring the number of bits in a number. In practice this means that we assume them to be machine integers. Special functions. Many of the methods in this section are given in Borwein &amp; Borwein. Elementary functions. The elementary functions are constructed by composing arithmetic operations, the exponential function (formula_3), the natural logarithm (formula_4), trigonometric functions (formula_5), and their inverses. The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible by means of Newton's method. In particular, if either formula_3 or formula_4 in the complex domain can be computed with some complexity, then that complexity is attainable for all other elementary functions. Below, the size formula_0 refers to the number of digits of precision at which the function is to be evaluated. It is not known whether formula_2 is the optimal complexity for elementary functions. The best known lower bound is the trivial bound formula_6formula_7. Mathematical constants. This table gives the complexity of computing approximations to the given constants to formula_0 correct digits. Number theory. Algorithms for number theoretical calculations are studied in computational number theory. Matrix algebra. The following complexity figures assume that arithmetic with individual elements has complexity "O"(1), as is the case with fixed-precision floating-point arithmetic or operations on a finite field. In 2005, Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Chris Umans showed that either of two different conjectures would imply that the exponent of matrix multiplication is 2. Transforms. Algorithms for computing transforms of functions (particularly integral transforms) are widely used in all areas of mathematics, particularly analysis and signal processing. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
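As an illustration of one possible choice of M(n), the following Python sketch (not part of the article) implements Karatsuba multiplication, whose running time grows roughly as O(n^1.585) in the number of digits, and checks it against Python's built-in integer multiplication.
```python
import random

def karatsuba(x, y):
    """Multiply two non-negative integers with the Karatsuba algorithm,
    one concrete choice of multiplication whose cost M(n) is O(n^log2(3))."""
    if x < 10 or y < 10:                 # base case: a single-digit operand
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    base = 10 ** m
    x_hi, x_lo = divmod(x, base)
    y_hi, y_lo = divmod(y, base)
    z2 = karatsuba(x_hi, y_hi)                               # product of high parts
    z0 = karatsuba(x_lo, y_lo)                               # product of low parts
    z1 = karatsuba(x_hi + x_lo, y_hi + y_lo) - z2 - z0       # cross terms via one extra product
    return z2 * base**2 + z1 * base + z0

# Quick self-check against the built-in big-integer multiplication.
for _ in range(100):
    a, b = random.randrange(10**40), random.randrange(10**40)
    assert karatsuba(a, b) == a * b
```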
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "M(n)" }, { "math_id": 2, "text": "O(M(n) \\log n)" }, { "math_id": 3, "text": "\\exp" }, { "math_id": 4, "text": "\\log" }, { "math_id": 5, "text": "\\sin, \\cos" }, { "math_id": 6, "text": "\\Omega" }, { "math_id": 7, "text": "(M(n))" } ]
https://en.wikipedia.org/wiki?curid=6497220
649724
Affine plane (incidence geometry)
Axiomatically defined geometrical space In geometry, an affine plane is a system of points and lines that satisfy the following axioms: any two distinct points lie on a unique line; given any line and any point not on that line, there is a unique line containing the point and disjoint from the given line (Playfair's axiom); and there exist four points, no three of which are collinear. In an affine plane, two lines are called "parallel" if they are equal or disjoint. Using this definition, Playfair's axiom above can be replaced by: given a point and a line, there is a unique line which contains the point and is parallel to the line. Parallelism is an equivalence relation on the lines of an affine plane. Since no concepts other than those involving the relationship between points and lines are involved in the axioms, an affine plane is an object of study belonging to incidence geometry. They are non-degenerate linear spaces satisfying Playfair's axiom. The familiar Euclidean plane is an affine plane. There are many finite and infinite affine planes. As well as affine planes over fields (and division rings), there are also many non-Desarguesian planes, not derived from coordinates in a division ring, satisfying these axioms. The Moulton plane is an example of one of these. Finite affine planes. If the number of points in an affine plane is finite and one line of the plane contains "n" points, then: all lines contain "n" points; every point is contained in "n" + 1 lines; there are "n"2 points in all; and there is a total of "n"2 + "n" lines. The number "n" is called the "order" of the affine plane. All known finite affine planes have orders that are prime or prime power integers. The smallest affine plane (of order 2) is obtained by removing a line and the three points on that line from the Fano plane. A similar construction, starting from the projective plane of order 3, produces the affine plane of order 3 sometimes called the Hesse configuration. An affine plane of order "n" exists if and only if a projective plane of order "n" exists (however, the definition of order in these two cases is not the same). Thus, there is no affine plane of order 6 or order 10 since there are no projective planes of those orders. The Bruck–Ryser–Chowla theorem provides further limitations on the order of a projective plane, and thus, the order of an affine plane. The "n"2 + "n" lines of an affine plane of order "n" fall into "n" + 1 equivalence classes of "n" lines apiece under the equivalence relation of parallelism. These classes are called "parallel classes" of lines. The lines in any parallel class form a partition of the points of the affine plane. Each of the "n" + 1 lines that pass through a single point lies in a different parallel class. The parallel class structure of an affine plane of order "n" may be used to construct a set of "n" − 1 mutually orthogonal Latin squares. Only the incidence relations are needed for this construction. Relation with projective planes. An affine plane can be obtained from any projective plane by removing a line and all the points on it, and conversely any affine plane can be used to construct a projective plane by adding a line at infinity, each of whose points is that point at infinity where an equivalence class of parallel lines meets. If the projective plane is non-Desarguesian, the removal of different lines could result in non-isomorphic affine planes. For instance, there are exactly four projective planes of order nine, and seven affine planes of order nine. There is only one affine plane corresponding to the Desarguesian plane of order nine since the collineation group of that projective plane acts transitively on the lines of the plane. Each of the three non-Desarguesian planes of order nine has a collineation group with two orbits on the lines, producing two non-isomorphic affine planes of order nine, depending on which orbit the line to be removed is selected from. Affine translation planes. 
A line "l" in a projective plane Π is a translation line if the group of elations with axis "l" acts transitively on the points of the affine plane obtained by removing "l" from the plane Π. A projective plane with a translation line is called a translation plane and the affine plane obtained by removing the translation line is called an affine translation plane. While in general it is often easier to work with projective planes, in this context the affine planes are preferred and several authors simply use the term translation plane to mean affine translation plane. An alternate view of affine translation planes can be obtained as follows: Let "V" be a 2"n"-dimensional vector space over a field "F". A spread of "V" is a set "S" of "n"-dimensional subspaces of "V" that partition the non-zero vectors of "V". The members of "S" are called the components of the spread and if "V""i" and "V""j" are distinct components then "V""i" ⊕ "V""j" = "V". Let "A" be the incidence structure whose points are the vectors of "V" and whose lines are the cosets of components, that is, sets of the form "v" + "U" where "v" is a vector of "V" and "U" is a component of the spread "S". Then: "A" is an affine plane and the group of translations "x" → "x" + "w" for a vector "w" is an automorphism group acting regularly on the points of this plane. Generalization: "k"-nets. An incidence structure more general than a finite affine plane is a "k"-"net of order" "n". This consists of "n"2 points and "nk" lines such that: the lines fall into "k" parallel classes of "n" pairwise disjoint lines each, and two lines from different parallel classes meet in exactly one point. An ("n" + 1)-net of order "n" is precisely an affine plane of order "n". A "k"-"net of order" "n" is equivalent to a set of "k" − 2 mutually orthogonal Latin squares of order "n". Example: translation nets. For an arbitrary field "F", let Σ be a set of "n"-dimensional subspaces of the vector space "F"2"n", any two of which intersect only in {0} (called a partial spread). The members of Σ, and their cosets in "F"2"n", form the lines of a translation net on the points of "F"2"n". If |Σ| = "k" this is a "k"-net of order . Starting with an affine translation plane, any subset of the parallel classes will form a translation net. Given a translation net, it is not always possible to add parallel classes to the net to form an affine plane. However, if "F" is an infinite field, any partial spread Σ with fewer than  members can be extended and the translation net can be completed to an affine translation plane. Geometric codes. Given the "line/point" incidence matrix of any finite incidence structure, "M", and any field, "F", the row space of "M" over "F" is a linear code that we can denote by "C" = "C""F"("M"). Another related code that contains information about the incidence structure is the Hull of "C" which is defined as: formula_0 where "C"⊥ is the orthogonal code to "C". Not much can be said about these codes at this level of generality, but if the incidence structure has some "regularity" the codes produced this way can be analyzed and information about the codes and the incidence structures can be gleaned from each other. When the incidence structure is a finite affine plane, the codes belong to a class of codes known as "geometric codes". How much information the code carries about the affine plane depends in part on the choice of field. If the characteristic of the field does not divide the order of the plane, the code generated is the full space and does not carry any information. When "π" = AG(2, "q") the geometric code generated is the "q"-ary Reed-Muller code. 
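The counting facts for finite affine planes stated earlier are easy to verify computationally for planes of prime order. The following Python sketch (illustrative, not from the article) builds AG(2, p) over Z/pZ and checks that it has p^2 points, p^2 + p lines, p points per line, p + 1 lines through each point, and a unique line through any two distinct points.
```python
from itertools import product

def affine_plane(p):
    """Affine plane AG(2, p) over Z/pZ (p prime): points are pairs (x, y); lines are
    y = m*x + b for each slope m and intercept b, plus the vertical lines x = c."""
    points = list(product(range(p), repeat=2))
    lines = []
    for m in range(p):                      # p parallel classes of non-vertical lines
        for b in range(p):
            lines.append(frozenset((x, (m * x + b) % p) for x in range(p)))
    for c in range(p):                      # one more parallel class: the vertical lines
        lines.append(frozenset((c, y) for y in range(p)))
    return points, lines

p = 5
points, lines = affine_plane(p)
assert len(points) == p * p                       # n^2 points
assert len(lines) == p * p + p                    # n^2 + n lines
assert all(len(line) == p for line in lines)      # every line has n points
for pt in points:                                 # every point lies on n + 1 lines
    assert sum(pt in line for line in lines) == p + 1
for a in points:                                  # any two distinct points lie on one line
    for b in points:
        if a != b:
            assert sum(a in line and b in line for line in lines) == 1
```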
Affine spaces. Affine spaces can be defined in an analogous manner to the construction of affine planes from projective planes. It is also possible to provide a system of axioms for the higher-dimensional affine spaces which does not refer to the corresponding projective space. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\operatorname{Hull}(C) = C \\cap C^{\\perp}," } ]
https://en.wikipedia.org/wiki?curid=649724
64972913
Expected mean squares
In statistics, expected mean squares (EMS) are the expected values of certain statistics arising in partitions of sums of squares in the analysis of variance (ANOVA). They can be used for ascertaining which statistic should appear in the denominator in an F-test for testing a null hypothesis that a particular effect is absent. Definition. When the total corrected sum of squares in an ANOVA is partitioned into several components, each attributed to the effect of a particular predictor variable, each of the sums of squares in that partition is a random variable that has an expected value. That expected value divided by the corresponding number of degrees of freedom is the expected mean square for that predictor variable. Example. The following example is from "Longitudinal Data Analysis" by Donald Hedeker and Robert D. Gibbons. Each of "s" treatments (one of which may be a placebo) is administered to a sample of (capital) "N" randomly chosen patients, on whom certain measurements formula_0 are observed at each of (lower-case) "n" specified times, for formula_1 (thus the numbers of patients receiving different treatments may differ), and formula_2 We assume the sets of patients receiving different treatments are disjoint, so patients are nested within treatments and not crossed with treatments. We have formula_3 where The total corrected sum of squares is formula_15 The ANOVA table below partitions the sum of squares (where formula_16): Use in F-tests. A null hypothesis of interest is that there is no difference between effects of different treatments—thus no difference among treatment means. This may be expressed by saying formula_17 (with the notation as used in the table above). Under this null hypothesis, the expected mean square for effects of treatments is formula_18 The numerator in the F-statistic for testing this hypothesis is the mean square due to differences among treatments, i.e. it is formula_19 The denominator, however, is not formula_20 The reason is that the random variable below, although under the null hypothesis it has an F-distribution, is not observable—it is not a statistic—because its value depends on the unobservable parameters formula_21 and formula_22 formula_23 Instead, one uses as the test statistic the following random variable that is not defined in terms of formula_24: formula_25 Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
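The F-statistic at the end of the section can be computed directly from data. The Python sketch below is a rough illustration under assumptions: it simulates the nested model described above (with the treatment-by-time interaction taken to be zero and made-up variance parameters), and it uses the standard balanced-design formulas for SS_Tr and SS_S(Tr), which are assumed here because the ANOVA table itself is not reproduced in this text.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

s, n = 3, 4                      # number of treatments and of time points
N_h = [6, 5, 7]                  # patients per treatment (they may differ)
sigma_pi, sigma_eps = 1.0, 0.5   # assumed patient-effect and error standard deviations

# Simulate Y for each treatment group from mu + gamma_h + tau_j + pi_i(h) + eps_hij
# (mu = 0 and the interaction (gamma*tau)_hj is taken to be zero in this sketch).
gamma = np.zeros(s)              # no treatment effect, so the null hypothesis holds
tau = rng.normal(0, 0.3, n)
Y = [rng.normal(0, sigma_eps, (Nh, n)) + gamma[h] + tau
     + rng.normal(0, sigma_pi, (Nh, 1)) for h, Nh in enumerate(N_h)]

grand = np.concatenate([y.ravel() for y in Y]).mean()
group_means = [y.mean() for y in Y]
subject_means = [y.mean(axis=1) for y in Y]

SS_Tr = n * sum(Nh * (gm - grand) ** 2 for Nh, gm in zip(N_h, group_means))
SS_S_Tr = n * sum(((sm - gm) ** 2).sum() for sm, gm in zip(subject_means, group_means))

N = sum(N_h)
F = (SS_Tr / (s - 1)) / (SS_S_Tr / (N - s))       # the observable statistic from the text
p_value = stats.f.sf(F, s - 1, N - s)
print(F, p_value)
```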
[ { "math_id": 0, "text": " Y_{hij} " }, { "math_id": 1, "text": " h=1,\\ldots,s, \\quad i=1,\\ldots,N_h " }, { "math_id": 2, "text": " j=1,\\ldots, n." }, { "math_id": 3, "text": " Y_{hij} = \\mu + \\gamma_h + \\tau_j + (\\gamma\\tau)_{hj} + \\pi_{i(h)} + \\varepsilon_{hij} " }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "\\gamma_h" }, { "math_id": 6, "text": "h" }, { "math_id": 7, "text": "\\tau_j" }, { "math_id": 8, "text": "j" }, { "math_id": 9, "text": "(\\gamma\\tau)_{hj}" }, { "math_id": 10, "text": "\\pi_{i(h)}" }, { "math_id": 11, "text": "i" }, { "math_id": 12, "text": "\\varepsilon_{hij}" }, { "math_id": 13, "text": "\\sigma_\\pi^2" }, { "math_id": 14, "text": "\\sigma_\\varepsilon" }, { "math_id": 15, "text": " \\sum_{hij} (Y_{hij} - \\overline Y)^2 \\quad\\text{where } \\overline Y = \\frac 1 n \\sum_{hij} Y_{hij}. " }, { "math_id": 16, "text": " N = \\sum_h N_h " }, { "math_id": 17, "text": " D_\\text{Tr}=0, " }, { "math_id": 18, "text": " \\sigma_\\varepsilon^2 + n \\sigma_\\pi^2. " }, { "math_id": 19, "text": " \\left. \\text{SS}_\\text{Tr} \\right/(s-1). " }, { "math_id": 20, "text": " \\left. \\text{SS}_\\text{E}\\right/ \\big( (N-s)(n-1) \\big). " }, { "math_id": 21, "text": " \\sigma_\\pi^2 " }, { "math_id": 22, "text": " \\sigma_\\varepsilon^2. " }, { "math_id": 23, "text": " \\frac{\\left.\\frac{\\text{SS}_\\text{Tr}}{\\sigma_\\varepsilon^2 + n\\sigma_\\pi^2} \\right/(s-1)}{ \\left. \\frac{\\text{SS}_\\text{E}}{\\sigma_\\varepsilon^2} \\right/ \\big( (N-s)(n-1) \\big)} \\ne \\frac{\\text{SS}_\\text{Tr}/(s-1)}{\\text{SS}_\\text{E}/\\big((N-s)(n-1)\\big)} " }, { "math_id": 24, "text": " \\text{SS}_\\text{E}" }, { "math_id": 25, "text": " F = \\frac{\\left.\\frac{\\text{SS}_\\text{Tr}}{\\sigma_\\varepsilon^2 + n\\sigma_\\pi^2} \\right/(s-1)}{ \\left. \\frac{\\text{SS}_{\\text{S}(\\text{Tr})}}{\\sigma_\\varepsilon^2+ n\\sigma_\\pi^2} \\right/ (N-s)} = \\frac{\\left. \\text{SS}_\\text{Tr} \\right/(s-1)}{ \\left. \\text{SS}_\\text{S(Tr)} \\right/ (N-s)} " } ]
https://en.wikipedia.org/wiki?curid=64972913
6497307
Rubber ducky antenna
Type of radio antenna The rubber ducky antenna (or rubber duck aerial) is an electrically short monopole antenna, invented by Richard B. Johnson, that functions somewhat like a base-loaded whip antenna. It consists of a springy wire in the shape of a narrow helix, sealed in a rubber or plastic jacket to protect the antenna. The rubber ducky antenna is a form of normal-mode helical antenna. Electrically short antennas like the rubber ducky are used in portable handheld radio equipment at VHF and UHF frequencies in place of a quarter-wavelength whip antenna, which is inconveniently long and cumbersome at these frequencies. Many years after its invention in 1958, the rubber ducky antenna became the antenna of choice for many portable radio devices, including walkie-talkies and other portable transceivers, scanners and other devices where safety and robustness take precedence over electromagnetic performance. The rubber ducky is quite flexible, making it more suitable for handheld operation, especially when worn on the belt, than earlier rigid telescoping antennas. Origin of the name. The term rubber duck stems from the rubberized protective coating commonly seen on handheld radios and police radios. An alternative name is based on the short stub format: the "stubby antenna". Description. Before the rubber ducky, antennas on portable radios usually consisted of quarter-wave whip antennas, rods whose length was one-quarter of the wavelength of the radio waves used. In the VHF range where they were used, these antennas were long, making them cumbersome. They were often made of telescoping tubes that could be retracted when not in use. To make the antenna more compact, electrically short antennas, shorter than one-quarter wavelength, began to be used. Electrically short antennas have considerable capacitive reactance, so to make them resonant at the operating frequency an inductor (loading coil) is added in series with the antenna. Antennas which have these inductors built into their bases are called base-loaded whips. The rubber ducky is an electrically short quarter-wave antenna in which the inductor, instead of being in the base, is built into the antenna itself. The antenna is made of a narrow helix of wire like a spring, which functions as the needed inductor. The springy wire is flexible, making it less prone to damage than a stiff antenna. The spring antenna is further enclosed in a plastic or rubber-like covering to protect it. The technical name for this type of antenna is a normal-mode helix. Rubber ducky antennas are typically 4% to 15% of a wavelength long; that is, 16% to 60% of the length of a standard quarter-wave whip. Effective aperture. Because the length of this antenna is significantly smaller than a wavelength the effective aperture, if 100% efficient, would be approximately: formula_0 Like other electrically short antennas the rubber ducky has poorer performance (less gain) due to losses and thus considerably less gain than a quarter-wave whip. However it has somewhat better performance than an equal length base loaded antenna. This is because the inductance is distributed throughout the antenna and so allows somewhat greater current in the antenna. Performance. Rubber ducky antennas have lower gain than a full size quarter-wavelength antenna, reducing the range of the radio. They are typically used in short-range two way radios where maximum range is not a requirement. Their design is a compromise between antenna gain and small size. 
They are difficult to characterize electrically because the current distribution along the element is not sinusoidal as is the case with a thin linear antenna. In common with other inductively loaded short monopoles, the rubber ducky has a high Q factor and thus a narrow bandwidth. This means that as the frequency departs from the antenna's designed center frequency, its SWR increases and thus its efficiency falls off quickly. This type of antenna is often used over a wide frequency range, e.g. 100–500 MHz, and over this range its performance is poor, but in many mobile radio applications there is sufficient excess signal strength to overcome any deficiencies in the antenna. Design rules. From these rules, one can surmise that it is possible to design a rubber ducky antenna that has about 50 Ω impedance at its feed-point, but a compromise of bandwidth may be necessary. Modern rubber ducky antennas such as those used on cell phones are tapered in such a way that few performance compromises are necessary. Variations. Some rubber ducky antennas are designed quite differently than the original design. One type uses a spring only for support. The spring is electrically shorted out. The antenna is therefore electrically a linear element antenna. Some other rubber ducky antennas use a spring of non-conducting material for support and comprise a collinear array antenna. Such antennas are still called rubber ducky antennas even though they function quite differently (and often better) than the original spring antenna. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
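The effective-aperture formula quoted above is easy to evaluate. The short Python sketch below is not from the article; the 150 MHz and 450 MHz frequencies are simply illustrative. It computes the ideal effective aperture and, for comparison, the length of a full-size quarter-wave whip at each frequency.
```python
import math

C = 299_792_458.0  # speed of light, m/s

def effective_aperture(freq_hz):
    """Ideal (100%-efficient) effective aperture A_e = 3*lambda^2 / (8*pi) from the text."""
    lam = C / freq_hz
    return 3.0 * lam ** 2 / (8.0 * math.pi)

def quarter_wave_length(freq_hz):
    """Length of a full-size quarter-wave whip, for comparison."""
    return C / freq_hz / 4.0

for f_mhz in (150.0, 450.0):       # illustrative VHF and UHF frequencies
    f = f_mhz * 1e6
    print(f"{f_mhz:>5.0f} MHz: quarter-wave whip ~{quarter_wave_length(f)*100:.0f} cm, "
          f"ideal A_e ~{effective_aperture(f)*1e4:.0f} cm^2")
```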
[ { "math_id": 0, "text": "A_e = \\frac{3 \\lambda ^2 }{8 \\pi} " } ]
https://en.wikipedia.org/wiki?curid=6497307
649743
Fundamental representation
In representation theory of Lie groups and Lie algebras, a fundamental representation is an irreducible finite-dimensional representation of a semisimple Lie group or Lie algebra whose highest weight is a fundamental weight. For example, the defining module of a classical Lie group is a fundamental representation. Any finite-dimensional irreducible representation of a semisimple Lie group or Lie algebra can be constructed from the fundamental representations by a procedure due to Élie Cartan. Thus in a certain sense, the fundamental representations are the elementary building blocks for arbitrary finite-dimensional representations. Explanation. The irreducible representations of a simply-connected compact Lie group are indexed by their highest weights. These weights are the lattice points in an orthant "Q"+ in the weight lattice of the Lie group consisting of the dominant integral weights. It can be proved that there exists a set of "fundamental weights", indexed by the vertices of the Dynkin diagram, such that any dominant integral weight is a non-negative integer linear combination of the fundamental weights. The corresponding irreducible representations are the fundamental representations of the Lie group. From the expansion of a dominant weight in terms of the fundamental weights one can take a corresponding tensor product of the fundamental representations and extract one copy of the irreducible representation corresponding to that dominant weight. Other uses. Outside of Lie theory, the term "fundamental representation" is sometimes loosely used to refer to a smallest-dimensional faithful representation, though this is also often called the "standard" or "defining" representation (a term referring more to the history, rather than having a well-defined mathematical meaning).
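A small worked example of the tensor-product construction described in the Explanation section (an illustrative sketch added here, not part of the original text), written out for the Lie algebra sl(3, C): its fundamental representations are the defining representation of highest weight omega_1 and its exterior square of highest weight omega_2, which is isomorphic to the dual of the defining representation. The irreducible representation of highest weight omega_1 + omega_2 (the adjoint representation) is extracted from their tensor product:

V_{\omega_1} \otimes V_{\omega_2} \;\cong\; \mathbb{C}^3 \otimes (\mathbb{C}^3)^{*} \;\cong\; V_{\omega_1+\omega_2} \oplus V_0, \qquad 3 \times 3 = 8 + 1,

where V_{\omega_1+\omega_2} is the 8-dimensional adjoint representation and V_0 is the trivial representation; discarding the trivial summand leaves the desired copy of the irreducible representation with highest weight omega_1 + omega_2.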
[ { "math_id": 0, "text": "\\operatorname{Alt}^k\\ {\\mathbb C}^n" } ]
https://en.wikipedia.org/wiki?curid=649743
64979699
Gale diagram
In the mathematical discipline of polyhedral combinatorics, the Gale transform turns the vertices of any convex polytope into a set of vectors or points in a space of a different dimension, the Gale diagram of the polytope. It can be used to describe high-dimensional polytopes with few vertices, by transforming them into sets with the same number of points, but in a space of a much lower dimension. The process can also be reversed, to construct polytopes with desired properties from their Gale diagrams. The Gale transform and Gale diagram are named after David Gale, who introduced these methods in a 1956 paper on neighborly polytopes. Definitions. Transform. Given a formula_0-dimensional polytope, with formula_1 vertices, adjoin 1 to the Cartesian coordinates of each vertex, to obtain a formula_2-dimensional column vector. The matrix formula_3 of these formula_1 column vectors has dimensions formula_4, defining a linear mapping from formula_1-space to formula_2-space, surjective with rank formula_5. The kernel of formula_3 describes linear dependencies among the formula_1 original vertices with coefficients summing to zero; this kernel has dimension formula_6. The Gale transform of formula_3 is a matrix formula_7 of dimension formula_8, whose "column vectors" are a chosen basis for the kernel of formula_3. Then formula_7 has formula_1 "row vectors" of dimension formula_6. These row vectors form the Gale diagram of the polytope. A different choice of basis for the kernel changes the result only by a linear transformation. Note that the vectors in the Gale diagram are in natural bijection with the formula_1 vertices of the original formula_0-dimensional polytope, but the dimension of the Gale diagram is smaller whenever formula_9. A proper subset of the vertices of a polytope forms the vertex set of a face of the polytope, if and only if the complementary set of vectors of the Gale transform has a convex hull that contains the origin in its relative interior. Equivalently, the subset of vertices forms a face if and only if its affine span does not intersect the convex hull of the complementary vectors. Linear diagram. Because the Gale transform is defined only up to a linear transformation, its nonzero vectors can be normalized to all be formula_10-dimensional unit vectors. The linear Gale diagram is a normalized version of the Gale transform, in which all the vectors are zero or unit vectors. Affine diagram. Given a Gale diagram of a polytope, that is, a set of formula_1 unit vectors in an formula_10-dimensional space, one can choose a formula_11-dimensional subspace formula_12 through the origin that avoids all of the vectors, and a parallel subspace formula_13 that does not pass through the origin. Then, a central projection from the origin to formula_13 will produce a set of formula_11-dimensional points. This projection loses the information about which vectors lie above formula_12 and which lie below it, but this information can be represented by assigning a sign (positive, negative, or zero) or equivalently a color (black, white, or gray) to each point. The resulting set of signed or colored points is the affine Gale diagram of the given polytope. This construction has the advantage, over the Gale transform, of using one less dimension to represent the structure of the given polytope. Gale transforms and linear and affine Gale diagrams can also be described through the duality of oriented matroids. 
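A minimal computational sketch of the transform just described (added here for illustration, assuming NumPy; not part of the original article). It builds the (d+1) x n matrix A from the vertex coordinates, takes a basis of its kernel from the singular value decomposition, and returns the rows of the resulting n x (n-d-1) matrix as the Gale diagram vectors:

import numpy as np

def gale_transform(vertices):
    # vertices: an (n, d) array of the polytope's vertex coordinates.
    # Assumes the vertices affinely span d-space, so rank(A) = d + 1.
    V = np.asarray(vertices, dtype=float)
    n, d = V.shape
    A = np.vstack([V.T, np.ones(n)])      # adjoin 1 to each vertex; shape (d+1, n)
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10 * s[0]))  # numerical rank of A
    B = Vt[rank:].T                       # kernel basis as columns; shape (n, n-d-1)
    return B                              # row i is the Gale vector of vertex i

# Unit square (d = 2, n = 4): the diagram is one-dimensional, with two vectors
# of each sign, as described in the Examples section below.
print(gale_transform([(0, 0), (1, 0), (1, 1), (0, 1)]).round(3))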
As with the linear diagram, a subset of vertices forms a face if and only if there is no affine function (a linear function with a possibly nonzero constant term) that assigns a non-negative value to each positive vector in the complementary set and a non-positive value to each negative vector in the complementary set. Examples. The Gale diagram is particularly effective in describing polyhedra whose numbers of vertices are only slightly larger than their dimensions. Simplices. A formula_0-dimensional polytope with formula_14 vertices, the minimum possible, is a simplex. In this case, the linear Gale diagram is 0-dimensional, consisting only of zero vectors. The affine diagram has formula_1 gray points. One additional vertex. In a formula_0-dimensional polytope with formula_15 vertices, the linear Gale diagram is one-dimensional, with the vector representing each point being one of the three numbers formula_16, formula_17, or formula_18. In the affine diagram, the points are zero-dimensional, so they can be represented only by their signs or colors without any location value. In order to represent a polytope, the diagram must have at least two points with each nonzero sign. Two diagrams represent the same combinatorial equivalence class of polytopes when they have the same numbers of points of each sign, or when they can be obtained from each other by negating all of the signs. For formula_19, the only possibility is two points of each nonzero sign, representing a convex quadrilateral. For formula_20, there are two possible Gale diagrams: the diagram with two points of each nonzero sign and one zero point represents a square pyramid, while the diagram with two points of one nonzero sign and three points with the other sign represents the triangular bipyramid. In general, the number of distinct Gale diagrams with formula_15, and the number of combinatorial equivalence classes of formula_0-dimensional polytopes with formula_1 vertices, is formula_21. Two additional vertices. In a formula_0-dimensional polytope with formula_22 vertices, the linear Gale diagram consists of points on the unit circle (unit vectors) and at its center. The affine Gale diagram consists of labeled points or clusters of points on a line. Unlike for the case of formula_15 vertices, it is not completely trivial to determine when two Gale diagrams represent the same polytope. Three-dimensional polyhedra with six vertices provide natural examples where the original polyhedron is of a low enough dimension to visualize, but where the Gale diagram still provides a dimension-reducing effect. Applications. Gale diagrams have been used to provide a complete combinatorial enumeration of the formula_0-dimensional polytopes with formula_22 vertices, and to construct polytopes with unusual properties. These include: Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
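The count of combinatorial equivalence classes for polytopes with d + 2 vertices quoted above can be checked by brute force, since in that case a diagram is determined by how many points carry each sign. The short Python sketch below (added for illustration) enumerates the sign patterns with at least two points of each nonzero sign, identifying a pattern with its negation:

def count_gale_classes(d):
    # One-dimensional Gale diagrams of a d-polytope with n = d + 2 vertices:
    # choose how many points are positive, negative and zero, requiring at
    # least two of each nonzero sign, and identify a pattern with its negation.
    n = d + 2
    classes = set()
    for pos in range(2, n + 1):
        for neg in range(2, n - pos + 1):
            zero = n - pos - neg
            classes.add(frozenset({(pos, neg, zero), (neg, pos, zero)}))
    return len(classes)

print([count_gale_classes(d) for d in range(2, 7)])   # [1, 2, 4, 6, 9], i.e. floor(d*d/4)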
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "(d+1)" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "(d+1)\\times n" }, { "math_id": 5, "text": "d+1" }, { "math_id": 6, "text": "n-d-1" }, { "math_id": 7, "text": "B" }, { "math_id": 8, "text": "n\\times (n-d-1)" }, { "math_id": 9, "text": "n \\leq 2d" }, { "math_id": 10, "text": "(n-d-1)" }, { "math_id": 11, "text": "(n-d-2)" }, { "math_id": 12, "text": "S" }, { "math_id": 13, "text": "S'" }, { "math_id": 14, "text": "n=d+1" }, { "math_id": 15, "text": "n=d+2" }, { "math_id": 16, "text": "-1" }, { "math_id": 17, "text": "0" }, { "math_id": 18, "text": "+1" }, { "math_id": 19, "text": "d=2" }, { "math_id": 20, "text": "d=3" }, { "math_id": 21, "text": "\\lfloor d^2/4 \\rfloor" }, { "math_id": 22, "text": "n=d+3" }, { "math_id": 23, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=64979699
649861
Al-Khwarizmi
Persian polymath (c. 780 – c. 850) Muhammad ibn Musa al-Khwarizmi (; c. 780 – c. 850), often referred to as simply al-Khwarizmi, was a polymath who produced vastly influential Arabic-language works in mathematics, astronomy, and geography. Hailing from Khwarazm, he was appointed as the astronomer and head of the House of Wisdom in the city of Baghdad around 820 CE. His popularizing treatise on algebra, compiled between 813–33 as "Al-Jabr (The Compendious Book on Calculation by Completion and Balancing)", presented the first systematic solution of linear and quadratic equations. One of his achievements in algebra was his demonstration of how to solve quadratic equations by completing the square, for which he provided geometric justifications. Because al-Khwarizmi was the first person to treat algebra as an independent discipline and introduced the methods of "reduction" and "balancing" (the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation), he has been described as the father or founder of algebra. The English term "algebra" comes from the short-hand title of his aforementioned treatise ( , transl. "completion" or "rejoining"). His name gave rise to the English terms "algorism" and "algorithm"; the Spanish, Italian, and Portuguese terms "algoritmo"; and the Spanish term and Portuguese term , both meaning "digit". In the 12th century, Latin-language translations of al-Khwarizmi's textbook on Indian arithmetic (), which codified the various Indian numerals, introduced the decimal-based positional number system to the Western world. Likewise, "Al-Jabr", translated into Latin by the English scholar Robert of Chester in 1145, was used until the 16th century as the principal mathematical textbook of European universities. Al-Khwarizmi revised "Geography", the 2nd-century Greek-language treatise by the Roman polymath Claudius Ptolemy, listing the longitudes and latitudes of cities and localities. He further produced a set of astronomical tables and wrote about calendric works, as well as the astrolabe and the sundial. Al-Khwarizmi made important contributions to trigonometry, producing accurate sine and cosine tables and the first table of tangents. Life. Few details of al-Khwārizmī's life are known with certainty. Ibn al-Nadim gives his birthplace as Khwarazm, and he is generally thought to have come from this region. Of Persian stock, his name means 'of Khwarazm', a region that was part of Greater Iran, and is now part of Turkmenistan and Uzbekistan. Al-Tabari gives his name as Muḥammad ibn Musá al-Khwārizmī al-Majūsī al-Quṭrubbullī (). The epithet "al-Qutrubbulli" could indicate he might instead have come from Qutrubbul (Qatrabbul), near Baghdad. However, Roshdi Rashed denies this: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;There is no need to be an expert on the period or a philologist to see that al-Tabari's second citation should read "Muhammad ibn Mūsa al-Khwārizmī "and" al-Majūsi al-Qutrubbulli," and that there are two people (al-Khwārizmī and al-Majūsi al-Qutrubbulli) between whom the letter "wa" [Arabic " for the conjunction 'and'] has been omitted in an early copy. This would not be worth mentioning if a series of errors concerning the personality of al-Khwārizmī, occasionally even the origins of his knowledge, had not been made. Recently, G.J. Toomer ... with naive confidence constructed an entire fantasy on the error which cannot be denied the merit of amusing the reader. 
On the other hand, David A. King affirms his nisba to Qutrubul, noting that he was called al-Khwārizmī al-Qutrubbulli because he was born just outside of Baghdad. Regarding al-Khwārizmī's religion, Toomer writes: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Another epithet given to him by al-Ṭabarī, "al-Majūsī," would seem to indicate that he was an adherent of the old Zoroastrian religion. This would still have been possible at that time for a man of Iranian origin, but the pious preface to al-Khwārizmī's "Algebra" shows that he was an orthodox Muslim, so al-Ṭabarī's epithet could mean no more than that his forebears, and perhaps he in his youth, had been Zoroastrians. Ibn al-Nadīm's includes a short biography on al-Khwārizmī together with a list of his books. Al-Khwārizmī accomplished most of his work between 813 and 833. After the Muslim conquest of Persia, Baghdad had become the centre of scientific studies and trade. Around 820 CE, he was appointed as the astronomer and head of the library of the House of Wisdom. The House of Wisdom was established by the Abbasid Caliph al-Ma'mūn. Al-Khwārizmī studied sciences and mathematics, including the translation of Greek and Sanskrit scientific manuscripts. He was also a historian who is cited by the likes of al-Tabari and Ibn Abi Tahir. During the reign of al-Wathiq, he is said to have been involved in the first of two embassies to the Khazars. Douglas Morton Dunlop suggests that Muḥammad ibn Mūsā al-Khwārizmī might have been the same person as Muḥammad ibn Mūsā ibn Shākir, the eldest of the three Banū Mūsā brothers. Contributions. Al-Khwārizmī's contributions to mathematics, geography, astronomy, and cartography established the basis for innovation in algebra and trigonometry. His systematic approach to solving linear and quadratic equations led to "algebra", a word derived from the title of his book on the subject, "Al-Jabr". "On the Calculation with Hindu Numerals," written about 820, was principally responsible for spreading the Hindu–Arabic numeral system throughout the Middle East and Europe. It was translated into Latin as "Algoritmi de numero Indorum". Al-Khwārizmī, rendered in Latin as "Algoritmi", led to the term "algorithm". Some of his work was based on Persian and Babylonian astronomy, Indian numbers, and Greek mathematics. Al-Khwārizmī systematized and corrected Ptolemy's data for Africa and the Middle East. Another major book was "Kitab surat al-ard" ("The Image of the Earth"; translated as Geography), presenting the coordinates of places based on those in the "Geography" of Ptolemy, but with improved values for the Mediterranean Sea, Asia, and Africa. He wrote on mechanical devices like the astrolabe and sundial. He assisted a project to determine the circumference of the Earth and in making a world map for al-Ma'mun, the caliph, overseeing 70 geographers. When, in the 12th century, his works spread to Europe through Latin translations, it had a profound impact on the advance of mathematics in Europe. Algebra. "Al-Jabr (The Compendious Book on Calculation by Completion and Balancing", ) is a mathematical book written approximately 820 CE. It was written with the encouragement of Caliph al-Ma'mun as a popular work on calculation and is replete with examples and applications to a range of problems in trade, surveying and legal inheritance. 
The term "algebra" is derived from the name of one of the basic operations with equations (, meaning "restoration", referring to adding a number to both sides of the equation to consolidate or cancel terms) described in this book. The book was translated in Latin as "Liber algebrae et almucabala" by Robert of Chester (Segovia, 1145) hence "algebra", and by Gerard of Cremona. A unique Arabic copy is kept at Oxford and was translated in 1831 by F. Rosen. A Latin translation is kept in Cambridge. It provided an exhaustive account of solving polynomial equations up to the second degree, and discussed the fundamental method of "reduction" and "balancing", referring to the transposition of terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. Al-Khwārizmī's method of solving linear and quadratic equations worked by first reducing the equation to one of six standard forms (where "b" and "c" are positive integers) by dividing out the coefficient of the square and using the two operations ( "restoring" or "completion") and ("balancing"). is the process of removing negative units, roots and squares from the equation by adding the same quantity to each side. For example, "x"2 = 40"x" − 4"x"2 is reduced to 5"x"2 = 40"x". is the process of bringing quantities of the same type to the same side of the equation. For example, "x"2 + 14 = "x" + 5 is reduced to "x"2 + 9 = "x". The above discussion uses modern mathematical notation for the types of problems that the book discusses. However, in al-Khwārizmī's day, most of this notation had not yet been invented, so he had to use ordinary text to present problems and their solutions. For example, for one problem he writes, (from an 1831 translation) &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; In modern notation this process, with "x" the "thing" ( "shayʾ") or "root", is given by the steps, formula_0 formula_1 formula_2 Let the roots of the equation be "x" = "p" and "x = q". Then formula_3, formula_4 and formula_5 So a root is given by formula_6 Several authors have published texts under the name of , including Abū Ḥanīfa Dīnawarī, Abū Kāmil, Abū Muḥammad al-'Adlī, Abū Yūsuf al-Miṣṣīṣī, 'Abd al-Hamīd ibn Turk, Sind ibn 'Alī, Sahl ibn Bišr, and Sharaf al-Dīn al-Ṭūsī. Solomon Gandz has described Al-Khwarizmi as the father of Algebra: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Al-Khwarizmi's algebra is regarded as the foundation and cornerstone of the sciences. In a sense, al-Khwarizmi is more entitled to be called "the father of algebra" than Diophantus because al-Khwarizmi is the first to teach algebra in an elementary form and for its own sake, Diophantus is primarily concerned with the theory of numbers. Victor J. Katz adds : &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The first true algebra text which is still extant is the work on al-jabr and al-muqabala by Mohammad ibn Musa al-Khwarizmi, written in Baghdad around 825. John J. O'Connor and Edmund F. Robertson wrote in the "MacTutor History of Mathematics Archive": &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Roshdi Rashed and Angela Armstrong write: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Al-Khwarizmi's text can be seen to be distinct not only from the Babylonian tablets, but also from Diophantus' "Arithmetica". 
It no longer concerns a series of problems to be solved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study. On the other hand, the idea of an equation for its own sake appears from the beginning and, one could say, in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems. According to Swiss-American historian of mathematics, Florian Cajori, Al-Khwarizmi's algebra was different from the work of Indian mathematicians, for Indians had no rules like the "restoration" and "reduction". Regarding the dissimilarity and significance of Al-Khwarizmi's algebraic work from that of Indian Mathematician Brahmagupta, Carl B. Boyer wrote: It is true that in two respects the work of al-Khowarizmi represented a retrogression from that of Diophantus. First, it is on a far more elementary level than that found in the Diophantine problems and, second, the algebra of al-Khowarizmi is thoroughly rhetorical, with none of the syncopation found in the Greek "Arithmetica" or in Brahmagupta's work. Even numbers were written out in words rather than symbols! It is quite unlikely that al-Khwarizmi knew of the work of Diophantus, but he must have been familiar with at least the astronomical and computational portions of Brahmagupta; yet neither al-Khwarizmi nor other Arabic scholars made use of syncopation or of negative numbers. Nevertheless, the "Al-jabr" comes closer to the elementary algebra of today than the works of either Diophantus or Brahmagupta, because the book is not concerned with difficult problems in indeterminant analysis but with a straight forward and elementary exposition of the solution of equations, especially that of second degree. The Arabs in general loved a good clear argument from premise to conclusion, as well as systematic organization – respects in which neither Diophantus nor the Hindus excelled. Arithmetic. Al-Khwārizmī's second most influential work was on the subject of arithmetic, which survived in Latin translations but is lost in the original Arabic. His writings include the text "kitāb al-ḥisāb al-hindī" ('Book of Indian computation'), and perhaps a more elementary text, "kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī" ('Addition and subtraction in Indian arithmetic'). These texts described algorithms on decimal numbers (Hindu–Arabic numerals) that could be carried out on a dust board. Called "takht" in Arabic (Latin: "tabula"), a board covered with a thin layer of dust or sand was employed for calculations, on which figures could be written with a stylus and easily erased and replaced when necessary. Al-Khwarizmi's algorithms were used for almost three centuries, until replaced by Al-Uqlidisi's algorithms that could be carried out with pen and paper. As part of 12th century wave of Arabic science flowing into Europe via translations, these texts proved to be revolutionary in Europe. Al-Khwarizmi's Latinized name, "Algorismus", turned into the name of method used for computations, and survives in the term "algorithm". It gradually replaced the previous abacus-based methods used in Europe. 
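As a modern illustration of what such digit-by-digit procedures on decimal numerals look like (a sketch added here; it is not a reconstruction of al-Khwārizmī's actual dust-board method), here is column addition with carries written in Python:

def decimal_add(a, b):
    # Add two non-negative integers digit by digit, least significant first,
    # carrying into the next column -- the kind of positional algorithm
    # that the Latin "algorism" texts taught.
    xs = [int(ch) for ch in reversed(str(a))]
    ys = [int(ch) for ch in reversed(str(b))]
    out, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        x = xs[i] if i < len(xs) else 0
        y = ys[i] if i < len(ys) else 0
        carry, digit = divmod(x + y + carry, 10)
        out.append(digit)
    if carry:
        out.append(carry)
    return int("".join(str(d) for d in reversed(out)))

print(decimal_add(478, 356))   # 834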
Four Latin texts providing adaptions of Al-Khwarizmi's methods have survived, even though none of them is believed to be a literal translation: "Dixit Algorizmi" ('Thus spake Al-Khwarizmi') is the starting phrase of a manuscript in the University of Cambridge library, which is generally referred to by its 1857 title "Algoritmi de Numero Indorum". It is attributed to the Adelard of Bath, who had translated the astronomical tables in 1126. It is perhaps the closest to Al-Khwarizmi's own writings. Al-Khwarizmi's work on arithmetic was responsible for introducing the Arabic numerals, based on the Hindu–Arabic numeral system developed in Indian mathematics, to the Western world. The term "algorithm" is derived from the algorism, the technique of performing arithmetic with Hindu-Arabic numerals developed by al-Khwārizmī. Both "algorithm" and "algorism" are derived from the Latinized forms of al-Khwārizmī's name, "Algoritmi" and "Algorismi", respectively. Astronomy. Al-Khwārizmī's (, "astronomical tables of "Siddhanta"") is a work consisting of approximately 37 chapters on calendrical and astronomical calculations and 116 tables with calendrical, astronomical and astrological data, as well as a table of sine values. This is the first of many Arabic "Zijes" based on the Indian astronomical methods known as the "sindhind". The word Sindhind is a corruption of the Sanskrit "Siddhānta", which is the usual designation of an astronomical textbook. In fact, the mean motions in the tables of al-Khwarizmi are derived from those in the "corrected Brahmasiddhanta" (Brahmasphutasiddhanta) of Brahmagupta. The work contains tables for the movements of the sun, the moon and the five planets known at the time. This work marked the turning point in Islamic astronomy. Hitherto, Muslim astronomers had adopted a primarily research approach to the field, translating works of others and learning already discovered knowledge. The original Arabic version (written c. 820) is lost, but a version by the Spanish astronomer Maslama al-Majriti (c. 1000) has survived in a Latin translation, presumably by Adelard of Bath (26 January 1126). The four surviving manuscripts of the Latin translation are kept at the Bibliothèque publique (Chartres), the Bibliothèque Mazarine (Paris), the Biblioteca Nacional (Madrid) and the Bodleian Library (Oxford). Trigonometry. Al-Khwārizmī's "Zīj as-Sindhind" contained tables for the trigonometric functions of sines and cosine. A related treatise on spherical trigonometry is attributed to him. Al-Khwārizmī produced accurate sine and cosine tables, and the first table of tangents. Geography. Al-Khwārizmī's third major work is his (, "Book of the Description of the Earth"), also known as his "Geography", which was finished in 833. It is a major reworking of Ptolemy's second-century "Geography", consisting of a list of 2402 coordinates of cities and other geographical features following a general introduction. There is one surviving copy of , which is kept at the Strasbourg University Library. A Latin translation is at the Biblioteca Nacional de España in Madrid. The book opens with the list of latitudes and longitudes, in order of "weather zones", that is to say in blocks of latitudes and, in each weather zone, by order of longitude. As Paul Gallez notes, this system allows the deduction of many latitudes and longitudes where the only extant document is in such a bad condition, as to make it practically illegible. 
Neither the Arabic copy nor the Latin translation include the map of the world; however, Hubert Daunicht was able to reconstruct the missing map from the list of coordinates. Daunicht read the latitudes and longitudes of the coastal points in the manuscript, or deduced them from the context where they were not legible. He transferred the points onto graph paper and connected them with straight lines, obtaining an approximation of the coastline as it was on the original map. He did the same for the rivers and towns. Al-Khwārizmī corrected Ptolemy's gross overestimate for the length of the Mediterranean Sea from the Canary Islands to the eastern shores of the Mediterranean; Ptolemy overestimated it at 63 degrees of longitude, while al-Khwārizmī almost correctly estimated it at nearly 50 degrees of longitude. He "depicted the Atlantic and Indian Oceans as open bodies of water, not land-locked seas as Ptolemy had done." Al-Khwārizmī's Prime Meridian at the Fortunate Isles was thus around 10° east of the line used by Marinus and Ptolemy. Most medieval Muslim gazetteers continued to use al-Khwārizmī's prime meridian. Jewish calendar. Al-Khwārizmī wrote several other works including a treatise on the Hebrew calendar, titled (, "Extraction of the Jewish Era"). It describes the Metonic cycle, a 19-year intercalation cycle; the rules for determining on what day of the week the first day of the month Tishrei shall fall; calculates the interval between the Anno Mundi or Jewish year and the Seleucid era; and gives rules for determining the mean longitude of the sun and the moon using the Hebrew calendar. Similar material is found in the works of Al-Bīrūnī and Maimonides. Other works. Ibn al-Nadim's , an index of Arabic books, mentions al-Khwārizmī's (), a book of annals. No direct manuscript survives; however, a copy had reached Nusaybin by the 11th century, where its metropolitan bishop, Mar Elias bar Shinaya, found it. Elias's chronicle quotes it from "the death of the Prophet" through to 169 AH, at which point Elias's text itself hits a lacuna. Several Arabic manuscripts in Berlin, Istanbul, Tashkent, Cairo and Paris contain further material that surely or with some probability comes from al-Khwārizmī. The Istanbul manuscript contains a paper on sundials; the "Fihrist" credits al-Khwārizmī with (). Other papers, such as one on the determination of the direction of Mecca, are on the spherical astronomy. Two texts deserve special interest on the morning width () and the determination of the azimuth from a height (). He wrote two books on using and constructing astrolabes. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "(10-x)^2=81 x" }, { "math_id": 1, "text": "100 + x^2 - 20 x = 81 x" }, { "math_id": 2, "text": "x^2+100=101 x" }, { "math_id": 3, "text": "\\tfrac{p+q}{2}=50\\tfrac{1}{2}" }, { "math_id": 4, "text": "pq =100" }, { "math_id": 5, "text": "\\frac{p-q}{2} = \\sqrt{\\left(\\frac{p+q}{2}\\right)^2 - pq}=\\sqrt{2550\\tfrac{1}{4} - 100}=49\\tfrac{1}{2}" }, { "math_id": 6, "text": "x=50\\tfrac{1}{2}-49\\tfrac{1}{2}=1" } ]
https://en.wikipedia.org/wiki?curid=649861
6498864
Multifractal system
System with multiple fractal dimensions A multifractal system is a generalization of a fractal system in which a single exponent (the fractal dimension) is not enough to describe its dynamics; instead, a continuous spectrum of exponents (the so-called singularity spectrum) is needed. Multifractal systems are common in nature. They include the length of coastlines, mountain topography, fully developed turbulence, real-world scenes, heartbeat dynamics, human gait and activity, human brain activity, and natural luminosity time series. Models have been proposed in various contexts ranging from turbulence in fluid dynamics to internet traffic, finance, image modeling, texture synthesis, meteorology, geophysics and more. The origin of multifractality in sequential (time series) data has been attributed to mathematical convergence effects related to the central limit theorem that have as foci of convergence the family of statistical distributions known as the Tweedie exponential dispersion models, as well as the geometric Tweedie models. The first convergence effect yields monofractal sequences, and the second convergence effect is responsible for variation in the fractal dimension of the monofractal sequences. Multifractal analysis is used to investigate datasets, often in conjunction with other methods of fractal and lacunarity analysis. The technique entails distorting datasets extracted from patterns to generate multifractal spectra that illustrate how scaling varies over the dataset. Multifractal analysis has been used to decipher the generating rules and functionalities of complex networks. Multifractal analysis techniques have been applied in a variety of practical situations, such as predicting earthquakes and interpreting medical images. Definition. In a multifractal system formula_0, the behavior around any point is described by a local power law: formula_1 The exponent formula_2 is called the singularity exponent, as it describes the local degree of singularity or regularity around the point formula_3. The ensemble formed by all the points that share the same singularity exponent is called the "singularity manifold of exponent h", and is a fractal set of fractal dimension formula_4 the singularity spectrum. The curve formula_5 versus formula_6 is called the "singularity spectrum" and fully describes the statistical distribution of the variable formula_0. In practice, the multifractal behaviour of a physical system formula_7 is not directly characterized by its singularity spectrum formula_5. Rather, data analysis gives access to the "multiscaling exponents" formula_8. Indeed, multifractal signals generally obey a "scale invariance" property that yields power-law behaviours for multiresolution quantities, depending on their scale formula_9. Depending on the object under study, these multiresolution quantities, denoted by formula_10, can be local averages in boxes of size formula_9, gradients over distance formula_9, wavelet coefficients at scale formula_9, etc. For multifractal objects, one usually observes a global power-law scaling of the form: formula_11 at least in some range of scales and for some range of orders formula_12. When such behaviour is observed, one talks of scale invariance, self-similarity, or multiscaling. Estimation. Using so-called "multifractal formalism", it can be shown that, under some well-suited assumptions, there exists a correspondence between the singularity spectrum formula_5 and the multi-scaling exponents formula_13 through a Legendre transform. 
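A minimal sketch of how the multiscaling exponents described above can be estimated from a one-dimensional signal (added here for illustration, assuming NumPy; it uses simple increment-based structure functions, whereas practical analyses often prefer box averages or wavelet-based quantities — the paragraph that follows describes the same log-log regression idea):

import numpy as np

def multiscaling_exponents(signal, qs, scales):
    # For each order q, compute the empirical moment <|T_X(a)|^q> of the
    # increments over distance a, then estimate zeta(q) as the slope of
    # log(moment) against log(a).
    x = np.asarray(signal, dtype=float)
    zetas = []
    for q in qs:
        moments = [np.mean(np.abs(x[a:] - x[:-a]) ** q) for a in scales]
        slope, _ = np.polyfit(np.log(scales), np.log(moments), 1)
        zetas.append(slope)
    return np.array(zetas)

# Sanity check on a monofractal signal: for Brownian motion zeta(q) is close to q/2.
rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(2 ** 16))
print(multiscaling_exponents(bm, qs=[1, 2, 3], scales=[2, 4, 8, 16, 32, 64]))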
While the determination of formula_5 calls for some exhaustive local analysis of the data, which would result in difficult and numerically unstable calculations, the estimation of the formula_13 relies on the use of statistical averages and linear regressions in log-log diagrams. Once the formula_13 are known, one can deduce an estimate of formula_14 thanks to a simple Legendre transform. Multifractal systems are often modeled by stochastic processes such as multiplicative cascades. The formula_13 are statistically interpreted, as they characterize the evolution of the distributions of the formula_10 as formula_9 goes from larger to smaller scales. This evolution is often called "statistical intermittency" and betrays a departure from Gaussian models. Modelling as a multiplicative cascade also leads to estimation of multifractal properties. This method works reasonably well, even for relatively small datasets. A maximum-likelihood fit of a multiplicative cascade to the dataset not only estimates the complete spectrum but also gives reasonable estimates of the errors. Estimating multifractal scaling from box counting. Multifractal spectra can be determined from box counting on digital images. First, a box counting scan is done to determine how the pixels are distributed; then, this "mass distribution" becomes the basis for a series of calculations. The chief idea is that for multifractals, the probability formula_15 of a number of pixels formula_16, appearing in a box formula_17, varies as box size formula_18, to some exponent formula_19, which changes over the image, as in Eq.0.0 (NB: For monofractals, in contrast, the exponent does not change meaningfully over the set). formula_15 is calculated from the box-counting pixel distribution as in Eq.2.0. formula_18 = an arbitrary scale (box size in box counting) at which the set is examined formula_17 = the index for each box laid over the set for an formula_18 formula_20 = the number of pixels or "mass" in any box, formula_17, at size formula_18 formula_21 = the total boxes that contained more than 0 pixels, for each formula_18 formula_15 is used to observe how the pixel distribution behaves when distorted in certain ways as in Eq.3.0 and Eq.3.1: formula_22 = an arbitrary range of values to use as exponents for distorting the data set *When formula_23, Eq.3.0 equals 1, the usual sum of all probabilities, and when formula_24, every term is equal to 1, so the sum is equal to the number of boxes counted, formula_21. These distorting equations are further used to address how the set behaves when scaled or resolved or cut up into a series of formula_18-sized pieces and distorted by Q, to find different values for the dimension of the set, as in the following: *An important feature of Eq.3.0 is that it can also be seen to vary according to scale raised to the exponent formula_25 in Eq.4.0: Thus, a series of values for formula_26 can be found from the slopes of the regression line for the log of Eq.3.0 versus the log of formula_18 for each formula_22, based on Eq.4.1: *For the generalized dimension: *formula_27 is estimated as the slope of the regression line for log Aformula_18,Q versus log formula_18 where: *Then formula_28 is found from Eq.5.3. *The mean formula_29 is estimated as the slope of the log-log regression line for formula_30 versus formula_18, where: In practice, the probability distribution depends on how the dataset is sampled, so optimizing algorithms have been developed to ensure adequate sampling. Applications. 
Multifractal analysis has been successfully used in many fields, including physical, information, and biological sciences. For example, the quantification of residual crack patterns on the surface of reinforced concrete shear walls. Dataset distortion analysis. Multifractal analysis has been used in several scientific fields to characterize various types of datasets. In essence, multifractal analysis applies a distorting factor to datasets extracted from patterns, to compare how the data behave at each distortion. This is done using graphs known as multifractal spectra, analogous to viewing the dataset through a "distorting lens", as shown in the illustration. Several types of multifractal spectra are used in practise. DQ vs Q. One practical multifractal spectrum is the graph of DQ vs Q, where DQ is the generalized dimension for a dataset and Q is an arbitrary set of exponents. The expression "generalized dimension" thus refers to a set of dimensions for a dataset (detailed calculations for determining the generalized dimension using box counting are described below). Dimensional ordering. The general pattern of the graph of DQ vs Q can be used to assess the scaling in a pattern. The graph is generally decreasing, sigmoidal around Q=0, where D(Q=0) ≥ D(Q=1) ≥ D(Q=2). As illustrated in the figure, variation in this graphical spectrum can help distinguish patterns. The image shows D(Q) spectra from a multifractal analysis of binary images of non-, mono-, and multi-fractal sets. As is the case in the sample images, non- and mono-fractals tend to have flatter D(Q) spectra than multifractals. The generalized dimension also gives important specific information. D(Q=0) is equal to the capacity dimension, which—in the analysis shown in the figures here—is the box counting dimension. D(Q=1) is equal to the information dimension, and D(Q=2) to the correlation dimension. This relates to the "multi" in multifractal, where multifractals have multiple dimensions in the D(Q) versus Q spectra, but monofractals stay rather flat in that area. f(α) versus α. Another useful multifractal spectrum is the graph of formula_31 versus formula_19 (see calculations). These graphs generally rise to a maximum that approximates the fractal dimension at Q=0, and then fall. Like DQ versus Q spectra, they also show typical patterns useful for comparing non-, mono-, and multi-fractal patterns. In particular, for these spectra, non- and mono-fractals converge on certain values, whereas the spectra from multifractal patterns typically form humps over a broader area. Generalized dimensions of species abundance distributions in space. One application of Dq versus Q in ecology is characterizing the distribution of species. Traditionally the relative species abundances is calculated for an area without taking into account the locations of the individuals. An equivalent representation of relative species abundances are species ranks, used to generate a surface called the species-rank surface, which can be analyzed using generalized dimensions to detect different ecological mechanisms like the ones observed in the neutral theory of biodiversity, metacommunity dynamics, or niche theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "s" }, { "math_id": 1, "text": "s(\\vec{x}+\\vec{a})-s(\\vec{x}) \\sim a^{h(\\vec{x})}." }, { "math_id": 2, "text": "h(\\vec{x})" }, { "math_id": 3, "text": "\\vec{x}" }, { "math_id": 4, "text": "D(h):" }, { "math_id": 5, "text": "D(h)" }, { "math_id": 6, "text": "h" }, { "math_id": 7, "text": "X" }, { "math_id": 8, "text": "\\zeta(q),\\ q\\in{\\mathbb R}" }, { "math_id": 9, "text": "a" }, { "math_id": 10, "text": "T_X(a)" }, { "math_id": 11, "text": "\\langle T_X(a)^q \\rangle \\sim a^{\\zeta(q)}\\ " }, { "math_id": 12, "text": "q" }, { "math_id": 13, "text": "\\zeta(q)" }, { "math_id": 14, "text": "D(h)," }, { "math_id": 15, "text": "P" }, { "math_id": 16, "text": "m" }, { "math_id": 17, "text": "i" }, { "math_id": 18, "text": "\\epsilon" }, { "math_id": 19, "text": "\\alpha" }, { "math_id": 20, "text": "m_{[i,\\epsilon]}" }, { "math_id": 21, "text": "N_\\epsilon" }, { "math_id": 22, "text": "Q" }, { "math_id": 23, "text": "Q=1" }, { "math_id": 24, "text": "Q=0" }, { "math_id": 25, "text": "\\tau" }, { "math_id": 26, "text": "\\tau_{(Q)} " }, { "math_id": 27, "text": "\\alpha_{(Q)}" }, { "math_id": 28, "text": "f_{\\left(\\alpha_{{(Q)}}\\right)}" }, { "math_id": 29, "text": "\\tau_{(Q)}" }, { "math_id": 30, "text": "\\tau_{{(Q)}_{[\\epsilon]}}" }, { "math_id": 31, "text": "f(\\alpha)" } ]
https://en.wikipedia.org/wiki?curid=6498864
649929
Mycin
Expert system for bacterial infections MYCIN was an early backward chaining expert system that used artificial intelligence to identify bacteria causing severe infections, such as bacteremia and meningitis, and to recommend antibiotics, with the dosage adjusted for patient's body weight — the name derived from the antibiotics themselves, as many antibiotics have the suffix "-mycin". The Mycin system was also used for the diagnosis of blood clotting diseases. MYCIN was developed over five or six years in the early 1970s at Stanford University. It was written in Lisp as the doctoral dissertation of Edward Shortliffe under the direction of Bruce G. Buchanan, Stanley N. Cohen and others. Method. MYCIN operated using a fairly simple inference engine and a knowledge base of ~600 rules. It would query the physician running the program via a long series of simple yes/no or textual questions. At the end, it provided a list of possible culprit bacteria ranked from high to low based on the probability of each diagnosis, its confidence in each diagnosis' probability, the reasoning behind each diagnosis (that is, MYCIN would also list the questions and rules which led it to rank a diagnosis a particular way), and its recommended course of drug treatment. MYCIN sparked debate about the use of its ad hoc, but principled, uncertainty framework known as "certainty factors". The developers performed studies showing that MYCIN's performance was minimally affected by perturbations in the uncertainty metrics associated with individual rules, suggesting that the power in the system was related more to its knowledge representation and reasoning scheme than to the details of its numerical uncertainty model. Some observers felt that it should have been possible to use classical Bayesian statistics. MYCIN's developers argued that this would require either unrealistic assumptions of probabilistic independence, or require the experts to provide estimates for an unfeasibly large number of conditional probabilities. Subsequent studies later showed that the certainty factor model could indeed be interpreted in a probabilistic sense, and highlighted problems with the implied assumptions of such a model. However the modular structure of the system would prove very successful, leading to the development of graphical models such as Bayesian networks. Evidence combination. In MYCIN it was possible that two or more rules might draw conclusions about a parameter with different weights of evidence. For example, one rule may conclude that the organism in question is "E. Coli" with a certainty of 0.8 whilst another concludes that it is "E. Coli" with a certainty of 0.5 or even -0.8. In the event the certainty is less than zero the evidence is actually against the hypothesis. In order to calculate the certainty factor MYCIN combined these weights using the formula below to yield a single certainty factor: formula_0 Where X and Y are the certainty factors. This formula can be applied more than once if more than two rules draw conclusions about the same parameter. It is commutative, so it does not matter in which order the weights were combined. Results. Research conducted at the Stanford Medical School found MYCIN received an acceptability rating of 65% on treatment plan from a panel of eight independent specialists, which was comparable to the 42.5% to 62.5% rating of five faculty members. 
This study is often cited as showing the potential for disagreement about therapeutic decisions, even among experts, when there is no "gold standard" for correct treatment. Practical use. MYCIN was never actually used in practice. This wasn't because of any weakness in its performance. Some observers raised ethical and legal issues related to the use of computers in medicine, regarding the responsibility of the physicians in case the system gave wrong diagnosis. However, the greatest problem, and the reason that MYCIN was not used in routine practice, was the state of technologies for system integration, especially at the time it was developed. MYCIN was a stand-alone system that required a user to enter all relevant information about a patient by typing in responses to questions MYCIN posed. The program ran on a large time-shared system, available over the early Internet (ARPANet), before personal computers were developed. MYCIN's greatest influence was accordingly its demonstration of the power of its representation and reasoning approach. Rule-based systems in many non-medical domains were developed in the years that followed MYCIN's introduction of the approach. In the 1980s, expert system "shells" were introduced (including one based on MYCIN, known as E-MYCIN (followed by Knowledge Engineering Environment - KEE)) and supported the development of expert systems in a wide variety of application areas. A difficulty that rose to prominence during the development of MYCIN and subsequent complex expert systems has been the extraction of the necessary knowledge for the inference engine to use from the human expert in the relevant fields into the rule base (the so-called "knowledge acquisition bottleneck"). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
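For illustration, the certainty-factor combination rule described in the Evidence combination section above can be written as a few lines of Python (a sketch added here, not MYCIN's original Lisp code):

def combine_cf(x, y):
    # Combine two certainty factors in [-1, 1] drawn for the same parameter.
    if x > 0 and y > 0:
        return x + y - x * y
    if x < 0 and y < 0:
        return x + y + x * y
    # mixed or zero evidence (undefined in the degenerate case x = -y = +/-1)
    return (x + y) / (1 - min(abs(x), abs(y)))

print(combine_cf(0.8, 0.5))    # 0.9  -- two rules both supporting the same organism
print(combine_cf(0.8, -0.8))   # 0.0  -- equally strong evidence for and against

Because the rule is commutative, the same value is obtained whatever order the weights are combined in, as noted above.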
[ { "math_id": 0, "text": "CF(x,y )=\\begin{cases} X+Y -XY & \\text{if } X,Y>0 \\\\ \n X+Y+XY & \\text{if } X,Y<0 \\\\\n \\frac{X+Y}{1-\\min(|X|,|Y|)} & \\text{otherwise} \n\\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=649929
6499353
NOvA
Observatory The NOνA (NuMI Off-Axis νe Appearance) experiment is a particle physics experiment designed to detect neutrinos in Fermilab's NuMI (Neutrinos at the Main Injector) beam. Intended to be the successor to MINOS, NOνA consists of two detectors, one at Fermilab (the "near detector"), and one in northern Minnesota (the "far detector"). Neutrinos from NuMI pass through 810 km of Earth to reach the far detector. NOνA's main goal is to observe the oscillation of muon neutrinos to electron neutrinos. The primary physics goals of NOvA are: Physics goals. Primary goals. Neutrino oscillation is parameterized by the PMNS matrix and the mass squared differences between the neutrino mass eigenstates. Assuming that three flavors of neutrinos participate in neutrino mixing, there are six variables that affect neutrino oscillation: the three angles "θ"12, "θ"23, and "θ"13, a CP-violating phase "δ", and any two of the three mass squared differences. There is currently no compelling theoretical reason to expect any particular value of, or relationship between, these parameters. "θ"23 and "θ"12 have been measured to be non-zero by several experiments but the most sensitive search for non-zero "θ"13 by the Chooz collaboration yielded only an upper limit. In 2012, "θ"13 was measured at Daya Bay to be non-zero to a statistical significance of 5.2 "σ". The following year, T2K discovered the transition formula_0 excluding the non-appearance hypothesis with a significance of 7.3 "σ". No measurement of "δ" has been made. The absolute values of two mass squared differences are known, but because one is very small compared to the other, the ordering of the masses has not been determined. NOνA is an order of magnitude more sensitive to "θ"13 than the previous generation of experiments, such as MINOS. It will measure it by searching for the transition formula_0 in the Fermilab NuMI beam. If a non-zero value of "θ"13 is resolvable by NOνA, it will be possible to obtain measurements of "δ" and the mass ordering by also observing formula_1 The parameter "δ" can be measured because it modifies the probabilities of oscillation differently for neutrinos and anti-neutrinos. The mass ordering, similarly, can be determined because the neutrinos pass through the Earth, which, through the MSW effect, modifies the probabilities of oscillation differently for neutrinos and anti-neutrinos. Importance. The neutrino masses and mixing angles are, to the best of our knowledge, fundamental constants of the universe. Measuring them is a basic requirement for our understanding of physics. Knowing the value of the CP violating parameter "δ" will help us understand why the universe has a matter-antimatter asymmetry. Also, according to the Seesaw mechanism theory, the very small masses of neutrinos may be related to very large masses of particles that we do not yet have the technology to study directly. Neutrino measurements are then an indirect way of studying physics at extremely high energies. In our current theory of physics, there is no reason why the neutrino mixing angles should have any particular values. And yet, of the three neutrino mixing angles, only "θ"12 has been resolved as being neither maximal or minimal. If the measurements of NOνA and other future experiments continue to show "θ"23 as maximal and "θ"13 as minimal, it may suggest some as yet unknown symmetry of nature. Relationship to other experiments. NOνA can potentially resolve the mass hierarchy because it operates at a relatively high energy. 
Of the experiments currently running it has the broadest scope for making this measurement unambiguously with least dependence on the value of "δ". Many future experiments that seek to make precision measurements of neutrino properties will rely on NOνA's measurement to know how to configure their apparatus for greatest accuracy, and how to interpret their results. An experiment similar to NOνA is T2K, a neutrino beam experiment in Japan similar to NOνA. Like NOνA, it is intended to measure "θ"13 and "δ". It will have a 295 km baseline and will use lower energy neutrinos than NOνA, about 0.6 GeV. Since matter effects are less pronounced both at lower energies and shorter baselines, it is unable to resolve the mass ordering for the majority of possible values of "δ". The interpretation of Neutrinoless double beta decay experiments will also benefit from knowing the mass ordering, since the mass hierarchy affects the theoretical lifetimes of this process. Reactor experiments also have the ability to measure "θ"13. While they cannot measure "δ" or the mass ordering, their measurement of the mixing angle is not dependent on knowledge of these parameters. The three experiments that have measured a value for "θ"13, in deceasing order of sensitivity are Daya Bay in China, RENO in South Korea and Double Chooz in France, which use 1-2 km baselines, optimized for observation of the first "θ"13-controlled oscillation maximum. Secondary goals. In addition to its primary physics goals, NOνA will be able to improve upon the measurements of the already measured oscillation parameters. NOνA, like MINOS, is well suited to detecting muon neutrinos and so will be able to refine our knowledge of "θ"23. The NOνA near detector will be used to conduct measurements of neutrino interaction cross sections which are currently not known to a high degree of precision. Its measurements in this area will complement other similar upcoming experiments, such as MINERνA, which also uses the NuMI beam. Since it is capable of detecting neutrinos from galactic supernovas, NOνA will form part of the Supernova Early Warning System. Supernova data from NOνA can be correlated with that from Super-Kamiokande to study the matter effects on the oscillation of these neutrinos. Design. To accomplish its physics goals, NOνA needs to be efficient at detecting electron neutrinos, which are expected to appear in the NuMI beam (originally made only of muon neutrinos) as the result of neutrino oscillation. Previous neutrino experiments, such as MINOS, have reduced backgrounds from cosmic rays by being underground. However, NOνA is on the surface and relies on precise timing information and a well-defined beam energy to reduce spurious background counts. It is situated 810 km from the origin of the NuMI beam and 14 milliradians (12 km) west of the beam's central axis. In this position, it samples a beam that has a much narrower energy distribution than if it were centrally located, further reducing the effect of backgrounds. The detector is designed as a pair of finely grained liquid scintillator detectors. The near detector is at Fermilab and samples the unoscillated beam. The far detector is in northern Minnesota, and consists of about 500,000 cells, each 4 cm × 6 cm × 16 m, filled with liquid scintillator. Each cell contains a loop of bare fiber optic cable to collect the scintillation light, both ends of which lead to an avalanche photodiode for readout. 
The near detector has the same general design, but is only about &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄200 as massive. This 222 ton detector is constructed of 186 planes of scintillator-filled cells (6 blocks of 31 planes) followed by a muon catcher. Although all the planes are identical, the first 6 are used as a veto region; particle showers which begin in them are assumed to not be neutrinos and ignored. The next 108 planes serve as the fiducial region; particle showers beginning in them are neutrino interactions of interest. The final 72 planes are a "shower containment region" which observe the trailing portion of particle showers which began in the fiducial region. Finally, a 1.7 meter long "muon catcher" region is constructed of steel plates interleaved with 10 active planes of liquid scintillator. Collaboration. The NOνA experiment includes scientists from a large number of institutions. Different institutions take on different tasks. The collaboration, and subgroups thereof, meets regularly via phone for weekly meetings, and in person several times a year. Participating institutions as of May 2024 are: Funding history. In late 2007, NOνA passed a Department of Energy "Critical Decision 2" review, meaning roughly that its design, cost, schedule, and scientific goals had been approved. This also allowed the project to be included in the Department of Energy congressional budget request. (NOνA still required a "Critical Decision 3" review to begin construction.) On 21 December 2007, President Bush signed an omnibus spending bill, H.R. 2764, which cut the funding for high energy physics by 88 million dollars from the expected value of 782 million dollars. The budget of Fermilab was cut by 52 million dollars. This bill explicitly stated that "Within funding for Proton Accelerator-Based Physics, no funds are provided for the NOνA activity in Tevatron Complex Improvements." So although the NOνA project retained its approval from both the Department of Energy and Fermilab, Congress left NOνA with no funds for the 2008 fiscal year to build its detector, pay its staff, or to continue in the pursuit of scientific results. However, in July 2008, Congress passed, and the President signed, a supplemental budget bill, which included funding for NOνA, allowing the collaboration to resume its work. The NOνA prototype near detector (Near Detector on Surface, or NDOS) began running at Fermilab in November and registered its first neutrinos from the NuMI beam on 15 December 2010. As a prototype, NDOS served the collaboration well in establishing a use case and suggesting improvements in the design of detector components that were later installed as a near detector at Fermilab, and a far detector at Ash River, MN (). Once construction of the NOvA building was complete, construction of the detector modules began. On 26 July 2012 the first module was laid in place. Placement and gluing of the modules continued over a year until the detector hall was filled. The first detection occurred on 11 February 2014 and construction completed in September that year. Full operation began in October 2014. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\nu_{\\mu}\\rightarrow\\nu_{e}" }, { "math_id": 1, "text": "\\bar{\\nu}_{\\mu}\\rightarrow\\bar{\\nu}_{e}." } ]
https://en.wikipedia.org/wiki?curid=6499353
649976
Yuan-Cheng Fung
Chinese-American bioengineer and writer (1919–2019) Yuan-Cheng "Bert" Fung (September 15, 1919 – December 15, 2019) was a Chinese-American bioengineer and writer. He is regarded as a founding figure of bioengineering and tissue engineering, and has been called the "Founder of Modern Biomechanics". Biography. Fung was born in Jiangsu Province, China, in 1919. He earned a bachelor's degree in 1941 and a master's degree in 1943 from the National Central University (later renamed Nanjing University in mainland China and reinstated in Taiwan), and earned a Ph.D. from the California Institute of Technology in 1948. Fung was Professor Emeritus and Research Engineer at the University of California San Diego. He published prominent texts along with Pin Tong, who was then at Hong Kong University of Science &amp; Technology. Fung died at the Jacobs Medical Center in San Diego, California, aged 100, on December 15, 2019. Fung was married to Luna Yu Hsien-Shih, a former mathematician and cofounder of the UC San Diego International Center, until her death in 2017. The couple raised two children. Research. He is the author of numerous books including "Foundations of Solid Mechanics", "Continuum Mechanics", and a series of books on Biomechanics. He is also one of the principal founders of the "Journal of Biomechanics" and was a past chair of the ASME International Applied Mechanics Division. In 1972, Fung established the Biomechanics Symposium under the American Society of Mechanical Engineers. This biannual summer meeting, first held at the Georgia Institute of Technology, became the annual Summer Bioengineering Conference. Fung and colleagues were also the first to recognize the importance of residual stress on arterial mechanical behavior. Fung's Law. Fung's famous exponential strain constitutive equation for preconditioned soft tissues is formula_0 where formula_1 are quadratic forms of the Green-Lagrange strains formula_2, and formula_3, formula_4 and formula_5 are material constants. formula_6 is the strain energy per unit volume, i.e. the mechanical strain energy at a given temperature. Materials that follow this law are known as Fung-elastic. Honors and awards. Fung was elected to the United States National Academy of Sciences (1993), the National Academy of Engineering (1979), the Institute of Medicine (1991), the Academia Sinica (1968), and was a Foreign Member of the Chinese Academy of Sciences (1994 election). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
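To make the form of Fung's law concrete, the following minimal Python sketch (an illustrative addition, reduced to a single scalar strain component with placeholder material constants rather than values fitted to any tissue) evaluates the strain-energy density:

import math

def fung_strain_energy(E, a, b, c):
    # Reduced (scalar) form of Fung's exponential strain-energy law.
    # E : a single Green-Lagrange strain component (illustrative only)
    # a, b, c : material constants (hypothetical placeholder values)
    q = a * E * E          # quadratic form q reduced to a*E^2
    Q = b * E * E          # quadratic form Q reduced to b*E^2
    return 0.5 * (q + c * (math.exp(Q) - 1.0))

# The energy rises much faster than a purely quadratic (Hookean) law would,
# reproducing the characteristic stiffening of soft tissue at larger strains.
for E in (0.05, 0.10, 0.20):
    print(E, fung_strain_energy(E, a=1.0, b=10.0, c=0.5))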
[ { "math_id": 0, "text": "w = \\frac{1}{2}\\left[q + c\\left( e^Q -1 \\right) \\right]" }, { "math_id": 1, "text": "q=a_{ijkl}E_{ij}E_{kl} \\qquad Q=b_{ijkl}E_{ij}E_{kl}" }, { "math_id": 2, "text": "E_{ij}" }, { "math_id": 3, "text": "a_{ijkl}" }, { "math_id": 4, "text": "b_{ijkl}" }, { "math_id": 5, "text": "c" }, { "math_id": 6, "text": "w" } ]
https://en.wikipedia.org/wiki?curid=649976
65000020
Jane Dewey
American physicist (1900–1976) Jane Mary Dewey (July 11, 1900 – September 19, 1976) was an American physicist. Early life and education. Jane Mary Dewey was born in Chicago, the daughter (and sixth child) of philosopher John Dewey and educator Alice Chipman Dewey. Her parents named her in honor of Jane Addams, an activist, sociologist, and reformer; and Mary Rozet Smith, a philanthropist who was Addams's longtime companion. She was educated at the Ethical Culture School and then the Spence School, after which she attended Barnard College, graduating in 1922. She moved from New York to New England for graduate studies, earning a PhD in Physical Chemistry from the Massachusetts Institute of Technology in 1925. Career. After graduating from MIT, Dewey worked for two years researching in the newly emergent field of quantum mechanics with Nobelist Niels Bohr and future-Nobelist Werner Heisenberg as a postdoctoral researcher at the Universitets Institut for Teoretisk Fysik in Copenhagen. During this time, she delivered a series of lectures on wave mechanics to the rest of Bohr's research team. She then moved to Princeton University, where she worked with Karl Taylor Compton with support from a National Research Council fellowship. In 1929, she became a faculty member at the University of Rochester, nominally under the geology department but in fact at the university's Institute of Applied Optics. Between her time at MIT and her time at Rochester, Dewey was a prolific author, publishing eight articles in major science journals, the first being "Intensities in the Stark Effect of Helium," published in "Physical Review" in 1926. In 1931, Dewey left Rochester for Bryn Mawr College, where she became an assistant professor in physics. That year, she was elected a Fellow of the American Physical Society, and she soon took on the position of department chair. However, her marriage — to fellow physicist J. Alston Clark — broke apart, and her health worsened, forcing her to take medical leave. During her absence, Bryn Mawr replaced her as chair with a male physics professor (Walter C. Michels), and Dewey was unemployed until 1940, when she found a part-time instructor position at Hunter College. Her health suddenly restored, she moved to industry, taking a wartime job at the United States Rubber Company and then, in 1947, a staff position at the Army's Ballistic Research Laboratory (BRL) at Aberdeen Proving Ground, where she headed the Terminal Ballistics Laboratory. Legacy. Dewey-Mackenzie estimate. In a landmark paper, while at United States Rubber Company, Dewey derived the elastic constants of a solid material filled with non-rigid particles. In 1950, Mackenzie presented a similar derivation for a solid containing spherical holes. They both made the assumption that the distribution of the inclusions is diffuse enough that neighboring inclusions do not affect one another. Similar derivations that rely on this assumption have therefore come to be called "Dewey-Mackenzie estimates." Mackenzie's solution may be considered a special case of the more general and difficult problem that Dewey set herself and succeeded in solving exactly. As of 2021, her paper has been cited over 130 times in scientific journals. In fact, the approach in her original paper is now so well known that it is often referred to only indirectly, as a "Dewey-Mackenzie estimate," without citation. Slade-Dewey equation. 
One of her contributions to ballistic science while at BRL has come to be known as the Slade-Dewey equation, which empirically relates the critical impact velocity "Vt" for initiating detonation of a solid secondary explosive or propellant to the diameter "d" of an impacting projectile, formula_0, where "A" and "B" are empirical constants that depend on the explosive. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
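Purely as an illustration of the trend the equation describes (the constants below are hypothetical placeholders, not measured values for any real explosive), a short Python sketch shows how the Slade-Dewey form predicts that smaller projectiles must strike faster to initiate detonation:

import math

def critical_velocity(d_mm, A=2000.0, B=300.0):
    # Slade-Dewey form: V_t = A / sqrt(d) + B
    # d_mm : projectile diameter in millimetres
    # A, B : empirical constants; placeholder values chosen only to show the trend
    return A / math.sqrt(d_mm) + B

for d in (5.0, 10.0, 20.0):
    print(d, critical_velocity(d))   # the predicted critical velocity falls as diameter grows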
[ { "math_id": 0, "text": "V_t = A / \\sqrt{d} + B" } ]
https://en.wikipedia.org/wiki?curid=65000020
650022
Richardson extrapolation
Sequence acceleration method in numerical analysis In numerical analysis, Richardson extrapolation is a sequence acceleration method used to improve the rate of convergence of a sequence of estimates of some value formula_0. In essence, given the value of formula_1 for several values of formula_2, we can estimate formula_3 by extrapolating the estimates to formula_4. It is named after Lewis Fry Richardson, who introduced the technique in the early 20th century, though the idea was already known to Christiaan Huygens in his calculation of formula_5. In the words of Birkhoff and Rota, "its usefulness for practical computations can hardly be overestimated." Practical applications of Richardson extrapolation include Romberg integration, which applies Richardson extrapolation to the trapezoid rule, and the Bulirsch–Stoer algorithm for solving ordinary differential equations. General formula. Notation. Let formula_6 be an approximation of formula_7 (the exact value) that depends on a step size h (where formula_8) with an error formula of the form formula_9 where the formula_10 are unknown constants and the formula_11 are known constants such that formula_12. Furthermore, formula_13 represents the truncation error of the formula_14 approximation such that formula_15 Similarly, in formula_16 the approximation formula_14 is said to be an formula_13 approximation. Note that by simplifying with Big O notation, the following formulae are equivalent: formula_17 Purpose. Richardson extrapolation is a process that finds a better approximation of formula_7 by changing the error formula from formula_18 to formula_19 Therefore, by replacing formula_6 with formula_20 the truncation error is reduced from formula_21 to formula_22 for the same step size formula_2. In general, formula_14 is a more accurate estimate than formula_23 when formula_24. By this process, we have achieved a better approximation of formula_7 by subtracting the largest term in the error, which was formula_21. This process can be repeated to remove more error terms to get even better approximations. Process. Using the step sizes formula_2 and formula_25 for some constant formula_26, the two formulas for formula_7 are: A^* = A_0(h) + a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + \cdots (equation 1) and A^* = A_0(h/t) + a_0(h/t)^{k_0} + a_1(h/t)^{k_1} + a_2(h/t)^{k_2} + \cdots (equation 2). To improve our approximation from formula_27 to formula_28 by removing the first error term, we multiply equation 2 by formula_29 and subtract equation 1 to give us formula_30 This multiplication and subtraction were performed because formula_31 is an formula_28 approximation of formula_32. We can solve our current formula for formula_7 to give formula_33 which can be written as formula_34 by setting formula_35 Recurrence relation. A general recurrence relation can be defined for the approximations by formula_36 where formula_37 satisfies formula_38 Properties. Richardson extrapolation can be considered a linear sequence transformation. Additionally, the general formula can be used to estimate formula_39 (the leading-order step-size behavior of the truncation error) when neither its value nor formula_7 is known "a priori". Such a technique can be useful for quantifying an unknown rate of convergence. 
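To make the recurrence relation above concrete, the following short Python sketch (an illustrative example added here, not part of Richardson's original presentation) applies a single extrapolation step to the forward-difference estimate of a derivative, for which the error exponents are k_0 = 1, k_1 = 2, ... and the step-size ratio is t = 2:

import math

def forward_difference(f, x, h):
    # A_0(h): a first-order estimate of f'(x), with error a_0*h + a_1*h^2 + ...
    return (f(x + h) - f(x)) / h

def richardson_step(A, h, t, k):
    # One level of the recurrence: A_1(h) = (t^k * A_0(h/t) - A_0(h)) / (t^k - 1)
    return (t**k * A(h / t) - A(h)) / (t**k - 1)

f, x, h, t = math.sin, 1.0, 0.1, 2.0
A0 = lambda step: forward_difference(f, x, step)
A1 = richardson_step(A0, h, t, k=1)      # removes the O(h) error term

print(abs(A0(h) - math.cos(x)))          # error of the plain estimate, roughly 0.04
print(abs(A1 - math.cos(x)))             # error after one step, about two orders of magnitude smaller

Repeating the step with k = 2, 3, ... removes successive error terms; this is the triangular-table scheme used in the pseudocode example later in the article.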
Given approximations of formula_7 from three distinct step sizes formula_2, formula_25, and formula_40, the exact relationship formula_41 yields an approximate relationship (note that the two O terms appearing in the equation above indicate only the leading-order step-size behavior; their explicit forms are different, and hence cancelling the two "O" terms is only approximately valid) formula_42 which can be solved numerically to estimate formula_39 for some arbitrary valid choices of formula_2, formula_43, and formula_26. Assuming formula_44, if formula_45 and formula_43 is chosen so that formula_46, this approximate relation reduces to a quadratic equation in formula_29, which is readily solved for formula_39 in terms of formula_2 and formula_26. Example of Richardson extrapolation. Suppose that we wish to approximate formula_7, and we have a method formula_1 that depends on a small parameter formula_2 in such a way that formula_47 Let us define a new function formula_48 where formula_2 and formula_49 are two distinct step sizes. Then formula_50 formula_51 is called the Richardson extrapolation of "A"("h"), and has a higher-order error estimate formula_52 compared to formula_53. Very often, it is much easier to obtain a given precision by using "R"("h") rather than "A"("h′") with a much smaller "h′", where "A"("h′") can cause problems due to limited precision (rounding errors) and/or the increasing number of calculations needed (see examples below). Example pseudocode for Richardson extrapolation. The following pseudocode in MATLAB style demonstrates Richardson extrapolation to help solve the ODE formula_54, formula_55 with the Trapezoidal method. In this example we halve the step size formula_2 each iteration and so in the discussion above we'd have that formula_56. The error of the Trapezoidal method can be expressed in terms of odd powers so that the error over multiple steps can be expressed in even powers; this leads us to raise formula_26 to the second power and to take powers of formula_57 in the pseudocode. We want to find the value of formula_58, which has the exact solution of formula_59 since the exact solution of the ODE is formula_60. This pseudocode assumes that a function called codice_0 exists which attempts to compute codice_1 by performing the trapezoidal method on the function codice_2, with starting point codice_3 and codice_4 and step size codice_5. Note that starting with too small an initial step size can potentially introduce error into the final solution. Although there are methods designed to help pick the best initial step size, one option is to start with a large step size and then to allow the Richardson extrapolation to reduce the step size each iteration until the error reaches the desired tolerance.

tStart = 0 % Starting time
tEnd = 5 % Ending time
f = -y^2 % The derivative of y, so y' = f(t, y(t)) = -y^2
% The solution to this ODE is y = 1/(1 + t)
y0 = 1 % The initial position (i.e. y0 = y(tStart) = y(0) = 1)
tolerance = 10^-11 % 10 digit accuracy is desired
% Don't allow the iteration to continue indefinitely
maxRows = 20
% Pick an initial step size
initialH = tStart - tEnd
% Were we able to find the solution to within the desired tolerance? not yet.
haveWeFoundSolution = false

h = initialH

% Create a 2D matrix of size maxRows by maxRows to hold the Richardson extrapolates
% Note that this will be a lower triangular matrix and that at most two rows are actually
% needed at any time in the computation.
A = zeroMatrix(maxRows, maxRows)

% Compute the top left element of the matrix.
% The first row of this (lower triangular) matrix has now been filled.
A(1, 1) = Trapezoidal(f, tStart, tEnd, h, y0)

% Each row of the matrix requires one call to Trapezoidal
% This loop starts by filling the second row of the matrix,
% since the first row was computed above
for i = 1 : maxRows - 1 % Starting at i = 1, iterate at most maxRows - 1 times
    % Halve the previous value of h since this is the start of a new row.
    h = h/2
    % Start filling row i+1 from the left by calling
    % the Trapezoidal function with this new smaller step size
    A(i + 1, 1) = Trapezoidal(f, tStart, tEnd, h, y0)
    % Go across this current (i+1)-th row until the diagonal is reached
    for j = 1 : i
        % To compute A(i + 1, j + 1), which is the next Richardson extrapolate,
        % use the most recently computed value (i.e. A(i + 1, j))
        % and the value from the row above it (i.e. A(i, j)).
        A(i + 1, j + 1) = ((4^j).*A(i + 1, j) - A(i, j))/(4^j - 1);
    end
    % After leaving the above inner loop, the diagonal element of row i + 1 has been computed
    % This diagonal element is the latest Richardson extrapolate to be computed.
    % The difference between this extrapolate and the last extrapolate of row i is a good
    % indication of the error.
    if (absoluteValue(A(i + 1, i + 1) - A(i, i)) < tolerance) % If the result is within tolerance
        % Display the result of the Richardson extrapolation
        print("y = ", A(i + 1, i + 1))
        haveWeFoundSolution = true
        % Done, so leave the loop
        break
    end
end

% If we were not able to find a solution to within the desired tolerance
if (not haveWeFoundSolution)
    print("Warning: Not able to find solution to within the desired tolerance of ", tolerance);
    print("The last computed extrapolate was ", A(maxRows, maxRows))
end

References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
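The lead of this article mentions Romberg integration, which applies the same extrapolation table to the trapezoid rule. As a compact illustration (the Python sketch below is an added example with an arbitrary choice of integrand and tolerance, not part of the original article), the scheme can be written as:

import math

def trapezoid(f, a, b, n):
    # Composite trapezoid rule with n equal subintervals.
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def romberg(f, a, b, max_rows=10, tol=1e-10):
    # R[i][j]: trapezoid estimates refined by repeated Richardson extrapolation.
    R = [[trapezoid(f, a, b, 1)]]
    for i in range(1, max_rows):
        row = [trapezoid(f, a, b, 2**i)]
        for j in range(1, i + 1):
            # Same recurrence as the pseudocode above, with t = 2 and error exponents 2j.
            row.append((4**j * row[j - 1] - R[i - 1][j - 1]) / (4**j - 1))
        R.append(row)
        if abs(R[i][i] - R[i - 1][i - 1]) < tol:
            break
    return R[-1][-1]

print(romberg(math.sin, 0.0, math.pi))   # the exact value of the integral is 2

Because the composite trapezoid rule has an error expansion in even powers of the step size, the factors 4^j here play the same role as in the MATLAB-style pseudocode above.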
[ { "math_id": 0, "text": "A^\\ast = \\lim_{h\\to 0} A(h)" }, { "math_id": 1, "text": "A(h)" }, { "math_id": 2, "text": "h" }, { "math_id": 3, "text": "A^\\ast" }, { "math_id": 4, "text": "h=0" }, { "math_id": 5, "text": "\\pi" }, { "math_id": 6, "text": "A_0(h)" }, { "math_id": 7, "text": "A^*" }, { "math_id": 8, "text": "0 < h < 1" }, { "math_id": 9, "text": " A^* = A_0(h)+a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + \\cdots " }, { "math_id": 10, "text": "a_i" }, { "math_id": 11, "text": "k_i" }, { "math_id": 12, "text": "h^{k_i} > h^{k_{i+1}}" }, { "math_id": 13, "text": "O(h^{k_i})" }, { "math_id": 14, "text": "A_i(h)" }, { "math_id": 15, "text": "A^* = A_i(h)+O(h^{k_i})." }, { "math_id": 16, "text": "A^*=A_i(h)+O(h^{k_i})," }, { "math_id": 17, "text": " \\begin{align} \nA^* &= A_0(h) + a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + \\cdots \\\\ \nA^* &= A_0(h)+ a_0h^{k_0} + O(h^{k_1}) \\\\\nA^* &= A_0(h)+O(h^{k_0}) \n\\end{align} " }, { "math_id": 18, "text": "A^*=A_0(h)+O(h^{k_0})" }, { "math_id": 19, "text": "A^* = A_1(h) + O(h^{k_1})." }, { "math_id": 20, "text": "A_1(h)" }, { "math_id": 21, "text": "O(h^{k_0}) " }, { "math_id": 22, "text": "O(h^{k_1}) " }, { "math_id": 23, "text": "A_j(h)" }, { "math_id": 24, "text": "i>j" }, { "math_id": 25, "text": "h / t" }, { "math_id": 26, "text": "t" }, { "math_id": 27, "text": "O(h^{k_0})" }, { "math_id": 28, "text": "O(h^{k_1})" }, { "math_id": 29, "text": "t^{k_0}" }, { "math_id": 30, "text": " (t^{k_0}-1)A^* = \\bigg[t^{k_0}A_0\\left(\\frac{h}{t}\\right) - A_0(h)\\bigg] + \\bigg(t^{k_0}a_1\\bigg(\\frac{h}{t}\\bigg)^{k_1}-a_1h^{k_1}\\bigg)+ \\bigg(t^{k_0}a_2\\bigg(\\frac{h}{t}\\bigg)^{k_2}-a_2h^{k_2}\\bigg) + O(h^{k_3}). " }, { "math_id": 31, "text": "\\big[t^{k_0}A_0\\left(\\frac{h}{t}\\right) - A_0(h)\\big]" }, { "math_id": 32, "text": "(t^{k_0}-1)A^*" }, { "math_id": 33, "text": "A^* = \\frac{\\bigg[t^{k_0}A_0\\left(\\frac{h}{t}\\right) - A_0(h)\\bigg]}{t^{k_0}-1}\n+ \\frac{\\bigg(t^{k_0}a_1\\bigg(\\frac{h}{t}\\bigg)^{k_1}-a_1h^{k_1}\\bigg)}{t^{k_0}-1}\n+ \\frac{\\bigg(t^{k_0}a_2\\bigg(\\frac{h}{t}\\bigg)^{k_2}-a_2h^{k_2}\\bigg)}{t^{k_0}-1}\n+O(h^{k_3}) " }, { "math_id": 34, "text": "A^* = A_1(h)+O(h^{k_1})" }, { "math_id": 35, "text": "A_1(h) = \\frac{t^{k_0}A_0\\left(\\frac{h}{t}\\right) - A_0(h)}{t^{k_0}-1} ." }, { "math_id": 36, "text": " A_{i+1}(h) = \\frac{t^{k_i}A_i\\left(\\frac{h}{t}\\right) - A_i(h)}{t^{k_i}-1} " }, { "math_id": 37, "text": "k_{i+1}" }, { "math_id": 38, "text": " A^* = A_{i+1}(h) + O(h^{k_{i+1}}) ." }, { "math_id": 39, "text": "k_0" }, { "math_id": 40, "text": "h / s" }, { "math_id": 41, "text": "A^*=\\frac{t^{k_0}A_i\\left(\\frac{h}{t}\\right) - A_i(h)}{t^{k_0}-1} + O(h^{k_1}) = \\frac{s^{k_0}A_i\\left(\\frac{h}{s}\\right) - A_i(h)}{s^{k_0}-1} + O(h^{k_1})" }, { "math_id": 42, "text": "A_i\\left(\\frac{h}{t}\\right) + \\frac{A_i\\left(\\frac{h}{t}\\right) - A_i(h)}{t^{k_0}-1} \\approx A_i\\left(\\frac{h}{s}\\right) +\\frac{A_i\\left(\\frac{h}{s}\\right) - A_i(h)}{s^{k_0}-1}" }, { "math_id": 43, "text": "s" }, { "math_id": 44, "text": "t \\neq 1" }, { "math_id": 45, "text": "t>0" }, { "math_id": 46, "text": "s = t^2" }, { "math_id": 47, "text": "A(h) = A^\\ast + C h^n + O(h^{n+1})." }, { "math_id": 48, "text": " R(h,t) := \\frac{ t^n A(h/t) - A(h)}{t^n-1} " }, { "math_id": 49, "text": "\\frac{h}{t}" }, { "math_id": 50, "text": " R(h, t) = \\frac{ t^n ( A^* + C \\left(\\frac{h}{t}\\right)^n + O(h^{n+1}) ) - ( A^* + C h^n + O(h^{n+1}) ) }{ t^n - 1} = A^* + O(h^{n+1}). 
" }, { "math_id": 51, "text": " R(h,t) " }, { "math_id": 52, "text": " O(h^{n+1}) " }, { "math_id": 53, "text": " A(h) " }, { "math_id": 54, "text": "y'(t) = -y^2" }, { "math_id": 55, "text": "y(0) = 1" }, { "math_id": 56, "text": "t = 2" }, { "math_id": 57, "text": "4 = 2^2 = t^2" }, { "math_id": 58, "text": "y(5)" }, { "math_id": 59, "text": "\\frac{1}{5 + 1} = \\frac{1}{6} = 0.1666..." }, { "math_id": 60, "text": "y(t) = \\frac{1}{1 + t}" } ]
https://en.wikipedia.org/wiki?curid=650022
65002467
Convex Polyhedra (book)
1950 book on geometry by Aleksandr Danilovich Aleksandrov Convex Polyhedra is a book on the mathematics of convex polyhedra, written by Soviet mathematician Aleksandr Danilovich Aleksandrov, and originally published in Russian in 1950, under the title "Выпуклые многогранники". It was translated into German by Wilhelm Süss as "Konvexe Polyeder" in 1958. An updated edition, translated into English by Nurlan S. Dairbekov, Semën Samsonovich Kutateladze and Alexei B. Sossinsky, with added material by Victor Zalgaller, L. A. Shor, and Yu. A. Volkov, was published as "Convex Polyhedra" by Springer-Verlag in 2005. Topics. The main focus of the book is on the specification of geometric data that will determine uniquely the shape of a three-dimensional convex polyhedron, up to some class of geometric transformations such as congruence or similarity. It considers both bounded polyhedra (convex hulls of finite sets of points) and unbounded polyhedra (intersections of finitely many half-spaces). The 1950 Russian edition of the book included 11 chapters. The first chapter covers the basic topological properties of polyhedra, including their topological equivalence to spheres (in the bounded case) and Euler's polyhedral formula. After a lemma of Augustin Cauchy on the impossibility of labeling the edges of a polyhedron by positive and negative signs so that each vertex has at least four sign changes, the remainder of chapter 2 outlines the content of the remaining book. Chapters 3 and 4 prove Alexandrov's uniqueness theorem, characterizing the surface geometry of polyhedra as being exactly the metric spaces that are topologically spherical locally like the Euclidean plane except at a finite set of points of positive angular defect, obeying Descartes' theorem on total angular defect that the total angular defect should be formula_0. Chapter 5 considers the metric spaces defined in the same way that are topologically a disk rather than a sphere, and studies the flexible polyhedral surfaces that result. Chapters 6 through 8 of the book are related to a theorem of Hermann Minkowski that a convex polyhedron is uniquely determined by the areas and directions of its faces, with a new proof based on invariance of domain. A generalization of this theorem implies that the same is true for the perimeters and directions of the faces. Chapter 9 concerns the reconstruction of three-dimensional polyhedra from a two-dimensional perspective view, by constraining the vertices of the polyhedron to lie on rays through the point of view. The original Russian edition of the book concludes with two chapters, 10 and 11, related to Cauchy's theorem that polyhedra with flat faces form rigid structures, and describing the differences between the rigidity and infinitesimal rigidity of polyhedra, as developed analogously to Cauchy's rigidity theorem by Max Dehn. The 2005 English edition adds comments and bibliographic information regarding many problems that were posed as open in the 1950 edition but subsequently solved. It also includes in a chapter of supplementary material the translations of three related articles by Volkov and Shor, including a simplified proof of Pogorelov's theorems generalizing Alexandrov's uniqueness theorem to non-polyhedral convex surfaces. Audience and reception. Robert Connelly writes that, for a work describing significant developments in the theory of convex polyhedra that was however hard to access in the west, the English translation of "Convex Polyhedra" was long overdue. 
He calls the material on Alexandrov's uniqueness theorem "the star result in the book", and he writes that the book "had a great influence on countless Russian mathematicians". Nevertheless, he complains about the book's small number of exercises, and about an inconsistent level of presentation that fails to distinguish important and basic results from specialized technicalities. Although intended for a broad mathematical audience, "Convex Polyhedra" assumes a significant level of background knowledge in material including topology, differential geometry, and linear algebra. Reviewer Vasyl Gorkaviy recommends "Convex Polyhedra" to students and professional mathematicians as an introduction to the mathematics of convex polyhedra. He also writes that, over 50 years after its original publication, "it still remains of great interest for specialists", after being updated to include many new developments and to list new open problems in the area. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "4\\pi" } ]
https://en.wikipedia.org/wiki?curid=65002467
650086
Contour line
Curve along which a 3-D surface is at equal elevation A contour line (also isoline, isopleth, isoquant or isarithm) of a function of two variables is a curve along which the function has a constant value, so that the curve joins points of equal value. It is a plane section of the three-dimensional graph of the function formula_0 parallel to the formula_1-plane. More generally, a contour line for a function of two variables is a curve connecting points where the function has the same particular value. In cartography, a contour line (often just called a "contour") joins points of equal elevation (height) above a given level, such as mean sea level. A contour map is a map illustrated with contour lines, for example a topographic map, which thus shows valleys and hills, and the steepness or gentleness of slopes. The contour interval of a contour map is the difference in elevation between successive contour lines. The gradient of the function is always perpendicular to the contour lines. When the lines are close together the magnitude of the gradient is large: the variation is steep. A level set is a generalization of a contour line for functions of any number of variables. Contour lines are curved, straight or a mixture of both lines on a map describing the intersection of a real or hypothetical surface with one or more horizontal planes. The configuration of these contours allows map readers to infer the relative gradient of a parameter and estimate that parameter at specific places. Contour lines may be either traced on a visible three-dimensional model of the surface, as when a photogrammetrist viewing a stereo-model plots elevation contours, or interpolated from the estimated surface elevations, as when a computer program threads contours through a network of observation points of area centroids. In the latter case, the method of interpolation affects the reliability of individual isolines and their portrayal of slope, pits and peaks. History. The idea of lines that join points of equal value was rediscovered several times. The oldest known isobath (contour line of constant depth) is found on a map dated 1584 of the river Spaarne, near Haarlem, by Dutchman Pieter Bruinsz. In 1701, Edmond Halley used such lines (isogons) on a chart of magnetic variation. The Dutch engineer Nicholas Cruquius drew the bed of the river Merwede with lines of equal depth (isobaths) at intervals of 1 fathom in 1727, and Philippe Buache used them at 10-fathom intervals on a chart of the English Channel that was prepared in 1737 and published in 1752. Such lines were used to describe a land surface (contour lines) in a map of the Duchy of Modena and Reggio by Domenico Vandelli in 1746, and they were studied theoretically by Ducarla in 1771, and Charles Hutton used them in the Schiehallion experiment. In 1791, a map of France by J. L. Dupain-Triel used contour lines at 20-metre intervals, hachures, spot-heights and a vertical section. In 1801, the chief of the French Corps of Engineers, Haxo, used contour lines at the larger scale of 1:500 on a plan of his projects for Rocca d'Anfo, now in northern Italy, under Napoleon. By around 1843, when the Ordnance Survey started to regularly record contour lines in Great Britain and Ireland, they were already in general use in European countries. Isobaths were not routinely used on nautical charts until those of Russia from 1834, and those of Britain from 1838. 
As different uses of the technique were invented independently, cartographers began to recognize a common theme, and debated what to call these "lines of equal value" generally. The word "isogram" (from grc " "ἴσος" (isos)" 'equal' and " "γράμμα" (gramma)" 'writing, drawing') was proposed by Francis Galton in 1889 for lines indicating equality of some physical condition or quantity, though "isogram" can also refer to a word without a repeated letter. As late as 1944, John K. Wright still preferred "isogram", but it never attained wide usage. During the early 20th century, "isopleth" () was being used by 1911 in the United States, while "isarithm" () had become common in Europe. Additional alternatives, including the Greek-English hybrid "isoline" and "isometric line" (), also emerged. Despite attempts to select a single standard, all of these alternatives have survived to the present. When maps with contour lines became common, the idea spread to other applications. Perhaps the latest to develop are air quality and noise pollution contour maps, which first appeared in the United States in approximately 1970, largely as a result of national legislation requiring spatial delineation of these parameters. Types. Contour lines are often given specific names beginning with "iso-" according to the nature of the variable being mapped, although in many usages the phrase "contour line" is most commonly used. Specific names are most common in meteorology, where multiple maps with different variables may be viewed simultaneously. The prefix ""'iso-" can be replaced with "isallo-"" to specify a contour line connecting points where a variable changes at the same "rate" during a given time period. An isogon (from grc " "γωνία" (gonia)" 'angle') is a contour line for a variable which measures direction. In meteorology and in geomagnetics, the term "isogon" has specific meanings which are described below. An isocline () is a line joining points with equal slope. In population dynamics and in geomagnetics, the terms "isocline" and "isoclinic line" have specific meanings which are described below. Equidistant points. A curve of equidistant points is a set of points all at the same distance from a given point, line, or polyline. In this case the function whose value is being held constant along a contour line is a distance function. Isopleths. In 1944, John K. Wright proposed that the term "isopleth" be used for contour lines that depict a variable which cannot be measured at a point, but which instead must be calculated from data collected over an area, as opposed to "isometric lines" for variables that could be measured at a point; this distinction has since been followed generally. An example of an isopleth is population density, which can be calculated by dividing the population of a census district by the surface area of that district. Each calculated value is presumed to be the value of the variable at the centre of the area, and isopleths can then be drawn by a process of interpolation. The idea of an isopleth map can be compared with that of a choropleth map. In meteorology, the word "isopleth" is used for any type of contour line. Meteorology. Meteorological contour lines are based on interpolation of the point data received from weather stations and weather satellites. Weather stations are seldom exactly positioned at a contour line (when they are, this indicates a measurement precisely equal to the value of the contour). 
Instead, lines are drawn to best approximate the locations of exact values, based on the scattered information points available. Meteorological contour maps may present collected data such as actual air pressure at a given time, or generalized data such as average pressure over a period of time, or forecast data such as predicted air pressure at some point in the future. Thermodynamic diagrams use multiple overlapping contour sets (including isobars and isotherms) to present a picture of the major thermodynamic factors in a weather system. Barometric pressure. An isobar (from grc " "βάρος" (baros)" 'weight') is a line of equal or constant pressure on a graph, plot, or map; an isopleth or contour line of pressure. More accurately, isobars are lines drawn on a map joining places of equal average atmospheric pressure reduced to sea level for a specified period of time. In meteorology, the barometric pressures shown are reduced to sea level, not the surface pressures at the map locations. The distribution of isobars is closely related to the magnitude and direction of the wind field, and can be used to predict future weather patterns. Isobars are commonly used in television weather reporting. Isallobars are lines joining points of equal pressure change during a specific time interval. These can be divided into "anallobars", lines joining points of equal pressure increase during a specific time interval, and "katallobars", lines joining points of equal pressure decrease. In general, weather systems move along an axis joining high and low isallobaric centers. Isallobaric gradients are important components of the wind as they increase or decrease the geostrophic wind. An isopycnal is a line of constant density. An "isoheight" or "isohypse" is a line of constant geopotential height on a constant pressure surface chart. Isohypse and isoheight are simply known as lines showing equal pressure on a map. Temperature and related subjects. An isotherm (from grc " "θέρμη" (thermē)" 'heat') is a line that connects points on a map that have the same temperature. Therefore, all points through which an isotherm passes have the same or equal temperatures at the time indicated. An isotherm at 0 °C is called the freezing level. The term was coined by the Prussian geographer and naturalist Alexander von Humboldt, who as part of his research into the geographical distribution of plants published the first map of isotherms in Paris, in 1817. An isocheim is a line of equal mean winter temperature, and an isothere is a line of equal mean summer temperature. An isohel () is a line of equal or constant solar radiation. An isogeotherm is a line of equal temperature beneath the Earth's surface. Rainfall and air moisture. An isohyet or isohyetal line (from grc " "ὑετός" (huetos)" 'rain') is a line joining points of equal rainfall on a map in a given period. A map with isohyets is called an isohyetal map. An isohume is a line of constant relative humidity, while an isodrosotherm (from grc " "δρόσος" (drosos)" 'dew' and " "θέρμη" (therme)" 'heat') is a line of equal or constant dew point. An isoneph is a line indicating equal cloud cover. An isochalaz is a line of constant frequency of hail storms, and an isobront is a line drawn through geographical points at which a given phase of thunderstorm activity occurred simultaneously. Snow cover is frequently shown as a contour-line map. Wind. An isotach (from grc " "ταχύς" (tachus)" 'fast') is a line joining points with constant wind speed. 
In meteorology, the term isogon refers to a line of constant wind direction. Freeze and thaw. An isopectic line denotes equal dates of ice formation each winter, and an isotac denotes equal dates of thawing. Physical geography and oceanography. Elevation and depth. Contours are one of several common methods used to denote elevation or altitude and depth on maps. From these contours, a sense of the general terrain can be determined. They are used at a variety of scales, from large-scale engineering drawings and architectural plans, through topographic maps and bathymetric charts, up to continental-scale maps. "Contour line" is the most common usage in cartography, but isobath for underwater depths on bathymetric maps and isohypse for elevations are also used. In cartography, the contour interval is the elevation difference between adjacent contour lines. The contour interval should be the same over a single map. When calculated as a ratio against the map scale, a sense of the hilliness of the terrain can be derived. Interpretation. There are several rules to note when interpreting terrain contour lines: Of course, to determine differences in elevation between two points, the contour interval, or distance in altitude between two adjacent contour lines, must be known, and this is normally stated in the map key. Usually contour intervals are consistent throughout a map, but there are exceptions. Sometimes intermediate contours are present in flatter areas; these can be dashed or dotted lines at half the noted contour interval. When contours are used with hypsometric tints on a small-scale map that includes mountains and flatter low-lying areas, it is common to have smaller intervals at lower elevations so that detail is shown in all areas. Conversely, for an island which consists of a plateau surrounded by steep cliffs, it is possible to use smaller intervals as the height increases. Electrostatics. An isopotential map is a measure of electrostatic potential in space, often depicted in two dimensions with the electrostatic charges inducing that electric potential. The term equipotential line or isopotential line refers to a curve of constant electric potential. Whether crossing an equipotential line represents ascending or descending the potential is inferred from the labels on the charges. In three dimensions, equipotential surfaces may be depicted with a two dimensional cross-section, showing equipotential lines at the intersection of the surfaces and the cross-section. The general mathematical term level set is often used to describe the full collection of points having a particular potential, especially in higher dimensional space. Magnetism. In the study of the Earth's magnetic field, the term isogon or isogonic line refers to a line of constant magnetic declination, the variation of magnetic north from geographic north. An agonic line is drawn through points of zero magnetic declination. An isoporic line refers to a line of constant annual variation of magnetic declination An isoclinic line connects points of equal magnetic dip, and an aclinic line is the isoclinic line of magnetic dip zero. An isodynamic line (from or "dynamis" meaning 'power') connects points with the same intensity of magnetic force. Oceanography. Besides ocean depth, oceanographers use contour to describe diffuse variable phenomena much as meteorologists do with atmospheric phenomena. 
In particular, isobathytherms are lines showing depths of water with equal temperature, isohalines show lines of equal ocean salinity, and isopycnals are surfaces of equal water density. Geology. Various geological data are rendered as contour maps in structural geology, sedimentology, stratigraphy and economic geology. Contour maps are used to show the below ground surface of geologic strata, fault surfaces (especially low angle thrust faults) and unconformities. Isopach maps use isopachs (lines of equal thickness) to illustrate variations in thickness of geologic units. Environmental science. In discussing pollution, density maps can be very useful in indicating sources and areas of greatest contamination. Contour maps are especially useful for diffuse forms or scales of pollution. Acid precipitation is indicated on maps with isoplats. Some of the most widespread applications of environmental science contour maps involve mapping of environmental noise (where lines of equal sound pressure level are denoted isobels), air pollution, soil contamination, thermal pollution and groundwater contamination. By contour planting and contour ploughing, the rate of water runoff and thus soil erosion can be substantially reduced; this is especially important in riparian zones. Ecology. An isoflor is an isopleth contour connecting areas of comparable biological diversity. Usually, the variable is the number of species of a given genus or family that occurs in a region. Isoflor maps are thus used to show distribution patterns and trends such as centres of diversity. Social sciences. In economics, contour lines can be used to describe features which vary quantitatively over space. An isochrone shows lines of equivalent drive time or travel time to a given location and is used in the generation of isochrone maps. An isotim shows equivalent transport costs from the source of a raw material, and an isodapane shows equivalent cost of travel time. Contour lines are also used to display non-geographic information in economics. Indifference curves (as shown at left) are used to show bundles of goods to which a person would assign equal utility. An isoquant (in the image at right) is a curve of equal production quantity for alternative combinations of input usages, and an isocost curve (also in the image at right) shows alternative usages having equal production costs. In political science an analogous method is used in understanding coalitions (for example the diagram in Laver and Shepsle's work). In population dynamics, an isocline shows the set of population sizes at which the rate of change, or partial derivative, for one population in a pair of interacting populations is zero. Statistics. In statistics, isodensity lines or isodensanes are lines that join points with the same value of a probability density. Isodensanes are used to display bivariate distributions. For example, for a bivariate elliptical distribution the isodensity lines are ellipses. Thermodynamics, engineering, and other sciences. Various types of graphs in thermodynamics, engineering, and other sciences use isobars (constant pressure), isotherms (constant temperature), isochors (constant specific volume), or other types of isolines, even though these graphs are usually not related to maps. Such isolines are useful for representing more than two dimensions (or quantities) on two-dimensional graphs. Common examples in thermodynamics are some types of phase diagrams. Isoclines are used to solve ordinary differential equations. 
In interpreting radar images, an isodop is a line of equal Doppler velocity, and an isoecho is a line of equal radar reflectivity. In the case of hybrid contours, energies of hybrid orbitals and the energies of pure atomic orbitals are plotted. The graph obtained is called hybrid contour. Graphical design. To maximize readability of contour maps, there are several design choices available to the map creator, principally line weight, line color, line type and method of numerical marking. Line weight is simply the darkness or thickness of the line used. This choice is made based upon the least intrusive form of contours that enable the reader to decipher the background information in the map itself. If there is little or no content on the base map, the contour lines may be drawn with relatively heavy thickness. Also, for many forms of contours such as topographic maps, it is common to vary the line weight and/or color, so that a different line characteristic occurs for certain numerical values. For example, in the topographic map above, the even hundred foot elevations are shown in a different weight from the twenty foot intervals. Line color is the choice of any number of pigments that suit the display. Sometimes a sheen or gloss is used as well as color to set the contour lines apart from the base map. Line colour can be varied to show other information. Line type refers to whether the basic contour line is solid, dashed, dotted or broken in some other pattern to create the desired effect. Dotted or dashed lines are often used when the underlying base map conveys very important (or difficult to read) information. Broken line types are used when the location of the contour line is inferred. Numerical marking is the manner of denoting the arithmetical values of contour lines. This can be done by placing numbers along some of the contour lines, typically using interpolation for intervening lines. Alternatively a map key can be produced associating the contours with their values. If the contour lines are not numerically labeled and adjacent lines have the same style (with the same weight, color and type), then the direction of the gradient cannot be determined from the contour lines alone. However, if the contour lines cycle through three or more styles, then the direction of the gradient can be determined from the lines. The orientation of the numerical text labels is often used to indicate the direction of the slope. Plan view versus profile view. Most commonly contour lines are drawn in plan view, or as an observer in space would view the Earth's surface: ordinary map form. However, some parameters can often be displayed in profile view showing a vertical profile of the parameter mapped. Some of the most common parameters mapped in profile are air pollutant concentrations and sound levels. In each of those cases it may be important to analyze (air pollutant concentrations or sound levels) at varying heights so as to determine the air quality or noise health effects on people at different elevations, for example, living on different floor levels of an urban apartment. In actuality, both plan and profile view contour maps are used in air pollution and noise pollution studies. Labeling contour maps. Labels are a critical component of elevation maps. A properly labeled contour map helps the reader to quickly interpret the shape of the terrain. If numbers are placed close to each other, it means that the terrain is steep. 
Labels should be placed along a slightly curved line "pointing" to the summit or nadir, from several directions if possible, making the visual identification of the summit or nadir easy. Contour labels can be oriented so a reader is facing uphill when reading the label. Manual labeling of contour maps is a time-consuming process; however, there are a few software systems that can do the job automatically and in accordance with cartographic conventions, called automatic label placement. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
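To connect the cartographic discussion back to the mathematical definition in the lead, the following short Python sketch (an illustration only, assuming NumPy and Matplotlib are available; the "elevation" function is an arbitrary choice) draws labelled contour lines of a function of two variables:

import numpy as np
import matplotlib.pyplot as plt

# Sample an arbitrary smooth "elevation" function f(x, y) on a grid.
x = np.linspace(-3, 3, 300)
y = np.linspace(-3, 3, 300)
X, Y = np.meshgrid(x, y)
Z = np.exp(-((X - 1)**2 + Y**2)) + 0.8 * np.exp(-((X + 1)**2 + (Y + 1)**2))

# Each curve joins points where Z takes the same value (a fixed contour interval).
levels = np.linspace(0.1, 0.9, 9)
cs = plt.contour(X, Y, Z, levels=levels)
plt.clabel(cs, inline=True, fontsize=8)   # label the contour lines with their values
plt.gca().set_aspect("equal")
plt.show()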
[ { "math_id": 0, "text": "f(x,y)" }, { "math_id": 1, "text": "(x,y)" } ]
https://en.wikipedia.org/wiki?curid=650086
6500860
Answer-seizure ratio
The answer-seizure ratio (ASR) is a measurement of network quality and call success rates in telecommunication. It is the percentage of answered telephone calls with respect to the total call volume. Definition. In telecommunication, an attempted call is termed a "seizure". The answer-seizure ratio is defined as 100 times the number of answered calls, i.e. the number of seizures resulting in an "answer" signal, divided by the total number of seizures: formula_0 Busy signals and other call rejections by the telephone network count as call failures. However, the inclusion of some failed calls in the ASR accounting varies in practical applications. This makes the ASR highly dependent on end-user action. Low answer-seizure ratios may be caused by far-end switch congestion, called parties not answering, and busy destination circuits. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
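As a minimal worked example (the call counts below are invented for illustration), the definition translates directly into code:

def answer_seizure_ratio(answered_calls, seized_calls):
    # ASR = 100 * answered / seized; undefined when no calls were attempted.
    if seized_calls == 0:
        raise ValueError("no seizures recorded")
    return 100.0 * answered_calls / seized_calls

print(answer_seizure_ratio(432, 800))   # 54.0, i.e. 54% of attempted calls were answered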
[ { "math_id": 0, "text": "ASR = 100 \\ \\frac {answered \\ calls}{seized \\ calls}" } ]
https://en.wikipedia.org/wiki?curid=6500860
65037
Tire
Ring-shaped covering that fits around a wheel's rim A tire (en-US) or tyre (en-GB) is a ring-shaped component that surrounds a wheel's rim to transfer a vehicle's load from the axle through the wheel to the ground and to provide traction on the surface over which the wheel travels. Most tires, such as those for automobiles and bicycles, are pneumatically inflated structures, providing a flexible cushion that absorbs shock as the tire rolls over rough features on the surface. Tires provide a footprint, called a contact patch, designed to match the vehicle's weight and the bearing on the surface that it rolls over by exerting a pressure that will avoid deforming the surface. The materials of modern pneumatic tires are synthetic rubber, natural rubber, fabric, and wire, along with carbon black and other chemical compounds. They consist of a tread and a body. The tread provides traction while the body provides containment for a quantity of compressed air. Before rubber was developed, tires were metal bands fitted around wooden wheels to hold the wheel together under load and to prevent wear and tear. Early rubber tires were solid (not pneumatic). Pneumatic tires are used on many vehicles, including cars, bicycles, motorcycles, buses, trucks, heavy equipment, and aircraft. Metal tires are used on locomotives and railcars, and solid rubber (or other polymers) tires are also used in various non-automotive applications, such as casters, carts, lawnmowers, and wheelbarrows. Unmaintained tires can lead to severe hazards for vehicles and people, ranging from flat tires making the vehicle inoperable to blowouts, where tires explode during operation and possibly damage vehicles and injure people. The manufacture of tires is often highly regulated for this reason. Because of the widespread use of tires for motor vehicles, tire waste is a substantial portion of global waste. There is a need for tire recycling through mechanical recycling and reuse, such as for crumb rubber and other tire-derived aggregate, and pyrolysis for chemical reuse, such as for tire-derived fuel. If not recycled properly or burned, waste tires release toxic chemicals into the environment. Moreover, the regular use of tires produces micro-plastic particles that contain these chemicals that both enter the environment and affect human health. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Etymology and spelling. The word "tire" is a short form of "attire", from the idea that a wheel with a tire is a dressed wheel. "Tyre" is the oldest spelling, and both "tyre" and "tire" were used during the 15th and 16th centuries. During the 17th and 18th centuries, "tire" became more common in print. The spelling "tyre" did not reappear until the 1840s when the English began shrink-fitting railway car wheels with malleable iron. Nevertheless, many publishers continued using "tire". "The Times" newspaper in London was still using "tire" as late as 1905. The spelling "tyre" began to be commonly used in the 19th century for pneumatic tires in the UK. The 1911 edition of the "Encyclopædia Britannica" states that "The spelling 'tyre' is not now accepted by the best English authorities, and is unrecognized in the US", while Fowler's "Modern English Usage" of 1926 describes that "there is nothing to be said for 'tyre', which is etymologically wrong, as well as needlessly divergent from our own [sc. British] older &amp; the present American usage". However, over the 20th century, "tyre" became established as the standard British spelling. 
History. The earliest tires were bands of leather, then iron (later steel) placed on wooden wheels used on carts and wagons. A skilled worker, known as a wheelwright, would cause the tire to expand by heating it in a forge fire, placing it over the wheel, and quenching it, causing the metal to contract back to its original size to fit tightly on the wheel. The first patent for what appears to be a standard pneumatic tire appeared in 1847 and was lodged by Scottish inventor Robert William Thomson. However, this idea never went into production. The first practical pneumatic tire was made in 1888 on May Street, Belfast, by Scots-born John Boyd Dunlop, owner of one of Ireland's most prosperous veterinary practices. It was an effort to prevent the headaches of his 10-year-old son Johnnie while riding his tricycle on rough pavements. His doctor, John, later Sir John Fagan, had prescribed cycling as an exercise for the boy and was a regular visitor. Fagan participated in designing the first pneumatic tires. Cyclist Willie Hume demonstrated the supremacy of Dunlop's tires in 1889, winning the tire's first-ever races in Ireland and then England. In Dunlop's tire patent specification dated 31 October 1888, his interest is only in its use in cycles and light vehicles. In September 1890, he was made aware of an earlier development, but the company kept the information to itself. In 1892, Dunlop's patent was declared invalid because of the prior art by forgotten fellow Scot Robert William Thomson of London (patents London 1845, France 1846, USA 1847). However, Dunlop is credited with "realizing rubber could withstand the wear and tear of being a tire while retaining its resilience". John Boyd Dunlop and Harvey du Cros worked through the ensuing considerable difficulties. They employed inventor Charles Kingston Welch and acquired other rights and patents, which allowed them some limited protection of their Pneumatic Tyre business's position. Pneumatic Tyre would become Dunlop Rubber and Dunlop Tyres. The development of this technology hinged on myriad engineering advances, including the vulcanization of natural rubber using sulfur, as well as the development of the "clincher" rim for holding the tire in place laterally on the wheel rim. Synthetic rubbers were invented in the laboratories of Bayer in the 1920s. Rubber shortages in the United Kingdom during WWII prompted research on alternatives to rubber tires with suggestions including leather, compressed asbestos, rayon, felt, bristles, and paper. In 1946, Michelin developed the radial tire method of construction. Michelin had bought the bankrupt Citroën automobile company in 1934 to utilize this new technology. Because of its superiority in handling and fuel economy, use of this technology quickly spread throughout Europe and Asia. In the US, the outdated bias-ply tire construction persisted until the Ford Motor Company adopted radial tires in the early 1970s, following a 1968 article in an influential American magazine, "Consumer Reports", highlighting the superiority of radial construction. The US tire industry lost its market share to Japanese and European manufacturers, which bought out US companies. Applications. Tires may be classified according to the type of vehicle they serve. They may be distinguished by the load they carry and by their application, e.g. to a motor vehicle, aircraft, or bicycle. Automotive. Light–medium duty. Light-duty tires for passenger vehicles carry loads in the range of on the drive wheel. 
Light-to-medium duty trucks and vans carry loads in the range of on the drive wheel. They are differentiated by speed rating for different vehicles, including (starting from the lowest speed to the highest): winter tires, light truck tires, entry-level car tires, sedans and vans, sport sedans, and high-performance cars. Apart from road tires, there are special categories: Other types of light-duty automotive tires include run-flat tires and race car tires: Heavy duty. Heavy-duty tires for large trucks and buses come in a variety of profiles and carry loads in the range of on the drive wheel. These are typically mounted in tandem on the drive axle. Other. Aircraft, bicycles, and a variety of industrial applications have distinct design requirements. Construction types. Tire construction spans pneumatic tires used on cars, trucks, and aircraft, but also includes non-automotive applications with slow-moving, light-duty, or railroad applications, which may have non-pneumatic tires. Automotive. Following the 1968 "Consumer Reports" announcement of the superiority of the radial design, radial tires began an inexorable climb in market share, reaching 100% of the North American market in the 1980s. Radial tire technology is now the standard design for essentially all automotive tires, but other methods have been used. Radial tire construction utilizes body ply cords extending from the beads and across the tread so that the cords are laid at approximately right angles to the centerline of the tread, and parallel to each other, as well as stabilizer belts directly beneath the tread. The belts may be cord or steel. The advantages of this construction include longer tread life, better steering control, fewer blowouts, improved fuel economy, and lower rolling resistance. Disadvantages of the radial tire are a harder ride at low speeds on rough roads and in the context of off-roading, decreased "self-cleaning" ability, and lower grip ability at low speeds. &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Bias tire (or cross ply) construction utilizes body ply cords that extend diagonally from bead to bead, usually at angles in the range of 30 to 40 degrees. Successive plies are laid at opposing angles forming a crisscross pattern to which the tread is applied. The design allows the entire tire body to flex easily, providing the main advantage of this construction, a smooth ride on rough surfaces. This cushioning characteristic also causes the major disadvantages of a bias tire: increased rolling resistance and less control and traction at higher speeds. A belted bias tire starts with two or more bias plies to which stabilizer belts are bonded directly beneath the tread. This construction provides a smoother ride that is similar to the bias tire, while lessening rolling resistance because the belts increase tread stiffness. The design was introduced by Armstrong, while Goodyear made it popular with the "Polyglas" trademark tire featuring a polyester carcass with belts of fiberglass. The "belted" tire starts two main plies of polyester, rayon, or nylon annealed as in conventional tires, and then placed on top are circumferential belts at different angles that improve performance compared to non-belted bias tires. The belts may be fiberglass or steel. Other. Tubeless tires are pneumatic tires that do not require a separate inner tube. Semi-pneumatic tires have a hollow center, but they are not pressurized. They are lightweight, low-cost, puncture-proof, and provide cushioning. 
These tires often come as a complete assembly with the wheel and even integral ball bearings. They are used on lawn mowers, wheelchairs, and wheelbarrows. They can also be rugged, typically used in industrial applications, and are designed not to pull off their rim under use. An airless tire is a non-pneumatic tire that is not supported by air pressure. They are most commonly used on small vehicles, such as golf carts, and on utility vehicles in situations where the risk of puncture is high, such as on construction equipment. Many tires used in industrial and commercial applications are non-pneumatic, and are manufactured from solid rubber and plastic compounds via molding operations. Solid tires include those used for lawnmowers, skateboards, golf carts, scooters, and many types of light industrial vehicles, carts, and trailers. One of the most common applications for solid tires is for material handling equipment (forklifts). Such tires are installed utilizing a hydraulic tire press. Wooden wheels for horse-drawn vehicles usually have a wrought iron tire. This construction was extended to wagons on horse-drawn tramways, rolling on granite setts or cast iron rails. The wheels of some railway engines and older types of rolling stock are fitted with railway tires to prevent the need to replace the entirety of a wheel. The tire, usually made of steel, surrounds the wheel and is primarily held in place by interference fit. Aircraft tires may operate at pressures that exceed . Some aircraft tires are inflated with nitrogen to "eliminate the possibility of a chemical reaction between atmospheric oxygen and volatile gases from the tire inner liner producing a tire explosion". Manufacturing. Pneumatic tires are manufactured in about 450 tire factories around the world. Tire production starts with bulk raw materials such as rubber, carbon black, and chemicals and produces numerous specialized components that are assembled and cured. Many kinds of rubber are used, the most common being styrene-butadiene copolymer. Forecasts for the global automotive tire market indicate continued growth through 2027. Estimates put the value of worldwide sales volume at around $126 billion in 2022, and it is expected to exceed $176 billion by 2027. Production of tires is also experiencing growth. In 2015, the US manufactured almost 170 million tires. Over 2.5 billion tires are manufactured annually, making the tire industry a major consumer of natural rubber. It was estimated that by 2019, 3 billion tires would be sold globally every year. Estimates put worldwide tire production at 2,268 million tires in 2021, and production is predicted to reach 2,665 million tires by 2027. As of 2011, the top three tire manufacturing companies by revenue were Bridgestone (manufacturing 190 million tires), Michelin (184 million), and Goodyear (181 million); they were followed by Continental and Pirelli. The Lego group produced over 318 million toy tires in 2011 and was recognized by Guinness World Records as having the highest annual production of tires by any manufacturer. Components. A tire comprises several components: the tread, bead, sidewall, shoulder, and ply. Tread. The tread is the part of the tire that comes in contact with the road surface. The portion that is in contact with the road at a given instant in time is the contact patch. The tread is a thick rubber or rubber/composite compound formulated to provide an appropriate level of traction that does not wear away too quickly. 
The tread pattern is characterized by a system of circumferential grooves, lateral sipes, and slots for road tires or a system of lugs and voids for tires designed for soft terrain or snow. Grooves run circumferentially around the tire and are needed to channel away water. Lugs are that portion of the tread design that contacts the road surface. Grooves, sipes, and slots allow tires to evacuate water. The design of treads and the interaction of specific tire types with the roadway surface affects roadway noise, a source of noise pollution emanating from moving vehicles. These sound intensities increase with higher vehicle speeds. Tires treads may incorporate a variety of distances between slots ("pitch lengths") to minimize noise levels at discrete frequencies. Sipes are slits cut across the tire, usually perpendicular to the grooves, which allow the water from the grooves to escape sideways and mitigate hydroplaning. Different tread designs address a variety of driving conditions. As the ratio of tire tread area to groove area increases, so does tire friction on dry pavement, as seen on Formula One tires, some of which have no grooves. High-performance tires often have smaller void areas to provide more rubber in contact with the road for higher traction, but may be compounded with softer rubber that provides better traction, but wears quickly. Mud and snow (M&amp;S) tires employ larger and deeper slots to engage mud and snow. Snow tires have still larger and deeper slots that compact snow and create shear strength within the compacted snow to improve braking and cornering performance. Wear bars (or wear indicators) are raised features located at the bottom of the tread grooves that indicate the tire has reached its wear limit. When the tread lugs are worn to the point that the wear bars connect across the lugs, the tires are fully worn and should be taken out of service, typically at a remaining tread depth of . Other. The tire bead is the part of the tire that contacts the rim on the wheel. This essential component is constructed with robust steel cables encased in durable, specially formulated rubber designed to resist stretching. The precision of the bead's fit is crucial, as it seals the tire against the wheel, maintaining air pressure integrity and preventing any loss of air. The bead's design ensures a secure, non-slip connection, preventing the tire from rotating independently from the wheel during vehicle motion. Additionally, the interplay between the bead's dimensions and the wheel's width significantly influences the vehicle's steering responsiveness and stability, as it helps to maintain the tire’s intended shape and contact with the road. The sidewall is that part of the tire, or bicycle tire, that bridges between the tread and bead. The sidewall is largely rubber but reinforced with fabric or steel cords that provide for tensile strength and flexibility. The sidewall contains air pressure and transmits the torque applied by the drive axle to the tread to create traction but supports little of the weight of the vehicle, as is clear from the total collapse of the tire when punctured. Sidewalls are molded with manufacturer-specific detail, government-mandated warning labels, and other consumer information. Sidewall may also have sometimes decorative ornamentation that includes whitewall or red-line inserts as well as tire lettering. The shoulder is that part of the tire at the edge of the tread as it makes the transition to the sidewall. 
Plies are layers of relatively inextensible cords embedded in the rubber to hold its shape by preventing the rubber from stretching in response to the internal pressure. The orientations of the plies play a large role in the performance of the tire and are one of the main ways that tires are categorized. Blems. Blem (short for "blemished") is a term used for a tire that failed inspection during manufacturing, but only for superficial, cosmetic, or aesthetic reasons. For example, a tire with white painted lettering which is smudged or incomplete might be classified as a "blem". Blem tires are fully functional and generally carry the same warranty as flawless tires but are sold at a discount. Materials. The materials of modern pneumatic tires can be divided into two groups: the cords that make up the ply and the elastomer that encases them. Cords. The cords, which form the ply and bead and provide the tensile strength necessary to contain the inflation pressure, can be composed of steel, natural fibers such as cotton or silk, or synthetic fibers such as nylon or Kevlar. Good adhesion between the cords and the rubber is important. To achieve this, the steel cords are coated in a thin layer of brass, and various additives, such as resorcinol/HMMM mixtures, are added to the rubber to improve binding. Elastomer. The elastomer, which forms the tread and encases the cords to protect them from abrasion and hold them in place, is a key component of pneumatic tire design. It can be composed of various composites of rubber material – the most common being styrene-butadiene copolymer – with other chemical compounds such as silica and carbon black. Optimizing rolling resistance in the elastomer material is a key challenge for reducing fuel consumption in the transportation sector. It is estimated that passenger vehicles consume approximately 5 to 15% of their fuel to overcome rolling resistance, while the estimate is understood to be higher for heavy trucks. However, there is a trade-off between rolling resistance and wet traction and grip: while low rolling resistance can be achieved by reducing the viscoelastic losses of the rubber compound (low tan "δ"), it comes at the cost of wet traction and grip, which require hysteresis and energy dissipation (high tan "δ"). A low tan "δ" value at 60 °C is used as an indicator of low rolling resistance, while a high tan "δ" value at 0 °C is used as an indicator of high wet traction. Designing an elastomer material that can achieve both high wet traction and low rolling resistance is key to achieving safety and fuel efficiency in the transportation sector. The most common elastomer material used today is a styrene-butadiene copolymer. It combines the properties of polybutadiene, which is a highly rubbery polymer ("Tg" = -100 °C) having high hysteresis and thus offering good wet grip properties, with the properties of polystyrene, which is a glassy polymer ("Tg" = 100 °C) having low hysteresis and thus offering low rolling resistance in addition to wear resistance. Therefore, the ratio of the two monomers in the styrene-butadiene copolymer is considered key in determining the glass transition temperature of the material, which is correlated to its grip and resistance properties. 
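The dependence of the copolymer's glass transition temperature on the styrene/butadiene ratio can be illustrated with a rough calculation. The sketch below uses the Fox equation, a common empirical mixing rule that is not mentioned above and is assumed here only for illustration, together with the homopolymer "Tg" values quoted in the text (polybutadiene ≈ −100 °C, polystyrene ≈ 100 °C); the function name and the example weight fractions are arbitrary.

```python
# Minimal sketch: estimate the Tg of a styrene-butadiene copolymer with the Fox equation,
#   1/Tg = w_styrene/Tg_styrene + w_butadiene/Tg_butadiene   (temperatures in kelvin).
# The Fox equation is an assumed empirical approximation, not a rule taken from this article.

def fox_tg(w_styrene: float, tg_styrene_c: float = 100.0, tg_butadiene_c: float = -100.0) -> float:
    """Estimated copolymer Tg in degrees Celsius for a given styrene weight fraction."""
    tg_s = tg_styrene_c + 273.15   # convert homopolymer Tg values to kelvin
    tg_b = tg_butadiene_c + 273.15
    w_b = 1.0 - w_styrene
    tg_mix_kelvin = 1.0 / (w_styrene / tg_s + w_b / tg_b)
    return tg_mix_kelvin - 273.15

for w in (0.0, 0.25, 0.40, 1.0):
    print(f"styrene weight fraction {w:.2f}: estimated Tg = {fox_tg(w):6.1f} C")
```

Raising the styrene fraction pushes the estimated "Tg" upward, consistent with the trade-off described above between grip (high hysteresis near service temperatures) and rolling resistance.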
Non-exhaust emissions of particulate matter, generated by the wearing down of brakes, clutches, tires, and road surfaces, as well as by the suspension of road dust, constitute a little-known but rising share of emissions from road traffic and significantly harm public health. On the wheel. Associated components of a tire include the wheel on which it is mounted, the valve stem through which air is introduced, and, for some tires, an inner tube that provides the airtight means for maintaining tire pressure. Performance characteristics. The interactions of a tire with the pavement are complex. A commonly used (empirical) model of tire properties is Pacejka's "Magic Formula". Some of these characteristics are explained below, by section. Wear. Tire wear is a major source of rubber pollution. A particular concern is that vehicle tire wear pollution is unregulated, unlike exhaust emissions. Sizes, codes, standards, and regulatory agencies. Automotive tires have a variety of identifying markings molded onto the sidewall as a tire code. They denote size, rating, and other information pertinent to that individual tire. Americas. The National Highway Traffic Safety Administration (NHTSA) is a U.S. government body within the Department of Transportation (DOT) tasked with regulating automotive safety in the United States. NHTSA established the Uniform Tire Quality Grading System (UTQG), a system for comparing the performance of tires according to the Code of Federal Regulations 49 CFR 575.104; it requires labeling of tires for tread wear, traction, and temperature. The DOT Code is an alphanumeric character sequence molded into the sidewall of the tire and allows the identification of the tire and its age. The code is mandated by the U.S. Department of Transportation but is used worldwide. The DOT Code is also useful in identifying tires subject to product recall or at end of life due to age (decoding the date portion is illustrated below). The "Tire and Rim Association" (T&RA) is a voluntary U.S. standards organization that promotes the interchangeability of tires, rims, and allied parts. Of particular interest, they publish key tire dimensions, rim contour dimensions, tire valve dimension standards, and load/inflation standards. The National Institute of Metrology Standardization and Industrial Quality (INMETRO) is the Brazilian federal body responsible for automotive wheel and tire certification. Europe. The European Tyre and Rim Technical Organisation (ETRTO) is the European standards organization "to establish engineering dimensions, load/pressure characteristics and operating guidelines". All tires sold for road use in Europe after July 1997 must carry an E-mark. The mark itself is either an upper case "E" or lower case "e", followed by a number in a circle or rectangle, followed by a further number. An (upper case) "E" indicates that the tire is certified to comply with the dimensional, performance, and marking requirements of ECE regulation 30. A (lowercase) "e" indicates that the tire is certified to comply with the dimensional, performance, and marking requirements of Directive 92/23/EEC. The number in the circle or rectangle denotes the country code of the government that granted the type approval. The last number outside the circle or rectangle is the number of the type approval certificate issued for that particular tire size and type. 
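The DOT Code mentioned under "Americas" above encodes the tire's date of manufacture, which is what makes age-based guidance (such as the BRMA recommendation that follows) practical to apply. For tires made since 2000, the final four digits of the full code give the week and year of manufacture; the sketch below assumes only that convention, and the function names and example code are illustrative rather than part of any standard or library.

```python
from datetime import date
from typing import Optional

def dot_manufacture_date(dot_code: str) -> date:
    """Approximate manufacture date from a full DOT code.

    Assumes the post-2000 convention that the last four digits are WWYY:
    a two-digit week of the year followed by a two-digit year.
    """
    digits = dot_code.replace(" ", "")[-4:]
    week, year = int(digits[:2]), 2000 + int(digits[2:])
    # ISO weeks run 1..52/53; clamp and take the Monday of that week as an approximation.
    return date.fromisocalendar(year, max(1, min(week, 52)), 1)

def age_in_years(dot_code: str, today: Optional[date] = None) -> float:
    today = today or date.today()
    return (today - dot_manufacture_date(dot_code)).days / 365.25

# Hypothetical code ending in "3219": week 32 of 2019.
print(dot_manufacture_date("DOT 4B08 4DHR 3219"))
```

On some tires the full code, including the date portion, appears on only one sidewall.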
The British Rubber Manufacturers Association (BRMA) recommended practice, issued June 2001, states, "BRMA members strongly recommend that unused tires should not be put into service if they are over six years old and that all tires should be replaced ten years from the date of their manufacture." Asia. The Japanese Automobile Tire Manufacturers Association (JATMA) is the Japanese standards organization for tires, rims, and valves. It performs functions similar to those of the T&RA and ETRTO. The China Compulsory Certification (CCC) is a mandatory certification system concerning product safety in China that went into effect in August 2002. The CCC certification system is operated by the State General Administration for Quality Supervision and Inspection and Quarantine of the People's Republic of China (AQSIQ) and the Certification and Accreditation Administration of the People's Republic of China (CNCA). Maintenance. To maintain tire health, several actions are appropriate: tire rotation, wheel alignment, and, sometimes, retreading. Inflation. Inflation is key to proper wear and rolling resistance of pneumatic tires. Many vehicles have monitoring systems to assure proper inflation. Most passenger cars are advised to maintain a tire pressure within the range of when the tires are not warmed by driving. Hazards. Tire hazards may arise from failure of the tire itself or from loss of traction on the surface over which it is rolling. Structural failures of a tire can result in flat tires or more dangerous blowouts. Some of these failures can be caused by manufacturing errors and may lead to recalls, such as the widespread Firestone tire failures on Ford vehicles that led to the Firestone and Ford tire controversy in the 1990s. Tire failure. Tires may fail for any of a variety of reasons. Health impacts. Tires contain a number of trace toxic chemicals, including heavy metals and chemical agents used to increase the durability of the tires. These typically include polycyclic aromatic hydrocarbons, benzothiazoles, isoprene, and heavy metals such as zinc and lead. As tires are used for vehicle operations, the natural wear of the tire leaves microfine particles equivalent to PM0.1, PM2.5, and PM10 as tire residue. This residue accumulates near roadways and vehicle use areas, but can also travel into the environment through surface runoff. Both humans and animals are exposed to these chemicals at the site of accumulation (for example, when walking on the road surface) and through their accumulation in natural environments and food chains. A 2023 literature review from Imperial College London warned that both the toxic chemicals and the microplastics produced by tire wear may have widespread, serious environmental and health consequences. Moreover, burning tires releases these chemicals as air pollutants and leaves toxic residues that can have significant effects on local communities and first responders. End of use. Once tires are discarded, they are considered scrap tires. Scrap tires are often re-used for purposes ranging from bumper-car barriers to weights for holding down tarps. Tires are not desired at landfills because their large volume and roughly 75% void space quickly consume valuable space. Rubber tires are likely to contain some traces of heavy metals or other serious pollutants, but these are tightly bonded within the actual rubber compound, so they are unlikely to be hazardous unless the tire structure is seriously damaged by fire or strong chemicals. 
Some facilities are permitted to recycle scrap tires by chipping and processing them into new products or selling the material to licensed power plants for fuel. Some tires may also be retreaded for re-use. Environmental issues. Americans generate about 285 million scrap tires per year. Many states have regulations as to the number of scrap tires that can be held on-site, due to concerns with dumping, fire hazards, and mosquitoes. In the past, millions of tires have been discarded into open fields. This creates a breeding ground for mosquitoes, since the tires often hold water inside and remain warm enough for mosquito breeding. Mosquitoes create a nuisance and may increase the likelihood of spreading disease. It also creates a fire danger, since such a large tire pile is a lot of fuel. Some tire fires have burned for months, since water does not adequately penetrate or cool the burning tires. Tires have been known to liquefy, releasing hydrocarbons and other contaminants to the ground and even groundwater, under extreme heat and temperatures from a fire. The black smoke from a tire fire causes air pollution and is a hazard to downwind properties. The use of scrap tire chips for landscaping has become controversial, due to the leaching of metals and other contaminants from the tire pieces. Zinc is concentrated (up to 2% by weight) to levels high enough to be highly toxic to aquatic life and plants. Of particular concern is evidence that some of the compounds that leach from tires into the water contain hormone disruptors and cause liver lesions. Tires are a major source of microplastic pollution. Retreading. Tires that are fully worn can be retreaded, re-manufactured to replace the worn tread. This is known as retreading or recapping, a process of buffing away the worn tread and applying a new tread. There are two main processes used for retreading tires, called mold-cure and pre-cure methods. Both processes start with the inspection of the tire, followed by non-destructive inspection method such as shearography to locate non-visible damage and embedded debris and nails. Some casings are repaired and some are discarded. Tires can be retreaded multiple times if the casing is in usable condition. Tires used for short delivery vehicles are retreaded more than long haul tires over the life of the tire body. Casings fit for retreading have the old tread buffed away to prepare for retreading. During the retreading process, retread technicians must ensure the casing is in the best condition possible to minimize the possibility of a casing failure. Casings with problems such as capped tread, tread separation, irreparable cuts, corroded belts or sidewall damage, or any run-flat or skidded tires, will be rejected. The mold cure method involves the application of raw rubber on the previously buffed and prepared casing, which is later cured in matrices. During the curing period, vulcanization takes place, and the raw rubber bonds to the casing, taking the tread shape of the matrix. On the other hand, the pre-cure method involves the application of a ready-made tread band on the buffed and prepared casing, which later is cured in an autoclave so that vulcanization can occur. Recycling. Tires can be recycled into, among other things, the hot melt asphalt, typically as crumb rubber modifier—recycled asphalt pavement (CRM—RAP), and as an aggregate in portland cement concrete. Shredded tires can create rubber mulch on playgrounds to diminish fall injuries. 
Some "green" buildings, both private and public, are being made from old tires. The tire pyrolysis method for recycling used tires is a technique that heats whole or shredded tires in a reactor vessel containing an oxygen-free atmosphere and a heat source. In the reactor, the rubber is softened, after which the rubber polymers continuously break down into smaller molecules. Other uses. Other downstream uses have been developed for worn-out tires. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "v_x" }, { "math_id": 1, "text": "v_y" } ]
https://en.wikipedia.org/wiki?curid=65037
650405
Spherical trigonometry
Geometry of figures on the surface of a sphere Spherical trigonometry is the branch of spherical geometry that deals with the metrical relationships between the sides and angles of spherical triangles, traditionally expressed using trigonometric functions. On the sphere, geodesics are great circles. Spherical trigonometry is of great importance for calculations in astronomy, geodesy, and navigation. The origins of spherical trigonometry in Greek mathematics and the major developments in Islamic mathematics are discussed fully in History of trigonometry and Mathematics in medieval Islam. The subject came to fruition in Early Modern times with important developments by John Napier, Delambre and others, and attained an essentially complete form by the end of the nineteenth century with the publication of Todhunter's textbook "Spherical Trigonometry for the Use of Colleges and Schools". Since then, significant developments have been the application of vector methods, quaternion methods, and the use of numerical methods. Preliminaries. Spherical polygons. A spherical polygon is a "polygon" on the surface of the sphere. Its sides are arcs of great circles—the spherical geometry equivalent of line segments in plane geometry. Such polygons may have any number of sides greater than 1. Two-sided spherical polygons—"lunes", also called "digons" or "bi-angles"—are bounded by two great-circle arcs: a familiar example is the curved outward-facing surface of a segment of an orange. Three arcs serve to define a spherical triangle, the principal subject of this article. Polygons with higher numbers of sides (4-sided spherical quadrilaterals, 5-sided spherical pentagons, etc.) are defined in a similar manner. Analogously to their plane counterparts, spherical polygons with more than 3 sides can always be treated as the composition of spherical triangles. One spherical polygon with interesting properties is the pentagramma mirificum, a 5-sided spherical star polygon with a right angle at every vertex. From this point in the article, discussion will be restricted to spherical triangles, referred to simply as "triangles". Notation. Both the vertices of a triangle and the angles at those vertices are denoted by the same upper-case letters "A", "B", and "C"; the sides opposite them are denoted by the corresponding lower-case letters "a", "b", and "c". On the unit sphere the sides are measured by the angles (in radians) that they subtend at the centre of the sphere. The angles of a triangle satisfy formula_0 and the sides satisfy formula_1. In particular, the sum of the angles of a spherical triangle is strictly greater than the sum of the angles of a triangle defined on the Euclidean plane, which is always exactly π radians. Polar triangles. The polar triangle associated with a triangle △"ABC" is defined as follows. Consider the great circle that contains the side BC. This great circle is defined by the intersection of a diametral plane with the surface. Draw the normal to that plane at the centre: it intersects the surface at two points, and the point on the same side of the plane as A is (conventionally) termed the pole of A and is denoted by A'. The points B' and C' are defined similarly. The triangle △"A'B'C' " is the polar triangle corresponding to triangle △"ABC". A very important theorem (Todhunter, Art.27) proves that the angles and sides of the polar triangle are given by formula_2 Therefore, if any identity is proved for △"ABC" then we can immediately derive a second identity by applying the first identity to the polar triangle by making the above substitutions. This is how the supplemental cosine equations are derived from the cosine equations. Similarly, the identities for a quadrantal triangle can be derived from those for a right-angled triangle. The polar triangle of a polar triangle is the original triangle. Cosine rules and sine rules. Cosine rules. 
The cosine rule is the fundamental identity of spherical trigonometry: all other identities, including the sine rule, may be derived from the cosine rule: formula_3 These identities generalize the cosine rule of plane trigonometry, to which they are asymptotically equivalent in the limit of small interior angles. (On the unit sphere, if formula_4 set formula_5 and formula_6 etc.; see Spherical law of cosines.) Sine rules. The spherical law of sines is given by the formula formula_7 These identities approximate the sine rule of plane trigonometry when the sides are much smaller than the radius of the sphere. Derivation of the cosine rule. The spherical cosine formulae were originally proved by elementary geometry and the planar cosine rule (Todhunter, Art.37). He also gives a derivation using simple coordinate geometry and the planar cosine rule (Art.60). The approach outlined here uses simpler vector methods. (These methods are also discussed at Spherical law of cosines.) Consider three unit vectors "OA", "OB", "OC" drawn from the origin to the vertices of the triangle (on the unit sphere). The arc subtends an angle of magnitude a at the centre and therefore "OB" · "OC" = cos "a". Introduce a Cartesian basis with along the z-axis and in the xz-plane making an angle c with the z-axis. The vector projects to ON in the xy-plane and the angle between ON and the x-axis is A. Therefore, the three vectors have components: formula_8 The scalar product in terms of the components is formula_9 Equating the two expressions for the scalar product gives formula_10 This equation can be re-arranged to give explicit expressions for the angle in terms of the sides: formula_11 The other cosine rules are obtained by cyclic permutations. Derivation of the sine rule. This derivation is given in Todhunter, (Art.40). From the identity formula_12 and the explicit expression for cos "A" given immediately above formula_13 Since the right hand side is invariant under a cyclic permutation of a, b, and c the spherical sine rule follows immediately. Alternative derivations. There are many ways of deriving the fundamental cosine and sine rules and the other rules developed in the following sections. For example, Todhunter gives two proofs of the cosine rule (Articles 37 and 60) and two proofs of the sine rule (Articles 40 and 42). The page on Spherical law of cosines gives four different proofs of the cosine rule. Text books on geodesy and spherical astronomy give different proofs and the online resources of MathWorld provide yet more. There are even more exotic derivations, such as that of Banerjee who derives the formulae using the linear algebra of projection matrices and also quotes methods in differential geometry and the group theory of rotations. The derivation of the cosine rule presented above has the merits of simplicity and directness and the derivation of the sine rule emphasises the fact that no separate proof is required other than the cosine rule. However, the above geometry may be used to give an independent proof of the sine rule. The scalar triple product, "OA" · ("OB" × "OC") evaluates to sin "b" sin "c" sin "A" in the basis shown. Similarly, in a basis oriented with the z-axis along , the triple product "OB" · ("OC" × "OA"), evaluates to sin "c" sin "a" sin "B". Therefore, the invariance of the triple product under cyclic permutations gives sin "b" sin "A" = sin "a" sin "B" which is the first of the sine rules. See curved variations of the law of sines to see details of this derivation. Identities. 
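Before turning to the derived identities, the cosine and sine rules above can be checked numerically with the same vector construction used in the derivations: place the vertex "A" at the pole, build "OB" and "OC" from chosen values of "b", "c" and the vertex angle "A", recover the side "a" from a dot product, and compare. The following Python sketch is purely illustrative; the numerical values are arbitrary.

```python
import numpy as np

# Unit vectors OA, OB, OC as in the derivation: OA along z, OB in the xz-plane at
# angle c from OA, OC at angle b from OA with azimuth equal to the vertex angle A.
c, b, A = 1.2, 0.8, 0.5
OA = np.array([0.0, 0.0, 1.0])
OB = np.array([np.sin(c), 0.0, np.cos(c)])
OC = np.array([np.sin(b) * np.cos(A), np.sin(b) * np.sin(A), np.cos(b)])

# The side a is the central angle between OB and OC.
a = np.arccos(np.clip(OB @ OC, -1.0, 1.0))

# Spherical cosine rule: cos a = cos b cos c + sin b sin c cos A, solved for A.
A_recovered = np.arccos((np.cos(a) - np.cos(b) * np.cos(c)) / (np.sin(b) * np.sin(c)))
print(A_recovered)                      # ~0.5, the angle used in the construction

# Sine rule: sin A / sin a equals the symmetric expression given in the derivation above.
lhs = np.sin(A_recovered) / np.sin(a)
rhs = np.sqrt(1 - np.cos(a)**2 - np.cos(b)**2 - np.cos(c)**2
              + 2 * np.cos(a) * np.cos(b) * np.cos(c)) / (np.sin(a) * np.sin(b) * np.sin(c))
print(np.isclose(lhs, rhs))             # True
```

Repeating the check after a cyclic permutation of the sides confirms that all three ratios in the sine rule agree.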
Supplemental cosine rules. Applying the cosine rules to the polar triangle gives (Todhunter, Art.47), "i.e." replacing A by , a by etc., formula_14 Cotangent four-part formulae. The six parts of a triangle may be written in cyclic order as (aCbAcB). The cotangent, or four-part, formulae relate two sides and two angles forming four "consecutive" parts around the triangle, for example (aCbA) or BaCb). In such a set there are inner and outer parts: for example in the set (BaCb) the inner angle is C, the inner side is a, the outer angle is B, the outer side is b. The cotangent rule may be written as (Todhunter, Art.44) formula_15 and the six possible equations are (with the relevant set shown at right): formula_16 To prove the first formula start from the first cosine rule and on the right-hand side substitute for cos "c" from the third cosine rule: formula_17 The result follows on dividing by sin "a" sin "b". Similar techniques with the other two cosine rules give CT3 and CT5. The other three equations follow by applying rules 1, 3 and 5 to the polar triangle. Half-angle and half-side formulae. With formula_18 and formula_19 formula_20 Another twelve identities follow by cyclic permutation. The proof (Todhunter, Art.49) of the first formula starts from the identity formula_21 using the cosine rule to express A in terms of the sides and replacing the sum of two cosines by a product. (See sum-to-product identities.) The second formula starts from the identity formula_22 the third is a quotient and the remainder follow by applying the results to the polar triangle. Delambre analogies. The Delambre analogies (also called Gauss analogies) were published independently by Delambre, Gauss, and Mollweide in 1807–1809. formula_23 Another eight identities follow by cyclic permutation. Proved by expanding the numerators and using the half angle formulae. (Todhunter, Art.54 and Delambre) Napier's analogies. formula_24 Another eight identities follow by cyclic permutation. These identities follow by division of the Delambre formulae. (Todhunter, Art.52) Taking quotients of these yields the law of tangents, first stated by Persian mathematician Nasir al-Din al-Tusi (1201–1274), formula_25 Napier's rules for right spherical triangles. When one of the angles, say C, of a spherical triangle is equal to π/2 the various identities given above are considerably simplified. There are ten identities relating three elements chosen from the set a, b, c, A, and B. Napier provided an elegant mnemonic aid for the ten independent equations: the mnemonic is called Napier's circle or Napier's pentagon (when the circle in the above figure, right, is replaced by a pentagon). First, write the six parts of the triangle (three vertex angles, three arc angles for the sides) in the order they occur around any circuit of the triangle: for the triangle shown above left, going clockwise starting with a gives aCbAcB. Next replace the parts that are not adjacent to C (that is A, c, and B) by their complements and then delete the angle C from the list. The remaining parts can then be drawn as five ordered, equal slices of a pentagram, or circle, as shown in the above figure (right). For any choice of three contiguous parts, one (the "middle" part) will be adjacent to two parts and opposite the other two parts. 
The ten Napier's Rules are given by two statements: the sine of the middle part equals the product of the tangents of the adjacent parts, and the sine of the middle part equals the product of the cosines of the opposite parts. The key for remembering which trigonometric function goes with which part is to look at the first vowel of the kind of part: middle parts take the sine, adjacent parts take the tangent, and opposite parts take the cosine. For example, starting with the sector containing a we have: formula_26 The full set of rules for the right spherical triangle is (Todhunter, Art.62) formula_27 Napier's rules for quadrantal triangles. A quadrantal spherical triangle is defined to be a spherical triangle in which one of the sides subtends an angle of π/2 radians at the centre of the sphere: on the unit sphere the side has length π/2. In the case that the side c has length π/2 on the unit sphere, the equations governing the remaining sides and angles may be obtained by applying the rules for the right spherical triangle of the previous section to the polar triangle △"A'B'C' " with sides a', b', c' such that "A' "= π − "a", "a' "= π − "A" etc. The results are: formula_28 Five-part rules. Substituting the second cosine rule into the first and simplifying gives: formula_29 Cancelling the factor of sin "c" gives formula_30 Similar substitutions in the other cosine and supplementary cosine formulae give a large variety of 5-part rules. They are rarely used. Cagnoli's Equation. Multiplying the first cosine rule by cos "A" gives formula_31 Similarly multiplying the first supplementary cosine rule by cos "a" yields formula_32 Subtracting the two and noting that it follows from the sine rules that formula_33 produces Cagnoli's equation formula_34 which is a relation between the six parts of the spherical triangle. Solution of triangles. Oblique triangles. The solution of triangles is the principal purpose of spherical trigonometry: given three, four or five elements of the triangle, determine the others. The case of five given elements is trivial, requiring only a single application of the sine rule. For four given elements there is one non-trivial case, which is discussed below. For three given elements there are six cases: three sides, two sides and an included or opposite angle, two angles and an included or opposite side, or three angles. (The last case has no analogue in planar trigonometry.) No single method solves all cases. The figure below shows the seven non-trivial cases: in each case the given sides are marked with a cross-bar and the given angles with an arc. (The given elements are also listed below the triangle.) In the summary notation here, such as ASA, A refers to a given angle and S refers to a given side, and the sequence of A's and S's in the notation refers to the corresponding sequence in the triangle. The solution methods listed here are not the only possible choices: many others are possible. In general it is better to choose methods that avoid taking an inverse sine because of the possible ambiguity between an angle and its supplement. The use of half-angle formulae is often advisable because half-angles will be less than π/2 and therefore free from ambiguity. There is a full discussion in Todhunter. The article Solution of triangles#Solving spherical triangles presents variants on these methods with a slightly different notation. There is a full discussion of the solution of oblique triangles in Todhunter. See also the discussion in Ross. Nasir al-Din al-Tusi was the first to list the six distinct cases (2-7 in the diagram) of a right triangle in spherical trigonometry. Solution by right-angled triangles. 
Another approach is to split the triangle into two right-angled triangles. For example, take the Case 3 example where b, c, and B are given. Construct the great circle from A that is normal to the side BC at the point D. Use Napier's rules to solve the triangle △"ABD": use c and B to find the sides AD and BD and the angle ∠"BAD". Then use Napier's rules to solve the triangle △"ACD": that is use AD and b to find the side DC and the angles C and ∠"DAC". The angle A and side a follow by addition. Numerical considerations. Not all of the rules obtained are numerically robust in extreme examples, for example when an angle approaches zero or π. Problems and solutions may have to be examined carefully, particularly when writing code to solve an arbitrary triangle. Area and spherical excess. Consider an N-sided spherical polygon and let An denote the n-th interior angle. The area of such a polygon is given by (Todhunter, Art.99) formula_35 For the case of a spherical triangle with angles A, B, and C this reduces to Girard's theorem formula_36 where E is the amount by which the sum of the angles exceeds π radians, called the spherical excess of the triangle. This theorem is named after its author, Albert Girard. An earlier proof was derived, but not published, by the English mathematician Thomas Harriot. On a sphere of radius R both of the above area expressions are multiplied by "R"2. The definition of the excess is independent of the radius of the sphere. The converse result may be written as formula_37 Since the area of a triangle cannot be negative the spherical excess is always positive. It is not necessarily small, because the sum of the angles may attain 5π (3π for "proper" angles). For example, an octant of a sphere is a spherical triangle with three right angles, so that the excess is π/2. In practical applications it "is" often small: for example the triangles of geodetic survey typically have a spherical excess much less than 1' of arc. On the Earth the excess of an equilateral triangle with sides 21.3 km (and area 393 km2) is approximately 1 arc second. There are many formulae for the excess. For example, Todhunter, (Art.101—103) gives ten examples including that of L'Huilier: formula_38 where formula_39 Because some triangles are badly characterized by their edges (e.g., if formula_40), it is often better to use the formula for the excess in terms of two edges and their included angle formula_41 When triangle △"ABC" is a right triangle with right angle at C, then cos "C" = 0 and sin "C" = 1, so this reduces to formula_42 Angle deficit is defined similarly for hyperbolic geometry. From latitude and longitude. The spherical excess of a spherical quadrangle bounded by the equator, the two meridians of longitudes formula_43 and formula_44 and the great-circle arc between two points with longitude and latitude formula_45 and formula_46 is formula_47 This result is obtained from one of Napier's analogies. In the limit where formula_48 are all small, this reduces to the familiar trapezoidal area, formula_49. The area of a polygon can be calculated from individual quadrangles of the above type, from (analogously) individual triangle bounded by a segment of the polygon and two meridians, by a line integral with Green's theorem, or via an equal-area projection as commonly done in GIS. The other algorithms can still be used with the side lengths calculated using a great-circle distance formula. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
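The excess formulas above are straightforward to evaluate numerically. The hedged Python sketch below (the function names are ad hoc, and it is an illustration rather than part of the original text) computes the excess of the octant triangle (three right angles, all sides π/2) with L'Huilier's formula and with the two-sides-and-included-angle formula, recovering the value π/2 required by Girard's theorem.

```python
import math

def excess_lhuilier(a: float, b: float, c: float) -> float:
    """Spherical excess E of a triangle on the unit sphere from its three sides (L'Huilier)."""
    s = 0.5 * (a + b + c)
    t = (math.tan(0.5 * s) * math.tan(0.5 * (s - a))
         * math.tan(0.5 * (s - b)) * math.tan(0.5 * (s - c)))
    return 4.0 * math.atan(math.sqrt(t))

def excess_from_two_sides(a: float, b: float, C: float) -> float:
    """Spherical excess from two sides and the included angle C."""
    t = math.tan(0.5 * a) * math.tan(0.5 * b)
    # atan2 keeps the result correct even when 1 + t*cos(C) is not positive.
    return 2.0 * math.atan2(t * math.sin(C), 1.0 + t * math.cos(C))

side = math.pi / 2          # octant of the sphere: a = b = c = pi/2, A = B = C = pi/2
print(excess_lhuilier(side, side, side))               # ~1.5708, i.e. pi/2
print(excess_from_two_sides(side, side, math.pi / 2))  # ~1.5708, agreeing with Girard's theorem
# On a sphere of radius R the corresponding area is E * R**2.
```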
[ { "math_id": 0, "text": "\n\\pi < A + B + C < 3\\pi\n" }, { "math_id": 1, "text": "0 < a + b + c < 2\\pi\n" }, { "math_id": 2, "text": "\\begin{alignat}{3}\n A' &= \\pi - a, &\\qquad B' &= \\pi - b , &\\qquad C' &= \\pi - c, \\\\\n a' &= \\pi - A, & b' &= \\pi - B , & c' &= \\pi - C .\n\\end{alignat}" }, { "math_id": 3, "text": "\\begin{align}\n \\cos a &= \\cos b \\cos c + \\sin b \\sin c \\cos A, \\\\[2pt]\n \\cos b &= \\cos c \\cos a + \\sin c \\sin a \\cos B, \\\\[2pt]\n \\cos c &= \\cos a \\cos b + \\sin a \\sin b \\cos C.\n\\end{align}" }, { "math_id": 4, "text": "a, b, c \\rightarrow 0" }, { "math_id": 5, "text": " \\sin a \\approx a " }, { "math_id": 6, "text": " (\\cos a - \\cos b)^2 \\approx 0" }, { "math_id": 7, "text": "\\frac{\\sin A}{\\sin a} = \\frac{\\sin B}{\\sin b} = \\frac{\\sin C}{\\sin c}." }, { "math_id": 8, "text": "\\begin{align}\n \\vec{OA}: &\\quad (0,\\,0,\\,1) \\\\\n \\vec{OB}: &\\quad (\\sin c,\\,0,\\,\\cos c) \\\\\n \\vec{OC}: &\\quad (\\sin b\\cos A,\\,\\sin b\\sin A,\\,\\cos b).\n\\end{align}" }, { "math_id": 9, "text": "\\vec{OB} \\cdot \\vec{OC} =\\sin c \\sin b \\cos A + \\cos c \\cos b." }, { "math_id": 10, "text": "\\cos a = \\cos b \\cos c + \\sin b \\sin c \\cos A." }, { "math_id": 11, "text": "\\cos A = \\frac{\\cos a-\\cos b\\cos c}{\\sin b \\sin c}." }, { "math_id": 12, "text": "\\sin^2 A=1-\\cos^2 A" }, { "math_id": 13, "text": "\n\\begin{align}\n \\sin^2 A &= 1 - \\left(\\frac{\\cos a - \\cos b \\cos c}{\\sin b \\sin c}\\right)^2 \\\\[5pt]\n &= \\frac{(1-\\cos^2 b)(1-\\cos^2 c)-(\\cos a - \\cos b\\cos c)^2}{\\sin^2\\!b \\,\\sin^2\\!c} \\\\[5pt]\n \\frac{\\sin A}{\\sin a} &= \\frac{\\sqrt{1-\\cos^2\\!a-\\cos^2\\!b-\\cos^2\\!c + 2\\cos a\\cos b\\cos c}}{\\sin a\\sin b\\sin c}.\n\\end{align}" }, { "math_id": 14, "text": "\\begin{align}\n\\cos A &= -\\cos B \\, \\cos C + \\sin B \\, \\sin C \\, \\cos a, \\\\\n\\cos B &= -\\cos C \\, \\cos A + \\sin C \\, \\sin A \\, \\cos b, \\\\\n\\cos C &= -\\cos A \\, \\cos B + \\sin A \\, \\sin B \\, \\cos c.\n\\end{align}" }, { "math_id": 15, "text": "\n \\cos\\!\\Bigl(\\begin{smallmatrix}\\text{inner} \\\\ \\text{side}\\end{smallmatrix}\\Bigr)\n \\cos\\!\\Bigl(\\begin{smallmatrix}\\text{inner} \\\\ \\text{angle}\\end{smallmatrix}\\Bigr) =\n \\cot\\!\\Bigl(\\begin{smallmatrix}\\text{outer} \\\\ \\text{side}\\end{smallmatrix}\\Bigr)\n \\sin\\!\\Bigl(\\begin{smallmatrix}\\text{inner} \\\\ \\text{side}\\end{smallmatrix}\\Bigr) -\n \\cot\\!\\Bigl(\\begin{smallmatrix}\\text{outer} \\\\ \\text{angle}\\end{smallmatrix}\\Bigr)\n \\sin\\!\\Bigl(\\begin{smallmatrix}\\text{inner} \\\\ \\text{angle}\\end{smallmatrix}\\Bigr),\n" }, { "math_id": 16, "text": "\\begin{alignat}{5}\n \\text{(CT1)}&& \\qquad \\cos b\\,\\cos C &= \\cot a\\,\\sin b - \\cot A \\,\\sin C \\qquad&&(aCbA)\\\\[0ex]\n \\text{(CT2)}&& \\cos b\\,\\cos A &= \\cot c\\,\\sin b - \\cot C \\,\\sin A &&(CbAc)\\\\[0ex]\n \\text{(CT3)}&& \\cos c\\,\\cos A &= \\cot b\\,\\sin c - \\cot B \\,\\sin A &&(bAcB)\\\\[0ex]\n \\text{(CT4)}&& \\cos c\\,\\cos B &= \\cot a\\,\\sin c - \\cot A \\,\\sin B &&(AcBa)\\\\[0ex]\n \\text{(CT5)}&& \\cos a\\,\\cos B &= \\cot c\\,\\sin a - \\cot C \\,\\sin B &&(cBaC)\\\\[0ex]\n \\text{(CT6)}&& \\cos a\\,\\cos C &= \\cot b\\,\\sin a - \\cot B \\,\\sin C &&(BaCb)\n\\end{alignat}" }, { "math_id": 17, "text": "\\begin{align}\n \\cos a &= \\cos b \\cos c + \\sin b \\sin c \\cos A \\\\\n &= \\cos b\\ (\\cos a \\cos b + \\sin a \\sin b \\cos C) + \\sin b \\sin C \\sin a \\cot A \\\\\n \\cos a \\sin^2 b &= \\cos b \\sin a \\sin b \\cos C + 
\\sin b \\sin C \\sin a \\cot A.\n\\end{align}" }, { "math_id": 18, "text": "2s=(a+b+c)" }, { "math_id": 19, "text": "2S=(A+B+C)," }, { "math_id": 20, "text": "\n\\begin{alignat}{5}\n \\sin{\\tfrac{1}{2}}A &= \\sqrt{\\frac{\\sin(s-b)\\sin(s-c)}{\\sin b\\sin c}}\n&\\qquad\\qquad\n \\sin{\\tfrac{1}{2}}a &= \\sqrt{\\frac{-\\cos S\\cos (S-A)}{\\sin B\\sin C}} \\\\[2ex]\n \\cos{\\tfrac{1}{2}}A &= \\sqrt{\\frac{\\sin s\\sin(s-a)}{\\sin b\\sin c}}\n & \\cos{\\tfrac{1}{2}}a &= \\sqrt{\\frac{\\cos (S-B)\\cos (S-C)}{\\sin B\\sin C}} \\\\[2ex]\n \\tan{\\tfrac{1}{2}}A &= \\sqrt{\\frac{\\sin(s-b)\\sin(s-c)}{\\sin s\\sin(s-a)}}\n & \\tan{\\tfrac{1}{2}}a &= \\sqrt{\\frac{-\\cos S\\cos (S-A)}{\\cos (S-B)\\cos(S-C)}}\n \\end{alignat}\n" }, { "math_id": 21, "text": "2\\sin^2\\!\\tfrac{A}{2} = 1 - \\cos A," }, { "math_id": 22, "text": "2\\cos^2\\!\\tfrac{A}{2} = 1 + \\cos A," }, { "math_id": 23, "text": "\n\\begin{align}\n \\frac{\\sin{\\tfrac{1}{2}}(A+B)}\n {\\cos{\\tfrac{1}{2}}C}\n=\\frac{\\cos{\\tfrac{1}{2}}(a-b)}\n {\\cos{\\tfrac{1}{2}}c}\n&\\qquad\\qquad\n&\n \\frac{\\sin{\\tfrac{1}{2}}(A-B)}\n {\\cos{\\tfrac{1}{2}}C}\n=\\frac{\\sin{\\tfrac{1}{2}}(a-b)}\n {\\sin{\\tfrac{1}{2}}c}\n\\\\[2ex]\n \\frac{\\cos{\\tfrac{1}{2}}(A+B)}\n {\\sin{\\tfrac{1}{2}}C}\n=\\frac{\\cos{\\tfrac{1}{2}}(a+b)}\n {\\cos{\\tfrac{1}{2}}c}\n&\\qquad\n&\n \\frac{\\cos{\\tfrac{1}{2}}(A-B)}\n {\\sin{\\tfrac{1}{2}}C}\n=\\frac{\\sin{\\tfrac{1}{2}}(a+b)}\n {\\sin{\\tfrac{1}{2}}c}\n \\end{align}\n" }, { "math_id": 24, "text": "\\begin{align}\n \\tan\\tfrac{1}{2}(A+B) = \\frac{\\cos\\tfrac{1}{2}(a-b)}{\\cos\\tfrac{1}{2}(a+b)} \\cot\\tfrac{1}{2}C\n&\\qquad&\n \\tan\\tfrac{1}{2}(a+b) = \\frac{\\cos\\tfrac{1}{2}(A-B)}{\\cos\\tfrac{1}{2}(A+B)}\\tan\\tfrac{1}{2}c\n\\\\[2ex]\n \\tan\\tfrac{1}{2}(A-B) = \\frac{\\sin\\tfrac{1}{2}(a-b)}{\\sin\\tfrac{1}{2}(a+b)} \\cot\\tfrac{1}{2}C\n&\\qquad& \n \\tan\\tfrac{1}{2}(a-b) =\\frac{\\sin\\tfrac{1}{2}(A-B)}{\\sin\\tfrac{1}{2}(A+B)} \\tan\\tfrac{1}{2}c\n\\end{align}" }, { "math_id": 25, "text": "\n\\frac{\\tan\\tfrac12(A-B)}{\\tan\\tfrac12(A+B)}\n= \\frac{\\tan\\tfrac12(a-b)}{\\tan\\tfrac12(a+b)} \n" }, { "math_id": 26, "text": "\\begin{align}\n \\sin a &= \\tan(\\tfrac{\\pi}{2} - B)\\,\\tan b \\\\[2pt]\n &= \\cos(\\tfrac{\\pi}{2} - c)\\, \\cos(\\tfrac{\\pi}{2} - A) \\\\[2pt]\n &= \\cot B\\,\\tan b \\\\[4pt]\n &= \\sin c\\,\\sin A. 
\n\\end{align}" }, { "math_id": 27, "text": "\\begin{alignat}{4}\n &\\text{(R1)}&\\qquad \\cos c&=\\cos a\\,\\cos b,\n&\\qquad\\qquad\n &\\text{(R6)}&\\qquad \\tan b&=\\cos A\\,\\tan c,\\\\\n &\\text{(R2)}& \\sin a &= \\sin A\\,\\sin c,\n&&\\text{(R7)}& \\tan a &= \\cos B\\,\\tan c,\\\\\n &\\text{(R3)}& \\sin b &= \\sin B\\,\\sin c,\n&&\\text{(R8)}& \\cos A &= \\sin B\\,\\cos a,\\\\\n &\\text{(R4)}& \\tan a &= \\tan A\\,\\sin b,\n&&\\text{(R9)}& \\cos B &= \\sin A\\,\\cos b,\\\\\n &\\text{(R5)}& \\tan b &= \\tan B\\,\\sin a,\n&&\\text{(R10)}& \\cos c &= \\cot A\\,\\cot B.\n\\end{alignat}" }, { "math_id": 28, "text": "\\begin{alignat}{4}\n &\\text{(Q1)}&\\qquad \\cos C &= -\\cos A\\,\\cos B,\n&\\qquad\\qquad\n &\\text{(Q6)}&\\qquad \\tan B &= -\\cos a\\,\\tan C,\\\\\n &\\text{(Q2)}& \\sin A &= \\sin a\\,\\sin C,\n&&\\text{(Q7)}& \\tan A &= -\\cos b\\,\\tan C,\\\\\n &\\text{(Q3)}& \\sin B &= \\sin b\\,\\sin C,\n&&\\text{(Q8)}& \\cos a &= \\sin b\\,\\cos A,\\\\\n &\\text{(Q4)}& \\tan A &= \\tan a\\,\\sin B,\n&&\\text{(Q9)}& \\cos b &= \\sin a\\,\\cos B,\\\\\n &\\text{(Q5)}& \\tan B &= \\tan b\\,\\sin A,\n&&\\text{(Q10)}& \\cos C &= -\\cot a\\,\\cot b.\n\\end{alignat}" }, { "math_id": 29, "text": "\\begin{align}\n \\cos a &= (\\cos a \\,\\cos c + \\sin a \\,\\sin c \\,\\cos B) \\cos c + \\sin b \\,\\sin c \\,\\cos A \\\\[4pt]\n \\cos a \\,\\sin^2 c &= \\sin a \\,\\cos c \\,\\sin c \\,\\cos B + \\sin b \\,\\sin c \\,\\cos A\n\\end{align}" }, { "math_id": 30, "text": "\\cos a \\sin c = \\sin a \\,\\cos c \\,\\cos B + \\sin b \\,\\cos A" }, { "math_id": 31, "text": "\\cos a \\cos A = \\cos b \\,\\cos c \\,\\cos A + \\sin b \\,\\sin c - \\sin b \\,\\sin c \\,\\sin^2 A." }, { "math_id": 32, "text": "\\cos a \\cos A = -\\cos B \\,\\cos C \\,\\cos a + \\sin B \\,\\sin C - \\sin B \\,\\sin C \\,\\sin^2 a." }, { "math_id": 33, "text": " \\sin b \\,\\sin c \\,\\sin^2 A = \\sin B \\,\\sin C \\,\\sin^2 a " }, { "math_id": 34, "text": "\\sin b \\,\\sin c + \\cos b \\,\\cos c \\,\\cos A = \\sin B \\,\\sin C - \\cos B \\,\\cos C \\,\\cos a" }, { "math_id": 35, "text": "{\\text{Area of polygon} \\atop \\text{(on the unit sphere)}} \\equiv E_N = \\left(\\sum_{n=1}^{N} A_{n}\\right) - (N-2)\\pi." }, { "math_id": 36, "text": " {\\text{Area of triangle} \\atop \\text{(on the unit sphere)}} \\equiv E = E_3 = A+B+C -\\pi," }, { "math_id": 37, "text": " A+B+C = \\pi + \\frac{4\\pi \\times \\text{Area of triangle}}{\\text{Area of the sphere}}." }, { "math_id": 38, "text": "\\tan\\tfrac{1}{4}E\n= \\sqrt{\\tan\\tfrac{1}{2}s\\, \\tan\\tfrac{1}{2}(s-a)\\, \\tan\\tfrac{1}{2}(s-b)\\,\\tan\\tfrac{1}{2}(s-c)}" }, { "math_id": 39, "text": "s = \\tfrac{1}{2}(a+b+c)" }, { "math_id": 40, "text": "a = b \\approx \\frac12c" }, { "math_id": 41, "text": "\\tan\\tfrac12 E = \\frac\n{\\tan\\frac12a\\tan\\frac12b\\sin C}{1 + \\tan\\frac12a\\tan\\frac12b\\cos C}." }, { "math_id": 42, "text": "\\tan\\tfrac12 E = \\tan\\tfrac12a\\tan\\tfrac12b." }, { "math_id": 43, "text": "\\lambda_1" }, { "math_id": 44, "text": "\\lambda_2," }, { "math_id": 45, "text": "(\\lambda_1, \\varphi_1)" }, { "math_id": 46, "text": "(\\lambda_2, \\varphi_2)" }, { "math_id": 47, "text": "\n\\tan\\tfrac12 E_4\n= \\frac {\\sin\\tfrac12(\\varphi_2 + \\varphi_1)}{\\cos\\tfrac12(\\varphi_2 - \\varphi_1)}\n\\tan\\tfrac12(\\lambda_2 - \\lambda_1).\n" }, { "math_id": 48, "text": "\\varphi_1, \\varphi_2, \\lambda_2 - \\lambda_1" }, { "math_id": 49, "text": "E_4 \\approx \\frac12 (\\varphi_2 + \\varphi_1) (\\lambda_2 - \\lambda_1)" } ]
https://en.wikipedia.org/wiki?curid=650405
650518
Complex dynamics
Branch of mathematics Complex dynamics, or holomorphic dynamics, is the study of dynamical systems obtained by iterating a complex analytic mapping. This article focuses on the case of algebraic dynamics, where a polynomial or rational function is iterated. In geometric terms, that amounts to iterating a mapping from some algebraic variety to itself. The related theory of arithmetic dynamics studies iteration over the rational numbers or the p-adic numbers instead of the complex numbers. Dynamics in complex dimension 1. A simple example that shows some of the main issues in complex dynamics is the mapping formula_0 from the complex numbers C to itself. It is helpful to view this as a map from the complex projective line formula_1 to itself, by adding a point formula_2 to the complex numbers. (formula_1 has the advantage of being compact.) The basic question is: given a point formula_3 in formula_1, how does its "orbit" (or "forward orbit") formula_4 behave, qualitatively? The answer is: if the absolute value |"z"| is less than 1, then the orbit converges to 0, in fact more than exponentially fast. If |"z"| is greater than 1, then the orbit converges to the point formula_2 in formula_1, again more than exponentially fast. (Here 0 and formula_2 are "superattracting" fixed points of "f", meaning that the derivative of "f" is zero at those points. An "attracting" fixed point means one where the derivative of "f" has absolute value less than 1.) On the other hand, suppose that formula_5, meaning that "z" is on the unit circle in C. At these points, the dynamics of "f" is chaotic, in various ways. For example, for almost all points "z" on the circle in terms of measure theory, the forward orbit of "z" is dense in the circle, and in fact uniformly distributed on the circle. There are also infinitely many periodic points on the circle, meaning points with formula_6 for some positive integer "r". (Here formula_7 means the result of applying "f" to "z" "r" times, formula_8.) Even at periodic points "z" on the circle, the dynamics of "f" can be considered chaotic, since points near "z" diverge exponentially fast from "z" upon iterating "f". (The periodic points of "f" on the unit circle are "repelling": if formula_6, the derivative of formula_9 at "z" has absolute value greater than 1.) Pierre Fatou and Gaston Julia showed in the late 1910s that much of this story extends to any complex algebraic map from formula_1 to itself of degree greater than 1. (Such a mapping may be given by a polynomial formula_10 with complex coefficients, or more generally by a rational function.) Namely, there is always a compact subset of formula_1, the Julia set, on which the dynamics of "f" is chaotic. For the mapping formula_0, the Julia set is the unit circle. For other polynomial mappings, the Julia set is often highly irregular, for example a fractal in the sense that its Hausdorff dimension is not an integer. This occurs even for mappings as simple as formula_11 for a constant formula_12. The Mandelbrot set is the set of complex numbers "c" such that the Julia set of formula_11 is connected. There is a rather complete classification of the possible dynamics of a rational function formula_13 in the Fatou set, the complement of the Julia set, where the dynamics is "tame". Namely, Dennis Sullivan showed that each connected component "U" of the Fatou set is pre-periodic, meaning that there are natural numbers formula_14 such that formula_15. 
Therefore, to analyze the dynamics on a component "U", one can assume after replacing "f" by an iterate that formula_16. Then either (1) "U" contains an attracting fixed point for "f"; (2) "U" is "parabolic" in the sense that all points in "U" approach a fixed point in the boundary of "U"; (3) "U" is a Siegel disk, meaning that the action of "f" on "U" is conjugate to an irrational rotation of the open unit disk; or (4) "U" is a Herman ring, meaning that the action of "f" on "U" is conjugate to an irrational rotation of an open annulus. (Note that the "backward orbit" of a point "z" in "U", the set of points in formula_1 that map to "z" under some iterate of "f", need not be contained in "U".) The equilibrium measure of an endomorphism. Complex dynamics has been effectively developed in any dimension. This section focuses on the mappings from complex projective space formula_17 to itself, the richest source of examples. The main results for formula_17 have been extended to a class of rational maps from any projective variety to itself. Note, however, that many varieties have no interesting self-maps. Let "f" be an endomorphism of formula_17, meaning a morphism of algebraic varieties from formula_17 to itself, for a positive integer "n". Such a mapping is given in homogeneous coordinates by formula_18 for some homogeneous polynomials formula_19 of the same degree "d" that have no common zeros in formula_17. (By Chow's theorem, this is the same thing as a holomorphic mapping from formula_17 to itself.) Assume that "d" is greater than 1; then the degree of the mapping "f" is formula_20, which is also greater than 1. Then there is a unique probability measure formula_21 on formula_17, the equilibrium measure of "f", that describes the most chaotic part of the dynamics of "f". (It has also been called the Green measure or measure of maximal entropy.) This measure was defined by Hans Brolin (1965) for polynomials in one variable, by Alexandre Freire, Artur Lopes, Ricardo Mañé, and Mikhail Lyubich for formula_22 (around 1983), and by John Hubbard, Peter Papadopol, John Fornaess, and Nessim Sibony in any dimension (around 1994). The small Julia set formula_23 is the support of the equilibrium measure in formula_17; this is simply the Julia set when formula_22. For example, if "f" is the map given by formula_26 then the equilibrium measure formula_21 is the Haar measure on the "n"-dimensional torus formula_27 For more general holomorphic mappings from formula_17 to itself, the equilibrium measure can be much more complicated, as one sees already in complex dimension 1 from pictures of Julia sets. Characterizations of the equilibrium measure. A basic property of the equilibrium measure is that it is "invariant" under "f", in the sense that the pushforward measure formula_28 is equal to formula_21. Because "f" is a finite morphism, the pullback measure formula_29 is also defined, and formula_21 is totally invariant in the sense that formula_30. One striking characterization of the equilibrium measure is that it describes the asymptotics of almost every point in formula_17 when followed backward in time, by Jean-Yves Briend, Julien Duval, Tien-Cuong Dinh, and Sibony. Namely, for a point "z" in formula_17 and a positive integer "r", consider the probability measure formula_31 which is evenly distributed on the formula_32 points "w" with formula_33. Then there is a Zariski closed subset formula_34 such that for all points "z" not in "E", the measures just defined converge weakly to the equilibrium measure formula_21 as "r" goes to infinity. 
In more detail: only finitely many closed complex subspaces of formula_17 are totally invariant under "f" (meaning that formula_35), and one can take the "exceptional set" "E" to be the unique largest totally invariant closed complex subspace not equal to formula_17. Another characterization of the equilibrium measure (due to Briend and Duval) is as follows. For each positive integer "r", the number of periodic points of period "r" (meaning that formula_6), counted with multiplicity, is formula_36, which is roughly formula_32. Consider the probability measure which is evenly distributed on the points of period "r". Then these measures also converge to the equilibrium measure formula_21 as "r" goes to infinity. Moreover, most periodic points are repelling and lie in formula_23, and so one gets the same limit measure by averaging only over the repelling periodic points in formula_23. There may also be repelling periodic points outside formula_23. The equilibrium measure gives zero mass to any closed complex subspace of formula_17 that is not the whole space. Since the periodic points in formula_23 are dense in formula_23, it follows that the periodic points of "f" are Zariski dense in formula_17. A more algebraic proof of this Zariski density was given by Najmuddin Fakhruddin. Another consequence of formula_21 giving zero mass to closed complex subspaces not equal to formula_17 is that each point has zero mass. As a result, the support formula_23 of formula_21 has no isolated points, and so it is a perfect set. The support formula_23 of the equilibrium measure is not too small, in the sense that its Hausdorff dimension is always greater than zero. In that sense, an endomorphism of complex projective space with degree greater than 1 always behaves chaotically at least on part of the space. (There are examples where formula_23 is all of formula_17.) Another way to make precise that "f" has some chaotic behavior is that the topological entropy of "f" is always greater than zero, in fact equal to formula_37, by Mikhail Gromov, Michał Misiurewicz, and Feliks Przytycki. For any continuous endomorphism "f" of a compact metric space "X", the topological entropy of "f" is equal to the maximum of the measure-theoretic entropy (or "metric entropy") of all "f"-invariant measures on "X". For a holomorphic endomorphism "f" of formula_17, the equilibrium measure formula_21 is the "unique" invariant measure of maximal entropy, by Briend and Duval. This is another way to say that the most chaotic behavior of "f" is concentrated on the support of the equilibrium measure. Finally, one can say more about the dynamics of "f" on the support of the equilibrium measure: "f" is ergodic and, more strongly, mixing with respect to that measure, by Fornaess and Sibony. It follows, for example, that for almost every point with respect to formula_21, its forward orbit is uniformly distributed with respect to formula_21. Lattès maps. A Lattès map is an endomorphism "f" of formula_17 obtained from an endomorphism of an abelian variety by dividing by a finite group. In this case, the equilibrium measure of "f" is absolutely continuous with respect to Lebesgue measure on formula_17. Conversely, by Anna Zdunik, François Berteloot, and Christophe Dupont, the only endomorphisms of formula_17 whose equilibrium measure is absolutely continuous with respect to Lebesgue measure are the Lattès examples. That is, for all non-Lattès endomorphisms, formula_21 assigns its full mass 1 to some Borel set of Lebesgue measure 0. 
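In complex dimension 1 the backward-orbit characterization described above is easy to experiment with. For a quadratic polynomial such as "z" ↦ "z"² + "c" (the family appearing in the earlier discussion of Julia sets and the Mandelbrot set), averaging over the preimages of a generic point under repeated pullback produces measures converging to the equilibrium (Brolin) measure, and choosing one random inverse branch at each step produces points that accumulate on the Julia set accordingly. The Python sketch below is only an illustration of this idea; the parameter value, starting point, and sample sizes are arbitrary, and the function name is ad hoc.

```python
import cmath
import random

def sample_brolin_measure(c: complex, n_samples: int = 5000, burn_in: int = 50) -> list[complex]:
    """Approximate samples from the equilibrium (Brolin) measure of f(z) = z**2 + c.

    At each step one of the two branches of the inverse map z -> +/- sqrt(z - c)
    is chosen at random; by the equidistribution of backward orbits, the points
    produced accumulate on the Julia set, with the equilibrium measure as their
    limiting distribution.
    """
    z = complex(0.3, 0.7)            # almost any starting point works
    points = []
    for i in range(n_samples + burn_in):
        w = cmath.sqrt(z - c)
        z = w if random.random() < 0.5 else -w
        if i >= burn_in:             # discard early steps while the orbit settles onto the Julia set
            points.append(z)
    return points

# Example: c = -1 (the "basilica" parameter); plotting the points traces out its Julia set.
pts = sample_brolin_measure(-1 + 0j)
print(len(pts), max(abs(p) for p in pts))
```

Plotting the sampled points reproduces the familiar Julia set pictures; the theorem of Briend, Duval, Dinh, and Sibony quoted above is the analogous equidistribution statement in higher dimension.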
In dimension 1, more is known about the "irregularity" of the equilibrium measure. Namely, define the "Hausdorff dimension" of a probability measure formula_38 on formula_1 (or more generally on a smooth manifold) by formula_39 where formula_40 denotes the Hausdorff dimension of a Borel set "Y". For an endomorphism "f" of formula_1 of degree greater than 1, Zdunik showed that the dimension of formula_21 is equal to the Hausdorff dimension of its support (the Julia set) if and only if "f" is conjugate to a Lattès map, a Chebyshev polynomial (up to sign), or a power map formula_41 with formula_42. (In the latter cases, the Julia set is all of formula_1, a closed interval, or a circle, respectively.) Thus, outside those special cases, the equilibrium measure is highly irregular, assigning positive mass to some closed subsets of the Julia set with smaller Hausdorff dimension than the whole Julia set. Automorphisms of projective varieties. More generally, complex dynamics seeks to describe the behavior of rational maps under iteration. One case that has been studied with some success is that of "automorphisms" of a smooth complex projective variety "X", meaning isomorphisms "f" from "X" to itself. The case of main interest is where "f" acts nontrivially on the singular cohomology formula_43. Gromov and Yosef Yomdin showed that the topological entropy of an endomorphism (for example, an automorphism) of a smooth complex projective variety is determined by its action on cohomology. Explicitly, for "X" of complex dimension "n" and formula_44, let formula_45 be the spectral radius of "f" acting by pullback on the Hodge cohomology group formula_46. Then the topological entropy of "f" is formula_47 (The topological entropy of "f" is also the logarithm of the spectral radius of "f" on the whole cohomology formula_48.) Thus "f" has some chaotic behavior, in the sense that its topological entropy is greater than zero, if and only if it acts on some cohomology group with an eigenvalue of absolute value greater than 1. Many projective varieties do not have such automorphisms, but (for example) many rational surfaces and K3 surfaces do have such automorphisms. Let "X" be a compact Kähler manifold, which includes the case of a smooth complex projective variety. Say that an automorphism "f" of "X" has "simple action on cohomology" if: there is only one number "p" such that formula_45 takes its maximum value, the action of "f" on formula_49 has only one eigenvalue with absolute value formula_45, and this is a simple eigenvalue. For example, Serge Cantat showed that every automorphism of a compact Kähler surface with positive topological entropy has simple action on cohomology. (Here an "automorphism" is complex analytic but is not assumed to preserve a Kähler metric on "X". In fact, every automorphism that preserves a metric has topological entropy zero.) For an automorphism "f" with simple action on cohomology, some of the goals of complex dynamics have been achieved. Dinh, Sibony, and Henry de Thélin showed that there is a unique invariant probability measure formula_21 of maximal entropy for "f", called the equilibrium measure (or Green measure, or measure of maximal entropy). (In particular, formula_21 has entropy formula_50 with respect to "f".) The support of formula_21 is called the small Julia set formula_23. Informally: "f" has some chaotic behavior, and the most chaotic behavior is concentrated on the small Julia set. At least when "X" is projective, formula_23 has positive Hausdorff dimension. 
(More precisely, formula_21 assigns zero mass to all sets of sufficiently small Hausdorff dimension.) Kummer automorphisms. Some abelian varieties have an automorphism of positive entropy. For example, let "E" be a complex elliptic curve and let "X" be the abelian surface formula_51. Then the group formula_52 of invertible formula_53 integer matrices acts on "X". Any group element "f" whose trace has absolute value greater than 2, for example formula_54, has spectral radius greater than 1, and so it gives a positive-entropy automorphism of "X". The equilibrium measure of "f" is the Haar measure (the standard Lebesgue measure) on "X". The Kummer automorphisms are defined by taking the quotient of an abelian surface with automorphism by a finite group, and then blowing up to make the quotient surface smooth. The resulting surfaces include some special K3 surfaces and rational surfaces. For the Kummer automorphisms, the equilibrium measure has support equal to "X" and is smooth outside finitely many curves. Conversely, Cantat and Dupont showed that for all surface automorphisms of positive entropy except the Kummer examples, the equilibrium measure is not absolutely continuous with respect to Lebesgue measure. In this sense, it is usual for the equilibrium measure of an automorphism to be somewhat irregular. Saddle periodic points. A periodic point "z" of "f" is called a "saddle" periodic point if, for a positive integer "r" such that formula_6, at least one eigenvalue of the derivative of formula_9 on the tangent space at "z" has absolute value less than 1, at least one has absolute value greater than 1, and none has absolute value equal to 1. (Thus "f" is expanding in some directions and contracting in others, near "z".) For an automorphism "f" with simple action on cohomology, the saddle periodic points are dense in the support formula_23 of the equilibrium measure formula_21. On the other hand, the measure formula_21 vanishes on closed complex subspaces not equal to "X". It follows that the periodic points of "f" (or even just the saddle periodic points contained in the support of formula_21) are Zariski dense in "X". For an automorphism "f" with simple action on cohomology, "f" and its inverse map are ergodic and, more strongly, mixing with respect to the equilibrium measure formula_21. It follows that for almost every point "z" with respect to formula_21, the forward and backward orbits of "z" are both uniformly distributed with respect to formula_21. A notable difference from the case of endomorphisms of formula_17 is that for an automorphism "f" with simple action on cohomology, there can be a nonempty open subset of "X" on which neither forward nor backward orbits approach the support formula_23 of the equilibrium measure. For example, Eric Bedford, Kyounghee Kim, and Curtis McMullen constructed automorphisms "f" of a smooth projective rational surface with positive topological entropy (hence simple action on cohomology) such that "f" has a Siegel disk, on which the action of "f" is conjugate to an irrational rotation. Points in that open set never approach formula_23 under the action of "f" or its inverse. At least in complex dimension 2, the equilibrium measure of "f" describes the distribution of the isolated periodic points of "f". (There may also be complex curves fixed by "f" or an iterate, which are ignored here.) Namely, let "f" be an automorphism of a compact Kähler surface "X" with positive topological entropy formula_55.
Consider the probability measure which is evenly distributed on the isolated periodic points of period "r" (meaning that formula_6). Then this measure converges weakly to formula_21 as "r" goes to infinity, by Eric Bedford, Lyubich, and John Smillie. The same holds for the subset of saddle periodic points, because both sets of periodic points grow at a rate of formula_56. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f(z)=z^2" }, { "math_id": 1, "text": "\\mathbf{CP}^1" }, { "math_id": 2, "text": "\\infty" }, { "math_id": 3, "text": "z" }, { "math_id": 4, "text": "z,\\; f(z)=z^2,\\; f(f(z))=z^4, f(f(f(z)))=z^8,\\; \\ldots " }, { "math_id": 5, "text": "|z|=1" }, { "math_id": 6, "text": "f^r(z)=z" }, { "math_id": 7, "text": "f^r(z)" }, { "math_id": 8, "text": "f(f(\\cdots(f(z))\\cdots))" }, { "math_id": 9, "text": "f^r" }, { "math_id": 10, "text": "f(z)" }, { "math_id": 11, "text": "f(z)=z^2+c" }, { "math_id": 12, "text": "c\\in\\mathbf{C}" }, { "math_id": 13, "text": "f\\colon\\mathbf{CP}^1\\to \\mathbf{CP}^1" }, { "math_id": 14, "text": "a<b" }, { "math_id": 15, "text": "f^a(U)=f^b(U)" }, { "math_id": 16, "text": "f(U)=U" }, { "math_id": 17, "text": "\\mathbf{CP}^n" }, { "math_id": 18, "text": "f([z_0,\\ldots,z_n])=[f_0(z_0,\\ldots,z_n),\\ldots,f_n(z_0,\\ldots,z_n)]" }, { "math_id": 19, "text": "f_0,\\ldots,f_n" }, { "math_id": 20, "text": "d^n" }, { "math_id": 21, "text": "\\mu_f" }, { "math_id": 22, "text": "n=1" }, { "math_id": 23, "text": "J^*(f)" }, { "math_id": 24, "text": "d>1" }, { "math_id": 25, "text": "f\\colon \\mathbf{CP}^n\\to\\mathbf{CP}^n" }, { "math_id": 26, "text": "f([z_0,\\ldots,z_n])=[z_0^d,\\ldots,z_n^d]." }, { "math_id": 27, "text": "\\{[1,z_1,\\ldots,z_n]: |z_1|=\\cdots=|z_n|=1\\}." }, { "math_id": 28, "text": "f_*\\mu_f" }, { "math_id": 29, "text": "f^*\\mu_f" }, { "math_id": 30, "text": "f^*\\mu_f=\\deg(f)\\mu_f" }, { "math_id": 31, "text": "(1/d^{rn})(f^r)^*(\\delta_z)" }, { "math_id": 32, "text": "d^{rn}" }, { "math_id": 33, "text": "f^r(w)=z" }, { "math_id": 34, "text": "E\\subsetneq \\mathbf{CP}^n" }, { "math_id": 35, "text": "f^{-1}(S)=S" }, { "math_id": 36, "text": "(d^{r(n+1)}-1)/(d^r-1)" }, { "math_id": 37, "text": "n\\log d" }, { "math_id": 38, "text": "\\mu" }, { "math_id": 39, "text": "\\dim(\\mu)=\\inf \\{\\dim_H(Y):\\mu(Y)=1\\}," }, { "math_id": 40, "text": "\\dim_H(Y)" }, { "math_id": 41, "text": "f(z)=z^{\\pm d}" }, { "math_id": 42, "text": "d\\geq 2" }, { "math_id": 43, "text": "H^*(X,\\mathbf{Z})" }, { "math_id": 44, "text": "0\\leq p\\leq n" }, { "math_id": 45, "text": "d_p" }, { "math_id": 46, "text": "H^{p,p}(X)\\subset H^{2p}(X,\\mathbf{C})" }, { "math_id": 47, "text": "h(f)=\\max_p \\log d_p." }, { "math_id": 48, "text": "H^*(X,\\mathbf{C})" }, { "math_id": 49, "text": "H^{p,p}(X)" }, { "math_id": 50, "text": "\\log d_p" }, { "math_id": 51, "text": "E\\times E" }, { "math_id": 52, "text": "GL(2,\\mathbf{Z})" }, { "math_id": 53, "text": "2\\times 2" }, { "math_id": 54, "text": "\\begin{pmatrix}2&1\\\\1&1\\end{pmatrix}" }, { "math_id": 55, "text": "h(f)=\\log d_1" }, { "math_id": 56, "text": "(d_1)^r" } ]
https://en.wikipedia.org/wiki?curid=650518
6505575
Milne model
Cosmological model The Milne model was a special-relativistic cosmological model proposed by Edward Arthur Milne in 1935. It is mathematically equivalent to a special case of the FLRW model in the limit of zero energy density, and it obeys the cosmological principle. The Milne model is also similar to Rindler space in that both are simple re-parameterizations of flat Minkowski space. Since it features both zero energy density and maximally negative spatial curvature, the Milne model is inconsistent with cosmological observations. Cosmologists actually observe the universe's density parameter to be consistent with unity and its curvature to be consistent with flatness. Milne metric. The Milne universe is a special case of a more general Friedmann–Lemaître–Robertson–Walker model (FLRW). The Milne solution can be obtained from the more generic FLRW model by demanding that the energy density, pressure and cosmological constant all equal zero and the spatial curvature is negative. From these assumptions and the Friedmann equations it follows that the scale factor must depend linearly on the time coordinate. Setting the spatial curvature and speed of light to unity, the metric for a Milne universe can be expressed with hyperspherical coordinates as: formula_0 where formula_1 is the metric for a two-sphere and formula_2 is the curvature-corrected radial component for negatively curved space that varies between 0 and formula_3. The empty space that the Milne model describes can be identified with the inside of a light cone of an event in Minkowski space by a change of coordinates. Milne developed this model independently of general relativity but with awareness of special relativity. As he initially described it, the model has no expansion of space, so all of the redshift (except that caused by peculiar velocities) is explained by a recessional velocity associated with the hypothetical "explosion". However, the mathematical equivalence of the zero energy density (formula_4) version of the FLRW metric to Milne's model implies that a full general relativistic treatment using Milne's assumptions would result in a linearly increasing scale factor for all time, since the deceleration parameter is uniquely zero for such a model. Milne's density function. Milne proposed that the universe's density changes in time because of an initial outward explosion of matter. Milne's model assumes an inhomogeneous density function which is Lorentz invariant (around the event t=x=y=z=0). When rendered graphically, Milne's density distribution shows a three-dimensional spherical Lobachevskian pattern with outer edges moving outward at the speed of light. Every inertial body perceives itself to be at the center of the explosion of matter (see observable universe), and sees the local universe as homogeneous and isotropic in the sense of the cosmological principle. In order to be consistent with general relativity, the universe's density must be negligible in comparison to the critical density at all times for which the Milne model is taken to apply. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
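The identification of the Milne universe with the interior of a light cone in flat Minkowski space, mentioned above, can be checked by a short standard computation (a textbook exercise, not taken from Milne's original paper): substituting hyperbolic coordinates into the Minkowski line element reproduces the Milne metric formula_0.

```latex
% Inside the future light cone of the origin, set
%   T = t cosh(chi),   R = t sinh(chi).
% The mixed dt dchi terms cancel, and the Minkowski line element becomes the Milne metric.
\begin{aligned}
dT &= \cosh\chi\, dt + t\sinh\chi\, d\chi, \qquad
dR  = \sinh\chi\, dt + t\cosh\chi\, d\chi,\\
dT^2 - dR^2 &= (\cosh^2\chi - \sinh^2\chi)\, dt^2 - t^2(\cosh^2\chi - \sinh^2\chi)\, d\chi^2
             = dt^2 - t^2\, d\chi^2,\\
dT^2 - dR^2 - R^2\, d\Omega^2 &= dt^2 - t^2\left(d\chi^2 + \sinh^2\chi\, d\Omega^2\right).
\end{aligned}
```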
[ { "math_id": 0, "text": "ds^2 = dt^2-t^2(d \\chi ^2+\\sinh^2{\\chi} d\\Omega^2)\\ " }, { "math_id": 1, "text": "d\\Omega^2 = d\\theta^2+\\sin^2\\theta d\\phi^2\\ " }, { "math_id": 2, "text": "\\chi = \\sinh^{-1}{r}" }, { "math_id": 3, "text": "+\\infin" }, { "math_id": 4, "text": "\\rho = 0" } ]
https://en.wikipedia.org/wiki?curid=6505575
6505948
MaxDiff
The MaxDiff is a long-established theory in mathematical psychology with very specific assumptions about how people make choices: it assumes that respondents evaluate all possible pairs of items within the displayed set and choose the pair that reflects the maximum difference in preference or importance. It may be thought of as a variation of the method of Paired Comparisons. Consider a set in which a respondent evaluates four items: A, B, C and D. If the respondent says that A is best and D is worst, these two responses inform us on five of six possible implied paired comparisons: A vs. B, A vs. C, A vs. D, B vs. D, and C vs. D. The only paired comparison that cannot be inferred is B vs. C. In a choice like the one above, with four items, MaxDiff questioning informs on five of six implied paired comparisons. In a choice among five items, MaxDiff questioning informs on seven of ten implied paired comparisons. The total number of known relations between items can be expressed mathematically as follows: formula_0. Here N represents the total number of items. The formula makes it clear that the effectiveness of this method of inferring relations drastically decreases as N grows, because the number of inferred comparisons grows only linearly in N while the total number of possible pairs grows quadratically. Overview. In 1938 Richardson introduced a choice method in which subjects reported the most alike pair of a triad and the most different pair. The component of this method involving the most different pair may be properly called "MaxDiff" in contrast to a "most-least" or "best-worst" method where both the most different pair and the direction of difference are obtained. Ennis, Mullen and Frijters (1988) derived a unidimensional Thurstonian scaling model for Richardson's method of triads so that the results could be scaled under normality assumptions about the item percepts. MaxDiff may involve multidimensional percepts, unlike most-least models that assume a unidimensional representation. MaxDiff and most-least methods belong to a class of methods that do not require the estimation of a cognitive parameter as occurs in the analysis of ratings data. This is one of the reasons for their popularity in applications. Other methods in this class include the 2- and 3-alternative forced choice methods, the triangular method, which is a special case of Richardson's method, the duo-trio method and the specified and unspecified methods of tetrads. All of these methods have well-developed Thurstonian scaling models, as discussed in Ennis (2016), which also includes a Thurstonian model for first-last or most-least choice and ranks with rank-induced dependencies. There are a number of possible processes through which subjects may make a most-least decision, including paired comparisons and ranking, but it is typically not known how the decision is reached. Relationship to best–worst scaling ("MaxDiff" surveys). MaxDiff and best–worst scaling (BWS or "MaxDiff surveys") have erroneously been considered synonyms. Respondents can produce best-worst data in any of a number of ways, with a MaxDiff process being but one. Instead of evaluating all possible pairs (the MaxDiff model), they might choose the best from n items, the worst from the remaining n-1, or vice versa (sequential models). Or indeed they may use another method entirely. Thus it should be clear that MaxDiff is a subset of BWS; MaxDiff is BWS, but BWS is not necessarily MaxDiff.
Indeed, MaxDiff might not be considered an attractive model on psychological and intuitive grounds: as the number of items increases, the number of possible pairs increases in a multiplicative fashion; n items produce n(n-1) ordered pairs (where best-worst order matters). Assuming respondents do evaluate all possible pairs is a strong assumption. Early work did use the term MaxDiff to refer to BWS, but with Marley's return to the field, correct academic terminology has been disseminated in some parts of the world. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
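A small enumeration makes the counting above concrete. The sketch below is illustrative code, not taken from the MaxDiff literature: it derives the implied paired comparisons from a single best/worst answer and compares their number with the 2(N-1)-1 formula and with the total number of pairs.

```python
# Count the paired comparisons implied by one "best and worst" answer
# among N items, and compare with 2(N-1)-1 and with all N(N-1)/2 unordered pairs.
from itertools import combinations

def implied_pairs(items, best, worst):
    """Pairs (a, b) meaning 'a is preferred to b' that follow from the answer."""
    pairs = set()
    for x in items:
        if x != best:
            pairs.add((best, x))      # best beats every other item
        if x != worst:
            pairs.add((x, worst))     # every other item beats worst
    return pairs

items = ["A", "B", "C", "D"]
known = implied_pairs(items, best="A", worst="D")
total = len(list(combinations(items, 2)))     # 6 unordered pairs for N = 4
n = len(items)

print(sorted(known))                          # 5 implied comparisons; B vs. C is missing
print(len(known), "==", 2 * (n - 1) - 1)      # matches the formula
print("out of", total, "possible pairs")
```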
[ { "math_id": 0, "text": " (2 (N-1))-1) " } ]
https://en.wikipedia.org/wiki?curid=6505948
650751
Complete Heyting algebra
In mathematics, especially in order theory, a complete Heyting algebra is a Heyting algebra that is complete as a lattice. Complete Heyting algebras are the objects of three different categories; the category CHey, the category Loc of locales, and its opposite, the category Frm of frames. Although these three categories contain the same objects, they differ in their morphisms, and thus get distinct names. Only the morphisms of CHey are homomorphisms of complete Heyting algebras. Locales and frames form the foundation of pointless topology, which, instead of building on point-set topology, recasts the ideas of general topology in categorical terms, as statements on frames and locales. Definition. Consider a partially ordered set ("P", ≤) that is a complete lattice. Then "P" is a complete Heyting algebra or frame if any of the following equivalent conditions hold: (1) for every element "x" of "P" and every subset "S" of "P", formula_1 (2) "P" is a distributive lattice, i.e., formula_2 and the meet operations formula_0 are Scott continuous (i.e., preserve the suprema of directed sets) for all "x" in "P". The entailed definition of Heyting implication is formula_3 Using a bit more category theory, we can equivalently define a frame to be a cocomplete cartesian closed poset. Examples. The system of all open sets of a given topological space ordered by inclusion is a complete Heyting algebra. Frames and locales. The objects of the category CHey, the category Frm of frames and the category Loc of locales are complete Heyting algebras. These categories differ in what constitutes a morphism: the morphisms of Frm are the frame homomorphisms, i.e., the functions that preserve finite meets and arbitrary joins; the morphisms of Loc are the same arrows taken in the opposite direction; and, as noted above, only the morphisms of CHey are homomorphisms of complete Heyting algebras. The relation of locales and their maps to topological spaces and continuous functions may be seen as follows. Let formula_4 be any map. The power sets "P"("X") and "P"("Y") are complete Boolean algebras, and the map formula_5 is a homomorphism of complete Boolean algebras. Suppose the spaces "X" and "Y" are topological spaces, endowed with the topologies "O"("X") and "O"("Y") of open sets on "X" and "Y". Note that "O"("X") and "O"("Y") are subframes of "P"("X") and "P"("Y"). If formula_6 is a continuous function, then formula_7 preserves finite meets and arbitrary joins of these subframes. This shows that "O" is a functor from the category Top of topological spaces to Loc, taking any continuous map formula_4 to the map formula_8 in Loc that is defined in Frm to be the inverse image frame homomorphism formula_9 Given a map of locales formula_10 in Loc, it is common to write formula_11 for the frame homomorphism that defines it in Frm. Using this notation, formula_12 is defined by the equation formula_13 Conversely, any locale "A" has a topological space "S"("A"), called its "spectrum", that best approximates the locale. In addition, any map of locales formula_10 determines a continuous map formula_14 Moreover this assignment is functorial: letting "P"(1) denote the locale that is obtained as the power set of the terminal set formula_15 the points of "S"("A") are the maps formula_16 in Loc, i.e., the frame homomorphisms formula_17 For each formula_18 we define formula_19 as the set of points formula_20 such that formula_21 It is easy to verify that this defines a frame homomorphism formula_22 whose image is therefore a topology on "S"("A"). Then, if formula_10 is a map of locales, to each point formula_20 we assign the point formula_23 defined by letting formula_24 be the composition of formula_25 with formula_26 hence obtaining a continuous map formula_27 This defines a functor formula_28 from Loc to Top, which is right adjoint to "O".
Any locale that is isomorphic to the topology of its spectrum is called "spatial", and any topological space that is homeomorphic to the spectrum of its locale of open sets is called "sober". The adjunction between topological spaces and locales restricts to an equivalence of categories between sober spaces and spatial locales. Any function that preserves all joins (and hence any frame homomorphism) has a right adjoint, and, conversely, any function that preserves all meets has a left adjoint. Hence, the category Loc is isomorphic to the category whose objects are the frames and whose morphisms are the meet preserving functions whose left adjoints preserve finite meets. This is often regarded as a representation of Loc, but it should not be confused with Loc itself, whose morphisms are formally the same as frame homomorphisms in the opposite direction.
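The statement that the inverse image of a continuous map is a frame homomorphism can be checked mechanically on a small finite example. The sketch below uses an arbitrarily chosen three-point space, two-point space, and map between them; it verifies preservation of finite meets (intersections) and arbitrary joins (unions) for that one example only, as an illustration rather than a proof.

```python
# Check on a tiny example that the preimage map O(Y) -> O(X) of a continuous
# function preserves finite meets (intersections) and arbitrary joins (unions).
from itertools import chain, combinations

X = {0, 1, 2}
Y = {"a", "b"}
f = {0: "a", 1: "a", 2: "b"}                   # an arbitrary map X -> Y

# Topologies chosen so that f is continuous (preimages of opens are open).
OX = [frozenset(), frozenset({0, 1}), frozenset({2}), frozenset(X)]
OY = [frozenset(), frozenset({"a"}), frozenset({"b"}), frozenset(Y)]

def preimage(V):
    return frozenset(x for x in X if f[x] in V)

assert all(preimage(V) in OX for V in OY)      # continuity of f

def families(opens):
    """All families of open sets of Y (here the 'arbitrary' joins are finite)."""
    return chain.from_iterable(combinations(opens, r) for r in range(len(opens) + 1))

# Preimage of an intersection equals the intersection of preimages (finite meets).
assert all(preimage(U & V) == preimage(U) & preimage(V) for U in OY for V in OY)

# Preimage of a union equals the union of preimages (joins), over every family.
for family in families(OY):
    union = frozenset().union(*family) if family else frozenset()
    joined = frozenset().union(*(preimage(V) for V in family)) if family else frozenset()
    assert preimage(union) == joined

print("the preimage map preserves finite meets and arbitrary joins on this example")
```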
[ { "math_id": 0, "text": "(x\\land\\cdot)" }, { "math_id": 1, "text": "x \\land \\bigvee_{s \\in S} s = \\bigvee_{s \\in S} (x \\land s)." }, { "math_id": 2, "text": "x \\land ( y \\lor z ) = ( x \\land y ) \\lor ( x \\land z )" }, { "math_id": 3, "text": "a\\to b=\\bigvee\\{c \\mid a\\land c\\le b\\}." }, { "math_id": 4, "text": "f: X\\to Y" }, { "math_id": 5, "text": "f^{-1}: P(Y)\\to P(X)" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "f^{-1}: O(Y)\\to O(X)" }, { "math_id": 8, "text": "O(f): O(X)\\to O(Y)" }, { "math_id": 9, "text": "f^{-1}: O(Y)\\to O(X)." }, { "math_id": 10, "text": "f: A\\to B" }, { "math_id": 11, "text": "f^*: B\\to A" }, { "math_id": 12, "text": "O(f)" }, { "math_id": 13, "text": "O(f)^* = f^{-1}." }, { "math_id": 14, "text": "S(A)\\to S(B)." }, { "math_id": 15, "text": "1=\\{*\\}," }, { "math_id": 16, "text": "p: P(1)\\to A" }, { "math_id": 17, "text": "p^*: A\\to P(1)." }, { "math_id": 18, "text": "a\\in A" }, { "math_id": 19, "text": "U_a" }, { "math_id": 20, "text": "p\\in S(A)" }, { "math_id": 21, "text": "p^*(a) =\\{*\\}." }, { "math_id": 22, "text": "A\\to P(S(A))," }, { "math_id": 23, "text": "S(f)(q)" }, { "math_id": 24, "text": "S(f)(p)^*" }, { "math_id": 25, "text": "p^*" }, { "math_id": 26, "text": "f^*," }, { "math_id": 27, "text": "S(f): S(A)\\to S(B)." }, { "math_id": 28, "text": "S" } ]
https://en.wikipedia.org/wiki?curid=650751
650789
Star (game theory)
Term in combinatorial game theory In combinatorial game theory, star, written as formula_0 or formula_1, is the value given to the game where both players have only the option of moving to the zero game. Star may also be denoted as the surreal form {0|0}. This game is an unconditional first-player win. Star, as defined by John Conway in "Winning Ways for your Mathematical Plays", is a value, but not a number in the traditional sense. Star is not zero, but neither positive nor negative, and is therefore said to be "fuzzy" and "confused with" (a fourth alternative that means neither "less than", "equal to", nor "greater than") 0. It is less than all positive rational numbers, and greater than all negative rationals. Games other than {0 | 0} may have value *. For example, the game formula_2, where the values are nimbers, has value * despite each player having more options than simply moving to 0. Why * ≠ 0. A combinatorial game has a positive and negative player; which player moves first is left ambiguous. The combinatorial game 0, or { | }, leaves no options and is a second-player win. Likewise, a combinatorial game is won (assuming optimal play) by the second player if and only if its value is 0. Therefore, a game of value *, which is a first-player win, is neither positive nor negative. However, * is not the only possible value for a first-player win game (see nimbers). Star does have the property that the sum * + *, has value 0, because the first-player's only move is to the game *, which the second-player will win. Example of a value-* game. Nim, with one pile and one piece, has value *. The first player will remove the piece, and the second player will lose. A single-pile Nim game with one pile of "n" pieces (also a first-player win) is defined to have value "*n". The numbers "*z" for integers "z" form an infinite field of characteristic 2, when addition is defined in the context of combinatorial games and multiplication is given a more complex definition.
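The claim that * + * has value 0, i.e. is a second-player win, can be checked by brute force on the corresponding Nim position of two one-piece piles. The sketch below is illustrative code, not taken from the cited texts; it computes Grundy values by the standard mex recursion, and a Nim position is a second-player win exactly when its Grundy value is 0.

```python
# Grundy values for Nim positions, used to check that * + * = 0:
# a position is a second-player win iff its Grundy value is 0.
from functools import lru_cache

def mex(values):
    """Smallest non-negative integer not in the given set."""
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy(heaps):
    options = set()
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):                      # remove 1..h pieces from heap i
            new = heaps[:i] + (h - take,) + heaps[i + 1:]
            options.add(grundy(tuple(sorted(new))))
    return mex(options)

print(grundy((1,)))      # 1 -> a single one-piece pile (value *) is a first-player win
print(grundy((1, 1)))    # 0 -> two one-piece piles (value * + *) is a second-player win
```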
[ { "math_id": 0, "text": "*" }, { "math_id": 1, "text": "*1" }, { "math_id": 2, "text": "*2 + *3" } ]
https://en.wikipedia.org/wiki?curid=650789
65089917
Convex Polytopes
1967 mathematics textbook Convex Polytopes is a graduate-level mathematics textbook about convex polytopes, higher-dimensional generalizations of three-dimensional convex polyhedra. It was written by Branko Grünbaum, with contributions from Victor Klee, Micha Perles, and G. C. Shephard, and published in 1967 by John Wiley &amp; Sons. It went out of print in 1970. A second edition, prepared with the assistance of Volker Kaibel, Victor Klee, and Günter M. Ziegler, was published by Springer-Verlag in 2003, as volume 221 of their book series Graduate Texts in Mathematics. "Convex Polytopes" was the winner of the 2005 Leroy P. Steele Prize for mathematical exposition, given by the American Mathematical Society. The Basic Library List Committee of the Mathematical Association of America has recommended its inclusion in undergraduate mathematics libraries. Topics. The book has 19 chapters. After two chapters introducing background material in linear algebra, topology, and convex geometry, two more chapters provide basic definitions of polyhedra, in their two dual versions (intersections of half-spaces and convex hulls of finite point sets), introduce Schlegel diagrams, and provide some basic examples including the cyclic polytopes. Chapter 5 introduces Gale diagrams, and the next two chapters use them to study polytopes with a number of vertices only slightly higher than their dimension, and neighborly polytopes. Chapters 8 through 11 study the numbers of faces of different dimensions in polytopes through Euler's polyhedral formula, the Dehn–Sommerville equations, and the extremal combinatorics of numbers of faces in polytopes. Chapter 11 connects the low-dimensional faces together into the skeleton of a polytope, and proves the van Kampen–Flores theorem about non-embeddability of skeletons into lower-dimensional spaces. Chapter 12 studies the question of when a skeleton uniquely determines the higher-dimensional combinatorial structure of its polytope. Chapter 13 provides a complete answer to this question for three-dimensional convex polytopes via Steinitz's theorem, which characterizes the graphs of convex polyhedra combinatorially and can be used to show that they can only be realized as a convex polyhedron in one way. It also touches on the multisets of face sizes that can be realized as polyhedra (Eberhard's theorem) and on the combinatorial types of polyhedra that can have inscribed spheres or circumscribed spheres. Chapter 14 concerns relations analogous to the Dehn–Sommerville equations for sums of angles of polytopes, and uses sums of angles to define a central point, the "Steiner point", for any polytope. Chapter 15 studies Minkowski addition and Blaschke addition, two operations by which polytopes can be combined to produce other polytopes. Chapters 16 and 17 study shortest paths and the Hirsch conjecture, longest paths and Hamiltonian cycles, and the shortness exponent of polytopes. Chapter 18 studies arrangements of hyperplanes and their dual relation to the combinatorial structure of zonotopes. A concluding chapter, chapter 19, also includes material on the symmetries of polytopes. Exercises throughout the book make it usable as a textbook, and provide additional links to recent research, and the later chapters of the book also list many open research problems.
These updates include material on Mnëv's universality theorem and its relation to the realizability of polytopes from their combinatorial structures, the proof of the formula_0-conjecture for simplicial spheres, and Kalai's 3"d" conjecture. The second edition also provides an improved bibliography. Topics that are important to the theory of convex polytopes but not well-covered in the book "Convex Polytopes" include Hilbert's third problem and the theory of Dehn invariants. Audience and reception. Although written at a graduate level, the main prerequisites for reading the book are linear algebra and general topology, both at an undergraduate level. In a review of the first edition of the book, Werner Fenchel calls it "a remarkable achievement", "a wealth of material", "well organized and presented in a lucid style". Over 35 years later, in giving the Steele Prize to Grünbaum for "Convex Polytopes", the American Mathematical Society wrote that the book "has served both as a standard reference and as an inspiration", that it was in large part responsible for the vibrant ongoing research in polyhedral combinatorics, and that it remained relevant to this area. Reviewing and welcoming the second edition, Peter McMullen wrote that despite being "immediately rendered obsolete" by the research that it sparked, the book was still essential reading for researchers in this area. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "g" } ]
https://en.wikipedia.org/wiki?curid=65089917
65093515
Direct collapse black hole
High-mass black hole seeds Direct collapse black holes (DCBHs) are high-mass black hole seeds that form from the direct collapse of a large amount of material. They putatively formed within the redshift range "z"=15–30, when the Universe was about 100–250 million years old. Unlike seeds formed from the first population of stars (also known as Population III stars), direct collapse black hole seeds are formed by a direct, general relativistic instability. They are very massive, with a typical mass at formation of ~. This category of black hole seeds was originally proposed theoretically to alleviate the challenge in building supermassive black holes already at redshift z~7, as numerous observations to date have confirmed. Formation. Direct collapse black holes (DCBHs) are massive black hole seeds theorized to have formed in the high-redshift Universe and with typical masses at formation of ~, but spanning between and . The environmental physical conditions to form a DCBH (as opposed to a cluster of stars) are the following: (i) the collapsing gas is of primordial chemical composition, i.e., free of metals and dust; (ii) the host halo is massive enough for its gas to cool through atomic hydrogen (virial temperature of order 10^4 K); and (iii) the cloud is exposed to an intense flux of Lyman–Werner photons, which dissociates molecular hydrogen. The previous conditions are necessary to avoid gas cooling and, hence, fragmentation of the primordial gas cloud. Unable to fragment and form stars, the gas cloud undergoes a gravitational collapse of the entire structure, reaching extremely high matter density at its core, on the order of ~10^7 g/cm^3. At this density, the object undergoes a general relativistic instability, which leads to the formation of a black hole of a typical mass ~, and up to 1 million M☉. The occurrence of the general relativistic instability, as well as the absence of the intermediate stellar phase, led to the denomination of direct collapse black hole. In other words, these objects collapse directly from the primordial gas cloud, not from a stellar progenitor as prescribed in standard black hole models. A computer simulation reported in July 2022 showed that a halo at the rare convergence of strong, cold accretion flows can create massive black hole seeds without the need for ultraviolet backgrounds, supersonic streaming motions or even atomic cooling. Cold flows produced turbulence in the halo, which suppressed star formation. In the simulation, no stars formed in the halo until it had grown to 40 million solar masses at a redshift of 25.7, when the halo's gravity was finally able to overcome the turbulence; the halo then collapsed and formed two supermassive stars that died as DCBHs of and . Demography. Direct collapse black holes are generally thought to be extremely rare objects in the high-redshift Universe, because the three fundamental conditions for their formation (see above in section Formation) are challenging to meet all together in the same gas cloud. Current cosmological simulations suggest that DCBHs could be as rare as only about 1 per cubic gigaparsec at redshift 15. The prediction on their number density is highly dependent on the minimum flux of Lyman–Werner photons required for their formation and can be as large as ~10^7 DCBHs per cubic gigaparsec in the most optimistic scenarios. Detection. In 2016, a team led by Harvard University astrophysicist Fabio Pacucci identified the first two candidate direct collapse black holes, using data from the Hubble Space Telescope and the Chandra X-ray Observatory. The two candidates, both at redshift formula_0, were found in the CANDELS GOODS-S field and matched the spectral properties predicted for this type of astrophysical sources.
In particular, these sources are predicted to have a significant excess of infrared radiation, when compared to other categories of sources at high redshift. Additional observations, in particular with the James Webb Space Telescope, will be crucial to investigate the properties of these sources and confirm their nature. Difference from primordial and stellar collapse black holes. A primordial black hole is the result of the direct collapse of energy, ionized matter, or both, during the inflationary or radiation-dominated eras, while a direct collapse black hole is the result of the collapse of unusually dense and large regions of gas. Note that a black hole formed by the collapse of a Population III star is not considered "direct" collapse. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "z > 6" } ]
https://en.wikipedia.org/wiki?curid=65093515
65093532
Focused proof
In mathematical logic, focused proofs are a family of analytic proofs that arise through goal-directed proof-search, and are a topic of study in structural proof theory and reductive logic. They form the most general definition of "goal-directed" proof-search—in which someone chooses a formula and performs hereditary reductions until the result meets some condition. The extremal case where reduction only terminates when axioms are reached forms the sub-family of "uniform" proofs. A sequent calculus is said to have the focusing property when focused proofs are complete for some terminating condition. For System LK, System LJ, and System LL, uniform proofs are focused proofs where all the atoms are assigned negative polarity. Many other sequent calculi have been shown to have the focusing property, notably the nested sequent calculi of both the classical and intuitionistic variants of the modal logics in the S5 cube. Uniform proofs. In the sequent calculus for an intuitionistic logic, the uniform proofs can be characterised as those in which the upward reading performs all right rules before the left rules. Typically, uniform proofs are not complete for the logic, i.e., not all provable sequents or formulas admit a uniform proof, so one considers fragments where they are complete, e.g., the hereditary Harrop fragment of intuitionistic logic. Due to the deterministic behaviour, uniform proof-search has been used as the control mechanism defining the programming language paradigm of logic programming. Occasionally, uniform proof-search is implemented in a variant of the sequent calculus for the given logic where context management is automatic, thereby increasing the fragment for which one can define a logic programming language. Focused proofs. The focusing principle was originally classified through the disambiguation between synchronous and asynchronous connectives in linear logic, i.e., connectives that interact with the context and those that do not, as a consequence of research on logic programming. Focused proofs are now an increasingly important example of control in reductive logic, and can drastically improve proof-search procedures in industry. The essential idea of focusing is to identify and coalesce the non-deterministic choices in a proof, so that a proof can be seen as an alternation of negative phases (where invertible rules are applied eagerly) and positive phases (where applications of the other rules are confined and controlled). Polarisation. According to the rules of the sequent calculus, formulas are canonically put into one of two classes called "positive" and "negative", e.g., in LK and LJ the formula formula_0 is positive. The only freedom is over atoms, which are assigned a polarity freely. For negative formulas, provability is invariant under the application of a right rule; and, dually, for positive formulas, provability is invariant under the application of a left rule. In either case one can safely apply rules in any order to hereditary sub-formulas of the same polarity. In the case of a right rule applied to a positive formula, or a left rule applied to a negative formula, the result may be an invalid sequent, e.g., in LK and LJ there is no proof of the sequent formula_1 beginning with a right rule. A calculus admits the "focusing principle" if, whenever an original reduct is provable, the hereditary reducts of the same polarity are also provable. That is, one can commit to focusing on decomposing a formula and its sub-formulas of the same polarity without loss of completeness.
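For instance, the sequent mentioned above does have a proof once the left rule is applied first; the following derivation in LJ (written informally in LaTeX with the bussproofs package, as an illustration rather than a quotation from the literature) shows why the order of rule applications matters: applying a right disjunction rule first would force a premature choice of disjunct.

```latex
% An LJ proof of  B ∨ A ⊢ A ∨ B  that applies the left ∨-rule first.
% (Requires \usepackage{bussproofs}.)
\begin{prooftree}
\AxiomC{$B \vdash B$}
\UnaryInfC{$B \vdash A \lor B$}   % right ∨-rule, choosing the second disjunct
\AxiomC{$A \vdash A$}
\UnaryInfC{$A \vdash A \lor B$}   % right ∨-rule, choosing the first disjunct
\BinaryInfC{$B \lor A \vdash A \lor B$}  % left ∨-rule
\end{prooftree}
```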
Focused system. A sequent calculus is often shown to have the focusing property by working in a related calculus where polarity explicitly controls which rules apply. Proofs in such systems are in focused, unfocused, or neutral phases, where the first two are characterised by hereditary decomposition, and the last by forcing a choice of focus. One of the most important operational behaviours a procedure can undergo is "backtracking", i.e., returning to an earlier stage in the computation where a choice was made. In focused systems for classical and intuitionistic logic, the use of backtracking can be simulated by pseudo-contraction. Let formula_2 and formula_3 denote change of polarity, the former making a formula negative, and the latter positive; and call a formula with an arrow neutral. Recall that formula_4 is positive, and consider the neutral polarized sequent formula_5, which is interpreted as the actual sequent formula_6. For neutral sequents such as this, the focused system forces one to make an explicit choice of which formula to focus on, denoted by formula_7. To perform a proof-search, the best choice is to focus on the left formula, since formula_4 is positive; indeed (as discussed above), in some cases there are no proofs where the focus is on the right formula. To overcome this, some focused calculi create a backtracking point such that focusing on the right yields formula_8, which is still interpreted as formula_6. The second formula on the right can be removed only when the focused phase has finished, but if proof-search gets stuck before this happens, the focused component may be removed from the sequent, thereby returning to the earlier choice, e.g., formula_9 must be taken to formula_10 as no other reductive inference can be made. This is a pseudo-contraction since it has the syntactic form of a contraction on the right, but the contracted formula does not actually exist, i.e., in the interpretation of the proof in the focused system the sequent has only one formula on the right.
[ { "math_id": 0, "text": "\\phi \\lor \\psi" }, { "math_id": 1, "text": "B \\lor A \\vdash A \\lor B" }, { "math_id": 2, "text": "\\uparrow" }, { "math_id": 3, "text": "\\downarrow" }, { "math_id": 4, "text": " \\lor " }, { "math_id": 5, "text": "{\\downarrow \\uparrow \\phi \\lor \\psi} \\vdash {\\uparrow \\phi \\lor \\psi}" }, { "math_id": 6, "text": "\\phi \\lor \\psi \\vdash \\phi \\lor \\psi" }, { "math_id": 7, "text": " \\langle \\, \\rangle " }, { "math_id": 8, "text": "\\downarrow \\uparrow \\phi \\lor \\psi \\vdash \\langle \\phi \\lor \\psi \\rangle, \\uparrow \\phi \\lor \\psi" }, { "math_id": 9, "text": "\\downarrow \\uparrow B \\lor A \\vdash \\langle A \\rangle, \\uparrow A \\lor B" }, { "math_id": 10, "text": "\\downarrow \\uparrow B \\lor A \\vdash {\\uparrow A \\lor B}" } ]
https://en.wikipedia.org/wiki?curid=65093532
650978
Pupillary light reflex
Eye reflex which alters the pupil's size in response to light intensity The pupillary light reflex (PLR) or photopupillary reflex is a reflex that controls the diameter of the pupil, in response to the intensity (luminance) of light that falls on the retinal ganglion cells of the retina in the back of the eye, thereby assisting in adaptation of vision to various levels of lightness/darkness. A greater intensity of light causes the pupil to constrict (miosis/myosis; thereby allowing less light in), whereas a lower intensity of light causes the pupil to dilate (mydriasis, expansion; thereby allowing more light in). Thus, the pupillary light reflex regulates the intensity of light entering the eye. Light shone into one eye will cause both pupils to constrict. Terminology. The pupil is the dark circular opening in the center of the iris and is where light enters the eye. By analogy with a camera, the pupil is equivalent to aperture, whereas the iris is equivalent to the diaphragm. It may be helpful to consider the "Pupillary reflex" as an "'Iris' reflex", as the iris sphincter and dilator muscles are what can be seen responding to ambient light. Whereas, the pupil is the passive opening formed by the active iris. Pupillary reflex is synonymous with pupillary response, which may be pupillary constriction or dilation. Pupillary reflex is conceptually linked to the side (left or right) of the reacting pupil, and not to the side from which light stimulation originates. Left pupillary reflex refers to the response of the left pupil to light, regardless of which eye is exposed to a light source. Right pupillary reflex means reaction of the right pupil, whether light is shone into the left eye, right eye, or both eyes. When light is shone into only one eye and not the other, it is normal for both pupils to constrict simultaneously. The terms "direct" and "consensual" refers to the side where the light source comes from, relative to the side of the reacting pupil. A direct pupillary reflex is pupillary response to light that enters the ipsilateral (same) eye. A consensual pupillary reflex is response of a pupil to light that enters the contralateral (opposite) eye. Thus there are four types of pupillary light reflexes, based on this terminology of absolute laterality (left versus right) and relative laterality (same side versus opposite side, ipsilateral versus contralateral, direct versus consensual): Neural pathway anatomy. The pupillary light reflex neural pathway on each side has an afferent limb and two efferent limbs. The afferent limb has nerve fibers running within the optic nerve (CN II). Each efferent limb has parasympathetic nerve fibers traveling along the periphery of the oculomotor nerve (CN III). The afferent limb carries sensory input. Anatomically, the afferent limb consists of the retina, the optic nerve, and the pretectal nucleus in the midbrain, at level of superior colliculus. Ganglion cells of the retina project fibers through the optic nerve to the ipsilateral pretectal nucleus. The efferent limb is the pupillary motor output from the pretectal nucleus to the pupillary sphincter of the iris. The pretectal nucleus projects nerve fibers to the ipsilateral and contralateral Edinger-Westphal nuclei, which are also located in the midbrain. Each Edinger-Westphal nucleus gives rise to preganglionic parasympathetic fibers which exit with CN III and synapse with postganglionic parasympathetic neurons in the ciliary ganglion. 
Postganglionic nerve fibers leave the ciliary ganglion to innervate the pupillary sphincter. Each afferent limb has two efferent limbs, one ipsilateral and one contralateral. The ipsilateral efferent limb transmits nerve signals for direct light reflex of the ipsilateral pupil. The contralateral efferent limb causes consensual light reflex of the contralateral pupil. Sympathetic nervous system plays a role in dilating the pupils in low light conditions. Types of neurons. The optic nerve, or more precisely, the photosensitive ganglion cells through the retinohypothalamic tract, is responsible for the afferent limb of the pupillary reflex; it senses the incoming light. The oculomotor nerve is responsible for the efferent limb of the pupillary reflex; it drives the iris muscles that constrict the pupil. Schematic. Referring to the neural pathway schematic diagram, the entire pupillary light reflex system can be visualized as having eight neural segments, numbered 1 through 8. Odd-numbered segments 1, 3, 5, and 7 are on the left. Even-numbered segments 2, 4, 6, and 8 are on the right. Segments 1 and 2 each includes both the retina and the optic nerve (cranial Nerve #2). Segments 3 and 4 are nerve fibers that cross from the pretectal nucleus on one side to the Edinger-Westphal nucleus on the contralateral side. Segments 5 and 6 are fibers that connect the pretectal nucleus on one side to the Edinger-Westphal nucleus on the same side. Segments 3, 4, 5, and 6 are all located within a compact region within the midbrain. Segments 7 and 8 each contains parasympathetic fibers that courses from the Edinger-Westphal nucleus, through the ciliary ganglion, along the oculomotor nerve (cranial nerve #3), to the ciliary sphincter, the muscular structure within the iris. The diagram may assist in localizing lesion within the pupillary reflex system by process of elimination, using light reflex testing results obtained by clinical examination. Clinical significance. Pupillary light reflex provides a useful diagnostic tool for testing the integrity of the sensory and motor functions of the eye. Emergency physicians routinely test pupillary light reflex to assess brain stem function. Abnormal pupillary reflex can be found in optic nerve injury, oculomotor nerve damage, brain stem lesion (including brain stem death), and depressant drugs, such as barbiturates. Examples are provided as below: Lesion localization example. For example, in a person with abnormal left direct reflex and abnormal right consensual reflex (with normal left consensual and normal right direct reflexes), which would produce a left Marcus Gunn pupil, or what is called left afferent pupillary defect, by physical examination. Location of the lesion can be deduced as follows: Cognitive influences. The pupillary response to light is not purely reflexive, but is modulated by cognitive factors, such as attention, awareness, and the way visual input is interpreted. For example, if a bright stimulus is presented to one eye, and a dark stimulus to the other eye, perception alternates between the two eyes (i.e., binocular rivalry): Sometimes the dark stimulus is perceived, sometimes the bright stimulus, but never both at the same time. Using this technique, it has been shown the pupil is smaller when a bright stimulus dominates awareness, relative to when a dark stimulus dominates awareness. This shows that the pupillary light reflex is modulated by visual awareness. 
Similarly, it has been shown that the pupil constricts when you covertly (i.e., without looking at it) pay attention to a bright stimulus, compared to a dark stimulus, even when visual input is identical. Moreover, the magnitude of the pupillary light reflex following a distracting probe is strongly correlated with the extent to which the probe captures visual attention and interferes with task performance. This shows that the pupillary light reflex is modulated by visual attention and by trial-by-trial variation in visual attention. Finally, a picture that is subjectively perceived as bright (e.g. a picture of the sun) elicits a stronger pupillary constriction than an image that is perceived as less bright (e.g. a picture of an indoor scene), even when the objective brightness of both images is equal. This shows that the pupillary light reflex is modulated by subjective (as opposed to objective) brightness. Mathematical model. The pupillary light reflex is modeled as a physiologically based non-linear delay differential equation that describes the changes in the pupil diameter as a function of the environment lighting: formula_0 where formula_1 is the pupil diameter measured in millimeters and formula_2 is the luminous intensity reaching the retina at a time formula_3, which can be described as formula_4: the luminance reaching the eye in lumens/mm^2 times the pupil area in mm^2. formula_5 is the pupillary latency, a time delay between the instant in which the light pulse reaches the retina and the beginning of the iridal reaction, due to nerve transmission, neuro-muscular excitation and activation delays. formula_6, formula_7 and formula_8 are the differentials of the function formula_9, the pupil diameter formula_1 and the time formula_3. Since pupil constriction is approximately 3 times faster than (re)dilation, different step sizes must be used in the numerical solver simulation: formula_10 where formula_11 and formula_12 are respectively the formula_8 for constriction and dilation measured in milliseconds, formula_13 and formula_14 are respectively the current and previous simulation times (times since the simulation started) measured in milliseconds, and formula_15 is a constant that affects the constriction/dilation velocity and varies among individuals. The higher the formula_15 value, the smaller the time step used in the simulation and, consequently, the smaller the pupil constriction/dilation velocity. In order to improve the realism of the resulting simulations, the hippus effect can be approximated by adding small random variations to the environment light (in the range 0.05–0.3 Hz). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
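A minimal numerical sketch of the model above is given below. The latency, the light profile, the initial diameter, the fixed step size, and the assumption that the model's time unit is seconds are all illustrative choices rather than values from the cited work, and a simple fixed-step Euler scheme with a delayed input stands in for the variable-step scheme described in the text.

```python
# Fixed-step Euler sketch of the pupil model described above.  The latency,
# light profile, initial diameter and step size are illustrative assumptions,
# and the model's time unit is taken to be seconds.
import math

TAU = 0.2            # pupillary latency in seconds (illustrative)
DT = 0.001           # time step in seconds (the text's variable step is omitted)

def retinal_flux(t):
    """Luminous flux reaching the retina: a dim-to-bright step at t = 1 s (arbitrary values)."""
    return 1e-5 if t < 1.0 else 1e-2

def dM_dD(D):
    x = (D - 4.9) / 3.0
    return (1.0 / 3.0) / (1.0 - x * x)      # derivative of arctanh((D - 4.9)/3)

D = 6.0                                     # initial pupil diameter in mm
for step in range(5001):                    # simulate 5 seconds
    t = step * DT
    phi = retinal_flux(max(t - TAU, 0.0))   # delayed light input
    x = max(min((D - 4.9) / 3.0, 0.999), -0.999)   # keep arctanh in its domain
    rhs = 5.2 - 0.45 * math.log(phi / 4.8118e-10)
    dD_dt = (rhs - 2.3026 * math.atanh(x)) / dM_dD(D)
    D = max(min(D + DT * dD_dt, 7.9), 1.9)  # Euler step plus a physiological clamp
    if step % 1000 == 0:
        print(f"t = {t:4.1f} s, pupil diameter = {D:4.2f} mm")
```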
[ { "math_id": 0, "text": "\\begin{align}\n M(D) {} &= \\tanh^{-1} \\left(\\frac{D - 4.9}{3}\\right) \\\\\n \\frac{\\mathrm{d}M}{\\mathrm{d}D}\\frac{\\mathrm{d}D}{\\mathrm{d}t} + 2.3026 \\tanh^{-1} \\left(\\frac{D - 4.9}{3}\\right)\n &= 5.2 - 0.45 \\ln \\left(\\frac{\\Phi [t - \\tau]}{4.8118 \\times 10^{-10}} \\right)\n\\end{align}" }, { "math_id": 1, "text": "D" }, { "math_id": 2, "text": " \\Phi(t - \\tau) " }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "\\Phi = IA" }, { "math_id": 5, "text": "\\tau" }, { "math_id": 6, "text": "\\mathrm{d}M" }, { "math_id": 7, "text": "\\mathrm{d}D" }, { "math_id": 8, "text": "\\mathrm{d}t" }, { "math_id": 9, "text": "M" }, { "math_id": 10, "text": "\\begin{align}\n \\mathrm{d}t_{c} &= \\frac{T_c - T_p}{S} \\\\\n \\mathrm{d}t_{d} &= \\frac{T_c - T_p}{3S}\n\\end{align}" }, { "math_id": 11, "text": "\\mathrm{d}t_c" }, { "math_id": 12, "text": "\\mathrm{d}t_d" }, { "math_id": 13, "text": "T_c" }, { "math_id": 14, "text": "T_p" }, { "math_id": 15, "text": "S" } ]
https://en.wikipedia.org/wiki?curid=650978
6511
Computational complexity
Amount of resources to perform an algorithm In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that allow solving the problem. The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory. Both areas are highly related, as the complexity of an algorithm is always an upper bound on the complexity of the problem solved by this algorithm. Moreover, for designing efficient algorithms, it is often fundamental to compare the complexity of a specific algorithm to the complexity of the problem to be solved. Also, in most cases, the only thing that is known about the complexity of a problem is that it is lower than the complexity of the most efficient known algorithms. Therefore, there is a large overlap between analysis of algorithms and complexity theory. As the amount of resources required to run an algorithm generally varies with the size of the input, the complexity is typically expressed as a function "n" → "f"("n"), where "n" is the size of the input and "f"("n") is either the worst-case complexity (the maximum of the amount of resources that are needed over all inputs of size "n") or the average-case complexity (the average of the amount of resources over all inputs of size "n"). Time complexity is generally expressed as the number of required elementary operations on an input of size "n", where elementary operations are assumed to take a constant amount of time on a given computer and change only by a constant factor when run on a different computer. Space complexity is generally expressed as the amount of memory required by an algorithm on an input of size "n". Resources. Time. The resource that is most commonly considered is time. When "complexity" is used without qualification, this generally means time complexity. The usual units of time (seconds, minutes etc.) are not used in complexity theory because they are too dependent on the choice of a specific computer and on the evolution of technology. For instance, a computer today can execute an algorithm significantly faster than a computer from the 1960s; however, this is not an intrinsic feature of the algorithm but rather a consequence of technological advances in computer hardware. Complexity theory seeks to quantify the intrinsic time requirements of algorithms, that is, the basic time constraints an algorithm would place on "any" computer. This is achieved by counting the number of "elementary operations" that are executed during the computation. These operations are assumed to take constant time (that is, not affected by the size of the input) on a given machine, and are often called "steps". Bit complexity. Formally, the "bit complexity" refers to the number of operations on bits that are needed for running an algorithm. With most models of computation, it equals the time complexity up to a constant factor. On computers, the number of operations on machine words that are needed is also proportional to the bit complexity. So, the "time complexity" and the "bit complexity" are equivalent for realistic models of computation. Space. 
Another important resource is the size of computer memory that is needed for running algorithms. Communication. For the class of distributed algorithms that are commonly executed by multiple, interacting parties, the resource that is of most interest is the communication complexity. It is the necessary amount of communication between the executing parties. Others. The number of arithmetic operations is another resource that is commonly used. In this case, one talks of arithmetic complexity. If one knows an upper bound on the size of the binary representation of the numbers that occur during a computation, the time complexity is generally the product of the arithmetic complexity by a constant factor. For many algorithms the size of the integers that are used during a computation is not bounded, and it is not realistic to consider that arithmetic operations take a constant time. Therefore, the time complexity, generally called bit complexity in this context, may be much larger than the arithmetic complexity. For example, the arithmetic complexity of the computation of the determinant of a "n"×"n" integer matrix is formula_0 for the usual algorithms (Gaussian elimination). The bit complexity of the same algorithms is exponential in n, because the size of the coefficients may grow exponentially during the computation. On the other hand, if these algorithms are coupled with multi-modular arithmetic, the bit complexity may be reduced to "O"~("n"4). In sorting and searching, the resource that is generally considered is the number of entry comparisons. This is generally a good measure of the time complexity if data are suitably organized. Complexity as a function of input size. It is impossible to count the number of steps of an algorithm on all possible inputs. As the complexity generally increases with the size of the input, the complexity is typically expressed as a function of the size "n" (in bits) of the input, and therefore, the complexity is a function of "n". However, the complexity of an algorithm may vary dramatically for different inputs of the same size. Therefore, several complexity functions are commonly used. The worst-case complexity is the maximum of the complexity over all inputs of size n, and the average-case complexity is the average of the complexity over all inputs of size n (this makes sense, as the number of possible inputs of a given size is finite). Generally, when "complexity" is used without being further specified, this is the worst-case time complexity that is considered. Asymptotic complexity. It is generally difficult to compute precisely the worst-case and the average-case complexity. In addition, these exact values provide little practical application, as any change of computer or of model of computation would change the complexity somewhat. Moreover, the resource use is not critical for small values of n, and this makes that, for small n, the ease of implementation is generally more interesting than a low complexity. For these reasons, one generally focuses on the behavior of the complexity for large n, that is on its asymptotic behavior when n tends to the infinity. Therefore, the complexity is generally expressed by using big O notation. 
For example, the usual algorithm for integer multiplication has a complexity of formula_1 this means that there is a constant formula_2 such that the multiplication of two integers of at most n digits may be done in a time less than formula_3 This bound is "sharp" in the sense that the worst-case complexity and the average-case complexity are formula_4 which means that there is a constant formula_5 such that these complexities are larger than formula_6 The radix does not appear in these complexities, as changing the radix changes only the constants formula_2 and formula_7 Models of computation. The evaluation of the complexity relies on the choice of a model of computation, which consists in defining the basic operations that are done in a unit of time. When the model of computation is not explicitly specified, it is generally implicitly assumed to be a multitape Turing machine, since several more realistic models of computation, such as random-access machines, are asymptotically equivalent for most problems. It is only for very specific and difficult problems, such as integer multiplication in time formula_8 that the explicit definition of the model of computation is required for proofs. Deterministic models. A deterministic model of computation is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state. Historically, the first deterministic models were recursive functions, lambda calculus, and Turing machines. The model of random-access machines (also called RAM-machines) is also widely used, as a closer counterpart to real computers. When the model of computation is not specified, it is generally assumed to be a multitape Turing machine. For most algorithms, the time complexity is the same on multitape Turing machines as on RAM-machines, although some care may be needed in how data is stored in memory to get this equivalence. Non-deterministic computation. In a non-deterministic model of computation, such as non-deterministic Turing machines, some choices may be made at some steps of the computation. In complexity theory, one considers all possible choices simultaneously, and the non-deterministic time complexity is the time needed when the best choices are always made. In other words, one considers that the computation is done simultaneously on as many (identical) processors as needed, and the non-deterministic computation time is the time spent by the first processor that finishes the computation. This parallelism is partly amenable to quantum computing via superposed entangled states in running specific quantum algorithms, such as Shor's factorization of, so far, only small integers (as of 2018: 21 = 3 × 7). Even though such a computation model is not yet realistic, it has theoretical importance, mostly related to the P = NP problem, which questions the identity of the complexity classes formed by taking "polynomial time" and "non-deterministic polynomial time" as least upper bounds. Simulating an NP-algorithm on a deterministic computer usually takes "exponential time". A problem is in the complexity class NP if it may be solved in polynomial time on a non-deterministic machine. A problem is NP-complete if, roughly speaking, it is in NP and is not easier than any other NP problem. Many combinatorial problems, such as the Knapsack problem, the travelling salesman problem, and the Boolean satisfiability problem are NP-complete. 
For all these problems, the best known algorithm has exponential complexity. If any one of these problems could be solved in polynomial time on a deterministic machine, then all NP problems could also be solved in polynomial time, and one would have P = NP. As of 2017, it is generally conjectured that P ≠ NP, with the practical implication that the worst cases of NP problems are intrinsically difficult to solve, i.e., take longer than any reasonable time span (decades!) for interesting lengths of input. Parallel and distributed computation. Parallel and distributed computing consist of splitting computation across several processors, which work simultaneously. The difference between the two models lies mainly in the way information is transmitted between processors. Typically, in parallel computing the data transmission between processors is very fast, while, in distributed computing, the data transmission is done through a network and is therefore much slower. The time needed for a computation on N processors is at least the quotient by N of the time needed by a single processor. In fact, this theoretically optimal bound can never be reached, because some subtasks cannot be parallelized, and some processors may have to wait for a result from another processor. The main complexity problem is thus to design algorithms such that the product of the computation time by the number of processors is as close as possible to the time needed for the same computation on a single processor. Quantum computing. A quantum computer is a computer whose model of computation is based on quantum mechanics. The Church–Turing thesis applies to quantum computers; that is, every problem that can be solved by a quantum computer can also be solved by a Turing machine. However, some problems may theoretically be solved with a much lower time complexity using a quantum computer rather than a classical computer. This is, for the moment, purely theoretical, as no one knows how to build an efficient quantum computer. Quantum complexity theory has been developed to study the complexity classes of problems solved using quantum computers. It is used in post-quantum cryptography, which consists of designing cryptographic protocols that are resistant to attacks by quantum computers. Problem complexity (lower bounds). The complexity of a problem is the infimum of the complexities of the algorithms that may solve the problem, including unknown algorithms. Thus the complexity of a problem is not greater than the complexity of any algorithm that solves the problem. It follows that every complexity of an algorithm that is expressed with big O notation is also an upper bound on the complexity of the corresponding problem. On the other hand, it is generally hard to obtain nontrivial lower bounds for problem complexity, and there are few methods for obtaining such lower bounds. For solving most problems, it is required to read all input data, which, normally, needs a time proportional to the size of the data. Thus, such problems have a complexity that is at least linear, that is, using big omega notation, a complexity formula_9 The solution of some problems, typically in computer algebra and computational algebraic geometry, may be very large. In such a case, the complexity is lower bounded by the maximal size of the output, since the output must be written. 
For example, a system of n polynomial equations of degree d in n indeterminates may have up to formula_10 complex solutions, if the number of solutions is finite (this is Bézout's theorem). As these solutions must be written down, the complexity of this problem is formula_11 For this problem, an algorithm of complexity formula_12 is known, which may thus be considered as asymptotically quasi-optimal. A nonlinear lower bound of formula_13 is known for the number of comparisons needed for a sorting algorithm. Thus the best sorting algorithms are optimal, as their complexity is formula_14 This lower bound results from the fact that there are "n"! ways of ordering n objects. As each comparison splits this set of "n"! orders into two parts, the number N of comparisons that are needed for distinguishing all orders must satisfy formula_15 which implies formula_16 by Stirling's formula. A standard method for getting lower bounds of complexity consists of "reducing" a problem to another problem. More precisely, suppose that one may encode a problem A of size n into a subproblem of size "f"("n") of a problem B, and that the complexity of A is formula_17 Without loss of generality, one may suppose that the function f increases with n and has an inverse function h. Then the complexity of the problem B is formula_18 This is the method that is used to prove that, if P ≠ NP (an unsolved conjecture), the complexity of every NP-complete problem is formula_19 for every positive integer k. Use in algorithm design. Evaluating the complexity of an algorithm is an important part of algorithm design, as this gives useful information on the performance that may be expected. It is a common misconception that the evaluation of the complexity of algorithms will become less important as a result of Moore's law, which posits the exponential growth of the power of modern computers. This is wrong because this power increase allows working with large input data (big data). For example, when one wants to sort alphabetically a list of a few hundred entries, such as the bibliography of a book, any algorithm should work well in less than a second. On the other hand, for a list of a million entries (the phone numbers of a large town, for example), the elementary algorithms that require formula_20 comparisons would have to do a trillion comparisons, which would need around thirty hours at a speed of 10 million comparisons per second. In contrast, quicksort and merge sort require only formula_21 comparisons (as average-case complexity for the former, as worst-case complexity for the latter). For "n" = 1,000,000, this gives approximately 20,000,000 comparisons, which would only take about 2 seconds at 10 million comparisons per second. Thus the evaluation of the complexity may allow eliminating many inefficient algorithms before any implementation. This may also be used for tuning complex algorithms without testing all variants. By determining the most costly steps of a complex algorithm, the study of complexity also allows focusing the effort for improving the efficiency of an implementation on these steps.
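As a quick check of the figures above, the comparison counts can be evaluated directly. The following Python sketch is an illustration added here (not part of the original text); it computes "n"2 and "n" log2 "n" for "n" = 1,000,000 and converts them into running times at the assumed rate of 10 million comparisons per second.

```python
# Illustrative sketch (added here): the arithmetic behind the sorting example
# above, comparing an O(n^2) elementary sort with an O(n log n) sort for a
# list of one million entries at an assumed rate of 10 million comparisons
# per second.

import math

n = 1_000_000
rate = 10_000_000                    # comparisons per second (assumed)

quadratic = n * n                    # about 1e12 comparisons
linearithmic = n * math.log2(n)      # about 2e7 comparisons

print(f"n^2:      {quadratic:.2e} comparisons, "
      f"about {quadratic / rate / 3600:.0f} hours")       # about 28 hours
print(f"n log2 n: {linearithmic:.2e} comparisons, "
      f"about {linearithmic / rate:.1f} seconds")          # about 2.0 seconds
```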
[ { "math_id": 0, "text": "O(n^3)" }, { "math_id": 1, "text": "O(n^2)," }, { "math_id": 2, "text": "c_u" }, { "math_id": 3, "text": "c_un^2." }, { "math_id": 4, "text": "\\Omega(n^2)," }, { "math_id": 5, "text": "c_l" }, { "math_id": 6, "text": "c_ln^2." }, { "math_id": 7, "text": "c_l." }, { "math_id": 8, "text": "O(n\\log n)," }, { "math_id": 9, "text": "\\Omega(n)." }, { "math_id": 10, "text": "d^n" }, { "math_id": 11, "text": "\\Omega(d^n)." }, { "math_id": 12, "text": "d^{O(n)}" }, { "math_id": 13, "text": "\\Omega(n\\log n)" }, { "math_id": 14, "text": "O(n\\log n)." }, { "math_id": 15, "text": "2^N>n!," }, { "math_id": 16, "text": "N =\\Omega(n\\log n)," }, { "math_id": 17, "text": "\\Omega(g(n))." }, { "math_id": 18, "text": "\\Omega(g(h(n)))." }, { "math_id": 19, "text": "\\Omega(n^k)," }, { "math_id": 20, "text": "O(n^2)" }, { "math_id": 21, "text": "n\\log_2 n" } ]
https://en.wikipedia.org/wiki?curid=6511
651196
Reproducing kernel Hilbert space
In functional analysis, a Hilbert space In functional analysis (a branch of mathematics), a reproducing kernel Hilbert space (RKHS) is a Hilbert space of functions in which point evaluation is a continuous linear functional. Roughly speaking, this means that if two functions formula_0 and formula_1 in the RKHS are close in norm, i.e., formula_2 is small, then formula_0 and formula_1 are also pointwise close, i.e., formula_3 is small for all formula_4. The converse does not need to be true. Informally, this can be shown by looking at the supremum norm: the sequence of functions formula_5 converges pointwise, but does not converge uniformly i.e. does not converge with respect to the supremum norm. (This is not a counterexample because the supremum norm does not arise from any inner product due to not satisfying the parallelogram law.) It is not entirely straightforward to construct a Hilbert space of functions which is not an RKHS. Some examples, however, have been found. "L"2 spaces are not Hilbert spaces of functions (and hence not RKHSs), but rather Hilbert spaces of equivalence classes of functions (for example, the functions formula_0 and formula_1 defined by formula_6 and formula_7 are equivalent in "L"2). However, there are RKHSs in which the norm is an "L"2-norm, such as the space of band-limited functions (see the example below). An RKHS is associated with a kernel that reproduces every function in the space in the sense that for every formula_4 in the set on which the functions are defined, "evaluation at formula_4" can be performed by taking an inner product with a function determined by the kernel. Such a "reproducing kernel" exists if and only if every evaluation functional is continuous. The reproducing kernel was first introduced in the 1907 work of Stanisław Zaremba concerning boundary value problems for harmonic and biharmonic functions. James Mercer simultaneously examined functions which satisfy the reproducing property in the theory of integral equations. The idea of the reproducing kernel remained untouched for nearly twenty years until it appeared in the dissertations of Gábor Szegő, Stefan Bergman, and Salomon Bochner. The subject was eventually systematically developed in the early 1950s by Nachman Aronszajn and Stefan Bergman. These spaces have wide applications, including complex analysis, harmonic analysis, and quantum mechanics. Reproducing kernel Hilbert spaces are particularly important in the field of statistical learning theory because of the celebrated representer theorem which states that every function in an RKHS that minimises an empirical risk functional can be written as a linear combination of the kernel function evaluated at the training points. This is a practically useful result as it effectively simplifies the empirical risk minimization problem from an infinite dimensional to a finite dimensional optimization problem. For ease of understanding, we provide the framework for real-valued Hilbert spaces. The theory can be easily extended to spaces of complex-valued functions and hence include the many important examples of reproducing kernel Hilbert spaces that are spaces of analytic functions. Definition. Let formula_8 be an arbitrary set and formula_9 a Hilbert space of real-valued functions on formula_8, equipped with pointwise addition and pointwise scalar multiplication. 
The evaluation functional over the Hilbert space of functions formula_9 is a linear functional that evaluates each function at a point formula_4, formula_10 We say that "H" is a reproducing kernel Hilbert space if, for all formula_4 in formula_8, formula_11 is continuous at every formula_0 in formula_9 or, equivalently, if formula_11 is a bounded operator on formula_9, i.e. there exists some formula_12 such that Although formula_13 is assumed for all formula_14, it might still be the case that formula_15. While property (1) is the weakest condition that ensures both the existence of an inner product and the evaluation of every function in formula_9 at every point in the domain, it does not lend itself to easy application in practice. A more intuitive definition of the RKHS can be obtained by observing that this property guarantees that the evaluation functional can be represented by taking the inner product of formula_16 with a function formula_17 in formula_9. This function is the so-called reproducing kernel for the Hilbert space formula_9 from which the RKHS takes its name. More formally, the Riesz representation theorem implies that for all formula_4 in formula_8 there exists a unique element formula_17 of formula_9 with the reproducing property, Since formula_17 is itself a function defined on formula_8 with values in the field formula_18 (or formula_19 in the case of complex Hilbert spaces) and as formula_17 is in formula_9 we have that formula_20 where formula_21 is the element in formula_9 associated to formula_22. This allows us to define the reproducing kernel of formula_9 as a function formula_23 (or formula_19 in the complex case) by formula_24 From this definition it is easy to see that formula_23 (or formula_19 in the complex case) is both symmetric (resp. conjugate symmetric) and positive definite, i.e. formula_25 for every formula_26 The Moore–Aronszajn theorem (see below) is a sort of converse to this: if a function formula_27 satisfies these conditions then there is a Hilbert space of functions on formula_8 for which it is a reproducing kernel. Examples. The simplest example of a reproducing kernel Hilbert space is the space formula_28 where formula_8 is a set and formula_8 is the counting measure on formula_8. For formula_29, the reproducing kernel formula_30 is the indicator function of the one point set formula_31. Nontrivial reproducing kernel Hilbert spaces often involve analytic functions, as we now illustrate by example. Consider the Hilbert space of bandlimited continuous functions formula_9. Fix some cutoff frequency formula_32 and define the Hilbert space formula_33 where formula_34 is the set of square integrable functions, and formula_35 is the Fourier transform of formula_36. As the inner product, we use formula_37 Since this is a closed subspace of formula_38, it is a HIlbert space. Moreover, the elements of formula_9 are smooth functions on formula_39 that tend to zero at infinity, essentially by the Riemann-Lebesgue lemma. In fact, the elements of formula_9 are the restrictions to formula_39 of entire holomorphic functions, by the Paley–Wiener theorem. From the Fourier inversion theorem, we have formula_40 It then follows by the Cauchy–Schwarz inequality and Plancherel's theorem that, for all formula_4, formula_41 This inequality shows that the evaluation functional is bounded, proving that formula_42 is indeed a RKHS. 
The kernel function formula_30 in this case is given by formula_43 The Fourier transform of formula_44 defined above is given by formula_45 which is a consequence of the time-shifting property of the Fourier transform. Consequently, using Plancherel's theorem, we have formula_46 Thus we obtain the reproducing property of the kernel. formula_30 in this case is the "bandlimited version" of the Dirac delta function, and that formula_44 converges to formula_47 in the weak sense as the cutoff frequency formula_48 tends to infinity. Moore–Aronszajn theorem. We have seen how a reproducing kernel Hilbert space defines a reproducing kernel function that is both symmetric and positive definite. The Moore–Aronszajn theorem goes in the other direction; it states that every symmetric, positive definite kernel defines a unique reproducing kernel Hilbert space. The theorem first appeared in Aronszajn's "Theory of Reproducing Kernels", although he attributes it to E. H. Moore. Theorem. Suppose "K" is a symmetric, positive definite kernel on a set "X". Then there is a unique Hilbert space of functions on "X" for which "K" is a reproducing kernel. Proof. For all "x" in "X", define "Kx" = "K"("x", ⋅ ). Let "H"0 be the linear span of {"Kx" : "x" ∈ "X"}. Define an inner product on "H"0 by formula_49 which implies formula_50. The symmetry of this inner product follows from the symmetry of "K" and the non-degeneracy follows from the fact that "K" is positive definite. Let "H" be the completion of "H"0 with respect to this inner product. Then "H" consists of functions of the form formula_51 Now we can check the reproducing property (2): formula_52 To prove uniqueness, let "G" be another Hilbert space of functions for which "K" is a reproducing kernel. For every "x" and "y" in "X", (2) implies that formula_53 By linearity, formula_54 on the span of formula_55. Then formula_56 because "G" is complete and contains "H"0 and hence contains its completion. Now we need to prove that every element of "G" is in "H". Let formula_16 be an element of "G". Since "H" is a closed subspace of "G", we can write formula_57 where formula_58 and formula_59. Now if formula_60 then, since "K" is a reproducing kernel of "G" and "H": formula_61 where we have used the fact that formula_17 belongs to "H" so that its inner product with formula_62 in "G" is zero. This shows that formula_63 in "G" and concludes the proof. Integral operators and Mercer's theorem. We may characterize a symmetric positive definite kernel formula_27 via the integral operator using Mercer's theorem and obtain an additional view of the RKHS. Let formula_8 be a compact space equipped with a strictly positive finite Borel measure formula_64 and formula_65 a continuous, symmetric, and positive definite function. Define the integral operator formula_66 as formula_67 where formula_68 is the space of square integrable functions with respect to formula_69. Mercer's theorem states that the spectral decomposition of the integral operator formula_70 of formula_27 yields a series representation of formula_27 in terms of the eigenvalues and eigenfunctions of formula_71. This then implies that formula_27 is a reproducing kernel so that the corresponding RKHS can be defined in terms of these eigenvalues and eigenfunctions. We provide the details below. Under these assumptions formula_70 is a compact, continuous, self-adjoint, and positive operator. 
The spectral theorem for self-adjoint operators implies that there is an at most countable decreasing sequence formula_72 such that formula_73 and formula_74, where the formula_75 form an orthonormal basis of formula_68. By the positivity of formula_76 for all formula_77 One can also show that formula_78 maps continuously into the space of continuous functions formula_79 and therefore we may choose continuous functions as the eigenvectors, that is, formula_80 for all formula_77 Then by Mercer's theorem formula_81 may be written in terms of the eigenvalues and continuous eigenfunctions as formula_82 for all formula_83 such that formula_84 This above series representation is referred to as a Mercer kernel or Mercer representation of formula_81. Furthermore, it can be shown that the RKHS formula_42 of formula_81 is given by formula_85 where the inner product of formula_42 given by formula_86 This representation of the RKHS has application in probability and statistics, for example to the Karhunen-Loève representation for stochastic processes and kernel PCA. Feature maps. A feature map is a map formula_87, where formula_88 is a Hilbert space which we will call the feature space. The first sections presented the connection between bounded/continuous evaluation functions, positive definite functions, and integral operators and in this section we provide another representation of the RKHS in terms of feature maps. Every feature map defines a kernel via Clearly formula_81 is symmetric and positive definiteness follows from the properties of inner product in formula_88. Conversely, every positive definite function and corresponding reproducing kernel Hilbert space has infinitely many associated feature maps such that (3) holds. For example, we can trivially take formula_89 and formula_90 for all formula_60. Then (3) is satisfied by the reproducing property. Another classical example of a feature map relates to the previous section regarding integral operators by taking formula_91 and formula_92. This connection between kernels and feature maps provides us with a new way to understand positive definite functions and hence reproducing kernels as inner products in formula_42. Moreover, every feature map can naturally define a RKHS by means of the definition of a positive definite function. Lastly, feature maps allow us to construct function spaces that reveal another perspective on the RKHS. Consider the linear space formula_93 We can define a norm on formula_94 by formula_95 It can be shown that formula_96 is a RKHS with kernel defined by formula_97. This representation implies that the elements of the RKHS are inner products of elements in the feature space and can accordingly be seen as hyperplanes. This view of the RKHS is related to the kernel trick in machine learning. Properties. Useful properties of RKHSs: formula_112 Common examples. Bilinear kernels. The RKHS formula_9 corresponding to this kernel is the dual space, consisting of functions formula_113 satisfying formula_114. formula_115 Radial basis function kernels. These are another common class of kernels which satisfy formula_116. Some examples include: Bergman kernels. We also provide examples of Bergman kernels. Let "X" be finite and let "H" consist of all complex-valued functions on "X". Then an element of "H" can be represented as an array of complex numbers. 
If the usual inner product is used, then "Kx" is the function whose value is 1 at "x" and 0 everywhere else, and formula_109 can be thought of as an identity matrix since formula_120 In this case, "H" is isomorphic to formula_121. The case of formula_122 (where formula_123 denotes the unit disc) is more sophisticated. Here the Bergman space formula_124 is the space of square-integrable holomorphic functions on formula_123. It can be shown that the reproducing kernel for formula_124 is formula_125 Lastly, the space of band limited functions in formula_126 with bandwidth formula_127 is a RKHS with reproducing kernel formula_128 Extension to vector-valued functions. In this section we extend the definition of the RKHS to spaces of vector-valued functions as this extension is particularly important in multi-task learning and manifold regularization. The main difference is that the reproducing kernel formula_129 is a symmetric function that is now a positive semi-definite "matrix" for every formula_130 in formula_131. More formally, we define a vector-valued RKHS (vvRKHS) as a Hilbert space of functions formula_132 such that for all formula_133 and formula_60 formula_134 and formula_135 This second property parallels the reproducing property for the scalar-valued case. This definition can also be connected to integral operators, bounded evaluation functions, and feature maps as we saw for the scalar-valued RKHS. We can equivalently define the vvRKHS as a vector-valued Hilbert space with a bounded evaluation functional and show that this implies the existence of a unique reproducing kernel by the Riesz Representation theorem. Mercer's theorem can also be extended to address the vector-valued setting and we can therefore obtain a feature map view of the vvRKHS. Lastly, it can also be shown that the closure of the span of formula_136 coincides with formula_42, another property similar to the scalar-valued case. We can gain intuition for the vvRKHS by taking a component-wise perspective on these spaces. In particular, we find that every vvRKHS is isometrically isomorphic to a scalar-valued RKHS on a particular input space. Let formula_137. Consider the space formula_138 and the corresponding reproducing kernel As noted above, the RKHS associated to this reproducing kernel is given by the closure of the span of formula_139 where formula_140 for every set of pairs formula_141. The connection to the scalar-valued RKHS can then be made by the fact that every matrix-valued kernel can be identified with a kernel of the form of (4) via formula_142 Moreover, every kernel with the form of (4) defines a matrix-valued kernel with the above expression. Now letting the map formula_143 be defined as formula_144 where formula_145 is the formula_146 component of the canonical basis for formula_147, one can show that formula_148 is bijective and an isometry between formula_149 and formula_150. While this view of the vvRKHS can be useful in multi-task learning, this isometry does not reduce the study of the vector-valued case to that of the scalar-valued case. In fact, this isometry procedure can make both the scalar-valued kernel and the input space too difficult to work with in practice as properties of the original kernels are often lost. An important class of matrix-valued reproducing kernels are "separable" kernels which can factorized as the product of a scalar valued kernel and a formula_151-dimensional symmetric positive semi-definite matrix. 
In light of our previous discussion these kernels are of the form formula_152 for all formula_153 in formula_131 and formula_154 in formula_155. As the scalar-valued kernel encodes dependencies between the inputs, we can observe that the matrix-valued kernel encodes dependencies among both the inputs and the outputs. We lastly remark that the above theory can be further extended to spaces of functions with values in function spaces but obtaining kernels for these spaces is a more difficult task. Connection between RKHSs and the ReLU function. The ReLU function is commonly defined as formula_156 and is a mainstay in the architecture of neural networks where it is used as an activation function. One can construct a ReLU-like nonlinear function using the theory of reproducing kernel Hilbert spaces. Below, we derive this construction and show how it implies the representation power of neural networks with ReLU activations. We will work with the Hilbert space formula_157 of absolutely continuous functions with formula_158 and square integrable (i.e. formula_159) derivative. It has the inner product formula_160 To construct the reproducing kernel it suffices to consider a dense subspace, so let formula_161 and formula_162. The Fundamental Theorem of Calculus then gives formula_163 where formula_164 and formula_165 i.e. formula_166 This implies formula_167 reproduces formula_0. Moreover the minimum function on formula_168 has the following representations with the ReLu function: formula_169 Using this formulation, we can apply the representer theorem to the RKHS, letting one prove the optimality of using ReLU activations in neural network settings. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
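As an illustration of how the representer theorem is used in practice, the following Python sketch (added here; the data, function names and parameter values are our own choices, not from the article) fits a kernel ridge regressor with the Gaussian kernel. By the representer theorem, the minimiser of the regularised empirical risk can be written as a finite linear combination of kernel functions evaluated at the training points, and its coefficients solve a linear system.

```python
# Minimal sketch of kernel ridge regression in an RKHS with the Gaussian
# kernel; the data, names and parameter values are illustrative choices.

import numpy as np

def gaussian_kernel(x, y, sigma=0.2):
    """K(x, y) = exp(-|x - y|^2 / (2 sigma^2)) for scalar inputs."""
    return np.exp(-(x - y) ** 2 / (2.0 * sigma ** 2))

# Training data: noisy samples of a smooth target function.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(20)

# Gram matrix K_ij = K(x_i, x_j), symmetric and positive semi-definite.
K = gaussian_kernel(x_train[:, None], x_train[None, :])

# Representer theorem: the minimiser of
#     sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2
# has the form f(x) = sum_i alpha_i K(x_i, x), with (K + lam I) alpha = y.
lam = 1e-3
alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)

def predict(x_new):
    """Evaluate f(x_new) = sum_i alpha_i K(x_i, x_new)."""
    x_new = np.atleast_1d(x_new)
    return gaussian_kernel(x_new[:, None], x_train[None, :]) @ alpha

print(predict([0.25, 0.50, 0.75]))   # roughly [1, 0, -1], the values of sin(2*pi*x)
```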
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "g" }, { "math_id": 2, "text": "\\|f-g\\|" }, { "math_id": 3, "text": "|f(x)-g(x)|" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "\\sin^{2n} (x)" }, { "math_id": 6, "text": "f(x)=0" }, { "math_id": 7, "text": "g(x)=1_{\\mathbb{Q}}" }, { "math_id": 8, "text": "X" }, { "math_id": 9, "text": "H" }, { "math_id": 10, "text": " L_{x} : f \\mapsto f(x) \\text{ } \\forall f \\in H. " }, { "math_id": 11, "text": " L_x " }, { "math_id": 12, "text": "M_x>0" }, { "math_id": 13, "text": "M_x<\\infty" }, { "math_id": 14, "text": "x \\in X" }, { "math_id": 15, "text": "\\sup_x M_x = \\infty" }, { "math_id": 16, "text": " f " }, { "math_id": 17, "text": " K_x " }, { "math_id": 18, "text": "\\mathbb{R}" }, { "math_id": 19, "text": "\\mathbb{C}" }, { "math_id": 20, "text": " K_x(y) = L_y(K_x)= \\langle K_x,\\ K_y \\rangle_H, " }, { "math_id": 21, "text": "K_y\\in H" }, { "math_id": 22, "text": "L_y" }, { "math_id": 23, "text": " K: X \\times X \\to \\mathbb{R} " }, { "math_id": 24, "text": " K(x,y) = \\langle K_x,\\ K_y \\rangle_H. " }, { "math_id": 25, "text": " \\sum_{i,j =1}^n c_i c_j K(x_i, x_j)=\n\\sum_{i=1}^n c_i \\left\\langle K_{x_i} , \\sum_{j=1}^n c_j K_{x_j} \\right\\rangle_{H} = \n \\left\\langle \\sum_{i=1}^n c_i K_{x_i} , \\sum_{j=1}^n c_j K_{x_j} \\right\\rangle_{H} =\n\\left\\|\\sum_{i=1}^nc_iK_{x_i}\\right\\|_H^2 \\ge 0 " }, { "math_id": 26, "text": " n \\in \\mathbb{N}, x_1, \\dots, x_n \\in X, \\text{ and } c_1, \\dots, c_n \\in \\mathbb{R}. " }, { "math_id": 27, "text": "K" }, { "math_id": 28, "text": "L^2(X,\\mu)" }, { "math_id": 29, "text": "x\\in X" }, { "math_id": 30, "text": "K_x" }, { "math_id": 31, "text": "\\{x\\}\\subset X" }, { "math_id": 32, "text": " 0<a < \\infty " }, { "math_id": 33, "text": " H = \\{ f \\in L^2(\\mathbb{R}) \\mid \\operatorname{supp}(F) \\subset [-a,a] \\} " }, { "math_id": 34, "text": "L^2(\\mathbb{R})" }, { "math_id": 35, "text": " F(\\omega) = \\int_{-\\infty}^\\infty f(t) e^{-i\\omega t} \\, dt " }, { "math_id": 36, "text": " f" }, { "math_id": 37, "text": "\\langle f, g\\rangle_{L^2} = \\int_{-\\infty}^\\infty f(x) \\cdot \\overline{g(x)} \\, dx." }, { "math_id": 38, "text": "L^2(\\mathbb R)" }, { "math_id": 39, "text": "\\mathbb R" }, { "math_id": 40, "text": " f(x) = \\frac{1}{2 \\pi} \\int_{-a}^a F(\\omega) e^{ix \\omega} \\, d\\omega ." }, { "math_id": 41, "text": " |f(x)| \\le \n\\frac{1}{2 \\pi} \\sqrt{ 2a\\int_{-a}^a |F(\\omega)|^2 \\, d\\omega} \n=\\frac{ \\sqrt{2a} }{2\\pi}\\sqrt{\\int_{-\\infty}^\\infty |F(\\omega)|^2 \\, d\\omega} \n= \\sqrt{\\frac{a}{\\pi}} \\|f\\|_{L^2}. " }, { "math_id": 42, "text": " H " }, { "math_id": 43, "text": "K_x(y) = \\frac{a}{\\pi} \\operatorname{sinc}\\left ( \\frac{a}{\\pi} (y-x) \\right )=\\frac{\\sin(a(y-x))}{\\pi(y-x)}." }, { "math_id": 44, "text": "K_x(y)" }, { "math_id": 45, "text": "\\int_{-\\infty}^\\infty K_x(y)e^{-i \\omega y} \\, dy = \n\\begin{cases}\ne^{-i \\omega x} &\\text{if } \\omega \\in [-a, a], \\\\\n0 &\\textrm{otherwise},\n\\end{cases}\n " }, { "math_id": 46, "text": " \\langle f, K_x\\rangle_{L^2} = \\int_{-\\infty}^\\infty f(y) \\cdot \\overline{K_x(y)} \\, dy \n= \\frac{1}{2\\pi} \\int_{-a}^a F(\\omega) \\cdot e^{i\\omega x} \\, d\\omega = f(x) ." 
}, { "math_id": 47, "text": "\\delta(y-x)" }, { "math_id": 48, "text": "a" }, { "math_id": 49, "text": " \\left\\langle \\sum_{j=1}^n b_j K_{y_j}, \\sum_{i=1}^m a_i K_{x_i} \\right \\rangle_{H_0} = \\sum_{i=1}^m \\sum_{j=1}^n {a_i} b_j K(y_j, x_i)," }, { "math_id": 50, "text": "K(x,y)=\\left\\langle K_{x}, K_{y} \\right\\rangle_{H_0}" }, { "math_id": 51, "text": " f(x) = \\sum_{i=1}^\\infty a_i K_{x_i} (x) \\quad \\text{where} \\quad \\lim_{n \\to \\infty}\\sup_{p\\geq0}\\left\\|\\sum_{i=n}^{n+p} a_i K_{x_i}\\right\\|_{H_0} = 0." }, { "math_id": 52, "text": "\\langle f, K_x \\rangle_H = \\sum_{i=1}^\\infty a_i\\left \\langle K_{x_i}, K_x \\right \\rangle_{H_0}= \\sum_{i=1}^\\infty a_i K (x_i, x) = f(x)." }, { "math_id": 53, "text": "\\langle K_x, K_y \\rangle_H = K(x, y) = \\langle K_x, K_y \\rangle_G." }, { "math_id": 54, "text": "\\langle \\cdot, \\cdot \\rangle_H = \\langle \\cdot, \\cdot \\rangle_G" }, { "math_id": 55, "text": "\\{K_x : x \\in X\\}" }, { "math_id": 56, "text": "H \\subset G" }, { "math_id": 57, "text": " f=f_H + f_{H^\\bot} " }, { "math_id": 58, "text": " f_H \\in H " }, { "math_id": 59, "text": " f_{H^\\bot} \\in H^\\bot " }, { "math_id": 60, "text": " x \\in X " }, { "math_id": 61, "text": "f(x) = \\langle K_x , f \\rangle_G = \\langle K_x, f_H \\rangle_G + \\langle K_x, f_{H^\\bot} \\rangle_G = \\langle K_x , f_H \\rangle_G = \\langle K_x , f_H \\rangle_H = f_H(x), " }, { "math_id": 62, "text": " f_{H^\\bot} " }, { "math_id": 63, "text": " f = f_H " }, { "math_id": 64, "text": "\\mu" }, { "math_id": 65, "text": "K: X \\times X \\to \\R" }, { "math_id": 66, "text": "T_K: L_2(X) \\to L_2(X)" }, { "math_id": 67, "text": " [T_K f](\\cdot) =\\int_X K(\\cdot,t) f(t)\\, d\\mu(t) " }, { "math_id": 68, "text": "L_2(X)" }, { "math_id": 69, "text": " \\mu " }, { "math_id": 70, "text": "T_K" }, { "math_id": 71, "text": " T_K " }, { "math_id": 72, "text": "(\\sigma_i)_i \\geq 0 " }, { "math_id": 73, "text": "\\lim_{i \\to \\infty}\\sigma_i = 0" }, { "math_id": 74, "text": "T_K\\varphi_i(x) = \\sigma_i\\varphi_i(x)" }, { "math_id": 75, "text": "\\{\\varphi_i\\}" }, { "math_id": 76, "text": "T_K, \\sigma_i > 0" }, { "math_id": 77, "text": "i." }, { "math_id": 78, "text": "T_K " }, { "math_id": 79, "text": "C(X)" }, { "math_id": 80, "text": "\\varphi_i \\in C(X)" }, { "math_id": 81, "text": " K " }, { "math_id": 82, "text": " K(x,y) = \\sum_{j=1}^\\infty \\sigma_j \\, \\varphi_j(x) \\, \\varphi_j(y) " }, { "math_id": 83, "text": "x, y \\in X" }, { "math_id": 84, "text": " \\lim_{n \\to \\infty}\\sup_{u,v} \\left |K(u,v) - \\sum_{j=1}^n \\sigma_j \\, \\varphi_j(u) \\, \\varphi_j(v) \\right | = 0. " }, { "math_id": 85, "text": " H = \\left \\{ f \\in L_2(X) \\,\\Bigg\\vert\\, \\sum_{i=1}^\\infty \\frac{\\left\\langle f,\\varphi_i \\right \\rangle^2_{L_2}}{\\sigma_i} < \\infty \\right\\} " }, { "math_id": 86, "text": " \\left\\langle f,g \\right\\rangle_H = \\sum_{i=1}^\\infty \\frac{\\left\\langle f,\\varphi_i \\right\\rangle_{L_2}\\left\\langle g,\\varphi_i \\right\\rangle_{L_2}}{\\sigma_i}. " }, { "math_id": 87, "text": " \\varphi\\colon X \\rightarrow F " }, { "math_id": 88, "text": " F " }, { "math_id": 89, "text": " F = H " }, { "math_id": 90, "text": " \\varphi(x) = K_x " }, { "math_id": 91, "text": " F = \\ell^2 " }, { "math_id": 92, "text": " \\varphi(x) = (\\sqrt{\\sigma_i} \\varphi_i(x))_i " }, { "math_id": 93, "text": " H_\\varphi = \\{ f: X \\to \\mathbb{R} \\mid \\exists w \\in F, f(x) = \\langle w, \\varphi(x) \\rangle_{F}, \\forall \\text{ } x \\in X \\} . 
" }, { "math_id": 94, "text": " H_\\varphi " }, { "math_id": 95, "text": " \\|f\\|_\\varphi = \\inf \\{\\|w\\|_F : w \\in F, f(x) = \\langle w, \\varphi(x)\\rangle_F, \\forall \\text{ } x \\in X \\} ." }, { "math_id": 96, "text": " H_{\\varphi} " }, { "math_id": 97, "text": " K(x,y) = \\langle\\varphi(x), \\varphi(y)\\rangle_F " }, { "math_id": 98, "text": "(X_i)_{i=1}^p" }, { "math_id": 99, "text": "(K_i)_{i=1}^p" }, { "math_id": 100, "text": " (X_i)_{i=1}^p." }, { "math_id": 101, "text": "K((x_1,\\ldots ,x_p),(y_1,\\ldots,y_p)) = K_1(x_1,y_1)\\cdots K_p(x_p,y_p)" }, { "math_id": 102, "text": " X = X_1 \\times \\dots \\times X_p." }, { "math_id": 103, "text": "X_0 \\subset X," }, { "math_id": 104, "text": "X_0 \\times X_0 " }, { "math_id": 105, "text": " K(x, x) = 1 " }, { "math_id": 106, "text": "x \\in X " }, { "math_id": 107, "text": " d_K(x,y) = \\|K_x - K_y\\|_H^2 = 2(1-K(x,y)) \\qquad \\forall x \\in X . " }, { "math_id": 108, "text": " K(x,y)^2 \\le K(x, x)K(y, y)=1 \\qquad \\forall x,y \\in X." }, { "math_id": 109, "text": "K(x,y)" }, { "math_id": 110, "text": "x,y \\in X" }, { "math_id": 111, "text": " \\{ K_x \\mid x \\in X \\} " }, { "math_id": 112, "text": " K(x,y) = \\langle x,y\\rangle " }, { "math_id": 113, "text": "f(x) = \\langle x,\\beta\\rangle" }, { "math_id": 114, "text": "\\|f\\|_H^2=\\|\\beta\\|^2" }, { "math_id": 115, "text": " K(x,y) = (\\alpha\\langle x,y \\rangle + 1)^d, \\qquad \\alpha \\in \\R, d \\in \\N " }, { "math_id": 116, "text": " K(x,y) = K(\\|x - y\\|)" }, { "math_id": 117, "text": " K(x,y) = e^{-\\frac{\\|x - y\\|^2}{2\\sigma^2}}, \\qquad \\sigma > 0 " }, { "math_id": 118, "text": " K(x,y) = e^{-\\frac{\\|x - y\\|}{\\sigma}}, \\qquad \\sigma > 0 " }, { "math_id": 119, "text": "\\|f\\|_H^2=\\int_{\\mathbb R}\\Big( \\frac1{\\sigma} f(x)^2 + \\sigma f'(x)^2\\Big) \\mathrm d x." }, { "math_id": 120, "text": "K(x,y)=\\begin{cases} 1 & x=y \\\\ 0 & x \\neq y \\end{cases}" }, { "math_id": 121, "text": "\\Complex^n" }, { "math_id": 122, "text": "X= \\mathbb{D}" }, { "math_id": 123, "text": "\\mathbb{D}" }, { "math_id": 124, "text": "H^2(\\mathbb{D})" }, { "math_id": 125, "text": "K(x,y)=\\frac{1}{\\pi}\\frac{1}{(1-x\\overline{y})^2}." }, { "math_id": 126, "text": " L^2(\\R) " }, { "math_id": 127, "text": "2a" }, { "math_id": 128, "text": "K(x,y)=\\frac{\\sin a (x - y)}{\\pi (x-y)}." }, { "math_id": 129, "text": " \\Gamma " }, { "math_id": 130, "text": " x,y " }, { "math_id": 131, "text": " X " }, { "math_id": 132, "text": " f: X \\to \\mathbb{R}^T " }, { "math_id": 133, "text": " c \\in \\mathbb{R}^T " }, { "math_id": 134, "text": " \\Gamma_xc(y) = \\Gamma(x, y)c \\in H \\text{ for } y \\in X " }, { "math_id": 135, "text": " \\langle f, \\Gamma_x c \\rangle_H = f(x)^\\intercal c. " }, { "math_id": 136, "text": " \\{ \\Gamma_xc : x \\in X, c \\in \\mathbb{R}^T \\} " }, { "math_id": 137, "text": "\\Lambda = \\{1, \\dots, T \\} " }, { "math_id": 138, "text": " X \\times \\Lambda " }, { "math_id": 139, "text": "\\{ \\gamma_{(x,t)} : x \\in X, t \\in \\Lambda \\} " }, { "math_id": 140, "text": " \\gamma_{(x,t)} (y,s) = \\gamma( (x,t), (y,s)) " }, { "math_id": 141, "text": " (x,t), (y,s) \\in X \\times \\Lambda " }, { "math_id": 142, "text": " \\Gamma(x,y)_{(t,s)} = \\gamma((x,t), (y,s)). 
" }, { "math_id": 143, "text": " D: H_\\Gamma \\to H_\\gamma " }, { "math_id": 144, "text": " (Df)(x,t) = \\langle f(x), e_t \\rangle_{\\mathbb{R}^T} " }, { "math_id": 145, "text": " e_t " }, { "math_id": 146, "text": " t^\\text{th} " }, { "math_id": 147, "text": " \\mathbb{R}^T " }, { "math_id": 148, "text": " D " }, { "math_id": 149, "text": " H_\\Gamma " }, { "math_id": 150, "text": " H_\\gamma " }, { "math_id": 151, "text": "T" }, { "math_id": 152, "text": " \\gamma((x,t),(y,s)) = K(x,y) K_T(t,s) " }, { "math_id": 153, "text": "x,y " }, { "math_id": 154, "text": "t,s" }, { "math_id": 155, "text": " T " }, { "math_id": 156, "text": "f(x)=\\max \\{0, x\\}" }, { "math_id": 157, "text": " \\mathcal{H}=L^1_2(0)[0, \\infty) " }, { "math_id": 158, "text": "f(0) = 0" }, { "math_id": 159, "text": "L_2" }, { "math_id": 160, "text": " \\langle f,g \\rangle_{\\mathcal{H}} = \\int_0^\\infty f'(x)g'(x) \\, dx ." }, { "math_id": 161, "text": "f\\in C^1[0, \\infty)" }, { "math_id": 162, "text": "f(0)=0" }, { "math_id": 163, "text": "f(y)= \\int_0^y f'(x) \\, dx = \\int_0^\\infty G(x,y) f'(x) \\, dx = \\langle K_y,f \\rangle" }, { "math_id": 164, "text": "G(x,y)= \n\\begin{cases} 1, & x < y\\\\\n 0, & \\text{otherwise}\n\\end{cases}" }, { "math_id": 165, "text": "K_y'(x)= G(x,y),\\ K_y(0) = 0" }, { "math_id": 166, "text": "K(x, y)=K_y(x)=\\int_0^x G(z, y) \\, dz=\n\\begin{cases}\n x, & 0\\leq x<y \\\\\n y, & \\text{otherwise.}\n\\end{cases}=\\min(x, y)" }, { "math_id": 167, "text": "K_y=K(\\cdot, y)" }, { "math_id": 168, "text": " X\\times X = [0,\\infty)\\times [0,\\infty) " }, { "math_id": 169, "text": " \\min(x,y) = x -\\operatorname{ReLU}(x-y) = y - \\operatorname{ReLU}(y-x). " } ]
https://en.wikipedia.org/wiki?curid=651196
6512121
Igneous intrusion
Body of intrusive igneous rocks In geology, an igneous intrusion (or intrusive body or simply intrusion) is a body of intrusive igneous rock that forms by crystallization of magma slowly cooling below the surface of the Earth. Intrusions have a wide variety of forms and compositions, illustrated by examples like the Palisades Sill of New York and New Jersey; the Henry Mountains of Utah; the Bushveld Igneous Complex of South Africa; Shiprock in New Mexico; the Ardnamurchan intrusion in Scotland; and the Sierra Nevada Batholith of California. Because the solid country rock into which magma intrudes is an excellent insulator, cooling of the magma is extremely slow, and intrusive igneous rock is coarse-grained (phaneritic). Intrusive igneous rocks are classified separately from extrusive igneous rocks, generally on the basis of their mineral content. The relative amounts of quartz, alkali feldspar, plagioclase, and feldspathoid is particularly important in classifying intrusive igneous rocks. Intrusions must displace existing country rock to make room for themselves. The question of how this takes place is called the "room problem", and it remains a subject of active investigation for many kinds of intrusions. The term pluton is poorly defined, but has been used to describe an intrusion emplaced at great depth; as a synonym for all igneous intrusions; as a dustbin category for intrusions whose size or character are not well determined; or as a name for a very large intrusion or for a crystallized magma chamber. A pluton that has intruded and obscured the contact between a terrane and adjacent rock is called a stitching pluton. Classification. Intrusions are broadly divided into "discordant intrusions", which cut across the existing structure of the country rock, and "concordant intrusions" that intrude parallel to existing bedding or fabric. These are further classified according to such criteria as size, evident mode of origin, or whether they are tabular in shape. An "intrusive suite" is a group of intrusions related in time and space. Discordant intrusions. Dikes. Dikes are tabular discordant intrusions, taking the form of sheets that cut across existing rock beds. They tend to resist erosion, so that they stand out as natural walls on the landscape. They vary in thickness from millimeter-thick films to over and an individual sheet can have an area of . They also vary widely in composition. Dikes form by hydraulic fracturing of the country rock by magma under pressure, and are more common in regions of crustal tension. Ring dikes and cone sheets. Ring dikes and cone sheets are dikes with particular forms that are associated with the formation of calderas. Volcanic necks. Volcanic necks are feeder pipes for volcanoes that have been exposed by erosion. Surface exposures are typically cylindrical, but the intrusion often becomes elliptical or even cloverleaf-shaped at depth. Dikes often radiate from a volcanic neck, suggesting that necks tend to form at intersections of dikes where passage of magma is least obstructed. Diatremes and breccia pipes. Diatremes and breccia pipes are pipe-like bodies of breccia that are formed by particular kinds of explosive eruptions. As they have reached the surface they are really extrusions, but the non erupted material is an intrusion and indeed due to erosion may be difficult to distinguish from an intrusion that never reached the surface when magma/lava. 
The root material of a diatreme is identical to intrusive material nearby, if it exists, that never reached the then surface when formed. Stocks. A stock is a non-tabular discordant intrusion whose exposure covers less than . Although this seems arbitrary, particularly since the exposure may be only the tip of a larger intrusive body, the classification is meaningful for bodies which do not change much in area with depth and that have other features suggesting a distinctive origin and mode of emplacement. Batholiths. Batholiths are discordant intrusions with an exposed area greater than . Some are of truly enormous size, and their lower contacts are very rarely exposed. For example, the Coastal Batholith of Peru is long and wide. They are usually formed from magma rich in silica, and never from gabbro or other rock rich in mafic minerals, but some batholiths are composed almost entirely of anorthosite. Concordant intrusions. Sills. A sill is a tabular concordant intrusion, typically taking the form of a sheet parallel to sedimentary beds. They are otherwise similar to dikes. Most are of mafic composition, relatively low in silica, which gives them the low viscosity necessary to penetrate between sedimentary beds. Laccoliths. A laccolith is a concordant intrusion with a flat base and domed roof. Laccoliths typically form at shallow depth, less than , and in regions of crustal compression. Lopoliths and layered intrusions. Lopoliths are concordant intrusions with a saucer shape, somewhat resembling an inverted laccolith, but they can be much larger and form by different processes. Their immense size promotes very slow cooling, and this produces an unusually complete mineral segregation called a layered intrusion. Formation. The room problem. The ultimate source of magma is partial melting of rock in the upper mantle and lower crust. This produces magma that is less dense than its source rock. For example, a granitic magma, which is high in silica, has a density of 2.4 Mg/m3, much less than the 2.8 Mg/m3 of high-grade metamorphic rock. This gives the magma tremendous buoyancy, so that ascent of the magma is inevitable once enough magma has accumulated. However, the question of precisely how large quantities of magma are able to shove aside country rock to make room for themselves (the "room problem") is still a matter of research. The composition of the magma and country rock and the stresses affecting the country rock strongly influence the kinds of intrusions that take place. For example, where the crust is undergoing extension, magma can easily rise into tensional fractures in the upper crust to form dikes. Where the crust is under compression, magma at shallow depth will tend to form laccoliths instead, with the magma penetrating the least competent beds, such as shale beds. Ring dikes and cone sheets form only at shallow depth, where a plug of overlying country rock can be raised or lowered. The immense volumes of magma involved in batholiths can force their way upwards only when the magma is highly silicic and buoyant, and are likely do so as diapirs in the ductile deep crust and through a variety of other mechanisms in the brittle upper crust. Multiple and composite intrusions. Igneous intrusions may form from a single magmatic event or several incremental events. Recent evidence suggests that incremental formation is more common for large intrusions. For example, the Palisades Sill was never a single body of magma thick, but was formed from multiple injections of magma. 
An intrusive body is described as "multiple" when it forms from repeated injections of magma of similar composition, and as "composite" when formed of repeated injections of magma of unlike composition. A composite dike can include rocks as different as granophyre and diabase. While there is often little visual evidence of multiple injections in the field, there is geochemical evidence. Zircon zoning provides important evidence for determining if a single magmatic event or a series of injections were the methods of emplacement. Large felsic intrusions likely form from melting of lower crust that has been heated by an intrusion of mafic magma from the upper mantle. The different densities of felsic and mafic magma limit mixing, so that the silicic magma floats on the mafic magma. Such limited mixing as takes place results in the small inclusions of mafic rock commonly found in granites and granodiorites. Cooling. An intrusion of magma loses heat to the surrounding country rock through heat conduction. Near the contact of hot material with cold material, if the hot material is initially uniform in temperature, the temperature profile across the contact is given by the relationship formula_0 where formula_1 is the initial temperature of the hot material, "k" is the thermal diffusivity (typically close to 10−6 m2 s−1 for most geologic materials), x is the distance from the contact, and t is the time since intrusion. This formula suggests that the magma close to the contact will be rapidly chilled while the country rock close to the contact is rapidly heated, while material further from the contact will be much slower to cool or heat. Thus a "chilled margin" is often found on the intrusion side of the contact, while a "contact aureole" is found on the country rock side. The chilled margin is much finer grained than most of the intrusion, and may be different in composition, reflecting the initial composition of the intrusion before fractional crystallization, assimilation of country rock, or further magmatic injections modified the composition of the rest of the intrusion. Isotherms (surfaces of constant temperature) propagate away from the margin according to a square root law, so that if the outermost meter of the magma takes ten years to cool to a given temperature, the next inward meter will take 40 years, the next will take 90 years, and so on. This is an idealization, and such processes as magma convection (where cooled magma next to the contact sinks to the bottom of the magma chamber and hotter magma takes its place) can alter the cooling process, reducing the thickness of chilled margins while hastening cooling of the intrusion as a whole. However, it is clear that thin dikes will cool much faster than larger intrusions, which explains why small intrusions near the surface (where the country rock is initially cold) are often nearly as fine-grained as volcanic rock. Structural features of the contact between intrusion and country rock give clues to the conditions under which the intrusion took place. "Catazonal intrusions" have a thick aureole that grades into the intrusive body with no sharp margin, indicating considerable chemical reaction between intrusion and country rock, and often have broad migmatite zones. Foliations in the intrusion and the surrounding country rock are roughly parallel, with indications of extreme deformation in the country rock. Such intrusions are interpreted as taking placed at great depth. 
"Mesozonal intrusions" have a much lower degree of metamorphism in their contact aureoles, and the contact between country rock and intrusion is clearly discernible. Migmatites are rare and deformation of country rock is moderate. Such intrusions are interpreted as occurring at medium depth. "Epizonal intrusions" are discordant with country rock and have sharp contacts with chilled margins, with only limited metamorphism in a contact aureole, and often contain xenolithic fragments of country rock suggesting brittle fracturing. Such intrusions are interpreted as occurring at shallow depth, and are commonly associated with volcanic rocks and collapse structures. Cumulates. An intrusion does not crystallize all minerals at once; rather, there is a sequence of crystallization that is reflected in the Bowen reaction series. Crystals formed early in cooling are generally denser than the remaining magma and can settle to the bottom of a large intrusive body. This forms a "cumulate layer" with distinctive texture and composition. Such cumulate layers may contain valuable ore deposits of chromite. The vast Bushveld Igneous Complex of South Africa includes cumulate layers of the rare rock type, chromitite, composed of 90% chromite, References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "T/T_0 = \\frac{1}{2} + \\frac{1}{2} \\operatorname{erf}(\\frac{x}{2\\sqrt{kt}})" }, { "math_id": 1, "text": "T_0" } ]
https://en.wikipedia.org/wiki?curid=6512121
651311
Subdwarf
Star of luminosity class VI under the Yerkes spectral classification system A subdwarf, sometimes denoted by "sd", is a star with luminosity class VI under the Yerkes spectral classification system. They are defined as stars with luminosity 1.5 to 2 magnitudes lower than that of main-sequence stars of the same spectral type. On a Hertzsprung–Russell diagram subdwarfs appear to lie below the main sequence. The term "subdwarf" was coined by Gerard Kuiper in 1939, to refer to a series of stars with anomalous spectra that were previously labeled as "intermediate white dwarfs".(p87) Since Kuiper coined the term, the subdwarf type has been extended to lower-mass stars than were known at the time. Astronomers have also discovered an entirely different group of blue-white subdwarfs, making two distinct categories: Cool (red) subdwarfs. Like ordinary main-sequence stars, cool subdwarfs (of spectral types G to M) produce their energy from hydrogen fusion. The explanation of their underluminosity lies in their low metallicity: These stars are not enriched in elements heavier than helium. The lower metallicity decreases the opacity of their outer layers and decreases the radiation pressure, resulting in a smaller, hotter star for a given mass. This lower opacity also allows them to emit a higher percentage of ultraviolet light for the same spectral type relative to a Population I star, a feature known as the ultraviolet excess.(p87–92) Usually members of the Milky Way's halo, they frequently have high space velocities relative to the Sun. Cool subdwarfs of spectral type L and T exist, for example ULAS J131610.28+075553.0 with spectral type sdT6.5. Subclasses of cool subdwarfs are, in order of decreasing metallicity, the subdwarfs (sd), extreme subdwarfs (esd), and ultra subdwarfs (usd). Subdwarfs of type L, T and Y. The low metallicity of subdwarfs is coupled with their old age. The early universe had a low content of elements heavier than helium and formed stars and brown dwarfs with lower metallicity. Only later supernovae, planetary nebulae and neutron star mergers enriched the universe with heavier elements. The old subdwarfs therefore often belong to the older structures in our Milky Way, mainly the thick disk and the galactic halo. Objects in the thick disk or the halo have a high space velocity compared to the Sun, which belongs to the younger thin disk. A high proper motion can be used to discover subdwarfs. Additionally the subdwarfs have spectral features that make them different from dwarfs with solar metallicity. All subdwarfs share the suppression of the near-infrared spectrum, mainly the H-band and K-band. The low metallicity increases the collision-induced absorption of hydrogen, causing this suppressed near-infrared spectrum. This is seen as blue infrared colors compared to brown dwarfs with solar metallicity. The low metallicity also changes other absorption features, such as deeper CaH and TiO bands at 0.7 μm in L-subdwarfs, a weaker VO band at 0.8 μm in early L-subdwarfs and a stronger FeH band at 0.99 μm for mid- to late L-subdwarfs. 2MASS J0532+8246 was discovered in 2003 as the first L-type subdwarf, which was later re-classified as an extreme subdwarf. 
The L-type subdwarfs have subtypes similar to M-type subdwarfs: the subtypes subdwarf (sd), extreme subdwarf (esd) and ultra subdwarf (usd) are defined by decreasing metallicity relative to solar metallicity (formula_3), measured on a logarithmic scale: subdwarfs have formula_0, extreme subdwarfs have formula_1, and ultra subdwarfs have formula_2. For T-type subdwarfs only a small sample of subdwarfs and extreme subdwarfs is known. 2MASSI J0937347+293142 was the first object to be identified as a T-type subdwarf candidate, in 2002, and in 2006 it was confirmed to have low metallicity. The first two extreme subdwarfs of type T, WISEA 0414−5854 and WISEA 1810−1010, were discovered in 2020 by scientists and volunteers of the Backyard Worlds project. Subdwarfs of type T and Y have less methane in their atmospheres, due to the lower concentration of carbon in these subdwarfs. This leads to a bluer W1-W2 (WISE) or ch1-ch2 (Spitzer) color, compared to objects with similar temperature but with solar metallicity. The color of T-types as a single classification criterion can be misleading. The closest directly imaged exoplanet, COCONUTS-2b, was first classified as a subdwarf of type T due to its color, while not showing a high tangential velocity. Only in 2021 was it identified as an exoplanet. The first Y-type subdwarf candidate was discovered in 2021, the brown dwarf WISE 1534–1043, which shows a moderately red Spitzer Space Telescope color (ch1-ch2 = 0.925±0.039 mag). The very red color between J and ch2 (J-ch2 &gt; 8.03 mag) and the absolute brightness would suggest a much redder ch1-ch2 color of about 2.4 to 3 mag. Due to the agreement with new subdwarf models, together with the high tangential velocity of 200 km/s, Kirkpatrick, Marocco "et al." (2021) argue that the most likely explanation is a cold, very metal-poor brown dwarf, maybe the first subdwarf of type Y. Binaries can help to determine the age and mass of these subdwarfs. The subdwarf VVV 1256−62B (sdL3) was discovered as a companion to a halo white dwarf, allowing the age to be measured at 8.4 to 13.8 billion years. It has a mass of 84 to 87 MJ, making VVV 1256−62B likely a red dwarf star. The subdwarf Wolf 1130C (sdT8) is the companion of an old subdwarf-white dwarf binary, which is estimated to be older than 10 billion years. It has a mass of 44.9 MJ, making it a brown dwarf. Hot (blue) subdwarfs. Hot subdwarfs, of bluish spectral types O and B, are an entirely different class of object from cool subdwarfs; they are also called "extreme horizontal-branch stars". Hot subdwarf stars represent a late stage in the evolution of some stars, caused when a red giant star loses its outer hydrogen layers before the core begins to fuse helium. The reasons for the premature loss of the hydrogen envelope are unclear, but the interaction of stars in a binary star system is thought to be one of the main mechanisms. Single subdwarfs may be the result of a merger of two white dwarfs or gravitational influence from substellar companions. B-type subdwarfs, being more luminous than white dwarfs, are a significant component in the hot star population of old stellar systems, such as globular clusters and elliptical galaxies. Heavy metal subdwarfs. The heavy metal subdwarfs are a type of hot subdwarf star with high concentrations of heavy metals. The metals detected include germanium, strontium, yttrium, zirconium and lead. Known heavy metal subdwarfs include HE 2359-2844, LS IV-14 116, and HE 1256-2738. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ -1.0 < \\bigl[ \\tfrac{ \\mathsf{Fe} }{ \\mathsf{H} } \\bigr]_\\star \\leq -0.3\\ ," }, { "math_id": 1, "text": "\\ -1.7 < \\bigl[ \\tfrac{ \\mathsf{Fe} }{\\mathsf{H} } \\bigr]_\\star \\leq -1.0\\ ," }, { "math_id": 2, "text": "\\ \\bigl[ \\tfrac{ \\mathsf{Fe} }{\\mathsf{H} } \\bigr]_\\star \\leq -1.7 ~." }, { "math_id": 3, "text": "\\ \\bigl[ \\tfrac{\\mathsf{Fe} }{\\mathsf{H}} \\bigr]_\\odot \\equiv 0\\ ," } ]
https://en.wikipedia.org/wiki?curid=651311
65133209
Searchable symmetric encryption
System allowing searching of encrypted documents Searchable symmetric encryption (SSE) is a form of encryption that allows one to efficiently search over a collection of encrypted documents or files without the ability to decrypt them. SSE can be used to outsource files to an untrusted cloud storage server without ever revealing the files in the clear but while preserving the server's ability to search over them. Description. A searchable symmetric encryption scheme is a symmetric-key encryption scheme that encrypts a collection of documents formula_0, where each document formula_1 is viewed as a set of keywords from a keyword space formula_2. Given the encryption key formula_3 and a keyword formula_4, one can generate a search token formula_5 with which the encrypted data collection can be searched for formula_6. The result of the search is the subset of encrypted documents that contain the keyword formula_6. Static SSE. A static SSE scheme consists of three algorithms formula_7 that work as follows: formula_8 takes as input a security parameter formula_9 and a document collection formula_10 and outputs a secret key formula_3, an encrypted index formula_11 and an encrypted document collection formula_12; formula_13 takes as input the secret key formula_3 and a keyword formula_6 and outputs a search token formula_5; formula_14 takes as input the encrypted document collection formula_12, the encrypted index formula_11 and a search token formula_5 and outputs a set of encrypted documents formula_15. A static SSE scheme is used by a client and an untrusted server as follows. The client encrypts its data collection using the formula_8 algorithm which returns a secret key formula_3 and an encrypted document collection formula_12. The client keeps formula_3 secret and sends formula_12 and formula_11 to the untrusted server. To search for a keyword formula_6, the client runs the formula_13 algorithm on formula_3 and formula_6 to generate a search token formula_5 which it sends to the server. The server runs formula_14 with formula_12, formula_11, and formula_5 and returns the resulting encrypted documents back to the client. Dynamic SSE. A dynamic SSE scheme supports, in addition to search, the insertion and deletion of documents. A dynamic SSE scheme consists of seven algorithms formula_16 where formula_8, formula_13 and formula_14 are as in the static case and the remaining algorithms work as follows: formula_17 takes as input the secret key formula_3 and a new document formula_18 and outputs an insert token formula_19; formula_20 takes as input the encrypted document collection formula_12 and an insert token formula_19 and outputs an updated encrypted document collection formula_21; formula_22 takes as input the secret key formula_3 and a document identifier formula_23 and outputs a delete token formula_24; formula_25 takes as input the encrypted document collection formula_12 and a delete token formula_24 and outputs an updated encrypted document collection formula_26. To add a new document formula_18 the client runs formula_17 on formula_3 and formula_18 to generate an insert token formula_19 which it sends to the server. The server runs formula_20 with formula_12 and formula_19 and stores the updated encrypted document collection. To delete a document with identifier formula_23, the client runs the formula_22 algorithm with formula_3 and formula_23 to generate a delete token formula_24 which it sends to the server. The server runs formula_25 with formula_12 and formula_24 and stores the updated encrypted document collection. An SSE scheme that does not support formula_22 and formula_25 is called semi-dynamic. History of Searchable Symmetric Encryption. The problem of searching on encrypted data was considered by Song, Wagner and Perrig, though previous work on Oblivious RAM by Goldreich and Ostrovsky could be used in theory to address the problem. This work proposed an SSE scheme with a search algorithm that runs in time formula_27, where formula_28. Goh, and Chang and Mitzenmacher, gave new SSE constructions with search algorithms that run in time formula_29, where formula_30 is the number of documents. Curtmola, Garay, Kamara and Ostrovsky later proposed two static constructions with formula_31 search time, where formula_32 is the number of documents that contain formula_6, which is optimal. This work also proposed a semi-dynamic construction with formula_33 search time, where formula_34 is the number of updates. An optimal dynamic SSE construction was later proposed by Kamara, Papamanthou and Roeder. 
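To make the static syntax above concrete, the following is a minimal toy sketch of formula_7 in Python. It is purely illustrative and not a secure SSE construction: keyword labels are derived with HMAC, the documents themselves are left unencrypted, the encrypted index is just a dictionary, and the server trivially learns the access pattern; every name and value in the sketch is hypothetical.

```python
# Toy sketch of a static SSE scheme (Setup / Token / Search), standard library only.
import hmac, hashlib, os

def setup(documents):
    """documents: dict mapping a document identifier to its set of keywords.
    Returns a secret key and a (toy) encrypted index."""
    key = os.urandom(32)
    index = {}
    for doc_id, keywords in documents.items():
        for w in keywords:
            # Deterministic per-keyword label; the server only ever sees labels.
            label = hmac.new(key, w.encode(), hashlib.sha256).hexdigest()
            index.setdefault(label, set()).add(doc_id)
    return key, index

def token(key, w):
    """Client side: derive the search token for keyword w."""
    return hmac.new(key, w.encode(), hashlib.sha256).hexdigest()

def search(index, tk):
    """Server side: return the identifiers of documents matching the token."""
    return index.get(tk, set())

# Hypothetical usage
docs = {1: {"cloud", "storage"}, 2: {"cloud", "crypto"}, 3: {"crypto"}}
key, index = setup(docs)
print(search(index, token(key, "cloud")))   # {1, 2}
```

A real construction would also encrypt the documents and control what the index itself reveals; here a single dictionary lookup per token only loosely mirrors the optimal search time discussed above.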
Goh, and Chang and Mitzenmacher, proposed security definitions for SSE. These were strengthened and extended by Curtmola, Garay, Kamara and Ostrovsky, who proposed the notion of adaptive security for SSE. This work was also the first to observe leakage in SSE and to formally capture it as part of the security definition. Leakage was further formalized and generalized by Chase and Kamara. Islam, Kuzu and Kantarcioglu described the first leakage attack. All the previously mentioned constructions support single keyword search. Cash, Jarecki, Jutla, Krawczyk, Rosu and Steiner proposed an SSE scheme that supports conjunctive search in sub-linear time in formula_30. The construction can also be extended to support disjunctive and Boolean searches that can be expressed in searchable normal form (SNF) in sub-linear time. At the same time, Pappas, Krell, Vo, Kolesnikov, Malkin, Choi, George, Keromytis and Bellovin described a construction that supports conjunctive and all disjunctive and Boolean searches in sub-linear time. Security. SSE schemes are designed to guarantee that the untrusted server cannot learn any partial information about the documents or the search queries beyond some well-defined and reasonable leakage. The leakage of a scheme is formally described using a leakage profile, which itself can consist of several leakage patterns. SSE constructions attempt to minimize leakage while achieving the best possible search efficiency. SSE security can be analyzed in several adversarial models, but the most common are the persistent model, in which the adversary observes the encrypted data together with the transcripts of the operations executed on it, and the snapshot model, in which the adversary only observes snapshots of the encrypted data between operations. Security in the Persistent Model. In the persistent model, there are SSE schemes that achieve a wide variety of leakage profiles. The most common leakage profile for static schemes that achieve single keyword search in optimal time is formula_35 which reveals the number of documents in the collection, the size of each document in the collection, if and when a query was repeated and which encrypted documents match the search query. It is known, however, how to construct schemes that leak considerably less at an additional cost in search time and storage. When considering dynamic SSE schemes, the state-of-the-art constructions with optimal-time search have leakage profiles that guarantee forward privacy, which means that inserts cannot be correlated with past search queries. Security in the Snapshot Model. In the snapshot model, efficient dynamic SSE schemes with no leakage beyond the number of documents and the size of the collection can be constructed. When using an SSE construction that is secure in the snapshot model, one has to carefully consider how the scheme will be deployed because some systems might cache previous search queries. Cryptanalysis. A leakage profile only describes the leakage of an SSE scheme; it says nothing about whether that leakage can be exploited or not. Cryptanalysis is therefore used to better understand the real-world security of a leakage profile. There is a wide variety of attacks working in different adversarial models, based on a variety of assumptions and attacking different leakage profiles. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{D} = (\\mathrm{D_1}, \\dots, \\mathrm{D_n})" }, { "math_id": 1, "text": "\\mathrm{D_i} \\subseteq \\mathbb{W}" }, { "math_id": 2, "text": "\\mathbb{W}" }, { "math_id": 3, "text": "K" }, { "math_id": 4, "text": "w \\in \\mathbb{W}" }, { "math_id": 5, "text": "tk" }, { "math_id": 6, "text": "w" }, { "math_id": 7, "text": "\\mathsf{SSE = (Setup, Token, Search)}" }, { "math_id": 8, "text": "\\mathsf{Setup}" }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": "\\mathbf{D}" }, { "math_id": 11, "text": "\\mathbf{I}" }, { "math_id": 12, "text": "\\mathbf{ED}" }, { "math_id": 13, "text": "\\mathsf{Token}" }, { "math_id": 14, "text": "\\mathsf{Search}" }, { "math_id": 15, "text": "\\mathbf{R} \\subseteq \\mathbf{ED}" }, { "math_id": 16, "text": "\\mathsf{SSE = (Setup, Token, Search, InsertToken, Insert, DeleteToken, Delete)}" }, { "math_id": 17, "text": "\\mathsf{InsertToken}" }, { "math_id": 18, "text": "\\mathrm{D_{n+1}}" }, { "math_id": 19, "text": "itk" }, { "math_id": 20, "text": "\\mathsf{Insert}" }, { "math_id": 21, "text": "\\mathbf{ED'}" }, { "math_id": 22, "text": "\\mathsf{DeleteToken}" }, { "math_id": 23, "text": "id" }, { "math_id": 24, "text": "dtk" }, { "math_id": 25, "text": "\\mathsf{Delete}" }, { "math_id": 26, "text": "\\mathrm{EDC}" }, { "math_id": 27, "text": "O(s)" }, { "math_id": 28, "text": "s = |\\mathbf{D}|" }, { "math_id": 29, "text": "O(n)" }, { "math_id": 30, "text": "n" }, { "math_id": 31, "text": "O(\\mathrm{opt} )" }, { "math_id": 32, "text": "\\mathrm{opt}" }, { "math_id": 33, "text": "O(\\mathrm{opt}\\cdot \\log(u))" }, { "math_id": 34, "text": "u" }, { "math_id": 35, "text": "\\Lambda_{\\mathrm{opt}}" } ]
https://en.wikipedia.org/wiki?curid=65133209
651360
Photoemission spectroscopy
Examining a substance by measuring electrons emitted in the photoelectric effect Photoemission spectroscopy (PES), also known as photoelectron spectroscopy, refers to energy measurement of electrons emitted from solids, gases or liquids by the photoelectric effect, in order to determine the binding energies of electrons in the substance. The term refers to various techniques, depending on whether the ionization energy is provided by X-ray, XUV or UV photons. Regardless of the incident photon beam, however, all photoelectron spectroscopy revolves around the general theme of surface analysis by measuring the ejected electrons. Types. X-ray photoelectron spectroscopy (XPS) was developed by Kai Siegbahn starting in 1957 and is used to study the energy levels of atomic core electrons, primarily in solids. Siegbahn referred to the technique as "electron spectroscopy for chemical analysis" (ESCA), since the core levels have small chemical shifts depending on the chemical environment of the atom that is ionized, allowing chemical structure to be determined. Siegbahn was awarded the Nobel Prize in 1981 for this work. XPS is sometimes referred to as PESIS (photoelectron spectroscopy for inner shells), whereas the lower-energy radiation of UV light is referred to as PESOS (outer shells) because it cannot excite core electrons. Ultraviolet photoelectron spectroscopy (UPS) is used to study valence energy levels and chemical bonding, especially the bonding character of molecular orbitals. The method was developed originally for gas-phase molecules in 1961 by Feodor I. Vilesov and in 1962 by David W. Turner, and other early workers included David C. Frost, J. H. D. Eland and K. Kimura. Later, Richard Smalley modified the technique and used a UV laser to excite the sample, in order to measure the binding energy of electrons in gaseous molecular clusters. Angle-resolved photoemission spectroscopy (ARPES) has become the most prevalent electron spectroscopy in condensed matter physics after recent advances in energy and momentum resolution, and widespread availability of synchrotron light sources. The technique is used to map the band structure of crystalline solids, to study quasiparticle dynamics in highly correlated materials, and to measure electron spin polarization. Two-photon photoelectron spectroscopy (2PPE) extends the technique to optically excited electronic states through the introduction of a pump-and-probe scheme. Extreme-ultraviolet photoelectron spectroscopy (EUPS) lies in between XPS and UPS. It is typically used to assess the valence band structure. Compared to XPS, it gives better energy resolution, and compared to UPS, the ejected electrons are faster, resulting in less space charge and mitigated final state effects. Physical principle. The physics behind the PES technique is an application of the photoelectric effect. The sample is exposed to a beam of UV or XUV light inducing photoelectric ionization. The energies of the emitted photoelectrons are characteristic of their original electronic states, and depend also on vibrational state and rotational level. For solids, photoelectrons can escape only from a depth on the order of nanometers, so that it is the surface layer which is analyzed. Because of the high frequency of the light, and the substantial charge and energy of emitted electrons, photoemission is one of the most sensitive and accurate techniques for measuring the energies and shapes of electronic states and molecular and atomic orbitals. 
Photoemission is also among the most sensitive methods of detecting substances in trace concentrations, provided the sample is compatible with ultra-high vacuum and the analyte can be distinguished from background. Typical PES (UPS) instruments use helium gas sources of UV light, with photon energy up to 52 eV (corresponding to wavelength 23.7 nm). The photoelectrons that actually escaped into the vacuum are collected, slightly retarded, energy resolved, and counted. This results in a spectrum of electron intensity as a function of the measured kinetic energy. Because binding energy values are more readily applied and understood, the kinetic energy values, which are source dependent, are converted into binding energy values, which are source independent. This is achieved by applying Einstein's relation formula_0. The formula_1 term of this equation is the energy of the UV light quanta that are used for photoexcitation. Photoemission spectra are also measured using tunable synchrotron radiation sources. The binding energies of the measured electrons are characteristic of the chemical structure and molecular bonding of the material. By adding a source monochromator and increasing the energy resolution of the electron analyzer, peaks appear with full width at half maximum (FWHM) less than 5–8 meV. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
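As a small illustration of the kinetic-to-binding-energy conversion described above, the following sketch applies formula_0 directly. The default photon energy of 21.22 eV (the He I line commonly used in UPS) is only an assumed example value, and work-function or calibration corrections are ignored.

```python
# Hedged sketch: convert measured photoelectron kinetic energies (eV) into
# binding energies via E_B = h*nu - E_k, following the relation in the text.
def binding_energies(kinetic_energies_eV, photon_energy_eV=21.22):
    return [photon_energy_eV - e_k for e_k in kinetic_energies_eV]

# Example with hypothetical kinetic energies from a UPS measurement:
print(binding_energies([15.9, 12.4, 5.8]))  # roughly [5.32, 8.82, 15.42] eV
```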
[ { "math_id": 0, "text": "E_k=h\\nu-E_B" }, { "math_id": 1, "text": "h\\nu" } ]
https://en.wikipedia.org/wiki?curid=651360
651361
Curvature of Riemannian manifolds
In mathematics, specifically differential geometry, the infinitesimal geometry of Riemannian manifolds with dimension greater than 2 is too complicated to be described by a single number at a given point. Riemann introduced an abstract and rigorous way to define curvature for these manifolds, now known as the "Riemann curvature tensor". Similar notions have found applications everywhere in differential geometry of surfaces and other objects. The curvature of a pseudo-Riemannian manifold can be expressed in the same way with only slight modifications. Ways to express the curvature of a Riemannian manifold. The Riemann curvature tensor. The curvature of a Riemannian manifold can be described in various ways; the most standard one is the curvature tensor, given in terms of a Levi-Civita connection (or covariant differentiation) formula_0 and Lie bracket formula_1 by the following formula: formula_2 Here formula_3 is a linear transformation of the tangent space of the manifold; it is linear in each argument. If formula_4 and formula_5 are coordinate vector fields then formula_6 and therefore the formula simplifies to formula_7 i.e. the curvature tensor measures "noncommutativity of the covariant derivative". The linear transformation formula_8 is also called the curvature transformation or endomorphism. "N.B." There are a few books where the curvature tensor is defined with opposite sign. Symmetries and identities. The curvature tensor has the following symmetries: formula_9 formula_10 formula_11 The last identity was discovered by Ricci, but is often called the "first Bianchi identity", just because it looks similar to the Bianchi identity below. The first two should be addressed as "antisymmetry" and "Lie algebra property" respectively, since the second means that the "R"("u", "v") for all "u", "v" are elements of the pseudo-orthogonal Lie algebra. All three together should be named "pseudo-orthogonal curvature structure". They give rise to a "tensor" only by identifications with objects of the tensor algebra - but likewise there are identifications with concepts in the Clifford-algebra. Let us note that these three axioms of a curvature structure give rise to a well-developed structure theory, formulated in terms of projectors (a Weyl projector, giving rise to "Weyl curvature" and an Einstein projector, needed for the setup of the Einsteinian gravitational equations). This structure theory is compatible with the action of the pseudo-orthogonal groups plus dilations. It has strong ties with the theory of Lie groups and algebras, Lie triples and Jordan algebras. See the references given in the discussion. The three identities form a complete list of symmetries of the curvature tensor, i.e. given any tensor which satisfies the identities above, one could find a Riemannian manifold with such a curvature tensor at some point. Simple calculations show that such a tensor has formula_12 independent components. Yet another useful identity follows from these three: formula_13 The Bianchi identity (often the second Bianchi identity) involves the covariant derivatives: formula_14 Sectional curvature. Sectional curvature is a further, equivalent but more geometrical, description of the curvature of Riemannian manifolds. It is a function formula_15 which depends on a "section" formula_16 (i.e. a 2-plane in the tangent spaces). 
It is the Gauss curvature of the formula_17-"section" at "p"; here formula_17-"section" is a locally defined piece of surface which has the plane formula_17 as a tangent plane at "p", obtained from geodesics which start at "p" in the directions of the image of formula_17 under the exponential map at "p". If formula_18 are two linearly independent vectors in formula_16 then formula_19 The following formula indicates that sectional curvature describes the curvature tensor completely: formula_20 formula_21 formula_22 Or in a simpler formula: formula_23 Curvature form. The connection form gives an alternative way to describe curvature. It is used more for general vector bundles, and for principal bundles, but it works just as well for the tangent bundle with the Levi-Civita connection. The curvature of an "n"-dimensional Riemannian manifold is given by an antisymmetric "n"×"n" matrix formula_24 of 2-forms (or equivalently a 2-form with values in formula_25, the Lie algebra of the orthogonal group formula_26, which is the structure group of the tangent bundle of a Riemannian manifold). Let formula_27 be a local section of orthonormal bases. Then one can define the connection form, an antisymmetric matrix of 1-forms formula_28 which satisfies the following identity formula_29 Then the curvature form formula_30 is defined by formula_31. Note that the expression "formula_32" is shorthand for formula_33 and hence does not necessarily vanish. The following describes the relation between the curvature form and the curvature tensor: formula_34 This approach builds in all symmetries of the curvature tensor except the "first Bianchi identity", which takes the form formula_35 where formula_36 is an "n"-vector of 1-forms defined by formula_37. The "second Bianchi identity" takes the form formula_38 where "D" denotes the exterior covariant derivative. The curvature operator. It is sometimes convenient to think about curvature as an operator formula_39 on tangent bivectors (elements of formula_40), which is uniquely defined by the following identity: formula_41 It is possible to do this precisely because of the symmetries of the curvature tensor (namely antisymmetry in the first and last pairs of indices, and block-symmetry of those pairs). Further curvature tensors. In general, the following tensors and functions do not describe the curvature tensor completely; however, they play an important role. Scalar curvature. Scalar curvature is a function on any Riemannian manifold, denoted variously by formula_42 or formula_43. It is the full trace of the curvature tensor; given an orthonormal basis formula_44 in the tangent space at a point we have formula_45 where formula_46 denotes the Ricci tensor. The result does not depend on the choice of orthonormal basis. Starting with dimension 3, scalar curvature does not describe the curvature tensor completely. Ricci curvature. Ricci curvature is a linear operator on the tangent space at a point, usually denoted by "formula_46". Given an orthonormal basis formula_44 in the tangent space at "p" we have formula_47 The result does not depend on the choice of orthonormal basis. With four or more dimensions, Ricci curvature does not describe the curvature tensor completely. Explicit expressions for the Ricci tensor in terms of the Levi-Civita connection are given in the article on Christoffel symbols. Weyl curvature tensor. The Weyl curvature tensor has the same symmetries as the Riemann curvature tensor, but with one extra constraint: its trace (as used to define the Ricci curvature) must vanish. 
The Weyl tensor is invariant with respect to a conformal change of metric: if two metrics are related as formula_48 for some positive scalar function formula_49, then formula_50. In dimensions 2 and 3 the Weyl tensor vanishes, but in 4 or more dimensions the Weyl tensor can be non-zero. For a manifold of constant curvature, the Weyl tensor is zero. Moreover, formula_51 if and only if the metric is locally conformal to the Euclidean metric. Ricci decomposition. Although individually, the Weyl tensor and Ricci tensor do not in general determine the full curvature tensor, the Riemann curvature tensor can be decomposed into a Weyl part and a Ricci part. This decomposition is known as the Ricci decomposition, and plays an important role in the conformal geometry of Riemannian manifolds. In particular, it can be used to show that if the metric is rescaled by a conformal factor of formula_52, then the Riemann curvature tensor changes to (seen as a (0, 4)-tensor): formula_53 where formula_54 denotes the Kulkarni–Nomizu product and Hess is the Hessian. Calculation of curvature. For calculation of curvature
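As a hedged illustration of how the quantities defined above can be computed in local coordinates, the following SymPy sketch evaluates the Christoffel symbols, the Riemann tensor, the Ricci tensor and the scalar curvature for the unit 2-sphere with metric diag(1, sin²θ). The index conventions are one common textbook choice and may differ by a sign from other conventions; the metric is chosen only as an example.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # metric of the unit 2-sphere
ginv = g.inv()
n = 2

# Christoffel symbols Gamma[k][i][j] = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})
Gamma = [[[sum(ginv[k, l]*(sp.diff(g[l, j], x[i]) + sp.diff(g[l, i], x[j])
                           - sp.diff(g[i, j], x[l]))/2 for l in range(n))
           for j in range(n)] for i in range(n)] for k in range(n)]

# Riemann tensor components R^r_{smv}
def riemann(r, s, m, v):
    expr = sp.diff(Gamma[r][v][s], x[m]) - sp.diff(Gamma[r][m][s], x[v])
    expr += sum(Gamma[r][m][l]*Gamma[l][v][s] - Gamma[r][v][l]*Gamma[l][m][s]
                for l in range(n))
    return sp.simplify(expr)

# Ricci tensor R_{sv} = R^r_{srv} and scalar curvature S = g^{sv} R_{sv}
Ric = sp.Matrix(n, n, lambda s, v: sum(riemann(r, s, r, v) for r in range(n)))
S = sp.simplify(sum(ginv[s, v]*Ric[s, v] for s in range(n) for v in range(n)))
print(Ric)  # expected: Matrix([[1, 0], [0, sin(theta)**2]]), i.e. Ric = g
print(S)    # expected: 2 (constant positive scalar curvature)
```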
[ { "math_id": 0, "text": "\\nabla" }, { "math_id": 1, "text": "[\\cdot,\\cdot]" }, { "math_id": 2, "text": "R(u,v)w=\\nabla_u\\nabla_v w - \\nabla_v \\nabla_u w -\\nabla_{[u,v]} w ." }, { "math_id": 3, "text": "R(u,v)" }, { "math_id": 4, "text": "u=\\partial/\\partial x_i" }, { "math_id": 5, "text": "v=\\partial/\\partial x_j" }, { "math_id": 6, "text": "[u,v]=0" }, { "math_id": 7, "text": "R(u,v)w=\\nabla_u\\nabla_v w - \\nabla_v \\nabla_u w " }, { "math_id": 8, "text": "w\\mapsto R(u,v)w" }, { "math_id": 9, "text": "R(u,v)=-R(v,u)^{}_{}" }, { "math_id": 10, "text": "\\langle R(u,v)w,z \\rangle=-\\langle R(u,v)z,w \\rangle^{}_{}" }, { "math_id": 11, "text": "R(u,v)w+R(v,w)u+R(w,u)v=0 ^{}_{}" }, { "math_id": 12, "text": "n^2(n^2-1)/12" }, { "math_id": 13, "text": "\\langle R(u,v)w,z \\rangle=\\langle R(w,z)u,v \\rangle^{}_{}" }, { "math_id": 14, "text": "\\nabla_uR(v,w)+\\nabla_vR(w,u)+\\nabla_w R(u,v)=0" }, { "math_id": 15, "text": "K(\\sigma)" }, { "math_id": 16, "text": "\\sigma" }, { "math_id": 17, "text": "\\sigma " }, { "math_id": 18, "text": "v,u" }, { "math_id": 19, "text": "K(\\sigma)= K(u,v)/|u\\wedge v|^2\\text{ where }K(u,v)=\\langle R(u,v)v,u \\rangle" }, { "math_id": 20, "text": "6\\langle R(u,v)w,z \\rangle =^{}_{}" }, { "math_id": 21, "text": "[K(u+z,v+w)-K(u+z,v)-K(u+z,w)-K(u,v+w)-K(z,v+w)+K(u,w)+K(v,z)]-^{}_{}" }, { "math_id": 22, "text": "[K(u+w,v+z)-K(u+w,v)-K(u+w,z)-K(u,v+z)-K(w,v+z)+K(v,w)+K(u,z)].^{}_{} " }, { "math_id": 23, "text": "\\langle R(u,v)w,z\\rangle=\\frac 16 \\left.\\frac{\\partial^2}{\\partial s\\partial t}\n\\left(K(u+sz,v+tw)-K(u+sw,v+tz)\\right)\\right|_{(s,t)=(0,0)}" }, { "math_id": 24, "text": "\\Omega^{}_{}=\\Omega^i_{\\ j}" }, { "math_id": 25, "text": "\\operatorname{so}(n)" }, { "math_id": 26, "text": "\\operatorname{O}(n)" }, { "math_id": 27, "text": "e_i" }, { "math_id": 28, "text": "\\omega=\\omega^i_{\\ j}" }, { "math_id": 29, "text": "\\omega^k_{\\ j}(e_i)=\\langle \\nabla_{e_i}e_j,e_k\\rangle" }, { "math_id": 30, "text": "\\Omega=\\Omega^i_{\\ j}" }, { "math_id": 31, "text": "\\Omega=d\\omega +\\omega\\wedge\\omega" }, { "math_id": 32, "text": "\\omega\\wedge\\omega" }, { "math_id": 33, "text": " \\omega^i_{\\ j}\\wedge\\omega^j_{\\ k}" }, { "math_id": 34, "text": "R(u,v)w=\\Omega(u\\wedge v)w. " }, { "math_id": 35, "text": "\\Omega\\wedge\\theta=0" }, { "math_id": 36, "text": "\\theta=\\theta^i" }, { "math_id": 37, "text": "\\theta^i(v)=\\langle e_i,v\\rangle" }, { "math_id": 38, "text": "D\\Omega=0" }, { "math_id": 39, "text": "Q" }, { "math_id": 40, "text": "\\Lambda^2(T)" }, { "math_id": 41, "text": "\\langle Q (u\\wedge v),w\\wedge z\\rangle=\\langle R(u,v)z,w \\rangle." }, { "math_id": 42, "text": "S, R" }, { "math_id": 43, "text": "\\text{Sc}" }, { "math_id": 44, "text": "\\{e_i\\}" }, { "math_id": 45, "text": "S =\\sum_{i,j}\\langle R(e_i,e_j)e_j,e_i\\rangle=\\sum_{i}\\langle \\text{Ric}(e_i),e_i\\rangle, " }, { "math_id": 46, "text": "\\text{Ric}" }, { "math_id": 47, "text": "\\text{Ric}(u)=\\sum_{i} R(u,e_i)e_i.^{}_{} " }, { "math_id": 48, "text": "\\tilde{g} = f g" }, { "math_id": 49, "text": "f" }, { "math_id": 50, "text": "\\tilde{W} = W" }, { "math_id": 51, "text": "W = 0" }, { "math_id": 52, "text": "e^{2f}" }, { "math_id": 53, "text": "e^{2f}\\left(R+\\left(\\text{Hess}(f)-df\\otimes df+\\frac{1}{2}\\|\\text{grad}(f)\\|^2 g\\right) {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} g\\right)" }, { "math_id": 54, "text": "{~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~}" } ]
https://en.wikipedia.org/wiki?curid=651361
65136309
Nuclear ensemble approach
Semiclassical approach for molecular spectrum simulations. The Nuclear Ensemble Approach (NEA) is a general method for simulations of diverse types of molecular spectra. It works by sampling an ensemble of molecular conformations (nuclear geometries) in the source state, computing the transition probabilities to the target states for each of these geometries, and performing a sum over all these transitions convoluted with a shape function. The result is an incoherent spectrum containing absolute band shapes through inhomogeneous broadening. Motivation. Spectrum simulation is one of the most fundamental tasks in quantum chemistry. It allows comparing the theoretical results to experimental measurements. There are many theoretical methods for simulating spectra. Some are simple approximations (like stick spectra); others are high-level, accurate approximations (like those based on Fourier-transform of wavepacket propagations). The NEA lies in between. On the one hand, it is intuitive and straightforward to apply, providing much improved results compared to the stick spectrum. On the other hand, it does not recover all spectral effects and delivers a limited spectral resolution. Historical. The NEA is a multidimensional extension of the reflection principle, an approach often used for estimating spectra in photodissociative systems. With the popularization of molecular mechanics, ensembles of geometries also started to be used to estimate spectra through incoherent sums. Thus, different from the reflection principle, which is usually done via direct integration of analytical functions, the NEA is a numerical approach. In 2012, a formal account of NEA showed that it corresponds to an approximation of the time-dependent spectrum simulation approach, employing a Monte Carlo integration of the wavepacket overlap time evolution. NEA for absorption spectrum. Consider an ensemble of molecules absorbing radiation in the UV/vis. Initially, all molecules are in the ground electronic state. Because of the molecular zero-point energy and temperature, the molecular geometry has a distribution around the equilibrium geometry. From a classical point of view, supposing that the photon absorption is an instantaneous process, each time a molecule is excited, it does so from a different geometry. As a consequence, the transition energy does not always have the same value, but is a function of the nuclear coordinates. The NEA captures this effect by creating an ensemble of geometries reflecting the zero-point energy, the temperature, or both. In the NEA, the absorption spectrum (or absorption cross section) "σ"("E") at excitation energy "E" is calculated as formula_0 where "e" and "m" are the electron charge and mass, "c" is the speed of light, "ε"0 the vacuum permittivity, and "ћ" the reduced Planck constant. The sums run over "Nfs" excited states and "Np" nuclear geometries x"i". For each such geometry in the ensemble, transition energies Δ"E"0"n"(x"i") and oscillator strengths "f"0"n"(x"i") between the ground (0) and the excited ("n") states are computed. Each transition in the ensemble is convoluted with a normalized line shape function centered at Δ"E"0"n"(x"i") and with width "δ". Each x"i" is a vector collecting the Cartesian components of the geometry of each atom. The line shape function may be, for instance, a normalized Gaussian function given by formula_1 Although "δ" is an arbitrary parameter, it must be much narrower than the band width, so as not to interfere with the description of the band. 
As the average value of band widths is around 0.3 eV, it is a good practice to adopt "δ" ≤ 0.05 eV. The geometries x"i" can be generated by any method able to describe the ground state distribution. Two of the most commonly employed are sampling from molecular dynamics and sampling from a Wigner distribution of the nuclear normal modes. The molar extinction coefficient "ε" can be obtained from the absorption cross section through formula_2 Because of the dependence of "f"0"n" on x"i", NEA is a post-Condon approximation, and it can predict dark vibronic bands. NEA for emission spectrum. In the case of fluorescence, the differential emission rate is given by formula_3. This expression assumes the validity of Kasha's rule, with emission from the first excited state. NEA for other types of spectrum. NEA can be used for many types of steady-state and time-resolved spectrum simulations. Some examples beyond absorption and emission spectra are: Limitations of NEA. By construction, NEA does not include information about the target (final) states. For this reason, any spectral information that depends on these states cannot be described in the framework of NEA. For example, vibronically resolved peaks in the absorption spectrum will not appear in the simulations, only the band envelope around them, because these peaks depend on the wavefunction overlap between the ground and excited state. NEA can, however, be coupled to excited-state dynamics to recover these effects. NEA may be too computationally expensive for large molecules. The spectrum simulation requires the calculation of transition probabilities for hundreds of different nuclear geometries, which may become prohibitive due to the high computational costs. Machine learning methods coupled to NEA have been proposed to reduce these costs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
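As a hedged numerical illustration of the absorption cross section formula_0 with the Gaussian line shape formula_1, the sketch below performs the double sum over excited states and ensemble geometries. The physical prefactor is dropped (the result is in arbitrary units), and the ensemble values are invented solely for demonstration.

```python
# Minimal sketch of an NEA absorption spectrum: an incoherent sum of Gaussian
# line shapes centred at the transition energies dE (eV), weighted by the
# oscillator strengths f, and averaged over the ensemble. Constants omitted.
import numpy as np

def nea_spectrum(E, dE, f, delta=0.05):
    """E: photon-energy grid (eV); dE, f: arrays of shape (N_p, N_fs)."""
    width = delta / 2.0
    sigma = np.zeros_like(E)
    for dE_i, f_i in zip(dE.ravel(), f.ravel()):
        g = np.exp(-(E - dE_i)**2 / (2.0 * width**2)) / np.sqrt(2.0 * np.pi * width**2)
        sigma += dE_i * f_i * g
    return sigma / (dE.shape[0] * E)   # ensemble average and 1/E factor; arbitrary units

# Invented example ensemble: three geometries, one excited state each.
E = np.linspace(3.0, 6.0, 601)
dE = np.array([[4.4], [4.6], [4.5]])    # vertical excitation energies (eV)
f = np.array([[0.10], [0.12], [0.08]])  # oscillator strengths
spectrum = nea_spectrum(E, dE, f)
```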
[ { "math_id": 0, "text": " \\sigma \\left( E \\right)=\\frac{\\pi {{e}^{2}}\\hbar }{2mc{{\\epsilon }_{0}}E}\\sum\\limits_{n}^{{{N}_{fs}}}{\\frac{1}{{{N}_{p}}}\\sum\\limits_{i}^{{{N}_{p}}}{\\Delta {{E}_{0n}}\\left( {{\\mathbf{x}}_{i}} \\right){{f}_{0n}}\\left( {{\\mathbf{x}}_{i}} \\right)g\\left( E-\\Delta {{E}_{0n}}\\left( {{\\mathbf{x}}_{i}} \\right),\\delta \\right)}}, " }, { "math_id": 1, "text": " g\\left( E-\\Delta {{E}_{0n}},\\delta \\right)=\\frac{1}{\\sqrt{2\\pi {{\\left( \\delta /2 \\right)}^{2}}}}\\exp \\left( -\\frac{{{\\left( E-\\Delta {{E}_{0n}} \\right)}^{2}}}{2{{\\left( \\delta /2 \\right)}^{2}}} \\right). " }, { "math_id": 2, "text": "\\sigma = \\ln(10) \\frac{10^3}{N_\\text{A}} \\varepsilon \\approx 3.823 532 16 \\times 10^{-21}\\,\\varepsilon." }, { "math_id": 3, "text": "\\Gamma(E) = \\frac{ e^{2}}{2\\pi \\hbar mc^3 \\epsilon_{0}} \\frac{1}{N_{p}} \\sum_{i}^{N_{p}} \\Delta E_{1,0}(\\mathbf{x}_{i})^2 \\left | f_{1,0}(\\mathbf{x}_{i})\\right | g\\left(E-\\Delta E_{1,0}(\\mathbf{x}_{i}), \\delta\\right)" } ]
https://en.wikipedia.org/wiki?curid=65136309
651372
Evaporative cooler
Device that cools air through the evaporation of water An evaporative cooler (also known as evaporative air conditioner, swamp cooler, swamp box, desert cooler and wet air cooler) is a device that cools air through the evaporation of water. Evaporative cooling differs from other air conditioning systems, which use vapor-compression or absorption refrigeration cycles. Evaporative cooling exploits the fact that water will absorb a relatively large amount of heat in order to evaporate (that is, it has a large enthalpy of vaporization). The temperature of dry air can be dropped significantly through the phase transition of liquid water to water vapor (evaporation). This can cool air using much less energy than refrigeration. In extremely dry climates, evaporative cooling of air has the added benefit of conditioning the air with more moisture for the comfort of building occupants. The cooling potential for evaporative cooling is dependent on the wet-bulb depression, the difference between dry-bulb temperature and wet-bulb temperature (see relative humidity). In arid climates, evaporative cooling can reduce energy consumption and total equipment for conditioning as an alternative to compressor-based cooling. In climates not considered arid, indirect evaporative cooling can still take advantage of the evaporative cooling process without increasing humidity. Passive evaporative cooling strategies can offer the same benefits as mechanical evaporative cooling systems without the complexity of equipment and ductwork. History. An earlier form of evaporative cooling, the windcatcher, was first used in ancient Egypt and Persia thousands of years ago in the form of wind shafts on the roof. They caught the wind, passed it over subterranean water in a "qanat" and discharged the cooled air into the building. Modern Iranians have widely adopted powered evaporative coolers. The evaporative cooler was the subject of numerous US patents in the 20th century; many of these, starting in 1906, suggested or assumed the use of excelsior (wood wool) pads as the elements to bring a large volume of water in contact with moving air to allow evaporation to occur. A typical design, as shown in a 1945 patent, includes a water reservoir (usually with level controlled by a float valve), a pump to circulate water over the excelsior pads and a centrifugal fan to draw air through the pads and into the house. This design and this material remain dominant in evaporative coolers in the American Southwest, where they are also used to increase humidity. In the United States, the use of the term "swamp cooler" may be due to the odor of algae produced by early units. Externally mounted evaporative cooling devices (car coolers) were used in some automobiles to cool interior air—often as aftermarket accessories—until modern vapor-compression air conditioning became widely available. Passive evaporative cooling techniques in buildings have been a feature of desert architecture for centuries, but Western acceptance, study, innovation, and commercial application are all relatively recent. In 1974, William H. Goettl noticed how evaporative cooling technology works in arid climates, speculated that a combination unit could be more effective, and invented the "High Efficiency Astro Air Piggyback System", a combination refrigeration and evaporative cooling air conditioner. 
In 1986, University of Arizona researchers built a passive evaporative cooling tower, and performance data from this experimental facility in Tucson, Arizona became the foundation of evaporative cooling tower design guidelines. Physical principles. Evaporative coolers lower the temperature of air using the principle of evaporative cooling, unlike typical air conditioning systems which use vapor-compression refrigeration or absorption refrigeration. Evaporative cooling is the conversion of liquid water into vapor using the thermal energy in the air, resulting in a lower air temperature. The energy needed to evaporate the water is taken from the air in the form of sensible heat, which affects the temperature of the air, and converted into latent heat, the energy present in the water vapor component of the air, whilst the air remains at a constant enthalpy value. This conversion of sensible heat to latent heat is known as an isenthalpic process because it occurs at a constant enthalpy value. Evaporative cooling therefore causes a drop in the temperature of air proportional to the sensible heat drop and an increase in humidity proportional to the latent heat gain. Evaporative cooling can be visualized using a psychrometric chart by finding the initial air condition and moving along a line of constant enthalpy toward a state of higher humidity. A simple example of natural evaporative cooling is perspiration, or sweat, secreted by the body, evaporation of which cools the body. The amount of heat transfer depends on the evaporation rate, however for each kilogram of water vaporized 2,257 kJ of energy (about 890 BTU per pound of pure water, at 95 °F (35 °C)) are transferred. The evaporation rate depends on the temperature and humidity of the air, which is why sweat accumulates more on humid days, as it does not evaporate fast enough. Vapor-compression refrigeration uses evaporative cooling, but the evaporated vapor is within a sealed system, and is then compressed ready to evaporate again, using energy to do so. A simple evaporative cooler's water is evaporated into the environment, and not recovered. In an interior space cooling unit, the evaporated water is introduced into the space along with the now-cooled air; in an evaporative tower the evaporated water is carried off in the airflow exhaust. Other types of phase-change cooling. A closely related process, "sublimation cooling", differs from evaporative cooling in that a phase transition from solid to vapor, rather than liquid to vapor, occurs. Sublimation cooling has been observed to operate on a planetary scale on the planetoid Pluto, where it has been called an anti-greenhouse effect. Another application of a phase change to cooling is the "self-refrigerating" beverage can. A separate compartment inside the can contains a desiccant and a liquid. Just before drinking, a tab is pulled so that the desiccant comes into contact with the liquid and dissolves. As it does so, it absorbs an amount of heat energy called the latent heat of fusion. Evaporative cooling works with the phase change of liquid into vapor and the latent heat of vaporization, but the self-cooling can uses a change from solid to liquid, and the latent heat of fusion, to achieve the same result. Applications. Before the advent of modern refrigeration, evaporative cooling was used for millennia, for instance in "qanats", windcatchers, and mashrabiyas. 
A porous earthenware vessel would cool water by evaporation through its walls; frescoes from about 2500 BCE show slaves fanning jars of water to cool rooms. Alternatively, a bowl filled with milk or butter could be placed in another bowl filled with water, all being covered with a wet cloth resting in the water, to keep the milk or butter as fresh as possible (see zeer, botijo and Coolgardie safe). Evaporative cooling is a common form of cooling buildings for thermal comfort since it is relatively cheap and requires less energy than other forms of cooling. The figure showing the Salt Lake City weather data represents the typical summer climate (June to September). The colored lines illustrate the potential of direct and indirect evaporative cooling strategies to expand the comfort range in summer time. It is mainly explained by the combination of a higher air speed on one hand and elevated indoor humidity when the region permits the direct evaporative cooling strategy on the other hand. Evaporative cooling strategies that involve the humidification of the air should be implemented in dry condition where the increase in moisture content stays below recommendations for occupant's comfort and indoor air quality. Passive cooling towers lack the control that traditional HVAC systems offer to occupants. However, the additional air movement provided into the space can improve occupant comfort. Evaporative cooling is most effective when the relative humidity is on the low side, limiting its popularity to dry climates. Evaporative cooling raises the internal humidity level significantly, which desert inhabitants may appreciate as the moist air re-hydrates dry skin and sinuses. Therefore, assessing typical climate data is an essential procedure to determine the potential of evaporative cooling strategies for a building. The three most important climate considerations are dry-bulb temperature, wet-bulb temperature, and wet-bulb depression during a typical summer day. It is important to determine if the wet-bulb depression can provide sufficient cooling during the summer day. By subtracting the wet-bulb depression from the outside dry-bulb temperature, one can estimate the approximate air temperature leaving the evaporative cooler. It is important to consider that the ability for the exterior dry-bulb temperature to reach the wet-bulb temperature depends on the saturation efficiency. A general recommendation for applying direct evaporative cooling is to implement it in places where the wet-bulb temperature of the outdoor air does not exceed . However, in the example of Salt Lake City, the upper limit for the direct evaporative cooling on psychrometric chart is . Despite the lower temperature, evaporative cooling is suitable for similar climates to Salt Lake City. Evaporative cooling is especially well suited for climates where the air is hot and humidity is low. In the United States, the western and mountain states are good locations, with evaporative coolers prevalent in cities like Albuquerque, Denver, El Paso, Fresno, Salt Lake City, and Tucson. Evaporative air conditioning is also popular and well-suited to the southern (temperate) part of Australia. In dry, arid climates, the installation and operating cost of an evaporative cooler can be much lower than that of refrigerative air conditioning, often by 80% or so. However, evaporative cooling and vapor-compression air conditioning are sometimes used in combination to yield optimal cooling results. 
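As a rough illustration of the estimate described above, the following sketch combines the wet-bulb-depression rule of thumb with a simple saturation-efficiency factor, and uses the latent-heat figure quoted earlier to convert an evaporation rate into cooling power. The 85% efficiency and the example temperatures are assumed values for demonstration, not measured or manufacturer data.

```python
# Hedged sketch: estimate supply-air temperature and evaporative cooling power.
# T_supply = T_drybulb - efficiency * (T_drybulb - T_wetbulb), per the text above.
LATENT_HEAT_KJ_PER_KG = 2257.0  # approximate latent heat of vaporization of water

def supply_air_temperature(t_dry_c, t_wet_c, saturation_efficiency=0.85):
    """Approximate air temperature (deg C) leaving a direct evaporative cooler."""
    return t_dry_c - saturation_efficiency * (t_dry_c - t_wet_c)

def cooling_power_kw(water_evaporated_kg_per_h):
    """Heat removed (kW) for a given water evaporation rate."""
    return water_evaporated_kg_per_h * LATENT_HEAT_KJ_PER_KG / 3600.0

# Example: a 38 degC day with a 20 degC wet-bulb temperature (assumed values).
print(supply_air_temperature(38.0, 20.0))  # about 22.7 degC
print(cooling_power_kw(10.0))              # about 6.3 kW for 10 kg/h of water
```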
Some evaporative coolers may also serve as humidifiers in the heating season. In regions that are mostly arid, short periods of high humidity may prevent evaporative cooling from being an effective cooling strategy. An example of this event is the monsoon season in New Mexico and central and southern Arizona in July and August. In locations with moderate humidity there are many cost-effective uses for evaporative cooling, in addition to their widespread use in dry climates. For example, industrial plants, commercial kitchens, laundries, dry cleaners, greenhouses, spot cooling (loading docks, warehouses, factories, construction sites, athletic events, workshops, garages, and kennels) and confinement farming (poultry ranches, hog, and dairy) often employ evaporative cooling. In highly humid climates, evaporative cooling may have little thermal comfort benefit beyond the increased ventilation and air movement it provides. Other examples. Trees transpire large amounts of water through pores in their leaves called stomata, and through this process of evaporative cooling, forests interact with climate at local and global scales. Simple evaporative cooling devices such as evaporative cooling chambers (ECCs) and clay pot coolers, or pot-in-pot refrigerators, are simple and inexpensive ways to keep vegetables fresh without the use of electricity. Several hot and dry regions throughout the world could potentially benefit from evaporative cooling, including North Africa, the Sahel region of Africa, the Horn of Africa, southern Africa, the Middle East, arid regions of South Asia, and Australia. Benefits of evaporative cooling chambers for many rural communities in these regions include reduced post-harvest loss, less time spent traveling to the market, monetary savings, and increased availability of vegetables for consumption. Evaporative cooling is commonly used in cryogenic applications. The vapor above a reservoir of cryogenic liquid is pumped away, and the liquid continuously evaporates as long as the liquid's vapor pressure is significant. Evaporative cooling of ordinary helium forms a 1-K pot, which can cool to at least 1.2 K. Evaporative cooling of helium-3 can provide temperatures below 300 mK. These techniques can be used to make cryocoolers, or as components of lower-temperature cryostats such as dilution refrigerators. As the temperature decreases, the vapor pressure of the liquid also falls, and cooling becomes less effective. This sets a lower limit to the temperature attainable with a given liquid. Evaporative cooling is also the last cooling step in order to reach the ultra-low temperatures required for Bose–Einstein condensation (BEC). Here, so-called forced evaporative cooling is used to selectively remove high-energetic ("hot") atoms from an atom cloud until the remaining cloud is cooled below the BEC transition temperature. For a cloud of 1 million alkali atoms, this temperature is about 1μK. Although robotic spacecraft use thermal radiation almost exclusively, many crewed spacecraft have short missions that permit open-cycle evaporative cooling. Examples include the Space Shuttle, the Apollo command and service module (CSM), lunar module and portable life support system. The Apollo CSM and the Space Shuttle also had radiators, and the Shuttle could evaporate ammonia as well as water. The Apollo spacecraft used sublimators, compact and largely passive devices that dump waste heat in water vapor (steam) that is vented to space. 
When liquid water is exposed to vacuum it boils vigorously, carrying away enough heat to freeze the remainder to ice that covers the sublimator and automatically regulates the feedwater flow depending on the heat load. The water expended is often available in surplus from the fuel cells used by many crewed spacecraft to produce electricity. Designs. Most designs take advantage of the fact that water has one of the highest known enthalpy of vaporization (latent heat of vaporization) values of any common substance. Because of this, evaporative coolers use only a fraction of the energy of vapor-compression or absorption air conditioning systems. Except in very dry climates, the single-stage (direct) cooler can increase relative humidity (RH) to a level that makes occupants uncomfortable. Indirect and two-stage evaporative coolers keep the RH lower. Direct evaporative cooling. "Direct evaporative cooling" (open circuit) is used to lower the temperature and increase the humidity of air by using latent heat of evaporation, changing liquid water to water vapor. In this process, the energy in the air does not change. Warm dry air is changed to cool moist air. The heat of the outside air is used to evaporate water. The RH increases to 70 to 90% which reduces the cooling effect of human perspiration. The moist air has to be continually released to outside or else the air becomes saturated and evaporation stops. A "mechanical" direct evaporative cooler unit uses a fan to draw air through a wetted membrane, or pad, which provides a large surface area for the evaporation of water into the air. Water is sprayed at the top of the pad so it can drip down into the membrane and continually keep the membrane saturated. Any excess water that drips out from the bottom of the membrane is collected in a pan and recirculated to the top. Single-stage direct evaporative coolers are typically small in size as they only consist of the membrane, water pump, and centrifugal fan. The mineral content of the municipal water supply will cause scaling on the membrane, which will lead to clogging over the life of the membrane. Depending on this mineral content and the evaporation rate, regular cleaning and maintenance are required to ensure optimal performance. Generally, supply air from the single-stage evaporative cooler will need to be exhausted directly (one-through flow) as with direct evaporative cooling. A few design solutions have been conceived to utilize the energy in the air, like directing the exhaust air through two sheets of double glazed windows, thus reducing the solar energy absorbed through the glazing. Compared to energy required to achieve the equivalent cooling load with a compressor, single stage evaporative coolers consume less energy. "Passive" direct evaporative cooling can occur anywhere that the evaporatively cooled water can cool a space without the assistance of a fan. This can be achieved through the use of fountains or more architectural designs such as the evaporative downdraft cooling tower, also called a "passive cooling tower". The passive cooling tower design allows outside air to flow in through the top of a tower that is constructed within or next to the building. The outside air comes in contact with water inside the tower either through a wetted membrane or a mister. As water evaporates in the outside air, the air becomes cooler and less buoyant and creates a downward flow in the tower. At the bottom of the tower, an outlet allows the cooler air into the interior. 
Similar to mechanical evaporative coolers, towers can be an attractive low-energy solution for hot and dry climates as they only require a water pump to raise water to the top of the tower. Energy savings from using a passive direct evaporative cooling strategy depend on the climate and heat load. For arid climates with a great wet-bulb depression, cooling towers can provide enough cooling during summer design conditions to be net zero. For example, a 371 m2 (4,000 ft2) retail store in Tucson, Arizona with a sensible heat gain of 29.3 kW (100,000 Btu/h) can be cooled entirely by two passive cooling towers providing 11890 m3/h (7,000 cfm) each. For the Zion National Park visitors' center, which uses two passive cooling towers, the cooling energy intensity was 14.5 MJ/m2 (1.28 kBtu/ft2), which was 77% less than a typical building in the western United States that uses 62.5 MJ/m2 (5.5 kBtu/ft2). A study of field performance results in Kuwait revealed that power requirements for an evaporative cooler are approximately 75% less than the power requirements for a conventional packaged unit air-conditioner. Indirect evaporative cooling. "Indirect evaporative cooling" (closed circuit) is a cooling process that uses direct evaporative cooling in addition to a heat exchanger to transfer the cooling effect to the supply air. The cooled moist air from the direct evaporative cooling process never comes in direct contact with the conditioned supply air. The moist air stream is released outside or used to cool other external devices such as solar cells, which are more efficient if kept cool. This is done to avoid excess humidity in enclosed spaces, which is not appropriate for residential systems. Maisotsenko cycle. The Maisotsenko cycle (M-Cycle), named after its inventor, Professor Valeriy Maisotsenko, and used by some indirect cooler manufacturers, employs an iterative (multi-step) heat exchanger made of a thin recyclable membrane that can reduce the temperature of product air to below the wet-bulb temperature, and can approach the dew point. Testing by the US Department of Energy found that a hybrid M-Cycle combined with a standard compression refrigeration system significantly improved efficiency by between 150 and 400%, but was only capable of doing so in the dry western half of the US; the evaluation did not recommend its use in the much more humid eastern half of the US. The evaluation found that the system water consumption of 2–3 gallons per cooling ton (12,000 BTUs) was roughly comparable to the water consumption of new high-efficiency power plants. This means the higher efficiency can be utilized to reduce load on the grid without requiring any additional water, and may actually reduce water usage if the source of the power does not have a high efficiency cooling system. An M-Cycle based system built by Coolerado is currently being used to cool the Data Center for NASA's National Snow and Ice Data Center (NSIDC). The facility is air cooled below 70 degrees Fahrenheit and uses the Coolerado system above that temperature. This is possible because the air handler for the system uses fresh outside air, which allows it to automatically use cool outside ambient air when conditions allow. This avoids running the refrigeration system when unnecessary. It is powered by a solar panel array which also serves as secondary power in case of main power loss. 
The system has very high efficiency but, like other evaporative cooling systems, is constrained by the ambient humidity levels, which has limited its adoption for residential use. It may be used as supplementary cooling during times of extreme heat without placing significant additional burden on electrical infrastructure. If a location has excess water supplies or excess desalination capacity it can be used to reduce excessive electrical demand by utilizing water in affordable M-Cycle units. Due to high costs of conventional air conditioning units and extreme limitations of many electrical utility systems, M-Cycle units may be the only appropriate cooling systems suitable for impoverished areas during times of extremely high temperature and high electrical demand. In developed areas, they may serve as supplemental backup systems in case of electrical overload, and can be used to boost efficiency of existing conventional systems. The M-Cycle is not limited to cooling systems and can be applied to various technologies from Stirling engines to Atmospheric water generators. For cooling applications it can be used in both cross flow and counterflow configurations. Counterflow was found to obtain lower temperatures more suitable for home cooling, but cross flow was found to have a higher coefficient of performance (COP), and is therefore better for large industrial installations. Unlike traditional refrigeration techniques, the COP of small systems remains high, as they do not require lift pumps or other equipment required for cooling towers. A 1.5 ton/4.4 kW cooling system requires just 200 watts for operation of the fan, giving a COP of 26.4 and an EER rating of 90. This does not take into account the energy required to purify or deliver the water, and is strictly the power required to run the device once water is supplied. Though desalination of water also presents a cost, the latent heat of vaporization of water is nearly 100 times higher than the energy required to purify the water itself. Furthermore, the device has a maximum efficiency of 55%, so its actual COP is much lower than this calculated value. However, regardless of these losses, the effective COP is still significantly higher than a conventional cooling system, even if water must first be purified by desalination. In areas where water is not available in any form, it can be used with a desiccant to recover water using available heat sources, such as solar thermal energy. Theoretical designs. In the newer but yet-to-be-commercialized "cold-SNAP" design from Harvard's Wyss Institute, a 3D-printed ceramic conducts heat but is half-coated with a hydrophobic material that serves as a moisture barrier. While no moisture is added to the incoming air the relative humidity (RH) does rise a little according to the Temperature-RH formula. Still, the relatively dry air resulting from indirect evaporative cooling allows inhabitants' perspiration to evaporate more easily, increasing the relative effectiveness of this technique. Indirect Cooling is an effective strategy for hot-humid climates that cannot afford to increase the moisture content of the supply air due to indoor air quality and human thermal comfort concerns. "Passive" indirect evaporative cooling strategies are rare because this strategy involves an architectural element to act as a heat exchanger (for example a roof). This element can be sprayed with water and cooled through the evaporation of the water on this element. 
These strategies are rare due to the high use of water, which also introduces the risk of water intrusion and damage to the building structure. Hybrid designs. Two-stage evaporative cooling, or indirect-direct. In the first stage of a two-stage cooler, warm air is pre-cooled indirectly without adding humidity (by passing inside a heat exchanger that is cooled by evaporation on the outside). In the direct stage, the pre-cooled air passes through a water-soaked pad and picks up humidity as it cools. Since the air supply is pre-cooled in the first stage, less humidity is transferred in the direct stage to reach the desired cooling temperatures. The result, according to manufacturers, is cooler air with an RH between 50 and 70%, depending on the climate, compared to a traditional system that produces about 70–80% relative humidity in the conditioned air. Evaporative + conventional backup. In another "hybrid" design, direct or indirect cooling has been combined with vapor-compression or absorption air conditioning to increase the overall efficiency and/or to reduce the temperature below the wet-bulb limit. Evaporative + passive daytime radiative + thermal insulation. Evaporative cooling can be combined with passive daytime radiative cooling and thermal insulation to enhance cooling power with zero energy use, albeit with an occasional water "re-charge" depending on the climatic zone of the installation. The system, developed by Lu et al., "consists of a solar reflector, a water-rich and IR-emitting evaporative layer, and a vapor-permeable, IR-transparent, and solar-reflecting insulation layer," with the top layer enabling "heat removal through both evaporation and radiation while resisting environmental heating." The system demonstrated 300% higher ambient cooling power than stand-alone passive daytime radiative cooling and could extend the shelf life of food by 40% in cool humid climates and 200% in dry climates without refrigeration. Membrane dehumidification and evaporative cooling. Conventional evaporative coolers only work with dry air, e.g. when the humidity ratio is below ~0.02 kg water/kg air. They also require substantial water inputs. To remove these limitations, dewpoint evaporative cooling can be hybridized with membrane dehumidification, using membranes that pass water vapor but block air. The water vapor passing through these membranes can be concentrated with a compressor, so that it can be condensed at warmer temperatures. The first configuration with this approach reused the dehumidification water to provide further evaporative cooling. Such an approach can fully provide its own water for evaporative cooling, outperforms a baseline desiccant wheel system under all conditions, and outperforms vapor compression in dry conditions. It can also allow for cooling at higher humidity without the use of refrigerants, many of which have substantial global warming potential. Materials. Traditionally, evaporative "cooler pads" consist of excelsior (aspen wood fiber) inside a containment net, but more modern materials, such as some plastics and melamine paper, are entering use as cooler-pad media. Modern rigid media, commonly 8" or 12" thick, add more moisture, and thus cool air more than the typically much thinner aspen media. Another material which is sometimes used is corrugated cardboard. Design considerations. Water use. In arid and semi-arid climates, the scarcity of water makes water consumption a concern in cooling system design. 
According to the installed water meters, 420,938 L (111,200 gal) of water were consumed during 2002 by the two passive cooling towers at the Zion National Park visitors' center. However, experts address such concerns by noting that electricity generation usually requires a large amount of water, so evaporative coolers, which use far less electricity, consume a comparable amount of water overall and cost less than chillers. Shading. Allowing direct solar exposure to any surface which can transfer the extra heat to any part of the air flow through the unit will raise the temperature of the air. If the heat is transferred to the air before it flows through the pads, or if the sunlight warms the pads themselves, evaporation will increase, but the additional energy required to achieve this will not come from the energy contained in the ambient air; it will be supplied by the sun. This results not only in higher temperatures but in higher humidity as well, just as raising the inlet air temperature by any means, or heating the water prior to its distribution over the pad, would do. In addition, sunlight may degrade some media and other components of the cooler. Therefore, shading is advisable in all circumstances, though the vertical orientation of the pads, together with insulation between the exterior and interior horizontal (upward-facing) surfaces to minimise heat transfer, may suffice. Mechanical systems. Apart from the fans used in mechanical evaporative cooling, pumps are the only other piece of mechanical equipment required for the evaporative cooling process in both mechanical and passive applications. Pumps can be used either for recirculating the water to the wet media pad or for providing water at very high pressure to a mister system for a passive cooling tower. Pump specifications will vary depending on evaporation rates and media pad area. The Zion National Park visitors' center uses a 250 W (1/3 HP) pump. Exhaust. Exhaust ducts and/or open windows must be used at all times to allow air to continually escape the air-conditioned area. Otherwise, pressure develops and the fan or blower in the system is unable to push much air through the media and into the air-conditioned area. The evaporative system cannot function without exhausting the continuous supply of air from the air-conditioned area to the outside. By optimizing the placement of the cooled-air inlet, along with the layout of the house passages, related doors, and room windows, the system can be used most effectively to direct the cooled air to the required areas. A well-designed layout can effectively scavenge and expel the hot air from desired areas without the need for an above-ceiling ducted venting system. Continuous airflow is essential, so the exhaust windows or vents must not restrict the volume and passage of air being introduced by the evaporative cooling machine. One must also be mindful of the outside wind direction, as, for example, a strong hot southerly wind will slow or restrict the exhausted air from a south-facing window. It is always best to have the downwind windows open, while the upwind windows are closed. Different types of installations. Typical installations. Typically, residential and industrial evaporative coolers use direct evaporation, and can be described as an enclosed metal or plastic box with vented sides. 
Air is moved by a centrifugal fan or blower (usually driven by an electric motor with pulleys, known as "sheaves" in HVAC terminology, or by a direct-driven axial fan), and a water pump is used to wet the evaporative cooling pads. The cooling units can be mounted on the roof (down draft, or downflow) or exterior walls or windows (side draft, or horizontal flow) of buildings. To cool, the fan draws ambient air through vents on the unit's sides and through the damp pads. Heat in the air evaporates water from the pads, which are constantly re-dampened to continue the cooling process. The cooled, moist air is then delivered into the building via a vent in the roof or wall. Because the cooling air originates outside the building, one or more large vents must exist to allow air to move from inside to outside. Air should only be allowed to pass once through the system, or the cooling effect will decrease; this is because recirculated air approaches the saturation point. Often 15 or so air changes per hour (ACH) occur in spaces served by evaporative coolers, a relatively high rate of air exchange. Evaporative (wet) cooling towers. Cooling towers are structures for cooling water or other heat transfer media to near the ambient wet-bulb temperature. Wet cooling towers operate on the evaporative cooling principle, but are optimized to cool the water rather than the air. Cooling towers can often be found on large buildings or on industrial sites. They transfer heat to the environment from chillers, industrial processes, or the Rankine power cycle, for example. Misting systems. Misting systems work by forcing water via a high-pressure pump and tubing through a brass and stainless steel mist nozzle that has an orifice of about 5 micrometres, thereby producing a micro-fine mist. The water droplets that create the mist are so small that they instantly flash-evaporate. Flash evaporation can reduce the surrounding air temperature by as much as 35 °F (20 °C) in just seconds. For patio systems, it is ideal to mount the mist line approximately 8 to 10 feet (2.4 to 3.0 m) above the ground for optimum cooling. Misting is used for applications such as flowerbeds, pets, livestock, kennels, insect control, odor control, zoos, veterinary clinics, cooling of produce, and greenhouses. Misting fans. A misting fan is similar to a humidifier. A fan blows a fine mist of water into the air. If the air is not too humid, the water evaporates, absorbing heat from the air, allowing the misting fan to also work as an air cooler. A misting fan may be used outdoors, especially in a dry climate. It may also be used indoors. Small portable battery-powered misting fans, consisting of an electric fan and a hand-operated water spray pump, are sold as novelty items. Their effectiveness in everyday use is unclear. Performance. Understanding evaporative cooling performance requires an understanding of psychrometrics. Evaporative cooling performance is variable due to changes in external temperature and humidity level. A residential cooler should be able to decrease the temperature of the air to within a few degrees of the wet-bulb temperature. It is simple to predict cooler performance from standard weather report information. Because weather reports usually contain the dewpoint and relative humidity, but not the wet-bulb temperature, a psychrometric chart or a simple computer program must be used to compute the wet-bulb temperature. 
Once the wet-bulb temperature and the dry-bulb temperature are identified, the cooling performance or leaving air temperature of the cooler may be determined. For direct evaporative cooling, the direct saturation efficiency, formula_0, measures the extent to which the temperature of the air leaving the direct evaporative cooler approaches the wet-bulb temperature of the entering air. The direct saturation efficiency can be determined as follows: formula_1 Where: "formula_0" = direct evaporative cooling saturation efficiency (%) "formula_2" = entering air dry-bulb temperature (°C) "formula_3" = leaving air dry-bulb temperature (°C) "formula_4" = entering air wet-bulb temperature (°C) Evaporative media efficiency usually runs between 80% and 90%. The most efficient systems can lower the dry-bulb temperature by 95% of the wet-bulb depression, while the least efficient systems achieve only 50%. The evaporation efficiency drops very little over time. Typical aspen pads used in residential evaporative coolers offer around 85% efficiency, while CELdek-type evaporative media offer efficiencies of &gt;90%, depending on air velocity. The CELdek media is more often used in large commercial and industrial installations. As an example, in Las Vegas, with a typical summer design day of 42 °C dry-bulb and 19 °C wet-bulb temperature, or about 8% relative humidity, the leaving air temperature of a residential cooler with 85% efficiency would be: "formula_3" = 42 °C – [(42 °C – 19 °C) × 85%] = approximately 22.5 °C. However, either of two methods can be used to estimate performance: Some examples clarify this relationship: ("Cooling examples extracted from the June 25, 2000 University of Idaho publication, "Homewise""). Because evaporative coolers perform best in dry conditions, they are widely used and most effective in arid, desert regions such as the southwestern USA, northern Mexico, and Rajasthan. The same equation indicates why evaporative coolers are of limited use in highly humid environments: for example, on a hot August day in Tokyo with 85% relative humidity and 1,005 hPa pressure, the dew point and the wet-bulb temperature lie only a few degrees below the dry-bulb temperature, so at 85% efficiency the air can be cooled only slightly, which makes evaporative cooling quite impractical. Comparison to other types of air conditioning. Comparison of evaporative cooling to refrigeration-based air conditioning: Advantages. Less expensive to install and operate. Ease of installation and maintenance. Ventilation air. Disadvantages. Performance. Comfort. Water use. Maintenance frequency. Health hazards. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
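The saturation-efficiency relationship above lends itself to a one-line calculation. The following minimal Python sketch (not part of the original article; the function name and the reuse of the Las Vegas design-day figures are illustrative assumptions) computes the leaving dry-bulb temperature from the entering dry-bulb temperature, the entering wet-bulb temperature and the media efficiency:

```python
def leaving_air_temp(t_dry_in, t_wet_in, efficiency):
    """Leaving dry-bulb temperature of a direct evaporative cooler.

    t_dry_in     entering air dry-bulb temperature (deg C)
    t_wet_in     entering air wet-bulb temperature (deg C)
    efficiency   direct saturation efficiency as a fraction (e.g. 0.85)
    """
    return t_dry_in - efficiency * (t_dry_in - t_wet_in)

# Las Vegas design-day example from the text: 42 degC dry bulb, 19 degC wet bulb, 85% media
print(leaving_air_temp(42.0, 19.0, 0.85))   # ~22.45 degC
```

Running it with the design-day values reproduces the roughly 22.5 °C leaving temperature quoted above.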
[ { "math_id": 0, "text": "\\epsilon" }, { "math_id": 1, "text": "\\epsilon=\\frac{T_{e,db}-T_{l,db}}{T_{e,db}-T_{e,wb}}" }, { "math_id": 2, "text": "T_{e,db}" }, { "math_id": 3, "text": "T_{l,db}" }, { "math_id": 4, "text": "T_{e,wb}" } ]
https://en.wikipedia.org/wiki?curid=651372
6513914
Komar mass
Concept of mass used in general relativity The Komar mass (named after Arthur Komar) of a system is one of several formal concepts of mass that are used in general relativity. The Komar mass can be defined in any stationary spacetime, which is a spacetime in which all the metric components can be written so that they are independent of time. Alternatively, a stationary spacetime can be defined as a spacetime which possesses a timelike Killing vector field. The following discussion is an expanded and simplified version of the motivational treatment in (Wald, 1984, pg 288). Motivation. Consider the Schwarzschild metric. Using the Schwarzschild basis, a frame field for the Schwarzschild metric, one can find that the radial acceleration required to hold a test mass stationary at a Schwarzschild coordinate of "r" is: formula_0 Because the metric is static, there is a well-defined meaning to "holding a particle stationary". Interpreting this acceleration as being due to a "gravitational force", we can then compute the integral of normal acceleration multiplied by area to get a "Gauss law" integral of: formula_1 While this approaches a constant as r approaches infinity, it is not a constant independent of "r". We are therefore motivated to introduce a correction factor to make the above integral independent of the radius "r" of the enclosing shell. For the Schwarzschild metric, this correction factor is just formula_2, the "red-shift" or "time dilation" factor at distance "r". One may also view this factor as "correcting" the local force to the "force at infinity", the force that an observer at infinity would need to apply through a string to hold the particle stationary. (Wald, 1984). To proceed further, we will write down a line element for a static metric. formula_3 where formula_4 and the quadratic form are functions only of the spatial coordinates "x", "y", "z" and are not functions of time. In spite of our choices of variable names, it should not be assumed that our coordinate system is Cartesian. The fact that none of the metric coefficients are functions of time makes the metric stationary; the additional fact that there are no "cross terms" involving both time and space components (such as formula_5) makes it static. Because of the simplifying assumption that some of the metric coefficients are zero, some of our results in this motivational treatment will not be as general as they could be. In flat space-time, the proper acceleration required to hold station is formula_6, where "u" is the 4-velocity of our hovering particle and formula_7 is the proper time. In curved space-time, we must take the covariant derivative. Thus we compute the acceleration vector as: formula_8 formula_9 where formula_10 is a unit time-like vector such that formula_11 The component of the acceleration vector normal to the surface is formula_12 where N^b is a unit vector normal to the surface. In a Schwarzschild coordinate system, for example, we find that formula_13 as expected - we have simply re-derived the previous results presented in a frame-field in a coordinate basis. We define formula_14 so that in our Schwarzschild example: formula_15 We can, if we desire, derive the accelerations formula_16 and the adjusted "acceleration at infinity" formula_17 from a scalar potential Z, though there is not necessarily any particular advantage in doing so. 
(Wald 1984, pg 158, problem 4) formula_18 formula_19 We will demonstrate that integrating the normal component of the "acceleration at infinity" formula_20 over a bounding surface will give us a quantity that does not depend on the shape of the enclosing sphere, so that we can calculate the mass enclosed by a sphere by the integral formula_21 To make this demonstration, we need to express this surface integral as a volume integral. In flat space-time, we would use Stokes theorem and integrate formula_22 over the volume. In curved space-time, this approach needs to be modified slightly. Using the formulas for electromagnetism in curved space-time as a guide, we write instead: formula_23 where F plays a role similar to the "Faraday tensor", in that formula_24 We can then find the value of "gravitational charge", i.e. mass, by evaluating formula_25 and integrating it over the volume of our sphere. An alternate approach would be to use differential forms, but the approach above is computationally more convenient as well as not requiring the reader to understand differential forms. A lengthy, but straightforward (with computer algebra) calculation from our assumed line element shows us that formula_26 Thus we can write formula_27 In any vacuum region of space-time, all components of the Ricci tensor must be zero. This demonstrates that enclosing any amount of vacuum will not change our volume integral. It also means that our volume integral will be constant for any enclosing surface, as long as we enclose all of the gravitating mass inside our surface. Because Stokes theorem guarantees that our surface integral is equal to the above volume integral, our surface integral will also be independent of the enclosing surface as long as the surface encloses all of the gravitating mass. By using Einstein's Field Equations formula_28, letting u=v and summing, we can show that formula_29 This allows us to rewrite our mass formula as a volume integral of the stress–energy tensor. formula_30 where Komar mass as volume integral - general stationary metric. To make the formula for Komar mass work for a general stationary metric, regardless of the choice of coordinates, it must be modified slightly. We will present the applicable result from (Wald, 1984 eq 11.2.10) without a formal proof. formula_31 where Note that formula_32 replaces formula_34 in our motivational result. If none of the metric coefficients formula_35 are functions of time, formula_36 While it is not "necessary" to choose coordinates for a stationary space-time such that the metric coefficients are independent of time, it is often "convenient". When we choose such coordinates, the time-like Killing vector for our system formula_37 becomes a scalar multiple of a unit coordinate-time vector formula_38 i.e. formula_39 When this is the case, we can rewrite our formula as formula_40 Because formula_41 is by definition a unit vector, K is just the length of formula_32, i.e. K = formula_42. Evaluating the "red-shift" factor K based on our knowledge of the components of formula_37, we can see that K = formula_2. If we choose our spatial coordinates so that we have a locally Minkowskian metric formula_43 we know that formula_44 With these coordinate choices, we can write our Komar integral as formula_45 While we can't choose a coordinate system to make a curved space-time globally Minkowskian, the above formula provides some insight into the meaning of the Komar mass formula. Essentially, both energy and pressure contribute to the Komar mass. 
Furthermore, the contribution of local energy and mass to the system mass is multiplied by the local "red shift" factor formula_46 Komar mass as surface integral - general stationary metric. We also wish to give the general result for expressing the Komar mass as a surface integral. The formula for the Komar mass in terms of the metric and its Killing vector is (Wald, 1984, pg 289, formula 11.2.9) formula_47 where formula_48 are the Levi-Civita symbols and formula_49 is the Killing vector of our stationary metric, normalized so that formula_33 at infinity. The surface integral above is interpreted as the "natural" integral of a two-form over a manifold. As mentioned previously, if none of the metric coefficients formula_35 are functions of time, formula_50 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
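As a numerical sanity check of the Schwarzschild motivation above (not part of the original article; geometric units G = c = 1 and the helper name are assumptions), the following Python sketch evaluates the red-shift-corrected "Gauss law" integral on spheres of several radii and shows that it returns the mass m independently of the radius:

```python
import math

def corrected_flux_mass(m, r):
    """Red-shift-corrected "Gauss law" integral for Schwarzschild, evaluated on a
    sphere at Schwarzschild radial coordinate r (geometric units, G = c = 1)."""
    redshift = math.sqrt(1.0 - 2.0 * m / r)          # sqrt(g_tt)
    hover_accel = m / (r**2 * redshift)               # proper acceleration needed to hover
    area = 4.0 * math.pi * r**2
    return redshift * hover_accel * area / (4.0 * math.pi)

for r in (3.0, 10.0, 1000.0):
    print(r, corrected_flux_mass(1.0, r))             # ~1.0 at every radius
```

Without the red-shift factor, the same product of acceleration and area would grow as the sphere shrinks toward the horizon, which is exactly the behaviour the correction factor is introduced to remove.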
[ { "math_id": 0, "text": "a^\\hat{r} = \\frac{m}{r^2 \\sqrt{1-\\frac{2m}{r c^2}}}" }, { "math_id": 1, "text": "\\frac{4 \\pi m}{\\sqrt{1 - \\frac{2m}{r c^2}}}" }, { "math_id": 2, "text": "\\sqrt{g_{tt}}" }, { "math_id": 3, "text": " ds^2 = g_{tt} \\, dt^2 + \\mathrm{quadratic\\ form}(dx, \\, dy, \\, dz)" }, { "math_id": 4, "text": "g_{tt}" }, { "math_id": 5, "text": "dx dt" }, { "math_id": 6, "text": "du/d \\tau" }, { "math_id": 7, "text": "\\tau" }, { "math_id": 8, "text": "a^b = \\nabla_u u^b = u^c \\nabla_c u^b" }, { "math_id": 9, "text": "a_b = u^c \\nabla_c u_b " }, { "math_id": 10, "text": "u^b" }, { "math_id": 11, "text": "u^b u_b = -1." }, { "math_id": 12, "text": "a_{\\mathrm{norm}}= N^b a_b" }, { "math_id": 13, "text": "N^b a_b = \\frac{ \\frac{\\partial g_{tt}}{\\partial r} c^2 }{2 g_{tt} \\sqrt{g_{rr}}} = \\frac{m}{r^2 \\sqrt{1-\\frac{2m}{rc^2}}}" }, { "math_id": 14, "text": "a_\\inf = \\sqrt{g_{tt}} \\, a" }, { "math_id": 15, "text": "N^b a_{\\inf \\, b} = m/r^2." }, { "math_id": 16, "text": "a_b" }, { "math_id": 17, "text": "a_{\\inf \\, b}" }, { "math_id": 18, "text": "a_b = \\nabla_b Z_1 \\qquad Z_1 = \\ln{g_{tt}}" }, { "math_id": 19, "text": "a_{\\inf \\, b} = \\nabla_b Z_2 \\qquad Z_2 = \\sqrt{g_{tt}}" }, { "math_id": 20, "text": "a_\\inf" }, { "math_id": 21, "text": "m = -\\frac{1}{4 \\pi} \\int_A N^b a_{\\inf \\, b} \\; dA" }, { "math_id": 22, "text": "-\\nabla \\cdot a_\\inf" }, { "math_id": 23, "text": "F_{ab} = a_{\\inf \\, a} \\,u_b - a_{\\inf \\, b} \\,u_a" }, { "math_id": 24, "text": "a_{\\inf \\, a} = F_{ab} u^b" }, { "math_id": 25, "text": "\\nabla^a F_{ab} u^b" }, { "math_id": 26, "text": " -u^b \\nabla^a F_{ab} = \\sqrt{g_{tt}} R_{00} u^a u^b = \\sqrt{g_{tt}} R_{ab} u^a u^b " }, { "math_id": 27, "text": "m = \\frac {\\sqrt{g_{tt}}} {4 \\pi} \\int_V R_{ab} u^a u^b " }, { "math_id": 28, "text": "G^u{}_v = R^u{}_v - \\frac{1}{2} R I^u{}_v = 8 \\pi T^u{}_v" }, { "math_id": 29, "text": "R = -8\\pi T." }, { "math_id": 30, "text": " m = \\int_V \\sqrt{g_{tt}} \\left( 2 T_{ab} - T g_{ab} \\right) u^a u^b dV" }, { "math_id": 31, "text": " m = \\int_V \\left( 2 T_{ab} - T g_{ab} \\right) u^a \\xi^b dV," }, { "math_id": 32, "text": "\\xi^b" }, { "math_id": 33, "text": "\\xi^a \\xi_a = -1" }, { "math_id": 34, "text": "\\sqrt{g_{tt}} u^b\\," }, { "math_id": 35, "text": "g_{ab}" }, { "math_id": 36, "text": "\\xi^a = (1, 0, 0, 0)." }, { "math_id": 37, "text": "\\xi^a" }, { "math_id": 38, "text": "u^a," }, { "math_id": 39, "text": "\\xi^a = K u^a." }, { "math_id": 40, "text": " m = \\int_V \\left(2T_{00} - T g_{00} \\right) K dV" }, { "math_id": 41, "text": "u^a" }, { "math_id": 42, "text": "\\sqrt{-\\xi^a\\xi_a}" }, { "math_id": 43, "text": "g_{ab} = \\eta_{ab}" }, { "math_id": 44, "text": "g_{00}=-1, T = -T_{00}+ T_{11}+T_{22}+T_{33}" }, { "math_id": 45, "text": "m = \\int_V \\sqrt{-\\xi^a \\xi_a} \\left( T_{00}+T_{11}+T_{22}+T_{33} \\right) dV" }, { "math_id": 46, "text": "K = \\sqrt{g_{tt}} = \\sqrt{-\\xi^a \\xi_a}" }, { "math_id": 47, "text": "m = - \\frac{1}{8 \\pi} \\int_S \\epsilon_{abcd} \\nabla^c \\xi^d " }, { "math_id": 48, "text": "\\epsilon_{abcd}" }, { "math_id": 49, "text": "\\xi^d" }, { "math_id": 50, "text": "\\xi^a = \\left( 1, 0, 0, 0 \\right) " } ]
https://en.wikipedia.org/wiki?curid=6513914
651398
Undulator
An undulator is an insertion device from high-energy physics and usually part of a larger installation, a synchrotron storage ring, or it may be a component of a free electron laser. It consists of a periodic structure of dipole magnets. These can be permanent magnets or superconducting magnets. The static magnetic field alternates along the length of the undulator with a wavelength formula_0. Electrons traversing the periodic magnet structure are forced to undergo oscillations and thus to radiate energy. The radiation produced in an undulator is very intense and concentrated in narrow energy bands in the spectrum. It is also collimated on the orbit plane of the electrons. This radiation is guided through beamlines for experiments in various scientific areas. The undulator strength parameter is: formula_1, where "e" is the electron charge, "B" is the magnetic field, "formula_0" is the spatial period of the undulator magnets, "formula_2" is the electron rest mass, and "c" is the speed of light. This parameter characterizes the nature of the electron motion. For formula_3 the oscillation amplitude of the motion is small and the radiation displays interference patterns which lead to narrow energy bands. If formula_4 the oscillation amplitude is bigger and the radiation contributions from each field period sum up independently, leading to a broad energy spectrum. In this regime of fields the device is no longer called an "undulator"; it is called a wiggler. The key difference between an undulator and a wiggler is coherence: in an undulator the emitted radiation is coherent, with a wavelength determined by the period length and the beam energy, while in a wiggler the radiation from the individual periods is not coherent. The usual description of the undulator is relativistic but classical. This means that, although a precise calculation is tedious, the undulator can be seen as a black box, where only functions inside the device affect how an input is converted to an output; an electron enters the box and an electromagnetic pulse exits through a small exit slit. The slit should be small enough that only the main cone passes and the side lobes of the wavelength spectrum can be ignored. Undulators can provide several orders of magnitude higher flux than a simple bending magnet and as such are in high demand at synchrotron radiation facilities. For an undulator with N periods, the brightness can be up to formula_5 greater than that of a bending magnet. The first factor of N occurs because the intensity is enhanced up to a factor of N at harmonic wavelengths due to the constructive interference of the fields emitted during the N radiation periods. The usual pulse is a sine wave with some envelope. The second factor of N comes from the reduction of the emission angle associated with these harmonics, which is reduced as 1/N. If the electrons arrive half a period apart, they interfere destructively and the undulator stays dark. The same is true if they arrive as an evenly spaced chain. The polarization of the emitted radiation can be controlled by using permanent magnets to induce different periodic electron trajectories through the undulator. If the oscillations are confined to a plane the radiation will be linearly polarized. If the oscillation trajectory is helical, the radiation will be circularly polarized, with the handedness determined by the helix. If the electrons follow a Poisson distribution, partial interference leads to a linear increase in intensity. 
In the free electron laser the intensity increases exponentially with the number of electrons. An undulator's figure of merit is spectral radiance. History. In a 1947 paper, the Russian physicist Vitaly Ginzburg showed theoretically that undulators could be built. Julian Schwinger published a useful paper in 1949 that reduced the necessary calculations to Bessel functions, for which there were tables. This was significant for solving the design equations, as digital computers were not available to most academics at that time. Hans Motz and his coworkers at Stanford University demonstrated the first undulator in 1952. It produced the first manmade coherent infrared radiation. The design could produce a total frequency range from visible light down to millimeter waves. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
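For a feel of the magnitude of the strength parameter, the short Python sketch below (not from the article; the 1 T peak field and 20 mm period are illustrative values only) evaluates K = eBλ_u/(2π m_e c) directly:

```python
from math import pi

E_CHARGE = 1.602176634e-19      # elementary charge, C
M_ELECTRON = 9.1093837015e-31   # electron rest mass, kg
C_LIGHT = 2.99792458e8          # speed of light, m/s

def undulator_k(b_tesla, lambda_u_m):
    """Dimensionless undulator strength parameter K = e B lambda_u / (2 pi m_e c)."""
    return E_CHARGE * b_tesla * lambda_u_m / (2 * pi * M_ELECTRON * C_LIGHT)

# Illustrative values (assumptions, not from the article): 1 T field, 20 mm period
print(undulator_k(1.0, 0.020))   # ~1.87, i.e. of order unity
```

With these values K comes out near 1.9, of order unity; devices with much larger fields or periods push K well above 1 and behave as wigglers rather than undulators.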
[ { "math_id": 0, "text": "\\lambda_u" }, { "math_id": 1, "text": "K=\\frac{e B \\lambda_u}{2 \\pi m_e c}" }, { "math_id": 2, "text": "m_{e}" }, { "math_id": 3, "text": "K\\ll1" }, { "math_id": 4, "text": "K\\gg1" }, { "math_id": 5, "text": "N^{2}" } ]
https://en.wikipedia.org/wiki?curid=651398
6513985
Mass in general relativity
Facet of general relativity The concept of mass in general relativity (GR) is more subtle to define than the concept of mass in special relativity. In fact, general relativity does not offer a single definition of the term mass, but offers several different definitions that are applicable under different circumstances. Under some circumstances, the mass of a system in general relativity may not even be defined. The reason for this subtlety is that the energy and momentum in the gravitational field cannot be unambiguously localized. (See Chapter 20 of .) So, rigorous definitions of the mass in general relativity are not local, as in classical mechanics or special relativity, but make reference to the asymptotic nature of the spacetime. A well-defined notion of the mass exists for asymptotically flat spacetimes and for asymptotically Anti-de Sitter space. However, these definitions must be used with care in other settings. Defining mass in general relativity: concepts and obstacles. In special relativity, the rest mass of a particle can be defined unambiguously in terms of its energy and momentum as described in the article on mass in special relativity. Generalizing the notion of the energy and momentum to general relativity, however, is subtle. The main reason for this is that the gravitational field itself contributes to the energy and momentum. However, the "gravitational field energy" is not a part of the energy–momentum tensor; instead, what might be identified as the contribution of the gravitational field to a total energy is part of the Einstein tensor on the other side of Einstein's equation (and, as such, a consequence of these equations' non-linearity). While in certain situations it is possible to rewrite the equations so that part of the "gravitational energy" now stands alongside the other source terms in the form of the stress–energy–momentum pseudotensor, this separation is not true for all observers, and there is no general definition for obtaining it. How, then, does one define a concept such as a system's total mass – which is easily defined in classical mechanics? As it turns out, at least for spacetimes which are asymptotically flat (roughly speaking, which represent some isolated gravitating system in otherwise empty and gravity-free infinite space), the ADM 3+1 split leads to a solution: as in the usual Hamiltonian formalism, the time direction used in that split has an associated energy, which can be integrated up to yield a global quantity known as the ADM mass (or, equivalently, ADM energy). Alternatively, it is possible to define mass for a spacetime that is stationary, in other words, one that has a time-like Killing vector field (which, as a generating field for time, is canonically conjugate to energy); the result is the so-called Komar mass. Although defined in a totally different way, it can be shown to be equivalent to the ADM mass for stationary spacetimes. The Komar integral definition can also be generalized to non-stationary fields for which there is at least an asymptotic time translation symmetry; imposing a certain gauge condition, one can define the Bondi energy at null infinity. In a way, the ADM energy measures all of the energy contained in spacetime, while the Bondi energy excludes those parts carried off by gravitational waves to infinity. 
Great effort has been expended on proving positivity theorems for the masses just defined, not least because positivity, or at least the existence of a lower limit, has a bearing on the more fundamental question of boundedness from below: if there were no lower limit to the energy, then no isolated system would be absolutely stable; there would always be the possibility of a decay to a state of even lower total energy. Several kinds of proofs exist that both the ADM mass and the Bondi mass are indeed positive; in particular, this means that Minkowski space (for which both are zero) is indeed stable. While the focus here has been on energy, analogue definitions for global momentum exist; given a field of angular Killing vectors and following the Komar technique, one can also define global angular momentum. Quasi-local quantities. The disadvantage of all the definitions mentioned so far is that they are defined only at (null or spatial) infinity; since the 1970s, physicists and mathematicians have worked on the more ambitious endeavor of defining suitable "quasi-local" quantities, such as the mass of an isolated system defined using only quantities defined within a finite region of space containing that system. However, while there is a variety of proposed definitions such as the Hawking energy, the Geroch energy or Penrose's quasi-local energy–momentum based on twistor methods, the field is still in flux. Eventually, the hope is to use a suitably defined quasi-local mass to give a more precise formulation of the hoop conjecture, prove the so-called Penrose inequality for black holes (relating the black hole's mass to the horizon area) and find a quasi-local version of the laws of black hole mechanics. Types of mass in general relativity. Komar mass in stationary spacetimes. A non-technical definition of a stationary spacetime is a spacetime where none of the metric coefficients formula_0 are functions of time. The Schwarzschild metric of a black hole and the Kerr metric of a rotating black hole are common examples of stationary spacetimes. By definition, a stationary spacetime exhibits time translation symmetry. This is technically called a time-like Killing vector. Because the system has a time translation symmetry, Noether's theorem guarantees that it has a conserved energy. Because a stationary system also has a well-defined rest frame in which its momentum can be considered to be zero, defining the energy of the system also defines its mass. In general relativity, this mass is called the Komar mass of the system. Komar mass can only be defined for stationary systems. Komar mass can also be defined by a flux integral. This is similar to the way that Gauss's law defines the charge enclosed by a surface as the normal electric force multiplied by the area. The flux integral used to define Komar mass is slightly different from that used to define the electric field, however – the normal force is not the actual force, but the "force at infinity". See the main article for more detail. Of the two definitions, the description of Komar mass in terms of a time translation symmetry provides the deepest insight. ADM and Bondi masses in asymptotically flat space-times. If a system containing gravitational sources is surrounded by an infinite vacuum region, the geometry of the space-time will tend to approach the flat Minkowski geometry of special relativity at infinity. Such space-times are known as "asymptotically flat" space-times. 
For systems in which space-time is asymptotically flat, the ADM and Bondi energy, momentum, and mass can be defined. In terms of Noether's theorem, the ADM energy, momentum, and mass are defined by the asymptotic symmetries at spatial infinity, and the Bondi energy, momentum, and mass are defined by the asymptotic symmetries at null infinity. Note that mass is computed as the length of the energy–momentum four-vector, which can be thought of as the energy and momentum of the system "at infinity". The ADM energy is defined through the following flux integral at infinity. If a spacetime is asymptotically flat, this means that near "infinity" the metric tends to that of flat space. The asymptotic deviations of the metric away from flat space can be parametrized by formula_1 where formula_2 is the flat space metric. The ADM energy is then given by an integral over a surface, formula_3 at infinity formula_4 where formula_5 is the outward-pointing normal to formula_3. The Einstein summation convention is assumed for repeated indices but the sum over k and j only runs over the spatial directions. The use of ordinary derivatives instead of covariant derivatives in the formula above is justified because of the assumption that the asymptotic geometry is flat. Some intuition for the formula above can be obtained as follows. Imagine that we take the surface, S, to be a spherical surface so that the normal points radially outwards. At large distances from the source of the energy, r, the tensor formula_6 is expected to fall off as formula_7 and the derivative with respect to r converts this into formula_8 The area of the sphere at large radius also grows precisely as formula_9 and therefore one obtains a finite value for the energy. It is also possible to obtain expressions for the momentum in asymptotically flat spacetime. To obtain such an expression one defines formula_10 where formula_11 Then the momentum is obtained by a flux integral in the asymptotically flat region formula_12 Note that the expression for formula_13 obtained from the formula above coincides with the expression for the ADM energy given above as can easily be checked using the explicit expression for H. The Newtonian limit for nearly flat space-times. In the Newtonian limit, for quasi-static systems in nearly flat space-times, one can approximate the total energy of the system by adding together the non-gravitational components of the energy of the system and then subtracting the "Newtonian" gravitational binding energy. Translating the above statement into the language of general relativity, we say that a system in nearly flat space-time has a total non-gravitational energy E and momentum P given by: formula_14 When the components of the momentum vector of the system are zero, i.e. Pi = 0, the approximate mass of the system is just (E+Ebinding)/c2, Ebinding being a negative number representing the Newtonian gravitational self-binding energy. Hence when one assumes that the system is quasi-static, one assumes that there is no significant energy present in the form of "gravitational waves". When one assumes that the system is in "nearly-flat" space-time, one assumes that the metric coefficients are essentially Minkowskian within acceptable experimental error. The formulas for the total energy and momentum can be seen to arise naturally in this limit as follows. 
In the linearized limit, the equations of general relativity can be written in the form formula_15 In this limit, the total energy-momentum of the system is simply given by integrating the stress-tensor on a spacelike slice. formula_16 But using the equations of motion, one can also write this as formula_17 where the sum over j runs only over the spatial directions and the second equality uses the fact that formula_18 is anti-symmetric in formula_19 and formula_20. Finally, one uses the Gauss law to convert the integral of a divergence over the spatial slice into an integral over a Gaussian sphere formula_21 which coincides precisely with the formula for the total momentum given above. History. In 1918, David Hilbert wrote about the difficulty in assigning an energy to a "field" and "the failure of the energy theorem" in a correspondence with Klein. In this letter, Hilbert conjectured that this failure is a characteristic feature of the general theory, and that instead of "proper energy theorems" one had 'improper energy theorems'. This conjecture was soon proved to be correct by one of Hilbert's close associates, Emmy Noether. Noether's theorem applies to any system which can be described by an action principle. Noether's theorem associates conserved energies with time-translation symmetries. When the time-translation symmetry is a finite parameter continuous group, such as the Poincaré group, Noether's theorem defines a scalar conserved energy for the system in question. However, when the symmetry is an infinite parameter continuous group, the existence of a conserved energy is not guaranteed. In a similar manner, Noether's theorem associates conserved momenta with space-translations, when the symmetry group of the translations is finite-dimensional. Because General Relativity is a diffeomorphism invariant theory, it has an infinite continuous group of symmetries rather than a finite-parameter group of symmetries, and hence has the wrong group structure to guarantee a conserved energy. Noether's theorem has been influential in inspiring and unifying various ideas of mass, system energy, and system momentum in General Relativity. An example of the application of Noether's theorem is that of stationary space-times and their associated Komar mass (Komar 1959). While general space-times lack a finite-parameter time-translation symmetry, stationary space-times have such a symmetry, known as a Killing vector. Noether's theorem proves that such stationary space-times must have an associated conserved energy. This conserved energy defines a conserved mass, the Komar mass. ADM mass was introduced (Arnowitt et al., 1960) from an initial-value formulation of general relativity. It was later reformulated in terms of the group of asymptotic symmetries at spatial infinity, the SPI group, by various authors (Held, 1980). This reformulation did much to clarify the theory, including explaining why ADM momentum and ADM energy transform as a 4-vector (Held, 1980). Note that the SPI group is actually infinite-dimensional. Conserved quantities exist because the SPI group of "super-translations" has a preferred 4-parameter subgroup of "pure" translations, which, by Noether's theorem, generates a conserved 4-parameter energy–momentum. The norm of this 4-parameter energy–momentum is the ADM mass. The Bondi mass was introduced (Bondi, 1962) in a paper that studied the loss of mass of physical systems via gravitational radiation. 
The Bondi mass is also associated with a group of asymptotic symmetries, the BMS group at null infinity. Like the SPI group at spatial infinity, the BMS group at null infinity is infinite-dimensional, and it also has a preferred 4-parameter subgroup of "pure" translations. Another approach to the problem of energy in General Relativity is the use of pseudotensors such as the Landau–Lifshitz pseudotensor (Landau and Lifshitz, 1962). Pseudotensors are not gauge invariant – because of this, they only give consistent gauge-independent answers for the total energy when additional constraints (such as asymptotic flatness) are met. The gauge dependence of pseudotensors also prevents any gauge-independent definition of the local energy density, as every different gauge choice results in a different local energy density. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
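As a hedged illustration of the ADM flux integral quoted in the sections above (not part of the original article), the following Python sketch evaluates the surface integral numerically for the leading-order asymptotic form of the Schwarzschild metric in isotropic-type coordinates, h_jk = (2GM/r)δ_jk with G = c = 1, and recovers E_ADM ≈ M. The grid resolution and function name are arbitrary choices for the example:

```python
import numpy as np

def adm_energy(M=1.0, r=1.0e3, n=200):
    """Numerically evaluate the ADM surface integral
    E = (1/(16*pi)) * surface_integral( d_k h_jk - d_j h_kk ) dS_j
    on a coordinate sphere of radius r, for the leading-order perturbation
    h_jk = (2*M/r) * delta_jk (geometric units, G = c = 1)."""
    thetas = (np.arange(n) + 0.5) * np.pi / n        # midpoint grid in theta
    phis = (np.arange(2 * n) + 0.5) * np.pi / n      # midpoint grid in phi (0..2*pi)
    d_angle = (np.pi / n) ** 2                       # dtheta * dphi
    total = 0.0
    for t in thetas:
        for p in phis:
            nvec = np.array([np.sin(t) * np.cos(p),
                             np.sin(t) * np.sin(p),
                             np.cos(t)])             # outward unit normal N_j
            x = r * nvec
            # For h_jk = (2M/r) delta_jk:  d_k h_jk = -2M x_j / r^3,
            #                              d_j h_kk = -6M x_j / r^3
            integrand_j = (-2.0 * M * x / r**3) - (-6.0 * M * x / r**3)
            total += integrand_j.dot(nvec) * r**2 * np.sin(t) * d_angle
    return total / (16.0 * np.pi)

print(adm_energy())   # ~1.0, i.e. the integral recovers E_ADM = M
```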
[ { "math_id": 0, "text": "g_{\\mu\\nu}\\," }, { "math_id": 1, "text": "g_{\\mu \\nu} = \\eta_{\\mu \\nu} + h_{\\mu \\nu}\n" }, { "math_id": 2, "text": "\\eta_{\\mu \\nu}" }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "\nP^0 = {1 \\over 16 \\pi G} \\int \\left(\\partial^k h_{j k} - \\partial^j h_{k k} \\right) d^2 S_j,\n" }, { "math_id": 5, "text": " S_j " }, { "math_id": 6, "text": "h_{i j}" }, { "math_id": 7, "text": "r^{-1}" }, { "math_id": 8, "text": "r^{-2}" }, { "math_id": 9, "text": "r^2" }, { "math_id": 10, "text": "\nH^{\\mu \\alpha \\nu \\beta} = -\\bar{h}^{\\mu \\nu} \\eta^{\\alpha \\beta} - \\eta^{\\mu \\nu} \\bar{h}^{\\alpha \\beta} + \\bar{h}^{\\alpha \\nu} \\eta^{\\mu \\beta} + \\bar{h}^{\\mu \\beta} \\eta^{\\alpha \\nu}\n" }, { "math_id": 11, "text": "\n\\bar{h}_{\\mu \\nu} = h_{\\mu \\nu} - {1 \\over 2} \\eta_{\\mu \\nu} h^{\\alpha}_{\\alpha}\n" }, { "math_id": 12, "text": "\nP^{\\mu} = {1 \\over 16 \\pi G}\\int \\partial_{\\alpha} H^{\\mu \\alpha 0 j} d^2 S_j\n" }, { "math_id": 13, "text": "P^0" }, { "math_id": 14, "text": "E = \\int_v T_{00} dV \\qquad P^i = \\int_V T_{0i} dV " }, { "math_id": 15, "text": "\n\\partial_{\\alpha} \\partial_{\\beta} H^{\\mu \\alpha \\nu \\beta} = 16 \\pi G T^{\\mu \\nu}\n" }, { "math_id": 16, "text": "\nP^{\\mu} = \\int T^{\\mu 0} d^3 x\n" }, { "math_id": 17, "text": "\nP^{\\mu} = {1 \\over 16 \\pi G} \\int \\partial_{\\alpha} \\partial_{\\beta} H^{\\mu \\alpha 0 \\beta} d^3 x = {1 \\over 16 \\pi G} \\int \\partial_{\\alpha} \\partial_j H^{\\mu \\alpha 0 j} d^3 x\n" }, { "math_id": 18, "text": "H^{\\mu \\alpha \\nu \\beta}" }, { "math_id": 19, "text": "\\nu" }, { "math_id": 20, "text": " \\beta " }, { "math_id": 21, "text": "\n{1 \\over 16 \\pi G} \\int \\partial_{\\alpha} \\partial_j H^{\\mu \\alpha 0 j} d^3 x = {1 \\over 16 \\pi G} \\int \\partial_{\\alpha} H^{\\mu \\alpha 0 j} d^2 S_j\n" } ]
https://en.wikipedia.org/wiki?curid=6513985
65154948
Matrix 2 of 5
Matrix 2 of 5 (also known as Code 2 of 5 Matrix) is a variable-length, discrete, two-width symbology. Matrix 2 of 5 is a subset of the two-out-of-five codes. Unlike the Industrial 2 of 5 code, Matrix 2 of 5 can encode data not only with black bars but also with white spaces. Matrix 2 of 5 was developed in the 1970s by Nieaf Co. in the Netherlands and was commonly used for warehouse sorting, photo finishing, and airline ticket marking. Matrix 2 of 5 can encode only the digits 0-9. Matrix 2 of 5 can include an optional check digit. Most barcode readers support this symbology. Encoding. Matrix 2 of 5 belongs to the two-out-of-five code family and uses wide and narrow elements for encoding. Unlike the previously developed Industrial 2 of 5, it uses both black bars and white spaces for data encoding. However, it has lower density than the Interleaved 2 of 5 code, because it is a discrete symbology and requires additional space between data patterns. Its main advantage over Interleaved 2 of 5 codes is the ability to encode an odd number of characters in a message. Matrix 2 of 5 encodes only the digits from 0 to 9, in three black bars and two white spaces, with every data pattern separated by an additional white space. Matrix 2 of 5 can include an optional checksum character, which is added to the end of the barcode. Matrix 2 of 5 features: The first four bars and spaces in a pattern have weights which encode the value of the symbol (except zero). The last black bar is used as a parity bit to guard against single errors. The value of the symbol is the sum of the nonzero weights of the first four pattern elements. N - narrow black bar or white space. &lt;br&gt;W - wide black bar or white space. &lt;br&gt;The ratio of narrow to wide element widths can range from 1/3 to 2/5. The barcode has the following physical structure: &lt;br&gt;1. Quiet zone 10X wide &lt;br&gt;2. Start character &lt;br&gt;3. Variable length digit characters, properly encoded &lt;br&gt;4. Optional check digit &lt;br&gt;5. Stop character &lt;br&gt;6. Quiet zone 10X wide Checksum. Matrix 2 of 5 may include an optional check digit, which is calculated as a mod 10/3 checksum. Because the Matrix 2 of 5 specification does not require a checksum, other checksum types can also be used with the symbology; however, the mod 10/3 checksum is the most common. &lt;br&gt;formula_0, &lt;br&gt;where formula_1 is the rightmost data digit. Example for the six data digits 423456: Result: the barcode 4234562. Datalogic 2 of 5. Datalogic 2 of 5 (also known as Code 2 of 5 Datalogic or China Post Code) is a proprietary version of the Matrix 2 of 5 symbology developed by Datalogic and used mainly in China. It differs from the Matrix 2 of 5 code only in the start/stop patterns used and, in this way, it shares all the advantages and issues of Matrix 2 of 5. Datalogic 2 of 5 was used mostly by Chinese postal services. Some readers currently still support this symbology. N - narrow black bar or white space. &lt;br&gt;W - wide black bar or white space. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
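The mod 10/3 checksum described above is straightforward to implement. The short Python sketch below (an illustration, not part of the specification) applies weight 3 to the rightmost data digit and alternates 3, 1, 3, 1, ... moving left; the final modulo maps a result of 10 to 0, a common convention that the formula above leaves implicit:

```python
def mod10_3_check_digit(digits: str) -> int:
    """Mod 10/3 check digit: weight 3 on the rightmost data digit,
    alternating 3, 1, 3, 1, ... toward the left."""
    total = sum((3 if i % 2 == 0 else 1) * int(d)
                for i, d in enumerate(reversed(digits)))
    return (10 - total % 10) % 10   # the trailing % 10 maps a result of 10 to 0

data = "423456"
print(data + str(mod10_3_check_digit(data)))   # 4234562, matching the example above
```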
[ { "math_id": 0, "text": "x_{check} = 10 - ((3x_1 + x_2 + 3x_3 + x_4 +\\cdots+ x_{2n} + 3x_{2n+1})\\pmod{10})" }, { "math_id": 1, "text": "x_1" } ]
https://en.wikipedia.org/wiki?curid=65154948
65158327
Minkowski problem for polytopes
In the geometry of convex polytopes, the Minkowski problem for polytopes concerns the specification of the shape of a polytope by the directions and measures of its facets. The theorem that every polytope is uniquely determined up to translation by this information was proven by Hermann Minkowski; it has been called "Minkowski's theorem", although the same name has also been given to several unrelated results of Minkowski. The Minkowski problem for polytopes should also be distinguished from the Minkowski problem, on specifying convex shapes by their curvature. Specification and necessary conditions. For any formula_0-dimensional polytope, one can specify its collection of facet directions and measures by a finite set of formula_0-dimensional nonzero vectors, one per facet, pointing perpendicularly outward from the facet, with length equal to the formula_1-dimensional measure of its facet. To be a valid specification of a bounded polytope, these vectors must span the full formula_0-dimensional space, and no two can be parallel with the same sign. Additionally, their sum must be zero; this requirement corresponds to the observation that, when the polytope is projected perpendicularly onto any hyperplane, the projected measure of its top facets and its bottom facets must be equal, because the top facets project to the same set as the bottom facets. Minkowski's uniqueness theorem. It is a theorem of Hermann Minkowski that these necessary conditions are sufficient: every finite set of vectors that spans the whole space, has no two parallel with the same sign, and sums to zero describes the facet directions and measures of a polytope. More, the shape of this polytope is uniquely determined by this information: every two polytopes that give rise to the same set of vectors are translations of each other. Blaschke sums. The sets of vectors representing two polytopes can be added by taking the union of the two sets and, when the two sets contain parallel vectors with the same sign, replacing them by their sum. The resulting operation on polytope shapes is called the Blaschke sum. It can be used to decompose arbitrary polytopes into simplices, and centrally symmetric polytopes into parallelotopes. Generalizations. With certain additional information (including separating the facet direction and size into a unit vector and a real number, which may be negative, providing an additional bit of information per facet) it is possible to generalize these existence and uniqueness results to certain classes of non-convex polyhedra. It is also possible to specify three-dimensional polyhedra uniquely by the direction and perimeter of their facets. Minkowski's theorem and the uniqueness of this specification by direction and perimeter have a common generalization: whenever two three-dimensional convex polyhedra have the property that their facets have the same directions and no facet of one polyhedron can be translated into a proper subset of the facet with the same direction of the other polyhedron, the two polyhedra must be translates of each other. However, this version of the theorem does not generalize to higher dimensions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
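The necessary conditions described above (the vectors span the whole space, no two are parallel with the same sign, and they sum to zero) can be checked mechanically for simple examples. The Python sketch below is illustrative only; the 1×2×3 box and the helper names are assumptions, not part of the article. It builds the area-scaled outward facet normals of an axis-aligned box and verifies the zero-sum and spanning conditions:

```python
import numpy as np

def facet_vectors_of_box(a, b, c):
    """Outward facet normals of an axis-aligned box with edge lengths a, b, c,
    each scaled by the (2-dimensional) area of its facet."""
    areas = (b * c, a * c, a * b)
    vecs = []
    for axis, area in enumerate(areas):
        e = np.zeros(3)
        e[axis] = 1.0
        vecs += [area * e, -area * e]
    return vecs

vecs = facet_vectors_of_box(1.0, 2.0, 3.0)
print(sum(vecs))                                  # [0. 0. 0.]  -> the sum vanishes
print(np.linalg.matrix_rank(np.array(vecs)))      # 3           -> the vectors span R^3
```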
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "(d-1)" } ]
https://en.wikipedia.org/wiki?curid=65158327
65158329
Blaschke sum
Polytope combining two smaller polytopes In convex geometry and the geometry of convex polytopes, the Blaschke sum of two polytopes is a polytope that has a facet parallel to each facet of the two given polytopes, with the same measure. When both polytopes have parallel facets, the measure of the corresponding facet in the Blaschke sum is the sum of the measures from the two given polytopes. Blaschke sums exist and are unique up to translation, as can be proven using the theory of the Minkowski problem for polytopes. They can be used to decompose arbitrary polytopes into simplices, and centrally symmetric polytopes into parallelotopes. Although Blaschke sums of polytopes are used implicitly in the work of Hermann Minkowski, Blaschke sums are named for Wilhelm Blaschke, who defined a corresponding operation for smooth convex sets. The Blaschke sum operation can be extended to arbitrary convex bodies, generalizing both the polytope and smooth cases, using measures on the Gauss map. Definition. For any formula_0-dimensional polytope, one can specify its collection of facet directions and measures by a finite set of formula_0-dimensional nonzero vectors, one per facet, pointing perpendicularly outward from the facet, with length equal to the formula_1-dimensional measure of its facet. As Hermann Minkowski proved, a finite set of nonzero vectors describes a polytope in this way if and only if it spans the whole formula_0-dimensional space, no two are collinear with the same sign, and the sum of the set is the zero vector. The polytope described by this set has a unique shape, in the sense that any two polytopes described by the same set of vectors are translates of each other. The Blaschke sum formula_2 of two polytopes formula_3 and formula_4 is defined by combining the vectors describing their facet directions and measures, in the obvious way: form the union of the two sets of vectors, except that when both sets contain vectors that are parallel and have the same sign, replace each such pair of parallel vectors by its sum. This operation preserves the necessary conditions for Minkowski's theorem on the existence of a polytope described by the resulting set of vectors, and this polytope is the Blaschke sum. The two polytopes need not have the same dimension as each other, as long as they are both defined in a common space of high enough dimension to contain both: lower-dimensional polytopes in a higher-dimensional space are defined in the same way by sets of vectors that span a lower-dimensional subspace of the higher-dimensional space, and these sets of vectors can be combined without regard to the dimensions of the spaces they span. For convex polygons and line segments in the Euclidean plane, their Blaschke sum coincides with their Minkowski sum. Decomposition. Blaschke sums can be used to decompose polytopes into simpler polytopes. In particular, every formula_0-dimensional convex polytope with formula_5 facets can be represented as a Blaschke sum of at most formula_6 simplices (not necessarily of the same dimension). Every formula_0-dimensional centrally symmetric convex polytope can be represented as a Blaschke sum of parallelotopes. And every formula_0-dimensional convex polytope can be represented as a Blaschke sum of formula_0-dimensional convex polytopes, each having at most formula_7 facets. Generalizations. 
The Blaschke sum can be extended from polytopes to arbitrary bounded convex sets, by representing the amount of surface in each direction using a measure on the Gauss map of the set instead of using a finite set of vectors, and adding sets by adding their measures. If two bodies of constant brightness are combined in this way, the result is another body of constant brightness. Kneser–Süss inequality. The volume formula_8 of the Blaschke sum of two formula_0-dimensional polytopes or convex bodies formula_3 and formula_4 obeys an inequality known as the Kneser–Süss inequality, an analogue of the Brunn–Minkowski theorem on volumes of Minkowski sums of convex bodies: formula_9 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
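For axis-aligned boxes, the Blaschke sum is again a box whose facet areas are the sums of the corresponding facet areas (this follows from the definition above together with Minkowski's uniqueness theorem), which makes the Kneser–Süss inequality easy to check numerically. The Python sketch below is illustrative only; the particular edge lengths are arbitrary assumptions:

```python
import math

def box_from_facet_areas(F1, F2, F3):
    """Edge lengths of the axis-aligned box whose facet areas are F1, F2, F3."""
    return (math.sqrt(F2 * F3 / F1),
            math.sqrt(F1 * F3 / F2),
            math.sqrt(F1 * F2 / F3))

def facet_areas(edges):
    a, b, c = edges
    return (b * c, a * c, a * b)

def volume(edges):
    a, b, c = edges
    return a * b * c

X, Y = (1.0, 2.0, 3.0), (1.0, 1.0, 1.0)
# Blaschke sum of two boxes: add the facet areas direction by direction.
blaschke = box_from_facet_areas(*(fx + fy for fx, fy in zip(facet_areas(X), facet_areas(Y))))

lhs = volume(blaschke) ** (2.0 / 3.0)                      # V(X # Y)^((d-1)/d) with d = 3
rhs = volume(X) ** (2.0 / 3.0) + volume(Y) ** (2.0 / 3.0)
print(lhs, rhs, lhs >= rhs)   # ~4.38 >= ~4.30, as the Kneser-Suss inequality requires
```

When the two boxes are homothetic (for example two cubes), the two sides of the inequality coincide, which matches the equality case of the Brunn–Minkowski-type statement.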
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "(d-1)" }, { "math_id": 2, "text": "X\\# Y" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "Y" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "n-d" }, { "math_id": 7, "text": "2d" }, { "math_id": 8, "text": "V(X\\# Y)" }, { "math_id": 9, "text": "V(X\\# Y)^{(d-1)/d}\\ge V(X)^{(d-1)/d}+V(Y)^{(d-1)/d}." } ]
https://en.wikipedia.org/wiki?curid=65158329
65165192
Wastewater-based epidemiology
Epidemiological instrument for finding toxic substances Wastewater-based epidemiology (or wastewater-based surveillance or sewage chemical-information mining) analyzes wastewater to determine the consumption of, or exposure to, chemicals or pathogens in a population. This is achieved by measuring chemical or biological markers in wastewater generated by the people contributing to a sewage treatment plant catchment. Wastewater-based epidemiology has been used to estimate illicit drug use in communities or populations, but can also be used to measure the consumption of alcohol, caffeine, various pharmaceuticals and other compounds. Wastewater-based epidemiology has also been adapted to measure the load of pathogens such as SARS-CoV-2 in a community. It differs from traditional drug, urine or stool testing in that results are population-level rather than individual-level. Wastewater-based epidemiology is an interdisciplinary endeavour that draws on input from specialists such as wastewater treatment plant operators, analytical chemists and epidemiologists. History. Wastewater-based epidemiology (WBE) is a field of research that uses the analysis of sewage and wastewater to monitor the presence, distribution, and prevalence of a disease or of chemicals in communities. The technique has been used for several decades; an early example is from the 1940s, when WBE was used to detect poliovirus and track its distribution in the sewage of New York, Chicago, and other cities. Another early application came in 1954, in a study of schistosomes in snails. Wastewater-based epidemiology thereafter spread to multiple countries. By the turn of the 21st century, numerous studies had adopted the technique. A 2005 study measured cocaine and its metabolite benzoylecgonine in water samples from the River Po in Italy. Wastewater-based epidemiology is supported by government bodies such as the European Monitoring Centre for Drugs and Drug Addiction in Europe. Counterparts in other countries, such as the Australian Criminal Intelligence Commission in Australia, and authorities in China use wastewater-based epidemiology to monitor drug use in their populations. As of 2022, WBE had reached 3,000 sites in 58 countries. A group of Chinese scientists published the first WBE study on SARS-CoV-2 in 2020. They assessed whether the virus was present in fecal samples among 74 patients hospitalized for COVID-19 between January 16 and March 15, 2020, at a Chinese hospital. The first US SARS-CoV-2 study came from Boston. It reported a far higher rate of infection than had been estimated from individual PCR testing. It also served as a warning system, alerting the public to outbreaks (and outbreak ends) before positive test rates changed. However, considerable variability has been found within populations, based on symptom profiles, which may compromise measurement accuracy as the pathogen evolves. Technique. Wastewater-based epidemiology is analogous to urinalysis on a community scale. Small-molecule compounds consumed by an individual can be excreted in the urine and/or feces in the form of the unchanged parent compound or a metabolite. In communities with sewerage, this urine combines with other wastes, including other individuals' urine, as the wastewater travels to a municipal wastewater treatment plant. The wastewater is sampled at the plant's inlet, prior to treatment. This is typically done with autosampler devices that collect composite samples over 24 hours, weighted either by flow or by time. 
These samples contain biomarkers from all the people contributing to a catchment. Collected samples are sent to a laboratory, where analytical chemistry techniques (such as liquid chromatography-mass spectrometry) are used to quantify compounds of interest. These results can be expressed as per capita loads based on the volume of wastewater. Per capita daily consumption of a chemical of interest (e.g. a drug) is determined as formula_0 where "R" is the concentration of a residue in a wastewater sample, "F" is the volume of wastewater that the sample represents, "C" is a correction factor which reflects the average mass and molar excretion fraction of a parent drug or a metabolite, and "P" is the number of people in a wastewater catchment. Variations or modifications may be made to "C" to account for other factors such as the degradation of a chemical during its transport in the sewer system. Applications. Commonly detected chemicals include, but are not limited to, the following: Temporal comparisons. By analyzing samples taken across different time points, day-to-day or longer-term trends can be assessed. This approach has illustrated trends such as increased consumption of alcohol and recreational drugs on weekends compared to weekdays. A temporal wastewater-based epidemiology study measured wastewater samples in Washington before, during and after cannabis legalisation. By comparing cannabis consumption in wastewater with sales of cannabis through legal outlets, the study showed that the opening of legal outlets led to a decrease in the market share of the illegal market. Spatial comparisons. Differences in chemical consumption among locations can be established when comparable methods are used to analyse wastewater samples from each location. The European Monitoring Centre for Drugs and Drug Addiction conducts regular multi-city tests in Europe to estimate the consumption of illegal drugs. Data from these monitoring efforts are used alongside more traditional monitoring methods to understand geographical changes in drug consumption trends. Microbial surveillance. Virus surveillance. Sewage can also be tested for signatures of viruses excreted via feces, such as poliovirus, aichivirus and coronaviruses. Systematic wastewater surveillance programs for monitoring enteroviruses, namely poliovirus, were instituted as early as 1996 in Russia. Wastewater testing is recognised as an important tool for poliovirus surveillance by the WHO, especially in situations where mainstream surveillance methods are lacking, or where viral circulation or introduction is suspected. Wastewater-based epidemiology of viruses has the potential to reveal the presence of viral outbreaks when or where they are not suspected. A 2013 study of archived wastewater samples from the Netherlands found viral RNA of Aichivirus A in Dutch sewage samples dating back to 1987, two years prior to the first identification of Aichivirus A in Japan. During the COVID-19 pandemic, wastewater-based epidemiology using qPCR and/or RNA-Seq was used in various countries as a complementary method for assessing the load of COVID-19 and its variants in populations. Regular surveillance programs for monitoring SARS-CoV-2 in wastewater have been instituted in countries such as Canada, UAE, China, Singapore, the Netherlands, Spain, Austria, Germany and the United States. In addition to surveillance of human wastewater, studies have also been conducted on livestock wastewater.
A 2011 article reported that 11.8% of collected human wastewater samples and 8.6% of swine wastewater samples tested positive for the pathogen Clostridioides difficile. Applications against major outbreaks. Wastewater surveillance, which had expanded substantially earlier during the COVID-19 pandemic, was used to detect monkeypox in the 2022 monkeypox outbreak.&lt;ref name="10.1016/j.scitotenv.2022.158265"&gt;&lt;/ref&gt; It is unclear how cost-effective wastewater surveillance is, but national coordination and standardized methods could be useful. Less common infections, such as those that cause hepatitis or foodborne illness, may be difficult to detect. A warning of increased cases from wastewater surveillance can "provide health departments with critical lead time for making decisions about resource allocation and preventive measures" and "unlike testing of individual people, wastewater testing provides insights into the entire population within a catchment area". A 2023 report by the National Academies of Sciences, Engineering and Medicine called for moving from the grassroots system that "sprung up in an ad hoc way, fueled by volunteerism and emergency pandemic-related funding" to a more standardized national system and suggested such a system "should be able to track a variety of potential threats, which could include future coronavirus variants, flu viruses, antibiotic resistant bacteria and entirely new pathogens". Antimicrobial resistance. In 2022, genomic epidemiologists reported results from a global survey of antimicrobial resistance (AMR) via genomic wastewater-based epidemiology, finding large regional variations, providing maps, and suggesting resistance genes are also passed on between microbial species that are not closely related.&lt;ref name="10.1038/s41467-022-34312-7"&gt;&lt;/ref&gt; A 2023 review on wastewater-based epidemiology argued for the necessity of surveilling wastewater from livestock farms, wet markets and surrounding areas, given the greater risk of pathogen spillover to humans. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
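The per-capita back-calculation described in the Technique section above is simple enough to sketch in a few lines of code. The following Python fragment is purely illustrative: the function name, unit conventions and example numbers (concentration, flow, correction factor and population) are hypothetical choices rather than values from any study, and real applications must additionally handle sampling uncertainty, in-sewer degradation and population estimation.

```python
def per_capita_load(residue_ng_per_l, flow_l_per_day, correction_factor, population):
    """Estimate per-capita daily consumption of a target chemical, in mg/day/1000 people.

    residue_ng_per_l  -- measured concentration R of the residue in the composite sample (ng/L)
    flow_l_per_day    -- wastewater volume F represented by the sample (L/day)
    correction_factor -- C, reflecting molar mass ratio and excretion fraction
    population        -- P, number of people contributing to the catchment
    """
    daily_load_ng = residue_ng_per_l * flow_l_per_day        # total residue reaching the plant per day
    daily_consumption_ng = daily_load_ng * correction_factor  # back-calculated parent compound consumed
    # Convert ng/day/person to the commonly reported mg/day/1000 inhabitants.
    return daily_consumption_ng / population * 1000 / 1e6


# Hypothetical example: 500 ng/L of a cocaine metabolite, 40 million L/day, C = 2.33, 100,000 people.
if __name__ == "__main__":
    estimate = per_capita_load(500, 40e6, 2.33, 100_000)
    print(f"Estimated consumption: {estimate:.1f} mg/day/1000 people")
```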
[ { "math_id": 0, "text": "{{\\frac{R \\times F \\times C}{P}}} " } ]
https://en.wikipedia.org/wiki?curid=65165192
65171548
Meshulam's game
Mathematical game In graph theory, Meshulam's game is a game used to explain a theorem of Roy Meshulam related to the homological connectivity of the independence complex of a graph, which is the smallest index "k" such that all reduced homology groups up to and including "k" are trivial. The formulation of this theorem as a game is due to Aharoni, Berger and Ziv. Description. The game-board is a graph "G." It is a zero-sum game for two players, CON and NON. CON wants to show that I("G"), the independence complex of "G", has a high connectivity; NON wants to prove the opposite. At each turn, CON chooses an edge "e" from the remaining graph. NON then chooses one of two options: to "delete" the edge, removing only "e" from the graph, or to "explode" it, removing both endpoints of "e" together with all their neighbors and all edges incident to them. The score of CON is defined as follows: if at some point the remaining graph has an isolated vertex, the score is infinity; otherwise, at some point the remaining graph contains no vertices, and in that case the score is the number of explosions made during the game. For every given graph "G", the game value on "G" (i.e., the score of CON when both sides play optimally) is denoted by "Ψ"("G"). Game value and homological connectivity. Meshulam proved that, for every graph "G": formula_0 where formula_1 is the homological connectivity of formula_2 plus 2. Proof for the case 1. To illustrate the connection between Meshulam's game and connectivity, we prove it in the special case in which formula_13, which is the smallest possible value of formula_1. We prove that, in this case, formula_14, i.e., NON can always destroy the entire graph using at most one explosion. formula_13 means that formula_2 is not connected. This means that there are two subsets of vertices, "X" and "Y", such that no edge in formula_2 connects any vertex of "X" to any vertex of "Y". But formula_2 is the independence complex of "G"; so in "G", every vertex of "X" is connected to every vertex of "Y". Regardless of how CON plays, he must at some step select an edge between a vertex of "X" and a vertex of "Y". NON can explode this edge and destroy the entire graph. In general, the proof works only one way, that is, there may be graphs for which formula_15. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
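For small graphs, the game value "Ψ"("G") can be computed by exhaustively searching both players' strategies. The following Python sketch assumes the deletion/explosion rules as described above; it is a naive exponential-time illustration with ad-hoc function names, not an algorithm from the literature.

```python
from functools import lru_cache
import math

def game_value(edge_list):
    """Compute Psi(G) for the graph with the given edges by brute-force minimax.

    CON offers an edge to maximize the score; NON answers with deletion or explosion
    to minimize it.  The score is the number of explosions if every vertex is
    eventually removed, and infinity if an isolated vertex is left over.
    """
    edges = frozenset(frozenset(e) for e in edge_list)
    vertices = frozenset(v for e in edges for v in e)
    return _value(vertices, edges)

@lru_cache(maxsize=None)
def _value(vertices, edges):
    if not edges:
        # An isolated vertex can never be removed later, so it suffices to check at the end:
        # surviving vertices mean an infinite score for CON, otherwise CON scores nothing extra.
        return math.inf if vertices else 0
    best_for_con = 0
    for e in edges:
        # Option 1 (deletion): only the offered edge disappears.
        after_deletion = _value(vertices, edges - {e})
        # Option 2 (explosion): remove both endpoints and all their neighbors,
        # together with every edge touching a removed vertex.
        blast = set(e) | {w for f in edges if f & e for w in f}
        remaining_vertices = vertices - blast
        remaining_edges = frozenset(f for f in edges if not (f & blast))
        after_explosion = 1 + _value(remaining_vertices, remaining_edges)
        best_for_con = max(best_for_con, min(after_deletion, after_explosion))
    return best_for_con

# Example: the complete bipartite graph K_{2,2} (a 4-cycle).
print(game_value([(0, 2), (0, 3), (1, 2), (1, 3)]))  # prints 1
```

For the 4-cycle in the example, the independence complex is disconnected, so the computed value 1 is consistent with the case treated in the proof above.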
[ { "math_id": 0, "text": "\\eta_H(I(G))\\geq \\Psi(G)" }, { "math_id": 1, "text": "\\eta_H(I(G))" }, { "math_id": 2, "text": "I(G)" }, { "math_id": 3, "text": "i \\gamma(G) \\geq k" }, { "math_id": 4, "text": " \\Psi(G)\\geq k" }, { "math_id": 5, "text": " \\Psi(G)\\geq i\\gamma(G)" }, { "math_id": 6, "text": " \\Psi(L(G))\\geq \\nu(G)/2" }, { "math_id": 7, "text": " L(G)" }, { "math_id": 8, "text": " \\nu(G)" }, { "math_id": 9, "text": " \\Psi(L(H))\\geq \\nu(H)/r" }, { "math_id": 10, "text": " \\Psi(L(G))\\geq \\lfloor 2n/3\\rfloor" }, { "math_id": 11, "text": " \\lfloor 2n/3\\rfloor" }, { "math_id": 12, "text": " \\Psi(L(G))\\geq \\frac{|F|}{n} - \\frac{n}{3} - \\frac{1}{2}" }, { "math_id": 13, "text": "\\eta_H(I(G))=1" }, { "math_id": 14, "text": "\\Psi(G)\\leq 1" }, { "math_id": 15, "text": "\\eta_H(I(G))> \\Psi(G)" } ]
https://en.wikipedia.org/wiki?curid=65171548
65171762
Lee–Yang theory
In statistical mechanics, Lee–Yang theory, sometimes also known as Yang–Lee theory, is a scientific theory which seeks to describe phase transitions in large physical systems in the thermodynamic limit based on the properties of small, finite-size systems. The theory revolves around the complex zeros of partition functions of finite-size systems and how these may reveal the existence of phase transitions in the thermodynamic limit. Lee–Yang theory constitutes an indispensable part of the theories of phase transitions. Originally developed for the Ising model, the theory has been extended and applied to a wide range of models and phenomena, including protein folding, percolation, complex networks, and molecular zippers. The theory is named after the Nobel laureates Tsung-Dao Lee and Yang Chen-Ning, who were awarded the 1957 Nobel Prize in Physics for their unrelated work on parity non-conservation in weak interaction. Introduction. For an equilibrium system in the canonical ensemble, all statistical information about the system is encoded in the partition function, formula_0 where the sum runs over all possible microstates, and formula_1 is the inverse temperature, formula_2 is the Boltzmann constant and formula_3 is the energy of a microstate. The moments formula_4 of the energy statistics are obtained by differentiating the partition function with respect to the inverse temperature multiple times, formula_5 From the partition function, we may also obtain the free energy formula_6 Analogously to how the partition function generates the moments, the free energy generates the cumulants of the energy statistics formula_7 More generally, if the microstate energies formula_8 depend on a "control parameter" formula_9 and a fluctuating conjugate variable formula_10 (whose value may depend on the microstate), the moments of formula_10 may be obtained as formula_11 and the cumulants as formula_12 For instance, for a spin system, the control parameter may be an external magnetic field, formula_13, and the conjugate variable may be the total magnetization, formula_14. Phase transitions and Lee–Yang theory. The partition function and the free energy are intimately linked to phase transitions, for which there is a sudden change in the properties of a physical system. Mathematically, a phase transition occurs when the partition function vanishes and the free energy is singular (non-analytic). For instance, if the first derivative of the free energy with respect to the control parameter is non-continuous, a jump may occur in the average value of the fluctuating conjugate variable, such as the magnetization, corresponding to a first-order phase transition. Importantly, for a finite-size system, formula_16 is a finite sum of exponential functions and is thus always positive for real values of formula_9. Consequently, formula_17 is always well-behaved and analytic for finite system sizes. By contrast, in the thermodynamic limit, formula_17 may exhibit a non-analytic behavior. Using that formula_16 is an entire function for finite system sizes, Lee–Yang theory takes advantage of the fact that the partition function can be fully characterized by its zeros in the "complex" plane of formula_9. These zeros are often known as "Lee–Yang zeros" or, in the case of inverse temperature as control parameter, "Fisher zeros". The main idea of Lee–Yang theory is to mathematically study how the positions and the behavior of the zeros change as the system size grows. 
If the zeros move onto the real axis of the control parameter in the thermodynamic limit, it signals the presence of a phase transition at the corresponding real value of formula_18. In this way, Lee–Yang theory establishes a connection between the properties (the zeros) of a partition function for a finite size system and phase transitions that may occur in the thermodynamic limit (where the system size goes to infinity). Examples. Molecular zipper. The molecular zipper is a toy model which may be used to illustrate the Lee–Yang theory. It has the advantage that all quantities, including the zeros, can be computed analytically. The model is based on a double-stranded macromolecule with formula_15 links that can be either open or closed. For a fully closed zipper, the energy is zero, while for each open link the energy is increased by an amount formula_19. A link can only be open if the preceding one is also open. For a number formula_20 of different ways that a link can be open, the partition function of a zipper with formula_15 links reads formula_21. This partition function has the complex zeros formula_22 where we have introduced the critical inverse temperature formula_23, with formula_24. We see that in the limit formula_25, the zeros closest to the real axis approach the critical value formula_26. For formula_27, the critical temperature is infinite and no phase transition takes place for finite temperature. By contrast, for formula_28, a phase transition takes place at the finite temperature formula_29. To confirm that the system displays a non-analytic behavior in the thermodynamic limit, we consider the free energy formula_30 or, equivalently, the dimensionless free energy per link formula_31 In the thermodynamic limit, one obtains formula_32. Indeed, a cusp develops at formula_29 in the thermodynamic limit. In this case, the first derivative of the free energy is discontinuous, corresponding to a first-order phase transition. Ising model. The Ising model is the original model that Lee and Yang studied when they developed their theory on partition function zeros. The Ising model consists of spin lattice with formula_15 spins formula_33, each pointing either up, formula_34, or down, formula_35. Each spin may also interact with its closest spin neighbors with a strength formula_36. In addition, an external magnetic field formula_37 may be applied (here we assume that it is uniform and thus independent of the spin indices). The Hamiltonian of the system for a certain spin configuration formula_38 then reads formula_39 In this case, the partition function reads formula_40 The zeros of this partition function cannot be determined analytically, thus requiring numerical approaches. Lee–Yang theorem. For the ferromagnetic Ising model, for which formula_41 for all formula_42, Lee and Yang showed that all zeros of formula_43 lie on the unit circle in the complex plane of the parameter formula_44. This statement is known as the "Lee–Yang theorem", and has later been generalized to other models, such as the Heisenberg model. Dynamical phase transitions. A similar approach can be used to study dynamical phase transitions. These transitions are characterized by the Loschmidt amplitude, which plays the analogue role of a partition function. Connections to fluctuations. The Lee–Yang zeros may be connected to the cumulants of the conjugate variable formula_10 of the control variable formula_9. For brevity, we set formula_45 in the following. 
Using that the partition function is an entire function for a finite-size system, one may expand it in terms of its zeros as formula_46 where formula_47 and formula_48 are constants, and formula_49 is the formula_50:th zero in the complex plane of formula_9. The corresponding free energy then reads formula_51 Differentiating this expression formula_52 times with respect to formula_9 yields the formula_52:th order cumulant formula_53 Furthermore, using that the partition function is a real function, the Lee–Yang zeros have to come in complex conjugate pairs, allowing us to express the cumulants as formula_54 where the sum now runs only over each pair of zeros. This establishes a direct connection between cumulants and Lee–Yang zeros. Moreover, if formula_52 is large, the contribution from zeros lying far away from formula_9 is strongly suppressed, and only the closest pair formula_55 of zeros plays an important role. One may then write formula_56 This equation may be solved as a linear system of equations, allowing for the Lee–Yang zeros to be determined directly from higher-order cumulants of the conjugate variable: formula_57 Experiments. Being complex numbers of a physical variable, Lee–Yang zeros have traditionally been seen as a purely "theoretical" tool to describe phase transitions, with little or no connection to experiments. However, in a series of experiments in the 2010s, various kinds of Lee–Yang zeros have been determined from real measurements. In one experiment in 2015, the Lee–Yang zeros were extracted experimentally by measuring the quantum coherence of a spin coupled to an Ising-type spin bath. In another experiment in 2017, dynamical Lee–Yang zeros were extracted from Andreev tunneling processes between a normal-state island and two superconducting leads. Furthermore, in 2018, there was an experiment determining the dynamical Fisher zeros of the Loschmidt amplitude, which may be used to identify dynamical phase transitions.
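The molecular-zipper example above lends itself to a quick numerical check. The short Python sketch below, with arbitrarily chosen values for the number of links, the degeneracy and the link energy (and the Boltzmann constant set to one), evaluates the finite-size partition function at the predicted zeros and computes the free energy per link at a few temperatures; it merely illustrates the formulas quoted above.

```python
import numpy as np

def zipper_partition_function(beta, N, g, eps):
    """Z = sum_{n=0}^{N} g^n exp(-beta*n*eps) for the molecular zipper (beta may be complex)."""
    n = np.arange(N + 1)
    return np.sum(np.power(g * np.exp(-beta * eps), n))

N, g, eps = 200, 2.0, 1.0          # illustrative values; k_B is set to 1
beta_c = np.log(g) / eps           # critical inverse temperature

# Predicted Lee-Yang (Fisher) zeros: beta_k = beta_c + 2*pi*i*k / (eps*(N+1)), k != 0.
for k in (1, 2, 3):
    beta_k = beta_c + 2j * np.pi * k / (eps * (N + 1))
    print(f"|Z(beta_{k})| = {abs(zipper_partition_function(beta_k, N, g, eps)):.2e}")  # ~0

# Free energy per link, -log(Z)/(N*beta*eps); a kink builds up at T_c = 1/beta_c as N grows.
for T in (0.5 / beta_c, 1.0 / beta_c, 2.0 / beta_c):
    beta = 1.0 / T
    f_per_link = -np.log(zipper_partition_function(beta, N, g, eps)) / (N * beta * eps)
    print(f"T/T_c = {T * beta_c:.1f}: F/(N*eps) = {f_per_link:.3f}")
```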
[ { "math_id": 0, "text": "Z = \\sum_i e^{-\\beta E_i}," }, { "math_id": 1, "text": "\\beta =1/(k_B T)" }, { "math_id": 2, "text": "k_B" }, { "math_id": 3, "text": "E_i" }, { "math_id": 4, "text": "\\langle E^n \\rangle" }, { "math_id": 5, "text": "\\langle E^n \\rangle = \\frac{1}{Z} \\partial^n_{-\\beta} Z = \\frac{\\sum_i E_i^n e^{-\\beta E_i}}{\\sum_i e^{-\\beta E_i}}." }, { "math_id": 6, "text": "F = -\\beta^{-1} \\log[Z]." }, { "math_id": 7, "text": "\\langle \\!\\langle E^n \\rangle \\!\\rangle = \\partial^n_{-\\beta} (-\\beta F)." }, { "math_id": 8, "text": "E_i(q) =E_i(0)-q\\Phi_i" }, { "math_id": 9, "text": "q" }, { "math_id": 10, "text": "\\Phi" }, { "math_id": 11, "text": "\\langle \\Phi^n \\rangle = \\frac{1}{Z}\\beta^{-n}\\partial^n_{q} Z(q) =\\frac{1}{Z} \\beta^{-n}\\partial^n_{q} \\sum_i e^{-\\beta E_i(q)}= \\frac{\\sum_i \\Phi_i^n e^{\\beta E_i(0)+\\beta q \\Phi_i}}{\\sum_i e^{\\beta E_i(0)+\\beta q \\Phi_i}}," }, { "math_id": 12, "text": "\\langle \\!\\langle \\Phi^n \\rangle \\!\\rangle = \\beta^{-n}\\partial^n_{q} [-\\beta F(q)]." }, { "math_id": 13, "text": "q=h" }, { "math_id": 14, "text": "\\Phi = M" }, { "math_id": 15, "text": "N" }, { "math_id": 16, "text": "Z(q)" }, { "math_id": 17, "text": "F(q)" }, { "math_id": 18, "text": "q=q^*" }, { "math_id": 19, "text": "\\varepsilon" }, { "math_id": 20, "text": "g" }, { "math_id": 21, "text": "Z = \\sum_{n=0}^Ng^n e^{-\\beta n \\varepsilon} = \\frac{1-(ge^{-\\beta \\varepsilon})^{N+1}}{1-ge^{-\\beta \\varepsilon}}" }, { "math_id": 22, "text": "\\beta_k = \\beta_c + \\frac{2\\pi k}{\\varepsilon (N+1)}i, \\qquad k \\in \\{-N,...,N\\}\\backslash \\{0\\}," }, { "math_id": 23, "text": "\\beta_c^{-1} = k_B T_c" }, { "math_id": 24, "text": "T_c = \\frac{\\varepsilon}{k_B \\log g}" }, { "math_id": 25, "text": "N\\rightarrow \\infty" }, { "math_id": 26, "text": "\\beta_k = \\beta_c" }, { "math_id": 27, "text": "g=1" }, { "math_id": 28, "text": "g>1" }, { "math_id": 29, "text": "T_c" }, { "math_id": 30, "text": "F = - k_B T \\log Z" }, { "math_id": 31, "text": "\\frac{F}{N \\varepsilon}." }, { "math_id": 32, "text": "\\lim_{N\\rightarrow \\infty}\\frac{F}{N\\varepsilon} = \\lim_{N\\rightarrow \\infty}-\\frac{\\beta^{-1}}{N\\varepsilon} \\log\\left[\\frac{1-(ge^{-\\beta \\varepsilon})^{N+1}}{1-ge^{-\\beta \\varepsilon}}\\right] =\\begin{cases}\n 1-T/T_c, & T > T_c\\\\\n 0, & T \\leq T_c\n \\end{cases} " }, { "math_id": 33, "text": "\\{\\sigma_k\\}" }, { "math_id": 34, "text": "\\sigma_k=+1" }, { "math_id": 35, "text": "\\sigma_k=-1" }, { "math_id": 36, "text": "J_{ij}" }, { "math_id": 37, "text": "h>0" }, { "math_id": 38, "text": "\\{\\sigma_i\\}" }, { "math_id": 39, "text": "H(\\{\\sigma_i\\},h) = - \\sum_{\\langle i,j\\rangle} J_{ij} \\sigma_i \\sigma_j - h \\sum_j \\sigma_j." }, { "math_id": 40, "text": "Z(h) = \\sum_{\\{\\sigma_i\\}} e^{-\\beta H(\\{\\sigma_i\\},h)} " }, { "math_id": 41, "text": "J_{ij} \\geq 0" }, { "math_id": 42, "text": "i, j" }, { "math_id": 43, "text": "Z(h)" }, { "math_id": 44, "text": "z\\equiv \\exp(-2 \\beta h)" }, { "math_id": 45, "text": "\\beta = 1" }, { "math_id": 46, "text": "Z(q) = Z(0)e^{cq}\\prod_k (1-q/q_k)," }, { "math_id": 47, "text": "Z(0)" }, { "math_id": 48, "text": "c" }, { "math_id": 49, "text": "q_k" }, { "math_id": 50, "text": "k" }, { "math_id": 51, "text": "-F(q) = \\log[Z(q)] = \\log[Z(0)]+cq+\\sum_k \\log[1-q/q_k]." 
}, { "math_id": 52, "text": "n" }, { "math_id": 53, "text": "\\langle \\!\\langle \\Phi^n \\rangle \\!\\rangle = \\partial^n_q [-F(q)] = -\\sum_k \\frac{(n-1)!}{(q_k-q)^n}, \\quad n>1." }, { "math_id": 54, "text": "\\langle \\!\\langle \\Phi^n \\rangle \\!\\rangle = -(n-1)!\\sum_k \\frac{2 \\cos(n \\arg\\{q_k-q\\})}{|q_k-q|^n}, \\quad n>1," }, { "math_id": 55, "text": "q_0" }, { "math_id": 56, "text": "\\langle \\!\\langle \\Phi^n \\rangle \\!\\rangle \\simeq -(n-1)!\\frac{2 \\cos(n \\arg\\{q_0-q\\})}{|q_0-q|^n}, \\quad n\\gg 1." }, { "math_id": 57, "text": "\\begin{bmatrix}2 \\text{Re}[q-q_0] \\\\ |q-q_0| \\end{bmatrix} = \\begin{bmatrix}1 & -\\frac{\\kappa^{(+)}_n}{n}\\\\ 1 & -\\frac{\\kappa^{(+)}_{n+1}}{n+1} \\end{bmatrix}^{-1} \\begin{bmatrix}(n-1) \\kappa_n^{(-)} \\\\ n \\kappa_{n+1}^{(-)} \\end{bmatrix}, \\qquad \\kappa^{\\pm} \\equiv \\frac{\\langle \\!\\langle \\Phi^{n\\pm1}\\rangle\\!\\rangle}{\\langle \\!\\langle \\Phi^{n}\\rangle\\!\\rangle}." } ]
https://en.wikipedia.org/wiki?curid=65171762
6517456
Plane stress
When the stress vector within a material is zero across a particular plane In continuum mechanics, a material is said to be under plane stress if the stress vector is zero across a particular plane. When that situation occurs over an entire element of a structure, as is often the case for thin plates, the stress analysis is considerably simplified, as the stress state can be represented by a tensor of dimension 2 (representable as a 2×2 matrix rather than 3×3). A related notion, plane strain, is often applicable to very thick members. Plane stress typically occurs in thin flat plates that are acted upon only by load forces that are parallel to them. In certain situations, a gently curved thin plate may also be assumed to have plane stress for the purpose of stress analysis. This is the case, for example, of a thin-walled cylinder filled with a fluid under pressure. In such cases, stress components perpendicular to the plate are negligible compared to those parallel to it. In other situations, however, the bending stress of a thin plate cannot be neglected. One can still simplify the analysis by using a two-dimensional domain, but the plane stress tensor at each point must be complemented with bending terms. Mathematical definition. Mathematically, the stress at some point in the material is a plane stress if one of the three principal stresses (the eigenvalues of the Cauchy stress tensor) is zero. That is, there is a Cartesian coordinate system in which the stress tensor has the form formula_0 For example, consider a rectangular block of material measuring 10, 40 and 5 cm along the formula_1, formula_2, and formula_3 directions, which is being stretched in the formula_1 direction and compressed in the formula_2 direction by pairs of opposite forces with magnitudes 10 N and 20 N, respectively, uniformly distributed over the corresponding faces. The stress tensor inside the block will be formula_4 More generally, if one chooses the first two coordinate axes arbitrarily but perpendicular to the direction of zero stress, the stress tensor will have the form formula_5 and can therefore be represented by a 2 × 2 matrix, formula_6 Plane stress in curved surfaces. In certain cases, the plane stress model can be used in the analysis of gently curved surfaces. For example, consider a thin-walled cylinder subjected to an axial compressive load uniformly distributed along its rim, and filled with a pressurized fluid. The internal pressure will generate a reactive hoop stress on the wall, a normal tensile stress directed perpendicular to the cylinder axis and tangential to its surface. The cylinder can be conceptually unrolled and analyzed as a flat thin rectangular plate subjected to tensile load in one direction and compressive load in another direction, both parallel to the plate. Plane strain (strain matrix). If one dimension is very large compared to the others, the principal strain in the direction of the longest dimension is constrained and can be assumed to be constant, meaning that there is effectively zero strain along it, hence yielding a plane strain condition (Figure 7.2). In this case, though all principal stresses are non-zero, the principal stress in the direction of the longest dimension can be disregarded for calculations, allowing a two-dimensional analysis of stresses, e.g. a dam analyzed at a cross section loaded by the reservoir.
The corresponding strain tensor is: formula_7 and the corresponding stress tensor is: formula_8 in which the non-zero formula_9 term arises from the Poisson's effect. However, this term can be temporarily removed from the stress analysis to leave only the in-plane terms, effectively reducing the analysis to two dimensions. Stress transformation in plane stress and plane strain. Consider a point formula_10 in a continuum under a state of plane stress, or plane strain, with stress components formula_11 and all other stress components equal to zero (Figure 8.1). From static equilibrium of an infinitesimal material element at formula_10 (Figure 8.2), the normal stress formula_12 and the shear stress formula_13 on any plane perpendicular to the formula_14-formula_15 plane passing through formula_10 with a unit vector formula_16 making an angle of formula_17 with the horizontal, i.e. formula_18 is the direction cosine in the formula_14 direction, is given by: formula_19 formula_20 These equations indicate that in a plane stress or plane strain condition, one can determine the stress components at a point on all directions, i.e. as a function of formula_17, if one knows the stress components formula_11 on any two perpendicular directions at that point. It is important to remember that we are considering a unit area of the infinitesimal element in the direction parallel to the formula_15-formula_21 plane. The principal directions (Figure 8.3), i.e., orientation of the planes where the shear stress components are zero, can be obtained by making the previous equation for the shear stress formula_13 equal to zero. Thus we have: formula_22 and we obtain formula_23 This equation defines two values formula_24 which are formula_25 apart (Figure 8.3). The same result can be obtained by finding the angle formula_17 which makes the normal stress formula_12 a maximum, i.e. formula_26 The principal stresses formula_27 and formula_28, or minimum and maximum normal stresses formula_29 and formula_30, respectively, can then be obtained by replacing both values of formula_24 into the previous equation for formula_12. This can be achieved by rearranging the equations for formula_12 and formula_13, first transposing the first term in the first equation and squaring both sides of each of the equations then adding them. Thus we have formula_31 formula_32 where formula_33 which is the equation of a circle of radius formula_34 centered at a point with coordinates formula_35, called Mohr's circle. But knowing that for the principal stresses the shear stress formula_36, then we obtain from this equation: formula_37 formula_38 When formula_39 the infinitesimal element is oriented in the direction of the principal planes, thus the stresses acting on the rectangular element are principal stresses: formula_40 and formula_41. Then the normal stress formula_12 and shear stress formula_13 as a function of the principal stresses can be determined by making formula_39. Thus we have formula_42 formula_43 Then the maximum shear stress formula_44 occurs when formula_45, i.e. formula_46 (Figure 8.3): formula_47 Then the minimum shear stress formula_48 occurs when formula_49, i.e. formula_50 (Figure 8.3): formula_51
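As a numerical companion to the transformation formulas above, the following Python sketch computes the principal stresses, the maximum in-plane shear stress and the principal direction for an arbitrary plane stress state, and verifies that the shear stress vanishes on the principal plane; the input values are invented for the example.

```python
import math

def principal_stresses(sigma_x, sigma_y, tau_xy):
    """Return (sigma_1, sigma_2, tau_max, theta_p) for a plane stress state.

    sigma_1, sigma_2 -- maximum and minimum normal (principal) stresses
    tau_max          -- maximum in-plane shear stress (radius of Mohr's circle)
    theta_p          -- orientation of the principal planes, in radians
    """
    sigma_avg = 0.5 * (sigma_x + sigma_y)                    # center of Mohr's circle
    radius = math.hypot(0.5 * (sigma_x - sigma_y), tau_xy)   # R = sqrt(((sx - sy)/2)^2 + txy^2)
    theta_p = 0.5 * math.atan2(2.0 * tau_xy, sigma_x - sigma_y)
    return sigma_avg + radius, sigma_avg - radius, radius, theta_p

def transformed_stress(sigma_x, sigma_y, tau_xy, theta):
    """Normal and shear stress on a plane whose normal makes angle theta with the x axis."""
    c2, s2 = math.cos(2 * theta), math.sin(2 * theta)
    sigma_n = 0.5 * (sigma_x + sigma_y) + 0.5 * (sigma_x - sigma_y) * c2 + tau_xy * s2
    tau_n = -0.5 * (sigma_x - sigma_y) * s2 + tau_xy * c2
    return sigma_n, tau_n

# Made-up example state (units of MPa).
sx, sy, txy = 80.0, -40.0, 25.0
s1, s2, tmax, thp = principal_stresses(sx, sy, txy)
print(f"sigma_1 = {s1:.1f} MPa, sigma_2 = {s2:.1f} MPa, tau_max = {tmax:.1f} MPa")

# On the principal plane the shear stress should vanish (up to rounding).
print(transformed_stress(sx, sy, txy, thp))
```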
[ { "math_id": 0, "text": "\\sigma = \n\\begin{bmatrix}\n\\sigma_{11} & 0 & 0 \\\\\n0 & \\sigma_{22} & 0 \\\\\n0 & 0 & 0\n\\end{bmatrix} \n\\equiv \n\\begin{bmatrix}\n\\sigma_{x} & 0 & 0 \\\\\n0 & \\sigma_{y} & 0 \\\\\n0 & 0 & 0\n\\end{bmatrix}" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "z" }, { "math_id": 4, "text": "\\sigma = \n\\begin{bmatrix}\n500\\mathrm{ Pa} & 0 & 0 \\\\\n0 & -4000\\mathrm{ Pa} & 0 \\\\\n0 & 0 & 0\n\\end{bmatrix}\n" }, { "math_id": 5, "text": "\\sigma = \n\\begin{bmatrix}\n\\sigma_{11} & \\sigma_{12} & 0 \\\\\n\\sigma_{21} & \\sigma_{22} & 0 \\\\\n0 & 0 & 0\n\\end{bmatrix} \n\\equiv \n\\begin{bmatrix}\n\\sigma_{x} & \\tau_{xy} & 0 \\\\\n\\tau_{yx} & \\sigma_{y} & 0 \\\\\n0 & 0 & 0\n\\end{bmatrix}" }, { "math_id": 6, "text": "\\sigma_{ij} = \n\\begin{bmatrix}\n\\sigma_{11} & \\sigma_{12} \\\\\n\\sigma_{21} & \\sigma_{22}\n\\end{bmatrix} \n\\equiv \n\\begin{bmatrix}\n\\sigma_{x} & \\tau_{xy} \\\\\n\\tau_{yx} & \\sigma_{y}\n\\end{bmatrix}" }, { "math_id": 7, "text": "\\varepsilon_{ij} = \\begin{bmatrix}\n\\varepsilon_{11} & \\varepsilon_{12} & 0 \\\\\n\\varepsilon_{21} & \\varepsilon_{22} & 0 \\\\\n0 & 0 & 0\n\\end{bmatrix}\\,\\!" }, { "math_id": 8, "text": "\\sigma_{ij} = \\begin{bmatrix}\n\\sigma_{11} & \\sigma_{12} & 0 \\\\\n\\sigma_{21} & \\sigma_{22} & 0 \\\\\n0 & 0 & \\sigma_{33}\n\\end{bmatrix}\\,\\!" }, { "math_id": 9, "text": "\\sigma_{33}\\,\\!" }, { "math_id": 10, "text": "P\\,\\!" }, { "math_id": 11, "text": "(\\sigma_x, \\sigma_y, \\tau_{xy})\\,\\!" }, { "math_id": 12, "text": "\\sigma_\\mathrm{n}\\,\\!" }, { "math_id": 13, "text": "\\tau_\\mathrm{n}\\,\\!" }, { "math_id": 14, "text": "x\\,\\!" }, { "math_id": 15, "text": "y\\,\\!" }, { "math_id": 16, "text": "\\mathbf n\\,\\!" }, { "math_id": 17, "text": "\\theta\\,\\!" }, { "math_id": 18, "text": "\\cos \\theta\\,\\!" }, { "math_id": 19, "text": "\\sigma_\\mathrm{n} = \\frac{1}{2} ( \\sigma_x + \\sigma_y ) + \\frac{1}{2} ( \\sigma_x - \\sigma_y )\\cos 2\\theta + \\tau_{xy} \\sin 2\\theta\\,\\!" }, { "math_id": 20, "text": "\\tau_\\mathrm{n} = -\\frac{1}{2}(\\sigma_x - \\sigma_y )\\sin 2\\theta + \\tau_{xy}\\cos 2\\theta \\,\\!" }, { "math_id": 21, "text": "z\\,\\!" }, { "math_id": 22, "text": "\\tau_\\mathrm{n} = -\\frac{1}{2}(\\sigma_x - \\sigma_y )\\sin 2\\theta + \\tau_{xy}\\cos 2\\theta=0\\,\\!" }, { "math_id": 23, "text": "\\tan 2 \\theta_\\mathrm{p} = \\frac{2 \\tau_{xy}}{\\sigma_x - \\sigma_y}\\,\\!" }, { "math_id": 24, "text": "\\theta_\\mathrm{p}\\,\\!" }, { "math_id": 25, "text": "90^\\circ\\,\\!" }, { "math_id": 26, "text": "\\frac{d\\sigma_\\mathrm{n}}{d\\theta}=0\\,\\!" }, { "math_id": 27, "text": "\\sigma_1\\,\\!" }, { "math_id": 28, "text": "\\sigma_2\\,\\!" }, { "math_id": 29, "text": "\\sigma_\\mathrm{max}\\,\\!" }, { "math_id": 30, "text": "\\sigma_\\mathrm{min}\\,\\!" }, { "math_id": 31, "text": "\n\\left[ \\sigma_\\mathrm{n} - \\tfrac{1}{2} ( \\sigma_x + \\sigma_y )\\right]^2 + \\tau_\\mathrm{n}^2 = \\left[\\tfrac{1}{2}(\\sigma_x - \\sigma_y)\\right]^2 + \\tau_{xy}^2 \\,\\!" }, { "math_id": 32, "text": "\n(\\sigma_\\mathrm{n} - \\sigma_\\mathrm{avg})^2 + \\tau_\\mathrm{n}^2 = R^2 \\,\\!" }, { "math_id": 33, "text": "R = \\sqrt{\\left[\\tfrac{1}{2}(\\sigma_x - \\sigma_y)\\right]^2 + \\tau_{xy}^2} \\quad \\text{and} \\quad \\sigma_\\mathrm{avg} = \\tfrac{1}{2} ( \\sigma_x + \\sigma_y )\\,\\!" }, { "math_id": 34, "text": "R\\,\\!" }, { "math_id": 35, "text": "[\\sigma_\\mathrm{avg}, 0]\\,\\!" 
}, { "math_id": 36, "text": "\\tau_\\mathrm{n} = 0\\,\\!" }, { "math_id": 37, "text": "\\sigma_1 =\\sigma_\\mathrm{max} = \\tfrac{1}{2}(\\sigma_x + \\sigma_y) + \\sqrt{\\left[\\tfrac{1}{2}(\\sigma_x - \\sigma_y)\\right]^2 + \\tau_{xy}^2}\\,\\!" }, { "math_id": 38, "text": "\\sigma_2 =\\sigma_\\mathrm{min} = \\tfrac{1}{2}(\\sigma_x + \\sigma_y) - \\sqrt{\\left[\\tfrac{1}{2}(\\sigma_x - \\sigma_y)\\right]^2 + \\tau_{xy}^2}\\,\\!" }, { "math_id": 39, "text": "\\tau_{xy}=0\\,\\!" }, { "math_id": 40, "text": "\\sigma_x = \\sigma_1\\,\\!" }, { "math_id": 41, "text": "\\sigma_y = \\sigma_2\\,\\!" }, { "math_id": 42, "text": "\\sigma_\\mathrm{n} = \\frac{1}{2} ( \\sigma_1 + \\sigma_2 ) + \\frac{1}{2} ( \\sigma_1 - \\sigma_2 )\\cos 2\\theta\\,\\!" }, { "math_id": 43, "text": "\\tau_\\mathrm{n} = -\\frac{1}{2}(\\sigma_1 - \\sigma_2 )\\sin 2\\theta\\,\\!" }, { "math_id": 44, "text": "\\tau_\\mathrm{max}\\,\\!" }, { "math_id": 45, "text": "\\sin 2\\theta = 1\\,\\!" }, { "math_id": 46, "text": "\\theta = 45^\\circ\\,\\!" }, { "math_id": 47, "text": "\\tau_\\mathrm{max} = \\frac{1}{2}(\\sigma_1 - \\sigma_2 )\\,\\!" }, { "math_id": 48, "text": "\\tau_\\mathrm{min}\\,\\!" }, { "math_id": 49, "text": "\\sin 2\\theta = -1\\,\\!" }, { "math_id": 50, "text": "\\theta = 135^\\circ\\,\\!" }, { "math_id": 51, "text": "\\tau_\\mathrm{min} = -\\frac{1}{2}(\\sigma_1 - \\sigma_2 )\\,\\!" } ]
https://en.wikipedia.org/wiki?curid=6517456
651752
Scaling (geometry)
Geometric transformation In affine geometry, uniform scaling (or isotropic scaling) is a linear transformation that enlarges (increases) or shrinks (diminishes) objects by a "scale factor" that is the same in all directions. The result of uniform scaling is similar (in the geometric sense) to the original. A scale factor of 1 is normally allowed, so that congruent shapes are also classed as similar. Uniform scaling happens, for example, when enlarging or reducing a photograph, or when creating a scale model of a building, car, airplane, etc. More general is scaling with a separate scale factor for each axis direction. Non-uniform scaling (anisotropic scaling) is obtained when at least one of the scaling factors is different from the others; a special case is directional scaling or stretching (in one direction). Non-uniform scaling changes the shape of the object; e.g. a square may change into a rectangle, or into a parallelogram if the sides of the square are not parallel to the scaling axes (the angles between lines parallel to the axes are preserved, but not all angles). It occurs, for example, when a faraway billboard is viewed from an oblique angle, or when the shadow of a flat object falls on a surface that is not parallel to it. When the scale factor is larger than 1, (uniform or non-uniform) scaling is sometimes also called dilation or enlargement. When the scale factor is a positive number smaller than 1, scaling is sometimes also called contraction or reduction. In the most general sense, a scaling includes the case in which the directions of scaling are not perpendicular. It also includes the case in which one or more scale factors are equal to zero (projection), and the case of one or more negative scale factors (a directional scaling by -1 is equivalent to a reflection). Scaling is a linear transformation, and a special case of homothetic transformation (scaling about a point). In most cases, the homothetic transformations are non-linear transformations. Uniform scaling. A scale factor is usually a decimal which scales, or multiplies, some quantity. In the equation "y" = "Cx", "C" is the scale factor for "x". "C" is also the coefficient of "x", and may be called the constant of proportionality of "y" to "x". For example, doubling distances corresponds to a scale factor of two for distance, while cutting a cake in half results in pieces with a scale factor for volume of one half. The basic equation for it is image over preimage. In the field of measurements, the scale factor of an instrument is sometimes referred to as sensitivity. The ratio of any two corresponding lengths in two similar geometric figures is also called a scale. Matrix representation. A scaling can be represented by a scaling matrix. To scale an object by a vector "v" = ("vx, vy, vz"), each point "p" = ("px, py, pz") would need to be multiplied with this scaling matrix: formula_0 As shown below, the multiplication will give the expected result: formula_1 Such a scaling changes the diameter of an object by a factor between the scale factors, the area by a factor between the smallest and the largest product of two scale factors, and the volume by the product of all three. The scaling is uniform if and only if the scaling factors are equal ("vx = vy = vz"). If all except one of the scale factors are equal to 1, we have directional scaling. In the case where "vx = vy = vz = k", scaling increases the area of any surface by a factor of "k"2 and the volume of any solid object by a factor of "k"3. 
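The matrix form above translates directly into code. This minimal NumPy sketch, with made-up numbers, applies a non-uniform scaling matrix to a point and confirms the component-wise result.

```python
import numpy as np

def scaling_matrix(vx, vy, vz):
    """Diagonal matrix that scales by vx, vy, vz along the coordinate axes."""
    return np.diag([vx, vy, vz])

p = np.array([2.0, -1.0, 4.0])      # an arbitrary point
S = scaling_matrix(3.0, 0.5, 1.0)   # non-uniform scaling: stretch x, shrink y, leave z

print(S @ p)                                               # -> [ 6.  -0.5  4. ], i.e. (vx*px, vy*py, vz*pz)
print(np.allclose(S @ p, np.array([3.0, 0.5, 1.0]) * p))   # True: same as the element-wise product
```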
Scaling in arbitrary dimensions. In formula_2-dimensional space formula_3, uniform scaling by a factor formula_4 is accomplished by scalar multiplication with formula_4, that is, multiplying each coordinate of each point by formula_4. As a special case of linear transformation, it can be achieved also by multiplying each point (viewed as a column vector) with a diagonal matrix whose entries on the diagonal are all equal to formula_4, namely formula_5 . Non-uniform scaling is accomplished by multiplication with any symmetric matrix. The eigenvalues of the matrix are the scale factors, and the corresponding eigenvectors are the axes along which each scale factor applies. A special case is a diagonal matrix, with arbitrary numbers formula_6 along the diagonal: the axes of scaling are then the coordinate axes, and the transformation scales along each axis formula_7 by the factor formula_8. In uniform scaling with a non-zero scale factor, all non-zero vectors retain their direction (as seen from the origin), or all have the direction reversed, depending on the sign of the scaling factor. In non-uniform scaling only the vectors that belong to an eigenspace will retain their direction. A vector that is the sum of two or more non-zero vectors belonging to different eigenspaces will be tilted towards the eigenspace with largest eigenvalue. Using homogeneous coordinates. In projective geometry, often used in computer graphics, points are represented using homogeneous coordinates. To scale an object by a vector "v" = ("vx, vy, vz"), each homogeneous coordinate vector "p" = ("px, py, pz", 1) would need to be multiplied with this projective transformation matrix: formula_9 As shown below, the multiplication will give the expected result: formula_10 Since the last component of a homogeneous coordinate can be viewed as the denominator of the other three components, a uniform scaling by a common factor "s" (uniform scaling) can be accomplished by using this scaling matrix: formula_11 For each vector "p" = ("px, py, pz", 1) we would have formula_12 which would be equivalent to formula_13 Function dilation and contraction. Given a point formula_14, the dilation associates it with the point formula_15 through the equations formula_16 for formula_17. Therefore, given a function formula_18, the equation of the dilated function is formula_19 Particular cases. If formula_20, the transformation is horizontal; when formula_21, it is a dilation, when formula_22, it is a contraction. If formula_23, the transformation is vertical; when formula_24 it is a dilation, when formula_25, it is a contraction. If formula_26 or formula_27, the transformation is a squeeze mapping. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
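As a complement, uniform scaling through the last homogeneous coordinate, as described above, can be sketched in the same way; again the numbers are arbitrary.

```python
import numpy as np

def uniform_scaling_homogeneous(s):
    """4x4 matrix that scales uniformly by s via the last homogeneous coordinate."""
    S = np.eye(4)
    S[3, 3] = 1.0 / s
    return S

p = np.array([2.0, -1.0, 4.0, 1.0])   # homogeneous coordinates of the point (2, -1, 4)
q = uniform_scaling_homogeneous(2.5) @ p

# Dividing by the last component recovers the scaled Cartesian point (5, -2.5, 10).
print(q[:3] / q[3])
```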
[ { "math_id": 0, "text": " S_v = \n\\begin{bmatrix}\nv_x & 0 & 0 \\\\\n0 & v_y & 0 \\\\\n0 & 0 & v_z \\\\\n\\end{bmatrix}.\n" }, { "math_id": 1, "text": "\nS_vp =\n\\begin{bmatrix}\nv_x & 0 & 0 \\\\\n0 & v_y & 0 \\\\\n0 & 0 & v_z \\\\\n\\end{bmatrix}\n\\begin{bmatrix}\np_x \\\\ p_y \\\\ p_z \n\\end{bmatrix}\n=\n\\begin{bmatrix}\nv_xp_x \\\\ v_yp_y \\\\ v_zp_z\n\\end{bmatrix}.\n" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "\\mathbb{R}^n" }, { "math_id": 4, "text": "v" }, { "math_id": 5, "text": "v I" }, { "math_id": 6, "text": "v_1,v_2,\\ldots v_n" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "v_i" }, { "math_id": 9, "text": " S_v = \n\\begin{bmatrix}\nv_x & 0 & 0 & 0 \\\\\n0 & v_y & 0 & 0 \\\\\n0 & 0 & v_z & 0 \\\\\n0 & 0 & 0 & 1 \n\\end{bmatrix}.\n" }, { "math_id": 10, "text": "\nS_vp =\n\\begin{bmatrix}\nv_x & 0 & 0 & 0 \\\\\n0 & v_y & 0 & 0 \\\\\n0 & 0 & v_z & 0 \\\\\n0 & 0 & 0 & 1 \n\\end{bmatrix}\n\\begin{bmatrix}\np_x \\\\ p_y \\\\ p_z \\\\ 1 \n\\end{bmatrix}\n=\n\\begin{bmatrix}\nv_xp_x \\\\ v_yp_y \\\\ v_zp_z \\\\ 1 \n\\end{bmatrix}.\n" }, { "math_id": 11, "text": " S_v = \n\\begin{bmatrix}\n1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & \\frac{1}{s} \n\\end{bmatrix}.\n" }, { "math_id": 12, "text": "\nS_vp =\n\\begin{bmatrix}\n1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & \\frac{1}{s} \n\\end{bmatrix}\n\\begin{bmatrix}\np_x \\\\ p_y \\\\ p_z \\\\ 1 \n\\end{bmatrix}\n=\n\\begin{bmatrix}\np_x \\\\ p_y \\\\ p_z \\\\ \\frac{1}{s} \n\\end{bmatrix}\n," }, { "math_id": 13, "text": "\n\\begin{bmatrix}\nsp_x \\\\ sp_y \\\\ sp_z \\\\ 1 \n\\end{bmatrix}.\n" }, { "math_id": 14, "text": "P(x,y)" }, { "math_id": 15, "text": "P'(x',y')" }, { "math_id": 16, "text": "\\begin{cases}x'=mx \\\\ y'=ny\\end{cases}" }, { "math_id": 17, "text": "m,n \\in \\R^+" }, { "math_id": 18, "text": "y=f(x)" }, { "math_id": 19, "text": "y=nf\\left(\\frac{x}{m}\\right)." }, { "math_id": 20, "text": "n=1" }, { "math_id": 21, "text": "m > 1" }, { "math_id": 22, "text": "m < 1" }, { "math_id": 23, "text": "m=1" }, { "math_id": 24, "text": "n>1" }, { "math_id": 25, "text": "n<1" }, { "math_id": 26, "text": "m=1/n" }, { "math_id": 27, "text": "n=1/m" } ]
https://en.wikipedia.org/wiki?curid=651752
651822
Orthographic map projection
Azimuthal perspective map projection Orthographic projection in cartography has been used since antiquity. Like the stereographic projection and gnomonic projection, orthographic projection is a perspective (or azimuthal) projection in which the sphere is projected onto a tangent plane or secant plane. The "point of perspective" for the orthographic projection is at infinite distance. It depicts a hemisphere of the globe as it appears from outer space, where the horizon is a great circle. The shapes and areas are distorted, particularly near the edges. History. The orthographic projection has been known since antiquity, with its cartographic uses being well documented. Hipparchus used the projection in the 2nd century BC to determine the places of star-rise and star-set. In about 14 BC, Roman engineer Marcus Vitruvius Pollio used the projection to construct sundials and to compute sun positions. Vitruvius also seems to have devised the term orthographic (from the Greek "orthos" (= “straight”) and graphē (= “drawing”)) for the projection. However, the name "analemma", which also meant a sundial showing latitude and longitude, was the common name until François d'Aguilon of Antwerp promoted its present name in 1613. The earliest surviving maps on the projection appear as crude woodcut drawings of terrestrial globes of 1509 (anonymous), 1533 and 1551 (Johannes Schöner), and 1524 and 1551 (Apian). A highly-refined map, designed by Renaissance polymath Albrecht Dürer and executed by Johannes Stabius, appeared in 1515. Photographs of the Earth and other planets from spacecraft have inspired renewed interest in the orthographic projection in astronomy and planetary science. Mathematics. The formulas for the spherical orthographic projection are derived using trigonometry. They are written in terms of longitude ("λ") and latitude ("φ") on the sphere. Define the radius of the sphere "R" and the "center" point (and origin) of the projection ("λ"0, "φ"0). The equations for the orthographic projection onto the ("x", "y") tangent plane reduce to the following: formula_0 Latitudes beyond the range of the map should be clipped by calculating the angular distance "c" from the "center" of the orthographic projection. This ensures that points on the opposite hemisphere are not plotted: formula_1. The point should be clipped from the map if cos("c") is negative. That is, all points that are included in the mapping satisfy: formula_2. The inverse formulas are given by: formula_3 where formula_4 For computation of the inverse formulas the use of the two-argument atan2 form of the inverse tangent function (as opposed to atan) is recommended. This ensures that the sign of the orthographic projection as written is correct in all quadrants. The inverse formulas are particularly useful when trying to project a variable defined on a ("λ", "φ") grid onto a rectilinear grid in ("x", "y"). Direct application of the orthographic projection yields scattered points in ("x", "y"), which creates problems for plotting and numerical integration. One solution is to start from the ("x", "y") projection plane and construct the image from the values defined in ("λ", "φ") by using the inverse formulas of the orthographic projection. See References for an ellipsoidal version of the orthographic map projection. Orthographic projections onto cylinders. 
In a wide sense, all projections with the point of perspective at infinity (and therefore parallel projecting lines) are considered as orthographic, regardless of the surface onto which they are projected. Such projections distort angles and areas close to the poles. An example of an orthographic projection onto a cylinder is the Lambert cylindrical equal-area projection. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
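The projection and clipping formulas above are straightforward to implement. The Python sketch below is a direct transcription that uses the two-argument arctangent recommended above; the test point and projection center are arbitrary, and the ellipsoidal refinements mentioned in the references are ignored.

```python
import math

def orthographic_forward(lam, phi, lam0, phi0, R=1.0):
    """Project (lam, phi) [radians] onto the tangent plane at (lam0, phi0).

    Returns (x, y), or None when the point lies on the far hemisphere (cos c <= 0).
    """
    cos_c = math.sin(phi0) * math.sin(phi) + math.cos(phi0) * math.cos(phi) * math.cos(lam - lam0)
    if cos_c <= 0:
        return None  # clipped: not visible from the projection direction
    x = R * math.cos(phi) * math.sin(lam - lam0)
    y = R * (math.cos(phi0) * math.sin(phi) - math.sin(phi0) * math.cos(phi) * math.cos(lam - lam0))
    return x, y

def orthographic_inverse(x, y, lam0, phi0, R=1.0):
    """Invert the projection; returns (lam, phi) in radians."""
    rho = math.hypot(x, y)
    if rho == 0:
        return lam0, phi0
    c = math.asin(rho / R)
    phi = math.asin(math.cos(c) * math.sin(phi0) + y * math.sin(c) * math.cos(phi0) / rho)
    lam = lam0 + math.atan2(x * math.sin(c),
                            rho * math.cos(c) * math.cos(phi0) - y * math.sin(c) * math.sin(phi0))
    return lam, phi

# Round-trip test: a point at 30 deg E, 50 deg N projected about a center at 0 deg E, 45 deg N.
lam0, phi0 = math.radians(0), math.radians(45)
lam, phi = math.radians(30), math.radians(50)
x, y = orthographic_forward(lam, phi, lam0, phi0)
print([round(math.degrees(v), 6) for v in orthographic_inverse(x, y, lam0, phi0)])  # ~[30.0, 50.0]
```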
[ { "math_id": 0, "text": "\\begin{align}\nx &= R\\,\\cos\\varphi \\sin\\left(\\lambda - \\lambda_0\\right) \\\\\ny &= R\\big(\\cos\\varphi_0 \\sin\\varphi - \\sin\\varphi_0 \\cos\\varphi \\cos\\left(\\lambda - \\lambda_0\\right)\\big)\n\\end{align}" }, { "math_id": 1, "text": "\\cos c = \\sin\\varphi_0 \\sin\\varphi + \\cos\\varphi_0 \\cos\\varphi \\cos\\left(\\lambda - \\lambda_0\\right)\\," }, { "math_id": 2, "text": "-\\frac{\\pi}{2} < c < \\frac{\\pi}{2}" }, { "math_id": 3, "text": "\\begin{align}\n\\varphi &= \\arcsin\\left(\\cos c \\sin\\varphi_0 + \\frac{y\\sin c \\cos\\varphi_0}{\\rho}\\right) \\\\\n\\lambda &= \\lambda_0 + \\arctan\\left(\\frac{x\\sin c}{\\rho \\cos c \\cos\\varphi_0 - y \\sin c \\sin\\varphi_0}\\right)\n\\end{align}" }, { "math_id": 4, "text": "\\begin{align}\n\\rho &= \\sqrt{x^2 + y^2} \\\\\n c &= \\arcsin\\frac{\\rho}{R}\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=651822
6518342
Formative assessment
Method in education &lt;templatestyles src="Template:TOC_right/styles.css" /&gt; Formative assessment, formative evaluation, formative feedback, or assessment for learning, including "diagnostic testing", is a range of formal and informal assessment procedures conducted by teachers during the learning process in order to modify teaching and learning activities to improve student attainment. The goal of a formative assessment is to "monitor student learning" to provide ongoing feedback that can help students identify their strengths and weaknesses and target areas that need work. It also helps faculty recognize where students are struggling and address problems immediately. It typically involves qualitative feedback (rather than scores) for both student and teacher that focuses on the details of content and performance. It is commonly contrasted with summative assessment, which seeks to monitor educational outcomes, often for purposes of external accountability. Definition. Formative assessment involves a continuous way of checks and balances in the teaching learning processes. The method allows teachers to frequently check their learners' progress and the effectiveness of their own practice, thus allowing for self assessment of the student. Practice in a classroom is formative to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited. Formative assessments give in-process feedback about what students are or are not learning so instructional approaches, teaching materials, and academic support can be modified to the students' needs. They are not graded, can be informal in nature, and they may take a variety of forms. Formative assessments are generally low stakes, which means that they have low or no point value. Examples of formative assessments include asking students to draw a concept map in class to represent their understanding of a topic, submit one or two sentences identifying the main point of a lecture, or turn in a research proposal for early feedback. Origin of the term. Michael Scriven coined the terms formative and summative evaluation in 1967, and emphasized their differences both in terms of the goals of the information they seek and how the information is used. For Scriven, formative evaluation gathered information to assess the effectiveness of a curriculum and guide school system choices as to which curriculum to adopt and how to improve it. Benjamin Bloom took up the term in 1968 in the book "Learning for Mastery" to consider formative assessment as a tool for improving the teaching-learning process for students. His subsequent 1971 book "Handbook of Formative and Summative Evaluation", written with Thomas Hasting and George Madaus, showed how formative assessments could be linked to instructional units in a variety of content areas. It is this approach that reflects the generally accepted meaning of the term today. For both Scriven and Bloom, an assessment, whatever its other uses, is only formative if it is used to alter subsequent educational decisions. Subsequently, however, Paul Black and Dylan Wiliam suggested this definition is too restrictive, since formative assessments may be used to provide evidence that the intended course of action was indeed appropriate. 
They propose that practice in a classroom is formative to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited. Versus summative assessment. The type of assessment that people may be more familiar with is summative assessment. The table below shows some basic differences between the two types of assessment. Principles. Among the most comprehensive listing of principles of assessment for learning are those written by the QCA (Qualifications and Curriculum Authority). The authority, which is sponsored by England's Department for Children, Schools and Families, is responsible for national curriculum, assessment, and examinations. Their principal focus is on crucial aspects of assessment for learning, including how such assessment should be seen as central to classroom practice, and that all teachers should regard assessment for learning as a key professional skill. The UK Assessment Reform Group (1999) identifies "The big 5 principles of assessment for learning": In the United States, the Assessment For Learning Project has identified four "core shifts" and ten "emerging principles" of assessment for learning: Core shifts Emerging principles Rationale and practice. Formative assessment serves several purposes: Characteristics of formative assessment: According to Harlen and James (1997), formative assessment: Feedback is the central function of formative assessment. It typically involves a focus on the detailed content of what is being learnt, rather than simply a test score or other measurement of how far a student is falling short of the expected standard. Examples. The time between formative assessment and adjustments to learning can be a matter of seconds or a matter of months. Some examples of formative assessment are: Evidence. Meta-analysis of studies into formative assessment have indicated significant learning gains where formative assessment is used, across all content areas, knowledge and skill types, and levels of education. Educational researcher Robert J. Marzano states: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Recall the finding from Black and Wiliam's (1998) synthesis of more than 250 studies that formative assessments, as opposed to summative ones, produce the more powerful effect on student learning. In his review of the research, Terrance Crooks (1988) reports that effects sizes for summative assessments are consistently lower than effect sizes for formative assessments. In short, it is formative assessment that has a strong research base supporting its impact on learning. While empirical evidence has shown the substantial impact formative assessment has in raising student achievement, it is also "recognized as one of the most powerful ways to enhance student motivation". Believing in their ability to learn, contributing learning successes to individual efforts and abilities, emphasizing progress toward learning goals rather than letter grades, and evaluating "the nature of their thinking to identify strategies that improve understanding" are all manners in which motivation is enhanced through an effective use of formative assessment. 
However, for these gains to become evident formative assessment must (1) Clarify and share learning goals and success criteria; (2) Create effective classroom discussions and other tasks which demonstrate evidence of student understanding; (3) provide feedback which can and will be acted upon; (4) allow students to become instructional resources for one another; and (5) stimulate students to become owners of their own learning. Some researchers have concluded that standards-based assessments may be an effective way to "prescribe instruction and to ensure that no child is left behind". In past decades, teachers would design a unit of study that would typically include objectives, teaching strategies, and resources. The student's mark on this test or exam was taken as the indicator of his or her understanding of the topic. In 1998, Black &amp; Wiliam produced a review that highlighted that students who learn in a formative way achieve significantly better than matched control groups receiving normal teaching. Their work developed into several important research projects on Assessment for Learning by the King's College team including Kings-Medway-Oxfordshire Formative Assessment Project (KMOFAP), Assessment is For learning (Scotland), Jersey-Actioning-Formative assessment (Channel Islands), and smaller projects in England, Wales, Peru, and the USA. The strongest evidence of improved learning gains comes from short-cycle (over seconds or minutes within a single lesson) formative assessment, and medium to long-term assessment where assessment is used to change the teacher's regular classroom practice. Strategies. Understanding goals for learning. It is important for students to understand the goals and the criteria for success when learning in the classroom. Often teachers will introduce learning goals to their students before a lesson, but will not do an effective job in distinguishing between the end goals and what the students will be doing to achieve those goals. "When teachers start from what it is they want students to know and design their instruction backward from that goal, then instruction is far more likely to be effective". In a study done by Gray and Tall, they found that 72 students between the ages of 7 and 13 had different experiences when learning in mathematics. The study showed that higher achieving students looked over mathematical ambiguities, while the lower achieving students tended to get stuck on these misunderstandings. An example of this can be seen in the number formula_0. Although it is not explicitly stated, the operation between these two numbers is addition. If we look at the number formula_1, here the implied operation between formula_2 and formula_3 is multiplication. Finally if we take a look at the number formula_4, there is a completely different operation between the 6 and 1. The study showed that higher achieving students were able to look past this while other students were not. Another study done by White and Frederiksen showed that when twelve 7th grade science classrooms were given time to reflect on what they deemed to be quality work, and how they thought they would be evaluated on their work, the gap between the high achieving students and the low achieving students was decreased. One way to help with this is to offer students different examples of other students' work so they can evaluate the different pieces. By examining the different levels of work, students can start to differentiate between superior and inferior work. Feedback. 
There has been extensive research on how students are affected by feedback. Kluger and DeNisi (1996) reviewed over three thousand reports on feedback in schools, universities, and the workplace. Of these, only 131 were found to be scientifically rigorous, and of those, 50 of the studies showed that feedback actually has negative effects on its recipients. This is because feedback is often "ego-involving", that is, the feedback focuses on the individual student rather than the quality of the student's work. Feedback is often given in the form of a numerical or letter grade, and that perpetuates students being compared to their peers. The studies previously mentioned showed that the most effective feedback for students is when they are not only told in which areas they need to improve, but also how to go about improving. It has been shown that leaving comments alongside grades is just as ineffective as giving solely a numerical/letter grade (Butler 1987, 1989). This is because students tend to look at their grade and disregard any comments that are given to them. Students then tend to ask other students in the class for their grades, and compare those grades with their own. Questioning. Questioning is an important part of the learning process, and an even more important part is asking the right types of questions. Questions should either cause the student to think, or collect information to inform teaching. Questions that promote discussion and student reflection make it easier for students to stay on the right path towards completing their learning goals. Here are some types of questions that are good to ask students: Wait time. Wait time is the amount of time a teacher allows to pass between posing a question and the student's answer. Mary Budd Rowe went on to research the outcomes of having longer wait times for students. These included: Peer-assessment. Having students assess each other's work has been shown to have numerous benefits: In K–12. Formative assessment is valuable for day-to-day teaching when used to adapt instructional methods to meet students' needs and for monitoring student progress toward learning goals. Further, it helps students monitor their own progress as they get feedback from the teacher and/or peers, allowing the opportunity to revise and refine their thinking. Formative assessment is also known as educative assessment, classroom assessment, or assessment for learning. Methods. There are many ways to integrate formative assessment into K–12 classrooms. Although the key concepts of formative assessment such as constant feedback, modifying the instruction, and information about students' progress do not vary among different disciplines or levels, the methods or strategies may differ. For example, researchers developed generative activities (Stroup et al., 2004) and model-eliciting activities (Lesh et al., 2000) that can be used as formative assessment tools in mathematics and science classrooms. Others developed strategies for computer-supported collaborative learning environments (Wang et al., 2004b). More information about the implementation of formative assessment in specific areas is given below. Purpose. Formative assessment, or "diagnostic testing" as the National Board of Professional Teaching Standards argues, serves to create effective teaching curricula and classroom-specific evaluations. 
It involves gathering the best possible evidence about what students have learned, and then using that information to decide what to do next. By focusing on student-centered activities, students are able to relate the material to their own lives and experiences. Students are encouraged to think critically and to develop analytical skills. This type of testing allows for a teacher's lesson plan to be clear, creative, and reflective of the curriculum (T.P Scot et al., 2009). Based on the Appalachian Education Laboratory (AEL), "diagnostic testing" emphasizes effective teaching practices while "considering learners' experiences and their unique conceptions" (T.P Scot et al., 2009). Furthermore, it provides the framework for "efficient retrieval and application" (T.P Scot et al., 2009) by urging students to take charge of their education. The implication of this type of testing is that it develops knowledgeable students with a deep understanding of the information, who are then able to account for their comprehension of a subject. Specific applications. The following are examples of the application of formative assessment to content areas: In math education. In math education, it is important for teachers to see how their students approach problems, and how much mathematical knowledge, and at what level, they use when solving them. That is, knowing how students think in the process of learning or problem solving makes it possible for teachers to help their students overcome conceptual difficulties and, in turn, improve learning. In that sense, formative assessment is diagnostic. To employ formative assessment in the classroom, a teacher has to make sure that each student participates in the learning process by expressing their ideas; that there is a trusting environment in which students can provide each other with feedback; that he or she (the teacher) provides students with feedback; and that the instruction is modified according to students' needs. In math classes, thought-revealing activities such as model-eliciting activities (MEAs) and generative activities provide good opportunities for covering these aspects of formative assessment. Feedback examples. Here are some examples of possible feedback for students in math education: Different approaches for feedback encourage pupils to reflect: Another method has students looking to each other to gain knowledge. In second/foreign language education. As an ongoing assessment that focuses on the process, it helps teachers to check the current status of their students' language ability; that is, they can know what the students know and what they do not know. It also gives students chances to participate in modifying or planning the upcoming classes (Bachman &amp; Palmer, 1996). Participation in their learning increases students' motivation to learn the target language. It also raises students' awareness of their target languages, which results in them resetting their own goals. In consequence, it helps students to achieve their goals successfully, and helps teachers act as facilitators in fostering students' target language ability. In the classroom, short quizzes, reflective journals, or portfolios could be used as formative assessments (Cohen, 1994). In elementary education. In primary schools, it is used to inform the next steps of learning. Teachers and students both use formative assessments as a tool to make decisions based on data. 
Formative assessment occurs when teachers feed information back to students in ways that enable the student to learn better, or when students can engage in a similar, self-reflective process. The evidence shows that high quality formative assessment does have a powerful impact on student learning. Black and Wiliam (1998) report that studies of formative assessment show an effect size on standardized tests of between 0.4 and 0.7, larger than most known educational interventions. (The effect size is the ratio of the average improvement in test scores in the innovation to the range of scores of typical groups of pupils on the same tests; Black and Wiliam recognize that standardized tests are very limited measures of learning.) Formative assessment is particularly effective for students who have not done well in school, thus narrowing the gap between low and high achievers while raising overall achievement. Research examined by Black and Wiliam supports the conclusion that summative assessments tend to have a negative effect on student learning. Math and science. Model-eliciting activities (MEAs). Model-eliciting activities are based on real-life situations where students, working in small groups, present a mathematical model as a solution to a client's need (Zawojewski &amp; Carmona, 2001). The problem design enables students to evaluate their solutions according to the needs of a client identified in the problem situation and sustain themselves in productive, progressively effective cycles of conceptualizing and problem solving. Model-eliciting activities (MEAs) are ideally structured to help students build their real-world sense of problem solving towards increasingly powerful mathematical constructs. What is especially useful for mathematics educators and researchers is the capacity of MEAs to make students' thinking visible through their models and modeling cycles. Teachers do not prompt the use of particular mathematical concepts or their representational counterparts when presenting the problems. Instead, they choose activities that maximize the potential for students to develop the concepts that are the focal point in the curriculum by building on their early and intuitive ideas. The mathematical models emerge from the students' interactions with the problem situation, and learning is assessed via these emergent behaviors. Generative activities. In a generative activity, students are asked to come up with outcomes that are mathematically the same. Students can arrive at the responses or build responses from this sameness in a wide range of ways. The sameness gives coherence to the task and allows it to be an "organizational unit for performing a specific function." (Stroup et al., 2004) Other activities can also be used as means of formative assessment as long as they ensure the participation of every student, make students' thoughts visible to each other and to the teacher, and promote feedback to revise and refine thinking. In addition, complementary to all of these is modifying and adapting instruction based on the information gathered by those activities. In computer-supported learning. Many academics are seeking to diversify assessment tasks, broaden the range of skills assessed and provide students with more timely and informative feedback on their progress. Others wish to meet student expectations for more flexible delivery and to generate efficiencies in assessment that can ease academic staff workloads. 
The move to online and computer-based assessment is a natural outcome of the increasing use of information and communication technologies to enhance learning. As more students seek flexibility in their courses, it seems inevitable there will be growing expectations for flexible assessment as well. When implementing online and computer-based instruction, it is recommended that a structured framework or model be used to guide the assessment. The way in which teachers orchestrate their classroom activities and lessons can be improved through the use of connected classroom technologies. With the use of technology, the formative assessment process not only allows for the rapid collection, analysis and exploitation of student data but also provides teachers with the data needed to inform their teaching. In UK education. In the UK education system, formative assessment (or assessment for learning) has been a key aspect of the agenda for personalized learning. The Working Group on 14–19 Reform, led by Sir Mike Tomlinson, recommended that assessment of learners be refocused to be more teacher-led and less reliant on external assessment, putting learners at the heart of the assessment process. The UK government has stated that personalized learning depends on teachers knowing the strengths and weaknesses of individual learners, and that a key means of achieving this is through formative assessment, involving high quality feedback to learners included within every teaching session. The Assessment Reform Group has set out the following 10 principles for formative assessment. Learning should: Complex assessment. A complex assessment is one that requires a rubric and an expert examiner. Example items for complex assessment include a thesis, a funding proposal, etc. The complexity of such an assessment arises because the expected format is implicit rather than explicitly specified. In the past, it has been puzzling to deal with the ambiguous assessment criteria for final year project (FYP) thesis assessment. Webster, Pepper and Jenkins (2000) discussed some common general criteria for FYP theses and their ambiguity regarding use, meaning and application. Woolf (2004) stated more specifically, on the weighting of FYP assessment criteria: 'The departments are as silent on the weightings that they apply to their criteria as they are on the number of criteria that contribute to a grade'. A more serious concern was raised by Shay (2004), who argued that the FYP assessment for engineering and social sciences is 'a socially situated interpretive act', implying that many different alternative interpretations and grades are possible for one assessment task. The problems with FYP thesis assessment have thus received much attention over the decades since the assessment difficulty was discussed by Black (1975). Common formative assessments. The practice of common formative assessments is a way for teachers to use assessments to beneficially adjust their teaching pedagogy. The concept is that teachers who teach a common class can provide their classes with a common assessment. The results of that assessment could provide the teachers with valuable information, the most important being who on that teacher team is seeing the most success with his or her students on a given topic or standard. The purpose of this practice is to provide feedback for teachers, not necessarily students, so an assignment could be considered formative for teachers, but summative for students. 
Researchers Kim Bailey and Chris Jakicic have stated that common formative assessments "promote efficiency for teachers, promote equity for students, provide an effective strategy for determining whether the guaranteed curriculum is being taught and, more importantly, learned, inform the practice of individual teachers, build a team's capacity to improve its program, facilitate a systematic, collective response to students who are experiencing difficulty, [and] offer the most powerful tool for changing adult behavior and practice." Developing common formative assessments on a teacher team helps educators to address what Bailey and Jakicic lay out as the important questions to answer when reflecting on student progress. These include: Common formative assessments are a way to address the second question. Teachers collect data on how students are doing to gain understanding and insight into whether students are learning, and how they are making sense of the lessons being taught. After gathering this data, teachers develop systems and plans to address the third and fourth questions and, over several years, modify the first question to fit the learning needs of their specific students. When utilizing common formative assessments to collect data on student progress, teachers can compare their students' results. In tandem, they can also share the strategies they used in the classroom to teach that particular concept. With these things in mind, the teacher team can make some evaluations of what tasks and explanations seemed to produce the best student outcomes. Teachers who used alternate strategies now have new ideas for interventions and for when they teach the topic in upcoming years. Teacher teams can also use common formative assessments to review and calibrate their scoring practices. Teachers of a common class should aim to be as consistent as possible in evaluating their students. Comparing formative assessments, or having all teachers evaluate them together, is a way for teachers to adjust their grading criteria before the summative assessment. Through this practice, teachers are presented with an opportunity to grow professionally with the people who know them and understand their school environment. To make the practice of teacher teams, common formative assessments, and power standards the most advantageous, the practice of backwards design should be utilized. Backwards design is the idea in education that the summative assessment should be developed first and that all formative work and lessons leading up to that specific assessment should be created second. Tomlinson and McTighe wrote, "Although not a new idea, we have found that the deliberate use of backwards design for planning courses, units, and individual lessons results in more clearly defined goals, more appropriate assessments, and more purposeful teaching." More specifically, intervention and re-teaching time must be factored into the schedule. It is unrealistic to think that every student will get every topic perfect and be ready to take the summative assessment on a prescribed schedule. Several models have been developed to refine or address specific issues in formative assessment. For example, Harry Torrance and John Pryor proposed a model that aims to provide a pattern and balance for assessment activities based on 14 categories. The classification allows for detailed analysis as well as guidance for practices being observed. 
While there are comprehensive models of formative assessment, there are also some frameworks that are specifically tailored to the subject being taught. This is demonstrated in a model that balances personal, social, and science development in science instruction, and in a framework that focuses on listening comprehension and speaking skills when assessing and instructing English language learners. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "6\\frac{1}{2}" }, { "math_id": 1, "text": "6x" }, { "math_id": 2, "text": "6" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "61" } ]
https://en.wikipedia.org/wiki?curid=6518342
6519310
Homotopical connectivity
In algebraic topology, homotopical connectivity is a property describing a topological space based on the dimension of its holes. In general, low homotopical connectivity indicates that the space has at least one low-dimensional hole. The concept of "n"-connectedness generalizes the concepts of path-connectedness and simple connectedness. An equivalent definition of homotopical connectivity is based on the homotopy groups of the space. A space is "n"-connected (or "n"-simply connected) if its first "n" homotopy groups are trivial. Homotopical connectivity is defined for maps, too. A map is "n"-connected if it is an isomorphism "up to dimension "n", in homotopy". Definition using holes. All definitions below consider a topological space "X". A hole in "X" is, informally, a thing that prevents some suitably-placed sphere from continuously shrinking to a point. Equivalently, it is a sphere that cannot be continuously extended to a ball. Formally, Homotopical connectivity of spheres. In general, for every integer "d", formula_20 (and formula_21) (Thm. 4.3.2). The proof requires two directions: Definition using groups. A space "X" is called "n"-connected, for "n" ≥ 0, if it is non-empty, and all its homotopy groups of order "d" ≤ "n" are the trivial group: formula_26 where formula_27 denotes the "i"-th homotopy group and 0 denotes the trivial group. The two definitions are equivalent. The requirement for an "n"-connected space consists of requirements for all "d" ≤ "n": The requirements of being non-empty and path-connected can be interpreted as (−1)-connected and 0-connected, respectively, which is useful in defining 0-connected and 1-connected maps, as below. The "0th homotopy set" can be defined as: formula_29 This is only a pointed set, not a group, unless "X" is itself a topological group; the distinguished point is the class of the trivial map, sending "S"0 to the base point of "X". Using this set, a space is 0-connected if and only if the 0th homotopy set is the one-point set. The definition of homotopy groups and this homotopy set require that "X" be pointed (have a chosen base point), which cannot be done if "X" is empty. A topological space "X" is path-connected if and only if its 0th homotopy group vanishes identically, as path-connectedness implies that any two points "x"1 and "x"2 in "X" can be connected with a continuous path which starts in "x"1 and ends in "x"2, which is equivalent to the assertion that every mapping from "S"0 (a discrete set of two points) to "X" can be deformed continuously to a constant map. With this definition, we can define "X" to be "n"-connected if and only if formula_30 "n"-connected map. The corresponding "relative" notion to the "absolute" notion of an "n"-connected "space" is an "n"-connected "map", which is defined as a map whose homotopy fiber "Ff" is an ("n" − 1)-connected space. In terms of homotopy groups, it means that a map formula_31 is "n"-connected if and only if: The last condition is frequently confusing; it is because the vanishing of the ("n" − 1)-st homotopy group of the homotopy fiber "Ff" corresponds to a surjection on the "n"th homotopy groups, in the exact sequence: formula_35 If the group on the right, formula_36, vanishes, then the map on the left is a surjection. Low-dimensional examples: "n"-connectivity for spaces can in turn be defined in terms of "n"-connectivity of maps: a space "X" with basepoint "x"0 is an "n"-connected space if and only if the inclusion of the basepoint formula_37 is an "n"-connected map. 
The single point set is contractible, so all its homotopy groups vanish, and thus "isomorphism below "n" and onto at "n"" corresponds to the first "n" homotopy groups of "X" vanishing. Interpretation. This is instructive for a subset: an "n"-connected inclusion formula_38 is one such that, up to dimension "n" − 1, homotopies in the larger space "X" can be homotoped into homotopies in the subset "A". For example, for an inclusion map formula_38 to be 1-connected, it must be: onto formula_39 one-to-one on formula_40 and onto formula_41 One-to-one on formula_42 means that if there is a path connecting two points formula_43 by passing through "X," there is a path in "A" connecting them, while onto formula_44 means that in fact a path in "X" is homotopic to a path in "A." In other words, a function which is an isomorphism on formula_45 only implies that any elements of formula_46 that are homotopic in "X" are "abstractly" homotopic in "A" – the homotopy in "A" may be unrelated to the homotopy in "X" – while being "n"-connected (so also onto formula_47) means that (up to dimension "n" − 1) homotopies in "X" can be pushed into homotopies in "A". This gives a more concrete explanation for the utility of the definition of "n"-connectedness: for example, a space where the inclusion of the "k"-skeleton is "n"-connected (for "n" &gt; "k") – such as the inclusion of a point in the "n"-sphere – has the property that any cells in dimensions between "k" and "n" do not affect the lower-dimensional homotopy types. Lower bounds. Many topological proofs require lower bounds on the homotopical connectivity. There are several "recipes" for proving such lower bounds. Homology. The Hurewicz theorem relates the homotopical connectivity formula_2 to the homological connectivity, denoted by formula_48. This is useful for computing homotopical connectivity, since the homological groups can be computed more easily. Suppose first that "X" is simply-connected, that is, formula_49. Let formula_50; so formula_51 for all formula_52, and formula_53. The Hurewicz theorem (Thm. 4.32) says that, in this case, formula_54 for all formula_52, and formula_55 is isomorphic to formula_47, so formula_56 too. Therefore: formula_57 If "X" is not simply-connected (formula_58), then formula_59 still holds. When formula_60, this is trivial. When formula_61 (so "X" is path-connected but not simply-connected), one should prove that formula_62. The inequality may be strict: there are spaces in which formula_61 but formula_63. By definition, the "k"-th homology group of a simplicial complex depends only on the simplices of dimension at most "k"+1 (see simplicial homology). Therefore, the above theorem implies that a simplicial complex "K" is "k"-connected if and only if its ("k"+1)-dimensional skeleton (the subset of "K" containing only simplices of dimension at most "k"+1) is "k"-connected (Prop. 4.4.2). Join. Let "K" and "L" be non-empty cell complexes. Their "join" is commonly denoted by formula_64. Then (Prop. 4.4.3): formula_65 The identity is simpler with the eta notation: formula_66 As an example, let formula_67 a set of two disconnected points. There is a 1-dimensional hole between the points, so the eta is 1. The join formula_64 is a square, which is homeomorphic to a circle, so its eta is 2. The join of this square with a third copy of "K" is an octahedron, which is homeomorphic to formula_68, and its eta is 3. In general, the join of "n" copies of formula_69 is homeomorphic to formula_70, and its eta is "n". 
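The sphere examples above are exactly the case of equality in this inequality. Here is a short check, written in explicit LaTeX notation rather than with this article's formula placeholders, using the standard fact that the join of two spheres is again a sphere:

\eta_{\pi}(S^a * S^b) = \eta_{\pi}(S^{a+b+1}) = a + b + 2 = (a + 1) + (b + 1) = \eta_{\pi}(S^a) + \eta_{\pi}(S^b).

Taking a = b = 0 recovers the circle obtained from joining two copies of the two-point space, and joining a third copy gives the octahedron with eta equal to 3, as described above.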
The general proof is based on a similar formula for the homological connectivity. Nerve. Let "K"1, ..., "Kn" be abstract simplicial complexes, and denote their union by "K". Denote the nerve complex of {"K"1, ... , "Kn"} (the abstract complex recording the intersection pattern of the "Ki") by "N". If, for each nonempty formula_71, the intersection formula_72 is either empty or ("k"−|"J"|+1)-connected, then for every "j" ≤ "k", the "j"-th homotopy group of "N" is isomorphic to the "j"-th homotopy group of "K". In particular, "N" is "k"-connected if and only if "K" is "k"-connected (Thm. 6). Homotopy principle. In geometric topology, cases when the inclusion of a geometrically-defined space, such as the space of immersions formula_73 into a more general topological space, such as the space of all continuous maps between two associated spaces formula_74 is "n"-connected are said to satisfy a homotopy principle or "h-principle". There are a number of powerful general techniques for proving h-principles. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f_d: S^d \\to X" }, { "math_id": 1, "text": "g_d: B^d \\to X" }, { "math_id": 2, "text": "\\text{conn}_{\\pi}(X)" }, { "math_id": 3, "text": "\\eta_{\\pi}(X)" }, { "math_id": 4, "text": "\\eta_{\\pi}(X) := \\text{conn}_{\\pi}(X) + 2" }, { "math_id": 5, "text": "\\mathbb{R}^2\\setminus \\{(0,0)\\}" }, { "math_id": 6, "text": "\\text{conn}_{\\pi}(X) = 0" }, { "math_id": 7, "text": "\\eta_{\\pi}(X) = 2" }, { "math_id": 8, "text": "\\text{conn}_{\\pi}(X) = 1" }, { "math_id": 9, "text": "\\eta_{\\pi}(X) = 3" }, { "math_id": 10, "text": "S^0" }, { "math_id": 11, "text": "S^d" }, { "math_id": 12, "text": "B^{d+1}" }, { "math_id": 13, "text": "B^1" }, { "math_id": 14, "text": "\\text{conn}_{\\pi}(X) = -1" }, { "math_id": 15, "text": "\\eta_{\\pi}(X) = 1" }, { "math_id": 16, "text": "S^{-1}" }, { "math_id": 17, "text": "\\text{conn}_{\\pi}(X) = -2" }, { "math_id": 18, "text": "\\eta_{\\pi}(X) = 0" }, { "math_id": 19, "text": "\\eta_{\\pi}(X) = \\text{conn}_{\\pi}(X) = \\infty" }, { "math_id": 20, "text": "\\text{conn}_{\\pi}(S^d)=d-1" }, { "math_id": 21, "text": "\\eta_{\\pi}(S^d)=d+1" }, { "math_id": 22, "text": "\\text{conn}_{\\pi}(S^d) < d" }, { "math_id": 23, "text": "\\text{conn}_{\\pi}(S^d) \\geq d-1" }, { "math_id": 24, "text": "S^k \\to S^d" }, { "math_id": 25, "text": "k < d" }, { "math_id": 26, "text": "\\pi_d(X) \\cong 0, \\quad -1 \\leq d \\leq n," }, { "math_id": 27, "text": "\\pi_i(X)" }, { "math_id": 28, "text": "\\pi_d(X) \\not \\cong 0" }, { "math_id": 29, "text": "\\pi_0(X, *) := \\left[\\left(S^0, *\\right), \\left(X, *\\right)\\right]." }, { "math_id": 30, "text": "\\pi_i(X) \\simeq 0, \\quad 0 \\leq i \\leq n." }, { "math_id": 31, "text": "f\\colon X \\to Y" }, { "math_id": 32, "text": "\\pi_i(f)\\colon \\pi_i(X) \\mathrel{\\overset{\\sim}{\\to}} \\pi_i(Y)" }, { "math_id": 33, "text": "i < n" }, { "math_id": 34, "text": "\\pi_n(f)\\colon \\pi_n(X) \\twoheadrightarrow \\pi_n(Y)" }, { "math_id": 35, "text": "\\pi_n(X) \\mathrel{\\overset{\\pi_n(f)}{\\to}} \\pi_n(Y) \\to \\pi_{n-1}(Ff)." }, { "math_id": 36, "text": "\\pi_{n-1}(Ff)" }, { "math_id": 37, "text": "x_0 \\hookrightarrow X" }, { "math_id": 38, "text": "A \\hookrightarrow X" }, { "math_id": 39, "text": "\\pi_0(X)," }, { "math_id": 40, "text": "\\pi_0(A) \\to \\pi_0(X)," }, { "math_id": 41, "text": "\\pi_1(X)." }, { "math_id": 42, "text": "\\pi_0(A) \\to \\pi_0(X)" }, { "math_id": 43, "text": "a, b \\in A" }, { "math_id": 44, "text": "\\pi_1(X)" }, { "math_id": 45, "text": "\\pi_{n-1}(A) \\to \\pi_{n-1}(X)" }, { "math_id": 46, "text": "\\pi_{n-1}(A)" }, { "math_id": 47, "text": "\\pi_n(X)" }, { "math_id": 48, "text": "\\text{conn}_H(X)" }, { "math_id": 49, "text": "\\text{conn}_{\\pi}(X)\\geq 1" }, { "math_id": 50, "text": "n := \\text{conn}_{\\pi}(X) + 1\\geq 2" }, { "math_id": 51, "text": "\\pi_i(X)= 0" }, { "math_id": 52, "text": "i<n" }, { "math_id": 53, "text": "\\pi_n(X)\\neq 0" }, { "math_id": 54, "text": "\\tilde{H_i}(X)= 0" }, { "math_id": 55, "text": "\\tilde{H_n}(X)" }, { "math_id": 56, "text": "\\tilde{H_n}(X)\\neq 0" }, { "math_id": 57, "text": "\\text{conn}_H(X) = \\text{conn}_{\\pi}(X)." 
}, { "math_id": 58, "text": "\\text{conn}_{\\pi}(X)\\leq 0" }, { "math_id": 59, "text": "\\text{conn}_H(X)\\geq \\text{conn}_{\\pi}(X)" }, { "math_id": 60, "text": "\\text{conn}_{\\pi}(X)\\leq-1 " }, { "math_id": 61, "text": "\\text{conn}_{\\pi}(X)=0" }, { "math_id": 62, "text": "\\tilde{H_0}(X)= 0" }, { "math_id": 63, "text": "\\text{conn}_H(X)=\\infty" }, { "math_id": 64, "text": "K * L " }, { "math_id": 65, "text": "\\text{conn}_{\\pi}(K*L) \\geq \\text{conn}_{\\pi}(K)+\\text{conn}_{\\pi}(L)+2." }, { "math_id": 66, "text": "\\eta_{\\pi}(K*L) \\geq \\eta_{\\pi}(K)+\\eta_{\\pi}(L)." }, { "math_id": 67, "text": "K = L = S^0 = " }, { "math_id": 68, "text": "S^2 " }, { "math_id": 69, "text": "S^0 " }, { "math_id": 70, "text": "S^{n-1} " }, { "math_id": 71, "text": "J\\subset I" }, { "math_id": 72, "text": "\\bigcap_{i\\in J} U_i" }, { "math_id": 73, "text": "M \\to N," }, { "math_id": 74, "text": "X(M) \\to X(N)," } ]
https://en.wikipedia.org/wiki?curid=6519310
65198326
Priority matching
Graph matching with max number of high-priority vertices In graph theory, a priority matching (also called a maximum priority matching) is a matching that maximizes the number of high-priority vertices that participate in the matching. Formally, we are given a graph "G" = ("V", "E"), and a partition of the vertex-set "V" into some "k" subsets, "V"1, …, "Vk", called "priority classes". A priority matching is a matching that, among all possible matchings, saturates the largest number of vertices from "V"1; subject to this, it saturates the largest number of vertices from "V"2; subject to this, it saturates the largest number of vertices from "V"3; and so on. Priority matchings were introduced by Alvin Roth, Tayfun Sonmez and Utku Unver in the context of kidney exchange. In this problem, the vertices are patient-donor pairs, and each edge represents a mutual medical compatibility. For example, an edge between pair 1 and pair 2 indicates that donor 1 is compatible with patient 2 and donor 2 is compatible with patient 1. The priority classes correspond to medical priority among patients. For example, some patients are in a more severe condition, so they must be matched first. Roth, Sonmez and Unver assumed that each priority class contains a single vertex, i.e., the priority classes induce a total order among the pairs. Later, Yasunori Okumura extended the work to priority classes that may contain any number of vertices. He also showed how to find a priority matching efficiently using an algorithm for maximum-cardinality matching, with a run-time complexity of "O"(|"V"||"E"| + |"V"|² log |"V"|). Jonathan S. Turner presented a variation of the augmenting path method (Edmonds' algorithm) that finds a priority matching in time "O"(|"V"||"E"|). Later, he found a faster algorithm for bipartite graphs: the algorithm runs in time formula_0. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
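To make the problem statement concrete, the following Python sketch computes a priority matching by a simple reduction to maximum-weight matching; it is an illustration, not the Okumura or Turner algorithms cited above, and it assumes the NetworkX library. Each vertex in priority class c receives weight (n+1)^(k−c), so that saturating one extra vertex of a higher class always outweighs saturating any number of vertices from lower classes.

import networkx as nx

def priority_matching(G, priority):
    # priority[v] is the class of vertex v, with class 1 the most important.
    # Vertex weights (n+1)**(k - class) make the total weight of a matching
    # lexicographic in the numbers of saturated vertices per class.
    n = G.number_of_nodes()
    k = max(priority.values())
    w = {v: (n + 1) ** (k - priority[v]) for v in G}
    H = G.copy()
    for u, v in H.edges():
        H[u][v]["weight"] = w[u] + w[v]
    return nx.max_weight_matching(H, weight="weight")

# Path a-b-c: both maximal matchings have one edge, but only {a-b} saturates
# the single high-priority vertex a, so it is the priority matching.
G = nx.path_graph(["a", "b", "c"])
print(priority_matching(G, {"a": 1, "b": 2, "c": 2}))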
[ { "math_id": 0, "text": "O(k |E| \\sqrt{|V|} )" } ]
https://en.wikipedia.org/wiki?curid=65198326
65206035
Eberhard's theorem
In mathematics, and more particularly in polyhedral combinatorics, Eberhard's theorem partially characterizes the multisets of polygons that can form the faces of simple convex polyhedra. It states that, for given numbers of triangles, quadrilaterals, pentagons, heptagons, and other polygons other than hexagons, there exists a convex polyhedron with those given numbers of faces of each type (and an unspecified number of hexagonal faces) if and only if those numbers of polygons obey a linear equation derived from Euler's polyhedral formula. The theorem is named after Victor Eberhard, a blind German mathematician, who published it in 1888 in his habilitation thesis and in expanded form in an 1891 book on polyhedra. Definitions and statement. For an arbitrary convex polyhedron, one can define numbers formula_0, formula_1, formula_2, etc., where formula_3 counts the faces of the polyhedron that have exactly formula_4 sides. A three-dimensional convex polyhedron is defined to be simple when every vertex of the polyhedron is incident to exactly three edges. In a simple polyhedron, every vertex is incident to three angles of faces, and every edge is incident to two sides of faces. Since the numbers of angles and sides of the faces are given, one can calculate the three numbers formula_5 (the total number of vertices), formula_6 (the total number of edges), and formula_7 (the total number of faces), by summing over all faces and multiplying by an appropriate factor: formula_8 formula_9 and formula_10 Plugging these values into Euler's polyhedral formula formula_11 and clearing denominators leads to the equation formula_12 which must be satisfied by the face counts of every simple polyhedron. However, this equation is not affected by the value of formula_13 (as its multiplier formula_14 is zero), and, for some choices of the other face counts, changing formula_13 can change whether or not a polyhedron with those face counts exists. That is, obeying this equation on the face counts is a necessary condition for the existence of a polyhedron, but not a sufficient condition, and a complete characterization of which face counts are realizable would need to take into account the value of formula_13. Eberhard's theorem implies that the equation above is the only necessary condition that does not depend on formula_13. It states that, if an assignment of numbers to formula_15 (omitting formula_13) obeys the equation formula_12 then there exists a value of formula_13 and a simple convex polyhedron with exactly formula_3 formula_4-sided faces for all formula_4. Examples. There are three simple Platonic solids, the tetrahedron, cube, and dodecahedron. The tetrahedron has formula_16, the cube has formula_17, and the dodecahedron has formula_18, with all other values of formula_3 being zero. These three assignments of numbers to formula_3 all obey the equation that Eberhard's theorem requires them to obey. The existence of these polyhedra shows that, for these three assignments of numbers to formula_3, there exists a polyhedron with formula_19. The case of the dodecahedron, with formula_18 and all others except formula_13 zero, describes more generally the fullerenes. There is no fullerene with formula_20 but these graphs are realizable for any other value of formula_13; see for instance, the 26-fullerene graph, with formula_21. There is no simple convex polyhedron with three triangle faces, three pentagon faces, and no other faces. 
That is, it is impossible to have a simple convex polyhedron with formula_22, and formula_23 for formula_24. However, Eberhard's theorem states that it should be possible to form a simple polyhedron by adding some number of hexagons, and in this case one hexagon suffices: bisecting a cube along a regular hexagon passing through six of its faces produces two copies of a simple roofless polyhedron with three triangle faces, three pentagon faces, and one hexagon face. That is, setting formula_20 suffices in this case to produce a realizable combination of face counts. Related results. An analogous result to Eberhard's theorem holds for the existence of polyhedra in which all vertices are incident to exactly four edges. In this case the equation derived from Euler's formula is not affected by the number formula_1 of quadrilaterals, and for every assignment to the numbers of faces of other types that obeys this equation it is possible to choose a number of quadrilaterals that allows a 4-regular polyhedron to be realized. A strengthened version of Eberhard's theorem states that, under the same conditions as the original theorem, there exists a number formula_25 such that all choices of formula_13 that are greater than or equal to formula_25 and have the same parity as formula_25 are realizable by simple convex polyhedra. A theorem of David W. Barnette provides a lower bound on the number of hexagons that are needed, whenever the number of faces of order seven or higher is at least three. It states that, in these cases, formula_26 For polyhedra with few pentagons and many high-order faces, this inequality can force the number of hexagons to be arbitrarily large. More strongly, it can be used to find assignments to the numbers of faces for which the required number of hexagons cannot be bounded by any function of the maximum number of sides of a face. Analogues of Eberhard's theorem have also been studied for other systems of faces and face counts than simple convex polyhedra, for instance for toroidal graphs and for tessellations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
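As a quick illustration of the necessary condition, the following Python sketch (the dictionary encoding of face counts is purely for illustration) evaluates the left-hand side of the equation above for the examples discussed in this article; it equals 12 in every case, as the condition requires.

def eberhard_lhs(p):
    # p maps face size i to the count p_i; hexagons (i = 6) contribute nothing.
    return sum((6 - i) * count for i, count in p.items())

examples = {
    "tetrahedron": {3: 4},
    "cube": {4: 6},
    "dodecahedron": {5: 12},
    "half cube (3 triangles, 3 pentagons, 1 hexagon)": {3: 3, 5: 3, 6: 1},
}
for name, faces in examples.items():
    print(name, eberhard_lhs(faces) == 12)   # prints True for each example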
[ { "math_id": 0, "text": "p_3" }, { "math_id": 1, "text": "p_4" }, { "math_id": 2, "text": "p_5" }, { "math_id": 3, "text": "p_i" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "v" }, { "math_id": 6, "text": "e" }, { "math_id": 7, "text": "f" }, { "math_id": 8, "text": "v = \\frac{1}{3}\\sum_i i\\,p_i," }, { "math_id": 9, "text": "e = \\frac{1}{2}\\sum_i i\\,p_i," }, { "math_id": 10, "text": "f=\\sum_i p_i." }, { "math_id": 11, "text": "v-e+f=2" }, { "math_id": 12, "text": "\\sum_i (6-i)p_i = 12," }, { "math_id": 13, "text": "p_6" }, { "math_id": 14, "text": "6-i" }, { "math_id": 15, "text": "p_3,p_4,p_5,p_7,\\dots" }, { "math_id": 16, "text": "p_3=4" }, { "math_id": 17, "text": "p_4=6" }, { "math_id": 18, "text": "p_5=12" }, { "math_id": 19, "text": "p_6=0" }, { "math_id": 20, "text": "p_6=1" }, { "math_id": 21, "text": "p_6=3" }, { "math_id": 22, "text": "p_3=p_5=3" }, { "math_id": 23, "text": "p_i=0" }, { "math_id": 24, "text": "i\\notin\\{3,5\\}" }, { "math_id": 25, "text": "m" }, { "math_id": 26, "text": "p_6\\ge 2+\\frac{p_3}{2}-\\frac{p_5}{2}-\\sum_{i>6} p_i." } ]
https://en.wikipedia.org/wiki?curid=65206035
652078
Second fundamental form
Quadratic form related to curvatures of surfaces In differential geometry, the second fundamental form (or shape tensor) is a quadratic form on the tangent plane of a smooth surface in the three-dimensional Euclidean space, usually denoted by formula_0 (read "two"). Together with the first fundamental form, it serves to define extrinsic invariants of the surface, its principal curvatures. More generally, such a quadratic form is defined for a smooth immersed submanifold in a Riemannian manifold. Surface in R3. Motivation. The second fundamental form of a parametric surface "S" in R3 was introduced and studied by Gauss. First suppose that the surface is the graph of a twice continuously differentiable function, "z" = "f"("x","y"), and that the plane "z" = 0 is tangent to the surface at the origin. Then "f" and its partial derivatives with respect to "x" and "y" vanish at (0,0). Therefore, the Taylor expansion of "f" at (0,0) starts with quadratic terms: formula_1 and the second fundamental form at the origin in the coordinates ("x","y") is the quadratic form formula_2 For a smooth point "P" on "S", one can choose the coordinate system so that the plane "z" = 0 is tangent to "S" at "P", and define the second fundamental form in the same way. Classical notation. The second fundamental form of a general parametric surface is defined as follows. Let r = r("u","v") be a regular parametrization of a surface in R3, where r is a smooth vector-valued function of two variables. It is common to denote the partial derivatives of r with respect to "u" and "v" by r"u" and r"v". Regularity of the parametrization means that r"u" and r"v" are linearly independent for any ("u","v") in the domain of r, and hence span the tangent plane to "S" at each point. Equivalently, the cross product r"u" × r"v" is a nonzero vector normal to the surface. The parametrization thus defines a field of unit normal vectors n: formula_3 The second fundamental form is usually written as formula_4 its matrix in the basis {r"u", r"v"} of the tangent plane is formula_5 The coefficients "L", "M", "N" at a given point in the parametric "uv"-plane are given by the projections of the second partial derivatives of r at that point onto the normal line to "S" and can be computed with the aid of the dot product as follows: formula_6 For a signed distance field of Hessian H, the second fundamental form coefficients can be computed as follows: formula_7 Physicist's notation. The second fundamental form of a general parametric surface "S" is defined as follows. Let r = r("u"1,"u"2) be a regular parametrization of a surface in R3, where r is a smooth vector-valued function of two variables. It is common to denote the partial derivatives of r with respect to "u""α" by r"α", α = 1, 2. Regularity of the parametrization means that r1 and r2 are linearly independent for any ("u"1,"u"2) in the domain of r, and hence span the tangent plane to "S" at each point. Equivalently, the cross product r1 × r2 is a nonzero vector normal to the surface. The parametrization thus defines a field of unit normal vectors n: formula_8 The second fundamental form is usually written as formula_9 The equation above uses the Einstein summation convention. The coefficients "b""αβ" at a given point in the parametric "u"1"u"2-plane are given by the projections of the second partial derivatives of r at that point onto the normal line to "S" and can be computed in terms of the normal vector n as follows: formula_10 Hypersurface in a Riemannian manifold. 
In Euclidean space, the second fundamental form is given by formula_11 where formula_12 is the Gauss map, and formula_13 the differential of formula_12 regarded as a vector-valued differential form, and the brackets denote the metric tensor of Euclidean space. More generally, on a Riemannian manifold, the second fundamental form is an equivalent way to describe the shape operator (denoted by "S") of a hypersurface, formula_14 where ∇"v""w" denotes the covariant derivative of the ambient manifold and "n" a field of normal vectors on the hypersurface. (If the affine connection is torsion-free, then the second fundamental form is symmetric.) The sign of the second fundamental form depends on the choice of direction of "n" (which is called a co-orientation of the hypersurface - for surfaces in Euclidean space, this is equivalently given by a choice of orientation of the surface). Generalization to arbitrary codimension. The second fundamental form can be generalized to arbitrary codimension. In that case it is a quadratic form on the tangent space with values in the normal bundle and it can be defined by formula_15 where formula_16 denotes the orthogonal projection of covariant derivative formula_17 onto the normal bundle. In Euclidean space, the curvature tensor of a submanifold can be described by the following formula: formula_18 This is called the Gauss equation, as it may be viewed as a generalization of Gauss's Theorema Egregium. For general Riemannian manifolds one has to add the curvature of ambient space; if "N" is a manifold embedded in a Riemannian manifold ("M","g") then the curvature tensor "RN" of "N" with induced metric can be expressed using the second fundamental form and "RM", the curvature tensor of "M": formula_19
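Returning to the classical surface case above, the coefficient formulas "L" = r"uu" · n, "M" = r"uv" · n, "N" = r"vv" · n can be checked symbolically. The following Python sketch assumes the SymPy library and uses, purely for illustration, the saddle-shaped graph "z" = ("x"² − "y"²)/2 in the setting of the Motivation section:

import sympy as sp

u, v = sp.symbols('u v', real=True)
r = sp.Matrix([u, v, (u**2 - v**2) / 2])          # graph of f(x, y) = (x^2 - y^2)/2

ru, rv = r.diff(u), r.diff(v)
n = ru.cross(rv) / ru.cross(rv).norm()            # unit normal field

L = sp.simplify(r.diff(u, 2).dot(n))              # L = r_uu . n
M = sp.simplify(r.diff(u).diff(v).dot(n))         # M = r_uv . n
N = sp.simplify(r.diff(v, 2).dot(n))              # N = r_vv . n
print(L, M, N)                                    # at the origin: L = 1, M = 0, N = -1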
[ { "math_id": 0, "text": "\\mathrm{I\\!I}" }, { "math_id": 1, "text": " z=L\\frac{x^2}{2} + Mxy + N\\frac{y^2}{2} + \\text{higher order terms}\\,," }, { "math_id": 2, "text": " L \\, dx^2 + 2M \\, dx \\, dy + N \\, dy^2 \\,. " }, { "math_id": 3, "text": "\\mathbf{n} = \\frac{\\mathbf{r}_u\\times\\mathbf{r}_v}{|\\mathbf{r}_u\\times\\mathbf{r}_v|} \\,." }, { "math_id": 4, "text": "\\mathrm{I\\!I} = L\\, du^2 + 2M\\, du\\, dv + N\\, dv^2 \\,," }, { "math_id": 5, "text": " \\begin{bmatrix}\nL&M\\\\\nM&N\n\\end{bmatrix} \\,. " }, { "math_id": 6, "text": "L = \\mathbf{r}_{uu} \\cdot \\mathbf{n}\\,, \\quad\nM = \\mathbf{r}_{uv} \\cdot \\mathbf{n}\\,, \\quad\nN = \\mathbf{r}_{vv} \\cdot \\mathbf{n}\\,. " }, { "math_id": 7, "text": "L = -\\mathbf{r}_u \\cdot \\mathbf{H} \\cdot \\mathbf{r}_u\\,, \\quad\nM = -\\mathbf{r}_u \\cdot \\mathbf{H} \\cdot \\mathbf{r}_v\\,, \\quad\nN = -\\mathbf{r}_v \\cdot \\mathbf{H} \\cdot \\mathbf{r}_v\\,. " }, { "math_id": 8, "text": "\\mathbf{n} = \\frac{\\mathbf{r}_1\\times\\mathbf{r}_2}{|\\mathbf{r}_1\\times\\mathbf{r}_2|}\\,." }, { "math_id": 9, "text": "\\mathrm{I\\!I} = b_{\\alpha \\beta} \\, du^{\\alpha} \\, du^{\\beta} \\,." }, { "math_id": 10, "text": "b_{\\alpha \\beta} = r_{,\\alpha \\beta}^{\\ \\ \\,\\gamma} n_{\\gamma}\\,. " }, { "math_id": 11, "text": "\\mathrm{I\\!I}(v,w) = -\\langle d\\nu(v),w\\rangle\\nu" }, { "math_id": 12, "text": "\\nu" }, { "math_id": 13, "text": "d\\nu" }, { "math_id": 14, "text": "\\mathrm I\\!\\mathrm I(v,w)=\\langle S(v),w\\rangle n = -\\langle \\nabla_v n,w\\rangle n=\\langle n,\\nabla_v w\\rangle n\\,," }, { "math_id": 15, "text": "\\mathrm{I\\!I}(v,w)=(\\nabla_v w)^\\bot\\,, " }, { "math_id": 16, "text": "(\\nabla_v w)^\\bot" }, { "math_id": 17, "text": "\\nabla_v w" }, { "math_id": 18, "text": "\\langle R(u,v)w,z\\rangle =\\langle \\mathrm I\\!\\mathrm I(u,z),\\mathrm I\\!\\mathrm I(v,w)\\rangle-\\langle \\mathrm I\\!\\mathrm I(u,w),\\mathrm I\\!\\mathrm I(v,z)\\rangle." }, { "math_id": 19, "text": "\\langle R_N(u,v)w,z\\rangle = \\langle R_M(u,v)w,z\\rangle+\\langle \\mathrm I\\!\\mathrm I(u,z),\\mathrm I\\!\\mathrm I(v,w)\\rangle-\\langle \\mathrm I\\!\\mathrm I(u,w),\\mathrm I\\!\\mathrm I(v,z)\\rangle\\,." } ]
https://en.wikipedia.org/wiki?curid=652078
65211233
Highest median voting rules
The highest median voting rules are a class of graded voting rules where the candidate with the highest median rating is elected. The various highest median rules differ in their treatment of ties, i.e., the method of ranking the candidates with the same median rating. Proponents of highest median rules argue that they provide the most faithful reflection of the voters' opinion. They note that as with other cardinal voting rules, highest medians are not subject to Arrow's impossibility theorem, and so can satisfy both independence of irrelevant alternatives and Pareto efficiency. However, critics note that highest median rules violate participation and the Archimedean property; highest median rules can fail to elect a candidate almost-unanimously preferred over all other candidates. Example. As in score voting, voters rate candidates along a common scale, e.g.: An elector can give the same grade to several different candidates. A candidate who is not evaluated automatically receives the grade "Bad". Then, for each candidate, we calculate what percentage of voters assigned them each grade, e.g.: This is presented graphically in the form of a cumulative histogram whose total corresponds to 100% of the votes cast: For each candidate, we then determine the majority (or median) grade (shown here in bold). This rule means that an absolute majority (more than 50%) of voters judge that a candidate deserves at least its majority grade, and that half or more (50% or more) of the electors judge that the candidate deserves at most its majority grade. Thus, the majority grade looks like a median. If only one candidate has the highest median score, they are elected. Otherwise, highest median rules must invoke a tiebreaking procedure to choose between the candidates with the highest median grade. Tiebreaking procedures. When different candidates share the same median rating, a tie-breaking rule is required, analogous to interpolation. For discrete grading scales, the median is insensitive to changes in the data and highly sensitive to the choice of scale (as there are large "gaps" between ratings). Most tie-breaking rules choose between tied candidates by comparing their relative shares of proponents (above-median grades) and opponents (below-median grades). The shares of proponents and opponents are represented by formula_0 and formula_1 respectively, while the share of median grades is written as formula_2. Example. The example in the following table shows a six-way tied rating, where each alternative wins under one of the rules mentioned above. (All scores apart from Bucklin/anti-Bucklin are scaled to fall in formula_4 to allow for interpreting them as interpolations between the next-highest and next-lowest scores.) Advantages and disadvantages. Advantages. Common to cardinal voting methods. Cardinal voting systems allow voters to provide much more information than ranked-choice ballots (so long as there are enough categories); in addition to allowing voters to specify which of two candidates they prefer, cardinal ballots allow them to express how "strongly" they prefer such candidates. Voters can choose between a wide variety of options for rating candidates, allowing for nuanced judgments of quality. Because highest median methods ask voters to evaluate candidates rather than rank them, they escape Arrow's impossibility theorem, and satisfy both unanimity and independence of irrelevant alternatives. However, highest medians fail the slightly stronger near-unanimity criterion (see #Disadvantages). 
Several candidates belonging to a similar political faction can participate in the election without helping or hurting each other, as highest median methods satisfy independence from irrelevant alternatives: Adding candidates does not change the ranking of previous candidates. In other words, if a group ranks A higher than B when choosing between A and B, they should not rank B higher than A when choosing between A, B, and C. Unique to highest medians. The most commonly-cited advantage of highest median rules over their mean-based counterparts is that they minimize the number of voters who have an incentive to be dishonest. Voters with strong preferences in particular will not have much incentive to give candidates very high or very low scores. On the other hand, all voters in a score voting system have an incentive to exaggerate, which in theory would lead to "de facto" approval voting for a large share of the electorate (most voters will only give the highest or lowest score to every candidate). Disadvantages. Participation failure. Highest median rules violate the participation criterion; in other words, a candidate may lose because they have "too many supporters." In the example below, notice how adding the two ballots labeled "+" causes A (the initial winner) to lose to B: It can be proven that score voting (i.e. choosing highest mean instead of highest median) is the unique voting system satisfying the participation criterion, the Archimedean property, and independence of irrelevant alternatives, as a corollary of the VNM utility theorem. Archimedean property. Highest median rules violate the Archimedean property; informally, the Archimedean property says that if "99.999...%" of voters prefer Alice to Bob, Alice should defeat Bob. As shown below, it is possible for Bob to defeat Alice in an election, even if only one voter thinks Bob is better than Alice, and a very large number of voters (up to 100%) give Alice a higher rating: In this election, Bob has the highest median score (51) and defeats Alice, even though every voter except one (perhaps Bob himself) thinks Alice is a better candidate. This is true no matter how many voters there are. As a result, even a single voter's weak preferences can override the strong preferences of the rest of the electorate. The above example restricted to candidates Alice and Bob also serves as an example of highest median rules failing the majority criterion, although highest medians can pass the majority criterion with normalized ballots (i.e. ballots scaled to use the whole 0-100 range). However, normalization cannot recover the Archimedean criterion. Feasibility. A poll of French voters found a majority would be opposed to implementing majority judgment, but a majority would support conducting elections by score voting.
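To make the rule and the tiebreaking step concrete, here is a small Python sketch of one highest median rule; the 0–5 scale and the ballots are made up for illustration, and the tiebreak shown is the difference of the shares of proponents and opponents (formula_3), one of several possibilities discussed above.

from statistics import median_low

def rank_key(grades):
    # Majority grade = lower median; ties between equal medians are broken by
    # the share of proponents minus the share of opponents (p - q).
    m = median_low(grades)
    n = len(grades)
    p = sum(g > m for g in grades) / n
    q = sum(g < m for g in grades) / n
    return (m, p - q)

ballots = {   # grades on a 0-5 scale, higher is better
    "A": [5, 5, 4, 3, 3, 2, 1],
    "B": [5, 3, 3, 3, 2, 2, 1],
    "C": [4, 4, 3, 3, 3, 2, 2],
}
winner = max(ballots, key=lambda c: rank_key(ballots[c]))
print(winner)   # all three medians are 3; A wins the p - q tiebreak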
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "p-q" }, { "math_id": 4, "text": "\\left[-\\frac{1}{2}, \\frac{1}{2} \\right]" } ]
https://en.wikipedia.org/wiki?curid=65211233
652164
Euclidean group
Isometry group of Euclidean space In mathematics, a Euclidean group is the group of (Euclidean) isometries of a Euclidean space formula_0; that is, the transformations of that space that preserve the Euclidean distance between any two points (also called Euclidean transformations). The group depends only on the dimension "n" of the space, and is commonly denoted E("n") or ISO("n"), for "inhomogeneous special orthogonal" group. The Euclidean group E("n") comprises all translations, rotations, and reflections of formula_0; and arbitrary finite combinations of them. The Euclidean group can be seen as the symmetry group of the space itself, and contains the group of symmetries of any figure (subset) of that space. A Euclidean isometry can be "direct" or "indirect", depending on whether it preserves the handedness of figures. The direct Euclidean isometries form a subgroup, the special Euclidean group, often denoted SE("n") and E+("n"), whose elements are called rigid motions or Euclidean motions. They comprise arbitrary combinations of translations and rotations, but not reflections. These groups are among the oldest and most studied, at least in the cases of dimension 2 and 3 – implicitly, long before the concept of group was invented. Overview. Dimensionality. The number of degrees of freedom for E("n") is "n"("n" + 1)/2, which gives 3 in case "n" = 2, and 6 for "n" = 3. Of these, "n" can be attributed to available translational symmetry, and the remaining "n"("n" − 1)/2 to rotational symmetry. Direct and indirect isometries. The direct isometries (i.e., isometries preserving the handedness of chiral subsets) comprise a subgroup of E("n"), called the special Euclidean group and usually denoted by E+("n") or SE("n"). They include the translations and rotations, and combinations thereof; including the identity transformation, but excluding any reflections. The isometries that reverse handedness are called indirect, or opposite. For any fixed indirect isometry "R", such as a reflection about some hyperplane, every other indirect isometry can be obtained by the composition of "R" with some direct isometry. Therefore, the indirect isometries are a coset of E+("n"), which can be denoted by E−("n"). It follows that the subgroup E+("n") is of index 2 in E("n"). Topology of the group. The natural topology of Euclidean space formula_0 implies a topology for the Euclidean group E("n"). Namely, a sequence "f""i" of isometries of formula_0 (formula_1) is defined to converge if and only if, for any point "p" of formula_0, the sequence of points "p""i" converges. From this definition it follows that a function formula_2 is continuous if and only if, for any point "p" of formula_0, the function formula_3 defined by "f""p"("t") = ("f"("t"))("p") is continuous. Such a function is called a "continuous trajectory" in E("n"). It turns out that the special Euclidean group SE("n") = E+("n") is connected in this topology. That is, given any two direct isometries "A" and "B" of formula_0, there is a continuous trajectory "f" in E+("n") such that "f"(0) = "A" and "f"(1) = "B". The same is true for the indirect isometries E−("n"). On the other hand, the group E("n") as a whole is not connected: there is no continuous trajectory that starts in E+("n") and ends in E−("n"). The continuous trajectories in E(3) play an important role in classical mechanics, because they describe the physically possible movements of a rigid body in three-dimensional space over time. 
One takes "f"(0) to be the identity transformation "I" of formula_4, which describes the initial position of the body. The position and orientation of the body at any later time "t" will be described by the transformation "f"(t). Since "f"(0) = "I" is in E+(3), the same must be true of "f"("t") for any later time. For that reason, the direct Euclidean isometries are also called "rigid motions". Lie structure. The Euclidean groups are not only topological groups, they are Lie groups, so that calculus notions can be adapted immediately to this setting. Relation to the affine group. The Euclidean group E("n") is a subgroup of the affine group for "n" dimensions. Both groups have a structure as a semidirect product of the group of Euclidean translations with a group of origin-preserving transformations, and this product structure is respected by the inclusion of the Euclidean group in the affine group. This gives, "a fortiori", two ways of writing elements in an explicit notation. These are: Details for the first representation are given in the next section. In the terms of Felix Klein's Erlangen programme, we read off from this that Euclidean geometry, the geometry of the Euclidean group of symmetries, is, therefore, a specialisation of affine geometry. All affine theorems apply. The origin of Euclidean geometry allows definition of the notion of distance, from which angle can then be deduced. Detailed discussion. Subgroup structure, matrix and vector representation. The Euclidean group is a subgroup of the group of affine transformations. It has as subgroups the translational group T("n"), and the orthogonal group O("n"). Any element of E("n") is a translation followed by an orthogonal transformation (the linear part of the isometry), in a unique way: formula_5 where "A" is an orthogonal matrix or the same orthogonal transformation followed by a translation: formula_6 with "c" = "Ab" T("n") is a normal subgroup of E("n"): for every translation "t" and every isometry "u", the composition formula_7 is again a translation. Together, these facts imply that E("n") is the semidirect product of O("n") extended by T("n"), which is written as formula_8. In other words, O("n") is (in the natural way) also the quotient group of E("n") by T("n"): formula_9 Now SO("n"), the special orthogonal group, is a subgroup of O("n") of index two. Therefore, E("n") has a subgroup E+("n"), also of index two, consisting of "direct" isometries. In these cases the determinant of "A" is 1. They are represented as a translation followed by a rotation, rather than a translation followed by some kind of reflection (in dimensions 2 and 3, these are the familiar reflections in a mirror line or plane, which may be taken to include the origin, or in 3D, a rotoreflection). This relation is commonly written as: formula_10 or, equivalently: formula_11 Subgroups. Types of subgroups of E("n"): *all direct isometries that keep the origin fixed, or more generally, some point (in 3D called the rotation group) *all isometries that keep the origin fixed, or more generally, some point (the orthogonal group) *all direct isometries E+("n") *the whole Euclidean group E("n") *one of these groups in an "m"-dimensional subspace combined with a discrete group of isometries in the orthogonal ("n"−"m")-dimensional space *one of these groups in an "m"-dimensional subspace combined with another one in the orthogonal ("n"−"m")-dimensional space Examples in 3D of combinations: Overview of isometries in up to three dimensions. 
E(1), E(2), and E(3) can be categorized by the type of isometry and the number of degrees of freedom it carries. Chasles' theorem asserts that any element of E+(3) is a screw displacement. See also 3D isometries that leave the origin fixed, space group, involution. Commuting isometries. For some isometry pairs composition does not depend on order; this holds, for example, for two translations, for two rotations or screws about the same axis, for a rotation and a translation along its axis, and for a reflection and a translation parallel to its mirror plane. Conjugacy classes. The translations by a given distance in any direction form a conjugacy class; the translation group is the union of those for all distances. In 1D, all reflections are in the same class. In 2D, rotations by the same angle in either direction are in the same class. Glide reflections with translation by the same distance are in the same class. In 3D, rotations by the same angle about any axis are in the same class, as are screw motions with the same rotation angle and translation distance; all reflections in planes form a single class, and glide reflections with the same translation distance form a single class.
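As a concrete illustration of the matrix-and-vector description in the preceding sections, the following is a minimal numerical sketch (Python; all matrices and vectors are arbitrary illustrative choices, and the helper names are not standard notation). An isometry is stored as a pair (A, c) acting by x ↦ Ax + c. The sketch checks two of the facts stated above: conjugating a translation by any isometry yields another translation, which is why T(n) is a normal subgroup, and the determinant of the linear part multiplies under composition, so the direct isometries form a subgroup of index 2.

```python
import numpy as np

def compose(u, v):
    """Composition u after v of isometries stored as pairs (A, c) acting by x -> A x + c."""
    (A1, c1), (A2, c2) = u, v
    return (A1 @ A2, A1 @ c2 + c1)

def inverse(u):
    """Inverse isometry; A is orthogonal, so its inverse is its transpose."""
    A, c = u
    return (A.T, -A.T @ c)

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation: direct, det = +1
S = np.diag([1.0, 1.0, -1.0])                          # reflection: indirect, det = -1

u = (R @ S, np.array([0.3, -1.2, 2.0]))                # an indirect isometry (illustrative)
t = (np.eye(3), np.array([1.0, 2.0, 3.0]))             # the translation by v = (1, 2, 3)

# T(n) is normal: u t u^-1 is again a translation, namely by (R S) v
conj = compose(compose(u, t), inverse(u))
print(np.allclose(conj[0], np.eye(3)), np.allclose(conj[1], R @ S @ t[1]))   # True True

# determinants multiply: indirect composed with indirect is direct
print(np.linalg.det(compose(u, u)[0]))                 # approximately +1
```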
[ { "math_id": 0, "text": "\\mathbb{E}^n" }, { "math_id": 1, "text": "i \\in \\mathbb{N}" }, { "math_id": 2, "text": "f:[0,1] \\to E(n)" }, { "math_id": 3, "text": "f_p: [0,1] \\to \\mathbb{E}^n" }, { "math_id": 4, "text": "\\mathbb{E}^3" }, { "math_id": 5, "text": "x \\mapsto A (x + b)" }, { "math_id": 6, "text": "x \\mapsto A x + c," }, { "math_id": 7, "text": "u^{-1}tu" }, { "math_id": 8, "text": "\\text{E}(n) = \\text{T}(n) \\rtimes \\text{O}(n)" }, { "math_id": 9, "text": "\\text{O}(n) \\cong \\text{E}(n) / \\text{T}(n)" }, { "math_id": 10, "text": "\\text{SO}(n) \\cong \\text{E}^+(n) / \\text{T}(n)" }, { "math_id": 11, "text": "\\text{E}^+(n) = \\text{SO}(n) \\ltimes \\text{T}(n)." } ]
https://en.wikipedia.org/wiki?curid=652164
65223554
Ridge function
In mathematics, a ridge function is any function formula_0 that can be written as the composition of a univariate function with an affine transformation, that is: formula_1 for some formula_2 and formula_3. Coinage of the term 'ridge function' is often attributed to B.F. Logan and L.A. Shepp. Relevance. A ridge function is not susceptible to the curse of dimensionality, making it an instrumental tool in various estimation problems. This is a direct result of the fact that ridge functions are constant in formula_4 directions: Let formula_5 be formula_4 independent vectors that are orthogonal to formula_6, such that these vectors span formula_4 dimensions. Then formula_7 for all formula_8. In other words, any shift of formula_9 in a direction perpendicular to formula_10 does not change the value of formula_11. Ridge functions play an essential role in amongst others projection pursuit, generalized linear models, and as activation functions in neural networks. For a survey on ridge functions, see. For books on ridge functions, see. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
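The invariance argument above is easy to verify numerically. The following is a minimal sketch in Python; the dimension, the direction vector a, the profile g = tanh, and the sample points are all arbitrary illustrative choices. It checks that adding to x any vector lying in the (d − 1)-dimensional subspace orthogonal to a does not change the value of the ridge function.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
a = rng.normal(size=d)              # direction vector a (illustrative)

def g(t):
    """An arbitrary univariate profile."""
    return np.tanh(t)

def f(x):
    """The ridge function f(x) = g(x . a)."""
    return g(x @ a)

x = rng.normal(size=d)
w = rng.normal(size=d)
w -= (w @ a) / (a @ a) * a          # project out the component of w along a
print(abs(w @ a))                   # ~1e-16: w is orthogonal to a
print(f(x), f(x + 3.7 * w))         # equal up to floating-point error: f is constant along w
```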
[ { "math_id": 0, "text": "f:\\R^d\\rightarrow\\R" }, { "math_id": 1, "text": "f(\\boldsymbol{x}) = g(\\boldsymbol{x}\\cdot \\boldsymbol{a})" }, { "math_id": 2, "text": "g:\\R\\rightarrow\\R" }, { "math_id": 3, "text": "\\boldsymbol{a}\\in\\R^d" }, { "math_id": 4, "text": "d-1" }, { "math_id": 5, "text": "a_1,\\dots,a_{d-1}" }, { "math_id": 6, "text": "a" }, { "math_id": 7, "text": "f\\left(\\boldsymbol{x} + \\sum_{k=1}^{d-1}c_k\\boldsymbol{a}_k\\right)=g\\left(\\boldsymbol{x}\\cdot\\boldsymbol{a} + \\sum_{k=1}^{d-1} c_k\\boldsymbol{a}_k\\cdot\\boldsymbol{a}\\right)=g\\left(\\boldsymbol{x}\\cdot\\boldsymbol{a} + \\sum_{k=1}^{d-1} c_k0\\right) = g(\\boldsymbol{x} \\cdot \\boldsymbol{a})=f(\\boldsymbol{x})" }, { "math_id": 8, "text": "c_i\\in\\R,1\\le i<d" }, { "math_id": 9, "text": "\\boldsymbol{x}" }, { "math_id": 10, "text": "\\boldsymbol{a}" }, { "math_id": 11, "text": "f" } ]
https://en.wikipedia.org/wiki?curid=65223554
65225861
Dicke model
Model of quantum optics The Dicke model is a fundamental model of quantum optics, which describes the interaction between light and matter. In the Dicke model, the "light" component is described as a single quantum mode, while the "matter" is described as a set of two-level systems. When the coupling between the light and matter crosses a critical value, the Dicke model shows a mean-field phase transition to a superradiant phase. This transition belongs to the Ising universality class and was realized in cavity quantum electrodynamics experiments. Although the superradiant transition bears some analogy with the lasing instability, these two transitions belong to different universality classes. Description. The Dicke model is a quantum mechanical model that describes the coupling between a single-mode cavity and formula_0 two-level systems, or equivalently formula_0 spin-1/2 degrees of freedom. The model was first introduced in 1973 by K. Hepp and E. H. Lieb. Their study was inspired by the pioneering work of R. H. Dicke on the superradiant emission of light in free space and named after him. Like any other model in quantum mechanics, the Dicke model includes a set of quantum states (the Hilbert space) and a total-energy operator (the Hamiltonian). The Hilbert space of the Dicke model is given by (the tensor product of) the states of the cavity and of the two-level systems. The Hilbert space of the cavity can be spanned by Fock states with formula_1 photons, denoted by formula_2. These states can be constructed from the vacuum state formula_3 using the canonical ladder operators, formula_4 and formula_5, which add and subtract a photon from the cavity, respectively. The states of each two-level system are referred to as "up" and "down" and are defined through the spin operators formula_6, satisfying the spin algebra formula_7. Here formula_8 is the reduced Planck constant and formula_9 indicates a specific two-level system. The Hamiltonian of the Dicke model is Here, the first term describes the energy of the cavity and equals to the product of the energy of a single cavity photon formula_10 (where formula_11 is the cavity frequency), times the number of photons in the cavity, formula_12. The second term describes the energy of the two-level systems, where formula_13 is the energy difference between the states of each two-level system. The last term describes the coupling between the two-level systems and the cavity and is assumed to be proportional to a constant, formula_14, times the inverse of the square root of the number of two-level systems. This assumption allows one to obtain a phase transition in the limit of formula_15 (see below). The coupling can be written as the sum of two terms: a "co-rotating" term that conserves the number of excitations and is proportional to formula_16 and a "counter-rotating" term proportional to formula_17, where formula_18 are the spin ladder operators. The Hamiltonian in Eq. 1 assumes that all the spins are identical (i.e. have the same energy difference and are equally coupled to the cavity). Under this assumption, one can define the macroscopic spin operators formula_19, with formula_20, which satisfy the spin algebra, formula_21. Using these operators, one can rewrite the Hamiltonian in Eq. 1 as This notation simplifies the numerical study of the model because it involves a single spin-S with formula_22, whose Hilbert space has size formula_23, rather than formula_0 spin-1/2, whose Hilbert space has size formula_24. 
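Because the collective-spin form is precisely what makes the model numerically tractable, the following is a minimal exact-diagonalization sketch (Python/NumPy). It assumes the standard single-mode writing of Eq. 2, H = ω_c a†a + ω_z S^z + (2λ/√N)(a + a†)S^x with ħ = 1 (conventions for the coupling prefactor vary between references), works in the maximally symmetric sector S = N/2, and truncates the photon Fock space at n_max levels; the function name and all numerical parameters are illustrative. Since the exact finite-N ground state preserves the parity symmetry discussed below, ⟨a⟩ vanishes identically, so the photon number per atom ⟨a†a⟩/N is used here as a finite-size proxy for the order parameter.

```python
import numpy as np

def dicke_ground_state(N, lam, omega_c=1.0, omega_z=1.0, n_max=30):
    """Ground-state photon number per atom for
    H = omega_c a^+a + omega_z S_z + (2 lam / sqrt(N)) (a + a^+) S_x   (hbar = 1),
    in the maximally symmetric spin sector S = N/2, Fock space truncated at n_max."""
    S = N / 2
    # truncated photon operators
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
    n_ph = a.T @ a
    # collective spin operators in the basis |S, m>, m = S, S-1, ..., -S
    m = np.arange(S, -S - 1, -1)
    Sz = np.diag(m)
    Sp = np.zeros((m.size, m.size))
    for k in range(1, m.size):
        Sp[k - 1, k] = np.sqrt(S * (S + 1) - m[k] * (m[k] + 1))   # S+ |S,m> -> |S,m+1>
    Sx = (Sp + Sp.T) / 2
    H = (omega_c * np.kron(n_ph, np.eye(m.size))
         + omega_z * np.kron(np.eye(n_max), Sz)
         + (2 * lam / np.sqrt(N)) * np.kron(a + a.T, Sx))
    vals, vecs = np.linalg.eigh(H)
    gs = vecs[:, 0]
    return gs @ np.kron(n_ph, np.eye(m.size)) @ gs / N

for lam in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(lam, dicke_ground_state(N=8, lam=lam))
# With omega_c = omega_z = 1 the zero-temperature critical coupling sqrt(omega_c*omega_z)/2
# equals 0.5: the photon number per atom stays near zero below it and grows above it,
# with the step smoothed by the finite N.
```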
The Dicke model has one global symmetry, Because formula_25 squares to unity (i.e. if applied twice, it brings each state back to its original state), it has two eigenvalues, formula_26 and formula_27. This symmetry is associated with a conserved quantity: the parity of the total number of excitations, formula_28, where This parity conservation can be seen from the fact that each term in the Hamiltonian preserves the excitation number, except for the counter-rotating terms, which can only change the excitation number by formula_29. A state of the Dicke model is said to be "normal" when this symmetry is preserved, and "superradiant" when this symmetry is spontaneously broken. Related models. The Dicke model is closely related to other models of quantum optics. Specifically, the Dicke model with a single two-level system, formula_30, is called the Rabi model. In the absence of counter-rotating terms, the model is called Jaynes-Cummings for formula_30 and Tavis-Cummings for formula_31. These two models conserve the number of excitations formula_32 and are characterized by a formula_33 symmetry. The spontaneous breaking of this symmetry gives rise to a lasing state (see below). The relation between the Dicke model and other models is summarized in the table below Superradiant phase transition. Early studies of the Dicke model considered its equilibrium properties. These works considered the limit of formula_15 (also known as the "thermodynamic limit") and assumed a thermal partition function, formula_34, where formula_35 is the Boltzmann constant and formula_36 is the temperature. It was found that, when the coupling formula_14 crosses a critical value formula_37, the Dicke model undergoes a second-order phase transition, known as the superradiant phase transition. In their original derivation, Hepp and Lieb neglected the effects of counter-rotating terms and, thus, actually considered the Tavis-Cummings model (see above). Further studies of the full Dicke model found that the phase transition still occurs in the presence of counter-rotating terms, albeit at a different critical coupling. The superradiant transition spontaneously breaks the parity symmetry, formula_25, defined in Eq. 3. The order parameter of this phase transition is formula_38. In the thermodynamic limit, this quantity tends to zero if the system is normal, or to one of two possible values, if the system is superradiant. These two values correspond to physical states of the cavity field with opposite phases (see Eq. 3 and, correspondingly, to states of the spin with opposite formula_39 components). Close to the superradiant phase transition, the order parameter depends on formula_14 as formula_40. This dependence corresponds to the mean-field critical exponent formula_41. Mean-field description of the transition. The simplest way to describe the superradiant transition is to use a mean-field approximation, in which the cavity field operators are substituted by their expectation values. Under this approximation, which is exact in the thermodynamic limit, the Dicke Hamiltonian of Eq. 1 becomes a sum of independent terms, each acting on a different two-level system, which can be diagonalized independently. At thermal equilibrium (see above), one finds that the free energy per two-level system is The critical coupling of the transition can be found by the condition formula_42, leading to For formula_43, formula_44 has one minimum, while for formula_45, it has two minima. 
In the limit of formula_46 one obtains an expression for the critical coupling of the zero-temperature superradiant phase transition, formula_47. Semiclassical limit and chaos. Semiclassical limit. A phase space for the Dicke model in the symmetric atomic subspace with formula_48 may be constructed by considering the tensor product of the Glauber coherent states where formula_49 is the displacement operator and formula_50 is the photon vacuum Fock state, and the SU(2) coherent states where formula_51 is the rotation operator in the Bloch sphere, formula_52 and formula_53 is the state with all atoms in their ground state. This yields a four-dimensional phase space with canonical coordinates formula_54 and formula_55. A classical Hamiltonian is obtained by taking the expectation value of the Dicke Hamiltonian given by Eq. 2 under these states, In the limit of formula_57, the quantum dynamics given by the quantum Hamiltonian of Eq. 2 and the classical dynamics given by Eq. 9 coincide. For a finite system size, there is a classical and quantum correspondence that breaks down at the Ehrenfest time, which is inversely proportional to formula_0. Quantum chaos. The Dicke model provides an ideal system to study the quantum-classical correspondence and quantum chaos. The classical system given by Eq. 9 is chaotic or regular depending on the values of the parameters formula_56, formula_58, and formula_59 and the energy formula_60. Note that there may be chaos in both the normal and superradadiant regimes. It was recently found that the exponential growth rate of the out-of-time-order correlator coincides with the classical Lyapunov exponents in the chaotic regime and at unstable points of the regular regime. In addition, the evolution of the survival probability (i.e. the fidelity of a state with itself at a later time) of initial coherent states highly delocalized in the energy eigenbasis is well-described by random matrix theory, while initial coherent states strongly affected by the presence of quantum scars display behaviors that break ergodicity. Open Dicke model. The Dicke model of Eq. 1 assumes that the cavity mode and the two-level systems are perfectly isolated from the external environment. In actual experiments, this assumption is not valid: the coupling to free modes of light can cause the loss of cavity photons and the decay of the two-level systems (i.e. dissipation channels). It is worth mentioning, that these experiments use driving fields (e.g. laser fields) to implement the coupling between the cavity mode and the two-level systems. The various dissipation channels can be described by adding a coupling to additional environmental degrees of freedom. By averaging over the dynamics of these external degrees of freedom one obtains equations of motion describing an open quantum system. According to the common Born-Markov approximation, one can describe the dynamics of the system with the quantum master equation in Lindblad form Here, formula_61 is the density matrix of the system, formula_62 is the Lindblad operator of the decay channel formula_63, and formula_64 the associated decay rate. When the Hamiltonian formula_65 is given by Eq. 1, the model is referred to as the open Dicke model. Some common decay processes that are relevant to experiments are given in the following table: In the theoretical description of the model, one often considers the steady state where formula_66. 
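As a minimal sketch of the open model just described, the snippet below builds the Lindblad master equation with a single decay channel, cavity photon loss with collapse operator √κ·a, and computes its steady state numerically. It assumes the same standard single-mode form of the Hamiltonian used above and relies on the QuTiP library; the decay rate, system size, and Fock-space truncation are arbitrary illustrative values.

```python
import numpy as np
from qutip import destroy, jmat, qeye, tensor, steadystate, expect

def open_dicke_photon_number(N, lam, omega_c=1.0, omega_z=1.0, kappa=0.5, n_max=20):
    """Steady-state photon number per atom for the open Dicke model with cavity loss,
    in the maximally symmetric spin sector S = N/2."""
    S = N / 2
    a = tensor(destroy(n_max), qeye(int(2 * S + 1)))
    Sz = tensor(qeye(n_max), jmat(S, 'z'))
    Sx = tensor(qeye(n_max), jmat(S, 'x'))
    H = (omega_c * a.dag() * a + omega_z * Sz
         + (2 * lam / np.sqrt(N)) * (a + a.dag()) * Sx)
    rho_ss = steadystate(H, [np.sqrt(kappa) * a])   # Lindblad steady state, one collapse operator
    return expect(a.dag() * a, rho_ss) / N

for lam in (0.2, 0.4, 0.6, 0.8):
    print(lam, open_dicke_photon_number(N=6, lam=lam))
# In mean-field theory the lossy cavity shifts the threshold to
# lambda_c = 0.5 * sqrt(omega_z * (omega_c**2 + kappa**2) / omega_c),
# about 0.56 for the values above; the photon number grows sharply beyond it.
```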
In the limit of formula_15, the steady state of the open Dicke model shows a continuous phase transition, often referred to as the "nonequilibrium superradiant transition". The critical exponents of this transition are the same as the equilibrium superradiant transition at finite temperature (and differ from the superradiant transition at zero temperature). Superradiant transition and Dicke superradiance. The superradiant transition of the open Dicke model is related to, but differs from, Dicke superradiance. Dicke superradiance is a collective phenomenon in which many two-level systems emit photons coherently in free space. It occurs if the two-level systems are initially prepared in their excited state and placed at a distance much smaller than the relevant photon's wavelength. Under these conditions, the spontaneous decay of the two-level systems becomes much faster: the two-level systems emit a short pulse of light with large amplitude. Under ideal conditions, the pulse duration is inversely proportional to the number of two-level systems, formula_0, and the maximal intensity of the emitted light scales as formula_67. This is in contrast to the spontaneous emission of formula_0 independent two-level systems, whose decay time does not depend on formula_0 and where the pulse intensity scales as formula_0. As explained above, the open Dicke model rather models two-level systems coupled to a quantized cavity and driven by an external pump. In the normal phase, the intensity of the cavity field does not scale with the number of atoms formula_0, while in the superradiant phase, the intensity of the cavity field is proportional to formula_68. The scaling laws of Dicke superradiance and of the superradiant transition of the Dicke model are summarized in the following table: Experimental realizations. The simplest realization of the Dicke model involves the dipole coupling between two-level atoms in a cavity. In this system, the observation of the superradiant transition is hindered by two possible problems: (1) The bare coupling between atoms and cavities is usually weak and insufficient to reach the critical value formula_37, see Eq. 6. (2) An accurate modelling of the physical system requires to consider formula_69 terms that according to a "no-go theorem", may prevent the transition. Both limitations can be circumvented by applying external pumps on the atoms and creating an effective Dicke model in an appropriately rotating frame. In 2010, the superradiant transition of the open Dicke model was observed experimentally using neutral Rubidium atoms trapped in an optical cavity. In these experiments, the coupling between the atoms and the cavity is not achieved by a direct dipole coupling between the two systems. Instead, the atoms are illuminated by an external pump, which drives a stimulated Raman transition. This two-photon process causes the two-level system to change its state from "down" to "up", or "vice versa", and emit or absorb a photon into the cavity. Experiments showed that the number of photons in the cavity shows a steep increase when the pump intensity crosses a critical threshold. This threshold was associated with the critical coupling of the Dicke model. In the experiments, two different sets of physical states were used as the "down" and "up" states. 
In some experiments, the two states correspond to atoms with different velocities, or momenta: the "down" state had zero momentum and belonged to a Bose-Einstein condensate, while the "up" state had a momentum equal to sum of the momentum of a cavity photon and the momentum of a pump photon. In contrast, later experiments used two different hyperfine levels of the Rubidium atoms in a magnetic field. The latter realization allowed the researchers to study a generalized Dicke model (see below). In both experiments, the system is time-dependent and the (generalized) Dicke Hamiltonian is realized in a frame that rotates at the pump's frequency. Generalized model and lasing. The Dicke model can be generalized by considering the effects of additional terms in the Hamiltonian of Eq. 1. For example, a recent experiment realized an open Dicke model with independently tunable rotating and counter-rotating terms. In addition to the superradiant transition, this "generalized" Dicke model can undergo a lasing instability, which was termed "inverted lasing" or "counter-lasing". This transition is induced by the counter-rotating terms of the Dicke model and is most prominent when these terms are larger than the rotating ones. The nonequilibrium superradiant transition and the lasing instability have several similarities and differences. Both transitions are of a mean-field type and can be understood in terms of the dynamics of a single degree of freedom. The superradiant transition corresponds to a supercritical pitchfork bifurcation, while the lasing instability corresponds to a Hopf instability. The key difference between these two types of bifurcations is that the former gives rise to two stable solutions, while the latter leads to periodic solutions (limit cycles). Accordingly, in the superradiant phase the cavity field is static (in the frame of the pump field), while it oscillates periodically in the lasing phase. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "|n\\rangle" }, { "math_id": 3, "text": "|n=0\\rangle" }, { "math_id": 4, "text": "a^\\dagger" }, { "math_id": 5, "text": "a" }, { "math_id": 6, "text": "\\vec\\sigma_j = (\\sigma^x_j,~\\sigma^y_j,~\\sigma^z_j)" }, { "math_id": 7, "text": "[\\sigma^x_j,~\\sigma^y_k]=i\\hbar\\sigma^z_j\\delta_{j,k}" }, { "math_id": 8, "text": "\\hbar" }, { "math_id": 9, "text": "j = (0,1,2,...,N)" }, { "math_id": 10, "text": "\\hbar\\omega_c" }, { "math_id": 11, "text": "\\omega_c" }, { "math_id": 12, "text": "n_c=a^\\dagger a" }, { "math_id": 13, "text": "\\hbar\\omega_z" }, { "math_id": 14, "text": "\\lambda" }, { "math_id": 15, "text": "N\\to\\infty" }, { "math_id": 16, "text": "a \\sigma^+ + a^\\dagger \\sigma^-" }, { "math_id": 17, "text": "a \\sigma^- + a^\\dagger \\sigma^+" }, { "math_id": 18, "text": "\\sigma^\\pm = \\sigma^x \\pm i \\sigma^y" }, { "math_id": 19, "text": "S^\\alpha=\\sum_{j=1}^N\\sigma_j^\\alpha" }, { "math_id": 20, "text": "\\alpha=x,y,z" }, { "math_id": 21, "text": "[S^x,S^y]=i\\hbar S^z" }, { "math_id": 22, "text": "S\\leq N/2" }, { "math_id": 23, "text": "2S+1" }, { "math_id": 24, "text": "2^N" }, { "math_id": 25, "text": "\\mathcal{P}" }, { "math_id": 26, "text": "1" }, { "math_id": 27, "text": "-1" }, { "math_id": 28, "text": "P=(-1)^{N_{ex}}" }, { "math_id": 29, "text": "\\pm 2" }, { "math_id": 30, "text": "N=1" }, { "math_id": 31, "text": "N>1" }, { "math_id": 32, "text": "N_{ex}" }, { "math_id": 33, "text": "U(1)" }, { "math_id": 34, "text": " Z=\\exp(-H/k_B T)" }, { "math_id": 35, "text": "k_B" }, { "math_id": 36, "text": "T" }, { "math_id": 37, "text": "\\lambda_c" }, { "math_id": 38, "text": "\\langle{a}\\rangle/\\sqrt{N}" }, { "math_id": 39, "text": "x" }, { "math_id": 40, "text": "\\langle{a}\\rangle/\\sqrt{N}\\sim(\\lambda_c-\\lambda)^{-1/2}" }, { "math_id": 41, "text": "\\beta = 1/2" }, { "math_id": 42, "text": "dF/d\\alpha(\\alpha=0)=0" }, { "math_id": 43, "text": "\\lambda<\\lambda_c" }, { "math_id": 44, "text": "F" }, { "math_id": 45, "text": "\\lambda>\\lambda_c" }, { "math_id": 46, "text": "T\\to0" }, { "math_id": 47, "text": "\\lambda_c=\\sqrt{\\omega_c\\omega_z}/2" }, { "math_id": 48, "text": "S=N/2" }, { "math_id": 49, "text": "D(q,p)=e^{-\\frac{S}{4}\\left (q^2 + p^2\\right)}\\exp\\left(\\sqrt{\\frac{S}{2}\n\n}\\left(q+ip\\right) a^\\dagger\\right)" }, { "math_id": 50, "text": "\\left\\vert 0\\right \\rangle" }, { "math_id": 51, "text": "R(Q,P)=\\left(1-\\frac{Q^2+P^2}{4}\\right)^S \\exp\\left(\\sqrt{\\frac{1}{4-Q^2-P^2}}\\left(Q+iP\\right) \\frac{S^+}{\\hbar}\\right)" }, { "math_id": 52, "text": "Q^2+P^2 \\leq 4," }, { "math_id": 53, "text": "\\left \\vert {S,-S}\\right \\rangle" }, { "math_id": 54, "text": "(q,p)" }, { "math_id": 55, "text": "(Q,P)" }, { "math_id": 56, "text": "\\lambda" }, { "math_id": 57, "text": "N\\to \\infty" }, { "math_id": 58, "text": "\\omega_c" }, { "math_id": 59, "text": "\\omega_z" }, { "math_id": 60, "text": "E" }, { "math_id": 61, "text": "\\rho" }, { "math_id": 62, "text": "L_\\alpha" }, { "math_id": 63, "text": "\\alpha" }, { "math_id": 64, "text": "\\gamma_\\alpha" }, { "math_id": 65, "text": "H" }, { "math_id": 66, "text": "d\\rho/dt = 0" }, { "math_id": 67, "text": "N^2" }, { "math_id": 68, "text": "\\langle a^\\dagger a \\rangle \\sim N" }, { "math_id": 69, "text": "A^2" } ]
https://en.wikipedia.org/wiki?curid=65225861
6523885
Bessel polynomials
Mathematics concept In mathematics, the Bessel polynomials are an orthogonal sequence of polynomials. There are a number of different but closely related definitions. The definition favored by mathematicians is given by the series formula_0 Another definition, favored by electrical engineers, is sometimes known as the reverse Bessel polynomials formula_1 The coefficients of the second definition are the same as the first but in reverse order. For example, the third-degree Bessel polynomial is formula_2 while the third-degree reverse Bessel polynomial is formula_3 The reverse Bessel polynomial is used in the design of Bessel electronic filters. Properties. Definition in terms of Bessel functions. The Bessel polynomial may also be defined using Bessel functions from which the polynomial draws its name. formula_4 formula_5 formula_6 where "K""n"("x") is a modified Bessel function of the second kind, "y""n"("x") is the ordinary polynomial, and "θ""n"("x") is the reverse polynomial. For example: formula_7 Definition as a hypergeometric function. The Bessel polynomial may also be defined as a confluent hypergeometric function formula_8 A similar expression holds true for the generalized Bessel polynomials (see below): formula_9 The reverse Bessel polynomial may be defined as a generalized Laguerre polynomial: formula_10 from which it follows that it may also be defined as a hypergeometric function: formula_11 where (−2"n")"n" is the Pochhammer symbol (rising factorial). Generating function. The Bessel polynomials, with index shifted, have the generating function formula_12 Differentiating with respect to formula_13, cancelling formula_14, yields the generating function for the polynomials formula_15 formula_16 A similar generating function exists for the formula_17 polynomials as well: formula_18 Upon setting formula_19, one has the following representation for the exponential function: formula_20 Recursion. The Bessel polynomial may also be defined by a recursion formula: formula_21 formula_22 formula_23 and formula_24 formula_25 formula_26 Differential equation. The Bessel polynomial obeys the following differential equation: formula_27 and formula_28 Orthogonality. The Bessel polynomials are orthogonal with respect to the weight formula_29 integrated over the unit circle of the complex plane. In other words, if formula_30, formula_31 Generalization. Explicit form. A generalization of the Bessel polynomials has been suggested in the literature, as follows: formula_32 the corresponding reverse polynomials are formula_33 The explicit coefficients of the formula_34 polynomials are: formula_35 Consequently, the formula_36 polynomials can explicitly be written as follows: formula_37 For the weighting function formula_38 they are orthogonal, since the relation formula_39 holds for "m" ≠ "n" and "c" a curve surrounding the 0 point. They specialize to the Bessel polynomials for α = β = 2, in which situation ρ("x") = exp(−2/"x"). Rodrigues formula for Bessel polynomials. The Rodrigues formula for the Bessel polynomials as particular solutions of the above differential equation is: formula_40 where "a" are normalization coefficients. Associated Bessel polynomials. According to this generalization we have the following generalized differential equation for associated Bessel polynomials: formula_41 where formula_42. The solutions are formula_43 Zeros. If one denotes the zeros of formula_44 as formula_45, and that of the formula_46 by formula_47, then the following estimates exist: formula_48 and formula_49 for all formula_50.
Moreover, all these zeros have negative real part. Sharper results can be said if one resorts to more powerful theorems regarding the estimates of zeros of polynomials (more concretely, the Parabola Theorem of Saff and Varga, or differential equations techniques). One result is the following: formula_51 Particular values. The Bessel polynomials formula_52 up to formula_53 are formula_54 No Bessel polynomial can be factored into lower degree polynomials with rational coefficients. The reverse Bessel polynomials are obtained by reversing the coefficients. Equivalently, formula_55. This results in the following: formula_56 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
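The recursion is easy to check against the series definition. The sketch below (plain Python; the function names are illustrative) builds the coefficient list of y_n(x) both from the explicit series and from y_0 = 1, y_1 = x + 1, y_n = (2n − 1)x y_{n−1} + y_{n−2}, and confirms that the two agree; for example it reproduces y_3 = 15x³ + 15x² + 6x + 1.

```python
from math import factorial

def bessel_poly_series(n):
    """Coefficients [c_0, ..., c_n] of y_n(x), with c_k = (n+k)! / ((n-k)! k! 2^k)."""
    return [factorial(n + k) // (factorial(n - k) * factorial(k) * 2**k)
            for k in range(n + 1)]

def bessel_poly_recursive(n):
    """Coefficients of y_n(x) from the recursion y_n = (2n-1) x y_{n-1} + y_{n-2}."""
    y_prev, y = [1], [1, 1]                      # y_0 and y_1 as ascending coefficient lists
    if n == 0:
        return y_prev
    for m in range(2, n + 1):
        shifted = [0] + [(2 * m - 1) * c for c in y]      # (2m-1) * x * y_{m-1}
        padded = y_prev + [0] * (len(shifted) - len(y_prev))
        y_prev, y = y, [s + p for s, p in zip(shifted, padded)]
    return y

print(bessel_poly_recursive(3))                  # [1, 6, 15, 15], i.e. 15x^3 + 15x^2 + 6x + 1
assert all(bessel_poly_recursive(n) == bessel_poly_series(n) for n in range(12))
```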
[ { "math_id": 0, "text": "y_n(x)=\\sum_{k=0}^n\\frac{(n+k)!}{(n-k)!k!}\\,\\left(\\frac{x}{2}\\right)^k." }, { "math_id": 1, "text": "\\theta_n(x)=x^n\\,y_n(1/x)=\\sum_{k=0}^n\\frac{(n+k)!}{(n-k)!k!}\\,\\frac{x^{n-k}}{2^{k}}." }, { "math_id": 2, "text": "y_3(x)=15x^3+15x^2+6x+1" }, { "math_id": 3, "text": "\\theta_3(x)=x^3+6x^2+15x+15." }, { "math_id": 4, "text": "y_n(x)=\\,x^{n}\\theta_n(1/x)\\," }, { "math_id": 5, "text": "y_n(x)=\\sqrt{\\frac{2}{\\pi x}}\\,e^{1/x}K_{n+\\frac 1 2}(1/x)" }, { "math_id": 6, "text": "\\theta_n(x)=\\sqrt{\\frac{2}{\\pi}}\\,x^{n+1/2}e^{x}K_{n+ \\frac 1 2}(x)" }, { "math_id": 7, "text": "y_3(x)=15x^3+15x^2+6x+1 = \\sqrt{\\frac{2}{\\pi x}}\\,e^{1/x}K_{3+\\frac 1 2}(1/x)" }, { "math_id": 8, "text": "y_n(x)=\\,_2F_0(-n,n+1;;-x/2)= \\left(\\frac 2 x\\right)^{-n} U\\left(-n,-2n,\\frac 2 x\\right)= \\left(\\frac 2 x\\right)^{n+1} U\\left(n+1,2n+2,\\frac 2 x \\right)." }, { "math_id": 9, "text": "y_n(x;a,b)=\\,_2F_0(-n,n+a-1;;-x/b)= \\left(\\frac b x\\right)^{n+a-1} U\\left(n+a-1,2n+a,\\frac b x \\right)." }, { "math_id": 10, "text": "\\theta_n(x)=\\frac{n!}{(-2)^n}\\,L_n^{-2n-1}(2x)" }, { "math_id": 11, "text": "\\theta_n(x)=\\frac{(-2n)_n}{(-2)^n}\\,\\,_1F_1(-n;-2n;2x)" }, { "math_id": 12, "text": "\\sum_{n=0}^\\infty \\sqrt{\\frac 2 \\pi} x^{n+\\frac 1 2} e^x K_{n-\\frac 1 2}(x) \\frac {t^n}{n!}=1+x\\sum_{n=1}^\\infty \\theta_{n-1}(x) \\frac{t^n}{n!}= e^{x(1-\\sqrt{1-2t})}." }, { "math_id": 13, "text": "t" }, { "math_id": 14, "text": "x" }, { "math_id": 15, "text": "\\{\\theta_n\\}_{n\\ge0}" }, { "math_id": 16, "text": "\\sum_{n=0}^\\infty \\theta_{n}(x) \\frac{t^n}{n!}=\\frac{1}{\\sqrt{1-2t}}e^{x(1-\\sqrt{1-2t})}." }, { "math_id": 17, "text": "y_n" }, { "math_id": 18, "text": "\\sum_{n=0}^\\infty y_{n-1}(x)\\frac{t^n}{n!}=\\exp\\left(\\frac{1-\\sqrt{1-2xt}}{x}\\right)." }, { "math_id": 19, "text": "t=z-xz^2/2" }, { "math_id": 20, "text": "e^z=\\sum_{n=0}^\\infty y_{n-1}(x)\\frac{(z-xz^2/2)^n}{n!}." }, { "math_id": 21, "text": "y_0(x)=1\\," }, { "math_id": 22, "text": "y_1(x)=x+1\\," }, { "math_id": 23, "text": "y_n(x)=(2n\\!-\\!1)x\\,y_{n-1}(x)+y_{n-2}(x)\\," }, { "math_id": 24, "text": "\\theta_0(x)=1\\," }, { "math_id": 25, "text": "\\theta_1(x)=x+1\\," }, { "math_id": 26, "text": "\\theta_n(x)=(2n\\!-\\!1)\\theta_{n-1}(x)+x^2\\theta_{n-2}(x)\\," }, { "math_id": 27, "text": "x^2\\frac{d^2y_n(x)}{dx^2}+2(x\\!+\\!1)\\frac{dy_n(x)}{dx}-n(n+1)y_n(x)=0" }, { "math_id": 28, "text": "x\\frac{d^2\\theta_n(x)}{dx^2}-2(x\\!+\\!n)\\frac{d\\theta_n(x)}{dx}+2n\\,\\theta_n(x)=0" }, { "math_id": 29, "text": "e^{-2/x}" }, { "math_id": 30, "text": "n \\neq m" }, { "math_id": 31, "text": "\\int_0^{2\\pi} y_n\\left(e^{i\\theta}\\right) y_m\\left(e^{i\\theta}\\right) ie^{i\\theta} \\mathrm{d}\\theta = 0" }, { "math_id": 32, "text": "y_n(x;\\alpha,\\beta):= (-1)^n n! \\left(\\frac x \\beta\\right)^n L_n^{(-1-2n-\\alpha)}\\left(\\frac \\beta x\\right)," }, { "math_id": 33, "text": "\\theta_n(x;\\alpha, \\beta):= \\frac{n!}{(-\\beta)^n}L_n^{(-1-2n-\\alpha)}(\\beta x)=x^n y_n\\left(\\frac 1 x;\\alpha,\\beta\\right)." }, { "math_id": 34, "text": "y_n(x;\\alpha, \\beta)" }, { "math_id": 35, "text": "y_n(x;\\alpha, \\beta)= \\sum_{k=0}^n\\binom{n}{k}(n+k+\\alpha-2)^{\\underline{k}}\\left(\\frac{x}{\\beta}\\right)^k." }, { "math_id": 36, "text": "\\theta_n(x;\\alpha, \\beta)" }, { "math_id": 37, "text": "\\theta_n(x;\\alpha, \\beta)=\\sum_{k=0}^n\\binom{n}{k}(2n-k+\\alpha-2)^{\\underline{n-k}}\\frac{x^k}{\\beta^{n-k}}." 
}, { "math_id": 38, "text": "\\rho(x;\\alpha,\\beta) := {}_1F_1\\left(1,\\alpha-1,-\\frac \\beta x\\right)" }, { "math_id": 39, "text": "0 = \\oint_c\\rho(x;\\alpha,\\beta)y_n(x;\\alpha,\\beta) y_m(x;\\alpha,\\beta)\\,\\mathrm d x" }, { "math_id": 40, "text": "B_n^{(\\alpha,\\beta)}(x)=\\frac{a_n^{(\\alpha,\\beta)}}{x^{\\alpha} e^{-\\frac{\\beta}{x}}} \\left(\\frac{d}{dx}\\right)^n (x^{\\alpha+2n} e^{-\\frac{\\beta}{x}})" }, { "math_id": 41, "text": "x^2\\frac{d^2B_{n,m}^{(\\alpha,\\beta)}(x)}{dx^2} + [(\\alpha+2)x+\\beta]\\frac{dB_{n,m}^{(\\alpha,\\beta)}(x)}{dx} - \\left[ n(\\alpha+n+1) + \\frac{m \\beta}{x} \\right] B_{n,m}^{(\\alpha,\\beta)}(x)=0" }, { "math_id": 42, "text": "0\\leq m\\leq n" }, { "math_id": 43, "text": "B_{n,m}^{(\\alpha,\\beta)}(x)=\\frac{a_{n,m}^{(\\alpha,\\beta)}}{x^{\\alpha+m} e^{-\\frac{\\beta}{x}}} \\left(\\frac{d}{dx}\\right)^{n-m} (x^{\\alpha+2n} e^{-\\frac{\\beta}{x}})" }, { "math_id": 44, "text": "y_n(x;\\alpha,\\beta)" }, { "math_id": 45, "text": "\\alpha_k^{(n)}(\\alpha,\\beta)" }, { "math_id": 46, "text": "\\theta_n(x;\\alpha,\\beta)" }, { "math_id": 47, "text": "\\beta_k^{(n)}(\\alpha,\\beta)" }, { "math_id": 48, "text": "\\frac{2}{n(n+\\alpha-1)}\\le\\alpha_k^{(n)}(\\alpha,2)\\le\\frac{2}{n+\\alpha-1}," }, { "math_id": 49, "text": "\\frac{n+\\alpha-1}{2}\\le\\beta_k^{(n)}(\\alpha,2)\\le\\frac{n(n+\\alpha-1)}{2}," }, { "math_id": 50, "text": "\\alpha\\ge2" }, { "math_id": 51, "text": "\\frac{2}{2n+\\alpha-\\frac23}\\le\\alpha_k^{(n)}(\\alpha,2)\\le\\frac{2}{n+\\alpha-1}." }, { "math_id": 52, "text": "y_n(x)" }, { "math_id": 53, "text": "n=5" }, { "math_id": 54, "text": "\n\\begin{align}\ny_0(x) & = 1 \\\\\ny_1(x) & = x + 1 \\\\\ny_2(x) & = 3x^2+ 3x + 1 \\\\\ny_3(x) & = 15x^3+ 15x^2+ 6x + 1 \\\\\ny_4(x) & = 105x^4+105x^3+ 45x^2+ 10x + 1 \\\\\ny_5(x) & = 945x^5+945x^4+420x^3+105x^2+15x+1\n\\end{align}\n" }, { "math_id": 55, "text": "\\theta_k(x) = x^k y_k(1/x)" }, { "math_id": 56, "text": "\n\\begin{align}\n\\theta_0(x) & = 1 \\\\\n\\theta_1(x) & = x + 1 \\\\\n\\theta_2(x) & = x^{2} + 3 x + 3 \\\\\n\\theta_3(x) & = x^{3} + 6 x^{2} + 15 x + 15 \\\\\n\\theta_4(x) & = x^{4} + 10 x^{3} + 45 x^{2} + 105 x + 105 \\\\\n\\theta_5(x) & = x^{5} + 15 x^{4} + 105 x^{3} + 420 x^{2} + 945 x + 945 \\\\\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=6523885
65261724
Polycon
Developable roller constructed from a cone In geometry, a polycon is a kind of a developable roller. It is made of identical pieces of a cone whose apex angle equals the angle of an even sided regular polygon. In principle, there are infinitely many polycons, as many as there are even sided regular polygons. Most members of the family have elongated spindle like shapes. The polycon family generalizes the sphericon. It was discovered by the Israeli inventor David Hirsch in 2017 Edges and vertices. A polycon based on a regular polygon with formula_1 edges has formula_4 vertices, formula_1 of which coincide with the polygon's vertices, with the remaining two lying at the extreme ends of the solid. It has formula_1 edges, each one being half of the conic section created where the cone's surface intersects one of the two cutting planes. On each side of the polygonal cross-section, formula_2 edges of the polycon run (from every second vertex of the polygon) to one of the solid's extreme ends. The edges on one side are offset by an angle of formula_3 from those on the other side. The edges of the sphericon (formula_5) are circular. The edges of the hexacon (formula_6) are parabolic. All other polycons' edges are hyperbolic. The sphericon as a polycon. The sphericon is the first member of the polycon family. It is also a member of the poly-sphericon and the convex hull of the two disc roller (TDR convex hull) families. In each of the families, it is constructed differently. As a poly-sphericon, it is constructed by cutting a bicone with an apex angle of formula_7 at its plane of symmetry and reuniting the two obtained parts after rotating them at an offset angel of formula_7. As a TDR convex hull it is the convex hull of two perpendicular 180° circular sectors joined at their centers. As a polycon, the starting point is a cone created by rotating two adjacent edges of a square around its axis of symmetry that passes through their common vertex. In this specific case there is no need to extend the edges because their ends reach the square's other axis of symmetry. Since, in this specific case, the two cutting planes coincide with the plane of the cone's base, nothing is discarded and the cone remains intact. By creating another identical cone and joining the two cones together using their flat surfaces, a bicone is created. From here the construction continues in the same way described for the construction of the sphericon as a poly-sphericon. The only difference between the sphericon as a poly-sphericon and sphericon as a polycon is that as a poly- sphericon it has four vertices and as a polycon it is considered to have six. The additional vertices are not noticeable because they are located in the middle of the circular edges, and merge with them completely. Rolling properties. The surface of each polycon is a single developable face. Thus the entire family has rolling properties that are related to the meander motion of the sphericon, as do some members of the poly-sphericon family. Because the polysphericons' surfaces consist of conical surfaces and various kinds of frustum surfaces (conical and/or cylindrical), their rolling properties change whenever each of the surfaces touches the rolling plane. This is not the case with the polycons. Because each one of them is made of only one kind of conical surface the rolling properties remain uniform throughout the entire rolling motion. 
The instantaneous motion of the polycon is identical to a cone rolling motion around one of its formula_1 central vertices. The motion, as a whole, is a combination of these motions with each of the vertices serving in turn as an instant center of rotation around which the solid rotates during formula_8 of the rotation cycle. Once another vertex comes into contact with the rolling surface it becomes the new temporary center of rotation, and the rotation vector flips to the opposite direction. The resulting overall motion is a meander that is linear on average. Each of the two extreme vertices touches the rolling plane, instantaneously, formula_2 times in one rotation cycle. The instantaneous line of contact between the polycon and the surface it is rolling on is a segment of one of the generating lines of a cone, and everywhere along this line the tangent plane to the polycon is the same. When formula_2 is an odd number this tangent plane is a constant distance from the tangent plane to the generating line on the polycon surface which is instantaneously uppermost. Thus the polycons, for formula_2 odd, are constant height rollers (as is a right circular bicone, a cylinder or a prism with Reuleaux triangle cross-section). Polycons, for formula_2 even, do not possess this feature. History. The sphericon was first introduced by David Hirsch in 1980 in a patent he named 'A Device for Generating a Meander Motion'. The principle, according to which it was constructed, as described in the patent, is consistent with the principle according to which poly-sphericons are constructed. More than 25 years later, following Ian Stewart's article about the sphericon in the "Scientific American" journal, it was realized both by members of the woodturning [17, 26] and mathematical [16, 20] communities that the same construction method could be generalized to a series of axial-symmetric objects that have regular polygon cross sections other than the square. The surfaces of the bodies obtained by this method (not including the sphericon itself) consist of one kind of conic surface, and one, or more, cylindrical or conical frustum surfaces. In 2017 Hirsch began exploring a different method of generalizing the sphericon, one that is based on a single surface without the use of frustum surfaces. The result of this research was the discovery of the polycon family. The new family was first introduced at the 2019 Bridges Conference in Linz, Austria, both at the art works gallery and at the film festival. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\frac{n}{2}-1" }, { "math_id": 1, "text": "{n}" }, { "math_id": 2, "text": "\\frac{n}{2}" }, { "math_id": 3, "text": "\\frac{2\\pi}{n}" }, { "math_id": 4, "text": "{n+2}" }, { "math_id": 5, "text": "{n=4}" }, { "math_id": 6, "text": "{n=6}" }, { "math_id": 7, "text": "\\frac{\\pi}{2}" }, { "math_id": 8, "text": "\\frac{1}{n}" } ]
https://en.wikipedia.org/wiki?curid=65261724
6526281
Normal variance-mean mixture
Probability distribution In probability theory and statistics, a normal variance-mean mixture with mixing probability density formula_0 is the continuous probability distribution of a random variable formula_1 of the form formula_2 where formula_3, formula_4 and formula_5 are real numbers, and random variables formula_6 and formula_7 are independent, formula_6 is normally distributed with mean zero and variance one, and formula_7 is continuously distributed on the positive half-axis with probability density function formula_0. The conditional distribution of formula_1 given formula_7 is thus a normal distribution with mean formula_8 and variance formula_9. A normal variance-mean mixture can be thought of as the distribution of a certain quantity in an inhomogeneous population consisting of many different normal distributed subpopulations. It is the distribution of the position of a Wiener process (Brownian motion) with drift formula_4 and infinitesimal variance formula_10 observed at a random time point independent of the Wiener process and with probability density function formula_0. An important example of normal variance-mean mixtures is the generalised hyperbolic distribution in which the mixing distribution is the generalized inverse Gaussian distribution. The probability density function of a normal variance-mean mixture with mixing probability density formula_0 is formula_11 and its moment generating function is formula_12 where formula_13 is the moment generating function of the probability distribution with density function formula_0, i.e. formula_14 References. O.E Barndorff-Nielsen, J. Kent and M. Sørensen (1982): "Normal variance-mean mixtures and z-distributions", "International Statistical Review", 50, 145–159.
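A short Monte Carlo sketch makes the definition concrete. In the snippet below (Python with NumPy/SciPy) the mixing density g is taken, purely for illustration, to be the standard exponential density, so V ~ Exp(1) and M_g(s) = 1/(1 − s) for s < 1; the values of α, β, σ and the sample size are likewise arbitrary. The code draws Y = α + βV + σ√V·X, compares the empirical probability of an interval with the mixture density integrated numerically, and checks the moment-generating-function identity quoted above.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(1)
alpha, beta, sigma = 0.5, 1.0, 0.8          # illustrative parameter values
g = lambda v: np.exp(-v)                    # illustrative mixing density: V ~ Exp(1)

n = 200_000
V = rng.exponential(1.0, size=n)
X = rng.normal(size=n)
Y = alpha + beta * V + sigma * np.sqrt(V) * X

def density(x):
    """f(x) = integral over v > 0 of N(x; alpha + beta v, sigma^2 v) g(v) dv."""
    integrand = lambda v: (np.exp(-(x - alpha - beta * v) ** 2 / (2 * sigma**2 * v))
                           / np.sqrt(2 * np.pi * sigma**2 * v) * g(v))
    return quad(integrand, 0, np.inf)[0]

lo, hi = 1.0, 2.0
print(np.mean((Y > lo) & (Y < hi)), quad(density, lo, hi)[0])   # the two estimates agree closely

s = 0.2                                     # check M(s) = exp(alpha s) M_g(beta s + sigma^2 s^2 / 2)
M_g = lambda t: 1.0 / (1.0 - t)             # MGF of Exp(1), valid for t < 1
print(np.mean(np.exp(s * Y)), np.exp(alpha * s) * M_g(beta * s + 0.5 * sigma**2 * s**2))
```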
[ { "math_id": 0, "text": "g" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "Y=\\alpha + \\beta V+\\sigma \\sqrt{V}X," }, { "math_id": 3, "text": "\\alpha" }, { "math_id": 4, "text": "\\beta" }, { "math_id": 5, "text": "\\sigma > 0" }, { "math_id": 6, "text": "X" }, { "math_id": 7, "text": "V" }, { "math_id": 8, "text": "\\alpha + \\beta V" }, { "math_id": 9, "text": "\\sigma^2 V" }, { "math_id": 10, "text": "\\sigma^2" }, { "math_id": 11, "text": "f(x) = \\int_0^\\infty \\frac{1}{\\sqrt{2 \\pi \\sigma^2 v}} \\exp \\left( \\frac{-(x - \\alpha - \\beta v)^2}{2 \\sigma^2 v} \\right) g(v) \\, dv" }, { "math_id": 12, "text": "M(s) = \\exp(\\alpha s) \\, M_g \\left(\\beta s + \\frac12 \\sigma^2 s^2 \\right)," }, { "math_id": 13, "text": "M_g" }, { "math_id": 14, "text": "M_g(s) = E\\left(\\exp( s V)\\right) = \\int_0^\\infty \\exp(s v) g(v) \\, dv." } ]
https://en.wikipedia.org/wiki?curid=6526281
652660
Poncelet–Steiner theorem
Universality of construction using just a straightedge and a single circle with center In the branch of mathematics known as Euclidean geometry, the Poncelet–Steiner theorem is one of several results concerning compass and straightedge constructions having additional restrictions imposed on the traditional rules. This result, related to the rusty compass equivalence, states that whatever can be constructed by straightedge and compass together can be constructed by straightedge alone, provided that a single circle and its centre are given: "Any Euclidean construction, insofar as the given and required elements are points (or lines), if it can be completed with both the compass and the straightedge together, may be completed with the straightedge alone provided that no fewer than one circle with its center exist in the plane." Though a compass can make constructions significantly easier, it is implied that there is no functional purpose of the compass once the first circle has been drawn. All constructions remain possible, though it is naturally understood that circles and their arcs cannot be drawn without the compass. All points that uniquely define a construction, which can be determined with the use of the compass, are equally determinable without, albeit with greater difficulty. This means only that the compass may be used for aesthetic purposes, rather than for the purposes of construction. In other words, the compass may be used "after" all of the key points are determined, in order to "fill-in" the arcs purely for visual or artistic purposes, if it is desirable, and not as a necessary step toward construction. Nothing essential for the purposes of geometric construction is lost by neglecting the construction of circular arcs. Constructions carried out in adherence with this theorem - relying solely on the use of a straightedge tool without the aid of a compass - are known as Steiner constructions. Steiner constructions may involve any number of circles, including none, already drawn in the plane, with or without their centers. They may involve all manner of unique shapes and curves preexisting in the plane, also, provided that the straightedge tool is the only physical tool at the geometers disposal. Whereas the Poncelet-Steiner theorem stipulates the existence of a circle and its center, and affirms that a single circle is equivalent to a compass. History. In the tenth century, the Persian mathematician Abu al-Wafa' Buzjani (940−998) considered geometric constructions using a straightedge and a compass with a fixed opening, a so-called "rusty compass". Constructions of this type appeared to have some practical significance as they were used by artists Leonardo da Vinci and Albrecht Dürer in Europe in the late fifteenth century. A new viewpoint developed in the mid sixteenth century when the size of the opening was considered fixed but arbitrary and the question of how many of Euclid's constructions could be obtained was paramount. Renaissance mathematician Lodovico Ferrari, a student of Gerolamo Cardano in a "mathematical challenge" against Niccolò Fontana Tartaglia was able to show that "all of Euclid" (that is, the straightedge and compass constructions in the first six books of Euclid's Elements) could be accomplished with a straightedge and rusty compass. Within ten years additional sets of solutions were obtained by Cardano, Tartaglia and Tartaglia's student Benedetti. 
During the next century these solutions were generally forgotten until, in 1673, Georg Mohr published (anonymously and in Dutch) "Euclidis Curiosi" containing his own solutions. Mohr had only heard about the existence of the earlier results and this led him to work on the problem. Showing that "all of Euclid" could be performed with straightedge and rusty compass is not the same as proving that "all" straightedge and compass constructions could be done with a straightedge and just a rusty compass. Such a proof would require the formalization of what a straightedge and compass could construct. This groundwork was provided by Jean Victor Poncelet in 1822, having been motivated by Mohr's work on the Mohr-Mascheroni theorem. He also conjectured and suggested a possible proof that a straightedge and rusty compass would be equivalent to a straightedge and compass, and moreover, the rusty compass need only be used once. The result of this theorem, that "a straightedge and single circle with given centre is equivalent to a straightedge and compass" was proved by Jakob Steiner in 1833. Relationships to other constructs. Various other notions, tools, terminology, etc., is often associated (sometimes loosely) to the Poncelet-Steiner theorem. Some are listed here. Steiner constructions. The term Steiner construction typically refers to any geometric construction that utilizes the straightedge tool only, and is sometimes simply called a "straightedge-only construction". No stipulations are made about what geometric objects already exist in the plane or their relative placement; any such conditions are postulated ahead of time. Also, no implications are made about what is or is not possible to construct. Therefore, all constructions adhering to the Poncelet-Steiner theorem are Steiner constructions, though not all Steiner constructions abide by the condition of only one circle with its center provided in the plane. The Poncelet-Steiner theorem does not require an actual compass - it is presumed that the circle preexists in the plane - therefore all constructions herein demonstrating the Poncelet-Steiner theorem are Steiner constructions. Rusty compass. The rusty compass describes a compass whose hinge is so rusted as to be fused such that its legs - the needle and pencil - are unable to adjust width. In essence, it is a compass whose distance is fixed, and which draws circles of a predetermined and constant, but arbitrary radius. Circles may be drawn centered at any arbitrary point, but the radius is unchangeable. As a restricted construction paradigm, the "rusty compass constructions" allow the use of a straightedge and the fixed-width compass. The rusty compass equivalence: "All points necessary to uniquely describe any compass-straightedge construction may be achieved with a straightedge and fixed-width compass." It is naturally understood that the arbitrary-radius compass may be used for aesthetic purposes; only the arc of the fixed-width compass may be used for construction. In some sense, the rusty compass is a generalization and simplification of the Poncelet-Steiner theorem. Though not more powerful, it is certainly more convenient. The Poncelet-Steiner theorem requires a single circle with arbitrary radius and center point to be placed in the plane. As it is the only drawn circle, whether or not it was drawn by a rusty compass is immaterial and equivalent. 
The benefit of general rusty compass constructions, however, is that the compass may be used repeatedly to redraw circles centered at any desired point, albeit with the same radius, thus simplifying many constructions. Naturally if all constructions are possible with a single circle arbitrarily placed in the plane, then the same can surely be said about a straightedge and rusty compass, with which at least one circle may be arbitrarily placed. It is known that a straightedge and a rusty compass is sufficient to construct all that is possible with straightedge and standard compass - with the implied understanding that circular arcs of arbitrary radii cannot be drawn, and only need be drawn for aesthetic purposes rather than constructive ones. Historically this was proven when the Poncelet-Steiner theorem was proven, which is a stronger result. The rusty compass, therefore, is no weaker than the Poncelet-Steiner theorem. The rusty compass is also no stronger. The Poncelet-Steiner theorem reduces Ferrari's rusty compass equivalence, a claim at the time, to a single-use compass: "All points necessary to uniquely describe any compass-straightedge construction may be achieved with only a straightedge, once the first circle has been placed." The Poncelet-Steiner theorem takes the rusty compass scenario, and breaks the compass completely after its first use. "Not to be confused for "Steiner's parallel axis theorem", "Steiner's porism", or the "Steiner–Lehmus theorem". Steiner's theorem / Hilbert's error. If only one circle is to be given and no other special information, Steiner's theorem implies that the center of the circle must be provided along with the arc of the circle. This is done by proving the impossibility of constructing the circle's center from straightedge alone using only a single circle in the plane, without its center. An argument using projective transformations and Steiner's conic sections is used. " With only one circle provided in the plane, its center cannot be constructed by straightedge alone." Also attributed to David Hilbert and known as "Hilbert's Error", a naïve summary of the proof is as follows. With the use of a straightedge tool, only linear projective transformations are possible, and linear projective transformations are reversible operations. Lines project onto lines under any linear projective transformation, while conic sections project onto conic sections under a linear projective transformation, but the latter are skewed such that eccentricities, foci, and centers of circles are not preserved. Under different sequences of mappings, the center does not map uniquely and reversibly. This would not be the case if lines could be used to determine a circle's center. As linear transformations are reversible operations and would thus produce unique results, the fact that unique results are not possible implies the impossibility of center-point constructions. The uniqueness of the constructed center would depend on additional information - not provided by a single circle - which would make the construction reversible. Thus it is not possible to construct everything that can be constructed with straightedge and compass with straightedge alone. Consequently, requirements on the Poncelet-Steiner theorem cannot be weakened with respect to the circle center. If the centre of the only given circle is not provided, it cannot be obtained by a straightedge alone. Many constructions are impossible with straightedge alone. 
Something more is necessary, and a circle with its center identified is sufficient. Alternative Frameworks to the Single Circle with Center. Alternatively, the center may be omitted with sufficient additional information. This is not a weakening of the Poncelet-Steiner theorem, merely an alternative framework. Nor is it a contradiction of Steiner's Theorem which hypothesizes only a single circle. The inclusion of this sufficient alternative information, which always includes at least two circles, disambiguates the mappings under the projective transformations, thus allowing various Steiner constructions to recover the circle center. Each of these alternatives requires at least two circles devoid of their centers, plus some other unique piece of information. Some alternatives include two concentric or two intersecting circles, or three circles, or other variations wherein the provided circles are devoid of their centers. In each, some additional unique-but-sufficient criterion is met, such as concentricity, intersection points, a third circle, etc., respectively. In any of these cases, the center of a circle can be constructed, thereby reducing the problem to the Poncelet-Steiner theorem hypothesis (with the added convenience of having additional circles in the plane, all of whose centers may now be constructed). Constructive proof outline. To prove the theorem, each of the basic constructions of compass and straightedge needs to be proven to be possible by using a straightedge alone (provided that a circle and its center exist in the plane), as these are the foundations of, or elementary steps for, all other constructions. That is to say, all constructions can be written as a series of steps involving these five basic constructions: (#1) creating the line through two existing points, (#2) creating the circle through one given point centered at another given point, (#3) creating the point which is the intersection of two existing, non-parallel lines, (#4) creating the one or two points in the intersection of a line and a circle (if they intersect), and (#5) creating the one or two points in the intersection of two circles (if they intersect). If these fundamentals can be achieved with only a straightedge and an arbitrary circle (with center) embedded in the plane, the claim that is the theorem of this article will have been proved. Drawing lines and intersecting them (constructions #1 and #3) can be done with a straightedge alone; neither a compass nor a circle is required. Regarding construction #2, it is understood that the arc of a circle cannot be drawn without a compass. A circle is considered to be given by any two points, one defining the center and one existing on the circumference at radius. Any such pair defines a unique circle, although the converse is not true: for any given circle there is no unique pair defining it. In keeping with the intent of the theorem which we aim to prove, the actual circle need not be drawn but for aesthetic reasons. It is the claim of this theorem that a circle defined in this way is sufficient; this claim will be revisited later in the article. This construction can also be done directly with a straightedge. Thus, to prove the theorem, only constructions #4 and #5 need be proven possible using only a straightedge and a given circle with its center. Notes and Caveats for the Constructive Proof. Some notes and commentary regarding the theorem, the proofs, and related topics of consideration follow. Regarding the Circle. Circle nomenclature In the constructions below, a circle defined by a center point "P" and a point on its circumference, "Q", through which the arc of the circle passes, is denoted "P(Q)". As most circles are not compass-drawn, center and circumference points are named explicitly. The arc, if drawn, may also be named, such as "circle" "c". Per the theorem, when a compass-drawn circle is provided it is simply referred to as the "given circle" or the "provided circle".
Circle generality The provided circle should always be assumed to be placed arbitrarily in the plane with an arbitrary radius. Many examples of constructability with a straightedge one may find in various references on and offline, will presume that the circle is not placed in general position. Instead, for example, the constructability of a polygon may postulate that the circle is circumscribing. Such assumptions simplify a construction but does not prove generality of the claim of constructability. For the purposes of this theorem, we may assume that the circle is indeed fully general. Usage of the arc of the provided circle(s) The intersection points between any line and the given circle may be found directly, as can the intersection points between the arcs of two circles, if provided. The Poncelet-Steiner Theorem does not prohibit the normal treatment of circles already drawn in the plane; normal construction rules apply. The theorem only prohibits the construction of new circular arcs with a compass. Regarding Application. Usability The constructive proof does not merely serve as a proof of the theorem, but also demonstrates the practical application of the most basic constructions, such that the claim of constructability with a straightedge could be employed in practice, in the most general case. Since all geometric constructions can be expressed as a sequence of the five basic constructive steps, and the below constructions demonstrate and justify each of these, necessarily, in order to prove the theorem, therefore all possible constructions may be implemented accordingly. Generality and simplicity Some specific construction goals - such as for example the construction of a square - may potentially have relatively simple construction solutions, which will not be demonstrated here in the article, despite its simplicity. The omission of such constructions mitigate the length of the article. The purpose of these decisions is that such constructions may not be ubiquitous or sufficiently useful, particularly for the purposes of proving the theorem. Though the theorem and the constructions found herein can be used to construct any figure, no claims are made about the existence of simpler (straightedge-only) alternatives for any specific construction. Arbitrary point placement Steiner constructions and those constructions herein proving the Poncelet-Steiner theorem require the arbitrary placement of points in space. These constructions rely on the concept of fixed points (and fixed lines), wherein the resultant construction is independent of the arbitrariness employed during construction. In some construction paradigms - such as in the geometric definition of the constructible number - arbitrary point placement may be prohibited. Traditional geometry has no such restriction on point placement; with such a restriction against the placement of arbitrary points, the single circle is indeed weaker than the compass. This can be reconciled, however. Steiner constructions may be used as a basis for the set of constructible numbers if one only enters into the set those points which are fixed, disregarding the arbitrarily placed points required during a construction. Regarding the Proof and Approach. Doubts about constructions #1 or #3: defining lines and intersecting them Any doubts about constructions #1 or #3 would apply equally to the traditional construction paradigm which involves the compass, and thus are not concerns unique to the Poncelet-Steiner theorem. 
Their justifications - if any need be given - will not be explored in this article. Doubts about construction #2: defining and constructing circles Construction #2 should not be of concern. Even though it is undisputed that a unique circle is defined by a center point and a point on its circumference, the pertinent question is whether or not this is sufficient information for the purposes of straightedge-only construction, or if the drawn arc is required. The arc of the circle is only used in traditional construction paradigms for the purposes of circle-circle and circle-line intersections, therein the arc of the circle is used directly to identify intersection points. Thus if constructions #4 and #5 are satisfiable without the arc of the circle with which to intersect, then it will prove the non-necessity of drawing the arc. This would therefore imply that construction #2 is satisfied by a simple labeling of two points, identifying the unique circle. Choice of construction among variants In general constructions there are often several variations that will produce the same result. The choices made in such a variant can be made without loss of generality. However, when a construction is being used to prove that something can be done, it is not necessary to describe all these various choices and, for the sake of clarity of exposition, only one variant will be given below. The variants chosen below are done so for their ubiquity and generalizability in application rather than their simplicity or convenience under any particular set of special conditions. Alternative proofs Alternative proofs do exist for the Poncelet-Steiner theorem, originating in an algebraic approach to geometry. Relying on equations and numerical values in real coordinate space, formula_0, via an isomorphism to the Euclidean plane, this is a fairly modern interpretation which requires the notions of length, distance, and coordinate positions to be imported into the plane. This is well beyond the scope of traditional geometry. This article takes a more traditional approach and proves the theorem using pure geometric constructive techniques, which also showcases the practical application. Constructive proof. The proof of the theorem and useful straightedge-only constructions follow. Some preliminary constructions. To prove the above constructions #4 and #5, which are included below, a few necessary intermediary constructions are also explained below since they are used and referenced frequently. These are also straightedge-only constructions. All constructions below rely on basic constructions #1,#2,#3, and any other construction that is listed prior to it. Parallel of a line having a colinear bisected segment. This construction does not require the use of the given circle. Naturally any line that passes through the center of the given circle implicitly has a bisected segment: the diameter is bisected by the center. The animated GIF file embedded at the introduction to this article demonstrates this construction - relying on the bisected diameter; the arc of the circle is never used - which is reiterated here without the circle and with enumerated steps. Given an arbitrary line n (in black) on which there exist two points A and B, having a midpoint M between them, and an arbitrary point P in the plane (assumed not to be on line n) through which a parallel of line n is to be made: In some literature the bisected line segment is sometimes viewed as a one-dimensional "circle" existing on the line. 
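As a numeric sanity check of this construction, the sketch below carries out one valid straightedge-only variant in coordinates (the auxiliary points E, F and G, and this particular ordering of steps, are assumptions of the sketch rather than necessarily the exact variant animated in the article): choose any point E on line AP beyond P, intersect line EM with line BP to obtain F, intersect line AF with line EB to obtain G; the line PG is then the required parallel.

def intersect(p1, p2, p3, p4):
    """Intersection point of line p1p2 with line p3p4 (the two lines are assumed non-parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

A, M, B = (0.0, 0.0), (1.0, 0.0), (2.0, 0.0)  # bisected segment: M is the midpoint of AB
P = (0.5, 1.0)                                # the parallel must pass through P
E = (1.0, 2.0)                                # arbitrary point on line AP, beyond P
F = intersect(E, M, B, P)                     # line EM meets line BP
G = intersect(A, F, E, B)                     # line AF meets line EB
# PG is parallel to AB: the cross product of the two direction vectors vanishes
cross = (G[0] - P[0]) * (B[1] - A[1]) - (G[1] - P[1]) * (B[0] - A[0])
print(G, abs(cross) < 1e-9)                   # (1.5, 1.0) True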
Alternatively, some literature views the bisected line segment as a two dimensional circle in three dimensional space with the line passing through a diameter, but not parallel to the plane, thus intersecting the plane of construction at two points on the circumference with the midpoint simply being the prescribed circle center. This construction is a special case of the projective harmonic conjugate construction, which is not demonstrated in this article. Creating a bisected segment on a line. If the line passes through the center of a circle, the segment defined by the diameter through the circle is bisected by the center of the circle. In the general case, however, any other line in the plane may have a bisected segment constructed onto it. This construction does require the use of the given circle. Given a line, m (in black), and a circle centered at A, we wish to create points E, B, and H on the line such that B is the midpoint: As point C is chosen arbitrarily, there is no need for it to inconveniently be on the perpendicular of line through the circle center. If however it is, line is merely the tangent line to the circle through point C, which is coincident to point D. This construction is possible though the construction is not listed in this article. Points F and G may be constructed as before, and will also equal one another. And again, line is merely the tangent line to the circle at that point. Thus points E, H and their midpoint B may be found, as before, with only a minor change adding a subconstruction. Constructing a parallel of any line. This construction does require the use of the given circle. In order to generalize the parallel line construction to all possible lines, not just the ones with a collinear bisected line segment, it becomes necessary to have additional information. In keeping with the Poncelet-Steiner theorem, a circle (with center) is the object of choice for this construction. To construct a parallel line of any given line, through any point in the plane, we trivially combine two constructions: In alternative constructions, which are not demonstrated in this article, a parallel may be constructed from any pair of lines which are already parallel to one another; thus a third parallel may be produced from any two, without the use of a circle. Additionally, a parallel of any line may be constructed whenever there exists in the plane any parallelogram, also without the use of a given circle. Constructing a perpendicular line. This construction does require the use of the given circle and takes advantage of Thales's theorem. From a given line m, and a given point A in the plane, a perpendicular to the line is to be constructed through the point. Provided is the given circle "O(r)". If the line from which a perpendicular is to be made does pass through the circle center, an alternative approach would be to construct the tangent lines to the circle at the lines points of intersection, using Steiner constructions. This is not demonstrated in this article. Another option in the event the line passes through the circle's center would be to construct a parallel to it through the circle at an arbitrary point. An isosceles trapezoid (or potentially an isosceles triangle) is formed by the intersection points to the circle of both lines. The two non-parallel sides of which may be extended to an intersection point between them, and a line drawn from there through the circle's center. This line is perpendicular, and the diameter is bisected by the center. 
By an alternative construction not demonstrated in this article, a perpendicular of any line may be constructed without a circle, provided there exists in the plane any square. Constructing the midpoint of any segment (segment bisection). Given is a line segment , which is to be bisected. Optionally, a parallel line "m" exists in the plane. For added perspective, in some sense this construction is a variant of a previous construction of a parallel from a bisected line segment, and is therefore also a special case of the projective harmonic conjugate (not provided in this article). It is the same set of lines when taken on whole, but constructed in a different order, and from a different initial set of conditions, arriving at a different end goal. Since any arbitrary segment on one of two parallel lines can be bisected, and any line with a bisected segment on it may have a parallel constructed, the two scenarios are geometrically equivalent propositions. They imply one another; a simple construction can convert one scenario into the other using no additional information. In an alternative construction, any line segment may be bisected whenever a parallelogram exists in the plane. Rotating a line segment. To define a circle only the center and one point - any point - on the circumference is required. In principle a new point "B' " is constructed such that circle "A(B)" is equal to circle "A(B')", though the point "B" is not equal to point "B' ". In essence, segment is rotated about the axis point "A", to , for a different set of defining points for the same circle. One way of going about this which satisfies most conditions is as follows: This construction will fail if the desired rotation is diametrically opposite the circle (i.e. a half-circle rotation). One solution to this scenario is to employ two separate rotation constructions, neither one a half-circle rotation from the previous, one acting as an intermediary step. Choose any rotation angle arbitrarily, complete the rotation, then choose the supplementary angle and perform the rotation a second time. There does exist a second, alternative rotation construction solution, based on projections and perspective points. Though it avoids the aforementioned half-circle rotation complication, it does have its own complications, which are similarly resolved with intermediary rotation constructions. The construction is no more versatile. It is not demonstrated in this article. Constructing the radical axis between circles. This construction does require the use of the given circle (which is not depicted) for the referenced sub-constructions previously demonstrated. Suppose two circles "A"("B") and "C"("D") are implicitly given, defined only by the points "A", "B", "C", and "D" in the plane, with their centers defined, but are not compass-constructed. The radical axis, line "m" (in dark blue), between the two circles may be constructed: In the event that the construction of the radical axis fails due to there not being an intersection point "X" between parallel lines "j" and "k", which results from the coincidental placement of the midpoint "M" on the line , an alternative approach is required. One such approach is to rotate the segment about the axis point "A" (the center of circle "A"("B")). Once arrived at the arbitrary rotation , which defines the same circle, the radical axis construction can begin anew without issue. Intersecting a line with a circle (Construction #4). 
This construction does require the use of the provided circle, "O"("r"). Any line may be naturally intersected with any compass-drawn circle. Given is the line "m" (in black) and the circle "P(Q)", which is not compass-constructed. The intersection points of the circle "P(Q)" and the line "m", which are point "A" and "B", may be constructed: Intersecting two circles (Construction #5). The intersection between two circles becomes a trivial combination of two earlier constructions: A circle through one point centered at another point (Construction #2, revisited). The second basic construction - describing a full circle with just its center and one point at radius defining circumference - never needed an arc to be constructed with the compass in order for the circle to be utilized in constructions. Namely, the intersections of circles both with circles and with lines, which together are the essence of all constructions involving a circle, are achievable without the arc. Thus defining a circle by its center and by any arbitrary point on its circumference is sufficient to fully describe the entire circle and construct with it. As such, the arc only serves an aesthetic purpose. Basic construction #2 is satisfied. Conclusion. Since all five basic constructions have been shown to be achievable with only a straightedge, provided that a single circle with its center is placed in the plane, this proves the Poncelet-Steiner theorem. Projective Geometry and the Poncelet-Steiner Theorem. Though a distinct topic in its own right, many of the concepts of projective geometry are applied here to Steiner constructions. Jean-Victor Poncelet was a major contributor to the subject when he postulated the theorem of this article, which Jakob Steiner later proved. Many of the related concepts developed in projective geometry include but are not limited to: concurrence, "points at infinity", perspective, ratios and cross ratios, stable or fixed points of involutions, invariants, homogeneity, linear transformations, projective harmonics, and others. A thorough treatment of Steiner constructions and their proofs require a background in projective geometry, though the subject of projective geometry is not restricted to straightedge-only constructions. Other types of restricted construction. Restricted constructions involving the compass. The Poncelet–Steiner theorem can be contrasted with the Mohr–Mascheroni theorem, which states that any compass and straightedge construction can be performed with only a compass. The straightedge is not required but for aesthetic purposes; nothing else is needed in the plane. The rusty compass restriction allows the use of a compass and straightedge, provided that the compass produces circles of fixed radius. Although the rusty compass constructions were explored since the 10th century, and all of Euclid was shown to be constructable with a rusty compass by the 17th century, the Poncelet-Steiner theorem proves that the rusty compass and straightedge together are more than sufficient for any and all Euclidean construction. Indeed, the rusty compass becomes a tool simplifying constructions over merely the straightedge and single circle. Viewed the other way, the Poncelet-Steiner theorem not only fixes the width of the rusty compass, but ensures that the compass breaks after its first use. 
The compass equivalence theorem proves that the rigid compass (also called the modern compass) - one that holds its spacing when lifted from the plane - is equivalent to the traditional collapsing compass (also called divider) - one that does not retain its spacing, thus "resetting to zero", every time it is lifted from the plane. The ability to transfer distances (i.e. construct congruent circles, translate a circle in the plane) - an operation made trivial by the fixable aperture of a rigid compass - was proven by Euclid to be possible with the collapsing compass. In fact it can be done using only the collapsing compass, without the straightedge tool. Consequently, the rigid compass and the collapsing compass are equivalent; what can be constructed by one can be constructed by the other, even in the compass-only construction paradigm. Furthermore, in the compass-only construction paradigm, the operation of circle translation requires no more than three additional circles over that of the rigid compass. Restricted Steiner constructions. The requirement placed on the Poncelet-Steiner theorem - that one circle with its center provided exist in the plane - has been since generalized, or strengthened, to include alternative but equally restrictive conditions. Other unique scenarios undoubtedly exist than those listed here. This is not an exhaustive list of possibilities. Poncelet-Steiner without the circle center. In two such alternatives, the centre may be omitted entirely provided that given are either two concentric circles, or two distinct intersecting circles, of which there are two cases: two intersection points and one intersection point (tangential circles). From any of these scenarios, centres can be constructed, reducing the scenario to the original hypothesis. These do not contradict Steiner's theorem which, although stating a center is absolutely required, also hypothesizes only one circle exists. Still other variations exist. It suffices to have two non-intersecting, non-concentric circles (without their centres), provided that at least one point is given on either the centerline through them or on the radical axis between them, or provided any two parallel lines arbitrarily in the plane. It also suffices, alternatively, to have three non-intersecting circles. Once a single center is constructed, the scenario again reduces to the original hypothesis of the Poncelet-Steiner theorem. In each of these scenarios, more information exists than just a second circle. The fact that two circles are concentric, or that two circles have known intersection points, or the presence of a third circle, or of a point on either a centerline or a radical axis, constitutes an additional piece of information beyond merely the presence of a second circle. Poncelet-Steiner without the full circular arc. In another alternative, the entire circle is not required at all. In 1904, Francesco Severi proved that any small arc (of the circle), together with the centre, will suffice. This construction breaks the rusty compass at any point before the first circle is completed, but after it has begun, thus drawing some continuous portion of the arc of the circle in the plane, and still all constructions remain possible. Thus, the conditions hypothesizing the Poncelet-Steiner theorem may indeed be weakened, but only with respect to the completeness of the circular arc, and not, per the Steiner theorem, with respect to the center. 
The theorem demonstrates the Steiner construction of the intersection points between a line and the circle of an arc, regardless of the size or position of the arc, using only a straightedge and the arc. The construction, also, does not make use of the center of the circle of the arc. Though the center is required to complete all of Euclid, as the Steiner theorem and the Poncelet-Steiner theorem both prove, the center is not needed in order to intersect a line with the circle of that arc. Using this construction, the arc, and the center of the circle of the arc, all of the above Poncelet-Steiner constructions are equally achievable, albeit with greater difficulty. Severi's proof illustrates that any arc of the circle, with its center, fully characterizes the circle and allows intersection points (of lines) with it to be found. Consequently, the Poncelet-Steiner Theorem's minimum requirement of one circle is still satisfied. The circle remains, fully defined, and fully utilizable, regardless of the absence of some portion of the completed arc. Since Severi's construction intersects lines with the circle of the arc, and the Poncelet-Steiner theorem constructs circle-circle intersections via the straightedge, there is no additional need to justify the circle-circle intersections under this restricted paradigm. Also in each of the previously mentioned two-or-more-circle scenarios wherein the circle centers are omitted, the completeness of the circular arc is not necessary, as per Severi's theorem. However, in the case of two intersecting circles, their intersection points must be explicitly given whenever arcs of the circle(s) do not exist wherever intersection points do. That is, if the intersection points between two circles of arcs cannot be found directly by way of the two arcs, they must be provided. The completeness of the circular arc is otherwise redundant. Control flow restrictions. Though a relatively new concept stemming from computational systems, the notion of control flow and its various restrictions in the context of geometry are also subjects of study. As they apply to geometrical constructions, these restrictions are typically ones placed on the geometer. As with the case of constructible numbers, a prohibition against arbitrary point placement is one possible control flow restriction, previously discussed in this article. In the aforementioned Mohr-Mascheroni theorem, restrictions on compass radius could be imposed, such as minimums and maximums given a set of starting points. Other examples may include restrictions on construction sequences and order, decision branching, and repetition or reiteration of steps. Extended, liberated, or neusis constructions. Instead of restricting the rules of construction further, it is of equal interest to study relaxing the restrictions. These are sometimes called "extended constructions", because they extend what is constructible by extending the allowable toolset. These constructions are also called "neusis constructions" (from Greek) because they employ tools other than the compass and straightedge, or "liberated constructions" because they alleviate the restrictions of the traditional paradigm. Just as geometers have studied what remains possible to construct (and how) when additional restrictions are placed on traditional construction rules - such as compass only, straightedge only, rusty compass, etc.
- they have also studied what constructions become possible that weren't already possible when the natural restrictions inherent to traditional construction rules are alleviated. Questions such as "what becomes constructible", "how might it be constructed", "what are the fewest traditional rules to be broken", "what are the simplest tools needed", "which seemingly different tools are equivalent", "how does the new paradigm simplify traditional constructions", etc. are asked. The arbitrary angle is not trisectable using traditional compass and straightedge rules, for example, but the trisection becomes constructible when allowed the additional tool of an ellipse in the plane, which is itself not constructible. Some of the traditional unsolved problems such as angle trisection, doubling the cube, squaring the circle, finding cubic roots, etc., since proven to be impossible by straightedge and compass alone, have been resolved using an expanded set of tools. In general, the objects studied to extend the scope of what is constructible have included: Each of the three above categorical approaches has its own unique angle trisection solutions, as do the various tools and curves. The ancient geometers considered the compass and straightedge constructions (known as "planar constructions") as ideal and preferred. Second to that they preferred "solid constructions", which included the use of conic sections in the plane other than the circle. They favored thirdly the use of arbitrary smooth curves in the plane (such as the Archimedean spiral), and least of all the use of neuseis (alternative physical handheld tools). It is doubtful that the ancient geometers - at least of the western world - even considered paper folding. The term "neusis" (singular) or neusis construction may also refer to a specific tool or method employed by the ancient geometers. The graduated ruler is unique in that it defines a metric, and also a norm, and gives rise to the algebraic treatment of geometry, Cartesian graphing, and imports a standard unit for proportionality of segments. The algebraic approach to geometry offers a distinct proof of the Poncelet-Steiner theorem. Approximations. It is worthwhile to point out that in all construction paradigms, the implicit rule is that all constructions must terminate in a finite number of applications of the available tools (compass and straightedge), and produce the exact intended results. Entire discussions could be made with either of these conditions alleviated. For any otherwise non-constructible figure: For example, an angle trisection may be performed exactly with compass and straightedge, using an infinite sequence of angle bisections. If the construction is terminated at some finite iteration, an approximation of a trisection can be achieved to arbitrary accuracy. Although each point, line or circle is a valid construction, what it aims to approximate can never truly be achieved in finite applications of a compass and/or straightedge. There are, alternatively, exactly constructible non-iteratively constructed figures that are reasonable approximations for non-constructible figures. For example, there is a relatively simple non-iterative construction for an approximation of the heptagon. Further generalizations. The Poncelet-Steiner theorem has been generalized to higher dimensions, such as, for example, a three dimensional variation where the straightedge is replaced with a plane, and the circle with center is replaced with a sphere with center.
This is essentially a "straightedge only" variation of three dimensional geometry. Though research is ongoing, and some propositions are yet to be proved, many of the properties that apply to the two dimensional case also apply to higher dimensions, as implementations of projective geometry. Additionally, some research is underway to generalize the Poncelet-Steiner theorem to non-Euclidean geometries. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{R}^2" } ]
https://en.wikipedia.org/wiki?curid=652660
652733
Pronic number
Number, product of consecutive integers A pronic number is a number that is the product of two consecutive integers, that is, a number of the form formula_0. The study of these numbers dates back to Aristotle. They are also called oblong numbers, heteromecic numbers, or rectangular numbers; however, the term "rectangular number" has also been applied to the composite numbers. The first few pronic numbers are: 0, 2, 6, 12, 20, 30, 42, 56, 72, 90, 110, 132, 156, 182, 210, 240, 272, 306, 342, 380, 420, 462 … (sequence in the OEIS). Letting formula_1 denote the pronic number formula_0, we have formula_2. Therefore, in discussing pronic numbers, we may assume that formula_3 without loss of generality, a convention that is adopted in the following sections. As figurate numbers. The pronic numbers were studied as figurate numbers alongside the triangular numbers and square numbers in Aristotle's "Metaphysics", and their discovery has been attributed much earlier to the Pythagoreans. As a kind of figurate number, the pronic numbers are sometimes called "oblong" because they are analogous to polygonal numbers in this way: The nth pronic number is the sum of the first n even integers, and as such is twice the nth triangular number and n more than the nth square number, as given by the alternative formula "n"2 + "n" for pronic numbers. The nth pronic number is also the difference between the odd square (2"n" + 1)2 and the ("n"+1)st centered hexagonal number. Since the number of off-diagonal entries in a square matrix is twice a triangular number, it is a pronic number. Sum of pronic numbers. The partial sum of the first n positive pronic numbers is twice the value of the nth tetrahedral number: formula_4. The sum of the reciprocals of the positive pronic numbers (excluding 0) is a telescoping series that sums to 1: formula_5. The partial sum of the first n terms in this series is formula_6. The alternating sum of the reciprocals of the positive pronic numbers (excluding 0) is a convergent series: formula_7. Additional properties. Pronic numbers are even, and 2 is the only prime pronic number. It is also the only pronic number in the Fibonacci sequence and the only pronic Lucas number. The arithmetic mean of two consecutive pronic numbers is a square number: formula_8 So there is a square between any two consecutive pronic numbers. It is unique, since formula_9 Another consequence of this chain of inequalities is the following property. If m is a pronic number, then the following holds: formula_10 The fact that consecutive integers are coprime and that a pronic number is the product of two consecutive integers leads to a number of properties. Each distinct prime factor of a pronic number is present in only one of the factors n or "n" + 1. Thus a pronic number is squarefree if and only if n and "n" + 1 are also squarefree. The number of distinct prime factors of a pronic number is the sum of the number of distinct prime factors of n and "n" + 1. If 25 is appended to the decimal representation of any pronic number, the result is a square number, the square of a number ending on 5; for example, 625 = 252 and 1225 = 352. This is so because formula_11. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
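The identities above lend themselves to a quick numeric check; the short Python sketch below (an illustration, not part of the article) verifies the relations to triangular and tetrahedral numbers and the append-25 property for the first few cases.

def pronic(n):
    return n * (n + 1)

def triangular(n):
    return n * (n + 1) // 2

print([pronic(n) for n in range(10)])  # 0, 2, 6, 12, 20, 30, 42, 56, 72, 90

for n in range(1, 50):
    # twice the nth triangular number, and n more than the nth square
    assert pronic(n) == 2 * triangular(n) == n * n + n
    # partial sum of the first n positive pronic numbers is twice the nth tetrahedral number
    assert sum(pronic(k) for k in range(1, n + 1)) == n * (n + 1) * (n + 2) // 3
    # appending 25 yields the square of a number ending in 5
    assert int(str(pronic(n)) + "25") == (10 * n + 5) ** 2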
[ { "math_id": 0, "text": "n(n+1)" }, { "math_id": 1, "text": "P_n" }, { "math_id": 2, "text": "P_{{-}n} = P_{n{-}1}" }, { "math_id": 3, "text": "n\\geq 0" }, { "math_id": 4, "text": "\\sum_{k=1}^{n} k(k+1) =\\frac{n(n+1)(n+2)}{3}= 2T_n " }, { "math_id": 5, "text": "\\sum_{i=1}^{\\infty} \\frac{1}{i(i+1)}=\\frac{1}{2}+\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{20}\\cdots=1" }, { "math_id": 6, "text": "\\sum_{i=1}^{n} \\frac{1}{i(i+1)} =\\frac{n}{n+1}" }, { "math_id": 7, "text": "\\sum_{i=1}^{\\infty} \\frac{(-1)^{i+1}}{i(i+1)}=\\frac{1}{2}-\\frac{1}{6}+\\frac{1}{12}-\\frac{1}{20}\\cdots=\\log(4)-1" }, { "math_id": 8, "text": "\\frac {n(n+1) + (n+1)(n+2)}{2} = (n+1)^2" }, { "math_id": 9, "text": "n^2 \\leq n(n+1) < (n+1)^2 < (n+1)(n+2) < (n+2)^2." }, { "math_id": 10, "text": " \\lfloor{\\sqrt{m}}\\rfloor \\cdot \\lceil{\\sqrt{m}}\\rceil = m." }, { "math_id": 11, "text": "100n(n+1) + 25 = 100n^2 + 100n + 25 = (10n+5)^2" } ]
https://en.wikipedia.org/wiki?curid=652733
6527803
Self-interacting dark matter
Hypothetical form of dark matter consisting of particles with strong self-interactions In astrophysics and particle physics, self-interacting dark matter (SIDM) is an alternative class of dark matter particles which have strong interactions, in contrast to the standard cold dark matter model (CDM). SIDM was postulated in 2000 as a solution to the core-cusp problem. In the simplest models of DM self-interactions, the interaction between two dark matter particles is mediated by a force carrier φ through a Yukawa-type potential. On galactic scales, DM self-interaction leads to energy and momentum exchange between DM particles. Over cosmological time scales this results in isothermal cores in the central region of dark matter haloes. If the self-interacting dark matter is in hydrostatic equilibrium, its pressure and density follow: formula_0 where formula_1 and formula_2 are the gravitational potentials of the dark matter and of the baryons, respectively. The equation naturally correlates the dark matter distribution with the baryonic matter distribution. With this correlation, the self-interacting dark matter can explain phenomena such as the Tully–Fisher relation. Self-interacting dark matter has also been postulated as an explanation for the DAMA annual modulation signal. Moreover, it has been shown that it can serve as the seed of supermassive black holes at high redshift. SIDM may have originated in a so-called "Dark Big Bang". A 2024 study found that SIDM solves the "final-parsec problem".
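As a worked illustration of the quoted hydrostatic relation (a sketch only: the isothermal equation of state with constant velocity dispersion σ0 and the toy potential below are assumptions introduced here, and the sign convention simply follows the equation as quoted), writing the pressure as ρ_χ σ0² gives a density profile proportional to exp(Φ_tot/σ0²), which the few lines below check numerically.

import numpy as np

sigma0 = 1.0                           # constant velocity dispersion (assumed, arbitrary units)
r = np.linspace(0.1, 10.0, 2000)
phi_tot = -1.0 / (r + 0.5)             # toy total (dark matter + baryon) potential, not a fitted model
rho = np.exp(phi_tot / sigma0**2)      # isothermal profile implied by the quoted relation

pressure = rho * sigma0**2
lhs = np.gradient(pressure, r) / rho   # (1/rho) dP/dr
rhs = np.gradient(phi_tot, r)          # dPhi_tot/dr, matching the equation as quoted
print(np.max(np.abs(lhs - rhs)))       # ~0, up to finite-difference error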
[ { "math_id": 0, "text": "\\nabla P_\\chi/\\rho_\\chi = \\nabla \\Phi_{\\rm tot} = \\nabla (\\Phi_\\chi + \\Phi_b)" }, { "math_id": 1, "text": "\\Phi_{\\chi}" }, { "math_id": 2, "text": "\\Phi_{b}" } ]
https://en.wikipedia.org/wiki?curid=6527803
6527939
Proof of knowledge
Class of interactive proof In cryptography, a proof of knowledge is an interactive proof in which the prover succeeds in 'convincing' a verifier that the prover knows something. What it means for a machine to 'know something' is defined in terms of computation. A machine 'knows something' if this something can be computed, given the machine as an input. As the program of the prover does not necessarily spit out the knowledge itself (as is the case for zero-knowledge proofs), a machine with a different program, called the knowledge extractor, is introduced to capture this idea. We are mostly interested in what can be proven by polynomial time bounded machines. In this case the set of knowledge elements is limited to a set of witnesses of some language in NP. Let formula_0 be a statement of language formula_1 in NP, and formula_2 the set of witnesses for x that should be accepted in the proof. This allows us to define the following relation: formula_3. A proof of knowledge for relation formula_4 with knowledge error formula_5 is a two party protocol with a prover formula_6 and a verifier formula_7 with the following two properties: Completeness: if formula_8, then the prover formula_6 who knows witness formula_9 for formula_0 succeeds in convincing the verifier formula_7 of his knowledge; more formally, formula_10, i.e. the interaction of the prover with the verifier leads the verifier to accept with probability 1. Validity: the success probability of a knowledge extractor formula_11 in extracting the witness, given oracle access to a possibly malicious prover formula_12, must be at least as high as the success probability of the prover formula_12 in convincing the verifier, up to the knowledge error; this guarantees that a prover who does not know the witness cannot convince the verifier except with small probability. Details on the definition. This is a more rigorous definition of Validity: Let formula_4 be a witness relation, formula_2 the set of all witnesses for public value formula_0, and formula_5 the knowledge error. A proof of knowledge is formula_5-valid if there exists a polynomial-time machine formula_11, given oracle access to formula_12, such that for every formula_12, it is the case that formula_13 and formula_14 The result formula_15 signifies that the Turing machine formula_11 did not come to a conclusion. The knowledge error formula_16 denotes the probability that the verifier formula_7 might accept formula_0, even though the prover does in fact not know a witness formula_9. The knowledge extractor formula_11 is used to express what is meant by the knowledge of a Turing machine. If formula_11 can extract formula_9 from formula_12, we say that formula_12 knows the value of formula_9. This definition of the validity property is a combination of the validity and strong validity properties. For small knowledge errors formula_16, such as e.g. formula_17 or formula_18 it can be seen as being stronger than the soundness of ordinary interactive proofs. Relation to general interactive proofs. In order to define a specific proof of knowledge, one need not only define the language, but also the witnesses the verifier should know. In some cases proving membership in a language may be easy, while computing a specific witness may be hard. This is best explained using an example: Let formula_19 be a cyclic group with generator formula_20 in which solving the discrete logarithm problem is believed to be hard. Deciding membership of the language formula_21 is trivial, as every formula_0 is in formula_19. However, finding the witness formula_9 such that formula_22 holds corresponds to solving the discrete logarithm problem. Protocols. Schnorr protocol. One of the simplest and most frequently used proofs of knowledge, the "proof of knowledge of a discrete logarithm", is due to Schnorr. The protocol is defined for a cyclic group formula_23 of order formula_24 with generator formula_20. In order to prove knowledge of formula_25, the prover interacts with the verifier as follows: In the first step the prover commits to a randomly chosen value formula_26 by sending the commitment formula_27 to the verifier. In the second step the verifier sends a random challenge formula_28 to the prover. In the third step the prover replies with the response formula_29, computed modulo the group order. The verifier accepts, if formula_30. We can see this is a valid proof of knowledge because it has an extractor that works as follows: the extractor runs the prover formula_31 to obtain a commitment formula_27, issues a challenge formula_32 and receives the response formula_33; it then rewinds the prover formula_31 to the point just after the commitment was sent, issues a second, different challenge formula_34, and receives the response formula_35; finally it outputs formula_36. Since formula_37, the output of the extractor is precisely formula_0.
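To make the three moves and the extractor concrete, the following minimal Python sketch runs the protocol over a toy subgroup (the parameters p = 23, q = 11, g = 4 and the secret x = 7 are illustrative assumptions, far too small for real use; the extractor's rewinding is simulated by reusing the prover's recorded randomness).

import secrets

p, q, g = 23, 11, 4        # toy subgroup of Z_p* of prime order q; g has order 11 mod 23
x = 7                      # the prover's secret witness
y = pow(g, x, p)           # public value y = g^x

# one honest run of the protocol
r = secrets.randbelow(q)   # prover's random value
t = pow(g, r, p)           # commitment t = g^r
c = secrets.randbelow(q)   # verifier's random challenge
s = (r + c * x) % q        # response s = r + c*x (mod q)
assert pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier's check

# knowledge extractor: two accepting transcripts sharing the same commitment t
c1, c2 = 3, 9
s1, s2 = (r + c1 * x) % q, (r + c2 * x) % q     # responses a rewound prover would return
extracted = ((s1 - s2) * pow(c1 - c2, -1, q)) % q
assert extracted == x
print("extracted witness:", extracted)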
This protocol happens to be zero-knowledge, though that property is not required for a proof of knowledge. Sigma protocols. Protocols which have the above three-move structure (commitment, challenge and response) are called "sigma protocols". The naming originates from Sig, referring to the zig-zag symbolizing the three moves of the protocol, and MA, an abbreviation of "Merlin-Arthur". Sigma protocols exist for proving various statements, such as those pertaining to discrete logarithms. Using these proofs, the prover can not only prove the knowledge of the discrete logarithm, but also that the discrete logarithm is of a specific form. For instance, it is possible to prove that two logarithms of formula_38 and formula_39 with respect to bases formula_40 and formula_41 are equal or fulfill some other linear relation. For "a" and "b" elements of formula_42, we say that the prover proves knowledge of formula_43 and formula_44 such that formula_45 and formula_46. Equality corresponds to the special case where "a" = 1 and "b" = 0. As formula_44 can be trivially computed from formula_43, this is equivalent to proving knowledge of an "x" such that formula_47. This is the intuition behind the following notation, which is commonly used to express what exactly is proven by a proof of knowledge. formula_48 states that the prover knows an "x" that fulfills the relation above. Applications. Proofs of knowledge are useful tools for the construction of identification protocols, and in their non-interactive variant, signature schemes. Such schemes are: They are also used in the construction of group signature and anonymous digital credential systems.
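The notation above can be exercised in the same toy group as the Schnorr sketch: the sketch below (illustrative parameters only) proves knowledge of an x with y1 = g1^x and y2 = (g2^a)^x · g2^b by committing under both bases with the same randomness and answering both checks with a single response.

import secrets

p, q = 23, 11
g1, g2 = 4, 9            # two elements of the order-11 subgroup of Z_23*
a, b = 3, 5              # public constants of the linear relation
x = 7                    # secret
y1 = pow(g1, x, p)
h = pow(g2, a, p)        # the base (g2^a)
y2 = (pow(h, x, p) * pow(g2, b, p)) % p

r = secrets.randbelow(q)
t1, t2 = pow(g1, r, p), pow(h, r, p)   # commitments under both bases, same randomness
c = secrets.randbelow(q)               # challenge
s = (r + c * x) % q                    # a single response binds both statements

y2_shifted = (y2 * pow(pow(g2, b, p), -1, p)) % p   # y2 * g2^(-b) = (g2^a)^x
assert pow(g1, s, p) == (t1 * pow(y1, c, p)) % p
assert pow(h, s, p) == (t2 * pow(y2_shifted, c, p)) % p
print("both verification equations hold")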
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "W(x)" }, { "math_id": 3, "text": "R= \\{(x,w): x \\in L, w \\in W(x)\\}" }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "\\kappa" }, { "math_id": 6, "text": "P" }, { "math_id": 7, "text": "V" }, { "math_id": 8, "text": "(x,w) \\in R" }, { "math_id": 9, "text": "w" }, { "math_id": 10, "text": "\\Pr(P(x,w)\\leftrightarrow V(x) \\rightarrow 1) =1" }, { "math_id": 11, "text": "E" }, { "math_id": 12, "text": "\\tilde P" }, { "math_id": 13, "text": "E^{\\tilde P(x)}(x) \\in W(x) \\cup \\{ \\bot \\}" }, { "math_id": 14, "text": "\\Pr(E^{\\tilde P(x)}(x) \\in W(x)) \\geq \\Pr(\\tilde P(x)\\leftrightarrow V(x) \\rightarrow 1) - \\kappa(x)." }, { "math_id": 15, "text": "\\bot" }, { "math_id": 16, "text": "\\kappa(x)" }, { "math_id": 17, "text": "2^{-80}" }, { "math_id": 18, "text": "1/\\mathrm{poly}(|x|)" }, { "math_id": 19, "text": "\\langle g \\rangle" }, { "math_id": 20, "text": "g" }, { "math_id": 21, "text": "L=\\{x \\mid g^w=x \\}" }, { "math_id": 22, "text": "g^w=x" }, { "math_id": 23, "text": "G_q" }, { "math_id": 24, "text": "q" }, { "math_id": 25, "text": "x=\\log_g y" }, { "math_id": 26, "text": "r" }, { "math_id": 27, "text": "t=g^r" }, { "math_id": 28, "text": "c" }, { "math_id": 29, "text": "s=r+cx" }, { "math_id": 30, "text": "g^s = t y^{c}" }, { "math_id": 31, "text": "Q" }, { "math_id": 32, "text": "c_1" }, { "math_id": 33, "text": "s_1=r+c_1x" }, { "math_id": 34, "text": "c_2" }, { "math_id": 35, "text": "s_2=r+c_2x" }, { "math_id": 36, "text": "(s_1-s_2)(c_1-c_2)^{-1}" }, { "math_id": 37, "text": "(s_1-s_2)=(r+c_1x)-(r+c_2x)=x(c_1-c_2)" }, { "math_id": 38, "text": "y_1" }, { "math_id": 39, "text": "y_2" }, { "math_id": 40, "text": "g_1" }, { "math_id": 41, "text": "g_2" }, { "math_id": 42, "text": "Z_q" }, { "math_id": 43, "text": "x_1" }, { "math_id": 44, "text": "x_2" }, { "math_id": 45, "text": "y_1= g_1^{x_1} \\land y_2=g_2^{x_2}" }, { "math_id": 46, "text": "x_2 = a x_1 + b" }, { "math_id": 47, "text": "y_1= g_1^{x} \\land y_2={(g_2^a)}^{x} g_2^b" }, { "math_id": 48, "text": "PK\\{(x): y_1= g_1^{x} \\land y_2={(g_2^a)}^{x} g_2^b \\}," } ]
https://en.wikipedia.org/wiki?curid=6527939
652816
Mixing (process engineering)
Process of mechanically stirring a heterogeneous mixture to homogenize it In industrial process engineering, mixing is a unit operation that involves manipulation of a heterogeneous physical system with the intent to make it more homogeneous. Familiar examples include pumping of the water in a swimming pool to homogenize the water temperature, and the stirring of pancake batter to eliminate lumps (deagglomeration). Mixing is performed to allow heat and/or mass transfer to occur between one or more streams, components or phases. Modern industrial processing almost always involves some form of mixing. Some classes of chemical reactors are also mixers. With the right equipment, it is possible to mix a solid, liquid or gas into another solid, liquid or gas. A biofuel fermenter may require the mixing of microbes, gases and liquid medium for optimal yield; organic nitration requires concentrated (liquid) nitric and sulfuric acids to be mixed with a hydrophobic organic phase; production of pharmaceutical tablets requires blending of solid powders. The opposite of mixing is segregation. A classical example of segregation is the brazil nut effect. The mathematics of mixing is highly abstract, and is a part of ergodic theory, itself a part of chaos theory. Mixing classification. The type of operation and equipment used during mixing depends on the state of materials being mixed (liquid, semi-solid, or solid) and the miscibility of the materials being processed. In this context, the act of mixing may be synonymous with stirring-, or kneading-processes. Liquid–liquid mixing. Mixing of liquids occurs frequently in process engineering. The nature of liquids to blend determines the equipment used. Single-phase blending tends to involve low-shear, high-flow mixers to cause liquid engulfment, while multi-phase mixing generally requires the use of high-shear, low-flow mixers to create droplets of one liquid in laminar, turbulent or transitional flow regimes, depending on the Reynolds number of the flow. Turbulent or transitional mixing is frequently conducted with turbines or impellers; laminar mixing is conducted with helical ribbon or anchor mixers. Single-phase blending. Mixing of liquids that are miscible or at least soluble in each other occurs frequently in engineering (and in everyday life). An everyday example would be the addition of milk or cream to tea or coffee. Since both liquids are water-based, they dissolve easily in one another. The momentum of the liquid being added is sometimes enough to cause enough turbulence to mix the two, since the viscosity of both liquids is relatively low. If necessary, a spoon or paddle could be used to complete the mixing process. Blending in a more viscous liquid, such as honey, requires more mixing power per unit volume to achieve the same homogeneity in the same amount of time. Solid–solid mixing. Dry blenders are a type of industrial mixer which are typically used to blend multiple dry components until they are homogeneous. Often minor liquid additions are made to the dry blend to modify the product formulation. Blending times using dry ingredients are often short (15–30 minutes) but are somewhat dependent upon the varying percentages of each component, and the difference in the bulk densities of each. Ribbon, paddle, tumble and vertical blenders are available. Many products including pharmaceuticals, foods, chemicals, fertilizers, plastics, pigments, and cosmetics are manufactured in these designs. 
Dry blenders range in capacity from half-cubic-foot laboratory models to 500-cubic-foot production units. A wide variety of horsepower-and-speed combinations and optional features such as sanitary finishes, vacuum construction, special valves and cover openings are offered by most manufacturers. Blending powders is one of the oldest unit-operations in the solids handling industries. For many decades powder blending has been used just to homogenize bulk materials. Many different machines have been designed to handle materials with various bulk solids properties. On the basis of the practical experience gained with these different machines, engineering knowledge has been developed to construct reliable equipment and to predict scale-up and mixing behavior. Nowadays the same mixing technologies are used for many more applications: to improve product quality, to coat particles, to fuse materials, to wet, to disperse in liquid, to agglomerate, to alter functional material properties, etc. This wide range of applications of mixing equipment requires a high level of knowledge, long time experience and extended test facilities to come to the optimal selection of equipment and processes. Solid-solid mixing can be performed either in batch mixers, which is the simpler form of mixing, or in certain cases in continuous dry-mix, more complex but which provide interesting advantages in terms of segregation, capacity and validation. One example of a solid–solid mixing process is mulling foundry molding sand, where sand, bentonite clay, fine coal dust and water are mixed to a plastic, moldable and reusable mass, applied for molding and pouring molten metal to obtain sand castings that are metallic parts for automobile, machine building, construction or other industries. Mixing mechanisms. In powder two different dimensions in the mixing process can be determined: convective mixing and intensive mixing. In the case of convective mixing material in the mixer is transported from one location to another. This type of mixing leads to a less ordered state inside the mixer, the components that must be mixed are distributed over the other components. With progressing time the mixture becomes more randomly ordered. After a certain mixing time the ultimate random state is reached. Usually this type of mixing is applied for free-flowing and coarse materials. Possible threats during macro mixing is the de-mixing of the components, since differences in size, shape or density of the different particles can lead to segregation. When materials are cohesive, which is the case with e.g. fine particles and also with wet material, convective mixing is no longer sufficient to obtain a randomly ordered mixture. The relative strong inter-particle forces form lumps, which are not broken up by the mild transportation forces in the convective mixer. To decrease the lump size additional forces are necessary; i.e. more energy intensive mixing is required. These additional forces can either be impact forces or shear forces. Liquid–solid mixing. Liquid–solid mixing is typically done to suspend coarse free-flowing solids, or to break up lumps of fine agglomerated solids. An example of the former is the mixing granulated sugar into water; an example of the latter is the mixing of flour or powdered milk into water. 
In the first case, the particles can be lifted into suspension (and separated from one another) by bulk motion of the fluid; in the second, the mixer itself (or the high shear field near it) must destabilize the lumps and cause them to disintegrate. One example of a solid–liquid mixing process in industry is concrete mixing, where cement, sand, small stones or gravel and water are commingled to a homogeneous self-hardening mass, used in the construction industry. Solid suspension. Suspension of solids into a liquid is done to improve the rate of mass transfer between the solid and the liquid. Examples include dissolving a solid reactant into a solvent, or suspending catalyst particles in liquid to improve the flow of reactants and products to and from the particles. The associated eddy diffusion increases the rate of mass transfer within the bulk of the fluid, and the convection of material away from the particles decreases the size of the boundary layer, where most of the resistance to mass transfer occurs. Axial-flow impellers are preferred for solid suspension because solid suspension needs momentum rather than shear, although radial-flow impellers can be used in a tank with baffles, which converts some of the rotational motion into vertical motion. When the solid is denser than the liquid (and therefore collects at the bottom of the tank), the impeller is rotated so that the fluid is pushed downwards; when the solid is less dense than the liquid (and therefore floats on top), the impeller is rotated so that the fluid is pushed upwards (though this is relatively rare). The equipment preferred for solid suspension produces large volumetric flows but not necessarily high shear; high flow-number turbine impellers, such as hydrofoils, are typically used. Multiple turbines mounted on the same shaft can reduce power draw. The degree of homogeneity of a solid-liquid suspension can be described by the RSD (Relative Standard Deviation of the solid volume fraction field in the mixing tank). A perfect suspension would have a RSD of 0% but in practice, a RSD inferior or equal to 20% can be sufficient for the suspension to be considered homogeneous, although this is case-dependent. The RSD can be obtained by experimental measurements or by calculations. Measurements can be performed at full scale but this is generally unpractical, so it is common to perform measurements at small scale and use a "scale-up" criterion to extrapolate the RSD from small to full scale. Calculations can be performed using a computational fluid dynamics software or by using correlations built on theoretical developments, experimental measurements and/or computational fluid dynamics data. Computational fluid dynamics calculations are quite accurate and can accommodate virtually any tank and agitator designs, but they require expertise and long computation time. Correlations are easy to use but are less accurate and don't cover any possible designs. The most popular correlation is the ‘just suspended speed’ correlation published by Zwietering (1958). It's an easy to use correlation but it is not meant for homogeneous suspension. It only provides a crude estimate of the stirring speed for ‘bad’ quality suspensions (partial suspensions) where no particle remains at the bottom for more than 1 or 2 seconds. Another equivalent correlation is the correlation from Mersmann (1998). 
For ‘good’ quality suspensions, some examples of useful correlations can be found in the publications of Barresi (1987), Magelli (1991), Cekinski (2010) or Macqueron (2017). Machine learning can also be used to build models way more accurate than "classical" correlations. Solid deagglomeration. Very fine powders, such as titanium dioxide pigments, and materials that have been spray dried may agglomerate or form lumps during transportation and storage. Starchy materials or those that form gels when exposed to solvent can form lumps that are wetted on the outside but dry on the inside. These types of materials are not easily mixed into liquid with the types of mixers preferred for solid suspension because the agglomerate particles must be subjected to intense shear to be broken up. In some ways, deagglomeration of solids is similar to the blending of immiscible liquids, except for the fact that coalescence is usually not a problem. An everyday example of this type of mixing is the production of milkshakes from liquid milk and solid ice cream. Liquid–gas mixing. Liquids and gases are typically mixed to allow mass transfer to occur. For instance, in the case of air stripping, gas is used to remove volatiles from a liquid. Typically, a packed column is used for this purpose, with the packing acting as a motionless mixer and the air pump providing the driving force. When a tank and impeller are used, the objective is typically to ensure that the gas bubbles remain in contact with the liquid for as long as possible. This is especially important if the gas is expensive, such as pure oxygen, or diffuses slowly into the liquid. Mixing in a tank is also useful when a (relatively) slow chemical reaction is occurring in the liquid phase, and so the concentration difference in the thin layer near the bubble is close to that of the bulk. This reduces the driving force for mass transfer. If there is a (relatively) fast chemical reaction in the liquid phase, it is sometimes advantageous to disperse but not recirculate the gas bubbles, ensuring that they are in plug flow and can transfer mass more efficiently. Rushton turbines have been traditionally used to disperse gases into liquids, but newer options, such as the Smith turbine and Bakker turbine are becoming more prevalent. One of the issues is that as the gas flow increases, more and more of the gas accumulates in the low pressure zones behind the impeller blades, which reduces the power drawn by the mixer (and therefore its effectiveness). Newer designs, such as the GDX impeller, have nearly eliminated this problem. Gas–solid mixing. Gas–solid mixing may be conducted to transport powders or small particulate solids from one place to another, or to mix gaseous reactants with solid catalyst particles. In either case, the turbulent eddies of the gas must provide enough force to suspend the solid particles, which otherwise sink under the force of gravity. The size and shape of the particles is an important consideration, since different particles have different drag coefficients, and particles made of different materials have different densities. A common unit operation the process industry uses to separate gases and solids is the cyclone, which slows the gas and causes the particles to settle out. Multiphase mixing. Multiphase mixing occurs when solids, liquids and gases are combined in one step. 
This may occur as part of a catalytic chemical process, in which liquid and gaseous reagents must be combined with a solid catalyst (such as hydrogenation); or in fermentation, where solid microbes and the gases they require must be well-distributed in a liquid medium. The type of mixer used depends upon the properties of the phases. In some cases, the mixing power is provided by the gas itself as it moves up through the liquid, entraining liquid with the bubble plume. This draws liquid upwards inside the plume, and causes liquid to fall outside the plume. If the viscosity of the liquid is too high to allow for this (or if the solid particles are too heavy), an impeller may be needed to keep the solid particles suspended. Basic nomenclature. For liquid mixing, the nomenclature is rather standardized: Constitutive equations. Many of the equations used for determining the output of mixers are empirically derived, or contain empirically derived constants. Since mixers operate in the turbulent regime, many of the equations are approximations that are considered acceptable for most engineering purposes. When a mixing impeller rotates in the fluid, it generates a combination of flow and shear. The impeller generated flow can be calculated with the following equation: formula_0 Flow numbers for impellers have been published in the North American Mixing Forum sponsored Handbook of Industrial Mixing. The power required to rotate an impeller can be calculated using the following equations: formula_1 (Turbulent regime) formula_2 (Laminar regime) formula_3 is the (dimensionless) power number, which is a function of impeller geometry; formula_4 is the density of the fluid; formula_5 is the rotational speed, typically rotations per second; formula_6 is the diameter of the impeller; formula_7 is the laminar power constant; and formula_8 is the viscosity of the fluid. Note that the mixer power is strongly dependent upon the rotational speed and impeller diameter, and linearly dependent upon either the density or viscosity of the fluid, depending on which flow regime is present. In the transitional regime, flow near the impeller is turbulent and so the turbulent power equation is used. The time required to blend a fluid to within 5% of the final concentration, formula_9, can be calculated with the following correlations: formula_10 (Turbulent regime) formula_11 (Transitional region) formula_12 (Laminar regime) The Transitional/Turbulent boundary occurs at formula_13 The Laminar/Transitional boundary occurs at formula_14 Laboratory mixing. At a laboratory scale, mixing is achieved by magnetic stirrers or by simple hand-shaking. Sometimes mixing in laboratory vessels is more thorough and occurs faster than is possible industrially. Magnetic stir bars are radial-flow mixers that induce solid body rotation in the fluid being mixed. This is acceptable on a small scale, since the vessels are small and mixing therefore occurs rapidly (short blend time). A variety of stir bar configurations exist, but because of the small size and (typically) low viscosity of the fluid, it is possible to use one configuration for nearly all mixing tasks. The cylindrical stir bar can be used for suspension of solids, as seen in iodometry, deagglomeration (useful for preparation of microbiology growth medium from powders), and liquid–liquid blending. Another peculiarity of laboratory mixing is that the mixer rests on the bottom of the vessel instead of being suspended near the center. 
Furthermore, the vessels used for laboratory mixing are typically more widely varied than those used for industrial mixing; for instance, Erlenmeyer flasks, or Florence flasks may be used in addition to the more cylindrical beaker. Mixing in microfluidics. When scaled down to the microscale, fluid mixing behaves radically different. This is typically at sizes from a couple (2 or 3) millimeters down to the nanometer range. At this size range normal advection does not happen unless it is forced by a hydraulic pressure gradient. Diffusion is the dominant mechanism whereby two different fluids come together. Diffusion is a relatively slow process. Hence a number of researchers had to devise ways to get the two fluids to mix. This involved Y junctions, T junctions, three-way intersections and designs where the interfacial area between the two fluids is maximized. Beyond just interfacing the two liquids people also made twisting channels to force the two fluids to mix. These included multilayered devices where the fluids would corkscrew, looped devices where the fluids would flow around obstructions and wavy devices where the channel would constrict and flare out. Additionally channels with features on the walls like notches or groves were tried. One way to know if mixing is happening due to advection or diffusion is by finding the Peclet number. It is the ratio of advection to diffusion. At high Peclet numbers (&gt; 1), advection dominates. At low Peclet numbers (&lt; 1), diffusion dominates. Peclet number = (flow velocity × mixing path) / diffusion coefficient Industrial mixing equipment. At an industrial scale, efficient mixing can be difficult to achieve. A great deal of engineering effort goes into designing and improving mixing processes. Mixing at industrial scale is done in batches (dynamic mixing), inline or with help of static mixers. Moving mixers are powered with electric motors that operate at standard speeds of 1800 or 1500 RPM, which is typically much faster than necessary. Gearboxes are used to reduce speed and increase torque. Some applications require the use of multi-shaft mixers, in which a combination of mixer types are used to completely blend the product. In addition to performing typical batch mixing operations, some mixing can be done continuously. Using a machine like the Continuous Processor, one or more dry ingredients and one or more liquid ingredients can be accurately and consistently metered into the machine and see a continuous, homogeneous mixture come out the discharge of the machine. Many industries have converted to continuous mixing for many reasons. Some of those are ease of cleaning, lower energy consumption, smaller footprint, versatility, control, and many others. Continuous mixers, such as the twin-screw Continuous Processor, also have the ability to handle very high viscosities. Turbines. A selection of turbine geometries and power numbers are shown below. Different types of impellers are used for different tasks; for instance, Rushton turbines are useful for dispersing gases into liquids, but are not very helpful for dispersing settled solids into liquid. Newer turbines have largely supplanted the Rushton turbine for gas–liquid mixing, such as the Smith turbine and Bakker turbine. 
The power number is an empirical measure of the amount of torque needed to drive different impellers in the same fluid at constant power per unit volume; impellers with higher power numbers require more torque but operate at lower speed than impellers with lower power numbers, which operate at lower torque but higher speeds. Close-clearance mixers. There are two main types of close-clearance mixers: anchors and helical ribbons. Anchor mixers induce solid-body rotation and do not promote vertical mixing, but helical ribbons do. Close clearance mixers are used in the laminar regime, because the viscosity of the fluid overwhelms the inertial forces of the flow and prevents the fluid leaving the impeller from entraining the fluid next to it. Helical ribbon mixers are typically rotated to push material at the wall downwards, which helps circulate the fluid and refresh the surface at the wall. High shear dispersers. High shear dispersers create intense shear near the impeller but relatively little flow in the bulk of the vessel. Such devices typically resemble circular saw blades and are rotated at high speed. Because of their shape, they have a relatively low drag coefficient and therefore require comparatively little torque to spin at high speed. High shear dispersers are used for forming emulsions (or suspensions) of immiscible liquids and solid deagglomeration. Static mixers. Static mixers are used when a mixing tank would be too large, too slow, or too expensive to use in a given process. Liquid whistles. Liquid whistles are a kind of static mixer which pass fluid at high pressure through an orifice and subsequently over a blade. This subjects the fluid to high turbulent stresses and may result in mixing, emulsification, deagglomeration and disinfection. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " Q = Fl*N*D^3 " }, { "math_id": 1, "text": " P= P_{o}\\rho N^3D^5 " }, { "math_id": 2, "text": " P= K_p\\mu N^2D^3 " }, { "math_id": 3, "text": " P_{o} " }, { "math_id": 4, "text": " \\rho " }, { "math_id": 5, "text": " N " }, { "math_id": 6, "text": " D " }, { "math_id": 7, "text": " K_p " }, { "math_id": 8, "text": " \\mu " }, { "math_id": 9, "text": " {\\theta_{95}} " }, { "math_id": 10, "text": " {\\theta_{95}} = \\frac {5.40} {P_{o}^{1 \\over 3} N} (\\frac {T} {D})^2 " }, { "math_id": 11, "text": " {\\theta_{95}} = \\frac {34596} {P_{o}^{1 \\over 3} N^2 D^2} (\\frac {\\mu} {\\rho}) (\\frac {T} {D})^2 " }, { "math_id": 12, "text": " {\\theta_{95}} = \\frac {896*10^3 K_p^{-1.69}} {N} " }, { "math_id": 13, "text": " P_{o}^{1 \\over 3} Re = 6404 " }, { "math_id": 14, "text": " P_{o}^{1 \\over 3} Re = 186 " } ]
https://en.wikipedia.org/wiki?curid=652816
65288835
Mészáros effect
Evolution of Cold Dark Matter perturbations The Mészáros effect "is the main physical process that alters the shape of the initial power spectrum of fluctuations in the cold dark matter theory of cosmological structure formation". It was introduced in 1974 by Péter Mészáros, considering the behavior of dark matter perturbations in the range around the radiation-matter equilibrium redshift formula_0 and up to the radiation decoupling redshift formula_1. This showed that, for a non-baryonic cold dark matter not coupled to radiation, the small initial perturbations expected to give rise to the present day large scale structures experience below formula_2 an additional distinct growth period which alters the initial fluctuation power spectrum, and allows sufficient time for the fluctuations to grow into galaxies and galaxy clusters by the present epoch. This involved introducing and solving a joint radiation plus dark matter perturbation equation for the density fluctuations formula_3, formula_4 in which formula_5, the variable formula_6, and formula_7 is the length scale parametrizing the expansion of the Universe. The analytical solution has a growing mode formula_8. The effect is referred to as the Mészáros effect, and the equation as the Mészáros equation. The process is independent of whether the cold dark matter consists of elementary particles or macroscopic objects. It determines the cosmological transfer function of the original fluctuation spectrum, and it has been incorporated in all subsequent treatments of cosmological large scale structure evolution. A more specific galaxy formation scenario involving this effect was discussed by Mészáros in 1975, explicitly assuming that the dark matter might consist of approximately solar mass primordial black holes, an idea which has received increased attention after the discovery in 2015 of gravitational waves from stellar-mass black holes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
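The quoted growing mode can be checked by direct substitution into the perturbation equation, taking primes to denote derivatives with respect to formula_6 (an assumption consistent with the form of the solution quoted above):

```latex
% With \delta = 1 + \tfrac{3}{2}y, so that \delta' = \tfrac{3}{2} and \delta'' = 0:
\delta'' + \frac{2+3y}{2y(1+y)}\,\delta' - \frac{3}{2y(1+y)}\,\delta
  = \frac{3(2+3y)}{4y(1+y)} - \frac{3\bigl(1+\tfrac{3}{2}y\bigr)}{2y(1+y)}
  = \frac{(6+9y)-(6+9y)}{4y(1+y)} = 0 .
```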
[ { "math_id": 0, "text": "z_\\text{eq}" }, { "math_id": 1, "text": "z_\\text{dec}" }, { "math_id": 2, "text": "z_{eq}" }, { "math_id": 3, "text": "\\Delta \\rho" }, { "math_id": 4, "text": " \\delta'' + \\frac{2+3y}{2y(1+y)}\\delta' - \\frac{3}{2y(1+y)} \\delta = 0 , " }, { "math_id": 5, "text": "\\delta \\equiv \\Delta\\rho/\\langle \\rho \\rangle " }, { "math_id": 6, "text": "y = \\rho_m/\\rho_r = a/a_\\text{eq}" }, { "math_id": 7, "text": "a" }, { "math_id": 8, "text": "\\delta \\propto 1 + \\tfrac{3}{2} y" } ]
https://en.wikipedia.org/wiki?curid=65288835
6529735
Bubble (physics)
Globule of one substance in another, typically gas in a liquid A bubble is a globule of a gas in a liquid. In the opposite case, a globule of a liquid in a gas is called a drop. Due to the Marangoni effect, bubbles may remain intact when they reach the surface of the immersing substance. Common examples. Bubbles are seen in many places in everyday life, for example: Physics and chemistry. Bubbles form and coalesce into globular shapes because those shapes are at a lower energy state. For the physics and chemistry behind it, see nucleation. Appearance. Bubbles are visible because they have a different refractive index (RI) than the surrounding substance. For example, the RI of air is approximately 1.0003 and the RI of water is approximately 1.333. Snell's Law describes how electromagnetic waves change direction at the interface between two media with different RI; thus bubbles can be identified from the accompanying refraction and internal reflection even though both the immersed and immersing media are transparent. The above explanation only holds for bubbles of one medium submerged in another medium (e.g. bubbles of gas in a soft drink); the volume of a membrane bubble (e.g. soap bubble) will not distort light very much, and one can only see a membrane bubble due to thin-film diffraction and reflection. Applications. Nucleation can be intentionally induced, for example, to create a bubblegram in a solid. In medical ultrasound imaging, small encapsulated bubbles called contrast agents are used to enhance the contrast. In thermal inkjet printing, vapor bubbles are used as actuators. They are occasionally used as actuators in other microfluidic applications. The violent collapse of bubbles (cavitation) near solid surfaces and the resulting impinging jet constitute the mechanism used in ultrasonic cleaning. The same effect, but on a larger scale, is used in focused energy weapons such as the bazooka and the torpedo. The pistol shrimp also uses a collapsing cavitation bubble as a weapon. The same effect is used to treat kidney stones in a lithotripter. Marine mammals such as dolphins and whales use bubbles for entertainment or as hunting tools. Aerators cause the dissolution of gas in the liquid by injecting bubbles. Bubbles are used by chemical and metallurgical engineers in processes such as distillation, absorption, flotation and spray drying. The complex processes involved often require consideration of mass and heat transfer and are modeled using fluid dynamics. The star-nosed mole and the American water shrew can smell underwater by rapidly breathing through their nostrils and creating a bubble. Research on the origin of life on Earth suggests that bubbles may have played an integral role in confining and concentrating precursor molecules for life, a function currently performed by cell membranes. Bubble lasers use bubbles as the optical resonator. They can be used as highly sensitive pressure sensors. Pulsation. When bubbles are disturbed (for example when a gas bubble is injected underwater), the wall oscillates. Although it is often visually masked by much larger deformations in shape, a component of the oscillation changes the bubble volume (i.e. it is pulsation) which, in the absence of an externally-imposed sound field, occurs at the bubble's natural frequency. The pulsation is the most important component of the oscillation, acoustically, because by changing the gas volume, it changes its pressure, and leads to the emission of sound at the bubble's natural frequency.
For air bubbles in water, large bubbles (negligible surface tension and thermal conductivity) undergo adiabatic pulsations, which means that no heat is transferred either from the liquid to the gas or vice versa. The natural frequency of such bubbles is determined by the equation: formula_0 where formula_1 is the specific heat ratio of the gas, formula_2 is the equilibrium radius of the bubble, formula_3 is the ambient (static) pressure, and formula_4 is the density of the surrounding liquid. For air bubbles in water, smaller bubbles undergo isothermal pulsations. The corresponding equation for small bubbles of surface tension σ (and negligible liquid viscosity) is formula_5 Excited bubbles trapped underwater are the major source of liquid sounds, such as the sound produced inside our knuckles during knuckle cracking, or when a rain droplet impacts a surface of water. Physiology and medicine. Injury by bubble formation and growth in body tissues is the mechanism of decompression sickness, which occurs when supersaturated dissolved inert gases leave the solution as bubbles during decompression. The damage can be caused by mechanical deformation of tissues due to bubble growth in situ, or by the blocking of blood vessels where a bubble has lodged. Arterial gas embolism can occur when a gas bubble is introduced into the circulatory system and lodges in a blood vessel that is too small for it to pass through under the available pressure difference. This can occur as a result of decompression after hyperbaric exposure, a lung overexpansion injury, during intravenous fluid administration, or during surgery. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
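The adiabatic natural-frequency expression above can be evaluated numerically; the following sketch, with assumed input values, reproduces the well-known result that a millimeter-sized air bubble in water rings at a few kilohertz.

```python
# Evaluates the adiabatic (large-bubble) natural frequency quoted above.
import math

R0 = 1.0e-3      # equilibrium bubble radius, m (assumed: 1 mm)
gamma = 1.4      # specific heat ratio of air
p0 = 101325.0    # ambient pressure, Pa (assumed: 1 atm)
rho = 1000.0     # density of the surrounding water, kg/m^3

f0 = (1.0 / (2.0 * math.pi * R0)) * math.sqrt(3.0 * gamma * p0 / rho)
print(f"Natural frequency: {f0:.0f} Hz")   # roughly 3.3 kHz for a 1 mm bubble
```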
[ { "math_id": 0, "text": "f_0 = {1 \\over 2 \\pi R_0}\\sqrt{3 \\gamma p_0 \\over \\rho}" }, { "math_id": 1, "text": "\\gamma" }, { "math_id": 2, "text": "R_0" }, { "math_id": 3, "text": "p_0" }, { "math_id": 4, "text": "\\rho" }, { "math_id": 5, "text": "f_0 = {1 \\over 2 \\pi R_0}\\sqrt{{3 p_0 \\over \\rho}+{4 \\sigma \\over \\rho R_0}}" } ]
https://en.wikipedia.org/wiki?curid=6529735
65309
Support vector machine
Set of methods for supervised statistical learning &lt;templatestyles src="Machine learning/styles.css"/&gt; In machine learning, support vector machines (SVMs, also support vector networks) are supervised max-margin models with associated learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories by Vladimir Vapnik with colleagues (Boser et al., 1992, Guyon et al., 1993, Cortes and Vapnik, 1995, Vapnik et al., 1997), SVMs are among the most studied models, being based on the statistical learning framework of VC theory proposed by Vapnik (1982, 1995) and Chervonenkis (1974). In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, representing the data only through a set of pairwise similarity comparisons between the original data points using a kernel function, which transforms them into coordinates in a higher-dimensional feature space. Thus, SVMs use the kernel trick to implicitly map their inputs into high-dimensional feature spaces, where linear classification can be performed. Being max-margin models, SVMs are resilient to noisy data (for example, misclassified examples). SVMs can also be used for regression tasks, where the objective becomes formula_0-sensitive. The support vector clustering algorithm, created by Hava Siegelmann and Vladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data. These data sets require unsupervised learning approaches, which attempt to find natural clustering of the data into groups and then map new data according to these clusters. The popularity of SVMs is likely due to their amenability to theoretical analysis and their flexibility in being applied to a wide variety of tasks, including structured prediction problems. It is not clear that SVMs have better predictive performance than other linear models, such as logistic regression and linear regression. Motivation. Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a "new" data point will be in. In the case of support vector machines, a data point is viewed as a formula_1-dimensional vector (a list of formula_1 numbers), and we want to know whether we can separate such points with a formula_2-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice for the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the "maximum-margin hyperplane" and the linear classifier it defines is known as a "maximum-margin classifier"; or equivalently, the "perceptron of optimal stability". More formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks such as outlier detection. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training-data point of any class (the so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier.
A lower generalization error means that the implementer is less likely to experience overfitting. Whereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are not linearly separable in that space. For this reason, it was proposed that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space. To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure that dot products of pairs of input data vectors may be computed easily in terms of the variables in the original space, by defining them in terms of a kernel function formula_3 selected to suit the problem. The hyperplanes in the higher-dimensional space are defined as the set of points whose dot product with a vector in that space is constant, where such a set of vectors is an orthogonal (and thus minimal) set of vectors that defines a hyperplane. The vectors defining the hyperplanes can be chosen to be linear combinations with parameters formula_4 of images of feature vectors formula_5 that occur in the data base. With this choice of a hyperplane, the points formula_6 in the feature space that are mapped into the hyperplane are defined by the relation formula_7 Note that if formula_3 becomes small as formula_8 grows further away from formula_6, each term in the sum measures the degree of closeness of the test point formula_6 to the corresponding data base point formula_5. In this way, the sum of kernels above can be used to measure the relative nearness of each test point to the data points originating in one or the other of the sets to be discriminated. Note the fact that the set of points formula_6 mapped into any hyperplane can be quite convoluted as a result, allowing much more complex discrimination between sets that are not convex at all in the original space. Applications. SVMs can be used to solve various real-world problems: History. The original SVM algorithm was invented by Vladimir N. Vapnik and Alexey Ya. Chervonenkis in 1964. In 1992, Bernhard Boser, Isabelle Guyon and Vladimir Vapnik suggested a way to create nonlinear classifiers by applying the kernel trick to maximum-margin hyperplanes. The "soft margin" incarnation, as is commonly used in software packages, was proposed by Corinna Cortes and Vapnik in 1993 and published in 1995. Linear SVM. We are given a training dataset of formula_9 points of the form formula_10 where the formula_11 are either 1 or −1, each indicating the class to which the point formula_12 belongs. Each formula_12 is a formula_1-dimensional real vector. We want to find the "maximum-margin hyperplane" that divides the group of points formula_13 for which formula_14 from the group of points for which formula_15, which is defined so that the distance between the hyperplane and the nearest point formula_12 from either group is maximized. Any hyperplane can be written as the set of points formula_16 satisfying formula_17 where formula_18 is the (not necessarily normalized) normal vector to the hyperplane. This is much like Hesse normal form, except that formula_18 is not necessarily a unit vector. The parameter formula_19 determines the offset of the hyperplane from the origin along the normal vector formula_18. Warning: most of the literature on the subject defines the bias so that formula_20 Hard-margin. 
If the training data is linearly separable, we can select two parallel hyperplanes that separate the two classes of data, so that the distance between them is as large as possible. The region bounded by these two hyperplanes is called the "margin", and the maximum-margin hyperplane is the hyperplane that lies halfway between them. With a normalized or standardized dataset, these hyperplanes can be described by the equations formula_21 (anything on or above this boundary is of one class, with label 1) and formula_22 (anything on or below this boundary is of the other class, with label −1). Geometrically, the distance between these two hyperplanes is formula_23, so to maximize the distance between the planes we want to minimize formula_24. The distance is computed using the equation for the distance from a point to a plane. We also have to prevent data points from falling into the margin, so we add the following constraint: for each formula_25, either formula_26 or formula_27 These constraints state that each data point must lie on the correct side of the margin; collectively they are referred to below as constraint (1). We can put this together to get the optimization problem: formula_28 The formula_18 and formula_29 that solve this problem determine the final classifier, formula_30, where formula_31 is the sign function. An important consequence of this geometric description is that the max-margin hyperplane is completely determined by those formula_13 that lie nearest to it (explained below). These formula_13 are called "support vectors". Soft-margin. To extend SVM to cases in which the data are not linearly separable, the "hinge loss" function is helpful: formula_32 Note that formula_11 is the "i"-th target (i.e., in this case, 1 or −1), and formula_33 is the "i"-th output. This function is zero if the constraint in (1) is satisfied, in other words, if formula_13 lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin. The goal of the optimization then is to minimize: formula_34 where the parameter formula_35 determines the trade-off between increasing the margin size and ensuring that the formula_13 lie on the correct side of the margin (note that a weight can be added to either term in the equation above). By deconstructing the hinge loss, this optimization problem can be massaged into the following: formula_36 Thus, for large values of formula_37, it will behave similarly to the hard-margin SVM, if the input data are linearly classifiable, but will still learn whether or not a classification rule is viable. Nonlinear kernels. The original maximum-margin hyperplane algorithm proposed by Vapnik in 1963 constructed a linear classifier. However, in 1992, Bernhard Boser, Isabelle Guyon and Vladimir Vapnik suggested a way to create nonlinear classifiers by applying the kernel trick (originally proposed by Aizerman et al.) to maximum-margin hyperplanes. The kernel trick, where dot products are replaced by kernels, is easily derived in the dual representation of the SVM problem. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be nonlinear and the transformed space high-dimensional; although the classifier is a hyperplane in the transformed feature space, it may be nonlinear in the original input space.
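As a concrete, minimal sketch of the soft-margin formulation above, the snippet below fits a linear SVM with scikit-learn (one of the toolkits mentioned later in this article); the synthetic data and the value of the trade-off parameter C are assumptions made purely for illustration, and swapping kernel="linear" for kernel="rbf" would apply the kernel trick described in the following text. Note that scikit-learn writes the decision function as w·x + b, the sign convention mentioned in the warning above.

```python
# Minimal soft-margin SVM fit on synthetic two-class data (illustrative only).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two roughly separable point clouds in the plane, labeled -1 and +1.
X = np.vstack([rng.normal(loc=(-2.0, 0.0), scale=1.0, size=(50, 2)),
               rng.normal(loc=(+2.0, 0.0), scale=1.0, size=(50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

# C is the soft-margin trade-off: large C approximates the hard margin,
# small C tolerates more margin violations.
clf = SVC(kernel="linear", C=1.0).fit(X, y)
print("support vectors per class:", clf.n_support_)
print("w =", clf.coef_[0], " b =", clf.intercept_[0])
print("training accuracy:", clf.score(X, y))
```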
It is noteworthy that working in a higher-dimensional feature space increases the generalization error of support vector machines, although given enough samples the algorithm still performs well. Some common kernels include: The kernel is related to the transform formula_47 by the equation formula_48. The value w is also in the transformed space, with formula_49. Dot products with w for classification can again be computed by the kernel trick, i.e. formula_50. Computing the SVM classifier. Computing the (soft-margin) SVM classifier amounts to minimizing an expression of the form We focus on the soft-margin classifier since, as noted above, choosing a sufficiently small value for formula_51 yields the hard-margin classifier for linearly classifiable input data. The classical approach, which involves reducing (2) to a quadratic programming problem, is detailed below. Then, more recent approaches such as sub-gradient descent and coordinate descent will be discussed. Primal. Minimizing (2) can be rewritten as a constrained optimization problem with a differentiable objective function in the following way. For each formula_52 we introduce a variable formula_53. Note that formula_54 is the smallest nonnegative number satisfying formula_55 Thus we can rewrite the optimization problem as follows formula_56 This is called the "primal" problem. Dual. By solving for the Lagrangian dual of the above problem, one obtains the simplified problem formula_57 This is called the "dual" problem. Since the dual maximization problem is a quadratic function of the formula_58 subject to linear constraints, it is efficiently solvable by quadratic programming algorithms. Here, the variables formula_58 are defined such that formula_59 Moreover, formula_60 exactly when formula_61 lies on the correct side of the margin, and formula_62 when formula_61 lies on the margin's boundary. It follows that formula_18 can be written as a linear combination of the support vectors. The offset, formula_63, can be recovered by finding an formula_61 on the margin's boundary and solving formula_64 Kernel trick. Suppose now that we would like to learn a nonlinear classification rule which corresponds to a linear classification rule for the transformed data points formula_67 Moreover, we are given a kernel function formula_68 which satisfies formula_69. We know the classification vector formula_18 in the transformed space satisfies formula_70 where, the formula_71 are obtained by solving the optimization problem formula_72 The coefficients formula_58 can be solved for using quadratic programming, as before. Again, we can find some index formula_73 such that formula_62, so that formula_74 lies on the boundary of the margin in the transformed space, and then solve formula_75 Finally, formula_76 Modern methods. Recent algorithms for finding the SVM classifier include sub-gradient descent and coordinate descent. Both techniques have proven to offer significant advantages over the traditional approach when dealing with large, sparse datasets—sub-gradient methods are especially efficient when there are many training examples, and coordinate descent when the dimension of the feature space is high. Sub-gradient descent. Sub-gradient descent algorithms for the SVM work directly with the expression formula_77 Note that formula_78 is a convex function of formula_18 and formula_29. 
As such, traditional gradient descent (or SGD) methods can be adapted, where instead of taking a step in the direction of the function's gradient, a step is taken in the direction of a vector selected from the function's sub-gradient. This approach has the advantage that, for certain implementations, the number of iterations does not scale with formula_9, the number of data points. Coordinate descent. Coordinate descent algorithms for the SVM work from the dual problem formula_79 For each formula_80, iteratively, the coefficient formula_58 is adjusted in the direction of formula_81. Then, the resulting vector of coefficients formula_82 is projected onto the nearest vector of coefficients that satisfies the given constraints. (Typically Euclidean distances are used.) The process is then repeated until a near-optimal vector of coefficients is obtained. The resulting algorithm is extremely fast in practice, although few performance guarantees have been proven. Empirical risk minimization. The soft-margin support vector machine described above is an example of an empirical risk minimization (ERM) algorithm for the "hinge loss". Seen this way, support vector machines belong to a natural class of algorithms for statistical inference, and many of its unique features are due to the behavior of the hinge loss. This perspective can provide further insight into how and why SVMs work, and allow us to better analyze their statistical properties. Risk minimization. In supervised learning, one is given a set of training examples formula_83 with labels formula_84, and wishes to predict formula_85 given formula_86. To do so one forms a hypothesis, formula_78, such that formula_87 is a "good" approximation of formula_85. A "good" approximation is usually defined with the help of a "loss function," formula_88, which characterizes how bad formula_89 is as a prediction of formula_8. We would then like to choose a hypothesis that minimizes the "expected risk:" formula_90 In most cases, we don't know the joint distribution of formula_91 outright. In these cases, a common strategy is to choose the hypothesis that minimizes the "empirical risk:" formula_92 Under certain assumptions about the sequence of random variables formula_93 (for example, that they are generated by a finite Markov process), if the set of hypotheses being considered is small enough, the minimizer of the empirical risk will closely approximate the minimizer of the expected risk as formula_9 grows large. This approach is called "empirical risk minimization," or ERM. Regularization and stability. In order for the minimization problem to have a well-defined solution, we have to place constraints on the set formula_94 of hypotheses being considered. If formula_94 is a normed space (as is the case for SVM), a particularly effective technique is to consider only those hypotheses formula_95 for which formula_96 . This is equivalent to imposing a "regularization penalty" formula_97, and solving the new optimization problem formula_98 This approach is called "Tikhonov regularization." More generally, formula_99 can be some measure of the complexity of the hypothesis formula_78, so that simpler hypotheses are preferred. SVM and the hinge loss. 
Recall that the (soft-margin) SVM classifier formula_100 is chosen to minimize the following expression: formula_101 In light of the above discussion, we see that the SVM technique is equivalent to empirical risk minimization with Tikhonov regularization, where in this case the loss function is the hinge loss formula_102 From this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss, formula_103; logistic regression employs the log-loss, formula_104 Target functions. The difference between the hinge loss and these other loss functions is best stated in terms of "target functions -" the function that minimizes expected risk for a given pair of random variables formula_105. In particular, let formula_106 denote formula_8 conditional on the event that formula_107. In the classification setting, we have: formula_108 The optimal classifier is therefore: formula_109 For the square-loss, the target function is the conditional expectation function, formula_110; For the logistic loss, it's the logit function, formula_111. While both of these target functions yield the correct classifier, as formula_112, they give us more information than we need. In fact, they give us enough information to completely describe the distribution of formula_113. On the other hand, one can check that the target function for the hinge loss is "exactly" formula_114. Thus, in a sufficiently rich hypothesis space—or equivalently, for an appropriately chosen kernel—the SVM classifier will converge to the simplest function (in terms of formula_115) that correctly classifies the data. This extends the geometric interpretation of SVM—for linear classification, the empirical risk is minimized by any function whose margins lie between the support vectors, and the simplest of these is the max-margin classifier. Properties. SVMs belong to a family of generalized linear classifiers and can be interpreted as an extension of the perceptron. They can also be considered a special case of Tikhonov regularization. A special property is that they simultaneously minimize the empirical "classification error" and maximize the "geometric margin"; hence they are also known as maximum margin classifiers. A comparison of the SVM to other classifiers has been made by Meyer, Leisch and Hornik. Parameter selection. The effectiveness of SVM depends on the selection of kernel, the kernel's parameters, and soft margin parameter formula_51. A common choice is a Gaussian kernel, which has a single parameter "formula_116". The best combination of formula_51 and formula_116 is often selected by a grid search with exponentially growing sequences of formula_51 and "formula_116", for example, formula_117; formula_118. Typically, each combination of parameter choices is checked using cross validation, and the parameters with best cross-validation accuracy are picked. Alternatively, recent work in Bayesian optimization can be used to select formula_51 and "formula_116" , often requiring the evaluation of far fewer parameter combinations than grid search. The final model, which is used for testing and for classifying new data, is then trained on the whole training set using the selected parameters. Issues. Potential drawbacks of the SVM include the following aspects: Extensions. Multiclass SVM. 
Multiclass SVM aims to assign labels to instances by using support vector machines, where the labels are drawn from a finite set of several elements. The dominant approach for doing so is to reduce the single multiclass problem into multiple binary classification problems. Common methods for such reduction include: Crammer and Singer proposed a multiclass SVM method which casts the multiclass classification problem into a single optimization problem, rather than decomposing it into multiple binary classification problems. See also Lee, Lin and Wahba and Van den Burg and Groenen. Transductive support vector machines. Transductive support vector machines extend SVMs in that they could also treat partially labeled data in semi-supervised learning by following the principles of transduction. Here, in addition to the training set formula_119, the learner is also given a set formula_120 of test examples to be classified. Formally, a transductive support vector machine is defined by the following primal optimization problem: Minimize (in formula_121) formula_122 subject to (for any formula_123 and any formula_124) formula_125 and formula_126 Transductive support vector machines were introduced by Vladimir N. Vapnik in 1998. Structured SVM. Structured support-vector machine is an extension of the traditional SVM model. While the SVM model is primarily designed for binary classification, multiclass classification, and regression tasks, structured SVM broadens its application to handle general structured output labels, for example parse trees, classification with taxonomies, sequence alignment and many more. Regression. A version of SVM for regression was proposed in 1996 by Vladimir N. Vapnik, Harris Drucker, Christopher J. C. Burges, Linda Kaufman and Alexander J. Smola. This method is called support vector regression (SVR). The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training points that lie beyond the margin. Analogously, the model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data close to the model prediction. Another SVM version known as least-squares support vector machine (LS-SVM) has been proposed by Suykens and Vandewalle. Training the original SVR means solving minimize formula_127 subject to formula_128 where formula_5 is a training sample with target value formula_11. The inner product plus intercept formula_129 is the prediction for that sample, and formula_130 is a free parameter that serves as a threshold: all predictions have to be within an formula_130 range of the true predictions. Slack variables are usually added into the above to allow for errors and to allow approximation in the case the above problem is infeasible. Bayesian SVM. In 2011 it was shown by Polson and Scott that the SVM admits a Bayesian interpretation through the technique of data augmentation. In this approach the SVM is viewed as a graphical model (where the parameters are connected via probability distributions). This extended view allows the application of Bayesian techniques to SVMs, such as flexible feature modeling, automatic hyperparameter tuning, and predictive uncertainty quantification. Recently, a scalable version of the Bayesian SVM was developed by Florian Wenzel, enabling the application of Bayesian SVMs to big data. 
Florian Wenzel developed two different versions, a variational inference (VI) scheme for the Bayesian kernel support vector machine (SVM) and a stochastic version (SVI) for the linear Bayesian SVM. Implementation. The parameters of the maximum-margin hyperplane are derived by solving the optimization. There exist several specialized algorithms for quickly solving the quadratic programming (QP) problem that arises from SVMs, mostly relying on heuristics for breaking the problem down into smaller, more manageable chunks. Another approach is to use an interior-point method that uses Newton-like iterations to find a solution of the Karush–Kuhn–Tucker conditions of the primal and dual problems. Instead of solving a sequence of broken-down problems, this approach directly solves the problem altogether. To avoid solving a linear system involving the large kernel matrix, a low-rank approximation to the matrix is often used in the kernel trick. Another common method is Platt's sequential minimal optimization (SMO) algorithm, which breaks the problem down into 2-dimensional sub-problems that are solved analytically, eliminating the need for a numerical optimization algorithm and matrix storage. This algorithm is conceptually simple, easy to implement, generally faster, and has better scaling properties for difficult SVM problems. The special case of linear support vector machines can be solved more efficiently by the same kind of algorithms used to optimize its close cousin, logistic regression; this class of algorithms includes sub-gradient descent (e.g., PEGASOS) and coordinate descent (e.g., LIBLINEAR). LIBLINEAR has some attractive training-time properties. Each convergence iteration takes time linear in the time taken to read the train data, and the iterations also have a Q-linear convergence property, making the algorithm extremely fast. The general kernel SVMs can also be solved more efficiently using sub-gradient descent (e.g. P-packSVM), especially when parallelization is allowed. Kernel SVMs are available in many machine-learning toolkits, including LIBSVM, MATLAB, SAS, SVMlight, kernlab, scikit-learn, Shogun, Weka, Shark, JKernelMachines, OpenCV and others. Preprocessing of data (standardization) is highly recommended to enhance accuracy of classification. There are a few methods of standardization, such as min-max, normalization by decimal scaling, Z-score. Subtraction of mean and division by variance of each feature is usually used for SVM. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
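To make the sub-gradient descent approach discussed above more tangible, the following self-contained, Pegasos-style sketch minimizes the primal soft-margin objective on synthetic data; the bias term is omitted and constant factors are simplified, so this is an illustration of the idea rather than a reference implementation.

```python
# Toy Pegasos-style sub-gradient descent for the (bias-free) primal objective.
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # decreasing step size
            if y[i] * (w @ X[i]) < 1:        # hinge loss active: data term enters the sub-gradient
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:                            # only the regularizer contributes
                w = (1.0 - eta * lam) * w
    return w

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(+2.0, 1.0, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)
w = pegasos_svm(X, y)
print("learned w:", w, " training accuracy:", np.mean(np.sign(X @ w) == y))
```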
[ { "math_id": 0, "text": "\\epsilon" }, { "math_id": 1, "text": "p" }, { "math_id": 2, "text": "(p-1)" }, { "math_id": 3, "text": "k(x, y)" }, { "math_id": 4, "text": "\\alpha_i" }, { "math_id": 5, "text": "x_i" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "\\textstyle\\sum_i \\alpha_i k(x_i, x) = \\text{constant}." }, { "math_id": 8, "text": "y" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": " (\\mathbf{x}_1, y_1), \\ldots, (\\mathbf{x}_n, y_n)," }, { "math_id": 11, "text": "y_i" }, { "math_id": 12, "text": "\\mathbf{x}_i " }, { "math_id": 13, "text": "\\mathbf{x}_i" }, { "math_id": 14, "text": "y_i = 1" }, { "math_id": 15, "text": "y_i = -1" }, { "math_id": 16, "text": "\\mathbf{x}" }, { "math_id": 17, "text": "\\mathbf{w}^\\mathsf{T} \\mathbf{x} - b = 0," }, { "math_id": 18, "text": "\\mathbf{w}" }, { "math_id": 19, "text": "\\tfrac{b}{\\|\\mathbf{w}\\|}" }, { "math_id": 20, "text": "\\mathbf{w}^\\mathsf{T} \\mathbf{x} + b = 0." }, { "math_id": 21, "text": "\\mathbf{w}^\\mathsf{T} \\mathbf{x} - b = 1" }, { "math_id": 22, "text": "\\mathbf{w}^\\mathsf{T} \\mathbf{x} - b = -1" }, { "math_id": 23, "text": "\\tfrac{2}{\\|\\mathbf{w}\\|}" }, { "math_id": 24, "text": "\\|\\mathbf{w}\\|" }, { "math_id": 25, "text": "i" }, { "math_id": 26, "text": "\\mathbf{w}^\\mathsf{T} \\mathbf{x}_i - b \\ge 1 \\, , \\text{ if } y_i = 1," }, { "math_id": 27, "text": "\\mathbf{w}^\\mathsf{T} \\mathbf{x}_i - b \\le -1 \\, , \\text{ if } y_i = -1." }, { "math_id": 28, "text": "\\begin{align}\n&\\underset{\\mathbf{w},\\;b}{\\operatorname{minimize}} && \\frac{1}{2}\\|\\mathbf{w}\\|^2\\\\\n&\\text{subject to} && y_i(\\mathbf{w}^\\top \\mathbf{x}_i - b) \\geq 1 \\quad \\forall i \\in \\{1,\\dots,n\\}\n\\end{align}" }, { "math_id": 29, "text": "b" }, { "math_id": 30, "text": "\\mathbf{x} \\mapsto \\sgn(\\mathbf{w}^\\mathsf{T} \\mathbf{x} - b)" }, { "math_id": 31, "text": "\\sgn(\\cdot)" }, { "math_id": 32, "text": "\\max\\left(0, 1 - y_i(\\mathbf{w}^\\mathsf{T} \\mathbf{x}_i - b)\\right)." 
}, { "math_id": 33, "text": "\\mathbf{w}^\\mathsf{T} \\mathbf{x}_i - b" }, { "math_id": 34, "text": " \\lVert \\mathbf{w} \\rVert^2 + C \\left[\\frac 1 n \\sum_{i=1}^n \\max\\left(0, 1 - y_i(\\mathbf{w}^\\mathsf{T} \\mathbf{x}_i - b)\\right) \\right]," }, { "math_id": 35, "text": "C > 0" }, { "math_id": 36, "text": "\\begin{align}\n&\\underset{\\mathbf{w},\\;b,\\;\\mathbf{\\zeta}}{\\operatorname{minimize}} &&\\|\\mathbf{w}\\|_2^2 + C\\sum_{i=1}^n \\zeta_i\\\\\n&\\text{subject to} && y_i(\\mathbf{w}^\\top \\mathbf{x}_i - b) \\geq 1 - \\zeta_i, \\quad \\zeta_i \\geq 0 \\quad \\forall i\\in \\{1,\\dots,n\\}\n\\end{align}" }, { "math_id": 37, "text": "C" }, { "math_id": 38, "text": "k(\\mathbf{x}_i, \\mathbf{x}_j) = (\\mathbf{x}_i \\cdot \\mathbf{x}_j)^d" }, { "math_id": 39, "text": "d = 1" }, { "math_id": 40, "text": "k(\\mathbf{x}_i, \\mathbf{x}_j) = (\\mathbf{x}_i \\cdot \\mathbf{x}_j + r)^d" }, { "math_id": 41, "text": "k(\\mathbf{x}_i, \\mathbf{x}_j) = \\exp\\left(-\\gamma \\left\\|\\mathbf{x}_i - \\mathbf{x}_j\\right\\|^2\\right)" }, { "math_id": 42, "text": "\\gamma > 0" }, { "math_id": 43, "text": "\\gamma = 1/(2\\sigma^2)" }, { "math_id": 44, "text": "k(\\mathbf{x_i}, \\mathbf{x_j}) = \\tanh(\\kappa \\mathbf{x}_i \\cdot \\mathbf{x}_j + c)" }, { "math_id": 45, "text": "\\kappa > 0 " }, { "math_id": 46, "text": "c < 0" }, { "math_id": 47, "text": "\\varphi(\\mathbf{x}_i)" }, { "math_id": 48, "text": "k(\\mathbf{x}_i, \\mathbf{x}_j) = \\varphi(\\mathbf{x}_i) \\cdot \\varphi(\\mathbf{x_j})" }, { "math_id": 49, "text": "\\mathbf{w} = \\sum_i \\alpha_i y_i \\varphi(\\mathbf{x}_i)" }, { "math_id": 50, "text": " \\mathbf{w} \\cdot \\varphi(\\mathbf{x}) = \\sum_i \\alpha_i y_i k(\\mathbf{x}_i, \\mathbf{x})" }, { "math_id": 51, "text": "\\lambda" }, { "math_id": 52, "text": "i \\in \\{1,\\,\\ldots,\\,n\\}" }, { "math_id": 53, "text": " \\zeta_i = \\max\\left(0, 1 - y_i(\\mathbf{w}^\\mathsf{T} \\mathbf{x}_i - b)\\right)" }, { "math_id": 54, "text": " \\zeta_i" }, { "math_id": 55, "text": " y_i(\\mathbf{w}^\\mathsf{T} \\mathbf{x}_i - b) \\geq 1 - \\zeta_i." }, { "math_id": 56, "text": " \\begin{align}\n&\\text{minimize } \\frac 1 n \\sum_{i=1}^n \\zeta_i + \\lambda \\|\\mathbf{w}\\|^2 \\\\[0.5ex]\n&\\text{subject to } y_i\\left(\\mathbf{w}^\\mathsf{T} \\mathbf{x}_i - b\\right) \\geq 1 - \\zeta_i \\, \\text{ and } \\, \\zeta_i \\geq 0,\\, \\text{for all } i.\n\\end{align} " }, { "math_id": 57, "text": " \\begin{align}\n&\\text{maximize}\\,\\, f(c_1 \\ldots c_n) = \\sum_{i=1}^n c_i - \\frac 1 2 \\sum_{i=1}^n\\sum_{j=1}^n y_i c_i(\\mathbf{x}_i^\\mathsf{T} \\mathbf{x}_j)y_j c_j, \\\\\n&\\text{subject to } \\sum_{i=1}^n c_iy_i = 0,\\,\\text{and } 0 \\leq c_i \\leq \\frac{1}{2n\\lambda}\\;\\text{for all }i.\n\\end{align}" }, { "math_id": 58, "text": " c_i" }, { "math_id": 59, "text": " \\mathbf{w} = \\sum_{i=1}^n c_iy_i \\mathbf{x}_i." }, { "math_id": 60, "text": " c_i = 0" }, { "math_id": 61, "text": " \\mathbf{x}_i" }, { "math_id": 62, "text": " 0 < c_i <(2n\\lambda)^{-1}" }, { "math_id": 63, "text": " b" }, { "math_id": 64, "text": " y_i(\\mathbf{w}^\\mathsf{T} \\mathbf{x}_i - b) = 1 \\iff b = \\mathbf{w}^\\mathsf{T} \\mathbf{x}_i - y_i ." }, { "math_id": 65, "text": "y_i^{-1}=y_i" }, { "math_id": 66, "text": "y_i=\\pm 1" }, { "math_id": 67, "text": " \\varphi(\\mathbf{x}_i)." 
}, { "math_id": 68, "text": " k" }, { "math_id": 69, "text": " k(\\mathbf{x}_i, \\mathbf{x}_j) = \\varphi(\\mathbf{x}_i) \\cdot \\varphi(\\mathbf{x}_j)" }, { "math_id": 70, "text": " \\mathbf{w} = \\sum_{i=1}^n c_iy_i\\varphi(\\mathbf{x}_i)," }, { "math_id": 71, "text": "c_i" }, { "math_id": 72, "text": " \\begin{align}\n\\text{maximize}\\,\\, f(c_1 \\ldots c_n) &= \\sum_{i=1}^n c_i - \\frac 1 2 \\sum_{i=1}^n\\sum_{j=1}^n y_ic_i(\\varphi(\\mathbf{x}_i) \\cdot \\varphi(\\mathbf{x}_j))y_jc_j \\\\\n &= \\sum_{i=1}^n c_i - \\frac 1 2 \\sum_{i=1}^n\\sum_{j=1}^n y_ic_ik(\\mathbf{x}_i, \\mathbf{x}_j)y_jc_j \\\\\n\\text{subject to } \\sum_{i=1}^n c_i y_i &= 0,\\,\\text{and } 0 \\leq c_i \\leq \\frac{1}{2n\\lambda}\\;\\text{for all }i.\n\\end{align}\n" }, { "math_id": 73, "text": " i" }, { "math_id": 74, "text": " \\varphi(\\mathbf{x}_i)" }, { "math_id": 75, "text": " \\begin{align}\nb = \\mathbf{w}^\\mathsf{T} \\varphi(\\mathbf{x}_i) - y_i &= \\left[\\sum_{j=1}^n c_jy_j\\varphi(\\mathbf{x}_j) \\cdot \\varphi(\\mathbf{x}_i)\\right] - y_i \\\\\n &= \\left[\\sum_{j=1}^n c_jy_jk(\\mathbf{x}_j, \\mathbf{x}_i)\\right] - y_i.\n\\end{align}" }, { "math_id": 76, "text": " \\mathbf{z} \\mapsto \\sgn(\\mathbf{w}^\\mathsf{T} \\varphi(\\mathbf{z}) - b) = \\sgn \\left(\\left[\\sum_{i=1}^n c_iy_ik(\\mathbf{x}_i, \\mathbf{z})\\right] - b\\right)." }, { "math_id": 77, "text": "f(\\mathbf{w}, b) = \\left[\\frac 1 n \\sum_{i=1}^n \\max\\left(0, 1 - y_i(\\mathbf{w}^\\mathsf{T} \\mathbf{x}_i - b)\\right) \\right] + \\lambda \\|\\mathbf{w}\\|^2." }, { "math_id": 78, "text": "f" }, { "math_id": 79, "text": " \\begin{align}\n&\\text{maximize}\\,\\, f(c_1 \\ldots c_n) = \\sum_{i=1}^n c_i - \\frac 1 2 \\sum_{i=1}^n\\sum_{j=1}^n y_i c_i(x_i \\cdot x_j)y_j c_j,\\\\\n&\\text{subject to } \\sum_{i=1}^n c_iy_i = 0,\\,\\text{and } 0 \\leq c_i \\leq \\frac{1}{2n\\lambda}\\;\\text{for all }i.\n\\end{align}" }, { "math_id": 80, "text": " i \\in \\{1,\\, \\ldots,\\, n\\}" }, { "math_id": 81, "text": " \\partial f/ \\partial c_i" }, { "math_id": 82, "text": " (c_1',\\,\\ldots,\\,c_n')" }, { "math_id": 83, "text": "X_1 \\ldots X_n" }, { "math_id": 84, "text": "y_1 \\ldots y_n" }, { "math_id": 85, "text": "y_{n+1}" }, { "math_id": 86, "text": "X_{n+1}" }, { "math_id": 87, "text": "f(X_{n+1})" }, { "math_id": 88, "text": "\\ell(y,z)" }, { "math_id": 89, "text": "z" }, { "math_id": 90, "text": "\\varepsilon(f) = \\mathbb{E}\\left[\\ell(y_{n+1}, f(X_{n+1})) \\right]." }, { "math_id": 91, "text": "X_{n+1},\\,y_{n+1}" }, { "math_id": 92, "text": "\\hat \\varepsilon(f) = \\frac 1 n \\sum_{k=1}^n \\ell(y_k, f(X_k))." }, { "math_id": 93, "text": "X_k,\\, y_k" }, { "math_id": 94, "text": "\\mathcal{H}" }, { "math_id": 95, "text": " f" }, { "math_id": 96, "text": "\\lVert f \\rVert_{\\mathcal H} < k" }, { "math_id": 97, "text": "\\mathcal R(f) = \\lambda_k\\lVert f \\rVert_{\\mathcal H}" }, { "math_id": 98, "text": "\\hat f = \\mathrm{arg}\\min_{f \\in \\mathcal{H}} \\hat \\varepsilon(f) + \\mathcal{R}(f)." }, { "math_id": 99, "text": "\\mathcal{R}(f)" }, { "math_id": 100, "text": "\\hat\\mathbf{w}, b: \\mathbf{x} \\mapsto \\sgn(\\hat\\mathbf{w}^\\mathsf{T} \\mathbf{x} - b)" }, { "math_id": 101, "text": "\\left[\\frac 1 n \\sum_{i=1}^n \\max\\left(0, 1 - y_i(\\mathbf{w}^\\mathsf{T} \\mathbf{x} - b)\\right) \\right] + \\lambda \\|\\mathbf{w}\\|^2." }, { "math_id": 102, "text": "\\ell(y,z) = \\max\\left(0, 1 - yz \\right)." 
}, { "math_id": 103, "text": "\\ell_{sq}(y,z) = (y-z)^2" }, { "math_id": 104, "text": "\\ell_{\\log}(y,z) = \\ln(1 + e^{-yz})." }, { "math_id": 105, "text": "X,\\,y" }, { "math_id": 106, "text": "y_x" }, { "math_id": 107, "text": "X = x" }, { "math_id": 108, "text": "y_x = \\begin{cases} 1 & \\text{with probability } p_x \\\\ -1 & \\text{with probability } 1-p_x \\end{cases}" }, { "math_id": 109, "text": "f^*(x) = \\begin{cases}1 & \\text{if }p_x \\geq 1/2 \\\\ -1 & \\text{otherwise}\\end{cases} " }, { "math_id": 110, "text": "f_{sq}(x) = \\mathbb{E}\\left[y_x\\right]" }, { "math_id": 111, "text": "f_{\\log}(x) = \\ln\\left(p_x / ({1-p_x})\\right)" }, { "math_id": 112, "text": "\\sgn(f_{sq}) = \\sgn(f_\\log) = f^*" }, { "math_id": 113, "text": " y_x" }, { "math_id": 114, "text": "f^*" }, { "math_id": 115, "text": "\\mathcal{R}" }, { "math_id": 116, "text": "\\gamma" }, { "math_id": 117, "text": "\\lambda \\in \\{ 2^{-5}, 2^{-3}, \\dots, 2^{13},2^{15} \\}" }, { "math_id": 118, "text": "\\gamma \\in \\{ 2^{-15},2^{-13}, \\dots, 2^{1},2^{3} \\}" }, { "math_id": 119, "text": "\\mathcal{D}" }, { "math_id": 120, "text": "\\mathcal{D}^\\star = \\{ \\mathbf{x}^\\star_i \\mid \\mathbf{x}^\\star_i \\in \\mathbb{R}^p\\}_{i=1}^k " }, { "math_id": 121, "text": "\\mathbf{w}, b, \\mathbf{y}^\\star" }, { "math_id": 122, "text": "\\frac{1}{2}\\|\\mathbf{w}\\|^2" }, { "math_id": 123, "text": "i = 1, \\dots, n" }, { "math_id": 124, "text": "j = 1, \\dots, k" }, { "math_id": 125, "text": "\\begin{align}\n&y_i(\\mathbf{w} \\cdot \\mathbf{x}_i - b) \\ge 1, \\\\\n&y^\\star_j(\\mathbf{w} \\cdot \\mathbf{x}^\\star_j - b) \\ge 1,\n\\end{align}" }, { "math_id": 126, "text": "y^\\star_j \\in \\{-1, 1\\}." }, { "math_id": 127, "text": "\\tfrac{1}{2} \\|w\\|^2 " }, { "math_id": 128, "text": " | y_i - \\langle w, x_i \\rangle - b | \\le \\varepsilon " }, { "math_id": 129, "text": "\\langle w, x_i \\rangle + b" }, { "math_id": 130, "text": "\\varepsilon" } ]
https://en.wikipedia.org/wiki?curid=65309
65309248
List of topologies
List of concrete topologies and topological spaces The following is a list of named topologies or topological spaces, many of which are counterexamples in topology and related branches of mathematics. This is not a list of properties that a topology or topological space might possess; for that, see List of general topology topics and Topological property. Counter-examples (general topology). The following topologies are a known source of counterexamples for point-set topology. Topologies defined in terms of other topologies. Natural topologies. List of natural topologies. Compactifications. Compactifications include: Topologies of uniform convergence. This lists named topologies of uniform convergence. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "(X, \\tau)," }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "S^1." }, { "math_id": 3, "text": "\\{0, 1\\}" }, { "math_id": 4, "text": "\\{\\varnothing, \\{1\\}, \\{0,1\\}\\}." }, { "math_id": 5, "text": "p := (0, 0)" }, { "math_id": 6, "text": "X \\setminus \\{p\\}" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "x_\\bull = \\left(x_i\\right)_{i=1}^\\infty" }, { "math_id": 9, "text": "X \\setminus \\{(0, 0)\\}" }, { "math_id": 10, "text": "(0, 0)" }, { "math_id": 11, "text": "x_\\bull." }, { "math_id": 12, "text": "\\N^{\\N}" }, { "math_id": 13, "text": "\\N" }, { "math_id": 14, "text": "[0, 1]" }, { "math_id": 15, "text": "\\Reals^n" }, { "math_id": 16, "text": "\\Reals" }, { "math_id": 17, "text": "\\Reals^3." }, { "math_id": 18, "text": "\\Reals^2" }, { "math_id": 19, "text": "(0, 1)^2" }, { "math_id": 20, "text": "[0, 1]^2 \\setminus \\Q^2," }, { "math_id": 21, "text": "[0, 1]^2 \\cap \\Q^2" }, { "math_id": 22, "text": "[0, 1/1] \\times [0, 1/2] \\times [0, 1/3] \\times \\cdots" }, { "math_id": 23, "text": "x \\in X" }, { "math_id": 24, "text": "X," }, { "math_id": 25, "text": "d \\not\\in X" }, { "math_id": 26, "text": "Y = X \\cup \\{d\\}." }, { "math_id": 27, "text": "\\tau = \\{V \\subseteq Y : \\text{ either } V \\text{ or } ( V \\setminus \\{d\\}) \\cup \\{x\\} \\text{ is an open subset of } X\\}" }, { "math_id": 28, "text": "Y" }, { "math_id": 29, "text": "x" }, { "math_id": 30, "text": "d" }, { "math_id": 31, "text": "Y." }, { "math_id": 32, "text": "X \\times X." } ]
https://en.wikipedia.org/wiki?curid=65309248
65309316
Kirchberger's theorem
Kirchberger's theorem is a theorem in discrete geometry, on linear separability. The two-dimensional version of the theorem states that, if a finite set of red and blue points in the Euclidean plane has the property that, for every four points, there exists a line separating the red and blue points within those four, then there exists a single line separating all the red points from all the blue points. Donald Watson phrases this result more colorfully, with a farmyard analogy: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;If sheep and goats are grazing in a field and for every four animals there exists a line separating the sheep from the goats then there exists such a line for all the animals. More generally, for finitely many red and blue points in formula_0-dimensional Euclidean space, if the red and blue points in every subset of formula_1 of the points are linearly separable, then all the red points and all the blue points are linearly separable. Another equivalent way of stating the result is that, if the convex hulls of finitely many red and blue points have a nonempty intersection, then there exists a subset of formula_1 points for which the convex hulls of the red and blue points in the subsets also intersect. History and proofs. The theorem is named after German mathematician Paul Kirchberger, a student of David Hilbert at the University of Göttingen who proved it in his 1902 dissertation, and published it in 1903 in "Mathematische Annalen", as an auxiliary theorem used in his analysis of Chebyshev approximation. A report of Hilbert on the dissertation states that some of Kirchberger's auxiliary theorems in this part of his dissertation were known to Hermann Minkowski but unpublished; it is not clear whether this statement applies to the result now known as Kirchberger's theorem. Since Kirchberger's work, other proofs of Kirchberger's theorem have been published, including simple proofs based on Helly's theorem on intersections of convex sets, based on Carathéodory's theorem on membership in convex hulls, or based on principles related to Radon's theorem on intersections of convex hulls. However, Helly's theorem, Carathéodory's theorem, and Radon's theorem all postdate Kirchberger's theorem. Generalizations and related results. A strengthened version of Kirchberger's theorem fixes one of the given points, and only considers subsets of formula_1 points that include the fixed point. If the red and blue points in each of these subsets are linearly separable, then all the red points and all the blue points are linearly separable. The theorem also holds if the red points and blue points form compact sets that are not necessarily finite. By using stereographic projection, Kirchberger's theorem can be used to prove a similar result for circular or spherical separability: if every five points of finitely many red and blue points in the plane can have their red and blue points separated by a circle, or every formula_2 points in higher dimensions can have their red and blue points separated by a hypersphere, then all the red and blue points can be separated in the same way. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
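The separability test at the heart of the theorem can be phrased as a linear-programming feasibility problem, which suggests a simple computational illustration. The sketch below (assuming SciPy is available; the point set is an arbitrary toy example) checks strict linear separability of red and blue points in the plane for every four-point subset and for the full set.

```python
# Linear separability via an LP feasibility check: find w, b with
# y_i * (w . x_i + b) >= 1 for all points (labels y_i in {+1, -1}).
import itertools
import numpy as np
from scipy.optimize import linprog

def separable(points, labels):
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels, dtype=float)
    d = points.shape[1]
    # Unknowns z = (w_1, ..., w_d, b); constraints -y_i (w . x_i + b) <= -1.
    A_ub = -labels[:, None] * np.hstack([points, np.ones((len(points), 1))])
    b_ub = -np.ones(len(points))
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success

# Toy example: three "red" points near the origin, three "blue" points far away.
pts = [(0, 0), (1, 0), (0, 1), (3, 3), (4, 3), (3, 4)]
lab = [1, 1, 1, -1, -1, -1]
every_four = all(separable([pts[i] for i in s], [lab[i] for i in s])
                 for s in itertools.combinations(range(len(pts)), 4))
print("every 4-point subset separable:", every_four)
print("all points separable:", separable(pts, lab))
```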
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "d+2" }, { "math_id": 2, "text": "d+3" } ]
https://en.wikipedia.org/wiki?curid=65309316
65312394
Resonant interaction
Interaction of multiple waves in a nonlinear system In nonlinear systems a resonant interaction is the interaction of three or more waves, usually but not always of small amplitude. Resonant interactions occur when a simple set of criteria coupling wave vectors and the dispersion equation are met. The simplicity of the criteria makes the technique popular in multiple fields. Its most prominent and well-developed forms appear in the study of gravity waves, but it also finds numerous applications from astrophysics and biology to engineering and medicine. Theoretical work on partial differential equations provides insights into chaos theory; there are curious links to number theory. Resonant interactions allow waves to (elastically) scatter, to diffuse, or to become unstable. Diffusion processes are responsible for the eventual thermalization of most nonlinear systems; instabilities offer insight into high-dimensional chaos and turbulence. Discussion. The underlying concept is that when the energy and momentum of several vibrational modes sum to zero, they are free to mix together via nonlinearities in the system under study. Modes for which the energy and momentum do not sum to zero cannot interact, as this would imply a violation of energy/momentum conservation. The momentum of a wave is understood to be given by its wave vector formula_0 and its energy formula_1 follows from the dispersion relation for the system. For example, for three waves in continuous media, the resonant condition is conventionally written as the requirement that formula_2 and also formula_3, the minus sign being taken depending on how energy is redistributed among the waves. For waves in discrete media, such as in computer simulations on a lattice, or in (nonlinear) solid-state systems, the wave vectors are quantized, and the normal modes can be called phonons. The Brillouin zone defines an upper bound on the wave vector, and waves can interact when they sum to integer multiples of the Brillouin vectors (Umklapp scattering). Although three-wave systems provide the simplest form of resonant interactions in waves, not all systems have three-wave interactions. For example, the deep-water wave equation, a continuous-media system, does not have a three-wave interaction. The Fermi–Pasta–Ulam–Tsingou problem, a discrete-media system, does not have a three-wave interaction. It does have a four-wave interaction, but this is not enough to thermalize the system; that requires a six-wave interaction. As a result, the eventual thermalization time goes as the inverse eighth power of the coupling—clearly, a very long time for weak coupling—thus allowing the famous FPUT recurrences to dominate on "normal" time scales. Hamiltonian formulation. In many cases, the system under study can be readily expressed in a Hamiltonian formalism. When this is possible, a set of manipulations can be applied, having the form of a generalized, non-linear Fourier transform. These manipulations are closely related to the inverse scattering method. A particularly simple example can be found in the treatment of deep water waves. In such a case, the system can be expressed in terms of a Hamiltonian, formulated in terms of canonical coordinates formula_4. To avoid notational confusion, write formula_5 for these two; they are meant to be conjugate variables satisfying Hamilton's equations. These are to be understood as functions of the configuration space coordinates formula_6, "i.e." functions of space and time.
Taking the Fourier transform, write formula_7 and likewise for formula_8. Here, formula_9 is the wave vector. When "on shell", it is related to the angular frequency formula_1 by the dispersion relation. The ladder operators follow in the canonical fashion: formula_10 formula_11 with formula_12 some function of the angular frequency. The formula_13 correspond to the normal modes of the linearized system. The Hamiltonian (the energy) can now be written in terms of these raising and lowering operators (sometimes called the "action density variables") as formula_14 Here, the first term formula_15 is quadratic in formula_13 and represents the linearized theory, while the non-linearities are captured in formula_16, which is cubic or higher-order. Given the above as the starting point, the system is then decomposed into "free" and "bound" modes. The bound modes have no independent dynamics of their own; for example, the higher harmonics of a soliton solution are bound to the fundamental mode, and cannot interact. This can be recognized by the fact that they do not follow the dispersion relation, and have no resonant interactions. In this case, canonical transformations are applied, with the goal of eliminating terms that are non-interacting, leaving free modes. That is, one re-writes formula_17 and likewise for formula_18, and rewrites the system in terms of these new, "free" (or at least, freer) modes. Properly done, this leaves formula_19 expressed only with terms that are resonantly interacting. If formula_19 is cubic, these are then the three-wave terms; if quartic, these are the four-wave terms, and so on. Canonical transformations can be repeated to obtain higher-order terms, as long as the lower-order resonant interactions are not damaged, and one skillfully avoids the "small divisor problem", which occurs when there are near-resonances. The terms themselves give the rate or speed of the mixing, and are sometimes called transfer coefficients or the transfer matrix. At the conclusion, one obtains an equation for the time evolution of the normal modes, corrected by scattering terms. Picking one of the modes out of the bunch, call it formula_20 below, the time evolution has the generic form formula_21 with formula_22 the transfer coefficients for the "n"-wave interaction, and the formula_23 capturing the notion of the conservation of energy/momentum implied by the resonant interaction. Here formula_24 is either formula_25 or formula_18 as appropriate. For deep-water waves, the above is called the Zakharov equation, named after Vladimir E. Zakharov. History. Resonant interactions were first considered and described by Henri Poincaré in the 19th century, in the analysis of perturbation series describing 3-body planetary motion. The first-order terms in the perturbative series can be understood to form a matrix; the eigenvalues of the matrix correspond to the fundamental modes in the perturbed solution. Poincaré observed that in many cases, there are integer linear combinations of the eigenvalues that sum to zero; this is the original "resonant interaction". When in resonance, energy transfer between modes can keep the system in a stable phase-locked state. However, going to second order is challenging in several ways. One is that degenerate solutions are difficult to diagonalize (there is no unique vector basis for the degenerate space). 
A second issue is that differences appear in the denominator of the second and higher order terms in the perturbation series; small differences lead to the famous "small divisor problem". These can be interpreted as corresponding to chaotic behavior. To roughly summarize, precise resonances lead to scattering and mixing; approximate resonances lead to chaotic behavior. Applications. Resonant interactions have found broad utility in many areas. Below is a selected list of some of these, indicating the broad variety of domains to which the ideas have been applied.
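The resonance criteria described above are easy to test numerically for a given dispersion relation. The following short Python sketch is purely illustrative (the variable names and the choice of the deep-water gravity-wave dispersion relation are this example's own, not part of the article): it scans collinear wave-number pairs and shows that the frequency mismatch of a candidate triad never vanishes, consistent with the statement above that the deep-water wave equation has no three-wave interaction.

import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def omega(k):
    # Deep-water gravity-wave dispersion relation, omega = sqrt(g*|k|).
    return np.sqrt(g * np.abs(k))

def resonance_mismatch(k1, k2):
    # Frequency mismatch for the candidate triad k3 = k1 + k2 (1D wave numbers).
    k3 = k1 + k2
    return omega(k3) - (omega(k1) + omega(k2))

# Scan positive wave-number pairs: the mismatch never vanishes for this
# dispersion relation, so no resonant triad exists.
ks = np.linspace(0.1, 10.0, 200)
K1, K2 = np.meshgrid(ks, ks)
mismatch = resonance_mismatch(K1, K2)
print("smallest |mismatch| over the scan:", np.abs(mismatch).min())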
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "\\omega" }, { "math_id": 2, "text": "k_1\\pm k_2 \\pm k_3=0" }, { "math_id": 3, "text": "\\omega_1\\pm\\omega_2 \\pm \\omega_3=0" }, { "math_id": 4, "text": "p,q" }, { "math_id": 5, "text": "\\psi,\\phi" }, { "math_id": 6, "text": "\\vec{x},t" }, { "math_id": 7, "text": "\\hat \\psi(\\vec k) = \\int e^{-i\\vec k\\cdot \\vec x}\\; \\psi(\\vec x) \\;dx" }, { "math_id": 8, "text": "\\hat \\phi(\\vec k)" }, { "math_id": 9, "text": "\\vec k" }, { "math_id": 10, "text": "\\hat \\phi(\\vec k) = \\sqrt {2f(\\omega)} \\;\\;\\left(a_{k} + a^*_{-k}\\right)" }, { "math_id": 11, "text": "\\hat \\psi(\\vec k) = -i\\sqrt {\\frac{2}{f(\\omega)}} \\;\\;\\left(a_{k} - a^*_{-k}\\right)" }, { "math_id": 12, "text": "2f(\\omega)" }, { "math_id": 13, "text": "a,a^*" }, { "math_id": 14, "text": "H = H_0(a,a^*) + \\epsilon H_1(a,a^*)" }, { "math_id": 15, "text": "H_0(a,a^*)" }, { "math_id": 16, "text": "H_1(a,a^*)" }, { "math_id": 17, "text": "a\\to a^\\prime=a+\\mathcal{O}(\\epsilon)" }, { "math_id": 18, "text": "a^*" }, { "math_id": 19, "text": "H_1" }, { "math_id": 20, "text": "a_1" }, { "math_id": 21, "text": "\\frac{\\partial a_1}{\\partial t} + i\\omega_1 = -i\\int dk_2\\cdots dk_n \\;T_{1\\cdots n} \\;a^\\pm_2\\cdots a^\\pm_n \\; \\delta_{1\\pm 2 \\pm \\cdots \\pm n}" }, { "math_id": 22, "text": "T_{1\\cdots n}" }, { "math_id": 23, "text": "\\delta_{1\\pm 2 \\pm \\cdots\\pm n}=\\delta(k_1 \\pm k_2 \\pm \\cdots \\pm k_n)" }, { "math_id": 24, "text": "a^\\pm_k" }, { "math_id": 25, "text": "a" } ]
https://en.wikipedia.org/wiki?curid=65312394
65318914
Geodetic graph
Graph whose shortest paths are unique In graph theory, a geodetic graph is an undirected graph such that there exists a unique (unweighted) shortest path between each two vertices. Geodetic graphs were introduced in 1962 by Øystein Ore, who observed that they generalize a property of trees (in which there exists a unique path between each two vertices regardless of distance), and asked for a characterization of them. Although these graphs can be recognized in polynomial time, "more than sixty years later a full characterization is still elusive". Examples. Every tree, every complete graph, and every odd-length cycle graph is geodetic. If formula_0 is a geodetic graph, then replacing every edge of formula_0 by a path of the same odd length will produce another geodetic graph. In the case of a complete graph, a more general pattern of replacement by paths is possible: choose a non-negative integer formula_1 for each vertex formula_2, and subdivide each edge formula_3 by adding formula_4 vertices to it. Then the resulting subdivided complete graph is geodetic, and every geodetic subdivided complete graph can be obtained in this way. Related graph classes. If every biconnected component of a graph is geodetic then the graph itself is geodetic. In particular, every block graph (graphs in which the biconnected components are complete) is geodetic. Similarly, because a cycle graph is geodetic when it has odd length, every cactus graph in which the cycles have odd length is also geodetic. These cactus graphs are exactly the connected graphs in which all cycles have odd length. More strongly, a planar graph is geodetic if and only if all of its biconnected components are either odd-length cycles or geodetic subdivisions of a four-vertex clique. Computational complexity. Geodetic graphs may be recognized in polynomial time, by using a variation of breadth first search that can detect multiple shortest paths, starting from each vertex of the graph. Geodetic graphs cannot contain an induced four-vertex cycle graph, nor an induced diamond graph, because these two graphs are not geodetic. In particular, as a subset of diamond-free graphs, the geodetic graphs have the property that every edge belongs to a unique maximal clique; in this context, the maximal cliques have also been called "lines". It follows that the problem of finding maximum cliques, or maximum weighted cliques, can be solved in polynomial time for geodetic graphs, by listing all maximal cliques. The broader class of graphs that have no induced 4-cycle or diamond are called "weakly geodetic"; these are the graphs where vertices at distance exactly two from each other have a unique shortest path. Diameter two. For graphs of diameter two (that is, graphs in which all vertices are at distance at most two from each other), the geodetic graphs and weakly geodetic graphs coincide. Every geodetic graph of diameter two is of one of three types: The strongly regular geodetic graphs include the 5-vertex cycle graph, the Petersen graph, and the Hoffman–Singleton graph. Despite additional research on the properties such a graph must have, it is not known whether there are more of these graphs, or infinitely many of these graphs. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: Are there infinitely many strongly regular geodetic graphs? Geodetic graphs with diameter two and two different degrees cannot have a triangle composed of vertices of both degrees. 
They can be constructed from any finite affine plane by adding to the point-line incidence graph of the plane additional edges between the vertices corresponding to each two parallel lines. For the binary affine plane with four points and six two-point lines in three parallel pairs, the result of this construction is the Petersen graph, but for higher-order finite affine planes it produces graphs with two different degrees. Other related constructions of geodetic graphs from finite geometries are also known, but it is not known whether these exhaust all the possible geodetic graphs with diameter two and two different degrees. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
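To make the recognition idea mentioned above concrete, here is a small Python sketch of the breadth-first-search test: it counts shortest paths from every start vertex and declares the graph geodetic exactly when every count equals one. The function name, the adjacency-dictionary representation, and the assumption that the graph is connected are choices made for this illustration rather than anything prescribed by the sources cited here.

from collections import deque

def is_geodetic(adj):
    # adj: dict mapping each vertex to an iterable of neighbours (undirected graph).
    # Returns True when every pair of vertices has a unique shortest path.
    for s in adj:
        dist = {s: 0}
        count = {s: 1}               # number of shortest s-v paths found so far
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:                # first time v is reached
                    dist[v] = dist[u] + 1
                    count[v] = count[u]
                    queue.append(v)
                elif dist[v] == dist[u] + 1:     # another shortest path into v
                    count[v] += count[u]
        if any(c != 1 for c in count.values()):
            return False
    return True

# A 5-cycle is geodetic; a 4-cycle is not (opposite vertices have two shortest paths).
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
c4 = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
print(is_geodetic(c5), is_geodetic(c4))          # expected: True False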
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "f(v)" }, { "math_id": 2, "text": "v" }, { "math_id": 3, "text": "uv" }, { "math_id": 4, "text": "f(u)+f(v)" }, { "math_id": 5, "text": "\\mu" } ]
https://en.wikipedia.org/wiki?curid=65318914
6532631
Frattini's argument
In group theory, a branch of mathematics, Frattini's argument is an important lemma in the structure theory of finite groups. It is named after Giovanni Frattini, who used it in a paper from 1885 when defining the Frattini subgroup of a group. The argument was taken by Frattini, as he himself admits, from a paper of Alfredo Capelli dated 1884. Frattini's argument. Statement. If formula_0 is a finite group with normal subgroup formula_1, and if formula_2 is a Sylow "p"-subgroup of formula_1, then formula_3 where formula_4 denotes the normalizer of formula_2 in formula_0, and formula_5 means the product of group subsets. Proof. The group formula_2 is a Sylow formula_6-subgroup of formula_1, so every Sylow formula_6-subgroup of formula_1 is an formula_1-conjugate of formula_2, that is, it is of the form formula_7 for some formula_8 (see Sylow theorems). Let formula_9 be any element of formula_0. Since formula_1 is normal in formula_0, the subgroup formula_10 is contained in formula_1. This means that formula_10 is a Sylow formula_6-subgroup of formula_1. Then, by the above, it must be formula_1-conjugate to formula_2: that is, for some formula_8 formula_11 and so formula_12 Thus formula_13 and therefore formula_14. But formula_15 was arbitrary, and so formula_16 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
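As a concrete, purely illustrative check of the statement, the following Python sketch verifies Frattini's argument for G = S4 with the normal subgroup H = A4 and P a Sylow 2-subgroup of A4 (the Klein four-group); the permutation encoding and helper names are ad hoc choices for this example.

from itertools import permutations

G = set(permutations(range(4)))          # S4, permutations p with p[i] = image of i

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def is_even(p):
    inversions = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
    return inversions % 2 == 0

H = {p for p in G if is_even(p)}         # A4, a normal subgroup of S4
# The Klein four-group, a Sylow 2-subgroup of A4 (|A4| = 12, so order 4).
P = {(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}

def conjugate(g, subgroup):
    g_inv = inverse(g)
    return {compose(compose(g, s), g_inv) for s in subgroup}

N = {g for g in G if conjugate(g, P) == P}            # N_G(P)
print({compose(n, h) for n in N for h in H} == G)     # expected: True, i.e. G = N_G(P)H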
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "P" }, { "math_id": 3, "text": "G = N_G(P)H," }, { "math_id": 4, "text": "N_G(P)" }, { "math_id": 5, "text": "N_G(P)H" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "h^{-1}Ph" }, { "math_id": 8, "text": "h \\in H" }, { "math_id": 9, "text": "g" }, { "math_id": 10, "text": "g^{-1}Pg" }, { "math_id": 11, "text": "g^{-1}Pg = h^{-1}Ph," }, { "math_id": 12, "text": "hg^{-1}Pgh^{-1} = P." }, { "math_id": 13, "text": "gh^{-1} \\in N_G(P)," }, { "math_id": 14, "text": "g \\in N_G(P)H" }, { "math_id": 15, "text": "g \\in G" }, { "math_id": 16, "text": "G = HN_G(P) = N_G(P)H.\\ \\square" }, { "math_id": 17, "text": "N_G(N_G(P))" }, { "math_id": 18, "text": "N_G(N_G(P)) = N_G(P)" }, { "math_id": 19, "text": "M \\leq G" }, { "math_id": 20, "text": "M" }, { "math_id": 21, "text": "M = N_G(M)" } ]
https://en.wikipedia.org/wiki?curid=6532631
6532984
Sapovirus
Genus of viruses Sapovirus is a genetically diverse genus of single-stranded positive-sense RNA, non-enveloped viruses within the family "Caliciviridae". Together with norovirus, sapoviruses are the most common cause of acute gastroenteritis (commonly called the "stomach flu" although it is not related to influenza) in humans and animals. It is a monotypic taxon containing only one species, the Sapporo virus. Natural hosts for the virus are humans and swine. The virus is transmitted through oral/fecal contact. Sapovirus commonly occurs in children and infants and therefore is often spread in nurseries and daycares; however, it has also been found in long-term care facilities. This could be due to a lack of personal hygiene and sanitation measures. Common symptoms include diarrhea and vomiting. The sapovirus was initially discovered in an outbreak of gastroenteritis in an orphanage in Sapporo, Japan, in 1977. Transmission route and host susceptibility. Sapovirus is spread via the fecal–oral route. Infected individuals expel more than formula_0 particles/gram of feces or vomit. Particles from the infected individual remain viable for years, and an infectious dose can be as few as 10 particles. Contamination of work surfaces, hands, etc. can cause a vast number of new infections. Infection may occur if the particles are inhaled, such as when the particles are aerosolized when those who are infected vomit, or when the toilet is flushed after an infected individual vomits. Other forms of transmission include the excessive handling of foods by an infected individual (this most commonly occurs in a restaurant setting), consumption of shellfish that lived in waters contaminated with infected fecal matter, and the ingestion of water that has been contaminated. Symptoms. After an incubation period of 1–4 days, signs of illness start to arise. Symptoms of sapovirus are very similar to those of norovirus. The most common symptoms are vomiting and diarrhea. However, additional symptoms may occur, including chills, nausea, headache, abdominal cramps, myalgia, and fever, though fever is very rare. While patients frequently start to show symptoms after the 1–4 day incubation period, there have been cases in which an individual is asymptomatic. Although the individual does not show symptoms, they are still capable of spreading the virus through the general mode of transmission, which is the oral-fecal route. Prevention. General sanitary hygiene is the most important method of preventing sapovirus. This can be done by thoroughly washing hands after using the restroom and before eating/preparing food. Alcohol-based hand sanitizer is ineffective against sapovirus. Contaminated surfaces should be cleaned with disinfectant or solutions containing bleach. Other preventative measures include avoiding contact with infected individuals and not sharing drinks/food with them. Treatment. There is no specific medication for individuals infected with sapovirus. Sapovirus cannot be treated with antibiotics because it is not a bacterial infection. Treatments include symptom support such as rehydrating the individual. Viral classification. Structure and genome. Sapovirus is a non-enveloped, positive-sense, single-stranded RNA virus about 7.7 kb in size. The virus has a 3'-end poly(A) tail but not a 5' cap. Sapovirus has an icosahedral structure that contains 180 subunits (T=3). The diameter of the capsid is between 27 and 40 nm. Like other caliciviruses, the capsid of the sapovirus has round indentations on its surface. 
However, its "Star of David" surface morphology distinguishes it from other caliciviruses. Sapovirus' genome is organized into two (possibly three) well-known open reading frames (ORFs). ORF1 encodes for nonstructural proteins and for VP1, the main capsid protein. VP1 has two standard domains, shell (S) and protruding (P). The S protein's function is to "form the scaffold around the nucleic acid", while the P protein is important in forming a "homodimer with the receptors". ORF2 encodes for minor structural polyproteins, VP2. While there have been predictions of a third ORF (ORF3), there is no proof for what its function is. There have been at least 21 complete genomes for sapovirus analyzed and identified already, all of which can be classified into five categories (GI-GV), which can further be divided into different genetic clusters. Four of the five groups (GI, GII, GIV, GV) can infect humans and these four groups correspond to the four antigenically distinct strains of sapovirus: Sapporo, Houston, London, and Stockholm. While there are at least 21 genotypes for this virus, new ones continue to be reported in America, Asia, and Europe. Laboratory diagnosis. Nucleic acid detection methods. Reverse transcription-PCR (RT-PCR) is the most commonly used detection tool for sapovirus because of its broad reactivity, sensitivity, speed, and specificity. Because of the diversity of the sapovirus, hundreds of primers have been designed in order to specifically target and amplify RNA-dependent RNA polymerase. This can be used to "partially characterize the Sapovirus and investigate the similarity of the detected Sapovirus." Virus particle detection. "Sapoviruses are morphologically distinguishable from other gastroenteritis pathogens (e.g., norovirus, rotavirus, astrovirus, or adenovirus) by their typical "Star of David" surface morphology under the electron microscope. However, this has low sensitivity compared to nucleic acid detection methods." Antigen detection methods. Enzyme-linked immunosorbent assays (ELISA) have been used to detect human sapovirus from clinical samples. While ELISA can be used to detect human sapovirus antigens, it is not commonly used. The diversity of the many strains of sapovirus makes it difficult to detect the wide array of antigens that may be present. Because there are so many possible antigens, ELISA is not as accurate or as sensitive as the nucleic acid detection methods. Replication cycle. The exact replication cycle of sapovirus has not been determined; however, it is thought to have the same or similar cytoplasmic replication cycle that other caliciviruses display. The cytoplasmic replication cycle is as follows: History. Using electron microscopy, the sapovirus was first seen in diarrheic stool samples from the United Kingdom in 1977 and was soon known as a gastroenteritis pathogen. While the virus was first seen in the United Kingdom, "the prototype strain of the genus "Sapovirus" was from another outbreak in Sapporo, Japan in 1982." The first complete genome of sapovirus was interpreted from the Manchester strain in the United Kingdom in 1993. Formerly, sapoviruses were called "Sapporo-like viruses"; however, in 2002, they were changed to the species Sapporo virus, genus "Sapovirus", in the family "Caliciviridae". "Currently, the family "Caliciviridae" consists of five established genera: "Sapovirus, Norovirus, Lagovirus, Vesivirus", and "Nebovirus."" Outbreaks. December 2013. 
One positive result for sapovirus infection was confirmed at Gisborne Hospital in New Zealand. Two additional cases were found in staff members and five additional patients were put into isolation. Hospital staff used precautionary measures by using personal protection when entering rooms. June 2007. Fifty five faculty members of a college in Taipei County had been diagnosed with sapovirus infection. Long term care facilities 2002–2009. "Using data from the Oregon and Minnesota public health departments, researchers investigated 2161 gastroenteritis outbreaks between 2002 through 2009. Of these, 142 outbreaks (7 percent) were found to be norovirus-negative, and 93 of these were further tested for other gastrointestinal viruses including sapovirus, astrovirus, adenovirus, and rotavirus. Sapovirus was identified in 21 outbreaks (23 percent), with 66 percent of these occurring in long term care facilities. Close to half of these cases occurred in 2007 alone." The researchers further explained that while the proportion of sapovirus occurring in the long-term care facilities was high, it was likely an artifact of legally mandated outbreak reporting. Associated diseases. Norovirus is most commonly associated with sapovirus. Norovirus and sapovirus genomes are very closely related; the distinction between the two can only be made from the differences in their coding strategy and reading frames. Noroviruses, along with sapoviruses, are the most common cause of gastroenteritis and therefore show the same symptoms as each other. Astrovirus, like sapovirus, causes gastroenteritis in children and the elderly, especially those who are immunocompromised. While sapovirus has two ORFs, Astrovirus has three. Astrovirus also has 6 recombinant strains. Astrovirus replicates within the cytoplasm and propagates readily in the GI tract. Rotavirus, like norovirus, astrovirus, and sapovirus, causes gastroenteritis. Rotavirus, however, is much more lethal, causing 37% of deaths in children with diarrhea and 215,000 deaths worldwide. Animal viruses. Sapoviruses have been identified in bats, California sea lions, dogs, pigs and mink. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "10^{9}" } ]
https://en.wikipedia.org/wiki?curid=6532984
65332236
CEK Machine
Theoretical computer model A CEK Machine is an abstract machine invented by Matthias Felleisen and Daniel P. Friedman that implements left-to-right call by value. It is generally implemented as an interpreter for functional programming languages, but can also be used to implement simple imperative programming languages. A state in a CEK machine includes a control statement, environment and continuation. The control statement is the term being evaluated at that moment, the environment is (usually) a map from variable names to values, and the continuation stores another state, or a special halt case. It is a simplified form of another abstract machine called the SECD machine. The CEK machine builds on the SECD machine by replacing the dump (call stack) with the more advanced continuation, and putting parameters directly into the environment, rather than pushing them on to the parameter stack first. Other modifications can be made which create a whole family of related machines. For example, the CESK machine has the environment map variables to a pointer on the store, which is effectively a heap. This allows it to model mutable state better than the ordinary CEK machine. The CK machine has no environment, and can be used for simple calculi without variables. Description. A CEK machine can be created for any programming language, so the term is often used vaguely. For example, a CEK machine could be created to interpret the lambda calculus. Its environment maps variables to closures, and the continuations are either a halt, a continuation to evaluate an argument (ar), or a continuation to evaluate an application after evaluating a function (ap). Representation of components. Each component of the CEK machine has various representations. The control string is usually a term being evaluated, or sometimes, a line number. For example, a CEK machine evaluating the lambda calculus would use a lambda expression as a control string. The environment is almost always a map from variables to values, or in the case of CESK machines, variables to addresses in the store. The representation of the continuation varies. It often contains another environment as well as a continuation type, for example "argument" or "application". It is sometimes a call stack, where each frame is the rest of the state, i.e. a control statement and an environment. Related machines. There are some other machines closely linked to the CEK machine. CESK machine. The CESK machine is another machine closely related to the CEK machine. The environment in a CESK machine maps variables to "pointers" on a "store" (heap), hence the name "CESK". It can be used to model mutable state, for example the Λσ calculus described in the original paper. This makes it much more useful for interpreting imperative programming languages, rather than functional ones. CS machine. The CS machine contains just a control statement and a store. It is also described by the original paper. In an application, instead of putting variables into an environment, it substitutes them with an address on the store and puts the value of the variable at that address. The continuation is not needed because it is lazily evaluated; it does not need to remember to evaluate an argument. SECD machine. The SECD machine was the machine that the CEK machine was based on. It has a stack, environment, control statement and dump. The dump is a call stack, and is used instead of a continuation. The stack is used for passing parameters to functions. 
The control statement was written in postfix notation, and the machine had its own "programming language". A lambda calculus statement like this: "(M N)" would be written like this: "N:M:ap" where "ap" is a function that applies two abstractions together. Origins. On page 196 of "Control Operators, the SECD Machine, and the formula_0-Calculus", and on page 4 of the technical report with the same name, Matthias Felleisen and Daniel P. Friedman wrote "The [CEK] machine is derived from Reynolds' extended interpreter IV.", referring to John Reynolds's Interpreter III in "Definitional Interpreters for Higher-Order Programming Languages". To wit, here is an implementation of the CEK machine in OCaml, representing lambda terms with de Bruijn indices:

type term = IND of int (* de Bruijn index *)
          | ABS of term
          | APP of term * term

Values are closures, as invented by Peter Landin:

type value = CLO of term * value list

type cont = C2 of term * value list * cont
          | C1 of value * cont
          | C0

let rec continue (c : cont) (v : value) : value =
  match c, v with
    C2 (t1, e, k), v0 -> eval t1 e (C1 (v0, k))
  | C1 (v0, k), v1 -> apply v0 v1 k
  | C0, v -> v
and eval (t : term) (e : value list) (k : cont) : value =
  match t with
    IND n -> continue k (List.nth e n)
  | ABS t' -> continue k (CLO (t', e))
  | APP (t0, t1) -> eval t0 e (C2 (t1, e, k))
and apply (v0 : value) (v1 : value) (k : cont) =
  let (CLO (t, e)) = v0
  in eval t (v1 :: e) k

let main (t : term) : value =
  eval t [] C0

This implementation is in defunctionalized form, with codice_0 and codice_1 as the first-order representation of a continuation. Here is its refunctionalized counterpart:

let rec eval (t : term) (e : value list) (k : value -> 'a) : 'a =
  match t with
    IND n -> k (List.nth e n)
  | ABS t' -> k (CLO (t', e))
  | APP (t0, t1) -> eval t0 e (fun v0 -> eval t1 e (fun v1 -> apply v0 v1 k))
and apply (v0 : value) (v1 : value) (k : value -> 'a) : 'a =
  let (CLO (t, e)) = v0
  in eval t (v1 :: e) k

let main (t : term) : value =
  eval t [] (fun v -> v)

This implementation is in left-to-right continuation-passing style, where the domain of answers is polymorphic, i.e., is implemented with a type variable. This continuation-passing implementation is mapped back to direct style as follows:

let rec eval (t : term) (e : value list) : value =
  match t with
    IND n -> List.nth e n
  | ABS t' -> CLO (t', e)
  | APP (t0, t1) -> let v0 = eval t0 e
                    and v1 = eval t1 e
                    in apply v0 v1
and apply (v0 : value) (v1 : value) : value =
  let (CLO (t, e)) = v0
  in eval t (v1 :: e)

let main (t : term) : value =
  eval t []

This direct-style implementation is also in defunctionalized form, or more precisely in closure-converted form. Here is the result of closure-unconverting it:

type value = FUN of (value -> value)

let rec eval (t : term) (e : value list) : value =
  match t with
    IND n -> List.nth e n
  | ABS t' -> FUN (fun v -> eval t' (v :: e))
  | APP (t0, t1) -> let v0 = eval t0 e
                    and v1 = eval t1 e
                    in apply v0 v1
and apply (v0 : value) (v1 : value) : value =
  let (FUN f) = v0
  in f v1

let main (t : term) : value =
  eval t []

The resulting implementation is compositional. It is the usual Scott-Tarski definitional self-interpreter where the domain of values is reflexive (Scott) and where syntactic functions are defined as semantic functions and syntactic applications are defined as semantic applications (Tarski). This derivation mimics Danvy's rational deconstruction of Landin's SECD machine. 
The converse derivation (closure conversion, CPS transformation, and defunctionalization) is documented in John Reynolds's article "Definitional Interpreters for Higher-Order Programming Languages", which is the origin of the CEK machine and was subsequently identified as a blueprint for transforming compositional evaluators into abstract machines as well as vice versa. Modern times. The CEK machine, like the Krivine machine, not only corresponds functionally to a meta-circular evaluator (via a left-to-right call-by-value CPS transformation), it also corresponds syntactically to the formula_1 calculus -- a calculus that uses explicit substitution -- with a left-to-right applicative-order reduction strategy, and likewise for the SECD machine (via a right-to-left call-by-value CPS transformation). References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\lambda" }, { "math_id": 1, "text": "\\lambda\\widehat{\\rho}" } ]
https://en.wikipedia.org/wiki?curid=65332236
65333929
Three-wave equation
In nonlinear systems, the three-wave equations, sometimes called the three-wave resonant interaction equations or triad resonances, describe small-amplitude waves in a variety of non-linear media, including electrical circuits and non-linear optics. They are a set of completely integrable nonlinear partial differential equations. Because they provide the simplest, most direct example of a resonant interaction, have broad applicability in the sciences, and are completely integrable, they have been intensively studied since the 1970s. Informal introduction. The three-wave equation arises by consideration of some of the simplest imaginable non-linear systems. Linear differential systems have the generic form formula_0 for some differential operator "D". The simplest non-linear extension of this is to write formula_1 How can one solve this? Several approaches are available. In a few exceptional cases, there might be known exact solutions to equations of this form. In general, these are found in some "ad hoc" fashion after applying some ansatz. A second approach is to assume that formula_2 and use perturbation theory to find "corrections" to the linearized theory. A third approach is to apply techniques from scattering matrix (S-matrix) theory. In the S-matrix approach, one considers particles or plane waves coming in from infinity, interacting, and then moving out to infinity. Counting from zero, the zero-particle case corresponds to the vacuum, consisting entirely of the background. The one-particle case is a wave that comes in from the distant past and then disappears into thin air; this can happen when the background is absorbing, deadening or dissipative. Alternately, a wave appears out of thin air and moves away. This occurs when the background is unstable and generates waves: one says that the system "radiates". The two-particle case consists of a particle coming in, and then going out. This is appropriate when the background is non-uniform: for example, an acoustic plane wave comes in, scatters from an enemy submarine, and then moves out to infinity; by careful analysis of the outgoing wave, characteristics of the spatial inhomogeneity can be deduced. There are two more possibilities: pair creation and pair annihilation. In this case, a pair of waves is created "out of thin air" (by interacting with some background), or disappear into thin air. Next on this count is the three-particle interaction. It is unique, in that it does not require any interacting background or vacuum, nor is it "boring" in the sense of a non-interacting plane-wave in a homogeneous background. Writing formula_3 for these three waves moving from/to infinity, this simplest quadratic interaction takes the form of formula_4 and cyclic permutations thereof. This generic form can be called the three-wave equation; a specific form is presented below. A key point is that "all" quadratic resonant interactions can be written in this form (given appropriate assumptions). For time-varying systems where formula_5 can be interpreted as energy, one may write formula_6 for a time-dependent version. Review. Formally, the three-wave equation is formula_7 where formula_8 cyclic, formula_9 is the group velocity for the wave having formula_10 as the wave-vector and angular frequency, and formula_11 the gradient, taken in flat Euclidean space in "n" dimensions. The formula_12 are the interaction coefficients; by rescaling the wave, they can be taken formula_13. By cyclic permutation, there are four classes of solutions. 
Writing formula_14 one has formula_15. The formula_16 are all equivalent under permutation. In 1+1 dimensions, there are three distinct formula_17 solutions: the formula_18 solutions, termed "explosive"; the formula_19 cases, termed "stimulated backscatter"; and the formula_20 case, termed "soliton exchange". These correspond to very distinct physical processes. One interesting solution is termed the simulton; it consists of three comoving solitons, moving at a velocity "v" that differs from any of the three group velocities formula_21. This solution has a possible relationship to the "three sisters" observed in rogue waves, even though deep water does not have a three-wave resonant interaction. The lecture notes by Harvey Segur provide an introduction. The equations have a Lax pair, and are thus completely integrable. The Lax pair is a 3x3 matrix pair, to which the inverse scattering method can be applied, using techniques by Fokas. The class of spatially uniform solutions is known; these are given by the Weierstrass elliptic ℘-function. The resonant interaction relations are in this case called the Manley–Rowe relations; the invariants that they describe are easily related to the modular invariants formula_22 and formula_23 That these appear is perhaps not entirely surprising, as there is a simple intuitive argument. Subtracting one wave-vector from the other two, one is left with two vectors that generate a period lattice. All possible relative positions of two vectors are given by Klein's j-invariant; thus one should expect solutions to be characterized by this. A variety of exact solutions for various boundary conditions are known. A "nearly general solution" to the full non-linear PDE for the three-wave equation has recently been given. It is expressed in terms of five functions that can be freely chosen, and a Laurent series for the sixth parameter. Applications. Some selected applications of the three-wave equations include: These cases are all naturally described by the three-wave equation.
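For the spatially uniform case (dropping the gradient term), the equations reduce to three coupled complex ODEs, and Manley–Rowe-type invariants can be checked directly by numerical integration. The Python sketch below is only an illustration: the sign choice, initial amplitudes, step size and the hand-rolled RK4 integrator are arbitrary choices for this example, not anything prescribed by the article.

import numpy as np

eta = np.array([1.0, -1.0, -1.0])        # a "stimulated backscatter"-type sign choice

def rhs(B):
    # Spatially uniform three-wave equations: dB_j/dt = eta_j * conj(B_l) * conj(B_m).
    return eta * np.array([
        np.conj(B[1]) * np.conj(B[2]),
        np.conj(B[2]) * np.conj(B[0]),
        np.conj(B[0]) * np.conj(B[1]),
    ])

def rk4_step(B, dt):
    k1 = rhs(B)
    k2 = rhs(B + 0.5 * dt * k1)
    k3 = rhs(B + 0.5 * dt * k2)
    k4 = rhs(B + dt * k3)
    return B + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def invariants(B):
    a = eta * np.abs(B) ** 2
    return np.array([a[0] - a[1], a[1] - a[2]])   # Manley-Rowe-type combinations

B = np.array([0.5 + 0.1j, 0.3 - 0.2j, 0.2 + 0.4j])
I0 = invariants(B)
for _ in range(20000):                   # integrate to t = 20 with dt = 1e-3
    B = rk4_step(B, 1e-3)
print("drift in the two invariants:", np.abs(invariants(B) - I0))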
[ { "math_id": 0, "text": "D\\psi=\\lambda\\psi" }, { "math_id": 1, "text": "D\\psi-\\lambda\\psi=\\varepsilon\\psi^2." }, { "math_id": 2, "text": "\\varepsilon\\ll 1" }, { "math_id": 3, "text": "\\psi_1, \\psi_2, \\psi_3" }, { "math_id": 4, "text": "(D-\\lambda)\\psi_1=\\varepsilon\\psi_2\\psi_3" }, { "math_id": 5, "text": "\\lambda" }, { "math_id": 6, "text": "(D-i\\partial/\\partial t)\\psi_1=\\varepsilon\\psi_2\\psi_3" }, { "math_id": 7, "text": "\\frac{\\partial B_j}{\\partial t} + v_j \\cdot \\nabla B_j=\\eta_j B^*_\\ell B^*_m" }, { "math_id": 8, "text": "j,\\ell,m=1,2,3" }, { "math_id": 9, "text": "v_j" }, { "math_id": 10, "text": "\\vec k_j, \\omega_j" }, { "math_id": 11, "text": "\\nabla" }, { "math_id": 12, "text": "\\eta_j" }, { "math_id": 13, "text": "\\eta_j=\\pm 1" }, { "math_id": 14, "text": "\\eta=\\eta_1\\eta_2\\eta_3" }, { "math_id": 15, "text": "\\eta=\\pm 1" }, { "math_id": 16, "text": "\\eta=-1" }, { "math_id": 17, "text": "\\eta=+1" }, { "math_id": 18, "text": "+++" }, { "math_id": 19, "text": "--+" }, { "math_id": 20, "text": "-+-" }, { "math_id": 21, "text": "v_1, v_2, v_3" }, { "math_id": 22, "text": "g_2" }, { "math_id": 23, "text": "g_3." }, { "math_id": 24, "text": "\\chi^{(2)}" } ]
https://en.wikipedia.org/wiki?curid=65333929
6533836
Normal-inverse Gaussian distribution
The normal-inverse Gaussian distribution (NIG, also known as the normal-Wald distribution) is a continuous probability distribution that is defined as the normal variance-mean mixture where the mixing density is the inverse Gaussian distribution. The NIG distribution was noted by Blaesild in 1977 as a subclass of the generalised hyperbolic distribution discovered by Ole Barndorff-Nielsen. In the next year Barndorff-Nielsen published the NIG in another paper. It was introduced in the mathematical finance literature in 1997. The parameters of the normal-inverse Gaussian distribution are often used to construct a heaviness and skewness plot called the NIG-triangle. Properties. Moments. The fact that there is a simple expression for the moment generating function implies that simple expressions for all moments are available. Linear transformation. This class is closed under affine transformations, since it is a particular case of the Generalized hyperbolic distribution, which has the same property. If formula_2 then formula_3 Summation. This class is infinitely divisible, since it is a particular case of the Generalized hyperbolic distribution, which has the same property. Convolution. The class of normal-inverse Gaussian distributions is closed under convolution in the following sense: if formula_4 and formula_5 are independent random variables that are NIG-distributed with the same values of the parameters formula_0 and formula_1, but possibly different values of the location and scale parameters, formula_6, formula_7 and formula_8 formula_9, respectively, then formula_10 is NIG-distributed with parameters formula_11 formula_12formula_13 and formula_14 Related distributions. The class of NIG distributions is a flexible system of distributions that includes fat-tailed and skewed distributions, and the normal distribution, formula_15 arises as a special case by setting formula_16 and letting formula_17. Stochastic process. The normal-inverse Gaussian distribution can also be seen as the marginal distribution of the normal-inverse Gaussian process which provides an alternative way of explicitly constructing it. Starting with a drifting Brownian motion (Wiener process), formula_18, we can define the inverse Gaussian process formula_19 Then given a second independent drifting Brownian motion, formula_20, the normal-inverse Gaussian process is the time-changed process formula_21. The process formula_22 at time formula_23 has the normal-inverse Gaussian distribution described above. The NIG process is a particular instance of the more general class of Lévy processes. As a variance-mean mixture. Let formula_24 denote the inverse Gaussian distribution and formula_25 denote the normal distribution. Let formula_26, where formula_27; and let formula_28, then formula_29 follows the NIG distribution, with parameters, formula_30. This can be used to generate NIG variates by ancestral sampling. It can also be used to derive an EM algorithm for maximum-likelihood estimation of the NIG parameters.
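The variance-mean mixture above translates directly into a sampling recipe. The following Python sketch is illustrative only: the function name is ad hoc, and the mapping from IG(δ, γ) to numpy's Wald sampler (mean δ/γ, shape δ²) is an assumption to be checked against whichever inverse Gaussian convention is in use.

import numpy as np

def sample_nig(alpha, beta, delta, mu, size, rng=np.random.default_rng(0)):
    # Ancestral sampling via the variance-mean mixture:
    # z ~ IG(delta, gamma) with gamma = sqrt(alpha**2 - beta**2), then x ~ N(mu + beta*z, z).
    gamma = np.sqrt(alpha ** 2 - beta ** 2)
    # numpy's Wald sampler is parameterised by (mean, shape); here IG(delta, gamma) is
    # taken to have mean delta/gamma and shape delta**2 (see the caveat above).
    z = rng.wald(delta / gamma, delta ** 2, size=size)
    return mu + beta * z + np.sqrt(z) * rng.standard_normal(size)

x = sample_nig(alpha=2.0, beta=0.5, delta=1.5, mu=0.0, size=200_000)
gamma = np.sqrt(2.0 ** 2 - 0.5 ** 2)
print("sample mean:", x.mean(), " theoretical mean mu + delta*beta/gamma:", 1.5 * 0.5 / gamma)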
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "\\beta" }, { "math_id": 2, "text": "x\\sim\\mathcal{NIG}(\\alpha,\\beta,\\delta,\\mu) \\text{ and } y=ax+b," }, { "math_id": 3, "text": "y\\sim\\mathcal{NIG}\\bigl(\\frac{\\alpha}{\\left|a\\right|},\\frac{\\beta}{a},\\left|a\\right|\\delta,a\\mu+b\\bigr)." }, { "math_id": 4, "text": "X_1" }, { "math_id": 5, "text": "X_2" }, { "math_id": 6, "text": "\\mu_1" }, { "math_id": 7, "text": "\\delta_1" }, { "math_id": 8, "text": "\\mu_2," }, { "math_id": 9, "text": "\\delta_2" }, { "math_id": 10, "text": "X_1 + X_2" }, { "math_id": 11, "text": "\\alpha, " }, { "math_id": 12, "text": "\\beta, " }, { "math_id": 13, "text": "\\mu_1+\\mu_2" }, { "math_id": 14, "text": "\\delta_1 + \\delta_2." }, { "math_id": 15, "text": "N(\\mu,\\sigma^2)," }, { "math_id": 16, "text": "\\beta=0, \\delta=\\sigma^2\\alpha," }, { "math_id": 17, "text": "\\alpha\\rightarrow\\infty" }, { "math_id": 18, "text": "W^{(\\gamma)}(t)=W(t)+\\gamma t" }, { "math_id": 19, "text": "A_t=\\inf\\{s>0:W^{(\\gamma)}(s)=\\delta t\\}." }, { "math_id": 20, "text": "W^{(\\beta)}(t)=\\tilde W(t)+\\beta t" }, { "math_id": 21, "text": "X_t=W^{(\\beta)}(A_t)" }, { "math_id": 22, "text": "X(t)" }, { "math_id": 23, "text": "t=1" }, { "math_id": 24, "text": "\\mathcal{IG}" }, { "math_id": 25, "text": "\\mathcal{N}" }, { "math_id": 26, "text": "z\\sim\\mathcal{IG}(\\delta,\\gamma)" }, { "math_id": 27, "text": "\\gamma=\\sqrt{\\alpha^2-\\beta^2}" }, { "math_id": 28, "text": "x\\sim\\mathcal{N}(\\mu+\\beta z,z)" }, { "math_id": 29, "text": "x" }, { "math_id": 30, "text": "\\alpha,\\beta,\\delta,\\mu" } ]
https://en.wikipedia.org/wiki?curid=6533836
6533841
Non-associative algebra
Algebra over a field where binary multiplication is not necessarily associative A non-associative algebra (or distributive algebra) is an algebra over a field where the binary multiplication operation is not assumed to be associative. That is, an algebraic structure "A" is a non-associative algebra over a field "K" if it is a vector space over "K" and is equipped with a "K"-bilinear binary multiplication operation "A" × "A" → "A" which may or may not be associative. Examples include Lie algebras, Jordan algebras, the octonions, and three-dimensional Euclidean space equipped with the cross product operation. Since it is not assumed that the multiplication is associative, using parentheses to indicate the order of multiplications is necessary. For example, the expressions ("ab")("cd"), ("a"("bc"))"d" and "a"("b"("cd")) may all yield different answers. While this use of "non-associative" means that associativity is not assumed, it does not mean that associativity is disallowed. In other words, "non-associative" means "not necessarily associative", just as "noncommutative" means "not necessarily commutative" for noncommutative rings. An algebra is "unital" or "unitary" if it has an identity element "e" with "ex" = "x" = "xe" for all "x" in the algebra. For example, the octonions are unital, but Lie algebras never are. The nonassociative algebra structure of "A" may be studied by associating it with other associative algebras which are subalgebras of the full algebra of "K"-endomorphisms of "A" as a "K"-vector space. Two such are the derivation algebra and the (associative) enveloping algebra, the latter being in a sense "the smallest associative algebra containing "A"". More generally, some authors consider the concept of a non-associative algebra over a commutative ring "R": An "R"-module equipped with an "R"-bilinear binary multiplication operation. If a structure obeys all of the ring axioms apart from associativity (for example, any "R"-algebra), then it is naturally a formula_0-algebra, so some authors refer to non-associative formula_0-algebras as non-associative rings. Algebras satisfying identities. Ring-like structures with two binary operations and no other restrictions are a broad class, one which is too general to study. For this reason, the best-known kinds of non-associative algebras satisfy identities, or properties, which simplify multiplication somewhat. These include the following ones. Usual properties. Let x, y and z denote arbitrary elements of the algebra A over the field K. Let powers with positive (non-zero) integer exponent be recursively defined by "x"1 ≝ "x" and either "x""n"+1 ≝ "x""n""x" (right powers) or "x""n"+1 ≝ "xx""n" (left powers) depending on authors. Unital: there exists an element "e" with "ex" = "x" = "xe"; in that case we can define "x"0 ≝ "e". Associative: ("xy")"z" = "x"("yz"). Commutative: "xy" = "yx". Anticommutative: "xy" = −"yx". Jacobi identity: ("xy")"z" + ("yz")"x" + ("zx")"y" = 0 or "x"("yz") + "y"("zx") + "z"("xy") = 0 depending on authors. Jordan identity: ("x"2"y")"x" = "x"2("yx") or ("xy")"x"2 = "x"("yx"2) depending on authors. Alternative: ("xx")"y" = "x"("xy") (left alternative) and ("yx")"x" = "y"("xx") (right alternative). Flexible: ("xy")"x" = "x"("yx"). "n"-th power associative: "x""n−k""x""k" = "x""n" for all integers k so that 0 < "k" < "n". Third power associative: "x"2"x" = "xx"2. Fourth power associative: "x"3"x" = "x"2"x"2 = "xx"3 (compare with "fourth power commutative" below). "n"-th power commutative: "x""n−k""x""k" = "x""k""x""n−k" for all integers k so that 0 < "k" < "n". Third power commutative: "x"2"x" = "xx"2. Fourth power commutative: "x"3"x" = "xx"3 (compare with "fourth power associative" above). Nilpotent of index "n": the product of any "n" elements, in any association, equals 0, and there exist "n"−1 elements so that "y"1"y"2…"y""n"−1 ≠ 0 for a specific association. Nil of index "n": "x""n" = 0 and there exists an element y so that "y""n"−1 ≠ 0. Relations between properties. 
For K of any characteristic: If "K" ≠ GF(2) or dim("A") ≤ 3: If char("K") ≠ 2: If char("K") ≠ 3: If char("K") ∉ {2,3,5}: "x"2"x"2 (one of the two identities defining "fourth power associative") together imply "power associative". If char("K") = 0: "x"2"x"2 (one of the two identities defining "fourth power associative") together imply "power associative". If char("K") = 2: Associator. The associator on "A" is the "K"-multilinear map formula_1 given by ["x","y","z"] = ("xy")"z" − "x"("yz"). It measures the degree of nonassociativity of formula_2, and can be used to conveniently express some possible identities satisfied by "A". Let x, y and z denote arbitrary elements of the algebra. Associative: ["x","y","z"] = 0. Alternative: ["x","x","y"] = 0 (left alternative) and ["y","x","x"] = 0 (right alternative). The alternative laws imply that the associator is totally skew-symmetric, ["x","y","z"] = −["x","z","y"] = −["z","y","x"] = −["y","x","z"]; the converse holds only if char("K") ≠ 2. Flexible: ["x","y","x"] = 0. Flexibility implies ["x","y","z"] = −["z","y","x"]; the converse holds only if char("K") ≠ 2. Jordan: ["x"2,"y","x"] = 0 or ["x","y","x"2] = 0 depending on authors. Third power associative: ["x","x","x"] = 0. The nucleus is the set of elements that associate with all others: that is, the n in "A" such that ["n","A","A"] = ["A","n","A"] = ["A","A","n"] = {0}. The nucleus is an associative subring of "A". Center. The center of "A" is the set of elements that commute and associate with everything in "A", that is the intersection of formula_3 with the nucleus. It turns out that for elements of "C(A)" it is enough that two of the sets formula_4 are formula_5 for the third to also be the zero set. Examples. More classes of algebras: Properties. There are several properties that may be familiar from ring theory, or from associative algebras, which are not always true for non-associative algebras. Unlike the associative case, elements with a (two-sided) multiplicative inverse might also be a zero divisor. For example, all non-zero elements of the sedenions have a two-sided inverse, but some of them are also zero divisors. Free non-associative algebra. The free non-associative algebra on a set "X" over a field "K" is defined as the algebra with basis consisting of all non-associative monomials, finite formal products of elements of "X" retaining parentheses. The product of monomials "u", "v" is just ("u")("v"). The algebra is unital if one takes the empty product as a monomial. Kurosh proved that every subalgebra of a free non-associative algebra is free. Associated algebras. An algebra "A" over a field "K" is in particular a "K"-vector space and so one can consider the associative algebra End"K"("A") of "K"-linear vector space endomorphisms of "A". We can associate to the algebra structure on "A" two subalgebras of End"K"("A"), the derivation algebra and the (associative) enveloping algebra. Derivation algebra. A "derivation" on "A" is a map "D" with the property formula_6 The derivations on "A" form a subspace Der"K"("A") in End"K"("A"). The commutator of two derivations is again a derivation, so that the Lie bracket gives Der"K"("A") a structure of Lie algebra. Enveloping algebra. There are linear maps "L" and "R" attached to each element "a" of an algebra "A": formula_7 The "associative enveloping algebra" or "multiplication algebra" of "A" is the associative algebra generated by the left and right linear maps. The "centroid" of "A" is the centraliser of the enveloping algebra in the endomorphism algebra End"K"("A"). An algebra is "central" if its centroid consists of the "K"-scalar multiples of the identity. 
Some of the possible identities satisfied by non-associative algebras may be conveniently expressed in terms of the linear maps. The "quadratic representation" "Q" is defined by formula_8, or equivalently, formula_9 The article on universal enveloping algebras describes the canonical construction of enveloping algebras, as well as the PBW-type theorems for them. For Lie algebras, such enveloping algebras have a universal property, which does not hold, in general, for non-associative algebras. The best-known example is perhaps the Albert algebra, an exceptional Jordan algebra that is not enveloped by the canonical construction of the enveloping algebra for Jordan algebras. Citations. <templatestyles src="Reflist/styles.css" /> Notes. <templatestyles src="Reflist/styles.css" />
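A quick numerical illustration of these notions, using the cross product on three-dimensional Euclidean space mentioned in the lead: the Python sketch below (with names chosen ad hoc for this example) evaluates the associator for random vectors and checks the anticommutativity and Jacobi identities that this algebra satisfies.

import numpy as np

def associator(x, y, z):
    # [x, y, z] = (x*y)*z - x*(y*z), with * the cross product on R^3.
    return np.cross(np.cross(x, y), z) - np.cross(x, np.cross(y, z))

rng = np.random.default_rng(1)
x, y, z = rng.standard_normal((3, 3))

print("associator [x,y,z]:", associator(x, y, z))                    # generally nonzero
print("anticommutative:", np.allclose(np.cross(x, y), -np.cross(y, x)))
jacobi = (np.cross(np.cross(x, y), z) + np.cross(np.cross(y, z), x)
          + np.cross(np.cross(z, x), y))
print("Jacobi identity holds:", np.allclose(jacobi, 0))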
[ { "math_id": 0, "text": "\\mathbb{Z}" }, { "math_id": 1, "text": "[\\cdot,\\cdot,\\cdot] : A \\times A \\times A \\to A" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": " C(A) = \\{ n \\in A \\ | \\ nr=rn \\, \\forall r \\in A \\, \\} " }, { "math_id": 4, "text": "([n,A,A], [A,n,A] , [A,A,n])" }, { "math_id": 5, "text": "\\{0\\}" }, { "math_id": 6, "text": "D(x \\cdot y) = D(x) \\cdot y + x \\cdot D(y) \\ . " }, { "math_id": 7, "text": "L(a) : x \\mapsto ax ; \\ \\ R(a) : x \\mapsto xa \\ . " }, { "math_id": 8, "text": "Q(a) : x \\mapsto 2a \\cdot (a \\cdot x) - (a \\cdot a) \\cdot x \\ " }, { "math_id": 9, "text": "Q(a) = 2 L^2(a) - L(a^2) \\ . " } ]
https://en.wikipedia.org/wiki?curid=6533841
653404
Deterministic finite automaton
Finite-state machine In the theory of computation, a branch of theoretical computer science, a deterministic finite automaton (DFA)—also known as deterministic finite acceptor (DFA), deterministic finite-state machine (DFSM), or deterministic finite-state automaton (DFSA)—is a finite-state machine that accepts or rejects a given string of symbols, by running through a state sequence uniquely determined by the string. "Deterministic" refers to the uniqueness of the computation run. In search of the simplest models to capture finite-state machines, Warren McCulloch and Walter Pitts were among the first researchers to introduce a concept similar to finite automata in 1943. The figure illustrates a deterministic finite automaton using a state diagram. In this example automaton, there are three states: S0, S1, and S2 (denoted graphically by circles). The automaton takes a finite sequence of 0s and 1s as input. For each state, there is a transition arrow leading out to a next state for both 0 and 1. Upon reading a symbol, a DFA jumps "deterministically" from one state to another by following the transition arrow. For example, if the automaton is currently in state S0 and the current input symbol is 1, then it deterministically jumps to state S1. A DFA has a "start state" (denoted graphically by an arrow coming in from nowhere) where computations begin, and a set of "accept states" (denoted graphically by a double circle) which help define when a computation is successful. A DFA is defined as an abstract mathematical concept, but is often implemented in hardware and software for solving various specific problems such as lexical analysis and pattern matching. For example, a DFA can model software that decides whether or not online user input such as email addresses is syntactically valid. DFAs have been generalized to "nondeterministic finite automata (NFA)" which may have several arrows of the same label starting from a state. Using the powerset construction method, every NFA can be translated to a DFA that recognizes the same language. DFAs, and NFAs as well, recognize exactly the set of regular languages. Formal definition. A deterministic finite automaton M is a 5-tuple, ("Q", Σ, "δ", "q"0, "F"), consisting of a finite set of states "Q"; a finite set of input symbols called the alphabet Σ; a transition function "δ" : "Q" × Σ → "Q"; an initial (or start) state "q"0 ∈ "Q"; and a set of accept states "F" ⊆ "Q". Let "w" = "a"1"a"2..."an" be a string over the alphabet Σ. The automaton M accepts the string w if a sequence of states, "r"0, "r"1, ..., "rn", exists in Q with the following conditions: "r"0 = "q"0; "r""i"+1 = "δ"("r""i", "a""i"+1) for "i" = 0, ..., "n"−1; and "r""n" ∈ "F". In words, the first condition says that the machine starts in the start state "q"0. The second condition says that given each character of string w, the machine will transition from state to state according to the transition function δ. The last condition says that the machine accepts w if the last input of w causes the machine to halt in one of the accepting states. Otherwise, it is said that the automaton "rejects" the string. The set of strings that M accepts is the language "recognized" by M and this language is denoted by "L"("M"). A deterministic finite automaton without accept states and without a starting state is known as a transition system or semiautomaton. For a more comprehensive introduction to the formal definition, see automata theory. Example. The following example is of a DFA M, with a binary alphabet, which requires that the input contains an even number of 0s. "M" = ("Q", Σ, "δ", "q"0, "F") where "Q" = {"S"1, "S"2}, Σ = {0, 1}, "q"0 = "S"1, "F" = {"S"1}, and the transition function "δ" is given by "δ"("S"1, 0) = "S"2, "δ"("S"1, 1) = "S"1, "δ"("S"2, 0) = "S"1, "δ"("S"2, 1) = "S"2. The state "S"1 represents that there has been an even number of 0s in the input so far, while "S"2 signifies an odd number. A 1 in the input does not change the state of the automaton. 
When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M will finish in state "S"1, an accepting state, so the input string will be accepted. The language recognized by M is the regular language given by the regular expression codice_0, where codice_1 is the Kleene star, e.g., codice_2 denotes any number (possibly zero) of consecutive ones. Variations. Complete and incomplete. According to the above definition, deterministic finite automata are always "complete": they define from each state a transition for each input symbol. While this is the most common definition, some authors use the term deterministic finite automaton for a slightly different notion: an automaton that defines "at most" one transition for each state and each input symbol; the transition function is allowed to be partial. When no transition is defined, such an automaton halts. Local automata. A local automaton is a DFA, not necessarily complete, for which all edges with the same label lead to a single vertex. Local automata accept the class of local languages, those for which membership of a word in the language is determined by a "sliding window" of length two on the word. A Myhill graph over an alphabet "A" is a directed graph with vertex set "A" and subsets of vertices labelled "start" and "finish". The language accepted by a Myhill graph is the set of directed paths from a start vertex to a finish vertex: the graph thus acts as an automaton. The class of languages accepted by Myhill graphs is the class of local languages. Randomness. When the start state and accept states are ignored, a DFA of n states and an alphabet of size k can be seen as a digraph of n vertices in which all vertices have k out-arcs labeled 1, ..., "k" (a k-out digraph). It is known that when "k" ≥ 2 is a fixed integer, with high probability, the largest strongly connected component (SCC) in such a k-out digraph chosen uniformly at random is of linear size and it can be reached by all vertices. It has also been proven that if k is allowed to increase as n increases, then the whole digraph has a phase transition for strong connectivity similar to Erdős–Rényi model for connectivity. In a random DFA, the maximum number of vertices reachable from one vertex is very close to the number of vertices in the largest SCC with high probability. This is also true for the largest induced sub-digraph of minimum in-degree one, which can be seen as a directed version of 1-core. Closure properties. If DFAs recognize the languages that are obtained by applying an operation on the DFA recognizable languages then DFAs are said to be closed under the operation. The DFAs are closed under the following operations. For each operation, an optimal construction with respect to the number of states has been determined in state complexity research. Since DFAs are equivalent to nondeterministic finite automata (NFA), these closures may also be proved using closure properties of NFA. As a transition monoid. A run of a given DFA can be seen as a sequence of compositions of a very general formulation of the transition function with itself. Here we construct that function. For a given input symbol formula_3, one may construct a transition function formula_4 by defining formula_5 for all formula_6. (This trick is called currying.) From this perspective, formula_7 "acts" on a state in Q to yield another state. 
One may then consider the result of function composition repeatedly applied to the various functions formula_7, formula_8, and so on. Given a pair of letters formula_9, one may define a new function formula_10, where formula_11 denotes function composition. Clearly, this process may be recursively continued, giving the following recursive definition of formula_12: formula_13, where formula_14 is the empty string and formula_15, where formula_16 and formula_6. formula_17 is defined for all words formula_18. A run of the DFA is a sequence of compositions of formula_17 with itself. Repeated function composition forms a monoid. For the transition functions, this monoid is known as the transition monoid, or sometimes the "transformation semigroup". The construction can also be reversed: given a formula_17, one can reconstruct a formula_19, and so the two descriptions are equivalent. Advantages and disadvantages. DFAs are one of the most practical models of computation, since there is a trivial linear time, constant-space, online algorithm to simulate a DFA on a stream of input. Also, there are efficient algorithms to find a DFA recognizing: Because DFAs can be reduced to a "canonical form" (minimal DFAs), there are also efficient algorithms to determine: DFAs are equivalent in computing power to nondeterministic finite automata (NFAs). This is because, firstly any DFA is also an NFA, so an NFA can do what a DFA can do. Also, given an NFA, using the powerset construction one can build a DFA that recognizes the same language as the NFA, although the DFA could have exponentially larger number of states than the NFA. However, even though NFAs are computationally equivalent to DFAs, the above-mentioned problems are not necessarily solved efficiently also for NFAs. The non-universality problem for NFAs is PSPACE complete since there are small NFAs with shortest rejecting word in exponential size. A DFA is universal if and only if all states are final states, but this does not hold for NFAs. The Equality, Inclusion and Minimization Problems are also PSPACE complete since they require forming the complement of an NFA which results in an exponential blow up of size. On the other hand, finite-state automata are of strictly limited power in the languages they can recognize; many simple languages, including any problem that requires more than constant space to solve, cannot be recognized by a DFA. The classic example of a simply described language that no DFA can recognize is bracket or Dyck language, i.e., the language that consists of properly paired brackets such as word "(()())". Intuitively, no DFA can recognize the Dyck language because DFAs are not capable of counting: a DFA-like automaton needs to have a state to represent any possible number of "currently open" parentheses, meaning it would need an unbounded number of states. Another simpler example is the language consisting of strings of the form "anbn" for some finite but arbitrary number of "a"'s, followed by an equal number of "b"'s. DFA identification from labeled words. Given a set of "positive" words formula_20 and a set of "negative" words formula_21 one can construct a DFA that accepts all words from formula_22 and rejects all words from formula_23: this problem is called "DFA identification" (synthesis, learning). While "some" DFA can be constructed in linear time, the problem of identifying a DFA with the minimal number of states is NP-complete. 
The first algorithm for minimal DFA identification was proposed by Trakhtenbrot and Barzdin and is called the "TB-algorithm". However, the TB-algorithm assumes that all words over formula_24 up to a given length are contained in formula_25. Later, K. Lang proposed an extension of the TB-algorithm that does not use any assumptions about formula_22 and formula_23, the "Traxbar" algorithm. However, Traxbar does not guarantee the minimality of the constructed DFA. In his work E.M. Gold also proposed a heuristic algorithm for minimal DFA identification. Gold's algorithm assumes that formula_22 and formula_23 contain a "characteristic set" of the regular language; otherwise, the constructed DFA will be inconsistent either with formula_22 or formula_23. Other notable DFA identification algorithms include the RPNI algorithm, the Blue-Fringe evidence-driven state-merging algorithm, and Windowed-EDSM. Another research direction is the application of evolutionary algorithms: the smart state labeling evolutionary algorithm made it possible to solve a modified DFA identification problem in which the training data (sets formula_22 and formula_23) is "noisy" in the sense that some words are attributed to the wrong classes. Yet another step forward is due to the application of SAT solvers by Marijn J. H. Heule and S. Verwer: the minimal DFA identification problem is reduced to deciding the satisfiability of a Boolean formula. The main idea is to build an augmented prefix-tree acceptor (a trie containing all input words with corresponding labels) based on the input sets and reduce the problem of finding a DFA with formula_26 states to "coloring" the tree vertices with formula_26 states in such a way that when vertices with one color are merged into one state, the generated automaton is deterministic and complies with formula_22 and formula_23. Though this approach allows finding the minimal DFA, it suffers from exponential blow-up of execution time when the size of input data increases. Therefore, Heule and Verwer's initial algorithm was later augmented with several steps of the EDSM algorithm performed prior to SAT solver execution: the DFASAT algorithm. This allows reducing the search space of the problem, but leads to loss of the minimality guarantee. Another way of reducing the search space has been proposed by Ulyantsev et al. by means of new symmetry breaking predicates based on the breadth-first search algorithm: the sought DFA's states are constrained to be numbered according to the BFS algorithm launched from the initial state. This approach reduces the search space by a factor of formula_27 by eliminating isomorphic automata. Equivalent models. Read-only right-moving Turing machines. Read-only right-moving Turing machines are a particular type of Turing machine that only moves right; these are almost exactly equivalent to DFAs. The definition based on a singly infinite tape is a 7-tuple formula_28 where formula_29 is a finite set of "states"; formula_30 is a finite set of the "tape alphabet/symbols"; formula_31 is the "blank symbol" (the only symbol allowed to occur on the tape infinitely often at any step during the computation); formula_24, a subset of formula_30 not including "b", is the set of "input symbols"; formula_32 is a function called the "transition function", where "R" is a right movement (a right shift); formula_0 is the "initial state"; formula_1 is the set of "final" or "accepting states". The machine always accepts a regular language.
There must exist at least one element of the set F (a HALT state) for the language to be nonempty. For the example machine whose state table is given above, the components of the 7-tuple are: formula_33 formula_34 formula_35, the "blank" symbol; formula_36, the empty set; formula_37 see the state table above; formula_38, the initial state; formula_39 formula_40, the one-element set of final states. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "q_0 \\in Q" }, { "math_id": 1, "text": "F \\subseteq Q" }, { "math_id": 2, "text": "r_n \\in F" }, { "math_id": 3, "text": "a \\in \\Sigma" }, { "math_id": 4, "text": "\\delta_a : Q \\rightarrow Q" }, { "math_id": 5, "text": "\\delta_a(q) = \\delta(q,a)" }, { "math_id": 6, "text": "q \\in Q" }, { "math_id": 7, "text": "\\delta_a" }, { "math_id": 8, "text": "\\delta_b" }, { "math_id": 9, "text": "a, b \\in \\Sigma" }, { "math_id": 10, "text": "\\widehat\\delta_{ab}=\\delta_a \\circ \\delta_b" }, { "math_id": 11, "text": "\\circ" }, { "math_id": 12, "text": "\\widehat\\delta : Q \\times \\Sigma^{\\star} \\rightarrow Q" }, { "math_id": 13, "text": "\\widehat\\delta ( q, \\epsilon ) = q" }, { "math_id": 14, "text": "\\epsilon" }, { "math_id": 15, "text": "\\widehat\\delta ( q, wa ) = \\delta_a(\\widehat\\delta ( q, w ))" }, { "math_id": 16, "text": " w \\in \\Sigma ^*, a \\in \\Sigma " }, { "math_id": 17, "text": "\\widehat\\delta" }, { "math_id": 18, "text": "w\\in\\Sigma^*" }, { "math_id": 19, "text": "\\delta" }, { "math_id": 20, "text": "S^+ \\subset \\Sigma^*" }, { "math_id": 21, "text": "S^- \\subset \\Sigma^*" }, { "math_id": 22, "text": "S^+" }, { "math_id": 23, "text": "S^-" }, { "math_id": 24, "text": "\\Sigma" }, { "math_id": 25, "text": "S^+ \\cup S^-" }, { "math_id": 26, "text": "C" }, { "math_id": 27, "text": "C!" }, { "math_id": 28, "text": "M = \\langle Q, \\Gamma, b, \\Sigma, \\delta, q_0, F \\rangle," }, { "math_id": 29, "text": "Q" }, { "math_id": 30, "text": "\\Gamma" }, { "math_id": 31, "text": "b \\in \\Gamma" }, { "math_id": 32, "text": "\\delta: Q \\times \\Gamma \\to Q \\times \\Gamma \\times \\{R\\}" }, { "math_id": 33, "text": "Q = \\{ A, B, C, \\text{HALT} \\};" }, { "math_id": 34, "text": "\\Gamma = \\{ 0, 1 \\};" }, { "math_id": 35, "text": "b = 0" }, { "math_id": 36, "text": "\\Sigma = \\varnothing" }, { "math_id": 37, "text": "\\delta = " }, { "math_id": 38, "text": "q_0 = A" }, { "math_id": 39, "text": "F = " }, { "math_id": 40, "text": "\\{\\text{HALT}\\}" } ]
https://en.wikipedia.org/wiki?curid=653404
653406
Nondeterministic finite automaton
Type of finite-state machine in automata theory In automata theory, a finite-state machine is called a deterministic finite automaton (DFA), if each of its transitions is uniquely determined by its source state and input symbol, and reading an input symbol is required for each state transition. A nondeterministic finite automaton (NFA), or nondeterministic finite-state machine, does not need to obey these restrictions. In particular, every DFA is also an NFA. Sometimes the term NFA is used in a narrower sense, referring to an NFA that is "not" a DFA, but not in this article. Using the subset construction algorithm, each NFA can be translated to an equivalent DFA; i.e., a DFA recognizing the same formal language. Like DFAs, NFAs only recognize regular languages. NFAs were introduced in 1959 by Michael O. Rabin and Dana Scott, who also showed their equivalence to DFAs. NFAs are used in the implementation of regular expressions: Thompson's construction is an algorithm for compiling a regular expression to an NFA that can efficiently perform pattern matching on strings. Conversely, Kleene's algorithm can be used to convert an NFA into a regular expression (whose size is generally exponential in the input automaton). NFAs have been generalized in multiple ways, e.g., nondeterministic finite automata with ε-moves, finite-state transducers, pushdown automata, alternating automata, ω-automata, and probabilistic automata. Besides DFAs, other known special cases of NFAs are unambiguous finite automata (UFA) and self-verifying finite automata (SVFA). Informal introduction. There are at least two ways to describe the behavior of an NFA, and both of them are equivalent. The first way makes use of the nondeterminism in the name of an NFA. For each input symbol, the NFA transitions to a new state until all input symbols have been consumed. In each step, the automaton nondeterministically "chooses" one of the applicable transitions. If there exists at least one "lucky run", i.e. some sequence of choices leading to an accepting state after completely consuming the input, it is accepted. Otherwise, i.e. if no choice sequence at all can consume all the input and lead to an accepting state, the input is rejected. In the second way, the NFA consumes a string of input symbols, one by one. In each step, whenever two or more transitions are applicable, it "clones" itself into appropriately many copies, each one following a different transition. If no transition is applicable, the current copy is in a dead end, and it "dies". If, after consuming the complete input, any of the copies is in an accept state, the input is accepted; otherwise, it is rejected. Formal definition. For a more elementary introduction to the formal definition, see automata theory. Automaton. An "NFA" is represented formally by a 5-tuple, formula_0, consisting of a finite set of states formula_1, a finite set of input symbols formula_2, a transition function formula_3 : formula_4, an "initial" (or "start") state formula_5, and a set of states formula_6 distinguished as "accepting" (or "final") states, formula_7. Here, formula_8 denotes the power set of formula_1. Recognized language. Given an NFA formula_9, its recognized language is denoted by formula_10, and is defined as the set of all strings over the alphabet formula_2 that are accepted by formula_11. Loosely corresponding to the above informal explanations, there are several equivalent formal definitions of a string formula_12 being accepted by formula_11. In one definition, the string formula_13 is accepted if there is a sequence of states formula_14 in formula_1 such that formula_15, formula_16 for formula_17, and formula_18. In words, the first condition says that the machine starts in the start state formula_19. The second condition says that given each character of string formula_13, the machine will transition from state to state according to the transition function formula_3. The last condition says that the machine accepts formula_13 if the last input of formula_13 causes the machine to halt in one of the accepting states.
In order for formula_13 to be accepted by formula_11, it is not required that every state sequence ends in an accepting state, it is sufficient if one does. Otherwise, "i.e." if it is impossible at all to get from formula_19 to a state from formula_6 by following formula_13, it is said that the automaton "rejects" the string. The set of strings formula_11 accepts is the language "recognized" by formula_11 and this language is denoted by formula_10. An equivalent definition of acceptance uses an extended transition function formula_21, defined recursively by formula_22, where formula_23 is the empty string, and formula_24 for formula_25; the string formula_13 is then accepted if formula_20. In words, formula_26 is the set of all states reachable from state formula_27 by consuming the string formula_28. The string formula_13 is accepted if some accepting state in formula_6 can be reached from the start state formula_19 by consuming formula_13. Initial state. The above automaton definition uses a "single initial state", which is not necessary. Sometimes, NFAs are defined with a set of initial states. There is an easy construction that translates an NFA with multiple initial states to an NFA with a single initial state, which provides a convenient notation. Example. The following automaton formula_11, with a binary alphabet, determines if the input ends with a 1. Let formula_29 where the transition function formula_3 can be defined by this state transition table (cf. upper left picture): on input 0, state "p" maps to {"p"} and state "q" to the empty set; on input 1, state "p" maps to {"p", "q"} and state "q" to the empty set. Since the set formula_31 contains more than one state, formula_11 is nondeterministic. The language of formula_11 can be described by the regular language given by the regular expression codice_0. All possible state sequences for the input string "1011" are shown in the lower picture. The string is accepted by formula_11 since one state sequence satisfies the above definition; it does not matter that other sequences fail to do so. The picture can be interpreted in a couple of ways: in terms of the above "lucky-run" explanation, each path in the picture denotes a sequence of choices of formula_11, while in terms of the "cloning" explanation, each vertical column shows all clones of formula_11 at a given point in time. The fact that the same picture can be read in two ways also indicates the equivalence of both above explanations. Considering the first of the above formal definitions, "1011" is accepted since when reading it formula_11 may traverse the state sequence formula_32, which satisfies the above conditions. Concerning the second formal definition, a bottom-up computation shows that formula_33, hence formula_34, hence formula_35, hence formula_36, and hence formula_37; since that set is not disjoint from formula_38, the string "1011" is accepted. In contrast, the string "10" is rejected by formula_11 (all possible state sequences for that input are shown in the upper right picture), since there is no way to reach the only accepting state, formula_30, by reading the final 0 symbol. While formula_30 can be reached after consuming the initial "1", this does not mean that the input "10" is accepted; rather, it means that an input string "1" would be accepted. Equivalence to DFA. A deterministic finite automaton (DFA) can be seen as a special kind of NFA, in which for each state and symbol, the transition function yields exactly one state. Thus, it is clear that every formal language that can be recognized by a DFA can be recognized by an NFA. Conversely, for each NFA, there is a DFA such that it recognizes the same formal language. The DFA can be constructed using the powerset construction. This result shows that NFAs, despite their additional flexibility, are unable to recognize languages that cannot be recognized by some DFA. It is also important in practice for converting easier-to-construct NFAs into more efficiently executable DFAs. However, if the NFA has "n" states, the resulting DFA may have up to 2"n" states, which sometimes makes the construction impractical for large NFAs. NFA with ε-moves. A nondeterministic finite automaton with ε-moves (NFA-ε) is a further generalization of the NFA. In this kind of automaton, the transition function is additionally defined on the empty string ε. A transition without consuming an input symbol is called an ε-transition and is represented in state diagrams by an arrow labeled "ε".
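Before turning to ε-transitions in more detail, a small Python sketch can make the example concrete: it simulates the automaton formula_11 by tracking the set of states reachable after each symbol, which is essentially the computation the powerset construction performs one subset at a time. The transition values are those read off from the worked computation of formula_33–formula_37, with no transitions leaving state "q".

# Transition relation of the example NFA M = ({p, q}, {0, 1}, delta, p, {q}).
delta = {
    ("p", "0"): {"p"},
    ("p", "1"): {"p", "q"},
    ("q", "0"): set(),
    ("q", "1"): set(),
}
start, accepting = "p", {"q"}

def accepts(word):
    # Track the set of currently reachable states while reading the word.
    current = {start}
    for symbol in word:
        current = set().union(*(delta[(state, symbol)] for state in current))
    return bool(current & accepting)

print(accepts("1011"))  # True:  {p} -> {p,q} -> {p} -> {p,q} -> {p,q}
print(accepts("10"))    # False: {p} -> {p,q} -> {p}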
ε-transitions provide a convenient way of modeling systems whose current states are not precisely known: i.e., if we are modeling a system and it is not clear whether the current state (after processing some input string) should be q or q', then we can add an ε-transition between these two states, thus putting the automaton in both states simultaneously. Formal definition. An "NFA-ε" is represented formally by a 5-tuple, formula_0, consisting of a finite set of states formula_1, a finite set of input symbols formula_2, a transition function formula_39, an "initial" (or "start") state formula_5, and a set of states formula_6 distinguished as "accepting" (or "final") states, formula_7. Here, formula_8 denotes the power set of formula_1 and formula_23 denotes the empty string. ε-closure of a state or set of states. For a state formula_40, let formula_41 denote the set of states that are reachable from formula_30 by following ε-transitions in the transition function formula_3, i.e., formula_42 if there is a sequence of states formula_43 such that formula_44, formula_45 for formula_46, and formula_47. formula_41 is known as the epsilon closure (also ε-closure) of formula_30. The ε-closure of a set formula_48 of states of an NFA is defined as the set of states reachable from any state in formula_48 following ε-transitions. Formally, for formula_49, define formula_50. Extended transition function. Similar to NFA without ε-moves, the transition function formula_3 of an NFA-ε can be extended to strings. Informally, formula_51 denotes the set of all states the automaton may have reached when starting in state formula_40 and reading the string formula_52 The function formula_21 can be defined recursively as follows. First, formula_53 for every state formula_54 where formula_55 denotes the epsilon closure. "Informally:" Reading the empty string may drive the automaton from state formula_30 to any state of the epsilon closure of formula_56 Second, formula_57 for every formula_58 and formula_59 "Informally:" Reading the string formula_13 may drive the automaton from state formula_30 to any state formula_27 in the recursively computed set formula_51; after that, reading the symbol formula_60 may drive it from formula_27 to any state in the epsilon closure of formula_61 The automaton is said to accept a string formula_13 if formula_62 that is, if reading formula_13 may drive the automaton from its start state formula_19 to some accepting state in formula_63 Example. Let formula_11 be an NFA-ε, with a binary alphabet, that determines if the input contains an even number of 0s or an even number of 1s. Note that 0 occurrences is an even number of occurrences as well. In formal notation, let formula_64 where the transition relation formula_3 can be defined by a state transition table in which the start state has ε-transitions into each of the two sub-automata described next. formula_11 can be viewed as the union of two DFAs: one with states formula_65 and the other with states formula_66. The language of formula_11 can be described by the regular language given by this regular expression formula_67. We define formula_11 using ε-moves but formula_11 can be defined without using ε-moves. Equivalence to NFA. To show NFA-ε is equivalent to NFA, first note that NFA is a special case of NFA-ε, so it remains to show that for every NFA-ε, there exists an equivalent NFA. Given an NFA with epsilon moves formula_68 define an NFA formula_69 where formula_70 and formula_71 for each state formula_40 and each symbol formula_72 using the extended transition function formula_73 defined above. One has to distinguish the transition functions of formula_11 and formula_74 viz. formula_3 and formula_75 and their extensions to strings, formula_73 and formula_76 respectively. By construction, formula_77 has no ε-transitions.
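The ε-closure underlies both the extended transition function and the construction of formula_77, and it can be computed by a simple graph search, as in the following Python sketch over a small hypothetical transition relation (not the even-parity example above): it follows ε-labelled edges until no new state appears, matching the sequence-of-states condition in the definition.

# Hypothetical NFA-ε transitions: delta[(state, symbol)] is a set of states,
# with the empty string "" standing for ε.
delta = {
    ("s0", ""): {"s1", "s3"},
    ("s1", ""): {"s2"},
    ("s1", "a"): {"s1"},
    ("s3", "b"): {"s3"},
}

def eps_closure(states):
    # Set of states reachable from `states` by following only ε-transitions.
    closure = set(states)
    stack = list(states)
    while stack:
        q = stack.pop()
        for p in delta.get((q, ""), set()):
            if p not in closure:
                closure.add(p)
                stack.append(p)
    return closure

print(eps_closure({"s0"}))  # {'s0', 's1', 's2', 's3'} (printed order may vary)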
One can prove that formula_78 for each string formula_79, by induction on the length of formula_80 Based on this, one can show that formula_81 if, and only if, formula_82 for each string formula_83 From formula_78 and formula_88 we have formula_89 we still have to show the "formula_90" direction. *If formula_91 contains a state in formula_92 then formula_93 contains the same state, which lies in formula_6. *If formula_91 contains formula_94 and formula_95 then formula_93 also contains a state in formula_96 viz. formula_97 *If formula_91 contains formula_94 and formula_98 but formula_99 then there exists a state in formula_100, and the same state must be in formula_101 Since NFA is equivalent to DFA, NFA-ε is also equivalent to DFA. Closure properties. The set of languages recognized by NFAs is closed under the following operations. These closure operations are used in Thompson's construction algorithm, which constructs an NFA from any regular expression. They can also be used to prove that NFAs recognize exactly the regular languages. Since NFAs are equivalent to nondeterministic finite automaton with ε-moves (NFA-ε), the above closures are proved using closure properties of NFA-ε. Properties. The machine starts in the specified initial state and reads in a string of symbols from its alphabet. The automaton uses the state transition function Δ to determine the next state using the current state, and the symbol just read or the empty string. However, "the next state of an NFA depends not only on the current input event, but also on an arbitrary number of subsequent input events. Until these subsequent events occur it is not possible to determine which state the machine is in". If, when the automaton has finished reading, it is in an accepting state, the NFA is said to accept the string, otherwise it is said to reject the string. The set of all strings accepted by an NFA is the language the NFA accepts. This language is a regular language. For every NFA a deterministic finite automaton (DFA) can be found that accepts the same language. Therefore, it is possible to convert an existing NFA into a DFA for the purpose of implementing a (perhaps) simpler machine. This can be performed using the powerset construction, which may lead to an exponential rise in the number of necessary states. For a formal proof of the powerset construction, please see the Powerset construction article. Implementation. There are many ways to implement a NFA: Application of NFA. NFAs and DFAs are equivalent in that if a language is recognized by an NFA, it is also recognized by a DFA and vice versa. The establishment of such equivalence is important and useful. It is useful because constructing an NFA to recognize a given language is sometimes much easier than constructing a DFA for that language. It is important because NFAs can be used to reduce the complexity of the mathematical work required to establish many important properties in the theory of computation. For example, it is much easier to prove closure properties of regular languages using NFAs than DFAs. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(Q, \\Sigma, \\delta, q_0, F)" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "\\Sigma" }, { "math_id": 3, "text": "\\delta" }, { "math_id": 4, "text": "Q\\times \\Sigma \\rightarrow \\mathcal{P}(Q)" }, { "math_id": 5, "text": "q_0 \\in Q" }, { "math_id": 6, "text": "F" }, { "math_id": 7, "text": "F \\subseteq Q" }, { "math_id": 8, "text": "\\mathcal{P}(Q)" }, { "math_id": 9, "text": "M = (Q, \\Sigma, \\delta, q_0, F)" }, { "math_id": 10, "text": "L(M)" }, { "math_id": 11, "text": "M" }, { "math_id": 12, "text": "w = a_1 a_2 ... a_n" }, { "math_id": 13, "text": "w" }, { "math_id": 14, "text": "r_0, r_1, ..., r_n" }, { "math_id": 15, "text": "r_0 = q_0" }, { "math_id": 16, "text": "r_{i+1} \\in \\delta (r_i, a_{i+1})" }, { "math_id": 17, "text": "i = 0, \\ldots, n-1" }, { "math_id": 18, "text": "r_n \\in F" }, { "math_id": 19, "text": "q_0" }, { "math_id": 20, "text": "\\delta^*(q_0, w) \\cap F \\not = \\emptyset" }, { "math_id": 21, "text": "\\delta^*: Q \\times \\Sigma^* \\rightarrow \\mathcal{P}(Q)" }, { "math_id": 22, "text": "\\delta^*(r, \\epsilon) = \\{r\\}" }, { "math_id": 23, "text": "\\epsilon" }, { "math_id": 24, "text": "\\delta^*(r, xa)= \\bigcup_{r' \\in \\delta^*(r, x)} \\delta(r', a)" }, { "math_id": 25, "text": "x \\in \\Sigma^*, a \\in \\Sigma" }, { "math_id": 26, "text": "\\delta^*(r, x)" }, { "math_id": 27, "text": "r" }, { "math_id": 28, "text": "x" }, { "math_id": 29, "text": "M = (\\{p, q\\}, \\{0, 1\\}, \\delta, p, \\{q\\})" }, { "math_id": 30, "text": "q" }, { "math_id": 31, "text": "\\delta(p,1)" }, { "math_id": 32, "text": "\\langle r_0,r_1,r_2,r_3,r_4 \\rangle = \\langle p, p, p, p, q \\rangle" }, { "math_id": 33, "text": "\\delta^*(p,\\epsilon) = \\{ p \\}" }, { "math_id": 34, "text": "\\delta^*(p,1) = \\delta(p,1) = \\{ p,q \\}" }, { "math_id": 35, "text": "\\delta^*(p,10) = \\delta(p,0) \\cup \\delta(q,0) = \\{ p \\} \\cup \\{\\}" }, { "math_id": 36, "text": "\\delta^*(p,101) = \\delta(p,1) = \\{ p,q \\}" }, { "math_id": 37, "text": "\\delta^*(p,1011) = \\delta(p,1) \\cup \\delta(q,1) = \\{ p,q \\} \\cup \\{\\}" }, { "math_id": 38, "text": "\\{ q \\}" }, { "math_id": 39, "text": "\\delta : Q \\times (\\Sigma \\cup \\{\\epsilon\\}) \\rightarrow \\mathcal{P}(Q)" }, { "math_id": 40, "text": "q \\in Q" }, { "math_id": 41, "text": "E(q)" }, { "math_id": 42, "text": "p \\in E(q)" }, { "math_id": 43, "text": "q_1,..., q_k" }, { "math_id": 44, "text": "q_1 = q" }, { "math_id": 45, "text": "q_{i+1} \\in \\delta(q_i, \\varepsilon)" }, { "math_id": 46, "text": "1 \\le i < k" }, { "math_id": 47, "text": "q_k = p" }, { "math_id": 48, "text": "P" }, { "math_id": 49, "text": "P \\subseteq Q" }, { "math_id": 50, "text": "E(P) = \\bigcup\\limits_{q\\in P} E(q)" }, { "math_id": 51, "text": "\\delta^*(q,w)" }, { "math_id": 52, "text": "w \\in \\Sigma^* ." }, { "math_id": 53, "text": "\\delta^*(q,\\varepsilon) = E(q)" }, { "math_id": 54, "text": "q \\in Q ," }, { "math_id": 55, "text": "E" }, { "math_id": 56, "text": "q ." }, { "math_id": 57, "text": "\\delta^*(q,wa) = \\bigcup_{r \\in \\delta^*(q,w)} E(\\delta(r,a)) ," }, { "math_id": 58, "text": "w \\in \\Sigma^*" }, { "math_id": 59, "text": "a \\in \\Sigma ." }, { "math_id": 60, "text": "a" }, { "math_id": 61, "text": "\\delta(r,a) ." }, { "math_id": 62, "text": "\\delta^*(q_0,w) \\cap F \\neq \\emptyset ," }, { "math_id": 63, "text": "F ." 
}, { "math_id": 64, "text": "M = (\\{S_0, S_1, S_2, S_3, S_4\\}, \\{0, 1\\}, \\delta, S_0, \\{S_1, S_3\\})" }, { "math_id": 65, "text": "\\{S_1, S_2\\}" }, { "math_id": 66, "text": "\\{S_3, S_4\\}" }, { "math_id": 67, "text": "(1^{*}01^{*}0)^{*} \\cup (0^{*}10^{*}1)^{*}" }, { "math_id": 68, "text": "M = (Q, \\Sigma, \\delta, q_0, F) ," }, { "math_id": 69, "text": "M' = (Q, \\Sigma, \\delta', q_0, F') ," }, { "math_id": 70, "text": "F' = \\begin{cases} F \\cup \\{ q_0 \\} & \\text{ if } E(q_0) \\cap F \\neq \\{\\} \\\\ F & \\text{ otherwise } \\\\ \\end{cases} " }, { "math_id": 71, "text": "\\delta'(q,a) = \\delta^*(q,a) " }, { "math_id": 72, "text": "a \\in \\Sigma ," }, { "math_id": 73, "text": "\\delta^*" }, { "math_id": 74, "text": "M' ," }, { "math_id": 75, "text": "\\delta' ," }, { "math_id": 76, "text": "\\delta'^* ," }, { "math_id": 77, "text": "M'" }, { "math_id": 78, "text": "\\delta'^*(q_0,w) = \\delta^*(q_0,w)" }, { "math_id": 79, "text": "w \\neq \\varepsilon" }, { "math_id": 80, "text": "w ." }, { "math_id": 81, "text": "\\delta'^*(q_0,w) \\cap F' \\neq \\{\\}" }, { "math_id": 82, "text": "\\delta^*(q_0,w) \\cap F \\neq \\{\\}," }, { "math_id": 83, "text": "w \\in \\Sigma^* :" }, { "math_id": 84, "text": "w = \\varepsilon ," }, { "math_id": 85, "text": "F' ." }, { "math_id": 86, "text": "w = va" }, { "math_id": 87, "text": "v \\in \\Sigma^*" }, { "math_id": 88, "text": "F \\subseteq F' ," }, { "math_id": 89, "text": "\\delta'^*(q_0,w) \\cap F' \\neq \\{\\} \\;\\Leftarrow\\; \\delta^*(q_0,w) \\cap F \\neq \\{\\} ;" }, { "math_id": 90, "text": "\\Rightarrow" }, { "math_id": 91, "text": "\\delta'^*(q_0,w)" }, { "math_id": 92, "text": "F' \\setminus \\{ q_0 \\} ," }, { "math_id": 93, "text": "\\delta^*(q_0,w)" }, { "math_id": 94, "text": "q_0 ," }, { "math_id": 95, "text": "q_0 \\in F ," }, { "math_id": 96, "text": "F ," }, { "math_id": 97, "text": "q_0 ." }, { "math_id": 98, "text": "q_0 \\not\\in F ," }, { "math_id": 99, "text": "q_0\\in F'," }, { "math_id": 100, "text": "E(q_0)\\cap F" }, { "math_id": 101, "text": "\\delta^*(q_0,w) = \\bigcup_{r \\in \\delta^*(q,v)} E(\\delta(r,a)) ." } ]
https://en.wikipedia.org/wiki?curid=653406
6534096
Variance-gamma distribution
The variance-gamma distribution, generalized Laplace distribution or Bessel function distribution is a continuous probability distribution that is defined as the normal variance-mean mixture where the mixing density is the gamma distribution. The tails of the distribution decrease more slowly than those of the normal distribution. It is therefore suitable to model phenomena where numerically large values are more probable than is the case for the normal distribution. Examples are returns from financial assets and turbulent wind speeds. The distribution was introduced in the financial literature by Madan and Seneta. The variance-gamma distributions form a subclass of the generalised hyperbolic distributions. The fact that there is a simple expression for the moment generating function implies that simple expressions for all moments are available. The class of variance-gamma distributions is closed under convolution in the following sense. If formula_2 and formula_3 are independent random variables that are variance-gamma distributed with the same values of the parameters formula_0 and formula_1, but possibly different values of the other parameters, formula_4, formula_5 and formula_6 formula_7, respectively, then formula_8 is variance-gamma distributed with parameters formula_0, formula_1, formula_9 and formula_10. The variance-gamma distribution can also be expressed in terms of three input parameters (C,G,M), named after the initials of its founders. If the "C" parameter (formula_11 here) is an integer, then the distribution has a closed form 2-EPT distribution. See 2-EPT probability density function. Under this restriction, closed-form option prices can be derived. If formula_12, formula_13 and formula_14, the distribution becomes a Laplace distribution with scale parameter formula_15. As long as formula_13, alternative choices of formula_0 and formula_1 will produce distributions related to the Laplace distribution, with skewness, scale and location depending on the other parameters. For a symmetric variance-gamma distribution, the kurtosis is given by formula_16. See also Variance gamma process.
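Because the distribution is a normal variance-mean mixture with a gamma mixing density, samples are straightforward to generate. The following Python/NumPy sketch assumes one common parameterization consistent with the parameters formula_0, formula_1, formula_11 and a location parameter μ, in which the mixing variable has a gamma distribution with shape λ and rate (α² − β²)/2; this parameterization is an assumption of the sketch rather than something fixed by the text. With formula_12, formula_14 and formula_13 it reproduces the Laplace special case mentioned above.

import numpy as np

def sample_variance_gamma(alpha, beta, lam, mu, size, rng=None):
    # Assumed normal variance-mean mixture: W ~ Gamma(shape=lam, rate=(alpha^2 - beta^2)/2),
    # X = mu + beta * W + sqrt(W) * Z with Z standard normal.
    rng = np.random.default_rng() if rng is None else rng
    rate = (alpha**2 - beta**2) / 2.0
    w = rng.gamma(shape=lam, scale=1.0 / rate, size=size)
    z = rng.standard_normal(size)
    return mu + beta * w + np.sqrt(w) * z

# Laplace special case: alpha = 1, lambda = 1, beta = 0, mu = 0.
x = sample_variance_gamma(alpha=1.0, beta=0.0, lam=1.0, mu=0.0, size=100_000)
print(x.mean(), x.var())  # close to 0 and 2, the mean and variance of a Laplace(0, 1) law

Such samples also make it easy to check the convolution property quoted above numerically, by comparing the sum of two independent draws (same α and β, parameters λ1, μ1 and λ2, μ2) with a single draw that uses λ1 + λ2 and μ1 + μ2.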
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "\\beta" }, { "math_id": 2, "text": "X_1" }, { "math_id": 3, "text": "X_2" }, { "math_id": 4, "text": "\\lambda_1" }, { "math_id": 5, "text": "\\mu_1" }, { "math_id": 6, "text": "\\lambda_2," }, { "math_id": 7, "text": "\\mu_2" }, { "math_id": 8, "text": "X_1 + X_2" }, { "math_id": 9, "text": "\\lambda_1+\\lambda_2" }, { "math_id": 10, "text": "\\mu_1 + \\mu_2" }, { "math_id": 11, "text": "\\lambda" }, { "math_id": 12, "text": "\\alpha=1" }, { "math_id": 13, "text": "\\lambda=1" }, { "math_id": 14, "text": "\\beta=0" }, { "math_id": 15, "text": "b=1" }, { "math_id": 16, "text": "3(1 + 1/\\lambda)" } ]
https://en.wikipedia.org/wiki?curid=6534096
65342312
Bifacial solar cells
Solar cell that can produce electrical energy from each side of the cell A bifacial solar cell (BSC) is any photovoltaic solar cell that can produce electrical energy when illuminated on either of its surfaces, front or rear. In contrast, monofacial solar cells produce electrical energy only when photons impinge on their front side. Bifacial solar cells can make use of albedo radiation, which is useful for applications where a lot of light is reflected off surfaces such as roofs. The concept was introduced as a means of increasing the energy output in solar cells. Efficiency of solar cells, defined as the ratio of generated electrical power to incident luminous power under one or several suns (1 sun = 1000 W/m2), is measured independently for the front and rear surfaces for bifacial solar cells. The bifaciality factor (%) is defined as the ratio of the rear efficiency to the front efficiency subject to the same irradiance. The vast majority of solar cells today are made of silicon (Si). Silicon is a semiconductor and as such, its outer electrons are in an interval of energies called the valence band and they completely fill the energy levels of this band. Above this valence band there is a forbidden band, or band gap, of energies within which no electron can exist, and further above, we find the conduction band. The conduction band of semiconductors is almost empty of electrons, but it is where valence band electrons will find accommodation after being excited by the absorption of photons. The excited electrons have more energy than the ordinary electrons of the semiconductor. The electrical conductivity of Si, as described so far, called intrinsic silicon, is exceedingly small. Introducing impurities into the Si in the form of phosphorus atoms provides additional electrons in the conduction band, rendering the Si n-type, with a conductivity that can be engineered by modifying the density of phosphorus atoms. Alternatively, doping with boron or aluminum atoms renders the Si p-type, with a conductivity that can also be engineered. These impurity atoms capture electrons from the valence band, leaving the so-called "holes" in it, which behave like virtual positive charges. Si solar cells are usually doped with boron, thus behaving as a p-type semiconductor, and have a narrow (~0.5 micron) superficial n-type region. Between the p-type region and the n-type region the so-called p-n junction is formed, in which an electric field appears that separates electrons and holes, the electrons towards the n-type region at the surface and the holes towards the p-type region. Under illumination an excess of electron-hole pairs is generated, because additional electrons are excited by the absorbed light. Thus, a photocurrent is generated, which is extracted by metal contacts located on both faces of the semiconductor. The electron-hole pairs generated by light falling outside the p-n junction are not separated by the electric field, and thus end up recombining without producing a photocurrent. The roles of the p and n regions in the cell can be interchanged. Accordingly, a monofacial solar cell produces photocurrent only if the face where the junction has been formed is illuminated. Instead, a bifacial solar cell is designed in such a way that the cell will produce a photocurrent when either side, front or rear, is illuminated.
BSCs and modules (arrays of BSCs) were invented and first produced for space and earth applications in the late 1970s, and became mainstream solar cell technology by the 2010s. It is foreseen that it will become the leading approach to photovoltaic solar cell manufacturing by 2030 due to the shown benefits over monofacial options including increased performance, versatility, and reduce soiling impact. History of the bifacial solar cell. Invention and first devices. A silicon solar cell was first patented in 1946 by Russell Ohl when working at Bell Labs and first publicly demonstrated at the same research institution by Calvin Fuller, Daryl Chapin, and Gerald Pearson in 1954; however, these first proposals were monofacial cells and not designed to have their rear face active. The first bifacial solar cell theoretically proposed is in a Japanese patent with a priority date 4 October 1960, by Hiroshi Mori, when working for the company Hayakawa Denki Kogyo Kabushiki Kaisha (in English, Hayakawa Electric Industry Co. Ltd.), which later developed into nowadays Sharp Corporation. The proposed cell was a two-junction pnp structure with contact electrodes attached to two opposite edges. However, first demonstrations of bifacial solar cells and panels were carried out in the Soviet Space Program in the Salyut 3 (1974) and Salyut 5 (1976) LEO military space stations. These bifacial solar cells were developed and manufactured by Bordina et al. at the VNIIT (All Union Scientific Research Institute of Energy Sources) in Moscow that in 1975 became Russian solar cell manufacturer KVANT. In 1974 this team filed a US patent in which the cells were proposed with the shape of mini-parallelepipeds of maximum size 1mm × 1mm × 1mm connected in series so that there were 100 cells/cm2. As in modern-day BSCs, they proposed the use of isotype junctions pp+ close to one of the light-receiving surfaces. In Salyut 3, small experimental panels with a total cell surface of 24 cm2 demonstrated an increase in energy generation per satellite revolution due to Earth's albedo of up to 34%, compared to monofacial panels at the time. A 17–45% gain due to the use of bifacial panels (0.48m2 – 40W) was recorded during the flight of Salyut 5 space station. Simultaneous to this Russian research, on the other side of the Iron Curtain, the Laboratory of Semiconductors at the School of Telecommunication Engineering of the Technical University of Madrid, led by Professor Antonio Luque, independently carries out a broad research program seeking the development of industrially feasible bifacial solar cells. While Mori's patent and VNIIT-KVANT spaceship-borne prototypes were based on tiny cells without surface metal grid and therefore intricately interconnected, more in the style of microelectronic devices which were at that time in their onset, Luque will file two Spanish patents in 1976 and 1977 and one in the United States in 1977 that were precursory of modern bifacials . Luque's patents were the first to propose BSCs with one cell per silicon wafer, as was by then the case of monofacial cells and so continues to be, with metal grids on both surfaces. They considered both the npp+ structure and the pnp structures. Development of BSCs at the Laboratory of Semiconductors was tackled in a three-fold approach that resulted in three PhD theses, authored by Andrés Cuevas (1980), Javier Eguren (1981) and Jesús Sangrador (1982), the first two having Luque as doctoral advisor while Dr. Gabriel Sala, from the same group, conducted the third. 
Cuevas' thesis consisted of constructing the first of Luque's patents, the one of 1976, that due to its npn structure similar to that of a transistor, was dubbed the "transcell". Eguren's thesis dealt with the demonstration of Luque's 2nd patent of 1977, with a npp+ doping profile, with the pp+ isotype junction next to the cell's rear surface, creating what is usually referred as a back surface field (BSF) in solar cell technology. This work gave way to several publications and additional patents. In particular, the beneficial effect of reducing p-doping in the base, where reduction of voltage in the emitter junction (front p-n junction) was compensated by voltage increase in the rear isotype junction, while at the same time enabling higher diffusion length of minority carriers that increases the current output under bifacial illumination. Sangrador's thesis and third development route at the Technical University of Madrid, proposed the so-called vertical multijunction edge-illuminated solar cell in which p+nn+ where stacked and connected in series and illuminated by their edges, this being high voltage cells that required no surface metal grid to extract the current. In 1979 the Laboratory for Semiconductors became the Institute for Solar Energy (IES-UPM), that having Luque as the first director, continued intense research on bifacial solar cells well until the first decade of the 21st century, with remarkable results. For example, in 1994, two Brazilian PhD students at the Institute of Solar Energy, Adriano Moehlecke and Izete Zanesco, together with Luque, developed and produced a bifacial solar cell rendering 18.1% in the front face and 19.1% in the rear face; a record bifaciality of 103% (at that time record efficiency for monofacial cells was slightly below 22%). The first bifacial solar cell factory: Isofoton. Of the three BSC development approaches carried out at the Institute of Solar Energy, it was that of Eguren's thesis, the npp+, the one that gave the best results. On the other hand, it was found that bifacial solar cells could deliver up to 59% more power yearly when installed with a white surface at their back, which enhanced the sun's reflected radiation (albedo radiation) going into the cells' rear face. It could have been expected this finding to happen easier in Spain, where houses, especially rural ones are, in the south, frequently whitewashed. Hence, a spin-off company was founded to manufacture bifacial solar cells and modules, based on the npp+ development, to commercially exploit their enhanced power production when suitably installed with high albedo surfaces behind, whether ground or walls. Founded in 1981 it was named Isofotón (because its cells singularly used all isotropic photons) and established in Málaga, Luque's hometown. Its initial capital came from family and friends (e.g. most of the employees and research staff of the Institute of Solar Energy) plus some public capital from an industrial development fund, SODIAN, owned by the Andalusian Autonomous Community. It set sail with 45 shareholders, Luque as 1st chairman and co-CEO, together with his brother Alberto, a seasoned industrial entrepreneur, and having Javier Eguren as CTO. Eguren and Sala led the technology transfer from the Institute of Solar Energy to Isofoton. By 1983 Isofoton's factory in Málaga had a manufacturing capacity of 330 kW/yr. of bifacial modules (with a 15 people net headcount) at a time when the global market of photovoltaics was in the range of 15 MW. 
At that time, the market of terrestrial photovoltaic power plants, to which Isofoton oriented its production, essentially consisted of demonstration projects. Thus, early landmarks of Isofoton's production were the 20kWp power plant in San Agustín de Guadalix, built in 1986 for Iberdrola, and an off-grid installation by 1988 also of 20kWp in the village of Noto Gouye Diama (Senegal) funded by the Spanish international aid and cooperation programs. As Isofotón matured, its early shareholding structure of individuals was replaced by big technology and engineering corporations as Abengoa or Alcatel or banks such as BBVA. Upon Alcatel's entry as a major shareholder in 1987 the decision was taken to switch production to more conventional monofacial photovoltaic cells, based on licensed technology from US PV manufacturer Arco Solar, this being the end of Isofoton as the world's first and until then, only bifacial cell manufacturer. However, Isofoton still continued to forge ahead successfully and between 2000 and 2005 it ranked consistently among the world's top 10 photovoltaic manufacturers. In 2015 it filed for bankruptcy when, as almost all of the other European and Western PV manufacturers of its time, it could not withstand the competitive pressure of the new wave of Chinese PV manufacturers. Later progress until mass production. Besides Isofoton, some other PV manufacturers, however, specialized in space applications, reported developments of BSCs at a laboratory scale such as COMSAT in 1980, Solarex in 1981 or AEG Telefunken in 1984. During the late 1980s and the 1990s research and improvement of bifacial solar cell technology continued. In 1987 Jaeger and Hezel at ISFH (Institute for Solar Energy Research in Hamelin) successfully produced a new BSC design based on a single junction n+p, in which the rear contact was replaced by a metal grid and all intermetallic surfaces were passivated with PECVD-grown silicon nitride, this resulting in 15% and 13.2% under front and rear illumination respectively. In this way, these devices presented a Metal Insulator Semiconductor-Inversion Layer (MIS-IL) front junction. Ten years later, the same research group replaced this MIS layer with a diffused pn junction to produce BSC laboratory devices with 20.1% front and 17.2% rear efficiencies. In 1997, Glunz et al., at the Fraunhofer Institute for Solar Energy Systems, produced n+pn+ 4 cm2 devices with 20.6% front and 20.2% rear conversion efficiencies. This was a double junction cell (one of the junctions not connected or "floating") with the metal grid only on the rear surface, i.e. operating an interdigitated back contact (IBC) solar BSC and with the floating front junction performing as passivation. By 1997, SunPower, by then the solar cell manufacturer producing the highest efficiency cells through its back contact design, published research by a team led by its founder, Richard Swanson, on a back contact BSC with front efficiency of 21.9% and rear efficiency of 13.9%. A prototype series of cells and modules were produced but never made it to mass production. During these days, with PV module cost being almost the only driver towards a wider embracement of solar electricity – as has happened ever after – and despite their attractiveness and the large research effort carried out, the added complexity of BSCs precluded its adoption for large-scale production as had only previously been achieved by Isofoton. 
Niche applications where BSCs presented competitive advantages were proposed and demonstrated, even to the point of involving some pilot productions. For example, sun shading bifacial PV modules in facades or carports. A celebrated application demonstration was the one by Nordmann et al. in 1997, consisting of a 10 kW PV noise barrier along a north-south-oriented 120m tranche of the A1 motorway in Wallisellen (north of Zurich). BSC cells here were manufactured by German companies ASE (later RWE Schott Solar GmbH) and Kohlhauer based on a system patent by TNC Energie Consulting, and this application has since been abundantly replicated. With the turn of the millennium, paths towards the industrial production of BSC cells and modules started to be laid again. In 2000, Japanese manufacturer Hitachi released results of its research in BSCs with another transistor-like n+pn+ cell with 21.3% front and 19.8% rear efficiency. By 2003 Hitachi had developed BSC module technology that was licensed in 2006 to the US company Prism Solar. In 2004 a team led by Prof. Andrew Blakers at the Australian National University published its first results on the so-called Sliver BSC technology, that had taken the design route previously proposed by Mori and also realized by IES-UPM by Sangrador and Sala, i.e., a stack of laterally connected bifacial cells requiring no metal grids, however, by then with more advanced means with which thousands of cells were micromachined out of one p-type silicon wafer. The technology was later transferred to Origin Energy that planned large-scale manufacturing for the Australian market by 2008, but finally this never occurred due to price pressure from Chinese competition. In 2012 Sanyo (later acquired by Panasonic) successfully launches industrial production of bifacial PV modules, based on its HIT (Heterojunction with Intrinsic Thin layer) technology. By 2010, ECN releases results on its research on BSCs, based on the by then classical p+nn+ Back Surface Field BSC. This technology, dubbed n-PASHA, was transferred to the leading Chinese PV manufacturer Yingli by 2012, that began to commercialize them under the brand name Panda. Yingli was at that time the no.1 PV producer holding 10% of the world's shipments, and this technology transfer by ECN can be considered a milestone in the ultimate coming of age of BSCs, in which the technology is picked up by the, by then, mighty Chinese manufacturers mainly responsible for the steep decrease experienced PV prices since the beginning of the 2010s. By 2020, the ENF Solar directory of solar companies lists 184 producers of bifacial solar panels, and according to the International Technology Roadmap for Photovoltaics, they held a 20% share of the overall PV market and its forecast is that this share will rise to 70% by 2030. When looking back on the development history of the BSC, it seems clear that fully industrializing the monofacial PV solar cells and the development of its nowadays booming market, was a necessary condition for BSCs to become a next step in the advancement of PV solar cell technology, with a solar market and industry that can thus make the most of its performance advantages. Current bifacial solar cells. Several in-depth reviews on bifacial solar cells and their technology elements cover the current state-of-the-art. They summarize the most common BSC designs currently being marketed and then provide a review of their technological aspects. BSC types in the market. 
Various bifacial PV modules with different architectures for their BSCs are currently available in the PV market. These include Passivated Emitter Rear Contact (PERC), Passivated Emitter Rear Locally-diffused (PERL), Passivated Emitter Rear Totally diffused (PERT), Heterojunction with Intrinsic Thin-layer (HIT), Interdigitated Back Contact (IBC). Technology aspects. Silicon wafers have traditionally been used as cell substrates, although other materials have been proposed and proven. The thickness of the substrate has an essential impact on material costs; thinner wafers mean savings, but at the same time, they make handling more difficult and costly or impact the throughput. Also, thinner substrates can improve efficiency due to the reduction of bulk recombination. While monofacial cells require only one diffusion step when forming their single p-n junction, BSCs require two p-n junctions with different dopants which increase the number of high temperature processes in the manufacturing and, therefore its cost. Co-diffusion is one option to simplify this process, consisting in the pre-deposition and doping of boron and phosphorus on both sides of the cell simultaneously; however, it requires controlling there will be no cross-doping. Another cost-saving option is to build the p-n junctions using ion implantation instead of diffusion. As in monofacial cells, front contacts in BSCs cells are mainly silver screen printed that become, due to the silver content, one of its important cost elements. Research is conducted to replace screen printed silver contacts with copper-plated contacts, TCOs, or aluminum. However, the most feasible so far has been to reduce the amount of screen printing paste by using front busbar-less solar cells with very thin contact wires. In BSCs recombination at the metal-semiconductor interface in the rear surface is reduced when compared to monofacial cells, due to the former restricting this interface to that of the rear surface metal grid. However, passivation of silicon surfaces is still needed and its area extended by that of the rear surface. Again the target is to reduce the temperature of the manufacturing processes involved. Traditionally passivation was obtained by thermal oxidation (SiO2); however, this requires over 1000C temperature. Currently, silicon surface passivation is achieved by putting silicon nitride (SiNx) on both sides of the cell by means of plasma-enhanced chemical vapor deposition (PECVD), which requires 400C. Lower deposition temperatures of ~225C can be achieved by passivating with hydrogenated amorphous silicon, a-Si:H. Bifacial solar cell performance parameters. The efficiency of BSCs is usually determined by means of independent efficiency measurements of the front and rear sides under one sun. Sometimes, the BSC is characterized using its "equivalent efficiency," defined as the efficiency of a monofacial cell able to render the same power per unit area as the bifacial cell at the same test conditions. Alternatively, the equivalent efficiency has been defined as the sum of the front and rear side efficiencies weighted by the relative amounts of irradiance on both sides. Another related parameter is the "Bifaciality Factor," defined as the ratio of the front and rear efficiencies when illuminated and measured independently: formula_0 Also specific to BSCs is the "Separation Rate", that intends to measure the "Bifacial Illumination Effect" predicted by McIntosh et al. 
in 1997, by which the electrical output of BSCs operating under bifacial illumination would not necessarily equal the sum of the front-only and rear-only electrical output, i.e. it is not merely a linear combination of the monofacial characteristics: formula_1 Typically "X" represents one of the cell characteristic parameters such as the short circuit current "Jsc", the peak power "P"max or the efficiency "η." Furthermore, to characterize BSC operation under simultaneous front and rear irradiation, the irradiance gain, "g", is defined as: formula_2 so that formula_3. A bifacial "1.x" "Efficiency" can then be defined as the efficiency obtained under a simultaneous irradiance of a certain amount on the front face and "x" times this amount on the rear side of the BSC. Then the actual gain of a BSC with respect to a monofacial one can be expressed through the "Gain-Efficiency Product," which is the product of the irradiance gain "g" and the bifacial "1.x Efficiency." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{Bifaciality factor }(\\%) = \\left[ \\frac{\\eta_\\text{front}}{\\eta_\\text{rear}} \\right] \\times 100" }, { "math_id": 1, "text": "\\text{Separation rate }(\\%) = \\left[ \\frac{X_\\text{front+rear}}{X_\\text{front}+X_\\text{rear}} \\right] \\times100" }, { "math_id": 2, "text": "g = \\frac{G_f + G_r}{G_f}" }, { "math_id": 3, "text": "x = \\frac{G_r}{G_f} = g - 1" } ]
https://en.wikipedia.org/wiki?curid=65342312