id | title | text | formulas | url |
---|---|---|---|---|
863791 | Cauchy product | Concept in mathematics
In mathematics, more specifically in mathematical analysis, the Cauchy product is the discrete convolution of two infinite series. It is named after the French mathematician Augustin-Louis Cauchy.
Definitions.
The Cauchy product may apply to infinite series or power series. When people apply it to finite sequences or finite series, that can be seen merely as a particular case of a product of series with a finite number of non-zero coefficients (see discrete convolution).
Convergence issues are discussed in the next section.
Cauchy product of two infinite series.
Let formula_0 and formula_1 be two infinite series with complex terms. The Cauchy product of these two infinite series is defined by a discrete convolution as follows:
formula_2 where formula_3.
Cauchy product of two power series.
Consider the following two power series
formula_4 and formula_5
with complex coefficients formula_6 and formula_7. The Cauchy product of these two power series is defined by a discrete convolution as follows:
formula_8 where formula_3.
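As an illustration of the convolution formula above, the following short Python sketch (an addition, not part of the original article; the function name and the sample coefficients are arbitrary choices) computes the coefficients formula_3 from two coefficient lists and cross-checks them against brute-force multiplication of the truncated power series.
```python
def cauchy_coefficients(a, b, K):
    """Return c_0, ..., c_K with c_k = sum_{l=0}^{k} a_l * b_{k-l}."""
    return [sum(a[l] * b[k - l] for l in range(k + 1)) for k in range(K + 1)]

K = 6
a = [1.0] * (K + 1)   # a_i = 1 for all i (truncated geometric series 1/(1-x))
b = [1.0] * (K + 1)   # b_j = 1 for all j
c = cauchy_coefficients(a, b, K)
print(c)              # [1.0, 2.0, ..., 7.0]: c_k = k + 1, the coefficients of 1/(1-x)^2

# Cross-check: multiply the truncated polynomials term by term and compare.
prod = [0.0] * (2 * K + 1)
for i, ai in enumerate(a):
    for j, bj in enumerate(b):
        prod[i + j] += ai * bj
assert c == prod[:K + 1]
```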
Convergence and Mertens' theorem.
Let ("an")"n"≥0 and ("bn")"n"≥0 be real or complex sequences. It was proved by Franz Mertens that, if the series formula_9 converges to "A" and formula_10 converges to "B", and at least one of them converges absolutely, then their Cauchy product converges to "AB". The theorem is still valid in a Banach algebra (see first line of the following proof).
It is not sufficient for both series to be convergent; if both sequences are conditionally convergent, the Cauchy product does not have to converge towards the product of the two series, as the following example shows:
Example.
Consider the two alternating series with
formula_11
which are only conditionally convergent (the divergence of the series of the absolute values follows from the direct comparison test and the divergence of the harmonic series). The terms of their Cauchy product are given by
formula_12
for every integer "n" ≥ 0. Since for every "k" ∈ {0, 1, ..., "n"} we have the inequalities "k" + 1 ≤ "n" + 1 and "n" – "k" + 1 ≤ "n" + 1, it follows for the square root in the denominator that , hence, because there are "n" + 1 summands,
formula_13
for every integer "n" ≥ 0. Therefore, "cn" does not converge to zero as "n" → ∞, hence the series of the ("cn")"n"≥0 diverges by the term test.
Proof of Mertens' theorem.
For simplicity, we will prove it for complex numbers. However, the proof we are about to give is formally identical for an arbitrary Banach algebra (not even commutativity or associativity is required).
Assume without loss of generality that the series formula_9 converges absolutely.
Define the partial sums
formula_14
with
formula_15
Then
formula_16
by rearrangement, hence the representation (1): "Cn" − "AB" = ∑"i"=0..."n" "a""n"−"i"("Bi" − "B") + ("An" − "A")"B".
Fix "ε" > 0. Since formula_17 by absolute convergence, and since "Bn" converges to "B" as "n" → ∞, there exists an integer "N" such that, for all integers "n" ≥ "N",
(this is the only place where the absolute convergence is used). Since the series of the ("an")"n"≥0 converges, the individual "an" must converge to 0 by the term test. Hence there exists an integer "M" such that, for all integers "n" ≥ "M", the estimate (3): |"an"| ≤ "ε"/(3"N"("D" + 1)) holds, where "D" = max{|"Bi" − "B"| : 0 ≤ "i" < "N"}.
Also, since "An" converges to "A" as "n" → ∞, there exists an integer "L" such that, for all integers "n" ≥ "L",
Then, for all integers "n" ≥ max{"L", "M" + "N"}, use the representation (1) for "Cn", split the sum in two parts, use the triangle inequality for the absolute value, and finally use the three estimates (2), (3) and (4) to show that
formula_18
By the definition of convergence of a series, "Cn" → "AB" as required.
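The theorem can be illustrated numerically; the following sketch (an addition, with arbitrarily chosen series) pairs the absolutely convergent series with terms 1/2^"n" (sum 2) with the conditionally convergent alternating harmonic series (sum ln 2), and watches the partial sums of their Cauchy product approach 2 ln 2.
```python
import math

def cauchy_partial_sum(N):
    a = [0.5 ** n for n in range(N + 1)]              # absolutely convergent, sums to 2
    b = [(-1) ** n / (n + 1) for n in range(N + 1)]   # conditionally convergent, sums to ln 2
    c = [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N + 1)]
    return sum(c)

for N in (10, 100, 1000):
    print(N, cauchy_partial_sum(N), 2 * math.log(2))
# The partial sums approach 2*ln(2) ~ 1.3863, as Mertens' theorem predicts; the
# convergence is only as fast as that of the conditionally convergent factor.
```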
Cesàro's theorem.
In cases where the two sequences are convergent but not absolutely convergent, the Cauchy product is still Cesàro summable. Specifically:
If formula_19, formula_20 are real sequences with formula_21 and formula_22 then
formula_23
This can be generalised to the case where the two sequences are not convergent but just Cesàro summable:
Theorem.
For formula_24 and formula_25, suppose the sequence formula_19 is formula_26 summable with sum "A" and formula_20 is formula_27 summable with sum "B". Then their Cauchy product is formula_28 summable with sum "AB".
Generalizations.
All of the foregoing applies to sequences in formula_42 (complex numbers). The Cauchy product can be defined for series in the formula_43 spaces (Euclidean spaces) where multiplication is the inner product. In this case, we have the result that if two series converge absolutely then their Cauchy product converges absolutely to the inner product of the limits.
Products of finitely many infinite series.
Let formula_40 be such that formula_44 (actually the following is also true for formula_45, but the statement becomes trivial in that case) and let formula_46 be infinite series with complex coefficients, of which all except the formula_47th one converge absolutely, while the formula_47th one converges. Then the limit
formula_48
exists and we have:
formula_49
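A small numerical sketch of this statement (an addition, using three arbitrarily chosen absolutely convergent geometric series, so formula_47 = 3): the sum truncated to the simplex "k"1 + "k"2 + "k"3 ≤ "N" is compared with the product of the three individual sums.
```python
ratios = (0.5, -0.3, 0.25)          # three absolutely convergent geometric series
exact = 1.0
for r in ratios:
    exact *= 1.0 / (1.0 - r)        # sum of each geometric series is 1/(1-r)

N = 40                              # truncation level of the simplex sum
approx = 0.0
for k1 in range(N + 1):
    for k2 in range(N + 1 - k1):
        for k3 in range(N + 1 - k1 - k2):    # k1 + k2 + k3 <= N
            approx += ratios[0] ** k1 * ratios[1] ** k2 * ratios[2] ** k3

print(approx, exact)                # the truncated simplex sum agrees with the product of the sums
```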
Proof.
Because
formula_50
the statement can be proven by induction over formula_47: The case for formula_51 is identical to the claim about the Cauchy product. This is our induction base.
The induction step goes as follows: Let the claim be true for some formula_40 with formula_44, and let formula_52 be infinite series with complex coefficients, of which all except the formula_53th one converge absolutely, while the formula_53th one converges. We first apply the induction hypothesis to the series formula_54. We obtain that the series
formula_55
converges, and hence, by the triangle inequality and the sandwich criterion, the series
formula_56
converges, and hence the series
formula_57
converges absolutely. Therefore, by the induction hypothesis, by what Mertens proved, and by renaming of variables, we have:
formula_58
Therefore, the formula also holds for formula_53.
Relation to convolution of functions.
A finite sequence can be viewed as an infinite sequence with only finitely many nonzero terms, or in other words as a function formula_59 with finite support. For any complex-valued functions "f", "g" on formula_60 with finite support, one can take their convolution:
formula_61
Then formula_62 is the same thing as the Cauchy product of formula_63 and formula_64.
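For sequences of finite support this is exactly the discrete convolution computed by standard numerical libraries; a quick sketch (assuming NumPy is available) compares numpy.convolve with the Cauchy-product coefficients.
```python
import numpy as np

f = np.array([1.0, 2.0, 0.0, -1.0])    # a finitely supported sequence f(0), ..., f(3)
g = np.array([3.0, 0.5, 4.0])          # a finitely supported sequence g(0), ..., g(2)

conv = np.convolve(f, g)               # (f * g)(n) = sum over i + j = n of f(i) g(j)

cauchy = [sum(f[i] * g[n - i] for i in range(len(f)) if 0 <= n - i < len(g))
          for n in range(len(f) + len(g) - 1)]

assert np.allclose(conv, cauchy)
```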
More generally, given a monoid "S", one can form the semigroup algebra formula_65 of "S", with the multiplication given by convolution. If one takes, for example, formula_66, then the multiplication on formula_65 is a generalization of the Cauchy product to higher dimension.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\sum_{i=0}^\\infty a_i"
},
{
"math_id": 1,
"text": " \\sum_{j=0}^\\infty b_j"
},
{
"math_id": 2,
"text": "\\left(\\sum_{i=0}^\\infty a_i\\right) \\cdot \\left(\\sum_{j=0}^\\infty b_j\\right) = \\sum_{k=0}^\\infty c_k"
},
{
"math_id": 3,
"text": "c_k=\\sum_{l=0}^k a_l b_{k-l}"
},
{
"math_id": 4,
"text": "\\sum_{i=0}^\\infty a_i x^i"
},
{
"math_id": 5,
"text": "\\sum_{j=0}^\\infty b_j x^j"
},
{
"math_id": 6,
"text": "\\{a_i\\}"
},
{
"math_id": 7,
"text": "\\{b_j\\}"
},
{
"math_id": 8,
"text": "\\left(\\sum_{i=0}^\\infty a_i x^i\\right) \\cdot \\left(\\sum_{j=0}^\\infty b_j x^j\\right) = \\sum_{k=0}^\\infty c_k x^k"
},
{
"math_id": 9,
"text": " \\sum_{n=0}^\\infty a_n"
},
{
"math_id": 10,
"text": " \\sum_{n=0}^\\infty b_n"
},
{
"math_id": 11,
"text": "a_n = b_n = \\frac{(-1)^n}{\\sqrt{n+1}}\\,,"
},
{
"math_id": 12,
"text": "c_n = \\sum_{k=0}^n \\frac{(-1)^k}{\\sqrt{k+1}} \\cdot \\frac{ (-1)^{n-k} }{ \\sqrt{n-k+1} } = (-1)^n \\sum_{k=0}^n \\frac{1}{ \\sqrt{(k+1)(n-k+1)} }"
},
{
"math_id": 13,
"text": "|c_n| \\ge \\sum_{k=0}^n \\frac{1}{n+1} = 1"
},
{
"math_id": 14,
"text": "A_n = \\sum_{i=0}^n a_i,\\quad B_n = \\sum_{i=0}^n b_i\\quad\\text{and}\\quad C_n = \\sum_{i=0}^n c_i"
},
{
"math_id": 15,
"text": "c_i=\\sum_{k=0}^ia_kb_{i-k}\\,."
},
{
"math_id": 16,
"text": "C_n = \\sum_{i=0}^n a_{n-i}B_i"
},
{
"math_id": 17,
"text": " \\sum_{k \\in \\N} |a_k| < \\infty"
},
{
"math_id": 18,
"text": "\\begin{align}\n|C_n - AB| &= \\biggl|\\sum_{i=0}^n a_{n-i}(B_i-B)+(A_n-A)B\\biggr| \\\\\n &\\le \\sum_{i=0}^{N-1}\\underbrace{|a_{\\underbrace{\\scriptstyle n-i}_{\\scriptscriptstyle \\ge M}}|\\,|B_i-B|}_{\\le\\,\\varepsilon/3\\text{ by (3)}}+{}\\underbrace{\\sum_{i=N}^n |a_{n-i}|\\,|B_i-B|}_{\\le\\,\\varepsilon/3\\text{ by (2)}}+{}\\underbrace{|A_n-A|\\,|B|}_{\\le\\,\\varepsilon/3\\text{ by (4)}}\\le\\varepsilon\\,. \n\\end{align}"
},
{
"math_id": 19,
"text": " (a_n)_{n \\geq 0}"
},
{
"math_id": 20,
"text": " (b_n)_{n \\geq 0}"
},
{
"math_id": 21,
"text": " \\sum a_n\\to A"
},
{
"math_id": 22,
"text": " \\sum b_n\\to B"
},
{
"math_id": 23,
"text": "\\frac{1}{N}\\left(\\sum_{n=1}^N\\sum_{i=1}^n\\sum_{k=0}^i a_k b_{i-k}\\right)\\to AB."
},
{
"math_id": 24,
"text": " r>-1"
},
{
"math_id": 25,
"text": " s>-1"
},
{
"math_id": 26,
"text": " (C,\\; r)"
},
{
"math_id": 27,
"text": " (C,\\; s)"
},
{
"math_id": 28,
"text": " (C,\\; r+s+1)"
},
{
"math_id": 29,
"text": " x,y \\in \\Reals"
},
{
"math_id": 30,
"text": " a_n = x^n/n!"
},
{
"math_id": 31,
"text": " b_n = y^n/n!"
},
{
"math_id": 32,
"text": " c_n = \\sum_{i=0}^n\\frac{x^i}{i!}\\frac{y^{n-i}}{(n-i)!} = \\frac{1}{n!} \\sum_{i=0}^n \\binom{n}{i} x^i y^{n-i} = \\frac{(x+y)^n}{n!}"
},
{
"math_id": 33,
"text": " \\exp(x) = \\sum a_n"
},
{
"math_id": 34,
"text": " \\exp(y) = \\sum b_n"
},
{
"math_id": 35,
"text": " \\exp(x+y) = \\sum c_n"
},
{
"math_id": 36,
"text": " \\exp(x+y) = \\exp(x)\\exp(y)"
},
{
"math_id": 37,
"text": " a_n = b_n = 1"
},
{
"math_id": 38,
"text": " n \\in \\N"
},
{
"math_id": 39,
"text": " c_n = n+1"
},
{
"math_id": 40,
"text": "n \\in \\N"
},
{
"math_id": 41,
"text": " \\sum c_n = (1,1+2,1+2+3,1+2+3+4,\\dots)"
},
{
"math_id": 42,
"text": " \\Complex"
},
{
"math_id": 43,
"text": " \\R^n"
},
{
"math_id": 44,
"text": "n \\ge 2"
},
{
"math_id": 45,
"text": "n=1"
},
{
"math_id": 46,
"text": "\\sum_{k_1 = 0}^\\infty a_{1, k_1}, \\ldots, \\sum_{k_n = 0}^\\infty a_{n, k_n}"
},
{
"math_id": 47,
"text": "n"
},
{
"math_id": 48,
"text": "\\lim_{N\\to\\infty}\\sum_{k_1+\\ldots+k_n\\leq N} a_{1,k_1}\\cdots a_{n,k_n}"
},
{
"math_id": 49,
"text": "\\prod_{j=1}^n \\left( \\sum_{k_j = 0}^\\infty a_{j, k_j} \\right)=\\lim_{N\\to\\infty}\\sum_{k_1+\\ldots+k_n\\leq N} a_{1,k_1}\\cdots a_{n,k_n}"
},
{
"math_id": 50,
"text": "\\forall N\\in\\mathbb N:\\sum_{k_1+\\ldots+k_n\\leq N}a_{1,k_1}\\cdots a_{n,k_n}=\\sum_{k_1 = 0}^N \\sum_{k_2 = 0}^{k_1} \\cdots \\sum_{k_n = 0}^{k_{n-1}}a_{1, k_n} a_{2, k_{n-1} - k_n} \\cdots a_{n, k_1 - k_2}"
},
{
"math_id": 51,
"text": "n = 2"
},
{
"math_id": 52,
"text": "\\sum_{k_1 = 0}^\\infty a_{1, k_1}, \\ldots, \\sum_{k_{n+1} = 0}^\\infty a_{n+1, k_{n+1}}"
},
{
"math_id": 53,
"text": "n+1"
},
{
"math_id": 54,
"text": "\\sum_{k_1 = 0}^\\infty |a_{1, k_1}|, \\ldots, \\sum_{k_n = 0}^\\infty |a_{n, k_n}|"
},
{
"math_id": 55,
"text": "\\sum_{k_1 = 0}^\\infty \\sum_{k_2 = 0}^{k_1} \\cdots \\sum_{k_n = 0}^{k_{n-1}} |a_{1, k_n} a_{2, k_{n-1} - k_n} \\cdots a_{n, k_1 - k_2}|"
},
{
"math_id": 56,
"text": "\\sum_{k_1 = 0}^\\infty \\left| \\sum_{k_2 = 0}^{k_1} \\cdots \\sum_{k_n = 0}^{k_{n-1}} a_{1, k_n} a_{2, k_{n-1} - k_n} \\cdots a_{n, k_1 - k_2} \\right|"
},
{
"math_id": 57,
"text": "\\sum_{k_1 = 0}^\\infty \\sum_{k_2 = 0}^{k_1} \\cdots \\sum_{k_n = 0}^{k_{n-1}} a_{1, k_n} a_{2, k_{n-1} - k_n} \\cdots a_{n, k_1 - k_2}"
},
{
"math_id": 58,
"text": "\\begin{align}\n\\prod_{j=1}^{n+1} \\left( \\sum_{k_j = 0}^\\infty a_{j, k_j} \\right) & = \\left( \\sum_{k_{n+1} = 0}^\\infty \\overbrace{a_{n+1, k_{n+1}}}^{=:a_{k_{n+1}}} \\right) \\left( \\sum_{k_1 = 0}^\\infty \\overbrace{\\sum_{k_2 = 0}^{k_1} \\cdots \\sum_{k_n = 0}^{k_{n-1}} a_{1, k_n} a_{2, k_{n-1} - k_n} \\cdots a_{n, k_1 - k_2}}^{=:b_{k_1}} \\right) \\\\\n\n& = \\left( \\sum_{k_1 = 0}^\\infty \\overbrace{\\sum_{k_2 = 0}^{k_1} \\sum_{k_3 = 0}^{k_2} \\cdots \\sum_{k_n = 0}^{k_{n-1}} a_{1, k_n} a_{2, k_{n-1} - k_n} \\cdots a_{n, k_1 - k_2}}^{=:a_{k_1}} \\right) \\left ( \\sum_{k_{n+1} = 0}^\\infty \\overbrace{a_{n+1, k_{n+1}}}^{=:b_{k_{n+1}}} \\right) \\\\\n\n& = \\left( \\sum_{k_1 = 0}^\\infty \\overbrace{\\sum_{k_3 = 0}^{k_1} \\sum_{k_4 = 0}^{k_3} \\cdots \\sum_{k_n+1 = 0}^{k_{n}} a_{1, k_{n+1}} a_{2, k_{n} - k_{n+1}} \\cdots a_{n, k_1 - k_3}}^{=:a_{k_1}} \\right) \\left ( \\sum_{k_{2} = 0}^\\infty \\overbrace{a_{n+1, k_{2}}}^{=:b_{n+1,k_{2}}=:b_{k_{2}}} \\right) \\\\\n\n& = \\left( \\sum_{k_1 = 0}^\\infty a_{k_1} \\right) \\left ( \\sum_{k_{2} = 0}^\\infty b_{k_2} \\right) \\\\\n\n& = \\left( \\sum_{k_1 = 0}^\\infty \\sum_{k_{2} = 0}^{k_1} a_{k_2}b_{k_1 - k_2} \\right) \\\\\n\n& = \\left( \\sum_{k_1 = 0}^\\infty \\sum_{k_{2} = 0}^{k_1} \\left ( \\overbrace{\\sum_{k_3 = 0}^{k_2} \\cdots \\sum_{k_n+1 = 0}^{k_{n}} a_{1, k_{n+1}} a_{2, k_{n} - k_{n+1}} \\cdots a_{n, k_2 - k_3}}^{=:a_{k_2}} \\right) \\left ( \\overbrace{a_{n+1, k_1 - k_2}}^{=:b_{k_1 - k_2}} \\right) \\right) \\\\\n\n& = \\left( \\sum_{k_1 = 0}^\\infty \\sum_{k_{2} = 0}^{k_1} \\overbrace{\\sum_{k_3 = 0}^{k_2} \\cdots \\sum_{k_n+1 = 0}^{k_{n}} a_{1, k_{n+1}} a_{2, k_{n} - k_{n+1}} \\cdots a_{n, k_2 - k_3}}^{=:a_{k_2}} \\overbrace{a_{n+1, k_1 - k_2}}^{=:b_{k_1 - k_2}} \\right) \\\\\n\n\n& = \\sum_{k_1 = 0}^\\infty \\sum_{k_2 = 0}^{k_1} a_{n+1, k_1 - k_2} \\sum_{k_3 = 0}^{k_2} \\cdots \\sum_{k_{n+1} = 0}^{k_n} a_{1, k_{n+1}} a_{2, k_n - k_{n+1}} \\cdots a_{n, k_2 - k_3}\n\\end{align}"
},
{
"math_id": 59,
"text": "f: \\N \\to \\Complex"
},
{
"math_id": 60,
"text": "\\N"
},
{
"math_id": 61,
"text": "(f * g)(n) = \\sum_{i + j = n} f(i) g(j)."
},
{
"math_id": 62,
"text": "\\sum (f *g)(n)"
},
{
"math_id": 63,
"text": "\\sum f(n)"
},
{
"math_id": 64,
"text": "\\sum g(n)"
},
{
"math_id": 65,
"text": "\\Complex[S]"
},
{
"math_id": 66,
"text": "S = \\N^d"
}
] | https://en.wikipedia.org/wiki?curid=863791 |
863813 | Representation theory of the Poincaré group | Representation theory of an important group in physics
In mathematics, the representation theory of the Poincaré group is an example of the representation theory of a Lie group that is neither a compact group nor a semisimple group. It is fundamental in theoretical physics.
In a physical theory having Minkowski space as the underlying spacetime, the space of physical states is typically a representation of the Poincaré group. (More generally, it may be a projective representation, which amounts to a representation of the double cover of the group.)
In a classical field theory, the physical states are sections of a Poincaré-equivariant vector bundle over Minkowski space. The equivariance condition means that the group acts on the total space of the vector bundle, and the projection to Minkowski space is an equivariant map. Therefore, the Poincaré group also acts on the space of sections. Representations arising in this way (and their subquotients) are called covariant field representations, and are not usually unitary.
For a discussion of such unitary representations, see Wigner's classification.
In quantum mechanics, the state of the system is determined by the Schrödinger equation, which is invariant under Galilean transformations. Quantum field theory is the relativistic extension of quantum mechanics, where relativistic (Lorentz/Poincaré invariant) wave equations are solved, "quantized", and act on a Hilbert space composed of Fock states.
There are no finite-dimensional unitary representations of the full Lorentz (and thus Poincaré) transformations due to the non-compact nature of Lorentz boosts (rotations in Minkowski space along a space and time axis). However, there are finite-dimensional non-unitary indecomposable representations of the Poincaré algebra, which may be used for the modelling of unstable particles.
In the case of spin-1/2 particles, it is possible to find a construction that includes both a finite-dimensional representation and a scalar product preserved by this representation, by associating a 4-component Dirac spinor formula_0 with each particle. These spinors transform under Lorentz transformations generated by the gamma matrices (formula_1). It can be shown that the scalar product
formula_2
is preserved. It is not, however, positive definite, so the representation is not unitary.
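The preservation of this scalar product, and the failure of unitarity, can be checked numerically. The sketch below (an addition, assuming NumPy and SciPy are available; conventions follow the Dirac representation with metric signature (+,−,−,−)) builds γ0 and γ3, applies a finite boost along "z" in the spinor representation, and verifies that ψ†γ0φ is unchanged while the naive product ψ†φ is not.
```python
import numpy as np
from scipy.linalg import expm

# Gamma matrices in the Dirac representation (built from 2x2 blocks).
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g3 = np.block([[Z2, s3], [-s3, Z2]])

eta = 0.7                        # rapidity of a boost along the z axis
S = expm(0.5 * eta * g0 @ g3)    # spinor representation of the boost; Hermitian, not unitary

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)

bar_before = psi.conj() @ g0 @ phi
bar_after = (S @ psi).conj() @ g0 @ (S @ phi)
naive_before = psi.conj() @ phi
naive_after = (S @ psi).conj() @ (S @ phi)

assert np.isclose(bar_before, bar_after)           # psi-bar phi is preserved by the boost
assert not np.isclose(naive_before, naive_after)   # the naive product is not preserved
```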
Notes.
| [
{
"math_id": 0,
"text": "\\psi"
},
{
"math_id": 1,
"text": "\\gamma_{\\mu}"
},
{
"math_id": 2,
"text": "\\langle\\psi|\\phi\\rangle = \\bar{\\psi}\\phi = \\psi^{\\dagger}\\gamma_0\\phi"
}
] | https://en.wikipedia.org/wiki?curid=863813 |
864149 | Arrangement of lines | Subdivision of the plane by lines
In geometry, an arrangement of lines is the subdivision of the plane formed by a collection of lines. Problems of counting the features of arrangements have been studied in discrete geometry, and computational geometers have found algorithms for the efficient construction of arrangements.
Definition.
Intuitively, any finite set of lines in the plane cuts the plane into two-dimensional polygons (cells), one-dimensional line segments or rays, and zero-dimensional crossing points. This can be formalized mathematically by classifying the points of the plane according to which side of each line they are on. Each line separates the plane into two open half-planes, and each point of the plane has three possibilities per line: it can be in either one of these two half-planes, or it can be on the line itself. Two points can be considered to be equivalent if they have the same classification with respect to all of the lines. This is an equivalence relation, whose equivalence classes are subsets of equivalent points. These subsets subdivide the plane into shapes of the following three types: two-dimensional "cells", one-dimensional "edges" (line segments and rays), and zero-dimensional "vertices" (the points where two or more lines cross).
The boundary of a cell is the system of edges that touch it, and the boundary of an edge is the set of vertices that touch it (one vertex for a ray and two for a line segment). The system of objects of all three types, linked by this boundary operator, form a cell complex covering the plane. Two arrangements are said to be "isomorphic" or "combinatorially equivalent" if there is a one-to-one boundary-preserving correspondence between the objects in their associated cell complexes.
The same classification of points, and the same shapes of equivalence classes, can be used for infinite but "locally finite" arrangements, in which every bounded subset of the plane may be crossed by only finitely many lines, although in this case the unbounded cells may have infinitely many sides.
Complexity of arrangements.
The study of arrangements was begun by Jakob Steiner, who proved the first bounds on the maximum number of features of different types that an arrangement may have. The most straightforward features to count are the vertices, edges, and cells: an arrangement of formula_0 lines has at most formula_1 vertices, at most formula_4 edges, and at most formula_3 cells, with equality when the lines are in general position (no two parallel and no three through a common point).
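These counts can be reproduced by the standard incremental argument: each newly added line, in general position, creates "k" − 1 new vertices, 2"k" − 1 new edges, and "k" new cells. The following Python sketch (an addition, not part of the article) carries out that bookkeeping and checks it against the closed forms.
```python
def arrangement_counts(n):
    """Vertices, edges, cells of n lines in general position, built incrementally."""
    V = E = 0
    C = 1                        # the empty arrangement has a single cell, the whole plane
    for k in range(1, n + 1):    # add the k-th line
        V += k - 1               # it meets each of the k - 1 earlier lines in a new vertex
        E += 2 * k - 1           # it is cut into k edges and splits k - 1 existing edges
        C += k                   # it splits k existing cells in two
    return V, E, C

for n in range(1, 8):
    V, E, C = arrangement_counts(n)
    assert (V, E, C) == (n * (n - 1) // 2, n * n, n * (n + 1) // 2 + 1)
    print(n, V, E, C)
```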
More complex features go by the names of "zones", "levels", and "many faces":
Projective arrangements and projective duality.
It is convenient to study line arrangements in the projective plane as every pair of lines has a crossing point. Line arrangements cannot be defined using the sides of lines, because a line in the projective plane does not separate the plane into two distinct sides. One may still define the cells of an arrangement to be the connected components of the points not belonging to any line, the edges to be the connected components of sets of points belonging to a single line, and the vertices to be points where two or more lines cross. A line arrangement in the projective plane differs from its Euclidean counterpart in that the two Euclidean rays at either end of a line are replaced by a single edge in the projective plane that connects the leftmost and rightmost vertices on that line, and in that pairs of unbounded Euclidean cells are replaced in the projective plane by single cells that are crossed by the projective line at infinity.
Due to projective duality, many statements about the combinatorial properties of points in the plane may be more easily understood in an equivalent dual form about arrangements of lines. For instance, the Sylvester–Gallai theorem, stating that any non-collinear set of points in the plane has an "ordinary line" containing exactly two points, transforms under projective duality to the statement that any projective arrangement of finitely many lines with more than one vertex has an "ordinary point", a vertex where only two lines cross. The earliest known proof of the Sylvester–Gallai theorem, by Melchior, uses the Euler characteristic to show that such a vertex must always exist.
Triangles in arrangements.
An arrangement of lines in the projective plane is said to be "simplicial" if every cell of the arrangement is bounded by exactly three edges. Simplicial arrangements were first studied by Melchior. Three infinite families of simplicial line arrangements are known: the "near-pencils", consisting of formula_5 lines through a single point together with one additional line; the arrangements formed by the sides and symmetry axes of a regular polygon; and the arrangements formed by the sides and symmetry axes of an even regular polygon together with the line at infinity.
Additionally there are many other examples of "sporadic simplicial arrangements" that do not fit into any known infinite family.
As Branko Grünbaum writes, simplicial arrangements "appear as examples or counterexamples in many contexts of combinatorial geometry and its applications." For instance, simplicial arrangements have been used to construct counterexamples to a conjecture on the relation between the degree of a set of differential equations and the number of invariant lines the equations may have. The two known counterexamples to the Dirac–Motzkin conjecture (which states that any formula_0-line arrangement has at least formula_23 ordinary points) are both simplicial.
The dual graph of a line arrangement has one node per cell and one edge linking any pair of cells that share an edge of the arrangement. These graphs are partial cubes, graphs in which the nodes can be labeled by bitvectors in such a way that the graph distance equals the Hamming distance between labels. In the case of a line arrangement, each coordinate of the labeling assigns 0 to nodes on one side of one of the lines and 1 to nodes on the other side. Dual graphs of simplicial arrangements have been used to construct infinite families of 3-regular partial cubes, isomorphic to the graphs of simple zonohedra.
It is also of interest to study the extremal numbers of triangular cells in arrangements that may not necessarily be simplicial. Any arrangement in the projective plane must have at least formula_0 triangles. Every arrangement that has only formula_0 triangles must be simple. For Euclidean rather than projective arrangements, the minimum number of triangles is formula_24, by Roberts's triangle theorem. The maximum possible number of triangular faces in a simple arrangement is known to be upper bounded by formula_25 and lower bounded by formula_26; the lower bound is achieved by certain subsets of the diagonals of a regular formula_27-gon. For non-simple arrangements the maximum number of triangles is similar but more tightly bounded. The closely related Kobon triangle problem asks for the maximum number of non-overlapping finite triangles in an arrangement in the Euclidean plane, not counting the unbounded faces that might form triangles in the projective plane. For some but not all values of formula_0, formula_28 triangles are possible.
Multigrids and rhombus tilings.
The dual graph of a simple line arrangement may be represented geometrically as a collection of rhombi, one per vertex of the arrangement, with sides perpendicular to the lines that meet at that vertex. These rhombi may be joined together to form a tiling of a convex polygon in the case of an arrangement of finitely many lines, or of the entire plane in the case of a locally finite arrangement with infinitely many lines. This construction is sometimes known as a Klee diagram, after a publication of Rudolf Klee in 1938 that used this technique. Not every rhombus tiling comes from lines in this way, however.
de Bruijn investigated special cases of this construction in which the line arrangement consists of formula_11 sets of equally spaced parallel lines. For two perpendicular families of parallel lines this construction just gives the familiar square tiling of the plane, and for three families of lines at 120-degree angles from each other (themselves forming a trihexagonal tiling) this produces the rhombille tiling. However, for more families of lines this construction produces aperiodic tilings. In particular, for five families of lines at equal angles to each other (or, as de Bruijn calls this arrangement, a "pentagrid") it produces a family of tilings that include the rhombic version of the Penrose tilings.
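De Bruijn's dualization step can be sketched in a few lines of Python (an illustrative sketch, not de Bruijn's own code; the grid offsets and the particular crossing chosen below are arbitrary). Each crossing of two pentagrid lines is mapped to a rhombus whose four vertices are read off from the integer grid coordinates of the four cells surrounding the crossing.
```python
import numpy as np

J = 5                                              # five families of parallel lines: a pentagrid
e = np.exp(2j * np.pi * np.arange(J) / J)          # unit direction e_j of each family
gamma = np.array([0.11, 0.23, 0.05, 0.37, 0.24])   # offsets of the families (a generic choice)

def K(z):
    """Integer grid coordinates of the pentagrid cell containing the point z."""
    return np.ceil((z * e.conj()).real + gamma)

def rhombus(r, s, kr, ks, eps=1e-6):
    """Vertices of the rhombus dual to the crossing of line kr (family r) and line ks (family s)."""
    # Intersection of Re(z * conj(e_r)) + gamma_r = kr with Re(z * conj(e_s)) + gamma_s = ks.
    A = np.array([[e[r].real, e[r].imag], [e[s].real, e[s].imag]])
    x, y = np.linalg.solve(A, [kr - gamma[r], ks - gamma[s]])
    z0 = x + 1j * y
    verts = []
    for dr, ds in ((-1, -1), (1, -1), (1, 1), (-1, 1)):   # the four cells around the crossing
        k = K(z0 + eps * (dr * e[r] + ds * e[s]))
        verts.append(complex(np.sum(k * e)))              # dual vertex: sum_j K_j * e_j
    return verts

print(rhombus(0, 2, 1, 0))   # four corners of one rhombus of the dual tiling
```
Iterating this over the crossings of all ten pairs of families and drawing the resulting rhombi yields the rhombus tiling dual to the pentagrid.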
There also exist three infinite simplicial arrangements formed from sets of parallel lines. The tetrakis square tiling is an infinite arrangement of lines forming a periodic tiling that resembles a multigrid with four parallel families, but in which two of the families are more widely spaced than the other two, and in which the arrangement is simplicial rather than simple. Its dual is the truncated square tiling. Similarly, the triangular tiling is an infinite simplicial line arrangement with three parallel families, which has as its dual the hexagonal tiling, and the bisected hexagonal tiling is an infinite simplicial line arrangement with six parallel families and two line spacings, dual to the great rhombitrihexagonal tiling. These three examples come from three affine reflection groups in the Euclidean plane, systems of symmetries based on reflection across each line in these arrangements.
Algorithms.
"Constructing" an arrangement means, given as input a list of the lines in the arrangement, computing a representation of the vertices, edges, and cells of the arrangement together with the adjacencies between these objects, for instance as a doubly connected edge list. Due to the zone theorem, arrangements can be constructed efficiently by an incremental algorithm that adds one line at a time to the arrangement of the previously added lines: each new line can be added in time proportional to its zone, resulting in a total construction time However, the memory requirements of this algorithm are high, so it may be more convenient to report all features of an arrangement by an algorithm that does not keep the entire arrangement in memory at once. This may again be done efficiently, in time formula_10 and space formula_29, by an algorithmic technique known as "topological sweeping". Computing a line arrangement exactly requires a numerical precision several times greater than that of the input coordinates: if a line is specified by two points on it, the coordinates of the arrangement vertices may need four times as much precision as these input points. Therefore, computational geometers have also studied algorithms for constructing arrangements efficiently with limited numerical precision.
As well, researchers have studied efficient algorithms for constructing smaller portions of an arrangement, such as zones, formula_11-levels, or the set of cells containing a given set of points. The problem of finding the arrangement vertex with the median formula_20-coordinate arises (in a dual form) in robust statistics as the problem of computing the Theil–Sen estimator of a set of points.
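For the last of these problems, the estimator itself is easy to state: it is the median of the slopes of the lines through all pairs of sample points, and computing it exactly amounts to selecting an arrangement vertex by its formula_20-coordinate in the dual. The brute-force sketch below (an addition; quadratic in the number of points, which is precisely what the arrangement-based methods improve on) computes it directly.
```python
import statistics
from itertools import combinations

def theil_sen_slope(points):
    """Median of the slopes of the lines through all pairs of input points."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(points, 2)
              if x2 != x1]
    return statistics.median(slopes)

pts = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.2), (4, 50.0)]   # one gross outlier at x = 4
print(theil_sen_slope(pts))   # about 2.2: the outlier barely affects the estimate
```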
Marc van Kreveld suggested the algorithmic problem of computing shortest paths between vertices in a line arrangement, where the paths are restricted to follow the edges of the arrangement, more quickly than the quadratic time that it would take to apply a shortest path algorithm to the whole arrangement graph. An approximation algorithm is known, and the problem may be solved efficiently for lines that fall into a small number of parallel families (as is typical for urban street grids), but the general problem remains open.
Non-Euclidean line arrangements.
A pseudoline arrangement is a family of curves that share similar topological properties with a line arrangement. These can be defined most simply in the projective plane as simple closed curves any two of which meet in a single crossing point. A pseudoline arrangement is said to be "stretchable" if it is combinatorially equivalent to a line arrangement. Determining stretchability is a difficult computational task: it is complete for the existential theory of the reals to distinguish stretchable arrangements from non-stretchable ones. Every arrangement of finitely many pseudolines can be extended so that they become lines in a "spread", a type of non-Euclidean incidence geometry in which every two points of a topological plane are connected by a unique line (as in the Euclidean plane) but in which other axioms of Euclidean geometry may not apply.
Another type of non-Euclidean geometry is the hyperbolic plane, and
arrangements of hyperbolic lines in this geometry have also been studied. Any finite set of lines in the Euclidean plane has a combinatorially equivalent arrangement in the hyperbolic plane (e.g. by enclosing the vertices of the arrangement by a large circle and interpreting the interior of the circle as a Klein model of the hyperbolic plane). However, parallel (non-crossing) pairs of lines are less restricted in hyperbolic line arrangements than in the Euclidean plane: in particular, the relation of being parallel is an equivalence relation for Euclidean lines but not for hyperbolic lines. The intersection graph of the lines in a hyperbolic arrangement can be an arbitrary circle graph. The corresponding concept to hyperbolic line arrangements for pseudolines is a "weak pseudoline arrangement", a family of curves having the same topological properties as lines such that any two curves in the family either meet in a single crossing point or have no intersection.
Notes.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n(n-1)/2"
},
{
"math_id": 2,
"text": "n+1"
},
{
"math_id": 3,
"text": "n(n+1)/2+1"
},
{
"math_id": 4,
"text": "n^2"
},
{
"math_id": 5,
"text": "n-1"
},
{
"math_id": 6,
"text": "\\ell"
},
{
"math_id": 7,
"text": "\\lfloor 9.5n\\rfloor-1"
},
{
"math_id": 8,
"text": "O(n\\alpha(n))"
},
{
"math_id": 9,
"text": "\\alpha"
},
{
"math_id": 10,
"text": "O(n^2)"
},
{
"math_id": 11,
"text": "k"
},
{
"math_id": 12,
"text": "\\le k"
},
{
"math_id": 13,
"text": "O(nk^{1/3})"
},
{
"math_id": 14,
"text": "n2^{\\Omega(\\sqrt{\\log k})}"
},
{
"math_id": 15,
"text": "\\Theta(nk)"
},
{
"math_id": 16,
"text": "n^{2-o(1)}"
},
{
"math_id": 17,
"text": "m"
},
{
"math_id": 18,
"text": "\\Theta(m^{2/3}n^{2/3}+n)"
},
{
"math_id": 19,
"text": "x+n"
},
{
"math_id": 20,
"text": "x"
},
{
"math_id": 21,
"text": "\\Omega(x^3/m^2)"
},
{
"math_id": 22,
"text": "O(m^{2/3}n^{2/3})"
},
{
"math_id": 23,
"text": "n/2"
},
{
"math_id": 24,
"text": "n-2"
},
{
"math_id": 25,
"text": "n(n-1)/3"
},
{
"math_id": 26,
"text": "n(n-3)/3"
},
{
"math_id": 27,
"text": "2n"
},
{
"math_id": 28,
"text": "n(n-2)/3"
},
{
"math_id": 29,
"text": "O(n)"
}
] | https://en.wikipedia.org/wiki?curid=864149 |
864168 | Mains electricity by country | Mains electricity by country includes a list of countries and territories, with the plugs, voltages and frequencies they commonly use for providing electrical power to low voltage appliances, equipment, and lighting typically found in homes and offices. (For industrial machinery, see industrial and multiphase power plugs and sockets.) Some countries have more than one voltage available. For example, in North America, a unique split-phase system is used to supply most premises; it works by center-tapping a 240 volt transformer. This system is able to concurrently provide 240 volts and 120 volts. Consequently, homeowners can wire up both 240 V and 120 V circuits as they wish (as regulated by local building codes). Most sockets are connected to 120 V for the use of small appliances and electronic devices, while larger appliances such as dryers, electric ovens, ranges and EV chargers use dedicated 240 V sockets. Different sockets are mandated for different voltage or maximum current levels.
Voltage, frequency, and plug type vary, but large regions may use common standards. Physical compatibility of plugs, cords, and receptacles does not ensure compatibility of voltage, frequency, or connection to earth (ground). In some areas, older standards may still exist. Foreign enclaves, extraterritorial government installations, or buildings frequented by tourists may support plugs not otherwise used in a country, for the convenience of travellers.
Main reference source – IEC World Plugs.
The International Electrotechnical Commission (IEC) publishes a web microsite "World Plugs" which provides the main source for this page, except where other sources are indicated. "World Plugs" includes some history, a description of plug types, and a list of countries giving the type(s) used and the mains voltage and frequency.
Although useful for quick reference, especially for travellers, "IEC World Plugs" may not be regarded as totally accurate, as illustrated by the examples in the plugs section below, and errors may exist.
Voltages.
Voltages in this article are the nominal single-phase supply voltages, or split-phase supply voltages. Three-phase and industrial loads may have other voltages.
All voltages are root mean square voltage; the peak AC voltage is greater by a factor of formula_0, and the peak-to-peak voltage greater by a factor of formula_1
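For example (a short worked calculation added here), applying these factors to the two most common nominal voltages:
```python
import math

for v_rms in (120, 230):
    v_peak = v_rms * math.sqrt(2)        # peak voltage
    v_pp = 2 * v_rms * math.sqrt(2)      # peak-to-peak voltage
    print(f"{v_rms} V RMS -> peak {v_peak:.0f} V, peak-to-peak {v_pp:.0f} V")
# 120 V RMS -> peak 170 V, peak-to-peak 339 V
# 230 V RMS -> peak 325 V, peak-to-peak 651 V
```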
Plugs.
The system of plug types using a single letter (from A to N) used here is from "World Plugs", which defines the plug type letters in terms of a general description, without making reference to specific standards. Where a plug does not have a specific letter code assigned to it, it may be defined by the style sheet number listed in IEC TR 60083. Not all plugs are included in the letter system; for example, there is no designation for the plugs defined by the Thai National Standard "TIS 116-2549", though some web sites refer to the three-pin plug described in that standard as "Type O".
Notes.
References.
| [
{
"math_id": 0,
"text": "\\sqrt{2}"
},
{
"math_id": 1,
"text": "2\\sqrt{2}."
}
] | https://en.wikipedia.org/wiki?curid=864168 |
8641870 | Weinstein–Aronszajn identity | For two suitable matrices, A and B, I+AB and I+BA have the same determinant
In mathematics, the Weinstein–Aronszajn identity states that if formula_0 and formula_1 are matrices of size "m" × "n" and "n" × "m" respectively (either or both of which may be infinite) then,
provided formula_2 (and hence, also formula_3) is of trace class,
formula_4
where formula_5 is the "k" × "k" identity matrix.
It is closely related to the matrix determinant lemma and its generalization. It is the determinant analogue of the Woodbury matrix identity for matrix inverses.
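A quick numerical sanity check of the identity (a sketch added here, assuming NumPy is available; the matrix sizes and random entries are arbitrary):
```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3
A = rng.normal(size=(m, n))
B = rng.normal(size=(n, m))

lhs = np.linalg.det(np.eye(m) + A @ B)   # det(I_m + AB)
rhs = np.linalg.det(np.eye(n) + B @ A)   # det(I_n + BA)
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```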
Proof.
The identity may be proved as follows.
Let formula_6 be a matrix consisting of the four blocks formula_7, formula_0, formula_1 and formula_8:
formula_9
Because "I""m" is invertible, the formula for the determinant of a block matrix gives
formula_10
Because "I""n" is invertible, the formula for the determinant of a block matrix gives
formula_11
Thus
formula_12
Substituting formula_13 for formula_0 then gives the Weinstein–Aronszajn identity.
Applications.
Let formula_14. The identity can be used to show the somewhat more general statement that
formula_15
It follows that the non-zero eigenvalues of formula_2 and formula_3 are the same.
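Both of these consequences can be checked numerically as well (continuing the hedged sketch above, again with arbitrary matrix sizes and random entries):
```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 2
A = rng.normal(size=(m, n))
B = rng.normal(size=(n, m))
lam = 0.7

# Generalized identity: det(AB - lam*I_m) = (-lam)^(m-n) * det(BA - lam*I_n).
lhs = np.linalg.det(A @ B - lam * np.eye(m))
rhs = (-lam) ** (m - n) * np.linalg.det(B @ A - lam * np.eye(n))
assert np.isclose(lhs, rhs)

# The non-zero eigenvalues of AB coincide with those of BA; AB has m - n extra zeros.
nz_AB = sorted(np.linalg.eigvals(A @ B), key=abs)[-n:]   # drop the numerically-zero ones
nz_BA = list(np.linalg.eigvals(B @ A))
order = lambda z: (z.real, z.imag)
assert np.allclose(sorted(nz_AB, key=order), sorted(nz_BA, key=order))
```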
This identity is useful in developing a Bayes estimator for multivariate Gaussian distributions.
The identity also finds applications in random matrix theory by relating determinants of large matrices to determinants of smaller ones.
References.
| [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "AB"
},
{
"math_id": 3,
"text": "BA"
},
{
"math_id": 4,
"text": "\\det(I_m + AB) = \\det(I_n + BA),"
},
{
"math_id": 5,
"text": " I_k "
},
{
"math_id": 6,
"text": " M"
},
{
"math_id": 7,
"text": "I_m"
},
{
"math_id": 8,
"text": "I_n"
},
{
"math_id": 9,
"text": "M = \\begin{pmatrix} I_m & A \\\\ B & I_n \\end{pmatrix}. "
},
{
"math_id": 10,
"text": "\\det\\begin{pmatrix} I_m & A \\\\ B & I_n \\end{pmatrix} = \\det(I_m) \\det\\left(I_n - B I_m^{-1} A\\right) = \\det(I_n - BA). "
},
{
"math_id": 11,
"text": "\\det\\begin{pmatrix} I_m & A\\\\ B & I_n \\end{pmatrix} = \\det(I_n) \\det\\left(I_m - A I_n^{-1} B\\right) = \\det(I_m - AB)."
},
{
"math_id": 12,
"text": "\\det(I_n - B A) = \\det(I_m - A B)."
},
{
"math_id": 13,
"text": "-A"
},
{
"math_id": 14,
"text": "\\lambda \\in \\mathbb{R} \\setminus \\{0\\}"
},
{
"math_id": 15,
"text": "\\det(AB - \\lambda I_m) = (-\\lambda)^{m - n} \\det(BA - \\lambda I_n)."
}
] | https://en.wikipedia.org/wiki?curid=8641870 |
8643 | Molecular diffusion | Thermal motion of liquid or gas particles at temperatures above absolute zero
Molecular diffusion, often simply called diffusion, is the thermal motion of all (liquid or gas) particles at temperatures above absolute zero. The rate of this movement is a function of temperature, viscosity of the fluid and the size (mass) of the particles. Diffusion explains the net flux of molecules from a region of higher concentration to one of lower concentration. Once the concentrations are equal the molecules continue to move, but since there is no concentration gradient the process of molecular diffusion has ceased and is instead governed by the process of self-diffusion, originating from the random motion of the molecules. The result of diffusion is a gradual mixing of material such that the distribution of molecules is uniform. Since the molecules are still in motion, but an equilibrium has been established, the result of molecular diffusion is called a "dynamic equilibrium". In a phase with uniform temperature, absent external net forces acting on the particles, the diffusion process will eventually result in complete mixing.
Consider two systems, S1 and S2, at the same temperature and capable of exchanging particles. If there is a change in the potential energy of a system, for example μ1>μ2 (μ is the chemical potential), an energy flow will occur from S1 to S2, because nature always prefers low energy and maximum entropy.
Molecular diffusion is typically described mathematically using Fick's laws of diffusion.
Applications.
Diffusion is of fundamental importance in many disciplines of physics, chemistry, and biology. Some example applications of diffusion:
Significance.
Diffusion is part of the transport phenomena. Of mass transport mechanisms, molecular diffusion is known as a slower one.
Biology.
In cell biology, diffusion is a main form of transport for necessary materials such as amino acids within cells. Diffusion of solvents, such as water, through a semipermeable membrane is classified as osmosis.
Metabolism and respiration rely in part upon diffusion in addition to bulk or active processes. For example, in the alveoli of mammalian lungs, due to differences in partial pressures across the alveolar-capillary membrane, oxygen diffuses into the blood and carbon dioxide diffuses out. Lungs contain a large surface area to facilitate this gas exchange process.
Tracer, self- and chemical diffusion.
Fundamentally, two types of diffusion are distinguished: "tracer diffusion" (or "self-diffusion"), the spontaneous mixing of molecules that takes place in the absence of a concentration (or chemical potential) gradient, and "chemical diffusion", which occurs in the presence of a concentration (or chemical potential) gradient and results in a net transport of mass.
The diffusion coefficients for these two types of diffusion are generally different because the diffusion coefficient for chemical diffusion is binary and it includes the effects due to the correlation of the movement of the different diffusing species.
Non-equilibrium system.
Because chemical diffusion is a net transport process, the system in which it takes place is not an equilibrium system (i.e. it is not at rest yet). Many results in classical thermodynamics are not easily applied to non-equilibrium systems. However, there sometimes occur so-called quasi-steady states, where the diffusion process does not change in time, where classical results may locally apply. As the name suggests, this process is not a true equilibrium since the system is still evolving.
Non-equilibrium fluid systems can be successfully modeled with Landau-Lifshitz fluctuating hydrodynamics. In this theoretical framework, diffusion is due to fluctuations whose dimensions range from the molecular scale to the macroscopic scale.
Chemical diffusion increases the entropy of a system, i.e. diffusion is a spontaneous and irreversible process. Particles can spread out by diffusion, but will not spontaneously re-order themselves (absent changes to the system, assuming no creation of new chemical bonds, and absent external forces acting on the particle).
Concentration dependent "collective" diffusion.
"Collective diffusion" is the diffusion of a large number of particles, most often within a solvent.
Contrary to brownian motion, which is the diffusion of a single particle, interactions between particles may have to be considered, unless the particles form an ideal mix with their solvent (ideal mix conditions correspond to the case where the interactions between the solvent and particles are identical to the interactions between particles and the interactions between solvent molecules; in this case, the particles do not interact when inside the solvent).
In case of an ideal mix, the particle diffusion equation holds true and the diffusion coefficient "D" the speed of diffusion in the particle diffusion equation is independent of particle concentration. In other cases, resulting interactions between particles within the solvent will account for the following effects:
Molecular diffusion of gases.
Transport of material in stagnant fluid or across streamlines of a fluid in a laminar flow occurs by molecular diffusion. Two adjacent compartments separated by a partition, containing pure gases A or B, may be envisaged. Random movement of all molecules occurs so that after a period molecules are found remote from their original positions. If the partition is removed, some molecules of A move towards the region occupied by B, their number depending on the number of molecules at the region considered. Concurrently, molecules of B diffuse toward regions formerly occupied by pure A.
Finally, complete mixing occurs. Before this point in time, a gradual variation in the concentration of A occurs along an axis, designated x, which joins the original compartments. This variation is expressed mathematically as -dCA/dx, where CA is the concentration of A. The negative sign arises because the concentration of A decreases as the distance x increases. Similarly, the variation in the concentration of gas B is -dCB/dx. The rate of diffusion of A, NA, depends on the concentration gradient and the average velocity with which the molecules of A move in the x direction. This relationship is expressed by Fick's Law
formula_0 (only applicable for no bulk motion)
where D is the diffusivity of A through B, proportional to the average molecular velocity and, therefore, dependent on the temperature and pressure of gases. The rate of diffusion NA is usually expressed as the number of moles diffusing across unit area in unit time. As with the basic equation of heat transfer, this indicates that the rate of transfer is directly proportional to the driving force, which is the concentration gradient.
This basic equation applies to a number of situations. Restricting discussion exclusively to steady state conditions, in which neither dCA/dx nor dCB/dx changes with time, equimolecular counterdiffusion is considered first.
Equimolecular counterdiffusion.
If no bulk flow occurs in an element of length dx, the rates of diffusion of two ideal gases (of similar molar volume) A and B must be equal and opposite, that is formula_1.
The partial pressure of A changes by dPA over the distance dx. Similarly, the partial pressure of B changes dPB. As there is no difference in total pressure across the element (no bulk flow), we have
formula_2.
For an ideal gas the partial pressure is related to the molar concentration by the relation
formula_3
where nA is the number of moles of gas "A" in a volume "V". As the molar concentration "CA" is equal to "nA"/"V", it follows that
formula_4
Consequently, for gas A,
formula_5
where DAB is the diffusivity of A in B. Similarly,
formula_6
Since dPA/dx=-dPB/dx, it follows that DAB=DBA=D. If the partial pressure of A at x1 is PA1 and at x2 is PA2, integration of the above equation gives
formula_7
A similar equation may be derived for the counterdiffusion of gas B.
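A worked numerical sketch of the integrated equation (an addition; the gas pair, film thickness, and the diffusivity value of 2×10−5 m2/s are illustrative assumptions of the order of magnitude typical for gases, not values from the text):
```python
R = 8.314        # gas constant, J/(mol K)
T = 298.0        # temperature, K
D = 2.0e-5       # assumed diffusivity of A in B, m^2/s (typical order for a gas pair)

P_A1, P_A2 = 20e3, 10e3   # partial pressures of A at the two planes, Pa
x1, x2 = 0.0, 2e-3        # positions of the planes, m (a 2 mm diffusion path)

N_A = -(D / (R * T)) * (P_A2 - P_A1) / (x2 - x1)
print(N_A)   # about 0.04 mol m^-2 s^-1, directed from the high partial pressure side to the low
```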
See also.
<templatestyles src="Div col/styles.css"/>
References.
| [
{
"math_id": 0,
"text": "N_{A}= -D_{AB} \\frac{dC_{A}}{dx}"
},
{
"math_id": 1,
"text": "N_A=-N_B"
},
{
"math_id": 2,
"text": " \\frac{dP_A}{dx}=-\\frac{dP_B}{dx}"
},
{
"math_id": 3,
"text": " P_{A}V=n_{A}RT"
},
{
"math_id": 4,
"text": " P_{A}=C_{A}RT"
},
{
"math_id": 5,
"text": " N_{A}=-D_{AB} \\frac{1}{RT} \\frac{dP_{A}}{dx} "
},
{
"math_id": 6,
"text": " N_{B}=-D_{BA} \\frac{1}{RT} \\frac{dP_{B}}{dx}=D_{AB} \\frac{1}{RT}\\frac{dP_{A}}{dx}"
},
{
"math_id": 7,
"text": " N_{A}=-\\frac{D}{RT} \\frac{(P_{A2}-P_{A1})}{x_{2}-x_{1}}"
}
] | https://en.wikipedia.org/wiki?curid=8643 |
864418 | Heegaard splitting | Decomposition of a compact oriented 3-manifold into two handlebodies
In the mathematical field of geometric topology, a Heegaard splitting is a decomposition of a compact oriented 3-manifold that results from dividing it into two handlebodies.
Definitions.
Let "V" and "W" be handlebodies of genus "g", and let ƒ be an orientation reversing homeomorphism from the boundary of "V" to the boundary of "W". By gluing "V" to "W" along ƒ we obtain the compact oriented 3-manifold
formula_0
Every closed, orientable three-manifold may be so obtained; this follows from deep results on the triangulability of three-manifolds due to Moise. This contrasts strongly with higher-dimensional manifolds which need not admit smooth or piecewise linear structures. Assuming smoothness the existence of a Heegaard splitting also follows from the work of Smale about handle decompositions from Morse theory.
The decomposition of "M" into two handlebodies is called a Heegaard splitting, and their common boundary "H" is called the Heegaard surface of the splitting. Splittings are considered up to isotopy.
The gluing map ƒ need only be specified up to taking a double coset in the mapping class group of "H". This connection with the mapping class group was first made by W. B. R. Lickorish.
Heegaard splittings can also be defined for compact 3-manifolds with boundary by replacing handlebodies with compression bodies. The gluing map is between the positive boundaries of the compression bodies.
A closed curve is called essential if it is not homotopic to a point, a puncture, or a boundary component.
A Heegaard splitting is reducible if there is an essential simple closed curve formula_1 on "H" which bounds a disk in both "V" and in "W". A splitting is irreducible if it is not reducible. It follows from Haken's Lemma that in a reducible manifold every splitting is reducible.
A Heegaard splitting is stabilized if there are essential simple closed curves formula_1 and formula_2 on "H" where formula_1 bounds a disk in "V", formula_2 bounds a disk in "W", and formula_1 and formula_2 intersect exactly once. It follows from Waldhausen's Theorem that every reducible splitting of an irreducible manifold is stabilized.
A Heegaard splitting is weakly reducible if there are disjoint essential simple closed curves formula_1 and formula_2 on "H" where formula_1 bounds a disk in "V" and formula_2 bounds a disk in "W". A splitting is strongly irreducible if it is not weakly reducible.
A Heegaard splitting is minimal or minimal genus if there is no other splitting of the ambient three-manifold of lower genus. The minimal value "g" of the splitting surface is the Heegaard genus of "M".
Generalized Heegaard splittings.
A generalized Heegaard splitting of "M" is a decomposition into compression bodies formula_3 and surfaces formula_4 such that formula_5 and formula_6. The interiors of the compression bodies must be pairwise disjoint and their union must be all of formula_7. The surface formula_8 forms a Heegaard surface for the submanifold formula_9 of formula_7. (Note that here each "Vi" and "Wi" is allowed to have more than one component.)
A generalized Heegaard splitting is called strongly irreducible if each formula_9 is strongly irreducible.
There is an analogous notion of thin position, defined for knots, for Heegaard splittings. The complexity of a connected surface "S", "c(S)", is defined to be formula_10; the complexity of a disconnected surface is the sum of complexities of its components. The complexity of a generalized Heegaard splitting is the multi-set formula_11, where the index runs over the Heegaard surfaces in the generalized splitting. These multi-sets can be well-ordered by lexicographical ordering (monotonically decreasing). A generalized Heegaard splitting is thin if its complexity is minimal.
Theorems.
Suppose now that "M" is a closed orientable three-manifold.
Classifications.
There are several classes of three-manifolds where the set of Heegaard splittings is completely known. For example, Waldhausen's Theorem shows that all splittings of formula_12 are standard. The same holds for lens spaces (as proved by Francis Bonahon and Otal).
Splittings of Seifert fiber spaces are more subtle. Here, all splittings may be isotoped to be vertical or horizontal (as proved by Yoav Moriah and Jennifer Schultens).
The splittings of torus bundles (which include all three-manifolds with Sol geometry) have also been classified. It follows from this work that all torus bundles have a unique splitting of minimal genus. All other splittings of the torus bundle are stabilizations of the minimal genus one.
The Heegaard splittings of hyperbolic three-manifolds that are two-bridge knot complements have also been classified.
Computational methods can be used to determine or approximate the Heegaard genus of a 3-manifold. John Berge's software Heegaard studies Heegaard splittings generated by the fundamental group of a manifold.
Applications and connections.
Minimal surfaces.
Heegaard splittings appeared in the theory of minimal surfaces first in the work of Blaine Lawson who proved that embedded minimal surfaces in compact manifolds of positive sectional curvature are Heegaard splittings. This result was extended by William Meeks to flat manifolds, except he proves that an embedded minimal surface in a flat three-manifold is either a Heegaard surface or totally geodesic.
Meeks and Shing-Tung Yau went on to use results of Waldhausen to prove results about the topological uniqueness of minimal surfaces of finite genus in formula_31. The final topological classification of embedded minimal surfaces in formula_31 was given by Meeks and Frohman. The result relied heavily on techniques developed for studying the topology of Heegaard splittings.
Heegaard Floer homology.
Heegaard diagrams, which are simple combinatorial descriptions of Heegaard splittings, have been used extensively to construct invariants of three-manifolds. The most recent example of this is the Heegaard Floer homology of Peter Ozsvath and Zoltán Szabó. The theory uses the formula_32 symmetric product of a Heegaard surface as the ambient space, and tori built from the boundaries of meridian disks for the two handlebodies as the Lagrangian submanifolds.
History.
The idea of a Heegaard splitting was introduced by Poul Heegaard (1898). While Heegaard splittings were studied extensively by mathematicians such as Wolfgang Haken and Friedhelm Waldhausen in the 1960s, it was not until a few decades later that the field was rejuvenated by Andrew Casson and Cameron Gordon (1987), primarily through their concept of strong irreducibility.
References.
| [
{
"math_id": 0,
"text": " M = V \\cup_f W. "
},
{
"math_id": 1,
"text": "\\alpha"
},
{
"math_id": 2,
"text": "\\beta"
},
{
"math_id": 3,
"text": "V_i, W_i, i = 1, \\dotsc, n"
},
{
"math_id": 4,
"text": "H_i, i = 1, \\dotsc, n"
},
{
"math_id": 5,
"text": "\\partial_+ V_i = \\partial_+ W_i = H_i"
},
{
"math_id": 6,
"text": "\\partial_- W_i = \\partial_- V_{i+1}"
},
{
"math_id": 7,
"text": "M"
},
{
"math_id": 8,
"text": "H_i"
},
{
"math_id": 9,
"text": "V_i \\cup W_i"
},
{
"math_id": 10,
"text": "\\operatorname{max}\\left\\{0, 1 - \\chi(S)\\right\\}"
},
{
"math_id": 11,
"text": "\\{c(S_i)\\}"
},
{
"math_id": 12,
"text": "S^3"
},
{
"math_id": 13,
"text": "\\mathbb{R}^4"
},
{
"math_id": 14,
"text": "xyz"
},
{
"math_id": 15,
"text": "\\mathbb{C}^2"
},
{
"math_id": 16,
"text": "1/\\sqrt{2}"
},
{
"math_id": 17,
"text": "T^2"
},
{
"math_id": 18,
"text": "(M, H)"
},
{
"math_id": 19,
"text": "\\left(S^3, T^2\\right)"
},
{
"math_id": 20,
"text": "T^3"
},
{
"math_id": 21,
"text": "S^1"
},
{
"math_id": 22,
"text": "x_0"
},
{
"math_id": 23,
"text": "\\Gamma = \n S^1 \\times \\{x_0\\} \\times \\{x_0\\} \\cup\n \\{x_0\\} \\times S^1 \\times \\{x_0\\} \\cup\n \\{x_0\\} \\times \\{x_0\\} \\times S^1\n"
},
{
"math_id": 24,
"text": "\\Gamma"
},
{
"math_id": 25,
"text": "T^3 - V"
},
{
"math_id": 26,
"text": "H_1"
},
{
"math_id": 27,
"text": "H_2"
},
{
"math_id": 28,
"text": "H"
},
{
"math_id": 29,
"text": "S_1"
},
{
"math_id": 30,
"text": "S_2"
},
{
"math_id": 31,
"text": "\\R^3"
},
{
"math_id": 32,
"text": "g^{th}"
}
] | https://en.wikipedia.org/wiki?curid=864418 |
864438 | Arrangement of hyperplanes | Partition of space by hyperplanes
In geometry and combinatorics, an arrangement of hyperplanes is an arrangement of a finite set "A" of hyperplanes in a linear, affine, or projective space "S".
Questions about a hyperplane arrangement "A" generally concern geometrical, topological, or other properties of the complement, "M"("A"), which is the set that remains when the hyperplanes are removed from the whole space. One may ask how these properties are related to the arrangement and its intersection semilattice.
The intersection semilattice of "A", written "L"("A"), is the set of all subspaces that are obtained by intersecting some of the hyperplanes; among these subspaces are "S" itself, all the individual hyperplanes, all intersections of pairs of hyperplanes, etc. (excluding, in the affine case, the empty set). These intersection subspaces of "A" are also called the flats of "A". The intersection semilattice "L"("A") is partially ordered by "reverse inclusion".
If the whole space "S" is 2-dimensional, the hyperplanes are lines; such an arrangement is often called an arrangement of lines. Historically, real arrangements of lines were the first arrangements investigated. If "S" is 3-dimensional one has an arrangement of planes.
General theory.
The intersection semilattice and the matroid.
The intersection semilattice "L"("A") is a meet semilattice and more specifically is a geometric semilattice. If the arrangement is linear or projective, or if the intersection of all hyperplanes is nonempty, the intersection lattice is a geometric lattice.
When "L"("A") is a lattice, the matroid of "A", written "M"("A"), has "A" for its ground set and has rank function "r"("S") := codim("I"), where "S" is any subset of "A" and "I" is the intersection of the hyperplanes in "S". In general, when "L"("A") is a semilattice, there is an analogous matroid-like structure called a semimatroid, which is a generalization of a matroid (and has the same relationship to the intersection semilattice as does the matroid to the lattice in the lattice case), but is not a matroid if "L"("A") is not a lattice.
Polynomials.
For a subset "B" of "A", let us define "f"("B") := the intersection of the hyperplanes in "B"; this is "S" if "B" is empty.
The characteristic polynomial of "A", written "pA"("y"), can be defined by
formula_0
summed over all subsets "B" of "A" except, in the affine case, subsets whose intersection is empty. (The dimension of the empty set is defined to be −1.) This polynomial helps to solve some basic questions; see below.
Another polynomial associated with "A" is the Whitney-number polynomial "wA"("x", "y"), defined by
formula_1
summed over "B" ⊆ "C" ⊆ "A" such that "f"("B") is nonempty.
Being a geometric lattice or semilattice, "L"("A") has a characteristic polynomial, "p""L"("A")("y"), which has an extensive theory (see matroid). Thus it is good to know that "p""A"("y") = "y""i" "p""L"("A")("y"), where "i" is the smallest dimension of any flat, except that in the projective case it equals "y""i" + 1"p""L"("A")("y").
The Whitney-number polynomial of "A" is similarly related to that of "L"("A").
The Orlik–Solomon algebra.
The intersection semilattice determines another combinatorial invariant of the arrangement, the Orlik–Solomon algebra. To define it, fix a commutative subring "K" of the base field and form the exterior algebra "E" of the vector space
formula_2
generated by the hyperplanes.
A chain complex structure is defined on "E" with the usual boundary operator formula_3.
The Orlik–Solomon algebra is then the quotient of "E" by the ideal generated by elements of the form formula_4 for which formula_5 have empty intersection, and by boundaries of elements of the same form for which formula_6 has codimension less than "p".
Real arrangements.
In real affine space, the complement is disconnected: it is made up of separate pieces called cells or regions or chambers, each of which is either a bounded region that is a convex polytope, or an unbounded region that is a convex polyhedral region which goes off to infinity.
Each flat of "A" is also divided into pieces by the hyperplanes that do not contain the flat; these pieces are called the faces of "A".
The regions are faces because the whole space is a flat.
The faces of codimension 1 may be called the facets of "A".
The face semilattice of an arrangement is the set of all faces, ordered by "inclusion". Adding an extra top element to the face semilattice gives the face lattice.
In two dimensions (i.e., in the real affine plane) each region is a convex polygon (if it is bounded) or a convex polygonal region which goes off to infinity.
Typical problems about an arrangement in "n"-dimensional real space are to say how many regions there are, or how many faces of dimension 4, or how many bounded regions. These questions can be answered just from the intersection semilattice. For instance, two basic theorems, from Zaslavsky (1975), are that the number of regions of an affine arrangement equals (−1)"n""p""A"(−1) and the number of bounded regions equals (−1)"n"p"A"(1). Similarly, the number of "k"-dimensional faces or bounded faces can be read off as the coefficient of "x""n"−"k" in (−1)"n" w"A" (−"x", −1) or (−1)"n""w""A"(−"x", 1).
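As a concrete illustration of Zaslavsky's formulas, the following Python sketch (not part of the original article; the brute-force subset enumeration and the particular lines are illustrative choices) computes the characteristic polynomial of three lines in general position in the plane and evaluates it to count regions and bounded regions.

```python
import numpy as np
from itertools import combinations

def flat_dim(hyperplanes, n=2):
    """Dimension of the intersection of affine hyperplanes a.x = c, or None if empty."""
    if not hyperplanes:
        return n
    A = np.array([h[:n] for h in hyperplanes], dtype=float)
    b = np.array([h[n] for h in hyperplanes], dtype=float)
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_A != rank_Ab:        # inconsistent system: empty intersection, excluded from the sum
        return None
    return n - rank_A

def characteristic_polynomial(hyperplanes, n=2):
    """Coefficient list of p_A(y); index k holds the coefficient of y**k."""
    coeffs = [0] * (n + 1)
    for r in range(len(hyperplanes) + 1):
        for B in combinations(hyperplanes, r):
            d = flat_dim(list(B), n)
            if d is not None:
                coeffs[d] += (-1) ** r
    return coeffs

# Three lines in general position, stored as (a, b, c) for ax + by = c: x = 0, y = 0, x + y = 1.
lines = [(1, 0, 0), (0, 1, 0), (1, 1, 1)]
coeffs = characteristic_polynomial(lines)     # [3, -3, 1], i.e. p_A(y) = y**2 - 3*y + 3

def p(y):
    return sum(c * y ** k for k, c in enumerate(coeffs))

print((-1) ** 2 * p(-1))   # 7 regions
print((-1) ** 2 * p(1))    # 1 bounded region
```

For these three lines the polynomial is y² − 3y + 3, giving 7 regions and 1 bounded region, in agreement with a direct sketch of the arrangement.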
A fast algorithm has been designed to determine the face of an arrangement of hyperplanes containing an input point.
Another question about an arrangement in real space is to decide how many regions are simplices (the "n"-dimensional generalization of triangles and tetrahedra). This cannot be answered based solely on the intersection semilattice. The McMullen problem asks for the smallest arrangement of a given dimension in general position in real projective space for which there does not exist a cell touched by all hyperplanes.
A real linear arrangement has, besides its face semilattice, a poset of regions, a different one for each region. This poset is formed by choosing an arbitrary base region, "B"0, and associating with each region "R" the set "S"("R") consisting of the hyperplanes that separate "R" from "B"0. The regions are partially ordered so that "R"1 ≥ "R"2 if "S"("R"1) contains "S"("R"2). In the special case when the hyperplanes arise from a root system, the resulting poset is the corresponding Weyl group with the weak order. In general, the poset of regions is ranked by the number of separating hyperplanes and its Möbius function has been computed.
Vadim Schechtman and Alexander Varchenko introduced a matrix indexed by the regions. The matrix element for the regions formula_7 and formula_8 is given by the product of the indeterminate variables formula_9 over every hyperplane H that separates these two regions. If these variables are all specialized to the value q, then this is called the q-matrix (over the Euclidean domain formula_10) of the arrangement, and much information is contained in its Smith normal form.
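To make the definition concrete, here is a small hedged Python sketch (not from the article; the arrangement of two coordinate lines and the sign-vector labelling of its regions are illustrative assumptions) that tabulates, for each pair of regions, the number of separating hyperplanes, i.e. the exponent of q in the corresponding q-matrix entry.

```python
from itertools import product

# Regions of the arrangement {x = 0, y = 0} in the plane, labelled by sign vectors;
# for this particular arrangement every sign vector corresponds to a region.
regions = list(product([+1, -1], repeat=2))        # (+,+), (+,-), (-,+), (-,-)

def n_separating(r1, r2):
    # number of hyperplanes on which the two regions take opposite signs
    return sum(a != b for a, b in zip(r1, r2))

# Entry (i, j) of the q-matrix is q ** n_separating(regions[i], regions[j]);
# here only the exponents are printed.
exponents = [[n_separating(r1, r2) for r2 in regions] for r1 in regions]
for row in exponents:
    print(row)
# [0, 1, 1, 2]
# [1, 0, 2, 1]
# [1, 2, 0, 1]
# [2, 1, 1, 0]
```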
Complex arrangements.
In complex affine space (which is hard to visualize because even the complex affine plane has four real dimensions), the complement is connected (all one piece) with holes where the hyperplanes were removed.
A typical problem about an arrangement in complex space is to describe the holes.
The basic theorem about complex arrangements is that the cohomology of the complement "M"("A") is completely determined by the intersection semilattice. To be precise, the cohomology ring of "M"("A") (with integer coefficients) is isomorphic to the Orlik–Solomon algebra on Z.
The isomorphism can be described explicitly and gives a presentation of the cohomology in terms of generators and relations, where generators are represented (in the de Rham cohomology) as logarithmic differential forms
formula_11
with formula_12 any linear form defining the generic hyperplane of the arrangement.
Technicalities.
Sometimes it is convenient to allow the degenerate hyperplane, which is the whole space "S", to belong to an arrangement. If "A" contains the degenerate hyperplane, then it has no regions because the complement is empty. However, it still has flats, an intersection semilattice, and faces. The preceding discussion assumes the degenerate hyperplane is not in the arrangement.
Sometimes one wants to allow repeated hyperplanes in the arrangement. We did not consider this possibility in the preceding discussion, but it makes no material difference. | [
{
"math_id": 0,
"text": "p_A(y) := \\sum_B (-1)^{|B|}y^{\\dim f(B)},"
},
{
"math_id": 1,
"text": "w_A(x,y) := \\sum_B x^{n-\\dim f(B)} \\sum_C (-1)^{|C-B|}y^{\\dim f(C)},"
},
{
"math_id": 2,
"text": "\\bigoplus_{H \\in A} K e_H "
},
{
"math_id": 3,
"text": "\\partial"
},
{
"math_id": 4,
"text": "e_{H_1} \\wedge \\cdots \\wedge e_{H_p}"
},
{
"math_id": 5,
"text": "H_1, \\dots, H_p"
},
{
"math_id": 6,
"text": "H_1 \\cap \\cdots \\cap H_p"
},
{
"math_id": 7,
"text": "R_i"
},
{
"math_id": 8,
"text": "R_j"
},
{
"math_id": 9,
"text": "a_H"
},
{
"math_id": 10,
"text": "\\mathbb{Q}[q]"
},
{
"math_id": 11,
"text": "\\frac{1}{2\\pi i}\\frac{d\\alpha}{\\alpha}."
},
{
"math_id": 12,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=864438 |
864472 | Handlebody | In the mathematical field of geometric topology, a handlebody is a decomposition of a manifold into standard pieces. Handlebodies play an important role in Morse theory, cobordism theory and the surgery theory of high-dimensional manifolds. Handles are particularly used to study 3-manifolds.
Handlebodies play a similar role in the study of manifolds as simplicial complexes and CW complexes play in homotopy theory, allowing one to analyze a space in terms of individual pieces and their interactions.
"n"-dimensional handlebodies.
If formula_0 is an formula_1-dimensional manifold with boundary, and
formula_2
(where formula_3 represents an n-sphere and formula_4 is an n-ball) is an embedding, the formula_1-dimensional manifold with boundary
formula_5
is said to be "obtained from"
formula_0
by attaching an "formula_6-handle".
The boundary formula_7 is obtained from formula_8 by surgery. As trivial examples, note that attaching a 0-handle is just taking a disjoint union with a ball, and that attaching an n-handle
to formula_0 is gluing in a ball along any sphere component of formula_8. Morse theory was used by Thom and Milnor to prove that every manifold (with or without boundary) is a handlebody, meaning that it has an expression as a union of handles. The expression is non-unique: the manipulation of handlebody decompositions is an essential ingredient of the proof of the Smale h-cobordism theorem, and its generalization to the s-cobordism theorem. A manifold is called a "k-handlebody" if it is the union of r-handles, for r at most k. This is not the same as the dimension of the manifold. For instance, a 4-dimensional 2-handlebody is a union of 0-handles, 1-handles and 2-handles. Any manifold is an n-handlebody, that is, any manifold is the union of handles. It isn't too hard to see that a manifold is an (n-1)-handlebody if and only if it has non-empty boundary.
Any handlebody decomposition of a manifold defines a CW complex decomposition of the manifold, since attaching an r-handle is the same, up to homotopy equivalence, as attaching an r-cell. However, a handlebody decomposition gives more information than just the homotopy type of the manifold. For instance, a handlebody decomposition completely describes the manifold up to homeomorphism. In dimension four, they even describe the smooth structure, as long as the attaching maps are smooth. This is false in higher dimensions; any exotic sphere is the union of a 0-handle and an n-handle.
3-dimensional handlebodies.
A handlebody can be defined as an orientable 3-manifold-with-boundary containing pairwise disjoint, properly embedded 2-discs such that the manifold resulting from cutting along the discs is a 3-ball. It's instructive to imagine how to reverse this process to get a handlebody. (Sometimes the orientability hypothesis is dropped from this last definition, and one gets a more general kind of handlebody with a non-orientable handle.)
The "genus" of a handlebody is the genus of its boundary surface. Up to homeomorphism, there is exactly one handlebody of any non-negative integer genus.
The importance of handlebodies in 3-manifold theory comes from their connection with Heegaard splittings. The importance of handlebodies in geometric group theory comes from the fact that their fundamental group is free.
A 3-dimensional handlebody is sometimes, particularly in older literature, referred to as a cube with handles.
Examples.
Let "G" be a connected finite graph embedded in Euclidean space of dimension n. Let "V" be a closed regular neighborhood of "G" in the Euclidean space. Then "V" is an n-dimensional handlebody. The graph "G" is called a "spine" of "V".
Any genus zero handlebody is homeomorphic to the three-ball B3. A genus one handlebody is homeomorphic to B2 × S1 (where S1 is the circle) and is called a "solid torus". All other handlebodies may be obtained by taking the boundary-connected sum of a collection of solid tori. | [
{
"math_id": 0,
"text": "(W,\\partial W)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "S^{r-1} \\times D^{n-r} \\subset \\partial W"
},
{
"math_id": 3,
"text": "S^{n}"
},
{
"math_id": 4,
"text": "D^n"
},
{
"math_id": 5,
"text": "(W',\\partial W') = ((W \\cup( D^r \\times D^{n-r})),(\\partial W - S^{r-1} \\times D^{n-r})\\cup (D^r \\times S^{n-r-1}))"
},
{
"math_id": 6,
"text": "r"
},
{
"math_id": 7,
"text": "\\partial W'"
},
{
"math_id": 8,
"text": "\\partial W"
}
] | https://en.wikipedia.org/wiki?curid=864472 |
864490 | Chemiosmosis | Electrochemical principle that enables cellular respiration
Chemiosmosis is the movement of ions across a semipermeable membrane bound structure, down their electrochemical gradient. An important example is the formation of adenosine triphosphate (ATP) by the movement of hydrogen ions (H+) across a membrane during cellular respiration or photosynthesis.
Hydrogen ions, or protons, will diffuse from a region of high proton concentration to a region of lower proton concentration, and an electrochemical concentration gradient of protons across a membrane can be harnessed to make ATP. This process is related to osmosis, the movement of water across a selective membrane, which is why it is called "chemiosmosis".
ATP synthase is the enzyme that makes ATP by chemiosmosis. It allows protons to pass through the membrane and uses the free energy difference to phosphorylate adenosine diphosphate (ADP) into ATP. The ATP synthase contains two parts: CF0 (present in the thylakoid membrane) and CF1 (protruding on the outer surface of the thylakoid membrane). The breakdown of the proton gradient leads to a conformational change in CF1, providing enough energy in the process to convert ADP to ATP. The generation of ATP by chemiosmosis occurs in mitochondria and chloroplasts, as well as in most bacteria and archaea. For instance, in chloroplasts during photosynthesis, an electron transport chain pumps H+ ions (protons) from the stroma (fluid) through the thylakoid membrane into the thylakoid spaces. The stored energy is used to photophosphorylate ADP, making ATP, as protons move through ATP synthase.
The chemiosmotic hypothesis.
Peter D. Mitchell proposed the chemiosmotic hypothesis in 1961. In brief, the hypothesis was that most adenosine triphosphate (ATP) synthesis in respiring cells comes from the electrochemical gradient across the inner membranes of mitochondria by using the energy of NADH and FADH2 formed during the oxidative breakdown of energy-rich molecules such as glucose.
Molecules such as glucose are metabolized to produce acetyl CoA as a fairly energy-rich intermediate. The oxidation of acetyl coenzyme A (acetyl-CoA) in the mitochondrial matrix is coupled to the reduction of a carrier molecule such as nicotinamide adenine dinucleotide (NAD) and flavin adenine dinucleotide (FAD).
The carriers pass electrons to the electron transport chain (ETC) in the inner mitochondrial membrane, which in turn pass them to other proteins in the ETC. The energy at every redox transfer step is used to pump protons from the matrix into the intermembrane space, storing energy in the form of a transmembrane electrochemical gradient. The protons move back across the inner membrane through the enzyme ATP synthase. The flow of protons back into the matrix of the mitochondrion via ATP synthase provides enough energy for ADP to combine with inorganic phosphate to form ATP.
This was a radical proposal at the time, and was not well accepted. The prevailing view was that the energy of electron transfer was stored as a stable high potential intermediate, a chemically more conservative concept. The problem with the older paradigm is that no high energy intermediate was ever found, and the evidence for proton pumping by the complexes of the electron transfer chain grew too great to be ignored. Eventually the weight of evidence began to favor the chemiosmotic hypothesis, and in 1978 Peter D. Mitchell was awarded the Nobel Prize in Chemistry.
Chemiosmotic coupling is important for ATP production in mitochondria, chloroplasts
and many bacteria and archaea.
Proton-motive force.
The movement of ions across the membrane depends on a combination of two factors: the diffusion force caused by a concentration gradient (particles tend to move from regions of higher to lower concentration) and the electrostatic force caused by the electrical potential gradient across the membrane.
These two gradients taken together can be expressed as an electrochemical gradient.
Lipid bilayers of biological membranes, however, are barriers for ions. This is why energy can be stored as a combination of these two gradients across the membrane. Only special membrane proteins like ion channels can sometimes allow ions to move across the membrane (see also: Membrane transport). In the chemiosmotic hypothesis, a transmembrane ATP synthase is central to converting the energy of the spontaneous flow of protons through it into the chemical energy of ATP bonds.
Hence researchers created the term proton-motive force (PMF), derived from the electrochemical gradient mentioned earlier. It can be described as the measure of the potential energy stored (chemiosmotic potential) as a combination of proton and voltage (electrical potential) gradients across a membrane. The electrical gradient is a consequence of the charge separation across the membrane (when the protons H+ move without a counterion, such as chloride Cl−).
In most cases the proton-motive force is generated by an electron transport chain which acts as a proton pump, using the Gibbs free energy of redox reactions to pump protons (hydrogen ions) out across the membrane, separating the charge across the membrane. In mitochondria, energy released by the electron transport chain is used to move protons from the mitochondrial matrix (N side) to the intermembrane space (P side). Moving the protons out of the mitochondrion creates a lower concentration of positively charged protons inside it, resulting in excess negative charge on the inside of the membrane. The electrical potential gradient is about -170 mV, negative inside (N). These two gradients, the charge difference and the proton concentration difference, together create a combined electrochemical gradient across the membrane, often expressed as the proton-motive force (PMF). In mitochondria, the PMF is almost entirely made up of the electrical component, but in chloroplasts the PMF is made up mostly of the pH gradient because the charge of protons H+ is neutralized by the movement of Cl− and other anions. In either case, the PMF needs to be greater than about 460 mV (45 kJ/mol) for the ATP synthase to be able to make ATP.
Equations.
The proton-motive force is derived from the Gibbs free energy. Let N denote the inside of a cell, and P denote the outside. Then
formula_0
where
The molar Gibbs free energy change formula_1 is frequently interpreted as a molar electrochemical ion potential formula_10.
For an electrochemical proton gradient formula_11 and as a consequence:
formula_12
where
formula_13.
Mitchell defined the proton-motive force (PMF) as
formula_14.
For example, formula_15 implies formula_16. At formula_17 this equation takes the form:
formula_18.
Note that for spontaneous proton import from the P side (relatively more positive and acidic) to the N side (relatively more negative and alkaline), formula_19 is negative (similar to formula_1) whereas PMF is positive (similar to redox cell potential formula_20).
It is worth noting that, as with any transmembrane transport process, the PMF is directional. The sign of the transmembrane electric potential difference formula_21 is chosen to represent the change in potential energy per unit charge flowing into the cell as above. Furthermore, due to redox-driven proton pumping by coupling sites, the proton gradient is always inside-alkaline. For both of these reasons, protons flow in spontaneously, from the P side to the N side; the available free energy is used to synthesize ATP (see below). For this reason, PMF is defined for proton import, which is spontaneous. PMF for proton export, i.e., proton pumping as catalyzed by the coupling sites, is simply the negative of PMF(import).
The spontaneity of proton import (from the P to the N side) is universal in all bioenergetic membranes. This fact was not recognized before the 1990s, because the chloroplast thylakoid lumen was interpreted as an interior phase, but in fact it is topologically equivalent to the exterior of the chloroplast. Azzone et al. stressed that the inside phase (N side of the membrane) is the bacterial cytoplasm, mitochondrial matrix, or chloroplast stroma; the outside (P) side is the bacterial periplasmic space, mitochondrial intermembrane space, or chloroplast lumen. Furthermore, 3D tomography of the mitochondrial inner membrane shows its extensive invaginations to be stacked, similar to thylakoid disks; hence the mitochondrial intermembrane space is topologically quite similar to the chloroplast lumen.
The energy expressed here as Gibbs free energy, electrochemical proton gradient, or proton-motive force (PMF), is a combination of two gradients across the membrane: the proton concentration gradient (formula_22) and the electric potential gradient (formula_21).
When a system reaches equilibrium, formula_23; nevertheless, the concentrations on either side of the membrane need not be equal. Spontaneous movement across the membrane is determined by both the concentration and the electric potential gradients.
The molar Gibbs free energy formula_24 of ATP synthesis
formula_25
is also called phosphorylation potential. The equilibrium concentration ratio formula_26 can be calculated by comparing formula_27 and formula_24, for example in case of the mammalian mitochondrion:
H+ / ATP = ΔGp / (Δp expressed in kJ·mol−1) = 40.2 kJ·mol−1 / (173.5 mV / 10.4 mV per kJ·mol−1) = 40.2 / 16.7 = 2.4. The actual ratio of the proton-binding c-subunit to the ATP-synthesizing beta-subunit copy numbers is 8/3 = 2.67, showing that under these conditions the mitochondrion functions at 90% (2.4/2.67) efficiency.
In fact, the thermodynamic efficiency is mostly lower in eukaryotic cells because ATP must be exported from the matrix to the cytoplasm, and ADP and phosphate must be imported from the cytoplasm. This "costs" one "extra" proton import per ATP, hence the actual efficiency is only 65% (= 2.4/3.67).
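As an illustrative check of the relations above, the following Python sketch evaluates the proton-motive force and the H+/ATP ratio. The split of the mitochondrial gradient into delta_psi = −150 mV and delta_pH = 0.4 is an assumption chosen only so that the total matches the ~173.5 mV figure used in the text; it is not a measured value from the article.

```python
import math

R = 8.314      # J mol^-1 K^-1
F = 96485.0    # C mol^-1
T = 298.0      # K

def pmf_mV(delta_psi_mV, delta_pH):
    """Delta p = -Delta psi + (ln 10 * R * T / F) * Delta pH, returned in millivolts."""
    return -delta_psi_mV + (math.log(10) * R * T / F) * 1000.0 * delta_pH

delta_psi = -150.0   # mV, negative inside (assumed split of the gradient)
delta_pH = 0.4       # pH_N - pH_P, inside more alkaline (assumed)
dp = pmf_mV(delta_psi, delta_pH)    # about 173.7 mV
delta_Gp = 40.2                     # kJ/mol, phosphorylation potential from the text

print(dp)                           # ~173.7
print(delta_Gp / (dp / 10.4))       # H+/ATP ratio, about 2.4
```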
In mitochondria.
The complete breakdown of glucose, releasing its energy, is called cellular respiration. The last steps of this process occur in mitochondria. The reduced molecules NADH and FADH2 are generated by the Krebs cycle, glycolysis, and pyruvate processing. These molecules pass electrons to an electron transport chain, which releases energy as the electrons are ultimately passed to oxygen, creating a proton gradient across the inner mitochondrial membrane. ATP synthase then uses the energy stored in this gradient to make ATP. This process is called oxidative phosphorylation because it uses energy released by the oxidation of NADH and FADH2 to phosphorylate ADP into ATP.
In plants.
The light reactions of photosynthesis generate ATP by the action of chemiosmosis. The photons in sunlight are received by the antenna complex of Photosystem II, which excites electrons to a higher energy level. These electrons travel down an electron transport chain, causing protons to be actively pumped across the thylakoid membrane into the thylakoid lumen. These protons then flow down their electrochemical potential gradient through an enzyme called ATP-synthase, creating ATP by the phosphorylation of ADP to ATP. The electrons from the initial light reaction reach Photosystem I, then are raised to a higher energy level by light energy and then received by an electron acceptor and reduce NADP+ to NADPH. The electrons lost from Photosystem II get replaced by the oxidation of water, which is "split" into protons and oxygen by the oxygen-evolving complex (OEC, also known as WOC, or the water-oxidizing complex). To generate one molecule of diatomic oxygen, 10 photons must be absorbed by Photosystems I and II, four electrons must move through the two photosystems, and 2 NADPH are generated (later used for carbon dioxide fixation in the Calvin Cycle).
In prokaryotes.
Bacteria and archaea also can use chemiosmosis to generate ATP. Cyanobacteria, green sulfur bacteria, and purple bacteria synthesize ATP by a process called photophosphorylation. These bacteria use the energy of light to create a proton gradient using a photosynthetic electron transport chain. Non-photosynthetic bacteria such as "E. coli" also contain ATP synthase. In fact, mitochondria and chloroplasts are the product of endosymbiosis and trace back to incorporated prokaryotes. This process is described in the endosymbiotic theory. The origin of the mitochondrion triggered the origin of eukaryotes, and the origin of the plastid the origin of the Archaeplastida, one of the major eukaryotic supergroups.
Chemiosmotic phosphorylation is the third pathway that produces ATP from inorganic phosphate and an ADP molecule. This process is part of oxidative phosphorylation.
Emergence of chemiosmosis.
Thermal cycling model.
A stepwise model for the emergence of chemiosmosis, a key element in the origin of life on earth, proposes that primordial organisms used thermal cycling as an energy source (thermosynthesis), functioning essentially as a heat engine:
self-organized convection in natural waters causing thermal cycling →
added β-subunit of F1 ATP Synthase
(generated ATP by thermal cycling of subunit during suspension in convection cell: thermosynthesis) →
added membrane and Fo ATP Synthase moiety
(generated ATP by change in electrical polarization of membrane during thermal cycling: thermosynthesis) →
added metastable, light-induced electric dipoles in membrane
(primitive photosynthesis) →
added quinones and membrane-spanning light-induced electric dipoles
(today's bacterial photosynthesis, which makes use of chemiosmosis).
External proton gradient model.
Deep-sea hydrothermal vents, emitting hot acidic or alkaline water, would have created external proton gradients. These provided energy that primordial organisms could have exploited. To keep the flows separate, such an organism could have wedged itself in the rock of the hydrothermal vent, exposed to the hydrothermal flow on one side and the more alkaline water on the other. As long as the organism's membrane (or passive ion channels within it) is permeable to protons, the mechanism can function without ion pumps. Such a proto-organism could then have evolved further mechanisms such as ion pumps and ATP synthase.
Meteoritic quinones.
A proposed alternative source of chemiosmotic energy developing across membranous structures involves quinones transported by carbonaceous meteorites. If an electron acceptor, ferricyanide, is within a vesicle and the electron donor is outside, the quinones pick up electrons and protons from the donor, diffuse across the lipid membrane, and release the electrons to the ferricyanide within the vesicle while releasing protons, which produces gradients above pH 2; the process is conducive to the development of proton gradients.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta\\!G = zF \\Delta\\!\\psi + RT \\ln\\frac{[\\mathrm{X}^{z+}]_{\\text{N}} }{[\\mathrm{X}^{z+}]_{\\text{P}}}"
},
{
"math_id": 1,
"text": "\\Delta\\!G"
},
{
"math_id": 2,
"text": "z"
},
{
"math_id": 3,
"text": "\\mathrm{X}^{z+}"
},
{
"math_id": 4,
"text": "\\Delta\\psi"
},
{
"math_id": 5,
"text": "[\\mathrm{X}^{z+}]_{\\text{P}}"
},
{
"math_id": 6,
"text": "[\\mathrm{X}^{z+}]_{\\text{N}}"
},
{
"math_id": 7,
"text": "F"
},
{
"math_id": 8,
"text": "R"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "\\Delta\\!\\mu _{\\mathrm{X}^{z+}} = \\Delta\\!G"
},
{
"math_id": 11,
"text": "z=1"
},
{
"math_id": 12,
"text": "\\Delta\\!\\mu _{\\mathrm{H}^{+}} = F \\Delta\\!\\psi + RT \\ln \\frac{[\\mathrm{H}^+]_{\\text{N}} }{[\\mathrm{H}^+]_{\\text{P}}} = F \\Delta\\!\\psi - (\\ln 10)RT \\Delta \\mathrm{pH}"
},
{
"math_id": 13,
"text": "\\Delta\\!\\mathrm{pH} = \\mathrm{pH}_{\\mathrm{N}} - \\mathrm{pH}_{\\mathrm{P}}"
},
{
"math_id": 14,
"text": "\\Delta\\!p = -\\frac{\\Delta\\!\\mu_{\\mathrm{H^{+}}}}{F}"
},
{
"math_id": 15,
"text": "\\Delta\\!\\mu_{\\mathrm{H}^+}=1\\,\\mathrm{kJ}\\,\\mathrm{mol}^{-1}"
},
{
"math_id": 16,
"text": "\\Delta\\!p = 10.4\\,\\mathrm{mV}"
},
{
"math_id": 17,
"text": "298\\,\\mathrm{K}"
},
{
"math_id": 18,
"text": "\\Delta\\!p = -\\Delta\\!\\psi + \\left(59.1\\,\\mathrm{mV}\\right)\\Delta\\!\\mathrm{pH}"
},
{
"math_id": 19,
"text": "\\Delta\\!\\mu _{\\mathrm{H}^+}"
},
{
"math_id": 20,
"text": "\\Delta E"
},
{
"math_id": 21,
"text": "\\Delta\\!\\psi"
},
{
"math_id": 22,
"text": "\\Delta\\!\\mathrm{pH}"
},
{
"math_id": 23,
"text": "\\Delta\\!\\rho = 0"
},
{
"math_id": 24,
"text": "\\Delta\\!G_{\\mathrm{p}}"
},
{
"math_id": 25,
"text": "\\mathrm{ADP}^{4-} + \\mathrm{H}^{+} + \\mathrm{HOPO}_3^{2-} \\rightarrow \\mathrm{ATP}^{4-} + \\mathrm{H_2 O}"
},
{
"math_id": 26,
"text": "[\\mathrm{H}^+]/[\\mathrm{ATP}]"
},
{
"math_id": 27,
"text": "\\Delta\\!p"
}
] | https://en.wikipedia.org/wiki?curid=864490 |
8647217 | Transcritical cycle | Closed thermodynamic cycle involving fluid
A transcritical cycle is a closed thermodynamic cycle where the working fluid goes through both subcritical and supercritical states. In particular, for power cycles the working fluid is kept in the liquid region during the compression phase and in vapour and/or supercritical conditions during the expansion phase. The ultrasupercritical steam Rankine cycle represents a widespread transcritical cycle in the field of electricity generation from fossil fuels, where water is used as the working fluid. Other typical applications of transcritical cycles for the purpose of power generation are represented by organic Rankine cycles, which are especially suitable for exploiting low temperature heat sources, such as geothermal energy, heat recovery applications or waste to energy plants. With respect to subcritical cycles, the transcritical cycle exploits by definition higher pressure ratios, a feature that ultimately yields higher efficiencies for the majority of working fluids. Compared with supercritical cycles, which are a valid alternative, transcritical cycles are capable of achieving higher specific works due to the limited relative importance of the compression work. This evidences the strong potential of transcritical cycles for producing the most power (measurable in terms of the cycle specific work) with the least expenditure (measurable in terms of the energy spent to compress the working fluid).
While in single level supercritical cycles both pressure levels are above the critical pressure of the working fluid, in transcritical cycles one pressure level is above the critical pressure and the other is below. In the refrigeration field carbon dioxide, CO2, is increasingly considered of interest as refrigerant.
Transcritical conditions of the working fluid.
In transcritical cycles, the pressure of the working fluid at the outlet of the pump is higher than the critical pressure, while the inlet conditions are close to the saturated liquid pressure at the given minimum temperature.
During the heating phase, which is typically considered an isobaric process, the working fluid overcomes the critical temperature, moving thus from the liquid to the supercritical phase without the occurrence of any evaporation process, a significant difference between subcritical and transcritical cycles. Due to this significant difference in the heating phase, the heat injection into the cycle is significantly more efficient from a second law perspective, since the average temperature difference between the hot source and the working fluid is reduced.
As a consequence, the maximum temperature reached by the working fluid (the cold side of the heat exchange) can be higher for fixed hot source characteristics. Therefore, the expansion process can be accomplished exploiting higher pressure ratios, which yields higher power production. Modern ultrasupercritical Rankine cycles can reach maximum temperatures up to 620°C by exploiting this optimized heat introduction process.
Characterization of the power cycle.
As in any power cycle, the most important indicator of its performance is the thermal efficiency. The thermal efficiency of a transcritical cycle is computed as:
formula_0
where formula_1 is the thermal input of the cycle, provided by either combustion or with a heat exchanger, and formula_2 is the power produced by the cycle.
The power produced by the cycle is taken as a net value, accounting for both the power generated during the expansion of the working fluid and the power consumed during the compression step.
The typical conceptual configuration of a transcritical cycle employs a single heater, thanks to the absence of a drastic phase change from one state to another, since the pressure is above the critical one. In subcritical cycles, instead, the heating process of the working fluid occurs in three different heat exchangers: in economizers the working fluid is heated (while remaining in the liquid phase) up to a condition approaching the saturated liquid conditions. Evaporators accomplish the evaporation process (typically up to the saturated vapour conditions), and in superheaters the working fluid is heated from the saturated vapour conditions to a superheated vapour. Moreover, when Rankine cycles are used as bottoming cycles in combined gas-steam cycles, their configuration is always subcritical. Therefore, there will be multiple pressure levels and hence multiple evaporators, economizers and superheaters, which introduces a significant complication to the heat injection process in the cycle.
Characterization of the compression process.
Along adiabatic and isentropic processes, such as those theoretically associated with pumping processes in transcritical cycles, the enthalpy difference across both a compression and an expansion is computed as:
formula_3
Consequently, a working fluid with a lower specific volume (hence a higher density) can be compressed spending less mechanical work than one with a low density (more gas-like).
In transcritical cycles, the very high maximum pressures and the liquid conditions along the whole compression phase ensure a higher density and a lower specific volume with respect to supercritical counterparts. Considering the different physical phases through which compression processes occur, transcritical and supercritical cycles employ pumps (for liquids) and compressors (for gases), respectively, during the compression step.
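To illustrate why compressing a dense liquid is so much cheaper than compressing a gas, the following rough Python sketch evaluates the integral of v dP from the previous paragraph for an assumed incompressible liquid and, for contrast, the isentropic compression work of an ideal gas between the same pressures. All fluid properties used here (specific volume, gas constant, heat capacity ratio) are illustrative assumptions, not values from the article.

```python
def liquid_pump_work(v_m3_per_kg, p_in_Pa, p_out_Pa):
    # For a nearly incompressible liquid the integral of v dP reduces to v * (P_out - P_in).
    return v_m3_per_kg * (p_out_Pa - p_in_Pa) / 1000.0   # kJ/kg

def ideal_gas_isentropic_work(R_J_per_kgK, T_in_K, gamma, p_in_Pa, p_out_Pa):
    # Isentropic compression work of an ideal gas between the same pressure levels.
    k = (gamma - 1.0) / gamma
    return gamma / (gamma - 1.0) * R_J_per_kgK * T_in_K * ((p_out_Pa / p_in_Pa) ** k - 1.0) / 1000.0

print(liquid_pump_work(0.001, 1e5, 250e5))                        # ~24.9 kJ/kg
print(ideal_gas_isentropic_work(461.5, 373.0, 1.33, 1e5, 250e5))  # on the order of 2000 kJ/kg
```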
Characterization of the expansion process.
In the expansion step of the working fluid in transcritical cycles, as in subcritical ones, the working fluid can be discharged either in wet or dry conditions.
Typical dry expansions are those involving organic or other unconventional working fluids, which are characterized by non-negligible molecular complexities and high molecular weights.
The expansion step occurs in turbines: depending on the application and on the nameplate power of the power plant, either axial turbines or radial turbines can be exploited during fluid expansion. Axial turbines favour lower rotational speeds and higher power production, while radial turbines are suitable for limited power outputs and high rotational speeds.
Organic cycles are appropriate choices for low enthalpy applications and are characterized by higher average densities across the expanders than those occurring in transcritical steam cycles: for this reason the blades are normally designed with a low height and the volumetric flow rate is kept to relatively small values. On the other hand, in the large scale applications that exploit steam cycles, the expander blades typically have heights exceeding one metre, since the fluid density at the outlet of the last expansion stage is significantly low.
In general, the specific work of the cycle is expressed as:
formula_4
Even though the specific work of any cycle is strongly dependent on the actual working fluid considered in the cycle, transcritical cycles are expected to exhibit higher specific works than the corresponding subcritical and supercritical counterparts (i.e., that exploit the same working fluid). For this reason, at fixed boundary conditions, power produced and working fluid, a lower mass flow rate is expected in transcritical cycles than in other configurations.
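The two figures of merit defined above can be evaluated directly; the short Python sketch below does so with made-up power and flow-rate values that are purely illustrative.

```python
def thermal_efficiency(w_cycle_kW, q_in_kW):
    # eta_cycle = W_cycle / Q_in
    return w_cycle_kW / q_in_kW

def specific_work(p_expansion_kW, p_compression_kW, m_dot_kg_s):
    # w_cycle = (Power_expansion - Power_compression) / m_dot, in kJ/kg
    return (p_expansion_kW - p_compression_kW) / m_dot_kg_s

p_exp, p_comp, q_in, m_dot = 10_000.0, 400.0, 24_000.0, 30.0   # assumed values
print(thermal_efficiency(p_exp - p_comp, q_in))   # 0.40
print(specific_work(p_exp, p_comp, m_dot))        # 320.0 kJ/kg
```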
Applications in power cycles.
Ultrasupercritical Rankine cycles.
In recent decades, the thermal efficiency of Rankine cycles has increased drastically, especially for large scale applications fueled by coal: for these power plants, the application of ultrasupercritical layouts was the main factor in achieving this goal, since the higher pressure ratio ensures higher cycle efficiencies.
The increase in thermal efficiency of power plants fueled by dirty fuels also became crucial in reducing the specific emissions of the plants, both in terms of greenhouse gases and of pollutants such as sulfur dioxide or NOx.
In large scale applications, ultrasupercritical Rankine cycles employ up to 10 feedwater heaters, five on the high pressure side and five on the low pressure side, including the deaerator, helping raise the temperature at the inlet of the boiler up to 300°C and allowing significant regenerative air preheating, thus reducing the fuel consumption. Studies on the best-performing configurations of supercritical Rankine cycles (300 bar of maximum pressure, 600°C of maximum temperature and two reheats) show that such layouts can achieve a cycle efficiency higher than 50%, about 6% higher than subcritical configurations.
Organic Rankine cycles.
Organic Rankine cycles are innovative power cycles which allow good performance with low enthalpy thermal sources and ensure condensation above atmospheric pressure, thus avoiding deaerators and large cross-sectional areas in the heat rejection units. Moreover, with respect to steam Rankine cycles, ORCs have a higher flexibility in handling low power sizes, allowing significant compactness.
Typical applications of ORC cover: waste heat recovery plants, geothermal plants, biomass plants and waste to energy power plants.
Organic Rankine cycles use organic fluids (such as hydrocarbons, perfluorocarbons, chlorofluorocarbons, and many others) as working fluids. Most of them have a critical temperature in the range of 100-200°C, which makes them well suited to transcritical cycles in low temperature applications.
Considering organic fluids, having a maximum pressure above the critical one can more than double the temperature difference across the turbine, with respect to the subcritical counterpart, and significantly increase both the cycle specific work and cycle efficiency.
Applications in refrigeration cycles.
A refrigeration cycle, also known as a heat pump, is a thermodynamic cycle that removes heat from a low temperature heat source and rejects it into a high temperature heat sink at the expense of mechanical power consumption. Traditional refrigeration cycles are subcritical, with the high pressure side (where heat rejection occurs) below the critical pressure.
Innovative transcritical refrigeration cycles, instead, should use a working fluid whose critical temperature is around the ambient temperature. For this reason, carbon dioxide is chosen due to its favourable critical conditions. In fact, the critical temperature of carbon dioxide is 31°C, reasonably in between the hot source and cold source of traditional refrigeration applications, thus suitable for transcritical applications.
In transcritical refrigeration cycles the heat is dissipated through a gas cooler instead of a desuperheater and a condenser like in subcritical cycles. This limits the plant components, plant complexity and costs of the power block.
The advantages of using supercritical carbon dioxide as the working fluid in refrigeration cycles, instead of traditional refrigerant fluids (like HFCs or HFOs), are both economic and environmental. The cost of carbon dioxide is two orders of magnitude lower than that of the average refrigerant working fluid, and the environmental impact of carbon dioxide is very limited (with a GWP of 1 and an ODP of 0); the fluid is neither reactive nor significantly toxic. No other working fluid for refrigeration is able to match the favourable environmental characteristics of carbon dioxide.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\eta_{cycle}=\\frac{W_{Cycle}}{Q_{in}}"
},
{
"math_id": 1,
"text": "Q_{in}"
},
{
"math_id": 2,
"text": "W_{Cycle}"
},
{
"math_id": 3,
"text": "\\Delta h= \\int_{Pmin}^{Pmax} v\\cdot dP "
},
{
"math_id": 4,
"text": "w_{Cycle}=\\frac{Power_{expansion}-Power_{compression}}{\\dot m_{compression}}"
}
] | https://en.wikipedia.org/wiki?curid=8647217 |
8648241 | Modulation error ratio | The modulation error ratio or MER is a measure used to quantify the performance of a digital radio (or digital TV) transmitter or receiver in a communications system using digital modulation (such as QAM). A signal sent by an ideal transmitter or received by a receiver would have all constellation points precisely at the ideal locations, however various imperfections in the implementation (such as noise, low image rejection ratio, phase noise, carrier suppression, distortion, etc.) or signal path cause the actual constellation points to deviate from the ideal locations.
Transmitter MER can be measured by specialized equipment, which demodulates the received signal in a similar way to how a real radio demodulator does it. The demodulated and detected signal can be used as a reasonably reliable estimate of the ideal transmitted signal in the MER calculation.
Definition.
An error vector is a vector in the I-Q plane between the ideal constellation point and the point received by the receiver. The Euclidean distance between the two points is its magnitude.
The modulation error ratio is equal to the ratio of the root mean square (RMS) power (in Watts) of the reference vector to the power (in Watts) of the error. It is defined in dB as:
formula_0
where Perror is the RMS power of the error vector, and Psignal is the RMS power of ideal transmitted signal.
MER is defined as a percentage in a compatible (but reciprocal) way:
formula_1
with the same definitions.
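A minimal Python sketch of the dB definition above is shown below; the QPSK constellation, the noise level and the helper name mer_db are illustrative assumptions rather than part of any standard.

```python
import numpy as np

def mer_db(ideal, received):
    ideal = np.asarray(ideal, dtype=complex)
    received = np.asarray(received, dtype=complex)
    p_signal = np.mean(np.abs(ideal) ** 2)             # average power of the reference symbols
    p_error = np.mean(np.abs(received - ideal) ** 2)   # average power of the error vectors
    return 10.0 * np.log10(p_signal / p_error)

# Example: QPSK symbols with a small amount of additive noise.
rng = np.random.default_rng(0)
ideal = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=1000)
received = ideal + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(mer_db(ideal, received))   # roughly 26 dB for this noise level
```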
MER is closely related to error vector magnitude (EVM), but MER is calculated from the average power of the signal. MER is also closely related to signal-to-noise ratio. MER includes all imperfections including deterministic amplitude imbalance, quadrature error and distortion, while noise is random by nature. | [
{
"math_id": 0,
"text": "\n\\mathrm{MER (dB)} = 10 \\log_{10} \\left ( {P_\\mathrm{signal} \\over P_\\mathrm{error}} \\right )\n"
},
{
"math_id": 1,
"text": "\n\\mathrm{MER (\\%)} = \\sqrt{ {P_\\mathrm{error} \\over P_\\mathrm{signal}} } \\times 100\\%\n"
}
] | https://en.wikipedia.org/wiki?curid=8648241 |
8648608 | Root mean square deviation | Statistical measure
The root mean square deviation (RMSD) or root mean square error (RMSE) is either one of two closely related and frequently used measures of the differences between true or predicted values on the one hand and observed values or an estimator on the other.
RMSD of a sample.
The RMSD of a sample is the quadratic mean of the differences between the observed values and predicted ones. These deviations are called "residuals" when the calculations are performed over the data sample that was used for estimation (and are therefore always in reference to an estimate) and are called "errors" (or prediction errors) when computed out-of-sample (aka on the full set, referencing a true value rather than an estimate). The RMSD serves to aggregate the magnitudes of the errors in predictions for various data points into a single measure of predictive power. RMSD is a measure of accuracy, to compare forecasting errors of different models for a particular dataset and not between datasets, as it is scale-dependent.
RMSD is always non-negative, and a value of 0 (almost never achieved in practice) would indicate a perfect fit to the data. In general, a lower RMSD is better than a higher one. However, comparisons across different types of data would be invalid because the measure is dependent on the scale of the numbers used.
RMSD is the square root of the average of squared errors. The effect of each error on RMSD is proportional to the size of the squared error; thus larger errors have a disproportionately large effect on RMSD. Consequently, RMSD is sensitive to outliers.
Formulas.
Estimator.
The RMSD of an estimator formula_0 with respect to an estimated parameter formula_1 is defined as the square root of the mean squared error:
formula_2
For an unbiased estimator, the RMSD is the square root of the variance, known as the standard deviation.
Samples.
If "X"1, ..., "Xn" is a sample of a population with true mean value formula_3, then the RMSD of the sample is
formula_4.
The RMSD of predicted values formula_5 for times "t" of a regression's dependent variable formula_6 with variables observed over "T" times, is computed for "T" different predictions as the square root of the mean of the squares of the deviations:
formula_7
In some disciplines, the RMSD is used to compare differences between two things that may vary, neither of which is accepted as the "standard". For example, when measuring the average difference between two time series formula_8 and formula_9,
the formula becomes
formula_10
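The formulas above translate directly into code. The following Python sketch (with illustrative data, not taken from the article) computes the RMSD between a set of predictions and observations.

```python
import numpy as np

def rmsd(predicted, observed):
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

y_hat = [2.0, 3.1, 4.2, 5.0]    # predicted values
y = [2.1, 3.0, 4.0, 5.3]        # observed values
print(rmsd(y_hat, y))           # ~0.194
```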
Normalization.
Normalizing the RMSD facilitates the comparison between datasets or models with different scales. Though there is no consistent means of normalization in the literature, common choices are the mean or the range (defined as the maximum value minus the minimum value) of the measured data:
formula_11 or formula_12.
This value is commonly referred to as the "normalized root mean square deviation" or "error" (NRMSD or NRMSE), and is often expressed as a percentage, where lower values indicate less residual variance. This is also called the coefficient of variation or percent RMS. In many cases, especially for smaller samples, the sample range is likely to be affected by the size of the sample, which would hamper comparisons.
Another possible method to make the RMSD a more useful comparison measure is to divide the RMSD by the interquartile range (IQR). When the RMSD is divided by the IQR, the normalized value becomes less sensitive to extreme values in the target variable.
formula_13 where formula_14
with formula_15 and formula_16 where CDF−1 is the quantile function.
When normalizing by the mean value of the measurements, the term "coefficient of variation of the RMSD, CV(RMSD)" may be used to avoid ambiguity. This is analogous to the coefficient of variation with the RMSD taking the place of the standard deviation.
formula_17
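For completeness, the normalized variants discussed in this section can be sketched in Python as follows (the helper names are illustrative; the choice of denominator is a convention, as noted above).

```python
import numpy as np

def rmsd(predicted, observed):
    return float(np.sqrt(np.mean((np.asarray(predicted, float) - np.asarray(observed, float)) ** 2)))

def nrmsd_range(predicted, observed):
    obs = np.asarray(observed, dtype=float)
    return rmsd(predicted, observed) / (obs.max() - obs.min())    # normalized by the range

def cv_rmsd(predicted, observed):
    return rmsd(predicted, observed) / float(np.mean(observed))   # coefficient of variation of the RMSD

def rmsd_iqr(predicted, observed):
    q1, q3 = np.percentile(observed, [25, 75])
    return rmsd(predicted, observed) / (q3 - q1)                  # normalized by the interquartile range
```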
Mean absolute error.
Some researchers have recommended the use of the mean absolute error (MAE) instead of the root mean square deviation. MAE possesses advantages in interpretability over RMSD. MAE is the average of the absolute values of the errors. MAE is fundamentally easier to understand than the square root of the average of squared errors. Furthermore, each error influences MAE in direct proportion to the absolute value of the error, which is not the case for RMSD.
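A tiny illustrative comparison (with made-up residuals) shows the point about outlier sensitivity: a single large error moves the RMSD much more than the MAE.

```python
import numpy as np

errors = np.array([0.1, -0.1, 0.2, -3.0])     # residuals with one large outlier
print(np.mean(np.abs(errors)))                # MAE  = 0.85
print(np.sqrt(np.mean(errors ** 2)))          # RMSD ~ 1.51
```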
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{\\theta}"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "\\operatorname{RMSD}(\\hat{\\theta}) = \\sqrt{\\operatorname{MSE}(\\hat{\\theta})} = \\sqrt{\\operatorname{E}((\\hat{\\theta}-\\theta)^2)}."
},
{
"math_id": 3,
"text": "x_0"
},
{
"math_id": 4,
"text": "\\operatorname{RMSD} = \\sqrt{\\frac{1}{n}\\sum_{i=1}^n(X_i-x_0)^2}"
},
{
"math_id": 5,
"text": "\\hat y_t"
},
{
"math_id": 6,
"text": "y_t,"
},
{
"math_id": 7,
"text": "\\operatorname{RMSD}=\\sqrt{\\frac{\\sum_{t=1}^T (y_t - \\hat y_t)^2}{T}}."
},
{
"math_id": 8,
"text": "x_{1,t}"
},
{
"math_id": 9,
"text": "x_{2,t}"
},
{
"math_id": 10,
"text": "\\operatorname{RMSD}= \\sqrt{\\frac{\\sum_{t=1}^T (x_{1,t} - x_{2,t})^2}{T}}."
},
{
"math_id": 11,
"text": "\\mathrm{NRMSD} = \\frac{\\mathrm{RMSD}}{y_\\max -y_\\min}"
},
{
"math_id": 12,
"text": " \\mathrm{NRMSD} = \\frac {\\mathrm{RMSD}}{\\bar y} "
},
{
"math_id": 13,
"text": "\\mathrm{RMSDIQR} = \\frac{\\mathrm{RMSD}}{IQR}"
},
{
"math_id": 14,
"text": "IQR = Q_3 - Q_1"
},
{
"math_id": 15,
"text": "Q_1 = \\text{CDF}^{-1}(0.25)"
},
{
"math_id": 16,
"text": "Q_3 = \\text{CDF}^{-1}(0.75) ,"
},
{
"math_id": 17,
"text": " \\mathrm{CV(RMSD)} = \\frac {\\mathrm{RMSD}}{\\bar y} ."
}
] | https://en.wikipedia.org/wiki?curid=8648608 |
8649770 | Glutathione synthetase | Enzyme
Glutathione synthetase (GSS) (EC 6.3.2.3) is the second enzyme in the glutathione (GSH) biosynthesis pathway. It catalyses the condensation of gamma-glutamylcysteine and glycine to form glutathione, which is a potent antioxidant. GSS is found in many species including bacteria, yeast, mammals, and plants.
In humans, defects in GSS are inherited in an autosomal recessive way and are the cause of severe metabolic acidosis, 5-oxoprolinuria, increased rate of haemolysis, and defective function of the central nervous system. Deficiencies in GSS can cause a spectrum of deleterious symptoms in plants and human beings alike.
In eukaryotes, this is a homodimeric enzyme. The substrate-binding domain has a three-layer alpha/beta/alpha structure. This enzyme utilizes and stabilizes an acylphosphate intermediate to later perform a favorable nucleophilic attack of glycine.
Structure.
Human and yeast glutathione synthetases are homodimers, meaning they are composed of two identical subunits of itself non-covalently bound to each other. On the other hand, "E. coli" glutathione synthetase is a homotetramer. Nevertheless, they are part of the ATP-grasp superfamily, which consists of 21 enzymes that contain an ATP-grasp fold. Each subunit interacts with each other through alpha helix and beta sheet hydrogen bonding interactions and contains two domains. One domain facilitates the ATP-grasp mechanism and the other is the catalytic active site for γ-glutamylcysteine. The ATP-grasp fold is conserved within the ATP-grasp superfamily and is characterized by two alpha helices and beta sheets that hold onto the ATP molecule between them. The domain containing the active site exhibits interesting properties of specificity. In contrast to γ-glutamylcysteine synthetase, glutathione synthetase accepts a large variety of glutamyl-modified analogs of γ-glutamylcysteine, but is much more specific for cysteine-modified analogs of γ-glutamylcysteine. Crystalline structures have shown glutathione synthetase bound to GSH, ADP, two magnesium ions, and a sulfate ion. Two magnesium ions function to stabilize the acylphosphate intermediate, facilitate binding of ATP, and activate removal of phosphate group from ATP. Sulfate ion serves as a replacement for inorganic phosphate once the acylphosphate intermediate is formed inside the active site.
As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 1GLV, 1GSA, 1GSH, 1M0T, 1M0W, 2GLT, and 2HGS.
Mechanism.
Glutathione synthase catalyzes the chemical reaction
ATP + gamma-L-glutamyl-L-cysteine + glycine formula_0 ADP + phosphate + glutathione
The 3 substrates of this enzyme are ATP, gamma-L-glutamyl-L-cysteine, and glycine, whereas its 3 products are ADP, phosphate, and glutathione.
This enzyme belongs to the family of ligases, specifically those forming carbon-nitrogen bonds as acid-D-amino-acid ligases (peptide synthases). The systematic name of this enzyme class is gamma-L-glutamyl-L-cysteine:glycine ligase (ADP-forming). Other names in common use include glutathione synthetase and GSH synthetase. This enzyme participates in glutamate metabolism and glutathione metabolism. At least one compound, phosphinate, is known to inhibit this enzyme.
The biosynthetic mechanisms for synthetases use energy from nucleoside triphosphates, whereas synthases do not. Glutathione synthetase stays true to this rule, in that it uses the energy generated by ATP. Initially, the carboxylate group on γ-glutamylcysteine is converted into an acyl phosphate by the transfer of an inorganic phosphate group of ATP to generate an acyl phosphate intermediate. Then the amino group of glycine participates in a nucleophilic attack, displacing the phosphate group and forming GSH. After the final GSH product is made, it can be used by glutathione peroxidase to neutralize reactive oxygen species (ROS) such as H2O2 or Glutathione S-transferases in the detoxification of xenobiotics.
Function.
Glutathione synthetase is important for a variety of biological functions in multiple organisms. In "Arabidopsis thaliana", low levels of glutathione synthetase have resulted in increased vulnerability to stressors such as heavy metals, toxic organic chemicals, and oxidative stress. The presence of a thiol functional group allows its product GSH to serve both as an effective oxidizing and reducing agent in numerous biological scenarios. Thiols can easily accept a pair of electrons and become oxidized to disulfides, and the disulfides can be readily reduced to regenerate thiols. Additionally, the thiol side chain of cysteines serve as potent nucleophiles and react with oxidants and electrophilic species that would otherwise cause damage to the cell. Interactions with certain metals also stabilize thiolate intermediates.
In humans, glutathione synthetase functions in a similar manner. Its product GSH participates in cellular pathways involved in homeostasis and cellular maintenance. For instance, glutathione peroxidases catalyze the oxidation of GSH to glutathione disulfide (GSSG) by reducing free radicals and reactive oxygen species such as hydrogen peroxide. Glutathione S-transferase uses GSH to convert various metabolites, xenobiotics, and electrophiles into mercapturates for excretion. Because of its antioxidant role, GSS mostly produces GSH inside the cytoplasm of liver cells, from where it is imported into mitochondria, where detoxification occurs. GSH is also essential for the activation of the immune system to generate robust defense mechanisms against invading pathogens. GSH is capable of preventing infection from the influenza virus.
Clinical significance.
Patients with mutations in the "GSS" gene develop glutathione synthetase (GSS) deficiency, an autosomal recessive disorder. Patients develop a wide range of symptoms depending on the severity of the mutations. Mildly affected patients experience a compensated haemolytic anaemia because mutations affect stability of the enzyme. Moderately and severely affected individuals have enzymes with dysfunctional catalytic sites, rendering it unable to participate in detoxification reactions. Physiological symptoms include metabolic acidosis, neurological defects, and increased susceptibility to pathogenic infections.
Treatment of individuals with glutathione synthetase deficiency generally involve therapeutic treatments to address mild to severe symptoms and conditions. In order to treat metabolic acidosis, severely affected patients are given large amounts of bicarbonate and antioxidants such as vitamin E and vitamin C. In mild cases, ascorbate and "N"-acetylcysteine have been shown to increase glutathione levels and increase erythrocyte production. It is important to note that because glutathione synthetase deficiency is so rare, it is poorly understood. The disease also appears on a spectrum, so it is even more difficult to generalize among the few cases that occur.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=8649770 |
8651 | Dark matter | Concept in cosmology
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in physics:
What is dark matter? How was it generated?
In astronomy, dark matter is a hypothetical form of matter that does not interact with light or other electromagnetic radiation. Dark matter is implied by gravitational effects which cannot be explained by general relativity unless more matter is present than can be observed. Such effects occur in the context of formation and evolution of galaxies, gravitational lensing, the observable universe's current structure, mass position in galactic collisions, the motion of galaxies within galaxy clusters, and cosmic microwave background anisotropies.
In the standard lambda-CDM model of cosmology, the mass–energy content of the universe is 5% ordinary matter, 26.8% dark matter, and 68.2% a form of energy known as dark energy. Thus, dark matter constitutes 85% of the total mass, while dark energy and dark matter constitute 95% of the total mass–energy content.
Dark matter is not known to interact with ordinary baryonic matter and radiation except through gravity, making it difficult to detect in the laboratory. The most prevalent explanation is that dark matter is some as-yet-undiscovered subatomic particle, such as either weakly interacting massive particles (WIMPs) or axions. The other main possibility is that dark matter is composed of primordial black holes.
Dark matter is classified as "cold", "warm", or "hot" according to velocity (more precisely, its free streaming length). Recent models have favored a cold dark matter scenario, in which structures emerge by the gradual accumulation of particles.
Although the astrophysics community generally accepts dark matter's existence, a minority of astrophysicists, intrigued by specific observations that are not well explained by ordinary dark matter, argue for various modifications of the standard laws of general relativity. These include modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity. So far none of the proposed modified gravity theories can describe every piece of observational evidence at the same time, suggesting that even if gravity has to be modified, some form of dark matter will still be required.
History.
Early history.
The hypothesis of dark matter has an elaborate history. In the appendices of the book "Baltimore lectures on molecular dynamics and the wave theory of light", whose main text was based on a series of lectures given in 1884, Lord Kelvin discussed the potential number of stars around the Sun from the observed velocity dispersion of the stars near the Sun, assuming that the Sun was 20 to 100 million years old. He considered what would happen if there were a thousand million stars within 1 kiloparsec of the Sun (at which distance their parallax would be 1 milliarcsecond). Lord Kelvin concluded: "Many of our supposed thousand million stars, perhaps a great majority of them, may be dark bodies." In 1906, Henri Poincaré in "The Milky Way and Theory of Gases" used the French term "matière obscure" ("dark matter") in discussing Kelvin's work. He found that the amount of dark matter would need to be less than that of visible matter.
The second to suggest the existence of dark matter using stellar velocities was Dutch astronomer Jacobus Kapteyn in 1922. A publication from 1930 points to the Swedish astronomer Knut Lundmark being the first to realise that the universe must contain much more mass than can be observed. Dutchman and radio astronomy pioneer Jan Oort also hypothesized the existence of dark matter in 1932. Oort was studying stellar motions in the local galactic neighborhood and found the mass in the galactic plane must be greater than what was observed, but this measurement was later determined to be erroneous.
In 1933, Swiss astrophysicist Fritz Zwicky, who studied galaxy clusters while working at the California Institute of Technology, made a similar inference. Zwicky applied the virial theorem to the Coma Cluster and obtained evidence of unseen mass, which he called dark matter. Zwicky estimated the cluster's mass based on the motions of galaxies near its edge and compared that to an estimate based on its brightness and number of galaxies. He estimated the cluster had about 400 times more mass than was visually observable. The gravitational effect of the visible galaxies was far too small for such fast orbits, so mass must be hidden from view. Based on these conclusions, Zwicky inferred that some unseen matter provided the mass and associated gravitational attraction to hold the cluster together. Zwicky's estimates were off by more than an order of magnitude, mainly due to an obsolete value of the Hubble constant; the same calculation today shows a smaller fraction, using greater values for luminous mass. Nonetheless, Zwicky did correctly conclude from his calculation that the bulk of the matter was dark.
Further indications of mass-to-light ratio anomalies came from measurements of galaxy rotation curves. In 1939, Horace W. Babcock reported the rotation curve for the Andromeda nebula (known now as the Andromeda Galaxy), which suggested the mass-to-luminosity ratio increases radially. He attributed it to either light absorption within the galaxy or modified dynamics in the outer portions of the spiral, and not to the missing matter he had uncovered. Following Babcock's 1939 report of unexpectedly rapid rotation in the outskirts of the Andromeda galaxy and a mass-to-light ratio of 50, in 1940 Jan Oort discovered and wrote about the large non-visible halo of NGC 3115.
1960s.
Early radio astronomy observations, performed by Seth Shostak, later SETI Institute Senior Astronomer, showed that a half-dozen galaxies spun too fast in their outer regions, pointing to the existence of dark matter as a means of creating the gravitational pull needed to keep the stars in their orbits.
1970s.
The hypothesis of dark matter largely took root in the 1970s. Several different observations were synthesized to argue that galaxies should be surrounded by halos of unseen matter. In two papers that appeared in 1974, this conclusion was drawn in tandem by independent groups: in Princeton, U.S.A., by Jeremiah Ostriker, Jim Peebles, and Amos Yahil, and in Tartu, Estonia, by Jaan Einasto, Enn Saar, and Ants Kaasik.
One of the observations that served as evidence for the existence of galactic halos of dark matter was the shape of galaxy rotation curves. These observations were done in optical and radio astronomy. In optical astronomy, Vera Rubin and Kent Ford worked with a new spectrograph to measure the velocity curve of edge-on spiral galaxies with greater accuracy.
At the same time, radio astronomers were making use of new radio telescopes to map the 21 cm line of atomic hydrogen in nearby galaxies. The radial distribution of interstellar atomic hydrogen (H) often extends to much greater galactic distances than can be observed as collective starlight, expanding the sampled distances for rotation curves – and thus of the total mass distribution – to a new dynamical regime. Early mapping of Andromeda with the 300 foot telescope at Green Bank and the 250 foot dish at Jodrell Bank already showed the H rotation curve did not trace the expected Keplerian decline. As more sensitive receivers became available, Roberts & Whitehurst (1975) were able to trace the rotational velocity of Andromeda to 30 kpc, well beyond the optical measurements. Illustrating the advantage of tracing the gas disk at large radii, that paper's Figure 16 combines the optical data (the cluster of points at radii of less than 15 kpc with a single point further out) with the H data between 20 and 30 kpc, exhibiting the flatness of the outer galaxy rotation curve; the solid curve peaking at the center is the optical surface density, while the other curve shows the cumulative mass, still rising linearly at the outermost measurement. In parallel, the use of interferometric arrays for extragalactic H spectroscopy was being developed. Rogstad & Shostak (1972) published H rotation curves of five spirals mapped with the Owens Valley interferometer; the rotation curves of all five were very flat, suggesting very large values of mass-to-light ratio in the outer parts of their extended H disks. In 1978, Albert Bosma showed further evidence of flat rotation curves using data from the Westerbork Synthesis Radio Telescope.
By the late 1970s the existence of dark matter halos around galaxies was widely recognized as real, and became a major unsolved problem in astronomy.
1980-90s.
A stream of observations in the 1980s and 1990s supported the presence of dark matter; notable among them, for spiral galaxies, was the investigation of 967 objects by Persic, Salucci and Stel.
The evidence for dark matter also included gravitational lensing of background objects by galaxy clusters, the temperature distribution of hot gas in galaxies and clusters, and the pattern of anisotropies in the cosmic microwave background. According to consensus among cosmologists, dark matter is composed primarily of a not-yet-characterized type of subatomic particle. The search for this particle, by a variety of means, is one of the major efforts in particle physics.
Technical definition.
In standard cosmological calculations, "matter" means any constituent of the universe whose energy density scales with the inverse cube of the scale factor a, i.e., ρ ∝ a−3. This is in contrast to "radiation", which scales as the inverse fourth power of the scale factor, ρ ∝ a−4, and a cosmological constant, which does not change with respect to a. The different scaling factors for matter and radiation are a consequence of radiation redshift: for example, when cosmic expansion has gradually doubled the diameter of the observable universe, the scale factor a has doubled. The energy of the cosmic microwave background radiation has been halved (because the wavelength of each photon has doubled); the energy of ultra-relativistic particles, such as early-era standard-model neutrinos, is similarly halved. So, while the number density of both matter and radiation particles is diluted by the factor a−3, radiation loses an extra factor of a−1 in energy per particle, giving its overall a−4 scaling.
The cosmological constant, as an intrinsic property of space, has a constant energy density regardless of the volume under consideration.
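A minimal numerical sketch of these scaling laws (illustrative only, not from the article; the present-day density parameters below are rough round numbers):

```python
# How the energy densities of matter, radiation, and a cosmological constant
# scale with the cosmic scale factor a (relative to today's critical density).
def densities(a, omega_m=0.31, omega_r=9e-5, omega_lambda=0.69):
    matter = omega_m * a**-3        # dilutes with volume
    radiation = omega_r * a**-4     # volume dilution plus redshift of each photon
    lam = omega_lambda              # constant energy density of space itself
    return matter, radiation, lam

for a in (0.001, 0.01, 0.1, 1.0):
    m, r, l = densities(a)
    print(f"a={a:6.3f}  matter={m:10.3e}  radiation={r:10.3e}  Lambda={l:10.3e}")
```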
In principle, "dark matter" means all components of the universe which are not visible but still obey In practice, the term "dark matter" is often used to mean only the non-baryonic component of dark matter, i.e., excluding "missing baryons". Context will usually indicate which meaning is intended.
Observational evidence.
Galaxy rotation curves.
The arms of spiral galaxies rotate around the galactic center. The luminous mass density of a spiral galaxy decreases as one goes from the center to the outskirts. If luminous mass were all the matter, then we could model the galaxy as a point mass in the centre and test masses orbiting around it, similar to the Solar System. From Kepler's Third Law, the rotation velocities would then be expected to decrease with distance from the center, just as in the Solar System. This is not observed. Instead, the galaxy rotation curve remains flat or even increases as distance from the center increases.
If Kepler's laws are correct, then the obvious way to resolve this discrepancy is to conclude the mass distribution in spiral galaxies is not similar to that of the Solar System. In particular, there is a lot of non-luminous matter (dark matter) in the outskirts of the galaxy.
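As a rough illustration (a sketch, not from the article; the luminous mass and the flat rotation speed below are made-up but representative numbers), the following contrasts the Keplerian expectation with the enclosed mass M(r) = v^2 r / G implied by a flat curve:

```python
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
kpc = 3.086e19           # metres
M_sun = 1.989e30         # kg

M_lum = 1e11 * M_sun     # hypothetical luminous mass treated as a central point mass
v_flat = 220e3           # m/s, a typical observed flat rotation speed

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * kpc
    v_kepler = math.sqrt(G * M_lum / r)    # Keplerian prediction, falls off as r^-1/2
    M_enclosed = v_flat**2 * r / G         # mass needed inside r to sustain a flat curve
    print(f"r = {r_kpc:3d} kpc   v_Kepler = {v_kepler/1e3:4.0f} km/s   "
          f"M(<r) for flat curve = {M_enclosed/M_sun:.2e} M_sun")
```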
Velocity dispersions.
Stars in bound systems must obey the virial theorem. The theorem, together with the measured velocity distribution, can be used to measure the mass distribution in a bound system, such as elliptical galaxies or globular clusters. With some exceptions, velocity dispersion estimates of elliptical galaxies do not match the predicted velocity dispersion from the observed mass distribution, even assuming complicated distributions of stellar orbits.
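A minimal sketch of such a virial mass estimate, M ~ k σ^2 R / G, where the prefactor k (of order a few) depends on the assumed density profile and orbit distribution; the numbers below are assumptions chosen only to show the order of magnitude for an elliptical galaxy:

```python
G = 6.674e-11            # m^3 kg^-1 s^-2
kpc = 3.086e19           # metres
M_sun = 1.989e30         # kg

sigma = 200e3            # m/s, assumed line-of-sight velocity dispersion
R = 10 * kpc             # assumed characteristic radius
k = 5                    # order-of-magnitude prefactor (assumption)

M_virial = k * sigma**2 * R / G
print(f"Virial mass estimate: {M_virial / M_sun:.2e} solar masses")
```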
As with galaxy rotation curves, the obvious way to resolve the discrepancy is to postulate the existence of non-luminous matter.
Galaxy clusters.
Galaxy clusters are particularly important for dark matter studies since their masses can be estimated in three independent ways: from the scatter in the radial velocities of their member galaxies, from the X-rays emitted by hot intracluster gas, and from gravitational lensing of background objects.
Generally, these three methods are in reasonable agreement that dark matter outweighs visible matter by approximately 5 to 1.
Gravitational lensing.
One of the consequences of general relativity is the gravitational lens. Gravitational lensing occurs when massive objects between a source of light and the observer act as a lens to bend light from this source. One example is a cluster of galaxies lying between a more distant source such as a quasar and an observer. The more massive an object, the more lensing is observed.
Strong lensing is the observed distortion of background galaxies into arcs when their light passes through such a gravitational lens. It has been observed around many distant clusters including Abell 1689. By measuring the distortion geometry, the mass of the intervening cluster can be obtained. In the dozens of cases where this has been done, the mass-to-light ratios obtained correspond to the dynamical dark matter measurements of clusters. Lensing can lead to multiple copies of an image. By analyzing the distribution of multiple image copies, scientists have been able to deduce and map the distribution of dark matter around the MACS J0416.1-2403 galaxy cluster.
Weak gravitational lensing investigates minute distortions of galaxies, using statistical analyses from vast galaxy surveys. By examining the apparent shear deformation of the adjacent background galaxies, the mean distribution of dark matter can be characterized. The mass-to-light ratios correspond to dark matter densities predicted by other large-scale structure measurements. Dark matter does not bend light itself; mass (in this case the mass of the dark matter) bends spacetime. Light follows the curvature of spacetime, resulting in the lensing effect.
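As an illustration of how lensing constrains mass (a sketch, not an analysis of any real cluster; the Einstein radius and the angular-diameter distances below are assumed values), the mass enclosed within the Einstein radius follows from theta_E^2 = 4 G M D_ls / (c^2 D_l D_s):

```python
import math

G = 6.674e-11                  # m^3 kg^-1 s^-2
c = 2.998e8                    # m/s
Gpc = 3.086e25                 # metres
M_sun = 1.989e30               # kg
arcsec = math.pi / (180 * 3600)

theta_E = 45 * arcsec          # assumed observed Einstein radius
D_l, D_s, D_ls = 1.0 * Gpc, 2.0 * Gpc, 1.2 * Gpc   # assumed angular-diameter distances

M = theta_E**2 * c**2 * D_l * D_s / (4 * G * D_ls)
print(f"Mass inside the Einstein radius: {M / M_sun:.2e} solar masses")
```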
In May 2021, a new detailed dark matter map was revealed by the Dark Energy Survey Collaboration. In addition, the map revealed previously undiscovered filamentary structures connecting galaxies, by using a machine learning method.
An April 2023 study in "Nature Astronomy" examined the inferred distribution of the dark matter responsible for the lensing of the elliptical galaxy HS 0810+2554, and found tentative evidence of interference patterns within the dark matter. The observation of interference patterns is incompatible with WIMPs, but would be compatible with simulations involving 10−22 eV axions. While acknowledging the need to corroborate the findings by examining other astrophysical lenses, the authors argued that "The ability of (axion-based dark matter) to resolve lensing anomalies even in demanding cases such as HS 0810+2554, together with its success in reproducing other astrophysical observations, tilt the balance toward new physics invoking axions."
Cosmic microwave background.
Although both dark matter and ordinary matter are matter, they do not behave in the same way. In particular, in the early universe, ordinary matter was ionized and interacted strongly with radiation via Thomson scattering. Dark matter does not interact directly with radiation, but it does affect the cosmic microwave background (CMB) by its gravitational potential (mainly on large scales) and by its effects on the density and velocity of ordinary matter. Ordinary and dark matter perturbations, therefore, evolve differently with time and leave different imprints on the CMB.
The cosmic microwave background is very close to a perfect blackbody but contains very small temperature anisotropies of a few parts in 100,000. A sky map of anisotropies can be decomposed into an angular power spectrum, which is observed to contain a series of acoustic peaks at near-equal spacing but different heights.
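A minimal sketch of that decomposition (illustrative only): the angular power spectrum is built from the spherical-harmonic coefficients a_lm of the temperature map via C_l = (1/(2l+1)) Σ_m |a_lm|^2; here random numbers stand in for real map coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
lmax = 10
for ell in range(2, lmax + 1):
    # fake spherical-harmonic coefficients for multipole ell (2*ell + 1 of them)
    a_lm = rng.normal(size=2 * ell + 1) + 1j * rng.normal(size=2 * ell + 1)
    C_ell = np.mean(np.abs(a_lm) ** 2)      # equals sum_m |a_lm|^2 / (2*ell + 1)
    print(f"l = {ell:2d}   C_l = {C_ell:.3f}")
```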
The series of peaks can be predicted for any assumed set of cosmological parameters by modern computer codes such as CMBFAST and CAMB, and matching theory to data, therefore, constrains cosmological parameters.
The first peak mostly shows that the universe is close to spatially flat. The second peak constrains the cosmological density of baryons. The third peak pins down the cosmological density of dark matter.
The CMB anisotropy was first discovered by COBE in 1992, though this had too coarse resolution to detect the acoustic peaks.
After the discovery of the first acoustic peak by the balloon-borne BOOMERanG experiment in 2000, the power spectrum was precisely observed by WMAP in 2003–2012, and even more precisely by the "Planck" spacecraft in 2013–2015. The results support the Lambda-CDM model.
The observed CMB angular power spectrum provides powerful evidence in support of dark matter, as its precise structure is well fitted by the lambda-CDM model, but difficult to reproduce with any competing model such as modified Newtonian dynamics (MOND).
Structure formation.
Structure formation refers to the period after the Big Bang when density perturbations collapsed to form stars, galaxies, and clusters. Prior to structure formation, the Friedmann solutions to general relativity describe a homogeneous universe. Later, small anisotropies gradually grew and condensed the homogeneous universe into stars, galaxies and larger structures. Ordinary matter is affected by radiation, which is the dominant element of the universe at very early times. As a result, its density perturbations are washed out and unable to condense into structure. If there were only ordinary matter in the universe, there would not have been enough time for density perturbations to grow into the galaxies and clusters currently seen.
Dark matter provides a solution to this problem because it is unaffected by radiation. Therefore, its density perturbations can grow first. The resulting gravitational potential acts as an attractive potential well for ordinary matter collapsing later, speeding up the structure formation process.
Bullet Cluster.
The Bullet Cluster, the result of a recent collision of two galaxy clusters, provides model-independent observational evidence for dark matter.
Alternatives like modified gravity theories have a difficult time explaining this system because its apparent center of mass is far displaced from the baryonic center of mass.
Type Ia supernova distance measurements.
Type Ia supernovae can be used as standard candles to measure extragalactic distances, which can in turn be used to measure how fast the universe has expanded in the past. Data indicates the universe is expanding at an accelerating rate, the cause of which is usually ascribed to dark energy. Since observations indicate the universe is almost flat, it is expected the total energy density of everything in the universe should sum to 1 (Ωtot ≈ 1). The measured dark energy density is ΩΛ ≈ 0.690; the observed ordinary (baryonic) matter energy density is Ωb ≈ 0.0482 and the energy density of radiation is negligible. This leaves a missing Ωdm ≈ 0.258 which nonetheless behaves like matter (see technical definition section above) – dark matter.
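The bookkeeping in the paragraph above amounts to a simple subtraction; a sketch using the quoted values (the small difference from the quoted Ωdm ≈ 0.258 comes from rounding and from Ωtot not being exactly 1):

```python
omega_total = 1.0          # the universe is observed to be very nearly flat
omega_lambda = 0.690       # dark energy, value quoted above
omega_baryon = 0.0482      # ordinary (baryonic) matter, value quoted above
omega_radiation = 0.0      # negligible today

omega_dm = omega_total - omega_lambda - omega_baryon - omega_radiation
print(f"Omega_dm ~ {omega_dm:.3f}")    # ~0.26, the component attributed to dark matter
```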
Sky surveys and baryon acoustic oscillations.
Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. These are predicted to arise in the Lambda-CDM model due to acoustic oscillations in the photon–baryon fluid of the early universe and can be observed in the cosmic microwave background angular power spectrum. BAOs set up a preferred length scale for baryons. As the dark matter and baryons clumped together after recombination, the effect is much weaker in the galaxy distribution in the nearby universe, but is detectable as a subtle (≈1 percent) preference for pairs of galaxies to be separated by 147 Mpc, compared to those separated by 130–160 Mpc. This feature was predicted theoretically in the 1990s and then discovered in 2005, in two large galaxy redshift surveys, the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Combining the CMB observations with BAO measurements from galaxy redshift surveys provides a precise estimate of the Hubble constant and the average matter density in the Universe. The results support the Lambda-CDM model.
Redshift-space distortions.
Large galaxy redshift surveys may be used to make a three-dimensional map of the galaxy distribution. These maps are slightly distorted because distances are estimated from observed redshifts; the redshift contains a contribution from the galaxy's so-called peculiar velocity in addition to the dominant Hubble expansion term. On average, superclusters are expanding more slowly than the cosmic mean due to their gravity, while voids are expanding faster than average. In a redshift map, galaxies in front of a supercluster have excess radial velocities towards it and have redshifts slightly higher than their distance would imply, while galaxies behind the supercluster have redshifts slightly low for their distance. This effect causes superclusters to appear squashed in the radial direction, and likewise voids are stretched. Their angular positions are unaffected. This effect is not detectable for any one structure since the true shape is not known, but can be measured by averaging over many structures. It was predicted quantitatively by Nick Kaiser in 1987, and first decisively measured in 2001 by the 2dF Galaxy Redshift Survey. Results are in agreement with the lambda-CDM model.
Lyman-alpha forest.
In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasars. Lyman-alpha forest observations can also constrain cosmological models. These constraints agree with those obtained from WMAP data.
Theoretical classifications.
Composition.
The identity of dark matter is unknown, but there are many hypotheses about what dark matter could consist of.
Baryonic matter.
Dark matter can refer to any substance which interacts predominantly via gravity with visible matter (e.g., stars and planets). Hence in principle it need not be composed of a new type of fundamental particle but could, at least in part, be made up of standard baryonic matter, such as protons or neutrons. Most of the ordinary matter familiar to astronomers, including planets, brown dwarfs, red dwarfs, visible stars, white dwarfs, neutron stars, and black holes, falls into this category. Solitary black holes, neutron stars, burnt-out dwarfs, and other massive objects that are hard to detect are collectively known as MACHOs; some scientists initially hoped that baryonic MACHOs could account for all the dark matter.
However, multiple lines of evidence suggest the majority of dark matter is not baryonic: Big Bang nucleosynthesis constrains the abundance of baryons, the pattern of anisotropies in the cosmic microwave background requires a non-baryonic component, and searches for gravitational microlensing have shown that MACHOs can make up at most a small fraction of the dark matter.
Non-baryonic matter.
There are two main candidates for non-baryonic dark matter: hypothetical particles such as axions, sterile neutrinos, weakly interacting massive particles (WIMPs), supersymmetric particles, atomic dark matter, or geons; and primordial black holes. Once a black hole ingests either kind of matter, baryonic or not, the distinction is lost.
Unlike baryonic matter, nonbaryonic particles do not contribute to the formation of the elements in the early universe (Big Bang nucleosynthesis), and so their presence is revealed only via their gravitational effects, or weak lensing. In addition, if the particles of which it is composed are supersymmetric, they can undergo annihilation interactions with themselves, possibly resulting in observable by-products such as gamma rays and neutrinos (indirect detection).
In 2015, the idea that dense dark matter was composed of primordial black holes made a comeback
following results of gravitational wave measurements which detected the merger of intermediate-mass black holes. Black holes with about 30 solar masses are not predicted to form by either stellar collapse (typically less than 15 solar masses) or by the merger of black holes in galactic centers (millions or billions of solar masses). It was proposed that the intermediate-mass black holes causing the detected merger formed in the hot dense early phase of the universe due to denser regions collapsing. A later survey of about a thousand supernovae detected no gravitational lensing events, when about eight would be expected if intermediate-mass primordial black holes above a certain mass range accounted for over 60% of dark matter.
However, that study assumed a monochromatic distribution to represent the LIGO/Virgo mass range, which is inapplicable to the broadly platykurtic mass distribution suggested by subsequent James Webb Space Telescope observations.
The possibility that atom-sized primordial black holes account for a significant fraction of dark matter was ruled out by measurements of positron and electron fluxes outside the Sun's heliosphere by the Voyager 1 spacecraft. Tiny black holes are theorized to emit Hawking radiation. However the detected fluxes were too low and did not have the expected energy spectrum, suggesting that tiny primordial black holes are not widespread enough to account for dark matter. Nonetheless, research and theories proposing dense dark matter accounts for dark matter continue as of 2018, including approaches to dark matter cooling,
and the question remains unsettled. In 2019, the lack of microlensing effects in observations of Andromeda suggested that tiny black holes do not exist.
However, there still exists a largely unconstrained mass range smaller than that which can be limited by optical microlensing observations, where primordial black holes may account for all dark matter.
Free streaming length.
Dark matter can be divided into "cold", "warm", and "hot" categories. These categories refer to velocity rather than an actual temperature, indicating how far corresponding objects moved due to random motions in the early universe, before they slowed due to cosmic expansion – this is an important distance called the "free streaming length" (FSL). Primordial density fluctuations smaller than this length get washed out as particles spread from overdense to underdense regions, while larger fluctuations are unaffected; therefore this length sets a minimum scale for later structure formation.
The categories are set with respect to the size of a protogalaxy (an object that later evolves into a dwarf galaxy): Dark matter particles are classified as cold, warm, or hot according to their FSL; much smaller (cold), similar to (warm), or much larger (hot) than a protogalaxy. Mixtures of the above are also possible: a theory of mixed dark matter was popular in the mid-1990s, but was rejected following the discovery of dark energy.
Cold dark matter leads to a bottom-up formation of structure with galaxies forming first and galaxy clusters at a later stage, while hot dark matter would result in a top-down formation scenario with large matter aggregations forming early, later fragmenting into separate galaxies; the latter is excluded by high-redshift galaxy observations.
Fluctuation spectrum effects.
These categories also correspond to fluctuation spectrum effects and the interval following the Big Bang at which each type became non-relativistic. Davis "et al." wrote in 1985:
<templatestyles src="Template:Blockquote/styles.css" />
Candidate particles can be grouped into three categories on the basis of their effect on the fluctuation spectrum (Bond "et al." 1983). If the dark matter is composed of abundant light particles which remain relativistic until shortly before recombination, then it may be termed "hot". The best candidate for hot dark matter is a neutrino ... A second possibility is for the dark matter particles to interact more weakly than neutrinos, to be less abundant, and to have a mass of order 1 keV. Such particles are termed "warm dark matter", because they have lower thermal velocities than massive neutrinos ... there are at present few candidate particles which fit this description. Gravitinos and photinos have been suggested (Pagels and Primack 1982; Bond, Szalay and Turner 1982) ... Any particles which became nonrelativistic very early, and so were able to diffuse a negligible distance, are termed "cold" dark matter (CDM). There are many candidates for CDM including supersymmetric particles.
style="text-align:right"|
Alternative definitions.
Another approximate dividing line is that warm dark matter became non-relativistic when the universe was approximately 1 year old and 1 millionth of its present size, during the radiation-dominated era (photons and neutrinos), with a photon temperature of 2.7 million kelvins. Standard physical cosmology gives the particle horizon size as formula_0 (speed of light multiplied by time) in the radiation-dominated era, thus 2 light-years. A region of this size would expand to 2 million light-years today (absent structure formation). The actual FSL is approximately 5 times the above length, since it continues to grow slowly as particle velocities decrease inversely with the scale factor after they become non-relativistic. In this example the FSL would correspond to 10 million light-years (or 3 megaparsecs) today, around the size containing an average large galaxy.
The 2.7 million kelvin photon temperature gives a typical photon energy of 250 electronvolts, thereby setting a typical mass scale for warm dark matter: particles much more massive than this, such as GeV–TeV mass WIMPs, would become non-relativistic much earlier than one year after the Big Bang and thus have FSLs much smaller than a protogalaxy, making them cold. Conversely, much lighter particles, such as neutrinos with masses of only a few electronvolts, have FSLs much larger than a protogalaxy, thus qualifying them as hot.
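A quick check of the order-of-magnitude numbers quoted above (a sketch under the stated assumptions: age of about 1 year, an expansion factor of one million since then, and an FSL about 5 times the particle horizon):

```python
k_B_eV = 8.617e-5                 # Boltzmann constant in eV per kelvin
T_photon = 2.7e6                  # photon temperature (K) when the universe was ~1 year old
print(f"typical photon energy ~ {k_B_eV * T_photon:.0f} eV")      # a few hundred eV

t_years = 1.0                     # assumed age at the warm dark matter dividing line
horizon_ly = 2 * t_years          # particle horizon 2ct, in light-years
today_ly = horizon_ly * 1e6       # stretched by the assumed expansion factor of one million
fsl_ly = 5 * today_ly             # free streaming length ~5 times the horizon at that time
print(f"horizon then ~ {horizon_ly:.0f} ly; size today ~ {today_ly:.1e} ly; FSL ~ {fsl_ly:.1e} ly")
print(f"FSL ~ {fsl_ly / 3.26e6:.1f} Mpc")                         # ~3 Mpc (1 Mpc ~ 3.26 million ly)
```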
Cold dark matter.
Cold dark matter offers the simplest explanation for most cosmological observations. It is dark matter composed of constituents with an FSL much smaller than a protogalaxy. This is the focus for dark matter research, as hot dark matter does not seem capable of supporting galaxy or galaxy cluster formation, and most particle candidates became non-relativistic very early.
The constituents of cold dark matter are unknown. Possibilities range from large objects like MACHOs (such as black holes and Preon stars) or RAMBOs (such as clusters of brown dwarfs), to new particles such as WIMPs and axions.
The 1997 DAMA/NaI experiment and its successor DAMA/LIBRA in 2013 claimed to directly detect dark matter particles passing through the Earth, but many researchers remain skeptical, as negative results from similar experiments seem incompatible with the DAMA results.
Many supersymmetric models offer dark matter candidates in the form of the WIMPy Lightest Supersymmetric Particle (LSP). Separately, heavy sterile neutrinos exist in non-supersymmetric extensions to the standard model which explain the small neutrino mass through the seesaw mechanism.
Warm dark matter.
Warm dark matter comprises particles with an FSL comparable to the size of a protogalaxy. Predictions based on warm dark matter are similar to those for cold dark matter on large scales, but with fewer small-scale density perturbations. This reduces the predicted abundance of dwarf galaxies and may lead to a lower density of dark matter in the central parts of large galaxies. Some researchers consider this a better fit to observations. A challenge for this model is the lack of particle candidates with the required mass of approximately 300 eV to 3000 eV.
No known particles can be categorized as warm dark matter. A postulated candidate is the sterile neutrino: a heavier, slower form of neutrino that does not interact through the weak force, unlike other neutrinos. Some modified gravity theories, such as scalar–tensor–vector gravity, require "warm" dark matter to make their equations work.
Hot dark matter.
Hot dark matter consists of particles whose FSL is much larger than the size of a protogalaxy. The neutrino qualifies as such a particle. They were discovered independently, long before the hunt for dark matter: they were postulated in 1930, and detected in 1956. Neutrinos' mass is less than 10−6 that of an electron. Neutrinos interact with normal matter only via gravity and the weak force, making them difficult to detect (the weak force only works over a small distance, thus a neutrino triggers a weak force event only if it hits a nucleus head-on). This makes them "weakly interacting slender particles" (WISPs), as opposed to WIMPs.
The three known flavours of neutrinos are the "electron", "muon", and "tau". Neutrinos oscillate among the flavours as they move. It is hard to determine an exact upper bound on the collective average mass of the three neutrinos. For example, if the average neutrino mass were over 50 eV/c2 (less than 10−5 of the mass of an electron), the universe would collapse. CMB data and other methods indicate that their average mass probably does not exceed 0.3 eV/c2. Thus, observed neutrinos cannot explain dark matter.
Because galaxy-size density fluctuations get washed out by free-streaming, hot dark matter implies the first objects that can form are huge supercluster-size pancakes, which then fragment into galaxies. Deep-field observations show instead that galaxies formed first, followed by clusters and superclusters as galaxies clump together.
Dark matter aggregation and dense dark matter objects.
If dark matter is composed of weakly interacting particles, then an obvious question is whether it can form objects equivalent to planets, stars, or black holes. Historically, the answer has been it cannot,
because of two factors:
Ordinary matter forms dense objects because it has numerous ways to lose energy. Losing energy would be essential for object formation, because a particle that gains energy during compaction or falling "inward" under gravity, and cannot lose it any other way, will heat up and increase its velocity and momentum. Dark matter appears to lack a means to lose energy, simply because it is not capable of interacting in ways other than through gravity. The virial theorem suggests that such a particle would not stay bound to the gradually forming object – as the object began to form and compact, the dark matter particles within it would speed up and tend to escape.
Ordinary matter interacts in many different ways, which allows the matter to form more complex structures. For example, stars form through gravity, but the particles within them interact and can emit energy in the form of neutrinos and electromagnetic radiation through fusion when they become energetic enough. Protons and neutrons can bind via the strong interaction and then form atoms with electrons largely through electromagnetic interaction. There is no evidence that dark matter is capable of such a wide variety of interactions, since it seems to only interact through gravity (and possibly through some means no stronger than the weak interaction, although until dark matter is better understood, this is only speculation).
However, there are theories of atomic dark matter similar to normal matter that overcome these problems.
Detection of dark matter particles.
If dark matter is made up of subatomic particles, then millions, possibly billions, of such particles must pass through every square centimeter of the Earth each second. Many experiments aim to test this hypothesis. Although WIMPs have been the main search candidates, axions have drawn renewed attention, with the Axion Dark Matter Experiment (ADMX) searching for axions and many more experiments planned for the future. Another candidate is heavy hidden sector particles which only interact with ordinary matter via gravity.
These experiments can be divided into two classes: direct detection experiments, which search for the scattering of dark matter particles off atomic nuclei within a detector; and indirect detection, which look for the products of dark matter particle annihilations or decays.
Direct detection.
Direct detection experiments aim to observe low-energy recoils (typically a few keVs) of nuclei induced by interactions with particles of dark matter, which (in theory) are passing through the Earth. After such a recoil, the nucleus will emit energy in the form of scintillation light or phonons as they pass through sensitive detection apparatus. To do so effectively, it is crucial to maintain an extremely low background, which is the reason why such experiments typically operate deep underground, where interference from cosmic rays is minimized. Examples of underground laboratories with direct detection experiments include the Stawell mine, the Soudan mine, the SNOLAB underground laboratory at Sudbury, the Gran Sasso National Laboratory, the Canfranc Underground Laboratory, the Boulby Underground Laboratory, the Deep Underground Science and Engineering Laboratory and the China Jinping Underground Laboratory.
These experiments mostly use either cryogenic or noble liquid detector technologies. Cryogenic detectors operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium. Noble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include such projects as CDMS, CRESST, EDELWEISS, and EURECA, while noble liquid experiments include LZ, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experiment. Both of these techniques focus strongly on their ability to distinguish background particles (which predominantly scatter off electrons) from dark matter particles (that scatter off nuclei). Other experiments include SIMPLE and PICASSO, which use alternative methods in their attempts to detect dark matter.
Currently there has been no well-established claim of dark matter detection from a direct detection experiment, leading instead to strong upper limits on the mass and interaction cross section with nucleons of such dark matter particles. The DAMA/NaI and more recent DAMA/LIBRA experimental collaborations have detected an annual modulation in the rate of events in their detectors, which they claim is due to dark matter. This results from the expectation that as the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo will vary by a small amount. This claim is so far unconfirmed and in contradiction with negative results from other experiments such as LUX, SuperCDMS and XENON100.
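A minimal sketch of the annual-modulation signature described above (illustrative only, not any experiment's actual analysis; the mean rate, amplitude and peak day below are assumed numbers, with the peak taken near early June when the Earth's orbital velocity adds to the Sun's motion through the halo):

```python
import math

R0 = 1.00        # mean event rate (arbitrary units, assumption)
A = 0.02         # modulation amplitude, a few per cent of the mean (assumption)
t0 = 152         # day of year of the expected peak, around June 2 (assumption)
T = 365.25       # period of one year in days

for day in range(0, 366, 61):
    rate = R0 + A * math.cos(2 * math.pi * (day - t0) / T)
    print(f"day {day:3d}: expected relative rate {rate:.3f}")
```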
A special case of direct detection experiments covers those with directional sensitivity. This is a search strategy based on the motion of the Solar System around the Galactic Center. A low-pressure time projection chamber makes it possible to access information on recoiling tracks and constrain WIMP-nucleus kinematics. WIMPs coming from the direction in which the Sun travels (approximately towards Cygnus) may then be separated from background, which should be isotropic. Directional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC.
Indirect detection.
Indirect detection experiments search for the products of the self-annihilation or decay of dark matter particles in outer space. For example, in regions of high dark matter density (e.g., the centre of our galaxy) two dark matter particles could annihilate to produce gamma rays or Standard Model particle–antiparticle pairs. Alternatively, if a dark matter particle is unstable, it could decay into Standard Model (or other) particles. These processes could be detected indirectly through an excess of gamma rays, antiprotons or positrons emanating from high density regions in our galaxy or others. A major difficulty inherent in such searches is that various astrophysical sources can mimic the signal expected from dark matter, and so multiple signals are likely required for a conclusive discovery.
A few of the dark matter particles passing through the Sun or Earth may scatter off atoms and lose energy. Thus dark matter may accumulate at the center of these bodies, increasing the chance of collision/annihilation. This could produce a distinctive signal in the form of high-energy neutrinos. Such a signal would be strong indirect proof of WIMP dark matter. High-energy neutrino telescopes such as AMANDA, IceCube and ANTARES are searching for this signal.
The detection by LIGO in September 2015 of gravitational waves opens the possibility of observing dark matter in a new way, particularly if it is in the form of primordial black holes.
Many experimental searches have been undertaken to look for such emission from dark matter annihilation or decay, examples of which follow.
The Energetic Gamma Ray Experiment Telescope observed more gamma rays in 2008 than expected from the Milky Way, but scientists concluded this was most likely due to incorrect estimation of the telescope's sensitivity.
The Fermi Gamma-ray Space Telescope is searching for similar gamma rays. In 2009, an as yet unexplained surplus of gamma rays from the Milky Way's galactic center was found in Fermi data. This Galactic Center GeV excess might be due to dark matter annihilation or to a population of pulsars. In April 2012, an analysis of previously available data from Fermi's Large Area Telescope instrument produced statistical evidence of a 130 GeV signal in the gamma radiation coming from the center of the Milky Way. WIMP annihilation was seen as the most probable explanation.
At higher energies, ground-based gamma-ray telescopes have set limits on the annihilation of dark matter in dwarf spheroidal galaxies and in clusters of galaxies.
The PAMELA experiment (launched in 2006) detected excess positrons. They could be from dark matter annihilation or from pulsars. No excess antiprotons were observed.
In 2013, results from the Alpha Magnetic Spectrometer on the International Space Station indicated excess high-energy cosmic rays which could be due to dark matter annihilation.
Collider searches for dark matter.
An alternative approach to the detection of dark matter particles in nature is to produce them in a laboratory. Experiments with the Large Hadron Collider (LHC) may be able to detect dark matter particles produced in collisions of the LHC proton beams. Because a dark matter particle should have negligible interactions with normal visible matter, it may be detected indirectly as (large amounts of) missing energy and momentum that escape the detectors, provided other (non-negligible) collision products are detected. Constraints on dark matter also exist from the LEP experiment using a similar principle, but probing the interaction of dark matter particles with electrons rather than quarks. Any discovery from collider searches must be corroborated by discoveries in the indirect or direct detection sectors to prove that the particle discovered is, in fact, dark matter.
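A toy illustration of the missing-momentum idea (a sketch, not an actual LHC analysis; the event below is made up): the missing transverse momentum is minus the vector sum of the visible particles' transverse momenta, and a large imbalance could signal invisible particles such as dark matter.

```python
import math

# (px, py) of reconstructed visible objects in GeV, for one made-up event
visible = [(120.0, 15.0), (-60.0, 40.0), (-20.0, -10.0)]

met_x = -sum(px for px, _ in visible)   # balance the event in the transverse plane
met_y = -sum(py for _, py in visible)
met = math.hypot(met_x, met_y)
print(f"missing transverse momentum: {met:.1f} GeV")
```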
Alternative hypotheses.
Because dark matter has not yet been identified, many other hypotheses have emerged aiming to explain the same observational phenomena without introducing a new unknown type of matter. The theory underpinning most observational evidence for dark matter, general relativity, is well-tested on solar system scales, but its validity on galactic or cosmological scales has not been well proven. A suitable modification to general relativity can in principle conceivably eliminate the need for dark matter. The best-known theories of this class are MOND and its relativistic generalization tensor–vector–scalar gravity (TeVeS), f(R) gravity, negative mass, dark fluid, and entropic gravity. Alternative theories abound.
Primordial black holes are considered candidates for components of dark matter. Early constraints on primordial black holes as dark matter usually assumed most black holes would have similar or identical ("monochromatic") mass, which was disproven by LIGO/Virgo results.
In 2024, a review by Bernard Carr and colleagues concluded that primordial black holes forming in the quantum chromodynamics epoch, prior to 10−5 seconds after the Big Bang, can explain most observations attributed to dark matter. Such black hole formation would result in an extended mass distribution today, "with a number of distinct bumps, the most prominent one being at around one solar mass."
A problem with alternative hypotheses is that observational evidence for dark matter comes from so many independent approaches (see the "observational evidence" section above). Explaining any individual observation is possible but explaining all of them in the absence of dark matter is very difficult. Nonetheless, there have been some scattered successes for alternative hypotheses, such as a 2016 test of gravitational lensing in entropic gravity and a 2020 measurement of a unique MOND effect.
The prevailing opinion among most astrophysicists is that while modifications to general relativity can conceivably explain part of the observational evidence, there is probably enough data to conclude there must be some form of dark matter present in the universe.
In popular culture.
Dark matter regularly appears as a topic in hybrid periodicals that cover both factual scientific topics and science fiction,
and dark matter itself has been referred to as "the stuff of science fiction".
Mention of dark matter is made in works of fiction. In such cases, it is usually attributed extraordinary physical or magical properties, thus becoming inconsistent with the hypothesized properties of dark matter in physics and cosmology.
More broadly, the phrase "dark matter" is used metaphorically in fiction to evoke the unseen or invisible.
See also.
<templatestyles src="Column/styles.css"/>
<templatestyles src = "Column/styles.css" />
<templatestyles src = "Column/styles.css" />
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2 c t"
}
] | https://en.wikipedia.org/wiki?curid=8651 |
865138 | Euler's rotation theorem | Movement with a fixed point is rotation
In geometry, Euler's rotation theorem states that, in three-dimensional space, any displacement of a rigid body such that a point on the rigid body remains fixed is equivalent to a single rotation about some axis that runs through the fixed point. It follows that the composition of two rotations is also a rotation. Therefore, the set of rotations has a group structure, known as a "rotation group".
The theorem is named after Leonhard Euler, who proved it in 1775 by means of spherical geometry. The axis of rotation is known as an Euler axis, typically represented by a unit vector ê. Its product by the rotation angle is known as an axis-angle vector. The extension of the theorem to kinematics yields the concept of instant axis of rotation, a line of fixed points.
In linear algebra terms, the theorem states that, in 3D space, any two Cartesian coordinate systems with a common origin are related by a rotation about some fixed axis. This also means that the product of two rotation matrices is again a rotation matrix and that for a non-identity rotation matrix one eigenvalue is 1 and the other two are both complex, or both equal to −1. The eigenvector corresponding to this eigenvalue is the axis of rotation connecting the two systems.
Euler's theorem (1776).
Euler states the theorem as follows:
Theorema.
"Quomodocunque sphaera circa centrum suum conuertatur, semper assignari potest diameter,"
"cuius directio in situ translato conueniat cum situ initiali."
or (in English):
When a sphere is moved around its centre it is always possible to find a diameter whose direction in the displaced position is the same as in the initial position.
Proof.
Euler's original proof was made using spherical geometry and therefore whenever he speaks about triangles they must be understood as spherical triangles.
Previous analysis.
To arrive at a proof, Euler analyses what the situation would look like if the theorem were true. To that end, suppose the yellow line in Figure 1 goes through the center of the sphere and is the axis of rotation we are looking for, and point O is one of the two intersection points of that axis with the sphere. Then he considers an arbitrary great circle that does not contain O (the blue circle), and its image after rotation (the red circle), which is another great circle not containing O. He labels a point on their intersection as point A. (If the circles coincide, then A can be taken as any point on either; otherwise A is one of the two points of intersection.)
Now A is on the initial circle (the blue circle), so its image will be on the transported circle (red). He labels that image as point a. Since A is also on the transported circle (red), it is the image of another point that was on the initial circle (blue) and he labels that preimage as α (see Figure 2). Then he considers the two arcs joining α and a to A. These arcs have the same length because arc αA is mapped onto arc Aa. Also, since O is a fixed point, triangle αOA is mapped onto triangle AOa, so these triangles are isosceles, and arc AO bisects angle ∠αAa.
Construction of the best candidate point.
Let us construct a point that could be invariant using the previous considerations. We start with the blue great circle and its image under the transformation, which is the red great circle as in the Figure 1. Let point A be a point of intersection of those circles. If A’s image under the transformation is the same point then A is a fixed point of the transformation, and since the center is also a fixed point, the diameter of the sphere containing A is the axis of rotation and the theorem is proved.
Otherwise we label A’s image as a and its preimage as α, and connect these two points to A with arcs αA and Aa. These arcs have the same length. Construct the great circle that bisects ∠αAa and locate point O on that great circle so that arcs AO and aO have the same length, and call the region of the sphere containing O and bounded by the blue and red great circles the interior of ∠αAa. (That is, the yellow region in Figure 3.) Then since αA = Aa and O is on the bisector of ∠αAa, we also have αO = aO.
Proof of its invariance under the transformation.
Now let us suppose that O′ is the image of O. Then we know ∠αAO = ∠AaO′ and orientation is preserved, so O′ must be interior to ∠αAa. Now AO is transformed to aO′, so AO = aO′. Since AO is also the same length as aO, then aO = aO′ and ∠AaO = ∠aAO. But ∠αAO = ∠aAO, so ∠αAO = ∠AaO and ∠AaO = ∠AaO′. Therefore O′ is the same point as O. In other words, O is a fixed point of the transformation, and since the center is also a fixed point, the diameter of the sphere containing O is the axis of rotation.
Final notes about the construction.
Euler also points out that O can be found by intersecting the perpendicular bisector of Aa with the angle bisector of ∠αAa, a construction that might be easier in practice. He also proposed the intersection of two planes: the symmetry plane of the angle ∠αAa (which passes through the center C of the sphere), and the symmetry plane of the arc Aa (which also passes through C).
Proposition. These two planes intersect in a diameter. This diameter is the one we are looking for.
Proof. Let us call O either of the endpoints (there are two) of this diameter over the sphere surface. Since αA is mapped on Aa and the triangles have the same angles, it follows that the triangle OαA is transported onto the triangle OAa. Therefore the point O has to remain fixed under the movement.
Corollaries. This also shows that the rotation of the sphere can be seen as two consecutive reflections about the two planes described above. Points in a mirror plane are invariant under reflection, and hence the points on their intersection (a line: the axis of rotation) are invariant under both the reflections, and hence under the rotation.
Another simple way to find the rotation axis is by considering the plane on which the points α, A, a lie. The rotation axis is obviously orthogonal to this plane, and passes through the center C of the sphere.
Given that for a rigid body any movement that leaves an axis invariant is a rotation, this also proves that any arbitrary composition of rotations is equivalent to a single rotation around a new axis.
Matrix proof.
A spatial rotation is a linear map in one-to-one correspondence with a 3 × 3 rotation matrix R that transforms a coordinate vector x into X, that is Rx = X. Therefore, another version of Euler's theorem is that for every rotation R, there is a nonzero vector n for which Rn = n; this is exactly the claim that n is an eigenvector of R associated with the eigenvalue 1. Hence it suffices to prove that 1 is an eigenvalue of R; the rotation axis of R will be the line "μ"n, where n is the eigenvector with eigenvalue 1.
A rotation matrix has the fundamental property that its inverse is its transpose, that is
formula_0
where I is the 3 × 3 identity matrix and superscript T indicates the transposed matrix.
Compute the determinant of this relation to find that a rotation matrix has determinant ±1. In particular,
formula_1
A rotation matrix with determinant +1 is a proper rotation, and one with a negative determinant −1 is an "improper rotation", that is a reflection combined with a proper rotation.
It will now be shown that a proper rotation matrix R has at least one invariant vector n, i.e., Rn = n. Because this requires that (R − I)n = 0, we see that the vector n must be an eigenvector of the matrix R with eigenvalue "λ" = 1. Thus, this is equivalent to showing that det(R − I) = 0.
Use the two relations
formula_2
for any 3 × 3 matrix A and
formula_3
(since det(R) = 1) to compute
formula_4
This shows that "λ" = 1 is a root (solution) of the characteristic equation, that is,
formula_5
In other words, the matrix R − I is singular and has a non-zero kernel, that is, there is at least one non-zero vector, say n, for which
formula_6
The line "μ"n for real "μ" is invariant under R, i.e., "μ"n is a rotation axis. This proves Euler's theorem.
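A small numerical sketch of this result (an illustration, not part of the proof): two rotations are built with Rodrigues' formula, composed, and the axis of the product is read off as the eigenvector with eigenvalue 1, with the angle obtained from the trace relation tr R = 1 + 2 cos "φ" discussed below.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis by 'angle' (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

R = rot([0, 0, 1], 0.7) @ rot([1, 1, 0], 1.2)   # composition of two rotations

w, v = np.linalg.eig(R)
i = np.argmin(np.abs(w - 1))                    # eigenvalue closest to 1
axis = np.real(v[:, i])
axis /= np.linalg.norm(axis)
phi = np.arccos((np.trace(R) - 1) / 2)

print("axis:", axis)
print("angle:", phi)
print("R @ axis == axis:", np.allclose(R @ axis, axis))
```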
Equivalence of an orthogonal matrix to a rotation matrix.
Two matrices (representing linear maps) are said to be equivalent if there is a change of basis that makes one equal to the other. A proper orthogonal matrix is always equivalent (in this sense) to either the following matrix or to its vertical reflection:
formula_7
Then, any orthogonal matrix is either a rotation or an improper rotation. A general orthogonal matrix has only one real eigenvalue, either +1 or −1. When it is +1 the matrix is a rotation. When −1, the matrix is an improper rotation.
If R has more than one invariant vector then "φ" = 0 and R = I. "Any" vector is an invariant vector of I.
Excursion into matrix theory.
In order to prove the previous equation some facts from matrix theory must be recalled.
An "m" × "m" matrix A has "m" orthogonal eigenvectors if and only if A is normal, that is, if A†A
AA†. This result is equivalent to stating that normal matrices can be brought to diagonal form by a unitary similarity transformation:
formula_8
and U is unitary, that is,
formula_9
The eigenvalues "α"1, ..., "αm" are roots of the characteristic equation. If the matrix A happens to be unitary (and note that unitary matrices are normal), then
formula_10
and it follows that the eigenvalues of a unitary matrix are on the unit circle in the complex plane:
formula_11
Also an orthogonal (real unitary) matrix has eigenvalues on the unit circle in the complex plane. Moreover, since its characteristic equation (an mth order polynomial in λ) has real coefficients, it follows that its roots appear in complex conjugate pairs, that is, if α is a root then so is "α"∗. There are 3 roots, thus at least one of them must be purely real (+1 or −1).
After recollection of these general facts from matrix theory, we return to the rotation matrix R. It follows from its realness and orthogonality that we can find a U such that:
formula_12
If a matrix U can be found that gives the above form, and there is only one purely real component and it is −1, then we define formula_13 to be an improper rotation. Let us only consider the case, then, of matrices R that are proper rotations (the third eigenvalue is just 1). The third column of the 3 × 3 matrix U will then be equal to the invariant vector n. Writing u1 and u2 for the first two columns of U, this equation gives
formula_14
If u1 has eigenvalue 1, then "φ" = 0 and u2 also has eigenvalue 1, which implies that in that case R = I. In general, however, as
formula_15 implies that also formula_16 holds, so formula_17 can be chosen for formula_18. Similarly, formula_19 can result in a formula_20 with real entries only, for a proper rotation matrix formula_13.
Finally, the matrix equation is transformed by means of a unitary matrix,
formula_21
which gives
formula_22
The columns of U′ are orthonormal as it is a unitary matrix with real-valued entries only, due to its definition above, that formula_23 is the complex conjugate of formula_18 and that formula_20 is a vector with real-valued components. The third column is still formula_24 n, the other two columns of U′ are perpendicular to n. We can now see how our definition of improper rotation corresponds with the geometric interpretation: an improper rotation is a rotation around an axis (here, the axis corresponding to the third coordinate) and a reflection on a plane perpendicular to that axis. If we only restrict ourselves to matrices with determinant 1, we can thus see that they must be proper rotations. This result implies that any orthogonal matrix R corresponding to a proper rotation is equivalent to a rotation over an angle φ around an axis n.
Equivalence classes.
The trace (sum of diagonal elements) of the real rotation matrix given above is 1 + 2 cos "φ". Since a trace is invariant under an orthogonal matrix similarity transformation,
formula_25
it follows that all matrices that are equivalent to R by such orthogonal matrix transformations have the same trace: the trace is a "class function". This matrix transformation is clearly an equivalence relation, that is, all such equivalent matrices form an equivalence class.
In fact, all proper 3 × 3 rotation matrices form a group, usually denoted by SO(3) (the special orthogonal group in 3 dimensions) and all matrices with the same trace form an equivalence class in this group. All elements of such an equivalence class "share their rotation angle", but all rotations are around different axes. If n is an eigenvector of R with eigenvalue 1, then An is also an eigenvector of ARAT, also with eigenvalue 1. Unless A = I, n and An are different.
Applications.
Generators of rotations.
Suppose we specify an axis of rotation by a unit vector ["x", "y", "z"], and suppose we have an infinitely small rotation of angle Δ"θ" about that vector. Expanding the rotation matrix as an infinite addition, and taking the first order approach, the rotation matrix Δ"R" is represented as:
formula_26
A finite rotation through angle θ about this axis may be seen as a succession of small rotations about the same axis. Approximating Δ"θ" as "θ"/"N", where "N" is a large number, a rotation of θ about the axis may be represented as:
formula_27
It can be seen that Euler's theorem essentially states that "all" rotations may be represented in this form. The product A"θ" is the "generator" of the particular rotation, being the vector ("x","y","z") associated with the matrix A. This shows that the rotation matrix and the axis–angle format are related by the exponential function.
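A minimal sketch of this exponential relation (illustrative; the axis and angle below are arbitrary choices), comparing the matrix exponential of the generator with Rodrigues' closed form:

```python
import numpy as np
from scipy.linalg import expm

axis = np.array([1.0, 2.0, 2.0])
axis /= np.linalg.norm(axis)        # unit rotation axis (x, y, z)
x, y, z = axis
theta = 0.9                         # rotation angle in radians

A = np.array([[0.0, -z,  y],
              [  z, 0.0, -x],
              [ -y,  x, 0.0]])      # the generator associated with (x, y, z)

R_exp = expm(A * theta)             # exp(A*theta), the exponential map

# Rodrigues' formula for the same rotation, for comparison
R_rod = np.eye(3) + np.sin(theta) * A + (1 - np.cos(theta)) * (A @ A)
print(np.allclose(R_exp, R_rod))    # True
```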
One can derive a simple expression for the generator G. One starts with an arbitrary plane (in Euclidean space) defined by a pair of perpendicular unit vectors a and b. In this plane one can choose an arbitrary vector x with perpendicular y. One then solves for y in terms of x; substituting this into an expression for a rotation in a plane yields the rotation matrix R, which includes the generator G = baT − abT.
formula_28
To include vectors outside the plane in the rotation one needs to modify the above expression for R by including two projection operators that partition the space. This modified rotation matrix can be rewritten as an exponential function.
formula_29
Analysis is often easier in terms of these generators, rather than the full rotation matrix. Analysis in terms of the generators is known as the Lie algebra of the rotation group.
Quaternions.
It follows from Euler's theorem that the relative orientation of any pair of coordinate systems may be specified by a set of three independent numbers. Sometimes a redundant fourth number is added to simplify operations with quaternion algebra. Three of these numbers are the direction cosines that orient the eigenvector. The fourth is the angle about the eigenvector that separates the two sets of coordinates. Such a set of four numbers is called a quaternion.
While the quaternion as described above does not involve complex numbers, if quaternions are used to describe two successive rotations, they must be combined using the non-commutative quaternion algebra derived by William Rowan Hamilton through the use of imaginary numbers.
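A minimal sketch of such a composition, using the common convention in which a rotation by an angle about a unit axis is encoded as a unit quaternion with the half-angle; the axes and angles below are arbitrary illustrative choices, and reversing the order of the Hamilton product gives a different result.

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation by `angle` about the unit vector `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def quat_mult(q, r):
    """Hamilton product q * r (non-commutative)."""
    w1, v1 = q[0], q[1:]
    w2, v2 = r[0], r[1:]
    return np.concatenate(([w1 * w2 - v1 @ v2],
                           w1 * v2 + w2 * v1 + np.cross(v1, v2)))

# Two successive 90-degree rotations about the x- and y-axes (illustrative values).
qx = quat_from_axis_angle([1, 0, 0], np.pi / 2)
qy = quat_from_axis_angle([0, 1, 0], np.pi / 2)

print(quat_mult(qy, qx))   # rotate about x first, then about y
print(quat_mult(qx, qy))   # the reverse order gives a different quaternion
```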
Rotation calculation via quaternions has come to replace the use of direction cosines in aerospace applications through their reduction of the required calculations, and their ability to minimize round-off errors. Also, in computer graphics the ability to perform spherical interpolation between quaternions with relative ease is of value.
Generalizations.
In higher dimensions, any rigid motion that preserves a point in dimension 2"n" or 2"n" + 1 is a composition of at most n rotations in orthogonal planes of rotation, though these planes need not be uniquely determined, and a rigid motion may fix multiple axes. Also, any rigid motion that preserves "n" linearly independent points, which span an "n"-dimensional body in dimension 2"n" or 2"n" + 1, is a single planar rotation. To put it another way, if two rigid bodies, with identical geometry, share at least "n" points of 'identical' locations within themselves, the convex hull of which is "n"-dimensional, then a single planar rotation can bring one to cover the other accurately in dimension 2"n" or 2"n" + 1.
A rigid motion in three dimensions that does not necessarily fix a point is a "screw motion". This is because a composition of a rotation with a translation perpendicular to the axis is a rotation about a parallel axis, while composition with a translation parallel to the axis yields a screw motion; see screw axis. This gives rise to screw theory.
Notes.
<templatestyles src="Reflist/styles.css" />
"This article incorporates material from the Citizendium article "", which is licensed under the but not under the ." | [
{
"math_id": 0,
"text": "\n\\mathbf{R}^\\mathsf{T}\\mathbf{R} = \\mathbf{R}\\mathbf{R}^\\mathsf{T} = \\mathbf{I},\n"
},
{
"math_id": 1,
"text": "\\begin{align}\n 1 = \\det(\\mathbf{I}) &= \\det\\left(\\mathbf{R}^\\mathsf{T}\\mathbf{R}\\right) = \\det\\left(\\mathbf{R}^\\mathsf{T}\\right)\\det(\\mathbf{R}) = \\det(\\mathbf{R})^2 \\\\\n \\Longrightarrow\\qquad \\det(\\mathbf{R}) &= \\pm 1.\n\\end{align}"
},
{
"math_id": 2,
"text": " \\det(-\\mathbf{A}) = (-1)^{3} \\det(\\mathbf{A}) = - \\det(\\mathbf{A}) \\quad"
},
{
"math_id": 3,
"text": " \\det\\left(\\mathbf{R}^{-1} \\right) = 1 \\quad"
},
{
"math_id": 4,
"text": "\\begin{align}\n &\\det(\\mathbf{R} - \\mathbf{I}) = \\det\\left((\\mathbf{R} - \\mathbf{I})^\\mathsf{T}\\right) \\\\\n {}={} &\\det\\left(\\mathbf{R}^\\mathsf{T} - \\mathbf{I}\\right) = \\det\\left(\\mathbf{R}^{-1} - \\mathbf{R}^{-1}\\mathbf{R}\\right) \\\\\n {}={} &\\det\\left(\\mathbf{R}^{-1}(\\mathbf{I} - \\mathbf{R})\\right) = \\det\\left(\\mathbf{R}^{-1}\\right) \\, \\det(-(\\mathbf{R} - \\mathbf{I})) \\\\\n {}={} &-\\det(\\mathbf{R} - \\mathbf{I}) \\\\[3pt]\n \\Longrightarrow\\ 0 ={} &\\det(\\mathbf{R} - \\mathbf{I}).\n\\end{align}"
},
{
"math_id": 5,
"text": "\n\\det(\\mathbf{R} - \\lambda \\mathbf{I}) = 0\\quad \\hbox{for}\\quad \\lambda=1.\n"
},
{
"math_id": 6,
"text": "\n(\\mathbf{R} - \\mathbf{I}) \\mathbf{n} = \\mathbf{0} \\quad \\Longleftrightarrow \\quad \\mathbf{R}\\mathbf{n} = \\mathbf{n}.\n"
},
{
"math_id": 7,
"text": "\n\\mathbf{R} \\sim\n\\begin{pmatrix}\n\\cos\\phi & -\\sin\\phi & 0 \\\\\n\\sin\\phi & \\cos\\phi & 0 \\\\\n0 & 0 & 1\\\\\n\\end{pmatrix}, \\qquad 0\\le \\phi \\le 2\\pi.\n"
},
{
"math_id": 8,
"text": "\n\\mathbf{A}\\mathbf{U} = \\mathbf{U}\\; \\operatorname{diag}(\\alpha_1,\\ldots,\\alpha_m)\\quad \\Longleftrightarrow\\quad\n\\mathbf{U}^\\dagger \\mathbf{A}\\mathbf{U} = \\operatorname{diag}(\\alpha_1,\\ldots,\\alpha_m),\n"
},
{
"math_id": 9,
"text": "\n\\mathbf{U}^\\dagger = \\mathbf{U}^{-1}.\n"
},
{
"math_id": 10,
"text": "\n\\left(\\mathbf{U}^\\dagger\\mathbf{A} \\mathbf{U}\\right)^\\dagger = \\operatorname{diag}\\left(\\alpha^*_1,\\ldots,\\alpha^*_m\\right) =\n\\mathbf{U}^\\dagger\\mathbf{A}^{-1} \\mathbf{U} = \\operatorname{diag}\\left(\\frac{1}{\\alpha_1},\\ldots,\\frac{1}{\\alpha_m}\\right)\n"
},
{
"math_id": 11,
"text": "\n\\alpha^*_k = \\frac{1}{\\alpha_k} \\quad\\Longleftrightarrow\\quad \\alpha^*_k\\alpha_k = \\left|\\alpha_k\\right|^2 = 1,\\qquad k=1,\\ldots,m.\n"
},
{
"math_id": 12,
"text": "\n \\mathbf{R} \\mathbf{U} = \\mathbf{U}\n\\begin{pmatrix}\ne^{i\\phi} & 0 & 0 \\\\\n0 & e^{-i\\phi} & 0 \\\\\n0 & 0 & \\pm 1 \\\\\n\\end{pmatrix}\n"
},
{
"math_id": 13,
"text": "\\mathbf{R}"
},
{
"math_id": 14,
"text": "\n \\mathbf{R}\\mathbf{u}_1 = e^{i\\phi}\\, \\mathbf{u}_1 \\quad\\hbox{and}\\quad \\mathbf{R}\\mathbf{u}_2 = e^{-i\\phi}\\, \\mathbf{u}_2.\n"
},
{
"math_id": 15,
"text": " (\\mathbf{R}-e^{i\\phi}\\mathbf{I})\\mathbf{u}_1 = 0 "
},
{
"math_id": 16,
"text": " (\\mathbf{R}-e^{-i\\phi}\\mathbf{I})\\mathbf{u}^*_1 = 0 "
},
{
"math_id": 17,
"text": " \\mathbf{u}_2 = \\mathbf{u}^*_1 "
},
{
"math_id": 18,
"text": " \\mathbf{u}_2 "
},
{
"math_id": 19,
"text": " (\\mathbf{R}-\\mathbf{I})\\mathbf{u}_3 = 0 "
},
{
"math_id": 20,
"text": " \\mathbf{u}_3 "
},
{
"math_id": 21,
"text": "\n \\mathbf{R} \\mathbf{U}\n\\begin{pmatrix}\n\\frac{1}{\\sqrt{2}} & \\frac{i}{\\sqrt{2}} & 0 \\\\\n\\frac{1}{\\sqrt{2}} & \\frac{-i}{\\sqrt{2}} & 0 \\\\\n0 & 0 & 1\\\\\n\\end{pmatrix}\n= \\mathbf{U}\n\\underbrace{\n\\begin{pmatrix}\n\\frac{1}{\\sqrt{2}} & \\frac{i}{\\sqrt{2}} & 0 \\\\\n\\frac{1}{\\sqrt{2}} & \\frac{-i}{\\sqrt{2}} & 0 \\\\\n0 & 0 & 1\\\\\n\\end{pmatrix}\n\\begin{pmatrix}\n\\frac{1}{\\sqrt{2}} & \\frac{1}{\\sqrt{2}} & 0 \\\\\n\\frac{-i}{\\sqrt{2}} & \\frac{i}{\\sqrt{2}} & 0 \\\\\n0 & 0 & 1\\\\\n\\end{pmatrix}\n}_{=\\;\\mathbf{I}}\n\\begin{pmatrix}\ne^{i\\phi} & 0 & 0 \\\\\n0 & e^{-i\\phi} & 0 \\\\\n0 & 0 & 1 \\\\\n\\end{pmatrix}\n\\begin{pmatrix}\n\\frac{1}{\\sqrt{2}} & \\frac{i}{\\sqrt{2}} & 0 \\\\\n\\frac{1}{\\sqrt{2}} & \\frac{-i}{\\sqrt{2}} & 0 \\\\\n0 & 0 & 1\\\\\n\\end{pmatrix}\n"
},
{
"math_id": 22,
"text": "\n\\mathbf{U'}^\\dagger \\mathbf{R} \\mathbf{U'} = \\begin{pmatrix}\n\\cos\\phi & -\\sin\\phi & 0 \\\\\n\\sin\\phi & \\cos\\phi & 0 \\\\\n0 & 0 & 1\\\\\n\\end{pmatrix}\n\\quad\\text{ with }\\quad \\mathbf{U'}\n= \\mathbf{U}\n\\begin{pmatrix}\n\\frac{1}{\\sqrt{2}} & \\frac{i}{\\sqrt{2}} & 0 \\\\\n\\frac{1}{\\sqrt{2}} & \\frac{-i}{\\sqrt{2}} & 0 \\\\\n0 & 0 & 1\\\\\n\\end{pmatrix} .\n"
},
{
"math_id": 23,
"text": " \\mathbf{u}_1 "
},
{
"math_id": 24,
"text": " \\mathbf{u}_3 ="
},
{
"math_id": 25,
"text": "\n\\mathrm{Tr}\\left[\\mathbf{A} \\mathbf{R} \\mathbf{A}^\\mathsf{T}\\right] =\n\\mathrm{Tr}\\left[ \\mathbf{R} \\mathbf{A}^\\mathsf{T}\\mathbf{A}\\right] = \\mathrm{Tr}[\\mathbf{R}]\\quad\\text{ with }\\quad \\mathbf{A}^\\mathsf{T} = \\mathbf{A}^{-1},\n"
},
{
"math_id": 26,
"text": "\n \\Delta R =\n \\begin{bmatrix}\n 1 & 0 & 0 \\\\\n 0 & 1 & 0 \\\\\n 0 & 0 & 1\n \\end{bmatrix} +\n \\begin{bmatrix}\n 0 & z & -y \\\\\n -z & 0 & x \\\\\n y & -x & 0\n \\end{bmatrix}\\,\\Delta \\theta =\n \\mathbf{I} + \\mathbf{A}\\,\\Delta \\theta.\n"
},
{
"math_id": 27,
"text": "R = \\left(\\mathbf{1}+\\frac{\\mathbf{A}\\theta}{N}\\right)^N \\approx e^{\\mathbf{A}\\theta}."
},
{
"math_id": 28,
"text": "\\begin{align}\n \\mathbf{x} &= \\mathbf{a}\\cos\\alpha + \\mathbf{b}\\sin\\alpha \\\\\n \\mathbf{y} &= -\\mathbf{a}\\sin\\alpha + \\mathbf{b}\\cos\\alpha \\\\[8pt]\n \\cos\\alpha &= \\mathbf{a}^\\mathsf{T}\\mathbf{x} \\\\\n \\sin\\alpha &= \\mathbf{b}^\\mathsf{T}\\mathbf{x} \\\\[8px]\n \\mathbf{y} &= -\\mathbf{ab}^\\mathsf{T}\\mathbf{x} + \\mathbf{ba}^\\mathsf{T}\\mathbf{x}\n = \\left( \\mathbf{ba}^\\mathsf{T} - \\mathbf{ab}^\\mathsf{T} \\right)\\mathbf{x} \\\\[8px]\n \\mathbf{x}' &= \\mathbf{x}\\cos\\beta + \\mathbf{y}\\sin\\beta \\\\ \n &= \\left( \\mathbf{I}\\cos\\beta + \\left( \\mathbf{ba}^\\mathsf{T} - \\mathbf{ab}^\\mathsf{T} \\right) \\sin\\beta \\right)\\mathbf{x} \\\\[8px] \n \\mathbf{R} &= \\mathbf{I}\\cos\\beta + \\left( \\mathbf{ba}^\\mathsf{T} - \\mathbf{ab}^\\mathsf{T} \\right)\\sin\\beta \\\\ \n &= \\mathbf{I}\\cos\\beta + \\mathbf{G}\\sin\\beta \\\\[8px] \n \\mathbf{G} &= \\mathbf{ba}^\\mathsf{T} - \\mathbf{ab}^\\mathsf{T} \n\\end{align}"
},
{
"math_id": 29,
"text": "\\begin{align}\n \\mathbf{P_{ab}} &= -\\mathbf{G}^2 \\\\ \n \\mathbf{R} &= \\mathbf{I} - \\mathbf{P_{ab}} + \\left( \\mathbf{I} \\cos \\beta + \\mathbf{G} \\sin \\beta \\right)\\mathbf{P_{ab}} = e^{\\mathbf{G}\\beta } \n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=865138 |
8652020 | Convex body | Non-empty convex set in Euclidean space
In mathematics, a convex body in formula_0-dimensional Euclidean space formula_1 is a compact convex set with non-empty interior. Some authors do not require a non-empty interior, merely that the set is non-empty.
A convex body formula_2 is called symmetric if it is centrally symmetric with respect to the origin; that is to say, a point formula_3 lies in formula_2 if and only if its antipode, formula_4 also lies in formula_5 Symmetric convex bodies are in a one-to-one correspondence with the unit balls of norms on formula_6
Important examples of convex bodies are the Euclidean ball, the hypercube and the cross-polytope.
Metric space structure.
Write formula_7 for the set of convex bodies in formula_8. Then formula_7 is a complete metric space with metric
formula_9.
Further, the Blaschke Selection Theorem says that every "d"-bounded sequence in formula_7 has a convergent subsequence.
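For convex bodies this distance can also be written as the largest difference of the two support functions over all unit directions, which suggests a simple numerical approximation: sample many directions and take the maximum. The polygons and the sampling resolution in the sketch below are arbitrary illustrative choices.

```python
import numpy as np

def support(vertices, u):
    """Support function h_K(u) = max over v in K of <u, v>, for a convex polygon given by its vertices."""
    return np.max(vertices @ u)

def hausdorff(K, L, n_dirs=3600):
    """Approximate d(K, L) = sup over unit u of |h_K(u) - h_L(u)| by sampling unit directions."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    dirs = np.column_stack((np.cos(angles), np.sin(angles)))
    return max(abs(support(K, u) - support(L, u)) for u in dirs)

# The square [-1, 1]^2 and the same square scaled by 2 (illustrative convex bodies in the plane).
square = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
bigger = 2.0 * square

# Expected value: sqrt(2) ~ 1.414, the distance from the corner (2, 2) to the smaller square.
print(hausdorff(square, bigger))
```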
Polar body.
If formula_2 is a bounded convex body containing the origin formula_10 in its interior, the polar body formula_11 is formula_12. The polar body has several nice properties: formula_13, formula_11 is bounded, and if formula_14 then formula_15. Passing to the polar body is a type of duality relation.
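A small numerical illustration of the definition and of the order-reversing property: the polar of the square with vertices (±1, ±1) is the cross-polytope, and shrinking the body enlarges its polar. The sampling below is an arbitrary illustrative choice.

```python
import numpy as np

def in_polar(u, vertices):
    """u lies in K* iff <u, v> <= 1 for every v in K; for a polytope it suffices to test the vertices."""
    return np.all(vertices @ u <= 1.0)

square = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)   # K = [-1, 1]^2

# Compare membership in K* with the cross-polytope |x| + |y| <= 1 on random sample points.
pts = np.random.default_rng(1).uniform(-2, 2, size=(10000, 2))
polar_membership = np.array([in_polar(p, square) for p in pts])
cross_membership = np.abs(pts).sum(axis=1) <= 1.0
print(np.all(polar_membership == cross_membership))   # True: the polar of the square is the cross-polytope

# Inclusion reversal: the smaller body 0.5*K has a larger polar body.
half_square = 0.5 * square
print(all(in_polar(p, half_square) for p in pts[polar_membership]))    # K1 inside K2 implies K2* inside K1*
```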
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\R^n"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "- x"
},
{
"math_id": 5,
"text": "K."
},
{
"math_id": 6,
"text": "\\R^n."
},
{
"math_id": 7,
"text": "\\mathcal K^n"
},
{
"math_id": 8,
"text": "\\mathbb R^n"
},
{
"math_id": 9,
"text": "d(K,L) := \\inf\\{\\epsilon \\geq 0 : K \\subset L + B^n(\\epsilon), L \\subset K + B^n(\\epsilon) \\}"
},
{
"math_id": 10,
"text": "O"
},
{
"math_id": 11,
"text": "K^*"
},
{
"math_id": 12,
"text": "\\{u : \\langle u,v \\rangle \\leq 1, \\forall v \\in K \\} "
},
{
"math_id": 13,
"text": "(K^*)^*=K"
},
{
"math_id": 14,
"text": "K_1\\subset K_2"
},
{
"math_id": 15,
"text": "K_2^*\\subset K_1^*"
}
] | https://en.wikipedia.org/wiki?curid=8652020 |
865211 | Homes's law |
In superconductivity, Homes's law is an empirical relation that states that a superconductor's
critical temperature ("T"c) is proportional to the strength of the superconducting state for temperatures well below "T"c close to zero temperature (also referred to as the fully formed superfluid density, formula_0) multiplied by the electrical resistivity formula_1 measured just above the critical temperature. In cuprate high-temperature superconductors the relation follows the form
formula_2,
or alternatively
formula_3.
Many novel superconductors are anisotropic, so the resistivity and the superfluid density are
tensor quantities; the superscript formula_4 denotes the crystallographic direction
along which these quantities are measured.
Note that this expression assumes that the conductivity and temperature have both been recast in units
of cm−1 (or s−1), and that the superfluid density has units of cm−2
(or s−2); the constant is dimensionless. The expected form for a BCS dirty-limit superconductor
has a slightly larger numerical constant of ~8.1.
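A minimal sketch of how the relation is applied, with all quantities in the units just described; the conductivity and superfluid density below are illustrative placeholders rather than measured values, and the temperature is converted from cm−1 to kelvin using kB/(hc) ≈ 0.695 cm−1 per kelvin.

```python
# Sketch of the scaling relation rho_s0/8 ~= 4.4 * sigma_dc * Tc, with sigma_dc in cm^-1,
# rho_s0 in cm^-2 and Tc in cm^-1, as described above. Input values are hypothetical.

K_TO_CM1 = 0.695          # k_B / (h c) in cm^-1 per kelvin

def predicted_tc_kelvin(sigma_dc_cm1, rho_s0_cm2, constant=4.4):
    """Tc implied by the scaling relation for a given dc conductivity and superfluid density."""
    tc_cm1 = rho_s0_cm2 / (8.0 * constant * sigma_dc_cm1)
    return tc_cm1 / K_TO_CM1

sigma_dc = 5.0e3          # cm^-1, conductivity just above Tc (illustrative order of magnitude)
rho_s0 = 1.0e7            # cm^-2, fully formed superfluid density (illustrative order of magnitude)
print(predicted_tc_kelvin(sigma_dc, rho_s0))   # predicted Tc in kelvin for these placeholder inputs
```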
The law is named for physicist Christopher Homes and was first presented in the July 29, 2004 edition of Nature, and was the subject of a News and Views article by Jan Zaanen in the same issue in which he speculated that the high transition temperatures observed in the
cuprate superconductors are because the metallic states in these materials are as viscous as
permitted by the laws of quantum physics. A more detailed version of this scaling relation subsequently appeared in
Physical Review B in 2005, in which it was argued that any material that falls on the scaling line is likely in the
dirty limit (superconducting coherence length ξ0 is much greater than the normal-state mean-free path "l",
ξ0≫ "l"); however, a paper by Vladimir Kogan in Physical Review B in 2013 has shown that the
scaling relation is valid even when ξ0~ "l",
suggesting that only materials in the clean limit (ξ0 ≪ "l") will fall off this scaling line.
Francis Pratt and Stephen Blundell have argued that Homes's law is violated in the organic superconductors. This
work was first presented in Physical Review Letters in March 2005. On the other hand, it has been recently demonstrated by Sasa Dordevic and coworkers that
if the dc conductivity and the superfluid density are measured on the same sample at the same time using either infrared
or microwave impedance spectroscopy, then the organic superconductors do indeed fall on the universal scaling line,
along with a number of other exotic superconductors. This work was published in Scientific Reports in
2013. | [
{
"math_id": 0,
"text": "\\rho_{s0}"
},
{
"math_id": 1,
"text": "\\rho_{dc}"
},
{
"math_id": 2,
"text": " \\rho_{dc}^\\alpha\\,\\rho_{s0}^\\alpha/8 \\simeq 4.4\\,T_c "
},
{
"math_id": 3,
"text": "\\rho_{s0}^\\alpha/8 \\simeq 4.4\\,\\sigma_{dc}^\\alpha\\, T_c"
},
{
"math_id": 4,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=865211 |
865249 | R-tree | Data structures used in spatial indexing
R-trees are tree data structures used for spatial access methods, i.e., for indexing multi-dimensional information such as geographical coordinates, rectangles or polygons. The R-tree was proposed by Antonin Guttman in 1984 and has found significant use in both theoretical and applied contexts. A common real-world usage for an R-tree might be to store spatial objects such as restaurant locations or the polygons that typical maps are made of: streets, buildings, outlines of lakes, coastlines, etc. and then find answers quickly to queries such as "Find all museums within 2 km of my current location", "retrieve all road segments within 2 km of my location" (to display them in a navigation system) or "find the nearest gas station" (although not taking roads into account). The R-tree can also accelerate nearest neighbor search for various distance metrics, including great-circle distance.
R-tree idea.
The key idea of the data structure is to group nearby objects and represent them with their minimum bounding rectangle in the next higher level of the tree; the "R" in R-tree is for rectangle. Since all objects lie within this bounding rectangle, a query that does not intersect the bounding rectangle also cannot intersect any of the contained objects. At the leaf level, each rectangle describes a single object; at higher levels the aggregation includes an increasing number of objects. This can also be seen as an increasingly coarse approximation of the data set.
Similar to the B-tree, the R-tree is also a balanced search tree (so all leaf nodes are at the same depth), organizes the data in pages, and is designed for storage on disk (as used in databases). Each page can contain a maximum number of entries, often denoted as formula_0. It also guarantees a minimum fill (except for the root node); however, the best performance has been experienced with a minimum fill of 30%–40% of the maximum number of entries (B-trees guarantee 50% page fill, and B*-trees even 66%). The reason for this is the more complex balancing required for spatial data as opposed to the linear data stored in B-trees.
As with most trees, the searching algorithms (e.g., intersection, containment, nearest neighbor search) are rather simple. The key idea is to use the bounding boxes to decide whether or not to search inside a subtree. In this way, most of the nodes in the tree are never read during a search. Like B-trees, R-trees are suitable for large data sets and databases, where nodes can be paged to memory when needed, and the whole tree cannot be kept in main memory. Even if the data fits in memory (or is cached), R-trees in most practical applications will usually provide performance advantages over a naive check of all objects when the number of objects is more than a few hundred or so. However, for in-memory applications, there are similar alternatives that can provide slightly better performance or be simpler to implement in practice. To maintain in-memory computing for R-trees in a computer cluster where computing nodes are connected by a network, researchers have used RDMA (Remote Direct Memory Access) to implement data-intensive applications under R-trees in a distributed environment. This approach is scalable for increasingly large applications and achieves high throughput and low latency performance for R-trees.
The key difficulty of the R-tree is building an efficient tree that, on the one hand, is balanced (so the leaf nodes are at the same height) while, on the other hand, its rectangles neither cover too much empty space nor overlap too much (so that during search, fewer subtrees need to be processed). For example, the original idea for inserting elements to obtain an efficient tree is to always insert into the subtree that requires the least enlargement of its bounding box. Once that page is full, the data is split into two sets, each of which should cover a minimal area. Most of the research and improvements for R-trees aim at improving the way the tree is built and can be grouped into two objectives: building an efficient tree from scratch (known as bulk-loading) and performing changes on an existing tree (insertion and deletion).
R-trees do not guarantee good worst-case performance, but generally perform well with real-world data. While more of theoretical interest, the (bulk-loaded) Priority R-tree variant of the R-tree is worst-case optimal, but due to the increased complexity, has not received much attention in practical applications so far.
When data is organized in an R-tree, the neighbors within a given distance r and the k nearest neighbors (for any Lp-Norm) of all points can efficiently be computed using a spatial join. This is beneficial for many algorithms based on such queries, for example the Local Outlier Factor. DeLi-Clu, Density-Link-Clustering is a cluster analysis algorithm that uses the R-tree structure for a similar kind of spatial join to efficiently compute an OPTICS clustering.
Algorithm.
Data layout.
Data in R-trees is organized in pages that can have a variable number of entries (up to some pre-defined maximum, and usually above a minimum fill). Each entry within a non-leaf node stores two pieces of data: a way of identifying a child node, and the bounding box of all entries within this child node. Leaf nodes store the data required for each child, often a point or bounding box representing the child and an external identifier for the child. For point data, the leaf entries can be just the points themselves. For polygon data (that often requires the storage of large polygons) the common setup is to store only the MBR (minimum bounding rectangle) of the polygon along with a unique identifier in the tree.
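A minimal sketch of this layout; the type names, the page capacity and the minimum fill below are illustrative choices rather than a reference implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Rect = Tuple[float, float, float, float]          # (xmin, ymin, xmax, ymax)

@dataclass
class Entry:
    mbr: Rect                                     # minimum bounding rectangle of the child
    child: Optional["Node"] = None                # pointer to a child node (inner entries)
    record_id: Optional[int] = None               # external identifier of the object (leaf entries)

@dataclass
class Node:
    is_leaf: bool
    entries: List[Entry] = field(default_factory=list)

MAX_ENTRIES = 8                                   # page capacity M (illustrative)
MIN_ENTRIES = 3                                   # minimum fill, roughly 40% of M (illustrative)
```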
Search.
In range searching, the input is a search rectangle (Query box). Searching is quite similar to searching in a B+ tree. The search starts from the root node of the tree. Every internal node contains a set of rectangles and pointers to the corresponding child node and every leaf node contains the rectangles of spatial objects (the pointer to some spatial object can be there). For every rectangle in a node, it has to be decided if it overlaps the search rectangle or not. If yes, the corresponding child node has to be searched also. Searching is done like this in a recursive manner until all overlapping nodes have been traversed. When a leaf node is reached, the contained bounding boxes (rectangles) are tested against the search rectangle and their objects (if there are any) are put into the result set if they lie within the search rectangle.
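A sketch of this recursive window query over a small hand-built two-level tree; rectangles are plain (xmin, ymin, xmax, ymax) tuples and nodes are plain dictionaries, a deliberately simplified stand-in for a paged on-disk layout.

```python
def intersects(a, b):
    """Axis-aligned rectangles a, b given as (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def range_search(node, query, results):
    """Recursive window query: collect identifiers of all leaf entries whose MBR intersects `query`."""
    for mbr, child in node["entries"]:
        if intersects(mbr, query):
            if node["leaf"]:
                results.append(child)              # at leaf level `child` is the object identifier
            else:
                range_search(child, query, results)
    return results

# Tiny hand-built tree (illustrative data): one root directory page with two leaf pages.
leaf1 = {"leaf": True, "entries": [((0, 0, 1, 1), "a"), ((2, 2, 3, 3), "b")]}
leaf2 = {"leaf": True, "entries": [((8, 8, 9, 9), "c")]}
root = {"leaf": False, "entries": [((0, 0, 3, 3), leaf1), ((8, 8, 9, 9), leaf2)]}

print(range_search(root, (1.5, 1.5, 4, 4), []))    # -> ['b']
```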
For priority search such as nearest neighbor search, the query consists of a point or rectangle. The root node is inserted into the priority queue. Until the queue is empty or the desired number of results have been returned the search continues by processing the nearest entry in the queue. Tree nodes are expanded and their children reinserted. Leaf entries are returned when encountered in the queue. This approach can be used with various distance metrics, including great-circle distance for geographic data.
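A sketch of this best-first search with a priority queue keyed on the smallest possible distance to each bounding box, using the same simplified node representation; for point or rectangle objects, the distance to a leaf entry's MBR is the distance to the object itself.

```python
import heapq

def mindist(point, rect):
    """Smallest possible distance from `point` to any point inside the rectangle (xmin, ymin, xmax, ymax)."""
    px, py = point
    dx = max(rect[0] - px, 0.0, px - rect[2])
    dy = max(rect[1] - py, 0.0, py - rect[3])
    return (dx * dx + dy * dy) ** 0.5

def nearest(root, point, k=1):
    """Best-first k-nearest-neighbour search; nodes are expanded, leaf entries are reported."""
    heap = [(0.0, 0, root)]                        # (distance, tie-breaker, node-or-object)
    counter = 1
    found = []
    while heap and len(found) < k:
        dist, _, item = heapq.heappop(heap)
        if isinstance(item, dict):                 # a tree node: expand and reinsert its children
            for mbr, child in item["entries"]:
                heapq.heappush(heap, (mindist(point, mbr), counter, child))
                counter += 1
        else:                                      # a leaf object: return it when encountered
            found.append((item, dist))
    return found

leaf1 = {"leaf": True, "entries": [((0, 0, 1, 1), "a"), ((2, 2, 3, 3), "b")]}
leaf2 = {"leaf": True, "entries": [((8, 8, 9, 9), "c")]}
root = {"leaf": False, "entries": [((0, 0, 3, 3), leaf1), ((8, 8, 9, 9), leaf2)]}

print(nearest(root, (4.0, 4.0), k=2))              # -> [('b', 1.41...), ('a', 4.24...)]
```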
Insertion.
To insert an object, the tree is traversed recursively from the root node. At each step, all rectangles in the current directory node are examined, and a candidate is chosen using a heuristic such as choosing the rectangle which requires least enlargement. The search then descends into this page, until reaching a leaf node. If the leaf node is full, it must be split before the insertion is made. Again, since an exhaustive search is too expensive, a heuristic is employed to split the node into two. Adding the newly created node to the previous level, this level can again overflow, and these overflows can propagate up to the root node; when this node also overflows, a new root node is created and the tree has increased in height.
Choosing the insertion subtree.
The algorithm needs to decide in which subtree to insert. When a data object is fully contained in a single rectangle, the choice is clear. When there are multiple options or rectangles in need of enlargement, the choice can have a significant impact on the performance of the tree.
In the classic R-tree, the objects are inserted into the subtree that needs the least enlargement. The R*-tree uses a mixed heuristic instead: at the leaf level it tries to minimize the overlap (in case of ties, preferring least enlargement and then least area), while at the higher levels it behaves similarly to the R-tree, but on ties again preferring the subtree with smaller area. The decreased overlap of the rectangles in the R*-tree is one of the key benefits over the traditional R-tree.
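A sketch of the least-enlargement choice with a smaller-area tie-break; the rectangles are illustrative.

```python
def area(r):
    return max(r[2] - r[0], 0.0) * max(r[3] - r[1], 0.0)

def union(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def enlargement(mbr, new_rect):
    """How much the area of `mbr` would grow if it had to cover `new_rect` as well."""
    return area(union(mbr, new_rect)) - area(mbr)

def choose_subtree(node, new_rect):
    """Pick the child entry needing the least enlargement; break ties by smaller current area."""
    return min(node["entries"],
               key=lambda entry: (enlargement(entry[0], new_rect), area(entry[0])))

inner = {"leaf": False, "entries": [((0, 0, 4, 4), "child A"), ((5, 5, 9, 9), "child B")]}
print(choose_subtree(inner, (4.5, 4.5, 5.0, 5.0))[1])    # -> 'child B' (needs less enlargement)
```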
Splitting an overflowing node.
Since redistributing all objects of a node into two nodes has an exponential number of options, a heuristic needs to be employed to find the best split. In the classic R-tree, Guttman proposed two such heuristics, called QuadraticSplit and LinearSplit. In quadratic split, the algorithm searches for the pair of rectangles that is the worst combination to have in the same node, and puts them as initial objects into the two new groups. It then searches for the entry which has the strongest preference for one of the groups (in terms of area increase) and assigns the object to this group until all objects are assigned (satisfying the minimum fill).
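A compact sketch of the quadratic split heuristic on four illustrative rectangles; the minimum-fill handling and tie-breaking are simplified relative to Guttman's full description.

```python
def area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def union(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def quadratic_split(rects, min_fill=2):
    """Seed two groups with the worst pair, then repeatedly assign the entry with the strongest preference."""
    # Seeds: the pair whose combined bounding box wastes the most area.
    seeds = max(((i, j) for i in range(len(rects)) for j in range(i + 1, len(rects))),
                key=lambda p: area(union(rects[p[0]], rects[p[1]])) - area(rects[p[0]]) - area(rects[p[1]]))
    groups = [[rects[seeds[0]]], [rects[seeds[1]]]]
    boxes = [rects[seeds[0]], rects[seeds[1]]]
    remaining = [r for k, r in enumerate(rects) if k not in seeds]

    while remaining:
        # Honour the minimum fill: hand all leftovers to a group that would otherwise stay underfull.
        for g in (0, 1):
            if len(groups[g]) + len(remaining) == min_fill:
                groups[g] += remaining
                return groups
        def cost(r, g):
            return area(union(boxes[g], r)) - area(boxes[g])
        # Assign the rectangle with the strongest preference (largest difference in area increase).
        r = max(remaining, key=lambda r: abs(cost(r, 0) - cost(r, 1)))
        g = 0 if cost(r, 0) <= cost(r, 1) else 1
        groups[g].append(r)
        boxes[g] = union(boxes[g], r)
        remaining.remove(r)
    return groups

print(quadratic_split([(0, 0, 1, 1), (9, 9, 10, 10), (0, 1, 1, 2), (9, 8, 10, 9)]))
```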
There are other splitting strategies such as Greene's Split, the R*-tree splitting heuristic (which again tries to minimize overlap, but also prefers quadratic pages) or the linear split algorithm proposed by Ang and Tan (which however can produce very irregular rectangles, which are less performant for many real world range and window queries). In addition to having a more advanced splitting heuristic, the R*-tree also tries to avoid splitting a node by reinserting some of the node members, which is similar to the way a B-tree balances overflowing nodes. This was shown to also reduce overlap and thus increase tree performance.
Finally, the X-tree can be seen as a R*-tree variant that can also decide to not split a node, but construct a so-called super-node containing all the extra entries, when it doesn't find a good split (in particular for high-dimensional data).
Deletion.
Deleting an entry from a page may require updating the bounding rectangles of parent pages. However, when a page is underfull, it will not be balanced with its neighbors. Instead, the page will be dissolved and all its children (which may be subtrees, not only leaf objects) will be reinserted. If during this process the root node has a single element, the tree height can decrease.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "l=\\lceil \\text{number of objects} / \\text{capacity}\\rceil"
},
{
"math_id": 2,
"text": "s=\\lceil l^{1/d}\\rceil"
},
{
"math_id": 3,
"text": "s"
}
] | https://en.wikipedia.org/wiki?curid=865249 |
865348 | Point spread function | Response in an optical imaging system
The point spread function (PSF) describes the response of a focused optical imaging system to a point source or point object. A more general term for the PSF is the system's impulse response; the PSF is the impulse response or impulse response function (IRF) of a focused optical imaging system. The PSF in many contexts can be thought of as the extended blob in an image that represents a single point object, which is treated as a spatial impulse. In functional terms, it is the spatial domain version (i.e., the inverse Fourier transform) of the optical transfer function (OTF) of an imaging system. It is a useful concept in Fourier optics, astronomical imaging, medical imaging, electron microscopy and other imaging techniques such as 3D microscopy (like in confocal laser scanning microscopy) and fluorescence microscopy.
The degree of spreading (blurring) in the image of a point object for an imaging system is a measure of the quality of the imaging system. In non-coherent imaging systems, such as fluorescent microscopes, telescopes or optical microscopes, the image formation process is linear in the image intensity and described by a linear system theory. This means that when two objects A and B are imaged simultaneously by a non-coherent imaging system, the resulting image is equal to the sum of the independently imaged objects. In other words: the imaging of A is unaffected by the imaging of B and "vice versa", owing to the non-interacting property of photons. In space-invariant systems, i.e. those in which the PSF is the same everywhere in the imaging space, the image of a complex object is then the convolution of that object and the PSF. The PSF can be derived from diffraction integrals.
Introduction.
By virtue of the linearity property of optical "non-coherent" imaging systems, i.e.,
"Image"("Object"1 + "Object"2) = "Image"("Object"1) + "Image"("Object"2)
the image of an object in a microscope or telescope as a non-coherent imaging system can be computed by expressing the object-plane field as a weighted sum of 2D impulse functions, and then expressing the image plane field as a weighted sum of the "images" of these impulse functions. This is known as the "superposition principle", valid for linear systems. The images of the individual object-plane impulse functions are called point spread functions (PSF), reflecting the fact that a mathematical "point" of light in the object plane is "spread" out to form a finite area in the image plane. (In some branches of mathematics and physics, these might be referred to as Green's functions or impulse response functions. PSFs are considered impulse response functions for imaging systems.)
When the object is divided into discrete point objects of varying intensity, the image is computed as a sum of the PSF of each point. As the PSF is typically determined entirely by the imaging system (that is, microscope or telescope), the entire image can be described by knowing the optical properties of the system. This imaging process is usually formulated by a convolution equation. In microscope image processing and astronomy, knowing the PSF of the measuring device is very important for restoring the (original) object with deconvolution. For the case of laser beams, the PSF can be mathematically modeled using the concepts of Gaussian beams. For instance, deconvolution of the mathematically modeled PSF and the image improves visibility of features and removes imaging noise.
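As a minimal illustration of this convolution picture, the sketch below blurs a handful of point sources with a Gaussian kernel that merely stands in for whatever PSF the system actually has (an Airy pattern, a measured bead image, and so on); all sizes and intensities are arbitrary.

```python
import numpy as np
from scipy.signal import fftconvolve

# Object plane: a few ideal point sources of different intensities on a dark background.
obj = np.zeros((101, 101))
obj[30, 30], obj[50, 70], obj[80, 40] = 1.0, 0.6, 0.3

# A simple Gaussian kernel stands in for the system's true PSF.
yy, xx = np.mgrid[-15:16, -15:16]
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 3.0**2))
psf /= psf.sum()                                   # normalise so total energy is preserved

# Image formation in a space-invariant, non-coherent system: image = object convolved with PSF.
image = fftconvolve(obj, psf, mode="same")
print(image.shape, image.sum())                    # same size as the object, total intensity ~1.9
```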
Theory.
The point spread function may be independent of position in the object plane, in which case it is called "shift invariant". In addition, if there is no distortion in the system, the image plane coordinates are linearly related to the object plane coordinates via the magnification "M" as:
formula_0.
If the imaging system produces an inverted image, we may simply regard the image plane coordinate axes as being reversed from the object plane axes. With these two assumptions, i.e., that the PSF is shift-invariant "and" that there is no distortion, calculating the image plane convolution integral is a straightforward process.
Mathematically, we may represent the object plane field as:
formula_1
i.e., as a sum over weighted impulse functions, although this is also really just stating the shifting property of 2D delta functions (discussed further below). Rewriting the object transmittance function in the form above allows us to calculate the image plane field as the superposition of the images of each of the individual impulse functions, i.e., as a superposition over weighted point spread functions in the image plane using the "same" weighting function as in the object plane, i.e., formula_2. Mathematically, the image is expressed as:
formula_3
in which formula_4 is the image of the impulse function formula_5.
The 2D impulse function may be regarded as the limit (as side dimension "w" tends to zero) of the "square post" function, shown in the figure below.
We imagine the object plane as being decomposed into square areas such as this, with each having its own associated square post function. If the height, "h", of the post is maintained at 1/"w"², then as the side dimension "w" tends to zero, the height, "h", tends to infinity in such a way that the volume (integral) remains constant at 1. This gives the 2D impulse the shifting property (which is implied in the equation above), which says that when the 2D impulse function, δ("x" − "u","y" − "v"), is integrated against any other continuous function, "f"("u","v"), it "sifts out" the value of "f" at the location of the impulse, i.e., at the point ("x","y").
The concept of a perfect point source object is central to the idea of PSF. However, there is no such thing in nature as a perfect mathematical point source radiator; the concept is completely non-physical and is rather a mathematical construct used to model and understand optical imaging systems. The utility of the point source concept comes from the fact that a point source in the 2D object plane can only radiate a perfect uniform-amplitude, spherical wave — a wave having perfectly spherical, outward travelling phase fronts with uniform intensity everywhere on the spheres (see Huygens–Fresnel principle). Such a source of uniform spherical waves is shown in the figure below. We also note that a perfect point source radiator will not only radiate a uniform spectrum of propagating plane waves, but a uniform spectrum of exponentially decaying (evanescent) waves as well, and it is these which are responsible for resolution finer than one wavelength (see Fourier optics). This follows from the following Fourier transform expression for a 2D impulse function,
formula_6
The quadratic lens intercepts a "portion" of this spherical wave, and refocuses it onto a blurred point in the image plane. For a single lens, an on-axis point source in the object plane produces an Airy disc PSF in the image plane. It can be shown (see Fourier optics, Huygens–Fresnel principle, Fraunhofer diffraction) that the field radiated by a planar object (or, by reciprocity, the field converging onto a planar image) is related to its corresponding source (or image) plane distribution via a Fourier transform (FT) relation. In addition, a uniform function over a circular area (in one FT domain) corresponds to "J"1("x")/"x" in the other FT domain, where "J"1("x") is the first-order Bessel function of the first kind. That is, a uniformly-illuminated circular aperture that passes a converging uniform spherical wave yields an Airy disk image at the focal plane. A graph of a sample Airy disk is shown in the adjoining figure.
Therefore, the converging ("partial") spherical wave shown in the figure above produces an Airy disc in the image plane. The argument of the function "J"1("x")/"x" is important, because this determines the "scaling" of the Airy disc (in other words, how big the disc is in the image plane). If Θmax is the maximum angle that the converging waves make with the lens axis, "r" is radial distance in the image plane, and wavenumber "k" = 2π/λ where λ = wavelength, then the argument of the function is: kr tan(Θmax). If Θmax is small (only a small portion of the converging spherical wave is available to form the image), then radial distance, r, has to be very large before the total argument of the function moves away from the central spot. In other words, if Θmax is small, the Airy disc is large (which is just another statement of Heisenberg's uncertainty principle for Fourier Transform pairs, namely that small extent in one domain corresponds to wide extent in the other domain, and the two are related via the "space-bandwidth product"). By virtue of this, high magnification systems, which typically have small values of Θmax (by the Abbe sine condition), can have more blur in the image, owing to the broader PSF. The size of the PSF is proportional to the magnification, so that the blur is no worse in a relative sense, but it is definitely worse in an absolute sense.
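The scaling of the Airy disc can be made concrete numerically; the wavelength and the half-angle Θmax below are arbitrary illustrative values, and the radius of the first dark ring follows from the first zero of "J"1 at approximately 3.8317.

```python
import numpy as np
from scipy.special import j1

wavelength = 0.5e-6                          # 500 nm (illustrative)
theta_max = 0.1                              # half-angle of the converging cone, radians (illustrative)
k = 2.0 * np.pi / wavelength

r = np.linspace(1e-9, 20e-6, 2000)           # radial distance in the image plane
x = k * r * np.tan(theta_max)                # the argument k r tan(theta_max) used in the text
intensity = (2.0 * j1(x) / x) ** 2           # normalised Airy intensity
print(intensity[0])                          # ~1 at the centre of the pattern

# Radius of the first dark ring: the first zero of J1 is at x ~= 3.8317.
first_zero_r = 3.8317 / (k * np.tan(theta_max))
print(first_zero_r)                          # ~3.0e-6 m here; a smaller theta_max gives a larger Airy disc
```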
The figure above illustrates the truncation of the incident spherical wave by the lens. In order to measure the point spread function — or impulse response function — of the lens, a perfect point source that radiates a perfect spherical wave in all directions of space is not needed. This is because the lens has only a finite (angular) bandwidth, or finite intercept angle. Therefore, any angular bandwidth contained in the source, which extends past the edge angle of the lens (i.e., lies outside the bandwidth of the system), is essentially wasted source bandwidth because the lens can't intercept it in order to process it. As a result, a perfect point source is not required in order to measure a perfect point spread function. All we need is a light source which has at least as much angular bandwidth as the lens being tested (and of course, is uniform over that angular sector). In other words, we only require a point source which is produced by a convergent (uniform) spherical wave whose half angle is greater than the edge angle of the lens.
Due to intrinsic limited resolution of the imaging systems, measured PSFs are not free of uncertainty. In imaging, it is desired to suppress the side-lobes of the imaging beam by apodization techniques. In the case of transmission imaging systems with Gaussian beam distribution, the PSF is modeled by the following equation:
formula_7
where "k-factor" depends on the truncation ratio and level of the irradiance, "NA" is numerical aperture, "c" is the speed of light, "f" is the photon frequency of the imaging beam, "Ir" is the intensity of reference beam, "a" is an adjustment factor and formula_8 is the radial position from the center of the beam on the corresponding "z-plane".
History and methods.
The diffraction theory of point spread functions was first studied by Airy in the nineteenth century. He developed an expression for the point spread function amplitude and intensity of a perfect instrument, free of aberrations (the so-called Airy disc). The theory of aberrated point spread functions close to the optimum focal plane was studied by Zernike and Nijboer in the 1930–40s. A central role in their analysis is played by Zernike's circle polynomials that allow an efficient representation of the aberrations of any optical system with rotational symmetry. Recent analytic results have made it possible to extend Nijboer and Zernike's approach for point spread function evaluation to a large volume around the optimum focal point. This extended Nijboer-Zernike (ENZ) theory allows studying the imperfect imaging of three-dimensional objects in confocal microscopy or astronomy under non-ideal imaging conditions. The ENZ-theory has also been applied to the characterization of optical instruments with respect to their aberration by measuring the through-focus intensity distribution and solving an appropriate inverse problem.
Applications.
Microscopy.
In microscopy, experimental determination of PSF requires sub-resolution (point-like) radiating sources. Quantum dots and fluorescent beads are usually considered for this purpose.
Theoretical models as described above, on the other hand, allow the detailed calculation of the PSF for various imaging conditions. The most compact diffraction limited shape of the PSF is usually preferred. However, by using appropriate optical elements (e.g., a spatial light modulator) the shape of the PSF can be engineered towards different applications.
Astronomy.
In observational astronomy, the experimental determination of a PSF is often very straightforward due to the ample supply of point sources (stars or quasars). The form and source of the PSF may vary widely depending on the instrument and the context in which it is used.
For radio telescopes and diffraction-limited space telescopes, the dominant terms in the PSF may be inferred from the configuration of the aperture in the Fourier domain. In practice, there may be multiple terms contributed by the various components in a complex optical system. A complete description of the PSF will also include diffusion of light (or photo-electrons) in the detector, as well as tracking errors in the spacecraft or telescope.
For ground-based optical telescopes, atmospheric turbulence (known as astronomical seeing) dominates the contribution to the PSF. In high-resolution ground-based imaging, the PSF is often found to vary with position in the image (an effect called anisoplanatism). In ground-based adaptive optics systems, the PSF is a combination of the aperture of the system with residual uncorrected atmospheric terms.
Lithography.
The PSF is also a fundamental limit to the conventional focused imaging of a hole, with the minimum printed size being in the range of 0.6-0.7 wavelength/NA, with NA being the numerical aperture of the imaging system. For example, in the case of an EUV system with wavelength of 13.5 nm and NA=0.33, the minimum individual hole size that can be imaged is in the range of 25-29 nm. A phase-shift mask has 180-degree phase edges which allow finer resolution.
Ophthalmology.
Point spread functions have recently become a useful diagnostic tool in clinical ophthalmology. Patients are measured with a Shack-Hartmann wavefront sensor, and special software calculates the PSF for that patient's eye. This method allows a physician to simulate potential treatments on a patient, and estimate how those treatments would alter the patient's PSF. Additionally, once measured the PSF can be minimized using an adaptive optics system. This, in conjunction with a CCD camera and an adaptive optics system, can be used to visualize anatomical structures not otherwise visible "in vivo", such as cone photoreceptors.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(x_i, y_i) = (M x_o, M y_o)"
},
{
"math_id": 1,
"text": " O(x_o,y_o) = \\iint O(u,v) ~ \\delta(x_o-u,y_o-v) ~ du\\, dv"
},
{
"math_id": 2,
"text": "O(x_o,y_o)"
},
{
"math_id": 3,
"text": "I(x_i,y_i) = \\iint O(u,v) ~ \\mathrm{PSF}(x_i/M-u , y_i/M-v) \\, du\\, dv"
},
{
"math_id": 4,
"text": "\\mbox{PSF}(x_i/M-u,y_i/M-v)"
},
{
"math_id": 5,
"text": " \\delta(x_o-u,y_o-v)"
},
{
"math_id": 6,
"text": "\\delta (x,y) \\propto \\iint e^{j(k_x x + k_y y)} \\, d k_x\\, d k_y"
},
{
"math_id": 7,
"text": "\\mathrm{PSF}(f, z) = I_r(0,z,f)\\exp\\left[-z\\alpha(f)-\\dfrac{2\\rho^2}{0.36{\\frac{cka}{\\text{NA}f}}\\sqrt{{1+\\left ( \\frac{2\\ln 2}{c\\pi}\\left ( \\frac{\\text{NA}}{0.56k} \\right )^2 fz\\right )}^2}}\\right],"
},
{
"math_id": 8,
"text": "\\rho"
}
] | https://en.wikipedia.org/wiki?curid=865348 |
865479 | Dutch book theorems | Thought experiment, to justify Bayesian probability
In decision theory, economics, and probability theory, the Dutch book arguments are a set of results showing that agents must satisfy the axioms of rational choice to avoid a kind of self-contradiction called a Dutch book. A Dutch book or money pump is a set of bets that ensures a guaranteed loss, i.e. the gambler will lose money no matter what happens. A set of beliefs and preferences is called coherent if it cannot result in a Dutch book.
The Dutch book arguments are used to explore degrees of certainty in beliefs, and demonstrate that rational agents must be Bayesian; in other words, rationality requires assigning probabilities to events that behave according to the axioms of probability, and having preferences that can be modeled using the von Neumann–Morgenstern axioms.
In economics, the Dutch book argument is used to model behavior by ruling out situations where agents "burn money" for no real reward; models based on these assumptions are called rational choice models. These assumptions are weakened in behavioral models of decision-making.
The thought experiment was first proposed by the Italian probabilist Bruno de Finetti in order to justify Bayesian probability, and was more thoroughly explored by Leonard Savage, who developed them into a full model of rational choice.
Operational subjective probabilities as wagering odds.
One must set the price of a promise to pay $1 if John Smith wins tomorrow's election, and $0 otherwise. One knows that one's opponent will be able to choose either to buy such a promise from one at the price one has set, or require one to buy such a promise from them, still at the same price. In other words: Player A sets the odds, but Player B decides which side of the bet to take. The price one sets is the "operational subjective probability" that one assigns to the proposition on which one is betting.
If one decides that John Smith is 12.5% likely to win—an arbitrary valuation—one might then set odds of 7:1 against. This arbitrary valuation — the "operational subjective probability" — determines the payoff to a successful wager. $1 wagered at these odds will produce either a loss of $1 (if Smith loses) or a win of $7 (if Smith wins). If the $1 is placed in pledge as a condition of the bet, then the $1 will also be returned to the bettor, should the bettor win the bet.
The arguments.
The standard Dutch book argument concludes that rational agents must have subjective probabilities for random events, and that these probabilities must satisfy the standard axioms of probability. In other words, any rational person must be willing to assign a (quantitative) subjective probability to different events.
Note that the argument does not imply agents are willing to engage in gambling in the traditional sense. The word "bet" as used here refers to any kind of decision under uncertainty. For example, buying an unfamiliar good at a supermarket is a kind of "bet" (the buyer "bets" that the product is good), as is getting into a car ("betting" that the driver will not be involved in an accident).
Establishing willingness to bet.
The Dutch book argument can be reversed by considering the perspective of the bookmaker. In this case, the Dutch book arguments show that any rational agent must be willing to accept some kinds of risks, i.e. to make uncertain bets, or else they will sometimes refuse "free gifts" or "Czech books", a series of bets leaving them better-off with 100% certainty.
Unitarity.
In one example, a bookmaker has offered odds on a four-horse race and attracted one bet on each horse, the relative sizes of which make the result irrelevant to the bookmaker's profit: even money on horse 1 with $100 staked (implied probability 0.5), 3 to 1 against horse 2 with $50 staked (0.25), 4 to 1 against horse 3 with $40 staked (0.2), and 9 to 1 against horse 4 with $20 staked (0.1). The implied probabilities, i.e. the probability of each horse winning, add up to 1.05, a number greater than 1, violating the axiom of unitarity.
Whichever horse wins in this example, the bookmaker will pay out $200 (including returning the winning stake)—but the punter has bet $210, hence making a loss of $10 on the race.
However, if horse 4 was withdrawn and the bookmaker does not adjust the other odds, the implied probabilities would add up to 0.95. In such a case, a gambler could always reap a profit of $10 by betting $100, $50 and $40 on the remaining three horses, respectively, and not having to stake $20 on the withdrawn horse, which now cannot win.
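The arithmetic of this example can be checked directly from the stakes and payouts quoted above; the representation of the odds as (profit, stake) pairs is just a convenience of this sketch.

```python
from fractions import Fraction

# Odds against each horse and the stake attracted on it, consistent with the figures above:
# every winning ticket pays back $200 in total, while the stakes sum to $210.
horses = {                 # horse number: (odds against as (profit, stake), amount wagered)
    1: ((1, 1), 100),      # even money
    2: ((3, 1), 50),
    3: ((4, 1), 40),
    4: ((9, 1), 20),
}

implied = {h: Fraction(s, p + s) for h, ((p, s), _) in horses.items()}
print(sum(implied.values()))                       # 21/20 = 1.05 > 1: unitarity is violated

total_staked = sum(bet for _, bet in horses.values())
for h, ((p, s), bet) in horses.items():
    payout = bet * (p + s) // s                    # winnings plus the returned stake = $200
    print(h, payout - total_staked)                # the bettor's net is -10 whichever horse wins

# If horse 4 is withdrawn and the bookmaker leaves the other odds unchanged:
print(sum(Fraction(s, p + s) for (p, s), _ in list(horses.values())[:3]))   # 19/20 = 0.95 < 1
```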
Other axioms.
Other forms of Dutch books can be used to establish the other axioms of probability, sometimes involving more complex bets like forecasting the order in which horses will finish. In Bayesian probability, Frank P. Ramsey and Bruno de Finetti required personal degrees of belief to be coherent so that a Dutch book could not be made against them, whichever way bets were made. Necessary and sufficient conditions for this are that their degrees of belief satisfy all the axioms of probability.
Dutch books.
A person who has set prices on an array of wagers, in such a way that he or she will make a net gain regardless of the outcome, is said to have made a "Dutch book". When one has a Dutch book, one's opponent always loses. A person who sets prices in a way that gives his or her opponent a Dutch book is not behaving rationally.
A very trivial Dutch book.
The rules do not forbid a set price higher than $1, but a prudent opponent may sell one a high-priced ticket, such that the opponent comes out ahead regardless of the outcome of the event on which the bet is made. The rules also do not forbid a negative price, but an opponent may extract a paid promise from the bettor to pay him or her later should a certain contingency arise. In either case, the price-setter loses. These lose-lose situations parallel the fact that a probability can neither exceed 1 (certainty) nor be less than 0 (no chance of winning).
A more instructive Dutch book.
Now suppose one sets the price of a promise to pay $1 if the Boston Red Sox win next year's World Series, and also the price of a promise to pay $1 if the New York Yankees win, and finally the price of a promise to pay $1 if "either" the Red Sox or the Yankees win. One may set the prices in such a way that
formula_0
But if one sets the price of the third ticket lower than the sum of the first two tickets, a prudent opponent will buy that ticket and sell the other two tickets to the price-setter. By considering the three possible outcomes (Red Sox, Yankees, some other team), one will note that regardless of which of the three outcomes eventuates, one will lose. An analogous fate awaits if one set the price of the third ticket higher than the sum of the other two prices. This parallels the fact that probabilities of mutually exclusive events are additive (see probability axioms).
Conditional wagers and conditional probabilities.
Now imagine a more complicated scenario. One must set the prices of three promises:
Three outcomes are possible: The game is cancelled; the game is played and the Red Sox lose; the game is played and the Red Sox win. One may set the prices in such a way that
formula_1
(where the second price above is that of the bet that includes the refund in case of cancellation). (Note: The prices here are the dimensionless numbers obtained by dividing by $1, which is the payout in all three cases.) A prudent opponent writes three linear inequalities in three variables. The variables are the amounts they will invest in each of the three promises; the value of one of these is negative if they will make the price-setter buy that promise and positive if they will buy it. Each inequality corresponds to one of the three possible outcomes. Each inequality states that your opponent's net gain is more than zero. A solution exists if the determinant of the matrix is not zero. That determinant is:
formula_2
Thus a prudent opponent can make the price setter a sure loser unless one sets one's prices in a way that parallels the simplest conventional characterization of conditional probability.
Another example.
In the 2015 running of the Kentucky Derby, the favorite ("American Pharaoh") was set ante-post at 5:2, the second favorite at 3:1, and the third favorite at 8:1. All other horses had odds against of 12:1 or higher. With these odds, a wager of $10 on each of all 18 starters would result in a net loss if either the favorite or the second favorite were to win.
However, if one assumes that no horse quoted 12:1 or higher will win, and one bets $10 on each of the top three, one is guaranteed at least a small win. The favorite (who did win) would result in a payout of $25, plus the returned $10 wager, giving an ending balance of $35 (a $5 net increase). A win by the second favorite would produce a payoff of $30 plus the original $10 wager, for a net $10 increase. A win by the third favorite gives $80 plus the original $10, for a net increase of $60.
This sort of strategy, so far as it concerns just the top three, forms a Dutch Book. However, if one considers all eighteen contenders, then no Dutch Book exists for this race.
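The same bookkeeping for this race, with $10 staked at each of the three quoted prices:

```python
# $10 on each of the top three horses at the quoted ante-post odds, given as (profit, stake).
odds_against = {"favorite": (5, 2), "second favorite": (3, 1), "third favorite": (8, 1)}
stake = 10
total_staked = stake * len(odds_against)           # $30 in total

for horse, (p, s) in odds_against.items():
    ending_balance = stake * p / s + stake         # winnings plus the returned stake
    print(horse, ending_balance - total_staked)    # net gain: +5, +10 and +60 respectively
```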
Economics.
In economics, the classic example of a situation in which a consumer X can be Dutch-booked is if they have intransitive preferences. Classical economic theory assumes that preferences are transitive: if someone thinks A is better than B and B is better than C, then they must think A is better than C. Moreover, there cannot be any "cycles" of preferences.
The money pump argument notes that if someone held a set of intransitive preferences, they could be exploited (pumped) for money until being forced to leave the market. Imagine Jane has twenty dollars to buy fruit. She can fill her basket with either oranges or apples. Jane would prefer to have a dollar rather than an apple, an apple rather than an orange, and an orange rather than a dollar. Because Jane would rather have an orange than a dollar, she is willing to buy an orange for just over a dollar (perhaps $1.10). Then, she trades her orange for an apple, because she would rather have an apple rather than an orange. Finally, she sells her apple for a dollar, because she would rather have a dollar than an apple. At this point, Jane is left with $19.90, and has lost 10¢ and gained nothing in return. This process can be repeated until Jane is left with no money. (Note that, if Jane truly holds these preferences, she would see nothing wrong with this process, and would not try to stop this process; at every step, Jane agrees she has been left better off.) After running out of money, Jane leaves the market, and her preferences and actions cease to be economically relevant.
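A sketch of the pump as plain arithmetic, tracking Jane's cash in cents so that no rounding is involved; the loop stops once she can no longer afford an orange.

```python
# One trip around Jane's intransitive preference cycle, repeated until her money runs out.
money_cents = 2000                  # Jane starts with $20.00
trades = 0
while money_cents >= 110:
    money_cents -= 110              # buy an orange for $1.10 (she prefers an orange to a dollar)
    # trade the orange for an apple (she prefers an apple to an orange)
    money_cents += 100              # sell the apple for $1.00 (she prefers a dollar to an apple)
    trades += 1

print(trades, money_cents)          # 190 trips later Jane has $1.00 left and must leave the market
```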
Experiments in behavioral economics show that subjects can violate the requirement for transitive preferences when comparing bets. However, most subjects do not make these choices in within-subject comparisons where the contradiction would be obviously visible (in other words, the subjects do not hold genuinely intransitive preferences, but instead make mistakes when making choices using heuristics).
Economists usually argue that people with preferences like X's will have all their wealth taken from them in the market. If this is the case, we won't observe preferences with intransitivities or other features that allow people to be Dutch-booked. However, if people are somewhat sophisticated about their intransitivities and/or if competition by arbitrageurs drives epsilon to zero, non-"standard" preferences may still be observable.
Coherence.
It can be shown that the set of prices is coherent when they satisfy the probability axioms and related results such as the inclusion–exclusion principle.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Refbegin/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\text{Price}(\\text{Red Sox})+\\text{Price}(\\text{Yankees})\\neq\\text{Price}(\\text{Red Sox or Yankees}) \\, "
},
{
"math_id": 1,
"text": "\\text{Price}(\\text{complete game})\\times\\text{Price}(\\text{Red Sox win}\\mid\\text{complete game}) \\neq \\text{Price}(\\text{Red Sox win and complete game})"
},
{
"math_id": 2,
"text": "\\text{Price}(\\text{complete game})\\times\\text{Price}(\\text{Red Sox win}\\mid\\text{complete game})-\\text{Price}(\\text{Red Sox win and complete game})."
}
] | https://en.wikipedia.org/wiki?curid=865479 |
865686 | Internal set theory | System of mathematical set theory
Internal set theory (IST) is a mathematical theory of sets developed by Edward Nelson that provides an axiomatic basis for a portion of the nonstandard analysis introduced by Abraham Robinson. Instead of adding new elements to the real numbers, Nelson's approach modifies the axiomatic foundations through syntactic enrichment. Thus, the axioms introduce a new term, "standard", which can be used to make discriminations not possible under the conventional ZFC axioms for sets. Thus, IST is an enrichment of ZFC: all axioms of ZFC are satisfied for all classical predicates, while the new unary predicate "standard" satisfies three additional axioms I, S, and T. In particular, suitable nonstandard elements within the set of real numbers can be shown to have properties that correspond to the properties of infinitesimal and unlimited elements.
Nelson's formulation is made more accessible for the lay-mathematician by leaving out many of the complexities of meta-mathematical logic that were initially required to justify rigorously the consistency of number systems containing infinitesimal elements.
Intuitive justification.
Whilst IST has a perfectly formal axiomatic scheme, described below, an intuitive justification of the meaning of the term "standard" is desirable. This is not part of the formal theory, but is a pedagogical device that might help the student interpret the formalism. The essential distinction, similar to the concept of definable numbers, contrasts the finiteness of the domain of concepts that we can specify and discuss, with the unbounded infinity of the set of numbers; compare finitism.
The term "standard" is therefore intuitively taken to correspond to some necessarily finite portion of "accessible" whole numbers. The argument can be applied to any infinite set of objects whatsoever – there are only so many elements that one can specify in finite time using a finite set of symbols and there are always those that lie beyond the limits of our patience and endurance, no matter how we persevere. We must admit to a profusion of "nonstandard" elements—too large or too anonymous to grasp—within any infinite set.
Principles of the "standard" predicate.
The following principles follow from the above intuitive motivation and so should be deducible from the formal axioms. For the moment we take the domain of discussion as being the familiar set of whole numbers.
Formal axioms for IST.
IST is an axiomatic theory in the first-order logic with equality in a language containing a binary predicate symbol ∈ and a unary predicate symbol st("x"). Formulas not involving st (i.e., formulas of the usual language of set theory) are called internal, other formulas are called external. We use the abbreviations
formula_0
IST includes all axioms of the Zermelo–Fraenkel set theory with the axiom of choice (ZFC). Note that the ZFC schemata of separation and replacement are "not" extended to the new language, they can only be used with internal formulas. Moreover, IST includes three new axiom schemata – conveniently one for each initial in its name: Idealisation, Standardisation, and Transfer.
"I": Idealisation.
For each internal formula formula_1, the universal closure of formula_2 is an axiom. The statement of this axiom comprises two implications. The right-to-left implication can be reformulated by the simple statement that elements of standard finite sets are standard. The more important left-to-right implication expresses that the collection of all standard sets is contained in a finite (nonstandard) set, and moreover, this finite set can be taken to satisfy any given internal property shared by all standard finite sets.
This very general axiom scheme upholds the existence of "ideal" elements in appropriate circumstances. Three particular applications demonstrate important consequences.
Applied to the relation ≠.
If "S" is standard and finite, we take for the relation "R"("g", "f"): "g" and "f" are not equal and "g" is in "S". Since ""For every standard finite set F there is an element g in S such that g ≠ f for all f in F"" is false (no such "g" exists when "F"
"S"), we may use Idealisation to tell us that ""There is a G in S such that G ≠ f for all standard f" is also false, i.e. all the elements of "S" are standard.
If "S" is infinite, then we take for the relation "R"("g", "f"): "g" and "f" are not equal and "g" is in "S". Since "For every standard finite set F there is an element g in S such that g ≠ f for all f in F" (the infinite set "S" is not a subset of the finite set "F"), we may use Idealisation to derive "There is a G in S such that G ≠ f for all standard f"." In other words, every infinite set contains a nonstandard element (many, in fact).
The power set of a standard finite set is standard (by Transfer) and finite, so all the subsets of a standard finite set are standard.
If "S" is nonstandard, we take for the relation "R"("g", "f"): "g" and "f" are not equal and "g" is in "S". Since ""For every standard finite set F there is an element g in S such that g ≠ f for all f in F" (the nonstandard set "S" is not a subset of the standard and finite set "F"), we may use Idealisation to derive "There is a G in S such that G ≠ f for all standard f." In other words, every nonstandard set contains a nonstandard element.
As a consequence of all these results, all the elements of a set "S" are standard if and only if "S" is standard and finite.
Applied to the relation <.
Since "For every standard, finite set of natural numbers F there is a natural number g such that g > f for all f in F"" – say, "g"
maximum("F") + 1 – we may use Idealisation to derive ""There is a natural number G such that G > f for all standard natural numbers f"." In other words, there exists a natural number greater than each standard natural number.
Applied to the relation ∈.
More precisely we take for "R"("g", "f"): "g" is a finite set containing element "f". Since "For every standard, finite set F, there is a finite set g such that f ∈ g for all f in F" – say by choosing "g" = "F" itself – we may use Idealisation to derive "There is a finite set G such that f ∈ G for all standard f". For any set "S", the intersection of "S" with the set "G" is a finite subset of "S" that contains every standard element of "S". "G" is necessarily nonstandard.
"S": Standardisation.
If formula_1 is a formula (internal or external) with free variables among "t", "u"1, ..., "u""n", then
formula_3
is an axiom.
"T": Transfer.
If formula_4 is an internal formula with no other free variables than those indicated, then
formula_5
is an axiom.
Formal justification for the axioms.
Aside from the intuitive motivations suggested above, it is necessary to justify that additional IST axioms do not lead to errors or inconsistencies in reasoning. Mistakes and philosophical weaknesses in reasoning about infinitesimal numbers in the work of Gottfried Leibniz, Johann Bernoulli, Leonhard Euler, Augustin-Louis Cauchy, and others were the reason that they were originally abandoned for the more cumbersome real number-based arguments developed by Georg Cantor, Richard Dedekind, and Karl Weierstrass, which were perceived as being more rigorous by Weierstrass's followers.
The approach for internal set theory is the same as that for any new axiomatic system—we construct a model for the new axioms using the elements of a simpler, more trusted, axiom scheme. This is quite similar to justifying the consistency of the axioms of elliptic non-Euclidean geometry by noting they can be modeled by an appropriate interpretation of great circles on a sphere in ordinary 3-space.
In fact via a suitable model a proof can be given of the relative consistency of IST as compared with ZFC: if ZFC is consistent, then IST is consistent. In fact, a stronger statement can be made: IST is a conservative extension of ZFC: any internal formula that can be proven within internal set theory can be proven in the Zermelo–Fraenkel axioms with the axiom of choice alone.
Related theories.
Related theories were developed by Karel Hrbacek and others.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\\exists^\\mathrm{st}x\\,\\phi(x)&=\\exists x\\,(\\operatorname{st}(x)\\land\\phi(x)),\\\\\n\\forall^\\mathrm{st}x\\,\\phi(x)&=\\forall x\\,(\\operatorname{st}(x)\\to\\phi(x)).\\end{align}"
},
{
"math_id": 1,
"text": "\\phi"
},
{
"math_id": 2,
"text": "\\forall^\\mathrm{st}z\\,(z\\text{ is finite}\\to\\exists y\\,\\forall x\\in z\\,\\phi(x,y,u_1,\\dots,u_n))\\leftrightarrow\\exists y\\,\\forall^\\mathrm{st}x\\,\\phi(x,y,u_1,\\dots,u_n)."
},
{
"math_id": 3,
"text": "\\forall^\\mathrm{st}x\\,\\exists^\\mathrm{st}y\\,\\forall^\\mathrm{st}t\\,(t\\in y\\leftrightarrow(t\\in x\\land\\phi(t,u_1,\\dots,u_n)))"
},
{
"math_id": 4,
"text": "\\phi(x,u_1,\\dots,u_n)"
},
{
"math_id": 5,
"text": "\\forall^\\mathrm{st}u_1\\dots\\forall^\\mathrm{st}u_n\\,(\\forall^\\mathrm{st}x\\,\\phi(x,u_1,\\dots,u_n)\\to\\forall x\\,\\phi(x,u_1,\\dots,u_n))"
}
] | https://en.wikipedia.org/wiki?curid=865686 |
865764 | Fisher transformation | Statistical transformation
In statistics, the Fisher transformation (or Fisher "z"-transformation) of a Pearson correlation coefficient is its inverse hyperbolic tangent (artanh).
When the sample correlation coefficient "r" is near 1 or -1, its distribution is highly skewed, which makes it difficult to estimate confidence intervals and apply tests of significance for the population correlation coefficient ρ.
The Fisher transformation solves this problem by yielding a variable that is approximately normally distributed, with a variance that is stable over different values of "r".
Definition.
Given a set of "N" bivariate sample pairs ("X""i", "Y""i"), "i" = 1, ..., "N", the sample correlation coefficient "r" is given by
formula_0
Here formula_1 stands for the covariance between the variables formula_2 and formula_3 and formula_4 stands for the standard deviation of the respective variable. Fisher's z-transformation of "r" is defined as
formula_5
where "ln" is the natural logarithm function and "artanh" is the inverse hyperbolic tangent function.
If ("X", "Y") has a bivariate normal distribution with correlation ρ and the pairs ("X""i", "Y""i") are independent and identically distributed, then "z" is approximately normally distributed with mean
formula_6
and a standard deviation which does not depend on the value of the correlation ρ (i.e., it is a variance-stabilizing transformation)
formula_7
where "N" is the sample size, and ρ is the true correlation coefficient.
This transformation, and its inverse
formula_8
can be used to construct a large-sample confidence interval for "r" using standard normal theory and derivations. See also application to partial correlation.
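A minimal Python sketch of this confidence-interval construction follows; the sample correlation "r" = 0.65, the sample size "N" = 34 and the 95% level are arbitrary illustrative values, and 1.96 is the usual standard normal quantile.

    import numpy as np

    # Approximate 95% confidence interval for the population correlation,
    # built in z-space and mapped back with tanh (illustrative numbers).
    r, N = 0.65, 34
    z = np.arctanh(r)                 # Fisher z of the sample correlation
    se = 1 / np.sqrt(N - 3)           # approximate standard error of z
    half = 1.96 * se                  # 1.96 is the 97.5th percentile of the standard normal
    lo, hi = np.tanh(z - half), np.tanh(z + half)
    print(lo, hi)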
Derivation.
Hotelling gives a concise derivation of the Fisher transformation.
To derive the Fisher transformation, one starts by considering an arbitrary increasing, twice-differentiable function of formula_9, say formula_11. Finding the first term in the large-formula_10 expansion of the corresponding skewness formula_12 results in
formula_13
Setting formula_14 and solving the corresponding differential equation for formula_15 yields the inverse hyperbolic tangent formula_16 function.
Similarly expanding the mean "m" and variance "v" of formula_17, one gets
m = formula_18
and
v = formula_19
respectively.
The extra terms are not part of the usual Fisher transformation. For large values of formula_20 and small values of formula_10 they represent a large improvement of accuracy at minimal cost, although they greatly complicate the computation of the inverse – a closed-form expression is not available. The near-constant variance of the transformation is the result of removing its skewness – the actual improvement is achieved by the latter, not by the extra terms. Including the extra terms, i.e., computing ("z" − "m")/√"v", yields:
formula_21
which has, to an excellent approximation, a standard normal distribution.
Application.
The application of Fisher's transformation can be enhanced using a software calculator as shown in the figure. Assuming that the r-squared value found is 0.80, that there are 30 data pairs, and accepting a 90% confidence interval, the r-squared value in another random sample from the same population may range from 0.588 to 0.921. When r-squared is outside this range, the population is considered to be different.
Discussion.
The Fisher transformation is an approximate variance-stabilizing transformation for "r" when "X" and "Y" follow a bivariate normal distribution. This means that the variance of "z" is approximately constant for all values of the population correlation coefficient "ρ". Without the Fisher transformation, the variance of "r" grows smaller as |"ρ"| gets closer to 1. Since the Fisher transformation is approximately the identity function when |"r"| < 1/2, it is sometimes useful to remember that the variance of "r" is well approximated by 1/"N" as long as |"ρ"| is not too large and "N" is not too small. This is related to the fact that the asymptotic variance of "r" is 1 for bivariate normal data.
The behavior of this transform has been extensively studied since Fisher introduced it in 1915. Fisher himself found the exact distribution of "z" for data from a bivariate normal distribution in 1921; Gayen in 1951
determined the exact distribution of "z" for data from a bivariate Type A Edgeworth distribution. Hotelling in 1953 calculated the Taylor series expressions for the moments of "z" and several related statistics and Hawkins in 1989 discovered the asymptotic distribution of "z" for data from a distribution with bounded fourth moments.
An alternative to the Fisher transformation is to use the exact confidence distribution density for "ρ" given by
formula_22
where formula_23 is the Gaussian hypergeometric function and formula_24 .
Other uses.
While the Fisher transformation is mainly associated with the Pearson product-moment correlation coefficient for bivariate normal observations, it can also be applied to Spearman's rank correlation coefficient in more general cases. A similar result for the asymptotic distribution applies, but with a minor adjustment factor: see the cited article for details. | [
{
"math_id": 0,
"text": "r = \\frac{\\operatorname{cov}(X,Y)}{\\sigma_X \\sigma_Y} = \\frac{\\sum ^N _{i=1}(X_i - \\bar{X})(Y_i - \\bar{Y})}{\\sqrt{\\sum ^N _{i=1}(X_i - \\bar{X})^2} \\sqrt{\\sum ^N _{i=1}(Y_i - \\bar{Y})^2}}."
},
{
"math_id": 1,
"text": "\\operatorname{cov}(X,Y)"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "\\sigma"
},
{
"math_id": 5,
"text": "z = {1 \\over 2}\\ln\\left({1+r \\over 1-r}\\right) = \\operatorname{artanh}(r),"
},
{
"math_id": 6,
"text": "{1 \\over 2}\\ln\\left({{1+\\rho} \\over {1-\\rho}}\\right),"
},
{
"math_id": 7,
"text": "{1 \\over \\sqrt{N-3}},"
},
{
"math_id": 8,
"text": "r = \\frac{\\exp(2z)-1}{\\exp(2z)+1} = \\operatorname{tanh}(z),"
},
{
"math_id": 9,
"text": "r"
},
{
"math_id": 10,
"text": "N"
},
{
"math_id": 11,
"text": "G(r)"
},
{
"math_id": 12,
"text": "\\kappa_3"
},
{
"math_id": 13,
"text": "\\kappa_3=\\frac{6\\rho -3(1-\\rho ^{2})G^{\\prime \\prime }(\\rho )/G^{\\prime }(\\rho )}{\\sqrt{N}}+O(N^{-3/2})."
},
{
"math_id": 14,
"text": "\\kappa_3=0"
},
{
"math_id": 15,
"text": "G"
},
{
"math_id": 16,
"text": "G(\\rho)=\\operatorname{artanh}(\\rho)"
},
{
"math_id": 17,
"text": "\\operatorname{artanh}(r)"
},
{
"math_id": 18,
"text": "\\operatorname{artanh}(\\rho )+\\frac{\\rho }{2N}+O(N^{-2}) "
},
{
"math_id": 19,
"text": "\\frac{1}{N}+\\frac{6-\\rho ^{2}}{2N^{2}}+O(N^{-3}) "
},
{
"math_id": 20,
"text": "\\rho "
},
{
"math_id": 21,
"text": "\\frac{z-\\operatorname{artanh}(\\rho )-\\frac{\\rho }{2N}}{\\sqrt{\\frac{1}{N}+\\frac{6-\\rho ^{2}}{2N^{2}}}}"
},
{
"math_id": 22,
"text": "\\pi (\\rho | r) =\n\\frac{\\Gamma(\\nu+1)}{\\sqrt{2\\pi}\\Gamma(\\nu + \\frac{1}{2})}\n(1 - r^2)^{\\frac{\\nu - 1}{2}} \\cdot\n(1 - \\rho^2)^{\\frac{\\nu - 2}{2}} \\cdot\n(1 - r \\rho )^{\\frac{1-2\\nu}{2}} F\\!\\left(\\frac{3}{2},-\\frac{1}{2}; \\nu + \\frac{1}{2}; \\frac{1 + r \\rho}{2}\\right)"
},
{
"math_id": 23,
"text": "F"
},
{
"math_id": 24,
"text": "\\nu = N-1 > 1"
}
] | https://en.wikipedia.org/wiki?curid=865764 |
8658125 | Convergence problem | In the analytic theory of continued fractions, the convergence problem is the determination of conditions on the partial numerators "a""i" and partial denominators "b""i" that are sufficient to guarantee the convergence of the continued fraction
formula_0
This convergence problem for continued fractions is inherently more difficult than the corresponding convergence problem for infinite series.
Elementary results.
When the elements of an infinite continued fraction consist entirely of positive real numbers, the determinant formula can easily be applied to demonstrate when the continued fraction converges. Since the denominators "B""n" cannot be zero in this simple case, the problem boils down to showing that the product of successive denominators "B""n""B""n"+1 grows more quickly than the product of the partial numerators "a"1"a"2"a"3..."a""n"+1. The convergence problem is much more difficult when the elements of the continued fraction are complex numbers.
Periodic continued fractions.
An infinite periodic continued fraction is a continued fraction of the form
formula_1
where "k" ≥ 1, the sequence of partial numerators {"a"1, "a"2, "a"3, ..., "a""k"} contains no values equal to zero, and the partial numerators {"a"1, "a"2, "a"3, ..., "a""k"} and partial denominators {"b"1, "b"2, "b"3, ..., "b""k"} repeat over and over again, "ad infinitum".
By applying the theory of linear fractional transformations to
formula_2
where "A""k"-1, "B""k"-1, "A""k", and "B""k" are the numerators and denominators of the "k"-1st and "k"th convergents of the infinite periodic continued fraction "x", it can be shown that "x" converges to one of the fixed points of "s"("w") if it converges at all. Specifically, let "r"1 and "r"2 be the roots of the quadratic equation
formula_3
These roots are the fixed points of "s"("w"). If "r"1 and "r"2 are finite then the infinite periodic continued fraction "x" converges if and only if the two roots are equal, or the "k"-1st convergent is closer to "r"1 than it is to "r"2 and none of the first "k" convergents equal "r"2.
If the denominator "B""k"-1 is equal to zero then an infinite number of the denominators "B""nk"-1 also vanish, and the continued fraction does not converge to a finite value. And when the two roots "r"1 and "r"2 are equidistant from the "k"-1st convergent – or when "r"1 is closer to the "k"-1st convergent than "r"2 is, but one of the first "k" convergents equals "r"2 – the continued fraction "x" diverges by oscillation.
The special case when period "k" = 1.
If the period of a continued fraction is 1; that is, if
formula_4
where "b" ≠ 0, we can obtain a very strong result. First, by applying an equivalence transformation we see that "x" converges if and only if
formula_5
converges. Then, by applying the more general result obtained above it can be shown that
formula_6
converges for every complex number "z" except when "z" is a negative real number and "z" < −1/4. Moreover, this continued fraction "y" converges to the particular value of
formula_7
that has the larger absolute value (except when "z" is real and "z" < −1/4, in which case the two fixed points of the LFT generating "y" have equal moduli and "y" diverges by oscillation).
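A small numerical sketch (not part of the original argument) illustrates this: iterating the map "w" ↦ 1 + "z"/"w" evaluates successive approximants of "y", and they settle on the fixed point of larger modulus; the value "z" = 0.3 + 0.2i is an arbitrary test point outside the cut.

    import cmath

    # Successive approximants of y = 1 + K(z/1) versus the fixed point
    # (1 + sqrt(4z + 1))/2 of larger absolute value; z is an arbitrary test value.
    z = 0.3 + 0.2j

    y = 1.0
    for _ in range(80):              # each step wraps one more layer 1 + z/(...)
        y = 1 + z / y

    roots = ((1 + cmath.sqrt(4*z + 1)) / 2, (1 - cmath.sqrt(4*z + 1)) / 2)
    print(y)
    print(max(roots, key=abs))       # agrees with y to many decimal places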
By applying another equivalence transformation the condition that guarantees convergence of
formula_8
can also be determined. Since a simple equivalence transformation shows that
formula_9
whenever "z" ≠ 0, the preceding result for the continued fraction "y" can be restated for "x". The infinite periodic continued fraction
formula_10
converges if and only if "z"2 is not a real number lying in the interval −4 < "z"2 ≤ 0 – or, equivalently, "x" converges if and only if "z" ≠ 0 and "z" is not a pure imaginary number with imaginary part between -2 and 2. (Not including either endpoint)
Worpitzky's theorem.
By applying the fundamental inequalities to the continued fraction
formula_11
it can be shown that the following statements hold if |"a""i"| ≤ 1/4 for the partial numerators "a""i", "i" = 2, 3, 4, ...: the continued fraction "x" converges to a finite value, and converges uniformly when the partial numerators "a""i" are complex variables; and the value of "x" and of each of its convergents lies in the circular domain of radius 2/3 centered on the point "z" = 4/3, that is, in the region defined by
formula_12
The radius 1/4 is the largest radius over which "x" can be shown to converge without exception.
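The bound can be checked numerically; the following sketch (an illustration, not part of the theorem) draws random partial numerators of modulus at most 1/4 and verifies that the truncated continued fraction lands in the disk above.

    import cmath, math, random

    # Random partial numerators with |a_i| <= 1/4; the value of
    # x = 1/(1 + a_2/(1 + a_3/(1 + ...))) should lie in |z - 4/3| <= 2/3.
    random.seed(0)
    for _ in range(5):
        a = [0.25 * random.random() * cmath.exp(2j * math.pi * random.random())
             for _ in range(300)]
        tail = 0
        for ai in reversed(a):           # evaluate the truncated fraction from the inside out
            tail = ai / (1 + tail)
        x = 1 / (1 + tail)
        print(abs(x - 4/3) <= 2/3)       # True for every sample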
Because the proof of Worpitzky's theorem employs Euler's continued fraction formula to construct an infinite series that is equivalent to the continued fraction "x", and the series so constructed is absolutely convergent, the Weierstrass M-test can be applied to a modified version of "x". If
formula_13
and a positive real number "M" exists such that |"c""i"| ≤ "M" ("i" = 2, 3, 4, ...), then the sequence of convergents {"f""i"("z")} converges uniformly when
formula_14
and "f"("z") is analytic on that open disk.
Śleszyński–Pringsheim criterion.
In the late 19th century, Śleszyński and later Pringsheim showed that a continued fraction, in which the "a"s and "b"s may be complex numbers, will converge to a finite value if formula_15 for formula_16
Van Vleck's theorem.
Jones and Thron attribute the following result to Van Vleck. Suppose that all the "ai" are equal to 1, and all the "bi" have arguments with:
formula_17
with epsilon being any positive number less than formula_18. In other words, all the "bi" are inside a wedge which has its vertex at the origin, has an opening angle of formula_19, and is symmetric around the positive real axis. Then "fi", the ith convergent to the continued fraction, is finite and has an argument:
formula_20
Also, the sequence of even convergents will converge, as will the sequence of odd convergents. The continued fraction itself will converge if and only if the sum of all the |"bi"| diverges. | [
{
"math_id": 0,
"text": "\nx = b_0 + \\cfrac{a_1}{b_1 + \\cfrac{a_2}{b_2 + \\cfrac{a_3}{b_3 + \\cfrac{a_4}{b_4 + \\ddots}}}}.\\,\n"
},
{
"math_id": 1,
"text": "\nx = \\cfrac{a_1}{b_1 + \\cfrac{a_2}{b_2 + \\cfrac{\\ddots}{\\quad\\ddots\\quad b_{k-1} + \\cfrac{a_k}{b_k + \\cfrac{a_1}{b_1 + \\cfrac{a_2}{b_2 + \\ddots}}}}}}\\,\n"
},
{
"math_id": 2,
"text": "\ns(w) = \\frac{A_{k-1}w + A_k}{B_{k-1}w + B_k}\\,\n"
},
{
"math_id": 3,
"text": "\nB_{k-1}w^2 + (B_k - A_{k-1})w - A_k = 0.\\,\n"
},
{
"math_id": 4,
"text": "\nx = \\underset{1}{\\overset{\\infty}{\\mathrm K}} \\frac{a}{b},\\,\n"
},
{
"math_id": 5,
"text": "\ny = 1 + \\underset{1}{\\overset{\\infty}{\\mathrm K}} \\frac{z}{1}\\qquad \\left(z = \\frac{a}{b^2}\\right)\\,\n"
},
{
"math_id": 6,
"text": "\ny = 1 + \\cfrac{z}{1 + \\cfrac{z}{1 + \\cfrac{z}{1 + \\ddots}}}\\,\n"
},
{
"math_id": 7,
"text": "\ny = \\frac{1}{2}\\left(1 \\pm \\sqrt{4z + 1}\\right)\\,\n"
},
{
"math_id": 8,
"text": "\nx = \\underset{1}{\\overset{\\infty}{\\mathrm K}} \\frac{1}{z} = \\cfrac{1}{z + \\cfrac{1}{z + \\cfrac{1}{z + \\ddots}}}\\,\n"
},
{
"math_id": 9,
"text": "\nx = \\cfrac{z^{-1}}{1 + \\cfrac{z^{-2}}{1 + \\cfrac{z^{-2}}{1 + \\ddots}}}\\,\n"
},
{
"math_id": 10,
"text": "\nx = \\underset{1}{\\overset{\\infty}{\\mathrm K}} \\frac{1}{z}\n"
},
{
"math_id": 11,
"text": "\nx = \\cfrac{1}{1 + \\cfrac{a_2}{1 + \\cfrac{a_3}{1 + \\cfrac{a_4}{1 + \\ddots}}}}\\,\n"
},
{
"math_id": 12,
"text": "\\Omega = \\lbrace z: |z - 4/3| \\leq 2/3 \\rbrace.\\,"
},
{
"math_id": 13,
"text": "\nf(z) = \\cfrac{1}{1 + \\cfrac{c_2z}{1 + \\cfrac{c_3z}{1 + \\cfrac{c_4z}{1 + \\ddots}}}}\\,\n"
},
{
"math_id": 14,
"text": "\n|z| < \\frac{1}{4M}\\,\n"
},
{
"math_id": 15,
"text": "|b_n | \\geq |a_n| + 1 "
},
{
"math_id": 16,
"text": " n \\geq 1. "
},
{
"math_id": 17,
"text": "\n- \\pi /2 + \\epsilon < \\arg ( b_i) < \\pi / 2 - \\epsilon, i \\geq 1,\n"
},
{
"math_id": 18,
"text": "\\pi/2 "
},
{
"math_id": 19,
"text": " \\pi - 2 \\epsilon "
},
{
"math_id": 20,
"text": " \n- \\pi /2 + \\epsilon < \\arg ( f_i ) < \\pi / 2 - \\epsilon, i \\geq 1. \n"
}
] | https://en.wikipedia.org/wiki?curid=8658125 |
865939 | Rational variety | Algebraic variety
In mathematics, a rational variety is an algebraic variety, over a given field "K", which is birationally equivalent to a projective space of some dimension over "K". This means that its function field is isomorphic to
formula_0
the field of all rational functions for some set formula_1 of indeterminates, where "d" is the dimension of the variety.
Rationality and parameterization.
Let "V" be an affine algebraic variety of dimension "d" defined by a prime ideal "I" = ⟨"f"1, ..., "f""k"⟩ in formula_2. If "V" is rational, then there are "n" + 1 polynomials "g"0, ..., "g""n" in formula_3 such that formula_4 In other words, we have a <templatestyles src="Template:Visible anchor/styles.css" />rational parameterization formula_5 of the variety.
Conversely, such a rational parameterization induces a field homomorphism of the field of functions of "V" into formula_3. But this homomorphism is not necessarily onto. If such a parameterization exists, the variety is said to be unirational. Lüroth's theorem (see below) implies that unirational curves are rational. Castelnuovo's theorem implies also that, in characteristic zero, every unirational surface is rational.
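A standard concrete instance of a rational parameterization as in the definition above (chosen here as an illustration, not taken from the article) is the circle defined by "f"("x", "y") = "x"^2 + "y"^2 − 1 with "g"0 = 1 + "u"^2, "g"1 = 1 − "u"^2, "g"2 = 2"u"; the identity "f"("g"1/"g"0, "g"2/"g"0) = 0 can be checked symbolically.

    import sympy as sp

    # Rational parameterization of the circle x^2 + y^2 - 1 = 0.
    u, x, y = sp.symbols('u x y')
    f = x**2 + y**2 - 1
    g0, g1, g2 = 1 + u**2, 1 - u**2, 2*u

    print(sp.simplify(f.subs({x: g1/g0, y: g2/g0})))   # prints 0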
Rationality questions.
A rationality question asks whether a given field extension is "rational", in the sense of being (up to isomorphism) the function field of a rational variety; such field extensions are also described as purely transcendental. More precisely, the rationality question for the field extension formula_6 is this: is formula_7 isomorphic to a rational function field over formula_8 in the number of indeterminates given by the transcendence degree?
There are several different variations of this question, arising from the way in which the fields formula_8 and formula_7 are constructed.
For example, let formula_8 be a field, and let
formula_9
be indeterminates over "K" and let "L" be the field generated over "K" by them. Consider a finite group formula_10 permuting those indeterminates over "K". By standard Galois theory, the set of fixed points of this group action is a subfield of formula_7, typically denoted formula_11. The rationality question for formula_12 is called Noether's problem and asks if this field of fixed points is or is not a purely transcendental extension of "K".
In her paper on Galois theory, Emmy Noether studied the problem of parameterizing the equations with a given Galois group, which she reduced to "Noether's problem". (She first mentioned this problem in an earlier paper, where she attributed the problem to E. Fischer.) She showed this was true for "n" = 2, 3, or 4. R. G. Swan (1969) found a counter-example to Noether's problem, with "n" = 47 and "G" a cyclic group of order 47.
Lüroth's theorem.
A celebrated case is Lüroth's problem, which Jacob Lüroth solved in the nineteenth century. Lüroth's problem concerns subextensions "L" of "K"("X"), the rational functions in the single indeterminate "X". Any such field is either equal to "K" or is also rational, i.e. "L" = "K"("F") for some rational function "F". In geometrical terms this states that a non-constant rational map from the projective line to a curve "C" can only occur when "C" also has genus 0. That fact can be read off geometrically from the Riemann–Hurwitz formula.
Even though Lüroth's theorem is often thought of as a non-elementary result, several elementary short proofs have been known for a long time. These simple proofs use only the basics of field theory and Gauss's lemma for primitive polynomials (see e.g.).
Unirationality.
A unirational variety "V" over a field "K" is one dominated by a rational variety, so that its function field "K"("V") lies in a pure transcendental field of finite type (which can be chosen to be of finite degree over "K"("V") if "K" is infinite). The solution of Lüroth's problem shows that for algebraic curves, rational and unirational are the same, and Castelnuovo's theorem implies that for complex surfaces unirational implies rational, because both are characterized by the vanishing of both the arithmetic genus and the second plurigenus. Zariski found some examples (Zariski surfaces) in characteristic "p" > 0 that are unirational but not rational. showed that a cubic three-fold is in general not a rational variety, providing an example for three dimensions that unirationality does not imply rationality. Their work used an intermediate Jacobian.
Iskovskikh and Manin showed that all non-singular quartic threefolds are irrational, though some of them are unirational. Artin and Mumford found some unirational 3-folds with non-trivial torsion in their third cohomology group, which implies that they are not rational.
For any field "K", János Kollár proved in 2000 that a smooth cubic hypersurface of dimension at least 2 is unirational if it has a point defined over "K". This is an improvement of many classical results, beginning with the case of cubic surfaces (which are rational varieties over an algebraic closure). Other examples of varieties that are shown to be unirational are many cases of the moduli space of curves.
Rationally connected variety.
A rationally connected variety "V" is a projective algebraic variety over an algebraically closed field such that through every two points there passes the image of a regular map from the projective line into "V". Equivalently, a variety is rationally connected if every two points are connected by a rational curve contained in the variety.
This definition differs from that of path connectedness only by the nature of the path, but is very different, as the only algebraic curves which are rationally connected are the rational ones.
Every rational variety, including the projective spaces, is rationally connected, but the converse is false. The class of the rationally connected varieties is thus a generalization of the class of the rational varieties. Unirational varieties are rationally connected, but it is not known if the converse holds.
Stably rational varieties.
A variety "V" is called "stably rational" if formula_13 is rational for some formula_14. Any rational variety is thus, by definition, stably rational. Examples constructed by show, that the converse is false however.
Schreieder showed that very general hypersurfaces formula_15 are not stably rational, provided that the degree of "V" is at least formula_16.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K(U_1, \\dots , U_d),"
},
{
"math_id": 1,
"text": "\\{U_1, \\dots, U_d\\}"
},
{
"math_id": 2,
"text": "K[X_1, \\dots , X_n]"
},
{
"math_id": 3,
"text": "K(U_1, \\dots , U_d)"
},
{
"math_id": 4,
"text": "f_i(g_1/g_0, \\ldots, g_n/g_0)=0. "
},
{
"math_id": 5,
"text": "x_i=\\frac{g_i}{g_0}(u_1,\\ldots,u_d)"
},
{
"math_id": 6,
"text": "K \\subset L"
},
{
"math_id": 7,
"text": "L"
},
{
"math_id": 8,
"text": "K"
},
{
"math_id": 9,
"text": "\\{y_1, \\dots, y_n \\}"
},
{
"math_id": 10,
"text": "G"
},
{
"math_id": 11,
"text": "L^G"
},
{
"math_id": 12,
"text": "K \\subset L^G"
},
{
"math_id": 13,
"text": "V \\times \\mathbf P^m"
},
{
"math_id": 14,
"text": "m \\ge 0"
},
{
"math_id": 15,
"text": "V \\subset \\mathbf P^{N+1}"
},
{
"math_id": 16,
"text": "\\log_2 N+2"
}
] | https://en.wikipedia.org/wiki?curid=865939 |
866099 | Porous medium | Material containing fluid-filled voids
In materials science, a porous medium or a porous material is a material containing pores (voids). The skeletal portion of the material is often called the "matrix" or "frame". The pores are typically filled with a fluid (liquid or gas). The skeletal material is usually a solid, but structures like foams are often also usefully analyzed using concept of porous media.
A porous medium is most often characterised by its porosity. Other properties of the medium (e.g. permeability, tensile strength, electrical conductivity, tortuosity) can sometimes be derived from the respective properties of its constituents (solid matrix and fluid) and the media porosity and pores structure, but such a derivation is usually complex. Even the concept of porosity is only straightforward for a poroelastic medium.
Often both the solid matrix and the pore network (also known as the pore space) are continuous, so as to form two interpenetrating continua such as in a sponge. However, there is also a concept of closed porosity and effective porosity, i.e. the pore space accessible to flow.
Many natural substances such as rocks and soil (e.g. aquifers, petroleum reservoirs), zeolites, biological tissues (e.g. bones, wood, cork), and man made materials such as cements and ceramics can be considered as porous media. Many of their important properties can only be rationalized by considering them to be porous media.
The concept of porous media is used in many areas of applied science and engineering: filtration, mechanics (acoustics, geomechanics, soil mechanics, rock mechanics), engineering (petroleum engineering, bioremediation, construction engineering), geosciences (hydrogeology, petroleum geology, geophysics), biology and biophysics, material science. Two important current fields of application for porous materials are energy conversion and energy storage, where porous materials are essential for supercapacitors, (photo-)catalysis, fuel cells, and batteries.
Microscopic and macroscopic.
Porous media can be classified at both the microscopic and the macroscopic level.
At the microscopic scale, the structure is represented statistically by the distribution of pore sizes, the degree of pore interconnection and orientation, the proportion of dead pores, etc.
The macroscopic technique makes use of bulk properties that have been averaged at scales far bigger than pore size.
Depending on the goal, these two techniques are frequently employed together since they are complementary. The microscopic description is required to comprehend surface phenomena like the adsorption of macromolecules from polymer solutions and the blocking of pores, whereas the macroscopic approach is frequently quite sufficient for process design where fluid flow, heat, and mass transfer are of highest concern and the molecular dimensions are significantly smaller than the pore size of the porous system.
Fluid flow through porous media.
Fluid flow through porous media is a subject of common interest and has emerged a separate field of study. The study of more general behaviour of porous media involving deformation of the solid frame is called poromechanics.
The theory of porous flows has applications in inkjet printing and nuclear waste disposal technologies, among others.
Numerous factors influence fluid flow in porous media, and its fundamental function is to expend energy and create fluid via the wellbore. In flow mechanics via porous medium, the connection between energy and flow rate becomes the most significant issue. The most fundamental law that characterizes this connection is Darcy's law, particularly applicable to fine-porous media. In contrast, Forchheimer's law finds utility in the context of coarse-porous media.
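As a numerical sketch of Darcy's law in its usual one-dimensional form, "Q" = "kA"ΔP/("μL"), the following fragment uses illustrative values for permeability, geometry and viscosity (all assumptions, not data from the article):

    # Darcy's law for single-phase flow through a porous sample (illustrative values).
    k  = 1e-13      # permeability, m^2 (roughly 0.1 darcy)
    A  = 1e-2       # cross-sectional area, m^2
    mu = 1e-3       # fluid viscosity, Pa*s (about that of water)
    dP = 2e5        # pressure drop across the sample, Pa
    L  = 0.5        # sample length, m

    Q = k * A * dP / (mu * L)    # volumetric flow rate, m^3/s
    print(Q)                     # 4e-07 m^3/s for these numbers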
Pore structure models.
A representation of the void phase that exists inside porous materials using a set or network of pores. It serves as a structural foundation for the prediction of transport parameters and is employed in the context of pore structure characterisation.
There are many idealized models of pore structures. They can be broadly divided into three categories:
Porous materials often have a fractal-like structure, having a pore surface area that seems to grow indefinitely when viewed with progressively increasing resolution. Mathematically, this is described by assigning the pore surface a Hausdorff dimension greater than 2. Experimental methods for the investigation of pore structures include confocal microscopy and x-ray tomography.
Porous materials have found applications in many engineering fields, including the automotive sector.
Laws for porous materials.
One of the laws for porous materials is the generalized Murray's law, which is based on optimizing mass transfer by minimizing transport resistance in pores with a given volume. It is applicable to optimizing mass transfer involving mass variations and chemical reactions involving flow processes, molecule or ion diffusion.
For connecting a parent pipe with radius "r"0 to many children pipes with radii "r""i", the formula of the generalized Murray's law is formula_0, where "X" is the ratio of mass variation during mass transfer in the parent pore and the exponent "α" depends on the type of transfer. For laminar flow "α" = 3; for turbulent flow "α" = 7/3; for molecule or ionic diffusion "α" = 2; etc.
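A short numerical sketch of this relation follows; the children radii, the value of "X" and the laminar-flow exponent are arbitrary illustrative choices.

    # Parent-pore radius from the generalized Murray's law,
    # r0 = ((1/(1 - X)) * sum(r_i**alpha))**(1/alpha).
    alpha = 3                        # laminar flow
    X = 0.0                          # no mass variation in the parent pore
    children = [1.0, 0.8, 0.6]       # child pore radii, arbitrary units

    r0 = (sum(r**alpha for r in children) / (1 - X)) ** (1 / alpha)
    print(r0)                        # 1.2 in the same units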
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r_o^a={1 \\over 1-X}\\sum_{i=1}^Nr_i^a"
}
] | https://en.wikipedia.org/wiki?curid=866099 |
8661899 | Polymer-based battery | Type of battery
A polymer-based battery uses organic materials instead of bulk metals to form a battery. Currently accepted metal-based batteries pose many challenges due to limited resources, negative environmental impact, and the approaching limit of progress. Redox active polymers are attractive options for electrodes in batteries due to their synthetic availability, high-capacity, flexibility, light weight, low cost, and low toxicity. Recent studies have explored how to increase efficiency and reduce challenges to push polymeric active materials further towards practicality in batteries. Many types of polymers are being explored, including conductive, non-conductive, and radical polymers. Batteries with a combination of electrodes (one metal electrode and one polymeric electrode) are easier to test and compare to current metal-based batteries, however batteries with both a polymer cathode and anode are also a current research focus. Polymer-based batteries, including metal/polymer electrode combinations, should be distinguished from metal-polymer batteries, such as a lithium polymer battery, which most often involve a polymeric electrolyte, as opposed to polymeric active materials.
Organic polymers can be processed at relatively low temperatures, lowering costs. They also produce less carbon dioxide.
History.
Organic batteries are an alternative to the metal reaction battery technologies, and much research is taking place in this area.
A 1982 article titled "Plastic-Metal Batteries: New promise for the electric car" stated: "Two different organic polymers are being investigated for possible use in batteries" and indicated that the demonstration described was based on work begun in 1976.
Waseda University was approached by NEC in 2001, and began to focus on organic batteries. In 2002, an NEC researcher presented a paper on piperidinoxyl polymer technology, and by 2005 they presented an organic radical battery (ORB) based on a modified PTMA, poly(2,2,6,6-tetramethylpiperidinyloxy-4-yl meth-acrylate).
In 2006, Brown University announced a technology based on polypyrrole. In 2007, Waseda announced a new ORB technology based on "soluble polymer, polynorborene with pendant nitroxide radical groups."
In 2015 researchers developed an efficient, conductive, electron-transporting polymer. The discovery employed a "conjugated redox polymer" design with a naphthalene-bithiophene polymer that has been used for transistors and solar cells. Doped with lithium ions it offered significant electronic conductivity and remained stable through 3,000 charge/discharge cycles. Polymers that conduct holes have been available for some time. The polymer exhibits the greatest power density for an organic material under practical measurement conditions. A battery could be 80% charged within 6 seconds. Energy density remained lower than inorganic batteries.
Electrochemistry.
Like metal-based batteries, the reaction in a polymer-based battery is between a positive and a negative electrode with different redox potentials. An electrolyte transports charges between these electrodes. For a substance to be a suitable battery active material, it must be able to participate in a chemically and thermodynamically reversible redox reaction. Unlike metal-based batteries, whose redox process is based on the valence charge of the metals, the redox process of polymer-based batteries is based on a change of state of charge in the organic material. For a high energy density, the electrodes should have similar specific energies.
Classification of active materials.
The active organic material could be a p-type, n-type, or b-type. During charging, p-type materials are oxidized and produce cations, while n-types are reduced and produce anions. B-type organics could be either oxidized or reduced during charging or discharging.
Charge and discharge.
In a commercially available Li-ion battery, the Li+ ions diffuse slowly due to the required intercalation, and heat can be generated during charge or discharge. Polymer-based batteries, however, have a more efficient charge/discharge process, resulting in improved theoretical rate performance and increased cyclability.
Charge.
To charge a polymer-based battery, a current is applied to oxidize the positive electrode and reduce the negative electrode. The electrolyte salt compensates the charges formed. The limiting factors upon charging a polymer-based battery differ from metal-based batteries and include the full oxidation of the cathode organic, full reduction of the anode organic, or consumption of the electrolyte.
Discharge.
Upon discharge, the electrons go from the anode to cathode externally, while the electrolyte carries the released ions from the polymer. This process, and therefore the rate performance, is limited by the electrolyte ion travel and the electron-transfer rate constant, k0, of the reaction.
This electron transfer rate constant provides a benefit of polymer-based batteries, which typically have high values on the order of 10−1 cm s−1. The organic polymer electrodes are amorphous and swollen, which allows for a higher rate of ionic diffusion and further contributes to a better rate performance. Different polymer reactions, however, have different reaction rates. While a nitroxyl radical has a high reaction rate, organodisulfides have significantly lower rates because bonds are broken and new bonds are formed.
Batteries are commonly evaluated by their theoretical capacity (the total capacity of the battery if 100% of active material were utilized in the reaction). This value can be calculated as follows:
formula_0
where m is the total mass of active material, n is the number of transferred electrons per molar mass of active material, M is the molar mass of active material, and F is Faraday's constant.
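As a numeric sketch of this formula (the repeat-unit molar mass and the one-electron reaction below are assumptions chosen to resemble a nitroxide-radical polymer, and the factor 1000/3600 converts coulombs to mA h):

    # Theoretical capacity C_t = m*n*F/M, converted from coulombs to mA h.
    F = 96485        # Faraday constant, C/mol
    n = 1            # electrons transferred per repeat unit (assumed)
    M = 240.0        # molar mass of the repeat unit, g/mol (assumed)
    m = 1.0          # mass of active material, g

    C_t = m * n * F / M * 1000 / 3600
    print(C_t)       # about 112 mA h for 1 g of active material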
Charge and discharge testing.
Most polymer electrodes are tested in a metal-organic battery for ease of comparison to metal-based batteries. In this testing setup, the metal acts as the anode and either n- or p-type polymer electrodes can be used as the cathode. When testing the n-type organic, this metal-polymer battery is charged upon assembly and the n-type material is reduced during discharge, while the metal is oxidized. For p-type organics in a metal-polymer test, the battery is already discharged upon assembly. During initial charging, electrolyte salt cations are reduced and mobilized to the polymeric anode while the organic is oxidized. During discharging, the polymer is reduced while the metal is oxidized to its cation.
Types of active materials.
Conductive polymers.
Conductive polymers can be n-doped or p-doped to form an electrochemically active material with conductivity due to dopant ions on a conjugated polymer backbone. Conductive polymers (i.e. conjugated polymers) are embedded with the redox active group, as opposed to having pendant groups, with the exception of sulfur conductive polymers. They are ideal electrode materials due to their conductivity and redox activity, therefore not requiring large quantities of inactive conductive fillers. However they also tend to have low coulombic efficiency and exhibit poor cyclability and self-discharge. Due to the poor electronic separation of the polymer's charged centers, the redox potentials of conjugated polymers change upon charge and discharge due to a dependence on the dopant levels. As a result of this complication, the discharge profile (cell voltage vs. capacity) of conductive polymer batteries has a sloped curve.
Conductive polymers struggle with stability due to high levels of charge, failing to reach the ideal of one charge per monomer unit of polymer. Stabilizing additives can be incorporated, but these decrease the specific capacity.
Non-conjugated polymers with pendant groups.
Despite the conductivity advantage of conjugated polymers, their many drawbacks as active materials have furthered the exploration of polymers with redox active pendant groups. Groups frequently explored include carbonyls, carbazoles, organosulfur compounds, viologen, and other redox-active molecules with high reactivity and stable voltage upon charge and discharge. These polymers present an advantage over conjugated polymers due to their localized redox sites and more constant redox potential over charge/discharge.
Carbonyl pendant groups.
Carbonyl compounds have been heavily studied, and thus present an advantage, as new active materials with carbonyl pendant groups can be achieved by many different synthetic properties. Polymers with carbonyl groups can form multivalent anions. Stabilization depends on the substituents; vicinal carbonyls are stabilized by enolate formation, aromatic carbonyls are stabilized by delocalization of charge, and quinoidal carbonyls are stabilized by aromaticity.
Organosulfur groups.
Sulfur is one of earth's most abundant elements and is thus advantageous for active electrode materials. Small molecule organosulfur active materials exhibit poor stability, which is partially resolved via incorporation into a polymer. In disulfide polymers, electrochemical charge is stored in a thiolate anion, formed by a reversible two-electron oxidation of the disulfide bond. Electrochemical storage in thioethers is achieved by the two-electron oxidation of a neutral thioether to a thioether with a +2 charge. As active materials, however, organosulfur compounds exhibit weak cyclability.
Radical groups.
Polymeric electrodes in organic radical batteries are electrochemically active with stable organic radical pendant groups that have an unpaired electron in the uncharged state. Nitroxide radicals are the most commonly applied, though phenoxyl and hydrazyl groups are also often used. A nitroxide radical could be reversibly oxidized and the polymer p-doped, or reduced, causing n-doping. Upon charging, the radical is oxidized to an oxoammonium cation, and at the cathode, the radical is reduced to an aminoxyl anion. These processes are reversed upon discharge, and the radicals are regenerated. For stable charge and discharge, both the radical and doped form of the radical must be chemically stable. These batteries exhibit excellent cyclability and power density, attributed to the stability of the radical and the simple one-electron transfer reaction. A slight decrease in capacity after repeated cycling is likely due to a build-up of swollen polymer particles which increase the resistance of the electrode. Because the radical polymers are considerably insulating, conductive additives are often added, which lower the theoretical specific capacity. Nearly all organic radical batteries feature a nearly constant voltage during discharge, which is an advantage over conductive polymer batteries. The polymer backbone and cross-linking techniques can be tuned to minimize the solubility of the polymer in the electrolyte, thereby minimizing self-discharge.
Control and performance.
Performance summary comparison of key polymer electrode types.
During discharge, conductive polymers have a sloping voltage that hinders their practical applications. This sloping curve indicates electrochemical instability which could be due to morphology, size, the charge repulsions within the polymer chain during the reaction, or the amorphous state of polymers.
Effect of polymer morphology.
Electrochemical performance of polymer electrodes is affected by polymer size, morphology, and degree of crystallinity. In a polypyrrole (PPy)/sodium-ion hybrid battery, a 2018 study demonstrated that the polymer anode with a fluffy structure consisting of chains of submicron particles performed with a much higher capacity (183 mAh g−1) as compared to bulk PPy (34.8 mAh g−1). The structure of the submicron polypyrrole anode allowed for increased electrical contact between the particles, and the electrolyte was able to further penetrate the polymeric active material. It has also been reported that amorphous polymeric active materials perform better than their crystalline counterparts. In 2014, it was demonstrated that crystalline oligopyrene exhibited a discharge capacity of 42.5 mAh g−1, while the amorphous oligopyrene had a higher capacity of 120 mAh g−1. Further, the crystalline version experienced a sloped charge and discharge voltage and considerable overpotential due to slow diffusion of ClO4−. The amorphous oligopyrene had a voltage plateau during charge and discharge, as well as significantly less overpotential.
Molecular weight control.
The molecular weight of polymers effects their chemical and physical properties, and thus the performance of a polymer electrode. A 2017 study evaluated the effect of molecular weight on electrochemical properties of poly(TEMPO methacrylate) (PTMA). By increasing the monomer to initiator ratio from 50/1 to 1000/1, five different sizes were achieved from 66 to 704 degrees of polymerization. A strong dependence on molecular weight was established, as the higher the molecular weight polymers exhibited a higher specific discharge capacity and better cyclability. This effect was attributed to a reciprocal relationship between molecular weight and solubility in the electrolyte.
Advantages.
Polymer-based batteries have many advantages over metal-based batteries. The electrochemical reactions involved are more simple, and the structural diversity of polymers and method of polymer synthesis allows for increased tunability for desired applications. While new types of inorganic materials are difficult to find, new organic polymers can be much more easily synthesized. Another advantage is that polymer electrode materials may have lower redox potentials, but they have a higher energy density than inorganic materials. And, because the redox reaction kinetics for organics is higher than that for inorganics, they have a higher power density and rate performance. Because of the inherent flexibility and light weight of organic materials as compared to inorganic materials, polymeric electrodes can be printed, cast, and vapor deposited, enabling application in thinner and more flexible devices. Further, most polymers can be synthesized at low cost or extracted from biomass and even recycled, while inorganic metals are limited in availability and can be harmful to the environment.
Organic small molecules also possess many of these advantages, however they are more susceptible to dissolving in the electrolyte. Polymeric organic active materials less easily dissolve and thus exhibit superior cyclability.
Challenges.
Though superior in this sense to small organic molecules, polymers still exhibit solubility in electrolytes, and battery stability is threatened by dissolved active material that can travel between electrodes, leading to decreased cyclability and self-discharge, which indicates weaker mechanical capacity. This issue can be lessened by incorporating the redox-active unit in the polymeric backbone, but this can decrease the theoretical specific capacity and increase electrochemical polarization. Another challenge is that besides conductive polymers, most polymeric electrodes are electrically insulating and therefore require conductive additives, reducing the battery's overall capacity. While polymers do have a low mass density, they have a lower volumetric energy density, which in turn would require an increase in the volume of devices being powered.
Safety.
A 2009 study evaluated the safety of a hydrophilic radical polymer and found that a radical polymer battery with an aqueous electrolyte is nontoxic, chemically stable, and non-explosive, and is thus a safer alternative to traditional metal-based batteries. Aqueous electrolytes present a safer option over organic electrolytes which can be toxic and can form HF acid. The one-electron redox reaction of a radical polymer electrode during charging generates little heat and therefore has a reduced risk of thermal runaway. Further studies are required to fully understand the safety of all polymeric electrodes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C_t (mA\\ h\\ g^{-1})=\\frac{mnF}{M}"
}
] | https://en.wikipedia.org/wiki?curid=8661899 |
866423 | On shell and off shell | Configurations of a system that do or do not satisfy classical equations of motion
In physics, particularly in quantum field theory, configurations of a physical system that satisfy classical equations of motion are called on the mass shell (on shell); while those that do not are called off the mass shell (off shell).
In quantum field theory, virtual particles are termed off shell because they do not satisfy the energy–momentum relation; real exchange particles do satisfy this relation and are termed on (mass) shell. In classical mechanics for instance, in the action formulation, extremal solutions to the variational principle are on shell and the Euler–Lagrange equations give the on-shell equations. Noether's theorem regarding differentiable symmetries of physical action and conservation laws is another on-shell theorem.
Mass shell.
Mass shell is a synonym for mass hyperboloid, meaning the hyperboloid in energy–momentum space describing the solutions to the equation:
formula_0,
the mass–energy equivalence formula which gives the energy formula_1 in terms of the momentum formula_2 and the rest mass formula_3 of a particle. The equation for the mass shell is also often written in terms of the four-momentum; in Einstein notation with metric signature (+,−,−,−) and units where the speed of light formula_4, as formula_5. In the literature, one may also encounter formula_6 if the metric signature used is (−,+,+,+).
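A minimal numerical check of this relation, in units with "c" = 1 and with arbitrary illustrative momentum values, might look as follows:

    import math

    # True if (E, p) satisfies E^2 - |p|^2 = m0^2 (units with c = 1).
    def is_on_shell(E, p, m0, tol=1e-12):
        return abs(E**2 - sum(pi**2 for pi in p) - m0**2) < tol

    m0 = 0.105                                  # an assumed rest mass
    p  = (0.2, -0.1, 0.05)                      # assumed three-momentum
    E  = math.sqrt(sum(pi**2 for pi in p) + m0**2)

    print(is_on_shell(E, p, m0))                # True: on shell
    print(is_on_shell(E + 0.03, p, m0))         # False: off shell, like a virtual particle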
The four-momentum of an exchanged virtual particle formula_7 is formula_8, with mass formula_9. The four-momentum formula_8 of the virtual particle is the difference between the four-momenta of the incoming and outgoing particles.
Virtual particles corresponding to internal propagators in a Feynman diagram are in general allowed to be off shell, but the amplitude for the process will diminish depending on how far off shell they are. This is because the formula_10-dependence of the propagator is determined by the four-momenta of the incoming and outgoing particles. The propagator typically has singularities on the mass shell.
When speaking of the propagator, negative values for formula_1 that satisfy the equation are thought of as being on shell, though the classical theory does not allow negative values for the energy of a particle. This is because the propagator incorporates into one expression the cases in which the particle carries energy in one direction, and in which its antiparticle carries energy in the other direction; negative and positive on-shell formula_1 then simply represent opposing flows of positive energy.
Scalar field.
An example comes from considering a scalar field in "D"-dimensional Minkowski space. Consider a Lagrangian density given by formula_11. The action
formula_12
The Euler–Lagrange equation for this action can be found by varying the field and its derivative and setting the variation to zero, and is:
formula_13
Now, consider an infinitesimal spacetime translation formula_14. The Lagrangian density formula_15 is a scalar, and so will infinitesimally transform as formula_16 under the infinitesimal transformation. On the other hand, by Taylor expansion, we have in general
formula_17
Substituting for formula_18 and noting that formula_19 (since the variations are independent at each point in spacetime):
formula_20
Since this has to hold for independent translations formula_21, we may "divide" by formula_22 and write:
formula_23
This is an example of an equation that holds "off shell", since it is true for any field configuration regardless of whether it respects the equations of motion (in this case, the Euler–Lagrange equation given above). However, we can derive an "on shell" equation by simply substituting the Euler–Lagrange equation:
formula_24
We can write this as:
formula_25
And if we define the quantity in parentheses as formula_26, we have:
formula_27
This is an instance of Noether's theorem. Here, the conserved quantity is the stress–energy tensor, which is only conserved on shell, that is, if the equations of motion are satisfied.
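The statement can be checked symbolically for a particular choice of Lagrangian; the sketch below (an illustration using SymPy, not part of the article's derivation) takes the free Lagrangian density (1/2)(∂φ)^2 − (1/2)"m"^2φ^2 in 1+1 dimensions and verifies that the divergence of the stress–energy tensor reduces to zero once the Euler–Lagrange equation is imposed.

    import sympy as sp

    # 1+1-dimensional Minkowski space, metric diag(+1, -1); free scalar Lagrangian.
    t, x, m = sp.symbols('t x m', real=True)
    coords = (t, x)
    eta = (1, -1)

    phi = sp.Function('phi')(t, x)
    dphi = [sp.diff(phi, c) for c in coords]

    # Plain symbols stand in for the field and its first derivatives so that
    # partial derivatives of L can be taken independently, then mapped back.
    f, f_t, f_x = sp.symbols('f f_t f_x')
    back = [(f_t, dphi[0]), (f_x, dphi[1]), (f, phi)]
    L = sp.Rational(1, 2)*(eta[0]*f_t**2 + eta[1]*f_x**2) - sp.Rational(1, 2)*m**2*f**2

    grads = [sp.diff(L, f_t).subs(back), sp.diff(L, f_x).subs(back)]   # dL/d(d_mu phi)
    L_f = L.subs(back)

    # Stress-energy tensor T^nu_mu = dL/d(d_nu phi) d_mu phi - delta^nu_mu L
    T = [[grads[nu]*dphi[mu] - (1 if nu == mu else 0)*L_f for mu in range(2)]
         for nu in range(2)]

    div = [sum(sp.diff(T[nu][mu], coords[nu]) for nu in range(2)) for mu in range(2)]
    print([sp.simplify(d) for d in div])        # generally nonzero off shell

    # On shell: impose the Euler-Lagrange (Klein-Gordon) equation,
    # written as d^2 phi/dt^2 = d^2 phi/dx^2 - m^2 phi.
    on_shell = {sp.Derivative(phi, (t, 2)): sp.Derivative(phi, (x, 2)) - m**2*phi}
    print([sp.simplify(d.subs(on_shell)) for d in div])   # [0, 0]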
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E^2 - |\\vec{p} \\,|^2 c^2 = m_0^2 c^4"
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "\\vec{p}"
},
{
"math_id": 3,
"text": "m_0"
},
{
"math_id": 4,
"text": "c = 1"
},
{
"math_id": 5,
"text": "p^\\mu p_\\mu \\equiv p^2 = m_0^2"
},
{
"math_id": 6,
"text": "p^\\mu p_\\mu = - m_0^2"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "q_\\mu"
},
{
"math_id": 9,
"text": "q^2 = m_X^2"
},
{
"math_id": 10,
"text": "q^2"
},
{
"math_id": 11,
"text": "\\mathcal{L}(\\phi,\\partial_\\mu \\phi)"
},
{
"math_id": 12,
"text": "S = \\int d^D x \\mathcal{L}(\\phi,\\partial_\\mu \\phi)"
},
{
"math_id": 13,
"text": "\\partial_\\mu \\frac{\\partial \\mathcal{L}}{\\partial (\\partial_\\mu \\phi)} = \\frac{\\partial \\mathcal{L}}{\\partial \\phi}"
},
{
"math_id": 14,
"text": "x^\\mu \\rightarrow x^\\mu +\\alpha^\\mu"
},
{
"math_id": 15,
"text": "\\mathcal{L}"
},
{
"math_id": 16,
"text": "\\delta \\mathcal{L} = \\alpha^\\mu \\partial_\\mu \\mathcal{L}"
},
{
"math_id": 17,
"text": "\\delta \\mathcal{L} = \\frac{\\partial \\mathcal{L}}{\\partial \\phi} \\delta \\phi + \\frac{\\partial \\mathcal{L}}{\\partial (\\partial_\\mu \\phi)} \\delta( \\partial_\\mu \\phi) "
},
{
"math_id": 18,
"text": "\\delta \\mathcal{L}"
},
{
"math_id": 19,
"text": "\\delta( \\partial_\\mu \\phi) = \\partial_\\mu ( \\delta \\phi)"
},
{
"math_id": 20,
"text": "\\alpha^\\mu \\partial_\\mu \\mathcal{L} = \\frac{\\partial \\mathcal{L}}{\\partial \\phi} \\alpha^\\mu \\partial_\\mu \\phi + \\frac{\\partial \\mathcal{L}}{\\partial (\\partial_\\nu \\phi)} \\alpha^\\mu \\partial_\\mu \\partial_\\nu \\phi "
},
{
"math_id": 21,
"text": "\\alpha^\\mu = (\\epsilon, 0,...,0) , (0,\\epsilon, ...,0), ..."
},
{
"math_id": 22,
"text": "\\alpha^\\mu"
},
{
"math_id": 23,
"text": " \\partial_\\mu \\mathcal{L} = \\frac{\\partial \\mathcal{L}}{\\partial \\phi} \\partial_\\mu \\phi + \\frac{\\partial \\mathcal{L}}{\\partial (\\partial_\\nu \\phi)} \\partial_\\mu \\partial_\\nu \\phi "
},
{
"math_id": 24,
"text": " \\partial_\\mu \\mathcal{L} = \\partial_\\nu \\frac{\\partial \\mathcal{L}}{\\partial (\\partial_\\nu \\phi)} \\partial_\\mu \\phi + \\frac{\\partial \\mathcal{L}}{\\partial (\\partial_\\nu \\phi)} \\partial_\\mu \\partial_\\nu \\phi "
},
{
"math_id": 25,
"text": " \\partial_\\nu \\left (\\frac{\\partial \\mathcal{L}}{\\partial (\\partial_\\nu \\phi)} \\partial_\\mu \\phi -\\delta^\\nu_\\mu \\mathcal{L} \\right) = 0 "
},
{
"math_id": 26,
"text": "T^\\nu{}_\\mu"
},
{
"math_id": 27,
"text": "\\partial_\\nu T^\\nu{}_\\mu = 0"
}
] | https://en.wikipedia.org/wiki?curid=866423 |
8664461 | Hypergeometric function of a matrix argument | In mathematics, the hypergeometric function of a matrix argument is a generalization of the classical hypergeometric series. It is a function defined by an infinite summation which can be used to evaluate certain multivariate integrals.
Hypergeometric functions of a matrix argument have applications in random matrix theory. For example, the distributions of the extreme eigenvalues of random matrices are often expressed in terms of the hypergeometric function of a matrix argument.
Definition.
Let formula_0 and formula_1 be integers, and let
formula_2 be an formula_3 complex symmetric matrix.
Then the hypergeometric function of a matrix argument formula_4
and parameter formula_5 is defined as
formula_6
where formula_7 means formula_8 is a partition of formula_9, formula_10 is the generalized Pochhammer symbol, and
formula_11 is the "C" normalization of the Jack function.
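In the degenerate case of a 1 × 1 matrix only one-part partitions contribute and the Jack term reduces to a power, so the series collapses to the classical hypergeometric series; the following truncated-sum sketch (an illustration of the structure of the definition, under that special case) makes this concrete.

    from math import prod

    # Truncated series for a 1x1 argument X = (x): C_kappa((x)) = x**k for kappa = (k),
    # and the generalized Pochhammer symbols reduce to classical ones.
    def pochhammer(a, k):
        return prod(a + j for j in range(k))

    def hyp_pfq_1x1(a_list, b_list, x, terms=80):
        total = 0.0
        for k in range(terms):
            num = prod(pochhammer(a, k) for a in a_list)
            den = prod(pochhammer(b, k) for b in b_list)
            total += num / den * x**k / prod(range(1, k + 1))
        return total

    print(hyp_pfq_1x1([0.5], [], 0.3))    # 1F0(1/2;;0.3)
    print((1 - 0.3) ** -0.5)              # equals (1 - x)^(-a), the classical closed form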
Two matrix arguments.
If formula_4 and formula_12 are two formula_3 complex symmetric matrices, then the hypergeometric function of two matrix arguments is defined as:
formula_13
where formula_14 is the identity matrix of size formula_15.
Not a typical function of a matrix argument.
Unlike other functions of matrix argument, such as the matrix exponential, which are matrix-valued, the hypergeometric function of (one or two) matrix arguments is scalar-valued.
The parameter "α".
In many publications the parameter formula_16 is omitted. Also, in different publications different values of formula_16 are being implicitly assumed. For example, in the theory of real random matrices (see, e.g., Muirhead, 1984), formula_17 whereas in other settings (e.g., in the complex case—see Gross and Richards, 1989), formula_18. To make matters worse, in random matrix theory researchers tend to prefer a parameter called formula_19 instead of formula_16 which is used in combinatorics.
The thing to remember is that
formula_20
Care should be exercised as to whether a particular text is using a parameter formula_16 or formula_19 and which the particular value of that parameter is.
Typically, in settings involving real random matrices, formula_17 and thus formula_21. In settings involving complex random matrices, one has formula_18 and formula_22. | [
{
"math_id": 0,
"text": "p\\ge 0"
},
{
"math_id": 1,
"text": "q\\ge 0"
},
{
"math_id": 2,
"text": "X "
},
{
"math_id": 3,
"text": "m\\times m"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "\\alpha>0"
},
{
"math_id": 6,
"text": "\n_pF_q^{(\\alpha )}(a_1,\\ldots,a_p;\nb_1,\\ldots,b_q;X) =\n\\sum_{k=0}^\\infty\\sum_{\\kappa\\vdash k}\n\\frac{1}{k!}\\cdot\n\\frac{(a_1)^{(\\alpha )}_\\kappa\\cdots(a_p)_\\kappa^{(\\alpha )}}\n{(b_1)_\\kappa^{(\\alpha )}\\cdots(b_q)_\\kappa^{(\\alpha )}} \\cdot\nC_\\kappa^{(\\alpha )}(X),\n"
},
{
"math_id": 7,
"text": "\\kappa\\vdash k"
},
{
"math_id": 8,
"text": "\\kappa"
},
{
"math_id": 9,
"text": "k"
},
{
"math_id": 10,
"text": "(a_i)^{(\\alpha )}_{\\kappa}"
},
{
"math_id": 11,
"text": "C_\\kappa^{(\\alpha )}(X)"
},
{
"math_id": 12,
"text": "Y"
},
{
"math_id": 13,
"text": "\n_pF_q^{(\\alpha )}(a_1,\\ldots,a_p;\nb_1,\\ldots,b_q;X,Y) =\n\\sum_{k=0}^\\infty\\sum_{\\kappa\\vdash k}\n\\frac{1}{k!}\\cdot\n\\frac{(a_1)^{(\\alpha )}_\\kappa\\cdots(a_p)_\\kappa^{(\\alpha )}}\n{(b_1)_\\kappa^{(\\alpha )}\\cdots(b_q)_\\kappa^{(\\alpha )}} \\cdot\n\\frac{C_\\kappa^{(\\alpha )}(X)\nC_\\kappa^{(\\alpha )}(Y)\n}{C_\\kappa^{(\\alpha )}(I)},\n"
},
{
"math_id": 14,
"text": "I"
},
{
"math_id": 15,
"text": "m"
},
{
"math_id": 16,
"text": "\\alpha"
},
{
"math_id": 17,
"text": "\\alpha=2"
},
{
"math_id": 18,
"text": "\\alpha=1"
},
{
"math_id": 19,
"text": "\\beta"
},
{
"math_id": 20,
"text": "\\alpha=\\frac{2}{\\beta}."
},
{
"math_id": 21,
"text": "\\beta=1"
},
{
"math_id": 22,
"text": "\\beta=2"
}
] | https://en.wikipedia.org/wiki?curid=8664461 |
8664662 | Generalized Pochhammer symbol | In mathematics, the generalized Pochhammer symbol of parameter formula_0 and partition formula_1 generalizes the classical Pochhammer symbol, named after Leo August Pochhammer, and is defined as
formula_2
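Because the definition is a finite double product, it translates directly into code. The following Python sketch (function and variable names are illustrative) computes the product above and, for a one-part partition ("k"), reduces to the classical rising factorial "a"("a"+1)⋯("a"+"k"−1).
def gen_pochhammer(a, kappa, alpha):
    # Generalized Pochhammer symbol (a)_kappa^(alpha) for a partition kappa.
    result = 1.0
    for i, k_i in enumerate(kappa, start=1):   # i runs over the rows of kappa
        for j in range(1, k_i + 1):            # j runs over the boxes in row i
            result *= a - (i - 1) / alpha + j - 1
    return result

# For kappa = (3,) this is a*(a+1)*(a+2), the classical rising factorial:
print(gen_pochhammer(2.0, (3,), 2.0))   # 2*3*4 = 24.0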
It is used in multivariate analysis. | [
{
"math_id": 0,
"text": "\\alpha>0"
},
{
"math_id": 1,
"text": "\\kappa=(\\kappa_1,\\kappa_2,\\ldots,\\kappa_m)"
},
{
"math_id": 2,
"text": "(a)^{(\\alpha )}_\\kappa=\\prod_{i=1}^m \\prod_{j=1}^{\\kappa_i}\n\\left(a-\\frac{i-1}{\\alpha}+j-1\\right).\n"
}
] | https://en.wikipedia.org/wiki?curid=8664662 |
866515 | Address space layout randomization | Computer security technique
Address space layout randomization (ASLR) is a computer security technique involved in preventing exploitation of memory corruption vulnerabilities. In order to prevent an attacker from reliably redirecting code execution to, for example, a particular exploited function in memory, ASLR randomly arranges the address space positions of key data areas of a process, including the base of the executable and the positions of the stack, heap and libraries.
History.
The Linux PaX project first coined the term "ASLR", and published the first design and implementation of ASLR in July 2001 as a patch for the Linux kernel. It is seen as a complete implementation, providing a patch for kernel stack randomization since October 2002.
The first mainstream operating system to support ASLR by default was OpenBSD version 3.4 in 2003, followed by Linux in 2005.
Benefits.
Address space randomization hinders some types of security attacks by making it more difficult for an attacker to predict target addresses. For example, attackers trying to execute return-to-libc attacks must locate the code to be executed, while other attackers trying to execute shellcode injected on the stack have to find the stack first. In both cases, the system makes related memory-addresses unpredictable from the attackers' point of view. These values have to be guessed, and a mistaken guess is not usually recoverable due to the application crashing.
Effectiveness.
Address space layout randomization is based upon the low chance of an attacker guessing the locations of randomly placed areas. Security is increased by increasing the search space. Thus, address space randomization is more effective when more entropy is present in the random offsets. Entropy is increased by either raising the amount of virtual memory area space over which the randomization occurs or reducing the period over which the randomization occurs. The period is typically implemented as small as possible, so most systems must increase VMA space randomization.
To defeat the randomization, attackers must successfully guess the positions of all areas they wish to attack. For data areas such as stack and heap, where custom code or useful data can be loaded, more than one state can be attacked by using NOP slides for code or repeated copies of data. This allows an attack to succeed if the area is randomized to one of a handful of values. In contrast, code areas such as library base and main executable need to be discovered exactly. Often these areas are mixed, for example stack frames are injected onto the stack and a library is returned into.
The following variables can be declared: formula_0 (entropy bits of the stack top), formula_1 (entropy bits of the mmap() base), formula_2 (entropy bits of the main executable base), formula_3 (entropy bits of the heap base), formula_4, formula_5, formula_6 and formula_7 (the corresponding bits of stack, mmap, main-executable and heap entropy attacked in each attempt), and formula_8 (the number of attempts made). The total entropy formula_9 is then
formula_10
To calculate the probability of an attacker succeeding, a number of attempts α carried out without being interrupted by a signature-based IPS, law enforcement, or other factor must be assumed; in the case of brute forcing, the daemon cannot be restarted. The number of relevant bits and how many are being attacked in each attempt must also be calculated, leaving however many bits the attacker has to defeat.
The following formulas represent the probability of success for a given set of α attempts on N bits of entropy: formula_11 gives the probability when each attempt is an independent guess (for example, when the address space is re-randomized after every attempt), while formula_12 gives it when α distinct values are tried against a fixed randomization (systematic brute force).
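Both expressions are elementary and easy to check numerically. A minimal Python sketch (names are illustrative, not from any particular tool):
def p_success_rerandomized(attempts, entropy_bits):
    # Each attempt is an independent guess against 2**N equally likely layouts.
    return 1.0 - (1.0 - 2.0 ** (-entropy_bits)) ** attempts

def p_success_bruteforce(attempts, entropy_bits):
    # attempts distinct guesses against a fixed layout (valid for attempts <= 2**N).
    return attempts / 2.0 ** entropy_bits

# Example with 8 bits of entropy, the typical 32-bit figure quoted below:
print(p_success_bruteforce(128, 8))      # 0.5
print(p_success_rerandomized(128, 8))    # about 0.39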
In many systems, formula_13 can be in the thousands or millions. On 32-bit systems, a typical amount of entropy "N" is 8 bits. For 2004 computer speeds, Shacham and co-workers state "... 16 bits of address randomization can be defeated by a brute force attack within minutes." (The authors' statement depends on the ability to attack the same application multiple times without any delay. Proper implementations of ASLR, like that included in grsecurity, provide several methods to make such brute force attacks infeasible. One method involves preventing an executable from executing for a configurable amount of time if it has crashed a certain number of times.) On modern 64-bit systems, these numbers typically reach the millions at least.
Android, and possibly other systems, implement "Library Load Order Randomization", a form of ASLR which randomizes the order in which libraries are loaded. This supplies very little entropy. An approximation of the number of bits of entropy supplied per needed library appears below; this does not yet account for varied library sizes, so the actual entropy gained is really somewhat higher. Attackers usually need only one library; the math is more complex with multiple libraries, and shown below as well. The case of an attacker using only one library is a simplification of the more complex formula for formula_14.
formula_15
These values tend to be low even for large values of l, most importantly since attackers typically can use only the C standard library and thus one can often assume that formula_16. However, even for a small number of libraries there are a few bits of entropy gained here; it is thus potentially interesting to combine library load order randomization with VMA address randomization to gain a few extra bits of entropy. These extra bits of entropy will not apply to other mmap() segments, only libraries.
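In this simplified model, the entropy is the base-2 logarithm of the number of ordered ways the needed libraries can fall into place among the l loaded libraries. A small Python sketch of the formula above (purely illustrative):
import math

def library_order_entropy(num_libraries, num_needed=1):
    # Bits of entropy from library load order randomization in the simplified model:
    # log2 of the falling factorial l*(l-1)*...*(l-beta+1).
    bits = 0.0
    for i in range(num_libraries, num_libraries - num_needed, -1):
        bits += math.log2(i)
    return bits

# An attacker needing only the C library (beta = 1) among 10 loaded libraries:
print(library_order_entropy(10, 1))   # log2(10), about 3.3 bits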
Reducing entropy.
Attackers may make use of several methods to reduce the entropy present in a randomized address space, ranging from simple information leaks to attacking multiple bits of entropy per attack (such as by heap spraying). There is little that can be done about this.
It is possible to leak information about memory layout using format string vulnerabilities. Format string functions such as printf use a variable argument list to do their job; format specifiers describe what the argument list looks like. Because of the way arguments are typically passed, each format specifier moves closer to the top of the stack frame. Eventually, the return pointer and stack frame pointer can be extracted, revealing the address of a vulnerable library and the address of a known stack frame; this can eliminate library and stack randomization as an obstacle to an attacker.
One can also decrease entropy in the stack or heap. The stack typically must be aligned to 16 bytes, and so this is the smallest possible randomization interval, while the heap must be page-aligned, typically to 4096 bytes. When attempting an attack, it is possible to align duplicate attacks with these intervals: a NOP slide may be used with shellcode injection, and when attempting to return to "system", the path string in its argument can be padded with an arbitrary number of leading slashes. The number of bits removed is exactly formula_17 for n intervals attacked.
Such decreases are limited due to the amount of data in the stack or heap. The stack, for example, is typically limited to a few megabytes and in practice grows to much less, which bounds how much entropy can be removed by stack stuffing. The heap, on the other hand, is limited by the behavior of the memory allocator; in the case of glibc, allocations above 128 KB are created using mmap, limiting attackers to 5 bits of reduction. This is also a limiting factor when brute forcing; although the number of attacks to perform can be reduced, the size of the attacks is increased enough that the behavior could in some circumstances become apparent to intrusion detection systems.
Limitations.
ASLR-protected addresses can be leaked by various side channels, removing mitigation utility. Recent attacks have used information leaked by the CPU branch target predictor buffer (BTB) or memory management unit (MMU) walking page tables. It is not clear if this class of ASLR attack can be mitigated. If they cannot, the benefit of ASLR is reduced or eliminated.
Implementations.
Several mainstream, general-purpose operating systems implement ASLR.
Android.
Android 4.0 Ice Cream Sandwich provides address space layout randomization (ASLR) to help protect system and third-party applications from exploits due to memory-management issues. Position-independent executable support was added in Android 4.1. Android 5.0 dropped non-PIE support and requires all dynamically linked binaries to be position independent. Library load ordering randomization was accepted into the Android open-source project on 26 October 2015, and was included in the Android 7.0 release.
DragonFly BSD.
DragonFly BSD has an implementation of ASLR based upon OpenBSD's model, added in 2010. It is off by default, and can be enabled by setting the sysctl vm.randomize_mmap to 1.
FreeBSD.
Support for ASLR appeared in FreeBSD 13.0. It is enabled by default since 13.2.
iOS (iPhone, iPod touch, iPad).
Apple introduced ASLR in iOS 4.3 (released March 2011).
KASLR was introduced in iOS 6. The kernel base address is randomized at boot using a random byte drawn from a SHA-1 hash of random data generated by iBoot (the second-stage iOS boot loader).
Linux.
The Linux kernel has enabled a weak form of ASLR by default since kernel version 2.6.12, released in June 2005. The PaX and Exec Shield patchsets to the Linux kernel provide more complete implementations. The Exec Shield patch for Linux supplies 19 bits of stack entropy on a period of 16 bytes, and 8 bits of mmap base randomization on a period of 1 page of 4096 bytes. This places the stack base in an area 8 MB wide containing 524,288 possible positions, and the mmap base in an area 1 MB wide containing 256 possible positions.
ASLR can be disabled for a specific process by changing its execution domain, using the personality() system call. A number of sysctl options control the behavior of mainline ASLR. For example, kernel.randomize_va_space controls "what" to randomize; the strongest option is 2. vm.mmap_rnd_bits controls how many bits to randomize for mmap.
Position-independent executable (PIE) implements a random base address for the main executable binary and has been in place since April 18, 2004. It provides the same address randomness to the main executable as being used for the shared libraries. The PIE feature cannot be used together with the prelink feature for the same executable. The prelink tool implements randomization at prelink time rather than runtime, because by design prelink aims to handle relocating libraries before the dynamic linker has to, which allows the relocation to occur once for many runs of the program. As a result, real address space randomization would defeat the purpose of prelinking.
In 2014, Marco-Gisbert and Ripoll disclosed the "offset2lib" technique, which weakens Linux ASLR for PIE executables. Linux kernels load PIE executables right after their libraries; as a result, there is a fixed offset between the executable and the library functions. If an attacker finds a way to determine the address of a function in the executable, the library addresses are also known. They demonstrated an attack that finds the address in fewer than 400 tries. They proposed a new option to randomize the placement of the executable relative to the library, but it had yet to be incorporated upstream as of 2024.
The Linux kernel 5.18, released in May 2022, reduced the effectiveness of both 32-bit and 64-bit implementations. Linux filesystems call the kernel helper thp_get_unmapped_area to place a file-backed mmap. With a change in 5.18, files greater than 2 MiB are made to return 2 MiB-aligned addresses, so they can potentially be backed by huge pages. (Previously, the increased alignment only applied to Direct Access (DAX) mappings.) In the meantime, the C library (libc) has, over time, grown in size to exceed this 2 MiB threshold, so instead of being aligned to a (typically) 4 KiB page boundary as before, these libraries are now 2 MiB-aligned: a loss of 9 bits of entropy. For 32-bit Linux, many distributions show no randomization "at all" in the placement of the libc. For 64-bit Linux, the 28 bits of entropy is reduced to 19 bits. In response, Ubuntu has increased its vm.mmap_rnd_bits setting. Martin Doucha added a Linux Test Project testcase to detect this issue.
Kernel address space layout randomization.
Kernel address space layout randomization (KASLR) enables address space randomization for the Linux kernel image by randomizing where the kernel code is placed at boot time. KASLR was merged into the Linux kernel mainline in kernel version 3.14, released on 30 March 2014. When compiled in, it can be disabled at boot time by specifying nokaslr as one of the kernel's boot parameters.
There are several side-channel attacks in x86 processors that could leak kernel addresses. In late 2017, kernel page-table isolation (KPTI aka KAISER) was developed to defeat these attacks. However, this method cannot protect against side-channel attacks utilizing collisions in branch predictor structures.
As of 2021, finer grained kernel address space layout randomization (or function granular KASLR, FGKASLR) is a planned extension of KASLR to randomize down to the function level.
Microsoft Windows.
Microsoft's Windows Vista (released January 2007) and later have ASLR enabled only for executables and dynamic link libraries that are specifically linked to be ASLR-enabled. For compatibility, it is not enabled by default for other applications. Typically, only older software is incompatible and ASLR can be fully enabled by editing a registry entry, or by installing Microsoft's Enhanced Mitigation Experience Toolkit.
The locations of the heap, stack, Process Environment Block, and Thread Environment Block are also randomized. A security whitepaper from Symantec noted that ASLR in 32-bit Windows Vista may not be as robust as expected, and Microsoft has acknowledged a weakness in its implementation.
Host-based intrusion prevention systems such as "WehnTrust" and "Ozone" also offer ASLR for Windows XP and Windows Server 2003 operating systems. WehnTrust is open-source. Complete details of Ozone's implementation are not available.
It was noted in February 2012 that ASLR on 32-bit Windows systems prior to Windows 8 can have its effectiveness reduced in low memory situations. A similar effect also had been achieved on Linux in the same research. The test code caused the Mac OS X 10.7.3 system to kernel panic, leaving its ASLR behavior in this scenario unclear.
NetBSD.
Support for ASLR in userland appeared in NetBSD 5.0 (released April 2009), and was enabled by default in NetBSD-current in April 2016.
Kernel ASLR support on amd64 was added in NetBSD-current in October 2017, making NetBSD the first BSD system to support KASLR.
OpenBSD.
In 2003, OpenBSD became the first mainstream operating system to support a strong form of ASLR and to activate it by default.
OpenBSD completed its ASLR support in 2008 when it added support for PIE binaries. OpenBSD 4.4's malloc(3) was designed to improve security by taking advantage of ASLR and gap page features implemented as part of OpenBSD's mmap system call, and to detect use-after-free bugs. Released in 2013, OpenBSD 5.3 was the first mainstream operating system to enable position-independent executables by default on multiple hardware platforms, and OpenBSD 5.7 activated position-independent static binaries (Static-PIE) by default.
macOS.
In Mac OS X Leopard 10.5 (released October 2007), Apple introduced randomization for system libraries.
In Mac OS X Lion 10.7 (released July 2011), Apple expanded their implementation to cover all applications, stating "address space layout randomization (ASLR) has been improved for all applications. It is now available for 32-bit apps (as are heap memory protections), making 64-bit and 32-bit applications more resistant to attack."
As of OS X Mountain Lion 10.8 (released July 2012) and later, the entire system including the kernel as well as kexts and zones are randomly relocated during system boot.
Solaris.
ASLR has been introduced in Solaris beginning with Solaris 11.1 (released October 2012). ASLR in Solaris 11.1 can be set system-wide, per zone, or on a per-binary basis.
Exploitation.
A side-channel attack utilizing branch target buffer was demonstrated to bypass ASLR protection. In 2017, an attack named "ASLR⊕Cache" was demonstrated which could defeat ASLR in a web browser using JavaScript. | [
{
"math_id": 0,
"text": "E_s"
},
{
"math_id": 1,
"text": "E_m"
},
{
"math_id": 2,
"text": "E_x"
},
{
"math_id": 3,
"text": "E_h"
},
{
"math_id": 4,
"text": "A_s"
},
{
"math_id": 5,
"text": "A_m"
},
{
"math_id": 6,
"text": "A_x"
},
{
"math_id": 7,
"text": "A_h"
},
{
"math_id": 8,
"text": "\\alpha"
},
{
"math_id": 9,
"text": "N"
},
{
"math_id": 10,
"text": "N = (E_s-A_s) + (E_m-A_m) + (E_x-A_x) + (E_h-A_h)\\,"
},
{
"math_id": 11,
"text": "g \\left ( \\alpha\\, \\right ) = 1 - { \\left ( 1 - {2^{-N}} \\right ) ^ \\alpha\\,} \\,\\text{ if } 0 \\le \\, \\alpha\\,"
},
{
"math_id": 12,
"text": "b \\left ( \\alpha\\, \\right ) = \\frac{\\alpha\\,}{{2^N}} \\,\\text{ if } 0 \\le \\, \\alpha\\, \\le \\, {2^N}"
},
{
"math_id": 13,
"text": "2^N"
},
{
"math_id": 14,
"text": "l = 1"
},
{
"math_id": 15,
"text": "\nE_m = \\begin{cases} \\log_2 \\left (l \\right ) &\\text{ if } \\beta\\, = 1, l \\ge \\, 1 \\\\\n\\sum_{i=l}^{l - \\left ( \\beta\\, - 1 \\right )} \\log_2 \\left (i \\right ) &\\text{ if } \\beta\\, \\ge \\, 1, l \\ge \\, 1\n\\end{cases}\n"
},
{
"math_id": 16,
"text": "\\beta\\, = 1"
},
{
"math_id": 17,
"text": "\\log_2\\!\\left (n \\right )"
}
] | https://en.wikipedia.org/wiki?curid=866515 |
8665621 | Jack function | Generalization of the Jack polynomial
In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.
Definition.
The Jack function formula_0
of an integer partition formula_1, parameter formula_2, and arguments formula_3 can be recursively defined as
follows:
formula_4
formula_5
where the summation is over all partitions formula_6 such that the skew partition formula_7 is a horizontal strip, namely
formula_8 (formula_9 must be zero or otherwise formula_10) and
formula_11
where formula_12 equals formula_13 if formula_14 and formula_15 otherwise. The expressions formula_16 and formula_17 refer to the conjugate partitions of formula_1 and formula_6, respectively. The notation formula_18 means that the product is taken over all coordinates formula_19 of boxes in the Young diagram of the partition formula_1.
Combinatorial formula.
In 1997, F. Knop and S. Sahi gave a purely combinatorial formula for the Jack polynomials formula_20 in "n" variables:
formula_21
The sum is taken over all "admissible" tableaux of shape formula_22 and
formula_23
with
formula_24
An "admissible" tableau of shape formula_25 is a filling of the Young diagram formula_25 with numbers 1,2,…,"n" such that for any box ("i","j") in the tableau,
A box formula_31 is "critical" for the tableau "T" if formula_32 and formula_33
This result can be seen as a special case of the more general combinatorial formula for Macdonald polynomials.
C normalization.
The Jack functions form an orthogonal basis in a space of symmetric polynomials, with inner product:
formula_34
This orthogonality property is unaffected by normalization. The normalization defined above is typically referred to as the J normalization. The C normalization is defined as
formula_35
where
formula_36
For formula_37 is often denoted by formula_38 and called the Zonal polynomial.
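The normalizing factor above depends only on the parts of formula_1 and its conjugate, so it is simple to compute directly. A short Python sketch (names are illustrative):
def conjugate(kappa):
    # Conjugate partition: column lengths of the Young diagram of kappa.
    return [sum(1 for p in kappa if p >= j) for j in range(1, (kappa[0] if kappa else 0) + 1)]

def j_kappa(kappa, alpha):
    # j_kappa = product over boxes (i, j) of
    #   (kappa'_j - i + alpha*(kappa_i - j + 1)) * (kappa'_j - i + 1 + alpha*(kappa_i - j))
    conj = conjugate(kappa)
    result = 1.0
    for i, row in enumerate(kappa, start=1):
        for j in range(1, row + 1):
            result *= (conj[j - 1] - i + alpha * (row - j + 1)) \
                      * (conj[j - 1] - i + 1 + alpha * (row - j))
    return result

# For kappa = (1,) the product has a single box and equals alpha:
print(j_kappa((1,), 2.0))   # 2.0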
P normalization.
The "P" normalization is given by the identity formula_39, where
formula_40
where formula_41 and formula_42 denote the arm and leg length, respectively. Therefore, for formula_43 is the usual Schur function.
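The arm and leg lengths, and hence the normalization factor in formula_40, can be computed directly from the Young diagram. A small Python sketch (boxes are indexed from 1, as in the formulas above; names are illustrative):
def conjugate(part):
    # Conjugate partition: column lengths of the Young diagram.
    return [sum(1 for p in part if p >= j) for j in range(1, (part[0] if part else 0) + 1)]

def arm(part, i, j):
    # a_lambda(i, j) = lambda_i - j, the number of boxes to the right in row i.
    return part[i - 1] - j

def leg(part, i, j):
    # l_lambda(i, j) = lambda'_j - i, the number of boxes below in column j.
    return conjugate(part)[j - 1] - i

def H_prime(part, alpha):
    # H'_lambda = product over boxes of (alpha*arm + leg + 1).
    result = 1.0
    for i, row in enumerate(part, start=1):
        for j in range(1, row + 1):
            result *= alpha * arm(part, i, j) + leg(part, i, j) + 1
    return result

# For alpha = 1 this is the product of hook lengths, e.g. 3*1*1 = 3 for (2, 1):
print(H_prime((2, 1), 1.0))   # 3.0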
Similar to Schur polynomials, formula_44 can be expressed as a sum over Young tableaux. However, one needs to add an extra weight to each tableau that depends on the parameter formula_2.
Thus, a formula for the Jack function formula_45 is given by
formula_46
where the sum is taken over all tableaux of shape formula_25, and formula_47 denotes the entry in box "s" of "T".
The weight formula_48 can be defined in the following fashion: Each tableau "T" of shape formula_25 can be interpreted as a sequence of partitions
formula_49
where formula_50 defines the skew shape with content "i" in "T". Then
formula_51
where
formula_52
and the product is taken only over all boxes "s" in formula_25 such that "s" has a box from formula_53 in the same row, but "not" in the same column.
Connection with the Schur polynomial.
When formula_54 the Jack function is a scalar multiple of the Schur polynomial
formula_55
where
formula_56
is the product of all hook lengths of formula_1.
Properties.
If the partition has more parts than the number of variables, then the Jack function is 0:
formula_57
Matrix argument.
In some texts, especially in random matrix theory, authors have found it more convenient to use a matrix argument in the Jack function. The connection is simple. If formula_58 is a matrix with eigenvalues
formula_3, then
formula_59 | [
{
"math_id": 0,
"text": "J_\\kappa^{(\\alpha )}(x_1,x_2,\\ldots,x_m)"
},
{
"math_id": 1,
"text": "\\kappa"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "x_1,x_2,\\ldots,x_m"
},
{
"math_id": 4,
"text": "J_{k}^{(\\alpha )}(x_1)=x_1^k(1+\\alpha)\\cdots (1+(k-1)\\alpha)"
},
{
"math_id": 5,
"text": "J_\\kappa^{(\\alpha )}(x_1,x_2,\\ldots,x_m)=\\sum_\\mu\nJ_\\mu^{(\\alpha )}(x_1,x_2,\\ldots,x_{m-1})\nx_m^{|\\kappa /\\mu|}\\beta_{\\kappa \\mu}, "
},
{
"math_id": 6,
"text": "\\mu"
},
{
"math_id": 7,
"text": "\\kappa/\\mu"
},
{
"math_id": 8,
"text": " \n\\kappa_1\\ge\\mu_1\\ge\\kappa_2\\ge\\mu_2\\ge\\cdots\\ge\\kappa_{n-1}\\ge\\mu_{n-1}\\ge\\kappa_n\n"
},
{
"math_id": 9,
"text": "\\mu_n"
},
{
"math_id": 10,
"text": "J_\\mu(x_1,\\ldots,x_{n-1})=0"
},
{
"math_id": 11,
"text": "\n\\beta_{\\kappa\\mu}=\\frac{\n \\prod_{(i,j)\\in \\kappa} B_{\\kappa\\mu}^\\kappa(i,j)\n}{\n\\prod_{(i,j)\\in \\mu} B_{\\kappa\\mu}^\\mu(i,j)\n},\n"
},
{
"math_id": 12,
"text": "B_{\\kappa\\mu}^\\nu(i,j)"
},
{
"math_id": 13,
"text": "\\kappa_j'-i+\\alpha(\\kappa_i-j+1)"
},
{
"math_id": 14,
"text": "\\kappa_j'=\\mu_j'"
},
{
"math_id": 15,
"text": "\\kappa_j'-i+1+\\alpha(\\kappa_i-j)"
},
{
"math_id": 16,
"text": "\\kappa'"
},
{
"math_id": 17,
"text": "\\mu'"
},
{
"math_id": 18,
"text": "(i,j)\\in\\kappa"
},
{
"math_id": 19,
"text": "(i,j)"
},
{
"math_id": 20,
"text": "J_\\mu^{(\\alpha )}"
},
{
"math_id": 21,
"text": "J_\\mu^{(\\alpha )} = \\sum_{T} d_T(\\alpha) \\prod_{s \\in T} x_{T(s)}."
},
{
"math_id": 22,
"text": "\\lambda,"
},
{
"math_id": 23,
"text": "d_T(\\alpha) = \\prod_{s \\in T \\text{ critical}} d_\\lambda(\\alpha)(s)"
},
{
"math_id": 24,
"text": "d_\\lambda(\\alpha)(s) = \\alpha(a_\\lambda(s) +1) + (l_\\lambda(s) + 1)."
},
{
"math_id": 25,
"text": "\\lambda"
},
{
"math_id": 26,
"text": "T(i,j) \\neq T(i',j)"
},
{
"math_id": 27,
"text": "i'>i."
},
{
"math_id": 28,
"text": "T(i,j) \\neq T(i,j-1)"
},
{
"math_id": 29,
"text": "j>1"
},
{
"math_id": 30,
"text": "i'<i."
},
{
"math_id": 31,
"text": "s = (i,j) \\in \\lambda"
},
{
"math_id": 32,
"text": "j > 1"
},
{
"math_id": 33,
"text": "T(i,j)=T(i,j-1)."
},
{
"math_id": 34,
"text": "\\langle f,g\\rangle = \\int_{[0,2\\pi]^n} f \\left (e^{i\\theta_1},\\ldots,e^{i\\theta_n} \\right ) \\overline{g \\left (e^{i\\theta_1},\\ldots,e^{i\\theta_n} \\right )} \\prod_{1\\le j<k\\le n} \\left |e^{i\\theta_j}-e^{i\\theta_k} \\right |^{\\frac{2}{\\alpha}} d\\theta_1\\cdots d\\theta_n"
},
{
"math_id": 35,
"text": "C_\\kappa^{(\\alpha)}(x_1,\\ldots,x_n) = \\frac{\\alpha^{|\\kappa|}(|\\kappa|)!}{j_\\kappa} J_\\kappa^{(\\alpha)}(x_1,\\ldots,x_n),"
},
{
"math_id": 36,
"text": "j_\\kappa=\\prod_{(i,j)\\in \\kappa} \\left (\\kappa_j'-i+\\alpha \\left (\\kappa_i-j+1 \\right ) \\right ) \\left (\\kappa_j'-i+1+\\alpha \\left (\\kappa_i-j \\right ) \\right )."
},
{
"math_id": 37,
"text": "\\alpha=2, C_\\kappa^{(2)}(x_1,\\ldots,x_n)"
},
{
"math_id": 38,
"text": "C_\\kappa(x_1,\\ldots,x_n)"
},
{
"math_id": 39,
"text": "J_\\lambda = H'_\\lambda P_\\lambda"
},
{
"math_id": 40,
"text": "H'_\\lambda = \\prod_{s\\in \\lambda} (\\alpha a_\\lambda(s) + l_\\lambda(s) + 1)"
},
{
"math_id": 41,
"text": "a_\\lambda"
},
{
"math_id": 42,
"text": "l_\\lambda"
},
{
"math_id": 43,
"text": "\\alpha=1, P_\\lambda"
},
{
"math_id": 44,
"text": "P_\\lambda"
},
{
"math_id": 45,
"text": "P_\\lambda "
},
{
"math_id": 46,
"text": " P_\\lambda = \\sum_{T} \\psi_T(\\alpha) \\prod_{s \\in \\lambda} x_{T(s)}"
},
{
"math_id": 47,
"text": "T(s)"
},
{
"math_id": 48,
"text": " \\psi_T(\\alpha) "
},
{
"math_id": 49,
"text": " \\emptyset = \\nu_1 \\to \\nu_2 \\to \\dots \\to \\nu_n = \\lambda"
},
{
"math_id": 50,
"text": "\\nu_{i+1}/\\nu_i"
},
{
"math_id": 51,
"text": " \\psi_T(\\alpha) = \\prod_i \\psi_{\\nu_{i+1}/\\nu_i}(\\alpha)"
},
{
"math_id": 52,
"text": "\\psi_{\\lambda/\\mu}(\\alpha) = \\prod_{s \\in R_{\\lambda/\\mu}-C_{\\lambda/\\mu} } \\frac{(\\alpha a_\\mu(s) + l_\\mu(s) +1)}{(\\alpha a_\\mu(s) + l_\\mu(s) + \\alpha)} \\frac{(\\alpha a_\\lambda(s) + l_\\lambda(s) + \\alpha)}{(\\alpha a_\\lambda(s) + l_\\lambda(s) +1)}\n"
},
{
"math_id": 53,
"text": "\\lambda/\\mu"
},
{
"math_id": 54,
"text": "\\alpha=1"
},
{
"math_id": 55,
"text": "\nJ^{(1)}_\\kappa(x_1,x_2,\\ldots,x_n) = H_\\kappa s_\\kappa(x_1,x_2,\\ldots,x_n),\n"
},
{
"math_id": 56,
"text": "\nH_\\kappa=\\prod_{(i,j)\\in\\kappa} h_\\kappa(i,j)=\n\\prod_{(i,j)\\in\\kappa} (\\kappa_i+\\kappa_j'-i-j+1)\n"
},
{
"math_id": 57,
"text": "J_\\kappa^{(\\alpha )}(x_1,x_2,\\ldots,x_m)=0, \\mbox{ if }\\kappa_{m+1}>0."
},
{
"math_id": 58,
"text": "X"
},
{
"math_id": 59,
"text": "\nJ_\\kappa^{(\\alpha )}(X)=J_\\kappa^{(\\alpha )}(x_1,x_2,\\ldots,x_m).\n"
}
] | https://en.wikipedia.org/wiki?curid=8665621 |
8667 | Double-slit experiment | Physics experiment, showing light and matter can be modelled by both waves and particles
In modern physics, the double-slit experiment demonstrates that light and matter can satisfy the seemingly incongruous classical definitions for both waves "and" particles. This ambiguity is considered evidence for the fundamentally probabilistic nature of quantum mechanics. This type of experiment was first performed by Thomas Young in 1801, as a demonstration of the wave behavior of visible light. In 1927, Davisson and Germer and, independently, George Paget Thomson and his research student Alexander Reid demonstrated that electrons show the same behavior, which was later extended to atoms and molecules. Thomas Young's experiment with light was part of classical physics long before the development of quantum mechanics and the concept of wave–particle duality. He believed it demonstrated that Christiaan Huygens' wave theory of light was correct, and his experiment is sometimes referred to as Young's experiment or Young's slits.
The experiment belongs to a general class of "double path" experiments, in which a wave is split into two separate waves (the wave is typically made of many photons and better referred to as a wave front, not to be confused with the wave properties of the individual photon) that later combine into a single wave. Changes in the path-lengths of both waves result in a phase shift, creating an interference pattern. Another version is the Mach–Zehnder interferometer, which splits the beam with a beam splitter.
In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles (not waves); the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. These results demonstrate the principle of wave–particle duality.
Other atomic-scale entities, such as electrons, are found to exhibit the same behavior when fired towards a double slit. Additionally, the detection of individual discrete impacts is observed to be inherently probabilistic, which is inexplicable using classical mechanics.
The experiment can be done with entities much larger than electrons and photons, although it becomes more difficult as size increases. The largest entities for which the double-slit experiment has been performed were molecules that each comprised 2000 atoms (whose total mass was 25,000 atomic mass units).
The double-slit experiment (and its variations) has become a classic for its clarity in expressing the central puzzles of quantum mechanics. Because it demonstrates the fundamental limitation of the ability of the observer to predict experimental results, Richard Feynman called it "a phenomenon which is impossible […] to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery [of quantum mechanics]."
Overview.
If light consisted strictly of ordinary or classical particles, and these particles were fired in a straight line through a slit and allowed to strike a screen on the other side, we would expect to see a pattern corresponding to the size and shape of the slit. However, when this "single-slit experiment" is actually performed, the pattern on the screen is a diffraction pattern in which the light is spread out. The smaller the slit, the greater the angle of spread. The top portion of the image shows the central portion of the pattern formed when a red laser illuminates a slit and, if one looks carefully, two faint side bands. More bands can be seen with a more highly refined apparatus. Diffraction explains the pattern as being the result of the interference of light waves from the slit.
If one illuminates two parallel slits, the light from the two slits again interferes. Here the interference is a more pronounced pattern with a series of alternating light and dark bands. The width of the bands is a property of the frequency of the illuminating light. (See the bottom photograph to the right.)
When Thomas Young (1773–1829) first demonstrated this phenomenon, it indicated that light consists of waves, as the distribution of brightness can be explained by the alternately additive and subtractive interference of wavefronts. Young's experiment, performed in the early 1800s, played a crucial role in the understanding of the wave theory of light, vanquishing the corpuscular theory of light proposed by Isaac Newton, which had been the accepted model of light propagation in the 17th and 18th centuries.
However, the later discovery of the photoelectric effect demonstrated that under different circumstances, light can behave as if it is composed of discrete particles. These seemingly contradictory discoveries made it necessary to go beyond classical physics and take into account the quantum nature of light.
Feynman was fond of saying that all of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment. He also proposed (as a thought experiment) that if detectors were placed before each slit, the interference pattern would disappear.
The Englert–Greenberger duality relation provides a detailed treatment of the mathematics of double-slit interference in the context of quantum mechanics.
A low-intensity double-slit experiment was first performed by G. I. Taylor in 1909, by reducing the level of incident light until photon emission/absorption events were mostly non-overlapping.
A slit interference experiment was not performed with anything other than light until 1961, when Claus Jönsson of the University of Tübingen performed it with coherent electron beams and multiple slits. In 1974, the Italian physicists Pier Giorgio Merli, Gian Franco Missiroli, and Giulio Pozzi performed a related experiment using single electrons from a coherent source and a biprism beam splitter, showing the statistical nature of the buildup of the interference pattern, as predicted by quantum theory. In 2002, the single-electron version of the experiment was voted "the most beautiful experiment" by readers of "Physics World." Since that time a number of related experiments have been published, with a little controversy.
In 2012, Stefano Frabboni and co-workers sent single electrons onto nanofabricated slits (about 100 nm wide) and, by detecting the transmitted electrons with a single-electron detector, they could show the build-up of a double-slit interference pattern. Many related experiments involving the coherent interference have been performed; they are the basis of modern electron diffraction, microscopy and high resolution imaging.
In 2018, single particle interference was demonstrated for antimatter in the Positron Laboratory (L-NESS, Politecnico di Milano) of Rafael Ferragut in Como (Italy), by a group led by Marco Giammarchi.
Variations of the experiment.
Interference from individual particles.
An important version of this experiment involves single particle detection. Illuminating the double-slit with a low intensity results in single particles being detected as white dots on the screen. Remarkably, however, an interference pattern emerges when these particles are allowed to build up one by one (see the image below).
This demonstrates the wave–particle duality, which states that all matter exhibits both wave and particle properties: The particle is measured as a single pulse at a single position, while the modulus squared of the wave describes the probability of detecting the particle at a specific place on the screen giving a statistical interference pattern. This phenomenon has been shown to occur with photons, electrons, atoms, and even some molecules: with buckminsterfullerene (C60) in 2001, with 2 molecules of 430 atoms (C60(C12F25)10 and C168H94F152O8N4S4) in 2011, and with molecules of up to 2000 atoms in 2019.
In addition to interference patterns built up from single particles, up to 4 entangled photons can also show interference patterns.
Mach-Zehnder interferometer.
The Mach–Zehnder interferometer can be seen as a simplified version of the double-slit experiment. Instead of propagating through free space after the two slits, and hitting any position in an extended screen, in the interferometer the photons can only propagate via two paths, and hit two discrete photodetectors. This makes it possible to describe it via simple linear algebra in dimension 2, rather than differential equations.
A photon emitted by the laser hits the first beam splitter and is then in a superposition between the two possible paths. In the second beam splitter these paths interfere, causing the photon to hit the photodetector on the right with probability one, and the photodetector on the bottom with probability zero. It is interesting to consider what would happen if the photon were definitely in either of the paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by detecting the presence of a photon there. In both cases there will be no interference between the paths anymore, and both photodetectors will be hit with probability 1/2. From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths.
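The two-dimensional linear-algebra description mentioned above can be made concrete in a few lines. The following Python sketch uses one common convention for a symmetric 50/50 beam splitter (other phase conventions simply swap which detector receives the photon) and compares the undisturbed interferometer with the case where the path is determined between the beam splitters:
import numpy as np

# 50/50 beam splitter in the {path 0, path 1} basis (one common phase convention).
B = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

psi_in = np.array([1, 0], dtype=complex)   # photon enters along path 0

# Undisturbed interferometer: two beam splitters in sequence.
psi_out = B @ B @ psi_in
print(np.abs(psi_out) ** 2)                # [0. 1.]: one detector fires with certainty

# Which-path case: the photon is found on a definite path between the splitters,
# so the two possibilities add as probabilities rather than amplitudes.
psi_mid = B @ psi_in
probs = np.zeros(2)
for path in (0, 1):
    branch = np.zeros(2, dtype=complex)
    branch[path] = 1.0
    probs += np.abs(psi_mid[path]) ** 2 * np.abs(B @ branch) ** 2
print(probs)                               # [0.5 0.5]: no interference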
"Which-way" experiments and the principle of complementarity.
A well-known thought experiment predicts that if particle detectors are positioned at the slits, showing through which slit a photon goes, the interference pattern will disappear. This which-way experiment illustrates the complementarity principle that photons can behave as either particles or waves, but cannot be observed as both at the same time.
Despite the importance of this thought experiment in the history of quantum mechanics, technically feasible realizations of this experiment were not proposed until the 1970s. (Naive implementations of the textbook thought experiment are not possible because photons cannot be detected without absorbing the photon.) Currently, multiple experiments have been performed illustrating various aspects of complementarity.
An experiment performed in 1987 produced results that demonstrated that partial information could be obtained regarding which path a particle had taken without destroying the interference altogether. This "wave-particle trade-off" takes form of an inequality relating the visibility of the interference pattern and the distinguishability of the which-way paths.
Delayed choice and quantum eraser variations.
Wheeler's delayed-choice experiments demonstrate that extracting "which path" information after a particle passes through the slits can seem to retroactively alter its previous behavior at the slits.
Quantum eraser experiments demonstrate that wave behavior can be restored by erasing or otherwise making permanently unavailable the "which path" information.
A simple do-it-at-home illustration of the quantum eraser phenomenon was given in an article in "Scientific American". If one sets polarizers before each slit with their axes orthogonal to each other, the interference pattern will be eliminated. The polarizers can be considered as introducing which-path information to each beam. Introducing a third polarizer in front of the detector with an axis of 45° relative to the other polarizers "erases" this information, allowing the interference pattern to reappear. This can also be accounted for by considering the light to be a classical wave, and also when using circular polarizers and single photons. Implementations of the polarizers using entangled photon pairs have no classical explanation.
Weak measurement.
In a highly publicized experiment in 2012, researchers claimed to have identified the path each particle had taken without any adverse effects at all on the interference pattern generated by the particles. In order to do this, they used a setup such that particles coming to the screen were not from a point-like source, but from a source with two intensity maxima. However, commentators such as Svensson have pointed out that there is in fact no conflict between the weak measurements performed in this variant of the double-slit experiment and the Heisenberg uncertainty principle. Weak measurement followed by post-selection did not allow simultaneous position and momentum measurements for each individual particle, but rather allowed measurement of the average trajectory of the particles that arrived at different positions. In other words, the experimenters were creating a statistical map of the full trajectory landscape.
Other variations.
In 1967, Pfleegor and Mandel demonstrated two-source interference using two separate lasers as light sources.
It was shown experimentally in 1972 that in a double-slit system where only one slit was open at any time, interference was nonetheless observed provided the path difference was such that the detected photon could have come from either slit. The experimental conditions were such that the photon density in the system was much less than 1.
In 1991, Carnal and Mlynek performed the classic Young's double slit experiment with metastable helium atoms passing through micrometer-scale slits in gold foil.
In 1999, a quantum interference experiment (using a diffraction grating, rather than two slits) was successfully performed with buckyball molecules (each of which comprises 60 carbon atoms). A buckyball is large enough (diameter about 0.7 nm, nearly half a million times larger than a proton) to be seen in an electron microscope.
In 2002, an electron field emission source was used to demonstrate the double-slit experiment. In this experiment, a coherent electron wave was emitted from two closely located emission sites on the needle apex, which acted as double slits, splitting the wave into two coherent electron waves in a vacuum. The interference pattern between the two electron waves could then be observed. In 2017, researchers performed the double-slit experiment using light-induced field electron emitters. With this technique, emission sites can be optically selected on a scale of ten nanometers. By selectively deactivating (closing) one of the two emissions (slits), researchers were able to show that the interference pattern disappeared.
In 2005, E. R. Eliel presented an experimental and theoretical study of the optical transmission of a thin metal screen perforated by two subwavelength slits, separated by many optical wavelengths. The total intensity of the far-field double-slit pattern is shown to be reduced or enhanced as a function of the wavelength of the incident light beam.
In 2012, researchers at the University of Nebraska–Lincoln performed the double-slit experiment with electrons as described by Richard Feynman, using new instruments that allowed control of the transmission of the two slits and the monitoring of single-electron detection events. Electrons were fired by an electron gun and passed through one or two slits of 62 nm wide × 4 μm tall.
In 2013, a quantum interference experiment (using diffraction gratings, rather than two slits) was successfully performed with molecules that each comprised 810 atoms (whose total mass was over 10,000 atomic mass units). The record was raised to 2000 atoms (25,000 amu) in 2019.
Hydrodynamic pilot wave analogs.
Hydrodynamic analogs have been developed that can recreate various aspects of quantum mechanical systems, including single-particle interference through a double-slit. A silicone oil droplet, bouncing along the surface of a liquid, self-propels via resonant interactions with its own wave field. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet's interaction with its own ripples, which form what is known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles – including behaviors customarily taken as evidence that elementary particles are spread through space like waves, without any specific location, until they are measured.
Behaviors mimicked via this hydrodynamic pilot-wave system include quantum single particle diffraction, tunneling, quantized orbits, orbital level splitting, spin, and multimodal statistics. It is also possible to infer uncertainty relations and exclusion principles. Videos are available illustrating various features of this system. (See the External links.)
However, more complicated systems that involve two or more particles in superposition are not amenable to such a simple, classically intuitive explanation. Accordingly, no hydrodynamic analog of entanglement has been developed. Nevertheless, optical analogs are possible.
Double-slit experiment on time.
In 2023, an experiment was reported recreating an interference pattern in time by shining a pump laser pulse at a screen coated in indium tin oxide (ITO), which alters the properties of the electrons within the material via the Kerr effect, changing it from transparent to reflective for around 200 femtoseconds. A subsequent probe laser beam hitting the ITO screen then sees this temporary change in optical properties as a slit in time, and two such pulses as a double slit, with a phase difference that adds up destructively or constructively on each frequency component, resulting in an interference pattern. Similar results have been obtained classically with water waves.
Classical wave-optics formulation.
Much of the behaviour of light can be modelled using classical wave theory. The Huygens–Fresnel principle is one such model; it states that each point on a wavefront generates a secondary wavelet, and that the disturbance at any subsequent point can be found by summing the contributions of the individual wavelets at that point. This summation needs to take into account the phase as well as the amplitude of the individual wavelets. Only the intensity of a light field can be measured—this is proportional to the square of the amplitude.
In the double-slit experiment, the two slits are illuminated by the quasi-monochromatic light of a single laser. If the width of the slits is small enough (much less than the wavelength of the laser light), the slits diffract the light into cylindrical waves. These two cylindrical wavefronts are superimposed, and the amplitude, and therefore the intensity, at any point in the combined wavefronts depends on both the magnitude and the phase of the two wavefronts. The difference in phase between the two waves is determined by the difference in the distance travelled by the two waves.
If the viewing distance is large compared with the separation of the slits (the far field), the phase difference can be found using the geometry shown in the figure below right. The path difference between two waves travelling at an angle θ is given by:
formula_0
where d is the distance between the two slits. When the two waves are in phase, i.e. the path difference is equal to an integral number of wavelengths, the summed amplitude, and therefore the summed intensity, is maximum, and when they are in anti-phase, i.e. the path difference is equal to half a wavelength, one and a half wavelengths, etc., then the two waves cancel and the summed intensity is zero. This effect is known as interference. The interference fringe maxima occur at angles
formula_1
where λ is the wavelength of the light. The angular spacing of the fringes, θ"f", is given by
formula_2
The spacing of the fringes at a distance "z" from the slits is given by
formula_3
For example, if two slits are separated by 0.5 mm ("d"), and are illuminated with a 0.6 μm wavelength laser (λ), then at a distance of 1 m ("z"), the spacing of the fringes will be 1.2 mm.
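This arithmetic is easy to verify directly (a small Python check, with all lengths in metres):
wavelength = 0.6e-6       # 0.6 um laser
slit_separation = 0.5e-3  # 0.5 mm
distance = 1.0            # 1 m from the slits to the screen

fringe_spacing = distance * wavelength / slit_separation
print(fringe_spacing)     # 0.0012 m, i.e. 1.2 mm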
If the width of the slits "b" is appreciable compared to the wavelength, the Fraunhofer diffraction equation is needed to determine the intensity of the diffracted light as follows:
formula_4
where the sinc function is defined as sinc("x") = sin("x")/"x" for "x" ≠ 0, and sinc(0) = 1.
This is illustrated in the figure above, where the first pattern is the diffraction pattern of a single slit, given by the sinc function in this equation, and the second figure shows the combined intensity of the light diffracted from the two slits, where the cos function represents the fine structure, and the coarser structure represents diffraction by the individual slits as described by the sinc function.
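A short Python sketch of this intensity pattern (relative units; the slit width and other parameters are purely illustrative). Note that NumPy's sinc is the normalized sin(πu)/(πu), so its argument is divided by π to match the convention used here:
import numpy as np

def double_slit_intensity(theta, wavelength, slit_separation, slit_width):
    # I(theta) proportional to cos^2(pi*d*sin(theta)/lambda) * sinc^2(pi*b*sin(theta)/lambda)
    s = np.sin(theta)
    interference = np.cos(np.pi * slit_separation * s / wavelength) ** 2
    # np.sinc(u) = sin(pi*u)/(pi*u), so divide the argument by pi to get sin(x)/x above.
    envelope = np.sinc(slit_width * s / wavelength) ** 2
    return interference * envelope

theta = np.linspace(-0.01, 0.01, 2001)                      # small angles in radians
I = double_slit_intensity(theta, 0.6e-6, 0.5e-3, 0.1e-3)
print(I[len(I) // 2], I.max())                              # central maximum equals 1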
Similar calculations for the near field can be made by applying the Fresnel diffraction equation, which implies that as the plane of observation gets closer to the plane in which the slits are located, the diffraction patterns associated with each slit decrease in size, so that the area in which interference occurs is reduced, and may vanish altogether when there is no overlap in the two diffracted patterns.
Path-integral formulation.
The double-slit experiment can illustrate the path integral formulation of quantum mechanics provided by Feynman. The path integral formulation replaces the classical notion of a single, unique trajectory for a system, with a sum over all possible trajectories. The trajectories are added together by using functional integration.
Each path is considered equally likely, and thus contributes the same amount. However, the phase of this contribution at any given point along the path is determined by the action along the path:
formula_5
All these contributions are then added together, and the magnitude of the final result is squared, to get the probability distribution for the position of a particle:
formula_6
As is always the case when calculating probability, the results must then be normalized by imposing:
formula_7
The probability distribution of the outcome is the normalized square of the norm of the superposition, over all paths from the point of origin to the final point, of waves propagating proportionally to the action along each path. The differences in the cumulative action along the different paths (and thus the relative phases of the contributions) produces the interference pattern observed by the double-slit experiment. Feynman stressed that his formulation is merely a mathematical description, not an attempt to describe a real process that we can measure.
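A toy numerical illustration of this sum for the double-slit geometry: each slit is represented by a single straight path whose contribution has a phase proportional to its length, which reduces to the classical two-source interference described earlier. This is only a sketch of the idea (parameter values are illustrative), not a full path integral:
import numpy as np

wavelength = 0.6e-6
k = 2 * np.pi / wavelength     # phase accumulated per unit path length
d = 0.5e-3                     # slit separation
z = 1.0                        # distance from the slits to the screen

y = np.linspace(-2.4e-3, 2.4e-3, 9)          # screen positions, 0.6 mm apart
L1 = np.sqrt(z**2 + (y - d / 2) ** 2)        # one representative path per slit
L2 = np.sqrt(z**2 + (y + d / 2) ** 2)

amplitude = np.exp(1j * k * L1) + np.exp(1j * k * L2)   # sum over the two paths
probability = np.abs(amplitude) ** 2
print(probability / probability.max())       # alternating maxima (~1) and minima (~0);
                                             # fringe spacing z*lambda/d = 1.2 mm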
Interpretations of the experiment.
Like the Schrödinger's cat thought experiment, the double-slit experiment is often used to highlight the differences and similarities between the various interpretations of quantum mechanics.
Copenhagen interpretation.
The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics, stemming from the work of Niels Bohr, Werner Heisenberg, Max Born, and others. The term "Copenhagen interpretation" was apparently coined by Heisenberg during the 1950s to refer to ideas developed in the 1925–1927 period, glossing over his disagreements with Bohr. Consequently, there is no definitive historical statement of what the interpretation entails. Features common across versions of the Copenhagen interpretation include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object, except according to the results of its measurement. A particular experiment can demonstrate particle behavior (passing through a definite slit) or wave behavior (interference), but not both at the same time. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' personal beliefs and other arbitrary mental factors.
The results from the most basic double slit experiment, the observation of an interference pattern, is explained by wave interference from the two paths to the screen from each of the two slits. The single-particle results show that the waves are probability amplitudes which square to produce a probability distribution. The particles are discrete and identical; many are needed to build up the full interference pattern. The results from some of the which-way experiments are described as observations of complementarity: modifying the experiment to monitor the slit suppresses the interference pattern. Other which-way experiments make no mention of complementarity in their analysis.
Relational interpretation.
According to the relational interpretation of quantum mechanics, first proposed by Carlo Rovelli, observations such as those in the double-slit experiment result specifically from the interaction between the observer (measuring device) and the object being observed (physically interacted with), not any absolute property possessed by the object. In the case of an electron, if it is initially "observed" at a particular slit, then the observer–particle (photon–electron) interaction includes information about the electron's position. This partially constrains the particle's eventual location at the screen. If it is "observed" (measured with a photon) not at a particular slit but rather at the screen, then there is no "which path" information as part of the interaction, so the electron's "observed" position on the screen is determined strictly by its probability function. This makes the resulting pattern on the screen the same as if each individual electron had passed through both slits.
Many-worlds interpretation.
As with Copenhagen, there are multiple variants of the many-worlds interpretation. The unifying theme is that physical reality is identified with a wavefunction, and this wavefunction always evolves unitarily, i.e., following the Schrödinger equation with no collapses. Consequently, there are many parallel universes, which only interact with each other through interference. David Deutsch argues that the way to understand the double-slit experiment is that in each universe the particle travels through a specific slit, but its motion is affected by the interference with particles in other universes. This creates the observable fringes. David Wallace, another advocate of the many-worlds interpretation, writes that in the familiar setup of the double-slit experiment the two paths are not sufficiently separated for a description in terms of parallel universes to make sense.
De Broglie–Bohm theory.
An alternative to the standard understanding of quantum mechanics, the De Broglie–Bohm theory states that particles also have precise locations at all times, and that their velocities are defined by the wave-function. So while a single particle will travel through one particular slit in the double-slit experiment, the so-called "pilot wave" that influences it will travel through both. The two slit de Broglie-Bohm trajectories were first calculated by Chris Dewdney while working with Chris Philippidis and Basil Hiley at Birkbeck College (London). The de Broglie-Bohm theory produces the same statistical results as standard quantum mechanics, but dispenses with many of its conceptual difficulties by adding complexity through an "ad hoc" quantum potential to guide the particles.
While the model is in many ways similar to the Schrödinger equation, it is known to fail for relativistic cases and does not account for features such as particle creation or annihilation in quantum field theory. Many authors such as Nobel laureates Werner Heisenberg, Sir Anthony James Leggett and Sir Roger Penrose have criticized it for not adding anything new.
More complex variants of this type of approach have appeared, for instance the "three wave hypothesis" of Ryszard Horodecki as well as other complicated combinations of de Broglie and Compton waves. To date there is no evidence that these are useful. | [
{
"math_id": 0,
"text": "d \\sin \\theta \\approx d \\theta"
},
{
"math_id": 1,
"text": "~ d \\theta_n = n \\lambda,~ n=0,1,2,\\ldots"
},
{
"math_id": 2,
"text": " \\theta_f \\approx \\lambda / d "
},
{
"math_id": 3,
"text": "~w=z \\theta_f = z \\lambda /d"
},
{
"math_id": 4,
"text": "\n\\begin{align}\nI(\\theta)\n&\\propto \\cos^2 \\left [{\\frac {\\pi d \\sin \\theta}{\\lambda}}\\right]~\\mathrm{sinc}^2 \\left [ \\frac {\\pi b \\sin \\theta}{\\lambda} \\right]\n\\end{align}\n"
},
{
"math_id": 5,
"text": "A_{\\text{path}}(x,y,z,t) = e^{i S(x,y,z,t)}"
},
{
"math_id": 6,
"text": "p(x,y,z,t) \\propto \\left\\vert \\int_{\\text{all paths}} e^{i S(x,y,z,t)} \\right\\vert ^2 "
},
{
"math_id": 7,
"text": "\\iiint_{\\text{all space}}p(x,y,z,t)\\,dV = 1"
}
] | https://en.wikipedia.org/wiki?curid=8667 |
8669149 | Criticism of SUVs | Problems with the automobile class
Sport utility vehicles (SUVs) have been criticized for a variety of environmental and automotive safety reasons. The rise in production and marketing of SUVs in the 2010s and 2020s by auto manufacturers has resulted in over 80% of all new car sales in the United States being SUVs or light trucks by October 2021. This rise in SUV sales has also spilled over into the United Kingdom and the European Union. It has generated calls from car safety advocates to downsize in favor of models such as sedans, wagons, and compacts.
SUVs are classified as light trucks in the United States. In many cases, vehicles classified under "light trucks" can avoid certain fuel economy regulations and size regulations—often called a "light truck exemption". Thus, this loophole has led to the mass upselling and marketing of SUVs, with many viewing it as a corporate scam designed to increase profit margins for the auto industry, particularly for the Big Three in the United States.
SUVs generally have poorer fuel efficiency and require more resources to manufacture than smaller vehicles, thus contributing more to climate change and environmental degradation. Their higher center of gravity significantly increases their risk of rollovers. Their larger mass increases their momentum, which results in more damage to other road users in collisions. Their higher front-end profile reduces visibility and makes them at least twice as likely to kill pedestrians they hit. Large SUVs have been shown to have longer braking distances in the dry than traditional passenger cars and small SUVs. Additionally, the psychological sense of security they provide influences drivers to drive less cautiously or rely on their car for their perceived safety, rather than their own driving.
Safety.
SUVs are generally safer for their occupants and more dangerous to other road users than mid-size cars. A 2021 study by the University of Illinois Springfield showed, for example, that SUVs are eight times more likely than passenger cars to kill children in an accident, and several times more lethal to adult pedestrians and cyclists.
When it comes to mortality for vehicle occupants, four-door minicars have a death rate (per 100,000 registration years rather than mileage) of 82, compared with 46 for very large four-doors. This survey reflects the effects of both vehicle design and driving behaviour. Drivers of SUVs, minivans, and large cars may drive differently from the drivers of small or mid-size cars, and this may affect the survey result.
Rollover.
A high center of gravity makes a vehicle more prone to rollover accidents than lower vehicles, especially if the vehicle leaves the road, or if the driver makes a sharp turn during an emergency maneuver. Figures from the US National Highway Traffic Safety Administration show that most passenger cars have about a 10% chance of rollover if involved in a single-vehicle crash, while SUVs have between 14% and 23% (varying from a low of 14% for the all-wheel-drive (AWD) Ford Edge to a high of 23% for the front-wheel-drive (FWD) Ford Escape). Many modern SUVs are equipped with electronic stability control (ESC) to prevent rollovers on flat surfaces, but 95% of rollovers are "tripped", meaning that the vehicle strikes something low, such as a curb or shallow ditch, causing it to tip over.
According to NHTSA data, early SUVs were at a disadvantage in single-vehicle accidents (such as when the driver falls asleep or loses control swerving around a deer), which involve 43% of fatal accidents, with more than double the chance of rolling over. This risk related closely to overall US motor vehicle fatality data, showing that SUVs and pickups generally had a higher fatality rate than cars of the same manufacturer.
According to "Consumer Reports", as of 2009, SUV rollover safety had improved to the extent that on average there were slightly fewer driver fatalities per million vehicles, due to rollovers, in SUVs as opposed to cars. By 2011 the IIHS reported that "drivers of today's SUVs are among the least likely to die in a crash".
Poor handling.
Vehicles that are larger and heavier, like SUVs, require more braking power and more powerful steering assists to turn the wheels quickly. Because of this, an SUV responds to sudden braking and steering maneuvers very differently from what drivers accustomed to lighter vehicles expect. The combination of a much higher center of gravity and greater weight severely affects the cornering ability of SUVs, making rollovers far more likely than in cars or minivans, even at low speeds.
Construction.
Heavier-duty SUVs are typically designed with a truck-style chassis with separate body, while lighter-duty (including cross-over) models are more similar to cars, which are typically built with a unitary construction (in which the body actually forms the structure). Originally designed and built to be work vehicles using a truck chassis, SUVs were not comprehensively redesigned to be safely used as passenger vehicles. The British television programme Fifth Gear staged a crash between a first generation (1989–98) Land Rover Discovery with a separate chassis and body, and a modern Renault Espace IV with monocoque (unit) design. The older SUV offered less protection for occupants than the modern multi-purpose vehicle with unitary construction. In some SUV fatalities involving truck-based construction, lawsuits against the automakers "were settled quietly and confidentially, without any public scrutiny of the results—or the underlying problems with SUV design", thus hiding the danger of vehicles such as the Ford Bronco and Explorer compared to regular passenger cars.
Risk to other road users.
Malcolm Gladwell, writing in "The New Yorker" magazine, contends that because of their greater height and weight and rigid frames, SUVs can affect traffic safety. This height and weight, while potentially giving an advantage to occupants of the vehicle, may pose a risk to drivers of smaller vehicles in multi-vehicle accidents, particularly side impacts.
The initial tests of the Ford Excursion were "horrifying" for its ability to vault over the hood of a Ford Taurus. The big SUV was modified to include a type of blocker bar suggested by the French transportation ministry in 1971, a kind of under-vehicle roll bar designed to keep the large Ford Excursion from rolling over cars that were hit by it. The problem is "impact incompatibility", where the "hard points" of the end of chassis rails of SUVs are higher than the "hard points" of cars, causing the SUV to override the engine compartment and crumple zone of the car. There have been few regulations covering designs of SUVs to address the safety issue. The heavy weight is a risk factor with very large passenger cars, not only with SUVs. The typically higher bumper heights of SUVs, and the stiff truck-based frames of some models, also increase risks in crashes with passenger cars. The Mercedes ML320 was designed with bumpers at the same height as required for passenger cars.
In parts of Europe, effective 2006, the fitting of metal bullbars, also known as grille guards, brush guards, and push bars, to vehicles such as 4x4s and SUVs is only legal if pedestrian-safe plastic bars and grilles are used. Bullbars are often used in Australia, South Africa, and parts of the United States to protect the vehicle from being disabled should it collide with wildlife.
Safety improvements during the 2010s to the present led automobile manufacturers to make design changes to align the energy-absorbing structures of SUVs with those of cars. As a result, car occupants were only 28 percent more likely to die in collisions with SUVs than with cars between 2013 and 2016, compared with 59 percent between 2009 and 2012, according to the IIHS.
Visibility and backover deaths.
Larger vehicles can create visibility problems for other road users by obscuring their view of traffic lights, signs, and other vehicles on the road, plus the road itself. Depending on the design, drivers of some larger vehicles may themselves suffer from poor visibility to the side and the rear. Poor rearward vision has led to many "backover deaths" where vehicles have run over small children when backing out of driveways. The problem of backover deaths has become so widespread that reversing cameras are being installed on some vehicles to improve rearward vision.
While SUVs are often perceived as having inferior rearward vision compared with regular passenger cars, this is not supported by controlled testing which found poor rearward visibility was not limited to any single vehicle class. Australia's NRMA motoring organisation found that regular passenger cars commonly provided inferior rearward vision compared to SUVs, both because of the prevalence of reversing cameras on modern SUVs and the shape of many popular passenger cars, with their high rear window lines and boots (trunks) obstructing rearward vision. In NRMA testing, two out of 42 SUVs (5%) and 29 out of 163 (18%) regular cars had the worst rating (>15-metre blind spot). Of the vehicles that received a perfect 0-metre blind spot rating, 11 out of 42 (26%) were SUVs and eight out of 163 (5%) were regular passenger cars. All of the "perfect score" vehicles had OEM reversing cameras.
Wide bodies in narrow lanes.
The wider bodies of larger vehicles mean they occupy a greater percentage of road lanes. This is particularly noticeable on the narrow roads sometimes found in dense urban areas or rural areas in Europe. Wider vehicles may also have difficulty fitting in some parking spaces and encroach further into traffic lanes when parked alongside the road.
Psychology.
SUV safety concerns are affected by a perception among some consumers that SUVs are safer for their drivers than standard cars, and that they need not take basic precautions as if they were inside a "defensive capsule". According to G. C. Rapaille, a psychological consultant to automakers, many consumers feel safer in SUVs simply because their ride height makes "[their passengers] higher and dominate and look down [sic]. That you can look down [on other people] is psychologically a very powerful notion." This and the height and weight of SUVs may lead to consumers' perception of safety.
Gladwell also noted that SUV popularity is a sign that people began to shift their automobile safety focus from active to passive, to the point that in the US potential SUV buyers will give up extra braking distance because they believe they are helpless to avoid a collision with a tractor-trailer in any vehicle. The four-wheel drive option available to SUVs reinforced the passive safety notion. To support his argument, Gladwell mentioned that automotive engineer David Champion noted that, in his previous driving experience with a Range Rover, his vehicle slid across a four-lane road because he did not perceive the slipping that others had experienced. Gladwell concluded that when a driver feels unsafe driving a vehicle, the vehicle is effectively safer; when a driver feels safe, the vehicle becomes less safe.
Stephen Popiel, a vice president of the automotive market-research company Millward Brown Goldfarb, noted that for most automotive consumers, safety has to do with the notion that they are not in complete control. Gladwell argued that many "accidents" are not outside the driver's control, involving factors such as drunk driving, seat belt use, and the driver's age and experience.
Sense of security.
Studies into the safety of SUVs have reached mixed conclusions. In 2004, the National Highway Traffic Safety Administration released results of a study that indicated that drivers of SUVs were 11% more likely to die in an accident than people in cars. These figures were not driven by the vehicles' inherent safety alone but also reflected a perceived increase in security on the part of drivers. For example, US SUV drivers were found to be less likely to wear their seatbelts and showed a tendency to drive more recklessly (most sensationally perhaps, in a 1996 finding that SUV drivers were more likely to drive drunk).
Actual driver death rates are monitored by the IIHS and vary between models. These statistics do show average driver death rates in the US were lower in larger vehicles from 2002 to 2005, and that there was significant overlap between vehicle categories.
The IIHS report states, "Pound for pound across vehicle types, cars almost always have lower death rates than pickups or SUVs." The NHTSA recorded occupant (driver or passenger) fatalities per 100 million vehicle miles traveled at 1.16 in 2004 and 1.20 in 2003 for light trucks (SUVs, pick-ups and minivans) compared to 1.18 in 2004 and 1.21 in 2003 for passenger cars (all other vehicles).
Marketing practices.
The marketing techniques used to sell SUVs have been under criticism. Advertisers and manufacturers alike have been assailed for greenwashing. Critics have cited SUV commercials that show the product being driven through a wilderness area, even though relatively few SUVs are ever driven off-road. For example, on 22 November 2023, the ASA (Advertising Standards Authority) banned adverts for the Toyota Hilux in the UK for showing the vehicle being driven through a wilderness area.
Fuel economy.
The recent growth of SUVs is sometimes given as one reason why the population has begun to consume more gasoline than in previous years. SUVs generally use more fuel than passenger vehicles or minivans with the same number of seats. Additionally, SUVs up to 8,500 pounds GVWR are classified by the US government as light trucks, and thus are subject to the less strict light truck standard under the Corporate Average Fuel Economy (CAFE) regulations, and SUVs which exceed 8,500 pounds GVWR have been entirely exempt from CAFE standards. This provides less incentive for US manufacturers to produce more fuel-efficient models.
As a result of their off-road design, SUVs may have fuel-inefficient features. A high profile increases wind resistance, and greater mass requires heavier suspensions and larger drivetrains, both of which contribute to increased vehicle weight. Some SUVs come with tires designed for off-road traction rather than low rolling resistance.
Fuel economy factors include:
Average data for vehicle types sold in the US:
Using the figures from the above table, the drag resistance of SUVs may be around 30% higher than that of family sedans (assuming the same drag coefficient, which is not a safe assumption), and the force required for the same acceleration about 35% larger (which again is not a safe assumption).
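To make the comparison above concrete, here is a minimal sketch (in Python) of the drag-power and acceleration-power relations listed in this section's formula set; the vehicle masses, frontal areas, and the shared drag coefficient are illustrative assumptions chosen to roughly match the percentages quoted above, not measured figures.

```python
# Rough comparison of drag power and acceleration power for a hypothetical
# sedan and SUV, using the relations listed in this section:
#   P_drag  = A_cross * cw * v**3 * rho_air / 2
#   P_accel = m_vehicle * a * v
# All vehicle parameters below are illustrative assumptions, and the same
# drag coefficient is used for both (the "not a safe assumption" noted above).

RHO_AIR = 1.225          # kg/m^3, air density near sea level
CW = 0.32                # assumed common drag coefficient

def drag_power(area_m2, speed_ms):
    return area_m2 * CW * speed_ms ** 3 * RHO_AIR / 2

def accel_power(mass_kg, accel_ms2, speed_ms):
    return mass_kg * accel_ms2 * speed_ms

v = 100 / 3.6            # 100 km/h in m/s
a = 1.0                  # m/s^2

sedan = {"area": 2.2, "mass": 1550}   # assumed frontal area (m^2) and mass (kg)
suv   = {"area": 2.9, "mass": 2100}

ratio_drag = drag_power(suv["area"], v) / drag_power(sedan["area"], v)
ratio_accel = accel_power(suv["mass"], a, v) / accel_power(sedan["mass"], a, v)
print(f"drag power ratio SUV/sedan:  {ratio_drag:.2f}")    # about 1.3
print(f"accel power ratio SUV/sedan: {ratio_accel:.2f}")   # about 1.35
```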
Pollution.
Because SUVs tend to use more fuel (mile for mile) than cars with the same engine type, they generate higher volumes of pollutants (particularly carbon dioxide) into the atmosphere. This has been confirmed by LCA (Life Cycle Assessment) studies, which quantify the environmental impacts of products such as cars, often from the time they are produced until they are recycled. One LCA study which took into account the production of greenhouse gases, carcinogens, and waste production found that exclusive cars, sports cars and SUVs were "characterized by a poor environmental performance." Another study found that family size internal combustion vehicles still produced fewer emissions than a hybrid SUV.
Various eco-activist groups, such as the Earth Liberation Front or Les Dégonflés have targeted SUV dealerships and privately owned SUVs due to concern over increased fuel usage.
In the US, light trucks and SUVs are held to a less-strict pollution control standard than passenger cars. In response to the perception that a growing share of fuel consumption and emissions are attributable to these vehicles, the Environmental Protection Agency ruled that by the model year 2009, emissions from all light trucks and passenger cars will be regulated equally.
The British national newspaper "The Independent" reported on a study carried out by CNW Marketing Research which suggested that CO2 emissions alone do not reflect the true environmental costs of a car. The newspaper reported that: "CNW moves beyond the usual CO2 emissions figures and uses a "dust-to-dust" calculation of a car's environmental impact, from its creation to its ultimate destruction." The newspaper also reported that the CNW research put the Jeep Wrangler above the Toyota Prius and other hybrid cars as the greenest car that could be bought in the US. However, it was noted that Toyota disputed the proportion of energy used to make a car compared with how much the vehicle uses during its life; CNW said 80% of the energy a car uses is accounted for by manufacture and 20% in use. Toyota claimed the reverse.
The report has raised controversy. When Oregon radio station KATU asked for comment on the CNW report, Professor John Heywood (with the Massachusetts Institute of Technology (MIT)) saw merit in the study saying, "It raises...some good questions" but "I can only guess at how they did the detailed arithmetic... The danger is a report like this will discourage the kind of thinking we want consumers to do – should I invest in this new technology, should I help this new technology?"
The Rocky Mountain Institute alleged that even after making assumptions that would lower the environmental impact of the Hummer H3 relative to the Prius, "the Prius still has a lower impact on the environment. This indicates that the unpublished assumptions and inputs used by CNW must continue the trend of favoring the Hummer or disfavoring the Prius. Since the researchers at Argonne Labs performed a careful survey of all recent life cycle analysis of cars, especially hybrids, our research underlines the deep divide between CNW's study and all scientifically reviewed and accepted work on the same topic."
A report done by the Pacific Institute alleges "serious biases and flaws" in the study published by CNW, claiming that "the report's conclusions rely on faulty methods of analysis, untenable assumptions, selective use and presentation of data, and a complete lack of peer review."
For his part, CNW's Art Spinella says environmental campaigners may be right about SUVs, but hybrids are an expensive part of the automotive picture. The vehicle at the top of his environmentally-friendly list is the Scion xB because it is easy to build, cheap to run and recycle, and carries a cost of 49 cents a mile over its lifetime. "I don't like the Hummer people using that as an example to justify the fact that they bought a Hummer," he said. "Just as it's not for Prius owners to necessarily believe that they're saving the entire globe, the environment for the entire world, that's not true either."
In the June 2008 "From Dust to Dust" study, the Prius cost per lifetime-mile fell 23.5%, to $2.19 per lifetime mile, while the H3 cost rose 12.5%, to $2.33 per lifetime-mile. Actual results depend upon the distance driven during the vehicle's life.
Greenhouse gas emissions.
Unmodified, SUVs emit 700 megatonnes of carbon dioxide per year, contributing to global warming. While SUVs can be electrified, their manufacturing emissions will always be larger than those of smaller electric cars. They can also be converted to run on a variety of alternative fuels, including hydrogen. That said, the vast majority of these vehicles are not converted to use alternative fuels.
Weight and size.
The weight of a passenger vehicle has a direct statistical contribution to its driver fatality rate according to Informed for LIFE, more weight being beneficial (to the occupant).
The length and especially width of large SUVs is controversial in urban areas. In areas with limited parking spaces, large SUV drivers have been criticized for parking in stalls marked for compact cars or that are too narrow for the width of larger vehicles. Critics have stated that this causes problems such as the loss of use of the adjacent space, reduced accessibility into the entry of an adjacent vehicle, blockage of driveway space, and damage inflicted, by the door, to adjacent vehicles. As a backlash against the alleged space consumption of SUVs, the city of Florence has restricted SUV access to its center, and Paris and Vienna have debated banning them altogether.
Despite common perceptions, SUVs often have equivalent or less interior storage space than wagons, while handling worse and burning more fuel because of their high centre of gravity and weight, respectively.
Activism.
Siân Berry was a founder of the "Alliance against Urban 4×4s", which began in Camden in 2003 and became a national campaign demanding measures to stop 4×4s (or sport utility vehicles) "taking over our cities". The campaign was known for its "theatrical demonstrations" and mock parking tickets, credited to Berry (although now adapted by numerous local groups).
In Sweden, a group which called themselves "Asfaltsdjungelns indianer" (en: The Indians of the asphalt jungle), carried out actions in Stockholm, Gothenburg, Malmö and a number of smaller cities. The group, created in 2007, released the air from the tires on an estimated 300 SUVs during their first year. Their mission was to highlight the high fuel consumption of SUVs, as they thought that SUV owners did not have the right to drive such big vehicles at the expense of others. The group received some attention in media, and declared a truce in December 2007.
Similar activist groups, most likely inspired by the Swedish group, have carried out actions in Denmark, Scotland, and Finland.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{P_{accel}= m_{vehicle} \\cdot a \\cdot v }"
},
{
"math_id": 1,
"text": "P_{accel} \\,\\!"
},
{
"math_id": 2,
"text": "m_{vehicle} \\,\\!"
},
{
"math_id": 3,
"text": "{a} \\,\\!"
},
{
"math_id": 4,
"text": "{v} \\,\\!"
},
{
"math_id": 5,
"text": "{P_{drag}= A_{cross} \\cdot cw_{vehicle} \\cdot \\frac {v_{air}^3 \\rho_{air}} {2} }"
},
{
"math_id": 6,
"text": "P_{drag} \\,\\!"
},
{
"math_id": 7,
"text": "{A_{cross}}\\,\\!"
},
{
"math_id": 8,
"text": "{\\rho_{air}} \\,\\!"
},
{
"math_id": 9,
"text": "v_{air} \\,\\!"
},
{
"math_id": 10,
"text": "{P_{roll}= \\mu_{roll} \\cdot m_{vehicle} \\cdot v }"
},
{
"math_id": 11,
"text": "\\mu_{roll} \\,\\!"
}
] | https://en.wikipedia.org/wiki?curid=8669149 |
866991 | Great conjunction | Conjunction of the planets Jupiter and Saturn
A great conjunction is a conjunction of the planets Jupiter and Saturn, when the two planets appear closest together in the sky. Great conjunctions occur approximately every 20 years when Jupiter "overtakes" Saturn in its orbit. They are named "great" for being by far the rarest of the conjunctions between naked-eye planets (i.e. excluding Uranus and Neptune).
The spacing between the planets varies from conjunction to conjunction with most events being 0.5 to 1.3 degrees (30 to 78 arcminutes, or 1 to 2.5 times the width of a full moon). Very close conjunctions happen much less frequently (though the maximum of 1.3° is still close by inner planet standards): separations of less than 10 arcminutes have only happened four times since 1200, most recently in 2020.
In history.
Great conjunctions attracted considerable attention in the past as omens. During the late Middle Ages and Renaissance they were a topic broached by the pre-scientific and transitional astronomer-astrologers of the period up to the time of Tycho Brahe and Johannes Kepler, by scholastic thinkers such as Roger Bacon and Pierre d'Ailly, and they are mentioned in popular and literary works by authors such as Dante, Lope de Vega and Shakespeare. This interest is traced back in Europe to translations of Arabic texts, especially Albumasar's book on conjunctions.
Clusterings of several planets were considered even more significant. The Chinese apparently remembered the clustering of all five planets in 1953 BC, and noted the clustering of all but Venus in 1576 BC and of all five in 1059 BC. These were connected in Chinese thought to the founding of the first three historical dynasties, the Xia dynasty, the Shang dynasty, and the Zhou dynasty. The intervals involved, of 377.8 years (19 great conjunction intervals) and 516.4 years (26 great conjunction intervals) bring Mars back to approximately the same position. Further repeats of the 516-year period lead to the clustering in AD 1524, considered ominous in Europe at the time of the Radical Reformation, and the upcoming clustering of September 2040, which will involve all five planets again, in a longitude span of less than 7°.
Celestial mechanics.
On average, great conjunction seasons occur once every 19.859 Julian years (each of which is 365.25 days). This number can be calculated by the synodic period formula
formula_0
in which J and S are the orbital periods of Jupiter (4332.59 days) and Saturn (10759.22 days), respectively. This is about 52 days less than 20 years, but in practice, Earth's orbit size can cause great conjunctions to reoccur anytime between 18 years 10 months and 20 years 8 months after the previous one. (See table below.) Since the equivalent periods of other naked-eye planet pairs are all under 900 days, this makes great conjunctions the rarest.
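As a quick numerical check of the synodic-period formula above, the following sketch (in Python) plugs in the orbital periods quoted in this section; the only assumption is the 365.25-day Julian year already stated in the text.

```python
# Synodic period of Jupiter-Saturn conjunctions: 1 / (1/J - 1/S),
# using the orbital periods quoted in this section.

J = 4332.59    # Jupiter's orbital period in days
S = 10759.22   # Saturn's orbital period in days

synodic_days = 1 / (1 / J - 1 / S)
synodic_years = synodic_days / 365.25   # Julian years of 365.25 days

print(f"{synodic_days:.2f} days = {synodic_years:.3f} Julian years")
# Roughly 7253.46 days, i.e. about 19.86 years, matching the value in the text.
```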
Occasionally there is more than one great conjunction in a season, which happens whenever they're close enough to opposition: this is called a triple conjunction (which is not exclusive to great conjunctions). In this scenario, Jupiter and Saturn will occupy the same right ascension on three occasions or same ecliptic longitude on three occasions, depending on which definition of "conjunction" one uses (this is due to apparent retrograde motion and happens within months). The most recent triple conjunction occurred in 1980–81 and the next will be in 2238–39.
The most recent great conjunction occurred on 21 December 2020, and the next will occur on 4 November 2040. During the 2020 great conjunction, the two planets were separated in the sky by 6 arcminutes at their closest point, which was the closest distance between the two planets since 1623. The closeness is the result of the conjunction occurring in the vicinity of one of the two longitudes where the two orbits appear to intersect when viewed from the Sun (which has a point of view similar to Earth).
Because 19.859 years is equal to 1.674 Jupiter orbits and 0.674 Saturn orbits, three of these periods come close to a whole number of revolutions. As successive great conjunctions occur nearly 120° apart, their appearances form a triangular pattern. In a series, every third conjunction returns after some 60 years to the vicinity of the first. These returns are observed to be shifted by some 8° relative to the fixed stars, so no more than four of them occur in the same zodiacal constellation. Usually the conjunctions occur in one of the following "triplicities" or "trigons" of zodiacal constellations:
After about 220 years the pattern shifts to the next trigon, and in about 800 or 900 years returns to the first trigon.
The three points of the triangle revolve in the same direction as the planets at the rate of approximately one-sixth of a revolution per four centuries, thus creating especially close conjunctions on an approximately four-century cycle. Currently the longitudes of close great conjunctions are about 307.4 and 127.4 degrees, in Capricornus and Cancer respectively.
In astrology, one of the four elements was ascribed to each triangular pattern. Particular importance was accorded to the occurrence of a great conjunction in a new trigon, which is bound to happen after some 240 years at most. Even greater importance was attributed to the beginning of a new cycle after all four trigons had been visited.
Medieval astrologers usually gave 960 years as the duration of the full cycle, perhaps because in some cases it took 240 years to pass from one trigon to the next. If a cycle is defined by when the conjunctions return to the same right ascension rather than to the same constellation, then because of axial precession the cycle is less than 800 years. Use of the Alphonsine tables apparently led to the use of precessing signs, and Kepler gave a value of 794 years (40 conjunctions).
Despite mathematical errors and some disagreement among astrologers about when trigons began, belief in the significance of such events generated a stream of publications that grew steadily until the end of the 16th century. As the great conjunction of 1583 was last in the water trigon it was widely supposed to herald apocalyptic changes; a papal bull against divination was issued in 1586 but as nothing significant happened by the feared event of 1603, public interest rapidly died. By the start of the next trigon, modern scientific consensus had condemned astrology as pseudoscience, and astronomers no longer perceived planetary alignments as omens. However, in the year 1962, when all five planets formed a cluster 17° wide, there was considerable concern.
Saturn's orbit plane is inclined 2.485 degrees relative to Earth's, and Jupiter's is inclined 1.303 degrees. The ascending nodes of both planets are similar (100.6 degrees for Jupiter and 113.7 degrees for Saturn), meaning if Saturn is above or below Earth's orbital plane Jupiter usually is too. Because these nodes align so well it would be expected that no closest approach will ever be much worse than the difference between the two inclinations. Indeed, between year 1 and 3000, the maximum conjunction distances were 1.3 degrees in 1306 and 1940. Conjunctions in both years occurred when the planets were tilted most out of the plane: longitude 206 degrees (therefore above the plane) in 1306, and longitude 39 degrees (therefore below the plane) in 1940.
List of great conjunctions (1200 to 2400).
The following table details great conjunctions in between 1200 and 2400. The dates are given for the conjunctions in right ascension (the dates for conjunctions in ecliptic longitude can differ by several days). Dates before 1582 are in the Julian calendar while dates after 1582 are in the Gregorian calendar.
Longitude is measured counterclockwise from the location of the First Point of Aries (the location of the March equinox) at epoch J2000. This non-rotating coordinate system doesn't move with the precession of Earth's axes, thus being suited for calculations of the locations of stars. (In astrometry latitude and longitude are based on the ecliptic which is Earth's orbit extended sunward and anti-sunward indefinitely.) The other common conjunction coordinate system is measured counterclockwise in right ascension from the First Point of Aries and is based on Earth's equator and the meridian of the equinox point both extended upwards indefinitely; ecliptic separations are usually smaller.
Distance is the angular separation between the planets in sixtieths of a degree (minutes of arc) and elongation is the angular distance from the Sun in degrees. An elongation between around −20 and +20 degrees indicates that the Sun is close enough to the conjunction to make it difficult or impossible to see, sometimes more difficult at some geographic latitudes and less difficult elsewhere. Note that the exact moment of conjunction cannot be seen everywhere as it is below the horizon or it is daytime in some places, but a place on Earth affects minimum separation less than it would if an inner planet was involved. Negative elongations indicate the planet is west of the Sun (visible in the morning sky), whereas positive elongations indicate the planet is east of the Sun (visible in the evening sky).
The great conjunction series is roughly analogous to the Saros series for solar eclipses (which are Sun–Moon conjunctions). Conjunctions in a particular series occur about 119.16 years apart. The reason it is every six conjunctions instead of every three is that 119.16 years is closer to a whole number of years than three conjunction intervals (about 59.58 years) are, so Earth will be closer to the same position in its orbit and conjunctions will appear more similar. All series will have progressions where conjunctions gradually shift from only visible before sunrise to visible throughout the night to only visible after sunset and finally back to the morning sky again. The location in the sky of each conjunction in a series should increase in longitude by 16.3 degrees on average, making one full cycle relative to the stars on average once every 2,634 years. If instead we use the convention of measuring longitude eastward from the First Point of Aries, we have to keep in mind that the equinox circulates once every c. 25,772 years, so longitudes measured that way increase slightly faster and those numbers become 17.95 degrees and 2,390 years.
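The figures quoted in this paragraph can be cross-checked with a few lines of arithmetic; the sketch below only recombines numbers already given in the text (the 19.859-year conjunction interval, the 16.3° shift per series step, and the roughly 25,772-year precession period), so small rounding differences are expected.

```python
# Cross-check of the series arithmetic quoted above.

conjunction_interval = 19.859            # years between great conjunctions
series_step = 6 * conjunction_interval   # every sixth conjunction belongs to the same series
print(f"series step: {series_step:.2f} years")            # about 119.15 years

shift_per_step = 16.3                    # degrees of longitude per series step (relative to the stars)
cycle_years = 360 / shift_per_step * series_step
print(f"cycle relative to the stars: {cycle_years:.0f} years")   # roughly 2,600 years

# With longitudes measured from the precessing equinox instead:
precession = 25772                       # years for one full precession cycle
shift_precessing = shift_per_step + 360 * series_step / precession
cycle_precessing = 360 / shift_precessing * series_step
print(f"shift per step (precessing): {shift_precessing:.2f} deg, "
      f"cycle: {cycle_precessing:.0f} years")
```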
A conjunction can be a member of a triple conjunction. In a triple conjunction, the series does not advance by one each event as the constellation and year is the same or close to it, this is the only time great conjunctions can be less than about 20 years apart.
<templatestyles src="Col-begin/styles.css"/>
Notable great conjunctions.
<templatestyles src="Template:Bar chart/styles.css"/>
7 BC.
When studying the great conjunction of 1603, Johannes Kepler thought that the Star of Bethlehem might have been the occurrence of a great conjunction. He calculated that a triple conjunction of Jupiter and Saturn occurred in 7 BC (−6 using astronomical year numbering).
1563.
The astronomers from the Cracow Academy (Jan Muscenius, Stanisław Jakobejusz, Nicolaus Schadeck, Petrus Probosczowicze, and others) observed the great conjunction of 1563 to compare Alfonsine tables (based on a geocentric model) with the Prutenic Tables (based on Copernican heliocentrism). In the Prutenic Tables the astronomers found Jupiter and Saturn so close to each other that Jupiter covered Saturn (actual angular separation was 6.8 minutes on 25 August 1563). The Alfonsine tables suggested that the conjunction should be observed on another day but on the day indicated by the Alfonsine tables the angular separation was a full 141 minutes. The Cracow professors suggested following the more accurate Copernican predictions and between 1578 and 1580 Copernican heliocentrism was lectured on three times by Valentin Fontani.
This conjunction was also observed by Tycho Brahe, who noticed that the Copernican and Ptolemaic tables used to predict the conjunction were inaccurate. This led him to realise that progress in astronomy required systematic, rigorous observation, night after night, using the most accurate instruments obtainable.
2020.
The great conjunction of 2020 was the closest since 1623 and eighth closest of the first three millennia AD, with a minimum separation between the two planets of 6.1 arcminutes. This great conjunction was also the most easily visible close conjunction since 1226 (as the previous close conjunctions in 1563 and 1623 were closer to the Sun and therefore more difficult to see). It occurred seven weeks after the heliocentric conjunction, when Jupiter and Saturn shared the same heliocentric longitude.
The closest separation occurred on 21 December at 18:20 UTC, when Jupiter was 0.1° south of Saturn and 30° east of the Sun. This meant both planets appeared together in the field of view of most small- and medium-sized telescopes (though they were distinguishable from each other without optical aid). During the closest approach, both planets appeared to be a binary object to the naked eye. From mid-northern latitudes, the planets were visible one hour after sunset at less than 15° in altitude above the southwestern horizon in the constellation of Capricornus.
The conjunction attracted considerable media attention, with news sources calling it the "Christmas Star" due to the proximity of the date of the conjunction to Christmas, and for a great conjunction being one of the hypothesized explanations for the biblical Star of Bethlehem.
7541.
As well as being a triple conjunction, the great conjunction of 7541 is expected to feature two occultations: one partial on 16 February, and one total on 17 June. Superimposition requires a separation of less than approximately 0.4 arcminutes. This will be the first occultation between the two planets since 6857 BC, and the only instance of two occultations within the same year in maybe a million years.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{1}{\\left ( \\frac{1}{J} - \\frac{1}{S} \\right )} \\approx 7253.46 \\; \\mathrm{days} \\displaystyle,"
}
] | https://en.wikipedia.org/wiki?curid=866991 |
867041 | Tropical geometry | Skeletonized version of algebraic geometry
In mathematics, tropical geometry is the study of polynomials and their geometric properties when addition is replaced with minimization and multiplication is replaced with ordinary addition:
formula_0
formula_1
So for example, the classical polynomial formula_2 would become formula_3. Such polynomials and their solutions have important applications in optimization problems, for example the problem of optimizing departure times for a network of trains.
Tropical geometry is a variant of algebraic geometry in which polynomial graphs resemble piecewise linear meshes, and in which numbers belong to the tropical semiring instead of a field. Because classical and tropical geometry are closely related, results and methods can be converted between them. Algebraic varieties can be mapped to a tropical counterpart and, since this process still retains some geometric information about the original variety, it can be used to help prove and generalize classical results from algebraic geometry, such as the Brill–Noether theorem, using the tools of tropical geometry.
History.
The basic ideas of tropical analysis were developed independently using the same notation by mathematicians working in various fields. The central ideas of tropical geometry appeared in different forms in a number of earlier works. For example, Victor Pavlovich Maslov introduced a tropical version of the process of integration. He also noticed that the Legendre transformation and solutions of the Hamilton–Jacobi equation are linear operations in the tropical sense. However, only since the late 1990s has an effort been made to consolidate the basic definitions of the theory. This was motivated by its application to enumerative algebraic geometry, with ideas from Maxim Kontsevich and works by Grigory Mikhalkin among others.
The adjective "tropical" was coined by French mathematicians in honor of the Hungarian-born Brazilian computer scientist Imre Simon, who wrote on the field. Jean-Éric Pin attributes the coinage to Dominique Perrin, whereas Simon himself attributes the word to Christian Choffrut.
Algebra background.
Tropical geometry is based on the tropical semiring. This is defined in two ways, depending on max or min convention.
The "min tropical semiring" is the semiring formula_4, with the operations:
formula_0
formula_1
The operations formula_5 and formula_6 are referred to as "tropical addition" and "tropical multiplication" respectively. The identity element for formula_5 is formula_7, and the identity element for formula_6 is 0.
Similarly, the "max tropical semiring" is the semiring formula_8, with operations:
formula_9
formula_1
The identity element for formula_5 is formula_10, and the identity element for formula_6 is 0.
These semirings are isomorphic, under negation formula_11, and generally one of these is chosen and referred to simply as the "tropical semiring". Conventions differ between authors and subfields: some use the "min" convention, some use the "max" convention.
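As an illustration of the min convention, here is a minimal sketch of the two tropical operations applied to the tropicalized polynomial from the introduction; representing +∞ by floating-point infinity is just an implementation choice.

```python
import math

# Min-convention tropical semiring: "addition" is min, "multiplication" is +.
INF = math.inf   # identity element for tropical addition

def t_add(x, y):
    return min(x, y)

def t_mul(x, y):
    return x + y

# The classical polynomial x^3 + 2xy + y^4 from the introduction becomes
#   min{ x+x+x,  2 + x + y,  y+y+y+y }.
def trop_example(x, y):
    return t_add(t_add(3 * x, 2 + x + y), 4 * y)

print(trop_example(1, 2))            # min{3, 5, 8} = 3
print(trop_example(5, 1))            # min{15, 8, 4} = 4
print(t_add(INF, 7), t_mul(0, 7))    # identities: min(inf, 7) = 7, 0 + 7 = 7
```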
The tropical semiring operations model how valuations behave under addition and multiplication in a valued field.
Some common valued fields encountered in tropical geometry (with min convention) are:
Tropical polynomials.
A "tropical polynomial" is a function formula_19 that can be expressed as the tropical sum of a finite number of "monomial terms". A monomial term is a tropical product (and/or quotient) of a constant and variables from formula_20. Thus a tropical polynomial "F" is the minimum of a finite collection of affine-linear functions in which the variables have integer coefficients, so it is concave, continuous, and piecewise linear.
formula_21
Given a polynomial "f" in the Laurent polynomial ring formula_22 where "K" is a valued field, the "tropicalization" of "f", denoted formula_23, is the tropical polynomial obtained from "f" by replacing multiplication and addition by their tropical counterparts and each constant in "K" by its valuation. That is, if
formula_24
then
formula_25
The set of points where a tropical polynomial "F" is non-differentiable is called its associated "tropical hypersurface", denoted formula_26 (in analogy to the vanishing set of a polynomial). Equivalently, formula_26 is the set of points where the minimum among the terms of "F" is achieved at least twice. When formula_27 for a Laurent polynomial "f", this latter characterization of formula_26 reflects the fact that at any solution to formula_28, the minimum valuation of the terms of "f" must be achieved at least twice in order for them all to cancel.
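To make the "minimum attained at least twice" characterization concrete, the sketch below represents a min-plus tropical polynomial as a list of (coefficient, exponent vector) terms and tests whether a given point lies on its tropical hypersurface; the tropical line used as the example, and the floating-point tolerance, are assumptions of the sketch rather than anything specific to this article.

```python
# A tropical polynomial (min convention) as a list of terms (c, A):
# the term contributes c + A . w at the point w.
def term_value(c, A, w):
    return c + sum(a * x for a, x in zip(A, w))

def on_tropical_hypersurface(terms, w, tol=1e-9):
    """True if the minimum over the terms is attained at least twice at w."""
    values = sorted(term_value(c, A, w) for c, A in terms)
    return len(values) >= 2 and values[1] - values[0] <= tol

# Tropicalization of a line a*x + b*y + c whose coefficients all have valuation 0:
#   F(x, y) = min{x, y, 0}.
tropical_line = [(0, (1, 0)), (0, (0, 1)), (0, (0, 0))]

print(on_tropical_hypersurface(tropical_line, (0.0, 2.0)))    # True: x = 0 <= y
print(on_tropical_hypersurface(tropical_line, (-1.0, -1.0)))  # True: x = y <= 0
print(on_tropical_hypersurface(tropical_line, (1.0, 2.0)))    # False: min is 0, attained once
```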
Tropical varieties.
Definitions.
For "X" an algebraic variety in the algebraic torus formula_29, the "tropical variety" of "X" or "tropicalization" of "X", denoted formula_30, is a subset of formula_31 that can be defined in several ways. The equivalence of these definitions is referred to as the "Fundamental Theorem of Tropical Geometry".
Intersection of tropical hypersurfaces.
Let formula_32 be the ideal of Laurent polynomials that vanish on "X" in formula_22. Define
formula_33
When "X" is a hypersurface, its vanishing ideal formula_32 is a principal ideal generated by a Laurent polynomial "f", and the tropical variety formula_30 is precisely the tropical hypersurface formula_34.
Every tropical variety is the intersection of a finite number of tropical hypersurfaces. A finite set of polynomials formula_35 is called a "tropical basis" for "X" if formula_30 is the intersection of the tropical hypersurfaces of formula_36. In general, a generating set of formula_32 is not sufficient to form a tropical basis. The intersection of a finite number of tropical hypersurfaces is called a "tropical prevariety" and in general is not a tropical variety.
Initial ideals.
Choosing a vector formula_37 in formula_31 defines a map from the monomial terms of formula_22 to formula_38 by sending the term "m" to formula_39. For a Laurent polynomial formula_40, define the "initial form" of "f" to be the sum of the terms formula_41 of "f" for which formula_42 is minimal. For the ideal formula_32, define its "initial ideal" with respect to formula_37 to be
formula_43
Then define
formula_44
Since we are working in the Laurent ring, this is the same as the set of weight vectors for which formula_45 does not contain a monomial.
When "K" has trivial valuation, formula_45 is precisely the initial ideal of formula_32 with respect to the monomial order given by a weight vector formula_37. It follows that formula_30 is a subfan of the Gröbner fan of formula_32.
Image of the valuation map.
Suppose that "X" is a variety over a field "K" with valuation "v" whose image is dense in formula_38 (for example a field of Puiseux series). By acting coordinate-wise, "v" defines a map from the algebraic torus formula_29 to formula_31. Then define
formula_46
where the overline indicates the closure in the Euclidean topology. If the valuation of "K" is not dense in formula_38, then the above definition can be adapted by extending scalars to a larger field which does have a dense valuation.
This definition shows that formula_30 is the non-Archimedean amoeba over an algebraically closed non-Archimedean field "K".
If "X" is a variety over formula_13, formula_30 can be considered as the limiting object of the amoeba formula_47 as the base "t" of the logarithm map goes to infinity.
Polyhedral complex.
The following characterization describes tropical varieties intrinsically without reference to algebraic varieties and tropicalization.
A set "V" in formula_31 is an irreducible tropical variety if it is the support of a weighted polyhedral complex of pure dimension "d" that satisfies the "zero-tension condition" and is connected in codimension one. When "d" is one, the zero-tension condition means that around each vertex, the weighted-sum of the out-going directions of edges equals zero. For higher dimension, sums are taken instead around each cell of dimension formula_48 after quotienting out the affine span of the cell. The property that "V" is connected in codimension one means for any two points lying on dimension "d" cells, there is a path connecting them that does not pass through any cells of dimension less than formula_48.
Tropical curves.
The study of "tropical curves" (tropical varieties of dimension one) is particularly well developed and is strongly related to graph theory. For instance, the theory of divisors of tropical curves are related to chip-firing games on graphs associated to the tropical curves.
Many classical theorems of algebraic geometry have counterparts in tropical geometry, including:
Oleg Viro used tropical curves to classify real curves of degree 7 in the plane up to isotopy. His method of "patchworking" gives a procedure to build a real curve of a given isotopy class from its tropical curve.
Applications.
A tropical line appeared in Paul Klemperer's design of auctions used by the Bank of England during the financial crisis in 2007. Yoshinori Shiozawa defined subtropical algebra as max-times or min-times semiring (instead of max-plus and min-plus). He found that Ricardian trade theory (international trade without input trade) can be interpreted as subtropical convex algebra. Tropical geometry has also been used for analyzing the complexity of feedforward neural networks with ReLU activation.
Moreover, several optimization problems arising for instance in job scheduling, location analysis, transportation networks, decision making and discrete event dynamical systems can be formulated and solved in the framework of tropical geometry. A tropical counterpart of the Abel–Jacobi map can be applied to a crystal design. The weights in a weighted finite-state transducer are often required to be a tropical semiring. Tropical geometry can show self-organized criticality.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "x \\oplus y = \\min\\{x, y \\},"
},
{
"math_id": 1,
"text": "x \\otimes y = x + y."
},
{
"math_id": 2,
"text": "x^3 + 2xy + y^4"
},
{
"math_id": 3,
"text": "\\min\\{x+x+x,\\; 2+x+y,\\; y+y+y+y\\}"
},
{
"math_id": 4,
"text": "(\\R \\cup \\{+\\infty\\}, \\oplus, \\otimes)"
},
{
"math_id": 5,
"text": "\\oplus"
},
{
"math_id": 6,
"text": "\\otimes"
},
{
"math_id": 7,
"text": "+\\infty"
},
{
"math_id": 8,
"text": "(\\R \\cup \\{-\\infty\\}, \\oplus, \\otimes)"
},
{
"math_id": 9,
"text": "x \\oplus y = \\max\\{x, y \\},"
},
{
"math_id": 10,
"text": "-\\infty"
},
{
"math_id": 11,
"text": "x \\mapsto -x"
},
{
"math_id": 12,
"text": "\\Q"
},
{
"math_id": 13,
"text": "\\Complex"
},
{
"math_id": 14,
"text": "v(a) = 0"
},
{
"math_id": 15,
"text": "a\\ne 0"
},
{
"math_id": 16,
"text": "v_p(p^n a/b) = n"
},
{
"math_id": 17,
"text": "\\Complex(\\!(t)\\!)"
},
{
"math_id": 18,
"text": "\\Complex\\{\\!\\{t\\}\\!\\}"
},
{
"math_id": 19,
"text": "F\\colon \\R^n\\to \\R"
},
{
"math_id": 20,
"text": "X_1,\\ldots , X_n"
},
{
"math_id": 21,
"text": "\n \\begin{align} F(X_1,\\ldots,X_n) &= \\left(C_1 \\otimes X_1^{\\otimes a_{11}} \\otimes \\cdots \\otimes X_n^{\\otimes a_{n1}}\\right) \\oplus \\cdots \\oplus \\left(C_s \\otimes X_1^{\\otimes a_{1s}} \\otimes \\cdots \\otimes X_n^{\\otimes a_{ns}}\\right)\\\\\n &= \\min \\{C_1+a_{11}X_1+\\cdots+a_{n1}X_n,\\; \\ldots,\\; C_s+a_{1s}X_1+\\cdots+a_{ns}X_n\\}. \\end{align}\n"
},
{
"math_id": 22,
"text": "K[x_1^{\\pm 1},\\ldots ,x_n^{\\pm 1}]"
},
{
"math_id": 23,
"text": "\\operatorname{Trop}(f)"
},
{
"math_id": 24,
"text": " f = \\sum_{i=1}^s c_i x^{A_i} \\quad \\text{ with } A_1,\\ldots,A_s \\in \\Z^n,"
},
{
"math_id": 25,
"text": "\\operatorname{Trop}(f) = \\bigoplus_{i=1}^s v(c_i) \\otimes X^{\\otimes A_i}. "
},
{
"math_id": 26,
"text": "\\mathrm{V}(F)"
},
{
"math_id": 27,
"text": "F = \\operatorname{Trop}(f)"
},
{
"math_id": 28,
"text": "f = 0"
},
{
"math_id": 29,
"text": "(K^{\\times})^n"
},
{
"math_id": 30,
"text": "\\operatorname{Trop}(X)"
},
{
"math_id": 31,
"text": "\\R^n"
},
{
"math_id": 32,
"text": "\\mathrm{I}(X)"
},
{
"math_id": 33,
"text": "\\operatorname{Trop}(X) = \\bigcap_{f \\in \\mathrm{I}(X)} \\mathrm{V}(\\operatorname{Trop}(f)) \\subseteq \\R^n. "
},
{
"math_id": 34,
"text": "\\mathrm{V}(\\operatorname{Trop}(f))"
},
{
"math_id": 35,
"text": "\\{f_1,\\ldots,f_r\\}\\subseteq \\mathrm{I}(X)"
},
{
"math_id": 36,
"text": "\\operatorname{Trop}(f_1),\\ldots,\\operatorname{Trop}(f_r)"
},
{
"math_id": 37,
"text": "\\mathbf{w}"
},
{
"math_id": 38,
"text": "\\R"
},
{
"math_id": 39,
"text": "\\operatorname{Trop}(m)(\\mathbf{w})"
},
{
"math_id": 40,
"text": "f = m_1 + \\cdots + m_s"
},
{
"math_id": 41,
"text": "m_i"
},
{
"math_id": 42,
"text": "\\operatorname{Trop}(m_i)(\\mathbf{w})"
},
{
"math_id": 43,
"text": "\\operatorname{in}_{\\mathbf{w}}\\mathrm{I}(X) = (\\operatorname{in}_{\\mathbf{w}}(f) : f \\in \\mathrm{I}(X))."
},
{
"math_id": 44,
"text": "\\operatorname{Trop}(X) = \\{\\mathbf{w} \\in \\R^n : \\operatorname{in}_{\\mathbf{w}}\\mathrm{I}(X) \\neq (1)\\}. "
},
{
"math_id": 45,
"text": "\\operatorname{in}_{\\mathbf{w}}\\mathrm{I}(X)"
},
{
"math_id": 46,
"text": "\\operatorname{Trop}(X) = \\overline{\\{(v(x_1),\\ldots,v(x_n)) : (x_1,\\ldots,x_n) \\in X \\}}, "
},
{
"math_id": 47,
"text": "\\operatorname{Log}_t(X)"
},
{
"math_id": 48,
"text": "d-1"
}
] | https://en.wikipedia.org/wiki?curid=867041 |
8673754 | List of second moments of area | The following is a list of second moments of area of some shapes. The second moment of area, also known as area moment of inertia, is a geometrical property of an area which reflects how its points are distributed with respect to an arbitrary axis. The unit of dimension of the second moment of area is length to fourth power, L4, and should not be confused with the mass moment of inertia. If the piece is thin, however, the mass moment of inertia equals the area density times the area moment of inertia.
Second moments of area.
Please note that for the second moment of area equations in the below table: formula_0 and formula_1
Parallel axis theorem.
The parallel axis theorem can be used to determine the second moment of area of a rigid body about any axis, given the body's second moment of area about a parallel axis through the body's centroid, the area of the cross section, and the perpendicular distance ("d") between the axes.
formula_2
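As a small worked example combining the defining integral with the parallel axis theorem, the sketch below numerically integrates y² over a rectangle (recovering the standard closed form bh³/12 about the centroid) and then shifts the axis by a distance d; the rectangle's dimensions are arbitrary example values, not entries from the article's table.

```python
# Second moment of area of a b x h rectangle about its centroidal x-axis,
# by direct integration of y^2 over the area, then shifted with the
# parallel axis theorem  I_x' = I_x + A * d**2.
# The dimensions below are just an example.

def rectangle_Ix_centroid(b, h, n=2000):
    """Numerically integrate y^2 dA over the rectangle centred on the x-axis."""
    dy = h / n
    total = 0.0
    for i in range(n):
        y = -h / 2 + (i + 0.5) * dy   # midpoint of each horizontal strip
        total += y * y * b * dy       # y^2 times the strip area
    return total

b, h, d = 0.2, 0.4, 0.3               # metres: width, height, axis offset

I_numeric = rectangle_Ix_centroid(b, h)
I_exact = b * h ** 3 / 12             # standard closed-form result for a rectangle
I_shifted = I_exact + (b * h) * d ** 2   # parallel axis theorem

print(f"numeric {I_numeric:.6e}, exact {I_exact:.6e}, shifted {I_shifted:.6e} m^4")
```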
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I_x = \\iint_A y^2 \\, dx \\, dy"
},
{
"math_id": 1,
"text": "I_y = \\iint_A x^2 \\, dx \\, dy."
},
{
"math_id": 2,
"text": "I_{x'} = I_{x} + Ad^2"
}
] | https://en.wikipedia.org/wiki?curid=8673754 |
867515 | Tired light | Class of hypothetical redshift mechanisms
Tired light is a class of hypothetical redshift mechanisms that was proposed as an alternative explanation for the redshift-distance relationship. These models have been proposed as alternatives to the models that involve the expansion of the universe. The concept was first proposed in 1929 by Fritz Zwicky, who suggested that if photons lost energy over time through collisions with other particles in a regular way, the more distant objects would appear redder than more nearby ones.
Zwicky acknowledged that any sort of scattering of light would blur the images of distant objects more than what is seen. Additionally, the surface brightness of galaxies evolving with time, time dilation of cosmological sources, and a thermal spectrum of the cosmic microwave background have been observed—these effects should not be present if the cosmological redshift was due to any tired light scattering mechanism. Despite periodic re-examination of the concept, tired light has not been supported by observational tests and remains a fringe topic in astrophysics.
History and reception.
Tired light was an idea that came about due to the observation made by Edwin Hubble that distant galaxies have redshifts proportional to their distance. Redshift is a shift in the spectrum of the emitted electromagnetic radiation from an object toward lower energies and frequencies, associated with the phenomenon of the Doppler effect. Observers of spiral nebulae such as Vesto Slipher observed that these objects (now known to be separate galaxies) generally exhibited redshift rather than blueshifts independent of where they were located. Since the relation holds in all directions it cannot be attributed to normal movement with respect to a background which would show an assortment of redshifts and blueshifts. Everything is moving "away" from the Milky Way galaxy. Hubble's contribution was to show that the magnitude of the redshift correlated strongly with the distance to the galaxies.
Based on Slipher's and Hubble's data, in 1927 Georges Lemaître realized that this correlation could fit non-static solutions to the equations of Einstein's theory of gravity, the Friedmann–Lemaître solutions. However, Lemaître's article was appreciated only after Hubble's publication of 1929. The universal redshift-distance relation in this solution is attributable to the effect an expanding universe has on a photon traveling on a null spacetime interval (also known as a "light-like" geodesic). In this formulation, there was still an analogous effect to the Doppler effect, though relative velocities need to be handled with more care since distances can be defined in different ways in an expanding universe.
At the same time, other explanations were proposed that did not concord with general relativity. Edward Milne proposed an explanation compatible with special relativity but not general relativity that there was a giant explosion that could explain redshifts (see Milne universe). Others proposed that systematic effects could explain the redshift-distance correlation. Along this line, Fritz Zwicky proposed a "tired light" mechanism in 1929. Zwicky suggested that photons might slowly lose energy as they travel vast distances through a static universe by interaction with matter or other photons, or by some novel physical mechanism. Since a decrease in energy corresponds to an increase in light's wavelength, this effect would produce a redshift in spectral lines that increase proportionally with the distance of the source. The term "tired light" was coined by Richard Tolman in the early 1930s as a way to refer to this idea. Helge Kragh has noted "Zwicky’s hypothesis was the best known and most elaborate alternative to the expanding universe, but it was far from the only one. More than a dozen physicists, astronomers and amateur scientists proposed in the 1930s tired-light ideas having in common the assumption of nebular photons interacting with intergalactic matter to which they transferred part of their energy." Kragh noted in particular John Quincy Stewart, William Duncan MacMillan, and Walther Nernst.
Tired light mechanisms were among the proposed alternatives to the Big Bang and the Steady State cosmologies, both of which relied on the general relativistic expansion of the universe of the FRW metric. Through the middle of the twentieth century, most cosmologists supported one of these two paradigms, but there were a few scientists, especially those who were working on alternatives to general relativity, who worked with the tired light alternative. As the discipline of observational cosmology developed in the late twentieth century and the associated data became more numerous and accurate, the Big Bang emerged as the cosmological theory most supported by the observational evidence, and it remains the accepted consensus model with a current parametrization that precisely specifies the state and evolution of the universe. Although the proposals of "tired light cosmologies" are now more-or-less relegated to the dustbin of history, as a completely alternative proposal tired-light cosmologies were considered a remote possibility worthy of some consideration in cosmology texts well into the 1980s, though it was dismissed as an unlikely and "ad hoc" proposal by mainstream astrophysicists.
By the 1990s and on into the twenty-first century, a number of falsifying observations have shown that "tired light" hypotheses are not viable explanations for cosmological redshifts. For example, in a static universe with tired light mechanisms, the surface brightness of stars and galaxies should be constant: the farther an object is, the less light we receive, but its apparent area diminishes as well, so the light received divided by the apparent area should be constant. In an expanding universe, by contrast, the surface brightness diminishes with distance. As the observed object recedes, photons arrive at a reduced rate because each photon has to travel a distance that is a little longer than the previous one, and the energy of each photon is reduced a little because of the increasing redshift at larger distance. On the other hand, in an expanding universe, the object appears to be larger than it really is, because it was closer to us when the photons started their travel. This causes a difference in the surface brightness of objects between a static and an expanding universe. This difference is the basis of the Tolman surface brightness test; the studies performed using it favor the expanding-universe hypothesis and rule out static tired light models.
Redshift is directly observable and used by cosmologists as a direct measure of lookback time. They often refer to age and distance to objects in terms of redshift rather than years or light-years. In such a scale, the Big Bang corresponds to a redshift of infinity. Alternative theories of gravity that do not have an expanding universe in them need an alternative to explain the correspondence between redshift and distance that is "sui generis" to the expanding metrics of general relativity. Such theories are sometimes referred to as "tired-light cosmologies", though not all authors are necessarily aware of the historical antecedents.
Specific falsified models.
In general, any "tired light" mechanism must solve some basic problems, in that the observed redshift must:
A number of tired light mechanisms have been suggested over the years. Fritz Zwicky, in his paper proposing these models, investigated a number of redshift explanations, ruling out some himself. The simplest form of a tired light theory assumes an exponential decrease in photon energy with distance traveled:
formula_0
where formula_1 is the energy of the photon at distance formula_2 from the source of light, formula_3 is the energy of the photon at the source of light, and formula_4 is a large constant characterizing the "resistance of the space". To correspond to Hubble's law, the constant formula_4 must be several gigaparsecs. For example, Zwicky considered whether an integrated Compton effect could account for the scale normalization of the above model:
<templatestyles src="Template:Blockquote/styles.css" />
This expected "blurring" of cosmologically distant objects is not seen in the observational evidence, though it would take much larger telescopes than those available at that time to show this with certainty. Alternatively, Zwicky proposed a kind of Sachs–Wolfe effect explanation for the redshift distance relation:
<templatestyles src="Template:Blockquote/styles.css" />
Zwicky's proposals were carefully presented as falsifiable according to later observations:
<templatestyles src="Template:Blockquote/styles.css" />
Such broadening of absorption lines is not seen in high-redshift objects, thus falsifying this particular hypothesis.
Zwicky also notes, in the same paper, that according to a tired light model a distance-redshift relationship would necessarily be present in the light from sources within our own galaxy (even if the redshift would be so small that it would be hard to measure), a relationship that would not appear under a recessional-velocity based theory. He writes, referring to sources of light within our galaxy: "It is especially desirable to determine the redshift independent of the proper velocities of the objects observed". Subsequent to this, astronomers have patiently mapped out the three-dimensional velocity-position phase space for the galaxy and found the redshifts and blueshifts of galactic objects to accord well with the statistical distribution of a spiral galaxy, eliminating any intrinsic redshift component as an effect.
Following Zwicky, in 1935 Edwin Hubble and Richard Tolman compared recessional redshift with a non-recessional explanation, writing that they
<templatestyles src="Template:Blockquote/styles.css" />both incline to the opinion, however, that if the red-shift is not due to recessional motion, its explanation will probably involve some quite new physical principles [... and] use of a static Einstein model of the universe, combined with the assumption that the photons emitted by a nebula lose energy on their journey to the observer by some unknown effect, which is linear with distance, and which leads to a decrease in frequency, without appreciable transverse deflection. These conditions became almost impossible to meet and the overall success of general relativistic explanations for the redshift-distance relation is one of the core reasons that the Big Bang model of the universe remains the cosmology preferred by researchers.
In the early 1950s, Erwin Finlay-Freundlich proposed a redshift as "the result of loss of energy by observed photons traversing a radiation field", which was cited and argued for as an explanation for the redshift-distance relation in a 1962 theory paper in "Nature" by University of Manchester physics professor P. F. Browne. The pre-eminent cosmologist Ralph Asher Alpher wrote a letter to "Nature" three months later in response to this suggestion, heavily criticizing the approach: "No generally accepted physical mechanism has been proposed for this loss." Still, until the so-called "Age of Precision Cosmology" was ushered in with results from the WMAP space probe and modern redshift surveys, tired light models could occasionally get published in the mainstream journals, including one that was published in the February 1979 edition of "Nature" proposing "photon decay" in a curved spacetime, which was criticized five months later in the same journal as being wholly inconsistent with observations of the gravitational redshift observed in the solar limb. In 1986 a paper claiming tired light theories explained redshift better than cosmic expansion was published in the "Astrophysical Journal", but ten months later, in the same journal, such tired light models were shown to be inconsistent with extant observations. As cosmological measurements became more precise and the statistics in cosmological data sets improved, tired light proposals ended up being falsified, to the extent that the theory was described in 2001 by science writer Charles Seife as being "firmly on the fringe of physics 30 years ago; still, scientists sought more direct proofs of the expansion of the cosmos".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E(x)=E_0 \\exp\\left(-\\frac{x}{R_0}\\right)"
},
{
"math_id": 1,
"text": "E(x)"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "E_0"
},
{
"math_id": 4,
"text": "R_0"
}
] | https://en.wikipedia.org/wiki?curid=867515 |
867542 | Topical medication | Medication applied to body surfaces
A topical medication is a medication that is applied to a particular place on or in the body. Most often topical medication means application to body surfaces such as the skin or mucous membranes to treat ailments via a large range of classes including creams, foams, gels, lotions, and ointments. Many topical medications are epicutaneous, meaning that they are applied directly to the skin. Topical medications may also be inhalational, such as asthma medications, or applied to the surface of tissues other than the skin, such as eye drops applied to the conjunctiva, or ear drops placed in the ear, or medications applied to the surface of a tooth. The word "topical" derives from Greek τοπικός "topikos", "of a place".
Justification.
Topical drug delivery is a route of administering drugs via the skin to provide topical therapeutic effects. As skin is one of the largest and most superficial organs in the human body, pharmacists utilise it to deliver various drugs. This system usually provides a local effect on certain positions of the body. In ancient times, people applied herbs to wounds to relieve inflammation or pain. The use of topical drug delivery systems is much broader now, from smoking cessation to beauty purposes. Nowadays, there are numerous dosage forms that can be used topically, including creams, ointments, lotions, patches, dusting powders and much more. This drug delivery system has many advantages: it avoids first-pass metabolism, which can increase bioavailability; it is convenient and easy to apply over a large area; the medication is easy to terminate; and it avoids gastro-intestinal irritation. All of these can increase patient compliance. However, the system also has several disadvantages: it can cause skin irritation, and symptoms like rashes and itchiness may occur. Also, only small particles can pass through the skin, which limits the choice of drugs. Since skin is the main medium of this delivery system, its condition determines the rate of skin penetration and therefore affects the pharmacokinetics of the drug. The temperature, pH value and dryness of the skin need to be considered. Some novel topical drugs on the market are designed to exploit this system as fully as possible.
This localized system provides topical therapeutic effects via the skin, eyes, nose and vagina to treat diseases. The most common usage is for local skin infections. Dermatological products come in various formulations and range in consistency, though the most popular dermal products are semisolid dosage forms for topical treatment.
Factors affecting topical drug absorption.
Topical drug absorption depends on two major factors – biological and physicochemical properties.
The first factor concerns the effect of body structure on the drugs. The degradation of drugs can be affected by the site of application; some studies have found different percutaneous absorption patterns at different sites. Apart from the application site, age also affects absorption, as the skin structure changes with age: collagen levels fall and blood capillary networks broaden with ageing. These features alter the effectiveness of absorption of both hydrophilic and lipophilic substances into the stratum corneum beneath the surface of the skin. The integrity of the skin surface can also affect the permeability of drugs, for example through the density of hair follicles and sweat glands, or where the surface is disrupted by inflammation or dehydration.
The other factor concerns the metabolism of medications in the skin. When a percutaneous drug is applied to the skin, it is gradually absorbed through it. Normally, absorbed drugs are metabolised by various enzymes in the body, which lowers the amount available. The exact amount delivered to the target site of action determines the potency and bioavailability of the drug. If the concentration is too low, the therapeutic effect is impeded; if the concentration is too high, drug toxicity may cause side effects or even harm the body. For topical drug delivery, degradation of drugs in the skin is very low compared to the liver. Drug metabolism is carried out mainly by the cytochrome P450 enzymes, which are not very active in skin, so drugs that are normally metabolised by CYP450 can maintain a high concentration when applied to the skin. Besides enzyme action, the partition coefficient (K) determines the activity of topical drugs. The ability of drug particles to pass through the skin layers also affects absorption. For transdermal activity, medicines with a higher K value partition strongly into the lipid layer of skin cells and are harder to release from it; the trapped molecules then cannot penetrate into the skin. This reduces the efficacy of transdermal drugs, which must reach target cells underneath the skin or diffuse into blood capillaries to exert their effect. Meanwhile, the size of the particles affects this transdermal process: the smaller the drug molecules, the faster the rate of penetration. The polarity of the drug affects the diffusion rate too; a drug with a lower degree of ionization is less polar and can therefore have a faster absorption rate.
Local versus systemic effect.
The definition of the topical route of administration sometimes states that both the application location and the pharmacodynamic effect thereof are local.
In other cases, "topical" is defined as applied to a localized area of the body or to the surface of a body part regardless of the location of the effect. By this definition, topical administration also includes transdermal application, where the substance is administered onto the skin but is absorbed into the body to attain systemic distribution. Such medications are generally hydrophobic chemicals, such as steroid hormones. Specific types include transdermal patches which have become a popular means of administering some drugs for birth control, hormone replacement therapy, and prevention of motion sickness. One example of an antibiotic that may be applied topically is chloramphenicol.
If defined strictly as having local effect, the topical route of administration can also include enteral administration of medications that are poorly absorbable by the gastrointestinal tract. One poorly absorbable antibiotic is vancomycin, which is recommended by mouth as a treatment for severe "Clostridium difficile" colitis.
Choice of base formulation.
A medication's potency often is changed with its base. For example, some topical steroids will be classified one or two strengths higher when moving from cream to ointment. As a rule of thumb, an ointment base is more occlusive and will drive the medication into the skin more rapidly than a solution or cream base.
The manufacturer of each topical product has total control over the content of the base of a medication. Although containing the same active ingredients, one manufacturer's cream might be more acidic than the next, which could cause skin irritation or change its absorption rate. For example, a vaginal formulation of miconazole antifungal cream might irritate the skin less than an athlete's foot formulation of miconazole cream. These variations can, on occasion, result in different clinical outcomes, even though the active ingredient is the same. No comparative potency labeling exists to ensure equal efficacy between brands of topical steroids (the percentage of oil versus water dramatically affects the potency of a topical steroid). Studies have confirmed that the potency of some topical steroid products may differ according to manufacturer or brand. For example, in clinical studies the brand-name Valisone and Kenalog creams have demonstrated significantly better vasoconstriction than some forms of these drugs produced by generic drug manufacturers. However, in a simple base like an ointment, much less variation between manufacturers is typical.
In dermatology, the base of a topical medication is often as important as the medication itself. It is extremely important that a medication be dispensed in the correct base before it is applied to the skin. A pharmacist should not substitute an ointment for a cream, or vice versa, as the potency of the medication can change. Some physicians use a thick ointment to replace the waterproof barrier of the inflamed skin in the treatment of eczema, and a cream might not accomplish the same clinical intention.
Formulations.
There are many general classes, with no clear dividing line among similar formulations. As a result, what the manufacturer's marketing department chooses to list on the label of a topical medication might be completely different from what the form would normally be called.
Cream.
A cream is an emulsion of oil and water in approximately equal proportions. It penetrates the stratum corneum, the outer layer of the skin. Cream is thicker than lotion, and maintains its shape when removed from its container. It tends to be moderate in moisturizing tendency. For topical steroid products, oil-in-water emulsions are common. Creams have a significant risk of causing immunological sensitization due to preservatives and have a high rate of acceptance by patients. There is a great variation in ingredients, composition, pH, and tolerance among generic brands.
Foam.
Topical corticosteroid foams are suitable for treating a range of skin conditions that respond to corticosteroids. These foams are typically simple to apply, which can lead to better patient compliance and, in turn, improve treatment results for those who favor a more convenient and cleaner topical option. Foam can be typically seen with topical steroids marketed for the scalp.
Gel.
Gels are thicker than liquids. Gels are often a semisolid emulsion and sometimes use alcohol as a solvent for the active ingredient; some gels liquefy at body temperature. Gel tends to be cellulose cut with alcohol or acetone. Gels tend to be self-drying, tend to have greatly variable ingredients between brands, and carry a significant risk of inducing hypersensitivity due to fragrances and preservatives. Gel is useful for hairy areas and body folds. In applying gel one should avoid fissures in the skin, due to the stinging effect of the alcohol base. Gel enjoys a high rate of acceptance due to its cosmetic elegance.
Lotion.
Lotions are similar to solutions but are thicker and tend to be more emollient in nature than solutions. They are usually oil mixed with water, and more often than not have less alcohol than solutions. Lotions can be drying if they contain a high amount of alcohol.
Ointment.
An ointment is a homogeneous, viscous, semi-solid preparation; most commonly a greasy, thick water-in-oil emulsion (80% oil, 20% water) having a high viscosity, that is intended for external application to the skin or mucous membranes. Ointments have a water number that defines the maximum amount of water that they can contain. They are used as emollients or for the application of active ingredients to the skin for protective, therapeutic, or prophylactic purposes and where a degree of occlusion is desired.
Ointments are used topically on a variety of body surfaces. These include the skin and the mucous membranes of the eye (an "eye ointment"), chest, vulva, anus, and nose. An ointment may or may not be medicated.
Ointments are usually very moisturizing, and good for dry skin. They have a low risk of sensitization due to having few ingredients beyond the base oil or fat, and low irritation risk. There is typically little variability between brands of drugs. They are often disliked by patients due to greasiness.
The vehicle of an ointment is known as the "ointment base". The choice of a base depends upon the clinical indication for the ointment. The different types of ointment bases are:
The medicaments are dispersed in the base and are divided after penetrating the living cells of the skin.
The water number of an ointment is the maximum quantity of water that 100g of a base can contain at 20 °C.
Ointments are formulated using hydrophobic, hydrophilic, or water-emulsifying bases to provide preparations that are immiscible, miscible, or emulsifiable with skin secretions. They can also be derived from hydrocarbon (fatty), absorption, water-removable, or water-soluble bases.
Evaluation of ointments:
Properties which affect choice of an ointment base are:
Methods of preparation of ointments:
Paste.
Paste combines three agents – oil, water, and powder. It is an ointment in which a powder is suspended.
Powder.
Powder is either the pure drug by itself (talcum powder), or is made of the drug mixed in a carrier such as corn starch or corn cob powder (Zeosorb AF – miconazole powder). It can also be used as an inhaled topical (cocaine powder used in nasal surgery).
Shake lotion.
A shake lotion is a mixture that separates into two or three parts over time. Frequently, an oil mixed with a water-based solution needs to be shaken into suspension before use and includes the instructions: "Shake well before use".
Solid.
Medication may be placed in a solid form. Examples are deodorant, antiperspirants, astringents, and hemostatic agents. Some solids melt when they reach body temperature (e.g. rectal suppositories).
Sponge.
Certain contraceptive methods rely on sponge as a carrier of a liquid medicine. Lemon juice embedded in a sponge has been used as a primitive contraception in some cultures.
Tape.
Cordran tape is an example of a topical steroid applied under occlusion by tape. This greatly increases the potency and absorption of the topical steroid and is used to treat inflammatory skin diseases.
Tincture.
A tincture is a skin preparation that has a high percentage of alcohol. It would normally be used as a drug vehicle if drying of the area is desired.
Topical solution.
Topical solutions can be marketed as drops, rinses, or sprays, are generally of low viscosity, and often use alcohol or water in the base. These are usually a powder dissolved in alcohol, water, and sometimes oil; although a solution that uses alcohol as a base ingredient, as in topical steroids, can cause drying of the skin. There is significant variability among brands, and some solutions may cause irritation, depending on the preservative(s) and fragrances used in the base.
Some examples of topical solutions are given below:
Transdermal patch.
Transdermal patches can be a very precise, time-released method of delivering a drug. Cutting a patch in half might affect the dose delivered. The release of the active component from a transdermal delivery system (patch) may be controlled by diffusion through the adhesive which covers the whole patch, by diffusion through a membrane which may only have adhesive on the patch rim, or by release from a polymer matrix. Cutting a patch might cause rapid dehydration of the base of the medicine and affect the rate of diffusion.
Vapor.
Some medications are applied as an ointment or gel, and reach the mucous membrane via vaporization. Examples are nasal topical decongestants and smelling salt.
Topical Drug Classification System (TCS).
The Topical Drug Classification System (TCS) has been proposed by the FDA. It is modelled on the Biopharmaceutics Classification System (BCS) for oral immediate-release solid drug products, which has been very successful for decades. There are three aspects to assess and four classes in total. The three aspects are qualitative (Q1), quantitative (Q2) and similarity of the in vitro release (IVR) rate (Q3).
Advantages of topical drug delivery systems.
In the early 1970s, the Alza Corporation, through its founder Alejandro Zaffaroni, filed the first US patents describing transdermal delivery systems for scopolamine, nitroglycerin and nicotine. Applying medicines to body surfaces was found to be beneficial in many respects. Topical medicines can give a faster onset and a local effect, as the surface preparation bypasses first-pass metabolism in the liver and intestine. Apart from the absorption advantages, dermal drugs avoid limitations of oral delivery such as nausea and vomiting and poor compliance due to unpalatable tastes. Topical application is an easy way for patients to tackle skin infections in a painless and non-invasive way. From a patient perspective, applying drugs on the skin also provides a stable dosage in the blood, giving optimal bioavailability and therapeutic effect. In case of overdose or unwanted side effects, patients can quickly remove the patch or wash off the medicine to stop drug delivery and eliminate toxicity.
Disadvantages of topical drug delivery systems.
The site where a patch is applied may become irritated, with rashes and itchiness. Hence, for some topical drugs, including nicotine patches for smoking cessation, patients are advised to change the application site each time to avoid continuous irritation of the skin. Also, since the drug needs to penetrate the skin, some drugs may not be able to pass through it. Part of the dose is then "wasted" and the bioavailability of the drug decreases.
Challenges for designing topical dosage form.
Skin penetration is the main challenge for any topical dosage form. The drug needs to penetrate the skin in order to get into the body and exert its function. Diffusion through the skin follows Fick's first law of diffusion. One of the most common versions of Fick's first law of diffusion is:
formula_0
where
The diffusion coefficient D is described by the Stokes–Einstein equation:
formula_1
where
Assuming the concentration gradient is constant for a newly applied topical drug and the temperature is constant (normal body temperature, 37 °C), the viscosity of the medium and the radius of the drug molecule determine the diffusion flux: the higher the viscosity or the larger the radius of the drug, the lower the diffusion flux.
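As a rough numerical sketch of the Stokes–Einstein relation above, the snippet below estimates D for a hypothetical small drug molecule; the viscosity and molecular radius used are illustrative assumptions rather than values taken from the text.

```python
from math import pi

# Stokes-Einstein estimate: D = RT / (6 * pi * eta * r * N_A)
R = 8.314        # gas constant, J/(mol*K)
T = 310.0        # normal body temperature, K (37 C)
N_A = 6.022e23   # Avogadro constant, 1/mol
eta = 0.69e-3    # viscosity, Pa*s (assumed, roughly water at 37 C)
r = 0.5e-9       # hydrodynamic radius of the drug molecule, m (assumed)

D = (R * T) / (6 * pi * eta * r * N_A)
print(f"D = {D:.2e} m^2/s")   # about 6.6e-10 m^2/s with these assumed values

# Doubling the radius (or the viscosity) halves D, and hence, for a fixed
# concentration gradient in Fick's first law, halves the diffusion flux.
D_big = (R * T) / (6 * pi * eta * (2 * r) * N_A)
print(f"D with doubled radius = {D_big:.2e} m^2/s")
```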
New developments.
There are many factors for drug developers to consider in developing new topical formulations.
The first is the effect of the drug vehicle. The medium that carries the topical drug can affect the penetration and efficacy of the active ingredient. For example, the carrier can have a cooling, drying, emollient or protective action to suit the conditions of the application site, such as a gel or lotion for hairy areas. Meanwhile, scientists need to match the type of preparation with the type of lesion; for example, oily ointments should be avoided for acute weepy dermatitis. Chemists also need to consider irritation and sensitization potential, and to ensure that the topical preparation remains stable during storage and transport so that it maintains its efficacy. Another potential material is a nanofiber-based dispersion, which can improve the adhesion of active ingredients to the skin.
To enhance drug penetration into the skin, scientists can use chemical, biochemical, physical and supersaturation enhancement. Advanced Emulgel technology is one such development in topical painkillers: by modifying the above properties, it helps the gel penetrate deep into the skin layers and strengthens the delivery of diclofenac to the point of pain, so as to achieve better therapeutic effects.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "J=-D{dc \\over dx}"
},
{
"math_id": 1,
"text": "D=\\frac{RT}{6\\pi\\eta rN_A}"
},
{
"math_id": 2,
"text": "N_A"
}
] | https://en.wikipedia.org/wiki?curid=867542 |
867612 | 163 (number) | Natural number
163 (one hundred [and] sixty-three) is the natural number following 162 and preceding 164.
In mathematics.
163 is the 38th prime number and a strong prime in the sense that it is greater than the arithmetic mean of its two neighboring primes.
163 is a lucky prime and a fortunate number.
163 is a strictly non-palindromic number, since it is not palindromic in any base between base 2 and base 161.
Given 163, the Mertens function returns 0; it is the fourth prime with this property, the first three such primes being 2, 101 and 149.
As approximations, formula_0, and formula_1
163 is a permutable prime in base 12, in which it is written as 117; the permutations of its digits are 171 and 711, which in base 10 are 229 and 1021 respectively, both of which are prime.
The function formula_2 gives prime values for all values of formula_3 between 0 and 39, while for formula_4 approximately half of all values are prime. 163 appears as a result of solving formula_5, which gives formula_6.
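Both claims about the polynomial are easy to check numerically; the sketch below uses plain trial division, which is adequate for the small values involved.

```python
# Check the prime-generating polynomial f(n) = n^2 - n + 41 by trial division.
def is_prime(m: int) -> bool:
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def f(n: int) -> int:
    return n * n - n + 41

print(all(is_prime(f(n)) for n in range(40)))   # True: prime for n = 0, ..., 39
hits = sum(is_prime(f(n)) for n in range(3000))
print(hits, hits / 3000)                        # roughly half of the values are prime

# 163 enters through the discriminant of n^2 - n + 41:
print(1 - 4 * 41)                               # -163
```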
163 is a Heegner number, the largest of the nine such numbers. That is, the ring of integers of the field formula_7 has unique factorization for formula_8. The only other such integers are
formula_9. (sequence in the OEIS)
163 is the number of linearly Z-independent McKay-Thompson series for the monster group, which also represent their collective maximum dimensional representation. This fact about 163 might be a clue for understanding monstrous moonshine.
formula_10
appears in the Ramanujan constant, since −163 is a quadratic nonresidue modulo all the primes 3, 5, 7, ..., 37. Here formula_11 almost equals the integer 262537412640768744 = 640320^3 + 744. Martin Gardner famously asserted that this identity was exact in a 1975 April Fools' hoax in "Scientific American"; in fact the value is 262537412640768743.99999999999925007259...
It also satisfies formula_12.
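The near-integer values above are straightforward to reproduce; the sketch below assumes the mpmath arbitrary-precision library, though any high-precision arithmetic package would do.

```python
from mpmath import mp, mpf, exp, pi, sqrt, log

mp.dps = 40  # work with 40 significant digits

ramanujan = exp(pi * sqrt(163))
print(ramanujan)                  # 262537412640768743.99999999999925007...
print(mpf(640320) ** 3 + 744)     # the nearby integer 262537412640768744

print(mpf(163) / log(163))        # 31.99999873884..., very close to 32
```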
In other fields.
163 is also:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pi \\approx {2^9 \\over 163} \\approx 3.1411..."
},
{
"math_id": 1,
"text": "e \\approx {163 \\over 3\\cdot4\\cdot5} \\approx 2.7166\\dots"
},
{
"math_id": 2,
"text": "f(n) = n^2 - n + 41"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "n < 3000"
},
{
"math_id": 5,
"text": "f(n)=0"
},
{
"math_id": 6,
"text": "n = (-1+ \\sqrt{-163} ) / 2"
},
{
"math_id": 7,
"text": "\\mathbb{Q}(\\sqrt{-a})"
},
{
"math_id": 8,
"text": "a=163"
},
{
"math_id": 9,
"text": "a = 1, 2, 3, 7, 11, 19, 43, 67"
},
{
"math_id": 10,
"text": "\\sqrt{163}"
},
{
"math_id": 11,
"text": "e^{\\pi \\sqrt{163}}"
},
{
"math_id": 12,
"text": "\\frac{163}{\\log(163)}=32.99999873884..."
}
] | https://en.wikipedia.org/wiki?curid=867612 |
867671 | Importance sampling | Distribution estimation technique
Importance sampling is a Monte Carlo method for evaluating properties of a particular distribution, while only having samples generated from a different distribution than the distribution of interest. Its introduction in statistics is generally attributed to a paper by Teun Kloek and Herman K. van Dijk in 1978, but its precursors can be found in statistical physics as early as 1949. Importance sampling is also related to umbrella sampling in computational physics. Depending on the application, the term may refer to the process of sampling from this alternative distribution, the process of inference, or both.
Basic theory.
Let formula_0 be a random variable in some probability space formula_1. We wish to estimate the expected value of "X" under "P", denoted E["X;P"]. If we have statistically independent random samples formula_2, generated according to "P", then an empirical estimate of E["X;P"] is
formula_3
and the precision of this estimate depends on the variance of "X":
formula_4
The basic idea of importance sampling is to sample the states from a different distribution to lower the variance of the estimation of E["X;P"], or when sampling from "P" is difficult.
This is accomplished by first choosing a random variable formula_5 such that E["L";"P"] = 1 and that "P"-almost everywhere formula_6.
With the variable "L" we define a probability formula_7 that satisfies
formula_8
The variable "X"/"L" will thus be sampled under "P"("L") to estimate E["X;P"] as above and this estimation is improved when
formula_9.
When "X" is of constant sign over Ω, the best variable "L" would clearly be formula_10, so that "X"/"L"* is the searched constant E["X;P"] and a single sample under "P"("L"*) suffices to give its value. Unfortunately we cannot take that choice, because E["X;P"] is precisely the value we are looking for! However this theoretical best case "L*" gives us an insight into what importance sampling does:
formula_11
to the right, formula_12 is one of the infinitesimal elements that sum up to E["X";"P"]:
formula_13
therefore, a good probability change "P"("L") in importance sampling will redistribute the law of "X" so that its samples' frequencies are sorted directly according to their weights in E["X";"P"]. Hence the name "importance sampling."
Importance sampling is often used as a Monte Carlo integrator.
When formula_14 is the uniform distribution and formula_15, E["X;P"] corresponds to the integral of the real function formula_16.
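As a small illustration of this use, the sketch below estimates a simple one-dimensional integral both by plain Monte Carlo and with importance sampling; the integrand and the proposal density are arbitrary choices made for the example, not part of the general theory.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Target: I = integral of exp(x) over [0, 1] = e - 1.
# Plain Monte Carlo: average exp(U) for U ~ Uniform(0, 1).
u = rng.uniform(0.0, 1.0, n)
plain = np.exp(u)

# Importance sampling: proposal g(x) = (1 + x)/1.5 on [0, 1], which roughly
# follows the shape of the integrand.  Inverse-CDF sampling from
# G(x) = (x + x**2/2)/1.5 gives x = sqrt(1 + 3u) - 1.
v = rng.uniform(0.0, 1.0, n)
x = np.sqrt(1.0 + 3.0 * v) - 1.0
weights = 1.5 / (1.0 + x)          # uniform target density (= 1) divided by g(x)
is_samples = np.exp(x) * weights

print("exact      :", np.e - 1)
print("plain MC   :", plain.mean(), "+/-", plain.std(ddof=1) / np.sqrt(n))
print("importance :", is_samples.mean(), "+/-", is_samples.std(ddof=1) / np.sqrt(n))
```

The weighted estimator shows a smaller standard error because the ratio of the integrand to the proposal density is much flatter than the integrand itself.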
Application to probabilistic inference.
Such methods are frequently used to estimate posterior densities or expectations in state and/or parameter estimation problems in probabilistic models that are too hard to treat analytically. Examples include Bayesian networks and importance weighted variational autoencoders.
Application to simulation.
Importance sampling is a variance reduction technique that can be used in the Monte Carlo method. The idea behind importance sampling is that certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. If these "important" values are emphasized by sampling more frequently, then the estimator variance can be reduced. Hence, the basic methodology in importance sampling is to choose a distribution which "encourages" the important values. This use of "biased" distributions will result in a biased estimator if it is applied directly in the simulation. However, the simulation outputs are weighted to correct for the use of the biased distribution, and this ensures that the new importance sampling estimator is unbiased. The weight is given by the likelihood ratio, that is, the Radon–Nikodym derivative of the true underlying distribution with respect to the biased simulation distribution.
The fundamental issue in implementing importance sampling simulation is the choice of the biased distribution which encourages the important regions of the input variables. Choosing or designing a good biased distribution is the "art" of importance sampling. The rewards for a good distribution can be huge run-time savings; the penalty for a bad distribution can be longer run times than for a general Monte Carlo simulation without importance sampling.
Consider formula_17 to be the sample and formula_18 to be the likelihood ratio, where formula_19 is the probability density (mass) function of the desired distribution and formula_20 is the probability density (mass) function of the biased/proposal/sample distribution. Then the problem can be characterized by choosing the sample distribution formula_20 that minimizes the variance of the scaled sample:
formula_21
It can be shown that the following distribution minimizes the above variance:
formula_22
Notice that when formula_23, this variance becomes 0.
Mathematical approach.
Consider estimating by simulation the probability formula_24 of an event formula_25, where formula_17 is a random variable with cumulative distribution function formula_26 and probability density function formula_27, where prime denotes derivative. A formula_28-length independent and identically distributed (i.i.d.) sequence formula_29 is generated from the distribution formula_30, and the number formula_31 of random variables that lie above the threshold formula_32 is counted. The random variable formula_31 is characterized by the binomial distribution
formula_33
One can show that formula_34 and formula_35, so in the limit formula_36 we are able to obtain formula_37. Note that the variance is low if formula_38. Importance sampling is concerned with the determination and use of an alternate density function formula_39 (for formula_17), usually referred to as a biasing density, for the simulation experiment. This density allows the event formula_40 to occur more frequently, so the sequence length formula_28 gets smaller for a given estimator variance. Alternatively, for a given formula_28, use of the biasing density results in a variance smaller than that of the conventional Monte Carlo estimate. From the definition of formula_24, we can introduce formula_39 as below.
formula_41
where
formula_42
is a likelihood ratio and is referred to as the weighting function. The last equality in the above equation motivates the estimator
formula_43
This is the importance sampling estimator of formula_24 and is unbiased. That is, the estimation procedure is to generate i.i.d. samples from formula_39 and for each sample which exceeds formula_44, the estimate is incremented by the weight formula_45 evaluated at the sample value. The results are averaged over formula_46 trials. The variance of the importance sampling estimator is easily shown to be
formula_47
Now, the importance sampling problem focuses on finding a biasing density formula_39 such that the variance of the importance sampling estimator is less than the variance of the general Monte Carlo estimate. A biasing density function that minimizes the variance, and under certain conditions reduces it to zero, is called an optimal biasing density function.
Conventional biasing methods.
Although there are many kinds of biasing methods, the following two methods are most widely used in the applications of importance sampling.
Scaling.
Shifting probability mass into the event region formula_40 by positive scaling of the random variable formula_48 with a number greater than unity has the effect of increasing the variance (mean also) of the density function. This results in a heavier tail of the density, leading to an increase in the event probability. Scaling is probably one of the earliest biasing methods known and has been extensively used in practice. It is simple to implement and usually provides conservative simulation gains as compared to other methods.
In importance sampling by scaling, the simulation density is chosen as the density function of the scaled random variable formula_49, where usually formula_50 for tail probability estimation. By transformation,
formula_51
and the weighting function is
formula_52
While scaling shifts probability mass into the desired event region, it also pushes mass into the complementary region formula_53 which is undesirable. If formula_48 is a sum of formula_54 random variables, the spreading of mass takes place in an formula_54 dimensional space. The consequence of this is a decreasing importance sampling gain for increasing formula_54, and is called the dimensionality effect.
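As a minimal sketch of the scaling approach above, assuming a standard normal density and an arbitrary threshold and scale factor chosen for illustration:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
n = 100_000
t = 4.0      # threshold for the rare event {X >= t}, X ~ N(0, 1)
a = 2.0      # scale factor, a > 1

# Sample from the scaled density f_*(x) = (1/a) f(x/a), i.e. X_* = a * Z with Z ~ N(0, 1),
# and weight each sample by W(x) = a * f(x)/f(x/a) = a * exp(-(x**2/2) * (1 - 1/a**2)).
x = a * rng.standard_normal(n)
w = a * np.exp(-(x**2 / 2.0) * (1.0 - 1.0 / a**2))
estimate = np.mean((x >= t) * w)

print("scaled IS estimate:", estimate)
print("exact probability :", 0.5 * erfc(t / sqrt(2.0)))   # about 3.17e-05
```

With the scaled proposal a few percent of the samples land in the event region, whereas a plain Monte Carlo run of the same length would see the event only about three times.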
A modern version of importance sampling by scaling is so-called sigma-scaled sampling (SSS), which runs multiple Monte Carlo (MC) analyses with different scaling factors. In contrast to many other high-yield estimation methods (like worst-case distances, WCD), SSS does not suffer much from the dimensionality problem. Addressing multiple MC outputs also causes no degradation in efficiency. On the other hand, like WCD, SSS is only designed for Gaussian statistical variables, and unlike WCD, the SSS method is not designed to provide accurate statistical corners. Another SSS disadvantage is that the MC runs with large scale factors may become difficult, e.g. due to model and simulator convergence problems. In addition, SSS faces a strong bias-variance trade-off: using large scale factors, we obtain quite stable yield results, but the larger the scale factors, the larger the bias error. If the advantages of SSS do not matter much in the application of interest, then other methods are often more efficient.
Translation.
Another simple and effective biasing technique employs translation of the density function (and hence random variable) to place much of its probability mass in the rare event region. Translation does not suffer from a dimensionality effect and has been successfully used in several applications relating to simulation of digital communication systems. It often provides better simulation gains than scaling. In biasing by translation, the simulation density is given by
formula_55
where formula_56 is the amount of shift and is to be chosen to minimize the variance of the importance sampling estimator.
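The same rare-event probability can be estimated with biasing by translation, shifting the proposal onto the threshold; the sketch below again assumes a standard normal density, with the threshold and shift chosen only for illustration.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
n = 100_000
t = 4.0      # threshold for the rare event {X >= t}, X ~ N(0, 1)
c = t        # shift the proposal onto the threshold

# Sample from the translated density f_*(x) = f(x - c) and weight by the
# likelihood ratio W(x) = f(x)/f(x - c) = exp(c**2/2 - c*x).
x = rng.standard_normal(n) + c
w = np.exp(c**2 / 2.0 - c * x)
is_samples = (x >= t) * w

print("translated IS:", is_samples.mean(),
      "+/-", is_samples.std(ddof=1) / np.sqrt(n))
print("exact        :", 0.5 * erfc(t / sqrt(2.0)))
```

Because roughly half of the shifted samples fall in the event region, the translated estimator reaches a given precision with far fewer samples than either plain Monte Carlo or the scaled proposal sketched above.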
Effects of system complexity.
The fundamental problem with importance sampling is that designing good biased distributions becomes more complicated as the system complexity increases. Complex systems are the systems with long memory since complex processing of a few inputs is much easier to handle. This dimensionality or memory can cause problems in three ways:
In principle, the importance sampling ideas remain the same in these situations, but the design becomes much harder. A successful approach to combat this problem is essentially breaking down a simulation into several smaller, more sharply defined subproblems. Then importance sampling strategies are used to target each of the simpler subproblems. Examples of techniques to break the simulation down are conditioning and error-event simulation (EES) and regenerative simulation.
Evaluation of importance sampling.
In order to identify successful importance sampling techniques, it is useful to be able to quantify the run-time savings due to the use of the importance sampling approach. The performance measure commonly used is formula_57, and this can be interpreted as the speed-up factor by which the importance sampling estimator achieves the same precision as the MC estimator. This has to be computed empirically since the estimator variances are not likely to be analytically possible when their mean is intractable. Other useful concepts in quantifying an importance sampling estimator are the variance bounds and the notion of asymptotic efficiency. One related measure is the so-called Effective Sample Size (ESS).
Variance cost function.
Variance is not the only possible cost function for a simulation, and other cost functions, such as the mean absolute deviation, are used in various statistical applications. Nevertheless, the variance is the primary cost function addressed in the literature, probably due to the use of variances in confidence intervals and in the performance measure formula_57.
An associated issue is the fact that the ratio formula_57 overestimates the run-time savings due to importance sampling since it does not include the extra computing time required to compute the weight function. Hence, some people evaluate the net run-time improvement by various means. Perhaps a more serious overhead to importance sampling is the time taken to devise and program the technique and analytically derive the desired weight function.
Multiple and adaptive importance sampling.
When different proposal distributions formula_58, formula_59 are jointly used for drawing the samples formula_60 different proper weighting functions can be employed (e.g., see ). In an adaptive setting, the proposal distributions formula_61, formula_59 and formula_62 are updated at each iteration formula_32 of the adaptive importance sampling algorithm. Hence, since a population of proposal densities is used, several suitable combinations of sampling and weighting schemes can be employed.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X\\colon \\Omega\\to \\mathbb{R}"
},
{
"math_id": 1,
"text": "(\\Omega,\\mathcal{F},P)"
},
{
"math_id": 2,
"text": "x_1, \\ldots, x_n"
},
{
"math_id": 3,
"text": "\n \\widehat{\\mathbf{E}}_{n}[X;P] = \\frac{1}{n} \\sum_{i=1}^n x_i \\quad \\mathrm{where}\\; x_i \\sim P(X)\n"
},
{
"math_id": 4,
"text": "\n \\operatorname{var}[\\widehat{\\mathbf{E}}_{n};P] = \\frac{\\operatorname{var}[X;P]} n.\n"
},
{
"math_id": 5,
"text": "L\\geq 0"
},
{
"math_id": 6,
"text": "L(\\omega)\\neq 0"
},
{
"math_id": 7,
"text": "P^{(L)}"
},
{
"math_id": 8,
"text": "\n \\mathbf{E}[X;P] = \\mathbf{E}\\left[\\frac{X}{L};P^{(L)}\\right].\n"
},
{
"math_id": 9,
"text": "\\operatorname{var}\\left[\\frac{X}{L};P^{(L)}\\right] < \\operatorname{var}[X;P]"
},
{
"math_id": 10,
"text": "L^*=\\frac{X}{\\mathbf{E}[X;P]}\\geq 0"
},
{
"math_id": 11,
"text": "\n\\begin{align}\\forall a\\in\\mathbb{R}, \\; P^{(L^*)}(X\\in[a;a+da]) &= \\int_{\\omega\\in\\{X\\in[a;a+da]\\}} \\frac{X(\\omega)}{E[X;P]} \\, dP(\\omega) \\\\[6pt] &= \\frac{1}{E[X;P]}\\; a\\,P(X\\in[a;a+da]) \n\\end{align}"
},
{
"math_id": 12,
"text": "a\\,P(X\\in[a;a+da])"
},
{
"math_id": 13,
"text": "E[X;P] = \\int_{a=-\\infty}^{+\\infty} a\\,P(X\\in[a;a+da]) "
},
{
"math_id": 14,
"text": "P"
},
{
"math_id": 15,
"text": "\\Omega =\\mathbb{R}"
},
{
"math_id": 16,
"text": "X\\colon \\mathbb{R}\\to\\mathbb{R}"
},
{
"math_id": 17,
"text": "X"
},
{
"math_id": 18,
"text": "\\frac{f(X)}{g(X)}"
},
{
"math_id": 19,
"text": "f"
},
{
"math_id": 20,
"text": "g"
},
{
"math_id": 21,
"text": "g^* = \\min_g \\operatorname{var}_g \\left( X \\frac{f(X)}{g(X)} \\right)."
},
{
"math_id": 22,
"text": " \ng^*(X) = \\frac{|X| f(X)}{ \\int |x| f(x) \\, dx}.\n"
},
{
"math_id": 23,
"text": "X\\ge 0"
},
{
"math_id": 24,
"text": "p_t\\,"
},
{
"math_id": 25,
"text": "X \\ge t"
},
{
"math_id": 26,
"text": "F(x)"
},
{
"math_id": 27,
"text": "f(x)= F'(x)\\,"
},
{
"math_id": 28,
"text": "K"
},
{
"math_id": 29,
"text": "X_i\\,"
},
{
"math_id": 30,
"text": "F"
},
{
"math_id": 31,
"text": "k_t"
},
{
"math_id": 32,
"text": "t"
},
{
"math_id": 33,
"text": "P(k_t = k)={K\\choose k}p_t^k(1-p_t)^{K-k},\\,\\quad \\quad k=0,1,\\dots,K."
},
{
"math_id": 34,
"text": "\\operatorname{E} [k_t/K] = p_t"
},
{
"math_id": 35,
"text": "\\operatorname{var} [k_t/K] = p_t(1-p_t)/K"
},
{
"math_id": 36,
"text": "K \\to \\infty"
},
{
"math_id": 37,
"text": "p_t"
},
{
"math_id": 38,
"text": "p_t \\approx 1"
},
{
"math_id": 39,
"text": "f_*\\,"
},
{
"math_id": 40,
"text": "{ X \\ge t\\ }"
},
{
"math_id": 41,
"text": "\n\\begin{align}\np_t & = {E} [1(X \\ge t)] \\\\[6pt]\n& = \\int 1(x \\ge t) \\frac{f(x)}{f_*(x)} f_*(x) \\,dx \\\\[6pt]\n& = E_* [1(X \\ge t) W(X)]\n\\end{align}\n"
},
{
"math_id": 42,
"text": "W(\\cdot) \\equiv \\frac{f(\\cdot)}{f_*(\\cdot)} "
},
{
"math_id": 43,
"text": " \\hat p_t = \\frac{1}{K}\\,\\sum_{i=1}^K 1(X_i \\ge t) W(X_i),\\,\\quad \\quad X_i \\sim f_*"
},
{
"math_id": 44,
"text": "t\\,"
},
{
"math_id": 45,
"text": "W\\,"
},
{
"math_id": 46,
"text": "K\\,"
},
{
"math_id": 47,
"text": "\n\\begin{align}\n\\operatorname{var}_*\\widehat p_t & = \\frac{1}{K}\\operatorname{var}_* [1(X \\ge t)W(X)] \\\\[5pt]\n& = \\frac{1}{K}\\left\\{{E_*}[1(X \\ge t)^2 W^2(X)] - p_t^2\\right\\} \\\\[5pt]\n& = \\frac{1}{K}\\left\\{{E}[1(X \\ge t) W(X)] - p_t^2\\right\\}\n\\end{align}\n"
},
{
"math_id": 48,
"text": "X\\,"
},
{
"math_id": 49,
"text": "aX\\,"
},
{
"math_id": 50,
"text": "a>1"
},
{
"math_id": 51,
"text": " f_*(x)=\\frac{1}{a} f \\bigg( \\frac{x}{a} \\bigg)\\,"
},
{
"math_id": 52,
"text": " W(x)= a \\frac{f(x)}{f(x/a)} \\,"
},
{
"math_id": 53,
"text": "X<t\\,"
},
{
"math_id": 54,
"text": "n\\,"
},
{
"math_id": 55,
"text": " f_*(x)= f(x-c), \\quad c>0 \\,"
},
{
"math_id": 56,
"text": "c\\,"
},
{
"math_id": 57,
"text": "\\sigma^2_{MC} / \\sigma^2_{IS} \\,"
},
{
"math_id": 58,
"text": "g_n(x)"
},
{
"math_id": 59,
"text": "n=1,\\ldots,N,"
},
{
"math_id": 60,
"text": "x_1, \\ldots, x_N, "
},
{
"math_id": 61,
"text": "g_{n,t}(x)"
},
{
"math_id": 62,
"text": "t=1,\\ldots,T,"
}
] | https://en.wikipedia.org/wiki?curid=867671 |
86777 | Northern pike | Species of fish
<templatestyles src="Template:Taxobox/core/styles.css" />
The northern pike (Esox lucius) is a species of carnivorous fish of the genus "Esox" (pikes). They are commonly found in moderately salty and fresh waters of the Northern Hemisphere ("i.e." holarctic in distribution). They are known simply as pike (PL: pike) in Great Britain, Ireland, most of Eastern Europe, Canada and the U.S., although in the Midwest, they may be called a Northern.
Pike can grow to a relatively large size. Their average length is about , with maximum recorded lengths of up to and maximum weights of . The IGFA currently recognises a pike caught by Lothar Louis on Greffern Lake, Germany, on 16 October 1986, as the all-tackle world-record-holding northern pike. Northern pike grow to larger sizes in Eurasia than in North America, and in coastal Eurasian regions than inland ones.
Etymology.
The northern pike gets its common name from its resemblance to the pole-weapon known as the pike (from the Middle English for 'pointed'). Various other unofficial trivial names are common pike, Lakes pike, great northern pike, great northern, northern (in the U.S. Upper Midwest and in the Canadian provinces of Alberta, Manitoba and Saskatchewan), jackfish, jack, slough shark, snake, slimer, slough snake, gator (due to a head similar in shape to that of an alligator), hammer handle, and other such names as "long head" or "pointy nose". Numerous other names can be found in "Field Museum Zool. Leaflet Number 9". Its earlier common name, the luci (now lucy) or luce when fully grown, was used to form its taxonomic name ("Esox lucius") and is used in heraldry.
Description.
Northern pike are most often olive green, shading from yellow to white along the belly. The flank is marked with short, light bar-like spots and a few to many dark spots on the fins. Sometimes, the fins are reddish. Younger pike have yellow stripes along a green body; later, the stripes divide into light spots and the body turns from green to olive green. The lower half of the gill cover lacks scales, and it has large sensory pores on its head and on the underside of its lower jaw which are part of the lateral line system. Unlike the similar-looking and closely related muskellunge, the northern pike has light markings on a dark body background and fewer than six sensory pores on the underside of each side of the lower jaw.
A hybrid between northern pike and muskellunge is known as a tiger muskellunge ("Esox masquinongy × lucius" or "Esox lucius × masquinongy", depending on the sex of each of the contributing species). In the hybrids, the males are invariably sterile, while females are often fertile, and may back-cross with the parent species. Another form of northern pike, the silver pike, is not a subspecies but rather a mutation that occurs in scattered populations. Silver pike, sometimes called silver muskellunge, lack the rows of spots and appear silver, white, or silvery-blue in color. When ill, silver pike have been known to display a somewhat purplish hue; long illness is also the most common cause of male sterility.
In Italy, the newly identified species "Esox cisalpinus" ("southern pike") was long thought to be a color variation of the northern pike, but was in 2011 announced to be a species of its own.
Length and weight.
Northern pike in North America seldom reach the size of their European counterparts; one of the largest specimens known was a specimen from New York. It was caught in Great Sacandaga Lake on 15 September 1940 by Peter Dubuc. Reports of far larger pike have been made, but these are either misidentifications of the pike's larger relative, the muskellunge, or simply have not been properly documented and belong in the realm of legend.
As northern pike grow longer, they increase in weight, and the relationship between length and weight is not linear. The relationship between total length ("L", in inches) and total weight ("W", in pounds) for nearly all species of fish can be expressed by an equation of the form
formula_0
Invariably, "b" is close to 3.0 for all species, and "c" is a constant that varies among species. For northern pike, "b" = 3.096 and "c" = 0.000180 ("c" = 7.089 enables one to put length in meters and weight in kilograms). The relationship described in this section suggests a northern pike will weigh about , while a northern pike will weigh about .
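A short sketch of this relationship using the coefficients quoted above; the sample lengths are arbitrary choices for illustration.

```python
# Length-weight relationship W = c * L^b for northern pike, with b = 3.096 and
# c = 0.000180 (length in inches, weight in pounds) or c = 7.089 (metres, kilograms).
def pike_weight_lb(length_in: float) -> float:
    return 0.000180 * length_in ** 3.096

def pike_weight_kg(length_m: float) -> float:
    return 7.089 * length_m ** 3.096

for inches in (20, 30, 40, 50):           # arbitrary sample lengths
    print(f"{inches} in -> {pike_weight_lb(inches):.1f} lb")

print(f"1.00 m -> {pike_weight_kg(1.00):.1f} kg")
```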
Age.
Northern pike typically live 10–15 years, but sometimes up to 25 years.
Habitat.
Pike are found in sluggish streams and shallow, weedy places in lakes and reservoirs, as well as in cold, clear, rocky waters. They are typical ambush predators; they lie in wait for prey, holding perfectly still for long periods, and then exhibit remarkable acceleration as they strike. They inhabit any water body that contains fish, but suitable places for spawning are also essential. Because of their cannibalistic nature, young pike need places where they can take shelter between plants so they are not eaten. In both cases, rich submerged vegetation is needed. Pike are seldom found in brackish water, except for the Baltic Sea area, where they can be found spending time both in the mouths of rivers and in the open brackish waters of the Baltic Sea. It is normal for pike to return to fresh water after a period in these brackish waters. They seem to prefer water with less turbidity, but that is likely related to their dependence on the presence of vegetation.
Geographic distribution.
"Esox lucius" is found in fresh water throughout the Northern Hemisphere, including Russia, Europe, and North America. It has also been introduced to lakes in Morocco, and is even found in brackish water of the Baltic Sea, but they are confined to the low-salinity water at the surface of the sea, and are seldom seen in brackish water elsewhere.
Within North America, northern pike populations are found in Maine, New Hampshire, Vermont, Massachusetts, Rhode Island, Connecticut, New York, New Jersey, Pennsylvania, Maryland, West Virginia, Ohio, Michigan, Indiana, Illinois, Wisconsin, Minnesota, Iowa, Missouri, North Dakota, South Dakota, Nebraska, Kansas, Montana, Idaho, Utah, Colorado, Oklahoma, northern Texas, northern New Mexico, northern Arizona, Alaska, the Yukon, the Northwest Territories, Alberta, Saskatchewan, Manitoba, Ontario, and Québec (pike are rare in British Columbia and east coast provinces). Watersheds in which pike are found include the Ohio Valley, the upper Mississippi River and its tributaries, and the Great Lakes Basin. They are also stocked in, or have been introduced to, some western lakes and reservoirs for sport fishing, although some fisheries managers believe this practice often threatens other species of fish such as bass, trout, and salmon, causing government agencies to attempt to exterminate the pike by poisoning lakes, such as Stormy Lake, Alaska. "E. lucius" is a severe invasive predator in Box Canyon Reservoir on the Pend Oreille River in northeastern Washington.
Behaviour.
Aggression.
The northern pike is a relatively aggressive species, especially with regard to feeding. For example, when food sources are scarce, cannibalism develops, starting around five weeks in a small percentage of populations. This cannibalism occurs when the ratio of predator to prey is two to one. One can expect this because, when food is scarce, northern pike fight for survival, for example by turning on smaller pike to feed; this is seen in other species such as tiger salamanders. Usually, pike tend to feed on smaller fish, such as the banded killifish. However, when pike exceed long, they feed on larger fish.
Because of cannibalism when food is short, pike suffer a fairly high young mortality rate. Cannibalism is more prevalent in cool summers, as the upcoming pike have slow growth rates in that season and might not be able to reach a size to deter the larger pike. Cannibalism is likely to arise in low growth and low food conditions. Pike do not discriminate siblings well, so cannibalism between siblings is likely.
Aggression also arises from a need for space. Young pike tend to have their food stolen by larger pike. Pike are aggressive if not given enough space because they are territorial. They use a form of foraging known as ambush foraging. Unlike species such as perch, pike undergo bursts of energy instead of actively chasing down prey. As such, a fair amount of inactive time occurs until they find prey. Hunting efficiency decreases with competition; the larger the pike, the larger the area controlled by that particular pike. An inverse relation between vegetation density and pike size exists, which is due to the possibility of cannibalism from the largest pike. This makes sense, as the smaller pike need more vegetation to avoid being eaten. Large pike do not have this worry and can afford the advantage of a large line of sight. They prefer a habitat with tree structure.
There has been at least one instance of a pike attacking a dog.
Pike are occasionally preyed upon by otters.
Physical behavioural traits.
Pike are capable of "fast start" movements, which are sudden high-energy bursts of unsteady swimming. Many other fish exhibit this movement as well. Most fish use this mechanism to avoid life-threatening situations. For the pike, however, it is a tool used to capture prey from their sedentary positions. They flash out in such bursts and capture their prey. These fast starts terminate when the pike has reached maximum velocity. During such motions, pike make "S" conformations while swimming at high rates. To decelerate, they simply make a "C" conformation, exponentially slowing down their speed so that they can "stop". Pike also have short digestion times and long feeding periods; they can undergo many of these fast bursts to collect as much prey as they can. Pike are least active during the night.
Reproduction.
Pike have a strong homing behaviour; they inhabit certain areas by nature. During the summer, they tend to group closer to vegetation than during the winter. The exact reason is not clear, but likely is a result of foraging or possibly reproductive needs to safeguard young. Pike diel rhythm changes significantly over the year. On sunny days, pike stay closer to the shallow shore. On windy days, they are further from shore. When close to the shore, pike have a preference for shallow, vegetated areas. Pike are more stationary in reservoirs than lakes. A possibility is that lakes have more prey to feed upon, or possibly in reservoirs prey will ultimately cross paths with the pike. As such, this could be a form of energy conservation. Pike breed in the spring.
Pike are physically capable of breeding at an age of about two years, spawning in spring when the water temperature first reaches about . They have a tendency to lay a large number of eggs. A likely explanation for such actions is to produce as many surviving offspring as possible, as many offspring most likely die early in life. In females, the gonads enlarge when it is time to shed their eggs. However, after they are shed, these eggs will not hatch if the water is below . Male pike arrive at the breeding grounds before females do, preceding them by a few weeks. In addition, the males stay after the spawning is finished. Parental stock is vital for pike success. Egg survival has been shown to be positively correlated with the number of eggs laid. For breeding, the more stable the water, the greater the fitness of the pike. Mortality results from toxic concentrations of iron or rapid temperature changes, and adult abundance and the strength of the resulting year classes are not related. Year-class strength is instead determined at two points of development: one during the embryo stage between fertilization and closure of the blastopore, and the second between hatching and the termination of the alevin stage.
The colour of the sticky eggs is yellow to orange; the diameter is . The embryos are in length and able to swim after hatching, but stay on the bottom for some time. The embryonic stage is five to 16 days, dependent on water temperature (at and , respectively). Under natural circumstances, the survival from free-swimming larva to 75-mm pike is around 5%.
Food.
The young, free-swimming pike feed on small invertebrates starting with "Daphnia", and quickly move on to bigger prey, such as "Asellus" and "Gammarus". When the body length is , they start feeding on small fish.
A pike has a very typical hunting behaviour; it is able to remain stationary in the water by moving the last fin rays of the dorsal fins and the pectoral fins. Before striking, it bends its body and darts out to the prey using the large surface of its caudal fin, dorsal fin, and anal fin to propel itself. The fish has a distinctive habit of catching its prey sideways in the mouth, immobilising it with its sharp, backward-pointing teeth, and then turning the prey headfirst to swallow it. For larger prey, the pike will usually attempt to drown the prey before carrying it off to be consumed. It eats mainly fish and frogs, but also small mammals and birds fall prey to pike. Young pike have been found dead from choking on a pike of a similar size, an observation referred to by the renowned English poet Ted Hughes in his famous poem "Pike". Northern pike also feed on insects, crayfish, and leeches. They are not very particular and eat spiny fish like perch, and will even take fish as small as sticklebacks if they are the only available prey.
Pike are known to occasionally hunt and consume larger water birds. Recorded incidents include one in 2016 in which an individual was observed trying to drown and eat a great crested grebe; another in which a pike choked to death after killing and attempting to eat a tufted duck; and one in 2015 in which an attack by a large pike between three and four feet long was implicated as a possible cause of the injury and death of an adult mute swan on Lower Lough Erne, Northern Ireland. Such attacks are, however, generally believed to be rare occurrences.
The northern pike is a largely solitary predator. It migrates during a spawning season, and it follows prey fish like common roaches to their deeper winter quarters. Divers sometimes observe groups of similar-sized pike that appear to cooperate by starting to hunt at the same time, which has given rise to "wolfpack" theories. Large pike can be caught on dead immobile fish, so these pike are thought to move about in a rather large territory to find food. Large pike are also known to cruise large water bodies at a few metres deep, probably pursuing schools of prey fish. Smaller pike are more of ambush predators, probably because of their vulnerability to cannibalism. Pike are often found near the exit of culverts, which can be attributed to the presence of schools of prey fish and the opportunity for ambush. Being potamodromous, all esocids tend to display limited migration, although some local movement may be of key significance for population dynamics. In the Baltic, they are known to follow herring schools, so have some seasonal migration.
Importance to humans.
Although it is generally known as a "sporting" quarry, some anglers release pike they have caught because the flesh is considered bony, especially due to the substantial (epipleural) "Y-bones". The white and mild-tasting flesh of pikes nonetheless has a long and distinguished history in cuisine and is popular fare in Europe and parts of North America. Among fishing communities where pike is popular fare, the ability of a filleter to effectively remove the bones from the fillets while minimizing the amount of flesh lost in the process (known as "de-boning") is a highly valued skill. There are methods for filleting pike and leaving the "y-bones" in the fish's body; this does leave some flesh on the fish but avoids the sometimes difficult process of "de-boning". Larger fish are more easily filleted (and much easier to de-bone), while smaller ones are often processed as forcemeat to eliminate their many small bones, and then used in preparations such as quenelles and fish mousses. Historical references to cooking pike go as far back as the Romans. Fishing for pike is said to be very exciting with their aggressive hits and aerial acrobatics. Pike are among the largest North American freshwater game fish.
Because of their prolific and predatory nature, laws have been enacted in some places to help stop the spread of northern pike outside of their native range. For instance, in California, anglers are required by law to remove the head from a pike once it has been caught. In Alaska, pike are native north and west of the Alaska Range, but have been illegally introduced to south-central Alaska by game fishermen. In south-central Alaska, no limit is imposed in most areas. Pike are seen as a threat to native wild stocks of salmon by some fishery managers.
Notably in Britain and Ireland, pike are greatly admired as a sporting fish and they are returned alive to the water to safeguard future sport and maintain the balance of a fishery. The Pike Anglers Club has campaigned since 1977 to preserve pike and to ensure that pike removal stops, arguing that the removal of pike from waters can lead to an explosion of smaller fish, which is damaging to both the sport fishery and the environment.
Sport fishing.
Pike angling is becoming an increasingly popular pastime in Europe. Effective methods for catching include dead baits, lure fishing, and jerk baiting. They are prized as game fish for their large size and aggressive nature.
Lake fishing for pike from the shore is especially effective during spring, when the big pike move into the shallows to spawn in weedy areas, and later many remain there to feed on other spawning coarse fish species to regain their condition after spawning. Smaller jack pike often remain in the shallows for their own protection, and for the small fish food available there. For the hot summer and during inactive phases, the larger female pike tend to retire to deeper water and/or places with better cover. This gives the boat angler good fishing during the summer and winter seasons. Trolling (towing a lure or bait behind a moving boat) is a popular technique.
The use of float tubes is another method of fishing for pike on small to medium-sized still waters. Fly fishing for pike is another effective way of catching these fish, and the float tube is now recognized as an especially suitable water craft for pike fly-fishing. Pike have also been caught this way using patterns that imitate small fry or invertebrates.
In recent decades, more pike are released back to the water after catching (catch and release), but they can easily be damaged when handled. Handling those fish with dry hands can easily damage their mucus-covered skin and possibly lead to their deaths from infections.
Since they have very sharp and numerous teeth, care is required in unhooking a pike. Barbless trebles are recommended when angling for this species, as they simplify unhooking. This is undertaken using long forceps, with 30-cm artery clamps the ideal tool. When holding the pike from below on the lower jaw, it will open its mouth. It should be kept out of the water for the minimum amount of time possible, and should be given extra time to recover if being weighed and photographed before release. It is also recommended that anglers use an unhooking mat to prevent harm to the fish. If practicing live release, calling the fish "caught" when it is alongside a boat is recommended. Remove the hook by grabbing it with needle-nosed pliers while the fish is still submerged and giving it a flip in the direction that turns the hook out of the mouth. This avoids damage to the fish and the stress of being out of water.
In Finland, catching a "kymppihauki", a pike weighing at least , is considered to qualify one as a master fisherman.
Many countries have banned the use of live fish for bait, but pike can be caught with dead fish, which they locate by smell. For this technique, fat marine fish like herring, sardines and mackerel are often used. Compared to other fish like the eel, the pike does not have a good sense of smell, but it is still more than adequate to find the baitfish. Baitfish can be used as groundbait, but also below a float carried by the wind. This method is often used in wintertime and best done in lakes near schools of preyfish or at the deeper parts of shallow water bodies, where pike and preyfish tend to gather in great numbers.
Pike make use of the lateral line system to follow the vortices produced by the perceived prey, and the whirling movement of the spinner is probably a good way to imitate or exaggerate these. Jerkbaits are also effective and can produce spectacular bites with pike attacking these erratic-moving lures at full speed. For trolling, big plugs or softbaits can be used. Spoons with mirror finishes are very effective when the sun is at a sharp angle to the water in the mornings or evenings because they generate the vibrations previously discussed and cause a glint of reflective sunlight that mimics the flash of white-bellied prey.
When fishing in shallow water for smaller pike, lighter and smaller lures are frequently used. The humble 'woolly bugger' fly is a favourite lure among keen fly fishermen of the southern hemisphere. Fly fishing for pike is an established aspect of the sport and there are now numerous dedicated products to use specifically to target these fish.
In mythology.
In the Finnish epic poetry "Kalevala", wise demigod Väinämöinen creates a magical kantele (string instrument) from the jawbone of a giant pike.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "W = c L^b."
}
] | https://en.wikipedia.org/wiki?curid=86777 |
8680792 | Does Anybody Really Know What Time It Is? | "Does Anybody Really Know What Time It Is?" is a song written and sung by Robert Lamm and recorded by the group Chicago. It was included on their 1969 debut album "Chicago Transit Authority" and released as a single in 1970.
Background.
According to Robert Lamm, "Does Anybody Really Know What Time It Is?" was the first song recorded for their debut album. The song was not released as a single until two tracks from the band's second album, "Make Me Smile" and "25 or 6 to 4", had become hits. It became the band's third straight Top 10 single, peaking at No. 7 in the U.S. and No. 2 in Canada. Because the song straddled years in its chart run, it is not ranked on the major U.S. year-end charts. However, in Canada, where it charted higher, it is ranked as both the 59th biggest hit of 1970 and the 37th biggest hit of 1971.
Lamm said of the song:
"[It's] not a complicated song, but it’s certainly a quirky song. But that was my intent. I wanted to write something that wasn’t ordinary, that wasn’t blues-based, that didn’t have ice cream changes, and would allow the horns to shine and give Lee Loughnane a solo. So all that was the intent."
The original uncut album version opens with a brief free form piano solo performed by Lamm. A spoken verse by Lamm is mixed into the sung final verse of the album version. The single version does not include the free form intro, and was originally mixed and issued in mono. A stereo re-edit (beginning from the point where the free form intro leaves off) was issued on the group's "" greatest hits CD set.
A 2:54 shorter edit (omitting not only the opening free-form piano solo but also the subsequent varying-time-signature horn/piano dialog—therefore starting at the trumpet solo which begins the main movement—and without the spoken part) was included on the original vinyl version of "Chicago's Greatest Hits", but was not included on the CD version. This shorter edit was included on the CD version of the compilation album "If You Leave Me Now." This version was used as a radio edit version. A shorter version at 2:46 (starting midway through the trumpet solo) was issued as a promotional single, which finally appeared on 2007's "".
A live version on the "Chicago at Carnegie Hall" box set presents an expanded version of the "free form" intro, which itself is given its own track.
Various versions of the song receive airplay; the promotional single edit is the version played on certain 'Classic Hits' stations and 1970s radio shows. For example, radio station KKMJ plays the promo edit version on its 'Super Songs' of the 70s weekend, as does Classic Hits KXBT. By contrast, the True Oldies Channel plays the 3:20 single version. An AM radio station in Boston (WJIB 740 which also simulcasts in Maine as WJTO 730) plays the original vinyl "" edit.
Composition.
Right after the free form piano solo, the time signature of the fanfare preceding the trumpet solo is, per bar, formula_0, formula_1, formula_2, formula_0, formula_1 and formula_0, then transitions to a section in formula_3 for 6 bars, then goes into formula_4 for one bar. The song stays in formula_0 after that.
Reception.
"Cash Box" said of the song that Chicago's "exciting arrangements and superb material add up to an aural outburst that should blossom as a flowering chart entry." "Record World" said that it's a "winning cut" and a "natural hit if ever there was one."
Chart performance.
<templatestyles src="Col-begin/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tfrac{4}{4}"
},
{
"math_id": 1,
"text": "\\tfrac{7}{8}"
},
{
"math_id": 2,
"text": "\\tfrac{9}{8}"
},
{
"math_id": 3,
"text": "\\tfrac{5}{8}"
},
{
"math_id": 4,
"text": "\\tfrac{6}{8}"
}
] | https://en.wikipedia.org/wiki?curid=8680792 |
8681 | Data compression ratio | Measurement of the power of a data compression algorithm
Data compression ratio, also known as compression power, is a measurement of the relative reduction in size of data representation produced by a data compression algorithm. It is typically expressed as the division of uncompressed size by compressed size.
Definition.
Data compression ratio is defined as the ratio between the "uncompressed size" and "compressed size":
formula_0
Thus, a representation that compresses a file's storage size from 10 MB to 2 MB has a compression ratio of 10/2 = 5, often notated as an explicit ratio, 5:1 (read "five" to "one"), or as an implicit ratio, 5/1. This formulation applies equally for compression, where the uncompressed size is that of the original; and for decompression, where the uncompressed size is that of the reproduction.
Sometimes the "space saving" is given instead, which is defined as the reduction in size relative to the uncompressed size:
formula_1
Thus, a representation that compresses the storage size of a file from 10 MB to 2 MB yields a space saving of 1 - 2/10 = 0.8, often notated as a percentage, 80%.
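A minimal Python sketch of these two definitions, using the 10 MB and 2 MB example figures from above (the function names are illustrative, not from any particular library):

```python
def compression_ratio(uncompressed_size, compressed_size):
    """Uncompressed size divided by compressed size."""
    return uncompressed_size / compressed_size

def space_saving(uncompressed_size, compressed_size):
    """Reduction in size relative to the uncompressed size."""
    return 1 - compressed_size / uncompressed_size

# Example from the text: a 10 MB file compressed to 2 MB.
print(compression_ratio(10, 2))  # 5.0, i.e. a 5:1 ratio
print(space_saving(10, 2))       # 0.8, i.e. an 80% space saving
```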
For signals of indefinite size, such as streaming audio and video, the compression ratio is defined in terms of uncompressed and compressed data rates instead of data sizes:
formula_2
and instead of space saving, one speaks of data-rate saving, which is defined as the data-rate reduction relative to the uncompressed data rate:
formula_3
For example, uncompressed songs in CD format have a data rate of 16 bits/channel x 2 channels x 44.1 kHz ≅ 1.4 Mbit/s, whereas AAC files on an iPod are typically compressed to 128 kbit/s, yielding a compression ratio of 10.9, for a data-rate saving of 0.91, or 91%.
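The arithmetic in the CD/AAC example can be checked with a short sketch; note that using the exact CD rate of 1,411,200 bit/s rather than the rounded 1.4 Mbit/s gives a ratio of about 11.0 instead of 10.9:

```python
cd_rate = 16 * 2 * 44_100   # bits/channel x channels x sample rate = 1,411,200 bit/s
aac_rate = 128_000          # 128 kbit/s

compression_ratio = cd_rate / aac_rate
data_rate_saving = 1 - aac_rate / cd_rate

print(round(compression_ratio, 1))  # 11.0 (about 10.9 when the rounded 1.4 Mbit/s figure is used)
print(round(data_rate_saving, 2))   # 0.91, i.e. a 91% data-rate saving
```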
When the uncompressed data rate is known, the compression ratio can be inferred from the compressed data rate.
Lossless vs. Lossy.
Lossless compression of digitized data such as video, digitized film, and audio preserves all the information, but it does not generally achieve compression ratios much better than 2:1 because of the intrinsic entropy of the data. Compression algorithms which provide higher ratios either incur very large overheads or work only for specific data sequences (e.g. compressing a file with mostly zeros). In contrast, lossy compression (e.g. JPEG for images, or MP3 and Opus for audio) can achieve much higher compression ratios at the cost of a decrease in quality, as in Bluetooth audio streaming, since visual or audio compression artifacts from the loss of important information are introduced. A compression ratio of at least 50:1 is needed to get 1080i video into a 20 Mbit/s MPEG transport stream.
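As a rough illustration of how intrinsic entropy limits lossless ratios, a general-purpose compressor from the Python standard library can be applied to a highly repetitive input and to a random one; the input sizes below are arbitrary choices for this sketch:

```python
import os
import zlib

def compressibility(data: bytes) -> float:
    """Compressed size divided by original size (lower means more compressible)."""
    return len(zlib.compress(data, 9)) / len(data)

repetitive = b"ab" * 5_000       # highly repetitive, low-entropy data
random_ish = os.urandom(10_000)  # essentially incompressible, high-entropy data

print(compressibility(repetitive))  # a tiny fraction, far better than 2:1
print(compressibility(random_ish))  # close to (or slightly above) 1.0
```

Already-compressed or random-looking data gains essentially nothing from a further lossless pass, which is why typical content rarely does much better than the roughly 2:1 figure mentioned above.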
Uses.
The data compression ratio can serve as a measure of the complexity of a data set or signal. In particular, it is used to approximate the algorithmic complexity. It is also used to determine how much a file can be compressed without the compressed representation exceeding the original size. | [
{
"math_id": 0,
"text": " {\\rm Compression\\;Ratio} = \\frac{\\rm Uncompressed\\;Size}{\\rm Compressed\\;Size}"
},
{
"math_id": 1,
"text": "{\\rm Space\\;Saving} = 1 - \\frac{\\rm Compressed\\;Size}{\\rm Uncompressed\\;Size}"
},
{
"math_id": 2,
"text": " {\\rm Compression\\;Ratio} = \\frac{\\rm Uncompressed\\;Data\\;Rate}{\\rm Compressed\\;Data\\;Rate}"
},
{
"math_id": 3,
"text": "{\\rm Data\\;Rate\\;Saving} = 1 - \\frac{\\rm Compressed\\;Data\\;Rate}{\\rm Uncompressed\\;Data\\;Rate}"
}
] | https://en.wikipedia.org/wiki?curid=8681 |
868145 | Aperiodic tiling | Form of plane tiling without repeats at scale
An aperiodic tiling is a non-periodic tiling with the additional property that it does not contain arbitrarily large periodic regions or patches. A set of tile-types (or prototiles) is aperiodic if copies of these tiles can form only non-periodic tilings.
The Penrose tilings are a well-known example of aperiodic tilings.
In March 2023, four researchers, David Smith, Joseph Samuel Myers, Craig S. Kaplan, and Chaim Goodman-Strauss, announced the proof that the tile discovered by David Smith is an aperiodic monotile, i.e., a solution to the einstein problem, a problem that seeks the existence of any single shape aperiodic tile. In May 2023 the same authors published a chiral aperiodic monotile with similar but stronger constraints.
Aperiodic tilings serve as mathematical models for quasicrystals, physical solids that were discovered in 1982 by Dan Shechtman who subsequently won the Nobel prize in 2011. However, the specific local structure of these materials is still poorly understood.
Several methods for constructing aperiodic tilings are known.
Definition and illustration.
Consider a periodic tiling by unit squares (it looks like infinite graph paper). Now cut one square into two rectangles. The tiling obtained in this way is non-periodic: there is no non-zero shift that leaves this tiling fixed. But clearly this example is much less interesting than the Penrose tiling. In order to rule out such boring examples, one defines an aperiodic tiling to be one that does not contain arbitrarily large periodic parts.
A tiling is called aperiodic if its hull contains only non-periodic tilings. The hull of a tiling formula_0 contains all translates "T" + "x" of "T", together with all tilings that can be approximated by translates of "T". Formally this is the closure of the set formula_1 in the local topology. In the local topology (resp. the corresponding metric) two tilings are formula_2-close if they agree in a ball of radius formula_3 around the origin (possibly after shifting one of the tilings by an amount less than formula_2).
To give an even simpler example than above, consider a one-dimensional tiling "T" of the line that looks like ..."aaaaaabaaaaa"... where "a" represents an interval of length one, "b" represents an interval of length two. Thus the tiling "T" consists of infinitely many copies of "a" and one copy of "b" (with centre 0, say). Now all translates of "T" are the tilings with one "b" somewhere and "a"s else. The sequence of tilings where "b" is centred at formula_4 converges – in the local topology – to the periodic tiling consisting of "a"s only. Thus "T" is not an aperiodic tiling, since its hull contains the periodic tiling ..."aaaaaa"...
For well-behaved tilings (e.g. substitution tilings with finitely many local patterns) the following holds: if a tiling is non-periodic and repetitive (i.e. each patch occurs in a uniformly dense way throughout the tiling), then it is aperiodic.
History.
The first specific occurrence of aperiodic tilings arose in 1961, when logician Hao Wang tried to determine whether the domino problem is decidable – that is, whether there exists an algorithm for deciding if a given finite set of prototiles admits a tiling of the plane. Wang found algorithms to enumerate the tilesets that cannot tile the plane, and the tilesets that tile it periodically; by this he showed that such a decision algorithm exists if every finite set of prototiles that admits a tiling of the plane also admits a periodic tiling. In 1964, Robert Berger found an aperiodic set of prototiles from which he demonstrated that the tiling problem is in fact not decidable. This first such set, used by Berger in his proof of undecidability, required 20,426 Wang tiles. Berger later reduced his set to 104, and Hans Läuchli subsequently found an aperiodic set requiring only 40 Wang tiles. A smaller set, of six aperiodic tiles (based on Wang tiles), was discovered by Raphael M. Robinson in 1971. Roger Penrose discovered three more sets in 1973 and 1974, reducing the number of tiles needed to two, and Robert Ammann discovered several new sets in 1977. The number of tiles required was reduced to one in 2023 by David Smith, Joseph Samuel Myers, Craig S. Kaplan, and Chaim Goodman-Strauss.
The aperiodic Penrose tilings can be generated not only by an aperiodic set of prototiles, but also by a substitution and by a cut-and-project method. After the discovery of quasicrystals, aperiodic tilings became studied intensively by physicists and mathematicians. The cut-and-project method of N.G. de Bruijn for Penrose tilings eventually turned out to be an instance of the theory of Meyer sets. Today there is a large amount of literature on aperiodic tilings.
An "einstein" (, one stone) is an aperiodic tiling that uses only a single shape. The first such tile was discovered in 2010 - Socolar–Taylor tile, which is however not connected into one piece. In 2023 a connected tile was discovered, using a shape termed a "hat".
Constructions.
There are a few constructions of aperiodic tilings known. Some constructions are based on infinite families of aperiodic sets of tiles. The tilings which have been found so far are mostly constructed in a few ways, primarily by forcing some sort of non-periodic hierarchical structure. Despite this, the undecidability of the domino problem ensures that there must be infinitely many distinct principles of construction, and that in fact, there exist aperiodic sets of tiles for which there can be no proof of their aperiodicity.
However, there are three principles of construction that have been predominantly used for finite sets of prototiles up until 2023: matching rules that force a hierarchical structure, substitution rules, and the cut-and-project method.
For some tilings only one of the constructions is known to yield that tiling. Others can be constructed by all three classical methods, e.g. the Penrose tilings.
Goodman-Strauss proved that all tilings generated by substitution rules and satisfying a technical condition can be generated through matching rules. The technical condition is mild and usually satisfied in practice. The tiles are required to admit a set of "hereditary edges" such that the substitution tiling is "sibling-edge-to-edge".
Aperiodic hierarchical tilings through matching.
For a tiling, congruent copies of the prototiles need to pave all of the Euclidean plane without overlaps (except at boundaries) and without leaving uncovered pieces. Therefore the boundaries of the tiles forming a tiling need to match geometrically. This is generally true for all tilings, aperiodic and periodic ones. Sometimes this geometric matching condition is enough to force a tile set to be aperiodic; this is, for example, the case for Robinson's tilings discussed below.
Sometimes additional matching rules are required to hold. These usually involve colors or markings that have to match over several tiles across boundaries. Wang tiles usually require such additional rules.
In some cases it has been possible to replace matching rules by geometric matching conditions altogether by modifying the prototiles at their boundary. The Penrose tiling (P1) originally consists of four prototiles together with some matching rules. One of the four tiles is a pentagon. One can replace this pentagon prototile by three distinct pentagonal shapes that have additional protrusions and indentations at the boundary, making three distinct tiles. Together with the three other prototiles with suitably adapted boundaries, one gets a set of six prototiles that essentially create the same aperiodic tilings as the original four tiles, but for the six tiles no additional matching rules are necessary; the geometric matching conditions suffice.
Also note that Robinson's prototiles below come equipped with markings to make it easier to visually recognize the structure, but these markings do not impose matching rules on the tiles beyond those already in place through the geometric boundaries.
To date, there is not a formal definition describing when a tiling has a hierarchical structure; nonetheless, it is clear that substitution tilings have them, as do the tilings of Berger, Knuth, Läuchli, Robinson and Ammann. As with the term "aperiodic tiling" itself, the term "aperiodic "hierarchical" tiling" is a convenient shorthand, meaning something along the lines of "a set of tiles admitting only non-periodic tilings with a hierarchical structure".
For aperiodic tilings, whether additional matching rules are involved or not, the matching conditions force some hierarchical structure on the tilings, which in turn makes periodic structures impossible.
Each of these sets of tiles, in any tiling they admit, forces a particular hierarchical structure. (In many later examples, this structure can be described as a substitution tiling system; this is described below). No tiling admitted by such a set of tiles can be periodic, simply because no single translation can leave the entire hierarchical structure invariant. Consider Robinson's 1971 tiles:
Any tiling by these tiles can only exhibit a hierarchy of square lattices: the centre of any orange square is also a corner of a larger orange square, ad infinitum. Any translation must be smaller than some size of square, and so cannot leave any such tiling invariant.
Robinson proves these tiles must form this structure inductively; in effect, the tiles must form blocks which themselves fit together as larger versions of the original tiles, and so on.
This idea – of finding sets of tiles that can only admit hierarchical structures – has been used in the construction of most known aperiodic sets of tiles to date.
However, the tiling produced in this way is not unique, not even up to isometries of the Euclidean group, e.g. translations and rotations. A complete tiling of the plane constructed from Robinson's tiles may or may not have "faults" (also called "corridors") going off to infinity in up to four "arms", and there are additional choices that allow for the encoding of infinite words from Σω for an alphabet Σ of up to four letters. In summary, there are uncountably many different tilings unrelated by Euclidean isometries, all of them necessarily nonperiodic, that can arise from Robinson's tiles.
Substitutions.
Substitution tiling systems provide a rich source of aperiodic tilings. A set of tiles that forces a substitution structure to emerge is said to enforce the substitution structure. For example, the chair tiles shown below admit a substitution, and a portion of a substitution tiling is shown at right below. These substitution tilings are necessarily non-periodic, in precisely the same manner as described above, but the chair tile itself is not aperiodic – it is easy to find periodic tilings by unmarked chair tiles that satisfy the geometric matching conditions.
However, the tiles shown below force the chair substitution structure to emerge, and so are themselves aperiodic.
The Penrose tiles, and shortly thereafter Ammann's several different sets of tiles, were the first examples based on explicitly forcing a substitution tiling structure to emerge. Joshua Socolar, Roger Penrose, Ludwig Danzer, and Chaim Goodman-Strauss have found several subsequent sets. Shahar Mozes gave the first general construction, showing that every product of one-dimensional substitution systems can be enforced by matching rules. Charles Radin found rules enforcing the Conway-pinwheel substitution tiling system. In 1998, Goodman-Strauss showed that local matching rules can be found to force any substitution tiling structure, subject to some mild conditions.
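The chair substitution acts on two-dimensional tiles, but the basic mechanism of a substitution system is easy to illustrate in one dimension. The sketch below uses the Fibonacci substitution (a → ab, b → a), a standard one-dimensional example chosen purely for illustration rather than taken from the constructions above:

```python
# Fibonacci substitution: a -> ab, b -> a (a standard 1-D substitution system).
RULE = {"a": "ab", "b": "a"}

def substitute(word: str, steps: int) -> str:
    """Apply the substitution rule `steps` times to the seed word."""
    for _ in range(steps):
        word = "".join(RULE[letter] for letter in word)
    return word

word = substitute("a", 10)
print(len(word))   # 144, a Fibonacci number
print(word[:21])   # abaababaabaababaababa -- the start of the non-periodic Fibonacci word
```

Iterating the rule produces ever longer words that all agree on a common prefix, and the limiting infinite word is non-periodic; the analogous two-dimensional procedure, applied to tiles rather than letters, is what the substitution tilings above formalize.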
Cut-and-project method.
Non-periodic tilings can also be obtained by projection of higher-dimensional structures into spaces with lower dimensionality and under some circumstances there can be tiles that enforce this non-periodic structure and so are aperiodic. The Penrose tiles are the first and most famous example of this, as first noted in the pioneering work of de Bruijn. There is yet no complete (algebraic) characterization of cut and project tilings that can be enforced by matching rules, although numerous necessary or sufficient conditions are known.
Other techniques.
Only a few different kinds of constructions have been found. Notably, Jarkko Kari gave an aperiodic set of Wang tiles based on multiplications by 2 or 2/3 of real numbers encoded by lines of tiles (the encoding is related to Sturmian sequences made as the differences of consecutive elements of Beatty sequences), with the aperiodicity mainly relying on the fact that 2"n"/3"m" is never equal to 1 for any positive integers "n" and "m". This method was later adapted by Goodman-Strauss to give a strongly aperiodic set of tiles in the hyperbolic plane. Shahar Mozes has found many alternative constructions of aperiodic sets of tiles, some in more exotic settings; for example in semi-simple Lie groups. Block and Weinberger used homological methods to construct aperiodic sets of tiles for all non-amenable manifolds. Joshua Socolar also gave another way to enforce aperiodicity, in terms of "alternating condition". This generally leads to much smaller tile sets than the one derived from substitutions.
Physics.
Aperiodic tilings were considered as mathematical artefacts until 1984, when physicist Dan Shechtman announced the discovery of a phase of an aluminium-manganese alloy which produced a sharp diffractogram with an unambiguous fivefold symmetry – so it had to be a crystalline substance with icosahedral symmetry. In 1975 Robert Ammann had already extended the Penrose construction to a three-dimensional icosahedral equivalent. In such cases the term 'tiling' is taken to mean 'filling the space'. Photonic devices are currently built as aperiodical sequences of different layers, being thus aperiodic in one direction and periodic in the other two. Quasicrystal structures of Cd–Te appear to consist of atomic layers in which the atoms are arranged in a planar aperiodic pattern. Sometimes an energetical minimum or a maximum of entropy occur for such aperiodic structures. Steinhardt has shown that Gummelt's overlapping decagons allow the application of an extremal principle and thus provide the link between the mathematics of aperiodic tiling and the structure of quasicrystals. Faraday waves have been observed to form large patches of aperiodic patterns. The physics of this discovery has revived the interest in incommensurate structures and frequencies suggesting to link aperiodic tilings with interference phenomena.
Confusion regarding terminology.
The term "aperiodic" has been used in a wide variety of ways in the mathematical literature on tilings (and in other mathematical fields as well, such as dynamical systems or graph theory, with altogether different meanings). With respect to tilings the term aperiodic was sometimes used synonymously with the term non-periodic. A "non-periodic" tiling is simply one that is not fixed by any non-trivial translation. Sometimes the term described – implicitly or explicitly – a tiling generated by an aperiodic set of prototiles. Frequently the term aperiodic was just used vaguely to describe the structures under consideration, referring to physical aperiodic solids, namely quasicrystals, or to something non-periodic with some kind of global order.
The use of the word "tiling" is problematic as well, despite its straightforward definition. There is no single Penrose tiling, for example: the Penrose rhombs admit infinitely many tilings (which cannot be distinguished locally). A common solution is to try to use the terms carefully in technical writing, but recognize the widespread use of the informal terms.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T \\subset \\R^d"
},
{
"math_id": 1,
"text": "\\{ T+x \\, : \\, x \\in \\R^d \\}"
},
{
"math_id": 2,
"text": "\\varepsilon"
},
{
"math_id": 3,
"text": "1/\\varepsilon"
},
{
"math_id": 4,
"text": "1,2,4, \\ldots,2^n,\\ldots"
}
] | https://en.wikipedia.org/wiki?curid=868145 |
8687388 | Clifford bundle | In mathematics, a Clifford bundle is an algebra bundle whose fibers have the structure of a Clifford algebra and whose local trivializations respect the algebra structure. There is a natural Clifford bundle associated to any (pseudo) Riemannian manifold "M" which is called the Clifford bundle of "M".
General construction.
Let "V" be a (real or complex) vector space together with a symmetric bilinear form <·,·>. The Clifford algebra "Cℓ"("V") is a natural (unital associative) algebra generated by "V" subject only to the relation
formula_0
for all "v" in "V". One can construct "Cℓ"("V") as a quotient of the tensor algebra of "V" by the ideal generated by the above relation.
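As an illustrative toy model (an assumption of this sketch, not part of the construction above), the defining relation can be realized concretely for a finite-dimensional V with a diagonal bilinear form by multiplying basis blades; the function below is a hypothetical helper written for this example, not a standard library routine:

```python
def blade_product(A, B, q):
    """Multiply basis blades A and B (sorted tuples of generator indices) in Cl(V),
    where q[i] = <e_i, e_i> for a diagonal bilinear form. Returns (sign, blade)."""
    sign, result = 1, list(A)
    for j in B:
        # Move e_j leftward past larger generators; each transposition flips the sign.
        k = len(result)
        while k > 0 and result[k - 1] > j:
            k -= 1
            sign = -sign
        if k > 0 and result[k - 1] == j:
            sign *= q[j]          # e_j e_j = <e_j, e_j>
            del result[k - 1]
        else:
            result.insert(k, j)
    return sign, tuple(result)

# Cl(R^2) with the Euclidean form: e_0^2 = e_1^2 = 1.
q = {0: 1, 1: 1}
print(blade_product((0,), (0,), q))  # (1, ())     : e_0 e_0 = <e_0, e_0> = 1
print(blade_product((0,), (1,), q))  # (1, (0, 1)) : e_0 e_1
print(blade_product((1,), (0,), q))  # (-1, (0, 1)): e_1 e_0 = -e_0 e_1
```

Changing the dictionary q changes the bilinear form, which is essentially what may vary from fiber to fiber in the bundle construction described next.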
Like other tensor operations, this construction can be carried out fiberwise on a smooth vector bundle. Let "E" be a smooth vector bundle over a smooth manifold "M", and let "g" be a smooth symmetric bilinear form on "E". The Clifford bundle of "E" is the fiber bundle whose fibers are the Clifford algebras generated by the fibers of "E":
formula_1
The topology of "Cℓ"("E") is determined by that of "E" via an associated bundle construction.
One is most often interested in the case where "g" is positive-definite or at least nondegenerate; that is, when ("E", "g") is a Riemannian or pseudo-Riemannian vector bundle. For concreteness, suppose that ("E", "g") is a Riemannian vector bundle. The Clifford bundle of "E" can be constructed as follows. Let "Cℓ""n"R be the Clifford algebra generated by R"n" with the Euclidean metric. The standard action of the orthogonal group O("n") on R"n" induces a graded automorphism of "Cℓ""n"R. The homomorphism
formula_2
is determined by
formula_3
where "v""i" are all vectors in R"n". The Clifford bundle of "E" is then given by
formula_4
where "F"("E") is the orthonormal frame bundle of "E". It is clear from this construction that the structure group of "Cℓ"("E") is O("n"). Since O("n") acts by graded automorphisms on "Cℓ""n"R it follows that "Cℓ"("E") is a bundle of Z2-graded algebras over "M". The Clifford bundle "Cℓ"("E") can then be decomposed into even and odd subbundles:
formula_5
If the vector bundle "E" is orientable then one can reduce the structure group of "Cℓ"("E") from O("n") to SO("n") in the natural manner.
Clifford bundle of a Riemannian manifold.
If "M" is a Riemannian manifold with metric "g", then the Clifford bundle of "M" is the Clifford bundle generated by the tangent bundle "TM". One can also build a Clifford bundle out of the cotangent bundle "T"*"M". The metric induces a natural isomorphism "TM" = "T"*"M" and therefore an isomorphism "Cℓ"("TM") = "Cℓ"("T"*"M").
There is a natural vector bundle isomorphism between the Clifford bundle of "M" and the exterior bundle of "M":
formula_6
This is an isomorphism of vector bundles "not" algebra bundles. The isomorphism is induced from the corresponding isomorphism on each fiber. In this way one can think of sections of the Clifford bundle as differential forms on "M" equipped with Clifford multiplication rather than the wedge product (which is independent of the metric).
The above isomorphism respects the grading in the sense that
formula_7
Local description.
For a vector formula_8 at formula_9, and a form formula_10 the Clifford multiplication is defined as
formula_11,
where the metric duality to change vector to the one form is used in the first term.
Then the exterior derivative formula_12 and coderivative formula_13 can be related to the metric connection formula_14 using the choice of an orthonormal basis formula_15 by
formula_16.
Using these definitions, the Dirac-Kähler operator is defined by
formula_17.
On a star domain the operator can be inverted using the Poincaré lemma for the exterior derivative and its Hodge star dual for the coderivative. A practical way of doing this is via homotopy and cohomotopy operators. | [
{
"math_id": 0,
"text": "v^2 = \\langle v,v\\rangle"
},
{
"math_id": 1,
"text": "C\\ell(E) = \\coprod_{x\\in M} C\\ell(E_x,g_x)"
},
{
"math_id": 2,
"text": "\\rho : \\mathrm O(n) \\to \\mathrm{Aut}(C\\ell_n\\mathbb R)"
},
{
"math_id": 3,
"text": "\\rho(A)(v_1v_2\\cdots v_k) = (Av_1)(Av_2)\\cdots(Av_k)"
},
{
"math_id": 4,
"text": "C\\ell(E) = F(E) \\times_\\rho C\\ell_n\\mathbb R"
},
{
"math_id": 5,
"text": "C\\ell(E) = C\\ell^0(E) \\oplus C\\ell^1(E)."
},
{
"math_id": 6,
"text": "C\\ell(T^*M) \\cong \\Lambda(T^*M)."
},
{
"math_id": 7,
"text": "\\begin{align}\nC\\ell^0(T^*M) &= \\Lambda^{\\mathrm{even}}(T^*M)\\\\\nC\\ell^1(T^*M) &= \\Lambda^{\\mathrm{odd}}(T^*M).\n\\end{align}"
},
{
"math_id": 8,
"text": "v \\in T_{x}M"
},
{
"math_id": 9,
"text": "x\\in M"
},
{
"math_id": 10,
"text": "\\psi \\in \\Lambda(T_{x}M)"
},
{
"math_id": 11,
"text": "v\\psi= v\\wedge \\psi + v \\lrcorner \\psi"
},
{
"math_id": 12,
"text": "d"
},
{
"math_id": 13,
"text": "\\delta"
},
{
"math_id": 14,
"text": "\\nabla"
},
{
"math_id": 15,
"text": "\\{ e_{a}\\}"
},
{
"math_id": 16,
"text": "d=e^{a}\\wedge \\nabla_{e_a}, \\quad \\delta = -e^{a}\\lrcorner\\nabla_{e_a}"
},
{
"math_id": 17,
"text": "D = e^{a}\\nabla_{e_a}=d-\\delta"
}
] | https://en.wikipedia.org/wiki?curid=8687388 |
8687911 | Proper acceleration | Physical acceleration experienced by an object
In relativity theory, proper acceleration is the physical acceleration (i.e., measurable acceleration as by an accelerometer) experienced by an object. It is thus acceleration relative to a free-fall, or inertial, observer who is momentarily at rest relative to the object being measured. Gravitation therefore does not cause proper acceleration, because the same gravity acts equally on the inertial observer. As a consequence, all inertial observers always have a proper acceleration of zero.
Proper acceleration contrasts with "coordinate" "acceleration", which is dependent on choice of coordinate systems and thus upon choice of observers (see three-acceleration in special relativity).
In the standard inertial coordinates of special relativity, for unidirectional motion, proper acceleration is the rate of change of proper velocity with respect to coordinate time.
In an inertial frame in which the object is momentarily at rest, the proper acceleration 3-vector, combined with a zero time-component, yields the object's "four-acceleration", which makes proper-acceleration's magnitude Lorentz-invariant. Thus the concept is useful: (i) with accelerated coordinate systems, (ii) at relativistic speeds, and (iii) in "curved spacetime".
In an accelerating rocket after launch, or even in a rocket standing on the launch pad, the proper acceleration is the acceleration felt by the occupants, and which is described as g-force (which is "not" a force but rather an acceleration; see that article for more discussion) delivered by the vehicle only. The "acceleration of gravity" (involved in the "force of gravity") never contributes to proper acceleration in any circumstances, and thus the proper acceleration felt by observers standing on the ground is due to the mechanical force "from the ground", not due to the "force" or "acceleration" of gravity. If the ground is removed and the observer allowed to free-fall, the observer will experience coordinate acceleration, but no proper acceleration, and thus no g-force. Generally, objects in a state of inertial motion, also called "free-fall" or a "ballistic path" (including objects in orbit) experience no proper acceleration (neglecting small tidal accelerations for inertial paths in gravitational fields). This state is also known as "zero gravity" ("zero-g") or "free-fall," and it produces a sensation of weightlessness.
Proper acceleration reduces to coordinate acceleration in an inertial coordinate system in flat spacetime (i.e. in the absence of gravity), provided the magnitude of the object's proper-velocity (momentum per unit mass) is much less than the speed of light "c". Only in such situations is coordinate acceleration "entirely" felt as a g-force (i.e. a proper acceleration, also defined as one that produces measurable weight).
In situations in which gravitation is absent but the chosen coordinate system is not inertial, but is accelerated with the observer (such as the accelerated reference frame of an accelerating rocket, or a frame fixed upon objects in a centrifuge), then g-forces and corresponding proper accelerations felt by observers in these coordinate systems are caused by the mechanical forces which resist their weight in such systems. This weight, in turn, is produced by fictitious forces or "inertial forces" which appear in all such accelerated coordinate systems, in a manner somewhat like the weight produced by the "force of gravity" in systems where objects are fixed in space with regard to the gravitating body (as on the surface of the Earth).
The total (mechanical) force that is calculated to induce the proper acceleration on a mass at rest in a coordinate system that has a proper acceleration, via Newton's law F = "m"a, is called the proper force. As seen above, the proper force is equal to the opposing reaction force that is measured as an object's "operational weight" (i.e. its weight as measured by a device like a spring scale, in vacuum, in the object's coordinate system). Thus, the proper force on an object is always equal and opposite to its measured weight.
Examples.
When holding onto a carousel that turns at constant angular velocity an observer experiences a radially inward (centripetal) proper-acceleration due to the interaction between the handhold and the observer's hand. This cancels the radially outward "geometric acceleration" associated with their spinning coordinate frame. This outward acceleration (from the spinning frame's perspective) will become the coordinate acceleration when they let go, causing them to fly off along a zero proper-acceleration (geodesic) path. Unaccelerated observers, of course, in their frame simply see their equal proper and coordinate accelerations vanish when they let go.
Similarly, standing on a non-rotating planet (and on earth for practical purposes) observers experience an upward proper-acceleration due to the normal force exerted by the earth on the bottom of their shoes. This cancels the downward geometric acceleration due to the choice of coordinate system (a so-called shell-frame). That downward acceleration becomes coordinate if they inadvertently step off a cliff into a zero proper-acceleration (geodesic or rain-frame) trajectory.
"Geometric accelerations" (due to the connection term in the coordinate system's covariant derivative below) act on "every gram of our being", while proper-accelerations are usually caused by an external force. Introductory physics courses often treat gravity's downward (geometric) acceleration as due to a mass-proportional force. This, along with diligent avoidance of unaccelerated frames, allows them to treat proper and coordinate acceleration as the same thing.
Even then if an object maintains a "constant proper-acceleration" from rest over an extended period in flat spacetime, observers in the rest frame will see the object's coordinate acceleration decrease as its coordinate velocity approaches lightspeed. The rate at which the object's proper-velocity goes up, nevertheless, remains constant.
Thus the distinction between proper-acceleration and coordinate acceleration allows one to track the experience of accelerated travelers from various non-Newtonian perspectives. These perspectives include those of accelerated coordinate systems (like a carousel), of high speeds (where proper and coordinate times differ), and of curved spacetime (like that associated with gravity on Earth).
Classical applications.
At low speeds in the inertial coordinate systems of Newtonian physics, proper acceleration simply equals the coordinate acceleration a = d2x/d"t"2. As reviewed above, however, it differs from coordinate acceleration if one chooses (against Newton's advice) to describe the world from the perspective of an accelerated coordinate system like a motor vehicle accelerating from rest, or a stone being spun around in a slingshot. If one chooses to recognize that gravity is caused by the curvature of spacetime (see below), proper acceleration differs from coordinate acceleration in a gravitational field.
For example, an object subjected to physical or proper acceleration ao will be seen by observers in a coordinate system undergoing constant acceleration aframe to have coordinate acceleration:
formula_0
Thus if the object is accelerating with the frame, observers fixed to the frame will see no acceleration at all.
Similarly, an object undergoing physical or proper acceleration ao will be seen by observers in a frame rotating with angular velocity ω to have coordinate acceleration:
formula_1
In the equation above, there are three geometric acceleration terms on the right-hand side. The first "centrifugal acceleration" term depends only on the radial position r and not the velocity of our object, the second "Coriolis acceleration" term depends only on the object's velocity in the rotating frame vrot but not its position, and the third "Euler acceleration" term depends only on position and the rate of change of the frame's angular velocity.
In each of these cases, physical or proper acceleration differs from coordinate acceleration because the latter can be affected by your choice of coordinate system as well as by physical forces acting on the object. Those components of coordinate acceleration "not" caused by physical forces (like direct contact or electrostatic attraction) are often attributed (as in the Newtonian example above) to forces that: (i) act on every gram of the object, (ii) cause mass-independent accelerations, and (iii) don't exist from all points of view. Such geometric (or improper) forces include Coriolis forces, Euler forces, g-forces, centrifugal forces and (as we see below) gravity forces as well.
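A brief numerical sketch of the rotating-frame expression above, using NumPy; the particular values of ω, r and vrot are arbitrary placeholders chosen for illustration:

```python
import numpy as np

def rotating_frame_acceleration(a_o, omega, r, v_rot, domega_dt):
    """a_rot = a_o - omega x (omega x r) - 2 omega x v_rot - (d omega/dt) x r."""
    centrifugal = -np.cross(omega, np.cross(omega, r))
    coriolis = -2.0 * np.cross(omega, v_rot)
    euler = -np.cross(domega_dt, r)
    return a_o + centrifugal + coriolis + euler

# Arbitrary illustrative values (SI units), rotation about the z-axis:
a_o = np.array([0.0, 0.0, 0.0])        # no proper acceleration (free object)
omega = np.array([0.0, 0.0, 1.0])      # 1 rad/s about z
r = np.array([2.0, 0.0, 0.0])          # 2 m from the axis
v_rot = np.array([0.0, 0.0, 0.0])      # momentarily at rest in the rotating frame
domega_dt = np.array([0.0, 0.0, 0.0])  # constant rotation rate

print(rotating_frame_acceleration(a_o, omega, r, v_rot, domega_dt))
# [2. 0. 0.] -- an outward coordinate acceleration of magnitude omega^2 r, i.e. the centrifugal term
```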
Viewed from a flat spacetime slice.
Proper-acceleration's relationships to coordinate acceleration in a specified slice of flat spacetime follow from Minkowski's flat-space metric equation ("c" d"τ")2 = ("c" d"t")2 − (dx)2. Here a single reference frame of yardsticks and synchronized clocks define map position x and map time "t" respectively, the traveling object's clocks define proper time "τ", and the "d" preceding a coordinate means infinitesimal change. These relationships allow one to tackle various problems of "anyspeed engineering", albeit only from the vantage point of an observer whose extended map frame defines simultaneity.
Acceleration in (1+1)D.
In the unidirectional case i.e. when the object's acceleration is parallel or antiparallel to its velocity in the spacetime slice of the observer, proper acceleration α and coordinate acceleration a are related through the Lorentz factor γ by α = "γ"3a. Hence the change in proper-velocity w=dx/dτ is the integral of proper acceleration over map-time t i.e. Δ"w" = "α"Δ"t" for constant α. At low speeds this reduces to the well-known relation between coordinate velocity and coordinate acceleration times map-time, i.e. Δ"v"="a"Δ"t".
For constant unidirectional proper-acceleration, similar relationships exist between rapidity "η" and elapsed proper time Δ"τ", as well as between Lorentz factor "γ" and distance traveled Δ"x". To be specific:
formula_2
where the various velocity parameters are related by
formula_3
These equations describe some consequences of accelerated travel at high speed. For example, imagine a spaceship that can accelerate its passengers at "1 gee" (10 m/s2 or about 1.0 light year per year squared) halfway to their destination, and then decelerate them at "1 gee" for the remaining half so as to provide earth-like artificial gravity from point A to point B over the shortest possible time. For a map-distance of Δ"x"AB, the first equation above predicts a midpoint Lorentz factor (up from its unit rest value) of "γ"mid = 1 + "α"(Δ"x"AB/2)/c2. Hence the round-trip time on traveler clocks will be Δ"τ" = 4("c"/"α") cosh−1("γ"mid), during which the time elapsed on map clocks will be Δ"t" = 4("c"/"α") sinh[cosh−1("γ"mid)].
This imagined spaceship could offer round trips to Proxima Centauri lasting about 7.1 traveler years (~12 years on Earth clocks), round trips to the Milky Way's central black hole of about 40 years (~54,000 years elapsed on earth clocks), and round trips to Andromeda Galaxy lasting around 57 years (over 5 million years on Earth clocks). Unfortunately, sustaining 1-gee acceleration for years is easier said than done, as illustrated by the maximum payload to launch mass ratios shown in the figure at right.
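These round-trip figures can be reproduced directly from the formulas above. In the sketch below, 1 gee is taken as roughly 1.03 light years per year squared and the one-way map distance to Proxima Centauri as about 4.24 light years; both numbers are assumptions of this illustration rather than values taken from the article's sources:

```python
import math

c = 1.0        # lightspeed, in light years per year
alpha = 1.03   # ~1 gee (9.8 m/s^2) expressed in ly/yr^2 (assumed conversion)
dx_AB = 4.24   # assumed one-way map distance to Proxima Centauri, in light years

gamma_mid = 1 + alpha * (dx_AB / 2) / c**2                # midpoint Lorentz factor
d_tau = 4 * (c / alpha) * math.acosh(gamma_mid)           # round trip on traveler clocks
d_t = 4 * (c / alpha) * math.sinh(math.acosh(gamma_mid))  # round trip on map clocks

print(round(d_tau, 1))  # ~7.1 years of proper time
print(round(d_t, 1))    # ~11.7 years of map time (the text quotes roughly 12)
```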
In curved spacetime.
In the language of general relativity, the components of an object's acceleration four-vector "A" (whose magnitude is proper acceleration) are related to elements of the four-velocity via a covariant derivative "D" with respect to proper time τ:
formula_4
Here "U" is the object's four-velocity, and "Γ" represents the coordinate system's 64 connection coefficients or Christoffel symbols. Note that the Greek subscripts take on four possible values, namely 0 for the time-axis and 1–3 for spatial coordinate axes, and that repeated indices are used to indicate summation over all values of that index. Trajectories with zero proper acceleration are referred to as geodesics.
The left hand side of this set of four equations (one each for the time-like and three spacelike values of index λ) is the object's proper-acceleration 3-vector combined with a null time component as seen from the vantage point of a reference or book-keeper coordinate system in which the object is at rest. The first term on the right hand side lists the rate at which the time-like (energy/"mc") and space-like (momentum/"m") components of the object's four-velocity "U" change, per unit time "τ" on traveler clocks.
Let's solve for that first term on the right since at low speeds its spacelike components represent the coordinate acceleration. More generally, when that first term goes to zero the object's coordinate acceleration goes to zero. This yields
formula_5
Thus, as exemplified with the first two animations above, coordinate acceleration goes to zero whenever proper-acceleration is exactly canceled by the connection (or "geometric acceleration") term on the far right. "Caution:" This term may be a sum of as many as sixteen separate velocity and position dependent terms, since the repeated indices "μ" and "ν" are by convention summed over all pairs of their four allowed values.
Force and equivalence.
The above equation also offers some perspective on forces and the equivalence principle. Consider "local" book-keeper coordinates for the metric (e.g. a local Lorentz tetrad like that which global positioning systems provide information on) to describe time in seconds, and space in distance units along perpendicular axes. If we multiply the above equation by the traveling object's rest mass m, and divide by Lorentz factor "γ" = d"t"/d"τ", the spacelike components express the rate of momentum change for that object from the perspective of the coordinates used to describe the metric.
This in turn can be broken down into parts due to proper and geometric components of acceleration and force. If we further multiply the time-like component by lightspeed "c", and define coordinate velocity as v = dx/d"t", we get an expression for rate of energy change as well:
formula_6 (timelike) and formula_7 (spacelike).
Here "a""o" is an acceleration due to proper forces and "a""g" is, by default, a geometric acceleration that we see applied to the object because of our coordinate system choice. At low speeds these accelerations combine to generate a coordinate acceleration like a = d2x/d"t"2, while for unidirectional motion "at any speed" "a""o"'s magnitude is that of proper acceleration "α" as in the section above where "α" = "γ"3"a" when "a""g" is zero. In general expressing these accelerations and forces can be complicated.
Nonetheless, if we use this breakdown to describe the connection coefficient (Γ) term above in terms of geometric forces, then the motion of objects from the point of view of "any coordinate system" (at least at low speeds) can be seen as locally Newtonian. This is already common practice e.g. with centrifugal force and gravity. Thus the equivalence principle extends the local usefulness of Newton's laws to accelerated coordinate systems and beyond.
Surface dwellers on a planet.
For low speed observers being held at fixed radius from the center of a spherical planet or star, coordinate acceleration ashell is approximately related to proper acceleration ao by:
formula_8
where the planet or star's Schwarzschild radius "r"s = 2"GM" / "c"2. As our shell observer's radius approaches the Schwarzschild radius, the proper acceleration "a"o needed to keep it from falling in becomes intolerable.
On the other hand, for "r" ≫ "r"s, an upward proper force of only "GMm"/"r"2 is needed to prevent one from accelerating downward. At the Earth's surface this becomes:
formula_9
where g is the downward 9.8 m/s2 acceleration due to gravity, and formula_10 is a unit vector in the radially outward direction from the center of the gravitating body. Thus here an outward proper force of mg is needed to keep one from accelerating downward.
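As a small numerical check (the Earth mass and radius below are commonly quoted values assumed for this sketch), the Schwarzschild correction factor is negligible at the Earth's surface and the required upward proper acceleration reduces to the familiar g:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth mass, kg (assumed value)
r = 6.371e6     # Earth mean radius, m (assumed value)
c = 2.998e8     # lightspeed, m/s

r_s = 2 * G * M / c**2                         # Schwarzschild radius of the Earth
a_o = math.sqrt(r / (r - r_s)) * G * M / r**2  # upward proper acceleration of a surface dweller

print(r_s)  # ~8.9e-3 m, about nine millimetres
print(a_o)  # ~9.8 m/s^2, i.e. g
```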
Four-vector derivations.
The spacetime equations of this section allow one to address "all deviations" between proper and coordinate acceleration in a single calculation. For example, let's calculate the Christoffel symbols:
formula_11
for the far-coordinate Schwarzschild metric ("c" d"τ")2 = (1−"r"s/"r")("c" d"t")2 − (1/(1−"r"s/"r"))d"r"2 − "r"2 d"θ"2 − ("r" sin "θ")2 d"φ"2, where "r"s is the Schwarzschild radius 2"GM"/"c"2. The resulting array of coefficients becomes:
formula_12
From this you can obtain the shell-frame proper acceleration by setting coordinate acceleration to zero and thus requiring that proper acceleration cancel the geometric acceleration of a stationary object i.e. formula_13. This does not solve the problem yet, since Schwarzschild coordinates in curved spacetime are book-keeper coordinates but not those of a local observer. The magnitude of the above proper acceleration 4-vector, namely formula_14, is however precisely what we want i.e. the upward frame-invariant proper acceleration needed to counteract the downward geometric acceleration felt by dwellers on the surface of a planet.
A special case of the above Christoffel symbol set is the flat-space spherical coordinate set obtained by setting "r"s or "M" above to zero:
formula_15
From this we can obtain, for example, the centri"petal" proper acceleration needed to cancel the centri"fugal" geometric acceleration of an object moving at constant angular velocity "ω" = d"φ"/d"τ" at the equator where "θ" = "π"/2. Forming the same 4-vector sum as above for the case of d"θ"/d"τ" and d"r"/d"τ" zero yields nothing more than the classical acceleration for rotational motion given above, i.e. formula_16 so that "a"o = "ω"2"r". Coriolis effects also reside in these connection coefficients, and similarly arise from coordinate-frame geometry alone.
See also.
<templatestyles src="Div col/styles.css"/>
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec{a}_\\text{acc} = \\vec{a}_\\text{o} - \\vec{a}_\\text{frame}."
},
{
"math_id": 1,
"text": "\\vec{a}_\\text{rot} = \n\\vec{a}_\\text{o} - \\vec\\omega \\times (\\vec\\omega \\times \\vec{r} ) - 2 \\vec\\omega \\times \\vec{v}_\\text{rot} - \\frac{d \\vec\\omega}{dt} \\times \\vec{r}."
},
{
"math_id": 2,
"text": "\\alpha=\\frac{\\Delta w}{\\Delta t}=c \\frac{\\Delta \\eta}{\\Delta \\tau}=c^2 \\frac{\\Delta \\gamma}{\\Delta x},"
},
{
"math_id": 3,
"text": "\\eta = \\sinh^{-1}\\left(\\frac{w}{c}\\right) = \\tanh^{-1}\\left(\\frac{v}{c}\\right) = \\pm \\cosh^{-1}\\left(\\gamma\\right) ."
},
{
"math_id": 4,
"text": "A^\\lambda := \\frac{DU^\\lambda }{d\\tau} = \\frac{dU^\\lambda }{d\\tau } + \\Gamma^\\lambda {}_{\\mu \\nu}U^\\mu U^\\nu "
},
{
"math_id": 5,
"text": "\\frac{dU^\\lambda }{d\\tau } =A^\\lambda - \\Gamma^\\lambda {}_{\\mu \\nu}U^\\mu U^\\nu."
},
{
"math_id": 6,
"text": "\\frac{dE}{dt}=\\vec{v}\\cdot\\frac{d\\vec{p}}{dt}"
},
{
"math_id": 7,
"text": "\\frac{d\\vec{p}}{dt} = \\sum \\vec{f_o} + \\sum \\vec{f_g} = m(\\vec{a_o}+\\vec{a_g}) "
},
{
"math_id": 8,
"text": "\\vec{a}_\\text{shell} = \\vec{a}_\\text{o} - \\sqrt{\\frac{r}{r-r_s}} \\frac{G M}{r^2} \\hat{r} "
},
{
"math_id": 9,
"text": "\\vec{a}_\\text{shell} = \\vec{a}_o - g \\hat{r}"
},
{
"math_id": 10,
"text": "\\hat{r}"
},
{
"math_id": 11,
"text": "\\left(\n\\begin{array}{llll}\n \\left\\{\\Gamma _{tt}^t,\\Gamma _{tr}^t,\\Gamma _{t\\theta }^t,\\Gamma _{t\\phi }^t\\right\\} & \\left\\{\\Gamma _{rt}^t,\\Gamma _{rr}^t,\\Gamma\n _{r\\theta }^t,\\Gamma _{r\\phi }^t\\right\\} & \\left\\{\\Gamma _{\\theta t}^t,\\Gamma _{\\theta r}^t,\\Gamma _{\\theta \\theta }^t,\\Gamma _{\\theta\n \\phi }^t\\right\\} & \\left\\{\\Gamma _{\\phi t}^t,\\Gamma _{\\phi r}^t,\\Gamma _{\\phi \\theta }^t,\\Gamma _{\\phi \\phi }^t\\right\\} \\\\\n \\left\\{\\Gamma _{tt}^r,\\Gamma _{tr}^r,\\Gamma _{t\\theta }^r,\\Gamma _{t\\phi }^r\\right\\} & \\left\\{\\Gamma _{rt}^r,\\Gamma _{rr}^r,\\Gamma\n _{r\\theta }^r,\\Gamma _{r\\phi }^r\\right\\} & \\left\\{\\Gamma _{\\theta t}^r,\\Gamma _{\\theta r}^r,\\Gamma _{\\theta \\theta }^r,\\Gamma _{\\theta\n \\phi }^r\\right\\} & \\left\\{\\Gamma _{\\phi t}^r,\\Gamma _{\\phi r}^r,\\Gamma _{\\phi \\theta }^r,\\Gamma _{\\phi \\phi }^r\\right\\} \\\\\n \\left\\{\\Gamma _{tt}^{\\theta },\\Gamma _{tr}^{\\theta },\\Gamma _{t\\theta }^{\\theta },\\Gamma _{t\\phi }^{\\theta }\\right\\} & \\left\\{\\Gamma\n _{rt}^{\\theta },\\Gamma _{rr}^{\\theta },\\Gamma _{r\\theta }^{\\theta },\\Gamma _{r\\phi }^{\\theta }\\right\\} & \\left\\{\\Gamma _{\\theta t}^{\\theta\n },\\Gamma _{\\theta r}^{\\theta },\\Gamma_{\\theta \\theta }^{\\theta },\\Gamma _{\\theta \\phi }^{\\theta }\\right\\} & \\left\\{\\Gamma _{\\phi\n t}^{\\theta },\\Gamma _{\\phi r}^{\\theta },\\Gamma _{\\phi \\theta }^{\\theta },\\Gamma _{\\phi \\phi }^{\\theta }\\right\\} \\\\\n \\left\\{\\Gamma _{tt}^{\\phi },\\Gamma _{tr}^{\\phi },\\Gamma _{t\\theta }^{\\phi },\\Gamma _{t\\phi }^{\\phi }\\right\\} & \\left\\{\\Gamma _{rt}^{\\phi\n },\\Gamma _{rr}^{\\phi },\\Gamma _{r\\theta }^{\\phi },\\Gamma _{r\\phi }^{\\phi }\\right\\} & \\left\\{\\Gamma _{\\theta t}^{\\phi },\\Gamma _{\\theta\n r}^{\\phi },\\Gamma _{\\theta \\theta }^{\\phi },\\Gamma _{\\theta \\phi }^{\\phi }\\right\\} & \\left\\{\\Gamma _{\\phi t}^{\\phi },\\Gamma _{\\phi\n r}^{\\phi },\\Gamma _{\\phi \\theta }^{\\phi },\\Gamma _{\\phi \\phi }^{\\phi }\\right\\}\n\\end{array}\n\\right)"
},
{
"math_id": 12,
"text": "\\left(\n\\begin{array}{llll}\n \\left\\{0,\\frac{r_s}{2 r (r - r_s)},0,0\\right\\} & \\left\\{\\frac{r_s}{2 r (r - r_s)},0,0,0\\right\\} & \\{0,0,0,0\\} & \\{0,0,0,0\\} \\\\\n \\left\\{\\frac{r_s c^2 (r-r_s)}{2 r^3},0,0,0\\right\\} & \\left\\{0,\\frac{r_s}{2 r (r_s-r)},0,0\\right\\} & \\{0,0,r_s-r,0\\} & \\left\\{0,0,0,(r_s-r) \\sin ^2\\theta\n \\right\\} \\\\\n \\{0,0,0,0\\} & \\left\\{0,0,\\frac{1}{r},0\\right\\} & \\left\\{0,\\frac{1}{r},0,0\\right\\} & \\{0,0,0,-\\cos \\theta \\sin \\theta \\} \\\\\n \\{0,0,0,0\\} & \\left\\{0,0,0,\\frac{1}{r}\\right\\} & \\{0,0,0,\\cot (\\theta )\\} & \\left\\{0,\\frac{1}{r},\\cot \\theta ,0\\right\\}\n\\end{array}\n\\right)."
},
{
"math_id": 13,
"text": "A^\\lambda = \\Gamma^\\lambda {}_{\\mu \\nu}U^\\mu U^\\nu = \\{0,GM/r^2,0,0\\}"
},
{
"math_id": 14,
"text": "\\alpha = \\sqrt{1/(1-r_s/r)}GM/r^2"
},
{
"math_id": 15,
"text": "\\left(\n\\begin{array}{llll}\n \\left\\{0,0,0,0\\right\\} & \\left\\{0,0,0,0\\right\\} & \\{0,0,0,0\\} & \\{0,0,0,0\\} \\\\\n \\left\\{0,0,0,0\\right\\} & \\left\\{0,0,0,0\\right\\} & \\{0,0,-r,0\\} & \\left\\{0,0,0,-r \\sin ^2\\theta\n \\right\\} \\\\\n \\{0,0,0,0\\} & \\left\\{0,0,\\frac{1}{r},0\\right\\} & \\left\\{0,\\frac{1}{r},0,0\\right\\} & \\{0,0,0,-\\cos \\theta \\sin \\theta \\} \\\\\n \\{0,0,0,0\\} & \\left\\{0,0,0,\\frac{1}{r}\\right\\} & \\{0,0,0,\\cot \\theta \\} & \\left\\{0,\\frac{1}{r},\\cot \\theta ,0\\right\\}\n\\end{array}\n\\right)."
},
{
"math_id": 16,
"text": "A^\\lambda = \\Gamma^\\lambda {}_{\\mu \\nu}U^\\mu U^\\nu = \\{0,-r(d\\phi/d\\tau)^2,0,0\\}"
}
] | https://en.wikipedia.org/wiki?curid=8687911 |
869048 | Hounsfield scale | Quantitative scale of radiodensity
The Hounsfield scale, named after Sir Godfrey Hounsfield, is a quantitative scale for describing radiodensity. It is frequently used in CT scans, where its value is also termed CT number.
Definition.
The Hounsfield unit (HU) scale is a linear transformation of the original linear attenuation coefficient measurement into one in which the radiodensity of distilled water at standard pressure and temperature (STP) is defined as 0 Hounsfield units (HU), while the radiodensity of air at STP is defined as −1000 HU. In a voxel with average linear attenuation coefficient formula_0, the corresponding HU value is therefore given by:
formula_1
where formula_2 and formula_3 are respectively the linear attenuation coefficients of water and air.
Thus, a change of one Hounsfield unit (HU) represents a change of 0.1% of the attenuation coefficient of water since the attenuation coefficient of air is nearly zero.
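In code, the definition is a one-line rescaling; the attenuation coefficients below are illustrative placeholders rather than values taken from the article.

```python
# A minimal sketch of the HU definition above; mu_water and mu_air are the linear
# attenuation coefficients at the scanner's effective energy (values here are
# illustrative only).
def hounsfield(mu, mu_water, mu_air):
    """CT number in HU for a voxel with mean linear attenuation coefficient mu."""
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

# By construction: water maps to 0 HU, air maps to -1000 HU.
print(hounsfield(0.19, 0.19, 0.0))    # 0.0
print(hounsfield(0.0, 0.19, 0.0))     # -1000.0
```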
Calibration tests of HU with reference to water and other materials may be done to ensure standardised response. This is particularly important for CT scans used in radiotherapy treatment planning, where HU is converted to electron density. Variation in the measured values of reference materials with known composition, and variation between and within slices may be used as part of test procedures.
Rationale.
The above standards were chosen as they are universally available references and suited to the key application for which computed axial tomography was developed: imaging the internal anatomy of living creatures based on organized water structures and mostly living in air, "e.g." humans.
Values for different body tissues and material.
HU-based differentiation of material applies to medical-grade dual-energy CT scans but not to cone beam computed tomography (CBCT) scans, as CBCT scans provide unreliable HU readings.
Values reported here are approximations. Different dynamics are reported from one study to another.
Exact HU dynamics can vary from one CT acquisition to another due to CT acquisition and reconstruction parameters (kV, filters, reconstruction algorithms, etc.). The use of contrast agents modifies HU as well in some body parts (mainly blood).
A practical application of this is in evaluation of tumors, where, for example, an adrenal tumor with a radiodensity of less than 10 HU is rather fatty in composition and almost certainly a benign adrenal adenoma.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "HU = 1000\\times\\frac{\\mu - \\mu_{\\textrm{water}}}{\\mu_{\\textrm{water}} - \\mu_{\\textrm{air}}}"
},
{
"math_id": 2,
"text": "\\mu_{\\textrm{water}}"
},
{
"math_id": 3,
"text": "\\mu_{\\textrm{air}}"
}
] | https://en.wikipedia.org/wiki?curid=869048 |
86908 | FIFO and LIFO accounting | Methods used in managing inventory
FIFO and LIFO accounting are methods used in managing inventory and financial matters involving the amount of money a company has to have tied up within inventory of produced goods, raw materials, parts, components, or feedstocks. They are used to manage assumptions of costs related to inventory, stock repurchases (if purchased at different prices), and various other accounting purposes. The following equation is useful when determining inventory costing methods:
formula_0
FIFO.
"FIFO" stands for "first-in, first-out", meaning that the oldest inventory items are recorded as sold first (but this does not necessarily mean that the exact oldest physical object has been tracked and sold). In other words, the cost associated with the inventory that was purchased first is the cost expensed first.
A company might use the LIFO method for accounting purposes, even if it uses FIFO for inventory management purposes (i.e., for the actual storage, shelving, and sale of its merchandise). For example, a company that sells many perishable goods, such as a supermarket chain, is likely to follow the FIFO method when managing inventory, to ensure that goods with earlier expiration dates are sold before goods with later expiration dates. However, this does not preclude that same company from accounting for its merchandise with the LIFO method.
With FIFO, the cost of inventory reported on the balance sheet represents the cost of the inventory most recently purchased. FIFO most closely mimics the flow of inventory, as businesses are far more likely to sell the oldest inventory first.
Consider this example: Foo Co. had the following inventory at hand, in order of acquisition in November: first 100 units purchased at $50 each, then 125 units at $55 each, and finally 75 units at $59 each.
If Foo Co. sells 210 units during November, the company would expense the cost associated with the first 100 units at $50 and the remaining 110 units at $55. Under FIFO, the total cost of sales for November would be $11,050. The ending inventory of 90 units would then be valued at the cost of the most recent purchases: 15 units at $55 and 75 units at $59, i.e. $825 + $4,425.
Thus, the balance sheet would now show the inventory valued at $5250.
FIFO Tax Implications.
FIFO will have a higher ending inventory value and lower cost of goods sold (COGS) compared to LIFO in a period of rising prices. Therefore, under these circumstances, FIFO would produce a higher gross profit and, similarly, a higher income tax expense.
LIFO.
"LIFO" stands for "last-in, first-out", meaning that the most recently produced items are recorded as sold first. Since the 1970s, some U.S. companies shifted towards the use of LIFO, which reduces their income taxes in times of inflation, but since International Financial Reporting Standards (IFRS) banned LIFO, more companies returned to FIFO.
LIFO is used only in the United States, which is governed by the generally accepted accounting principles (GAAP). Section 472 of the Internal Revenue Code directs how LIFO may be used if necessary. The code directs that LIFO may be used "only if the taxpayer establishes" that they have no other way of valuing their inventory.
In the FIFO example above, the company (Foo Co.), using LIFO accounting, would expense the cost associated with the first 75 units at $59, 125 more units at $55, and the remaining 10 units at $50. Under LIFO, the total cost of sales for November would be $11,800. The ending inventory of 90 units would then be valued at the oldest cost: 90 units at $50.
The balance sheet would show $4500 in inventory under LIFO.
The difference between the cost of an inventory calculated under the FIFO and LIFO methods is called the "LIFO reserve" (in the example above, it is $750, i.e. $5250 - $4500). This reserve, a form of contra account, is essentially the amount by which an entity's taxable income has been deferred by using the LIFO method.
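A short script can reproduce the Foo Co. figures above; the three purchase layers (100 units at $50, 125 at $55, 75 at $59) are inferred from the totals quoted, so treat them as an assumption of this sketch rather than part of the original example.

```python
# A minimal sketch reproducing the Foo Co. figures; the purchase layers are
# inferred from the quoted totals, not taken verbatim from the article.
def cost_of_sales(layers, units_sold, method="FIFO"):
    """Return (COGS, ending inventory value) for a list of (units, unit_cost) layers."""
    order = layers if method == "FIFO" else list(reversed(layers))
    cogs, remaining, leftover = 0.0, units_sold, []
    for units, cost in order:
        used = min(units, remaining)
        cogs += used * cost
        remaining -= used
        if units > used:
            leftover.append((units - used, cost))
    ending = sum(u * c for u, c in leftover)
    return cogs, ending

layers = [(100, 50), (125, 55), (75, 59)]        # November purchases, in order
print(cost_of_sales(layers, 210, "FIFO"))        # (11050.0, 5250.0)
print(cost_of_sales(layers, 210, "LIFO"))        # (11800.0, 4500.0)
# LIFO reserve = 5250 - 4500 = 750
```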
In most sets of accounting standards, such as the International Financial Reporting Standards, FIFO (or LIFO) valuation principles are "in-fine" subordinated to the higher principle of lower of cost or market valuation.
In the United States, publicly traded entities which use LIFO for taxation purposes must also use LIFO for financial reporting purposes, but such companies are also likely to report a LIFO reserve to their shareholders. A number of tax reform proposals have argued for the repeal of LIFO tax provision. The "Save LIFO Coalition" argues in favor of the retention of LIFO.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Beginning Inventory Balance} + \\text{Purchased (or Manufactured) Inventory} = \\text{Inventory Sold} + \\text{Ending Inventory Balance}."
}
] | https://en.wikipedia.org/wiki?curid=86908 |
8691587 | Thomas Kilgore Sherwood | American chemical engineer
Thomas Kilgore Sherwood (July 25, 1903 – January 14, 1976) was a noted American chemical engineer and a founding member of the National Academy of Engineering.
Biography.
Sherwood was born in Columbus, Ohio, and spent much of his youth in Montreal. In 1923 he received his B.S. from McGill University, and entered the Massachusetts Institute of Technology (MIT) for his Ph.D. His dissertation, "The Mechanism of the Drying of Solids," was completed in 1929, a year after he had become assistant professor at Worcester Polytechnic Institute. In 1930 he returned to MIT as assistant professor where he remained until his retirement, serving as associate professor (1933), professor (1941), and dean of engineering (1946–1952). In 1969 he retired from MIT to become professor of chemical engineering at the University of California, Berkeley.
Sherwood's primary research area was mass transfer, and in 1937 he published the first major textbook in the field, "Absorption and Extraction" (republished 1974 as "Mass Transfer"). The Sherwood number is named in his honor:
formula_0
where formula_1 is the convective mass transfer coefficient, formula_2 is a characteristic length, and formula_3 is the mass diffusivity.
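As a small illustration of the definition (the numbers below are invented for demonstration and do not come from Sherwood's work):

```python
# A minimal illustration of the Sherwood number definition; the sample inputs
# are made-up values for demonstration.
def sherwood(k_c, length, diffusivity):
    """Dimensionless Sherwood number Sh = K_c * L / D."""
    return k_c * length / diffusivity

# e.g. K_c = 2e-5 m/s, L = 0.1 m, D = 2e-9 m^2/s  ->  Sh = 1000
print(sherwood(2e-5, 0.1, 2e-9))
```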
His activities in World War II included organizing chemical engineers for the National Defense Research Committee (NDRC) in 1940; consulting to the Baruch Committee on synthetic rubber development (1942); serving as NDRC Section Chief for Miscellaneous Chemical Engineering Problems (1942), where he oversaw creation of new hydraulic fluids, antifouling coatings for ship bottoms, large smoke screen generators, etc.; and member of the Whitman Committee on jet propulsion (1944). In autumn 1944 he followed American troops into Europe to gather scientific intelligence. His industrial consulting work included efforts in seawater desalination, removal of sulfur dioxide from emissions, freeze-drying blood, and the manufacture of penicillin and vinyl acetate.
Sherwood received the U.S. Medal for Merit (1948), won major awards from the American Institute of Chemical Engineers and American Chemical Society, and was a member of the American Academy of Arts and Sciences (1948), National Academy of Sciences (1958), and National Academy of Engineering.
Works.
<templatestyles src="Refbegin/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nSh = \\frac{K_c L}{\\mathcal{D}}\n"
},
{
"math_id": 1,
"text": "K_c"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "\\mathcal{D}"
}
] | https://en.wikipedia.org/wiki?curid=8691587 |
869255 | Periodic points of complex quadratic mappings | This article describes periodic points of some complex quadratic maps. A map is a formula for computing a value of a variable based on its own previous value or values; a quadratic map is one that involves the previous value raised to the powers one and two; and a complex map is one in which the variable and the parameters are complex numbers. A periodic point of a map is a value of the variable that occurs repeatedly after intervals of a fixed length.
These periodic points play a role in the theories of Fatou and Julia sets.
Definitions.
Let
formula_0
be the complex quadratic mapping, where formula_1 and formula_2 are complex numbers.
Notationally, formula_3 is the formula_4-fold composition of formula_5 with itself (not to be confused with the formula_4th derivative of formula_5)—that is, the value after the "k"-th iteration of the function formula_6 Thus
formula_7
Periodic points of a complex quadratic mapping of period formula_8 are points formula_1 of the dynamical plane such that
formula_9
where formula_8 is the smallest positive integer for which the equation holds at that "z".
We can introduce a new function:
formula_10
so periodic points are zeros of function formula_11: points "z" satisfying
formula_12
which is a polynomial of degree formula_13
Number of periodic points.
The degree of the polynomial formula_11 describing periodic points is formula_14 so it has exactly formula_14 complex roots (= periodic points), counted with multiplicity.
Stability of periodic points (orbit) - multiplier.
The multiplier (or eigenvalue, derivative) formula_15 of a rational map formula_16 iterated formula_8 times at cyclic point formula_17 is defined as:
formula_18
where formula_19 is the first derivative of formula_20 with respect to formula_1 at formula_17.
Because the multiplier is the same at all periodic points on a given orbit, it is called a multiplier of the periodic orbit.
The multiplier is a complex number, invariant under conjugation of a rational map at its periodic points, and it is used to check the stability of periodic (also fixed) points with the stability index formula_21
A periodic point is attracting when formula_22 in particular superattracting when formula_23 and attracting but not superattracting when formula_24 it is indifferent when formula_25 rationally indifferent (parabolic) when formula_26 is a root of unity, and irrationally indifferent when formula_27 but formula_26 is not a root of unity; and it is repelling when formula_28
Periodic points.
Period-1 points (fixed points).
Finite fixed points.
Let us begin by finding all finite points left unchanged by one application of formula_16. These are the points that satisfy formula_29. That is, we wish to solve
formula_30
which can be rewritten as
formula_31
Since this is an ordinary quadratic equation in one unknown, we can apply the standard quadratic solution formula:
formula_32 and formula_33
So for formula_34 we have two finite fixed points formula_35 and formula_36.
Since
formula_37 and formula_38 where formula_39
we have formula_40.
Thus fixed points are symmetrical about formula_41.
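A quick numerical check of these formulas (the sample value of "c" is arbitrary, not taken from the article):

```python
# Numerical check of the two finite fixed points and their multipliers
# (the value of c is an arbitrary sample).
import cmath

def fixed_points(c):
    m = cmath.sqrt(1 - 4*c) / 2
    return 0.5 - m, 0.5 + m

c = -0.4 + 0.3j
a1, a2 = fixed_points(c)
f = lambda z: z*z + c
print(abs(f(a1) - a1), abs(f(a2) - a2))   # both ~ 0: the points are fixed
print(a1 + a2)                            # 1, as stated above
print(2*a1, 2*a2)                         # multipliers lambda = f'(alpha) = 2*alpha
```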
Complex dynamics.
Here different notation is commonly used:
formula_42 with multiplier formula_43
and
formula_44 with multiplier formula_45
Again we have
formula_46
Since the derivative with respect to "z" is
formula_47
we have
formula_48
This implies that formula_49 can have at most one attractive fixed point.
These points are distinguished by the facts that:
Special cases.
An important case of the quadratic mapping is formula_53. In this case, we get formula_54 and formula_55, so 0 is a superattractive fixed point, and 1 belongs to the Julia set.
Only one fixed point.
We have formula_56 exactly when formula_57 This equation has one solution, formula_58 in which case formula_59. In fact formula_60 is the largest positive, purely real value for which a finite attractor exists.
Infinite fixed point.
We can extend the complex plane formula_61 to the Riemann sphere (extended complex plane) formula_62 by adding infinity:
formula_63
and extend formula_5 such that formula_64
Then infinity is a superattracting fixed point of formula_5: formula_65
Period-2 cycles.
Period-2 cycles are two distinct points formula_66 and formula_67 such that formula_68 and formula_69, and hence
formula_70
for formula_71:
formula_72
Equating this to "z", we obtain
formula_73
This equation is a polynomial of degree 4, and so has four (possibly non-distinct) solutions. However, we already know two of the solutions. They are formula_35 and formula_36, computed above, since if these points are left unchanged by one application of formula_16, then clearly they will be unchanged by more than one application of formula_16.
Our 4th-order polynomial can therefore be factored in 2 ways:
formula_74
First method of factorization.
This expands directly as formula_75 (note the alternating signs), where
formula_76
formula_77
formula_78
formula_79
We already have two solutions, and only need the other two. Hence the problem is equivalent to solving a quadratic polynomial. In particular, note that
formula_80
and
formula_81
Adding these to the above, we get formula_82 and formula_83. Matching these against the coefficients from expanding formula_16, we get
formula_84 and formula_85
From this, we easily get
formula_86 and formula_87.
From here, we construct a quadratic equation with formula_88 and apply the standard solution formula to get
formula_89 and formula_90
Closer examination shows that:
formula_68 and formula_91
meaning these two points are the two points on a single period-2 cycle.
Second method of factorization.
We can factor the quartic by using polynomial long division to divide out the factors formula_92 and formula_93 which account for the two fixed points formula_35 and formula_36 (whose values were given earlier and which still remain at the fixed point after two iterations):
formula_94
The roots of the first factor are the two fixed points. They are repelling outside the main cardioid.
The second factor has the two roots
formula_95
These two roots, which are the same as those found by the first method, form the period-2 orbit.
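A numerical check that these two roots really swap under formula_16 and form a 2-cycle (again with an arbitrary sample "c"):

```python
# Numerical check of the period-2 orbit formula (the value of c is an arbitrary sample).
import cmath

def period2(c):
    s = cmath.sqrt(-3 - 4*c)
    return (-1 - s)/2, (-1 + s)/2

c = -1.1 + 0.1j
b1, b2 = period2(c)
f = lambda z: z*z + c
print(abs(f(b1) - b2), abs(f(b2) - b1))    # both ~ 0: the two points swap under f
print(4*b1*b2, 4*(c + 1))                  # orbit multiplier f'(b1)*f'(b2) = 4(c + 1)
```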
Special cases.
Again, let us look at formula_53. Then
formula_96 and formula_97
both of which are complex numbers. We have formula_98. Thus, both these points are "hiding" in the Julia set.
Another special case is formula_99, which gives formula_100 and formula_101. This gives the well-known superattractive cycle found in the largest period-2 lobe of the quadratic Mandelbrot set.
Cycles for period greater than 2.
The degree of the equation formula_102 is 2"n"; thus for example, to find the points on a 3-cycle we would need to solve an equation of degree 8. After factoring out the factors giving the two fixed points, we would have a sixth degree equation.
There is no general solution in radicals to polynomial equations of degree five or higher, so the points on a cycle of period greater than 2 must in general be computed using numerical methods. However, in the specific case of period 4 the cyclical points have lengthy expressions in radicals.
In the case "c" = –2, trigonometric solutions exist for the periodic points of all periods. The case formula_103 is equivalent to the logistic map case "r" = 4: formula_104 Here the equivalence is given by formula_105 One of the "k"-cycles of the logistic variable "x" (all of which cycles are repelling) is
formula_106
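The following sketch is simply a numerical verification of the trigonometric cycle above and of the stated conjugacy with the logistic map; it adds nothing beyond the formulas already given.

```python
# Numerical check of the trigonometric 3-cycle for c = -2 (logistic map with r = 4),
# using the conjugacy z = 2 - 4x quoted above.
import math

k = 3
xs = [math.sin(2**j * 2*math.pi/(2**k - 1))**2 for j in range(k)]
print(xs)
# Each point maps to the next under x -> 4x(1 - x):
print([abs(4*x*(1 - x) - xs[(j + 1) % k]) for j, x in enumerate(xs)])  # ~ 0
# And z = 2 - 4x gives a 3-cycle of z -> z^2 - 2:
zs = [2 - 4*x for x in xs]
print([abs(z*z - 2 - zs[(j + 1) % k]) for j, z in enumerate(zs)])      # ~ 0
```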
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f_c(z) = z^2+c\\,"
},
{
"math_id": 1,
"text": "z"
},
{
"math_id": 2,
"text": "c"
},
{
"math_id": 3,
"text": "f^{(k)} _c (z)"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "f_c"
},
{
"math_id": 6,
"text": "f _c."
},
{
"math_id": 7,
"text": "f^{(k)} _c (z) = f_c(f^{(k-1)} _c (z))."
},
{
"math_id": 8,
"text": "p"
},
{
"math_id": 9,
"text": "f^{(p)} _c (z) = z,"
},
{
"math_id": 10,
"text": "F_p(z,f) = f^{(p)} _c (z) - z,"
},
{
"math_id": 11,
"text": "F_p(z,f)"
},
{
"math_id": 12,
"text": "F_p(z,f) = 0,"
},
{
"math_id": 13,
"text": "2^p."
},
{
"math_id": 14,
"text": "d = 2^p"
},
{
"math_id": 15,
"text": "m(f^p,z_0)=\\lambda"
},
{
"math_id": 16,
"text": "f"
},
{
"math_id": 17,
"text": "z_0"
},
{
"math_id": 18,
"text": "m(f^p,z_0) = \\lambda = \\begin{cases} \n f^{p \\prime}(z_0), &\\mbox{if }z_0 \\ne \\infty \\\\\n \\frac{1}{f^{p \\prime} (z_0)}, & \\mbox{if }z_0 = \\infty \\end{cases}"
},
{
"math_id": 19,
"text": "f^{p\\prime} (z_0)"
},
{
"math_id": 20,
"text": "f^p"
},
{
"math_id": 21,
"text": "abs(\\lambda). \\,"
},
{
"math_id": 22,
"text": "abs(\\lambda) < 1;"
},
{
"math_id": 23,
"text": "abs(\\lambda) = 0;"
},
{
"math_id": 24,
"text": "0 < abs(\\lambda) < 1;"
},
{
"math_id": 25,
"text": "abs(\\lambda) = 1;"
},
{
"math_id": 26,
"text": "\\lambda"
},
{
"math_id": 27,
"text": "abs(\\lambda)=1"
},
{
"math_id": 28,
"text": "abs(\\lambda) > 1."
},
{
"math_id": 29,
"text": "f_c(z)=z"
},
{
"math_id": 30,
"text": "z^2+c=z,\\,"
},
{
"math_id": 31,
"text": "\\ z^2-z+c=0."
},
{
"math_id": 32,
"text": "\\alpha_1 = \\frac{1-\\sqrt{1-4c}}{2}"
},
{
"math_id": 33,
"text": "\\alpha_2 = \\frac{1+\\sqrt{1-4c}}{2}."
},
{
"math_id": 34,
"text": "c \\in \\mathbb{C} \\setminus \\{1/4\\}"
},
{
"math_id": 35,
"text": "\\alpha_1"
},
{
"math_id": 36,
"text": "\\alpha_2"
},
{
"math_id": 37,
"text": "\\alpha_1 = \\frac{1}{2}-m"
},
{
"math_id": 38,
"text": "\\alpha_2 = \\frac{1}{2}+m"
},
{
"math_id": 39,
"text": "m = \\frac{\\sqrt{1-4c}}{2},"
},
{
"math_id": 40,
"text": "\\alpha_1 + \\alpha_2 = 1"
},
{
"math_id": 41,
"text": "z = 1/2"
},
{
"math_id": 42,
"text": "\\alpha_c = \\frac{1-\\sqrt{1-4c}}{2}"
},
{
"math_id": 43,
"text": "\\lambda_{\\alpha_c} = 1-\\sqrt{1-4c}"
},
{
"math_id": 44,
"text": "\\beta_c = \\frac{1+\\sqrt{1-4c}}{2}"
},
{
"math_id": 45,
"text": "\\lambda_{\\beta_c} = 1+\\sqrt{1-4c}."
},
{
"math_id": 46,
"text": "\\alpha_c + \\beta_c = 1 ."
},
{
"math_id": 47,
"text": "P_c'(z) = \\frac{d}{dz}P_c(z) = 2z ,"
},
{
"math_id": 48,
"text": "P_c'(\\alpha_c) + P_c'(\\beta_c)= 2 \\alpha_c + 2 \\beta_c = 2 (\\alpha_c + \\beta_c) = 2 ."
},
{
"math_id": 49,
"text": "P_c"
},
{
"math_id": 50,
"text": "\\beta_c"
},
{
"math_id": 51,
"text": "c \\in M \\setminus \\left\\{ 1/4 \\right\\}"
},
{
"math_id": 52,
"text": "\\alpha_c"
},
{
"math_id": 53,
"text": "c=0"
},
{
"math_id": 54,
"text": "\\alpha_1 = 0"
},
{
"math_id": 55,
"text": "\\alpha_2=1"
},
{
"math_id": 56,
"text": "\\alpha_1=\\alpha_2"
},
{
"math_id": 57,
"text": "1-4c=0."
},
{
"math_id": 58,
"text": "c=1/4,"
},
{
"math_id": 59,
"text": "\\alpha_1=\\alpha_2=1/2"
},
{
"math_id": 60,
"text": "c=1/4"
},
{
"math_id": 61,
"text": "\\mathbb{C}"
},
{
"math_id": 62,
"text": "\\mathbb{\\hat{C}}"
},
{
"math_id": 63,
"text": "\\mathbb{\\hat{C}} = \\mathbb{C} \\cup \\{ \\infty \\}"
},
{
"math_id": 64,
"text": "f_c(\\infty)=\\infty."
},
{
"math_id": 65,
"text": "f_c(\\infty)=\\infty=f^{-1}_c(\\infty)."
},
{
"math_id": 66,
"text": "\\beta_1"
},
{
"math_id": 67,
"text": "\\beta_2"
},
{
"math_id": 68,
"text": "f_c(\\beta_1) = \\beta_2"
},
{
"math_id": 69,
"text": "f_c(\\beta_2) = \\beta_1"
},
{
"math_id": 70,
"text": "f_c(f_c(\\beta_n)) = \\beta_n"
},
{
"math_id": 71,
"text": "n \\in \\{1, 2\\}"
},
{
"math_id": 72,
"text": "f_c(f_c(z)) = (z^2+c)^2+c = z^4 + 2cz^2 + c^2 + c."
},
{
"math_id": 73,
"text": "z^4 + 2cz^2 - z + c^2 + c = 0."
},
{
"math_id": 74,
"text": "(z-\\alpha_1)(z-\\alpha_2)(z-\\beta_1)(z-\\beta_2) = 0.\\,"
},
{
"math_id": 75,
"text": "x^4 - Ax^3 + Bx^2 - Cx + D = 0"
},
{
"math_id": 76,
"text": "D = \\alpha_1 \\alpha_2 \\beta_1 \\beta_2, \\,"
},
{
"math_id": 77,
"text": "C = \\alpha_1 \\alpha_2 \\beta_1 + \\alpha_1 \\alpha_2 \\beta_2 + \\alpha_1 \\beta_1 \\beta_2 + \\alpha_2 \\beta_1 \\beta_2, \\,"
},
{
"math_id": 78,
"text": "B = \\alpha_1 \\alpha_2 + \\alpha_1 \\beta_1 + \\alpha_1 \\beta_2 + \\alpha_2 \\beta_1 + \\alpha_2 \\beta_2 + \\beta_1 \\beta_2, \\,"
},
{
"math_id": 79,
"text": "A = \\alpha_1 + \\alpha_2 + \\beta_1 + \\beta_2.\\,"
},
{
"math_id": 80,
"text": "\\alpha_1 + \\alpha_2 = \\frac{1-\\sqrt{1-4c}}{2} + \\frac{1+\\sqrt{1-4c}}{2} = \\frac{1+1}{2} = 1"
},
{
"math_id": 81,
"text": "\\alpha_1 \\alpha_2 = \\frac{(1-\\sqrt{1-4c})(1+\\sqrt{1-4c})}{4} = \\frac{1^2 - (\\sqrt{1-4c})^2}{4}= \\frac{1 - 1 + 4c}{4} = \\frac{4c}{4} = c."
},
{
"math_id": 82,
"text": "D = c \\beta_1 \\beta_2"
},
{
"math_id": 83,
"text": "A = 1 + \\beta_1 + \\beta_2"
},
{
"math_id": 84,
"text": "D = c \\beta_1 \\beta_2 = c^2 + c"
},
{
"math_id": 85,
"text": "A = 1 + \\beta_1 + \\beta_2 = 0."
},
{
"math_id": 86,
"text": "\\beta_1 \\beta_2 = c + 1"
},
{
"math_id": 87,
"text": "\\beta_1 + \\beta_2 = -1"
},
{
"math_id": 88,
"text": "A' = 1, B = 1, C = c+1"
},
{
"math_id": 89,
"text": "\\beta_1 = \\frac{-1 - \\sqrt{-3 -4c}}{2}"
},
{
"math_id": 90,
"text": "\\beta_2 = \\frac{-1 + \\sqrt{-3 -4c}}{2}."
},
{
"math_id": 91,
"text": "f_c(\\beta_2) = \\beta_1,"
},
{
"math_id": 92,
"text": "(z-\\alpha_1)"
},
{
"math_id": 93,
"text": "(z-\\alpha_2), "
},
{
"math_id": 94,
"text": "(z^2+c)^2 + c -z = (z^2 + c - z)(z^2 + z + c +1 ). \\,"
},
{
"math_id": 95,
"text": "\\frac{-1 \\pm \\sqrt{-3 -4c}}{2}. \\,"
},
{
"math_id": 96,
"text": "\\beta_1 = \\frac{-1 - i\\sqrt{3}}{2}"
},
{
"math_id": 97,
"text": "\\beta_2 = \\frac{-1 + i\\sqrt{3}}{2},"
},
{
"math_id": 98,
"text": "| \\beta_1 | = | \\beta_2 | = 1"
},
{
"math_id": 99,
"text": "c=-1"
},
{
"math_id": 100,
"text": "\\beta_1 = 0"
},
{
"math_id": 101,
"text": "\\beta_2 = -1"
},
{
"math_id": 102,
"text": "f^{(n)}(z)=z"
},
{
"math_id": 103,
"text": "z_{n+1}=z_n^2-2"
},
{
"math_id": 104,
"text": "x_{n+1}=4x_n(1-x_n)."
},
{
"math_id": 105,
"text": "z=2-4x."
},
{
"math_id": 106,
"text": "\\sin^2\\left(\\frac{2\\pi}{2^k-1}\\right), \\, \\sin^2\\left(2\\cdot\\frac{2\\pi}{2^k-1}\\right), \\, \\sin^2\\left(2^2\\cdot\\frac{2\\pi}{2^k-1}\\right), \\, \\sin^2\\left(2^3\\cdot\\frac{2\\pi}{2^k-1}\\right), \\dots , \\sin^2\\left(2^{k-1}\\frac{2\\pi}{2^k-1}\\right)."
}
] | https://en.wikipedia.org/wiki?curid=869255 |
869496 | Mitral regurgitation | Form of valvular heart disease
Medical condition
Mitral regurgitation (MR), also known as mitral insufficiency or mitral incompetence, is a form of valvular heart disease in which the mitral valve is insufficient and does not close properly when the heart pumps out blood. It is the abnormal leaking of blood backwards – regurgitation from the left ventricle, through the mitral valve, into the left atrium, when the left ventricle contracts. Mitral regurgitation is the most common form of valvular heart disease.
<templatestyles src="Template:TOC limit/styles.css" />
Definition.
Mitral regurgitation, also known as mitral insufficiency or mitral incompetence, is the backward flow of blood from the left ventricle, through the mitral valve, and into the left atrium, when the left ventricle contracts, resulting in a systolic murmur radiating to the left armpit.
Signs and symptoms.
Mitral regurgitation may be present for many years before any symptoms appear. The symptoms associated with MR are dependent on which phase of the disease process the individual is in. Individuals with acute MR are typically severely symptomatic and will have the signs and symptoms of acute decompensated congestive heart failure (i.e. shortness of breath, pulmonary edema, orthopnea, and paroxysmal nocturnal dyspnea). In acute cases, a murmur and tachycardia may be the only distinctive signs.
Individuals with chronic compensated MR may be asymptomatic for long periods of time, with a normal exercise tolerance and no evidence of heart failure. Over time, however, there may be decompensation and patients can develop volume overload (congestive heart failure). Symptoms of entry into a decompensated phase may include fatigue, shortness of breath particularly on exertion, and leg swelling. Also, there may be development of an irregular heart rhythm known as atrial fibrillation.
Findings on clinical examination depend on the severity and duration of MR. The mitral component of the first heart sound is usually soft, and there is a laterally displaced apex beat, often with a heave. The first heart sound is followed by a high-pitched holosystolic murmur at the apex, radiating to the back or clavicular area. Its duration is, as the name suggests, the whole of systole. The loudness of the murmur does not correlate well with the severity of regurgitation. It may be followed by a loud, palpable P2, heard best when lying on the left side. A third heart sound is commonly heard.
Patients with mitral valve prolapse may have a holosystolic murmur or often a mid-to-late systolic click and a late systolic murmur. Cases with a late systolic regurgitant murmur may still be associated with significant hemodynamic consequences.
Mitral regurgitation as a result of papillary muscle damage or rupture may be a complication of a heart attack and lead to cardiogenic shock.
Cause.
The mitral valve apparatus comprises two valve leaflets, the mitral annulus, which forms a ring around the valve leaflets, and the papillary muscles, which tether the valve leaflets to the left ventricle and prevent them from prolapsing into the left atrium. The "chordae tendineae" is also present and connects the valve leaflets to the papillary muscles. Dysfunction of any of these portions of the mitral valve apparatus can cause regurgitation.
The most common cause of MR in developed countries is mitral valve prolapse. It is the most common cause of primary mitral regurgitation in the United States, causing about 50% of cases. Myxomatous degeneration of the mitral valve is more common in women as well as with advancing age, which causes a stretching of the leaflets of the valve and the chordae tendineae. Such elongation prevents the valve leaflets from fully coming together when the valve closes, causing the valve leaflets to prolapse into the left atrium, thereby causing MR.
Ischemic heart disease causes MR by the combination of ischemic dysfunction of the papillary muscles, and the dilatation of the left ventricle. This can lead to the subsequent displacement of the papillary muscles and the dilatation of the mitral valve annulus.
Rheumatic fever (RF), Marfan's syndrome and the Ehlers–Danlos syndromes are other typical causes. Mitral valve stenosis (MVS) can sometimes be a cause of mitral regurgitation (MR) in the sense that a stenotic valve (calcified and with restricted range of movement) allows backflow (regurgitation) if it is too stiff and misshapen to close completely. Most MVS is caused by RF, so one can say that MVS is sometimes the proximal cause of MI/MR (that is, stenotic MI/MR) and that RF is often the distal cause of MVS, MI/MR, or both. MR and mitral valve prolapse are also common in Ehlers–Danlos syndromes.
Secondary mitral regurgitation is due to the dilatation of the left ventricle that causes stretching of the mitral valve annulus and displacement of the papillary muscles. This dilatation of the left ventricle can be due to any cause of dilated cardiomyopathy including aortic insufficiency, nonischemic dilated cardiomyopathy, and noncompaction cardiomyopathy. Because the papillary muscles, chordae, and valve leaflets are usually normal in such conditions, it is also called functional mitral regurgitation.
Acute MR is most often caused by endocarditis, mainly "S. aureus". Rupture or dysfunction of a papillary muscle, or of the chordae tendineae (as can occur with mitral valve prolapse), are also common causes in acute cases.
Pathophysiology.
The pathophysiology of MR can be broken into three phases of the disease process: the acute phase, the chronic compensated phase, and the chronic decompensated phase.
Acute phase.
Acute MR (as may occur due to the sudden rupture of the chordae tendinae or papillary muscle) causes a sudden volume overload of both the left atrium and the left ventricle. The left ventricle develops volume overload because with every contraction it now has to pump out not only the volume of blood that goes into the aorta (the forward cardiac output or forward stroke volume) but also the blood that regurgitates into the left atrium (the regurgitant volume). The combination of the forward stroke volume and the regurgitant volume is known as the total stroke volume of the left ventricle.
In the acute setting, the stroke volume of the left ventricle is increased (increased ejection fraction); this happens because of more complete emptying of the heart. However, as it progresses the LV volume increases and the contractile function deteriorates, thus leading to dysfunctional LV and a decrease in ejection fraction. The increase in stroke volume is explained by the Frank–Starling mechanism, in which increased ventricular pre-load stretches the myocardium such that contractions are more forceful.
The regurgitant volume causes a volume overload and a pressure overload of the left atrium and the left ventricle. The increased pressures in the left side of the heart may inhibit drainage of blood from the lungs via the pulmonary veins and lead to pulmonary congestion.
Chronic phase.
Compensated.
If the MR develops slowly over months to years or if the acute phase cannot be managed with medical therapy, the individual will enter the chronic compensated phase of the disease. In this phase, the left ventricle develops eccentric hypertrophy in order to better manage the larger than normal stroke volume. The eccentric hypertrophy and the increased diastolic volume combine to increase the stroke volume (to levels well above normal) so that the forward stroke volume (forward cardiac output) approaches the normal levels. In the left atrium, the volume overload causes enlargement of the left atrium, allowing the filling pressure in the left atrium to decrease. This improves the drainage from the pulmonary veins, and signs and symptoms of pulmonary congestion will decrease.
These changes in the left ventricle and left atrium improve the low forward cardiac output state and the pulmonary congestion that occur in the acute phase of the disease. Individuals in the chronic compensated phase may be asymptomatic and have normal exercise tolerances.
Decompensated.
An individual may be in the compensated phase of MR for years, but will eventually develop left ventricular dysfunction, the hallmark for the chronic decompensated phase of MR. It is currently unclear what causes an individual to enter the decompensated phase of this disease. However, the decompensated phase is characterized by calcium overload within the cardiac myocytes.
In this phase, the ventricular myocardium is no longer able to contract adequately to compensate for the volume overload of mitral regurgitation, and the stroke volume of the left ventricle will decrease. The decreased stroke volume causes a decreased forward cardiac output and an increase in the end-systolic volume. The increased end-systolic volume translates to increased filling pressures of the left ventricle and increased pulmonary venous congestion. The individual may again have symptoms of congestive heart failure.
The left ventricle begins to dilate during this phase. This causes a dilatation of the mitral valve annulus, which may worsen the degree of MR. The dilated left ventricle causes an increase in the wall stress of the cardiac chamber as well. While the ejection fraction is less in the chronic decompensated phase than in the acute phase or the chronic compensated phase, it may still be in the normal range (i.e.: > 50 percent), and may not decrease until late in the disease course. A decreased ejection fraction in an individual with MR and no other cardiac abnormality should alert the physician that the disease may be in its decompensated phase.
Diagnosis.
There are many diagnostic tests that have abnormal results in the presence of MR. These tests suggest the diagnosis of MR and may indicate to the physician that further testing is warranted. For instance, the electrocardiogram (ECG) in long-standing MR may show evidence of left atrial enlargement and left ventricular dilatation. Atrial fibrillation may also be noted on the ECG in individuals with chronic mitral regurgitation. The ECG may not show any of these findings in the setting of acute MR.
The quantification of MR usually employs imaging studies such as echocardiography or magnetic resonance angiography of the heart.
Chest X-ray.
The chest X-ray in individuals with chronic MR is characterized by enlargement of the left atrium and the left ventricle, and then maybe calcification of the mitral valve.
Echocardiogram.
An echocardiogram is commonly used to confirm the diagnosis of MR. Color doppler flow on the transthoracic echocardiogram (TTE) will reveal a jet of blood flowing from the left ventricle into the left atrium during ventricular systole. Also, it may detect a dilated left atrium and ventricle and decreased left ventricular function. A transesophageal echocardiogram can give clearer images if needed as the back of the heart can also be viewed.
Electrocardiography.
P mitrale is a broad, bifid notched P wave in several or many leads with a prominent late negative component to the P wave in lead V1, and may be seen in MR, but also in mitral stenosis, and, potentially, any cause of overload of the left atrium.
Quantification of mitral regurgitation.
The degree of severity of MR can be quantified by the regurgitant fraction, which is the percentage of the left ventricular stroke volume that regurgitates into the left atrium.
regurgitant fraction = formula_0
where Vmitral and Vaortic are, respectively, the volumes of blood that flow forward through the mitral valve and aortic valve during a cardiac cycle.
Methods that have been used to assess the regurgitant fraction in mitral regurgitation include echocardiography, cardiac catheterization, fast CT scan, and cardiac MRI. The echocardiographic technique to measure the regurgitant fraction is to determine the forward flow through the mitral valve (from the left atrium to the left ventricle) during ventricular diastole, and to compare it with the flow out of the left ventricle through the aortic valve in ventricular systole. This method assumes that the aortic valve does not have aortic insufficiency. Another way to quantify the degree of MR is to determine the area of the regurgitant flow at the level of the valve. This is known as the regurgitant orifice area and correlates with the size of the defect in the mitral valve. One particular echocardiographic technique used to measure the orifice area is measurement of the proximal isovelocity surface area (PISA). The flaw of using PISA to determine the mitral valve regurgitant orifice area is that it measures the flow at one moment in time in the cardiac cycle, which may not reflect the average performance of the regurgitant jet.
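As a simple illustration of the formula (the flow volumes below are made-up example numbers, not clinical data from the article):

```python
# A minimal sketch of the regurgitant-fraction calculation; the flow volumes are
# illustrative numbers only.
def regurgitant_fraction(v_mitral, v_aortic):
    """Percentage of left-ventricular stroke volume that leaks back into the left atrium."""
    return (v_mitral - v_aortic) / v_mitral * 100.0

# e.g. 100 mL forward through the mitral valve, 60 mL out through the aortic valve
print(regurgitant_fraction(100.0, 60.0))   # 40.0 (percent)
```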
Treatment.
The treatment of MR depends on the acuteness of the disease and whether there are associated signs of hemodynamic compromise. In general, medical therapy is non-curative and is used for mild-to-moderate regurgitation or in patients unable to tolerate surgery.
In acute MR secondary to a mechanical defect in the heart (i.e., rupture of a papillary muscle or chordae tendineae), the treatment of choice is mitral valve surgery. If the patient is hypotensive prior to the surgical procedure, an intra-aortic balloon pump may be placed in order to improve perfusion of the organs and to decrease the degree of MR.
Medicine.
If the individual with acute MR is normotensive, vasodilators may be of use to decrease the afterload seen by the left ventricle and thereby decrease the regurgitant fraction. The vasodilator most commonly used is nitroprusside.
Individuals with chronic MR can be treated with vasodilators as well to decrease afterload. In the chronic state, the most commonly used agents are ACE inhibitors and hydralazine. Studies have shown that the use of ACE inhibitors and hydralazine can delay surgical treatment of MR. The current guidelines for treatment of MR limit the use of vasodilators to individuals with hypertension, however. Any hypertension is treated aggressively, e.g. by diuretics and a low sodium diet. In both hypertensive and normotensive cases, digoxin and antiarrhythmics are also indicated. Also, chronic anticoagulation is given where there is concomitant mitral valve prolapse or atrial fibrillation.
Surgery.
Surgery is curative of mitral valve regurgitation. There are two surgical options for the treatment of MR: mitral valve replacement and mitral valve repair. Mitral valve repair is preferred to mitral valve replacement where a repair is feasible as bioprosthetic replacement valves have a limited lifespan of 10 to 15 years, whereas synthetic replacement valves require ongoing use of blood thinners to reduce the risk of stroke. There are two general categories of approaches to mitral valve repair: resection of the prolapsed valvular segment (sometimes referred to as the "Carpentier" approach) and installation of artificial chordae to "anchor" the prolapsed segment to the papillary muscle (sometimes referred to as the "David" approach). With the resection approach, any prolapsing tissue is resected, in effect removing the hole through which the blood is leaking. In the artificial chordae approach, ePTFE (expanded polytetrafluoroethylene, or Gore-Tex) sutures are used to replace the broken or stretched chordae tendineae, bringing the natural tissue back into the physiological position, thus restoring the natural anatomy of the valve. With both techniques, an annuloplasty ring is typically secured to the annulus, or opening of the mitral valve, to provide additional structural support. In some cases, in the "double orifice" (or 'Alfieri') technique for mitral valve repair, the opening of the mitral valve is sewn closed in the middle, leaving the two ends still able to open. This ensures that the mitral valve closes when the left ventricle pumps blood, yet allows the mitral valve to open at the two ends to fill the left ventricle with blood before it pumps. In general, mitral valve surgery requires "open-heart" surgery in which the heart is arrested and the patient is placed on a heart-lung machine (cardiopulmonary bypass). This allows the complex surgery to proceed in a still environment.
Due to the physiological stress associated with open-heart surgery, elderly and very sick patients may be subject to increased risk, and may not be candidates for this type of surgery. As a consequence, there are attempts to identify means of correcting MR on a beating heart. The Alfieri technique for instance, has been replicated using a percutaneous catheter technique, which installs a "MitraClip" device to hold the middle of the mitral valve closed.
Indications for surgery for chronic MR include signs of left ventricular dysfunction with ejection fraction less than 60%, severe pulmonary hypertension with pulmonary artery systolic pressure greater than 50 mmHg at rest or 60 mmHg during activity, and new-onset atrial fibrillation.
Epidemiology.
Significant mitral valve regurgitation has a prevalence of approximately 2% of the population, affecting males and females equally. It is one of the two most common valvular heart diseases in the elderly, and the commonest type of valvular heart disease in low and middle income countries.
In a study of 595 male elite football players aged 18–38 and 47 sedentary non-athletes, mitral regurgitation was found in 20% of the football players and 15% of the control group. Football players with mitral regurgitation were found to have a larger mitral annulus diameter compared to athletes without regurgitation, and the left atrium diameter was larger in athletes with MR.
See also.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{V_{mitral} - V_{aortic}} {V_{mitral}} \\times 100\\%"
}
] | https://en.wikipedia.org/wiki?curid=869496 |
869590 | Canonical quantization | Process of converting a classical physical theory into one compatible with quantum mechanics
In physics, canonical quantization is a procedure for quantizing a classical theory, while attempting to preserve the formal structure, such as symmetries, of the classical theory to the greatest extent possible.
Historically, this was not quite Werner Heisenberg's route to obtaining quantum mechanics, but Paul Dirac introduced it in his 1926 doctoral thesis, the "method of classical analogy" for quantization, and detailed it in his classic text "Principles of Quantum Mechanics". The word "canonical" arises from the Hamiltonian approach to classical mechanics, in which a system's dynamics is generated via canonical Poisson brackets, a structure which is "only partially preserved" in canonical quantization.
This method was further used by Paul Dirac in the context of quantum field theory, in his construction of quantum electrodynamics. In the field theory context, it is also called the second quantization of fields, in contrast to the semi-classical first quantization of single particles.
History.
When it was first developed, quantum physics dealt only with the quantization of the motion of particles, leaving the electromagnetic field classical, hence the name quantum mechanics.
Later the electromagnetic field was also quantized, and even the particles themselves became represented through quantized fields, resulting in the development of quantum electrodynamics (QED) and quantum field theory in general. Thus, by convention, the original form of particle quantum mechanics is denoted first quantization, while quantum field theory is formulated in the language of second quantization.
First quantization.
Single particle systems.
The following exposition is based on Dirac's treatise on quantum mechanics.
In the classical mechanics of a particle, there are dynamic variables which are called coordinates (x) and momenta (p). These specify the "state" of a classical system. The canonical structure (also known as the symplectic structure) of classical mechanics consists of Poisson brackets enclosing these variables, such as {"x", "p"} = 1. All transformations of variables which preserve these brackets are allowed as canonical transformations in classical mechanics. Motion itself is such a canonical transformation.
By contrast, in quantum mechanics, all significant features of a particle are contained in a state formula_0, called a quantum state. Observables are represented by operators acting on a Hilbert space of such quantum states.
The eigenvalue of an operator acting on one of its eigenstates represents the value of a measurement on the particle thus represented. For example, the energy is read off by the Hamiltonian operator formula_1 acting on a state formula_2, yielding
formula_3
where "En" is the characteristic energy associated to this formula_2 eigenstate.
Any state could be represented as a linear combination of eigenstates of energy; for example,
formula_4where "an" are constant coefficients.
As in classical mechanics, all dynamical operators can be represented by functions of the position and momentum ones, formula_5 and formula_6, respectively. The connection between this representation and the more usual wavefunction representation is given by the eigenstate of the position operator formula_5 representing a particle at position formula_7, which is denoted by an element formula_8 in the Hilbert space, and which satisfies formula_9. Then, formula_10.
Likewise, the eigenstates formula_11 of the momentum operator formula_6 specify the momentum representation: formula_12.
The central relation between these operators is a quantum analog of the above Poisson bracket of classical mechanics, the canonical commutation relation,
formula_13
This relation encodes (and formally leads to) the uncertainty principle, in the form Δ"x" Δ"p" ≥ "ħ"/2. This algebraic structure may be thus considered as the quantum analog of the "canonical structure" of classical mechanics.
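A finite-dimensional numerical illustration may help: truncated harmonic-oscillator ladder operators (with "ħ" = "m" = "ω" = 1, an assumption of this sketch) give matrices whose commutator reproduces "iħ" times the identity everywhere except the last diagonal entry, a truncation artifact, since the exact relation cannot hold for finite matrices.

```python
# Finite-dimensional illustration of the canonical commutation relation.
# Position and momentum are built from truncated ladder operators with
# hbar = m = omega = 1 (an assumption of this sketch); the relation cannot hold
# exactly on finite matrices, so the bottom-right entry deviates.
import numpy as np

N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator (truncated)
x = (a + a.T.conj()) / np.sqrt(2)               # position operator
p = (a - a.T.conj()) / (1j * np.sqrt(2))        # momentum operator

comm = x @ p - p @ x
print(np.round(comm, 10))
# = i * identity, except the (N-1, N-1) corner entry, which reflects the truncation.
```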
Many-particle systems.
When turning to N-particle systems, i.e., systems containing N identical particles (particles characterized by the same quantum numbers such as mass, charge and spin), it is necessary to extend the single-particle state function formula_14 to the N-particle state function formula_15. A fundamental difference between classical and quantum mechanics concerns the concept of indistinguishability of identical particles. Only two species of particles are thus possible in quantum physics, the so-called bosons and fermions which obey the following rules for each kind of particle:
formula_16 for bosons and formula_17 for fermions, where we have interchanged two coordinates formula_18 of the state function. The usual wave function is obtained using the Slater determinant and the identical particles theory. Using this basis, it is possible to solve various many-particle problems.
Issues and limitations.
Classical and quantum brackets.
Dirac's book details his popular rule of supplanting Poisson brackets by commutators:
formula_19
One might interpret this proposal as saying that we should seek a "quantization map" formula_20 mapping a function formula_21 on the classical phase space to an operator formula_22 on the quantum Hilbert space such that
formula_23
It is now known that there is no reasonable such quantization map satisfying the above identity exactly for all functions formula_21 and formula_24.
Groenewold's theorem.
One concrete version of the above impossibility claim is Groenewold's theorem (after Dutch theoretical physicist Hilbrand J. Groenewold), which we describe for a system with one degree of freedom for simplicity. Let us accept the following "ground rules" for the map formula_20. First, formula_20 should send the constant function 1 to the identity operator. Second, formula_20 should take formula_7 and formula_25 to the usual position and momentum operators formula_26 and formula_27. Third, formula_20 should take a polynomial in formula_7 and formula_25 to a "polynomial" in formula_26 and formula_27, that is, a finite linear combination of products of formula_26 and formula_27, which may be taken in any desired order. In its simplest form, Groenewold's theorem says that there is no map satisfying the above ground rules and also the bracket condition
formula_28
for all polynomials formula_21 and formula_24.
Actually, the nonexistence of such a map occurs already by the time we reach polynomials of degree four. Note that the Poisson bracket of two polynomials of degree four has degree six, so it does not exactly make sense to require a map on polynomials of degree four to respect the bracket condition. We "can", however, require that the bracket condition holds when formula_21 and formula_24 have degree three. Groenewold's theorem can be stated as follows:
<templatestyles src="Math_theorem/styles.css" />
Theorem — There is no quantization map formula_20 (following the above ground rules) on polynomials of degree less than or equal to four that satisfies
formula_29
whenever formula_21 and formula_24 have degree less than or equal to three. (Note that in this case, formula_30 has degree less than or equal to four.)
The proof can be outlined as follows. Suppose we first try to find a quantization map on polynomials of degree less than or equal to three satisfying the bracket condition whenever formula_21 has degree less than or equal to two and formula_24 has degree less than or equal to two. Then there is precisely one such map, and it is the Weyl quantization. The impossibility result now is obtained by writing the same polynomial of degree four as a Poisson bracket of polynomials of degree three "in two different ways". Specifically, we have
formula_31
On the other hand, we have already seen that if there is going to be a quantization map on polynomials of degree three, it must be the Weyl quantization; that is, we have already determined the only possible quantization of all the cubic polynomials above.
The argument is finished by computing by brute force that
formula_32
does not coincide with
formula_33
Thus, we have two incompatible requirements for the value of formula_34.
Axioms for quantization.
If Q represents the quantization map that acts on functions f in classical phase space, then the following properties are usually considered desirable:
However, not only are these four properties mutually inconsistent, "any three" of them are also inconsistent! As it turns out, the only pairs of these properties that lead to self-consistent, nontrivial solutions are 2 & 3, and possibly 1 & 3 or 1 & 4. Accepting properties 1 & 2, along with a weaker condition that 3 be true only asymptotically in the limit "ħ"→0 (see Moyal bracket), leads to deformation quantization, and some extraneous information must be provided, as in the standard theories utilized in most of physics. Accepting properties 1 & 2 & 3 but restricting the space of quantizable observables to exclude terms such as the cubic ones in the above example amounts to geometric quantization.
Second quantization: field theory.
Quantum mechanics was successful at describing non-relativistic systems with fixed numbers of particles, but a new framework was needed to describe systems in which particles can be created or destroyed, for example, the electromagnetic field, considered as a collection of photons. It was eventually realized that special relativity was inconsistent with single-particle quantum mechanics, so that all particles are now described relativistically by quantum fields.
When the canonical quantization procedure is applied to a field, such as the electromagnetic field, the classical field variables become "quantum operators". Thus, the normal modes comprising the amplitude of the field are simple oscillators, each of which is quantized in standard first quantization, above, without ambiguity. The resulting quanta are identified with individual particles or excitations. For example, the quanta of the electromagnetic field are identified with photons. Unlike first quantization, conventional second quantization is completely unambiguous, in effect a functor, since the constituent set of its oscillators is quantized unambiguously.
Historically, quantizing the classical theory of a single particle gave rise to a wavefunction. The classical equations of motion of a field are typically identical in form to the (quantum) equations for the wave-function of "one of its quanta". For example, the Klein–Gordon equation is the classical equation of motion for a free scalar field, but also the quantum equation for a scalar particle wave-function. This meant that quantizing a field "appeared" to be similar to quantizing a theory that was already quantized, leading to the fanciful term second quantization in the early literature, which is still used to describe field quantization, even though the modern interpretation detailed is different.
One drawback to canonical quantization for a relativistic field is that by relying on the Hamiltonian to determine time dependence, relativistic invariance is no longer manifest. Thus it is necessary to check that relativistic invariance is not lost. Alternatively, the Feynman integral approach is available for quantizing relativistic fields, and is manifestly invariant. For non-relativistic field theories, such as those used in condensed matter physics, Lorentz invariance is not an issue.
Field operators.
Quantum mechanically, the variables of a field (such as the field's amplitude at a given point) are represented by operators on a Hilbert space. In general, all observables are constructed as operators on the Hilbert space, and the time-evolution of the operators is governed by the Hamiltonian, which must be a positive operator. A state formula_40 annihilated by the Hamiltonian must be identified as the vacuum state, which is the basis for building all other states. In a non-interacting (free) field theory, the vacuum is normally identified as a state containing zero particles. In a theory with interacting particles, identifying the vacuum is more subtle, due to vacuum polarization, which implies that the physical vacuum in quantum field theory is never really empty. For further elaboration, see the articles on the quantum mechanical vacuum and the vacuum of quantum chromodynamics. The details of the canonical quantization depend on the field being quantized, and whether it is free or interacting.
Real scalar field.
A scalar field theory provides a good example of the canonical quantization procedure. Classically, a scalar field is a collection of an infinity of oscillator normal modes. It suffices to consider a 1+1-dimensional space-time formula_41 in which the spatial direction is compactified to a circle of circumference 2π, rendering the momenta discrete.
The classical Lagrangian density describes an infinity of coupled oscillators, labelled by x, which is now a "label" (and not the displacement dynamical variable to be quantized), denoted by the classical field φ,
formula_42
where "V"("φ") is a potential term, often taken to be a polynomial or monomial of degree 3 or higher. The action functional is
formula_43
The canonical momentum, obtained via the Legendre transformation using the Lagrangian L, is formula_44, and the classical Hamiltonian is found to be
formula_45
Canonical quantization treats the variables φ and π as operators with canonical commutation relations at time t= 0, given by
formula_46
Operators constructed from φ and π can then formally be defined at other times via the time-evolution generated by the Hamiltonian,
formula_47
However, since φ and π no longer commute, this expression is ambiguous at the quantum level. The problem is to construct a representation of the relevant operators formula_48 on a Hilbert space formula_49 and to construct a positive operator H as a quantum operator on this Hilbert space in such a way that it gives this evolution for the operators formula_48 as given by the preceding equation, and to show that formula_49 contains a vacuum state formula_40 on which H has zero eigenvalue. In practice, this construction is a difficult problem for interacting field theories, and has been solved completely only in a few simple cases via the methods of constructive quantum field theory. Many of these issues can be sidestepped using the Feynman integral as described for a particular "V"("φ") in the article on scalar field theory.
In the case of a free field, with "V"("φ") = 0, the quantization procedure is relatively straightforward. It is convenient to Fourier transform the fields, so that
formula_50
The reality of the fields implies that
formula_51
The classical Hamiltonian may be expanded in Fourier modes as
formula_52
where formula_53.
This Hamiltonian is thus recognizable as an infinite sum of classical normal mode oscillator excitations "φk", each one of which is quantized in the standard manner, so the free quantum Hamiltonian looks identical. It is the "φk"s that have become operators obeying the standard commutation relations, ["φk", "πk"†] = ["φk"†, "πk"] = "iħ", with all others vanishing. The collective Hilbert space of all these oscillators is thus constructed using creation and annihilation operators constructed from these modes,
formula_54
for which ["ak", "ak"†] = 1 for all k, with all other commutators vanishing.
The vacuum formula_40 is taken to be annihilated by all of the "ak", and formula_49 is the Hilbert space constructed by applying any combination of the infinite collection of creation operators "ak"† to formula_40. This Hilbert space is called Fock space. For each k, this construction is identical to a quantum harmonic oscillator. The quantum field is an infinite array of quantum oscillators. The quantum Hamiltonian then amounts to
formula_55
where "Nk" may be interpreted as the "number operator" giving the number of particles in a state with momentum k.
This Hamiltonian differs from the previous expression by the subtraction of the zero-point energy "ħωk"/2 of each harmonic oscillator. This satisfies the condition that H must annihilate the vacuum, without affecting the time-evolution of operators via the above exponentiation operation. This subtraction of the zero-point energy may be considered to be a resolution of the quantum operator ordering ambiguity, since it is equivalent to requiring that "all creation operators appear to the left of annihilation operators" in the expansion of the Hamiltonian. This procedure is known as Wick ordering or normal ordering.
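The single-mode content of this construction can be illustrated with finite matrices. The sketch below (assuming NumPy; the truncation level and the unit values of ħ and ω are arbitrary illustrative choices) builds truncated ladder operators, checks their commutator, and compares the normal-ordered Hamiltonian with the symmetric form that retains the zero-point energy.

```python
# Single-mode illustration: truncated ladder operators, their commutator, and the
# effect of normal ordering on the Hamiltonian.  NumPy sketch; the truncation level
# and the unit values of hbar and omega are arbitrary illustrative choices.
import numpy as np

n_max = 6                                   # number of retained Fock states
hbar, omega = 1.0, 1.0

# annihilation operator: a|n> = sqrt(n)|n-1> on the basis |0>, ..., |n_max-1>
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
a_dag = a.conj().T

# [a, a_dag] = 1 except in the highest retained state (a truncation artifact).
print(np.round(np.diag(a @ a_dag - a_dag @ a), 3))

# Normal-ordered H = hbar*omega*a_dag*a versus the symmetric form, which keeps
# the zero-point energy hbar*omega/2 on the diagonal.
H_normal = hbar * omega * (a_dag @ a)
H_sym = 0.5 * hbar * omega * (a_dag @ a + a @ a_dag)
print(np.round(np.diag(H_normal), 3))       # 0, 1, 2, ...
print(np.round(np.diag(H_sym), 3))          # 0.5, 1.5, 2.5, ... (last entry truncated)
```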
Other fields.
All other fields can be quantized by a generalization of this procedure. Vector or tensor fields simply have more components, and independent creation and destruction operators must be introduced for each independent component. If a field has any internal symmetry, then creation and destruction operators must be introduced for each component of the field related to this symmetry as well. If there is a gauge symmetry, then the number of independent components of the field must be carefully analyzed to avoid over-counting equivalent configurations, and gauge-fixing may be applied if needed.
It turns out that commutation relations are useful only for quantizing "bosons", for which the occupancy number of any state is unlimited. To quantize "fermions", which satisfy the Pauli exclusion principle, anti-commutators are needed. These are defined by {"A", "B"} = "AB" + "BA".
When quantizing fermions, the fields are expanded in creation and annihilation operators, "θk"†, "θk", which satisfy
formula_56
The states are constructed on a vacuum formula_40 annihilated by the "θk", and the Fock space is built by applying all products of creation operators "θk"† to |0⟩. Pauli's exclusion principle is satisfied, because formula_57, by virtue of the anti-commutation relations.
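The anticommutation relations and the exclusion principle can be illustrated for a single mode with two-by-two matrices (a minimal NumPy sketch; the basis ordering is an illustrative choice).

```python
# Two-level (single fermionic mode) illustration: {theta, theta_dag} = 1 and
# (theta_dag)^2 = 0, so at most one quantum can occupy the mode (Pauli exclusion).
import numpy as np

theta = np.array([[0.0, 1.0],
                  [0.0, 0.0]])     # annihilation operator on the basis {|0>, |1>}
theta_dag = theta.T                # creation operator

print(theta @ theta_dag + theta_dag @ theta)   # identity matrix: {theta, theta_dag} = 1
print(theta_dag @ theta_dag)                   # zero matrix: cannot create two quanta
```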
Condensates.
The construction of the scalar field states above assumed that the potential was minimized at φ = 0, so that the vacuum minimizing the Hamiltonian satisfies ⟨"φ"⟩ = 0, indicating that the vacuum expectation value (VEV) of the field is zero. In cases involving spontaneous symmetry breaking, it is possible to have a non-zero VEV, because the potential is minimized for a value φ = v. This occurs for example, if "V"("φ") = "gφ"4 − 2"m"2"φ"2 with "g" > 0 and "m"2 > 0, for which the minimum energy is found at "v" = ±"m"/√"g". The value of v in one of these vacua may be considered as "condensate" of the field φ. Canonical quantization then can be carried out for the "shifted field" "φ"("x","t") − "v", and particle states with respect to the shifted vacuum are defined by quantizing the shifted field. This construction is utilized in the Higgs mechanism in the standard model of particle physics.
Mathematical quantization.
Deformation quantization.
The classical theory is described using a spacelike foliation of spacetime with the state at each slice being described by an element of a symplectic manifold with the time evolution given by the symplectomorphism generated by a Hamiltonian function over the symplectic manifold. The "quantum algebra" of "operators" is an ħ-deformation of the algebra of smooth functions over the symplectic space such that the leading term in the Taylor expansion over ħ of the commutator ["A", "B"] expressed in the phase space formulation is "iħ"{"A", "B"}. (Here, the curly braces denote the Poisson bracket. The subleading terms are all encoded in the Moyal bracket, the suitable quantum deformation of the Poisson bracket.) In general, for the quantities (observables) involved, and providing the arguments of such brackets, "ħ"-deformations are highly nonunique—quantization is an "art", and is specified by the physical context.
Now, one looks for unitary representations of this quantum algebra. With respect to such a unitary representation, a symplectomorphism in the classical theory would now deform to a (metaplectic) unitary transformation. In particular, the time evolution symplectomorphism generated by the classical Hamiltonian deforms to a unitary transformation generated by the corresponding quantum Hamiltonian.
A further generalization is to consider a Poisson manifold instead of a symplectic space for the classical theory and perform an "ħ"-deformation of the corresponding Poisson algebra or even Poisson supermanifolds.
Geometric quantization.
In contrast to the theory of deformation quantization described above, geometric quantization seeks to construct an actual Hilbert space and operators on it. Starting with a symplectic manifold formula_58, one first constructs a prequantum Hilbert space consisting of the space of square-integrable sections of an appropriate line bundle over formula_58. On this space, one can map "all" classical observables to operators on the prequantum Hilbert space, with the commutator corresponding exactly to the Poisson bracket. The prequantum Hilbert space, however, is clearly too big to describe the quantization of formula_58.
One then proceeds by choosing a polarization, that is (roughly), a choice of formula_59 variables on the formula_60-dimensional phase space. The "quantum" Hilbert space is then the space of sections that depend only on the formula_59 chosen variables, in the sense that they are covariantly constant in the other formula_59 directions. If the chosen variables are real, we get something like the traditional Schrödinger Hilbert space. If the chosen variables are complex, we get something like the Segal–Bargmann space. | [
{
"math_id": 0,
"text": "|\\psi\\rangle"
},
{
"math_id": 1,
"text": "\\hat{H}"
},
{
"math_id": 2,
"text": "|\\psi_n\\rangle"
},
{
"math_id": 3,
"text": "\\hat{H}|\\psi_n\\rangle=E_n|\\psi_n\\rangle,"
},
{
"math_id": 4,
"text": "|\\psi\\rangle=\\sum_{n=0}^{\\infty} a_n |\\psi_n\\rangle ,"
},
{
"math_id": 5,
"text": "\\hat{X}"
},
{
"math_id": 6,
"text": "\\hat{P}"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "|x\\rangle"
},
{
"math_id": 9,
"text": "\\hat{X}|x\\rangle = x|x\\rangle"
},
{
"math_id": 10,
"text": "\\psi(x)= \\langle x|\\psi\\rangle"
},
{
"math_id": 11,
"text": "|p\\rangle"
},
{
"math_id": 12,
"text": "\\psi(p)= \\langle p|\\psi\\rangle"
},
{
"math_id": 13,
"text": "[\\hat{X},\\hat{P}] = \\hat{X}\\hat{P}-\\hat{P}\\hat{X} = i\\hbar."
},
{
"math_id": 14,
"text": "\\psi(\\mathbf{r})"
},
{
"math_id": 15,
"text": "\\psi(\\mathbf{r}_1,\\mathbf{r}_2,\\dots,\\mathbf{r}_N)"
},
{
"math_id": 16,
"text": "\\psi(\\mathbf{r}_1,\\dots,\\mathbf{r}_j,\\dots,\\mathbf{r}_k,\\dots,\\mathbf{r}_N)=+\\psi(\\mathbf{r}_1,\\dots,\\mathbf{r}_k,\\dots,\\mathbf{r}_j,\\dots,\\mathbf{r}_N),"
},
{
"math_id": 17,
"text": "\\psi(\\mathbf{r}_1,\\dots,\\mathbf{r}_j,\\dots,\\mathbf{r}_k,\\dots,\\mathbf{r}_N)=-\\psi(\\mathbf{r}_1,\\dots,\\mathbf{r}_k,\\dots,\\mathbf{r}_j,\\dots,\\mathbf{r}_N),"
},
{
"math_id": 18,
"text": "(\\mathbf{r}_j, \\mathbf{r}_k)"
},
{
"math_id": 19,
"text": "\\{A,B\\} \\longmapsto \\tfrac{1}{i \\hbar} [\\hat{A},\\hat{B}] ~."
},
{
"math_id": 20,
"text": "Q"
},
{
"math_id": 21,
"text": "f"
},
{
"math_id": 22,
"text": "Q_f"
},
{
"math_id": 23,
"text": "Q_{\\{f,g\\}} = \\frac{1}{i\\hbar}[Q_f,Q_g]"
},
{
"math_id": 24,
"text": "g"
},
{
"math_id": 25,
"text": "p"
},
{
"math_id": 26,
"text": "X"
},
{
"math_id": 27,
"text": "P"
},
{
"math_id": 28,
"text": "Q_{\\{f,g\\}} = \\frac{1}{i\\hbar} [Q_f,Q_g]"
},
{
"math_id": 29,
"text": " Q_{ \\{f, g\\} } = \\frac{1}{i\\hbar}[Q_f,Q_g]"
},
{
"math_id": 30,
"text": "\\{f,g\\}"
},
{
"math_id": 31,
"text": "x^2 p^2 = \\frac{1}{9} \\{x^3,p^3\\} = \\frac{1}{3} \\{x^2p,xp^2\\}"
},
{
"math_id": 32,
"text": "\\frac{1}{9}[Q(x^3),Q(p^3)]"
},
{
"math_id": 33,
"text": "\\frac{1}{3}[Q(x^2p),Q(xp^2)]."
},
{
"math_id": 34,
"text": "Q(x^2p^2)"
},
{
"math_id": 35,
"text": "Q_x \\psi = x \\psi"
},
{
"math_id": 36,
"text": "Q_p \\psi = -i\\hbar \\partial_x \\psi ~~"
},
{
"math_id": 37,
"text": "f \\longmapsto Q_f ~~"
},
{
"math_id": 38,
"text": "[Q_f,Q_g]=i\\hbar Q_{\\{f,g\\}}~~"
},
{
"math_id": 39,
"text": "Q_{g \\circ f}=g(Q_f)~~"
},
{
"math_id": 40,
"text": "|0\\rangle"
},
{
"math_id": 41,
"text": "\\mathbb{R} \\times S_1,"
},
{
"math_id": 42,
"text": "\\mathcal{L}(\\phi) = \\tfrac{1}{2}(\\partial_t \\phi)^2 - \\tfrac{1}{2}(\\partial_x \\phi)^2 - \\tfrac{1}{2} m^2\\phi^2 - V(\\phi),"
},
{
"math_id": 43,
"text": "S(\\phi) = \\int \\mathcal{L}(\\phi) dx dt = \\int L(\\phi, \\partial_t\\phi) dt \\, ."
},
{
"math_id": 44,
"text": "\\pi = \\partial_t\\phi"
},
{
"math_id": 45,
"text": "H(\\phi,\\pi) = \\int dx \\left[\\tfrac{1}{2} \\pi^2 + \\tfrac{1}{2} (\\partial_x \\phi)^2 + \\tfrac{1}{2} m^2 \\phi^2 + V(\\phi)\\right]."
},
{
"math_id": 46,
"text": "[\\phi(x),\\phi(y)] = 0, \\ \\ [\\pi(x), \\pi(y)] = 0, \\ \\ [\\phi(x),\\pi(y)] = i\\hbar \\delta(x-y)."
},
{
"math_id": 47,
"text": " \\mathcal{O}(t) = e^{itH} \\mathcal{O} e^{-itH}."
},
{
"math_id": 48,
"text": "\\mathcal{O}"
},
{
"math_id": 49,
"text": "\\mathcal{H}"
},
{
"math_id": 50,
"text": " \\phi_k = \\int \\phi(x) e^{-ikx} dx, \\ \\ \\pi_k = \\int \\pi(x) e^{-ikx} dx. "
},
{
"math_id": 51,
"text": "\\phi_{-k} = \\phi_k^\\dagger, ~~~ \\pi_{-k} = \\pi_k^\\dagger ."
},
{
"math_id": 52,
"text": " H=\\frac{1}{2}\\sum_{k=-\\infty}^{\\infty}\\left[\\pi_k \\pi_k^\\dagger + \\omega_k^2\\phi_k\\phi_k^\\dagger\\right],"
},
{
"math_id": 53,
"text": "\\omega_k = \\sqrt{k^2+m^2}"
},
{
"math_id": 54,
"text": " a_k = \\frac{1}{\\sqrt{2\\hbar\\omega_k}}\\left(\\omega_k\\phi_k + i\\pi_k\\right), \\ \\ a_k^\\dagger = \\frac{1}{\\sqrt{2\\hbar\\omega_k}}\\left(\\omega_k\\phi_k^\\dagger - i\\pi_k^\\dagger\\right), "
},
{
"math_id": 55,
"text": " H = \\sum_{k=-\\infty}^{\\infty} \\hbar\\omega_k a_k^\\dagger a_k = \\sum_{k=-\\infty}^{\\infty} \\hbar\\omega_k N_k ,"
},
{
"math_id": 56,
"text": "\\{\\theta_k,\\theta_l^\\dagger\\} = \\delta_{kl}, \\ \\ \\{\\theta_k, \\theta_l\\} = 0, \\ \\ \\{\\theta_k^\\dagger, \\theta_l^\\dagger\\} = 0. "
},
{
"math_id": 57,
"text": "(\\theta_k^\\dagger)^2|0\\rangle = 0"
},
{
"math_id": 58,
"text": "M"
},
{
"math_id": 59,
"text": "n"
},
{
"math_id": 60,
"text": "2n"
}
] | https://en.wikipedia.org/wiki?curid=869590 |
8696119 | Ultraviolet photoelectron spectroscopy | Measurement of kinetic energy spectra
Ultraviolet photoelectron spectroscopy (UPS) refers to the measurement of kinetic energy spectra of photoelectrons emitted by molecules that have absorbed ultraviolet photons, in order to determine molecular orbital energies in the valence region.
Basic theory.
If Albert Einstein's photoelectric law is applied to a free molecule, the kinetic energy (formula_0) of an emitted photoelectron is given by
formula_1
where "h" is the Planck constant, "ν" is the frequency of the ionizing light, and "I" is an ionization energy for the formation of a singly charged ion in either the ground state or an excited state. According to Koopmans' theorem, each such ionization energy may be identified with the energy of an occupied molecular orbital. The ground-state ion is formed by removal of an electron from the highest occupied molecular orbital, while excited ions are formed by removal of an electron from a lower occupied orbital.
History.
Before 1960, virtually all measurements of photoelectron kinetic energies were for electrons emitted from metals and other solid surfaces. In about 1956, Kai Siegbahn developed X-ray photoelectron spectroscopy (XPS) for surface chemical analysis. This method uses x-ray sources to study energy levels of atomic core electrons, and at the time had an energy resolution of about 1 eV (electronvolt).
Ultraviolet photoelectron spectroscopy (UPS) was pioneered in 1961 by Feodor I. Vilesov, a physicist at St. Petersburg (Leningrad) State University in Russia (USSR), to study the photoelectron spectra of free molecules in the gas phase. The early experiments used monochromatized radiation from a hydrogen discharge and a retarding potential analyzer to measure the photoelectron energies.
The method was further developed by David W. Turner, a physical chemist at Imperial College in London and then at Oxford University, in a series of publications from 1962 to 1967. As a photon source, he used a helium discharge lamp that emits at a wavelength of 58.4 nm (corresponding to an energy of 21.2 eV) in the vacuum ultraviolet region. With this source, Turner's group obtained an energy resolution of 0.02 eV. Turner referred to the method as "molecular photoelectron spectroscopy", now usually called "ultraviolet photoelectron spectroscopy" or UPS. As compared to XPS, UPS is limited to energy levels of valence electrons, but measures them more accurately. After 1967, commercial UPS spectrometers became available; one of the latest commercial devices was the Perkin Elmer PS18. For the last twenty years, systems have instead been built in research laboratories, one of the latest being Phoenix II, under development at the IPREM laboratory in Pau by Jean-Marc Sotiropoulos.
Application.
UPS measures experimental molecular orbital energies for comparison with theoretical values from quantum chemistry, which was also extensively developed in the 1960s. The photoelectron spectrum of a molecule contains a series of peaks, each corresponding to one valence-region molecular orbital energy level. The high resolution also allows the observation of fine structure due to vibrational levels of the molecular ion, which facilitates the assignment of peaks to bonding, nonbonding or antibonding molecular orbitals.
The method was later extended to the study of solid surfaces where it is usually described as photoemission spectroscopy (PES). It is particularly sensitive to the surface region (to 10 nm depth), due to the short range of the emitted photoelectrons (compared to X-rays). It is therefore used to study adsorbed species and their binding to the surface, as well as their orientation on the surface.
A useful result from characterization of solids by UPS is the determination of the work function of the material. An example of this determination is given by Park et al. Briefly, the full width of the photoelectron spectrum (from the highest kinetic energy/lowest binding energy point to the low kinetic energy cutoff) is measured and subtracted from the photon energy of the exciting radiation, and the difference is the work function. Often, the sample is electrically biased negative to separate the low energy cutoff from the spectrometer response.
Outlook.
UPS has seen a considerable revival with the increasing availability of synchrotron light sources that provide a wide range of monochromatic photon energies.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " E_\\text{k}"
},
{
"math_id": 1,
"text": " E_\\text{k} = h\\nu - I\\,,"
}
] | https://en.wikipedia.org/wiki?curid=8696119 |
869825 | Optical flow | Pattern of motion in a visual scene due to relative motion of the observer
Optical flow or optic flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene. Optical flow can also be defined as the distribution of apparent velocities of movement of brightness pattern in an image.
The concept of optical flow was introduced by the American psychologist James J. Gibson in the 1940s to describe the visual stimulus provided to animals moving through the world. Gibson stressed the importance of optic flow for affordance perception, the ability to discern possibilities for action within the environment. Followers of Gibson and his ecological approach to psychology have further demonstrated the role of the optical flow stimulus for the perception of movement by the observer in the world; perception of the shape, distance and movement of objects in the world; and the control of locomotion.
The term optical flow is also used by roboticists, encompassing related techniques from image processing and control of navigation including motion detection, object segmentation, time-to-contact information, focus of expansion calculations, luminance, motion compensated encoding, and stereo disparity measurement.
Estimation.
Sequences of ordered images allow the estimation of motion as either instantaneous image velocities or discrete image displacements. Fleet and Weiss provide a tutorial introduction to gradient based optical flow.
John L. Barron, David J. Fleet, and Steven Beauchemin provide a performance analysis of a number of optical flow techniques. It emphasizes the accuracy and density of measurements.
The optical flow methods try to calculate the motion between two image frames which are taken at times formula_0 and formula_1 at every voxel position. These methods are called differential since they are based on local Taylor series approximations of the image signal; that is, they use partial derivatives with respect to the spatial and temporal coordinates.
For a (2D + "t")-dimensional case (3D or "n"-D cases are similar) a voxel at location formula_2 with intensity formula_3 will have moved by formula_4, formula_5 and formula_6 between the two image frames, and the following "brightness constancy constraint" can be given:
formula_7
Assuming the movement to be small, the image constraint at formula_3 can be expanded in a Taylor series to give:
formula_8 higher-order terms.
By truncating the higher order terms (which performs a linearization) it follows that:
formula_9
or, dividing by formula_6,
formula_10
which results in
formula_11
where formula_12 are the formula_13 and formula_14 components of the velocity or optical flow of formula_3, and formula_15, formula_16 and formula_17 are the derivatives of the image at formula_2 in the corresponding directions. In the following, these derivatives are written as formula_18, formula_19 and formula_20.
Thus:
formula_21
or
formula_22
This is an equation in two unknowns and cannot be solved as such. This is known as the "aperture problem" of the optical flow algorithms. To find the optical flow another set of equations is needed, given by some additional constraint. All optical flow methods introduce additional conditions for estimating the actual flow.
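One widely used additional constraint is to assume that the flow is constant over a small window and to solve the resulting over-determined system in the least-squares sense, as in the Lucas–Kanade method. The following is a minimal sketch assuming NumPy; the synthetic test pattern, image size and window are arbitrary illustrative choices, and the two frames are related by a known sub-pixel shift so that the result can be checked.

```python
# Least-squares solution of I_x*V_x + I_y*V_y = -I_t over a local window
# (the constant-flow, Lucas-Kanade-style constraint).  Two synthetic frames are
# related by a known sub-pixel shift, so the recovered flow should be close to it.
import numpy as np

H, W = 64, 64
y, x = np.mgrid[0:H, 0:W].astype(float)
dx, dy = 0.3, -0.2                                   # true flow (pixels per frame)

def pattern(xs, ys):
    # Two plane waves with different orientations, so the gradients span 2-D
    # and the aperture problem does not make the system degenerate.
    return np.sin(0.2 * xs + 0.3 * ys) + 0.5 * np.cos(0.15 * xs - 0.25 * ys)

frame1 = pattern(x, y)
frame2 = pattern(x - dx, y - dy)                     # pattern translated by (dx, dy)

# Spatial derivatives by central differences, temporal derivative by frame difference.
I_x = 0.5 * (np.roll(frame1, -1, axis=1) - np.roll(frame1, 1, axis=1))
I_y = 0.5 * (np.roll(frame1, -1, axis=0) - np.roll(frame1, 1, axis=0))
I_t = frame2 - frame1

# Assume one flow vector over a central window and solve A v = b by least squares.
win = (slice(16, 48), slice(16, 48))
A = np.stack([I_x[win].ravel(), I_y[win].ravel()], axis=1)
b = -I_t[win].ravel()
v, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated flow:", np.round(v, 2), "true flow:", (dx, dy))
```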
Methods for determination.
Many methods for determining optical flow, in addition to the current state-of-the-art algorithms, are evaluated on the Middlebury Benchmark Dataset. Other popular benchmark datasets are KITTI and Sintel.
Uses.
Motion estimation and video compression have developed as a major aspect of optical flow research. While the optical flow field is superficially similar to a dense motion field derived from the techniques of motion estimation, optical flow is the study of not only the determination of the optical flow field itself, but also of its use in estimating the three-dimensional nature and structure of the scene, as well as the 3D motion of objects and the observer relative to the scene, most of them using the image Jacobian.
Optical flow was used by robotics researchers in many areas such as: object detection and tracking, image dominant plane extraction, movement detection, robot navigation and visual odometry. Optical flow information has been recognized as being useful for controlling micro air vehicles.
The application of optical flow includes the problem of inferring not only the motion of the observer and objects in the scene, but also the structure of objects and the environment. Since awareness of motion and the generation of mental maps of the structure of our environment are critical components of animal (and human) vision, the conversion of this innate ability to a computer capability is similarly crucial in the field of machine vision.
Consider a five-frame clip of a ball moving from the bottom left of a field of vision, to the top right. Motion estimation techniques can determine that on a two-dimensional plane the ball is moving up and to the right, and vectors describing this motion can be extracted from the sequence of frames. For the purposes of video compression (e.g., MPEG), the sequence is now described as well as it needs to be. However, in the field of machine vision, the question of whether the ball is moving to the right or the observer is moving to the left is unknowable yet critical information. Even if a static, patterned background were present in the five frames, we could not confidently state that the ball was moving to the right, because the pattern might be infinitely far from the observer.
Optical flow sensor.
Various configurations of optical flow sensors exist. One configuration is an image sensor chip connected to a processor programmed to run an optical flow algorithm. Another configuration uses a vision chip, which is an integrated circuit having both the image sensor and the processor on the same die, allowing for a compact implementation. An example of this is a generic optical mouse sensor used in an optical mouse. In some cases the processing circuitry may be implemented using analog or mixed-signal circuits to enable fast optical flow computation using minimal current consumption.
One area of contemporary research is the use of neuromorphic engineering techniques to implement circuits that respond to optical flow, and thus may be appropriate for use in an optical flow sensor. Such circuits may draw inspiration from biological neural circuitry that similarly responds to optical flow.
Optical flow sensors are used extensively in computer optical mice, as the main sensing component for measuring the motion of the mouse across a surface.
Optical flow sensors are also being used in robotics applications, primarily where there is a need to measure visual motion or relative motion between the robot and other objects in the vicinity of the robot. The use of optical flow sensors in unmanned aerial vehicles (UAVs), for stability and obstacle avoidance, is also an area of current research.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "t"
},
{
"math_id": 1,
"text": "t+\\Delta t"
},
{
"math_id": 2,
"text": "(x,y,t)"
},
{
"math_id": 3,
"text": "I(x,y,t)"
},
{
"math_id": 4,
"text": "\\Delta x"
},
{
"math_id": 5,
"text": "\\Delta y"
},
{
"math_id": 6,
"text": "\\Delta t"
},
{
"math_id": 7,
"text": "I(x,y,t) = I(x+\\Delta x, y + \\Delta y, t + \\Delta t)"
},
{
"math_id": 8,
"text": "I(x+\\Delta x,y+\\Delta y,t+\\Delta t) = I(x,y,t) + \\frac{\\partial I}{\\partial x}\\,\\Delta x+\\frac{\\partial I}{\\partial y}\\,\\Delta y+\\frac{\\partial I}{\\partial t} \\, \\Delta t+{}"
},
{
"math_id": 9,
"text": "\\frac{\\partial I}{\\partial x}\\Delta x+\\frac{\\partial I}{\\partial y}\\Delta y+\\frac{\\partial I}{\\partial t}\\Delta t = 0"
},
{
"math_id": 10,
"text": "\\frac{\\partial I}{\\partial x}\\frac{\\Delta x}{\\Delta t} + \\frac{\\partial I}{\\partial y}\\frac{\\Delta y}{\\Delta t} + \\frac{\\partial I}{\\partial t} \\frac{\\Delta t}{\\Delta t} = 0"
},
{
"math_id": 11,
"text": "\\frac{\\partial I}{\\partial x}V_x+\\frac{\\partial I}{\\partial y}V_y+\\frac{\\partial I}{\\partial t} = 0"
},
{
"math_id": 12,
"text": "V_x,V_y"
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "y"
},
{
"math_id": 15,
"text": "\\tfrac{\\partial I}{\\partial x}"
},
{
"math_id": 16,
"text": "\\tfrac{\\partial I}{\\partial y}"
},
{
"math_id": 17,
"text": "\\tfrac{\\partial I}{\\partial t}"
},
{
"math_id": 18,
"text": "I_x"
},
{
"math_id": 19,
"text": " I_y"
},
{
"math_id": 20,
"text": " I_t"
},
{
"math_id": 21,
"text": "I_xV_x+I_yV_y=-I_t"
},
{
"math_id": 22,
"text": "\\nabla I\\cdot\\vec{V} = -I_t"
}
] | https://en.wikipedia.org/wiki?curid=869825 |
87019 | Ductility | Degree to which a material under stress irreversibly deforms before failure
Ductility refers to the ability of a material to sustain significant plastic deformation before fracture. Plastic deformation is the permanent distortion of a material under applied stress, as opposed to elastic deformation, which is reversible upon removing the stress. Ductility is a critical mechanical performance indicator, particularly in applications that require materials to bend, stretch, or deform in other ways without breaking. The extent of ductility can be quantitatively assessed using the percent elongation at break, given by the equation:
formula_0
where formula_1 is the length of the material after fracture and formula_2 is the original length before testing. This formula helps in quantifying how much a material can stretch under tensile stress before failure, providing key insights into its ductile behavior. Ductility is an important consideration in engineering and manufacturing. It defines a material's suitability for certain manufacturing operations (such as cold working) and its capacity to absorb mechanical overload. Some metals that are generally described as ductile include gold and copper, while platinum is the most ductile of all metals in pure form. However, not all metals experience ductile failure as some can be characterized with brittle failure like cast iron. Polymers generally can be viewed as ductile materials as they typically allow for plastic deformation.
Inorganic materials, including a wide variety of ceramics and semiconductors, are generally characterized by their brittleness. This brittleness primarily stems from their strong ionic or covalent bonds, which maintain the atoms in a rigid, densely packed arrangement. Such a rigid lattice structure restricts the movement of atoms or dislocations, essential for plastic deformation. The significant difference in ductility observed between metals and inorganic semiconductor or insulator can be traced back to each material’s inherent characteristics, including the nature of their defects, such as dislocations, and their specific chemical bonding properties. Consequently, unlike ductile metals and some organic materials with ductility (%"EL)" from 1.2% to over 1200%, brittle inorganic semiconductors and ceramic insulators typically show much smaller ductility at room temperature.
Malleability, a similar mechanical property, is characterized by a material's ability to deform plastically without failure under compressive stress. Historically, materials were considered malleable if they were amenable to forming by hammering or rolling. Lead is an example of a material which is relatively malleable but not ductile.
Materials science.
Ductility is especially important in metalworking, as materials that crack, break or shatter under stress cannot be manipulated using metal-forming processes such as hammering, rolling, drawing or extruding. Malleable materials can be formed cold using stamping or pressing, whereas brittle materials may be cast or thermoformed.
High degrees of ductility occur due to metallic bonds, which are found predominantly in metals; this leads to the common perception that metals are ductile in general. In metallic bonds valence shell electrons are delocalized and shared between many atoms. The delocalized electrons allow metal atoms to slide past one another without being subjected to strong repulsive forces that would cause other materials to shatter.
The ductility of steel varies depending on the alloying constituents. Increasing the levels of carbon decreases ductility. Many plastics and amorphous solids, such as Play-Doh, are also malleable. The most ductile metal is platinum and the most malleable metal is gold. When highly stretched, such metals distort via formation, reorientation and migration of dislocations and crystal twins without noticeable hardening.
Quantification.
Basic definitions.
The quantities commonly used to define ductility in a tension test are relative elongation (in percent, sometimes denoted as formula_3) and reduction of area (sometimes denoted as formula_4) at fracture. Fracture strain is the engineering strain at which a test specimen fractures during a uniaxial tensile test. Percent elongation, or engineering strain at fracture, can be written as:
formula_5
Percent reduction in area can be written as:
formula_6
where the area of concern is the cross-sectional area of the gauge of the specimen.
According to Shigley's Mechanical Engineering Design, "significant" denotes about 5.0 percent elongation.
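The two definitions can be applied directly; the sketch below uses hypothetical specimen numbers chosen only to illustrate the arithmetic.

```python
# Percent elongation and percent reduction in area from tensile-test measurements.
# The specimen numbers are hypothetical and serve only to illustrate the formulas.
def percent_elongation(l_0, l_f):
    """%EL = (l_f - l_0) / l_0 * 100, from initial and final gauge lengths."""
    return (l_f - l_0) / l_0 * 100.0

def percent_reduction_in_area(a_0, a_f):
    """%RA = (A_0 - A_f) / A_0 * 100, from initial and final cross-sectional areas."""
    return (a_0 - a_f) / a_0 * 100.0

# Example: a 50 mm gauge length stretching to 61 mm; 78.5 mm^2 necking down to 58.9 mm^2.
print(f"%EL = {percent_elongation(50.0, 61.0):.1f}")         # 22.0
print(f"%RA = {percent_reduction_in_area(78.5, 58.9):.1f}")  # 25.0
```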
Effect of sample dimensions.
An important point concerning the value of the ductility (nominal strain at failure) in a tensile test is that it commonly exhibits a dependence on sample dimensions. However, a universal parameter should exhibit no such dependence (and, indeed, there is no dependence for properties such as stiffness, yield stress and ultimate tensile strength). This occurs because the measured strain (displacement) at fracture commonly incorporates contributions from both the uniform deformation occurring up to the onset of necking and the subsequent deformation of the neck (during which there is little or no deformation in the rest of the sample). The significance of the contribution from neck development depends on the "aspect ratio" (length / diameter) of the gauge length, being greater when the ratio is low. This is a simple geometric effect, which has been clearly identified. There have been both experimental studies and theoretical explorations of the effect, mostly based on Finite Element Method (FEM) modelling. Nevertheless, it is not universally appreciated and, since the range of sample dimensions in common use is quite wide, it can lead to highly significant variations (by factors of up to 2 or 3) in ductility values obtained for the same material in different tests.
A more meaningful representation of ductility would be obtained by identifying the strain at the onset of necking, which should be independent of sample dimensions. This point can be difficult to identify on a (nominal) stress-strain curve, because the peak (representing the onset of necking) is often relatively flat. Moreover, some (brittle) materials fracture before the onset of necking, such that there is no peak. In practice, for many purposes it is preferable to carry out a different kind of test, designed to evaluate the toughness (energy absorbed during fracture), rather than use ductility values obtained in tensile tests.
In an absolute sense, "ductility" values are therefore virtually meaningless. The actual (true) strain in the neck at the point of fracture bears no direct relation to the raw number obtained from the nominal stress-strain curve; the true strain in the neck is often considerably higher. Also, the true stress at the point of fracture is usually higher than the apparent value according to the plot. The load often drops while the neck develops, but the sectional area in the neck is also dropping (more sharply), so the true stress there is rising. There is no simple way of estimating this value, since it depends on the geometry of the neck. While the true strain at fracture is a genuine indicator of "ductility", it cannot readily be obtained from a conventional tensile test.
The Reduction in Area (RA) is defined as the decrease in sectional area at the neck (usually obtained by measurement of the diameter at one or both of the fractured ends), divided by the original sectional area. It is sometimes stated that this is a more reliable indicator of the "ductility" than the elongation at failure (partly in recognition of the fact that the latter is dependent on the aspect ratio of the gauge length, although this dependence is far from being universally appreciated). There is something in this argument, but the RA is still some way from being a genuinely meaningful parameter. One objection is that it is not easy to measure accurately, particularly with samples that are not circular in section. Rather more fundamentally, it is affected by both the uniform plastic deformation that took place before necking and by the development of the neck. Furthermore, it is sensitive to exactly what happens in the latter stages of necking, when the true strain is often becoming very high and the behavior is of limited significance in terms of a meaningful definition of strength (or toughness). There has again been extensive study of this issue.
Ductile–brittle transition temperature.
Metals can undergo two different types of fractures: brittle fracture or ductile fracture. Failure propagation occurs faster in brittle materials due to the ability for ductile materials to undergo plastic deformation. Thus, ductile materials are able to sustain more stress due to their ability to absorb more energy prior to failure than brittle materials are. The plastic deformation results in the material following a modification of the Griffith equation, where the critical fracture stress increases due to the plastic work required to extend the crack adding to the work necessary to form the crack - work corresponding to the increase in surface energy that results from the formation of an addition crack surface. The plastic deformation of ductile metals is important as it can be a sign of the potential failure of the metal. Yet, the point at which the material exhibits a ductile behavior versus a brittle behavior is not only dependent on the material itself but also on the temperature at which the stress is being applied to the material. The temperature where the material changes from brittle to ductile or vice versa is crucial for the design of load-bearing metallic products. The minimum temperature at which the metal transitions from a brittle behavior to a ductile behavior, or from a ductile behavior to a brittle behavior, is known as the ductile-brittle transition temperature (DBTT). Below the DBTT, the material will not be able to plastically deform, and the crack propagation rate increases rapidly leading to the material undergoing brittle failure rapidly. Furthermore, DBTT is important since, once a material is cooled below the DBTT, it has a much greater tendency to shatter on impact instead of bending or deforming (low temperature embrittlement). Thus, the DBTT indicates the temperature at which, as temperature decreases, a material's ability to deform in a ductile manner decreases and so the rate of crack propagation drastically increases. In other words, solids are very brittle at very low temperatures, and their toughness becomes much higher at elevated temperatures.
For more general applications, it is preferred to have a lower DBTT to ensure the material has a wider ductility range. This ensures that sudden cracks are inhibited so that failures in the metal body are prevented. It has been determined that the more slip systems a material has, the wider the range of temperatures ductile behavior is exhibited at. This is due to the slip systems allowing for more motion of dislocations when a stress is applied to the material. Thus, in materials with a lower amount of slip systems, dislocations are often pinned by obstacles leading to strain hardening, which increases the materials strength which makes the material more brittle. For this reason, FCC (face centered cubic) structures are ductile over a wide range of temperatures, BCC (body centered cubic) structures are ductile only at high temperatures, and HCP (hexagonal closest packed) structures are often brittle over wide ranges of temperatures. This leads to each of these structures having different performances as they approach failure (fatigue, overload, and stress cracking) under various temperatures, and shows the importance of the DBTT in selecting the correct material for a specific application. For example, zamak 3 exhibits good ductility at room temperature but shatters when impacted at sub-zero temperatures. DBTT is a very important consideration in selecting materials that are subjected to mechanical stresses. A similar phenomenon, the glass transition temperature, occurs with glasses and polymers, although the mechanism is different in these amorphous materials. The DBTT is also dependent on the size of the grains within the metal, as typically smaller grain size leads to an increase in tensile strength, resulting in an increase in ductility and decrease in the DBTT. This increase in tensile strength is due to the smaller grain sizes resulting in grain boundary hardening occurring within the material, where the dislocations require a larger stress to cross the grain boundaries and continue to propagate throughout the material. It has been shown that by continuing to refine ferrite grains to reduce their size, from 40 microns down to 1.3 microns, that it is possible to eliminate the DBTT entirely so that a brittle fracture never occurs in ferritic steel (as the DBTT required would be below absolute zero).
In some materials, the transition is sharper than others and typically requires a temperature-sensitive deformation mechanism. For example, in materials with a body-centered cubic (bcc) lattice the DBTT is readily apparent, as the motion of screw dislocations is very temperature sensitive because the rearrangement of the dislocation core prior to slip requires thermal activation. This can be problematic for steels with a high ferrite content. This famously resulted in serious hull cracking in Liberty ships in colder waters during World War II, causing many sinkings. DBTT can also be influenced by external factors such as neutron radiation, which leads to an increase in internal lattice defects and a corresponding decrease in ductility and increase in DBTT.
The most accurate method of measuring the DBTT of a material is by fracture testing. Typically four-point bend testing at a range of temperatures is performed on pre-cracked bars of polished material. Two fracture tests are typically utilized to determine the DBTT of specific metals: the Charpy V-Notch test and the Izod test. The Charpy V-notch test determines the impact energy absorption ability or toughness of the specimen by measuring the potential energy difference resulting from the collision between a mass on a free-falling pendulum and the machined V-shaped notch in the sample, resulting in the pendulum breaking through the sample. The DBTT is determined by repeating this test over a variety of temperatures and noting when the resulting fracture changes to a brittle behavior which occurs when the absorbed energy is dramatically decreased. The Izod test is essentially the same as the Charpy test, with the only differentiating factor being the placement of the sample; In the former the sample is placed vertically, while in the latter the sample is placed horizontally with respect to the bottom of the base.
For experiments conducted at higher temperatures, dislocation activity increases. At a certain temperature, dislocations shield the crack tip to such an extent that the applied deformation rate is not sufficient for the stress intensity at the crack-tip to reach the critical value for fracture (KiC). The temperature at which this occurs is the ductile–brittle transition temperature. If experiments are performed at a higher strain rate, more dislocation shielding is required to prevent brittle fracture, and the transition temperature is raised.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "%EL= \\left ( \\frac{l_f-l_0}{l_0} \\right )\\times100"
},
{
"math_id": 1,
"text": "l_f"
},
{
"math_id": 2,
"text": "l_0"
},
{
"math_id": 3,
"text": "\\varepsilon_f"
},
{
"math_id": 4,
"text": "q"
},
{
"math_id": 5,
"text": "\\%EL = \\frac{\\text{final gauge length - initial gauge length}}{\\text{initial gauge length}} = \\frac{l_f - l_0}{l_0} \\cdot 100"
},
{
"math_id": 6,
"text": "\\%RA = \\frac{\\text{change in area}}{\\text{original area}} = \\frac{A_0 - A_f}{A_0} \\cdot 100"
}
] | https://en.wikipedia.org/wiki?curid=87019 |
87027 | Malleability (cryptography) | Malleability is a property of some cryptographic algorithms. An encryption algorithm is "malleable" if it is possible to transform a ciphertext into another ciphertext which decrypts to a related plaintext. That is, given an encryption of a plaintext formula_0, it is possible to generate another ciphertext which decrypts to formula_1, for a known function formula_2, without necessarily knowing or learning formula_0.
Malleability is often an undesirable property in a general-purpose cryptosystem, since it allows an attacker to modify the contents of a message. For example, suppose that a bank uses a stream cipher to hide its financial information, and a user sends an encrypted message containing, say, "TRANSFER $0000100.00 TO ACCOUNT #199." If an attacker can modify the message on the wire, and can guess the format of the unencrypted message, the attacker could change the amount of the transaction, or the recipient of the funds, e.g. "TRANSFER $0100000.00 TO ACCOUNT #227". Malleability does not refer to the attacker's ability to read the encrypted message. Both before and after tampering, the attacker cannot read the encrypted message.
On the other hand, some cryptosystems are malleable by design. In other words, in some circumstances it may be viewed as a feature that anyone can transform an encryption of formula_0 into a valid encryption of formula_1 (for some restricted class of functions formula_2) without necessarily learning formula_0. Such schemes are known as homomorphic encryption schemes.
A cryptosystem may be semantically secure against chosen plaintext attacks or even non-adaptive chosen ciphertext attacks (CCA1) while still being malleable. However, security against adaptive chosen ciphertext attacks (CCA2) is equivalent to non-malleability.
Example malleable cryptosystems.
In a stream cipher, the ciphertext is produced by taking the exclusive or of the plaintext and a pseudorandom stream based on a secret key formula_3, as formula_4. An adversary can construct an encryption of formula_5 for any formula_6, as formula_7.
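The attack sketched above can be reproduced with a toy XOR "stream cipher" in a few lines (a random key stream stands in for S(k); the strings follow the example above, and the final decryption is performed only to verify the effect, since the attacker never needs the key):

```python
# Toy illustration of stream-cipher malleability: flipping ciphertext bits flips the
# same plaintext bits.  A random key stream stands in for S(k); the final decryption
# is done only to verify the effect, the attacker never needs the key.
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

message = b"TRANSFER $0000100.00 TO ACCOUNT #199"
key_stream = os.urandom(len(message))            # stands in for S(k)
ciphertext = xor_bytes(message, key_stream)      # E(m) = m XOR S(k)

# Attacker guesses the plaintext format and picks the change it wants to make.
target = b"TRANSFER $0100000.00 TO ACCOUNT #227"
tweak = xor_bytes(message, target)               # t such that m XOR t = target
tampered = xor_bytes(ciphertext, tweak)          # E(m) XOR t = E(m XOR t)

print(xor_bytes(tampered, key_stream))           # b'TRANSFER $0100000.00 TO ACCOUNT #227'
```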
In the RSA cryptosystem, a plaintext formula_0 is encrypted as formula_8, where formula_9 is the public key. Given such a ciphertext, an adversary can construct an encryption of formula_10 for any formula_6, as formula_11. For this reason, RSA is commonly used together with padding methods such as OAEP or PKCS1.
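The same property can be demonstrated for unpadded ("textbook") RSA with deliberately tiny, insecure parameters (a sketch only; real RSA keys are far larger and are used with padding precisely to block this):

```python
# Toy demonstration of unpadded-RSA malleability: E(m) * t^e mod n decrypts to m*t mod n.
# Deliberately tiny parameters; requires Python 3.8+ for pow(e, -1, phi).
p, q, e = 61, 53, 17
n = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                  # private exponent

m, t = 123, 7
c = pow(m, e, n)                     # E(m)
c_tampered = (c * pow(t, e, n)) % n  # E(m) * t^e mod n

print(pow(c_tampered, d, n))         # 861
print((m * t) % n)                   # 861 as well: the tampered ciphertext decrypts to m*t
```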
In the ElGamal cryptosystem, a plaintext formula_0 is encrypted as formula_12, where formula_13 is the public key. Given such a ciphertext formula_14, an adversary can compute formula_15, which is a valid encryption of formula_16, for any formula_6.
In contrast, the Cramer-Shoup system (which is based on ElGamal) is not malleable.
In the Paillier, ElGamal, and RSA cryptosystems, it is also possible to combine "several" ciphertexts together in a useful way to produce a related ciphertext. In Paillier, given only the public key and an encryption of formula_17 and formula_18, one can compute a valid encryption of their sum formula_19. In ElGamal and in RSA, one can combine encryptions of formula_17 and formula_18 to obtain a valid encryption of their product formula_20.
Block ciphers in the cipher block chaining mode of operation, for example, are partly malleable: flipping a bit in a ciphertext block will completely mangle the plaintext it decrypts to, but will result in the same bit being flipped in the plaintext of the next block. This allows an attacker to 'sacrifice' one block of plaintext in order to change some data in the next one, possibly managing to maliciously alter the message. This is essentially the core idea of the padding oracle attack on CBC, which allows the attacker to decrypt almost an entire ciphertext without knowing the key. For this and many other reasons, a message authentication code is required to guard against any method of tampering.
Complete non-malleability.
Fischlin, in 2005, defined the notion of complete non-malleability as the ability of the system to remain non-malleable while giving the adversary additional power to choose a new public key which could be a function of the original public key. In other words, the adversary shouldn't be able to come up with a ciphertext whose underlying plaintext is related to the original message through a relation that also takes public keys into account.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "f(m)"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "E(m) = m \\oplus S(k)"
},
{
"math_id": 5,
"text": "m \\oplus t"
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": "E(m) \\oplus t = m \\oplus t \\oplus S(k) = E(m \\oplus t)"
},
{
"math_id": 8,
"text": "E(m) = m^e \\bmod n"
},
{
"math_id": 9,
"text": "(e,n)"
},
{
"math_id": 10,
"text": "mt"
},
{
"math_id": 11,
"text": "E(m) \\cdot t^e \\bmod n = (mt)^e \\bmod n = E(mt)"
},
{
"math_id": 12,
"text": "E(m) = (g^b, m A^b)"
},
{
"math_id": 13,
"text": "(g,A)"
},
{
"math_id": 14,
"text": "(c_1, c_2)"
},
{
"math_id": 15,
"text": "(c_1, t \\cdot c_2)"
},
{
"math_id": 16,
"text": "tm"
},
{
"math_id": 17,
"text": "m_1"
},
{
"math_id": 18,
"text": "m_2"
},
{
"math_id": 19,
"text": "m_1+m_2"
},
{
"math_id": 20,
"text": "m_1 m_2"
}
] | https://en.wikipedia.org/wiki?curid=87027 |
8702775 | Zero-forcing equalizer | The zero-forcing equalizer is a form of linear equalization algorithm used in communication systems which applies the inverse of the frequency response of the channel. This form of equalizer was first proposed by Robert Lucky.
The zero-forcing equalizer applies the inverse of the channel frequency response to the received signal, to restore the signal after the channel. It has many useful applications. For example, it is studied heavily for IEEE 802.11n (MIMO) where knowing the channel allows recovery of the two or more streams which will be received on top of each other on each antenna. The name "zero-forcing" corresponds to bringing the intersymbol interference (ISI) down to zero in a noise-free case. This is useful when ISI is significant compared to noise.
For a channel with frequency response formula_0 the zero-forcing equalizer formula_1 is constructed by formula_2. Thus the combination of channel and equalizer gives a flat frequency response and linear phase formula_3.
In reality, zero-forcing equalization does not work in most applications, for the following reasons:
1. Even though the channel impulse response has finite length, the impulse response of the equalizer generally needs to be infinitely long, and the channel cannot be inverted at all at frequencies where its response is zero.
2. At some frequencies the received signal may be weak, so the gain of the zero-forcing filter becomes very large there; as a consequence, any noise added after the channel is boosted by a large factor and degrades the overall signal-to-noise ratio.
This second item is often the more limiting condition. These problems are addressed in the linear MMSE equalizer by making a small modification to the denominator of formula_1: formula_4, where k is related to the channel response and the signal SNR.
Algorithm.
If the channel response (or channel transfer function) for a particular channel is H(s) then the input signal is multiplied by the reciprocal of it. This is intended to remove the effect of channel from the received signal, in particular the intersymbol interference (ISI).
The zero-forcing equalizer removes all ISI, and is ideal when the channel is noiseless. However, when the channel is noisy, the zero-forcing equalizer will amplify the noise greatly at frequencies "f" where the channel response H(j2π"f") has a small magnitude (i.e. near zeroes of the channel) in the attempt to invert the channel completely. A more balanced linear equalizer in this case is the minimum mean-square error equalizer, which does not usually eliminate ISI completely but instead minimizes the total power of the noise and ISI components in the output.
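The noise-amplification problem can be seen directly in the frequency domain. The sketch below (assuming NumPy, and following the simplified regularised form C(f) = 1/(F(f) + k) given above rather than a full MMSE derivation) uses a toy two-tap channel whose response nearly vanishes at half the sampling rate; the pure inverse becomes very large there while the regularised filter stays bounded.

```python
# Frequency-domain comparison of C(f) = 1/F(f) with the regularised C(f) = 1/(F(f) + k).
# NumPy sketch with a toy two-tap channel whose response nearly vanishes at half the
# sampling rate; the pure inverse blows up there, the regularised filter stays bounded.
import numpy as np

h = np.array([1.0, 0.95])            # toy channel impulse response
N = 256
F = np.fft.fft(h, N)                 # sampled channel frequency response F(f)

C_zf = 1.0 / F                       # zero-forcing equalizer
k = 0.1                              # regularisation term (tied to the SNR in MMSE designs)
C_mmse = 1.0 / (F + k)

print("max |C_zf|  :", round(float(np.max(np.abs(C_zf))), 1))    # about 20
print("max |C_mmse|:", round(float(np.max(np.abs(C_mmse))), 1))  # about 6.7
```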
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F(f)"
},
{
"math_id": 1,
"text": "C(f)"
},
{
"math_id": 2,
"text": "C(f) = 1/F(f)"
},
{
"math_id": 3,
"text": "F(f)C(f) = 1"
},
{
"math_id": 4,
"text": "C(f) = 1/(F(f) + k)"
}
] | https://en.wikipedia.org/wiki?curid=8702775 |
87030 | Glottochronology | Part of lexicostatistics dealing with the chronological relationship between languages
Glottochronology (from Attic Greek γλῶττα "tongue, language" and χρόνος "time") is the part of lexicostatistics which involves comparative linguistics and deals with the chronological relationship between languages.
The idea was developed by Morris Swadesh in the 1950s in his article on Salish internal relationships. He developed the idea under two assumptions: there indeed exists a relatively stable "basic vocabulary" (referred to as "Swadesh lists") in all languages of the world; and, any replacements happen in a way analogous to radioactive decay in a constant percentage per time elapsed. Using mathematics and statistics, Swadesh developed an equation to determine when languages separated and give an approximate time of when the separation occurred. His methods aimed to aid linguistic anthropologists by giving them a definitive way to determine a separation date between two languages. The formula provides an approximate number of centuries since two languages were supposed to have separated from a singular common ancestor. His methods also purported to provide information on when ancient languages may have existed.
Despite the many studies and the substantial literature on glottochronology, it is not widely used today and is surrounded by controversy. Glottochronology attempts to track language separation over thousands of years, but many linguists are skeptical of the concept because it yields a probability rather than a certainty. On the other hand, some linguists argue that glottochronology is gaining traction because of how well its results can be related to archaeological dates. Glottochronology is not as accurate as archaeological data, but some linguists still believe that it can provide a solid estimate.
Over time many different extensions of the Swadesh method evolved; however, Swadesh's original method is so well known that 'glottochronology' is usually associated with him.
Methodology.
Word list.
The original method of glottochronology presumed that the core vocabulary of a language is replaced at a constant (or constant average) rate across all languages and cultures and so can be used to measure the passage of time. The process makes use of a list of lexical terms and morphemes that are shared across multiple languages.
Lists were compiled by Morris Swadesh and assumed to be resistant against borrowing (originally designed in 1952 as a list of 200 items, but the refined 100-word list in Swadesh (1955) is much more common among modern day linguists). The core vocabulary was designed to encompass concepts common to every human language such as personal pronouns, body parts, heavenly bodies and living beings, verbs of basic actions, numerals, basic adjectives, kin terms, and natural occurrences and events. Through a basic word list, one eliminates concepts that are specific to a particular culture or time period. It has been found through differentiating word lists that the ideal is really impossible and that the meaning set may need to be tailored to the languages being compared. Word lists are not homogenous throughout studies and they are often changed and designed to suit both languages being studied. Linguists find that it is difficult to find a word list where all words used are culturally unbiased. Many alternative word lists have been compiled by other linguists and often use fewer meaning slots.
The percentage of cognates (words with a common origin) in the word lists is then measured. The larger the percentage of cognates, the more recently the two languages being compared are presumed to have separated.
Below is an example of a basic word list composed of basic Turkish words and their English translations.
Glottochronologic constant.
The determination of word lists relies on morpheme decay, or change in vocabulary. Morpheme decay must stay at a constant rate for glottochronology to be applied to a language. This leads to a critique of the glottochronologic formula, because some linguists argue that the morpheme decay rate is not guaranteed to stay the same throughout history.
The American linguist Robert Lees obtained a value for the "glottochronological constant" (r) of words by considering the known changes in 13 pairs of languages using the 200-word list. He obtained a value of 0.805 ± 0.0176 with 90% confidence. For his 100-word list Swadesh obtained a value of 0.86, the higher value reflecting the elimination of semantically unstable words. The constant is related to the retention rate of words by the following formula:
formula_0
"L" is the rate of replacement, ln represents the natural logarithm and "r" is the glottochronological constant.
Divergence time.
The basic formula of glottochronology in its shortest form is this:
formula_1
"t" = a given period of time from one stage of the language to another (measured in millennia), "c" = proportion of wordlist items retained at the end of that period and "L" = rate of replacement for that word list.
One can also therefore formulate:
formula_2
By testing historically verifiable cases in which "t" is known by nonlinguistic data (such as the approximate distance from Classical Latin to modern Romance languages), Swadesh arrived at the empirical value of approximately 0.14 for "L", which means that the rate of replacement constitutes around 14 words from the 100-wordlist per millennium. This is represented in the table below.
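As an illustration, the basic formula can be applied in a few lines of Python. This sketch follows formula_1 with Swadesh's empirical replacement rate of about 0.14 per millennium for the 100-word list; the cognate proportion passed in is a purely hypothetical input.

```python
import math

def divergence_time(c, L=0.14):
    """Estimate the separation time in millennia from the proportion c of
    retained wordlist items, using t = ln(c) / (-L), where L is the
    replacement rate per millennium (about 0.14 for the 100-word list)."""
    if not 0 < c <= 1:
        raise ValueError("c must be a proportion in (0, 1]")
    return math.log(c) / (-L)

# Hypothetical example: two word lists sharing 70% of their items.
print(round(divergence_time(0.70), 2), "millennia")  # about 2.55
```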
Results.
Glottochronology was found to work in the case of Indo-European, accounting for 87% of the variance. It is also postulated to work for Afro-Asiatic (Fleming 1973), Chinese (Munro 1978) and Amerind (Stark 1973; Baumhoff and Olmsted 1963). For Amerind, correlations have been obtained with radiocarbon dating and blood groups as well as archaeology.
The approach of Gray and Atkinson, as they state, has nothing to do with "glottochronology".
Discussion.
The concept of language change is old, and its history is reviewed in Hymes (1973) and Wells (1973). In some sense, glottochronology is a reconstruction of history and can often be closely related to archaeology. Many linguistic studies find that glottochronology succeeds best when its results can be set alongside archaeological data. Glottochronology itself dates back to the mid-20th century. An introduction to the subject is given in Embleton (1986) and in McMahon and McMahon (2005).
Glottochronology has been controversial ever since, partly because of issues of accuracy but also because of the question of whether its basis is sound (for example, Bergsland 1958; Bergsland and Vogt 1962; Fodor 1961; Chrétien 1962; Guy 1980). The concerns have been addressed by Dobson et al. (1972), Dyen (1973) and Kruskal, Dyen and Black (1973). The assumption of a single-word replacement rate can distort the divergence-time estimate when borrowed words are included (Thomason and Kaufman 1988).
An overview of recent arguments can be obtained from the papers of a conference held at the McDonald Institute in 2000. The presentations vary from "Why linguists don't do dates" to the one by Starostin discussed above.
Since its original inception, glottochronology has been rejected by many linguists, mostly Indo-Europeanists of the school of the traditional comparative method. The criticism has centered in particular on three points of discussion:
Thus, in Bergsland & Vogt (1962), the authors make an impressive demonstration, on the basis of actual language data verifiable by extralinguistic sources, that the "rate of change" for Icelandic constituted around 4% per millennium, but for closely connected Riksmal (Literary Norwegian), it would amount to as much as 20% (Swadesh's proposed "constant rate" was supposed to be around 14% per millennium).
That and several other similar examples effectively proved that Swadesh's formula would not work on all available material, which is a serious accusation since evidence that can be used to "calibrate" the meaning of "L" (language history recorded during prolonged periods of time) is not overwhelmingly large in the first place.
It is highly likely that the chance of replacement is different for every word or feature ("each word has its own history", among hundreds of other sources).
That global assumption has been modified and downgraded to single words, even in single languages, in many newer attempts (see below).
There is a lack of understanding of Swadesh's mathematical/statistical methods. Some linguists reject the methods in full because the statistics lead to 'probabilities' when linguists trust 'certainties' more.
New methods developed by Gray & Atkinson are claimed to avoid those issues but are still seen as controversial, primarily since they often produce results that are incompatible with known data and because of additional methodological issues.
Modifications.
Somewhere in between the original concept of Swadesh and the rejection of glottochronology in its entirety lies the idea that glottochronology as a formal method of linguistic analysis becomes valid with the help of several important modifications. Thus, inhomogeneities in the replacement rate were dealt with by Van der Merwe (1966) by splitting the word list into classes each with their own rate, while Dyen, James and Cole (1967) allowed each meaning to have its own rate. Simultaneous estimation of divergence time and replacement rate was studied by Kruskal, Dyen and Black.
Brainard (1970) allowed for chance cognation, and drift effects were introduced by Gleason (1959). Sankoff (1973) suggested introducing a borrowing parameter and allowed synonyms.
A combination of the various improvements is given in Sankoff's "Fully Parameterised Lexicostatistics". In 1972, Sankoff in a biological context developed a model of genetic divergence of populations. Embleton (1981) derives a simplified version of that in a linguistic context. She carries out a number of simulations using it, which are shown to give good results.
Improvements in statistical methodology related to a completely different branch of science, phylogenetics (the study of changes in DNA over time), sparked a recent renewed interest. The new methods are more robust than the earlier ones because they calibrate points on the tree with known historical events and smooth the rates of change across them. As such, they no longer require the assumption of a constant rate of change (Gray & Atkinson 2003).
Starostin's method.
Another attempt to introduce such modifications was performed by the Russian linguist Sergei Starostin, who had proposed the following:
The resulting formula, taking into account both the time dependence and the individual stability quotients, looks as follows:
formula_3
In that formula, −"Lc" reflects the gradual slowing down of the replacement process because of different individual rates, since the least stable elements are the first and the quickest to be replaced, and the square root represents the reverse trend, the acceleration of replacement as items in the original wordlist "age" and become more prone to shifting their meaning. This formula is obviously more complicated than Swadesh's original one, but it yields, as shown by Starostin, more credible results than the former and more or less agrees with all the cases of language separation that can be confirmed by historical knowledge. On the other hand, it shows that glottochronology can really be used only as a serious scientific tool on language families whose historical phonology has been meticulously elaborated (at least to the point of being able to distinguish between cognates and loanwords clearly).
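For comparison, Starostin's formula_3 can be sketched the same way; the retention proportion and the rate L used below are illustrative placeholders, since the text does not give Starostin's numerical constants.

```python
import math

def starostin_time(c, L):
    """Starostin's modified dating formula t = sqrt(ln(c) / (-L * c)),
    with c the proportion of retained items and L the replacement rate.
    Both arguments here are illustrative, not Starostin's calibrated values."""
    return math.sqrt(math.log(c) / (-L * c))

# Hypothetical inputs for demonstration only.
print(round(starostin_time(0.70, 0.05), 2), "millennia")
```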
Time-depth estimation.
The McDonald Institute hosted a conference on the issue of time-depth estimation in 2000. The published papers give an idea of the views on glottochronology at that time. They vary from "Why linguists don't do dates" to the one by Starostin discussed above. Note that in the referenced Gray and Atkinson paper, they hold that their methods cannot be called "glottochronology" by confining this term to its original method. | [
{
"math_id": 0,
"text": "L = 2\\ln(r) "
},
{
"math_id": 1,
"text": "t = \\frac{\\ln(c)}{-L}"
},
{
"math_id": 2,
"text": " t = -\\frac{\\ln(c)}{2\\ln(r)}"
},
{
"math_id": 3,
"text": "t = \\sqrt \\frac{\\ln(c)}{-Lc}"
}
] | https://en.wikipedia.org/wiki?curid=87030 |
870399 | Set cover problem | Classical problem in combinatorics
The set cover problem is a classical question in combinatorics, computer science, operations research, and complexity theory.
Given a set of elements {1, 2, …, "n"} (called the universe) and a collection S of m subsets whose union equals the universe, the set cover problem is to identify the smallest sub-collection of S whose union equals the universe. For example, consider the universe "U" = {1, 2, 3, 4, 5} and the collection of sets "S" = { {1, 2, 3}, {2, 4}, {3, 4}, {4, 5} }. Clearly the union of S is U. However, we can cover all elements with only two sets: { {1, 2, 3}, {4, 5} }; see picture. Therefore, the solution to the set cover problem has size 2.
More formally, given a universe formula_0 and a family formula_1 of subsets of formula_0, a set cover is a subfamily formula_2 of sets whose union is formula_0.
The decision version of set covering is NP-complete. It is one of Karp's 21 NP-complete problems shown to be NP-complete in 1972. The optimization/search version of set cover is NP-hard. It is a problem "whose study has led to the development of fundamental techniques for the entire field" of approximation algorithms.
Variants.
In the weighted set cover problem, each set is assigned a positive weight (representing its cost), and the goal is to find a set cover with the smallest total weight. The usual (unweighted) set cover corresponds to all sets having a weight of 1.
In the fractional set cover problem, it is allowed to select fractions of sets, rather than entire sets. A fractional set cover is an assignment of a fraction (a number in [0,1]) to each set in formula_1, such that for each element "x" in the universe, the sum of fractions of sets that contain "x" is at least 1. The goal is to find a fractional set cover in which the sum of fractions is as small as possible. Note that a (usual) set cover is equivalent to a fractional set cover in which all fractions are either 0 or 1; therefore, the size of the smallest fractional cover is at most the size of the smallest cover, but may be smaller. For example, consider the universe "U" = {1, 2, 3} and the collection of sets "S" = { {1, 2}, {2, 3}, {3, 1} }. The smallest set cover has a size of 2, e.g. { {1, 2}, {2, 3} }. But there is a fractional set cover of size 1.5, in which a 0.5 fraction of each set is taken.
Linear program formulation.
The set cover problem can be formulated as the following integer linear program (ILP): introduce a binary decision variable formula_12 for each set in formula_1, equal to 1 exactly when that set is chosen; the objective is to minimize the sum of these variables, subject to the covering constraint that, for every element of the universe, the variables of the sets containing that element sum to at least 1.
For a more compact representation of the covering constraint, one can define an incidence matrix "formula_5", where each row corresponds to an element and each column corresponds to a set, and "formula_6" if element e is in set s, and "formula_7" otherwise. Then, the covering constraint can be written as formula_8.
Weighted set cover is described by a program identical to the one given above, except that the objective function to minimize is formula_9, where formula_10 is the weight of set formula_11.
Fractional set cover is described by a program identical to the one given above, except that formula_12 can be non-integer, so the last constraint is replaced by formula_13.
This linear program belongs to the more general class of LPs for covering problems, as all the coefficients in the objective function and both sides of the constraints are non-negative. The integrality gap of the ILP is at most formula_14 (where formula_15 is the size of the universe). It has been shown that its relaxation indeed gives a factor-formula_14 approximation algorithm for the minimum set cover problem. See randomized rounding#setcover for a detailed explanation.
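As a rough illustration of the relaxation, the following Python sketch solves the fractional set cover LP for the triangle instance from the Variants section with SciPy's linear-programming routine; negating the incidence matrix to fit the solver's "less than or equal" convention is an implementation detail, not part of the formulation above.

```python
import numpy as np
from scipy.optimize import linprog

# Triangle instance from the text: U = {1, 2, 3}, S = {{1,2}, {2,3}, {3,1}}.
universe = [1, 2, 3]
sets = [{1, 2}, {2, 3}, {3, 1}]

# Incidence matrix A: rows are elements, columns are sets.
A = np.array([[1 if e in s else 0 for s in sets] for e in universe])

# Fractional set cover: minimize sum(x) subject to A x >= 1 and 0 <= x <= 1.
# linprog expects "<=" constraints, so A x >= 1 is written as -A x <= -1.
res = linprog(c=np.ones(len(sets)),
              A_ub=-A, b_ub=-np.ones(len(universe)),
              bounds=[(0, 1)] * len(sets),
              method="highs")
print(res.x, res.fun)  # expected: a fraction of 0.5 per set, objective 1.5
```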
Hitting set formulation.
Set covering is equivalent to the hitting set problem. That is seen by observing that an instance of set covering can
be viewed as an arbitrary bipartite graph, with the universe represented by vertices on the left, the sets represented by vertices on the
right, and edges representing the membership of elements to sets. The task is then to find a minimum cardinality subset of left-vertices that has a non-trivial intersection with each of the right-vertices, which is precisely the hitting set problem.
In the field of computational geometry, a hitting set for a collection of geometrical objects is also called a stabbing set or piercing set.
Greedy algorithm.
There is a greedy algorithm for polynomial time approximation of set covering that chooses sets according to one rule: at each stage, choose the set that contains the largest number of uncovered elements. This method can be implemented in time linear in the sum of sizes of the input sets, using a bucket queue to prioritize the sets. It achieves an approximation ratio of formula_16, where formula_17 is the size of the set to be covered. In other words, it finds a covering that may be formula_18 times as large as the minimum one, where formula_18 is the formula_19-th harmonic number:
formula_20
This greedy algorithm actually achieves an approximation ratio of formula_21 where formula_22 is the maximum cardinality set of formula_23. For formula_24dense instances, however, there exists a formula_25-approximation algorithm for every formula_26.
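A straightforward (not the linear-time, bucket-queue) Python implementation of the greedy rule might look as follows; the instance used at the end is the one from the introduction, on which the greedy choice happens to be optimal.

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation: repeatedly pick the subset that covers the most
    still-uncovered elements (assumes the union of `subsets` is `universe`)."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        cover.append(best)
        uncovered -= best
    return cover

U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover(U, S))  # [{1, 2, 3}, {4, 5}]
```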
There is a standard example on which the greedy algorithm achieves an approximation ratio of formula_27.
The universe consists of formula_28 elements. The set system consists of formula_4 pairwise disjoint sets
formula_29 with sizes formula_30 respectively, as well as two additional disjoint sets formula_31,
each of which contains half of the elements from each formula_32. On this input, the greedy algorithm takes the sets
formula_33, in that order, while the optimal solution consists only of formula_34 and formula_35.
An example of such an input for formula_36 is pictured on the right.
Inapproximability results show that the greedy algorithm is essentially the best-possible polynomial time approximation algorithm for set cover up to lower order terms
(see Inapproximability results below), under plausible complexity assumptions. A tighter analysis for the greedy algorithm shows that the approximation ratio is exactly formula_37.
Low-frequency systems.
If each element occurs in at most f sets, then a solution can be found in polynomial time that approximates the optimum to within a factor of f using LP relaxation.
If the constraint formula_38 is replaced by formula_39 for all S in formula_1 in the integer linear program shown above, then it becomes a (non-integer) linear program L. The algorithm can be described as follows: find an optimal solution of L in polynomial time, and include in the cover every set S whose variable takes a value of at least 1/f in that solution. Since each element belongs to at most f sets and its covering constraint is satisfied, at least one of the sets containing it receives a value of at least 1/f, so the chosen sets indeed form a cover; moreover, their number is at most f times the optimum of L, hence at most f times the size of the smallest cover.
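A minimal Python sketch of this rounding scheme, assuming SciPy is available for the LP step, could look as follows; the small instance at the end is only for demonstration.

```python
import numpy as np
from scipy.optimize import linprog

def lp_rounding_cover(universe, subsets):
    """LP-rounding sketch: solve the relaxation, then keep every set whose
    fractional value is at least 1/f, where f is the maximum number of
    sets containing any single element."""
    universe = list(universe)
    A = np.array([[1 if e in s else 0 for s in subsets] for e in universe])
    f = int(A.sum(axis=1).max())  # maximum frequency of an element
    res = linprog(c=np.ones(len(subsets)),
                  A_ub=-A, b_ub=-np.ones(len(universe)),
                  bounds=[(0, 1)] * len(subsets), method="highs")
    return [s for s, x in zip(subsets, res.x) if x >= 1.0 / f - 1e-9]

print(lp_rounding_cover({1, 2, 3, 4, 5},
                        [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))
```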
Inapproximability results.
When formula_40 refers to the size of the universe, it was shown that set covering cannot be approximated in polynomial time to within a factor of formula_41, unless NP has quasi-polynomial time algorithms. Feige (1998) improved this lower bound to formula_42 under the same assumptions, which essentially matches the approximation ratio achieved by the greedy algorithm. A lower bound of formula_43, where formula_44 is a certain constant, was later established under the weaker assumption that Pformula_45NP.
A similar result with a higher value of formula_44 was proved more recently, and it has since been shown that set cover cannot be approximated to formula_46 unless Pformula_47NP, an optimal inapproximability result.
For low-frequency systems, it has been proved NP-hard to approximate set cover to better than formula_48.
If the Unique games conjecture is true, this can be improved to formula_49.
It has also been proved that set cover instances with sets of size at most formula_50 cannot be approximated to a factor better than formula_51 unless Pformula_47NP, thus making the approximation ratio of formula_52 achieved by the greedy algorithm essentially tight in this case.
Weighted set cover.
Relaxing the integer linear program for weighted set cover stated above, one may use randomized rounding to get an formula_53-factor approximation. The corresponding techniques for unweighted set cover can be adapted to the weighted case.
Notes.
| [
{
"math_id": 0,
"text": "\\mathcal{U}"
},
{
"math_id": 1,
"text": "\\mathcal{S}"
},
{
"math_id": 2,
"text": "\\mathcal{C}\\subseteq\\mathcal{S}"
},
{
"math_id": 3,
"text": "(\\mathcal{U},\\mathcal{S})"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "A_{e,s}=1"
},
{
"math_id": 7,
"text": "A_{e,s}=0"
},
{
"math_id": 8,
"text": "A x \\geqslant 1 "
},
{
"math_id": 9,
"text": "\\sum_{s \\in \\mathcal S} w_s x_s"
},
{
"math_id": 10,
"text": "w_{s}"
},
{
"math_id": 11,
"text": "s\\in \\mathcal{S}"
},
{
"math_id": 12,
"text": "x_s"
},
{
"math_id": 13,
"text": "0 \\leq x_s\\leq 1"
},
{
"math_id": 14,
"text": "\\scriptstyle \\log n"
},
{
"math_id": 15,
"text": "\\scriptstyle n"
},
{
"math_id": 16,
"text": "H(s)"
},
{
"math_id": 17,
"text": "s"
},
{
"math_id": 18,
"text": "H(n)"
},
{
"math_id": 19,
"text": "n"
},
{
"math_id": 20,
"text": " H(n) = \\sum_{k=1}^{n} \\frac{1}{k} \\le \\ln{n} +1"
},
{
"math_id": 21,
"text": "H(s^\\prime)"
},
{
"math_id": 22,
"text": "s^\\prime"
},
{
"math_id": 23,
"text": "S"
},
{
"math_id": 24,
"text": "\\delta-"
},
{
"math_id": 25,
"text": "c \\ln{m}"
},
{
"math_id": 26,
"text": "c > 0"
},
{
"math_id": 27,
"text": "\\log_2(n)/2"
},
{
"math_id": 28,
"text": "n=2^{(k+1)}-2"
},
{
"math_id": 29,
"text": "S_1,\\ldots,S_k"
},
{
"math_id": 30,
"text": "2,4,8,\\ldots,2^k"
},
{
"math_id": 31,
"text": "T_0,T_1"
},
{
"math_id": 32,
"text": "S_i"
},
{
"math_id": 33,
"text": "S_k,\\ldots,S_1"
},
{
"math_id": 34,
"text": "T_0"
},
{
"math_id": 35,
"text": "T_1"
},
{
"math_id": 36,
"text": "k=3"
},
{
"math_id": 37,
"text": "\\ln{n} - \\ln{\\ln{n}} + \\Theta(1)"
},
{
"math_id": 38,
"text": "x_S\\in\\{0,1\\}"
},
{
"math_id": 39,
"text": "x_S \\geq 0"
},
{
"math_id": 40,
"text": " n"
},
{
"math_id": 41,
"text": "\\tfrac{1}{2}\\log_2{n} \\approx 0.72\\ln{n}"
},
{
"math_id": 42,
"text": "\\bigl(1-o(1)\\bigr)\\cdot\\ln{n}"
},
{
"math_id": 43,
"text": "c\\cdot\\ln{n}"
},
{
"math_id": 44,
"text": "c"
},
{
"math_id": 45,
"text": "\\not="
},
{
"math_id": 46,
"text": "\\bigl(1 - o(1)\\bigr) \\cdot \\ln{n}"
},
{
"math_id": 47,
"text": "="
},
{
"math_id": 48,
"text": "f-1-\\epsilon"
},
{
"math_id": 49,
"text": "f-\\epsilon"
},
{
"math_id": 50,
"text": "\\Delta"
},
{
"math_id": 51,
"text": "\\ln \\Delta - O(\\ln \\ln \\Delta)"
},
{
"math_id": 52,
"text": "\\ln \\Delta + 1"
},
{
"math_id": 53,
"text": "O(\\log n)"
},
{
"math_id": 54,
"text": "\\mathbb{R}^d"
}
] | https://en.wikipedia.org/wiki?curid=870399 |
8707155 | Statistical shape analysis | Analysis of geometric properties
Statistical shape analysis is an analysis of the geometrical properties of some given set of shapes by statistical methods. For instance, it could be used to quantify differences between male and female gorilla skull shapes, normal and pathological bone shapes, leaf outlines with and without herbivory by insects, etc. Important aspects of shape analysis are to obtain a measure of distance between shapes, to estimate mean shapes from (possibly random) samples, to estimate shape variability within samples, to perform clustering and to test for differences between shapes. One of the main methods used is principal component analysis (PCA). Statistical shape analysis has applications in various fields, including medical imaging, computer vision, computational anatomy, sensor measurement, and geographical profiling.
Landmark-based techniques.
In the point distribution model, a shape is determined by a finite set of coordinate points, known as landmark points. These landmark points often correspond to important identifiable features such as the corners of the eyes. Once the points are collected, some form of registration is undertaken. This can be the baseline method used by Fred Bookstein for geometric morphometrics in anthropology, or an approach like Procrustes analysis, which finds an average shape.
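As an illustration of Procrustes superimposition on landmark data, the following Python sketch uses SciPy's procrustes routine on two hypothetical landmark configurations; the point coordinates, rotation angle and offsets are made-up values for demonstration only.

```python
import numpy as np
from scipy.spatial import procrustes

# Two hypothetical landmark configurations (four 2-D points): the second is
# a rotated, scaled and translated copy of the first.
shape_a = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
shape_b = 2.5 * shape_a @ rot.T + np.array([3.0, -1.0])

# Procrustes superimposition removes translation, scale and rotation;
# the disparity measures the remaining shape difference (here essentially 0).
mtx1, mtx2, disparity = procrustes(shape_a, shape_b)
print(disparity)
```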
David George Kendall investigated the statistical distribution of the shape of triangles, and represented each triangle by a point on a sphere. He used this distribution on the sphere to investigate ley lines and whether three stones were more likely to be collinear than might be expected. Statistical distributions like the Kent distribution can be used to analyse distributions on such shape spaces.
Alternatively, shapes can be represented by curves or surfaces representing their contours, by the spatial region they occupy.
Shape deformations.
Differences between shapes can be quantified by investigating deformations transforming one shape into another. In particular a diffeomorphism preserves smoothness in the deformation. This was pioneered in D'Arcy Thompson's On Growth and Form before the advent of computers. Deformations can be interpreted as resulting from a force applied to the shape. Mathematically, a deformation is defined as a mapping from a shape "x" to a shape "y" by a transformation function formula_0, i.e., formula_1. Given a notion of size of deformations, the distance between two shapes can be defined as the size of the smallest deformation between these shapes.
The comparison of shapes and forms with a metric structure based on diffeomorphisms is central to the field of computational anatomy. Diffeomorphic registration, introduced in the 1990s, is now an important part of this field; ANTS, DARTEL, DEMONS, LDDMM, StationaryLDDMM, and FastLDDMM are examples of actively used computational code bases for constructing correspondences between coordinate systems based on sparse features and dense images. Voxel-based morphometry (VBM) is an important technology built on many of these principles. Methods based on diffeomorphic flows are also used. For example, deformations could be diffeomorphisms of the ambient space, resulting in the LDDMM (Large Deformation Diffeomorphic Metric Mapping) framework for shape comparison.
References.
| [
{
"math_id": 0,
"text": "\\Phi"
},
{
"math_id": 1,
"text": "y = \\Phi(x) "
}
] | https://en.wikipedia.org/wiki?curid=8707155 |
87089 | Snake lemma | Theorem in homological algebra
The snake lemma is a tool used in mathematics, particularly homological algebra, to construct long exact sequences. The snake lemma is valid in every abelian category and is a crucial tool in homological algebra and its applications, for instance in algebraic topology. Homomorphisms constructed with its help are generally called "connecting homomorphisms".
Statement.
In an abelian category (such as the category of abelian groups or the category of vector spaces over a given field), consider a commutative diagram:
where the rows are exact sequences and 0 is the zero object.
Then there is an exact sequence relating the kernels and cokernels of "a", "b", and "c":
formula_0
where "d" is a homomorphism, known as the "connecting homomorphism".
Furthermore, if the morphism "f" is a monomorphism, then so is the morphism formula_1, and if "g"' is an epimorphism, then so is formula_2.
The cokernels here are: formula_3, formula_4, formula_5.
Explanation of the name.
To see where the snake lemma gets its name, expand the diagram above as follows:
and then the exact sequence that is the conclusion of the lemma can be drawn on this expanded diagram in the reversed "S" shape of a slithering snake.
Construction of the maps.
The maps between the kernels and the maps between the cokernels are induced in a natural manner by the given (horizontal) maps because of the diagram's commutativity. The exactness of the two induced sequences follows in a straightforward way from the exactness of the rows of the original diagram. The important statement of the lemma is that a "connecting homomorphism" "d" exists which completes the exact sequence.
In the case of abelian groups or modules over some ring, the map "d" can be constructed as follows:
Pick an element "x" in ker "c" and view it as an element of "C"; since "g" is surjective, there exists "y" in "B" with "g"("y") = "x". Because of the commutativity of the diagram, we have "g"'("b"("y")) = "c"("g"("y")) = "c"("x") = 0 (since "x" is in the kernel of "c"), and therefore "b"("y") is in the kernel of "g' ". Since the bottom row is exact, we find an element "z" in "A' " with "f" '("z") = "b"("y"). "z" is unique by injectivity of "f" '. We then define "d"("x") = "z" + "im"("a"). Now one has to check that "d" is well-defined (i.e., "d"("x") only depends on "x" and not on the choice of "y"), that it is a homomorphism, and that the resulting long sequence is indeed exact. One may routinely verify the exactness by diagram chasing (see the proof of Lemma 9.1 in ).
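In symbols, the construction just described can be summarized as follows (this is only a restatement of the prose above, with "y" any preimage of "x" under "g"):

```latex
d(x) = (f')^{-1}\bigl(b(y)\bigr) + \operatorname{im}(a),
\qquad \text{where } g(y) = x \text{ and } x \in \ker c .
```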
Once that is done, the theorem is proven for abelian groups or modules over a ring. For the general case, the argument may be rephrased in terms of properties of arrows and cancellation instead of elements. Alternatively, one may invoke Mitchell's embedding theorem.
Naturality.
In the applications, one often needs to show that long exact sequences are "natural" (in the sense of natural transformations). This follows from the naturality of the sequence produced by the snake lemma.
If
is a commutative diagram with exact rows, then the snake lemma can be applied twice, to the "front" and to the "back", yielding two long exact sequences; these are related by a commutative diagram of the form
Example.
Let formula_6 be a field and formula_7 a formula_6-vector space. Then formula_7 is a formula_8-module, with formula_9 being a formula_6-linear transformation, so we can tensor formula_7 and formula_6 over formula_8.
formula_10
Given a short exact sequence of formula_6-vector spaces formula_11, we can induce an exact sequence formula_12 by right exactness of the tensor product. But the sequence formula_13 is not exact in general. Hence, a natural question arises: why is this sequence not exact?
According to the diagram above, we can induce an exact sequence formula_14 by applying the snake lemma. Thus, the snake lemma reflects the tensor product's failure to be exact.
In the category of groups.
While many results of homological algebra, such as the five lemma or the nine lemma, hold for abelian categories as well as in the category of groups, the snake lemma does not. Indeed, arbitrary cokernels do not exist. However, one can replace cokernels by (left) cosets formula_15, formula_16, and formula_17.
Then the connecting homomorphism can still be defined, and one can write down a sequence as in the statement of the snake lemma. This will always be a chain complex, but it may fail to be exact. Exactness can be asserted, however, when the vertical sequences in the diagram are exact, that is, when the images of "a", "b", and "c" are normal subgroups.
Counterexample.
Consider the alternating group formula_18: this contains a subgroup isomorphic to the symmetric group formula_19, which in turn can be written as a semidirect product of cyclic groups: formula_20. This gives rise to the following diagram with exact rows:
formula_21
Note that the middle column is not exact: formula_22 is not a normal subgroup in the semidirect product.
Since formula_18 is simple, the right vertical arrow has trivial cokernel. Meanwhile the quotient group formula_23 is isomorphic to formula_22. The sequence in the statement of the snake lemma is therefore
formula_24,
which indeed fails to be exact.
In popular culture.
The proof of the snake lemma is taught by Jill Clayburgh's character at the very beginning of the 1980 film "It's My Turn".
References.
| [
{
"math_id": 0,
"text": "\\ker a ~{\\color{Gray}\\longrightarrow}~ \\ker b ~{\\color{Gray}\\longrightarrow}~ \\ker c ~\\overset{d}{\\longrightarrow}~ \\operatorname{coker}a ~{\\color{Gray}\\longrightarrow}~ \\operatorname{coker}b ~{\\color{Gray}\\longrightarrow}~ \\operatorname{coker}c"
},
{
"math_id": 1,
"text": "\\ker a ~{\\color{Gray}\\longrightarrow}~ \\ker b"
},
{
"math_id": 2,
"text": "\\operatorname{coker} b ~{\\color{Gray}\\longrightarrow}~ \\operatorname{coker} c"
},
{
"math_id": 3,
"text": "\\operatorname{coker}a = A'/\\operatorname{im}a"
},
{
"math_id": 4,
"text": "\\operatorname{coker}b = B'/\\operatorname{im}b"
},
{
"math_id": 5,
"text": "\\operatorname{coker}c = C'/\\operatorname{im}c"
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "V"
},
{
"math_id": 8,
"text": "k[t]"
},
{
"math_id": 9,
"text": "t:V \\to V"
},
{
"math_id": 10,
"text": "V \\otimes_{k[t]} k = V \\otimes_{k[t]} (k[t]/(t)) = V/tV = \\operatorname{coker}(t) ."
},
{
"math_id": 11,
"text": "0 \\to M \\to N \\to P \\to 0"
},
{
"math_id": 12,
"text": "M \\otimes_{k[t]} k \\to N \\otimes_{k[t]} k \\to P \\otimes_{k[t]} k \\to 0"
},
{
"math_id": 13,
"text": "0 \\to M \\otimes_{k[t]} k \\to N \\otimes_{k[t]} k \\to P \\otimes_{k[t]} k \\to 0"
},
{
"math_id": 14,
"text": "\\ker(t_M) \\to \\ker(t_N) \\to \\ker(t_P) \\to M \\otimes_{k[t]} k \\to N \\otimes_{k[t]} k \\to P \\otimes_{k[t]} k \\to 0"
},
{
"math_id": 15,
"text": "A'/\\operatorname{im} a"
},
{
"math_id": 16,
"text": "B'/\\operatorname{im} b"
},
{
"math_id": 17,
"text": "C'/\\operatorname{im} c"
},
{
"math_id": 18,
"text": "A_5"
},
{
"math_id": 19,
"text": "S_3"
},
{
"math_id": 20,
"text": "S_3\\simeq C_3\\rtimes C_2"
},
{
"math_id": 21,
"text": "\\begin{matrix} & 1 & \\to & C_3 & \\to & C_3 & \\to 1\\\\\n& \\downarrow && \\downarrow && \\downarrow \\\\\n1 \\to & 1 & \\to & S_3 & \\to & A_5\n\\end{matrix}\n"
},
{
"math_id": 22,
"text": "C_2"
},
{
"math_id": 23,
"text": "S_3/C_3"
},
{
"math_id": 24,
"text": "1 \\longrightarrow 1 \\longrightarrow 1 \\longrightarrow 1 \\longrightarrow C_2 \\longrightarrow 1"
}
] | https://en.wikipedia.org/wiki?curid=87089 |
8711785 | Ulam number | Mathematical sequence
In mathematics, the Ulam numbers comprise an integer sequence devised by and named after Stanislaw Ulam, who introduced it in 1964. The standard Ulam sequence (the (1, 2)-Ulam sequence) starts with "U"1 = 1 and "U"2 = 2. Then for "n" > 2, "U""n" is defined to be the smallest integer that is the sum of two distinct earlier terms in exactly one way and larger than all earlier terms.
Examples.
As a consequence of the definition, 3 is an Ulam number (1 + 2); and 4 is an Ulam number (1 + 3). (Here 2 + 2 is not a second representation of 4, because the previous terms must be distinct.) The integer 5 is not an Ulam number, because 5 = 1 + 4 = 2 + 3. The first few terms are
1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47, 48, 53, 57, 62, 69, 72, 77, 82, 87, 97, 99, 102, 106, 114, 126, 131, 138, 145, 148, 155, 175, 177, 180, 182, 189, 197, 206, 209, 219, 221, 236, 238, 241, 243, 253, 258, 260, 273, 282, ... (sequence in the OEIS).
There are infinitely many Ulam numbers. For, after the first "n" numbers in the sequence have already been determined, it is always possible to extend the sequence by one more element: "U""n"−1 + "U""n" is uniquely represented as a sum of two of the first "n" numbers, and there may be other smaller numbers that are also uniquely represented in this way, so the next element can be chosen as the smallest of these uniquely representable numbers.
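The definition translates directly into a brute-force generator; the following Python sketch is not optimized, but it reproduces the opening terms listed above.

```python
def ulam_numbers(count, u1=1, u2=2):
    """Generate the first `count` terms of the (u1, u2)-Ulam sequence: a
    candidate is appended if it exceeds all earlier terms and has exactly
    one representation as a sum of two distinct earlier terms."""
    seq = [u1, u2]
    candidate = seq[-1]
    while len(seq) < count:
        candidate += 1
        reps = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
                   if seq[i] + seq[j] == candidate)
        if reps == 1:
            seq.append(candidate)
    return seq

print(ulam_numbers(20))
# [1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47, 48, 53, 57, 62, 69]
```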
Ulam is said to have conjectured that the numbers have zero density, but they seem to have a density of approximately 0.07398.
Properties.
Apart from 1 + 2 = 3 any subsequent Ulam number cannot be the sum of its two prior consecutive Ulam numbers.
Proof: Assume that for "n" > 2, "U""n"−1 + "U""n" = "U""n"+1 is the required sum in only one way; then so does "U""n"−2 + "U""n" produce a sum in only one way, and it falls between "U""n" and "U""n"+1. This contradicts the condition that "U""n"+1 is the next smallest Ulam number.
For "n" > 2, any three consecutive Ulam numbers ("U""n"−1, "U""n", "U""n"+1) as integer sides will form a triangle.
Proof: The previous property states that for "n" > 2, "U""n"−2 + "U""n" ≥ "U""n"+1. Consequently "U""n"−1 + "U""n" > "U""n"+1, and because "U""n"−1 < "U""n" < "U""n"+1, the triangle inequality is satisfied.
The sequence of Ulam numbers forms a complete sequence.
Proof: By definition "U""n" = "U""j" + "U""k" where "j" < "k" < "n" and is the smallest integer that is the sum of two distinct smaller Ulam numbers in exactly one way. This means that for all "U""n" with "n" > 3, the greatest value that "U""k" can have is "U""n"−1 and, since by the first property "U""n" cannot equal "U""n"−1 + "U""n"−2, the greatest value that "U""j" can have is "U""n"−3.
Hence "U""n" ≤ "U""n"−1 + "U""n"−3 < 2"U""n"−1 and "U""1" = 1, "U""2" = 2, "U""3" = 3. This is a sufficient condition for Ulam numbers to be a complete sequence.
For every integer "n" > 1 there is always at least one Ulam number "U""j" such that "n" ≤ "U""j" < 2"n".
Proof: It has been proved that there are infinitely many Ulam numbers and they start at 1. Therefore for every integer "n" > 1 it is possible to find "j" such that "U""j"−1 ≤ "n" ≤ "U""j". From the proof above for "n" > 3, "U""j" ≤ "U""j"−1 + "U""j"−3 < 2"U""j"−1. Therefore "n" ≤ "U""j" < "2U""j"−1 ≤ 2"n". Also for "n" = 2 and 3 the property is true by calculation.
In any sequence of 5 consecutive positive integers {"i", "i" + 1..., "i" + 4}, "i" > 4 there can be a maximum of 2 Ulam numbers.
Proof: Assume that the sequence {"i", "i" + 1..., "i" + 4} has as its first value "i" = "U""j", an Ulam number; then it is possible that "i" + 1 is the next Ulam number "U""j"+1. Now consider "i" + 2; this cannot be the next Ulam number "U""j"+2 because it is not a unique sum of two previous terms: "i" + 2 = "U""j"+1 + "U"1 = "U""j" + "U"2. A similar argument exists for "i" + 3 and "i" + 4.
Inequalities.
Ulam numbers are pseudo-random and too irregular to have tight bounds. Nevertheless from the properties above, namely, at worst the next Ulam number "U""n"+1 ≤ "U""n" + "U""n"−2 and in any five consecutive positive integers at most two can be Ulam numbers, it can be stated that
"n"−7 ≤ "U""n" ≤ "N""n"+1 for "n" > 0,
where "N""n" are the numbers in Narayana’s cows sequence: 1,1,1,2,3,4,6,9,13,19... with the recurrence relation "N""n" = "N""n"−1 +"N""n"−3 that starts at "N"0.
Hidden structure.
It has been observed that the first 10 million Ulam numbers satisfy formula_0 except for the four elements formula_1 (this has now been verified for the first formula_2 Ulam numbers). Inequalities of this type are usually true for sequences exhibiting some form of periodicity but the Ulam sequence does not seem to be periodic and the phenomenon is not understood. It can be exploited to do a fast computation of the Ulam sequence (see External links).
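The observation can be checked directly against the terms listed earlier in this article; the following Python sketch simply reports which of those terms violate the inequality (per the claim above, only 2, 3, 47 and 69 should appear).

```python
import math

# First terms of the standard Ulam sequence, as listed earlier.
ulam = [1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47, 48, 53, 57,
        62, 69, 72, 77, 82, 87, 97, 99, 102, 106, 114, 126, 131, 138, 145,
        148, 155, 175, 177, 180, 182, 189, 197, 206, 209, 219, 221, 236,
        238, 241, 243, 253, 258, 260, 273, 282]

# Terms violating cos(2.5714474995 * u) < 0.
print([u for u in ulam if math.cos(2.5714474995 * u) >= 0])
```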
Generalizations.
The idea can be generalized as ("u", "v")-Ulam numbers by selecting different starting values ("u", "v"). A sequence of ("u", "v")-Ulam numbers is "regular" if the sequence of differences between consecutive numbers in the sequence is eventually periodic. When "v" is an odd number greater than three, the (2, "v")-Ulam numbers are regular. When "v" is congruent to 1 (mod 4) and at least five, the (4, "v")-Ulam numbers are again regular. However, the Ulam numbers themselves do not appear to be regular.
A sequence of numbers is said to be "s"-"additive" if each number in the sequence, after the initial 2"s" terms of the sequence, has exactly "s" representations as a sum of two previous numbers. Thus, the Ulam numbers and the ("u", "v")-Ulam numbers are 1-additive sequences.
If a sequence is formed by appending the largest number with a unique representation as a sum of two earlier numbers, instead of appending the smallest uniquely representable number, then the resulting sequence is the sequence of Fibonacci numbers.
Notes.
| [
{
"math_id": 0,
"text": "\\cos{(2.5714474995 \n\\, U_n)} < 0"
},
{
"math_id": 1,
"text": "\\left\\{2,3,47,69\\right\\}"
},
{
"math_id": 2,
"text": "10^9"
}
] | https://en.wikipedia.org/wiki?curid=8711785 |
8712675 | Hamming(7,4) | Linear error-correcting code
In coding theory, Hamming(7,4) is a linear error-correcting code that encodes four bits of data into seven bits by adding three parity bits. It is a member of a larger family of Hamming codes, but the term "Hamming code" often refers to this specific code that Richard W. Hamming introduced in 1950. At the time, Hamming worked at Bell Telephone Laboratories and was frustrated with the error-prone punched card reader, which is why he started working on error-correcting codes.
The Hamming code adds three additional check bits to every four data bits of the message. Hamming's (7,4) algorithm can correct any single-bit error, or detect all single-bit and two-bit errors. In other words, the minimal Hamming distance between any two correct codewords is 3, and received words can be correctly decoded if they are at a distance of at most one from the codeword that was transmitted by the sender. This means that for transmission medium situations where burst errors do not occur, Hamming's (7,4) code is effective (as the medium would have to be extremely noisy for two out of seven bits to be flipped).
In quantum information, the Hamming (7,4) is used as the base for the Steane code, a type of CSS code used for quantum error correction.
Goal.
The goal of the Hamming codes is to create a set of parity bits that overlap so that a single-bit error in a data bit "or" a parity bit can be detected and corrected. While multiple overlaps can be created, the general method is presented in Hamming codes.
This table describes which parity bits cover which transmitted bits in the encoded word. For example, "p"2 provides an even parity for bits 2, 3, 6, and 7. It also details which transmitted bit is covered by which parity bit by reading the column. For example, "d"1 is covered by "p"1 and "p"2 but not "p"3. This table will have a striking resemblance to the parity-check matrix (H) in the next section.
Furthermore, if the parity columns in the above table were removed, then the resemblance to rows 1, 2, and 4 of the code generator matrix (G) below would also be evident.
So, by picking the parity bit coverage correctly, all errors with a Hamming distance of 1 can be detected and corrected, which is the point of using a Hamming code.
Hamming matrices.
Hamming codes can be computed in linear algebra terms through matrices because Hamming codes are linear codes. For the purposes of Hamming codes, two Hamming matrices can be defined: the code generator matrix G and the parity-check matrix H:
formula_0
As mentioned above, rows 1, 2, and 4 of G should look familiar as they map the data bits to their parity bits:
The remaining rows (3, 5, 6, 7) map the data to their position in encoded form and there is only 1 in that row so it is an identical copy. In fact, these four rows are linearly independent and form the identity matrix (by design, not coincidence).
Also as mentioned above, the three rows of H should be familiar. These rows are used to compute the syndrome vector at the receiving end and if the syndrome vector is the null vector (all zeros) then the received word is error-free; if non-zero then the value indicates which bit has been flipped.
The four data bits — assembled as a vector p — is pre-multiplied by G (i.e., Gp) and taken modulo 2 to yield the encoded value that is transmitted. The original 4 data bits are converted to seven bits (hence the name "Hamming(7,4)") with three parity bits added to ensure even parity using the above data bit coverages. The first table above shows the mapping between each data and parity bit into its final bit position (1 through 7) but this can also be presented in a Venn diagram. The first diagram in this article shows three circles (one for each parity bit) and encloses data bits that each parity bit covers. The second diagram (shown to the right) is identical but, instead, the bit positions are marked.
For the remainder of this section, the following 4 bits (shown as a column vector) will be used as a running example:
formula_1
Channel coding.
Suppose we want to transmit this data (1011) over a noisy communications channel. Specifically, a binary symmetric channel, meaning that error corruption does not favor either zero or one (it is symmetric in causing errors). Furthermore, all source vectors are assumed to be equiprobable. We take the product of G and p, with entries modulo 2, to determine the transmitted codeword x:
formula_2
This means that 0110011 would be transmitted instead of 1011.
Programmers concerned about multiplication should observe that each entry of the result is the least significant bit of the population count of set bits obtained by bitwise ANDing the corresponding row and column together, rather than by a full multiplication.
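For concreteness, the encoding step can be reproduced with a few lines of NumPy, using the generator matrix and the example data vector from above; expressing it as a dense matrix product followed by a reduction modulo 2 is just one convenient realization.

```python
import numpy as np

# Code generator matrix (written as G^T above) and the example data vector p.
G_T = np.array([[1, 1, 0, 1],
                [1, 0, 1, 1],
                [1, 0, 0, 0],
                [0, 1, 1, 1],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])
p = np.array([1, 0, 1, 1])

# Encode: multiply and reduce modulo 2.
x = G_T @ p % 2
print(x)  # [0 1 1 0 0 1 1]
```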
In the adjacent diagram, the seven bits of the encoded word are inserted into their respective locations; from inspection it is clear that the parity of the red, green, and blue circles are even:
What will be shown shortly is that if, during transmission, a bit is flipped, then the parity of two or all three circles will be incorrect (or, if the flipped bit is a parity bit, exactly one circle will be incorrect), and the errored bit can be determined by knowing that the parity of all three of these circles should be even.
Parity check.
If no error occurs during transmission, then the received codeword r is identical to the transmitted codeword x:
formula_3
The receiver multiplies H and r to obtain the syndrome vector z, which indicates whether an error has occurred, and if so, for which codeword bit. Performing this multiplication (again, entries modulo 2):
formula_4
Since the syndrome z is the null vector, the receiver can conclude that no error has occurred. This conclusion is based on the observation that when the data vector is multiplied by G, a change of basis occurs into a vector subspace that is the kernel of H. As long as nothing happens during transmission, r will remain in the kernel of H and the multiplication will yield the null vector.
Error correction.
Otherwise, suppose that we can write
formula_5
modulo 2, where e"i" is the formula_6 unit vector, that is, a zero vector with a 1 in the formula_7 place, counting from 1.
formula_8
Thus the above expression signifies a single bit error in the formula_7 place.
Now, if we multiply this vector by H:
formula_9
Since x is the transmitted data, it is without error, and as a result, the product of H and x is zero. Thus
formula_10
Now, the product of H with the formula_7 standard basis vector picks out that column of H, so we know the error occurs in the place corresponding to that column of H.
For example, suppose we have introduced a bit error on bit #5
formula_11
The diagram to the right shows the bit error (shown in blue text) and the bad parity created (shown in red text) in the red and green circles. The bit error can be detected by computing the parity of the red, green, and blue circles. If a bad parity is detected then the data bit that overlaps "only" the bad parity circles is the bit with the error. In the above example, the red and green circles have bad parity so the bit corresponding to the intersection of red and green but not blue indicates the errored bit.
Now,
formula_12
which corresponds to the fifth column of H. Furthermore, the general algorithm used ("see Hamming code#General algorithm") was intentional in its construction so that the syndrome of 101 corresponds to the binary value of 5, which indicates the fifth bit was corrupted. Thus, an error has been detected in bit 5, and can be corrected (simply flip or negate its value):
formula_13
This corrected received value indeed matches the transmitted value x from above.
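The syndrome computation and the correction of the running example can likewise be reproduced in NumPy; reading the syndrome as a binary number with the first row as the least significant bit is how this particular parity-check matrix encodes the errored position.

```python
import numpy as np

# Parity-check matrix H and the received word r with a bit error at position 5.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
r = np.array([0, 1, 1, 0, 1, 1, 1])

z = H @ r % 2                              # syndrome, modulo 2
position = z[0] * 1 + z[1] * 2 + z[2] * 4  # 1-based bit position, 0 = no error
print(z, position)                         # [1 0 1] 5

if position:
    r[position - 1] ^= 1                   # flip the errored bit
print(r)                                   # [0 1 1 0 0 1 1]
```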
Decoding.
Once the received vector has been determined to be error-free or corrected if an error occurred (assuming only zero or one bit errors are possible) then the received data needs to be decoded back into the original four bits.
First, define a matrix R:
formula_14
Then the received value, pr, is equal to Rr. Using the running example from above
formula_15
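The extraction step can be written the same way, using the matrix R and the error-free codeword of the running example:

```python
import numpy as np

# R extracts the four data bits from a (corrected) received word.
R = np.array([[0, 0, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 0, 0],
              [0, 0, 0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0, 0, 1]])
r = np.array([0, 1, 1, 0, 0, 1, 1])

print(R @ r)  # [1 0 1 1], the original data bits
```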
Multiple bit errors.
It is not difficult to show that only single bit errors can be corrected using this scheme. Alternatively, Hamming codes can be used to detect single and double bit errors, by merely noting that the product of H is nonzero whenever errors have occurred. In the adjacent diagram, bits 4 and 5 were flipped. This yields only one circle (green) with an invalid parity but the errors are not recoverable.
However, the Hamming (7,4) and similar Hamming codes cannot distinguish between single-bit errors and two-bit errors. That is, two-bit errors appear the same as one-bit errors. If error correction is performed on a two-bit error the result will be incorrect.
Similarly, Hamming codes cannot detect or recover from an arbitrary three-bit error; consider the diagram: if the bit in the green circle (colored red) were 1, the parity checking would return the null vector, indicating that there is no error in the codeword.
All codewords.
Since the source is only 4 bits, there are only 16 possible transmitted words. Included is the eight-bit value if an extra parity bit is used ("see Hamming(7,4) code with an additional parity bit"). (The data bits are shown in blue; the parity bits are shown in red; and the extra parity bit shown in green.)
E7 lattice.
The Hamming(7,4) code is closely related to the E7 lattice and, in fact, can be used to construct it, or more precisely, its dual lattice E7∗ (a similar construction for E7 uses the dual code [7,3,4]2). In particular, taking the set of all vectors "x" in Z"7" with "x" congruent (modulo 2) to a codeword of Hamming(7,4), and rescaling by 1/√2, gives the lattice E7∗
formula_16
This is a particular instance of a more general relation between lattices and codes. For instance, the extended (8,4)-Hamming code, which arises from the addition of a parity bit, is also related to the E8 lattice.
References.
| [
{
"math_id": 0,
"text": "\\mathbf{G^T} := \\begin{pmatrix}\n 1 & 1 & 0 & 1 \\\\\n 1 & 0 & 1 & 1 \\\\\n 1 & 0 & 0 & 0 \\\\\n 0 & 1 & 1 & 1 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1 \\\\\n\\end{pmatrix}, \\qquad \\mathbf{H} := \\begin{pmatrix}\n 1 & 0 & 1 & 0 & 1 & 0 & 1 \\\\\n 0 & 1 & 1 & 0 & 0 & 1 & 1 \\\\\n 0 & 0 & 0 & 1 & 1 & 1 & 1 \\\\\n\\end{pmatrix}."
},
{
"math_id": 1,
"text": "\\mathbf{p} = \\begin{pmatrix} d_1 \\\\ d_2 \\\\ d_3 \\\\ d_4 \\\\ \\end{pmatrix} = \\begin{pmatrix} 1 \\\\ 0 \\\\ 1 \\\\ 1 \\end{pmatrix}"
},
{
"math_id": 2,
"text": "\\mathbf{x} = \\mathbf{G^T} \\mathbf{p} =\n\\begin{pmatrix}\n 1 & 1 & 0 & 1 \\\\\n 1 & 0 & 1 & 1 \\\\\n 1 & 0 & 0 & 0 \\\\\n 0 & 1 & 1 & 1 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1 \\\\\n\\end{pmatrix}\n\\begin{pmatrix} 1 \\\\ 0 \\\\ 1 \\\\ 1 \\end{pmatrix} =\n\\begin{pmatrix} 2 \\\\ 3 \\\\ 1 \\\\ 2 \\\\ 0 \\\\ 1 \\\\ 1 \\end{pmatrix} =\n\\begin{pmatrix} 0 \\\\ 1 \\\\ 1 \\\\ 0 \\\\ 0 \\\\ 1 \\\\ 1 \\end{pmatrix} "
},
{
"math_id": 3,
"text": "\\mathbf{r} = \\mathbf{x}"
},
{
"math_id": 4,
"text": "\\mathbf{z} = \\mathbf{H}\\mathbf{r} = \n\\begin{pmatrix}\n 1 & 0 & 1 & 0 & 1 & 0 & 1 \\\\\n 0 & 1 & 1 & 0 & 0 & 1 & 1 \\\\\n 0 & 0 & 0 & 1 & 1 & 1 & 1 \\\\\n\\end{pmatrix}\n\\begin{pmatrix} 0 \\\\ 1 \\\\ 1 \\\\ 0 \\\\ 0 \\\\ 1 \\\\ 1 \\end{pmatrix} =\n\\begin{pmatrix} 2 \\\\ 4 \\\\ 2 \\end{pmatrix} = \n\\begin{pmatrix} 0 \\\\ 0 \\\\ 0 \\end{pmatrix} "
},
{
"math_id": 5,
"text": "\\mathbf{r} = \\mathbf{x} +\\mathbf{e}_i"
},
{
"math_id": 6,
"text": "i_{th}"
},
{
"math_id": 7,
"text": "i^{th}"
},
{
"math_id": 8,
"text": "\\mathbf{e}_2 = \\begin{pmatrix} 0 \\\\ 1 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 0 \\end{pmatrix}"
},
{
"math_id": 9,
"text": "\\mathbf{Hr} = \\mathbf{H} \\left( \\mathbf{x}+\\mathbf{e}_i \\right) = \\mathbf{Hx} + \\mathbf{He}_i"
},
{
"math_id": 10,
"text": " \\mathbf{Hx} + \\mathbf{He}_i = \\mathbf{0} + \\mathbf{He}_i = \\mathbf{He}_i"
},
{
"math_id": 11,
"text": "\\mathbf{r} = \\mathbf{x}+\\mathbf{e}_5 = \\begin{pmatrix} 0 \\\\ 1 \\\\ 1 \\\\ 0 \\\\ 0 \\\\ 1 \\\\ 1 \\end{pmatrix} + \\begin{pmatrix} 0 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 1 \\\\ 0 \\\\ 0 \\end{pmatrix} = \\begin{pmatrix} 0 \\\\ 1 \\\\ 1 \\\\ 0 \\\\ 1 \\\\ 1 \\\\ 1 \\end{pmatrix}"
},
{
"math_id": 12,
"text": "\\mathbf{z} = \\mathbf{Hr} = \\begin{pmatrix} 1 & 0 & 1 & 0 & 1 & 0 & 1 \\\\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\\\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \\\\ \\end{pmatrix} \n\\begin{pmatrix} 0 \\\\ 1 \\\\ 1 \\\\ 0 \\\\ 1 \\\\ 1 \\\\ 1 \\end{pmatrix} = \\begin{pmatrix} 3 \\\\ 4 \\\\ 3 \\end{pmatrix} = \\begin{pmatrix} 1 \\\\ 0 \\\\ 1 \\end{pmatrix} "
},
{
"math_id": 13,
"text": " \\mathbf{r}_{\\text{corrected}} = \\begin{pmatrix} 0 \\\\ 1 \\\\ 1 \\\\ 0 \\\\ \\overline{1} \\\\ 1 \\\\ 1 \\end{pmatrix} = \\begin{pmatrix} 0 \\\\ 1 \\\\ 1 \\\\ 0 \\\\ 0 \\\\ 1 \\\\ 1 \\end{pmatrix} "
},
{
"math_id": 14,
"text": "\\mathbf{R} = \\begin{pmatrix}\n 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 1 \\\\\n\\end{pmatrix} "
},
{
"math_id": 15,
"text": "\\mathbf{p_r} = \\begin{pmatrix}\n 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 1 \\\\\n\\end{pmatrix}\n\\begin{pmatrix} 0 \\\\ 1 \\\\ 1 \\\\ 0 \\\\ 0 \\\\ 1 \\\\ 1 \\end{pmatrix} = \\begin{pmatrix} 1 \\\\ 0 \\\\ 1 \\\\ 1 \\end{pmatrix} "
},
{
"math_id": 16,
"text": " E_7^* = \\tfrac{1}{\\sqrt 2}\\left\\{x \\in \\mathbb Z^7 : x \\bmod 2 \\in [7,4,3]_2 \\right\\}."
}
] | https://en.wikipedia.org/wiki?curid=8712675 |
871280 | Observability | In control theory, visible state of a system
Observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs.
In control theory, the observability and controllability of a linear system are mathematical duals.
The concept of observability was introduced by the Hungarian-American engineer Rudolf E. Kálmán for linear dynamic systems. A dynamical system designed to estimate the state of a system from measurements of the outputs is called a "state observer" for that system, such as Kalman filters.
Definition.
Consider a physical system modeled in state-space representation. A system is said to be observable if, for every possible evolution of state and control vectors, the current state can be estimated using only the information from outputs (physically, this generally corresponds to information obtained by sensors). In other words, one can determine the behavior of the entire system from the system's outputs. On the other hand, if the system is not observable, there are state trajectories that are not distinguishable by only measuring the outputs.
Linear time-invariant systems.
For time-invariant linear systems in the state space representation, there are convenient tests to check whether a system is observable. Consider a SISO system with formula_0 state variables (see state space for details about MIMO systems) given by
formula_1
formula_2
Observability matrix.
If and only if the column rank of the "observability matrix", defined as
formula_3
is equal to formula_0, then the system is observable. The rationale for this test is that if formula_0 columns are linearly independent, then each of the formula_0 state variables is viewable through linear combinations of the output variables formula_4.
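A minimal NumPy sketch of this rank test is shown below; the two-state double-integrator example at the end is an illustrative assumption, not taken from the text.

```python
import numpy as np

def is_observable(A, C):
    """Kalman rank test: stack C, CA, ..., CA^(n-1) and check that the
    resulting observability matrix has rank n."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O) == n

# Hypothetical example: a double integrator with only the position measured.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
print(is_observable(A, C))  # True: the velocity can be inferred from the output
```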
Related concepts.
Observability index.
The "observability index" formula_5 of a linear time-invariant discrete system is the smallest natural number for which the following is satisfied: formula_6, where
formula_7
Unobservable subspace.
The "unobservable subspace" formula_8 of the linear system is the kernel of the linear map formula_9 given byformula_10where formula_11 is the set of continuous functions from formula_12 to formula_13. formula_8 can also be written as
formula_14
Since the system is observable if and only if formula_15, the system is observable if and only if formula_8 is the zero subspace.
The following properties are valid for the unobservable subspace: formula_16; formula_17; and formula_18.
Detectability.
A slightly weaker notion than observability is "detectability". A system is detectable if all the unobservable states are stable.
Detectability conditions are important in the context of sensor networks.
Linear time-varying systems.
Consider the continuous linear time-variant system
formula_19
formula_20
Suppose that the matrices formula_21, formula_22 and formula_23 are given as well as inputs and outputs formula_24 and formula_4 for all formula_25 then it is possible to determine formula_26 to within an additive constant vector which lies in the null space of formula_27 defined by
formula_28
where formula_29 is the state-transition matrix.
It is possible to determine a unique formula_26 if formula_27 is nonsingular. In fact, it is not possible to distinguish the initial state for formula_30 from that of formula_31 if formula_32 is in the null space of formula_27.
Note that the matrix formula_33 defined as above has the following properties: it is symmetric and, for formula_34, positive semidefinite, and it satisfies the relations
formula_35
formula_36
Observability matrix generalization.
The system is observable in formula_37 if and only if there exists an interval formula_37 in formula_12 such that the matrix formula_27 is nonsingular.
If formula_38 are analytic, then the system is observable in the interval [formula_39,formula_40] if there exists formula_41 and a positive integer "k" such that
formula_42
where formula_43 and formula_44 is defined recursively as
formula_45
Example.
Consider a system varying analytically in formula_46 with matrices formula_47 Then formula_48, and since this matrix has rank 3, the system is observable on every nontrivial interval of formula_12.
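This computation can be verified symbolically; the following Python sketch, assuming SymPy is available, builds N0, N1 and N2 by the recursion above, evaluates them at t = 0 and checks the rank.

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t, 1, 0],
               [0, t**3, 0],
               [0, 0, t**2]])
C = sp.Matrix([[1, 0, 1]])

# N_0 = C and N_{i+1} = N_i A + d/dt N_i, stacked and evaluated at t = 0.
N = [C]
for _ in range(2):
    N.append(N[-1] * A + sp.diff(N[-1], t))
stacked = sp.Matrix.vstack(*[Ni.subs(t, 0) for Ni in N])
print(stacked, stacked.rank())  # rank 3, so the system is observable
```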
Nonlinear systems.
Given the system formula_49, formula_50, where formula_51 is the state vector, formula_52 is the input vector, and formula_53 is the output vector, and where formula_54 are assumed to be smooth vector fields.
Define the observation space formula_55 to be the space containing all repeated Lie derivatives; then the system is observable in formula_56 if and only if formula_57, where
formula_58
Early criteria for observability in nonlinear dynamic systems were discovered by Griffith and Kumar, Kou, Elliot and Tarn, and Singh.
There also exist observability criteria for nonlinear time-varying systems.
Static systems and general topological spaces.
Observability may also be characterized for steady state systems (systems typically defined in terms of algebraic equations and inequalities), or more generally, for sets in formula_59. Just as observability criteria are used to predict the behavior of Kalman filters or other observers in the dynamic system case, observability criteria for sets in formula_59 are used to predict the behavior of data reconciliation and other static estimators. In the nonlinear case, observability can be characterized for individual variables, and also for local estimator behavior rather than just global behavior.
References.
| [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\dot{\\mathbf{x}}(t) = \\mathbf{A} \\mathbf{x}(t) + \\mathbf{B} \\mathbf{u}(t)"
},
{
"math_id": 2,
"text": "\\mathbf{y}(t) = \\mathbf{C} \\mathbf{x}(t) + \\mathbf{D} \\mathbf{u}(t)"
},
{
"math_id": 3,
"text": "\\mathcal{O}=\\begin{bmatrix} C \\\\ CA \\\\ CA^2 \\\\ \\vdots \\\\ CA^{n-1} \\end{bmatrix}"
},
{
"math_id": 4,
"text": "y"
},
{
"math_id": 5,
"text": "v"
},
{
"math_id": 6,
"text": "\\text{rank}{(\\mathcal{O}_v)} = \\text{rank}{(\\mathcal{O}_{v+1})}"
},
{
"math_id": 7,
"text": " \\mathcal{O}_v=\\begin{bmatrix} C \\\\ CA \\\\ CA^2 \\\\ \\vdots \\\\ CA^{v-1} \\end{bmatrix}."
},
{
"math_id": 8,
"text": "N"
},
{
"math_id": 9,
"text": "G"
},
{
"math_id": 10,
"text": " \\begin{align}\nG \\colon \\mathbb{R}^{n} &\\rightarrow \\mathcal{C}(\\mathbb{R};\\mathbb{R}^n) \\\\\n x(0) &\\mapsto C e^{A t} x(0)\n\\end{align} "
},
{
"math_id": 11,
"text": "\\mathcal{C}(\\mathbb{R};\\mathbb{R}^n)"
},
{
"math_id": 12,
"text": "\\mathbb{R}"
},
{
"math_id": 13,
"text": "\\mathbb{R}^n "
},
{
"math_id": 14,
"text": " N = \\bigcap_{k=0}^{n-1} \\ker(CA^k)= \\ker{\\mathcal{O}} "
},
{
"math_id": 15,
"text": "\\operatorname{rank}(\\mathcal{O}) = n"
},
{
"math_id": 16,
"text": " N \\subset Ke(C) "
},
{
"math_id": 17,
"text": " A(N) \\subset N "
},
{
"math_id": 18,
"text": " N= \\bigcup \\{ S \\subset R^n \\mid S \\subset Ke(C), A(S) \\subset N \\} "
},
{
"math_id": 19,
"text": "\\dot{\\mathbf{x}}(t) = A(t) \\mathbf{x}(t) + B(t) \\mathbf{u}(t) \\, "
},
{
"math_id": 20,
"text": "\\mathbf{y}(t) = C(t) \\mathbf{x}(t). \\, "
},
{
"math_id": 21,
"text": "A"
},
{
"math_id": 22,
"text": "B"
},
{
"math_id": 23,
"text": "C"
},
{
"math_id": 24,
"text": "u"
},
{
"math_id": 25,
"text": "t \\in [t_0,t_1];"
},
{
"math_id": 26,
"text": "x(t_0)"
},
{
"math_id": 27,
"text": "M(t_0,t_1)"
},
{
"math_id": 28,
"text": "M(t_0,t_1) = \\int_{t_0}^{t_1} \\varphi(t,t_0)^{T}C(t)^{T}C(t)\\varphi(t,t_0) \\, dt"
},
{
"math_id": 29,
"text": "\\varphi"
},
{
"math_id": 30,
"text": "x_1"
},
{
"math_id": 31,
"text": "x_2"
},
{
"math_id": 32,
"text": "x_1 - x_2"
},
{
"math_id": 33,
"text": "M"
},
{
"math_id": 34,
"text": "t_1 \\geq t_0"
},
{
"math_id": 35,
"text": "\\frac{d}{dt}M(t,t_1) = -A(t)^{T}M(t,t_1)-M(t,t_1)A(t)-C(t)^{T}C(t), \\; M(t_1,t_1) = 0"
},
{
"math_id": 36,
"text": "M(t_0,t_1) = M(t_0,t) + \\varphi(t,t_0)^T M(t,t_1)\\varphi(t,t_0)"
},
{
"math_id": 37,
"text": "[t_0,t_1]"
},
{
"math_id": 38,
"text": "A(t), C(t)"
},
{
"math_id": 39,
"text": "t_0"
},
{
"math_id": 40,
"text": "t_1"
},
{
"math_id": 41,
"text": "\\bar{t} \\in [t_0,t_1]"
},
{
"math_id": 42,
"text": " \\operatorname{rank} \\begin{bmatrix}\n & N_0(\\bar{t}) & \\\\ \n & N_1(\\bar{t}) & \\\\ \n & \\vdots & \\\\ \n & N_{k}(\\bar{t}) & \n\\end{bmatrix} = n, "
},
{
"math_id": 43,
"text": "N_0(t):=C(t)"
},
{
"math_id": 44,
"text": "N_i(t)"
},
{
"math_id": 45,
"text": "N_{i+1}(t) := N_i(t)A(t) + \\frac{\\mathrm{d}}{\\mathrm{d} t}N_i(t),\\ i = 0, \\ldots, k-1 "
},
{
"math_id": 46,
"text": " (-\\infty,\\infty) "
},
{
"math_id": 47,
"text": "A(t) = \\begin{bmatrix}\nt & 1 & 0\\\\ \n0 & t^{3} & 0\\\\ \n0 & 0 & t^{2} \n\\end{bmatrix},\\, C(t) = \\begin{bmatrix}\n1 & 0 & 1\n\\end{bmatrix}."
},
{
"math_id": 48,
"text": " \\begin{bmatrix}\nN_0(0) \\\\\nN_1(0) \\\\\nN_2(0) \n\\end{bmatrix}\n = \\begin{bmatrix}\n1 & 0 & 1 \\\\ \n0 & 1 & 0 \\\\ \n1& 0 & 0 \n\\end{bmatrix}"
},
{
"math_id": 49,
"text": "\\dot{x} = f(x) + \\sum_{j=1}^mg_j(x)u_j "
},
{
"math_id": 50,
"text": "y_i = h_i(x), i \\in p"
},
{
"math_id": 51,
"text": "x \\in \\mathbb{R}^n"
},
{
"math_id": 52,
"text": "u \\in \\mathbb{R}^m"
},
{
"math_id": 53,
"text": "y \\in \\mathbb{R}^p"
},
{
"math_id": 54,
"text": "f,g,h"
},
{
"math_id": 55,
"text": "\\mathcal{O}_s"
},
{
"math_id": 56,
"text": "x_0"
},
{
"math_id": 57,
"text": "\\dim(d\\mathcal{O}_s(x_0)) = n"
},
{
"math_id": 58,
"text": "d\\mathcal{O}_s(x_0) = \\operatorname{span}(dh_1(x_0), \\ldots , dh_p(x_0), dL_{v_i}L_{v_{i-1}}, \\ldots , L_{v_1}h_j(x_0)),\\ j\\in p, k=1,2,\\ldots."
},
{
"math_id": 59,
"text": "\\mathbb{R}^n"
}
] | https://en.wikipedia.org/wiki?curid=871280 |
8714796 | Chou–Fasman method | The Chou–Fasman method is an empirical technique for the prediction of secondary structures in proteins, originally developed in the 1970s by Peter Y. Chou and Gerald D. Fasman. The method is based on analyses of the relative frequencies of each amino acid in alpha helices, beta sheets, and turns based on known protein structures solved with X-ray crystallography. From these frequencies a set of probability parameters were derived for the appearance of each amino acid in each secondary structure type, and these parameters are used to predict the probability that a given sequence of amino acids would form a helix, a beta strand, or a turn in a protein. The method is at most about 50–60% accurate in identifying correct secondary structures, which is significantly less accurate than the modern machine learning–based techniques.
Amino acid propensities.
The original Chou–Fasman parameters found some strong tendencies among individual amino acids to prefer one type of secondary structure over others. Alanine, glutamate, leucine, and methionine were identified as helix formers, while proline and glycine, due to the unique conformational properties of their peptide bonds, commonly end a helix. The original Chou–Fasman parameters were derived from a very small and non-representative sample of protein structures due to the small number of such structures that were known at the time of their original work. These original parameters have since been shown to be unreliable and have been updated from a current dataset, along with modifications to the initial algorithm.
The Chou–Fasman method takes into account only the probability that each individual amino acid will appear in a helix, strand, or turn. Unlike the more complex GOR method, it does not reflect the conditional probabilities of an amino acid to form a particular secondary structure given that its neighbors already possess that structure. This lack of cooperativity increases its computational efficiency but decreases its accuracy, since the propensities of individual amino acids are often not strong enough to render a definitive prediction.
Algorithm.
The Chou–Fasman method predicts helices and strands in a similar fashion, first searching linearly through the sequence for a "nucleation" region of high helix or strand probability and then extending the region until a subsequent four-residue window carries a probability of less than 1. As originally described, four out of any six contiguous amino acids were sufficient to nucleate a helix, and three out of any five contiguous amino acids were sufficient to nucleate a sheet. The probability thresholds for helix and strand nucleation are constant but not necessarily equal; originally 1.03 was set as the helix cutoff and 1.00 as the strand cutoff.
Turns are also evaluated in four-residue windows, but are calculated using a multi-step procedure because many turn regions contain amino acids that could also appear in helix or sheet regions. Four-residue turns also have their own characteristic amino acids; proline and glycine are both common in turns. A turn is predicted only if the turn probability is greater than the helix or sheet probabilities "and" a probability value based on the positions of particular amino acids in the turn exceeds a predetermined threshold. The turn probability p(t) is determined as:
formula_0
where "j" is the position of the amino acid in the four-residue window. If p(t) exceeds an arbitrary cutoff value (originally 7.5e–3), the mean of the p(j)'s exceeds 1, and p(t) exceeds the alpha helix and beta sheet probabilities for that window, then a turn is predicted. If the first two conditions are met but the probability of a beta sheet p(b) exceeds p(t), then a sheet is predicted instead.
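As an illustration of this product rule, the toy sketch below computes p(t) for a four-residue window. The per-residue turn parameters are made-up placeholders, not values from the published Chou–Fasman tables; only the structure of the calculation follows the formula.

```python
# Toy illustration of the four-residue turn probability p(t).
# TURN_PARAM holds hypothetical per-residue turn parameters, NOT the
# published Chou-Fasman values.
TURN_PARAM = {'G': 0.16, 'P': 0.15, 'S': 0.14, 'A': 0.06}

def turn_probability(window, table, default=0.05):
    """p(t) = p_t(j) * p_t(j+1) * p_t(j+2) * p_t(j+3) over a 4-residue window."""
    assert len(window) == 4
    p = 1.0
    for residue in window:
        p *= table.get(residue, default)
    return p

p_t = turn_probability("GPSG", TURN_PARAM)
# Per the text above, a turn would be called only if p_t exceeds the cutoff,
# the mean per-position parameter exceeds 1, and p_t beats the helix and
# sheet scores for the same window.
print(p_t)
```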
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\np(t) = p_{t}(j) \\times p_{t}(j+1) \\times p_{t}(j+2) \\times p_{t}(j+3)\n"
}
] | https://en.wikipedia.org/wiki?curid=8714796 |
8714937 | Discharging method (discrete mathematics) | Technique used to prove lemmas in structural graph theory
The discharging method is a technique used to prove lemmas in structural graph theory. Discharging is best known for its central role in the proof of the four color theorem. The discharging method is used to prove that every graph in a certain class contains some subgraph from a specified list. The presence of the desired subgraph is then often used to prove a coloring result.
Most commonly, discharging is applied to planar graphs.
Initially, a "charge" is assigned to each face and each vertex of the graph.
The charges are assigned so that they sum to a small positive number. During the "Discharging Phase" the charge at each face or vertex may be redistributed to nearby faces and vertices, as required by a set of discharging rules. However, each discharging rule maintains the sum of the charges. The rules are designed so that after the discharging phase each face or vertex with positive charge lies in one of the desired subgraphs. Since the sum of the charges is positive, some face or vertex must have a positive charge. Many discharging arguments use one of a few standard initial charge functions (these are listed below). Successful application of the discharging method requires creative design of discharging rules.
An example.
In 1904, Wernicke introduced the discharging method to prove the following theorem, which was part of an attempt to prove the four color theorem.
Theorem: If a planar graph has minimum degree 5, then it either has an edge
with endpoints both of degree 5 or one with endpoints of degrees 5 and 6.
Proof:
We use formula_0, formula_1, and formula_2 to denote the sets of vertices, faces, and edges, respectively.
We call an edge "light" if its endpoints are both of degree 5 or are of degrees 5 and 6.
Embed the graph in the plane. To prove the theorem, it suffices to consider planar triangulations: we arbitrarily add edges to the graph until it is a triangulation, and, as shown next, any light edge of the triangulation must already belong to the original graph.
Since the original graph had minimum degree 5, each endpoint of a new edge has degree at least 6.
So, none of the new edges are light.
Thus, if the triangulation contains a light edge, then that edge must have been in the original graph.
We give the charge formula_3 to each vertex formula_4 and the charge formula_5 to each face formula_6, where formula_7 denotes the degree of a vertex and the length of a face. (Since the graph is a triangulation, the charge on each face is 0.) Recall that the sum of all the degrees in the graph is equal to twice the number of edges; similarly, the sum of all the face lengths equals twice the number of edges. Using Euler's Formula, it's easy to see that the sum of all the charges is 12:
formula_8
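As a quick numerical sanity check of this identity (not part of the proof), the sketch below evaluates both sums on the icosahedron, a planar triangulation in which every vertex has degree 5; it assumes the networkx package is available.

```python
# Check that sum_f (6 - 2 d(f)) + sum_v (6 - d(v)) = 12 on the icosahedron.
import networkx as nx

G = nx.icosahedral_graph()
V, E = G.number_of_nodes(), G.number_of_edges()
F = E - V + 2                        # Euler's formula for a connected planar graph

vertex_charge = sum(6 - d for _, d in G.degree())
face_charge = F * (6 - 2 * 3)        # every face of a triangulation has length 3
print(vertex_charge + face_charge)   # 12
```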
We use only a single discharging rule: each vertex of degree 5 gives a charge of 1/5 to each of its neighbors.
We consider which vertices could have positive final charge.
The only vertices with positive initial charge are vertices of degree 5.
Each degree 5 vertex gives a charge of 1/5 to each neighbor.
So, each vertex is given a total charge of at most formula_9.
The initial charge of each vertex v is formula_3.
So, the final charge of each vertex is at most formula_10. Hence, a vertex can only have positive final charge if it has degree at most 7. Now we show that each vertex with positive final charge is adjacent to an endpoint of a light edge.
If a vertex formula_4 has degree 5 or 6 and has positive final charge, then formula_4 received charge from an adjacent degree 5 vertex formula_11, so edge formula_12 is light. If a vertex formula_4 has degree 7 and has positive final charge, then formula_4 received charge from at least 6 adjacent degree 5 vertices. Since the graph is a triangulation, the vertices adjacent to formula_4 must form a cycle, and since formula_4 has only 7 neighbors, of which at least 6 have degree 5, these degree 5 neighbors cannot all be separated from one another by vertices of higher degree; at least two of the degree 5 neighbors of formula_4 must be adjacent to each other on this cycle. This yields the light edge.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "F"
},
{
"math_id": 2,
"text": "E"
},
{
"math_id": 3,
"text": "6-d(v)"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "6-2d(f)"
},
{
"math_id": 6,
"text": "f"
},
{
"math_id": 7,
"text": "d(x)"
},
{
"math_id": 8,
"text": "\n\\begin{align}\n\\sum_{f\\in F} 6-2d(f) + \\sum_{v\\in V} 6-d(v) =& \\\\ \n\n6|F| - 2(2|E|) + 6|V| - 2|E| =& \\\\\n\n6(|F| - |E| + |V|) = &&12.\n\\end{align}\n"
},
{
"math_id": 9,
"text": "d(v)/5"
},
{
"math_id": 10,
"text": "6-4d(v)/5"
},
{
"math_id": 11,
"text": "u"
},
{
"math_id": 12,
"text": "uv"
}
] | https://en.wikipedia.org/wiki?curid=8714937 |
871681 | Mixture model | Statistical concept
In statistics, a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. However, while problems associated with "mixture distributions" relate to deriving the properties of the overall population from those of the sub-populations, "mixture models" are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identity information. Mixture models are used for clustering, under the name model-based clustering, and also for density estimation.
Mixture models should not be confused with models for compositional data, i.e., data whose components are constrained to sum to a constant value (1, 100%, etc.). However, compositional models can be thought of as mixture models, where members of the population are sampled at random. Conversely, mixture models can be thought of as compositional models, where the total size of the population has been normalized to 1.
Structure.
General mixture model.
A typical finite-dimensional mixture model is a hierarchical model consisting of the following components: "N" observed random variables, each distributed according to a mixture of "K" components, with the components belonging to the same parametric family of distributions but with different parameters; "N" corresponding random latent variables specifying the identity of the mixture component of each observation, each distributed according to a "K"-dimensional categorical distribution; a set of "K" mixture weights, which are probabilities that sum to 1; and a set of "K" parameters, each specifying the parameter of the corresponding mixture component (in many cases, each "parameter" is actually a set of parameters).
In addition, in a Bayesian setting, the mixture weights and parameters will themselves be random variables, and prior distributions will be placed over the variables. In such a case, the weights are typically viewed as a "K"-dimensional random vector drawn from a Dirichlet distribution (the conjugate prior of the categorical distribution), and the parameters will be distributed according to their respective conjugate priors.
Mathematically, a basic parametric mixture model can be described as follows:
formula_0
In a Bayesian setting, all parameters are associated with random variables, as follows:
formula_1
This characterization uses "F" and "H" to describe arbitrary distributions over observations and parameters, respectively. Typically "H" will be the conjugate prior of "F". The two most common choices of "F" are Gaussian aka "normal" (for real-valued observations) and categorical (for discrete observations). Other parametric families are also commonly used for the mixture components, for example vectors of Bernoulli values (as in the handwriting recognition example below) or log-normal distributions (as suggested in the house-price example below).
Specific examples.
Gaussian mixture model.
A typical non-Bayesian Gaussian mixture model looks like this:
formula_2
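As a concrete illustration of this generative process, the following sketch (not from the article) draws samples from a two-component univariate Gaussian mixture: a component label is drawn from the categorical distribution with weights formula_17, and the observation is then drawn from that component's normal distribution. All parameter values are arbitrary choices for the example.

```python
# Sample from a two-component Gaussian mixture by first drawing latent labels.
import numpy as np

rng = np.random.default_rng(0)
phi = np.array([0.3, 0.7])      # mixture weights (illustrative values)
mu = np.array([-2.0, 3.0])      # component means
sigma = np.array([0.5, 1.0])    # component standard deviations

z = rng.choice(len(phi), size=1000, p=phi)   # latent component labels
x = rng.normal(mu[z], sigma[z])              # observed values
```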
A Bayesian version of a Gaussian mixture model is as follows:
formula_3formula_4
Multivariate Gaussian mixture model.
A Bayesian Gaussian mixture model is commonly extended to fit a vector of unknown parameters (denoted in bold), or multivariate normal distributions. In a multivariate distribution (i.e. one modelling a vector formula_5 with "N" random variables) one may model a vector of parameters (such as several observations of a signal or patches within an image) using a Gaussian mixture model prior distribution on the vector of estimates given by
formula_6
where the "ith" vector component is characterized by normal distributions with weights formula_7, means formula_8 and covariance matrices formula_9. To incorporate this prior into a Bayesian estimation, the prior is multiplied with the known distribution formula_10 of the data formula_5 conditioned on the parameters formula_11 to be estimated. With this formulation, the posterior distribution formula_12 is "also" a Gaussian mixture model of the form
formula_13
with new parameters formula_14 and formula_15 that are updated using the EM algorithm.
Although EM-based parameter updates are well-established, providing the initial estimates for these parameters is currently an area of active research. Note that this formulation yields a closed-form solution to the complete posterior distribution. Estimations of the random variable formula_11 may be obtained via one of several estimators, such as the mean or maximum of the posterior distribution.
Such distributions are useful for assuming patch-wise shapes of images and clusters, for example. In the case of image representation, each Gaussian may be tilted, expanded, and warped according to the covariance matrices formula_9. One Gaussian distribution of the set is fit to each patch (usually of size 8x8 pixels) in the image. Notably, any distribution of points around a cluster (see "k"-means) may be accurately modeled given enough Gaussian components, but scarcely over "K"=20 components are needed to accurately model a given image distribution or cluster of data.
Categorical mixture model.
A typical non-Bayesian mixture model with categorical observations looks like this:
The random variables:
formula_26
A typical Bayesian mixture model with categorical observations looks like this:
The random variables:
formula_31
Examples.
A financial model.
Financial returns often behave differently in normal situations and during crisis times. A mixture model for return data therefore seems reasonable. Sometimes the model used is a jump-diffusion model, or a mixture of two normal distributions.
House prices.
Assume that we observe the prices of "N" different houses. Different types of houses in different neighborhoods will have vastly different prices, but the price of a particular type of house in a particular neighborhood (e.g., three-bedroom house in moderately upscale neighborhood) will tend to cluster fairly closely around the mean. One possible model of such prices would be to assume that the prices are accurately described by a mixture model with "K" different components, each distributed as a normal distribution with unknown mean and variance, with each component specifying a particular combination of house type/neighborhood. Fitting this model to observed prices, e.g., using the expectation-maximization algorithm, would tend to cluster the prices according to house type/neighborhood and reveal the spread of prices in each type/neighborhood. (Note that for values such as prices or incomes that are guaranteed to be positive and which tend to grow exponentially, a log-normal distribution might actually be a better model than a normal distribution.)
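A hedged sketch of fitting such a model with scikit-learn's EM-based implementation is shown below; the data are synthetic log-prices (following the log-normal remark above) and the three components stand in for house type/neighborhood combinations, so none of the numbers come from real listings.

```python
# Fit a 3-component Gaussian mixture to synthetic log-prices with EM.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
log_prices = np.concatenate([
    rng.normal(12.0, 0.15, 400),   # one house-type/neighborhood cluster
    rng.normal(13.1, 0.20, 300),
    rng.normal(13.9, 0.10, 300),
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(log_prices)
print(gmm.means_.ravel())   # roughly 12.0, 13.1, 13.9
print(gmm.weights_)         # roughly 0.4, 0.3, 0.3
```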
Topics in a document.
Assume that a document is composed of "N" different words from a total vocabulary of size "V", where each word corresponds to one of "K" possible topics. The distribution of such words could be modelled as a mixture of "K" different "V"-dimensional categorical distributions. A model of this sort is commonly termed a topic model. Note that expectation maximization applied to such a model will typically fail to produce realistic results, due (among other things) to the excessive number of parameters. Some sorts of additional assumptions are typically necessary to get good results. Typically two sorts of additional components are added to the model:
*For example, a Markov chain could be placed on the topic identities (i.e., the latent variables specifying the mixture component of each observation), corresponding to the fact that nearby words belong to similar topics. (This results in a hidden Markov model, specifically one where a prior distribution is placed over state transitions that favors transitions that stay in the same state.)
*Another possibility is the latent Dirichlet allocation model, which divides up the words into "D" different documents and assumes that in each document only a small number of topics occur with any frequency.
Handwriting recognition.
The following example is based on an example in Christopher M. Bishop, "Pattern Recognition and Machine Learning".
Imagine that we are given an "N"×"N" black-and-white image that is known to be a scan of a hand-written digit between 0 and 9, but we don't know which digit is written. We can create a mixture model with formula_32 different components, where each component is a vector of size formula_33 of Bernoulli distributions (one per pixel). Such a model can be trained with the expectation-maximization algorithm on an unlabeled set of hand-written digits, and will effectively cluster the images according to the digit being written. The same model could then be used to recognize the digit of another image simply by holding the parameters constant, computing the probability of the new image for each possible digit (a trivial calculation), and returning the digit that generated the highest probability.
Assessing projectile accuracy (a.k.a. circular error probable, CEP).
Mixture models apply in the problem of directing multiple projectiles at a target (as in air, land, or sea defense applications), where the physical and/or statistical characteristics of the projectiles differ within the multiple projectiles. An example might be shots from multiple munitions types or shots from multiple locations directed at one target. The combination of projectile types may be characterized as a Gaussian mixture model. Further, a well-known measure of accuracy for a group of projectiles is the circular error probable (CEP), which is the number "R" such that, on average, half of the group of projectiles falls within the circle of radius "R" about the target point. The mixture model can be used to determine (or estimate) the value "R". The mixture model properly captures the different types of projectiles.
Direct and indirect applications.
The financial example above is one direct application of the mixture model, a situation in which we assume an underlying mechanism so that each observation belongs to one of some number of different sources or categories. This underlying mechanism may or may not, however, be observable. In this form of mixture, each of the sources is described by a component probability density function, and its mixture weight is the probability that an observation comes from this component.
In an indirect application of the mixture model we do not assume such a mechanism. The mixture model is simply used for its mathematical flexibility. For example, a mixture of two normal distributions with different means may result in a density with two modes, which is not modeled by standard parametric distributions. Another example is given by the possibility of mixture distributions to model fatter tails than the basic Gaussian ones, so as to be a candidate for modeling more extreme events. When combined with dynamical consistency, this approach has been applied to financial derivatives valuation in the presence of the volatility smile in the context of local volatility models.
Predictive Maintenance.
Mixture model-based clustering is also predominantly used in identifying the state of the machine in predictive maintenance. Density plots are used to analyze the density of high dimensional features. If multi-modal densities are observed, then it is assumed that a finite set of densities are formed by a finite set of normal mixtures. A multivariate Gaussian mixture model is used to cluster the feature data into k groups, where k represents each state of the machine. The machine state can be a normal state, a power off state, or a faulty state. Each formed cluster can be diagnosed using techniques such as spectral analysis. In recent years, this has also been widely used in other areas, such as early fault detection.
Fuzzy image segmentation.
In image processing and computer vision, traditional image segmentation models often assign to one pixel only one exclusive pattern. In fuzzy or soft segmentation, any pattern can have certain "ownership" over any single pixel. If the patterns are Gaussian, fuzzy segmentation naturally results in Gaussian mixtures. Combined with other analytic or geometric tools (e.g., phase transitions over diffusive boundaries), such spatially regularized mixture models could lead to more realistic and computationally efficient segmentation methods.
Point set registration.
Probabilistic mixture models such as Gaussian mixture models (GMM) are used to resolve point set registration problems in image processing and computer vision fields. For pair-wise point set registration, one point set is regarded as the centroids of mixture models, and the other point set is regarded as data points (observations). State-of-the-art methods are e.g. coherent point drift (CPD)
and Student's t-distribution mixture models (TMM).
Results of recent research demonstrate the superiority of hybrid mixture models
(e.g. combining Student's t-distribution and Watson distribution/Bingham distribution to model spatial positions and axis orientations separately) compared to CPD and TMM, in terms of inherent robustness, accuracy and discriminative capacity.
Identifiability.
Identifiability refers to the existence of a unique characterization for any one of the models in the class (family) being considered. Estimation procedures may not be well-defined and asymptotic theory may not hold if a model is not identifiable.
Example.
Let "J" be the class of all binomial distributions with "n" = 2. Then a mixture of two members of "J" would have
formula_34
formula_35
and "p"2 = 1 − "p"0 − "p"1. Clearly, given "p"0 and "p"1, it is not possible to determine the above mixture model uniquely, as there are three parameters ("π", "θ"1, "θ"2) to be determined.
Definition.
Consider a mixture of parametric distributions of the same class. Let
formula_36
be the class of all component distributions. Then the convex hull "K" of "J" defines the class of all finite mixtures of distributions in "J":
formula_37
"K" is said to be identifiable if all its members are unique, that is, given two members "p" and "p′" in "K", being mixtures of "k" distributions and "k′" distributions respectively in "J", we have "p" = "p′" if and only if, first of all, "k" = "k′" and secondly we can reorder the summations such that "ai" = "ai"′ and "ƒi" = "ƒi"′ for all "i".
Parameter estimation and system identification.
Parametric mixture models are often used when we know the distribution "Y" and we can sample from "X", but we would like to determine the "ai" and "θi" values. Such situations can arise in studies in which we sample from a population that is composed of several distinct subpopulations.
It is common to think of probability mixture modeling as a missing data problem. One way to understand this is to assume that the data points under consideration have "membership" in one of the distributions we are using to model the data. When we start, this membership is unknown, or missing. The job of estimation is to devise appropriate parameters for the model functions we choose, with the connection to the data points being represented as their membership in the individual model distributions.
A variety of approaches to the problem of mixture decomposition have been proposed, many of which focus on maximum likelihood methods such as expectation maximization (EM) or maximum "a posteriori" estimation (MAP). Generally these methods consider separately the questions of system identification and parameter estimation; methods to determine the number and functional form of components within a mixture are distinguished from methods to estimate the corresponding parameter values. Some notable departures are the graphical methods as outlined in Tarter and Lock and more recently minimum message length (MML) techniques such as Figueiredo and Jain and to some extent the moment matching pattern analysis routines suggested by McWilliam and Loh (2009).
Expectation maximization (EM).
Expectation maximization (EM) is seemingly the most popular technique used to determine the parameters of a mixture with an "a priori" given number of components. This is a particular way of implementing maximum likelihood estimation for this problem. EM is of particular appeal for finite normal mixtures where closed-form expressions are possible such as in the following iterative algorithm by Dempster "et al." (1977)
formula_38
formula_39
formula_40
with the posterior probabilities
formula_41
Thus on the basis of the current estimate for the parameters, the conditional probability for a given observation "x"("t") being generated from state "s" is determined for each "t" = 1, …, "N"; "N" being the sample size. The parameters are then updated such that the new component weights correspond to the average conditional probability and each component mean and covariance is the component-specific weighted average of the mean and covariance of the entire sample.
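The update equations above translate almost line-for-line into NumPy/SciPy. The sketch below is an illustrative implementation, not taken from a particular reference: `em_step` computes the posterior probabilities h and then the new weights, means and covariances; the toy data, the choice of two components, and the initial guesses are assumptions made for the example.

```python
# One EM iteration for a Gaussian mixture, following the formulas above.
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, w, mu, Sigma):
    n_comp = len(w)
    # E-step: posterior probabilities h_s(t) = w_s p_s(x_t) / sum_i w_i p_i(x_t)
    dens = np.column_stack([w[s] * multivariate_normal.pdf(X, mu[s], Sigma[s])
                            for s in range(n_comp)])
    h = dens / dens.sum(axis=1, keepdims=True)
    # M-step: new weights, means and covariances (responsibility-weighted averages)
    w_new = h.mean(axis=0)
    mu_new = (h.T @ X) / h.sum(axis=0)[:, None]
    Sigma_new = []
    for s in range(n_comp):
        d = X - mu_new[s]
        Sigma_new.append((h[:, s, None] * d).T @ d / h[:, s].sum())
    return w_new, mu_new, np.array(Sigma_new)

# Usage on two synthetic 2-D blobs, starting from rough initial guesses.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1.0, (200, 2)),
               rng.normal([5, 5], 1.0, (200, 2))])
w, mu = np.array([0.5, 0.5]), np.array([[1.0, 1.0], [4.0, 4.0]])
Sigma = np.array([np.eye(2), np.eye(2)])
for _ in range(20):
    w, mu, Sigma = em_step(X, w, mu, Sigma)
print(mu)   # close to [0, 0] and [5, 5]
```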
Dempster also showed that each successive EM iteration will not decrease the likelihood, a property not shared by other gradient based maximization techniques. Moreover, EM naturally embeds within it constraints on the probability vector, and for sufficiently large sample sizes positive definiteness of the covariance iterates. This is a key advantage since explicitly constrained methods incur extra computational costs to check and maintain appropriate values. Theoretically EM is a first-order algorithm and as such converges slowly to a fixed-point solution. Redner and Walker (1984) make this point arguing in favour of superlinear and second order Newton and quasi-Newton methods and reporting slow convergence in EM on the basis of their empirical tests. They do concede that convergence in likelihood was rapid even if convergence in the parameter values themselves was not. The relative merits of EM and other algorithms vis-à-vis convergence have been discussed in other literature.
Other common objections to the use of EM are that it has a propensity to spuriously identify local maxima, as well as displaying sensitivity to initial values. One may address these problems by evaluating EM at several initial points in the parameter space but this is computationally costly and other approaches, such as the annealing EM method of Ueda and Nakano (1998) (in which the initial components are essentially forced to overlap, providing a less heterogeneous basis for initial guesses), may be preferable.
Figueiredo and Jain note that convergence to 'meaningless' parameter values obtained at the boundary (where regularity conditions breakdown, e.g., Ghosh and Sen (1985)) is frequently observed when the number of model components exceeds the optimal/true one. On this basis they suggest a unified approach to estimation and identification in which the initial "n" is chosen to greatly exceed the expected optimal value. Their optimization routine is constructed via a minimum message length (MML) criterion that effectively eliminates a candidate component if there is insufficient information to support it. In this way it is possible to systematize reductions in "n" and consider estimation and identification jointly.
The expectation step.
With initial guesses for the parameters of our mixture model, "partial membership" of each data point in each constituent distribution is computed by calculating expectation values for the membership variables of each data point. That is, for each data point "xj" and distribution "Yi", the membership value "y""i", "j" is:
formula_42
The maximization step.
With expectation values in hand for group membership, plug-in estimates are recomputed for the distribution parameters.
The mixing coefficients "ai" are the means of the membership values over the "N" data points.
formula_43
The component model parameters "θi" are also calculated by expectation maximization using data points "xj" that have been weighted using the membership values. For example, if "θ" is a mean "μ"
formula_44
With new estimates for "ai" and the "θi"'s, the expectation step is repeated to recompute new membership values. The entire procedure is repeated until model parameters converge.
Markov chain Monte Carlo.
As an alternative to the EM algorithm, the mixture model parameters can be deduced using posterior sampling as indicated by Bayes' theorem. This is still regarded as an incomplete data problem whereby membership of data points is the missing data. A two-step iterative procedure known as Gibbs sampling can be used.
The previous example of a mixture of two Gaussian distributions can demonstrate how the method works. As before, initial guesses of the parameters for the mixture model are made. Instead of computing partial memberships for each elemental distribution, a membership value for each data point is drawn from a Bernoulli distribution (that is, it will be assigned to either the first or the second Gaussian). The Bernoulli parameter "θ" is determined for each data point on the basis of one of the constituent distributions. Draws from the distribution generate membership associations for each data point. Plug-in estimators can then be used as in the M step of EM to generate a new set of mixture model parameters, and the binomial draw step repeated.
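A rough sketch of this two-step sampler for a two-component univariate Gaussian mixture is given below. To keep it short, the mixing weight and the common variance are held fixed (a simplifying assumption, not a full Bayesian treatment), so only the labels and the two component means are resampled; the toy data are synthetic.

```python
# Simplified Gibbs sampler for a two-component Gaussian mixture.
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 150)])  # toy data
mu = np.array([-1.0, 1.0])      # initial guesses for the two means
w, sigma2 = 0.5, 1.0            # fixed mixing weight and variance (assumption)

for _ in range(200):
    # Step 1: draw each label from its Bernoulli full conditional.
    p0 = w * np.exp(-(x - mu[0]) ** 2 / (2 * sigma2))
    p1 = (1 - w) * np.exp(-(x - mu[1]) ** 2 / (2 * sigma2))
    z = (rng.random(x.size) < p1 / (p0 + p1)).astype(int)
    # Step 2: redraw each mean given its currently assigned points (flat prior).
    for k in (0, 1):
        xk = x[z == k]
        if xk.size:
            mu[k] = rng.normal(xk.mean(), np.sqrt(sigma2 / xk.size))
print(mu)   # approaches the true means -2 and 3
```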
Moment matching.
The method of moment matching is one of the oldest techniques for determining the mixture parameters dating back to Karl Pearson's seminal work of 1894.
In this approach the parameters of the mixture are determined such that the composite distribution has moments matching some given value. In many instances extraction of solutions to the moment equations may present non-trivial algebraic or computational problems. Moreover, numerical analysis by Day has indicated that such methods may be inefficient compared to EM. Nonetheless, there has been renewed interest in this method, e.g., Craigmile and Titterington (1998) and Wang.
McWilliam and Loh (2009) consider the characterisation of a hyper-cuboid normal mixture copula in large dimensional systems for which EM would be computationally prohibitive. Here a pattern analysis routine is used to generate multivariate tail-dependencies consistent with a set of univariate and (in some sense) bivariate moments. The performance of this method is then evaluated using equity log-return data with Kolmogorov–Smirnov test statistics suggesting a good descriptive fit.
Spectral method.
Some problems in mixture model estimation can be solved using spectral methods.
In particular it becomes useful if data points "xi" are points in high-dimensional real space, and the hidden distributions are known to be log-concave (such as Gaussian distribution or Exponential distribution).
Spectral methods of learning mixture models are based on the use of Singular Value Decomposition of a matrix which contains data points.
The idea is to consider the top "k" singular vectors, where "k" is the number of distributions to be learned. The projection
of each data point to a linear subspace spanned by those vectors groups points originating from the same distribution
very close together, while points from different distributions stay far apart.
One distinctive feature of the spectral method is that it allows us to prove that if the
distributions satisfy a certain separation condition (e.g., are not too close), then the estimated mixture will be very close to the true one with high probability.
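The projection step can be illustrated with a short NumPy sketch: compute the SVD of the (centered) data matrix and project onto the top-"k" right singular vectors. The synthetic data and the choice "k" = 2 below are assumptions made for the illustration.

```python
# Project high-dimensional data onto the span of the top-k singular vectors.
import numpy as np

rng = np.random.default_rng(3)
k = 2
X = np.vstack([rng.normal(+5.0, 1.0, (200, 50)),   # two well-separated
               rng.normal(-5.0, 1.0, (200, 50))])  # high-dimensional clusters

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
projected = Xc @ Vt[:k].T
# Points from the same component land close together in the subspace,
# while the two groups sit far apart along the leading coordinate:
print(projected[:3, 0], projected[-3:, 0])
```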
Graphical Methods.
Tarter and Lock describe a graphical approach to mixture identification in which a kernel function is applied to an empirical frequency plot so to reduce intra-component variance. In this way one may more readily identify components having differing means. While this "λ"-method does not require prior knowledge of the number or functional form of the components its success does rely on the choice of the kernel parameters which to some extent implicitly embeds assumptions about the component structure.
Other methods.
Some of these methods can even provably learn mixtures of heavy-tailed distributions, including those with infinite variance (see links to papers below).
In this setting, EM-based methods would not work, since the expectation step would diverge due to the presence of outliers.
A simulation.
To simulate a sample of size "N" that is from a mixture of distributions "F""i", "i" = 1 to "n", with probabilities "p""i" (where the "p""i" sum to 1), first draw the component counts "m""i" from a multinomial distribution with probabilities "p""i" (equivalently, draw "N" component labels from the corresponding categorical distribution), and then, for each "i", draw "m""i" values from "F""i", as in the sketch below.
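The following sketch carries out this two-step simulation for an assumed three-component mixture (one exponential and two normal components); the weights and parameters are illustrative, not taken from the article.

```python
# Two-step simulation of a mixture sample: component counts, then draws.
import numpy as np

rng = np.random.default_rng(4)
p = np.array([0.2, 0.5, 0.3])                       # mixture probabilities, summing to 1
components = [lambda m: rng.exponential(1.0, m),    # F_1
              lambda m: rng.normal(0.0, 1.0, m),    # F_2
              lambda m: rng.normal(5.0, 0.5, m)]    # F_3

N = 10_000
counts = rng.multinomial(N, p)                      # m_i: draws per component
sample = np.concatenate([draw(m) for draw, m in zip(components, counts)])
rng.shuffle(sample)                                 # optional: randomize the order
```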
Extensions.
In a Bayesian setting, additional levels can be added to the graphical model defining the mixture model. For example, in the common latent Dirichlet allocation topic model, the observations are sets of words drawn from "D" different documents and the "K" mixture components represent topics that are shared across documents. Each document has a different set of mixture weights, which specify the topics prevalent in that document. All sets of mixture weights share common hyperparameters.
A very common extension is to connect the latent variables defining the mixture component identities into a Markov chain, instead of assuming that they are independent identically distributed random variables. The resulting model is termed a hidden Markov model and is one of the most common sequential hierarchical models. Numerous extensions of hidden Markov models have been developed; see the resulting article for more information.
History.
Mixture distributions and the problem of mixture decomposition, that is the identification of its constituent components and the parameters thereof, has been cited in the literature as far back as 1846 (Quetelet in McLachlan, 2000) although common reference is made to the work of Karl Pearson (1894) as the first author to explicitly address the decomposition problem in characterising non-normal attributes of forehead to body length ratios in female shore crab populations. The motivation for this work was provided by the zoologist Walter Frank Raphael Weldon who had speculated in 1893 (in Tarter and Lock) that asymmetry in the histogram of these ratios could signal evolutionary divergence. Pearson's approach was to fit a univariate mixture of two normals to the data by choosing the five parameters of the mixture such that the empirical moments matched that of the model.
While his work was successful in identifying two potentially distinct sub-populations and in demonstrating the flexibility of mixtures as a moment matching tool, the formulation required the solution of a 9th degree (nonic) polynomial which at the time posed a significant computational challenge.
Subsequent works focused on addressing these problems, but it was not until the advent of the modern computer and the popularisation of Maximum Likelihood (MLE) parameterisation techniques that research really took off. Since that time there has been a vast body of research on the subject spanning areas such as fisheries research, agriculture, botany, economics, medicine, genetics, psychology, palaeontology, electrophoresis, finance, geology and zoology.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{array}{lcl}\nK &=& \\text{number of mixture components} \\\\\nN &=& \\text{number of observations} \\\\\n\\theta_{i=1 \\dots K} &=& \\text{parameter of distribution of observation associated with component } i \\\\\n\\phi_{i=1 \\dots K} &=& \\text{mixture weight, i.e., prior probability of a particular component } i \\\\\n\\boldsymbol\\phi &=& K\\text{-dimensional vector composed of all the individual } \\phi_{1 \\dots K} \\text{; must sum to 1} \\\\\nz_{i=1 \\dots N} &=& \\text{component of observation } i \\\\\nx_{i=1 \\dots N} &=& \\text{observation } i \\\\\nF(x|\\theta) &=& \\text{probability distribution of an observation, parametrized on } \\theta \\\\\nz_{i=1 \\dots N} &\\sim& \\operatorname{Categorical}(\\boldsymbol\\phi) \\\\\nx_{i=1 \\dots N}|z_{i=1 \\dots N} &\\sim& F(\\theta_{z_i})\n\\end{array}\n"
},
{
"math_id": 1,
"text": "\n\\begin{array}{lcl}\nK,N &=& \\text{as above} \\\\\n\\theta_{i=1 \\dots K}, \\phi_{i=1 \\dots K}, \\boldsymbol\\phi &=& \\text{as above} \\\\\nz_{i=1 \\dots N}, x_{i=1 \\dots N}, F(x|\\theta) &=& \\text{as above} \\\\\n\\alpha &=& \\text{shared hyperparameter for component parameters} \\\\\n\\beta &=& \\text{shared hyperparameter for mixture weights} \\\\\nH(\\theta|\\alpha) &=& \\text{prior probability distribution of component parameters, parametrized on } \\alpha \\\\\n\\theta_{i=1 \\dots K} &\\sim& H(\\theta|\\alpha) \\\\\n\\boldsymbol\\phi &\\sim& \\operatorname{Symmetric-Dirichlet}_K(\\beta) \\\\\nz_{i=1 \\dots N}|\\boldsymbol\\phi &\\sim& \\operatorname{Categorical}(\\boldsymbol\\phi) \\\\\nx_{i=1 \\dots N}|z_{i=1 \\dots N},\\theta_{i=1 \\dots K} &\\sim& F(\\theta_{z_i})\n\\end{array}\n"
},
{
"math_id": 2,
"text": "\n\\begin{array}{lcl}\nK,N &=& \\text{as above} \\\\\n\\phi_{i=1 \\dots K}, \\boldsymbol\\phi &=& \\text{as above} \\\\\nz_{i=1 \\dots N}, x_{i=1 \\dots N} &=& \\text{as above} \\\\\n\\theta_{i=1 \\dots K} &=& \\{ \\mu_{i=1 \\dots K}, \\sigma^2_{i=1 \\dots K} \\} \\\\\n\\mu_{i=1 \\dots K} &=& \\text{mean of component } i \\\\\n\\sigma^2_{i=1 \\dots K} &=& \\text{variance of component } i \\\\\nz_{i=1 \\dots N} &\\sim& \\operatorname{Categorical}(\\boldsymbol\\phi) \\\\\nx_{i=1 \\dots N} &\\sim& \\mathcal{N}(\\mu_{z_i}, \\sigma^2_{z_i})\n\\end{array}\n"
},
{
"math_id": 3,
"text": "\n\\begin{array}{lcl}\nK,N &=& \\text{as above} \\\\\n\\phi_{i=1 \\dots K}, \\boldsymbol\\phi &=& \\text{as above} \\\\\nz_{i=1 \\dots N}, x_{i=1 \\dots N} &=& \\text{as above} \\\\\n\\theta_{i=1 \\dots K} &=& \\{ \\mu_{i=1 \\dots K}, \\sigma^2_{i=1 \\dots K} \\} \\\\\n\\mu_{i=1 \\dots K} &=& \\text{mean of component } i \\\\\n\\sigma^2_{i=1 \\dots K} &=& \\text{variance of component } i \\\\\n\\mu_0, \\lambda, \\nu, \\sigma_0^2 &=& \\text{shared hyperparameters} \\\\\n\\mu_{i=1 \\dots K} &\\sim& \\mathcal{N}(\\mu_0, \\lambda\\sigma_i^2) \\\\\n\\sigma_{i=1 \\dots K}^2 &\\sim& \\operatorname{Inverse-Gamma}(\\nu, \\sigma_0^2) \\\\\n\\boldsymbol\\phi &\\sim& \\operatorname{Symmetric-Dirichlet}_K(\\beta) \\\\\nz_{i=1 \\dots N} &\\sim& \\operatorname{Categorical}(\\boldsymbol\\phi) \\\\\nx_{i=1 \\dots N} &\\sim& \\mathcal{N}(\\mu_{z_i}, \\sigma^2_{z_i})\n\\end{array}\n"
},
{
"math_id": 4,
"text": ""
},
{
"math_id": 5,
"text": "\\boldsymbol{x}"
},
{
"math_id": 6,
"text": "\np(\\boldsymbol{\\theta}) = \\sum_{i=1}^K\\phi_i \\mathcal{N}(\\boldsymbol{\\mu_i,\\Sigma_i})\n"
},
{
"math_id": 7,
"text": "\\phi_i"
},
{
"math_id": 8,
"text": "\\boldsymbol{\\mu_i}"
},
{
"math_id": 9,
"text": "\\boldsymbol{\\Sigma_i}"
},
{
"math_id": 10,
"text": "p(\\boldsymbol{x | \\theta})"
},
{
"math_id": 11,
"text": "\\boldsymbol{\\theta}"
},
{
"math_id": 12,
"text": "p(\\boldsymbol{\\theta | x})"
},
{
"math_id": 13,
"text": "\np(\\boldsymbol{\\theta | x}) = \\sum_{i=1}^K\\tilde{\\phi_i} \\mathcal{N}(\\boldsymbol{\\tilde{\\mu_i},\\tilde{\\Sigma_i}})\n"
},
{
"math_id": 14,
"text": "\\tilde{\\phi_i}, \\boldsymbol{\\tilde{\\mu_i}}"
},
{
"math_id": 15,
"text": "\\boldsymbol{\\tilde{\\Sigma_i}}"
},
{
"math_id": 16,
"text": "K,N:"
},
{
"math_id": 17,
"text": "\\phi_{i=1 \\dots K}, \\boldsymbol\\phi:"
},
{
"math_id": 18,
"text": "z_{i=1 \\dots N}, x_{i=1 \\dots N}:"
},
{
"math_id": 19,
"text": "V:"
},
{
"math_id": 20,
"text": "\\theta_{i=1 \\dots K, j=1 \\dots V}:"
},
{
"math_id": 21,
"text": "i"
},
{
"math_id": 22,
"text": "j"
},
{
"math_id": 23,
"text": "\\boldsymbol\\theta_{i=1 \\dots K}:"
},
{
"math_id": 24,
"text": "V,"
},
{
"math_id": 25,
"text": "\\theta_{i,1 \\dots V};"
},
{
"math_id": 26,
"text": "\n\\begin{array}{lcl}\nz_{i=1 \\dots N} &\\sim& \\operatorname{Categorical}(\\boldsymbol\\phi) \\\\\nx_{i=1 \\dots N} &\\sim& \\text{Categorical}(\\boldsymbol\\theta_{z_i})\n\\end{array}\n"
},
{
"math_id": 27,
"text": "\\alpha:"
},
{
"math_id": 28,
"text": "\\boldsymbol\\theta"
},
{
"math_id": 29,
"text": "\\beta:"
},
{
"math_id": 30,
"text": "\\boldsymbol\\phi"
},
{
"math_id": 31,
"text": "\n\\begin{array}{lcl}\n\\boldsymbol\\phi &\\sim& \\operatorname{Symmetric-Dirichlet}_K(\\beta) \\\\\n\\boldsymbol\\theta_{i=1 \\dots K} &\\sim& \\text{Symmetric-Dirichlet}_V(\\alpha) \\\\\nz_{i=1 \\dots N} &\\sim& \\operatorname{Categorical}(\\boldsymbol\\phi) \\\\\nx_{i=1 \\dots N} &\\sim& \\text{Categorical}(\\boldsymbol\\theta_{z_i})\n\\end{array}\n"
},
{
"math_id": 32,
"text": "K=10"
},
{
"math_id": 33,
"text": "N^2"
},
{
"math_id": 34,
"text": "p_0=\\pi(1-\\theta_1)^2+(1-\\pi)(1-\\theta_2)^2"
},
{
"math_id": 35,
"text": "p_1=2\\pi\\theta_1(1-\\theta_1)+2(1-\\pi)\\theta_2(1-\\theta_2)"
},
{
"math_id": 36,
"text": "J=\\{f(\\cdot ; \\theta):\\theta\\in\\Omega\\}"
},
{
"math_id": 37,
"text": "K=\\left\\{p(\\cdot):p(\\cdot)=\\sum_{i=1}^n a_i f_i(\\cdot ; \\theta_i), a_i>0, \\sum_{i=1}^n a_i=1, f_i(\\cdot ; \\theta_i)\\in J\\ \\forall i,n\\right\\}"
},
{
"math_id": 38,
"text": " w_s^{(j+1)} = \\frac{1}{N} \\sum_{t =1}^N h_s^{(j)}(t) "
},
{
"math_id": 39,
"text": " \\mu_s^{(j+1)} = \\frac{\\sum_{t =1}^N h_s^{(j)}(t) x^{(t)}}{\\sum_{t =1}^N h_s^{(j)}(t)} "
},
{
"math_id": 40,
"text": " \\Sigma_s^{(j+1)} = \\frac{\\sum_{t =1}^N h_s^{(j)}(t) [x^{(t)}-\\mu_s^{(j+1)}][x^{(t)}-\\mu_s^{(j+1)}]^{\\top}}{\\sum_{t =1}^N h_s^{(j)}(t)} "
},
{
"math_id": 41,
"text": " h_s^{(j)}(t) = \\frac{w_s^{(j)} p_s(x^{(t)}; \\mu_s^{(j)},\\Sigma_s^{(j)}) }{ \\sum_{i = 1}^n w_i^{(j)} p_i(x^{(t)}; \\mu_i^{(j)}, \\Sigma_i^{(j)})}. "
},
{
"math_id": 42,
"text": " y_{i,j} = \\frac{a_i f_Y(x_j;\\theta_i)}{f_{X}(x_j)}."
},
{
"math_id": 43,
"text": " a_i = \\frac{1}{N}\\sum_{j=1}^N y_{i,j}"
},
{
"math_id": 44,
"text": " \\mu_{i} = \\frac{\\sum_{j} y_{i,j}x_{j}}{\\sum_{j} y_{i,j}}."
}
] | https://en.wikipedia.org/wiki?curid=871681 |
871687 | Killing spinor | Type of Dirac operator eigenspinor
Killing spinor is a term used in mathematics and physics.
Definition.
By the more narrow definition, commonly used in mathematics, the term Killing spinor indicates those twistor
spinors which are also eigenspinors of the Dirac operator. The term is named after Wilhelm Killing.
Another equivalent definition is that Killing spinors are the solutions to the Killing equation for a so-called Killing number.
More formally:
A Killing spinor on a Riemannian spin manifold "M" is a spinor field formula_0 which satisfies
formula_1
for all tangent vectors "X", where formula_2 is the spinor covariant derivative, formula_3 is Clifford multiplication and formula_4 is a constant, called the Killing number of formula_0. If formula_5 then the spinor is called a parallel spinor.
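As a brief consistency check with the Dirac-eigenspinor characterization above, one can sum the Killing equation over a local orthonormal frame. The computation below is a sketch under the common sign convention in which tangent vectors square to −1 under Clifford multiplication; other conventions change the sign of the eigenvalue.

```latex
% Contracting the Killing equation with the Clifford action of an
% orthonormal frame e_1, \dots, e_n (convention: e_i \cdot e_i = -1):
D\psi \;=\; \sum_{i=1}^{n} e_i \cdot \nabla_{e_i}\psi
      \;=\; \lambda \sum_{i=1}^{n} e_i \cdot e_i \cdot \psi
      \;=\; -\,n\,\lambda\,\psi ,
```

so a Killing spinor with Killing number λ is, under this convention, an eigenspinor of the Dirac operator with eigenvalue −nλ.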
Applications.
In physics, Killing spinors are used in supergravity and superstring theory, in particular for finding solutions which preserve some supersymmetry. They are a special kind of spinor field related to Killing vector fields and Killing tensors.
Properties.
If formula_6 is a manifold with a Killing spinor, then formula_6 is an Einstein manifold with Ricci curvature formula_7, where formula_8 is the Killing constant.
Types of Killing spinor fields.
If formula_8 is purely imaginary, then formula_6 is a noncompact manifold; if formula_8 is 0, then the spinor field is parallel; finally, if formula_8 is real, then formula_6 is compact, and the spinor field is called a "real spinor field".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\psi"
},
{
"math_id": 1,
"text": "\\nabla_X\\psi=\\lambda X\\cdot\\psi"
},
{
"math_id": 2,
"text": "\\nabla"
},
{
"math_id": 3,
"text": "\\cdot"
},
{
"math_id": 4,
"text": "\\lambda \\in \\mathbb{C}"
},
{
"math_id": 5,
"text": "\\lambda=0"
},
{
"math_id": 6,
"text": "\\mathcal{M}"
},
{
"math_id": 7,
"text": "Ric=4(n-1)\\alpha^2 "
},
{
"math_id": 8,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=871687 |
87175 | Herd immunity | Concept in epidemiology
Herd immunity (also called herd effect, community immunity, population immunity, or mass immunity) is a form of indirect protection that applies only to contagious diseases. It occurs when a sufficient percentage of a population has become immune to an infection, whether through previous infections or vaccination, thereby reducing the likelihood of infection for individuals who lack immunity.
Once herd immunity has been reached, disease gradually disappears from a population; if this is achieved worldwide, infections may be permanently reduced to zero, a state called eradication. Herd immunity created via vaccination has contributed to the reduction of many diseases.
Effects.
Protection of those without immunity.
Some individuals either cannot develop immunity after vaccination or for medical reasons cannot be vaccinated. Newborn infants are too young to receive many vaccines, either for safety reasons or because passive immunity renders the vaccine ineffective. Individuals who are immunodeficient due to HIV/AIDS, lymphoma, leukemia, bone marrow cancer, an impaired spleen, chemotherapy, or radiotherapy may have lost any immunity that they previously had and vaccines may not be of any use for them because of their immunodeficiency.
A portion of those vaccinated may not develop long-term immunity. Vaccine contraindications may prevent certain individuals from being vaccinated. In addition to not being immune, individuals in one of these groups may be at a greater risk of developing complications from infection because of their medical status, but they may still be protected if a large enough percentage of the population is immune.
High levels of immunity in one age group can create herd immunity for other age groups. Vaccinating adults against pertussis reduces pertussis incidence in infants too young to be vaccinated, who are at the greatest risk of complications from the disease. This is especially important for close family members, who account for most of the transmissions to young infants. In the same manner, children receiving vaccines against pneumococcus reduces pneumococcal disease incidence among younger, unvaccinated siblings. Vaccinating children against pneumococcus and rotavirus has had the effect of reducing pneumococcus- and rotavirus-attributable hospitalizations for older children and adults, who do not normally receive these vaccines. Influenza (flu) is more severe in the elderly than in younger age groups, but influenza vaccines lack effectiveness in this demographic due to a waning of the immune system with age. The prioritization of school-age children for seasonal flu immunization, which is more effective than vaccinating the elderly, however, has been shown to create a certain degree of protection for the elderly.
For sexually transmitted infections (STIs), high levels of immunity in heterosexuals of one sex induces herd immunity for heterosexuals of both sexes. Vaccines against STIs that are targeted at heterosexuals of one sex result in significant declines in STIs in heterosexuals of both sexes if vaccine uptake in the target sex is high. Herd immunity from female vaccination does not, however, extend to males who have sex with males. High-risk behaviors make eliminating STIs difficult because, even though most infections occur among individuals with moderate risk, the majority of transmissions occur because of individuals who engage in high-risk behaviors. For this reason, in certain populations it may be necessary to immunize high-risk individuals regardless of sex.
Evolutionary pressure and serotype replacement.
Herd immunity itself acts as an evolutionary pressure on pathogens, influencing viral evolution by encouraging the production of novel strains, referred to as escape mutants, that are able to evade herd immunity and infect previously immune individuals. The evolution of new strains is known as serotype replacement, or serotype shifting, as the prevalence of a specific serotype declines due to high levels of immunity, allowing other serotypes to replace it.
At the molecular level, viruses escape from herd immunity through antigenic drift, which is when mutations accumulate in the portion of the viral genome that encodes for the virus's surface antigen, typically a protein of the virus capsid, producing a change in the viral epitope. Alternatively, the reassortment of separate viral genome segments, or antigenic shift, which is more common when there are more strains in circulation, can also produce new serotypes. When either of these occur, memory T cells no longer recognize the virus, so people are not immune to the dominant circulating strain. For both influenza and norovirus, epidemics temporarily induce herd immunity until a new dominant strain emerges, causing successive waves of epidemics. As this evolution poses a challenge to herd immunity, broadly neutralizing antibodies and "universal" vaccines that can provide protection beyond a specific serotype are in development.
Initial vaccines against "Streptococcus pneumoniae" significantly reduced nasopharyngeal carriage of vaccine serotypes (VTs), including antibiotic-resistant types, only to be entirely offset by increased carriage of non-vaccine serotypes (NVTs). This did not result in a proportionate increase in disease incidence though, since NVTs were less invasive than VTs. Since then, pneumococcal vaccines that provide protection from the emerging serotypes have been introduced and have successfully countered their emergence. The possibility of future shifting remains, so further strategies to deal with this include expansion of VT coverage and the development of vaccines that use either killed whole-cells, which have more surface antigens, or proteins present in multiple serotypes.
Eradication of diseases.
If herd immunity has been established and maintained in a population for a sufficient time, the disease is inevitably eliminated – no more endemic transmissions occur. If elimination is achieved worldwide and the number of cases is permanently reduced to zero, then a disease can be declared eradicated. Eradication can thus be considered the final effect or end-result of public health initiatives to control the spread of contagious disease. In cases in which herd immunity is compromised, on the contrary, disease outbreaks among the unvaccinated population are likely to occur.
The benefits of eradication include ending all morbidity and mortality caused by the disease, financial savings for individuals, health care providers, and governments, and enabling resources used to control the disease to be used elsewhere. To date, two diseases have been eradicated using herd immunity and vaccination: rinderpest and smallpox. Eradication efforts that rely on herd immunity are currently underway for poliomyelitis, though civil unrest and distrust of modern medicine have made this difficult. Mandatory vaccination may be beneficial to eradication efforts if not enough people choose to get vaccinated.
Free riding.
Herd immunity is vulnerable to the free rider problem. Individuals who lack immunity, including those who choose not to vaccinate, free ride off the herd immunity created by those who are immune. As the number of free riders in a population increases, outbreaks of preventable diseases become more common and more severe due to loss of herd immunity. Individuals may choose to free ride or be hesitant to vaccinate for a variety of reasons, including the belief that vaccines are ineffective, or that the risks associated with vaccines are greater than those associated with infection, mistrust of vaccines or public health officials, bandwagoning or groupthinking, social norms or peer pressure, and religious beliefs. Certain individuals are more likely to choose not to receive vaccines if vaccination rates are high enough to convince a person that he or she may not need to be vaccinated, since a sufficient percentage of others are already immune.
Mechanism.
Individuals who are immune to a disease act as a barrier in the spread of disease, slowing or preventing the transmission of disease to others. An individual's immunity can be acquired via a natural infection or through artificial means, such as vaccination. When a critical proportion of the population becomes immune, called the "herd immunity threshold" (HIT) or "herd immunity level" (HIL), the disease may no longer persist in the population, ceasing to be endemic.
The theoretical basis for herd immunity generally assumes that vaccines induce solid immunity, that populations mix at random, that the pathogen does not evolve to evade the immune response, and that there is no non-human vector for the disease.
Theoretical basis.
The critical value, or threshold, in a given population, is the point where the disease reaches an endemic steady state, which means that the infection level is neither growing nor declining exponentially. This threshold can be calculated from the effective reproduction number "R"e, which is obtained by taking the product of the basic reproduction number "R"0, the average number of new infections caused by each case in an entirely susceptible population that is homogeneous, or well-mixed, meaning each individual is equally likely to come into contact with any other susceptible individual in the population, and "S", the proportion of the population who are susceptible to infection, and setting this product to be equal to 1:
formula_0
"S" can be rewritten as (1 − "p"), where "p" is the proportion of the population that is immune so that "p" + "S" equals one. Then, the equation can be rearranged to place "p" by itself as follows:
formula_1
formula_2
formula_3
With "p" being by itself on the left side of the equation, it can be renamed as "p"c, representing the critical proportion of the population needed to be immune to stop the transmission of disease, which is the same as the "herd immunity threshold" HIT. "R"0 functions as a measure of contagiousness, so low "R"0 values are associated with lower HITs, whereas higher "R"0s result in higher HITs. For example, the HIT for a disease with an "R"0 of 2 is theoretically only 50%, whereas for a disease with an "R"0 of 10 the theoretical HIT is 90%.
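The threshold formula is a one-line computation; the sketch below simply evaluates "p"c = 1 − 1/"R"0 for a few illustrative "R"0 values (including the 2 and 10 quoted above).

```python
# Numerical restatement of the herd immunity threshold p_c = 1 - 1/R0.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

for r0 in (2, 4, 10):
    print(r0, herd_immunity_threshold(r0))   # 0.5, 0.75, 0.9
```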
When the effective reproduction number "R"e of a contagious disease is reduced to and sustained below 1 new individual per infection, the number of cases occurring in the population gradually decreases until the disease has been eliminated. If a population is immune to a disease in excess of that disease's HIT, the number of cases reduces at a faster rate, outbreaks are even less likely to happen, and outbreaks that occur are smaller than they would be otherwise. If the effective reproduction number increases to above 1, then the disease is neither in a steady state nor decreasing in incidence, but is actively spreading through the population and infecting a larger number of people than usual.
An assumption in these calculations is that populations are homogeneous, or well-mixed, meaning that every individual is equally likely to come into contact with any other individual, when in reality populations are better described as social networks as individuals tend to cluster together, remaining in relatively close contact with a limited number of other individuals. In these networks, transmission only occurs between those who are geographically or physically close to one another. The shape and size of a network is likely to alter a disease's HIT, making incidence either more or less common. Mathematical models can use contact matrices to estimate the likelihood of encounters and thus transmission.
In heterogeneous populations, "R"0 is considered to be a measure of the number of cases generated by a "typical" contagious person, which depends on how individuals within a network interact with each other. Interactions within networks are more common than between networks, in which case the most highly connected networks transmit disease more easily, resulting in a higher "R"0 and a higher HIT than would be required in a less connected network. In networks whose members either opt not to become immune or are not sufficiently immunized, diseases may persist even though they are absent from better-immunized networks.
Overshoot.
The cumulative proportion of individuals who get infected during the course of a disease outbreak can exceed the HIT. This is because the HIT does not represent the point at which the disease stops spreading, but rather the point at which each infected person infects fewer than one additional person on average. When the HIT is reached, the number of additional infections does not immediately drop to zero. The excess of the cumulative proportion of infected individuals over the theoretical HIT is known as the overshoot.
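The effect can be illustrated with a toy epidemic simulation. The sketch below uses a simple discrete-time SIR model, an assumption introduced here purely for illustration rather than a model described in this article; it shows the cumulative proportion infected (the final attack rate) ending up above the theoretical HIT of 1 − 1/"R"0.

```python
def sir_final_size(r0: float, i0: float = 1e-4, dt: float = 0.1) -> float:
    """Crude discrete-time SIR run; returns the cumulative proportion ever infected.
    The recovery rate gamma is set to 1 per unit time, so beta = R0 * gamma."""
    gamma = 1.0
    beta = r0 * gamma
    s, i = 1.0 - i0, i0
    while i > 1e-8:
        new_infections = beta * s * i * dt
        s -= new_infections
        i += new_infections - gamma * i * dt
    return 1.0 - s

for r0 in (1.5, 2.0, 3.0):
    hit = 1 - 1 / r0
    print(f"R0={r0}: HIT={hit:.0%}, final attack rate={sir_final_size(r0):.0%}")
```

For "R"0 = 2, for example, the HIT is 50% but the simulated final attack rate is roughly 80%, the difference being the overshoot.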
Boosts.
Vaccination.
The primary way to boost levels of immunity in a population is through vaccination. Vaccination was originally based on the observation that milkmaids exposed to cowpox were immune to smallpox, so the practice of inoculating people with the cowpox virus began as a way to prevent smallpox. Well-developed vaccines provide protection in a far safer way than natural infections, as vaccines generally do not cause the diseases they protect against and severe adverse effects are significantly less common than complications from natural infections.
The immune system does not distinguish between natural infections and vaccines, forming an active response to both, so immunity induced via vaccination is similar to what would have occurred from contracting and recovering from the disease. To achieve herd immunity through vaccination, vaccine manufacturers aim to produce vaccines with low failure rates, and policy makers aim to encourage their use. After the successful introduction and widespread use of a vaccine, sharp declines in the incidence of diseases it protects against can be observed, which decreases the number of hospitalizations and deaths caused by such diseases.
Assuming a vaccine is 100% effective, the equation used for calculating the herd immunity threshold can also be used for calculating the vaccination level needed to eliminate a disease, written as "V"c. Vaccines are usually imperfect, however, so the effectiveness, "E", of a vaccine must be accounted for:
formula_4
From this equation, it can be observed that if "E" is less than (1 − 1/"R"0), then it is impossible to eliminate a disease, even if the entire population is vaccinated. Similarly, waning vaccine-induced immunity, as occurs with acellular pertussis vaccines, requires higher levels of booster vaccination to sustain herd immunity. If a disease has ceased to be endemic to a population, then natural infections no longer contribute to a reduction in the fraction of the population that is susceptible. Only vaccination contributes to this reduction. The relation between vaccine coverage and effectiveness and disease incidence can be shown by subtracting the product of the effectiveness of a vaccine and the proportion of the population that is vaccinated, "p"v, from the herd immunity threshold equation as follows:
formula_5
It can be observed from this equation that, all other things being equal ("ceteris paribus"), any increase in either vaccine coverage or vaccine effectiveness, including any increase in excess of a disease's HIT, further reduces the number of cases of a disease. The rate of decline in cases depends on a disease's "R"0, with diseases with lower "R"0 values experiencing sharper declines.
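A short sketch of these two calculations, using the formulas above and purely illustrative values of "R"0 and "E":

```python
def critical_coverage(r0, effectiveness):
    """Vaccination coverage needed for elimination, V_c = (1 - 1/R0) / E.
    Returns None if even 100% coverage cannot reach the threshold."""
    hit = 1.0 - 1.0 / r0
    vc = hit / effectiveness
    return vc if vc <= 1.0 else None

# Illustrative numbers only, not measured values for any particular disease.
for r0, eff in ((2, 0.9), (10, 0.95), (10, 0.85)):
    vc = critical_coverage(r0, eff)
    if vc is None:
        print(f"R0={r0}, E={eff:.0%}: elimination impossible even at 100% coverage")
    else:
        print(f"R0={r0}, E={eff:.0%}: required coverage Vc = {vc:.0%}")
```

The third case shows the point made above: with "R"0 = 10 the threshold is 90%, so a vaccine with "E" = 85% cannot eliminate the disease even with complete coverage.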
Vaccines usually have at least one contraindication for a specific population for medical reasons, but if both effectiveness and coverage are high enough then herd immunity can protect these individuals. Vaccine effectiveness is often, but not always, adversely affected by passive immunity, so additional doses are recommended for some vaccines while others are not administered until after an individual has lost his or her passive immunity.
Passive immunity.
Individual immunity can also be gained passively, when antibodies to a pathogen are transferred from one individual to another. This can occur naturally, whereby maternal antibodies, primarily immunoglobulin G antibodies, are transferred across the placenta and in colostrum to fetuses and newborns. Passive immunity can also be gained artificially, when a susceptible person is injected with antibodies from the serum or plasma of an immune person.
Protection generated from passive immunity is immediate, but wanes over the course of weeks to months, so any contribution to herd immunity is temporary. For diseases that are especially severe among fetuses and newborns, such as influenza and tetanus, pregnant women may be immunized in order to transfer antibodies to the child. In the same way, high-risk groups that are either more likely to experience infection, or are more likely to develop complications from infection, may receive antibody preparations to prevent these infections or to reduce the severity of symptoms.
Cost–benefit analysis.
Herd immunity is often accounted for when conducting cost–benefit analyses of vaccination programs. It is regarded as a positive externality of high levels of immunity, producing an additional benefit of disease reduction that would not occur had no herd immunity been generated in the population. Therefore, including herd immunity in cost–benefit analyses results in both more favorable cost-effectiveness or cost–benefit ratios and an increase in the number of disease cases averted by vaccination. Study designs used to estimate herd immunity's benefit include recording disease incidence in households with a vaccinated member, randomizing a population in a single geographic area to be vaccinated or not, and observing the incidence of disease before and after beginning a vaccination program. From these, it can be observed that disease incidence may decrease to a level beyond what can be predicted from direct protection alone, indicating that herd immunity contributed to the reduction. When serotype replacement is accounted for, it reduces the predicted benefits of vaccination.
History.
Herd immunity was recognized as a naturally occurring phenomenon in the 1930s when it was observed that after a significant number of children had become immune to measles, the number of new infections temporarily decreased. Mass vaccination to induce herd immunity has since become common and proved successful in preventing the spread of many contagious diseases. Opposition to vaccination has posed a challenge to herd immunity, allowing preventable diseases to persist in or return to populations with inadequate vaccination rates.
The exact herd immunity threshold (HIT) varies depending on the basic reproduction number of the disease. An example of a disease with a high threshold is measles, with a HIT exceeding 95%.
The term "herd immunity" was first used in 1894 by American veterinary scientist and then Chief of the Bureau of Animal Industry of the US Department of Agriculture Daniel Elmer Salmon to describe the healthy vitality and resistance to disease of well-fed herds of hogs. In 1916 veterinary scientists inside the same Bureau of Animal Industry used the term to refer to the immunity arising following recovery in cattle infected with brucellosis, also known as "contagious abortion." By 1923 it was being used by British bacteriologists to describe experimental epidemics with mice, experiments undertaken as part of efforts to model human epidemic disease. By the end of the 1920s the concept was used extensively - particularly among British scientists - to describe the build up of immunity in populations to diseases such as diphtheria, scarlet fever, and influenza. Herd immunity was recognized as a naturally occurring phenomenon in the 1930s when A. W. Hedrich published research on the epidemiology of measles in Baltimore, and took notice that after many children had become immune to measles, the number of new infections temporarily decreased, including among susceptible children. In spite of this knowledge, efforts to control and eliminate measles were unsuccessful until mass vaccination using the measles vaccine began in the 1960s. Mass vaccination, discussions of disease eradication, and cost–benefit analyses of vaccination subsequently prompted more widespread use of the term "herd immunity". In the 1970s, the theorem used to calculate a disease's herd immunity threshold was developed. During the smallpox eradication campaign in the 1960s and 1970s, the practice of "ring vaccination", to which herd immunity is integral, began as a way to immunize every person in a "ring" around an infected individual to prevent outbreaks from spreading.
Since the adoption of mass and ring vaccination, complexities and challenges to herd immunity have arisen. Modeling of the spread of contagious disease originally made a number of assumptions, namely that entire populations are susceptible and well-mixed, which is not the case in reality, so more precise equations have been developed. In recent decades, it has been recognized that the dominant strain of a microorganism in circulation may change due to herd immunity, either because of herd immunity acting as an evolutionary pressure or because herd immunity against one strain allowed another already-existing strain to spread. Emerging or ongoing fears and controversies about vaccination have reduced or eliminated herd immunity in certain communities, allowing preventable diseases to persist in or return to these communities.
Notes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " R_0 \\cdot S=1. "
},
{
"math_id": 1,
"text": " R_0 \\cdot (1-p)=1, "
},
{
"math_id": 2,
"text": " 1-p=\\frac {1} {R_0}, "
},
{
"math_id": 3,
"text": " p_c=1 - \\frac {1} {R_0}. "
},
{
"math_id": 4,
"text": " V_c=\\frac {1 - \\frac {1} {R_0}}{E}. "
},
{
"math_id": 5,
"text": " \\left(1 - \\frac {1} {R_0}\\right) - (E \\times p_v). "
}
] | https://en.wikipedia.org/wiki?curid=87175 |
8719288 | Conservation form | Conservation form or "Eulerian form" refers to an arrangement of an equation or system of equations, usually representing a hyperbolic system, that emphasizes that a property represented is conserved, i.e. a type of continuity equation. The term is usually used in the context of continuum mechanics.
General form.
Equations in conservation form take the form
formula_0
for any conserved quantity formula_1, with a suitable function formula_2. An equation of this form can be transformed into an integral equation
formula_3
using the divergence theorem. The integral equation states that the change rate of the integral of the quantity formula_1 over an arbitrary control volume formula_4 is given by the flux formula_5 through the boundary of the control volume, with formula_6 being the outer surface normal through the boundary. formula_1 is neither produced nor consumed inside of formula_4 and is hence conserved. A typical choice for formula_2 is formula_7, with velocity formula_8, meaning that the quantity formula_1 flows with a given velocity field.
The integral form of such equations is usually the physically more natural formulation, with the differential equation arising from it by differentiation. Since the integral equation also admits non-differentiable solutions, the equivalence of the two formulations can break down in some cases, leading to weak solutions and severe numerical difficulties in simulations of such equations.
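As a minimal illustrative sketch (the scheme, grid and parameters are assumptions made here, not part of the article), a first-order upwind finite-volume discretisation of the one-dimensional case with the linear flux f(ξ) = uξ updates each cell with the difference of its face fluxes; because every face flux is added to one cell and subtracted from its neighbour, the discrete total of ξ over a periodic grid is conserved to rounding error.

```python
# 1-D linear advection in conservation form, periodic domain, upwind fluxes.
# Parameters (u > 0, grid size, CFL number) are illustrative only.
u, n = 1.0, 200
dx = 1.0 / n
dt = 0.4 * dx / u                      # CFL-limited time step

# Initial condition: a square pulse.
xi = [1.0 if 0.4 <= (i + 0.5) * dx <= 0.6 else 0.0 for i in range(n)]

def step(xi):
    # flux[i] is the upwind flux through the left face of cell i (u > 0).
    flux = [u * xi[i - 1] for i in range(n)]
    return [xi[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

total_before = sum(xi) * dx
for _ in range(300):
    xi = step(xi)
print(sum(xi) * dx, "vs", total_before)   # conserved up to rounding error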
Example.
An example of a set of equations written in conservation form are the Euler equations of fluid flow:
formula_9
formula_10
formula_11
Each of these represents the conservation of mass, momentum and energy, respectively. | [
{
"math_id": 0,
"text": "\\frac{d \\xi}{d t} + \\boldsymbol \\nabla \\cdot \\mathbf f(\\xi) = 0"
},
{
"math_id": 1,
"text": "\\xi"
},
{
"math_id": 2,
"text": "\\mathbf f"
},
{
"math_id": 3,
"text": "\\frac d{d t} \\int_V \\xi ~ dV = -\\oint_{\\partial V} \\mathbf f(\\xi) \\cdot \\boldsymbol \\nu ~ dS"
},
{
"math_id": 4,
"text": "V"
},
{
"math_id": 5,
"text": "\\mathbf f(\\xi)"
},
{
"math_id": 6,
"text": "\\boldsymbol \\nu"
},
{
"math_id": 7,
"text": "\\mathbf f(\\xi) = \\xi \\mathbf u"
},
{
"math_id": 8,
"text": "\\mathbf u"
},
{
"math_id": 9,
"text": " \\frac{\\partial\\rho}{\\partial t} + \\nabla\\cdot(\\rho\\mathbf u) = 0 "
},
{
"math_id": 10,
"text": " \\frac{\\partial\\rho \\mathbf u}{\\partial t} + \\nabla\\cdot(\\rho \\mathbf u \\otimes \\mathbf u + p \\mathbf I) = 0 "
},
{
"math_id": 11,
"text": " \\frac{\\partial E}{\\partial t} + \\nabla\\cdot(\\mathbf u(E+pV)) = 0 "
}
] | https://en.wikipedia.org/wiki?curid=8719288 |
8719641 | RELIKT-1 | Soviet cosmic microwave background experiment on the Prognoz 9 satellite
RELIKT-1 (sometimes RELICT-1) was a Soviet cosmic microwave background anisotropy experiment launched on board the Prognoz 9 satellite on 1 July 1983. It operated until February 1984. It was the first CMB satellite (followed by the Cosmic Background Explorer in 1989) and measured the CMB dipole, the Galactic plane, and gave upper limits on the quadrupole moment.
A follow-up, RELIKT-2, would have been launched around 1993, and a RELIKT-3 was proposed, but neither took place due to the dissolution of the Soviet Union.
Launch and observations.
RELIKT-1 was launched on board the Prognoz-9 satellite on 1 July 1983. The satellite was in a highly eccentric orbit, with perigee around 1,000 km and apogee around 750,000 km, and an orbital period of 26 days.
RELIKT-1 observed at 37 GHz (8 mm), with a bandwidth of 0.4 GHz and an angular resolution of 5.8°. It used a superheterodyne, or Dicke-type modulation radiometer with an automatic balancer for the two input levels with a 30-second time constant. The noise in 1 second was 31 mK, with a system temperature of 300 K, and a receiver temperature of 110 K. The signal was sampled twice a second, and the noise was correlated between samples.
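These figures are mutually consistent with the usual sensitivity estimate for a Dicke-switched radiometer, ΔT ≈ 2 T_sys / √(Δν τ); the factor of two is the conventional penalty for that modulation scheme and is an assumption made here, not a number taken from the mission documentation.

```python
from math import sqrt

t_sys = 300.0        # system temperature, K
bandwidth = 0.4e9    # bandwidth, Hz
tau = 1.0            # integration time, s

delta_t = 2 * t_sys / sqrt(bandwidth * tau)   # Dicke-radiometer sensitivity
print(f"{delta_t * 1e3:.0f} mK")              # ~30 mK, close to the quoted 31 mK
```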
The receiver used two corrugated horn antennas, one pointing parallel to the spacecraft spin axis, the other directed at a parabolic antenna so that it observed at 90° from the spin axis. The satellite rotated every 120 seconds. The experiment consumed 50 W of power.
The radiometer was calibrated to 5% accuracy before launch, as was an internal noise source (which was used every four days during observations). Additionally the moon was used as a calibrator, as it was observed twice a month, and the in-flight system temperatures were measured to vary by 4% on a weekly basis.
The satellite rotation axis was kept constant for a week, giving 5040 scans of a great circle, after which it was changed to a new axis. The signal was recorded onto a tape recorder, and transmitted to Earth every four days. It observed for 6 months, giving 31 different scans that covered the whole sky, all of which intersected at the ecliptic poles. The experiment ceased observations in February 1984, after collecting 15 million measurements.
Results.
It measured the CMB dipole, the Galactic plane, and reported constraints on the quadrupole moment.
The first dipole measurement was reported in 1984, while the telescope was still observing, at 2.1±0.5 mK, and upper limits on the quadrupole of 0.2 mK. It also detected brighter-than-expected Galactic plane emission from compact HII regions.
A reanalysis of the data by Strukov et al. in 1992 found a quadrupole formula_0 between formula_1 and formula_2 at the 90% confidence level, and also reported a negative anomaly at l=150°, b=-70° at a 99% confidence level.
Another reanalysis of the data by Klypin, Strukov and Skulachev in 1992 found a dipole of 3.15±0.12 mK, with a direction of 11h17m±10m and -7.5°±2.5°. It placed a limit on the CMB quadrupole of formula_3 with a 95% confidence level, assuming a Harrison-Zeldovich spectrum, or formula_4 without assuming a model. The results were close to those measured by the Cosmic Background Explorer and the Tenerife Experiment.
RELIKT-2.
The second RELIKT satellite would have been launched in mid-1993. It would have had five channels observing at 21.7, 24.5, 59.0, 83.0 and 193 GHz (wavelengths of 13.8, 8.7, 5.1, 3.6 and 1.6 mm respectively), using degenerate parametric amplifiers. It would have had corrugated horns to give a resolution of 7°, and a more distant orbit to avoid contamination from the Moon and Sun, with a mission duration around 2 years, to give a better sensitivity than COBE. It would have been cooled to 100 K. It was constructed, and was undergoing tests in 1992. It would have been launched as the Libris satellite on a Molniya rocket. The launch was put back to 1996, with expanded plans to observe with 1.5–3° resolution from two spacecraft in 1995, but ultimately never took place because of the Soviet Union's break-up and lack of funding.
A RELIKT-3 was also planned, which would have observed at 34–90 GHz with a resolution around 1°.
Notes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(\\Delta T/T)_{quad}"
},
{
"math_id": 1,
"text": "6\\times10^{-6}"
},
{
"math_id": 2,
"text": "3.3\\times10^{-5}"
},
{
"math_id": 3,
"text": "(\\Delta T/T)_{quad} = 1.5 \\times 10^{-5}"
},
{
"math_id": 4,
"text": "<3.0\\times10^{-5}"
}
] | https://en.wikipedia.org/wiki?curid=8719641 |
8720712 | Multiplication theorem | Identity obeyed by many special functions related to the gamma function
In mathematics, the multiplication theorem is a certain type of identity obeyed by many special functions related to the gamma function. For the explicit case of the gamma function, the identity is a product of values; thus the name. The various relations all stem from the same underlying principle; that is, the relation for one special function can be derived from that for the others, and is simply a manifestation of the same identity in different guises.
Finite characteristic.
The multiplication theorem takes two common forms. In the first case, a finite number of terms are added or multiplied to give the relation. In the second case, an infinite number of terms are added or multiplied. The finite form typically occurs only for the gamma and related functions, for which the identity follows from a p-adic relation over a finite field. For example, the multiplication theorem for the gamma function follows from the Chowla–Selberg formula, which follows from the theory of complex multiplication. The infinite sums are much more common, and follow from characteristic zero relations on the hypergeometric series.
The following tabulates the various appearances of the multiplication theorem for finite characteristic; the characteristic zero relations are given further down. In all cases, "n" and "k" are non-negative integers. For the special case of "n" = 2, the theorem is commonly referred to as the duplication formula.
Gamma function–Legendre formula.
The duplication formula and the multiplication theorem for the gamma function are the prototypical examples. The duplication formula for the gamma function is
formula_0
It is also called the Legendre duplication formula or Legendre relation, in honor of Adrien-Marie Legendre. The multiplication theorem is
formula_1
for integer "k" ≥ 1, and is sometimes called Gauss's multiplication formula, in honour of Carl Friedrich Gauss. The multiplication theorem for the gamma functions can be understood to be a special case, for the trivial Dirichlet character, of the Chowla–Selberg formula.
Sine function.
Formally similar duplication formulas hold for the sine function, which are rather simple consequences of the trigonometric identities. Here one has the duplication formula
formula_2
and, more generally, for any integer "k", one has
formula_3
Polygamma function, harmonic numbers.
The polygamma functions are the successive derivatives of the logarithm of the gamma function (the first of these, the digamma function, being its logarithmic derivative), and thus the multiplication theorem becomes additive instead of multiplicative:
formula_4
for formula_5, and, for formula_6, one has the digamma function:
formula_7
The polygamma identities can be used to obtain a multiplication theorem for harmonic numbers.
Hurwitz zeta function.
The Hurwitz zeta function generalizes the polygamma function to non-integer orders, and thus obeys a very similar multiplication theorem:
formula_8
where formula_9 is the Riemann zeta function. This is a special case of
formula_10
and
formula_11
Multiplication formulas for the non-principal characters may be given in the form of Dirichlet L-functions.
Periodic zeta function.
The periodic zeta function is sometimes defined as
formula_12
where Li"s"("z") is the polylogarithm. It obeys the duplication formula
formula_13
As such, it is an eigenvector of the Bernoulli operator with eigenvalue 2^(1−"s"). The multiplication theorem is
formula_14
The periodic zeta function occurs in the reflection formula for the Hurwitz zeta function, which is why the relation that it obeys, and the Hurwitz zeta relation, differ by the interchange of "s" → 1−"s".
The Bernoulli polynomials may be obtained as a limiting case of the periodic zeta function, taking "s" to be an integer, and thus the multiplication theorem there can be derived from the above. Similarly, substituting "q" = log "z" leads to the multiplication theorem for the polylogarithm.
Polylogarithm.
The duplication formula takes the form
formula_15
The general multiplication formula is in the form of a Gauss sum or discrete Fourier transform:
formula_16
These identities follow from that on the periodic zeta function, taking "z" = log "q".
Kummer's function.
The duplication formula for Kummer's function is
formula_17
and thus resembles that for the polylogarithm, but twisted by "i".
Bernoulli polynomials.
For the Bernoulli polynomials, the multiplication theorems were given by Joseph Ludwig Raabe in 1851:
formula_18
and for the Euler polynomials,
formula_19
and
formula_20
The Bernoulli polynomials may be obtained as a special case of the Hurwitz zeta function, and thus the identities follow from there.
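Raabe's multiplication formula can be checked symbolically; the sketch below uses SymPy's Bernoulli polynomials, a tooling choice made here for illustration rather than something implied by the article.

```python
import sympy as sp

x = sp.symbols('x')

def raabe_holds(m: int, k: int) -> bool:
    """Check k^(1-m) * B_m(k*x) == sum_{n=0}^{k-1} B_m(x + n/k) exactly."""
    lhs = sp.Integer(k) ** (1 - m) * sp.bernoulli(m, k * x)
    rhs = sum(sp.bernoulli(m, x + sp.Rational(n, k)) for n in range(k))
    return sp.expand(lhs - rhs) == 0

print(all(raabe_holds(m, k) for m in range(0, 6) for k in (1, 2, 3)))  # True
```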
Bernoulli map.
The Bernoulli map is a certain simple model of a dissipative dynamical system, describing the effect of a shift operator on an infinite string of coin-flips (the Cantor set). The Bernoulli map is a one-sided version of the closely related Baker's map. The Bernoulli map generalizes to a k-adic version, which acts on infinite strings of "k" symbols: this is the Bernoulli scheme. The transfer operator formula_21 corresponding to the shift operator on the Bernoulli scheme is given by
formula_22
Perhaps not surprisingly, the eigenvectors of this operator are given by the Bernoulli polynomials. That is, one has that
formula_23
It is the fact that the eigenvalues satisfy formula_24 that marks this as a dissipative system: for a non-dissipative measure-preserving dynamical system, the eigenvalues of the transfer operator lie on the unit circle.
One may construct a function obeying the multiplication theorem from any totally multiplicative function. Let formula_25 be totally multiplicative; that is, formula_26 for any integers "m", "n". Define its Fourier series as
formula_27
Assuming that the sum converges, so that "g"("x") exists, one then has that it obeys the multiplication theorem; that is, that
formula_28
That is, "g"("x") is an eigenfunction of Bernoulli transfer operator, with eigenvalue "f"("k"). The multiplication theorem for the Bernoulli polynomials then follows as a special case of the multiplicative function formula_29. The Dirichlet characters are fully multiplicative, and thus can be readily used to obtain additional identities of this form.
Characteristic zero.
The multiplication theorem over a field of characteristic zero does not close after a finite number of terms, but requires an infinite series to be expressed. Examples include that for the Bessel function formula_30:
formula_31
where formula_32 and formula_33 may be taken as arbitrary complex numbers. Such characteristic-zero identities follow generally from one of many possible identities on the hypergeometric series. | [
{
"math_id": 0,
"text": "\n\\Gamma(z) \\; \\Gamma\\left(z + \\frac{1}{2}\\right) = 2^{1-2z} \\; \\sqrt{\\pi} \\; \\Gamma(2z).\n"
},
{
"math_id": 1,
"text": "\n\\Gamma(z) \\; \\Gamma\\left(z + \\frac{1}{k}\\right) \\; \\Gamma\\left(z + \\frac{2}{k}\\right) \\cdots\n\\Gamma\\left(z + \\frac{k-1}{k}\\right) =\n(2 \\pi)^{ \\frac{k-1}{2}} \\; k^{\\frac{1-2kz}{2} } \\; \\Gamma(kz)\n"
},
{
"math_id": 2,
"text": "\n\\sin(\\pi x)\\sin\\left(\\pi\\left(x+\\frac{1}{2}\\right)\\right) = \\frac{1}{2}\\sin(2\\pi x)\n"
},
{
"math_id": 3,
"text": "\n\\sin(\\pi x)\\sin\\left(\\pi\\left(x+\\frac{1}{k}\\right)\\right) \\cdots \\sin\\left(\\pi\\left(x+\\frac{k-1}{k}\\right)\\right) = 2^{1-k} \\sin(k \\pi x)\n"
},
{
"math_id": 4,
"text": "k^{m} \\psi^{(m-1)}(kz) = \\sum_{n=0}^{k-1}\n\\psi^{(m-1)}\\left(z+\\frac{n}{k}\\right)"
},
{
"math_id": 5,
"text": "m>1"
},
{
"math_id": 6,
"text": "m=1"
},
{
"math_id": 7,
"text": "k\\left[\\psi(kz)-\\log(k)\\right] = \\sum_{n=0}^{k-1}\n\\psi\\left(z+\\frac{n}{k}\\right)."
},
{
"math_id": 8,
"text": "k^s\\zeta(s)=\\sum_{n=1}^k \\zeta\\left(s,\\frac{n}{k}\\right),"
},
{
"math_id": 9,
"text": "\\zeta(s)"
},
{
"math_id": 10,
"text": "k^s\\,\\zeta(s,kz)= \\sum_{n=0}^{k-1}\\zeta\\left(s,z+\\frac{n}{k}\\right)"
},
{
"math_id": 11,
"text": "\\zeta(s,kz)=\\sum^{\\infty}_{n=0} {s+n-1 \\choose n} (1-k)^n z^n \\zeta(s+n,z)."
},
{
"math_id": 12,
"text": "F(s;q) = \\sum_{m=1}^\\infty \\frac {e^{2\\pi imq}}{m^s}\n=\\operatorname{Li}_s\\left(e^{2\\pi i q} \\right) "
},
{
"math_id": 13,
"text": "2^{1-s} F(s;q) = F\\left(s,\\frac{q}{2}\\right)\n+ F\\left(s,\\frac{q+1}{2}\\right)."
},
{
"math_id": 14,
"text": "k^{1-s} F(s;kq) = \\sum_{n=0}^{k-1} F\\left(s,q+\\frac{n}{k}\\right)."
},
{
"math_id": 15,
"text": "2^{1-s}\\operatorname{Li}_s(z^2) = \\operatorname{Li}_s(z)+\\operatorname{Li}_s(-z)."
},
{
"math_id": 16,
"text": "k^{1-s} \\operatorname{Li}_s(z^k) =\n\\sum_{n=0}^{k-1}\\operatorname{Li}_s\\left(ze^{i2\\pi n/k}\\right)."
},
{
"math_id": 17,
"text": "2^{1-n}\\Lambda_n(-z^2) = \\Lambda_n(z)+\\Lambda_n(-z)"
},
{
"math_id": 18,
"text": "k^{1-m} B_m(kx)=\\sum_{n=0}^{k-1} B_m \\left(x+\\frac{n}{k}\\right)"
},
{
"math_id": 19,
"text": "k^{-m} E_m(kx)= \\sum_{n=0}^{k-1}\n(-1)^n E_m \\left(x+\\frac{n}{k}\\right)\n\\quad \\mbox{ for } k=1,3,\\dots"
},
{
"math_id": 20,
"text": "k^{-m} E_m(kx)= \\frac{-2}{m+1} \\sum_{n=0}^{k-1}\n(-1)^n B_{m+1} \\left(x+\\frac{n}{k}\\right)\n\\quad \\mbox{ for } k=2,4,\\dots."
},
{
"math_id": 21,
"text": "\\mathcal{L}_k"
},
{
"math_id": 22,
"text": "[\\mathcal{L}_k f](x) = \\frac{1}{k}\\sum_{n=0}^{k-1}f\\left(\\frac{x+n}{k}\\right)"
},
{
"math_id": 23,
"text": "\\mathcal{L}_k B_m = \\frac{1}{k^m}B_m"
},
{
"math_id": 24,
"text": "k^{-m}<1"
},
{
"math_id": 25,
"text": "f(n)"
},
{
"math_id": 26,
"text": "f(mn)=f(m)f(n)"
},
{
"math_id": 27,
"text": "g(x)=\\sum_{n=1}^\\infty f(n) \\exp(2\\pi inx)"
},
{
"math_id": 28,
"text": "\\frac{1}{k}\\sum_{n=0}^{k-1}g\\left(\\frac{x+n}{k}\\right)=f(k)g(x)"
},
{
"math_id": 29,
"text": "f(n)=n^{-s}"
},
{
"math_id": 30,
"text": "J_\\nu(z)"
},
{
"math_id": 31,
"text": "\n\\lambda^{-\\nu} J_\\nu (\\lambda z) =\n\\sum_{n=0}^\\infty \\frac{1}{n!}\n\\left(\\frac{(1-\\lambda^2)z}{2}\\right)^n\nJ_{\\nu+n}(z),\n"
},
{
"math_id": 32,
"text": "\\lambda"
},
{
"math_id": 33,
"text": "\\nu"
}
] | https://en.wikipedia.org/wiki?curid=8720712 |
87210 | Sigmoid function | Mathematical function having a characteristic S-shaped curve or sigmoid curve
A sigmoid function is any mathematical function whose graph has a characteristic S-shaped or sigmoid curve.
A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:
formula_0
Other standard sigmoid functions are given in the Examples section. In some fields, most notably in the context of artificial neural networks, the term "sigmoid function" is used as an alias for the logistic function.
Special cases of the sigmoid function include the Gompertz curve (used in modeling systems that saturate at large values of x) and the ogee curve (used in the spillway of some dams). Sigmoid functions have a domain of all real numbers, with a return (response) value that is commonly monotonically increasing, but it may instead be decreasing. Sigmoid functions most often show a return value (y axis) in the range 0 to 1. Another commonly used range is from −1 to 1.
A wide variety of sigmoid functions including the logistic and hyperbolic tangent functions have been used as the activation function of artificial neurons. Sigmoid curves are also common in statistics as cumulative distribution functions (which go from 0 to 1), such as the integrals of the logistic density, the normal density, and Student's "t" probability density functions. The logistic sigmoid function is invertible, and its inverse is the logit function.
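A minimal sketch of the logistic sigmoid and its inverse, the logit; the piecewise form of the sigmoid is a common trick (assumed here, not prescribed by the article) to avoid floating-point overflow for large negative arguments.

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function 1 / (1 + e^-x), written to avoid overflow for large |x|."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def logit(p: float) -> float:
    """Inverse of the logistic sigmoid, defined for 0 < p < 1."""
    return math.log(p / (1.0 - p))

for x in (-5.0, -0.5, 0.0, 2.0, 20.0):
    p = sigmoid(x)
    print(x, p, logit(p))   # logit(sigmoid(x)) recovers x, up to rounding
```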
Definition.
A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point.
Properties.
In general, a sigmoid function is monotonic, and has a first derivative which is bell shaped. Conversely, the integral of any continuous, non-negative, bell-shaped function (with one local maximum and no local minimum, unless degenerate) will be sigmoidal. Thus the cumulative distribution functions for many common probability distributions are sigmoidal. One such example is the error function, which is related to the cumulative distribution function of a normal distribution; another is the arctan function, which is related to the cumulative distribution function of a Cauchy distribution.
A sigmoid function is constrained by a pair of horizontal asymptotes as formula_1.
A sigmoid function is convex for values less than a particular point, and it is concave for values greater than that point: in many of the examples here, that point is 0.
Examples.
A further example is formula_15, using the hyperbolic tangent mentioned above. Here, formula_16 is a free parameter encoding the slope at formula_17, which must be greater than or equal to formula_18 because any smaller value will result in a function with multiple inflection points, which is therefore not a true sigmoid. This function is unusual because it actually attains the limiting values of -1 and 1 within a finite range, meaning that its value is constant at -1 for all formula_19 and at 1 for all formula_20. Nonetheless, it is smooth (infinitely differentiable, formula_21) "everywhere", including at formula_22.
Applications.
Many natural processes, such as those of complex system learning curves, exhibit a progression from small beginnings that accelerates and approaches a climax over time. When a specific mathematical model is lacking, a sigmoid function is often used.
The van Genuchten–Gupta model is based on an inverted S-curve and applied to the response of crop yield to soil salinity.
Examples of the application of the logistic S-curve to the response of crop yield (wheat) to both the soil salinity and depth to water table in the soil are shown in .
In artificial neural networks, sometimes non-smooth functions are used instead for efficiency; these are known as hard sigmoids.
In audio signal processing, sigmoid functions are used as waveshaper transfer functions to emulate the sound of analog circuitry clipping.
In biochemistry and pharmacology, the Hill and Hill–Langmuir equations are sigmoid functions.
In computer graphics and real-time rendering, some of the sigmoid functions are used to blend colors or geometry between two values, smoothly and without visible seams or discontinuities.
Titration curves between strong acids and strong bases have a sigmoid shape due to the logarithmic nature of the pH scale.
The logistic function can be calculated efficiently by utilizing type III Unums.
See also.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma(x) = \\frac{1}{1 + e^{-x}} = \\frac{e^x}{1 + e^x}=1-\\sigma(-x)."
},
{
"math_id": 1,
"text": "x \\rightarrow \\pm \\infty"
},
{
"math_id": 2,
"text": " f(x) = \\frac{1}{1 + e^{-x}} "
},
{
"math_id": 3,
"text": " f(x) = \\tanh x = \\frac{e^x-e^{-x}}{e^x+e^{-x}} "
},
{
"math_id": 4,
"text": " f(x) = \\arctan x "
},
{
"math_id": 5,
"text": " f(x) = \\operatorname{gd}(x) = \\int_0^x \\frac{dt}{\\cosh t} = 2\\arctan\\left(\\tanh\\left(\\frac{x}{2}\\right)\\right) "
},
{
"math_id": 6,
"text": " f(x) = \\operatorname{erf}(x) = \\frac{2}{\\sqrt{\\pi}} \\int_0^x e^{-t^2} \\, dt "
},
{
"math_id": 7,
"text": " f(x) = \\left(1 + e^{-x} \\right)^{-\\alpha}, \\quad \\alpha > 0 "
},
{
"math_id": 8,
"text": " f(x) = \\begin{cases}\n{\\displaystyle\n\\left( \\int_0^1 \\left(1 - u^2\\right)^N du \\right)^{-1} \\int_0^x \\left( 1 - u^2 \\right)^N \\ du}, & |x| \\le 1 \\\\\n\\\\\n\\sgn(x) & |x| \\ge 1 \\\\\n\\end{cases} \\quad N \\in \\mathbb{Z} \\ge 1 "
},
{
"math_id": 9,
"text": " f(x) = \\frac{x}{\\sqrt{1+x^2}} "
},
{
"math_id": 10,
"text": " f(x) = \\frac{x}{\\left(1 + |x|^{k}\\right)^{1/k}} "
},
{
"math_id": 11,
"text": " f(x) = \\varphi(\\varphi(x, \\beta), \\alpha) , "
},
{
"math_id": 12,
"text": " \\varphi(x, \\lambda) = \\begin{cases} (1 - \\lambda x)^{1/\\lambda} & \\lambda \\ne 0 \\\\e^{-x} & \\lambda = 0 \\\\ \\end{cases} "
},
{
"math_id": 13,
"text": "\\alpha < 1"
},
{
"math_id": 14,
"text": "\\beta < 1"
},
{
"math_id": 15,
"text": "\\begin{align}f(x) &= \\begin{cases}\n{\\displaystyle\n\\frac{2}{1+e^{-2m\\frac{x}{1-x^2}}} - 1}, & |x| < 1 \\\\\n\\\\\n\\sgn(x) & |x| \\ge 1 \\\\\n\\end{cases} \\\\\n&= \\begin{cases}\n{\\displaystyle\n\\tanh\\left(m\\frac{x}{1-x^2}\\right)}, & |x| < 1 \\\\\n\\\\\n\\sgn(x) & |x| \\ge 1 \\\\\n\\end{cases}\\end{align}"
},
{
"math_id": 16,
"text": "m"
},
{
"math_id": 17,
"text": "x=0"
},
{
"math_id": 18,
"text": "\\sqrt{3}"
},
{
"math_id": 19,
"text": "x \\leq -1"
},
{
"math_id": 20,
"text": "x \\geq 1"
},
{
"math_id": 21,
"text": "C^\\infty"
},
{
"math_id": 22,
"text": "x = \\pm 1"
}
] | https://en.wikipedia.org/wiki?curid=87210 |
8721698 | Resolvent set | In linear algebra and operator theory, the resolvent set of a linear operator is a set of complex numbers for which the operator is in some sense "well-behaved". The resolvent set plays an important role in the resolvent formalism.
Definitions.
Let "X" be a Banach space and let formula_0 be a linear operator with domain formula_1. Let id denote the identity operator on "X". For any formula_2, let
formula_3
A complex number formula_4 is said to be a regular value if the following three statements are true:
1. formula_5 is injective, that is, it has an inverse formula_6;
2. formula_7 is a bounded linear operator;
3. formula_7 is defined on a dense subspace of "X", that is, formula_5 has dense range.
The resolvent set of "L" is the set of all regular values of "L":
formula_8
The spectrum is the complement of the resolvent set
formula_9
and it decomposes into three mutually disjoint parts: the point spectrum (where condition 1 fails), the continuous spectrum (where condition 2 fails) and the residual spectrum (where condition 3 fails).
If formula_10 is a closed operator, then so is each formula_5, and condition 3 may be replaced by requiring that formula_5 be surjective.
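In the finite-dimensional case every linear operator is a matrix, conditions 2 and 3 hold automatically, and the resolvent set is simply the complement of the set of eigenvalues. A small NumPy sketch with an arbitrarily chosen example matrix:

```python
import numpy as np

L = np.array([[2.0, 1.0],
              [0.0, 3.0]])             # spectrum = {2, 3}

def resolvent(L, lam):
    """R(lam, L) = (L - lam*I)^(-1); exists iff lam is not an eigenvalue."""
    return np.linalg.inv(L - lam * np.eye(L.shape[0]))

print("spectrum:", np.linalg.eigvals(L))

print(resolvent(L, 1.0 + 2.0j))        # 1 + 2i lies in the resolvent set
try:
    resolvent(L, 2.0)                  # 2 is an eigenvalue, so L - 2I is singular
except np.linalg.LinAlgError as err:
    print("no resolvent at lambda = 2:", err)
```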
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L\\colon D(L)\\rightarrow X"
},
{
"math_id": 1,
"text": "D(L) \\subseteq X"
},
{
"math_id": 2,
"text": "\\lambda \\in \\mathbb{C}"
},
{
"math_id": 3,
"text": "L_{\\lambda} = L - \\lambda\\,\\mathrm{id}."
},
{
"math_id": 4,
"text": "\\lambda"
},
{
"math_id": 5,
"text": "L_\\lambda"
},
{
"math_id": 6,
"text": "R(\\lambda, L)=(L-\\lambda \\,\\mathrm{id})^{-1}"
},
{
"math_id": 7,
"text": "R(\\lambda,L)"
},
{
"math_id": 8,
"text": "\\rho(L) = \\{ \\lambda \\in \\mathbb{C} \\mid \\lambda \\mbox{ is a regular value of } L \\}."
},
{
"math_id": 9,
"text": "\\sigma (L) = \\mathbb{C} \\setminus \\rho (L),"
},
{
"math_id": 10,
"text": "L"
},
{
"math_id": 11,
"text": "\\rho(L) \\subseteq \\mathbb{C}"
}
] | https://en.wikipedia.org/wiki?curid=8721698 |
872175 | Cryptanalysis of the Enigma | Decryption of the cipher of the Enigma machine
Cryptanalysis of the Enigma ciphering system enabled the western Allies in World War II to read substantial amounts of Morse-coded radio communications of the Axis powers that had been enciphered using Enigma machines. This yielded military intelligence which, along with that from other decrypted Axis radio and teleprinter transmissions, was given the codename "Ultra".
The Enigma machines were a family of portable cipher machines with rotor scramblers. Good operating procedures, properly enforced, would have made the plugboard Enigma machine unbreakable to the Allies at that time.
The German plugboard-equipped Enigma became the principal crypto-system of the German Reich and later of other Axis powers. In December 1932 it was "broken" by mathematician Marian Rejewski at the Polish General Staff's Cipher Bureau, using mathematical permutation group theory combined with French-supplied intelligence material obtained from a German spy. By 1938 Rejewski had invented a device, the cryptologic bomb, and Henryk Zygalski had devised his sheets, to make the cipher-breaking more efficient. Five weeks before the outbreak of World War II, in late July 1939 at a conference just south of Warsaw, the Polish Cipher Bureau shared its Enigma-breaking techniques and technology with the French and British.
During the German invasion of Poland, core Polish Cipher Bureau personnel were evacuated via Romania to France, where they established the "PC Bruno" signals intelligence station with French facilities support. Successful cooperation among the Poles, French, and British continued until June 1940, when France surrendered to the Germans.
From this beginning, the British Government Code and Cypher School at Bletchley Park built up an extensive cryptanalytic capability. Initially the decryption was mainly of "Luftwaffe" (German air force) and a few "Heer" (German army) messages, as the "Kriegsmarine" (German navy) employed much more secure procedures for using Enigma. Alan Turing, a Cambridge University mathematician and logician, provided much of the original thinking that led to upgrading of the Polish cryptologic bomb used in decrypting German Enigma ciphers. However, the "Kriegsmarine" introduced an Enigma version with a fourth rotor for its U-boats, resulting in a prolonged period when these messages could not be decrypted. With the capture of cipher keys and the use of much faster US Navy bombes, regular, rapid reading of U-boat messages resumed.
General principles.
The Enigma machines produced a polyalphabetic substitution cipher. During World War I, inventors in several countries realised that a purely random key sequence, containing no repetitive pattern, would, in principle, make a polyalphabetic substitution cipher unbreakable. This led to the development of rotor machines which alter each character in the plaintext to produce the ciphertext, by means of a scrambler comprising a set of "rotors" that alter the electrical path from character to character, between the input device and the output device. This constant altering of the electrical pathway produces a very long period before the pattern—the key sequence or substitution alphabet—repeats.
Decrypting enciphered messages involves three stages, defined somewhat differently in that era than in modern cryptography. First, there is the "identification" of the system in use, in this case Enigma; second, "breaking" the system by establishing exactly how encryption takes place, and third, "solving", which involves finding the way that the machine was set up for an individual message, "i.e." the "message key". Today, it is often assumed that an attacker knows how the encipherment process works (see Kerckhoffs's principle) and "breaking" is often used for "solving" a key. Enigma machines, however, had so many potential internal wiring states that reconstructing the machine, independent of particular settings, was a very difficult task.
The Enigma machine.
The Enigma rotor machine was potentially an excellent system. It generated a polyalphabetic substitution cipher, with a period before repetition of the substitution alphabet that was much longer than any message, or set of messages, sent with the same key.
A major weakness of the system, however, was that no letter could be enciphered to itself. This meant that some possible solutions could quickly be eliminated because of the same letter appearing in the same place in both the ciphertext and the putative piece of plaintext. For example, sliding the probable plaintext "Keine besonderen Ereignisse" (literally, "no special occurrences", perhaps better translated as "nothing to report", a phrase regularly used by one German outpost in North Africa) along a section of ciphertext allowed every alignment in which any letter of the crib coincided with the ciphertext letter at the same position to be ruled out.
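This elimination step is easy to mechanise. The sketch below, using a made-up ciphertext purely for illustration, slides a crib along the ciphertext and rejects every position at which some letter would have had to encipher to itself.

```python
def possible_crib_positions(ciphertext: str, crib: str):
    """Positions where the crib could align: no letter may match its own ciphertext."""
    positions = []
    for start in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[start:start + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            positions.append(start)
    return positions

# Made-up ciphertext, for illustration only.
ciphertext = "QFZWRWIVTYRESXBFOGKUHQBAISEK"
crib = "KEINEBESONDEREN"
print(possible_crib_positions(ciphertext, crib))
```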
Structure.
The mechanism of the Enigma consisted of a keyboard connected to a battery and a current entry plate or wheel (German: "Eintrittswalze"), at the right hand end of the scrambler (usually via a plugboard in the military versions). This contained a set of 26 contacts that made electrical connection with the set of 26 spring-loaded pins on the right hand rotor. The internal wiring of the core of each rotor provided an electrical pathway from the pins on one side to different connection points on the other. The left hand side of each rotor made electrical connection with the rotor to its left. The leftmost rotor then made contact with the reflector (German: "Umkehrwalze"). The reflector provided a set of thirteen paired connections to return the current back through the scrambler rotors, and eventually to the lampboard where a lamp under a letter was illuminated.
Whenever a key on the keyboard was pressed, the stepping motion was actuated, advancing the rightmost rotor one position. Because it moved with each key pressed it is sometimes called the "fast rotor". When a notch on that rotor engaged with a pawl on the middle rotor, that too moved; and similarly with the leftmost ('slow') rotor.
There are a huge number of ways that the connections within each scrambler rotor—and between the entry plate and the keyboard or plugboard or lampboard—could be arranged. For the reflector plate there are fewer, but still a large number of options to its possible wirings.
Each scrambler rotor could be set to any one of its 26 starting positions (any letter of the alphabet). For the Enigma machines with only three rotors, their sequence in the scrambler—which was known as the "wheel order (WO)" to Allied cryptanalysts—could be selected from the six that are possible.
Later Enigma models included an "alphabet ring" like a tyre around the core of each rotor. This could be set in any one of 26 positions in relation to the rotor's core. The ring contained one or more notches that engaged with a pawl that advanced the next rotor to the left.
Later still, the three rotors for the scrambler were selected from a set of five or, in the case of the German Navy, eight rotors. The alphabet rings of rotors VI, VII, and VIII contained two notches which, despite shortening the period of the substitution alphabet, made decryption more difficult.
Most military Enigmas also featured a plugboard (German: "Steckerbrett"). This altered the electrical pathway between the keyboard and the entry wheel of the scrambler and, in the opposite direction, between the scrambler and the lampboard. It did this by exchanging letters reciprocally, so that if "A" was plugged to "G" then pressing key "A" would lead to current entering the scrambler at the "G" position, and if "G" was pressed the current would enter at "A". The same connections applied for the current on the way out to the lamp panel.
To decipher German military Enigma messages, the following information would need to be known.
Logical structure of the machine (unchanging)
Internal settings (usually changed less frequently than external settings)
External settings (usually changed more frequently than internal settings)
Discovering the logical structure of the machine may be called "breaking" it, a one-off process except when changes or additions were made to the machines. Finding the internal and external settings for one or more messages may be called "solving" – although breaking is often used for this process as well.
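The electrical path and the settings listed above can be made concrete with a much-simplified software model. The sketch below is illustrative only: it uses made-up rotor wirings rather than the historical ones, ignores the ring settings and the notch-driven (double-)stepping in favour of simple odometer-style stepping, and represents the plugboard, rotors and reflector as permutations. Even so, it exhibits the two properties discussed in this article: enciphering with the same settings is its own inverse, and no letter is ever enciphered to itself.

```python
import random
import string

ALPHA = string.ascii_uppercase

# Made-up rotor wirings (seeded shuffles), NOT the historical ones.
_rng = random.Random(0)
ROTORS = []
for _ in range(3):
    wiring = list(range(26))
    _rng.shuffle(wiring)
    ROTORS.append(wiring)
# Reflector: a fixed-point-free involution (pairs A<->N, B<->O, ...).
REFLECTOR = [(i + 13) % 26 for i in range(26)]

def make_plugboard(pairs):
    """Plugboard as a reciprocal substitution: listed pairs swapped, rest self-steckered."""
    board = list(range(26))
    for a, b in pairs:
        i, j = ALPHA.index(a), ALPHA.index(b)
        board[i], board[j] = j, i
    return board

def encipher(text, start_positions, plug_pairs):
    """Simplified Enigma: no ring settings, odometer-style stepping, toy wirings."""
    plug = make_plugboard(plug_pairs)
    pos = list(start_positions)           # rotor positions, leftmost first
    out = []
    for ch in text:
        # Step the rotors before enciphering; the rightmost is the fast rotor.
        for r in (2, 1, 0):
            pos[r] = (pos[r] + 1) % 26
            if pos[r] != 0:
                break
        c = plug[ALPHA.index(ch)]
        for r in (2, 1, 0):               # right to left through the rotors
            c = (ROTORS[r][(c + pos[r]) % 26] - pos[r]) % 26
        c = REFLECTOR[c]
        for r in (0, 1, 2):               # back left to right through the inverse wirings
            c = (ROTORS[r].index((c + pos[r]) % 26) - pos[r]) % 26
        out.append(ALPHA[plug[c]])
    return "".join(out)

message = "NOTHINGTOREPORT"
cipher = encipher(message, [0, 0, 0], [("A", "G"), ("T", "Z")])
print(cipher)
print(encipher(cipher, [0, 0, 0], [("A", "G"), ("T", "Z")]))  # recovers the plaintext
assert all(a != b for a, b in zip(message, cipher))           # no letter maps to itself
```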
Security properties.
The various Enigma models provided different levels of security. The presence of a plugboard ("Steckerbrett") substantially increased the security of the encipherment. Each pair of letters that were connected together by a plugboard lead were referred to as "stecker partners", and the letters that remained unconnected were said to be "self-steckered". In general, the unsteckered Enigma was used for commercial and diplomatic traffic and could be broken relatively easily using hand methods, while attacking versions with a plugboard was much more difficult. The British read unsteckered Enigma messages sent during the Spanish Civil War, and also some Italian naval traffic enciphered early in World War II.
The strength of the security of the ciphers that were produced by the Enigma machine was a product of the large numbers associated with the scrambling process.
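To give a sense of those numbers, the following back-of-the-envelope calculation (assuming the common wartime configuration of three rotors chosen from five and ten plugboard leads; these are standard estimates, not figures taken from this article) counts the main contributions:

```python
from math import factorial, perm

wheel_orders = perm(5, 3)                    # 3 rotors chosen and ordered from a set of 5
rotor_positions = 26 ** 3                    # starting position of each rotor
ring_settings = 26 ** 2                      # the left-hand ring has no effect on stepping
# Number of ways to connect 10 plugboard leads among 26 letters.
plugboard = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)

print(wheel_orders, rotor_positions, ring_settings, plugboard)
print("combined:", wheel_orders * rotor_positions * ring_settings * plugboard)
```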
However, the way that Enigma was used by the Germans meant that, if the settings for one day (or whatever period was represented by each row of the setting sheet) were established, the rest of the messages for that network on that day could quickly be deciphered.
The security of Enigma ciphers did have fundamental weaknesses that proved helpful to cryptanalysts.
Key setting.
Enigma featured the major operational convenience of being symmetrical (or self-inverse). This meant that decipherment worked in the same way as encipherment, so that when the ciphertext was typed in, the sequence of lamps that lit yielded the plaintext.
Identical setting of the machines at the transmitting and receiving ends was achieved by key setting procedures. These varied from time to time and across different networks. They consisted of "setting sheets" in a "codebook", which were distributed to all users of a network, and were changed regularly. The message key was transmitted in an "indicator" as part of the message preamble. The word "key" was also used at Bletchley Park to describe the network that used the same Enigma setting sheets. Initially these were recorded using coloured pencils and were given the names "red", "light blue" etc., and later the names of birds such as "kestrel". During World War II the settings for most networks lasted for 24 hours, although towards the end of the war, some were changed more frequently. The sheets had columns specifying, for each day of the month, the rotors to be used and their positions, the ring positions and the plugboard connections. For security, the dates were in reverse chronological order down the page, so that each row could be cut off and destroyed when it was finished with.
Up until 15 September 1938, the transmitting operator indicated to the receiving operator(s) how to set their rotors, by choosing a three letter "message key" (the key specific to that message) and enciphering it twice using the specified initial ring positions (the "Grundstellung"). The resultant 6-letter indicator, was then transmitted before the enciphered text of the message. Suppose that the specified "Grundstellung" was "RAO", and the chosen 3-letter message key was "IHL", the operator would set the rotors to "RAO" and encipher "IHL" twice. The resultant ciphertext, "DQYQQT", would be transmitted, at which point the rotors would be changed to the message key ("IHL") and then the message itself enciphered. The receiving operator would use the specified "Grundstellung RAO" to decipher the first six letters, yielding "IHLIHL". The receiving operator, seeing the repeated message key would know that there had been no corruption and use "IHL" to decipher the message.
The weakness in this indicator procedure came from two factors. First, use of a global "Grundstellung" —this was changed in September 1938 so that the operator selected his initial position to encrypt the message key, and sent the initial position in clear followed by the enciphered message key. The second problem was the repetition of message key within the indicator, which was a serious security flaw. The message setting was encoded twice, resulting in a relation between first and fourth, second and fifth, and third and sixth character. This security problem enabled the Polish Cipher Bureau to break into the pre-war Enigma system as early as 1932. On 1 May 1940 the Germans changed the procedures to encipher the message key only once.
British efforts.
In 1927, the UK openly purchased a commercial Enigma. Its operation was analysed and reported. Although a leading British cryptographer, Dilly Knox (a veteran of World War I and the cryptanalytical activities of the Royal Navy's Room 40), worked on decipherment he had only the messages he generated himself to practice with. After Germany supplied modified commercial machines to the Nationalist side in the Spanish Civil War, and with the Italian Navy (who were also aiding the Nationalists) using a version of the commercial Enigma that did not have a plugboard, Britain could intercept the radio broadcast messages. In April 1937 Knox made his first decryption of an Enigma encryption using a technique that he called "buttoning up" to discover the rotor wirings and another that he called "rodding" to solve messages. This relied heavily on cribs and on a crossword-solver's expertise in Italian, as it yielded a limited number of spaced-out letters at a time.
Britain had no ability to read the messages broadcast by Germany, which used the military Enigma machine.
Polish breakthroughs.
In the 1920s the German military began using a 3-rotor Enigma, whose security was increased in 1930 by the addition of a plugboard. The Polish Cipher Bureau sought to break it because of the threat that Poland faced from Germany, but the early attempts did not succeed. Because mathematicians had earlier rendered great services in breaking Russian ciphers and codes, in early 1929 the Polish Cipher Bureau invited mathematics students at Poznań University, who had a good knowledge of the German language since the area had been liberated from Germany only after World War I, to take a course in cryptology.
After the course, the Bureau recruited some students to work part-time at a Bureau branch set up in Poznań. On 1 September 1932, 27-year-old mathematician Marian Rejewski and two fellow Poznań University mathematics graduates, Henryk Zygalski and Jerzy Różycki, were hired by the Bureau in Warsaw. Their first task was reconstructing a four-letter German naval code.
Near the end of 1932 Rejewski was asked to work a couple of hours a day at breaking the Enigma cipher. His work on it may have begun in late October or early November 1932.
Rejewski's characteristics method.
Marian Rejewski quickly spotted the Germans' major procedural weaknesses of specifying a single indicator setting ("Grundstellung") for all messages on a network for a day, and repeating the operator's chosen "message key" in the enciphered 6-letter indicator. Those procedural mistakes allowed Rejewski to decipher the message keys without knowing any of the machine's wirings. In the above example of "DQYQQT" being the enciphered indicator, it is known that the first letter "D" and the fourth letter "Q" represent the same letter, enciphered three positions apart in the scrambler sequence. Similarly with "Q" and "Q" in the second and fifth positions, and "Y" and "T" in the third and sixth. Rejewski exploited this fact by collecting a sufficient set of messages enciphered with the same indicator setting, and assembling three tables for the 1,4, the 2,5, and the 3,6 pairings. Each of these tables recorded, for every possible first (or second, or third) letter of an indicator, the fourth (or fifth, or sixth) letter that was observed with it in that day's traffic.
A path from one first letter to the corresponding fourth letter, then from that letter as the first letter to its corresponding fourth letter, and so on until the first letter recurs, traces out a cycle group. In the example considered here, the 1,4 table yields six cycle groups, with lengths of 9, 9, 3, 3, 1, and 1.
Rejewski recognised that a cycle group must pair with another group of the same length. Even though Rejewski did not know the rotor wirings or the plugboard permutation, the German mistake allowed him to reduce the number of possible substitution ciphers to a small number. For the 1,4 pairing above, there are only 1×3×9=27 possibilities for the substitution ciphers at positions 1 and 4.
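The bookkeeping involved is straightforward to express in code. In the sketch below, the substitutions at indicator positions 1 and 4 are generated at random as fixed-point-free involutions, which is the property the Enigma's reciprocal encipherment gives them; composing them reproduces the analyst's "first letter to fourth letter" permutation, whose cycle lengths always occur in equal pairs.

```python
import random
from string import ascii_uppercase as ALPHA

def random_involution(rng):
    """A random pairing of all 26 letters (a fixed-point-free involution),
    standing in for the Enigma substitution at one indicator position."""
    letters = list(ALPHA)
    rng.shuffle(letters)
    pairing = {}
    for a, b in zip(letters[0::2], letters[1::2]):
        pairing[a], pairing[b] = b, a
    return pairing

def cycle_lengths(mapping):
    seen, lengths = set(), []
    for start in ALPHA:
        if start in seen:
            continue
        n, letter = 0, start
        while letter not in seen:
            seen.add(letter)
            letter = mapping[letter]
            n += 1
        lengths.append(n)
    return sorted(lengths, reverse=True)

rng = random.Random(1)
A, D = random_involution(rng), random_involution(rng)   # substitutions at positions 1 and 4
AD = {x: D[A[x]] for x in ALPHA}       # what the analyst sees: first letter -> fourth letter
print(cycle_lengths(AD))               # the cycle lengths occur in equal pairs
```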
Rejewski also exploited cipher clerk laziness. Scores of messages would be enciphered by several cipher clerks, but some of those messages would have the same encrypted indicator. That meant that both clerks happened to choose the same three letter starting position. Such a collision should be rare with randomly selected starting positions, but lazy cipher clerks often chose starting positions such as "AAA", "BBB", or "CCC". Those security mistakes allowed Rejewski to solve each of the six permutations used to encipher the indicator.
That solution was an extraordinary feat. Rejewski did it without knowing the plugboard permutation or the rotor wirings. Even after solving for the six permutations, Rejewski did not know how the plugboard was set or the positions of the rotors. Knowing the six permutations also did not allow Rejewski to read any messages.
The spy and the rotor wiring.
Before Rejewski started work on the Enigma, the French had a spy, Hans-Thilo Schmidt, who worked at Germany's Cipher Office in Berlin and had access to some Enigma documents. Even with the help of those documents, the French did not make progress on breaking the Enigma. The French decided to share the material with their British and Polish allies. In a December 1931 meeting, the French provided Gwido Langer, head of the Polish Cipher Bureau, with copies of some Enigma material. Langer asked the French for more material, and Gustave Bertrand of French Military Intelligence quickly obliged; Bertrand provided additional material in May and September 1932. The documents included two German manuals and two pages of Enigma daily keys.
In December 1932, the Bureau provided Rejewski with some German manuals and monthly keys. The material enabled Rejewski to achieve "one of the most important breakthroughs in cryptologic history" by using the theory of permutations and groups to work out the Enigma scrambler wiring.
Rejewski could look at a day's cipher traffic and solve for the permutations at the six sequential positions used to encipher the indicator. Since Rejewski had the cipher key for the day, he knew and could factor out the plugboard permutation. He assumed the keyboard permutation was the same as the commercial Enigma, so he factored that out. He knew the rotor order, the ring settings, and the starting position. He developed a set of equations that would allow him to solve for the rightmost rotor wiring assuming the two rotors to the left did not move.
He attempted to solve the equations, but failed with inconsistent results. After some thought, he realised one of his assumptions must be wrong.
Rejewski found that the connections between the military Enigma's keyboard and the entry ring were not, as in the commercial Enigma, in the order of the keys on a German typewriter. He made an inspired correct guess that it was in alphabetical order. Britain's Dilly Knox was astonished when he learned, in July 1939, that the arrangement was so simple.
With the new assumption, Rejewski succeeded in solving the wiring of the rightmost rotor. The next month's cipher traffic used a different rotor in the rightmost position, so Rejewski used the same equations to solve for its wiring. With those rotors known, the remaining third rotor and the reflector wiring were determined. Without capturing a single rotor to reverse engineer, Rejewski had determined the logical structure of the machine.
The Polish Cipher Bureau then had some Enigma machine replicas made; the replicas were called "Enigma doubles".
The grill method.
The Poles now had the machine's wiring secrets, but they still needed to determine the daily keys for the cipher traffic. The Poles would examine the Enigma traffic and use the method of characteristics to determine the six permutations used for the indicator. The Poles would then use the grill method to determine the rightmost rotor and its position. That search would be complicated by the plugboard permutation, but that permutation only swapped six pairs of letters – not enough to disrupt the search. The grill method also determined the plugboard wiring. The grill method could also be used to determine the middle and left rotors and their setting (and those tasks were simpler because there was no plugboard), but the Poles eventually compiled a catalogue of the 3×2×26×26=4056 possible Q permutations (reflector and 2 leftmost rotor permutations), so they could just look up the answer.
The only remaining secret of the daily key would be the ring settings, and the Poles would attack that problem with brute force. Most messages would start with the three letters "ANX" ("an" is German for "to" and the "X" character was used as a space). It may take almost 26×26×26=17576 trials, but that was doable. Once the ring settings were found, the Poles could read the day's traffic.
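A brute-force search of the ring settings along these lines can be sketched as follows; `decrypt` stands in for an Enigma-double decryption routine (not shown here) that takes candidate ring settings, with the rest of the day's key assumed already known, so the helper and its signature are purely illustrative.

```python
from itertools import product
from string import ascii_uppercase

def find_ring_settings(ciphertext, decrypt):
    """Try every ring setting (up to 26**3 = 17,576 trials) until the decrypt starts with ANX."""
    for rings in product(ascii_uppercase, repeat=3):
        if decrypt(ciphertext, rings).startswith("ANX"):
            return rings
    return None
```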
The Germans made it easy for the Poles in the beginning. The rotor order only changed every quarter, so the Poles would not have to search for the rotor order. Later the Germans changed it every month, but that would not cause much trouble, either. Eventually, the Germans would change the rotor order every day, and late in the war (after Poland had been overrun) the rotor order might be changed during the day.
The Poles kept improving their techniques as the Germans kept improving their security measures.
Invariant cycle lengths and the card catalogue.
Rejewski realised that, although the letters in the cycle groups were changed by the plugboard, the number and lengths of the cycles were unaffected—in the example above, six cycle groups with lengths of 9, 9, 3, 3, 1, and 1. He described this invariant structure as the "characteristic" of the indicator setting. There were only 105,456 possible rotor settings. The Poles therefore set about creating a "card catalogue" of these cycle patterns.
The cycle-length method would avoid using the grill. The card catalogue would index the cycle-length for all starting positions (except for turnovers that occurred while enciphering an indicator). The day's traffic would be examined to discover the cycles in the permutations. The card catalogue would be consulted to find the possible starting positions. There are roughly 1 million possible cycle-length combinations and only 105,456 starting positions. Having found a starting position, the Poles would use an Enigma double to determine the cycles at that starting position without a plugboard. The Poles would then compare those cycles to the cycles with the (unknown) plugboard and solve for the plugboard permutation (a simple substitution cipher). Then the Poles could find the remaining secret of the ring settings with the ANX method.
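The essential point, that the multiset of cycle lengths survives the plugboard and so can serve as a catalogue key, can be sketched like this; `characteristic_for` is a hypothetical helper standing in for a run of an Enigma double (no plugboard) at a given rotor order and starting position, and `cycle_lengths` is the kind of routine sketched earlier.

```python
def build_catalogue(all_settings, characteristic_for, cycle_lengths):
    """Map each cycle-length pattern to the starting positions that produce it."""
    catalogue = {}
    for setting in all_settings:                      # 105,456 possibilities in all
        key = tuple(tuple(cycle_lengths(t)) for t in characteristic_for(setting))
        catalogue.setdefault(key, []).append(setting)
    return catalogue

# A day's observed characteristic then yields the few candidate settings to test further:
# candidates = catalogue.get(observed_key, [])
```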
The problem was compiling the large card catalogue.
Rejewski, in 1934 or 1935, devised a machine to facilitate making the catalogue and called it a "cyclometer". This "comprised two sets of rotors... connected by wires through which electric current could run. Rotor N in the second set was three letters out of phase with respect to rotor N in the first set, whereas rotors L and M in the second set were always set the same way as rotors L and M in the first set". Preparation of this catalogue, using the cyclometer, was, said Rejewski, "laborious and took over a year, but when it was ready, obtaining daily keys was a question of [some fifteen] minutes".
However, on 1 November 1937, the Germans changed the Enigma reflector, necessitating the production of a new catalogue—"a task which [says Rejewski] consumed, on account of our greater experience, probably somewhat less than a year's time".
This characteristics method stopped working for German naval Enigma messages on 1 May 1937, when the indicator procedure was changed to one involving special codebooks (see German Navy 3-rotor Enigma below). Worse still, on 15 September 1938 it stopped working for German Army and Luftwaffe messages because operators were then required to choose their own "Grundstellung" (initial rotor setting) for each message. Although German army message keys would still be double-enciphered, the day's keys would not be double-enciphered at the same initial setting, so the characteristic could no longer be found or exploited.
Perforated sheets.
Although the characteristics method no longer worked, the inclusion of the enciphered message key twice gave rise to a phenomenon that the cryptanalyst Henryk Zygalski was able to exploit. Sometimes (about one message in eight) one of the repeated letters in the message key enciphered to the same letter on both occasions. These occurrences were called "samiczki" (in English, "females"—a term later used at Bletchley Park).
Only a limited number of scrambler settings would give rise to females, and these would have been identifiable from the card catalogue. If the first six letters of the ciphertext were "SZVSIK", this would be termed a 1–4 female; if "WHOEHS", a 2–5 female; and if "ASWCRW", a 3–6 female. The method was called "Netz" (from "Netzverfahren", "net method"), or the Zygalski sheet method as it used perforated sheets that he devised, although at Bletchley Park Zygalski's name was not used for security reasons. About ten females from a day's messages were required for success.
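Spotting females in a day's traffic amounts to a simple scan of the enciphered indicators; the sketch below assumes only the six-letter indicator format described above.

```python
def find_females(indicators):
    """Return (indicator, kind) pairs such as ('SZVSIK', '1-4') for each female found."""
    females = []
    for ind in indicators:
        for i, kind in ((0, '1-4'), (1, '2-5'), (2, '3-6')):
            if ind[i] == ind[i + 3]:
                females.append((ind, kind))
    return females
```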
There was a set of 26 of these sheets for each of the six possible rotor sequences ("wheel orders"), one sheet for each starting position of the left (slowest-moving) rotor. The 51×51 matrices on the sheets represented the 676 possible starting positions of the middle and right rotors. The sheets contained about 1000 holes in the positions in which a female could occur. The set of sheets for that day's messages would be appropriately positioned on top of each other in the perforated sheets apparatus. Rejewski wrote about how the device was operated:
When the sheets were superposed and moved in the proper sequence and the proper manner with respect to each other, in accordance with a strictly defined program, the number of visible apertures gradually decreased. And, if a sufficient quantity of data was available, there finally remained a single aperture, probably corresponding to the right case, that is, to the solution. From the position of the aperture one could calculate the order of the rotors, the setting of their rings, and, by comparing the letters of the cipher keys with the letters in the machine, likewise permutation S; in other words, the entire cipher key.
The holes in the sheets were painstakingly cut with razor blades and in the three months before the next major setback, the sets of sheets for only two of the possible six wheel orders had been produced.
Polish "bomba".
After Rejewski's characteristics method became useless, he invented an electro-mechanical device that was dubbed the "bomba kryptologiczna", 'cryptologic bomb'. Each machine contained six sets of Enigma rotors for the six positions of the repeated three-letter key. Like the Zygalski sheet method, the "bomba" relied on the occurrence of "females", but required only three instead of about ten for the sheet method. Six "bomby" were constructed, one for each of the then possible "wheel orders". Each "bomba" conducted an exhaustive (brute-force) analysis of the 17,576 possible message keys.
Rejewski has written about the device: The bomb method, invented in the autumn of 1938, consisted largely in the automation and acceleration of the process of reconstructing daily keys. Each cryptologic bomb (six were built in Warsaw for the Biuro Szyfrów, the Cipher Bureau, before September 1939) essentially constituted an electrically powered aggregate of six Enigmas. It took the place of about one hundred workers and shortened the time for obtaining a key to about two hours.
The cipher message transmitted the "Grundstellung" in the clear, so when a "bomba" found a match, it revealed the rotor order, the rotor positions, and the ring settings. The only remaining secret was the plugboard permutation.
Major setback.
On 15 December 1938, the German Army increased the complexity of Enigma enciphering by introducing two additional rotors (IV and V). This increased the number of possible "wheel orders" from 6 to 60. The Poles could then read only the small minority of messages that used neither of the two new rotors. They did not have the resources to commission 54 more bombs or produce 58 sets of Zygalski sheets. Other Enigma users received the two new rotors at the same time. However, until 1 July 1939 the "Sicherheitsdienst" (SD)—the intelligence agency of the SS and the Nazi Party—continued to use its machines in the old way with the same indicator setting for all messages. This allowed Rejewski to reuse his previous method, and by about the turn of the year he had worked out the wirings of the two new rotors. On 1 January 1939, the Germans increased the number of plugboard connections from between five and eight to between seven and ten, which made other methods of decryption even more difficult.
Rejewski wrote, in a 1979 critique of appendix 1, volume 1 (1979), of the official history of British Intelligence in the Second World War:
World War II.
Polish disclosures.
As the likelihood of war increased in 1939, Britain and France pledged support for Poland in the event of action that threatened its independence. In April, Germany withdrew from the German–Polish Non-Aggression Pact of January 1934. The Polish General Staff, realising what was likely to happen, decided to share their work on Enigma decryption with their western allies. Marian Rejewski later wrote:
At a conference near Warsaw on 26 and 27 July 1939, the Poles revealed to the French and British that they had broken Enigma and pledged to give each a Polish-reconstructed Enigma, along with details of their Enigma-solving techniques and equipment, including Zygalski's perforated sheets and Rejewski's cryptologic bomb. In return, the British pledged to prepare two full sets of Zygalski sheets for all 60 possible wheel orders. Dilly Knox was a member of the British delegation. He commented on the fragility of the Polish system's reliance on the repetition in the indicator, because it might "at any moment be cancelled". In August, two Polish Enigma doubles were sent to Paris, whence Gustave Bertrand took one to London, handing it to Stewart Menzies of Britain's Secret Intelligence Service at Victoria Station.
Gordon Welchman, who became head of Hut 6 at Bletchley Park, wrote:
Hut 6 Ultra would never have gotten off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military version of the commercial Enigma machine, and of the operating procedures that were in use.
Peter Calvocoressi, who became head of the Luftwaffe section in Hut 3, wrote of the Polish contribution:
The one moot point is—how valuable? According to the best qualified judges it accelerated the breaking of Enigma by perhaps a year. The British did not adopt Polish techniques but they were enlightened by them.
"PC Bruno".
On 5 September 1939 the Cipher Bureau began preparations to evacuate key personnel and equipment from Warsaw. Soon a special evacuation train, the Echelon F, transported them eastward, then south. By the time the Cipher Bureau was ordered to cross the border into allied Romania on 17 September, they had destroyed all sensitive documents and equipment and were down to a single very crowded truck. The vehicle was confiscated at the border by a Romanian officer, who separated the military from the civilian personnel. Taking advantage of the confusion, the three mathematicians ignored the Romanian's instructions. They anticipated that in an internment camp they might be identified by the Romanian security police, in which the German Abwehr and SD had informers.
The mathematicians went to the nearest railroad station, exchanged money, bought tickets, and boarded the first train headed south. After a dozen or so hours, they reached Bucharest, at the other end of Romania. There they went to the British embassy. Told by the British to "come back in a few days", they next tried the French embassy, introducing themselves as "friends of Bolek" (Bertrand's Polish code name) and asking to speak with a French military officer. A French Army colonel telephoned Paris and then issued instructions for the three Poles to be assisted in evacuating to Paris.
On 20 October 1939, at "PC Bruno" outside Paris, the Polish cryptologists resumed work on German Enigma ciphers, in collaboration with Bletchley Park.
"PC Bruno" and Bletchley Park worked together closely, communicating via a telegraph line secured by the use of Enigma doubles. In January 1940 Alan Turing spent several days at "PC Bruno" conferring with his Polish colleagues. He had brought the Poles a full set of Zygalski sheets that had been punched at Bletchley Park by John Jeffreys using Polish-supplied information, and on 17 January 1940, the Poles made the first break into wartime Enigma traffic—that from 28 October 1939. From that time, until the Fall of France in June 1940, 17 per cent of the Enigma keys that were found by the allies were solved at "PC Bruno".
Just before opening their 10 May 1940 offensive against the Low Countries and France, the Germans made the feared change in the indicator procedure, discontinuing the duplication of the enciphered message key. This meant that the Zygalski sheet method no longer worked. Instead, the cryptanalysts had to rely on exploiting the operator weaknesses described below, particularly the cillies and the Herivel tip.
After the June Franco-German armistice, the Polish cryptological team resumed work in France's southern "Free Zone", although probably not on Enigma. Marian Rejewski and Henryk Zygalski, after many travails, perilous journeys, and Spanish imprisonment, finally made it to Britain, where they were inducted into the Polish Army and put to work breaking German "SS" and "SD" hand ciphers at a Polish signals facility in Boxmoor. Due to their having been in occupied France, it was thought too risky to invite them to work at Bletchley Park.
After the German occupation of Vichy France, several of those who had worked at "PC Bruno" were captured by the Germans. Despite the dire circumstances in which some of them were held, none betrayed the secret of Enigma's decryption.
Operating shortcomings.
Apart from some less-than-ideal inherent characteristics of the Enigma, in practice the system's greatest weakness was the sheer volume of messages combined with the ways in which Enigma was used. The basic principle of this sort of enciphering machine is that it should deliver a stream of transformations that are difficult for a cryptanalyst to predict. Some of the instructions to operators, and operator sloppiness, had the opposite effect. Without these operating shortcomings, Enigma would almost certainly not have been broken.
The shortcomings that Allied cryptanalysts exploited included:
Other useful shortcomings that were discovered by the British and later the American cryptanalysts included the following, many of which depended on frequent solving of a particular network:
Mavis Lever, a member of Dilly Knox's team, recalled an occasion when there was an unusual message, from the Italian Navy, whose exploitation led to the British victory at the Battle of Cape Matapan. The one snag with Enigma of course is the fact that if you press "A", you can get every other letter but "A". I picked up this message and—one was so used to looking at things and making instant decisions—I thought: 'Something's gone. What has this chap done? There is not a single "L" in this message.'
My chap had been told to send out a dummy message and he had just had a fag [cigarette] and pressed the last key on the keyboard, the "L". So that was the only letter that didn't come out. We had got the biggest crib we ever had, the encypherment was "LLLL", right through the message and that gave us the new wiring for the wheel [rotor]. That's the sort of thing we were trained to do. Instinctively look for something that had gone wrong or someone who had done something silly and torn up the rule book.
Postwar debriefings of German cryptographic specialists, conducted as part of project TICOM, tend to support the view that the Germans were well aware that the un-steckered Enigma was theoretically solvable, but thought that the steckered Enigma had not been solved.
Crib-based decryption.
The term "crib" was used at Bletchley Park to denote any "known plaintext" or "suspected plaintext" at some point in an enciphered message.
Britain's Government Code and Cipher School (GC&CS), before its move to Bletchley Park, had realised the value of recruiting mathematicians and logicians to work in codebreaking teams. Alan Turing, a Cambridge University mathematician with an interest in cryptology and in machines for implementing logical operations—and who was regarded by many as a genius—had started work for GC&CS on a part-time basis from about the time of the Munich Crisis in 1938. Gordon Welchman, another Cambridge mathematician, had also received initial training in 1938, and they both reported to Bletchley Park on 4 September 1939, the day after Britain declared war on Germany.
Most of the Polish success had relied on the repetition within the indicator. But as soon as Turing moved to Bletchley Park—where he initially joined Dilly Knox in the research section—he set about seeking methods that did not rely on this weakness, as they correctly anticipated that the German Army and Air Force might follow the German Navy in improving their indicator system.
The Poles had used an early form of crib-based decryption in the days when only six leads were used on the plugboard. The technique became known as the "Forty Weepy Weepy" method for the following reason. When a message was a continuation of a previous one, the plaintext would start with "FORT" (from "Fortsetzung", meaning "continuation") followed by the time of the first message given twice bracketed by the letter "Y". At this time numerals were represented by the letters on the top row of the Enigma keyboard. So, "continuation of message sent at 2330" was represented as "FORTYWEEPYYWEEPY".
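The numeral convention can be reproduced with a small lookup table; the top-row mapping (Q W E R T Z U I O P standing for the digits 1 to 0) is as described above, and the rest is illustrative.

```python
TOP_ROW = "QWERTZUIOP"                                   # represents the digits 1, 2, ..., 9, 0
DIGIT_TO_LETTER = {str((i + 1) % 10): c for i, c in enumerate(TOP_ROW)}

def encode_time(digits):
    """Spell a time such as '2330' with top-row letters."""
    return "".join(DIGIT_TO_LETTER[d] for d in digits)

assert encode_time("2330") == "WEEP"
# "FORT" + "Y" + "WEEP" + "Y" + "Y" + "WEEP" + "Y" gives "FORTYWEEPYYWEEPY"
```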
"Cribs" were fundamental to the British approach to solving Enigma keys, but guessing the plaintext for a message was a highly skilled business. So in 1940 Stuart Milner-Barry set up a special "Crib Room" in Hut 8.
Foremost among the knowledge needed for identifying cribs was the text of previous decrypts. Bletchley Park maintained detailed indexes of message preambles, of every person, of every ship, of every unit, of every weapon, of every technical term, and of repeated phrases such as forms of address and other German military jargon. For each message the traffic analysis recorded the radio frequency, the date and time of intercept, and the preamble—which contained the network-identifying discriminant, the time of origin of the message, the callsign of the originating and receiving stations, and the indicator setting. This allowed cross referencing of a new message with a previous one. Thus, as Derek Taunt, another Cambridge mathematician-cryptanalyst wrote, the truism that "nothing succeeds like success" is particularly apposite here.
Stereotypical messages included "Keine besonderen Ereignisse" (literally, "no special occurrences"—perhaps better translated as "nothing to report") and "An die Gruppe" ("to the group"), and a number came from weather stations, such as "weub null seqs null null" ("weather survey 0600"). This was actually rendered as "WEUBYYNULLSEQSNULLNULL": "WEUB" was short for "Wetterübersicht", "YY" was used as a separator, and "SEQS" was a common abbreviation of "sechs" (German for "six"). As another example, Field Marshal Erwin Rommel's Quartermaster started all of his messages to his commander with the same formal introduction.
With a combination of a probable plaintext fragment and the fact that no letter could be enciphered as itself, a corresponding ciphertext fragment could often be tested by trying every possible alignment of the crib against the ciphertext, a procedure known as "crib-dragging". This, however, was only one aspect of the processes of solving a key. Derek Taunt has written that the three cardinal personal qualities that were in demand for cryptanalysis were (1) a creative imagination, (2) a well-developed critical faculty, and (3) a habit of meticulousness. Skill at solving crossword puzzles was famously tested in recruiting some cryptanalysts. This was useful in working out plugboard settings when a possible solution was being examined. For example, if the crib was the word "WETTER" (German for "weather") and a possible decrypt, before the plugboard settings had been discovered, was "TEWWER", it is easy to see that "T" and "W" are "stecker partners". These examples, although illustrative of the principles, greatly over-simplify the cryptanalysts' tasks.
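Crib-dragging against the "no letter enciphers to itself" property can be sketched as follows; the function is illustrative and returns only those alignments that survive this first filter, each of which would still need further analysis.

```python
def possible_alignments(ciphertext, crib):
    """Offsets at which the crib never coincides with an identical ciphertext letter."""
    offsets = []
    for offset in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[offset:offset + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            offsets.append(offset)
    return offsets

# e.g. possible_alignments(intercepted_text, "KEINEBESONDERENEREIGNISSE")
```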
A fruitful source of cribs was re-encipherments of messages that had previously been decrypted either from a lower-level manual cipher or from another Enigma network. This was called a "kiss" and happened particularly with German naval messages being sent in the "dockyard cipher" and repeated "verbatim" in an Enigma cipher. One German agent in Britain, Nathalie Sergueiew, code named "Treasure", who had been 'turned' to work for the Allies, was very verbose in her messages back to Germany, which were then re-transmitted on the "Abwehr" Enigma network. She was kept going by MI5 because this provided long cribs, not because of her usefulness as an agent to feed incorrect information to the "Abwehr".
Occasionally, when there was a particularly urgent need to solve German naval Enigma keys, such as when an Arctic convoy was about to depart, mines would be laid by the RAF in a defined position, whose grid reference in the German naval system did not contain any of the words (such as "sechs" or "sieben") for which abbreviations or alternatives were sometimes used. The warning message about the mines and then the "all clear" message, would be transmitted both using the "dockyard cipher" and the U-boat Enigma network. This process of "planting" a crib was called "gardening".
Although "cillies" were not actually cribs, the "chit-chat" in clear that Enigma operators indulged in among themselves often gave a clue as to the cillies that they might generate.
When captured German Enigma operators revealed that they had been instructed to encipher numbers by spelling them out rather than using the top row of the keyboard, Alan Turing reviewed decrypted messages and determined that the word "eins" ("one") appeared in 90% of messages. Turing automated the crib process, creating the "Eins Catalogue", which assumed that "eins" was encoded at all positions in the plaintext. The catalogue included every possible rotor position for "EINS" with that day's "wheel order" and plugboard connections.
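The catalogue idea can be sketched as follows; `encipher_at` stands in for an Enigma simulation (not shown) that enciphers "EINS" at a given rotor position under the day's wheel order and plugboard, so that helper and its signature are assumptions.

```python
def build_eins_catalogue(rotor_positions, encipher_at):
    """Map each possible encipherment of EINS to the rotor positions producing it."""
    catalogue = {}
    for pos in rotor_positions:
        catalogue.setdefault(encipher_at(pos, "EINS"), []).append(pos)
    return catalogue

def eins_hits(ciphertext, catalogue):
    """Four-letter windows of ciphertext that could be an enciphered EINS."""
    return [(i, catalogue[ciphertext[i:i + 4]])
            for i in range(len(ciphertext) - 3)
            if ciphertext[i:i + 4] in catalogue]
```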
British "bombe".
The British bombe was an electromechanical device designed by Alan Turing soon after he arrived at Bletchley Park in September 1939. Harold "Doc" Keen of the British Tabulating Machine Company (BTM) in Letchworth ( from Bletchley) was the engineer who turned Turing's ideas into a working machine—under the codename CANTAB. Turing's specification developed the ideas of the Poles' "bomba kryptologiczna" but was designed for the much more general crib-based decryption.
The bombe helped to identify the "wheel order", the initial positions of the rotor cores, and the "stecker partner" of a specified letter. This was achieved by examining all 17,576 possible scrambler positions for a set of "wheel orders" on a comparison between a crib and the ciphertext, so as to eliminate possibilities that contradicted the Enigma's known characteristics. In the words of Gordon Welchman "the task of the bombe was simply to reduce the assumptions of "wheel order" and scrambler positions that required 'further analysis' to a manageable number".
The demountable drums on the front of the bombe were wired identically to the connections made by Enigma's different rotors. Unlike them, however, the input and output contacts for the left-hand and the right-hand sides were separate, making 104 contacts between each drum and the rest of the machine. This allowed a set of scramblers to be connected in series by means of 26-way cables. Electrical connections between the rotating drums' wiring and the rear plugboard were by means of metal brushes. When the bombe detected a scrambler position with no contradictions, it stopped and the operator would note the position before restarting it.
Although Welchman had been given the task of studying Enigma traffic call signs and discriminants, he knew from Turing about the bombe design, and early in 1940, before the first pre-production bombe was delivered, he showed Turing an idea to increase its effectiveness. It exploited the reciprocity in plugboard connections to reduce considerably the number of scrambler settings that needed to be considered further. This became known as the "diagonal board" and was subsequently incorporated to great effect in all the bombes.
A cryptanalyst would prepare a crib for comparison with the ciphertext. This was a complicated and sophisticated task, which later took the Americans some time to master. As well as the crib, a decision as to which of the many possible "wheel orders" could be omitted had to be made. Turing's Banburismus was used in making this major economy. The cryptanalyst would then compile a "menu" which specified the connections of the cables of the patch panels on the back of the machine, and a particular letter whose "stecker partner" was sought. The menu reflected the relationships between the letters of the crib and those of the ciphertext. Some of these formed loops (or "closures" as Turing called them) in a similar way to the "cycles" that the Poles had exploited.
The reciprocal nature of the plugboard meant that no letter could be connected to more than one other letter. When there was a contradiction of two different letters apparently being "stecker partners" with the letter in the menu, the bombe would detect this, and move on. If, however, this happened with a letter that was not part of the menu, a false stop could occur. In refining down the set of stops for further examination, the cryptanalyst would eliminate stops that contained such a contradiction. The other plugboard connections and the settings of the alphabet rings would then be worked out before the scrambler positions at the possible true stops were tried out on Typex machines that had been adapted to mimic Enigmas. All the remaining stops would correctly decrypt the crib, but only the true stop would produce the correct plaintext of the whole message.
To avoid wasting scarce bombe time on menus that were likely to yield an excessive number of false stops, Turing performed a lengthy probability analysis (without any electronic aids) of the estimated number of stops per rotor order. It was adopted as standard practice only to use menus that were estimated to produce no more than four stops per "wheel order". This allowed an 8-letter crib for a 3-closure menu, an 11-letter crib for a 2-closure menu, and a 14-letter crib for a menu with only one closure. If there was no closure, at least 16 letters were required in the crib. The longer the crib, however, the more likely it was that "turn-over" of the middle rotor would have occurred.
The production model 3-rotor bombes contained 36 scramblers arranged in three banks of twelve. Each bank was used for a different "wheel order" by fitting it with the drums that corresponded to the Enigma rotors being tested. The first bombe was named "Victory" and was delivered to Bletchley Park on 18 March 1940. The next one, which included the diagonal board, was delivered on 8 August 1940. It was referred to as a "spider bombe" and was named "Agnus Dei", which soon became "Agnes" and then "Aggie". The production of British bombes was relatively slow at first, with only five bombes in use in June 1941, 15 by the year's end, 30 by September 1942, 49 by January 1943, and eventually 210 by the end of the war.
A refinement that was developed for use on messages from those networks that disallowed the plugboard ("Stecker") connection of adjacent letters, was the "Consecutive Stecker Knock Out". This was fitted to 40 bombes and produced a useful reduction in false stops.
Initially the bombes were operated by ex-BTM servicemen, but in March 1941 the first detachment of members of the Women's Royal Naval Service (known as "Wrens") arrived at Bletchley Park to become bombe operators. By 1945 there were some 2,000 Wrens operating the bombes. Because of the risk of bombing, relatively few of the bombes were located at Bletchley Park. The largest two outstations were at Eastcote (some 110 bombes and 800 Wrens) and Stanmore (some 50 bombes and 500 Wrens). There were also bombe outstations at Wavendon, Adstock, and Gayhurst. Communication with Bletchley Park was by teleprinter links.
When the German Navy started using 4-rotor Enigmas, about sixty 4-rotor bombes were produced at Letchworth, some with the assistance of the General Post Office. The NCR-manufactured US Navy 4-rotor bombes were, however, very fast and the most successful. They were extensively used by Bletchley Park over teleprinter links (using the Combined Cipher Machine) to OP-20-G for both 3-rotor and 4-rotor jobs.
"Luftwaffe" Enigma.
Although the German army, SS, police, and railway all used Enigma with similar procedures, it was the "Luftwaffe" (Air Force) that was the first and most fruitful source of Ultra intelligence during the war. The messages were decrypted in Hut 6 at Bletchley Park and turned into intelligence reports in Hut 3. The network code-named 'Red' at Bletchley Park was broken regularly and quickly from 22 May 1940 until the end of hostilities. Indeed, the Air Force section of Hut 3 expected the new day's Enigma settings to have been established in Hut 6 by breakfast time. The relative ease of solving this network's settings was a product of plentiful cribs and frequent German operating mistakes. Luftwaffe chief Hermann Göring was known to use it for trivial communications, including informing squadron commanders to make sure the pilots he was going to decorate had been properly deloused. Such messages became known as "Göring funnies" to the staff at Bletchley Park.
"Abwehr" Enigma.
Dilly Knox's last great cryptanalytical success, before his untimely death in February 1943, was the solving of the "Abwehr" Enigma in 1941. Intercepts of traffic which had an 8-letter indicator sequence before the usual 5-letter groups led to the suspicion that a 4-rotor machine was being used. The assumption was correctly made that the indicator consisted of a 4-letter message key enciphered twice. The machine itself was similar to a Model G Enigma, with three conventional rotors, though it did not have a plug board. The principal difference to the model G was that it was equipped with a reflector that was advanced by the stepping mechanism once it had been set by hand to its starting position (in all other variants, the reflector was fixed). Collecting a set of enciphered message keys for a particular day allowed "cycles" (or "boxes" as Knox called them) to be assembled in a similar way to the method used by the Poles in the 1930s.
Knox was able to derive, using his "buttoning up" procedure, some of the wiring of the rotor that had been loaded in the fast position on that day. Progressively he was able to derive the wiring of all three rotors. Once that had been done, he was able to work out the wiring of the reflector. Deriving the indicator setting for that day was achieved using Knox's time-consuming "rodding" procedure. This involved a great deal of trial and error, imagination, and crossword puzzle-solving skills, but was helped by "cillies".
The "Abwehr" was the intelligence and counter-espionage service of the German High Command. The spies that it placed in enemy countries used a lower level cipher (which was broken by Oliver Strachey's section at Bletchley Park) for their transmissions. However, the messages were often then re-transmitted word-for-word on the "Abwehr's" internal Enigma networks, which gave the best possible crib for deciphering that day's indicator setting. Interception and analysis of "Abwehr" transmissions led to the remarkable state of affairs that allowed MI5 to give a categorical assurance that all the German spies in Britain were controlled as double agents working for the Allies under the Double Cross System.
German Army Enigma.
In the summer of 1940 following the Franco-German armistice, most Army Enigma traffic was travelling by land lines rather than radio and so was not available to Bletchley Park. The air Battle of Britain was crucial, so it was not surprising that the concentration of scarce resources was on "Luftwaffe" and "Abwehr" traffic. It was not until early in 1941 that the first breaks were made into German Army Enigma traffic, and it was the spring of 1942 before it was broken reliably, albeit often with some delay. It is unclear whether the German Army Enigma operators made deciphering more difficult by making fewer operating mistakes.
German Naval Enigma.
The German Navy used Enigma in the same way as the German Army and Air Force until 1 May 1937 when they changed to a substantially different system. This used the same sort of setting sheet but, importantly, it included the ground key for a period of two, sometimes three days. The message setting was concealed in the indicator by selecting a trigram from a book (the "Kenngruppenbuch", or K-Book) and performing a bigram substitution on it. This defeated the Poles, although they suspected some sort of bigram substitution.
The procedure for the naval sending operator was as follows. First they selected a trigram from the K-Book, say YLA. They then looked in the appropriate columns of the K-Book and selected another trigram, say YVT, and wrote it in the boxes at the top of the message form, above the first trigram and offset by one position:
. Y V T
Y L A .
They then filled in the "dots" with any letters, giving say:
Q Y V T
Y L A G
Finally they looked up the vertical pairs of letters in the Bigram Tables
QY→UB YL→LK VA→RS TG→PW
and wrote down the resultant pairs, UB, LK, RS, and PW which were transmitted as two four letter groups at the start and end of the enciphered message. The receiving operator performed the converse procedure to obtain the message key for setting his Enigma rotors.
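The vertical-pair step can be sketched as follows; the two rows "QYVT" and "YLAG" and the bigram lookups are those of the worked example above, while `bigram_table` stands in for the secret Bigram Tables, represented here as an ordinary dictionary.

```python
def encode_indicator(top_row, bottom_row, bigram_table):
    """Read the boxes column by column and substitute each vertical pair."""
    pairs = [t + b for t, b in zip(top_row, bottom_row)]   # 'QY', 'YL', 'VA', 'TG'
    return [bigram_table[p] for p in pairs]                # 'UB', 'LK', 'RS', 'PW'

example_table = {"QY": "UB", "YL": "LK", "VA": "RS", "TG": "PW"}
print(encode_indicator("QYVT", "YLAG", example_table))     # ['UB', 'LK', 'RS', 'PW']
```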
As well as these "Kriegsmarine" procedures being much more secure than those of the German Army and Air Force, the German Navy Enigma introduced three more rotors (VI, VII, and VIII), early in 1940. The choice of three rotors from eight meant that there were a total of 336 possible permutations of rotors and their positions.
Alan Turing decided to take responsibility for German naval Enigma because "no one else was doing anything about it and I could have it to myself". He established Hut 8 with Peter Twinn and two "girls". Turing used the indicators and message settings for traffic from 1–8 May 1937 that the Poles had worked out, and some very elegant deductions to diagnose the complete indicator system. After the messages were deciphered they were translated for transmission to the Admiralty in Hut 4.
German Navy 3-rotor Enigma.
The first break of wartime traffic was in December 1939, into signals that had been intercepted in November 1938, when only three rotors and six plugboard leads had been in use. It used "Forty Weepy Weepy" cribs.
A captured German "Funkmaat" ("radio operator") named Meyer had revealed that numerals were now spelt out as words. EINS, the German for "one", was present in about 90% of genuine German Navy messages. An EINS catalogue was compiled consisting of the encipherment of EINS at all 105,456 rotor settings. These were compared with the ciphertext, and when matches were found, about a quarter of them yielded the correct plaintext. Later this process was automated in Mr Freeborn's section using Hollerith equipment. When the ground key was known, this EINS-ing procedure could yield three bigrams for the tables that were then gradually assembled.
Further progress required more information from German Enigma users. This was achieved through a succession of "pinches", the capture of Enigma parts and codebooks. The first of these was on 12 February 1940, when rotors VI and VII, whose wiring was at that time unknown, were captured from the , by minesweeper .
On 26 April 1940, the Narvik-bound German patrol boat "VP2623", disguised as a Dutch trawler named "Polares", was captured by . This yielded an instruction manual, codebook sheets, and a record of some transmissions, which provided complete cribs. This confirmed that Turing's deductions about the trigram/bigram process were correct and allowed a total of six days' messages to be broken, the last of these using the first of the bombes. However, the numerous possible rotor sequences, together with a paucity of usable cribs, made the methods used against the Army and Air Force Enigma messages of very limited value with respect to the Navy messages.
At the end of 1939, Turing extended the clock method invented by the Polish cryptanalyst Jerzy Różycki. Turing's method became known as "Banburismus". Turing said that at that stage "I was not sure that it would work in practice, and was not in fact sure until some days had actually broken". Banburismus used large cards printed in Banbury (hence the Banburismus name) to discover correlations and a statistical scoring system to determine likely rotor orders ("Walzenlage") to be tried on the bombes. The practice conserved scarce bombe time and allowed more messages to be attacked. In practice, the 336 possible rotor orders could be reduced to perhaps 18 to be run on the bombes. Knowledge of the bigrams was essential for Banburismus, and building up the tables took a long time. This lack of visible progress led Frank Birch, head of the Naval Section, to write on 21 August 1940 to Edward Travis, Deputy Director of Bletchley Park: "I'm worried about Naval Enigma. I've been worried for a long time, but haven't liked to say as much... Turing and Twinn are like people waiting for a miracle, without believing in miracles..."
Schemes for capturing Enigma material were conceived including, in September 1940, Operation Ruthless by Lieutenant Commander Ian Fleming (author of the James Bond novels). When this was cancelled, Birch told Fleming that "Turing and Twinn came to me like undertakers cheated of a nice corpse..."
A major advance came through Operation Claymore, a commando raid on the Lofoten Islands on 4 March 1941. The German armed trawler "Krebs" was captured, including the complete Enigma keys for February, but no bigram tables or K-book. However, the material was sufficient to reconstruct the bigram tables by "EINS-ing", and by late March they were almost complete.
Banburismus then started to become extremely useful. Hut 8 was expanded and moved to 24-hour working, and a crib room was established. The story of Banburismus for the next two years was one of improving methods, of struggling to get sufficient staff, and of a steady growth in the relative and absolute importance of cribbing as the increasing numbers of bombes made the running of cribs ever faster. Of value in this period were further "pinches" such as those from the German weather ships "München" and "Lauenburg" and the submarines and .
Despite the introduction of the 4-rotor Enigma for Atlantic U-boats, the analysis of traffic enciphered with the 3-rotor Enigma proved of immense value to the Allied navies. Banburismus was used until July 1943, when it became more efficient to use the many more bombes that had become available.
M4 (German Navy 4-rotor Enigma).
On 1 February 1942, the Enigma messages to and from Atlantic U-boats, which Bletchley Park called "Shark", became significantly different from the rest of the traffic, which they called "Dolphin".
This was because a new Enigma version had been brought into use. It was a development of the 3-rotor Enigma with the reflector replaced by a thin rotor and a thin reflector. Eventually, there were two fourth-position rotors that were called Beta and Gamma and two thin reflectors, Bruno and Caesar, which could be used in any combination. These rotors were not advanced by the rotor to their right, in the way that rotors I through VIII were.
The introduction of the fourth rotor did not catch Bletchley Park by surprise, because captured material dated January 1941 had made reference to its development as an adaptation of the 3-rotor machine, with the fourth rotor wheel to be a reflector wheel. Indeed, because of operator errors, the wiring of the new fourth rotor had already been worked out.
This major challenge could not be met by using existing methods and resources for a number of reasons.
It seemed, therefore, that effective, fast, 4-rotor bombes were the only way forward. This was an immense problem and it gave a great deal of trouble. Work on a high speed machine had been started by Wynn-Williams of the TRE late in 1941 and some nine months later Harold Keen of BTM started work independently. Early in 1942, Bletchley Park were a long way from possessing a high speed machine of any sort.
Eventually, after a long period of being unable to decipher U-boat messages, a source of cribs was found. This was the Kurzsignale (short signals), a code which the German navy used to minimise the duration of transmissions, thereby reducing the risk of being located by high-frequency direction finding techniques. The messages were only 22 characters long and were used to report sightings of possible Allied targets. A copy of the code book had been captured from on 9 May 1941. A similar coding system was used for weather reports from U-boats, the "Wetterkurzschlüssel", (Weather Short Code Book). A copy of this had been captured from on 29 or 30 October 1942. These short signals had been used for deciphering 3-rotor Enigma messages and it was discovered that the new rotor had a neutral position at which it, and its matching reflector, behaved just like a 3-rotor Enigma reflector. This allowed messages enciphered at this neutral position to be deciphered by a 3-rotor machine, and hence deciphered by a standard bombe. Deciphered Short Signals provided good material for bombe menus for Shark. Regular deciphering of U-boat traffic restarted in December 1942.
Italian naval Enigma.
In 1940 Dilly Knox wanted to establish whether the Italian Navy were still using the same system that he had cracked during the Spanish Civil War; he instructed his assistants to use rodding to see whether the crib "PERX" ("per" being Italian for "for" and "X" being used to indicate a space between words) worked for the first part of the message. After three months there was no success, but Mavis Lever, a 19-year-old student, found that rodding produced "PERS" for the first four letters of one message. She then (against orders) tried beyond this and obtained "PERSONALE" (Italian for "personal"). This confirmed that the Italians were indeed using the same machines and procedures.
The subsequent breaking of Italian naval Enigma ciphers led to substantial Allied successes. The cipher-breaking was disguised by sending a reconnaissance aircraft to the known location of a warship before attacking it, so that the Italians assumed that this was how they had been discovered. The Royal Navy's victory at the Battle of Cape Matapan in March 1941 was considerably helped by Ultra intelligence obtained from Italian naval Enigma signals.
American "bombes".
Unlike the situation at Bletchley Park, the United States armed services did not share a combined cryptanalytical service. Before the US joined the war, there was collaboration with Britain, albeit with a considerable amount of caution on Britain's side because of the extreme importance of Germany and her allies not learning that its codes were being broken. Despite some worthwhile collaboration among the cryptanalysts, their superiors took some time to achieve a trusting relationship in which both British and American bombes were used to mutual benefit.
In February 1941, Captain Abraham Sinkov and Lieutenant Leo Rosen of the US Army, and Lieutenants Robert Weeks and Prescott Currier of the US Navy, arrived at Bletchley Park, bringing, among other things, a replica of the 'Purple' cipher machine for Bletchley Park's Japanese section in Hut 7. The four returned to America after ten weeks, with a naval radio direction finding unit and many documents, including a "paper Enigma".
The main American response to the 4-rotor Enigma was the US Navy bombe, which was manufactured in much less constrained facilities than were available in wartime Britain. Colonel John Tiltman, who later became Deputy Director at Bletchley Park, visited the US Navy cryptanalysis office (OP-20-G) in April 1942 and recognised America's vital interest in deciphering U-boat traffic. The urgent need, doubts about the British engineering workload, and slow progress prompted the US to start investigating designs for a Navy bombe, based on the full blueprints and wiring diagrams received by US Navy Lieutenants Robert Ely and Joseph Eachus at Bletchley Park in July 1942. Funding for a full, $2 million, Navy development effort was requested on 3 September 1942 and approved the following day.
Commander Edward Travis, Deputy Director and Frank Birch, Head of the German Naval Section travelled from Bletchley Park to Washington in September 1942. With Carl Frederick Holden, US Director of Naval Communications they established, on 2 October 1942, a UK:US accord which may have "a stronger claim than BRUSA to being the forerunner of the UKUSA Agreement", being the first agreement "to establish the special Sigint relationship between the two countries", and "it set the pattern for UKUSA, in that the United States was very much the senior partner in the alliance". It established a relationship of "full collaboration" between Bletchley Park and OP-20-G.
An all electronic solution to the problem of a fast bombe was considered, but rejected for pragmatic reasons, and a contract was let with the National Cash Register Corporation (NCR) in Dayton, Ohio. This established the United States Naval Computing Machine Laboratory. Engineering development was led by NCR's Joseph Desch, a brilliant inventor and engineer. He had already been working on electronic counting devices.
Alan Turing, who had written a memorandum to OP-20-G (probably in 1941), was seconded to the British Joint Staff Mission in Washington in December 1942, because of his exceptionally wide knowledge about the bombes and the methods of their use. He was asked to look at the bombes that were being built by NCR and at the security of certain speech cipher equipment under development at Bell Labs. He visited OP-20-G, and went to NCR in Dayton on 21 December. He was able to show that it was not necessary to build 336 Bombes, one for each possible rotor order, by utilising techniques such as Banburismus. The initial order was scaled down to 96 machines.
The US Navy bombes used drums for the Enigma rotors in much the same way as the British bombes, but were very much faster. The first machine was completed and tested on 3 May 1943. Soon these bombes were more available than the British bombes at Bletchley Park and its outstations, and as a consequence they were put to use for Hut 6 as well as Hut 8 work. A total of 121 Navy bombes were produced.
The US Army also produced a version of a bombe. It was physically very different from the British and US Navy bombes. A contract was signed with Bell Labs on 30 September 1942. The machine was designed to analyse 3-rotor, not 4-rotor traffic. It did not use drums to represent the Enigma rotors, using instead telephone-type relays. It could, however, handle one problem that the bombes with drums could not. The set of ten bombes consisted of a total of 144 Enigma-equivalents, each mounted on a rack approximately long high and wide. There were 12 control stations which could allocate any of the Enigma-equivalents into the desired configuration by means of plugboards. Rotor order changes did not require the mechanical process of changing drums, but were achieved in about half a minute by means of push buttons. A 3-rotor run took about 10 minutes.
German suspicions.
The German navy was concerned that Enigma could be compromised. They printed key schedules in water-soluble inks so that they could not be salvaged. They policed their operators and disciplined them when they made errors that could compromise the cipher. The navy minimised its exposure. For example, ships that might be captured or run aground did not carry Enigma machines. When ships were lost in circumstances where the enemy might salvage them, the Germans investigated. After investigating some losses in 1940, Germany changed some message indicators.
In April 1940, the British sank eight German destroyers in Norway. The Germans concluded that it was unlikely that the British were reading Enigma.
In May 1941, the British deciphered some messages that gave the location of some supply ships for the battleship "Bismarck" and the cruiser "Prinz Eugen". As part of the "Operation Rheinübung" commerce raid, the Germans had assigned five tankers, two supply ships, and two scouts to support the warships. After the "Bismarck" was sunk, the British directed their forces to sink the supporting ships "Belchen", "Esso Hamburg", "Egerland", and some others. The Admiralty specifically did not target the tanker "Gedania" and the scout "Gonzenheim", figuring that sinking so many ships within one week would indicate to Germany that Britain was reading Enigma. However, by chance, British forces found those two ships and sank them. The Germans investigated, but concluded Enigma had not been breached by either seizure or brute-force cryptanalysis. Nevertheless, the Germans took some steps to make Enigma more secure. Grid locations (an encoded latitude and longitude) were further disguised using digraph tables and a numeric offset. The U-boats were given their own network, "Triton", to minimise the chance of a cryptanalytic attack.
In August 1941, the British captured . The Germans concluded the crew would have destroyed the important documents, so the cipher was safe. Even if the British had captured the materials intact and could read Enigma, the British would lose that ability when the keys changed on 1 November.
Although Germany realised that convoys were avoiding its wolfpacks, it did not attribute that ability to reading Enigma traffic. Instead, Dönitz thought that Britain was using radar and direction finding. The "Kriegsmarine" continued to increase the number of networks to avoid superimposition attacks on Enigma. At the beginning of 1943, the "Kriegsmarine" had 13 networks.
The "Kriegsmarine" also improved the Enigma. On 1 February 1942, it started using the four-rotor Enigma. The improved security meant that convoys no longer had as much information about the whereabouts of wolfpacks, and were therefore less able to avoid areas where they would be attacked. The increased success of wolfpack attacks following the strengthening of the encryption might have given the Germans a clue that the previous Enigma codes had been broken. However, that recognition did not happen because other things changed at the same time: the United States had entered the war, and Dönitz had sent U-boats to raid the US East Coast, where there were many easy targets.
In early 1943, Dönitz was worried that the Allies were reading Enigma. Germany's own cryptanalysis of Allied communications showed that the Allied estimates of wolfpack sizes were surprisingly accurate. It was concluded, however, that Allied direction finding was the source. The Germans also recovered a cavity magnetron, used to generate radar waves, from a downed British bomber, which reinforced the view that radar and direction finding, rather than decryption, accounted for Allied successes. The conclusion was that the Enigma was secure. The Germans were still suspicious, so each submarine got its own key net in June 1944.
By 1945, almost all German Enigma traffic (that of the Wehrmacht services, the Heer, Kriegsmarine, and Luftwaffe, as well as German intelligence and security services such as the Abwehr and SD) could be decrypted within a day or two, yet the Germans remained confident of its security. They openly discussed their plans and movements, handing the Allies huge amounts of information, not all of which was used effectively. For example, Rommel's actions at Kasserine Pass were clearly foreshadowed in decrypted Enigma traffic, but the Americans did not properly appreciate the information.
After the war, Allied TICOM project teams found and detained a considerable number of German cryptographic personnel. Among the things learned was that German cryptographers, at least, understood very well that Enigma messages might be read; they knew Enigma was not unbreakable. They just found it impossible to imagine anyone going to the immense effort required. When Abwehr personnel who had worked on Fish cryptography and Russian traffic were interned at Rosenheim around May 1945, they were not at all surprised that Enigma had been broken, only that someone had mustered all the resources in time to actually do it. Admiral Dönitz had been advised that a cryptanalytic attack was the least likely of all security problems.
After World War II.
Modern computers can be used to solve Enigma, using a variety of techniques. There have been projects to decrypt some remaining messages using distributed computing.
On 8 May 2020, to mark the 75th anniversary of VE Day, GCHQ released the last Enigma message to be decrypted by codebreakers at Bletchley Park. The message was sent at 07:35 on 7 May 1945 by a German radio operator in Cuxhaven and read: "British troops entered Cuxhaven at 14:00 on 6 May 1945 – all radio broadcast will cease with immediate effect – I wish you all again the best of luck". It was immediately followed by another message: "Closing down forever – all the best – goodbye".
The break into Enigma had been kept a secret until 1974. The machines were used well into the 1960s in Switzerland, Norway (Norenigma), and in some British colonies.
| [
{
"math_id": 0,
"text": "\\tfrac{26!}{(26-2L)! \\cdot L! \\cdot 2^L}"
}
] | https://en.wikipedia.org/wiki?curid=872175 |
8722051 | Laplacian smoothing | Algorithm to smooth a polygonal mesh
Laplacian smoothing is an algorithm to smooth a polygonal mesh. For each vertex in a mesh, a new position is chosen based on local information (such as the positions of its neighbours) and the vertex is moved there. If the mesh is topologically a rectangular grid (that is, each internal vertex is connected to four neighbours), this operation produces the Laplacian of the mesh.
More formally, the smoothing operation may be described per-vertex as:
formula_0
Where formula_1 is the number of adjacent vertices to node formula_2, formula_3 is the position of the formula_4-th adjacent vertex and formula_5 is the new position for node formula_2. | [
{
"math_id": 0,
"text": "\\bar{x}_{i}= \\frac{1}{N} \\sum_{j=1}^{N}\\bar{x}_j "
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "\\bar{x}_{j}"
},
{
"math_id": 4,
"text": "j"
},
{
"math_id": 5,
"text": "\\bar{x}_{i}"
}
] | https://en.wikipedia.org/wiki?curid=8722051 |
8722775 | Systematic code | In coding theory, a systematic code is any error-correcting code in which the input data are embedded in the encoded output. Conversely, in a non-systematic code the output does not contain the input symbols.
Systematic codes have the advantage that the parity data can simply be appended to the source block, and receivers do not need to recover the original source symbols if received correctly – this is useful for example if error-correction coding is combined with a hash function for quickly determining the correctness of the received source symbols, or in cases where errors occur in erasures and a received symbol is thus always correct. Furthermore, for engineering purposes such as synchronization and monitoring, it is desirable to get reasonably good estimates of the received source symbols without going through the lengthy decoding process, which may be carried out at a remote site at a later time.
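As a toy illustration of appending parity to the source block (a sketch only; the parity sub-matrix P below is an arbitrary example choice, not taken from any particular standard):
```python
# Systematic encoding over GF(2) with generator matrix G = [ I_k | P ]:
# the first k output bits are the message itself, the rest are parity.
P = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
]  # example parity sub-matrix: k = 4 message bits, 3 parity bits

def encode(message):                      # message: list of k bits (0/1)
    parity = [sum(m * p for m, p in zip(message, col)) % 2
              for col in zip(*P)]         # message * P over GF(2)
    return list(message) + parity         # systematic: input embedded verbatim

print(encode([1, 0, 1, 1]))               # -> [1, 0, 1, 1, 1, 0, 0]
```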
Properties.
Every non-systematic linear code can be transformed into a systematic code with essentially the same properties (i.e., minimum distance).
Because of the advantages cited above, linear error-correcting codes are therefore generally implemented as systematic codes. However, for certain decoding algorithms such as sequential decoding or maximum-likelihood decoding, a non-systematic structure can increase performance in terms of undetected decoding error probability when the minimum "free" distance of the code is larger.
For a systematic linear code, the generator matrix, formula_0, can always be written as formula_1, where formula_2 is the identity matrix of size formula_3. | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "G = [ I_k | P ]"
},
{
"math_id": 2,
"text": "I_k"
},
{
"math_id": 3,
"text": "k"
}
] | https://en.wikipedia.org/wiki?curid=8722775 |
872314 | Darboux integral | Integral constructed using Darboux sums
In real analysis, the Darboux integral is constructed using Darboux sums and is one possible definition of the integral of a function. Darboux integrals are equivalent to Riemann integrals, meaning that a function is Darboux-integrable if and only if it is Riemann-integrable, and the values of the two integrals, if they exist, are equal. The definition of the Darboux integral has the advantage of being easier to apply in computations or proofs than that of the Riemann integral. Consequently, introductory textbooks on calculus and real analysis often develop Riemann integration using the Darboux integral, rather than the true Riemann integral. Moreover, the definition is readily extended to defining Riemann–Stieltjes integration. Darboux integrals are named after their inventor, Gaston Darboux (1842–1917).
Definition.
The definition of the Darboux integral considers upper and lower (Darboux) integrals, which exist for any bounded real-valued function formula_0 on the interval formula_1 The Darboux integral exists if and only if the upper and lower integrals are equal. The upper and lower integrals are in turn the infimum and supremum, respectively, of upper and lower (Darboux) sums which over- and underestimate, respectively, the "area under the curve." In particular, for a given partition of the interval of integration, the upper and lower sums add together the areas of rectangular slices whose heights are the supremum and infimum, respectively, of "f" in each subinterval of the partition. These ideas are made precise below:
Darboux sums.
A partition of an interval formula_2 is a finite sequence of values formula_3 such that
formula_4
Each interval formula_5 is called a "subinterval" of the partition. Let formula_6 be a bounded function, and let
formula_7
be a partition of formula_2. Let
formula_8
The upper Darboux sum of formula_0 with respect to formula_9 is
formula_10
The lower Darboux sum of formula_0 with respect to formula_9 is
formula_11
The lower and upper Darboux sums are often called the lower and upper sums.
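The two sums can be illustrated with a short numerical sketch (for illustration only; the infimum and supremum on each subinterval are approximated by sampling, which is exact for monotone functions and adequate for well-behaved ones):
```python
# Sketch: lower and upper Darboux sums of f over a given partition of [a, b].
def darboux_sums(f, partition, samples=1000):
    lower = upper = 0.0
    for left, right in zip(partition, partition[1:]):
        values = [f(left + (right - left) * k / samples) for k in range(samples + 1)]
        lower += (right - left) * min(values)   # approximate infimum on the subinterval
        upper += (right - left) * max(values)   # approximate supremum on the subinterval
    return lower, upper

# For f(x) = x on [0, 1] with n equal subintervals, U - L = 1/n
# (see the worked example below).
n = 10
low, up = darboux_sums(lambda x: x, [k / n for k in range(n + 1)])
print(low, up)   # approximately 0.45 and 0.55
```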
Darboux integrals.
The upper Darboux integral of "f" is
formula_12
The lower Darboux integral of "f" is
formula_13
In some literature, an integral symbol with an underline and overline represent the lower and upper Darboux integrals respectively:
formula_14
and like Darboux sums they are sometimes simply called the "lower and upper integrals".
If "U""f" = "L""f", then we call the common value the "Darboux integral". We also say that "f" is "Darboux-integrable" or simply "integrable" and set
formula_15
An equivalent and sometimes useful criterion for the integrability of "f" is to show that for every ε > 0 there exists a partition "P"ε of ["a", "b"] such that
formula_16
then "F" is Lipschitz continuous. An identical result holds if "F" is defined using an upper Darboux integral.
Examples.
A Darboux-integrable function.
Suppose we want to show that the function formula_24 is Darboux-integrable on the interval formula_25 and determine its value. To do this we partition formula_25 into formula_26 equally sized subintervals each of length formula_27. We denote a partition of formula_26 equally sized subintervals as formula_28.
Now since formula_24 is strictly increasing on formula_25, the infimum on any particular subinterval is given by its starting point. Likewise the supremum on any particular subinterval is given by its end point. The starting point of the formula_29-th subinterval in formula_28 is formula_30 and the end point is formula_31. Thus the lower Darboux sum on a partition formula_28 is given by
formula_32
similarly, the upper Darboux sum is given by
formula_33
Since
formula_34
Thus, given any formula_35, any partition formula_28 with formula_36 satisfies
formula_37
which shows that formula_0 is Darboux integrable. To find the value of the integral note that
formula_38
A nonintegrable function.
Suppose we have the Dirichlet function formula_39 defined as
formula_40
Since the rational and irrational numbers are both dense subsets of formula_41, it follows that formula_0 takes on the value of 0 and 1 on every subinterval of any partition. Thus for any partition formula_9 we have
formula_42
from which we can see that the lower and upper Darboux integrals are unequal.
Refinement of a partition and relation to Riemann integration.
A "refinement" of the partition formula_43 is a partition formula_44 such that for all "i" = 0, …, "n" there is an integer "r"("i") such that
formula_45
In other words, to make a refinement, cut the subintervals into smaller pieces and do not remove any existing cuts.
If formula_46 is a refinement of formula_47 then
formula_48
and
formula_49
If "P"1, "P"2 are two partitions of the same interval (one need not be a refinement of the other), then
formula_50
and it follows that
formula_51
Riemann sums always lie between the corresponding lower and upper Darboux sums. Formally, if formula_52 and formula_53 together make a tagged partition
formula_54
(as in the definition of the Riemann integral), and if the Riemann sum of formula_0 is equal to "R" corresponding to "P" and "T", then
formula_55
From the previous fact, Riemann integrals are at least as strong as Darboux integrals: if the Darboux integral exists, then the upper and lower Darboux sums corresponding to a sufficiently fine partition will be close to the value of the integral, so any Riemann sum over the same partition will also be close to the value of the integral. There is (see below) a tagged partition that comes arbitrarily close to the value of the upper Darboux integral or lower Darboux integral, and consequently, if the Riemann integral exists, then the Darboux integral must exist as well.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "[a,b]."
},
{
"math_id": 2,
"text": "[a,b]"
},
{
"math_id": 3,
"text": "x_{i}"
},
{
"math_id": 4,
"text": "a = x_0 < x_1 < \\cdots < x_n = b."
},
{
"math_id": 5,
"text": "[x_{i-1},x_i]"
},
{
"math_id": 6,
"text": "f:[a,b]\\to\\R"
},
{
"math_id": 7,
"text": "P = (x_0, \\ldots, x_n)"
},
{
"math_id": 8,
"text": "\\begin{align}\nM_i = \\sup_{x\\in[x_{i-1},x_{i}]} f(x), \\\\\nm_i = \\inf_{x\\in[x_{i-1},x_{i}]} f(x).\n\\end{align}"
},
{
"math_id": 9,
"text": "P"
},
{
"math_id": 10,
"text": "U_{f, P} = \\sum_{i=1}^n (x_{i}-x_{i-1}) M_i. \\,\\!"
},
{
"math_id": 11,
"text": "L_{f, P} = \\sum_{i=1}^n (x_{i}-x_{i-1}) m_i. \\,\\!"
},
{
"math_id": 12,
"text": "U_f = \\inf\\{U_{f,P} \\colon P \\text{ is a partition of } [a,b]\\}."
},
{
"math_id": 13,
"text": "L_f = \\sup\\{L_{f,P} \\colon P \\text{ is a partition of } [a,b]\\}."
},
{
"math_id": 14,
"text": "\\begin{align}\n&{} L_f \\equiv \\underline{\\int_{a}^{b}} f(x) \\, \\mathrm{d}x, \\\\\n&{} U_f \\equiv \\overline{\\int_{a}^{b}} f(x) \\, \\mathrm{d}x,\n\\end{align}"
},
{
"math_id": 15,
"text": "\\int_a^b {f(t)\\,dt} = U_f = L_f."
},
{
"math_id": 16,
"text": "U_{f,P_\\epsilon} - L_{f,P_\\epsilon} < \\varepsilon."
},
{
"math_id": 17,
"text": "(b-a)\\inf_{x \\in [a,b]} f(x) \\leq L_{f,P} \\leq U_{f,P} \\leq (b-a)\\sup_{x \\in [a,b]} f(x)"
},
{
"math_id": 18,
"text": "\\underline{\\int_{a}^{b}} f(x) \\, dx \\leq \\overline{\\int_{a}^{b}} f(x) \\, dx"
},
{
"math_id": 19,
"text": "\\begin{align}\n\\underline{\\int_{a}^{b}} f(x) \\, dx &= \\underline{\\int_{a}^{c}} f(x) \\, dx + \\underline{\\int_{c}^{b}} f(x) \\, dx\\\\[6pt]\n\\overline{\\int_{a}^{b}} f(x) \\, dx &= \\overline{\\int_{a}^{c}} f(x) \\, dx + \\overline{\\int_{c}^{b}} f(x) \\, dx\n\\end{align}"
},
{
"math_id": 20,
"text": "\\begin{align}\n\\underline{\\int_{a}^{b}} f(x) \\, dx + \\underline{\\int_{a}^{b}} g(x) \\, dx &\\leq \\underline{\\int_{a}^{b}} (f(x) + g(x)) \\, dx\\\\[6pt]\n\\overline{\\int_{a}^{b}} f(x) \\, dx + \\overline{\\int_{a}^{b}} g(x) \\, dx &\\geq \\overline{\\int_{a}^{b}} (f(x) + g(x)) \\, dx\n\\end{align}"
},
{
"math_id": 21,
"text": "\\begin{align}\n\\underline{\\int_{a}^{b}} cf(x) \\, dx &= c\\underline{\\int_{a}^{b}} f(x)\\, dx \\\\[6pt]\n\\overline{\\int_{a}^{b}} cf(x) \\, dx &= c\\overline{\\int_{a}^{b}} f(x)\\, dx\n\\end{align}"
},
{
"math_id": 22,
"text": "\\begin{align}\n\\underline{\\int_{a}^{b}} cf(x)\\, dx &= c\\overline{\\int_{a}^{b}} f(x)\\, dx \\\\[6pt]\n\\overline{\\int_{a}^{b}} cf(x)\\, dx &= c\\underline{\\int_{a}^{b}} f(x)\\, dx\n\\end{align}"
},
{
"math_id": 23,
"text": "\\begin{align}\n&{} F : [a, b] \\to \\R \\\\\n&{} F(x) = \\underline{\\int_{a}^{x}} f(t) \\, dt,\n\\end{align}"
},
{
"math_id": 24,
"text": "f(x)=x"
},
{
"math_id": 25,
"text": "[0,1]"
},
{
"math_id": 26,
"text": "n"
},
{
"math_id": 27,
"text": "1/n"
},
{
"math_id": 28,
"text": "P_n"
},
{
"math_id": 29,
"text": "k"
},
{
"math_id": 30,
"text": "(k-1)/n"
},
{
"math_id": 31,
"text": "k/n"
},
{
"math_id": 32,
"text": "\\begin{align}\nL_{f,P_n} &= \\sum_{k = 1}^{n} f(x_{k-1})(x_{k} - x_{k-1}) \\\\\n &= \\sum_{k = 1}^{n} \\frac{k-1}{n} \\cdot \\frac{1}{n} \\\\\n &= \\frac{1}{n^2} \\sum_{k = 1}^{n} [k-1] \\\\ \n &= \\frac{1}{n^2}\\left[ \\frac{(n-1)n}{2} \\right]\n\\end{align}"
},
{
"math_id": 33,
"text": "\\begin{align}\nU_{f,P_n} &= \\sum_{k = 1}^{n} f(x_{k})(x_{k} - x_{k-1}) \\\\\n &= \\sum_{k = 1}^{n} \\frac{k}{n} \\cdot \\frac{1}{n} \\\\\n &= \\frac{1}{n^2} \\sum_{k = 1}^{n} k \\\\ \n &= \\frac{1}{n^2}\\left[ \\frac{(n+1)n}{2} \\right]\n\\end{align}"
},
{
"math_id": 34,
"text": "U_{f,P_n} - L_{f,P_n} = \\frac{1}{n}"
},
{
"math_id": 35,
"text": "\\varepsilon>0"
},
{
"math_id": 36,
"text": "n > \\frac{1}{\\varepsilon}"
},
{
"math_id": 37,
"text": "U_{f,P_n} - L_{f,P_n} < \\varepsilon"
},
{
"math_id": 38,
"text": "\\int_{0}^{1}f(x) \\, dx\n= \\lim_{n \\to \\infty} U_{f,P_n}\n= \\lim_{n \\to \\infty} L_{f,P_n}\n= \\frac{1}{2}"
},
{
"math_id": 39,
"text": "f:[0,1] \\to \\R"
},
{
"math_id": 40,
"text": "\\begin{align}\nf(x) &=\n \\begin{cases}\n 0 & \\text{if }x\\text{ is rational} \\\\\n 1 & \\text{if }x\\text{ is irrational}\n \\end{cases}\n\\end{align}"
},
{
"math_id": 41,
"text": "\\mathbb{R}"
},
{
"math_id": 42,
"text": "\\begin{align}\nL_{f,P} &=\\sum_{k = 1}^{n}(x_{k} - x_{k-1})\\inf_{x \\in [x_{k-1},x_{k}]}f = 0 \\\\\nU_{f,P} &=\\sum_{k = 1}^{n}(x_{k} - x_{k-1}) \\sup_{x \\in [x_{k-1},x_{k}]}f = 1\n\\end{align}"
},
{
"math_id": 43,
"text": "x_0, \\ldots, x_n"
},
{
"math_id": 44,
"text": "y_0, \\ldots, y_m"
},
{
"math_id": 45,
"text": " x_{i} = y_{r(i)} . "
},
{
"math_id": 46,
"text": "P' = (y_0,\\ldots,y_m) "
},
{
"math_id": 47,
"text": "P = (x_0,\\ldots,x_n) , "
},
{
"math_id": 48,
"text": "U_{f, P} \\ge U_{f, P'} "
},
{
"math_id": 49,
"text": "L_{f, P} \\le L_{f, P'}. "
},
{
"math_id": 50,
"text": "L_{f, P_1} \\le U_{f, P_2}, "
},
{
"math_id": 51,
"text": "L_f \\le U_f . "
},
{
"math_id": 52,
"text": "P = (x_0,\\ldots,x_n) "
},
{
"math_id": 53,
"text": "T = (t_1,\\ldots,t_n) "
},
{
"math_id": 54,
"text": " x_0 \\le t_1 \\le x_1\\le \\cdots \\le x_{n-1} \\le t_n \\le x_n "
},
{
"math_id": 55,
"text": "L_{f, P} \\le R \\le U_{f, P}. "
}
] | https://en.wikipedia.org/wiki?curid=872314 |
8723207 | Stokesian dynamics | Stokesian dynamics
is a solution technique for the Langevin equation, which is the relevant form of Newton's 2nd law for a Brownian particle. The method treats the suspended particles in a discrete sense while the continuum approximation remains valid for the surrounding fluid, i.e., the suspended particles are generally assumed to be significantly larger than the molecules of the solvent. The particles then interact through hydrodynamic forces transmitted via the continuum fluid, and when the particle Reynolds number is small, these forces are determined through the linear Stokes equations (hence the name of the method). In addition, the method can also resolve non-hydrodynamic forces, such as Brownian forces, arising from the fluctuating motion of the fluid, and interparticle or external forces. Stokesian Dynamics can thus be applied to a variety of problems, including sedimentation, diffusion and rheology, and it aims to provide the same level of understanding for multiphase particulate systems as molecular dynamics does for statistical properties of matter. For formula_0 rigid particles of radius formula_1 suspended in an incompressible Newtonian fluid of viscosity formula_2 and density formula_3, the motion of the fluid is governed by the Navier–Stokes equations, while the motion of the particles is described by the coupled equation of motion:
formula_4
In the above equation formula_5 is the particle translational/rotational velocity
vector of dimension 6N. formula_6 is the hydrodynamic force, i.e., force exerted by the fluid on the particle due to relative motion between them. formula_7 is the stochastic Brownian force due to thermal motion of fluid particles. formula_8 is the deterministic nonhydrodynamic force, which may be almost any form of interparticle or external force, e.g. electrostatic repulsion between like charged particles. Brownian dynamics is one of the popular techniques of solving the Langevin equation, but the hydrodynamic interaction in Brownian dynamics is highly simplified and normally includes only the isolated body resistance. On the other hand, Stokesian dynamics includes the many body hydrodynamic interactions. Hydrodynamic interaction is very important for non-equilibrium suspensions, like a sheared suspension, where it plays a vital role in its microstructure and hence its properties. Stokesian dynamics is used primarily for non-equilibrium suspensions where it has been shown to provide results which agree with experiments.
Hydrodynamic interaction.
When the motion on the particle scale is such that the particle Reynolds number is small, the hydrodynamic force exerted on the particles in a suspension undergoing a bulk linear shear flow is:
formula_9
Here, formula_10 is the velocity of the bulk shear flow evaluated at the particle
center, formula_11 is the symmetric part of the velocity-gradient tensor; formula_12 and formula_13 are the configuration-dependent resistance matrices that give the hydrodynamic force/torque on the particles due to their motion relative to the fluid (formula_12) and due to the imposed shear flow (formula_13). Note that the subscripts on the matrices indicate the coupling between kinematic (formula_5) and dynamic (formula_14) quantities.
One of the key features of Stokesian dynamics is its handling of the hydrodynamic interactions, which is fairly accurate without being computationally inhibitive (like boundary integral methods) for a large number of particles. Classical Stokesian dynamics requires formula_15 operations where "N" is the number of particles in the system (usually a periodic box). Recent advances have reduced the computational cost to about formula_16
Brownian force.
The stochastic or Brownian force formula_7 arises from the thermal fluctuations in the fluid and is characterized by:
formula_17
formula_18
The angle brackets denote an ensemble average, formula_19 is the Boltzmann constant, formula_20 is the absolute temperature and formula_21 is the delta function. The amplitude of the correlation between the Brownian forces at time formula_22 and at time formula_23 results from the fluctuation-dissipation theorem for the N-body system.
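As an illustration of how such a force can be generated numerically (a sketch only, with placeholder names and assumed values; this is not the full Stokesian dynamics algorithm):
```python
# Draw a discrete-time Brownian force whose covariance matches 2*k_B*T*R_FU/dt,
# using a Cholesky factor of the resistance matrix R_FU (assumed symmetric
# positive definite).  All variable names and numbers are placeholders.
import numpy as np

def brownian_force(R_FU, kT, dt, rng):
    L = np.linalg.cholesky(R_FU)              # R_FU = L @ L.T
    xi = rng.standard_normal(R_FU.shape[0])   # unit-variance Gaussian noise
    return np.sqrt(2.0 * kT / dt) * (L @ xi)  # <F_B F_B^T> = 2 kT R_FU / dt

rng = np.random.default_rng(0)
R_FU = np.eye(6)                              # trivial resistance matrix for one particle
print(brownian_force(R_FU, kT=4.11e-21, dt=1e-6, rng=rng))   # kT ~ room temperature (assumed)
```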
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "\\eta"
},
{
"math_id": 3,
"text": "\\rho"
},
{
"math_id": 4,
"text": "\\mathbf{m}\\frac{d\\mathbf{U}}{dt} = \\mathbf{F}^\\mathrm{H} + \\mathbf{F}^\\mathrm{B} + \\mathbf{F}^\\mathrm{P}. "
},
{
"math_id": 5,
"text": "\\mathbf{U}"
},
{
"math_id": 6,
"text": "\\mathbf{F}^\\mathrm{H}"
},
{
"math_id": 7,
"text": "\\mathbf{F}^\\mathrm{B}"
},
{
"math_id": 8,
"text": " \\mathbf{F}^\\mathrm{P}"
},
{
"math_id": 9,
"text": "\\mathbf{F}^\\mathrm{H} = -\\mathbf{R}_\\mathrm{FU}(\\mathbf{U}-\\mathbf{U}^{\\infty}) + \\mathbf{R}^\\mathrm{FE}:\\mathbf{E}^{\\infty}. "
},
{
"math_id": 10,
"text": "\\mathbf{U}^{\\infty}"
},
{
"math_id": 11,
"text": "\\mathbf{E}^{\\infty}"
},
{
"math_id": 12,
"text": "\\mathbf{R}_\\mathrm{FU}"
},
{
"math_id": 13,
"text": "\\mathbf{R}_\\mathrm{FE}"
},
{
"math_id": 14,
"text": "\\mathbf{F}"
},
{
"math_id": 15,
"text": "O(N^{3})"
},
{
"math_id": 16,
"text": " O(N^{1.25} \\, \\log N). "
},
{
"math_id": 17,
"text": " \\left\\langle\\mathbf{F}^\\mathrm{B}\\right\\rangle = 0"
},
{
"math_id": 18,
"text": " \\left\\langle\\mathbf{F}^\\mathrm{B}(0)\\mathbf{F}^\\mathrm{B}(t)\\right\\rangle = 2kT\\mathbf{R}_\\mathrm{FU}\\delta(t)"
},
{
"math_id": 19,
"text": "k"
},
{
"math_id": 20,
"text": "T"
},
{
"math_id": 21,
"text": "\\delta(t)"
},
{
"math_id": 22,
"text": "0"
},
{
"math_id": 23,
"text": "t"
}
] | https://en.wikipedia.org/wiki?curid=8723207 |
872374 | Parastatistics | Notion in statistical mechanics
In quantum mechanics and statistical mechanics, parastatistics is a hypothetical alternative to the established particle statistics models (Bose–Einstein statistics, Fermi–Dirac statistics and Maxwell–Boltzmann statistics). Other alternatives include anyonic statistics and braid statistics, both of these involving lower spacetime dimensions. Herbert S. Green is credited with the creation of parastatistics in 1953. The particles predicted by parastatistics have not been experimentally observed.
Formalism.
Consider the operator algebra of a system of "N" identical particles. This is a *-algebra. There is an "SN" group (symmetric group of order "N") acting upon the operator algebra with the intended interpretation of permuting the "N" particles. Quantum mechanics requires focus on observables having a physical meaning, and the observables would have to be invariant under all possible permutations of the "N" particles. For example, in the case "N" = 2, "R"2 − "R"1 cannot be an observable because it changes sign if we switch the two particles, but the distance between the two particles : |"R"2 − "R"1| is a legitimate observable.
In other words, the observable algebra would have to be a *-subalgebra invariant under the action of "SN" (noting that this does not mean that every element of the operator algebra invariant under "SN" is an observable). This allows different superselection sectors, each parameterized by a Young diagram of "SN".
In particular:
Trilinear relations.
There are creation and annihilation operators satisfying the trilinear commutation relations
formula_0
formula_1
formula_2
Quantum field theory.
A paraboson field of order "p", formula_3 where if "x" and "y" are spacelike-separated points, formula_4 and formula_5 if formula_6 where [,] is the commutator and {,} is the anticommutator. Note that this disagrees with the spin-statistics theorem, which is for bosons and not parabosons. There might be a group such as the symmetric group "Sp" acting upon the "φ"("i")s. Observables would have to be operators which are invariant under the group in question. However, the existence of such a symmetry is not essential.
A parafermion field formula_7 of order "p", where if "x" and "y" are spacelike-separated points, formula_8 and formula_9 if formula_6. The same comment about observables would apply together with the requirement that they have even grading under the grading where the "ψ"s have odd grading.
The "parafermionic and parabosonic algebras" are generated by elements that obey the commutation and anticommutation relations. They generalize the usual "fermionic algebra" and the "bosonic algebra" of quantum mechanics. The Dirac algebra and the Duffin–Kemmer–Petiau algebra appear as special cases of the parafermionic algebra for order "p" = 1 and "p" = 2, respectively.
Explanation.
Note that if "x" and "y" are spacelike-separated points, "φ"("x") and "φ"("y") neither commute nor anticommute unless "p"=1. The same comment applies to "ψ"("x") and "ψ"("y"). So, if we have "n" spacelike separated points "x"1, ..., "x""n",
formula_10
corresponds to creating "n" identical parabosons at "x"1..., "x""n". Similarly,
formula_11
corresponds to creating "n" identical parafermions. Because these fields neither commute nor anticommute
formula_12
and
formula_13
gives distinct states for each permutation π in "Sn".
We can define a permutation operator formula_14 by
formula_15
and
formula_16
respectively. This can be shown to be well-defined as long as formula_14 is only restricted to states spanned by the vectors given above (essentially the states with "n" identical particles). It is also unitary. Moreover, formula_17 is an operator-valued representation of the symmetric group "Sn" and as such, we can interpret it as the action of "Sn" upon the "n"-particle Hilbert space itself, turning it into a unitary representation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left[ a_k, \\left[ a_l^\\dagger, a_m \\right]_{\\pm}\\right]_- = [a_k,a_l^\\dagger]_{\\mp}a_m \\pm a_l^\\dagger \\left[ a_k, a_m \\right]_{\\mp} \\pm [a_k,a_m]_{\\mp}a_l^\\dagger+a_m \\left[ a_k, a_l^\\dagger \\right]_{\\mp}= 2\\delta_{kl}a_m"
},
{
"math_id": 1,
"text": "\\left[ a_k, \\left[ a_l^\\dagger, a_m^\\dagger \\right]_{\\pm}\\right]_- =\\left[a_k,a_l^\\dagger\\right]_{\\mp}a_m^\\dagger \\pm a_l^\\dagger \\left[ a_k, a_m^\\dagger \\right]_{\\mp} \\pm \\left[a_k, a_m^\\dagger\\right]_{\\mp} a_l^\\dagger + a_m^\\dagger \\left[ a_k, a_l^\\dagger \\right]_{\\mp}= 2\\delta_{kl}a_m^\\dagger \\pm 2\\delta_{km}a_l^\\dagger"
},
{
"math_id": 2,
"text": "\\left[ a_k, \\left[ a_l, a_m \\right]_{\\pm}\\right]_- = [a_k,a_l]_{\\mp}a_m \\pm a_l \\left[ a_k, a_m \\right]_{\\mp} \\pm [a_k,a_m]_{\\mp}a_l + a_m \\left[ a_k, a_l \\right]_{\\mp} = 0"
},
{
"math_id": 3,
"text": "\\phi(x)=\\sum_{i=1}^p \\phi^{(i)}(x)"
},
{
"math_id": 4,
"text": "[\\phi^{(i)}(x),\\phi^{(i)}(y)]=0"
},
{
"math_id": 5,
"text": "\\{\\phi^{(i)}(x),\\phi^{(j)}(y)\\}=0"
},
{
"math_id": 6,
"text": "i\\neq j"
},
{
"math_id": 7,
"text": "\\psi(x)=\\sum_{i=1}^p \\psi^{(i)}(x)"
},
{
"math_id": 8,
"text": "\\{\\psi^{(i)}(x),\\psi^{(i)}(y)\\}=0"
},
{
"math_id": 9,
"text": "[\\psi^{(i)}(x),\\psi^{(j)}(y)]=0"
},
{
"math_id": 10,
"text": "\\phi(x_1)\\cdots \\phi(x_n)|\\Omega\\rangle"
},
{
"math_id": 11,
"text": "\\psi(x_1)\\cdots \\psi(x_n)|\\Omega\\rangle"
},
{
"math_id": 12,
"text": "\\phi(x_{\\pi(1)})\\cdots \\phi(x_{\\pi(n)})|\\Omega\\rangle"
},
{
"math_id": 13,
"text": "\\psi(x_{\\pi(1)})\\cdots \\psi(x_{\\pi(n)})|\\Omega\\rangle"
},
{
"math_id": 14,
"text": "\\mathcal{E}(\\pi)"
},
{
"math_id": 15,
"text": "\\mathcal{E}(\\pi)\\left[\\phi(x_1)\\cdots \\phi(x_n)|\\Omega\\rangle\\right]=\\phi(x_{\\pi^{-1}(1)})\\cdots \\phi(x_{\\pi^{-1}(n)})|\\Omega\\rangle"
},
{
"math_id": 16,
"text": "\\mathcal{E}(\\pi)\\left[\\psi(x_1)\\cdots \\psi(x_n)|\\Omega\\rangle\\right]=\\psi(x_{\\pi^{-1}(1)})\\cdots \\psi(x_{\\pi^{-1}(n)})|\\Omega\\rangle"
},
{
"math_id": 17,
"text": "\\mathcal{E}"
}
] | https://en.wikipedia.org/wiki?curid=872374 |
8724 | Doppler effect | Frequency change of a wave for observer relative to its source
The Doppler effect (also Doppler shift) is the change in the frequency of a wave in relation to an observer who is moving relative to the source of the wave. The "Doppler effect" is named after the physicist Christian Doppler, who described the phenomenon in 1842. A common example of Doppler shift is the change of pitch heard when a vehicle sounding a horn approaches and recedes from an observer. Compared to the emitted frequency, the received frequency is higher during the approach, identical at the instant of passing by, and lower during the recession.
When the source of the sound wave is moving towards the observer, each successive cycle of the wave is emitted from a position closer to the observer than the previous cycle. Hence, from the observer's perspective, the time between cycles is reduced, meaning the frequency is increased. Conversely, if the source of the sound wave is moving away from the observer, each cycle of the wave is emitted from a position farther from the observer than the previous cycle, so the arrival time between successive cycles is increased, thus reducing the frequency.
For waves that propagate in a medium, such as sound waves, the velocity of the observer and of the source are relative to the medium in which the waves are transmitted. The total Doppler effect in such cases may therefore result from motion of the source, motion of the observer, motion of the medium, or any combination thereof. For waves propagating in vacuum, as is possible for electromagnetic waves or gravitational waves, only the difference in velocity between the observer and the source needs to be considered.
History.
Doppler first proposed this effect in 1842 in his treatise "Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels" (On the coloured light of the binary stars and some other stars of the heavens). The hypothesis was tested for sound waves by Buys Ballot in 1845. He confirmed that the sound's pitch was higher than the emitted frequency when the sound source approached him, and lower than the emitted frequency when the sound source receded from him. Hippolyte Fizeau discovered independently the same phenomenon on electromagnetic waves in 1848 (in France, the effect is sometimes called "effet Doppler-Fizeau" but that name was not adopted by the rest of the world as Fizeau's discovery was six years after Doppler's proposal). In Britain, John Scott Russell made an experimental study of the Doppler effect (1848).
General.
In classical physics, where the speeds of the source and the receiver relative to the medium are lower than the speed of waves in the medium, the relationship between observed frequency formula_0 and emitted frequency formula_1 is given by:
formula_2
where formula_3 is the propagation speed of the waves in the medium; formula_4 is the speed of the receiver relative to the medium, added to formula_3 if the receiver is moving towards the source and subtracted if it is moving away; and formula_6 is the speed of the source relative to the medium, added to formula_3 if the source is moving away from the receiver and subtracted if it is moving towards the receiver.
Note that this relationship predicts that the frequency will decrease if either source or receiver is moving away from the other.
Equivalently, under the assumption that the source is either directly approaching or receding from the observer:
formula_7
where formula_8 is the speed of the wave relative to the receiver, formula_9 is the speed of the wave relative to the source, and formula_10 is the wavelength.
If the source approaches the observer at an angle (but still with a constant speed), the observed frequency that is first heard is higher than the object's emitted frequency. Thereafter, there is a monotonic decrease in the observed frequency as it gets closer to the observer, through equality when it is coming from a direction perpendicular to the relative motion (and was emitted at the point of closest approach; but when the wave is received, the source and observer will no longer be at their closest), and a continued monotonic decrease as it recedes from the observer. When the observer is very close to the path of the object, the transition from high to low frequency is very abrupt. When the observer is far from the path of the object, the transition from high to low frequency is gradual.
If the speeds formula_6 and formula_11 are small compared to the speed of the wave, the relationship between observed frequency formula_0 and emitted frequency formula_1 becomes approximately linear: to first order, the magnitude of the fractional change in frequency equals the relative speed between receiver and source divided by the speed of the wave,
where formula_12 is the change in the observed frequency and formula_13 is the velocity of the receiver relative to the source.
<templatestyles src="Math_proof/styles.css" />Proof
Given formula_14
we divide the numerator and the denominator by formula_5:
formula_15
Since formula_16, we can substitute using the Taylor series expansion of formula_17, truncating all formula_18 and higher terms:
formula_19
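The remaining step of the approximation, keeping only terms that are first order in the small speeds, can be sketched as follows (this simply completes the truncation above):
```latex
f \approx \left(1 + \frac{v_\text{r}}{c}\right)\left(1 - \frac{v_\text{s}}{c}\right) f_0
  \approx \left(1 + \frac{v_\text{r} - v_\text{s}}{c}\right) f_0,
\qquad \text{so} \qquad
\frac{\Delta f}{f_0} \approx \frac{v_\text{r} - v_\text{s}}{c}.
```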
Consequences.
Assuming a stationary observer and a wave source moving towards the observer at (or exceeding) the speed of the wave, the Doppler equation predicts an infinite (or negative) frequency from the observer's perspective. Thus, the Doppler equation is inapplicable for such cases. If the wave is a sound wave and the sound source is moving faster than the speed of sound, the resulting shock wave creates a sonic boom.
Lord Rayleigh predicted the following effect in his classic book on sound: if the observer were moving from the (stationary) source at twice the speed of sound, a musical piece "previously" emitted by that source would be heard in correct tempo and pitch, but as if played "backwards".
Applications.
Sirens.
A siren on a passing emergency vehicle will start out higher than its stationary pitch, slide down as it passes, and continue lower than its stationary pitch as it recedes from the observer. Astronomer John Dobson explained the effect thus:
<templatestyles src="Template:Blockquote/styles.css" />The reason the siren slides is because it doesn't hit you.
In other words, if the siren approached the observer directly, the pitch would remain constant, at a higher than stationary pitch, until the vehicle hit him, and then immediately jump to a new lower pitch. Because the vehicle passes by the observer, the radial speed does not remain constant, but instead varies as a function of the angle between his line of sight and the siren's velocity:
formula_20
where formula_21 is the angle between the object's forward velocity and the line of sight from the object to the observer.
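A rough numerical illustration of this passing-siren geometry (all values are assumed for the example; the speed of sound is taken as approximately 343 m/s):
```python
# Observed frequency of a siren driving past a stationary listener, using
# f_obs = f0 * c / (c - v_s * cos(theta)), with made-up numbers.
import math

c = 343.0          # speed of sound in air, m/s (approximate)
f0 = 700.0         # emitted siren frequency, Hz (assumed)
v = 25.0           # vehicle speed, m/s (assumed)
d = 10.0           # perpendicular distance from listener to the road, m (assumed)

for x in (-100.0, -10.0, 0.0, 10.0, 100.0):   # vehicle position along the road
    cos_theta = -x / math.hypot(x, d)          # cosine of angle between velocity and line of sight
    f_obs = f0 * c / (c - v * cos_theta)
    print(f"x = {x:6.1f} m -> observed {f_obs:6.1f} Hz")   # slides from ~755 Hz down to ~653 Hz
```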
Astronomy.
The Doppler effect for electromagnetic waves such as light is of widespread use in astronomy to measure the speed at which stars and galaxies are approaching or receding from us, resulting in so called blueshift or redshift, respectively. This may be used to detect if an apparently single star is, in reality, a close binary, to measure the rotational speed of stars and galaxies, or to detect exoplanets. This effect typically happens on a very small scale; there would not be a noticeable difference in visible light to the unaided eye.
The use of the Doppler effect in astronomy depends on knowledge of precise frequencies of discrete lines in the spectra of stars.
Among the nearby stars, the largest radial velocities with respect to the Sun are +308 km/s (BD-15°4041, also known as LHS 52, 81.7 light-years away) and −260 km/s (Woolley 9722, also known as Wolf 1106 and LHS 64, 78.2 light-years away). Positive radial speed means the star is receding from the Sun, negative that it is approaching.
Redshift is also used to measure the expansion of the universe. It is sometimes claimed that this is not truly a Doppler effect but instead arises from the expansion of space. However, this picture can be misleading because the expansion of space is only a mathematical convention, corresponding to a choice of coordinates. The most natural interpretation of the cosmological redshift is that it is indeed a Doppler shift.
Distant galaxies also exhibit peculiar motion distinct from their cosmological recession speeds. If redshifts are used to determine distances in accordance with Hubble's law, then these peculiar motions give rise to redshift-space distortions.
Radar.
The Doppler effect is used in some types of radar, to measure the velocity of detected objects. A radar beam is fired at a moving target — e.g. a motor car, as police use radar to detect speeding motorists — as it approaches or recedes from the radar source. Each successive radar wave has to travel farther to reach the car, before being reflected and re-detected near the source. As each wave has to move farther, the gap between each wave increases, increasing the wavelength. In some situations, the radar beam is fired at the moving car as it approaches, in which case each successive wave travels a lesser distance, decreasing the wavelength. In either situation, calculations from the Doppler effect accurately determine the car's speed. Moreover, the proximity fuze, developed during World War II, relies upon Doppler radar to detonate explosives at the correct time, height, distance, etc.
Because the Doppler shift affects the wave incident upon the target as well as the wave reflected back to the radar, the change in frequency observed by a radar due to a target moving at relative speed formula_22 is twice that from the same target emitting a wave:
formula_23
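As a quick numerical illustration of this two-way shift (the numbers are assumed for the example, not taken from any particular radar):
```python
# A target receding at 30 m/s from an (assumed) 24 GHz traffic radar.
c = 299_792_458.0      # speed of light, m/s
f0 = 24.0e9            # radar carrier frequency, Hz (assumed)
dv = 30.0              # relative speed of the target, m/s (assumed)
df = 2 * dv / c * f0   # two-way Doppler shift
print(df)              # about 4.8 kHz
```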
Medical.
An echocardiogram can, within certain limits, produce an accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect. One of the limitations is that the ultrasound beam should be as parallel to the blood flow as possible. Velocity measurements allow assessment of cardiac valve areas and function, abnormal communications between the left and right side of the heart, leaking of blood through the valves (valvular regurgitation), and calculation of the cardiac output. Contrast-enhanced ultrasound using gas-filled microbubble contrast media can be used to improve velocity or other flow-related medical measurements.
Although "Doppler" has become synonymous with "velocity measurement" in medical imaging, in many cases it is not the frequency shift (Doppler shift) of the received signal that is measured, but the phase shift ("when" the received signal arrives).
Velocity measurements of blood flow are also used in other fields of medical ultrasonography, such as obstetric ultrasonography and neurology. Velocity measurement of blood flow in arteries and veins based on Doppler effect is an effective tool for diagnosis of vascular problems like stenosis.
Flow measurement.
Instruments such as the laser Doppler velocimeter (LDV), and acoustic Doppler velocimeter (ADV) have been developed to measure velocities in a fluid flow. The LDV emits a light beam and the ADV emits an ultrasonic acoustic burst, and measure the Doppler shift in wavelengths of reflections from particles moving with the flow. The actual flow is computed as a function of the water velocity and phase. This technique allows non-intrusive flow measurements, at high precision and high frequency.
Velocity profile measurement.
Developed originally for velocity measurements in medical applications (blood flow), Ultrasonic Doppler Velocimetry (UDV) can measure in real time complete velocity profiles in almost any liquid containing particles in suspension, such as dust, gas bubbles, or emulsions. Flows can be pulsating, oscillating, laminar or turbulent, stationary or transient. This technique is fully non-invasive.
Satellites.
Satellite navigation.
The Doppler shift can be exploited for satellite navigation such as in Transit and DORIS.
Satellite communication.
Doppler also needs to be compensated in satellite communication.
Fast-moving satellites can have a Doppler shift of dozens of kilohertz relative to a ground station. The speed, and thus the magnitude of the Doppler effect, changes due to the Earth's curvature. Dynamic Doppler compensation, where the frequency of a signal is changed progressively during transmission, is used so the satellite receives a constant frequency signal. After realizing that the Doppler shift had not been considered before launch of the Huygens probe of the 2005 Cassini–Huygens mission, the probe trajectory was altered to approach Titan in such a way that its transmissions traveled perpendicular to its direction of motion relative to Cassini, greatly reducing the Doppler shift.
Doppler shift of the direct path can be estimated by the following formula:
formula_25
where formula_26 is the speed of the mobile station, formula_27 is the wavelength of the carrier, formula_24 is the elevation angle of the satellite and formula_21 is the driving direction with respect to the satellite.
The additional Doppler shift due to the satellite moving can be described as:
formula_28
where formula_29 is the relative speed of the satellite.
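As a rough numerical illustration of the formula above (all values are assumed for the example), a low-Earth-orbit pass can indeed produce a shift of tens of kilohertz:
```python
# Doppler shift of an (assumed) 2 GHz carrier for a satellite with an
# assumed 7 km/s line-of-sight relative speed.
c = 299_792_458.0               # speed of light, m/s
wavelength = c / 2.0e9          # carrier wavelength, ~0.15 m
v_rel = 7000.0                  # assumed relative line-of-sight speed, m/s
print(v_rel / wavelength)       # roughly 47 kHz, i.e. "dozens of kilohertz"
```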
Audio.
The Leslie speaker, most commonly associated with and predominantly used with the famous Hammond organ, takes advantage of the Doppler effect by using an electric motor to rotate an acoustic horn around a loudspeaker, sending its sound in a circle. This results at the listener's ear in rapidly fluctuating frequencies of a keyboard note.
Vibration measurement.
A laser Doppler vibrometer (LDV) is a non-contact instrument for measuring vibration. The laser beam from the LDV is directed at the surface of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface.
Robotics.
Dynamic real-time path planning in robotics, used to aid the movement of robots in a sophisticated environment with moving obstacles, often takes advantage of the Doppler effect. Such applications are especially common in competitive robotics, where the environment is constantly changing, such as robosoccer.
Inverse Doppler effect.
Since 1968 scientists such as Victor Veselago have speculated about the possibility of an inverse Doppler effect. The size of the Doppler shift depends on the refractive index of the medium a wave is traveling through. Some materials are capable of negative refraction, which should lead to a Doppler shift that works in a direction opposite that of a conventional Doppler shift. The first experiment that detected this effect was conducted by Nigel Seddon and Trevor Bearpark in Bristol, United Kingdom in 2003. Later, the inverse Doppler effect was observed in some inhomogeneous materials, and predicted inside a Vavilov–Cherenkov cone.
Primary sources.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "f_\\text{0}"
},
{
"math_id": 2,
"text": "f = \\left( \\frac{c \\pm v_\\text{r}}{c \\mp v_\\text{s}} \\right) f_0 "
},
{
"math_id": 3,
"text": "c "
},
{
"math_id": 4,
"text": "v_\\text{r} "
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "v_\\text{s} "
},
{
"math_id": 7,
"text": "\\frac{f}{v_{wr}} = \\frac{f_0}{v_{ws}} = \\frac{1}{\\lambda}"
},
{
"math_id": 8,
"text": "v_{wr}"
},
{
"math_id": 9,
"text": "v_{ws}"
},
{
"math_id": 10,
"text": "\\lambda"
},
{
"math_id": 11,
"text": "v_\\text{r} \\,"
},
{
"math_id": 12,
"text": "\\Delta f = f - f_0 "
},
{
"math_id": 13,
"text": "\\Delta v = -(v_\\text{r} - v_\\text{s}) "
},
{
"math_id": 14,
"text": "f = \\left( \\frac{c + v_\\text{r}}{c + v_\\text{s}} \\right) f_0"
},
{
"math_id": 15,
"text": "f\n= \\left( \\frac{1 + \\frac{v_\\text{r}} {c}} {1 + \\frac{v_\\text{s}} {c}} \\right) f_0\n= \\left( 1 + \\frac{v_\\text{r}}{c} \\right) \\left( \\frac{1}{1 + \\frac{v_\\text{s}} {c}} \\right) f_0 "
},
{
"math_id": 16,
"text": "\\frac{v_\\text{s}}{c} \\ll 1"
},
{
"math_id": 17,
"text": "\\frac{1} {1 + x}"
},
{
"math_id": 18,
"text": "x^2"
},
{
"math_id": 19,
"text": " \\frac{1} {1 + \\frac{v_\\text{s}}{c}} \\approx 1 - \\frac{v_\\text{s}}{c}"
},
{
"math_id": 20,
"text": "v_\\text{radial} = v_\\text{s} \\cos(\\theta)"
},
{
"math_id": 21,
"text": "\\theta"
},
{
"math_id": 22,
"text": "\\Delta v"
},
{
"math_id": 23,
"text": "\\Delta f=\\frac{2\\Delta v}{c}f_0."
},
{
"math_id": 24,
"text": "\\phi"
},
{
"math_id": 25,
"text": "f_{\\rm D, dir} = \\frac{v_{\\rm mob}}{\\lambda_{\\rm c}}\\cos\\phi \\cos\\theta"
},
{
"math_id": 26,
"text": "v_\\text{mob}"
},
{
"math_id": 27,
"text": "\\lambda_{\\rm c}"
},
{
"math_id": 28,
"text": "f_{\\rm D,sat} = \\frac{v_{\\rm rel,sat}}{\\lambda_{\\rm c}}"
},
{
"math_id": 29,
"text": "v_{\\rm rel,sat}"
}
] | https://en.wikipedia.org/wiki?curid=8724 |
872412 | Superselection | Rule forbidding the coherence of certain states
In quantum mechanics, superselection extends the concept of selection rules.
Superselection rules are postulated rules forbidding the preparation of quantum states that exhibit coherence between eigenstates of certain observables.
It was originally introduced by Gian Carlo Wick, Arthur Wightman, and Eugene Wigner to impose additional restrictions to quantum theory beyond those of selection rules.
Mathematically speaking, two quantum states formula_0 and formula_1 are separated by a selection rule if formula_2 for the given Hamiltonian formula_3, while they are separated by a superselection rule if formula_4 for "all "physical observables formula_5. Because no observable connects formula_6 and formula_7 they cannot be put into a quantum superposition formula_8, and/or a quantum superposition cannot be distinguished from a classical mixture of the two states. It also implies that there is a classically conserved quantity that differs between the two states.
A superselection sector is a concept used in quantum mechanics when a representation of a *-algebra is decomposed into irreducible components. It formalizes the idea that not all self-adjoint operators are observables because the relative phase of a superposition of nonzero states from different irreducible components is not observable (the expectation values of the observables can't distinguish between them).
Formulation.
Suppose "A" is a unital *-algebra and "O" is a unital *-subalgebra whose self-adjoint elements correspond to observables. A unitary representation of "O" may be decomposed as the direct sum of irreducible unitary representations of "O". Each isotypic component in this decomposition is called a "superselection sector". Observables preserve the superselection sectors.
Relationship to symmetry.
Symmetries often give rise to superselection sectors (although this is not the only way they occur). Suppose a group "G" acts upon "A", and that H is a unitary representation of both "A" and "G" which is equivariant in the sense that for all "g" in "G", "a" in "A" and "ψ" in H,
formula_9
Suppose that "O" is an invariant subalgebra of "A" under "G" (all observables are invariant under "G", but not every self-adjoint operator invariant under "G" is necessarily an observable). H decomposes into superselection sectors, each of which is the tensor product of an irreducible representation of "G" with a representation of "O".
This can be generalized by assuming that H is only a representation of an extension or cover "K" of "G". (For instance "G" could be the Lorentz group, and "K" the corresponding spin double cover.) Alternatively, one can replace "G" by a Lie algebra, Lie superalgebra or a Hopf algebra.
Examples.
Consider a quantum mechanical particle confined to a closed loop (i.e., a periodic line of period "L"). The superselection sectors are labeled by an angle θ between 0 and 2π. All the wave functions within a single superselection sector satisfy
formula_10
Superselection sectors.
A large physical system with infinitely many degrees of freedom does not always visit every possible state, even if it has enough energy. If a magnet is magnetized in a certain direction, each spin will fluctuate at any temperature, but the net magnetization will never change. The reason is that it is infinitely improbable that all the infinitely many spins at each different position will all fluctuate together in the same way.
A big system often has superselection sectors. In a solid, different rotations and translations which are not lattice symmetries define superselection sectors. In general, a superselection rule is a quantity that can never change through local fluctuations. Aside from order parameters like the magnetization of a magnet, there are also topological quantities, like the winding number. If a string is wound around a circular wire, the total number of times it winds around never changes under local fluctuations. This is an ordinary conservation law. If the wire is an infinite line, under conditions that the vacuum does not have winding number fluctuations which are coherent throughout the system, the conservation law is a superselection rule --- the probability that the winding will unwind is zero.
There are quantum fluctuations, superpositions arising from different configurations of a phase-type path integral, and statistical fluctuations from a Boltzmann type path integral. Both of these path integrals have the property that large changes in an effectively infinite system require an improbable conspiracy between the fluctuations. So there are both statistical mechanical and quantum mechanical superselection rules.
In a theory where the vacuum is invariant under a symmetry, the conserved charge leads to superselection sectors in the case that the charge is conserved. Electric charge is conserved in our universe, so it seems at first like a trivial example. But when a superconductor fills space, or equivalently in a Higgs phase, electric charge is still globally conserved but no longer defines the superselection sectors. The sloshing of the superconductor can bring charges into any volume at very little cost. In this case, the superselection sectors of the vacuum are labeled by the direction of the Higgs field. Since different Higgs directions are related by an exact symmetry, they are all exactly equivalent. This suggests a deep relationship between symmetry breaking directions and conserved charges.
Discrete symmetry.
In the 2D Ising model, at low temperatures, there are two distinct pure states, one with the average spin pointing up and the other with the average spin pointing down. This is the ordered phase. At high temperatures, there is only one pure state with an average spin of zero. This is the disordered phase. At the phase transition between the two, the symmetry between spin up and spin down is broken.
Below the phase transition temperature, an infinite Ising model can be in either the mostly-plus or the mostly-minus configuration. If it starts in the mostly-plus phase, it will never reach the mostly-minus, even though flipping all the spins would give the same energy. By changing the temperature, the system acquires a new superselection rule: the average spin. There are two superselection sectors: mostly minus and mostly plus.
There are also other superselection sectors; for instance, states where the left half of the plane is mostly plus and the right half of the plane is mostly minus.
When a new superselection rule appears, the system has spontaneously ordered. Above the critical temperature, the Ising model is disordered. It could visit every state in principle. Below the transition, the system chooses one of two possibilities at random and never changes its mind.
For any finite system, the superselection is imperfect. An Ising model on a finite lattice will eventually fluctuate from the mostly plus to the mostly minus at any nonzero temperature, but it takes a very long time. The amount of time is exponentially small in the size of the system measured in correlation lengths, so for all practical purposes the flip never happens even in systems only a few times larger than the correlation length.
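A small simulation makes the point concrete. The sketch below (a rough illustration with arbitrarily chosen parameters, not taken from any reference) starts a finite two-dimensional Ising model in the all-plus state and shows that, at a temperature well below critical, its average spin stays close to +1 over a practical number of Monte Carlo sweeps:
```python
# Metropolis dynamics for a small 2D Ising model started all-plus.
import math, random

def metropolis_sweep(spins, L, beta):
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb            # energy cost of flipping spin (i, j)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] *= -1

L, beta = 16, 1.0                            # beta well above the critical value ~0.44
spins = [[1] * L for _ in range(L)]
for _ in range(200):
    metropolis_sweep(spins, L, beta)
m = sum(sum(row) for row in spins) / (L * L)
print("average spin after 200 sweeps:", m)   # stays close to +1
```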
Continuous symmetries.
If a statistical or quantum field theory has three real-valued scalar fields formula_11, and the energy or action only depends on combinations which are symmetric under rotations of these components into each other, the contributions with the lowest dimension are (summation convention):
formula_12
and define the action in a quantum field context or free energy in the statistical context. There are two phases. When t is large and positive, the potential tends to move the average formula_13 to zero. For t large and negative, the quadratic potential pushes formula_13 out, but the quartic potential prevents it from becoming infinite. If this is done in a quantum path integral, this is a quantum phase transition; in a classical partition function, it is a classical phase transition.
So as t moves toward more negative values in either context, the field has to choose some direction to point. Once it does this, it cannot change its mind. The system has "ordered". In the ordered phase, there is still a little bit of symmetry--- rotations around the axis of the breaking. The field can point in any direction labelled by all the points on a unit sphere in formula_13 space, which is the coset space of the unbroken SO(2) subgroup in the full symmetry group SO(3).
In the disordered phase, the superselection sectors are described by the representation of SO(3) under which a given configuration transforms globally. Because the SO(3) is unbroken, different representations will not mix with each other. No local fluctuation will ever bring in nontrivial SO(3) configurations from infinity. A local configuration is entirely defined by its representation.
There is a mass gap, or a correlation length, which separates configurations with a nontrivial SO(3) transformations from the rotationally invariant vacuum. This is true until the critical point in t where the mass gap disappears and the correlation length is infinite. The vanishing gap is a sign that the fluctuations in the SO(3) field are about to condense.
In the ordered region, there are field configurations which can carry topological charge. These are labeled by elements of the second homotopy group formula_14. Each of these describe a different field configuration which at large distances from the origin is a winding configuration. Although each such isolated configuration has infinite energy, it labels superselection sectors where the difference in energy between two states is finite. In addition, pairs of winding configurations with opposite topological charge can be produced copiously as the transition is approached from below.
When the winding number is zero, so that the field everywhere points in the same direction, there is an additional infinity of superselection sectors, each labelled by a different value of the unbroken SO(2) charge.
In the ordered state, there is a mass gap for the superselection sectors labeled by a nonzero integer, because the topological solitons are massive, even infinitely massive. But there is no mass gap for all the superselection sectors labeled by zero because there are massless Goldstone bosons describing fluctuations in the direction of the condensate.
If the field values are identified under a Z2 reflection (corresponding to flipping the sign of all the formula_13 fields), the superselection sectors are labelled by a nonnegative integer (the absolute value of the topological charge).
O(3) charges only make sense in the disordered phase and not at all in the ordered phase. This is because when the symmetry is broken there is a condensate which is charged, which is not invariant under the symmetry group. Conversely, the topological charge only makes sense in the ordered phase and not at all in the disordered phase, because in some hand-waving way there is a "topological condensate" in the disordered phase which randomizes the field from point to point. The randomizing can be thought of as crossing many condensed topological winding boundaries.
The very question of what charges are meaningful depends very much on the phase. Approaching the phase transition from the disordered side, the mass of the charges particles approaches zero. Approaching it from the ordered side, the mass gap associated with fluctuations of the topological solitons approaches zero.
Examples in particle physics.
In the standard model of particle physics, in the electroweak sector, the low energy model is SU(2) and U(1) broken to U(1) by a Higgs doublet. The only superselection rule determining the configuration is the total electric charge. If there are monopoles, then the monopole charge must be included.
If the Higgs t parameter is varied so that it does not acquire a vacuum expectation value, the universe is now symmetric under an unbroken SU(2) and U(1) gauge group. If the SU(2) has infinitesimally weak couplings, so that it only confines at enormous distances, then the representation of the SU(2) group and the U(1) charge both are superselection rules. But if the SU(2) has a nonzero coupling then the superselection sectors are separated by infinite mass because the mass of any state in a nontrivial representation is infinite.
By changing the temperature, the Higgs fluctuations can zero out the expectation value at a finite temperature. Above this temperature, the SU(2) and U(1) quantum numbers describe the superselection sectors. Below the phase transition, only electric charge defines the superselection sector.
Consider the global flavour symmetry of QCD in the chiral limit where the masses of the quarks are zero. This is not exactly the universe in which we live, where the up and down quarks have a tiny but nonzero mass, but it is a very good approximation, to the extent that isospin is conserved.
Below a certain temperature which is the symmetry restoration temperature, the phase is ordered.
The chiral condensate forms, and pions of small mass are produced. The SU(Nf) charges, Isospin and Hypercharge and SU(3), make sense. Above the QCD temperature lies a disordered phase where SU(Nf)×SU(Nf) and color SU(3) charges make sense.
It is an open question whether the deconfinement temperature of QCD is also the temperature at which the chiral condensate melts.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\psi_1"
},
{
"math_id": 1,
"text": "\\psi_2"
},
{
"math_id": 2,
"text": " \\langle \\psi_1 | H | \\psi_2 \\rangle = 0"
},
{
"math_id": 3,
"text": " H "
},
{
"math_id": 4,
"text": " \\langle \\psi_1 | A | \\psi_2 \\rangle = 0"
},
{
"math_id": 5,
"text": " A "
},
{
"math_id": 6,
"text": " \\langle \\psi_1 | "
},
{
"math_id": 7,
"text": " | \\psi_2 \\rangle "
},
{
"math_id": 8,
"text": " \\alpha | \\psi_1 \\rangle + \\beta | \\psi_2 \\rangle "
},
{
"math_id": 9,
"text": " g (a\\cdot\\psi) = (ga)\\cdot (g\\psi)"
},
{
"math_id": 10,
"text": "\\psi(x+L)=e^{i \\theta}\\psi(x)."
},
{
"math_id": 11,
"text": "\\phi_1,\\phi_2,\\phi_3 "
},
{
"math_id": 12,
"text": "\n|\\nabla \\phi_i|^2 + t \\phi_i^2 + \\lambda (\\phi_i^2)^2 \n\\,"
},
{
"math_id": 13,
"text": "\\phi"
},
{
"math_id": 14,
"text": "\\pi_2(SO(3)/SO(2))=\\mathbb{Z}"
}
] | https://en.wikipedia.org/wiki?curid=872412 |
872484 | Richard P. Brent | Australian mathematician and computer scientist
Richard Peirce Brent is an Australian mathematician and computer scientist. He is an emeritus professor at the Australian National University. From March 2005 to March 2010 he was a Federation Fellow at the Australian National University. His research interests include number theory (in particular factorisation), random number generators, computer architecture, and analysis of algorithms.
In 1973, he published a root-finding algorithm (an algorithm for solving equations numerically) which is now known as Brent's method.
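Brent's method is implemented in many numerical libraries; for example, SciPy exposes it as scipy.optimize.brentq, a bracketing root finder:
```python
from scipy.optimize import brentq

root = brentq(lambda x: x**3 - 2.0, 1.0, 2.0)   # solve x^3 = 2 on the bracket [1, 2]
print(root)                                      # ~1.2599, the cube root of 2
```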
In 1975 he and Eugene Salamin independently conceived the Salamin–Brent algorithm, used in high-precision calculation of formula_0. At the same time, he showed that all the elementary functions (such as log("x"), sin("x") etc.) can be evaluated to high precision in the same time as formula_0 (apart from a small constant factor) using the arithmetic-geometric mean of Carl Friedrich Gauss.
In 1979 he showed that the first 75 million complex zeros of the Riemann zeta function lie on the critical line, providing some experimental evidence for the Riemann hypothesis.
In 1980 he and Nobel laureate Edwin McMillan found a new algorithm for high-precision computation of the Euler–Mascheroni constant formula_1 using Bessel functions, and showed that formula_1 cannot have a simple rational form "p"/"q" (where "p" and "q" are integers) unless "q" is extremely large (greater than 10^15000).
In 1980 he and John Pollard factored the eighth Fermat number using a variant of the Pollard rho algorithm. He later factored the tenth and eleventh Fermat numbers using Lenstra's elliptic curve factorisation algorithm.
In 2002, Brent, Samuli Larvala and Paul Zimmermann discovered a very large primitive trinomial over GF(2):
formula_2
The degree 6972593 is the exponent of a Mersenne prime.
In 2009 and 2016, Brent and Paul Zimmermann discovered some even larger primitive trinomials, for example:
formula_3
The degree 43112609 is again the exponent of a Mersenne prime. The highest degree trinomials found were three trinomials of degree 74,207,281, also a Mersenne prime exponent.
In 2011, Brent and Paul Zimmermann published "Modern Computer Arithmetic" (Cambridge University Press), a book about algorithms for performing arithmetic, and their implementation on modern computers.
Brent is a Fellow of the Association for Computing Machinery, the IEEE, SIAM and the Australian Academy of Science. In 2005, he was awarded the Hannan Medal by the Australian Academy of Science. In 2014, he was awarded the Moyal Medal by Macquarie University.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pi"
},
{
"math_id": 1,
"text": "\\gamma"
},
{
"math_id": 2,
"text": "x^{6972593} + x^{3037958} + 1."
},
{
"math_id": 3,
"text": "x^{43112609} + x^{3569337} + 1."
}
] | https://en.wikipedia.org/wiki?curid=872484 |
8726682 | Etching (microfabrication) | Technique in microfabrication used to remove material and create structures
Etching is used in microfabrication to chemically remove layers from the surface of a wafer during manufacturing. Etching is a critically important process module in fabrication, and every wafer undergoes many etching steps before it is complete.
For many etch steps, part of the wafer is protected from the etchant by a "masking" material which resists etching. In some cases, the masking material is a photoresist which has been patterned using photolithography. Other situations require a more durable mask, such as silicon nitride.
Etching media and technology.
The two fundamental types of etchants are liquid-phase ("wet") and plasma-phase ("dry"). Each of these exists in several varieties.
Wet etching.
The first etching processes used liquid-phase ("wet") etchants. This process is now largely outdated, having been used up until the late 1980s, when it was superseded by dry plasma etching. The wafer can be immersed in a bath of etchant, which must be agitated to achieve good process control. For instance, buffered hydrofluoric acid (BHF) is commonly used to etch silicon dioxide over a silicon substrate.
Different specialized etchants can be used to characterize the surface etched.
Wet etchants are usually isotropic, which leads to a large bias when etching thick films. They also require the disposal of large amounts of toxic waste. For these reasons, they are seldom used in state-of-the-art processes. However, the photographic developer used for photoresist resembles wet etching.
As an alternative to immersion, single-wafer machines use the Bernoulli principle to employ a gas (usually pure nitrogen) to cushion and protect one side of the wafer while etchant is applied to the other side. This can be done to either the front side or the back side. The etch chemistry is dispensed on the top side while the wafer is in the machine, and the bottom side is not affected. This etching method is particularly effective just before "backend" processing (BEOL), when wafers are normally much thinner after wafer backgrinding and very sensitive to thermal or mechanical stress. Etching away even a thin layer of a few micrometres removes the microcracks produced during backgrinding, leaving the wafer with dramatically increased strength and the ability to flex without breaking.
Anisotropic wet etching (Orientation dependent etching).
Some wet etchants etch crystalline materials at very different rates depending upon which crystal face is exposed. In single-crystal materials (e.g. silicon wafers), this effect can allow very high anisotropy, as shown in the figure. The term "crystallographic etching" is synonymous with "anisotropic etching along crystal planes".
However, for some non-crystalline materials such as glass, there are unconventional ways to etch in an anisotropic manner. The authors of one study employ multistream laminar flow containing etching and non-etching solutions to fabricate a glass groove. The etching solution at the center is flanked by non-etching solutions, so the area in contact with the etching solution is limited by the surrounding non-etching solutions. The etching direction is thereby mainly perpendicular to the glass surface. Scanning electron microscopy (SEM) images demonstrate the breaking of the conventional theoretical limit on aspect ratio (width/height = 0.5), achieving a two-fold improvement (width/height = 1).
Several anisotropic wet etchants are available for silicon, all of them hot aqueous caustics. For instance, potassium hydroxide (KOH) displays an etch rate selectivity 400 times higher in <100> crystal directions than in <111> directions. EDP (an aqueous solution of ethylene diamine and pyrocatechol), displays a <100>/<111> selectivity of 17X, does not etch silicon dioxide as KOH does, and also displays high selectivity between lightly doped and heavily boron-doped (p-type) silicon. Use of these etchants on wafers that already contain CMOS integrated circuits requires protecting the circuitry. KOH may introduce mobile potassium ions into silicon dioxide, and EDP is highly corrosive and carcinogenic, so care is required in their use. Tetramethylammonium hydroxide (TMAH) presents a safer alternative than EDP, with a 37X selectivity between {100} and {111} planes in silicon.
Etching a (100) silicon surface through a rectangular hole in a masking material, like a hole in a layer of silicon nitride, creates a pit with flat sloping {111}-oriented sidewalls and a flat (100)-oriented bottom. The {111}-oriented sidewalls have an angle to the surface of the wafer of:
formula_0
If the etching is continued "to completion", i.e. until the flat bottom disappears, the pit becomes a trench with a V-shaped cross-section. If the original rectangle was a perfect square, the pit when etched to completion displays a pyramidal shape.
The undercut, "δ", under an edge of the masking material is given by:
formula_1,
where "R"xxx is the etch rate in the <xxx> direction, "T" is the etch time, "D" is the etch depth and "S" is the anisotropy of the material and etchant.
Different etchants have different anisotropies; common anisotropic etchants for silicon include the hot aqueous caustics discussed above, such as KOH, EDP, and TMAH.
Plasma etching.
Modern very large scale integration (VLSI) processes avoid wet etching, and use "plasma etching" instead. Plasma etchers can operate in several modes by adjusting the parameters of the plasma. Ordinary plasma etching operates between 0.1 and 5 Torr. (This unit of pressure, commonly used in vacuum engineering, equals approximately 133.3 pascals.) The plasma produces energetic free radicals, neutrally charged, that react at the surface of the wafer. Since neutral particles attack the wafer from all angles, this process is isotropic.
Plasma etching can be isotropic, i.e., exhibiting a lateral undercut rate on a patterned surface approximately the same as its downward etch rate, or can be anisotropic, i.e., exhibiting a smaller lateral undercut rate than its downward etch rate. Such anisotropy is maximized in deep reactive ion etching (DRIE). The use of the term anisotropy for plasma etching should not be conflated with the use of the same term when referring to orientation-dependent etching.
The source gas for the plasma usually contains small molecules rich in chlorine or fluorine. For instance, carbon tetrachloride (CCl4) etches silicon and aluminium, and trifluoromethane etches silicon dioxide and silicon nitride. A plasma containing oxygen is used to oxidize ("ash") photoresist and facilitate its removal.
"Ion milling", or "sputter etching", uses lower pressures, often as low as 10−4 Torr (10 mPa). It bombards the wafer with energetic ions of noble gases, often Ar+, which knock atoms from the substrate by transferring momentum. Because the etching is performed by ions, which approach the wafer approximately from one direction, this process is highly anisotropic. On the other hand, it tends to display poor selectivity. Reactive-ion etching (RIE) operates under conditions intermediate between sputter and plasma etching (between 10−3 and 10−1 Torr). Deep reactive-ion etching (DRIE) modifies the RIE technique to produce deep, narrow features.
Figures of merit.
If the etch is intended to make a cavity in a material, the depth of the cavity may be controlled approximately using the etching time and the known etch rate. More often, though, etching must entirely remove the top layer of a multilayer structure, without damaging the underlying or masking layers. The etching system's ability to do this depends on the ratio of etch rates in the two materials ("selectivity").
Some etches undercut the masking layer and form cavities with sloping sidewalls. The distance of undercutting is called "bias". Etchants with large bias are called "isotropic", because they erode the substrate equally in all directions. Modern processes greatly prefer "anisotropic" etches, because they produce sharp, well-controlled features.
References.
Inline references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\arctan\\sqrt{2}=54.7^\\circ"
},
{
"math_id": 1,
"text": "\\delta = \\frac{\\sqrt{6} D}{S}=\\frac{\\sqrt{6} R_{100}T}{R_{100}/R_{111}}=\\sqrt{6}TR_{111}"
}
] | https://en.wikipedia.org/wiki?curid=8726682 |
8729683 | Stirling numbers and exponential generating functions in symbolic combinatorics | The use of exponential generating functions (EGFs) to study the properties of Stirling numbers is a classical exercise in combinatorial mathematics and possibly the canonical example of how symbolic combinatorics is used. It also illustrates the parallels in the construction of these two types of numbers, lending support to the binomial-style notation that is used for them.
This article uses the coefficient extraction operator formula_0 for formal power series, as well as the (labelled) operators formula_1 (for cycles) and formula_2 (for sets) on combinatorial classes, which are explained on the page for symbolic combinatorics. Given a combinatorial class, the cycle operator creates the class obtained by placing objects from the source class along a cycle of some length, where cyclical symmetries are taken into account, and the set operator creates the class obtained by placing objects from the source class in a set (symmetries from the symmetric group, i.e. an "unstructured bag".) The two combinatorial classes (shown without additional markers) are
formula_3
and
formula_4
where formula_5 is the singleton class.
Warning: The notation used here for the Stirling numbers is not that of the Wikipedia articles on Stirling numbers; square brackets denote the signed Stirling numbers here.
Stirling numbers of the first kind.
The unsigned Stirling numbers of the first kind count the number of permutations of ["n"] with "k" cycles. A permutation is a set of cycles, and hence the set formula_6 of permutations is given by
formula_7
where the singleton formula_8 marks cycles. This decomposition is examined in some detail on the page on the statistics of random permutations.
Translating to generating functions we obtain the mixed generating function of the unsigned Stirling numbers of the first kind:
formula_9
Now the signed Stirling numbers of the first kind are obtained from the unsigned ones through the relation
formula_10
Hence the generating function formula_11 of these numbers is
formula_12
A variety of identities may be derived by manipulating this generating function:
formula_13
In particular, the order of summation may be exchanged, and derivatives taken, and then "z" or "u" may be fixed.
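For example, fixing "n" in the first equality above and multiplying by "n"! gives "u"("u" − 1)⋯("u" − "n" + 1) on the left (since the binomial coefficient there is the generalized one), equal to the inner sum over "k". The following short Python sketch, added here for illustration, checks this for small "n", taking the unsigned numbers from the standard recurrence rather than from anything in this article:
```python
import sympy as sp

u = sp.symbols('u')

def unsigned_s1(n, k):
    # standard recurrence for the unsigned numbers: [n, k] = (n-1)[n-1, k] + [n-1, k-1]
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return (n - 1) * unsigned_s1(n - 1, k) + unsigned_s1(n - 1, k - 1)

for n in range(7):
    falling = sp.Integer(1)
    for i in range(n):
        falling *= (u - i)            # u(u-1)...(u-n+1) = n! * binomial(u, n)
    inner = sum((-1) ** (n - k) * unsigned_s1(n, k) * u ** k for k in range(n + 1))
    assert sp.expand(falling - inner) == 0
```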
Finite sums.
A simple sum is
formula_14
This formula holds because the exponential generating function of the sum is
formula_15
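A brief numerical check of this sum, added here for illustration: reading the signed Stirling numbers off as the coefficients of "u""k" in the falling factorial "u"("u" − 1)⋯("u" − "n" + 1), as in the sketch above, the alternating sum can be formed directly.
```python
import sympy as sp

u = sp.symbols('u')
for n in range(8):
    falling = sp.Integer(1)
    for i in range(n):
        falling *= (u - i)                            # u(u-1)...(u-n+1)
    coeffs = sp.Poly(falling, u).all_coeffs()[::-1]   # coefficient of u^k is the signed [n, k]
    total = sum((-1) ** k * c for k, c in enumerate(coeffs))
    assert total == (-1) ** n * sp.factorial(n)
```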
Infinite sums.
Some infinite sums include
formula_16
where formula_17 (the singularity nearest to formula_18
of formula_19 is at formula_20)
This relation holds because
formula_21
Stirling numbers of the second kind.
These numbers count the number of partitions of ["n"] into "k" nonempty subsets. First consider the total number of partitions, i.e. "B""n" where
formula_22
i.e. the Bell numbers. The Flajolet–Sedgewick fundamental theorem applies (labelled case).
The set formula_23 of partitions into non-empty subsets is given by ("set of non-empty sets of singletons")
formula_24
This decomposition is entirely analogous to the construction of the set formula_6 of permutations from cycles, which is given by
formula_25
and yields the Stirling numbers of the first kind. Hence the name "Stirling numbers of the second kind."
The decomposition is equivalent to the EGF
formula_26
Differentiate to obtain
formula_27
which implies that
formula_28
by convolution of exponential generating functions and because differentiating an EGF drops the first coefficient and shifts "B""n"+1 to "z" "n"/"n"!.
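As a short added check, the recurrence can be compared directly against the coefficients of the EGF:
```python
# Compare B_n from the recurrence B_{n+1} = sum_k C(n, k) B_k with
# n! [z^n] exp(exp(z) - 1) computed by a series expansion.
import sympy as sp

z = sp.symbols('z')
N = 8
egf = sp.series(sp.exp(sp.exp(z) - 1), z, 0, N + 1).removeO()
bell_from_egf = [sp.factorial(n) * egf.coeff(z, n) for n in range(N + 1)]

bell = [sp.Integer(1)]                     # B_0 = 1
for n in range(N):
    bell.append(sum(sp.binomial(n, k) * bell[k] for k in range(n + 1)))

assert bell == bell_from_egf               # 1, 1, 2, 5, 15, 52, 203, 877, 4140
```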
The EGF of the Stirling numbers of the second kind is obtained by marking every subset that goes into the partition with the term formula_29, giving
formula_30
Translating to generating functions, we obtain
formula_31
This EGF yields the formula for the Stirling numbers of the second kind:
formula_32
or
formula_33
which simplifies to
formula_34 | [
{
"math_id": 0,
"text": "[z^n]"
},
{
"math_id": 1,
"text": "\\mathfrak{C}"
},
{
"math_id": 2,
"text": "\\mathfrak{P}"
},
{
"math_id": 3,
"text": " \\mathcal{P} = \\operatorname{SET}(\\operatorname{CYC}(\\mathcal{Z})),"
},
{
"math_id": 4,
"text": " \\mathcal{B} = \\operatorname{SET}(\\operatorname{SET}_{\\ge 1}(\\mathcal{Z})),"
},
{
"math_id": 5,
"text": "\\mathcal{Z}"
},
{
"math_id": 6,
"text": "\\mathcal{P}\\,"
},
{
"math_id": 7,
"text": " \\mathcal{P} = \\operatorname{SET}(\\mathcal{U} \\times \\operatorname{CYC}(\\mathcal{Z})), \\, "
},
{
"math_id": 8,
"text": "\\mathcal{U}"
},
{
"math_id": 9,
"text": "G(z, u) = \\exp \\left( u \\log \\frac{1}{1-z} \\right) =\n\\left(\\frac{1}{1-z} \\right)^u =\n\\sum_{n=0}^\\infty \\sum_{k=0}^n \n\\left[\\begin{matrix} n \\\\ k \\end{matrix}\\right] u^k \\, \\frac{z^n}{n!}.\n"
},
{
"math_id": 10,
"text": "(-1)^{n-k} \\left[\\begin{matrix} n \\\\ k \\end{matrix}\\right]."
},
{
"math_id": 11,
"text": "H(z, u)"
},
{
"math_id": 12,
"text": " H(z, u) = G(-z, -u) =\n\\left(\\frac{1}{1+z} \\right)^{-u} = (1+z)^u =\n\\sum_{n=0}^\\infty \\sum_{k=0}^n \n(-1)^{n-k} \\left[\\begin{matrix} n \\\\ k \\end{matrix}\\right] u^k \\, \\frac{z^n}{n!}."
},
{
"math_id": 13,
"text": "(1+z)^u = \\sum_{n=0}^\\infty {u \\choose n} z^n = \n\\sum_{n=0}^\\infty \\frac {z^n}{n!} \\sum_{k=0}^n \n(-1)^{n-k} \\left[\\begin{matrix} n \\\\ k \\end{matrix}\\right] u^k = \n\\sum_{k=0}^\\infty u^k\n\\sum_{n=k}^\\infty \\frac {z^n}{n!}\n(-1)^{n-k} \\left[\\begin{matrix} n \\\\ k \\end{matrix}\\right] = \ne^{u\\log(1+z)}.\n"
},
{
"math_id": 14,
"text": "\\sum_{k=0}^n (-1)^k \n\\left[\\begin{matrix} n \\\\ k \\end{matrix}\\right] = \n(-1)^n n!."
},
{
"math_id": 15,
"text": "H(z, -1) = \\frac{1}{1+z}\n\n\\quad \\mbox{and hence} \\quad\n\nn! [z^n] H(z, -1) = (-1)^n n!."
},
{
"math_id": 16,
"text": "\\sum_{n=k}^\\infty \n\\left[\\begin{matrix} n \\\\ k \\end{matrix}\\right] \n\\frac{z^n}{n!} = \\frac {\\left( -\\log (1-z)\\right)^k}{k!}\n"
},
{
"math_id": 17,
"text": "|z|<1"
},
{
"math_id": 18,
"text": "z=0"
},
{
"math_id": 19,
"text": "\\log (1+z)"
},
{
"math_id": 20,
"text": "z=-1."
},
{
"math_id": 21,
"text": "[u^k] H(z, u) = [u^k] \\exp \\left( u \\log (1+z) \\right) =\n\\frac {\\left(\\log (1+z)\\right)^k}{k!}."
},
{
"math_id": 22,
"text": "B_n = \\sum_{k=1}^n \\left\\{\\begin{matrix} n \\\\ k \\end{matrix}\\right\\}\n\\mbox{ and } B_0 = 1,"
},
{
"math_id": 23,
"text": "\\mathcal{B}\\,"
},
{
"math_id": 24,
"text": " \\mathcal{B} = \\operatorname{SET}(\\operatorname{SET}_{\\ge 1}(\\mathcal{Z}))."
},
{
"math_id": 25,
"text": " \\mathcal{P} = \\operatorname{SET}(\\operatorname{CYC}(\\mathcal{Z}))."
},
{
"math_id": 26,
"text": " B(z) = \\exp \\left(\\exp z - 1\\right)."
},
{
"math_id": 27,
"text": " \\frac{d}{dz} B(z) = \n\\exp \\left(\\exp z - 1\\right) \\exp z = B(z) \\exp z,"
},
{
"math_id": 28,
"text": " B_{n+1} = \\sum_{k=0}^n {n \\choose k} B_k,"
},
{
"math_id": 29,
"text": "\\mathcal{U}\\,"
},
{
"math_id": 30,
"text": " \\mathcal{B} = \\operatorname{SET}(\\mathcal{U} \\times \\operatorname{SET}_{\\ge 1}(\\mathcal{Z}))."
},
{
"math_id": 31,
"text": " B(z, u) = \\exp \\left(u \\left(\\exp z - 1\\right)\\right)."
},
{
"math_id": 32,
"text": " \\left\\{\\begin{matrix} n \\\\ k \\end{matrix}\\right\\} =\nn! [u^k] [z^n] B(z, u) =\nn! [z^n] \\frac{(\\exp z - 1)^k}{k!}"
},
{
"math_id": 33,
"text": "\nn! [z^n] \\frac{1}{k!} \\sum_{j=0}^k {k \\choose j} \\exp(jz) (-1)^{k-j}\n"
},
{
"math_id": 34,
"text": " \\frac{n!}{k!} \\sum_{j=0}^k {k \\choose j} (-1)^{k-j} \\frac{j^n}{n!} =\n\\frac{1}{k!} \\sum_{j=0}^k {k \\choose j} (-1)^{k-j} j^n."
}
] | https://en.wikipedia.org/wiki?curid=8729683 |
87299 | Heaviside step function | Indicator function of positive numbers
The Heaviside step function, or the unit step function, usually denoted by H or θ (but sometimes u, 1 or 𝟙), is a step function named after Oliver Heaviside, the value of which is zero for negative arguments and one for positive arguments. Different conventions concerning the value "H"(0) are in use. It is an example of the general class of step functions, all of which can be represented as linear combinations of translations of this one.
The function was originally developed in operational calculus for the solution of differential equations, where it represents a signal that switches on at a specified time and stays switched on indefinitely. Oliver Heaviside, who developed the operational calculus as a tool in the analysis of telegraphic communications, represented the function as 1.
Taking the convention that "H"(0) = 1, the Heaviside function may be defined as:
a piecewise function: formula_0
using the Iverson bracket notation: formula_1
an indicator function: formula_2
the derivative of the ramp function: formula_3
The Dirac delta function is the derivative of the Heaviside function:
formula_4
Hence the Heaviside function can be considered to be the integral of the Dirac delta function. This is sometimes written as
formula_5
although this expansion may not hold (or even make sense) for "x" = 0, depending on which formalism one uses to give meaning to integrals involving δ. In this context, the Heaviside function is the cumulative distribution function of a random variable which is almost surely 0. (See Constant random variable.)
Approximations to the Heaviside step function are of use in biochemistry and neuroscience, where logistic approximations of step functions (such as the Hill and the Michaelis–Menten equations) may be used to approximate binary cellular switches in response to chemical signals.
Analytic approximations.
For a smooth approximation to the step function, one can use the logistic function
formula_6
where a larger k corresponds to a sharper transition at "x" = 0. If we take "H"(0) = 1/2, equality holds in the limit:
formula_7
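A brief numerical illustration, added here (the sample points and values of k are arbitrary), of how the logistic approximation sharpens as k grows:
```python
# Evaluate 1 / (1 + exp(-2 k x)) at a few sample points for increasing k.
import numpy as np

x = np.array([-1.0, -0.1, -0.01, 0.0, 0.01, 0.1, 1.0])
for k in (1, 10, 100):
    approx = 1.0 / (1.0 + np.exp(-2.0 * k * x))
    print(f"k = {k:3d}:", np.round(approx, 4))
# Each row approaches (0, ..., 0, 0.5, 1, ..., 1), i.e. the step with H(0) = 1/2.
```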
There are many other smooth, analytic approximations to the step function. Among the possibilities are:
formula_8
These limits hold pointwise and in the sense of distributions. In general, however, pointwise convergence need not imply distributional convergence, nor does distributional convergence imply pointwise convergence. (However, if all members of a pointwise convergent sequence of functions are uniformly bounded by some "nice" function, then convergence holds in the sense of distributions too.)
In general, any cumulative distribution function of a continuous probability distribution that is peaked around zero and has a parameter that controls for variance can serve as an approximation, in the limit as the variance approaches zero. For example, all three of the above approximations are cumulative distribution functions of common probability distributions: the logistic, Cauchy and normal distributions, respectively.
Integral representations.
Often an integral representation of the Heaviside step function is useful:
formula_9
where the second representation is easy to deduce from the first, given that the step function is real and thus is its own complex conjugate.
Zero argument.
Since H is usually used in integration, and the value of a function at a single point does not affect its integral, it rarely matters what particular value is chosen for "H"(0). Indeed, when H is considered as a distribution or an element of "L"∞ (see "Lp" space) it does not even make sense to talk of a value at zero, since such objects are only defined almost everywhere. If some analytic approximation is used (as in the examples above), then whatever happens to be the relevant limit at zero is often used.
There exist various reasons for choosing a particular value.
"H"(0) = 1/2 is often used since the graph then has rotational symmetry; put another way, "H" − 1/2 is then an odd function. In this case the following relation with the sign function holds for all x: formula_10
"H"(0) = 1 is used when H needs to be right-continuous. For instance cumulative distribution functions are usually taken to be right continuous, as are functions integrated against in Lebesgue–Stieltjes integration. In this case H is the indicator function of a closed semi-infinite interval: formula_11 The corresponding probability distribution is the degenerate distribution.
"H"(0) = 0 is used when H needs to be left-continuous. In this case H is an indicator function of an open semi-infinite interval: formula_12
"H"(0) may also be taken to be any value in the interval [0,1].
Discrete form.
An alternative form of the unit step, defined instead as a function formula_13 (that is, taking in a discrete variable n), is:
formula_14
or using the half-maximum convention:
formula_15
where n is an integer. Since n is an integer, "n" < 0 implies "n" ≤ −1, while "n" > 0 implies that the function first attains unity at "n" = 1. Therefore the "step function" exhibits ramp-like behavior over the domain [−1, 1], and cannot authentically be a step function, using the half-maximum convention.
Unlike the continuous case, the definition of "H"[0] is significant.
The discrete-time unit impulse is the first difference of the discrete-time step
formula_16
This function is the cumulative summation of the Kronecker delta:
formula_17
where
formula_18
is the discrete unit impulse function.
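The two discrete relations above are easy to verify directly; the following short check is added for illustration and uses the convention "H"[0] = 1:
```python
# delta[n] = H[n] - H[n-1] and H[n] = sum_{k <= n} delta[k].
def H(n):
    return 1 if n >= 0 else 0          # discrete unit step, H[0] = 1

def delta(n):
    return 1 if n == 0 else 0          # Kronecker delta

for n in range(-5, 6):
    assert delta(n) == H(n) - H(n - 1)
    assert H(n) == sum(delta(k) for k in range(-100, n + 1))
```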
Antiderivative and derivative.
The ramp function is an antiderivative of the Heaviside step function:
formula_19
The distributional derivative of the Heaviside step function is the Dirac delta function:
formula_20
Fourier transform.
The Fourier transform of the Heaviside step function is a distribution. Using one choice of constants for the definition of the Fourier transform we have
formula_21
Here p.v. is the distribution that takes a test function φ to the Cauchy principal value of formula_22. The limit appearing in the integral is also taken in the sense of (tempered) distributions.
Unilateral Laplace transform.
The Laplace transform of the Heaviside step function is a meromorphic function. Using the unilateral Laplace transform we have:
formula_23
When the bilateral transform is used, the integral can be split in two parts and the result will be the same.
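The defining integral can be checked symbolically; the following added SymPy sketch imposes Re("s") > 0 through a positivity assumption:
```python
import sympy as sp

x = sp.symbols('x')
s = sp.symbols('s', positive=True)      # stands in for Re(s) > 0
assert sp.integrate(sp.exp(-s * x), (x, 0, sp.oo)) == 1 / s
```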
Other expressions.
The Heaviside step function can be represented as a hyperfunction as
formula_24
where log "z" is the principal value of the complex logarithm of z.
It can also be expressed for "x" ≠ 0 in terms of the absolute value function as
formula_25
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H(x) := \\begin{cases} 1, & x \\geq 0 \\\\ 0, & x < 0 \\end{cases}"
},
{
"math_id": 1,
"text": "H(x) := [x \\geq 0]"
},
{
"math_id": 2,
"text": "H(x) := \\mathbf{1}_{x \\geq 0}=\\mathbf 1_{\\mathbb R_+}(x)"
},
{
"math_id": 3,
"text": "H(x) := \\frac{d}{dx} \\max \\{ x, 0 \\}\\quad \\mbox{for } x \\ne 0"
},
{
"math_id": 4,
"text": "\\delta(x)= \\frac{d}{dx} H(x)."
},
{
"math_id": 5,
"text": "H(x) := \\int_{-\\infty}^x \\delta(s)\\,ds"
},
{
"math_id": 6,
"text": "H(x) \\approx \\tfrac{1}{2} + \\tfrac{1}{2}\\tanh kx = \\frac{1}{1+e^{-2kx}},"
},
{
"math_id": 7,
"text": "H(x)=\\lim_{k \\to \\infty}\\tfrac{1}{2}(1+\\tanh kx)=\\lim_{k \\to \\infty}\\frac{1}{1+e^{-2kx}}."
},
{
"math_id": 8,
"text": "\\begin{align}\n H(x) &= \\lim_{k \\to \\infty} \\left(\\tfrac{1}{2} + \\tfrac{1}{\\pi}\\arctan kx\\right)\\\\\n H(x) &= \\lim_{k \\to \\infty}\\left(\\tfrac{1}{2} + \\tfrac12\\operatorname{erf} kx\\right)\n\\end{align}"
},
{
"math_id": 9,
"text": "\\begin{align}\n H(x)&=\\lim_{ \\varepsilon \\to 0^+} -\\frac{1}{2\\pi i}\\int_{-\\infty}^\\infty \\frac{1}{\\tau+i\\varepsilon} e^{-i x \\tau} d\\tau \\\\\n &=\\lim_{ \\varepsilon \\to 0^+} \\frac{1}{2\\pi i}\\int_{-\\infty}^\\infty \\frac{1}{\\tau-i\\varepsilon} e^{i x \\tau} d\\tau.\n\\end{align}"
},
{
"math_id": 10,
"text": " H(x) = \\tfrac12(1 + \\sgn x)."
},
{
"math_id": 11,
"text": " H(x) = \\mathbf{1}_{[0,\\infty)}(x)."
},
{
"math_id": 12,
"text": " H(x) = \\mathbf{1}_{(0,\\infty)}(x)."
},
{
"math_id": 13,
"text": "H : \\mathbb{Z} \\rarr \\mathbb{R}"
},
{
"math_id": 14,
"text": "H[n]=\\begin{cases} 0, & n < 0, \\\\ 1, & n \\ge 0, \\end{cases} "
},
{
"math_id": 15,
"text": "H[n]=\\begin{cases} 0, & n < 0, \\\\ \\tfrac12, & n = 0,\\\\ 1, & n > 0, \\end{cases} "
},
{
"math_id": 16,
"text": " \\delta[n] = H[n] - H[n-1]."
},
{
"math_id": 17,
"text": " H[n] = \\sum_{k=-\\infty}^{n} \\delta[k] "
},
{
"math_id": 18,
"text": " \\delta[k] = \\delta_{k,0} "
},
{
"math_id": 19,
"text": "\\int_{-\\infty}^{x} H(\\xi)\\,d\\xi = x H(x) = \\max\\{0,x\\} \\,."
},
{
"math_id": 20,
"text": " \\frac{d H(x)}{dx} = \\delta(x) \\,."
},
{
"math_id": 21,
"text": "\\hat{H}(s) = \\lim_{N\\to\\infty}\\int^N_{-N} e^{-2\\pi i x s} H(x)\\,dx = \\frac{1}{2} \\left( \\delta(s) - \\frac{i}{\\pi} \\operatorname{p.v.}\\frac{1}{s} \\right)."
},
{
"math_id": 22,
"text": "\\textstyle\\int_{-\\infty}^\\infty \\frac{\\varphi(s)}{s} \\, ds"
},
{
"math_id": 23,
"text": "\\begin{align}\n \\hat{H}(s) &= \\lim_{N\\to\\infty}\\int^N_{0} e^{-sx} H(x)\\,dx\\\\\n &= \\lim_{N\\to\\infty}\\int^N_{0} e^{-sx} \\,dx\\\\\n &= \\frac{1}{s} \\end{align}"
},
{
"math_id": 24,
"text": "H(x) = \\left(1-\\frac{1}{2\\pi i}\\log z,\\ -\\frac{1}{2\\pi i}\\log z\\right)."
},
{
"math_id": 25,
"text": " H(x) = \\frac{x + |x|}{2x} \\,."
}
] | https://en.wikipedia.org/wiki?curid=87299 |
8730871 | Dual quaternion | Eight-dimensional algebra over the real numbers
In mathematics, the dual quaternions are an 8-dimensional real algebra isomorphic to the tensor product of the quaternions and the dual numbers. Thus, they may be constructed in the same way as the quaternions, except using dual numbers instead of real numbers as coefficients. A dual quaternion can be represented in the form "A" + ε"B", where "A" and "B" are ordinary quaternions and ε is the dual unit, which satisfies ε² = 0 and commutes with every element of the algebra.
Unlike quaternions, the dual quaternions do not form a division algebra.
In mechanics, the dual quaternions are applied as a number system to represent rigid transformations in three dimensions. Since the space of dual quaternions is 8-dimensional and a rigid transformation has six real degrees of freedom, three for translations and three for rotations, dual quaternions obeying two algebraic constraints are used in this application. Since unit dual quaternions are subject to two algebraic constraints, unit dual quaternions are standard for representing rigid transformations.
Similar to the way that rotations in 3D space can be represented by quaternions of unit length, rigid motions in 3D space can be represented by dual quaternions of unit length. This fact is used in theoretical kinematics (see McCarthy), and in applications to 3D computer graphics, robotics and computer vision. Polynomials with coefficients given by (non-zero real norm) dual quaternions have also been used in the context of mechanical linkages design.
History.
W. R. Hamilton introduced quaternions in 1843, and by 1873 W. K. Clifford obtained a broad generalization of these numbers that he called "biquaternions", which is an example of what is now called a Clifford algebra.
In 1898 Alexander McAulay used Ω with Ω² = 0 to generate the dual quaternion algebra. However, his terminology of "octonions" did not stick as today's octonions are another algebra.
In 1891 Eduard Study realized that this associative algebra was ideal for describing the group of motions of three-dimensional space. He further developed the idea in "Geometrie der Dynamen" in 1901.
B. L. van der Waerden called the structure "Study biquaternions", one of three eight-dimensional algebras referred to as biquaternions.
In 1895, Russian mathematician Aleksandr Kotelnikov developed dual vectors and dual quaternions for use in the study of mechanics.
Formulas.
In order to describe operations with dual quaternions, it is helpful to first consider quaternions.
A quaternion is a linear combination of the basis elements 1, "i", "j", and "k". Hamilton's product rule for "i", "j", and "k" is often written as
formula_0
Compute "i" ( "i j k" ) = −"j k" = −"i", to obtain "j k" = "i", and ( "i j k" ) "k" = −"i j" = −"k" or "i j" = "k". Now because "j" ( "j k" ) = "j i" = −"k", we see that this product yields "i j" = −"j i", which links quaternions to the properties of determinants.
A convenient way to work with the quaternion product is to write a quaternion as the sum of a scalar and a vector (strictly speaking a bivector), that is "A" = "a"0 + A, where "a"0 is a real number and A = "A"1 "i" + "A"2 "j" + "A"3 "k" is a three dimensional vector. The vector dot and cross operations can now be used to define the quaternion product of "A" = "a"0 + A and "C" = "c"0 + C as
formula_1
A dual quaternion is usually described as a quaternion with dual numbers as coefficients. A dual number is an ordered pair "â" = ( "a", "b" ). Two dual numbers add componentwise and multiply by the rule "â ĉ" = ( "a", "b" ) ( "c", "d" ) = ("a c", "a d" + "b c"). Dual numbers are often written in the form "â" = "a" + ε"b", where ε is the dual unit that commutes with "i", "j", "k" and has the property ε² = 0.
The result is that a dual quaternion can be written as an ordered pair of quaternions ( "A", "B" ). Two dual quaternions add componentwise and multiply by the rule,
formula_2
It is convenient to write a dual quaternion as the sum of a dual scalar and a dual vector, "Â" = "â"0 + "A", where "â"0 = ( "a", "b" ) and "A" = ( A, B ) is the dual vector that defines a screw. This notation allows us to write the product of two dual quaternions as
formula_3
Addition.
The addition of dual quaternions is defined componentwise so that given,
formula_4
and
formula_5
then
formula_6
Multiplication.
Multiplication of two dual quaternions follows from the multiplication rules for the quaternion units i, j, k and commutative multiplication by the dual unit ε. In particular, given
formula_7
and
formula_8
then
formula_9
Notice that there is no "BD" term, because the definition of dual numbers requires that ε² = 0.
This determines the multiplication table of the eight basis elements 1, "i", "j", "k", ε, ε"i", ε"j", ε"k" (with the multiplication order taken as row times column).
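A minimal sketch of the multiplication rule above in code, added for illustration (quaternions are stored as ("w", "x", "y", "z") tuples, and the helper names are not from any standard library):
```python
def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

def dqmul(A, B, C, D):
    # (A + eps B)(C + eps D) = AC + eps (AD + BC); no BD term since eps^2 = 0.
    return qmul(A, C), qadd(qmul(A, D), qmul(B, C))

one = (1.0, 0.0, 0.0, 0.0)
i, j, k = (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 0.0, 1.0)

# (1 + eps i)(j + eps k) = j + eps (k + i j) = j + 2 eps k
print(dqmul(one, i, j, k))   # ((0, 0, 1, 0), (0, 0, 0, 2)) up to float formatting
```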
Conjugate.
The conjugate of a dual quaternion is the extension of the conjugate of a quaternion, that is
formula_10
As with quaternions, the conjugate of the product of dual quaternions, "Ĝ" = "ÂĈ", is the product of their conjugates in reverse order,
formula_11
It is useful to introduce the functions Sc(∗) and Vec(∗) that select the scalar and vector parts of a quaternion, or the dual scalar and dual vector parts of a dual quaternion. In particular, if "Â" = "â"0 + "A", then
formula_12
This allows the definition of the conjugate of "Â" as
formula_13
or,
formula_14
The product of a dual quaternion with its conjugate yields
formula_15
This is a dual scalar which is the "magnitude squared" of the dual quaternion.
Dual number conjugate.
A second type of conjugate of a dual quaternion is given by taking the dual number conjugate, given by
formula_16
The quaternion and dual number conjugates can be combined into a third form of conjugate given by
formula_17
In the context of dual quaternions, the term "conjugate" can be used to mean the quaternion conjugate, dual number conjugate, or both.
Norm.
The "norm" of a dual quaternion |"Â"| is computed using the conjugate to compute |"Â"| = √"Â Â"*. This is a dual number called the "magnitude" of the dual quaternion. Dual quaternions with |"Â"| = 1 are "unit dual quaternions".
Dual quaternions of magnitude 1 are used to represent spatial Euclidean displacements. Notice that the requirement that "Â Â"* = 1, introduces two algebraic constraints on the components of "Â", that is
formula_18
The first of these constraints, formula_19 implies that formula_20 has magnitude 1, while the second constraint, formula_21 implies that formula_20 and formula_22 are orthogonal.
Inverse.
If "p" + ε "q" is a dual quaternion, and "p" is not zero, then the inverse dual quaternion is given by
"p"−1 (1 − ε "q" "p"−1).
Thus the elements of the subspace { ε q : q ∈ H } do not have inverses. This subspace is called an ideal in ring theory. It happens to be the unique maximal ideal of the ring of dual quaternions.
The group of units of the dual quaternion ring then consists of the elements not in this ideal. The dual quaternions form a local ring since there is a unique maximal ideal. The group of units is a Lie group and can be studied using the exponential mapping. Dual quaternions have been used to exhibit transformations in the Euclidean group. A typical element can be written as a screw transformation.
Dual quaternions and spatial displacements.
A benefit of the dual quaternion formulation of the composition of two spatial displacements "D""B" = (["R""B"], b) and "D""A" = (["R""A"],a) is that the resulting dual quaternion yields directly the screw axis and dual angle of the composite displacement "D""C" = "D""B""D""A".
In general, the dual quaternion associated with a spatial displacement "D" = (["A"], d) is constructed from its screw axis "S" = (S, V) and the dual angle ("φ", "d") where "φ" is the rotation about and "d" the slide along this axis, which defines the displacement "D". The associated dual quaternion is given by,
formula_23
Let the composition of the displacement DB with DA be the displacement "D""C" = "D""B""D""A". The screw axis and dual angle of DC is obtained from the product of the dual quaternions of DA and DB, given by
formula_24
That is, the composite displacement DC=DBDA has the associated dual quaternion given by
formula_25
Expand this product in order to obtain
formula_26
Divide both sides of this equation by the identity
formula_27
to obtain
formula_28
This is Rodrigues' formula for the screw axis of a composite displacement defined in terms of the screw axes of the two displacements. He derived this formula in 1840.
The three screw axes A, B, and C form a spatial triangle and the dual angles at these "vertices" between the common normals that form the sides of this triangle are directly related to the dual angles of the three spatial displacements.
Matrix form of dual quaternion multiplication.
The matrix representation of the quaternion product is convenient for programming quaternion computations using matrix algebra, which is true for dual quaternion operations as well.
The quaternion product AC is a linear transformation by the operator A of the components of the quaternion C, therefore there is a matrix representation of A operating on the vector formed from the components of C.
Assemble the components of the quaternion C = c0 + C into the array C = (C1, C2, C3, c0). Notice that the components of the vector part of the quaternion are listed first and the scalar is listed last. This is an arbitrary choice, but once this convention is selected we must abide by it.
The quaternion product AC can now be represented as the matrix product
formula_29
The product AC can also be viewed as an operation by C on the components of A, in which case we have
formula_30
The dual quaternion product ÂĈ = (A, B)(C, D) = (AC, AD+BC) can be formulated as a matrix operation as follows. Assemble the components of Ĉ into the eight dimensional array Ĉ = (C1, C2, C3, c0, D1, D2, D3, d0), then ÂĈ is given by the 8x8 matrix product
formula_31
As we saw for quaternions, the product ÂĈ can be viewed as the operation of Ĉ on the coordinate vector Â, which means ÂĈ can also be formulated as,
formula_32
More on spatial displacements.
The dual quaternion of a displacement D=([A], d) can be constructed from the quaternion S=cos(φ/2) + sin(φ/2)S that defines the rotation [A] and the vector quaternion constructed from the translation vector d, given by D = d1i + d2j + d3k. Using this notation, the dual quaternion for the displacement D=([A], d) is given by
formula_33
Let the Plücker coordinates of a line in the direction x through a point p in a moving body and its coordinates in the fixed frame which is in the direction X through the point P be given by,
formula_34
Then the dual quaternion of the displacement of this body transforms Plücker coordinates in the moving frame to Plücker coordinates in the fixed frame by the formula
formula_35
Using the matrix form of the dual quaternion product this becomes,
formula_36
This calculation is easily managed using matrix operations.
Dual quaternions and 4×4 homogeneous transforms.
It might be helpful, especially in rigid body motion, to represent unit dual quaternions as homogeneous matrices. As given above a dual quaternion can be written as: formula_37 where "r" and "d" are both quaternions. The "r" quaternion is known as the real or rotational part and the formula_38 quaternion is known as the dual or displacement part.
The rotation part can be given by
formula_39
where formula_40 is the angle of rotation about the direction given by unit vector formula_41. The displacement part can be written as
formula_42.
The dual-quaternion equivalent of a 3D-vector is
formula_43
and its transformation by formula_44 is given by
formula_45.
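A numerical sketch of this sandwich product, added for illustration; the rotation, translation, and helper functions below are arbitrary examples rather than anything prescribed by the source:
```python
import math

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples.
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

def qscale(s, q):
    return tuple(s * c for c in q)

def dqmul(A, B, C, D):
    # (A + eps B)(C + eps D) = AC + eps (AD + BC)
    return qmul(A, C), qadd(qmul(A, D), qmul(B, C))

one = (1.0, 0.0, 0.0, 0.0)
half = math.radians(90.0) / 2.0                  # rotation of 90 degrees about the z axis
r = (math.cos(half), 0.0, 0.0, math.sin(half))   # rotation quaternion
t = (0.0, 1.0, 2.0, 3.0)                         # translation (1, 2, 3) as a pure quaternion
d = qscale(0.5, qmul(t, r))                      # dual part (1/2) t r

# The two unit dual quaternion constraints: r r* = 1 and r d* + d r* = 0.
assert max(abs(a - b) for a, b in zip(qmul(r, qconj(r)), one)) < 1e-12
assert max(abs(a) for a in qadd(qmul(r, qconj(d)), qmul(d, qconj(r)))) < 1e-12

# Sandwich product (r + eps d)(1 + eps v)(r* - eps d*) for the point v = (1, 0, 0).
v = (0.0, 1.0, 0.0, 0.0)
A, B = dqmul(r, d, one, v)
real, dual = dqmul(A, B, qconj(r), qscale(-1.0, qconj(d)))
print(real)   # approximately (1, 0, 0, 0)
print(dual)   # approximately (0, 1, 3, 3): (1,0,0) rotated to (0,1,0), then translated by (1,2,3)
```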
These dual quaternions (or actually their transformations on 3D-vectors) can be represented by the homogeneous transformation matrix
formula_46
where the 3×3 orthogonal matrix is given by
formula_47
For the 3D-vector
formula_48
the transformation by T is given by
formula_49
Connection to Clifford algebras.
Besides being the tensor product of two Clifford algebras, the quaternions and the dual numbers, the dual quaternions have two other formulations in terms of Clifford algebras.
First, dual quaternions are isomorphic to the Clifford algebra generated by 3 anticommuting elements formula_50, formula_51, formula_52 with formula_53 and formula_54. If we define formula_55 and formula_56, then the relations defining the dual quaternions are implied by these and vice versa. Second, the dual quaternions are isomorphic to the even part of the Clifford algebra generated by 4 anticommuting elements formula_57 with
formula_58
For details, see Clifford algebras: dual quaternions.
Eponyms.
Since both Eduard Study and William Kingdon Clifford used and wrote about dual quaternions, at times authors refer to dual quaternions as "Study biquaternions" or "Clifford biquaternions". The latter eponym has also been used to refer to split-biquaternions. Read the article by Joe Rooney linked below for the view of a supporter of W. K. Clifford's claim. Since the claims of Clifford and Study are in contention, it is convenient to use the current designation "dual quaternion" to avoid conflict.
References.
Notes
<templatestyles src="Reflist/styles.css" />
Sources
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " i^2 = j^2 = k^2 = ijk = -1 ."
},
{
"math_id": 1,
"text": " G = AC = (a_0 + \\mathbf{A})(c_0 + \\mathbf{C}) = (a_0 c_0 - \\mathbf{A}\\cdot \\mathbf{C}) + (c_0 \\mathbf{A} + a_0 \\mathbf{C} + \\mathbf{A}\\times\\mathbf{C})."
},
{
"math_id": 2,
"text": " \\hat{A}\\hat{C} = (A, B)(C, D) = (AC, AD+BC)."
},
{
"math_id": 3,
"text": " \\hat{G} = \\hat{A}\\hat{C} = (\\hat{a}_0 + \\mathsf{A})(\\hat{c}_0 + \\mathsf{C}) = (\\hat{a}_0 \\hat{c}_0 - \\mathsf{A}\\cdot \\mathsf{C}) + (\\hat{c}_0 \\mathsf{A} + \\hat{a}_0 \\mathsf{C} + \\mathsf{A}\\times\\mathsf{C})."
},
{
"math_id": 4,
"text": " \\hat{A} = (A, B) = a_0 + a_1 i + a_2 j + a_3 k + b_0 \\varepsilon + b_1 \\varepsilon i + b_2 \\varepsilon j + b_3 \\varepsilon k, "
},
{
"math_id": 5,
"text": " \\hat{C} = (C, D) = c_0 + c_1 i + c_2 j + c_3 k + d_0 \\varepsilon + d_1 \\varepsilon i + d_2 \\varepsilon j + d_3 \\varepsilon k, "
},
{
"math_id": 6,
"text": " \\hat{A} + \\hat{C} = (A+C, B+D) = (a_0+c_0) + (a_1+c_1) i + (a_2+c_2) j + (a_3+c_3) k + (b_0+d_0) \\varepsilon + (b_1+d_1) \\varepsilon i + (b_2+d_2) \\varepsilon j + (b_3+d_3) \\varepsilon k, "
},
{
"math_id": 7,
"text": " \\hat{A} = (A, B) = A + \\varepsilon B, "
},
{
"math_id": 8,
"text": " \\hat{C} = (C, D) = C + \\varepsilon D, "
},
{
"math_id": 9,
"text": " \\hat{A}\\hat{C} = (A + \\varepsilon B)(C + \\varepsilon D) = AC + \\varepsilon (AD+BC)."
},
{
"math_id": 10,
"text": " \\hat{A}^* = (A^*, B^*) = A^* + \\varepsilon B^*. \\!"
},
{
"math_id": 11,
"text": " \\hat{G}^* = (\\hat{A}\\hat{C})^* = \\hat{C}^*\\hat{A}^*."
},
{
"math_id": 12,
"text": " \\mbox{Sc}(\\hat{A}) = \\hat{a}_0, \\mbox{Vec}(\\hat{A}) = \\mathsf{A}."
},
{
"math_id": 13,
"text": " \\hat{A}^* = \\mbox{Sc}(\\hat{A}) - \\mbox{Vec}(\\hat{A})."
},
{
"math_id": 14,
"text": " (\\hat{a}_0+\\mathsf{A})^* = \\hat{a}_0 - \\mathsf{A}."
},
{
"math_id": 15,
"text": "\\hat{A}\\hat{A}^* = (\\hat{a}_0+\\mathsf{A})(\\hat{a}_0 - \\mathsf{A}) = \\hat{a}_0^2 + \\mathsf{A}\\cdot\\mathsf{A}."
},
{
"math_id": 16,
"text": " \\overline{\\hat{A}} = (A, -B) = A - \\varepsilon B. \\!"
},
{
"math_id": 17,
"text": " \\overline{\\hat{A}^*} = (A^*, -B^*) = A^* - \\varepsilon B^*. \\!"
},
{
"math_id": 18,
"text": " \\hat{A}\\hat{A}^* = (A, B)(A^*, B^*) = (AA^*, AB^* + BA^*) = (1, 0)."
},
{
"math_id": 19,
"text": " AA^* = 1,"
},
{
"math_id": 20,
"text": "A"
},
{
"math_id": 21,
"text": " AB^* + BA^* = 0,"
},
{
"math_id": 22,
"text": "B"
},
{
"math_id": 23,
"text": " \\hat{S} = \\cos\\frac{\\hat{\\phi}}{2} + \\sin\\frac{\\hat{\\phi}}{2} \\mathsf{S}. "
},
{
"math_id": 24,
"text": "\\hat{A}=\\cos(\\hat{\\alpha}/2)+ \\sin(\\hat{\\alpha}/2)\\mathsf{A}\\quad\n\\text{and}\\quad \\hat{B}=\\cos(\\hat{\\beta}/2)+ \\sin(\\hat{\\beta}/2)\\mathsf{B}."
},
{
"math_id": 25,
"text": " \\hat{C} = \\cos\\frac{\\hat{\\gamma}}{2}+\\sin\\frac{\\hat{\\gamma}}{2}\\mathsf{C}\n=\n\\left(\\cos\\frac{\\hat{\\beta}}{2}+\\sin\\frac{\\hat{\\beta}}{2}\\mathsf{B}\\right) \\left(\\cos\\frac{\\hat{\\alpha}}{2}+\n\\sin\\frac{\\hat{\\alpha}}{2}\\mathsf{A}\\right).\n"
},
{
"math_id": 26,
"text": "\n\\cos\\frac{\\hat{\\gamma}}{2}+\\sin\\frac{\\hat{\\gamma}}{2} \\mathsf{C} = \n\\left(\\cos\\frac{\\hat{\\beta}}{2}\\cos\\frac{\\hat{\\alpha}}{2} - \n\\sin\\frac{\\hat{\\beta}}{2}\\sin\\frac{\\hat{\\alpha}}{2} \\mathsf{B}\\cdot \\mathsf{A}\\right) + \\left(\\sin\\frac{\\hat{\\beta}}{2}\\cos\\frac{\\hat{\\alpha}}{2} \\mathsf{B} + \n\\sin\\frac{\\hat{\\alpha}}{2}\\cos\\frac{\\hat{\\beta}}{2} \\mathsf{A} + \n\\sin\\frac{\\hat{\\beta}}{2}\\sin\\frac{\\hat{\\alpha}}{2} \\mathsf{B}\\times \\mathsf{A}\\right).\n"
},
{
"math_id": 27,
"text": " \\cos\\frac{\\hat{\\gamma}}{2} = \\cos\\frac{\\hat{\\beta}}{2}\\cos\\frac{\\hat{\\alpha}}{2} - \n\\sin\\frac{\\hat{\\beta}}{2}\\sin\\frac{\\hat{\\alpha}}{2} \\mathsf{B}\\cdot \\mathsf{A}"
},
{
"math_id": 28,
"text": " \\tan\\frac{\\hat{\\gamma}}{2} \\mathsf{C} = \\frac{\\tan\\frac{\\hat{\\beta}}{2}\\mathsf{B} + \n\\tan\\frac{\\hat{\\alpha}}{2} \\mathsf{A} + \n\\tan\\frac{\\hat{\\beta}}{2}\\tan\\frac{\\hat{\\alpha}}{2} \\mathsf{B}\\times \\mathsf{A}}{1 - \n\\tan\\frac{\\hat{\\beta}}{2}\\tan\\frac{\\hat{\\alpha}}{2} \\mathsf{B}\\cdot \\mathsf{A}}.\n"
},
{
"math_id": 29,
"text": " \nAC = [A^+] C =\n\\begin{bmatrix}\na_0 & A_3 & -A_2 & A_1 \\\\\n-A_3 & a_0 & A_1 & A_2 \\\\\nA_2 & -A_1 & a_0 & A_3 \\\\\n-A_1 & -A_2 & -A_3 & a_0\n\\end{bmatrix}\n\\begin{Bmatrix} C_1 \\\\ C_2 \\\\ C_3 \\\\ c_0 \\end{Bmatrix}.\n"
},
{
"math_id": 30,
"text": "\nAC = [C^-] A = \\begin{bmatrix}\nc_0 & -C_3 & C_2 & C_1 \\\\\nC_3 & c_0 & -C_1 & C_2 \\\\\n-C_2 & C_1 & c_0 & C_3 \\\\\n-C_1 & -C_2 & -C_3 & c_0\n\\end{bmatrix}\n\\begin{Bmatrix} A_1 \\\\ A_2 \\\\ A_3 \\\\ a_0 \\end{Bmatrix}.\n"
},
{
"math_id": 31,
"text": "\n\\hat{A}\\hat{C} = [\\hat{A}^+]\\hat{C} = \\begin{bmatrix} A^+ & 0 \\\\ B^+ & A^+ \\end{bmatrix}\\begin{Bmatrix} C \\\\ D\\end{Bmatrix}.\n"
},
{
"math_id": 32,
"text": "\n\\hat{A}\\hat{C} = [\\hat{C}^-]\\hat{A} = \\begin{bmatrix} C^- & 0 \\\\ D^- & C^- \\end{bmatrix}\\begin{Bmatrix} A \\\\ B\\end{Bmatrix}.\n"
},
{
"math_id": 33,
"text": " \\hat{S} = S + \\varepsilon \\frac{1}{2}DS. "
},
{
"math_id": 34,
"text": "\\hat{x}=\\mathbf{x} + \\varepsilon \\mathbf{p}\\times\\mathbf{x}\\quad\\text{and}\\quad\\hat{X}=\\mathbf{X} + \\varepsilon \\mathbf{P}\\times\\mathbf{X}."
},
{
"math_id": 35,
"text": "\\hat{X} = \\hat{S}\\hat{x}\\overline{\\hat{S}^*}."
},
{
"math_id": 36,
"text": "\\hat{X} =[\\hat{S}^+][\\hat{S}^-]^*\\hat{x}."
},
{
"math_id": 37,
"text": "\\hat q = r + d\\varepsilon r"
},
{
"math_id": 38,
"text": "d"
},
{
"math_id": 39,
"text": "r = r_w + r_xi + r_yj + r_zk = \\cos \\left( \\frac{\\theta}{2} \\right) + \\sin \\left( \\frac{\\theta}{2} \\right) \\left( \\vec{a} \\cdot (i, j, k) \\right)"
},
{
"math_id": 40,
"text": "\\theta"
},
{
"math_id": 41,
"text": "\\vec{a}"
},
{
"math_id": 42,
"text": "d = 0 + \\frac{\\Delta x}{2}i + \\frac{\\Delta y}{2}j + \\frac{\\Delta z}{2}k"
},
{
"math_id": 43,
"text": "\\hat v := 1 + \\varepsilon (v_x i + v_y j + v_z k)"
},
{
"math_id": 44,
"text": "\\hat q"
},
{
"math_id": 45,
"text": "\\hat{v}' = \\hat q \\cdot \\hat v \\cdot \\overline{\\hat q^*}"
},
{
"math_id": 46,
"text": " T = \\begin{pmatrix}\n1 & 0 & 0 & 0 & \\\\\n\\Delta x & & & \\\\\n\\Delta y & & R & \\\\\n\\Delta z & & & \\\\\n\\end{pmatrix}"
},
{
"math_id": 47,
"text": "R =\\begin{pmatrix}\nr_w^2+r_x^2-r_y^2-r_z^2 & 2r_xr_y-2r_wr_z & 2r_xr_z+2r_wr_y \\\\\n2r_xr_y+2r_wr_z & r_w^2-r_x^2+r_y^2-r_z^2 & 2r_yr_z-2r_wr_x \\\\\n2r_xr_z-2r_wr_y & 2r_yr_z+2r_wr_x & r_w^2-r_x^2-r_y^2+r_z^2\\\\\n\\end{pmatrix}."
},
{
"math_id": 48,
"text": " v = \\begin{pmatrix}\n1 \\\\\nv_x \\\\\nv_y \\\\\nv_z \\\\\n\\end{pmatrix}"
},
{
"math_id": 49,
"text": "\\vec{v}' = T \\cdot \\vec{v}"
},
{
"math_id": 50,
"text": "i"
},
{
"math_id": 51,
"text": "j"
},
{
"math_id": 52,
"text": "e"
},
{
"math_id": 53,
"text": "i^2 = j^2 = -1"
},
{
"math_id": 54,
"text": "e^2 = 0"
},
{
"math_id": 55,
"text": "k = ij"
},
{
"math_id": 56,
"text": "\\varepsilon = ek"
},
{
"math_id": 57,
"text": "e_1, e_2, e_3, e_4"
},
{
"math_id": 58,
"text": "e_1 ^2 = e_2^2 = e_3^2 = -1, \\,\\, e_4^2 = 0."
}
] | https://en.wikipedia.org/wiki?curid=8730871 |
873118 | Borromean rings | Three linked but pairwise separated rings
In mathematics, the Borromean rings are three simple closed curves in three-dimensional space that are topologically linked and cannot be separated from each other, but that break apart into two unknotted and unlinked loops when any one of the three is cut or removed. Most commonly, these rings are drawn as three circles in the plane, in the pattern of a Venn diagram, alternatingly crossing over and under each other at the points where they cross. Other triples of curves are said to form the Borromean rings as long as they are topologically equivalent to the curves depicted in this drawing.
The Borromean rings are named after the Italian House of Borromeo, who used the circular form of these rings as an element of their coat of arms, but designs based on the Borromean rings have been used in many cultures, including by the Norsemen and in Japan. They have been used in Christian symbolism as a sign of the Trinity, and in modern commerce as the logo of Ballantine beer, giving them the alternative name Ballantine rings. Physical instances of the Borromean rings have been made from linked DNA or other molecules, and they have analogues in the Efimov state and Borromean nuclei, both of which have three components bound to each other although no two of them are bound.
Geometrically, the Borromean rings may be realized by linked ellipses, or (using the vertices of a regular icosahedron) by linked golden rectangles. It is impossible to realize them using circles in three-dimensional space, but it has been conjectured that they may be realized by copies of any non-circular simple closed curve in space. In knot theory, the Borromean rings can be proved to be linked by counting their Fox n-colorings. As links, they are Brunnian, alternating, algebraic, and hyperbolic. In arithmetic topology, certain triples of prime numbers have analogous linking properties to the Borromean rings.
Definition and notation.
It is common in mathematics publications that define the Borromean rings to do so as a link diagram, a drawing of curves in the plane with crossings marked to indicate which curve or part of a curve passes above or below at each crossing. Such a drawing can be transformed into a system of curves in three-dimensional space by embedding the plane into space and deforming the curves drawn on it above or below the embedded plane at each crossing, as indicated in the diagram. The commonly-used diagram for the Borromean rings consists of three equal circles centered at the points of an equilateral triangle, close enough together that their interiors have a common intersection (such as in a Venn diagram or the three circles used to define the Reuleaux triangle). Its crossings alternate between above and below when considered in consecutive order around each circle; another equivalent way to describe the over-under relation between the three circles is that each circle passes over a second circle at both of their crossings, and under the third circle at both of their crossings. Two links are said to be equivalent if there is a continuous deformation of space (an ambient isotopy) taking one to another, and the Borromean rings may refer to any link that is equivalent in this sense to the standard diagram for this link.
In "The Knot Atlas", the Borromean rings are denoted with the code "L6a4"; the notation means that this is a link with six crossings and an alternating diagram, the fourth of five alternating 6-crossing links identified by Morwen Thistlethwaite in a list of all prime links with up to 13 crossings. In the tables of knots and links in Dale Rolfsen's 1976 book "Knots and Links", extending earlier listings in the 1920s by Alexander and Briggs, the Borromean rings were given the Alexander–Briggs notation "6", meaning that this is the second of three 6-crossing 3-component links to be listed. The Conway notation for the Borromean rings, ".1", is an abbreviated description of the standard link diagram for this link.
History and symbolism.
The name "Borromean rings" comes from the use of these rings, in the form of three linked circles, in the coat of arms of the aristocratic Borromeo family in Northern Italy. The link itself is much older and has appeared in the form of the , three linked equilateral triangles with parallel sides, on Norse image stones dating back to the 7th century. The Ōmiwa Shrine in Japan is also decorated with a motif of the Borromean rings, in their conventional circular form. A stone pillar in the 6th-century Marundeeswarar Temple in India shows three equilateral triangles rotated from each other to form a regular enneagram; like the Borromean rings these three triangles are linked and not pairwise linked, but this crossing pattern describes a different link than the Borromean rings.
The Borromean rings have been used in different contexts to indicate strength in unity. In particular, some have used the design to symbolize the Trinity. A 13th-century French manuscript depicting the Borromean rings labeled as unity in trinity was lost in a fire in the 1940s, but reproduced in an 1843 book by Adolphe Napoléon Didron. Didron and others have speculated that the description of the Trinity as three equal circles in canto 33 of Dante's "Paradiso" was inspired by similar images, although Dante does not detail the geometric arrangement of these circles. The psychoanalyst Jacques Lacan found inspiration in the Borromean rings as a model for his topology of human subjectivity, with each ring representing a fundamental Lacanian component of reality (the "real", the "imaginary", and the "symbolic").
The rings were used as the logo of Ballantine beer, and are still used by the Ballantine brand beer, now distributed by the current brand owner, the Pabst Brewing Company. For this reason they have sometimes been called the "Ballantine rings".
The first work of knot theory to include the Borromean rings was a catalog of knots and links compiled in 1876 by Peter Tait. In recreational mathematics, the Borromean rings were popularized by Martin Gardner, who featured Seifert surfaces for the Borromean rings in his September 1961 "Mathematical Games" column in "Scientific American". In 2006, the International Mathematical Union decided at the 25th International Congress of Mathematicians in Madrid, Spain to use a new logo based on the Borromean rings.
Partial and multiple rings.
In medieval and renaissance Europe, a number of visual signs consist of three elements interlaced together in the same way that the Borromean rings are shown interlaced (in their conventional two-dimensional depiction), but with individual elements that are not closed loops. Examples of such symbols are the Snoldelev stone horns and the Diana of Poitiers crescents.
Some knot-theoretic links contain multiple Borromean rings configurations; one five-loop link of this type is used as a symbol in Discordianism, based on a depiction in the "Principia Discordia".
Mathematical properties.
Linkedness.
In knot theory, the Borromean rings are a simple example of a Brunnian link, a link that cannot be separated but that falls apart into separate unknotted loops as soon as any one of its components is removed. There are infinitely many Brunnian links, and infinitely many three-curve Brunnian links, of which the Borromean rings are the simplest.
There are a number of ways of seeing that the Borromean rings are linked. One is to use Fox n-colorings, colorings of the arcs of a link diagram with the integers modulo n so that at each crossing, the two colors at the undercrossing have the same average (modulo n) as the color of the overcrossing arc, and so that at least two colors are used. The number of colorings meeting these conditions is a knot invariant, independent of the diagram chosen for the link. A trivial link with three components has formula_0 colorings, obtained from its standard diagram by choosing a color independently for each component and discarding the formula_1 colorings that only use one color. For the standard diagram of the Borromean rings, on the other hand, the same pairs of arcs meet at two undercrossings, forcing the arcs that cross over them to have the same color as each other, from which it follows that the only colorings that meet the crossing conditions violate the condition of using more than one color. Because the trivial link has many valid colorings and the Borromean rings have none, they cannot be equivalent.
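This counting argument can be checked by brute force for a small odd modulus such as "n" = 3; the sketch below is added for illustration, and the crossing list is one explicit encoding of the standard six-crossing diagram with two arcs per ring:
```python
# Brute-force Fox 3-coloring count. Arcs are indexed as (a1, a2, b1, b2, c1, c2),
# two per ring; at each crossing the rule is 2*(over arc) = sum of under arcs (mod n).
from itertools import product

n = 3
crossings = [(0, 2, 3), (1, 2, 3),   # ring A passes over ring B at two crossings
             (2, 4, 5), (3, 4, 5),   # ring B passes over ring C at two crossings
             (4, 0, 1), (5, 0, 1)]   # ring C passes over ring A at two crossings

def valid(col):
    return all((2 * col[o] - col[u] - col[v]) % n == 0 for o, u, v in crossings)

borromean = sum(1 for col in product(range(n), repeat=6)
                if valid(col) and len(set(col)) > 1)
unlink = sum(1 for col in product(range(n), repeat=3) if len(set(col)) > 1)
print(borromean, unlink)   # 0 colorings with at least two colors, versus n**3 - n = 24
```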
The Borromean rings are an alternating link, as their conventional link diagram has crossings that alternate between passing over and under each curve, in order along the curve. They are also an algebraic link, a link that can be decomposed by Conway spheres into 2-tangles. They are the simplest alternating algebraic link which does not have a diagram that is simultaneously alternating and algebraic. It follows from the Tait conjectures that the crossing number of the Borromean rings (the fewest crossings in any of their link diagrams) is 6, the number of crossings in their alternating diagram.
Ring shape.
The Borromean rings are typically drawn with their rings projecting to circles in the plane of the drawing, but three-dimensional circular Borromean rings are an impossible object: it is not possible to form the Borromean rings from circles in three-dimensional space. More generally Michael H. Freedman and Richard Skora (1987) proved using four-dimensional hyperbolic geometry that no Brunnian link can be exactly circular. For three rings in their conventional Borromean arrangement, this can be seen from considering the link diagram. If one assumes that two of the circles touch at their two crossing points, then they lie in either a plane or a sphere. In either case, the third circle must pass through this plane or sphere four times, without lying in it, which is impossible. Another argument for the impossibility of circular realizations, by Helge Tverberg, uses inversive geometry to transform any three circles so that one of them becomes a line, making it easier to argue that the other two circles do not link with it to form the Borromean rings.
However, the Borromean rings can be realized using ellipses. These may be taken to be of arbitrarily small eccentricity: no matter how close to being circular their shape may be, as long as they are not perfectly circular, they can form Borromean links if suitably positioned. A realization of the Borromean rings by three mutually perpendicular golden rectangles can be found within a regular icosahedron by connecting three opposite pairs of its edges. Every three unknotted polygons in Euclidean space may be combined, after a suitable scaling transformation, to form the Borromean rings. If all three polygons are planar, then scaling is not needed. In particular, because the Borromean rings can be realized by three triangles, the minimum number of sides possible for each of its loops, the stick number of the Borromean rings is nine.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Are there three unknotted curves, not all circles, that cannot form the Borromean rings?
More generally, Matthew Cook has conjectured that any three unknotted simple closed curves in space, not all circles, can be combined without scaling to form the Borromean rings. After Jason Cantarella suggested a possible counterexample, Hugh Nelson Howards weakened the conjecture to apply to any three planar curves that are not all circles. On the other hand, although there are infinitely many Brunnian links with three components, the Borromean rings are the only one that can be formed from three convex curves.
Ropelength.
In knot theory, the ropelength of a knot or link is the shortest length of flexible rope (of radius one) that can realize it. Mathematically, such a realization can be described by a smooth curve whose radius-one tubular neighborhood avoids self-intersections. The minimum ropelength of the Borromean rings has not been proven, but the smallest value that has been attained is realized by three copies of a 2-lobed planar curve. Although it resembles an earlier candidate for minimum ropelength, constructed from four circular arcs of radius two, it is slightly modified from that shape, and is composed from 42 smooth pieces defined by elliptic integrals, making it shorter by a fraction of a percent than the piecewise-circular realization. It is this realization, conjectured to minimize ropelength, that was used for the International Mathematical Union logo. Its length is formula_2, while the best proven lower bound on the length is formula_3.
For a discrete analogue of ropelength, the shortest representation using only edges of the integer lattice, the minimum length for the Borromean rings is exactly formula_4. This is the length of a representation using three formula_5 integer rectangles, inscribed in Jessen's icosahedron in the same way that the representation by golden rectangles is inscribed in the regular icosahedron.
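As a quick numeric check of the quantities quoted above (a minimal sketch; the values are restated from the text, not derived from the geometry):

```python
import math

# Proven lower bound on the ropelength of the Borromean rings: 12*pi.
print(12 * math.pi)     # ~37.699

# Lattice realization: three 2x4 integer rectangles,
# each of perimeter 2*(2+4) = 12, for a total length of 36.
print(3 * 2 * (2 + 4))  # 36
```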
Hyperbolic geometry.
The Borromean rings are a hyperbolic link: the space surrounding the Borromean rings (their link complement) admits a complete hyperbolic metric of finite volume. Although hyperbolic links are now considered plentiful, the Borromean rings were one of the earliest examples to be proved hyperbolic, in the 1970s, and this link complement was a central example in the video "Not Knot", produced in 1991 by the Geometry Center.
Hyperbolic manifolds can be decomposed in a canonical way into gluings of hyperbolic polyhedra (the Epstein–Penner decomposition) and for the Borromean complement this decomposition consists of two ideal regular octahedra. The volume of the Borromean complement is formula_6 where formula_7 is the Lobachevsky function and formula_8 is Catalan's constant. The complement of the Borromean rings is universal, in the sense that every closed 3-manifold is a branched cover over this space.
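The numerical value quoted for this volume can be checked from Catalan's constant alone. The following minimal sketch computes formula_8 from its defining alternating series (truncated, so the result is approximate) and multiplies by 8:

```python
# Catalan's constant G = sum_{k>=0} (-1)^k / (2k+1)^2, truncated;
# the hyperbolic volume of the Borromean rings complement is 8*G.
G = sum((-1) ** k / (2 * k + 1) ** 2 for k in range(1_000_000))
print(8 * G)  # ~7.32772
```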
Number theory.
In arithmetic topology, there is an analogy between knots and prime numbers in which one considers links between primes. The triple of primes (13, 61, 937) are linked modulo 2 (the Rédei symbol is −1) but are pairwise unlinked modulo 2 (the Legendre symbols are all 1). Therefore, these primes have been called a "proper Borromean triple modulo 2" or "mod 2 Borromean primes".
Physical realizations.
A monkey's fist knot is essentially a 3-dimensional representation of the Borromean rings, albeit with three layers, in most cases. Sculptor John Robinson has made artworks with three equilateral triangles made out of sheet metal, linked to form Borromean rings and resembling a three-dimensional version of the valknut. A common design for a folding wooden tripod consists of three pieces carved from a single piece of wood, with each piece consisting of two lengths of wood, the legs and upper sides of the tripod, connected by two segments of wood that surround an elongated central hole in the piece. Another of the three pieces passes through each of these holes, linking the three pieces together in the Borromean rings pattern. Tripods of this form have been described as coming from Indian or African hand crafts.
In chemistry, molecular Borromean rings are the molecular counterparts of Borromean rings, which are mechanically-interlocked molecular architectures. In 1997, biologist Chengde Mao and coworkers of New York University succeeded in constructing a set of rings from DNA. In 2003, chemist Fraser Stoddart and coworkers at UCLA utilised coordination chemistry to construct a set of rings in one step from 18 components. Borromean ring structures have been used to describe noble metal clusters shielded by a surface layer of thiolate ligands. A library of Borromean networks has been synthesized by design by Giuseppe Resnati and coworkers via halogen bond driven self-assembly. To access a molecular Borromean ring consisting of three unequal cycles, a step-by-step synthesis was proposed by Jay S. Siegel and coworkers.
In physics, a quantum-mechanical analog of Borromean rings is called a halo state or an Efimov state, and consists of three bound particles that are not pairwise bound. The existence of such states was predicted by physicist Vitaly Efimov, in 1970, and confirmed by multiple experiments beginning in 2006. This phenomenon is closely related to a Borromean nucleus, a stable atomic nucleus consisting of three groups of particles that would be unstable in pairs. Another analog of the Borromean rings in quantum information theory involves the entanglement of three qubits in the Greenberger–Horne–Zeilinger state.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n^3-n"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\approx 58.006"
},
{
"math_id": 3,
"text": "12\\pi\\approx 37.699"
},
{
"math_id": 4,
"text": "36"
},
{
"math_id": 5,
"text": "2\\times 4"
},
{
"math_id": 6,
"text": "16\\Lambda(\\pi/4)=8G \\approx 7.32772\\dots"
},
{
"math_id": 7,
"text": "\\Lambda"
},
{
"math_id": 8,
"text": "G"
}
] | https://en.wikipedia.org/wiki?curid=873118 |
87339 | Naive Bayes classifier | Probabilistic classification algorithm
In statistics, naive Bayes classifiers are a family of linear "probabilistic classifiers" which assumes that the features are conditionally independent, given the target class. The strength (naivety) of this assumption is what gives the classifier its name. These classifiers are among the simplest Bayesian network models.
Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression, which takes linear time, rather than by expensive iterative approximation as used for many other types of classifiers.
In the statistics literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but naive Bayes is not (necessarily) a Bayesian method.
Introduction.
Naive Bayes is a simple technique for constructing classifiers: models that assign class labels to problem instances, represented as vectors of feature values, where the class labels are drawn from some finite set. There is not a single algorithm for training such classifiers, but a family of algorithms based on a common principle: all naive Bayes classifiers assume that the value of a particular feature is independent of the value of any other feature, given the class variable. For example, a fruit may be considered to be an apple if it is red, round, and about 10 cm in diameter. A naive Bayes classifier considers each of these features to contribute independently to the probability that this fruit is an apple, regardless of any possible correlations between the color, roundness, and diameter features.
In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods.
Despite their naive design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations. In 2004, an analysis of the Bayesian classification problem showed that there are sound theoretical reasons for the apparently implausible efficacy of naive Bayes classifiers. Still, a comprehensive comparison with other classification algorithms in 2006 showed that Bayes classification is outperformed by other approaches, such as boosted trees or random forests.
An advantage of naive Bayes is that it only requires a small amount of training data to estimate the parameters necessary for classification.
Probabilistic model.
Abstractly, naive Bayes is a conditional probability model: it assigns probabilities formula_0 for each of the K possible outcomes or "classes" formula_1 given a problem instance to be classified, represented by a vector formula_2 encoding some n features (independent variables).
The problem with the above formulation is that if the number of features n is large or if a feature can take on a large number of values, then basing such a model on probability tables is infeasible. The model must therefore be reformulated to make it more tractable. Using Bayes' theorem, the conditional probability can be decomposed as:
formula_3
In plain English, using Bayesian probability terminology, the above equation can be written as
formula_4
In practice, there is interest only in the numerator of that fraction, because the denominator does not depend on formula_5 and the values of the features formula_6 are given, so that the denominator is effectively constant.
The numerator is equivalent to the joint probability model
formula_7
which can be rewritten as follows, using the chain rule for repeated applications of the definition of conditional probability:
formula_8
Now the "naive" conditional independence assumptions come into play: assume that all features in formula_9 are mutually independent, conditional on the category formula_1. Under this assumption,
formula_10
Thus, the joint model can be expressed as
formula_11
where formula_12 denotes proportionality since the denominator formula_13 is omitted.
This means that under the above independence assumptions, the conditional distribution over the class variable formula_5 is:
formula_14
where the evidence formula_15 is a scaling factor dependent only on formula_16, that is, a constant if the values of the feature variables are known.
Constructing a classifier from the probability model.
The discussion so far has derived the independent feature model, that is, the naive Bayes probability model. The naive Bayes classifier combines this model with a decision rule. One common rule is to pick the hypothesis that is most probable so as to minimize the probability of misclassification; this is known as the "maximum "a posteriori"" or "MAP" decision rule. The corresponding classifier, a Bayes classifier, is the function that assigns a class label formula_17 for some k as follows:
formula_18
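As an illustration of this decision rule, the following minimal sketch evaluates formula_18 given class priors and a per-feature likelihood function; the function names and the use of log-probabilities for numerical stability are choices made here, not part of the rule itself:

```python
import math

def naive_bayes_map(x, priors, likelihood):
    """Return the class maximizing p(C_k) * prod_i p(x_i | C_k).

    priors:     dict mapping class k -> p(C_k)
    likelihood: function (i, x_i, k) -> p(x_i | C_k)
    """
    best_class, best_score = None, float("-inf")
    for k, prior in priors.items():
        # Sum of logs instead of a product of probabilities, to avoid underflow.
        score = math.log(prior) + sum(
            math.log(likelihood(i, xi, k)) for i, xi in enumerate(x)
        )
        if score > best_score:
            best_class, best_score = k, score
    return best_class
```

Working with sums of log-probabilities rather than products mirrors the log-space formulation used for the multinomial event model later in the article.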
Parameter estimation and event models.
A class's prior may be calculated by assuming equiprobable classes, i.e., formula_19, or by calculating an estimate for the class probability from the training set:
formula_20
To estimate the parameters for a feature's distribution, one must assume a distribution or generate nonparametric models for the features from the training set.
The assumptions on distributions of features are called the "event model" of the naive Bayes classifier. For discrete features like the ones encountered in document classification (including spam filtering), multinomial and Bernoulli distributions are popular. These assumptions lead to two distinct models, which are often confused.
Gaussian naive Bayes.
When dealing with continuous data, a typical assumption is that the continuous values associated with each class are distributed according to a normal (or Gaussian) distribution. For example, suppose the training data contains a continuous attribute, formula_21. The data is first segmented by the class, and then the mean and variance of formula_21 are computed in each class. Let formula_22 be the mean of the values in formula_21 associated with class formula_1, and let formula_23 be the Bessel-corrected variance of the values in formula_21 associated with class formula_1. Suppose one has collected some observation value formula_24. Then, the probability "density" of formula_24 given a class formula_1, i.e., formula_25, can be computed by plugging formula_24 into the equation for a normal distribution parameterized by formula_22 and formula_23. Formally,
formula_26
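As a concrete sketch of this event model, the snippet below estimates formula_22 and formula_23 for each class from training values and then evaluates the normal density at an observation. The data values and function names are illustrative assumptions, not taken from the article:

```python
import math

def gaussian_density(v, mean, var):
    """Normal density with the given mean and (Bessel-corrected) variance."""
    return math.exp(-(v - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_gaussian_per_class(values_by_class):
    """values_by_class: dict mapping class -> list of observed feature values."""
    params = {}
    for k, values in values_by_class.items():
        n = len(values)
        mean = sum(values) / n
        # Unbiased (n - 1) estimate; needs at least two values per class.
        var = sum((v - mean) ** 2 for v in values) / (n - 1)
        params[k] = (mean, var)
    return params

# Illustrative usage with made-up numbers:
params = fit_gaussian_per_class({"A": [1.0, 1.2, 0.9], "B": [3.1, 2.8, 3.3]})
print(gaussian_density(1.1, *params["A"]))
```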
Another common technique for handling continuous values is to use binning to discretize the feature values and obtain a new set of Bernoulli-distributed features. Some literature suggests that this is required in order to use naive Bayes, but it is not true, as the discretization may throw away discriminative information.
Sometimes the distribution of class-conditional marginal densities is far from normal. In these cases, kernel density estimation can be used for a more realistic estimate of the marginal densities of each class. This method, which was introduced by John and Langley, can boost the accuracy of the classifier considerably.
Multinomial naive Bayes.
With a multinomial event model, samples (feature vectors) represent the frequencies with which certain events have been generated by a multinomial formula_27 where formula_28 is the probability that event i occurs (or K such multinomials in the multiclass case). A feature vector formula_29 is then a histogram, with formula_6 counting the number of times event i was observed in a particular instance. This is the event model typically used for document classification, with events representing the occurrence of a word in a single document (see bag of words assumption). The likelihood of observing a histogram x is given by:
formula_30
where formula_31.
The multinomial naive Bayes classifier becomes a linear classifier when expressed in log-space:
formula_32
where formula_33 and formula_34. Estimating the parameters in log space is advantageous since multiplying a large number of small values can lead to significant rounding error. Applying a log transform reduces the effect of this rounding error.
If a given class and feature value never occur together in the training data, then the frequency-based probability estimate will be zero, because the probability estimate is directly proportional to the number of occurrences of a feature's value. This is problematic because it will wipe out all information in the other probabilities when they are multiplied. Therefore, it is often desirable to incorporate a small-sample correction, called pseudocount, in all probability estimates such that no probability is ever set to be exactly zero. This way of regularizing naive Bayes is called Laplace smoothing when the pseudocount is one, and Lidstone smoothing in the general case.
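A minimal sketch of a multinomial naive Bayes trainer and classifier, working in log space with Lidstone/Laplace smoothing, is shown below. The data structures, function names and example documents are illustrative assumptions, and the multinomial coefficient is dropped because it is identical for every class:

```python
import math
from collections import Counter

def train_multinomial_nb(docs_by_class, vocab, alpha=1.0):
    """Estimate log p(C_k) and log p_{ki} with Lidstone smoothing (alpha=1 is Laplace).

    docs_by_class: dict mapping class -> list of token lists.
    """
    total_docs = sum(len(docs) for docs in docs_by_class.values())
    log_prior, log_pki = {}, {}
    for k, docs in docs_by_class.items():
        log_prior[k] = math.log(len(docs) / total_docs)
        counts = Counter(tok for doc in docs for tok in doc)
        denom = sum(counts.values()) + alpha * len(vocab)
        log_pki[k] = {w: math.log((counts[w] + alpha) / denom) for w in vocab}
    return log_prior, log_pki

def classify(tokens, log_prior, log_pki):
    # log p(C_k | x) is proportional to log p(C_k) + sum_i x_i log p_{ki};
    # tokens outside the vocabulary are simply ignored.
    return max(
        log_prior,
        key=lambda k: log_prior[k] + sum(log_pki[k].get(t, 0.0) for t in tokens),
    )

# Illustrative usage:
docs = {"spam": [["win", "money", "now"], ["money", "offer"]],
        "ham":  [["meeting", "tomorrow"], ["project", "meeting", "notes"]]}
vocab = {w for ds in docs.values() for d in ds for w in d}
log_prior, log_pki = train_multinomial_nb(docs, vocab)
print(classify(["money", "offer"], log_prior, log_pki))  # "spam"
```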
Rennie "et al." discuss problems with the multinomial assumption in the context of document classification and possible ways to alleviate those problems, including the use of tf–idf weights instead of raw term frequencies and document length normalization, to produce a naive Bayes classifier that is competitive with support vector machines.
Bernoulli naive Bayes.
In the multivariate Bernoulli event model, features are independent Boolean variables (binary variables) describing inputs. Like the multinomial model, this model is popular for document classification tasks, where binary term occurrence features are used rather than term frequencies. If formula_6 is a Boolean expressing the occurrence or absence of the i'th term from the vocabulary, then the likelihood of a document given a class formula_1 is given by:
formula_35
where formula_36 is the probability of class formula_1 generating the term formula_6. This event model is especially popular for classifying short texts. It has the benefit of explicitly modelling the absence of terms. Note that a naive Bayes classifier with a Bernoulli event model is not the same as a multinomial NB classifier with frequency counts truncated to one.
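A minimal sketch of the Bernoulli event model's log-likelihood formula_35 is given below; the vocabulary size and probability values are arbitrary illustrations:

```python
import math

def bernoulli_log_likelihood(x, p_k):
    """log p(x | C_k) under the Bernoulli event model.

    x:   list of 0/1 term-occurrence indicators.
    p_k: list of per-term probabilities p_{ki} = p(term i present | C_k).
    """
    return sum(
        xi * math.log(pki) + (1 - xi) * math.log(1 - pki)
        for xi, pki in zip(x, p_k)
    )

# Illustrative: a 3-term vocabulary with arbitrarily chosen class probabilities.
print(bernoulli_log_likelihood([1, 0, 1], [0.8, 0.3, 0.5]))
```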
Semi-supervised parameter estimation.
Given a way to train a naive Bayes classifier from labeled data, it's possible to construct a semi-supervised training algorithm that can learn from a combination of labeled and unlabeled data by running the supervised learning algorithm in a loop:
Convergence is determined based on improvement to the model likelihood formula_40, where formula_41 denotes the parameters of the naive Bayes model.
This training algorithm is an instance of the more general expectation–maximization algorithm (EM): the prediction step inside the loop is the "E"-step of EM, while the re-training of naive Bayes is the "M"-step. The algorithm is formally justified by the assumption that the data are generated by a mixture model, and the components of this mixture model are exactly the classes of the classification problem.
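A minimal sketch of such a loop is given below, assuming hypothetical `train` and `predict` functions supplied by the caller; for brevity it uses hard pseudo-labels and checks label stability rather than the model likelihood formula_40 described above:

```python
def semi_supervised_nb(labeled, unlabeled, train, predict, max_iter=20):
    """Self-training (EM-style) loop for naive Bayes.

    labeled:   list of (x, y) pairs.
    unlabeled: list of feature vectors x.
    train:     function taking a list of (x, y) pairs and returning a model.
    predict:   function (model, x) -> predicted label.
    """
    model = train(labeled)                                    # initial supervised fit
    for _ in range(max_iter):
        pseudo = [(x, predict(model, x)) for x in unlabeled]  # "E"-step: label unlabeled data
        model = train(labeled + pseudo)                       # "M"-step: retrain on everything
        if all(predict(model, x) == y for x, y in pseudo):
            break                                             # pseudo-labels stable: stop early
    return model
```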
Discussion.
Despite the fact that the far-reaching independence assumptions are often inaccurate, the naive Bayes classifier has several properties that make it surprisingly useful in practice. In particular, the decoupling of the class conditional feature distributions means that each distribution can be independently estimated as a one-dimensional distribution. This helps alleviate problems stemming from the curse of dimensionality, such as the need for data sets that scale exponentially with the number of features. While naive Bayes often fails to produce a good estimate for the correct class probabilities, this may not be a requirement for many applications. For example, the naive Bayes classifier will make the correct MAP decision rule classification so long as the correct class is predicted as more probable than any other class. This is true regardless of whether the probability estimate is slightly, or even grossly inaccurate. In this manner, the overall classifier can be robust enough to ignore serious deficiencies in its underlying naive probability model. Other reasons for the observed success of the naive Bayes classifier are discussed in the literature cited below.
Relation to logistic regression.
In the case of discrete inputs (indicator or frequency features for discrete events), naive Bayes classifiers form a "generative-discriminative" pair with multinomial logistic regression classifiers: each naive Bayes classifier can be considered a way of fitting a probability model that optimizes the joint likelihood formula_42, while logistic regression fits the same probability model to optimize the conditional formula_43.
More formally, we have the following:
<templatestyles src="Math_theorem/styles.css" />
Theorem — Naive Bayes classifiers on binary features are subsumed by logistic regression classifiers.
<templatestyles src="Math_proof/styles.css" />Proof
Consider a generic multiclass classification problem with possible classes formula_44. Then the (non-naive) Bayes classifier gives, by Bayes' theorem:
formula_45
The naive Bayes classifier gives
formula_46
where
formula_47
This is exactly a logistic regression classifier.
The link between the two can be seen by observing that the decision function for naive Bayes (in the binary case) can be rewritten as "predict class formula_48 if the odds of formula_49 exceed those of formula_50". Expressing this in log-space gives:
formula_51
The left-hand side of this equation is the log-odds, or "logit", the quantity predicted by the linear model that underlies logistic regression. Since naive Bayes is also a linear model for the two "discrete" event models, it can be reparametrised as a linear function formula_52. Obtaining the probabilities is then a matter of applying the logistic function to formula_53, or in the multiclass case, the softmax function.
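For a two-class Bernoulli event model, this reparametrisation can be written out explicitly. The sketch below converts naive Bayes parameters into an equivalent bias and weight vector and recovers the posterior through the logistic function; the numerical parameters are arbitrary illustrations:

```python
import math

def nb_to_linear(prior1, prior2, p1, p2):
    """Rewrite a two-class Bernoulli naive Bayes model as log-odds = b + w.x."""
    b = math.log(prior1 / prior2) + sum(
        math.log((1 - a) / (1 - c)) for a, c in zip(p1, p2)
    )
    w = [math.log(a / c) - math.log((1 - a) / (1 - c)) for a, c in zip(p1, p2)]
    return b, w

def sigmoid(t):
    return 1 / (1 + math.exp(-t))

# Arbitrary illustrative parameters for a 2-feature problem.
b, w = nb_to_linear(0.5, 0.5, [0.8, 0.2], [0.4, 0.6])
x = [1, 0]
# Same value as the direct naive Bayes posterior p(C_1 | x) = 0.8 for these numbers.
print(sigmoid(b + sum(wi * xi for wi, xi in zip(w, x))))
```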
Discriminative classifiers have lower asymptotic error than generative ones; however, research by Ng and Jordan has shown that in some practical cases naive Bayes can outperform logistic regression because it reaches its asymptotic error faster.
Examples.
Person classification.
Problem: classify whether a given person is a male or a female based on the measured features.
The features include height, weight, and foot size. Although the naive Bayes classifier treats them as independent, they are not in reality.
Training.
Example training set below.
The classifier created from the training set using a Gaussian distribution assumption would be (given variances are "unbiased" sample variances):
The following example assumes equiprobable classes so that P(male)= P(female) = 0.5. This prior probability distribution might be based on prior knowledge of frequencies in the larger population or in the training set.
Testing.
Below is a sample to be classified as male or female; as can be read off from the density calculations that follow, its measured values are a height of 6, a weight of 130, and a foot size of 8.
In order to classify the sample, one has to determine which posterior is greater, male or female. For the classification as male the posterior is given by
formula_54
For the classification as female the posterior is given by
formula_55
The evidence (also termed normalizing constant) may be calculated:
formula_56
However, given the sample, the evidence is a constant and thus scales both posteriors equally. It therefore does not affect classification and can be ignored. The probability distribution for the sex of the sample can now be determined:
formula_57
formula_58
where formula_59 and formula_60 are the parameters of normal distribution which have been previously determined from the training set. Note that a value greater than 1 is OK here – it is a probability density rather than a probability, because "height" is a continuous variable.
formula_61
formula_62
formula_63
formula_64
formula_65
formula_66
formula_67
formula_68
Since the posterior numerator is greater in the female case, the prediction is that the sample is female.
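The arithmetic of this example can be reproduced directly from the quoted class priors and per-feature densities (the small discrepancy in the final digits of the female numerator comes from the rounding of the quoted densities):

```python
# Posterior numerators for the person-classification example, using the
# class priors and the per-feature densities quoted in the text above.
male = 0.5 * 1.5789 * 5.9881e-6 * 1.3112e-3
female = 0.5 * 2.23e-1 * 1.6789e-2 * 2.8669e-1

print(f"male numerator   = {male:.4e}")    # ~6.20e-09
print(f"female numerator = {female:.4e}")  # ~5.37e-04 (text: 5.3778e-04)
print("prediction:", "female" if female > male else "male")
```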
Document classification.
Here is a worked example of naive Bayesian classification applied to the document classification problem.
Consider the problem of classifying documents by their content, for example into spam and non-spam e-mails. Imagine that documents are drawn from a number of classes of documents which can be modeled as sets of words where the (independent) probability that the i-th word of a given document occurs in a document from class "C" can be written as
formula_69
Then the probability that a given document "D" contains all of the words formula_70, given a class "C", is
formula_71
The question that has to be answered is: "what is the probability that a given document "D" belongs to a given class "C"?" In other words, what is formula_72?
Now by definition
formula_73
and
formula_74
Bayes' theorem manipulates these into a statement of probability in terms of likelihood.
formula_75
Assume for the moment that there are only two mutually exclusive classes, "S" and ¬"S" (e.g. spam and not spam), such that every element (email) is in either one or the other;
formula_76
and
formula_77
Using the Bayesian result above, one can write:
formula_78
formula_79
Dividing one by the other gives:
formula_80
Which can be re-factored as:
formula_81
Thus, the probability ratio p("S" | "D") / p(¬"S" | "D") can be expressed in terms of a series of likelihood ratios.
The actual probability p("S" | "D") can be easily computed from log (p("S" | "D") / p(¬"S" | "D")) based on the observation that p("S" | "D") + p(¬"S" | "D") = 1.
Taking the logarithm of all these ratios, one obtains:
formula_82
(This technique of "log-likelihood ratios" is a common technique in statistics.
In the case of two mutually exclusive alternatives (such as this example), the conversion of a log-likelihood ratio to a probability takes the form of a sigmoid curve: see logit for details.)
Finally, the document can be classified as follows. It is spam if formula_83 (i.e., formula_84), otherwise it is not spam.
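A minimal sketch of this decision rule is given below; the word list and per-word likelihoods are purely illustrative assumptions:

```python
import math

def spam_log_odds(words, p_word_spam, p_word_ham, prior_spam=0.5):
    """ln( p(S|D) / p(not S|D) ) under the naive word-independence assumption."""
    log_odds = math.log(prior_spam / (1 - prior_spam))
    for w in words:
        if w in p_word_spam and w in p_word_ham:
            log_odds += math.log(p_word_spam[w] / p_word_ham[w])
    return log_odds

def p_spam(words, p_word_spam, p_word_ham, prior_spam=0.5):
    # Since p(S|D) + p(not S|D) = 1, the probability follows from the
    # log-odds through the logistic (sigmoid) function.
    return 1 / (1 + math.exp(-spam_log_odds(words, p_word_spam, p_word_ham, prior_spam)))

# Purely illustrative per-word likelihoods.
p_word_spam = {"offer": 0.30, "meeting": 0.02}
p_word_ham = {"offer": 0.05, "meeting": 0.20}
print(p_spam(["offer", "offer", "meeting"], p_word_spam, p_word_ham))  # > 0.5, so classified as spam
```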
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p(C_k \\mid x_1, \\ldots, x_n)"
},
{
"math_id": 1,
"text": "C_k"
},
{
"math_id": 2,
"text": "\\mathbf{x} = (x_1, \\ldots, x_n)"
},
{
"math_id": 3,
"text": "p(C_k \\mid \\mathbf{x}) = \\frac{p(C_k) \\ p(\\mathbf{x} \\mid C_k)}{p(\\mathbf{x})} \\,"
},
{
"math_id": 4,
"text": "\\text{posterior} = \\frac{\\text{prior} \\times \\text{likelihood}}{\\text{evidence}} \\,"
},
{
"math_id": 5,
"text": "C"
},
{
"math_id": 6,
"text": "x_i"
},
{
"math_id": 7,
"text": "p(C_k, x_1, \\ldots, x_n)\\,"
},
{
"math_id": 8,
"text": "\\begin{align}\np(C_k, x_1, \\ldots, x_n) & = p(x_1, \\ldots, x_n, C_k) \\\\\n & = p(x_1 \\mid x_2, \\ldots, x_n, C_k) \\ p(x_2, \\ldots, x_n, C_k) \\\\\n & = p(x_1 \\mid x_2, \\ldots, x_n, C_k) \\ p(x_2 \\mid x_3, \\ldots, x_n, C_k) \\ p(x_3, \\ldots, x_n, C_k) \\\\\n & = \\cdots \\\\\n & = p(x_1 \\mid x_2, \\ldots, x_n, C_k) \\ p(x_2 \\mid x_3, \\ldots, x_n, C_k) \\cdots p(x_{n-1} \\mid x_n, C_k) \\ p(x_n \\mid C_k) \\ p(C_k) \\\\\n\\end{align}"
},
{
"math_id": 9,
"text": "\\mathbf{x}"
},
{
"math_id": 10,
"text": "p(x_i \\mid x_{i+1}, \\ldots ,x_{n}, C_k ) = p(x_i \\mid C_k)\\,."
},
{
"math_id": 11,
"text": "\\begin{align}\np(C_k \\mid x_1, \\ldots, x_n) \\varpropto\\ & p(C_k, x_1, \\ldots, x_n) \\\\\n & = p(C_k) \\ p(x_1 \\mid C_k) \\ p(x_2\\mid C_k) \\ p(x_3\\mid C_k) \\ \\cdots \\\\\n & = p(C_k) \\prod_{i=1}^n p(x_i \\mid C_k)\\,,\n\\end{align}"
},
{
"math_id": 12,
"text": "\\varpropto"
},
{
"math_id": 13,
"text": "p(\\mathbf{x})"
},
{
"math_id": 14,
"text": "p(C_k \\mid x_1, \\ldots, x_n) = \\frac{1}{Z} \\ p(C_k) \\prod_{i=1}^n p(x_i \\mid C_k)"
},
{
"math_id": 15,
"text": "Z = p(\\mathbf{x}) = \\sum_k p(C_k) \\ p(\\mathbf{x} \\mid C_k)"
},
{
"math_id": 16,
"text": "x_1, \\ldots, x_n"
},
{
"math_id": 17,
"text": "\\hat{y} = C_k"
},
{
"math_id": 18,
"text": "\\hat{y} = \\underset{k \\in \\{1, \\ldots, K\\}}{\\operatorname{argmax}} \\ p(C_k) \\displaystyle\\prod_{i=1}^n p(x_i \\mid C_k)."
},
{
"math_id": 19,
"text": "p(C_k) = \\frac{1}{K}"
},
{
"math_id": 20,
"text": "\\text{prior for a given class} = \\frac{\\text{no. of samples in that class}}{\\text{total no. of samples}} \\,"
},
{
"math_id": 21,
"text": "x"
},
{
"math_id": 22,
"text": "\\mu_k"
},
{
"math_id": 23,
"text": "\\sigma^2_k"
},
{
"math_id": 24,
"text": "v"
},
{
"math_id": 25,
"text": "p(x=v \\mid C_k)"
},
{
"math_id": 26,
"text": "\np(x=v \\mid C_k) = \\frac{1}{\\sqrt{2\\pi\\sigma^2_k}}\\,e^{ -\\frac{(v-\\mu_k)^2}{2\\sigma^2_k} }\n"
},
{
"math_id": 27,
"text": "(p_1, \\dots, p_n)"
},
{
"math_id": 28,
"text": "p_i"
},
{
"math_id": 29,
"text": "\\mathbf{x} = (x_1, \\dots, x_n)"
},
{
"math_id": 30,
"text": "\np(\\mathbf{x} \\mid C_k) = \\frac{(\\sum_{i=1}^n x_i)!}{\\prod_{i=1}^n x_i !} \\prod_{i=1}^n {p_{ki}}^{x_i}\n"
},
{
"math_id": 31,
"text": "p_{ki} := p(x_i \\mid C_k)"
},
{
"math_id": 32,
"text": "\n\\begin{align}\n\\log p(C_k \\mid \\mathbf{x}) & \\varpropto \\log \\left( p(C_k) \\prod_{i=1}^n {p_{ki}}^{x_i} \\right) \\\\\n & = \\log p(C_k) + \\sum_{i=1}^n x_i \\cdot \\log p_{ki} \\\\\n & = b + \\mathbf{w}_k^\\top \\mathbf{x}\n\\end{align}\n"
},
{
"math_id": 33,
"text": "b = \\log p(C_k)"
},
{
"math_id": 34,
"text": "w_{ki} = \\log p_{ki}"
},
{
"math_id": 35,
"text": "\np(\\mathbf{x} \\mid C_k) = \\prod_{i=1}^n p_{ki}^{x_i} (1 - p_{ki})^{(1-x_i)}\n"
},
{
"math_id": 36,
"text": "p_{ki}"
},
{
"math_id": 37,
"text": "D = L \\uplus U"
},
{
"math_id": 38,
"text": "P(C \\mid x)"
},
{
"math_id": 39,
"text": "D"
},
{
"math_id": 40,
"text": "P(D \\mid \\theta)"
},
{
"math_id": 41,
"text": "\\theta"
},
{
"math_id": 42,
"text": "p(C, \\mathbf{x})"
},
{
"math_id": 43,
"text": "p(C \\mid \\mathbf{x})"
},
{
"math_id": 44,
"text": "Y\\in \\{1, ..., n\\}"
},
{
"math_id": 45,
"text": "p(Y \\mid X=x) = \\text{softmax}(\\{\\ln p(Y = k) + \\ln p(X=x \\mid Y=k)\\}_k)"
},
{
"math_id": 46,
"text": "\\text{softmax}\\left(\\left\\{\\ln p(Y = k) + \\frac 12 \\sum_i (a^+_{i, k} - a^-_{i, k})x_i + (a^+_{i, k} + a^-_{i, k})\\right\\}_k\\right)"
},
{
"math_id": 47,
"text": "a^+_{i, s} = \\ln p(X_i=+1 \\mid Y=s);\\quad a^-_{i, s} = \\ln p(X_i=-1 \\mid Y=s)"
},
{
"math_id": 48,
"text": "C_1"
},
{
"math_id": 49,
"text": "p(C_1 \\mid \\mathbf{x})"
},
{
"math_id": 50,
"text": "p(C_2 \\mid \\mathbf{x})"
},
{
"math_id": 51,
"text": "\n\\log\\frac{p(C_1 \\mid \\mathbf{x})}{p(C_2 \\mid \\mathbf{x})} = \\log p(C_1 \\mid \\mathbf{x}) - \\log p(C_2 \\mid \\mathbf{x}) > 0\n"
},
{
"math_id": 52,
"text": "b + \\mathbf{w}^\\top x > 0"
},
{
"math_id": 53,
"text": "b + \\mathbf{w}^\\top x"
},
{
"math_id": 54,
"text": "\n\\text{posterior (male)} = \\frac{P(\\text{male}) \\, p(\\text{height} \\mid \\text{male}) \\, p(\\text{weight} \\mid \\text{male}) \\, p(\\text{foot size} \\mid \\text{male})}{\\text{evidence}}\n"
},
{
"math_id": 55,
"text": "\n\\text{posterior (female)} = \\frac{P(\\text{female}) \\, p(\\text{height} \\mid \\text{female}) \\, p(\\text{weight} \\mid \\text{female}) \\, p(\\text{foot size} \\mid \\text{female})}{\\text{evidence}}\n"
},
{
"math_id": 56,
"text": "\\begin{align}\n\\text{evidence} = P(\\text{male}) \\, p(\\text{height} \\mid \\text{male}) \\, p(\\text{weight} \\mid \\text{male}) \\, p(\\text{foot size} \\mid \\text{male}) \\\\\n+ P(\\text{female}) \\, p(\\text{height} \\mid \\text{female}) \\, p(\\text{weight} \\mid \\text{female}) \\, p(\\text{foot size} \\mid \\text{female})\n\\end{align}"
},
{
"math_id": 57,
"text": "P(\\text{male}) = 0.5"
},
{
"math_id": 58,
"text": "p({\\text{height}} \\mid \\text{male}) = \\frac{1}{\\sqrt{2\\pi \\sigma^2}}\\exp\\left(\\frac{-(6-\\mu)^2}{2\\sigma^2}\\right) \\approx 1.5789,"
},
{
"math_id": 59,
"text": "\\mu = 5.855"
},
{
"math_id": 60,
"text": "\\sigma^2 = 3.5033 \\cdot 10^{-2}"
},
{
"math_id": 61,
"text": "p({\\text{weight}} \\mid \\text{male}) = \\frac{1}{\\sqrt{2\\pi \\sigma^2}}\\exp\\left(\\frac{-(130-\\mu)^2}{2\\sigma^2}\\right) = 5.9881 \\cdot 10^{-6}"
},
{
"math_id": 62,
"text": "p({\\text{foot size}} \\mid \\text{male}) = \\frac{1}{\\sqrt{2\\pi \\sigma^2}}\\exp\\left(\\frac{-(8-\\mu)^2}{2\\sigma^2}\\right) = 1.3112 \\cdot 10^{-3}"
},
{
"math_id": 63,
"text": "\\text{posterior numerator (male)} = \\text{their product} = 6.1984 \\cdot 10^{-9}"
},
{
"math_id": 64,
"text": "P({\\text{female}}) = 0.5"
},
{
"math_id": 65,
"text": "p({\\text{height}} \\mid {\\text{female}}) = 2.23 \\cdot 10^{-1}"
},
{
"math_id": 66,
"text": "p({\\text{weight}} \\mid {\\text{female}}) = 1.6789 \\cdot 10^{-2}"
},
{
"math_id": 67,
"text": "p({\\text{foot size}} \\mid {\\text{female}}) = 2.8669 \\cdot 10^{-1}"
},
{
"math_id": 68,
"text": "\\text{posterior numerator (female)} = \\text{their product} = 5.3778 \\cdot 10^{-4}"
},
{
"math_id": 69,
"text": "p(w_i \\mid C)\\,"
},
{
"math_id": 70,
"text": "w_i"
},
{
"math_id": 71,
"text": "p(D\\mid C) = \\prod_i p(w_i \\mid C)\\,"
},
{
"math_id": 72,
"text": "p(C \\mid D)\\,"
},
{
"math_id": 73,
"text": "p(D\\mid C)={p(D\\cap C)\\over p(C)}"
},
{
"math_id": 74,
"text": "p(C \\mid D) = {p(D\\cap C)\\over p(D)}"
},
{
"math_id": 75,
"text": "p(C\\mid D) = \\frac{p(C)\\,p(D\\mid C)}{p(D)}"
},
{
"math_id": 76,
"text": "p(D\\mid S)=\\prod_i p(w_i \\mid S)\\,"
},
{
"math_id": 77,
"text": "p(D\\mid\\neg S)=\\prod_i p(w_i\\mid\\neg S)\\,"
},
{
"math_id": 78,
"text": "p(S\\mid D)={p(S)\\over p(D)}\\,\\prod_i p(w_i \\mid S)"
},
{
"math_id": 79,
"text": "p(\\neg S\\mid D)={p(\\neg S)\\over p(D)}\\,\\prod_i p(w_i \\mid\\neg S)"
},
{
"math_id": 80,
"text": "{p(S\\mid D)\\over p(\\neg S\\mid D)}={p(S)\\,\\prod_i p(w_i \\mid S)\\over p(\\neg S)\\,\\prod_i p(w_i \\mid\\neg S)}"
},
{
"math_id": 81,
"text": "{p(S\\mid D)\\over p(\\neg S\\mid D)}={p(S)\\over p(\\neg S)}\\,\\prod_i {p(w_i \\mid S)\\over p(w_i \\mid\\neg S)}"
},
{
"math_id": 82,
"text": "\\ln{p(S\\mid D)\\over p(\\neg S\\mid D)}=\\ln{p(S)\\over p(\\neg S)}+\\sum_i \\ln{p(w_i\\mid S)\\over p(w_i\\mid\\neg S)}"
},
{
"math_id": 83,
"text": "p(S\\mid D) > p(\\neg S\\mid D)"
},
{
"math_id": 84,
"text": "\\ln{p(S\\mid D) \\over p(\\neg S\\mid D)} > 0"
}
] | https://en.wikipedia.org/wiki?curid=87339 |
87352 | Graph of a function | Representation of a mathematical function
In mathematics, the graph of a function formula_0 is the set of ordered pairs formula_1, where formula_2 In the common case where formula_3 and formula_4 are real numbers, these pairs are Cartesian coordinates of points in a plane and often form a curve.
The graphical representation of the graph of a function is also known as a "plot".
In the case of functions of two variables – that is, functions whose domain consists of pairs formula_1 – the graph usually refers to the set of ordered triples formula_5 where formula_6. This is a subset of three-dimensional space; for a continuous real-valued function of two real variables, its graph forms a surface, which can be visualized as a "surface plot".
In science, engineering, technology, finance, and other areas, graphs are tools used for many purposes. In the simplest case one variable is plotted as a function of another, typically using rectangular axes; see "Plot (graphics)" for details.
A graph of a function is a special case of a relation.
In the modern foundations of mathematics, and, typically, in set theory, a function is actually equal to its graph. However, it is often useful to see functions as mappings, which consist not only of the relation between input and output, but also of which set is the domain and which set is the codomain. For example, to say that a function is onto (surjective) or not, the codomain should be taken into account. The graph of a function on its own does not determine the codomain. It is common to use both terms "function" and "graph of a function" since even if considered the same object, they indicate viewing it from a different perspective.
Definition.
Given a function formula_7 from a set X (the domain) to a set Y (the codomain), the graph of the function is the set
formula_8
which is a subset of the Cartesian product formula_9. In the definition of a function in terms of set theory, it is common to identify a function with its graph, although, formally, a function is formed by the triple consisting of its domain, its codomain and its graph.
Examples.
Functions of one variable.
The graph of the function formula_10 defined by
formula_11
is the subset of the set formula_12
formula_13
From the graph, the domain formula_14 is recovered as the set of first components of the pairs in the graph formula_15.
Similarly, the range can be recovered as formula_16.
The codomain formula_17, however, cannot be determined from the graph alone.
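A direct computational analogue of this example, representing the graph as a set of ordered pairs and recovering the domain and range from it (a minimal sketch; the codomain, as noted, cannot be recovered):

```python
# Graph of f : {1, 2, 3} -> {a, b, c, d} as a set of ordered pairs.
graph = {(1, "a"), (2, "d"), (3, "c")}

domain = {x for (x, y) in graph}   # {1, 2, 3}
range_ = {y for (x, y) in graph}   # {'a', 'c', 'd'}
print(domain, range_)
# The codomain {a, b, c, d} cannot be read off from the graph alone.
```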
The graph of the cubic polynomial on the real line
formula_18
is
formula_19
If this set is plotted on a Cartesian plane, the result is a curve (see figure).
Functions of two variables.
The graph of the trigonometric function
formula_20
is
formula_21
If this set is plotted on a three dimensional Cartesian coordinate system, the result is a surface (see figure).
It is often helpful to show, together with the graph, the gradient of the function and several level curves. The level curves can be mapped on the function surface or can be projected on the bottom plane. The second figure shows such a drawing of the graph of the function:
formula_22
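A sketch of such a drawing, assuming NumPy and Matplotlib are available; the grid range and the number of contour levels are arbitrary choices:

```python
# Surface of f(x, y) = -(cos(x^2) + cos(y^2))^2 with level curves
# projected onto a plane below the surface.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 200)
y = np.linspace(-2, 2, 200)
X, Y = np.meshgrid(x, y)
Z = -(np.cos(X**2) + np.cos(Y**2))**2

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, cmap="viridis", alpha=0.8)
ax.contour(X, Y, Z, levels=10, zdir="z", offset=Z.min() - 1)  # projected level curves
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("f(x, y)")
plt.show()
```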
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "(x, y)"
},
{
"math_id": 2,
"text": "f(x) = y."
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "f(x)"
},
{
"math_id": 5,
"text": "(x, y, z)"
},
{
"math_id": 6,
"text": "f(x,y) = z"
},
{
"math_id": 7,
"text": "f : X \\to Y"
},
{
"math_id": 8,
"text": "G(f) = \\{(x,f(x)) : x \\in X\\},"
},
{
"math_id": 9,
"text": "X\\times Y"
},
{
"math_id": 10,
"text": "f : \\{1,2,3\\} \\to \\{a,b,c,d\\}"
},
{
"math_id": 11,
"text": "f(x)=\n \\begin{cases}\n a, & \\text{if }x=1, \\\\ d, & \\text{if }x=2, \\\\ c, & \\text{if }x=3, \n \\end{cases}\n "
},
{
"math_id": 12,
"text": "\\{1,2,3\\} \\times \\{a,b,c,d\\}"
},
{
"math_id": 13,
"text": "G(f) = \\{ (1,a), (2,d), (3,c) \\}."
},
{
"math_id": 14,
"text": "\\{1,2,3\\}"
},
{
"math_id": 15,
"text": "\\{1,2,3\\} = \\{x :\\ \\exists y,\\text{ such that }(x,y) \\in G(f)\\}"
},
{
"math_id": 16,
"text": "\\{a,c,d\\} = \\{y : \\exists x,\\text{ such that }(x,y)\\in G(f)\\}"
},
{
"math_id": 17,
"text": "\\{a,b,c,d\\}"
},
{
"math_id": 18,
"text": "f(x) = x^3 - 9x"
},
{
"math_id": 19,
"text": "\\{ (x, x^3 - 9x) : x \\text{ is a real number} \\}."
},
{
"math_id": 20,
"text": "f(x,y) = \\sin(x^2)\\cos(y^2)"
},
{
"math_id": 21,
"text": "\\{ (x, y, \\sin(x^2) \\cos(y^2)) : x \\text{ and } y \\text{ are real numbers} \\}."
},
{
"math_id": 22,
"text": "f(x, y) = -(\\cos(x^2) + \\cos(y^2))^2."
}
] | https://en.wikipedia.org/wiki?curid=87352 |
8736971 | Supercontinuum | In optics, a supercontinuum is formed when a collection of nonlinear processes act together upon a pump beam in order to cause severe spectral broadening of the original pump beam, for example using a microstructured optical fiber. The result is a smooth spectral continuum (see figure 1 for a typical example). There is no consensus on how much broadening constitutes a supercontinuum; however researchers have published work claiming as little as 60 nm of broadening as a supercontinuum. There is also no agreement on the spectral flatness required to define the bandwidth of the source, with authors using anything from 5 dB to 40 dB or more. In addition the term supercontinuum itself did not gain widespread acceptance until this century, with many authors using alternative phrases to describe their continua during the 1970s, 1980s and 1990s.
In the decade leading up to 2014, the development of supercontinua sources emerged as a research field. This is largely due to technological developments, which have allowed more controlled and accessible generation of supercontinua. This renewed research has created a variety of new light sources which are finding applications in a diverse range of fields, including optical coherence tomography, frequency metrology, fluorescence lifetime imaging, optical communications, gas sensing, and many others. The application of these sources has created a feedback loop whereby the scientists utilising the supercontinua are demanding better customisable continua to suit their particular applications. This has driven researchers to develop novel methods to produce these continua and to develop theories to understand their formation and aid future development. As a result, rapid progress has been made in developing these sources since 2000.
While supercontinuum generation has for long been the preserve of fibers, in the years leading up to 2012, integrated waveguides came of age to produce extremely broad spectra, opening the door to more economical, compact, robust, scalable, and mass-producible supercontinuum sources.
Historical overview.
The 1960s and 1970s.
In 1964 Jones and Stoicheff reported using a continua generated by a maser to study induced Raman absorption in liquids at optical frequencies. It had been noted by Stoicheff in an early publication that "when the maser emission was in a single sharp spectral line, all the Raman emission lines were sharp; whenever the maser emission contained additional components, all of the Raman emission lines, with the exception of the first Stokes line, were considerably broadened, sometimes up to several hundred cm−1." These weak continua, as they were described, allowed the first Raman absorption spectroscopy measurements to be made.
In 1970 Alfano and Shapiro reported the first measurements of frequency broadening in crystals and glasses using a frequency doubled Nd:Glass mode-locked laser. The output pulses were approximately 4 ps and had a pulse energy of 5 mJ. The filaments formed produced the first white light spectra in the range from 400-700 nm and the authors explained their formation through self-phase modulation and four-wave mixing. The filaments themselves were of no real use as a source; nevertheless the authors suggested that the crystals might prove useful as ultrafast light gates. Alfano is credited as the discoverer of the supercontinuum, reported in 1970 in three seminal articles in the same issue of Physical Review Letters (24, 592, 584, 1217 (1970)) on the ultimate white-light source now called the supercontinuum.
The study of atomic vapours, organic vapours, and liquids by Raman absorption spectroscopy through the 1960s and 1970s drove the development of continua sources. By the early 1970s, continua formed by nanosecond duration flash lamps and laser-triggered breakdown spark in gases, along with laser-excited fluorescence continua from scintillator dyes, were being used to study the excited states. These sources all had problems; what was required was a source that produced broad continua at high power levels with a reasonable efficiency. In 1976 Lin and Stolen reported a new nanosecond source that produced continua with a bandwidth of 110-180 nm centred on 530 nm at output powers of around a kW. The system used a 10-20 kW dye laser producing 10 ns pulses with 15-20 nm of bandwidth to pump a 19.5 m long, 7 μm core diameter silica fibre . They could only manage a coupling efficiency in the region of 5-10%.
By 1978 Lin and Nguyen reported several continua, most notably one stretching from 0.7-1.6 μm using a 315 m long GeO₂-doped silica fibre with a 33 μm core. The optical setup was similar to Lin's previous work with Stolen, except in this instance the pump source was a 150 kW, 20 ns, Q-switched Nd:YAG laser. Indeed, they had so much power available to them that two thirds was attenuated away to prevent damage to the fibre. The 50 kW coupled into the fibre emerged as a 12 kW continuum. Stokes lines were clearly visible up to 1.3 μm, at which point the continuum began to smooth out, except for a large loss due to water absorption at 1.38 μm. As they increased the launch power beyond 50 kW they noted that the continuum extends down into the green part of the visible spectrum. However, the higher power levels quickly damaged their fibre. In the same paper they also pumped a single mode fibre with a 6 μm core diameter and "a few 100 m in length." It generated a similar continuum spanning from 0.9 μm to 1.7 μm with reduced launch and output powers. Without realising it, they had also generated optical solitons for the first time.
The 1980s.
In 1980 Fujii "et al." repeated Lin's 1978 setup with a mode-locked Nd:YAG. The peak power of the pulses was reported as being greater than 100 kW and they achieved better than 70% coupling efficiency into a 10 μm core single-mode Ge-doped fibre. Unusually, they did not report their pulse duration. Their spectrum spanned the entire spectral window in silica from 300 nm to 2100 nm. The authors concerned themselves with the visible side of the spectrum and identified the main mechanism for generation to be four-wave mixing of the pump and Raman generated Stokes. However, there were some higher order modes, which were attributed to sum-frequency generation between the pump and Stokes lines. The phase-matching condition was met by coupling of the up-converted light and the quasi-continuum of cladding modes.
A further advance was reported by Washio "et al." in 1980 when they pumped 150 m of single-mode fibre with a 1.34 μm Q-switched Nd:YAG laser. This was just inside the anomalous dispersion regime for their fibre. The result was a continua which stretched from 1.15 to 1.6 μm and showed no discrete Stokes lines.
Up to this point no one had really provided a suitable explanation why the continuum smoothed out between the Stokes lines at longer wavelengths in fibres. In the majority of cases this is explained by soliton mechanisms; however, solitons were not reported in fibres until 1985. It was realised that self-phase modulation could not account for the broad continua seen, but for the most part little else was offered as an explanation.
In 1982 Smirnov "et al." reported similar results to that achieved by Lin in 1978. Using multimode phosphosilicate fibres pumped at 0.53 and 1.06 μm, they saw the normal Stokes components and a spectrum which extended from the ultraviolet to the near infrared. They calculated that the spectral broadening due to self-phase modulation should have been 910 cm−1, but their continuum was greater than 3000 cm−1. They concluded that "an optical continuum cannot be explained by self-phase modulation alone." They continued by pointing out the difficulties of phase-matching over long lengths of fibre to maintain four-wave mixing, and reported an unusual damage mechanism (with hindsight this would probably be considered a very short fibre fuse). They note a much earlier suggestion by Loy and Shen that if the nanosecond pulses consisted of sub-nanosecond spikes in a nanosecond envelope, it would explain the broad continuum.
This idea of very short pulses resulting in the broad continuum was studied a year later when Fork "et al." reported using 80 fs pulses from a colliding mode-locked laser. The laser's wavelength was 627 nm and they used it to pump a jet of ethylene glycol. They collimated the resulting continuum and measured the pulse duration at different wavelengths, noting that the red part of the continuum was at the front of the pulse and the blue at the rear. They reported very small chirps across the continuum. These observations and others led them to state that self-phase modulation was the dominant effect by some margin. However they also noted that their calculations showed that the continuum remained much larger than self-phase modulation would allow, suggesting that four-wave mixing processes must also be present. They stated that it was much easier to produce a reliable, repeatable continuum using a femtosecond source. Over the ensuing years this source was developed further and used to examine other liquids.
In the same year Nakazawa and Tokuda reported using the two transitions in Nd:YAG at 1.32 and 1.34 μm to pump a multimode fibre simultaneously at these wavelengths. They attributed the continuum spectrum to a combination of forced four-wave mixing and a superposition of sequential stimulated Raman scattering. The main advantage of this was that they were able to generate a continuum at the relatively low pump powers of a few kW, compared to previous work.
During the early to late 1980s Alfano, Ho, Corkum, Manassah and others carried out a wide variety of experiments, though very little of it involved fibres. The majority of the work centred on using faster sources (10 ps and below) to pump various crystals, liquids, gases, and semiconductors in order to generate continua mostly in the visible region. Self-phase modulation was normally used to explain the processes although from the mid-1980s other explanations were offered, including second harmonic generation cross-phase modulation and induced phase modulation. Indeed, efforts were made to explain why self-phase modulation might well result in much broader continua, mostly through modifications to theory by including factors such as a slowly varying amplitude envelope among others.
In 1987 Gomes "et al." reported cascaded stimulated Raman scattering in a single mode phosphosilicate-based fibre. They pumped the fibre with a Q-switched and mode-locked Nd:YAG, which produced 130 ps pulses with 700 kW peak power. They launched up to 56 kW into the fibre and as a result of the phosphorus achieved a much broader and flatter continuum than had been achieved to that point with silica fibre. A year later Gouveia-Neto "et al." from the same group published a paper describing the formation and propagation of soliton waves from modulation instability. They used a 1.32 μm Nd:YAG laser which produced 100 ps pulses with 200 W peak power to pump 500 m of single mode fibre with a 7 μm core diameter. The zero-dispersion wavelength of the fibre was at 1.30 μm, placing the pump just inside the anomalous dispersion regime. They noted pulses emerging with durations of less than 500 fs (solitons) and as they increased the pump power a continuum was formed stretching from 1.3 to 1.5 μm.
The 1990s.
Gross "et al." in 1992 published a paper modelling the formation of supercontinua (in the anomalous group velocity dispersion region) when generated by femtosecond pulses in fibre. It was easily the most complete model, to that date, with fundamental solitons and soliton self-frequency shift emerging as solutions to the equations.
The applicability of supercontinua for use in wavelength-division multiplexed (WDM) systems for optical communications was investigated heavily during the 1990s. In 1993 Morioka "et al." reported a 100 wavelength channel multiplexing scheme which simultaneously produced one hundred 10 ps pulses in the 1.224-1.394 μm spectra region with a 1.9 nm spectral spacing. They produced a supercontinuum using a Nd:YLF pump centred on 1.314 μm which was mode-locked to produce 7.6 ps pulses. They then filtered the resulting continuum with a birefringent fibre to generate the channels.
Morioka and Mori continued development of telecommunications technologies utilising supercontinuum generation throughout the 1990s up to the present day. Their research included: using a supercontinua to measure the group velocity dispersion in optical fibres; the demonstration of a 1 Tbit/s-based WDM system; and more recently a 1000 channel dense wavelength-division multiplexed (DWDM) system capable of 2.8 Tbit/s using a supercontinuum fractionally more than 60 nm wide.
The first demonstration of a fibre-based supercontinuum pumped by a fibre-based laser was reported by Chernikov "et al." in 1997. They made use of distributed backscattering to achieve passive Q-switching in single-mode ytterbium- and erbium-doped fibres. The passive Q-switching produced pulses with a 10 kW peak power and a 2 ns duration. The resulting continuum stretched from 1 μm to the edge of the silica window at 2.3 μm. The first three Stokes lines were visible and the continuum stretched down to about 0.7 μm but at significantly reduced power levels.
Progress since 2000.
Advances made during the 1980s meant that it had become clear that to get the broadest continua in fibre, it was most efficient to pump in the anomalous dispersion regime. However it was difficult to capitalise upon this with high-power 1 μm lasers as it had proven extremely difficult to achieve a zero-dispersion wavelength of much less than 1.3 μm in conventional silica fibre. A solution appeared with the invention of photonic-crystal fibers (PCF) in 1996 by Knight "et al." The properties of PCFs are discussed in detail elsewhere, but they have two properties which make PCF an excellent medium for supercontinuum generation, namely: high nonlinearity and a customisable zero-dispersion wavelength. Among the first was Ranka "et al." in 2000, who used a 75 cm PCF with a zero-dispersion at 767 nm and a 1.7 μm core diameter. They pumped the fibre with 100 fs, 800 pJ pulses at 790 nm to produce a flat continuum from between 400 and 1450 nm.
This work was followed by others pumping short lengths of PCF with zero-dispersions around 800 nm with high-power femtosecond Ti:sapphire lasers. Lehtonen "et al." studied the effect of polarization on the formation of the continua in a birefringent PCF, as well as varying the pump wavelength (728-810 nm) and pulse duration (70-300 fs). They found that the best continua were formed just inside the anomalous region with 300 fs pulses. Shorter pulses resulted in clear separation of the solitons which were visible in the spectral output. Herrmann "et al." provided a convincing explanation of the development of femtosecond supercontinua, specifically the reduction of solitons from high orders down to the fundamental and the production of dispersive waves during this process. Fully fibre-integrated femtosecond sources have since been developed and demonstrated.
Other areas of development since 2000 have included: supercontinua sources that operate in the picosecond, nanosecond, and CW regimes; the development of fibres to include new materials, production techniques and tapers; novel methods for generating broader continua; novel propagation equations for describing supercontinuum in photonic nanowires, and the development of numerical models to explain and aid understanding of supercontinuum generation. Unfortunately, an in-depth discussion of these achievements is beyond this article but the reader is referred to an excellent review article by Dudley "et al."
Supercontinuum generation in integrated photonics platforms.
While optical fibers have been the workhorse of supercontinuum generation since its inception, integrated waveguide-based sources of supercontinuum have become an active area of research in the twenty first century. These chip-scale platforms promise to miniaturize supercontinuum sources into devices that are compact, robust, scalable, mass producible and more economical. Such platforms also allow dispersion engineering by varying the cross-sectional geometry of the waveguide. Silicon bases materials such as silica, silicon nitride, crystalline silicon, and amorphous silicon have demonstrated supercontinuum generation spanning the visible, near-infrared, and mid-infrared regions of the electromagnetic spectrum. As of 2015, the widest supercontinuum generated on chip extends from 470 nm in the visible to 2130 nm for the infrared wavelength region.
Description of dynamics of continuum formation in fiber.
In this section we will briefly discuss the dynamics of the two main regimes in which supercontinua are generated in fibre. As previously stated a supercontinuum occurs through the interaction of many nonlinear processes to cause extensive spectral broadening. Many of these processes such as: self-phase modulation, four-wave mixing, and soliton-based dynamics have been well understood, individually, for some time. The breakthroughs in recent years have involved understanding and modelling how all these processes interact together to generate supercontinua and how parameters can be engineered to enhance and control continuum formation. The two main regimes are the soliton fission regime and modulation instability regime. The physical processes can be considered to be quite similar and the descriptions really enable us to distinguish between the processes that drive the continuum formation for varying pump conditions. A third regime, pumping in the normal dispersion region, is also covered. This is a perfectly viable way to generate a supercontinuum. However, it is not possible to generate the same bandwidths by this method.
Soliton fission regime.
In the soliton fission regime a short, high-power, femtosecond pulse is launched into the PCF or other highly nonlinear fiber. The femtosecond pulse may be considered as a high order soliton; consequently it rapidly broadens and then fissions into fundamental solitons. During the fission process excess energy is shed as dispersive waves on the short wavelength side. Generally these dispersive waves will undergo no further shifting, and thus the extension on the short-wavelength side of the pump depends on how broadly the soliton expands as it breathes. The fundamental solitons then undergo intra-pulse Raman scattering and shift to longer wavelengths (also known as the soliton self-frequency shift), generating the long wavelength side of the continuum. It is possible for the soliton Raman continuum to interact with the dispersive radiation via four-wave mixing and cross-phase modulation. Under certain circumstances, it is possible for these dispersive waves to be coupled with the solitons via the soliton trapping effect. This effect means that as the soliton self-frequency shifts to longer wavelengths, the coupled dispersive wave is shifted to shorter wavelengths as dictated by the group velocity matching conditions. Generally, this soliton trapping mechanism allows for the continuum to extend to shorter wavelengths than is possible via any other mechanism.
The first supercontinuum generated in PCF operated in this regime, and many of the subsequent experiments also made use of ultra-short pulsed femtosecond systems as a pump source. One of the main advantages of this regime is that the continuum often exhibits a high degree of temporal coherence; in addition, it is possible to generate broad supercontinua in very short lengths of PCF. Disadvantages include an inability to scale to very high average powers in the continuum, although the limiting factor here is the available pump sources, and the fact that the spectrum is typically not smooth due to the localised nature of the spectral components which generate it.
Whether this regime is dominant can be worked out from the pulse and fibre parameters. We can define a soliton fission length, formula_1, to estimate the length at which the highest soliton compression is achieved, such that:
formula_2
where formula_3 is the characteristic dispersion length and formula_4 is the soliton order. As fission tends to occur at this length, fission will dominate provided that formula_1 is shorter than the length of the fibre and than other characteristic length scales such as the modulation instability length formula_5.
Modulation instability regime.
Modulation instability (MI) leads to the breakup of a continuous wave (CW) or quasi-continuous wave field into a train of fundamental solitons.
The solitons generated in this regime are fundamental, as several papers on CW and quasi-CW supercontinuum formation have attributed short wavelength generation to soliton fission and dispersive wave generation as described above. In a similar manner to the soliton fission regime, the long wavelength side of the continuum is generated by the solitons undergoing intra-pulse Raman scattering and self-frequency shifting to longer wavelengths. As the MI process is noise driven, a distribution of solitons with different energies is created, resulting in different rates of self-frequency shifting. The net result is that MI-driven soliton-Raman continua tend to be spectrally much smoother than those generated in the fission regime. Short wavelength generation is driven by four-wave mixing, especially for higher peak powers in the quasi-CW regime. In the pure CW regime, short wavelength generation has only recently been achieved at wavelengths shorter than those of a 1 μm pump source. In this case soliton trapping has been shown to play a role in short wavelength generation in the MI-driven regime.
A continuum will only occur in the MI regime if the fibre and field parameters are such that MI forms and dominates over other processes such as fission. In a similar fashion to the fission regime, it is instructive to develop a characteristic length scale for MI, formula_5:
formula_6
where formula_7 is the level of the background noise below the peak power level. This equation is essentially a measure of the length required for the MI gain to amplify the background quantum noise into solitons. Typically this shot noise is taken to be ~200 dB down. So, provided formula_8, MI will dominate over soliton fission in the quasi-CW case, and this condition may be expressed as:
formula_9
The middle term of the inequality is simply the square of the soliton order. For MI to dominate we need the left hand side to be much less than the right hand side, which implies that the soliton order must be much greater than 4. In practice this boundary has been established as being approximately formula_10. Therefore, we can see that it is predominantly ultra-short pulses that lead to the soliton fission mechanism.
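As a rough numerical illustration of these length scales, the following Python sketch evaluates the dispersion length, soliton order, soliton fission length and MI length from the expressions above. The parameter values are illustrative assumptions typical of a femtosecond-pumped PCF, not taken from any specific experiment.

```python
import math

# Illustrative (assumed) pump and fibre parameters
tau0 = 50e-15      # pulse duration tau_0 (s)
P0 = 5e3           # peak power P_0 (W)
beta2 = -20e-27    # group-velocity dispersion beta_2 (s^2/m), anomalous
gamma = 0.1        # nonlinear coefficient (1/(W*m))

L_D = tau0**2 / abs(beta2)                          # characteristic dispersion length
N = math.sqrt(gamma * P0 * tau0**2 / abs(beta2))    # soliton order
L_fiss = L_D / N                                    # soliton fission length
L_MI = 4.0 / (gamma * P0)                           # approximate MI length

print(f"N = {N:.1f}, L_D = {100*L_D:.1f} cm, "
      f"L_fiss = {100*L_fiss:.2f} cm, L_MI = {100*L_MI:.2f} cm")
print("soliton fission expected to dominate" if N < 16
      else "modulation instability expected to dominate")
```

With these assumed values the soliton order comes out below the empirical boundary of about 16, consistent with fission-dominated dynamics for femtosecond pumping.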
Pumping in the normal dispersion regime.
The two regimes outlined above assume that the pump is in the anomalous dispersion region. It is possible to create supercontinua in the normal region, and in fact many of the early results discussed in the historical overview were pumped in the normal dispersion regime. If the input pulses are short enough then self-phase modulation can lead to significant broadening which is temporally coherent. However, if the pulses are not ultra-short then stimulated Raman scattering tends to dominate and typically a series of cascaded discrete Stokes lines will appear until the zero-dispersion wavelength is reached. At this point a soliton Raman continuum may form. As pumping in the anomalous dispersion region is much more efficient for continuum generation, the majority of modern sources avoid pumping in the normal dispersion regime.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\textstyle _2"
},
{
"math_id": 1,
"text": "L_{\\mathrm{fiss}}"
},
{
"math_id": 2,
"text": "L_{\\mathrm{fiss}}=\\frac{L_D}{N}=\\sqrt{\\frac{\\tau^2_0}{|\\beta_2|\\gamma P_0}}"
},
{
"math_id": 3,
"text": "L_D"
},
{
"math_id": 4,
"text": "N"
},
{
"math_id": 5,
"text": "L_{\\mathrm{MI}}"
},
{
"math_id": 6,
"text": "L_{\\mathrm{MI}}=\\frac{n_{\\mathrm{dB}}}{20\\gamma P_0\\lg10}\\sim\\frac{4}{\\gamma P_0} "
},
{
"math_id": 7,
"text": "n_{\\mathrm{dB}}"
},
{
"math_id": 8,
"text": "L_{\\mathrm{MI}} \\ll L_{\\mathrm{fiss}}"
},
{
"math_id": 9,
"text": "4^2\\ll\\frac{\\gamma P_0\\tau_0^2}{|\\beta_2|}=N^2 "
},
{
"math_id": 10,
"text": "N=16"
}
] | https://en.wikipedia.org/wiki?curid=8736971 |
87372 | Additive function | Function that can be written as a sum over prime factors
In number theory, an additive function is an arithmetic function "f"("n") of the positive integer variable "n" such that whenever "a" and "b" are coprime, the function applied to the product "ab" is the sum of the values of the function applied to "a" and "b":
formula_0
Completely additive.
An additive function "f"("n") is said to be completely additive if formula_1 holds "for all" positive integers "a" and "b", even when they are not coprime. Totally additive is also used in this sense by analogy with totally multiplicative functions. If "f" is a completely additive function then "f"(1) = 0.
Every completely additive function is additive, but not vice versa.
Examples.
Examples of arithmetic functions which are completely additive are "a"0("n"), the sum of the prime factors of "n" counted with multiplicity (sometimes written sopfr("n")), and Ω("n"), the number of prime factors of "n" counted with multiplicity:
"a"0(4) = 2 + 2 = 4
"a"0(20) = "a"0(22 · 5) = 2 + 2 + 5 = 9
"a"0(27) = 3 + 3 + 3 = 9
"a"0(144) = "a"0(24 · 32) = "a"0(24) + "a"0(32) = 8 + 6 = 14
"a"0(2000) = "a"0(24 · 53) = "a"0(24) + "a"0(53) = 8 + 15 = 23
"a"0(2003) = 2003
"a"0(54,032,858,972,279) = 1240658
"a"0(54,032,858,972,302) = 1780417
"a"0(20,802,650,704,327,415) = 1240681
Ω(1) = 0, since 1 has no prime factors
Ω(4) = 2
Ω(16) = Ω(2·2·2·2) = 4
Ω(20) = Ω(2·2·5) = 3
Ω(27) = Ω(3·3·3) = 3
Ω(144) = Ω(2⁴ · 3²) = Ω(2⁴) + Ω(3²) = 4 + 2 = 6
Ω(2000) = Ω(2⁴ · 5³) = Ω(2⁴) + Ω(5³) = 4 + 3 = 7
Ω(2001) = 3
Ω(2002) = 4
Ω(2003) = 1
Ω(54,032,858,972,279) = Ω(11 ⋅ 1993² ⋅ 1236661) = 4
Ω(54,032,858,972,302) = Ω(2 ⋅ 7² ⋅ 149 ⋅ 2081 ⋅ 1778171) = 6
Ω(20,802,650,704,327,415) = Ω(5 ⋅ 7 ⋅ 11² ⋅ 1993² ⋅ 1236661) = 7.
Examples of arithmetic functions which are additive but not completely additive are ω("n"), the number of distinct prime factors of "n", and "a"1("n"), the sum of the distinct prime factors of "n" (sometimes written sopf("n")):
ω(4) = 1
ω(16) = ω(2⁴) = 1
ω(20) = ω(2² · 5) = 2
ω(27) = ω(3³) = 1
ω(144) = ω(2⁴ · 3²) = ω(2⁴) + ω(3²) = 1 + 1 = 2
ω(2000) = ω(2⁴ · 5³) = ω(2⁴) + ω(5³) = 1 + 1 = 2
ω(2001) = 3
ω(2002) = 4
ω(2003) = 1
ω(54,032,858,972,279) = 3
ω(54,032,858,972,302) = 5
ω(20,802,650,704,327,415) = 5
"a"1(1) = 0
"a"1(4) = 2
"a"1(20) = 2 + 5 = 7
"a"1(27) = 3
"a"1(144) = "a"1(24 · 32) = "a"1(24) + "a"1(32) = 2 + 3 = 5
"a"1(2000) = "a"1(24 · 53) = "a"1(24) + "a"1(53) = 2 + 5 = 7
"a"1(2001) = 55
"a"1(2002) = 33
"a"1(2003) = 2003
"a"1(54,032,858,972,279) = 1238665
"a"1(54,032,858,972,302) = 1780410
"a"1(20,802,650,704,327,415) = 1238677
Multiplicative functions.
From any additive function formula_3 it is possible to create a related multiplicative function formula_4 which is a function with the property that whenever formula_5 and formula_6 are coprime then:
formula_7
One such example is formula_8 Likewise if formula_3 is completely additive, then formula_9 is completely multiplicative. More generally, we could consider the function formula_10, where formula_11 is a nonzero real constant.
Summatory functions.
Given an additive function formula_12, let its summatory function be defined by formula_13. The average of formula_12 is given exactly as
formula_14
The summatory functions over formula_12 can be expanded as formula_15 where
formula_16
The average of the function formula_17 is also expressed by these functions as
formula_18
There is always an absolute constant formula_19 such that for all natural numbers formula_20,
formula_21
Let
formula_22
Suppose that formula_12 is an additive function with formula_23
such that as formula_24,
formula_25
Then formula_26 where formula_27 is the Gaussian distribution function
formula_28
Examples of this result related to the prime omega function and the number of prime divisors of shifted primes include the following for fixed formula_29, where the relations hold for formula_30:
formula_31
formula_32
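The first of these relations (an Erdős–Kac-type statement for ω) can be checked empirically. The Python sketch below tabulates the normalised values of ω("n") for "n" up to a modest bound and compares the resulting proportions with the Gaussian distribution function "G"("z"). The bound 10^5 is an arbitrary choice, and since the normalisation involves log log "x" the convergence is notoriously slow, so only loose agreement should be expected at this range.

```python
import math

def omega(n):
    """Number of distinct prime factors of n (naive trial division)."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def gaussian_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

x = 10**5                       # small bound; takes a few seconds with trial division
loglog = math.log(math.log(x))
values = [(omega(n) - loglog) / math.sqrt(loglog) for n in range(2, x + 1)]
for z in (0.0, 0.5, 1.0):
    empirical = sum(v <= z for v in values) / x
    # agreement is loose here and sharpens only very slowly as x grows
    print(f"z = {z:+.1f}: empirical {empirical:.3f}  vs  G(z) = {gaussian_cdf(z):.3f}")
```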
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f(a b) = f(a) + f(b)."
},
{
"math_id": 1,
"text": "f(a b) = f(a) + f(b)"
},
{
"math_id": 2,
"text": "\\N."
},
{
"math_id": 3,
"text": "f(n)"
},
{
"math_id": 4,
"text": "g(n),"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "b"
},
{
"math_id": 7,
"text": "g(a b) = g(a) \\times g(b)."
},
{
"math_id": 8,
"text": "g(n) = 2^{f(n)}."
},
{
"math_id": 9,
"text": "g(n) = 2^{f(n)} "
},
{
"math_id": 10,
"text": "g(n) = c^{f(n)} "
},
{
"math_id": 11,
"text": "c"
},
{
"math_id": 12,
"text": "f"
},
{
"math_id": 13,
"text": "\\mathcal{M}_f(x) := \\sum_{n \\leq x} f(n)"
},
{
"math_id": 14,
"text": "\\mathcal{M}_f(x) = \\sum_{p^{\\alpha} \\leq x} f(p^{\\alpha}) \\left(\\left\\lfloor \\frac{x}{p^{\\alpha}} \\right\\rfloor - \\left\\lfloor \\frac{x}{p^{\\alpha+1}} \\right\\rfloor\\right)."
},
{
"math_id": 15,
"text": "\\mathcal{M}_f(x) = x E(x) + O(\\sqrt{x} \\cdot D(x))"
},
{
"math_id": 16,
"text": "\\begin{align} \nE(x) & = \\sum_{p^{\\alpha} \\leq x} f(p^{\\alpha}) p^{-\\alpha} (1-p^{-1}) \\\\ \nD^2(x) & = \\sum_{p^{\\alpha} \\leq x} |f(p^{\\alpha})|^2 p^{-\\alpha}. \n\\end{align}"
},
{
"math_id": 17,
"text": "f^2"
},
{
"math_id": 18,
"text": "\\mathcal{M}_{f^2}(x) = x E^2(x) + O(x D^2(x))."
},
{
"math_id": 19,
"text": "C_f > 0"
},
{
"math_id": 20,
"text": "x \\geq 1"
},
{
"math_id": 21,
"text": "\\sum_{n \\leq x} |f(n) - E(x)|^2 \\leq C_f \\cdot x D^2(x)."
},
{
"math_id": 22,
"text": "\\nu(x; z) := \\frac{1}{x} \\#\\!\\left\\{n \\leq x: \\frac{f(n)-A(x)}{B(x)} \\leq z\\right\\}\\!."
},
{
"math_id": 23,
"text": "-1 \\leq f(p^{\\alpha}) = f(p) \\leq 1"
},
{
"math_id": 24,
"text": "x \\rightarrow \\infty"
},
{
"math_id": 25,
"text": "B(x) = \\sum_{p \\leq x} f^2(p) / p \\rightarrow \\infty."
},
{
"math_id": 26,
"text": "\\nu(x; z) \\sim G(z)"
},
{
"math_id": 27,
"text": "G(z)"
},
{
"math_id": 28,
"text": "G(z) = \\frac{1}{\\sqrt{2\\pi}} \\int_{-\\infty}^{z} e^{-t^2/2} dt."
},
{
"math_id": 29,
"text": "z \\in \\R"
},
{
"math_id": 30,
"text": "x \\gg 1"
},
{
"math_id": 31,
"text": "\\#\\{n \\leq x: \\omega(n) - \\log\\log x \\leq z (\\log\\log x)^{1/2}\\} \\sim x G(z),"
},
{
"math_id": 32,
"text": "\\#\\{p \\leq x: \\omega(p+1) - \\log\\log x \\leq z (\\log\\log x)^{1/2}\\} \\sim \\pi(x) G(z)."
}
] | https://en.wikipedia.org/wiki?curid=87372 |
8737421 | Series acceleration | In mathematics, series acceleration is one of a collection of sequence transformations for improving the rate of convergence of a series. Techniques for series acceleration are often applied in numerical analysis, where they are used to improve the speed of numerical integration. Series acceleration techniques may also be used, for example, to obtain a variety of identities on special functions. Thus, the Euler transform applied to the hypergeometric series gives some of the classic, well-known hypergeometric series identities.
Definition.
Given a sequence
formula_0
having a limit
formula_1
an accelerated series is a second sequence
formula_2
which converges faster to formula_3 than the original sequence, in the sense that
formula_4
If the original sequence is divergent, the sequence transformation acts as an extrapolation method to the antilimit formula_3.
The mappings from the original to the transformed series may be linear (as defined in the article sequence transformations), or non-linear. In general, the non-linear sequence transformations tend to be more powerful.
Overview.
Two classical techniques for series acceleration are Euler's transformation of series and Kummer's transformation of series. A variety of much more rapidly convergent and special-case tools have been developed in the 20th century, including Richardson extrapolation, introduced by Lewis Fry Richardson in the early 20th century but also known and used by Katahiro Takebe in 1722; the Aitken delta-squared process, introduced by Alexander Aitken in 1926 but also known and used by Takakazu Seki in the 18th century; the epsilon method given by Peter Wynn in 1956; the Levin u-transform; and the Wilf-Zeilberger-Ekhad method or WZ method.
For alternating series, several powerful techniques, offering convergence rates from formula_5 all the way to formula_6 for a summation of formula_7 terms, are described by Cohen "et al".
Euler's transform.
A basic example of a linear sequence transformation, offering improved convergence, is Euler's transform. It is intended to be applied to an alternating series; it is given by
formula_8
where formula_9 is the forward difference operator, for which one has the formula
formula_10
If the original series, on the left hand side, is only slowly converging, the forward differences will tend to become small quite rapidly; the additional power of two further improves the rate at which the right hand side converges.
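As a concrete illustration, the following Python sketch applies the transform above to the slowly converging alternating series for ln 2, computing the forward differences directly from their definition. The choice of series is ours, and the implementation is a minimal demonstration rather than an optimised one.

```python
import math

def forward_difference(a, n):
    """(Delta^n a)_0 = sum_{k=0}^{n} (-1)^k C(n, k) a_{n-k}."""
    return sum((-1) ** k * math.comb(n, k) * a(n - k) for k in range(n + 1))

def euler_transform(a, terms):
    """Partial sum of the Euler-transformed series for sum_n (-1)^n a_n."""
    return sum((-1) ** n * forward_difference(a, n) / 2 ** (n + 1)
               for n in range(terms))

a = lambda n: 1.0 / (n + 1)     # a_n for log(2) = sum (-1)^n / (n+1)

direct = sum((-1) ** n * a(n) for n in range(10))
accelerated = euler_transform(a, 10)
print(direct, accelerated, math.log(2))
# With 10 terms the transformed sum matches log(2) to about four decimal places,
# while the direct partial sum is still off in the second decimal place.
```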
A particularly efficient numerical implementation of the Euler transform is the van Wijngaarden transformation.
Conformal mappings.
A series
formula_11
can be written as formula_12, where the function "f" is defined as
formula_13
The function formula_14 can have singularities in the complex plane (branch point singularities, poles or essential singularities), which limit the radius of convergence of the series. If the point formula_15 is close to or on the boundary of the disk of convergence, the series for formula_16 will converge very slowly. One can then improve the convergence of the series by means of a conformal mapping that moves the singularities such that the point that is mapped to formula_15 ends up deeper in the new disk of convergence.
The conformal transform formula_17 needs to be chosen such that formula_18, and one usually chooses a function that has a finite derivative at "w" = 0. One can assume that formula_19 without loss of generality, as one can always rescale "w" to redefine formula_20. We then consider the function
formula_21
Since formula_19, we have formula_22. We can obtain the series expansion of formula_23 by putting formula_17 in the series expansion of formula_14 because formula_24; the first formula_7 terms of the series expansion for formula_14 will yield the first formula_7 terms of the series expansion for formula_23 if formula_25. Putting formula_26 in that series expansion will thus yield a series such that if it converges, it will converge to the same value as the original series.
Non-linear sequence transformations.
Examples of such nonlinear sequence transformations are Padé approximants, the Shanks transformation, and Levin-type sequence transformations.
Especially nonlinear sequence transformations often provide powerful numerical methods for the summation of divergent series or asymptotic series that arise for instance in perturbation theory, and may be used as highly effective extrapolation methods.
Aitken method.
A simple nonlinear sequence transformation is the Aitken extrapolation or delta-squared method,
formula_27
defined by
formula_28
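A minimal Python sketch of the delta-squared formula above, applied to the partial sums of the Leibniz series for π/4 (our own illustrative choice of a slowly converging alternating series):

```python
import math

def aitken(s):
    """Apply the Aitken delta-squared transformation to a list of partial sums."""
    return [
        s[n + 2] - (s[n + 2] - s[n + 1]) ** 2 / (s[n + 2] - 2 * s[n + 1] + s[n])
        for n in range(len(s) - 2)
    ]

# Partial sums of the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ...
partial, total = [], 0.0
for k in range(10):
    total += (-1) ** k / (2 * k + 1)
    partial.append(total)

accelerated = aitken(partial)
print(abs(partial[-1] - math.pi / 4))      # error of the plain partial sum
print(abs(accelerated[-1] - math.pi / 4))  # markedly smaller error after one pass
```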
This transformation is commonly used to improve the rate of convergence of a slowly converging sequence; heuristically, it eliminates the largest part of the absolute error. | [
{
"math_id": 0,
"text": "S=\\{ s_n \\}_{n\\in\\N}"
},
{
"math_id": 1,
"text": "\\lim_{n\\to\\infty} s_n = \\ell,"
},
{
"math_id": 2,
"text": "S'=\\{ s'_n \\}_{n\\in\\N}"
},
{
"math_id": 3,
"text": "\\ell"
},
{
"math_id": 4,
"text": "\\lim_{n\\to\\infty} \\frac{s'_n-\\ell}{s_n-\\ell} = 0."
},
{
"math_id": 5,
"text": "5.828^{-n}"
},
{
"math_id": 6,
"text": "17.93^{-n}"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "\\sum_{n=0}^\\infty (-1)^n a_n = \\sum_{n=0}^\\infty (-1)^n \\frac{(\\Delta^n a)_0}{2^{n+1}}"
},
{
"math_id": 9,
"text": "\\Delta"
},
{
"math_id": 10,
"text": "(\\Delta^n a)_0 = \\sum_{k=0}^n (-1)^k {n \\choose k} a_{n-k}."
},
{
"math_id": 11,
"text": "S = \\sum_{n=0}^{\\infty} a_n"
},
{
"math_id": 12,
"text": "f(1)"
},
{
"math_id": 13,
"text": "f(z) = \\sum_{n=0}^{\\infty} a_n z^n."
},
{
"math_id": 14,
"text": "f(z)"
},
{
"math_id": 15,
"text": "z = 1"
},
{
"math_id": 16,
"text": "S"
},
{
"math_id": 17,
"text": "z = \\Phi(w)"
},
{
"math_id": 18,
"text": "\\Phi(0) = 0"
},
{
"math_id": 19,
"text": "\\Phi(1) = 1"
},
{
"math_id": 20,
"text": "\\Phi"
},
{
"math_id": 21,
"text": "g(w) = f(\\Phi(w))."
},
{
"math_id": 22,
"text": "f(1) = g(1)"
},
{
"math_id": 23,
"text": "g(w)"
},
{
"math_id": 24,
"text": "\\Phi(0)=0"
},
{
"math_id": 25,
"text": "\\Phi'(0) \\neq 0"
},
{
"math_id": 26,
"text": "w = 1"
},
{
"math_id": 27,
"text": "\\mathbb{A} : S \\to S'=\\mathbb{A}(S) = {(s'_n)}_{n\\in\\N}"
},
{
"math_id": 28,
"text": "s'_n = s_{n+2} - \\frac{(s_{n+2}-s_{n+1})^2}{s_{n+2}-2s_{n+1}+s_n}."
},
{
"math_id": 29,
"text": "\\epsilon"
}
] | https://en.wikipedia.org/wiki?curid=8737421 |
8738566 | Etemadi's inequality | In probability theory, Etemadi's inequality is a so-called "maximal inequality", an inequality that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound. The result is due to Nasrollah Etemadi.
Statement of the inequality.
Let "X"1, ..., "X""n" be independent real-valued random variables defined on some common probability space, and let "α" ≥ 0. Let "S""k" denote the partial sum
formula_0
Then
formula_1
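The inequality can be made concrete numerically. The following Python sketch estimates both sides by Monte Carlo simulation for independent, symmetric ±1 random variables; the choices of "n", "α" and the sample size are arbitrary, and the simulation is only meant to illustrate the statement, not to prove anything.

```python
import random

def simulate(n=50, alpha=3.0, trials=20000, seed=0):
    rng = random.Random(seed)
    lhs_count = 0                      # counts max_k |S_k| >= 3*alpha
    rhs_counts = [0] * n               # counts |S_k| >= alpha, for each k
    for _ in range(trials):
        s, max_abs = 0, 0.0
        for k in range(n):
            s += rng.choice((-1, 1))   # X_k = +/-1 with equal probability
            max_abs = max(max_abs, abs(s))
            if abs(s) >= alpha:
                rhs_counts[k] += 1
        if max_abs >= 3 * alpha:
            lhs_count += 1
    lhs = lhs_count / trials
    rhs = 3 * max(c / trials for c in rhs_counts)
    return lhs, rhs

lhs, rhs = simulate()
print(f"P(max |S_k| >= 3a) ~ {lhs:.4f} <= 3 max_k P(|S_k| >= a) ~ {rhs:.4f}")
```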
Remark.
Suppose that the random variables "X""k" have common expected value zero. Apply Chebyshev's inequality to the right-hand side of Etemadi's inequality and replace "α" by "α" / 3. The result is Kolmogorov's inequality with an extra factor of 27 on the right-hand side:
formula_2 | [
{
"math_id": 0,
"text": "S_k = X_1 + \\cdots + X_k.\\,"
},
{
"math_id": 1,
"text": "\\Pr \\Bigl( \\max_{1 \\leq k \\leq n} | S_k | \\geq 3 \\alpha \\Bigr) \\leq 3 \\max_{1 \\leq k \\leq n} \\Pr \\bigl( | S_k | \\geq \\alpha \\bigr)."
},
{
"math_id": 2,
"text": " \\Pr \\Bigl( \\max_{1 \\leq k \\leq n} | S_k | \\geq \\alpha \\Bigr) \\leq \\frac{27}{\\alpha^2} \\operatorname{var} (S_n)."
}
] | https://en.wikipedia.org/wiki?curid=8738566 |
8739059 | BWF Super Series | Series of Grade 2 badminton tournaments
The BWF Super Series was a series of Grade 2 badminton tournaments, sanctioned by Badminton World Federation (BWF). It was launched on December 14, 2006 and implemented in 2007.
Since 2011, the Super Series included two levels of tournament: the Super Series Premier and the Super Series. A season of the Super Series featured twelve tournaments around the world, five of which were classified as Super Series Premier. Super Series Premier tournaments offered higher ranking points and higher minimum total prize money. The top eight players/pairs in each discipline in the Super Series standings were invited to the Super Series Finals held at the end of the year.
In March 2017, the BWF announced a new tournament structure, the BWF World Tour, together with the new hosts for the 2018–2021 cycle, to replace the Super Series.
Features.
Prize money.
A Super Series tournament offered a minimum total prize money of USD200,000; a Super Series Premier tournament offered a minimum of USD350,000; the Super Series Finals offered a minimum of USD500,000. From 2014, a Super Series Premier tournament offered a minimum total prize money of USD500,000, with a minimum increment of USD50,000 each year until 2017. Super Series tournaments offered a minimum total prize money of USD250,000, with an increment of USD25,000 each year up to 2017.
The Super Series offered prize money regardless of the round in which a player was eliminated, unless they went out in the qualification round. Starting in the 2008 season, the women's winners received the same prize money as the men's winners. The prize money was distributed via the following formula:
formula_0
World Ranking points.
The Super Series Premier and Super Series tournaments offered ranking points to players based on the round a player/pair reached. The Super Series Premier tournaments offered higher ranking points, second only to BWF tournaments (the BWF World Championships and the Summer Olympics). Points were used for the World Ranking and also for the Super Series standings, which decided the top eight players/pairs that qualified for the Super Series Finals.
Nationality separation.
Starting in 2007, players from the same nation were not separated in the main draw of the tournaments. All but the top two seeds would no longer be divided into the two halves of the draw as they were before. The top Chinese player Lin Dan criticized the rule change. Since 2010, the rules were altered so that players from the same nation are separated in the first round.
Entries.
Entries had to be made five weeks before the start of the tournament. Only 32 players/pairs would play in the main round. Among the 32 players/pairs, only eight players/pairs would be seeded in each event. Each event had the 28 highest-ranked players/pairs in the World Ranking and four qualifiers.
Prior to September 2008, 32 players/pairs were able to participate in the qualifying rounds. Since then, only up to 16 players/pairs have been allowed to participate in the qualifying rounds, where the four highest-ranked players/pairs in the World Ranking are seeded. This change was made to reduce the strain between the qualifying rounds and the main event.
Each Super Series tournament was held over six days, with the main rounds taking five days.
Player commitment regulations.
Starting in 2011, the top ten players/pairs of each discipline in the World Ranking were required to play in all Super Series Premier tournaments and a minimum of four Super Series tournaments occurring in the full calendar year. Players who qualified for the Super Series Finals were obliged to play. A fine, in addition to the normal withdrawal fees, would be imposed upon players/pairs who failed to play. Exemption from penalty would be considered by the BWF on receipt of a valid medical certificate or strong evidence proving the player unfit to participate. However, retired or suspended players were not subject to these regulations.
Umpires.
In the 2007 season, tournament hosts were allowed to appoint local umpires. However, after an outcry from several players during the tournaments, each Super Series tournament was required to present eight internationally certificated and accredited umpires. Recent regulations state that at least six umpires must be from member associations other than the host member association, including at least four BWF-certificated and two continental-certificated umpires with a broad spread of nationalities.
Tournaments.
Every three years, the BWF Council would review the countries that host a Super Series Premier and Super Series tournament.
Historically, 14 tournaments in 13 countries hosted at least one season of the series. China was the sole country to host the series twice in a season from 2007 to 2013. Starting in the 2014 season, Australia hosted a Super Series tournament.
<templatestyles src="Legend/styles.css" /> Super Series
<templatestyles src="Legend/styles.css" /> Super Series Premier
BWF Super Series Finals.
At the end of the Super Series circuit, the top eight players/pairs in the Super Series standings of each discipline, with a maximum of two players/pairs from the same member association, were required to play in a final tournament known as the Super Series Finals. It offered a minimum total prize money of USD500,000.
If two or more players were tied in ranking, the selection of players was based on the following criteria:
Performances by countries.
Tabulated below are the Super Series performances by country. Only countries that won a title are listed.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Total\\ prize\\ money\\ \\times \\frac{Percentage}{100}"
}
] | https://en.wikipedia.org/wiki?curid=8739059 |
8740833 | Optical DPSK demodulator | An optical DPSK demodulator is a device that provides a method for converting an optical differential phase-shift keying (DPSK) signal to an intensity-keyed signal at the receiving end in fiber-optic communication networks. It is also known as delay line interferometer (DLI), or simply called DPSK demodulator.
The DPSK decoding method is achieved by comparing the phase of two sequential bits. An incoming DPSK optical signal is first split into two beams of equal intensity. One beam is delayed by an optical path difference that introduces a time delay corresponding to one bit. The two beams in the two paths are then coherently recombined so that they interfere with each other constructively or destructively. The interference intensity is measured and becomes the intensity-keyed signal. A typical optical system for such a purpose is the Mach–Zehnder interferometer or Michelson interferometer, forming an optical DPSK demodulator.
The delay time depends on the data rate. For instance, in a 40 Gbit/s system, one bit corresponds to 25 picoseconds, and light travels about 5 mm in an optical fiber or 7.5 mm in free space within that period. Thus the optical path difference between the two beams is 5 mm or 7.5 mm, depending on the type of interferometer used.
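This path-difference calculation is easy to reproduce. The following Python sketch assumes a fibre group index of about 1.5, which is a typical value rather than a property of any particular device:

```python
C = 299_792_458.0          # speed of light in vacuum (m/s)

def delay_line_length(bit_rate_hz, group_index=1.5):
    """One-bit path difference for a DPSK delay-line interferometer, in metres."""
    bit_period = 1.0 / bit_rate_hz
    return C * bit_period / group_index

print(delay_line_length(40e9, group_index=1.5) * 1e3, "mm in fibre")       # ~5 mm
print(delay_line_length(40e9, group_index=1.0) * 1e3, "mm in free space")  # ~7.5 mm
```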
DQPSK is the four-level version of DPSK. DQPSK transmits two bits for every symbol (bit combinations being 00, 01, 11 and 10) and has an additional advantage over conventional binary DPSK. DQPSK has a narrower optical spectrum, which tolerates more dispersion (both chromatic and polarization-mode), allows for stronger optical filtering, and enables closer channel spacing. As a result, DQPSK allows processing of 40 Gbit/s data-rate in a 50 GHz channel spacing system. A demodulator for optical DQPSK signals can be constructed using two matched DPSK demodulators with phase off-set at formula_0. | [
{
"math_id": 0,
"text": "\\pm \\pi/4"
}
] | https://en.wikipedia.org/wiki?curid=8740833 |
8743318 | Least distance of distinct vision | In optometry, the least distance of distinct vision (LDDV) or the reference seeing distance (RSD) is the closest someone with "normal" vision (20/20 vision) can comfortably look at something. In other words, LDDV is the minimum comfortable distance between the naked human eye and a visible object.
The magnifying power ("M") of a lens with focal length ("f" in millimeters) when viewed by the naked human eye can be calculated as:
formula_0
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{M} = \\frac{250}{f}."
}
] | https://en.wikipedia.org/wiki?curid=8743318 |
874400 | Representation theory of finite groups | Representations of finite groups, particularly on vector spaces
The representation theory of groups is a part of mathematics which examines how groups act on given structures.
Here the focus is in particular on operations of groups on vector spaces. Nevertheless, groups acting on other groups or on sets are also considered. For more details, please refer to the section on permutation representations.
Other than a few marked exceptions, only finite groups will be considered in this article. We will also restrict ourselves to vector spaces over fields of characteristic zero. Because the theory of algebraically closed fields of characteristic zero is complete, a theory valid for a special algebraically closed field of characteristic zero is also valid for every other algebraically closed field of characteristic zero. Thus, without loss of generality, we can study vector spaces over formula_0
Representation theory is used in many parts of mathematics, as well as in quantum chemistry and physics. Among other things it is used in algebra to examine the structure of groups. There are also applications in harmonic analysis and number theory. For example, representation theory is used in the modern approach to gain new results about automorphic forms.
Definition.
Linear representations.
Let formula_1 be a formula_2–vector space and formula_3 a finite group. A linear representation of formula_3 is a group homomorphism formula_4 Here formula_5 is notation for a general linear group, and formula_6 for an automorphism group. This means that a linear representation is a map formula_7 which satisfies formula_8 for all formula_9 The vector space formula_1 is called representation space of formula_10 Often the term representation of formula_3 is also used for the representation space formula_11
The representation of a group in a module instead of a vector space is also called a linear representation.
We write formula_12 for the representation formula_13 of formula_10 Sometimes we use the notation formula_14 if it is clear to which representation the space formula_1 belongs.
In this article we will restrict ourselves to the study of finite-dimensional representation spaces, except for the last chapter. As in most cases only a finite number of vectors in formula_1 is of interest, it is sufficient to study the subrepresentation generated by these vectors. The representation space of this subrepresentation is then finite-dimensional.
The degree of a representation is the dimension of its representation space formula_11 The notation formula_15 is sometimes used to denote the degree of a representation formula_16
Examples.
The trivial representation is given by formula_17 for all formula_18
A representation of degree formula_19 of a group formula_3 is a homomorphism into the multiplicative group formula_20 As every element of formula_3 is of finite order, the values of formula_21 are roots of unity. For example, let formula_22 be a nontrivial linear representation. Since formula_23 is a group homomorphism, it has to satisfy formula_24 Because formula_19 generates formula_25 is determined by its value on formula_26 And as formula_23 is nontrivial, formula_27 Thus, we achieve the result that the image of formula_3 under formula_23 has to be a nontrivial subgroup of the group which consists of the fourth roots of unity. In other words, formula_23 has to be one of the following three maps:
formula_28
Let formula_29 and let formula_30 be the group homomorphism defined by:
formula_31
In this case formula_23 is a linear representation of formula_3 of degree formula_32
Permutation representation.
Let formula_33 be a finite set and let formula_3 be a group acting on formula_34 Denote by formula_35 the group of all permutations on formula_33 with the composition as group multiplication.
A group acting on a finite set is sometimes considered sufficient for the definition of the permutation representation. However, since we want to construct examples for linear representations - where groups act on vector spaces instead of on arbitrary finite sets - we have to proceed in a different way. In order to construct the permutation representation, we need a vector space formula_1 with formula_36 A basis of formula_1 can be indexed by the elements of formula_34 The permutation representation is the group homomorphism formula_37 given by formula_38 for all formula_39 All linear maps formula_21 are uniquely defined by this property.
Example. Let formula_40 and formula_41 Then formula_3 acts on formula_33 via formula_42 The associated linear representation is formula_43 with formula_44 for formula_45
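For a concrete computation, the linear maps formula_21 of a permutation representation can be written as permutation matrices. The following Python sketch (using NumPy) does this for an illustrative choice of group and set, the cyclic group of order 5 acting on {0, 1, 2, 3, 4} by addition modulo 5; this is our own example, not necessarily the one above.

```python
import numpy as np

X = list(range(5))                     # the finite set X = {0, 1, 2, 3, 4}
G = list(range(5))                     # Z/5Z, acting by s.x = (s + x) mod 5

def permutation_matrix(s):
    """Matrix of rho(s) in the basis (e_x)_{x in X}: rho(s) e_x = e_{s.x}."""
    M = np.zeros((len(X), len(X)), dtype=int)
    for x in X:
        M[(s + x) % 5, x] = 1
    return M

# The map s -> rho(s) is a group homomorphism: rho(s t) = rho(s) rho(t)
for s in G:
    for t in G:
        assert np.array_equal(permutation_matrix((s + t) % 5),
                              permutation_matrix(s) @ permutation_matrix(t))
```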
Left- and right-regular representation.
Let formula_3 be a group and formula_1 be a vector space of dimension formula_46 with a basis formula_47 indexed by the elements of formula_10 The left-regular representation is a special case of the permutation representation by choosing formula_48 This means formula_49 for all formula_50 Thus, the family formula_51 of images of formula_52 are a basis of formula_11 The degree of the left-regular representation is equal to the order of the group.
The right-regular representation is defined on the same vector space with a similar homomorphism: formula_53 In the same way as before formula_51 is a basis of formula_11 Just as in the case of the left-regular representation, the degree of the right-regular representation is equal to the order of formula_10
Both representations are isomorphic via formula_54 For this reason they are not always set apart, and often referred to as "the" regular representation.
A closer look provides the following result: A given linear representation formula_55 is isomorphic to the left-regular representation if and only if there exists a formula_56 such that formula_57 is a basis of formula_58
Example. Let formula_59 and formula_60 with the basis formula_61 Then the left-regular representation formula_62 is defined by formula_63 for formula_64 The right-regular representation is defined analogously by formula_65 for formula_66
Representations, modules and the convolution algebra.
Let formula_3 be a finite group, let formula_2 be a commutative ring and let formula_67 be the group algebra of formula_3 over formula_68 This algebra is free and a basis can be indexed by the elements of formula_10 Most often the basis is identified with formula_3. Every element formula_69 can then be uniquely expressed as
formula_70 with formula_71.
The multiplication in formula_67 extends that in formula_3 distributively.
Now let formula_1 be a formula_2–module and let formula_72 be a linear representation of formula_3 in formula_11 We define formula_73 for all formula_74 and formula_75. By linear extension formula_1 is endowed with the structure of a left-formula_67–module. Vice versa we obtain a linear representation of formula_3 starting from a formula_67–module formula_1. Additionally, homomorphisms of representations are in bijective correspondence with group algebra homomorphisms. Therefore, these terms may be used interchangeably. This is an example of an isomorphism of categories.
Suppose formula_76 In this case the left formula_77–module given by formula_77 itself corresponds to the left-regular representation. In the same way formula_77 as a right formula_77–module corresponds to the right-regular representation.
In the following we will define the convolution algebra: Let formula_3 be a group. The set formula_78 is a formula_79–vector space with the operations of pointwise addition and scalar multiplication; this vector space is isomorphic to formula_80 The convolution of two elements formula_81, defined by
formula_82
makes formula_83 an algebra. The algebra formula_83 is called the convolution algebra.
The convolution algebra is free and has a basis indexed by the group elements: formula_84 where
formula_85
Using the properties of the convolution we obtain: formula_86
We define a map between formula_83 and formula_87 by defining formula_88 on the basis formula_89 and extending it linearly. Obviously the prior map is bijective. A closer inspection of the convolution of two basis elements as shown in the equation above reveals that the multiplication in formula_83 corresponds to that in formula_90 Thus, the convolution algebra and the group algebra are isomorphic as algebras.
The involution
formula_91
turns formula_83 into a formula_92–algebra. We have formula_93
A representation formula_94 of a group formula_3 extends to a formula_92–algebra homomorphism formula_95 by formula_96 Since multiplicativity is a characteristic property of algebra homomorphisms, formula_97 satisfies formula_98 If formula_97 is unitary, we also obtain formula_99 For the definition of a unitary representation, please refer to the chapter on properties. In that chapter we will see that (without loss of generality) every linear representation can be assumed to be unitary.
Using the convolution algebra we can implement a Fourier transformation on a group formula_10 In the area of harmonic analysis it is shown that the following definition is consistent with the definition of the Fourier transformation on formula_100
Let formula_101 be a representation and let formula_102 be a formula_103-valued function on formula_3. The Fourier transform formula_104 of formula_105 is defined as
formula_106
This transformation satisfies formula_107
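For an abelian group all irreducible representations are one-dimensional, and the Fourier transform just defined reduces to the ordinary discrete Fourier transform. The following Python sketch checks this for the cyclic group of order 8, using its one-dimensional representations given by the characters exp(−2πiks/n); the sign and normalisation conventions here are chosen to match numpy.fft and may differ from other conventions of the transform above.

```python
import numpy as np

n = 8
f = np.random.default_rng(0).standard_normal(n)    # a function f: Z/nZ -> C

def fourier_on_cyclic_group(f, k):
    """hat{f}(rho_k) = sum_s f(s) rho_k(s), with rho_k(s) = exp(-2*pi*i*k*s/n)."""
    s = np.arange(n)
    return np.sum(f * np.exp(-2j * np.pi * k * s / n))

group_ft = np.array([fourier_on_cyclic_group(f, k) for k in range(n)])
assert np.allclose(group_ft, np.fft.fft(f))         # agrees with the usual DFT
```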
Maps between representations.
A map between two representations formula_108 of the same group formula_3 is a linear map formula_109 with the property that formula_110 holds for all formula_111 In other words, the corresponding square diagram built from formula_112 and the two group actions commutes for every formula_74.
Such a map is also called formula_3–linear, or an equivariant map. The kernel, the image and the cokernel of formula_112 are defined by default. The composition of equivariant maps is again an equivariant map. There is a category of representations with equivariant maps as its morphisms. They are again formula_3–modules. Thus, they provide representations of formula_3 due to the correlation described in the previous section.
Irreducible representations and Schur's lemma.
Let formula_113 be a linear representation of formula_10 Let formula_114 be a formula_3-invariant subspace of formula_115 that is, formula_116 for all formula_117 and formula_118. The restriction formula_119 is an isomorphism of formula_114 onto itself. Because formula_120 holds for all formula_121 this construction is a representation of formula_3 in formula_58 It is called subrepresentation of formula_11
Any representation "V" has at least two subrepresentations, namely the one consisting only of 0, and the one consisting of "V" itself. The representation is called an "irreducible representation", if these two are the only subrepresentations. Some authors also call these representations simple, given that they are precisely the simple modules over the group algebra formula_77.
"Schur's lemma" puts a strong constraint on maps between irreducible representations. If formula_122 and formula_123 are both irreducible, and formula_124 is a linear map such that formula_125 for all formula_111, there is the following dichotomy:
Properties.
Two representations formula_135 are called equivalent or isomorphic, if there exists a formula_3–linear vector space isomorphism between the representation spaces. In other words, they are isomorphic if there exists a bijective linear map formula_136 such that formula_137 for all formula_138 In particular, equivalent representations have the same degree.
A representation formula_139 is called faithful when formula_97 is injective. In this case formula_97 induces an isomorphism between formula_3 and the image formula_140 As the latter is a subgroup of formula_141 we can regard formula_3 via formula_97 as a subgroup of formula_142
We can restrict the range as well as the domain:
Let formula_143 be a subgroup of formula_10 Let formula_23 be a linear representation of formula_10 We denote by formula_144 the restriction of formula_23 to the subgroup formula_145
If there is no danger of confusion, we might use only formula_146 or in short formula_147
The notation formula_148 or in short formula_149 is also used to denote the restriction of the representation formula_1 of formula_3 onto formula_145
Let formula_105 be a function on formula_10 We write formula_150 or shortly formula_151 for the restriction to the subgroup formula_145
It can be proven that the number of irreducible representations of a group formula_3 (or correspondingly the number of simple formula_77–modules) equals the number of conjugacy classes of formula_10
A representation is called semisimple or completely reducible if it can be written as a direct sum of irreducible representations. This is analogous to the corresponding definition for a semisimple algebra.
For the definition of the direct sum of representations please refer to the section on direct sums of representations.
A representation is called isotypic if it is a direct sum of pairwise isomorphic irreducible representations.
Let formula_152 be a given representation of a group formula_10 Let formula_153 be an irreducible representation of formula_10 The formula_153–isotype formula_154 of formula_3 is defined as the sum of all irreducible subrepresentations of formula_1 isomorphic to formula_155
Every vector space over formula_79 can be provided with an inner product. A representation formula_23 of a group formula_3 in a vector space endowed with an inner product is called unitary if formula_21 is unitary for every formula_111 This means that in particular every formula_21 is diagonalizable. For more details see the article on unitary representations.
A representation is unitary with respect to a given inner product if and only if the inner product is invariant with regard to the induced operation of formula_156 i.e. if and only if formula_157 holds for all formula_158
A given inner product formula_159 can be replaced by an invariant inner product by exchanging formula_160 with
formula_161
Thus, without loss of generality we can assume that every further considered representation is unitary.
Example. Let formula_162 be the dihedral group of order formula_163 generated by formula_164 which fulfil the properties formula_165 and formula_166 Let formula_167 be a linear representation of formula_168 defined on the generators by:
formula_169
This representation is faithful. The subspace formula_170 is a formula_168–invariant subspace. Thus, there exists a nontrivial subrepresentation formula_171 with formula_172 Therefore, the representation is not irreducible. The mentioned subrepresentation is of degree one and irreducible.
The complementary subspace of formula_170 is formula_168–invariant as well. Therefore, we obtain the subrepresentation formula_173 with
formula_174
This subrepresentation is also irreducible. That means, the original representation is completely reducible:
formula_175
Both subrepresentations are isotypic and are the two only non-zero isotypes of formula_16
The representation formula_23 is unitary with regard to the standard inner product on formula_176 because formula_177 and formula_178 are unitary.
Let formula_179 be any vector space isomorphism. Then formula_180 which is defined by the equation formula_181 for all formula_182 is a representation isomorphic to formula_16
By restricting the domain of the representation to a subgroup, e.g. formula_183 we obtain the representation formula_184 This representation is defined by the image formula_185 whose explicit form is shown above.
Constructions.
The dual representation.
Let formula_186 be a given representation. The dual representation or contragredient representation formula_187 is a representation of formula_3 in the dual vector space of formula_11 It is defined by the property
formula_188
With regard to the natural pairing formula_189 between formula_190 and formula_191 the definition above provides the equation:
formula_192
For an example, see the main page on this topic: Dual representation.
Direct sum of representations.
Let formula_193 and formula_194 be a representation of formula_195 and formula_196 respectively. The direct sum of these representations is a linear representation and is defined as
formula_197
Let formula_198 be representations of the same group formula_10 For the sake of simplicity, the direct sum of these representations is defined as a representation of formula_156 i.e. it is given as formula_199 by viewing formula_3 as the diagonal subgroup of formula_200
Example. Let (here formula_201 and formula_202 are the imaginary unit and the primitive cube root of unity respectively):
formula_203
Then
formula_204
As it is sufficient to consider the image of the generating element, we find that
formula_205
Tensor product of representations.
Let formula_206 be linear representations. We define the linear representation formula_207 into the tensor product of formula_208 and formula_133 by formula_209 in which formula_210 This representation is called outer tensor product of the representations formula_131 and formula_211 The existence and uniqueness is a consequence of the properties of the tensor product.
Example. We reexamine the example provided for the direct sum:
formula_203
The outer tensor product
formula_212
Using the standard basis of formula_213 we have the following for the generating element:
formula_214
Remark. Note that the direct sum and the tensor products have different degrees and hence are different representations.
Let formula_215 be two linear representations of the same group. Let formula_216 be an element of formula_10 Then formula_217 is defined by formula_218 for formula_219 and we write formula_220 Then the map formula_221 defines a linear representation of formula_156 which is also called tensor product of the given representations.
These two cases have to be strictly distinguished. The first case is a representation of the group product into the tensor product of the corresponding representation spaces. The second case is a representation of the group formula_3 into the tensor product of two representation spaces of this one group. But this last case can be viewed as a special case of the first one by focusing on the diagonal subgroup formula_200 This definition can be iterated a finite number of times.
Let formula_1 and formula_114 be representations of the group formula_10 Then formula_222 is a representation by virtue of the following identity: formula_223. Let formula_224 and let formula_23 be the representation on formula_225 Let formula_226 be the representation on formula_1 and formula_227 the representation on formula_58 Then the identity above leads to the following result:
formula_228 for all formula_229
Theorem. The irreducible representations of formula_230 up to isomorphism are exactly the representations formula_231 in which formula_131 and formula_132 are irreducible representations of formula_195 and formula_196 respectively.
Symmetric and alternating square.
Let formula_232 be a linear representation of formula_10 Let formula_233 be a basis of formula_11 Define formula_234 by extending formula_235 linearly. It then holds that formula_236 and therefore formula_237 splits up into formula_238 in which
formula_239
formula_240
These subspaces are formula_3–invariant and by this define subrepresentations which are called the symmetric square and the alternating square, respectively. These subrepresentations are also defined in formula_241 although in this case they are denoted wedge product formula_242 and symmetric product formula_243 In case that formula_244 the vector space formula_245 is in general not equal to the direct sum of these two products.
Decompositions.
In order to understand representations more easily, a decomposition of the representation space into the direct sum of simpler subrepresentations would be desirable.
This can be achieved for finite groups as we will see in the following results. More detailed explanations and proofs may be found in [1] and [2].
Theorem. (Maschke) Let formula_246 be a linear representation where formula_1 is a vector space over a field of characteristic zero. Let formula_114 be a formula_3-invariant subspace of formula_11 Then the complement formula_247 of formula_114 exists in formula_1 and is formula_3-invariant.
A subrepresentation and its complement determine a representation uniquely.
The following theorem will be presented in a more general way, as it provides a very beautiful result about representations of compact – and therefore also of finite – groups:
Theorem. Every linear representation of a compact group over a field of characteristic zero is a direct sum of irreducible representations.
Or in the language of formula_67-modules: If formula_248 the group algebra formula_67 is semisimple, i.e. it is the direct sum of simple algebras.
Note that this decomposition is not unique. However, the number of how many times a subrepresentation isomorphic to a given irreducible representation is occurring in this decomposition is independent of the choice of decomposition.
The canonical decomposition
To achieve a unique decomposition, one has to combine all the irreducible subrepresentations that are isomorphic to each other. That means, the representation space is decomposed into a direct sum of its isotypes. This decomposition is uniquely determined. It is called the canonical decomposition.
Let formula_249 be the set of all irreducible representations of a group formula_3 up to isomorphism. Let formula_1 be a representation of formula_3 and let formula_250 be the set of all isotypes of formula_11 The projection formula_251 corresponding to the canonical decomposition is given by
formula_252
where formula_253 formula_254 and formula_255 is the character belonging to formula_256
In the following, we show how to determine the isotype to the trivial representation:
Definition (Projection formula). For every representation formula_14 of a group formula_3 we define
formula_257
In general, formula_258 is not formula_3-linear. We define
formula_259
Then formula_260 is a formula_3-linear map, because
formula_261
Proposition. The map formula_260 is a projection from formula_1 to formula_262
This proposition enables us to determine the isotype to the trivial subrepresentation of a given representation explicitly.
How often the trivial representation occurs in formula_1 is given by formula_263 This result is a consequence of the fact that the eigenvalues of a projection are only formula_264 or formula_19 and that the eigenspace corresponding to the eigenvalue formula_19 is the image of the projection. Since the trace of the projection is the sum of all eigenvalues, we obtain the following result
formula_265
in which formula_266 denotes the isotype of the trivial representation.
Let formula_267 be a nontrivial irreducible representation of formula_10 Then the isotype to the trivial representation of formula_97 is the null space. That means the following equation holds
formula_268
Let formula_269 be an orthonormal basis of formula_270 Then we have:
formula_271
Therefore, the following is valid for a nontrivial irreducible representation formula_1:
formula_272
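As a computational illustration of the projection formula, the following Python sketch builds the permutation representation of the symmetric group on three letters acting on a three-dimensional space by permuting coordinates (an illustrative choice), forms the averaged operator, and reads off from its trace that the trivial representation occurs exactly once:

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    """Permutation matrix sending the basis vector e_i to e_{p(i)}."""
    M = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        M[j, i] = 1
    return M

group = [perm_matrix(p) for p in permutations(range(3))]   # S_3 acting on C^3

P = sum(group) / len(group)             # the averaged operator (1/|G|) sum_s rho(s)
assert np.allclose(P @ P, P)            # it is a projection
print(round(np.trace(P)))               # 1: the trivial representation occurs once
print(np.linalg.matrix_rank(P))         # its image is the line spanned by (1, 1, 1)
```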
Example. Let formula_273 be the permutation group on three elements. Let formula_274 be a linear representation of formula_275 defined on the generating elements as follows:
formula_276
This representation can be decomposed at first glance into the left-regular representation of formula_277, which is denoted by formula_97 in the following, and the representation formula_278 with
formula_279
With the help of the irreducibility criterion taken from the next chapter, we can see that formula_280 is irreducible but formula_97 is not. This is because (in terms of the inner product from "Inner product and characters" below) we have formula_281
The subspace formula_282 of formula_283 is invariant with respect to the left-regular representation. Restricted to this subspace we obtain the trivial representation.
The orthogonal complement of formula_282 is formula_284 Restricted to this subspace, which is also formula_3–invariant as we have seen above, we obtain the representation formula_153 given by
formula_285
Again, we can use the irreducibility criterion of the next chapter to prove that formula_153 is irreducible. Now, formula_280 and formula_153 are isomorphic because formula_286 for all formula_287 in which formula_288 is given by the matrix
formula_289
A decomposition of formula_290 into irreducible subrepresentations is: formula_291 where formula_19 denotes the trivial representation and
formula_292
is the corresponding decomposition of the representation space.
We obtain the canonical decomposition by combining all the isomorphic irreducible subrepresentations: formula_293 is the formula_153-isotype of formula_23 and consequently the canonical decomposition is given by
formula_294
The theorems above are in general not valid for infinite groups. This will be demonstrated by the following example: let
formula_295
Together with the matrix multiplication formula_3 is an infinite group. formula_3 acts on formula_296 by matrix-vector multiplication. We consider the representation formula_297 for all formula_298 The subspace formula_299 is a formula_3-invariant subspace. However, there exists no formula_3-invariant complement to this subspace. The assumption that such a complement exists would entail that every matrix is diagonalizable over formula_300 This is known to be wrong and thus yields a contradiction.
In other words, if we consider infinite groups, it is possible that a representation, even one that is not irreducible, cannot be decomposed into a direct sum of irreducible subrepresentations.
Character theory.
Definitions.
The "character" of a representation formula_7 is defined as the map
formula_301 in which formula_302 denotes the trace of the linear map formula_303
Even though the character is a map between two groups, it is not in general a group homomorphism, as the following example shows.
Let formula_304 be the representation defined by:
formula_305
The character formula_306 is given by
formula_307
Characters of permutation representations are particularly easy to compute. If "V" is the "G"-representation corresponding to the left action of formula_3 on a finite set formula_33, then
formula_308
For example, the character of the regular representation formula_309 is given by
formula_310
where formula_311 denotes the neutral element of formula_10
Properties.
A crucial property of characters is the formula
formula_312
This formula follows from the fact that the trace of a product "AB" of two square matrices is the same as the trace of "BA". Functions formula_313 satisfying such a formula are called class functions. Put differently, class functions and in particular characters are constant on each conjugacy class formula_314
It also follows from elementary properties of the trace that formula_315 is the sum of the eigenvalues of formula_21 with multiplicity. If the degree of the representation is "n", then the sum is "n" long. If "s" has order "m", these eigenvalues are all "m"-th roots of unity. This fact can be used to show that formula_316 and it also implies formula_317
Since the trace of the identity matrix is the number of rows, formula_318 where formula_311 is the neutral element of formula_3 and "n" is the dimension of the representation. In general, formula_319 is a normal subgroup in formula_10
The following identities show how the characters formula_320 of two given representations formula_321 give rise to characters of related representations: the character of the direct sum is the sum of the characters; the character of the tensor product is the product of the characters; the character of the dual representation is the complex conjugate of the character; and, for a representation with character "χ", the characters of the symmetric square and the alternating square at an element "s" are ("χ"("s")² + "χ"("s"²))/2 and ("χ"("s")² − "χ"("s"²))/2, respectively.
By construction, there is a direct sum decomposition of formula_322. On characters, this corresponds to the fact that the sum of the last two expressions above is formula_323, the character of formula_324.
Inner product and characters.
In order to show some particularly interesting results about characters, it is rewarding to consider a more general type of functions on groups:
Definition (Class functions). A function formula_325 is called a class function if it is constant on conjugacy classes of formula_3, i.e.
formula_326
Note that every character is a class function, as the trace of a matrix is preserved under conjugation.
The set of all class functions is a formula_79–algebra and is denoted by formula_327. Its dimension is equal to the number of conjugacy classes of formula_10
Proofs of the following results of this chapter may be found in [1], [2] and [3].
An inner product can be defined on the set of all class functions on a finite group:
formula_328
Orthonormal property. If formula_329 are the distinct irreducible characters of formula_3, they form an orthonormal basis for the vector space of all class functions with respect to the inner product defined above, i.e.
formula_330
One might verify that the irreducible characters generate formula_331 by showing that there exists no nonzero class function which is orthogonal to all the irreducible characters. For formula_332 a representation and formula_105 a class function, denote formula_333 Then for formula_332 irreducible, we have formula_334 from Schur's lemma. Suppose formula_335 is a class function which is orthogonal to all the characters. Then by the above we have formula_336 whenever formula_332 is irreducible. But then it follows that formula_336 for all formula_332, by decomposability. Take formula_332 to be the regular representation. Applying formula_337 to some particular basis element formula_338, we get formula_339. Since this is true for all formula_338, we have formula_340
It follows from the orthonormal property that the number of non-isomorphic irreducible representations of a group formula_3 is equal to the number of conjugacy classes of formula_10
Furthermore, a class function on formula_3 is a character of formula_3 if and only if it can be written as a linear combination of the distinct irreducible characters formula_341 with non-negative integer coefficients: if formula_342 is a class function on formula_3 such that formula_343 where the formula_344 are non-negative integers, then formula_342 is the character of the direct sum formula_345 of the representations formula_346 corresponding to formula_347 Conversely, it is always possible to write any character as a sum of irreducible characters.
The inner product defined above can be extended on the set of all formula_103-valued functions formula_83 on a finite group:
formula_328
A symmetric bilinear form can also be defined on formula_348
formula_349
These two forms agree on the set of characters. If there is no danger of confusion, the index of both forms formula_350 and formula_351 will be omitted.
Let formula_352 be two formula_77–modules. Note that formula_77–modules are simply representations of formula_3. Since the orthonormal property yields that the number of irreducible representations of formula_3 is exactly the number of its conjugacy classes, there are exactly as many simple formula_77–modules (up to isomorphism) as there are conjugacy classes of formula_10
We define formula_353 in which formula_354 is the vector space of all formula_3–linear maps. This form is bilinear with respect to the direct sum.
In the following, these bilinear forms will allow us to obtain some important results with respect to the decomposition and irreducibility of representations.
For instance, let formula_355 and formula_356 be the characters of formula_208 and formula_357 respectively. Then formula_358
It is possible to derive the following theorem from the results above, along with Schur's lemma and the complete reducibility of representations.
Theorem. Let formula_1 be a linear representation of formula_3 with character formula_359 Let formula_360 where formula_361 are irreducible. Let formula_362 be an irreducible representation of formula_3 with character formula_363 Then the number of subrepresentations formula_361 which are isomorphic to formula_114 is independent of the given decomposition and is equal to the inner product formula_364 i.e. the formula_153–isotype formula_365 of formula_1 is independent of the choice of decomposition. We also get:
formula_366
and thus
formula_367
Corollary. Two representations with the same character are isomorphic. This means that every representation is determined by its character.
This yields a very useful criterion for analysing representations:
Irreducibility criterion. If formula_368 is the character of the representation formula_115 then formula_369 Moreover, formula_370 holds if and only if formula_1 is irreducible.
Therefore, using the first theorem, the characters of irreducible representations of formula_3 form an orthonormal set on formula_331 with respect to this inner product.
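To illustrate the irreducibility criterion with a standard example (in plain notation): let \chi be the character of the permutation representation of the symmetric group \mathrm{S}_3 on \Complex^3, so \chi takes the value 3 on the identity, 1 on the three transpositions and 0 on the two 3-cycles. Then
(\chi|\chi)=\frac{1}{6}\bigl(1\cdot 3^2+3\cdot 1^2+2\cdot 0^2\bigr)=2,
so this representation is not irreducible; in fact it decomposes into the trivial representation and a two-dimensional irreducible representation.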
Corollary. Let formula_1 be a vector space with formula_371 A given irreducible representation formula_1 of formula_3 is contained formula_372 times in the regular representation. In other words, if formula_309 denotes the regular representation of formula_3 then we have: formula_373 in which formula_374 is the set of all irreducible representations of formula_3 that are pairwise non-isomorphic.
In terms of the group algebra, this means that formula_375 as algebras.
As a numerical result we get:
formula_376
in which formula_309 is the regular representation and formula_377 and formula_378 are the characters corresponding to formula_379 and formula_380 respectively. Recall that formula_311 denotes the neutral element of the group.
This formula is a "necessary and sufficient" condition for the problem of classifying the irreducible representations of a group up to isomorphism. It provides us with the means to check whether we found all the isomorphism classes of irreducible representations of a group.
Similarly, by using the character of the regular representation evaluated at formula_381 we get the equation:
formula_382
Using the description of representations via the convolution algebra we achieve an equivalent formulation of these equations:
The Fourier inversion formula:
formula_383
In addition, the Plancherel formula holds:
formula_384
In both formulas, formula_12 is a linear representation of a group formula_385 and formula_386
The corollary above has an additional consequence:
Lemma. Let formula_3 be a group. Then the following statements are equivalent:
* formula_3 is abelian.
* Every function on formula_3 is a class function.
* All irreducible representations of formula_3 have degree formula_387
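As an illustration of this lemma (in plain notation): for the cyclic group \Z/n\Z the irreducible representations are exactly the n characters
\chi_k(m)=e^{2\pi i k m/n},\qquad k=0,\dots,n-1,
all of degree one.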
The induced representation.
As was shown in the section on properties of linear representations, we can - by restriction - obtain a representation of a subgroup starting from a representation of a group. Naturally we are interested in the reverse process: Is it possible to obtain the representation of a group starting from a representation of a subgroup? We will see that the induced representation defined below provides us with the necessary concept. Admittedly, this construction is not inverse but rather adjoint to the restriction.
Definitions.
Let formula_388 be a linear representation of formula_10 Let formula_143 be a subgroup and formula_389 the restriction. Let formula_114 be a subrepresentation of formula_390 We write formula_391 to denote this representation. Let formula_111 The vector space formula_392 depends only on the left coset formula_393 of formula_394 Let formula_309 be a representative system of formula_395 then
formula_396
is a subrepresentation of formula_397
A representation formula_23 of formula_3 in formula_398 is called induced by the representation formula_399 of formula_143 in formula_400 if
formula_401
Here formula_309 denotes a representative system of formula_402 and formula_403 for all formula_404 and for all formula_405 In other words: the representation formula_152 is induced by formula_406 if every formula_407 can be written uniquely as
formula_408
where formula_409 for every formula_405
We denote the representation formula_23 of formula_3 which is induced by the representation formula_399 of formula_143 as formula_410 or in short formula_411 if there is no danger of confusion. The representation space itself is frequently used instead of the representation map, i.e. formula_412 or formula_413 if the representation formula_1 is induced by formula_58
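Two standard instances may help to anchor the definition (in plain notation; they are not spelled out at this point of the article): inducing the trivial representation of a subgroup yields the permutation representation on the cosets, and inducing from the trivial subgroup yields the regular representation,
\operatorname{Ind}_H^G(\mathbf 1)\;\cong\;\Complex[G/H],\qquad \operatorname{Ind}_{\{e\}}^G(\mathbf 1)\;\cong\;\Complex[G].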
Alternative description of the induced representation.
By using the group algebra we obtain an alternative description of the induced representation:
Let formula_3 be a group, formula_1 a formula_77–module and formula_114 a formula_414–submodule of formula_1 corresponding to the subgroup formula_143 of formula_10 We say that formula_1 is induced by formula_114 if formula_415 in which formula_3 acts on the first factor: formula_416 for all formula_417
Properties.
The results introduced in this section will be presented without proof. These may be found in [1] and [2].
Uniqueness and existence of the induced representation. Let formula_418 be a linear representation of a subgroup formula_143 of formula_10 Then there exists a linear representation formula_12 of formula_156 which is induced by formula_419 Note that this representation is unique up to isomorphism.
Transitivity of induction. Let formula_114 be a representation of formula_143 and let formula_420 be an ascending series of groups. Then we have
formula_421
Lemma. Let formula_12 be induced by formula_422 and let formula_423 be a linear representation of formula_10 Now let formula_424 be a linear map satisfying the property that formula_425 for all formula_426 Then there exists a uniquely determined linear map formula_427 which extends formula_128 and for which formula_428 is valid for all formula_111
This means that if we interpret formula_429 as a formula_430–module, we have formula_431 where formula_432 is the vector space of all formula_430–homomorphisms of formula_398 to formula_433 The same is valid for formula_434
Induction on class functions. In the same way as it was done with representations, we can - by "induction" - obtain a class function on the group from a class function on a subgroup. Let formula_342 be a class function on formula_145 We define a function formula_435 on formula_3 by
formula_436
We say formula_435 is "induced" by formula_342 and write formula_437 or formula_438
Proposition. The function formula_439 is a class function on formula_10 If formula_342 is the character of a representation formula_114 of formula_440 then formula_439 is the character of the induced representation formula_441 of formula_10
Lemma. If formula_442 is a class function on formula_143 and formula_342 is a class function on formula_156 then we have: formula_443
Theorem. Let formula_152 be the representation of formula_3 induced by the representation formula_422 of the subgroup formula_145 Let formula_306 and formula_444 be the corresponding characters. Let formula_309 be a representative system of formula_445 The induced character is given by
formula_446
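As a worked example of this formula (a standard computation, in plain notation): take G=\mathrm{S}_3, H=\mathrm{A}_3=\{e,(123),(132)\} and let \chi be the character of \mathrm{A}_3 with \chi((123))=\omega=e^{2\pi i/3}. Then the induced character takes the values
\chi_{\operatorname{Ind}}(e)=2,\qquad \chi_{\operatorname{Ind}}\bigl((12)\bigr)=0,\qquad \chi_{\operatorname{Ind}}\bigl((123)\bigr)=\omega+\omega^{2}=-1,
since no conjugate of a transposition lies in \mathrm{A}_3; this is precisely the character of the two-dimensional irreducible representation of \mathrm{S}_3.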
Frobenius reciprocity.
In brief, the lesson to take from Frobenius reciprocity is that the maps formula_447 and formula_448 are adjoint to each other.
If formula_114 is an irreducible representation of formula_143 and formula_1 is an irreducible representation of formula_156 then Frobenius reciprocity tells us that formula_114 is contained in formula_149 as often as formula_441 is contained in formula_11
Frobenius reciprocity. If formula_449 and formula_450 we have formula_451
This statement is also valid for the inner product.
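Frobenius reciprocity can be checked directly in the worked example at the end of the previous subsection (same plain notation): with W the \mathrm{A}_3-representation of character \chi and V the two-dimensional irreducible representation of \mathrm{S}_3,
\bigl(\chi_{\operatorname{Ind}(W)}\,\big|\,\chi_V\bigr)_{\mathrm{S}_3}=1=\bigl(\chi_W\,\big|\,\chi_{\operatorname{Res}(V)}\bigr)_{\mathrm{A}_3},
since \operatorname{Ind}(W) is isomorphic to V, while \operatorname{Res}(V) decomposes into the two nontrivial characters of \mathrm{A}_3.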
Mackey's irreducibility criterion.
George Mackey established a criterion to verify the irreducibility of induced representations. For this we will first need some definitions and some specifications with respect to the notation.
Two representations formula_208 and formula_133 of a group formula_3 are called disjoint, if they have no irreducible component in common, i.e. if formula_452
Let formula_3 be a group and let formula_143 be a subgroup. We define formula_453 for formula_138 Let formula_454 be a representation of the subgroup formula_145 This defines by restriction a representation formula_455 of formula_456 We write formula_457 for formula_458 We also define another representation formula_459 of formula_460 by formula_461 These two representations are not to be confused.
Mackey's irreducibility criterion. The induced representation formula_462 is irreducible if and only if the following conditions are satisfied:
* formula_114 is irreducible
* For each formula_463 the two representations formula_459 and formula_457 of formula_460 are disjoint.
For the case of formula_143 normal, we have formula_464 and formula_465. Thus we obtain the following:
Corollary. Let formula_143 be a normal subgroup of formula_10 Then formula_466 is irreducible if and only if formula_23 is irreducible and not isomorphic to the conjugates formula_459 for formula_467
Applications to special groups.
In this section we present some applications of the so far presented theory to normal subgroups and to a special group, the semidirect product of a subgroup with an abelian normal subgroup.
Proposition. Let formula_468 be a normal subgroup of the group formula_3 and let formula_72 be an irreducible representation of formula_10 Then one of the following statements must hold:
* either there exists a proper subgroup formula_143 of formula_3 containing formula_468, and an irreducible representation formula_280 of formula_143 which induces formula_23,
* or formula_1 is an isotypic formula_469-module.
Proof. Consider formula_1 as a formula_469-module, and decompose it into isotypes as formula_470. If this decomposition is trivial, we are in the second case. Otherwise, the larger formula_3-action permutes these isotypic modules; because formula_1 is irreducible as a formula_471-module, the permutation action is transitive (in fact, primitive). Fix any formula_472; the stabilizer in formula_3 of formula_473 is easily seen to have the claimed properties. formula_474
Note that if formula_468 is abelian, then the irreducible formula_468-modules occurring here have degree one, so the elements of formula_468 act on each isotypic module by homotheties.
We also obtain the following
Corollary. Let formula_468 be an abelian normal subgroup of formula_3 and let formula_153 be any irreducible representation of formula_10 We denote by formula_475 the index of formula_468 in formula_10 Then formula_476[1]
If formula_468 is an abelian subgroup of formula_3 (not necessarily normal), generally formula_477 is not satisfied, but nevertheless formula_478 is still valid.
Classification of representations of a semidirect product.
In the following, let formula_479 be a semidirect product such that the normal semidirect factor, formula_468, is abelian. The irreducible representations of such a group formula_156 can be classified by showing that all irreducible representations of formula_3 can be constructed from certain subgroups of formula_143. This is the so-called method of “little groups” of Wigner and Mackey.
Since formula_468 is abelian, the irreducible characters of formula_468 have degree one and form the group formula_480 The group formula_3 acts on formula_481 by formula_482 for formula_483
Let formula_484 be a system of representatives of the orbits of formula_143 in formula_485 For every formula_486 let formula_487 This is a subgroup of formula_145 Let formula_488 be the corresponding subgroup of formula_10 We now extend the function formula_341 onto formula_489 by formula_490 for formula_491 Thus, formula_341 is a class function on formula_492 Moreover, since formula_493 for all formula_494 it can be shown that formula_341 is a group homomorphism from formula_489 to formula_495 Therefore, we have a representation of formula_489 of degree one which is equal to its own character.
Now let formula_23 be an irreducible representation of formula_496 Then we obtain an irreducible representation formula_497 of formula_498 by composing formula_23 with the canonical projection formula_499 Finally, we construct the tensor product of formula_341 and formula_500 Thus, we obtain an irreducible representation formula_501 of formula_492
To finally obtain the classification of the irreducible representations of formula_3 we use the representation formula_502 of formula_156 which is induced by the tensor product formula_503 Thus, we achieve the following result:
Proposition.
* formula_502 is irreducible.
* If formula_502 and formula_504 are isomorphic, then formula_505 and additionally formula_23 is isomorphic to formula_506
* Every irreducible representation of formula_3 is isomorphic to one of the formula_507
The proof of the proposition uses, among other things, Mackey's criterion and a consequence of Frobenius reciprocity. Further details may be found in [1].
In other words, we have classified all irreducible representations of formula_508
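A standard illustration of the method (not carried out in the article, in plain notation) is the dihedral group D_n, viewed as the semidirect product of the abelian normal subgroup \Z/n\Z with \Z/2\Z acting by inversion. The characters \chi_k of \Z/n\Z fall into orbits \{\chi_k,\chi_{-k}\}: for \chi_k\neq\chi_{-k} the little group is trivial and the construction yields a two-dimensional irreducible induced representation, while for \chi_0 (and \chi_{n/2} when n is even) the little group is \Z/2\Z and one obtains representations of degree one. Counting dimensions recovers the familiar classification:
n \text{ odd: } 2\cdot 1^2+\tfrac{n-1}{2}\cdot 2^2=2n=|D_n|, \qquad n \text{ even: } 4\cdot 1^2+\tfrac{n-2}{2}\cdot 2^2=2n=|D_n|.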
Representation ring.
The representation ring of formula_3 is defined as the abelian group
formula_509
With the multiplication provided by the tensor product, formula_510 becomes a ring. The elements of formula_510 are called virtual representations.
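A small example (standard, in plain notation): for the cyclic group \Z/n\Z the irreducible representations are the characters \chi^0,\dots,\chi^{n-1} with \chi^i\otimes\chi^j=\chi^{i+j}, so
R(\Z/n\Z)\;\cong\;\Z[X]/(X^{n}-1),\qquad \chi\mapsto X.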
The character defines a ring homomorphism into the ring of all class functions on formula_3 with complex values
formula_511
in which the formula_341 are the irreducible characters corresponding to the formula_256
Because a representation is determined by its character, formula_368 is injective. The elements of the image of formula_368 are called virtual characters.
As the irreducible characters form an orthonormal basis of formula_512 induces an isomorphism
formula_513
This isomorphism is defined on a basis of elementary tensors formula_514 by formula_515 and formula_516 respectively, and extended bilinearly.
We write formula_517 for the set of all characters of formula_3 and formula_518 to denote the group generated by formula_519 i.e. the set of all differences of two characters. It then holds that formula_520 and formula_521 Thus, we have formula_522 and the virtual characters correspond exactly to the virtual representations.
Since formula_523 holds, formula_518 is the set of all virtual characters. As the product of two characters provides another character, formula_518 is a subring of the ring formula_524 of all class functions on formula_10 Because the formula_525 form a basis of formula_331 we obtain, just as in the case of formula_526 an isomorphism formula_527
Let formula_143 be a subgroup of formula_10 The restriction thus defines a ring homomorphism formula_528 which will be denoted by formula_529 or formula_530 Likewise, the induction on class functions defines a homomorphism of abelian groups formula_531 which will be written as formula_532 or in short formula_533
According to the Frobenius reciprocity, these two homomorphisms are adjoint with respect to the bilinear forms formula_534 and formula_535 Furthermore, the formula formula_536 shows that the image of formula_537 is an ideal of the ring formula_538
By the restriction of representations, the map formula_447 can be defined analogously for formula_526 and by the induction we obtain the map formula_448 for formula_539 Due to the Frobenius reciprocity, we get the result that these maps are adjoint to each other and that the image formula_540 is an ideal of the ring formula_539
If formula_468 is a commutative ring, the homomorphisms formula_447 and formula_448 may be extended to formula_468–linear maps:
formula_541
in which formula_542 are all the irreducible representations of formula_143 up to isomorphism.
With formula_543 we obtain in particular that formula_448 and formula_447 supply homomorphisms between formula_331 and formula_544
Let formula_195 and formula_545 be two groups with respective representations formula_546 and formula_547 Then, formula_231 is the representation of the direct product formula_230 as was shown in a previous section. Another result of that section was that all irreducible representations of formula_230 are exactly the representations formula_548 where formula_549 and formula_550 are irreducible representations of formula_195 and formula_196 respectively. This passes over to the representation ring as the identity formula_551 in which formula_552 is the tensor product of the representation rings as formula_553–modules.
Induction theorems.
Induction theorems relate the representation ring of a given finite group "G" to representation rings of a family "X" of subgroups "H" of "G". More precisely, for such a collection of subgroups, the induction functor yields a map
formula_554; induction theorems give criteria for the surjectivity of this map or closely related ones.
"Artin's induction theorem" is the most elementary theorem in this group of results. It asserts that the following are equivalent:
Since formula_518 is finitely generated as a group, the first point can be rephrased as follows:
gives two proofs of this theorem. For example, since "G" is the union of its cyclic subgroups, every character of formula_3 is a linear combination with rational coefficients of characters induced by characters of cyclic subgroups of formula_10 Since the representations of cyclic groups are well-understood, in particular the irreducible representations are one-dimensional, this gives a certain control over representations of "G".
Under the above circumstances, it is not in general true that formula_342 is surjective. "Brauer's induction theorem" asserts that formula_342 is surjective, provided that "X" is the family of all "elementary subgroups".
Here a group "H" is elementary if there is some prime "p" such that "H" is the direct product of a cyclic group of order prime to formula_560 and a formula_560–group.
In other words, every character of formula_3 is a linear combination with integer coefficients of characters induced by characters of elementary subgroups.
The elementary subgroups "H" arising in Brauer's theorem have a richer representation theory than cyclic groups, they at least have the property that any irreducible representation for such "H" is induced by a one-dimensional representation of a (necessarily also elementary) subgroup formula_561. (This latter property can be shown to hold for any supersolvable group, which includes nilpotent groups and, in particular, elementary groups.) This ability to induce representations from degree 1 representations has some further consequences in the representation theory of finite groups.
Real representations.
For proofs and more information about representations over general subfields of formula_79 please refer to [2].
If a group formula_3 acts on a real vector space formula_562 the corresponding representation on the complex vector space formula_563 is called real (formula_1 is called the complexification of formula_564). The corresponding representation mentioned above is given by formula_565 for all formula_566
Let formula_23 be a real representation. The linear map formula_21 is formula_567-valued for all formula_111 Thus, we can conclude that the character of a real representation is always real-valued. But not every representation with a real-valued character is real. To make this clear, let formula_3 be a finite, non-abelian subgroup of the group
formula_568
Then formula_569 acts on formula_570 Since the trace of any matrix in formula_571 is real, the character of the representation is real-valued. If formula_23 were a real representation, then formula_572 would consist only of real-valued matrices. Thus, formula_573 However, the circle group is abelian, but formula_3 was chosen to be a non-abelian group. Now we only need to prove the existence of a non-abelian, finite subgroup of formula_574 To find such a group, observe that formula_571 can be identified with the units of the quaternions. Now let formula_575 The following two-dimensional representation of formula_3 is not real-valued, but has a real-valued character:
formula_576
Then the image of formula_23 is not real-valued, but nevertheless it is a subset of formula_574 Thus, the character of the representation is real.
Lemma. An irreducible representation formula_1 of formula_3 is real if and only if there exists a nondegenerate symmetric bilinear form formula_577 on formula_1 preserved by formula_10
An irreducible representation of formula_3 on a real vector space can become reducible when extending the field to formula_0 For example, the following real representation of the cyclic group is reducible when considered over formula_103
formula_578
Therefore, classifying all the irreducible representations that are real over formula_134 does not yet classify all the irreducible real representations. But we achieve the following:
Let formula_564 be a real vector space. Let formula_3 act irreducibly on formula_564 and let formula_579 If formula_1 is not irreducible, there are exactly two irreducible factors which are complex conjugate representations of formula_10
Definition. A quaternionic representation is a (complex) representation formula_115 which possesses a formula_3–invariant anti-linear homomorphism formula_580 satisfying formula_581 Thus, a skew-symmetric, nondegenerate formula_3–invariant bilinear form defines a quaternionic structure on formula_11
Theorem. An irreducible representation formula_1 is one and only one of the following:
(i) complex: formula_582 is not real-valued and there exists no formula_3–invariant nondegenerate bilinear form on formula_11
(ii) real: formula_583 a real representation; formula_1 has a formula_3–invariant nondegenerate symmetric bilinear form.
(iii) quaternionic: formula_582 is real, but formula_1 is not real; formula_1 has a formula_3–invariant skew-symmetric nondegenerate bilinear form.
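In practice the three cases can be told apart by the Frobenius-Schur indicator, a standard tool not introduced in this article: for an irreducible character \chi one has
\frac{1}{|G|}\sum_{s\in G}\chi(s^{2})=\begin{cases}\ \ 1 & \text{in the real case,}\\ \ \ 0 & \text{in the complex case,}\\ -1 & \text{in the quaternionic case.}\end{cases}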
Representations of particular groups.
Symmetric groups.
Representations of the symmetric groups formula_584 have been intensely studied. Conjugacy classes in formula_584 (and therefore, by the above, irreducible representations) correspond to partitions of "n". For example, formula_585 has three irreducible representations, corresponding to the partitions
3; 2+1; 1+1+1
of 3. For such a partition, a Young tableau is a graphical device depicting the partition. The irreducible representation corresponding to such a partition (or Young tableau) is called a Specht module.
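For \mathrm{S}_3, the three Specht modules have the following characters on the conjugacy classes of the identity, the transpositions and the 3-cycles (a standard table, given here in plain notation for orientation):
\chi_{\text{trivial}}=(1,1,1),\qquad \chi_{\text{sign}}=(1,-1,1),\qquad \chi_{\text{standard}}=(2,0,-1).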
Representations of different symmetric groups are related: any representation of formula_586 yields a representation of formula_587 by induction, and vice versa by restriction. The direct sum of all these representation rings
formula_588
inherits from these constructions the structure of a Hopf algebra which, it turns out, is closely related to symmetric functions.
Finite groups of Lie type.
To a certain extent, the representations of the formula_589, as "n" varies, have a similar flavor to those of the formula_584; the above-mentioned induction process gets replaced by so-called parabolic induction. However, unlike for formula_584, where all representations can be obtained by induction of trivial representations, this is not true for formula_589. Instead, new building blocks, known as cuspidal representations, are needed.
Representations of formula_589 and, more generally, representations of finite groups of Lie type have been thoroughly studied; in particular, the representations of formula_590 have been described explicitly. A geometric description of irreducible representations of such groups, including the above-mentioned cuspidal representations, is obtained by Deligne-Lusztig theory, which constructs such representations in the l-adic cohomology of Deligne-Lusztig varieties.
The similarity of the representation theory of formula_584 and formula_589 goes beyond finite groups. The philosophy of cusp forms highlights the kinship of representation-theoretic aspects of these types of groups with that of general linear groups over local fields such as Q"p" and over the ring of adeles.
Outlook—Representations of compact groups.
The theory of representations of compact groups may be, to some degree, extended to locally compact groups. In this context, representation theory is of great importance for harmonic analysis and the study of automorphic forms. For proofs, further information and a more detailed treatment, which are beyond the scope of this section, please consult [4] and [5].
Definition and properties.
A topological group is a group together with a topology with respect to which the group composition and the inversion are continuous.
Such a group is called compact, if any cover of formula_156 which is open in the topology, has a finite subcover. Closed subgroups of a compact group are compact again.
Let formula_3 be a compact group and let formula_1 be a finite-dimensional formula_103–vector space. A linear representation of formula_3 to formula_1 is a continuous group homomorphism formula_591 i.e. formula_592 is a continuous function in the two variables formula_74 and formula_593
A linear representation of formula_3 into a Banach space formula_1 is defined to be a continuous group homomorphism of formula_3 into the set of all bijective bounded linear operators on formula_1 with a continuous inverse. Since formula_594 we can do without the last requirement. In the following, we will consider in particular representations of compact groups in Hilbert spaces.
Just as with finite groups, we can define the group algebra and the convolution algebra. However, the group algebra provides no helpful information in the case of infinite groups, because the continuity condition gets lost during the construction. Instead the convolution algebra formula_83 takes its place.
Most properties of representations of finite groups can be transferred with appropriate changes to compact groups. For this we need a counterpart to the summation over a finite group:
Existence and uniqueness of the Haar measure.
On a compact group formula_3 there exists exactly one measure formula_595 such that:
formula_596
formula_597
Such a left-translation-invariant, normed measure is called the Haar measure of the group formula_10
Since formula_3 is compact, it is possible to show that this measure is also right-translation-invariant, i.e. we also have
formula_598
With the normalization above, the Haar measure on a finite group is given by formula_599 for all formula_111
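A basic non-finite example (standard, in plain notation) is the circle group, whose Haar measure is the normalized arc-length measure:
\int_{S^1} f\,d\mu=\frac{1}{2\pi}\int_0^{2\pi} f\bigl(e^{i\theta}\bigr)\,d\theta.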
All the definitions for representations of finite groups that are mentioned in the section "Properties" also apply to representations of compact groups, but some modifications are needed:
To define a subrepresentation we now need a closed subspace. This was not necessary for finite-dimensional representation spaces, because in this case every subspace is already closed. Furthermore, two representations formula_600 of a compact group formula_3 are called equivalent, if there exists a bijective, continuous, linear operator formula_112 between the representation spaces whose inverse is also continuous and which satisfies formula_601 for all formula_111
If formula_112 is unitary, the two representations are called unitary equivalent.
To obtain a formula_3–invariant inner product from one that is not formula_3–invariant, we now have to use the integral over formula_3 instead of the sum. If formula_602 is an inner product on a Hilbert space formula_115 which is not invariant with respect to the representation formula_23 of formula_156 then
formula_603
is a formula_3–invariant inner product on formula_1 due to the properties of the Haar measure formula_604 Thus, we can assume every representation on a Hilbert space to be unitary.
Let formula_3 be a compact group and let formula_111 Let formula_605 be the Hilbert space of the square integrable functions on formula_10 We define the operator formula_606 on this space by formula_607 where formula_608
The map formula_609 is a unitary representation of formula_10 It is called left-regular representation. The right-regular representation is defined similarly. As the Haar measure of formula_3 is also right-translation-invariant, the operator formula_610 on formula_605 is given by formula_611 The right-regular representation is then the unitary representation given by formula_612 The two representations formula_609 and formula_613 are dual to each other.
If formula_3 is infinite, these representations have no finite degree. The left- and right-regular representation as defined at the beginning are isomorphic to the left- and right-regular representation as defined above, if the group formula_3 is finite. This is due to the fact that in this case formula_614
Constructions and decompositions.
The different ways of constructing new representations from given ones can be used for compact groups as well, except for the dual representation with which we will deal later. The direct sum and the tensor product with a finite number of summands/factors are defined in exactly the same way as for finite groups. This is also the case for the symmetric and alternating square. However, we need a Haar measure on the direct product of compact groups in order to extend the theorem saying that the irreducible representations of the product of two groups are (up to isomorphism) exactly the tensor product of the irreducible representations of the factor groups. First, we note that the direct product formula_230 of two compact groups is again a compact group when provided with the product topology. The Haar measure on the direct product is then given by the product of the Haar measures on the factor groups.
For the dual representation on compact groups we require the topological dual formula_429 of the vector space formula_11 This is the vector space of all continuous linear functionals from the vector space formula_1 into the base field. Let formula_97 be a representation of a compact group formula_3 in formula_11
The dual representation formula_615 is defined by the property
formula_616
Thus, we can conclude that the dual representation is given by formula_617 for all formula_618 The map formula_619 is again a continuous group homomorphism and thus a representation.
On Hilbert spaces: formula_97 is irreducible if and only if formula_619 is irreducible.
By transferring the results of the section decompositions to compact groups, we obtain the following theorems:
Theorem. Every irreducible representation formula_620 of a compact group into a Hilbert space is finite-dimensional and there exists an inner product on formula_621 such that formula_153 is unitary. Since the Haar measure is normalized, this inner product is unique.
Every representation of a compact group is isomorphic to a direct Hilbert sum of irreducible representations.
Let formula_152 be a unitary representation of the compact group formula_10 Just as for finite groups we define for an irreducible representation formula_622 the isotype or isotypic component in formula_23 to be the subspace
formula_623
This is the sum of all invariant closed subspaces formula_624 which are formula_3–isomorphic to formula_625
Note that the isotypes of not equivalent irreducible representations are pairwise orthogonal.
Theorem.
(i) formula_154 is a closed invariant subspace of formula_397
(ii) formula_154 is formula_3–isomorphic to the direct sum of copies of formula_625
(iii) Canonical decomposition: formula_626 is the direct Hilbert sum of the isotypes formula_627 in which formula_153 passes through all the isomorphism classes of the irreducible representations.
The corresponding projection to the canonical decomposition formula_628 in which formula_365 is an isotype of formula_115 is for compact groups given by
formula_629
where formula_630 and formula_631 is the character corresponding to the irreducible representation formula_155
Projection formula.
For every representation formula_632 of a compact group formula_3 we define
formula_633
In general formula_258 is not formula_3–linear. Let
formula_634
The map formula_260 is defined as an endomorphism of formula_1 by the property
formula_635
which is valid for the inner product of the Hilbert space formula_11
Then formula_260 is formula_3–linear, because of
formula_636
where we used the invariance of the Haar measure.
Proposition. The map formula_260 is a projection from formula_1 to formula_262
If the representation is finite-dimensional, it is possible to determine the direct sum of the trivial subrepresentation just as in the case of finite groups.
Characters, Schur's lemma and the inner product.
Generally, representations of compact groups are investigated on Hilbert and Banach spaces. In most cases they are not finite-dimensional. Therefore, it is not useful to refer to characters when speaking about representations of compact groups. Nevertheless, in most cases it is possible to restrict the study to the case of finite dimensions:
Since irreducible representations of compact groups are finite-dimensional and unitary (see results from the first subsection), we can define irreducible characters in the same way as it was done for finite groups.
As long as the constructed representations stay finite-dimensional, the characters of the newly constructed representations may be obtained in the same way as for finite groups.
Schur's lemma is also valid for compact groups:
Let formula_637 be an irreducible unitary representation of a compact group formula_10 Then every bounded operator formula_638 satisfying the property formula_639 for all formula_640 is a scalar multiple of the identity, i.e. there exists formula_641 such that formula_642
Definition. The formula
formula_643
defines an inner product on the set of all square integrable functions formula_605 of a compact group formula_10 Likewise
formula_644
defines a bilinear form on formula_605 of a compact group formula_10
The bilinear form on the representation spaces is defined exactly as it was for finite groups and analogous to finite groups the following results are therefore valid:
Theorem. Let formula_368 and formula_645 be the characters of two non-isomorphic irreducible representations formula_1 and formula_646 respectively. Then the following is valid
*formula_647
*formula_648 i.e. formula_368 has "norm" formula_387
Theorem. Let formula_1 be a representation of formula_3 with character formula_649 Suppose formula_114 is an irreducible representation of formula_3 with character formula_650 The number of subrepresentations of formula_1 equivalent to formula_114 is independent of any given decomposition for formula_1 and is equal to the inner product formula_651
Irreducibility Criterion. If formula_368 is the character of the representation formula_115 then formula_652 is a positive integer. Moreover, formula_370 holds if and only if formula_1 is irreducible.
Therefore, using the first theorem, the characters of irreducible representations of formula_3 form an orthonormal set on formula_605 with respect to this inner product.
Corollary. Every irreducible representation formula_1 of formula_3 is contained formula_653 times in the left-regular representation.
Lemma. Let formula_3 be a compact group. Then the following statements are equivalent:
* formula_3 is abelian.
* All the irreducible representations of formula_3 have degree formula_387
Orthonormal Property. Let formula_3 be a group. The characters of the non-isomorphic irreducible representations of formula_3 form an orthonormal basis of the space of square integrable class functions on formula_3 with respect to this inner product.
As we already know that these characters are orthonormal, we only need to verify that they generate this space. This may be done by proving that there exists no non-zero square integrable class function on formula_3 orthogonal to all the irreducible characters.
Just as in the case of finite groups, the number of the irreducible representations up to isomorphism of a group formula_3 equals the number of conjugacy classes of formula_10 However, because a compact group has in general infinitely many conjugacy classes, this does not provide any useful information.
The induced representation.
If formula_143 is a closed subgroup of finite index in a compact group formula_156 the definition of the induced representation for finite groups may be adopted.
However, the induced representation can be defined more generally, so that the definition is valid independent of the index of the subgroup formula_145
For this purpose let formula_655 be a unitary representation of the closed subgroup formula_145 The continuous induced representation formula_656 is defined as follows:
Let formula_657 denote the Hilbert space of all measurable, square integrable functions formula_658 with the property formula_659 for all formula_660 The norm is given by
formula_661
and the representation formula_662 is given as the right-translation: formula_663
The induced representation is then again a unitary representation.
Since formula_3 is compact, the induced representation can be decomposed into the direct sum of irreducible representations of formula_10 Note that all irreducible representations belonging to the same isotype appear with a multiplicity equal to formula_664
Let formula_12 be a representation of formula_156 then there exists a canonical isomorphism
formula_665
Frobenius reciprocity, together with the modified definitions of the inner product and of the bilinear form, carries over to compact groups. The theorem now holds for square integrable functions on formula_3 instead of class functions, but the subgroup formula_143 must be closed.
The Peter-Weyl Theorem.
Another important result in the representation theory of compact groups is the Peter-Weyl Theorem. It is usually presented and proven in harmonic analysis, as it represents one of its central and fundamental statements.
The Peter-Weyl Theorem. Let formula_3 be a compact group. For every irreducible representation formula_622 of formula_3 let formula_666 be an orthonormal basis of formula_625 We define the "matrix coefficients" formula_667 for formula_668 Then we have the following orthonormal basis of formula_605:
formula_669
We can reformulate this theorem to obtain a generalization of the Fourier series for functions on compact groups:
The Peter-Weyl Theorem (Second version). There exists a natural formula_670–isomorphism
formula_671
in which formula_672 is the set of all irreducible representations of formula_3 up to isomorphism and formula_621 is the representation space corresponding to formula_155 More concretely:
formula_673
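For the circle group this recovers classical Fourier analysis (a standard special case, in plain notation): every irreducible representation is one-dimensional, of the form \tau_n(e^{i\theta})=e^{in\theta} with n\in\Z, the matrix coefficients are exactly these exponentials, and the decomposition above becomes
L^2(S^1)=\widehat{\bigoplus_{n\in\Z}}\;\Complex\, e^{in\theta},
i.e. every square integrable function on the circle is the L^2-limit of its Fourier series.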
History.
The general features of the representation theory of a finite group "G", over the complex numbers, were discovered by Ferdinand Georg Frobenius in the years before 1900. Later the modular representation theory of Richard Brauer was developed.
References.
| [
{
"math_id": 0,
"text": "\\Complex."
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "G"
},
{
"math_id": 4,
"text": "\\rho: G\\to \\text{GL}(V)=\\text{Aut}(V)."
},
{
"math_id": 5,
"text": "\\text{GL}(V)"
},
{
"math_id": 6,
"text": "\\text{Aut}(V)"
},
{
"math_id": 7,
"text": "\\rho: G\\to \\text{GL}(V)"
},
{
"math_id": 8,
"text": "\\rho(st)=\\rho(s)\\rho(t)"
},
{
"math_id": 9,
"text": "s,t \\in G."
},
{
"math_id": 10,
"text": "G."
},
{
"math_id": 11,
"text": "V."
},
{
"math_id": 12,
"text": "(\\rho, V_\\rho)"
},
{
"math_id": 13,
"text": "\\rho: G\\to\\text{GL}(V_\\rho)"
},
{
"math_id": 14,
"text": "(\\rho, V)"
},
{
"math_id": 15,
"text": "\\dim (\\rho)"
},
{
"math_id": 16,
"text": "\\rho."
},
{
"math_id": 17,
"text": "\\rho(s)=\\text{Id}"
},
{
"math_id": 18,
"text": " s\\in G."
},
{
"math_id": 19,
"text": "1"
},
{
"math_id": 20,
"text": "\\rho:G\\to \\text{GL}_1 (\\Complex)=\\Complex^\\times = \\Complex \\setminus\\{0\\}."
},
{
"math_id": 21,
"text": "\\rho(s)"
},
{
"math_id": 22,
"text": "\\rho: G=\\Z/4\\Z \\to \\Complex ^\\times"
},
{
"math_id": 23,
"text": "\\rho"
},
{
"math_id": 24,
"text": "\\rho({0})=1."
},
{
"math_id": 25,
"text": "G, \\rho"
},
{
"math_id": 26,
"text": "\\rho(1)."
},
{
"math_id": 27,
"text": "\\rho({1})\\in\\{i,-1,-i\\}."
},
{
"math_id": 28,
"text": " \\begin{cases} \\rho_1({0})=1 \\\\ \\rho_1({1})=i \\\\ \\rho_1({2})=-1 \\\\ \\rho_1({3})=-i \\end{cases} \\qquad \\begin{cases} \\rho_2({0})=1 \\\\ \\rho_2({1})=-1 \\\\ \\rho_2({2})=1 \\\\ \\rho_2({3})=-1 \\end{cases} \\qquad \\begin{cases} \\rho_3({0})=1 \\\\ \\rho_3({1})=-i \\\\ \\rho_3({2})=-1 \\\\ \\rho_3({3})=i \\end{cases}"
},
{
"math_id": 29,
"text": "G=\\Z /2\\Z \\times\\Z /2\\Z "
},
{
"math_id": 30,
"text": "\\rho: G\\to\\text{GL}_2(\\Complex )"
},
{
"math_id": 31,
"text": " \\rho({0},{0})=\n \\begin{pmatrix}\n 1 & 0 \\\\\n 0 & 1\n \\end{pmatrix}, \\quad \n\\rho({1},{0})=\n \\begin{pmatrix}\n -1 & 0 \\\\\n 0 & -1\n \\end{pmatrix}, \\quad \n\\rho({0},{1})=\n \\begin{pmatrix}\n 0 & 1 \\\\\n 1 & 0\n \\end{pmatrix}, \\quad \n\\rho({1},{1})= \n \\begin{pmatrix}\n 0 & -1 \\\\\n -1 & 0\n \\end{pmatrix}."
},
{
"math_id": 32,
"text": "2."
},
{
"math_id": 33,
"text": "X"
},
{
"math_id": 34,
"text": "X."
},
{
"math_id": 35,
"text": "\\text{Aut}(X)"
},
{
"math_id": 36,
"text": "\\dim (V)=|X|."
},
{
"math_id": 37,
"text": "\\rho : G \\to\\text{GL}(V)"
},
{
"math_id": 38,
"text": "\\rho(s)e_x=e_{s.x}"
},
{
"math_id": 39,
"text": "s\\in G, x\\in X."
},
{
"math_id": 40,
"text": "X=\\{1,2,3\\}"
},
{
"math_id": 41,
"text": "G=\\text{Sym}(3)."
},
{
"math_id": 42,
"text": "\\text{Aut}(X)=G."
},
{
"math_id": 43,
"text": "\\rho:G\\to \\text{GL}(V)\\cong\\text{GL}_3(\\Complex )"
},
{
"math_id": 44,
"text": "\\rho(\\sigma)e_x=e_{\\sigma(x)}"
},
{
"math_id": 45,
"text": "\\sigma\\in G, x\\in X."
},
{
"math_id": 46,
"text": "|G|"
},
{
"math_id": 47,
"text": "(e_t)_{t\\in G}"
},
{
"math_id": 48,
"text": "X=G."
},
{
"math_id": 49,
"text": "\\rho(s)e_t=e_{st}"
},
{
"math_id": 50,
"text": "s, t\\in G."
},
{
"math_id": 51,
"text": "(\\rho(s)e_1)_{s\\in G}"
},
{
"math_id": 52,
"text": "e_1"
},
{
"math_id": 53,
"text": " \\rho(s)e_t=e_{ts^{-1}}."
},
{
"math_id": 54,
"text": "e_{s} \\mapsto e_{s^{-1}}."
},
{
"math_id": 55,
"text": "\\rho:G\\to\\text{GL}(W)"
},
{
"math_id": 56,
"text": "w\\in W,"
},
{
"math_id": 57,
"text": "(\\rho(s)w)_{s\\in G}"
},
{
"math_id": 58,
"text": "W."
},
{
"math_id": 59,
"text": "G = \\Z /5\\Z "
},
{
"math_id": 60,
"text": "V=\\R^5"
},
{
"math_id": 61,
"text": "\\{e_0,\\ldots, e_4\\}."
},
{
"math_id": 62,
"text": "L_\\rho: G\\to \\text{GL}(V)"
},
{
"math_id": 63,
"text": "L_\\rho(k)e_l=e_{l+k}"
},
{
"math_id": 64,
"text": "k, l \\in \\Z /5\\Z."
},
{
"math_id": 65,
"text": "R_\\rho(k)e_l=e_{l-k}"
},
{
"math_id": 66,
"text": "k, l \\in \\Z /5\\Z ."
},
{
"math_id": 67,
"text": "K[G]"
},
{
"math_id": 68,
"text": "K."
},
{
"math_id": 69,
"text": "f \\in K[G]"
},
{
"math_id": 70,
"text": "f=\\sum_{s\\in G} a_s s"
},
{
"math_id": 71,
"text": "a_s \\in K"
},
{
"math_id": 72,
"text": "\\rho: G\\to\\text{GL}(V)"
},
{
"math_id": 73,
"text": "sv=\\rho(s) v"
},
{
"math_id": 74,
"text": "s\\in G"
},
{
"math_id": 75,
"text": "v\\in V"
},
{
"math_id": 76,
"text": "K=\\Complex."
},
{
"math_id": 77,
"text": "\\Complex [G]"
},
{
"math_id": 78,
"text": "L^1(G):=\\{f:G\\to\\Complex \\}"
},
{
"math_id": 79,
"text": "\\Complex "
},
{
"math_id": 80,
"text": "\\Complex^{|G|}."
},
{
"math_id": 81,
"text": "f, h \\in L^1(G)"
},
{
"math_id": 82,
"text": "f*h(s):=\\sum_{t\\in G}f(t)h(t^{-1}s)"
},
{
"math_id": 83,
"text": "L^1(G)"
},
{
"math_id": 84,
"text": "(\\delta_s)_{s\\in G},"
},
{
"math_id": 85,
"text": "\\delta_s(t)=\\begin{cases} 1& t=s\\\\ 0 & \\text{otherwise.}\\end{cases}"
},
{
"math_id": 86,
"text": "\\delta_s*\\delta_t=\\delta_{st}."
},
{
"math_id": 87,
"text": "\\Complex [G],"
},
{
"math_id": 88,
"text": "\\delta_s\\mapsto e_s"
},
{
"math_id": 89,
"text": "(\\delta_s)_{s\\in G}"
},
{
"math_id": 90,
"text": "\\Complex [G]."
},
{
"math_id": 91,
"text": "f^*(s)=\\overline{f(s^{-1})}"
},
{
"math_id": 92,
"text": "^*"
},
{
"math_id": 93,
"text": "\\delta_s^*=\\delta_{s^{-1}}."
},
{
"math_id": 94,
"text": "(\\pi, V_\\pi)"
},
{
"math_id": 95,
"text": "\\pi: L^1(G)\\to\\text{End}(V_\\pi)"
},
{
"math_id": 96,
"text": "\\pi(\\delta_s)=\\pi(s)."
},
{
"math_id": 97,
"text": "\\pi"
},
{
"math_id": 98,
"text": "\\pi(f*h)=\\pi(f)\\pi(h)."
},
{
"math_id": 99,
"text": "\\pi(f)^* =\\pi(f^*)."
},
{
"math_id": 100,
"text": "\\R."
},
{
"math_id": 101,
"text": "\\rho:G\\to\\text{GL}(V_\\rho)"
},
{
"math_id": 102,
"text": "f\\in L^1(G)"
},
{
"math_id": 103,
"text": "\\Complex"
},
{
"math_id": 104,
"text": "\\hat{f}(\\rho)\\in \\text{End}(V_\\rho)"
},
{
"math_id": 105,
"text": "f"
},
{
"math_id": 106,
"text": "\\hat{f}(\\rho)=\\sum_{s\\in G} f(s)\\rho(s)."
},
{
"math_id": 107,
"text": "\\widehat{f*g}(\\rho)=\\hat{f}(\\rho)\\cdot\\hat{g}(\\rho)."
},
{
"math_id": 108,
"text": "(\\rho, V_\\rho),\\, (\\tau, V_\\tau)"
},
{
"math_id": 109,
"text": "T: V_\\rho\\to V_\\tau,"
},
{
"math_id": 110,
"text": "\\tau(s)\\circ T=T\\circ\\rho(s)"
},
{
"math_id": 111,
"text": "s\\in G."
},
{
"math_id": 112,
"text": "T"
},
{
"math_id": 113,
"text": "\\rho:G\\to\\text{GL}(V)"
},
{
"math_id": 114,
"text": "W"
},
{
"math_id": 115,
"text": "V,"
},
{
"math_id": 116,
"text": " \\rho(s)w\\in W"
},
{
"math_id": 117,
"text": " s\\in G"
},
{
"math_id": 118,
"text": "w\\in W"
},
{
"math_id": 119,
"text": "\\rho(s)|_W"
},
{
"math_id": 120,
"text": " \\rho(s) |_W\\circ\\rho(t)|_W = \\rho(st)|_W"
},
{
"math_id": 121,
"text": "s,t\\in G,"
},
{
"math_id": 122,
"text": "\\rho_1:G\\to\\text{GL}(V_1)"
},
{
"math_id": 123,
"text": "\\rho_2:G\\to\\text{GL}(V_2)"
},
{
"math_id": 124,
"text": "F: V_1\\to V_2"
},
{
"math_id": 125,
"text": "\\rho_2(s)\\circ F= F\\circ \\rho_1(s)"
},
{
"math_id": 126,
"text": "V_1 = V_2"
},
{
"math_id": 127,
"text": "\\rho_1 = \\rho_2,"
},
{
"math_id": 128,
"text": "F"
},
{
"math_id": 129,
"text": "F=\\lambda\\text{Id}"
},
{
"math_id": 130,
"text": "\\lambda \\in \\Complex"
},
{
"math_id": 131,
"text": "\\rho_1"
},
{
"math_id": 132,
"text": "\\rho_2"
},
{
"math_id": 133,
"text": "V_2"
},
{
"math_id": 134,
"text": "\\Complex,"
},
{
"math_id": 135,
"text": "(\\rho, V_{\\rho}), (\\pi, V_{\\pi})"
},
{
"math_id": 136,
"text": "T: V_{\\rho} \\to V_{\\pi},"
},
{
"math_id": 137,
"text": "T \\circ \\rho(s)=\\pi(s)\\circ T "
},
{
"math_id": 138,
"text": "s \\in G."
},
{
"math_id": 139,
"text": "(\\pi,V_\\pi)"
},
{
"math_id": 140,
"text": "\\pi(G)."
},
{
"math_id": 141,
"text": "\\text{GL}(V_\\pi),"
},
{
"math_id": 142,
"text": "\\text{Aut}(V_\\pi)."
},
{
"math_id": 143,
"text": "H"
},
{
"math_id": 144,
"text": "\\text{Res}_H(\\rho)"
},
{
"math_id": 145,
"text": "H."
},
{
"math_id": 146,
"text": "\\text{Res}(\\rho)"
},
{
"math_id": 147,
"text": "\\text{Res}\\rho."
},
{
"math_id": 148,
"text": "\\text{Res}_H(V)"
},
{
"math_id": 149,
"text": "\\text{Res}(V)"
},
{
"math_id": 150,
"text": "\\text{Res}_H(f)"
},
{
"math_id": 151,
"text": "\\text{Res}(f)"
},
{
"math_id": 152,
"text": "(\\rho,V_\\rho)"
},
{
"math_id": 153,
"text": "\\tau"
},
{
"math_id": 154,
"text": "V_\\rho(\\tau)"
},
{
"math_id": 155,
"text": "\\tau."
},
{
"math_id": 156,
"text": "G,"
},
{
"math_id": 157,
"text": "(v|u)=(\\rho(s)v|\\rho(s)u)"
},
{
"math_id": 158,
"text": "v,u\\in V_\\rho, s\\in G."
},
{
"math_id": 159,
"text": " (\\cdot|\\cdot)"
},
{
"math_id": 160,
"text": "(v|u)"
},
{
"math_id": 161,
"text": "\\sum_{t\\in G}(\\rho(t)v|\\rho(t)u)."
},
{
"math_id": 162,
"text": "G=D_6=\\{\\text{id},\\mu,\\mu^2,\\nu,\\mu\\nu,\\mu^2\\nu\\}"
},
{
"math_id": 163,
"text": "6"
},
{
"math_id": 164,
"text": "\\mu,\\nu"
},
{
"math_id": 165,
"text": "\\text{ord}(\\nu)=2, \\text{ord}(\\mu)=3"
},
{
"math_id": 166,
"text": "\\nu\\mu\\nu=\\mu^2."
},
{
"math_id": 167,
"text": "\\rho:D_6\\to\\text{GL}_3(\\Complex )"
},
{
"math_id": 168,
"text": "D_6"
},
{
"math_id": 169,
"text": "\n\\rho(\\mu)=\\left(\n\\begin{array}{ccc}\n \\cos (\\frac{2\\pi}{3}) & 0& -\\sin (\\frac{2\\pi}{3})\\\\\n 0 & 1 & 0\\\\\n \\sin (\\frac{2\\pi}{3}) &0 & \\cos (\\frac{2\\pi}{3})\n\\end{array}\n\\right), \\,\\,\\,\\,\n\\rho(\\nu)= \\left(\n\\begin{array}{ccc}\n-1& 0&0\\\\\n0&-1&0\\\\\n0& 0 &1\n\\end{array}\n\\right).\n"
},
{
"math_id": 170,
"text": "\\Complex e_2"
},
{
"math_id": 171,
"text": "\\rho|_{\\Complex e_2}: D_6\\to\\Complex ^\\times"
},
{
"math_id": 172,
"text": "\\nu\\mapsto -1, \\mu\\mapsto 1."
},
{
"math_id": 173,
"text": "\\rho|_{\\Complex e_1\\oplus\\Complex e_3}"
},
{
"math_id": 174,
"text": " \\nu \\mapsto \\begin{pmatrix} -1 &0 \\\\0&1 \\end{pmatrix}, \\,\\,\\,\\,\\mu \\mapsto \\begin{pmatrix} \\cos (\\frac{2\\pi}{3}) &-\\sin (\\frac{2\\pi}{3})\\\\\\sin (\\frac{2\\pi}{3}) & \\cos (\\frac{2\\pi}{3})\\end{pmatrix}."
},
{
"math_id": 175,
"text": "\\rho=\\rho|_{\\Complex e_2}\\oplus \\rho|_{\\Complex e_1\\oplus\\Complex e_3}."
},
{
"math_id": 176,
"text": "\\Complex ^3,"
},
{
"math_id": 177,
"text": "\\rho(\\mu)"
},
{
"math_id": 178,
"text": "\\rho(\\nu)"
},
{
"math_id": 179,
"text": "T:\\Complex ^3\\to\\Complex ^3"
},
{
"math_id": 180,
"text": "\\eta:D_6\\to \\text{GL}_3(\\Complex ),"
},
{
"math_id": 181,
"text": "\\eta (s) :=T\\circ\\rho(s)\\circ T^{-1}"
},
{
"math_id": 182,
"text": " s\\in D_6,"
},
{
"math_id": 183,
"text": "H=\\{\\text{id},\\mu, \\mu^2\\},"
},
{
"math_id": 184,
"text": "\\text{Res}_H(\\rho)."
},
{
"math_id": 185,
"text": "\\rho(\\mu),"
},
{
"math_id": 186,
"text": "\\rho: G \\to \\text{GL}(V)"
},
{
"math_id": 187,
"text": "\\rho^*: G\\to \\text{GL}(V^*)"
},
{
"math_id": 188,
"text": "\\forall s\\in G, v \\in V, \\alpha \\in V^*: \\qquad \\left (\\rho^*(s)\\alpha \\right )(v) =\\alpha \\left (\\rho \\left (s^{-1} \\right ) v \\right )."
},
{
"math_id": 189,
"text": "\\langle \\alpha, v\\rangle:=\\alpha(v)"
},
{
"math_id": 190,
"text": "V^*"
},
{
"math_id": 191,
"text": " V"
},
{
"math_id": 192,
"text": "\\forall s\\in G, v \\in V, \\alpha \\in V^*: \\qquad \\langle \\rho^*(s)(\\alpha), \\rho(s)(v)\\rangle = \\langle \\alpha,v \\rangle."
},
{
"math_id": 193,
"text": " (\\rho_1,V_1 )"
},
{
"math_id": 194,
"text": "(\\rho_2,V_2 )"
},
{
"math_id": 195,
"text": "G_1"
},
{
"math_id": 196,
"text": "G_2,"
},
{
"math_id": 197,
"text": "\\forall s_1 \\in G_1, s_2 \\in G_2, v_1\\in V_1, v_2\\in V_2: \\qquad \\begin{cases} \\rho_1\\oplus\\rho_2 : G_1\\times G_2 \\to \\text{GL}(V_1 \\oplus V_2 ) \\\\[4pt] (\\rho_1\\oplus\\rho_2)(s_1, s_2) (v_1,v_2) := \\rho_1(s_1)v_1\\oplus \\rho_2(s_2)v_2\\end{cases}"
},
{
"math_id": 198,
"text": "\\rho_1, \\rho_2"
},
{
"math_id": 199,
"text": "\\rho_1\\oplus\\rho_2:G\\to\\text{GL}(V_1\\oplus V_2),"
},
{
"math_id": 200,
"text": "G\\times G."
},
{
"math_id": 201,
"text": "i"
},
{
"math_id": 202,
"text": "\\omega"
},
{
"math_id": 203,
"text": " \\begin{cases} \\rho_1: \\Z /2\\Z \\to \\text{GL}_2(\\Complex ) \\\\[4pt] \\rho_1(1)=\\begin{pmatrix} 0 & -i \\\\ i & 0\\end{pmatrix}\\end{cases} \\qquad\\qquad \\begin{cases} \\rho_2: \\Z /3\\Z \\to \\text{GL}_3(\\Complex )\\\\[6pt] \\rho_2(1) = \\begin{pmatrix} 1 & 0 & \\omega \\\\ 0 & \\omega & 0\\\\ 0 & 0 & \\omega^2 \\end{pmatrix} \\end{cases}"
},
{
"math_id": 204,
"text": " \\begin{cases} \\rho_1\\oplus \\rho_2 : \\Z /2\\Z \\times\\Z /3\\Z \\to \\text{GL} \\left ( \\Complex ^2\\oplus\\Complex ^3 \\right ) \\\\[6pt] \\left (\\rho_1\\oplus\\rho_2 \\right ) (k,l) = \\begin{pmatrix} \\rho_1(k)& 0 \\\\ 0 & \\rho_2(l) \\end{pmatrix} & k\\in\\Z /2\\Z, l\\in \\Z /3\\Z \\end{cases}"
},
{
"math_id": 205,
"text": "(\\rho_1\\oplus\\rho_2) (1,1) = \\begin{pmatrix} \n0 & -i & 0 & 0 & 0 \\\\\ni & 0 & 0 & 0 & 0\\\\\n0 & 0 & 1 & 0 & \\omega\\\\ \n0 & 0 &0 & \\omega & 0\\\\ \n0 & 0 & 0 & 0 & \\omega^2\n\\end{pmatrix} "
},
{
"math_id": 206,
"text": "\\rho_1:G_1\\to \\text{GL}(V_1), \\rho_2:G_2\\to\\text{GL}(V_2)"
},
{
"math_id": 207,
"text": "\\rho_1\\otimes\\rho_2:G_1\\times G_2 \\to \\text{GL}(V_1\\otimes V_2)"
},
{
"math_id": 208,
"text": "V_1"
},
{
"math_id": 209,
"text": "\\rho_1\\otimes\\rho_2(s_1,s_2)=\\rho_1(s_1)\\otimes \\rho_2(s_2),"
},
{
"math_id": 210,
"text": "s_1\\in G_1, s_2\\in G_2."
},
{
"math_id": 211,
"text": "\\rho_2."
},
{
"math_id": 212,
"text": " \\begin{cases} \\rho_1\\otimes\\rho_2 :\\Z /2\\Z \\times \\Z /3\\Z \\to \\text{GL}(\\Complex ^2\\otimes\\Complex ^3) \\\\ (\\rho_1 \\otimes \\rho_2) (k,l) = \\rho_1(k)\\otimes \\rho_2(l) & k\\in\\Z /2\\Z, l\\in\\Z /3\\Z \\end{cases}"
},
{
"math_id": 213,
"text": "\\Complex ^2\\otimes\\Complex ^3\\cong \\Complex ^6"
},
{
"math_id": 214,
"text": "\\rho_1\\otimes\\rho_2(1,1) = \\rho_1(1)\\otimes \\rho_2(1) = \\begin{pmatrix}\n0 & 0 & 0 & -i & 0 & -i\\omega \\\\\n0 & 0 & 0 & 0 & -i\\omega &0\\\\\n0 & 0 & 0 & 0 & 0 & -i\\omega^2\\\\\ni & 0 & i\\omega & 0 & 0 & 0 \\\\\n0 & i\\omega &0 & 0 & 0 & 0\\\\\n0 & 0 & i\\omega^2 & 0 & 0 & 0\\end{pmatrix} "
},
{
"math_id": 215,
"text": "\\rho_1: G \\to \\text{GL}(V_1), \\rho_2: G \\to \\text{GL}(V_2)"
},
{
"math_id": 216,
"text": "s"
},
{
"math_id": 217,
"text": "\\rho(s)\\in\\text{GL}(V_1\\otimes V_2)"
},
{
"math_id": 218,
"text": "\\rho(s)(v_1\\otimes v_2)=\\rho_1(s)v_1\\otimes \\rho_2(s)v_2,"
},
{
"math_id": 219,
"text": "v_1\\in V_1, v_2\\in V_2,"
},
{
"math_id": 220,
"text": "\\rho(s)=\\rho_1(s) \\otimes \\rho_2(s)."
},
{
"math_id": 221,
"text": "s\\mapsto \\rho(s)"
},
{
"math_id": 222,
"text": "\\text{Hom}(V,W)"
},
{
"math_id": 223,
"text": "\\text{Hom}(V,W)=V^*\\otimes W"
},
{
"math_id": 224,
"text": "B \\in \\text{Hom}(V,W)"
},
{
"math_id": 225,
"text": "\\text{Hom}(V,W)."
},
{
"math_id": 226,
"text": "\\rho_V"
},
{
"math_id": 227,
"text": "\\rho_W"
},
{
"math_id": 228,
"text": " \\rho(s)(B) v=\\rho_W(s)\\circ B \\circ\\rho_V(s^{-1})v"
},
{
"math_id": 229,
"text": " s\\in G, v\\in V."
},
{
"math_id": 230,
"text": "G_1\\times G_2"
},
{
"math_id": 231,
"text": "\\rho_1\\otimes\\rho_2"
},
{
"math_id": 232,
"text": "\\rho: G\\to V\\otimes V"
},
{
"math_id": 233,
"text": "(e_k)"
},
{
"math_id": 234,
"text": "\\vartheta: V\\otimes V \\to V \\otimes V"
},
{
"math_id": 235,
"text": "\\vartheta(e_k\\otimes e_j) =e_j \\otimes e_k"
},
{
"math_id": 236,
"text": "\\vartheta^2 =1"
},
{
"math_id": 237,
"text": "V\\otimes V"
},
{
"math_id": 238,
"text": "V\\otimes V=\\text{Sym}^2(V)\\oplus \\text{Alt}^2(V),"
},
{
"math_id": 239,
"text": "\\text{Sym}^2(V) = \\{z\\in V\\otimes V: \\vartheta(z)=z \\}"
},
{
"math_id": 240,
"text": "\\text{Alt}^2(V)=\\bigwedge^2V=\\{z\\in V\\otimes V: \\vartheta (z)=-z \\}."
},
{
"math_id": 241,
"text": " V^{\\otimes m},"
},
{
"math_id": 242,
"text": "\\bigwedge^m V"
},
{
"math_id": 243,
"text": "\\text{Sym}^m(V)."
},
{
"math_id": 244,
"text": "m>2,"
},
{
"math_id": 245,
"text": "V^{\\otimes m}"
},
{
"math_id": 246,
"text": "\\rho:G\\to \\text{GL}(V)"
},
{
"math_id": 247,
"text": "W^0"
},
{
"math_id": 248,
"text": "\\text{char}(K)=0,"
},
{
"math_id": 249,
"text": "(\\tau_j)_{j\\in I}"
},
{
"math_id": 250,
"text": "\\{V(\\tau_j)|j\\in I\\}"
},
{
"math_id": 251,
"text": "p_j:V\\to V(\\tau_j)"
},
{
"math_id": 252,
"text": "p_j=\\frac{n_j}{g}\\sum_{t\\in G}\\overline{\\chi_{\\tau_j}(t)}\\rho(t),"
},
{
"math_id": 253,
"text": "n_j=\\dim (\\tau_j),"
},
{
"math_id": 254,
"text": "g=\\text{ord}(G)"
},
{
"math_id": 255,
"text": "\\chi_{\\tau_j}"
},
{
"math_id": 256,
"text": "\\tau_j."
},
{
"math_id": 257,
"text": "V^G:=\\{v\\in V : \\rho(s)v=v\\,\\,\\,\\, \\forall\\, s \\in G\\}."
},
{
"math_id": 258,
"text": "\\rho(s): V\\to V"
},
{
"math_id": 259,
"text": "P:= \\frac{1}{|G|}\\sum_{s\\in G} \\rho(s) \\in \\text{End}(V)."
},
{
"math_id": 260,
"text": "P"
},
{
"math_id": 261,
"text": "\\forall t \\in G : \\qquad \\sum_{s\\in G} \\rho(s)= \\sum_{s\\in G} \\rho(tst^{-1})."
},
{
"math_id": 262,
"text": "V^G."
},
{
"math_id": 263,
"text": "\\text{Tr}(P)."
},
{
"math_id": 264,
"text": "0"
},
{
"math_id": 265,
"text": "\\dim(V(1))=\\dim(V^G)=Tr(P)=\\frac{1}{|G|}\\sum_{s\\in G}\\chi_V(s),"
},
{
"math_id": 266,
"text": "V(1)"
},
{
"math_id": 267,
"text": "V_\\pi"
},
{
"math_id": 268,
"text": "P=\\frac{1}{|G|}\\sum_{s\\in G} \\pi(s)=0."
},
{
"math_id": 269,
"text": "e_1,...,e_n"
},
{
"math_id": 270,
"text": "V_\\pi."
},
{
"math_id": 271,
"text": "\\sum_{s\\in G} \\text{Tr}(\\pi(s)) = \\sum _{s\\in G} \\sum_{j=1}^{n} \\langle \\pi(s)e_j, e_j \\rangle = \\sum_{j=1}^{n} \\left \\langle \\sum_{s\\in G} \\pi(s)e_j, e_j \\right \\rangle =0."
},
{
"math_id": 272,
"text": "\\sum_{s\\in G} \\chi_V(s)=0."
},
{
"math_id": 273,
"text": "G=\\text{Per}(3)"
},
{
"math_id": 274,
"text": "\\rho: \\text{Per}(3)\\to \\text{GL}_5(\\Complex )"
},
{
"math_id": 275,
"text": "\\text{Per}(3)"
},
{
"math_id": 276,
"text": "\n\\rho(1,2)=\n\\begin{pmatrix}\n-1 & 2 & 0& 0& 0\\\\\n0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0\\\\\n0 & 0 & 1 & 0 & 0\\\\\n0& 0 & 0 & 0 & 1\n\\end{pmatrix}, \\quad \n\\rho(1,3)=\n\\begin{pmatrix}\n\\frac{1}{2} & \\frac{1}{2} & 0& 0& 0\\\\\n\\frac{1}{2} & -1 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 1\\\\\n0 & 0 & 0 & 1 & 0\\\\\n0& 0 & 1 & 0 & 0\n\\end{pmatrix}, \\quad \n\\rho(2,3)=\n\\begin{pmatrix}\n0 & -2 & 0& 0& 0\\\\\n-\\frac{1}{2} & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 1\\\\\n0& 0 & 0 & 1 & 0\n\\end{pmatrix}."
},
{
"math_id": 277,
"text": "\\text{Per}(3),"
},
{
"math_id": 278,
"text": "\\eta: \\text{Per}(3) \\to \\text{GL}_2(\\Complex )"
},
{
"math_id": 279,
"text": "\\eta(1,2)= \\begin{pmatrix} -1 &2 \\\\ 0& 1\\end{pmatrix}, \\quad \\eta(1,3)=\\begin{pmatrix} \\frac{1}{2} & \\frac{1}{2} \\\\ \\frac{1}{2} & -1 \\end{pmatrix},\\quad \\eta(2,3)= \\begin{pmatrix}0& -2\\\\-\\frac{1}{2} & 0\\end{pmatrix}."
},
{
"math_id": 280,
"text": "\\eta"
},
{
"math_id": 281,
"text": "(\\eta|\\eta)=1, (\\pi|\\pi)=2."
},
{
"math_id": 282,
"text": "\\Complex (e_1+e_2+e_3)"
},
{
"math_id": 283,
"text": "\\Complex ^3"
},
{
"math_id": 284,
"text": "\\Complex (e_1-e_2)\\oplus\\Complex (e_1+e_2-2e_3)."
},
{
"math_id": 285,
"text": " \\tau (1,2)= \\begin{pmatrix} -1 &0 \\\\0 & 1\\end{pmatrix},\\quad \\tau(1,3)=\\begin{pmatrix}\\frac{1}{2} & \\frac{3}{2}\\\\ \\frac{1}{2} & -\\frac{1}{2}\\end{pmatrix}, \\quad \\tau(2,3)=\\begin{pmatrix} \\frac{1}{2} & -\\frac{3}{2}\\\\ -\\frac{1}{2} & -\\frac{1}{2}\\end{pmatrix}."
},
{
"math_id": 286,
"text": "\\eta(s)=B\\circ\\tau(s)\\circ B^{-1}"
},
{
"math_id": 287,
"text": " s\\in \\text{Per}(3),"
},
{
"math_id": 288,
"text": "B:\\Complex ^2\\to\\Complex ^2"
},
{
"math_id": 289,
"text": "M_B=\\begin{pmatrix} 2 &2\\\\0&2\\end{pmatrix}."
},
{
"math_id": 290,
"text": "(\\rho,\\Complex ^5)"
},
{
"math_id": 291,
"text": "\\rho=\\tau\\oplus\\eta\\oplus 1"
},
{
"math_id": 292,
"text": "\\Complex ^5=\\Complex (e_1,e_2)\\oplus\\Complex (e_3-e_4, e_3+e_4-2e_5)\\oplus\\Complex (e_3+e_4+e_5)"
},
{
"math_id": 293,
"text": "\\rho_1:=\\eta\\oplus\\tau"
},
{
"math_id": 294,
"text": "\\rho=\\rho_1\\oplus 1, \\qquad \\Complex ^5=\\Complex (e_1,e_2,e_3-e_4, e_3+e_4-2e_5)\\oplus\\Complex (e_3+e_4+e_5)."
},
{
"math_id": 295,
"text": "G=\\{ A\\in \\text{GL}_2(\\Complex )| \\,A\\,\\, \\text{ is an upper triangular matrix}\\}."
},
{
"math_id": 296,
"text": "\\Complex ^2"
},
{
"math_id": 297,
"text": "\\rho(A)=A"
},
{
"math_id": 298,
"text": "A\\in G."
},
{
"math_id": 299,
"text": "\\Complex e_1"
},
{
"math_id": 300,
"text": "\\Complex ."
},
{
"math_id": 301,
"text": "\\chi_\\rho : G \\to \\Complex, \\chi_\\rho(s) := \\text{Tr}(\\rho(s)),"
},
{
"math_id": 302,
"text": "\\text{Tr}(\\rho(s))"
},
{
"math_id": 303,
"text": "\\rho(s)."
},
{
"math_id": 304,
"text": "\\rho: \\Z /2\\Z \\times\\Z /2\\Z\\to\\text{GL}_2(\\Complex )"
},
{
"math_id": 305,
"text": "\n\\rho(0,0)=\n \\begin{pmatrix}\n 1 & 0 \\\\\n 0 & 1\n \\end{pmatrix}, \\quad \n\\rho(1, 0)=\n \\begin{pmatrix}\n -1 & 0 \\\\\n 0 & -1\n \\end{pmatrix}, \\quad \n\\rho(0,1)=\n \\begin{pmatrix}\n 0 & 1 \\\\\n 1 & 0\n \\end{pmatrix}, \\quad\n\\rho(1,1)= \\begin{pmatrix} 0 & -1 \\\\ -1 & 0 \\end{pmatrix}."
},
{
"math_id": 306,
"text": "\\chi_\\rho"
},
{
"math_id": 307,
"text": "\\chi_\\rho(0,0)=2, \\quad \\chi_\\rho(1,0)=-2,\\quad \\chi_\\rho(0,1)=\\chi_\\rho(1,1)=0."
},
{
"math_id": 308,
"text": "\\chi_V(s)=|\\{x\\in X | s\\cdot x=x\\}|."
},
{
"math_id": 309,
"text": "R"
},
{
"math_id": 310,
"text": "\\chi_R(s)=\\begin{cases} 0 & s\\neq e\\\\ |G| & s=e\\end{cases},"
},
{
"math_id": 311,
"text": "e"
},
{
"math_id": 312,
"text": "\\chi(tst^{-1})=\\chi(s),\\,\\,\\forall\\,s,t\\in G."
},
{
"math_id": 313,
"text": "G \\to \\Complex"
},
{
"math_id": 314,
"text": "C_s = \\{tst^{-1}| t \\in G \\}."
},
{
"math_id": 315,
"text": "\\chi(s)"
},
{
"math_id": 316,
"text": "\\chi(s^{-1})=\\overline{\\chi(s)},\\,\\,\\,\\forall\\, s\\in G"
},
{
"math_id": 317,
"text": "|\\chi(s)|\\leqslant n."
},
{
"math_id": 318,
"text": "\\chi(e)=n,"
},
{
"math_id": 319,
"text": "\\{s\\in G | \\chi(s)=n\\}"
},
{
"math_id": 320,
"text": "\\chi_1, \\chi_2"
},
{
"math_id": 321,
"text": "\\rho_1:G\\to \\text{GL}(V_1), \\rho_2:G \\to \\text{GL}(V_2)"
},
{
"math_id": 322,
"text": "V \\otimes V = Sym^2(V) \\oplus \\bigwedge^2 V"
},
{
"math_id": 323,
"text": "\\chi(s)^2"
},
{
"math_id": 324,
"text": "V \\otimes V"
},
{
"math_id": 325,
"text": "\\varphi : G \\to \\Complex"
},
{
"math_id": 326,
"text": " \\forall s, t \\in G : \\quad \\varphi \\left (sts^{-1} \\right ) =\\varphi(t)."
},
{
"math_id": 327,
"text": " \\Complex _{\\text{class}}(G)"
},
{
"math_id": 328,
"text": " (f|h)_G=\\frac{1}{|G|} \\sum_{t\\in G}f(t)\\overline{h(t)} "
},
{
"math_id": 329,
"text": "\\chi_1, \\ldots ,\\chi_k"
},
{
"math_id": 330,
"text": "(\\chi_i|\\chi_j)=\\begin{cases} 1 \\text{ if } i= j \\\\0 \\text{ otherwise }\\end{cases}."
},
{
"math_id": 331,
"text": "\\Complex _{\\text{class}}(G)"
},
{
"math_id": 332,
"text": " \\rho "
},
{
"math_id": 333,
"text": " \\rho_f = \\sum_g f(g)\\rho (g)."
},
{
"math_id": 334,
"text": " \\rho_f = \\frac {|G|}{n} \\langle f,\\chi _V^{*} \\rangle \\in End(V)"
},
{
"math_id": 335,
"text": " f "
},
{
"math_id": 336,
"text": " \\rho _f = 0 "
},
{
"math_id": 337,
"text": " \\rho _f "
},
{
"math_id": 338,
"text": "g"
},
{
"math_id": 339,
"text": " f(g)=0 "
},
{
"math_id": 340,
"text": "f=0."
},
{
"math_id": 341,
"text": "\\chi_j"
},
{
"math_id": 342,
"text": "\\varphi"
},
{
"math_id": 343,
"text": "\\varphi=c_1 \\chi_1 + \\cdots + c_k \\chi_k "
},
{
"math_id": 344,
"text": "c_j"
},
{
"math_id": 345,
"text": "c_1 \\tau_1 \\oplus \\cdots \\oplus c_k \\tau_k"
},
{
"math_id": 346,
"text": "\\tau_j"
},
{
"math_id": 347,
"text": "\\chi_j."
},
{
"math_id": 348,
"text": "L^1(G):"
},
{
"math_id": 349,
"text": " \\langle f,h\\rangle_G=\\frac{1}{|G|} \\sum_{t\\in G}f(t)h(t^{-1}) "
},
{
"math_id": 350,
"text": "(\\cdot|\\cdot)_G"
},
{
"math_id": 351,
"text": " \\langle\\cdot|\\cdot\\rangle_G"
},
{
"math_id": 352,
"text": "V_1, V_2"
},
{
"math_id": 353,
"text": "\\langle V_1, V_2 \\rangle_G:=\\dim (\\text{Hom}^G(V_1,V_2)),"
},
{
"math_id": 354,
"text": "\\text{Hom}^G(V_1,V_2)"
},
{
"math_id": 355,
"text": "\\chi_1"
},
{
"math_id": 356,
"text": "\\chi_2"
},
{
"math_id": 357,
"text": "V_2,"
},
{
"math_id": 358,
"text": " \\langle\\chi_1,\\chi_2\\rangle_G = (\\chi_1|\\chi_2)_G=\\langle V_1, V_2 \\rangle_G."
},
{
"math_id": 359,
"text": "\\xi."
},
{
"math_id": 360,
"text": "V=W_1\\oplus \\cdots \\oplus W_k,"
},
{
"math_id": 361,
"text": "W_j"
},
{
"math_id": 362,
"text": "(\\tau,W)"
},
{
"math_id": 363,
"text": "\\chi."
},
{
"math_id": 364,
"text": "(\\xi|\\chi),"
},
{
"math_id": 365,
"text": "V(\\tau)"
},
{
"math_id": 366,
"text": "(\\xi|\\chi)=\\frac{\\dim (V(\\tau))}{\\dim (\\tau)}=\\langle V, W\\rangle"
},
{
"math_id": 367,
"text": "\\dim (V(\\tau))=\\dim (\\tau)(\\xi|\\chi)."
},
{
"math_id": 368,
"text": "\\chi"
},
{
"math_id": 369,
"text": "(\\chi|\\chi) \\in \\mathbb{N}_0."
},
{
"math_id": 370,
"text": "(\\chi|\\chi)=1"
},
{
"math_id": 371,
"text": "\\dim (V)=n."
},
{
"math_id": 372,
"text": "n"
},
{
"math_id": 373,
"text": " R \\cong \\oplus (W_j)^{\\oplus\\dim (W_j)},"
},
{
"math_id": 374,
"text": "\\{W_j|j\\in I\\}"
},
{
"math_id": 375,
"text": "\\Complex [G]\\cong\\oplus_{j}\\text{End}(W_j)"
},
{
"math_id": 376,
"text": "|G|=\\chi_R(e)=\\dim (R)=\\sum_j\\dim \\left ((W_j)^{\\oplus(\\chi_{W_j}|\\chi_R)} \\right )=\\sum_j(\\chi_{W_j}|\\chi_R)\\cdot\\dim (W_j)=\\sum_j\\dim (W_j)^2,"
},
{
"math_id": 377,
"text": "\\chi_{W_j}"
},
{
"math_id": 378,
"text": "\\chi_R"
},
{
"math_id": 379,
"text": " W_j"
},
{
"math_id": 380,
"text": " R,"
},
{
"math_id": 381,
"text": "s\\neq e,"
},
{
"math_id": 382,
"text": " 0=\\chi_R(s)=\\sum_j\\dim (W_j)\\cdot\\chi_{W_j}(s)."
},
{
"math_id": 383,
"text": "f(s)=\\frac{1}{|G|}\\sum_{\\rho \\text{ irr. rep. of } G}\\dim (V_\\rho)\\cdot\\text{Tr}(\\rho(s^{-1})\\cdot \\hat{f}(\\rho))."
},
{
"math_id": 384,
"text": "\\sum_{s\\in G}f(s^{-1})h(s)=\\frac{1}{|G|}\\sum_{\\rho\\,\\, \\text{ irred.} \\text{ rep.}\\text{ of } G}\\dim (V_{\\rho})\\cdot\\text{Tr}(\\hat{f}(\\rho)\\hat{h}(\\rho))."
},
{
"math_id": 385,
"text": "G, s\\in G"
},
{
"math_id": 386,
"text": "f, h \\in L^1(G)."
},
{
"math_id": 387,
"text": "1."
},
{
"math_id": 388,
"text": "\\rho: G\\to \\text{GL}(V_\\rho)"
},
{
"math_id": 389,
"text": "\\rho|_H"
},
{
"math_id": 390,
"text": "\\rho_H."
},
{
"math_id": 391,
"text": "\\theta:H \\to \\text{GL}(W)"
},
{
"math_id": 392,
"text": "\\rho(s)(W)"
},
{
"math_id": 393,
"text": "sH"
},
{
"math_id": 394,
"text": "s."
},
{
"math_id": 395,
"text": "G/H,"
},
{
"math_id": 396,
"text": " \\sum_{r\\in R} \\rho(r)(W)"
},
{
"math_id": 397,
"text": "V_\\rho."
},
{
"math_id": 398,
"text": "V_\\rho"
},
{
"math_id": 399,
"text": "\\theta"
},
{
"math_id": 400,
"text": "W,"
},
{
"math_id": 401,
"text": " V_\\rho= \\bigoplus_{r\\in R} W_r."
},
{
"math_id": 402,
"text": "G/H"
},
{
"math_id": 403,
"text": "W_r=\\rho(s)(W)"
},
{
"math_id": 404,
"text": "s\\in rH"
},
{
"math_id": 405,
"text": "r\\in R."
},
{
"math_id": 406,
"text": "(\\theta, W),"
},
{
"math_id": 407,
"text": "v\\in V_\\rho"
},
{
"math_id": 408,
"text": "\\sum_{r\\in R}w_r,"
},
{
"math_id": 409,
"text": "w_r \\in W_r"
},
{
"math_id": 410,
"text": "\\rho=\\text{Ind}^G_H(\\theta),"
},
{
"math_id": 411,
"text": "\\rho=\\text{Ind}(\\theta),"
},
{
"math_id": 412,
"text": "V=\\text{Ind}^G_H(W),"
},
{
"math_id": 413,
"text": "V=\\text{Ind}(W),"
},
{
"math_id": 414,
"text": "\\Complex [H]"
},
{
"math_id": 415,
"text": "V= \\Complex[G]\\otimes_{\\Complex[H]}W,"
},
{
"math_id": 416,
"text": "s\\cdot (e_t \\otimes w)=e_{st}\\otimes w"
},
{
"math_id": 417,
"text": "s,t\\in G, w\\in W."
},
{
"math_id": 418,
"text": "(\\theta, W_\\theta)"
},
{
"math_id": 419,
"text": "(\\theta, W_\\theta)."
},
{
"math_id": 420,
"text": " H \\leq G \\leq K"
},
{
"math_id": 421,
"text": "\\text{Ind}^K_G(\\text{Ind}^G_H(W))\\cong\\text{Ind}^K_H(W)."
},
{
"math_id": 422,
"text": "(\\theta,W_\\theta)"
},
{
"math_id": 423,
"text": "\\rho':G\\to\\text{GL}(V')"
},
{
"math_id": 424,
"text": "F: W_\\theta\\to V'"
},
{
"math_id": 425,
"text": "F\\circ\\theta(t)=\\rho'(t)\\circ F"
},
{
"math_id": 426,
"text": "t\\in G."
},
{
"math_id": 427,
"text": "F':V_\\rho\\to V',"
},
{
"math_id": 428,
"text": "F'\\circ\\rho(s)=\\rho'(s)\\circ F'"
},
{
"math_id": 429,
"text": "V'"
},
{
"math_id": 430,
"text": "\\Complex[G]"
},
{
"math_id": 431,
"text": "\\text{Hom}^H(W_\\theta,V') \\cong \\text{Hom}^G(V_\\rho,V'),"
},
{
"math_id": 432,
"text": "\\text{Hom}^G(V_\\rho,V')"
},
{
"math_id": 433,
"text": "V'."
},
{
"math_id": 434,
"text": "\\text{Hom}^H(W_\\theta,V')."
},
{
"math_id": 435,
"text": "\\varphi'"
},
{
"math_id": 436,
"text": " \\varphi'(s)=\\frac{1}{|H|}\\sum_{t\\in G\\atop t^{-1}st\\in H}^{}\\varphi(t^{-1}st)."
},
{
"math_id": 437,
"text": "\\text{Ind}^G_H(\\varphi)=\\varphi'"
},
{
"math_id": 438,
"text": "\\text{Ind}(\\varphi)=\\varphi'."
},
{
"math_id": 439,
"text": "\\text{Ind}(\\varphi)"
},
{
"math_id": 440,
"text": "H,"
},
{
"math_id": 441,
"text": "\\text{Ind}(W)"
},
{
"math_id": 442,
"text": "\\psi"
},
{
"math_id": 443,
"text": " \\text{Ind}(\\psi \\cdot \\text{Res}\\varphi) = (\\text{Ind} \\psi) \\cdot \\varphi."
},
{
"math_id": 444,
"text": " \\chi_\\theta"
},
{
"math_id": 445,
"text": "G/H."
},
{
"math_id": 446,
"text": "\\forall t \\in G: \\qquad \\chi_\\rho(t)=\\sum_{r\\in R,\\atop r^{-1}tr \\in H}^{} \\chi_\\theta (r^{-1}tr)=\\frac{1}{|H|} \\sum_{s\\in G,\\atop s^{-1}ts\\in H}^{} \\chi_\\theta(s^{-1}ts)."
},
{
"math_id": 447,
"text": "\\text{Res}"
},
{
"math_id": 448,
"text": "\\text{Ind}"
},
{
"math_id": 449,
"text": "\\psi\\in\\Complex _{\\text{class}}(H)"
},
{
"math_id": 450,
"text": "\\varphi\\in\\Complex _{\\text{class}}(G)"
},
{
"math_id": 451,
"text": "\\langle \\psi, \\text{Res}(\\varphi)\\rangle_H=\\langle \\text{Ind}(\\psi), \\varphi\\rangle_G."
},
{
"math_id": 452,
"text": "\\langle V_1,V_2\\rangle_G =0."
},
{
"math_id": 453,
"text": "H_s=sHs^{-1}\\cap H"
},
{
"math_id": 454,
"text": "(\\rho, W)"
},
{
"math_id": 455,
"text": "\\text{Res}_{H_s}(\\rho)"
},
{
"math_id": 456,
"text": "H_s."
},
{
"math_id": 457,
"text": "\\text{Res}_s(\\rho)"
},
{
"math_id": 458,
"text": "\\text{Res}_{H_s}(\\rho)."
},
{
"math_id": 459,
"text": "\\rho^s"
},
{
"math_id": 460,
"text": "H_s"
},
{
"math_id": 461,
"text": "\\rho^s(t)=\\rho(s^{-1}ts)."
},
{
"math_id": 462,
"text": "V=\\text{Ind}^G_H(W)"
},
{
"math_id": 463,
"text": "s\\in G\\setminus H"
},
{
"math_id": 464,
"text": "H_s=H"
},
{
"math_id": 465,
"text": "\\text{Res}_s(\\rho)=\\rho"
},
{
"math_id": 466,
"text": "\\text{Ind}^G_H(\\rho)"
},
{
"math_id": 467,
"text": "s \\notin H."
},
{
"math_id": 468,
"text": "A"
},
{
"math_id": 469,
"text": "\\mathbb{C}A"
},
{
"math_id": 470,
"text": "V=\\bigoplus_j{V_j}"
},
{
"math_id": 471,
"text": "\\mathbb{C}G"
},
{
"math_id": 472,
"text": "j"
},
{
"math_id": 473,
"text": "V_j"
},
{
"math_id": 474,
"text": "\\Box"
},
{
"math_id": 475,
"text": "(G : A)"
},
{
"math_id": 476,
"text": "\\deg(\\tau)|(G : A)."
},
{
"math_id": 477,
"text": "\\deg(\\tau)|(G : A)"
},
{
"math_id": 478,
"text": "\\deg(\\tau) \\leq (G : A) "
},
{
"math_id": 479,
"text": "G=A\\rtimes H"
},
{
"math_id": 480,
"text": "\\Chi = \\text{Hom}(A,\\Complex^\\times)."
},
{
"math_id": 481,
"text": "\\Chi"
},
{
"math_id": 482,
"text": "(s\\chi)(a) = \\chi(s^{-1}as)"
},
{
"math_id": 483,
"text": "s \\in G, \\chi \\in \\Chi, a \\in A."
},
{
"math_id": 484,
"text": "(\\chi_j)_{j\\in \\Chi/H}"
},
{
"math_id": 485,
"text": "\\Chi."
},
{
"math_id": 486,
"text": "j \\in \\Chi/H"
},
{
"math_id": 487,
"text": "H_j = \\{t \\in H : t\\chi_j = \\chi_j\\}."
},
{
"math_id": 488,
"text": "G_j = A \\cdot H_j"
},
{
"math_id": 489,
"text": "G_j"
},
{
"math_id": 490,
"text": "\\chi_j(at) = \\chi_j(a)"
},
{
"math_id": 491,
"text": "a \\in A, t \\in H_j."
},
{
"math_id": 492,
"text": "G_j."
},
{
"math_id": 493,
"text": "t\\chi_j = \\chi_j"
},
{
"math_id": 494,
"text": "t \\in H_j,"
},
{
"math_id": 495,
"text": "\\Complex^\\times."
},
{
"math_id": 496,
"text": "H_j. "
},
{
"math_id": 497,
"text": "\\tilde{\\rho}"
},
{
"math_id": 498,
"text": "G_j,"
},
{
"math_id": 499,
"text": "G_j \\to H_j."
},
{
"math_id": 500,
"text": "\\tilde{\\rho}."
},
{
"math_id": 501,
"text": "\\chi_j\\otimes \\tilde{\\rho}"
},
{
"math_id": 502,
"text": "\\theta_{j,\\rho}"
},
{
"math_id": 503,
"text": "\\chi_j\\otimes \\tilde{\\rho}."
},
{
"math_id": 504,
"text": "\\theta_{j',\\rho'}"
},
{
"math_id": 505,
"text": "j = j'"
},
{
"math_id": 506,
"text": "\\rho'."
},
{
"math_id": 507,
"text": "\\theta_{j,\\rho}."
},
{
"math_id": 508,
"text": "G=A \\rtimes H."
},
{
"math_id": 509,
"text": "R(G)= \\left \\{ \\left. \\sum_{j=1}^m a_j \\tau_j \\right |\\tau_1, \\ldots, \\tau_m \\text{ all irreducible representations of } G \\text{ up to isomorphism}, a_j\\in\\Z \\right \\}."
},
{
"math_id": 510,
"text": "R(G)"
},
{
"math_id": 511,
"text": " \\begin{cases} \\chi: R(G)\\to\\Complex _{\\text{class}}(G)\\\\ \\sum a_j \\tau_j \\mapsto \\sum a_j \\chi_j \\end{cases} "
},
{
"math_id": 512,
"text": "\\Complex _{\\text{class}}, \\chi"
},
{
"math_id": 513,
"text": "\\chi_\\Complex : R(G)\\otimes\\Complex \\to \\Complex _{\\text{class}}(G). "
},
{
"math_id": 514,
"text": "(\\tau_j\\otimes1)_{j=1,\\ldots, m}"
},
{
"math_id": 515,
"text": "\\chi_{\\Complex }(\\tau_j\\otimes1)=\\chi_j"
},
{
"math_id": 516,
"text": "\\chi_{\\Complex }(\\tau_j\\otimes z)=z\\chi_j,"
},
{
"math_id": 517,
"text": "\\mathcal{R}^+(G)"
},
{
"math_id": 518,
"text": "\\mathcal{R}(G)"
},
{
"math_id": 519,
"text": "\\mathcal{R}^+(G),"
},
{
"math_id": 520,
"text": "\\mathcal{R}(G)=\\Z \\chi_1\\oplus \\cdots \\oplus\\Z \\chi_m"
},
{
"math_id": 521,
"text": "\\mathcal{R}(G)=\\text{Im}(\\chi)=\\chi(R(G))."
},
{
"math_id": 522,
"text": "R(G)\\cong\\mathcal{R}(G)"
},
{
"math_id": 523,
"text": "\\mathcal{R}(G)=\\text{Im}(\\chi)"
},
{
"math_id": 524,
"text": "\\Complex_{\\text{class}}(G)"
},
{
"math_id": 525,
"text": "\\chi_i"
},
{
"math_id": 526,
"text": "R(G),"
},
{
"math_id": 527,
"text": "\\Complex \\otimes \\mathcal{R}(G)\\cong\\Complex _{\\text{class}}(G)."
},
{
"math_id": 528,
"text": "\\mathcal{R}(G)\\to \\mathcal{R}(H), \\phi\\mapsto \\phi|_H,"
},
{
"math_id": 529,
"text": "\\text{Res}^G_H"
},
{
"math_id": 530,
"text": "\\text{Res}."
},
{
"math_id": 531,
"text": "\\mathcal{R}(H)\\to \\mathcal{R}(G),"
},
{
"math_id": 532,
"text": "\\text{Ind}^G_H"
},
{
"math_id": 533,
"text": "\\text{Ind}."
},
{
"math_id": 534,
"text": "\\langle \\cdot,\\cdot\\rangle_H"
},
{
"math_id": 535,
"text": "\\langle \\cdot,\\cdot\\rangle_G."
},
{
"math_id": 536,
"text": "\\text{Ind}(\\varphi\\cdot \\text{Res}(\\psi))=\\text{Ind}(\\varphi)\\cdot \\psi"
},
{
"math_id": 537,
"text": "\\text{Ind}:\\mathcal{R}(H)\\to \\mathcal{R}(G)"
},
{
"math_id": 538,
"text": "\\mathcal{R}(G)."
},
{
"math_id": 539,
"text": "R(G)."
},
{
"math_id": 540,
"text": "\\text{Im}(\\text{Ind})=\\text{Ind}(R(H))"
},
{
"math_id": 541,
"text": " \\begin{cases}\nA\\otimes \\text{Res}: A\\otimes R(G) \\to A\\otimes R(H)\\\\\n\\left (a \\otimes \\sum a_j \\tau_j \\right ) \\mapsto \\left (a \\otimes \\sum a_j \\text{Res}(\\tau_j) \\right )\n\\end{cases}, \\qquad \\begin{cases}\n A\\otimes \\text{Ind}: A\\otimes R(H) \\to A\\otimes R(G)\\\\\n\\left (a \\otimes \\sum a_j \\eta_j \\right )\\mapsto \\left (a \\otimes \\sum a_j \\text{Ind}(\\eta_j) \\right )\n\\end{cases}"
},
{
"math_id": 542,
"text": "\\eta_j"
},
{
"math_id": 543,
"text": "A=\\Complex "
},
{
"math_id": 544,
"text": "\\Complex _{\\text{class}}(H)."
},
{
"math_id": 545,
"text": "G_2"
},
{
"math_id": 546,
"text": "(\\rho_1, V_1)"
},
{
"math_id": 547,
"text": "(\\rho_2, V_2)."
},
{
"math_id": 548,
"text": "\\eta_1\\otimes\\eta_2,"
},
{
"math_id": 549,
"text": "\\eta_1"
},
{
"math_id": 550,
"text": "\\eta_2"
},
{
"math_id": 551,
"text": "R(G_1\\times G_2)=R(G_1)\\otimes_{\\Z} R(G_2),"
},
{
"math_id": 552,
"text": "R(G_1)\\otimes_{\\Z} R(G_2)"
},
{
"math_id": 553,
"text": "\\Z"
},
{
"math_id": 554,
"text": "\\varphi: \\text{Ind}:\\bigoplus_{H\\in X}\\mathcal{R}(H) \\to \\mathcal{R}(G)"
},
{
"math_id": 555,
"text": "X,"
},
{
"math_id": 556,
"text": "G=\\bigcup_{H\\in X \\atop s\\in G}sHs^{-1}."
},
{
"math_id": 557,
"text": "\\chi_H \\in \\mathcal{R}(H), \\,H\\in X"
},
{
"math_id": 558,
"text": " d\\geq 1,"
},
{
"math_id": 559,
"text": " d\\cdot\\chi = \\sum_{H\\in X}\\text{Ind}^G_H(\\chi_H)."
},
{
"math_id": 560,
"text": "p"
},
{
"math_id": 561,
"text": "K \\subset H"
},
{
"math_id": 562,
"text": "V_0,"
},
{
"math_id": 563,
"text": "V=V_0\\otimes_\\R \\Complex "
},
{
"math_id": 564,
"text": "V_0"
},
{
"math_id": 565,
"text": "s \\cdot(v_0\\otimes z) = (s\\cdot v_0)\\otimes z"
},
{
"math_id": 566,
"text": "s\\in G, v_0\\in V_0, z\\in \\Complex."
},
{
"math_id": 567,
"text": "\\R"
},
{
"math_id": 568,
"text": "\\text{SU}(2)= \\left \\{\\begin{pmatrix} a & b \\\\ -\\overline{b} & \\overline{a} \\end{pmatrix} \\ : \\ |a|^2+|b|^2=1 \\right \\}."
},
{
"math_id": 569,
"text": "G \\subset \\text{SU}(2)"
},
{
"math_id": 570,
"text": "V=\\Complex^2."
},
{
"math_id": 571,
"text": "\\text{SU}(2)"
},
{
"math_id": 572,
"text": "\\rho(G)"
},
{
"math_id": 573,
"text": "G \\subset \\text{SU}(2)\\cap\\text{GL}_2(\\R )=\\text{SO}(2)=S^1."
},
{
"math_id": 574,
"text": "\\text{SU}(2)."
},
{
"math_id": 575,
"text": "G=\\{\\pm1,\\pm i,\\pm j, \\pm ij \\}."
},
{
"math_id": 576,
"text": " \\begin{cases} \\rho : G \\to \\text{GL}_2(\\Complex ) \\\\[4pt] \\rho(\\pm 1)=\\begin{pmatrix} \\pm1&0\\\\0& \\pm1\\end{pmatrix}, \\quad \\rho(\\pm i)=\\begin{pmatrix} \\pm i&0\\\\0&\\mp i\\end{pmatrix}, \\quad \\rho(\\pm j)=\\begin{pmatrix} 0&\\pm i\\\\ \\pm i&0\\end{pmatrix} \\end{cases}"
},
{
"math_id": 577,
"text": "B"
},
{
"math_id": 578,
"text": " \\begin{cases} \\rho : \\Z /m\\Z \\to \\text{GL}_2(\\R ) \\\\[4pt] \\rho(k)= \\begin{pmatrix} \\cos \\left(\\frac{2\\pi ik}{m}\\right ) & \\sin \\left (\\frac{2\\pi ik}{m} \\right ) \\\\ -\\sin \\left(\\frac{2\\pi ik}{m} \\right ) & \\cos \\left (\\frac{2\\pi ik}{m} \\right )\\end{pmatrix} \\end{cases}"
},
{
"math_id": 579,
"text": "V=V_0\\otimes \\Complex."
},
{
"math_id": 580,
"text": "J:V\\to V"
},
{
"math_id": 581,
"text": "J^2=-\\text{Id}."
},
{
"math_id": 582,
"text": "\\chi_V"
},
{
"math_id": 583,
"text": "V=V_0\\otimes \\Complex ,"
},
{
"math_id": 584,
"text": "S_n"
},
{
"math_id": 585,
"text": "S_3"
},
{
"math_id": 586,
"text": "S_n \\times S_m"
},
{
"math_id": 587,
"text": "S_{n+m}"
},
{
"math_id": 588,
"text": "\\bigoplus_{n \\ge 0} R(S_n)"
},
{
"math_id": 589,
"text": "GL_n(\\mathbf F_q)"
},
{
"math_id": 590,
"text": "SL_2(\\mathbf F_q)"
},
{
"math_id": 591,
"text": "\\rho: G \\to \\text{GL}(V),"
},
{
"math_id": 592,
"text": "\\rho(s)v"
},
{
"math_id": 593,
"text": "v\\in V."
},
{
"math_id": 594,
"text": "\\pi(g)^{-1}=\\pi(g^{-1}),"
},
{
"math_id": 595,
"text": "dt,"
},
{
"math_id": 596,
"text": "\\forall s \\in G: \\quad \\int_{G} f(t) dt = \\int_{G} f(st)dt."
},
{
"math_id": 597,
"text": "\\int_{G} dt=1,"
},
{
"math_id": 598,
"text": "\\forall s \\in G: \\quad \\int_{G} f(t) dt = \\int_{G} f(ts)dt."
},
{
"math_id": 599,
"text": "dt(s)=\\tfrac{1}{|G|}"
},
{
"math_id": 600,
"text": "\\rho, \\pi"
},
{
"math_id": 601,
"text": "T\\circ\\rho(s)=\\pi(s)\\circ T"
},
{
"math_id": 602,
"text": "(\\cdot|\\cdot)"
},
{
"math_id": 603,
"text": " (v|u)_\\rho=\\int_G(\\rho(t)v|\\rho(t)u)dt"
},
{
"math_id": 604,
"text": "dt."
},
{
"math_id": 605,
"text": "L^2(G)"
},
{
"math_id": 606,
"text": "L_s"
},
{
"math_id": 607,
"text": "L_s\\Phi(t)=\\Phi(s^{-1}t),"
},
{
"math_id": 608,
"text": "\\Phi\\in L^2(G), t\\in G."
},
{
"math_id": 609,
"text": "s\\mapsto L_s"
},
{
"math_id": 610,
"text": "R_s"
},
{
"math_id": 611,
"text": "R_s\\Phi(t)=\\Phi(ts)."
},
{
"math_id": 612,
"text": " s\\mapsto R_s."
},
{
"math_id": 613,
"text": "s\\mapsto R_s"
},
{
"math_id": 614,
"text": "L^2(G)\\cong L^1(G)\\cong\\Complex [G]."
},
{
"math_id": 615,
"text": "\\pi':G\\to\\text{GL}(V')"
},
{
"math_id": 616,
"text": "\\forall v\\in V, \\forall v'\\in V', \\forall s\\in G: \\qquad \\left \\langle\\pi'(s)v',\\pi(s)v \\right \\rangle=\\langle v',v\\rangle := v'(v)."
},
{
"math_id": 617,
"text": "\\pi'(s)v'=v'\\circ\\pi(s^{-1})"
},
{
"math_id": 618,
"text": "v'\\in V', s\\in G."
},
{
"math_id": 619,
"text": "\\pi'"
},
{
"math_id": 620,
"text": "(\\tau,V_\\tau)"
},
{
"math_id": 621,
"text": "V_\\tau"
},
{
"math_id": 622,
"text": "(\\tau, V_\\tau)"
},
{
"math_id": 623,
"text": "V_\\rho(\\tau)=\\sum_{V_{\\tau} \\cong U\\subset V_\\rho} U."
},
{
"math_id": 624,
"text": "U,"
},
{
"math_id": 625,
"text": "V_\\tau."
},
{
"math_id": 626,
"text": " V_\\rho"
},
{
"math_id": 627,
"text": "V_\\rho(\\tau),"
},
{
"math_id": 628,
"text": "p_\\tau: V\\to V(\\tau),"
},
{
"math_id": 629,
"text": "p_\\tau(v)=n_\\tau\\int_G\\overline{\\chi_\\tau(t)}\\rho(t)(v)dt,"
},
{
"math_id": 630,
"text": "n_\\tau=\\dim (V(\\tau))"
},
{
"math_id": 631,
"text": "\\chi_\\tau"
},
{
"math_id": 632,
"text": "(\\rho,V)"
},
{
"math_id": 633,
"text": "V^G=\\{v\\in V : \\rho(s)v=v \\,\\,\\,\\forall s \\in G\\}."
},
{
"math_id": 634,
"text": " Pv:= \\int_G \\rho(s)vds. "
},
{
"math_id": 635,
"text": " \\left. \\left (\\int_G \\rho(s)v ds \\right |w \\right )=\\int_G (\\rho(s)v|w) ds,"
},
{
"math_id": 636,
"text": " \\begin{align}\n\\left. \\left (\\int_G \\rho(s)(\\rho(t)v) ds \\right |w \\right )&=\\int_G \\left. \\left (\\rho \\left (tst^{-1} \\right )(\\rho(t)v) \\right |w \\right ) ds \\\\\n&= \\int_G (\\rho(ts)v|w) ds \\\\\n&= \\int(\\rho(t)\\rho(s)v|w) ds \\\\\n&= \\left. \\left (\\rho(t)\\int_G \\rho(s)v ds \\right |w \\right ),\n\\end{align} "
},
{
"math_id": 637,
"text": "(\\pi,V)"
},
{
"math_id": 638,
"text": "T:V\\to V"
},
{
"math_id": 639,
"text": "T\\circ\\pi(s)=\\pi(s)\\circ T"
},
{
"math_id": 640,
"text": "s\\in G,"
},
{
"math_id": 641,
"text": "\\lambda \\in \\Complex "
},
{
"math_id": 642,
"text": "T=\\lambda \\text{Id}."
},
{
"math_id": 643,
"text": " (\\Phi|\\Psi)=\\int_G\\Phi(t)\\overline{\\Psi(t)}dt."
},
{
"math_id": 644,
"text": "\\langle\\Phi,\\Psi\\rangle=\\int_G\\Phi(t)\\Psi(t^{-1})dt."
},
{
"math_id": 645,
"text": "\\chi'"
},
{
"math_id": 646,
"text": "V',"
},
{
"math_id": 647,
"text": "(\\chi|\\chi')=0."
},
{
"math_id": 648,
"text": "(\\chi|\\chi)=1,"
},
{
"math_id": 649,
"text": "\\chi_V."
},
{
"math_id": 650,
"text": "\\chi_W."
},
{
"math_id": 651,
"text": "(\\chi_V|\\chi_W)."
},
{
"math_id": 652,
"text": "(\\chi|\\chi)"
},
{
"math_id": 653,
"text": "\\dim (V)"
},
{
"math_id": 654,
"text": "L^2(G)."
},
{
"math_id": 655,
"text": "(\\eta, V_\\eta)"
},
{
"math_id": 656,
"text": "\\text{Ind}^G_H(\\eta)=(I,V_I)"
},
{
"math_id": 657,
"text": "V_I"
},
{
"math_id": 658,
"text": "\\Phi:G\\to V_\\eta"
},
{
"math_id": 659,
"text": "\\Phi(ls)=\\eta(l)\\Phi(s)"
},
{
"math_id": 660,
"text": "l\\in H, s\\in G."
},
{
"math_id": 661,
"text": "\\|\\Phi\\|_G=\\text{sup}_{s\\in G}\\|\\Phi(s)\\|"
},
{
"math_id": 662,
"text": "I"
},
{
"math_id": 663,
"text": "I(s)\\Phi(k)=\\Phi(ks)."
},
{
"math_id": 664,
"text": "\\dim (\\text{Hom}_G(V_\\eta,V_I))=\\langle V_\\eta,V_I \\rangle_G."
},
{
"math_id": 665,
"text": "T: \\text{Hom}_G(V_\\rho, I^G_H(\\eta))\\to \\text{Hom}_H(V_\\rho|_H, V_\\eta)."
},
{
"math_id": 666,
"text": "\\{e_1,\\ldots,e_{\\dim (\\tau)}\\}"
},
{
"math_id": 667,
"text": "\\tau_{k,l}(s)=\\langle\\tau(s)e_k,e_l\\rangle"
},
{
"math_id": 668,
"text": "k,l \\in \\{1, \\ldots, \\dim (\\tau)\\}, s\\in G."
},
{
"math_id": 669,
"text": "\\left (\\sqrt{\\dim (\\tau)}\\tau_{k,l} \\right)_{k,l}"
},
{
"math_id": 670,
"text": "G\\times G"
},
{
"math_id": 671,
"text": " L^2(G)\\cong_{G\\times G} \\widehat{\\bigoplus}_{\\tau \\in \\widehat{G}}\\text{End}(V_\\tau)\\cong_{G\\times G} \\widehat{\\bigoplus}_{\\tau \\in \\widehat{G}} \\tau\\otimes\\tau^* "
},
{
"math_id": 672,
"text": "\\widehat{G}"
},
{
"math_id": 673,
"text": "\\begin{cases}\\Phi \\mapsto \\sum_{\\tau\\in \\widehat{G}}\\tau(\\Phi) \\\\[5pt] \\tau(\\Phi)=\\int_G \\Phi(t)\\tau(t)dt\\in \\text{End}(V_\\tau) \\end{cases} "
}
] | https://en.wikipedia.org/wiki?curid=874400 |
874641 | Representation theory of the symmetric group | In mathematics, the representation theory of the symmetric group is a particular case of the representation theory of finite groups, for which a concrete and detailed theory can be obtained. This has a large area of potential applications, from symmetric function theory to quantum chemistry studies of atoms, molecules and solids.
The symmetric group S"n" has order "n"!. Its conjugacy classes are labeled by partitions of "n". Therefore according to the representation theory of a finite group, the number of inequivalent irreducible representations, over the complex numbers, is equal to the number of partitions of "n". Unlike the general situation for finite groups, there is in fact a natural way to parametrize irreducible representations by the same set that parametrizes conjugacy classes, namely by partitions of "n" or equivalently Young diagrams of size "n".
Each such irreducible representation can in fact be realized over the integers (every permutation acting by a matrix with integer coefficients); it can be explicitly constructed by computing the Young symmetrizers acting on a space generated by the Young tableaux of shape given by the Young diagram. The dimension formula_0 of the representation that corresponds to the Young diagram formula_1 is given by the hook length formula.
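The hook length formula lends itself to direct computation. The following is a minimal Python sketch (illustrative code, not part of the article; the function names are my own): it computes the dimension formula_0 of the irreducible representation labelled by a partition formula_1 by dividing "n"! by the product of all hook lengths. For the partition (2, 1) it returns 2, the dimension of the standard representation of S3.

```python
from math import factorial

def hook_lengths(shape):
    """Hook length of each box of the Young diagram `shape` (a partition,
    given as a weakly decreasing sequence of positive parts)."""
    # conjugate partition: column lengths of the diagram
    conj = [sum(1 for part in shape if part > j) for j in range(shape[0])]
    return [[(shape[i] - j) + (conj[j] - i) - 1
             for j in range(shape[i])]
            for i in range(len(shape))]

def dimension(shape):
    """Dimension of the irreducible representation of S_n labelled by `shape`,
    via the hook length formula: n! divided by the product of all hook lengths."""
    n = sum(shape)
    prod = 1
    for row in hook_lengths(shape):
        for h in row:
            prod *= h
    return factorial(n) // prod

print(dimension((2, 1)))     # 2  (standard representation of S_3)
print(dimension((3, 2)))     # 5  (an irreducible of S_5)
print(dimension((2, 2, 1)))  # 5
```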
To each irreducible representation ρ we can associate an irreducible character, χρ.
To compute χρ(π) where π is a permutation, one can use the combinatorial Murnaghan–Nakayama rule. Note that χρ is constant on conjugacy classes, that is, χρ(π) = χρ(σ−1πσ) for all permutations σ.
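As an illustration, the Murnaghan–Nakayama rule can be implemented recursively. The sketch below is my own (the function name and the beta-number encoding are implementation choices, not taken from the article): removing a border strip of length "r" corresponds to lowering one beta-number of the shape by "r", and the sign is determined by how many beta-numbers are jumped over. Both the shape and the cycle type are given as partitions.

```python
def mn_character(shape, rho):
    """Murnaghan-Nakayama rule: the character chi_shape of S_n evaluated on the
    conjugacy class with cycle type rho (both arguments are partitions)."""
    shape = [p for p in shape if p > 0]
    if not rho:
        return 1 if not shape else 0
    r, rest = rho[0], list(rho[1:])
    m = len(shape)
    beta = [shape[i] + (m - 1 - i) for i in range(m)]   # distinct beta-numbers
    total = 0
    for i, b in enumerate(beta):
        if b - r < 0 or (b - r) in beta:
            continue                                     # no border strip of length r here
        new_beta = sorted(beta[:i] + [b - r] + beta[i + 1:], reverse=True)
        height = sum(1 for c in beta if b - r < c < b)   # extra rows the strip crosses
        new_shape = [new_beta[k] - (m - 1 - k) for k in range(m)]
        total += (-1) ** height * mn_character(new_shape, rest)
    return total

# chi_{(2,1)} on the classes of S_3: identity, transpositions, 3-cycles
print([mn_character((2, 1), c) for c in [(1, 1, 1), (2, 1), (3,)]])  # [2, 0, -1]
```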
Over other fields the situation can become much more complicated. If the field "K" has characteristic equal to zero or greater than "n" then by Maschke's theorem the group algebra "K"S"n" is semisimple. In these cases the irreducible representations defined over the integers give the complete set of irreducible representations (after reduction modulo the characteristic if necessary).
However, the irreducible representations of the symmetric group are not known in arbitrary characteristic. In this context it is more usual to use the language of modules rather than representations. The representation obtained from an irreducible representation defined over the integers by reducing modulo the characteristic will not in general be irreducible. The modules so constructed are called "Specht modules", and every irreducible does arise inside some such module. There are now fewer irreducibles, and although they can be classified they are very poorly understood. For example, even their dimensions are not known in general.
The determination of the irreducible modules for the symmetric group over an arbitrary field is widely regarded as one of the most important open problems in representation theory.
Low-dimensional representations.
Symmetric groups.
The lowest-dimensional representations of the symmetric groups can be described explicitly, and over arbitrary fields. The smallest two degrees in characteristic zero are described here:
Every symmetric group has a one-dimensional representation called the trivial representation, where every element acts as the one by one identity matrix. For "n" ≥ 2, there is another irreducible representation of degree 1, called the sign representation or alternating character, which takes a permutation to the one by one matrix whose single entry is the sign ±1 of the permutation. These are the only one-dimensional representations of the symmetric groups: a one-dimensional representation has abelian image and therefore factors through the abelianization of the group, and the abelianization of the symmetric group is C2, the cyclic group of order 2.
For all "n", there is an "n"-dimensional representation of the symmetric group of order "n!", called the <templatestyles src="Template:Visible anchor/styles.css" />natural permutation representation, which consists of permuting "n" coordinates. This has the trivial subrepresentation consisting of vectors whose coordinates are all equal. The orthogonal complement consists of those vectors whose coordinates sum to zero, and when "n" ≥ 2, the representation on this subspace is an ("n" − 1)-dimensional irreducible representation, called the standard representation. Another ("n" − 1)-dimensional irreducible representation is found by tensoring with the sign representation. An exterior power formula_2 of the standard representation formula_3 is irreducible provided formula_4 .
For "n" ≥ 7, these are the lowest-dimensional irreducible representations of S"n" – all other irreducible representations have dimension at least "n". However for "n" = 4, the surjection from S4 to S3 allows S4 to inherit a two-dimensional irreducible representation. For "n" = 6, the exceptional transitive embedding of S5 into S6 produces another pair of five-dimensional irreducible representations.
Alternating groups.
The representation theory of the alternating groups is similar, though the sign representation disappears. For "n" ≥ 7, the lowest-dimensional irreducible representations are the trivial representation in dimension one, and the ("n" − 1)-dimensional representation from the other summand of the permutation representation, with all other irreducible representations having higher dimension, but there are exceptions for smaller "n".
The alternating groups for "n" ≥ 5 have only one one-dimensional irreducible representation, the trivial representation. For "n" = 3, 4 there are two additional one-dimensional irreducible representations, corresponding to maps to the cyclic group of order 3: A3 ≅ C3 and A4 → A4/"V" ≅ C3.
Tensor products of representations.
Kronecker coefficients.
The tensor product of two representations of formula_5 corresponding to the Young diagrams formula_7 is a combination of irreducible representations of formula_5,
formula_8
The coefficients formula_9 are called the Kronecker coefficients of the symmetric group.
They can be computed from the characters of the representations:
formula_10
The sum is over partitions formula_11 of formula_6, with formula_12 the corresponding conjugacy classes. The values of the characters formula_13 can be computed using the Frobenius formula. The coefficients formula_14 are
formula_15
where formula_16 is the number of times formula_17 appears in formula_11, so that formula_18.
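As a small sanity check of this character formula (my own code; the only inputs are the standard character table of S3 and the class data, hard-coded below): the classes of S3 have cycle types (1,1,1), (2,1), (3) with formula_14 equal to 6, 2, 3, and the computation recovers (2,1) ⊗ (2,1) ≅ (3) ⊕ (2,1) ⊕ (1,1,1).

```python
from fractions import Fraction

# Character table of S_3: rows are the irreducibles lambda, columns are the
# conjugacy classes of cycle type (1,1,1), (2,1), (3).
chi = {
    (3,):      [1,  1,  1],   # trivial
    (2, 1):    [2,  0, -1],   # standard
    (1, 1, 1): [1, -1,  1],   # sign
}
z = [6, 2, 3]                 # z_rho = n!/|C_rho| for the three classes

def kronecker(lam, mu, nu):
    """C_{lambda,mu,nu} = sum over classes of chi_lam chi_mu chi_nu / z_rho."""
    return sum(Fraction(chi[lam][k] * chi[mu][k] * chi[nu][k], z[k])
               for k in range(3))

for nu in chi:
    print(nu, kronecker((2, 1), (2, 1), nu))
# each coefficient comes out 1, i.e. (2,1) x (2,1) = (3) + (2,1) + (1,1,1)
```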
A few examples, written in terms of Young diagrams:
formula_19
formula_20
formula_21
formula_22
There is a simple rule for computing formula_23 for any Young diagram formula_1: the result is the sum of all Young diagrams that are obtained from formula_1 by removing one box and then adding one box, where the coefficients are one except for formula_1 itself, whose coefficient is formula_24, i.e., the number of different row lengths minus one.
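A short Python sketch of this rule (illustrative only; the function name is my own) enumerates the remove-a-box/add-a-box moves and assigns λ itself the coefficient formula_24. For λ = (5, 2), i.e. "n" = 7, it reproduces the decomposition of formula_20 given above.

```python
def remove_add_box(lam):
    """Decomposition of (n-1,1) tensor lam as {partition: multiplicity},
    following the rule: all diagrams obtained from lam by removing one box and
    then adding one box, with lam itself weighted by (#distinct row lengths - 1)."""
    lam = tuple(lam)
    result = {}
    rows = len(lam)
    for i in range(rows):
        # removing a box from row i must keep the row lengths weakly decreasing
        if i + 1 < rows and lam[i] - 1 < lam[i + 1]:
            continue
        removed = list(lam[:i]) + [lam[i] - 1] + list(lam[i + 1:])
        for j in range(rows + 1):
            added = removed[:]
            if j < rows:
                added[j] += 1
            else:
                added = added + [1]          # add the box as a new row
            mu = tuple(p for p in added if p > 0)
            if any(mu[k] < mu[k + 1] for k in range(len(mu) - 1)):
                continue                      # not a valid Young diagram
            if mu != lam:
                result[mu] = 1
    result[lam] = len(set(lam)) - 1           # number of different row lengths minus one
    return {mu: c for mu, c in result.items() if c > 0}

print(remove_add_box((5, 2)))
# {(4,3): 1, (4,2,1): 1, (6,1): 1, (5,1,1): 1, (5,2): 1}  for n = 7
```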
A constraint on the irreducible constituents of formula_25 is
formula_26
where the depth formula_27 of a Young diagram is the number of boxes that do not belong to the first row.
Reduced Kronecker coefficients.
For formula_1 a Young diagram and formula_28, formula_29 is a Young diagram of size formula_6. Then formula_30 is a bounded, non-decreasing function of formula_6, and
formula_31
is called a reduced Kronecker coefficient or stable Kronecker coefficient. There are known bounds on the value of formula_6 where formula_30 reaches its limit. The reduced Kronecker coefficients are structure constants of Deligne categories of representations of formula_5 with formula_32.
In contrast to Kronecker coefficients, reduced Kronecker coefficients are defined for any triple of Young diagrams, not necessarily of the same size. If formula_33, then formula_34 coincides with the Littlewood-Richardson coefficient formula_35. Reduced Kronecker coefficients can be written as linear combinations of Littlewood-Richardson coefficients via a change of bases in the space of symmetric functions, giving rise to expressions that are manifestly integral although not manifestly positive. Reduced Kronecker coefficients can also be written in terms of Kronecker and Littlewood-Richardson coefficients formula_36 via Littlewood's formula
formula_37
Conversely, it is possible to recover the Kronecker coefficients as linear combinations of reduced Kronecker coefficients.
Reduced Kronecker coefficients are implemented in the computer algebra system SageMath.
Eigenvalues of complex representations.
Given an element formula_38 of cycle-type formula_39 and order formula_40, the eigenvalues of formula_41 in a complex representation of formula_5 are of the type formula_42 with formula_43, where the integers formula_44 are called the cyclic exponents of formula_41 with respect to the representation.
There is a combinatorial description of the cyclic exponents of the symmetric group (and wreath products thereof). Defining formula_45, let the formula_46-index of a standard Young tableau be the sum of the values of formula_47 over the tableau's descents, formula_48.
Then the cyclic exponents of the representation of formula_5 described by the Young diagram formula_1 are the formula_46-indices of the corresponding Young tableaux.
In particular, if formula_49 is of order formula_6, then formula_50, and formula_51 coincides with the major index of formula_52 (the sum of the descents). The cyclic exponents of an irreducible representation of formula_5 then describe how it decomposes into representations of the cyclic group formula_53, with formula_42 being interpreted as the image of formula_41 in the (one-dimensional) representation characterized by formula_54.
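For example, when formula_49 is an "n"-cycle the cyclic exponents are exactly the major indices of the standard Young tableaux of shape formula_1. The sketch below (my own code, not from the article) enumerates the standard tableaux of a shape and sums their descents; for the shape (2, 1) it returns [1, 2], consistent with the standard representation of S3 restricting to a 3-cycle with eigenvalues ω and ω^2.

```python
def standard_tableaux(shape):
    """All standard Young tableaux of the given shape, each returned as a dict
    mapping the entry k to its (row, column) position."""
    n = sum(shape)
    def extend(tab, k):
        if k > n:
            yield dict(tab)
            return
        filled = [sum(1 for v in tab.values() if v[0] == r) for r in range(len(shape))]
        for r in range(len(shape)):
            c = filled[r]
            # k may be placed at the end of row r if the row is not full and
            # the box directly above (if any) is already filled
            if c < shape[r] and (r == 0 or filled[r - 1] > c):
                tab[k] = (r, c)
                yield from extend(tab, k + 1)
                del tab[k]
    yield from extend({}, 1)

def major_index(tab, n):
    """Sum of the descents of a standard tableau: k is a descent if k+1 sits
    in a strictly lower row than k."""
    return sum(k for k in range(1, n) if tab[k + 1][0] > tab[k][0])

shape = (2, 1)
n = sum(shape)
print(sorted(major_index(t, n) for t in standard_tableaux(shape)))  # [1, 2]
```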
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d_\\lambda"
},
{
"math_id": 1,
"text": "\\lambda"
},
{
"math_id": 2,
"text": "\\Lambda^k V"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "0\\leq k\\leq n-1"
},
{
"math_id": 5,
"text": "S_n"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "\\lambda,\\mu"
},
{
"math_id": 8,
"text": "\nV_\\lambda\\otimes V_\\mu \\cong \\sum_\\nu C_{\\lambda,\\mu,\\nu} V_\\nu\n"
},
{
"math_id": 9,
"text": "C_{\\lambda\\mu\\nu}\\in\\mathbb{N}"
},
{
"math_id": 10,
"text": "\nC_{\\lambda,\\mu,\\nu} = \\sum_\\rho \\frac{1}{z_\\rho} \\chi_\\lambda(C_\\rho)\\chi_\\mu(C_\\rho)\\chi_\\nu(C_\\rho)\n"
},
{
"math_id": 11,
"text": "\\rho"
},
{
"math_id": 12,
"text": "C_\\rho"
},
{
"math_id": 13,
"text": "\\chi_\\lambda(C_\\rho)"
},
{
"math_id": 14,
"text": "z_\\rho"
},
{
"math_id": 15,
"text": "\nz_\\rho = \\prod_{j=0}^n j^{i_j}i_j! = \\frac{n!}{|C_\\rho|}\n"
},
{
"math_id": 16,
"text": "i_j"
},
{
"math_id": 17,
"text": "j"
},
{
"math_id": 18,
"text": "\\sum i_jj = n"
},
{
"math_id": 19,
"text": "\n(n - 1, 1) \\otimes (n - 1, 1) \\cong (n) + (n - 1, 1) + (n - 2, 2)\n+ (n - 2, 1,1)\n"
},
{
"math_id": 20,
"text": "\n(n - 1, 1) \\otimes (n - 2, 2) \\underset{n>4}{\\cong} (n - 1, 1) + (n - 2, 2)\n+ (n - 2, 1, 1) + (n - 3, 3)\n+ (n - 3, 2, 1) \n"
},
{
"math_id": 21,
"text": "\n(n - 1, 1) \\otimes (n - 2, 1,1) \\cong (n - 1, 1) + (n - 2, 2) + (n - 2, 1,1)\n+ (n - 3, 2, 1) + (n - 3, 1,1,1)\n"
},
{
"math_id": 22,
"text": "\n\\begin{align}\n(n - 2, 2) \\otimes (n - 2, 2) \\cong & (n) + (n - 1, 1) + 2(n - 2, 2)\n+ (n - 2, 1,1) + (n - 3, 3)\n\\\\ & \n+ 2(n - 3, 2, 1) + (n - 3, 1,1,1)\n+ (n - 4, 4) + (n - 4, 3, 1)\n+ (n - 4, 2, 2)\n\\end{align}\n"
},
{
"math_id": 23,
"text": "(n-1,1)\\otimes \\lambda"
},
{
"math_id": 24,
"text": "\\#\\{\\lambda_i\\}-1"
},
{
"math_id": 25,
"text": "V_\\lambda\\otimes V_\\mu"
},
{
"math_id": 26,
"text": "\nC_{\\lambda,\\mu,\\nu}>0 \\implies |d_\\lambda-d_\\mu| \\leq d_\\nu \\leq d_\\lambda+d_\\mu\n"
},
{
"math_id": 27,
"text": "d_\\lambda=n-\\lambda_1"
},
{
"math_id": 28,
"text": "n\\geq \\lambda_1"
},
{
"math_id": 29,
"text": "\\lambda[n]=(n-|\\lambda|,\\lambda)"
},
{
"math_id": 30,
"text": "C_{\\lambda[n],\\mu[n],\\nu[n]}"
},
{
"math_id": 31,
"text": "\n\\bar{C}_{\\lambda,\\mu,\\nu} = \\lim_{n\\to\\infty} C_{\\lambda[n],\\mu[n],\\nu[n]}\n"
},
{
"math_id": 32,
"text": "n\\in \\mathbb{C}-\\mathbb{N}"
},
{
"math_id": 33,
"text": "|\\nu|=|\\lambda|+|\\mu|"
},
{
"math_id": 34,
"text": "\\bar{C}_{\\lambda,\\mu,\\nu}"
},
{
"math_id": 35,
"text": "c_{\\lambda,\\mu}^\\nu"
},
{
"math_id": 36,
"text": "c^\\lambda_{\\alpha\\beta\\gamma}"
},
{
"math_id": 37,
"text": "\n\\bar{C}_{\\lambda,\\mu,\\nu} = \\sum_{\\lambda',\\mu',\\nu',\\alpha,\\beta,\\gamma} C_{\\lambda',\\mu',\\nu'} c^{\\lambda}_{\\lambda'\\beta\\gamma} c^{\\mu}_{\\mu'\\alpha\\gamma} c^\\nu_{\\nu'\\alpha\\beta}\n"
},
{
"math_id": 38,
"text": "w\\in S_n"
},
{
"math_id": 39,
"text": "\\mu=(\\mu_1,\\mu_2,\\dots,\\mu_k)"
},
{
"math_id": 40,
"text": "m=\\text{lcm}(\\mu_i)"
},
{
"math_id": 41,
"text": "w"
},
{
"math_id": 42,
"text": "\\omega^{e_j}"
},
{
"math_id": 43,
"text": "\\omega=e^{\\frac{2\\pi i}{m}}"
},
{
"math_id": 44,
"text": "e_j\\in \\frac{\\mathbb{Z}}{m\\mathbb{Z}}"
},
{
"math_id": 45,
"text": "\\left(b_\\mu(1),\\dots,b_\\mu(n)\\right) = \\left(\\frac{m}{\\mu_1},2\\frac{m}{\\mu_1},\\dots, m, \\frac{m}{\\mu_2},2\\frac{m}{\\mu_2},\\dots, m,\\dots\\right)"
},
{
"math_id": 46,
"text": "\\mu"
},
{
"math_id": 47,
"text": "b_\\mu"
},
{
"math_id": 48,
"text": "\\text{ind}_\\mu(T) = \\sum_{k\\in \\{\\text{descents}(T)\\}} b_\\mu(k)\\bmod m"
},
{
"math_id": 49,
"text": " w "
},
{
"math_id": 50,
"text": "b_\\mu(k)=k"
},
{
"math_id": 51,
"text": "\\text{ind}_\\mu(T)"
},
{
"math_id": 52,
"text": "T"
},
{
"math_id": 53,
"text": " \\frac{\\mathbb{Z}}{n\\mathbb{Z}}"
},
{
"math_id": 54,
"text": "e_j"
}
] | https://en.wikipedia.org/wiki?curid=874641 |