id | title | text | formulas | url
---|---|---|---|---|
846500 | Roulette (curve) | Mathematical curves generated by rolling other curves together
In the differential geometry of curves, a roulette is a kind of curve, generalizing cycloids, epicycloids, hypocycloids, trochoids, epitrochoids, hypotrochoids, and involutes. On a basic level, it is the path traced by a curve while rolling on another curve without slipping.
Definition.
Informal definition.
Roughly speaking, a roulette is the curve described by a point (called the "generator" or "pole") attached to a given curve as that curve rolls without slipping, along a second given curve that is fixed. More precisely, given a curve attached to a plane which is moving so that the curve rolls, without slipping, along a given curve attached to a fixed plane occupying the same space, then a point attached to the moving plane describes a curve in the fixed plane, called a roulette.
Special cases and related concepts.
In the case where the rolling curve is a line and the generator is a point on the line, the roulette is called an involute of the fixed curve. If the rolling curve is a circle and the fixed curve is a line then the roulette is a trochoid. If, in this case, the point lies on the circle then the roulette is a cycloid.
A related concept is a glissette, the curve described by a point attached to a given curve as it slides along two (or more) given curves.
Formal definition.
Formally speaking, the curves must be differentiable curves in the Euclidean plane. The "fixed curve" is kept invariant; the "rolling curve" is subjected to a continuous congruence transformation such that at all times the curves are tangent at a point of contact that moves with the same speed when taken along either curve (another way to express this constraint is that the point of contact of the two curves is the instant centre of rotation of the congruence transformation). The resulting roulette is formed by the locus of the generator subjected to the same set of congruence transformations.
Modeling the original curves as curves in the complex plane, let formula_0 be the two natural parameterizations of the rolling (formula_1) and fixed (formula_2) curves, such that formula_3, formula_4, and formula_5 for all formula_6. The roulette of generator formula_7 as formula_1 is rolled on formula_2 is then given by the mapping:
formula_8
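As a concrete illustration, the following Python sketch (the circle and line parameterizations and the choice of generator are illustrative assumptions) evaluates this mapping numerically for a unit circle rolling on the real axis with the generator taken on the circle; the resulting roulette is the cycloid.
```python
import numpy as np

def roulette(f, df, r, dr, p, ts):
    """Evaluate t -> f(t) + (p - r(t)) * f'(t) / r'(t) on an array of parameters."""
    return f(ts) + (p - r(ts)) * df(ts) / dr(ts)

# Fixed curve: the real axis; rolling curve: a unit circle tangent to it at 0.
# These parameterizations satisfy r(0) = f(0), r'(0) = f'(0), |r'| = |f'| = 1.
f  = lambda t: t + 0j
df = lambda t: np.ones_like(t) + 0j
r  = lambda t: 1j - 1j * np.exp(1j * t)
dr = lambda t: np.exp(1j * t)

ts = np.linspace(0, 4 * np.pi, 400)
z = roulette(f, df, r, dr, p=0, ts=ts)   # generator p = 0 lies on the circle

# The real and imaginary parts trace the cycloid x = t - sin t, y = 1 - cos t.
assert np.allclose(z, (ts - np.sin(ts)) + 1j * (1 - np.cos(ts)))
```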
Generalizations.
If, instead of a single point being attached to the rolling curve, another given curve is carried along the moving plane, a family of congruent curves is produced. The envelope of this family may also be called a roulette.
Roulettes in higher spaces can certainly be imagined but one needs to align more than just the tangents.
Example.
If the fixed curve is a catenary and the rolling curve is a line, we have:
formula_9
formula_10
The parameterization of the line is chosen so that
formula_11
Applying the formula above we obtain:
formula_12
If "p" = −"i" the expression has a constant imaginary part (namely −"i") and the roulette is a horizontal line. An interesting application of this is that a square wheel could roll without bouncing on a road that is a matched series of catenary arcs.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r,f:\\mathbb R\\to\\Complex"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "r(0)=f(0)"
},
{
"math_id": 4,
"text": "r'(0) = f'(0)"
},
{
"math_id": 5,
"text": "|r'(t)| = |f'(t)| \\neq 0"
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": "p\\in\\Complex"
},
{
"math_id": 8,
"text": "t\\mapsto f(t)+(p-r(t)) {f'(t)\\over r'(t)}."
},
{
"math_id": 9,
"text": "f(t)=t+i(\\cosh(t)-1) \\qquad r(t)=\\sinh(t)"
},
{
"math_id": 10,
"text": "f'(t)=1+i\\sinh(t) \\qquad r'(t)=\\cosh(t)."
},
{
"math_id": 11,
"text": "|f'(t)| = \\sqrt{1^2+\\sinh^2(t)} = \\sqrt{\\cosh^2(t)} = |r'(t)|. "
},
{
"math_id": 12,
"text": "f(t)+(p-r(t)){f'(t)\\over r'(t)}\n=t-i+{p-\\sinh(t)+i(1+p\\sinh(t))\\over\\cosh(t)}\n=t-i+(p+i){1+i\\sinh(t)\\over\\cosh(t)}."
}
] | https://en.wikipedia.org/wiki?curid=846500 |
84652 | Hubbert curve |
The Hubbert curve is an approximation of the production rate of a resource over time. It is a symmetric logistic distribution curve, often confused with the "normal" gaussian function. It first appeared as an idealized symmetric curve in "Nuclear Energy and the Fossil Fuels," geologist M. King Hubbert's 1956 presentation to the American Petroleum Institute, delivered during his tenure at the Shell Oil Company. It has gained a high degree of popularity in the scientific community for predicting the depletion of various natural resources. The curve is the main component of Hubbert peak theory, which has led to the rise of peak oil concerns. Basing his calculations on the peak of oil well discovery in 1948, Hubbert used his model in 1956 to create a curve predicting that oil production in the contiguous United States would peak around 1970.
Shape.
The prototypical Hubbert curve is a probability density function of a logistic distribution curve. It is not a gaussian function (which is used to plot normal distributions), but the two have a similar appearance. The density of a Hubbert curve approaches zero more slowly than a gaussian function:
formula_0
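The three equivalent expressions, and the slower-than-Gaussian tail decay, can be checked with a short numerical sketch (the comparison Gaussian below, matched only in peak height, is an arbitrary illustrative choice):
```python
import numpy as np

t = np.linspace(-10, 10, 1001)
h1 = np.exp(-t) / (1 + np.exp(-t))**2      # logistic density
h2 = 1 / (2 + 2 * np.cosh(t))
h3 = 0.25 / np.cosh(t / 2)**2
assert np.allclose(h1, h2) and np.allclose(h2, h3)

g = 0.25 * np.exp(-t**2 / 2)               # a Gaussian with the same peak height
assert h1[-1] > g[-1]                      # the logistic tail decays more slowly
```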
The graph of a Hubbert curve consists of three key elements: a gradual rise from zero production that steepens into a rapid increase, a single peak ("Hubbert's peak") at which the production rate reaches its maximum, and a decline from the peak that mirrors the initial rise.
The actual shape of a graph of real world production trends is determined by various factors, such as development of enhanced production techniques, availability of competing resources, and government regulations on production or consumption. Because of such factors, real world Hubbert curves are often not symmetrical.
Application.
Peak oil.
Using the curve, Hubbert modeled the rate of petroleum production for several regions, determined by the rate of new oil well discovery, and extrapolated a world production curve. The relative steepness of decline in this projection is the main concern in peak oil discussions. This is because a steep drop in the production implies that global oil production will decline so rapidly that the world will not have enough time to develop sources of energy to replace the energy now used from oil, possibly leading to drastic social and economic impacts.
Other resources.
Hubbert models have been used to predict the production trends of various resources, such as natural gas (Hubbert's attempt in the late 1970s resulted in an inaccurate prediction that natural gas production would fall dramatically in the 1980s), coal, fissionable materials, helium, transition metals (such as copper), and water. At least one researcher has attempted to create a Hubbert curve for the whaling industry and caviar, while another applied it to cod.
Critique.
After the predicted early-1970s peak of oil production in the U.S., production declined over the following 35 years in a pattern closely matching the Hubbert curve. However, new extraction methods began reversing this trend in the mid-2000s, with production reaching 10.07 million b/d in November 2017 – the highest monthly level of crude oil production in U.S. history. As such, the Hubbert curve has to be calculated separately for different oil provinces, whose exploration started at different times, and for oil extracted by new techniques, sometimes called unconventional oil, resulting in individual Hubbert cycles. The Hubbert curve for US oil production is generally measured in years.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nx = {e^{-t}\\over(1+e^{-t})^2}={1\\over2+2\\cosh t}={1\\over4}\\operatorname{sech}^{2} {t\\over2}.\n"
}
] | https://en.wikipedia.org/wiki?curid=84652 |
8465724 | Fourier–Mukai transform | In algebraic geometry, a Fourier–Mukai transform "Φ""K" is a functor between derived categories of coherent sheaves D("X") → D("Y") for schemes "X" and "Y", which is, in a sense, an integral transform along a kernel object "K" ∈ D("X"×"Y"). Most natural functors, including basic ones like pushforwards and pullbacks, are of this type.
These kinds of functors were introduced by Mukai (1981) in order to prove an equivalence between the derived categories of coherent sheaves on an abelian variety and its dual. That equivalence is analogous to the classical Fourier transform that gives an isomorphism between tempered distributions on a finite-dimensional real vector space and its dual.
Definition.
Let "X" and "Y" be smooth projective varieties, "K" ∈ Db("X"×"Y") an object in the derived category of coherent sheaves on their product. Denote by "q" the projection "X"×"Y"→"X", by "p" the projection "X"×"Y"→"Y". Then the Fourier-Mukai transform "Φ""K" is a functor Db("X")→Db("Y") given by
formula_0
where R"p"* is the derived direct image functor and formula_1 is the derived tensor product.
Fourier-Mukai transforms always have left and right adjoints, both of which are also kernel transformations. Given two kernels "K"1 ∈ Db("X"×"Y") and "K"2 ∈ Db("Y"×"Z"), the composed functor "Φ""K"2 ∘ "Φ""K"1 is also a Fourier-Mukai transform.
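Explicitly, writing πXY, πYZ and πXZ for the projections from "X"×"Y"×"Z" onto the pairwise products, the composite is the Fourier–Mukai transform whose kernel is the convolution of the two kernels (a standard formula, recorded here for convenience; the projection notation is introduced only for this display):
```latex
K_2 \circ K_1 \;=\; \mathrm{R}\pi_{XZ*}\!\left(\pi_{XY}^{*}K_1 \otimes^{L} \pi_{YZ}^{*}K_2\right)
  \;\in\; \mathrm{D}^b(X \times Z),
\qquad
\Phi_{K_2} \circ \Phi_{K_1} \;\cong\; \Phi_{K_2 \circ K_1}.
```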
The structure sheaf of the diagonal formula_2, taken as a kernel, produces the identity functor on Db("X"). For a morphism "f":"X"→"Y", the structure sheaf of the graph Γ"f" produces a pushforward when viewed as an object in Db("X"×"Y"), or a pullback when viewed as an object in Db("Y"×"X").
On abelian varieties.
Let formula_3 be an abelian variety and formula_4 be its dual variety. The Poincaré bundle formula_5 on formula_6, normalized to be trivial on the fiber at zero, can be used as a Fourier-Mukai kernel. Let formula_7 and formula_8 be the canonical projections.
The corresponding Fourier–Mukai functor with kernel formula_5 is then
formula_9
There is a similar functor
formula_10
If the canonical class of a variety is ample or anti-ample, then the derived category of coherent sheaves determines the variety. In general, an abelian variety is not isomorphic to its dual, so this Fourier–Mukai transform gives examples of different varieties (with trivial canonical bundles) that have equivalent derived categories.
Let "g" denote the dimension of "X". The Fourier–Mukai transformation is nearly involutive :
formula_11
It interchanges Pontrjagin product and tensor product.
formula_12
formula_13
The Fourier–Mukai transform has also been used to prove the Künneth decomposition for the Chow motives of abelian varieties.
Applications in string theory.
In string theory, T-duality (short for "target space duality"), which relates two quantum field theories or string theories with different spacetime geometries, is closely related with the Fourier-Mukai transformation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{F} \\mapsto \\mathrm{R}p_*\\left(q^*\\mathcal{F} \\otimes^{L} K\\right)"
},
{
"math_id": 1,
"text": "\\otimes^L"
},
{
"math_id": 2,
"text": "\\mathcal{O}_{\\Delta} \\in \\mathrm{D}^b(X \\times X)"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "\\hat X"
},
{
"math_id": 5,
"text": "\\mathcal P"
},
{
"math_id": 6,
"text": "X \\times \\hat X"
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "\\hat p"
},
{
"math_id": 9,
"text": "R\\mathcal S: \\mathcal F \\in D(X) \\mapsto R\\hat p_\\ast (p^\\ast \\mathcal F \\otimes \\mathcal P) \\in D(\\hat X)"
},
{
"math_id": 10,
"text": "R\\widehat{\\mathcal S} : D(\\hat X) \\to D(X). \\, "
},
{
"math_id": 11,
"text": "R\\mathcal S \\circ R\\widehat{\\mathcal S} = (-1)^\\ast [-g]"
},
{
"math_id": 12,
"text": "R\\mathcal S(\\mathcal F \\ast \\mathcal G) = R\\mathcal S(\\mathcal F) \\otimes R\\mathcal S(\\mathcal G)"
},
{
"math_id": 13,
"text": "R\\mathcal S(\\mathcal F \\otimes \\mathcal G) = R\\mathcal S(\\mathcal F) \\ast R\\mathcal S(\\mathcal G)[g]"
}
] | https://en.wikipedia.org/wiki?curid=8465724 |
8465779 | Craig's theorem | In mathematical logic, Craig's theorem (also known as Craig's trick) states that any recursively enumerable set of well-formed formulas of a first-order language is (primitively) recursively axiomatizable. This result is not related to the well-known Craig interpolation theorem, although both results are named after the same logician, William Craig.
Recursive axiomatization.
Let formula_0 be an enumeration of the axioms of a recursively enumerable set formula_1 of first-order formulas. Construct another set formula_2 consisting of
formula_3
for each positive integer formula_4. The deductive closures of formula_2 and formula_1 are thus equivalent; the proof will show that formula_2 is a recursive set. A decision procedure for formula_2 is given by the following informal reasoning. Each member of formula_2 is of the form
formula_5
Since each formula has finite length, it is checkable whether or not it is of the said form. If it is of the said form and consists of formula_6 conjuncts, it is in formula_2 if the (reoccurring) conjunct is formula_7; otherwise it is not in formula_2. Again, it is checkable whether the conjunct is in fact formula_8 by going through the enumeration of the axioms of formula_1 and then checking symbol-for-symbol whether the expressions are identical.
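The procedure can be made completely concrete. The Python sketch below is only illustrative: it assumes a total computable function `enumerate_axioms(i)` standing in for the given enumeration of axioms, together with a simple string encoding of conjunctions.
```python
def in_T_star(phi: str, enumerate_axioms) -> bool:
    """Decide membership in T*.

    `enumerate_axioms(i)` is a stand-in for the total computable function that
    returns the i-th axiom of the enumeration; the string encoding
    "A & A & ... & A" of conjunctions is an illustrative choice.
    """
    conjuncts = phi.split(" & ")
    i = len(conjuncts)
    # Form check: the candidate must be a conjunction of i identical conjuncts.
    if any(c != conjuncts[0] for c in conjuncts):
        return False
    # Membership check: the repeated conjunct must equal the i-th axiom.
    return conjuncts[0] == enumerate_axioms(i)
```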
Primitive recursive axiomatizations.
The proof above shows that for each recursively enumerable set of axioms there is a recursive set of axioms with the same deductive closure. A set of axioms is primitive recursive if there is a primitive recursive function that decides membership in the set. To obtain a primitive recursive axiomatization, instead of replacing a formula formula_9 with
formula_3
one instead replaces it with
formula_10 (*)
where formula_11 is a function that, given formula_4, returns a computation history showing that formula_9 is in the original recursively enumerable set of axioms. It is possible for a primitive recursive function to parse an expression of form (*) to obtain formula_9 and formula_6. Then, because Kleene's T predicate is primitive recursive, it is possible for a primitive recursive function to verify that formula_6 is indeed a computation history as required.
Philosophical implications.
If formula_1 is a recursively axiomatizable theory and we divide its predicates into two disjoint sets formula_12 and formula_13, then those theorems of formula_1 that are in the vocabulary formula_12 are recursively enumerable, and hence, based on Craig's theorem, axiomatizable. Carl G. Hempel argued based on this that since all science's predictions are in the vocabulary of observation terms, the theoretical vocabulary of science is in principle eliminable. He himself raised two objections to this argument: 1) the new axioms of science are practically unmanageable, and 2) science uses inductive reasoning and eliminating theoretical terms may alter the inductive relations between observational sentences. Hilary Putnam argues that this argument is based on a misconception that the sole aim of science is successful prediction. He proposes that the main reason we need theoretical terms is that we wish to talk about theoretical entities (such as viruses, radio stars, and elementary particles).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A_1,A_2,\\dots"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "T^*"
},
{
"math_id": 3,
"text": "\\underbrace{A_i\\land\\dots\\land A_i}_i"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "\\underbrace{B_j\\land\\dots\\land B_j}_j."
},
{
"math_id": 6,
"text": "j"
},
{
"math_id": 7,
"text": "A_j"
},
{
"math_id": 8,
"text": "A_n"
},
{
"math_id": 9,
"text": "A_i"
},
{
"math_id": 10,
"text": "\\underbrace{A_i\\land\\dots\\land A_i}_{f(i)}"
},
{
"math_id": 11,
"text": "f"
},
{
"math_id": 12,
"text": "V_A"
},
{
"math_id": 13,
"text": "V_B"
}
] | https://en.wikipedia.org/wiki?curid=8465779 |
8468 | Determinant | In mathematics, invariant of square matrices
In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix "A" is commonly denoted det("A"), det "A", or |"A"|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism.
The determinant is completely determined by the two following properties: the determinant of a product of matrices is the product of their determinants, and the determinant of a triangular matrix is the product of its diagonal entries.
The determinant of a 2 × 2 matrix is
formula_0
and the determinant of a 3 × 3 matrix is
formula_1
The determinant of an "n" × "n" matrix can be defined in several equivalent ways, the most common being the Leibniz formula, which expresses the determinant as a sum of formula_2 (the factorial of "n") signed products of matrix entries. It can be computed by the Laplace expansion, which expresses the determinant as a linear combination of determinants of submatrices, or with Gaussian elimination, which allows computing a row echelon form with the same determinant, equal to the product of the diagonal entries of the row echelon form.
Determinants can also be defined by some of their properties. Namely, the determinant is the unique function defined on the "n" × "n" matrices that has the four following properties: (1) the determinant of the identity matrix is 1; (2) the exchange of two rows multiplies the determinant by −1; (3) multiplying a row by a number multiplies the determinant by this number; and (4) adding a multiple of one row to another row does not change the determinant.
The above properties relating to rows (properties 2–4) may be replaced by the corresponding statements with respect to columns.
The determinant is invariant under matrix similarity. This implies that, given a linear endomorphism of a finite-dimensional vector space, the determinant of the matrix that represents it on a basis does not depend on the chosen basis. This allows defining the "determinant" of a linear endomorphism, which does not depend on the choice of a coordinate system.
Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and determinants can be used to solve these equations (Cramer's rule), although other methods of solution are computationally much more efficient. Determinants are used for defining the characteristic polynomial of a square matrix, whose roots are the eigenvalues. In geometry, the signed n-dimensional volume of an n-dimensional parallelepiped is expressed by a determinant, and the determinant of a linear endomorphism determines how the orientation and the n-dimensional volume are transformed under the endomorphism. This is used in calculus with exterior differential forms and the Jacobian determinant, in particular for changes of variables in multiple integrals.
Two by two matrices.
The determinant of a 2 × 2 matrix formula_3 is denoted either by "det" or by vertical bars around the matrix, and is defined as
formula_4
For example,
formula_5
First properties.
The determinant has several key properties that can be proved by direct evaluation of the definition for formula_6-matrices, and that continue to hold for determinants of larger matrices. They are as follows: first, the determinant of the identity matrix formula_7 is 1.
Second, the determinant is zero if two rows are the same:
formula_8
This holds similarly if the two columns are the same. Moreover,
formula_9
Finally, if any column is multiplied by some number formula_10 (i.e., all entries in that column are multiplied by that number), the determinant is also multiplied by that number:
formula_11
Geometric meaning.
If the matrix entries are real numbers, the matrix A can be used to represent two linear maps: one that maps the standard basis vectors to the rows of A, and one that maps them to the columns of A. In either case, the images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the rows of the above matrix is the one with vertices at (0, 0), ("a", "b"), ("a" + "c", "b" + "d"), and ("c", "d"), as shown in the accompanying diagram.
The absolute value of "ad" − "bc" is the area of the parallelogram, and thus represents the scale factor by which areas are transformed by A. (The parallelogram formed by the columns of A is in general a different parallelogram, but since the determinant is symmetric with respect to rows and columns, the area will be the same.)
The absolute value of the determinant together with the sign becomes the "oriented area" of the parallelogram. The oriented area is the same as the usual area, except that it is negative when the angle from the first to the second vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for the identity matrix).
To show that "ad" − "bc" is the signed area, one may consider a matrix containing two vectors u ≡ ("a", "b") and v ≡ ("c", "d") representing the parallelogram's sides. The signed area can be expressed as |u| |v| sin "θ" for the angle "θ" between the vectors, which is simply base times height, the length of one vector times the perpendicular component of the other. Due to the sine this already is the signed area, yet it may be expressed more conveniently using the cosine of the complementary angle to a perpendicular vector, e.g. u⊥ = (−"b", "a"), so that |u⊥| |v| cos "θ′" becomes the signed area in question, which can be determined by the pattern of the scalar product to be equal to "ad" − "bc" according to the following equations:
formula_12
Thus the determinant gives the scaling factor and the orientation induced by the mapping represented by "A". When the determinant is equal to one, the linear mapping defined by the matrix is equi-areal and orientation-preserving.
The object known as the "bivector" is related to these ideas. In 2D, it can be interpreted as an "oriented plane segment" formed by imagining two vectors each with origin (0, 0), and coordinates ("a", "b") and ("c", "d"). The bivector magnitude (denoted by ("a", "b") ∧ ("c", "d")) is the "signed area", which is also the determinant "ad" − "bc".
If an "n" × "n" real matrix "A" is written in terms of its column vectors formula_13, then
formula_14
This means that formula_15 maps the unit "n"-cube to the "n"-dimensional parallelotope defined by the vectors formula_16, the region formula_17.
The determinant gives the signed "n"-dimensional volume of this parallelotope, formula_18 and hence describes more generally the "n"-dimensional volume scaling factor of the linear transformation produced by "A". (The sign shows whether the transformation preserves or reverses orientation.) In particular, if the determinant is zero, then this parallelotope has volume zero and is not fully "n"-dimensional, which indicates that the dimension of the image of "A" is less than "n". This means that "A" produces a linear transformation which is neither onto nor one-to-one, and so is not invertible.
Definition.
Let "A" be a square matrix with "n" rows and "n" columns, so that it can be written as
formula_19
The entries formula_20 etc. are, for many purposes, real or complex numbers. As discussed below, the determinant is also defined for matrices whose entries are in a commutative ring.
The determinant of "A" is denoted by det("A"), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets:
formula_21
There are various equivalent ways to define the determinant of a square matrix "A", i.e. one with the same number of rows and columns: the determinant can be defined via the Leibniz formula, an explicit formula involving sums of products of certain entries of the matrix. The determinant can also be characterized as the unique function depending on the entries of the matrix satisfying certain properties. This approach can also be used to compute determinants by simplifying the matrices in question.
Leibniz formula.
3 × 3 matrices.
The "Leibniz formula" for the determinant of a 3 × 3 matrix is the following:
formula_22
In this expression, each term has one factor from each row, all in different columns, arranged in increasing row order. For example, "bdi" has "b" from the first row second column, "d" from the second row first column, and "i" from the third row third column. The signs are determined by how many transpositions of factors are necessary to arrange the factors in increasing order of their columns (given that the terms are arranged left-to-right in increasing row order): positive for an even number of transpositions and negative for an odd number. For the example of "bdi", the single transposition of "bd" to "db" gives "dbi," whose three factors are from the first, second and third columns respectively; this is an odd number of transpositions, so the term appears with negative sign.
The rule of Sarrus is a mnemonic for the expanded form of this determinant: the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, when the copies of the first two columns of the matrix are written beside it as in the illustration. This scheme for calculating the determinant of a 3 × 3 matrix does not carry over into higher dimensions.
"n" × "n" matrices.
Generalizing the above to higher dimensions, the determinant of an formula_23 matrix is an expression involving permutations and their signatures. A permutation of the set formula_24 is a bijective function formula_25 from this set to itself, with values formula_26 exhausting the entire set. The set of all such permutations, called the symmetric group, is commonly denoted formula_27. The signature formula_28 of a permutation formula_25 is formula_29 if the permutation can be obtained with an even number of transpositions (exchanges of two entries); otherwise, it is formula_30
Given a matrix
formula_31
the Leibniz formula for its determinant is, using sigma notation for the sum,
formula_32
Using pi notation for the product, this can be shortened into
formula_33.
The Levi-Civita symbol formula_34 is defined on the n-tuples of integers in formula_35 as 0 if two of the integers are equal, and otherwise as the signature of the permutation defined by the "n-"tuple of integers. With the Levi-Civita symbol, the Leibniz formula becomes
formula_36
where the sum is taken over all n-tuples of integers in formula_37
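A direct transcription of the Leibniz formula makes its combinatorial structure explicit. The following Python sketch (illustrative only; the cost grows factorially with "n") sums signed products over all permutations:
```python
from itertools import permutations
from math import prod

def sign(perm):
    """Signature of a permutation: +1 for an even number of inversions, -1 otherwise."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    """Determinant by the Leibniz formula (cost grows like n!, so illustrative only)."""
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[2, -3, 1], [2, 0, -1], [1, 4, 5]]
assert det_leibniz(A) == 49   # agrees with cofactor expansion done by hand
```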
Properties of the determinant.
Characterization of the determinant.
The determinant can be characterized by the following three key properties. To state these, it is convenient to regard an formula_23-matrix "A" as being composed of its formula_38 columns, so denoted as
formula_39
where the column vector formula_40 (for each "i") is composed of the entries of the matrix in the "i"-th column.
If the determinant is defined using the Leibniz formula as above, these three properties can be proved by direct inspection of that formula. Some authors also approach the determinant directly using these three properties: it can be shown that there is exactly one function that assigns to any formula_23-matrix "A" a number that satisfies these three properties. This also shows that this more abstract approach to the determinant yields the same definition as the one using the Leibniz formula.
To see this it suffices to expand the determinant by multi-linearity in the columns into a (huge) linear combination of determinants of matrices in which each column is a standard basis vector. These determinants are either 0 (when two of the columns coincide) or else ±1 (when the columns are distinct standard basis vectors, in which case the determinant is the sign of the corresponding permutation), so the linear combination gives the expression above in terms of the Levi-Civita symbol. While less technical in appearance, this characterization cannot entirely replace the Leibniz formula in defining the determinant, since without it the existence of an appropriate function is not clear.
Immediate consequences.
These rules have several further consequences:
Example.
These characterizing properties and their consequences listed above are theoretically significant, but can also be used to compute determinants for concrete matrices. In fact, Gaussian elimination can be applied to bring any matrix into upper triangular form, and the steps in this algorithm affect the determinant in a controlled way. The following concrete example illustrates the computation of the determinant of the matrix formula_15 using that method:
formula_53
Combining these equalities gives formula_54
Transpose.
The determinant of the transpose of formula_15 equals the determinant of "A":
formula_55.
This can be proven by inspecting the Leibniz formula. This implies that in all the properties mentioned above, the word "column" can be replaced by "row" throughout. For example, viewing an "n" × "n" matrix as being composed of "n" rows, the determinant is an "n"-linear function.
Multiplicativity and matrix groups.
The determinant is a "multiplicative map", i.e., for square matrices formula_15 and formula_56 of equal size, the determinant of a matrix product equals the product of their determinants:
formula_57
This key fact can be proven by observing that, for a fixed matrix formula_56, both sides of the equation are alternating and multilinear as a function depending on the columns of formula_15. Moreover, they both take the value formula_58 when formula_15 is the identity matrix. The above-mentioned unique characterization of alternating multilinear maps therefore shows this claim.
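The multiplicative property is easy to observe numerically; the following sketch uses NumPy's built-in determinant purely for the comparison:
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
```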
A matrix formula_15 with entries in a field is invertible precisely if its determinant is nonzero. This follows from the multiplicativity of the determinant and the formula for the inverse involving the adjugate matrix mentioned below. In this event, the determinant of the inverse matrix is given by
formula_59.
In particular, products and inverses of matrices with non-zero determinant (respectively, determinant one) still have this property. Thus, the set of such matrices (of fixed size formula_38 over a field formula_60) forms a group known as the general linear group formula_61 (respectively, a subgroup called the special linear group formula_62). More generally, the word "special" indicates the subgroup of another matrix group consisting of matrices of determinant one. Examples include the special orthogonal group (which if "n" is 2 or 3 consists of all rotation matrices), and the special unitary group.
Because the determinant respects multiplication and inverses, it is in fact a group homomorphism from formula_61 into the multiplicative group formula_63 of nonzero elements of formula_60. This homomorphism is surjective and its kernel is formula_64 (the matrices with determinant one). Hence, by the first isomorphism theorem, this shows that formula_64 is a normal subgroup of formula_61, and that the quotient group formula_65 is isomorphic to formula_63.
The Cauchy–Binet formula is a generalization of that product formula for "rectangular" matrices. This formula can also be recast as a multiplicative formula for compound matrices whose entries are the determinants of all quadratic submatrices of a given matrix.
Laplace expansion.
Laplace expansion expresses the determinant of a matrix formula_15 recursively in terms of determinants of smaller matrices, known as its minors. The minor formula_66 is defined to be the determinant of the formula_67-matrix that results from formula_15 by removing the formula_68-th row and the formula_69-th column. The expression formula_70 is known as a cofactor. For every formula_68, one has the equality
formula_71
which is called the "Laplace expansion along the ith row". For example, the Laplace expansion along the first row (formula_72) gives the following formula:
formula_73
Unwinding the determinants of these formula_6-matrices gives back the Leibniz formula mentioned above. Similarly, the "Laplace expansion along the formula_69-th column" is the equality
formula_74
Laplace expansion can be used iteratively for computing determinants, but this approach is inefficient for large matrices. However, it is useful for computing the determinants of highly symmetric matrices such as the Vandermonde matrix
formula_75
The "n"-term Laplace expansion along a row or column can be generalized to write an "n" × "n" determinant as a sum of formula_76 terms, each the product of the determinant of a "k" × "k" submatrix and the determinant of the complementary ("n"−"k") × ("n"−"k") submatrix.
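The recursion behind Laplace expansion translates directly into code. The sketch below (plain Python, always expanding along the first row; intended as an illustration rather than an efficient method) computes a determinant from its minors:
```python
def det_laplace(A):
    """Determinant by Laplace expansion along the first row (illustrative, O(n!))."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]   # delete row 0 and column j
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total

A = [[2, -3, 1], [2, 0, -1], [1, 4, 5]]
assert det_laplace(A) == 49
```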
Adjugate matrix.
The adjugate matrix formula_77 is the transpose of the matrix of the cofactors, that is,
formula_78
For every matrix, one has
formula_79
Thus the adjugate matrix can be used for expressing the inverse of a nonsingular matrix:
formula_80
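These identities can be checked directly for a small matrix; in the sketch below the cofactors are computed from minors, and the adjugate reproduces the inverse up to the factor det("A"):
```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix of a square array (illustrative sketch)."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, -3.0, 1.0], [2.0, 0.0, -1.0], [1.0, 4.0, 5.0]])
adjA = adjugate(A)
assert np.allclose(A @ adjA, np.linalg.det(A) * np.eye(3))      # A adj(A) = det(A) I
assert np.allclose(np.linalg.inv(A), adjA / np.linalg.det(A))   # inverse via adjugate
```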
Block matrices.
The formula for the determinant of a formula_6-matrix above continues to hold, under appropriate further assumptions, for a block matrix, i.e., a matrix composed of four submatrices formula_81 of dimension formula_82, formula_83, formula_84 and formula_23, respectively. The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the Schur complement, is
formula_85
If formula_15 is invertible, then it follows with results from the section on multiplicativity that
formula_86
which simplifies to formula_87 when formula_88 is a formula_89-matrix.
A similar result holds when formula_88 is invertible, namely
formula_90
Both results can be combined to derive Sylvester's determinant theorem, which is also stated below.
If the blocks are square matrices of the "same" size further formulas hold. For example, if formula_91 and formula_88 commute (i.e., formula_92), then
formula_93
This formula has been generalized to matrices composed of more than formula_6 blocks, again under appropriate commutativity conditions among the individual blocks.
For formula_94 and formula_95, the following formula holds (even if formula_15 and formula_56 do not commute)
formula_96
Sylvester's determinant theorem.
Sylvester's determinant theorem states that for "A", an "m" × "n" matrix, and "B", an "n" × "m" matrix (so that "A" and "B" have dimensions allowing them to be multiplied in either order forming a square matrix):
formula_97
where "I""m" and "I""n" are the "m" × "m" and "n" × "n" identity matrices, respectively.
From this general result several consequences follow.
Sum.
The determinant of the sum formula_98 of two square matrices of the same size is not in general expressible in terms of the determinants of "A" and of "B".
However, for positive semidefinite matrices formula_15, formula_56 and formula_91 of equal size,
formula_99
with the corollary
formula_100
The Brunn–Minkowski theorem implies that the nth root of the determinant is a concave function when restricted to Hermitian positive-definite formula_101 matrices. Therefore, if A and B are Hermitian positive-definite formula_101 matrices, one has
formula_102 since the nth root of the determinant is a homogeneous function.
Sum identity for 2×2 matrices.
For the special case of formula_103 matrices with complex entries, the determinant of the sum can be written in terms of determinants and traces in the following identity:
formula_104
<templatestyles src="Math_proof/styles.css" />Proof of identity
This can be shown by writing out each term in components formula_105. The left-hand side is
formula_106
Expanding gives
formula_107
The terms which are quadratic in formula_15 are seen to be formula_108, and similarly for formula_56, so the expression can be written
formula_109
We can then write the cross-terms as
formula_110
which can be recognized as
formula_111
which completes the proof.
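The identity, which expresses det("A" + "B") as det "A" + det "B" + tr("A") tr("B") − tr("AB"), is easy to verify numerically for random complex formula_103 matrices (an illustrative sketch):
```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

lhs = np.linalg.det(A + B)
rhs = (np.linalg.det(A) + np.linalg.det(B)
       + np.trace(A) * np.trace(B) - np.trace(A @ B))
assert np.isclose(lhs, rhs)
```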
This has an application to formula_103 matrix algebras. For example, consider the complex numbers as a matrix algebra. The complex numbers have a representation as matrices of the form
formula_112
with formula_113 and formula_114 real. Since formula_115, taking formula_116 and formula_117 in the above identity gives
formula_118
This result followed just from formula_115 and formula_119.
Properties of the determinant in relation to other notions.
Eigenvalues and characteristic polynomial.
The determinant is closely related to two other central concepts in linear algebra, the eigenvalues and the characteristic polynomial of a matrix. Let formula_15 be an formula_23-matrix with complex entries. Then, by the Fundamental Theorem of Algebra, formula_15 must have exactly "n" eigenvalues formula_120. (Here it is understood that an eigenvalue with algebraic multiplicity μ occurs μ times in this list.) Then, it turns out the determinant of A is equal to the "product" of these eigenvalues,
formula_121
The product of all non-zero eigenvalues is referred to as pseudo-determinant.
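Numerically, the relation between the determinant and the eigenvalues can be observed directly (a sketch; the eigenvalues of a real matrix may be complex, so the product is formed over the complex values):
```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))

eigenvalues = np.linalg.eigvals(A)
assert np.isclose(np.prod(eigenvalues), np.linalg.det(A))   # det = product of eigenvalues
```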
From this, one immediately sees that the determinant of a matrix formula_15 is zero if and only if formula_122 is an eigenvalue of formula_15. In other words, formula_15 is invertible if and only if formula_122 is not an eigenvalue of formula_15.
The characteristic polynomial is defined as
formula_123
Here, formula_124 is the indeterminate of the polynomial and formula_42 is the identity matrix of the same size as formula_15. By means of this polynomial, determinants can be used to find the eigenvalues of the matrix formula_15: they are precisely the roots of this polynomial, i.e., those complex numbers formula_125 such that
formula_126
A Hermitian matrix is positive definite if all its eigenvalues are positive. Sylvester's criterion asserts that this is equivalent to the determinants of the submatrices
formula_127
being positive, for all formula_128 between formula_129 and formula_38.
Trace.
The trace tr("A") is by definition the sum of the diagonal entries of A and also equals the sum of the eigenvalues. Thus, for complex matrices A,
formula_130
or, for real matrices A,
formula_131
Here exp(A) denotes the matrix exponential of A, because every eigenvalue λ of A corresponds to the eigenvalue exp(λ) of exp(A). In particular, given any logarithm of A, that is, any matrix L satisfying
formula_132
the determinant of A is given by
formula_133
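In particular, det(exp("A")) = exp(tr("A")) for every square matrix "A", which can be checked numerically (a sketch using SciPy's matrix exponential, assuming SciPy is available):
```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

# det(exp(A)) = exp(tr(A)); equivalently, det(B) = exp(tr(L)) whenever B = exp(L).
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))
```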
For example, for "n" = 2, "n" = 3, and "n" = 4, respectively,
formula_134
cf. Cayley–Hamilton theorem. Such expressions are deducible from combinatorial arguments, Newton's identities, or the Faddeev–LeVerrier algorithm. That is, for generic "n", det "A" = (−1)"n""c"0, the signed constant term of the characteristic polynomial, determined recursively from
formula_135
In the general case, this may also be obtained from
formula_136
where the sum is taken over the set of all integers "kl" ≥ 0 satisfying the equation
formula_137
The formula can be expressed in terms of the complete exponential Bell polynomial of "n" arguments "s""l" = −("l" – 1)! tr("A""l") as
formula_138
This formula can also be used to find the determinant of a matrix "AIJ" with multidimensional indices "I" = ("i"1, "i"2, ..., "ir") and "J" = ("j"1, "j"2, ..., "jr"). The product and trace of such matrices are defined in a natural way as
formula_139
An important identity, valid in arbitrary dimension "n", can be obtained from the Mercator series expansion of the logarithm when the expansion converges. If every eigenvalue of "A" is less than 1 in absolute value,
formula_140
where "I" is the identity matrix. More generally, if
formula_141
is expanded as a formal power series in s then all coefficients of sm for "m" > "n" are zero and the remaining polynomial is det("I" + "sA").
Upper and lower bounds.
For a positive definite matrix "A", the trace operator gives the following tight lower and upper bounds on the log determinant
formula_142
with equality if and only if "A" = "I". This relationship can be derived via the formula for the Kullback-Leibler divergence between two multivariate normal distributions.
Also,
formula_143
These inequalities can be proved by expressing the traces and the determinant in terms of the eigenvalues. As such, they represent the well-known fact that the harmonic mean is less than the geometric mean, which is less than the arithmetic mean, which is, in turn, less than the root mean square.
Derivative.
The Leibniz formula shows that the determinant of real (or analogously for complex) square matrices is a polynomial function from formula_144 to formula_145. In particular, it is everywhere differentiable. Its derivative can be expressed using Jacobi's formula:
formula_146
where formula_77 denotes the adjugate of formula_15. In particular, if formula_15 is invertible, we have
formula_147
Expressed in terms of the entries of formula_15, these are
formula_148
Yet another equivalent formulation is
formula_149,
using big O notation. The special case where formula_150, the identity matrix, yields
formula_151
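In particular, det("I" + "εX") = 1 + "ε" tr("X") + O("ε"2); this first-order behaviour can be seen with a simple finite-difference check (a sketch; the step size is an arbitrary small number):
```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((4, 4))
eps = 1e-6   # an arbitrary small step size

numeric = (np.linalg.det(np.eye(4) + eps * X) - 1) / eps
assert np.isclose(numeric, np.trace(X), atol=1e-4)   # d/de det(I + eX) at e = 0 is tr(X)
```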
This identity is used in describing Lie algebras associated to certain matrix Lie groups. For example, the special linear group formula_152 is defined by the equation formula_153. The above formula shows that its Lie algebra is the special linear Lie algebra formula_154 consisting of those matrices having trace zero.
Writing a formula_155-matrix as formula_156, where formula_157 are column vectors of length 3, the gradient over one of the three vectors may be written as the cross product of the other two:
formula_158
History.
Historically, determinants were used long before matrices: A determinant was originally defined as a property of a system of linear equations.
The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is non-zero).
In this sense, determinants were first used in the Chinese mathematics textbook "The Nine Chapters on the Mathematical Art" (九章算術, Chinese scholars, around the 3rd century BCE). In Europe, solutions of linear systems of two equations were expressed by Cardano in 1545 by a determinant-like entity.
Determinants proper originated separately from the work of Seki Takakazu in 1683 in Japan and, in parallel, of Leibniz in 1693. Cramer (1750) stated, without proof, the rule now known as Cramer's rule. Both Cramer and other early authors were led to determinants by the question of plane curves passing through a given set of points.
Vandermonde (1771) first recognized determinants as independent functions. Laplace (1772) gave the general method of expanding a determinant in terms of its complementary minors; Vandermonde had already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third order and applied them to questions of elimination theory; he proved many special cases of general identities.
Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word "determinant" (Laplace had used "resultant"), though not in the present signification, but rather as applied to the discriminant of a quantic. Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.
The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of "m" columns and "n" rows, which for the special case of "m" = "n" reduces to the multiplication theorem. On the same day (November 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy–Binet formula.) In this he used the word "determinant" in its present sense, summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's. With him begins the theory in its generality.
Jacobi used the functional determinant which Sylvester later called the Jacobian. In his memoirs in "Crelle's Journal" for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called "alternants". About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work; Cayley introduced the modern notation for the determinant using vertical bars.
The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the textbooks on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises.
Applications.
Cramer's rule.
Determinants can be used to describe the solutions of a linear system of equations, written in matrix form as formula_159. This equation has a unique solution formula_160 if and only if formula_161 is nonzero. In this case, the solution is given by Cramer's rule:
formula_162
where formula_163 is the matrix formed by replacing the formula_68-th column of formula_15 by the column vector formula_114. This follows immediately by column expansion of the determinant, i.e.
formula_164
where the vectors formula_165 are the columns of "A". The rule is also implied by the identity
formula_166
Cramer's rule can be implemented in formula_167 time, which is comparable to more common methods of solving systems of linear equations, such as LU, QR, or singular value decomposition.
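A naive implementation of Cramer's rule is short. The sketch below evaluates "n" + 1 determinants with NumPy and is therefore slower than the formula_167 bound mentioned above, but it illustrates the rule directly:
```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule (naive sketch: n + 1 determinant evaluations)."""
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("matrix is singular; Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                     # replace the i-th column of A by b
        x[i] = np.linalg.det(Ai) / detA
    return x

A = np.array([[3.0, 1.0, -1.0], [1.0, -2.0, 4.0], [2.0, 0.0, 1.0]])
b = np.array([2.0, -1.0, 3.0])
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))
```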
Linear independence.
Determinants can be used to characterize linearly dependent vectors: formula_168 is zero if and only if the column vectors (or, equivalently, the row vectors) of the matrix formula_15 are linearly dependent. For example, given two linearly independent vectors formula_169, a third vector formula_170 lies in the plane spanned by the former two vectors exactly if the determinant of the formula_155-matrix consisting of the three vectors is zero. The same idea is also used in the theory of differential equations: given functions formula_171 (supposed to be formula_172 times differentiable), the Wronskian is defined to be
formula_173
It is non-zero (for some formula_160) in a specified interval if and only if the given functions and all their derivatives up to order formula_172 are linearly independent. If it can be shown that the Wronskian is zero everywhere on an interval then, in the case of analytic functions, this implies the given functions are linearly dependent. See the Wronskian and linear independence. Another such use of the determinant is the resultant, which gives a criterion when two polynomials have a common root.
Orientation of a basis.
The determinant can be thought of as assigning a number to every sequence of "n" vectors in R"n", by using the square matrix whose columns are the given vectors. The determinant will be nonzero if and only if the sequence of vectors is a "basis" for R"n". In that case, the sign of the determinant determines whether the orientation of the basis is consistent with or opposite to the orientation of the standard basis. In the case of an orthogonal basis, the magnitude of the determinant is equal to the "product" of the lengths of the basis vectors. For instance, an orthogonal matrix with entries in R"n" represents an orthonormal basis in Euclidean space, and hence has determinant of ±1 (since all the vectors have length 1). The determinant is +1 if and only if the basis has the same orientation. It is −1 if and only if the basis has the opposite orientation.
More generally, if the determinant of "A" is positive, "A" represents an orientation-preserving linear transformation (if "A" is an orthogonal 2 × 2 or 3 × 3 matrix, this is a rotation), while if it is negative, "A" switches the orientation of the basis.
Volume and Jacobian determinant.
As pointed out above, the absolute value of the determinant of real vectors is equal to the volume of the parallelepiped spanned by those vectors. As a consequence, if formula_174 is the linear map given by multiplication with a matrix formula_15, and formula_175 is any measurable subset, then the volume of formula_176 is given by formula_177 times the volume of formula_178. More generally, if the linear map formula_179 is represented by the formula_83-matrix formula_15, then the formula_38-dimensional volume of formula_176 is given by:
formula_180
By calculating the volume of the tetrahedron bounded by four points, they can be used to identify skew lines. The volume of any tetrahedron, given its vertices formula_181, is formula_182, or any other combination of pairs of vertices that form a spanning tree over the vertices.
For a general differentiable function, much of the above carries over by considering the Jacobian matrix of "f". For
formula_183
the Jacobian matrix is the "n" × "n" matrix whose entries are given by the partial derivatives
formula_184
Its determinant, the Jacobian determinant, appears in the higher-dimensional version of integration by substitution: for suitable functions "f" and an open subset "U" of R"n" (the domain of "f"), the integral over "f"("U") of some other function "φ" : R"n" → R"m" is given by
formula_185
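A standard example is the change to polar coordinates "x" = "r" cos "θ", "y" = "r" sin "θ", whose Jacobian determinant is "r". The symbolic sketch below (using SymPy, assuming it is available) verifies this:
```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
f = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])   # polar -> Cartesian map

J = f.jacobian([r, theta])            # 2 x 2 matrix of partial derivatives
assert sp.simplify(J.det()) == r      # the Jacobian determinant is r
```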
The Jacobian also occurs in the inverse function theorem.
When applied to the field of Cartography, the determinant can be used to measure the rate of expansion of a map near the poles.
Abstract algebraic aspects.
Determinant of an endomorphism.
The above identities concerning the determinant of products and inverses of matrices imply that similar matrices have the same determinant: two matrices "A" and "B" are similar, if there exists an invertible matrix "X" such that "A" = "X"−1"BX". Indeed, repeatedly applying the above identities yields
formula_186
The determinant is therefore also called a similarity invariant. The determinant of a linear transformation
formula_187
for some finite-dimensional vector space "V" is defined to be the determinant of the matrix describing it, with respect to an arbitrary choice of basis in "V". By the similarity invariance, this determinant is independent of the choice of the basis for "V" and therefore only depends on the endomorphism "T".
Square matrices over commutative rings.
The above definition of the determinant using the Leibniz rule works more generally when the entries of the matrix are elements of a commutative ring formula_188, such as the integers formula_189, as opposed to the field of real or complex numbers. Moreover, the characterization of the determinant as the unique alternating multilinear map that satisfies formula_190 still holds, as do all the properties that result from that characterization.
A matrix formula_191 is invertible (in the sense that there is an inverse matrix whose entries are in formula_188) if and only if its determinant is an invertible element in formula_188. For formula_192, this means that the determinant is +1 or −1. Such a matrix is called unimodular.
The determinant being multiplicative, it defines a group homomorphism
formula_193
between the general linear group (the group of invertible formula_23-matrices with entries in formula_188) and the multiplicative group of units in formula_188; since it respects the multiplication in both groups, this map is indeed a group homomorphism.
Given a ring homomorphism formula_194, there is a map formula_195 given by replacing all entries in formula_188 by their images under formula_196. The determinant respects these maps, i.e., the identity
formula_197
holds. In other words, the displayed commutative diagram commutes.
For example, the determinant of the complex conjugate of a complex matrix (which is also the determinant of its conjugate transpose) is the complex conjugate of its determinant, and for integer matrices: the reduction modulo formula_198 of the determinant of such a matrix is equal to the determinant of the matrix reduced modulo formula_198 (the latter determinant being computed using modular arithmetic). In the language of category theory, the determinant is a natural transformation between the two functors formula_199 and formula_200. Adding yet another layer of abstraction, this is captured by saying that the determinant is a morphism of algebraic groups, from the general linear group to the multiplicative group,
formula_201
Exterior algebra.
The determinant of a linear transformation formula_187 of an formula_38-dimensional vector space formula_202 or, more generally a free module of (finite) rank formula_38 over a commutative ring formula_188 can be formulated in a coordinate-free manner by considering the formula_38-th exterior power formula_203 of formula_202. The map formula_204 induces a linear map
formula_205
As formula_203 is one-dimensional, the map formula_206 is given by multiplying with some scalar, i.e., an element in formula_188. Some authors use this fact to "define" the determinant to be the element in formula_188 satisfying the following identity (for all formula_207):
formula_208
This definition agrees with the more concrete coordinate-dependent definition. This can be shown using the uniqueness of a multilinear alternating form on formula_38-tuples of vectors in formula_209.
For this reason, the highest non-zero exterior power formula_203 (as opposed to the determinant associated to an endomorphism) is sometimes also called the determinant of formula_202 and similarly for more involved objects such as vector bundles or chain complexes of vector spaces. Minors of a matrix can also be cast in this setting, by considering lower alternating forms formula_210 with formula_211.
Generalizations and related notions.
Determinants as treated above admit several variants: the permanent of a matrix is defined as the determinant, except that the factors formula_28 occurring in Leibniz's rule are omitted. The immanant generalizes both by introducing a character of the symmetric group formula_27 in Leibniz's rule.
Determinants for finite-dimensional algebras.
For any associative algebra formula_15 that is finite-dimensional as a vector space over a field formula_212, there is a determinant map
formula_213
This definition proceeds by establishing the characteristic polynomial independently of the determinant, and defining the determinant as the lowest order term of this polynomial. This general definition recovers the determinant for the matrix algebra formula_214, but also includes several further cases, such as the determinant of a quaternion,
formula_215,
the norm formula_216 of a field extension, the Pfaffian of a skew-symmetric matrix, and the reduced norm of a central simple algebra, which all arise as special cases of this construction.
Infinite matrices.
For matrices with an infinite number of rows and columns, the above definitions of the determinant do not carry over directly. For example, in the Leibniz formula, an infinite sum (all of whose terms are infinite products) would have to be calculated. Functional analysis provides different extensions of the determinant for such infinite-dimensional situations, which however only work for particular kinds of operators.
The Fredholm determinant defines the determinant for operators known as trace class operators by an appropriate generalization of the formula
formula_217
Another infinite-dimensional notion of determinant is the functional determinant.
Operators in von Neumann algebras.
For operators in a finite factor, one may define a positive real-valued determinant called the Fuglede−Kadison determinant using the canonical trace. In fact, corresponding to every tracial state on a von Neumann algebra there is a notion of Fuglede−Kadison determinant.
Related notions for non-commutative rings.
For matrices over non-commutative rings, multilinearity and alternating properties are incompatible for "n" ≥ 2, so there is no good definition of the determinant in this setting.
For square matrices with entries in a non-commutative ring, there are various difficulties in defining determinants analogously to that for commutative rings. A meaning can be given to the Leibniz formula provided that the order for the product is specified, and similarly for other definitions of the determinant, but non-commutativity then leads to the loss of many fundamental properties of the determinant, such as the multiplicative property or that the determinant is unchanged under transposition of the matrix. Over non-commutative rings, there is no reasonable notion of a multilinear form (existence of a nonzero bilinear form with a regular element of "R" as value on some pair of arguments implies that "R" is commutative). Nevertheless, various notions of non-commutative determinant have been formulated that preserve some of the properties of determinants, notably quasideterminants and the Dieudonné determinant. For some classes of matrices with non-commutative elements, one can define the determinant and prove linear algebra theorems that are very similar to their commutative analogs. Examples include the "q"-determinant on quantum groups, the Capelli determinant on Capelli matrices, and the Berezinian on supermatrices (i.e., matrices whose entries are elements of formula_218-graded rings). Manin matrices form the class closest to matrices with commutative elements.
Calculation.
Determinants are mainly used as a theoretical tool. They are rarely calculated explicitly in numerical linear algebra, where for applications such as checking invertibility and finding eigenvalues the determinant has largely been supplanted by other techniques. Computational geometry, however, does frequently use calculations related to determinants.
While the determinant can be computed directly using the Leibniz rule, this approach is extremely inefficient for large matrices, since that formula requires calculating formula_2 (formula_38 factorial) products for an formula_23 matrix. Thus, the number of required operations grows very quickly: it is of order formula_2. The Laplace expansion is similarly inefficient. Therefore, more involved techniques have been developed for calculating determinants.
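For illustration, the following Python sketch evaluates the Leibniz formula directly by enumerating all formula_2 permutations. It gives the correct answer for a small test matrix, but the factorial number of terms makes it unusable already for moderately sized matrices, which is precisely the point made above.
<syntaxhighlight lang="python">
from itertools import permutations
from math import prod

def sign(perm):
    # Sign of a permutation: -1 raised to the number of inversions.
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_leibniz(a):
    # Sum of n! signed products: correct, but hopelessly slow for large n.
    n = len(a)
    return sum(sign(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det_leibniz([[3, 7], [1, -4]]))      # -19 = 3*(-4) - 7*1
</syntaxhighlight>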
Gaussian elimination.
Gaussian elimination consists of left-multiplying a matrix by elementary matrices to obtain a matrix in row echelon form. One can restrict the computation to elementary matrices of determinant 1. In this case, the determinant of the resulting row echelon form equals the determinant of the initial matrix. As a row echelon form is a triangular matrix, its determinant is the product of the entries of its diagonal.
So, the determinant can be computed almost for free from the result of a Gaussian elimination.
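The following Python sketch carries this out with row swaps and exact rational arithmetic; the sign bookkeeping for the swaps stands in for the restriction to elementary matrices of determinant 1, and the test matrix is the 3 × 3 matrix formula_53 used earlier, whose determinant is 54.
<syntaxhighlight lang="python">
from fractions import Fraction

def det_gauss(a):
    # Gaussian elimination with row swaps; exact Fractions avoid rounding (floats work the same way).
    m = [[Fraction(x) for x in row] for row in a]
    n = len(m)
    det = Fraction(1)
    for k in range(n):
        pivot = next((r for r in range(k, n) if m[r][k] != 0), None)
        if pivot is None:
            return Fraction(0)             # no pivot in this column: the matrix is singular
        if pivot != k:
            m[k], m[pivot] = m[pivot], m[k]
            det = -det                     # each row swap flips the sign of the determinant
        det *= m[k][k]
        for r in range(k + 1, n):
            factor = m[r][k] / m[k][k]
            for c in range(k, n):
                m[r][c] -= factor * m[k][c]
    return det

print(det_gauss([[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]))   # 54
</syntaxhighlight>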
Decomposition methods.
Some methods compute formula_108 by writing the matrix as a product of matrices whose determinants can be more easily computed. Such techniques are referred to as decomposition methods. Examples include the LU decomposition, the QR decomposition or the Cholesky decomposition (for positive definite matrices). These methods are of order formula_167, which is a significant improvement over formula_219.
For example, LU decomposition expresses formula_15 as a product
formula_220
of a permutation matrix formula_221 (which has exactly a single formula_129 in each column, and otherwise zeros), a lower triangular matrix formula_222 and an upper triangular matrix formula_223.
The determinants of the two triangular matrices formula_222 and formula_223 can be quickly calculated, since they are the products of the respective diagonal entries. The determinant of formula_221 is just the sign formula_224 of the corresponding permutation (which is formula_225 for an even permutation and formula_226 for an odd permutation). Once such an LU decomposition is known for formula_15, its determinant is readily computed as
formula_227
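As a concrete sketch, SciPy's scipy.linalg.lu (in its default form) returns such a factorization formula_220 with formula_222 unit lower triangular; assuming that library and convention, the determinant follows as described and can be checked against NumPy's built-in determinant.
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import lu

A = np.array([[-2., -1., 2.],
              [ 2.,  1., 4.],
              [-3.,  3., -1.]])

P, L, U = lu(A)                       # A = P @ L @ U, with L unit lower triangular
sign = round(np.linalg.det(P))        # determinant of a permutation matrix is +1 or -1
det_A = sign * np.prod(np.diag(L)) * np.prod(np.diag(U))
print(det_A, np.linalg.det(A))        # both approximately 54
</syntaxhighlight>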
Further methods.
The order formula_167 reached by decomposition methods has been improved by different methods. If two matrices of order formula_38 can be multiplied in time formula_228, where formula_229 for some formula_230, then there is an algorithm computing the determinant in time formula_231. This means, for example, that an formula_232 algorithm for computing the determinant exists based on the Coppersmith–Winograd algorithm. This exponent has been further lowered, as of 2016, to 2.373.
In addition to the complexity of the algorithm, further criteria can be used to compare algorithms.
Especially for applications concerning matrices over rings, algorithms that compute the determinant without any divisions exist. (By contrast, Gaussian elimination requires divisions.) One such algorithm, having complexity formula_233, is based on the following idea: one replaces permutations (as in the Leibniz rule) by so-called closed ordered walks, in which several items can be repeated. The resulting sum has more terms than in the Leibniz rule, but in the process several of these products can be reused, making it more efficient than naively computing with the Leibniz rule. Algorithms can also be assessed according to their bit complexity, i.e., how many bits of accuracy are needed to store intermediate values occurring in the computation. For example, the Gaussian elimination (or LU decomposition) method is of order formula_167, but the bit length of intermediate values can become exponentially long. By comparison, the Bareiss algorithm, an exact-division method (so it does use division, but only in cases where these divisions can be performed without remainder), is of the same order, but the bit complexity is roughly the bit size of the original entries in the matrix times formula_38.
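A minimal fraction-free (Bareiss-style) sketch for integer matrices is shown below; every division it performs is exact, so all intermediate values stay integers. The pivoting strategy and names are choices made for this illustration rather than a reference implementation.
<syntaxhighlight lang="python">
def det_bareiss(a):
    # Fraction-free (Bareiss-style) determinant of an integer matrix.
    m = [list(row) for row in a]
    n = len(m)
    sign, prev = 1, 1
    for k in range(n - 1):
        if m[k][k] == 0:                   # find a nonzero pivot, swapping rows if needed
            pivot = next((r for r in range(k + 1, n) if m[r][k] != 0), None)
            if pivot is None:
                return 0
            m[k], m[pivot] = m[pivot], m[k]
            sign = -sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Sylvester's identity guarantees this division is exact.
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev
            m[i][k] = 0
        prev = m[k][k]
    return sign * m[n - 1][n - 1]

print(det_bareiss([[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]))   # 54
</syntaxhighlight>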
If the determinant of "A" and the inverse of "A" have already been computed, the matrix determinant lemma allows rapid calculation of the determinant of "A" + "uv"T, where "u" and "v" are column vectors.
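The lemma states that det("A" + "uv"T) = (1 + "v"T"A"−1"u") det("A"). A short numerical check, with randomly chosen data standing in for quantities that would already be available in practice:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
u = rng.standard_normal((5, 1))
v = rng.standard_normal((5, 1))

A_inv, det_A = np.linalg.inv(A), np.linalg.det(A)     # assumed to be already known
lemma = (1.0 + (v.T @ A_inv @ u).item()) * det_A      # det(A + u v^T) via the lemma
direct = np.linalg.det(A + u @ v.T)
print(lemma, direct)                                  # agree up to floating-point error
</syntaxhighlight>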
Charles Dodgson (i.e. Lewis Carroll of "Alice's Adventures in Wonderland" fame) invented a method for computing determinants called Dodgson condensation. Unfortunately this interesting method does not always work in its original form.
Notes.
<templatestyles src="Reflist/styles.css" />
{
"math_id": 0,
"text": "\\begin{vmatrix} a & b\\\\c & d \\end{vmatrix}=ad-bc,"
},
{
"math_id": 1,
"text": " \\begin{vmatrix} a & b & c \\\\ d & e & f \\\\ g & h & i \\end{vmatrix}= aei + bfg + cdh - ceg - bdi - afh."
},
{
"math_id": 2,
"text": "n!"
},
{
"math_id": 3,
"text": "\\begin{pmatrix} a & b \\\\c & d \\end{pmatrix}"
},
{
"math_id": 4,
"text": "\\det \\begin{pmatrix} a & b \\\\c & d \\end{pmatrix} = \\begin{vmatrix} a & b \\\\c & d \\end{vmatrix} = ad - bc."
},
{
"math_id": 5,
"text": "\\det \\begin{pmatrix} 3 & 7 \\\\1 & -4 \\end{pmatrix} = \\begin{vmatrix} 3 & 7 \\\\ 1 & {-4} \\end{vmatrix} = (3 \\cdot (-4)) - (7 \\cdot 1) = -19."
},
{
"math_id": 6,
"text": "2 \\times 2"
},
{
"math_id": 7,
"text": "\\begin{pmatrix}1 & 0 \\\\ 0 & 1 \\end{pmatrix}"
},
{
"math_id": 8,
"text": "\\begin{vmatrix} a & b \\\\ a & b \\end{vmatrix} = ab - ba = 0."
},
{
"math_id": 9,
"text": "\\begin{vmatrix}a & b + b' \\\\ c & d + d' \\end{vmatrix} = a(d+d')-(b+b')c = \\begin{vmatrix}a & b\\\\ c & d \\end{vmatrix} + \\begin{vmatrix}a & b' \\\\ c & d' \\end{vmatrix}."
},
{
"math_id": 10,
"text": "r"
},
{
"math_id": 11,
"text": "\\begin{vmatrix} r \\cdot a & b \\\\ r \\cdot c & d \\end{vmatrix} = rad - brc = r(ad-bc) = r \\cdot \\begin{vmatrix} a & b \\\\c & d \\end{vmatrix}."
},
{
"math_id": 12,
"text": "\\text{Signed area} =\n |\\boldsymbol{u}|\\,|\\boldsymbol{v}|\\,\\sin\\,\\theta = \\left|\\boldsymbol{u}^\\perp\\right|\\,\\left|\\boldsymbol{v}\\right|\\,\\cos\\,\\theta' =\n \\begin{pmatrix} -b \\\\ a \\end{pmatrix} \\cdot \\begin{pmatrix} c \\\\ d \\end{pmatrix} = ad - bc.\n"
},
{
"math_id": 13,
"text": "A = \\left[\\begin{array}{c|c|c|c} \\mathbf{a}_1 & \\mathbf{a}_2 & \\cdots & \\mathbf{a}_n\\end{array}\\right]"
},
{
"math_id": 14,
"text": "\n A\\begin{pmatrix}1 \\\\ 0\\\\ \\vdots \\\\0\\end{pmatrix} = \\mathbf{a}_1, \\quad\n A\\begin{pmatrix}0 \\\\ 1\\\\ \\vdots \\\\0\\end{pmatrix} = \\mathbf{a}_2, \\quad\n \\ldots, \\quad\n A\\begin{pmatrix}0 \\\\0 \\\\ \\vdots \\\\1\\end{pmatrix} = \\mathbf{a}_n.\n"
},
{
"math_id": 15,
"text": "A"
},
{
"math_id": 16,
"text": "\\mathbf{a}_1, \\mathbf{a}_2, \\ldots, \\mathbf{a}_n,"
},
{
"math_id": 17,
"text": "P = \\left\\{c_1 \\mathbf{a}_1 + \\cdots + c_n\\mathbf{a}_n \\mid 0 \\leq c_i\\leq 1 \\ \\forall i\\right\\}."
},
{
"math_id": 18,
"text": "\\det(A) = \\pm \\text{vol}(P),"
},
{
"math_id": 19,
"text": "A = \\begin{bmatrix}\n a_{1,1} & a_{1,2} & \\cdots & a_{1,n} \\\\\n a_{2,1} & a_{2,2} & \\cdots & a_{2,n} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{n,1} & a_{n,2} & \\cdots & a_{n,n}\n\\end{bmatrix}."
},
{
"math_id": 20,
"text": "a_{1,1}"
},
{
"math_id": 21,
"text": "\\begin{vmatrix}\n a_{1,1} & a_{1,2} & \\cdots & a_{1,n} \\\\\n a_{2,1} & a_{2,2} & \\cdots & a_{2,n} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{n,1} & a_{n,2} & \\cdots & a_{n,n}\n\\end{vmatrix}."
},
{
"math_id": 22,
"text": "\\begin{vmatrix}a&b&c\\\\d&e&f\\\\g&h&i\\end{vmatrix}\n = aei + bfg + cdh - ceg - bdi - afh.\\ "
},
{
"math_id": 23,
"text": "n \\times n"
},
{
"math_id": 24,
"text": "\\{1, 2, \\dots, n \\}"
},
{
"math_id": 25,
"text": "\\sigma"
},
{
"math_id": 26,
"text": "\\sigma(1), \\sigma(2),\\ldots,\\sigma(n)"
},
{
"math_id": 27,
"text": "S_n"
},
{
"math_id": 28,
"text": "\\sgn(\\sigma)"
},
{
"math_id": 29,
"text": "+1,"
},
{
"math_id": 30,
"text": "-1."
},
{
"math_id": 31,
"text": "A=\\begin{bmatrix}\na_{1,1}\\ldots a_{1,n}\\\\\n\\vdots\\qquad\\vdots\\\\\na_{n,1}\\ldots a_{n,n}\n\\end{bmatrix},"
},
{
"math_id": 32,
"text": "\\det(A)=\\begin{vmatrix}\na_{1,1}\\ldots a_{1,n}\\\\\n\\vdots\\qquad\\vdots\\\\\na_{n,1}\\ldots a_{n,n}\n\\end{vmatrix} = \\sum_{\\sigma \\in S_n}\\sgn(\\sigma)a_{1,\\sigma(1)}\\cdots a_{n,\\sigma(n)}."
},
{
"math_id": 33,
"text": "\\det(A) = \\sum_{\\sigma \\in S_n} \\left( \\sgn(\\sigma) \\prod_{i=1}^n a_{i,\\sigma(i)}\\right)"
},
{
"math_id": 34,
"text": "\\varepsilon_{i_1,\\ldots,i_n}"
},
{
"math_id": 35,
"text": "\\{1,\\ldots,n\\}"
},
{
"math_id": 36,
"text": "\\det(A) = \\sum_{i_1,i_2,\\ldots,i_n} \\varepsilon_{i_1\\cdots i_n} a_{1,i_1} \\!\\cdots a_{n,i_n},"
},
{
"math_id": 37,
"text": "\\{1,\\ldots,n\\}."
},
{
"math_id": 38,
"text": "n"
},
{
"math_id": 39,
"text": "A = \\big ( a_1, \\dots, a_n \\big ),"
},
{
"math_id": 40,
"text": "a_i"
},
{
"math_id": 41,
"text": "\\det\\left(I\\right) = 1"
},
{
"math_id": 42,
"text": "I"
},
{
"math_id": 43,
"text": "a_j = r \\cdot v + w"
},
{
"math_id": 44,
"text": "\\begin{align}|A|\n &= \\big | a_1, \\dots, a_{j-1}, r \\cdot v + w, a_{j+1}, \\dots, a_n | \\\\\n &= r \\cdot | a_1, \\dots, v, \\dots a_n | + | a_1, \\dots, w, \\dots, a_n |\n\\end{align}"
},
{
"math_id": 45,
"text": "| a_1, \\dots, v, \\dots, v, \\dots, a_n| = 0."
},
{
"math_id": 46,
"text": "\\det(cA) = c^n\\det(A)"
},
{
"math_id": 47,
"text": "|a_1, \\dots, a_j, \\dots a_i, \\dots, a_n| = - |a_1, \\dots, a_i, \\dots, a_j, \\dots, a_n|."
},
{
"math_id": 48,
"text": "|a_3, a_1, a_2, a_4 \\dots, a_n| = - |a_1, a_3, a_2, a_4, \\dots, a_n| = |a_1, a_2, a_3, a_4, \\dots, a_n|."
},
{
"math_id": 49,
"text": "a_{ij}=0"
},
{
"math_id": 50,
"text": "i>j"
},
{
"math_id": 51,
"text": "i<j"
},
{
"math_id": 52,
"text": "\\det(A) = a_{11} a_{22} \\cdots a_{nn} = \\prod_{i=1}^n a_{ii}."
},
{
"math_id": 53,
"text": "A = \\begin{bmatrix}\n -2 & -1 & 2 \\\\\n 2 & 1 & 4 \\\\\n -3 & 3 & -1\n\\end{bmatrix}. "
},
{
"math_id": 54,
"text": "|A| = -|E| = -(18 \\cdot 3 \\cdot (-1)) = 54."
},
{
"math_id": 55,
"text": "\\det\\left(A^\\textsf{T}\\right) = \\det(A)"
},
{
"math_id": 56,
"text": "B"
},
{
"math_id": 57,
"text": "\\det(AB) = \\det (A) \\det (B)"
},
{
"math_id": 58,
"text": "\\det B"
},
{
"math_id": 59,
"text": "\\det\\left(A^{-1}\\right) = \\frac{1}{\\det(A)} = [\\det(A)]^{-1}"
},
{
"math_id": 60,
"text": "K"
},
{
"math_id": 61,
"text": "\\operatorname{GL}_n(K)"
},
{
"math_id": 62,
"text": "\\operatorname{SL}_n(K) \\subset \\operatorname{GL}_n(K)"
},
{
"math_id": 63,
"text": "K^\\times"
},
{
"math_id": 64,
"text": "\\operatorname{SL}_n(K)"
},
{
"math_id": 65,
"text": "\\operatorname{GL}_n(K)/\\operatorname{SL}_n(K)"
},
{
"math_id": 66,
"text": "M_{i,j}"
},
{
"math_id": 67,
"text": "(n-1) \\times (n-1)"
},
{
"math_id": 68,
"text": "i"
},
{
"math_id": 69,
"text": "j"
},
{
"math_id": 70,
"text": "(-1)^{i+j}M_{i,j}"
},
{
"math_id": 71,
"text": "\\det(A) = \\sum_{j=1}^n (-1)^{i+j} a_{i,j} M_{i,j},"
},
{
"math_id": 72,
"text": "i=1"
},
{
"math_id": 73,
"text": "\n \\begin{vmatrix}a&b&c\\\\ d&e&f\\\\ g&h&i\\end{vmatrix} =\n a\\begin{vmatrix}e&f\\\\ h&i\\end{vmatrix} - b\\begin{vmatrix}d&f\\\\ g&i\\end{vmatrix} + c\\begin{vmatrix}d&e\\\\ g&h\\end{vmatrix}\n"
},
{
"math_id": 74,
"text": "\\det(A)= \\sum_{i=1}^n (-1)^{i+j} a_{i,j} M_{i,j}."
},
{
"math_id": 75,
"text": "\\begin{vmatrix}\n 1 & 1 & 1 & \\cdots & 1 \\\\\n x_1 & x_2 & x_3 & \\cdots & x_n \\\\\n x_1^2 & x_2^2 & x_3^2 & \\cdots & x_n^2 \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n x_1^{n-1} & x_2^{n-1} & x_3^{n-1} & \\cdots & x_n^{n-1}\n \\end{vmatrix} =\n \\prod_{1 \\leq i < j \\leq n} \\left(x_j - x_i\\right).\n"
},
{
"math_id": 76,
"text": "\\tbinom nk"
},
{
"math_id": 77,
"text": "\\operatorname{adj}(A)"
},
{
"math_id": 78,
"text": "(\\operatorname{adj}(A))_{i,j} = (-1)^{i+j} M_{ji}."
},
{
"math_id": 79,
"text": "(\\det A) I = A\\operatorname{adj}A = (\\operatorname{adj}A)\\,A. "
},
{
"math_id": 80,
"text": "A^{-1} = \\frac 1{\\det A}\\operatorname{adj}A. "
},
{
"math_id": 81,
"text": "A, B, C, D"
},
{
"math_id": 82,
"text": "m \\times m"
},
{
"math_id": 83,
"text": "m \\times n"
},
{
"math_id": 84,
"text": "n \\times m"
},
{
"math_id": 85,
"text": "\\det\\begin{pmatrix}A& 0\\\\ C& D\\end{pmatrix} = \\det(A) \\det(D) = \\det\\begin{pmatrix}A& B\\\\ 0& D\\end{pmatrix}."
},
{
"math_id": 86,
"text": "\\begin{align}\n\\det\\begin{pmatrix}A& B\\\\ C& D\\end{pmatrix}\n& = \\det(A)\\det\\begin{pmatrix}A& B\\\\ C& D\\end{pmatrix}\n\\underbrace{\\det\\begin{pmatrix}A^{-1}& -A^{-1} B\\\\ 0& I_n\\end{pmatrix}}_{=\\,\\det(A^{-1})\\,=\\,(\\det A)^{-1}}\\\\\n& = \\det(A) \\det\\begin{pmatrix}I_m& 0\\\\ C A^{-1}& D-C A^{-1} B\\end{pmatrix}\\\\\n& = \\det(A) \\det(D - C A^{-1} B),\n\\end{align}"
},
{
"math_id": 87,
"text": "\\det (A) (D - C A^{-1} B)"
},
{
"math_id": 88,
"text": "D"
},
{
"math_id": 89,
"text": "1 \\times 1"
},
{
"math_id": 90,
"text": "\\begin{align}\n\\det\\begin{pmatrix}A& B\\\\ C& D\\end{pmatrix}\n& = \\det(D)\\det\\begin{pmatrix}A& B\\\\ C& D\\end{pmatrix}\n\\underbrace{\\det\\begin{pmatrix}I_m& 0\\\\ -D^{-1} C& D^{-1}\\end{pmatrix}}_{=\\,\\det(D^{-1})\\,=\\,(\\det D)^{-1}}\\\\\n& = \\det(D) \\det\\begin{pmatrix}A - B D^{-1} C& B D^{-1}\\\\ 0& I_n\\end{pmatrix}\\\\\n& = \\det(D) \\det(A - B D^{-1} C).\n\\end{align}"
},
{
"math_id": 91,
"text": "C"
},
{
"math_id": 92,
"text": "CD=DC"
},
{
"math_id": 93,
"text": "\\det\\begin{pmatrix}A& B\\\\ C& D\\end{pmatrix} = \\det(AD - BC)."
},
{
"math_id": 94,
"text": "A = D "
},
{
"math_id": 95,
"text": "B = C"
},
{
"math_id": 96,
"text": "\\det\\begin{pmatrix}A& B\\\\ B& A\\end{pmatrix} = \\det(A - B) \\det(A + B)."
},
{
"math_id": 97,
"text": "\\det\\left(I_\\mathit{m} + AB\\right) = \\det\\left(I_\\mathit{n} + BA\\right),"
},
{
"math_id": 98,
"text": "A+B"
},
{
"math_id": 99,
"text": "\\det(A + B + C) + \\det(C) \\geq \\det(A + C) + \\det(B + C)\\text{,}"
},
{
"math_id": 100,
"text": "\\det(A + B) \\geq \\det(A) + \\det(B)\\text{.}"
},
{
"math_id": 101,
"text": "n\\times n"
},
{
"math_id": 102,
"text": "\\sqrt[n]{\\det(A+B)}\\geq\\sqrt[n]{\\det(A)}+\\sqrt[n]{\\det(B)},"
},
{
"math_id": 103,
"text": "2\\times 2"
},
{
"math_id": 104,
"text": "\\det(A+B) = \\det(A) + \\det(B) + \\text{tr}(A)\\text{tr}(B) - \\text{tr}(AB)."
},
{
"math_id": 105,
"text": "A_{ij}, B_{ij}"
},
{
"math_id": 106,
"text": "(A_{11} + B_{11})(A_{22} + B_{22}) - (A_{12} + B_{12})(A_{21} + B_{21})."
},
{
"math_id": 107,
"text": "A_{11}A_{22} + B_{11}A_{22} + A_{11}B_{22} + B_{11}B_{22} - A_{12}A_{21} - B_{12}A_{21} - A_{12}B_{21} - B_{12}B_{21}."
},
{
"math_id": 108,
"text": "\\det(A)"
},
{
"math_id": 109,
"text": "\\det(A) + \\det(B) + A_{11}B_{22} + B_{11}A_{22} - A_{12}B_{21} - B_{12}A_{21}."
},
{
"math_id": 110,
"text": "(A_{11} + A_{22})(B_{11} + B_{22}) - (A_{11}B_{11} + A_{12}B_{21} + A_{21}B_{12} + A_{22}B_{22})"
},
{
"math_id": 111,
"text": "\\text{tr}(A)\\text{tr}(B) - \\text{tr}(AB)."
},
{
"math_id": 112,
"text": "aI + b\\mathbf{i} := a\\begin{pmatrix} 1 & 0 \\\\ 0 & 1 \\end{pmatrix} + b\\begin{pmatrix} 0 & -1 \\\\ 1 & 0 \\end{pmatrix}"
},
{
"math_id": 113,
"text": "a"
},
{
"math_id": 114,
"text": "b"
},
{
"math_id": 115,
"text": "\\text{tr}(\\mathbf{i}) = 0"
},
{
"math_id": 116,
"text": "A = aI"
},
{
"math_id": 117,
"text": "B = b\\mathbf{i}"
},
{
"math_id": 118,
"text": "\\det(aI + b\\mathbf{i}) = a^2\\det(I) + b^2\\det(\\mathbf{i}) = a^2 + b^2."
},
{
"math_id": 119,
"text": "\\det(I) = \\det(\\mathbf{i}) = 1"
},
{
"math_id": 120,
"text": "\\lambda_1, \\lambda_2, \\ldots, \\lambda_n"
},
{
"math_id": 121,
"text": "\\det(A) = \\prod_{i=1}^n \\lambda_i=\\lambda_1\\lambda_2\\cdots\\lambda_n."
},
{
"math_id": 122,
"text": "0"
},
{
"math_id": 123,
"text": "\\chi_A(t) = \\det(t \\cdot I - A)."
},
{
"math_id": 124,
"text": "t"
},
{
"math_id": 125,
"text": "\\lambda"
},
{
"math_id": 126,
"text": "\\chi_A(\\lambda) = 0."
},
{
"math_id": 127,
"text": "A_k := \\begin{bmatrix}\n a_{1,1} & a_{1,2} & \\cdots & a_{1,k} \\\\\n a_{2,1} & a_{2,2} & \\cdots & a_{2,k} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{k,1} & a_{k,2} & \\cdots & a_{k,k}\n\\end{bmatrix}"
},
{
"math_id": 128,
"text": "k"
},
{
"math_id": 129,
"text": "1"
},
{
"math_id": 130,
"text": "\\det(\\exp(A)) = \\exp(\\operatorname{tr}(A))"
},
{
"math_id": 131,
"text": "\\operatorname{tr}(A) = \\log(\\det(\\exp(A)))."
},
{
"math_id": 132,
"text": "\\exp(L) = A"
},
{
"math_id": 133,
"text": "\\det(A) = \\exp(\\operatorname{tr}(L))."
},
{
"math_id": 134,
"text": "\\begin{align}\n \\det(A) &= \\frac{1}{2}\\left(\\left(\\operatorname{tr}(A)\\right)^2 - \\operatorname{tr}\\left(A^2\\right)\\right), \\\\\n \\det(A) &= \\frac{1}{6}\\left(\\left(\\operatorname{tr}(A)\\right)^3 - 3\\operatorname{tr}(A) ~ \\operatorname{tr}\\left(A^2\\right) + 2 \\operatorname{tr}\\left(A^3\\right)\\right), \\\\\n \\det(A) &= \\frac{1}{24}\\left(\\left(\\operatorname{tr}(A)\\right)^4 - 6\\operatorname{tr}\\left(A^2\\right)\\left(\\operatorname{tr}(A)\\right)^2 + 3\\left(\\operatorname{tr}\\left(A^2\\right)\\right)^2 + 8\\operatorname{tr}\\left(A^3\\right)~\\operatorname{tr}(A) - 6\\operatorname{tr}\\left(A^4\\right)\\right).\n\\end{align}"
},
{
"math_id": 135,
"text": "c_n = 1; ~~~c_{n-m} = -\\frac{1}{m}\\sum_{k=1}^m c_{n-m+k} \\operatorname{tr}\\left(A^k\\right) ~~(1 \\le m \\le n)~."
},
{
"math_id": 136,
"text": "\\det(A) = \\sum_{\\begin{array}{c}k_1,k_2,\\ldots,k_n \\geq 0\\\\k_1+2k_2+\\cdots+nk_n=n\\end{array}}\\prod_{l=1}^n \\frac{(-1)^{k_l+1}}{l^{k_l}k_l!} \\operatorname{tr}\\left(A^l\\right)^{k_l},"
},
{
"math_id": 137,
"text": "\\sum_{l=1}^n lk_l = n."
},
{
"math_id": 138,
"text": "\\det(A) = \\frac{(-1)^n}{n!} B_n(s_1, s_2, \\ldots, s_n)."
},
{
"math_id": 139,
"text": "(AB)^I_J = \\sum_K A^I_K B^K_J, \\operatorname{tr}(A) = \\sum_I A^I_I."
},
{
"math_id": 140,
"text": "\\det(I + A) = \\sum_{k=0}^\\infty \\frac{1}{k!} \\left(-\\sum_{j=1}^\\infty \\frac{(-1)^j}{j} \\operatorname{tr}\\left(A^j\\right)\\right)^k\\,,"
},
{
"math_id": 141,
"text": "\\sum_{k=0}^\\infty \\frac{1}{k!} \\left(-\\sum_{j=1}^\\infty \\frac{(-1)^j s^j}{j}\\operatorname{tr}\\left(A^j\\right)\\right)^k\\,,"
},
{
"math_id": 142,
"text": "\\operatorname{tr}\\left(I - A^{-1}\\right) \\le \\log\\det(A) \\le \\operatorname{tr}(A - I)"
},
{
"math_id": 143,
"text": "\\frac{n}{\\operatorname{tr}\\left(A^{-1}\\right)} \\leq \\det(A)^\\frac{1}{n} \\leq \\frac{1}{n}\\operatorname{tr}(A) \\leq \\sqrt{\\frac{1}{n}\\operatorname{tr}\\left(A^2\\right)}."
},
{
"math_id": 144,
"text": "\\mathbf R^{n \\times n}"
},
{
"math_id": 145,
"text": "\\mathbf R"
},
{
"math_id": 146,
"text": "\\frac{d \\det(A)}{d \\alpha} = \\operatorname{tr}\\left(\\operatorname{adj}(A) \\frac{d A}{d \\alpha}\\right)."
},
{
"math_id": 147,
"text": "\\frac{d \\det(A)}{d \\alpha} = \\det(A) \\operatorname{tr}\\left(A^{-1} \\frac{d A}{d \\alpha}\\right)."
},
{
"math_id": 148,
"text": " \\frac{\\partial \\det(A)}{\\partial A_{ij}}= \\operatorname{adj}(A)_{ji} = \\det(A)\\left(A^{-1}\\right)_{ji}."
},
{
"math_id": 149,
"text": "\\det(A + \\epsilon X) - \\det(A) = \\operatorname{tr}(\\operatorname{adj}(A) X) \\epsilon + O\\left(\\epsilon^2\\right) = \\det(A) \\operatorname{tr}\\left(A^{-1} X\\right) \\epsilon + O\\left(\\epsilon^2\\right)"
},
{
"math_id": 150,
"text": "A = I"
},
{
"math_id": 151,
"text": "\\det(I + \\epsilon X) = 1 + \\operatorname{tr}(X) \\epsilon + O\\left(\\epsilon^2\\right)."
},
{
"math_id": 152,
"text": "\\operatorname{SL}_n"
},
{
"math_id": 153,
"text": "\\det A = 1"
},
{
"math_id": 154,
"text": "\\mathfrak{sl}_n"
},
{
"math_id": 155,
"text": "3 \\times 3"
},
{
"math_id": 156,
"text": "A = \\begin{bmatrix}a & b & c\\end{bmatrix}"
},
{
"math_id": 157,
"text": "a, b,c"
},
{
"math_id": 158,
"text": "\\begin{align}\n \\nabla_\\mathbf{a}\\det(A) &= \\mathbf{b} \\times \\mathbf{c} \\\\\n \\nabla_\\mathbf{b}\\det(A) &= \\mathbf{c} \\times \\mathbf{a} \\\\\n \\nabla_\\mathbf{c}\\det(A) &= \\mathbf{a} \\times \\mathbf{b}.\n\\end{align}"
},
{
"math_id": 159,
"text": "Ax = b"
},
{
"math_id": 160,
"text": "x"
},
{
"math_id": 161,
"text": "\\det (A)"
},
{
"math_id": 162,
"text": "x_i = \\frac{\\det(A_i)}{\\det(A)} \\qquad i = 1, 2, 3, \\ldots, n"
},
{
"math_id": 163,
"text": "A_i"
},
{
"math_id": 164,
"text": "\\det(A_i) =\n \\det\\begin{bmatrix}a_1 & \\ldots & b & \\ldots & a_n\\end{bmatrix} =\n \\sum_{j=1}^n x_j\\det\\begin{bmatrix}a_1 & \\ldots & a_{i-1} & a_j & a_{i+1} & \\ldots & a_n\\end{bmatrix} =\n x_i\\det(A)\n"
},
{
"math_id": 165,
"text": "a_j"
},
{
"math_id": 166,
"text": "A\\, \\operatorname{adj}(A) = \\operatorname{adj}(A)\\, A = \\det(A)\\, I_n."
},
{
"math_id": 167,
"text": "\\operatorname O(n^3)"
},
{
"math_id": 168,
"text": "\\det A"
},
{
"math_id": 169,
"text": "v_1, v_2 \\in \\mathbf R^3"
},
{
"math_id": 170,
"text": "v_3"
},
{
"math_id": 171,
"text": "f_1(x), \\dots, f_n(x)"
},
{
"math_id": 172,
"text": "n-1"
},
{
"math_id": 173,
"text": "W(f_1, \\ldots, f_n)(x) =\n \\begin{vmatrix}\n f_1(x) & f_2(x) & \\cdots & f_n(x) \\\\\n f_1'(x) & f_2'(x) & \\cdots & f_n'(x) \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \\cdots & f_n^{(n-1)}(x)\n\\end{vmatrix}."
},
{
"math_id": 174,
"text": "f : \\mathbf R^n \\to \\mathbf R^n"
},
{
"math_id": 175,
"text": "S \\subset \\mathbf R^n"
},
{
"math_id": 176,
"text": "f(S)"
},
{
"math_id": 177,
"text": "|\\det(A)|"
},
{
"math_id": 178,
"text": "S"
},
{
"math_id": 179,
"text": "f : \\mathbf R^n \\to \\mathbf R^m"
},
{
"math_id": 180,
"text": "\\operatorname{volume}(f(S)) = \\sqrt{\\det\\left(A^\\textsf{T} A\\right)} \\operatorname{volume}(S)."
},
{
"math_id": 181,
"text": "a, b, c, d"
},
{
"math_id": 182,
"text": "\\frac 1 6 \\cdot |\\det(a-b,b-c,c-d)|"
},
{
"math_id": 183,
"text": "f: \\mathbf R^n \\rightarrow \\mathbf R^n,"
},
{
"math_id": 184,
"text": "D(f) = \\left(\\frac {\\partial f_i}{\\partial x_j}\\right)_{1 \\leq i, j \\leq n}."
},
{
"math_id": 185,
"text": "\\int_{f(U)} \\phi(\\mathbf{v})\\, d\\mathbf{v} = \\int_U \\phi(f(\\mathbf{u})) \\left|\\det(\\operatorname{D}f)(\\mathbf{u})\\right| \\,d\\mathbf{u}."
},
{
"math_id": 186,
"text": "\\det(A) = \\det(X)^{-1} \\det(B)\\det(X) = \\det(B) \\det(X)^{-1} \\det(X) = \\det(B)."
},
{
"math_id": 187,
"text": "T : V \\to V"
},
{
"math_id": 188,
"text": "R"
},
{
"math_id": 189,
"text": "\\mathbf Z"
},
{
"math_id": 190,
"text": "\\det(I) = 1"
},
{
"math_id": 191,
"text": "A \\in \\operatorname{Mat}_{n \\times n}(R)"
},
{
"math_id": 192,
"text": "R = \\mathbf Z"
},
{
"math_id": 193,
"text": "\\operatorname{GL}_n(R) \\rightarrow R^\\times, "
},
{
"math_id": 194,
"text": "f : R \\to S"
},
{
"math_id": 195,
"text": "\\operatorname{GL}_n(f) : \\operatorname{GL}_n(R) \\to \\operatorname{GL}_n(S)"
},
{
"math_id": 196,
"text": "f"
},
{
"math_id": 197,
"text": "f(\\det((a_{i,j}))) = \\det ((f(a_{i,j})))"
},
{
"math_id": 198,
"text": "m"
},
{
"math_id": 199,
"text": "\\operatorname{GL}_n"
},
{
"math_id": 200,
"text": "(-)^\\times"
},
{
"math_id": 201,
"text": "\\det: \\operatorname{GL}_n \\to \\mathbb G_m."
},
{
"math_id": 202,
"text": "V"
},
{
"math_id": 203,
"text": "\\bigwedge^n V"
},
{
"math_id": 204,
"text": "T"
},
{
"math_id": 205,
"text": "\\begin{align}\n \\bigwedge^n T: \\bigwedge^n V &\\rightarrow \\bigwedge^n V \\\\\n v_1 \\wedge v_2 \\wedge \\dots \\wedge v_n &\\mapsto T v_1 \\wedge T v_2 \\wedge \\dots \\wedge T v_n.\n\\end{align}"
},
{
"math_id": 206,
"text": "\\bigwedge^n T"
},
{
"math_id": 207,
"text": "v_i \\in V"
},
{
"math_id": 208,
"text": "\\left(\\bigwedge^n T\\right)\\left(v_1 \\wedge \\dots \\wedge v_n\\right) = \\det(T) \\cdot v_1 \\wedge \\dots \\wedge v_n."
},
{
"math_id": 209,
"text": "R^n"
},
{
"math_id": 210,
"text": "\\bigwedge^k V"
},
{
"math_id": 211,
"text": "k < n"
},
{
"math_id": 212,
"text": "F"
},
{
"math_id": 213,
"text": "\\det : A \\to F."
},
{
"math_id": 214,
"text": "A = \\operatorname{Mat}_{n \\times n}(F)"
},
{
"math_id": 215,
"text": "\\det (a + ib+jc+kd) = a^2 + b^2 + c^2 + d^2"
},
{
"math_id": 216,
"text": "N_{L/F} : L \\to F"
},
{
"math_id": 217,
"text": "\\det(I+A) = \\exp(\\operatorname{tr}(\\log(I+A))). "
},
{
"math_id": 218,
"text": "\\mathbb Z_2"
},
{
"math_id": 219,
"text": "\\operatorname O (n!)"
},
{
"math_id": 220,
"text": " A = PLU. "
},
{
"math_id": 221,
"text": "P"
},
{
"math_id": 222,
"text": "L"
},
{
"math_id": 223,
"text": "U"
},
{
"math_id": 224,
"text": "\\varepsilon"
},
{
"math_id": 225,
"text": "+1"
},
{
"math_id": 226,
"text": " -1 "
},
{
"math_id": 227,
"text": " \\det(A) = \\varepsilon \\det(L)\\cdot\\det(U). "
},
{
"math_id": 228,
"text": "M(n)"
},
{
"math_id": 229,
"text": "M(n) \\ge n^a"
},
{
"math_id": 230,
"text": "a>2"
},
{
"math_id": 231,
"text": "O(M(n))"
},
{
"math_id": 232,
"text": "\\operatorname O(n^{2.376})"
},
{
"math_id": 233,
"text": "\\operatorname O(n^4)"
}
] | https://en.wikipedia.org/wiki?curid=8468 |
847478 | Duration (finance) | Weighted term of future cash flows
In finance, the duration of a financial asset that consists of fixed cash flows, such as a bond, is the weighted average of the times until those fixed cash flows are received.
When the price of an asset is considered as a function of yield, duration also measures the price sensitivity to yield, the rate of change of price with respect to yield, or the percentage change in price for a parallel shift in yields.
The dual use of the word "duration", as both the weighted average time until repayment and as the percentage change in price, often causes confusion. Strictly speaking, Macaulay duration is the name given to the weighted average time until cash flows are received and is measured in years. Modified duration is the name given to the price sensitivity. It is (-1) times the rate of change in the price of a bond as a function of the change in its yield.
Both measures are termed "duration" and have the same (or close to the same) numerical value, but it is important to keep in mind the conceptual distinctions between them. Macaulay duration is a time measure with units in years and really makes sense only for an instrument with fixed cash flows. For a standard bond, the Macaulay duration will be between 0 and the maturity of the bond. It is equal to the maturity if and only if the bond is a zero-coupon bond.
Modified duration, on the other hand, is a mathematical derivative (rate of change) of price and measures the percentage rate of change of price with respect to yield. (Price sensitivity with respect to yields can also be measured in absolute (dollar or euro, etc.) terms, and the absolute sensitivity is often referred to as dollar (euro) duration, DV01, BPV, or delta (δ or Δ) risk). The concept of modified duration can be applied to interest-rate-sensitive instruments with non-fixed cash flows and can thus be applied to a wider range of instruments than can Macaulay duration. Modified duration is used more often than Macaulay duration in modern finance.
For everyday use, the equality (or near-equality) of the values for Macaulay and modified duration can be a useful aid to intuition. For example, a standard ten-year coupon bond will have a Macaulay duration of somewhat but not dramatically less than 10 years and from this, we can infer that the modified duration (price sensitivity) will also be somewhat but not dramatically less than 10%. Similarly, a two-year coupon bond will have a Macaulay duration of somewhat below 2 years and a modified duration of somewhat below 2%.
Macaulay duration.
Macaulay duration, named for Frederick Macaulay who introduced the concept, is the weighted average maturity of cash flows, in which the time of receipt of each payment is weighted by the present value of that payment. The denominator is the sum of the weights, which is precisely the price of the bond. Consider some set of fixed cash flows. The present value of these cash flows is:
formula_0
The Macaulay duration is defined as:
(1) formula_1
where:
In the second expression the fractional term is the ratio of the cash flow formula_3 to the total PV. These terms add to 1.0 and serve as weights for a weighted average. Thus the overall expression is a weighted average of time until cash flow payments, with weight formula_6 being the proportion of the asset's present value due to cash flow formula_2.
For a set of all-positive fixed cash flows the weighted average will fall between 0 (the minimum time), or more precisely formula_7 (the time to the first payment) and the time of the final cash flow. The Macaulay duration will equal the final maturity if and only if there is only a single payment at maturity. In symbols, if cash flows are, in order, formula_8, then:
formula_9
with the inequalities being strict unless it has a single cash flow. In terms of standard bonds (for which cash flows are fixed and positive), this means the Macaulay duration will equal the bond maturity only for a zero-coupon bond.
Macaulay duration has the diagrammatic interpretation shown in figure 1.
This represents the bond discussed in the example below - two year maturity with a coupon of 20% and continuously compounded yield of 3.9605%. The circles represent the present value of the payments, with the coupon payments getting smaller the further in the future they are, and the final large payment including both the coupon payment and the final principal repayment. If these circles were put on a balance beam, the fulcrum (balanced center) of the beam would represent the weighted average distance (time to payment), which is 1.78 years in this case.
For most practical calculations, the Macaulay duration is calculated using the yield to maturity to calculate the formula_10:
(2) formula_11
(3) formula_12
where:
Macaulay gave two alternative measures:
The key difference between the two durations is that the Fisher–Weil duration allows for the possibility of a sloping yield curve, whereas the second form is based on a constant value of the yield formula_14, not varying by term to payment. With the use of computers, both forms may be calculated but expression (3), assuming a constant yield, is more widely used because of the application to modified duration.
Duration versus Weighted Average Life.
Similarities in both values and definitions of Macaulay duration versus Weighted Average Life can lead to confusing the purpose and calculation of the two. For example, a 5-year fixed-rate interest-only bond would have a Weighted Average Life of 5, and a Macaulay duration that should be very close. Mortgages behave similarly. The differences between the two are as follows:
Modified duration.
In contrast to Macaulay duration, modified duration (sometimes abbreviated MD) is a price sensitivity measure, defined as the percentage derivative of price with respect to yield (the logarithmic derivative of bond price with respect to yield). Modified duration applies when a bond or other asset is considered as a function of yield. In this case one can measure the logarithmic derivative with respect to yield:
formula_15
When the yield is expressed continuously compounded, Macaulay duration and modified duration are numerically equal. To see this, if we take the derivative of price or present value, expression (2), with respect to the continuously compounded yield formula_14 we see that:
formula_16
In other words, for yields expressed continuously compounded,
formula_17.
where:
Periodically compounded.
In financial markets, yields are usually expressed periodically compounded (say annually or semi-annually) instead of continuously compounded. Then expression (2) becomes:
formula_18
formula_19
To find modified duration, when we take the derivative of the value formula_5 with respect to the periodically compounded yield we find
formula_20
Rearranging (dividing both sides by "-V" ) gives:
formula_21
which is the well-known relationship between modified duration and Macaulay duration:
formula_22
where:
This gives the well-known relation between Macaulay duration and modified duration quoted above. It should be remembered that, even though Macaulay duration and modified duration are closely related, they are conceptually distinct. Macaulay duration is a weighted average time until repayment (measured in units of time such as years) while modified duration is a price sensitivity measure when the price is treated as a function of yield, the "percentage change" in price with respect to yield.
Units.
Macaulay duration is measured in years.
Modified duration is measured as the percent change in price per one unit (percentage "point") change in yield per year (for example yield going from 8% per year (y = 0.08) to 9% per year (y = 0.09)). This will give modified duration a numerical value close to the Macaulay duration (and equal when rates are continuously compounded).
Formally, modified duration is a "semi-"elasticity, the "percent" change in price for a "unit" change in yield, rather than an elasticity, which is a percentage change in output for a "percentage" change in input. Modified duration is a rate of change, the percent change in price per change in yield.
Non-fixed cash flows.
Modified duration can be extended to instruments with non-fixed cash flows, while Macaulay duration applies only to fixed cash flow instruments. Modified duration is defined as the logarithmic derivative of price with respect to yield, and such a definition will apply to instruments that depend on yields, whether or not the cash flows are fixed.
Finite yield changes.
Modified duration is defined above as a derivative (as the term relates to calculus) and so is based on infinitesimal changes. Modified duration is also useful as a measure of the sensitivity of a bond's market price to finite interest rate (i.e., yield) movements. For a small change in yield, formula_25,
formula_26
Thus modified duration is approximately equal to the percentage change in price for a given finite change in yield. So a 15-year bond with a Macaulay duration of 7 years would have a modified duration of roughly 7 years and would fall approximately 7% in value if the interest rate increased by one percentage point (say from 7% to 8%).
Fisher–Weil duration.
Fisher–Weil duration is a refinement of Macaulay’s duration which takes into account the term structure of interest rates. Fisher–Weil duration calculates the present values of the relevant cashflows (more strictly) by using the zero coupon yield for each respective maturity.
Key rate duration.
Key rate durations (also called partial DV01s or partial durations) are a natural extension of the total modified duration to measuring sensitivity to shifts of different parts of the yield curve. Key rate durations might be defined, for example, with respect to zero-coupon rates with maturity '1M', '3M', '6M', '1Y', '2Y', '3Y', '5Y', '7Y', '10Y', '15Y', '20Y', '25Y', '30Y'. Thomas Ho (1992) introduced the term key rate duration. Reitano covered multifactor yield curve models as early as 1991 and has revisited the topic in a recent review.
Key rate durations require valuing an instrument off a yield curve and therefore require building one. Ho's original methodology was based on valuing instruments off a zero or spot yield curve and used linear interpolation between "key rates", but the idea is applicable to yield curves based on forward rates, par rates, and so forth. Many technical issues arise for key rate durations (partial DV01s) that do not arise for the standard total modified duration because of the dependence of the key rate durations on the specific type of the yield curve used to value the instruments (see Coleman, 2011).
Bond formulas.
For a standard bond with fixed, semi-annual payments the bond duration closed-form formula is:
formula_27
For a bond with coupon frequency formula_23 but an integer number of periods (so that there is no fractional payment period), the formula simplifies to:
formula_28
where
Example 1.
Consider a 2-year bond with face value of $100, a 20% semi-annual coupon, and a yield of 4% semi-annually compounded. The total PV will be:
formula_29
formula_30
The Macaulay duration is then
formula_31.
The simple formula above gives (y/k =.04/2=.02, c/k = 20/2 = 10):
formula_32
The modified duration, measured as percentage change in price per one percentage point change in yield, is:
formula_33 (% change in price per 1 percentage point change in yield)
The DV01, measured as dollar change in price for a $100 nominal bond for a one percentage point change in yield, is
formula_34 ($ per 1 percentage point change in yield)
where the division by 100 is because modified duration is the percentage change.
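These figures can be reproduced with a few lines of Python; the script below is a sketch tied to this specific bond (the semi-annual cash flows are hard-coded), not a general-purpose duration calculator.
<syntaxhighlight lang="python">
times = [0.5, 1.0, 1.5, 2.0]          # payment dates in years
cashflows = [10, 10, 10, 110]         # 20% semi-annual coupon on 100 face; principal repaid at the end
y, k = 0.04, 2                        # 4% yield, compounded semi-annually

pv = [cf / (1 + y / k) ** (k * t) for cf, t in zip(cashflows, times)]
V = sum(pv)
mac_d = sum(t * p for t, p in zip(times, pv)) / V
mod_d = mac_d / (1 + y / k)
dv01 = mod_d * V / 100

print(f"PV={V:.2f}  MacD={mac_d:.3f}  ModD={mod_d:.3f}  DV01={dv01:.2f}")
# PV=130.46  MacD=1.777  ModD=1.743  DV01=2.27   (the 1.742 quoted above comes from rounding MacD first)
</syntaxhighlight>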
Example 2.
Consider a bond with a $1000 face value, 5% coupon rate and 6.5% annual yield, with maturity in 5 years. The steps to compute duration are the following:
1. Estimate the bond value
The coupons will be $50 in years 1, 2, 3 and 4. Then, on year 5, the bond will pay coupon and principal, for a total of $1050. Discounting to present value at 6.5%, the bond value is $937.66. The detail is the following:
Year 1: $50 / (1 + 6.5%) ^ 1 = 46.95
Year 2: $50 / (1 + 6.5%) ^ 2 = 44.08
Year 3: $50 / (1 + 6.5%) ^ 3 = 41.39
Year 4: $50 / (1 + 6.5%) ^ 4 = 38.87
Year 5: $1050 / (1 + 6.5%) ^ 5 = 766.37
2. Multiply the time each cash flow is received, times its present value
Year 1: 1 * $46.95 = 46.95
Year 2: 2 * $44.08 = 88.17
Year 3: 3 * $41.39 = 124.18
Year 4: 4 * $38.87 = 155.46
Year 5: 5 * 766.37 = 3831.87
TOTAL: 4246.63
3. Compare the total from step 2 with the bond value (step 1)
Macaulay duration: 4246.63 / 937.66 = 4.53
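The same three steps, with annual compounding, can be scripted as a quick check of this example:
<syntaxhighlight lang="python">
face, coupon_rate, y, n = 1000, 0.05, 0.065, 5

cashflows = [face * coupon_rate] * (n - 1) + [face * coupon_rate + face]    # 50, 50, 50, 50, 1050
pv = [cf / (1 + y) ** t for t, cf in enumerate(cashflows, start=1)]

price = sum(pv)                                             # step 1: 937.66
weighted = sum(t * p for t, p in enumerate(pv, start=1))    # step 2: 4246.63
print(round(price, 2), round(weighted, 2), round(weighted / price, 2))      # 937.66 4246.63 4.53
</syntaxhighlight>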
Money duration.
The <templatestyles src="Template:Visible anchor/styles.css" />money duration, or <templatestyles src="Template:Visible anchor/styles.css" />basis point value or Bloomberg <templatestyles src="Template:Visible anchor/styles.css" />Risk, also called <templatestyles src="Template:Visible anchor/styles.css" />dollar duration or <templatestyles src="Template:Visible anchor/styles.css" />DV01 in the United States, is defined as negative of the derivative of the value with respect to yield:
formula_35
so that it is the product of the modified duration and the price (value):
formula_36 ($ per 1 percentage point change in yield)
or
formula_37 ($ per 1 basis point change in yield)
The DV01 is analogous to the delta in derivative pricing (one of the "Greeks") – it is the ratio of a price change in output (dollars) to unit change in input (a basis point of yield). Dollar duration or DV01 is the change in price in "dollars," not in "percentage." It gives the dollar variation in a bond's value per unit change in the yield. It is often measured per 1 basis point - DV01 is short for "dollar value of an 01" (or 1 basis point).
The name BPV (basis point value) or Bloomberg "Risk" is also used, often applied to the dollar change for a $100 notional for 100bp change in yields - giving the same units as duration.
PV01 (present value of an 01) is sometimes used, although PV01 more accurately refers to the value of a one dollar or one basis point annuity. (For a par bond and a flat yield curve the DV01, derivative of price w.r.t. yield, and PV01, value of a one-dollar annuity, will actually have the same value.)
DV01 or dollar duration can be used for instruments with zero up-front value such as interest rate swaps where percentage changes and modified duration are less useful.
Application to value-at-risk (VaR).
Dollar duration formula_38 is commonly used for value-at-risk (VaR) calculation. To illustrate applications to portfolio risk management, consider a portfolio of securities dependent on the interest rates formula_39 as risk factors, and let
formula_40
denote the value of such portfolio. Then the exposure vector formula_41 has components
formula_42
Accordingly, the change in value of the portfolio can be approximated as
formula_43
that is, a component that is linear in the interest rate changes plus an error term which is at least quadratic. This formula can be used to calculate the VaR of the portfolio by ignoring higher order terms. Typically cubic or higher terms are truncated. Quadratic terms, when included, can be expressed in terms of (multi-variate) bond convexity. One can make assumptions about the joint distribution of the interest rates and then calculate VaR by Monte Carlo simulation or, in some special cases (e.g., Gaussian distribution assuming a linear approximation), even analytically. The formula can also be used to calculate the DV01 of the portfolio (cf. below) and it can be generalized to include risk factors beyond interest rates.
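A minimal delta-normal sketch of this linear approximation is given below; the exposure vector, the covariance matrix of rate changes, and the 99% confidence level are all hypothetical inputs chosen for illustration, and a production VaR calculation would involve much more (mapping of cash flows to key rates, estimation of the covariances, backtesting, and so on).
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# Hypothetical inputs: dollar durations (DV01-style sensitivities) to three rate factors,
# and an assumed covariance matrix of the daily changes of those rates.
dollar_durations = np.array([12000.0, 35000.0, 8000.0])      # -dV/dr_i, in dollars per unit rate change
cov = 1e-6 * np.array([[0.9, 0.6, 0.3],
                       [0.6, 1.1, 0.5],
                       [0.3, 0.5, 0.7]])                      # toy numbers, not market data

omega = -dollar_durations                                     # exposure vector, omega_i = dV/dr_i
sigma_pnl = np.sqrt(omega @ cov @ omega)                      # standard deviation of the linearised P&L
var_99 = norm.ppf(0.99) * sigma_pnl                           # one-day 99% VaR under the linear-Gaussian model
print(round(float(var_99), 2))
</syntaxhighlight>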
Risk – duration as interest rate sensitivity.
The primary use of duration (modified duration) is to measure interest rate sensitivity or exposure. Thinking of risk in terms of interest rates or yields is very useful because it helps to normalize across otherwise disparate instruments. Consider, for example, the following four instruments, each with 10-year final maturity: a standard coupon bond, an annuity, a zero-coupon bond, and an interest rate swap.
All four have a 10-year maturity, but the sensitivity to interest rates, and thus the risk, will be different: the zero-coupon has the highest sensitivity and the annuity the lowest.
Consider first a $100 investment in each, which makes sense for the three bonds (the coupon bond, the annuity, and the zero-coupon bond); it does not make sense for the interest rate swap, for which there is no initial investment. Modified duration is a useful measure to compare interest rate sensitivity across the three. The zero-coupon bond will have the highest sensitivity, changing at a rate of 9.76% per 100bp change in yield. This means that if yields go up from 5% to 5.01% (a rise of 1bp) the price should fall by roughly 0.0976% or a change in price from $61.0271 per $100 notional to roughly $60.968. The original $100 invested will fall to roughly $99.90. The annuity has the lowest sensitivity, roughly half that of the zero-coupon bond, with a modified duration of 4.72%.
Alternatively, we could consider $100 notional of each of the instruments. In this case the BPV or DV01 (dollar value of an 01 or dollar duration) is the more natural measure. The BPV in the table is the dollar change in price for $100 notional for 100bp change in yields. The BPV will make sense for the interest rate swap (for which modified duration is not defined) as well as the three bonds.
Modified duration measures the "size" of the interest rate sensitivity. Sometimes we can be misled into thinking that it measures "which part" of the yield curve the instrument is sensitive to. After all, the modified duration (% change in price) is almost the same number as the Macaulay duration (a kind of weighted average years to maturity). For example, the annuity above has a Macaulay duration of 4.8 years, and we might think that it is sensitive to the 5-year yield. But it has cash flows out to 10 years and thus will be sensitive to 10-year yields. If we want to measure sensitivity to parts of the yield curve, we need to consider key rate durations.
For bonds with fixed cash flows a price change can come from two sources:
The yield-price relationship is inverse, and the modified duration provides a very useful measure of the price sensitivity to yields. As a first derivative it provides a linear approximation. For large yield changes, convexity can be added to provide a quadratic or second-order approximation. Alternatively, and often more usefully, convexity can be used to measure how the modified duration changes as yields change. Similar risk measures (first and second order) used in the options markets are the delta and gamma.
Modified duration and DV01 as measures of interest rate sensitivity are also useful because they can be applied to instruments and securities with varying or contingent cash flows, such as options.
Embedded options and effective duration.
For bonds that have embedded options, such as putable and callable bonds, modified duration will not correctly approximate the price move for a change in yield to maturity.
Consider a bond with an embedded put option. As an example, a $1,000 bond that can be redeemed by the holder at par at any time before the bond's maturity (i.e. an American put option). No matter how high interest rates become, the price of the bond will never go below $1,000 (ignoring counterparty risk). This bond's price sensitivity to interest rate changes is different from a non-puttable bond with otherwise identical cash flows.
To price such bonds, one must use option pricing to determine the value of the bond, and then one can compute its delta (and hence its lambda), which is the duration. The effective duration is a discrete approximation to this latter, and will require an option pricing model.
formula_44
where Δ "y" is the amount that yield changes, and formula_45 and formula_46 are the values that the bond will take if the yield falls by Δ "y" or rises by Δ "y", respectively. (A "parallel shift"; note that this value may vary depending on the value used for Δ "y".)
These values are typically calculated using a tree-based model, built for the "entire" yield curve (as opposed to a single yield to maturity), and therefore capturing exercise behavior at each point in the option's life as a function of both time and interest rates; see .
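Given any such pricing function, the effective duration itself is a one-line finite difference. The sketch below uses a hypothetical option-free bond as the pricing function purely as a sanity check (for which effective and modified duration coincide); for a callable or putable bond the function would instead come from the tree-based model described above.
<syntaxhighlight lang="python">
def effective_duration(price, y0, dy=1e-4):
    # Symmetric finite difference (V_down - V_up) / (2 * V_0 * dy).
    v0, v_down, v_up = price(y0), price(y0 - dy), price(y0 + dy)
    return (v_down - v_up) / (2 * v0 * dy)

def straight_bond_price(y, face=100.0, coupon=5.0, n=10):
    # Hypothetical option-free 10-year 5% annual-pay bond, used only as a sanity check.
    return sum(coupon / (1 + y) ** t for t in range(1, n + 1)) + face / (1 + y) ** n

print(round(effective_duration(straight_bond_price, 0.05), 3))   # ~7.722, the bond's modified duration
</syntaxhighlight>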
Spread duration.
Spread duration is the sensitivity of a bond's market price to a change in option-adjusted spread (OAS). Thus the index, or underlying yield curve, remains unchanged. Floating rate assets that are benchmarked to an index (such as 1-month or 3-month LIBOR) and reset periodically will have an effective duration near zero but a spread duration comparable to an otherwise identical fixed rate bond.
Average duration.
The sensitivity of a portfolio of bonds such as a bond mutual fund to changes in interest rates can also be important. The average duration of the bonds in the portfolio is often reported. The duration of a portfolio equals the weighted average maturity of all of the cash flows in the portfolio. If each bond has the same yield to maturity, this equals the weighted average of the portfolio's bond's durations, with weights proportional to the bond prices. Otherwise the weighted average of the bond's durations is just a good approximation, but it can still be used to infer how the value of the portfolio would change in response to changes in interest rates.
Convexity.
Duration is a linear measure of how the price of a bond changes in response to interest rate changes. As interest rates change, the price does not change linearly, but rather is a convex function of interest rates. Convexity is a measure of the curvature of how the price of a bond changes as the interest rate changes. Specifically, duration can be formulated as the first derivative of the price function of the bond with respect to the interest rate in question, and the convexity as the second derivative.
Convexity also gives an idea of the spread of future cashflows. (Just as the duration gives the discounted mean term, so convexity can be used to calculate the discounted standard deviation, say, of return.)
Note that convexity can be positive or negative. A bond with "positive convexity" will not have any call features - i.e. the issuer must redeem the bond at maturity - which means that as rates fall, both its duration and price will rise.
On the other hand, a bond "with" call features - i.e. where the issuer can redeem the bond early - is deemed to have "negative convexity" as rates approach the option strike, which is to say its duration will fall as rates fall, and hence its price will rise less quickly. This is because the issuer can redeem the old bond at a high coupon and re-issue a new bond at a lower rate, thus providing the issuer with valuable optionality. Similar to the above, in these cases, it may be more correct to calculate an effective convexity.
Mortgage-backed securities (pass-through mortgage principal prepayments) with US-style 15- or 30-year fixed-rate mortgages as collateral are examples of callable bonds.
Sherman ratio.
The "Sherman ratio" is the yield offered per unit of bond duration, named after DoubleLine Capital's chief investment officer, Jeffrey Sherman. It has been called the "Bond Market's Scariest Gauge", and hit an all-time low of 0.1968 for the Bloomberg Barclays US Corporate Bond Index on Dec 31, 2020. The ratio is simply the yield offered (as a percentage), divided by the bond duration (in years).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " V = \\sum_{i=1}^{n}PV_i "
},
{
"math_id": 1,
"text": "\\text{Macaulay duration} = \\frac{\\sum_{i=1}^{n}{t_i PV_i}} {\\sum_{i=1}^{n}{PV_i}} = \\frac{\\sum_{i=1}^{n}{t_i PV_i}} {V} = \\sum_{i=1}^{n}t_i \\frac{{PV_i}} {V} "
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "PV_i"
},
{
"math_id": 4,
"text": "t_i"
},
{
"math_id": 5,
"text": "V"
},
{
"math_id": 6,
"text": "\\frac{PV_i} {V} "
},
{
"math_id": 7,
"text": "t_1"
},
{
"math_id": 8,
"text": "(t_1, ..., t_n)"
},
{
"math_id": 9,
"text": "t_1 \\leq \\text{Macaulay duration} \\leq t_n,"
},
{
"math_id": 10,
"text": "PV(i)"
},
{
"math_id": 11,
"text": "V = \\sum_{i=1}^{n}PV_i = \\sum_{i=1}^{n}CF_i \\cdot e^{-y \\cdot t_i} "
},
{
"math_id": 12,
"text": "\\text{Macaulay duration} = \\sum_{i=1}^{n}t_i\\frac{{CF_i \\cdot e^{-y \\cdot t_i}}} {V} "
},
{
"math_id": 13,
"text": "CF_i"
},
{
"math_id": 14,
"text": "y"
},
{
"math_id": 15,
"text": " ModD(y) \\equiv - \\frac{1}{V} \\cdot \\frac{\\partial V}{\\partial y} = - \\frac{\\partial \\ln(V)}{\\partial y} "
},
{
"math_id": 16,
"text": " \\frac{\\partial V}{\\partial y} = - \\sum_{i=1}^{n} t_i \\cdot CF_i \\cdot e^{-y \\cdot t_i} = - MacD \\cdot V,"
},
{
"math_id": 17,
"text": " ModD = MacD "
},
{
"math_id": 18,
"text": "V(y_k) = \\sum_{i=1}^{n}PV_i = \\sum_{i=1}^{n} \\frac{CF_i} {(1+y_k/k)^{k \\cdot t_i}} "
},
{
"math_id": 19,
"text": " MacD = \\sum_{i=1}^{n} \\frac {t_i} {V(y_k)} \\cdot \\frac{CF_i} {(1+y_k/k)^{k \\cdot t_i}} "
},
{
"math_id": 20,
"text": " \\frac{\\partial V}{\\partial y_k} = - \\frac{1}{(1+y_k/k)} \\cdot \\sum_{i=1}^{n} t_i \\cdot \\frac {CF_i} {(1+y_k/k)^{k \\cdot t_i}} = - \\frac{MacD \\cdot V(y_k)} { (1+y_k/k)} "
},
{
"math_id": 21,
"text": " \\frac{MacD } { (1+y_k/k)} = - \\frac{1} {V(y_k)} \\cdot \\frac{\\partial V}{\\partial y_k} \\equiv ModD "
},
{
"math_id": 22,
"text": " ModD = \\frac{MacD}{(1+y_k/k)} "
},
{
"math_id": 23,
"text": "k"
},
{
"math_id": 24,
"text": "y_k"
},
{
"math_id": 25,
"text": "\\Delta y"
},
{
"math_id": 26,
"text": " ModD \\approx - \\frac{1}{V} \\frac {\\Delta V} {\\Delta y} \\rArr \\Delta V \\approx - V \\cdot ModD \\cdot \\Delta y "
},
{
"math_id": 27,
"text": " \\text{Dur} = \\frac{1}{P} \\left( C\\frac{(1+ai)(1+i)^m-(1+i) - (m-1+a)i}{i^2(1+i)^{(m-1+a)}} + \\frac{FV(m - 1 + a)}{(1+i)^{(m-1+a)}} \\right ) "
},
{
"math_id": 28,
"text": "MacD = \\left[ \\frac {(1+y/k)}{y/k} - \\frac {100(1+y/k)+m(c/k-100y/k)}{(c/k)[(1+y/k)^m-1]+100y/k} \\right ] / k"
},
{
"math_id": 29,
"text": "V = \\sum_{i=1}^{n}PV_i = \\sum_{i=1}^{n} \\frac{CF_i} {(1+y/k)^{k \\cdot t_i}} = \\sum_{i=1}^{4} \\frac{10} {(1+.04/2)^i} + \\frac{100} {(1+.04/2)^4} "
},
{
"math_id": 30,
"text": "= 9.804 + 9.612 + 9.423 + 9.238 + 92.385 = 130.462 "
},
{
"math_id": 31,
"text": "\\text{MacD} = 0.5 \\cdot \\frac{9.804} { 130.462} + 1.0 \\cdot \\frac{9.612} { 130.462} + 1.5 \\cdot \\frac{9.423} { 130.462} + 2.0 \\cdot \\frac{9.238} { 130.462} + 2.0 \\cdot \\frac{92.385} { 130.462}= 1.777\\,\\text{years} "
},
{
"math_id": 32,
"text": " \\text{MacD} = \\left[ \\frac {(1.02)}{0.02} - \\frac {100(1.02)+4(10-2)}{10[(1.02)^{4}-1]+2} \\right] / 2 = 1.777\\,\\text{years}"
},
{
"math_id": 33,
"text": " \\text{ModD} = \\frac{\\text{MacD}}{(1+y/k)} = \\frac{1.777}{(1+.04/2)} = 1.742"
},
{
"math_id": 34,
"text": " \\text{DV01} = \\frac{\\text{ModD} \\cdot 130.462} {100} = 2.27 "
},
{
"math_id": 35,
"text": "D_\\$ = DV01 = -\\frac{\\partial V}{\\partial y}. "
},
{
"math_id": 36,
"text": "D_\\$ = DV01 = BPV = V \\cdot ModD / 100 "
},
{
"math_id": 37,
"text": "D_\\$ = DV01 = V \\cdot ModD / 10000 "
},
{
"math_id": 38,
"text": "D_\\$"
},
{
"math_id": 39,
"text": " r_1, \\ldots, r_n "
},
{
"math_id": 40,
"text": "V = V(r_1, \\ldots, r_n) \\, "
},
{
"math_id": 41,
"text": " \\boldsymbol{\\omega} = (\\omega_1, \\ldots, \\omega_n)"
},
{
"math_id": 42,
"text": "\\omega_i = - D_{\\$,i} := \\frac{\\partial V}{\\partial r_i}. "
},
{
"math_id": 43,
"text": "\\Delta V = \\sum_{i=1}^n \\omega_i\\, \\Delta r_i\n+ \\sum_{1 \\leq i,j \\leq n} O(\\Delta r_i\\, \\Delta r_j), "
},
{
"math_id": 44,
"text": "\\text{Effective duration} = \\frac {V_{-\\Delta y}-V_{+\\Delta y}}{2(V_0)\\Delta y} "
},
{
"math_id": 45,
"text": "V_{-\\Delta y}"
},
{
"math_id": 46,
"text": "V_{+\\Delta y} "
}
] | https://en.wikipedia.org/wiki?curid=847478 |
847558 | Confusion matrix | Table layout for visualizing performance; also called an error matrix
In the field of machine learning and specifically the problem of statistical classification, a confusion matrix, also known as error matrix, is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one; in unsupervised learning it is usually called a matching matrix.
Each row of the matrix represents the instances in an actual class while each column represents the instances in a predicted class, or vice versa – both variants are found in the literature. The diagonal of the matrix therefore represents all instances that are correctly predicted. The name stems from the fact that it makes it easy to see whether the system is confusing two classes (i.e. commonly mislabeling one as another).
It is a special kind of contingency table, with two dimensions ("actual" and "predicted"), and identical sets of "classes" in both dimensions (each combination of dimension and class is a variable in the contingency table).
Example.
Given a sample of 12 individuals, 8 that have been diagnosed with cancer and 4 that are cancer-free, where individuals with cancer belong to class 1 (positive) and non-cancer individuals belong to class 0 (negative), we can display that data as follows:
Assuming that we have a classifier that distinguishes between individuals with and without cancer in some way, we can take the 12 individuals and run them through the classifier. The classifier then makes 9 accurate predictions and misses 3: 2 individuals with cancer wrongly predicted as being cancer-free (samples 1 and 2), and 1 person without cancer wrongly predicted to have cancer (sample 9).
Notice that if we compare the actual classification set to the predicted classification set, there are 4 different outcomes that could result in any particular column. First, if the actual classification is positive and the predicted classification is positive (1,1), this is called a true positive result because the positive sample was correctly identified by the classifier. Second, if the actual classification is positive and the predicted classification is negative (1,0), this is called a false negative result because the positive sample is incorrectly identified by the classifier as being negative. Third, if the actual classification is negative and the predicted classification is positive (0,1), this is called a false positive result because the negative sample is incorrectly identified by the classifier as being positive. Fourth, if the actual classification is negative and the predicted classification is negative (0,0), this is called a true negative result because the negative sample is correctly identified by the classifier.
We can then perform the comparison between actual and predicted classifications and add this information to the table, making correct results appear in green so they are more easily identifiable.
The template for any binary confusion matrix uses the four kinds of results discussed above (true positives, false negatives, false positives, and true negatives) along with the positive and negative classifications. The four outcomes can be formulated in a 2×2 "confusion matrix", as follows:
The color convention of the three data tables above were picked to match this confusion matrix, in order to easily differentiate the data.
Now, we can simply total up each type of result, substitute into the template, and create a confusion matrix that will concisely summarize the results of testing the classifier:
In this confusion matrix, of the 8 samples with cancer, the system judged that 2 were cancer-free, and of the 4 samples without cancer, it predicted that 1 did have cancer. All correct predictions are located in the diagonal of the table (highlighted in green), so it is easy to visually inspect the table for prediction errors, as values outside the diagonal will represent them. By summing up the 2 rows of the confusion matrix, one can also deduce the total number of positive (P) and negative (N) samples in the original dataset, i.e. formula_0 and formula_1.
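The tallying itself is straightforward to script. The following Python sketch rebuilds the confusion matrix of this example from the twelve actual/predicted labels; libraries such as scikit-learn provide an equivalent confusion_matrix helper, but the counts can just as easily be taken by hand.
<syntaxhighlight lang="python">
from collections import Counter

# Samples 1-8 actually have cancer (class 1); samples 9-12 do not (class 0).
actual    = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
# The classifier misses samples 1 and 2 (false negatives) and wrongly flags sample 9 (false positive).
predicted = [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]

counts = Counter(zip(actual, predicted))
tp, fn = counts[(1, 1)], counts[(1, 0)]
fp, tn = counts[(0, 1)], counts[(0, 0)]

print([[tp, fn], [fp, tn]])   # [[6, 2], [1, 3]] -- rows are actual P/N, columns are predicted P/N
print(tp + fn, fp + tn)       # P = 8, N = 4
</syntaxhighlight>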
Table of confusion.
In predictive analytics, a table of confusion (sometimes also called a confusion matrix) is a table with two rows and two columns that reports the number of "true positives", "false negatives", "false positives", and "true negatives". This allows more detailed analysis than simply observing the proportion of correct classifications (accuracy). Accuracy will yield misleading results if the data set is unbalanced; that is, when the numbers of observations in different classes vary greatly.
For example, if there were 95 cancer samples and only 5 non-cancer samples in the data, a particular classifier might classify all the observations as having cancer. The overall accuracy would be 95%, but in more detail the classifier would have a 100% recognition rate (sensitivity) for the cancer class but a 0% recognition rate for the non-cancer class. F1 score is even more unreliable in such cases, and here would yield over 97.4%, whereas informedness removes such bias and yields 0 as the probability of an informed decision for any form of guessing (here always guessing cancer).
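As a worked check of these figures, the following sketch evaluates the standard definitions of accuracy, sensitivity, specificity, precision, F1 score and informedness for a hypothetical classifier that labels all 100 samples (95 cancer, 5 non-cancer) as cancer:
<syntaxhighlight lang="python">
tp, fn, fp, tn = 95, 0, 5, 0   # "always guess cancer" on 95 cancer / 5 non-cancer samples

accuracy     = (tp + tn) / (tp + tn + fp + fn)    # 0.95
sensitivity  = tp / (tp + fn)                     # 1.0, recognition rate for the cancer class
specificity  = tn / (tn + fp)                     # 0.0, recognition rate for the non-cancer class
precision    = tp / (tp + fp)                     # 0.95
f1           = 2 * precision * sensitivity / (precision + sensitivity)  # about 0.974
informedness = sensitivity + specificity - 1      # 0.0

print(accuracy, f1, informedness)
</syntaxhighlight>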
According to Davide Chicco and Giuseppe Jurman, the most informative metric to evaluate a confusion matrix is the Matthews correlation coefficient (MCC).
Other metrics can be included in a confusion matrix, each of them having their significance and use.
Confusion matrices with more than two categories.
The confusion matrix is not limited to binary classification and can be used with multi-class classifiers as well. The confusion matrices discussed above have only two conditions: positive and negative. For example, the table below summarizes the communication of a whistled language between two speakers, with zero values omitted for clarity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P=TP+FN"
},
{
"math_id": 1,
"text": "N=FP+TN"
}
] | https://en.wikipedia.org/wiki?curid=847558 |
8476166 | Disphenoid | Tetrahedron whose faces are all congruent
In geometry, a disphenoid (from Ancient Greek "sphenoeides", 'wedgelike') is a tetrahedron whose four faces are congruent acute-angled triangles. It can also be described as a tetrahedron in which every two edges that are opposite each other have equal lengths. Other names for the same shape are isotetrahedron, sphenoid, bisphenoid, isosceles tetrahedron, equifacial tetrahedron, almost regular tetrahedron, and tetramonohedron.
All the solid angles and vertex figures of a disphenoid are the same, and the sum of the face angles at each vertex is equal to two right angles. However, a disphenoid is not a regular polyhedron, because, in general, its faces are not regular polygons, and its edges have three different lengths.
Special cases and generalizations.
If the faces of a disphenoid are equilateral triangles, it is a regular tetrahedron with Td tetrahedral symmetry, although this is not normally called a disphenoid. When the faces of a disphenoid are isosceles triangles, it is called a tetragonal disphenoid. In this case it has D2d dihedral symmetry.
A disphenoid with scalene triangles as its faces is called a rhombic disphenoid and it has D2 dihedral symmetry. Unlike the tetragonal disphenoid, the rhombic disphenoid has no reflection symmetry, so it is chiral.
Both tetragonal disphenoids and rhombic disphenoids are isohedra: as well as being congruent to each other, all of their faces are symmetric to each other.
It is not possible to construct a disphenoid with right triangle or obtuse triangle faces. When right triangles are glued together in the pattern of a disphenoid, they form a flat figure (a doubly-covered rectangle) that does not enclose any volume. When obtuse triangles are glued in this way, the resulting surface can be folded to form a disphenoid (by Alexandrov's uniqueness theorem) but one with acute triangle faces and with edges that in general do not lie along the edges of the given obtuse triangles.
Two more types of tetrahedron generalize the disphenoid and have similar names. The digonal disphenoid has faces with two different shapes, both isosceles triangles, with two faces of each shape. The phyllic disphenoid similarly has faces with two shapes of scalene triangles.
Disphenoids can also be seen as digonal antiprisms or as alternated quadrilateral prisms.
Characterizations.
A tetrahedron is a disphenoid if and only if its circumscribed parallelepiped is right-angled.
We also have that a tetrahedron is a disphenoid if and only if the center in the circumscribed sphere and the inscribed sphere coincide.
Another characterization states that if "d1", "d2" and "d3" are the common perpendiculars of "AB" and "CD"; "AC" and "BD"; and "AD" and "BC" respectively in a tetrahedron "ABCD", then the tetrahedron is a disphenoid if and only if "d1", "d2" and "d3" are pairwise perpendicular.
The disphenoids are the only polyhedra having infinitely many non-self-intersecting closed geodesics. On a disphenoid, all closed geodesics are non-self-intersecting.
The disphenoids are the tetrahedra in which all four faces have the same perimeter, the tetrahedra in which all four faces have the same area, and the tetrahedra in which the angular defects of all four vertices equal π. They are the polyhedra having a net in the shape of an acute triangle, divided into four similar triangles by segments connecting the edge midpoints.
Metric formulas.
The volume of a disphenoid with opposite edges of length "l", "m" and "n" is given by:
formula_0
The circumscribed sphere has radius (the circumradius):
formula_1
and the inscribed sphere has radius:
formula_2
where "V" is the volume of the disphenoid and "T" is the area of any face, which is given by Heron's formula. There is also the following interesting relation connecting the volume and the circumradius:
formula_3
The squares of the lengths of the bimedians are:
formula_4
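These formulas can be evaluated numerically with a short sketch (the function name is illustrative); each face of the disphenoid is a triangle with sides "l", "m" and "n", so its area "T" follows from Heron's formula:
<syntaxhighlight lang="python">
from math import sqrt

def disphenoid_metrics(l, m, n):
    """Volume, circumradius, face area and inradius of the disphenoid
    whose three pairs of opposite edges have lengths l, m, n."""
    V = sqrt((l*l + m*m - n*n) * (l*l - m*m + n*n) * (-l*l + m*m + n*n) / 72.0)
    R = sqrt((l*l + m*m + n*n) / 8.0)
    s = (l + m + n) / 2.0                       # semi-perimeter for Heron's formula
    T = sqrt(s * (s - l) * (s - m) * (s - n))   # area of one (acute) face
    r = 3.0 * V / (4.0 * T)
    return V, R, T, r

# Example: the tetragonal disphenoid with edge lengths sqrt(3), sqrt(3), 2
# that appears in the honeycomb section below; its volume is 2/3.
print(disphenoid_metrics(sqrt(3), sqrt(3), 2))
</syntaxhighlight>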
Other properties.
If the four faces of a tetrahedron have the same perimeter, then the tetrahedron is a disphenoid.
If the four faces of a tetrahedron have the same area, then it is a disphenoid.
The centers in the circumscribed and inscribed spheres coincide with the centroid of the disphenoid.
The bimedians are perpendicular to the edges they connect and to each other.
Honeycombs and crystals.
Some tetragonal disphenoids will form honeycombs. The disphenoid whose four vertices are (-1, 0, 0), (1, 0, 0), (0, 1, 1), and (0, 1, -1) is such a disphenoid. Each of its four faces is an isosceles triangle with edges of lengths √3, √3, and 2. It can tessellate space to form the disphenoid tetrahedral honeycomb. It can also be folded without cutting or overlaps from a single sheet of A4 paper.
"Disphenoid" is also used to describe two forms of crystal:
Other uses.
Six tetragonal disphenoids attached end-to-end in a ring construct a kaleidocycle, a paper toy that can rotate on 4 sets of faces in a hexagon.
The rotation of the six disphenoids with opposite edges of length l, m and n (without loss of generality n≤l, n≤m) is physically realizable if and only if
formula_5 | [
{
"math_id": 0,
"text": " V=\\sqrt{\\frac{(l^2+m^2-n^2)(l^2-m^2+n^2)(-l^2+m^2+n^2)}{72}}. "
},
{
"math_id": 1,
"text": " R=\\sqrt{\\frac{l^2+m^2+n^2}{8}} "
},
{
"math_id": 2,
"text": " r=\\frac{3V}{4T} "
},
{
"math_id": 3,
"text": "\\displaystyle 16T^2R^2=l^2m^2n^2+9V^2. "
},
{
"math_id": 4,
"text": " \\tfrac{1}{2}(l^2+m^2-n^2),\\quad \\tfrac{1}{2}(l^2-m^2+n^2),\\quad \\tfrac{1}{2}(-l^2+m^2+n^2). "
},
{
"math_id": 5,
"text": "-8(l^2-m^2)^2(l^2+m^2)-5n^6+11(l^2-m^2)^2n^2+2(l^2+m^2)n^4 \\geq 0 "
}
] | https://en.wikipedia.org/wiki?curid=8476166 |
8476776 | Assortativity | Tendency for similar nodes to be connected
Assortativity, or assortative mixing, is a preference for a network's nodes to attach to others that are similar in some way. Though the specific measure of similarity may vary, network theorists often examine assortativity in terms of a node's degree. The addition of this characteristic to network models more closely approximates the behaviors of many real world networks.
Correlations between nodes of similar degree are often found in the mixing patterns of many observable networks. For instance, in social networks, nodes tend to be connected with other nodes with similar degree values. This tendency is referred to as assortative mixing, or "assortativity". On the other hand, technological and biological networks typically show disassortative mixing, or "disassortativity", as high degree nodes tend to attach to low degree nodes.
Measurement.
Assortativity is often operationalized as a correlation between two nodes. However, there are several ways to capture such a correlation. The two most prominent measures are the "assortativity coefficient" and the "neighbor connectivity". These measures are outlined in more detail below.
Assortativity coefficient.
The "assortativity coefficient" is the Pearson correlation coefficient of degree between pairs of linked nodes. Positive values of "r" indicate a correlation between nodes of similar degree, while negative values indicate relationships between nodes of different degree. In general, "r" lies between −1 and 1. When "r" = 1, the network is said to have perfect assortative mixing patterns, when "r" = 0 the network is non-assortative, while at "r" = −1 the network is completely disassortative.
The "assortativity coefficient" is given by formula_0. The term formula_1 is the distribution of the "remaining degree". This captures the number of edges leaving the node, other than the one that connects the pair. The distribution of this term is derived from the degree distribution formula_2 as formula_3. Finally, formula_4 refers to the joint probability distribution of the remaining degrees of the two vertices. This quantity is symmetric on an undirected graph, and follows the sum rules formula_5 and formula_6.
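In practice the coefficient can be computed directly from an edge list as the Pearson correlation of the remaining degrees at the two ends of each edge, counting every edge in both orientations so that the measure is symmetric. A sketch for an undirected graph (the function name is illustrative):
<syntaxhighlight lang="python">
import numpy as np
from collections import defaultdict

def assortativity_coefficient(edges):
    """Degree assortativity r of an undirected graph given as a list of (u, v) edges."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Remaining (excess) degrees at the endpoints of every edge, in both orientations.
    x, y = [], []
    for u, v in edges:
        x.extend([deg[u] - 1, deg[v] - 1])
        y.extend([deg[v] - 1, deg[u] - 1])
    return np.corrcoef(x, y)[0, 1]

# A triangle with one pendant vertex is disassortative (r < 0):
print(assortativity_coefficient([(0, 1), (0, 2), (1, 2), (0, 3)]))
</syntaxhighlight>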
In a directed graph, in-assortativity (formula_7) and out-assortativity (formula_8) measure the tendencies of nodes to connect with other nodes that have similar in- and out-degrees as themselves, respectively. Extending this further, four types of assortativity can be considered. Adopting a common notation, it is possible to define four metrics formula_7, formula_9, formula_10, and formula_8. Let formula_11 be one of the "in"/"out" word pairs (e.g. formula_12). Let formula_13 be the number of edges in the network. Suppose we label the edges of the network formula_14. Given edge formula_15, let formula_16 be the formula_17-degree of the source (i.e. "tail") node of the edge, and formula_18 be the formula_19-degree of the target (i.e. "head") node of edge formula_15. We indicate average values with bars, so that formula_20 and formula_21 are the average formula_17-degree of sources and formula_19-degree of targets, respectively; averages being taken over the edges of the network. Finally, we have
formula_22
Neighbor connectivity.
Another means of capturing the degree correlation is by examining the properties of formula_23, or the average degree of neighbors of a node with degree "k". This term is formally defined as: formula_24, where formula_25 is the conditional probability that an edge of node with degree "k" points to a node with degree "k"'. If this function is increasing, the network is assortative, since it shows that nodes of high degree connect, on average, to nodes of high degree. Alternatively, if the function is decreasing, the network is disassortative, since nodes of high degree tend to connect to nodes of lower degree. The function can be plotted on a graph (see Fig. 2) to depict the overall assortativity trend for a network.
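A minimal sketch of this measurement (names illustrative): for every degree "k" present in the graph, the mean neighbor degree is averaged over all nodes of degree "k"; an increasing result indicates assortative mixing and a decreasing one disassortative mixing.
<syntaxhighlight lang="python">
from collections import defaultdict

def average_neighbor_degree_by_k(edges):
    """Return {k: <k_nn>(k)} for an undirected graph given as an edge list."""
    neighbors = defaultdict(list)
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    deg = {n: len(nbrs) for n, nbrs in neighbors.items()}
    by_degree = defaultdict(list)
    for n, nbrs in neighbors.items():
        by_degree[deg[n]].append(sum(deg[m] for m in nbrs) / deg[n])
    return {k: sum(vals) / len(vals) for k, vals in sorted(by_degree.items())}
</syntaxhighlight>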
Local assortativity.
In assortative networks, there could be nodes that are disassortative and vice versa. A local assortative measure is required to identify such anomalies within networks. Local assortativity is defined as the contribution that each node makes to the network assortativity. Local assortativity in undirected networks is defined as,
formula_26
Where formula_27 is the excess degree of a particular node and formula_28 is the average excess degree of its neighbors and M is the number of links in the network.
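A sketch of this per-node contribution for an undirected edge list (names illustrative; μ_q and σ_q are taken to be the mean and standard deviation of the remaining-degree distribution sampled over the two ends of every edge):
<syntaxhighlight lang="python">
import numpy as np
from collections import defaultdict

def local_assortativity(edges):
    """Per-node contributions rho = j(j+1)(kbar - mu_q) / (2 M sigma_q^2)
    for an undirected edge list (undefined for regular graphs, where sigma_q = 0)."""
    M = len(edges)
    neighbors = defaultdict(list)
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    deg = {n: len(nbrs) for n, nbrs in neighbors.items()}
    # Remaining-degree distribution q, sampled over the two ends of every edge.
    ends = [deg[u] - 1 for u, v in edges] + [deg[v] - 1 for u, v in edges]
    mu_q, sigma_q = np.mean(ends), np.std(ends)
    rho = {}
    for n, nbrs in neighbors.items():
        j = deg[n] - 1                              # excess degree of the node
        kbar = np.mean([deg[m] - 1 for m in nbrs])  # average excess degree of its neighbors
        rho[n] = j * (j + 1) * (kbar - mu_q) / (2 * M * sigma_q ** 2)
    return rho
</syntaxhighlight>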
Respectively, local assortativity for directed networks is a node's contribution to the directed assortativity of a network. A node's contribution to the assortativity of a directed network formula_29 is defined as,
formula_30
where formula_31 is the out-degree of the node under consideration and formula_32 is the in-degree, formula_33 is the average in-degree of its neighbors (to which node formula_34 has an edge) and formula_35 is the average out-degree of its neighbors (from which node formula_34 has an edge), with formula_36 and formula_37.
By including the scaling terms formula_38 and formula_39 , we ensure that the equation for local assortativity for a directed network satisfies the condition formula_40.
Further, based on whether the in-degree or out-degree distribution is considered, it is possible to define local in-assortativity and local out-assortativity as the respective local assortativity measures in a directed network.
Assortative mixing patterns of real networks.
The assortative patterns of a variety of real world networks have been examined. For instance, Fig. 3 lists values of "r" for a variety of networks. Note that the social networks (the first five entries) have apparent assortative mixing. On the other hand, the technological and biological networks (the middle six entries) all appear to be disassortative. It has been suggested that this is because most networks have a tendency to evolve, unless otherwise constrained, towards their maximum entropy state—which is usually disassortative.
The table also has the value of r calculated analytically for two models of networks:
In the ER model, since edges are placed at random without regard to vertex degree, it follows that r = 0 in the limit of large graph size. The scale-free BA model also holds this property. For the BA model in the special case of m=1 (where each incoming node attaches to only one of the existing nodes with a degree-proportional probability), a more precise result is known: as formula_41 (the number of vertices) tends to infinity, r approaches 0 at the same rate as formula_42.
Application.
The properties of assortativity are useful in the field of epidemiology, since they can help understand the spread of disease or cures. For instance, the removal of a portion of a network's vertices may correspond to curing, vaccinating, or quarantining individuals or cells. Since social networks demonstrate assortative mixing, diseases targeting high degree individuals are likely to spread to other high degree nodes. Alternatively, within the cellular network—which, as a biological network, is likely disassortative—vaccination strategies that specifically target the high degree vertices may quickly destroy the epidemic network.
Structural disassortativity.
The basic structure of a network can cause these measures to show disassortativity, which is not representative of any underlying assortative or disassortative mixing. Special caution must be taken to avoid this structural disassortativity. | [
{
"math_id": 0,
"text": "r = \\frac{\\sum_{jk}{jk (e_{jk} - q_j q_k)}}{\\sigma_{q}^{2}}"
},
{
"math_id": 1,
"text": "q_{k}"
},
{
"math_id": 2,
"text": "p_{k}"
},
{
"math_id": 3,
"text": "q_{k} = \\frac{(k+1)p_{k+1}}{\\sum_{j \\geq 1} j p_j}"
},
{
"math_id": 4,
"text": "e_{jk}"
},
{
"math_id": 5,
"text": "\\sum_{jk}{e_{jk}} = 1\\,"
},
{
"math_id": 6,
"text": "\\sum_{j}{e_{jk}} = q_{k}\\,"
},
{
"math_id": 7,
"text": "r( \\text{in}, \\text{in})"
},
{
"math_id": 8,
"text": "r( \\text{out}, \\text{out})"
},
{
"math_id": 9,
"text": "r( \\text{in}, \\text{out})"
},
{
"math_id": 10,
"text": "r( \\text{out}, \\text{in})"
},
{
"math_id": 11,
"text": "(\\alpha,\\beta)"
},
{
"math_id": 12,
"text": "(\\alpha,\\beta)=(\\text{out},\\text{in})"
},
{
"math_id": 13,
"text": "E"
},
{
"math_id": 14,
"text": "1,\\ldots,E"
},
{
"math_id": 15,
"text": "i"
},
{
"math_id": 16,
"text": "j^{\\alpha}_i"
},
{
"math_id": 17,
"text": "\\alpha"
},
{
"math_id": 18,
"text": "k^{\\beta}_i"
},
{
"math_id": 19,
"text": "\\beta"
},
{
"math_id": 20,
"text": "\\bar{j^\\alpha}"
},
{
"math_id": 21,
"text": " \\bar{k^\\beta}"
},
{
"math_id": 22,
"text": "\nr(\\alpha,\\beta)=\\frac{\\sum_i (j^\\alpha_i-\\bar{j^\\alpha})(k^\\beta_i-\\bar{k^\\beta})}{ \\sqrt{\\sum_i (j^\\alpha_i-\\bar{j^\\alpha})^2} \\sqrt{\\sum_i (k^\\beta_i-\\bar{k^\\beta})^2} }.\n"
},
{
"math_id": 23,
"text": "\\langle k_{nn} \\rangle"
},
{
"math_id": 24,
"text": "\\langle k_{nn} \\rangle = \\sum_{k'}{k'P(k'|k)}"
},
{
"math_id": 25,
"text": "P(k'|k)"
},
{
"math_id": 26,
"text": "\n\\rho = \\frac{j\\ \\left(j+1\\right)\\left(\\overline{k}-\\ {\\mu }_q\\right)}{2M{\\sigma }^2_q} \n"
},
{
"math_id": 27,
"text": "j"
},
{
"math_id": 28,
"text": "\\overline{k}"
},
{
"math_id": 29,
"text": "r_d"
},
{
"math_id": 30,
"text": "\n{\\rho }_d=\\ \\frac{{j_{out}}^2\\left({\\overline{k}}_{in}-\\ {\\mu }^{in}_q\\right)+\\ {j_{in}}^2\\left({\\overline{k}}_{out}-\\ {\\mu }^{out}_q\\right)}{2\\ M{\\sigma }^{in}_q{\\sigma }^{out}_q} \n"
},
{
"math_id": 31,
"text": "j_{out}"
},
{
"math_id": 32,
"text": "j_{in}"
},
{
"math_id": 33,
"text": "{\\overline{k}}_{in}"
},
{
"math_id": 34,
"text": "v"
},
{
"math_id": 35,
"text": "{\\overline{k}}_{out}"
},
{
"math_id": 36,
"text": "{\\sigma }^{in}_q\\ \\ne 0"
},
{
"math_id": 37,
"text": "\\ {\\ \\sigma }^{out}_q\\ \\ne 0"
},
{
"math_id": 38,
"text": "{\\sigma }^{in}_q"
},
{
"math_id": 39,
"text": "{\\ \\sigma }^{out}_q"
},
{
"math_id": 40,
"text": "r_d=\\ \\sum^N_{i=1}{{\\rho }_d}"
},
{
"math_id": 41,
"text": "N"
},
{
"math_id": 42,
"text": "(\\log^2 N)/N"
}
] | https://en.wikipedia.org/wiki?curid=8476776 |
8476916 | Neferneferuaten Tasherit | Neferneferuaten Tasherit or Neferneferuaten the younger (, meaning "most beautiful one of Aten – younger") (14th century BCE) was an ancient Egyptian princess of the 18th Dynasty and the fourth daughter of Pharaoh Akhenaten and his Great Royal Wife Nefertiti.
Family.
Neferneferuaten was born between c. year 8 and 9 of her father's reign. She was the fourth of six known daughters of the royal couple. It is likely that she was born in Akhetaten, the capital founded by her father. Her name "Neferneferuaten" ("Beauty of the Beauties of Aten" or "Most Beautiful One of Aten") is the exact copy of the name Nefertiti took in the 5th regnal year. ("Ta-sherit" simply means "the younger"). She had three older sisters named Meritaten, Meketaten, and Ankhesenpaaten (later known as Ankhesenamun), and two younger sisters named Neferneferure and Setepenre.
Life.
One of the earliest depictions of Neferneferuaten Tasherit is on a mural from the King's House in Amarna. She is depicted sitting on a pillow with her sister Neferneferure. The fresco is dated to c. year 9 of Akhenaten, and the entire family is depicted, including the baby Setepenre.
Neferneferuaten Tasherit is depicted in several tombs in Amarna and appears on monuments. A statue base originally from Amarna, but later moved to Heliopolis, mentions the Aten and Akhenaten, while in texts in a lower register the royal daughters Ankhesenpaaten and Neferneferuaten Tasherit are mentioned.
In the tomb of Huya, the chief Steward of Neferneferuaten's grandmother Queen Tiye, Neferneferuaten is shown in a family scene on a lintel on the north wall. The extended scene shows Akhenaten and Nefertiti on the left with their four eldest daughters, while on the right hand side Amenhotep III, Queen Tiye and princess Baketaten are shown. In the reward scene in the tomb of Meryre II, Neferneferuaten Tasherit is shown with four of her sisters (only Setepenre is absent).
She is depicted at the Durbar in year 12 in the tomb of the Overseer of the royal quarters Meryre II in Amarna. Akhenaten and Nefertiti are shown seated in a kiosk, receiving tribute from foreign lands. The daughters of the royal couple are shown standing behind their parents. Neferneferuaten is the first daughter in the lower register. She is holding an object which is too damaged to identify. Her sisters Neferneferure and Setepenre are standing behind her. Neferneferure is shown holding a pet gazelle and Setepenre is shown reaching over to pet the animal.
Neferneferuaten also appears in the award scene of Panehesy. She is shown standing in the building near the window of appearance as her parents, Akhenaten and Nefertiti, bestow honors upon the first servant of the Aten named Panehesy. In another scene in this tomb Neferneferuaten and her three older sisters all accompany their parents who are shown offering flowers to the Aten. The four royal daughters are all shown holding bouquets of flowers.
Neferneferuaten Tasherit is shown with her sisters Meritaten and Ankhesenpaaten mourning the death of Meketaten in c. year 14 in the Royal Tomb in Amarna. Her younger sisters Neferneferure and Setepenre are not present in this scene.
Final years and death.
It is unknown what became of Neferneferuaten Tasherit, but it has been suggested she died before Tutankhamun and Ankhesenpaaten came to the throne. It is possible she was one of the persons buried in chamber formula_0 in the Royal Tomb in Amarna.
It was previously suggested by James Allen in 2009 that she might be identified as Akhenaten's co-regent, whose exact identity is still disputed but who could have been a woman.
{
"math_id": 0,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=8476916 |
8476931 | Neferneferure | Neferneferure ( "beautiful are the beauties of Re") (14th century BCE) was an ancient Egyptian princess of the 18th Dynasty. She was the fifth of six known daughters of Pharaoh Akhenaten and his Great Royal Wife Nefertiti.
Family.
Neferneferure was born during the 8th or 9th regnal year of her father Akhenaten in the city of Akhetaten. She had four older sisters named Meritaten, Meketaten, Ankhesenpaaten and Neferneferuaten Tasherit, as well as a younger sister named Setepenre.
Life.
One of the earliest depictions of Neferneferure is in a fresco from the King's House in Amarna. She is depicted sitting on a pillow with her sister Neferneferuaten Tasherit. The fresco is dated to c. year 9 of Akhenaten, and the entire family is depicted, including the baby Setepenre.
Neferneferure is depicted at the Durbar in year 12 in the tomb of the Overseer of the royal quarters Meryre II in Amarna. Akhenaten and Nefertiti are shown seated in a kiosk, receiving tribute from foreign lands. The daughters of the royal couple are shown standing behind their parents. Neferneferure is the middle daughter in the lower register. She is holding a gazelle in her right arm and a lotus flower in her left. She is standing right behind her sister Neferneferuaten Tasherit. Her sister Setepenre is standing behind her and is shown reaching over to pet the gazelle.
Death and burial.
Neferneferure probably died in the 13th or 14th regnal year, possibly in the plague that swept across Egypt during this time. She is absent from one scene and her name was plastered over in another scene in the Royal Tomb in Amarna. To be specific, on Wall C of the chamber formula_0 of the Royal Tomb her name was mentioned among the five princesses (the list excluded the youngest, Setepenre, who was possibly dead by this time), but was later covered by plaster. On Wall B of the chamber formula_1 she is missing from the scene which shows her parents and three elder sisters – Meritaten, Ankhesenpaaten and Neferneferuaten Tasherit – mourning the dead second princess, Meketaten. This suggests that she is likely to have died shortly before the decoration of these chambers was finished. It is possible that Neferneferure was actually buried in chamber formula_0 of the royal tomb.
Alternatively she may have been buried in Tomb 29 in Amarna. This theory is based on an amphora handle bearing an inscription mentioning the inner (burial) chamber of Neferneferure. If Neferneferure was buried in tomb 29, then this may mean the Royal Tomb was already sealed at the time of her burial and that she may have died after the death of her father Akhenaten.
Other objects mentioning Neferneferure.
A lid of a small box (JdE 61498) bearing her picture was found among the treasures of Tutankhamun. It shows the princess crouching, with a finger pressed to her mouth, as very young children were often depicted. On the lid, the name of Re within her name was written phonetically instead of with the usual circled-dot sign.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\gamma"
}
] | https://en.wikipedia.org/wiki?curid=8476931 |
8477282 | Gradient network | In network science, a gradient network is a directed subnetwork of an undirected "substrate" network where each node has an associated scalar potential and one out-link that points to the node with the smallest (or largest) potential in its neighborhood, defined as the union of itself and its neighbors on the substrate network.
Definition.
Transport takes place on a fixed network formula_0 called the substrate graph. It has "N" nodes, formula_1 and the set
of edges formula_2. Given a node "i", we can define its set of neighbors in G by Si(1) = {j ∈ V | (i,j)∈ E}.
Let us also consider a scalar field, "h" = {"h"0, .., "h""N"−1} defined on the set of nodes V, so that every node i has a scalar value "h""i" associated to it.
Gradient ∇"h""i" on a network: ∇"h""i" formula_3 ("i", "μ"("i"))
i.e. the directed edge from "i" to "μ(i)", where "μ"("i") ∈ Si(1) ∪ {i}, and hμ has the maximum value in formula_4.
Gradient network : ∇formula_5 ∇formula_6 formula_7
where "F" is the set of gradient edges on "G".
In general, the scalar field depends on time, due to the flow, external sources and sinks on the network. Therefore, the gradient network ∇formula_6 will be dynamic.
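A minimal sketch of the construction for a static scalar field (names illustrative): each node gets one out-link toward the member of its closed neighborhood with the largest potential, and a node that is itself the local maximum gets a self-loop.
<syntaxhighlight lang="python">
def gradient_network(adj, h):
    """Gradient edges (i, mu(i)) for a substrate given as an adjacency dict `adj`
    and a scalar field `h` (dict node -> value)."""
    F = []
    for i, nbrs in adj.items():
        candidates = list(nbrs) + [i]              # S_i^(1) union {i}
        mu = max(candidates, key=lambda j: h[j])   # node with the largest potential
        F.append((i, mu))
    return F

# Example on the path 0-1-2-3 with a single local maximum at node 2:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
h = {0: 0.1, 1: 0.4, 2: 0.9, 3: 0.2}
print(gradient_network(adj, h))   # [(0, 1), (1, 2), (2, 2), (3, 2)]
</syntaxhighlight>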
Motivation and history.
The concept of a gradient network was first introduced by Toroczkai and Bassler (2004).
Generally, real-world networks (such as citation graphs, the Internet, cellular metabolic networks, the worldwide airport network), which often evolve to transport entities such as information, cars, power, water, forces, and so on, are not globally designed; instead, they evolve and grow through local changes. For example, if a router on the Internet is frequently congested and packets are lost or delayed due to that, it will be replaced by several interconnected new routers.
Moreover, this flow is often generated or influenced by local gradients of a scalar. For example: electric current is driven by a gradient of electric potential. In information networks, properties of nodes will generate a bias in the way information is transmitted from a node to its neighbors. This idea motivated the approach of studying the flow efficiency of a network using gradient networks, where the flow is driven by gradients of a scalar field distributed on the network.
Recent research investigates the connection between network topology and the flow efficiency of the transportation.
In-degree distribution of gradient networks.
In a gradient network, the in-degree of a node i, "ki (in)" is the number of gradient edges pointing into i, and the in-degree distribution is formula_8 .
When the substrate "G" is a random graph in which each pair of nodes is connected with probability "P" (i.e. an Erdős–Rényi random graph) and the scalars "h""i" are i.i.d. (independent identically distributed), the exact expression for R(l) is given by
formula_9
In the limit formula_10 and formula_11, the degree distribution becomes the power law
formula_12
This shows that, in this limit, the gradient network of a random network is scale-free.
Furthermore, if the substrate network G is scale-free, like in the Barabási–Albert model, then the gradient network also follow the power-law with the same exponent as those of G.
The congestion on networks.
The fact that the topology of the substrate network influence the level of network congestion can be illustrated by a simple example: if the network has a star-like structure, then at the central node, the flow would become congested because the central node should handle all the flow from other nodes. However, if the network has a ring-like structure, since every node takes the same role, there is no flow congestion.
Under assumption that the flow is generated by gradients in the network, flow efficiency on networks can be characterized through the jamming factor (or congestion factor), defined as follows:
formula_13
where "N"receive is the number of nodes that receive gradient flow and "N"send is the number of nodes that send gradient flow.
The value of "J" is between 0 and 1; formula_14 means no congestion, and formula_15 corresponds to maximal congestion.
In the limit formula_16, for an Erdős–Rényi random graph, the congestion factor becomes
formula_17
This result shows that random networks are maximally congested in that limit.
On the contrary, for a scale-free network, "J" is a constant for any "N", which means that scale-free networks are not prone to maximal jamming.
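The random-graph behaviour can be checked numerically with a rough sketch (names illustrative): generate an Erdős–Rényi substrate with i.i.d. uniform potentials, build the gradient network as in the definition above, and measure the fraction of nodes that receive no gradient edge from another node, which the expression above identifies with "J" = "R"(0); self-loops are not counted as received flow here. At fixed "P" the estimate grows toward 1 as "N" increases, consistent with random networks becoming maximally congested in that limit.
<syntaxhighlight lang="python">
import random

def congestion_factor_er(N, P, trials=5):
    """Monte Carlo estimate of J = R(0) on Erdos-Renyi substrates G(N, P)
    with i.i.d. uniform potentials."""
    acc = 0.0
    for _ in range(trials):
        adj = [[] for _ in range(N)]
        for i in range(N):
            for j in range(i + 1, N):
                if random.random() < P:
                    adj[i].append(j)
                    adj[j].append(i)
        h = [random.random() for _ in range(N)]
        receivers = set()
        for i in range(N):
            mu = max(adj[i] + [i], key=lambda v: h[v])
            if mu != i:                   # ignore self-loops
                receivers.add(mu)
        acc += 1.0 - len(receivers) / N   # fraction with gradient in-degree zero
    return acc / trials

for N in (100, 400, 1600):
    print(N, round(congestion_factor_er(N, 0.05), 3))
</syntaxhighlight>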
Approaches to control congestion.
One problem in communication networks is understanding how to control congestion and maintain normal and efficient network function.
Zonghua Liu et al. (2006) showed that congestion is more likely to occur at the nodes with high degrees in networks, and that selectively enhancing the message-processing capability of a small fraction (e.g. 3%) of nodes performs just as well as enhancing the capability of all nodes.
Ana L Pastore y Piontti et al. (2008) showed that relaxational dynamics can reduce network congestion.
Pan et al. (2011) studied jamming properties in a scheme where edges are given weights of a power of the scalar difference between node potentials.
Niu and Pan (2016) showed that congestion can be reduced by introducing a correlation between the gradient field and the local network topology. | [
{
"math_id": 0,
"text": "G = G(V,E) "
},
{
"math_id": 1,
"text": "V = \\{0, 1, ...,N-1\\} "
},
{
"math_id": 2,
"text": "E = \\{(i,j) | i,j\\in V\\} "
},
{
"math_id": 3,
"text": "= "
},
{
"math_id": 4,
"text": "{ h_j | j \\in S_i^{(1)} \\cup {i}}"
},
{
"math_id": 5,
"text": "G = "
},
{
"math_id": 6,
"text": "G "
},
{
"math_id": 7,
"text": " (V, F) "
},
{
"math_id": 8,
"text": "R(l)= P\\{k_i^{(in)}=l\\}"
},
{
"math_id": 9,
"text": "R(l)=\\frac{1}{N}\\sum_{n=0}^{N-1}\\mathrm{C}^{N-1-n}_l[1-p(1-p)^n]^{N-1-n-l}[p(1-p)^n]^l"
},
{
"math_id": 10,
"text": "N\\to\\infty "
},
{
"math_id": 11,
"text": "P\\to 0 "
},
{
"math_id": 12,
"text": " R(l) \\approx l^{-1} "
},
{
"math_id": 13,
"text": " J = 1 - \\langle \\langle \\frac{N_\\text{receive}}{N_\\text{send}} \\rangle_h \\rangle_\\text{network} = R(0)"
},
{
"math_id": 14,
"text": "J=0"
},
{
"math_id": 15,
"text": "J=1"
},
{
"math_id": 16,
"text": "N\\to\\infty"
},
{
"math_id": 17,
"text": "J(N,P) = 1 - \\frac{\\ln N}{N \\ln(\\frac{1}{1-P})} \\left[ 1 + O(\\frac{1}{N}) \\right]\\rightarrow 1. "
}
] | https://en.wikipedia.org/wiki?curid=8477282 |
8477639 | Fractal dimension on networks | Fractal analysis is useful in the study of complex networks, present in both natural and artificial systems such as computer systems, brain and social networks, allowing further development of the field in network science.
Self-similarity of complex networks.
Many real networks have two fundamental properties, scale-free property and small-world property. If the degree distribution of the network follows a power-law, the network is scale-free; if any two arbitrary nodes in a network can be connected in a very small number of steps, the network is said to be small-world.
The small-world properties can be mathematically expressed by the slow increase of the average diameter of the network, with the total number of nodes formula_0,
formula_1
where formula_2 is the shortest distance between two nodes.
Equivalently:
formula_3
where formula_4 is a characteristic length.
For a self-similar structure, a power-law relation is expected rather than the exponential relation above. From this fact, it would seem that the small-world networks are not self-similar under a length-scale transformation.
Self-similarity has been discovered in the solvent-accessible surface areas of proteins. Because proteins form globular folded chains, this discovery has important implications for protein evolution and protein dynamics, as it can be used to establish characteristic dynamic length scales for protein functionality.
Methods for calculation of the dimension.
The fractal dimension can be calculated using methods such as the "box counting method" or the "cluster growing method".
The box counting method.
Let formula_5 be the number of boxes of linear size formula_6, needed to cover the given network. The fractal dimension formula_7 is then given by
formula_8
This means that the average number of vertices formula_9 within a box of size formula_6
formula_10
By measuring the distribution of formula_5 for different box sizes or by measuring the distribution of formula_9 for different box sizes, the fractal dimension formula_7 can be obtained by a power law fit of the distribution.
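A rough sketch of a greedy box covering (names illustrative): boxes are grown around randomly chosen uncovered seeds with radius ⌊("l""B" − 1)/2⌋, which keeps every pairwise distance inside a box below "l""B"; finding the minimal covering is computationally hard, so a greedy covering only approximates formula_5, and the dimension is read off as the negative slope of log formula_5 against log formula_6.
<syntaxhighlight lang="python">
import random
from collections import deque
import numpy as np

def greedy_box_count(adj, l_B):
    """Approximate number of boxes of linear size l_B needed to cover the graph."""
    radius = (l_B - 1) // 2          # pairwise distances inside a box stay below l_B
    uncovered = set(adj)
    n_boxes = 0
    while uncovered:
        seed = random.choice(tuple(uncovered))
        dist = {seed: 0}
        queue = deque([seed])
        while queue:                  # breadth-first growth up to the box radius
            u = queue.popleft()
            if dist[u] == radius:
                continue
            for v in adj[u]:
                if v in uncovered and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        uncovered.difference_update(dist)
        n_boxes += 1
    return n_boxes

def box_dimension(adj, sizes):
    """Fit N_B ~ l_B^(-d_B) over the given box sizes and return d_B."""
    counts = [greedy_box_count(adj, l) for l in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
</syntaxhighlight>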
The cluster growing method.
One seed node is chosen randomly. If the minimum distance formula_2 is given, a cluster of nodes separated by at most formula_2 from the seed node can be formed. The procedure is repeated by choosing many seeds until the clusters cover the whole network. Then the dimension formula_11 can be calculated by
formula_12
where formula_13 is the average mass of the clusters, defined as the average number of nodes in a cluster.
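A corresponding sketch for the cluster growing method (names illustrative; a fixed number of random seeds is sampled rather than covering the whole network):
<syntaxhighlight lang="python">
import random
from collections import deque
import numpy as np

def average_cluster_mass(adj, l, n_seeds=200):
    """Average number of nodes within distance l of a randomly chosen seed node."""
    nodes = list(adj)
    total = 0
    for _ in range(n_seeds):
        seed = random.choice(nodes)
        dist = {seed: 0}
        queue = deque([seed])
        while queue:
            u = queue.popleft()
            if dist[u] == l:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += len(dist)
    return total / n_seeds

def cluster_growing_dimension(adj, lengths):
    """Fit <M_C> ~ l^d_f over the given distances and return d_f."""
    masses = [average_cluster_mass(adj, l) for l in lengths]
    slope, _ = np.polyfit(np.log(lengths), np.log(masses), 1)
    return slope
</syntaxhighlight>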
These methods are difficult to apply to networks since networks are generally not embedded in another space. In order to measure the fractal dimension of networks we add the concept of renormalization.
Fractal scaling in scale-free networks.
Box-counting and renormalization.
To investigate self-similarity in networks, the box-counting method with renormalization can be used.
For each size "l""B", boxes are chosen randomly (as in the cluster growing method) until the network is covered. A box consists of nodes all separated by a distance of "l" < "l""B", that is, every pair of nodes in the box must be separated by a minimal path of fewer than "l""B" links. Then each box is replaced by a node (renormalization). The renormalized nodes are connected if there is at least one link between the unrenormalized boxes. This procedure is repeated until the network collapses to one node. Each of these boxes has an effective mass (the number of nodes in it) which can be used as shown above to measure the fractal dimension of the network.
The plot shows the invariance of the degree distribution "P"("k") under the renormalization performed as a function of the box size on the World Wide Web. The networks are also invariant under multiple renormalizations applied for a fixed box size "l""B". This invariance suggests that the networks are self-similar on multiple length scales.
Skeleton and fractal scaling.
The fractal properties of the network can be seen in its underlying tree structure. In this view, the network consists of the skeleton and the shortcuts. The skeleton is a special type of spanning tree, formed by the edges having the highest betweenness centralities, and the remaining edges in the network are shortcuts.
If the original network is scale-free, then its skeleton also follows a power-law degree distribution, where the degree exponent can be different from that of the original network. For the fractal networks following fractal scaling, each skeleton shows fractal scaling similar to that of the original network. The number of boxes to cover the skeleton is almost the same as the number needed to cover the network.
Real-world fractal networks.
Since fractal networks and their skeletons follow the relation
formula_14,
a network can be classified as fractal or not and the fractal dimension can be found. For example, the WWW, the human brain, the metabolic network, the protein interaction network (PIN) of "H. sapiens", and the PIN of "S. cerevisiae" are considered fractal networks. Furthermore, the fractal dimensions measured are formula_15 for these networks respectively. On the other hand, the Internet, the actor network, and artificial models (for instance, the BA model) do not show fractal properties.
Other definitions for network dimensions.
The best definition of dimension for a complex network or graph depends on the application. For example, metric dimension is defined in terms of the resolving set for a graph. Definitions based on the scaling property of the "mass" as defined above with distance,
or based on the complex network zeta function have also been studied. | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "\\left\\langle l\\right\\rangle\\sim\\ln{N}"
},
{
"math_id": 2,
"text": "l"
},
{
"math_id": 3,
"text": "N\\sim e^{\\left\\langle l\\right\\rangle/l_0}"
},
{
"math_id": 4,
"text": "l_0"
},
{
"math_id": 5,
"text": "N_B"
},
{
"math_id": 6,
"text": "l_B"
},
{
"math_id": 7,
"text": "d_B"
},
{
"math_id": 8,
"text": "N_B\\sim l_B^{-d_B}"
},
{
"math_id": 9,
"text": "\\left\\langle M_B\\left(l_B\\right)\\right\\rangle"
},
{
"math_id": 10,
"text": "\\left\\langle M_B\\left(l_B\\right)\\right\\rangle \\sim l_B^{d_B}"
},
{
"math_id": 11,
"text": "d_f"
},
{
"math_id": 12,
"text": "\\left\\langle M_C\\right\\rangle \\sim l^{d_f}"
},
{
"math_id": 13,
"text": "\\left\\langle M_C\\right\\rangle"
},
{
"math_id": 14,
"text": "\\left\\langle M_B\\left(l_B\\right)\\right\\rangle\\sim l_B^{d_B}"
},
{
"math_id": 15,
"text": "d_B = 4.1,\\mbox{ } 3.7,\\mbox{ } 3.4,\\mbox{ } 2.0, \\mbox{ and } 1.8"
}
] | https://en.wikipedia.org/wiki?curid=8477639 |
84777 | Fingerprint | Biometric identifier
A fingerprint is an impression left by the friction ridges of a human finger. The recovery of partial fingerprints from a crime scene is an important method of forensic science. Moisture and grease on a finger result in fingerprints on surfaces such as glass or metal. Deliberate impressions of entire fingerprints can be obtained by ink or other substances transferred from the peaks of friction ridges on the skin to a smooth surface such as paper. Fingerprint records normally contain impressions from the pad on the last joint of fingers and thumbs, though fingerprint cards also typically record portions of lower joint areas of the fingers.
Human fingerprints are detailed, unique, difficult to alter, and durable over the life of an individual, making them suitable as long-term markers of human identity. They may be employed by police or other authorities to identify individuals who wish to conceal their identity, or to identify people who are incapacitated or deceased and thus unable to identify themselves, as in the aftermath of a natural disaster.
Their use as evidence has been challenged by academics, judges and the media. There are no uniform standards for point-counting methods, and academics have argued that the error rate in matching fingerprints has not been adequately studied and that fingerprint evidence has no secure statistical foundation. Research has been conducted into whether experts can objectively focus on feature information in fingerprints without being misled by extraneous information, such as context.
Biology.
Fingerprints are impressions left on surfaces by the friction ridges on the finger of a human. The matching of two fingerprints is among the most widely used and most reliable biometric techniques. Fingerprint matching considers only the obvious features of a fingerprint.
The composition of fingerprints consists of water (95%-99%), as well as organic and inorganic constituents. The organic component is made up of amino acids, proteins, glucose, lactase, urea, pyruvate, fatty acids and sterols. Inorganic ions such as chloride, sodium, potassium and iron are also present. Other contaminants such as oils found in cosmetics, drugs and their metabolites and food residues may be found in fingerprint residues.
A friction ridge is a raised portion of the epidermis on the digits (fingers and toes), the palm of the hand or the sole of the foot, consisting of one or more connected ridge units of friction ridge skin. These are sometimes known as "epidermal ridges" which are caused by the underlying interface between the dermal papillae of the dermis and the interpapillary (rete) pegs of the epidermis. These unique features are formed at around the 15th week of fetal development and remain until after death, when decomposition begins. During the development of the fetus, around the 13th week of a pregnancy, ledge-like formation is formed at the bottom of the epidermis beside the dermis. The cells along these ledges begin to rapidly proliferate. This rapid proliferation forms primary and secondary ridges. Both the primary and secondary ridges act as a template for the outer layer of the skin to form the friction ridges seen on the surface of the skin.
These epidermal ridges serve to amplify vibrations triggered, for example, when fingertips brush across an uneven surface, better transmitting the signals to sensory nerves involved in fine texture perception. These ridges may also assist in gripping rough surfaces and may improve surface contact in wet conditions.
Genetics.
Consensus within the scientific community suggests that the patterns on fingertips are hereditary. The fingerprint patterns between monozygotic twins have been shown to be very similar (though not identical), whereas dizygotic twins have considerably less similarity. Significant heritability has been identified for 12 dermatoglyphic characteristics. Current models of dermatoglyphic trait inheritance suggest Mendelian transmission with additional effects from either additive or dominant major genes.
Whereas genes determine the general characteristics of patterns and their type, the presence of environmental factors result in the slight differentiation of each fingerprint. However, the relative influences of genetic and environmental effects on fingerprint patterns are generally unclear. One study has suggested that roughly 5% of the total variability is due to small environmental effects, although this was only performed using total ridge count as a metric. Several models of finger ridge formation mechanisms that lead to the vast diversity of fingerprints have been proposed. One model suggests that a buckling instability in the basal cell layer of the fetal epidermis is responsible for developing epidermal ridges. Additionally, blood vessels and nerves may also serve a role in the formation of ridge configurations. Another model indicates that changes in amniotic fluid surrounding each developing finger within the uterus cause corresponding cells on each fingerprint to grow in different microenvironments. For a given individual, these various factors affect each finger differently, preventing two fingerprints from being identical while still retaining similar patterns.
It is important to note that the determination of fingerprint inheritance is made difficult by the vast diversity of phenotypes. Classification of a specific pattern is often subjective (lack of consensus on the most appropriate characteristic to measure quantitatively) which complicates analysis of dermatoglyphic patterns. Several modes of inheritance have been suggested and observed for various fingerprint patterns. Total fingerprint ridge count, a commonly used metric of fingerprint pattern size, has been suggested to have a polygenic mode of inheritance and is influenced by multiple additive genes. This hypothesis has been challenged by other research, however, which indicates that ridge counts on individual fingers are genetically independent and lack evidence to support the existence of additive genes influencing pattern formation. Another mode of fingerprint pattern inheritance suggests that the arch pattern on the thumb and on other fingers are inherited as an autosomal dominant trait. Further research on the arch pattern has suggested that a major gene or multifactorial inheritance is responsible for arch pattern heritability. A separate model for the development of the whorl pattern indicates that a single gene or group of linked genes contributes to its inheritance. Furthermore, inheritance of the whorl pattern does not appear to be symmetric in that the pattern is seemingly randomly distributed among the ten fingers of a given individual. In general, comparison of fingerprint patterns between left and right hands suggests an asymmetry in the effects of genes on fingerprint patterns, although this observation requires further analysis.
In addition to proposed models of inheritance, specific genes have been implicated as factors in fingertip pattern formation (their exact mechanism of influencing patterns is still under research). Multivariate linkage analysis of finger ridge counts on individual fingers revealed linkage to chromosome 5q14.1 specifically for the ring, index, and middle fingers. In mice, variants in the gene EVI1 were correlated with dermatoglyphic patterns. EVI1 expression in humans does not directly influence fingerprint patterns but does affect limb and digit formation which in turn may play a role in influencing fingerprint patterns. Genome-wide association studies found single nucleotide polymorphisms within the gene ADAMTS9-AS2 on 3p14.1, which appeared to have an influence on the whorl pattern on all digits. This gene encodes antisense RNA which may inhibit ADAMTS9, which is expressed in the skin. A model of how genetic variants of ADAMTS9-AS2 directly influence whorl development has not yet been proposed.
In February 2023, a study identified WNT, BMP and EDAR as signaling pathways regulating the formation of primary ridges on fingerprints, with the first two having an opposite relationship established by a Turing reaction-diffusion system.
Classification systems.
Before computerization, manual filing systems were used in large fingerprint repositories. A fingerprint classification system groups fingerprints according to their characteristics and therefore helps in the matching of a fingerprint against a large database of fingerprints. A query fingerprint that needs to be matched can therefore be compared with a subset of fingerprints in an existing database. Early classification systems were based on the general ridge patterns, including the presence or absence of circular patterns, of several or all fingers. This allowed the filing and retrieval of paper records in large collections based on friction ridge patterns alone. The most popular systems used the pattern class of each finger to form a numeric key to assist lookup in a filing system. Fingerprint classification systems included the Roscher System, the Juan Vucetich System and the Henry Classification System. The Roscher System was developed in Germany and implemented in both Germany and Japan. The Vucetich System was developed in Argentina and implemented throughout South America. The Henry Classification System was developed in India and implemented in most English-speaking countries.
In the Henry Classification System, there are three basic fingerprint patterns: loop, whorl, and arch, which constitute 60–65 percent, 30–35 percent, and 5 percent of all fingerprints respectively. There are also more complex classification systems that break down patterns even further, into plain arches or tented arches, and into loops that may be radial or ulnar, depending on the side of the hand toward which the tail points. Ulnar loops start on the pinky-side of the finger, the side closer to the ulna, the lower arm bone. Radial loops start on the thumb-side of the finger, the side closer to the radius. Whorls may also have sub-group classifications including plain whorls, accidental whorls, double loop whorls, peacock's eye, composite, and central pocket loop whorls.
The "primary classification number" in the Henry Classification System is a fraction whose numerator and denominator are whole numbers between 1 and 32 inclusive, thus classifying each set of ten fingerprints into one of 1024 groups. (To distinguish these groups, the fraction is "not" reduced by dividing out any common factors.) The fraction is determined by ten indicators, one for each finger, an indicator taking the value 1 when that finger has a whorl, and 0 otherwise. These indicators can be written
formula_0 for the right hand and
formula_1 for the left hand, where the subscripts are "t" for thumb, "i" for index finger, "m" for middle finger, "r" for ring finger and "l" for little finger. The formula for the fraction is then as follows:
formula_2
For example, if only the right ring finger and the left index finger have whorls, then the set of fingerprints is classified into the "9/3" group:
formula_3
Note that although 9/3 = 3/1, the "9/3" group is different from the "3/1" group, as the latter corresponds to having whorls only on the left middle finger.
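The primary classification can be computed with a short sketch (names illustrative); the finger weights 16, 8, 4, 2 and 1 follow the standard Henry pairing and are consistent with the two worked examples above:
<syntaxhighlight lang="python">
def henry_primary(right, left):
    """Unreduced primary classification (numerator, denominator) from whorl
    indicators (1 = whorl, 0 = otherwise); `right` and `left` are dicts with
    keys 't', 'i', 'm', 'r', 'l' for thumb, index, middle, ring and little."""
    numerator   = 16 * right['i'] + 8 * right['r'] + 4 * left['t'] + 2 * left['m'] + left['l'] + 1
    denominator = 16 * right['t'] + 8 * right['m'] + 4 * right['l'] + 2 * left['i'] + left['r'] + 1
    return numerator, denominator

# Whorls only on the right ring finger and the left index finger give the "9/3" group:
right = {'t': 0, 'i': 0, 'm': 0, 'r': 1, 'l': 0}
left  = {'t': 0, 'i': 1, 'm': 0, 'r': 0, 'l': 0}
print(henry_primary(right, left))   # (9, 3)
</syntaxhighlight>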
Fingerprint identification.
Fingerprint identification, known as dactyloscopy, ridgeology, or hand print identification, is the process of comparing two instances of friction ridge skin impressions (see minutiae), from human fingers or toes, or even the palm of the hand or sole of the foot, to determine whether these impressions could have come from the same individual. The flexibility and the randomized formation of the friction ridges on skin means that no two finger or palm prints are ever exactly alike in every detail; even two impressions recorded immediately after each other from the same hand may be slightly different. Fingerprint identification, also referred to as individualization, involves an expert, or an expert computer system operating under threshold scoring rules, determining whether two friction ridge impressions are likely to have originated from the same finger or palm (or toe or sole).
In 2024, research using deep learning neural networks found, contrary to "prevailing assumptions", that fingerprints from different fingers of the same person could be identified as belonging to that individual with 99.99% confidence. Further, features used in traditional methods were nonpredictive in such identification, while ridge orientation, particularly near the center of the fingerprint, provided the most information.
An intentional recording of friction ridges is usually made with black printer's ink rolled across a contrasting white background, typically a white card. Friction ridges can also be recorded digitally, usually on a glass plate, using a technique called live scan. A "latent print" is the chance recording of friction ridges deposited on the surface of an object or a wall. Latent prints are invisible to the naked eye, whereas "patent prints" or "plastic prints" are viewable with the unaided eye. Latent prints are often fragmentary and require the use of chemical methods, powder, or alternative light sources in order to be made clear. Sometimes an ordinary bright flashlight will make a latent print visible.
When friction ridges come into contact with a surface that will take a print, material that is on the friction ridges such as perspiration, oil, grease, ink, or blood, will be transferred to the surface. Factors which affect the quality of friction ridge impressions are numerous. Pliability of the skin, deposition pressure, slippage, the material from which the surface is made, the roughness of the surface, and the substance deposited are just some of the various factors which can cause a latent print to appear differently from any known recording of the same friction ridges. Indeed, the conditions surrounding every instance of friction ridge deposition are unique and never duplicated. For these reasons, fingerprint examiners are required to undergo extensive training. The scientific study of fingerprints is called dermatoglyphics.
Fingerprinting techniques.
Exemplar.
Exemplar prints, or known prints, is the name given to fingerprints deliberately collected from a subject, whether for purposes of enrollment in a system or when under arrest for a suspected criminal offense. During criminal arrests, a set of exemplar prints will normally include one print taken from each finger that has been rolled from one edge of the nail to the other, plain (or slap) impressions of each of the four fingers of each hand, and plain impressions of each thumb. Exemplar prints can be collected using live scan or by using ink on paper cards.
Latent.
In forensic science, a partial fingerprint lifted from a surface is called a "latent fingerprint". Moisture and grease on fingers result in latent fingerprints on surfaces such as glass. But because they are not clearly visible, their detection may require chemical development through powder dusting, the spraying of ninhydrin, iodine fuming, or soaking in silver nitrate. Depending on the surface or the material on which a latent fingerprint has been found, different methods of chemical development must be used. Forensic scientists use different techniques for porous surfaces, such as paper, and nonporous surfaces, such as glass, metal or plastic. Nonporous surfaces require the dusting process, where fine powder and a brush are used, followed by the application of transparent tape to lift the latent fingerprint off the surface.
While the police often describe all partial fingerprints found at a crime scene as latent prints, forensic scientists call partial fingerprints that are readily visible "patent prints". Chocolate, toner, paint or ink on fingers will result in patent fingerprints. Fingerprint impressions that are found in soft material, such as soap, cement or plaster, are called "plastic prints" by forensic scientists.
Capture and detection.
Live scan devices.
Fingerprint image acquisition is considered to be the most critical step in an automated fingerprint authentication system, as it determines the final fingerprint image quality, which has a drastic effect on the overall system performance. There are different types of fingerprint readers on the market, but the basic idea behind each is to measure the physical difference between ridges and valleys.
All the proposed methods can be grouped into two major families: solid-state fingerprint readers and optical fingerprint readers. The procedure for capturing a fingerprint using a sensor consists of rolling or touching with the finger onto a sensing area, which according to the physical principle in use (optical, ultrasonic, capacitive, or thermal) captures the difference between valleys and ridges. When a finger touches or rolls onto a surface, the elastic skin deforms. The quantity and direction of the pressure applied by the user, the skin conditions and the projection of an irregular 3D object (the finger) onto a 2D flat plane introduce distortions, noise, and inconsistencies in the captured fingerprint image. These problems result in inconsistent and non-uniform irregularities in the image. During each acquisition, therefore, the results of the imaging are different and uncontrollable. The representation of the same fingerprint changes every time the finger is placed on the sensor plate, increasing the complexity of any attempt to match fingerprints, impairing the system performance and consequently, limiting the widespread use of this biometric technology.
In order to overcome these problems, as of 2010, non-contact or touchless 3D fingerprint scanners have been developed. Acquiring detailed 3D information, 3D fingerprint scanners take a digital approach to the analog process of pressing or rolling the finger. By modelling the distance between neighboring points, the fingerprint can be imaged at a resolution high enough to record all the necessary detail.
Fingerprinting on cadavers.
The human skin itself, which is a regenerating organ until death, and environmental factors such as lotions and cosmetics, pose challenges when fingerprinting a human. Following the death of a human, the skin dries and cools. Fingerprints of dead humans may be obtained during an autopsy.
The collection of fingerprints from a cadaver can be done in varying ways and depends on the condition of the skin. In the case of a cadaver in the later stages of decomposition with dried skin, analysts will boil the skin to recondition and rehydrate it, allowing moisture to flow back into the skin and restoring detailed friction ridges. Another method that has been used is brushing a powder, such as baby powder, over the tips of the fingers. The powder embeds itself into the furrows of the friction ridges, allowing the lifted ridges to be seen.
Latent fingerprint detection.
In the 1930s, criminal investigators in the United States first discovered the existence of latent fingerprints on the surfaces of fabrics, most notably on the insides of gloves discarded by perpetrators.
Since the late nineteenth century, fingerprint identification methods have been used by police agencies around the world to identify suspected criminals as well as the victims of crime. The basis of the traditional fingerprinting technique is simple. The skin on the palmar surface of the hands and feet forms ridges, so-called papillary ridges, in patterns that are unique to each individual and which do not change over time. Even identical twins (who share their DNA) do not have identical fingerprints. The best way to render latent fingerprints visible, so that they can be photographed, can be complex and may depend, for example, on the type of surfaces on which they have been left. It is generally necessary to use a "developer", usually a powder or chemical reagent, to produce a high degree of visual contrast between the ridge patterns and the surface on which a fingerprint has been deposited.
Developing agents depend on the presence of organic materials or inorganic salts for their effectiveness, although the water deposited may also take a key role. Fingerprints are typically formed from the aqueous-based secretions of the eccrine glands of the fingers and palms with additional material from sebaceous glands primarily from the forehead. This latter contamination results from the common human behaviors of touching the face and hair. The resulting latent fingerprints consist usually of a substantial proportion of water with small traces of amino acids and chlorides mixed with a fatty, sebaceous component which contains a number of fatty acids and triglycerides. Detection of a small proportion of reactive organic substances such as urea and amino acids is far from easy.
Fingerprints at a crime scene may be detected by simple powders, or by chemicals applied "in situ". More complex techniques, usually involving chemicals, can be applied in specialist laboratories to appropriate articles removed from a crime scene. With advances in these more sophisticated techniques, some of the more advanced crime scene investigation services from around the world were, as of 2010, reporting that 50% or more of the fingerprints recovered from a crime scene had been identified as a result of laboratory-based techniques.
Forensic laboratories.
Although there are hundreds of reported techniques for fingerprint detection, many of these are only of academic interest and there are only around 20 really effective methods which are currently in use in the more advanced fingerprint laboratories around the world.
Some of these techniques, such as ninhydrin, diazafluorenone and vacuum metal deposition, show great sensitivity and are used operationally. Some fingerprint reagents are specific, for example ninhydrin or diazafluorenone reacting with amino acids. Others such as ethyl cyanoacrylate polymerisation, work apparently by water-based catalysis and polymer growth. Vacuum metal deposition using gold and zinc has been shown to be non-specific, but can detect fat layers as thin as one molecule.
More mundane methods, such as the application of fine powders, work by adhesion to sebaceous deposits and possibly aqueous deposits in the case of fresh fingerprints. The aqueous component of a fingerprint, while initially sometimes making up over 90% of the weight of the fingerprint, can evaporate quite quickly and may have mostly gone after 24 hours. Following work on the use of argon ion lasers for fingerprint detection, a wide range of fluorescence techniques have been introduced, primarily for the enhancement of chemically developed fingerprints; the inherent fluorescence of some latent fingerprints may also be detected. Fingerprints can for example be visualized in 3D and without chemicals by the use of infrared lasers.
A comprehensive manual of the operational methods of fingerprint enhancement was last published by the UK Home Office Scientific Development Branch in 2013 and is used widely around the world.
A technique proposed in 2007 aims to identify an individual's ethnicity, sex, and dietary patterns.
Limitations and implications in a forensic context.
One of the main limitations of friction ridge impression evidence at the collection stage is the surface environment, in particular how porous the surface bearing the impression is. On non-porous surfaces, the residues of the impression are not absorbed into the material, but they can be smudged by contact with another surface. On porous surfaces, the residues are absorbed into the surface. Either situation can result in an impression of no value to examiners or in the destruction of the friction ridge impression.
The ability of analysts to correctly and positively identify friction ridge patterns and their features depends heavily on the clarity of the impression. The analysis of friction ridges is therefore limited by that clarity.
In a court context, many have argued that friction ridge identification and ridgeology should be classified as opinion evidence and not as fact, therefore should be assessed as such. Many have said that friction ridge identification is only legally admissible today because during the time when it was added to the legal system, the admissibility standards were quite low. There are only a limited number of studies that have been conducted to help confirm the science behind this identification process.
Crime scene investigations.
The application of the new scanning Kelvin probe (SKP) fingerprinting technique, which makes no physical contact with the fingerprint and does not require the use of developers, has the potential to allow fingerprints to be recorded while still leaving intact material that could subsequently be subjected to DNA analysis. A forensically usable prototype was under development at Swansea University during 2010, in research that was generating significant interest from the British Home Office and a number of different police forces across the UK, as well as internationally. The hope is that this instrument could eventually be manufactured in sufficiently large numbers to be widely used by forensic teams worldwide.
Detection of drug use.
The secretions, skin oils and dead cells in a human fingerprint contain residues of various chemicals and their metabolites present in the body. These can be detected and used for forensic purposes. For example, the fingerprints of tobacco smokers contain traces of cotinine, a nicotine metabolite; they also contain traces of nicotine itself. Caution should be used, as its presence may be caused by mere contact of the finger with a tobacco product. By treating the fingerprint with gold nanoparticles with attached cotinine antibodies, and then subsequently with a fluorescent agent attached to cotinine antibodies, the fingerprint of a smoker becomes fluorescent; non-smokers' fingerprints stay dark. The same approach, as of 2010, is being tested for use in identifying heavy coffee drinkers, cannabis smokers, and users of various other drugs.
Police force databases.
Most American law enforcement agencies use Wavelet Scalar Quantization (WSQ), a wavelet-based system for efficient storage of compressed fingerprint images at 500 pixels per inch (ppi). WSQ was developed by the FBI, the Los Alamos National Lab, and the National Institute of Standards and Technology (NIST). For fingerprints recorded at 1000 ppi spatial resolution, law enforcement (including the FBI) uses JPEG 2000 instead of WSQ.
Validity.
Fingerprints collected at a crime scene, or on items of evidence from a crime, have been used in forensic science to identify suspects, victims and other persons who touched a surface. Fingerprint identification emerged as an important system within police agencies in the late 19th century, when it replaced anthropometric measurements as a more reliable method for identifying persons having a prior record, often under a false name, in a criminal record repository. Fingerprinting has served all governments worldwide during the past 100 years or so to provide identification of criminals. Fingerprints are the fundamental tool in every police agency for the identification of people with a criminal history.
The validity of forensic fingerprint evidence has been challenged by academics, judges and the media. In the United States fingerprint examiners have not developed uniform standards for the identification of an individual based on matching fingerprints. In some countries where fingerprints are also used in criminal investigations, fingerprint examiners are required to match a number of "identification points" before a match is accepted. In England 16 identification points are required and in France 12, to match two fingerprints and identify an individual. Point-counting methods have been challenged by some fingerprint examiners because they focus solely on the location of particular characteristics in fingerprints that are to be matched. Fingerprint examiners may also uphold the "one dissimilarity doctrine", which holds that if there is one dissimilarity between two fingerprints, the fingerprints are not from the same finger. Furthermore, academics have argued that the error rate in matching fingerprints has not been adequately studied and it has even been argued that fingerprint evidence has no secure statistical foundation. Research has been conducted into whether experts can objectively focus on feature information in fingerprints without being misled by extraneous information, such as context.
Fingerprints can theoretically be forged and planted at crime scenes.
Professional certification.
Fingerprinting was the basis upon which the first forensic professional organization was formed, the International Association for Identification (IAI), in 1915. The first professional certification program for forensic scientists was established in 1977, the IAI's Certified Latent Print Examiner program, which issued certificates to those meeting stringent criteria and had the power to revoke certification where an individual's performance warranted it. Other forensic disciplines have followed suit and established their own certification programs.
History.
Antiquity and the medieval period.
Fingerprints have been found on ancient clay tablets, seals, and pottery. They have also been found on the walls of Egyptian tombs and on Minoan, Greek, and Chinese pottery. In ancient China officials authenticated government documents with their fingerprints. In about 200 BC, fingerprints were used to sign written contracts in Babylon. Fingerprints from 3D-scans of cuneiform tablets are extracted using the GigaMesh Software Framework.
With the advent of silk and paper in China, parties to a legal contract impressed their handprints on the document. Sometime before 851 CE, an Arab merchant in China, Abu Zayd Hasan, witnessed Chinese merchants using fingerprints to authenticate loans.
References from the age of the Babylonian king Hammurabi (reigned 1792–1750 BCE) indicate that law officials would take the fingerprints of people who had been arrested. During China's Qin dynasty, records have shown that officials took hand prints and foot prints as well as fingerprints as evidence from a crime scene. In 650, the Chinese historian Kia Kung-Yen remarked that fingerprints could be used as a means of authentication. In his "Jami al-Tawarikh" (Universal History), the Iranian physician Rashid-al-Din Hamadani (1247–1318) refers to the Chinese practice of identifying people via their fingerprints, commenting: "Experience shows that no two individuals have fingers exactly alike."
Whether these examples indicate that ancient peoples realized that fingerprints could uniquely identify individuals has been debated, with some arguing these examples are no more meaningful than an illiterate's mark on a document or an accidental remnant akin to a potter's mark on their clay.
Europe in the 17th and 18th centuries.
From the late 16th century onwards, European academics attempted to include fingerprints in scientific studies. But plausible conclusions could be established only from the mid-17th century onwards. In 1686, the professor of anatomy at the University of Bologna Marcello Malpighi identified ridges, spirals and loops in fingerprints left on surfaces. In 1788, a German anatomist Johann Christoph Andreas Mayer was the first European to conclude that fingerprints were unique to each individual.
19th century.
In 1823, Jan Evangelista Purkyně identified nine fingerprint patterns. The nine patterns include the tented arch, the loop, and the whorl, which in modern-day forensics are considered ridge details. In 1840, following the murder of Lord William Russell, a provincial doctor, Robert Blake Overton, wrote to Scotland Yard suggesting checking for fingerprints. In 1853, the German anatomist Georg von Meissner (1829–1905) studied friction ridges, and in 1858, Sir William James Herschel initiated fingerprinting in India. In 1877, he first instituted the use of fingerprints on contracts and deeds to prevent the repudiation of signatures in Hooghly near Kolkata and he registered government pensioners' fingerprints to prevent the collection of money by relatives after a pensioner's death.
In 1880, Henry Faulds, a Scottish surgeon in a Tokyo hospital, published his first paper on the usefulness of fingerprints for identification and proposed a method to record them with printing ink. Henry Faulds also suggested, based on his studies, that fingerprints are unique to a human. Returning to Great Britain in 1886, he offered the concept to the Metropolitan Police in London but it was dismissed at that time. Up until the early 1890s, police forces in the United States and on the European continent could not reliably identify criminals to track their criminal record. Francis Galton published a detailed statistical model of fingerprint analysis and identification in his 1892 book "Finger Prints". He had calculated that the chance of a "false positive" (two different individuals having the same fingerprints) was about 1 in 64 billion. In 1892, Juan Vucetich, an Argentine chief police officer, created the first method of recording the fingerprints of individuals on file. In that same year, Francisca Rojas was found in a house with neck injuries, while her two sons were found dead with their throats cut. Rojas accused a neighbour, but despite brutal interrogation, this neighbour would not confess to the crimes. Inspector Álvarez, a colleague of Vucetich, went to the scene and found a bloody thumb mark on a door. When it was compared with Rojas' prints, it was found to be identical with her right thumb. She then confessed to the murder of her sons. This was the first known murder case to be solved using fingerprint analysis.
In Kolkata, a fingerprint Bureau was established in 1897, after the Council of the Governor General approved a committee report that fingerprints should be used for the classification of criminal records. The bureau employees Azizul Haque and Hem Chandra Bose have been credited with the primary development of a fingerprint classification system eventually named after their supervisor, Sir Edward Richard Henry.
20th century.
The French scientist Paul-Jean Coulier developed a method to transfer latent fingerprints on surfaces to paper using iodine fuming. It allowed the London Scotland Yard to start fingerprinting individuals and identify criminals using fingerprints in 1901. Soon after, American police departments adopted the same method and fingerprint identification became a standard practice in the United States. The Scheffer case of 1902 is the first case of the identification, arrest, and conviction of a murderer based upon fingerprint evidence. Alphonse Bertillon identified the thief and murderer Scheffer, who had previously been arrested and his fingerprints filed some months before, from the fingerprints found on a fractured glass showcase, after a theft in a dentist's apartment where the dentist's employee was found dead. It was able to be proved in court that the fingerprints had been made after the showcase was broken.
The identification of individuals through fingerprints for law enforcement has been considered essential in the United States since the beginning of the 20th century. Body identification using fingerprints has also been valuable in the aftermath of natural disasters and anthropogenic hazards. In the United States, the FBI manages a fingerprint identification system and database called the Integrated Automated Fingerprint Identification System (IAFIS), which currently holds the fingerprints and criminal records of over 51 million criminal record subjects and over 1.5 million civil (non-criminal) fingerprint records. OBIM, formerly U.S. VISIT, holds the largest repository of biometric identifiers in the U.S. government at over 260 million individual identities. When it was deployed in 2004, this repository, known as the Automated Biometric Identification System (IDENT), stored biometric data in the form of two-finger records. Between 2005 and 2009, the DHS transitioned to a ten-print record standard in order to establish interoperability with IAFIS.
In 1910, Edmond Locard established the first forensic lab in France. Criminals may wear gloves to avoid leaving fingerprints. However, the gloves themselves can leave prints that are as unique as human fingerprints. After collecting glove prints, law enforcement can match them to gloves that they have collected as evidence or to prints collected at other crime scenes. In many jurisdictions the act of wearing gloves itself while committing a crime can be prosecuted as an inchoate offense.
Use of fingerprints in schools.
The non-governmental organization (NGO) Privacy International in 2002 made the cautionary announcement that tens of thousands of UK school children were being fingerprinted by schools, often without the knowledge or consent of their parents. That same year, the supplier Micro Librarian Systems, which uses a technology similar to that used in US prisons and the German military, estimated that 350 schools throughout Britain were using such systems to replace library cards. By 2007, it was estimated that 3,500 schools were using such systems. Under the United Kingdom Data Protection Act, schools in the UK do not have to ask parental consent to allow such practices to take place. Parents opposed to fingerprinting may bring only individual complaints against schools. In response to a complaint which they are continuing to pursue, in 2010, the European Commission expressed 'significant concerns' over the proportionality and necessity of the practice and the lack of judicial redress, indicating that the practice may break the European Union data protection directive.
In March 2007, the UK government was considering fingerprinting all children aged 11 to 15 and adding the prints to a government database as part of a new passport and ID card scheme and disallowing opposition for privacy concerns. All fingerprints taken would be cross-checked against prints from 900,000 unsolved crimes. Shadow Home secretary David Davis called the plan "sinister". The Liberal Democrat home affairs spokesman Nick Clegg criticised "the determination to build a surveillance state behind the backs of the British people". The UK's junior education minister Lord Adonis defended the use of fingerprints by schools, to track school attendance as well as access to school meals and libraries, and reassured the House of Lords that the children's fingerprints had been taken with the consent of the parents and would be destroyed once children left the school. An Early Day Motion which called on the UK Government to conduct a full and open consultation with stakeholders about the use of biometrics in schools, secured the support of 85 Members of Parliament (Early Day Motion 686). Following the establishment in the United Kingdom of a Conservative and Liberal Democratic coalition government in May 2010, the UK ID card scheme was scrapped.
Serious concerns about the security implications of using conventional biometric templates in schools have been raised by a number of leading IT security experts, one of whom has voiced the opinion that "it is absolutely premature to begin using 'conventional biometrics' in schools". The vendors of biometric systems claim that their products bring benefits to schools such as improved reading skills, decreased wait times in lunch lines and increased revenues. They do not cite independent research to support this view. One education specialist wrote in 2007: "I have not been able to find a single piece of published research which suggests that the use of biometrics in schools promotes healthy eating or improves reading skills amongst children... There is absolutely no evidence for such claims".
The Ottawa Police in Canada have advised parents who fear their children may be kidnapped to fingerprint their children.
Absence or mutilation of fingerprints.
A very rare medical condition, adermatoglyphia, is characterized by the absence of fingerprints. Affected persons have completely smooth fingertips, palms, toes and soles, but no other medical signs or symptoms. A 2011 study indicated that adermatoglyphia is caused by the improper expression of the protein SMARCAD1. The condition has been called "immigration delay disease" by the researchers describing it, because the congenital lack of fingerprints causes delays when affected persons attempt to prove their identity while traveling. Only five families with this condition had been described as of 2011.
People with Naegeli–Franceschetti–Jadassohn syndrome and dermatopathia pigmentosa reticularis, which are both forms of ectodermal dysplasia, also have no fingerprints. Both of these rare genetic syndromes produce other signs and symptoms as well, such as thin, brittle hair.
The anti-cancer medication capecitabine may cause the loss of fingerprints. Swelling of the fingers, such as that caused by bee stings, will in some cases cause the temporary disappearance of fingerprints, though they will return when the swelling recedes.
Since the elasticity of skin decreases with age, many senior citizens have fingerprints that are difficult to capture: the ridges become thicker and the height between the top of the ridge and the bottom of the furrow becomes narrower, so the ridges are less prominent.
Fingerprints can be erased permanently and this can potentially be used by criminals to reduce their chance of conviction. Erasure can be achieved in a variety of ways including simply burning the fingertips, using acids and advanced techniques such as plastic surgery. John Dillinger burned his fingers with acid, but prints taken during a previous arrest and upon death still exhibited almost complete relation to one another.
Fingerprint verification.
Fingerprints can be captured as graphical ridge and valley patterns. Because of their uniqueness and permanence, fingerprints emerged as the most widely used biometric identifier in the 2000s. Automated fingerprint verification systems were developed to meet the needs of law enforcement and their use became more widespread in civilian applications. Despite being deployed more widely, reliable automated fingerprint verification remained a challenge and was extensively researched in the context of pattern recognition and image processing. The uniqueness of a fingerprint can be established by the overall pattern of ridges and valleys, or the logical ridge discontinuities known as minutiae. In the 2000s, minutiae features were considered the most discriminating and reliable feature of a fingerprint. Therefore, the recognition of minutiae features became the most common basis for automated fingerprint verification. The most widely used minutiae features used for automated fingerprint verification were the ridge ending and the ridge bifurcation.
Patterns.
The three basic patterns of fingerprint ridges are the arch, loop, and whorl:
Scientists have found that family members often share the same general fingerprint patterns, leading to the belief that these patterns are inherited.
Fingerprint features.
Features of fingerprint ridges, called "minutiae", include:
Fingerprint sensors.
A fingerprint sensor is an electronic device used to capture a digital image of the fingerprint pattern. The captured image is called a live scan. This live scan is digitally processed to create a biometric template (a collection of extracted features) which is stored and used for matching. Many technologies have been used including optical, capacitive, RF, thermal, piezoresistive, ultrasonic, piezoelectric, and MEMS.
Consumer electronics login authentication.
Since 2000, electronic fingerprint readers have been introduced as consumer electronics security applications. Fingerprint sensors could be used for login authentication and the identification of computer users. However, some less sophisticated sensors have been discovered to be vulnerable to quite simple methods of deception, such as fake fingerprints cast in gels. In 2006, fingerprint sensors gained popularity in the laptop market. Built-in sensors in laptops, such as ThinkPad, VAIO, HP Pavilion and EliteBook laptops, and others also double as motion detectors for document scrolling, like the scroll wheel.
Two of the first smartphone manufacturers to integrate fingerprint recognition into their phones were Motorola with the Atrix 4G in 2011 and Apple with the iPhone 5S on September 10, 2013. One month after, HTC launched the One Max, which also included fingerprint recognition. In April 2014, Samsung released the Galaxy S5, which integrated a fingerprint sensor on the home button.
Following the release of the iPhone 5S model, a group of German hackers announced on September 21, 2013, that they had bypassed Apple's new Touch ID fingerprint sensor by photographing a fingerprint from a glass surface and using that captured image as verification. The spokesman for the group stated: "We hope that this finally puts to rest the illusions people have about fingerprint biometrics. It is plain stupid to use something that you can't change and that you leave everywhere every day as a security token." In September 2015, Apple included a new version of the fingerprint scanner in the iPhone home button with the iPhone 6S. The use of the Touch ID fingerprint scanner was optional and could be configured to unlock the screen or pay for mobile apps purchases. Since December 2015, cheaper smartphones with fingerprint recognition have been released, such as the $100 UMI Fair. Samsung introduced fingerprint sensors to its mid-range A series smartphones in 2014.
By 2017, Hewlett Packard, Asus, Huawei, Lenovo and Apple were using fingerprint readers in their laptops. Synaptics says the SecurePad sensor is now available for OEMs to start building into their laptops. In 2018, Synaptics revealed that their in-display fingerprint sensors would be featured on the new Vivo X21 UD smartphone. This was the first mass-produced fingerprint sensor to be integrated into the entire touchscreen display, rather than as a separate sensor.
Algorithms.
Matching algorithms are used to compare previously stored templates of fingerprints against candidate fingerprints for authentication purposes. In order to do this either the original image must be directly compared with the candidate image or certain features must be compared.
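As a concrete illustration of feature-based comparison, the following Python sketch scores two minutiae sets against each other. It is a simplified, hypothetical example: the minutia format, the tolerances and the assumption that the two prints are already aligned are choices made here for illustration, not the behaviour of any particular operational matcher.
```python
import math

def angle_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two ridge angles (radians)."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def match_score(template, candidate, dist_tol=12.0, angle_tol=math.radians(20)):
    """Fraction of template minutiae that have a close, similarly oriented match.

    Each minutia is a tuple (x, y, theta); the two prints are assumed to be
    already aligned, which real matchers must estimate as part of the comparison.
    """
    if not template:
        return 0.0
    used = set()
    matched = 0
    for x1, y1, t1 in template:
        for j, (x2, y2, t2) in enumerate(candidate):
            if j in used:
                continue
            if math.hypot(x1 - x2, y1 - y2) <= dist_tol and angle_diff(t1, t2) <= angle_tol:
                matched += 1
                used.add(j)
                break
    return matched / len(template)

# Toy example: a slightly perturbed copy of the same three minutiae scores 1.0.
stored = [(100.0, 120.0, 0.50), (150.0, 90.0, 1.20), (80.0, 200.0, 2.90)]
probe  = [(102.0, 118.0, 0.55), (149.0, 93.0, 1.15), (82.0, 198.0, 2.85)]
print(match_score(stored, probe))
```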
Pre-processing.
Pre-processing enhances the quality of an image by filtering and removing extraneous noise. The minutiae-based algorithm is only effective with 8-bit gray scale fingerprint images, because an 8-bit gray image is the basis for converting the image to a 1-bit image with value 1 for ridges and value 0 for furrows. This process allows for enhanced edge detection, so the fingerprint is revealed in high contrast, with the ridges highlighted in black and the furrows in white. To further optimize the quality of the input image, two more steps are required: minutiae extraction and false minutiae removal. Minutiae extraction is carried out by applying a ridge-thinning algorithm that removes redundant pixels from the ridges. The thinned ridges of the fingerprint image are then marked with a unique ID so that further operations can be conducted on them. After minutiae extraction, false minutiae removal is carried out: an insufficient amount of ink or cross-links among the ridges can create false minutiae, which lead to inaccuracy in the fingerprint recognition process.
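A minimal sketch of these pre-processing steps, assuming NumPy and scikit-image are available: it binarizes an 8-bit grayscale image, thins the ridges to a one-pixel-wide skeleton, and flags candidate minutiae with a simplified neighbour-count version of the crossing-number rule (one ridge neighbour marks a ridge ending, three or more mark a bifurcation). False-minutiae removal is omitted, so this is an outline of the idea rather than the full procedure described above.
```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def extract_minutiae(gray: np.ndarray):
    """Return (ridge endings, bifurcations) from an 8-bit grayscale fingerprint image.

    Ridges are assumed to be darker than the background, as in a typical inked print.
    """
    # 1. Binarize: ridge pixels become 1, furrow/background pixels become 0.
    ridges = gray < threshold_otsu(gray)

    # 2. Thin the ridges to a one-pixel-wide skeleton.
    skel = skeletonize(ridges)

    # 3. Count ridge neighbours of every skeleton pixel (simplified crossing-number rule).
    endings, bifurcations = [], []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if not skel[r, c]:
                continue
            neighbours = int(skel[r - 1:r + 2, c - 1:c + 2].sum()) - 1
            if neighbours == 1:
                endings.append((r, c))          # ridge ending
            elif neighbours >= 3:
                bifurcations.append((r, c))     # ridge bifurcation
    return endings, bifurcations

# Toy example: a single dark horizontal "ridge" has two endings and no bifurcations.
img = np.full((32, 32), 255, dtype=np.uint8)
img[16, 4:28] = 0
ends, bifs = extract_minutiae(img)
print(len(ends), len(bifs))
```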
Pattern-based (or image-based) algorithms.
Pattern based algorithms compare the basic fingerprint patterns (arch, whorl, and loop) between a previously stored template and a candidate fingerprint. This requires that the images can be aligned in the same orientation. To do this, the algorithm finds a central point in the fingerprint image and centers on that. In a pattern-based algorithm, the template contains the type, size, and orientation of patterns within the aligned fingerprint image. The candidate fingerprint image is graphically compared with the template to determine the degree to which they match.
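A minimal sketch of the image-based idea, assuming the two images are equal-sized and already centred on a common reference point: the degree of match is taken to be the normalized cross-correlation of the pixel arrays, so a noisy copy of the same print scores close to 1.0 and an unrelated print scores close to 0.
```python
import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equal-sized, pre-aligned grayscale images."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

rng = np.random.default_rng(0)
template = rng.integers(0, 256, size=(64, 64)).astype(float)
same = template + rng.normal(0, 5, size=template.shape)    # noisy copy of the same print
other = rng.integers(0, 256, size=(64, 64)).astype(float)  # unrelated print

print(round(normalized_correlation(template, same), 3))    # close to 1.0
print(round(normalized_correlation(template, other), 3))   # close to 0.0
```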
In other species.
Some other animals have evolved their own unique prints, especially those whose lifestyle involves climbing or grasping wet objects; these include many primates, such as gorillas and chimpanzees, Australian koalas, and aquatic mammal species such as the North American fisher. According to one study, even with an electron microscope, it can be quite difficult to distinguish between the fingerprints of a koala and a human.
In fiction.
Mark Twain.
Mark Twain's memoir "Life on the Mississippi" (1883), notable mainly for its account of the author's time on the river, also recounts parts of his later life and includes tall tales and stories allegedly told to him. Among them is an involved, melodramatic account of a murder in which the killer is identified by a thumbprint. Twain's novel "Pudd'nhead Wilson", published in 1893, includes a courtroom drama that turns on fingerprint identification.
Crime fiction.
The use of fingerprints in crime fiction has, of course, kept pace with its use in real-life detection. Sir Arthur Conan Doyle wrote a short story about his celebrated sleuth Sherlock Holmes which features a fingerprint: "The Norwood Builder" is a 1903 short story set in 1894 and involves the discovery of a bloody fingerprint which helps Holmes to expose the real criminal and free his client.
The British detective writer R. Austin Freeman's first Thorndyke novel "The Red Thumb-Mark" was published in 1907 and features a bloody fingerprint left on a piece of paper together with a parcel of diamonds inside a safe-box. These become the center of a medico-legal investigation led by Dr. Thorndyke, who defends the accused whose fingerprint matches that on the paper, after the diamonds are stolen.
Film and television.
In the television series "Bonanza" (1959–1973), the Chinese character Hop Sing uses his knowledge of fingerprints to free Little Joe from a murder charge.
The 1997 movie "Men in Black" required Agent J to remove his ten fingerprints by putting his hands on a metal ball, an action deemed necessary by the MIB agency to remove the identity of its agents.
In the 2009 science fiction movie "Cold Souls", a mule who smuggles souls wears latex fingerprints to frustrate airport security terminals. She can change her identity by simply changing her wig and latex fingerprints.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R_t, R_i, R_m, R_r, R_l"
},
{
"math_id": 1,
"text": "L_t, L_i, L_m, L_r, L_l"
},
{
"math_id": 2,
"text": "{16R_i+8R_r+4L_t+2L_m+1L_l + 1 \\over 16R_t+8R_m+4R_l+2L_i+1L_r + 1}."
},
{
"math_id": 3,
"text": "{16(0)+8(1)+4(0)+2(0)+1(0) + 1 \\over 16(0)+8(0)+4(0)+2(1)+1(0) + 1} = {9\\over3}."
}
] | https://en.wikipedia.org/wiki?curid=84777 |
8478563 | Mersenne conjectures | Mathematical conjectures about Mersenne primes
In mathematics, the Mersenne conjectures concern the characterization of a kind of prime numbers called Mersenne primes, meaning prime numbers that are a power of two minus one.
Original Mersenne conjecture.
The original, called Mersenne's conjecture, was a statement by Marin Mersenne in his "Cogitata Physico-Mathematica" (1644; see e.g. Dickson 1919) that the numbers formula_0 were prime for "n" = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127 and 257, and were composite for all other positive integers "n" ≤ 257. The first seven entries of his list (formula_0 for "n" = 2, 3, 5, 7, 13, 17, 19) had already been proven to be primes by trial division before Mersenne's time; only the last four entries were new claims by Mersenne. Due to the size of those last numbers, Mersenne did not and could not test all of them, nor could his peers in the 17th century. It was eventually determined, after three centuries and the availability of new techniques such as the Lucas–Lehmer test, that Mersenne's conjecture contained five errors, namely two entries are composite (those corresponding to the primes "n" = 67, 257) and three primes are missing (those corresponding to the primes "n" = 61, 89, 107). The correct list for "n" ≤ 257 is: "n" = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107 and 127.
While Mersenne's original conjecture is false, it may have led to the New Mersenne conjecture.
New Mersenne conjecture.
The New Mersenne conjecture or Bateman, Selfridge and Wagstaff conjecture (Bateman et al. 1989) states that for any odd natural number "p", if any two of the following conditions hold, then so does the third: (1) "p" = 2"k" ± 1 or "p" = 4"k" ± 3 for some natural number "k"; (2) 2"p" − 1 is prime (a Mersenne prime); (3) (2"p" + 1)/3 is prime (a Wagstaff prime).
If "p" is an odd composite number, then 2"p" − 1 and (2"p" + 1)/3 are both composite. Therefore it is only necessary to test primes to verify the truth of the conjecture.
Currently, there are nine known numbers for which all three conditions hold: 3, 5, 7, 13, 17, 19, 31, 61, 127 (sequence in the OEIS). Bateman et al. expected that no number greater than 127 satisfies all three conditions, and showed that heuristically no greater number would even satisfy two conditions, which would make the New Mersenne Conjecture trivially true.
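For small exponents the three conditions are easy to check directly. The following Python sketch (an illustration, not part of the conjecture's literature) uses the Lucas–Lehmer test for the Mersenne condition and SymPy's general primality test for the Wagstaff condition; run over the odd primes below 200 it reproduces exactly the nine exponents listed above.
```python
from sympy import isprime, primerange

def shape_ok(p: int) -> bool:
    """Condition 1: p = 2^k +/- 1 or p = 4^k +/- 3 for some natural number k."""
    k = 1
    while 2**k - 1 <= p or 4**k - 3 <= p:
        if p in (2**k - 1, 2**k + 1, 4**k - 3, 4**k + 3):
            return True
        k += 1
    return False

def is_mersenne_prime_exponent(p: int) -> bool:
    """Condition 2: 2^p - 1 is prime (Lucas-Lehmer test, valid for odd prime p)."""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def is_wagstaff_prime_exponent(p: int) -> bool:
    """Condition 3: (2^p + 1)/3 is prime."""
    return isprime((2**p + 1) // 3)

all_three = [p for p in primerange(3, 200)
             if shape_ok(p) and is_mersenne_prime_exponent(p) and is_wagstaff_prime_exponent(p)]
print(all_three)   # [3, 5, 7, 13, 17, 19, 31, 61, 127]
```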
As of 2024, all the Mersenne primes up to 2^57885161 − 1 are known, and for none of these does the third condition hold except for the ones just mentioned.
Primes which satisfy at least one condition are
2, 3, 5, 7, 11, 13, 17, 19, 23, 31, 43, 61, 67, 79, 89, 101, 107, 127, 167, 191, 199, 257, 313, 347, 521, 607, 701, 1021, 1279, 1709, 2203, 2281, 2617, 3217, 3539, 4093, 4099, 4253, 4423, 5807, 8191, 9689, 9941, ... (sequence in the OEIS)
Note that the two primes for which the original Mersenne conjecture is false (67 and 257) satisfy the first condition of the new conjecture (67 = 2^6 + 3 = 4^3 + 3, 257 = 2^8 + 1), but not the other two. 89 and 107, which were missed by Mersenne, satisfy the second condition but not the other two. Mersenne may have thought that 2"p" − 1 is prime only if "p" = 2"k" ± 1 or "p" = 4"k" ± 3 for some natural number "k", but if he thought it was "if and only if" he would have included 61.
The New Mersenne conjecture can be thought of as an attempt to salvage the centuries-old Mersenne's conjecture, which is false. However, according to Robert D. Silverman, John Selfridge agreed that the New Mersenne conjecture is "obviously true" as it was chosen to fit the known data and counter-examples beyond those cases are exceedingly unlikely. It may be regarded more as a curious observation than as an open question in need of proving.
Prime Pages shows that the New Mersenne conjecture is true for all integers less than or equal to 30402457 by systematically listing all primes for which it is already known that one of the conditions holds.
Lenstra–Pomerance–Wagstaff conjecture.
Lenstra, Pomerance, and Wagstaff have conjectured that there are infinitely many Mersenne primes, and, more precisely, that the number of Mersenne primes less than "x" is asymptotically approximated by
formula_1
where γ is the Euler–Mascheroni constant.
In other words, the number of Mersenne primes with exponent "p" less than "y" is asymptotically
formula_2
This means that there should on average be about formula_3 ≈ 5.92 primes "p" of a given number of decimal digits such that formula_4 is prime. The conjecture is fairly accurate for the first 40 Mersenne primes, but between 2^20,000,000 and 2^85,000,000 there are at least 12, rather than the expected number which is around 3.7.
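The two numbers quoted in this paragraph follow directly from the conjectured formula and can be reproduced in a few lines of Python:
```python
import math

gamma = 0.5772156649015329      # Euler-Mascheroni constant

# Expected number of exponents p per factor of 10 in p (the "5.92" above):
print(round(math.exp(gamma) * math.log2(10), 2))

# Expected number of Mersenne-prime exponents between 20,000,000 and 85,000,000
# (the range where at least 12 are actually known):
expected = math.exp(gamma) * (math.log2(85e6) - math.log2(20e6))
print(round(expected, 1))       # about 3.7
```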
More generally, the number of primes "p" ≤ "y" such that formula_5 is prime (where "a", "b" are coprime integers, "a" > 1, −"a" < "b" < "a", "a" and "b" are not both perfect "r"-th powers for any natural number "r" > 1, and −4"ab" is not a perfect fourth power) is asymptotically
formula_6
where "m" is the largest nonnegative integer such that "a" and −"b" are both perfect 2"m"-th powers. The case of Mersenne primes is one case of ("a", "b") = (2, 1).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2^n - 1"
},
{
"math_id": 1,
"text": "e^\\gamma\\cdot\\log_2 \\log_2(x),"
},
{
"math_id": 2,
"text": "e^\\gamma\\cdot\\log_2(y)."
},
{
"math_id": 3,
"text": "e^\\gamma\\cdot\\log_2(10)"
},
{
"math_id": 4,
"text": "M_p"
},
{
"math_id": 5,
"text": "\\frac{a^p-b^p}{a-b}"
},
{
"math_id": 6,
"text": "(e^\\gamma+m\\cdot\\log_e(2))\\cdot\\log_a(y)."
}
] | https://en.wikipedia.org/wiki?curid=8478563 |
847879 | Age of the universe | Time elapsed since the Big Bang
In physical cosmology, the age of the universe is the time elapsed since the Big Bang. Astronomers have derived two different measurements of the age of the universe: a measurement based on direct observations of an early state of the universe, which indicate an age of about 13.8 billion years as interpreted with the Lambda-CDM concordance model as of 2021; and a measurement based on observations of the local, modern universe, which suggest a younger age. The uncertainty of the first kind of measurement has been narrowed down to 20 million years, based on a number of studies that all show similar figures for the age. These include studies of the cosmic microwave background radiation by the "Planck" spacecraft, the Wilkinson Microwave Anisotropy Probe and other space probes. Measurements of the cosmic background radiation give the cooling time of the universe since the Big Bang, and measurements of the expansion rate of the universe can be used to calculate its approximate age by extrapolating backwards in time. The range of the estimate is also within the range of the estimate for the oldest observed star in the universe.
History.
In the 18th century, the concept that the age of Earth was millions, if not billions, of years began to appear. Nonetheless, most scientists throughout the 19th century and into the first decades of the 20th century presumed that the universe itself was steady state and eternal, possibly with stars coming and going but no changes occurring at the largest scale known at the time.
The first scientific theories indicating that the age of the universe might be finite were the studies of thermodynamics, formalized in the mid-19th century. The concept of entropy dictates that if the universe (or any other closed system) were infinitely old, then everything inside would be at the same temperature, and thus there would be no stars and no life. No scientific explanation for this contradiction was put forth at the time.
In 1915 Albert Einstein published the theory of general relativity and in 1917 constructed the first cosmological model based on his theory. In order to remain consistent with a steady-state universe, Einstein added what was later called a cosmological constant to his equations. Einstein's model of a static universe was proved unstable by Arthur Eddington.
The first direct observational hint that the universe was not static but expanding came from the observations of 'recession velocities', mostly by Vesto M. Slipher, combined with distances to the 'nebulae' (galaxies) by Edwin Hubble in a work published in 1929. Earlier in the 20th century, Hubble and others resolved individual stars within certain nebulae, thus determining that they were galaxies, similar to, but external to, the Milky Way Galaxy. In addition, these galaxies were very large and very far away. Spectra taken of these distant galaxies showed a red shift in their spectral lines presumably caused by the Doppler effect, thus indicating that these galaxies were moving away from the Earth. In addition, the farther away these galaxies seemed to be (the dimmer they appeared) the greater was their redshift, and thus the faster they seemed to be moving away. This was the first direct evidence that the universe is not static but expanding. The first estimate of the age of the universe came from the calculation of when all of the objects must have started speeding out from the same point. Hubble's initial value for the universe's age was very low, as the galaxies were assumed to be much closer than later observations found them to be.
The first reasonably accurate measurement of the rate of expansion of the universe, a numerical value now known as the Hubble constant, was made in 1958 by astronomer Allan Sandage. His measured value for the Hubble constant came very close to the value range generally accepted today.
Sandage, like Einstein, did not believe his own results at the time of discovery. Sandage proposed new theories of cosmogony to explain this discrepancy. This issue was more or less resolved by improvements in the theoretical models used for estimating the ages of stars. As of 2024, using the latest models for stellar evolution, the estimated age of the oldest known star is billion years.
The discovery of cosmic microwave background radiation announced in 1965 finally brought an effective end to the remaining scientific uncertainty over the expanding universe. It was a chance result from work by two teams less than 60 miles apart. In 1964, Arno Penzias and Robert Woodrow Wilson were trying to detect radio wave echoes with a supersensitive antenna. The antenna persistently detected a low, steady, mysterious noise in the microwave region that was evenly spread over the sky, and was present day and night. After testing, they became certain that the signal did not come from the Earth, the Sun, or the Milky Way galaxy, but from outside the Milky Way, but could not explain it. At the same time another team, Robert H. Dicke, Jim Peebles, and David Wilkinson, were attempting to detect low level noise that might be left over from the Big Bang and could prove whether the Big Bang theory was correct. The two teams realized that the detected noise was in fact radiation left over from the Big Bang, and that this was strong evidence that the theory was correct. Since then, a great deal of other evidence has strengthened and confirmed this conclusion, and refined the estimated age of the universe to its current figure.
The space probes WMAP, launched in 2001, and Planck, launched in 2009, produced data that determines the Hubble constant and the age of the universe independent of galaxy distances, removing the largest source of error.
Explanation.
The Lambda-CDM concordance model describes the evolution of the universe from a very uniform, hot, dense primordial state to its present state over a span of about 13.77 billion years of cosmological time. This model is well understood theoretically and strongly supported by recent high-precision astronomical observations such as WMAP. In contrast, theories of the origin of the primordial state remain very speculative.
If one extrapolates the Lambda-CDM model backward from the earliest well-understood state, it quickly (within a small fraction of a second) reaches a singularity. This is known as the "initial singularity" or the "Big Bang singularity". This singularity is not understood as having a physical significance in the usual sense, but it is convenient to quote times measured "since the Big Bang" even though they do not correspond to a time that can actually be physically measured.
Though the universe might in theory have a longer history, the International Astronomical Union presently uses the term "age of the universe" to mean the duration of the Lambda-CDM expansion, or equivalently, the time elapsed within the currently observable universe since the Big Bang.
In July 2023, a study published in the "Monthly Notices of the Royal Astronomical Society" journal put the age of the Universe as 26.7 billion years. The author Rajendra Gupta shows a new model that stretches the galaxy formation time by several billion years, leading to the conclusion that the age of the universe is roughly twice as long as thought. Using Zwicky's tired light theory and "coupling constants" as described by Paul Dirac, Gupta writes that the recent James Webb Space Telescope observations are in strong tension with existing cosmological models. Gupta says about his new theory: "It thus resolves the 'impossible early galaxy' problem without requiring the existence of primordial black hole seeds or modified power spectrum."
Observational limits.
Since the universe must be at least as old as the oldest things in it, there are a number of observations that put a lower limit on the age of the universe; these include
Cosmological parameters.
The problem of determining the age of the universe is closely tied to the problem of determining the values of the cosmological parameters. Today this is largely carried out in the context of the ΛCDM model, where the universe is assumed to contain normal (baryonic) matter, cold dark matter, radiation (including both photons and neutrinos), and a cosmological constant.
The fractional contribution of each to the current energy density of the universe is given by the density parameters formula_1 formula_2 and formula_0 The full ΛCDM model is described by a number of other parameters, but for the purpose of computing its age these three, along with the Hubble parameter formula_3, are the most important.
If one has accurate measurements of these parameters, then the age of the universe can be determined by using the Friedmann equation. This equation relates the rate of change in the scale factor formula_4 to the matter content of the universe. Turning this relation around, we can calculate the change in time per change in scale factor and thus calculate the total age of the universe by integrating this formula. The age formula_5 is then given by an expression of the form
formula_6
where formula_3 is the Hubble parameter and the function formula_7 depends only on the fractional contribution to the universe's energy content that comes from various components. The first observation that one can make from this formula is that it is the Hubble parameter that controls the age of the universe, with a correction arising from the matter and energy content. So a rough estimate of the age of the universe comes from the Hubble time, the inverse of the Hubble parameter. With a value for formula_3 around 68 km/s/Mpc, the Hubble time evaluates to formula_8 14.4 billion years.
To get a more accurate number, the correction function formula_7 must be computed. In general this must be done numerically, and the results for a range of cosmological parameter values are shown in the figure. For the Planck values formula_9(0.3086, 0.6914), shown by the box in the upper left corner of the figure, this correction factor is about formula_10 For a flat universe without any cosmological constant, shown by the star in the lower right corner, formula_11 is much smaller and thus the universe is younger for a fixed value of the Hubble parameter. To make this figure, formula_12 is held constant (roughly equivalent to holding the cosmic microwave background temperature constant) and the curvature density parameter is fixed by the value of the other three.
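As an illustration of the age formula, the following Python sketch evaluates the integral numerically for a flat ΛCDM universe. The parameter values are representative Planck-like numbers chosen here for illustration rather than the exact published fit; with Ωm ≈ 0.31, ΩΛ ≈ 0.69 and H0 ≈ 67.7 km/s/Mpc it yields a correction factor close to the 0.956 quoted above and an age of roughly 13.8 billion years.
```python
import numpy as np
from scipy.integrate import quad

# Illustrative, Planck-like parameters (not the exact published fit).
H0 = 67.7                                   # Hubble parameter in km/s/Mpc
omega_m, omega_r = 0.31, 9e-5               # matter and radiation density parameters
omega_l = 1.0 - omega_m - omega_r           # flat universe: densities sum to 1

km_per_Mpc = 3.0857e19
seconds_per_Gyr = 3.156e16
H0_per_Gyr = H0 / km_per_Mpc * seconds_per_Gyr   # H0 expressed in 1/Gyr

def dt_da(a: float) -> float:
    """Integrand of the age integral, in units of the Hubble time."""
    return 1.0 / (a * np.sqrt(omega_r / a**4 + omega_m / a**3 + omega_l))

# Integrate from just above a = 0 to a = 1 (today); the lower cutoff avoids a
# division by zero and contributes a negligible amount to the age.
F, _ = quad(dt_da, 1e-10, 1.0)

hubble_time = 1.0 / H0_per_Gyr              # about 14.4 Gyr for this H0
print(round(F, 3), round(F * hubble_time, 1))   # about 0.955 and about 13.8 Gyr
```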
Apart from the Planck satellite, the Wilkinson Microwave Anisotropy Probe (WMAP) was instrumental in establishing an accurate age of the universe, though other measurements must be folded in to gain an accurate number. CMB measurements are very good at constraining the matter content formula_1 and curvature parameter formula_13 It is not as sensitive to formula_14 directly, partly because the cosmological constant becomes important only at low redshift. The most accurate determinations of the Hubble parameter formula_3 are currently believed to come from measured brightnesses and redshifts of distant Type Ia supernovae. Combining these measurements leads to the generally accepted value for the age of the universe quoted above.
The cosmological constant makes the universe "older" for fixed values of the other parameters. This is significant, since before the cosmological constant became generally accepted, the Big Bang model had difficulty explaining why globular clusters in the Milky Way appeared to be far older than the age of the universe as calculated from the Hubble parameter and a matter-only universe. Introducing the cosmological constant allows the universe to be older than these clusters, as well as explaining other features that the matter-only cosmological model could not.
WMAP.
NASA's Wilkinson Microwave Anisotropy Probe (WMAP) project's nine-year data release in 2012 estimated the age of the universe to be 13.772 billion years, with an uncertainty of plus or minus 59 million years.
This age is based on the assumption that the project's underlying model is correct; other methods of estimating the age of the universe could give different ages. Assuming an extra background of relativistic particles, for example, can enlarge the error bars of the WMAP constraint by one order of magnitude.
This measurement is made by using the location of the first acoustic peak in the microwave background power spectrum to determine the size of the decoupling surface (size of the universe at the time of recombination). The light travel time to this surface (depending on the geometry used) yields a reliable age for the universe. Assuming the validity of the models used to determine this age, the residual accuracy yields a margin of error near one per cent.
Planck.
In 2015, the Planck Collaboration estimated the age of the universe to be about 13.8 billion years, slightly higher than, but within the uncertainties of, the earlier number derived from the WMAP data.
In the table below, figures are within 68% confidence limits for the base ΛCDM model.
In 2018, the Planck Collaboration updated its estimate for the age of the universe to about 13.8 billion years.
Assumption of strong priors.
Calculating the age of the universe is accurate only if the assumptions built into the models being used to estimate it are also accurate. This is referred to as strong priors and essentially involves stripping the potential errors in other parts of the model to render the accuracy of actual observational data directly into the concluded result. This is not a valid procedure in all contexts, as noted in the accompanying caveat: "on the assumption that the project's underlying model is correct". The age given is thus accurate to the specified error, since this represents the error in the instrument used to gather the raw data input into the model.
The age of the universe based on the best fit to Planck 2018 data alone is about 13.8 billion years. This number represents an accurate "direct" measurement of the age of the universe, in contrast to other methods that typically involve Hubble's law and the age of the oldest stars in globular clusters. It is possible to use different methods for determining the same parameter (in this case, the age of the universe) and arrive at different answers with no overlap in the "errors". To best avoid the problem, it is common to show two sets of uncertainties; one related to the actual measurement and the other related to the systematic errors of the model being used.
An important component to the analysis of data used to determine the age of the universe (e.g. from Planck) therefore is to use a Bayesian statistical analysis, which normalizes the results based upon the priors (i.e. the model). This quantifies any uncertainty in the accuracy of a measurement due to a particular model used.
References.
<templatestyles src="Reflist/styles.css" />
External links.
<templatestyles src="Div col/styles.css"/>* | [
{
"math_id": 0,
"text": "~\\Omega_\\Lambda~."
},
{
"math_id": 1,
"text": "~\\Omega_\\text{m}~,"
},
{
"math_id": 2,
"text": "~\\Omega_\\text{r}~,"
},
{
"math_id": 3,
"text": "~H_0~"
},
{
"math_id": 4,
"text": "~a(t)~"
},
{
"math_id": 5,
"text": "~t_0~"
},
{
"math_id": 6,
"text": "t_0 = \\frac{1}{H_0} \\, F (\\,\\Omega_\\text{r},\\,\\Omega_\\text{m},\\,\\Omega_\\Lambda,\\,\\dots\\,)~"
},
{
"math_id": 7,
"text": "~F~"
},
{
"math_id": 8,
"text": "~1/H_0 =~"
},
{
"math_id": 9,
"text": "~(\\Omega_\\text{m}, \\Omega_\\Lambda) =~"
},
{
"math_id": 10,
"text": "~F = 0.956 ~."
},
{
"math_id": 11,
"text": "~F = {2}/{3}~"
},
{
"math_id": 12,
"text": "~\\Omega_\\text{r}~"
},
{
"math_id": 13,
"text": "~\\Omega_\\text{k}~."
},
{
"math_id": 14,
"text": "~\\Omega_\\Lambda~"
}
] | https://en.wikipedia.org/wiki?curid=847879 |
848067 | Reduction (complexity) | Transformation of one computational problem to another
In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A sufficiently efficient reduction from one problem to another may be used to show that the second problem is at least as difficult as the first.
Intuitively, problem "A" is reducible to problem "B" if an algorithm for solving problem "B" efficiently (if it existed) could also be used as a subroutine to solve problem "A" efficiently. When this is true, solving "A" cannot be harder than solving "B". "Harder" means having a higher estimate of the required computational resources in a given context (e.g., higher time complexity, greater memory requirement, expensive need for extra hardware processor cores for a parallel solution compared to a single-threaded solution, etc.). The existence of a reduction from "A" to "B" can be written in the shorthand notation "A" ≤m "B", usually with a subscript on the ≤ to indicate the type of reduction being used (m : mapping reduction, p : polynomial reduction).
The mathematical structure generated on a set of problems by the reductions of a particular type generally forms a preorder, whose equivalence classes may be used to define degrees of unsolvability and complexity classes.
Introduction.
There are two main situations where we need to use reductions:
A very simple example of a reduction is from "multiplication" to "squaring". Suppose all we know how to do is to add, subtract, take squares, and divide by two. We can use this knowledge, combined with the following formula, to obtain the product of any two numbers:
formula_0
We also have a reduction in the other direction; obviously, if we can multiply two numbers, we can square a number. This seems to imply that these two problems are equally hard. This kind of reduction corresponds to Turing reduction.
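The reduction just described can be written out explicitly. The sketch below multiplies two integers using only addition, subtraction, squaring and halving, via the identity a·b = ((a + b)^2 − a^2 − b^2)/2, one standard way of realizing the formula referred to above (the exact identity stated in the article may differ slightly):
```python
def multiply(a: int, b: int) -> int:
    """Multiply two integers using only addition, subtraction, squaring and halving."""
    def square(x: int) -> int:
        # The problem we are reducing to: assume this subroutine is available.
        return x * x

    # a*b = ((a + b)^2 - a^2 - b^2) / 2, and the numerator is always even.
    return (square(a + b) - square(a) - square(b)) // 2

print(multiply(6, 7))    # 42
print(multiply(-3, 5))   # -15
```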
However, the reduction becomes much harder if we add the restriction that we can only use the squaring function one time, and only at the end. In this case, even if we're allowed to use all the basic arithmetic operations, including multiplication, no reduction exists in general, because in order to get the desired result as a square we have to compute its square root first, and this square root could be an irrational number like formula_1 that cannot be constructed by arithmetic operations on rational numbers. Going in the other direction, however, we can certainly square a number with just one multiplication, only at the end. Using this limited form of reduction, we have shown the unsurprising result that multiplication is harder in general than squaring. This corresponds to many-one reduction.
Properties.
A reduction is a preordering, that is a reflexive and transitive relation, on P(N)×P(N), where P(N) is the power set of the natural numbers.
Types and applications of reductions.
As described in the example above, there are two main types of reductions used in computational complexity, the many-one reduction and the Turing reduction. Many-one reductions map "instances" of one problem to "instances" of another; Turing reductions "compute" the solution to one problem, assuming the other problem is easy to solve. The many-one reduction is a stronger type of Turing reduction, and is more effective at separating problems into distinct complexity classes. However, the increased restrictions on many-one reductions make them more difficult to find.
A problem is complete for a complexity class if every problem in the class reduces to that problem, and it is also in the class itself. In this sense the problem represents the class, since any solution to it can, in combination with the reductions, be used to solve every problem in the class.
However, in order to be useful, reductions must be "easy". For example, it's quite possible to reduce a difficult-to-solve NP-complete problem like the boolean satisfiability problem to a trivial problem, like determining if a number equals zero, by having the reduction machine solve the problem in exponential time and output zero only if there is a solution. However, this does not achieve much, because even though we can solve the new problem, performing the reduction is just as hard as solving the old problem. Likewise, a reduction computing a noncomputable function can reduce an undecidable problem to a decidable one. As Michael Sipser points out in "Introduction to the Theory of Computation": "The reduction must be easy, relative to the complexity of typical problems in the class [...] If the reduction itself were difficult to compute, an easy solution to the complete problem wouldn't necessarily yield an easy solution to the problems reducing to it."
Therefore, the appropriate notion of reduction depends on the complexity class being studied. When studying the complexity class NP and harder classes such as the polynomial hierarchy, polynomial-time reductions are used. When studying classes within P such as NC and NL, log-space reductions are used. Reductions are also used in computability theory to show whether problems are or are not solvable by machines at all; in this case, reductions are restricted only to computable functions.
In case of optimization (maximization or minimization) problems, we often think in terms of approximation-preserving reduction. Suppose we have two optimization problems such that instances of one problem can be mapped onto instances of the other, in a way that nearly optimal solutions to instances of the latter problem can be transformed back to yield nearly optimal solutions to the former. This way, if we have an optimization algorithm (or approximation algorithm) that finds near-optimal (or optimal) solutions to instances of problem B, and an efficient approximation-preserving reduction from problem A to problem B, by composition we obtain an optimization algorithm that yields near-optimal solutions to instances of problem A. Approximation-preserving reductions are often used to prove hardness of approximation results: if some optimization problem A is hard to approximate (under some complexity assumption) within a factor better than α for some α, and there is a β-approximation-preserving reduction from problem A to problem B, we can conclude that problem B is hard to approximate within factor α/β.
Examples.
Detailed example.
The following example shows how to use reduction from the halting problem to prove that a language is undecidable. Suppose "H"("M", "w") is the problem of determining whether a given Turing machine "M" halts (by accepting or rejecting) on input string "w". This language is known to be undecidable. Suppose "E"("M") is the problem of determining whether the language a given Turing machine "M" accepts is empty (in other words, whether "M" accepts any strings at all). We show that "E" is undecidable by a reduction from "H".
To obtain a contradiction, suppose "R" is a decider for "E". We will use this to produce a decider "S" for "H" (which we know does not exist). Given input "M" and "w" (a Turing machine and some input string), define "S"("M", "w") with the following behavior: "S" creates a Turing machine "N" that accepts only if the input string to "N" is "w" and "M" halts on input "w", and does not halt otherwise. The decider "S" can now evaluate "R"("N") to check whether the language accepted by "N" is empty. If "R" accepts "N", then the language accepted by "N" is empty, so in particular "M" does not halt on input "w", so "S" can reject. If "R" rejects "N", then the language accepted by "N" is nonempty, so "M" does halt on input "w", so "S" can accept. Thus, if we had a decider "R" for "E", we would be able to produce a decider "S" for the halting problem "H"("M", "w") for any machine "M" and input "w". Since we know that such an "S" cannot exist, it follows that the language "E" is also undecidable. | [
{
"math_id": 0,
"text": "a \\times b = \\frac{\\left(\\left(a + b\\right)^{2} - a^{2} - b^{2}\\right)}{2}"
},
{
"math_id": 1,
"text": "\\sqrt{2}"
}
] | https://en.wikipedia.org/wiki?curid=848067 |
8481594 | Co-orbital configuration | Configuration of two or more astronomical objects
In astronomy, a co-orbital configuration is a configuration of two or more astronomical objects (such as asteroids, moons, or planets) orbiting at the same, or very similar, distance from their primary; i.e., they are in a 1:1 mean-motion resonance (or 1:-1 if orbiting in opposite directions).
There are several classes of co-orbital objects, depending on their point of libration. The most common and best-known class is the trojan, which librates around one of the two stable Lagrangian points (Trojan points), L4 and L5, 60° ahead of and behind the larger body respectively. Another class is the horseshoe orbit, in which objects librate around 180° from the larger body. Objects librating around 0° are called quasi-satellites.
An exchange orbit occurs when two co-orbital objects are of similar masses and thus exert a non-negligible influence on each other. The objects can exchange semi-major axes or eccentricities when they approach each other.
Parameters.
Orbital parameters that are used to describe the relation of co-orbital objects are the longitude of the periapsis difference and the mean longitude difference. The mean longitude is the sum of the longitude of the periapsis and the mean anomaly formula_0, and the longitude of the periapsis is the sum of the longitude of the ascending node and the argument of periapsis formula_1.
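As a small illustration, these two derived angles can be formed from the standard orbital elements as follows (a hypothetical Python helper, with all angles in degrees; it is not taken from any reference implementation):

```python
def longitude_of_periapsis(Omega, omega):
    # varpi = Omega + omega (longitude of ascending node + argument of periapsis)
    return (Omega + omega) % 360

def mean_longitude(Omega, omega, M):
    # lambda = varpi + M (longitude of periapsis + mean anomaly)
    return (longitude_of_periapsis(Omega, omega) + M) % 360

# For a co-orbital pair, the quantities of interest are the differences
# delta_lambda and delta_varpi between the two bodies.
```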
Trojans.
Trojan objects orbit 60° ahead of (L4) or behind (L5) a more massive object, both in orbit around an even more massive central object. The best known examples are the large population of asteroids that orbit ahead of or behind Jupiter around the Sun. Trojan objects do not orbit exactly at one of either Lagrangian points, but do remain relatively close to it, appearing to slowly orbit it. In technical terms, they librate around formula_2 = (±60°, ±60°). The point around which they librate is the same, irrespective of their mass or orbital eccentricity.
Trojan minor planets.
There are several thousand known trojan minor planets orbiting the Sun. Most of these orbit near Jupiter's Lagrangian points, the traditional Jupiter trojans. As of 2015, there are also 13 Neptune trojans, 7 Mars trojans, 2 Uranus trojans ( and ), and 2 Earth trojans ( and (614689) 2020 XL5 ) that are known to exist. No Saturnian trojans have been observed.
Trojan moons.
The Saturnian system contains two sets of trojan moons. Both Tethys and Dione have two trojan moons each, Telesto and Calypso in Tethys's L4 and L5 respectively, and Helene and Polydeuces in Dione's L4 and L5 respectively.
Polydeuces is noticeable for its wide libration: it wanders as far as ±30° from its Lagrangian point and ±2% from its mean orbital radius, along a tadpole orbit in 790 days (288 times its orbital period around Saturn, the same as Dione's).
Trojan planets.
A pair of co-orbital exoplanets was proposed to be orbiting the star Kepler-223, but this was later retracted.
The possibility of a trojan planet to Kepler-91b was studied but the conclusion was that the transit-signal was a false-positive.
In April 2023, a group of amateur astronomers reported two new exoplanet candidates co-orbiting , in a horseshoe exchange orbit, close to the star GJ 3470 (this star has been known to have a confirmed planet GJ 3470 b). However, the study is available only as a preprint on arXiv and has not yet been peer reviewed or published in a scientific journal.
In July 2023, the possible detection of a cloud of debris co-orbital with the proto-planet PDS 70 b was announced. This debris cloud could be evidence of a Trojan planetary-mass body or one in the process of forming.
One possibility for the habitable zone is a trojan planet of a giant planet close to its star.
The reason why no trojan planets have been definitively detected could be that tides destabilize their orbits.
Formation of the Earth–Moon system.
According to the giant impact hypothesis, the Moon formed after a collision between two co-orbital objects: Theia, thought to have had about 10% of the mass of Earth (about as massive as Mars), and the proto-Earth. Their orbits were perturbed by other planets, bringing Theia out of its trojan position and causing the collision.
Horseshoe orbits.
Objects in a horseshoe orbit librate around 180° from the primary. Their orbits encompass both equilateral Lagrangian points, i.e. L4 and L5.
Co-orbital moons.
The Saturnian moons Janus and Epimetheus share their orbits, the difference in semi-major axes being less than either's mean diameter. This means the moon with the smaller semi-major axis slowly catches up with the other. As it does this, the moons gravitationally tug at each other, increasing the semi-major axis of the moon that has caught up and decreasing that of the other. This reverses their relative positions proportionally to their masses and causes this process to begin anew with the moons' roles reversed. In other words, they effectively swap orbits, ultimately oscillating both about their mass-weighted mean orbit.
Earth co-orbital asteroids.
A small number of asteroids have been found which are co-orbital with Earth. The first of these to be discovered, asteroid 3753 Cruithne, orbits the Sun with a period slightly less than one Earth year, resulting in an orbit that (from the point of view of Earth) appears as a bean-shaped orbit centered on a position ahead of the position of Earth. This orbit slowly moves further ahead of Earth's orbital position. When Cruithne's orbit moves to a position where it trails Earth's position, rather than leading it, the gravitational effect of Earth increases the orbital period, and hence the orbit then begins to lag, returning to the original location. The full cycle from leading to trailing Earth takes 770 years, leading to a horseshoe-shaped movement with respect to Earth.
More resonant near-Earth objects (NEOs) have since been discovered. These include 54509 YORP, , , , 2009 BD, and which exist in resonant orbits similar to Cruithne's. and are the only two identified Earth trojans.
Hungaria asteroids were found to be one of the possible sources for co-orbital objects of the Earth with a lifetime up to ~58 kyrs.
Quasi-satellite.
Quasi-satellites are co-orbital objects that librate around 0° from the primary. Low-eccentricity quasi-satellite orbits are highly unstable, but for moderate to high eccentricities such orbits can be stable. From a co-rotating perspective the quasi-satellite appears to orbit the primary like a retrograde satellite, although at distances so large that it is not gravitationally bound to it. Two examples of quasi-satellites of the Earth are
and 469219 Kamoʻoalewa.
Exchange orbits.
In addition to swapping semi-major axes like Saturn's moons Epimetheus and Janus, another possibility is to share the same axis, but swap eccentricities instead.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "({\\lambda}= \\varpi + M) "
},
{
"math_id": 1,
"text": "(\\varpi = \\Omega + \\omega) "
},
{
"math_id": 2,
"text": "({\\Delta}{\\lambda}, {\\Delta}\\varpi)"
}
] | https://en.wikipedia.org/wiki?curid=8481594 |
8481660 | Spin qubit quantum computer | Proposed semiconductor implementation of quantum computers
The spin qubit quantum computer is a quantum computer based on controlling the spin of charge carriers (electrons and electron holes) in semiconductor devices. The first spin qubit quantum computer was proposed by Daniel Loss and David P. DiVincenzo in 1997; it is also known as the Loss–DiVincenzo quantum computer. The proposal was to use the intrinsic spin-1/2 degree of freedom of individual electrons confined in quantum dots as qubits. This should not be confused with other proposals that use the nuclear spin as a qubit, like the Kane quantum computer or the nuclear magnetic resonance quantum computer. Intel has developed quantum computers based on silicon spin qubits, also called hot qubits.
Spin qubits so far have been implemented by locally depleting two-dimensional electron gases in semiconductors such as gallium arsenide, silicon, and germanium. Spin qubits have also been implemented in graphene.
Loss–DiVincenzo proposal.
The Loss–DiVincenzo quantum computer proposal tried to fulfill DiVincenzo's criteria for a scalable quantum computer, namely:
- identification of well-defined qubits;
- reliable state preparation;
- low decoherence;
- accurate quantum gate operations; and
- strong quantum measurements.
A candidate for such a quantum computer is a lateral quantum dot system. Earlier work on applications of quantum dots for quantum computing was done by Barenco et al.
Implementation of the two-qubit gate.
The Loss–DiVincenzo quantum computer operates, basically, by using the inter-dot gate voltage to implement swap operations and local magnetic fields (or any other local spin manipulation) to implement the controlled NOT gate (CNOT gate).
The swap operation is achieved by applying a pulsed inter-dot gate voltage, so the exchange constant in the Heisenberg Hamiltonian becomes time-dependent:
formula_0
This description is only valid if:
- the level splitting in the quantum dot formula_1 is much greater than formula_2,
- the pulse time scale formula_3 is greater than formula_4, so that there is no time for transitions to higher orbital levels to occur, and
- the decoherence time formula_5 is longer than formula_6
formula_7 is the Boltzmann constant and formula_8 is the temperature in kelvin.
From the pulsed Hamiltonian follows the time evolution operator
formula_9
where formula_10 is the time-ordering symbol.
We can choose a specific duration of the pulse such that the integral in time over formula_11 gives formula_12 and formula_13 becomes the swap operator formula_14
This pulse run for half the time (with formula_15) results in a square root of swap gate, formula_16
The "XOR" gate may be achieved by combining formula_17 operations with individual spin rotation operations:
formula_18
The formula_19 operator is a conditional phase shift (controlled-Z) for the state in the basis of formula_20. It can be made into a CNOT gate by surrounding the desired target qubit with Hadamard gates.
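The algebra above can be checked numerically. The following NumPy sketch is an illustration, not a device simulation; it sets ħ = 1, follows the pulse-area convention of the time-evolution operator above, builds the square-root-of-swap gate from the exchange Hamiltonian, composes U_XOR as in the formula, and verifies that the result is diagonal in the computational basis (a conditional phase shift). Up to a global phase and further single-qubit z-rotations it is a controlled-Z gate; the exact phases depend on sign conventions.

```python
import numpy as np
from scipy.linalg import expm

# Single-spin operators (hbar = 1) and embeddings on the left/right qubit.
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # raising operator
sm = sp.T                                  # lowering operator
I2 = np.eye(2)

def on_left(op):  return np.kron(op, I2)
def on_right(op): return np.kron(I2, op)

# Heisenberg exchange S_L . S_R written with ladder operators.
heis = (on_left(sz) @ on_right(sz)
        + 0.5 * (on_left(sp) @ on_right(sm) + on_left(sm) @ on_right(sp)))

# Square root of swap: pulse area J0 * tau_s = pi/2 in U = exp(-i J0 tau_s S_L.S_R).
sqrt_swap = expm(-1j * (np.pi / 2) * heis)

# U_XOR = e^{i pi/2 S_L^z} e^{-i pi/2 S_R^z} U_sw^{1/2} e^{i pi S_L^z} U_sw^{1/2}
u_xor = (expm(1j * np.pi / 2 * on_left(sz))
         @ expm(-1j * np.pi / 2 * on_right(sz))
         @ sqrt_swap
         @ expm(1j * np.pi * on_left(sz))
         @ sqrt_swap)

# The result is diagonal in the computational basis, i.e. a conditional phase
# shift; combined with single-qubit z-rotations (and Hadamards on the target)
# it yields a controlled-Z / CNOT.
off_diag = u_xor - np.diag(np.diag(u_xor))
assert np.allclose(off_diag, 0, atol=1e-10)
print(np.round(np.diag(u_xor), 3))
```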
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H_{\\rm s}(t) = J(t)\\mathbf{S}_{\\rm L} \\cdot \\mathbf{S}_{\\rm R} ."
},
{
"math_id": 1,
"text": "\\Delta E "
},
{
"math_id": 2,
"text": "\\; kT "
},
{
"math_id": 3,
"text": "\\tau_{\\rm s} "
},
{
"math_id": 4,
"text": "\\hbar / \\Delta E "
},
{
"math_id": 5,
"text": "\\Gamma ^{-1} "
},
{
"math_id": 6,
"text": "\\tau_{\\rm s}."
},
{
"math_id": 7,
"text": "k"
},
{
"math_id": 8,
"text": "T"
},
{
"math_id": 9,
"text": "U_{\\rm s}(t) = {\\mathcal{T}} \\exp\\left\\{ -i\\int_0^t dt' H_{\\rm s}(t') \\right\\},"
},
{
"math_id": 10,
"text": "{\\mathcal{T}}"
},
{
"math_id": 11,
"text": "J(t)"
},
{
"math_id": 12,
"text": "J_0 \\tau_{\\rm s} = \\pi \\pmod{2\\pi},"
},
{
"math_id": 13,
"text": "U_{\\rm s}"
},
{
"math_id": 14,
"text": "U_{\\rm s} (J_0 \\tau_{\\rm s} = \\pi) \\equiv U_{\\rm sw}."
},
{
"math_id": 15,
"text": "J_0 \\tau_{\\rm s} = \\pi /2"
},
{
"math_id": 16,
"text": "U_{\\rm sw}^{1/2}."
},
{
"math_id": 17,
"text": "U_{\\rm sw}^{1/2}"
},
{
"math_id": 18,
"text": "U_{\\rm XOR} = e^{i\\frac{\\pi}{2}S_{\\rm L}^z}e^{-i\\frac{\\pi}{2}S_{\\rm R}^z}U_{\\rm sw}^{1/2}\ne^{i \\pi S_{\\rm L}^z}U_{\\rm sw}^{1/2}."
},
{
"math_id": 19,
"text": "U_{\\rm XOR}"
},
{
"math_id": 20,
"text": "\\mathbf{S}_{\\rm L} + \\mathbf{S}_{\\rm R}"
}
] | https://en.wikipedia.org/wiki?curid=8481660 |
8483 | Diesel cycle | Engine combustion process
The Diesel cycle is a combustion process of a reciprocating internal combustion engine. In it, fuel is ignited by heat generated during the compression of air in the combustion chamber, into which fuel is then injected. This is in contrast to igniting the fuel-air mixture with a spark plug as in the Otto cycle (four-stroke/petrol) engine. Diesel engines are used in aircraft, automobiles, power generation, diesel–electric locomotives, and both surface ships and submarines.
The Diesel cycle is assumed to have constant pressure during the initial part of the combustion phase (formula_0 to formula_1 in the diagram, below). This is an idealized mathematical model: real physical diesels do have an increase in pressure during this period, but it is less pronounced than in the Otto cycle. In contrast, the idealized Otto cycle of a gasoline engine approximates a constant volume process during that phase.
Idealized Diesel cycle.
The image shows a p–V diagram for the ideal Diesel cycle, where formula_2 is pressure and V the volume, or formula_3 the specific volume if the process is placed on a unit-mass basis. The "idealized" Diesel cycle assumes an ideal gas, ignores combustion chemistry and the exhaust and recharge procedures, and simply follows four distinct processes:
- isentropic compression of the working fluid (bottom, blue);
- constant-pressure (isobaric) heat addition (red);
- isentropic expansion (top, yellow); and
- constant-volume (isochoric) heat rejection (green).
The Diesel engine is a heat engine: it converts heat into work. During the bottom isentropic processes (blue), energy is transferred into the system in the form of work formula_4, but by definition (isentropic) no energy is transferred into or out of the system in the form of heat. During the constant pressure (red, isobaric) process, energy enters the system as heat formula_5. During the top isentropic processes (yellow), energy is transferred out of the system in the form of work formula_6, but by definition (isentropic) no energy is transferred into or out of the system in the form of heat. During the constant volume (green, isochoric) process, some of the energy flows out of the system as heat through the right depressurizing process formula_7. The work that leaves the system is equal to the work that enters the system plus the difference between the heat added to the system and the heat that leaves the system; in other words, the net gain of work is equal to the difference between the heat added to the system and the heat that leaves the system.
The net work produced is also represented by the area enclosed by the cycle on the p–V diagram. The net work is produced per cycle and is also called the useful work, as it can be turned to other useful types of energy and propel a vehicle (kinetic energy) or produce electrical energy. The summation of many such cycles per unit of time is called the developed power. The formula_6 is also called the gross work, some of which is used in the next cycle of the engine to compress the next charge of air.
Maximum thermal efficiency.
The maximum thermal efficiency of a Diesel cycle is dependent on the compression ratio and the cut-off ratio. It has the following formula under cold air standard analysis:
formula_8
where
formula_9 is the thermal efficiency
formula_10 is the cut-off ratio formula_11 (ratio between the end and start volume for the combustion phase)
r is the compression ratio formula_12
formula_13 is the ratio of specific heats (Cp/Cv)
The cut-off ratio can be expressed in terms of temperature as shown below:
formula_14
formula_15
formula_16
formula_17
formula_18 can be approximated by the flame temperature of the fuel used. The flame temperature can in turn be approximated by the adiabatic flame temperature of the fuel at the corresponding air-to-fuel ratio and compression pressure, formula_19.
formula_20 can be approximated by the inlet air temperature.
This formula only gives the ideal thermal efficiency. The actual thermal efficiency will be significantly lower due to heat and friction losses. The formula is more complex than the Otto cycle (petrol/gasoline engine) relation that has the following formula:
formula_21
The additional complexity of the Diesel formula arises because the heat addition occurs at constant pressure while the heat rejection occurs at constant volume. By comparison, the Otto cycle has both the heat addition and the heat rejection at constant volume.
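For illustration, the two ideal-cycle formulas can be evaluated directly in Python; the compression ratios, cut-off ratio, and γ = 1.4 below are example values under the cold air standard, not data for any particular engine.

```python
def diesel_efficiency(r, alpha, gamma=1.4):
    # Ideal Diesel-cycle efficiency for compression ratio r and cut-off ratio alpha.
    return 1.0 - (1.0 / r**(gamma - 1.0)) * (alpha**gamma - 1.0) / (gamma * (alpha - 1.0))

def otto_efficiency(r, gamma=1.4):
    # Ideal Otto-cycle efficiency for compression ratio r.
    return 1.0 - 1.0 / r**(gamma - 1.0)

# Example: a diesel at r = 20 versus a petrol engine knock-limited to r = 10.
print(f"Diesel, r=20, alpha=2: {diesel_efficiency(20, 2):.3f}")   # ~0.647
print(f"Otto,   r=10:          {otto_efficiency(10):.3f}")        # ~0.602
print(f"Otto,   r=20:          {otto_efficiency(20):.3f}")        # ~0.698 (higher at equal r)
```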
Comparing efficiency to Otto cycle.
Comparing the two formulae it can be seen that for a given compression ratio (r), the "ideal" Otto cycle will be more efficient. However, a "real" diesel engine will be more efficient overall since it will have the ability to operate at higher compression ratios. If a petrol engine were to have the same compression ratio, then knocking (self-ignition) would occur and this would severely reduce the efficiency, whereas in a diesel engine, the self ignition is the desired behavior. Additionally, both of these cycles are only idealizations, and the actual behavior does not divide as clearly or sharply. Furthermore, the ideal Otto cycle formula stated above does not include throttling losses, which do not apply to diesel engines.
Applications.
Diesel engines.
Diesel engines have the lowest specific fuel consumption of any large internal combustion engine employing a single cycle, 0.26 lb/hp·h (0.16 kg/kWh) for very large marine engines (combined cycle power plants are more efficient, but employ two engines rather than one). Two-stroke diesels with high pressure forced induction, particularly turbocharging, make up a large percentage of the very largest diesel engines.
In North America, diesel engines are primarily used in large trucks, where the low-stress, high-efficiency cycle leads to much longer engine life and lower operational costs. These advantages also make the diesel engine ideal for use in the heavy-haul railroad and earthmoving environments.
Other internal combustion engines without spark plugs.
Many model airplanes use very simple "glow" and "diesel" engines. Glow engines use glow plugs. "Diesel" model airplane engines have variable compression ratios. Both types depend on special fuels.
Some 19th-century or earlier experimental engines used external flames, exposed by valves, for ignition, but this becomes less attractive with increasing compression. (It was the research of Nicolas Léonard Sadi Carnot that established the thermodynamic value of compression.) A historical implication of this is that the diesel engine could have been invented without the aid of electricity.
See the development of the hot-bulb engine and indirect injection for historical significance.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_2"
},
{
"math_id": 1,
"text": "V_3"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "v"
},
{
"math_id": 4,
"text": "W_{in}"
},
{
"math_id": 5,
"text": "Q_{in}"
},
{
"math_id": 6,
"text": "W_{out}"
},
{
"math_id": 7,
"text": "Q_{out}"
},
{
"math_id": 8,
"text": "\\eta_{th}=1-\\frac{1}{r^{\\gamma-1}}\\left ( \\frac{\\alpha^{\\gamma}-1}{\\gamma(\\alpha-1)} \\right )"
},
{
"math_id": 9,
"text": "\\eta_{th} "
},
{
"math_id": 10,
"text": "\\alpha"
},
{
"math_id": 11,
"text": "\\frac{V_3}{V_2}"
},
{
"math_id": 12,
"text": "\\frac{V_1}{V_2}"
},
{
"math_id": 13,
"text": "\\gamma "
},
{
"math_id": 14,
"text": "\\frac{T_2}{T_1} ={\\left(\\frac{V_1}{V_2}\\right)^{\\gamma-1}} = r^{\\gamma-1}"
},
{
"math_id": 15,
"text": " \\displaystyle {T_2} ={T_1} r^{\\gamma-1} "
},
{
"math_id": 16,
"text": "\\frac{V_3}{V_2} = \\frac{T_3}{T_2}"
},
{
"math_id": 17,
"text": "\\alpha = \\left(\\frac{T_3}{T_1}\\right)\\left(\\frac{1}{r^{\\gamma-1}}\\right)"
},
{
"math_id": 18,
"text": "T_3"
},
{
"math_id": 19,
"text": "p_3"
},
{
"math_id": 20,
"text": "T_1"
},
{
"math_id": 21,
"text": "\\eta_{otto,th}=1-\\frac{1}{r^{\\gamma-1}}"
}
] | https://en.wikipedia.org/wiki?curid=8483 |
84840 | Kite (geometry) | Quadrilateral symmetric across a diagonal
In Euclidean geometry, a kite is a quadrilateral with reflection symmetry across a diagonal. Because of this symmetry, a kite has two equal angles and two pairs of adjacent equal-length sides. Kites are also known as deltoids, but the word "deltoid" may also refer to a deltoid curve, an unrelated geometric object sometimes studied in connection with quadrilaterals. A kite may also be called a dart, particularly if it is not convex.
Every kite is an orthodiagonal quadrilateral (its diagonals are at right angles) and, when convex, a tangential quadrilateral (its sides are tangent to an inscribed circle). The convex kites are exactly the quadrilaterals that are both orthodiagonal and tangential. They include as special cases the right kites, with two opposite right angles; the rhombi, with two diagonal axes of symmetry; and the squares, which are also special cases of both right kites and rhombi.
The quadrilateral with the greatest ratio of perimeter to diameter is a kite, with 60°, 75°, and 150° angles. Kites of two shapes (one convex and one non-convex) form the prototiles of one of the forms of the Penrose tiling. Kites also form the faces of several face-symmetric polyhedra and tessellations, and have been studied in connection with outer billiards, a problem in the advanced mathematics of dynamical systems.
Definition and classification.
A kite is a quadrilateral with reflection symmetry across one of its diagonals. Equivalently, it is a quadrilateral whose four sides can be grouped into two pairs of adjacent equal-length sides. A kite can be constructed from the centers and crossing points of any two intersecting circles. Kites as described here may be either convex or concave, although some sources restrict "kite" to mean only convex kites. A quadrilateral is a kite if and only if any one of the following conditions is true:
- The four sides can be split into two pairs of adjacent equal-length sides.
- One diagonal crosses the midpoint of the other diagonal at a right angle, forming its perpendicular bisector.
- One diagonal is a line of symmetry, dividing the quadrilateral into two congruent triangles that are mirror images of each other.
- One diagonal bisects both of the angles at its two ends.
Kite quadrilaterals are named for the wind-blown, flying kites, which often have this shape and which are in turn named for a hovering bird and the sound it makes. According to Olaus Henrici, the name "kite" was given to these shapes by James Joseph Sylvester.
Quadrilaterals can be classified "hierarchically", meaning that some classes of quadrilaterals include other classes, or "partitionally", meaning that each quadrilateral is in only one class. Classified hierarchically, kites include the rhombi (quadrilaterals with four equal sides) and squares. All equilateral kites are rhombi, and all equiangular kites are squares. When classified partitionally, rhombi and squares would not be kites, because they belong to a different class of quadrilaterals; similarly, the right kites discussed below would not be kites. The remainder of this article follows a hierarchical classification; rhombi, squares, and right kites are all considered kites. By avoiding the need to consider special cases, this classification can simplify some facts about kites.
Like kites, a parallelogram also has two pairs of equal-length sides, but they are opposite to each other rather than adjacent. Any non-self-crossing quadrilateral that has an axis of symmetry must be either a kite, with a diagonal axis of symmetry; or an isosceles trapezoid, with an axis of symmetry through the midpoints of two sides. These include as special cases the rhombus and the rectangle respectively, and the square, which is a special case of both. The self-crossing quadrilaterals include another class of symmetric quadrilaterals, the antiparallelograms.
Special cases.
The right kites have two opposite right angles. The right kites are exactly the kites that are cyclic quadrilaterals, meaning that there is a circle that passes through all their vertices. The cyclic quadrilaterals may equivalently be defined as the quadrilaterals in which two opposite angles are supplementary (they add to 180°); if one pair is supplementary the other is as well. Therefore, the right kites are the kites with two opposite supplementary angles, for either of the two opposite pairs of angles. Because right kites circumscribe one circle and are inscribed in another circle, they are bicentric quadrilaterals (actually tricentric, as they also have a third circle externally tangent to the extensions of their sides). If the sizes of an inscribed and a circumscribed circle are fixed, the right kite has the largest area of any quadrilateral trapped between them.
Among all quadrilaterals, the shape that has the greatest ratio of its perimeter to its diameter (maximum distance between any two points) is an equidiagonal kite with angles 60°, 75°, 150°, 75°. Its four vertices lie at the three corners and one of the side midpoints of the Reuleaux triangle. When an equidiagonal kite has side lengths less than or equal to its diagonals, like this one or the square, it is one of the quadrilaterals with the greatest ratio of area to diameter.
A kite with three 108° angles and one 36° angle forms the convex hull of the lute of Pythagoras, a fractal made of nested pentagrams. The four sides of this kite lie on four of the sides of a regular pentagon, with a golden triangle glued onto the fifth side.
There are only eight polygons that can tile the plane such that reflecting any tile across any one of its edges produces another tile; this arrangement is called an edge tessellation. One of them is a tiling by a right kite, with 60°, 90°, and 120° angles. It produces the deltoidal trihexagonal tiling (see ). A prototile made by eight of these kites tiles the plane only aperiodically, key to a claimed solution of the einstein problem.
In non-Euclidean geometry, a kite can have three right angles and one non-right angle, forming a special case of a Lambert quadrilateral. The fourth angle is acute in hyperbolic geometry and obtuse in spherical geometry.
Properties.
Diagonals, angles, and area.
Every kite is an orthodiagonal quadrilateral, meaning that its two diagonals are at right angles to each other. Moreover, one of the two diagonals (the symmetry axis) is the perpendicular bisector of the other, and is also the angle bisector of the two angles it meets. Because of its symmetry, the other two angles of the kite must be equal. The diagonal symmetry axis of a convex kite divides it into two congruent triangles; the other diagonal divides it into two isosceles triangles.
As is true more generally for any orthodiagonal quadrilateral, the area formula_0 of a kite may be calculated as half the product of the lengths of the diagonals formula_1 and formula_2:
formula_3
Alternatively, the area can be calculated by dividing the kite into two congruent triangles and applying the SAS formula for their area. If formula_4 and formula_5 are the lengths of two sides of the kite, and formula_6 is the angle between them, then the area is
formula_7
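As a quick check of both formulas, here is a small Python sketch on a concrete kite (the coordinates are chosen purely for illustration):

```python
import math

def kite_area_from_diagonals(p, q):
    # A = p*q/2, valid for any orthodiagonal quadrilateral, hence for every kite.
    return p * q / 2

def kite_area_from_sides(a, b, theta):
    # The symmetry axis splits the kite into two congruent triangles with
    # sides a, b and included angle theta; each has area (1/2)*a*b*sin(theta).
    return a * b * math.sin(theta)

# Example kite with vertices (0,0), (2,2), (5,0), (2,-2):
# diagonals p = 5, q = 4; sides a = sqrt(8), b = sqrt(13).
p, q = 5.0, 4.0
a, b = math.sqrt(8), math.sqrt(13)
theta = math.acos(-2 / math.sqrt(104))   # angle between the unequal sides
print(kite_area_from_diagonals(p, q))    # 10.0
print(kite_area_from_sides(a, b, theta)) # 10.0 (same area, up to rounding)
```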
Inscribed circle.
Every "convex" kite is also a tangential quadrilateral, a quadrilateral that has an inscribed circle. That is, there exists a circle that is tangent to all four sides. Additionally, if a convex kite is not a rhombus, there is a circle outside the kite that is tangent to the extensions of the four sides; therefore, every convex kite that is not a rhombus is an ex-tangential quadrilateral. The convex kites that are not rhombi are exactly the quadrilaterals that are both tangential and ex-tangential. For every "concave" kite there exist two circles tangent to two of the sides and the extensions of the other two: one is interior to the kite and touches the two sides opposite from the concave angle, while the other circle is exterior to the kite and touches the kite on the two edges incident to the concave angle.
For a convex kite with diagonal lengths formula_1 and formula_2 and side lengths formula_4 and formula_5, the radius formula_8 of the inscribed circle is formula_9
and the radius formula_10 of the ex-tangential circle is
formula_11
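Continuing the numerical example above (values restated so the snippet is self-contained), the two radii follow directly from these formulas:

```python
import math

# Same example kite as above: vertices (0,0), (2,2), (5,0), (2,-2).
p, q = 5.0, 4.0                      # diagonal lengths
a, b = math.sqrt(8), math.sqrt(13)   # the two distinct side lengths

incircle_radius = p * q / (2 * (a + b))       # r   = pq / (2(a + b))
excircle_radius = p * q / (2 * abs(a - b))    # rho = pq / (2|a - b|)
print(round(incircle_radius, 3))   # ~1.554
print(round(excircle_radius, 3))   # ~12.868
```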
A tangential quadrilateral is also a kite if and only if any one of the following conditions is true:
- The area is one half the product of the diagonals.
- The diagonals are perpendicular (that is, the quadrilateral is both tangential and orthodiagonal).
If the diagonals in a tangential quadrilateral formula_12 intersect at formula_13, and the incircles of triangles formula_14, formula_15, formula_16, formula_17 have radii formula_18, formula_19, formula_20, and formula_21 respectively, then the quadrilateral is a kite if and only if
formula_22
If the excircles to the same four triangles opposite the vertex formula_13 have radii formula_23, formula_24, formula_25, and formula_26 respectively, then the quadrilateral is a kite if and only if
formula_27
Duality.
Kites and isosceles trapezoids are dual to each other, meaning that there is a correspondence between them that reverses the dimension of their parts, taking vertices to sides and sides to vertices. From any kite, the inscribed circle is tangent to its four sides at the four vertices of an isosceles trapezoid. For any isosceles trapezoid, tangent lines to the circumscribing circle at its four vertices form the four sides of a kite. This correspondence can also be seen as an example of polar reciprocation, a general method for corresponding points with lines and vice versa given a fixed circle. Although they do not touch the circle, the four vertices of the kite are reciprocal in this sense to the four sides of the isosceles trapezoid. The features of kites and isosceles trapezoids that correspond to each other under this duality are compared in the table below.
Dissection.
The equidissection problem concerns the subdivision of polygons into triangles that all have equal areas. In this context, the "spectrum" of a polygon is the set of numbers formula_28 such that the polygon has an equidissection into formula_28 equal-area triangles. Because of its symmetry, the spectrum of a kite contains all even integers. Certain special kites also contain some odd numbers in their spectra.
Every triangle can be subdivided into three right kites meeting at the center of its inscribed circle. More generally, a method based on circle packing can be used to subdivide any polygon with formula_28 sides into formula_29 kites, meeting edge-to-edge.
Tilings and polyhedra.
All kites tile the plane by repeated point reflection around the midpoints of their edges, as do more generally all quadrilaterals. Kites and darts with angles 72°, 72°, 72°, 144° and 36°, 72°, 36°, 216°, respectively, form the prototiles of one version of the Penrose tiling, an aperiodic tiling of the plane discovered by mathematical physicist Roger Penrose. When a kite has angles that, at its apex and one side, sum to formula_30 for some positive integer formula_28, then scaled copies of that kite can be used to tile the plane in a fractal rosette in which successively larger rings of formula_28 kites surround a central point. These rosettes can be used to study the phenomenon of inelastic collapse, in which a system of moving particles meeting in inelastic collisions all coalesce at a common point.
A kite with angles 60°, 90°, 120°, 90° can also tile the plane by repeated reflection across its edges; the resulting tessellation, the deltoidal trihexagonal tiling, superposes a tessellation of the plane by regular hexagons and isosceles triangles. The deltoidal icositetrahedron, deltoidal hexecontahedron, and trapezohedron are polyhedra with congruent kite-shaped faces, which can alternatively be thought of as tilings of the sphere by congruent spherical kites. There are infinitely many face-symmetric tilings of the hyperbolic plane by kites. These polyhedra (equivalently, spherical tilings), the square and deltoidal trihexagonal tilings of the Euclidean plane, and some tilings of the hyperbolic plane are shown in the table below, labeled by face configuration (the numbers of neighbors of each of the four vertices of each tile). Some polyhedra and tilings appear twice, under two different face configurations.
The trapezohedra are another family of polyhedra that have congruent kite-shaped faces. In these polyhedra, the edges of one of the two side lengths of the kite meet at two "pole" vertices, while the edges of the other length form an equatorial zigzag path around the polyhedron. They are the dual polyhedra of the uniform antiprisms. A commonly seen example is the pentagonal trapezohedron, used for ten-sided dice.
Outer billiards.
Mathematician Richard Schwartz has studied outer billiards on kites. Outer billiards is a dynamical system in which, from a point outside a given compact convex set in the plane, one draws a tangent line to the convex set, travels from the starting point along this line to another point equally far from the point of tangency, and then repeats the same process. It had been open since the 1950s whether any system defined in this way could produce paths that get arbitrarily far from their starting point, and in a 2007 paper Schwartz solved this problem by finding unbounded billiards paths for the kite with angles 72°, 72°, 72°, 144°, the same as the one used in the Penrose tiling. He later wrote a monograph analyzing outer billiards for kite shapes more generally. For this problem, any affine transformation of a kite preserves the dynamical properties of outer billiards on it, and it is possible to transform any kite into a shape where three vertices are at the points formula_31 and formula_32, with the fourth at formula_33 with formula_34 in the open unit interval formula_35. The behavior of outer billiards on any kite depends strongly on the parameter formula_34 and in particular whether it is rational. For the case of the Penrose kite, formula_36, an irrational number, where formula_37 is the golden ratio.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "q"
},
{
"math_id": 3,
"text": "A =\\frac{p \\cdot q}{2}."
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "b"
},
{
"math_id": 6,
"text": "\\theta"
},
{
"math_id": 7,
"text": "\\displaystyle A = ab \\cdot \\sin\\theta."
},
{
"math_id": 8,
"text": "r"
},
{
"math_id": 9,
"text": "r=\\frac{pq}{2(a+b)},"
},
{
"math_id": 10,
"text": "\\rho"
},
{
"math_id": 11,
"text": "\\rho=\\frac{pq}{2|a-b|}."
},
{
"math_id": 12,
"text": "ABCD"
},
{
"math_id": 13,
"text": "P"
},
{
"math_id": 14,
"text": "ABP"
},
{
"math_id": 15,
"text": "BCP"
},
{
"math_id": 16,
"text": "CDP"
},
{
"math_id": 17,
"text": "DAP"
},
{
"math_id": 18,
"text": "r_1"
},
{
"math_id": 19,
"text": "r_2"
},
{
"math_id": 20,
"text": "r_3"
},
{
"math_id": 21,
"text": "r_4"
},
{
"math_id": 22,
"text": "r_1+r_3=r_2+r_4."
},
{
"math_id": 23,
"text": "R_1"
},
{
"math_id": 24,
"text": "R_2"
},
{
"math_id": 25,
"text": "R_3"
},
{
"math_id": 26,
"text": "R_4"
},
{
"math_id": 27,
"text": "R_1+R_3=R_2+R_4."
},
{
"math_id": 28,
"text": "n"
},
{
"math_id": 29,
"text": "O(n)"
},
{
"math_id": 30,
"text": "\\pi(1-\\tfrac1n)"
},
{
"math_id": 31,
"text": "(-1,0)"
},
{
"math_id": 32,
"text": "(0,\\pm1)"
},
{
"math_id": 33,
"text": "(\\alpha,0)"
},
{
"math_id": 34,
"text": "\\alpha"
},
{
"math_id": 35,
"text": "(0,1)"
},
{
"math_id": 36,
"text": "\\alpha=1/\\varphi^3"
},
{
"math_id": 37,
"text": "\\varphi=(1+\\sqrt5)/2"
}
] | https://en.wikipedia.org/wiki?curid=84840 |
8485219 | Von Neumann neighborhood | Cellular automaton neighborhood consisting of four adjacent cells
In cellular automata, the von Neumann neighborhood (or 4-neighborhood) is classically defined on a two-dimensional square lattice and is composed of a central cell and its four adjacent cells. The neighborhood is named after John von Neumann, who used it to define the von Neumann cellular automaton and the von Neumann universal constructor within it. It is one of the two most commonly used neighborhood types for two-dimensional cellular automata, the other one being the Moore neighborhood.
This neighbourhood can be used to define the notion of 4-connected pixels in computer graphics.
The von Neumann neighbourhood of a cell is the cell itself and the cells at a Manhattan distance of 1.
The concept can be extended to higher dimensions, for example forming a 6-cell octahedral neighborhood for a cubic cellular automaton in three dimensions.
Von Neumann neighborhood of range "r".
An extension of the simple von Neumann neighborhood described above is to take the set of points at a Manhattan distance of "r" > 1. This results in a diamond-shaped region (shown for "r" = 2 in the illustration). These are called von Neumann neighborhoods of range or extent "r". The number of cells in a 2-dimensional von Neumann neighborhood of range "r" can be expressed as formula_0. The number of cells in a "d"-dimensional von Neumann neighborhood of range "r" is the Delannoy number "D"("d","r"). The number of cells on a surface of a "d"-dimensional von Neumann neighborhood of range "r" is the Zaitsev number (sequence in the OEIS).
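A small Python sketch that enumerates the two-dimensional neighborhood and checks the cell-count formula:

```python
def von_neumann_neighborhood(r):
    # Cells within Manhattan distance r of the origin on the square lattice.
    return [(dx, dy)
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)
            if abs(dx) + abs(dy) <= r]

for r in range(1, 5):
    assert len(von_neumann_neighborhood(r)) == r**2 + (r + 1)**2
# r = 1 gives 5 cells, r = 2 gives 13, r = 3 gives 25, r = 4 gives 41.
```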
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r^2 + (r+1)^2"
}
] | https://en.wikipedia.org/wiki?curid=8485219 |
848561 | Flesch–Kincaid readability tests | Indicator for the complexity of texts
The Flesch–Kincaid readability tests are readability tests designed to indicate how difficult a passage in English is to understand. There are two tests: the Flesch Reading-Ease, and the Flesch–Kincaid Grade Level. Although they use the same core measures (word length and sentence length), they have different weighting factors.
The results of the two tests correlate approximately inversely: a text with a comparatively high score on the Reading Ease test should have a lower score on the Grade-Level test. Rudolf Flesch devised the Reading Ease evaluation; somewhat later, he and J. Peter Kincaid developed the Grade Level evaluation for the United States Navy.
History.
"The Flesch–Kincaid" (F–K) reading grade level was developed under contract to the U.S. Navy in 1975 by J. Peter Kincaid and his team. Related U.S. Navy research directed by Kincaid delved into high-tech education (for example, the electronic authoring and delivery of technical information), usefulness of the Flesch–Kincaid readability formula, computer aids for editing tests, illustrated formats to teach procedures, and the Computer Readability Editing System (CRES).
The F–K formula was first used by the Army for assessing the difficulty of technical manuals in 1978 and soon after became a United States Military Standard. Pennsylvania was the first U.S. state to require that automobile insurance policies be written at no higher than a ninth-grade level (14–15 years of age) of reading difficulty, as measured by the F–K formula. This is now a common requirement in many other states and for other legal documents such as insurance policies.
Flesch reading ease.
In the Flesch reading-ease test, higher scores indicate material that is easier to read; lower numbers mark passages that are more difficult to read. The formula for the Flesch reading-ease score (FRES) test is:
formula_0
Scores can be interpreted as shown in the table below.
"Reader's Digest" magazine has a readability index of about 65, "Time" magazine scores about 52, an average grade six student's written assignment (age of 12) has a readability index of 60–70 (and a reading grade level of six to seven), and the "Harvard Law Review" has a general readability score in the low 30s. The highest (easiest) readability score possible is 121.22, but only if every sentence consists of only one-syllable words. "The cat sat on the mat." scores 116. The score does not have a theoretical lower bound; therefore, it is possible to make the score as low as wanted by arbitrarily including words with many syllables. The sentence "This sentence, taken as a reading passage unto itself, is being used to prove a point." has a readability of 69. The sentence "The Australian platypus is seemingly a hybrid of a mammal and reptilian creature." scores 37.5 as it has 24 syllables and 13 words. While Amazon calculates the text of "Moby-Dick" as 57.9, one particularly long sentence about sharks in chapter 64 has a readability score of −146.77. One sentence in the beginning of Scott Moncrieff's English translation of "Swann's Way", by Marcel Proust, has a score of −515.1.
The U.S. Department of Defense uses the reading ease test as the standard test of readability for its documents and forms. Florida requires that insurance policies have a Flesch reading ease score of 45 or greater.
Use of this scale is so ubiquitous that it is bundled with popular word processing programs and services such as KWord, IBM Lotus Symphony, Microsoft Office Word, WordPerfect, WordPro, and Grammarly.
Polysyllabic words affect this score significantly more than they do the grade-level score.
Flesch–Kincaid grade level.
These readability tests are used extensively in the field of education. The "Flesch–Kincaid Grade Level Formula" presents a score as a U.S. grade level, making it easier for teachers, parents, librarians, and others to judge the readability level of various books and texts. It can also mean the number of years of education generally required to understand this text, relevant when the formula results in a number greater than 10. The grade level is calculated with the following formula:
formula_1
The result is a number that corresponds with a U.S. grade level. The sentence "The Australian platypus is seemingly a hybrid of a mammal and reptilian creature" scores 11.3, as it has 24 syllables and 13 words. The different weighting factors for words per sentence and syllables per word in each scoring system mean that the two schemes are not directly comparable and cannot be converted. The grade level formula emphasizes sentence length over word length. By creating one-word strings with hundreds of random characters, grade levels may be attained that are hundreds of times larger than high school completion in the United States. Due to the formula's construction, the score does not have an upper bound.
The lowest grade level score in theory is −3.40, but there are few real passages in which every sentence consists of a single one-syllable word. "Green Eggs and Ham" by Dr. Seuss comes close, averaging 5.7 words per sentence and 1.02 syllables per word, with a grade level of −1.3. (Most of the 50 used words are monosyllabic; "anywhere", which occurs eight times, is the only exception.)
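Both formulas are straightforward to evaluate; here is a short Python sketch (the word, sentence, and syllable counts are assumed to be supplied, since automatic syllable counting is the hard part in practice):

```python
def flesch_reading_ease(words, sentences, syllables):
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# "The Australian platypus is seemingly a hybrid of a mammal and reptilian
# creature.": 1 sentence, 13 words, 24 syllables.
print(round(flesch_reading_ease(13, 1, 24), 1))   # 37.5
print(round(flesch_kincaid_grade(13, 1, 24), 1))  # 11.3
```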
Limitations.
As readability formulas were developed for school books, they demonstrate weaknesses compared to directly testing usability with typical readers. They neglect between-reader differences and effects of content, layout and retrieval aids.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n206.835 - 1.015 \\left( \\frac{\\text{total words}}{\\text{total sentences}} \\right) - 84.6 \\left( \\frac{\\text{total syllables}}{\\text{total words}} \\right)\n"
},
{
"math_id": 1,
"text": "\n0.39 \\left ( \\frac{\\mbox{total words}}{\\mbox{total sentences}} \\right ) + 11.8 \\left ( \\frac{\\mbox{total syllables}}{\\mbox{total words}} \\right ) - 15.59\n"
}
] | https://en.wikipedia.org/wiki?curid=848561 |
848629 | Adiabatic theorem | Concept in quantum mechanics
The adiabatic theorem is a concept in quantum mechanics. Its original form, due to Max Born and Vladimir Fock (1928), was stated as follows:
"A physical system remains in its instantaneous eigenstate if a given perturbation is acting on it slowly enough and if there is a gap between the eigenvalue and the rest of the Hamiltonian's spectrum."
In simpler terms, a quantum mechanical system subjected to gradually changing external conditions adapts its functional form, but when subjected to rapidly varying conditions there is insufficient time for the functional form to adapt, so the spatial probability density remains unchanged.
Adiabatic pendulum.
At the 1911 Solvay conference, Einstein gave a lecture on the quantum hypothesis, which states that formula_0 for atomic oscillators. After Einstein's lecture, Hendrik Lorentz commented that, classically, if a simple pendulum is shortened by holding the wire between two fingers and sliding them down, it seems that its energy will change smoothly as the pendulum is shortened. This seems to show that the quantum hypothesis is invalid for macroscopic systems, and if macroscopic systems do not follow the quantum hypothesis, then as a macroscopic system becomes microscopic, it seems the quantum hypothesis would be invalidated. Einstein replied that although both the energy formula_1 and the frequency formula_2 would change, their ratio formula_3 would still be conserved, thus saving the quantum hypothesis.
Before the conference, Einstein had just read a paper by Paul Ehrenfest on the adiabatic hypothesis. We know that he had read it because he mentioned it in a letter to Michele Besso written before the conference.
Diabatic vs. adiabatic processes.
At some initial time formula_4 a quantum-mechanical system has an energy given by the Hamiltonian formula_5; the system is in an eigenstate of formula_5 labelled formula_6. Changing conditions modify the Hamiltonian in a continuous manner, resulting in a final Hamiltonian formula_7 at some later time formula_8. The system will evolve according to the time-dependent Schrödinger equation, to reach a final state formula_9. The adiabatic theorem states that the modification to the system depends critically on the time formula_10 during which the modification takes place.
For a truly adiabatic process we require formula_11; in this case the final state formula_9 will be an eigenstate of the final Hamiltonian formula_12, with a modified configuration:
formula_13
The degree to which a given change approximates an adiabatic process depends on both the energy separation between formula_6 and adjacent states, and the ratio of the interval formula_14 to the characteristic timescale of the evolution of formula_6 for a time-independent Hamiltonian, formula_15, where formula_16 is the energy of formula_6.
Conversely, in the limit formula_17 we have infinitely rapid, or diabatic passage; the configuration of the state remains unchanged:
formula_18
The so-called "gap condition" included in Born and Fock's original definition given above refers to a requirement that the spectrum of formula_19 is discrete and nondegenerate, such that there is no ambiguity in the ordering of the states (one can easily establish which eigenstate of formula_7 "corresponds" to formula_20). In 1999 J. E. Avron and A. Elgart reformulated the adiabatic theorem to adapt it to situations without a gap.
Comparison with the adiabatic concept in thermodynamics.
The term "adiabatic" is traditionally used in thermodynamics to describe processes without the exchange of heat between system and environment (see adiabatic process), more precisely these processes are usually faster than the timescale of heat exchange. (For example, a pressure wave is adiabatic with respect to a heat wave, which is not adiabatic.) Adiabatic in the context of thermodynamics is often used as a synonym for fast process.
The classical and quantum mechanics definition is instead closer to the thermodynamical concept of a quasistatic process, which are processes that are almost always at equilibrium (i.e. that are slower than the internal energy exchange interactions time scales, namely a "normal" atmospheric heat wave is quasi-static, and a pressure wave is not). Adiabatic in the context of mechanics is often used as a synonym for slow process.
In the quantum world adiabatic means for example that the time scale of electrons and photon interactions is much faster or almost instantaneous with respect to the average time scale of electrons and photon propagation. Therefore, we can model the interactions as a piece of continuous propagation of electrons and photons (i.e. states at equilibrium) plus a quantum jump between states (i.e. instantaneous).
The adiabatic theorem in this heuristic context tells essentially that quantum jumps are preferably avoided, and the system tries to conserve the state and the quantum numbers.
The quantum mechanical concept of adiabatic is related to adiabatic invariant, it is often used in the old quantum theory and has no direct relation with heat exchange.
Example systems.
Simple pendulum.
As an example, consider a pendulum oscillating in a vertical plane. If the support is moved, the mode of oscillation of the pendulum will change. If the support is moved "sufficiently slowly", the motion of the pendulum relative to the support will remain unchanged. A gradual change in external conditions allows the system to adapt, such that it retains its initial character. A detailed classical example is available on the adiabatic invariant page.
Quantum harmonic oscillator.
The classical nature of a pendulum precludes a full description of the effects of the adiabatic theorem. As a further example consider a quantum harmonic oscillator as the spring constant formula_21 is increased. Classically this is equivalent to increasing the stiffness of a spring; quantum-mechanically the effect is a narrowing of the potential energy curve in the system Hamiltonian.
If formula_21 is increased adiabatically formula_22 then the system at time formula_23 will be in an instantaneous eigenstate formula_24 of the "current" Hamiltonian formula_25, corresponding to the initial eigenstate of formula_26. For the special case of a system like the quantum harmonic oscillator described by a single quantum number, this means the quantum number will remain unchanged. Figure 1 shows how a harmonic oscillator, initially in its ground state, formula_27, remains in the ground state as the potential energy curve is compressed; the functional form of the state adapting to the slowly varying conditions.
For a rapidly increased spring constant, the system undergoes a diabatic process formula_28 in which the system has no time to adapt its functional form to the changing conditions. While the final state must look identical to the initial state formula_29 for a process occurring over a vanishing time period, there is no eigenstate of the new Hamiltonian, formula_25, that resembles the initial state. The final state is composed of a linear superposition of many different eigenstates of formula_25 which sum to reproduce the form of the initial state.
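A numerical illustration of the sudden (diabatic) limit in Python, with ħ = m = 1 and an illustrative four-fold increase of the oscillator frequency (a sixteen-fold increase of the spring constant): the probability of remaining in the ground state is the squared overlap of the old and new ground-state wavefunctions, whereas an adiabatic change would leave the system in the instantaneous ground state with probability close to one.

```python
import numpy as np

def ground_state(omega, x):
    # Harmonic-oscillator ground-state wavefunction (hbar = m = 1).
    return (omega / np.pi) ** 0.25 * np.exp(-omega * x**2 / 2)

x = np.linspace(-20.0, 20.0, 20001)
dx = x[1] - x[0]
w1, w2 = 1.0, 4.0   # frequency before and after the sudden stiffening

overlap = np.sum(ground_state(w1, x) * ground_state(w2, x)) * dx
print(overlap**2)                        # ~0.8: survival probability in the sudden limit
print(2 * np.sqrt(w1 * w2) / (w1 + w2))  # closed-form squared overlap, for comparison
```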
Avoided curve crossing.
For a more widely applicable example, consider a 2-level atom subjected to an external magnetic field. The states, labelled formula_30 and formula_31 using bra–ket notation, can be thought of as atomic angular-momentum states, each with a particular geometry. For reasons that will become clear these states will henceforth be referred to as the diabatic states. The system wavefunction can be represented as a linear combination of the diabatic states:
formula_32
With the field absent, the energetic separation of the diabatic states is equal to formula_33; the energy of state formula_30 increases with increasing magnetic field (a low-field-seeking state), while the energy of state formula_31 decreases with increasing magnetic field (a high-field-seeking state). Assuming the magnetic-field dependence is linear, the Hamiltonian matrix for the system with the field applied can be written
formula_34
where formula_35 is the magnetic moment of the atom, assumed to be the same for the two diabatic states, and formula_36 is some time-independent coupling between the two states. The diagonal elements are the energies of the diabatic states (formula_37 and formula_38); however, since formula_39 is not a diagonal matrix, these states are clearly not eigenstates of formula_39, owing to the off-diagonal coupling constant.
The eigenvectors of the matrix formula_39 are the eigenstates of the system, which we will label formula_40 and formula_41, with corresponding eigenvalues
formula_42
It is important to realise that the eigenvalues formula_43 and formula_44 are the only allowed outputs for any individual measurement of the system energy, whereas the diabatic energies formula_37 and formula_38 correspond to the expectation values for the energy of the system in the diabatic states formula_30 and formula_31.
Figure 2 shows the dependence of the diabatic and adiabatic energies on the value of the magnetic field; note that for non-zero coupling the eigenvalues of the Hamiltonian cannot be degenerate, and thus we have an avoided crossing. If an atom is initially in state formula_45 in zero magnetic field (on the red curve, at the extreme left), an adiabatic increase in magnetic field formula_46 will ensure the system remains in an eigenstate of the Hamiltonian formula_41 throughout the process (follows the red curve). A diabatic increase in magnetic field formula_47 will ensure the system follows the diabatic path (the dotted blue line), such that the system undergoes a transition to state formula_48. For finite magnetic field slew rates formula_49 there will be a finite probability of finding the system in either of the two eigenstates. See below for approaches to calculating these probabilities.
These results are extremely important in atomic and molecular physics for control of the energy-state distribution in a population of atoms or molecules.
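A numerical sketch of the avoided crossing in Python (the precise form of the Hamiltonian matrix and its sign conventions are assumptions for illustration; here the diabatic energies are taken as ±μB/2 with coupling a):

```python
import numpy as np

mu, a = 1.0, 0.2                      # magnetic moment and coupling (arbitrary units)
fields = np.linspace(-2.0, 2.0, 201)  # magnetic field values

gaps = []
for B in fields:
    H = np.array([[mu * B / 2.0, a],
                  [a, -mu * B / 2.0]])        # assumed two-level Hamiltonian
    e_lower, e_upper = np.linalg.eigvalsh(H)  # adiabatic (eigen)energies
    gaps.append(e_upper - e_lower)

# With a = 0 the diabatic levels would cross at B = 0; with a != 0 the
# eigenvalues never become degenerate, and the minimum splitting is 2a.
print(min(gaps))   # ~0.4 = 2*a
```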
Mathematical statement.
Under a slowly changing Hamiltonian formula_50 with instantaneous eigenstates formula_51 and corresponding energies formula_52, a quantum system evolves from the initial state
formula_53
to the final state
formula_54
where the coefficients undergo the change of phase
formula_55
with the dynamical phase
formula_56
and geometric phase
formula_57
In particular, formula_58, so if the system begins in an eigenstate of formula_59, it remains in an eigenstate of formula_50 during the evolution with a change of phase only.
Example applications.
Often a solid crystal is modeled as a set of independent valence electrons moving in a mean, perfectly periodic potential generated by a rigid lattice of ions. With the adiabatic theorem we can also take into account the motion of the valence electrons across the crystal and the thermal motion of the ions, as in the Born–Oppenheimer approximation.
This does explain many phenomena in the scope of:
Deriving conditions for diabatic vs adiabatic passage.
We will now pursue a more rigorous analysis. Making use of bra–ket notation, the state vector of the system at time formula_23 can be written
formula_60
where the spatial wavefunction alluded to earlier is the projection of the state vector onto the eigenstates of the position operator
formula_61
It is instructive to examine the limiting cases, in which formula_14 is very large (adiabatic, or gradual change) and very small (diabatic, or sudden change).
Consider a system Hamiltonian undergoing continuous change from an initial value formula_62, at time formula_4, to a final value formula_63, at time formula_8, where formula_10. The evolution of the system can be described in the Schrödinger picture by the time-evolution operator, defined by the integral equation
formula_64
which is equivalent to the Schrödinger equation
formula_65
along with the initial condition formula_66. Given knowledge of the system wave function at formula_4, the evolution of the system up to a later time formula_23 can be obtained using
formula_67
The problem of determining the "adiabaticity" of a given process is equivalent to establishing the dependence of formula_68 on formula_14.
To determine the validity of the adiabatic approximation for a given process, one can calculate the probability of finding the system in a state other than that in which it started. Using bra–ket notation and the definition formula_69, we have:
formula_70
We can expand formula_68
formula_71
In the perturbative limit we can take just the first two terms and substitute them into our equation for formula_72. Recognizing that
formula_73
is the system Hamiltonian averaged over the interval formula_74, we have:
formula_75
After expanding the products and making the appropriate cancellations, we are left with:
formula_76
giving
formula_77
where formula_78 is the root mean square deviation of the system Hamiltonian averaged over the interval of interest.
The sudden approximation is valid when formula_79 (the probability of finding the system in a state other than that in which it started approaches zero); thus the validity condition is given by
formula_80
which is a statement of the time-energy form of the Heisenberg uncertainty principle.
Diabatic passage.
In the limit formula_17 we have infinitely rapid, or diabatic passage:
formula_81
The functional form of the system remains unchanged:
formula_82
This is sometimes referred to as the sudden approximation. The validity of the approximation for a given process can be characterized by the probability that the state of the system remains unchanged:
formula_83
Adiabatic passage.
In the limit formula_11 we have infinitely slow, or adiabatic passage. The system evolves, adapting its form to the changing conditions,
formula_84
If the system is initially in an eigenstate of formula_5, after a period formula_14 it will have passed into the "corresponding" eigenstate of formula_7.
This is referred to as the adiabatic approximation. The validity of the approximation for a given process can be determined from the probability that the final state of the system is different from the initial state:
formula_85
Calculating adiabatic passage probabilities.
The Landau–Zener formula.
In 1932 an analytic solution to the problem of calculating adiabatic transition probabilities was published separately by Lev Landau and Clarence Zener, for the special case of a linearly changing perturbation in which the time-varying component does not couple the relevant states (hence the coupling in the diabatic Hamiltonian matrix is independent of time).
The key figure of merit in this approach is the Landau–Zener velocity:
formula_86
where formula_87 is the perturbation variable (electric or magnetic field, molecular bond-length, or any other perturbation to the system), and formula_88 and formula_89 are the energies of the two diabatic (crossing) states. A large formula_90 results in a large diabatic transition probability and vice versa.
Using the Landau–Zener formula the probability, formula_91, of a diabatic transition is given by
formula_92
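As a minimal illustration (with placeholder numbers, not values tied to any particular system), the formula can be evaluated directly once the coupling formula_36 and the sweep rate of the diabatic energy separation are known:

```python
# Sketch: evaluating the Landau-Zener diabatic transition probability
# P_D = exp(-2*pi*Gamma), with Gamma = a**2 / (hbar*|alpha|), where alpha is the
# rate of change of the diabatic energy separation, d(E2 - E1)/dt.
import math

hbar = 1.054571817e-34   # J*s

def landau_zener_probability(a, alpha):
    """a: off-diagonal coupling (J); alpha: |d(E2 - E1)/dt| (J/s)."""
    gamma = a**2 / (hbar * abs(alpha))
    return math.exp(-2 * math.pi * gamma)

# Placeholder values: a slow sweep (small alpha) gives a small P_D (adiabatic
# behaviour), while a fast sweep gives P_D close to 1 (diabatic behaviour).
print(landau_zener_probability(a=1e-27, alpha=1e-19))   # intermediate regime, ~0.55
print(landau_zener_probability(a=1e-27, alpha=1e-17))   # fast sweep, close to 1
```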
The numerical approach.
For a transition involving a nonlinear change in the perturbation variable or a time-dependent coupling between the diabatic states, the equations of motion for the system dynamics cannot be solved analytically. The diabatic transition probability can still be obtained using any of a wide variety of numerical algorithms for solving ordinary differential equations.
The equations to be solved can be obtained from the time-dependent Schrödinger equation:
formula_93
where formula_94 is a vector containing the adiabatic state amplitudes, formula_95 is the time-dependent adiabatic Hamiltonian, and the overdot represents a time derivative.
Comparison of the initial conditions used with the values of the state amplitudes following the transition can yield the diabatic transition probability. In particular, for a two-state system:
formula_96
for a system that began with formula_97.
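A minimal sketch of such a calculation is given below. For simplicity it integrates the two-level problem in the diabatic basis (rather than the adiabatic basis used above) for a linear sweep, in units with ħ = 1 and with purely illustrative parameter values, so that the result can be checked against the Landau–Zener formula:

```python
# Sketch: integrate i*dc/dt = H(t)c for a two-level linear sweep through an
# avoided crossing and compare the probability of remaining in the initial
# diabatic state with the Landau-Zener prediction. Units with hbar = 1;
# the parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

a = 0.25        # constant off-diagonal coupling
alpha = 1.0     # sweep rate of the diabatic energy difference, d(E2 - E1)/dt

def rhs(t, y):
    c = y[:2] + 1j * y[2:]
    H = np.array([[alpha * t / 2, a],
                  [a, -alpha * t / 2]])
    dc = -1j * H @ c
    return np.concatenate([dc.real, dc.imag])

y0 = np.array([1.0, 0.0, 0.0, 0.0])            # start in diabatic state |1>
sol = solve_ivp(rhs, (-100.0, 100.0), y0, rtol=1e-8, atol=1e-10)
c_final = sol.y[:2, -1] + 1j * sol.y[2:, -1]

p_numeric = abs(c_final[0])**2                 # stayed in the diabatic state
p_landau_zener = np.exp(-2 * np.pi * a**2 / alpha)
print(p_numeric, p_landau_zener)               # the two values should agree closely
```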
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E = nh \\nu"
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "\\nu"
},
{
"math_id": 3,
"text": "\\frac{E}{\\nu}"
},
{
"math_id": 4,
"text": "t_0"
},
{
"math_id": 5,
"text": "\\hat{H}(t_0)"
},
{
"math_id": 6,
"text": "\\psi(x,t_0)"
},
{
"math_id": 7,
"text": "\\hat{H}(t_1)"
},
{
"math_id": 8,
"text": "t_1"
},
{
"math_id": 9,
"text": "\\psi(x,t_1)"
},
{
"math_id": 10,
"text": "\\tau = t_1 - t_0"
},
{
"math_id": 11,
"text": "\\tau \\to \\infty"
},
{
"math_id": 12,
"text": "\\hat{H}(t_1) "
},
{
"math_id": 13,
"text": "|\\psi(x,t_1)|^2 \\neq |\\psi(x,t_0)|^2 ."
},
{
"math_id": 14,
"text": "\\tau"
},
{
"math_id": 15,
"text": "\\tau_{int} = 2\\pi\\hbar/E_0"
},
{
"math_id": 16,
"text": "E_0"
},
{
"math_id": 17,
"text": "\\tau \\to 0"
},
{
"math_id": 18,
"text": "|\\psi(x,t_1)|^2 = |\\psi(x,t_0)|^2 ."
},
{
"math_id": 19,
"text": "\\hat{H}"
},
{
"math_id": 20,
"text": "\\psi(t_0)"
},
{
"math_id": 21,
"text": "k"
},
{
"math_id": 22,
"text": "\\left(\\frac{dk}{dt} \\to 0\\right)"
},
{
"math_id": 23,
"text": "t"
},
{
"math_id": 24,
"text": "\\psi(t)"
},
{
"math_id": 25,
"text": "\\hat{H}(t)"
},
{
"math_id": 26,
"text": "\\hat{H}(0)"
},
{
"math_id": 27,
"text": "n = 0"
},
{
"math_id": 28,
"text": "\\left(\\frac{dk}{dt} \\to \\infty\\right)"
},
{
"math_id": 29,
"text": "\\left(|\\psi(t)|^2 = |\\psi(0)|^2\\right)"
},
{
"math_id": 30,
"text": "|1\\rangle"
},
{
"math_id": 31,
"text": "|2\\rangle"
},
{
"math_id": 32,
"text": "|\\Psi\\rangle = c_1(t)|1\\rangle + c_2(t)|2\\rangle."
},
{
"math_id": 33,
"text": "\\hbar\\omega_0"
},
{
"math_id": 34,
"text": "\\mathbf{H} = \\begin{pmatrix}\n\\mu B(t)-\\hbar\\omega_0/2 & a \\\\\na^* & \\hbar\\omega_0/2-\\mu B(t)\n\\end{pmatrix}"
},
{
"math_id": 35,
"text": "\\mu"
},
{
"math_id": 36,
"text": "a"
},
{
"math_id": 37,
"text": "E_1(t)"
},
{
"math_id": 38,
"text": "E_2(t)"
},
{
"math_id": 39,
"text": "\\mathbf{H}"
},
{
"math_id": 40,
"text": "|\\phi_1(t)\\rangle"
},
{
"math_id": 41,
"text": "|\\phi_2(t)\\rangle"
},
{
"math_id": 42,
"text": "\\begin{align}\n\\varepsilon_1(t) &= -\\frac{1}{2}\\sqrt{4a^2 + (\\hbar\\omega_0 - 2\\mu B(t))^2} \\\\[4pt]\n\\varepsilon_2(t) &= +\\frac{1}{2}\\sqrt{4a^2 + (\\hbar\\omega_0 - 2\\mu B(t))^2}.\n\\end{align}"
},
{
"math_id": 43,
"text": "\\varepsilon_1(t)"
},
{
"math_id": 44,
"text": "\\varepsilon_2(t)"
},
{
"math_id": 45,
"text": "|\\phi_2(t_0)\\rangle"
},
{
"math_id": 46,
"text": "\\left(\\frac{dB}{dt} \\to 0\\right)"
},
{
"math_id": 47,
"text": "\\left(\\frac{dB}{dt}\\to \\infty\\right)"
},
{
"math_id": 48,
"text": "|\\phi_1(t_1)\\rangle"
},
{
"math_id": 49,
"text": "\\left(0 < \\frac{dB}{dt} < \\infty\\right)"
},
{
"math_id": 50,
"text": "H(t)"
},
{
"math_id": 51,
"text": "| n(t) \\rangle"
},
{
"math_id": 52,
"text": "E_n(t)"
},
{
"math_id": 53,
"text": "| \\psi(0) \\rangle = \\sum_n c_n(0) | n(0) \\rangle"
},
{
"math_id": 54,
"text": "| \\psi(t) \\rangle = \\sum_n c_n(t) | n(t) \\rangle ,"
},
{
"math_id": 55,
"text": "c_n(t) = c_n(0) e^{i \\theta_n(t)} e^{i \\gamma_n(t)}"
},
{
"math_id": 56,
"text": "\\theta_m(t) = -\\frac{1}{\\hbar} \\int_0^t E_m(t') dt'"
},
{
"math_id": 57,
"text": "\\gamma_m(t) = i \\int_0^t \\langle m(t') | \\dot{m}(t') \\rangle dt' ."
},
{
"math_id": 58,
"text": "|c_n(t)|^2 = |c_n(0)|^2"
},
{
"math_id": 59,
"text": "H(0)"
},
{
"math_id": 60,
"text": "|\\psi(t)\\rangle = \\sum_n c^A_n(t)e^{-iE_nt/\\hbar}|\\phi_n\\rangle ,"
},
{
"math_id": 61,
"text": "\\psi(x,t) = \\langle x|\\psi(t)\\rangle ."
},
{
"math_id": 62,
"text": "\\hat{H}_0"
},
{
"math_id": 63,
"text": "\\hat{H}_1"
},
{
"math_id": 64,
"text": "\\hat{U}(t,t_0) = 1 - \\frac{i}{\\hbar}\\int_{t_0}^t\\hat{H}(t')\\hat{U}(t',t_0)dt' ,"
},
{
"math_id": 65,
"text": "i\\hbar\\frac{\\partial}{\\partial t}\\hat{U}(t,t_0) = \\hat{H}(t)\\hat{U}(t,t_0),"
},
{
"math_id": 66,
"text": "\\hat{U}(t_0,t_0) = 1"
},
{
"math_id": 67,
"text": "|\\psi(t)\\rangle = \\hat{U}(t,t_0)|\\psi(t_0)\\rangle."
},
{
"math_id": 68,
"text": "\\hat{U}(t_1,t_0)"
},
{
"math_id": 69,
"text": "|0\\rangle \\equiv |\\psi(t_0)\\rangle"
},
{
"math_id": 70,
"text": "\\zeta = \\langle 0|\\hat{U}^\\dagger(t_1,t_0)\\hat{U}(t_1,t_0)|0\\rangle - \\langle 0|\\hat{U}^\\dagger(t_1,t_0)|0\\rangle\\langle 0 | \\hat{U}(t_1,t_0) | 0 \\rangle."
},
{
"math_id": 71,
"text": "\\hat{U}(t_1,t_0) = 1 + {1 \\over i\\hbar} \\int_{t_0}^{t_1}\\hat{H}(t)dt + {1 \\over (i\\hbar)^2} \\int_{t_0}^{t_1}dt' \\int_{t_0}^{t'}dt'' \\hat{H}(t')\\hat{H}(t'') + \\cdots."
},
{
"math_id": 72,
"text": "\\zeta"
},
{
"math_id": 73,
"text": "{1 \\over \\tau}\\int_{t_0}^{t_1}\\hat{H}(t)dt \\equiv \\bar{H}"
},
{
"math_id": 74,
"text": "t_0 \\to t_1"
},
{
"math_id": 75,
"text": "\\zeta = \\langle 0|(1 + \\tfrac{i}{\\hbar}\\tau\\bar{H})(1 - \\tfrac{i}{\\hbar}\\tau\\bar{H})|0\\rangle - \\langle 0|(1 + \\tfrac{i}{\\hbar}\\tau\\bar{H})|0\\rangle \\langle 0|(1 - \\tfrac{i}{\\hbar}\\tau\\bar{H})|0\\rangle ."
},
{
"math_id": 76,
"text": "\\zeta = \\frac{\\tau^2}{\\hbar^2}\\left(\\langle 0|\\bar{H}^2|0\\rangle - \\langle 0|\\bar{H}|0\\rangle\\langle 0|\\bar{H}|0\\rangle\\right) ,"
},
{
"math_id": 77,
"text": "\\zeta = \\frac{\\tau^2\\Delta\\bar{H}^2}{\\hbar^2} ,"
},
{
"math_id": 78,
"text": "\\Delta\\bar{H}"
},
{
"math_id": 79,
"text": "\\zeta \\ll 1"
},
{
"math_id": 80,
"text": "\\tau \\ll {\\hbar \\over \\Delta\\bar{H}} ,"
},
{
"math_id": 81,
"text": "\\lim_{\\tau \\to 0}\\hat{U}(t_1,t_0) = 1 ."
},
{
"math_id": 82,
"text": "|\\langle x|\\psi(t_1)\\rangle|^2 = \\left|\\langle x|\\psi(t_0)\\rangle\\right|^2 ."
},
{
"math_id": 83,
"text": "P_D = 1 - \\zeta."
},
{
"math_id": 84,
"text": "|\\langle x|\\psi(t_1)\\rangle|^2 \\neq |\\langle x|\\psi(t_0)\\rangle|^2 ."
},
{
"math_id": 85,
"text": "P_A = \\zeta ."
},
{
"math_id": 86,
"text": "v_\\text{LZ} = {\\frac{\\partial}{\\partial t}|E_2 - E_1| \\over \\frac{\\partial}{\\partial q}|E_2 - E_1|} \\approx \\frac{dq}{dt} ,"
},
{
"math_id": 87,
"text": "q"
},
{
"math_id": 88,
"text": "E_1"
},
{
"math_id": 89,
"text": "E_2"
},
{
"math_id": 90,
"text": "v_\\text{LZ}"
},
{
"math_id": 91,
"text": "P_{\\rm D}"
},
{
"math_id": 92,
"text": "\\begin{align}\nP_{\\rm D} &= e^{-2\\pi\\Gamma}\\\\\n\\Gamma &= {a^2/\\hbar \\over \\left|\\frac{\\partial}{\\partial t}(E_2 - E_1)\\right|} = {a^2/\\hbar \\over \\left|\\frac{dq}{dt}\\frac{\\partial}{\\partial q}(E_2 - E_1)\\right|}\\\\\n&= {a^2 \\over \\hbar|\\alpha|}\\\\\n\\end{align}"
},
{
"math_id": 93,
"text": "i\\hbar\\dot{\\underline{c}}^A(t) = \\mathbf{H}_A(t)\\underline{c}^A(t) ,"
},
{
"math_id": 94,
"text": "\\underline{c}^A(t)"
},
{
"math_id": 95,
"text": "\\mathbf{H}_A(t)"
},
{
"math_id": 96,
"text": "P_D = |c^A_2(t_1)|^2"
},
{
"math_id": 97,
"text": "|c^A_1(t_0)|^2 = 1"
}
] | https://en.wikipedia.org/wiki?curid=848629 |
848633 | Discrete valuation ring | Concept in abstract algebra
In abstract algebra, a discrete valuation ring (DVR) is a principal ideal domain (PID) with exactly one non-zero maximal ideal.
This means a DVR is an integral domain "R" that satisfies any one of the following equivalent conditions:
Examples.
Algebraic.
Localization of Dedekind rings.
Let formula_2. Then, the field of fractions of formula_3 is formula_4. For any nonzero element formula_5 of formula_4, we can apply unique factorization to the numerator and denominator of "r" to write "r" as 2"k"("z"/"n"), where "z", "n", and "k" are integers with "z" and "n" odd. In this case, we define ν("r") = "k".
Then formula_3 is the discrete valuation ring corresponding to ν. The maximal ideal of formula_3 is the principal ideal generated by 2, i.e. formula_6, and the "unique" irreducible element (up to units) is 2 (this is also known as a uniformizing parameter). Note that formula_3 is the localization of the Dedekind domain formula_7 at the prime ideal generated by 2.
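As a small computational sketch (using Python's Fraction type; an illustration, not part of the formal development), the valuation ν and the resulting description of formula_3 can be made explicit:

```python
# Sketch: the 2-adic valuation nu on nonzero rationals, as described above.
# r belongs to Z_(2) exactly when nu(r) >= 0, it is a unit when nu(r) = 0,
# and it lies in the maximal ideal 2*Z_(2) when nu(r) >= 1.
from fractions import Fraction

def nu(r: Fraction) -> int:
    """Exponent k when r is written as 2**k * (z/n) with z and n odd."""
    if r == 0:
        raise ValueError("nu(0) is undefined (conventionally +infinity)")
    k, num, den = 0, r.numerator, r.denominator
    while num % 2 == 0:
        num //= 2
        k += 1
    while den % 2 == 0:
        den //= 2
        k -= 1
    return k

print(nu(Fraction(12, 5)))   # 12/5 = 2**2 * (3/5)   ->  2 (in the maximal ideal)
print(nu(Fraction(7, 3)))    # unit of Z_(2)         ->  0
print(nu(Fraction(3, 8)))    # not in Z_(2)          -> -3
```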
More generally, any localization of a Dedekind domain at a non-zero prime ideal is a discrete valuation ring; in practice, this is frequently how discrete valuation rings arise. In particular, we can define rings
formula_8
for any prime "p" in complete analogy.
"p"-adic integers.
The ring formula_9 of "p"-adic integers is a DVR, for any prime formula_10. Here formula_10 is an irreducible element; the valuation assigns to each formula_10-adic integer formula_11 the largest integer formula_12 such that formula_13 divides formula_11.
Formal power series.
Another important example of a DVR is the ring of formal power series formula_14 in one variable formula_15 over some field formula_12. The "unique" irreducible element is formula_15, the maximal ideal of formula_16 is the principal ideal generated by formula_15, and the valuation formula_17 assigns to each power series the index (i.e. degree) of the first non-zero coefficient.
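A corresponding sketch for truncated power series, representing an element of formula_14 by its list of coefficients, is equally short (again an illustration only):

```python
# Sketch: the valuation on k[[T]], i.e. the index of the first non-zero
# coefficient, for a series represented by the list [a_0, a_1, a_2, ...].
def order(coeffs):
    for k, a in enumerate(coeffs):
        if a != 0:
            return k
    raise ValueError("zero to the given truncation; the order is undefined (infinite)")

print(order([0, 0, 5, 1, 0, 7]))   # 5*T**2 + T**3 + 7*T**5  ->  2
print(order([3, 1]))               # a unit of k[[T]]        ->  0
```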
If we restrict ourselves to real or complex coefficients, we can consider the ring of power series in one variable that "converge" in a neighborhood of 0 (with the neighborhood depending on the power series). This is a discrete valuation ring. This is useful for building intuition with the Valuative criterion of properness.
Ring in function field.
For an example more geometrical in nature, take the ring "R" = {"f"/"g" : "f", "g" polynomials in R["X"] and "g"(0) ≠ 0}, considered as a subring of the field of rational functions R("X") in the variable "X". "R" can be identified with the ring of all real-valued rational functions defined (i.e. finite) in a neighborhood of 0 on the real axis (with the neighborhood depending on the function). It is a discrete valuation ring; the "unique" irreducible element is "X" and the valuation assigns to each function "f" the order (possibly 0) of the zero of "f" at 0. This example provides the template for studying general algebraic curves near non-singular points, the algebraic curve in this case being the real line.
Scheme-theoretic.
Henselian trait.
For a DVR formula_16 it is common to write the fraction field as formula_18 and formula_19 the residue field. These correspond to the generic and closed points of formula_20 For example, the closed point of formula_21 is formula_22 and the generic point is formula_23. Sometimes this is denoted as
formula_24
where formula_25 is the generic point and formula_26 is the closed point.
Localization of a point on a curve.
Given an algebraic curve formula_27, the local ring formula_28 at a smooth point formula_29 is a discrete valuation ring, because it is a principal valuation ring. Note that, because the point formula_29 is smooth, the completion of the local ring is isomorphic to the completion of the localization of formula_30 at some point formula_31.
Uniformizing parameter.
Given a DVR "R", any irreducible element of "R" is a generator for the unique maximal ideal of "R" and vice versa. Such an element is also called a uniformizing parameter of "R" (or a uniformizing element, a uniformizer, or a prime element).
If we fix a uniformizing parameter "t", then "M"=("t") is the unique maximal ideal of "R", and every other non-zero ideal is a power of "M", i.e. has the form ("t" "k") for some "k"≥0. All the powers of "t" are distinct, and so are the powers of "M". Every non-zero element "x" of "R" can be written in the form α"t" "k" with α a unit in "R" and "k"≥0, both uniquely determined by "x". The valuation is given by "ν"("x") = "kv"("t"). So to understand the ring completely, one needs to know the group of units of "R" and how the units interact additively with the powers of "t".
The function "v" also makes any discrete valuation ring into a Euclidean domain.
Topology.
Every discrete valuation ring, being a local ring, carries a natural topology and is a topological ring. We can also give it a metric space structure where the distance between two elements "x" and "y" can be measured as follows:
formula_32
(or with any other fixed real number > 1 in place of 2). Intuitively: an element "z" is "small" and "close to 0" iff its valuation ν("z") is large. The function |x-y|, supplemented by |0|=0, is the restriction of an absolute value defined on the field of fractions of the discrete valuation ring.
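A short sketch of this metric on formula_3, reusing the valuation ν from the localization example above (again purely illustrative):

```python
# Sketch: the ultrametric |x - y| = 2**(-nu(x - y)) on Z_(2); nu is the 2-adic
# valuation defined in the Z_(2) sketch above.
from fractions import Fraction

def dist(x: Fraction, y: Fraction) -> float:
    if x == y:
        return 0.0
    return 2.0 ** (-nu(x - y))

print(dist(Fraction(5), Fraction(1)))       # |4|    = 2**-2 = 0.25
print(dist(Fraction(17), Fraction(1)))      # |16|   = 2**-4 = 0.0625 (closer to each other)
print(dist(Fraction(1, 3), Fraction(3)))    # |-8/3| = 2**-3 = 0.125
```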
A DVR is compact if and only if it is complete and its residue field "R"/"M" is a finite field.
Examples of complete DVRs include
For a given DVR, one often passes to its completion, a complete DVR containing the given ring that is often easier to study. This completion procedure can be thought of in a geometrical way as passing from rational functions to power series, or from rational numbers to the reals.
Returning to our examples: the ring of all formal power series in one variable with real coefficients is the completion of the ring of rational functions defined (i.e. finite) in a neighborhood of 0 on the real line; it is also the completion of the ring of all real power series that converge near 0. The completion of formula_33 (which can be seen as the set of all rational numbers that are "p"-adic integers) is the ring of all "p"-adic integers Z"p".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\cup "
},
{
"math_id": 1,
"text": " \\in "
},
{
"math_id": 2,
"text": "\\mathbb{Z}_{(2)} := \\{ z/n\\mid z,n\\in\\mathbb{Z},\\,\\, n\\text{ is odd}\\}"
},
{
"math_id": 3,
"text": "\\mathbb{Z}_{(2)}"
},
{
"math_id": 4,
"text": "\\mathbb{Q}"
},
{
"math_id": 5,
"text": "r"
},
{
"math_id": 6,
"text": "2\\mathbb{Z}_{(2)}"
},
{
"math_id": 7,
"text": "\\mathbb{Z}"
},
{
"math_id": 8,
"text": "\\mathbb Z_{(p)}:=\\left.\\left\\{\\frac zn\\,\\right| z,n\\in\\mathbb Z,p\\nmid n\\right\\}"
},
{
"math_id": 9,
"text": "\\mathbb{Z}_p"
},
{
"math_id": 10,
"text": "p"
},
{
"math_id": 11,
"text": "x"
},
{
"math_id": 12,
"text": "k"
},
{
"math_id": 13,
"text": "p^k"
},
{
"math_id": 14,
"text": "R = k[[T]]"
},
{
"math_id": 15,
"text": "T"
},
{
"math_id": 16,
"text": "R"
},
{
"math_id": 17,
"text": "\\nu"
},
{
"math_id": 18,
"text": "K = \\text{Frac}(R)"
},
{
"math_id": 19,
"text": "\\kappa = R/\\mathfrak{m}"
},
{
"math_id": 20,
"text": "S=\\text{Spec}(R)."
},
{
"math_id": 21,
"text": "\\text{Spec}(\\mathbb{Z}_p)"
},
{
"math_id": 22,
"text": "\\mathbb{F}_p"
},
{
"math_id": 23,
"text": "\\mathbb{Q}_p"
},
{
"math_id": 24,
"text": "\n\\eta \\to S \\leftarrow s\n"
},
{
"math_id": 25,
"text": "\\eta"
},
{
"math_id": 26,
"text": "s"
},
{
"math_id": 27,
"text": "(X,\\mathcal{O}_X)"
},
{
"math_id": 28,
"text": "\\mathcal{O}_{X,\\mathfrak{p}}"
},
{
"math_id": 29,
"text": "\\mathfrak{p}"
},
{
"math_id": 30,
"text": "\\mathbb{A}^1"
},
{
"math_id": 31,
"text": "\\mathfrak{q}"
},
{
"math_id": 32,
"text": "|x-y| = 2^{-\\nu(x-y)}"
},
{
"math_id": 33,
"text": "\\Z_{(p)}=\\Q \\cap \\Z_p"
}
] | https://en.wikipedia.org/wiki?curid=848633 |
848684 | Cup product | Turns the cohomology of a space into a graded ring
In mathematics, specifically in algebraic topology, the cup product is a method of adjoining two cocycles of degree "p" and "q" to form a composite cocycle of degree "p" + "q". This defines an associative (and distributive) graded commutative product operation in cohomology, turning the cohomology of a space "X" into a graded ring, "H"∗("X"), called the cohomology ring. The cup product was introduced in work of J. W. Alexander, Eduard Čech and Hassler Whitney from 1935–1938, and, in full generality, by Samuel Eilenberg in 1944.
Definition.
In singular cohomology, the cup product is a construction giving a product on the graded cohomology ring "H"∗("X") of a topological space "X".
The construction starts with a product of cochains: if formula_0 is a "p"-cochain and
formula_1 is a "q"-cochain, then
formula_2
where σ is a singular ("p" + "q")-simplex and formula_3
is the canonical embedding of the simplex spanned by S into the formula_4-simplex whose vertices are indexed by formula_5.
Informally, formula_6 is the "p"-th front face and formula_7 is the "q"-th back face of σ, respectively.
The coboundary of the cup product of cochains formula_0 and formula_1 is given by
formula_8
The cup product of two cocycles is again a cocycle, and the product of a coboundary with a cocycle (in either order) is a coboundary. The cup product operation induces a bilinear operation on cohomology,
formula_9
Properties.
The cup product operation in cohomology satisfies the identity
formula_10
so that the corresponding multiplication is graded-commutative.
The cup product is functorial, in the following sense: if
formula_11
is a continuous function, and
formula_12
is the induced homomorphism in cohomology, then
formula_13
for all classes α, β in "H" *("Y"). In other words, "f" * is a (graded) ring homomorphism.
Interpretation.
It is possible to view the cup product formula_14 as induced from the following composition: formula_15 in terms of the chain complexes of formula_16 and formula_17, where the first map is the Künneth map and the second is the map induced by the diagonal formula_18.
This composition passes to the quotient to give a well-defined map in terms of cohomology; this is the cup product. This approach explains the existence of a cup product for cohomology but not for homology: formula_18 induces a map formula_19 but would also induce a map formula_20, which goes the wrong way round to allow us to define a product. This is, however, of use in defining the cap product.
Bilinearity follows from this presentation of cup product, i.e. formula_21 and formula_22
Examples.
Cup products may be used to distinguish manifolds from wedges of spaces with identical cohomology groups. The space formula_23 has the same cohomology groups as the torus "T", but with a different cup product. In the case of "X" the multiplication of the cochains associated to the copies of formula_24 is degenerate, whereas in "T" multiplication in the first cohomology group can be used to decompose the torus as a 2-cell diagram, thus having product equal to Z (more generally "M" where this is the base module).
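A toy computational sketch of this example (encoding only the standard multiplication tables of the two cohomology rings by hand, not computing them from cochains) makes the distinction concrete:

```python
# Sketch: multiplication tables for the degree-1 generators 'a', 'b' of
# H^1(T; Z) and of H^1(X; Z), where X = S^2 v S^1 v S^1; 'c' denotes a chosen
# generator of H^2. These tables are standard facts, stated here directly.
TORUS = {('a', 'b'): ('c', +1), ('b', 'a'): ('c', -1)}   # graded-commutativity: b~a = -a~b
WEDGE = {}                                               # all products of degree-1 classes vanish

def cup(ring, x, y):
    """Cup product of two degree-1 generators: (generator, coefficient) or 0."""
    return ring.get((x, y), 0)

print(cup(TORUS, 'a', 'b'))   # ('c', 1): non-trivial product detects the torus
print(cup(TORUS, 'b', 'a'))   # ('c', -1): the sign (-1)**(pq) with p = q = 1
print(cup(WEDGE, 'a', 'b'))   # 0: the cup product of the two circle classes vanishes on the wedge
```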
Other definitions.
Cup product and differential forms.
In de Rham cohomology, the cup product of differential forms is induced by the wedge product. In other words, the wedge product of
two closed differential forms belongs to the de Rham class of the cup product of the two original de Rham classes.
Cup product and geometric intersections.
For oriented manifolds, there is a geometric heuristic that "the cup product is dual to intersections."
Indeed, let formula_25 be an oriented smooth manifold of dimension formula_26. If two submanifolds formula_27 of codimension formula_28 and formula_29 intersect transversely, then their intersection formula_30 is again a submanifold of codimension formula_31. By taking the images of the fundamental homology classes of these manifolds under inclusion, one can obtain a bilinear product on homology. This product is Poincaré dual to the cup product, in the sense that, taking the Poincaré pairings formula_32, there is the following equality:
formula_33.
Similarly, the linking number can be defined in terms of intersections, shifting dimensions by 1, or alternatively in terms of a non-vanishing cup product on the complement of a link.
Massey products.
The cup product is a binary (2-ary) operation; one can define a ternary (3-ary) and higher order operation called the Massey product, which generalizes the cup product. This is a higher order cohomology operation, which is only partly defined (only defined for some triples).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha^p"
},
{
"math_id": 1,
"text": "\\beta^q"
},
{
"math_id": 2,
"text": "(\\alpha^p \\smile \\beta^q)(\\sigma) = \\alpha^p(\\sigma \\circ \\iota_{0,1, ... p}) \\cdot \\beta^q(\\sigma \\circ \\iota_{p,p+1, ..., p + q})"
},
{
"math_id": 3,
"text": "\\iota_S , S \\subset \\{0,1,...,p+q \\} "
},
{
"math_id": 4,
"text": "(p+q)"
},
{
"math_id": 5,
"text": "\\{0,...,p+q \\}"
},
{
"math_id": 6,
"text": " \\sigma \\circ \\iota_{0,1, ..., p}"
},
{
"math_id": 7,
"text": "\\sigma \\circ \\iota_{p, p+1, ..., p + q}"
},
{
"math_id": 8,
"text": "\\delta(\\alpha^p \\smile \\beta^q) = \\delta{\\alpha^p} \\smile \\beta^q + (-1)^p(\\alpha^p \\smile \\delta{\\beta^q})."
},
{
"math_id": 9,
"text": " H^p(X) \\times H^q(X) \\to H^{p+q}(X). "
},
{
"math_id": 10,
"text": "\\alpha^p \\smile \\beta^q = (-1)^{pq}(\\beta^q \\smile \\alpha^p)"
},
{
"math_id": 11,
"text": "f\\colon X\\to Y"
},
{
"math_id": 12,
"text": "f^*\\colon H^*(Y)\\to H^*(X)"
},
{
"math_id": 13,
"text": "f^*(\\alpha \\smile \\beta) =f^*(\\alpha) \\smile f^*(\\beta),"
},
{
"math_id": 14,
"text": " \\smile \\colon H^p(X) \\times H^q(X) \\to H^{p+q}(X)"
},
{
"math_id": 15,
"text": " \\displaystyle C^\\bullet(X) \\times C^\\bullet(X) \\to C^\\bullet(X \\times X) \\overset{\\Delta^*}{\\to} C^\\bullet(X) "
},
{
"math_id": 16,
"text": "X"
},
{
"math_id": 17,
"text": "X \\times X"
},
{
"math_id": 18,
"text": " \\Delta \\colon X \\to X \\times X"
},
{
"math_id": 19,
"text": "\\Delta^* \\colon H^\\bullet(X \\times X) \\to H^\\bullet(X)"
},
{
"math_id": 20,
"text": "\\Delta_* \\colon H_\\bullet(X) \\to H_\\bullet(X \\times X)"
},
{
"math_id": 21,
"text": " (u_1 + u_2) \\smile v = u_1 \\smile v + u_2 \\smile v "
},
{
"math_id": 22,
"text": " u \\smile (v_1 + v_2) = u \\smile v_1 + u \\smile v_2. "
},
{
"math_id": 23,
"text": "X:= S^2\\vee S^1\\vee S^1"
},
{
"math_id": 24,
"text": "S^1"
},
{
"math_id": 25,
"text": "M"
},
{
"math_id": 26,
"text": "n"
},
{
"math_id": 27,
"text": "A,B"
},
{
"math_id": 28,
"text": "i"
},
{
"math_id": 29,
"text": "j"
},
{
"math_id": 30,
"text": "A \\cap B"
},
{
"math_id": 31,
"text": "i+j"
},
{
"math_id": 32,
"text": "[A]^*, [B]^* \\in H^{i},H^{j}"
},
{
"math_id": 33,
"text": "[A]^* \\smile [B]^*=[A \\cap B]^* \\in H^{i+j}(X, \\mathbb Z)"
}
] | https://en.wikipedia.org/wiki?curid=848684 |
8487086 | Infinitely near point | Concept in algebraic geometry
In algebraic geometry, an infinitely near point of an algebraic surface "S" is a point on a surface obtained from "S" by repeatedly blowing up points. Infinitely near points of algebraic surfaces were introduced by Max Noether (1876).
There are some other meanings of "infinitely near point". Infinitely near points can also be defined for higher-dimensional varieties: there are several inequivalent ways to do this, depending on what one is allowed to blow up. Weil gave a definition of infinitely near points of smooth varieties, though these are not the same as infinitely near points in algebraic geometry.
In the line of hyperreal numbers, an extension of the real number line, two points are called infinitely near if their difference is infinitesimal.
Definition.
When blowing up is applied to a point "P" on a surface "S", the new surface "S"* contains a whole curve "C" where "P" used to be. The points of "C" have the geometric interpretation as the tangent directions at "P" to "S". They can be called infinitely near to "P" as a way of visualizing them on "S", rather than on "S"*. More generally this construction can be iterated by blowing up a point on the new curve "C", and so on.
An infinitely near point (of order "n") "P""n" on a surface "S"0 is given by a sequence of points "P"0, "P"1...,"P""n" on surfaces "S"0, "S"1...,"S""n" such that "S""i" is given by blowing up "S""i"–1 at the point "P""i"–1, and "P""i" is a point of the surface "S""i" with image "P""i"–1.
In particular the points of the surface "S" are the infinitely near points on "S" of order 0.
Infinitely near points correspond to 1-dimensional valuations of the function field of "S" with 0-dimensional center, and in particular correspond to some of the points of the Zariski–Riemann surface. (The 1-dimensional valuations with 1-dimensional center correspond to irreducible curves of "S".) It is also possible to iterate the construction infinitely often, producing an infinite sequence "P"0, "P"1... of infinitely near points. These infinite sequences correspond to the 0-dimensional valuations of the function field of the surface, which correspond to the "0-dimensional" points of the Zariski–Riemann surface.
Applications.
If "C" and "D" are distinct irreducible curves on a smooth surface "S" intersecting at a point "p", then the multiplicity of their intersection at "p" is given by
formula_0
where "m""x"("C") is the multiplicity of "C" at "x". In general this is larger than "m""p"("C")"m""p"("D") if "C" and "D" have a common tangent line at "x" so that they also intersect at infinitely near points of order greater than 0, for example if "C" is the line "y" = 0 and "D" is the parabola "y" = "x"2 and "p" = (0,0).
The genus of "C" is given by
formula_1
where "N" is the normalization of "C" and "m""x" is the multiplicity of the infinitely near point "x" on "C". | [
{
"math_id": 0,
"text": "\\sum_{x \\text{ infinitely near }p} m_x(C)m_x(D)"
},
{
"math_id": 1,
"text": " g(C)=g(N)+\\sum_{\\text{infinitely near points }x}m_x(m_x-1)/2"
}
] | https://en.wikipedia.org/wiki?curid=8487086 |
8488549 | Huai-Dong Cao | Chinese mathematician
Huai-Dong Cao (born 8 November 1959, in Jiangsu) is a Chinese–American mathematician. He is the A. Everett Pitcher Professor of Mathematics at Lehigh University. He is known for his research contributions to the Ricci flow, a topic in the field of geometric analysis.
Academic history.
Cao received his B.A. from Tsinghua University in 1981 and his Ph.D. from Princeton University in 1986 under the supervision of Shing-Tung Yau.
Cao is a former Associate Director, Institute for Pure and Applied Mathematics (IPAM) at UCLA. He has held visiting Professorships at MIT, Harvard University, Isaac Newton Institute, Max-Planck Institute, IHES, ETH Zurich, and University of Pisa. He has been the managing editor of the "Journal of Differential Geometry" since 2003. His awards and honors include:
Mathematical contributions.
Kähler-Ricci flow.
In 1982, Richard S. Hamilton introduced the Ricci flow, proving a dramatic new theorem on the geometry of three-dimensional manifolds. Cao, who had just begun his Ph.D. studies under Shing-Tung Yau, began to study the Ricci flow in the setting of Kähler manifolds. In his Ph.D. thesis, published in 1985, he showed that Yau's estimates in the resolution of the Calabi conjecture could be modified to the Kähler-Ricci flow context, to prove a convergence theorem similar to Hamilton's original result. This also provided a parabolic alternative to Yau's method of continuity in the proof of the Calabi conjecture, although much of the technical work in the proofs is similar.
Perelman's work on the Ricci flow.
Following a suggestion of Yau's that the Ricci flow could be used to prove William Thurston's Geometrization conjecture, Hamilton developed the theory over the following two decades. In 2002 and 2003, Grisha Perelman posted two articles to the arXiv in which he claimed to present a proof, via the Ricci flow, of the geometrization conjecture. Additionally, he posted a third article in which he gave a shortcut to the proof of the famous Poincaré conjecture, for which the results in the second half of the second paper were unnecessary. Perelman's papers were immediately recognized as giving notable new results in the theory of Ricci flow, although many mathematicians were unable to fully understand the technical details of some unusually complex or terse sections in his work.
Bruce Kleiner of Yale University and John Lott of the University of Michigan began posting annotations of Perelman's first two papers to the web in 2003, adding to and modifying them over the next several years. The results of this work were published in an academic journal in 2008. Cao collaborated with Xi-Ping Zhu of Zhongshan University, publishing an exposition in 2006 of Hamilton's work and of Perelman's first two papers, explaining them in the context of the mathematical literature on geometric analysis. John Morgan of Columbia University and Gang Tian of Princeton University published a book in 2007 on Perelman's first and third paper, and the first half of the second paper; they later published a second book on the second half of Perelman's second paper.
The abstract of Cao and Zhu's article states
<templatestyles src="Template:Blockquote/styles.css" />In this paper, we give a complete proof of the Poincaré and the geometrization conjectures. This work depends on the accumulative works of many geometric analysts in the past thirty years. This proof should be considered as the crowning achievement of the Hamilton-Perelman theory of Ricci flow.
with introduction beginning
<templatestyles src="Template:Blockquote/styles.css" />In this paper, we shall present the Hamilton-Perelman theory of Ricci flow. Based on it, we shall give the first written account of a complete proof of the Poincaré conjecture and the geometrization conjecture of Thurston. While the complete work is an accumulated efforts of many geometric analysts, the major contributors are unquestionably Hamilton and Perelman.
Some observers felt that Cao and Zhu were overstating the value of their paper. Additionally, it was found that a few pages of Cao and Zhu's article were similar to those in Kleiner and Lott's article, leading to accusations of plagiarism. Cao and Zhu said that, in 2003, they had taken notes on that section of Perelman's work from Kleiner and Lott's early postings, and that as an accidental oversight they had failed to realize the source of the notes when writing their article in 2005. They released a revised version of their article to the arXiv in December 2006.
Gradient Ricci solitons.
A "gradient Ricci soliton" consists of a Riemannian manifold ("M", "g") and a function f on M such that Ric"g" + Hess"g" "f" is a constant multiple of g. In the special case that M has a complex structure, g is a Kähler metric, and the gradient of f is a holomorphic vector field, one has a "gradient Kähler-Ricci soliton". Ricci solitons are sometimes considered as generalizations of Einstein metrics, which correspond to the case "f"
0. The importance of gradient Ricci solitons to the theory of the Ricci flow was first recognized by Hamilton in an influential 1995 article. In Perelman's analysis, the gradient Ricci solitons where the constant multiple is positive are especially important; these are called "gradient shrinking Ricci solitons". A 2010 survey of Cao's on Ricci solitons has been widely cited.
In 1996, Cao studied gradient Kähler-Ricci solitons under the ansatz of rotational symmetry, so that the Ricci soliton equation reduces to ODE analysis. He showed that for each positive n there is a gradient steady Kähler-Ricci soliton on formula_0 which is rotationally symmetric, complete, and positively curved. In the case that n is equal to 1, this recovers Hamilton's cigar soliton. Cao also showed the existence of gradient steady Kähler-Ricci solitons on the total space of the canonical bundle over complex projective space which is complete and rotationally symmetric, and nonnegatively curved. He constructed closed examples of gradient shrinking Kähler-Ricci solitons on the projectivization of certain line bundles over complex projective space; these examples were considered independently by Norihito Koiso. Cao and Koiso's ansatz was pushed further in an influential article of Mikhail Feldman, Tom Ilmanen, and Dan Knopf, and the examples of Cao, Koiso, and Feldman-Ilmanen-Knopf have been unified and extended in 2011 by Andrew Dancer and McKenzie Wang.
Utilizing an argument of Perelman's, Cao and Detang Zhou showed that complete gradient shrinking Ricci solitons have a Gaussian character, in that for any given point p of M, the function f must grow quadratically with the distance function to p. Additionally, the volume of geodesic balls around p can grow at most polynomially with their radius. These estimates make possible much integral analysis to do with complete gradient shrinking Ricci solitons, in particular allowing "e"−"f" to be used as a weighting function. | [
{
"math_id": 0,
"text": "\\mathbb{C}^n"
}
] | https://en.wikipedia.org/wiki?curid=8488549 |
8489464 | 331 model | The 331 model in particle physics is an extension of the electroweak gauge symmetry which offers an explanation of why there must be three families of quarks and leptons. The name "331" comes from the full gauge symmetry group formula_0.
Details.
The 331 model in particle physics is an extension of the electroweak gauge symmetry from formula_1 to formula_2 with formula_3.
In the 331 model, hypercharge is given by
formula_4
and electric charge is given by
formula_5
where formula_6 and formula_7 are the Gell-Mann matrices of SU(3)L and formula_8 and formula_9 are parameters of the model.
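As a purely illustrative sketch of how these formulas are used, the charges of the components of an SU(3)L triplet can be read off from the diagonal of formula_5 once formula_8 and formula_9 are fixed. Here formula_6 and formula_7 are taken to be the diagonal Gell-Mann matrices λ3 and λ8, formula_9 is interpreted as the 3×3 identity, and the numerical inputs are hypothetical; actual charge assignments and normalization conventions differ between versions of the model and between papers:

```python
# Sketch (illustrative only): evaluating Y = beta*T_8 + I*X and Q = (Y + T_3)/2
# for a hypothetical SU(3)_L triplet, taking T_3 = lambda_3, T_8 = lambda_8 and
# interpreting I as the 3x3 identity. The values beta = -sqrt(3), X = 0 are
# example inputs, not assignments taken from the article.
import numpy as np

lam3 = np.diag([1.0, -1.0, 0.0])                 # Gell-Mann lambda_3
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)  # Gell-Mann lambda_8
I3 = np.eye(3)

def triplet_charges(beta, X):
    Y = beta * lam8 + X * I3        # hypercharge operator
    Q = (Y + lam3) / 2.0            # electric charge operator
    return np.diag(Q)

print(triplet_charges(beta=-np.sqrt(3.0), X=0.0))   # charges (0, -1, +1) for this choice
```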
Motivation.
The 331 model offers an explanation of why there must be three families of quarks and leptons. One curious feature of the Standard Model is that the gauge anomalies independently exactly cancel for each of the three known quark-lepton families. The Standard Model thus offers no explanation of why there are three families, or indeed why there is more than one family.
The idea behind the 331 model is to extend the standard model such that all three families are required for anomaly cancellation. More specifically, in this model the three families transform differently under an extended gauge group. The perfect cancellation of the anomalies within each family is ruined, but the anomalies of the extended gauge group cancel when all three families are present. The cancellation will persist for 6, 9, ... families, so having only the three families observed in nature is the least possible matter content.
Such a construction necessarily requires the addition of further gauge bosons and chiral fermions, which then provide testable predictions of the model in the form of elementary particles. These particles could be found experimentally at masses above the electroweak scale, which is on the order of 10^2–10^3 GeV. The minimal 331 model predicts singly and doubly charged spin-one bosons, bileptons, which could show up in electron-electron scattering when it is studied at TeV energy scales and may also be produced in multi-TeV proton–proton scattering at the Large Hadron Collider, which can reach 10^4 GeV.
{
"math_id": 0,
"text": "SU(3)_C \\times SU(3)_L \\times U(1)_X\\,"
},
{
"math_id": 1,
"text": "SU(2)_L \\times U(1)_Y"
},
{
"math_id": 2,
"text": "\\,SU(3)_L \\times U(1)_X\\,"
},
{
"math_id": 3,
"text": "SU(2)_L \\subset SU(3)_L"
},
{
"math_id": 4,
"text": "Y = \\beta\\,T_8 + I\\,X"
},
{
"math_id": 5,
"text": "Q = \\frac{Y + T_3}{2}"
},
{
"math_id": 6,
"text": "T_3"
},
{
"math_id": 7,
"text": "T_8"
},
{
"math_id": 8,
"text": "\\beta"
},
{
"math_id": 9,
"text": "I"
}
] | https://en.wikipedia.org/wiki?curid=8489464 |
8491096 | Poynting effect | The Poynting effect may refer to two unrelated physical phenomena. Neither should be confused with the Poynting–Robertson effect. All of these effects are named after John Henry Poynting, an English physicist.
Solid mechanics.
In solid mechanics, the Poynting effect is a finite strain theory effect observed when an elastic cube is sheared between two plates and stress develops in the direction normal to the sheared faces, or when a cylinder is subjected to torsion and its axial length changes. The Poynting phenomenon in torsion was noticed experimentally by J. H. Poynting.
Chemistry and thermodynamics.
In thermodynamics, the Poynting effect generally refers to the change in the fugacity of a liquid when a non-condensable gas is mixed with the vapor at saturated conditions.
formula_0
Equivalently in terms of vapor pressure, if one assumes that the vapor and the non-condensable gas behave as ideal gases and an ideal mixture, it can be shown that:
formula_1
where
formula_2 is the modified vapor pressure
formula_3 is the unmodified vapor pressure
formula_4 is the liquid molar volume
formula_5 is the gas constant
formula_6 is the temperature
formula_7 is the total pressure (vapor pressure + non-condensable gas)
A common example is the production of the medicine Entonox, a high-pressure mixture of nitrous oxide and oxygen. The ability to combine N2O and O2 at high pressure while remaining in the gaseous form is due to the Poynting effect.
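As an illustrative order-of-magnitude sketch (using approximate textbook property values for water; the numbers are not taken from the references above), the correction predicted by the formula is modest even at high pressure:

```python
# Sketch: Poynting correction for liquid water at 298 K when the total pressure
# is raised to 100 bar by a non-condensable gas, using the ideal-gas /
# ideal-mixture expression above. Property values are approximate.
import math

v_liq = 1.8e-5     # molar volume of liquid water, m^3/mol
R = 8.314          # J/(mol*K)
T = 298.0          # K
p_sat = 3.17e3     # saturation pressure of water at 298 K, Pa
P_total = 1.0e7    # total pressure, Pa (100 bar)

enhancement = math.exp(v_liq * (P_total - p_sat) / (R * T))
print(enhancement)   # about 1.07: the equilibrium vapor pressure rises by roughly 7%
```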
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\ln \\frac{ f^L (T, P) }{f_\\text{sat} (T)} = \\frac{ 1 }{ R T } \\int_{P^\\text{sat} (T)}^{P} v_\\text{liq} \\, dp "
},
{
"math_id": 1,
"text": " \\ln \\frac{ p_v}{p_{v,o}} = \\frac{ v_\\text{liq} }{ R T } ( P - p_{v,o} ) \\!"
},
{
"math_id": 2,
"text": "p_v"
},
{
"math_id": 3,
"text": "p_{v,o}"
},
{
"math_id": 4,
"text": "v_{liq}"
},
{
"math_id": 5,
"text": "R"
},
{
"math_id": 6,
"text": "T"
},
{
"math_id": 7,
"text": "P"
}
] | https://en.wikipedia.org/wiki?curid=8491096 |
8491596 | Davydov soliton | Quasiparticle used to model vibrations within proteins
In quantum biology, the Davydov soliton (after the Soviet Ukrainian physicist Alexander Davydov) is a quasiparticle representing an excitation propagating along the self-trapped amide I groups within the α-helices of proteins. It is a solution of the Davydov Hamiltonian.
The Davydov model describes the interaction of the amide I vibrations with the hydrogen bonds that stabilize the α-helices of proteins. The elementary excitations within the α-helix are given by the phonons, which correspond to the deformational oscillations of the lattice, and the excitons, which describe the internal amide I excitations of the peptide groups. Referring to the atomic structure of an α-helix region of a protein, the mechanism that creates the Davydov soliton (polaron, exciton) can be described as follows: vibrational energy of the C=O stretching (or amide I) oscillators that is localized on the α-helix acts through a phonon coupling effect to distort the structure of the α-helix, while the helical distortion reacts again through phonon coupling to trap the amide I oscillation energy and prevent its dispersion. This effect is called "self-localization" or "self-trapping". Solitons in which the energy is distributed in a fashion preserving the helical symmetry are dynamically unstable, and such symmetrical solitons once formed decay rapidly when they propagate. On the other hand, an asymmetric soliton which spontaneously breaks the local translational and helical symmetries possesses the lowest energy and is a robust localized entity.
Davydov Hamiltonian.
The Davydov Hamiltonian is formally similar to the Fröhlich-Holstein Hamiltonian for the interaction of electrons with a polarizable lattice. Thus the Hamiltonian of the energy operator formula_0 is
formula_1
where formula_2 is the exciton Hamiltonian, which describes the motion of the amide I excitations between adjacent sites; formula_3 is the phonon Hamiltonian, which describes
the vibrations of the lattice; and formula_4 is the interaction Hamiltonian, which describes the interaction of the amide I excitation with the lattice.
The exciton Hamiltonian formula_2 is
formula_5
where the index formula_6 counts the peptide groups along the α-helix spine, the index formula_7 counts each α-helix spine, formula_8 zJ is the energy of the amide I
vibration (CO stretching), formula_9 zJ is the dipole-dipole coupling energy between a particular amide I bond and those ahead and behind along the same spine, formula_10 zJ is the
dipole-dipole coupling energy between a particular amide I bond and those on adjacent spines in the same unit cell of the protein α-helix, formula_11 and formula_12 are respectively
the boson creation and annihilation operator for an amide I exciton at the peptide group formula_13.
The phonon Hamiltonian formula_3 is
formula_14
where formula_15 is the displacement operator from the equilibrium position of the peptide group formula_13, formula_16 is the momentum operator of the peptide group formula_13, formula_17 is the mass of the peptide group formula_13, formula_18 N/m is an effective elasticity coefficient of the lattice (the spring constant of a hydrogen bond) and formula_19 N/m is the lateral coupling between the spines.
Finally, the interaction Hamiltonian formula_4 is
formula_20
where formula_21 pN is an anharmonic parameter arising from the coupling between the exciton and the lattice displacements (phonon) and parameterizes the strength of the exciton-phonon interaction. The value of this parameter for α-helix has been determined via comparison of the theoretically calculated absorption line shapes with the experimentally measured ones.
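A small numerical sketch of the exciton part alone (one spine, nearest-neighbour coupling only, using the parameter values quoted above) shows the delocalized free-exciton band; the self-trapped soliton appears only when the coupling formula_21 to the lattice displacements is retained, which this toy matrix deliberately omits:

```python
# Sketch: bare exciton block of the Davydov Hamiltonian for a single spine of
# N peptide groups, with site energy E_0 = 32.8 zJ and nearest-neighbour
# dipole-dipole coupling J_1 = 0.155 zJ (values quoted in the text). Inter-spine
# coupling J_2 and the exciton-phonon interaction chi are left out here.
import numpy as np

N = 40
E0 = 32.8      # zJ
J1 = 0.155     # zJ

H_ex = E0 * np.eye(N)
for n in range(N - 1):
    H_ex[n, n + 1] = H_ex[n + 1, n] = -J1

band = np.linalg.eigvalsh(H_ex)
print(band.min(), band.max())   # free-exciton band, roughly E0 - 2*J1 to E0 + 2*J1
```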
Davydov soliton properties.
There are three possible fundamental approaches for deriving equations of motion from the Davydov Hamiltonian:
The mathematical techniques that are used to analyze the Davydov soliton are similar to some that have been developed in polaron theory. In this context, the Davydov soliton corresponds to a polaron that is:
The Davydov soliton is a "quantum quasiparticle" and it obeys Heisenberg's uncertainty principle. Thus any model that does not impose translational invariance is flawed by construction. Supposing that the Davydov soliton is localized to 5 turns of the α-helix results in significant uncertainty in the velocity of the soliton formula_22 m/s, a fact that is obscured if one models the Davydov soliton as a classical object.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{H}"
},
{
"math_id": 1,
"text": "\n\\hat{H}=\\hat{H}_{\\text{ex}}+\\hat{H}_{\\text{ph}}+\\hat{H}_{\\text{int}}"
},
{
"math_id": 2,
"text": "\\hat{H}_{\\text{ex}}"
},
{
"math_id": 3,
"text": "\\hat{H}_{\\text{ph}}"
},
{
"math_id": 4,
"text": "\\hat{H}_{\\text{int}}"
},
{
"math_id": 5,
"text": "\\begin{align} \n\\hat{H}_{\\text{ex}} =& \\sum_{n,\\alpha}E_{0}\\hat{A}_{n,\\alpha}^{\\dagger}\\hat{A}_{n,\\alpha} \\\\\n&-J_1\\sum_{n,\\alpha}\\left(\\hat{A}_{n,\\alpha}^{\\dagger}\\hat{A}_{n+1,\\alpha}+\\hat{A}_{n,\\alpha}^{\\dagger}\\hat{A}_{n-1,\\alpha}\\right) \\\\\n&+J_2\\sum_{n,\\alpha}\\left(\\hat{A}_{n,\\alpha}^{\\dagger}\\hat{A}_{n,\\alpha+1}+\\hat{A}_{n,\\alpha}^{\\dagger}\\hat{A}_{n,\\alpha-1}\\right)\n\\end{align}"
},
{
"math_id": 6,
"text": "n=1,2,\\cdots,N"
},
{
"math_id": 7,
"text": "\\alpha=1,2,3"
},
{
"math_id": 8,
"text": "E_{0}=32.8"
},
{
"math_id": 9,
"text": "J_1=0.155"
},
{
"math_id": 10,
"text": "J_2=0.246"
},
{
"math_id": 11,
"text": "\\hat{A}_{n,\\alpha}^{\\dagger}"
},
{
"math_id": 12,
"text": "\\hat{A}_{n,\\alpha}"
},
{
"math_id": 13,
"text": "(n,\\alpha)"
},
{
"math_id": 14,
"text": "\n\\hat{H}_{\\text{ph}}=\\frac{1}{2}\\sum_{n,\\alpha}\\left[w_1(\\hat{u}_{n+1,\\alpha}-\\hat{u}_{n,\\alpha})^{2}+w_2(\\hat{u}_{n,\\alpha+1}-\\hat{u}_{n,\\alpha})^{2}+\\frac{\\hat{p}_{n,\\alpha}^{2}}{M_{n,\\alpha}}\\right]"
},
{
"math_id": 15,
"text": "\\hat{u}_{n,\\alpha}"
},
{
"math_id": 16,
"text": "\\hat{p}_{n,\\alpha}"
},
{
"math_id": 17,
"text": "M_{n,\\alpha}"
},
{
"math_id": 18,
"text": "w_1=13-19.5"
},
{
"math_id": 19,
"text": "w_2=30.5"
},
{
"math_id": 20,
"text": "\n\\hat{H}_{\\text{int}}=\\chi\\sum_{n,\\alpha}\\left[(\\hat{u}_{n+1,\\alpha}-\\hat{u}_{n,\\alpha})\\hat{A}_{n,\\alpha}^{\\dagger}\\hat{A}_{n,\\alpha}\\right]"
},
{
"math_id": 21,
"text": "\\chi=35-62"
},
{
"math_id": 22,
"text": "\\Delta v=133"
}
] | https://en.wikipedia.org/wiki?curid=8491596 |
8491791 | Minkowski's bound | Limits ideals to be checked in order to determine the class number of a number field
In algebraic number theory, Minkowski's bound gives an upper bound of the norm of ideals to be checked in order to determine the class number of a number field "K". It is named for the mathematician Hermann Minkowski.
Definition.
Let "D" be the discriminant of the field, "n" be the degree of "K" over formula_0, and formula_1 be the number of complex embeddings where formula_2 is the number of real embeddings. Then every class in the ideal class group of "K" contains an integral ideal of norm not exceeding Minkowski's bound
formula_3
Minkowski's constant for the field "K" is this bound "M""K".
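For example, for "K" = Q(√−5) one has "n" = 2, "r"1 = 0, "r"2 = 1 and "D" = −20, giving "M""K" ≈ 2.85, so the ideal class group is generated by prime ideals of norm at most 2. A minimal Python sketch of the computation:

```python
# Sketch: evaluating the Minkowski bound M_K = sqrt(|D|) * (4/pi)**r2 * n!/n**n.
from math import sqrt, pi, factorial

def minkowski_bound(D, n, r2):
    return sqrt(abs(D)) * (4 / pi) ** r2 * factorial(n) / n ** n

# K = Q(sqrt(-5)): n = 2, r1 = 0, r2 = 1, discriminant D = -20
print(minkowski_bound(D=-20, n=2, r2=1))   # ~2.85, so only prime ideals of norm 2 need checking
```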
Properties.
Since the number of integral ideals of given norm is finite, the finiteness of the class number is an immediate consequence, and further, the ideal class group is generated by the prime ideals of norm at most "M""K".
Minkowski's bound may be used to derive a lower bound for the discriminant of a field "K" given "n", "r"1 and "r"2. Since an integral ideal has norm at least one, we have 1 ≤ "M""K", so that
formula_4
For "n" at least 2, it is easy to show that the lower bound is greater than 1, so we obtain Minkowski's Theorem, that the discriminant of every number field, other than Q, is non-trivial. This implies that the field of rational numbers has no unramified extension.
Proof.
The result is a consequence of Minkowski's theorem.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Q}"
},
{
"math_id": 1,
"text": "2 r_2 = n - r_1"
},
{
"math_id": 2,
"text": "r_1"
},
{
"math_id": 3,
"text": " M_K = \\sqrt{|D|} \\left(\\frac{4}{\\pi}\\right)^{r_2} \\frac{n!}{n^n} \\ . "
},
{
"math_id": 4,
"text": " \\sqrt{|D|} \\ge \\left(\\frac{\\pi}{4}\\right)^{r_2} \\frac{n^n}{n!} \\ge \\left(\\frac{\\pi}{4}\\right)^{n/2} \\frac{n^n}{n!} \\ . "
}
] | https://en.wikipedia.org/wiki?curid=8491791 |
849181 | CPU cache | Hardware cache of a central processing unit
A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory. A cache is a smaller, faster memory, located closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have a hierarchy of multiple cache levels (L1, L2, often L3, and rarely even L4), with different instruction-specific and data-specific caches at level 1. The cache memory is typically implemented with static random-access memory (SRAM), which in modern CPUs is by far the largest part of the chip area. However, SRAM is not always used for all levels (of I- or D-cache), or even for any level; sometimes the later levels, or all levels, are implemented with eDRAM.
Other types of caches exist (that are not counted towards the "cache size" of the most important caches mentioned above), such as the translation lookaside buffer (TLB) which is part of the memory management unit (MMU) which most CPUs have.
Overview.
When trying to read from or write to a location in the main memory, the processor checks whether the data from that location is already in the cache. If so, the processor will read from or write to the cache instead of the much slower main memory.
Many modern desktop, server, and industrial CPUs have at least three independent caches:
History.
Early examples of CPU caches include the Atlas 2 and the IBM System/360 Model 85 in the 1960s. The first CPUs that used a cache had only one level of cache; unlike later level 1 cache, it was not split into L1d (for data) and L1i (for instructions). Split L1 cache started in 1976 with the IBM 801 CPU, became mainstream in the late 1980s, and in 1997 entered the embedded CPU market with the ARMv5TE. In 2015, even sub-dollar SoCs split the L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split, and acts as a common repository for the already split L1 cache. Every core of a multi-core processor has a dedicated L1 cache, which is usually not shared between the cores. The L2 cache, and higher-level caches, may be shared between the cores. L4 cache is currently uncommon, and is generally dynamic random-access memory (DRAM) on a separate die or chip, rather than static random-access memory (SRAM). An exception to this is when eDRAM is used for all levels of cache, down to L1. Historically L1 was also on a separate die; however, bigger die sizes have allowed integration of it as well as other cache levels, with the possible exception of the last level. Each extra level of cache tends to be bigger and optimized differently.
Caches (like RAM historically) have generally been sized in powers of two: 2, 4, 8, 16, etc. KiB. Once sizes reached the MiB range (i.e. for larger non-L1 caches), the pattern broke down fairly early, allowing larger caches without being forced into the doubling-in-size paradigm; for example, the Intel Core 2 Duo had a 3 MiB L2 cache in April 2008. This happened much later for L1 caches, as their size is generally still a small number of KiB. The IBM zEC12 from 2012 is an exception, however, with an unusually large 96 KiB L1 data cache for its time; later, the IBM z13 has a 96 KiB L1 instruction cache (and a 128 KiB L1 data cache), and Intel Ice Lake-based processors from 2018 have a 48 KiB L1 data cache and a 48 KiB L1 instruction cache. In 2020, some Intel Atom CPUs (with up to 24 cores) have cache sizes that are multiples of 4.5 MiB and 15 MiB.
Operation.
Cache entries.
Data is transferred between memory and cache in blocks of fixed size, called "cache lines" or "cache blocks". When a cache line is copied from memory into the cache, a cache entry is created. The cache entry will include the copied data as well as the requested memory location (called a tag).
When the processor needs to read or write a location in memory, it first checks for a corresponding entry in the cache. The cache checks for the contents of the requested memory location in any cache lines that might contain that address. If the processor finds that the memory location is in the cache, a cache hit has occurred. However, if the processor does not find the memory location in the cache, a cache miss has occurred. In the case of a cache hit, the processor immediately reads or writes the data in the cache line. For a cache miss, the cache allocates a new entry and copies data from main memory, then the request is fulfilled from the contents of the cache.
Policies.
Replacement policies.
To make room for the new entry on a cache miss, the cache may have to evict one of the existing entries. The heuristic it uses to choose the entry to evict is called the replacement policy. The fundamental problem with any replacement policy is that it must predict which existing cache entry is least likely to be used in the future. Predicting the future is difficult, so there is no perfect method to choose among the variety of replacement policies available. One popular replacement policy, least-recently used (LRU), replaces the least recently accessed entry.
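A minimal sketch of the LRU policy (modelled in Python with an ordered dictionary; this illustrates the policy itself, not any real hardware implementation):

```python
# Sketch: LRU replacement for a small fully associative cache. A hit moves the
# entry to the most-recently-used end; a miss on a full cache evicts the entry
# at the least-recently-used end before inserting the new line.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()              # tag -> cached line

    def access(self, tag):
        if tag in self.entries:                   # cache hit
            self.entries.move_to_end(tag)
            return True
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)      # evict least recently used
        self.entries[tag] = f"line@{tag:#x}"      # cache miss: fill from memory
        return False

cache = LRUCache(capacity=2)
print([cache.access(t) for t in (0x1, 0x2, 0x1, 0x3, 0x2)])
# [False, False, True, False, False]: 0x2 was least recently used when 0x3 arrived
```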
Marking some memory ranges as non-cacheable can improve performance, by avoiding caching of memory regions that are rarely re-accessed. This avoids the overhead of loading something into the cache without having any reuse. Cache entries may also be disabled or locked depending on the context.
Write policies.
If data is written to the cache, at some point it must also be written to main memory; the timing of this write is known as the write policy. In a write-through cache, every write to the cache causes a write to main memory. Alternatively, in a write-back or copy-back cache, writes are not immediately mirrored to the main memory, and the cache instead tracks which locations have been written over, marking them as dirty. The data in these locations is written back to the main memory only when that data is evicted from the cache. For this reason, a read miss in a write-back cache may sometimes require two memory accesses to service: one to first write the dirty location to main memory, and then another to read the new location from memory. Also, a write to a main memory location that is not yet mapped in a write-back cache may evict an already dirty location, thereby freeing that cache space for the new memory location.
There are intermediate policies as well. The cache may be write-through, but the writes may be held in a store data queue temporarily, usually so multiple stores can be processed together (which can reduce bus turnarounds and improve bus utilization).
Cached data from the main memory may be changed by other entities (e.g., peripherals using direct memory access (DMA) or another core in a multi-core processor), in which case the copy in the cache may become out-of-date or stale. Alternatively, when a CPU in a multiprocessor system updates data in the cache, copies of data in caches associated with other CPUs become stale. Communication protocols between the cache managers that keep the data consistent are known as cache coherence protocols.
Cache performance.
Cache performance measurement has become important in recent times, as the speed gap between memory and processor performance continues to grow. The cache was introduced to reduce this speed gap. Knowing how well the cache bridges the gap between processor and memory speed thus becomes important, especially in high-performance systems. The cache hit rate and the cache miss rate play an important role in determining this performance. To improve cache performance, reducing the miss rate is one of the necessary steps, among others. Decreasing the access time to the cache also boosts its performance.
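One common back-of-the-envelope measure that ties these quantities together is the average memory access time (AMAT); the cycle counts below are hypothetical.

```python
# Average memory access time: hit time plus miss rate times miss penalty.
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# Hypothetical figures: 4-cycle hit, 3% miss rate, 200-cycle miss penalty.
print(amat(4, 0.03, 200))  # 10.0 cycles on average
```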
CPU stalls.
The time taken to fetch one cache line from memory (read latency due to a cache miss) matters because the CPU will run out of work while waiting for the cache line. When a CPU reaches this state, it is called a stall. As CPUs become faster compared to main memory, stalls due to cache misses displace more potential computation; modern CPUs can execute hundreds of instructions in the time taken to fetch a single cache line from main memory.
Various techniques have been employed to keep the CPU busy during this time, including out-of-order execution in which the CPU attempts to execute independent instructions after the instruction that is waiting for the cache miss data. Another technology, used by many processors, is simultaneous multithreading (SMT), which allows an alternate thread to use the CPU core while the first thread waits for required CPU resources to become available.
Associativity.
The placement policy decides where in the cache a copy of a particular entry of main memory will go. If the placement policy is free to choose any entry in the cache to hold the copy, the cache is called "fully associative". At the other extreme, if each entry in the main memory can go in just one place in the cache, the cache is "direct-mapped". Many caches implement a compromise in which each entry in the main memory can go to any one of N places in the cache, and are described as N-way set associative. For example, the level-1 data cache in an AMD Athlon is two-way set associative, which means that any particular location in main memory can be cached in either of two locations in the level-1 data cache.
Choosing the right value of associativity involves a trade-off. If there are ten places to which the placement policy could have mapped a memory location, then to check if that location is in the cache, ten cache entries must be searched. Checking more places takes more power and chip area, and potentially more time. On the other hand, caches with more associativity suffer fewer misses (see conflict misses), so that the CPU wastes less time reading from the slow main memory. The general guideline is that doubling the associativity, from direct mapped to two-way, or from two-way to four-way, has about the same effect on raising the hit rate as doubling the cache size. However, increasing associativity beyond four does not improve hit rate as much, and is generally done for other reasons (see virtual aliasing). Some CPUs can dynamically reduce the associativity of their caches in low-power states, which acts as a power-saving measure.
In order of worse but simple to better but complex:
Direct-mapped cache.
In this cache organization, each location in the main memory can go in only one entry in the cache. Therefore, a direct-mapped cache can also be called a "one-way set associative" cache. It does not have a placement policy as such, since there is no choice of which cache entry's contents to evict. This means that if two locations map to the same entry, they may continually knock each other out. Although simpler, a direct-mapped cache needs to be much larger than an associative one to give comparable performance, and it is more unpredictable. Let x be the block number in the cache, y the block number of memory, and n the number of blocks in the cache; then the mapping is done with the help of the equation x = y mod n.
Two-way set associative cache.
If each location in the main memory can be cached in either of two locations in the cache, one logical question is: "which one of the two?" The simplest and most commonly used scheme, shown in the right-hand diagram above, is to use the least significant bits of the memory location's index as the index for the cache memory, and to have two entries for each index. One benefit of this scheme is that the tags stored in the cache do not have to include that part of the main memory address which is implied by the cache memory's index. Since the cache tags have fewer bits, they require fewer transistors, take less space on the processor circuit board or on the microprocessor chip, and can be read and compared faster. Also, the LRU algorithm is especially simple, since only one bit needs to be stored for each pair.
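The single LRU bit per set can be modelled as follows; this is a sketch of the bookkeeping only, with illustrative names.

```python
# Two-way set: one LRU bit per set points at the way to evict next.
class TwoWaySet:
    def __init__(self):
        self.ways = [None, None]  # tags stored in way 0 and way 1
        self.lru = 0              # index of the least recently used way

    def access(self, tag):
        if tag in self.ways:
            used = self.ways.index(tag)      # hit in one of the two ways
        else:
            used = self.lru                  # miss: evict the LRU way
            self.ways[used] = tag
        self.lru = 1 - used                  # the other way is now LRU
        return used
```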
Speculative execution.
One of the advantages of a direct-mapped cache is that it allows simple and fast speculation. Once the address has been computed, the one cache index which might have a copy of that location in memory is known. That cache entry can be read, and the processor can continue to work with that data before it finishes checking that the tag actually matches the requested address.
The idea of having the processor use the cached data before the tag match completes can be applied to associative caches as well. A subset of the tag, called a "hint", can be used to pick just one of the possible cache entries mapping to the requested address. The entry selected by the hint can then be used in parallel with checking the full tag. The hint technique works best when used in the context of address translation, as explained below.
Two-way skewed associative cache.
Other schemes have been suggested, such as the "skewed cache", where the index for way 0 is direct, as above, but the index for way 1 is formed with a hash function. A good hash function has the property that addresses which conflict with the direct mapping tend not to conflict when mapped with the hash function, and so it is less likely that a program will suffer from an unexpectedly large number of conflict misses due to a pathological access pattern. The downside is extra latency from computing the hash function. Additionally, when it comes time to load a new line and evict an old line, it may be difficult to determine which existing line was least recently used, because the new line conflicts with data at different indexes in each way; LRU tracking for non-skewed caches is usually done on a per-set basis. Nevertheless, skewed-associative caches have major advantages over conventional set-associative ones.
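A sketch of the skewed indexing idea follows; the XOR-fold hash shown is a placeholder for illustration, not the hash function of any particular design.

```python
# Skewed two-way cache indexing: way 0 uses the plain index, way 1 a hashed one.
NUM_SETS = 64  # assumed power of two

def index_way0(block_addr):
    return block_addr % NUM_SETS

def index_way1(block_addr):
    # Illustrative hash: XOR-fold higher address bits into the index.
    return (block_addr ^ (block_addr >> 6)) % NUM_SETS

# Two addresses that conflict in way 0 usually land in different sets in way 1.
a, b = 0x100, 0x140
print(index_way0(a) == index_way0(b), index_way1(a) == index_way1(b))  # True False
```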
Pseudo-associative cache.
A true set-associative cache tests all the possible ways simultaneously, using something like a content-addressable memory. A pseudo-associative cache tests each possible way one at a time. A hash-rehash cache and a column-associative cache are examples of a pseudo-associative cache.
In the common case of finding a hit in the first way tested, a pseudo-associative cache is as fast as a direct-mapped cache, but it has a much lower conflict miss rate than a direct-mapped cache, closer to the miss rate of a fully associative cache.
Multicolumn cache.
Compared with a direct-mapped cache, a set associative cache has a reduced number of bits for its cache set index that maps to a cache set, where multiple ways or blocks stay, such as 2 blocks for a 2-way set associative cache and 4 blocks for a 4-way set associative cache. Compared with a direct-mapped cache, the unused cache index bits become a part of the tag bits. For example, a 2-way set associative cache contributes 1 bit to the tag and a 4-way set associative cache contributes 2 bits to the tag. The basic idea of the multicolumn cache is to use the set index to map to a cache set as a conventional set associative cache does, and to use the added tag bits to index a way in the set. For example, in a 4-way set associative cache, the two bits are used to index way 00, way 01, way 10, and way 11, respectively. This double cache indexing is called a "major location mapping", and its latency is equivalent to a direct-mapped access. Extensive experiments in multicolumn cache design show that the hit ratio to major locations is as high as 90%. If a cache mapping conflicts with a cache block in the major location, the existing cache block is moved to another cache way in the same set, called the "selected location". Because the newly indexed cache block is a most recently used (MRU) block, it is placed in the major location in the multicolumn cache with a consideration of temporal locality. Since the multicolumn cache is designed for a cache with high associativity, the number of ways in each set is high; thus, it is easy to find a selected location in the set. An index to the selected location is maintained by additional hardware for the major location in a cache block.
A multicolumn cache retains a high hit ratio due to its high associativity, and has low latency comparable to a direct-mapped cache due to its high percentage of hits in major locations. The concepts of major locations and selected locations in multicolumn caches have been used in several cache designs, including ARM Cortex-R chips, Intel's way-predicting cache memory, IBM's reconfigurable multi-way associative cache memory and Oracle's dynamic cache replacement way selection based on address tag bits.
Cache entry structure.
Cache row entries usually have the following structure:
The "data block" (cache line) contains the actual data fetched from the main memory. The "tag" contains (part of) the address of the actual data fetched from the main memory. The flag bits are discussed below.
The "size" of the cache is the amount of main memory data it can hold. This size can be calculated as the number of bytes stored in each data block times the number of blocks stored in the cache. (The tag, flag and error correction code bits are not included in the size, although they do affect the physical area of a cache.)
An effective memory address which goes along with the cache line (memory block) is split (MSB to LSB) into the tag, the index and the block offset.
The index describes which cache set that the data has been put in. The index length is formula_0 bits for s cache sets.
The block offset specifies the desired data within the stored data block within the cache row. Typically the effective address is in bytes, so the block offset length is formula_1 bits, where b is the number of bytes per data block.
The tag contains the most significant bits of the address, which are checked against all rows in the current set (the set has been retrieved by index) to see if this set contains the requested address. If it does, a cache hit occurs. The tag length in bits is as follows:
tag_length = address_length - index_length - block_offset_length
Some authors refer to the block offset as simply the "offset" or the "displacement".
Example.
The original Pentium 4 processor had a four-way set associative L1 data cache of 8 KiB in size, with 64-byte cache blocks. Hence, there are 8 KiB / 64 = 128 cache blocks. The number of sets is equal to the number of cache blocks divided by the number of ways of associativity, which leads to 128 / 4 = 32 sets, and hence 2^5 = 32 different indices. There are 2^6 = 64 possible offsets. Since the CPU address is 32 bits wide, this implies 32 - 5 - 6 = 21 bits for the tag field.
The original Pentium 4 processor also had an eight-way set associative integrated L2 cache, 256 KiB in size, with 128-byte cache blocks. This implies 32 - 8 - 7 = 17 bits for the tag field.
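The bit-field arithmetic in these two examples can be checked with a short function (a sketch; the parameter names are illustrative):

```python
from math import log2

# Split an address into tag/index/offset bit widths for a set-associative cache.
def field_widths(address_bits, cache_bytes, block_bytes, ways):
    sets = cache_bytes // (block_bytes * ways)
    offset_bits = int(log2(block_bytes))
    index_bits = int(log2(sets))
    tag_bits = address_bits - index_bits - offset_bits
    return tag_bits, index_bits, offset_bits

print(field_widths(32, 8 * 1024, 64, 4))     # (21, 5, 6)  -> the L1 example above
print(field_widths(32, 256 * 1024, 128, 8))  # (17, 8, 7)  -> the L2 example above
```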
Flag bits.
An instruction cache requires only one flag bit per cache row entry: a valid bit. The valid bit indicates whether or not a cache block has been loaded with valid data.
On power-up, the hardware sets all the valid bits in all the caches to "invalid". Some systems also set a valid bit to "invalid" at other times, such as when multi-master bus snooping hardware in the cache of one processor hears an address broadcast from some other processor, and realizes that certain data blocks in the local cache are now stale and should be marked invalid.
A data cache typically requires two flag bits per cache line – a valid bit and a dirty bit. Having a dirty bit set indicates that the associated cache line has been changed since it was read from main memory ("dirty"), meaning that the processor has written data to that line and the new value has not propagated all the way to main memory.
Cache miss.
A cache miss is a failed attempt to read or write a piece of data in the cache, which results in a main memory access with much longer latency. There are three kinds of cache misses: instruction read miss, data read miss, and data write miss.
"Cache read misses" from an "instruction" cache generally cause the largest delay, because the processor, or at least the thread of execution, has to wait (stall) until the instruction is fetched from main memory. "Cache read misses" from a "data" cache usually cause a smaller delay, because instructions not dependent on the cache read can be issued and continue execution until the data is returned from main memory, and the dependent instructions can resume execution. "Cache write misses" to a "data" cache generally cause the shortest delay, because the write can be queued and there are few limitations on the execution of subsequent instructions; the processor can continue until the queue is full. For a detailed introduction to the types of misses, see cache performance measurement and metric.
Address translation.
Most general purpose CPUs implement some form of virtual memory. To summarize, either each program running on the machine sees its own simplified address space, which contains code and data for that program only, or all programs run in a common virtual address space. A program executes by calculating, comparing, reading and writing to addresses of its virtual address space, rather than addresses of physical address space, making programs simpler and thus easier to write.
Virtual memory requires the processor to translate virtual addresses generated by the program into physical addresses in main memory. The portion of the processor that does this translation is known as the memory management unit (MMU). The fast path through the MMU can perform those translations stored in the translation lookaside buffer (TLB), which is a cache of mappings from the operating system's page table, segment table, or both.
For the purposes of the present discussion, there are three important features of address translation:
One early virtual memory system, the IBM M44/44X, required an access to a mapping table held in core memory before every programmed access to main memory. With no caches, and with the mapping table memory running at the same speed as main memory, this effectively cut the speed of memory access in half. Two early machines that used a page table in main memory for mapping, the IBM System/360 Model 67 and the GE 645, both had a small associative memory as a cache for accesses to the in-memory page table. Both machines predated the first machine with a cache for main memory, the IBM System/360 Model 85, so the first hardware cache used in a computer system was not a data or instruction cache, but rather a TLB.
Caches can be divided into four types, based on whether the index or tag correspond to physical or virtual addresses:
The speed of this recurrence (the "load latency") is crucial to CPU performance, and so most modern level-1 caches are virtually indexed, which at least allows the MMU's TLB lookup to proceed in parallel with fetching the data from the cache RAM.
But virtual indexing is not the best choice for all cache levels. The cost of dealing with virtual aliases grows with cache size, and as a result most level-2 and larger caches are physically indexed.
Caches have historically used both virtual and physical addresses for the cache tags, although virtual tagging is now uncommon. If the TLB lookup can finish before the cache RAM lookup, then the physical address is available in time for tag compare, and there is no need for virtual tagging. Large caches, then, tend to be physically tagged, and only small, very low latency caches are virtually tagged. In recent general-purpose CPUs, virtual tagging has been superseded by vhints, as described below.
Homonym and synonym problems.
A cache that relies on virtual indexing and tagging becomes inconsistent after the same virtual address is mapped into different physical addresses (homonym), which can be solved by using physical address for tagging, or by storing the address space identifier in the cache line. However, the latter approach does not help against the synonym problem, in which several cache lines end up storing data for the same physical address. Writing to such locations may update only one location in the cache, leaving the others with inconsistent data. This issue may be solved by using non-overlapping memory layouts for different address spaces, or otherwise the cache (or a part of it) must be flushed when the mapping changes.
Virtual tags and vhints.
The great advantage of virtual tags is that, for associative caches, they allow the tag match to proceed before the virtual to physical translation is done. However, coherence probes and evictions present a physical address for action. The hardware must have some means of converting the physical addresses into a cache index, generally by storing physical tags as well as virtual tags. For comparison, a physically tagged cache does not need to keep virtual tags, which is simpler. When a virtual to physical mapping is deleted from the TLB, cache entries with those virtual addresses will have to be flushed somehow. Alternatively, if cache entries are allowed on pages not mapped by the TLB, then those entries will have to be flushed when the access rights on those pages are changed in the page table.
It is also possible for the operating system to ensure that no virtual aliases are simultaneously resident in the cache. The operating system makes this guarantee by enforcing page coloring, which is described below. Some early RISC processors (SPARC, RS/6000) took this approach. It has not been used recently, as the hardware cost of detecting and evicting virtual aliases has fallen and the software complexity and performance penalty of perfect page coloring has risen.
It can be useful to distinguish the two functions of tags in an associative cache: they are used to determine which way of the entry set to select, and they are used to determine if the cache hit or missed. The second function must always be correct, but it is permissible for the first function to guess, and get the wrong answer occasionally.
Some processors (e.g. early SPARCs) have caches with both virtual and physical tags. The virtual tags are used for way selection, and the physical tags are used for determining hit or miss. This kind of cache enjoys the latency advantage of a virtually tagged cache, and the simple software interface of a physically tagged cache. It bears the added cost of duplicated tags, however. Also, during miss processing, the alternate ways of the cache line indexed have to be probed for virtual aliases and any matches evicted.
The extra area (and some latency) can be mitigated by keeping "virtual hints" with each cache entry instead of virtual tags. These hints are a subset or hash of the virtual tag, and are used for selecting the way of the cache from which to get data and a physical tag. Like a virtually tagged cache, there may be a virtual hint match but physical tag mismatch, in which case the cache entry with the matching hint must be evicted so that cache accesses after the cache fill at this address will have just one hint match. Since virtual hints have fewer bits than virtual tags distinguishing them from one another, a virtually hinted cache suffers more conflict misses than a virtually tagged cache.
Perhaps the ultimate reduction of virtual hints can be found in the Pentium 4 (Willamette and Northwood cores). In these processors the virtual hint is effectively two bits, and the cache is four-way set associative. Effectively, the hardware maintains a simple permutation from virtual address to cache index, so that no content-addressable memory (CAM) is necessary to select the right one of the four ways fetched.
Page coloring.
Large physically indexed caches (usually secondary caches) run into a problem: the operating system rather than the application controls which pages collide with one another in the cache. Differences in page allocation from one program run to the next lead to differences in the cache collision patterns, which can lead to very large differences in program performance. These differences can make it very difficult to get a consistent and repeatable timing for a benchmark run.
To understand the problem, consider a CPU with a 1 MiB physically indexed direct-mapped level-2 cache and 4 KiB virtual memory pages. Sequential physical pages map to sequential locations in the cache until after 256 pages the pattern wraps around. We can label each physical page with a color of 0–255 to denote where in the cache it can go. Locations within physical pages with different colors cannot conflict in the cache.
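With the figures from this example, a page's color is simply its physical page number modulo the number of colors; a small sketch:

```python
# Page color for a 1 MiB direct-mapped cache with 4 KiB pages: 256 colors.
CACHE_SIZE = 1 << 20   # 1 MiB
PAGE_SIZE = 1 << 12    # 4 KiB
NUM_COLORS = CACHE_SIZE // PAGE_SIZE  # 256

def page_color(physical_address):
    return (physical_address // PAGE_SIZE) % NUM_COLORS

# Pages exactly 1 MiB apart share a color and can therefore conflict in the cache.
print(page_color(0x00400000), page_color(0x00500000))  # 0 0
```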
Programmers attempting to make maximum use of the cache may arrange their programs' access patterns so that only 1 MiB of data need be cached at any given time, thus avoiding capacity misses. But they should also ensure that the access patterns do not have conflict misses. One way to think about this problem is to divide up the virtual pages the program uses and assign them virtual colors in the same way as physical colors were assigned to physical pages before. Programmers can then arrange the access patterns of their code so that no two pages with the same virtual color are in use at the same time. There is a wide literature on such optimizations (e.g. loop nest optimization), largely coming from the High Performance Computing (HPC) community.
The snag is that while all the pages in use at any given moment may have different virtual colors, some may have the same physical colors. In fact, if the operating system assigns physical pages to virtual pages randomly and uniformly, it is extremely likely that some pages will have the same physical color, and then locations from those pages will collide in the cache (this is the birthday paradox).
The solution is to have the operating system attempt to assign different physical color pages to different virtual colors, a technique called "page coloring". Although the actual mapping from virtual to physical color is irrelevant to system performance, odd mappings are difficult to keep track of and have little benefit, so most approaches to page coloring simply try to keep physical and virtual page colors the same.
If the operating system can guarantee that each physical page maps to only one virtual color, then there are no virtual aliases, and the processor can use virtually indexed caches with no need for extra virtual alias probes during miss handling. Alternatively, the OS can flush a page from the cache whenever it changes from one virtual color to another. As mentioned above, this approach was used for some early SPARC and RS/6000 designs.
The software page coloring technique has been used to effectively partition the shared last-level cache (LLC) in multicore processors. This operating-system-based LLC management in multicore processors has been adopted by Intel.
Cache hierarchy in a modern processor.
Modern processors have multiple interacting on-chip caches. The operation of a particular cache can be completely specified by the cache size, the cache block size, the number of blocks in a set, the cache set replacement policy, and the cache write policy (write-through or write-back).
While all of the cache blocks in a particular cache are the same size and have the same associativity, typically the "lower-level" caches (called Level 1 cache) have a smaller number of blocks, smaller block size, and fewer blocks in a set, but have very short access times. "Higher-level" caches (i.e. Level 2 and above) have progressively larger numbers of blocks, larger block size, more blocks in a set, and relatively longer access times, but are still much faster than main memory.
Cache entry replacement policy is determined by a cache algorithm selected to be implemented by the processor designers. In some cases, multiple algorithms are provided for different kinds of work loads.
Specialized caches.
Pipelined CPUs access memory from multiple points in the pipeline: instruction fetch, virtual-to-physical address translation, and data fetch (see classic RISC pipeline). The natural design is to use different physical caches for each of these points, so that no one physical resource has to be scheduled to service two points in the pipeline. Thus the pipeline naturally ends up with at least three separate caches (instruction, TLB, and data), each specialized to its particular role.
Victim cache.
A victim cache is a cache used to hold blocks evicted from a CPU cache upon replacement. The victim cache lies between the main cache and its refill path, and holds only those blocks of data that were evicted from the main cache. The victim cache is usually fully associative, and is intended to reduce the number of conflict misses. Many commonly used programs do not require an associative mapping for all the accesses. In fact, only a small fraction of the memory accesses of the program require high associativity. The victim cache exploits this property by providing high associativity to only these accesses. It was introduced by Norman Jouppi from DEC in 1990.
Intel's "Crystalwell" variant of its Haswell processors introduced an on-package 128 MiB eDRAM Level 4 cache which serves as a victim cache to the processors' Level 3 cache. In the Skylake microarchitecture the Level 4 cache no longer works as a victim cache.
Trace cache.
One of the more extreme examples of cache specialization is the trace cache (also known as "execution trace cache") found in the Intel Pentium 4 microprocessors. A trace cache is a mechanism for increasing the instruction fetch bandwidth and decreasing power consumption (in the case of the Pentium 4) by storing traces of instructions that have already been fetched and decoded.
A trace cache stores instructions either after they have been decoded, or as they are retired. Generally, instructions are added to trace caches in groups representing either individual basic blocks or dynamic instruction traces. The Pentium 4's trace cache stores micro-operations resulting from decoding x86 instructions, providing also the functionality of a micro-operation cache. Having this, the next time an instruction is needed, it does not have to be decoded into micro-ops again.
Write Coalescing Cache (WCC).
Write Coalescing Cache is a special cache that is part of L2 cache in AMD's Bulldozer microarchitecture. Stores from both L1D caches in the module go through the WCC, where they are buffered and coalesced.
The WCC's task is to reduce the number of writes to the L2 cache.
Micro-operation (μop or uop) cache.
A micro-operation cache (μop cache, uop cache or UC) is a specialized cache that stores micro-operations of decoded instructions, as received directly from the instruction decoders or from the instruction cache. When an instruction needs to be decoded, the μop cache is checked for its decoded form which is re-used if cached; if it is not available, the instruction is decoded and then cached.
One of the early works describing μop cache as an alternative frontend for the Intel P6 processor family is the 2001 paper "Micro-Operation Cache: A Power Aware Frontend for Variable Instruction Length ISA". Later, Intel included μop caches in its Sandy Bridge processors and in successive microarchitectures like Ivy Bridge and Haswell. AMD implemented a μop cache in their Zen microarchitecture.
Fetching complete pre-decoded instructions eliminates the need to repeatedly decode variable length complex instructions into simpler fixed-length micro-operations, and simplifies the process of predicting, fetching, rotating and aligning fetched instructions. A μop cache effectively offloads the fetch and decode hardware, thus decreasing power consumption and improving the frontend supply of decoded micro-operations. The μop cache also increases performance by more consistently delivering decoded micro-operations to the backend and eliminating various bottlenecks in the CPU's fetch and decode logic.
A μop cache has many similarities with a trace cache, although a μop cache is much simpler thus providing better power efficiency; this makes it better suited for implementations on battery-powered devices. The main disadvantage of the trace cache, leading to its power inefficiency, is the hardware complexity required for its heuristic deciding on caching and reusing dynamically created instruction traces.
Branch target instruction cache.
A branch target cache or branch target instruction cache, the name used on ARM microprocessors, is a specialized cache which holds the first few instructions at the destination of a taken branch. This is used by low-powered processors which do not need a normal instruction cache because the memory system is capable of delivering instructions fast enough to satisfy the CPU without one. However, this only applies to consecutive instructions in sequence; it still takes several cycles of latency to restart instruction fetch at a new address, causing a few cycles of pipeline bubble after a control transfer. A branch target cache provides instructions for those few cycles avoiding a delay after most taken branches.
This allows full-speed operation with a much smaller cache than a traditional full-time instruction cache.
Smart cache.
Smart cache is a level 2 or level 3 caching method for multiple execution cores, developed by Intel.
Smart Cache shares the actual cache memory between the cores of a multi-core processor. In comparison to a dedicated per-core cache, the overall cache miss rate decreases when cores do not require equal parts of the cache space. Consequently, a single core can use the full level 2 or level 3 cache while the other cores are inactive. Furthermore, the shared cache makes it faster to share memory among different execution cores.
Multi-level caches.
Another issue is the fundamental tradeoff between cache latency and hit rate. Larger caches have better hit rates but longer latency. To address this tradeoff, many computers use multiple levels of cache, with small fast caches backed up by larger, slower caches. Multi-level caches generally operate by checking the fastest cache, "level 1" (L1), first; if it hits, the processor proceeds at high speed. If that smaller cache misses, the next fastest cache, "level 2" (L2), is checked, and so on, before accessing external memory.
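The lookup cascade can be sketched as follows; real hardware overlaps these checks and fills, so this is an illustration of the ordering only (inclusive-style fills are assumed).

```python
# Check L1 first, then L2, then main memory; fill the upper levels on the way back.
def multi_level_read(addr, l1, l2, memory):
    if addr in l1:
        return l1[addr]        # L1 hit: fastest path
    if addr in l2:
        l1[addr] = l2[addr]    # L2 hit: fill L1 for next time
        return l1[addr]
    value = memory[addr]       # miss in both levels: go to main memory
    l2[addr] = value           # fill L2 (inclusive-style behaviour assumed)
    l1[addr] = value           # and L1
    return value
```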
As the latency difference between main memory and the fastest cache has become larger, some processors have begun to utilize as many as three levels of on-chip cache. Price-sensitive designs used this to pull the entire cache hierarchy on-chip, but by the 2010s some of the highest-performance designs returned to having large off-chip caches, which are often implemented in eDRAM and mounted on a multi-chip module, as a fourth cache level. In rare cases, such as in the mainframe CPU IBM z15 (2019), all levels down to L1 are implemented by eDRAM, replacing SRAM entirely (for cache, SRAM is still used for registers). The ARM-based Apple M1 has a 192 KiB L1 cache for each of the four high-performance cores, an unusually large amount; however, the four high-efficiency cores only have 128 KiB.
The benefits of L3 and L4 caches depend on the application's access patterns. Examples of products incorporating L3 and L4 caches include the following:
Finally, at the other end of the memory hierarchy, the CPU register file itself can be considered the smallest, fastest cache in the system, with the special characteristic that it is scheduled in software—typically by a compiler, as it allocates registers to hold values retrieved from main memory for, as an example, loop nest optimization. However, with register renaming most compiler register assignments are reallocated dynamically by hardware at runtime into a register bank, allowing the CPU to break false data dependencies and thus easing pipeline hazards.
Register files sometimes also have hierarchy: The Cray-1 (circa 1976) had eight address "A" and eight scalar data "S" registers that were generally usable. There was also a set of 64 address "B" and 64 scalar data "T" registers that took longer to access, but were faster than main memory. The "B" and "T" registers were provided because the Cray-1 did not have a data cache. (The Cray-1 did, however, have an instruction cache.)
Multi-core chips.
When considering a chip with multiple cores, there is a question of whether the caches should be shared or local to each core. Implementing shared cache inevitably introduces more wiring and complexity. But then, having one cache per "chip", rather than "core", greatly reduces the amount of space needed, and thus one can include a larger cache.
Typically, sharing the L1 cache is undesirable because the resulting increase in latency would make each core run considerably slower than a single-core chip. However, for the highest-level cache, the last one called before accessing memory, having a global cache is desirable for several reasons, such as allowing a single core to use the whole cache, reducing data redundancy by making it possible for different processes or threads to share cached data, and reducing the complexity of utilized cache coherency protocols. For example, an eight-core chip with three levels may include an L1 cache for each core, one intermediate L2 cache for each pair of cores, and one L3 cache shared between all cores.
A shared highest-level cache, which is called before accessing memory, is usually referred to as a "last level cache" (LLC). Additional techniques are used for increasing the level of parallelism when LLC is shared between multiple cores, including slicing it into multiple pieces which are addressing certain ranges of memory addresses, and can be accessed independently.
Separate versus unified.
In a separate cache structure, instructions and data are cached separately, meaning that a cache line is used to cache either instructions or data, but not both; various benefits have been demonstrated with separate data and instruction translation lookaside buffers. In a unified structure, this constraint is not present, and cache lines can be used to cache both instructions and data.
Exclusive versus inclusive.
Multi-level caches introduce new design decisions. For instance, in some processors, all data in the L1 cache must also be somewhere in the L2 cache. These caches are called "strictly inclusive". Other processors (like the AMD Athlon) have "exclusive" caches: data is guaranteed to be in at most one of the L1 and L2 caches, never in both. Still other processors (like the Intel Pentium II, III, and 4) do not require that data in the L1 cache also reside in the L2 cache, although it may often do so. There is no universally accepted name for this intermediate policy; two common names are "non-exclusive" and "partially-inclusive".
The advantage of exclusive caches is that they store more data. This advantage is larger when the exclusive L1 cache is comparable to the L2 cache, and diminishes if the L2 cache is many times larger than the L1 cache. When the L1 misses and the L2 hits on an access, the hitting cache line in the L2 is exchanged with a line in the L1. This exchange is quite a bit more work than just copying a line from L2 to L1, which is what an inclusive cache does.
One advantage of strictly inclusive caches is that when external devices or other processors in a multiprocessor system wish to remove a cache line from the processor, they need only have the processor check the L2 cache. In cache hierarchies which do not enforce inclusion, the L1 cache must be checked as well. As a drawback, there is a correlation between the associativities of L1 and L2 caches: if the L2 cache does not have at least as many ways as all L1 caches together, the effective associativity of the L1 caches is restricted. Another disadvantage of inclusive cache is that whenever there is an eviction in L2 cache, the (possibly) corresponding lines in L1 also have to get evicted in order to maintain inclusiveness. This is quite a bit of work, and would result in a higher L1 miss rate.
Another advantage of inclusive caches is that the larger cache can use larger cache lines, which reduces the size of the secondary cache tags. (Exclusive caches require both caches to have the same size cache lines, so that cache lines can be swapped on a L1 miss, L2 hit.) If the secondary cache is an order of magnitude larger than the primary, and the cache data is an order of magnitude larger than the cache tags, this tag area saved can be comparable to the incremental area needed to store the L1 cache data in the L2.
Scratchpad memory.
Scratchpad memory (SPM), also known as scratchpad, scratchpad RAM or local store in computer terminology, is a high-speed internal memory used for temporary storage of calculations, data, and other work in progress.
Example: the K8.
To illustrate both specialization and multi-level caching, here is the cache hierarchy of the K8 core in the AMD Athlon 64 CPU.
The K8 has four specialized caches: an instruction cache, an instruction TLB, a data TLB, and a data cache. Each of these caches is specialized:
The K8 also has multiple-level caches. There are second-level instruction and data TLBs, which store only PTEs mapping 4 KiB. Both instruction and data caches, and the various TLBs, can fill from the large unified L2 cache. This cache is exclusive to both the L1 instruction and data caches, which means that any 8-byte line can only be in one of the L1 instruction cache, the L1 data cache, or the L2 cache. It is, however, possible for a line in the data cache to have a PTE which is also in one of the TLBs—the operating system is responsible for keeping the TLBs coherent by flushing portions of them when the page tables in memory are updated.
The K8 also caches information that is never stored in memory—prediction information. These caches are not shown in the above diagram. As is usual for this class of CPU, the K8 has fairly complex
branch prediction, with tables that help predict whether branches are taken and other tables which predict the targets of branches and jumps. Some of this information is associated with instructions, in both the level 1 instruction cache and the unified secondary cache.
The K8 uses an interesting trick to store prediction information with instructions in the secondary cache. Lines in the secondary cache are protected from accidental data corruption (e.g. by an alpha particle strike) by either ECC or parity, depending on whether those lines were evicted from the data or instruction primary caches. Since the parity code takes fewer bits than the ECC code, lines from the instruction cache have a few spare bits. These bits are used to cache branch prediction information associated with those instructions. The net result is that the branch predictor has a larger effective history table, and so has better accuracy.
More hierarchies.
Other processors have other kinds of predictors (e.g., the store-to-load bypass predictor in the DEC Alpha 21264), and various specialized predictors are likely to flourish in future processors.
These predictors are caches in that they store information that is costly to compute. Some of the terminology used when discussing predictors is the same as that for caches (one speaks of a hit in a branch predictor), but predictors are not generally thought of as part of the cache hierarchy.
The K8 keeps the instruction and data caches coherent in hardware, which means that a store into an instruction that closely follows the store instruction will change that following instruction. Other processors, like those in the Alpha and MIPS family, have relied on software to keep the instruction cache coherent. Stores are not guaranteed to show up in the instruction stream until a program calls an operating system facility to ensure coherency.
Tag RAM.
In computer engineering, a "tag RAM" is used to specify which of the possible memory locations is currently stored in a CPU cache. For a simple, direct-mapped design fast SRAM can be used. Higher associative caches usually employ content-addressable memory.
Implementation.
Cache reads are the most common CPU operation that takes more than a single cycle. Program execution time tends to be very sensitive to the latency of a level-1 data cache hit. A great deal of design effort, and often power and silicon area, is expended to make the caches as fast as possible.
The simplest cache is a virtually indexed direct-mapped cache. The virtual address is calculated with an adder, the relevant portion of the address extracted and used to index an SRAM, which returns the loaded data. The data is byte aligned in a byte shifter, and from there is bypassed to the next operation. There is no need for any tag checking in the inner loop – in fact, the tags need not even be read. Later in the pipeline, but before the load instruction is retired, the tag for the loaded data must be read, and checked against the virtual address to make sure there was a cache hit. On a miss, the cache is updated with the requested cache line and the pipeline is restarted.
An associative cache is more complicated, because some form of tag must be read to determine which entry of the cache to select. An N-way set-associative level-1 cache usually reads all N possible tags and N data in parallel, and then chooses the data associated with the matching tag. Level-2 caches sometimes save power by reading the tags first, so that only one data element is read from the data SRAM.
The adjacent diagram is intended to clarify the manner in which the various fields of the address are used. Address bit 31 is most significant, bit 0 is least significant. The diagram shows the SRAMs, indexing, and multiplexing for a 4 KiB, 2-way set-associative, virtually indexed and virtually tagged cache with 64 byte (B) lines, a 32-bit read width and 32-bit virtual address.
Because the cache is 4 KiB and has 64 B lines, there are just 64 lines in the cache, and we read two at a time from a Tag SRAM which has 32 rows, each with a pair of 21 bit tags. Although any function of virtual address bits 31 through 6 could be used to index the tag and data SRAMs, it is simplest to use the least significant bits.
Similarly, because the cache is 4 KiB and has a 4 B read path, and reads two ways for each access, the Data SRAM is 512 rows by 8 bytes wide.
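The geometry quoted in this example follows directly from the stated parameters; a short check using those figures (variable names are illustrative):

```python
# Geometry of a 4 KiB, 2-way, 64 B line cache with a 4 B (32-bit) read path.
CACHE_BYTES, WAYS, LINE_BYTES, READ_BYTES, VADDR_BITS = 4096, 2, 64, 4, 32

lines = CACHE_BYTES // LINE_BYTES                # 64 lines in total
sets = lines // WAYS                             # 32 tag-SRAM rows, a pair of tags each
tag_bits = VADDR_BITS - 6 - 5                    # 64 B line -> 6 offset bits, 32 sets -> 5 index bits
data_rows = (CACHE_BYTES // READ_BYTES) // WAYS  # 512 rows, each 2 x 4 B = 8 B wide

print(lines, sets, tag_bits, data_rows)          # 64 32 21 512
```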
A more modern cache might be 16 KiB, 4-way set-associative, virtually indexed, virtually hinted, and physically tagged, with 32 B lines, 32-bit read width and 36-bit physical addresses. The read path recurrence for such a cache looks very similar to the path above. Instead of tags, vhints are read, and matched against a subset of the virtual address. Later on in the pipeline, the virtual address is translated into a physical address by the TLB, and the physical tag is read (just one, as the vhint supplies which way of the cache to read). Finally the physical address is compared to the physical tag to determine if a hit has occurred.
Some SPARC designs have improved the speed of their L1 caches by a few gate delays by collapsing the virtual address adder into the SRAM decoders. See sum-addressed decoder.
History.
The early history of cache technology is closely tied to the invention and use of virtual memory. Because of the scarcity and cost of semiconductor memories, early mainframe computers in the 1960s used a complex hierarchy of physical memory, mapped onto a flat virtual memory space used by programs. The memory technologies spanned semiconductor, magnetic core, drum and disc. Virtual memory seen and used by programs would be flat, and caching would be used to fetch data and instructions into the fastest memory ahead of processor access. Extensive studies were done to optimize the cache sizes. Optimal values were found to depend greatly on the programming language used, with Algol needing the smallest and Fortran and Cobol needing the largest cache sizes.
In the early days of microcomputer technology, memory access was only slightly slower than register access. But since the 1980s the performance gap between processor and memory has been growing. Microprocessors have advanced much faster than memory, especially in terms of their operating frequency, so memory became a performance bottleneck. While it was technically possible to have all the main memory as fast as the CPU, a more economically viable path has been taken: use plenty of low-speed memory, but also introduce a small high-speed cache memory to alleviate the performance gap. This provided an order of magnitude more capacity—for the same price—with only a slightly reduced combined performance.
First TLB implementations.
The first documented uses of a TLB were on the GE 645 and the IBM 360/67, both of which used an associative memory as a TLB.
First instruction cache.
The first documented use of an instruction cache was on the CDC 6600.
First data cache.
The first documented use of a data cache was on the IBM System/360 Model 85.
In 68k microprocessors.
The 68010, released in 1982, has a "loop mode" which can be considered a tiny and special-case instruction cache that accelerates loops that consist of only two instructions. The 68020, released in 1984, replaced that with a typical instruction cache of 256 bytes, being the first 68k series processor to feature true on-chip cache memory.
The 68030, released in 1987, is basically a 68020 core with an additional 256-byte data cache, an on-chip memory management unit (MMU), a process shrink, and added burst mode for the caches.
The 68040, released in 1990, has split instruction and data caches of four kilobytes each.
The 68060, released in 1994, has the following: 8 KiB data cache (four-way associative), 8 KiB instruction cache (four-way associative), 96-byte FIFO instruction buffer, 256-entry branch cache, and 64-entry address translation cache MMU buffer (four-way associative).
In x86 microprocessors.
As the x86 microprocessors reached clock rates of 20 MHz and above in the 386, small amounts of fast cache memory began to be featured in systems to improve performance. This was because the DRAM used for main memory had significant latency, up to 120 ns, as well as refresh cycles. The cache was constructed from more expensive, but significantly faster, SRAM memory cells, which at the time had latencies around 10–25 ns. The early caches were external to the processor and typically located on the motherboard in the form of eight or nine DIP devices placed in sockets to enable the cache as an optional extra or upgrade feature.
Some versions of the Intel 386 processor could support 16 to 256 KiB of external cache.
With the 486 processor, an 8 KiB cache was integrated directly into the CPU die. This cache was termed Level 1 or L1 cache to differentiate it from the slower on-motherboard, or Level 2 (L2) cache. These on-motherboard caches were much larger, with the most common size being 256 KiB. There were some system boards that contained sockets for the Intel 485Turbocache daughtercard which had either 64 or 128 Kbyte of cache memory. The popularity of on-motherboard cache continued through the Pentium MMX era but was made obsolete by the introduction of SDRAM and the growing disparity between bus clock rates and CPU clock rates, which caused on-motherboard cache to be only slightly faster than main memory.
The next development in cache implementation in the x86 microprocessors began with the Pentium Pro, which brought the secondary cache onto the same package as the microprocessor, clocked at the same frequency as the microprocessor.
On-motherboard caches enjoyed prolonged popularity thanks to the AMD K6-2 and AMD K6-III processors that still used Socket 7, which was previously used by Intel with on-motherboard caches. K6-III included 256 KiB on-die L2 cache and took advantage of the on-board cache as a third level cache, named L3 (motherboards with up to 2 MiB of on-board cache were produced). After the Socket 7 became obsolete, on-motherboard cache disappeared from the x86 systems.
The three-level caches were used again first with the introduction of multiple processor cores, where the L3 cache was added to the CPU die. It became common for the total cache sizes to be increasingly larger in newer processor generations, and recently (as of 2011) it is not uncommon to find Level 3 cache sizes of tens of megabytes.
Intel introduced a Level 4 on-package cache with the Haswell microarchitecture. "Crystalwell" Haswell CPUs, equipped with the GT3e variant of Intel's integrated Iris Pro graphics, effectively feature 128 MiB of embedded DRAM (eDRAM) on the same package. This L4 cache is shared dynamically between the on-die GPU and CPU, and serves as a victim cache to the CPU's L3 cache.
In ARM microprocessors.
The Apple M1 CPU has a 128 or 192 KiB L1 instruction cache for each core (important for latency/single-thread performance), depending on the core type. This is an unusually large L1 cache for any CPU type (not just for a laptop); the total cache memory size is not unusually large for a laptop (the total is more important for throughput), and much larger total (e.g. L3 or L4) sizes are available in IBM's mainframes.
Current research.
Early cache designs focused entirely on the direct cost of cache and RAM and average execution speed.
More recent cache designs also consider energy efficiency, fault tolerance, and other goals.
There are several tools available to computer architects to help explore tradeoffs between the cache cycle time, energy, and area; the CACTI cache simulator and the SimpleScalar instruction set simulator are two open-source options.
Multi-ported cache.
A multi-ported cache is a cache which can serve more than one request at a time. When accessing a traditional cache we normally use a single memory address, whereas in a multi-ported cache we may request N addresses at a time, where N is the number of ports connecting the processor and the cache. The benefit of this is that a pipelined processor may access memory from different phases in its pipeline. Another benefit is that it allows the concept of superscalar processors to be applied through different cache levels.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lceil \\log_2(s) \\rceil"
},
{
"math_id": 1,
"text": "\\lceil \\log_2(b) \\rceil"
}
] | https://en.wikipedia.org/wiki?curid=849181 |
8492 | Discrete mathematics | Study of discrete mathematical structures
Discrete mathematics is the study of mathematical structures that can be considered "discrete" (in a way analogous to discrete variables, having a bijection with the set of natural numbers) rather than "continuous" (analogously to continuous functions). Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (finite sets or sets with the same cardinality as the natural numbers). However, there is no exact definition of the term "discrete mathematics".
The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deal with finite sets, particularly those areas relevant to business.
Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development of digital computers which operate in "discrete" steps and store data in "discrete" bits. Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches of computer science, such as computer algorithms, programming languages, cryptography, automated theorem proving, and software development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems.
Although the main objects of study in discrete mathematics are discrete objects, analytic methods from "continuous" mathematics are often employed as well.
In university curricula, discrete mathematics appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has thereafter developed in conjunction with efforts by ACM and MAA into a course that is basically intended to develop mathematical maturity in first-year students; therefore, it is nowadays a prerequisite for mathematics majors in some universities as well. Some high-school-level discrete mathematics textbooks have appeared as well. At this level, discrete mathematics is sometimes seen as a preparatory course, like precalculus in this respect.
The Fulkerson Prize is awarded for outstanding papers in discrete mathematics.
Topics in discrete mathematics.
Theoretical computer science.
Theoretical computer science includes areas of discrete mathematics relevant to computing. It draws heavily on graph theory and mathematical logic. Included within theoretical computer science is the study of algorithms and data structures. Computability studies what can be computed in principle, and has close ties to logic, while complexity studies the time, space, and other resources taken by computations. Automata theory and formal language theory are closely related to computability. Petri nets and process algebras are used to model computer systems, and methods from discrete mathematics are used in analyzing VLSI electronic circuits. Computational geometry applies algorithms to geometrical problems and representations of geometrical objects, while computer image analysis applies them to representations of images. Theoretical computer science also includes the study of various continuous computational topics.
Information theory.
Information theory involves the quantification of information. Closely related is coding theory which is used to design efficient and reliable data transmission and storage methods. Information theory also includes continuous topics such as: analog signals, analog coding, analog encryption.
Logic.
Logic is the study of the principles of valid reasoning and inference, as well as of consistency, soundness, and completeness. For example, in most systems of logic (but not in intuitionistic logic) Peirce's law ((("P"→"Q")→"P")→"P") is a theorem. For classical logic, it can be easily verified with a truth table. The study of mathematical proof is particularly important in logic, and has applications to automated theorem proving and formal verification of software.
Logical formulas are discrete structures, as are proofs, which form finite trees or, more generally, directed acyclic graph structures (with each inference step combining one or more premise branches to give a single conclusion). The truth values of logical formulas usually form a finite set, generally restricted to two values: "true" and "false", but logic can also be continuous-valued, e.g., fuzzy logic. Concepts such as infinite proof trees or infinite derivation trees have also been studied, e.g. infinitary logic.
Set theory.
Set theory is the branch of mathematics that studies sets, which are collections of objects, such as {blue, white, red} or the (infinite) set of all prime numbers. Partially ordered sets and sets with other relations have applications in several areas.
In discrete mathematics, countable sets (including finite sets) are the main focus. The beginning of set theory as a branch of mathematics is usually marked by Georg Cantor's work distinguishing between different kinds of infinite set, motivated by the study of trigonometric series, and further development of the theory of infinite sets is outside the scope of discrete mathematics. Indeed, contemporary work in descriptive set theory makes extensive use of traditional continuous mathematics.
Combinatorics.
Combinatorics studies the ways in which discrete structures can be combined or arranged.
Enumerative combinatorics concentrates on counting the number of certain combinatorial objects; for example, the twelvefold way provides a unified framework for counting permutations, combinations and partitions.
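A small Python sketch (standard library only; the partition counter is a hypothetical helper written for this illustration) counts all three kinds of objects for modest parameters:
<syntaxhighlight lang="python">
from math import comb, perm

# Ordered and unordered selections of 3 items out of 5.
print(perm(5, 3))  # 60 permutations
print(comb(5, 3))  # 10 combinations

def partitions(n, max_part=None):
    """Number of integer partitions of n with parts of size at most max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    if n < 0 or max_part == 0:
        return 0
    # Either use one part equal to max_part, or restrict to strictly smaller parts.
    return partitions(n - max_part, max_part) + partitions(n, max_part - 1)

print(partitions(5))  # 7 partitions of the integer 5
</syntaxhighlight>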
Analytic combinatorics concerns the enumeration (i.e., determining the number) of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae.
Topological combinatorics concerns the use of techniques from topology and algebraic topology/combinatorial topology in combinatorics.
Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties.
Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, partition theory is now considered a part of combinatorics or an independent field.
Order theory is the study of partially ordered sets, both finite and infinite.
Graph theory.
Graph theory, the study of graphs and networks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right. Graphs are one of the prime objects of study in discrete mathematics. They are among the most ubiquitous models of both natural and human-made structures. They can model many types of relations and process dynamics in physical, biological and social systems. In computer science, they can represent networks of communication, data organization, computational devices, the flow of computation, etc. In mathematics, they are useful in geometry and certain parts of topology, e.g. knot theory. Algebraic graph theory has close links with group theory and topological graph theory has close links to topology. There are also continuous graphs; however, for the most part, research in graph theory falls within the domain of discrete mathematics.
Number theory.
Number theory is concerned with the properties of numbers in general, particularly integers. It has applications to cryptography and cryptanalysis, particularly with regard to modular arithmetic, diophantine equations, linear and quadratic congruences, prime numbers and primality testing. Other discrete aspects of number theory include geometry of numbers. In analytic number theory, techniques from continuous mathematics are also used. Topics that go beyond discrete objects include transcendental numbers, diophantine approximation, p-adic analysis and function fields.
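As an illustration of how modular arithmetic enters primality testing, the following Python sketch implements a basic Fermat test; it is illustrative only and not a production primality test:
<syntaxhighlight lang="python">
def fermat_test(n, bases=(2, 3, 5, 7)):
    """Fermat probable-prime test based on modular exponentiation.

    If n is prime then a**(n-1) == 1 (mod n) for every a not divisible by n
    (Fermat's little theorem).  Failing the test proves n composite; passing
    it only makes n a "probable prime" (Carmichael numbers can fool it).
    """
    if n < 2:
        return False
    return all(a % n == 0 or pow(a, n - 1, n) == 1 for a in bases)

print(fermat_test(97))  # True: 97 is prime
print(fermat_test(91))  # False: 91 = 7 * 13 is composite
</syntaxhighlight>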
Algebraic structures.
Algebraic structures occur as both discrete examples and continuous examples. Discrete algebras include: Boolean algebra used in logic gates and programming; relational algebra used in databases; discrete and finite versions of groups, rings and fields are important in algebraic coding theory; discrete semigroups and monoids appear in the theory of formal languages.
Discrete analogues of continuous mathematics.
There are many concepts and theories in continuous mathematics which have discrete versions, such as discrete calculus, discrete Fourier transforms, discrete geometry, discrete logarithms, discrete differential geometry, discrete exterior calculus, discrete Morse theory, discrete optimization, discrete probability theory, discrete probability distribution, difference equations, discrete dynamical systems, and discrete vector measures.
Calculus of finite differences, discrete analysis, and discrete calculus.
In discrete calculus and the calculus of finite differences, a function defined on an interval of the integers is usually called a sequence. A sequence could be a finite sequence from a data source or an infinite sequence from a discrete dynamical system. Such a discrete function could be defined explicitly by a list (if its domain is finite), or by a formula for its general term, or it could be given implicitly by a recurrence relation or difference equation. Difference equations are similar to differential equations, but replace differentiation by taking the difference between adjacent terms; they can be used to approximate differential equations or (more often) studied in their own right. Many questions and methods concerning differential equations have counterparts for difference equations. For instance, where there are integral transforms in harmonic analysis for studying continuous functions or analogue signals, there are discrete transforms for discrete functions or digital signals. As well as discrete metric spaces, there are more general discrete topological spaces, finite metric spaces, finite topological spaces.
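For example, the logistic differential equation dy/dt = y(1 - y) can be approximated by the difference equation y_(n+1) = y_n + h y_n (1 - y_n); the following Python sketch (with illustrative step size and initial value) simply iterates this recurrence:
<syntaxhighlight lang="python">
def solve_difference_equation(y0, h, steps):
    """Iterate y_{n+1} = y_n + h * y_n * (1 - y_n), a forward-difference
    analogue of the logistic differential equation dy/dt = y * (1 - y)."""
    ys = [y0]
    for _ in range(steps):
        y = ys[-1]
        ys.append(y + h * y * (1 - y))
    return ys

# Illustrative step size and initial value; a smaller h tracks the continuous solution more closely.
trajectory = solve_difference_equation(y0=0.1, h=0.1, steps=50)
print(trajectory[-1])  # approaches the fixed point y = 1
</syntaxhighlight>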
The time scale calculus is a unification of the theory of difference equations with that of differential equations, which has applications to fields requiring simultaneous modelling of discrete and continuous data. Another way of modeling such a situation is the notion of hybrid dynamical systems.
Discrete geometry.
Discrete geometry and combinatorial geometry are about combinatorial properties of "discrete collections" of geometrical objects. A long-standing topic in discrete geometry is tiling of the plane.
In algebraic geometry, the concept of a curve can be extended to discrete geometries by taking the spectra of polynomial rings over finite fields to be models of the affine spaces over that field, and letting subvarieties or spectra of other rings provide the curves that lie in that space. Although the space in which the curves appear has a finite number of points, the curves are not so much sets of points as analogues of curves in continuous settings. For example, every point of the form formula_0 for formula_1 a field can be studied either as formula_2, a point, or as the spectrum formula_3 of the local ring at (x-c), a point together with a neighborhood around it. Algebraic varieties also have a well-defined notion of tangent space called the Zariski tangent space, making many features of calculus applicable even in finite settings.
Discrete modelling.
In applied mathematics, discrete modelling is the discrete analogue of continuous modelling. In discrete modelling, discrete formulae are fit to data. A common method in this form of modelling is to use recurrence relations. Discretization concerns the process of transferring continuous models and equations into discrete counterparts, often for the purposes of making calculations easier by using approximations. Numerical analysis provides an important example.
Challenges.
The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852, but not proved until 1976 (by Kenneth Appel and Wolfgang Haken, using substantial computer assistance).
In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible – at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. In 1970, Yuri Matiyasevich proved that this could not be done.
The need to break German codes in World War II led to advances in cryptography and theoretical computer science, with the first programmable digital electronic computer being developed at England's Bletchley Park with the guidance of Alan Turing and his seminal work, On Computable Numbers. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades. The telecommunications industry has also motivated advances in discrete mathematics, particularly in graph theory and information theory. Formal verification of statements in logic has been necessary for software development of safety-critical systems, and advances in automated theorem proving have been driven by this need.
Computational geometry has been an important part of the computer graphics incorporated into modern video games and computer-aided design tools.
Several fields of discrete mathematics, particularly theoretical computer science, graph theory, and combinatorics, are important in addressing the challenging bioinformatics problems associated with understanding the tree of life.
Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP. The Clay Mathematics Institute has offered a US$1 million prize for the first correct proof, along with prizes for six other mathematical problems.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "V(x-c) \\subset \\operatorname{Spec} K[x] = \\mathbb{A}^1"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "\\operatorname{Spec} K[x]/(x-c) \\cong \\operatorname{Spec} K"
},
{
"math_id": 3,
"text": "\\operatorname{Spec} K[x]_{(x-c)}"
}
] | https://en.wikipedia.org/wiki?curid=8492 |
8492267 | General Perspective projection | Azimuthal perspective map projection
The General Perspective projection is a map projection. When the Earth is photographed from space, the camera records the view as a perspective projection. When the camera is aimed toward the center of the Earth, the resulting projection is called Vertical Perspective. When aimed in other directions, the resulting projection is called a Tilted Perspective.
Perspective and usage.
The Vertical Perspective is related to the stereographic projection, gnomonic projection, and orthographic projection. These are all true perspective projections, meaning that they result from viewing the globe from some vantage point. They are also azimuthal projections, meaning that the projection surface is a plane tangent to the sphere. This results in correct directions from the center to all other points. The "point of perspective", or vantage point, for the General Perspective Projection is at a finite distance. It depicts the earth as it appears from some relatively short distance above the surface, typically a few hundred to a few tens of thousands of kilometers.
When tilted, the General Perspective projection, also called the tilted perspective projection, is not azimuthal (see second figure below); directions are not true from the central point, and the projection plane is not tangent to the sphere.
Tilted perspectives are common in aerial and low-orbit photography, generally taken from a height of a few kilometers to hundreds of kilometers, rather than the hundreds or thousands of kilometers typical of a vertical perspective. However, Richard Edes Harrison pioneered the use of this projection on strategic maps showing military theaters during WWII.
Some prominent Internet mapping tools also use the tilted perspective projection. For example, Google Earth and NASA World Wind show the globe as it appears from space. These applications permit a wide variety of interactive pan and zoom operations, including fly-through simulations, mimicking pictures or movies taken with a hand-held camera from an airplane or spacecraft.
History.
Some forms of the projection were known to the Greeks and Egyptians 2,000 years ago. It was studied by several French and British scientists in the 18th and 19th centuries. However, the projection had little practical value at that time; computationally simpler nonperspective azimuthal projections could be used instead.
Space exploration led to a renewed interest in the perspective projection. Now the concern was for a pictorial view from space, not for minimal distortion. A picture taken with a hand-held camera from the window of a spacecraft has a tilted perspective, so the crewed Gemini and Apollo space missions sparked interest in this projection.
Mathematics.
The formulas for the general perspective projection are derived using trigonometry. They are written in terms of longitude ("λ") and latitude ("φ") on the sphere. Define the radius of the sphere "R" and the "center" point (and origin) of the projection ("λ"0, "φ"0). The equations for the vertical perspective projection onto the ("x", "y") tangent plane reduce to the following:
formula_0
where
formula_1
formula_2 is the angular distance and formula_3 denotes the distance from the perspective point to the center of the Earth. It is positive in the direction of the center of the projection (for the “view from space”) and negative in the opposite direction. For the stereographic projection, formula_4, and for the gnomonic projection, formula_5.
The inverse formulas are given by:
formula_6
where
formula_7
If formula_8 is negative and formula_9 is greater than formula_10, formula_2 must be subtracted from 180° to place it in the proper quadrant. For computation of the inverse formulas the use of the two-argument atan2 form of the inverse tangent function (as opposed to atan) is recommended. This ensures that the sign of the projection as written is correct in all quadrants.
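The forward equations translate directly into code. The following Python sketch assumes a spherical Earth, takes the perspective distance "d" as input (consistent with "P" = "d"/"R" in the inverse formulas), and uses made-up illustrative coordinates; it is not based on any particular GIS library:
<syntaxhighlight lang="python">
from math import sin, cos, radians

def vertical_perspective(lam, phi, lam0, phi0, R, d):
    """Forward vertical perspective projection of (lam, phi), in degrees,
    onto the tangent plane at the projection center (lam0, phi0).

    Implements x = R k' cos(phi) sin(lam - lam0),
               y = R k' (cos(phi0) sin(phi) - sin(phi0) cos(phi) cos(lam - lam0)),
    with k' = (d - R) / (d - R cos c), where d is the signed distance from the
    point of perspective to the center of the Earth (d = R + altitude for a
    view from space; d = -R gives the stereographic and d = 0 the gnomonic case).
    """
    lam, phi, lam0, phi0 = map(radians, (lam, phi, lam0, phi0))
    cos_c = sin(phi0) * sin(phi) + cos(phi0) * cos(phi) * cos(lam - lam0)
    # For a view from space (d > R), points with cos_c < R/d lie beyond the horizon.
    k = (d - R) / (d - R * cos_c)
    x = R * k * cos(phi) * sin(lam - lam0)
    y = R * k * (cos(phi0) * sin(phi) - sin(phi0) * cos(phi) * cos(lam - lam0))
    return x, y

# Illustrative values: a view from roughly geostationary altitude above (0, 0), R in km.
print(vertical_perspective(10.0, 20.0, 0.0, 0.0, R=6371.0, d=6371.0 + 35786.0))
</syntaxhighlight>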
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\nx &= R \\, k' \\cos\\varphi \\sin\\left(\\lambda - \\lambda_0\\right) \\\\\ny &= R \\, k' \\big(\\cos\\varphi_0 \\sin\\varphi - \\sin\\varphi_0 \\cos\\varphi \\cos\\left(\\lambda - \\lambda_0\\right)\\big) \\\\\n\\end{align}"
},
{
"math_id": 1,
"text": "\\begin{align}\nk' &= \\frac{d-R}{d-R\\cos c} \\\\\n\\cos c &= \\sin\\varphi_0 \\sin\\varphi + \\cos\\varphi_0 \\cos\\varphi \\cos\\left(\\lambda - \\lambda_0\\right)\n\\end{align}"
},
{
"math_id": 2,
"text": "c"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "d = -R"
},
{
"math_id": 5,
"text": "d = 0"
},
{
"math_id": 6,
"text": "\\begin{align}\n\\varphi &= \\arcsin\\left(\\cos c \\sin\\varphi_0 + \\frac{y\\sin c \\cos\\varphi_0}{\\rho}\\right) \\\\\n\\lambda &= \\lambda_0 + \\arctan\\left(\\frac{x\\sin c}{\\rho \\cos c \\cos\\varphi_0 - y \\sin c \\sin\\varphi_0}\\right)\n\\end{align}"
},
{
"math_id": 7,
"text": "\\begin{align}\n\\rho &= \\sqrt{x^2 + y^2} \\\\\nc &= \\arcsin \\frac{P-\\sqrt{1-\\frac{\\rho^2(P+1)}{R^2(P-1)}}}{\\frac{R(P-1)}{\\rho} + \\frac{\\rho}{R(P-1)}}\n\\end{align}"
},
{
"math_id": 8,
"text": "P"
},
{
"math_id": 9,
"text": "\\rho"
},
{
"math_id": 10,
"text": "R(P-1)/P"
}
] | https://en.wikipedia.org/wiki?curid=8492267 |
849375 | Relativistic Heavy Ion Collider | Particle accelerator at Brookhaven National Laboratory in Upton, New York, USA
The Relativistic Heavy Ion Collider (RHIC) is the first and one of only two operating heavy-ion colliders, and the only spin-polarized proton collider ever built. Located at Brookhaven National Laboratory (BNL) in Upton, New York, and used by an international team of researchers, it is the only operating particle collider in the US. By using RHIC to collide ions traveling at relativistic speeds, physicists study the primordial form of matter that existed in the universe shortly after the Big Bang. By colliding spin-polarized protons, the spin structure of the proton is explored.
RHIC is as of 2019 the second-highest-energy heavy-ion collider in the world, with nucleon energies for collisions reaching 100 GeV for gold ions and 250 GeV for protons. As of November 7, 2010, the Large Hadron Collider (LHC) has collided heavy ions of lead at higher energies than RHIC. The LHC operating time for ions (lead–lead and lead–proton collisions) is limited to about one month per year.
In 2010, RHIC physicists published results of temperature measurements from earlier experiments which concluded that temperatures in excess of 345 MeV (4 terakelvin or 7 trillion degrees Fahrenheit) had been achieved in gold ion collisions, and that these collision temperatures resulted in the breakdown of "normal matter" and the creation of a liquid-like quark–gluon plasma.
In January 2020, the US Department of Energy Office of Science selected the eRHIC design for the future Electron–Ion collider (EIC), building on the existing RHIC facility at BNL.
The accelerator.
RHIC is an intersecting storage ring particle accelerator. Two independent rings (arbitrarily denoted as "Blue" and "Yellow") circulate heavy ions and/or polarized protons in opposite directions and allow a virtually free choice of colliding positively charged particles (the eRHIC upgrade will allow collisions between positively and negatively charged particles). The RHIC double storage ring is hexagonally shaped and has a circumference of , with curved edges in which stored particles are deflected and focused by 1,740 superconducting magnets using niobium-titanium conductors. The dipole magnets operate at . The six interaction points (between the particles circulating in the two rings) are in the middle of the six relatively straight sections, where the two rings cross, allowing the particles to collide. The interaction points are enumerated by clock positions, with the injection near 6 o'clock. Two large experiments, STAR and sPHENIX, are located at 6 and 8 o'clock respectively. The sPHENIX experiment is the newest experiment to be built at RHIC, replacing PHENIX at the 8 o'clock position.
A particle passes through several stages of boosters before it reaches the RHIC storage ring. The first stage for ions is the electron beam ion source (EBIS), while for protons, the linear accelerator (Linac) is used. As an example, gold nuclei leaving the EBIS have a kinetic energy of per nucleon and have an electric charge "Q" = +32 (32 of 79 electrons stripped from the gold atom). The particles are then accelerated by the Booster synchrotron to per nucleon, which injects the projectile now with "Q" = +77 into the Alternating Gradient Synchrotron (AGS), before they finally reach per nucleon and are injected in a "Q" = +79 state (no electrons left) into the RHIC storage ring over the AGS-to-RHIC Transfer Line (AtR).
To date the types of particle combinations explored at RHIC are p + p,
p + Al, p + Au, d + Au, h + Au, Cu + Cu, Cu + Au, Zr + Zr, Ru + Ru, Au + Au and U + U. The projectiles typically travel at a speed of 99.995% of the speed of light. For Au + Au collisions, the center-of-mass energy is typically per nucleon-pair, and was as low as per nucleon-pair. An average luminosity of was targeted during the planning. The current average Au + Au luminosity of the collider has reached , 44 times the design value. The heavy ion luminosity is substantially increased through stochastic cooling.
One unique characteristic of RHIC is its capability to collide polarized protons. RHIC holds the record for the highest-energy polarized proton beams. Polarized protons are injected into RHIC and preserve this state throughout the energy ramp. This is a difficult task that is accomplished with the aid of corkscrew magnets called 'Siberian snakes' (in RHIC, a chain of 4 helical dipole magnets). The corkscrew induces the magnetic field to spiral along the direction of the beam.
Run-9 achieved center-of-mass energy of on 12 February 2009. In Run-13 the average p + p luminosity of the collider reached , with a time and intensity averaged polarization of 52%.
AC dipoles have been used in non-linear machine diagnostics for the first time in RHIC.
The experiments.
There are two detectors currently operating at RHIC: STAR (6 o'clock, and near the AGS-to-RHIC Transfer Line) and sPHENIX (8 o'clock), the successor to PHENIX. PHOBOS (10 o'clock) completed its operation in 2005, and BRAHMS (2 o'clock) in 2006.
Of the two larger detectors, STAR is aimed at the detection of hadrons with its system of time projection chambers covering a large solid angle in a conventionally generated solenoidal magnetic field, while PHENIX is further specialized in detecting rare and electromagnetic particles, using a partial-coverage detector system in a superconductively generated axial magnetic field. The smaller detectors have larger pseudorapidity coverage: PHOBOS has the largest pseudorapidity coverage of all the detectors and is tailored for bulk particle multiplicity measurement, while BRAHMS is designed for momentum spectroscopy, in order to study so-called "small-"x"" and saturation physics. There is an additional experiment, PP2PP (now part of STAR), investigating spin dependence in p + p scattering.
The spokespersons for each of the experiments are:
Current results.
For the experimental objective of creating and studying the quark–gluon plasma, RHIC has the unique ability to provide baseline measurements for itself. This consists of both the lower energy and also lower mass number projectile combinations that do not result in the density of 200 GeV Au + Au collisions, like the p + p and d + Au collisions of the earlier runs, and also Cu + Cu collisions in Run-5.
Using this approach, important results of the measurement of the hot QCD matter created at RHIC are:
While in the first years, theorists were eager to claim that RHIC had discovered the quark–gluon plasma (e.g. Gyulassy & McLerran), the experimental groups were more careful not to jump to conclusions, citing various variables still in need of further measurement. The present results show that the matter created is a fluid with a viscosity near the quantum limit, but is unlike a weakly interacting plasma (a widespread yet not quantitatively unfounded belief on how a quark–gluon plasma looks).
A recent overview of the physics results is provided by the RHIC Experimental Evaluations 2004, a community-wide effort of RHIC experiments to evaluate the current data in the context of their implications for the formation of a new state of matter. These results are from the first three years of data collection at RHIC.
New results were published in "Physical Review Letters" on February 16, 2010, stating the discovery of the first hints of symmetry transformations, and that the observations may suggest that bubbles formed in the aftermath of the collisions created in the RHIC may break parity symmetry, which normally characterizes interactions between quarks and gluons.
The RHIC physicists announced new temperature measurements for these experiments of up to 4 trillion kelvins, the highest temperature ever achieved in a laboratory. It is described as a recreation of the conditions that existed during the birth of the Universe.
Possible closure under flat nuclear science budget scenarios.
In late 2012, the Nuclear Science Advisory Committee (NSAC) was asked to advise the Department of Energy's Office of Science and the National Science Foundation how to implement the nuclear science long range plan written in 2007, if future nuclear science budgets continue to provide no growth over the next four years. In a narrowly decided vote, the NSAC committee showed a slight preference, based on non-science related considerations, for shutting down RHIC rather than canceling the construction of the Facility for Rare Isotope Beams (FRIB).
By October 2015, the budget situation had improved, and RHIC could continue operations into the next decade.
The future.
RHIC began operation in 2000 and until November 2010 was the highest-energy heavy-ion collider in the world. The Large Hadron Collider (LHC) of CERN, while used mainly for colliding protons, operates with heavy ions for about one month per year. The LHC has operated with 25 times higher energies per nucleon. As of 2018, RHIC and the LHC are the only operating hadron colliders in the world.
Due to the longer operating time per year, a greater number of colliding ion species and collision energies can be studied at RHIC. In addition, and unlike the LHC, RHIC is able to accelerate spin-polarized protons, which leaves RHIC as the world's highest-energy accelerator for studying spin-polarized proton structure.
A major upgrade is the Electron–Ion Collider (EIC), the addition of an 18 GeV high-intensity electron beam facility, allowing electron–ion collisions. At least one new detector will have to be built to study the collisions. A review was published by Abhay Deshpande et al. in 2005. A more recent description is at:
On January 9, 2020, it was announced by Paul Dabbar, undersecretary of the US Department of Energy Office of Science, that the BNL eRHIC design had been selected for the future electron–ion collider (EIC) in the United States. In addition to the site selection, it was announced that the BNL EIC had acquired CD-0 (mission need) from the Department of Energy.
Critics of high-energy experiments.
Before RHIC started operation, critics postulated that the extremely high energy could produce catastrophic scenarios,
such as creating a black hole, a transition into a different quantum mechanical vacuum (see false vacuum), or the creation of strange matter that is more stable than ordinary matter. These hypotheses are complex, but many predict that the Earth would be destroyed in a time frame from seconds to millennia, depending on the theory considered. However, the fact that objects of the Solar System (e.g., the Moon) have been bombarded with cosmic particles of significantly higher energies than those of RHIC and other man-made colliders for billions of years, without any harm to the Solar System, was among the most striking arguments that these hypotheses were unfounded.
The other main controversial issue was a demand by critics for physicists to reasonably exclude the probability of such a catastrophic scenario. Physicists cannot demonstrate, from experimental and astrophysical constraints, a zero probability of catastrophic events, any more than they can rule out that tomorrow Earth will be struck by a "doomsday" cosmic ray (they can only calculate an upper limit for the likelihood). The result would be the same destructive scenarios described above, although obviously not caused by humans. According to this argument of upper limits, RHIC would still modify the chance for the Earth's survival by an infinitesimal amount.
Concerns were raised in connection with the RHIC particle accelerator, both in the media and in the popular science media. The risk of a doomsday scenario was indicated by Martin Rees, with respect to the RHIC, as being at least a 1 in 50,000,000 chance. With regards to the production of strangelets, Frank Close, professor of physics at the University of Oxford, indicates that "the chance of this happening is like you winning the major prize on the lottery 3 weeks in succession; the problem is that people believe it is possible to win the lottery 3 weeks in succession." After detailed studies, scientists reached such conclusions as "beyond reasonable doubt, heavy-ion experiments at RHIC will not endanger our planet" and that there is "powerful empirical evidence against the possibility of dangerous strangelet production".
The debate started in 1999 with an exchange of letters in "Scientific American" between Walter L. Wagner and F. Wilczek, in response to a previous article by M. Mukerjee. The media attention unfolded with an article in the UK "Sunday Times" of July 18, 1999, by J. Leake, closely followed by articles in the U.S. media. The controversy mostly ended with the report of a committee convened by the director of Brookhaven National Laboratory, J. H. Marburger, ostensibly ruling out the catastrophic scenarios depicted. However, the report left open the possibility that relativistic cosmic ray impact products might behave differently while transiting the Earth compared to "at rest" RHIC products; and the possibility that high-energy proton collisions with the Earth or the Moon might differ qualitatively from gold-on-gold collisions at RHIC. Wagner subsequently tried to stop full-energy collisions at RHIC by filing Federal lawsuits in San Francisco and New York, but without success. The New York suit was dismissed on the technicality that the San Francisco suit was the preferred forum. The San Francisco suit was dismissed, but with leave to refile if additional information was developed and presented to the court.
On March 17, 2005, the BBC published an article implying that researcher Horaţiu Năstase believes black holes have been created at RHIC. However, the original papers of H. Năstase and the "New Scientist" article cited by the BBC state that the correspondence of the hot dense QCD matter created in RHIC to a black hole is only in the sense of a correspondence of QCD scattering in Minkowski space and scattering in the "AdS"5 × "X"5 space in AdS/CFT; in other words, it is similar mathematically. Therefore, RHIC collisions might be described by mathematics relevant to theories of quantum gravity within AdS/CFT, but the described physical phenomena are not the same.
Financial information.
The RHIC project was sponsored by the United States Department of Energy, Office of Science, Office of Nuclear physics. It had a line-item budget of 616.6 million U.S. dollars.
For fiscal year 2006 the operational budget was reduced by 16.1 million U.S. dollars from the previous year, to 115.5 million U.S. dollars. Though operation under the fiscal year 2006 federal budget cut was uncertain, a key portion of the operational cost (13 million U.S. dollars) was contributed privately by a group close to Renaissance Technologies of East Setauket, New York.
References.
<templatestyles src="Reflist/styles.css" />
* BRAHMS
* PHENIX
* PHOBOS
* STAR | [
{
"math_id": 0,
"text": "dn/d\\phi \\propto 1 + 2 v_2(p_\\mathrm{T}) \\cos 2 \\phi"
},
{
"math_id": 1,
"text": "\\phi"
},
{
"math_id": 2,
"text": "Q_s^2 \\propto \\langle N_\\mathrm{part} \\rangle/2"
},
{
"math_id": 3,
"text": "n_\\mathrm{ch}/A \\propto 1/\\alpha_s(Q_s^2)"
},
{
"math_id": 4,
"text": "\\mu_B"
}
] | https://en.wikipedia.org/wiki?curid=849375 |
849412 | Ramanujan graph | In the mathematical field of spectral graph theory, a Ramanujan graph is a regular graph whose spectral gap is almost as large as possible (see extremal graph theory). Such graphs are excellent spectral expanders. As Murty's survey paper notes, Ramanujan graphs "fuse diverse branches of pure mathematics, namely, number theory, representation theory, and algebraic geometry".
These graphs are indirectly named after Srinivasa Ramanujan; their name comes from the Ramanujan–Petersson conjecture, which was used in a construction of some of these graphs.
Definition.
Let formula_0 be a connected formula_1-regular graph with formula_2 vertices, and let formula_3 be the eigenvalues of the adjacency matrix of formula_0 (or the spectrum of formula_0). Because formula_0 is connected and formula_1-regular, its eigenvalues satisfy formula_4 formula_5.
Define formula_6. A connected formula_1-regular graph formula_0 is a "Ramanujan graph" if formula_7.
Many sources use an alternative definition formula_8 (whenever there exists formula_9 with formula_10) to define Ramanujan graphs. In other words, we allow formula_11 in addition to the "small" eigenvalues. Since formula_12 if and only if the graph is bipartite, we will refer to graphs that satisfy this alternative definition but not the first definition as "bipartite Ramanujan graphs". If formula_0 is a Ramanujan graph, then formula_13 is a bipartite Ramanujan graph, so the existence of Ramanujan graphs is stronger.
As observed by Toshikazu Sunada, a regular graph is Ramanujan if and only if its Ihara zeta function satisfies an analog of the Riemann hypothesis.
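The definition is straightforward to test numerically for a given graph. The following NumPy sketch builds the Petersen graph (3-regular, with spectrum 3, 1 and -2) as the Kneser graph on 2-element subsets of a 5-element set and checks the condition formula_7:
<syntaxhighlight lang="python">
from itertools import combinations
import numpy as np

# Build the Petersen graph (3-regular, 10 vertices) as the Kneser graph on the
# 2-element subsets of {0,...,4}, two subsets being adjacent when disjoint.
vertices = list(combinations(range(5), 2))
n, d = len(vertices), 3
A = np.zeros((n, n))
for i, u in enumerate(vertices):
    for j, v in enumerate(vertices):
        if set(u).isdisjoint(v):
            A[i, j] = 1

eigenvalues = np.linalg.eigvalsh(A)[::-1]  # descending; the largest is d = 3
lam = max(abs(eigenvalues[1:]))            # lambda(G): exclude lambda_1 = d
print(eigenvalues)                         # 3, then 1 (x5) and -2 (x4)
print(lam <= 2 * np.sqrt(d - 1))           # True: the Petersen graph is Ramanujan
</syntaxhighlight>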
Examples and constructions.
Explicit examples.
Mathematicians are often interested in constructing infinite families of formula_1-regular Ramanujan graphs for every fixed formula_1. Such families are useful in applications.
Algebraic constructions.
Several explicit constructions of Ramanujan graphs arise as Cayley graphs and are algebraic in nature. See Winnie Li's survey on Ramanujan's conjecture and other aspects of number theory relevant to these results.
Lubotzky, Phillips and Sarnak and independently Margulis showed how to construct an infinite family of formula_29-regular Ramanujan graphs, whenever formula_30 is a prime number and formula_31. Both proofs use the Ramanujan conjecture, which led to the name of Ramanujan graphs. Besides being Ramanujan graphs, these constructions satisfy some other properties; for example, their girth is formula_32 where formula_2 is the number of nodes.
Let us sketch the Lubotzky-Phillips-Sarnak construction. Let formula_33 be a prime not equal to formula_30. By Jacobi's four-square theorem, there are formula_34 solutions to the equation formula_35 where formula_36 is odd and formula_37 are even. To each such solution associate the formula_38 matrix formula_39If formula_40 is not a quadratic residue modulo formula_21 let formula_41 be the Cayley graph of formula_38 with these formula_34 generators, and otherwise, let formula_41 be the Cayley graph of formula_42 with the same generators. Then formula_41 is a formula_29-regular graph on formula_43 or formula_44 vertices depending on whether or not formula_40 is a quadratic residue modulo formula_21. It is proved that formula_41 is a Ramanujan graph.
Morgenstern later extended the construction of Lubotzky, Phillips and Sarnak. His extended construction holds whenever formula_30 is a prime power.
Arnold Pizer proved that the supersingular isogeny graphs are Ramanujan, although they tend to have lower girth than the graphs of Lubotzky, Phillips, and Sarnak. Like the graphs of Lubotzky, Phillips, and Sarnak, the degrees of these graphs are always a prime number plus one.
Probabilistic examples.
Adam Marcus, Daniel Spielman and Nikhil Srivastava proved the existence of infinitely many formula_1-regular "bipartite" Ramanujan graphs for any formula_45. Later they proved that there exist bipartite Ramanujan graphs of every degree and every number of vertices. Michael B. Cohen showed how to construct these graphs in polynomial time.
The initial work followed an approach of Bilu and Linial. They considered an operation called a 2-lift that takes a formula_1-regular graph formula_0 with formula_2 vertices and a sign on each edge, and produces a new formula_1-regular graph formula_46 on formula_47 vertices. Bilu & Linial conjectured that there always exists a signing so that every new eigenvalue of formula_46 has magnitude at most formula_48. This conjecture guarantees the existence of Ramanujan graphs with degree formula_1 and formula_49 vertices for any formula_50—simply start with the complete graph formula_14, and iteratively take 2-lifts that retain the Ramanujan property.
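A 2-lift is easy to write down concretely: splitting the edges by sign into matrices A_plus and A_minus, the lift has adjacency matrix [[A_plus, A_minus], [A_minus, A_plus]], and its spectrum is the spectrum of formula_0 together with that of the signed matrix A_plus - A_minus (the "new" eigenvalues). The following NumPy sketch, using a random signing of the complete graph on four vertices, illustrates this:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def two_lift(A, signs):
    """Adjacency matrix of the 2-lift of A under a symmetric +/-1 edge signing.

    Edges with sign +1 are duplicated "in parallel" across the two copies,
    edges with sign -1 are duplicated "crosswise".  The spectrum of the lift
    is the spectrum of A together with that of the signed matrix
    A_plus - A_minus (the "new" eigenvalues).
    """
    A_plus = A * (signs > 0)
    A_minus = A * (signs < 0)
    return np.block([[A_plus, A_minus], [A_minus, A_plus]])

# Illustrative example: a random signing of the complete graph on 4 vertices (d = 3).
n, d = 4, 3
A = np.ones((n, n)) - np.eye(n)
signs = np.triu(rng.choice([-1, 1], size=(n, n)), 1)
signs = signs + signs.T                       # symmetric signing of the edges
lift = two_lift(A, signs)

old_eigs = np.linalg.eigvalsh(A)
new_eigs = np.linalg.eigvalsh(A * signs)      # the eigenvalues added by the lift
lift_eigs = np.linalg.eigvalsh(lift)
print(np.allclose(np.sort(lift_eigs), np.sort(np.concatenate([old_eigs, new_eigs]))))  # True
print(np.all(np.abs(new_eigs) <= 2 * np.sqrt(d - 1)))  # whether THIS random signing meets the bound
</syntaxhighlight>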
Using the method of interlacing polynomials, Marcus, Spielman, and Srivastava proved Bilu & Linial's conjecture holds when formula_0 is already a bipartite Ramanujan graph, which is enough to conclude the existence result. The sequel proved the stronger statement that a sum of formula_1 random bipartite matchings is Ramanujan with non-vanishing probability. Hall, Puder and Sawin extended the original work of Marcus, Spielman and Srivastava to r-lifts.
It is still an open problem whether there are infinitely many formula_1-regular (non-bipartite) Ramanujan graphs for any formula_45. In particular, the problem is open for formula_51, the smallest case for which formula_52 is not a prime power and hence not covered by Morgenstern's construction.
Ramanujan graphs as expander graphs.
The constant formula_48 in the definition of Ramanujan graphs is asymptotically sharp. More precisely, the Alon-Boppana bound states that for every formula_1 and formula_53, there exists formula_2 such that all formula_1-regular graphs formula_0 with at least formula_2 vertices satisfy formula_54. This means that Ramanujan graphs are essentially the best possible expander graphs.
Because they achieve this tight bound on formula_55, the expander mixing lemma gives excellent bounds on the uniformity of the distribution of the edges in Ramanujan graphs, and any random walk on the graph has a logarithmic mixing time (in terms of the number of vertices): in other words, the random walk converges to the (uniform) stationary distribution very quickly. Therefore, the diameter of a Ramanujan graph is also bounded logarithmically in terms of the number of vertices.
Random graphs.
Confirming a conjecture of Alon, Friedman showed that many families of random graphs are "weakly-Ramanujan". This means that for every formula_1 and formula_53 and for sufficiently large formula_2, a random formula_1-regular formula_2-vertex graph formula_0 satisfies formula_56 with high probability. While this result shows that random graphs are close to being Ramanujan, it cannot be used to prove the existence of Ramanujan graphs. It is conjectured, though, that random graphs are Ramanujan with substantial probability (roughly 52%). In addition to direct numerical evidence, there is some theoretical support for this conjecture: the spectral gap of a formula_1-regular graph seems to behave according to a Tracy-Widom distribution from random matrix theory, which would predict the same asymptotic.
Applications of Ramanujan graphs.
Expander graphs have many applications to computer science, number theory, and group theory; see, e.g., Lubotzky's survey on applications to pure and applied mathematics and Hoory, Linial, and Wigderson's survey, which focuses on computer science. Ramanujan graphs are in some sense the best expanders, and so they are especially useful in applications where expanders are needed. Importantly, the Lubotzky, Phillips, and Sarnak graphs can be traversed extremely quickly in practice, so they are practical for applications.
Some example applications include
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "\\lambda_1 \\geq \\lambda_2 \\geq \\cdots \\geq \\lambda_n"
},
{
"math_id": 4,
"text": "d = \\lambda_1 > \\lambda_2 "
},
{
"math_id": 5,
"text": " \\geq \\cdots \\geq \\lambda_n \\geq -d "
},
{
"math_id": 6,
"text": "\\lambda(G) = \\max_{i\\neq 1}|\\lambda_i| = \\max(|\\lambda_2|,\\ldots, |\\lambda_n|)"
},
{
"math_id": 7,
"text": "\\lambda(G) \\leq 2\\sqrt{d-1}"
},
{
"math_id": 8,
"text": "\\lambda'(G) = \\max_{|\\lambda_i| < d} |\\lambda_i|"
},
{
"math_id": 9,
"text": "\\lambda_i"
},
{
"math_id": 10,
"text": "|\\lambda_i| < d"
},
{
"math_id": 11,
"text": "-d"
},
{
"math_id": 12,
"text": "\\lambda_n = -d"
},
{
"math_id": 13,
"text": "G \\times K_2"
},
{
"math_id": 14,
"text": "K_{d+1}"
},
{
"math_id": 15,
"text": "d, -1, -1, \\dots, -1"
},
{
"math_id": 16,
"text": "\\lambda(K_{d+1}) = 1"
},
{
"math_id": 17,
"text": "d > 1"
},
{
"math_id": 18,
"text": "K_{d,d}"
},
{
"math_id": 19,
"text": "d, 0, 0, \\dots, 0, -d"
},
{
"math_id": 20,
"text": "3, 1, 1, 1, 1, 1, -2, -2, -2, -2"
},
{
"math_id": 21,
"text": "q"
},
{
"math_id": 22,
"text": "\\frac{q-1}{2}"
},
{
"math_id": 23,
"text": "\\frac{-1\\pm\\sqrt{q}}{2}"
},
{
"math_id": 24,
"text": "f(x)"
},
{
"math_id": 25,
"text": "\\mathbb{F}_q"
},
{
"math_id": 26,
"text": "S = \\{f(x)\\, :\\, x \\in \\mathbb{F}_q\\}"
},
{
"math_id": 27,
"text": "S = -S"
},
{
"math_id": 28,
"text": "S"
},
{
"math_id": 29,
"text": "(p+1)"
},
{
"math_id": 30,
"text": "p"
},
{
"math_id": 31,
"text": "p\\equiv 1 \\pmod 4"
},
{
"math_id": 32,
"text": "\\Omega(\\log_{p}(n))"
},
{
"math_id": 33,
"text": "q \\equiv 1 \\bmod 4"
},
{
"math_id": 34,
"text": "p+1"
},
{
"math_id": 35,
"text": "p=a_0^2+a_1^2+a_2^2+a_3^2"
},
{
"math_id": 36,
"text": "a_0 > 0"
},
{
"math_id": 37,
"text": "a_1, a_2, a_3"
},
{
"math_id": 38,
"text": "\\operatorname{PGL}(2,\\Z/q\\Z)"
},
{
"math_id": 39,
"text": "\\tilde \\alpha = \\begin{pmatrix}a_0 + ia_1 & a_2 + ia_3 \\\\ -a_2 + ia_3 & a_0 - ia_1\\end{pmatrix},\\qquad i \\text{ a fixed solution to } i^2 = -1 \\bmod q."
},
{
"math_id": 40,
"text": "p\n"
},
{
"math_id": 41,
"text": "X^{p,q}"
},
{
"math_id": 42,
"text": "\\operatorname{PSL}(2,\\Z/q\\Z)"
},
{
"math_id": 43,
"text": "n=q(q^2-1)"
},
{
"math_id": 44,
"text": "q(q^2-1)/2"
},
{
"math_id": 45,
"text": "d\\geq 3"
},
{
"math_id": 46,
"text": "G'"
},
{
"math_id": 47,
"text": "2n"
},
{
"math_id": 48,
"text": "2\\sqrt{d-1}"
},
{
"math_id": 49,
"text": "2^k(d+1)"
},
{
"math_id": 50,
"text": "k"
},
{
"math_id": 51,
"text": "d = 7"
},
{
"math_id": 52,
"text": "d-1"
},
{
"math_id": 53,
"text": "\\epsilon > 0"
},
{
"math_id": 54,
"text": "\\lambda(G) > 2\\sqrt{d-1} - \\epsilon"
},
{
"math_id": 55,
"text": "\\lambda (G)"
},
{
"math_id": 56,
"text": "\\lambda(G) < 2\\sqrt{d-1} + \\epsilon"
}
] | https://en.wikipedia.org/wiki?curid=849412 |
8495142 | Eckart conditions | Conditions in the second step of the Born-Oppenheimer approximation
The Eckart conditions, named after Carl Eckart, simplify the nuclear motion (rovibrational) Hamiltonian that arises in the second step of the Born–Oppenheimer approximation. They make it possible to approximately separate rotation from vibration. Although the rotational and vibrational motions of the nuclei in a molecule cannot be fully separated, the Eckart conditions minimize the coupling close to a reference (usually equilibrium) configuration. The Eckart conditions are explained by Louck and Galbraith
and in Section 10.2 of the textbook by Bunker and Jensen, where a numerical example is given.
Definition of Eckart conditions.
The Eckart conditions can only be formulated for a semi-rigid molecule, which is a molecule with a potential energy surface "V"(R1, R2, ..., R"N") that has a well-defined minimum for R"A"0 (formula_0). These equilibrium coordinates of the nuclei—with masses "M""A"—are expressed with respect to a fixed orthonormal principal axes frame and hence satisfy the relations
formula_1
Here λi0 is a principal moment of inertia of the equilibrium molecule.
The triplets R"A"0 = ("R""A"10, "R""A"20, "R""A"30) satisfying these conditions, enter the theory as a given set of real constants.
Following Biedenharn and Louck, we introduce an orthonormal body-fixed frame, the "Eckart frame",
formula_2.
If we were tied to the Eckart frame, which—following the molecule—rotates and translates in space, we would observe the molecule in its equilibrium geometry when we would draw the nuclei at the points,
formula_3.
Let the elements of R"A" be the coordinates with respect to the Eckart frame of the position vector of nucleus "A" (formula_0). Since we take the origin of the Eckart frame in the instantaneous center of mass, the following relation
formula_4
holds. We define "displacement coordinates"
formula_5.
Clearly the displacement coordinates satisfy the translational Eckart conditions,
formula_6
The rotational Eckart conditions for the displacements are:
formula_7
where formula_8 indicates a vector product.
These rotational conditions follow from the specific construction of the Eckart frame, see Biedenharn and Louck, "loc. cit.", page 538.
Finally, for a better understanding of the Eckart frame it may be useful to remark that it becomes a principal axes frame in the case that the molecule is a rigid rotor, that is, when all "N" displacement vectors are zero.
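The two sets of conditions are easy to check numerically for a given displacement. The following NumPy sketch (a hypothetical helper with a made-up water-like geometry in arbitrary units) evaluates their residuals, which both vanish for the symmetric distortion chosen below:
<syntaxhighlight lang="python">
import numpy as np

def eckart_residuals(masses, R0, R):
    """Residuals of the translational and rotational Eckart conditions for the
    displacements d_A = R_A - R_A^0 (all coordinates in the Eckart frame).

    masses : (N,) nuclear masses M_A
    R0     : (N, 3) equilibrium positions R_A^0 (center of mass at the origin)
    R      : (N, 3) instantaneous positions R_A
    """
    d = R - R0
    translational = np.einsum("a,ai->i", masses, d)             # sum_A M_A d_A
    rotational = np.einsum("a,ai->i", masses, np.cross(R0, d))  # sum_A M_A R_A^0 x d_A
    return translational, rotational

# Made-up water-like geometry in arbitrary units (masses roughly O, H, H).
masses = np.array([16.0, 1.0, 1.0])
R0 = np.array([[0.0, 0.0, 0.0], [0.76, 0.59, 0.0], [-0.76, 0.59, 0.0]])
R0 -= (masses[:, None] * R0).sum(axis=0) / masses.sum()  # put the center of mass at the origin
d = np.array([[0.0, 0.01, 0.0], [0.0, -0.08, 0.0], [0.0, -0.08, 0.0]])  # a symmetric distortion
print(eckart_residuals(masses, R0, R0 + d))  # both residuals vanish for this displacement
</syntaxhighlight>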
Separation of external and internal coordinates.
The "N" position vectors formula_9 of the nuclei constitute a 3"N" dimensional linear space R3N: the "configuration space". The Eckart conditions give an orthogonal direct sum decomposition of this space
formula_10
The elements of the 3"N"-6 dimensional subspace Rint are referred to as "internal coordinates", because they are invariant under overall translation and rotation of the molecule and, thus, depend only on the internal (vibrational) motions. The elements of the 6-dimensional subspace Rext are referred to as "external coordinates", because they are associated with the overall translation and rotation of the molecule.
To clarify this nomenclature we define first a basis for Rext. To that end we introduce the following 6 vectors (i=1,2,3):
formula_11
An orthogonal, unnormalized, basis for Rext is,
formula_12
A mass-weighted displacement vector can be written as
formula_13
For i=1,2,3,
formula_14
where the zero follows because of the translational Eckart conditions.
For i=4,5,6
formula_15
where the zero follows because of the rotational Eckart conditions. We conclude that the displacement vector formula_16 belongs to the orthogonal complement of Rext, so that it is an internal vector.
We obtain a basis for the internal space by defining 3"N"-6 linearly independent vectors
formula_17
The vectors formula_18 could be Wilson's s-vectors or could be obtained in the harmonic approximation by diagonalizing the Hessian of "V".
We next introduce internal (vibrational) modes,
formula_19
The physical meaning of "q"r depends on the vectors formula_18. For instance, "q"r could be a symmetric stretching mode, in which two C—H bonds are simultaneously stretched and contracted.
We already saw that the corresponding external modes are zero because of the Eckart conditions,
formula_20
Overall translation and rotation.
The vibrational (internal) modes are invariant under translation and infinitesimal rotation of the equilibrium (reference) molecule if and only if the Eckart conditions apply. This will be shown in this subsection.
An overall translation of the reference molecule is given by
formula_21
for any arbitrary 3-vector formula_22.
An infinitesimal rotation of the molecule is given by
formula_23
where Δφ is an infinitesimal angle, Δφ ≫ (Δφ)², and formula_24 is an arbitrary unit vector. From the orthogonality of formula_25 to the external space it follows that the formula_18 satisfy
formula_26
Now, under translation
formula_27
Clearly, formula_18 is invariant under translation if and only if
formula_28
because the vector formula_22 is arbitrary. So, the translational Eckart conditions imply the translational invariance of the vectors belonging to internal space and conversely. Under rotation we have,
formula_29
Rotational invariance follows if and only if
formula_30
The external modes, on the other hand, are "not" invariant and it is not difficult to show that they change under translation as follows:
formula_31
where "M" is the total mass of the molecule. They change under infinitesimal rotation as follows
formula_32
where I0 is the inertia tensor of the equilibrium molecule. This behavior shows
that the first three external modes describe the overall translation of the molecule, while
the modes 4, 5, and 6 describe the overall rotation.
Vibrational energy.
The vibrational energy of the molecule can be written in terms of coordinates with respect to the Eckart frame as
formula_33
Because the Eckart frame is non-inertial, the total kinetic energy comprises also centrifugal and Coriolis energies. These stay out of the present discussion. The vibrational energy is written in terms of the displacement coordinates, which are linearly dependent because they are contaminated by the 6 external modes, which are zero, i.e., the d"A"'s satisfy 6 linear relations. It is possible to write the vibrational energy solely in terms of the internal modes "q"r ("r" =1, ..., 3"N"-6) as we will now show. We write the different modes in terms of the displacements
formula_34
The parenthesized expressions define a matrix B relating the internal and external modes to the displacements. The matrix B may be partitioned in an internal (3"N"-6 x 3"N") and an external (6 x 3"N") part,
formula_35
We define the matrix M by
formula_36
and from the relations given in the previous sections follow the matrix relations
formula_37
and
formula_38
We define
formula_39
By using the rules for block matrix multiplication we can show that
formula_40
where G−1 is of dimension (3"N"-6 x 3"N"-6) and N−1 is (6 x 6).
The kinetic energy becomes
formula_41
where we used that the last 6 components of v are zero. This form of
the kinetic energy of vibration enters Wilson's GF method. It is of some interest to point out that the potential energy in the harmonic approximation can be written as follows
formula_42
where H is the Hessian of the potential in the minimum and F, defined by this equation, is the F matrix of the GF method.
Relation to the harmonic approximation.
In the harmonic approximation to the nuclear vibrational problem, expressed in displacement coordinates, one must solve the generalized eigenvalue problem
formula_43
where H is a 3"N" × 3"N" symmetric matrix of second derivatives of the potential formula_44. H is the Hessian matrix of "V" in the equilibrium formula_45. The diagonal matrix M contains the masses on the diagonal.
The diagonal matrix formula_46 contains the eigenvalues, while
the columns of C contain the eigenvectors.
It can be shown that the invariance of "V" under simultaneous translation over t of all nuclei implies that vectors T = (t, ..., t) are in the kernel of H.
From the invariance of "V" under an infinitesimal rotation of all nuclei around s, it can be shown that also the vectors S = (s x R10, ..., s x RN0) are in the kernel of H :
formula_47
Thus, six columns of C corresponding to eigenvalue zero are determined algebraically. (If the generalized eigenvalue problem is solved numerically, one will find in general six linearly independent linear combinations of S and T).
The eigenspace corresponding to eigenvalue zero is at least of dimension 6 (often it is exactly of dimension 6, since the other eigenvalues, which are force constants, are never zero for molecules in their ground state). Thus, T and S correspond to the overall (external) motions: translation and rotation, respectively. They are "zero-energy modes" because space is homogeneous (force-free) and isotropic (torque-free).
By the definition in this article, the non-zero frequency modes are internal modes, since they are within the orthogonal complement of Rext. The generalized orthogonalities:
formula_48
applied to the "internal" (non-zero eigenvalue) and "external" (zero-eigenvalue) columns of C are equivalent to the Eckart conditions.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
The classic work is:
More advanced books are: | [
{
"math_id": 0,
"text": "A=1,\\ldots, N"
},
{
"math_id": 1,
"text": "\n\\sum_{A=1}^N M_A\\,\\big(\\delta_{ij}|\\mathbf{R}_A^0|^2 - R^0_{Ai} R^0_{Aj}\\big) = \\lambda^0_i \\delta_{ij} \\quad\\mathrm{and}\\quad\n\\sum_{A=1}^N M_A \\mathbf{R}_A^0 = \\mathbf{0}.\n"
},
{
"math_id": 2,
"text": "\\vec{\\mathbf{F}} = \\{ \\vec{f}_1, \\vec{f}_2, \\vec{f}_3\\}"
},
{
"math_id": 3,
"text": "\n\\vec{R}_A^0 \\equiv \\vec{\\mathbf{F}} \\cdot \\mathbf{R}_A^0\n=\\sum_{i=1}^3 \\vec{f}_i\\, R^0_{Ai},\\quad A=1,\\ldots,N\n"
},
{
"math_id": 4,
"text": "\n\\sum_A M_A \\mathbf{R}_A = \\mathbf{0}\n"
},
{
"math_id": 5,
"text": "\\mathbf{d}_A\\equiv\\mathbf{R}_A-\\mathbf{R}^0_A"
},
{
"math_id": 6,
"text": "\n\\sum_{A=1}^N M_A \\mathbf{d}_A = 0 .\n"
},
{
"math_id": 7,
"text": "\n\\sum_{A=1}^N M_A \\mathbf{R}^0_A \\times \\mathbf{d}_A = 0,\n"
},
{
"math_id": 8,
"text": "\\times"
},
{
"math_id": 9,
"text": "\\vec{R}_A"
},
{
"math_id": 10,
"text": "\n\\mathbf{R}^{3N} = \\mathbf{R}_\\textrm{ext}\\oplus\\mathbf{R}_\\textrm{int}.\n"
},
{
"math_id": 11,
"text": "\n\\begin{align}\n\\vec{s}^A_{i} &\\equiv \\vec{f}_i \\\\\n\\vec{s}^A_{i+3} &\\equiv \\vec{f}_i \\times\\vec{R}_A^0 .\\\\\n\\end{align}\n"
},
{
"math_id": 12,
"text": "\n\\vec{S}_t \\equiv \\operatorname{row}(\\sqrt{M_1}\\;\\vec{s}^{\\,1}_{t}, \\ldots, \\sqrt{M_N} \\;\\vec{s}^{\\,N}_{t})\n\\quad\\mathrm{for}\\quad t=1,\\ldots, 6.\n"
},
{
"math_id": 13,
"text": "\n\\vec{D} \\equiv \\operatorname{col}(\\sqrt{M_1}\\;\\vec{d}^{\\,1}, \\ldots, \\sqrt{M_N}\\;\\vec{d}^{\\,N})\n\\quad\\mathrm{with}\\quad\n\\vec{d}^{\\,A} \\equiv \\vec{\\mathbf{F}}\\cdot \\mathbf{d}_A .\n"
},
{
"math_id": 14,
"text": "\n\\vec{S}_i \\cdot \\vec{D} = \\sum_{A=1}^N \\; M_A \\vec{s}^{\\,A}_i \\cdot \\vec{d}^{\\,A}\n=\\sum_{A=1}^N M_A d_{Ai} = 0,\n"
},
{
"math_id": 15,
"text": "\\, \\vec{S}_i \\cdot \\vec{D} = \\sum_{A=1}^N \\; M_A \\big(\\vec{f}_i \\times\\vec{R}_A^0\\big) \\cdot \\vec{d}^{\\,A}=\\vec{f}_i \\cdot \\sum_{A=1}^N M_A \\vec{R}_A^0 \\times\\vec{d}^A = \\sum_{A=1}^N M_A \\big( \\mathbf{R}_A^0 \\times \\mathbf{d}_A\\big)_i = 0,\n"
},
{
"math_id": 16,
"text": "\\vec{D}"
},
{
"math_id": 17,
"text": "\n\\vec{Q}_r \\equiv \\operatorname{row}(\\frac{1}{\\sqrt{M_1}}\\;\\vec{q}_r^{\\,1}, \\ldots, \\frac{1}{\\sqrt{M_N}}\\;\\vec{q}_r^{\\,N}), \\quad\\mathrm{for}\\quad r=1,\\ldots, 3N-6.\n"
},
{
"math_id": 18,
"text": "\\vec{q}^A_r"
},
{
"math_id": 19,
"text": "\nq_r \\equiv \\vec{Q}_r \\cdot \\vec{D} = \\sum_{A=1}^N \\vec{q}^A_r \\cdot \\vec{d}^{\\,A}\n\\quad\\mathrm{for}\\quad r=1,\\ldots, 3N-6.\n"
},
{
"math_id": 20,
"text": "\ns_t \\equiv \\vec{S}_t \\cdot \\vec{D} = \\sum_{A=1}^N M_A \\;\\vec{s}^{\\,A}_t \\cdot \\vec{d}^{\\,A} = 0\n\\quad\\mathrm{for}\\quad t=1,\\ldots, 6.\n"
},
{
"math_id": 21,
"text": " \\vec{R}_{A}^0 \\mapsto \\vec{R}_{A}^0 + \\vec{t} "
},
{
"math_id": 22,
"text": "\\vec{t}"
},
{
"math_id": 23,
"text": "\n\\vec{R}_A^0 \\mapsto \\vec{R}_A^0 + \\Delta\\varphi \\; ( \\vec{n}\\times \\vec{R}_A^0)\n"
},
{
"math_id": 24,
"text": "\\vec{n}"
},
{
"math_id": 25,
"text": "\\vec{Q}_r"
},
{
"math_id": 26,
"text": "\n\\sum_{A=1}^N \\vec{q}^{\\,A}_r = \\vec{0} \\quad\\mathrm{and}\\quad \\sum_{A=1}^N \\vec{R}^0_A\\times\n\\vec{q}^A_r = \\vec{0}.\n"
},
{
"math_id": 27,
"text": "\nq_r \\mapsto \\sum_A\\vec{q}^{\\,A}_r \\cdot(\\vec{d}^A - \\vec{t}) =\nq_r - \\vec{t}\\cdot\\sum_A \\vec{q}^{\\,A}_r = q_r.\n"
},
{
"math_id": 28,
"text": "\n\\sum_A \\vec{q}^{\\,A}_r = 0,\n"
},
{
"math_id": 29,
"text": "\nq_r \\mapsto \\sum_A\\vec{q}^{\\,A}_r \\cdot \\big(\\vec{d}^A - \\Delta\\varphi \\; ( \\vec{n}\\times \\vec{R}_A^0) \\big) =\nq_r - \\Delta\\varphi \\; \\vec{n}\\cdot\\sum_A \\vec{R}^0_A\\times\\vec{q}^{\\,A}_r = q_r.\n"
},
{
"math_id": 30,
"text": "\n\\sum_A \\vec{R}^0_A\\times\\vec{q}^{\\,A}_r = \\vec{0}.\n"
},
{
"math_id": 31,
"text": "\n\\begin{align}\ns_i &\\mapsto s_i + M \\vec{f}_i \\cdot \\vec{t} \\quad \\mathrm{for}\\quad i=1,2,3 \\\\\ns_i &\\mapsto s_i \\quad \\mathrm{for}\\quad i=4,5,6, \\\\\n\\end{align}\n"
},
{
"math_id": 32,
"text": "\n\\begin{align}\ns_i &\\mapsto s_i \\quad \\mathrm{for}\\quad i=1,2,3 \\\\\ns_i &\\mapsto s_i + \\Delta \\phi \\vec{f}_i \\cdot \\mathbf{I}^0\\cdot \\vec{n} \\quad \\mathrm{for}\\quad i=4,5,6, \\\\\n\\end{align}\n"
},
{
"math_id": 33,
"text": "\n2T_\\mathrm{vib} = \\sum_{A=1}^N M_A \\dot{\\mathbf{R}}_A\\cdot \\dot{\\mathbf{R}}_A\n= \\sum_{A=1}^N M_A \\dot{\\mathbf{d}}_A\\cdot \\dot{\\mathbf{d}}_A.\n"
},
{
"math_id": 34,
"text": "\n\\begin{align}\nq_r = \\sum_{Aj} d_{Aj}& \\big( q^A_{rj} \\big) \\\\\ns_i = \\sum_{Aj} d_{Aj}& \\big( M_A \\delta_{ij} \\big) =0 \\\\\ns_{i+3} = \\sum_{Aj} d_{Aj}& \\big( M_A \\sum_k \\epsilon_{ikj} R^0_{Ak} \\big)=0 \\\\\n\\end{align}\n"
},
{
"math_id": 35,
"text": "\n\\mathbf{v}\\equiv\n\\begin{pmatrix}\nq_1 \\\\\n\\vdots \\\\\n\\vdots \\\\\nq_{3N-6} \\\\\n0 \\\\\n\\vdots \\\\\n0\\\\\n\\end{pmatrix}\n= \\begin{pmatrix}\n\\mathbf{B}^\\mathrm{int} \\\\\n\\cdots \\\\\n\\mathbf{B}^\\mathrm{ext} \\\\\n\\end{pmatrix}\n\\mathbf{d} \\equiv \\mathbf{B} \\mathbf{d}.\n"
},
{
"math_id": 36,
"text": "\n\\mathbf{M} \\equiv \\operatorname{diag}(\\mathbf{M}_1, \\mathbf{M}_2, \\ldots,\\mathbf{M}_N)\n\\quad\\textrm{and}\\quad\n\\mathbf{M}_A\\equiv \\operatorname{diag}(M_A, M_A, M_A)\n"
},
{
"math_id": 37,
"text": "\n\\mathbf{B}^\\mathrm{ext} \\mathbf{M}^{-1} (\\mathbf{B}^\\mathrm{ext})^\\mathrm{T}\n= \\operatorname{diag}(N_1,\\ldots, N_6) \\equiv\\mathbf{N},\n"
},
{
"math_id": 38,
"text": "\n\\mathbf{B}^\\mathrm{int} \\mathbf{M}^{-1} (\\mathbf{B}^\\mathrm{ext})^\\mathrm{T}\n= \\mathbf{0}.\n"
},
{
"math_id": 39,
"text": "\n\\mathbf{G} \\equiv\n\\mathbf{B}^\\mathrm{int} \\mathbf{M}^{-1} (\\mathbf{B}^\\mathrm{int})^\\mathrm{T}.\n"
},
{
"math_id": 40,
"text": "\n(\\mathbf{B}^\\mathrm{T})^{-1} \\mathbf{M} \\mathbf{B}^{-1}\n= \\begin{pmatrix}\n\\mathbf{G}^{-1} && \\mathbf{0} \\\\\n\\mathbf{0} && \\mathbf{N}^{-1}\n\\end{pmatrix},\n"
},
{
"math_id": 41,
"text": "\n2T_\\mathrm{vib} = \\dot{\\mathbf{d}}^\\mathrm{T} \\mathbf{M} \\dot{\\mathbf{d}}\n= \\dot{\\mathbf{v}}^\\mathrm{T}\\; (\\mathbf{B}^\\mathrm{T})^{-1} \\mathbf{M} \\mathbf{B}^{-1}\\; \\dot{\\mathbf{v}} = \\sum_{r, r'=1}^{3N-6} (G^{-1})_{r r'} \\dot{q}_r \\dot{q}_{r'}\n"
},
{
"math_id": 42,
"text": "\n2V_\\mathrm{harm} = \\mathbf{d}^\\mathrm{T} \\mathbf{H} \\mathbf{d}\n= \\mathbf{v}^\\mathrm{T} (\\mathbf{B}^\\mathrm{T})^{-1} \\mathbf{H} \\mathbf{B}^{-1} \\mathbf{v} = \\sum_{r, r'=1}^{3N-6} F_{r r'} q_r q_{r'},\n"
},
{
"math_id": 43,
"text": " \\mathbf{H}\\mathbf{C} = \\mathbf{M} \\mathbf{C} \\boldsymbol{\\Phi},\n"
},
{
"math_id": 44,
"text": "V(\\mathbf{R}_1, \\mathbf{R}_2,\\ldots, \\mathbf{R}_N)"
},
{
"math_id": 45,
"text": "\\mathbf{R}_1^0,\\ldots, \\mathbf{R}_N^0"
},
{
"math_id": 46,
"text": "\\boldsymbol{\\Phi}"
},
{
"math_id": 47,
"text": "\n\\mathbf{H}\n\\begin{pmatrix} \\mathbf{t} \\\\ \\vdots\\\\ \\mathbf{t} \\end{pmatrix} =\n\\begin{pmatrix} \\mathbf{0} \\\\ \\vdots\\\\ \\mathbf{0} \\end{pmatrix}\n\\quad\\mathrm{and}\\quad\n\\mathbf{H}\n\\begin{pmatrix} \\mathbf{s}\\times \\mathbf{R}_1^0 \\\\ \\vdots\\\\ \\mathbf{s}\\times \\mathbf{R}_N^0 \\end{pmatrix} =\n\\begin{pmatrix} \\mathbf{0} \\\\ \\vdots\\\\ \\mathbf{0} \\end{pmatrix}\n"
},
{
"math_id": 48,
"text": " \\mathbf{C}^\\mathrm{T} \\mathbf{M} \\mathbf{C} = \\mathbf{I} "
}
] | https://en.wikipedia.org/wiki?curid=8495142 |
849543 | Darcy's law | Equation describing the flow of a fluid through a porous medium
Darcy's law is an equation that describes the flow of a fluid through a porous medium. The law was formulated by Henry Darcy based on results of experiments on the flow of water through beds of sand, forming the basis of hydrogeology, a branch of earth sciences. It is analogous to Ohm's law in electrostatics, linearly relating the volume flow rate of the fluid to the hydraulic head difference (which is often just proportional to the pressure difference) via the hydraulic conductivity. In fact, Darcy's law is a special case of the Stokes equation for the momentum flux, in turn deriving from the momentum Navier–Stokes equation.
Background.
Darcy's law was first determined experimentally by Darcy, but has since been derived from the Navier–Stokes equations via homogenization methods. It is analogous to Fourier's law in the field of heat conduction, Ohm's law in the field of electrical networks, and Fick's law in diffusion theory.
One application of Darcy's law is in the analysis of water flow through an aquifer; Darcy's law along with the equation of conservation of mass simplifies to the groundwater flow equation, one of the basic relationships of hydrogeology.
Morris Muskat first refined Darcy's equation for a single-phase flow by including viscosity in the single (fluid) phase equation of Darcy. It can be understood that viscous fluids have more difficulty permeating through a porous medium than less viscous fluids. This change made it suitable for researchers in the petroleum industry. Based on experimental results by his colleagues Wyckoff and Botset, Muskat and Meres also generalized Darcy's law to cover a multiphase flow of water, oil and gas in the porous medium of a petroleum reservoir. The generalized multiphase flow equations by Muskat and others provide the analytical foundation for reservoir engineering that exists to this day.
Description.
In the integral form, Darcy's law, as refined by Morris Muskat, in the absence of gravitational forces and in a homogeneously permeable medium, is given by a simple proportionality relationship between the volumetric flow rate formula_1, and the pressure drop formula_2 through a porous medium. The proportionality constant is linked to the permeability formula_3 of the medium, the dynamic viscosity of the fluid formula_4, the given distance formula_5 over which the pressure drop is computed, and the cross-sectional area formula_6, in the form:
formula_7
Note that the ratio:
formula_8
can be defined as the Darcy's law hydraulic resistance.
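As an illustration, the integral form and the associated hydraulic resistance can be evaluated with a few lines of Python; the permeability, geometry, viscosity and pressure drop below are assumed example values, not data from the text:

```python
# Darcy's law, integral form: Q = k*A/(mu*L) * delta_p   (see the formula above)
# All numerical values are assumed, illustrative inputs.
k = 1e-12        # permeability, m^2 (order of magnitude of a permeable sand)
A = 0.01         # cross-sectional area, m^2
mu = 1e-3        # dynamic viscosity of water, Pa*s
L = 0.1          # sample length, m
delta_p = 1e4    # pressure drop across the sample, Pa

Q = k * A / (mu * L) * delta_p   # volumetric flow rate, m^3/s
R = mu * L / (k * A)             # hydraulic resistance, Pa*s/m^3
print(f"Q = {Q:.3e} m^3/s,  R = {R:.3e} Pa*s/m^3,  check Q = dp/R: {delta_p / R:.3e}")
```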
Darcy's law can be generalised to a local form:
Darcy's constitutive equation (isotropic porous media)
formula_9
where formula_0 is the hydraulic gradient and formula_10 is the volumetric flux, which here is also called the superficial velocity.
Note that the ratio:
formula_11
can be thought of as the Darcy's law hydraulic conductivity.
In the (less general) integral form, the volumetric flux and the pressure gradient correspond to the ratios:
formula_12
formula_13.
In the case of an anisotropic porous medium, the permeability is a second-order tensor, and in tensor notation one can write the more general law:
Darcy's constitutive equation (anisotropic porous media)
formula_14
Notice that the quantity formula_10, often referred to as the Darcy flux or Darcy velocity, is not the velocity at which the fluid is travelling through the pores. The flow velocity (u) is related to the flux (q) by the porosity (φ) with the following equation:
formula_15
The Darcy's constitutive equation, for single phase (fluid) flow, is the defining equation for absolute permeability (single phase permeability).
With reference to the diagram to the right, the flow velocity is in SI units formula_16, and since the porosity φ is a nondimensional number, the Darcy flux formula_10, or discharge per unit area, is also defined in units formula_16; the permeability formula_3 in units formula_17, the dynamic viscosity formula_4 in units formula_18 and the hydraulic gradient is in units formula_19.
In the integral form, the total pressure drop formula_20 is in units formula_21, and formula_5 is the length of the sample in units formula_22; the Darcy volumetric flow rate formula_1, or discharge, is also defined in units formula_23, and the cross-sectional area formula_6 in units formula_17. A number of these parameters are used in alternative definitions below. A negative sign is used in the definition of the flux following the standard physics convention that fluids flow from regions of high pressure to regions of low pressure. Note that the elevation head must be taken into account if the inlet and outlet are at different elevations. If the change in pressure is negative, then the flow will be in the positive x direction. There have been several proposals for a constitutive equation for absolute permeability, and the most famous one is probably the Kozeny equation (also called Kozeny–Carman equation).
By considering the relation for static fluid pressure (Stevin's law):
formula_24
one can also express the integral form as the equation:
formula_25
where ν is the kinematic viscosity.
The corresponding hydraulic conductivity is therefore:
formula_26
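A quick numerical sketch of this conversion, assuming water at roughly 20 °C and an illustrative permeability (both values are assumptions, not taken from the text):

```python
# Hydraulic conductivity K = k*rho*g/mu = k*g/nu, per the formula above.
k = 1e-12           # intrinsic permeability, m^2 (assumed)
rho = 998.0         # density of water, kg/m^3
g = 9.81            # gravitational acceleration, m/s^2
mu = 1.0e-3         # dynamic viscosity of water, Pa*s
nu = mu / rho       # kinematic viscosity, m^2/s

K_dynamic = k * rho * g / mu
K_kinematic = k * g / nu
print(f"K = {K_dynamic:.3e} m/s "
      f"(the two forms agree: {abs(K_dynamic - K_kinematic) < 1e-12 * K_dynamic})")
```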
Darcy's law is a simple mathematical statement which neatly summarizes several familiar properties that groundwater flowing in aquifers exhibits, including:
A graphical illustration of the use of the steady-state groundwater flow equation (based on Darcy's law and the conservation of mass) is in the construction of flownets, to quantify the amount of groundwater flowing under a dam.
Darcy's law is only valid for slow, viscous flow; however, most groundwater flow cases fall in this category. Typically any flow with a Reynolds number less than one is clearly laminar, and it would be valid to apply Darcy's law. Experimental tests have shown that flow regimes with Reynolds numbers up to 10 may still be Darcian, as in the case of groundwater flow. The Reynolds number (a dimensionless parameter) for porous media flow is typically expressed as
formula_27
where ν is the kinematic viscosity of water, q is the specific discharge (not the pore velocity — with units of length per time), d is a representative grain diameter for the porous media (the standard choice is "d"30, which is the 30% passing size from a grain size analysis using sieves — with units of length).
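A sketch of this validity check in Python; the specific discharge and grain size are assumed illustrative values:

```python
# Porous-media Reynolds number Re = q*d/nu, with the thresholds quoted above.
q = 1e-5     # specific discharge, m/s (assumed)
d = 5e-4     # representative grain diameter d_30, m (assumed, medium sand)
nu = 1e-6    # kinematic viscosity of water, m^2/s

Re = q * d / nu
print(f"Re = {Re:.3g}")
if Re < 1:
    print("Clearly laminar: Darcy's law applies.")
elif Re <= 10:
    print("May still be Darcian, as observed experimentally up to Re ~ 10.")
else:
    print("Inertial effects likely significant; a Forchheimer-type correction may be needed.")
```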
Derivation.
For stationary, creeping, incompressible flow, i.e. with the material acceleration "Du"/"Dt" ≈ 0, the Navier–Stokes equation simplifies to the Stokes equation, which by neglecting the bulk term is:
formula_28
where μ is the viscosity, ui is the velocity in the i direction, and p is the pressure. Assuming the viscous resisting force is linear with the velocity we may write:
formula_29
where φ is the porosity, and kij is the second order permeability tensor. This gives the velocity in the n direction,
formula_30
which gives Darcy's law for the volumetric flux density in the n direction,
formula_31
In isotropic porous media the off-diagonal elements in the permeability tensor are zero, "kij" = 0 for "i" ≠ "j", and the diagonal elements are identical, "kii" = "k", and the common form is obtained as below, which enables the determination of the liquid flow velocity by solving a set of equations in a given region.
formula_32
The above equation is a governing equation for single-phase fluid flow in a porous medium.
Use in petroleum engineering.
Another derivation of Darcy's law is used extensively in petroleum engineering to determine the flow through permeable media — the most simple of which is for a one-dimensional, homogeneous rock formation with a single fluid phase and constant fluid viscosity.
Almost all oil reservoirs have a water zone below the oil leg, and some have also a gas cap above the oil leg. When the reservoir pressure drops due to oil production, water flows into the oil zone from below, and gas flows into the oil zone from above (if the gas cap exists), and we get a simultaneous flow and immiscible mixing of all fluid phases in the oil zone. The operator of the oil field may also inject water (and/or gas) in order to improve oil production. The petroleum industry is therefore using a generalized Darcy equation for multiphase flow that was developed by Muskat et alios. Because Darcy's name is so widespread and strongly associated with flow in porous media, the multiphase equation is denoted Darcy's law for multiphase flow or generalized Darcy equation (or law) or simply Darcy's equation (or law) or simply flow equation if the context says that the text is discussing the multiphase equation of Muskat et alios. Multiphase flow in oil and gas reservoirs is a comprehensive topic, and one of many articles about this topic is Darcy's law for multiphase flow.
Use in coffee brewing.
A number of papers have utilized Darcy's law to model the physics of brewing in a moka pot, specifically how the hot water percolates through the coffee grinds under pressure, starting with a 2001 paper by Varlamov and Balestrino, and continuing with a 2007 paper by Gianino, a 2008 paper by Navarini et al., and a 2008 paper by W. King. The papers will either take the coffee permeability to be constant as a simplification or will measure change through the brewing process.
Additional forms.
Differential expression.
Darcy's law can be expressed very generally as:
formula_33
where q is the volume flux vector of the fluid at a particular point in the medium, "h" is the total hydraulic head, and "K" is the hydraulic conductivity tensor, at that point. The hydraulic conductivity can often be approximated as a scalar. (Note the analogy to Ohm's law in electrostatics. The flux vector is analogous to the current density, head is analogous to voltage, and hydraulic conductivity is analogous to electrical conductivity.)
Quadratic law.
For flows in porous media with Reynolds numbers greater than about 1 to 10, inertial effects can also become significant. Sometimes an inertial term is added to the Darcy's equation, known as Forchheimer term. This term is able to account for the non-linear behavior of the pressure difference vs flow data.
formula_34
where the additional term "k"1 is known as inertial permeability, in units of length formula_22.
The flow in the middle of a sandstone reservoir is so slow that Forchheimer's equation is usually not needed, but the gas flow into a gas production well may be high enough to justify use of Forchheimer's equation. In this case, the inflow performance calculations for the well, not the grid cell of the 3D model, are based on the Forchheimer equation. The effect of this is that an additional rate-dependent skin appears in the inflow performance formula.
Some carbonate reservoirs have many fractures, and Darcy's equation for multiphase flow is generalized in order to govern both flow in fractures and flow in the matrix (i.e. the traditional porous rock). The irregular surface of the fracture walls and high flow rate in the fractures may justify the use of Forchheimer's equation.
Correction for gases in fine media (Knudsen diffusion or Klinkenberg effect).
For gas flow in small characteristic dimensions (e.g., very fine sand, nanoporous structures etc.), the particle-wall interactions become more frequent, giving rise to additional wall friction (Knudsen friction). For a flow in this region, where both viscous and Knudsen friction are present, a new formulation needs to be used. Knudsen presented a semi-empirical model for flow in transition regime based on his experiments on small capillaries. For a porous medium, the Knudsen equation can be given as
formula_35
where N is the molar flux, "R"g is the gas constant, T is the temperature, "D" is the effective Knudsen diffusivity of the porous media. The model can also be derived from the first-principle-based binary friction model (BFM). The differential equation of transition flow in porous media based on BFM is given as
formula_36
This equation is valid for capillaries as well as porous media. The terminology of the Knudsen effect and Knudsen diffusivity is more common in mechanical and chemical engineering. In geological and petrochemical engineering, this effect is known as the Klinkenberg effect. Using the definition of molar flux, the above equation can be rewritten as
formula_37
This equation can be rearranged into the following equation
formula_38
Comparing this equation with conventional Darcy's law, a new formulation can be given as
formula_39
where
formula_40
This is equivalent to the effective permeability formulation proposed by Klinkenberg:
formula_41
where b is known as the Klinkenberg parameter, which depends on the gas and the porous medium structure. This is quite evident if we compare the above formulations. The Klinkenberg parameter b is dependent on permeability, Knudsen diffusivity and viscosity (i.e., both gas and porous medium properties).
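A minimal sketch of the Klinkenberg correction; the intrinsic permeability and the parameter b below are assumed illustrative numbers:

```python
# Klinkenberg-corrected permeability: k_eff = k * (1 + b/p), per the formula above.
k = 1e-16    # intrinsic (liquid) permeability, m^2 (assumed, tight rock)
b = 3e5      # Klinkenberg parameter, Pa (assumed)

for p in (1e5, 1e6, 1e7):                 # mean gas pressure, Pa
    k_eff = k * (1.0 + b / p)
    print(f"p = {p:.0e} Pa  ->  k_eff / k = {k_eff / k:.2f}")
# The correction is largest at low pressure and fades as p grows,
# where gas slippage along the pore walls becomes negligible.
```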
Darcy's law for short time scales.
For very short time scales, a time derivative of flux may be added to Darcy's law, which results in valid solutions at very small times (in heat transfer, this is called the modified form of Fourier's law),
formula_42
where τ is a very small time constant which causes this equation to reduce to the normal form of Darcy's law at "normal" times (> nanoseconds). The main reason for doing this is that the regular groundwater flow equation (diffusion equation) leads to singularities at constant head boundaries at very small times. This form is more mathematically rigorous but leads to a hyperbolic groundwater flow equation, which is more difficult to solve and is only useful at very small times, typically out of the realm of practical use.
Brinkman form of Darcy's law.
Another extension to the traditional form of Darcy's law is the Brinkman term, which is used to account for transitional flow between boundaries (introduced by Brinkman in 1949),
formula_43
where β is an effective viscosity term. This correction term accounts for flow through medium where the grains of the media are porous themselves, but is difficult to use, and is typically neglected.
Validity of Darcy's law.
Darcy's law is valid for laminar flow through sediments. In fine-grained sediments, the dimensions of interstices are small and thus flow is laminar. Coarse-grained sediments also behave similarly but in very coarse-grained sediments the flow may be turbulent. Hence Darcy's law is not always valid in such sediments.
For flow through commercial circular pipes, the flow is laminar when Reynolds number is less than 2000 and turbulent when it is more than 4000, but in some sediments, it has been found that flow is laminar when the value of Reynolds number is less than 1.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\nabla p"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "\\Delta p"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "\\mu"
},
{
"math_id": 5,
"text": "L"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": " Q = \\frac {k A}{\\mu L} \\Delta p"
},
{
"math_id": 8,
"text": " R = \\frac {\\mu L}{k A}"
},
{
"math_id": 9,
"text": "\\mathbf q = - \\frac {k}{\\mu} \\nabla p"
},
{
"math_id": 10,
"text": "\\mathbf q"
},
{
"math_id": 11,
"text": " \\sigma = \\frac k \\mu"
},
{
"math_id": 12,
"text": "q = \\frac Q A"
},
{
"math_id": 13,
"text": "\\nabla p= \\frac {\\Delta p} L"
},
{
"math_id": 14,
"text": "q_i = - \\frac {k_{ij}} \\mu \\partial_j p"
},
{
"math_id": 15,
"text": "\\mathbf q= \\varphi \\, \\mathbf u."
},
{
"math_id": 16,
"text": "\\mathrm{(m/s)}"
},
{
"math_id": 17,
"text": "\\mathrm{(m^2)}"
},
{
"math_id": 18,
"text": "\\mathrm{(Pa \\cdot s)}"
},
{
"math_id": 19,
"text": "\\mathrm{(Pa/m)}"
},
{
"math_id": 20,
"text": "\\Delta p = p_b - p_a"
},
{
"math_id": 21,
"text": "\\mathrm{(Pa)}"
},
{
"math_id": 22,
"text": "\\mathrm{(m)}"
},
{
"math_id": 23,
"text": "\\mathrm{(m^3/s)}"
},
{
"math_id": 24,
"text": "p = \\rho g h "
},
{
"math_id": 25,
"text": "Q = \\frac{k A g}{\\nu L} \\, {\\Delta h}"
},
{
"math_id": 26,
"text": " K = \\frac{k\\rho g}{\\mu}=\\frac{k g}{\\nu}."
},
{
"math_id": 27,
"text": "\\mathrm{Re} = \\frac{q d }{\\nu}\\,,"
},
{
"math_id": 28,
"text": " \\mu\\nabla^2 u_i -\\partial_i p =0\\,,"
},
{
"math_id": 29,
"text": "-\\left(k^{-1}\\right)_{ij} \\mu\\varphi u_j-\\partial_i p=0\\,,"
},
{
"math_id": 30,
"text": "k_{ni}\\left(k^{-1}\\right)_{ij} u_j= \\delta_{nj} u_j = u_n = -\\frac{k_{ni}}{\\varphi\\mu} \\partial_i p\\,,"
},
{
"math_id": 31,
"text": "q_n=-\\frac{k_{ni}}{\\mu} \\, \\partial_i p\\,."
},
{
"math_id": 32,
"text": "\\boldsymbol{q}=-\\frac{k}{\\mu} \\, \\boldsymbol{\\nabla} p \\,."
},
{
"math_id": 33,
"text": "\\mathbf{q}=-K\\nabla h"
},
{
"math_id": 34,
"text": "\\nabla p =-\\frac{\\mu}{k}q-\\frac{\\rho}{k_1}q^2\\,,"
},
{
"math_id": 35,
"text": "N=-\\left(\\frac{k}{\\mu}\\frac{p_a+p_b}{2}+D_\\mathrm{K}^\\mathrm{eff}\\right)\\frac{1}{R_\\mathrm{g}T}\\frac{p_\\mathrm{b}-p_\\mathrm{a}}{L}\\,,"
},
{
"math_id": 36,
"text": "\\frac{\\partial p}{\\partial x}=-R_\\mathrm{g}T\\left(\\frac{k p}{\\mu}+D_\\mathrm{K}\\right)^{-1}N\\,."
},
{
"math_id": 37,
"text": "\\frac{\\partial p}{\\partial x}=-R_\\mathrm{g}T\\left(\\frac{k p}{\\mu}+D_\\mathrm{K}\\right)^{-1}\\dfrac{p}{R_\\mathrm{g}T}q\\,."
},
{
"math_id": 38,
"text": " q=-\\frac{k}{\\mu}\\left(1+\\frac{D_\\mathrm{K}\\mu}{k}\\frac{1}{p}\\right)\\frac{\\partial p}{\\partial x}\\,."
},
{
"math_id": 39,
"text": " q=-\\frac{k^\\mathrm{eff}}{\\mu}\\frac{\\partial p}{\\partial x}\\,,"
},
{
"math_id": 40,
"text": "k^\\mathrm{eff}=k\\left(1+\\frac{D_\\mathrm{K}\\mu}{k}\\frac{1}{p}\\right)\\,."
},
{
"math_id": 41,
"text": "k^\\mathrm{eff}=k\\left(1+\\frac{b}{p}\\right)\\,."
},
{
"math_id": 42,
"text": "\\tau \\frac{\\partial q}{\\partial t}+q=-k \\nabla h\\,,"
},
{
"math_id": 43,
"text": "-\\beta \\nabla^2 q +q =-\\frac{k}{\\mu} \\nabla p\\,,"
}
] | https://en.wikipedia.org/wiki?curid=849543 |
849738 | Exterior covariant derivative | In the mathematical field of differential geometry, the exterior covariant derivative is an extension of the notion of exterior derivative to the setting of a differentiable principal bundle or vector bundle with a connection.
Definition.
Let "G" be a Lie group and "P" → "M" be a principal "G"-bundle on a smooth manifold "M". Suppose there is a connection on "P"; this yields a natural direct sum decomposition formula_0 of each tangent space into the horizontal and vertical subspaces. Let formula_1 be the projection to the horizontal subspace.
If "ϕ" is a "k"-form on "P" with values in a vector space "V", then its exterior covariant derivative "Dϕ" is a form defined by
formula_2
where "v""i" are tangent vectors to "P" at "u".
Suppose that "ρ" : "G" → GL("V") is a representation of "G" on a vector space "V". If "ϕ" is equivariant in the sense that
formula_3
where formula_4, then "Dϕ" is a tensorial ("k" + 1)-form on "P" of the type "ρ": it is equivariant and horizontal (a form "ψ" is horizontal if "ψ"("v"0, ..., "v"k) = "ψ"("hv"0, ..., "hv""k").)
By abuse of notation, the differential of "ρ" at the identity element may again be denoted by "ρ":
formula_5
Let formula_6 be the connection one-form and formula_7 the representation of the connection in formula_8 That is, formula_7 is a formula_9-valued form, vanishing on the horizontal subspace. If "ϕ" is a tensorial "k"-form of type "ρ", then
formula_10
where, following the notation in "", we wrote
formula_11
Unlike the usual exterior derivative, which squares to 0, the exterior covariant derivative does not. In general, one has, for a tensorial zero-form "ϕ",
formula_12
where "F" = "ρ"(Ω) is the representation in formula_9 of the curvature two-form Ω. The form F is sometimes referred to as the field strength tensor, in analogy to the role it plays in electromagnetism. Note that "D"2 vanishes for a flat connection (i.e. when Ω = 0).
If "ρ" : "G" → GL(R"n"), then one can write
formula_13
where formula_14 is the matrix with 1 at the ("i", "j")-th entry and zero on the other entries. The matrix formula_15 whose entries are 2-forms on "P" is called the curvature matrix.
For vector bundles.
Given a smooth real vector bundle "E" → "M" with a connection ∇ and rank r, the exterior covariant derivative is a real-linear map on the vector-valued differential forms which are valued in E:
formula_16
The covariant derivative is such a map for "k" = 0. The exterior covariant derivative extends this map to general "k". There are several equivalent ways to define this object:
formula_17
where "x"1, "x"2, "x"3 are arbitrary tangent vectors at p which are extended to smooth locally-defined vector fields "X"1, "X"2 "X"3. The legitimacy of this definition depends on the fact that the above expression depends only on "x"1, "x"2, "x"3, and not on the choice of extension. This can be verified by the Leibniz rule for covariant differentiation and for the Lie bracket of vector fields. The pattern established in the above formula in the case "k"
2 can be directly extended to define the exterior covariant derivative for arbitrary k.
Alternatively, the exterior covariant derivative can be characterized as the map which for "k" = 0 is the covariant derivative and in general satisfies the Leibniz rule
formula_18
for any differential k-form ω and any vector-valued form s. This may also be viewed as a direct inductive definition. For instance, for any vector-valued differential 1-form s and any local frame "e"1, ..., "e""r" of the vector bundle, the coordinates of s are locally-defined differential 1-forms "ω"1, ..., "ω""r". The above inductive formula then says that
formula_19
In order for this to be a legitimate definition of "d"∇"s", it must be verified that the choice of local frame is irrelevant. This can be checked by considering a second local frame obtained by an arbitrary change-of-basis matrix; the inverse matrix provides the change-of-basis matrix for the 1-forms "ω"1, ..., "ω""r". When substituted into the above formula, the Leibniz rule as applied for the standard exterior derivative and for the covariant derivative ∇ cancel out the arbitrary choice.
formula_20
The fact that this defines a tensor field valued in E is a direct consequence of the same fact for the covariant derivative. The further fact that it is a differential 3-form valued in E asserts the full anti-symmetry in "i", "j", "k" and is directly verified from the above formula and the contextual assumption that s is a vector-valued differential 2-form, so that "s"α"ij" = −"s"α"ji". The pattern in this definition of the exterior covariant derivative for "k" = 2 can be directly extended to larger values of "k". This definition may alternatively be expressed in terms of an arbitrary local frame of E but without considering coordinates on M. Then a vector-valued differential 2-form is expressed by differential 2-forms "s"1, ..., "s""r" and the connection is expressed by the connection 1-forms, a skew-symmetric "r" × "r" matrix of differential 1-forms θαβ. The exterior covariant derivative of s, as a vector-valued differential 3-form, is expressed relative to the local frame by r many differential 3-forms, defined by
formula_21
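The coordinate component formula above can be exercised numerically: the alternating sum is fully antisymmetric in "i", "j", "k" whenever its input is antisymmetric in the last two indices. The sketch below checks this purely algebraic fact on random arrays standing in for the covariant-derivative components (no actual connection is computed; the rank and dimension are arbitrary choices):

```python
import numpy as np

def d_nabla_components(A):
    """Alternating sum  D_i s^a_{jk} - D_j s^a_{ik} + D_k s^a_{ij},
    where A[a, i, j, k] stands in for the covariant derivative D_i s^a_{jk}."""
    return A - np.einsum('ajik->aijk', A) + np.einsum('akij->aijk', A)

rng = np.random.default_rng(0)
r, n = 2, 4                                  # fibre rank and base dimension (arbitrary)
A = rng.standard_normal((r, n, n, n))
A -= np.einsum('aikj->aijk', A)              # enforce antisymmetry in (j, k), like s^a_{jk}

T = d_nabla_components(A)
print(np.allclose(T, -np.einsum('ajik->aijk', T)))   # antisymmetric under i <-> j
print(np.allclose(T, -np.einsum('aikj->aijk', T)))   # antisymmetric under j <-> k
```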
In the case of the trivial real line bundle ℝ × "M" → "M" with its standard connection, vector-valued differential forms and differential forms can be naturally identified with one another, and each of the above definitions coincides with the standard exterior derivative.
Given a principal bundle, any linear representation of the structure group defines an associated bundle, and any connection on the principal bundle induces a connection on the associated vector bundle. Differential forms valued in the vector bundle may be naturally identified with fully anti-symmetric tensorial forms on the total space of the principal bundle. Under this identification, the notions of exterior covariant derivative for the principal bundle and for the vector bundle coincide with one another.
The curvature of a connection on a vector bundle may be defined as the composition of the two exterior covariant derivatives Ω0("M", "E") → Ω1("M", "E") and Ω1("M", "E") → Ω2("M", "E"), so that it is defined as a real-linear map "F": Ω0("M", "E") → Ω2("M", "E"). It is a fundamental but not immediately apparent fact that "F"("s")"p": "T""p""M" × "T""p""M" → "E""p" only depends on "s"("p"), and does so linearly. As such, the curvature may be regarded as an element of Ω2("M", End("E")). Depending on how the exterior covariant derivative is formulated, various alternative but equivalent definitions of curvature (some without the language of exterior differentiation) can be obtained.
It is a well-known fact that the composition of the standard exterior derivative with itself is zero: "d"("d"ω) = 0. In the present context, this can be regarded as saying that the standard connection on the trivial line bundle ℝ × "M" → "M" has zero curvature.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "T_u P = H_u \\oplus V_u"
},
{
"math_id": 1,
"text": "h: T_u P \\to H_u"
},
{
"math_id": 2,
"text": "D\\phi(v_0, v_1,\\dots, v_k)= d \\phi(h v_0 ,h v_1,\\dots, h v_k)"
},
{
"math_id": 3,
"text": "R_g^* \\phi = \\rho(g)^{-1}\\phi"
},
{
"math_id": 4,
"text": "R_g(u) = ug"
},
{
"math_id": 5,
"text": "\\rho: \\mathfrak{g} \\to \\mathfrak{gl}(V)."
},
{
"math_id": 6,
"text": "\\omega"
},
{
"math_id": 7,
"text": "\\rho(\\omega)"
},
{
"math_id": 8,
"text": "\\mathfrak{gl}(V)."
},
{
"math_id": 9,
"text": "\\mathfrak{gl}(V)"
},
{
"math_id": 10,
"text": "D \\phi = d \\phi + \\rho(\\omega) \\cdot \\phi,"
},
{
"math_id": 11,
"text": "\n (\\rho(\\omega) \\cdot \\phi)(v_1, \\dots, v_{k+1}) =\n {1 \\over (1+k)!} \\sum_{\\sigma} \\operatorname{sgn}(\\sigma)\\rho(\\omega(v_{\\sigma(1)})) \\phi(v_{\\sigma(2)}, \\dots, v_{\\sigma(k+1)}).\n"
},
{
"math_id": 12,
"text": "D^2\\phi=F \\cdot \\phi."
},
{
"math_id": 13,
"text": "\\rho(\\Omega) = F = \\sum {F^i}_j {e^j}_i"
},
{
"math_id": 14,
"text": "{e^i}_j"
},
{
"math_id": 15,
"text": "{F^i}_j"
},
{
"math_id": 16,
"text": "d^\\nabla:\\Omega^k(M,E)\\to\\Omega^{k+1}(M,E)."
},
{
"math_id": 17,
"text": "\\begin{align}\\nabla_{x_1}(s(X_2,X_3))&-\\nabla_{x_2}(s(X_1,X_3))+\\nabla_{x_3}(s(X_1,X_2))\\\\ &-s([X_1,X_2],x_3)+s([X_1,X_3],x_2)-s([X_2,X_3],x_1).\\end{align}"
},
{
"math_id": 18,
"text": "d^\\nabla(\\omega \\wedge s) = (d\\omega) \\wedge s + (-1)^k \\omega \\wedge (d^\\nabla s)"
},
{
"math_id": 19,
"text": "\\begin{align}\nd^\\nabla s&=d^\\nabla(\\omega_1\\wedge e_1+\\cdots+\\omega_r\\wedge e_r)\\\\\n&=d\\omega_1\\wedge e_1+\\cdots+d\\omega_r\\wedge e_r-\\omega_1\\wedge \\nabla e_1-\\cdots-\\omega_r\\wedge\\nabla e_r.\\end{align}"
},
{
"math_id": 20,
"text": "(d^\\nabla s)^\\alpha{}_{ijk}=\\nabla_is^\\alpha{}_{jk}-\\nabla_js^\\alpha{}_{ik}+\\nabla_ks^\\alpha{}_{ij}."
},
{
"math_id": 21,
"text": "(d^\\nabla s)^\\alpha=d(s^\\alpha)+\\theta_\\beta{}^\\alpha\\wedge s^\\beta."
},
{
"math_id": 22,
"text": "d\\Omega + \\operatorname{ad}(\\omega) \\cdot \\Omega = d\\Omega + [\\omega \\wedge \\Omega] = 0"
}
] | https://en.wikipedia.org/wiki?curid=849738 |
849762 | Economic order quantity | Production scheduling model
Economic order quantity (EOQ), also known as financial purchase quantity or economic buying quantity, is the order quantity that minimizes the total holding costs and ordering costs in inventory management. It is one of the oldest classical production scheduling models. The model was developed by Ford W. Harris in 1913, but the consultant R. H. Wilson applied it extensively, and he and K. Andler are given credit for their in-depth analysis.
Overview.
EOQ applies only when demand for a product is constant over a period of time (such as a year) and each new order is delivered in full when inventory reaches zero. There is a fixed cost for each order placed, regardless of the quantity of items ordered; an order is assumed to contain only one type of inventory item. There is also a cost for each unit held in storage, commonly known as holding cost, sometimes expressed as a percentage of the purchase cost of the item. Although the EOQ formulation is straightforward, factors such as transportation rates and quantity discounts factor into its real-world application.
The EOQ indicates the optimal number of units to order to minimize the total cost associated with the purchase, delivery, and storage of the product.
The required parameters to the solution are the total demand for the year, the purchase cost for each item, the fixed cost to place the order for a single item and the storage cost for each item per year. Note that the number of times an order is placed will also affect the total cost, though this number can be determined from the other parameters.
Total cost function and derivation of EOQ formula.
The single-item EOQ formula finds the minimum point of the following cost function:
Total Cost = purchase cost or production cost + ordering cost + holding cost
Where formula_0 is the total annual inventory cost, formula_1 is the purchase unit price, formula_2 is the order quantity, formula_3 is the optimal order quantity, formula_4 is the annual demand quantity, formula_5 is the fixed cost per order (ordering or setup cost), and formula_6 is the annual holding cost per unit (carrying or storage cost), so that:
formula_7.
To determine the minimum point of the total cost curve, calculate the derivative of the total cost with respect to Q (assume all other variables are constant) and set it equal to 0:
formula_8
Solving for Q gives Q* (the optimal order quantity):
formula_9
Therefore:
Economic Order Quantity
formula_10
Q* is independent of P; it is a function of only K, D, h.
The optimal value Q* may also be found by recognizing that
formula_11
where the non-negative quadratic term disappears for formula_12 which provides the cost minimum formula_13
Example.
Suppose the annual demand quantity formula_4 = 10000 units, the fixed cost per order formula_5 = $40, the purchase price formula_1 = $50 per unit, and the annual holding cost per unit formula_6 = $4 + 2% × $50 = $5. Then:
Economic order quantity = formula_14 formula_15 = 400 units
Number of orders per year (based on EOQ) formula_16
Total cost formula_17
Total cost formula_18
If we check the total cost for any order quantity other than 400(=EOQ), we will see that the cost is higher. For instance, supposing 500 units per order, then
Total cost formula_19
Similarly, if we choose 300 for the order quantity, then
Total cost formula_20
This illustrates that the economic order quantity is always in the best interests of the firm.
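The example can be reproduced with a short script using the figures above (annual demand 10000 units, ordering cost 40, unit price 50, holding cost 5 per unit per year):

```python
from math import sqrt

def eoq(D, K, h):
    """Economic order quantity Q* = sqrt(2*D*K/h)."""
    return sqrt(2.0 * D * K / h)

def total_cost(D, K, h, P, Q):
    """Annual total cost = purchase cost + ordering cost + holding cost."""
    return P * D + K * D / Q + h * Q / 2.0

D, K, P, h = 10_000, 40, 50, 5        # figures from the worked example above
Q_star = eoq(D, K, h)
print(f"EOQ = {Q_star:.0f} units, {D / Q_star:.0f} orders per year")
for Q in (Q_star, 500, 300):
    print(f"Q = {Q:5.0f}  ->  total cost = {total_cost(D, K, h, P, Q):,.2f}")
# Any order quantity other than 400 gives a strictly higher total cost.
```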
Extensions of the EOQ model.
Quantity discounts.
An important extension to the EOQ model is to accommodate quantity discounts. There are two main types of quantity discounts: (1) all-units and (2) incremental. Here is a numerical example:
In order to find the optimal order quantity under different quantity discount schemes, one should use algorithms; these algorithms are developed under the assumption that the EOQ policy is still optimal with quantity discounts. Perera et al. (2017) establish this optimality and fully characterize the (s,S) optimality within the EOQ setting under general cost structures.
Design of optimal quantity discount schedules.
In presence of a strategic customer, who responds optimally to discount schedules, the design of an optimal quantity discount scheme by the supplier is complex and has to be done carefully. This is particularly so when the demand at the customer is itself uncertain. An interesting effect called the "reverse bullwhip" takes place where an increase in consumer demand uncertainty actually reduces order quantity uncertainty at the supplier.
Backordering costs and multiple items.
Several extensions can be made to the EOQ model, including backordering costs and multiple items. In the case backorders are permitted, the inventory carrying costs per cycle are:
formula_21
where s is the number of backorders when order quantity Q is delivered and formula_22 is the rate of demand. The backorder cost per cycle is:
formula_23
where formula_24 and formula_25 are backorder costs, formula_26, T being the cycle length and formula_27. The average annual variable cost is the sum of order costs, holding inventory costs and backorder costs:
formula_28
To minimize formula_29 impose the partial derivatives equal to zero:
formula_30
formula_31
Substituting the second equation into the first gives the following quadratic equation:
formula_32
If formula_33, either s=0 or formula_34 is optimal. In the first case the optimal lot is given by the classic EOQ formula; in the second case an order is never placed and the minimum yearly cost is given by formula_35. If formula_36 or formula_37, then formula_38 is optimal; if formula_39, then there shouldn't be any inventory system. If formula_40, solving the preceding quadratic equation yields:
formula_41
formula_42
If there are backorders, the reorder point is: formula_43; with m being the largest integer formula_44 and μ the lead time demand.
Additionally, the economic order interval can be determined from the EOQ and the economic production quantity model (which determines the optimal production quantity) can be determined in a similar fashion.
A version of the model, the Baumol-Tobin model, has also been used to determine the money demand function, where a person's holdings of money balances can be seen in a way parallel to a firm's holdings of inventory.
Malakooti (2013) has introduced the multi-criteria EOQ models where the criteria could be minimizing the total cost, Order quantity (inventory), and Shortages.
A version taking the time-value of money into account was developed by Trippi and Lewin.
Imperfect quality.
Another important extension of the EOQ model is to consider items with imperfect quality. Salameh and Jaber (2000) were the first to study imperfect items in an EOQ model very thoroughly. They consider an inventory problem in which the demand is deterministic and a fraction of the items in each lot is imperfect; these items are screened out by the buyer and sold at the end of the cycle at a discounted price.
Criticisms.
The EOQ model and its sister, the economic production quantity model (EPQ), have been criticised for "their restrictive set[s] of assumptions". Guga and Musa make use of the model for an Albanian business case study and conclude that the model is "perfect theoretically, but not very suitable from the practical perspective of this firm". However, James Cargal notes that the formula was developed when business calculations were undertaken "by hand", or using logarithmic tables or a slide rule. Use of spreadsheets and specialist software allows for more versatility in the use of the formula and adoption of "assumptions which are more realistic" than in the original model.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "Q^*"
},
{
"math_id": 4,
"text": "D"
},
{
"math_id": 5,
"text": "K"
},
{
"math_id": 6,
"text": "h"
},
{
"math_id": 7,
"text": "T = PD + K {\\frac{D}{Q}} + h {\\frac{Q}{2}}"
},
{
"math_id": 8,
"text": "{0} = -{\\frac{DK}{Q^2}}+{\\frac{h}{2}}"
},
{
"math_id": 9,
"text": "Q^{*2}={\\frac{2DK}{h}}"
},
{
"math_id": 10,
"text": "Q^* = \\sqrt{\\frac{2DK}{h}} "
},
{
"math_id": 11,
"text": "T = {\\frac{DK}{Q}} + {\\frac{hQ}{2}} + PD ={\\frac{h}{2Q}}(Q - \\sqrt{2DK/h})^2 + \\sqrt{2hDK} +PD, "
},
{
"math_id": 12,
"text": "Q = \\sqrt{2DK/h}, "
},
{
"math_id": 13,
"text": "T_{min} = \\sqrt{2hDK} + PD. "
},
{
"math_id": 14,
"text": " \\sqrt{\\frac{2D\\cdot K}{h}} "
},
{
"math_id": 15,
"text": " = \\sqrt{\\frac{2\\cdot 10000\\cdot 40}{4 + 50 \\cdot 2\\%}} = \\sqrt{\\frac{2\\cdot 10000 \\cdot 40}{5}}"
},
{
"math_id": 16,
"text": " = {\\frac{10000}{400}} = 25 "
},
{
"math_id": 17,
"text": " = P\\cdot D + K (D/EOQ) + h (EOQ/2) "
},
{
"math_id": 18,
"text": " = 50\\cdot 10000 + 40\\cdot (10000/400) + 5\\cdot (400/2) = 502000 "
},
{
"math_id": 19,
"text": " = 50\\cdot 10000 + 40\\cdot (10000/500) + 5\\cdot (500/2) = 502050 "
},
{
"math_id": 20,
"text": " = 50\\cdot 10000 + 40\\cdot (10000/300) + 5\\cdot (300/2) = 502083.33 "
},
{
"math_id": 21,
"text": "IC \\int\\limits_{0}^{T_1}(Q-s-\\lambda t)\\,dt = \\frac{IC}{2 \\lambda} (Q-s)^2,"
},
{
"math_id": 22,
"text": "\\lambda"
},
{
"math_id": 23,
"text": "\\pi s + \\hat{\\pi} \\int\\limits_{0}^{T_2}\\lambda t dt =\\pi s +\\frac{1}{2} \\hat{\\pi} \\lambda T^{2}_{2} = \\pi s + \\frac{ \\hat{\\pi} s^{2}}{2\\lambda},"
},
{
"math_id": 24,
"text": "\\pi"
},
{
"math_id": 25,
"text": "\\hat{\\pi}"
},
{
"math_id": 26,
"text": "T_{2}=T-T_{1}"
},
{
"math_id": 27,
"text": "T_{1}=(Q-s) / \\lambda"
},
{
"math_id": 28,
"text": "\\mathcal{K} = \\frac{\\lambda}{Q} A+\\frac{1}{2Q} IC (Q-s)^2+\\frac{1} {Q} [ \\pi \\lambda s+ \\frac{1}{2} \\hat{\\pi} s^{2}]"
},
{
"math_id": 29,
"text": "\\mathcal{K}"
},
{
"math_id": 30,
"text": "\\frac{\\partial \\mathcal{K}}{\\partial Q} =- \\frac{1}{Q^2} \\left[ {\\lambda} A+\\frac{1}{2} IC (Q-s)^2+\\pi \\lambda s+ \\frac{1}{2} \\hat{\\pi} s^{2} \\right]+\\frac{IC}{Q}(Q-s)=0"
},
{
"math_id": 31,
"text": "\\frac{\\partial \\mathcal{K}}{\\partial s} =-\\frac{IC}{Q}(Q-s) + \\frac{1}{Q} \\pi \\lambda + \\frac{1}{Q} \\hat{\\pi} s =0"
},
{
"math_id": 32,
"text": "[\\hat{\\pi} ^{2} + \\hat{\\pi} IC] s^2 +2\\pi \\hat{\\pi} \\lambda s+(\\pi \\lambda) ^2 -2 \\lambda A IC=0"
},
{
"math_id": 33,
"text": "\\hat{\\pi}=0"
},
{
"math_id": 34,
"text": "s=\\infty"
},
{
"math_id": 35,
"text": "\\pi \\lambda"
},
{
"math_id": 36,
"text": "\\pi > \\sqrt{\\frac{2AIC}{\\lambda}} =\\delta"
},
{
"math_id": 37,
"text": "\\pi \\lambda > K_{w}"
},
{
"math_id": 38,
"text": "s^*=0"
},
{
"math_id": 39,
"text": "\\pi<\\delta"
},
{
"math_id": 40,
"text": "\\hat{\\pi}\\ne0"
},
{
"math_id": 41,
"text": "s^* = [\\hat {\\pi} + IC] ^{-1} \\left ( -\\pi \\lambda + \\left [ (2\\lambda AIC) \\left ( 1 + \\frac{IC} {\\hat{\\pi}} \\right)- \\frac{IC}{\\hat{\\pi}}(\\pi \\lambda )^{2} \\right]^{1/2} \\right ) "
},
{
"math_id": 42,
"text": "Q^* = \\left [ \\frac{\\hat{\\pi}+IC}{ \\hat{\\pi}} \\right]^{1/2} \\left [ \\frac{2 \\lambda A}{IC} -\\frac{(\\pi \\lambda)^2}{IC(\\hat{\\pi}+IC)} \\right]^{1/2}"
},
{
"math_id": 43,
"text": "r^*_{h} = \\mu - mQ^* - s^*"
},
{
"math_id": 44,
"text": "m \\leq \\frac{\\tau}{T}"
}
] | https://en.wikipedia.org/wiki?curid=849762 |
849779 | Bond convexity | Financial measurement
In finance, bond convexity is a measure of the non-linear relationship of bond prices to changes in interest rates, and is defined as the second derivative of the price of the bond with respect to interest rates (duration is the first derivative). In general, the higher the duration, the more sensitive the bond price is to the change in interest rates. Bond convexity is one of the most basic and widely used forms of convexity in finance. Convexity was based on the work of Hon-Fei Lai and popularized by Stanley Diller.
Calculation of convexity.
Duration is a linear measure or 1st derivative of how the price of a bond changes in response to interest rate changes. As interest rates change, the price is not likely to change linearly, but instead it would change over some curved function of interest rates. The more curved the price function of the bond is, the more inaccurate duration is as a measure of the interest rate sensitivity.
Convexity is a measure of the curvature or 2nd derivative of how the price of a bond varies with interest rate, i.e. how the duration of a bond changes as the interest rate changes. Specifically, one assumes that the interest rate is constant across the life of the bond and that changes in interest rates occur evenly. Using these assumptions, duration can be formulated as the first derivative of the price function of the bond with respect to the interest rate in question. Then the convexity would be the second derivative of the price function with respect to the interest rate.
Convexity does not assume the relationship between Bond value and interest rates to be linear. In actual markets, the assumption of constant interest rates and even changes is not correct, and more complex models are needed to actually price bonds. However, these simplifying assumptions allow one to quickly and easily calculate factors which describe the sensitivity of the bond prices to interest rate changes.
Why bond convexities may differ.
The price sensitivity to parallel changes in the term structure of interest rates is highest with a zero-coupon bond and lowest with an amortizing bond (where the payments are front-loaded). Although the amortizing bond and the zero-coupon bond have different sensitivities at the same maturity, if their final maturities differ so that they have identical bond durations then they will have identical sensitivities. That is, their prices will be affected equally by small, first-order, (and parallel) yield curve shifts. They will, however, start to change by different amounts with each "further" incremental parallel rate shift due to their differing payment dates and amounts.
For two bonds with the same par value, coupon, and maturity, convexity may differ depending on what point on the price yield curve they are located.
Mathematical definition.
If the "flat" floating interest rate is "r" and the bond price is "B", then the convexity "C" is defined as
formula_0
Another way of expressing "C" is in terms of the modified duration "D":
formula_1
Therefore,
formula_2
leaving
formula_3
Where "D" is a Modified Duration
How bond duration changes with a changing interest rate.
Return to the standard definition of modified duration:
formula_4
where "P"("i") is the present value of coupon "i", and "t"("i") is the future payment date.
As the interest rate increases, the present value of longer-dated payments declines in relation to earlier coupons (by the discount factor between the early and late payments). However, bond price also declines when interest rate increases, but changes in the present value of sum of each coupons times timing (the numerator in the summation) are larger than changes in the bond price (the denominator in the summation). Therefore, increases in "r" must decrease the duration (or, in the case of zero-coupon bonds, leave the unmodified duration constant). Note that the modified duration "D" differs from the regular duration by the factor one over "1 + r" (shown above), which also decreases as "r" is increased.
formula_5
Given the relation between convexity and duration above, conventional bond convexities must always be positive.
The positivity of convexity can also be proven analytically for basic interest rate securities. For example, under the assumption of a flat yield curve one can write the value of a coupon-bearing bond as formula_6, where "Ci" stands for the coupon paid at time "ti". Then it is easy to see that
formula_7
Note that this conversely implies the negativity of the derivative of duration by differentiating formula_8.
formula_9
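As a sketch under the flat, continuously compounded yield assumption used above, duration and convexity of a coupon bond can be computed directly and used in the second-order price-change approximation; the bond's cash flows, yield and shift are assumed illustrative values:

```python
import numpy as np

def price(c, t, r):
    """B(r) = sum_i c_i * exp(-r * t_i), flat continuously compounded yield."""
    return float(np.sum(c * np.exp(-r * t)))

def duration_convexity(c, t, r):
    """Duration D = -B'(r)/B and convexity C = B''(r)/B for the bond above."""
    disc = c * np.exp(-r * t)
    B = disc.sum()
    return B, (t * disc).sum() / B, (t**2 * disc).sum() / B

t = np.arange(1.0, 6.0)                       # payment dates, years (assumed)
c = np.array([5.0, 5.0, 5.0, 5.0, 105.0])     # coupons plus principal (assumed)
r, dr = 0.04, 0.01                            # yield and parallel shift (assumed)

B, D, C = duration_convexity(c, t, r)
estimate = B * (0.5 * C * dr**2 - D * dr)     # the approximation formula above
exact = price(c, t, r + dr) - B
print(f"B = {B:.4f}, D = {D:.4f}, C = {C:.4f}")
print(f"exact price change = {exact:.4f}, duration + convexity estimate = {estimate:.4f}")
```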
Effective convexity.
For a bond with an embedded option, a yield to maturity based calculation of convexity (and of duration) does not consider how changes in the yield curve will alter the cash flows due to option exercise. To address this, an effective convexity must be calculated numerically. Effective convexity is a discrete approximation of the second derivative of the bond's value as a function of the interest rate:
formula_10
where formula_11 is the bond value as calculated using an option pricing model, formula_12 is the amount that yield changes, and formula_13 are the values that the bond will take if the yield falls by "formula_14" or rises by "formula_14", respectively (a parallel shift).
These values are typically found using a tree-based model, built for the "entire" yield curve, and therefore capturing exercise behavior at each point in the option's life as a function of both time and interest rates; see .
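A minimal sketch of the finite-difference formula; a plain coupon-bond pricer stands in for the option pricing model "V", whereas for a bond with an embedded option "V" would come from a lattice or tree model as noted above. All inputs are assumed illustrative values:

```python
def V(y):
    """Stand-in pricing model: 5-year bond, annual coupon 5, principal 100,
    valued at a flat annually compounded yield y."""
    cashflows = [(ti, 5.0) for ti in (1, 2, 3, 4)] + [(5, 105.0)]
    return sum(cf / (1.0 + y) ** ti for ti, cf in cashflows)

y0, dy = 0.04, 0.001                       # base yield and yield shift (assumed)
V0, V_minus, V_plus = V(y0), V(y0 - dy), V(y0 + dy)

effective_convexity = (V_minus - 2.0 * V0 + V_plus) / (V0 * dy**2)
effective_duration = (V_minus - V_plus) / (2.0 * V0 * dy)   # companion first-order measure
print(f"effective duration  = {effective_duration:.4f}")
print(f"effective convexity = {effective_convexity:.4f}")
```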
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C = \\frac{1}{B} \\frac{d^2\\left(B(r)\\right)}{dr^2}. "
},
{
"math_id": 1,
"text": " \\frac{d}{dr} B (r) = -DB."
},
{
"math_id": 2,
"text": "CB = \\frac{d(-DB)}{dr} = (-D)(-DB) + \\left(-\\frac{dD}{dr}\\right)(B),"
},
{
"math_id": 3,
"text": "C = D^2 - \\frac{dD}{dr}."
},
{
"math_id": 4,
"text": " D = \\frac {1}{1+r}\\sum_{i=1}^{n}\\frac {P(i)t(i)}{B} "
},
{
"math_id": 5,
"text": "\\frac{dD}{dr} \\leq 0"
},
{
"math_id": 6,
"text": "B (r) = \\sum_{i=1}^{n} c_i e^{-r t_i} "
},
{
"math_id": 7,
"text": "\\frac{d^2B}{dr^2} = \\sum_{i=1}^{n} c_i e^{-r t_i} t_i^2 \\geq 0"
},
{
"math_id": 8,
"text": "dB / dr = - D B "
},
{
"math_id": 9,
"text": "\\Delta B = B\\left[\\frac{C}{2}(\\Delta r)^2 - D\\Delta r\\right]."
},
{
"math_id": 10,
"text": "\\text{Effective convexity} = \\frac {V_{-\\Delta y} -2V +V_{+\\Delta y}}{(V_0)\\Delta y^2} "
},
{
"math_id": 11,
"text": "V"
},
{
"math_id": 12,
"text": "\\Delta y"
},
{
"math_id": 13,
"text": "V_{-\\Delta y}\\text{ and } V_{+\\Delta y} "
},
{
"math_id": 14,
"text": "y"
}
] | https://en.wikipedia.org/wiki?curid=849779 |
8498996 | R-matrix | The term R-matrix has several meanings, depending on the field of study.
The term R-matrix is used in connection with the Yang–Baxter equation, first introduced in the field of statistical mechanics in the works of J. B. McGuire in 1964 and C. N. Yang in 1967 and in the group algebra formula_0 of the symmetric group in the work of A. A. Jucys in 1966.
The classical R-matrix arises in the definition of the classical Yang–Baxter equation.
In quasitriangular Hopf algebra, the R-matrix is a solution of the Yang–Baxter equation.
The numerical modeling of diffraction gratings in optical science can be performed using the R-matrix propagation algorithm.
R-matrix method in quantum mechanics.
There is a method in computational quantum mechanics for studying scattering known as the R-matrix. This method was originally formulated for studying resonances in nuclear scattering by Wigner and Eisenbud. Using that work as a basis, an R-matrix method was developed for electron, positron and photon scattering by atoms. This approach was later adapted for electron, positron and photon scattering by molecules.
R-matrix method is used in UKRmol and UKRmol+ code suits. The user-friendly software Quantemol Electron Collisions (Quantemol-EC) and its predecessor Quantemol-N are based on UKRmol/UKRmol+ and employ MOLPRO package for electron configuration calculations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{C} [S_n] "
}
] | https://en.wikipedia.org/wiki?curid=8498996 |
8499571 | Negative probability | Concept in science
The probability of the outcome of an experiment is never negative, although a quasiprobability distribution allows a negative probability, or quasiprobability for some events. These distributions may apply to unobservable events or conditional probabilities.
Physics and mathematics.
In 1942, Paul Dirac wrote a paper "The Physical Interpretation of Quantum Mechanics" where he introduced the concept of negative energies and negative probabilities:
<templatestyles src="Template:Blockquote/styles.css" />Negative energies and probabilities should not be considered as nonsense. They are well-defined concepts mathematically, like a negative of money.
The idea of negative probabilities later received increased attention in physics and particularly in quantum mechanics. Richard Feynman argued that no one objects to using negative numbers in calculations: although "minus three apples" is not a valid concept in real life, negative money is valid. Similarly he argued how negative probabilities as well as probabilities above unity possibly could be useful in probability calculations.
Negative probabilities have later been suggested to solve several problems and paradoxes. "Half-coins" provide simple examples for negative probabilities. These strange coins were introduced in 2005 by Gábor J. Székely. Half-coins have infinitely many sides numbered with 0,1,2... and the positive even numbers are taken with negative probabilities. Two half-coins make a complete coin in the sense that if we flip two half-coins then the sum of the outcomes is 0 or 1 with probability 1/2 as if we simply flipped a fair coin.
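The half-coin construction can be checked numerically: the signed weights are the coefficients of the square root of the fair coin's probability generating function (1 + "x")/2, and convolving two half-coins recovers the fair coin. The truncation order in the sketch below is an arbitrary choice:

```python
import numpy as np

# Signed "probabilities" of the half-coin: coefficients of sqrt((1 + x)/2),
# i.e. C(1/2, n) / sqrt(2).  Even sides n >= 2 receive negative weights.
N = 12                          # truncation order (arbitrary for this sketch)
coeff = [1.0]                   # generalized binomial coefficients C(1/2, n)
for n in range(1, N + 1):
    coeff.append(coeff[-1] * (0.5 - (n - 1)) / n)
half = np.array(coeff) / np.sqrt(2.0)
print("half-coin weights:", np.round(half[:6], 4))

# Two independent half-coins: the convolution gives the fair coin on {0, 1}.
full = np.convolve(half, half)[: N + 1]
print("two half-coins:   ", np.round(full[:6], 4))    # ~[0.5, 0.5, 0, 0, ...]
```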
In "Convolution quotients of nonnegative definite functions" and "Algebraic Probability Theory" Imre Z. Ruzsa and Gábor J. Székely proved that if a random variable X has a signed or quasi distribution where some of the probabilities are negative then one can always find two random variables, Y and Z, with ordinary (not signed / not quasi) distributions such that X, Y are independent and X + Y = Z in distribution. Thus X can always be interpreted as the "difference" of two ordinary random variables, Z and Y. If Y is interpreted as a measurement error of X and the observed value is Z then the negative regions of the distribution of X are masked / shielded by the error Y.
Another example known as the Wigner distribution in phase space, introduced by Eugene Wigner in 1932 to study quantum corrections, often leads to negative probabilities. For this reason, it has later been better known as the Wigner quasiprobability distribution. In 1945, M. S. Bartlett worked out the mathematical and logical consistency of such negative valuedness. The Wigner distribution function is routinely used in physics nowadays, and provides the cornerstone of phase-space quantization. Its negative features are an asset to the formalism, and often indicate quantum interference. The negative regions of the distribution are shielded from direct observation by the quantum uncertainty principle: typically, the moments of such a non-positive-semidefinite quasiprobability distribution are highly constrained, and prevent "direct measurability" of the negative regions of the distribution. Nevertheless, these regions contribute negatively and crucially to the expected values of observable quantities computed through such distributions.
An example: the double slit experiment.
Consider a double slit experiment with photons. The two waves exiting each slit can be written as:
formula_0
and
formula_1
where "d" is the distance to the detection screen, "a" is the separation between the two slits, "x" the distance to the center of the screen, "λ" the wavelength and "dN"/"dt" is the number of photons emitted per unit time at the source. The amplitude of measuring a photon at distance "x" from the center of the screen is the sum of these two amplitudes coming out of each hole, and therefore the probability that a photon is detected at position "x" will be given by the square of this sum:
formula_2
One can interpret this as the well-known probability rule:
formula_3
whatever the last term means. Indeed, if one closes either one of the holes forcing the photon to go through the other slit, the two corresponding intensities are
formula_4 and formula_5
But now, if one does interpret each of these terms in this way, the joint probability takes negative values roughly every formula_6:
formula_7
However, these negative probabilities are never observed as one cannot isolate the cases in which the photon "goes through both slits", but can hint at the existence of anti-particles.
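The sign changes of the interference term can be seen numerically; the screen distance, slit separation, wavelength and photon rate below are assumed illustrative values, and 2π/λ is used for the phase factor written ("h"/λ) in the amplitudes above:

```python
import numpy as np

d, a, lam, rate = 1.0, 1e-4, 5e-7, 1.0   # screen distance, slit separation, wavelength, dN/dt
kappa = 2.0 * np.pi / lam                # phase factor standing in for (h/lambda)

x = np.linspace(-0.02, 0.02, 2001)       # positions on the detection screen, m
r1 = np.sqrt(d**2 + (x + a / 2) ** 2)    # optical path from slit 1
r2 = np.sqrt(d**2 + (x - a / 2) ** 2)    # optical path from slit 2

I1 = 0.5 * rate * (d / np.pi) / r1**2                                         # slit 2 closed
I2 = 0.5 * rate * (d / np.pi) / r2**2                                         # slit 1 closed
I12 = 0.5 * rate * (d / np.pi) / (r1 * r2) * 2.0 * np.cos(kappa * (r1 - r2))  # interference term

print(f"interference term negative on a fraction {np.mean(I12 < 0):.2f} of the screen")
print("total I1 + I2 + I12 nonnegative everywhere:", bool(np.all(I1 + I2 + I12 >= -1e-15)))
```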
Finance.
Negative probabilities have more recently been applied to mathematical finance. In quantitative finance most probabilities are not real probabilities but pseudo probabilities, often what is known as risk neutral probabilities. These are not real probabilities, but theoretical "probabilities" under a series of assumptions that help simplify calculations by allowing such pseudo probabilities to be negative in certain cases as first pointed out by Espen Gaarder Haug in 2004.
A rigorous mathematical definition of negative probabilities and their properties was recently derived by Mark Burgin and Gunter Meissner (2011). The authors also show how negative probabilities can be applied to financial option pricing.
Engineering.
The concept of negative probabilities has also been proposed for reliable facility location models where facilities are subject to negatively correlated disruption risks when facility locations, customer allocation, and backup service plans are determined simultaneously. Li et al. proposed a virtual station structure that transforms a facility network with positively correlated disruptions into an equivalent one with added virtual supporting stations, and these virtual stations were subject to independent disruptions. This approach reduces a problem from one with correlated disruptions to one without. Xie et al. later showed how negatively correlated disruptions can also be addressed by the same modeling framework, except that a virtual supporting station now may be disrupted with a “failure propensity” which
<templatestyles src="Template:Blockquote/styles.css" />... inherits all mathematical characteristics and properties of a failure probability except that we allow it to be larger than 1...
This finding paves ways for using compact mixed-integer mathematical programs to optimally design reliable location of service facilities under site-dependent and positive/negative/mixed facility disruption correlations.
The proposed “propensity” concept in Xie et al. turns out to be what Feynman and others referred to as “quasi-probability.” Note that when a quasi-probability is larger than 1, then 1 minus this value gives a negative probability. In the reliable facility location context, the truly physically verifiable observation is the facility disruption states (whose probabilities are ensured to be within the conventional range [0,1]), but there is no direct information on the station disruption states or their corresponding probabilities. Hence the disruption "probabilities" of the stations, interpreted as “probabilities of imagined intermediary states,” could exceed unity, and thus are referred to as quasi-probabilities.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f_1(x) = \\sqrt{\\frac{dN/dt}{2\\pi/d}}\\frac{1}{\\sqrt{d^2+(x+a/2)^2}}\\exp\\left[i(h/\\lambda)\\sqrt{d^2+(x+a/2)^2}\\right],"
},
{
"math_id": 1,
"text": "f_2(x) = \\sqrt{\\frac{dN/dt}{2\\pi/d}}\\frac{1}{\\sqrt{d^2+(x-a/2)^2}}\\exp\\left[i(h/\\lambda)\\sqrt{d^2+(x-a/2)^2}\\right],"
},
{
"math_id": 2,
"text": "I(x) = \\left\\vert f_1(x)+f_2(x) \\right\\vert^2 = \\left\\vert f_1(x) \\right\\vert^2 + \\left\\vert f_2(x) \\right\\vert^2 + \\left[f_1^*(x)f_2(x)+f_1(x)f_2^*(x)\\right],"
},
{
"math_id": 3,
"text": "\\begin{align}\nP(\\mathtt{photon\\,\\,reaches\\,\\,x\\,\\,going\\,\\,through\\,\\,either\\,\\,slit})\n = \\,&P(\\mathtt{photon\\,\\,reaches\\,\\,x\\,\\,going\\,\\,through\\,\\,slit\\,\\,1}) \\\\\n& + P(\\mathtt{photon\\,\\,reaches\\,\\,x}\\,\\,\\mathtt{going\\,\\,through\\,\\,slit\\,\\,2}) \\\\\n& - P(\\mathtt{photon\\,\\,reaches\\,\\,x}\\,\\,\\mathtt{going\\,\\,through\\,\\,both\\,\\,slits}) \\\\ \\\\\n=\\,&P(\\mathtt{photon\\,\\,reaches\\,\\,x}\\,|\\,\\mathtt{went\\,\\,through\\,\\,slit\\,\\,1})\\,P(\\mathtt{going\\,\\,through\\,\\,slit\\,\\,1}) \\\\\n& + P(\\mathtt{photon\\,\\,reaches\\,\\,x}\\,|\\,\\mathtt{went\\,\\,through\\,\\,slit\\,\\,2})\\,P(\\mathtt{going\\,\\,through\\,\\,slit\\,\\,2}) \\\\\n& - P(\\mathtt{photon\\,\\,reaches\\,\\,x}\\,\\,\\mathtt{going\\,\\,through\\,\\,both\\,\\,slits}) \\\\ \\\\\n=\\,&P(\\mathtt{photon\\,\\,reaches\\,\\,x}\\,|\\,\\mathtt{went\\,\\,through\\,\\,slit\\,\\,1})\\,\\frac{1}{2} \\\\\n& + P(\\mathtt{photon\\,\\,reaches\\,\\,x}\\,|\\,\\mathtt{went\\,\\,through\\,\\,slit\\,\\,2})\\,\\frac{1}{2} \\\\\n& - P(\\mathtt{photon\\,\\,reaches\\,\\,x}\\,\\,\\mathtt{going\\,\\,through\\,\\,both\\,\\,slits})\n\\end{align}"
},
{
"math_id": 4,
"text": "I_1(x) = \\left\\vert f_1(x) \\right\\vert^2 = \\frac{1}{2}\\frac{dN}{dt}\\frac{d/\\pi}{d^2+(x+a/2)^2}"
},
{
"math_id": 5,
"text": "I_2(x) = \\left\\vert f_2(x) \\right\\vert^2 = \\frac{1}{2}\\frac{dN}{dt}\\frac{d/\\pi}{d^2+(x-a/2)^2}."
},
{
"math_id": 6,
"text": "\\lambda\\frac{d}{a}"
},
{
"math_id": 7,
"text": "\\begin{align}\nI_{12}(x) & = \\left[f_1^*(x)f_2(x)+f_1(x)f_2^*(x)\\right] \\\\\n& = \\frac{1}{2} \\frac{dN}{dt} \\frac{d/\\pi}{\\sqrt{d^2+(x-a/2)^2}\\sqrt{d^2+(x+a/2)^2}}2\\cos\\left[(h/\\lambda)(\\sqrt{d^2+(x+a/2)^2}-\\sqrt{d^2+(x-a/2)^2})\\right] \\\\\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=8499571 |
8499785 | Diversity factor | Ratio of the sum of individual peak loads to the system's maximum demand
In the context of electricity, the diversity factor is the ratio of the sum of the individual non-coincident maximum loads of various subdivisions of the system to the maximum demand of the complete system.
formula_0
The diversity factor is always greater than 1. The aggregate load formula_1 is time dependent as well as being dependent upon equipment characteristics. The diversity factor recognizes that the whole load does not equal the sum of its parts because of this time interdependence, or "diversity." For example, a facility might have ten air conditioning units of 20 tons each, with an average of 2,000 full-load equivalent operating hours per year. However, since the units are each thermostatically controlled, it is not known exactly when each unit turns on. If the combined capacity of the ten units is substantially larger than the facility's actual peak AC load, then fewer than all ten units will likely run at once. Thus, even though each unit runs about 2,000 hours a year, they do not all come on at the same time, so their full combined capacity never contributes to the facility's peak load. The diversity factor provides a correction factor for this, resulting in a lower total power demand for the ten AC units. If the energy balance for the facility comes out within reason, but the demand balance shows far too much power for the peak load, the diversity factor can be used to bring the calculated power into line with the facility's true peak load. The diversity factor does not affect the energy; it only affects the power.
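As an illustration, the following Python sketch computes the diversity factor from the formula above, together with the coincidence factor discussed in the next section; the feeder names and hourly load values are made-up assumptions, not data from any real system.

```python
# Illustrative calculation only: the hourly load profiles below are made-up
# values (kW), not measurements from any real distribution system.
individual_loads = {
    "feeder_A": [30, 55, 80, 60, 20],
    "feeder_B": [70, 40, 25, 65, 90],
    "feeder_C": [10, 35, 50, 45, 30],
}

# Sum of the individual (non-coincident) maximum demands.
sum_of_peaks = sum(max(profile) for profile in individual_loads.values())

# Coincident (aggregate) demand in each hour, and the system peak.
aggregate = [sum(hour) for hour in zip(*individual_loads.values())]
system_peak = max(aggregate)

diversity_factor = sum_of_peaks / system_peak   # always >= 1
coincidence_factor = 1 / diversity_factor       # its reciprocal, always <= 1

print(f"sum of individual peaks = {sum_of_peaks} kW")
print(f"system peak demand      = {system_peak} kW")
print(f"diversity factor        = {diversity_factor:.2f}")
print(f"coincidence factor      = {coincidence_factor:.2f}")
```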
Coincidence factor.
The coincidence factor is the reciprocal of the diversity factor. The simultaneity factor may be identical to either the coincidence factor or the diversity factor, depending on the sources of definition; the International Electrotechnical Commission defines the coincidence and simultaneity factors identically, with the diversity factor being their reciprocal. Since the only change in definition is to take the inverse, all one needs to know is if the factor is greater than or less than one.
Diversity factor in heat networks.
In heat network design, the coincidence factor is often called a diversity factor (CIBSE guidance, DS 439). So, in the context of hot water systems, the diversity factor is always less than 1. For space heating, for more than 40 dwellings the factor levels out to approximately 0.62. For domestic hot water, at 40 dwellings it is slightly below 0.1 and keeps decreasing further with additional connections.
Diversity.
The unofficial term diversity, as distinguished from "diversity factor", refers to the percent of time available that a machine, piece of equipment, or facility has its maximum or nominal load or demand; a 70% diversity means that the device in question operates at its nominal or maximum load level 70% of the time that it is connected and turned on.
Diversified load and diversification factor.
The diversified load is the total expected power, or "load", to be drawn during a peak period by a device or system of devices. The maximum system load is the combination of each device's full load capacity, utilization factor, diversity factor, demand factor, and the load factor. This process is referred to as load diversification. The diversification factor is then defined as:
formula_2
In electrical engineering.
Diversity factor is commonly used for a number of engineering-related topics. One such instance is when completing a coordination study for a system. This diversity factor is used to estimate the load of a particular node in the system.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f_\\text{Diversity} = \\frac{\\sum\\limits_{i=1}^n\\text{Individual peak load}_i}{\\sum\\limits_{i=1}^n\\text{Max}(\\text{Aggregated load}_i)}"
},
{
"math_id": 1,
"text": "\\left( \\sum\\limits_{i=1}^n\\text{Aggregated load}_i \\right)"
},
{
"math_id": 2,
"text": "f_\\text{Diversification} = \\frac{\\text{Diversified Load}}{\\text{Maximum system load}}"
}
] | https://en.wikipedia.org/wiki?curid=8499785 |
8500131 | Ramp travel index | Ramp travel index or RTI, is a way of measuring a vehicle's ability to flex its suspension, a property also known as axle articulation. The RTI rating is used mainly in the off-roading industry to test and describe chassis limits of modified vehicles.
The ramps vary between 15 and 30 degrees of angle for the vehicle to ride up. "Ramping" a vehicle involves putting one front tire on the ramp and driving up slowly until one of the other three tires (usually the rear one on the same side as the tire driving the ramp) begins to leave the ground. The measurement is only taken when the other three tires are still on the ground. The distance traveled up the ramp is then measured and is divided by the vehicle's wheelbase and finally multiplied by 1000 to give a final RTI score. Most stock SUVs have RTI values from 400 to 550; vehicles modified for off-road competition have the ability to exceed 1000.
Significance of RTI and Axle Articulation.
A high RTI or good axle articulation is essential for good off-road performance on severe routes. A vehicle that has good axle articulation can keep all wheels in contact with the ground while traversing obstacles, which ensures that all wheels can deliver their torque to the surface with less risk of losing traction on any given wheel. This allows a very high level of off-road performance without the need for electronic chassis control systems, which can be vulnerable and unreliable under extreme conditions.
Over a given obstacle, vehicles with simple AWD systems and chassis designs that restrict their RTI (i.e. that have poorer axle articulation) lift a wheel early; the lifted wheel is then free to spin, wasting drive power, unless the differentials can be locked. A vehicle with high RTI tends to make uninterrupted (safer) progress, as all wheels remain in contact with the ground during the maneuver. One chassis concept that often allows comparatively high RTI is the live axle (beam axle). Independent suspensions have tended to have reduced articulation while offering better on-road comfort, and are becoming increasingly popular in road-oriented SUVs.
Calculating RTI.
With a ramp.
The formula for calculating RTI using a ramp as pictured above is
formula_0
Where "b" is the wheelbase of the vehicle, "d" is the distance travelled along a (usually 20 degree) ramp before any wheels leave the ground and "r" is the calculated ramp travel index.
Without a ramp.
It is possible to calculate RTI without a ramp using basic trigonometry, provided a safe method is available to lift one wheel, say, using a forklift. Using the diagram below, if "h" is the maximum distance from the bottom of the tire to the ground, then
formula_1
Although "d" is not an available measurement, we can use the relationship between "h" and "d" to express "d" in terms of "h":
formula_2
Substituting this into the RTI formula produces:
formula_3
This yields a convenient formula for calculating a 20° RTI value when no ramp is available. If "b" is the vehicle's wheelbase and "h" is the maximum distance from the ground to the bottom of the wheel without allowing any other wheel to leave the ground, then
formula_4
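Both calculations can be sketched in a few lines of Python; the 2,800 mm wheelbase, the 1,400 mm ramp travel, and the 480 mm lift height are hypothetical values used only for illustration.

```python
import math

def rti_from_ramp(d, b):
    """RTI from the distance d travelled up the ramp and the wheelbase b (same units)."""
    return d / b * 1000

def rti_from_lift(h, b, ramp_angle_deg=20):
    """Equivalent RTI from the maximum lift height h of one wheel, with no ramp."""
    return h / b * 1000 / math.sin(math.radians(ramp_angle_deg))

b = 2800                                   # hypothetical wheelbase in mm
print(rti_from_ramp(d=1400, b=b))          # 500.0 for 1,400 mm travelled up a 20-degree ramp
print(round(rti_from_lift(h=480, b=b)))    # ~501 for a 480 mm lift, matching h/b * 2924
```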
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r=\\frac{d}{b} \\times 1000"
},
{
"math_id": 1,
"text": "\\sin 20^\\circ=\\frac{h}{d}"
},
{
"math_id": 2,
"text": "d=\\frac{h}{\\sin 20^\\circ}"
},
{
"math_id": 3,
"text": "r=\\frac{h}{b} \\times \\frac{1000}{\\sin 20^\\circ}"
},
{
"math_id": 4,
"text": "RTI_{20}=\\frac{h}{b} \\times 2924"
}
] | https://en.wikipedia.org/wiki?curid=8500131 |
850107 | Fluid and crystallized intelligence | Factors of general intelligence
The concepts of fluid intelligence ("g"f) and crystallized intelligence ("g"c) were introduced in 1943 by the psychologist Raymond Cattell. According to Cattell's psychometrically-based theory, general intelligence ("g") is subdivided into "g"f and "g"c. Fluid intelligence is the ability to solve novel reasoning problems and is correlated with a number of important skills such as comprehension, problem-solving, and learning. Crystallized intelligence, on the other hand, involves the ability to deduce secondary relational abstractions by applying previously learned primary relational abstractions.
History.
Fluid and crystallized intelligence are constructs originally conceptualized by Raymond Cattell. The concepts of fluid and crystallized intelligence were further developed by Cattell and his former student John L. Horn. Most intelligence testing had focused on children and young adults. Cattell and Horn wanted to see how intelligence changed and developed as an individual aged. They realized that while some memories and concepts remained, others diminished. Thus, there was a need to delineate two types of intelligence.
Fluid versus crystallized intelligence.
Fluid intelligence ("g"f) involved basic processes of reasoning and other mental activities that depend only minimally on prior learning (such as formal and informal education) and acculturation. Horn notes that it is formless and can "flow into" a wide variety of cognitive activities. Tasks measuring fluid reasoning require the ability to solve abstract reasoning problems. Examples of tasks that measure fluid intelligence include figure classifications, figural analyses, number and letter series, matrices, and paired associates.
Crystallized intelligence ("g"c) includes learned procedures and knowledge. It reflects the effects of experience and acculturation. Horn notes that crystallized ability is a "precipitate out of experience," resulting from the prior application of fluid ability that has been combined with the intelligence of culture. Examples of tasks that measure crystallized intelligence are vocabulary, general information, abstract word analogies, and the mechanics of language.
Example application of fluid and crystallized abilities to problem-solving.
Horn provided the following example of crystallized and fluid approaches to solving a problem. Here is the problem he described:
"There are 100 patients in a hospital. Some (an even number) are one-legged but wearing shoes. One-half of the remainder are barefooted. How many shoes are being worn?"
The crystallized approach to solving the problem would involve the application of high school-level algebra. Algebra is an acculturational product.
formula_0 is the number of shoes worn, where x is the number of one-legged patients and formula_1 is the number of two-legged patients. The solution boils down to 100 shoes.
In contrast to the crystallized approach to solving the problem, Horn provided a made-up example of a fluid approach to solving the problem, an approach that does not depend on the learning of high school-level algebra. In his made-up example, Horn described a boy who is too young to attend secondary school but could solve the problem through the application of fluid ability: "He may reason that if half the two-legged people are without shoes, and all the rest (an even number) are one-legged, then the shoes must average one per person, and the answer is 100."
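A quick numerical check of this example (plain Python, included only to make the arithmetic concrete) confirms that the count is 100 shoes regardless of the number of one-legged patients:

```python
# Quick check that the shoe count is independent of the (even) number x of
# one-legged patients: each wears one shoe, and half of the remaining
# 100 - x two-legged patients wear two shoes each.
for x in range(0, 101, 2):
    shoes = x + ((100 - x) // 2) * 2
    assert shoes == 100
print("100 shoes for every admissible x")
```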
Relationship to Piaget's theory of cognitive development.
Researchers have linked the theory of fluid and crystallized abilities to Piaget's theory of cognitive development. Fluid ability and Piaget's operative intelligence both concern logical thinking and the "eduction of relations" (an expression Cattell used to refer to the inferring of relationships). Crystallized ability and Piaget's treatment of everyday learning both reflect the impress of experience. Like fluid ability's relation to crystallized intelligence, Piaget's operativity is considered to be prior to, and ultimately provides the foundation for, everyday learning.
Measurement of fluid intelligence.
Various measures have been thought to assess fluid intelligence.
Raven's Progressive Matrices.
The Raven's Progressive Matrices (RPM) is one of the most commonly used measures of fluid ability. It is a non-verbal multiple-choice test. Participants have to complete a series of drawings by identifying relevant features based on the spatial organization of an array of objects and choosing one object that matches one or more of the identified features. This task assesses the ability to consider one or more relationships between mental representations or "relational reasoning." "Propositional analogies" and semantic decision tasks are also used to assess relational reasoning.
Woodcock–Johnson Tests of Cognitive Abilities, Third Edition.
In the Woodcock–Johnson Tests of Cognitive Abilities, Third Edition (WJ-III), "g"f is assessed by two tests: Concept Formation and Analysis Synthesis. Concept Formation tasks require the individual to use categorical thinking; Analysis Synthesis tasks require general sequential reasoning.
Concept Formation.
Individuals have to apply concepts by inferring the underlying "rules" for solving visual puzzles that are presented with increasing levels of difficulty. As the level of difficulty increases, individuals have to identify a key difference (or the "rule") for solving puzzles involving one-to-one comparisons. For more difficult items, individuals need to understand the concept of "and" (e.g., a solution must have some of this and some of that) and the concept of "or" (e.g., to be inside a box, the item must be either this or that). The most difficult items require fluid transformations and cognitive shifting between the various types of concept puzzles that the examinee had worked with previously.
Analysis–Synthesis.
In the Analysis–Synthesis test, the individual has to learn and orally state the solutions to incomplete logic puzzles that mimic a miniature mathematics system. The test also contains some of the features involved in using symbolic formulations in other fields such as chemistry and logic. The individual is presented with a set of logic rules, a "key" that is used to solve the puzzles, and has to determine the missing colors within each of the puzzles using the key. Complex items present puzzles that require two or more sequential mental manipulations of the key to derive a final solution. Increasingly difficult items involve a mix of puzzles that requires fluid shifts in deduction, logic, and inference.
Wechsler Intelligence Scales for Children, Fourth Edition.
The Wechsler Intelligence Scales for Children, Fourth Edition (WISC-IV) provides an overall measure of cognitive ability together with five primary index scores. In the WISC-IV, the Perceptual Reasoning Index contains two subtests that assess "g"f: Matrix Reasoning, which involves induction and deduction, and Picture Concepts, which involves induction.
Picture Concepts.
In the Picture Concepts task, children are presented with a series of pictures on two or three rows and asked which pictures (one from each row) belong together based on some common characteristic. This task assesses the child's ability to discover the underlying characteristic (e.g., rule, concept, trend, class membership) that governs a set of materials.
Matrix Reasoning.
Matrix Reasoning also assesses this ability as well as the ability to start with stated rules, premises, or conditions and to engage in one or more steps to reach a solution to a novel problem (deduction). In the Matrix Reasoning test, children are presented with a series or sequence of pictures with one picture missing. The task requires the child to choose the picture that fits the series or sequence from an array of five options. Since Matrix Reasoning and Picture Concepts involve the use of visual stimuli and do not require expressive language, they have been considered to be non-verbal tests of "g"f.
In the workplace.
Within the corporate environment, fluid intelligence is a predictor of a person's capacity to work well in environments characterised by complexity, uncertainty, and ambiguity. The Cognitive Process Profile (CPP) measures a person's fluid intelligence and cognitive processes. It maps these against suitable work environments according to Elliott Jaques's Stratified Systems Theory. Fe et al. (2022) show that fluid intelligence measured in childhood predicts labor market earnings.
Factors related to measuring intelligence.
Some authors have suggested that unless an individual is truly interested in a problem presented on an IQ test, the cognitive work required to solve the problem may not be performed owing to a lack of interest. These authors have contended that a low score on tests that are intended to measure fluid intelligence may reflect more of a lack of interest in the tasks than an inability to complete the tasks successfully.
Development across life span.
Fluid intelligence peaks at around age 27 and then gradually declines. This decline may be related to local atrophy of the brain in the right cerebellum, a lack of practice, or the result of age-related changes in the brain.
Crystallized intelligence typically increases gradually, stays relatively stable across most of adulthood, and then begins to decline after age 65. The exact peak age of cognitive skills remains elusive.
Fluid intelligence and working memory.
Working memory capacity is closely related to fluid intelligence, and has been proposed to account for individual differences in "g"f. It has been suggested that linking working memory and "g"f could help resolve mysteries that have puzzled researchers concerning the two concepts.
Neuroanatomy.
According to David Geary, "g"f and "g"c can be traced to two separate brain systems. Fluid intelligence involves the dorsolateral prefrontal cortex, the anterior cingulate cortex, and other systems related to attention and short-term memory. Crystallized intelligence appears to be a function of brain regions that involve the storage and usage of long-term memories, such as the hippocampus.
Research on training working memory and the training's indirect effect on fluid ability.
Because working memory is thought to influence "g"f, training to increase the capacity of working memory could have a positive impact on "g"f. Some researchers, however, question whether the results of training interventions to enhance "g"f are long-lasting and transferable, especially when these techniques are used by healthy children and adults without cognitive deficiencies. A meta-analytical review published in 2012 concluded that "memory training programs appear to produce short-term, specific training effects that do not generalize."
In a series of four individual experiments involving 70 participants (mean age of 25.6) from the University of Bern community, Jaeggi et al. found that, in comparison to a demographically matched control group, healthy young adults who practiced a demanding working memory task (dual n-back) approximately 25 minutes per day for between 8 and 19 days had significantly greater pre-to-posttest increases in their scores on a matrix test of fluid intelligence. There was no long-term follow-up to assess how enduring the effects of training were.
Two later n-back studies did not support the findings of Jaeggi et al. Although participants' performance on the training task improved, these studies showed no significant improvement in the mental abilities tested, especially fluid intelligence and working memory capacity.
Thus the balance of findings suggests that training for the purpose of increasing working memory can have specific short-term effects but no effects on "g"f.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x + 1/2*(100-x)*2"
},
{
"math_id": 1,
"text": "100 - x"
}
] | https://en.wikipedia.org/wiki?curid=850107 |
8506316 | Standard RAID levels | Any of a set of standard configurations of Redundant Arrays of Independent Disks
In computer storage, the standard RAID levels comprise a basic set of RAID ("redundant array of independent disks" or "redundant array of inexpensive disks") configurations that employ the techniques of striping, mirroring, or parity to create large reliable data stores from multiple general-purpose computer hard disk drives (HDDs). The most common types are RAID 0 (striping), RAID 1 (mirroring) and its variants, RAID 5 (distributed parity), and RAID 6 (dual parity). Multiple RAID levels can also be combined or "nested", for instance RAID 10 (striping of mirrors) or RAID 01 (mirroring stripe sets). RAID levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format (DDF) standard. The numerical values only serve as identifiers and do not signify performance, reliability, generation, hierarchy, or any other metric.
While most RAID levels can provide good protection against and recovery from hardware defects or defective sectors/read errors ("hard errors"), they do not provide any protection against data loss due to catastrophic failures (fire, water) or "soft errors" such as user error, software malfunction, or malware infection. For valuable data, RAID is only one building block of a larger data loss prevention and recovery scheme – it cannot replace a backup plan.
RAID 0.
RAID 0 (also known as a "stripe set" or "striped volume") splits ("stripes") data evenly across two or more disks, without parity information, redundancy, or fault tolerance. Since RAID 0 provides no fault tolerance or redundancy, the failure of one drive will cause the entire array to fail, due to data being striped across all disks. This configuration is typically implemented with speed as the intended goal. RAID 0 is normally used to increase performance, although it can also be used as a way to create a large logical volume out of two or more physical disks.
A RAID 0 setup can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if a 120 GB disk is striped together with a 320 GB disk, the size of the array will be 120 GB × 2 = 240 GB. However, some RAID implementations would allow the remaining 200 GB to be used for other purposes.
The diagram in this section shows how the data is distributed into stripes on two disks, with A1:A2 as the first stripe, A3:A4 as the second one, etc. Once the stripe size is defined during the creation of a RAID 0 array, it needs to be maintained at all times. Since the stripes are accessed in parallel, an n-drive RAID 0 array appears as a single large disk with a data rate n times higher than the single-disk rate.
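As a sketch (not any particular controller's implementation), the round-robin block mapping and the smallest-drive capacity rule described above can be written as follows.

```python
def raid0_map(lba, n_disks):
    """Map a logical block address onto (disk index, block offset): blocks
    rotate round-robin across the members, one stripe at a time."""
    return lba % n_disks, lba // n_disks

def raid0_capacity(drive_sizes_gb):
    """Usable capacity: every member contributes only the size of the smallest drive."""
    return min(drive_sizes_gb) * len(drive_sizes_gb)

for lba in range(6):
    disk, offset = raid0_map(lba, n_disks=2)
    print(f"logical block {lba} -> disk {disk}, block {offset}")

print(raid0_capacity([120, 320]))   # 240, as in the 120 GB + 320 GB example above
```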
Performance.
A RAID 0 array of n drives provides data read and write transfer rates up to n times as high as the individual drive rates, but with no data redundancy. As a result, RAID 0 is primarily used in applications that require high performance and are able to tolerate lower reliability, such as in scientific computing or computer gaming.
Some benchmarks of desktop applications show RAID 0 performance to be marginally better than a single drive. Another article examined these claims and concluded that "striping does not always increase performance (in certain situations it will actually be slower than a non-RAID setup), but in most situations it will yield a significant improvement in performance". Synthetic benchmarks show different levels of performance improvements when multiple HDDs or SSDs are used in a RAID 0 setup, compared with single-drive performance. However, some synthetic benchmarks also show a drop in performance for the same comparison.
RAID 1.
RAID 1 consists of an exact copy (or "mirror") of a set of data on two or more disks; a classic RAID 1 mirrored pair contains two disks. This configuration offers no parity, striping, or spanning of disk space across multiple disks, since the data is mirrored on all disks belonging to the array, and the array can only be as big as the smallest member disk. This layout is useful when read performance or reliability is more important than write performance or the resulting data storage capacity.
The array will continue to operate so long as at least one member drive is operational.
Performance.
Any read request can be serviced and handled by any drive in the array; thus, depending on the nature of I/O load, random read performance of a RAID 1 array may equal up to the sum of each member's performance, while the write performance remains at the level of a single disk. However, if disks with different speeds are used in a RAID 1 array, overall write performance is equal to the speed of the slowest disk.
Synthetic benchmarks show varying levels of performance improvements when multiple HDDs or SSDs are used in a RAID 1 setup, compared with single-drive performance. However, some synthetic benchmarks also show a drop in performance for the same comparison.
RAID 2.
RAID 2, which is rarely used in practice, stripes data at the bit (rather than block) level, and uses a Hamming code for error correction. The disks are synchronized by the controller to spin at the same angular orientation (they reach index at the same time), so it generally cannot service multiple requests simultaneously. However, with a high-rate Hamming code, many spindles operate in parallel to transfer data simultaneously, so that "very high data transfer rates" are possible, as for example in the Thinking Machines' DataVault, where 32 data bits were transmitted simultaneously. The IBM 353 used a similar Hamming code arrangement and was capable of transmitting 64 data bits simultaneously, along with 8 ECC bits.
With all hard disk drives implementing internal error correction, the complexity of an external Hamming code offered little advantage over parity so RAID 2 has been rarely implemented; it is the only original level of RAID that is not currently used.
RAID 3.
RAID 3, which is rarely used in practice, consists of byte-level striping with a dedicated parity disk. One of the characteristics of RAID 3 is that it generally cannot service multiple requests simultaneously, which happens because any single block of data will, by definition, be spread across all members of the set and will reside in the same physical location on each disk. Therefore, any I/O operation requires activity on every disk and usually requires synchronized spindles.
This makes it suitable for applications that demand the highest transfer rates in long sequential reads and writes, for example uncompressed video editing. Applications that make small reads and writes from random disk locations will get the worst performance out of this level.
The requirement that all disks spin synchronously (in a lockstep) added design considerations that provided no significant advantages over other RAID levels. Both RAID 3 and RAID 4 were quickly replaced by RAID 5. RAID 3 was usually implemented in hardware, and the performance issues were addressed by using large disk caches.
RAID 4.
RAID 4 consists of block-level striping with a dedicated parity disk. As a result of its layout, RAID 4 provides good performance of random reads, while the performance of random writes is low due to the need to write all parity data to a single disk, unless the filesystem is RAID-4-aware and compensates for that.
An advantage of RAID 4 is that it can be quickly extended online, without parity recomputation, as long as the newly added disks are completely filled with 0-bytes.
In diagram 1, a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.
RAID 5.
RAID 5 consists of block-level striping with distributed parity. Unlike in RAID 4, parity information is distributed among the drives. It requires that all drives but one be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks.
There are many layouts of data and parity in a RAID 5 disk drive array depending upon the sequence of writing across the disks, that is:
The figure shows 1) data blocks written left to right, 2) the parity block at the end of the stripe and 3) the first block of the next stripe not on the same disk as the parity block of the previous stripe. It can be designated as a "Left Asynchronous" RAID 5 layout and this is the only layout identified in the last edition of "The Raid Book" published by the defunct "Raid Advisory Board." In a "Synchronous" layout, the first data block of the next stripe is written on the same drive as the parity block of the previous stripe.
In comparison to RAID 4, RAID 5's distributed parity evens out the stress of a dedicated parity disk among all RAID members. Additionally, write performance is increased since all RAID members participate in the serving of write requests. Although it will not be as efficient as a striping (RAID 0) setup, because parity must still be written, this is no longer a bottleneck.
Since parity calculation is performed on the full stripe, small changes to the array experience "write amplification": in the worst case, when a single logical sector is to be written, the original sector and the corresponding parity sector need to be read, the original data is removed from the parity, the new data is calculated into the parity, and both the new data sector and the new parity sector are written.
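This read-modify-write sequence can be illustrated with XOR on toy byte strings. The Python sketch below uses made-up four-byte "sectors" and a single three-disk stripe; a real implementation performs the same arithmetic on full sectors across many stripes.

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe over three data disks plus a parity block.
d = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor(xor(d[0], d[1]), d[2])

# Small write to disk 1: read the old data and the old parity, cancel the old
# data out of the parity, fold the new data in, then write data and parity.
new_data = b"ZZZZ"
parity = xor(xor(parity, d[1]), new_data)    # two reads already done, two writes follow
d[1] = new_data

assert parity == xor(xor(d[0], d[1]), d[2])  # parity is still consistent with the stripe
```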
RAID 6.
RAID 6 extends RAID 5 by adding another parity block; thus, it uses block-level striping with two parity blocks distributed across all member disks. RAID 6 requires at least four disks.
As in RAID 5, there are many layouts of RAID 6 disk arrays depending upon the direction the data blocks are written, the location of the parity blocks with respect to the data blocks and whether or not the first data block of a subsequent stripe is written to the same drive as the last parity block of the prior stripe. The figure to the right is just one of many such layouts.
According to the Storage Networking Industry Association (SNIA), the definition of RAID 6 is: "Any form of RAID that can continue to execute read and write requests to all of a RAID array's virtual disks in the presence of any two concurrent disk failures. Several methods, including dual check data computations (parity and Reed–Solomon), orthogonal dual parity check data and diagonal parity, have been used to implement RAID Level 6."
The 2 extra blocks are usually named P and Q. Typically the P block is calculated as the parity (XORing) of the data, the same as RAID 5.
Different implementations of RAID 6 use different erasure codes to calculate the Q block, often one of Reed–Solomon, EVENODD, Row Diagonal Parity (RDP), Mojette, or Liberation codes.
Performance.
RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with parity calculations. Performance varies greatly depending on how RAID 6 is implemented in the manufacturer's storage architecture—in software, firmware, or by using firmware and specialized ASICs for intensive parity calculations. RAID 6 can read up to the same speed as RAID 5 with the same number of physical drives.
When either diagonal or orthogonal dual parity is used, a second parity calculation is necessary for write operations. This doubles CPU overhead for RAID-6 writes, versus single-parity RAID levels. When a Reed Solomon code is used, the second parity calculation is unnecessary. Reed Solomon has the advantage of allowing all redundancy information to be contained within a given stripe.
General parity system.
It is possible to support a far greater number of drives by choosing the parity function more carefully. The issue we face is to ensure that a system of equations over the finite field formula_0 has a unique solution. To do this, we can use the theory of polynomial equations over finite fields.
Consider the Galois field formula_1 with formula_2. This field is isomorphic to a polynomial field formula_3 for a suitable irreducible polynomial formula_4 of degree formula_5 over formula_0. We will represent the data elements formula_6 as polynomials formula_7 in the Galois field. Let formula_8 correspond to the stripes of data across hard drives encoded as field elements in this manner. We will use formula_9 to denote addition in the field, and concatenation to denote multiplication. The reuse of formula_9 is intentional: this is because addition in the finite field formula_0 corresponds to the XOR operator, so computing the sum of two elements is equivalent to computing XOR on the polynomial coefficients.
A generator of a field is an element of the field such that formula_10 is different for each non-negative formula_11. This means each element of the field, except the value formula_12, can be written as a power of formula_13 A finite field is guaranteed to have at least one generator. Pick one such generator formula_14, and define formula_15 and formula_16 as follows:
<templatestyles src="Block indent/styles.css"/>formula_17
<templatestyles src="Block indent/styles.css"/>formula_18
As before, the first checksum formula_15 is just the XOR of each stripe, though interpreted now as a polynomial. The effect of formula_10 can be thought of as the action of a carefully chosen linear feedback shift register on the data chunk. Unlike the bit shift in the simplified example, which could only be applied formula_5 times before the encoding began to repeat, applying the operator formula_14 multiple times is guaranteed to produce formula_19 unique invertible functions, which will allow a chunk length of formula_5 to support up to formula_20 data pieces.
If one data chunk is lost, the situation is similar to the one before. In the case of two lost data chunks, we can compute the recovery formulas algebraically. Suppose that formula_21 and formula_22 are the lost values with formula_23, then, using the other values of formula_6, we find constants formula_24 and formula_25:
<templatestyles src="Block indent/styles.css"/>formula_26
<templatestyles src="Block indent/styles.css"/>formula_27
We can solve for formula_28 in the second equation and plug it into the first to find formula_29, and then formula_30.
Unlike P, the computation of Q is relatively CPU intensive, as it involves polynomial multiplication in formula_3. This can be mitigated with a hardware implementation or by using an FPGA.
The above Vandermonde matrix solution can be extended to triple parity, but beyond that a Cauchy matrix construction is required.
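The P/Q construction and two-disk recovery described above can be sketched in a few lines of Python. The sketch below works on a single byte per disk in GF(2^8), using the reducing polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d) and generator g = 2, a common choice in RAID 6 implementations; the data bytes are arbitrary made-up values, and the recovery step eliminates one unknown directly, which is algebraically equivalent to the closed-form expression quoted above. A real array applies the same arithmetic to every byte of every stripe.

```python
# Toy GF(2^8) arithmetic and RAID 6 P/Q recovery for a single byte per disk.
def gf_mul(a, b, poly=0x11d):
    """Multiply two field elements, reducing by the chosen polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)   # a^255 = 1 for a != 0, so a^254 is the inverse

g = 2                                  # generator used to weight the Q parity
data = [0x37, 0xA2, 0x5C, 0x11]        # one byte from each of four data disks

P, Q = 0, 0
for k, d in enumerate(data):
    P ^= d                             # simple XOR parity
    Q ^= gf_mul(gf_pow(g, k), d)       # weighted parity, weight g^k

# Suppose disks i and j are lost; rebuild their bytes from the survivors plus P and Q.
i, j = 1, 3
A, B = P, Q
for k, d in enumerate(data):
    if k not in (i, j):
        A ^= d                         # A = D_i xor D_j
        B ^= gf_mul(gf_pow(g, k), d)   # B = g^i D_i xor g^j D_j
coeff = gf_pow(g, i) ^ gf_pow(g, j)
D_j = gf_mul(gf_inv(coeff), B ^ gf_mul(gf_pow(g, i), A))
D_i = A ^ D_j
assert (D_i, D_j) == (data[i], data[j])
```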
Comparison.
The following table provides an overview of some considerations for standard RAID levels. In each case, array space efficiency is given as an expression in terms of the number of drives, n; this expression designates a fractional value between zero and one, representing the fraction of the sum of the drives' capacities that is available for use. For example, if three drives are arranged in RAID 3, this gives an array space efficiency of 1 − 1/"n" = 1 − 1/3 = 2/3 ≈ 67%; thus, if each drive in this example has a capacity of 250 GB, then the array has a total capacity of 750 GB but the capacity that is usable for data storage is only 500 GB. Different RAID configurations can also detect failure during so called data scrubbing.
Historically, disks were less reliable, and RAID levels were also used to detect which disk in the array had failed in addition to detecting that a disk had failed. Though, as noted by Patterson et al., even at the inception of RAID many (though not all) disks were already capable of finding internal errors using error-correcting codes. In particular, a mirrored set of disks is sufficient to detect a failure, but two disks are not sufficient to detect "which" one has failed in a disk array without error-correcting features. Modern RAID arrays depend for the most part on a disk's ability to identify itself as faulty, which can be detected as part of a scrub. The redundant information is used to reconstruct the missing data, rather than to identify the faulted drive. Drives are considered to have faulted if they experience an unrecoverable read error, which occurs after a drive has retried many times to read data and failed. Enterprise drives may also report failure in far fewer tries than consumer drives as part of TLER to ensure a read request is fulfilled in a timely manner.
System implications.
In a measurement of the I/O performance of five filesystems with five storage configurations (single SSD, RAID 0, RAID 1, RAID 10, and RAID 5), it was shown that F2FS on RAID 0 and RAID 5 with eight SSDs outperforms EXT4 by 5 times and 50 times, respectively. The measurements also suggest that the RAID controller can be a significant bottleneck in building a RAID system with high-speed SSDs.
Nested RAID.
Combinations of two or more standard RAID levels. They are also known as RAID 0+1 or RAID 01, RAID 0+3 or RAID 03, RAID 1+0 or RAID 10, RAID 5+0 or RAID 50, RAID 6+0 or RAID 60, and RAID 10+0 or RAID 100.
Non-standard variants.
In addition to standard and nested RAID levels, alternatives include non-standard RAID levels, and non-RAID drive architectures. Non-RAID drive architectures are referred to by similar terms and acronyms, notably JBOD ("just a bunch of disks"), SPAN/BIG, and MAID ("massive array of idle disks").
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Z}_2"
},
{
"math_id": 1,
"text": "GF(m)"
},
{
"math_id": 2,
"text": "m=2^k"
},
{
"math_id": 3,
"text": "F_2[x]/(p(x))"
},
{
"math_id": 4,
"text": "p(x)"
},
{
"math_id": 5,
"text": "k"
},
{
"math_id": 6,
"text": "D"
},
{
"math_id": 7,
"text": "\\mathbf{D}=d_{k-1}x^{k-1} + d_{k-2}x^{k-2} + ... + d_1x + d_0"
},
{
"math_id": 8,
"text": "\\mathbf{D}_0,...,\\mathbf{D}_{n-1} \\in GF(m)"
},
{
"math_id": 9,
"text": "\\oplus"
},
{
"math_id": 10,
"text": "g^i"
},
{
"math_id": 11,
"text": "i<m-1"
},
{
"math_id": 12,
"text": "0"
},
{
"math_id": 13,
"text": "g."
},
{
"math_id": 14,
"text": "g"
},
{
"math_id": 15,
"text": "\\mathbf{P}"
},
{
"math_id": 16,
"text": "\\mathbf{Q}"
},
{
"math_id": 17,
"text": "\n\\mathbf{P} = \\bigoplus_i{\\mathbf{D}_i} = \\mathbf{D}_0 \\;\\oplus\\; \\mathbf{D}_1 \\;\\oplus\\; \\mathbf{D}_2 \\;\\oplus\\; ... \\;\\oplus\\; \\mathbf{D}_{n-1}"
},
{
"math_id": 18,
"text": "\n\\mathbf{Q} = \\bigoplus_i{g^i\\mathbf{D}_i} = g^0\\mathbf{D}_0 \\;\\oplus\\; g^1\\mathbf{D}_1 \\;\\oplus\\; g^2\\mathbf{D}_2 \\;\\oplus\\; ... \\;\\oplus\\; g^{n-1}\\mathbf{D}_{n-1}\n"
},
{
"math_id": 19,
"text": "m=2^k-1"
},
{
"math_id": 20,
"text": "2^k-1"
},
{
"math_id": 21,
"text": "\\mathbf{D}_i"
},
{
"math_id": 22,
"text": "\\mathbf{D}_j"
},
{
"math_id": 23,
"text": "i \\neq j"
},
{
"math_id": 24,
"text": "A"
},
{
"math_id": 25,
"text": "B"
},
{
"math_id": 26,
"text": "\nA = \\mathbf{P} \\;\\oplus\\; (\\bigoplus_{\\ell:\\;\\ell\\not=i\\;\\mathrm{and}\\;\\ell\\not=j}{D_\\ell}) = D_i \\oplus D_j\n"
},
{
"math_id": 27,
"text": "\nB = \\mathbf{Q} \\;\\oplus\\; (\\bigoplus_{\\ell:\\;\\ell\\not=i\\;\\mathrm{and}\\;\\ell\\not=j}{g^{\\ell}D_\\ell}) = g^iD_i \\oplus g^jD_j\n"
},
{
"math_id": 28,
"text": "D_i"
},
{
"math_id": 29,
"text": "D_j = (g^{m-i+j}\\oplus1)^{-1} (g^{m-i}B\\oplus A)"
},
{
"math_id": 30,
"text": "D_i=A\\oplus D_j"
}
] | https://en.wikipedia.org/wiki?curid=8506316 |
8506329 | Nested RAID levels | Stacked combination of two or more standard RAID levels
Nested RAID levels, also known as hybrid RAID, combine two or more of the standard RAID levels (where "RAID" stands for "redundant array of independent disks" or "redundant array of inexpensive disks") to gain performance, additional redundancy or both, as a result of combining properties of different standard RAID layouts.
Nested RAID levels are usually numbered using a series of numbers, where the most commonly used levels use two numbers. The first number in the numeric designation denotes the lowest RAID level in the "stack", while the rightmost one denotes the highest layered RAID level; for example, RAID 50 layers the data striping of RAID 0 on top of the distributed parity of RAID 5. Nested RAID levels include RAID 01, RAID 10, RAID 100, RAID 50 and RAID 60, which all combine data striping with other RAID techniques; as a result of the layering scheme, RAID 01 and RAID 10 represent significantly different nested RAID levels.
<templatestyles src="Template:Visible anchor/styles.css" />RAID 01 (RAID 0+1).
RAID 01, also called RAID 0+1, is a RAID level using a mirror of stripes, achieving both replication and sharing of data between disks. The usable capacity of a RAID 01 array is the same as in a RAID 1 array made of the same drives, in which one half of the drives is used to mirror the other half. formula_0, where formula_1 is the total number of drives and formula_2 is the capacity of the smallest drive in the array.
At least four disks are required in a standard RAID 01 configuration, but larger arrays are also used.
<templatestyles src="Template:Visible anchor/styles.css" />RAID 03 (RAID 0+3).
RAID 03, also called RAID 0+3 and sometimes RAID 53, is similar to RAID 01 with the exception that byte-level striping with dedicated parity is used instead of mirroring.
<templatestyles src="Template:Visible anchor/styles.css" />RAID 10 (RAID 1+0).
RAID 10, also called RAID 1+0 and sometimes RAID 1&0, is similar to RAID 01, with the exception that the two standard RAID levels are layered in the opposite order; thus, RAID 10 is a stripe of mirrors.
RAID 10, as recognized by the storage industry association and as generally implemented by RAID controllers, is a RAID 0 array of mirrors, which may be two- or three-way mirrors, and requires a minimum of four drives. However, a nonstandard definition of "RAID 10" was created for the Linux MD driver; Linux "RAID 10" can be implemented with as few as two disks. Implementations supporting two disks such as Linux RAID 10 offer a choice of layouts. Arrays of more than four disks are also possible.
According to manufacturer specifications and official independent benchmarks, in most cases RAID 10 provides better throughput and latency than all other RAID levels except RAID 0 (which wins in throughput). Thus, it is the preferable RAID level for I/O-intensive applications such as database, email, and web servers, as well as for any other use requiring high disk performance.
<templatestyles src="Template:Visible anchor/styles.css" />RAID 50 (RAID 5+0).
RAID 50, also called RAID 5+0, combines the straight block-level striping of RAID 0 with the distributed parity of RAID 5. As a RAID 0 array striped across RAID 5 elements, minimal RAID 50 configuration requires six drives. On the right is an example where three collections of 120 GB RAID 5s are striped together to make 720 GB of total storage space.
One drive from each of the RAID 5 sets could fail without loss of data; for example, a RAID 50 configuration including three RAID 5 sets can tolerate three maximum potential simultaneous drive failures (but only one per RAID 5 set). Because the reliability of the system depends on quick replacement of the bad drive so the array can rebuild, it is common to include hot spares that can immediately start rebuilding the array upon failure. However, this does not address the issue that the array is put under maximum strain reading every bit to rebuild the array at the time when it is most vulnerable.
RAID 50 improves upon the performance of RAID 5 particularly during writes, and provides better fault tolerance than a single RAID level does. This level is recommended for applications that require high fault tolerance, capacity and random access performance. As the number of drives in a RAID set and the capacity of the drives increase, the fault-recovery time increases correspondingly, because the interval needed to rebuild the RAID set becomes longer.
<templatestyles src="Template:Visible anchor/styles.css" />RAID 60 (RAID 6+0).
RAID 60, also called RAID 6+0, combines the straight block-level striping of RAID 0 with the distributed double parity of RAID 6, resulting in a RAID 0 array striped across RAID 6 elements. It requires at least eight disks.
<templatestyles src="Template:Visible anchor/styles.css" />RAID 100 (RAID 10+0).
RAID 100, sometimes also called RAID 10+0, is a stripe of RAID 10s. This is logically equivalent to a wider RAID 10 array, but is generally implemented using software RAID 0 over hardware RAID 10. Being "striped two ways", RAID 100 is described as a "plaid RAID".
Comparison.
The following table provides an overview of some considerations for nested RAID levels. In each case, array space efficiency is given as an expression in terms of the number of drives, formula_4; this expression designates a fractional value between zero and one, representing the fraction of the sum of the drives' capacities that is available for use. For example, an array space efficiency of 1 − 1/"n" = 1 − 1/3 = 2/3 ≈ 67% means that, if each drive in this example has a capacity of 250 GB, then the array has a total capacity of 750 GB but the capacity that is usable for data storage is only 500 GB. It is sometimes necessary to use formula_3 in place of formula_4 due to the inherent nature of the configuration (of use in RAID 10). Fault tolerance uses formula_3 for representation, in place of formula_4, in certain Nested RAID levels (see below for fault tolerance calculation). formula_3 is the number of disks in each mirror, rather than the total number of disks. The fault-tolerance calculation uses the failure rate of a single drive, formula_5; for example, with formula_5 = 5% per drive, the probability that at least two out of formula_4 = 3 drives fail is:
formula_6
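The same arithmetic can be reproduced with a short Python function; the 5% failure rate and the three-drive group are just the example values used above.

```python
def at_least_two_failures(r, n):
    """Probability that two or more of n drives fail, each independently with probability r."""
    return 1 - (1 - r)**n - n * r * (1 - r)**(n - 1)

print(at_least_two_failures(r=0.05, n=3))   # 0.00725, i.e. about 0.7%
```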
Explanatory notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(N/2) \\cdot S_{\\mathrm{min}}"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "S_{\\mathrm{min}}"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "r"
},
{
"math_id": 6,
"text": "\n\\begin{align} 1 - (1 - r)^{n} - nr(1 - r)^{n - 1} & = 1 - (1 - 5\\%)^{3} - 3 \\times 5\\% \\times (1 - 5\\%)^{3 - 1} \\\\\n& = 1 - 0.95^{3} - 0.15 \\times 0.95^{2} \\\\\n& = 1 - 0.857375 - 0.135375 \\\\\n& = 0.00725 \\\\\n& \\approx 0.7\\% \\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=8506329 |
8507264 | Linner hue index | Index of hues of caramel coloring
The Linner hue index, formula_0, is used to describe the hues which a given caramel coloring may produce. In conjunction with tinctorial strength, or the depth of a caramel coloring's color, it describes the spectra which a solution of the coloring may produce at different dilutions and thicknesses. It also has applications in brewing.
In his presentation at the Society of Soft Drink Technologists Annual Meeting in 1970, Robert T. Linner mentioned that most caramel colors had log absorbance spectra which were essentially linear in the visible range. This means that such a spectrum could be characterized by a point (a log absorbance at some particular wavelength) and its slope. Because caramel colors have warm hues (i.e., greater absorbance for shorter wavelengths), the slopes of their log absorbance spectra will be negative. formula_0 is the negative of this slope, multiplied by a convenient factor.
Definition.
The Linner hue index is defined as:
formula_1
This is simply the negative of the slope of the log absorbance spectrum, between 510 and 610 nm wavelength, for materials that obey Linner's hypothesis of linear log-absorbance spectra.
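A direct transcription into Python follows; the absorbance readings are made-up illustrative values, not measurements of any particular caramel color.

```python
import math

def linner_hue_index(a510, a610):
    """Linner hue index from absorbances measured at 510 nm and 610 nm."""
    return 10 * math.log10(a510 / a610)

# Hypothetical solution: absorbance 0.80 at 510 nm and 0.25 at 610 nm.
print(round(linner_hue_index(0.80, 0.25), 2))   # about 5.05
```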
Typical range.
Linner hue indices typically range from 3 (a greenish yellow or olive hue, depending on the depth of color) to 7.5 (yellow) for caramel colors and beers.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H_L"
},
{
"math_id": 1,
"text": "H_L = 10 \\times \\log_{10} \\frac {A_{510}} {A_{610}}"
}
] | https://en.wikipedia.org/wiki?curid=8507264 |
8508055 | Sears–Haack body | The most aerodynamic shape for supersonic vehicles
The Sears–Haack body is the shape with the lowest theoretical wave drag in supersonic flow, for a slender solid body of revolution with a given body length and volume. The mathematical derivation assumes small-disturbance (linearized) supersonic flow, which is governed by the Prandtl–Glauert equation. The derivation and shape were published independently by two separate researchers: Wolfgang Haack in 1941 and later by William Sears in 1947.
The Kármán–Moore theory indicates that the wave drag scales as the square of the second derivative of the area distribution, formula_0 (see full expression below), so for low wave drag it is necessary that formula_1 be smooth. Thus, the Sears–Haack body is pointed at each end and grows smoothly to a maximum and then decreases smoothly toward the second point.
Useful formulas.
The cross-sectional area of a Sears–Haack body is
formula_2
its volume is
formula_3
its radius is
formula_4
the derivative (slope) is
formula_5
the second derivative is
formula_6
where formula_7 is the maximum radius of the body (reached at the midpoint of the body), x is the distance from the nose expressed as a fraction of the body length (running from 0 to 1), V is the volume, and L is the length of the body.
From Kármán–Moore theory, it follows that:
formula_8
alternatively:
formula_9
These formulae may be combined to get the following:
formula_10
formula_11
where formula_12 is the wave drag force, formula_13 is the wave drag coefficient, formula_14 is the density of the fluid, and U is the flow velocity.
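These closed-form expressions are easy to evaluate numerically. The Python sketch below uses a made-up 20 m long, 0.5 m maximum-radius body in sea-level air at 600 m/s; it computes the radius profile, the volume, and the wave drag, and checks that the force and coefficient formulas above are mutually consistent when the coefficient is referenced to the body's maximum cross-sectional area.

```python
import math

def sears_haack_radius(x, r_max):
    """Local radius at fractional station x (0 at the nose, 1 at the tail)."""
    return r_max * (4 * x * (1 - x)) ** 0.75

def sears_haack_volume(r_max, length):
    return (3 * math.pi**2 / 16) * r_max**2 * length

def wave_drag(r_max, length, rho, u):
    """Wave drag force (N) and wave drag coefficient from the combined formulas."""
    v = sears_haack_volume(r_max, length)
    cd = 24 * v / length**3
    d = (64 * v**2 / (math.pi * length**4)) * rho * u**2
    return d, cd

r_max, length, rho, u = 0.5, 20.0, 1.225, 600.0   # hypothetical body and flight condition
print(f"radius at x = 0.25: {sears_haack_radius(0.25, r_max):.3f} m")
d, cd = wave_drag(r_max, length, rho, u)
print(f"wave drag ~ {d/1e3:.1f} kN, C_D_wave = {cd:.5f}")

# Consistency check: the force equals q * S_max * C_D with q = rho*u^2/2 and
# S_max = pi*r_max^2, i.e. the coefficient is referenced to the frontal area.
q, s_max = 0.5 * rho * u**2, math.pi * r_max**2
assert math.isclose(d, q * s_max * cd, rel_tol=1e-12)
```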
Derivation.
According to Kármán–Moore theory, the wave drag force is given by
formula_15
where formula_1 is the cross-sectional area of the body perpendicular to the body axis; here formula_16 represents the leading edge and formula_17 is the trailing edge, although the Kármán–Moore theory does not distinguish these ends because the drag coefficient is independent of the direction of motion in the linear theory. Instead of formula_1, we can define the function formula_18 and expand it in series
formula_19
where formula_20. The series starts from formula_21 because of the condition formula_22. We have
formula_23
Note that the volume of the body depends only on the coefficient formula_24.
To calculate the drag force, first we shall rewrite the drag force formula, by integrating by parts once,
formula_25
in which formula_26 stands for Cauchy principal value. Now we can substitute the expansion for formula_27 and integrate the expression using the following two identities
formula_28
The final result, expressed in terms of the drag coefficient formula_29, is simply given by
formula_30
Since formula_31 depends only on formula_24, the minimum value of formula_32 is reached when formula_33 for formula_34.
Thus, setting formula_33 for formula_34, we obtain formula_35,
formula_36
where formula_37 is the radius as a function of formula_38.
Generalization by R. T. Jones.
The Sears–Haack body shape derivation is correct only in the limit of a slender body.
The theory has been generalized to slender but non-axisymmetric shapes by Robert T. Jones in NACA Report 1284. In this extension, the area formula_1 is defined on the Mach cone whose apex is at location formula_38, rather than on the formula_39 plane as assumed by Sears and Haack. This makes Jones's theory applicable to more complex shapes such as entire supersonic aircraft.
Area rule.
A superficially related concept is the Whitcomb area rule, which states that wave drag due to volume in transonic flow depends primarily on the distribution of total cross-sectional area, and for low wave drag this distribution must be smooth. A common misconception is that the Sears–Haack body has the ideal area distribution according to the area rule, but this is not correct. The Prandtl–Glauert equation, which is the starting point in the Sears–Haack body shape derivation, is not valid in transonic flow, which is where the area rule applies. | [
{
"math_id": 0,
"text": "D_\\text{wave} \\sim [ S''(x)]^2"
},
{
"math_id": 1,
"text": "S(x)"
},
{
"math_id": 2,
"text": " S(x) = \\frac {16V}{3L\\pi}[4x(1-x)]^{3/2} = \\pi R_\\text{max}^2[4x(1-x)]^{3/2}, "
},
{
"math_id": 3,
"text": " V = \\frac {3\\pi^2}{16}R_\\text{max}^2 L, "
},
{
"math_id": 4,
"text": " r(x) = R_\\text{max}[4x(1-x)]^{3/4}, "
},
{
"math_id": 5,
"text": " r'(x) = 3R_\\text{max}[4x(1-x)]^{-1/4} (1-2x), "
},
{
"math_id": 6,
"text": " r''(x) = -3R_\\text{max}\\{[4x(1-x)]^{-5/4} (1-2x)^2 + 2[4x(1-x)]^{-1/4}\\},"
},
{
"math_id": 7,
"text": " R_\\text{max} "
},
{
"math_id": 8,
"text": " D_\\text{wave} = - \\frac {1}{4 \\pi} \\rho U^2 \\int_0^\\ell \\int_0^\\ell S''(x_1) S''(x_2) \\ln |x_1-x_2| \\mathrm{d}x_1 \\mathrm{d}x_2, "
},
{
"math_id": 9,
"text": " D_\\text{wave} = - \\frac {1}{2 \\pi} \\rho U^2 \\int_0^\\ell S''(x) \\mathrm{d}x \\int_0^x S''(x_1) \\ln (x-x_1) \\mathrm{d}x_1. "
},
{
"math_id": 10,
"text": " D_\\text{wave} = \\frac{64 V^2}{\\pi L^4} \\rho U^2 = \\frac {9\\pi^3R_{max}^4}{4L^2}\\rho U^2, "
},
{
"math_id": 11,
"text": " C_{D_\\text{wave}} = \\frac {24V} {L^3} = \\frac {9\\pi^2R_{max}^2}{2L^2}, "
},
{
"math_id": 12,
"text": " D_\\text{wave} "
},
{
"math_id": 13,
"text": " C_{D_\\text{wave}} "
},
{
"math_id": 14,
"text": " \\rho "
},
{
"math_id": 15,
"text": "F = - \\frac{\\rho U^2}{2\\pi} \\int_0^l \\int_0^{l} S''(\\xi_1)S''(\\xi_2)\\ln|\\xi_2-\\xi_1|d\\xi_1d\\xi_2"
},
{
"math_id": 16,
"text": "x=0"
},
{
"math_id": 17,
"text": "x=l"
},
{
"math_id": 18,
"text": "f(x)=S'(x)"
},
{
"math_id": 19,
"text": "f = - l \\sum_{n=2}^\\infty A_n \\sin n\\theta, \\quad x = \\frac{l}{2}(1-\\cos\\theta)"
},
{
"math_id": 20,
"text": "0\\leq \\theta \\leq \\pi"
},
{
"math_id": 21,
"text": "n=2"
},
{
"math_id": 22,
"text": "S(0)=S(l)=0"
},
{
"math_id": 23,
"text": "S(x)=\\int_0^x f(x) dx, \\quad V = \\int_0^l S(x) dx = \\frac{\\pi}{16} l^3 A_2."
},
{
"math_id": 24,
"text": "A_2"
},
{
"math_id": 25,
"text": "F = \\mathrm{p. v.}\\frac{\\rho U^2}{2\\pi} \\int_0^l \\int_0^{l} f(\\xi_1)f'(\\xi_2)\\frac{d\\xi_1d\\xi_2}{\\xi_1-\\xi_2}"
},
{
"math_id": 26,
"text": "\\mathrm{p.v.}"
},
{
"math_id": 27,
"text": "f"
},
{
"math_id": 28,
"text": "\\mathrm{p. v.} \\int_0^\\pi \\frac{\\cos n\\theta_2}{\\cos\\theta_2-\\cos\\theta_1}d\\theta_2 = \\frac{\\pi \\sin n\\theta_1}{\\sin\\theta_1}, \\quad \\int_0^\\pi \\sin n\\theta_1 \\sin m\\theta_1 d\\theta_1 = \\frac{\\pi}{2}\\begin{cases}1\\,\\,(m=n),\\\\ 0,\\,\\,(m\\neq n).\\end{cases}"
},
{
"math_id": 29,
"text": "C_d = F/\\rho U^2 l^2/2"
},
{
"math_id": 30,
"text": "C_d = \\frac{\\pi}{4} \\sum_{n=2}^\\infty n A_n^2."
},
{
"math_id": 31,
"text": "V"
},
{
"math_id": 32,
"text": "F"
},
{
"math_id": 33,
"text": "A_n=0"
},
{
"math_id": 34,
"text": "n\\geq 3"
},
{
"math_id": 35,
"text": "S=(1/3)l^3 A_2 \\sin^3\\theta"
},
{
"math_id": 36,
"text": "C_d = \\frac{128}{\\pi} \\left(\\frac{V}{l^3}\\right)^2=\\frac{9\\pi}{2} \\left(\\frac{S_{\\mathrm{max}}}{l^2}\\right)^2, \\quad R(x) = \\frac{8\\sqrt 2 }{\\pi}\\left(\\frac{V}{3l^4}\\right)^{1/2}[x(l-x)]^{3/4},"
},
{
"math_id": 37,
"text": "R(x)"
},
{
"math_id": 38,
"text": "x"
},
{
"math_id": 39,
"text": "x = \\text{constant}"
}
] | https://en.wikipedia.org/wiki?curid=8508055 |
8509589 | Odd greedy expansion | <templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Does every rational number with an odd denominator have an odd greedy expansion?
In number theory, the odd greedy expansion problem asks whether a greedy algorithm for finding Egyptian fractions with odd denominators always succeeds. It is an open problem.
Description.
An Egyptian fraction represents a given rational number as a sum of distinct unit fractions. If a rational number formula_0 is a sum of unit fractions with odd denominators,
formula_1
then formula_2 must be odd. Conversely, every fraction formula_0 with formula_2 odd can be represented as a sum of distinct odd unit fractions. One method of finding such a representation replaces formula_0 by formula_3 where formula_4 for a sufficiently large formula_5, and then expands formula_6 as a sum of distinct divisors of formula_7.
However, a simpler greedy algorithm has successfully found Egyptian fractions in which all denominators are odd for all instances formula_0 (with odd formula_2) on which it has been tested: let formula_8 be the least odd number that is greater than or equal to formula_9, include the fraction formula_10 in the expansion, and continue in the same way (avoiding repeated uses of the same unit fraction) with the remaining fraction formula_11. This method is called the odd greedy algorithm and the expansions it creates are called odd greedy expansions.
Stein, Selfridge, Graham, and others have posed the open problem of whether the odd greedy algorithm terminates with a finite expansion for every formula_0 with formula_2 odd.
Example.
Let formula_0 = 4/23.
23/4 = 5.75; the next larger odd number is 7. So the first step expands
<templatestyles src="Block indent/styles.css"/>4/23 = 1/7 + 5/161.
161/5 = 32.2; the next larger odd number is 33. So the next step expands
<templatestyles src="Block indent/styles.css"/>4/23 = 1/7 + 1/33 + 4/5313.
5313/4 = 1328.25; the next larger odd number is 1329. So the third step expands
<templatestyles src="Block indent/styles.css"/>4/23 = 1/7 + 1/33 + 1/1329 + 1/2353659.
Since the final term in this expansion is a unit fraction, the process terminates with this expansion as its result.
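A minimal Python sketch of the odd greedy algorithm, using exact rational arithmetic from the standard fractions module (the function name and the term cap are illustrative choices, not part of the algorithm's usual statement); it reproduces the expansion of 4/23 worked out above:
<syntaxhighlight lang="python">
from fractions import Fraction

def odd_greedy(x, y, max_terms=100):
    """Odd greedy expansion of x/y: returns the list of odd denominators.
    Termination for every fraction with odd y is exactly the open problem,
    so a cap on the number of terms guards against runaway expansions."""
    remainder = Fraction(x, y)
    denominators = []
    while remainder and len(denominators) < max_terms:
        # least odd u with 1/u <= remainder, i.e. u >= denominator/numerator
        u = -(-remainder.denominator // remainder.numerator)  # ceiling division
        if u % 2 == 0:
            u += 1
        if denominators and u <= denominators[-1]:
            u = denominators[-1] + 2  # avoid reusing the same unit fraction
        denominators.append(u)
        remainder -= Fraction(1, u)
    return denominators

print(odd_greedy(4, 23))  # [7, 33, 1329, 2353659], matching the example above
</syntaxhighlight>
Applied to 1/2 (an even denominator), the same routine produces the denominators 3, 7, 43, 1807, ..., the non-terminating Sylvester-sequence expansion mentioned below.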
Fractions with long expansions.
It is possible for the odd greedy algorithm to produce expansions that are shorter than the usual greedy expansion, with smaller denominators. For instance,
formula_12
where the left expansion is the greedy expansion and the right expansion is the odd greedy expansion. However, the odd greedy expansion is more typically long, with large denominators. For instance, as Wagon discovered, the odd greedy expansion for 3/179 has 19 terms, the largest of which is approximately 1.415×10^439491. Curiously, the numerators of the fractions to be expanded in each step of the algorithm form a sequence of consecutive integers:
<templatestyles src="Block indent/styles.css"/>3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 2, 3, 4, 1.
A similar phenomenon occurs with other numbers, such as 5/5809 (an example found independently by K. S. Brown and David Bailey), which has a 27-term expansion. Although the denominators of this expansion are difficult to compute due to their enormous size, the numerator sequence may be found relatively efficiently using modular arithmetic. Several additional examples of this type found by Broadhurst have been described, along with methods, due to K. S. Brown, for finding fractions with arbitrarily long expansions.
On even denominators.
The odd greedy algorithm cannot terminate when given a fraction with an even denominator, because these fractions do not have finite representations with odd denominators. Therefore, in this case, it produces an infinite series expansion of its input. For instance Sylvester's sequence can be viewed as generated by the odd greedy expansion of 1/2.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x/y"
},
{
"math_id": 1,
"text": "\\frac{x}{y} = \\sum\\frac{1}{2a_i+1},"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "Ax/Ay"
},
{
"math_id": 4,
"text": "A=35\\cdot 3^i"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "Ax"
},
{
"math_id": 7,
"text": "Ay"
},
{
"math_id": 8,
"text": "u"
},
{
"math_id": 9,
"text": "y/x"
},
{
"math_id": 10,
"text": "1/u"
},
{
"math_id": 11,
"text": "x/y-1/u"
},
{
"math_id": 12,
"text": "\\frac{8}{77}=\\frac{1}{10}+\\frac{1}{257}+\\frac{1}{197890}=\\frac{1}{11}+\\frac{1}{77},"
}
] | https://en.wikipedia.org/wiki?curid=8509589 |
850973 | Phase-change memory | Novel computer memory type
Phase-change memory (also known as PCM, PCME, PRAM, PCRAM, OUM (ovonic unified memory) and C-RAM or CRAM (chalcogenide RAM)) is a type of non-volatile random-access memory. PRAMs exploit the unique behaviour of chalcogenide glass. In PCM, heat produced by the passage of an electric current through a heating element generally made of titanium nitride is used to either quickly heat and quench the glass, making it amorphous, or to hold it in its crystallization temperature range for some time, thereby switching it to a crystalline state. PCM can also be set to a number of distinct intermediary states, giving it the ability to hold multiple bits in a single cell, but the difficulties of programming cells in this way have prevented these capabilities from being implemented in other technologies (most notably flash memory) with the same capability.
Recent research on PCM has been directed towards attempting to find viable material alternatives to the phase-change material Ge2Sb2Te5 (GST), with mixed success. Other research has focused on the development of a GeTe–Sb2Te3 superlattice to achieve non-thermal phase changes by changing the co-ordination state of the germanium atoms with a laser pulse. This new Interfacial Phase-Change Memory (IPCM) has had many successes and continues to be the site of much active research.
Leon Chua has argued that all two-terminal non-volatile-memory devices, including PCM, should be considered memristors. Stan Williams of HP Labs has also argued that PCM should be considered a memristor. However, this terminology has been challenged, and the potential applicability of memristor theory to any physically realizable device is open to question.
Background.
In the 1960s, Stanford R. Ovshinsky of Energy Conversion Devices first explored the properties of chalcogenide glasses as a potential memory technology. In 1969, Charles Sie published a dissertation at Iowa State University that both described and demonstrated the feasibility of a phase-change-memory device by integrating chalcogenide film with a diode array. A cinematographic study in 1970 established that the phase-change-memory mechanism in chalcogenide glass involves electric-field-induced crystalline filament growth. In the September 1970 issue of "Electronics", Gordon Moore, co-founder of Intel, published an article on the technology. However, material quality and power consumption issues prevented commercialization of the technology. More recently, interest and research have resumed as flash and DRAM memory technologies are expected to encounter scaling difficulties as chip lithography shrinks.
The crystalline and amorphous states of chalcogenide glass have dramatically different electrical resistivity values. The amorphous, high resistance state represents a binary 0, while the crystalline, low resistance state represents a 1. Chalcogenide is the same material used in re-writable optical media (such as CD-RW and DVD-RW). In those instances, the material's optical properties are manipulated, rather than its electrical resistivity, as chalcogenide's refractive index also changes with the state of the material.
Although PRAM has not yet reached the commercialization stage for consumer electronic devices, nearly all prototype devices make use of a chalcogenide alloy of germanium (Ge), antimony (Sb) and tellurium (Te) called GeSbTe (GST). The stoichiometry, or Ge:Sb:Te element ratio, is 2:2:5 in GST. When GST is heated to a high temperature (over 600 °C), its chalcogenide crystallinity is lost. Once cooled, it is frozen into an amorphous glass-like state and its electrical resistance is high. By heating the chalcogenide to a temperature above its crystallization point, but below the melting point, it will transform into a crystalline state with a much lower resistance. The time to complete this phase transition is temperature-dependent. Cooler portions of the chalcogenide take longer to crystallize, and overheated portions may be remelted. A crystallization time scale on the order of 100 ns is commonly used. This is longer than conventional volatile memory devices like modern DRAM, which have a switching time on the order of two nanoseconds. However, a January 2006 Samsung Electronics patent application indicates PRAM may achieve switching times as fast as five nanoseconds.
A 2008 advance pioneered by Intel and ST Microelectronics allowed the material state to be more carefully controlled, allowing it to be transformed into one of four distinct states: the previous amorphous or crystalline states, along with two new partially crystalline ones. Each of these states has different electrical properties that can be measured during reads, allowing a single cell to represent two bits, doubling memory density.
Aluminum/antimony.
Phase-change memory devices based on germanium, antimony and tellurium present manufacturing challenges, since etching and polishing of the material with chalcogens can change the material's composition. Materials based on aluminum and antimony are more thermally stable than GeSbTe. Al50Sb50 has three distinct resistance levels, offering the potential to store three bits of data in two cells as opposed to two (nine states possible for the pair of cells, using eight of those states yields log2 8 = 3 bits).
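As an illustration of the counting argument above (nine joint states for a pair of three-level cells, eight of which are enough for three bits), the following sketch shows one possible base-3 packing; the mapping is purely hypothetical and is not taken from any published Al–Sb device specification.
<syntaxhighlight lang="python">
# Illustrative sketch (not a device datasheet): packing 3 bits into a pair
# of hypothetical three-level cells, using 8 of the 9 available states.
def encode(bits3):
    """Map an integer 0-7 (3 bits) to a pair of ternary cell levels (0-2)."""
    assert 0 <= bits3 < 8
    return divmod(bits3, 3)   # (high trit, low trit); state (2, 2) goes unused

def decode(levels):
    high, low = levels
    return 3 * high + low

for value in range(8):
    pair = encode(value)
    assert decode(pair) == value
    print(value, "->", pair)
</syntaxhighlight>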
PRAM vs. Flash.
PRAM's switching time and inherent scalability make it more appealing than flash memory. PRAM's temperature sensitivity is perhaps its most notable drawback, one that may require changes in the production process of manufacturers incorporating the technology.
Flash memory works by modulating charge (electrons) stored within the gate of a MOS transistor. The gate is constructed with a special "stack" designed to trap charges (either on a floating gate or in insulator "traps"). The presence of charge within the gate shifts the transistor's threshold voltage formula_0 higher or lower, corresponding to a change in the cell's bit state from 1 to 0 or 0 to 1. Changing the bit's state requires removing the accumulated charge, which demands a relatively large voltage to "suck" the electrons off the floating gate. This burst of voltage is provided by a charge pump, which takes some time to build up power. General write times for common flash devices are on the order of 100 μs (for a block of data), about 10,000 times the typical 10 ns read time for SRAM for example (for a byte).
PRAM can offer much higher performance in applications where writing quickly is important, both because the memory element can be switched more quickly, and also because single bits may be changed to either 1 or 0 without needing to first erase an entire block of cells. PRAM's high performance, thousands of times faster than conventional hard drives, makes it particularly interesting in nonvolatile memory roles that are currently performance-limited by memory access timing.
In addition, with flash, each burst of voltage across the cell causes degradation. As the size of the cells decreases, damage from programming grows worse because the voltage necessary to program the device does not scale with the lithography. Most flash devices are currently rated for only 5,000 writes per sector, and many flash controllers perform wear leveling to spread writes across many physical sectors.
PRAM devices also degrade with use, for different reasons than flash, but degrade much more slowly. A PRAM device may endure around 100 million write cycles. PRAM lifetime is limited by mechanisms such as degradation due to GST thermal expansion during programming, metal (and other material) migration, and other mechanisms still unknown.
Flash parts can be programmed before being soldered onto a board, or even purchased pre-programmed. The contents of a PRAM, however, are lost because of the high temperatures needed to solder the device to a board (see reflow soldering or wave soldering). This problem was made worse by the move to lead-free manufacturing, which requires higher soldering temperatures. A manufacturer using PRAM parts must provide a mechanism to program the PRAM "in-system" after it has been soldered in place.
The special gates used in flash memory "leak" charge (electrons) over time, causing corruption and loss of data. The resistivity of the memory element in PRAM is more stable; at the normal working temperature of 85 °C, it is projected to retain data for 300 years.
By carefully modulating the amount of charge stored on the gate, flash devices can store multiple (usually two) bits in each physical cell. In effect, this doubles the memory density, reducing cost. PRAM devices originally stored only a single bit in each cell, but Intel's recent advances have removed this problem.
Because flash devices trap electrons to store information, they are susceptible to data corruption from radiation, making them unsuitable for many space and military applications. PRAM exhibits higher resistance to radiation.
PRAM cell selectors can use various devices: diodes, BJTs and MOSFETs. Using a diode or a BJT provides the greatest amount of current for a given cell size. However, the concern with using a diode stems from parasitic currents to neighboring cells, as well as a higher voltage requirement, resulting in higher power consumption. Chalcogenide resistance is necessarily larger than that of a diode, meaning operating voltage must exceed 1 V by a wide margin to guarantee adequate forward bias current from the diode. Perhaps the most severe consequence of using a diode-selected array, in particular for large arrays, is the total reverse bias leakage current from the unselected bit lines. In transistor-selected arrays, only the selected bit lines contribute reverse bias leakage current. The difference in leakage current is several orders of magnitude. A further concern with scaling below 40 nm is the effect of discrete dopants as the p-n junction width scales down. Thin film-based selectors allow higher densities, utilizing < 4 F^2 cell area by stacking memory layers horizontally or vertically. Often the isolation capabilities are inferior to the use of transistors if the on/off ratio for the selector is not sufficient, limiting the ability to operate very large arrays in this architecture. Chalcogenide-based threshold switches have been demonstrated as a viable selector for high-density PCM arrays.
2000 and later.
In August 2004, Nanochip licensed PRAM technology for use in MEMS (micro-electric-mechanical-systems) probe storage devices. These devices are not solid state. Instead, a very small platter coated in chalcogenide is dragged beneath thousands or even millions of electrical probes that can read and write the chalcogenide. Hewlett-Packard's micro-mover technology can accurately position the platter to 3 nm so densities of more than 1 Tbit (125 GB) per square inch will be possible if the technology can be perfected. The basic idea is to reduce the amount of wiring needed on-chip; instead of wiring every cell, the cells are placed closer together and read by current passing through the MEMS probes, acting like wires. This approach resembles IBM's Millipede technology.
Samsung 46.7 nm cell.
In September 2006, Samsung announced a prototype 512 Mb (64 MB) device using diode switches. The announcement was something of a surprise, and it was especially notable for its fairly high memory density. The prototype featured a cell size of only 46.7 nm, smaller than commercial flash devices available at the time. Although flash devices of higher "capacity" were available (64 Gb, or 8 GB, was just coming to market), other technologies competing to replace flash in general offered lower densities (larger cell sizes). The only production MRAM and FeRAM devices are only 4 Mb, for example. The high density of Samsung's prototype PRAM device suggested it could be a viable flash competitor, and not limited to niche roles as other devices have been. PRAM appeared to be particularly attractive as a potential replacement for NOR flash, where device capacities typically lag behind those of NAND flash devices. State-of-the-art capacities on NAND passed 512 Mb some time ago. NOR flash offers similar densities to Samsung's PRAM prototype and already offers bit addressability (unlike NAND where memory is accessed in banks of many bytes at a time).
Intel's PRAM device.
Samsung's announcement was followed by one from Intel and STMicroelectronics, who demonstrated their own PRAM devices at the 2006 Intel Developer Forum in October. They showed a 128 Mb part that began manufacture at STMicroelectronics's research lab in Agrate, Italy. Intel stated that the devices were strictly proof-of-concept.
BAE device.
PRAM is also a promising technology in the military and aerospace industries where radiation effects make the use of standard non-volatile memories such as flash impractical. PRAM devices have been introduced by BAE Systems, referred to as C-RAM, claiming excellent radiation tolerance (rad-hard) and latchup immunity. In addition, BAE claims a write cycle endurance of 10^8, which will allow it to be a contender for replacing PROMs and EEPROMs in space systems.
Multi-level cell.
In February 2008, Intel and STMicroelectronics revealed the first multilevel (MLC) PRAM array prototype. The prototype stored two logical bits in each physical cell, in effect 256 Mb of memory stored in a 128 Mb physical array. This means that instead of the normal two states—fully amorphous and fully crystalline—an additional two distinct intermediate states represent different degrees of partial crystallization, allowing for twice as many bits to be stored in the same physical area. In June 2011, IBM announced that they had created stable, reliable, multi-bit phase-change memory with high performance and stability. SK Hynix had a joint developmental agreement and a technology license agreement with IBM for the development of multi-level PRAM technology.
Intel's 90 nm device.
Also in February 2008, Intel and STMicroelectronics shipped prototype samples of their first PRAM product to customers. The 90 nm, 128 Mb (16 MB) product was called Alverstone.
In June 2009, Samsung and Numonyx B.V. announced a collaborative effort in the development of PRAM market-tailored hardware products.
In April 2010, Numonyx announced the Omneo line of 128-Mbit NOR-compatible phase-change memories. Samsung announced shipment of 512 Mb phase-change RAM (PRAM) in a multi-chip package (MCP) for use in mobile handsets by Fall 2010.
ST 28 nm, 16 MB array.
In December 2018 STMicroelectronics presented design and performance data for a 16 MB ePCM array for a 28 nm fully depleted silicon on insulator automotive control unit.
In-memory computing.
More recently, there is significant interest in the application of PCM for in-memory computing. The essential idea is to perform computational tasks such as matrix-vector-multiply operations in the memory array itself by exploiting PCM's analog storage capability and Kirchhoff's circuit laws. PCM-based in-memory computing could be interesting for applications such as deep learning inference which do not require very high computing precision. In 2021, IBM published a full-fledged in-memory computing core based on multi-level PCM integrated in 14 nm CMOS technology node.
Challenges.
The greatest challenge for phase-change memory has been the requirement of high programming current density (>10^7 A/cm^2, compared to 10^5 to 10^6 A/cm^2 for a typical transistor or diode).
The contact between the hot phase-change region and the adjacent dielectric is another fundamental concern. The dielectric may begin to leak current at higher temperature, or may lose adhesion when expanding at a different rate from the phase-change material.
Phase-change memory is susceptible to a fundamental tradeoff of unintended vs. intended phase-change. This stems primarily from the fact that phase-change is a thermally driven process rather than an electronic process. Thermal conditions that allow for fast crystallization should not be too similar to standby conditions, e.g. room temperature, otherwise data retention cannot be sustained. With the proper activation energy for crystallization it is possible to have fast crystallization at programming conditions while having very slow crystallization at normal conditions.
Probably the biggest challenge for phase-change memory is its long-term resistance and threshold voltage drift. The resistance of the amorphous state slowly increases according to a power law (~t^0.1). This severely limits the ability for multilevel operation, since a lower intermediate state would be confused with a higher intermediate state at a later time, and could also jeopardize standard two-state operation if the threshold voltage increases beyond the design value.
In April 2010, Numonyx released its Omneo line of parallel and serial interface 128 Mb NOR flash replacement PRAM chips. Although the NOR flash chips they intended to replace operated in the −40 °C to 85 °C range, the PRAM chips operated in the 0 °C to 70 °C range, indicating a smaller operating window compared to NOR flash. This is likely due to the use of highly temperature-sensitive p–n junctions to provide the high currents needed for programming.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\,V_\\mathrm{th}"
}
] | https://en.wikipedia.org/wiki?curid=850973 |
851008 | Black hole information paradox | Puzzle of disappearance of information in a black hole
The black hole information paradox is a paradox that appears when the predictions of quantum mechanics and general relativity are combined. The theory of general relativity predicts the existence of black holes that are regions of spacetime from which nothing—not even light—can escape. In the 1970s, Stephen Hawking applied the semiclassical approach of quantum field theory in curved spacetime to such systems and found that an isolated black hole would emit a form of radiation (now called Hawking radiation in his honor). He also argued that the detailed form of the radiation would be independent of the initial state of the black hole, and depend only on its mass, electric charge and angular momentum.
The information paradox appears when one considers a process in which a black hole is formed through a physical process and then evaporates away entirely through Hawking radiation. Hawking's calculation suggests that the final state of radiation would retain information only about the total mass, electric charge and angular momentum of the initial state. Since many different states can have the same mass, charge and angular momentum, this suggests that many initial physical states could evolve into the same final state. Therefore, information about the details of the initial state would be permanently lost; however, this violates a core precept of both classical and quantum physics: that, "in principle only," the state of a system at one point in time should determine its state at any other time. Specifically, in quantum mechanics the state of the system is encoded by its wave function. The evolution of the wave function is determined by a unitary operator, and unitarity implies that the wave function at any instant of time can be used to determine the wave function either in the past or the future. In 1993, Don Page argued that if a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. This is called the Page curve.
It is now generally believed that information is preserved in black-hole evaporation. For many researchers, deriving the Page curve is synonymous with solving the black hole information puzzle. But views differ as to precisely how Hawking's original semiclassical calculation should be corrected. In recent years, several extensions of the original paradox have been explored. Taken together, these puzzles about black hole evaporation have implications for how gravity and quantum mechanics must be combined. The information paradox remains an active field of research in quantum gravity.
Relevant principles.
In quantum mechanics, the evolution of the state is governed by the Schrödinger equation. The Schrödinger equation obeys two principles that are relevant to the paradox—quantum determinism, which means that given a present wave function, its future changes are uniquely determined by the evolution operator, and reversibility, which refers to the fact that the evolution operator has an inverse, meaning that the past wave functions are similarly unique. The combination of the two means that information must always be preserved. In this context "information" means all the details of the state, and the statement that information must be preserved means that details corresponding to an earlier time can always be reconstructed at a later time.
Mathematically, the Schrödinger equation implies that the wavefunction at a time t1 can be related to the wavefunction at a time t2 by means of a unitary operator.
formula_0
Since the unitary operator is bijective, the wavefunction at t2 can be obtained from the wavefunction at t1 and vice versa.
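A toy numerical illustration of these two properties, determinism and reversibility, for a two-state system (the particular unitary matrix is an arbitrary stand-in for the evolution operator, not anything specific to black holes):
<syntaxhighlight lang="python">
import numpy as np

# Evolve a qubit state with a unitary U and recover the original state
# exactly with the conjugate transpose U† — "running the evolution backwards".
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

psi_t1 = np.array([0.6, 0.8j])          # normalized initial wave function
psi_t2 = U @ psi_t1                      # forward evolution
recovered = U.conj().T @ psi_t2          # inverse evolution

assert np.allclose(recovered, psi_t1)    # no information has been lost
print(np.linalg.norm(psi_t2))            # 1.0 — unitarity preserves the norm
</syntaxhighlight>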
The reversibility of time evolution described above applies only at the "microscopic level", since the wavefunction provides a complete description of the state. It should not be conflated with thermodynamic irreversibility. A process may appear irreversible if one keeps track only of the system's coarse-grained features and not of its microscopic details, as is usually done in thermodynamics. But at the microscopic level, the principles of quantum mechanics imply that every process is completely reversible.
Starting in the mid-1970s, Stephen Hawking and Jacob Bekenstein put forward theoretical arguments that suggested that black-hole evaporation loses information, and is therefore inconsistent with unitarity. Crucially, these arguments were meant to apply at the microscopic level and suggested that black-hole evaporation is not only thermodynamically but microscopically irreversible. This contradicts the principle of unitarity described above and leads to the information paradox. Since the paradox suggested that quantum mechanics would be violated by black-hole formation and evaporation, Hawking framed the paradox in terms of the "breakdown of predictability in gravitational collapse".
The arguments for microscopic irreversibility were backed by Hawking's calculation of the spectrum of radiation that isolated black holes emit. This calculation utilized the framework of general relativity and quantum field theory. The calculation of Hawking radiation is performed at the black hole horizon and does not account for the backreaction of spacetime geometry; for a large enough black hole the curvature at the horizon is small and therefore both these theories should be valid. Hawking relied on the no-hair theorem to arrive at the conclusion that radiation emitted by black holes would depend only on a few macroscopic parameters, such as the black hole's mass, charge, and spin, but not on the details of the initial state that led to the formation of the black hole. In addition, the argument for information loss relied on the causal structure of the black hole spacetime, which suggests that information in the interior should not affect any observation in the exterior, including observations performed on the radiation the black hole emits. If so, the region of spacetime outside the black hole would lose information about the state of the interior after black-hole evaporation, leading to the loss of information.
Today, some physicists believe that the holographic principle (specifically the AdS/CFT duality) demonstrates that Hawking's conclusion was incorrect, and that information is in fact preserved. Moreover, recent analyses indicate that in semiclassical gravity the information loss paradox cannot be formulated in a self-consistent manner due to the impossibility of simultaneously realizing all of the necessary assumptions required for its formulation.
Black hole evaporation.
Hawking radiation.
In 1973–1975, Stephen Hawking showed that black holes should slowly radiate away energy, and he later argued that this leads to a contradiction with unitarity. Hawking used the classical no-hair theorem to argue that the form of this radiation—called Hawking radiation—would be completely independent of the initial state of the star or matter that collapsed to form the black hole. He argued that the process of radiation would continue until the black hole had evaporated completely. At the end of this process, all the initial energy in the black hole would have been transferred to the radiation. But, according to Hawking's argument, the radiation would retain no information about the initial state and therefore information about the initial state would be lost.
More specifically, Hawking argued that the pattern of radiation emitted from the black hole would be random, with a probability distribution controlled only by the black hole's initial temperature, charge, and angular momentum, not by the initial state of the collapse. The state produced by such a probabilistic process is called a mixed state in quantum mechanics. Therefore, Hawking argued that if the star or material that collapsed to form the black hole started in a specific pure quantum state, the process of evaporation would transform the pure state into a mixed state. This is inconsistent with the unitarity of quantum-mechanical evolution discussed above.
The loss of information can be quantified in terms of the change in the fine-grained von Neumann entropy of the state. A pure state is assigned a von Neumann entropy of 0, whereas a mixed state has a finite entropy. The unitary evolution of a state according to Schrödinger's equation preserves the entropy. Therefore Hawking's argument suggests that the process of black-hole evaporation cannot be described within the framework of unitary evolution. Although this paradox is often phrased in terms of quantum mechanics, the evolution from a pure state to a mixed state is also inconsistent with Liouville's theorem in classical physics (see e.g.).
In equations, Hawking showed that if one denotes the creation and annihilation operators at a frequency formula_1 for a quantum field propagating in the black-hole background by formula_2 and formula_3, then the expectation value of the product of these operators in the state formed by the collapse of a black hole would satisfy formula_4 where "k" is the Boltzmann constant and "T" is the temperature of the black hole. (See, for example, section 2.2 of.) This formula has two important aspects. The first is that the form of the radiation depends only on a single parameter, temperature, even though the initial state of the black hole cannot be characterized by one parameter. Second, the formula implies that the black hole radiates mass at a rate given by formula_5 where "a" is a constant related to fundamental constants, including the Stefan–Boltzmann constant and certain properties of the black hole spacetime called its greybody factors.
The temperature of the black hole is in turn dependent on its mass, charge, and angular momentum. For a Schwarzschild black hole the temperature is given by
formula_6
This means that if the black hole starts out with an initial mass formula_7, it evaporates completely in a time proportional to formula_8.
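A rough numerical sketch of these relations for a solar-mass Schwarzschild black hole, using the textbook closed-form lifetime estimate t ≈ 5120πG²M³/(ħc⁴) (this prefactor is not stated in the text above and neglects greybody factors and the number of radiated particle species, so it is an order-of-magnitude estimate only):
<syntaxhighlight lang="python">
import math

hbar  = 1.054571817e-34   # J s
c     = 2.99792458e8      # m/s
G     = 6.67430e-11       # m^3 kg^-1 s^-2
k_B   = 1.380649e-23      # J/K
M_sun = 1.98892e30        # kg

def hawking_temperature(M):
    # T = hbar c^3 / (8 pi G M k_B), as in the formula above
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time(M):
    # textbook estimate, proportional to M^3 as stated above
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

print(f"T ~ {hawking_temperature(M_sun):.2e} K")        # ~6e-8 K
print(f"t ~ {evaporation_time(M_sun)/3.15e7:.2e} yr")   # ~2e67 years
</syntaxhighlight>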
The important aspect of these formulas is that they suggest that the final gas of radiation formed through this process depends only on the black hole's temperature and is independent of other details of the initial state. This leads to the following paradox. Consider two distinct initial states that collapse to form a Schwarzschild black hole of the same mass. Even though the states were distinct at first, since the mass (and hence the temperature) of the black holes is the same, they will emit the same Hawking radiation. Once they evaporate completely, in both cases, one will be left with a featureless gas of radiation. This gas cannot be used to distinguish between the two initial states, and therefore information has been lost.
Page curve.
During the same time period in the 1970s, Don Page was a doctoral student of Stephen Hawking. He objected to Hawking's reasoning leading to the paradox above, initially on the basis of violation of CPT symmetry. In 1993, Page focused on the combined system of a black hole with its Hawking radiation as one entangled system, a bipartite system, evolving over the lifetime of the black hole evaporation. Lacking the ability to make a full quantum analysis, he nonetheless made a powerful observation: If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy or entanglement entropy of the Hawking radiation initially increases from zero and then must decrease back to zero when the black hole to which the radiation is entangled has totally evaporated. This is known as the Page curve; and the time corresponding to the maximum or turnover point of the curve, which occurs at about half the black-hole lifetime, is called the Page time. In short, if black hole evaporation is unitary, then the radiation entanglement entropy follows the Page curve. After the Page time, correlations appear and the radiation becomes increasingly information rich.
Recent progress in deriving the Page curve for unitary black hole evaporation is a significant step towards finding both a resolution to the information paradox and a more general understanding of unitarity in quantum gravity. Many researchers consider deriving the Page curve as synonymous with solving the black hole information paradox.
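The qualitative shape of the Page curve can be sketched with a toy "minimum of two entropies" model — the coarse-grained radiation entropy versus the remaining Bekenstein–Hawking entropy — which is a common cartoon rather than Page's actual calculation; in this crude version the turnover lands somewhat past the midpoint of the lifetime, whereas more careful estimates place the Page time near the halfway mark:
<syntaxhighlight lang="python">
import numpy as np

# Purely schematic Page curve, in arbitrary units: S_BH ∝ M^2 and M(t)
# follows dM/dt ∝ -1/M^2, consistent with the evaporation law quoted above.
M0, t_evap = 1.0, 1.0
t = np.linspace(0.0, t_evap, 1000)
M = np.maximum(M0**3 * (1 - t / t_evap), 0.0) ** (1.0 / 3.0)

S_bh   = M**2                          # remaining black-hole entropy
S_rad  = M0**2 - M**2                  # coarse-grained radiation entropy
S_page = np.minimum(S_rad, S_bh)       # schematic entanglement entropy

turnover = t[np.argmax(S_page)]
print(f"entropy rises, peaks near t/t_evap ~ {turnover:.2f}, then falls to zero")
</syntaxhighlight>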
Popular culture.
The information paradox has received coverage in the popular media and has been described in popular-science books. Some of this coverage resulted from a widely publicized bet made in 1997 between John Preskill on the one hand with Hawking and Kip Thorne on the other that information was not lost in black holes. The scientific debate on the paradox was described in Leonard Susskind's 2008 book "The Black Hole War". (The book carefully notes that the 'war' was purely a scientific one, and that, at a personal level, the participants remained friends.) Susskind writes that Hawking was eventually persuaded that black-hole evaporation was unitary by the holographic principle, which was first proposed by 't Hooft, further developed by Susskind, and later given a precise string theory interpretation by the AdS/CFT correspondence. In 2004, Hawking also conceded the 1997 bet, paying Preskill with a baseball encyclopedia "from which information can be retrieved at will". Thorne refused to concede.
Solutions.
Since the 1997 proposal of the AdS/CFT correspondence, the predominant belief among physicists is that information is indeed preserved in black hole evaporation. There are broadly two main streams of thought about how this happens. Within what might broadly be termed the "string theory community", the dominant idea is that Hawking radiation is not precisely thermal but receives quantum correlations that encode information about the black hole's interior. This viewpoint has been the subject of extensive recent research and received further support in 2019 when researchers amended the computation of the entropy of the Hawking radiation in certain models and showed that the radiation is in fact dual to the black hole interior at late times. Hawking himself was influenced by this view and in 2004 published a paper that assumed the AdS/CFT correspondence and argued that quantum perturbations of the event horizon could allow information to escape from a black hole, which would resolve the information paradox. In this perspective, it is the event horizon of the black hole that is important and not the black-hole singularity. The GISR (Gravity Induced Spontaneous Radiation) mechanism can be considered an implementation of this idea, but with the quantum perturbations of the event horizon replaced by the microscopic states of the black hole.
On the other hand, within what might broadly be termed the "loop quantum gravity community", the dominant belief is that to resolve the information paradox, it is important to understand how the black-hole singularity is resolved. These scenarios are broadly called remnant scenarios since information does not emerge gradually but remains in the black-hole interior only to emerge at the end of black-hole evaporation.
Researchers also study other possibilities, including a modification of the laws of quantum mechanics to allow for non-unitary time evolution.
Some of these solutions are described at greater length below.
GISR mechanism resolution to the paradox.
This resolution takes GISR as the underlying mechanism for Hawking radiation, considering the latter only as a resultant effect. The physics ingredients of GISR are reflected in the following explicitly Hermitian Hamiltonian
formula_9
The first term of formula_10 is a diagonal matrix representing the microscopic state of black holes no heavier than the initial one. The second term describes vacuum fluctuations of particles around the black hole and is represented by many harmonic oscillators. The third term couples the vacuum fluctuation modes to the black hole, such that for each mode whose energy matches the difference between two states of the black hole, the latter transitions with an amplitude proportional to the similarity factor of their microscopic wave functions. Transitions from the higher-energy state formula_11 to the lower-energy state formula_12, and vice versa, are equally permitted at the Hamiltonian level. This coupling mimics the photon-atom coupling in the Jaynes–Cummings model of atomic physics, replacing the photon's vector potential with the binding energy of particles to be radiated in the black hole case, and the dipole moment of initial-to-final state transitions in atoms with the similarity factor of the initial and final states' wave functions in black holes. Despite its ad hoc nature, this coupling introduces no new interactions beyond gravity, and it is deemed necessary irrespective of the future development of quantum gravitational theories.
From the Hamiltonian of GISR and the standard Schrödinger equation controlling the evolution of the wave function of the system
formula_13
formula_14
Here formula_15 indexes the set of radiated particles with total energy formula_1. In the case of short-time evolution or single-quantum emission, the Wigner–Weisskopf approximation allows one to show that the power spectrum of GISR is exactly of thermal type and that the corresponding temperature equals that of Hawking radiation. However, in the case of long-time evolution or continuous quantum emission, the process is off-equilibrium and is characterised by a black hole mass (or temperature) versus time curve that depends on the initial state. Observers far away can retrieve the information stored in the initial black hole from this mass or temperature versus time curve.
The Hamiltonian and wave-function description of GISR allows one to calculate the entanglement entropy between the black hole and its Hawking particles explicitly.
formula_16
formula_17
formula_18
Since the Hamiltonian of GISR is explicitly Hermitian, the resulting Page curve is naturally expected, except for some late-time Rabi-type oscillations. These oscillations arise from the equal probability of emission and absorption transitions as the black hole approaches the vanishing stage. The most important lesson from this calculation is that the intermediate state of an evaporating black hole cannot be considered a semiclassical object with a time-dependent mass. Instead, it must be viewed as a superposition of many different mass-ratio combinations of the black hole and Hawking particles. A Schrödinger-cat-type thought experiment has been designed to illustrate this fact, in which an initial black hole is bound to a group of living cats and each emitted Hawking particle kills one cat from the group. In the quantum description, because the exact timing and number of particles radiated by a black hole cannot be determined definitively, the intermediate state of the evaporating black hole must be considered a superposition of many cat groups, each with a different ratio of dead members. The biggest flaw in the argument for the information loss paradox is ignoring this superposition.
Small-corrections resolution to the paradox.
This idea suggests that Hawking's computation fails to keep track of small corrections that are eventually sufficient to preserve information about the initial state. This can be thought of as analogous to what happens during the mundane process of "burning": the radiation produced appears to be thermal, but its fine-grained features encode the precise details of the object that was burnt. This idea is consistent with reversibility, as required by quantum mechanics. It is the dominant idea in what might broadly be termed the string-theory approach to quantum gravity.
More precisely, this line of resolution suggests that Hawking's computation is corrected so that the two point correlator computed by Hawking and described above becomes
formula_19
and higher-point correlators are similarly corrected
formula_20
The equations above utilize a concise notation and the correction factors formula_21 may depend on the temperature, the frequencies of the operators that enter the correlation function and other details of the black hole.
Maldacena initially explored such corrections in a simple version of the paradox. They were then analyzed by Papadodimas and Raju, who showed that corrections to low-point correlators (such as formula_22 above ) that were exponentially suppressed in the black-hole entropy were sufficient to preserve unitarity, and significant corrections were required only for very high-point correlators. The mechanism that allowed the right small corrections to form was initially postulated in terms of a loss of exact locality in quantum gravity so that the black-hole interior and the radiation were described by the same degrees of freedom. Recent developments suggest that such a mechanism can be realized precisely within semiclassical gravity and allows information to escape. See § Recent developments.
Fuzzball resolution to the paradox.
Some researchers, most notably Samir Mathur, have argued that the small corrections required to preserve information cannot be obtained while preserving the semiclassical form of the black-hole interior and instead require a modification of the black-hole geometry to a fuzzball.
The defining characteristic of the fuzzball is that it has structure at the horizon scale. This should be contrasted with the conventional picture of the black-hole interior as a largely featureless region of space. For a large enough black hole, tidal effects are very small at the black-hole horizon and remain small in the interior until one approaches the black-hole singularity. Therefore, in the conventional picture, an observer who crosses the horizon may not even realize they have done so until they start approaching the singularity. In contrast, the fuzzball proposal suggests that the black hole horizon is not empty. Consequently, it is also not information-free, since the details of the structure at the surface of the horizon preserve information about the black hole's initial state. This structure also affects the outgoing Hawking radiation and thereby allows information to escape from the fuzzball.
The fuzzball proposal is supported by the existence of a large number of gravitational solutions called microstate geometries.
The firewall proposal can be thought of as a variant of the fuzzball proposal that posits that the black-hole interior is replaced by a firewall rather than a fuzzball. Operationally, the difference between the fuzzball and the firewall proposals has to do with whether an observer crossing the horizon of the black hole encounters high-energy matter, suggested by the firewall proposal, or merely low-energy structure, suggested by the fuzzball proposal. The firewall proposal also originated with an exploration of Mathur's argument that small corrections are insufficient to resolve the information paradox.
The fuzzball and firewall proposals have been questioned for lacking an appropriate mechanism that can generate structure at the horizon scale.
Strong-quantum-effects resolution to the paradox.
In the final stages of black-hole evaporation, quantum effects become important and cannot be ignored. The precise understanding of this phase of black-hole evaporation requires a complete theory of quantum gravity. Within what might be termed the loop-quantum-gravity approach to black holes, it is believed that understanding this phase of evaporation is crucial to resolving the information paradox.
This perspective holds that Hawking's computation is reliable until the final stages of black-hole evaporation, when information suddenly escapes. Another possibility along the same lines is that black-hole evaporation simply stops when the black hole becomes Planck-sized. Such scenarios are called "remnant scenarios".
An appealing aspect of this perspective is that a significant deviation from classical and semiclassical gravity is needed only in the regime in which the effects of quantum gravity are expected to dominate. On the other hand, this idea implies that just before the sudden escape of information, a very small black hole must be able to store an arbitrary amount of information and have a very large number of internal states. Therefore, researchers who follow this idea must take care to avoid the common criticism of remnant-type scenarios, which is that they might violate the Bekenstein bound and lead to a violation of effective field theory due to the production of remnants as virtual particles in ordinary scattering events.
Soft-hair resolution to the paradox.
In 2016, Hawking, Perry and Strominger noted that black holes must contain "soft hair". Particles that have no rest mass, like photons and gravitons, can exist with arbitrarily low-energy and are called soft particles. The soft-hair resolution posits that information about the initial state is stored in such soft particles. The existence of such soft hair is a peculiarity of four-dimensional asymptotically flat space and therefore this resolution to the paradox does not carry over to black holes in Anti-de Sitter space or black holes in other dimensions.
Information is irretrievably lost.
A minority view in the theoretical physics community is that information is genuinely lost when black holes form and evaporate. This conclusion follows if one assumes that the predictions of semiclassical gravity and the causal structure of the black-hole spacetime are exact.
But this conclusion leads to the loss of unitarity. Banks, Susskind and Peskin argue that, in some cases, loss of unitarity also implies violation of energy–momentum conservation or locality, but this argument may possibly be evaded in systems with a large number of degrees of freedom. According to Roger Penrose, loss of unitarity in quantum systems is not a problem: quantum measurements are by themselves already non-unitary. Penrose claims that quantum systems will in fact no longer evolve unitarily as soon as gravitation comes into play, precisely as in black holes. The Conformal Cyclic Cosmology Penrose advocates critically depends on the condition that information is in fact lost in black holes. This new cosmological model might be tested experimentally by detailed analysis of the cosmic microwave background radiation (CMB): if true, the CMB should exhibit circular patterns with slightly lower or slightly higher temperatures. In November 2010, Penrose and V. G. Gurzadyan announced they had found evidence of such circular patterns in data from the Wilkinson Microwave Anisotropy Probe (WMAP), corroborated by data from the BOOMERanG experiment. The significance of these findings was debated.
Along similar lines, Modak, Ortíz, Peña, and Sudarsky have argued that the paradox can be dissolved by invoking foundational issues of quantum theory often called the measurement problem of quantum mechanics. This work built on an earlier proposal by Okon and Sudarsky on the benefits of objective collapse theory in a much broader context. The original motivation of these studies was Penrose's long-standing proposal wherein collapse of the wave-function is said to be inevitable in the presence of black holes (and even under the influence of gravitational field). Experimental verification of collapse theories is an ongoing effort.
Other proposed resolutions.
Some other resolutions to the paradox have also been explored. These are listed briefly below.
Recent developments.
Significant progress was made in 2019, when, starting with work by Penington and Almheiri, Engelhardt, Marolf and Maxfield, researchers were able to compute the von Neumann entropy of the radiation black holes emit in specific models of quantum gravity. These calculations showed that, in these models, the entropy of this radiation first rises and then falls back to zero. As explained above, one way to frame the information paradox is that Hawking's calculation appears to show that the von Neumann entropy of Hawking radiation increases throughout the lifetime of the black hole. But if the black hole formed from a pure state with zero entropy, unitarity implies that the entropy of the Hawking radiation must decrease back to zero once the black hole evaporates completely, i.e., the Page curve. Therefore, the results above provide a resolution to the information paradox, at least in the specific models of gravity considered in these models.
These calculations compute the entropy by first analytically continuing the spacetime to a Euclidean spacetime and then using the replica trick. The path integral that computes the entropy receives contributions from novel Euclidean configurations called "replica wormholes". (These wormholes exist in a Wick rotated spacetime and should not be conflated with wormholes in the original spacetime.) The inclusion of these wormhole geometries in the computation prevents the entropy from increasing indefinitely.
These calculations also imply that for sufficiently old black holes, one can perform operations on the Hawking radiation that affect the black hole interior. This result has implications for the related firewall paradox, and provides evidence for the physical picture suggested by the ER=EPR proposal, black hole complementarity, and the Papadodimas–Raju proposal.
It has been noted that the models used to perform the Page curve computations above have consistently involved theories where the graviton has mass, unlike the real world, where the graviton is massless. These models have also involved a "nongravitational bath", which can be thought of as an artificial interface where gravity ceases to act. It has also been argued that a key technique used in the Page-curve computations, the "island proposal", is inconsistent in standard theories of gravity with a Gauss law. This would suggest that the Page curve computations are inapplicable to realistic black holes and work only in special toy models of gravity. The validity of these criticisms remains under investigation; there is no consensus in the research community.
In 2020, Laddha, Prabhu, Raju, and Shrivastava argued that, as a result of the effects of quantum gravity, information should always be available outside the black hole. This would imply that the von Neumann entropy of the region outside the black hole always remains zero, as opposed to the proposal above, where the von Neumann entropy first rises and then falls. Extending this, Raju argued that Hawking's error was to assume that the region outside the black hole would have no information about its interior.
Hawking formalized this assumption in terms of a "principle of ignorance". The principle of ignorance is correct in classical gravity, when quantum-mechanical effects are neglected, by virtue of the no-hair theorem. It is also correct when only quantum-mechanical effects are considered and gravitational effects are neglected. But Raju argued that when both quantum mechanical and gravitational effects are accounted for, the principle of ignorance should be replaced by a "principle of holography of information" that would imply just the opposite: all the information about the interior can be regained from the exterior through suitably precise measurements.
The two recent resolutions of the information paradox described above—via replica wormholes and the holography of information—share the feature that observables in the black-hole interior also describe observables far from the black hole. This implies a loss of exact locality in quantum gravity. Although this loss of locality is very small, it persists over large distance scales. This feature has been challenged by some researchers.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " |\\Psi(t_1)\\rangle = U(t_1, t_2)|\\Psi(t_2)\\rangle."
},
{
"math_id": 1,
"text": "\\omega"
},
{
"math_id": 2,
"text": "a_{\\omega}"
},
{
"math_id": 3,
"text": "a_{\\omega}^{\\dagger}"
},
{
"math_id": 4,
"text": " \\langle a_{\\omega} a_{\\omega}^{\\dagger} \\rangle_{\\rm hawk} = {1 \\over 1 - e^{-\\omega /{kT} }} "
},
{
"math_id": 5,
"text": " {d M \\over d t} = -{a T^4}"
},
{
"math_id": 6,
"text": "\nT = {\\hbar c^3 \\over 8 \\pi k G M}\n"
},
{
"math_id": 7,
"text": "M_0"
},
{
"math_id": 8,
"text": "M_0^3"
},
{
"math_id": 9,
"text": "\\begin{align}\n H &= \\begin{pmatrix}w^i\\\\&w_{-}^{j}\\\\&&\\ddots\\\\&&&{\\scriptstyle\\it0}^{\\scriptscriptstyle\\it1}\\end{pmatrix} + \\sum_q\\hbar\\omega_qa^\\dagger_qa_q + \\sum_{u,v}^{|u-v|=\\hbar\\omega_q}g_{u^n v^\\ell}b^\\dagger_{u^n v^\\ell}a_q \\\\\n g_{u\\;\\!\\!^n v^\\ell} &\\propto -\\frac{\\hbar}{G\\{M_u,M_v\\}^\\mathrm{max}}\\mathrm{Siml}\\{\\Psi[M_{u\\;\\!\\!^n}\\!(r)],\\Psi[M_{v\\;\\!\\!^\\ell}\\!(r)]\\}\n\\end{align}"
},
{
"math_id": 10,
"text": "H"
},
{
"math_id": 11,
"text": "u"
},
{
"math_id": 12,
"text": "v"
},
{
"math_id": 13,
"text": "|\\psi(t)\\rangle=\\sum_{u=w}^0\\sum_{n=1}^{u}\\sum_{\\omega{}s}^{\\omega+u=w}\ne^{-iut-i\\omega{}t}c_{u\\;\\!\\!^n}^{\\omega{}s}(t)|u\\;\\!\\!^n\\otimes\\omega{}s\\rangle\n"
},
{
"math_id": 14,
"text": "i\\hbar{\\partial}_t|\\psi(t)\\rangle = H|\\psi(t)\\rangle"
},
{
"math_id": 15,
"text": "\\omega{}s"
},
{
"math_id": 16,
"text": "s_{BR} = -\\operatorname{tr}_{B}\\rho_{B}\\log\\rho_{B}=-tr_{R} \\rho_{R}\\log\\rho_{R}"
},
{
"math_id": 17,
"text": "\n\\rho_{B} = \\operatorname{tr}_{R}\\sum_{u=w}^0\\sum_{n=1}^{u}\\sum_{\\omega{}s}^{\\omega+u=w}|c_{u^n}^{\\omega{}s}\\rangle\\langle c_{u^n}^{\\omega{}s}|\n"
},
{
"math_id": 18,
"text": "\n\\rho_{R} = \\operatorname{tr}_{B}\\sum_{u=w}^0\\sum_{n=1}^{u}\\sum_{\\omega{}s}^{\\omega+u=w}|c_{u^n}^{\\omega{}s}\\rangle\\langle c_{u^n}^{\\omega{}s}|\n"
},
{
"math_id": 19,
"text": " \\langle a_{\\omega} a_{\\omega}^{\\dagger} \\rangle_{\\rm exact} = \\langle a_{\\omega} a_{\\omega}^{\\dagger} \\rangle_{\\rm hawk} (1 + \\epsilon_2) "
},
{
"math_id": 20,
"text": " \\langle a_{\\omega_1} a_{\\omega_1}^{\\dagger} a_{\\omega_2} a_{\\omega_2}^{\\dagger} \\ldots a_{\\omega_n} a_{\\omega_n}^{\\dagger} \\rangle_{\\rm exact} = \\langle a_{\\omega} a_{\\omega}^{\\dagger} \\rangle_{\\rm hawk} (1 + \\epsilon_n) "
},
{
"math_id": 21,
"text": "\\epsilon_i"
},
{
"math_id": 22,
"text": "\\epsilon_2"
}
] | https://en.wikipedia.org/wiki?curid=851008 |
8511907 | Complete linkage | Concept in genetics
In genetics, complete (or absolute) linkage is defined as the state in which two loci are so close together that alleles of these loci are virtually never separated by crossing over. The closer the physical location of two genes on the DNA, the less likely they are to be separated by a crossing-over event. In the case of male Drosophila there is complete absence of recombinant types due to absence of crossing over. This means that all of the genes that start out on a single chromosome, will end up on that same chromosome in their original configuration. In the absence of recombination, only parental phenotypes are expected.
Linkage.
Genetic linkage is the tendency of alleles that are located closely together on a chromosome to be inherited together during the process of meiosis in sexually reproducing organisms. During the process of meiosis, homologous chromosomes pair up and can exchange corresponding sections of DNA. As a result, genes that were originally on the same chromosome can end up on different chromosomes. This process is known as genetic recombination. The rate of recombination between two discrete loci corresponds to their physical proximity. Alleles that are closer together have lower rates of recombination than those that are located far apart. The distance between two alleles on a chromosome can be determined by calculating the percentage of recombination between the two loci. These probabilities of recombination can be used to construct a linkage map, a graphical representation of the locations of genes with respect to one another. If linkage is complete, there should be no recombination events that separate the two alleles, and therefore only parental combinations of alleles should be observed in offspring. Linkage between two loci can have significant implications regarding the inheritance of certain types of diseases.
Gene maps or quantitative trait loci (QTL) maps can be produced using two separate methods. One way uses the frequencies of marker alleles, comparing them between individuals selected from the two tails of the trait distribution. This is called the trait-based approach and uses only phenotypic information to select the individuals for a sample. The other approach is called the marker-based (MB) approach, and uses both the difference in marker allele frequencies and the phenotypic values of each marker genotype when selecting samples.
Recombination During Meiosis.
In diploid eukaryotic cells, recombination can occur during the process of meiosis. Homologous chromosomes pair up during meiosis before finally splitting, resulting in haploid daughter cells each with a single copy of every chromosome. While homologous chromosomes are lined up, they are free to exchange corresponding segments of their own DNA with that of their homolog. This results in chromosomes that carry both maternal and paternal DNA. Through recombination, daughter cells gain the greatest amount of genetic diversity.
Methods of Analysis.
Hierarchical Clustering.
One powerful tool for interpreting and graphing linkage data sets is hierarchical clustering. Clustering organizes things into groups based on similarity; in the case of linkage, similarity equates to physical proximity on a chromosome. Hierarchical clustering is a bottom-up approach to cluster analysis, in which the two closest data points are grouped together and are treated as a single data point for later clustering. In complete-linkage hierarchical clustering, this process of combining data points into clusters of increasing size is repeated until all data are part of a single cluster. The resulting diagram from a hierarchical cluster analysis is called a dendrogram, in which data are nested into brackets of increasing dissimilarity. Two common issues with hierarchical clustering are designating a specific distance of "similarity" between two data points, in order to generate meaningful associations, and deciding how to merge data points once they have been deemed similar, in a way that is helpful for further clustering. A cross-clustering algorithm with automatic estimation of the number of clusters has been designed, which helps resolve some of these issues. By fine-tuning the expected number of clusters, the possibility of associating two unrelated clusters is minimized. Again, under this type of analysis, a single resultant cluster signifies complete linkage, since all data points are within the range of assigned similarity.
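The following is a minimal GNU Octave sketch of complete-linkage agglomerative clustering on a precomputed pairwise distance matrix D; the function name and the return format are illustrative assumptions, not taken from any particular software package.
function merges = complete_linkage(D)
  n = size(D, 1);
  clusters = num2cell(1:n);   % start with every point in its own cluster
  merges = [];
  while numel(clusters) > 1
    best = Inf; bi = 0; bj = 0;
    for i = 1:numel(clusters)-1
      for j = i+1:numel(clusters)
        % complete linkage: inter-cluster distance is the largest pairwise distance
        d = max(max(D(clusters{i}, clusters{j})));
        if d < best
          best = d; bi = i; bj = j;
        endif
      endfor
    endfor
    merges = [merges; bi, bj, best];              % record the merge and its height
    clusters{bi} = [clusters{bi}, clusters{bj}];  % merge the two most similar clusters
    clusters(bj) = [];
  endwhile
endfunction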
History.
The idea of genetic linkage was first discovered by the British geneticists William Bateson, Edith Rebecca Saunders and Reginald Punnett. Thomas Hunt Morgan expanded the idea of linkage after noticing that in some instances the observed rate of crossing-over events differed from the expected rate. He attributed the depressed rates of recombination to the smaller spatial separation of genes on a chromosome, hypothesizing that genes which are more closely positioned on a chromosome will have lower rates of recombination than those that are spaced farther apart. The unit of measurement describing the distance between two linked genes is the centimorgan, named after Thomas Hunt Morgan. A centimorgan is equivalent to one percent recombination: two loci with a 2% recombination frequency are located 2 centimorgans apart.
formula_0
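As a short numerical illustration of the relation above (the progeny counts are invented for the example), in GNU Octave:
recombinant = 98; total = 4900;   % hypothetical progeny counts
rf = recombinant / total * 100    % recombination frequency, here 2%
map_distance_cM = rf              % 2% recombination corresponds to a map distance of about 2 centimorgans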
Uses In Research.
Economic Benefits.
Being able to determine linkage between genes can also have major economic benefits. Learning about linkage of traits in sugar cane has led to more productive and lucrative growth of the crop. Sugar cane is a sustainable crop that is one of the most economically viable renewable energy sources. QTL analysis for sugarcane was used to construct a linkage map that identified gene clusters and important linked loci that can be used to predict the response to fungal infection in a specific line of sugar cane.
Medical Benefits.
Linkage mapping can also be useful in determining the inheritance patterns of traits such as psychological disease. Linkage studies of panic disorder and anxiety disorders have indicated regions of interest on specific chromosomes. Chromosomes 4q21 and 7p are being considered strong candidate regions for panic and fear-associated anxiety disorder loci. Knowing the specific location of these loci and their probability of being inherited together based on their linkage can offer insight into how these disorders are passed down, and why they often occur together in patients.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{recombination frequency}= \\frac{\\text{Number Recombinant Progeny}}{\\text{Total Number Progeny}}\\times100\\%"
}
] | https://en.wikipedia.org/wiki?curid=8511907 |
8513142 | Diversity gain | Diversity gain is the increase in signal-to-interference ratio due to some diversity scheme, or how much the transmission power can be reduced when a diversity scheme is introduced, without a performance loss. Diversity gain is usually expressed in decibels, and sometimes as a power ratio. An example is soft handoff gain. For selection combining N signals are received, and the strongest signal is selected. When the N signals are independent and Rayleigh distributed, the expected diversity gain has been shown to be formula_0, expressed as a power ratio. | [
{
"math_id": 0,
"text": " \\sum_{k=1}^{N}\\frac{1}{k}"
}
] | https://en.wikipedia.org/wiki?curid=8513142 |
8515349 | Table of thermodynamic equations | Thermodynamics
Common thermodynamic equations and quantities in thermodynamics, using mathematical notation, are as follows:
Definitions.
Many of the definitions below are also used in the thermodynamics of chemical reactions.
Equations.
The equations in this article are classified by subject.
Statistical physics.
Below are useful results from the Maxwell–Boltzmann distribution for an ideal gas, and the implications of the Entropy quantity. The distribution is valid for atoms or molecules constituting ideal gases.
Corollaries of the non-relativistic Maxwell–Boltzmann distribution are below.
Quasi-static and reversible processes.
For quasi-static and reversible processes, the first law of thermodynamics is:
formula_2
where "δQ" is the heat supplied "to" the system and "δW" is the work done "by" the system.
Thermodynamic potentials.
The following energies are called the thermodynamic potentials,
and the corresponding fundamental thermodynamic relations or "master equations" are:
Maxwell's relations.
The four most common Maxwell's relations are:
More relations include the following.
Other differential equations are:
Quantum properties.
where "N" is number of particles, "h" is that Planck constant, "I" is moment of inertia, and "Z" is the partition function, in various forms:
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " S = k_\\mathrm{B} \\ln \\Omega "
},
{
"math_id": 1,
"text": " dS = \\frac{\\delta Q}{T} "
},
{
"math_id": 2,
"text": "dU=\\delta Q - \\delta W"
},
{
"math_id": 3,
"text": " U = N k_\\text{B} T^2 \\left(\\frac{\\partial \\ln Z}{\\partial T}\\right)_V "
},
{
"math_id": 4,
"text": " S = \\frac{U}{T} + N k_\\text{B} \\ln Z - N k \\ln N + Nk "
}
] | https://en.wikipedia.org/wiki?curid=8515349 |
851547 | Pressure coefficient | Dimensionless number describing relative pressures in a fluid flow field
In fluid dynamics, the pressure coefficient is a dimensionless number which describes the relative pressures throughout a flow field. The pressure coefficient is used in aerodynamics and hydrodynamics. Every point in a fluid flow field has its own unique pressure coefficient, Cp.
In many situations in aerodynamics and hydrodynamics, the pressure coefficient at a point near a body is independent of body size. Consequently, an engineering model can be tested in a wind tunnel or water tunnel, pressure coefficients can be determined at critical locations around the model, and these pressure coefficients can be used with confidence to predict the fluid pressure at those critical locations around a full-size aircraft or boat.
Definition.
The pressure coefficient is a parameter for studying both incompressible and compressible fluids such as water and air. The relationship between the dimensionless coefficient and the dimensional numbers is
formula_0
where:
formula_1 is the static pressure at the point at which pressure coefficient is being evaluated
formula_2 is the static pressure in the freestream (i.e. remote from any disturbance)
formula_3 is the freestream fluid density (Air at sea level and 15 °C is 1.225 formula_4)
formula_5 is the freestream velocity of the fluid, or the velocity of the body through the fluid
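A small GNU Octave sketch of the definition above; the pressure, density and speed values are illustrative only:
p = 100800;        % static pressure at the point of interest (Pa)
p_inf = 101325;    % freestream static pressure (Pa)
rho_inf = 1.225;   % freestream density (kg/m^3)
V_inf = 50;        % freestream speed (m/s)
Cp = (p - p_inf) / (0.5 * rho_inf * V_inf^2)   % about -0.34 for these values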
Incompressible flow.
Using Bernoulli's equation, the pressure coefficient can be further simplified for potential flows (inviscid, and steady):
formula_6
where:
formula_7 is the flow speed at the point at which pressure coefficient is being evaluated
formula_8 is the Mach number, which is taken in the limit of zero
formula_9 is the flow's stagnation pressure
This relationship is valid for the flow of incompressible fluids where variations in speed and pressure are sufficiently small that variations in fluid density can be neglected. This assumption is commonly made in engineering practice when the Mach number is less than about 0.3.
Locations where formula_11 are significant in the design of gliders because this indicates a suitable location for a "Total energy" port for supply of signal pressure to the Variometer, a special Vertical Speed Indicator which reacts to vertical movements of the atmosphere but does not react to vertical maneuvering of the glider.
In an incompressible fluid flow field around a body, there will be points having positive pressure coefficients up to one, and negative pressure coefficients including coefficients less than minus one.
Compressible flow.
In the flow of compressible fluids such as air, and particularly the high-speed flow of compressible fluids, formula_12 (the dynamic pressure) is no longer an accurate measure of the difference between stagnation pressure and static pressure. Also, the familiar relationship that stagnation pressure is equal to "total pressure" does not always hold true. (It is always true in isentropic flow, but the presence of shock waves can cause the flow to depart from isentropic.) As a result, pressure coefficients can be greater than one in compressible flow.
Perturbation theory.
The pressure coefficient formula_10 can be estimated for irrotational and isentropic flow by introducing the potential formula_13 and the perturbation potential formula_14, normalized by the free-stream velocity formula_15
formula_16
Using Bernoulli's equation,
formula_17
which can be rewritten as
formula_18
where formula_19 is the sound speed.
The pressure coefficient becomes
formula_20
where formula_21 is the far-field sound speed.
Local piston theory.
The classical piston theory is a powerful aerodynamic tool. From the use of the momentum equation and the assumption of isentropic perturbations, one obtains the following basic piston theory formula for the surface pressure:
formula_22
where formula_23 is the downwash speed and formula_19 is the sound speed.
formula_24
The surface is defined as
formula_25
The slip velocity boundary condition leads to
formula_26
The downwash speed formula_23 is approximated as
formula_27
Hypersonic flow.
In hypersonic flow, the pressure coefficient can be accurately calculated for a vehicle using Newton's corpuscular theory of fluid motion, which is inaccurate for low-speed flow and relies on three assumptions: the flow is modeled as a stream of non-interacting particles travelling in straight lines until they strike the surface; on impact the particles lose all of their momentum normal to the surface; and they retain their momentum tangential to the surface.
For a freestream velocity formula_28 impacting a surface of area formula_29, which is inclined at an angle formula_30 relative to the freestream, the change in normal momentum is formula_31 and the mass flux incident on the surface is formula_32, with formula_33 being the freestream air density. Then the momentum flux, equal to the force exerted on the surface formula_34, from Newton's second law is equal to:
formula_35
Dividing by the surface area, it is clear that the force per unit area is equal to the pressure difference between the surface pressure formula_1 and the freestream pressure formula_36, leading to the relation:
formula_37
The last equation may be identified as the pressure coefficient, meaning that Newtonian theory predicts that the pressure coefficient in hypersonic flow is:
formula_38
For very high speed flows, and vehicles with sharp surfaces, the Newtonian theory works very well.
Modified Newtonian law.
A modification to the Newtonian theory, specifically for blunt bodies, was proposed by Lester Lees:
formula_39
where formula_40 is the maximum value of the pressure coefficient at the stagnation point behind a normal shock wave:
formula_41
where formula_42 is the stagnation pressure and formula_43 is the ratio of specific heats. The last relation is obtained from the ideal gas law formula_44, Mach number formula_45, and speed of sound formula_46. The Rayleigh pitot tube formula for a calorically perfect normal shock says that the ratio of the stagnation and freestream pressure is:
formula_47
Therefore, it follows that the maximum pressure coefficient for the Modified Newtonian law is:
formula_48
In the limit when formula_49, the maximum pressure coefficient becomes:
formula_50
And as formula_51, formula_52, recovering the pressure coefficient from Newtonian theory at very high speeds. The modified Newtonian theory is substantially more accurate than the Newtonian model for calculating the pressure distribution over blunt bodies.
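The following GNU Octave sketch evaluates the modified Newtonian law by combining the expression formula_48 for the maximum pressure coefficient with formula_39; the function name is an illustrative assumption, not part of any standard library.
function Cp = modified_newtonian(theta, M, gam)
  % maximum pressure coefficient at the stagnation point behind a normal shock
  p0_over_pinf = ((gam + 1)^2 * M^2 / (4 * gam * M^2 - 2 * (gam - 1)))^(gam / (gam - 1)) ...
    * (gam * (2 * M^2 - 1) + 1) / (gam + 1);
  Cp_max = 2 / (gam * M^2) * (p0_over_pinf - 1);
  Cp = Cp_max * sin(theta)^2;   % modified Newtonian surface pressure coefficient
endfunction
For example, modified_newtonian(pi/2, 10, 1.4) returns a stagnation-point value of about 1.83, compared with 2 from the unmodified Newtonian theory.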
Pressure distribution.
An airfoil at a given angle of attack will have what is called a pressure distribution. This pressure distribution is simply the pressure at all points around an airfoil. Typically, graphs of these distributions are drawn so that negative numbers are higher on the graph, as the formula_10 for the upper surface of the airfoil will usually be farther below zero and will hence be the top line on the graph.
Relationship with aerodynamic coefficients.
All the three aerodynamic coefficients are integrals of the pressure coefficient curve along the chord.
The coefficient of lift for a two-dimensional airfoil section with strictly horizontal surfaces can be calculated from the coefficient of pressure distribution by integration, or by calculating the area between the lines on the distribution. This expression is not suitable for direct numeric integration using the panel method of lift approximation, as it does not take into account the direction of pressure-induced lift. This equation is true only for zero angle of attack.
formula_53
where:
formula_54 is pressure coefficient on the lower surface
formula_55 is pressure coefficient on the upper surface
formula_56 is the leading edge location
formula_57 is the trailing edge location
When the lower surface formula_10 is higher (more negative) on the distribution it counts as a negative area as this will be producing down force rather than lift.
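A minimal GNU Octave sketch of this chordwise integration using the trapezoidal rule; the sampled pressure distributions are invented purely to illustrate the bookkeeping:
x = linspace(0, 1, 101);   % chordwise stations, leading edge at 0, trailing edge at 1
Cp_l = 0.4 * (1 - x);      % illustrative lower-surface pressure coefficients
Cp_u = -1.0 * (1 - x);     % illustrative upper-surface pressure coefficients
Cl = trapz(x, Cp_l - Cp_u) / (x(end) - x(1))   % about 0.7 for these made-up curves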
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C_p = {p - p_\\infty \\over \\frac{1}{2} \\rho_\\infty V_{\\infty}^2 }"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "p_\\infty"
},
{
"math_id": 3,
"text": "\\rho_\\infty"
},
{
"math_id": 4,
"text": "\\rm kg/m^3"
},
{
"math_id": 5,
"text": "V_\\infty"
},
{
"math_id": 6,
"text": "C_p|_{M \\, \\approx \\, 0} = {p - p_\\infty \\over p_0 - p_\\infty } = {1 - \\bigg(\\frac{u}{u_{\\infty}} \\bigg)^2}"
},
{
"math_id": 7,
"text": "u"
},
{
"math_id": 8,
"text": "M"
},
{
"math_id": 9,
"text": "p_0"
},
{
"math_id": 10,
"text": "C_p"
},
{
"math_id": 11,
"text": "C_p = -1"
},
{
"math_id": 12,
"text": "{\\frac{1}{2}\\rho v^2}"
},
{
"math_id": 13,
"text": "\\Phi"
},
{
"math_id": 14,
"text": "\\phi"
},
{
"math_id": 15,
"text": "u_{\\infty}"
},
{
"math_id": 16,
"text": "\\Phi = u_{\\infty}x + \\phi(x, y, z)"
},
{
"math_id": 17,
"text": "\n\\frac{\\partial \\Phi}{\\partial t} + \\frac{\\nabla \\Phi \\cdot \\nabla \\Phi}{2} + \\frac{\\gamma}{\\gamma-1}\\frac{p}{\\rho} = \\text{constant}\n"
},
{
"math_id": 18,
"text": "\n\\frac{\\partial \\Phi}{\\partial t} + \\frac{\\nabla \\Phi \\cdot \\nabla \\Phi}{2} + \\frac{a^2}{\\gamma-1}= \\text{constant}\n"
},
{
"math_id": 19,
"text": "a"
},
{
"math_id": 20,
"text": "\\begin{align}\nC_p &= \\frac{p-p_{\\infty}}{\\frac{\\gamma}{2}p_{\\infty} M^2} =\\frac{2}{\\gamma M^2}\\left[\\left(\\frac{a}{a_{\\infty}}\\right)^{\\frac{2\\gamma}{\\gamma-1}} -1\\right]\\\\\n&= \\frac{2}{\\gamma M^2}\\left[\\left(\\frac{\\gamma-1}{a_{\\infty}^2}(\\frac{u_{\\infty}^2}{2} - \\Phi_t - \\frac{\\nabla\\Phi\\cdot\\nabla\\Phi}{2}) + 1\\right)^{\\frac{\\gamma}{\\gamma-1}} -1\\right]\\\\\n&\\approx \\frac{2}{\\gamma M^2}\\left[\\left(1 - \\frac{\\gamma-1}{a_{\\infty}^2}(\\phi_t + u_{\\infty}\\phi_x )\\right)^{\\frac{\\gamma}{\\gamma-1}} -1\\right]\\\\\n&\\approx -\\frac{2\\phi_t}{u_{\\infty}^2} - \\frac{2\\phi_x}{u_{\\infty}}\n\\end{align}\n"
},
{
"math_id": 21,
"text": "a_{\\infty}"
},
{
"math_id": 22,
"text": "p = p_{\\infty}\\left(1 + \\frac{\\gamma-1}{2}\\frac{w}{a}\\right)^{\\frac{2\\gamma}{\\gamma-1}}"
},
{
"math_id": 23,
"text": "w"
},
{
"math_id": 24,
"text": "\nC_p = \\frac{p-p_{\\infty}}{\\frac{\\gamma}{2}p_{\\infty} M^2} = \\frac{2}{\\gamma M^2}\\left[\\left(1 + \\frac{\\gamma-1}{2}\\frac{w}{a}\\right)^{\\frac{2\\gamma}{\\gamma-1}} - 1\\right]\n"
},
{
"math_id": 25,
"text": "\nF(x,y,z,t)= z - f(x,y,t) = 0\n"
},
{
"math_id": 26,
"text": "\n\\frac{\\nabla F}{|\\nabla F|}(u_{\\infty} + \\phi_x,\\phi_y,\\phi_z) = V_{\\text{wall}}\\cdot \\frac{\\nabla F}{|\\nabla F|} = -\\frac{\\partial F}{\\partial t}\\frac{1}{|\\nabla F|}\n"
},
{
"math_id": 27,
"text": "\nw = \\frac{\\partial f}{\\partial t} + u_{\\infty} \\frac{\\partial f}{\\partial x}\n"
},
{
"math_id": 28,
"text": "V_{\\infty}"
},
{
"math_id": 29,
"text": "A"
},
{
"math_id": 30,
"text": "\\theta"
},
{
"math_id": 31,
"text": "V_{\\infty}\\sin\\theta"
},
{
"math_id": 32,
"text": "\\rho_{\\infty}V_{\\infty} A \\sin \\theta"
},
{
"math_id": 33,
"text": "\\rho_{\\infty}"
},
{
"math_id": 34,
"text": "F"
},
{
"math_id": 35,
"text": "F = (\\rho_{\\infty}V_{\\infty}A\\sin\\theta)(V_{\\infty}\\sin\\theta) = \\rho_{\\infty}V_{\\infty}^{2} A \\sin^{2}\\theta"
},
{
"math_id": 36,
"text": "p_{\\infty}"
},
{
"math_id": 37,
"text": "\\frac{F}{A} = p - p_{\\infty} = \\rho_{\\infty}V_{\\infty}^{2} \\sin^{2}\\theta \\implies \\frac{p - p_{\\infty}}{ \\frac{1}{2} \\rho_{\\infty}V_{\\infty}^{2}} = 2\\sin^{2}\\theta"
},
{
"math_id": 38,
"text": "C_{p} = 2\\sin^{2}\\theta"
},
{
"math_id": 39,
"text": "C_{p} = C_{p,\\max}\\sin^{2}\\theta"
},
{
"math_id": 40,
"text": "C_{p,\\max}"
},
{
"math_id": 41,
"text": "C_{p,\\max} = \\frac{p_{o} - p_{\\infty}}{ \\frac{1}{2}\\rho_{\\infty}V_{\\infty}^{2} } = \\frac{p_{\\infty}}{\\frac{1}{2}\\rho_{\\infty}V_{\\infty}^{2}} \\left( \\frac{p_{o}}{p_{\\infty}} - 1 \\right) = \\frac{2}{\\gamma M_{\\infty}^{2} } \\left( \\frac{p_{o}}{p_{\\infty}} - 1 \\right)"
},
{
"math_id": 42,
"text": "p_{o}"
},
{
"math_id": 43,
"text": "\\gamma"
},
{
"math_id": 44,
"text": "p = \\rho RT"
},
{
"math_id": 45,
"text": "M = V/a"
},
{
"math_id": 46,
"text": "a = \\sqrt{\\gamma RT}"
},
{
"math_id": 47,
"text": "\\frac{p_{o}}{p_{\\infty}} = \\left[ \\frac{(\\gamma+1)^{2}M_{\\infty}^{2}}{4\\gamma M_{\\infty}^{2} - 2(\\gamma-1)} \\right]^{\\gamma/(\\gamma-1)} \\left[ \\frac{\\gamma(2M_{\\infty}^{2} - 1) + 1}{\\gamma + 1} \\right]"
},
{
"math_id": 48,
"text": "C_{p,\\max} = \\frac{2}{\\gamma M_{\\infty}^{2} } \\left\\{ \\left[ \\frac{(\\gamma+1)^{2}M_{\\infty}^{2}}{4\\gamma M_{\\infty}^{2} - 2(\\gamma-1)} \\right]^{\\gamma/(\\gamma-1)} \\left[ \\frac{\\gamma(2M_{\\infty}^{2} - 1) + 1}{\\gamma + 1} \\right] - 1 \\right\\}"
},
{
"math_id": 49,
"text": "M_{\\infty} \\rightarrow \\infty"
},
{
"math_id": 50,
"text": "C_{p,\\max} = \\left[ \\frac{(\\gamma+1)^{2}}{4\\gamma} \\right]^{\\gamma/(\\gamma-1)} \\left( \\frac{4}{\\gamma + 1} \\right) "
},
{
"math_id": 51,
"text": "\\gamma \\rightarrow 1"
},
{
"math_id": 52,
"text": "C_{p,\\max} = 2"
},
{
"math_id": 53,
"text": "C_l=\\frac{1}{x_{TE}-x_{LE}}\\int\\limits_{x_{LE}}^{x_{TE}}\\left(C_{p_l}(x)-C_{p_u}(x)\\right)dx"
},
{
"math_id": 54,
"text": "C_{p_l}"
},
{
"math_id": 55,
"text": "C_{p_u}"
},
{
"math_id": 56,
"text": "x_{LE}"
},
{
"math_id": 57,
"text": "x_{TE}"
}
] | https://en.wikipedia.org/wiki?curid=851547 |
8517337 | Incomplete LU factorization | In numerical linear algebra, an incomplete LU factorization (abbreviated as ILU) of a matrix is a sparse approximation of the LU factorization often used as a preconditioner.
Introduction.
Consider a sparse linear system formula_0. These are often solved by computing the factorization formula_1, with "L" lower unitriangular and "U" upper triangular.
One then solves formula_2, formula_3, which can be done efficiently because the matrices are triangular.
For a typical sparse matrix, the LU factors can be much less sparse than the original matrix — a phenomenon called "fill-in".
The memory requirements for using a direct solver can then become a bottleneck in solving linear systems. One can combat this problem by using fill-reducing reorderings of the matrix's unknowns, such as the Minimum degree algorithm.
An incomplete factorization instead seeks triangular matrices "L", "U" such that formula_4 rather than formula_1. Solving for formula_5 can be done quickly but does not yield the exact solution to formula_0. So, we instead use the matrix formula_6 as a preconditioner in another iterative solution algorithm such as the conjugate gradient method or GMRES.
Definition.
For a given matrix formula_7 one defines the graph formula_8 as
formula_9
which is used to define the conditions a "sparsity patterns" formula_10 needs to fulfill
formula_11
A decomposition of the form formula_12, where the following hold:
formula_13 is lower unitriangular and formula_14 is upper triangular,
formula_15 satisfy formula_16,
formula_17 satisfies formula_18,
is called an incomplete LU decomposition (with respect to the sparsity pattern formula_10).
The sparsity pattern of "L" and "U" is often chosen to be the same as the sparsity pattern of the original matrix "A". If the underlying matrix structure can be referenced by pointers instead of copied, the only extra memory required is for the entries of "L" and "U". This preconditioner is called ILU(0).
Stability.
Concerning the stability of the ILU the following theorem was proven by Meijerink and van der Vorst.
Let formula_19 be an M-matrix, the (complete) LU decomposition given by formula_20, and the ILU by formula_21.
Then
formula_22
holds.
Thus, the ILU is at least as stable as the (complete) LU decomposition.
Generalizations.
One can obtain a more accurate preconditioner by allowing some level of extra fill in the factorization. A common choice is to use the sparsity pattern of "A2" instead of "A"; this matrix is appreciably more dense than "A", but still sparse over all. This preconditioner is called ILU(1). One can then generalize this procedure; the ILU(k) preconditioner of a matrix "A" is the incomplete LU factorization with the sparsity pattern of the matrix "Ak+1".
More accurate ILU preconditioners require more memory, to such an extent that eventually the running time of the algorithm increases even though the total number of iterations decreases. Consequently, there is a cost/accuracy trade-off that users must evaluate, typically on a case-by-case basis depending on the family of linear systems to be solved.
The ILU factorization can be performed as a fixed-point iteration in a highly parallel way. | [
{
"math_id": 0,
"text": "Ax = b"
},
{
"math_id": 1,
"text": "A = LU"
},
{
"math_id": 2,
"text": "Ly = b"
},
{
"math_id": 3,
"text": "Ux = y"
},
{
"math_id": 4,
"text": "A \\approx LU"
},
{
"math_id": 5,
"text": "LUx = b"
},
{
"math_id": 6,
"text": "M = LU"
},
{
"math_id": 7,
"text": " A \\in \\R^{n \\times n} "
},
{
"math_id": 8,
"text": " G(A) "
},
{
"math_id": 9,
"text": "\n G(A) := \\left\\lbrace (i,j) \\in \\N^2 : A_{ij} \\neq 0 \\right\\rbrace \\,,\n"
},
{
"math_id": 10,
"text": " S "
},
{
"math_id": 11,
"text": "\n S \\subset \\left\\lbrace 1, \\dots , n \\right\\rbrace^2\n \\,, \\quad\n \\left\\lbrace (i,i) : 1 \\leq i \\leq n \\right\\rbrace \\subset S\n \\,, \\quad\n G(A) \\subset S \n \\,.\n"
},
{
"math_id": 12,
"text": " A = LU - R "
},
{
"math_id": 13,
"text": " L \\in \\R^{n \\times n} "
},
{
"math_id": 14,
"text": " U \\in \\R^{n \\times n} "
},
{
"math_id": 15,
"text": " L,U "
},
{
"math_id": 16,
"text": " L_{ij}=U_{ij}=0 \\quad \\forall \\; (i,j) \\notin S "
},
{
"math_id": 17,
"text": " R \\in \\R^{n \\times n} "
},
{
"math_id": 18,
"text": " R_{ij}=0 \\quad \\forall \\; (i,j) \\in S "
},
{
"math_id": 19,
"text": " A "
},
{
"math_id": 20,
"text": " A=\\hat{L} \\hat{U} "
},
{
"math_id": 21,
"text": " A=LU-R "
},
{
"math_id": 22,
"text": "\n |L_{ij}| \\leq |\\hat{L}_{ij}|\n \\quad \\forall \\; i,j\n"
}
] | https://en.wikipedia.org/wiki?curid=8517337 |
8517402 | Symmetric successive over-relaxation | In applied mathematics, symmetric successive over-relaxation (SSOR), is a preconditioner.
If the original symmetric matrix can be split into its diagonal and strictly lower and upper triangular parts as formula_0, then the SSOR preconditioner matrix is defined as
formula_1
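A minimal GNU Octave sketch that assembles this preconditioner for a symmetric matrix; the function name is an illustrative assumption, and a practical implementation would apply the triangular factors rather than form formula_1 explicitly:
function M = ssor_preconditioner(A)
  D = diag(diag(A));   % diagonal part of A
  L = tril(A, -1);     % strictly lower triangular part, so A = D + L + L'
  M = (D + L) * diag(1 ./ diag(A)) * (D + L)';
endfunction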
It can also be parametrised by formula_2 as follows.
formula_3 | [
{
"math_id": 0,
"text": "A=D+L+L^\\mathsf{T}"
},
{
"math_id": 1,
"text": "M=(D+L) D^{-1} (D+L)^\\mathsf{T}"
},
{
"math_id": 2,
"text": "\\omega"
},
{
"math_id": 3,
"text": "M(\\omega)={\\omega\\over{2-\\omega}} \\left ( {1\\over\\omega} D + L \\right ) D^{-1} \\left ( {1\\over\\omega} D + L\\right)^\\mathsf{T}"
}
] | https://en.wikipedia.org/wiki?curid=8517402 |
8518192 | Incomplete Cholesky factorization | In numerical analysis, an incomplete Cholesky factorization of a symmetric positive definite matrix is a sparse approximation of the Cholesky factorization. An incomplete Cholesky factorization is often used as a preconditioner for algorithms like the conjugate gradient method.
The Cholesky factorization of a positive definite matrix "A" is "A" = "LL"* where "L" is a lower triangular matrix. An incomplete Cholesky factorization is given by a sparse lower triangular matrix "K" that is in some sense close to "L". The corresponding preconditioner is "KK"*.
One popular way to find such a matrix "K" is to use the algorithm for finding the exact Cholesky decomposition in which "K" has the same sparsity pattern as "A" (any entry of "K" is set to zero if the corresponding entry in "A" is also zero). This gives an incomplete Cholesky factorization which is as sparse as the matrix "A".
Motivation.
Consider the following matrix as an example:
formula_0
If we apply the full regular Cholesky decomposition, it yields:
formula_1
And, by definition:
formula_2
However, by applying Cholesky decomposition, we observe that some zero elements in the original matrix end up being non-zero elements in the decomposed matrix, like elements (4,2), (5,2) and (5,3) in this example. These elements are known as "fill-ins".
This is not an issue per se, but it is very problematic when working with sparse matrices, since the fill-ins generation is mostly unpredictable and reduces the matrix sparsity, impacting the efficiency of sparse matrix algorithms.
Therefore, given the importance of the Cholesky decomposition in matrix calculations, it is extremely relevant to repurpose the regular method, so as to eliminate the fill-ins generation. The incomplete Cholesky factorization does exactly that, by generating a matrix L similar to the one generated by the regular method, but conserving the zero elements in the original matrix.
Naturally:
formula_3
Multiplying matrix L generated by incomplete Cholesky factorization by its transpose won't yield the original matrix, but a similar one.
Algorithm.
For formula_4 from formula_5 to formula_6:
formula_7
For formula_8 from formula_9 to formula_6:
formula_10
Implementation.
Implementation of the incomplete Cholesky factorization in the GNU Octave language. The factorization is stored as a lower triangular matrix, with the elements in the upper triangle set to zero.
function a = ichol(a)
  n = size(a,1);
  for k = 1:n
    a(k,k) = sqrt(a(k,k));          % diagonal entry of the k-th column
    for i = (k+1):n
      if (a(i,k) != 0)
        a(i,k) = a(i,k)/a(k,k);     % scale the nonzero entries below the diagonal
      endif
    endfor
    for j = (k+1):n
      for i = j:n
        if (a(i,j) != 0)
          a(i,j) = a(i,j) - a(i,k)*a(j,k);   % update only entries that are already nonzero
        endif
      endfor
    endfor
  endfor
  for i = 1:n
    for j = i+1:n
      a(i,j) = 0;                   % zero out the upper triangle
    endfor
  endfor
endfunction
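A brief usage sketch, with the matrix from the Motivation section above:
A = [5 -2 0 -2 -2; -2 5 -2 0 0; 0 -2 5 -2 0; -2 0 -2 5 -2; -2 0 0 -2 5];
K = ichol(A);   % incomplete Cholesky factor with the same sparsity pattern as A
M = K * K';     % the preconditioner, which is close to A but not equal to it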
Sparse example.
Consider again the matrix displayed in the beginning of this article. Since it is symmetric and the method only uses the lower triangular elements, we can represent it by:
formula_11
More specifically, in its sparse form as a coordinate list, sweeping rows first:
Value 5 -2 -2 -2 5 -2 5 -2 5 -2 5
Row 1 2 4 5 2 3 3 4 4 5 5
Col 1 1 1 1 2 2 3 3 4 4 5
Then, we take the square root of (1,1) and divide the other (i,1) elements by the result:
Value 2.24 -0.89 -0.89 -0.89 | 5 -2 5 -2 5 -2 5
Row 1 2 4 5 | 2 3 3 4 4 5 5
Col 1 1 1 1 | 2 2 3 3 4 4 5
After that, for all the other elements with column greater than 1, calculate (i,j)=(i,j)-(i,1)*(j,1) if (i,1) and (j,1) exist. For example: (5,4) = (5,4)-(5,1)*(4,1) = -2 -(-0.89*-0.89) = -2.8.
Value 2.24 -0.89 -0.89 -0.89 | 4.2 -2 5 -2 4.2 -2.8 4.2
Row 1 2 4 5 | 2 3 3 4 4 5 5
Col 1 1 1 1 | 2 2 3 3 4 4 5
The elements (2,2), (4,4), (5,4) and (5,5) have been recalculated, since they obey this rule. On the other hand, the elements (3,2), (3,3) and (4,3) are not recalculated, since the element (3,1) doesn't exist, even though the elements (2,1) and (4,1) exist.
Now, repeat the process, but for (i,2). Take the square root of (2,2) and divide the other (i,2) elements by the result:
Value 2.24 -0.89 -0.89 -0.89 | 2.05 -0.98 | 5 -2 4.2 -2.8 4.2
Row 1 2 4 5 | 2 3 | 3 4 4 5 5
Col 1 1 1 1 | 2 2 | 3 3 4 4 5
Again, for elements with column greater than 2, calculate (i,j)=(i,j)-(i,2)*(j,2) if (i,2) and (j,2) exist:
Value 2.24 -0.89 -0.89 -0.89 | 2.05 -0.98 | 4.05 -2 4.2 -2.8 4.2
Row 1 2 4 5 | 2 3 | 3 4 4 5 5
Col 1 1 1 1 | 2 2 | 3 3 4 4 5
Repeat for (i,3). Take the square root of (3,3) and divide the other (i,3):
Value 2.24 -0.89 -0.89 -0.89 2.05 -0.98 | 2.01 -0.99 | 4.2 -2.8 4.2
Row 1 2 4 5 2 3 | 3 4 | 4 5 5
Col 1 1 1 1 2 2 | 3 3 | 4 4 5
For elements with column greater than 3, calculate (i,j)=(i,j)-(i,3)*(j,3) if (i,3) and (j,3) exist:
Value 2.24 -0.89 -0.89 -0.89 2.05 -0.98 | 2.01 -0.99 | 3.21 -2.8 4.2
Row 1 2 4 5 2 3 | 3 4 | 4 5 5
Col 1 1 1 1 2 2 | 3 3 | 4 4 5
Repeat for (i,4). Take the square root of (4,4) and divide the other (i,4):
Value 2.24 -0.89 -0.89 -0.89 2.05 -0.98 2.01 -0.99 | 1.79 -1.56 | 4.2
Row 1 2 4 5 2 3 3 4 | 4 5 | 5
Col 1 1 1 1 2 2 3 3 | 4 4 | 5
For elements with column greater than 4, calculate (i,j)=(i,j)-(i,4)*(j,4) if (i,4) and (j,4) exist:
Value 2.24 -0.89 -0.89 -0.89 2.05 -0.98 2.01 -0.99 | 1.79 -1.56 | 1.76
Row 1 2 4 5 2 3 3 4 | 4 5 | 5
Col 1 1 1 1 2 2 3 3 | 4 4 | 5
Finally take the square root of (5,5):
Value 2.24 -0.89 -0.89 -0.89 2.05 -0.98 2.01 -0.99 1.79 -1.56 | 1.33
Row 1 2 4 5 2 3 3 4 4 5 | 5
Col 1 1 1 1 2 2 3 3 4 4 | 5
Expanding the matrix to its full form:
formula_12
Note that in this case no fill-ins were generated compared to the original matrix. The elements (4,2), (5,2) and (5,3) are still zero.
However, if we perform the multiplication of L to its transpose:
formula_13
We get a matrix slightly different from the original one, since the decomposition didn't take into account all the elements, in order to eliminate the fill-ins.
Sparse implementation.
The sparse version of the incomplete Cholesky factorization (the same procedure presented above) implemented in MATLAB can be seen below. Naturally, MATLAB has its own facilities for dealing with sparse matrices, but the code below is made explicit for pedagogic purposes. This algorithm is efficient, since it treats the matrix as a sequential 1D array, automatically skipping the zero elements.
function A=Sp_ichol(A)
  n=size(A,1);
  ncols=A(n).col;
  c_end=0;
  for col=1:ncols
    is_next_col=0;
    c_start=c_end+1;
    for i=c_start:n
      if A(i).col==col % in the current column (col):
        if A(i).col==A(i).row
          A(i).val=sqrt(A(i).val); % take the square root of the current column's diagonal element
          div=A(i).val;
        else
          A(i).val=A(i).val/div; % divide the other current column's elements by the square root of the diagonal element
        end
      end
      if A(i).col>col % in the next columns (col+1 ... ncols):
        if is_next_col==0
          c_end=i-1;
          is_next_col=1;
        end
        v1=0;
        v2=0;
        for j=c_start:c_end
          if A(j).col==col
            if A(j).row==A(i).row % search for current column's (col) elements A(j) whose row index is the same as current element's A(i) row index
              v1=A(j).val;
            end
            if A(j).row==A(i).col % search for current column's (col) elements A(j) whose row index is the same as current element's A(i) column index
              v2=A(j).val;
            end
            if v1~=0 && v2~=0 % if these elements exist in the current column (col), recalculate the current element A(i):
              A(i).val=A(i).val-v1*v2;
              break;
            end
          end
        end
      end
    end
  end
end | [
{
"math_id": 0,
"text": "\\mathbf{A}=\\begin{bmatrix} 5 & -2 & 0 & -2 &-2\\\\ -2 & 5 & -2 & 0 & 0 \\\\ 0 & -2 & 5 & -2 & 0\\\\ -2 & 0 &-2 & 5 & -2\\\\ -2 & 0 & 0 &-2 & 5\\\\ \\end{bmatrix}"
},
{
"math_id": 1,
"text": "\\mathbf{L}=\\begin{bmatrix} 2.24 & 0 & 0 & 0 & 0\\\\ -0.89 & 2.05 & 0 & 0 & 0 \\\\ 0 & -0.98 & 2.02 & 0 & 0\\\\ -0.89 & -0.39 &-1.18 & 1.63 & 0\\\\ -0.89 & -0.39 & -0.19 & -1.95 & 0.45\\\\ \\end{bmatrix}"
},
{
"math_id": 2,
"text": "\\mathbf{A}=\\mathbf{L}\\mathbf{L'}"
},
{
"math_id": 3,
"text": "\\mathbf{A}\\neq\\mathbf{L_{ichol}}\\mathbf{L_{ichol} '}"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "1"
},
{
"math_id": 6,
"text": "N"
},
{
"math_id": 7,
"text": "\nL_{ii} = \\left( {a_{ii} - \\sum\\limits_{k = 1}^{i - 1} {L_{ik}^2 } } \\right)^{{1 \\over 2}} \n"
},
{
"math_id": 8,
"text": "j"
},
{
"math_id": 9,
"text": "i+1"
},
{
"math_id": 10,
"text": "\nL_{ji} = {1 \\over {L_{ii} }}\\left( {a_{ji} - \\sum\\limits_{k = 1}^{i - 1} {L_{ik} L_{jk} } } \\right)\n"
},
{
"math_id": 11,
"text": "\\mathbf{A_{tri}}=\\begin{bmatrix} 5 & 0 & 0 & 0 & 0 \\\\ -2 & 5 & 0 & 0 & 0 \\\\ 0 & -2 & 5 & 0 & 0 \\\\ -2 & 0 & -2 & 5 & 0 \\\\ -2 & 0 & 0 & -2 & 5 \\end{bmatrix}"
},
{
"math_id": 12,
"text": "\\mathbf{L}=\\begin{bmatrix} 2.24 & 0 & 0 & 0 & 0 \\\\ -0.89 & 2.05 & 0 & 0 & 0 \\\\ 0 & -0.98 & 2.01 & 0 & 0 \\\\ -0.89 & 0 & -0.99 & 1.79 & 0 \\\\ -0.89 & 0 & 0 & -1.56 & 1.33 \\end{bmatrix}"
},
{
"math_id": 13,
"text": "\\mathbf{LL'}=\\begin{bmatrix} 5 & -2 & 0 & -2 & -2 \\\\ -2 & 5 & -2 & 0.8 & 0.8 \\\\ 0 & -2 & 5 & -2 & 0 \\\\ -2 & 0.8 & -2 & 5 & -2 \\\\ -2 & 0.8 & 0 & -2 & 5 \\end{bmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=8518192 |
8518299 | Modified Richardson iteration | Iterative method used to solve a linear system of equations
Modified Richardson iteration is an iterative method for solving a system of linear equations. Richardson iteration was proposed by Lewis Fry Richardson in his work dated 1910. It is similar to the Jacobi and Gauss–Seidel methods.
We seek the solution to a set of linear equations, expressed in matrix terms as
formula_0
The Richardson iteration is
formula_1
where formula_2 is a scalar parameter that has to be chosen such that the sequence formula_3 converges.
It is easy to see that the method has the correct fixed points, because if it converges, then formula_4 and formula_3 has to approximate a solution of formula_5.
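A minimal GNU Octave sketch of the iteration; the function name and the fixed iteration count are illustrative assumptions (in practice one monitors the residual and chooses formula_2 using the convergence analysis below):
function x = richardson(A, b, omega, x, maxit)
  for k = 1:maxit
    x = x + omega * (b - A * x);   % one Richardson step
  endfor
endfunction
For a symmetric positive definite matrix one may take, for example, omega = 2 / (min(eig(A)) + max(eig(A))), the optimal choice discussed below.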
Convergence.
Subtracting the exact solution formula_6, and introducing the notation for the error formula_7, we get the equality for the errors
formula_8
Thus,
formula_9
for any vector norm and the corresponding induced matrix norm. Thus, if formula_10, the method converges.
Suppose that formula_11 is symmetric positive definite and that formula_12 are the eigenvalues of formula_11. The error converges to formula_13 if formula_14 for all eigenvalues formula_15. If, e.g., all eigenvalues are positive, this can be guaranteed if formula_2 is chosen such that formula_16. The optimal choice, minimizing all formula_17, is formula_18, which gives the simplest Chebyshev iteration. This optimal choice yields a spectral radius of
formula_19
where formula_20 is the condition number.
If there are both positive and negative eigenvalues, the method will diverge for any formula_2 if the initial error formula_21 has nonzero components in the corresponding eigenvectors.
Equivalence to gradient descent.
Consider minimizing the function formula_22. Since this is a convex function, a sufficient condition for optimality is that the gradient is zero (formula_23) which gives rise to the equation
formula_24
Define formula_25 and formula_26.
Because of the form of "A", it is a positive semi-definite matrix, so it has no negative eigenvalues.
A step of gradient descent is
formula_27
which is equivalent to the Richardson iteration by making formula_28.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " A x = b.\\, "
},
{
"math_id": 1,
"text": " \nx^{(k+1)} = x^{(k)} + \\omega \\left( b - A x^{(k)} \\right),\n"
},
{
"math_id": 2,
"text": "\\omega"
},
{
"math_id": 3,
"text": "x^{(k)}"
},
{
"math_id": 4,
"text": "x^{(k+1)} \\approx x^{(k)}"
},
{
"math_id": 5,
"text": "A x = b"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "e^{(k)} = x^{(k)}-x"
},
{
"math_id": 8,
"text": " \ne^{(k+1)} = e^{(k)} - \\omega A e^{(k)} = (I-\\omega A) e^{(k)}. \n"
},
{
"math_id": 9,
"text": " \n\\|e^{(k+1)}\\| = \\|(I-\\omega A) e^{(k)}\\|\\leq \\|I-\\omega A\\| \\|e^{(k)}\\|, \n"
},
{
"math_id": 10,
"text": "\\|I-\\omega A\\|<1"
},
{
"math_id": 11,
"text": "A"
},
{
"math_id": 12,
"text": "(\\lambda_j)_j"
},
{
"math_id": 13,
"text": "0"
},
{
"math_id": 14,
"text": "| 1 - \\omega \\lambda_j |< 1"
},
{
"math_id": 15,
"text": "\\lambda_j"
},
{
"math_id": 16,
"text": "0 < \\omega < \\omega_\\text{max}\\,, \\ \\omega_\\text{max}:= 2/\\lambda_{\\text{max}}(A)"
},
{
"math_id": 17,
"text": "| 1 - \\omega \\lambda_j |"
},
{
"math_id": 18,
"text": "\\omega_\\text{opt} := 2/(\\lambda_\\text{min}(A)+\\lambda_\\text{max}(A))"
},
{
"math_id": 19,
"text": "\n\\min_{\\omega\\in (0,\\omega_\\text{max}) } \\rho (I-\\omega A)\n = \\rho (I-\\omega_\\text{opt} A)\n = 1 - \\frac{2}{\\kappa(A)+1} \\,,\n"
},
{
"math_id": 20,
"text": "\\kappa(A)"
},
{
"math_id": 21,
"text": "e^{(0)}"
},
{
"math_id": 22,
"text": "F(x) = \\frac{1}{2}\\|\\tilde{A}x-\\tilde{b}\\|_2^2"
},
{
"math_id": 23,
"text": "\\nabla F(x) = 0"
},
{
"math_id": 24,
"text": "\\tilde{A}^T\\tilde{A}x = \\tilde{A}^T\\tilde{b}."
},
{
"math_id": 25,
"text": "A=\\tilde{A}^T\\tilde{A}"
},
{
"math_id": 26,
"text": "b=\\tilde{A}^T\\tilde{b}"
},
{
"math_id": 27,
"text": " x^{(k+1)} = x^{(k)} - t \\nabla F(x^{(k)}) = x^{(k)} - t( Ax^{(k)} - b )"
},
{
"math_id": 28,
"text": "t=\\omega"
}
] | https://en.wikipedia.org/wiki?curid=8518299 |
8519543 | Spectroscopic parallax | Spectroscopic parallax or main sequence fitting is an astronomical method for measuring the distances to stars.
Despite its name, it does not rely on the geometric parallax effect. The spectroscopic parallax technique can be applied to any main sequence star for which a spectrum can be recorded. The method depends on the star being sufficiently bright to provide a measurable spectrum, which as of 2013 limits its range to about 10,000 parsecs.
To apply this method, one must measure the apparent magnitude of the star and know the spectral type of the star. The spectral type can be determined by observing the star's spectrum. If the star lies on the main sequence, as determined by its luminosity class, the spectral type of the star provides a good estimate of the star's absolute magnitude. Knowing the apparent magnitude (m) and absolute magnitude (M) of the star, one can calculate the distance (d, in parsecs) of the star using formula_0 (see distance modulus). The true distance to the star may be different than the one calculated due to interstellar extinction.
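A small GNU Octave sketch of this step, with illustrative magnitudes and neglecting interstellar extinction:
m = 11.0;   % apparent magnitude of the star
M = 1.0;    % absolute magnitude estimated from the spectral type and luminosity class
d = 10 * 10^((m - M) / 5)   % distance in parsecs, here 1000 pc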
The method ultimately derives from the spectroscopic studies of sunspots and stars by Walter Sydney Adams and Ernst Arnold Kohlschütter.
The method is an important step on the cosmic distance ladder.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m - M = 5 \\log (d/10)"
}
] | https://en.wikipedia.org/wiki?curid=8519543 |
8519647 | Enhanced vegetation index | The enhanced vegetation index (EVI) is an 'optimized' vegetation index designed to enhance the vegetation signal with improved sensitivity in high biomass regions and improved vegetation monitoring through a de-coupling of the canopy background signal and a reduction in atmosphere influences. EVI is computed following this equation:
formula_0
where:
NIR, Red and Blue are the atmospherically corrected (or partially corrected) surface reflectances in the near-infrared, red and blue bands,
L is the canopy background adjustment,
C1 and C2 are the coefficients of the aerosol resistance term, which uses the blue band to correct for aerosol influences in the red band, and
G is a gain (scaling) factor.
The coefficients adopted in the MODIS-EVI algorithm are: L=1, C1 = 6, C2 = 7.5, and G = 2.5.
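A short GNU Octave sketch of the computation with these coefficients; the reflectance values are illustrative only:
NIR = 0.45; Red = 0.08; Blue = 0.05;   % illustrative surface reflectances
G = 2.5; L = 1; C1 = 6; C2 = 7.5;      % MODIS-EVI coefficients
EVI = G * (NIR - Red) / (NIR + C1 * Red - C2 * Blue + L)   % about 0.60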
Whereas the Normalized Difference Vegetation Index (NDVI) is chlorophyll sensitive, the EVI is more responsive to canopy structural variations, including leaf area index (LAI), canopy type, plant physiognomy, and canopy architecture. The two vegetation indices complement each other in global vegetation studies and improve upon the detection of vegetation changes and extraction of canopy biophysical parameters.
Another difference between Normalized Difference Vegetation Index (NDVI) and EVI is that in the presence of snow, NDVI decreases, while EVI increases (Huete, 2002).
Starting in 2000, after the launch of the two MODIS sensors on Terra (satellite) and Aqua (satellite) by NASA, EVI was adopted as a standard product by NASA and became extremely popular with users due to its ability to reduce background and atmospheric noise and its resistance to saturation, a typical NDVI problem. EVI is currently distributed for free by the USGS LP DAAC.
Two-band EVI.
Two reasons drive the search for a two-band EVI: the blue band contributes little once atmospheric effects are corrected at the sensor level, and many sensors lack a blue band altogether, so a two-band index extends EVI-like calculations to their data.
Additionally, the original motivation for the inclusion of the blue band (NDVI uses only NIR and red) in the 1990s was to mitigate atmospheric aerosol interference. However, since that time, sensor-level atmospheric adjustment has improved substantially, minimizing the marginal impact of the blue band on accuracy.
We'll call the two-band EVI "EVI2", and the three-band EVI simply "EVI". A number of EVI2 approaches are available; the one of Jiang et al. 2008 is:
formula_1
formula_2
Jiang's EVI2 has the best similarity with the 3-band EVI, particularly when atmospheric effects are insignificant and data quality is good. EVI2 can be used for sensors without a blue band, such as the Advanced Very High Resolution Radiometer (AVHRR), and may reveal different vegetation dynamics in comparison with the current AVHRR NDVI dataset.
There exist some other EVI2s, one being that of Miura 2008 for ASTER:
formula_3
The ASTER sensors have a different spectral range compared to the MODIS ones.
Application of EVI.
An example of the utility of EVI was reported by Huete "et al." (2006). Previously, the Amazon rainforest was viewed as having a monotonous growing season, where there is no particular seasonality to plant growth. Using the MODIS EVI product, Huete "et al." showed that the Amazon forest exhibits a distinct increase in growth during the dry season. This phenomenon has implications for our understanding of the carbon cycle and sinks in the region, though it is unclear whether this is a long-standing pattern or an emergent shift associated with climate change.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{EVI} = G \\times \\frac{(\\text{NIR} - \\text{Red})}{(\\text{NIR} + C_1 \\times \\text{Red} - C_2 \\times \\text{Blue} + L)}"
},
{
"math_id": 1,
"text": "\\text{EVI2} = f(\\text{Red},\\text{NIR}) = G\\times {(\\text{NIR}-\\text{Red}) \\over (L+\\text{NIR}+C \\times \\text{Red})}"
},
{
"math_id": 2,
"text": "\\text{EVI2} = 2.5 \\times {(\\text{NIR}-\\text{Red}) \\over (\\text{NIR}+2.4*\\text{Red}+1)}"
},
{
"math_id": 3,
"text": "\\text{EVI2} = 2.4 \\times {(\\text{NIR}-\\text{Red}) \\over (\\text{NIR}+\\text{Red}+1)}"
}
] | https://en.wikipedia.org/wiki?curid=8519647 |
8519857 | Monte Carlo method in statistical mechanics | Monte Carlo in statistical physics refers to the application of the Monte Carlo method to problems in statistical physics, or statistical mechanics.
Overview.
The general motivation to use the Monte Carlo method in statistical physics is to evaluate a multivariable integral. The typical problem begins with a system for which the Hamiltonian is known, which is at a given temperature and which follows Boltzmann statistics. To obtain the mean value of some macroscopic variable, say A, the general approach is to compute, over all of phase space (PS for short), the mean value of A using the Boltzmann distribution:
formula_0.
where
formula_1 is the energy of the system for a given state defined by
formula_2 - a vector with all the degrees of freedom (for instance, for a mechanical system, formula_3),
formula_4 and
formula_5
is the partition function.
One possible approach to solve this multivariable integral is to exactly enumerate all possible configurations of the system, and calculate averages at will. This is done in exactly solvable systems, and in simulations of simple systems with few particles. In realistic systems, on the other hand, an exact enumeration can be difficult or impossible to implement.
For those systems, Monte Carlo integration (not to be confused with the Monte Carlo method in general, which is also used, for instance, to simulate molecular chains) is generally employed. The main motivation for its use is the fact that, with Monte Carlo integration, the error goes as formula_6, independently of the dimension of the integral. Another important concept related to Monte Carlo integration is importance sampling, a technique that improves the computational time of the simulation.
In the following sections, the general implementation of the Monte Carlo integration for solving this kind of problems is discussed.
Importance sampling.
An estimation, under Monte Carlo integration, of an integral defined as
formula_7
is
formula_8
where formula_9 are uniformly obtained from all the phase space (PS) and N is the number of sampling points (or function evaluations).
Within the phase space, some regions are generally more important to the mean of the variable formula_10 than others. In particular, those for which the value of formula_11 is sufficiently high when compared to the rest of the energy spectrum are the most relevant for the integral. Using this fact, the natural question to ask is: is it possible to choose, with higher frequency, the states that are known to be more relevant to the integral? The answer is yes, using the importance sampling technique.
Let us assume formula_12 is a distribution that chooses the states that are known to be more relevant to the integral.
The mean value of formula_10 can be rewritten as
formula_13,
where formula_14 are the sampled values taking into account the importance probability formula_12. This integral can be estimated by
formula_15
where formula_9 are now randomly generated using the formula_12 distribution. Since most of the time it is not easy to find a way of generating states with a given distribution, the Metropolis algorithm must be used.
Canonical.
Because it is known that the most likely states are those that maximize the Boltzmann distribution, a good distribution, formula_12, to choose for the importance sampling is the Boltzmann distribution, or canonical distribution. Let
formula_16
be the distribution to use. Substituting on the previous sum,
formula_17.
So, the procedure for obtaining the mean value of a given variable with the canonical distribution is to use the Metropolis algorithm to generate states according to the distribution formula_12 and average formula_14 over them.
One important issue must be considered when using the Metropolis algorithm with the canonical distribution: when performing a given measurement, i.e. a realization of formula_9, one must ensure that that realization is not correlated with the previous state of the system (otherwise the states are not being "randomly" generated). On systems with relevant energy gaps, this is the major drawback of the canonical distribution, because the time needed for the system to de-correlate from the previous state can tend to infinity.
Multi-canonical.
As stated before, the canonical approach has a major drawback, which becomes relevant in most of the systems that use Monte Carlo integration. For those systems with "rough energy landscapes", the multicanonical approach can be used.
The multicanonical approach uses a different choice for importance sampling:
formula_18
where formula_19 is the density of states of the system. The major advantage of this choice is that the energy histogram is flat, i.e. the generated states are equally distributed on energy. This means that, when using the Metropolis algorithm, the simulation doesn't see the "rough energy landscape", because every energy is treated equally.
The major drawback of this choice is the fact that, on most systems, formula_19 is unknown. To overcome this, the Wang and Landau algorithm is normally used to obtain the DOS during the simulation. Note that after the DOS is known, the mean values of every variable can be calculated for every temperature, since the generation of states does not depend on formula_20.
Implementation.
In this section, the implementation will focus on the Ising model. Let us consider a two-dimensional spin network, with L spins (lattice sites) on each side. There are naturally formula_21 spins, and so the phase space is discrete and is characterized by N spins, formula_22, where formula_23 is the spin of each lattice site. The system's energy is given by formula_24, where formula_25 is the set of first-neighborhood spins of i and J is the interaction matrix (for a ferromagnetic Ising model, J is the identity matrix). This states the problem.
In this example, the objective is to obtain formula_26 and formula_27 (for instance, to obtain the magnetic susceptibility of the system), since it is straightforward to generalize to other observables. According to the definition, formula_28.
Canonical.
First, the system must be initialized: let formula_29 be the system's inverse temperature and initialize the system with an initial state (which can be anything, since the final result should not depend on it).
With the canonical choice, the Metropolis method must be employed. Because there is no single right way of choosing which state is to be picked, one can particularize and choose to try to flip one spin at a time. This choice is usually called "single spin flip". The following steps are to be made to perform a single measurement.
step 1: generate a state that follows the formula_12 distribution:
step 1.1: Perform TT times the following iteration:
step 1.1.1: pick a lattice site at random (with probability 1/N), which will be called i, with spin formula_30.
step 1.1.2: pick a random number formula_31.
step 1.1.3: calculate the energy change of trying to flip the spin i:
formula_32
and its magnetization change: formula_33
step 1.1.4: if formula_34, flip the spin (formula_35 ), otherwise, don't.
step 1.1.5: update the several macroscopic variables in case the spin flipped: formula_36, formula_37
after TT iterations, the system is considered to be uncorrelated with its previous state, which means that, at this moment, the probability of the system being in a given state follows the Boltzmann distribution, which is the objective proposed by this method.
step 2: perform the measurement:
step 2.1: save, on a histogram, the values of M and M2.
As a final note, TT is not easy to estimate, because it is not easy to say when the system is de-correlated from the previous state. To overcome this point, one generally does not use a fixed TT, but instead takes TT to be a "tunneling time". One tunneling time is defined as the number of iterations of step 1 the system needs to go from the minimum of its energy to the maximum of its energy and return.
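A minimal GNU Octave sketch of the single-spin-flip sweep described in step 1 above, for the ferromagnetic case with periodic boundary conditions; the function name and argument layout are illustrative assumptions:
function [sigma, E, M] = metropolis_sweeps(sigma, beta, nsweeps, E, M)
  L = size(sigma, 1);   % sigma is an L-by-L array of spins, each +1 or -1
  N = L * L;
  for sweep = 1:nsweeps
    for t = 1:N
      i = randi(L); j = randi(L);   % step 1.1.1: pick a lattice site at random
      % sum of the four nearest-neighbour spins (periodic boundary conditions)
      nb = sigma(mod(i, L) + 1, j) + sigma(mod(i - 2, L) + 1, j) ...
        + sigma(i, mod(j, L) + 1) + sigma(i, mod(j - 2, L) + 1);
      dE = 2 * sigma(i, j) * nb;   % step 1.1.3: energy change of flipping spin (i,j)
      if rand() < min(1, exp(-beta * dE))   % steps 1.1.2 and 1.1.4: accept or reject
        E = E + dE;                  % step 1.1.5: update the macroscopic variables
        M = M - 2 * sigma(i, j);
        sigma(i, j) = -sigma(i, j);  % flip the spin
      endif
    endfor
  endfor
endfunction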
A major drawback of this method with the "single spin flip" choice in systems like the Ising model is that the tunneling time scales as a power law, formula_38, where z is greater than 0.5, a phenomenon known as "critical slowing down".
Applicability.
The method thus neglects dynamics, which can be a major drawback, or a great advantage. Indeed, the method can only be applied to static quantities, but the freedom to choose moves makes the method very flexible. An additional advantage is that some systems, such as the Ising model, lack a dynamical description and are only defined by an energy prescription; for these the Monte Carlo approach is the only one feasible.
Generalizations.
The great success of this method in statistical mechanics has led to various generalizations such as the method of simulated annealing for optimization, in which a fictitious temperature is introduced and then gradually lowered. | [
{
"math_id": 0,
"text": "\\langle A\\rangle=\\int_{PS} A_{\\vec{r}} \\frac{e^{-\\beta E_{\\vec{r}}}}{Z} d\\vec{r}"
},
{
"math_id": 1,
"text": "E(\\vec{r})=E_{\\vec{r}}"
},
{
"math_id": 2,
"text": "\\vec{r}"
},
{
"math_id": 3,
"text": " \\vec{r} = \\left(\\vec{q}, \\vec{p} \\right) "
},
{
"math_id": 4,
"text": "\\beta\\equiv 1/k_bT"
},
{
"math_id": 5,
"text": "Z= \\int_{PS} e^{-\\beta E_{\\vec{r}}}d\\vec{r}"
},
{
"math_id": 6,
"text": " 1/\\sqrt{N}"
},
{
"math_id": 7,
"text": "\\langle A\\rangle = \\int_{PS} A_{\\vec{r}} e^{-\\beta E_{\\vec{r}}}d\\vec{r}/Z"
},
{
"math_id": 8,
"text": "\\langle A\\rangle \\simeq \\frac{1}{N}\\sum_{i=1}^N A_{\\vec{r}_i} e^{-\\beta E_{\\vec{r}_i}}/Z"
},
{
"math_id": 9,
"text": "\\vec{r}_i"
},
{
"math_id": 10,
"text": "A"
},
{
"math_id": 11,
"text": "e^{-\\beta E_{\\vec{r}_i}}"
},
{
"math_id": 12,
"text": "p(\\vec{r})"
},
{
"math_id": 13,
"text": "\\langle A\\rangle = \\int_{PS} p^{-1}(\\vec{r}) \\frac{A_{\\vec{r}} }{p^{-1}(\\vec{r})}e^{-\\beta E_{\\vec{r}}}/Zd\\vec{r} = \\int_{PS} p^{-1}(\\vec{r}) A^{*}_{\\vec{r}} e^{-\\beta E_{\\vec{r}}}/Zd\\vec{r} "
},
{
"math_id": 14,
"text": "A^{*}_{\\vec{r}}"
},
{
"math_id": 15,
"text": "\\langle A\\rangle \\simeq \\frac{1}{N}\\sum_{i=1}^N p^{-1}(\\vec{r}_i) A^{*}_{\\vec{r}_i} e^{-\\beta E_{\\vec{r}_i} }/Z "
},
{
"math_id": 16,
"text": "p(\\vec{r}) = \\frac{ e^{-\\beta E_\\vec{r}}}{Z}"
},
{
"math_id": 17,
"text": "\\langle A\\rangle \\simeq \\frac{1}{N}\\sum_{i=1}^N A^{*}_{\\vec{r}_i}"
},
{
"math_id": 18,
"text": "p(\\vec{r}) = \\frac{1}{\\Omega(E_\\vec{r})}"
},
{
"math_id": 19,
"text": "\\Omega(E)"
},
{
"math_id": 20,
"text": "\\beta"
},
{
"math_id": 21,
"text": "N = L^2"
},
{
"math_id": 22,
"text": "\\vec{r} = (\\sigma_1,\\sigma_2,...,\\sigma_N)"
},
{
"math_id": 23,
"text": "\\sigma_i\\in \\{-1,1\\}"
},
{
"math_id": 24,
"text": "E(\\vec{r}) = \\sum_{i=1}^N\\sum_{j\\in viz_i} (1 - J_{ij}\\sigma_i \\sigma_j)"
},
{
"math_id": 25,
"text": "viz_i"
},
{
"math_id": 26,
"text": "\\langle M \\rangle"
},
{
"math_id": 27,
"text": "\\langle M^2 \\rangle"
},
{
"math_id": 28,
"text": "M(\\vec{r}) = \\sum_{i=1}^N \\sigma_i"
},
{
"math_id": 29,
"text": "\\beta=1/k_b T"
},
{
"math_id": 30,
"text": "\\sigma_i"
},
{
"math_id": 31,
"text": "\\alpha \\in[0,1]"
},
{
"math_id": 32,
"text": "\\Delta E = 2\\sigma_i \\sum_{j\\in viz_i}\\sigma_j"
},
{
"math_id": 33,
"text": "\\Delta M = -2\\sigma_i "
},
{
"math_id": 34,
"text": "\\alpha < \\min(1, e^{-\\beta \\Delta E} )"
},
{
"math_id": 35,
"text": "\\sigma_i = -\\sigma_i"
},
{
"math_id": 36,
"text": "E = E + \\Delta E"
},
{
"math_id": 37,
"text": "M = M + \\Delta M"
},
{
"math_id": 38,
"text": " N^{2+z}"
}
] | https://en.wikipedia.org/wiki?curid=8519857 |
852089 | Gravitational time dilation | Time dilation due to gravitation
Gravitational time dilation is a form of time dilation, an actual difference of elapsed time between two events, as measured by observers situated at varying distances from a gravitating mass. The lower the gravitational potential (the closer the clock is to the source of gravitation), the slower time passes, speeding up as the gravitational potential increases (the clock moving away from the source of gravitation). Albert Einstein originally predicted this in his theory of relativity, and it has since been confirmed by tests of general relativity.
This effect has been demonstrated by noting that atomic clocks at differing altitudes (and thus different gravitational potential) will eventually show different times. The effects detected in such Earth-bound experiments are extremely small, with differences being measured in nanoseconds. Relative to Earth's age in billions of years, Earth's core is in effect 2.5 years younger than its surface. Demonstrating larger effects would require measurements at greater distances from the Earth, or a larger gravitational source.
Gravitational time dilation was first described by Albert Einstein in 1907 as a consequence of special relativity in accelerated frames of reference. In general relativity, it is considered to be a difference in the passage of proper time at different positions as described by a metric tensor of spacetime. The existence of gravitational time dilation was first confirmed directly by the Pound–Rebka experiment in 1959, and later refined by Gravity Probe A and other experiments.
Gravitational time dilation is closely related to gravitational redshift, in which the closer a body emitting light of constant frequency is to a gravitating body, the more its time is slowed by gravitational time dilation, and the lower (more "redshifted") would seem to be the frequency of the emitted light, as measured by a fixed observer.
Definition.
Clocks that are far from massive bodies (or at higher gravitational potentials) run more quickly, and clocks close to massive bodies (or at lower gravitational potentials) run more slowly. For example, considered over the total time-span of Earth (4.6 billion years), a clock set in a geostationary position at an altitude of 9,000 meters above sea level, such as perhaps at the top of Mount Everest (prominence 8,848m), would be about 39 hours ahead of a clock set at sea level. This is because gravitational time dilation is manifested in accelerated frames of reference or, by virtue of the equivalence principle, in the gravitational field of massive objects.
According to general relativity, inertial mass and gravitational mass are the same, and all accelerated reference frames (such as a uniformly rotating reference frame with its proper time dilation) are physically equivalent to a gravitational field of the same strength.
Consider a family of observers along a straight "vertical" line, each of whom experiences a distinct constant g-force directed along this line (e.g., a long accelerating spacecraft, a skyscraper, a shaft on a planet). Let formula_0 be the dependence of g-force on "height", a coordinate along the aforementioned line. The equation with respect to a base observer at formula_1 is
formula_2
where formula_3 is the "total" time dilation at a distant position formula_4, formula_0 is the dependence of g-force on "height" formula_4, formula_5 is the speed of light, and formula_6 denotes exponentiation by e.
For simplicity, in a Rindler family of observers in a flat spacetime, the dependence would be
formula_7
with constant formula_8, which yields
formula_9.
On the other hand, when formula_10 is nearly constant and formula_11 is much smaller than formula_12, the linear "weak field" approximation formula_13 can also be used.
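As a quick numerical illustration of the two expressions above, the sketch below (Python) evaluates both the exact exponential formula and the weak-field approximation for an assumed constant g-force of 9.81 m/s² and a hypothetical height of 10 km; both the value of g and the height are arbitrary choices for illustration.
```python
import math

c = 299_792_458.0   # speed of light, m/s
g = 9.81            # assumed constant g-force, m/s^2
h = 10_000.0        # hypothetical "height" above the base observer, m

# exact expression: T_d(h) = exp((1/c^2) * integral_0^h g dh'), with g constant here
T_exact = math.exp(g * h / c**2)

# weak-field approximation: T_d ~ 1 + g*h/c^2
T_weak = 1.0 + g * h / c**2

print(T_exact, T_weak, T_exact - T_weak)
```
For Earth-like accelerations and heights the two expressions agree to many significant figures, since gh/c² is of order 10⁻¹².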
See Ehrenfest paradox for application of the same formula to a rotating reference frame in flat spacetime.
Outside a non-rotating sphere.
A common equation used to determine gravitational time dilation is derived from the Schwarzschild metric, which describes spacetime in the vicinity of a non-rotating massive spherically symmetric object. The equation is
formula_14
where
To illustrate then, without accounting for the effects of rotation, proximity to Earth's gravitational well will cause a clock on the planet's surface to accumulate around 0.0219 fewer seconds over a period of one year than would a distant observer's clock. In comparison, a clock on the surface of the Sun will accumulate around 66.4 fewer seconds in one year.
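The figures just quoted can be reproduced from the Schwarzschild expression above. The sketch below (Python) uses commonly tabulated values for the gravitational parameter GM and the mean radius of the Earth and the Sun; the precise output depends slightly on which radius and year length are assumed, so the numbers are approximate.
```python
import math

c = 299_792_458.0            # speed of light, m/s
year = 365.25 * 24 * 3600.0  # one Julian year, s

def clock_deficit(GM, r):
    """Seconds per year lost by a clock at rest at radius r, relative to a distant observer."""
    return (1.0 - math.sqrt(1.0 - 2.0 * GM / (r * c**2))) * year

# commonly tabulated gravitational parameters (m^3/s^2) and mean radii (m)
print(clock_deficit(3.986004e14, 6.371e6))  # Earth's surface: ~0.0219 s per year
print(clock_deficit(1.32712e20, 6.957e8))   # Sun's surface: ~66-67 s per year, close to the quoted 66.4 s
```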
Circular orbits.
In the Schwarzschild metric, free-falling objects can be in circular orbits if the orbital radius is larger than formula_24 (the radius of the photon sphere). The formula for a clock at rest is given above; the formula below gives the general relativistic time dilation for a clock in a circular orbit:
formula_25
Both dilations are shown in the figure below.
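A rough numerical comparison of the at-rest and circular-orbit formulas can be made as below (Python); the orbital radius of about 26,600 km is an illustrative value roughly corresponding to a GPS satellite, not a figure taken from this article.
```python
import math

c = 299_792_458.0
GM_earth = 3.986004e14              # m^3/s^2
r_s = 2.0 * GM_earth / c**2         # Schwarzschild radius of Earth, ~8.9 mm

def at_rest(r):
    """t0/tf for a clock held at rest at radius r (Schwarzschild, non-rotating)."""
    return math.sqrt(1.0 - r_s / r)

def circular_orbit(r):
    """t0/tf for a clock in a circular orbit at radius r; valid only for r > 1.5 * r_s."""
    return math.sqrt(1.0 - 1.5 * r_s / r)

r_orbit = 2.66e7                # illustrative radius, m (roughly a GPS orbit)
print(at_rest(r_orbit))         # static clock at that radius
print(circular_orbit(r_orbit))  # orbiting clock runs slightly slower: the extra
                                # (1/2) * r_s / r term accounts for its orbital speed
```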
Experimental confirmation.
Gravitational time dilation has been experimentally measured using atomic clocks on airplanes, such as the Hafele–Keating experiment. The clocks aboard the airplanes were slightly faster than clocks on the ground. The effect is significant enough that the Global Positioning System's artificial satellites need to have their clocks corrected.
Additionally, time dilations due to height differences of less than one metre have been experimentally verified in the laboratory.
Gravitational time dilation in the form of gravitational redshift has also been confirmed by the Pound–Rebka experiment and observations of the spectra of the white dwarf Sirius B.
Gravitational time dilation has been measured in experiments with time signals sent to and from the Viking 1 Mars lander.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g(h)"
},
{
"math_id": 1,
"text": "h=0"
},
{
"math_id": 2,
"text": "T_d(h) = \\exp\\left[\\frac{1}{c^2}\\int_0^h g(h') dh'\\right]"
},
{
"math_id": 3,
"text": "T_d(h)"
},
{
"math_id": 4,
"text": "h"
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "\\exp"
},
{
"math_id": 7,
"text": "g(h) = c^2/(H+h)"
},
{
"math_id": 8,
"text": "H"
},
{
"math_id": 9,
"text": "T_d(h) = e^{\\ln (H+h) - \\ln H} = \\tfrac{H+h}H"
},
{
"math_id": 10,
"text": "g"
},
{
"math_id": 11,
"text": "gh"
},
{
"math_id": 12,
"text": "c^2"
},
{
"math_id": 13,
"text": "T_d = 1 + gh/c^2"
},
{
"math_id": 14,
"text": "t_0 = t_f \\sqrt{1 - \\frac{2GM}{rc^2}} = t_f \\sqrt{1 - \\frac{r_{\\rm s}}{r}} = t_f \\sqrt{1 - \\frac{v_e^2}{c^2}} = t_f \\sqrt{1 - \\beta_e^2} < t_f "
},
{
"math_id": 15,
"text": "t_0"
},
{
"math_id": 16,
"text": "t_f"
},
{
"math_id": 17,
"text": "G"
},
{
"math_id": 18,
"text": "M"
},
{
"math_id": 19,
"text": "r"
},
{
"math_id": 20,
"text": "r > r_{\\rm s}"
},
{
"math_id": 21,
"text": "r_{\\rm s} = 2GM/c^2"
},
{
"math_id": 22,
"text": "v_e = \\sqrt{ \\frac{2 G M}{r} }"
},
{
"math_id": 23,
"text": "\\beta_e = v_e/c"
},
{
"math_id": 24,
"text": "\\tfrac{3}{2} r_s"
},
{
"math_id": 25,
"text": "t_0 = t_f \\sqrt{1 - \\frac{3}{2} \\! \\cdot \\! \\frac{r_{\\rm s}}{r}}\\, ."
},
{
"math_id": 26,
"text": "T"
},
{
"math_id": 27,
"text": "g=(dt/T(x))^2-g_{space}"
},
{
"math_id": 28,
"text": "dxdt"
},
{
"math_id": 29,
"text": "g(v,dt)=v^0/T^2"
},
{
"math_id": 30,
"text": "v^0"
},
{
"math_id": 31,
"text": "v"
},
{
"math_id": 32,
"text": "g(v,dt)=1"
},
{
"math_id": 33,
"text": "v^0=T^2"
},
{
"math_id": 34,
"text": "v^0_{loc}=T"
}
] | https://en.wikipedia.org/wiki?curid=852089 |
8522128 | Doob's martingale inequality | In mathematics, Doob's martingale inequality, also known as Kolmogorov’s submartingale inequality, is a result in the study of stochastic processes. It gives a bound on the probability that a submartingale exceeds any given value over a given interval of time. As the name suggests, the result is usually given in the case that the process is a martingale, but the result is also valid for submartingales.
The inequality is due to the American mathematician Joseph L. Doob.
Statement of the inequality.
The setting of Doob's inequality is a submartingale relative to a filtration of the underlying probability space. The probability measure on the sample space of the martingale will be denoted by "P". The corresponding expected value of a random variable X, as defined by Lebesgue integration, will be denoted by E["X"].
Informally, Doob's inequality states that the expected value of the process at some final time controls the probability that a sample path will reach above any particular value beforehand. As the proof uses very direct reasoning, it does not require any restrictive assumptions on the underlying filtration or on the process itself, unlike for many other theorems about stochastic processes. In the continuous-time setting, right-continuity (or left-continuity) of the sample paths is required, but only for the sake of knowing that the supremal value of a sample path equals the supremum over an arbitrary countable dense subset of times.
Discrete time.
Let "X"1, ..., "X""n" be a discrete-time submartingale relative to a filtration formula_0 of the underlying probability space, which is to say:
formula_1
The submartingale inequality says that
formula_2
for any positive number C. The proof relies on the set-theoretic fact that the event defined by max("X""i") ≥ "C" may be decomposed as the disjoint union of the events "E""i" defined by ("X""i" ≥ "C" and "X""j" < "C" for all "j" < "i"). Then
formula_3
having made use of the submartingale property for the last inequality and the fact that formula_4 for the last equality. Summing this result as i ranges from 1 to n results in the conclusion
formula_5
where "E" denotes the union of the events "E""i", that is, the event that max("X""i") ≥ "C"; this is sharper than the stated result. By using the elementary fact that "X""n" ≤ max("X""n", 0), the given submartingale inequality follows.
In this proof, the submartingale property is used once, together with the definition of conditional expectation. The proof can also be phrased in the language of stochastic processes so as to become a corollary of the powerful theorem that a stopped submartingale is itself a submartingale. In this setup, the minimal index i appearing in the above proof is interpreted as a stopping time.
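The inequality is easy to check numerically. The sketch below (Python) is a minimal Monte Carlo illustration using the absolute value of a simple random walk as the submartingale; the walk length, the threshold C and the number of sample paths are arbitrary choices.
```python
import random

def doob_check(n_steps=50, C=15.0, n_paths=100_000, seed=0):
    rng = random.Random(seed)
    exceed = 0
    sum_final = 0.0
    for _ in range(n_paths):
        s, running_max = 0, 0.0
        for _ in range(n_steps):
            s += rng.choice((-1, 1))
            running_max = max(running_max, abs(s))  # X_i = |S_i| is a nonnegative submartingale
        exceed += running_max >= C
        sum_final += abs(s)                         # max(X_n, 0) = X_n here, since X_n >= 0
    lhs = exceed / n_paths                          # estimate of P[max_i X_i >= C]
    rhs = (sum_final / n_paths) / C                 # estimate of E[max(X_n, 0)] / C
    print(lhs, rhs)                                 # lhs should not exceed rhs

doob_check()
```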
Continuous time.
Now let "X""t" be a submartingale indexed by an interval of real numbers, relative to a filtration "F""t" of the underlying probability space, which is to say:
formula_6
for all "s" < "t". The submartingale inequality says that if the sample paths of the martingale are almost-surely right-continuous, then
formula_7
for any positive number C. This is a corollary of the above discrete-time result, obtained by writing
formula_8
in which "Q"1 ⊂ "Q"2 ⊂ ⋅⋅⋅ is any sequence of finite sets whose union is the set of all rational numbers. The first equality is a consequence of the right-continuity assumption, while the second equality is purely set-theoretic. The discrete-time inequality applies to say that
formula_9
for each i, and this passes to the limit to yield the submartingale inequality. This passage from discrete time to continuous time is very flexible, as it only required having a countable dense subset of [0, "T"], which can then automatically be built out of an increasing sequence of finite sets. As such, the submartingale inequality holds even for more general index sets, which are not required to be intervals or natural numbers.
Further inequalities.
There are further submartingale inequalities also due to Doob. Now let "X""t" be a martingale or a positive submartingale; if the index set is uncountable, then (as above) assume that the sample paths are right-continuous. In these scenarios, Jensen's inequality implies that |"X""t"|"p" is a submartingale for any number "p" ≥ 1, provided that these new random variables all have finite integral. The submartingale inequality is then applicable to say that
formula_10
for any positive number C. Here T is the "final time", i.e. the largest value of the index set. Furthermore one has
formula_11
if p is larger than one. This, sometimes known as "Doob's maximal inequality", is a direct result of combining the layer cake representation with the submartingale inequality and the Hölder inequality.
In addition to the above inequality, there holds
formula_12
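For the maximal inequality with "p" = 2, a quick numerical sanity check of the discrete-time analogue (with "T" the final index) is sketched below (Python); the Gaussian random walk is an arbitrary illustrative martingale and the constants are not tuned.
```python
import random

def doob_lp_check(p_exp=2, n_steps=100, n_paths=20_000, seed=1):
    rng = random.Random(seed)
    e_sup_p = 0.0
    e_final_p = 0.0
    for _ in range(n_paths):
        s, sup_abs = 0.0, 0.0
        for _ in range(n_steps):
            s += rng.gauss(0.0, 1.0)         # partial sums of i.i.d. mean-zero steps form a martingale
            sup_abs = max(sup_abs, abs(s))
        e_sup_p += sup_abs ** p_exp
        e_final_p += abs(s) ** p_exp
    e_sup_p /= n_paths
    e_final_p /= n_paths
    bound = (p_exp / (p_exp - 1)) ** p_exp * e_final_p  # (p/(p-1))^p * E[|X_T|^p], i.e. 4*E[X_T^2] for p = 2
    print(e_final_p, e_sup_p, bound)         # should come out (roughly) increasing left to right

doob_lp_check()
```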
Related inequalities.
Doob's inequality for discrete-time martingales implies Kolmogorov's inequality: if "X"1, "X"2, ... is a sequence of real-valued independent random variables, each with mean zero, it is clear that
formula_13
so S"n" = "X"1 + ... + "Xn" is a martingale. Note that Jensen's inequality implies that |S"n"| is a nonnegative submartingale if S"n" is a martingale. Hence, taking "p" = 2 in Doob's martingale inequality,
formula_14
which is precisely the statement of Kolmogorov's inequality.
Application: Brownian motion.
Let "B" denote canonical one-dimensional Brownian motion. Then
formula_15
The proof is as follows: since the exponential function is monotonically increasing, for any non-negative λ,
formula_16
By Doob's inequality, and since the exponential of Brownian motion is a positive submartingale,
formula_17
Since the left-hand side does not depend on "λ", choose "λ" to minimize the right-hand side: "λ" = "C"/"T" gives the desired inequality.
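A Monte Carlo sketch of this bound is given below (Python). The time discretization slightly underestimates the true supremum, and the parameters T, C, step count and path count are arbitrary; the exact probability from the reflection principle, erfc(C/√(2T)), is printed for comparison.
```python
import math, random

def brownian_sup_check(T=1.0, C=2.0, n_steps=200, n_paths=20_000, seed=2):
    rng = random.Random(seed)
    dt = T / n_steps
    hits = 0
    for _ in range(n_paths):
        b, running_max = 0.0, 0.0
        for _ in range(n_steps):
            b += rng.gauss(0.0, math.sqrt(dt))    # Euler increments of Brownian motion
            running_max = max(running_max, b)
        hits += running_max >= C
    estimate = hits / n_paths                     # discretized estimate of P[sup B_t >= C]
    exact = math.erfc(C / math.sqrt(2.0 * T))     # reflection principle: 2*(1 - Phi(C/sqrt(T)))
    bound = math.exp(-C * C / (2.0 * T))          # the exponential bound derived above
    print(estimate, exact, bound)

brownian_sup_check()
```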
References.
<templatestyles src="Reflist/styles.css" />
Sources
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{F}_1,\\ldots,\\mathcal{F}_n"
},
{
"math_id": 1,
"text": " X_i \\leq \\operatorname E[X_{i+1} \\mid \\mathcal{F}_i]."
},
{
"math_id": 2,
"text": " P\\left[ \\max_{1 \\leq i \\leq n} X_i \\geq C \\right] \\leq \\frac{\\operatorname E[\\textrm{max}(X_n,0)]}{C}"
},
{
"math_id": 3,
"text": "CP(E_i) = \\int_{E_i}C\\,dP \\leq \\int_{E_i}X_i\\,dP\\leq\\int_{E_i}\\text{E}[X_n\\mid\\mathcal{F}_i]\\,dP=\\int_{E_i}X_n\\,dP,"
},
{
"math_id": 4,
"text": "E_i\\in\\mathcal{F}_i"
},
{
"math_id": 5,
"text": "CP(E)\\leq\\int_{E}X_n\\,dP,"
},
{
"math_id": 6,
"text": " X_s \\leq \\operatorname E[X_t \\mid \\mathcal{F}_s]."
},
{
"math_id": 7,
"text": " P\\left[ \\sup_{0 \\leq t \\leq T} X_t \\geq C \\right] \\leq \\frac{\\operatorname E[\\textrm{max}(X_T,0)]}{C}"
},
{
"math_id": 8,
"text": "\\sup_{0\\leq t\\leq T}X_t=\\sup\\{X_t:t\\in[0,T]\\cap\\mathbb{Q}\\}=\\lim_{i\\to\\infty}\\sup\\{X_t:t\\in[0,T]\\cap Q_i\\}"
},
{
"math_id": 9,
"text": "P\\left[ \\sup_{t\\in [0,T]\\cap Q_i} X_t \\geq C \\right] \\leq \\frac{\\operatorname E[\\textrm{max}(X_T,0)]}{C}"
},
{
"math_id": 10,
"text": "\\text{P}[\\sup_t|X_t|\\geq C]\\leq \\frac{\\text{E}[|X_T|^p]}{C^p}."
},
{
"math_id": 11,
"text": "\\text{E}[| X_T |^p] \\leq \\text{E}\\left[\\sup_{0 \\leq s \\leq T} |X_s|^p\\right] \\leq \\left(\\frac{p}{p-1}\\right)^p\\text{E}[|X_T|^p]"
},
{
"math_id": 12,
"text": "\\text{E}\\left| \\sup_{0 \\leq s \\leq T} X_{s} \\right| \\leq \\frac{e}{e - 1} \\left( 1 + \\text{E}[\\max\\{ |X_T|\\log |X_T|,0\\}] \\right)"
},
{
"math_id": 13,
"text": "\\begin{align}\n\\operatorname E\\left[ X_1 + \\cdots + X_n + X_{n + 1} \\mid X_1, \\ldots, X_n \\right] &= X_1 + \\cdots + X_n + \\operatorname E\\left[ X_{n + 1} \\mid X_1, \\ldots, X_n \\right] \\\\\n&= X_1 + \\cdots + X_n,\n\\end{align}"
},
{
"math_id": 14,
"text": " P\\left[ \\max_{1 \\leq i \\leq n} \\left| S_i \\right| \\geq \\lambda \\right] \\leq \\frac{\\operatorname E\\left[ S_n^2 \\right]}{\\lambda^2},"
},
{
"math_id": 15,
"text": " P\\left[ \\sup_{0 \\leq t \\leq T} B_t \\geq C \\right] \\leq \\exp \\left( - \\frac{C^2}{2T} \\right)."
},
{
"math_id": 16,
"text": "\\left\\{ \\sup_{0 \\leq t \\leq T} B_{t} \\geq C \\right\\} = \\left\\{ \\sup_{0 \\leq t \\leq T} \\exp ( \\lambda B_{t} ) \\geq \\exp ( \\lambda C ) \\right\\}."
},
{
"math_id": 17,
"text": "\\begin{align}\nP\\left[ \\sup_{0 \\leq t \\leq T} B_t \\geq C \\right] & = P\\left[ \\sup_{0 \\leq t \\leq T} \\exp ( \\lambda B_t ) \\geq \\exp ( \\lambda C ) \\right] \\\\[8pt]\n& \\leq \\frac{\\operatorname E[ \\exp (\\lambda B_T)]}{\\exp (\\lambda C)} \\\\[8pt]\n& = \\exp \\left( \\tfrac{1}{2} \\lambda^2 T - \\lambda C \\right) && \\operatorname E\\left[ \\exp (\\lambda B_t) \\right] = \\exp \\left( \\tfrac{1}{2} \\lambda^2 t \\right)\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=8522128 |
8524 | Deuterium | Isotope of hydrogen with one neutron
Deuterium (hydrogen-2, symbol 2H or D, also known as heavy hydrogen) is one of two stable isotopes of hydrogen; the other is protium, or hydrogen-1, 1H. The deuterium nucleus, called a deuteron, contains one proton and one neutron, whereas the far more common 1H has no neutrons. Deuterium has a natural abundance in Earth's oceans of about one atom of deuterium in every 6,420 atoms of hydrogen. Thus deuterium accounts for about 0.0156% by number (0.0312% by mass) of all hydrogen in the ocean: tonnes of deuterium – mainly as HOD (or 1HO2H or 1H2HO) and only rarely as D2O (or 2H2O) – in tonnes of water. The abundance of 2H changes slightly from one kind of natural water to another (see Vienna Standard Mean Ocean Water).
The name "deuterium" comes from Greek "", meaning "second". American chemist Harold Urey discovered deuterium in 1931. Urey and others produced samples of heavy water in which the 2H had been highly concentrated. The discovery of deuterium won Urey a Nobel Prize in 1934.
Deuterium is destroyed in the interiors of stars faster than it is produced. Other natural processes are thought to produce only an insignificant amount of deuterium. Nearly all deuterium found in nature was produced in the Big Bang 13.8 billion years ago, as the basic or primordial ratio of 2H to 1H (≈26 atoms of deuterium per 10^6 hydrogen atoms) has its origin from that time. This is the ratio found in the gas giant planets, such as Jupiter. The analysis of deuterium–protium ratios (2H/1H) in comets found results very similar to the mean ratio in Earth's oceans (156 atoms of deuterium per 10^6 hydrogen atoms). This reinforces theories that much of Earth's ocean water is of cometary origin. The 2H/1H ratio of comet 67P/Churyumov–Gerasimenko, as measured by the "Rosetta" space probe, is about three times that of Earth water. This figure is the highest yet measured in a comet. 2H/1H ratios thus continue to be an active topic of research in both astronomy and climatology.
Differences from common hydrogen (protium).
Chemical symbol.
Deuterium is often represented by the chemical symbol D. Since it is an isotope of hydrogen with mass number 2, it is also represented by 2H. IUPAC allows both D and 2H, though 2H is preferred. A distinct chemical symbol is used for convenience because of the isotope's common use in various scientific processes. Also, its large mass difference with protium (1H) confers non-negligible chemical differences with 1H compounds. Deuterium has a mass of , about twice the mean hydrogen atomic weight of , or twice protium's mass of . The isotope weight ratios within other chemical elements are largely insignificant in this regard.
Spectroscopy.
In quantum mechanics, the energy levels of electrons in atoms depend on the reduced mass of the system of electron and nucleus. For the hydrogen atom, the role of reduced mass is most simply seen in the Bohr model of the atom, where the reduced mass appears in a simple calculation of the Rydberg constant and Rydberg equation, but the reduced mass also appears in the Schrödinger equation, and the Dirac equation for calculating atomic energy levels.
The reduced mass of the system in these equations is close to the mass of a single electron, but differs from it by a small amount about equal to the ratio of mass of the electron to the nucleus. For normal hydrogen, this amount is about , or 1.000545, and for deuterium it is even smaller: , or 1.0002725. The energies of electronic spectra lines for 2H and 1H therefore differ by the ratio of these two numbers, which is 1.000272. The wavelengths of all deuterium spectroscopic lines are shorter than the corresponding lines of light hydrogen, by 0.0272%. In astronomical observation, this corresponds to a blue Doppler shift of 0.0272% of the speed of light, or 81.6 km/s.
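The 0.0272% figure follows directly from the two reduced-mass factors quoted above. The sketch below (Python) uses the standard proton-to-electron and deuteron-to-electron mass ratios (≈1836.15 and ≈3670.48), which are well-known constants rather than values taken from this article.
```python
# reduced mass of the electron-nucleus system, in units of the electron mass:
# mu/m_e = R / (1 + R), where R is the nucleus-to-electron mass ratio
R_p = 1836.15267   # proton-to-electron mass ratio (standard value)
R_d = 3670.48297   # deuteron-to-electron mass ratio (standard value)

mu_H = R_p / (1.0 + R_p)
mu_D = R_d / (1.0 + R_d)

ratio = mu_D / mu_H
print(ratio)                          # ~1.000272
print((ratio - 1.0) * 100.0)          # ~0.0272 % relative shift of line energies/wavelengths
print((ratio - 1.0) * 299_792.458)    # equivalent Doppler velocity, ~81.6 km/s
```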
The differences are much more pronounced in vibrational spectroscopy such as infrared spectroscopy and Raman spectroscopy, and in rotational spectra such as microwave spectroscopy because the reduced mass of the deuterium is markedly higher than that of protium. In nuclear magnetic resonance spectroscopy, deuterium has a very different NMR frequency (e.g. 61 MHz when protium is at 400 MHz) and is much less sensitive. Deuterated solvents are usually used in protium NMR to prevent the solvent from overlapping with the signal, though deuterium NMR on its own right is also possible.
Big Bang nucleosynthesis.
Deuterium is thought to have played an important role in setting the number and ratios of the elements that were formed in the Big Bang. Combining thermodynamics and the changes brought about by cosmic expansion, one can calculate the fraction of protons and neutrons based on the temperature at the point that the universe cooled enough to allow formation of nuclei. This calculation indicates seven protons for every neutron at the beginning of nucleogenesis, a ratio that would remain stable even after nucleogenesis was over. This fraction was in favor of protons initially, primarily because the lower mass of the proton favored their production. As the Universe expanded, it cooled. Free neutrons and protons are less stable than helium nuclei, and the protons and neutrons had a strong energetic reason to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium.
Through much of the few minutes after the Big Bang during which nucleosynthesis could have occurred, the temperature was high enough that the mean energy per particle was greater than the binding energy of weakly bound deuterium; therefore any deuterium that was formed was immediately destroyed. This situation is known as the deuterium bottleneck. The bottleneck delayed formation of any helium-4 until the Universe became cool enough to form deuterium (at about a temperature equivalent to 100 keV). At this point, there was a sudden burst of element formation (first deuterium, which immediately fused to helium). However, very shortly thereafter, at twenty minutes after the Big Bang, the Universe became too cool for any further nuclear fusion and nucleosynthesis to occur. At this point, the elemental abundances were nearly fixed, with the only change as some of the radioactive products of Big Bang nucleosynthesis (such as tritium) decay. The deuterium bottleneck in the formation of helium, together with the lack of stable ways for helium to combine with hydrogen or with itself (no stable nucleus has a mass number of 5 or 8) meant that an insignificant amount of carbon, or any elements heavier than carbon, formed in the Big Bang. These elements thus required formation in stars. At the same time, the failure of much nucleogenesis during the Big Bang ensured that there would be plenty of hydrogen in the later universe available to form long-lived stars, such as the Sun.
Abundance.
Deuterium occurs in trace amounts naturally as deuterium gas (2H2 or D2), but most deuterium atoms in the Universe are bonded with 1H to form a gas called hydrogen deuteride (HD or 1H2H). Similarly, natural water contains deuterated molecules, almost all as semiheavy water HDO with only one deuterium.
The existence of deuterium on Earth, elsewhere in the Solar System (as confirmed by planetary probes), and in the spectra of stars, is also an important datum in cosmology. Gamma radiation from ordinary nuclear fusion dissociates deuterium into protons and neutrons, and there are no known natural processes other than the Big Bang nucleosynthesis that might have produced deuterium at anything close to its observed natural abundance. Deuterium is produced by the rare cluster decay, and occasional absorption of naturally occurring neutrons by light hydrogen, but these are trivial sources. There is thought to be little deuterium in the interior of the Sun and other stars, as at these temperatures the nuclear fusion reactions that consume deuterium happen much faster than the proton–proton reaction that creates deuterium. However, deuterium persists in the outer solar atmosphere at roughly the same concentration as in Jupiter, and this has probably been unchanged since the origin of the Solar System. The natural abundance of 2H seems to be a very similar fraction of hydrogen, wherever hydrogen is found, unless there are obvious processes at work that concentrate it.
The existence of deuterium at a low but constant primordial fraction in all hydrogen is another one of the arguments in favor of the Big Bang theory over the Steady State theory of the Universe. The observed ratios of hydrogen to helium to deuterium in the universe are difficult to explain except with a Big Bang model. It is estimated that the abundances of deuterium have not evolved significantly since their production about 13.8 billion years ago. Measurements of Milky Way galactic deuterium from ultraviolet spectral analysis show a ratio of as much as 23 atoms of deuterium per million hydrogen atoms in undisturbed gas clouds, which is only 15% below the WMAP estimated primordial ratio of about 27 atoms per million from the Big Bang. This has been interpreted to mean that less deuterium has been destroyed in star formation in the Milky Way galaxy than expected, or perhaps deuterium has been replenished by a large in-fall of primordial hydrogen from outside the galaxy. In space a few hundred light years from the Sun, deuterium abundance is only 15 atoms per million, but this value is presumably influenced by differential adsorption of deuterium onto carbon dust grains in interstellar space.
The abundance of deuterium in the atmosphere of Jupiter has been directly measured by the "Galileo" space probe as 26 atoms per million hydrogen atoms. ISO-SWS observations find 22 atoms per million hydrogen atoms in Jupiter, and this abundance is thought to represent close to the primordial Solar System ratio. This is about 17% of the terrestrial ratio of 156 deuterium atoms per million hydrogen atoms.
Cometary bodies such as Comet Hale-Bopp and Halley's Comet have been measured to contain more deuterium (about 200 atoms per million hydrogens), ratios which are enriched with respect to the presumed protosolar nebula ratio, probably due to heating, and which are similar to the ratios found in Earth seawater. The recent measurement of deuterium amounts of 161 atoms per million hydrogen in Comet 103P/Hartley (a former Kuiper belt object), a ratio almost exactly that in Earth's oceans (155.76 ± 0.1, but in fact from 153 to 156 ppm), emphasizes the theory that Earth's surface water may be largely comet-derived. Most recently the 2H/1H ratio of 67P/Churyumov–Gerasimenko as measured by "Rosetta" is about three times that of Earth water. This has caused renewed interest in suggestions that Earth's water may be partly of asteroidal origin.
Deuterium has also been observed to be concentrated over the mean solar abundance in other terrestrial planets, in particular Mars and Venus.
Production.
Deuterium is produced for industrial, scientific and military purposes, by starting with ordinary water—a small fraction of which is naturally occurring heavy water—and then separating out the heavy water by the Girdler sulfide process, distillation, or other methods.
In theory, deuterium for heavy water could be created in a nuclear reactor, but separation from ordinary water is the cheapest bulk production process.
The world's leading supplier of deuterium was Atomic Energy of Canada Limited until 1997, when the last heavy water plant was shut down. Canada uses heavy water as a neutron moderator for the operation of the CANDU reactor design.
Another major producer of heavy water is India. All but one of India's atomic energy plants are pressurized heavy water plants, which use natural (i.e., not enriched) uranium. India has eight heavy water plants, of which seven are in operation. Six plants, of which five are in operation, are based on D–H exchange in ammonia gas. The other two plants extract deuterium from natural water in a process that uses hydrogen sulfide gas at high pressure.
While India is self-sufficient in heavy water for its own use, India also exports reactor-grade heavy water.
Properties.
Data for molecular deuterium.
Formula: D2 or 2H2
Data at approximately 18 K for 2H2 (triple point):
Physical properties.
Compared to hydrogen in its natural composition on Earth, pure deuterium (2H2) has a higher melting point (18.72 K vs. 13.99 K), a higher boiling point (23.64 vs. 20.27 K), a higher critical temperature (38.3 vs. 32.94 K) and a higher critical pressure (1.6496 vs. 1.2858 MPa).
The physical properties of deuterium compounds can exhibit significant kinetic isotope effects and other physical and chemical property differences from the protium analogs. 2H2O, for example, is more viscous than normal . There are differences in bond energy and length for compounds of heavy hydrogen isotopes compared to protium, which are larger than the isotopic differences in any other element. Bonds involving deuterium and tritium are somewhat stronger than the corresponding bonds in protium, and these differences are enough to cause significant changes in biological reactions. Pharmaceutical firms are interested in the fact that 2H is harder to remove from carbon than 1H.
Deuterium can replace 1H in water molecules to form heavy water (2H2O), which is about 10.6% denser than normal water (so that ice made from it sinks in normal water). Heavy water is slightly toxic in eukaryotic animals, with 25% substitution of the body water causing cell division problems and sterility, and 50% substitution causing death by cytotoxic syndrome (bone marrow failure and gastrointestinal lining failure). Prokaryotic organisms, however, can survive and grow in pure heavy water, though they develop slowly. Despite this toxicity, consumption of heavy water under normal circumstances does not pose a health threat to humans. It is estimated that a person might drink of heavy water without serious consequences. Small doses of heavy water (a few grams in humans, containing an amount of deuterium comparable to that normally present in the body) are routinely used as harmless metabolic tracers in humans and animals.
Quantum properties.
The deuteron has spin +1 ("triplet state") and is thus a boson. The NMR frequency of deuterium is significantly different from normal hydrogen. Infrared spectroscopy also easily differentiates many deuterated compounds, due to the large difference in IR absorption frequency seen in the vibration of a chemical bond containing deuterium, versus light hydrogen. The two stable isotopes of hydrogen can also be distinguished by using mass spectrometry.
The triplet deuteron nucleon is barely bound at "E"B =, and none of the higher energy states are bound. The singlet deuteron is a virtual state, with a negative binding energy of . There is no such stable particle, but this virtual particle transiently exists during neutron–proton inelastic scattering, accounting for the unusually large neutron scattering cross-section of the proton.
Nuclear properties (deuteron).
Deuteron mass and radius.
The nucleus of deuterium is called a deuteron. It has a mass of (just over ).
The charge radius of the deuteron is .
Like the proton radius, measurements using muonic deuterium produce a smaller result: .
Spin and energy.
Deuterium is one of only five stable nuclides with an odd number of protons and an odd number of neutrons. (2H, 6Li, 10B, 14N, 180mTa; the long-lived radionuclides 40K, 50V, 138La, 176Lu also occur naturally.) Most odd–odd nuclei are unstable to beta decay, because the decay products are even–even, and therefore more strongly bound, due to nuclear pairing effects. Deuterium, however, benefits from having its proton and neutron coupled to a spin-1 state, which gives a stronger nuclear attraction; the corresponding spin-1 state does not exist in the two-neutron or two-proton system, due to the Pauli exclusion principle which would require one or the other identical particle with the same spin to have some other different quantum number, such as orbital angular momentum. But orbital angular momentum of either particle gives a lower binding energy for the system, primarily due to increasing distance of the particles in the steep gradient of the nuclear force. In both cases, this causes the diproton and dineutron to be unstable.
The proton and neutron in deuterium can be dissociated through neutral current interactions with neutrinos. The cross section for this interaction is comparatively large, and deuterium was successfully used as a neutrino target in the Sudbury Neutrino Observatory experiment.
Diatomic deuterium (D2 or 2H2) has ortho and para nuclear spin isomers like diatomic hydrogen, but with differences in the number and population of spin states and rotational levels, which occur because the deuteron is a boson with nuclear spin equal to one.
Isospin singlet state of the deuteron.
Due to the similarity in mass and nuclear properties between the proton and neutron, they are sometimes considered as two symmetric types of the same object, a nucleon. While only the proton has an electric charge, this is often negligible due to the weakness of the electromagnetic interaction relative to the strong nuclear interaction. The symmetry relating the proton and neutron is known as isospin and denoted "I" (or sometimes "T").
Isospin is an SU(2) symmetry, like ordinary spin, so is completely analogous to it. The proton and neutron, each of which have isospin-1/2, form an isospin doublet (analogous to a spin doublet), with a "down" state (↓) being a neutron and an "up" state (↑) being a proton. A pair of nucleons can either be in an antisymmetric state of isospin called singlet, or in a symmetric state called triplet. In terms of the "down" state and "up" state, the singlet is
formula_0, which can also be written :formula_1
This is a nucleus with one proton and one neutron, i.e. a deuterium nucleus. The triplet is
formula_2
and thus consists of three types of nuclei, which are supposed to be symmetric: a deuterium nucleus (actually a highly excited state of it), a nucleus with two protons, and a nucleus with two neutrons. These states are not stable.
Approximated wavefunction of the deuteron.
The deuteron wavefunction must be antisymmetric if the isospin representation is used (since a proton and a neutron are not identical particles, the wavefunction need not be antisymmetric in general). Apart from their isospin, the two nucleons also have spin and spatial distributions of their wavefunction. The latter is symmetric if the deuteron is symmetric under parity (i.e. has an "even" or "positive" parity), and antisymmetric if the deuteron is antisymmetric under parity (i.e. has an "odd" or "negative" parity). The parity is fully determined by the total orbital angular momentum of the two nucleons: if it is even then the parity is even (positive), and if it is odd then the parity is odd (negative).
The deuteron, being an isospin singlet, is antisymmetric under nucleons exchange due to isospin, and therefore must be symmetric under the double exchange of their spin and location. Therefore, it can be in either of the following two different states:
In the first case the deuteron is a spin triplet, so that its total spin "s" is 1. It also has an even parity and therefore even orbital angular momentum "l"; the lower its orbital angular momentum, the lower its energy. Therefore, the lowest possible energy state has "s" = 1, "l" = 0.
In the second case the deuteron is a spin singlet, so that its total spin "s" is 0. It also has an odd parity and therefore odd orbital angular momentum "l". Therefore, the lowest possible energy state has "s" = 0, "l" = 1.
Since "s" = 1 gives a stronger nuclear attraction, the deuterium ground state is in the "s" = 1, "l" = 0 state.
The same considerations lead to the possible states of an isospin triplet having "s" = 0, "l" = even or "s" = 1, "l" = odd. Thus the state of lowest energy has "s" = 1, "l" = 1, higher than that of the isospin singlet.
The analysis just given is in fact only approximate, both because isospin is not an exact symmetry, and more importantly because the strong nuclear interaction between the two nucleons is related to angular momentum in spin–orbit interaction that mixes different "s" and "l" states. That is, "s" and "l" are not constant in time (they do not commute with the Hamiltonian), and over time a state such as "s" = 1, "l" = 0 may become a state of "s" = 1, "l" = 2. Parity is still constant in time so these do not mix with odd "l" states (such as "s" = 0, "l" = 1). Therefore, the quantum state of the deuterium is a superposition (a linear combination) of the "s" = 1, "l" = 0 state and the "s" = 1, "l" = 2 state, even though the first component is much bigger. Since the total angular momentum "j" is also a good quantum number (it is a constant in time), both components must have the same "j", and therefore "j" = 1. This is the total spin of the deuterium nucleus.
To summarize, the deuterium nucleus is antisymmetric in terms of isospin, and has spin 1 and even (+1) parity. The relative angular momentum of its nucleons "l" is not well defined, and the deuteron is a superposition of mostly "l" = 0 with some "l" = 2.
Magnetic and electric multipoles.
In order to find theoretically the deuterium magnetic dipole moment "μ", one uses the formula for a nuclear magnetic moment
formula_3
with
formula_4
"g"("l") and "g"("s") are "g"-factors of the nucleons.
Since the proton and neutron have different values for "g"("l") and "g"("s"), one must separate their contributions. Each gets half of the deuterium orbital angular momentum formula_5 and spin formula_6. One arrives at
formula_7
where subscripts p and n stand for the proton and neutron, and "g"(l)n = 0.
By using the same identities as here and using the value g(l)p = 1, one gets the following result, in units of the nuclear magneton "μ"N
formula_8
For the "s" = 1, "l" = 0 state ("j" = 1), we obtain
formula_9
For the "s" = 1, "l" = 2 state ("j" = 1), we obtain
formula_10
The measured value of the deuterium magnetic dipole moment is 97.5% of the value obtained by simply adding moments of the proton and neutron. This suggests that the state of the deuterium is indeed to a good approximation the "s" = 1, "l" = 0 state, which occurs with both nucleons spinning in the same direction, but their magnetic moments subtracting because of the neutron's negative moment.
But the slightly lower experimental number than that which results from simple addition of proton and (negative) neutron moments shows that deuterium is actually a linear combination of mostly the "s" = 1, "l" = 0 state with a slight admixture of the "s" = 1, "l" = 2 state.
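The two values just derived, and the size of the "l" = 2 admixture they imply, can be reproduced with the standard spin g-factors of the free proton and neutron. The sketch below (Python) is a simple two-state estimate that ignores relativistic and exchange-current corrections; the measured moment of about 0.857 nuclear magnetons used for the comparison is the standard tabulated value, not a number quoted in this article.
```python
g_s_p = 5.5857        # spin g-factor of the free proton (standard value)
g_s_n = -3.8261       # spin g-factor of the free neutron (standard value)
mu_measured = 0.8574  # measured deuteron moment in nuclear magnetons (standard value)

mu_s = 0.5 * (g_s_p + g_s_n)            # s = 1, l = 0 state: ~0.880
mu_d = -0.25 * (g_s_p + g_s_n) + 0.75   # s = 1, l = 2 state: ~0.310

# treating the ground state as a probability mixture (1 - x)|l=0> + x|l=2>
x = (mu_s - mu_measured) / (mu_s - mu_d)
print(mu_s, mu_d)   # ~0.8798 and ~0.3101, matching the values quoted above
print(x)            # ~0.04, i.e. roughly a 4% l = 2 admixture in this simple estimate
```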
The electric dipole is zero as usual.
The measured electric quadrupole of the deuterium is . While the order of magnitude is reasonable, since the deuteron radius is of order of 1 femtometer (see below) and its electric charge is e, the above model does not suffice for its computation. More specifically, the electric quadrupole does not get a contribution from the "l" = 0 state (which is the dominant one) and does get a contribution from a term mixing the "l" = 0 and the "l" = 2 states, because the electric quadrupole operator does not commute with angular momentum.
The latter contribution is dominant in the absence of a pure "l" = 0 contribution, but cannot be calculated without knowing the exact spatial form of the nucleons wavefunction inside the deuterium.
Higher magnetic and electric multipole moments cannot be calculated by the above model, for similar reasons.
Applications.
Nuclear reactors.
Deuterium is used in heavy water moderated fission reactors, usually as liquid 2H2O, to slow neutrons without the high neutron absorption of ordinary hydrogen. This is a common commercial use for larger amounts of deuterium.
In research reactors, liquid 2H2 is used in cold sources to moderate neutrons to very low energies and wavelengths appropriate for scattering experiments.
Experimentally, deuterium is the most common nuclide used in fusion reactor designs, especially in combination with tritium, because of the large reaction rate (or nuclear cross section) and high energy yield of the deuterium–tritium (DT) reaction. There is an even higher-yield 2H–3He fusion reaction, though the breakeven point of 2H–3He is higher than that of most other fusion reactions; together with the scarcity of 3He, this makes it implausible as a practical power source, at least until DT and deuterium–deuterium (DD) fusion have been performed on a commercial scale. Commercial nuclear fusion is not yet an accomplished technology.
NMR spectroscopy.
Deuterium is most commonly used in hydrogen nuclear magnetic resonance spectroscopy (proton NMR) in the following way. NMR ordinarily requires compounds of interest to be analyzed as dissolved in solution. Because of deuterium's nuclear spin properties which differ from the light hydrogen usually present in organic molecules, NMR spectra of hydrogen/protium are highly differentiable from those of deuterium, and in practice deuterium is not "seen" by an NMR instrument tuned for 1H. Deuterated solvents (including heavy water, but also compounds like deuterated chloroform, CDCl3 or C2HCl3) are therefore routinely used in NMR spectroscopy, in order to allow only the light-hydrogen spectra of the compound of interest to be measured, without solvent-signal interference.
Nuclear magnetic resonance spectroscopy can also be used to obtain information about the deuteron's environment in isotopically labelled samples (deuterium NMR). For example, the configuration of hydrocarbon chains in lipid bilayers can be quantified using solid state deuterium NMR with deuterium-labelled lipid molecules.
Deuterium NMR spectra are especially informative in the solid state because of its relatively small quadrupole moment in comparison with those of bigger quadrupolar nuclei such as chlorine-35, for example.
Mass spectrometry.
Deuterated (i.e. where all or some hydrogen atoms are replaced with deuterium) compounds are often used as internal standards in mass spectrometry. Like other isotopically labeled species, such standards improve accuracy, while often at a much lower cost than other isotopically labeled standards. Deuterated molecules are usually prepared via hydrogen isotope exchange reactions.
Tracing.
In chemistry, biochemistry and environmental sciences, deuterium is used as a non-radioactive, stable isotopic tracer, for example, in the doubly labeled water test. In chemical reactions and metabolic pathways, deuterium behaves somewhat similarly to ordinary hydrogen (with a few chemical differences, as noted). It can be distinguished from normal hydrogen most easily by its mass, using mass spectrometry or infrared spectrometry. Deuterium can be detected by femtosecond infrared spectroscopy, since the mass difference drastically affects the frequency of molecular vibrations; 2H–carbon bond vibrations are found in spectral regions free of other signals.
Measurements of small variations in the natural abundances of deuterium, along with those of the stable heavy oxygen isotopes 17O and 18O, are of importance in hydrology, to trace the geographic origin of Earth's waters. The heavy isotopes of hydrogen and oxygen in rainwater (meteoric water) are enriched as a function of the environmental temperature of the region in which the precipitation falls (and thus enrichment is related to latitude). The relative enrichment of the heavy isotopes in rainwater (as referenced to mean ocean water), when plotted against temperature falls predictably along a line called the global meteoric water line (GMWL). This plot allows samples of precipitation-originated water to be identified along with general information about the climate in which it originated. Evaporative and other processes in bodies of water, and also ground water processes, also differentially alter the ratios of heavy hydrogen and oxygen isotopes in fresh and salt waters, in characteristic and often regionally distinctive ways. The ratio of concentration of 2H to 1H is usually indicated with a delta as δ2H and the geographic patterns of these values are plotted in maps termed as isoscapes. Stable isotopes are incorporated into plants and animals and an analysis of the ratios in a migrant bird or insect can help suggest a rough guide to their origins.
Contrast properties.
Neutron scattering techniques particularly profit from availability of deuterated samples: The 1H and 2H cross sections are very distinct and different in sign, which allows contrast variation in such experiments. Further, a nuisance problem of normal hydrogen is its large incoherent neutron cross section, which is nil for 2H. The substitution of deuterium for normal hydrogen thus reduces scattering noise.
Hydrogen is an important and major component in all materials of organic chemistry and life science, but it barely interacts with X-rays. As hydrogen atoms (including deuterium) interact strongly with neutrons, neutron scattering techniques, together with a modern deuteration facility, fill a niche in many studies of macromolecules in biology and many other areas.
Nuclear weapons.
See below. Most stars, including the Sun, generate energy over most of their lives by fusing hydrogen into heavier elements; yet such fusion of light hydrogen (protium) has never been successful in the conditions attainable on Earth. Thus, all artificial fusion, including the hydrogen fusion in hydrogen bombs, requires heavy hydrogen (deuterium, tritium, or both).
Drugs.
A deuterated drug is a small molecule medicinal product in which one or more of the hydrogen atoms in the drug molecule have been replaced by deuterium. Because of the kinetic isotope effect, deuterium-containing drugs may have significantly lower rates of metabolism, and hence a longer half-life. In 2017, deutetrabenazine became the first deuterated drug to receive FDA approval.
Reinforced essential nutrients.
Deuterium can be used to reinforce specific oxidation-vulnerable C–H bonds within essential or conditionally essential nutrients, such as certain amino acids, or polyunsaturated fatty acids (PUFA), making them more resistant to oxidative damage. Deuterated polyunsaturated fatty acids, such as linoleic acid, slow down the chain reaction of lipid peroxidation that damage living cells. Deuterated ethyl ester of linoleic acid (RT001), developed by Retrotope, is in a compassionate use trial in infantile neuroaxonal dystrophy and has successfully completed a Phase I/II trial in Friedreich's ataxia.
Thermostabilization.
Live vaccines, such as oral polio vaccine, can be stabilized by deuterium, either alone or in combination with other stabilizers such as MgCl2.
Slowing circadian oscillations.
Deuterium has been shown to lengthen the period of oscillation of the circadian clock when dosed in rats, hamsters, and Gonyaulax dinoflagellates. In rats, chronic intake of 25% 2H2O disrupts circadian rhythm by lengthening the circadian period of suprachiasmatic nucleus-dependent rhythms in the brain's hypothalamus. Experiments in hamsters also support the theory that deuterium acts directly on the suprachiasmatic nucleus to lengthen the free-running circadian period.
History.
Suspicion of lighter element isotopes.
The existence of nonradioactive isotopes of lighter elements had been suspected in studies of neon as early as 1913, and proven by mass spectrometry of light elements in 1920. At that time the neutron had not yet been discovered, and the prevailing theory was that isotopes of an element differ by the existence of additional "protons" in the nucleus accompanied by an equal number of "nuclear electrons". In this theory, the deuterium nucleus with mass two and charge one would contain two protons and one nuclear electron. However, it was expected that the element hydrogen with a measured average atomic mass very close to , the known mass of the proton, always has a nucleus composed of a single proton (a known particle), and could not contain a second proton. Thus, hydrogen was thought to have no heavy isotopes.
Deuterium detected.
It was first detected spectroscopically in late 1931 by Harold Urey, a chemist at Columbia University. Urey's collaborator, Ferdinand Brickwedde, distilled five liters of cryogenically produced liquid hydrogen to of liquid, using the low-temperature physics laboratory that had recently been established at the National Bureau of Standards (now National Institute of Standards and Technology) in Washington, DC. The technique had previously been used to isolate heavy isotopes of neon. The cryogenic boiloff technique concentrated the fraction of the mass-2 isotope of hydrogen to a degree that made its spectroscopic identification unambiguous.
Naming of the isotope and Nobel Prize.
Urey created the names "protium", "deuterium", and "tritium" in an article published in 1934. The name is based in part on advice from Gilbert N. Lewis who had proposed the name "deutium". The name comes from Greek "deuteros" 'second', and the nucleus was to be called a "deuteron" or "deuton". Isotopes and new elements were traditionally given the name that their discoverer decided. Some British scientists, such as Ernest Rutherford, wanted to call the isotope "diplogen", from Greek "diploos" 'double', and the nucleus to be called "diplon".
The amount inferred for normal abundance of deuterium was so small (only about 1 atom in 6400 hydrogen atoms in seawater [156 parts per million]) that it had not noticeably affected previous measurements of (average) hydrogen atomic mass. This explained why it hadn't been suspected before. Urey was able to concentrate water to show partial enrichment of deuterium. Lewis, Urey's graduate advisor at Berkeley, had prepared and characterized the first samples of pure heavy water in 1933. The discovery of deuterium, coming before the discovery of the neutron in 1932, was an experimental shock to theory; but when the neutron was reported, making deuterium's existence more explicable, Urey was awarded the Nobel Prize in Chemistry only three years after the isotope's isolation. Lewis was deeply disappointed by the Nobel Committee's decision in 1934 and several high-ranking administrators at Berkeley believed this disappointment played a central role in his suicide a decade later.
"Heavy water" experiments in World War II.
Shortly before the war, Hans von Halban and Lew Kowarski moved their research on neutron moderation from France to Britain, smuggling the entire global supply of heavy water (which had been made in Norway) across in twenty-six steel drums.
During World War II, Nazi Germany was known to be conducting experiments using heavy water as moderator for a nuclear reactor design. Such experiments were a source of concern because they might allow them to produce plutonium for an atomic bomb. Ultimately it led to the Allied operation called the "Norwegian heavy water sabotage", the purpose of which was to destroy the Vemork deuterium production/enrichment facility in Norway. At the time this was considered important to the potential progress of the war.
After World War II ended, the Allies discovered that Germany was not putting as much serious effort into the program as had been previously thought. The Germans had completed only a small, partly built experimental reactor (which had been hidden away) and had been unable to sustain a chain reaction. By the end of the war, the Germans did not even have a fifth of the amount of heavy water needed to run the reactor, partially due to the Norwegian heavy water sabotage operation. However, even if the Germans had succeeded in getting a reactor operational (as the U.S. did with Chicago Pile-1 in late 1942), they would still have been at least several years away from the development of an atomic bomb. The engineering process, even with maximal effort and funding, required about two and a half years (from first critical reactor to bomb) in both the U.S. and U.S.S.R., for example.
In thermonuclear weapons.
The 62-ton Ivy Mike device built by the United States and exploded on 1 November 1952, was the first fully successful "hydrogen bomb" (thermonuclear bomb). In this context, it was the first bomb in which most of the energy released came from nuclear reaction stages that followed the primary nuclear fission stage of the atomic bomb. The Ivy Mike bomb was a factory-like building, rather than a deliverable weapon. At its center, a very large cylindrical, insulated vacuum flask or cryostat, held cryogenic liquid deuterium in a volume of about 1000 liters (160 kilograms in mass, if this volume had been completely filled). Then, a conventional atomic bomb (the "primary") at one end of the bomb was used to create the conditions of extreme temperature and pressure that were needed to set off the thermonuclear reaction.
Within a few years, so-called "dry" hydrogen bombs were developed that did not need cryogenic hydrogen. Released information suggests that all thermonuclear weapons built since then contain chemical compounds of deuterium and lithium in their secondary stages. The material that contains the deuterium is mostly lithium deuteride, with the lithium consisting of the isotope lithium-6. When the lithium-6 is bombarded with fast neutrons from the atomic bomb, tritium (hydrogen-3) is produced, and then the deuterium and the tritium quickly engage in thermonuclear fusion, releasing abundant energy, helium-4, and even more free neutrons. "Pure" fusion weapons such as the Tsar Bomba are believed to be obsolete. In most modern ("boosted") thermonuclear weapons, fusion directly provides only a small fraction of the total energy. Fission of a natural uranium-238 tamper by fast neutrons produced from D–T fusion accounts for a much larger (i.e. boosted) energy release than the fusion reaction itself.
Modern research.
In August 2018, scientists announced the transformation of gaseous deuterium into a liquid metallic form. This may help researchers better understand gas giant planets, such as Jupiter, Saturn and some exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields.
Antideuterium.
An antideuteron is the antimatter counterpart of the nucleus of deuterium, consisting of an antiproton and an antineutron. The antideuteron was first produced in 1965 at the Proton Synchrotron at CERN and the Alternating Gradient Synchrotron at Brookhaven National Laboratory. A complete atom, with a positron orbiting the nucleus, would be called "antideuterium", but as of 2019, antideuterium has not yet been created. The proposed symbol for antideuterium is D, that is, D with an overbar.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{1}{\\sqrt{2}}\\Big( |{\\uparrow\\downarrow}\\rangle - |{\\downarrow\\uparrow}\\rangle\\Big)."
},
{
"math_id": 1,
"text": "\\frac{1}{\\sqrt{2}}\\Big( |p n \\rangle - |n p \\rangle\\Big)."
},
{
"math_id": 2,
"text": "\n\\left(\n\\begin{array}{ll}\n|{\\uparrow\\uparrow}\\rangle\\\\\n\\frac{1}{\\sqrt{2}}( |{\\uparrow\\downarrow}\\rangle + |{\\downarrow\\uparrow}\\rangle )\\\\\n|{\\downarrow\\downarrow}\\rangle\n\\end{array}\n\\right)\n"
},
{
"math_id": 3,
"text": "\\mu = \n\\frac{1}{j+1}\\bigl\\langle(l,s),j,m_j{=}j \\,\\bigr|\\, \\vec{\\mu}\\cdot \\vec{\\jmath} \\,\\bigl|\\,(l,s),j,m_j{=}j\\bigr\\rangle"
},
{
"math_id": 4,
"text": "\\vec{\\mu} = g^{(l)}\\vec{l} + g^{(s)}\\vec{s} "
},
{
"math_id": 5,
"text": "\\vec{l}"
},
{
"math_id": 6,
"text": "\\vec{s}"
},
{
"math_id": 7,
"text": "\\mu = \n\\frac{1}{j+1} \\Bigl\\langle(l,s),j,m_j{=}j \\,\\Bigr|\\left(\\frac{1}{2}\\vec{l} {g^{(l)}}_p + \\frac{1}{2}\\vec{s} ({g^{(s)}}_p + {g^{(s)}}_n)\\right)\\cdot \\vec{\\jmath} \\,\\Bigl|\\, (l,s),j,m_j{=}j \\Bigr\\rangle"
},
{
"math_id": 8,
"text": "\\mu = \n\\frac{1}{4(j+1)}\\left[({g^{(s)}}_p + {g^{(s)}}_n)\\big(j(j+1) - l(l+1) + s(s+1)\\big) + \\big(j(j+1) + l(l+1) - s(s+1)\\big)\\right]"
},
{
"math_id": 9,
"text": "\\mu = \\frac{1}{2}({g^{(s)}}_p + {g^{(s)}}_n) = 0.879"
},
{
"math_id": 10,
"text": "\\mu = -\\frac{1}{4}({g^{(s)}}_p + {g^{(s)}}_n) + \\frac{3}{4} = 0.310"
}
] | https://en.wikipedia.org/wiki?curid=8524 |
8525062 | PEG 400 | Chemical compound
PEG 400 (polyethylene glycol 400) is a low-molecular-weight grade of polyethylene glycol. It is a clear, colorless, viscous liquid. Due in part to its low toxicity, PEG 400 is widely used in a variety of pharmaceutical formulations.
Chemical properties.
PEG 400 is strongly hydrophilic. The partition coefficient of PEG 400 between hexane and water is 0.000015 (logformula_0), indicating that when PEG 400 is mixed with water and hexane, there are only 15 parts of PEG400 in the hexane layer per 1 million parts of PEG 400 in the water layer.
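The stated relation between the partition coefficient and its logarithm is simple arithmetic, sketched below (Python):
```python
import math

partition_coefficient = 0.000015           # hexane/water partition coefficient of PEG 400
print(math.log10(partition_coefficient))   # ~ -4.8, the quoted log P
print(partition_coefficient * 1_000_000)   # 15 parts in hexane per million parts in water
```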
PEG 400 is soluble in water, acetone, alcohols, benzene, glycerin, glycols, and aromatic hydrocarbons. It is not miscible with aliphatic hydrocarbons or diethyl ether. Therefore, reaction products can be extracted from the reaction media with those solvents.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P = -4.8"
}
] | https://en.wikipedia.org/wiki?curid=8525062 |
852522 | Representation of a Lie superalgebra | In the mathematical field of representation theory, a representation of a Lie superalgebra is an action of a Lie superalgebra "L" on a Z2-graded vector space "V", such that if "A" and "B" are any two pure elements of "L" and "X" and "Y" are any two pure elements of "V", then
formula_0
formula_1
formula_2
formula_3
Equivalently, a representation of "L" is a Z2-graded representation of the universal enveloping algebra of "L" which respects the third equation above.
Unitary representation of a star Lie superalgebra.
A * Lie superalgebra is a complex Lie superalgebra equipped with an involutive antilinear map * such that * respects the grading and
[a,b]*=[b*,a*].
A unitary representation of such a Lie algebra is a Z2 graded Hilbert space which is a representation of a Lie superalgebra as above together with the requirement that self-adjoint elements of the Lie superalgebra are represented by Hermitian transformations.
This is a major concept in the study of supersymmetry, together with the representation of a Lie superalgebra on an algebra. Say A is a *-algebra carrying a representation of the Lie superalgebra (with the additional requirement that * respects the grading and L[a]* = -(-1)^{La} L*[a*]), and H is the unitary representation above, which is also a unitary representation of A.
These three representations are all compatible if, for pure elements a in A, |ψ> in H and L in the Lie superalgebra,
L[a|ψ>] = (L[a])|ψ> + (-1)^{La} a(L[|ψ>]).
Sometimes, the Lie superalgebra is embedded within A in the sense that there is a homomorphism from the universal enveloping algebra of the Lie superalgebra to A. In that case, the equation above reduces to
L[a] = La - (-1)^{La} aL.
This approach avoids working directly with a Lie supergroup, and hence avoids the use of auxiliary Grassmann numbers. | [
{
"math_id": 0,
"text": "(c_1 A+c_2 B)\\cdot X=c_1 A\\cdot X + c_2 B\\cdot X"
},
{
"math_id": 1,
"text": "A\\cdot (c_1 X + c_2 Y)=c_1 A\\cdot X + c_2 A\\cdot Y"
},
{
"math_id": 2,
"text": "(-1)^{A\\cdot X}=(-1)^A(-1)^X"
},
{
"math_id": 3,
"text": "[A,B]\\cdot X=A\\cdot (B\\cdot X)-(-1)^{AB}B\\cdot (A\\cdot X)."
}
] | https://en.wikipedia.org/wiki?curid=852522 |
852721 | Actuarial notation | Shorthand method to record math formulas that deal with interest rates and life tables
Actuarial notation is a shorthand method to allow actuaries to record mathematical formulas that deal with interest rates and life tables.
Traditional notation uses a halo system, where symbols are placed as superscript or subscript before or after the main letter. Example notation using the halo system can be seen below.
Various proposals have been made to adopt a linear system, where all the notation would be on a single line without the use of superscripts or subscripts. Such a method would be useful for computing where representation of the halo system can be extremely difficult. However, a standard linear system has yet to emerge.
Example notation.
Interest rates.
formula_3 is the annual effective interest rate, which is the "true" rate of interest over "a year". Thus if the annual interest rate is 12% then formula_4.
formula_5 (pronounced "i upper m") is the nominal interest rate convertible formula_2 times a year, and is numerically equal to formula_2 times the effective rate of interest over one formula_2"th" of a year. For example, formula_6 is the nominal rate of interest convertible semiannually. If the effective annual rate of interest is 12%, then formula_7 represents the effective interest rate every six months. Since formula_8, we have formula_9 and hence formula_10. The "(m)" appearing in the symbol formula_5 is not an "exponent." It merely represents the number of interest conversions, or compounding times, per year. Semi-annual compounding (or converting interest every six months) is frequently used in valuing bonds (see also fixed income securities) and similar monetary financial liability instruments, whereas home mortgages frequently convert interest monthly. Following the above example again where formula_11, we have formula_12 since formula_13.
Effective and nominal rates of interest are not the same because interest paid in earlier measurement periods "earns" interest in later measurement periods; this is called compound interest. That is, nominal rates of interest credit interest to an investor, (alternatively charge, or debit, interest to a debtor), more frequently than do effective rates. The result is more frequent compounding of interest income to the investor, (or interest expense to the debtor), when nominal rates are used.
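A short numerical sketch (in Python, with function names chosen here for illustration) of the conversion between effective and nominal rates described above:

```python
# Convert between an annual effective rate i and the nominal rate i^(m)
# convertible m times a year, using (1 + i) = (1 + i^(m)/m)^m.

def nominal_from_effective(i, m):
    return m * ((1 + i) ** (1.0 / m) - 1)

def effective_from_nominal(i_m, m):
    return (1 + i_m / m) ** m - 1

i = 0.12
print(round(nominal_from_effective(i, 2), 4))        # 0.1166  (semiannual, i^(2))
print(round(nominal_from_effective(i, 12), 4))       # 0.1139  (monthly, i^(12))
print(round(effective_from_nominal(0.1139, 12), 4))  # about 0.12, recovering the effective rate
```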
The symbol formula_14 represents the present value of 1 to be paid one year from now:
formula_15
This present value factor, or discount factor, is used to determine the amount of money that must be invested now in order to have a given amount of money in the future. For example, if you need 1 in one year, then the amount of money you should invest now is: formula_16. If you need 25 in 5 years the amount of money you should invest now is: formula_17.
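The present-value calculations above can be reproduced directly (a minimal sketch, continuing the 12% example):

```python
# Discount factor v = (1 + i)^{-1} and its use for present values.
i = 0.12
v = 1 / (1 + i)

print(round(v, 4))           # 0.8929 -- invest about 0.89 now to have 1 in one year
print(round(25 * v**5, 4))   # 14.1857 -- invest about 14.19 now to have 25 in five years
```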
formula_18 is the annual effective discount rate:
formula_19
The value of formula_18 can also be calculated from the following relationships: formula_20
The rate of discount equals the amount of interest earned during a one-year period, divided by the balance of money at the end of that period. By contrast, an annual effective rate of interest is calculated by dividing the amount of interest earned during a one-year period by the balance of money at the beginning of the year. The present value (today) of a payment of 1 that is to be made formula_21 years in the future is formula_22. This is analogous to the formula formula_23 for the future (or accumulated) value formula_21 years in the future of an amount of 1 invested today.
formula_24, the nominal rate of discount convertible formula_25 times a year, is analogous to formula_5. Discount is converted on an formula_2"th"-ly basis.
formula_26, the force of interest, is the limiting value of the nominal rate of interest when formula_2 increases without bound:
formula_27
In this case, interest is convertible continuously.
The general relationship between formula_3, formula_26 and formula_18 is:
formula_28
Their numerical value can be compared as follows:
formula_29
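These relationships and the ordering can be checked numerically; the following sketch (with i = 12% as in the earlier example) is illustrative only:

```python
import math

i = 0.12
v = 1 / (1 + i)
d = i / (1 + i)              # annual effective discount rate
delta = math.log(1 + i)      # force of interest

def i_m(m):                  # nominal interest rate convertible m times a year
    return m * ((1 + i) ** (1 / m) - 1)

def d_m(m):                  # nominal discount rate convertible m times a year
    return m * (1 - v ** (1 / m))

values = [i, i_m(2), i_m(3), i_m(12), delta, d_m(12), d_m(3), d_m(2), d]
assert all(a > b for a, b in zip(values, values[1:]))   # i > i^(2) > ... > delta > ... > d
print([round(x, 4) for x in values])
# [0.12, 0.1166, 0.1155, 0.1139, 0.1133, 0.1128, 0.1112, 0.1102, 0.1071]
```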
Life tables.
A life table (or a mortality table) is a mathematical construction that shows the number of people alive (based on the assumptions used to build the table) at a given age. In addition to the number of lives remaining at each age, a mortality table typically provides various probabilities associated with the development of these values.
formula_30 is the number of people alive, relative to an original cohort, at age formula_0. As age increases the number of people alive decreases.
formula_31 is the starting point for formula_30: the number of people alive at age 0. This is known as the radix of the table. Some mortality tables begin at an age greater than 0, in which case the radix is the number of people assumed to be alive at the youngest age in the table.
formula_32 is the limiting age of the mortality tables. formula_33 is zero for all formula_34.
formula_35 is the number of people who die between age formula_0 and age formula_36. formula_35 may be calculated using the formula formula_37
formula_39 is the probability of death between the ages of formula_0 and age formula_36.
formula_40
formula_41 is the probability that a life age formula_0 will survive to age formula_36.
formula_42
Since the only possible alternatives from one age (formula_0) to the next (formula_38) are living and dying, the relationship between these two probabilities is:
formula_43
These symbols may also be extended to multiple years, by inserting the number of years at the bottom left of the basic symbol.
formula_44 shows the number of people who die between age formula_0 and age formula_45.
formula_46 is the probability of death between the ages of formula_0 and age formula_45.
formula_47
formula_48 is the probability that a life age formula_0 will survive to age formula_45.
formula_49
Another statistic that can be obtained from a life table is life expectancy.
formula_50 is the curtate expectation of life for a person alive at age formula_0. This is the expected number of complete years remaining to live (you may think of it as the expected number of birthdays that the person will celebrate).
formula_51
A life table generally shows the number of people alive at integral ages. If we need information regarding a fraction of a year, we must make assumptions with respect to the table, if not already implied by a mathematical formula underlying the table. A common assumption is that of a Uniform Distribution of Deaths (UDD) at each year of age. Under this assumption, formula_52 is a linear interpolation between formula_30 and formula_53. i.e.
formula_54
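A small worked example of these life-table quantities, using a made-up table (the numbers below are purely illustrative and not taken from any published table):

```python
# Toy life table: l[x] = number of lives at age x (fictitious values).
l = {70: 1000, 71: 980, 72: 955, 73: 925, 74: 890, 75: 850}
x = 70

d_x = l[x] - l[x + 1]            # deaths between ages x and x+1
q_x = d_x / l[x]                 # one-year probability of death
p_x = l[x + 1] / l[x]            # one-year probability of survival
print(d_x, q_x, p_x)             # 20 0.02 0.98

n = 5
nqx = (l[x] - l[x + n]) / l[x]   # n-year probability of death
npx = l[x + n] / l[x]            # n-year probability of survival
print(nqx, npx)                  # 0.15 0.85

# Curtate expectation of life, truncated at the end of this toy table
# (the true value sums over all remaining years of the table).
e_x = sum(l[x + t] / l[x] for t in range(1, 6))
print(round(e_x, 2))             # 4.6

# Uniform distribution of deaths: linear interpolation within the year of age.
t = 0.5
print((1 - t) * l[x] + t * l[x + 1])   # 990.0
```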
Annuities.
The basic symbol for the present value of an annuity is formula_55. The following notation can then be added:
If the payments to be made under an annuity are independent of any life event, it is known as an annuity-certain. Otherwise, in particular if payments end upon the beneficiary's death, it is called a life annuity.
formula_56 (read "a-angle-n at i") represents the present value of an annuity-immediate, which is a series of unit payments at the "end" of each year for formula_1 years (in other words: the value one period before the first of "n" payments). This value is obtained from:
formula_57
formula_59 represents the present value of an annuity-due, which is a series of unit payments at the "beginning" of each year for formula_1 years (in other words: the value at the time of the first of "n" payments). This value is obtained from:
formula_60
formula_62 is the value at the time of the last payment, formula_63 the value one period later.
If the symbol formula_64 is added to the top-right corner, it represents the present value of an annuity whose payments occur each one formula_2th of a year for a period of formula_1 years, and each payment is one formula_2th of a unit.
formula_65, formula_66
formula_67 is the limiting value of formula_68 when formula_2 increases without bound. The underlying annuity is known as a continuous annuity.
formula_69
The present values of these annuities may be compared as follows:
formula_70
To understand the relationships shown above, consider that cash flows paid at a later time have a smaller present value than cash flows of the same total amount that are paid at earlier times.
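A numerical sketch of the annuity-certain values and the ordering above (n = 10 years, i = 12%; illustrative only):

```python
import math

i, n = 0.12, 10
v = 1 / (1 + i)
d = i / (1 + i)
delta = math.log(1 + i)
i_12 = 12 * ((1 + i) ** (1 / 12) - 1)
d_12 = 12 * (1 - v ** (1 / 12))

a_imm    = (1 - v ** n) / i      # annuity-immediate (payments at year end)
a_imm_12 = (1 - v ** n) / i_12   # 1/12 paid at the end of each month
a_cont   = (1 - v ** n) / delta  # continuous annuity
a_due_12 = (1 - v ** n) / d_12   # 1/12 paid at the start of each month
a_due    = (1 - v ** n) / d      # annuity-due (payments at year start)

assert a_imm < a_imm_12 < a_cont < a_due_12 < a_due
print(round(a_imm, 4), round(a_cont, 4), round(a_due, 4))   # about 5.65, 5.98 and 6.33
```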
Life annuities.
A life annuity is an annuity whose payments are contingent on the continuing life of the annuitant. The age of the annuitant is an important consideration in calculating the actuarial present value of an annuity.
For example:
formula_72 indicates an annuity of 1 unit per year payable at the end of each year until death to someone currently age 65
formula_73 indicates an annuity of 1 unit per year payable for 10 years with payments being made at the end of each year
formula_74 indicates an annuity of 1 unit per year for 10 years, or until death if earlier, to someone currently age 65
formula_75 indicates an annuity of 1 unit per year until the earlier death of member or death of spouse, to someone currently age 65 and spouse age 64
formula_76 indicates an annuity of 1 unit per year until the later death of member or death of spouse, to someone currently age 65 and spouse age 64.
formula_77 indicates an annuity of 1 unit per year payable 12 times a year (1/12 unit per month) until death to someone currently age 65
formula_78 indicates an annuity of 1 unit per year payable at the start of each year until death to someone currently age 65
or in general:
formula_79, where formula_0 is the age of the annuitant, formula_1 is the number of years of payments (or until death if earlier), formula_2 is the number of payments per year, and formula_58 is the interest rate.
In the interest of simplicity the notation is limited and does not, for example, show whether the annuity is payable to a man or a woman (a fact that would typically be determined from the context, including whether the life table is based on male or female mortality rates).
The Actuarial Present Value of life contingent payments can be treated as the mathematical expectation of a present value random variable, or calculated through the current payment form.
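As an illustration of the current payment form, the whole-life annuity-due value can be computed as the sum of discounted survival probabilities, ä_x = Σ v^k · k p_x. This summation is a standard actuarial formula assumed here (it is not spelled out in the article), and the miniature life table below is made up for the example:

```python
# Whole-life annuity-due on a tiny fictitious life table, 5% interest,
# valued in the current payment form as the sum of discounted survival probabilities.
l = {95: 100, 96: 70, 97: 45, 98: 25, 99: 10, 100: 0}   # made-up values
i = 0.05
v = 1 / (1 + i)
x = 95

a_due = sum(v ** k * l[x + k] / l[x] for k in range(0, 5))   # payments at the start of each year
a_imm = a_due - 1                                            # drop the payment at time 0
print(round(a_due, 4), round(a_imm, 4))                      # 2.3731 1.3731
```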
Life insurance.
The basic symbol for a life insurance is formula_80. The following notation can then be added:
For example:
formula_82 indicates a life insurance benefit of 1 payable at the end of the year of death.
formula_83 indicates a life insurance benefit of 1 payable at the end of the month of death.
formula_84 indicates a life insurance benefit of 1 payable at the (mathematical) instant of death.
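A companion sketch for the benefit payable at the end of the year of death, using the standard summation A_x = Σ v^{k+1} · k p_x · q_{x+k} over the same made-up table. Both this formula and the consistency check A_x = 1 − d·ä_x are standard results assumed here, not stated in the article:

```python
# Whole-life insurance of 1 payable at the end of the year of death,
# on the same fictitious table and 5% interest as the annuity example.
l = {95: 100, 96: 70, 97: 45, 98: 25, 99: 10, 100: 0}
i = 0.05
v = 1 / (1 + i)
d = i / (1 + i)
x = 95

A_x = sum(v ** (k + 1) * (l[x + k] - l[x + k + 1]) / l[x] for k in range(0, 5))
a_due = sum(v ** k * l[x + k] / l[x] for k in range(0, 5))

print(round(A_x, 4))                       # 0.887
assert abs(A_x - (1 - d * a_due)) < 1e-9   # consistency: A_x = 1 - d * a_due
```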
Premium.
The basic symbol for premium is formula_85 or formula_86. formula_85 generally refers to net premiums per annum, formula_86 to special premiums, such as a single (unique) premium.
Force of mortality.
Among actuaries, force of mortality refers to what economists and other social scientists call the hazard rate and is construed as an instantaneous rate of mortality at a certain age measured on an annualized basis.
In a life table, we consider the probability of a person dying between age ("x") and age "x" + 1; this probability is called "q""x". In the continuous case, we could also consider the conditional probability that a person who has attained age ("x") will die between age ("x") and age ("x" + Δ"x") as:
formula_87
where "F""X"("x") is the cumulative distribution function of the continuous age-at-death random variable, X. As Δ"x" tends to zero, so does this probability in the continuous case. The approximate force of mortality is this probability divided by Δ"x". If we let Δ"x" tend to zero, we get the function for force of mortality, denoted as "μ"("x"):
formula_88 | [
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "\\,i"
},
{
"math_id": 4,
"text": "\\,i = 0.12"
},
{
"math_id": 5,
"text": "\\,i^{(m)}"
},
{
"math_id": 6,
"text": "\\,i^{(2)}"
},
{
"math_id": 7,
"text": "\\,i^{(2)}/2"
},
{
"math_id": 8,
"text": "\\,(1.0583)^{2}=1.12"
},
{
"math_id": 9,
"text": "\\,i^{(2)}/2=0.0583"
},
{
"math_id": 10,
"text": "\\,i^{(2)}=0.1166"
},
{
"math_id": 11,
"text": "\\,i=0.12"
},
{
"math_id": 12,
"text": "\\,i^{(12)}=0.1139"
},
{
"math_id": 13,
"text": "\\,\\left(1+\\frac{0.1139}{12}\\right)^{12}=1.12"
},
{
"math_id": 14,
"text": "\\,v"
},
{
"math_id": 15,
"text": "\\,v = {(1+i)}^{-1}\\approx 1-i+i^2"
},
{
"math_id": 16,
"text": "\\,1 \\times v"
},
{
"math_id": 17,
"text": "\\,25 \\times v^5"
},
{
"math_id": 18,
"text": "\\,d"
},
{
"math_id": 19,
"text": "d = \\frac{i}{1+i}\\approx i-i^2"
},
{
"math_id": 20,
"text": "\\,(1-d) = v = {(1+i)}^{-1}"
},
{
"math_id": 21,
"text": "\\,n"
},
{
"math_id": 22,
"text": "\\,{(1-d)}^{n}"
},
{
"math_id": 23,
"text": "\\,{(1+i)}^{n}"
},
{
"math_id": 24,
"text": "\\,d^{(m)}"
},
{
"math_id": 25,
"text": "\\,m"
},
{
"math_id": 26,
"text": "\\,\\delta"
},
{
"math_id": 27,
"text": "\\,\\delta = \\lim_{m\\to\\infty}i^{(m)}"
},
{
"math_id": 28,
"text": "\\,(1+i) = \\left(1+\\frac{i^{(m)}}{m}\\right)^{m} = e^{\\delta} = \\left(1-\\frac{d^{(m)}}{m}\\right)^{-m} = (1-d)^{-1}"
},
{
"math_id": 29,
"text": "\\, i > i^{(2)} > i^{(3)} > \\cdots > \\delta > \\cdots > d^{(3)} > d^{(2)} > d"
},
{
"math_id": 30,
"text": "\\,l_x"
},
{
"math_id": 31,
"text": "\\,l_0"
},
{
"math_id": 32,
"text": "\\omega"
},
{
"math_id": 33,
"text": "\\,l_n"
},
{
"math_id": 34,
"text": "\\,n \\geq \\omega"
},
{
"math_id": 35,
"text": "\\,d_x"
},
{
"math_id": 36,
"text": "x + 1"
},
{
"math_id": 37,
"text": "\\,d_x = l_x - l_{x+1}"
},
{
"math_id": 38,
"text": "x+1"
},
{
"math_id": 39,
"text": "\\,q_x"
},
{
"math_id": 40,
"text": "\\,q_x = d_x / l_x"
},
{
"math_id": 41,
"text": "\\,p_x"
},
{
"math_id": 42,
"text": "\\,p_x = l_{x+1} / l_x"
},
{
"math_id": 43,
"text": "\\,p_x+q_x=1"
},
{
"math_id": 44,
"text": "\\,_nd_x = d_x + d_{x+1} + \\cdots + d_{x+n-1} = l_x - l_{x+n}"
},
{
"math_id": 45,
"text": "x + n"
},
{
"math_id": 46,
"text": "\\,_nq_x"
},
{
"math_id": 47,
"text": "\\,_nq_x = {}_nd_x / l_x"
},
{
"math_id": 48,
"text": "\\,_np_x"
},
{
"math_id": 49,
"text": "\\,_np_x = l_{x+n} / l_x"
},
{
"math_id": 50,
"text": "\\,e_x"
},
{
"math_id": 51,
"text": "\\,e_x = \\sum_{t=1}^{\\infty} \\ _tp_x"
},
{
"math_id": 52,
"text": "\\,l_{x+t}"
},
{
"math_id": 53,
"text": "\\,l_{x+1}"
},
{
"math_id": 54,
"text": "\\,l_{x+t} = (1 - t)l_x + tl_{x+1} "
},
{
"math_id": 55,
"text": "\\,a"
},
{
"math_id": 56,
"text": "a_{\\overline{n|}i}"
},
{
"math_id": 57,
"text": "\\,a_{\\overline{n|}i} = v + v^2 + \\cdots + v^n = \\frac{1-v^n}{i}"
},
{
"math_id": 58,
"text": "i"
},
{
"math_id": 59,
"text": "\\ddot{a}_{\\overline{n|}i}"
},
{
"math_id": 60,
"text": "\\ddot{a}_{\\overline{n|}i} = 1 + v + \\cdots + v^{n-1} = \\frac{1-v^n}{d}"
},
{
"math_id": 61,
"text": "d"
},
{
"math_id": 62,
"text": "\\,s_{\\overline{n|}i}"
},
{
"math_id": 63,
"text": "\\ddot{s}_{\\overline{n|}i}"
},
{
"math_id": 64,
"text": "\\,(m)"
},
{
"math_id": 65,
"text": "a_{\\overline{n|}i}^{(m)} = \\frac{1-v^n}{i^{(m)}}"
},
{
"math_id": 66,
"text": "\\ddot{a}_{\\overline{n|}i}^{(m)} = \\frac{1-v^n}{d^{(m)}}"
},
{
"math_id": 67,
"text": "\\overline{a}_{\\overline{n|}i}"
},
{
"math_id": 68,
"text": "\\,a_{\\overline{n|}i}^{(m)}"
},
{
"math_id": 69,
"text": "\\overline{a}_{\\overline{n|}i}= \\frac{1-v^n}{\\delta}"
},
{
"math_id": 70,
"text": "a_{\\overline{n|}i} < a_{\\overline{n|}i}^{(m)} < \\overline{a}_{\\overline{n|}i} < \\ddot{a}_{\\overline{n|}i}^{(m)}< \\ddot{a}_{\\overline{n|}i}"
},
{
"math_id": 71,
"text": "\\delta"
},
{
"math_id": 72,
"text": "\\,a_{65}"
},
{
"math_id": 73,
"text": "a_{\\overline{10|}}"
},
{
"math_id": 74,
"text": "a_{65:\\overline{10|}}"
},
{
"math_id": 75,
"text": "a_{65:64}"
},
{
"math_id": 76,
"text": "a_{\\overline{65:64}}"
},
{
"math_id": 77,
"text": "a_{65}^{(12)}"
},
{
"math_id": 78,
"text": "{\\ddot{a}}_{65}"
},
{
"math_id": 79,
"text": "a_{x:\\overline{n|}i}^{(m)}"
},
{
"math_id": 80,
"text": "\\,A"
},
{
"math_id": 81,
"text": "A^{(12)}"
},
{
"math_id": 82,
"text": "\\,A_x"
},
{
"math_id": 83,
"text": "\\,A_x^{(12)}"
},
{
"math_id": 84,
"text": "\\,\\overline{A}_x"
},
{
"math_id": 85,
"text": "\\,P"
},
{
"math_id": 86,
"text": "\\,\\pi "
},
{
"math_id": 87,
"text": "P_{\\Delta x}(x)=P(x<X<x+\\Delta\\;x\\mid\\;X>x)=\\frac{F_X(x+\\Delta\\;x)-F_X(x)}{(1-F_X(x))}"
},
{
"math_id": 88,
"text": "\\mu\\,(x)=\\frac{F'_X(x)}{1-F_X(x)}"
}
] | https://en.wikipedia.org/wiki?curid=852721 |
8528 | Disjunction introduction | Inference introducing a disjunction in logical proofs
Disjunction introduction or addition (also called or introduction) is a rule of inference of propositional logic and almost every other deduction system. The rule makes it possible to introduce disjunctions to logical proofs. It is the inference that if "P" is true, then "P or Q" must be true.
An example in English:
Socrates is a man.
Therefore, Socrates is a man or pigs are flying in formation over the English Channel.
The rule can be expressed as:
formula_2
where the rule is that whenever instances of "formula_0" appear on lines of a proof, "formula_3" can be placed on a subsequent line.
More generally, it is also a simple valid argument form: if the premise is true, then the conclusion is also true, as any rule of inference should guarantee. It is also an immediate inference, as it has a single proposition in its premises.
Disjunction introduction is not a rule in some paraconsistent logics because, in combination with other rules of logic, it leads to explosion (i.e. everything becomes provable), and paraconsistent logic tries to avoid explosion and to be able to reason with contradictions. One of the proposed solutions is to place additional restrictions on the rules under which a disjunction may be introduced.
Formal notation.
The "disjunction introduction" rule may be written in sequent notation:
formula_4
where formula_5 is a metalogical symbol meaning that formula_3 is a syntactic consequence of formula_0 in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:
formula_6
where formula_0 and formula_1 are propositions expressed in some formal system.
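A brute-force truth-table check of this tautology (a simple Python illustration, not part of the formal system itself):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# P -> (P or Q) is true under every truth assignment, so it is a tautology.
assert all(implies(P, P or Q) for P, Q in product([True, False], repeat=2))
print("P -> (P or Q) holds in all four cases")
```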
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "\\frac{P}{\\therefore P \\lor Q}"
},
{
"math_id": 3,
"text": "P \\lor Q"
},
{
"math_id": 4,
"text": "P \\vdash (P \\lor Q)"
},
{
"math_id": 5,
"text": "\\vdash"
},
{
"math_id": 6,
"text": "P \\to (P \\lor Q)"
}
] | https://en.wikipedia.org/wiki?curid=8528 |
8529 | Disjunction elimination | Rule of inference of propositional logic
In propositional logic, disjunction elimination (sometimes named proof by cases, case analysis, or or elimination) is the valid argument form and rule of inference that allows one to eliminate a disjunctive statement from a logical proof. It is the inference that if a statement formula_0 implies a statement formula_1 and a statement formula_2 also implies formula_1, then if either formula_0 or formula_2 is true, then formula_1 has to be true. The reasoning is simple: since at least one of the statements P and R is true, and since either of them would be sufficient to entail Q, Q is certainly true.
An example in English:
If I'm inside, I have my wallet on me.
If I'm outside, I have my wallet on me.
It is true that either I'm inside or I'm outside.
Therefore, I have my wallet on me.
The rule can be stated as:
formula_3
where the rule is that whenever instances of "formula_4", and "formula_5" and "formula_6" appear on lines of a proof, "formula_1" can be placed on a subsequent line.
Formal notation.
The "disjunction elimination" rule may be written in sequent notation:
formula_7
where formula_8 is a metalogical symbol meaning that formula_1 is a syntactic consequence of formula_4, and formula_5 and formula_6 in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:
formula_9
where formula_0, formula_1, and formula_2 are propositions expressed in some formal system.
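The corresponding tautology can likewise be checked by enumerating all truth assignments (a simple Python illustration):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# (((P -> Q) and (R -> Q)) and (P or R)) -> Q is true under every assignment.
assert all(
    implies((implies(P, Q) and implies(R, Q)) and (P or R), Q)
    for P, Q, R in product([True, False], repeat=3)
)
print("disjunction elimination holds in all eight cases")
```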
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "\\frac{P \\to Q, R \\to Q, P \\lor R}{\\therefore Q}"
},
{
"math_id": 4,
"text": "P \\to Q"
},
{
"math_id": 5,
"text": "R \\to Q"
},
{
"math_id": 6,
"text": "P \\lor R"
},
{
"math_id": 7,
"text": "(P \\to Q), (R \\to Q), (P \\lor R) \\vdash Q"
},
{
"math_id": 8,
"text": "\\vdash"
},
{
"math_id": 9,
"text": "(((P \\to Q) \\land (R \\to Q)) \\land (P \\lor R)) \\to Q"
}
] | https://en.wikipedia.org/wiki?curid=8529 |
8529055 | Multiplicative distance | In algebraic geometry, formula_0 is said to be a multiplicative distance function over a field if it satisfies
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "\\mu(AB)>1.\\,"
},
{
"math_id": 2,
"text": "\\mu(AB)=\\mu(A'B').\\,"
},
{
"math_id": 3,
"text": "\\mu(AB)<\\mu(A'B').\\,"
},
{
"math_id": 4,
"text": "\\mu(AB+CD)=\\mu(AB)\\mu(CD).\\,"
}
] | https://en.wikipedia.org/wiki?curid=8529055 |
852930 | Anarchy, State, and Utopia | 1974 book by Robert Nozick
Anarchy, State, and Utopia is a 1974 book by the American political philosopher Robert Nozick. It won the 1975 US National Book Award in category Philosophy and Religion, has been translated into 11 languages, and was named one of the "100 most influential books since the war" (1945–1995) by the UK "Times Literary Supplement".
In opposition to "A Theory of Justice" (1971) by John Rawls, and in debate with Michael Walzer, Nozick argues in favor of a minimal state, "limited to the narrow functions of protection against force, theft, fraud, enforcement of contracts, and so on." When a state takes on more responsibilities than these, Nozick argues, rights will be violated. To support the idea of the minimal state, Nozick presents an argument that illustrates how the minimalist state arises naturally from a Lockean state of nature and how any expansion of state power past this minimalist threshold is unjustified.
Summary.
Nozick's entitlement theory, which sees humans as ends in themselves and justifies redistribution of goods only on condition of consent, is a key aspect of "Anarchy, State, and Utopia". It is influenced by John Locke, Immanuel Kant, and Friedrich Hayek.
The book also contains a vigorous defense of minarchist libertarianism against more extreme views, such as anarcho-capitalism (in which there is "no" state). Nozick argues that anarcho-capitalism would inevitably transform into a minarchist state, even without violating any of its own non-aggression principles, through the eventual emergence of a single locally dominant private defense and judicial agency that it is in everyone's interests to align with because other agencies are unable to effectively compete against the advantages of the agency with majority coverage. Therefore, even to the extent that the anarcho-capitalist theory is correct, it results in a single, private, protective agency that is itself a de facto "state". Thus anarcho-capitalism may only exist for a limited period before a minimalist state emerges.
Philosophical activity.
The preface of "Anarchy, State, and Utopia" contains a passage about "the usual manner of presenting philosophical work"—i.e., its presentation as though it were the absolute final word on its subject. Nozick believes that philosophers are really more modest than that and aware of their works' weaknesses. Yet a form of philosophical activity persists which "feels like pushing and shoving things to fit into some fixed perimeter of specified shape." The bulges are masked or the cause of the bulge is thrown "far" away so that no one will notice. Then ""Quickly", you find an angle from which everything appears to fit perfectly and take a snapshot, at a fast shutter speed before something else bulges out too noticeably." After a trip to the darkroom for touching up, "[a]ll that remains is to publish the photograph as a representation of exactly how things are, and to note how nothing fits properly into any other shape." So how does Nozick's work differ from this form of activity? He believed that what he said was correct, but he does not mask the bulges: "the doubts and worries and uncertainties as well as the beliefs, convictions, and arguments."
Why state-of-nature theory?
In this chapter, Nozick tries to explain why investigating a Lockean state of nature is useful in order to understand whether there should be a state in the first place. If one can show that an anarchic society is worse than one that has a state we should choose the second as the less bad alternative. To convincingly compare the two, he argues, one should focus not on an extremely pessimistic nor on an extremely optimistic view of that society. Instead, one should:
<templatestyles src="Template:Blockquote/styles.css" />[...] focus upon a nonstate situation in which people generally satisfy moral constraints and generally act as they ought [...] this state-of-nature situation is the best anarchic situation one reasonably could hope for. Hence investigating its nature and defects is of crucial importance to deciding whether there should be a state rather than anarchy.
Nozick's plan is to first describe the morally permissible and impermissible actions in such a non-political society and how violations of those constraints by some individuals would lead to the emergence of a state. If that would happen, it would explain the appearance even if no state actually developed in that particular way.
He gestures towards perhaps the biggest bulge when he notes (in Chapter 1, "Why State-of-Nature Theory?") the shallowness of his "invisible hand" explanation of the minimal state, deriving it from a Lockean state of nature, in which there are individual rights but no state to enforce and adjudicate them. Although this counts for him as a "fundamental explanation" of the political realm because the political is explained in terms of the nonpolitical, it is shallow relative to his later "genealogical" ambition (in "The Nature of Rationality" and especially in "Invariances") to explain both the political and the moral by reference to beneficial cooperative practices that can be traced back to our hunter-gatherer ancestors and beyond. The genealogy will give Nozick an explanation of what is only assumed in "Anarchy, State, and Utopia": the fundamental status of individual rights. Creativity was not a factor in his interpretation.
The state of nature.
Nozick starts this chapter by summarizing some of the features of the Lockean state of nature. An important one is that every individual has a right to exact compensation by himself whenever another individual violates his rights. Punishing the offender is also acceptable, but only inasmuch as he (or others) will be prevented from doing that again. As Locke himself acknowledges, this raises several problems, and Nozick is going to try to see to what extent they can be solved by voluntary arrangements. A rational response to the "troubles" of a Lockean state of nature is the establishment of mutual-protection associations, in which all will answer the call of any member. It is inconvenient that everyone is always on call, and that the associates can be called out by members who may be "cantankerous or paranoid". Another important inconvenience takes place when two members of the same association have a dispute. Although there are simple rules that could solve this problem (for instance, a policy of non-intervention), most people will prefer associations that try to build systems to decide whose claims are correct.
In any case, the problem of everybody being on call dictates that some entrepreneurs will go into the business of selling protective services (division of labor). This will lead ("through market pressures, economies of scale, and rational self-interest") either to people joining the strongest association in a given area or to a situation in which some associations have similar power and hence avoid the costs of fighting by agreeing to a third party that would act as a judge or court to solve the disputes. But for all practical purposes, this second case is equivalent to having just one protective association. And this is something "very much resembling a minimal state". Nozick judges that Locke was wrong to imagine a social contract as necessary to establish civil society and money. He prefers invisible-hand explanations, that is to say, that voluntary agreements between individuals create far-reaching patterns that "look like they were designed" when in fact nobody designed them. These explanations are useful in the sense that they "minimize the use of notions constituting the phenomena to be explained".
So far he has shown that such an "invisible hand" would lead to a dominant association, but individuals may still justly enforce their own rights. But this protective agency is not yet a state. At the end of the chapter Nozick points out some of the problems of defining what a state is, but he says:
<templatestyles src="Template:Blockquote/styles.css" />We may proceed, for our purposes, by saying that a necessary condition for the existence of a state is that it (some person or organization) announce that, to the best of its ability [...] it will punish everyone whom it discovers to have used force without its express permission.
The protective agency so far does not make any such announcement. Furthermore, it does not offer the same degree of protection to all its clients (who may purchase different degrees of coverage), and the individuals who do not purchase the service (the "independents") do not get any protection at all (spillover effects aside). This goes against our experience with states, where even tourists typically receive protection. Therefore, the dominant protective agency lacks a monopoly on the use of force and fails to protect all people inside its territory.
Moral constraints and the state.
Nozick arrives at the night-watchman state of classical liberalism theory by showing that there are non-redistributive reasons for the apparently redistributive procedure of making its clients pay for the protection of others. He defines what he calls an ultraminimal state, which would not have this seemingly redistributive feature but would be the only one allowed to enforce rights. Proponents of this ultraminimal state do not defend it on the grounds of trying to minimize the total of (weighted) violations of rights (what he calls utilitarianism of rights.) That idea would mean, for example, that someone could punish another person they know to be innocent in order to calm down a mob that would otherwise violate even more rights. This is not the philosophy behind the ultraminimal state. Instead, its proponents hold its members' rights are a side-constraint on what can be done to them. This side-constraint view reflects the underlying Kantian principle that individuals are ends and not merely means, so the rights of "one" individual cannot be violated to avoid violations of the rights of "other" people. Which principle should we choose, then? Nozick will not try to prove which one is better. Instead, he gives some reasons to prefer the Kantian view and later points to problems with classic utilitarianism.
The first reason he gives in favor of the Kantian principle is that the analogy between the individual case (in which we choose to sacrifice now for a greater benefit later) and the social case (in which we sacrifice the interests of one individual for the greater social good) is incorrect:
<templatestyles src="Template:Blockquote/styles.css" />There are only individual people, different individual people with their own individual lives. Using one of these people for the benefit of others, uses him and benefits the others. Nothing more. [...] Talk of an overall social good covers this up. (Intentionally?). To use a person in this way does not sufficiently respect and take account of the fact that he is a separate person, that his is the only life he has. He does not get some overbalancing good from his sacrifice [...].
A second reason focuses on the non-aggression principle. Are we prepared to dismiss this principle? That is, can we accept that some individuals may harm some innocent in certain cases? (This non-aggression principle does not include, of course, self-defense and perhaps some other special cases he points out.)
He then goes on to expose some problems with utilitarianism by discussing whether animals should be taken into account in the utilitarian calculation of happiness, if that depends on the kind of animal, if killing them painlessly would be acceptable, and so on. He believes that utilitarianism is not appropriate even with animals.
But Nozick's most famous argument for the side-constraint view against classical utilitarianism and the idea that only felt experience matters is his Experience Machine thought experiment. It induces whatever illusory experience one might wish, but it prevents the subject from doing anything or making contact with anything. There is only pre-programmed neural stimulation sufficient for the illusion. Nozick pumps the intuition that each of us has a reason to avoid plugging into the Experience Machine forever. This is not to say that "plugging in" might not be the best all-things-considered choice for some who are terminally ill and in great pain. The point of the thought experiment is to articulate a weighty reason not to plug in, a reason that should not be there if all that matters is felt experience.
Prohibition, compensation, and risk.
The procedure that leads to a night-watchman state involves compensation to non-members who are prevented from enforcing their rights with a mechanism that the association deems risky by comparison with its own. Compensation addresses any disadvantages non-members suffer as a result of being unable to enforce their rights. Assuming that non-members take reasonable precautions and adjust their activities to the association's prohibition of their enforcing their own rights, the association is required to raise the non-member above his actual position by an amount equal to the difference between his position on an indifference curve he would occupy were it not for the prohibition, and his original position.
The purpose of this comparatively dense chapter is to deduce what Nozick calls the Compensation Principle. That idea is going to be key for the next chapter, where he shows how (without any violation of rights) an ultraminimal state (one that has a monopoly of enforcement of rights) can become a "minimal" state (which also provides protection to all individuals). Since this would involve some people paying for the protection of others, or some people being forced to pay for protection, the main element of the discussion is whether these kinds of actions can be justified from a natural rights perspective. Hence the development of a theory of compensation in this chapter.
He starts by asking broadly what if someone "crosses a boundary" (for instance, physical harm.) If this is done with the consent of the individual concerned, no problem arises. Unlike Locke, Nozick does not have a "paternalistic" view of the matter. He believes anyone can do "anything" to himself, or allow others to do the same things to him.
But what if B crosses A's boundaries without consent? Is that okay if A is compensated?
What Nozick understands by compensation is anything that makes A indifferent (that is, A has to be just as good in his own judgement "before" the transgression and "after" the compensation) provided that A has taken reasonable precautions to avoid the situation. He argues that compensation is not enough, because "some" people will violate these boundaries, for example, without revealing their identity. Therefore, some extra cost has to be imposed on those who violate someone else's rights. (For the sake of simplicity this discussion on deterrence is summarized in another section of this article).
After discussing the issue of punishment and concluding that not all violations of rights will be deterred under a retributive theory of justice (which he favors), Nozick returns to compensation. Again, why don't we allow anyone to do anything provided they give full compensation afterwards? There are several problems with that view.
Firstly, if some person gets a big gain by violating another's rights and they then compensate the victim up to the point of indifference, the infractor is getting all the benefits that this provides. But one could argue that it would be fair for the felon to give some compensation beyond that, just like in the marketplace, where the buyer does not necessarily just pay up to the point where the seller is indifferent between selling and not selling. There is usually room for negotiation, which raises the question of fairness. Every attempt to make a theory of a fair price in the marketplace has failed, and Nozick prefers not to try to solve the issue. Instead, he says that, whenever possible, those negotiations should take place, so that the compensation is decided by the people involved. But when one cannot negotiate, it is unclear whether "all" acts should be accepted if compensation is paid.
Secondly, allowing anything if compensation is paid makes "all" people fearful. Nozick argues that even if one knows they will be compensated if their rights are violated, they will still fear this violation. This raises important problems:
The conclusion of these difficulties, particularly the last one, is that anything that produces "general" fear may be prohibited. Another reason to prohibit is that it would imply using people as a means, which violates the Kantian principle that he defended earlier.
But if so, what about prohibiting all boundary crossing "that isn't consented in advance"? That would solve the fear problem, but it would be way too restrictive, since people may cross some boundaries (by accident, unintentional acts, etc.) and the costs of getting that consent may be too high (for instance if the known victim is on a trip in the jungle). What then? "The most efficient policy forgoes the fewest net beneficial acts; it allows anyone to perform an unfeared action without prior agreement, provided the transaction costs of reaching a prior agreement are greater, even by a bit, than the costs of the posterior compensation process."
Note that a particular action may not cause fear if it has a low probability of causing harm. But when all the risky activities are added up, the probability of being harmed may be high. This poses the problem that prohibiting all such activities (which may be very varied) is too restrictive. The obvious response, that is, establishing a threshold value V such that there is a violation of rights if formula_0 (where p is the probability of harming and H is the amount of harm that could be done) will not fit a natural-rights position. In his own words:
<templatestyles src="Template:Blockquote/styles.css" />This construal of the problem cannot be utilized by a tradition which holds that stealing a penny or a pin or anything from someone violates his rights. That tradition does not select a threshold measure of harm as a lower limit, in the case of harms certain to occur
Granted, some insurance solutions will work in these cases and he discusses some. But what should be done to people who do not have the means to buy insurance or compensate other people for the risks of their actions? Should they be forbidden from performing such actions?
<templatestyles src="Template:Blockquote/styles.css" />Since an enormous number of actions do increase risk to others, a society which prohibited such uncovered actions would ill fit a picture of a free society as one embodying a presumption in favor of liberty, under which people permissibly could perform actions so long as they didn't harm others in specified ways. [...] to prohibit risky acts (because they are financially uncovered or because they are too risky) limits individual's freedom to act, even though the actions actually might involve no cost at all to anyone else.
(This is going to have important consequences in the next chapter, see next section).
Nozick's conclusion is to prohibit specially dangerous actions that are generally done "and" to compensate the individuals specially disadvantaged by the prohibition. This is what he calls the Principle of Compensation. For example, it is allowed to forbid epileptics from driving, but only if they are compensated exactly for the costs that the disadvantaged has to assume (chauffeurs, taxis). This would only take place if the benefit from the increased security outweighs these costs. But this is not a negotiation. The analogy he gives is blackmail: it is not right to pay a person or group to prevent him from doing something that otherwise would give him no benefit whatsoever. Nozick considers such transactions as "unproductive activities". Similarly (it should be deduced), it is not right for the epileptic to negotiate a payment for not doing something risky to other people.
However, Nozick does point to some problems with this principle. Firstly, he says that the action has to be "generally done". The intention behind that qualification is that eccentric and dangerous activities should not be compensated. His extreme example is someone who has fun playing Russian roulette aimed at the heads of others without asking them. Such an action must be prohibited, with no qualifications. But one can define anything as a "generally done" action. The Russian roulette could be considered "having fun" and hence be compensated. Secondly, if the special and dangerous action is the only way a person can do something important to him (for instance, if it is the "only" way one can have fun or support himself) then perhaps it should be compensated. Thirdly, more generally, he recognizes he does not have a theory of disadvantage, so it is unclear what counts as a "special disadvantage".
This has to be further developed, because in the state of nature there is no authority to decide how to define these terms (see the discussion of a similar issue on p. 89).
<templatestyles src="Template:Blockquote/styles.css" />[...] nor need we state the principle exactly. We need only claim the correctness of some principles, such as the principle of compensation, requiring those imposing a prohibition on risky activities prohibited to them. I am not completely comfortable presenting and later using a principle whose details have not been worked out fully [...]. I could claim that it is all right as a beginning to leave a principle in a somewhat fuzzy state; the primary question is whether something like it will do.
The state.
An independent might be prohibited from using his methods of privately enforcing justice if those methods are too risky to others, or if it is not known whether they are reliable and fair.
Nevertheless, an independent may be using a method that does not impose a high risk on others but, if similar procedures are used by many others, the total risk may go beyond an acceptable threshold. In that case it is impossible to decide who should stop doing it, since nobody is personally responsible and therefore nobody has a right to stop him. Independents may get together to decide these questions, but even if they agree to a mechanism to keep the total risk below the threshold, each individual will have an incentive to get out of the deal. This procedure fails because of the rationality of being a free rider on such a grouping, taking advantage of everyone else's restraint and going ahead with one's own risky activities. In a famous discussion Nozick rejects H. L. A. Hart's "principle of fairness" for dealing with free riders, which would morally bind them to the cooperative practices from which they benefit; on Nozick's view, one may not charge and collect for benefits one bestows without prior agreement.
If the principle of fairness does not work, how should we decide this? Natural law tradition does not help much in clarifying what procedural rights we have. Nozick assumes that we all have a right to know that a fair and reliable method is being applied to us for deciding whether we are guilty. If this information is not available publicly, we have a right to resist. We may also resist if we find the procedure unreliable or unfair after considering the information given. We may even decline to participate in the process, even if it would be advisable to do so.
The application of these rights may be delegated to the protective agency, which will prevent others from applying methods that it finds unacceptable in terms of reliability or fairness. Presumably, it would publish a list of accepted methods. Anyone who violates this prohibition will be punished. Every individual has a right to do this, and other companies could try to enter the business, but the dominant protective agency is the only one that has the power to actually carry out this prohibition. It is the only one that can guarantee its clients that no unaccepted procedure will be applied to them.
But there is another important difference: the protective agency, in doing this, can put some independents in a situation of disadvantage. Specifically, those independents who use a prohibited method and cannot afford its services without great effort (or are even too poor to pay no matter what). These people will be at the expense of paying clients of the agency.
In the previous chapter we saw that it was necessary to compensate others for the disadvantages imposed on them. We also saw that this compensation would amount only to the extra cost imposed on the disadvantaged beyond the costs that he would otherwise incur (in this case, the costs of the risky/unknown procedure he would want to apply). Nevertheless, it would amount to even the full price of a simple protection policy if the independent is unable to pay for it after the compensation for the disadvantages.
Also, the protective services that count here are strictly against "paying clients", because these are the ones against whom the independent was defenceless in the first place.
But wouldn’t this compensation mechanism generate another free riding problem? Nozick says not much of one, because the compensation is only “the amount that would equal the cost of an unfancy policy when added to the sum of the monetary costs of self-help protection plus whatever amount the person comfortably could pay”. Also, as we just said, it is an unfancy policy that protects only against paying clients, not against compensated clients and other independents. Therefore, the more free riders there are, the more important it becomes to buy a full protection policy.
We can see that what we now have resembles a state. In chapter 3 Nozick argued that two necessary conditions for an organization to be a state were that it holds the kind of monopoly over the use of force described above and that it protects (almost) everyone within its territory.
The protective agency resembles a state in these two conditions. Firstly, it is a "de facto" monopoly due to the competitive advantage mentioned earlier. It does not have any special right to be one; it simply is one.
<templatestyles src="Template:Blockquote/styles.css" />“Our explanation does not assume or claim that might makes right. But might does make enforced prohibitions, even if no one thinks the mighty have a "special" entitlement to have realized in the world their own view of which prohibitions are correctly enforced”.
Secondly, most of the people are its clients. There may be independents, however, who apply procedures it approves of. Also, there might still be independents who apply methods that it disapproves of to other independents with unreliable procedures.
These conditions are important because they are the basis for the “individualist anarchist” to claim that every state is necessarily illegitimate. This part of the book is a refutation of that claim, showing that some states could be formed by a series of legitimate steps. The "de facto" monopoly has arisen by morally permissible steps, and the universal protection is not really redistributive because the people who are given money or protective services at a discount had a right to this as a compensation for the disadvantages forced upon them. Therefore, the state is not violating anybody’s rights.
Note that this is not a state as we usually understand it. It is presumably organized more like a company and, more importantly, there still exist independents. But, as Nozick says:
<templatestyles src="Template:Blockquote/styles.css" />“Clearly the dominant agency has almost all the features specified [by anthropologist Lawrence Krader]; and its enduring administrative structures, with full-time specialized personnel, make it diverge greatly – in the direction of a state – from what anthropologists call a stateless society”.
He does recognize, however, that this entity does not fit perfectly in the Weberian tradition of the definition of the state. It is not “the sole authorizer of violence”, since some independents may conduct violence to one another without intervention. But it is the sole "effective" judge over the permissibility of violence. Therefore, he concludes, this may be also called a “statelike entity.”
Finally, Nozick warns us that the step from being just a "de facto" monopoly (the "ultraminimal state") to becoming this “statelike entity” that compensates some independents (the "minimal" state) is not a necessary one. Compensating is a moral obligation. But this does not invalidate Nozick’s response to the individualist anarchist and it remains an invisible hand explanation: after all, to give universal protection the agency does not need to have any plan to become a state. It just happens if it decides to give the protection it owes.
Further considerations on the argument for the state.
A discussion of pre-emptive attack leads Nozick to a principle that excludes prohibiting actions not wrong in themselves, even if those actions make more likely the commission of wrongs later on. This provides him with a significant difference between a protection agency's prohibitions against procedures it deems unreliable or unfair, and other prohibitions that might seem to go too far, such as forbidding others to join another protective agency. Nozick's principle does not disallow others from doing so.
Distributive justice.
Nozick's discussion of Rawls's theory of justice raised a prominent dialogue between libertarianism and liberalism. He sketches an entitlement theory, which states, "From each as they choose, to each as they are chosen". It comprises a theory of (1) justice in acquisition; (2) justice in rectification if (1) is violated (rectification which might require apparently redistributive measures); (3) justice in holdings, and (4) justice in transfer. Assuming justice in acquisition, entitlement to holdings is a function of repeated applications of (3) and (4). Nozick's entitlement theory is a non-patterned historical principle. Almost all other principles of distributive justice (egalitarianism, utilitarianism) are patterned principles of justice. Such principles follow the form, "to each according to..."
Nozick's famous Wilt Chamberlain argument is an attempt to show that patterned principles of just distribution are incompatible with liberty. He asks us to assume that the original distribution in society, D1, is ordered by our choice of patterned principle, for instance Rawls's Difference Principle. Wilt Chamberlain is an extremely popular basketball player in this society, and Nozick further assumes 1 million people are willing to freely give Chamberlain 25 cents each to watch him play basketball over the course of a season (we assume no other transactions occur). Chamberlain now has $250,000, a much larger sum than any of the other people in the society. This new distribution in society, call it D2, obviously is no longer ordered by our favored pattern that ordered D1. However Nozick argues that D2 is just. For if each agent freely exchanges some of his D1 share with the basketball player and D1 was a just distribution (we know D1 was just, because it was ordered according to the favored patterned principle of distribution), how can D2 fail to be a just distribution? Thus Nozick argues that what the Wilt Chamberlain example shows is that no patterned principle of just distribution will be compatible with liberty. In order to preserve the pattern, which arranged D1, the state will have to continually interfere with people's ability to freely exchange their D1 shares, for any exchange of D1 shares explicitly involves violating the pattern that originally ordered it.
Nozick analogizes taxation with forced labor, asking the reader to imagine a man who works longer to gain income to buy a movie ticket and a man who spends his extra time on leisure (for instance, watching the sunset). What, Nozick asks, is the difference between seizing the second man's leisure (which would be forced labor) and seizing the first man's goods? "Perhaps there is no difference in principle," Nozick concludes, and notes that the argument could be extended to taxation on other sources besides labor. "End-state and most patterned principles of distributive justice institute (partial) ownership by others of people and their actions and labor. These principles involve a shift from the classical liberals' notion of self ownership to a notion of (partial) property rights in "other" people."
Nozick then briefly considers Locke's theory of acquisition. After considering some preliminary objections, he "adds an additional bit of complexity" to the structure of the entitlement theory by refining Locke's proviso that "enough and as good" must be left in common for others by one's taking property in an unowned object. Nozick favors a "Lockean proviso" that forbids appropriation when the position of others is thereby worsened. For instance, appropriating the only water hole in a desert and charging monopoly prices would not be legitimate. But in line with his endorsement of the historical principle, this argument does not apply to the medical researcher who discovers a cure for a disease and sells it for whatever price he will. Nor does Nozick provide any means or theory whereby abuses of appropriation—acquisition of property when there is "not" enough and as good in common for others—should be corrected.
The Difference Principle.
Nozick attacks John Rawls's Difference Principle on the ground that the well-off could threaten a lack of social cooperation to the worse-off, just as Rawls implies that the worse-off will be assisted by the well-off for the sake of social cooperation. Nozick asks why the well-off would be obliged, due to their inequality and for the sake of social cooperation, to assist the worse-off and not have the worse-off accept the inequality and benefit the well-off. Furthermore, Rawls's idea regarding morally arbitrary natural endowments comes under fire; Nozick argues that natural advantages that the well-off enjoy do not violate anyone's rights and that, therefore, the well-off have a right to them. He also states that Rawls's proposal that inequalities be geared toward assisting the worse-off is morally arbitrary in itself.
Original position.
Nozick's opinions on "historical entitlement" ensure that he naturally rejects the "Original Position" since he argues that in the "Original Position" individuals will use an "end-state" principle to determine the outcome, whilst he explicitly states the importance of the historicity of any such decisions (for example punishments and penalties will require historical information).
Equality, Envy, Exploitation, Etc..
Nozick presses "the major objection" to theories that bestow and enforce positive rights to various things such as equality of opportunity, life, and so on. "These 'rights' require a substructure of things and materials and actions," he writes, "and 'other' people may have rights and entitlements over these."
Nozick concludes that "Marxian exploitation is the exploitation of people's lack of understanding of economics."
Demoktesis.
Demoktesis is a thought-experiment designed to show the incompatibility of democracy with libertarianism in general and the entitlement theory specifically. People desirous of more money might "hit upon the idea of incorporating themselves, raising money by selling shares in themselves." They would partition such rights as which occupation one would have. Though perhaps no one sells himself into utter slavery, there arises through voluntary exchanges a "very extensive domination" of some person by others. This intolerable situation is avoided by writing new terms of incorporation that for any stock no one already owning more than a certain number of shares may purchase it. As the process goes on, everyone sells off rights in themselves, "keeping one share in each right as their own, so they can attend stockholders' meetings if they wish." The inconvenience of attending such meetings leads to a special occupation of stockholders' representative. There is a great dispersal of shares such that almost everybody is deciding about everybody else. The system is still unwieldy, so a "great consolidational convention" is convened for buying and selling shares, and after a "hectic three days (lo and behold!)" each person owns exactly one share in each right over every other person, including himself. So now there can be just one meeting in which everything is decided for everybody. Attendance is too great and it's boring, so it is decided that only those entitled to cast at least 100,000 votes may attend the grand stockholders' meeting. And so on. Their social theorists call the system "demoktesis" (from Greek δῆμος "demos", "people" and κτῆσις "ktesis", "ownership"), "ownership of the people, by the people, and for the people", and declare it the highest form of social life, one that must not be allowed to perish from the earth. With this "eldritch tale" we have in fact arrived at a modern democratic state.
A Framework for Utopia.
The utopia mentioned in the title of Nozick's first book is a meta-utopia, a framework for voluntary migration between utopias tending towards worlds in which everybody benefits from everybody else's presence. This is meant to be the Lockean "night-watchman state" writ large. The state protects individual rights and makes sure that contracts and other market transactions are voluntary. The meta-utopian framework reveals what is inspiring and noble in this night-watchman function. They both contain the only form of social union that is possible for the atomistic rational agents of "Anarchy, State, and Utopia", fully voluntary associations of mutual benefit. The influence of this idea on Nozick's thinking is profound. Even in his last book, "Invariances", he is still concerned to give priority to the mutual-benefit aspect of ethics. This coercively enforceable aspect ideally has an "empty core" in the game theorists' sense: the core of a game is all of those payoff vectors to the group wherein no subgroup can do better for itself acting on its own, without cooperating with others not in the subgroup. The worlds in Nozick's meta-utopia have empty cores. No subgroup of a utopian world is better off to emigrate to its own smaller world. The function of ethics is fundamentally to create and stabilize such empty cores of mutually beneficial cooperation. His view is that we are fortunate to live under conditions that favor "more-extensive cores", and less conquest, slavery, and pillaging, "less imposition of noncore vectors upon subgroups." Higher moral goals are real enough, but they are parasitic (as described in "The Examined Life", the chapter "Darkness and Light") upon mutually beneficial cooperation.
In Nozick's utopia if people are not happy with the society they are in they can leave and start their own community, but he fails to consider that there might be things that prevent a person from leaving or moving about freely. Thomas Pogge states that items that are not socially induced can restrict people's options. Nozick states that for the healthy to have to support the handicapped imposes on their freedom, but Pogge argues that it introduces an inequality. This inequality restricts movement based on the ground rules Nozick has implemented, which could lead to feudalism and slavery, a society which Nozick himself would reject. David Schaefer notes that Nozick himself claims that a person could sell himself into slavery, which would break the very ground rule that was created, restricting the movement and choices that a person could make.
Other topics covered in the book.
Retributive and deterrence theories of punishment.
In chapter 4 Nozick discusses two theories of punishment: the deterrence theory and the retributive theory. To compare them, we have to take into account the decision that a potential infractor faces, which may be determined by:
formula_1
where G is the gain from violating the victim's rights, p is the probability of getting caught, and (C + D + E) is the total cost the infractor would face if caught. Specifically, C is full compensation to the victim, D stands for the emotional costs the infractor would face if caught (being apprehended, placed on trial and so on), and E is the financial cost of the processes of apprehension and trial.
So if this equation is positive, the potential infractor will have an incentive to violate the potential victim's rights.
Here the two theories come into play. On a retributive justice framework, an additional cost R should be imposed to the transgressor that is proportional to the harm done (or intended to be done).
Specifically, formula_2, where r is the degree of responsibility the infractor has and formula_3.
Therefore, the decision a potential infractor would now face would be:
formula_4
But this still will not deter all people. The equation would be positive if G is high enough or, more importantly, if p is low. That is, if it is very unlikely that an infractor will be caught, they may very well choose to do it even if they have to face the new cost R. Therefore, retributive justice theories allow some failures of deterrence.
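To make this comparison concrete, the following sketch evaluates the expected net gain with and without the retributive penalty R for a few capture probabilities. All numbers are hypothetical, chosen only for illustration, and are not taken from Nozick.

```python
# Illustrative calculation of the infractor's expected net gain discussed above.
# Every numeric value below is hypothetical; the point is only to show how a low
# probability of capture p can leave the expected gain positive even after
# adding the retributive penalty R = r * H.

def expected_net_gain(G, p, C, D, E, R=0.0):
    """Expected net gain: G*(1-p) - (C + D + E + R)*p."""
    return G * (1 - p) - (C + D + E + R) * p

G, H, r = 1000.0, 1000.0, 1.0   # gains, harm done, full responsibility
C, D, E = 1000.0, 500.0, 500.0  # compensation, emotional costs, process costs
R = r * H                       # retributive penalty proportional to the harm

for p in (0.5, 0.1, 0.01):
    print(f"p={p:4.2f}  without R: {expected_net_gain(G, p, C, D, E):9.2f}"
          f"  with R: {expected_net_gain(G, p, C, D, E, R):9.2f}")
```

With these sample values the expected gain is negative when p = 0.5 but positive for p = 0.1 and p = 0.01 even with R included, which is exactly the failure of deterrence described above.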
On the other hand, deterrence theories ("the penalty for a crime should be the minimal one necessary to deter commission of it") do not give enough guidance on how much deterrence we should aim at. If every single possible violation of rights is to be deterred, "the penalty will be set unacceptably high". The problem here is that the infractor may be punished well beyond the harm done in order to deter "other people".
According to Nozick, the utilitarian response to the latter problem would be to raise the penalty up to the point where more additional unhappiness would be created by it than would be saved for those who will not be victimized as a consequence of the additional penalty. But this will not do, according to Nozick, because it raises another problem: should the happiness of the victim have more weight in the calculation than the happiness of the felon? If so, how much?
He concludes that the retributive framework is better on grounds of simplicity.
Similarly, under the retributive theory, he contends that self-defense is appropriate even if the victim uses more force in defense than the harm threatened. In particular, he proposes that the maximum amount of force that a potential victim can use is:
formula_5
In this case H is the harm that the victim thinks the aggressor is going to inflict upon them. However, if the victim uses more force than f(H), that additional force has to be subtracted later from the punishment that the felon receives.
Animal rights and utilitarianism.
Nozick discusses in chapter 3 whether animals have rights too or whether they can be used, and if the species of the animal says anything about the extent to which this can be done. He also analyzes the proposal "utilitarianism for animals, Kantianism for people," ultimately rejecting it, saying: "Even for animals, utilitarianism won't do as the whole story, but the thicket of questions daunts us." Here Nozick also espouses ethical vegetarianism, saying: "Though I should say in my view the extra benefits Americans today can gain from eating animals do "not" justify doing it. So we shouldn't." Philosopher Josh Milburn has argued that Nozick's contributions have been overlooked in the literature on both animal ethics and libertarianism.
Reception.
"Anarchy, State, and Utopia" came out of a semester-long course that Nozick taught with Michael Walzer at Harvard in 1971, called "Capitalism and Socialism". The course was a debate between the two; Nozick's side is in "Anarchy, State, and Utopia," and Walzer's side is in his "Spheres of Justice" (1983), in which he argues for "complex equality".
Murray Rothbard, an anarcho-capitalist, criticizes "Anarchy, State, and Utopia" in his essay "Robert Nozick and the Immaculate Conception of the State" on the basis that:
The American legal scholar Arthur Allen Leff criticized Nozick in his 1979 article "Unspeakable Ethics, Unnatural Law". Leff stated that Nozick built his entire book on the bald assertion that "individuals have rights which may not be violated by other individuals", for which no justification is offered. According to Leff, no such justification is possible either. Any desired ethical statement, including a negation of Nozick's position, can easily be "proved" with apparent rigor as long as one takes the licence to simply establish a grounding principle by assertion. Leff further calls "ostentatiously unconvincing" Nozick's proposal that differences among individuals will not be a problem if like-minded people form geographically isolated communities.
Philosopher Jan Narveson described Nozick's book as "brilliant".
Cato Institute fellow Tom G. Palmer writes that "Anarchy, State, and Utopia" is "witty and dazzling", and offers a strong criticism of John Rawls's "A Theory of Justice". Palmer adds that, "Largely because of his remarks on Rawls and the extraordinary power of his intellect, Nozick's book was taken quite seriously by academic philosophers and political theorists, many of whom had not read contemporary libertarian (or classical liberal) material and considered this to be the only articulation of libertarianism available. Since Nozick was writing to defend the limited state and did not justify his starting assumption that individuals have rights, this led some academics to dismiss libertarianism as 'without foundations,' in the words of the philosopher Thomas Nagel. When read in light of the explicit statement of the book's purpose, however, this criticism is misdirected".
Libertarian author David Boaz writes that "Anarchy, State, and Utopia", together with Rothbard's "For a New Liberty" (1973) and Ayn Rand's essays on political philosophy, "defined the 'hard-core' version of modern libertarianism, which essentially restated Spencer's law of equal freedom: Individuals have the right to do whatever they want to do, so long as they respect the equal rights of others."
In the article "Social Unity and Primary Goods", republished in his "Collected Papers" (1999), Rawls notes that Nozick handles Sen's liberal paradox in a manner that is similar to his own. However, the rights that Nozick takes to be fundamental and the basis for regarding them to be such are different from the equal basic liberties included in justice as fairness and Rawls conjectures that they are thus not inalienable.
In "Lectures on the History of Political Philosophy" (2007), Rawls notes that Nozick assumes that just transactions are "justice preserving" in much the same way that logical operations are "truth preserving". Thus, as explained in Distributive justice above, Nozick holds that repetitive applications of "justice in holdings" and "justice in transfer" preserve an initial state of justice obtained through "justice in acquisition or rectification". Rawls points out that this is simply an assumption or presupposition and requires substantiation. In reality, he maintains, small inequalities established by just transactions accumulate over time and eventually result in large inequalities and an unjust situation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p \\cdot H \\geq V"
},
{
"math_id": 1,
"text": "G \\cdot (1-p)-(C+D+E) \\cdot p "
},
{
"math_id": 2,
"text": "R = r\\cdot H"
},
{
"math_id": 3,
"text": "0 \\leq r \\leq 1"
},
{
"math_id": 4,
"text": "G\\cdot (1 - p) - (C+D+E+R) \\cdot p "
},
{
"math_id": 5,
"text": "f(H) + r \\cdot H ~~~~~(where~~ f(H) \\geq H ) "
}
] | https://en.wikipedia.org/wiki?curid=852930 |
8529655 | Hilbert system | System of formal deduction in logic
In logic, more specifically proof theory, a Hilbert system, sometimes called Hilbert calculus, Hilbert-style system, Hilbert-style proof system, Hilbert-style deductive system or Hilbert–Ackermann system, is a type of formal proof system attributed to Gottlob Frege and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well.
It is defined as a deductive system that generates theorems from axioms and inference rules, especially if the only inference rule is modus ponens. Every Hilbert system is an axiomatic system; many authors use only this less specific term for their Hilbert systems, without mentioning any more specific term. In this context, "Hilbert systems" are contrasted with natural deduction systems, in which no axioms are used, only inference rules.
While all sources that refer to an "axiomatic" logical proof system characterize it simply as a logical proof system with axioms, sources that use variants of the term "Hilbert system" sometimes define it in different ways, which will not be used in this article. For instance, Troelstra defines a "Hilbert system" as a system with axioms "and" with formula_0 and formula_1 as the only inference rules. A specific set of axioms is also sometimes called "the Hilbert system", or "the Hilbert-style calculus". Sometimes, "Hilbert-style" is used to convey the type of axiomatic system that has its axioms given in "schematic" form, as in the schematic form of P2 below, but other sources use the term "Hilbert-style" as encompassing both systems with schematic axioms and systems with a rule of substitution, as this article does. The use of "Hilbert-style" and similar terms to describe axiomatic proof systems in logic is due to the influence of Hilbert and Ackermann's "Principles of Mathematical Logic" (1928).
Most variants of Hilbert systems take a characteristic tack in the way they balance a trade-off between logical axioms and rules of inference. Hilbert systems can be characterised by the choice of a large number of schemas of logical axioms and a small set of rules of inference. Systems of natural deduction take the opposite tack, including many deduction rules but very few or no axiom schemas. The most commonly studied Hilbert systems have either just one rule of inference – modus ponens, for propositional logics – or two – with generalisation, to handle predicate logics, as well – and several infinite axiom schemas. Hilbert systems for alethic modal logics, sometimes called Hilbert-Lewis systems, additionally require the necessitation rule. Some systems use a finite list of concrete formulas as axioms instead of an infinite set of formulas via axiom schemas, in which case the uniform substitution rule is required.
A characteristic feature of the many variants of Hilbert systems is that the "context" is not changed in any of their rules of inference, while both natural deduction and sequent calculus contain some context-changing rules. Thus, if one is interested only in the derivability of tautologies, not in hypothetical judgments, then one can formalize the Hilbert system in such a way that its rules of inference contain only judgments of a rather simple form. The same cannot be done with the other two deduction systems: as context is changed in some of their rules of inference, they cannot be formalized so that hypothetical judgments could be avoided – not even if we want to use them just for proving derivability of tautologies.
Formal deductions.
In a Hilbert system, a formal deduction (or proof) is a finite sequence of formulas in which each formula is either an axiom or is obtained from previous formulas by a rule of inference. These formal deductions are meant to mirror natural-language proofs, although they are far more detailed.
Suppose formula_2 is a set of formulas, considered as hypotheses. For example, formula_2 could be a set of axioms for group theory or set theory. The notation formula_3 means that there is a deduction that ends with formula_4 using as axioms only logical axioms and elements of formula_2. Thus, informally, formula_3 means that formula_4 is provable assuming all the formulas in formula_2.
Hilbert systems are characterized by the use of numerous schemas of logical axioms. An axiom schema is an infinite set of axioms obtained by substituting all formulas of some form into a specific pattern. The set of logical axioms includes not only those axioms generated from this pattern, but also any generalization of one of those axioms. A generalization of a formula is obtained by prefixing zero or more universal quantifiers on the formula; for example formula_5 is a generalization of formula_6.
Propositional logic.
The following are some Hilbert systems that have been used in propositional logic. One of them, the schematic form of P2, is also considered a Frege system.
Frege's "Begriffsschrift".
Axiomatic proofs have been used in mathematics since the famous Ancient Greek textbook, Euclid's "Elements of Geometry", c. 300 BC. But the first known fully formalized proof system that thereby qualifies as a Hilbert system dates back to Gottlob Frege's 1879 "Begriffsschrift". Frege's system used only implication and negation as connectives, and it had six axioms, which were the following:
formula_7
formula_8
formula_9
formula_10
formula_11
formula_12
These were used by Frege together with modus ponens and a rule of substitution (which was used but never precisely stated) to yield a complete and consistent axiomatization of classical truth-functional propositional logic.
Łukasiewicz's P2.
Jan Łukasiewicz showed that, in Frege's system, "the third axiom is superfluous since it can be derived from the preceding two axioms, and that the last three axioms can be replaced by the single sentence formula_13". Taken out of Łukasiewicz's Polish notation into modern notation, this means formula_14. Hence, Łukasiewicz is credited with this system of three axioms:
formula_15
formula_16
formula_17
Just like Frege's system, this system uses a substitution rule and uses modus ponens as an inference rule. The exact same system was given (with an explicit substitution rule) by Alonzo Church, who referred to it as the system P2, and helped popularize it.
Schematic form of P2.
One may avoid using the rule of substitution by giving the axioms in schematic form, using them to generate an infinite set of axioms. Hence, using Greek letters to represent schemas (metalogical variables that may stand for any well-formed formulas), the axioms are given as:
formula_18
formula_19
formula_20
The schematic version of P2 is attributed to John von Neumann, and is used in the Metamath "set.mm" formal proof database. In fact, the very idea of using axiom schemes to replace the rule of substitution is attributed to von Neumann. The schematic version of P2 has also been attributed to Hilbert, and named formula_21 in this context.
Systems for propositional logic whose inference rules are schematic are also called Frege systems; as the authors that originally defined the term "Frege system" note, this actually excludes Frege's own system, given above, since it had axioms instead of axiom schemes.
Proof example in P2.
As an example, a proof of formula_22 in P2 is given below. First, the axioms are given names:
(A1) formula_23
(A2) formula_24
(A3) formula_25
And the proof is as follows:
(1) formula_26 (instance of (A1))
(2) formula_27 (instance of (A2))
(3) formula_28 (from (1) and (2) by modus ponens)
(4) formula_29 (instance of (A1))
(5) formula_22 (from (4) and (3) by modus ponens)
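The derivation above can also be checked mechanically. The following sketch is not part of the original presentation: it represents formulas as nested tuples, matches them against the axiom schemas (A1)–(A3), and accepts a line only if it is an axiom instance or follows from earlier lines by modus ponens.

```python
# A minimal checker for Hilbert-style proofs in the schematic system P2.
# Formulas are nested tuples: ('->', a, b) for implication, ('~', a) for
# negation, and plain strings for propositional variables.

IMP, NOT = '->', '~'

A1 = (IMP, 'p', (IMP, 'q', 'p'))
A2 = (IMP, (IMP, 'p', (IMP, 'q', 'r')),
           (IMP, (IMP, 'p', 'q'), (IMP, 'p', 'r')))
A3 = (IMP, (IMP, (NOT, 'p'), (NOT, 'q')), (IMP, 'q', 'p'))

def match(schema, formula, env):
    """Try to match a formula against an axiom schema, binding p, q, r."""
    if isinstance(schema, str):                      # schematic variable
        if schema in env:
            return env[schema] == formula
        env[schema] = formula
        return True
    return (isinstance(formula, tuple) and len(schema) == len(formula)
            and schema[0] == formula[0]
            and all(match(s, f, env) for s, f in zip(schema[1:], formula[1:])))

def is_axiom(formula):
    return any(match(ax, formula, {}) for ax in (A1, A2, A3))

def check(proof):
    for i, line in enumerate(proof):
        by_mp = any(proof[j] == (IMP, proof[k], line)
                    for j in range(i) for k in range(i))
        assert is_axiom(line) or by_mp, f"line {i + 1} is not justified"
    return True

A, B = 'A', 'B'
step1 = (IMP, A, (IMP, (IMP, B, A), A))                          # instance of A1
step2 = (IMP, step1, (IMP, (IMP, A, (IMP, B, A)), (IMP, A, A)))  # instance of A2
step3 = (IMP, (IMP, A, (IMP, B, A)), (IMP, A, A))                # MP from 1, 2
step4 = (IMP, A, (IMP, B, A))                                    # instance of A1
step5 = (IMP, A, A)                                              # MP from 4, 3
print(check([step1, step2, step3, step4, step5]))                # True
```

The tuple encoding and the pattern-matching routine are implementation choices made for this sketch, not anything prescribed by the system P2 itself.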
Predicate logic (example system).
There is an unlimited amount of axiomatisations of predicate logic, since for any logic there is freedom in choosing axioms and rules that characterise that logic. We describe here a Hilbert system with nine axioms and just the rule modus ponens, which we call the one-rule axiomatisation and which describes classical equational logic. We deal with a minimal language for this logic, where formulas use only the connectives formula_30 and formula_31 and only the quantifier formula_32. Later we show how the system can be extended to include additional logical connectives, such as formula_33 and formula_34, without enlarging the class of deducible formulas.
The first four logical axiom schemas allow (together with modus ponens) for the manipulation of logical connectives.
P1. formula_35
P2. formula_36
P3. formula_37
P4. formula_38
The axiom P1 is redundant, as it follows from P3, P2 and modus ponens (see proof). These axioms describe classical propositional logic; without axiom P4 we get positive implicational logic. Minimal logic is achieved either by adding instead the axiom P4m, or by defining formula_39 as formula_40.
P4m. formula_41
Intuitionistic logic is achieved by adding axioms P4i and P5i to positive implicational logic, or by adding axiom P5i to minimal logic. Both P4i and P5i are theorems of classical propositional logic.
P4i. formula_42
P5i. formula_43
Note that these are axiom schemas, which represent infinitely many specific instances of axioms. For example, P1 might represent the particular axiom instance formula_44, or it might represent formula_45: the formula_4 is a place where any formula can be placed. A variable such as this that ranges over formulae is called a 'schematic variable'.
With a second rule of uniform substitution (US), we can change each of these axiom schemas into a single axiom, replacing each schematic variable by some propositional variable that isn't mentioned in any axiom to get what we call the substitutional axiomatisation. Both formalisations have variables, but where the one-rule axiomatisation has schematic variables that are outside the logic's language, the substitutional axiomatisation uses propositional variables that do the same work by expressing the idea of a variable ranging over formulae with a rule that uses substitution.
US. Let formula_46 be a formula with one or more instances of the propositional variable formula_47, and let formula_48 be another formula. Then from formula_46, infer formula_49.
The next three logical axiom schemas provide ways to add, manipulate, and remove universal quantifiers.
Q5. formula_50 where "t" may be substituted for "x" in formula_51
Q6. formula_52
Q7. formula_53 where "x" is not free in formula_4.
These three additional rules extend the propositional system to axiomatise classical predicate logic. Likewise, these three rules extend system for intuitionstic propositional logic (with P1-3 and P4i and P5i) to intuitionistic predicate logic.
Universal quantification is often given an alternative axiomatisation using an extra rule of generalisation (see the section on Metatheorems), in which case the rules Q6 and Q7 are redundant.
The final axiom schemas are required to work with formulas involving the equality symbol.
I8. formula_54 for every variable "x".
I9. formula_55
Conservative extensions.
It is common to include in a Hilbert system only axioms for the logical operators implication and negation towards functional completeness. Given these axioms, it is possible to form conservative extensions of the deduction theorem that permit the use of additional connectives. These extensions are called conservative because if a formula φ involving new connectives is rewritten as a logically equivalent formula θ involving only negation, implication, and universal quantification, then φ is derivable in the extended system if and only if θ is derivable in the original system. When fully extended, a Hilbert system will resemble more closely a system of natural deduction.
Existential quantification introduction: formula_56
Existential quantification elimination: formula_57 where formula_58 is not a free variable of formula_48.
Conjunction introduction: formula_59
Conjunction elimination left: formula_60
Conjunction elimination right: formula_61
Disjunction introduction left: formula_62
Disjunction introduction right: formula_63
Disjunction elimination: formula_64
Metatheorems.
Because Hilbert systems have very few deduction rules, it is common to prove metatheorems that show that additional deduction rules add no deductive power, in the sense that a deduction using the new deduction rules can be converted into a deduction using only the original deduction rules. | [
{
"math_id": 0,
"text": "{\\rightarrow}E"
},
{
"math_id": 1,
"text": "{\\forall}I"
},
{
"math_id": 2,
"text": "\\Gamma"
},
{
"math_id": 3,
"text": "\\Gamma \\vdash \\phi"
},
{
"math_id": 4,
"text": "\\phi"
},
{
"math_id": 5,
"text": "\\forall y ( \\forall x Pxy \\to Pty)"
},
{
"math_id": 6,
"text": "\\forall x Pxy \\to Pty"
},
{
"math_id": 7,
"text": "a \\supset (b \\supset a)"
},
{
"math_id": 8,
"text": "(c \\supset (b \\supset a)) \\supset ((c \\supset b) \\supset (c \\supset a))"
},
{
"math_id": 9,
"text": "(d \\supset (b \\supset a)) \\supset (b \\supset (d \\supset a))"
},
{
"math_id": 10,
"text": "(b \\supset a) \\supset (\\neg a \\supset \\neg b)"
},
{
"math_id": 11,
"text": "\\neg \\neg a \\supset a"
},
{
"math_id": 12,
"text": "a \\supset \\neg \\neg a"
},
{
"math_id": 13,
"text": "CCNpNqCpq"
},
{
"math_id": 14,
"text": "(\\neg p \\rightarrow \\neg q) \\rightarrow (p \\rightarrow q)"
},
{
"math_id": 15,
"text": "p \\to (q \\to p)"
},
{
"math_id": 16,
"text": "(p \\to (q \\to r)) \\to ((p \\to q) \\to (p \\to r))"
},
{
"math_id": 17,
"text": "(\\neg p \\to \\neg q) \\to (q \\to p)"
},
{
"math_id": 18,
"text": "\\varphi \\to (\\psi \\to \\varphi)"
},
{
"math_id": 19,
"text": "(\\varphi \\to (\\psi \\to \\chi)) \\to ((\\varphi \\to \\psi) \\to (\\varphi \\to \\chi))"
},
{
"math_id": 20,
"text": "(\\neg \\varphi \\to \\neg \\psi) \\to (\\psi \\to \\varphi)"
},
{
"math_id": 21,
"text": "\\mathcal{H}"
},
{
"math_id": 22,
"text": " A \\to A "
},
{
"math_id": 23,
"text": "(p \\to (q \\to p))"
},
{
"math_id": 24,
"text": "((p \\to (q \\to r)) \\to ((p \\to q) \\to (p \\to r)))"
},
{
"math_id": 25,
"text": "((\\neg p \\to \\neg q) \\to (q \\to p))"
},
{
"math_id": 26,
"text": " A \\to ((B \\to A) \\to A)"
},
{
"math_id": 27,
"text": " (A \\to ((B \\to A) \\to A)) \\to ((A \\to (B \\to A)) \\to (A \\to A))"
},
{
"math_id": 28,
"text": " (A \\to (B \\to A)) \\to (A \\to A)"
},
{
"math_id": 29,
"text": " A \\to (B \\to A)"
},
{
"math_id": 30,
"text": "\\lnot"
},
{
"math_id": 31,
"text": "\\to"
},
{
"math_id": 32,
"text": "\\forall"
},
{
"math_id": 33,
"text": "\\land"
},
{
"math_id": 34,
"text": "\\lor"
},
{
"math_id": 35,
"text": "\\phi \\to \\phi "
},
{
"math_id": 36,
"text": "\\phi \\to \\left( \\psi \\to \\phi \\right) "
},
{
"math_id": 37,
"text": "\\left( \\phi \\to \\left( \\psi \\rightarrow \\xi \\right) \\right) \\to \\left( \\left( \\phi \\to \\psi \\right) \\to \\left( \\phi \\to \\xi \\right) \\right)"
},
{
"math_id": 38,
"text": "\\left ( \\lnot \\phi \\to \\lnot \\psi \\right) \\to \\left( \\psi \\to \\phi \\right) "
},
{
"math_id": 39,
"text": "\\lnot \\phi"
},
{
"math_id": 40,
"text": "\\phi \\to \\bot"
},
{
"math_id": 41,
"text": "\\left( \\phi \\to \\psi \\right) \\to \\left(\\left(\\phi \\to \\lnot \\psi \\right) \\to \\lnot \\phi \\right)"
},
{
"math_id": 42,
"text": "\\left(\\phi \\to \\lnot \\phi\\right) \\to \\lnot \\phi "
},
{
"math_id": 43,
"text": "\\lnot\\phi \\to \\left( \\phi \\to \\psi \\right) "
},
{
"math_id": 44,
"text": "p \\to p "
},
{
"math_id": 45,
"text": "\\left( p \\to q \\right) \\to \\left( p \\to q \\right) "
},
{
"math_id": 46,
"text": "\\phi(p)"
},
{
"math_id": 47,
"text": "p"
},
{
"math_id": 48,
"text": "\\psi"
},
{
"math_id": 49,
"text": "\\phi(\\psi)"
},
{
"math_id": 50,
"text": " \\forall x \\left( \\phi \\right) \\to \\phi[x:=t]"
},
{
"math_id": 51,
"text": "\\,\\!\\phi"
},
{
"math_id": 52,
"text": "\\forall x \\left( \\phi \\to \\psi \\right) \\to \\left( \\forall x \\left( \\phi \\right) \\to \\forall x \\left( \\psi \\right) \\right)"
},
{
"math_id": 53,
"text": " \\phi \\to \\forall x \\left( \\phi \\right) "
},
{
"math_id": 54,
"text": "x = x"
},
{
"math_id": 55,
"text": "\\left( x = y \\right) \\to \\left( \\phi[z:=x] \\to \\phi[z:=y] \\right)"
},
{
"math_id": 56,
"text": " \\forall x(\\phi \\to \\exists y(\\phi[x:=y])) "
},
{
"math_id": 57,
"text": " \\forall x(\\phi \\to \\psi) \\to \\exists x(\\phi) \\to \\psi "
},
{
"math_id": 58,
"text": "x"
},
{
"math_id": 59,
"text": " \\alpha\\to(\\beta\\to\\alpha\\land\\beta) "
},
{
"math_id": 60,
"text": " \\alpha\\wedge\\beta\\to\\alpha "
},
{
"math_id": 61,
"text": " \\alpha\\wedge\\beta\\to\\beta "
},
{
"math_id": 62,
"text": " \\alpha\\to\\alpha\\vee\\beta "
},
{
"math_id": 63,
"text": " \\beta\\to\\alpha\\vee\\beta "
},
{
"math_id": 64,
"text": " (\\alpha\\to\\gamma)\\to ((\\beta\\to\\gamma) \\to \\alpha\\vee\\beta \\to \\gamma) "
}
] | https://en.wikipedia.org/wiki?curid=8529655 |
8530877 | Hasegawa–Mima equation | In plasma physics, the Hasegawa–Mima equation, named after Akira Hasegawa and Kunioki Mima, is an equation that describes a certain regime of plasma, where the time scales are very fast, and the distance scale in the direction of the magnetic field is long. In particular the equation is useful for describing turbulence in some tokamaks. The equation was introduced in Hasegawa and Mima's paper submitted in 1977 to "Physics of Fluids", where they compared it to the results of the ATC tokamak.
Assumptions.
The magnetic field is assumed to be large enough that
formula_0
for all quantities of interest. When the particles in the plasma are moving through a magnetic field, they spin in a circle around the magnetic field. The frequency of oscillation, formula_1 known as the cyclotron frequency or gyrofrequency, is directly proportional to the magnetic field.
The plasma is also assumed to be quasineutral, so that
formula_2
where Z is the number of protons in the ions. If we are talking about hydrogen Z = 1, and n is the same for both species. This condition is true as long as the electrons can shield out electric fields. A cloud of electrons will surround any charge with an approximate radius known as the Debye length. For that reason this approximation means the size scale is much larger than the Debye length. The ion particle density can be expressed by a first order term that is the density defined by the quasineutrality condition equation, and a second order term which is how much it differs from the equation.
Finally, since the electrons are free to move along the direction of the magnetic field, they screen away electric potentials. This screening causes a Boltzmann distribution of electrons to form around the electric potentials:
formula_3
The equation.
The Hasegawa–Mima equation is a second order nonlinear partial differential equation that describes the electric potential. The form of the equation is:
formula_4
Although the quasineutrality condition holds, the small differences in density between the electrons and the ions cause an electric potential.
The Hasegawa–Mima equation is derived from the continuity equation:
formula_5
The fluid velocity can be approximated by the E cross B drift:
formula_6
Previous models derived their equations from this approximation. The divergence of the E cross B drift is zero, which keeps the fluid incompressible. However, the compressibility of the fluid is very important in describing the evolution of the system. Hasegawa and Mima argued that the assumption was invalid. The Hasegawa–Mima equation introduces a second order term for the fluid velocity known as the polarization drift in order to find the divergence of the fluid velocity. Due to the assumption of large magnetic field, the polarization drift is much smaller than the E cross B drift. Nevertheless, it introduces important physics.
For a two-dimensional incompressible fluid which is not a plasma, the Navier–Stokes equations say:
formula_7
after taking the curl of the momentum balance equation. This equation is almost identical to the Hasegawa–Mima equation except the second and fourth terms are gone, and the electric potential is replaced with the fluid velocity vector potential where:
formula_8
The first and third terms to the Hasegawa–Mima equation, which are the same as the Navier Stokes equation, are the terms introduced by adding the polarization drift. In the limit where the wavelength of a perturbation of the electric potential is much smaller than the gyroradius based on the sound speed, the Hasegawa–Mima equations become the same as the two-dimensional incompressible fluid.
Normalization.
One way to understand an equation more fully is to understand what it is normalized to, which gives you an idea of the scales of interest. The time, position, and electric potential are normalized to t', x', and formula_9.
The time scale for the Hasegawa–Mima equation is the inverse ion gyrofrequency:
formula_10
From the large magnetic field assumption the normalized time is very small. However, it is still large enough to get information out of it.
The distance scale is the gyroradius based on the sound speed:
formula_11
If you transform to k-space, it is clear that when k, the wavenumber, is much larger than one, the terms that make the Hasegawa–Mima equation differ from the equation derived from the Navier–Stokes equations for a two-dimensional incompressible flow become much smaller than the rest.
From the distance and time scales we can determine the scale for velocities. This turns out to be the sound speed. The Hasegawa–Mima equation shows us the dynamics of fast-moving sound waves, as opposed to the slower dynamics, such as flows, that are captured in the MHD equations. The motion is even faster than the sound speed given that the time scales are much smaller than the time normalization.
The potential is normalized to:
formula_12
Since the electrons fit a Maxwellian and the quasineutrality condition holds, this normalized potential is small, but similar order to the normalized time derivative.
The entire equation without normalization is:
formula_13
Although the time derivative divided by the cyclotron frequency is much smaller than unity, and the normalized electric potential is much smaller than unity, as long as the gradient is on the order of one, both terms are comparable to the nonlinear term. The unperturbed density gradient can also be just as small as the normalized electric potential and be comparable to the other terms.
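The magnitudes of these scales can be illustrated numerically. The parameters below (a deuterium plasma with B = 2 T and an electron temperature of 1 keV) are assumed, tokamak-like values chosen only for illustration; the formulas are written in SI units, so the factor of c appearing in the Gaussian-unit expression above is absent.

```python
# Rough magnitudes of the normalization scales for assumed, tokamak-like
# parameters (deuterium plasma, B = 2 T, T_e = 1 keV); SI units throughout.
import math

e   = 1.602176634e-19      # elementary charge [C]
m_i = 2 * 1.67262192e-27   # deuteron mass, approximated as two proton masses [kg]
B   = 2.0                  # magnetic field [T]
T_e = 1e3 * e              # electron temperature, 1 keV expressed in joules
Z   = 1

omega_ci = Z * e * B / m_i          # ion cyclotron frequency [rad/s]
c_s      = math.sqrt(T_e / m_i)     # sound speed based on T_e [m/s]
rho_s    = c_s / omega_ci           # gyroradius based on the sound speed [m]

print(f"omega_ci ~ {omega_ci:.3e} rad/s  (1/omega_ci ~ {1/omega_ci:.3e} s)")
print(f"c_s      ~ {c_s:.3e} m/s")
print(f"rho_s    ~ {rho_s:.3e} m")
```

For these assumed values the time scale 1/ω_ci is roughly ten nanoseconds and ρ_s is a few millimetres, much smaller than the size of a typical device.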
Other forms of the equation.
Often the Hasegawa–Mima equation is expressed in a different form using Poisson brackets. These Poisson brackets are defined as:
formula_14
Using this Poisson bracket, the equation can be re-expressed as:
formula_15
Often the particle density is assumed to vary uniformly in just one direction, and the equation is written in a slightly different form. The Poisson bracket including the density is replaced with the definition of the Poisson bracket, and a constant replaces the derivative of the density-dependent term.
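As a sketch of how the Poisson bracket can be evaluated numerically, the snippet below uses centred differences on a doubly periodic grid; the grid size and test fields are arbitrary choices, and production drift-wave codes typically prefer conservative discretizations such as the Arakawa scheme.

```python
# Centred-difference evaluation of the Poisson bracket [A, B] = A_x B_y - A_y B_x
# on a doubly periodic grid. Grid size and the test field are arbitrary choices.
import numpy as np

def bracket(A, B, dx, dy):
    Ax = (np.roll(A, -1, axis=0) - np.roll(A, 1, axis=0)) / (2 * dx)
    Ay = (np.roll(A, -1, axis=1) - np.roll(A, 1, axis=1)) / (2 * dy)
    Bx = (np.roll(B, -1, axis=0) - np.roll(B, 1, axis=0)) / (2 * dx)
    By = (np.roll(B, -1, axis=1) - np.roll(B, 1, axis=1)) / (2 * dy)
    return Ax * By - Ay * Bx

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
dx = dy = x[1] - x[0]

phi = np.sin(X) * np.cos(Y)
laplacian_phi = -2 * phi                       # exact for this choice of phi
# The bracket of phi with a multiple of itself vanishes, so this prints ~0:
print(np.max(np.abs(bracket(phi, laplacian_phi, dx, dy))))
```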
Conserved quantities.
There are two quantities that are conserved in a two-dimensional incompressible fluid.
The kinetic energy:
formula_16
And the enstrophy:
formula_17
For the Hasegawa–Mima equation, there are also two conserved quantities that are related to the above quantities. The generalized energy:
formula_18
And the generalized enstrophy:
formula_19
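As an illustrative sketch (the test field, grid, and use of numpy's FFT for the derivatives are arbitrary choices, not part of the original formulation), the two invariants can be evaluated on a doubly periodic grid as follows; in a simulation these sums would be monitored in time to check that the discretization respects them.

```python
# Evaluating the generalized energy and enstrophy of a periodic test field.
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
phi = np.sin(X) * np.cos(2 * Y) + 0.3 * np.cos(3 * X)   # arbitrary test field

k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi        # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing='ij')
phi_hat = np.fft.fft2(phi)
phi_x = np.real(np.fft.ifft2(1j * KX * phi_hat))
phi_y = np.real(np.fft.ifft2(1j * KY * phi_hat))
lap   = np.real(np.fft.ifft2(-(KX**2 + KY**2) * phi_hat))

dA = (x[1] - x[0])**2
energy    = np.sum(phi**2 + phi_x**2 + phi_y**2) * dA   # generalized energy
enstrophy = np.sum(phi_x**2 + phi_y**2 + lap**2) * dA   # generalized enstrophy
print(energy, enstrophy)
```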
In the limit where the Hasegawa–Mima equation is the same as an incompressible fluid, the generalized energy, and enstrophy become the same as the kinetic energy and enstrophy. | [
{
"math_id": 0,
"text": "\n\\frac{1}{\\omega_{ci}}\\frac{\\partial}{\\partial t} \\ll 1\n"
},
{
"math_id": 1,
"text": "\\omega_{ci}"
},
{
"math_id": 2,
"text": "\nn_e \\approx Z n_i \\,\n"
},
{
"math_id": 3,
"text": "\nn = n_0 e^{e\\phi/T_e} \\,\n"
},
{
"math_id": 4,
"text": "\n\\frac{\\partial}{\\partial t}\\left(\\nabla^2\\phi-\\phi\\right)-\\left[\\left(\\nabla\\phi\\times \\mathbf{\\hat z}\\right)\\cdot\\nabla\\right]\\left[\\nabla^2\\phi-\\ln\\left( n_0\\right)\\right]=0.\n"
},
{
"math_id": 5,
"text": "\n\\frac{\\partial n}{\\partial t} + \\nabla\\cdot (n\\mathbf{ v}) = 0.\n"
},
{
"math_id": 6,
"text": "\n\\mathbf{ v_E} = \\frac{\\mathbf{ E}\\times \\mathbf{ B}}{cB^2} = \\frac{-\\nabla\\phi\\times\\mathbf{\\hat z}}{cB}.\n"
},
{
"math_id": 7,
"text": "\n\\frac{\\partial}{\\partial t}\\left(\\nabla^2\\psi\\right)-\\left[\\left(\\nabla\\psi\\times \\mathbf{\\hat z}\\right)\\cdot\\nabla\\right]\\nabla^2\\psi =0\n"
},
{
"math_id": 8,
"text": "\n\\mathbf{ v} = -\\nabla\\psi\\times\\mathbf{\\hat z}.\n"
},
{
"math_id": 9,
"text": "\\phi'"
},
{
"math_id": 10,
"text": "\nt' = \\omega_{ci} t, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\omega_{ci} = \\frac{eZB}{m_i c}.\n"
},
{
"math_id": 11,
"text": "\nx' = \\frac{x}{\\rho_s}, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\rho_s^2 \\equiv \\frac{T_e}{m_i\\omega_{ci}^2}.\n"
},
{
"math_id": 12,
"text": "\n\\phi' =\\frac{e\\phi}{T_e}.\n"
},
{
"math_id": 13,
"text": "\n\\frac{1}{\\omega_{ci}}\\frac{\\partial}{\\partial t}\\left(\\rho_s^2\\nabla^2\\frac{e\\phi}{T_e}-\\frac{e\\phi}{T_e}\\right)-\\left[\\left(\\rho_s\\nabla \\frac{e\\phi}{T_e}\\times \\mathbf{\\hat z}\\right)\\cdot\\rho_s\\nabla\\right]\\left[\\rho_s^2\\nabla^2\\frac{e\\phi}{T_e}-\\ln\\left(\\frac{n_0}{\\omega_{ci}}\\right)\\right]=0.\n"
},
{
"math_id": 14,
"text": "\n\\left[A,B\\right] \\equiv \\frac{\\partial A}{\\partial x}\\frac{\\partial B}{\\partial y}-\\frac{\\partial A}{\\partial y}\\frac{\\partial B}{\\partial x}.\n"
},
{
"math_id": 15,
"text": "\n\\frac{\\partial}{\\partial t}\\left(\\nabla^2\\phi-\\phi\\right)+\\left[\\phi,\\nabla^2\\phi\\right]-\\left[\\phi,\\ln\\left(\\frac{n_0}{\\omega_{ci}}\\right)\\right]=0.\n"
},
{
"math_id": 16,
"text": "\n\\int\\left(\\nabla\\psi\\right)^2dV = \\int v_x^2 + v_y^2\\,dV.\n"
},
{
"math_id": 17,
"text": "\n\\int\\left(\\nabla^2\\psi\\right)^2\\,dV = \\int\\left(\\nabla\\times \\mathbf{ v}\\right)^2\\,dV.\n"
},
{
"math_id": 18,
"text": "\n\\int\\left[\\phi^2+\\left(\\nabla\\phi\\right)^2\\right]\\,dV.\n"
},
{
"math_id": 19,
"text": "\n\\int\\left[\\left(\\nabla\\phi\\right)^2+\\left(\\nabla^2\\phi\\right)^2\\right]\\,dV.\n"
}
] | https://en.wikipedia.org/wiki?curid=8530877 |
8531265 | Perfect spline | In the mathematical subfields function theory and numerical analysis, a univariate polynomial spline of order formula_0 is called a perfect spline if its formula_0-th derivative is equal to formula_1 or formula_2 between knots and changes its sign at every knot.
The term was coined by Isaac Jacob Schoenberg.
Perfect splines often give solutions to various extremal problems in mathematics. For example, norms of periodic perfect splines (they are sometimes called Euler perfect splines) are equal to Favard's constants. | [
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "+1"
},
{
"math_id": 2,
"text": "-1"
}
] | https://en.wikipedia.org/wiki?curid=8531265 |
8531319 | Favard constant | In mathematics, the Favard constant, also called the Akhiezer–Krein–Favard constant, of order "r" is defined as
formula_0
This constant is named after the French mathematician Jean Favard, and after the Soviet mathematicians Naum Akhiezer and Mark Krein.
formula_1
formula_2
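For illustration, the first few constants can be approximated directly from the defining series; the truncation point below is an arbitrary choice.

```python
# Numerical check of the first few Favard constants from the defining series.
# For even r the series alternates, so the error of a partial sum is bounded by
# the first omitted term; the number of terms is an arbitrary choice.
import math

def favard(r, terms=10**6):
    return 4 / math.pi * sum(((-1) ** k / (2 * k + 1)) ** (r + 1)
                             for k in range(terms))

print(favard(0))              # ~1          (K_0 = 1)
print(favard(1), math.pi / 2) # ~1.570796   (K_1 = pi/2)
print(favard(2))              # ~1.2337     (K_2 = pi^2/8)
```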
Uses.
This constant is used in solutions of several extremal problems. For example, by the Akhiezer–Krein–Favard theorem, the least upper bound of the best uniform approximation by trigonometric polynomials of degree at most "n" − 1, taken over the class of periodic functions whose "r"-th derivative is bounded by one, equals "K""r" / "n""r".
{
"math_id": 0,
"text": "K_r = \\frac{4}{\\pi} \\sum\\limits_{k=0}^{\\infty} \\left[ \\frac{(-1)^k}{2k+1} \\right]^{r+1}."
},
{
"math_id": 1,
"text": "K_0 = 1."
},
{
"math_id": 2,
"text": "K_1 = \\frac{\\pi}{2}."
}
] | https://en.wikipedia.org/wiki?curid=8531319 |
853141 | Motzkin number | In mathematics, the nth Motzkin number is the number of different ways of drawing non-intersecting chords between n points on a circle (not necessarily touching every point by a chord). The Motzkin numbers are named after Theodore Motzkin and have diverse applications in geometry, combinatorics and number theory.
The Motzkin numbers formula_0 for formula_1 form the sequence:
1, 1, 2, 4, 9, 21, 51, 127, 323, 835, ... (sequence A001006 in the OEIS)
Examples.
The following figure shows the 9 ways to draw non-intersecting chords between 4 points on a circle ("M"4 = 9):
The following figure shows the 21 ways to draw non-intersecting chords between 5 points on a circle ("M"5 = 21):
Properties.
The Motzkin numbers satisfy the recurrence relations
formula_2
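As an illustration, both recurrences can be checked against the values listed above with a few lines of code (the helper below is not part of any standard library):

```python
# Computing Motzkin numbers from the two recurrences quoted above and checking
# them against each other; purely illustrative.
def motzkin(n_max):
    M = [1, 1]                                            # M_0, M_1
    for n in range(2, n_max + 1):
        convolution = sum(M[i] * M[n - 2 - i] for i in range(n - 1))
        M.append(M[n - 1] + convolution)
        # The closed three-term recurrence gives the same value:
        assert M[n] == ((2 * n + 1) * M[n - 1] + (3 * n - 3) * M[n - 2]) // (n + 2)
    return M

print(motzkin(9))   # [1, 1, 2, 4, 9, 21, 51, 127, 323, 835]
```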
The Motzkin numbers can be expressed in terms of binomial coefficients and Catalan numbers:
formula_3
and inversely,
formula_4
This gives
formula_5
The generating function formula_6 of the Motzkin numbers satisfies
formula_7
and is explicitly expressed as
formula_8
An integral representation of Motzkin numbers is given by
formula_9.
They have the asymptotic behaviour
formula_10.
A Motzkin prime is a Motzkin number that is prime. Four such primes are known:
2, 127, 15511, 953467954114363 (sequence in the OEIS)
Combinatorial interpretations.
The Motzkin number for n is also the number of positive integer sequences of length "n" − 1 in which the opening and ending elements are either 1 or 2, and the difference between any two consecutive elements is −1, 0 or 1. Equivalently, the Motzkin number for n is the number of positive integer sequences of length "n" + 1 in which the opening and ending elements are 1, and the difference between any two consecutive elements is −1, 0 or 1.
Also, the Motzkin number for n gives the number of routes on the upper right quadrant of a grid from coordinate (0, 0) to coordinate (n, 0) in n steps if one is allowed to move only to the right (up, down or straight) at each step but forbidden from dipping below the y = 0 axis.
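This path description can be verified independently with a short dynamic program over the current height of the path (an illustrative sketch):

```python
# Counting Motzkin paths of length n: steps of +1, 0 or -1 in height, never
# going below 0, starting and ending at height 0.
def motzkin_paths(n):
    heights = {0: 1}                      # number of paths reaching each height
    for _ in range(n):
        nxt = {}
        for h, count in heights.items():
            for step in (-1, 0, 1):
                if h + step >= 0:
                    nxt[h + step] = nxt.get(h + step, 0) + count
        heights = nxt
    return heights.get(0, 0)

print([motzkin_paths(n) for n in range(10)])
# [1, 1, 2, 4, 9, 21, 51, 127, 323, 835]
```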
For example, the following figure shows the 9 valid Motzkin paths from (0, 0) to (4, 0):
There are at least fourteen different manifestations of Motzkin numbers in different branches of mathematics, as enumerated by Donaghey and Shapiro (1977) in their survey of Motzkin numbers.
Guibert, Pergola and Pinzani (2001) showed that vexillary involutions are enumerated by Motzkin numbers.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M_n"
},
{
"math_id": 1,
"text": "n = 0, 1, \\dots"
},
{
"math_id": 2,
"text": "M_{n}=M_{n-1}+\\sum_{i=0}^{n-2}M_iM_{n-2-i}=\\frac{2n+1}{n+2}M_{n-1}+\\frac{3n-3}{n+2}M_{n-2}."
},
{
"math_id": 3,
"text": "M_n=\\sum_{k=0}^{\\lfloor n/2\\rfloor} \\binom{n}{2k} C_k,"
},
{
"math_id": 4,
"text": "C_{n+1}=\\sum_{k=0}^{n} \\binom{n}{k} M_k"
},
{
"math_id": 5,
"text": "\\sum_{k=0}^{n}C_{k} = 1 + \\sum_{k=1}^{n} \\binom{n}{k} M_{k-1}."
},
{
"math_id": 6,
"text": "m(x) = \\sum_{n=0}^\\infty M_n x^n"
},
{
"math_id": 7,
"text": "x^2 m(x)^2 + (x - 1) m(x) + 1 = 0"
},
{
"math_id": 8,
"text": "m(x) = \\frac{1-x-\\sqrt{1-2x-3x^2}}{2x^2}."
},
{
"math_id": 9,
"text": "M_{n}=\\frac{2}{\\pi}\\int_0^\\pi \\sin(x)^2(2\\cos(x)+1)^n dx"
},
{
"math_id": 10,
"text": "M_{n}\\sim \\frac{1}{2 \\sqrt{\\pi}}\\left(\\frac{3}{n}\\right)^{3/2} 3^n,~ n \\to \\infty"
}
] | https://en.wikipedia.org/wiki?curid=853141 |
853175 | Characterizations of the exponential function | Mathematical concept
In mathematics, the exponential function can be characterized in many ways.
This article presents some common characterizations, discusses why each makes sense, and proves that they are all equivalent.
The exponential function occurs naturally in many branches of mathematics. Walter Rudin called it "the most important function in mathematics".
It is therefore useful to have multiple ways to define (or characterize) it.
Each of the characterizations below may be more or less useful depending on context.
The "product limit" characterization of the exponential function was discovered by Leonhard Euler.
Characterizations.
The six most common definitions of the exponential function formula_0 for real values formula_1 are as follows.
1. Product limit. Define formula_2 by the limit formula_3
2. Power series. Define formula_2 as the value of the infinite series formula_4
3. Inverse of logarithm integral. Define formula_2 to be the unique positive number "y" satisfying formula_5 This makes formula_2 the inverse of the natural logarithm function, defined by this integral, so that formula_6
4. Initial value problem. Define formula_7 to be the unique solution of the initial value problem formula_8 where formula_9 denotes the derivative of "y".
5. Functional equation. The exponential function formula_2 is the unique function "f" satisfying formula_10 for all formula_11 together with formula_12. Alternatively, it is the unique function satisfying the multiplicative property together with formula_13 and any one of the following regularity conditions: "f" is Lebesgue-measurable, "f" is continuous at at least one point, or "f" is monotonic on some interval.
6. Elementary exponentiation. For a base formula_14, define formula_15 first for positive integer exponents formula_16 by repeated multiplication, then for rational exponents formula_18 by formula_19, and finally for arbitrary real exponents by continuity. The number formula_20 is then defined as the unique base satisfying formula_21
Larger domains.
One way of defining the exponential function over the complex numbers is to first define it for the domain of real numbers using one of the above characterizations, and then extend it as an analytic function, which is characterized by its values on any infinite domain set.
Also, characterisations (1), (2), and (4) for formula_2 apply directly for formula_22 a complex number. Definition (3) presents a problem because there are non-equivalent paths along which one could integrate; but the equation of (3) should hold for any such path modulo formula_23. As for definition (5), the additive property together with the complex derivative formula_24 are sufficient to guarantee formula_25. However, the initial value condition formula_13 together with the other regularity conditions are not sufficient. For example, for real "x" and "y", the functionformula_26satisfies the three listed regularity conditions in (5) but is not equal to formula_27. A sufficient condition is that formula_13 and that formula_28 is a conformal map at some point; or else the two initial values formula_13 and formula_29 together with the other regularity conditions.
One may also define the exponential on other domains, such as matrices and other algebras. Definitions (1), (2), and (4) all make sense for arbitrary Banach algebras.
Proof that each characterization makes sense.
Some of these definitions require justification to demonstrate that they are well-defined. For example, when the value of the function is defined as the result of a limiting process (i.e. an infinite sequence or series), it must be demonstrated that such a limit always exists.
Characterization 1.
The error of the product limit expression is described by:formula_30
where the polynomial's degree (in "x") in the term with denominator "n""k" is 2"k".
Characterization 2.
Since
formula_31
it follows from the ratio test that formula_32 converges for all "x".
Characterization 3.
Since the integrand is an integrable function of t, the integral expression is well-defined. It must be shown that the function from formula_33 to formula_34 defined by
formula_35
is a bijection. Since 1/"t" is positive for positive t, this function is strictly increasing, hence injective. If the two integrals
formula_36
hold, then it is surjective as well. Indeed, these integrals "do" hold; they follow from the integral test and the divergence of the harmonic series.
Characterization 6.
The definition depends on the unique positive real number formula_20 satisfying: formula_37 This limit can be shown to exist for any formula_14, and it defines a continuous increasing function formula_38 with formula_39 and formula_40, so the intermediate value theorem guarantees the existence of such a value formula_20.
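As a numerical illustration of this characterization (with the step size h and the bracketing interval [2, 3] chosen arbitrarily), one can approximate the limit by a difference quotient and locate the base for which it equals 1 by bisection:

```python
# Locating the base a for which lim_{h -> 0} (a**h - 1)/h = 1 by bisection.
# The small step h and the initial bracket [2, 3] are arbitrary choices.
import math

def L(a, h=1e-8):
    return (a ** h - 1) / h     # approximates the limit, which equals ln(a)

lo, hi = 2.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if L(mid) < 1:              # L is increasing in a
        lo = mid
    else:
        hi = mid

print(lo, math.e)               # both approximately 2.71828...
```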
Equivalence of the characterizations.
The following arguments demonstrate the equivalence of the above characterizations for the exponential function.
Characterization 1 ⇔ characterization 2.
The following argument is adapted from Rudin, theorem 3.31, p. 63–65.
Let formula_41 be a fixed non-negative real number. Define
formula_42
By the binomial theorem,
formula_43
(using "x" ≥ 0 to obtain the final inequality) so that:
formula_44
One must use lim sup because it is not known if "t""n" converges.
For the other inequality, by the above expression for "t""n", if 2 ≤ "m" ≤ "n", we have:
formula_45
Fix "m", and let "n" approach infinity. Then
formula_46
(again, one must use lim inf because it is not known if "t""n" converges). Now, take the above inequality, let "m" approach infinity, and put it together with the other inequality to obtain:
formula_47
so that
formula_48
This equivalence can be extended to the negative real numbers by noting formula_49 and taking the limit as n goes to infinity.
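A quick numerical illustration of this equivalence follows; the sample point and the number of limit and series terms are arbitrary choices.

```python
# Numerical illustration of characterizations 1 and 2 at x = 1.5: the product
# limit approaches the value of the (rapidly converging) power series.
import math

x = 1.5
series = sum(x ** k / math.factorial(k) for k in range(30))

for n in (10, 1000, 100000):
    print(n, (1 + x / n) ** n)
print("series:", series, " math.exp:", math.exp(x))
```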
Characterization 1 ⇔ characterization 3.
Here, the natural logarithm function is defined in terms of a definite integral as above. By the first part of fundamental theorem of calculus,
formula_50
Besides, formula_51
Now, let "x" be any fixed real number, and let
formula_52
Ln("y") = "x", which implies that "y" = "e""x", where "e""x" is in the sense of definition 3. We have
formula_53
Here, the continuity of ln("y") is used, which follows from the continuity of 1/"t":
formula_54
Here, the result ln"a""n" = "n"ln"a" has been used. This result can be established for "n" a natural number by induction, or using integration by substitution. (The extension to real powers must wait until "ln" and "exp" have been established as inverses of each other, so that "a""b" can be defined for real "b" as "e""b" ln"a".)
formula_55
formula_56
formula_57
formula_58
Characterization 1 ⇔ characterization 4.
Let formula_59 denote the solution to the initial value problem formula_60. Applying the simplest form of Euler's method with increment formula_61 and sample points formula_62 gives the recursive formula:formula_63This recursion is immediately solved to give the approximate value formula_64, and since Euler's Method is known to converge to the exact solution, we have:formula_65
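The argument can be mirrored numerically: each Euler step multiplies the current value by (1 + x/n), so the n-step result is exactly the product-limit expression, and refining the step recovers e^x. A sketch, with arbitrary sample values:

```python
# Euler's method for y' = y, y(0) = 1 with n steps of size x/n: each step
# multiplies y by (1 + x/n), so the result equals (1 + x/n)**n.
import math

def euler_exp(x, n):
    y, dt = 1.0, x / n
    for _ in range(n):
        y += y * dt          # y(t + dt) ~ y(t) + y'(t) dt = y(t) (1 + dt)
    return y

x = 2.0
for n in (10, 100, 10000):
    print(n, euler_exp(x, n), (1 + x / n) ** n)
print("exp(2) =", math.exp(x))
```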
Characterization 1 ⇔ characterization 5.
The following proof is a simplified version of the one in Hewitt and Stromberg, exercise 18.46. First, one proves that measurability (or here, Lebesgue-integrability) implies continuity for a non-zero function formula_66 satisfying formula_10, and then one proves that continuity implies formula_67 for some "k", and finally formula_68 implies "k" = 1.
First, a few elementary properties of a function formula_66 satisfying formula_10 and not identically zero are established:
1. If formula_66 is nonzero at some point "y", then it is nonzero everywhere: formula_69 implies formula_70.
2. formula_71. This follows from formula_72 together with the fact that formula_66 is nonzero.
3. formula_73. This follows from formula_74.
4. If formula_66 is continuous at some point "y", then it is continuous everywhere: formula_75 as formula_76.
The second and third properties mean that it is sufficient to prove formula_25 for positive "x".
If formula_66 is a Lebesgue-integrable function, then
formula_77
It then follows that
formula_78
Since formula_66 is nonzero, some "y" can be chosen such that formula_79, and the above expression can be solved for formula_66. Therefore:
formula_80
The final expression must go to zero as formula_76 since formula_81 and formula_82 is continuous. It follows that formula_66 is continuous.
Now, formula_83 can be proven, for some "k", for all positive rational numbers "q". Let "q"="n"/"m" for positive integers "n" and "m". Then
formula_84
by elementary induction on "n". Therefore, formula_85 and thus
formula_86
for formula_87. If restricted to real-valued formula_66, then formula_88 is everywhere positive and so "k" is real.
Finally, by continuity, since formula_67 for all rational "x", it must be true for all real "x" since the closure of the rationals is the reals (that is, any real "x" can be written as the limit of a sequence of rationals). If formula_68 then "k" = 1. This is equivalent to characterization 1 (or 2, or 3), depending on which equivalent definition of e one uses.
Characterization 2 ⇔ characterization 4.
Let n be a non-negative integer. In the sense of definition 4 and by induction, formula_89.
Therefore formula_90
Using Taylor series,
formula_91 This shows that definition 4 implies definition 2.
In the sense of definition 2,
formula_92
Besides, formula_93 This shows that definition 2 implies definition 4.
Characterization 2 ⇒ characterization 5.
In the sense of definition 2, the equation formula_94 follows from the term-by-term manipulation of power series justified by uniform convergence, and the resulting equality of coefficients is just the Binomial theorem. Furthermore:
formula_95
Characterization 3 ⇔ characterization 4.
Characterisation 3 involves defining the natural logarithm before the exponential function is defined. First,
formula_96
This means that the natural logarithm of formula_22 equals the (signed) area under the graph of formula_97 between formula_98 and formula_99. If formula_100, then this area is taken to be negative. Then, formula_101 is defined as the inverse of formula_102, meaning that
formula_103
by the definition of an inverse function. If formula_17 is a positive real number then formula_15 is defined as formula_104. Finally, formula_105 is defined as the number formula_17 such that formula_106. It can then be shown that formula_107:
formula_108
By the fundamental theorem of calculus, the derivative of formula_109. We are now in a position to prove that formula_110, satisfying the first part of the initial value problem given in characterisation 4:
formula_111
Then, we merely have to note that formula_112, and we are done. Of course, it is much easier to show that characterisation 4 implies characterisation 3. If formula_2 is the unique function formula_113 satisfying formula_114, and formula_71, then formula_102 can be defined as its inverse. The derivative of formula_102 can be found in the following way:
formula_115
If we differentiate both sides with respect to formula_116, we get
formula_117
Therefore,
formula_118
Characterization 5 ⇒ characterization 4.
The conditions "f"'(0) = 1 and "f"("x" + "y") = "f"("x") "f"("y") imply both conditions in characterization 4. Indeed, one gets the initial condition "f"(0) = 1 by dividing both sides of the equation
formula_119
by "f"(0), and the condition that "f′"("x") = "f"("x") follows from the condition that "f′"(0) = 1 and the definition of the derivative as follows:
formula_120
Characterization 5 ⇒ characterization 4.
In the sense of definition 5, the multiplicative property together with the initial condition formula_121 imply that: formula_122
Characterization 5 ⇔ characterization 6.
The multiplicative property formula_10 of definition 5 implies that formula_71, and that formula_123 according to the multiplication/division and root definition of exponentiation for rational formula_18 in definition 6, where formula_124. Then the condition formula_12 means that formula_125. Also any of the conditions of definition 5 imply that formula_66 is continuous at all real formula_22. The converse is similar.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\exp(x)=e^x"
},
{
"math_id": 1,
"text": "x\\in \\mathbb{R}"
},
{
"math_id": 2,
"text": "e^x"
},
{
"math_id": 3,
"text": "e^x = \\lim_{n\\to\\infty} \\left(1+\\frac x n \\right)^n."
},
{
"math_id": 4,
"text": "e^x = \\sum_{n=0}^\\infty {x^n \\over n!} = 1 + x + \\frac{x^2}{2!} + \\frac{x^3}{3!} + \\frac{x^4}{4!} + \\cdots"
},
{
"math_id": 5,
"text": "\\int_1^y \\frac{dt}{t} = x."
},
{
"math_id": 6,
"text": "x=\\ln(y)"
},
{
"math_id": 7,
"text": "y(x)=e^x"
},
{
"math_id": 8,
"text": "y' = y,\\quad y(0) = 1,"
},
{
"math_id": 9,
"text": "y'=\\tfrac{dy}{dx}"
},
{
"math_id": 10,
"text": "f(x+y)=f(x)f(y)"
},
{
"math_id": 11,
"text": "x,y"
},
{
"math_id": 12,
"text": "f'(0)=1"
},
{
"math_id": 13,
"text": "f(1)=e"
},
{
"math_id": 14,
"text": "a>0"
},
{
"math_id": 15,
"text": "a^x"
},
{
"math_id": 16,
"text": "x=n"
},
{
"math_id": 17,
"text": "a"
},
{
"math_id": 18,
"text": "x=n/m"
},
{
"math_id": 19,
"text": "a^{n/m} =\\ \\ \\sqrt[m]{\\vphantom{A^2}a^n}"
},
{
"math_id": 20,
"text": "a=e"
},
{
"math_id": 21,
"text": "\\lim_{h \\to 0} \\frac{e^h - 1}{h} = 1."
},
{
"math_id": 22,
"text": "x"
},
{
"math_id": 23,
"text": "2\\pi"
},
{
"math_id": 24,
"text": "f'(0) = 1"
},
{
"math_id": 25,
"text": "f(x)=e^x"
},
{
"math_id": 26,
"text": " f(x + iy) = e^x(\\cos(2y) + i\\sin(2y)) = e^{x + 2iy} "
},
{
"math_id": 27,
"text": "\\exp(x+iy)"
},
{
"math_id": 28,
"text": "f"
},
{
"math_id": 29,
"text": " f(i) = \\cos(1) + i\\sin(1) "
},
{
"math_id": 30,
"text": "\\left(1+\\frac x n \\right)^n=e^x \\left(1-\\frac{x^2}{2n}+\\frac{x^3(8+3x)}{24n^2}+\\cdots \\right),"
},
{
"math_id": 31,
"text": "\\lim_{n\\to\\infty} \\left|\\frac{x^{n+1}/(n+1)!}{x^n/n!}\\right|\n = \\lim_{n\\to\\infty} \\left|\\frac{x}{n+1}\\right|\n = 0 < 1."
},
{
"math_id": 32,
"text": "\\sum_{n=0}^\\infty \\frac{x^n}{n!}"
},
{
"math_id": 33,
"text": "\\mathbb{R}^+"
},
{
"math_id": 34,
"text": "\\mathbb{R}"
},
{
"math_id": 35,
"text": "x \\mapsto \\int_1^x \\frac{dt}{t}"
},
{
"math_id": 36,
"text": "\\begin{align}\n\\int_1^\\infty \\frac{dt} t & = \\infty \\\\[8pt]\n\\int_1^0 \\frac{dt} t & = -\\infty\n\\end{align}"
},
{
"math_id": 37,
"text": "\\lim_{h \\to 0} \\frac{a^h - 1}{h} = 1."
},
{
"math_id": 38,
"text": "f(a)=\\ln(a) "
},
{
"math_id": 39,
"text": "f(1)=0"
},
{
"math_id": 40,
"text": "\\lim_{a\\to\\infty}f(a) = \\infty "
},
{
"math_id": 41,
"text": "x \\geq 0"
},
{
"math_id": 42,
"text": "t_n=\\left(1+\\frac x n \\right)^n,\\qquad s_n = \\sum_{k=0}^n\\frac{x^k}{k!},\\qquad \ne^x = \\lim_{n\\to\\infty} s_n."
},
{
"math_id": 43,
"text": "\\begin{align}\nt_n & =\\sum_{k=0}^n{n \\choose k}\\frac{x^k}{n^k}=1+x+\\sum_{k=2}^n\\frac{n(n-1)(n-2)\\cdots(n-(k-1))x^k}{k!\\,n^k} \\\\[8pt]\n& = 1+x+\\frac{x^2}{2!}\\left(1-\\frac{1}{n}\\right)+\\frac{x^3}{3!}\\left(1-\\frac{1}{n}\\right)\\left(1-\\frac{2}{n}\\right)+\\cdots \\\\[8pt]\n& {}\\qquad \\cdots +\\frac{x^n}{n!}\\left(1-\\frac{1}{n}\\right)\\cdots\\left(1-\\frac{n-1}{n}\\right)\\le s_n\n\\end{align}"
},
{
"math_id": 44,
"text": "\\limsup_{n\\to\\infty}t_n \\le \\limsup_{n\\to\\infty}s_n = e^x"
},
{
"math_id": 45,
"text": "1+x+\\frac{x^2}{2!}\\left(1-\\frac{1}{n}\\right)+\\cdots+\\frac{x^m}{m!}\\left(1-\\frac{1}{n}\\right)\\left(1-\\frac{2}{n}\\right)\\cdots\\left(1-\\frac{m-1}{n}\\right)\\le t_n."
},
{
"math_id": 46,
"text": "s_m = 1+x+\\frac{x^2}{2!}+\\cdots+\\frac{x^m}{m!} \\le \\liminf_{n\\to\\infty}\\ t_n"
},
{
"math_id": 47,
"text": "\\limsup_{n\\to\\infty}t_n \\le e^x \\le \\liminf_{n\\to\\infty}t_n "
},
{
"math_id": 48,
"text": "\\lim_{n\\to\\infty}t_n = e^x. "
},
{
"math_id": 49,
"text": "\\left(1 - \\frac r n \\right)^n \\left(1+\\frac{r}{n}\\right)^n = \\left(1-\\frac{r^2}{n^2}\\right)^n "
},
{
"math_id": 50,
"text": "\\frac d {dx}\\ln x=\\frac{d}{dx} \\int_1^x \\frac1 t \\,dt = \\frac 1 x."
},
{
"math_id": 51,
"text": "\\ln 1 = \\int_1^1 \\frac{dt}{t} = 0"
},
{
"math_id": 52,
"text": "y=\\lim_{n\\to\\infty}\\left(1+\\frac{x}{n}\\right)^n."
},
{
"math_id": 53,
"text": "\\ln y=\\ln\\lim_{n\\to\\infty}\\left(1+\\frac{x}{n} \\right)^n = \\lim_{n\\to\\infty} \\ln\\left(1+\\frac{x}{n}\\right)^n."
},
{
"math_id": 54,
"text": "\\ln y=\\lim_{n\\to\\infty}n\\ln \\left(1+\\frac{x}{n} \\right) = \\lim_{n\\to\\infty} \\frac{x\\ln\\left(1+(x/n)\\right)}{(x/n)}."
},
{
"math_id": 55,
"text": "=x\\cdot\\lim_{h\\to 0}\\frac{\\ln\\left(1+h\\right)}{h} \\quad \\text{ where } h = \\frac{x}{n}"
},
{
"math_id": 56,
"text": "=x\\cdot\\lim_{h\\to 0}\\frac{\\ln\\left(1+h\\right)-\\ln 1}{h}"
},
{
"math_id": 57,
"text": "=x\\cdot\\frac{d}{dt} \\ln t \\Bigg|_{t=1}"
},
{
"math_id": 58,
"text": "\\!\\, = x."
},
{
"math_id": 59,
"text": "y(t) "
},
{
"math_id": 60,
"text": "y' = y,\\ y(0) = 1"
},
{
"math_id": 61,
"text": "\\Delta t = \\frac{x}{n}"
},
{
"math_id": 62,
"text": "t \\ =\\ 0,\\ \\Delta t, \\ 2 \\Delta t, \\ldots, \\ n \\Delta t "
},
{
"math_id": 63,
"text": "y(t+\\Delta t) \\ \\approx \\ y(t) + y'(t)\\Delta t \\ =\\ y(t) + y(t)\\Delta t \\ =\\ y(t)\\,(1+\\Delta t)."
},
{
"math_id": 64,
"text": "y(x) = y(n\\Delta t) \\approx (1+\\Delta t)^n"
},
{
"math_id": 65,
"text": "y(x) = \\lim_{n\\to\\infty}\\left(1+\\frac{x}{n}\\right)^n. "
},
{
"math_id": 66,
"text": "f(x)"
},
{
"math_id": 67,
"text": "f(x) = e^{kx}"
},
{
"math_id": 68,
"text": "f(1) = e"
},
{
"math_id": 69,
"text": "f(y) = f(x) f(y - x) \\neq 0"
},
{
"math_id": 70,
"text": "f(x) \\neq 0"
},
{
"math_id": 71,
"text": "f(0)=1"
},
{
"math_id": 72,
"text": "f(x)= f(x+0) = f(x) f(0)"
},
{
"math_id": 73,
"text": "f(-x)=1/f(x)"
},
{
"math_id": 74,
"text": "1 = f(0)= f(x-x) = f(x) f(-x)"
},
{
"math_id": 75,
"text": "f(x+\\delta) - f(x) = f(x-y) [ f(y+\\delta) - f(y)] \\to 0"
},
{
"math_id": 76,
"text": "\\delta \\to 0"
},
{
"math_id": 77,
"text": "g(x) = \\int_0^x f(x')\\, dx'."
},
{
"math_id": 78,
"text": "g(x+y)-g(x) = \\int_x^{x+y} f(x')\\, dx' = \\int_0^y f(x+x')\\, dx' = f(x) g(y). "
},
{
"math_id": 79,
"text": "g(y) \\neq 0"
},
{
"math_id": 80,
"text": "\\begin{align}\nf(x+\\delta)-f(x) & = \\frac{[g(x+\\delta+y)-g(x+\\delta)]-[g(x+y)-g(x)]}{g(y)} \\\\\n& =\\frac{[g(x+y+\\delta)-g(x+y)]-[g(x+\\delta)-g(x)]}{g(y)} \\\\\n& =\\frac{f(x+y)g(\\delta)-f(x)g(\\delta)}{g(y)}=g(\\delta)\\frac{f(x+y)-f(x)}{g(y)}.\n\\end{align}"
},
{
"math_id": 81,
"text": "g(0)=0"
},
{
"math_id": 82,
"text": "g(x)"
},
{
"math_id": 83,
"text": "f(q) = e^{kq}"
},
{
"math_id": 84,
"text": "f\\left(\\frac{n}{m}\\right)=f\\left(\\frac{1}{m}+\\cdots+\\frac{1}{m} \\right)=f\\left(\\frac{1}{m}\\right)^n"
},
{
"math_id": 85,
"text": "f(1/m)^m = f(1)"
},
{
"math_id": 86,
"text": "f\\left(\\frac{n}{m}\\right)=f(1)^{n/m}=e^{k(n/m)}."
},
{
"math_id": 87,
"text": "k = \\ln [f(1)]"
},
{
"math_id": 88,
"text": "f(x) = f(x/2)^2"
},
{
"math_id": 89,
"text": "\\frac{d^ny}{dx^n}=y"
},
{
"math_id": 90,
"text": "\\frac{d^ny}{dx^n}\\Bigg|_{x=0}=y(0)=1."
},
{
"math_id": 91,
"text": "y= \\sum_{n=0}^\\infty \\frac {f^{(n)}(0)}{n!} \\, x^n = \\sum_{n=0}^\\infty \\frac {1}{n!} \\, x^n = \\sum_{n=0}^\\infty \\frac {x^n}{n!}."
},
{
"math_id": 92,
"text": "\\begin{align}\n\\frac{d}{dx}e^x & = \\frac{d}{dx} \\left(1+\\sum_{n=1}^\\infty \\frac {x^n}{n!} \\right) = \\sum_{n=1}^\\infty \\frac {nx^{n-1}}{n!} =\\sum_{n=1}^\\infty \\frac {x^{n-1}}{(n-1)!} \\\\[6pt]\n& =\\sum_{k=0}^\\infty \\frac {x^k}{k!}, \\text{ where } k=n-1 \\\\[6pt]\n& =e^x\n\\end{align}"
},
{
"math_id": 93,
"text": "e^0 = 1 + 0 + \\frac{0^2}{2!} + \\frac{0^3}{3!} + \\cdots = 1."
},
{
"math_id": 94,
"text": "\\exp(x+y)= \\exp(x)\\exp(y)"
},
{
"math_id": 95,
"text": "\\begin{align}\n\\exp'(0) & = \\lim_{h\\to 0} \\frac{e^h-1}{h} \\\\\n & =\\lim_{h\\to 0} \\frac{1}{h} \\left (\\left (1+h+ \\frac{h^2}{2!}+\\frac{h^3}{3!}+\\frac{h^4}{4!}+\\cdots \\right) -1 \\right) \\\\\n & =\\lim_{h\\to 0} \\left(1+ \\frac{h}{2!}+\\frac{h^2}{3!}+\\frac{h^3}{4!}+\\cdots \\right) \\ =\\ 1.\\\\\n \n\\end{align}"
},
{
"math_id": 96,
"text": "\\log x := \\int_{1}^{x} \\frac{dt}{t}"
},
{
"math_id": 97,
"text": "1/t"
},
{
"math_id": 98,
"text": "t = 1"
},
{
"math_id": 99,
"text": "t=x"
},
{
"math_id": 100,
"text": "x<1"
},
{
"math_id": 101,
"text": "\\exp"
},
{
"math_id": 102,
"text": "\\log"
},
{
"math_id": 103,
"text": "\\exp(\\log(x))=x \\text{ and } \\log(\\exp(x))=x"
},
{
"math_id": 104,
"text": "\\exp(x\\log(a))"
},
{
"math_id": 105,
"text": "e"
},
{
"math_id": 106,
"text": "\\log(a)=1"
},
{
"math_id": 107,
"text": "e^x=\\exp(x)"
},
{
"math_id": 108,
"text": "e^x=\\exp(x\\log(e))=\\exp(x)"
},
{
"math_id": 109,
"text": "\\log x = \\frac{1}{x}"
},
{
"math_id": 110,
"text": "\\frac{d}{dx} e^x=e^x"
},
{
"math_id": 111,
"text": "\\begin{align}\n\\text{Let }y&=e^x=\\exp(x) \\\\\n\\log(y)&=\\log(\\exp(x))=x \\\\\n\\frac{1}{y}\\frac{dy}{dx}&=1 \\\\\n\\frac{dy}{dx}&=y=e^x\n\\end{align}"
},
{
"math_id": 112,
"text": "e^0=\\exp(0)=1"
},
{
"math_id": 113,
"text": "f:\\mathbb{R}\\to\\mathbb{R}"
},
{
"math_id": 114,
"text": "f'(x)=e^x"
},
{
"math_id": 115,
"text": "y = \\log x \\implies x=e^y"
},
{
"math_id": 116,
"text": "y"
},
{
"math_id": 117,
"text": "\\begin{align}\n\\frac{dx}{dy} &= e^y \\\\\n\\frac{dy}{dx} &= \\frac{1}{e^y} = \\frac{1}{x}\n\\end{align}"
},
{
"math_id": 118,
"text": "\\int_{1}^{x}\\frac{1}{t}dt=\\left[\\log t\\right]_{1}^{x} = \\log x - \\log 1 = \\log x - 0 = \\log x"
},
{
"math_id": 119,
"text": "f(0) = f(0 + 0) = f(0) f(0)"
},
{
"math_id": 120,
"text": "\n\\begin{array}{rcccccc}\nf'(x) & = & \\lim\\limits_{h\\to 0}\\frac{f(x+h)-f(x)} h\n & = & \\lim\\limits_{h\\to 0}\\frac{f(x)f(h)-f(x)} h\n & = & \\lim\\limits_{h\\to 0}f(x)\\frac{f(h)-1} h\n\\\\[1em]\n & = & f(x)\\lim\\limits_{h\\to 0}\\frac{f(h)-1} h\n & = & f(x)\\lim\\limits_{h\\to 0}\\frac{f(0+h)-f(0)} h\n & = & f(x)f'(0) = f(x).\n\\end{array}\n"
},
{
"math_id": 121,
"text": "\\exp'(0)= 1 "
},
{
"math_id": 122,
"text": "\\begin{array}{rcl}\n\\frac{d}{dx}\\exp(x) &=& \\lim_{h \\to 0} \\frac{\\exp(x{+}h)-\\exp(x)}{h}\\\\\n& = & \\exp(x) \\cdot \\lim_{h \\to 0}\\frac{\\exp(h)-1}{h}\\\\\n& = & \\exp(x) \\exp'(0) =\\exp(x) . \n\\end{array}"
},
{
"math_id": 123,
"text": "f(x)=a^x"
},
{
"math_id": 124,
"text": "a=f(1)"
},
{
"math_id": 125,
"text": "\\lim_{h\\to 0}\\tfrac{a^h-1}{h}=1"
}
] | https://en.wikipedia.org/wiki?curid=853175 |
8532275 | Dual currency deposit | In finance, a dual currency deposit (DCD, also known as Dual Currency Instrument or Dual Currency Product) is a derivative instrument which combines a money market deposit with a currency option to provide a higher yield than that available for a standard deposit. There is a higher risk than with the latter: the depositor may be repaid in a different currency, and the amount received may be worth less than the funds originally deposited. An investor could do a USD/JPY DCD depositing USD and receiving JPY.
Formal definition.
A dual currency deposit (“DCD”) is a foreign exchange-linked deposit in which the principal can be repaid after being converted into the alternative currency at the strike rate at maturity depending on the spot foreign exchange rate.
If an investor has a view on the initial investment currency, a dual currency strategy allows the investor to benefit from higher returns. The returns are higher than those on normal deposits as compensation for the higher risk associated with DCDs, namely the exposure to foreign exchange.
At maturity, if the local currency is weaker than the strike rate, funds will be redeemed in the local currency. If the local currency is stronger, the principal is repaid in the alternative currency, converted at the strike rate. The distance from the current exchange rate to the “strike” is determined by investor risk appetite: if the client is comfortable with risk, the conversion level will be closer to the current level, and the interest payable will be higher as the risk of conversion increases.
Financial maths.
DCD+.
The DCD is actually composed of a normal deposit and an option. Normally in the options market the seller of an option is paid before the premium value date or spot date; however, in the case of the DCD the client is paid at the end of the deposit period. For this reason some banks offer their clients a product commonly called a DCD+, which includes an interest element to account for this.
Adding this to the deposit redemption-amount means that the amount of currency that will need converting if the option strike is passed at expiry has now increased. So the option face amount needs to be altered to take the extra interest into account. This affects the premium again, and so on.
To avoid having to compute this to infinity one can use a geometric series with
formula_0
and
formula_1
where the yield is the forward value (FV) of the option premium, giving a multiplier to change a DCD's option premium into a DCD+ premium of:
formula_2
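As an illustration only, the multiplier can be coded in a few lines of Python; the parameter names (deposit rate, day count, basis, option yield) and the sample numbers below are assumptions made for this sketch, not values from any actual product.

```python
def dcd_plus_multiplier(deposit_rate, delivery_term_days, currency_basis, option_yield):
    """Multiplier converting a DCD option premium into a DCD+ premium.

    Implements multiplier = (1 + r * days / basis) / (1 - y), where r is taken
    here to be the deposit rate for the delivery term and y is the forward
    value of the option yield (both readings are assumptions of this sketch).
    """
    a = 1.0 + deposit_rate * delivery_term_days / currency_basis
    return a / (1.0 - option_yield)

# Hypothetical numbers: 4% deposit rate, 92-day term, 360-day basis, 1.5% option yield
print(dcd_plus_multiplier(0.04, 92, 360, 0.015))   # about 1.0256
```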
Example of DCI/DCD.
Sample parameters selected by an investor
Other investment parameters determined by product offering institution
On the expiry date, the reference rate is 2.0017.
Currency of repayment
Since the reference rate on the expiry date (2.0017) is greater than the strike rate selected by the investor (1.9950), proceeds will be paid in the base currency (SGD) to the investor on the maturity date. Here, the base currency (SGD) has not appreciated beyond the strike rate selected by the investor.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a = 1+r\\frac{\\text{delivery term days}}{\\text{currency basis}}"
},
{
"math_id": 1,
"text": "r = \\text{yield of option}"
},
{
"math_id": 2,
"text": "\\text{multiplier} = \n\\frac{1+r\\frac{\\text{delivery term days}}{\\text{currency basis}}}{1-y}"
}
] | https://en.wikipedia.org/wiki?curid=8532275 |
853240 | Process calculus | Family of approaches for modelling concurrent systems
In computer science, the process calculi (or process algebras) are a diverse family of related approaches for formally modelling concurrent systems. Process calculi provide a tool for the high-level description of interactions, communications, and synchronizations between a collection of independent agents or processes. They also provide algebraic laws that allow process descriptions to be manipulated and analyzed, and permit formal reasoning about equivalences between processes (e.g., using bisimulation). Leading examples of process calculi include CSP, CCS, ACP, and LOTOS. More recent additions to the family include the π-calculus, the ambient calculus, PEPA, the fusion calculus and the join-calculus.
Essential features.
While the variety of existing process calculi is very large (including variants that incorporate stochastic behaviour, timing information, and specializations for studying molecular interactions), there are several features that all process calculi have in common: they represent interactions between independent processes as communication (message-passing) rather than as modification of shared variables; they describe processes and systems using a small collection of primitives, together with operators for combining those primitives; and they define algebraic laws for the process operators, which allow process expressions to be manipulated using equational reasoning.
Mathematics of processes.
To define a process calculus, one starts with a set of "names" (or "channels") whose purpose is to provide means of communication. In many implementations, channels have rich internal structure to improve efficiency, but this is abstracted away in most theoretic models. In addition to names, one needs a means to form new processes from old ones. The basic operators, always present in some form or other, allow: parallel composition of processes, specification of which channels to use for sending and receiving data, sequentialization of interactions, hiding of interaction points, and recursion or process replication.
Parallel composition.
Parallel composition of two processes formula_0 and formula_1, usually written formula_2, is the key primitive distinguishing the process calculi from sequential models of computation. Parallel composition allows computation in formula_0 and formula_1 to proceed simultaneously and independently. But it also allows interaction, that is synchronisation and flow of information from formula_0 to formula_1 (or vice versa) on a channel shared by both. Crucially, an agent or process can be connected to more than one channel at a time.
Channels may be synchronous or asynchronous. In the case of a synchronous channel, the agent sending a message waits until another agent has received the message. Asynchronous channels do not require any such synchronization. In some process calculi (notably the π-calculus) channels themselves can be sent in messages through (other) channels, allowing the topology of process interconnections to change. Some process calculi also allow channels to be "created" during the execution of a computation.
Communication.
Interaction can be (but isn't always) a "directed" flow of information. That is, input and output can be distinguished as dual interaction primitives. Process calculi that make such distinctions typically define an input operator ("e.g." formula_3) and an output operator ("e.g." formula_4), both of which name an interaction point (here formula_5) that is used to synchronise with a dual interaction primitive.
Should information be exchanged, it will flow from the outputting to the inputting process. The output primitive will specify the data to be sent. In formula_4, this data is formula_6. Similarly, if an input expects to receive data, one or more bound variables will act as place-holders to be substituted by data, when it arrives. In formula_3, formula_7 plays that role. The choice of the kind of data that can be exchanged in an interaction is one of the key features that distinguishes different process calculi.
Sequential composition.
Sometimes interactions must be temporally ordered. For example, it might be desirable to specify algorithms such as: "first receive some data on formula_5 and then send that data on formula_8". "Sequential composition" can be used for such purposes. It is well known from other models of computation. In process calculi, the sequentialisation operator is usually integrated with input or output, or both. For example, the process formula_9 will wait for an input on formula_5. Only when this input has occurred will the process formula_0 be activated, with the received data through formula_5 substituted for identifier formula_10.
Reduction semantics.
The key operational reduction rule, containing the computational essence of process calculi, can be given solely in terms of parallel composition, sequentialization, input, and output. The details of this reduction vary among the calculi, but the essence remains roughly the same. The reduction rule is:
formula_11
The interpretation of this reduction rule is: the process formula_12 sends a message, the data formula_6, along the channel formula_5 and then proceeds as formula_0, while the process formula_13 receives that message on the channel formula_5 and then proceeds as formula_14, that is, as formula_1 with the place-holder formula_7 substituted by formula_6, the data received on formula_5.
The class of processes that formula_0 is allowed to range over as the continuation of the output operation substantially influences the properties of the calculus.
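To make the reduction rule concrete, here is a minimal Python sketch (an illustration written for this article, not an implementation of any particular calculus or library): an output process and an input process composed in parallel synchronize on a shared channel name, and the transmitted datum is substituted into the receiver's continuation.

```python
# Minimal sketch of the reduction rule: x<y>.P | x(v).Q  -->  P | Q[y/v]
# Processes are plain tagged tuples; the continuation of an input is a
# function of the received value, which models the substitution [y/v].

def out(chan, data, cont):            # x<y>.P
    return ("out", chan, data, cont)

def inp(chan, cont):                  # x(v).Q, where cont maps v to Q
    return ("in", chan, cont)

NIL = ("nil",)                        # the null process

def reduce_once(procs):
    """Perform one synchronization between a matching output and input, if any."""
    for i, p in enumerate(procs):
        for j, q in enumerate(procs):
            if i != j and p[0] == "out" and q[0] == "in" and p[1] == q[1]:
                _, chan, data, p_cont = p
                _, _, q_cont = q
                rest = [r for k, r in enumerate(procs) if k not in (i, j)]
                return rest + [p_cont, q_cont(data)]   # P | Q[y/v]
    return procs                      # no reduction possible

# x<42>.0 | x(v).(y<v>.0)  reduces to  0 | y<42>.0
system = [out("x", 42, NIL), inp("x", lambda v: out("y", v, NIL))]
print(reduce_once(system))
```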
Hiding.
Processes do not limit the number of connections that can be made at a given interaction point. But interaction points allow interference (i.e. interaction). For the
synthesis of compact, minimal and compositional systems, the ability to restrict interference is crucial. "Hiding" operations allow control of the connections made between interaction points when composing
agents in parallel. Hiding can be denoted in a variety of ways. For example, in the π-calculus the hiding of a name formula_5 in formula_0 can be expressed as formula_15, while in CSP it might be written as formula_16.
Recursion and replication.
The operations presented so far describe only finite interaction and are consequently insufficient for full computability, which includes non-terminating behaviour. "Recursion" and "replication" are operations that allow finite descriptions of infinite behaviour. Recursion is well known from the sequential world. Replication formula_17 can be understood as abbreviating the parallel composition of a countably infinite number of formula_0 processes:
formula_18
Null process.
Process calculi generally also include a "null process" (variously denoted as formula_19, formula_20, formula_21, formula_22, or some other appropriate symbol) which has no interaction points. It is utterly inactive and its sole purpose is to act as the inductive anchor on top of which more interesting processes can be generated.
Discrete and continuous process algebra.
Process algebra has been studied for discrete time and continuous time (real time or dense time).
History.
In the first half of the 20th century, various formalisms were proposed to capture the informal concept of a "computable function", with μ-recursive functions, Turing machines and the lambda calculus possibly being the best-known examples today. The surprising fact that they are essentially equivalent, in the sense that they are all encodable into each other, supports the Church-Turing thesis. Another shared feature is more rarely commented on: they all are most readily understood as models of "sequential" computation. The subsequent consolidation of computer science required a more subtle formulation of the notion of computation, in particular explicit representations of concurrency and communication. Models of concurrency such as the process calculi, Petri nets in 1962, and the actor model in 1973 emerged from this line of inquiry.
Research on process calculi began in earnest with Robin Milner's seminal work on the Calculus of Communicating Systems (CCS) during the period from 1973 to 1980. C.A.R. Hoare's Communicating Sequential Processes (CSP) first appeared in 1978, and was subsequently developed into a full-fledged process calculus during the early 1980s. There was much cross-fertilization of ideas between CCS and CSP as they developed. In 1982 Jan Bergstra and Jan Willem Klop began work on what came to be known as the Algebra of Communicating Processes (ACP), and introduced the term "process algebra" to describe their work. CCS, CSP, and ACP constitute the three major branches of the process calculi family: the majority of the other process calculi can trace their roots to one of these three calculi.
Current research.
Various process calculi have been studied and not all of them fit the paradigm sketched here. The most prominent example may be the ambient calculus. This is to be expected as process calculi are an active field of study. Currently research on process calculi focuses on the following problems.
Software implementations.
The ideas behind process algebra have given rise to several tools including:
Relationship to other models of concurrency.
The history monoid is the free object that is generically able to represent the histories of individual communicating processes. A process calculus is then a formal language imposed on a history monoid in a consistent fashion. That is, a history monoid can only record a sequence of events, with synchronization, but does not specify the allowed state transitions. Thus, a process calculus is to a history monoid what a formal language is to a free monoid (a formal language is a subset of the set of all possible finite-length strings of an alphabet generated by the Kleene star).
The use of channels for communication is one of the features distinguishing the process calculi from other models of concurrency, such as Petri nets and the actor model (see Actor model and process calculi). One of the fundamental motivations for including channels in the process calculi was to enable certain algebraic techniques, thereby making it easier to reason about processes algebraically.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathit{P}"
},
{
"math_id": 1,
"text": "\\mathit{Q}"
},
{
"math_id": 2,
"text": "P \\vert Q"
},
{
"math_id": 3,
"text": "x(v)"
},
{
"math_id": 4,
"text": "x\\langle y\\rangle"
},
{
"math_id": 5,
"text": "\\mathit{x}"
},
{
"math_id": 6,
"text": "y"
},
{
"math_id": 7,
"text": "v"
},
{
"math_id": 8,
"text": "\\mathit{y}"
},
{
"math_id": 9,
"text": "x(v)\\cdot P"
},
{
"math_id": 10,
"text": "\\mathit{v}"
},
{
"math_id": 11,
"text": "\nx\\langle y\\rangle \\cdot P \\; \\vert \\; x(v)\\cdot Q \\longrightarrow P \\; \\vert \\; Q[^y\\!/\\!_v]\n"
},
{
"math_id": 12,
"text": "x\\langle y\\rangle \\cdot P"
},
{
"math_id": 13,
"text": "x(v)\\cdot Q"
},
{
"math_id": 14,
"text": "Q[^y\\!/\\!_v]"
},
{
"math_id": 15,
"text": "(\\nu\\; x)P"
},
{
"math_id": 16,
"text": "P \\setminus \\{x\\}"
},
{
"math_id": 17,
"text": "!P"
},
{
"math_id": 18,
"text": "\n!P = P \\mid !P\n\n"
},
{
"math_id": 19,
"text": "\\mathit{nil}"
},
{
"math_id": 20,
"text": "0"
},
{
"math_id": 21,
"text": "\\mathit{STOP}"
},
{
"math_id": 22,
"text": "\\delta"
}
] | https://en.wikipedia.org/wiki?curid=853240 |
8532654 | Mice problem | Mathematical problem
In mathematics, the mice problem is a continuous pursuit–evasion problem in which a number of mice (or insects, dogs, missiles, etc.) are considered to be placed at the corners of a regular polygon. In the classic setup, each then begins to move towards its immediate neighbour (clockwise or anticlockwise). The goal is often to find out at what time the mice meet.
The most common version has the mice starting at the corners of a unit square, moving at unit speed. In this case they meet after a time of one unit, because the distance between two neighboring mice always decreases at a speed of one unit. More generally, for a regular polygon of formula_0 unit-length sides, the distance between neighboring mice decreases at a speed of formula_1, so they meet after a time of formula_2.
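As an informal numerical check of this meeting time (a sketch added for illustration, not part of the classical treatment), the following Python snippet integrates the pursuit in small time steps for mice on a regular polygon with unit side length and unit speed; the step size and stopping threshold are arbitrary, so the simulated times only approximate the closed-form values.

```python
import math

def meeting_time(n, dt=1e-4, eps=1e-3):
    """Simulate n mice on a regular n-gon with unit side length and unit speed."""
    R = 1.0 / (2.0 * math.sin(math.pi / n))        # circumradius for unit side
    mice = [(R * math.cos(2 * math.pi * i / n),
             R * math.sin(2 * math.pi * i / n)) for i in range(n)]
    t = 0.0
    while True:
        dx = mice[1][0] - mice[0][0]
        dy = mice[1][1] - mice[0][1]
        if math.hypot(dx, dy) < eps:               # neighbours have essentially met
            return t
        new = []
        for i in range(n):                          # each mouse steps towards its neighbour
            tx, ty = mice[(i + 1) % n]
            x, y = mice[i]
            d = math.hypot(tx - x, ty - y)
            new.append((x + dt * (tx - x) / d, y + dt * (ty - y) / d))
        mice = new
        t += dt

for n in (3, 4, 6):
    print(n, meeting_time(n), 1.0 / (1.0 - math.cos(2 * math.pi / n)))
```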
Path of the mice.
For all regular polygons, each mouse traces out a pursuit curve in the shape of a logarithmic spiral. These curves meet in the center of the polygon.
In media.
In "", the mice problem is discussed. Instead of 4 mice, 4 ballroom dancers are used.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "1 - \\cos(2\\pi/n)"
},
{
"math_id": 2,
"text": "1/\\bigl(1 - \\cos(2\\pi/n)\\bigr)"
}
] | https://en.wikipedia.org/wiki?curid=8532654 |
8533497 | Composite field (mathematics) | A composite field or compositum of fields is an object of study in field theory. Let "K" be a field, and let formula_0, formula_1 be subfields of "K". Then the (internal) composite of formula_0 and formula_1 is the field defined as the intersection of all subfields of "K" containing both formula_0 and formula_1. The composite is commonly denoted formula_2.
Properties.
Equivalently to the definition via intersections, we can define the composite formula_2 to be "the smallest" subfield of "K" that contains both formula_0 and formula_1. While the well-definedness of the intersection definition hinges only on the fact that an intersection of fields is itself a field, this formulation relies on two auxiliary assertions: 1. that there exist "minimal" subfields of "K" that include formula_0 and formula_1, and 2. that such a minimal subfield is unique and therefore justly called "the smallest".
It can also be defined using the field of fractions
formula_3
where formula_4 is the set of all formula_5-rational expressions in finitely many elements of formula_6.
Let formula_7 be a common subfield and let formula_8 be a Galois extension. Then formula_9 and formula_10 are both also Galois, and there is an isomorphism given by restriction
formula_11
For finite field extensions this can be found explicitly in Milne, and for infinite extensions it follows since infinite Galois extensions are precisely those extensions that are unions of an (infinite) set of finite Galois extensions.
If, additionally, formula_12 is a Galois extension, then formula_13 and formula_14 are both also Galois, and the map
formula_15
is a group homomorphism which is an isomorphism onto the subgroup
formula_16
See Milne.
Both properties are particularly useful for formula_17 and their statements simplify accordingly in this special case. In particular formula_18 is always an isomorphism in this case.
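As a concrete illustration of these statements (a standard example, added here and not taken from the cited source), consider the subfields $\mathbb{Q}(\sqrt{2})$ and $\mathbb{Q}(\sqrt{3})$ of the complex numbers. Both are Galois over $\mathbb{Q}$ with group $\mathbb{Z}/2\mathbb{Z}$, their intersection is $\mathbb{Q}$, and their composite is
$$\mathbb{Q}(\sqrt{2})\,\mathbb{Q}(\sqrt{3})=\mathbb{Q}(\sqrt{2},\sqrt{3}),\qquad [\mathbb{Q}(\sqrt{2},\sqrt{3}):\mathbb{Q}]=4,$$
with
$$\operatorname{Gal}\bigl(\mathbb{Q}(\sqrt{2},\sqrt{3})/\mathbb{Q}\bigr)\cong\operatorname{Gal}\bigl(\mathbb{Q}(\sqrt{2})/\mathbb{Q}\bigr)\times\operatorname{Gal}\bigl(\mathbb{Q}(\sqrt{3})/\mathbb{Q}\bigr)\cong\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z},$$
exactly as predicted: since the intersection of the two fields is the base field, the homomorphism formula_18 above is an isomorphism.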
External composite.
When formula_0 and formula_1 are not regarded as subfields of a common field then the (external) composite is defined using the tensor product of fields. Note that some care has to be taken for the choice of the common subfield over which this tensor product is performed, otherwise the tensor product might come out to be only an algebra which is not a field.
Generalizations.
If formula_19 is a set of subfields of a fixed field "K" indexed by the set "I", the generalized composite field can be defined via the intersection
formula_20
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_1"
},
{
"math_id": 1,
"text": "E_2"
},
{
"math_id": 2,
"text": "E_1E_2"
},
{
"math_id": 3,
"text": "E_1E_2=E_1(E_2)=E_2(E_1),"
},
{
"math_id": 4,
"text": "F(S)"
},
{
"math_id": 5,
"text": "F"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "L\\subseteq E_1\\cap E_2"
},
{
"math_id": 8,
"text": "E_1/L"
},
{
"math_id": 9,
"text": "E_1E_2/E_2"
},
{
"math_id": 10,
"text": "E_1/(E_1\\cap E_2)"
},
{
"math_id": 11,
"text": "\\text{Gal}(E_1E_2/E_2)\\rightarrow\\text{Gal}(E_1/(E_1\\cap E_2)), \\sigma\\mapsto\\sigma|_{E_1}."
},
{
"math_id": 12,
"text": "E_2/L"
},
{
"math_id": 13,
"text": "E_1E_2/L"
},
{
"math_id": 14,
"text": "(E_1\\cap E_2)/L"
},
{
"math_id": 15,
"text": "\\psi:\\text{Gal}(E_1E_2/L)\\rightarrow\\text{Gal}(E_1/L)\\times\\text{Gal}(E_2/L), \\sigma\\mapsto(\\sigma|_{E_1},\\sigma|_{E_2})"
},
{
"math_id": 16,
"text": "H=\\{(\\sigma_1,\\sigma_2):\\sigma_1|_{E_1\\cap E_2}=\\sigma_2|_{E_1\\cap E_2}\\}=\\text{Gal}(E_1/L)\\times_{\\text{Gal}((E_1\\cap E_2)/L)}\\text{Gal}(E_2/L)\\subseteq\\text{Gal}(E_1/L)\\times\\text{Gal}(E_2/L)."
},
{
"math_id": 17,
"text": "L=E_1\\cap E_2"
},
{
"math_id": 18,
"text": "\\psi"
},
{
"math_id": 19,
"text": "\\mathcal{E}=\\left\\{E_i:i\\in I\\right\\}"
},
{
"math_id": 20,
"text": "\\bigvee_{i\\in I}E_i = \\bigcap_{F\\subseteq K\\text{ s.t. }\\forall i \\in I: E_i\\subseteq F}F."
}
] | https://en.wikipedia.org/wiki?curid=8533497 |
8536 | Differential cryptanalysis | General form of cryptanalysis applicable primarily to block ciphers
Differential cryptanalysis is a general form of cryptanalysis applicable primarily to block ciphers, but also to stream ciphers and cryptographic hash functions. In the broadest sense, it is the study of how differences in information input can affect the resultant difference at the output. In the case of a block cipher, it refers to a set of techniques for tracing differences through the network of transformation, discovering where the cipher exhibits non-random behavior, and exploiting such properties to recover the secret key (cryptography key).
History.
The discovery of differential cryptanalysis is generally attributed to Eli Biham and Adi Shamir in the late 1980s, who published a number of attacks against various block ciphers and hash functions, including a theoretical weakness in the Data Encryption Standard (DES). It was noted by Biham and Shamir that DES was surprisingly resistant to differential cryptanalysis, but small modifications to the algorithm would make it much more susceptible.
In 1994, a member of the original IBM DES team, Don Coppersmith, published a paper stating that differential cryptanalysis was known to IBM as early as 1974, and that defending against differential cryptanalysis had been a design goal. According to author Steven Levy, IBM had discovered differential cryptanalysis on its own, and the NSA was apparently well aware of the technique. IBM kept some secrets, as Coppersmith explains: "After discussions with NSA, it was decided that disclosure of the design considerations would reveal the technique of differential cryptanalysis, a powerful technique that could be used against many ciphers. This in turn would weaken the competitive advantage the United States enjoyed over other countries in the field of cryptography." Within IBM, differential cryptanalysis was known as the "T-attack" or "Tickle attack".
While DES was designed with resistance to differential cryptanalysis in mind, other contemporary ciphers proved to be vulnerable. An early target for the attack was the FEAL block cipher. The original proposed version with four rounds (FEAL-4) can be broken using only eight chosen plaintexts, and even a 31-round version of FEAL is susceptible to the attack. In contrast, the scheme can successfully cryptanalyze DES with an effort on the order of 2^47 chosen plaintexts.
Attack mechanics.
Differential cryptanalysis is usually a chosen plaintext attack, meaning that the attacker must be able to obtain ciphertexts for some set of plaintexts of their choosing. There are, however, extensions that would allow a known plaintext or even a ciphertext-only attack. The basic method uses pairs of plaintexts related by a constant "difference". Difference can be defined in several ways, but the eXclusive OR (XOR) operation is usual. The attacker then computes the differences of the corresponding ciphertexts, hoping to detect statistical patterns in their distribution. The resulting pair of differences is called a differential. Their statistical properties depend upon the nature of the S-boxes used for encryption, so the attacker analyses differentials formula_0 where
formula_1
(and ⊕ denotes exclusive or) for each such S-box "S". In the basic attack, one particular ciphertext difference is expected to be especially frequent. In this way, the cipher can be distinguished from random. More sophisticated variations allow the key to be recovered faster than an exhaustive search.
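As a hedged illustration of how these differentials are tabulated (using a small made-up 3-bit S-box, not one from any real cipher), the following Python snippet counts how often each output difference occurs for each input difference; a cryptanalyst looks for the unusually large entries in such a difference distribution table.

```python
# Difference distribution table (DDT) for a toy 3-bit S-box (values are arbitrary).
SBOX = [0, 3, 5, 6, 1, 4, 7, 2]           # a hypothetical 3-bit permutation
N = len(SBOX)

ddt = [[0] * N for _ in range(N)]
for dx in range(N):
    for x in range(N):
        dy = SBOX[x ^ dx] ^ SBOX[x]       # output difference S(x XOR dx) XOR S(x)
        ddt[dx][dy] += 1

for dx, row in enumerate(ddt):
    print(dx, row)
# Row 0 is always [N, 0, ..., 0]; large entries elsewhere indicate useful differentials.
```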
In the most basic form of key recovery through differential cryptanalysis, an attacker requests the ciphertexts for a large number of plaintext pairs, then assumes that the differential holds for at least "r" − 1 rounds, where "r" is the total number of rounds. The attacker then deduces which round keys (for the final round) are possible, assuming the difference between the blocks before the final round is fixed. When round keys are short, this can be achieved by simply exhaustively decrypting the ciphertext pairs one round with each possible round key. When one round key has been deemed a potential round key considerably more often than any other key, it is assumed to be the correct round key.
For any particular cipher, the input difference must be carefully selected for the attack to be successful. An analysis of the algorithm's internals is undertaken; the standard method is to trace a path of highly probable differences through the various stages of encryption, termed a "differential characteristic".
Since differential cryptanalysis became public knowledge, it has become a basic concern of cipher designers. New designs are expected to be accompanied by evidence that the algorithm is resistant to this attack, and many, including the Advanced Encryption Standard, have been proven secure against it.
Attack in detail.
The attack relies primarily on the fact that a given input/output difference pattern only occurs for certain values of inputs. Usually the attack is applied in essence to the non-linear components as if they were a solid component (usually they are in fact look-up tables or "S-boxes"). Observing the desired output difference (between two chosen or known plaintext inputs) "suggests" possible key values.
For example, if a differential of 1 => 1 (implying a difference in the least significant bit (LSB) of the input leads to an output difference in the LSB) occurs with probability of 4/256 (possible with the non-linear function in the AES cipher for instance) then for only 4 values (or 2 pairs) of inputs is that differential possible. Suppose we have a non-linear function where the key is XOR'ed before evaluation and the values that allow the differential are {2,3} and {4,5}. If the attacker sends in the values of {6, 7} and observes the correct output difference it means the key is either 6 ⊕ K = 2, or 6 ⊕ K = 4, meaning the key K is either 2 or 4.
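The key-guessing step in the example above can be sketched in a few lines of Python (again with a hypothetical toy S-box and key, purely to illustrate the idea): only the key values consistent with the observed output difference survive as candidates.

```python
# Toy key recovery: an unknown key is XORed onto the input before a small S-box.
SBOX = [0, 3, 5, 6, 1, 4, 7, 2]            # hypothetical 3-bit S-box
SECRET_KEY = 5                              # unknown to the attacker

def encrypt(x, key=SECRET_KEY):
    return SBOX[x ^ key]

dx = 1                                      # chosen input difference
x1, x2 = 6, 6 ^ dx                          # chosen plaintext pair
observed_dy = encrypt(x1) ^ encrypt(x2)     # the attacker sees only this difference

candidates = [k for k in range(len(SBOX))
              if SBOX[x1 ^ k] ^ SBOX[x2 ^ k] == observed_dy]
print(observed_dy, candidates)              # SECRET_KEY is among the candidates
```

Repeating this with further chosen pairs (and differentials) shrinks the candidate set until the correct key stands out.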
In essence, to protect a cipher from the attack, for an "n"-bit non-linear function one would ideally seek a maximum differential probability as close to 2^(−("n" − 1)) as possible, achieving "differential uniformity". When this happens, the differential attack requires as much work to determine the key as simply brute forcing the key.
The AES non-linear function has a maximum differential probability of 4/256 (most entries, however, are either 0 or 2), meaning that in theory one could determine the key with half as much work as brute force. However, the high branch number of AES prevents any high-probability trails from existing over multiple rounds. In fact, the AES cipher would be just as immune to differential and linear attacks with a much "weaker" non-linear function. The incredibly high branch number (active S-box count) of 25 over 4 rounds means that over 8 rounds no attack involves fewer than 50 non-linear transforms, meaning that the probability of success does not exceed Pr[attack] ≤ Pr[best attack on S-box]^50. For example, with the current S-box AES emits no fixed differential with a probability higher than (4/256)^50 or 2^−300, which is far lower than the required threshold of 2^−128 for a 128-bit block cipher. This would have allowed room for a more efficient S-box: even if it were 16-uniform, the probability of attack would still have been 2^−200.
There exist no bijections for even-sized inputs/outputs with 2-uniformity. They do exist in binary fields of odd degree (such as GF(2^7)) using either cubing or inversion (there are other exponents that can be used as well). For instance, S(x) = x^3 in any binary field of odd degree is immune to differential and linear cryptanalysis. This is in part why the MISTY designs use 7- and 9-bit functions in the 16-bit non-linear function. What these functions gain in immunity to differential and linear attacks, they lose to algebraic attacks. That is, they are possible to describe and solve via a SAT solver. This is in part why AES (for instance) has an affine mapping after the inversion.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "(\\Delta_x, \\Delta_y)"
},
{
"math_id": 1,
"text": "\\Delta_y = S(x \\oplus \\Delta_x) \\oplus S(x)"
}
] | https://en.wikipedia.org/wiki?curid=8536 |
8536059 | Clutching construction | Topological construct
In topology, a branch of mathematics, the clutching construction is a way of constructing fiber bundles, particularly vector bundles on spheres.
Definition.
Consider the sphere formula_0 as the union of the upper and lower hemispheres formula_1 and formula_2 along their intersection, the equator, an formula_3.
Given trivialized fiber bundles with fiber formula_4 and structure group formula_5 over the two hemispheres, then given a map formula_6 (called the "clutching map"), glue the two trivial bundles together via "f".
Formally, it is the coequalizer of the inclusions formula_7 via formula_8 and formula_9: glue the two bundles together on the boundary, with a twist.
Thus we have a map formula_10: clutching information on the equator yields a fiber bundle on the total space.
In the case of vector bundles, this yields formula_11, and indeed this map is an isomorphism (under connect sum of spheres on the right).
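A standard illustration (added here as an example; it uses complex line bundles rather than the real bundles of the formula above): complex line bundles over $S^2$ arise by clutching two trivial line bundles over the hemispheres along a map of the equatorial circle into the structure group $U(1)$. Up to homotopy such a map is
$$f_n\colon S^1\to U(1),\qquad f_n(z)=z^n,$$
so the bundles are classified by
$$\pi_1\bigl(U(1)\bigr)\cong\mathbb{Z},$$
the integer $n$ being (up to a sign convention) the first Chern number of the line bundle; $n=\pm 1$ gives the tautological line bundle over $\mathbb{CP}^1\cong S^2$ and its dual.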
Generalization.
The above can be generalized by replacing formula_12 and formula_0 with any closed triad formula_13, that is, a space "X", together with two closed subsets "A" and "B" whose union is "X". Then a clutching map on formula_14 gives a vector bundle on "X".
Classifying map construction.
Let formula_15 be a fibre bundle with fibre formula_4. Let formula_16 be a collection of pairs formula_17 such that formula_18 is a local trivialization of formula_19 over formula_20. Moreover, we demand that the union of all the sets formula_21 is formula_22 (i.e. the collection is an atlas of trivializations formula_23).
Consider the space formula_24 modulo the equivalence relation formula_25 is equivalent to formula_26 if and only if formula_27 and formula_28. By design, the local trivializations formula_29 give a fibrewise equivalence between this quotient space and the fibre bundle formula_19.
Consider the space formula_30 modulo the equivalence relation formula_31 is equivalent to formula_32 if and only if formula_27 and consider formula_33 to be a map formula_34 then we demand that formula_35. That is, in our re-construction of formula_19 we are replacing the fibre formula_4 by the topological group of homeomorphisms of the fibre, formula_36. If the structure group of the bundle is known to reduce, you could replace formula_36 with the reduced structure group. This is a bundle over formula_22 with fibre formula_36 and is a principal bundle. Denote it by formula_37. The relation to the previous bundle is induced from the principal bundle: formula_38.
So we have a principal bundle formula_39. The theory of classifying spaces gives us an induced push-forward fibration formula_40 where formula_41 is the classifying space of formula_36. Here is an outline:
Given a formula_5-principal bundle formula_42, consider the space formula_43. This space is a fibration in two different ways:
1) Project onto the first factor: formula_44. The fibre in this case is formula_45, which is a contractible space by the definition of a classifying space.
2) Project onto the second factor: formula_46. The fibre in this case is formula_47.
Thus we have a fibration formula_48. This map is called the classifying map of the fibre bundle formula_15 since 1) the principal bundle formula_42 is the pull-back of the bundle formula_49 along the classifying map and 2) The bundle formula_19 is induced from the principal bundle as above.
Contrast with twisted spheres.
Twisted spheres are sometimes referred to as a "clutching-type" construction, but this is misleading: the clutching construction is properly about fiber bundles. In a twisted sphere the two hemispheres of the base are glued along a self-homeomorphism formula_50 of the equator, so the twisting happens in the base itself, whereas in the clutching construction the two base hemispheres are glued by the standard identification and the twisting happens in the fibers, via a map formula_51 into the structure group.
Examples.
The clutching construction is used to form the chiral anomaly, by gluing together a pair of self-dual curvature forms. Such forms are locally exact on each hemisphere, as they are differentials of the Chern–Simons 3-form; by gluing them together, the curvature form is no longer globally exact (and so has a non-trivial homotopy group formula_52)
Similar constructions can be found for various instantons, including the Wess–Zumino–Witten model. | [
{
"math_id": 0,
"text": "S^n"
},
{
"math_id": 1,
"text": "D^n_+"
},
{
"math_id": 2,
"text": "D^n_-"
},
{
"math_id": 3,
"text": "S^{n-1}"
},
{
"math_id": 4,
"text": "F"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "f\\colon S^{n-1} \\to G"
},
{
"math_id": 7,
"text": "S^{n-1} \\times F \\to D^n_+ \\times F \\coprod D^n_- \\times F"
},
{
"math_id": 8,
"text": "(x,v) \\mapsto (x,v) \\in D^n_+ \\times F"
},
{
"math_id": 9,
"text": "(x,v) \\mapsto (x,f(x)(v)) \\in D^n_- \\times F"
},
{
"math_id": 10,
"text": "\\pi_{n-1} G \\to \\text{Fib}_F(S^n)"
},
{
"math_id": 11,
"text": "\\pi_{n-1} O(k) \\to \\text{Vect}_k(S^n)"
},
{
"math_id": 12,
"text": "D^n_\\pm"
},
{
"math_id": 13,
"text": "(X;A,B)"
},
{
"math_id": 14,
"text": "A \\cap B"
},
{
"math_id": 15,
"text": "p \\colon M \\to N"
},
{
"math_id": 16,
"text": "\\mathcal U"
},
{
"math_id": 17,
"text": "(U_i,q_i)"
},
{
"math_id": 18,
"text": "q_i \\colon p^{-1}(U_i) \\to N \\times F"
},
{
"math_id": 19,
"text": "p"
},
{
"math_id": 20,
"text": "U_i \\subset N"
},
{
"math_id": 21,
"text": "U_i"
},
{
"math_id": 22,
"text": "N"
},
{
"math_id": 23,
"text": "\\coprod_i U_i = N"
},
{
"math_id": 24,
"text": "\\coprod_i U_i\\times F"
},
{
"math_id": 25,
"text": "(u_i,f_i)\\in U_i \\times F"
},
{
"math_id": 26,
"text": "(u_j,f_j)\\in U_j \\times F"
},
{
"math_id": 27,
"text": "U_i \\cap U_j \\neq \\phi"
},
{
"math_id": 28,
"text": "q_i \\circ q_j^{-1}(u_j,f_j) = (u_i,f_i)"
},
{
"math_id": 29,
"text": "q_i"
},
{
"math_id": 30,
"text": "\\coprod_i U_i\\times \\operatorname{Homeo}(F)"
},
{
"math_id": 31,
"text": "(u_i,h_i)\\in U_i \\times \\operatorname{Homeo}(F)"
},
{
"math_id": 32,
"text": "(u_j,h_j)\\in U_j \\times \\operatorname{Homeo}(F)"
},
{
"math_id": 33,
"text": "q_i \\circ q_j^{-1}"
},
{
"math_id": 34,
"text": "q_i \\circ q_j^{-1} : U_i \\cap U_j \\to \\operatorname{Homeo}(F)"
},
{
"math_id": 35,
"text": "q_i \\circ q_j^{-1}(u_j)(h_j)=h_i"
},
{
"math_id": 36,
"text": "\\operatorname{Homeo}(F)"
},
{
"math_id": 37,
"text": "p \\colon M_p \\to N"
},
{
"math_id": 38,
"text": "(M_p \\times F)/\\operatorname{Homeo}(F) = M"
},
{
"math_id": 39,
"text": "\\operatorname{Homeo}(F) \\to M_p \\to N"
},
{
"math_id": 40,
"text": "M_p \\to N \\to B(\\operatorname{Homeo}(F))"
},
{
"math_id": 41,
"text": "B(\\operatorname{Homeo}(F))"
},
{
"math_id": 42,
"text": "G \\to M_p \\to N"
},
{
"math_id": 43,
"text": "M_p \\times_{G} EG"
},
{
"math_id": 44,
"text": "M_p \\times_G EG \\to M_p/G = N"
},
{
"math_id": 45,
"text": "EG"
},
{
"math_id": 46,
"text": "M_p \\times_G EG \\to EG/G = BG"
},
{
"math_id": 47,
"text": "M_p"
},
{
"math_id": 48,
"text": "M_p \\to N \\simeq M_p\\times_G EG \\to BG"
},
{
"math_id": 49,
"text": "G \\to EG \\to BG"
},
{
"math_id": 50,
"text": "S^{n-1} \\to S^{n-1}"
},
{
"math_id": 51,
"text": "S^{n-1} \\to G"
},
{
"math_id": 52,
"text": "\\pi_3."
}
] | https://en.wikipedia.org/wiki?curid=8536059 |
8536216 | Generalized Poincaré conjecture | Whether a manifold which is a homotopy sphere is a sphere
In the mathematical area of topology, the generalized Poincaré conjecture is a statement that a manifold which is a homotopy sphere is a sphere. More precisely, one fixes a category of manifolds: topological (Top), piecewise linear (PL), or differentiable (Diff). Then the statement is
Every homotopy sphere (a closed "n"-manifold which is homotopy equivalent to the "n"-sphere) in the chosen category (i.e. topological manifolds, PL manifolds, or smooth manifolds) is isomorphic in the chosen category (i.e. homeomorphic, PL-isomorphic, or diffeomorphic) to the standard "n"-sphere.
The name derives from the Poincaré conjecture, which was made for (topological or PL) manifolds of dimension 3, where being a homotopy sphere is equivalent to being simply connected and closed. The generalized Poincaré conjecture is known to be true or false in a number of instances, due to the work of many distinguished topologists, including the Fields medal awardees John Milnor, Steve Smale, Michael Freedman, and Grigori Perelman.
Status.
Here is a summary of the status of the generalized Poincaré conjecture in various settings.
Thus the veracity of the Poincaré conjectures changes according to which category it is formulated in. More generally the notion of isomorphism differs between the categories Top, PL, and Diff. It is the same in dimension 3 and below. In dimension 4, PL and Diff agree, but Top differs. In dimensions above 6 they all differ. In dimensions 5 and 6 every PL manifold admits an infinitely differentiable structure that is so-called "Whitehead compatible".
History.
The cases "n" = 1 and 2 have long been known by the classification of manifolds in those dimensions.
For a PL or smooth homotopy n-sphere, in 1960 Stephen Smale proved for formula_1 that it was homeomorphic to the "n"-sphere and subsequently extended his proof to formula_2; he received a Fields Medal for his work in 1966. Shortly after Smale's announcement of a proof, John Stallings gave a different proof for dimensions at least 7 that a PL homotopy "n"-sphere was homeomorphic to the "n"-sphere, using the notion of "engulfing". E. C. Zeeman modified Stallings's construction to work in dimensions 5 and 6. In 1962, Smale proved that a PL homotopy "n"-sphere is PL-isomorphic to the standard PL "n"-sphere for "n" at least 5. In 1966, M. H. A. Newman extended PL engulfing to the topological situation and proved that for formula_3 a topological homotopy "n"-sphere is homeomorphic to the "n"-sphere.
Michael Freedman solved the topological case formula_4 in 1982 and received a Fields Medal in 1986. The initial proof consisted of a 50-page outline, with many details missing. Freedman gave a series of lectures at the time, convincing experts that the proof was correct. A project to produce a written version of the proof with background and all details filled in began in 2013, with Freedman's support. The project's output, edited by Stefan Behrens, Boldizsar Kalmar, Min Hoon Kim, Mark Powell, and Arunima Ray, with contributions from 20 mathematicians, was published in August 2021 in the form of a 496-page book, "The Disc Embedding Theorem".
Grigori Perelman solved the case formula_5 (where the topological, PL, and differentiable cases all coincide) in 2003 in a sequence of three papers. He was offered a Fields Medal in August 2006 and the Millennium Prize from the Clay Mathematics Institute in March 2010, but declined both.
Exotic spheres.
The generalized Poincaré conjecture is true topologically, but false smoothly in some dimensions. This results from the construction of the exotic spheres, manifolds that are homeomorphic, but not diffeomorphic, to the standard sphere, which can be interpreted as non-standard smooth structures on the standard (topological) sphere.
Thus the homotopy spheres that John Milnor produced are homeomorphic (Top-isomorphic, and indeed piecewise linear homeomorphic) to the standard sphere formula_6, but are not diffeomorphic (Diff-isomorphic) to it, and thus are exotic spheres: they can be interpreted as non-standard differentiable structures on the standard sphere.
Michel Kervaire and Milnor showed that the oriented 7-sphere has 28 different smooth structures (or 15 ignoring orientations), and in higher dimensions there are usually many different smooth structures on a sphere. It is suspected that certain differentiable structures on the 4-sphere, called Gluck twists, are not isomorphic to the standard one, but at the moment there is no known topological invariant capable of distinguishing different smooth structures on a 4-sphere.
PL.
For piecewise linear manifolds, the Poincaré conjecture is true except possibly in dimension 4, where the answer is unknown, and equivalent to the smooth case.
In other words, every compact PL manifold of dimension not equal to 4 that is homotopy equivalent to a sphere is PL isomorphic to a sphere.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ge 64"
},
{
"math_id": 1,
"text": "n\\ge 7"
},
{
"math_id": 2,
"text": "n\\ge 5"
},
{
"math_id": 3,
"text": "n \\ge 5"
},
{
"math_id": 4,
"text": "n = 4"
},
{
"math_id": 5,
"text": "n = 3"
},
{
"math_id": 6,
"text": "S^n"
}
] | https://en.wikipedia.org/wiki?curid=8536216 |
853778 | Dispersion relation | Relation of wavelength/wavenumber as a function of a wave's frequency
In the physical sciences and electrical engineering, dispersion relations describe the effect of dispersion on the properties of waves in a medium. A dispersion relation relates the wavelength or wavenumber of a wave to its frequency. Given the dispersion relation, one can calculate the frequency-dependent phase velocity and group velocity of each sinusoidal component of a wave in the medium, as a function of frequency. In addition to the geometry-dependent and material-dependent dispersion relations, the overarching Kramers–Kronig relations describe the frequency-dependence of wave propagation and attenuation.
Dispersion may be caused either by geometric boundary conditions (waveguides, shallow water) or by interaction of the waves with the transmitting medium. Elementary particles, considered as matter waves, have a nontrivial dispersion relation, even in the absence of geometric constraints and other media.
In the presence of dispersion, a wave does not propagate with an unchanging waveform, giving rise to the distinct frequency-dependent phase velocity and group velocity.
Dispersion.
Dispersion occurs when sinusoidal waves of different wavelengths have different propagation velocities, so that a wave packet of mixed wavelengths tends to spread out in space. The speed of a plane wave, formula_0, is a function of the wave's wavelength formula_1:
formula_2
The wave's speed, wavelength, and frequency, "f", are related by the identity
formula_3
The function formula_4 expresses the dispersion relation of the given medium. Dispersion relations are more commonly expressed in terms of the angular frequency formula_5 and wavenumber formula_6. Rewriting the relation above in these variables gives
formula_7
where we now view "f" as a function of "k". The use of "ω"("k") to describe the dispersion relation has become standard because both the phase velocity "ω"/"k" and the group velocity "dω"/"dk" have convenient representations via this function.
The plane waves being considered can be described by
formula_8
where "A" is the amplitude of the wave, "A"0 = "A"(0, 0), "x" is a position along the wave's direction of travel, and "t" is the time at which the wave is described.
Plane waves in vacuum.
Plane waves in vacuum are the simplest case of wave propagation: no geometric constraint, no interaction with a transmitting medium.
Electromagnetic waves in vacuum.
For electromagnetic waves in vacuum, the angular frequency is proportional to the wavenumber:
formula_9
This is a "linear" dispersion relation. In this case, the phase velocity and the group velocity are the same:
formula_10
and thus both are equal to the speed of light in vacuum, which is frequency-independent.
De Broglie dispersion relations.
For de Broglie matter waves the frequency dispersion relation is non-linear:
formula_11
The equation says the matter wave frequency formula_12 in vacuum varies with wavenumber (formula_13) in the non-relativistic approximation. The variation has two parts: a constant part due to the de Broglie frequency of the rest mass (formula_14) and a quadratic part due to kinetic energy.
Derivation.
While applications of matter waves occur at non-relativistic velocity, de Broglie applied special relativity to derive his waves.
Starting from the relativistic energy–momentum relation:
formula_15
use the de Broglie relations for energy and momentum for matter waves,
formula_16
where "ω" is the angular frequency and k is the wavevector with magnitude , equal to the wave number. Divide by formula_17 and take the square root. This gives the relativistic frequency dispersion relation:
formula_18
Practical work with matter waves occurs at non-relativistic velocity. To approximate, we pull out the rest-mass dependent frequency:
formula_19
Then we see that the formula_20 factor is very small so for formula_21 not too large, we expand formula_22 and multiply:
formula_11
This gives the non-relativistic approximation discussed above.
If we start with the non-relativistic Schrödinger equation we will end up without the first, rest mass, term.
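As a numerical aside (a sketch added for illustration), one can compare the exact relativistic dispersion with this non-relativistic approximation for an electron; the wavenumber chosen below is an arbitrary illustrative value.

```python
import math

hbar = 1.054571817e-34    # J*s
c = 2.99792458e8          # m/s
m0 = 9.1093837015e-31     # kg, electron rest mass

def omega_rel(k):
    """Relativistic matter-wave dispersion."""
    return math.sqrt((k * c) ** 2 + (m0 * c ** 2 / hbar) ** 2)

def omega_nonrel(k):
    """Rest-mass term plus the quadratic kinetic-energy term."""
    return m0 * c ** 2 / hbar + hbar * k ** 2 / (2.0 * m0)

k = 1.0e10                # 1/m, an arbitrary illustrative wavenumber
print(omega_rel(k), omega_nonrel(k))
print((omega_rel(k) - omega_nonrel(k)) / omega_rel(k))   # tiny relative difference
```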
Frequency versus wavenumber.
As mentioned above, when the focus in a medium is on refraction rather than absorption—that is, on the real part of the refractive index—it is common to refer to the functional dependence of angular frequency on wavenumber as the "dispersion relation". For particles, this translates to a knowledge of energy as a function of momentum.
Waves and optics.
The name "dispersion relation" originally comes from optics. It is possible to make the effective speed of light dependent on wavelength by making light pass through a material which has a non-constant index of refraction, or by using light in a non-uniform medium such as a waveguide. In this case, the waveform will spread over time, such that a narrow pulse will become an extended pulse, i.e., be dispersed. In these materials, formula_23 is known as the group velocity and corresponds to the speed at which the peak of the pulse propagates, a value different from the phase velocity.
Deep water waves.
The dispersion relation for deep water waves is often written as
formula_24
where "g" is the acceleration due to gravity. Deep water, in this respect, is commonly denoted as the case where the water depth is larger than half the wavelength. In this case the phase velocity is
formula_25
and the group velocity is
formula_26
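A quick numerical check of the factor of two between phase and group velocity (an illustration with an arbitrarily chosen wavelength; the derivative is taken by finite differences):

```python
import math

g = 9.81                       # m/s^2

def omega(k):
    return math.sqrt(g * k)    # deep-water dispersion relation

wavelength = 100.0             # m, arbitrary example value
k = 2.0 * math.pi / wavelength

v_phase = omega(k) / k
dk = 1e-6
v_group = (omega(k + dk) - omega(k - dk)) / (2.0 * dk)   # numerical d(omega)/dk

print(v_phase, v_group, v_group / v_phase)               # ratio close to 0.5
```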
Waves on a string.
For an ideal string, the dispersion relation can be written as
formula_27
where "T" is the tension force in the string, and "μ" is the string's mass per unit length. As for the case of electromagnetic waves in vacuum, ideal strings are thus a non-dispersive medium, i.e. the phase and group velocities are equal and independent (to first order) of vibration frequency.
For a nonideal string, where stiffness is taken into account, the dispersion relation is written as
formula_28
where formula_29 is a constant that depends on the string.
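A short sketch (with arbitrary tension, density, and stiffness values) shows how the stiffness term makes the phase velocity grow with wavenumber, i.e. how the nonideal string becomes dispersive:

```python
import math

T = 60.0         # N, tension (arbitrary)
mu = 0.006       # kg/m, linear mass density (arbitrary)
alpha = 0.5      # stiffness constant (arbitrary units)

def omega(k):
    return math.sqrt((T / mu) * k ** 2 + alpha * k ** 4)

for k in (1.0, 10.0, 100.0):
    print(k, omega(k) / k)   # phase velocity rises with k once stiffness matters
```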
Electron band structure.
In the study of solids, the study of the dispersion relation of electrons is of paramount importance. The periodicity of crystals means that many levels of energy are possible for a given momentum and that some energies might not be available at any momentum. The collection of all possible energies and momenta is known as the band structure of a material. Properties of the band structure define whether the material is an insulator, semiconductor or conductor.
Phonons.
Phonons are to sound waves in a solid what photons are to light: they are the quanta that carry it. The dispersion relation of phonons is also non-trivial and important, being directly related to the acoustic and thermal properties of a material. For most systems, the phonons can be categorized into two main types: those whose bands become zero at the center of the Brillouin zone are called acoustic phonons, since they correspond to classical sound in the limit of long wavelengths. The others are optical phonons, since they can be excited by electromagnetic radiation.
Electron optics.
With high-energy electrons in a transmission electron microscope, the energy dependence of higher-order Laue zone (HOLZ) lines in convergent beam electron diffraction (CBED) patterns allows one, in effect, to "directly image" cross-sections of a crystal's three-dimensional dispersion surface. This dynamical effect has found application in the precise measurement of lattice parameters, beam energy, and more recently for the electronics industry: lattice strain.
History.
Isaac Newton studied refraction in prisms but failed to recognize the material dependence of the dispersion relation, dismissing the work of another researcher whose measurement of a prism's dispersion did not match Newton's own.
Dispersion of waves on water was studied by Pierre-Simon Laplace in 1776.
The universality of the Kramers–Kronig relations (1926–27) became apparent with subsequent papers on the dispersion relation's connection to causality in the scattering theory of all types of waves and particles.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "v"
},
{
"math_id": 1,
"text": "\\lambda"
},
{
"math_id": 2,
"text": "v = v(\\lambda)."
},
{
"math_id": 3,
"text": "v(\\lambda) = \\lambda\\ f(\\lambda)."
},
{
"math_id": 4,
"text": " f(\\lambda)"
},
{
"math_id": 5,
"text": "\\omega=2\\pi f"
},
{
"math_id": 6,
"text": "k=2 \\pi /\\lambda"
},
{
"math_id": 7,
"text": "\\omega(k)= v(k) \\cdot k."
},
{
"math_id": 8,
"text": "A(x, t) = A_0e^{2 \\pi i \\frac{x - v t}{\\lambda}}= A_0e^{i (k x - \\omega t)},"
},
{
"math_id": 9,
"text": "\\omega = c k."
},
{
"math_id": 10,
"text": " v = \\frac{\\omega}{k} = \\frac{d\\omega}{d k} = c,"
},
{
"math_id": 11,
"text": "\\omega(k) \\approx \\frac{m_0 c^2}{\\hbar} + \\frac{\\hbar k^2}{2m_{0} }\\,."
},
{
"math_id": 12,
"text": "\\omega"
},
{
"math_id": 13,
"text": "k=2\\pi/\\lambda"
},
{
"math_id": 14,
"text": "\\hbar \\omega_0 = m_{0}c^2"
},
{
"math_id": 15,
"text": "E^2 = (p \\textrm c)^2 + \\left(m_0 \\textrm c^2\\right)^2\\,"
},
{
"math_id": 16,
"text": "E = \\hbar \\omega \\,, \\quad \\mathbf{p} = \\hbar\\mathbf{k}\\,,"
},
{
"math_id": 17,
"text": "\\hbar"
},
{
"math_id": 18,
"text": " \\omega(k) = \\sqrt{k^2c^2 + \\left(\\frac{m_0c^2}{\\hbar}\\right)^2} \\,."
},
{
"math_id": 19,
"text": "\\omega = \\frac{m_0 c^2}{\\hbar}\\sqrt{1+ \\left( \\frac{k\\hbar}{m_{0} c} \\right)^2 } \\,."
},
{
"math_id": 20,
"text": "\\hbar/c"
},
{
"math_id": 21,
"text": "k"
},
{
"math_id": 22,
"text": "\\sqrt{1+x^2}\\approx 1+x^2/2,"
},
{
"math_id": 23,
"text": "\\frac{\\partial \\omega}{\\partial k}"
},
{
"math_id": 24,
"text": "\\omega = \\sqrt{gk},"
},
{
"math_id": 25,
"text": "v_p = \\frac{\\omega}{k} = \\sqrt{\\frac{g}{k}},"
},
{
"math_id": 26,
"text": "v_g = \\frac{d\\omega}{dk} = \\frac{1}{2} v_p."
},
{
"math_id": 27,
"text": "\\omega = k \\sqrt{\\frac{T}{\\mu}},"
},
{
"math_id": 28,
"text": "\\omega^2 = \\frac{T}{\\mu} k^2 + \\alpha k^4,"
},
{
"math_id": 29,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=853778 |
853826 | Electrical breakdown | Conduction of electricity through an insulator under sufficiently high voltage
In electronics, electrical breakdown or dielectric breakdown is a process that occurs when an electrically insulating material (a dielectric), subjected to a high enough voltage, suddenly becomes a conductor and current flows through it. All insulating materials undergo breakdown when the electric field caused by an applied voltage exceeds the material's dielectric strength. The voltage at which a given insulating object becomes conductive is called its "breakdown voltage" and, in addition to its dielectric strength, depends on its size and shape, and the location on the object at which the voltage is applied. Under sufficient voltage, electrical breakdown can occur within solids, liquids, or gases (and theoretically even in a vacuum). However, the specific breakdown mechanisms are different for each kind of dielectric medium.
Electrical breakdown may be a momentary event (as in an electrostatic discharge), or may lead to a continuous electric arc if protective devices fail to interrupt the current in a power circuit. In this case electrical breakdown can cause catastrophic failure of electrical equipment, and fire hazards.
Explanation.
Electric current is a flow of electrically charged particles in a material caused by an electric field, usually created by a voltage across the material. The mobile charged particles which make up an electric current are called charge carriers. In different substances different particles serve as charge carriers: in metals and some other solids some of the outer electrons of each atom (conduction electrons) are able to move about in the material; in electrolytes and plasma it is ions, electrically charged atoms or molecules, and electrons that are charge carriers. A material that has a high concentration of charge carriers available for conduction, such as a metal, will conduct a large current with a given electric field, and thus has a low electrical resistivity; this is called an electrical conductor. A material that has few charge carriers, such as glass or ceramic, will conduct very little current with a given electric field and has a high resistivity; this is called an electrical insulator or dielectric. All matter is composed of charged particles, but the common property of insulators is that the negative charges, the orbital electrons, are tightly bound to the positive charges, the atomic nuclei, and cannot easily be freed to become mobile.
However, when a large enough electric field is applied to any insulating substance, at a certain field strength the number of charge carriers in the material suddenly increases by many orders of magnitude, so its resistance drops and it becomes a conductor. This is called "electrical breakdown". The physical mechanism causing breakdown differs in different substances. In a solid, it usually occurs when the electric field becomes strong enough to pull outer valence electrons away from their atoms, so they become mobile, and the heat created by their collisions with other atoms releases additional electrons. In a gas, the electric field accelerates the small number of free electrons naturally present (due to processes like photoionization and radioactive decay) to a high enough speed that when they collide with gas molecules they knock additional electrons out of them, called ionization, which go on to ionize more molecules creating more free electrons and ions in a chain reaction called a Townsend discharge. As these examples indicate, in most materials breakdown occurs by a rapid chain reaction in which mobile charged particles release additional charged particles.
Dielectric strength and breakdown voltage.
The electric field strength (in volts per metre) at which breakdown occurs is an intrinsic property of the insulating material called its "dielectric strength". The electric field is usually caused by a voltage applied across the material. The applied voltage required to cause breakdown in a given insulating object is called the object's "breakdown voltage". The electric field created in a given insulating object by an applied voltage varies depending on the size and shape of the object and the location on the object of the electrical contacts where the voltage is applied, so in addition to the material's dielectric strength, the breakdown voltage depends on these factors.
In a flat sheet of insulator between two flat metal electrodes, the electric field formula_0 is proportional to the voltage formula_1 divided by the thickness formula_2 of the insulator, so in general the breakdown voltage formula_3 is proportional to the dielectric strength formula_4 and the length of insulation between two conductors
formula_5
However the shape of the conductors can influence the breakdown voltage.
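As a small worked example with assumed values, a uniform 10 mm gap of dry air with a dielectric strength of roughly 3 kV/mm breaks down at about 30 kV:

```python
# V_b = D * E_ds for a uniform field between flat electrodes
E_ds_air = 3.0e6           # V/m, approximate dielectric strength of dry air
D = 0.010                  # m, insulation thickness (assumed example value)
print(f"approximate breakdown voltage: {D * E_ds_air / 1000:.0f} kV")   # ~30 kV
```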
Breakdown process.
Breakdown is a local process: in an insulating medium subjected to a high voltage difference, it begins at whatever point in the insulator the electric field first exceeds the local dielectric strength of the material. Since the electric field at the surface of a conductor is highest at protruding parts, sharp points and edges, for a conductor immersed in a homogeneous insulator like air or oil breakdown usually starts at these points. In a solid insulator, breakdown often starts at a local defect, such as a crack or bubble in a ceramic insulator. If the voltage is low enough, breakdown may remain limited to this small region; this is called "partial discharge". In a gas adjacent to a sharply pointed conductor, local breakdown processes, corona discharge or brush discharge, can allow current to leak off the conductor into the gas as ions. However, in a homogeneous solid insulator, once one region has broken down and become conductive there is usually no voltage drop across it, and the full voltage difference is applied to the remaining length of the insulator. Since the voltage drop now occurs across a shorter length, this creates a higher electric field in the remaining material, which causes more material to break down. The breakdown region thus rapidly (within nanoseconds) spreads in the direction of the voltage gradient (electric field) from one end of the insulator to the other, until a continuous conductive path is created through the material between the two contacts applying the voltage difference, allowing a current to flow between them and starting an electric arc.
Electrical breakdown can also occur without an applied voltage, due to an electromagnetic wave. When a sufficiently intense electromagnetic wave passes through a material medium, the electric field of the wave can be strong enough to cause temporary electrical breakdown. For example, a laser beam focused to a small spot in air can cause electrical breakdown and ionization of the air at the focal point.
Consequences.
In practical electric circuits, electrical breakdown is usually an unwanted occurrence, a failure of insulating material causing a short circuit, possibly resulting in a catastrophic failure of the equipment. In power circuits, the sudden drop in resistance causes a high current to flow through the material, beginning an electric arc, and if safety devices do not interrupt the current quickly the sudden extreme Joule heating may cause the insulating material or other parts of the circuit to melt or vaporize explosively, damaging the equipment and creating a fire hazard. However, external protective devices in the circuit such as circuit breakers and current-limiting devices can prevent the high current; and the breakdown process itself is not necessarily destructive and may be reversible, as for example in a gas discharge lamp tube. If the current supplied by the external circuit is removed sufficiently quickly, no damage is done to the material, and reducing the applied voltage causes a transition back to the material's insulating state.
Lightning and sparks due to static electricity are natural examples of the electrical breakdown of air. Electrical breakdown is part of the normal operating mode of a number of electrical components, such as gas discharge lamps like fluorescent lights, and neon lights, zener diodes, avalanche diodes, IMPATT diodes, mercury-vapor rectifiers, thyratron, ignitron, and krytron tubes, and spark plugs.
Failure of electrical insulation.
Electrical breakdown is often associated with the failure of solid or liquid insulating materials used inside high voltage transformers or capacitors in the electricity distribution grid, usually resulting in a short circuit or a blown fuse. Electrical breakdown can also occur across the insulators that suspend overhead power lines, within underground power cables, or lines arcing to nearby branches of trees.
Dielectric breakdown is also important in the design of integrated circuits and other solid state electronic devices. Insulating layers in such devices are designed to withstand normal operating voltages, but higher voltage such as from static electricity may destroy these layers, rendering a device useless. The dielectric strength of capacitors limits how much energy can be stored and the safe working voltage for the device.
Mechanisms.
Breakdown mechanisms differ in solids, liquids, and gases. Breakdown is influenced by electrode material, sharp curvature of conductor material (resulting in locally intensified electric fields), the size of the gap between the electrodes, and the density of the material in the gap.
Solids.
In solid materials (such as in power cables) a long-time partial discharge caused by a defect such as a crack or bubble in the material typically precedes breakdown. The partial discharge is a local ionization and heating of the area, degrading the insulators and metals nearest to the defect. Ultimately the partial discharge chars through a channel of carbonized material that conducts current across the gap.
Liquids.
Possible mechanisms for breakdown in liquids include bubbles, small impurities, and electrical super-heating. The process of breakdown in liquids is complicated by hydrodynamic effects, since additional pressure is exerted on the fluid by the non-linear electrical field strength in the gap between the electrodes.
In liquefied gases used as coolants for superconductivity – such as helium at 4.2 K or nitrogen at 77 K – bubbles can induce breakdown.
In oil-cooled and oil-insulated transformers the field strength for breakdown is about 20 kV/mm (as compared to 3 kV/mm for dry air). Despite the use of purified oils, small particulate contaminants are blamed.
Gases.
Electrical breakdown occurs within a gas when the dielectric strength of the gas is exceeded. Regions of intense voltage gradients can cause nearby gas to partially ionize and begin conducting. This is done deliberately in low pressure discharges such as in fluorescent lights. The voltage that leads to electrical breakdown of a gas is approximated by Paschen's Law.
Partial discharge in air causes the "fresh air" smell of ozone during thunderstorms or around high-voltage equipment. Although air is normally an excellent insulator, when stressed by a sufficiently high voltage (an electric field of about 3 × 10^6 V/m, or 3 kV/mm), air can begin to break down, becoming partially conductive. Across relatively small gaps, breakdown voltage in air is a function of gap length times pressure. If the voltage is sufficiently high, complete electrical breakdown of the air will culminate in an electrical spark or an electric arc that bridges the entire gap.
The color of the spark depends upon the gases that make up the gaseous media. While the small sparks generated by static electricity may barely be audible, larger sparks are often accompanied by a loud snap or bang. Lightning is an example of an immense spark that can be many miles long and thunder produced by it can be heard from a very large distance.
Persistent arcs.
If a fuse or circuit breaker fails to interrupt the current through a spark in a power circuit, current may continue, forming a very hot electric arc (about 30 000 degrees C). The color of an arc depends primarily upon the conducting gases, some of which may have been solids before being vaporized and mixed into the hot plasma in the arc. The free ions in and around the arc recombine to create new chemical compounds, such as ozone, carbon monoxide, and nitrous oxide. Ozone is most easily noticed due to its distinct odour.
Although sparks and arcs are usually undesirable, they can be useful in applications such as spark plugs for gasoline engines, electrical welding of metals, or for metal melting in an electric arc furnace. Prior to gas discharge the gas glows with distinct colors that depend on the energy levels of the atoms. Not all mechanisms are fully understood.
The vacuum itself is expected to undergo electrical breakdown at or near the Schwinger limit.
Voltage-current relation.
Before gas breakdown, there is a non-linear relation between voltage and current as shown in the figure. In region 1, there are free ions that can be accelerated by the field and induce a current. These will be saturated after a certain voltage and give a constant current, region 2. Region 3 and 4 are caused by ion avalanche as explained by the Townsend discharge mechanism.
Friedrich Paschen established the relation between the breakdown condition and the breakdown voltage. He derived a formula that defines the breakdown voltage (formula_3) for uniform field gaps as a function of gap length (formula_6) and gap pressure (formula_7).
formula_8
Paschen also derived a relation between the minimum value of pressure gap for which breakdown occurs with a minimum voltage.
formula_9
formula_10 and formula_11 are constants depending on the gas used.
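A numerical sketch of the Paschen curve; the constants below are rough illustrative values for air, not reference data.

```python
import numpy as np

A = 112.5          # 1/(kPa*cm), illustrative value for air
B = 2737.5         # V/(kPa*cm), illustrative value for air
gamma = 0.01       # assumed secondary-electron emission coefficient

def paschen_vb(pd):
    """Breakdown voltage of a uniform gap; pd is the pressure-distance product in kPa*cm."""
    return B * pd / np.log(A * pd / np.log(1.0 + 1.0 / gamma))

pd = np.linspace(0.05, 10.0, 2000)
vb = paschen_vb(pd)
i = np.argmin(vb)
print(f"Paschen minimum: ~{vb[i]:.0f} V at pd ~ {pd[i]:.2f} kPa*cm")

# closed-form minimum from the same constants
pd_min = np.e / A * np.log(1.0 + 1.0 / gamma)
vb_min = np.e * B / A * np.log(1.0 + 1.0 / gamma)
```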
Corona breakdown.
Partial breakdown of the air occurs as a corona discharge on high voltage conductors at points with the highest electrical stress. Conductors that have sharp points, or balls with small radii, are prone to causing dielectric breakdown, because the field strength around points is higher than that around a flat surface. High-voltage apparatus is designed with rounded curves and grading rings to avoid concentrated fields that precipitate breakdown.
Appearance.
Corona is sometimes seen as a bluish glow around high voltage wires and heard as a sizzling sound along high voltage power lines. Corona also generates radio frequency noise that can be heard as ‘static’ or buzzing on radio receivers. Corona can also occur naturally as "St. Elmo's Fire" at high points such as church spires, treetops, or ship masts during thunderstorms.
Ozone generation.
Corona discharge ozone generators have been used for more than 30 years in the water purification process. Ozone is a toxic gas, even more potent than chlorine. In a typical drinking water treatment plant, the ozone gas is dissolved into the filtered water to kill bacteria and destroy viruses. Ozone also removes the bad odours and taste from the water. The main advantage of ozone is that any residual overdose decomposes to gaseous oxygen well before the water reaches the consumer. This is in contrast with chlorine gas or chlorine salts, which stay in the water longer and can be tasted by the consumer.
Other uses.
Although corona discharge is usually undesirable, until recently it was essential in the operation of photocopiers (xerography) and laser printers. Many modern copiers and laser printers now charge the photoconductor drum with an electrically conductive roller, reducing undesirable indoor ozone pollution.
Lightning rods use corona discharge to create conductive paths in the air that point towards the rod, deflecting potentially-damaging lightning away from buildings and other structures.
Corona discharges are also used to modify the surface properties of many polymers. An example is the corona treatment of plastic materials which allows paint or ink to adhere properly.
Disruptive devices.
A disruptive device is designed to electrically overstress a dielectric beyond its dielectric strength so as to intentionally cause electrical breakdown of the device. The disruption causes a sudden transition of a portion of the dielectric, from an insulating state to a highly conductive state. This transition is characterized by the formation of an electric spark or plasma channel, possibly followed by an electric arc through part of the dielectric material.
If the dielectric happens to be a solid, permanent physical and chemical changes along the path of the discharge will significantly reduce the material's dielectric strength, and the device can only be used one time. However, if the dielectric material is a liquid or gas, the dielectric can fully recover its insulating properties once current through the plasma channel has been externally interrupted.
Commercial spark gaps use this property to abruptly switch high voltages in pulsed power systems, to provide surge protection for telecommunication and electrical power systems, and ignite fuel via spark plugs in internal combustion engines. Spark-gap transmitters were used in early radio telegraph systems.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "D"
},
{
"math_id": 3,
"text": "V_\\text{b}"
},
{
"math_id": 4,
"text": "E_\\text{ds}"
},
{
"math_id": 5,
"text": "V_\\text{b} = D E_\\text{ds}"
},
{
"math_id": 6,
"text": "d"
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "V_\\text{b} = {Bpd \\over \\ln\\left({Apd \\over \\ln\\left(1 + {1 \\over \\gamma}\\right)}\\right)}"
},
{
"math_id": 9,
"text": "\\begin{align}\n (pd)_\\min &= {2.718 \\over A} \\ln\\left(1 + \\frac{1}{\\gamma}\\right) \\\\\n V_{\\text{b},\\min} &= 2.718 {B \\over A} \\ln\\left(1 + \\frac{1}{\\gamma}\\right)\n\\end{align}"
},
{
"math_id": 10,
"text": "A"
},
{
"math_id": 11,
"text": "B"
}
] | https://en.wikipedia.org/wiki?curid=853826 |
85411 | PH indicator | Chemical added to show pH of a solution
A pH indicator is a halochromic chemical compound added in small amounts to a solution so the pH (acidity or basicity) of the solution can be determined visually or spectroscopically by changes in absorption and/or emission properties. Hence, a pH indicator is a chemical detector for hydronium ions (H3O+) or hydrogen ions (H+) in the Arrhenius model.
Normally, the indicator causes the color of the solution to change depending on the pH. Indicators can also show change in other physical properties; for example, olfactory indicators show change in their odor. The pH value of a neutral solution is 7.0 at 25°C (standard laboratory conditions). Solutions with a pH value below 7.0 are considered acidic and solutions with pH value above 7.0 are basic. Since most naturally occurring organic compounds are weak electrolytes, such as carboxylic acids and amines, pH indicators find many applications in biology and analytical chemistry. Moreover, pH indicators form one of the three main types of indicator compounds used in chemical analysis. For the quantitative analysis of metal cations, the use of complexometric indicators is preferred, whereas the third compound class, the redox indicators, are used in redox titrations (titrations involving one or more redox reactions as the basis of chemical analysis).
Theory.
In and of themselves, pH indicators are usually weak acids or weak bases. The general reaction scheme of acidic pH indicators in aqueous solutions can be formulated as:
HInd(aq) + H2O(l) ⇌ H3O+(aq) + Ind−(aq)
where, "HInd" is the acidic form and "Ind−" is the conjugate base of the indicator.
Vice versa for basic pH indicators in aqueous solutions:
IndOH(aq) + H2O(l) ⇌ H2O(l) + Ind+(aq) + OH−(aq)
where "IndOH" stands for the basic form and "Ind+" for the conjugate acid of the indicator.
The ratio of concentration of conjugate acid/base to concentration of the acidic/basic indicator determines the pH (or pOH) of the solution and connects the color to the pH (or pOH) value. For pH indicators that are weak electrolytes, the Henderson–Hasselbalch equation can be written as:
pH = p"K"a + log10
"or"
pOH = p"K"b + log10
The equations, derived from the acidity constant and the basicity constant, state that when pH equals the p"K"a or p"K"b value of the indicator, both species are present in a 1:1 ratio. If pH is above the p"K"a or p"K"b value, the concentration of the conjugate base is greater than the concentration of the acid, and the color associated with the conjugate base dominates. If pH is below the p"K"a or p"K"b value, the converse is true.
Usually, the color change is not instantaneous at the p"K"a or p"K"b value, but a pH range exists where a mixture of colors is present. This pH range varies between indicators, but as a rule of thumb, it falls between the p"K"a or p"K"b value plus or minus one. This assumes that solutions retain their color as long as at least 10% of the other species persists. For example, if the concentration of the conjugate base is 10 times greater than the concentration of the acid, their ratio is 10:1, and consequently the pH is p"K"a + 1 or p"K"b + 1. Conversely, if a 10-fold excess of the acid occurs with respect to the base, the ratio is 1:10 and the pH is p"K"a − 1 or p"K"b − 1.
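A short numerical illustration of this rule of thumb (the p"K"a value is chosen arbitrarily):

```python
pKa = 7.0    # assumed indicator pKa, for illustration only

def base_fraction(pH):
    """Fraction of the indicator present as the conjugate base Ind-."""
    ratio = 10.0 ** (pH - pKa)          # [Ind-]/[HInd] from the Henderson-Hasselbalch equation
    return ratio / (1.0 + ratio)

for pH in (pKa - 1, pKa, pKa + 1):
    print(f"pH {pH:.1f}: {100 * base_fraction(pH):.0f}% in the base form")
# about 9% at pKa - 1, 50% at pKa and 91% at pKa + 1: the usual one-unit transition range
```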
For optimal accuracy, the color difference between the two species should be as clear as possible, and the narrower the pH range of the color change the better. In some indicators, such as phenolphthalein, one of the species is colorless, whereas in other indicators, such as methyl red, both species confer a color. While pH indicators work efficiently at their designated pH range, they are usually destroyed at the extreme ends of the pH scale due to undesired side reactions.
Application.
pH indicators are frequently employed in titrations in analytical chemistry and biology to determine the extent of a chemical reaction. Because of the subjective choice (determination) of color, pH indicators are susceptible to imprecise readings. For applications requiring precise measurement of pH, a pH meter is frequently used. Sometimes, a blend of different indicators is used to achieve several smooth color changes over a wide range of pH values. These commercial indicators (e.g., universal indicator and Hydrion papers) are used when only rough knowledge of pH is necessary. For a titration, the difference between the true endpoint and the indicated endpoint is called the indicator error.
Tabulated below are several common laboratory pH indicators. Indicators usually exhibit intermediate colors at pH values inside the listed transition range. For example, phenol red exhibits an orange color between pH 6.8 and pH 8.4. The transition range may shift slightly depending on the concentration of the indicator in the solution and on the temperature at which it is used. The figure on the right shows indicators with their operation range and color changes.
Precise pH measurement.
An indicator may be used to obtain quite precise measurements of pH by measuring absorbance quantitatively at two or more wavelengths. The principle can be illustrated by taking the indicator to be a simple acid, HA, which dissociates into H+ and A−.
HA ⇌ H+ + A−
The value of the acid dissociation constant, p"K"a, must be known. The molar absorbances, "ε"HA and "ε"A− of the two species HA and A− at wavelengths "λx" and "λy" must also have been determined by previous experiment. Assuming Beer's law to be obeyed, the measured absorbances "Ax" and "Ay" at the two wavelengths are simply the sum of the absorbances due to each species.
formula_0
These are two equations in the two concentrations [HA] and [A−]. Once solved, the pH is obtained as
formula_1
If measurements are made at more than two wavelengths, the concentrations [HA] and [A−] can be calculated by linear least squares. In fact, a whole spectrum may be used for this purpose. The process is illustrated for the indicator bromocresol green. The observed spectrum (green) is the sum of the spectra of HA (gold) and of A− (blue), weighted for the concentration of the two species.
When a single indicator is used, this method is limited to measurements in the pH range p"K"a ± 1, but this range can be extended by using mixtures of two or more indicators. Because indicators have intense absorption spectra, the indicator concentration is relatively low, and the indicator itself is assumed to have a negligible effect on pH.
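A sketch of the two-wavelength calculation: the pair of Beer's-law equations is solved as a 2 × 2 linear system for [HA] and [A−], and the pH follows from the Henderson–Hasselbalch relation. Every number below (molar absorbances with the path length folded in, measured absorbances, p"K"a) is invented for illustration.

```python
import numpy as np

pKa = 4.7                                   # assumed indicator pKa
eps = np.array([[17000.0,  2000.0],         # wavelength x: (eps_HA, eps_A-)
                [ 1500.0, 38000.0]])        # wavelength y: (eps_HA, eps_A-)
A_meas = np.array([0.25, 0.80])             # measured absorbances A_x and A_y (invented)

HA, Aminus = np.linalg.solve(eps, A_meas)   # concentrations [HA] and [A-]
pH = pKa + np.log10(Aminus / HA)
print(f"[HA] = {HA:.2e} M, [A-] = {Aminus:.2e} M, pH = {pH:.2f}")
```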
Equivalence point.
In acid-base titrations, an unfitting pH indicator may induce a color change in the indicator-containing solution before or after the actual equivalence point. As a result, different equivalence points for a solution can be concluded based on the pH indicator used. This is because the slightest color change of the indicator-containing solution suggests the equivalence point has been reached. Therefore, the most suitable pH indicator has an effective pH range, where the change in color is apparent, that encompasses the pH of the equivalence point of the solution being titrated.
Naturally occurring pH indicators.
Many plants or plant parts contain chemicals from the naturally colored anthocyanin family of compounds. They are red in acidic solutions and blue in basic. Anthocyanins can be extracted with water or other solvents from a multitude of colored plants and plant parts, including from leaves (red cabbage); flowers (geranium, poppy, or rose petals); berries (blueberries, blackcurrant); and stems (rhubarb). Extracting anthocyanins from household plants, especially red cabbage, to form a crude pH indicator is a popular introductory chemistry demonstration.
Litmus, used by alchemists in the Middle Ages and still readily available, is a naturally occurring pH indicator made from a mixture of lichen species, particularly "Roccella tinctoria". The word "litmus" is literally from 'colored moss' in Old Norse (see Litr). The color changes between red in acid solutions and blue in alkalis. The term 'litmus test' has become a widely used metaphor for any test that purports to distinguish authoritatively between alternatives.
"Hydrangea macrophylla" flowers can change color depending on soil acidity. In acid soils, chemical reactions occur in the soil that make aluminium available to these plants, turning the flowers blue. In alkaline soils, these reactions cannot occur and therefore aluminium is not taken up by the plant. As a result, the flowers remain pink.
Another natural pH indicator is the spice turmeric. It turns yellow when exposed to acids and reddish brown in the presence of alkalis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\nA_x &= [\\ce{HA}]\\varepsilon^x_\\ce{HA} + [\\ce{A-}]\\varepsilon^x_\\ce{A-} \\\\\nA_y &= [\\ce{HA}]\\varepsilon^y_\\ce{HA} + [\\ce{A-}]\\varepsilon^y_\\ce{A-} \n\\end{align}"
},
{
"math_id": 1,
"text": "\\mathrm{pH} = \\mathrm{p}K_\\mathrm{a}+ \\log \\frac{[\\ce{A-}]}{[\\ce{HA}]}"
}
] | https://en.wikipedia.org/wiki?curid=85411 |
8541166 | Double exponential function | Exponential function of an exponential function
A double exponential function is a constant raised to the power of an exponential function. The general formula is formula_0 (where "a">1 and "b">1), which grows much more quickly than an exponential function. For example, if "a" = "b" = 10, then "f"(0) = 10, "f"(1) = 10^10, "f"(2) = 10^100 (a googol), "f"(3) = 10^1000, and "f"(100) = 10^(10^100) (a googolplex).
Factorials grow faster than exponential functions, but much more slowly than double exponential functions. However, tetration and the Ackermann function grow faster. See Big O notation for a comparison of the rate of growth of various functions.
The inverse of the double exponential function is the double logarithm log(log("x")).
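A quick numerical check of these growth rates, using Python's exact integer arithmetic:

```python
import math

def f(x, a=10, b=10):
    return a ** (b ** x)                    # a^(b^x) with a = b = 10

for x in range(4):
    print(x, len(str(f(x))))                # decimal digits: 2, 11, 101, 1001 (exponential in x)

print(math.factorial(100) < f(3))           # True: even 100! is tiny next to 10**1000
print([len(str(len(str(f(x))))) for x in range(4)])   # digits of the digit count: 1, 2, 3, 4, so the double log grows only linearly
```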
Double exponential sequences.
A sequence of positive integers (or real numbers) is said to have "double exponential rate of growth" if the function giving the nth term of the sequence is bounded above and below by double exponential functions of n.
Examples include
Aho and Sloane observed that in several important integer sequences, each term is a constant plus the square of the previous term. They show that such sequences can be formed by rounding to the nearest integer the values of a double exponential function with middle exponent 2.
Ionaşcu and Stănică describe some more general sufficient conditions for a sequence to be the floor of a double exponential sequence plus a constant.
Applications.
Algorithmic complexity.
In computational complexity theory, 2-EXPTIME is the class of decision problems solvable in double exponential time. It is equivalent to AEXPSPACE, the set of decision problems solvable by an alternating Turing machine in exponential space, and is a superset of EXPSPACE. An example of a problem in 2-EXPTIME that is not in EXPTIME is the problem of proving or disproving statements in Presburger arithmetic.
In some other problems in the design and analysis of algorithms, double exponential sequences are used within the design of an algorithm rather than in its analysis. An example is Chan's algorithm for computing convex hulls, which performs a sequence of computations using test values "h""i" = 22"i" (estimates for the eventual output size), taking time O("n" log "h""i") for each test value in the sequence. Because of the double exponential growth of these test values, the time for each computation in the sequence grows singly exponentially as a function of "i", and the total time is dominated by the time for the final step of the sequence. Thus, the overall time for the algorithm is O("n" log "h") where "h" is the actual output size.
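A schematic of that guess-and-restart pattern; `attempt` stands in for one budget-limited run of the underlying computation and is purely a placeholder.

```python
def output_sensitive(attempt):
    """Retry with guesses h_i = 2**(2**i) until a run within the budget succeeds."""
    i = 1
    while True:
        h = 2 ** (2 ** i)        # double exponential schedule of output-size guesses
        result = attempt(h)      # in Chan's algorithm one try costs O(n log h) = O(n * 2**i)
        if result is not None:
            return result
        i += 1
# Because the per-try costs grow geometrically, the total is dominated by the final, successful try.
```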
Number theory.
Some number theoretical bounds are double exponential. Odd perfect numbers with "n" distinct prime factors are known to be at most formula_6, a result of Nielsen (2003).
The maximal volume of a polytope in a "d"-dimensional integer lattice with "k" ≥ 1 interior lattice points is at most
formula_7
a result of Pikhurko (2001).
The largest known prime number in the electronic era has grown roughly as a double exponential function of the year since Miller and Wheeler found a 79-digit prime on EDSAC1 in 1951.
Theoretical biology.
In population dynamics the growth of human population is sometimes supposed to be double exponential. Varfolomeyev and Gurevich experimentally fit
formula_8
where "N"("y") is the population in millions in year "y".
Physics.
In the Toda oscillator model of self-pulsation, the logarithm of amplitude varies exponentially with time (for large amplitudes), thus the amplitude varies as double exponential function of time.
Dendritic macromolecules have been observed to grow in a doubly-exponential fashion.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x) = a^{b^x}=a^{(b^x)}"
},
{
"math_id": 1,
"text": "F(m) = 2^{2^m}+1"
},
{
"math_id": 2,
"text": "MM(p) = 2^{2^p-1}-1"
},
{
"math_id": 3,
"text": "s_n = \\left\\lfloor E^{2^{n+1}}+\\frac12 \\right\\rfloor"
},
{
"math_id": 4,
"text": "2^{2^k}"
},
{
"math_id": 5,
"text": "a(n) = \\left\\lfloor A^{3^n}\\right\\rfloor"
},
{
"math_id": 6,
"text": "2^{4^n}"
},
{
"math_id": 7,
"text": "k\\cdot(8d)^d\\cdot15^{d\\cdot2^{2d+1}},"
},
{
"math_id": 8,
"text": " N(y)=375.6\\cdot 1.00185^{1.00737^{y-1000}} \\,"
}
] | https://en.wikipedia.org/wiki?curid=8541166 |
8542941 | Advanced IRB | The term Advanced IRB or A-IRB is an abbreviation of advanced internal ratings-based approach, and it refers to a set of credit risk measurement techniques proposed under Basel II capital adequacy rules for banking institutions.
Under this approach the banks are allowed to develop their own empirical model to quantify required capital for credit risk. Banks can use this approach only subject to approval from their local regulators.
Under A-IRB banks are supposed to use their own quantitative models to estimate PD (probability of default), EAD (exposure at default), LGD (loss given default) and other parameters required for calculating the RWA (risk-weighted asset). Then total required capital is calculated as a fixed percentage of the estimated RWA.
Reforms to the internal ratings-based approach to credit risk are due to be introduced under the finalised Basel III reforms.
Some formulae in internal-ratings-based approach.
Under the standardised approach, some credit assessments correspond to unrated exposures. Basel II also encourages banks to adopt an internal ratings-based approach for measuring credit risk. Banks are expected to be more capable of adopting more sophisticated techniques in credit risk management.
Banks can determine their own estimation for some components of risk measure: the probability of default (PD), loss given default (LGD), exposure at default (EAD) and effective maturity (M). For public companies, default probabilities are commonly estimated using either the "structural model" of credit risk proposed by Robert Merton (1974) or reduced form models like the Jarrow–Turnbull model. For retail and unlisted company exposures, default probabilities are estimated using credit scoring or logistic regression, both of which are closely linked to the reduced form approach.
The goal is to define risk weights by determining the cut-off points between and within areas of the expected loss (EL) and the unexpected loss (UL), where the regulatory capital should be held, in the probability of default. Then, the risk weights for individual exposures are calculated based on the function provided by Basel II.
Below are the formulae for some banks' major products: corporate, small-to-medium enterprise (SME), residential mortgage and qualifying revolving retail exposure. Here S = min(max(sales turnover, 5), 50), with sales turnover expressed in millions of euro.
In the formulas below, PD is the probability of default, LGD the loss given default, EAD the exposure at default, M the effective maturity, N(x) the cumulative distribution function of the standard normal distribution, and G(z) its inverse.
Corporate exposure.
The exposure for corporate loans is calculated as follows
formula_0
AVC (Asset Value Correlation) was introduced by the Basel III Framework, and is applied as following:
* formula_1 if the company is a large regulated financial institution (total assets equal to or greater than US $100 billion) or an unregulated financial institution regardless of size
* formula_2 else
formula_3
formula_4
formula_5
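A sketch of this calculation using Python's statistics.NormalDist for the normal distribution function N and its inverse G; the risk parameters are made-up example values and any supervisory scaling factors are ignored. (The retail formulas further below use different correlations and drop the maturity adjustment.)

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf            # standard normal cumulative distribution function
G = NormalDist().inv_cdf        # its inverse

def corporate_k_rwa(PD, LGD, EAD, M, avc=1.0):
    """Advanced-IRB capital requirement K and RWA for a corporate exposure (illustrative)."""
    e = (1 - exp(-50 * PD)) / (1 - exp(-50))
    R = avc * (0.12 * e + 0.24 * (1 - e))                  # asset correlation
    b = (0.11852 - 0.05478 * log(PD)) ** 2                 # maturity adjustment
    K = (LGD * (N(sqrt(1 / (1 - R)) * G(PD) + sqrt(R / (1 - R)) * G(0.999)) - PD)
         * (1 + (M - 2.5) * b) / (1 - 1.5 * b))
    return K, K * 12.5 * EAD

K, RWA = corporate_k_rwa(PD=0.01, LGD=0.45, EAD=1_000_000, M=2.5)   # assumed inputs
print(f"K = {K:.4f}, RWA = {RWA:,.0f}")
```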
Corporate exposure adjustment for SME.
For small and medium enterprises with annual Sales Turnover below 50 million euro, the correlation may be adjusted as follows:
formula_6
Correlation.
In the above formula, S is the enterprise's annual sales turnover in millions of euro.
Residential mortgage exposure.
The exposure related to residential mortgages can be calculated as follows:
formula_7
formula_8
formula_9
Qualifying revolving retail exposure (credit card product).
The exposure related to unsecured retail credit products can be calculated as follows:
formula_10
formula_8
formula_9
Other retail exposure.
All other retail exposures are calculated as follows:
Correlation.
formula_11
formula_8
formula_9 | [
{
"math_id": 0,
"text": "R = AVC \\cdot \\Bigl(0.12 \\cdot \\frac{1 - e^{-50 \\cdot PD}}{1 - e^{-50}} + 0.24 \\cdot \\left(1- \\frac{1 - e^{-50 \\cdot PD}}{1 - e^{-50}}\\right)\\Biggr) "
},
{
"math_id": 1,
"text": "AVC = 1.25 "
},
{
"math_id": 2,
"text": "AVC = 1 "
},
{
"math_id": 3,
"text": "b= (0.11852 - 0.05478 \\cdot \\ln(PD))^2"
},
{
"math_id": 4,
"text": "K= LGD \\cdot \\left[N\\left(\\sqrt{\\frac{1}{1-R}} \\cdot G(PD) +\\sqrt{\\frac{R}{1-R}} \\cdot G(0.999)\\right) - PD \\right] \\cdot \\frac{1+(M-2.5) b}{1-1.5 b} "
},
{
"math_id": 5,
"text": "RWA = K \\cdot 12.5 \\cdot EAD \n\n"
},
{
"math_id": 6,
"text": "R = 0.12 \\cdot \\frac{1 - e^{-50 \\cdot PD}}{1 - e^{-50}} + 0.24 \\cdot \\left(1- \\frac{1 - e^{-50 \\cdot PD}}{1 - e^{-50}}\\right) - 0.04 \\cdot (1-\\frac{\\max(S-5,0)}{45})\n"
},
{
"math_id": 7,
"text": "R = 0.15\n"
},
{
"math_id": 8,
"text": "K=LGD \\cdot \\left[N\\left(\\sqrt{\\frac{1}{1-R}} \\cdot G(PD) +\\sqrt{\\frac{R}{1-R}}\\cdot G(0.999)\\right) - PD\\right] "
},
{
"math_id": 9,
"text": "RWA = K \\cdot 12.5 \\cdot EAD\n"
},
{
"math_id": 10,
"text": "R = 0.04\n"
},
{
"math_id": 11,
"text": "R = 0.03 \\frac{(1-e^{-35\\cdot PD})}{(1-e^{-35})} + 0.16 \\left(1-\\frac{(1-e^{-35\\cdot PD})}{(1-e^{-35})}\\right)"
}
] | https://en.wikipedia.org/wiki?curid=8542941 |
854305 | Electromagnetic cavity | Container for electromagnetic fields
An electromagnetic cavity is a cavity that acts as a container for electromagnetic fields such as photons, in effect containing their wave function inside. The size of the cavity determines the maximum photon wavelength that can be trapped. Additionally, it produces quantized energy levels for trapped charged particles like electrons and protons. The Earth's magnetic field in effect places the Earth in an electromagnetic cavity.
Physical description of electromagnetic cavities.
Electromagnetic cavities are represented by potential wells, also called "boxes", which can be of finite or infinite depth V0.
Quantum-mechanical boxes are described by the time-independent Schrödinger equation:
formula_0
with the additional boundary conditions that the wave function is continuous and vanishes far outside the well,
which leads to real solutions for the wave functions if the net energy of the particle is negative, i.e. if the particle is in a bound state.
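For intuition, the limit of an infinitely deep well has the closed-form levels E_n = n²π²ℏ²/(2mL²); the sketch below evaluates them for an electron in an assumed 1 nm box.

```python
import math

hbar = 1.054571817e-34        # J*s
m_e = 9.1093837015e-31        # kg, electron mass
L = 1e-9                      # m, assumed cavity size
eV = 1.602176634e-19          # J per electronvolt

def E_n(n):
    """Energy levels of the infinite square well (deep-well limit of the cavity)."""
    return (n * math.pi * hbar) ** 2 / (2 * m_e * L ** 2)

print([round(E_n(n) / eV, 3) for n in (1, 2, 3)])   # roughly 0.38, 1.50, 3.38 eV
```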
Applications of electromagnetic cavities.
Electrons which are trapped in an electromagnetic cavity are in a bound state and thus organise themselves as they do in a regular atom, thus expressing chemical-like behaviour. Several researchers have proposed to develop programmable matter by varying the number of trapped electrons in those cavities.
The discrete energy levels of electromagnetic cavities are exploited to produce photons of desired frequencies and thus are essential for nano- or submicrometre-scale laser devices.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left[ - \\frac{\\hbar^2}{2m} \\nabla^2 + V(\\mathbf{r}) \\right] \\psi(\\mathbf{r}) = E \\psi (\\mathbf{r}),"
}
] | https://en.wikipedia.org/wiki?curid=854305 |
8543254 | Coefficient of inbreeding | Mathematical estimate of inbreeding
The coefficient of inbreeding (COI) is a number measuring how inbred an individual is. Specifically, it is the probability that two alleles at any locus in an individual are identical by descent from a common ancestor of the two parents. A higher COI will make the traits of the offspring more predictable, but also increases the risk of health issues. In dog breeding, it is recommended to keep the COI less than 5%; however, in some breeds this may not be possible without outcrossing.
Calculation.
An individual is said to be inbred if there is a loop in its pedigree chart. A loop is defined as a path that runs from an individual up to the common ancestor through one parent and back down to the other parent, without going through any individual twice. The number of loops is always the number of common ancestors the parents have. If an individual is inbred, the coefficient of inbreeding is calculated by summing all the probabilities that an individual receives the same allele from its father's side and mother's side. As every individual has a 50% chance of passing on an allele to the next generation, the formula depends on 0.5 raised to the power of however many generations separate the individual from the common ancestor of its parents, on both the father's side and mother's side. This number of generations can be calculated by counting how many individuals lie in the loop defined earlier. Thus, the coefficient of inbreeding (f) of an individual X can be calculated with the following formula:
formula_0
where formula_1 is the number of individuals in the aforementioned loop,<br>and formula_2 is the coefficient of inbreeding of the common ancestor of X's parents.
To give an example, consider the following pedigree.
In this pedigree chart, G is the progeny of C and F, and C is the biological uncle of F. To find the coefficient of inbreeding of G, first locate a loop that leads from G to the common ancestor through one parent and back down to the other parent without going through the same individual twice. There are only two such loops in this chart, as there are only 2 common ancestors of C and F. The loops are G - C - A - D - F and G - C - B - D - F, both of which have 5 members.
Because the common ancestors of the parents (A and B) are not inbred themselves, formula_3. Therefore the coefficient of inbreeding of individual G is formula_4.
If the parents of an individual are not inbred themselves, the coefficient of inbreeding of the individual is one-half the coefficient of relationship between the parents. This can be verified in the previous example, as 12.5% is one-half of 25%, the coefficient of relationship between an uncle and a niece.
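A compact sketch that computes the coefficient through the standard kinship recursion (the inbreeding coefficient of an individual equals the kinship coefficient of its parents). It assumes identifiers are numbered so that parents always precede their offspring, and the pedigree dictionary encodes the example above.

```python
def kinship(x, y, ped):
    """Kinship coefficient; ped maps id -> (sire, dam), founders are simply absent from ped."""
    if x == y:
        p = ped.get(x)
        return 0.5 if p is None else 0.5 * (1.0 + kinship(p[0], p[1], ped))
    if x < y:                  # recurse on the later-numbered individual,
        x, y = y, x            # which (by the numbering assumption) is not an ancestor of the other
    p = ped.get(x)
    if p is None:
        return 0.0
    return 0.5 * (kinship(p[0], y, ped) + kinship(p[1], y, ped))

def inbreeding(ind, ped):
    p = ped.get(ind)
    return 0.0 if p is None else kinship(p[0], p[1], ped)

# A=1, B=2, E=5 are founders; C=3 and D=4 are full siblings; F=6 is D's child; G=7 is the child of C and F
ped = {3: (1, 2), 4: (1, 2), 6: (4, 5), 7: (3, 6)}
print(inbreeding(7, ped))      # 0.125, i.e. the 12.5% derived above
```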
Table of coefficients of inbreeding.
<templatestyles src="Citation/styles.css"/>^A At this point the individuals are considered to be part of an inbred strain, and each individual can effectively be considered to be clones.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f_X = \\sum 0.5^{n - 1} \\cdot (1 + f_{A})"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "f_A"
},
{
"math_id": 3,
"text": "f_A = 0"
},
{
"math_id": 4,
"text": "f_G = \\sum 0.5^{5 - 1} \\cdot (1 + 0) = 0.5^{4} + 0.5^{4} = 12.5%"
}
] | https://en.wikipedia.org/wiki?curid=8543254 |
8543439 | Fully differential amplifier | Electronic amplifier, a circuit component
A fully differential amplifier (FDA) is a DC-coupled high-gain electronic voltage amplifier with differential inputs and differential outputs. In its ordinary usage, the output of the FDA is controlled by two feedback paths which, because of the amplifier's high gain, almost completely determine the output voltage for any given input.
In a fully differential amplifier, common-mode noise such as power supply disturbances is rejected; this makes FDAs especially useful as part of a mixed-signal integrated circuit.
An FDA is often used to convert an analog signal into a form more suitable for driving into an analog-to-digital converter; many modern high-precision ADCs have differential inputs.
The ideal FDA.
For any input voltages, the ideal FDA has infinite open-loop gain, infinite bandwidth, infinite input impedances resulting in zero input currents, infinite slew rate, zero output impedance and zero noise.
In the ideal FDA, the difference in the output voltages is equal to the difference between the input voltages multiplied by the gain. The common mode voltage of the output voltages is not dependent on the input voltage. In many cases, the common mode voltage can be directly set by a third voltage input.
A real FDA can only approximate this ideal, and its actual parameters are subject to drift over time and with changes in temperature, input conditions, etc. Modern integrated FET or MOSFET FDAs approximate these ideals more closely than bipolar ICs where large signals must be handled at room temperature over a limited bandwidth; input impedance, in particular, is much higher, although bipolar FDAs usually exhibit superior (i.e., lower) input offset drift and noise characteristics.
Where the limitations of real devices can be ignored, an FDA can be viewed as a black box with gain; circuit function and parameters are determined by feedback, usually negative. An FDA, as implemented in practice, is a moderately complex integrated circuit.
DC behavior.
Open-loop gain is defined as the amplification from input to output without any feedback applied. For most practical calculations, the open-loop gain is assumed to be infinite; in reality, it is obviously not. Typical devices exhibit open-loop DC gain ranging from 100,000 to over 1 million; this is sufficiently large for circuit gain to be determined almost entirely by the amount of negative feedback used. Op-amps have performance limits that the designer must keep in mind and sometimes work around. In particular, instability is possible in a DC amplifier if AC aspects are neglected.
AC behavior.
The FDA gain calculated at DC does not apply at higher frequencies. To a first approximation, the gain of a typical FDA is inversely proportional to frequency. This means that an FDA is characterized by its gain-bandwidth product. For example, an FDA with a gain bandwidth product of 1 MHz would have a gain of 5 at 200 kHz, and a gain of 1 at 1 MHz. This low-pass characteristic is introduced deliberately because it tends to stabilize the circuit by introducing a dominant pole. This is known as frequency compensation.
A typical low-cost general-purpose FDA will have a gain-bandwidth product of a few megahertz. Specialty and high-speed FDAs can achieve gain-bandwidth products of hundreds of megahertz. Some FDAs are even capable of gain-bandwidth products greater than a gigahertz.
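A small check of the quoted numbers under the usual single-pole approximation, in which the open-loop gain magnitude well above the dominant pole is roughly the gain-bandwidth product divided by the frequency:

```python
GBW = 1e6                      # Hz, assumed gain-bandwidth product
for f in (200e3, 1e6):
    print(f"{f / 1e3:.0f} kHz: gain ~ {GBW / f:.0f}")    # ~5 and ~1, as quoted above
```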
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_\\mathrm{id} = V_\\mathrm{in+} - V_\\mathrm{in-}"
},
{
"math_id": 1,
"text": "V_\\mathrm{od} = V_\\mathrm{out+} - V_\\mathrm{out-} = V_\\mathrm{id} \\times \\mathrm{Gain}"
},
{
"math_id": 2,
"text": "V_\\mathrm{oc} = \\frac{(V_\\mathrm{out+})+(V_\\mathrm{out-})}{2}"
}
] | https://en.wikipedia.org/wiki?curid=8543439 |
8545410 | Warnock algorithm | Computer graphics algorithm
The Warnock algorithm is a hidden surface algorithm invented by John Warnock that is typically used in the field of computer graphics.
It solves the problem of rendering a complicated image by recursive subdivision of a scene until areas are obtained that are trivial to compute. In other words, if the scene is simple enough to compute efficiently then it is rendered; otherwise it is divided into smaller parts which are likewise tested for simplicity.
This is a divide and conquer algorithm with run-time of formula_0, where "n" is the number of polygons and "p" is the number of pixels in the viewport.
The inputs are a list of polygons and a viewport. In the base case, the list of polygons is simple and the polygons are drawn in the viewport. Simple is defined as one polygon (the polygon or its visible part is then drawn in the appropriate part of the viewport) or a viewport that is one pixel in size (that pixel is then given the color of the polygon closest to the observer). The recursive step is to split the viewport into four equally sized quadrants and to call the algorithm recursively for each quadrant, with the polygon list filtered so that it only contains polygons visible in that quadrant.
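A simplified Python sketch of the recursion; `overlaps`, `fill`, `is_pixel`, `closest_color`, `clip`, `draw`, `split_in_four` and `BACKGROUND` are placeholder helpers that a real renderer would have to supply, and the classic formulation also treats further easy cases (for example a single polygon covering the whole window in front of everything else).

```python
def warnock(polygons, viewport):
    """Recursively subdivide the viewport until the polygon list is trivial to render."""
    visible = [p for p in polygons if overlaps(p, viewport)]   # keep only polygons touching this region

    if not visible:                          # trivial: nothing to draw here
        fill(viewport, BACKGROUND)
        return
    if is_pixel(viewport):                   # trivial: one pixel takes the colour of the closest polygon
        fill(viewport, closest_color(visible, viewport))
        return
    if len(visible) == 1:                    # trivial: a single polygon (or its clipped part)
        fill(viewport, BACKGROUND)
        draw(clip(visible[0], viewport))
        return

    for quadrant in split_in_four(viewport): # otherwise recurse on four equal quadrants
        warnock(visible, quadrant)
```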
Warnock expressed his algorithm in words and pictures, rather than software code, as the core of his PhD thesis, which also described protocols for shading oblique surfaces and other features that are now the core of 3-dimensional computer graphics. The entire thesis was only 26 pages from Introduction to Bibliography.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(np)"
}
] | https://en.wikipedia.org/wiki?curid=8545410 |
8547944 | Lehmer's GCD algorithm | Fast greatest common divisor algorithm
Lehmer's GCD algorithm, named after Derrick Henry Lehmer, is a fast GCD algorithm, an improvement on the simpler but slower Euclidean algorithm. It is mainly used for big integers that have a representation as a string of digits relative to some chosen numeral system base, say "β" = 1000 or "β" = 232.
Algorithm.
Lehmer noted that most of the quotients from each step of the division part of the standard algorithm are small. (For example, Knuth observed that the quotients 1, 2, and 3 comprise 67.7% of all quotients.) Those small quotients can be identified from only a few leading digits. Thus the algorithm starts by splitting off those leading digits and computing the sequence of quotients as long as it is correct.
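A Python sketch in the spirit of Knuth's Algorithm L: simulate Euclidean steps on the leading bits of both numbers, accumulate the corresponding 2 × 2 cofactor matrix, and apply it to the full-precision numbers only while each simulated quotient is guaranteed correct. The 32-bit word size is an arbitrary choice here.

```python
import math

WORD = 32                                      # simulate Euclid on the top 32 bits (assumed word size)

def lehmer_gcd(a, b):
    a, b = abs(a), abs(b)
    if a < b:
        a, b = b, a
    while b >= 1 << WORD:
        shift = max(a.bit_length() - WORD, 0)
        x, y = a >> shift, b >> shift          # leading digits of a and b
        A, B, C, D = 1, 0, 0, 1                # cofactor matrix, initially the identity
        while y + C != 0 and y + D != 0:
            q = (x + A) // (y + C)
            if q != (x + B) // (y + D):        # quotient no longer certain: stop simulating
                break
            A, B, C, D = C, D, A - q * C, B - q * D
            x, y = y, x - q * y
        if B == 0:                             # no progress made: fall back to one ordinary step
            a, b = b, a % b
        else:                                  # apply the accumulated matrix to the full numbers
            a, b = A * a + B * b, C * a + D * b
    while b:                                   # finish with the classical Euclidean algorithm
        a, b = b, a % b
    return a

u, v = 7**120 + 1, 3**200 + 7
print(lehmer_gcd(u, v) == math.gcd(u, v))      # True
```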
Say we want to obtain the GCD of the two integers "a" and "b". Let "a" ≥ "b". | [
{
"math_id": 0,
"text": "\\textstyle\n \\begin{bmatrix} A & B & x\\\\ C & D & y \\end{bmatrix}\n "
},
{
"math_id": 1,
"text": "\\textstyle\n \\begin{bmatrix} 1 & 0 & x \\\\ 0 & 1 & y\\end{bmatrix},\n "
},
{
"math_id": 2,
"text": "\\textstyle \\begin{bmatrix} A & B & x \\\\ C & D & y \\end{bmatrix}"
},
{
"math_id": 3,
"text": "\\textstyle\n \\begin{bmatrix} 0 & 1 \\\\ 1 & -w \\end{bmatrix}\n \\cdot\n \\begin{bmatrix} A & B & x \\\\ C & D & y \\end{bmatrix}\n = \\begin{bmatrix} C & D &y \\\\ A - wC & B - wD & x-wy \\end{bmatrix}\n "
}
] | https://en.wikipedia.org/wiki?curid=8547944 |
854978 | Algebra representation | In abstract algebra, a representation of an associative algebra is a module for that algebra. Here an associative algebra is a (not necessarily unital) ring. If the algebra is not unital, it may be made so in a standard way (see the adjoint functors page); there is no essential difference between modules for the resulting unital ring, in which the identity acts by the identity mapping, and representations of the algebra.
Examples.
Linear complex structure.
One of the simplest non-trivial examples is a linear complex structure, which is a representation of the complex numbers C, thought of as an associative algebra over the real numbers R. This algebra is realized concretely as formula_0 which corresponds to i2 = −1. Then a representation of C is a real vector space "V", together with an action of C on "V" (a map formula_1). Concretely, this is just an action of i , as this generates the algebra, and the operator representing i (the image of i in End("V")) is denoted "J" to avoid confusion with the identity matrix "I".
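A small numerical illustration, with J the usual 90-degree rotation matrix:

```python
import numpy as np

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])                  # the operator representing i: J @ J == -I
I2 = np.eye(2)
assert np.allclose(J @ J, -I2)

def act(z, v):
    """Action of the complex number z = x + iy on a vector v in R^2."""
    return (z.real * I2 + z.imag * J) @ v

v = np.array([1.0, 0.0])
print(act(1j, v))                                                    # [0. 1.]: i rotates (1,0) to (0,1)
print(act((2 + 3j) * (1 - 1j), v), act(1 - 1j, act(2 + 3j, v)))      # both [5. 1.]: the action respects multiplication
```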
Polynomial algebras.
Another important basic class of examples are representations of polynomial algebras, the free commutative algebras – these form a central object of study in commutative algebra and its geometric counterpart, algebraic geometry. A representation of a polynomial algebra in k variables over the field "K" is concretely a "K"-vector space with k commuting operators, and is often denoted formula_2 meaning the representation of the abstract algebra formula_3 where formula_4
A basic result about such representations is that, over an algebraically closed field, the representing matrices are simultaneously triangularisable.
Even the case of representations of the polynomial algebra in a single variable are of interest – this is denoted by formula_5 and is used in understanding the structure of a single linear operator on a finite-dimensional vector space. Specifically, applying the structure theorem for finitely generated modules over a principal ideal domain to this algebra yields as corollaries the various canonical forms of matrices, such as Jordan canonical form.
In some approaches to noncommutative geometry, the free noncommutative algebra (polynomials in non-commuting variables) plays a similar role, but the analysis is much more difficult.
Weights.
Eigenvalues and eigenvectors can be generalized to algebra representations.
The generalization of an eigenvalue of an algebra representation is, rather than a single scalar, a one-dimensional representation formula_6 (i.e., an algebra homomorphism from the algebra to its underlying ring: a linear functional that is also multiplicative). This is known as a weight, and the analog of an eigenvector and eigenspace are called "weight vector" and "weight space".
The case of the eigenvalue of a single operator corresponds to the algebra formula_7 and a map of algebras formula_8 is determined by which scalar it maps the generator "T" to. A weight vector for an algebra representation is a vector such that any element of the algebra maps this vector to a multiple of itself – a one-dimensional submodule (subrepresentation). As the pairing formula_9 is bilinear, "which multiple" is an "A"-linear functional of "A" (an algebra map "A" → "R"), namely the weight. In symbols, a weight vector is a vector formula_10 such that formula_11 for all elements formula_12 for some linear functional formula_13 – note that on the left, multiplication is the algebra action, while on the right, multiplication is scalar multiplication.
Because a weight is a map to a commutative ring, the map factors through the abelianization of the algebra formula_14 – equivalently, it vanishes on the derived algebra – in terms of matrices, if formula_15 is a common eigenvector of operators formula_16 and formula_17, then formula_18 (because in both cases it is just multiplication by scalars), so common eigenvectors of an algebra must be in the set on which the algebra acts commutatively (which is annihilated by the derived algebra). Thus of central interest are the free commutative algebras, namely the polynomial algebras. In this particularly simple and important case of the polynomial algebra formula_19 in a set of commuting matrices, a weight vector of this algebra is a simultaneous eigenvector of the matrices, while a weight of this algebra is simply a formula_20-tuple of scalars formula_21 corresponding to the eigenvalue of each matrix, and hence geometrically to a point in formula_20-space. These weights – in particularly their geometry – are of central importance in understanding the representation theory of Lie algebras, specifically the finite-dimensional representations of semisimple Lie algebras.
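A numerical illustration with two commuting matrices built from a common eigenbasis (the change of basis is random, so all numbers are arbitrary): each common eigenvector carries a weight, namely the pair of eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 3))                           # random change of basis
T = P @ np.diag([1.0, 2.0, 3.0]) @ np.linalg.inv(P)
U = P @ np.diag([5.0, 7.0, 11.0]) @ np.linalg.inv(P)
assert np.allclose(T @ U, U @ T)                      # the two operators commute

vals, vecs = np.linalg.eig(T)
for lam, v in zip(vals, vecs.T):
    mu = np.vdot(v, U @ v) / np.vdot(v, v)            # eigenvalue of U on the same vector
    assert np.allclose(U @ v, mu * v)                 # v is a simultaneous (weight) vector
    print(np.round(lam, 6), np.round(mu, 6))          # the weight (lambda, mu) of this vector
```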
As an application of this geometry, given an algebra that is a quotient of a polynomial algebra on formula_20 generators, it corresponds geometrically to an algebraic variety in formula_20-dimensional space, and the weight must fall on the variety – i.e., it satisfies the defining equations for the variety. This generalizes the fact that eigenvalues satisfy the characteristic polynomial of a matrix in one variable.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{C} = \\mathbb{R}[x]/(x^2+1),"
},
{
"math_id": 1,
"text": "\\mathbb{C} \\to \\mathrm{End}(V)"
},
{
"math_id": 2,
"text": "K[T_1,\\dots,T_k],"
},
{
"math_id": 3,
"text": "K[x_1,\\dots,x_k]"
},
{
"math_id": 4,
"text": "x_i \\mapsto T_i."
},
{
"math_id": 5,
"text": "K[T]"
},
{
"math_id": 6,
"text": "\\lambda\\colon A \\to R"
},
{
"math_id": 7,
"text": "R[T],"
},
{
"math_id": 8,
"text": "R[T] \\to R"
},
{
"math_id": 9,
"text": "A \\times M \\to M"
},
{
"math_id": 10,
"text": "m \\in M"
},
{
"math_id": 11,
"text": "am = \\lambda(a)m"
},
{
"math_id": 12,
"text": "a \\in A,"
},
{
"math_id": 13,
"text": "\\lambda"
},
{
"math_id": 14,
"text": "\\mathcal{A}"
},
{
"math_id": 15,
"text": "v"
},
{
"math_id": 16,
"text": "T"
},
{
"math_id": 17,
"text": "U"
},
{
"math_id": 18,
"text": "T U v = U T v"
},
{
"math_id": 19,
"text": "\\mathbf{F}[T_1,\\dots,T_k]"
},
{
"math_id": 20,
"text": "k"
},
{
"math_id": 21,
"text": " \\lambda = (\\lambda_1,\\dots,\\lambda_k)"
}
] | https://en.wikipedia.org/wiki?curid=854978 |
8549940 | Golomb sequence | In mathematics, the Golomb sequence, named after Solomon W. Golomb (but also called Silverman's sequence), is a monotonically increasing integer sequence where "an" is the number of times that "n" occurs in the sequence, starting with "a"1 = 1, and with the property that for "n" > 1 each "an" is the smallest unique integer which makes it possible to satisfy the condition. For example, "a"1 = 1 says that 1 only occurs once in the sequence, so "a"2 cannot be 1 too, but it can be 2, and therefore must be 2. The first few values are
1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 9, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12 (sequence in the OEIS).
Examples.
"a"1 = 1 <br>
Therefore, 1 occurs exactly one time in this sequence.
"a"2 > 1 <br>
"a"2 = 2
2 occurs exactly 2 times in this sequence. <br>
"a"3 = 2
3 occurs exactly 2 times in this sequence.
"a"4 = "a"5 = 3
4 occurs exactly 3 times in this sequence. <br>
5 occurs exactly 3 times in this sequence.
"a"6 = "a"7 = "a"8 = 4 <br>
"a"9 = "a"10 = "a"11 = 5
etc.
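A short generator built directly from the self-describing property (the sequence consists of exactly a(k) copies of k, in increasing order), checked against the values listed above:

```python
def golomb(n_terms):
    g = [1, 2, 2]                    # a(1) = 1 forces one 1; a(2) must then be 2, giving two 2s
    k = 3
    while len(g) < n_terms:
        g.extend([k] * g[k - 1])     # the sequence contains exactly a(k) copies of k
        k += 1
    return g[:n_terms]

print(golomb(20))    # [1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8]
```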
Recurrence.
Colin Mallows has given an explicit recurrence relation formula_0. An asymptotic expression for "an" is
formula_1
where formula_2 is the golden ratio (approximately equal to 1.618034). | [
{
"math_id": 0,
"text": "a(1) = 1; a(n+1) = 1 + a(n + 1 - a(a(n)))"
},
{
"math_id": 1,
"text": "\\varphi^{2-\\varphi}n^{\\varphi-1},"
},
{
"math_id": 2,
"text": "\\varphi"
}
] | https://en.wikipedia.org/wiki?curid=8549940 |