id | title | text | formulas | url
---|---|---|---|---|
9127122 | SERF | A spin exchange relaxation-free (SERF) magnetometer is a type of magnetometer developed at Princeton University in the early 2000s. SERF magnetometers measure magnetic fields by using lasers to detect the interaction between alkali metal atoms in a vapor and the magnetic field.
The name for the technique comes from the fact that spin exchange relaxation, a mechanism which usually scrambles the orientation of atomic spins, is avoided in these magnetometers. This is done by using a high (10¹⁴ cm⁻³) density of potassium atoms and a very low magnetic field. Under these conditions, the atoms exchange spin quickly compared to their magnetic precession frequency so that the average spin interacts with the field and is not destroyed by decoherence.
A SERF magnetometer achieves very high magnetic field sensitivity by monitoring a high density vapor of alkali metal atoms precessing in a near-zero magnetic field. The sensitivity of SERF magnetometers improves upon traditional atomic magnetometers by eliminating the dominant cause of atomic spin decoherence, spin-exchange collisions among the alkali metal atoms. SERF magnetometers are among the most sensitive magnetic field sensors and in some cases exceed the performance of SQUID detectors of equivalent size. A small glass cell of 1 cm³ volume containing potassium vapor has a reported sensitivity of 1 fT/√Hz and can theoretically become even more sensitive with larger volumes. They are vector magnetometers capable of measuring all three components of the magnetic field simultaneously.
Spin-exchange relaxation.
Spin-exchange collisions preserve total angular momentum of a colliding pair of atoms but can scramble the hyperfine state of the atoms. Atoms in different hyperfine states do not precess coherently and thereby limit the coherence lifetime of the atoms. However, decoherence due to spin-exchange collisions can be nearly eliminated if the spin-exchange collisions occur much faster than the precession frequency of the atoms. In this regime of fast spin-exchange, all atoms in an ensemble rapidly change hyperfine states, spending the same amounts of time in each hyperfine state and causing the spin ensemble to precess more slowly but remain coherent. This so-called SERF regime can be reached by operating with sufficiently high alkali metal density (at higher temperature) and in sufficiently low magnetic field.
The spin-exchange relaxation rate formula_0 for atoms with low polarization experiencing slow spin-exchange can be expressed as follows:
formula_1
where formula_2 is the time between spin-exchange collisions, formula_3 is the nuclear spin, formula_4 is the magnetic resonance frequency, and formula_5 is the gyromagnetic ratio of the electron.
In the limit of fast spin-exchange, the spin-exchange relaxation rate vanishes for a sufficiently small magnetic field:
formula_6
where formula_7 is the "slowing-down" constant to account for sharing of angular momentum between the electron and nuclear spins:
formula_8
formula_9
formula_10
where formula_11 is the average polarization of the atoms. The atoms suffering fast spin-exchange precess more slowly when they are not fully polarized because they spend a fraction of the time in different hyperfine states precessing at different frequencies (or in the opposite direction).
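The slowing-down factor and the residual relaxation rate above can be evaluated directly. Below is a minimal Python sketch of the formulas for formula_7 and the SERF-regime relaxation rate; the values used for the electron gyromagnetic ratio, the field, and the spin-exchange time are illustrative assumptions, not parameters taken from the text.
```python
import numpy as np

def Q(I, P):
    """Slowing-down factor Q(I, P) from the expressions above (I = 3/2, 5/2, 7/2)."""
    if I == 1.5:
        return 4.0 / (2.0 - 4.0/(3.0 + P**2))
    if I == 2.5:
        return 6.0 / (3.0 - 48.0*(1.0 + P**2)/(19.0 + 26.0*P**2 + 3.0*P**4))
    if I == 3.5:
        return 8.0*(11.0 + 35.0*P**2 + 17.0*P**4 + P**6) / (4.0*(1.0 + 7.0*P**2 + 7.0*P**4 + P**6))
    raise ValueError("unsupported nuclear spin")

def R_se_serf(B, T_se, I, P, gamma_e=1.76e11):
    """Residual spin-exchange relaxation rate in the SERF regime (second formula above).

    gamma_e is the electron gyromagnetic ratio in rad s^-1 T^-1 (assumed value)."""
    q = Q(I, P)
    return gamma_e**2 * B**2 * T_se / (2.0*np.pi) * 0.5 * (1.0 - (2.0*I + 1.0)**2 / q**2)

print(Q(1.5, 0.0))                       # 6.0: unpolarized slowing-down factor for I = 3/2 (potassium)
print(R_se_serf(1e-9, 1e-5, 1.5, 0.0))   # rate for an illustrative 1 nT residual field; shrinks as B^2
```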
Sensitivity.
The sensitivity formula_12 of atomic magnetometers is limited by the number of atoms formula_13 and their spin coherence lifetime formula_14 according to
formula_15
where formula_16 is the gyromagnetic ratio of the atom and formula_17 is the average polarization of total atomic spin formula_18.
In the absence of spin-exchange relaxation, a variety of other relaxation mechanisms contribute to the decoherence of atomic spin:
formula_19
where formula_20 is the relaxation rate due to collisions with the cell walls and formula_21 are the spin-destruction rates for collisions among the alkali metal atoms and for collisions between alkali atoms and any other gases that may be present.
In an optimal configuration, a density of 10¹⁴ cm⁻³ potassium atoms in a 1 cm³ vapor cell with ~3 atm helium buffer gas can achieve 10 aT/√Hz (10⁻¹⁷ T/√Hz) sensitivity with relaxation rate formula_22 ≈ 1 Hz.
Typical operation.
Alkali metal vapor of sufficient density is obtained by simply heating solid alkali metal inside the vapor cell. A typical SERF atomic magnetometer can take advantage of low noise diode lasers to polarize and monitor spin precession. Circularly polarized pumping light tuned to the formula_23 spectral resonance line polarizes the atoms. An orthogonal probe beam detects the precession using optical rotation of linearly polarized light. In a typical SERF magnetometer, the spins merely tip by a very small angle because the precession frequency is slow compared to the relaxation rates.
Advantages and disadvantages.
SERF magnetometers compete with SQUID magnetometers for use in a variety of applications. The SERF magnetometer has the following advantages:
Potential disadvantages:
Applications.
Applications utilizing high sensitivity of SERF magnetometers potentially include:
History.
The SERF magnetometer was developed by Michael V. Romalis at Princeton University in the early 2000s. The underlying physics governing the suppression of spin-exchange relaxation was developed decades earlier by William Happer, but the application to magnetic field measurement was not explored at that time. The name "SERF" was partially motivated by its relationship to SQUID detectors in a marine metaphor. | [
{
"math_id": 0,
"text": "R_{se}"
},
{
"math_id": 1,
"text": "\nR_{se} = \\frac{1}{2 \\pi T_{se}} \\left( \\frac{2 I(2 I -1)}{3(2I+1)^2} \\right)\n"
},
{
"math_id": 2,
"text": "T_{se}"
},
{
"math_id": 3,
"text": "I"
},
{
"math_id": 4,
"text": "\\nu"
},
{
"math_id": 5,
"text": "\\gamma_e"
},
{
"math_id": 6,
"text": "\nR_{se} = \\frac{\\gamma_e^2 B^2 T_{se} }{2 \\pi} \\frac{1}{2}\\left( 1-\\frac{(2I+1)^2}{Q^2} \\right)\n"
},
{
"math_id": 7,
"text": "Q"
},
{
"math_id": 8,
"text": "Q(I=3/2)=4\\left( 2 - \\frac{4}{3+P^2} \\right)^{-1}"
},
{
"math_id": 9,
"text": "Q(I=5/2)=6\\left( 3 - \\frac{48(1+P^2)}{19+26 P^2+3 P^4} \\right)^{-1}"
},
{
"math_id": 10,
"text": "Q(I=7/2)=8\\left( \\frac{4(1+7P^2+7P^4+P^6)}{11+35P^2+17P^4+P^6} \\right)^{-1}"
},
{
"math_id": 11,
"text": "P"
},
{
"math_id": 12,
"text": "\\delta B"
},
{
"math_id": 13,
"text": "N"
},
{
"math_id": 14,
"text": "T_2"
},
{
"math_id": 15,
"text": "\\delta B = \\frac{1}{\\gamma} \\sqrt{\\frac{2 R_{tot} Q}{F_z N} }"
},
{
"math_id": 16,
"text": "\\gamma"
},
{
"math_id": 17,
"text": "F_z"
},
{
"math_id": 18,
"text": "F = I+S"
},
{
"math_id": 19,
"text": "R_{tot} = R_D + R_{sd,self} + R_{sd,\\mathrm{He}} + R_{sd,\\mathrm{N_2}} "
},
{
"math_id": 20,
"text": "R_D"
},
{
"math_id": 21,
"text": "R_{sd,X}"
},
{
"math_id": 22,
"text": "R_{tot}"
},
{
"math_id": 23,
"text": "D_1"
}
] | https://en.wikipedia.org/wiki?curid=9127122 |
9127600 | Plücker matrix | Skew-symmetric 4 × 4 matrix, which characterizes a straight line in projective space
The Plücker matrix is a special skew-symmetric 4 × 4 matrix, which characterizes a straight line in projective space. The matrix is defined by 6 Plücker coordinates with 4 degrees of freedom. It is named after the German mathematician Julius Plücker.
Definition.
A straight line in space is defined by two distinct points formula_0 and formula_1 in homogeneous coordinates of the projective space. Its Plücker matrix is:
formula_2
Where the skew-symmetric formula_3-matrix is defined by the 6 Plücker coordinates
formula_4
with
formula_5
Plücker coordinates fulfill the Grassmann–Plücker relations
formula_6
and are defined up to scale. A Plücker matrix has only rank 2 and four degrees of freedom (just like lines in formula_7). They are independent of a particular choice of the points formula_8 and formula_9 and can be seen as a generalization of the line equation i.e. of the cross product for both the intersection (meet) of two lines, as well as the joining line of two points in the projective plane.
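A minimal numerical sketch of this definition (using NumPy and two arbitrarily chosen homogeneous points) constructs the Plücker matrix, reads off the six coordinates, and checks the rank-2 property and the Grassmann–Plücker relation:
```python
import numpy as np

A = np.array([1.0, 2.0, 3.0, 1.0])      # two assumed points in homogeneous coordinates
B = np.array([-1.0, 0.5, 2.0, 1.0])

L = np.outer(A, B) - np.outer(B, A)     # [L]_x = A B^T - B A^T (up to scale)
# Pluecker coordinates L_ij = A_i B_j - B_i A_j, read from the upper triangle
l01, l02, l03 = L[0, 1], L[0, 2], L[0, 3]
l12, l13, l23 = L[1, 2], L[1, 3], L[2, 3]

print(np.linalg.matrix_rank(L))                         # 2: a Plücker matrix has rank 2
print(np.isclose(l01*l23 - l02*l13 + l03*l12, 0.0))     # the Grassmann–Plücker relation holds
```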
Properties.
The Plücker matrix allows us to express the following geometric operations as matrix-vector product:
Uniqueness.
Two arbitrary distinct points on the line can be written as a linear combination of formula_8 and formula_9:
formula_19
Their Plücker matrix is thus:
formula_20
up to scale identical to formula_21.
Intersection with a plane.
Let formula_22 denote the plane with the equation
formula_23
which does not contain the line formula_12. Then, the matrix-vector product with the Plücker matrix describes a point
formula_24
which lies on the line formula_12 because it is a linear combination of formula_8 and formula_9. formula_16 is also contained in the plane formula_13
formula_25
and must therefore be their point of intersection.
In addition, the product of the Plücker matrix with a plane is the zero-vector, exactly if the line formula_12 is contained entirely in the plane:
formula_26 contains formula_27
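The intersection just described can be verified numerically; the following sketch (with assumed example points and an assumed plane that does not contain the line) computes the product with the Plücker matrix and checks that the result lies both on the plane and on the line:
```python
import numpy as np

A = np.array([1.0, 2.0, 3.0, 1.0])          # points defining the line (assumed example values)
B = np.array([-1.0, 0.5, 2.0, 1.0])
L = np.outer(A, B) - np.outer(B, A)          # Plücker matrix of the line

E = np.array([0.3, -1.0, 0.25, 1.0])         # an assumed plane not containing the line
X = L @ E                                    # intersection point X = [L]_x E

print(np.isclose(E @ X, 0.0))                                # X lies on the plane E
print(np.linalg.matrix_rank(np.column_stack([A, B, X])))     # 2: X is a combination of A and B
```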
Dual Plücker matrix.
In projective three-space, both points and planes have the same representation as 4-vectors and the algebraic description of their geometric relationship (point lies on plane) is symmetric. By interchanging the terms plane and point in a theorem, one obtains a dual theorem which is also true.
In case of the Plücker matrix, there exists a dual representation of the line in space as the intersection of two planes:
formula_28
and
formula_29
in homogeneous coordinates of projective space. Their Plücker matrix is:
formula_30
and
formula_31
describes the plane formula_32 which contains both the point formula_16 and the line formula_12.
Relationship between primal and dual Plücker matrices.
As the vector formula_11, for an arbitrary plane formula_13, is either the zero vector or a point on the line, it follows:
formula_33
Thus:
formula_34
The following product fulfills these properties:
formula_35
due to the Grassmann–Plücker relation. With the uniqueness of Plücker matrices up to scalar multiples, for the primal Plücker coordinates
formula_36
we obtain the following dual Plücker coordinates:
formula_37
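The permutation above and the vanishing product can be checked with a short sketch (same assumed example points as before): building both matrices from their coordinate vectors with the skew-symmetric pattern of the definition, the dual matrix annihilates every point of the line, and its product with the primal matrix is zero.
```python
import numpy as np

def mat_from_coords(l):
    # skew-symmetric 4x4 pattern used above, from (L01, L02, L03, L12, L13, L23)
    L01, L02, L03, L12, L13, L23 = l
    return np.array([[0.0, -L01, -L02, -L03],
                     [L01,  0.0, -L12, -L13],
                     [L02,  L12,  0.0, -L23],
                     [L03,  L13,  L23,  0.0]])

A = np.array([1.0, 2.0, 3.0, 1.0])                           # assumed example points
B = np.array([-1.0, 0.5, 2.0, 1.0])
M = np.outer(A, B) - np.outer(B, A)
l = np.array([M[0, 1], M[0, 2], M[0, 3], M[1, 2], M[1, 3], M[2, 3]])   # primal coordinates
l_dual = np.array([l[5], -l[4], l[3], l[2], -l[1], l[0]])              # dual coordinates

primal, dual = mat_from_coords(l), mat_from_coords(l_dual)
print(np.allclose(dual @ primal, 0.0))                         # product vanishes (Grassmann–Plücker)
print(np.allclose(dual @ A, 0.0), np.allclose(dual @ B, 0.0))  # points on the line are annihilated
```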
In the projective plane.
The 'join' of two points in the projective plane is the operation of connecting two points with a straight line. Its line equation can be computed using the cross product:
formula_38
Dually, one can express the 'meet', or intersection of two straight lines by the cross-product:
formula_39
The relationship to Plücker matrices becomes evident if one writes the cross product as a matrix-vector product with a skew-symmetric matrix:
formula_40
and analogously formula_41
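A short sketch with assumed example points illustrates the join and meet via the cross product, together with the corresponding skew-symmetric matrix:
```python
import numpy as np

a, b = np.array([1.0, 2.0, 1.0]), np.array([3.0, -1.0, 1.0])   # two points in homogeneous coordinates
line = np.cross(a, b)                                          # 'join': the line through a and b
print(np.isclose(line @ a, 0.0), np.isclose(line @ b, 0.0))    # both points satisfy the line equation

m = np.cross(np.array([0.0, 1.0, 1.0]), np.array([1.0, 1.0, 1.0]))   # a second line
x = np.cross(line, m)                                          # 'meet': intersection point of the lines
print(np.isclose(line @ x, 0.0), np.isclose(m @ x, 0.0))

# the same join, written as the skew-symmetric matrix a b^T - b a^T carrying the line coordinates
Lmat = np.outer(a, b) - np.outer(b, a)
print(np.allclose(Lmat, -Lmat.T))
```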
Geometric interpretation.
Let formula_42 and formula_43, then we can write
formula_44
and
formula_45
where formula_46 is the displacement and formula_47 is the moment of the line; compare the geometric intuition of Plücker coordinates. | [
{
"math_id": 0,
"text": "A = \\left(A_0, A_1, A_2, A_3\\right)^\\top \\in \\mathbb{R}\\mathcal{P}^3"
},
{
"math_id": 1,
"text": "B = \\left(B_0, B_1, B_2, B_3\\right)^\\top \\in \\mathbb{R}\\mathcal{P}^3"
},
{
"math_id": 2,
"text": "\n [\\mathbf{L}]_{\\times} \\propto \\mathbf{A}\\mathbf{B}^{\\top} - \\mathbf{B}\\mathbf{A}^{\\top} =\n \\left(\\begin{array}{cccc}\n 0 & -L_{01} & -L_{02} & -L_{03} \\\\\n L_{01} & 0 & -L_{12} & -L_{13} \\\\\n L_{02} & L_{12} & 0 & -L_{23} \\\\\n L_{03} & L_{13} & L_{23} & 0\n\\end{array}\\right)"
},
{
"math_id": 3,
"text": "4\\times 4"
},
{
"math_id": 4,
"text": "\\mathbf{L}\\propto(L_{01}, L_{02}, L_{03}, L_{12}, L_{13}, L_{23})^\\top"
},
{
"math_id": 5,
"text": "L_{ij} = A_iB_j - B_iA_j."
},
{
"math_id": 6,
"text": "L_{01} L_{23} - L_{02} L_{13} + L_{03} L_{12} = 0"
},
{
"math_id": 7,
"text": "\\mathbb{R}^3"
},
{
"math_id": 8,
"text": "\\mathbf{A}"
},
{
"math_id": 9,
"text": "\\mathbf{B}"
},
{
"math_id": 10,
"text": "\\mathbf{0} = [\\mathbf{L}]_{\\times}\\mathbf{E}"
},
{
"math_id": 11,
"text": "\\mathbf{X} = [\\mathbf{L}]_{\\times}\\mathbf{E}"
},
{
"math_id": 12,
"text": "\\mathbf{L}"
},
{
"math_id": 13,
"text": "\\mathbf{E}"
},
{
"math_id": 14,
"text": "\\mathbf{0} = [\\tilde{\\mathbf{L}}]_{\\times}\\mathbf{X}"
},
{
"math_id": 15,
"text": "\\mathbf{E} = [\\tilde{\\mathbf{L}}]_{\\times}\\mathbf{X}"
},
{
"math_id": 16,
"text": "\\mathbf{X}"
},
{
"math_id": 17,
"text": "[\\mathbf{L}]_{\\times}\\pi^\\infty = [\\mathbf{L}]_{\\times}(0, 0, 0, 1)^\\top = \\left(-L_{03}, -L_{13}, -L_{23}, 0\\right)^\\top"
},
{
"math_id": 18,
"text": "\\mathbf{X}_{0} \\cong [\\mathbf{L}]_{\\times}[\\mathbf{L}]_{\\times}\\pi^{\\infty}."
},
{
"math_id": 19,
"text": " \\mathbf{A}^{\\prime} \\propto \\mathbf{A}\\alpha + \\mathbf{B}\\beta\\text{ and } \\mathbf{B}^\\prime \\propto\\mathbf{A}\\gamma + \\mathbf{B}\\delta. "
},
{
"math_id": 20,
"text": "\\begin{align}\n {[}\\mathbf{L}^\\prime{]}_\\times\n &= \\mathbf{A}^\\prime\\mathbf{B}^\\prime - \\mathbf{B}^\\prime \\mathbf{A}^\\prime \\\\[6pt]\n &= (\\mathbf{A}\\alpha + \\mathbf{B}\\beta)(\\mathbf{A}\\gamma + \\mathbf{B}\\delta)^\\top - (\\mathbf{A}\\gamma + \\mathbf{B}\\delta)(\\mathbf{A}\\alpha + \\mathbf{B}\\beta)^\\top \\\\[6pt]\n &= \\underbrace{(\\alpha\\delta - \\beta\\gamma)}_\\lambda[\\mathbf{L}]_\\times,\n\\end{align}"
},
{
"math_id": 21,
"text": "[\\mathbf{L}]_{\\times}"
},
{
"math_id": 22,
"text": "\\mathbf{E} = \\left(E_{0}, E_{1}, E_{2}, E_{3}\\right)^{\\top} \\in \\mathbb{R}\\mathcal{P}^{3}"
},
{
"math_id": 23,
"text": "E_{0}x + E_{1}y + E_{2}z + E_{3} = 0."
},
{
"math_id": 24,
"text": "\n \\mathbf{X} = [\\mathbf{L}]_{\\times}\\mathbf{E}\n = \\mathbf{A}\\underset{\\alpha}{\\underbrace{\\mathbf{B}^{\\top}\\mathbf{E}}} - \\mathbf{B}\\underset{\\beta}{\\underbrace{\\mathbf{A}^{\\top}\\mathbf{E}}}\n = \\mathbf{A}\\alpha + \\mathbf{B}\\beta,\n"
},
{
"math_id": 25,
"text": "\n \\mathbf{E}^{\\top}\\mathbf{X}\n = \\mathbf{E}^{\\top}[\\mathbf{L}]_{\\times}\\mathbf{E}\n = \\underset{\\alpha}{\\underbrace{\\mathbf{E}^{\\top}\\mathbf{A}}}\\underset{\\beta}{\\underbrace{\\mathbf{B}^{\\top}\\mathbf{E}}} - \\underset{\\beta}{\\underbrace{\\mathbf{E}^{\\top}\\mathbf{B}}}\\underset{\\alpha}{\\underbrace{\\mathbf{A}^{\\top}\\mathbf{E}}}\n = 0,\n"
},
{
"math_id": 26,
"text": "\\alpha = \\beta = 0 \\iff \\mathbf{E}"
},
{
"math_id": 27,
"text": " \\mathbf{L}. "
},
{
"math_id": 28,
"text": "E = \\left(E_0, E_1, E_2, E_3\\right)^\\top \\in \\mathbb{R}\\mathcal{P}^3"
},
{
"math_id": 29,
"text": "F = \\left(F_0, F_1, F_2, F_3\\right)^\\top \\in \\mathbb{R}\\mathcal{P}^3"
},
{
"math_id": 30,
"text": "\\left[\\tilde{\\mathbf{L}}\\right]_{\\times} = \\mathbf{E}\\mathbf{F}^{\\top} - \\mathbf{F}\\mathbf{E}^{\\top}"
},
{
"math_id": 31,
"text": "\\mathbf{G} = \\left[\\tilde{\\mathbf{L}}\\right]_{\\times}\\mathbf{X}"
},
{
"math_id": 32,
"text": "\\mathbf{G}"
},
{
"math_id": 33,
"text": "\n \\forall\\mathbf{E} \\in \\mathbb{R}\\mathcal{P}^{3}:\\,\n \\mathbf{X} = [\\mathbf{L}]_{\\times}\\mathbf{E}\\text{ lies on }\\mathbf{L}\n \\iff \\left[\\tilde{\\mathbf{L}}\\right]_{\\times}\\mathbf{X} = \\mathbf{0}.\n"
},
{
"math_id": 34,
"text": "\n \\left([\\tilde{\\mathbf{L}}]_{\\times}[\\mathbf{L}]_{\\times}\\right)^{\\top}\n = [\\mathbf{L}]_{\\times}\\left[\\tilde{\\mathbf{L}}\\right]_{\\times}\n = \\mathbf{0} \\in \\mathbb{R}^{4\\times 4}.\n"
},
{
"math_id": 35,
"text": "\\begin{align}\n &\\left(\\begin{array}{cccc}\n 0 & L_{23} & -L_{13} & L_{12} \\\\\n -L_{23} & 0 & L_{03} & -L_{02} \\\\\n L_{13} & -L_{03} & 0 & L_{01} \\\\\n -L_{12} & L_{02} & -L_{01} & 0\n \\end{array}\\right)\n \\left(\\begin{array}{cccc}\n 0 & -L_{01} & -L_{02} & -L_{03} \\\\\n L_{01} & 0 & -L_{12} & -L_{13} \\\\\n L_{02} & L_{12} & 0 & -L_{23} \\\\\n L_{03} & L_{13} & L_{23} & 0\n \\end{array}\\right) \\\\[10pt]\n ={} &\\left(L_{01}L_{23} - L_{02}L_{13} + L_{03}L_{12}\\right) \\cdot\n \\left(\\begin{array}{cccc}\n 1 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1\n \\end{array}\\right)\n = \\mathbf{0},\n\\end{align}"
},
{
"math_id": 36,
"text": "\\mathbf{L} = \\left(L_{01},\\,L_{02},\\,L_{03},\\,L_{12},\\,L_{13},\\,L_{23}\\right)^{\\top}"
},
{
"math_id": 37,
"text": "\\tilde{\\mathbf{L}} = \\left(L_{23},\\,-L_{13},\\,L_{12},\\,L_{03},\\,-L_{02},\\,L_{01}\\right)^{\\top}."
},
{
"math_id": 38,
"text": "\\mathbf{l} \\propto \\mathbf{a} \\times \\mathbf{b} =\n \\left(\\begin{array}{c}\n a_{1}b_{2} - b_{1}a_{2} \\\\\n b_{0}a_{2} - a_{0}b_{2} \\\\\n a_{0}b_{1} - a_{1}b_{0}\n \\end{array}\\right) = \\left(\\begin{array}{c}\n l_{0} \\\\\n l_{1} \\\\\n l_{2}\n \\end{array}\\right).\n"
},
{
"math_id": 39,
"text": "\\mathbf{x} \\propto \\mathbf{l} \\times \\mathbf{m}"
},
{
"math_id": 40,
"text": "\n [\\mathbf{l}]_{\\times} = \n \\mathbf{a}\\mathbf{b}^{\\top} - \\mathbf{b}\\mathbf{a}^{\\top} =\n \\left(\\begin{array}{ccc}\n 0 & l_{2} & -l_{1} \\\\\n -l_{2} & 0 & l_{0} \\\\\n l_{1} & -l_{0} & 0\n \\end{array}\\right)\n"
},
{
"math_id": 41,
"text": "[\\mathbf{x}]_{\\times} = \\mathbf{l}\\mathbf{m}^{\\top} - \\mathbf{m}\\mathbf{l}^{\\top}"
},
{
"math_id": 42,
"text": "\\mathbf{d} = \\left(-L_{03},\\, -L_{13},\\, -L_{23}\\right)^{\\top}"
},
{
"math_id": 43,
"text": "\\mathbf{m} = \\left(L_{12},\\, -L_{02},\\, L_{01}\\right)^{\\top}"
},
{
"math_id": 44,
"text": "\n [\\mathbf{L}]_{\\times} = \\left(\\begin{array}{cc}\n [\\mathbf{m}]_{\\times} & \\mathbf{d} \\\\\n -\\mathbf{d} & 0\n \\end{array}\\right)\n"
},
{
"math_id": 45,
"text": "\n [\\tilde{\\mathbf{L}}]_{\\times} = \\left(\\begin{array}{cc}\n [-\\mathbf{d}]_{\\times} & \\mathbf{m}\\\\\n -\\mathbf{m} & 0\n \\end{array}\\right),\n"
},
{
"math_id": 46,
"text": "\\mathbf{d}"
},
{
"math_id": 47,
"text": "\\mathbf{m}"
}
] | https://en.wikipedia.org/wiki?curid=9127600 |
9127968 | Abstract model checking | In computer science and in mathematics, abstract model checking is a form of model checking for systems where the actual representation is too complex to model directly. The design therefore undergoes a kind of translation to a scaled-down "abstract" version.
The set of variables is partitioned into visible and invisible variables, depending on how their values change. The real state space is summarized into a smaller state space over the visible ones.
Galois connected.
The real and the abstract state spaces are Galois connected. This means that if one takes an element from the abstract space, concretizes it, and abstracts the concretized version, the result will be equal to the original. On the other hand, if one picks an element from the real space, abstracts it, and concretizes the abstracted version, the final result will be a superset of the original.
That is,
formula_0(formula_1(abstract)) = abstract
formula_1(formula_0(real)) formula_2 real
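As a minimal illustration (assuming an interval abstraction of finite integer sets, with formula_0 as the abstraction map and formula_1 as the concretization map), the two properties can be checked directly:
```python
def eta(real_states):
    """Abstraction: keep only the visible information, here the (min, max) interval."""
    return (min(real_states), max(real_states))

def theta(abstract_state):
    """Concretization: all real states consistent with the abstract one."""
    lo, hi = abstract_state
    return set(range(lo, hi + 1))

abstract = (2, 5)
print(eta(theta(abstract)) == abstract)   # True: abstracting a concretization returns the original

real = {1, 3, 7}
print(theta(eta(real)) >= real)           # True: concretizing an abstraction yields a superset
```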
| [
{
"math_id": 0,
"text": "\\eta"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "\\supseteq"
}
] | https://en.wikipedia.org/wiki?curid=9127968 |
912825 | Parallel curve | A parallel of a curve is the envelope of a family of congruent circles centered on the curve.
It generalises the concept of "parallel (straight) lines". It can also be defined as a curve whose points are at a constant "normal distance" from a given curve.
These two definitions are not entirely equivalent as the latter assumes smoothness, whereas the former does not.
In computer-aided design the preferred term for a parallel curve is offset curve. (In other geometric contexts, the term offset can also refer to translation.) Offset curves are important, for example, in numerically controlled machining, where they describe, for example, the shape of the cut made by a round cutting tool of a two-axis machine. The shape of the cut is offset from the trajectory of the cutter by a constant distance in the direction normal to the cutter trajectory at every point.
In the area of 2D computer graphics known as vector graphics, the (approximate) computation of parallel curves is involved in one of the fundamental drawing operations, called stroking, which is typically applied to polylines or polybeziers (themselves called paths) in that field.
Except in the case of a line or circle, the parallel curves have a more complicated mathematical structure than the progenitor curve. For example, even if the progenitor curve is smooth, its offsets may not be so; this property is illustrated in the top figure, using a sine curve as progenitor curve. In general, even if a curve is rational, its offsets may not be so. For example, the offsets of a parabola are rational curves, but the offsets of an ellipse or of a hyperbola are not rational, even though these progenitor curves themselves are rational.
The notion also generalizes to 3D surfaces, where it is called an offset surface or parallel surface. Increasing a solid volume by a (constant) distance offset is sometimes called "dilation". The opposite operation is sometimes called "shelling". Offset surfaces are important in numerically controlled machining, where they describe the shape of the cut made by a ball nose end mill of a three-axis machine. Other shapes of cutting bits can be modelled mathematically by general offset surfaces.
Parallel curve of a parametrically given curve.
If there is a regular parametric representation formula_0 of the given curve available, the second definition of a parallel curve (see above) leads to the following parametric representation of the parallel curve with distance formula_1:
formula_2 with the unit normal formula_3.
In cartesian coordinates:
formula_4
formula_5
The distance parameter formula_6 may be negative. In this case, one gets a parallel curve on the opposite side of the curve (see diagram on the parallel curves of a circle). One can easily check that a parallel curve of a line is a parallel line in the common sense, and the parallel curve of a circle is a concentric circle.
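A minimal numerical sketch of these formulas (offsetting a circle, with assumed radius and offset distance) reproduces the concentric-circle property:
```python
import numpy as np

def offset(x, y, dx, dy, d):
    """Offset a parametrically given curve using the Cartesian formulas above."""
    norm = np.hypot(dx, dy)
    return x + d*dy/norm, y - d*dx/norm

R, d = 2.0, 0.5                               # assumed radius and offset distance
t = np.linspace(0.0, 2.0*np.pi, 200)
x, y = R*np.cos(t), R*np.sin(t)               # circle traversed counter-clockwise
dx, dy = -R*np.sin(t), R*np.cos(t)            # analytic derivatives x'(t), y'(t)
xd, yd = offset(x, y, dx, dy, d)

print(np.allclose(np.hypot(xd, yd), R + d))   # True: a concentric circle of radius R + d
```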
Geometric properties:.
If the given curve is polynomial (meaning that formula_15 and formula_16 are polynomials), then the parallel curves are usually not polynomial. In the CAD area this is a drawback, because CAD systems use polynomial or rational curves. In order to get at least rational parallel curves, the square root appearing in the representation of the parallel curve has to be solvable (that is, the expression under the square root must be the square of a polynomial). Such curves are called "Pythagorean hodograph curves" and were investigated by R.T. Farouki.
Parallel curves of an implicit curve.
In general, an analytic representation of a parallel curve of an implicit curve is not possible. Only in the simple cases of lines and circles can the parallel curves be described easily.
For example:
"Line" formula_17 → distance function: formula_18 (Hesse normalform)
"Circle" formula_19 → distance function: formula_20
In general, presuming certain conditions, one can prove the existence of an oriented distance function formula_21. In practice it has to be treated numerically. For parallel curves the following holds:
Properties of the distance function:.
Example:
The diagram shows parallel curves of the implicit curve with equation formula_27
"Remark:"
The curves formula_28 are not parallel curves, because formula_29 is not true in the area of interest.
Further examples.
And:
Parallel curve to a curve with a corner.
When determining the cutting path of a part with a sharp corner for machining, you must define the parallel (offset) curve to a given curve that has a discontinuous normal at the corner. Even though the given curve is not smooth at the sharp corner, its parallel curve may be smooth with a continuous normal, or it may have cusps when the distance from the curve matches the radius of curvature at the sharp corner.
Normal fans.
As described above, the parametric representation of a parallel curve, formula_30, to a given curve, formula_31, with distance formula_32 is:
formula_33 with the unit normal formula_3.
At a sharp corner (formula_34), the normal to formula_35 given by formula_36 is discontinuous, meaning the one-sided limit of the normal from the left formula_37 is unequal to the limit from the right formula_38. Mathematically,
formula_39.
However, we can define a normal fan formula_40 that provides an interpolant between formula_37 and formula_38, and use formula_40 in place of formula_36 at the sharp corner:
formula_41where formula_42.
The resulting definition of the parallel curve formula_30 provides the desired behavior:
formula_43
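The following sketch applies the normal-fan interpolation to a single sharp corner of a polyline (assumed example points, and the same normal convention (y′, −x′)/‖(x′, y′)‖ as in the Cartesian offset formulas above); every generated offset point lies at the offset distance from the corner:
```python
import numpy as np

def seg_normal(p0, p1):
    """Unit normal of a segment, using the convention n = (y', -x')/|(x', y')|."""
    t = (p1 - p0) / np.linalg.norm(p1 - p0)
    return np.array([t[1], -t[0]])

def corner_fan(p_prev, p_corner, p_next, d, steps=8):
    """Offset points around a sharp corner via the normal-fan interpolation described above."""
    n_in, n_out = seg_normal(p_prev, p_corner), seg_normal(p_corner, p_next)
    pts = []
    for a in np.linspace(0.0, 1.0, steps):
        n = (1.0 - a)*n_in + a*n_out
        n /= np.linalg.norm(n)               # n_f(alpha): renormalised blend of the one-sided normals
        pts.append(p_corner + d*n)
    return np.array(pts)

arc = corner_fan(np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 1.0]), d=0.25)
print(np.allclose(np.linalg.norm(arc - np.array([1.0, 0.0]), axis=1), 0.25))   # all at distance d
```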
Algorithms.
In general, the parallel curve of a Bézier curve is not another Bézier curve, a result proved by Tiller and Hanson in 1984. Thus, in practice, approximation techniques are used. Any desired level of accuracy is possible by repeatedly subdividing the curve, though better techniques require fewer subdivisions to attain the same level of accuracy. A 1997 survey by Elber, Lee and Kim is widely cited, though better techniques have been proposed more recently. A modern technique based on curve fitting, with references and comparisons to other algorithms, as well as open source JavaScript source code, was published in a blog post in September 2022.
Another efficient algorithm for offsetting is the level approach described by Kimmel and Bruckstein (1993).
Parallel (offset) surfaces.
Offset surfaces are important in numerically controlled machining, where they describe the shape of the cut made by a ball nose end mill of a three-axis mill. If there is a regular parametric representation formula_44 of the given surface available, the second definition of a parallel curve (see above) generalizes to the following parametric representation of the parallel surface with distance formula_1:
formula_45 with the unit normal formula_46.
The distance parameter formula_6 may be negative, too. In this case one gets a parallel surface on the opposite side of the surface (see the similar diagram on the parallel curves of a circle). One can easily check that a parallel surface of a plane is a parallel plane in the common sense, and that the parallel surface of a sphere is a concentric sphere.
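A minimal sketch of this construction (assuming a sphere as the example surface and finite differences for the partial derivatives) confirms that the offset of a sphere is a concentric sphere; with this parameterization the computed normal points outward, so a positive formula_6 enlarges the radius:
```python
import numpy as np

def surface(u, v, R=2.0):
    """Example parametric surface: a sphere of radius R (assumed test case)."""
    return np.array([R*np.cos(u)*np.cos(v), R*np.sin(u)*np.cos(v), R*np.sin(v)])

def unit_normal(u, v, h=1e-6):
    xu = (surface(u + h, v) - surface(u - h, v)) / (2.0*h)   # partial derivative w.r.t. u
    xv = (surface(u, v + h) - surface(u, v - h)) / (2.0*h)   # partial derivative w.r.t. v
    n = np.cross(xu, xv)
    return n / np.linalg.norm(n)

def offset_surface(u, v, d):
    return surface(u, v) + d*unit_normal(u, v)

p = offset_surface(0.7, 0.3, 0.5)
print(np.linalg.norm(p))       # ~2.5: the offset surface is a concentric sphere of radius R + d
```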
The principal curvatures are the eigenvalues of the shape operator, the principal curvature directions are its eigenvectors, the Gaussian curvature is its determinant, and the mean curvature is half its trace.
The principal radii of curvature are the eigenvalues of the inverse of the shape operator, the principal curvature directions are its eigenvectors, the reciprocal of the Gaussian curvature is its determinant, and the mean radius of curvature is half its trace.
Geometric properties:.
Note the similarity to the geometric properties of parallel curves.
Generalizations.
The problem generalizes fairly obviously to higher dimensions e.g. to offset surfaces, and slightly less trivially to pipe surfaces. Note that the terminology for the higher-dimensional versions varies even more widely than in the planar case, e.g. other authors speak of parallel fibers, ribbons, and tubes. For curves embedded in 3D surfaces the offset may be taken along a geodesic.
Another way to generalize it is (even in 2D) to consider a variable distance, e.g. parametrized by another curve. One can for example stroke (envelope) with an ellipse instead of circle as it is possible for example in METAFONT.
More recently, Adobe Illustrator has added a somewhat similar facility in version CS5, although the control points for the variable width are specified visually. In contexts where it is important to distinguish between constant and variable distance offsetting, the acronyms CDO and VDO are sometimes used.
General offset curves.
Assume you have a regular parametric representation of a curve, formula_57, and you have a second curve that can be parameterized by its unit normal, formula_58, where the normal of formula_59 (this parameterization by normal exists for curves whose curvature is strictly positive or negative, and thus convex, smooth, and not straight). The parametric representation of the general offset curve of formula_31 offset by formula_58 is:
formula_60 where formula_3 is the unit normal of formula_31.
Note that the trivial offset, formula_61, gives you ordinary parallel (aka offset) curves.
General offset surfaces.
General offset surfaces describe the shape of cuts made by a variety of cutting bits used by three-axis end mills in numerically controlled machining. Assume you have a regular parametric representation of a surface, formula_44, and you have a second surface that can be parameterized by its unit normal, formula_58, where the normal of formula_59 (this parameterization by normal exists for surfaces whose Gaussian curvature is strictly positive, and thus convex, smooth, and not flat). The parametric representation of the general offset surface of formula_31 offset by formula_58 is:
formula_67 where formula_68 is the unit normal of formula_69.
Note that the trivial offset, formula_61, gives you ordinary parallel (aka offset) surfaces.
The principal curvatures are the eigenvalues of the shape operator, the principal curvature directions are its eigenvectors, the Gaussian curvature is its determinant, and the mean curvature is half its trace.
The principal radii of curvature are the eigenvalues of the inverse of the shape operator, the principal curvature directions are its eigenvectors, the reciprocal of the Gaussian curvature is its determinant, and the mean radius of curvature is half its trace.
Geometric properties:.
Note the similarity to the geometric properties of general offset curves.
Derivation of geometric properties for general offsets.
The geometric properties listed above for general offset curves and surfaces can be derived for offsets of arbitrary dimension. Assume you have a regular parametric representation of an n-dimensional surface, formula_78, where the dimension of formula_79 is n-1. Also assume you have a second n-dimensional surface that can be parameterized by its unit normal, formula_58, where the normal of formula_59 (this parameterization by normal exists for surfaces whose Gaussian curvature is strictly positive, and thus convex, smooth, and not flat). The parametric representation of the general offset surface of formula_80 offset by formula_58 is:
formula_81 where formula_82 is the unit normal of formula_80. (The trivial offset, formula_61, gives you ordinary parallel surfaces.)
First, notice that the normal of formula_83 the normal of formula_84 by definition. Now, we'll apply the differential w.r.t. formula_79 to formula_52, which gives us its tangent vectors spanning its tangent plane.
formula_85
Notice that the tangent vectors for formula_52 are the sums of the tangent vectors for formula_80 and its offset formula_58, which share the same unit normal. Thus, the general offset surface shares the same tangent plane and normal with formula_80 and formula_86. That aligns with the nature of envelopes.
We now consider the Weingarten equations for the shape operator, which can be written as formula_87. If formula_51 is invertible, formula_88. Recall that the principal curvatures of a surface are the eigenvalues of the shape operator, the principal curvature directions are its eigenvectors, the Gaussian curvature is its determinant, and the mean curvature is half its trace. The inverse of the shape operator holds these same values for the radii of curvature.
Substituting into the equation for the differential of formula_52, we get:
formula_89 where formula_72 is the shape operator for formula_86.
Next, we use the Weingarten equations again to replace formula_90:
formula_91 where formula_51 is the shape operator for formula_80.
Then, we solve for formula_92 and multiply both sides by formula_93 to get back to the Weingarten equations, this time for formula_94:
formula_95
formula_96
Thus, formula_97, and inverting both sides gives us formula_98.
| [
{
"math_id": 0,
"text": " \\vec x= (x(t),y(t))"
},
{
"math_id": 1,
"text": " |d| "
},
{
"math_id": 2,
"text": " \\vec x_d(t)=\\vec x(t)+d\\vec n(t)"
},
{
"math_id": 3,
"text": "\\vec n(t)"
},
{
"math_id": 4,
"text": " x_d(t)= x(t)+\\frac{d\\; y'(t)}{\\sqrt {x'(t)^2+y'(t)^2}}"
},
{
"math_id": 5,
"text": " y_d(t)= y(t)-\\frac{d\\; x'(t)}{\\sqrt {x'(t)^2+y'(t)^2}} \\ ."
},
{
"math_id": 6,
"text": "d"
},
{
"math_id": 7,
"text": "\\vec x'_d(t) \\parallel \\vec x'(t),\\quad"
},
{
"math_id": 8,
"text": "k_d(t)=\\frac{k(t)}{1+dk(t)},\\quad"
},
{
"math_id": 9,
"text": "k(t)"
},
{
"math_id": 10,
"text": "k_d(t)"
},
{
"math_id": 11,
"text": "t"
},
{
"math_id": 12,
"text": "R_d(t)=R(t) + d,\\quad"
},
{
"math_id": 13,
"text": "R(t)"
},
{
"math_id": 14,
"text": "R_d(t)"
},
{
"math_id": 15,
"text": "x(t)"
},
{
"math_id": 16,
"text": "y(t)"
},
{
"math_id": 17,
"text": "\\; f(x,y)=x+y-1=0\\; "
},
{
"math_id": 18,
"text": "\\; h(x,y)=\\frac{x+y-1}{\\sqrt{2}}=d\\; "
},
{
"math_id": 19,
"text": "\\; f(x,y)=x^2+y^2-1=0\\;"
},
{
"math_id": 20,
"text": "\\; h(x,y)=\\sqrt{x^2+y^2}-1=d\\; ."
},
{
"math_id": 21,
"text": "h(x,y)"
},
{
"math_id": 22,
"text": "h(x,y)=d"
},
{
"math_id": 23,
"text": "h"
},
{
"math_id": 24,
"text": "| \\operatorname{grad} h (\\vec x)|=1 \\; ,"
},
{
"math_id": 25,
"text": " h(\\vec x+d\\operatorname{grad} h (\\vec x)) = h(\\vec x)+d \\; ,"
},
{
"math_id": 26,
"text": " \\operatorname{grad}h(\\vec x+d\\operatorname{grad}h (\\vec x))= \\operatorname{grad}h (\\vec x) \\; ."
},
{
"math_id": 27,
"text": "\\; f(x,y)=x^4+y^4-1=0\\; ."
},
{
"math_id": 28,
"text": "\\; f(x,y)=x^4+y^4-1=d\\; "
},
{
"math_id": 29,
"text": "\\; | \\operatorname{grad} f (x,y)|=1 \\;"
},
{
"math_id": 30,
"text": "\\vec x_d(t)"
},
{
"math_id": 31,
"text": "\\vec x(t)"
},
{
"math_id": 32,
"text": "|d|"
},
{
"math_id": 33,
"text": "\\vec x_d(t) = \\vec x(t) + d\\vec n(t)"
},
{
"math_id": 34,
"text": "t = t_c"
},
{
"math_id": 35,
"text": "\\vec x(t_c)"
},
{
"math_id": 36,
"text": "\\vec n(t_c)"
},
{
"math_id": 37,
"text": "\\vec n(t_c^-)"
},
{
"math_id": 38,
"text": "\\vec n(t_c^+)"
},
{
"math_id": 39,
"text": "\\vec n(t_c^-) = \\lim_{t \\to t_c^-}\\vec n(t) \\ne \\vec n(t_c^+) = \\lim_{t \\to t_c^+}\\vec n(t)"
},
{
"math_id": 40,
"text": "\\vec n_f(\\alpha)"
},
{
"math_id": 41,
"text": "\\vec n_f(\\alpha) = \\frac{(1 - \\alpha)\\vec n(t_c^-) + \\alpha\\vec n(t_c^+)}{\\lVert (1 - \\alpha)\\vec n(t_c^-) + \\alpha\\vec n(t_c^+) \\rVert},\\quad"
},
{
"math_id": 42,
"text": "0 < \\alpha < 1"
},
{
"math_id": 43,
"text": "\\vec x_d(t) = \\begin{cases}\n\\vec x(t) + d\\vec n(t), & \\text{if }t < t_c\\text{ or }t > t_c \\\\\n\\vec x(t_c) + d\\vec n_f(\\alpha), & \\text{if }t = t_c\\text{ where }0 < \\alpha < 1\n\\end{cases}"
},
{
"math_id": 44,
"text": " \\vec x(u,v) = (x(u,v),y(u,v),z(u,v))"
},
{
"math_id": 45,
"text": " \\vec x_d(u,v)=\\vec x(u,v)+d\\vec n(u,v)"
},
{
"math_id": 46,
"text": "\\vec n_d(u,v) = {{{\\partial \\vec x \\over \\partial u} \\times {\\partial \\vec x \\over \\partial v}} \\over {|{{\\partial \\vec x \\over \\partial u} \\times {\\partial \\vec x \\over \\partial v}}|}}"
},
{
"math_id": 47,
"text": "{\\partial \\vec x_d \\over \\partial u} \\parallel {\\partial \\vec x \\over \\partial u}, \\quad {\\partial \\vec x_d \\over \\partial v} \\parallel {\\partial \\vec x \\over \\partial v}, \\quad"
},
{
"math_id": 48,
"text": "\\vec n_d(u,v) = \\pm\\vec n(u,v), \\quad"
},
{
"math_id": 49,
"text": "S_d = (1 + d S)^{-1} S, \\quad"
},
{
"math_id": 50,
"text": "S_d"
},
{
"math_id": 51,
"text": "S"
},
{
"math_id": 52,
"text": "\\vec x_d"
},
{
"math_id": 53,
"text": "\\vec x"
},
{
"math_id": 54,
"text": "S_d^{-1} = S^{-1} + d I, \\quad"
},
{
"math_id": 55,
"text": "S_d^{-1}"
},
{
"math_id": 56,
"text": "S^{-1}"
},
{
"math_id": 57,
"text": " \\vec x(t) = (x(t),y(t))"
},
{
"math_id": 58,
"text": " \\vec d(\\vec n)"
},
{
"math_id": 59,
"text": "\\vec d(\\vec n) = \\vec n"
},
{
"math_id": 60,
"text": " \\vec x_d(t)=\\vec x(t)+ \\vec d(\\vec n(t)), \\quad"
},
{
"math_id": 61,
"text": "\\vec d(\\vec n) = d\\vec n"
},
{
"math_id": 62,
"text": "k_d(t)=\\dfrac{k(t)}{1+\\dfrac{k(t)}{k_n(t)}},\\quad"
},
{
"math_id": 63,
"text": "k_n(t)"
},
{
"math_id": 64,
"text": "\\vec d(\\vec n(t))"
},
{
"math_id": 65,
"text": "R_d(t)=R(t) + R_n(t),\\quad"
},
{
"math_id": 66,
"text": "R_n(t)"
},
{
"math_id": 67,
"text": " \\vec x_d(u,v)=\\vec x(u,v)+ \\vec d(\\vec n(u,v)), \\quad"
},
{
"math_id": 68,
"text": "\\vec n(u,v)"
},
{
"math_id": 69,
"text": "\\vec x(u,v)"
},
{
"math_id": 70,
"text": "S_d = (1 + SS_n^{-1})^{-1} S, \\quad"
},
{
"math_id": 71,
"text": "S_d, S,"
},
{
"math_id": 72,
"text": "S_n"
},
{
"math_id": 73,
"text": "\\vec x_d, \\vec x,"
},
{
"math_id": 74,
"text": "\\vec d(\\vec n)"
},
{
"math_id": 75,
"text": "S_d^{-1} = S^{-1} + S_n^{-1}, \\quad"
},
{
"math_id": 76,
"text": "S_d^{-1}, S^{-1}"
},
{
"math_id": 77,
"text": "S_n^{-1}"
},
{
"math_id": 78,
"text": " \\vec x(\\vec u)"
},
{
"math_id": 79,
"text": "\\vec u"
},
{
"math_id": 80,
"text": "\\vec x(\\vec u)"
},
{
"math_id": 81,
"text": " \\vec x_d(\\vec u) = \\vec x(\\vec u)+ \\vec d(\\vec n(\\vec u)), \\quad"
},
{
"math_id": 82,
"text": "\\vec n(\\vec u)"
},
{
"math_id": 83,
"text": "\\vec x(\\vec u) = "
},
{
"math_id": 84,
"text": "\\vec d(\\vec n(\\vec u)) = \\vec n(\\vec u),"
},
{
"math_id": 85,
"text": " \\partial\\vec x_d(\\vec u) = \\partial\\vec x(\\vec u)+ \\partial\\vec d(\\vec n(\\vec u))"
},
{
"math_id": 86,
"text": "\\vec d(\\vec n(\\vec u))"
},
{
"math_id": 87,
"text": "\\partial\\vec n = -\\partial\\vec xS"
},
{
"math_id": 88,
"text": "\\partial\\vec x = -\\partial\\vec nS^{-1}"
},
{
"math_id": 89,
"text": " \\partial\\vec x_d = \\partial\\vec x - \\partial\\vec n S_n^{-1},\\quad"
},
{
"math_id": 90,
"text": "\\partial\\vec n"
},
{
"math_id": 91,
"text": "\\partial\\vec x_d = \\partial\\vec x + \\partial\\vec x S S_n^{-1},\\quad"
},
{
"math_id": 92,
"text": "\\partial\\vec x"
},
{
"math_id": 93,
"text": "-S"
},
{
"math_id": 94,
"text": "\\partial\\vec x_d"
},
{
"math_id": 95,
"text": "\\partial\\vec x_d (I + S S_n^{-1})^{-1} = \\partial\\vec x,"
},
{
"math_id": 96,
"text": "-\\partial\\vec x_d (I + S S_n^{-1})^{-1}S = -\\partial\\vec xS = \\partial\\vec n."
},
{
"math_id": 97,
"text": "S_d = (I + S S_n^{-1})^{-1}S"
},
{
"math_id": 98,
"text": "S_d^{-1} = S^{-1} + S_n^{-1}"
}
] | https://en.wikipedia.org/wiki?curid=912825 |
912925 | Sensor array | Group of sensors used to increase gain or dimensionality over a single sensor
A sensor array is a group of sensors, usually deployed in a certain geometry pattern, used for collecting and processing electromagnetic or acoustic signals. The advantage of using a sensor array over using a single sensor lies in the fact that an array adds new dimensions to the observation, helping to estimate more parameters and improve the estimation performance.
For example, an array of radio antenna elements used for beamforming can increase antenna gain in the direction of the signal while decreasing the gain in other directions, i.e., increase the signal-to-noise ratio (SNR) by amplifying the signal coherently. Another example of a sensor array application is estimating the direction of arrival of impinging electromagnetic waves.
The related processing method is called array signal processing. A third example is chemical sensor arrays, which utilize multiple chemical sensors for fingerprint detection in complex mixtures or sensing environments. Application examples of array signal processing include radar/sonar, wireless communications, seismology, machine condition monitoring, astronomical observations, fault diagnosis, etc.
Using array signal processing, the temporal and spatial properties (or parameters) of the impinging signals interfered by noise and hidden in the data collected by the sensor array can be estimated and revealed. This is known as parameter estimation.
Plane wave, time domain beamforming.
Figure 1 illustrates a six-element uniform linear array (ULA). In this example, the sensor array is assumed to be in the far field of a signal source, so that the impinging signal can be treated as a plane wave.
Parameter estimation takes advantage of the fact that the distance from the source to each antenna in the array is different, which means that the input data at each antenna will be phase-shifted replicas of each other. Eq. (1) shows the calculation for the extra time it takes to reach each antenna in the array relative to the first one, where "c" is the velocity of the wave.
formula_0
Each sensor is associated with a different delay. The delays are small but not trivial. In the frequency domain, they appear as phase shifts among the signals received by the sensors. The delays are closely related to the incident angle and the geometry of the sensor array. Given the geometry of the array, the delays or phase differences can be used to estimate the incident angle. Eq. (1) is the mathematical basis behind array signal processing. Simply summing the signals received by the sensors and calculating the mean value gives the result
formula_1 .
Because the received signals are out of phase, this mean value does not give an enhanced signal compared with the original source. Heuristically, if we can find delays of each of the received signals and remove them prior to the summation, the mean value
formula_2
will result in an enhanced signal. The process of time-shifting signals using a well selected set of delays for each channel of the sensor array so that the signal is added constructively is called beamforming.
In addition to the delay-and-sum approach described above, a number of spectral based (non-parametric) approaches and parametric approaches exist which improve various performance metrics. These beamforming algorithms are briefly described as follows
Array design.
Sensor arrays have different geometrical designs, including linear, circular, planar, cylindrical and spherical arrays. There are also sensor arrays with arbitrary configurations, which require more complex signal processing techniques for parameter estimation. In a uniform linear array (ULA) the phase of the incoming signal formula_3 should be limited to formula_4 to avoid grating lobes. This means that for an angle of arrival formula_5 in the interval formula_6 the sensor spacing should be smaller than half the wavelength, formula_7. However, the width of the main beam, i.e., the resolution or directivity of the array, is determined by the length of the array compared to the wavelength. In order to have decent directional resolution the length of the array should be several times larger than the radio wavelength.
Delay-and-sum beamforming.
If a time delay that is equal and opposite to the delay caused by the additional travel time is added to the recorded signal from each sensor, the result is a set of signals that are perfectly in phase with each other. Summing these in-phase signals results in constructive interference that amplifies the SNR by the number of antennas in the array. This is known as delay-and-sum beamforming. For direction of arrival (DOA) estimation, one can iteratively test time delays for all possible directions. If the guess is wrong, the signals interfere destructively, resulting in a diminished output signal, but a correct guess results in the signal amplification described above.
The problem is that, before the incident angle has been estimated, it is impossible to know the time delay that is 'equal and opposite' to the delay caused by the extra travel time. The solution is to try a series of trial angles formula_8 at sufficiently high resolution and calculate the resulting mean output signal of the array using Eq. (3). The trial angle that maximizes the mean output is the DOA estimate given by the delay-and-sum beamformer.
Adding an opposite delay to the input signals is equivalent to rotating the sensor array physically. Therefore, it is also known as beam steering.
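The following minimal simulation sketch illustrates the procedure (an assumed acoustic scenario with a single tone; the sampling rate, spacing, frequency, and angle are illustrative, not taken from the text): the mean output power of Eq. (3) is evaluated over a grid of trial angles and peaks at the true direction.
```python
import numpy as np

c, f = 343.0, 1000.0              # assumed propagation speed (m/s) and tone frequency (Hz)
M, d = 6, 0.05                    # 6-element ULA with 5 cm spacing (< half a wavelength)
theta_true = np.deg2rad(60.0)     # true angle of arrival, as measured in Eq. (1)
t = np.arange(0.0, 0.02, 1.0/48000.0)

def delays(theta):
    return np.arange(M) * d * np.cos(theta) / c      # Eq. (1)

def mean_output_power(theta_trial):
    # each sensor receives the tone delayed by its true delay; the trial delays are then
    # "removed" before averaging, as in Eq. (3) (the source is analytic, so exact shifts are possible)
    y = np.mean([np.cos(2.0*np.pi*f*(t - delays(theta_true)[i] + delays(theta_trial)[i]))
                 for i in range(M)], axis=0)
    return np.mean(y**2)

trials = np.deg2rad(np.arange(0.0, 180.5, 0.5))
estimate = trials[np.argmax([mean_output_power(th) for th in trials])]
print(np.rad2deg(estimate))       # ~60 degrees: the trial angle maximizing the mean output power
```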
Spectrum-based beamforming.
Delay-and-sum beamforming is a time-domain approach. It is simple to implement, but it may estimate the direction of arrival (DOA) poorly. The solution to this is a frequency-domain approach. The Fourier transform transforms the signal from the time domain to the frequency domain. This converts the time delay between adjacent sensors into a phase shift. Thus, the array output vector at any time "t" can be denoted as formula_9, where formula_10 stands for the signal received by the first sensor. Frequency-domain beamforming algorithms use the spatial covariance matrix, represented by formula_11. This "M" by "M" matrix carries the spatial and spectral information of the incoming signals. Assuming zero-mean Gaussian white noise, the basic model of the spatial covariance matrix is given by
formula_12
where formula_13 is the variance of the white noise, formula_14 is the identity matrix and formula_15 is the array manifold vector formula_16 with formula_17. This model is of central importance in frequency domain beamforming algorithms.
Some spectrum-based beamforming approaches are listed below.
Conventional (Bartlett) beamformer.
The Bartlett beamformer is a natural extension of conventional spectral analysis (spectrogram) to the sensor array. Its spectral power is represented by
formula_18.
The angle that maximizes this power is an estimation of the angle of arrival.
MVDR (Capon) beamformer.
The Minimum Variance Distortionless Response beamformer, also known as the Capon beamforming algorithm, has a power given by
formula_19.
Though the MVDR/Capon beamformer can achieve better resolution than the conventional (Bartlett) approach, this algorithm has higher complexity due to the full-rank matrix inversion. Technical advances in GPU computing have begun to narrow this gap and make real-time Capon beamforming possible.
MUSIC beamformer.
The MUSIC (MUltiple SIgnal Classification) beamforming algorithm starts by decomposing the covariance matrix of Eq. (4) into a signal part and a noise part. The eigen-decomposition is represented by
formula_20.
MUSIC uses the noise sub-space of the spatial covariance matrix in the denominator of the Capon algorithm
formula_21.
Therefore, the MUSIC beamformer is also known as a subspace beamformer. Compared to the Capon beamformer, it gives much better DOA estimation.
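The three spectra of Eqs. (5), (6), and (8) can be compared in a short simulation sketch (an assumed scenario: a half-wavelength-spaced ULA, two uncorrelated far-field sources, and white noise; all parameter values are illustrative):
```python
import numpy as np

rng = np.random.default_rng(0)
M, d_lam, N, K = 8, 0.5, 500, 2                     # sensors, spacing/wavelength, snapshots, sources

def steer(theta):
    """Array manifold vector v(theta) for the ULA (phase shifts as in the model above)."""
    return np.exp(-2j*np.pi*d_lam*np.arange(M)*np.cos(theta))

angles = np.deg2rad([65.0, 100.0])                  # assumed true DOAs
A = np.column_stack([steer(a) for a in angles])
S = (rng.standard_normal((K, N)) + 1j*rng.standard_normal((K, N))) / np.sqrt(2.0)
noise = 0.1*(rng.standard_normal((M, N)) + 1j*rng.standard_normal((M, N)))
X = A @ S + noise
R = X @ X.conj().T / N                              # sample spatial covariance matrix

Rinv = np.linalg.inv(R)
_, U = np.linalg.eigh(R)                            # eigenvalues in ascending order
Un = U[:, :M-K]                                     # noise subspace (M - K smallest eigenvalues)

grid = np.deg2rad(np.linspace(0.0, 180.0, 721))
V = np.column_stack([steer(th) for th in grid])
P_bartlett = np.real(np.einsum('im,ij,jm->m', V.conj(), R, V))          # Eq. (5)
P_capon    = 1.0/np.real(np.einsum('im,ij,jm->m', V.conj(), Rinv, V))   # Eq. (6)
P_music    = 1.0/np.sum(np.abs(Un.conj().T @ V)**2, axis=0)             # Eq. (8)

def top_peaks(P, k=K):
    idx = [i for i in range(1, len(P) - 1) if P[i] >= P[i-1] and P[i] > P[i+1]]
    best = sorted(idx, key=lambda i: P[i])[-k:]
    return sorted(np.rad2deg(grid[best]))

for name, P in (("Bartlett", P_bartlett), ("Capon", P_capon), ("MUSIC", P_music)):
    print(name, top_peaks(P))                       # peaks appear near 65 and 100 degrees
```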
SAMV beamformer.
The SAMV beamforming algorithm is a sparse-signal-reconstruction-based algorithm that explicitly exploits the time-invariant statistical characteristics of the covariance matrix. It achieves super-resolution and is robust to highly correlated signals.
Parametric beamformers.
One of the major advantages of spectrum-based beamformers is their lower computational complexity, but they may not give accurate DOA estimates if the signals are correlated or coherent. An alternative approach is parametric beamformers, also known as maximum likelihood (ML) beamformers. One example of a maximum likelihood method commonly used in engineering is the least squares method. In the least squares approach, a quadratic penalty function is used. To get the minimum value (or least squared error) of the quadratic penalty function (or objective function), take its derivative (which is linear), set it equal to zero, and solve the resulting system of linear equations.
In ML beamformers the quadratic penalty function is applied to the spatial covariance matrix and the signal model. One example of an ML beamformer penalty function is
formula_22 ,
where formula_23 is the Frobenius norm. It can be seen from Eq. (4) that the penalty function of Eq. (9) is minimized by matching the signal model to the sample covariance matrix as accurately as possible. In other words, the maximum likelihood beamformer finds the DOA formula_5, the independent variable of the matrix formula_24, such that the penalty function in Eq. (9) is minimized. In practice, the penalty function may look different, depending on the signal and noise model. For this reason, there are two major categories of maximum likelihood beamformers: deterministic ML beamformers and stochastic ML beamformers, corresponding to a deterministic and a stochastic model, respectively.
Another idea for modifying the former penalty equation is to simplify the minimization by differentiating the penalty function. In order to simplify the optimization algorithm, logarithmic operations and the probability density function (PDF) of the observations may be used in some ML beamformers.
The optimization problem is solved by finding the roots of the derivative of the penalty function after equating it with zero. Because the equation is non-linear, a numerical search approach such as the Newton–Raphson method is usually employed. The Newton–Raphson method is an iterative root search method with the iteration
formula_25.
The search starts from an initial guess formula_26. If the Newton–Raphson search method is employed to minimize the beamforming penalty function, the resulting beamformer is called the Newton ML beamformer. Several well-known ML beamformers are described below without providing further details due to the complexity of the expressions.
In deterministic maximum likelihood beamformer (DML), the noise is modeled as a stationary Gaussian white random processes while the signal waveform as deterministic (but arbitrary) and unknown.
In stochastic maximum likelihood beamformer (SML), the noise is modeled as stationary Gaussian white random processes (the same as in DML) whereas the signal waveform as Gaussian random processes.
The method of direction estimation (MODE) is a subspace maximum likelihood beamformer, just as MUSIC is a subspace spectral-based beamformer. Subspace ML beamforming is obtained by eigen-decomposition of the sample covariance matrix.
| [
{
"math_id": 0,
"text": "\\Delta t_i = \\frac{(i-1)d \\cos \\theta}{c}, i = 1, 2, ..., M \\ \\ (1) "
},
{
"math_id": 1,
"text": "y = \\frac{1}{M}\\sum_{i=1}^{M} \\boldsymbol x_i (t-\\Delta t_i) \\ \\ (2) "
},
{
"math_id": 2,
"text": "y = \\frac{1}{M}\\sum_{i=1}^{M} \\boldsymbol x_i (t) \\ \\ (3) "
},
{
"math_id": 3,
"text": "\\omega\\tau"
},
{
"math_id": 4,
"text": "\\pm\\pi"
},
{
"math_id": 5,
"text": "\\theta"
},
{
"math_id": 6,
"text": "[-\\frac{\\pi}{2},\\frac{\\pi}{2}]"
},
{
"math_id": 7,
"text": "d \\leq \\lambda/2"
},
{
"math_id": 8,
"text": "\\hat{\\theta} \\in [0, \\pi]"
},
{
"math_id": 9,
"text": " \\boldsymbol x(t) = x_1(t)\\begin{bmatrix} 1 & e^{-j\\omega\\Delta t} & \\cdots & e^{-j\\omega(M-1)\\Delta t} \\end{bmatrix}^T "
},
{
"math_id": 10,
"text": "x_1(t)"
},
{
"math_id": 11,
"text": " \\boldsymbol R=E\\{ \\boldsymbol x(t) \\boldsymbol x^T(t)\\}"
},
{
"math_id": 12,
"text": " \\boldsymbol R = \\boldsymbol V \\boldsymbol S \\boldsymbol V^H + \\sigma^2 \\boldsymbol I \\ \\ (4) "
},
{
"math_id": 13,
"text": "\\sigma^2 "
},
{
"math_id": 14,
"text": " \\boldsymbol I "
},
{
"math_id": 15,
"text": " \\boldsymbol V "
},
{
"math_id": 16,
"text": " \\boldsymbol V = \\begin{bmatrix} \\boldsymbol v_1 & \\cdots & \\boldsymbol v_k \\end{bmatrix}^T "
},
{
"math_id": 17,
"text": " \\boldsymbol v_i = \\begin{bmatrix} 1 & e^{-j\\omega\\Delta t_i} & \\cdots & e^{-j\\omega(M-1)\\Delta t_i} \\end{bmatrix}^T "
},
{
"math_id": 18,
"text": " \\hat{P}_{Bartlett}(\\theta)=\\boldsymbol v^H \\boldsymbol R \\boldsymbol v \\ \\ (5) "
},
{
"math_id": 19,
"text": " \\hat{P}_{Capon}(\\theta)=\\frac{1}{\\boldsymbol v^H \\boldsymbol R^{-1} \\boldsymbol v} \\ \\ (6) "
},
{
"math_id": 20,
"text": " \\boldsymbol R = \\boldsymbol U_s \\boldsymbol \\Lambda_s \\boldsymbol U_s^H + \\boldsymbol U_n \\boldsymbol \\Lambda_n \\boldsymbol U_n^H \\ \\ (7) "
},
{
"math_id": 21,
"text": " \\hat{P}_{MUSIC}(\\theta)=\\frac{1}{\\boldsymbol v^H \\boldsymbol U_n \\boldsymbol U_n^H\\boldsymbol v} \\ \\ (8) "
},
{
"math_id": 22,
"text": "L_{ML}(\\theta)=\\|\\hat{\\boldsymbol R}- \\boldsymbol R\\|_F^2 = \\|\\hat{\\boldsymbol R}-( \\boldsymbol V \\boldsymbol S \\boldsymbol V^H + \\sigma^2 \\boldsymbol I )\\|_F^2 \\ \\ (9) "
},
{
"math_id": 23,
"text": "\\| \\cdot \\|_F "
},
{
"math_id": 24,
"text": " \\boldsymbol V "
},
{
"math_id": 25,
"text": " x_{n+1} = x_n - \\frac{f(x_n)}{f'(x_n)} \\ \\ (10)"
},
{
"math_id": 26,
"text": "x_0"
}
] | https://en.wikipedia.org/wiki?curid=912925 |
9129297 | Radar signal characteristics | A radar system uses a radio-frequency electromagnetic signal reflected from a target to determine information about that target. In any radar system, the signal transmitted and received will exhibit many of the characteristics described below.
In the time domain.
The diagram below shows the characteristics of the transmitted signal in the time domain. Note that in this and in all the diagrams within this article, the x axis is exaggerated to make the explanation clearer.
Carrier.
The carrier is an RF signal, typically of microwave frequencies, which is usually (but not always) modulated to allow the system to capture the required data. In simple ranging radars, the carrier will be pulse modulated and in continuous wave systems, such as Doppler radar, modulation may not be required. Most systems use pulse modulation, with or without other supplementary modulating signals. Note that with pulse modulation, the carrier is simply switched on and off in sync with the pulses; the modulating waveform does not actually exist in the transmitted signal and the envelope of the pulse waveform is extracted from the demodulated carrier in the receiver. Although obvious when described, this point is often missed when pulse transmissions are first studied, leading to misunderstandings about the nature of the signal.
Pulse width.
The pulse width (formula_0) (or pulse duration) of the transmitted signal is the time, typically in microseconds, each pulse lasts. If the pulse is not a perfect square wave, the time is typically measured between the 50% power levels of the rising and falling edges of the pulse.
The pulse width must be long enough to ensure that the radar emits sufficient energy so that the reflected pulse is detectable by its receiver. The amount of energy that can be delivered to a distant target is the product of two things; the peak output power of the transmitter, and the duration of the transmission. Therefore, pulse width constrains the maximum detection range of a target.
Pulse width also constrains the range discrimination, that is the capacity of the radar to distinguish between two targets that are close together. At "any" range, with similar azimuth and elevation angles and as viewed by a radar with an unmodulated pulse, the range resolution is approximately equal in distance to half of the pulse duration times the speed of light (approximately 300 meters per microsecond).
Pulse width also determines the radar's dead zone at close ranges. While the radar transmitter is active, the receiver input is blanked to avoid the amplifiers being swamped (saturated) or, more likely, damaged. A simple calculation reveals that a radar echo will take approximately 10.8 μs to return from a target 1 statute mile away (counting from the leading edge of the transmitter pulse, "T"0, sometimes known as the transmitter main bang). For convenience, these figures may also be expressed as 1 nautical mile in 12.4 μs or 1 kilometre in 6.7 μs. (For simplicity, all further discussion will use metric figures.) If the radar pulse width is 1 μs, then there can be no detection of targets closer than about 150 m, because the receiver is blanked.
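The round-trip timings and the blanking distance quoted above follow from a one-line calculation; a small sketch (using the vacuum speed of light) reproduces them:
```python
c = 299_792_458.0                   # speed of light, m/s

def echo_time(range_m):
    """Two-way travel time for a target at the given range."""
    return 2.0 * range_m / c

print(echo_time(1609.344) * 1e6)    # ~10.7 us for 1 statute mile (commonly rounded to 10.8 us)
print(echo_time(1852.0) * 1e6)      # ~12.4 us for 1 nautical mile
print(echo_time(1000.0) * 1e6)      # ~6.7 us for 1 kilometre

pulse_width = 1e-6                  # 1 us pulse
print(c * pulse_width / 2.0)        # ~150 m: minimum detectable range while the receiver is blanked
```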
All this means that the designer cannot simply increase the pulse width to get greater range without having an impact on other performance factors. As with everything else in a radar system, compromises have to be made to a radar system's design to provide the optimal performance for its role.
Pulse repetition frequency (PRF).
In order to build up a discernible echo, most radar systems emit pulses continuously and the repetition rate of these pulses is determined by the role of the system. An echo from a target will therefore be 'painted' on the display or integrated within the signal processor every time a new pulse is transmitted, reinforcing the return and making detection easier. The higher the PRF that is used, the more the target is painted. However, with a higher PRF the range that the radar can "see" is reduced. Radar designers try to use the highest PRF possible commensurate with the other factors that constrain it, as described below.
There are two other facets related to PRF that the designer must weigh very carefully: the beamwidth characteristics of the antenna, and the required periodicity with which the radar must sweep the field of view. A radar with a 1° horizontal beamwidth that sweeps the entire 360° horizon every 2 seconds with a PRF of 1080 Hz will radiate 6 pulses over each 1-degree arc. If the receiver needs at least 12 reflected pulses of similar amplitudes to achieve an acceptable probability of detection, then there are three choices for the designer: double the PRF, halve the sweep speed, or double the beamwidth. In reality, all three choices are used, to varying extents; radar design is all about compromises between conflicting pressures.
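The figure of 6 pulses per 1-degree arc follows directly from the scan rate and the PRF; a small sketch of that arithmetic:
```python
prf = 1080.0                 # pulses per second
scan_period = 2.0            # seconds per full 360-degree rotation
beamwidth = 1.0              # degrees

scan_rate = 360.0 / scan_period            # degrees swept per second
time_on_target = beamwidth / scan_rate     # dwell time on a 1-degree arc, in seconds
print(prf * time_on_target)                # 6.0 pulses radiated over each 1-degree arc
```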
Staggered PRF.
Staggered PRF is a transmission process where the time between interrogations from radar changes slightly, "in a patterned and readily-discernible repeating manner." The change of repetition frequency allows the radar, on a pulse-to-pulse basis, to differentiate between returns from its own transmissions and returns from other radar systems with the same PRF and a similar radio frequency. Consider a radar with a constant interval between pulses; target reflections appear at a relatively constant range related to the flight-time of the pulse. In today's very crowded radio spectrum, there may be many other pulses detected by the receiver, either directly from the transmitter or as reflections from elsewhere. Because their apparent "distance" is defined by measuring their time relative to the last pulse transmitted by "our" radar, these "jamming" pulses could appear at any apparent distance. When the PRF of the "jamming" radar is very similar to "our" radar, those apparent distances may be very slow-changing, just like real targets. By using stagger, a radar designer can force the "jamming" to jump around erratically in apparent range, inhibiting integration and reducing or even suppressing its impact on true target detection.
Without staggered PRF, any pulses originating from another radar on the same radio frequency might appear stable in time and could be mistaken for reflections from the radar's own transmission. With staggered PRF the radar's own targets appear stable in range in relation to the transmit pulse, whilst the 'jamming' echoes may move around in apparent range (uncorrelated), causing them to be rejected by the receiver. Staggered PRF is only one of several similar techniques used for this, including jittered PRF (where the pulse timing is varied in a less-predictable manner), pulse-frequency modulation, and several other similar techniques whose principal purpose is to reduce the probability of unintentional synchronicity. These techniques are in widespread use in marine safety and navigation radars, by far the most numerous radars on planet Earth today.
Clutter.
Clutter refers to radio frequency (RF) echoes returned from targets which are uninteresting to the radar operators. Such targets include natural objects such as ground, sea, precipitation (such as rain, snow or hail), sand storms, animals (especially birds), atmospheric turbulence, and other atmospheric effects, such as ionosphere reflections, meteor trails, and three body scatter spike. Clutter may also be returned from man-made objects such as buildings and, intentionally, by radar countermeasures such as chaff.
Some clutter may also be caused by a long radar waveguide between the radar transceiver and the antenna. In a typical plan position indicator (PPI) radar with a rotating antenna, this will usually be seen as a "sun" or "sunburst" in the centre of the display as the receiver responds to echoes from dust particles and misguided RF in the waveguide. Adjusting the timing between when the transmitter sends a pulse and when the receiver stage is enabled will generally reduce the sunburst without affecting the accuracy of the range, since most sunburst is caused by a diffused transmit pulse reflected before it leaves the antenna. Clutter is considered a passive interference source, since it only appears in response to radar signals sent by the radar.
Clutter is detected and neutralized in several ways. Clutter tends to appear static between radar scans; on subsequent scan echoes, desirable targets will appear to move, and all stationary echoes can be eliminated. Sea clutter can be reduced by using horizontal polarization, while rain is reduced with circular polarization (note that meteorological radars wish for the opposite effect, and therefore use linear polarization to detect precipitation). Other methods attempt to increase the signal-to-clutter ratio.
Clutter moves with the wind or is stationary. Two common strategies to improve measurement or performance in a clutter environment are:
* Moving target indication, which integrates successive pulses and
* Doppler processing, which uses filters to separate clutter from desirable signals.
The most effective clutter reduction technique is pulse-Doppler radar with Look-down/shoot-down capability. Doppler separates clutter from aircraft and spacecraft using a frequency spectrum, so individual signals can be separated from multiple reflectors located in the same volume using velocity differences. This requires a coherent transmitter. Another technique uses a moving target indication that subtracts the receive signal from two successive pulses using phase to reduce signals from slow moving objects. This can be adapted for systems that lack a coherent transmitter, such as time-domain pulse-amplitude radar.
Constant False Alarm Rate, a form of Automatic Gain Control (AGC), is a method that relies on clutter returns far outnumbering echoes from targets of interest. The receiver's gain is automatically adjusted to maintain a constant level of overall visible clutter. While this does not help detect targets masked by stronger surrounding clutter, it does help to distinguish strong target sources. In the past, radar AGC was electronically controlled and affected the gain of the entire radar receiver. As radars evolved, AGC became computer-software controlled and affected the gain with greater granularity in specific detection cells.
Clutter may also originate from multipath echoes from valid targets caused by ground reflection, atmospheric ducting or ionospheric reflection/refraction (e.g., Anomalous propagation). This clutter type is especially bothersome since it appears to move and behave like other normal (point) targets of interest. In a typical scenario, an aircraft echo is reflected from the ground below, appearing to the receiver as an identical target below the correct one. The radar may try to unify the targets, reporting the target at an incorrect height, or eliminating it on the basis of jitter or a physical impossibility. Terrain bounce jamming exploits this response by amplifying the radar signal and directing it downward. These problems can be overcome by incorporating a ground map of the radar's surroundings and eliminating all echoes which appear to originate below ground or above a certain height. Monopulse can be improved by altering the elevation algorithm used at low elevation. In newer air traffic control radar equipment, algorithms are used to identify the false targets by comparing the current pulse returns to those adjacent, as well as calculating return improbabilities.
Sensitivity time control (STC).
STC is used to avoid saturation of the receiver from close-in ground clutter by adjusting the attenuation of the receiver as a function of distance. More attenuation is applied to close-in returns, and the attenuation is reduced as the range increases.
Unambiguous range.
In simple systems, echoes from targets must be detected and processed before the next transmitter pulse is generated if range ambiguity is to be avoided. Range ambiguity occurs when the time taken for an echo to return from a target is greater than the pulse repetition period (T); if the interval between transmitted pulses is 1000 microseconds and the return time of a pulse from a distant target is 1200 microseconds, the apparent delay of the target is only 200 microseconds. In sum, these 'second echoes' appear on the display to be targets closer than they really are.
Consider the following example: if the radar antenna is located at around 15 m above sea level, then the distance to the horizon is quite short (perhaps 15 km). Ground targets further than this range cannot be detected, so the PRF can be quite high; a radar with a PRF of 7.5 kHz will return ambiguous echoes from targets beyond about 20 km, i.e. over the horizon. If, however, the PRF were doubled to 15 kHz, then the unambiguous range would be reduced to 10 km, and targets beyond this range would only appear on the display after the transmitter has emitted another pulse. A target at 12 km would appear to be 2 km away, although the strength of the echo might be much lower than that from a genuine target at 2 km.
The maximum unambiguous range varies inversely with the PRF and is given by:
formula_1
where "c" is the speed of light. If a longer unambiguous range is required with this simple system, then lower PRFs are required and it was quite common for early search radars to have PRFs as low as a few hundred Hz, giving an unambiguous range out to well in excess of 150 km. However, lower PRFs introduce other problems, including poorer target painting and velocity ambiguity in Pulse-Doppler systems (see below).
Modern radars, especially air-to-air combat radars in military aircraft, may use PRFs in the tens-to-hundreds of kilohertz and stagger the interval between pulses to allow the correct range to be determined. With this form of staggered PRF, a "packet" of pulses is transmitted with a fixed interval between each pulse, and then another "packet" is transmitted with a slightly different interval. Target reflections appear at different ranges for each "packet"; these differences are accumulated and then simple arithmetical techniques may be applied to determine true range. Such radars may use repetitive patterns of "packets", or more adaptable "packets" that respond to apparent target behaviors. Regardless, radars that employ the technique are universally coherent, with a very stable radio frequency, and the pulse "packets" may also be used to make measurements of the Doppler shift (a velocity-dependent modification of the apparent radio frequency), especially when the PRFs are in the hundreds-of-kilohertz range. Radars exploiting Doppler effects in this manner typically determine relative velocity first, from the Doppler effect, and then use other techniques to derive target distance.
At its most simplistic, the MUR (maximum unambiguous range) for a pulse-stagger sequence may be calculated using the TSP (total sequence period). TSP is defined as the total time it takes for the pulse pattern to repeat, which can be found by adding all the elements in the stagger sequence. The formula is derived from the speed of light and the length of the sequence:
formula_2
where c is the speed of light, usually in metres per microsecond, and TSP is the sum of all the intervals of the stagger sequence, usually in microseconds. However, in a stagger sequence some intervals may be repeated several times; when this occurs, it is more appropriate to consider TSP as the sum of all the unique intervals in the sequence.
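A sketch (Python) of the MUR calculation from the total sequence period; the four-position stagger sequence below is hypothetical and chosen purely for illustration, and the speed of light is taken as 300 m/μs as in the text.
```python
C_M_PER_US = 300.0  # speed of light, metres per microsecond (approximation)

def mur_km(stagger_intervals_us, unique_only=True):
    """Maximum unambiguous range, in km, from the total sequence period (TSP)."""
    intervals = set(stagger_intervals_us) if unique_only else list(stagger_intervals_us)
    tsp_us = sum(intervals)
    return C_M_PER_US * 0.5 * tsp_us / 1000.0

# Hypothetical stagger sequence, in microseconds, for illustration only.
sequence = [990.0, 1000.0, 1010.0, 1000.0]
print(mur_km(sequence))                     # TSP from unique intervals only
print(mur_km(sequence, unique_only=False))  # TSP from the full sequence
```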
Also, it is worth remembering that there may be vast differences between the MUR and the maximum range (the range beyond which reflections will probably be too weak to be detected), and that the maximum "instrumented" range may be "much" shorter than either of these. A civil marine radar, for instance, may have user-selectable maximum "instrumented" display ranges of 72, 96 or, rarely, 120 nautical miles, in accordance with international law, but maximum unambiguous ranges of over 40,000 nautical miles and maximum detection ranges of perhaps 150 nautical miles. Such huge disparities make it clear that the primary purpose of staggered PRF is to reduce "jamming", rather than to increase unambiguous range capabilities.
In the frequency domain.
A pure CW radar appears as a single line on a spectrum analyser display, and when it is modulated with other sinusoidal signals the spectrum differs little from that obtained with standard analogue modulation schemes used in communications systems, such as frequency modulation: it consists of the carrier plus a relatively small number of sidebands. When the radar signal is modulated with a pulse train as shown above, the spectrum becomes much more complicated and far more difficult to visualise.
Basic Fourier analysis shows that any repetitive complex signal consists of a number of harmonically related sine waves. The radar pulse train is a form of square wave, the pure form of which consists of the fundamental plus all of the odd harmonics. The exact composition of the pulse train will depend on the pulse width and PRF, but mathematical analysis can be used to calculate all of the frequencies in the spectrum. When the pulse train is used to modulate a radar carrier, the typical spectrum shown on the left will be obtained.
Examination of this spectral response shows that it contains two basic structures: the coarse structure (the peaks or 'lobes' in the diagram on the left) and the fine structure, which contains the individual frequency components as shown below. The envelope of the lobes in the coarse structure is given by: formula_3.
Note that the pulse width (formula_4) determines the lobe spacing. Smaller pulse widths result in wider lobes and therefore greater bandwidth.
Examination of the spectral response in finer detail, as shown on the right, shows that the fine structure contains individual lines or spot frequencies. The formula for the fine structure is given by formula_5, and since the period of the PRF (T) appears in the denominator of the fine-spectrum equation, the line spacing equals the PRF; higher PRFs therefore spread the lines further apart, leaving fewer of them within each lobe. These facts affect the decisions made by radar designers when considering the trade-offs that need to be made when trying to overcome the ambiguities that affect radar signals.
Pulse profiling.
If the rise and fall times of the modulation pulses are zero, (e.g. the pulse edges are infinitely sharp), then the sidebands will be as shown in the spectral diagrams above. The bandwidth consumed by this transmission can be huge and the total power transmitted is distributed over many hundreds of spectral lines. This is a potential source of interference with any other device and frequency-dependent imperfections in the transmit chain mean that some of this power never arrives at the antenna. In reality of course, it is impossible to achieve such sharp edges, so in practical systems the sidebands contain far fewer lines than a perfect system. If the bandwidth can be limited to include relatively few sidebands, by rolling off the pulse edges intentionally, an efficient system can be realised with the minimum of potential for interference with nearby equipment. However, the trade-off of this is that slow edges make range resolution poor. Early radars limited the bandwidth through filtration in the transmit chain, e.g. the waveguide, scanner etc., but performance could be sporadic with unwanted signals breaking through at remote frequencies and the edges of the recovered pulse being indeterminate. Further examination of the basic Radar Spectrum shown above shows that the information in the various lobes of the Coarse Spectrum is identical to that contained in the main lobe, so limiting the transmit and receive bandwidth to that extent provides significant benefits in terms of efficiency and noise reduction.
Recent advances in signal processing techniques have made the use of pulse profiling or shaping more common. By shaping the pulse envelope before it is applied to the transmitting device, say to a cosine law or a trapezoid, the bandwidth can be limited at source, with less reliance on filtering. When this technique is combined with pulse compression, then a good compromise between efficiency, performance and range resolution can be realised. The diagram on the left shows the effect on the spectrum if a trapezoid pulse profile is adopted. It can be seen that the energy in the sidebands is significantly reduced compared to the main lobe and the amplitude of the main lobe is increased.
Similarly, the use of a cosine pulse profile has an even more marked effect, with the amplitude of the sidelobes practically becoming negligible. The main lobe is again increased in amplitude and the sidelobes correspondingly reduced, giving a significant improvement in performance.
There are many other profiles that can be adopted to optimise the performance of the system, but cosine and trapezoid profiles generally provide a good compromise between efficiency and resolution and so tend to be used most frequently.
Unambiguous velocity.
This is an issue only with a particular type of system: the pulse-Doppler radar, which uses the Doppler effect to resolve velocity from the apparent change in frequency caused by targets that have net radial velocities compared to the radar device. Examination of the spectrum generated by a pulsed transmitter, shown above, reveals that each of the sidebands (both coarse and fine) will be subject to the Doppler effect, another good reason to limit bandwidth and spectral complexity by pulse profiling.
Consider the positive shift caused by the closing target in the diagram which has been highly simplified for clarity.
It can be seen that as the relative velocity increases, a point will be reached where the spectral lines that constitute the echoes are hidden or aliased by the next sideband of the modulated carrier.
Transmission of multiple pulse-packets with different PRF-values, e.g. staggered PRFs, will resolve this ambiguity, since each new PRF value will result in a new sideband position, revealing the velocity to the receiver. The maximum unambiguous target velocity is given by:
formula_6
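A sketch (Python) of the maximum-unambiguous-velocity relation above. The 3 GHz carrier is taken from the 'Typical system parameters' example below; the PRF values are assumed purely for illustration.
```python
C = 3.0e8  # speed of light, m/s (approximation)

def max_unambiguous_velocity(prf_hz, carrier_hz):
    """One-sided maximum unambiguous radial velocity, in m/s."""
    return C * prf_hz / (4.0 * carrier_hz)

print(max_unambiguous_velocity(1.0e3, 3.0e9))  # +/- 25 m/s at a 1 kHz PRF
print(max_unambiguous_velocity(1.0e4, 3.0e9))  # +/- 250 m/s at a 10 kHz PRF
```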
Typical system parameters.
Taking all of the above characteristics into account means that certain constraints are placed on the radar designer. For example, a system with a 3 GHz carrier frequency and a pulse width of 1 μs will have a carrier period of approximately 333 ps. Each transmitted pulse will contain about 3000 carrier cycles and the velocity and range ambiguity values for such a system would be:
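The carrier-period and cycles-per-pulse figures quoted above can be checked directly with the short sketch below (Python); the ambiguity values themselves also depend on a PRF, which the example does not specify, so none is assumed here.
```python
carrier_hz = 3.0e9      # 3 GHz carrier frequency
pulse_width_s = 1.0e-6  # 1 us pulse width

carrier_period_ps = 1.0 / carrier_hz * 1e12
cycles_per_pulse = carrier_hz * pulse_width_s

print(carrier_period_ps)  # ~333 ps, as stated above
print(cycles_per_pulse)   # 3000 carrier cycles per pulse
```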
References.
| [
{
"math_id": 0,
"text": "\\tau"
},
{
"math_id": 1,
"text": "\\text{Range}_{\\text{max unambiguous}} = \\left( \\frac{c}{2 \\,PRF} \\right) "
},
{
"math_id": 2,
"text": "MUR = \\left( c * 0.5 * TSP \\right) "
},
{
"math_id": 3,
"text": " \\frac{1}{\\pi\\,f}"
},
{
"math_id": 4,
"text": "\\,\\tau"
},
{
"math_id": 5,
"text": " \\frac{N}{T}"
},
{
"math_id": 6,
"text": "\\pm \\frac{c\\,PRF}{4\\,f}"
}
] | https://en.wikipedia.org/wiki?curid=9129297 |
912982 | Euclidean quantum gravity | Approach to quantum gravity utilizing Wick rotations
In theoretical physics, Euclidean quantum gravity is a version of quantum gravity. It seeks to use the Wick rotation to describe the force of gravity according to the principles of quantum mechanics.
Introduction in layperson's terms.
The Wick rotation.
In physics, a Wick rotation, named after Gian-Carlo Wick, is a method of finding a solution to dynamics problems in formula_0 dimensions by transposing their descriptions into formula_1 dimensions, trading one dimension of space for one dimension of time. More precisely, it transforms a mathematical problem in Minkowski space into a related problem in Euclidean space by means of a transformation that substitutes an imaginary-number variable for a real-number variable.
It is called a "rotation" because when complex numbers are represented as a plane, the multiplication of a complex number by formula_2 is equivalent to rotating the vector representing that number by an angle of formula_3 radians about the origin.
For example, a Wick rotation could be used to relate a macroscopic event temperature diffusion (like in a bath) to the underlying thermal movements of molecules. If we attempt to model the bath volume with the different gradients of temperature we would have to subdivide this volume into infinitesimal volumes and see how they interact. We know such infinitesimal volumes are in fact water molecules. If we represent all molecules in the bath by only one molecule in an attempt to simplify the problem, this unique molecule should walk along all possible paths that the real molecules might follow. The path integral formulation is the conceptual tool used to describe the movements of this unique molecule, and Wick rotation is one of the mathematical tools that are very useful to analyse a path integral problem.
Application in quantum mechanics.
In a somewhat similar manner, the motion of a quantum object as described by quantum mechanics implies that it can exist simultaneously in different positions and have different speeds. It differs clearly from the movement of a classical object (e.g. a billiard ball), since in that case a single path with precise position and speed can be described. A quantum object does not move from A to B with a single path, but moves from A to B by all ways possible at the same time. According to the Feynman path-integral formulation of quantum mechanics, the path of the quantum object is described mathematically as a weighted average of all those possible paths. In 1966 an explicitly gauge-invariant functional-integral algorithm was found by DeWitt, which extended Feynman's new rules to all orders. What is appealing in this new approach is that it is free of the singularities that are unavoidable in general relativity.
Another operational problem with general relativity is the computational difficulty, because of the complexity of the mathematical tools used. Path integrals, in contrast, have been used in mechanics since the end of the nineteenth century and are well known. In addition, the path-integral formalism is used in both classical and quantum physics, so it might be a good starting point for unifying general relativity and quantum theories. For example, the quantum-mechanical Schrödinger equation and the classical heat equation are related by Wick rotation, so the Wick rotation is a good tool for relating a classical phenomenon to a quantum phenomenon. The ambition of Euclidean quantum gravity is to use the Wick rotation to find connections between a macroscopic phenomenon, gravity, and something more microscopic.
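As a concrete illustration of the correspondence just mentioned, a minimal sketch in standard notation: substituting t = −iτ (a Wick rotation) turns the free-particle Schrödinger equation into a heat-type diffusion equation.
```latex
% Free-particle Schrodinger equation, and the diffusion equation obtained with t = -i\tau
i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\,\partial_x^2 \psi
\qquad\longrightarrow\qquad
\partial_\tau \psi = \frac{\hbar}{2m}\,\partial_x^2 \psi .
```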
More rigorous treatment.
Euclidean quantum gravity refers to a Wick rotated version of quantum gravity, formulated as a quantum field theory. The manifolds that are used in this formulation are 4-dimensional Riemannian manifolds instead of pseudo Riemannian manifolds. It is also assumed that the manifolds are compact, connected and boundaryless (i.e. no singularities). Following the usual quantum field-theoretic formulation, the vacuum to vacuum amplitude is written as a functional integral over the metric tensor, which is now the quantum field under consideration.
formula_4
where φ denotes all the matter fields. See Einstein–Hilbert action.
Relation to ADM formalism.
Euclidean quantum gravity relates back to the ADM formalism used in canonical quantum gravity and recovers the Wheeler–DeWitt equation under various circumstances. If we have some matter field formula_5, then the path integral reads
formula_6
where integration over formula_7 includes an integration over the three-metric, the lapse function formula_8, and shift vector formula_9. But we demand that formula_10 be independent of the lapse function and shift vector at the boundaries, so we obtain
formula_11
where formula_12 is the three-dimensional boundary. Observe that the vanishing of this expression implies that the functional derivative vanishes, giving us the Wheeler–DeWitt equation. A similar statement may be made for the diffeomorphism constraint (take the functional derivative with respect to the shift functions instead). | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n + 1"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "\\pi/2"
},
{
"math_id": 4,
"text": "\\int \\mathcal{D}\\mathbf{g}\\, \\mathcal{D}\\phi\\, \\exp\\left(\\int d^4x \\sqrt{|\\mathbf{g}|}(R+\\mathcal{L}_\\mathrm{matter})\\right)"
},
{
"math_id": 5,
"text": "\\phi"
},
{
"math_id": 6,
"text": "Z = \\int \\mathcal{D}\\mathbf{g}\\, \\mathcal{D}\\phi\\, \\exp\\left(\\int d^4x \\sqrt{|\\mathbf{g}|}(R+\\mathcal{L}_\\mathrm{matter})\\right)"
},
{
"math_id": 7,
"text": "\\mathcal{D}\\mathbf{g}"
},
{
"math_id": 8,
"text": "N"
},
{
"math_id": 9,
"text": "N^{a}"
},
{
"math_id": 10,
"text": "Z"
},
{
"math_id": 11,
"text": "\\frac{\\delta Z}{\\delta N}=0=\\int \\mathcal{D}\\mathbf{g}\\, \\mathcal{D}\\phi\\, \\left.\\frac{\\delta S}{\\delta N}\\right|_{\\Sigma} \\exp\\left(\\int d^4x \\sqrt{|\\mathbf{g}|}(R+\\mathcal{L}_\\mathrm{matter})\\right)"
},
{
"math_id": 12,
"text": "\\Sigma"
}
] | https://en.wikipedia.org/wiki?curid=912982 |
9130703 | Channel-state duality | In quantum information theory, the channel-state duality refers to the correspondence between quantum channels and quantum states (described by density matrices). Phrased differently, the duality is the isomorphism between completely positive maps (channels) from "A" to C"n"×"n", where "A" is a C*-algebra and C"n"×"n" denotes the "n"×"n" complex entries, and positive linear functionals (states) on the tensor product
formula_0
Details.
Let "H"1 and "H"2 be (finite-dimensional) Hilbert spaces. The family of linear operators acting on "Hi" will be denoted by "L"("Hi"). Consider two quantum systems, indexed by 1 and 2, whose states are density matrices in "L"("Hi") respectively. A quantum channel, in the Schrödinger picture, is a completely positive (CP for short), trace-preserving linear map
formula_1
that takes a state of system 1 to a state of system 2. Next, we describe the dual state corresponding to Φ.
Let "Ei j" denote the matrix unit whose "ij"-th entry is 1 and zero elsewhere. The (operator) matrix
formula_2
is called the "Choi matrix" of Φ. By Choi's theorem on completely positive maps, Φ is CP if and only if "ρ"Φ is positive (semidefinite). One can view "ρ"Φ as a density matrix, and therefore the state dual to Φ.
The duality between channels and states refers to the map
formula_3
a linear bijection. This map is also called Jamiołkowski isomorphism or Choi–Jamiołkowski isomorphism.
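A minimal numerical sketch (Python with NumPy, an assumed dependency) of the Choi matrix construction described above. The single-qubit depolarizing channel and the block ordering used here are chosen purely for illustration.
```python
import numpy as np

def depolarizing(rho, p=0.25):
    """Single-qubit depolarizing channel: mixes rho with the maximally mixed state."""
    return (1 - p) * rho + p * np.trace(rho) * np.eye(2) / 2

def choi_matrix(channel, dim=2):
    """Choi matrix (Phi(E_ij))_ij, assembled as a dim^2 x dim^2 block matrix."""
    blocks = [[None] * dim for _ in range(dim)]
    for i in range(dim):
        for j in range(dim):
            E_ij = np.zeros((dim, dim), dtype=complex)
            E_ij[i, j] = 1.0
            blocks[i][j] = channel(E_ij)
    return np.block(blocks)

rho_phi = choi_matrix(depolarizing)
print(np.round(np.linalg.eigvalsh(rho_phi), 6))  # all non-negative, so the map is CP (Choi's theorem)
print(np.trace(rho_phi).real)                    # equals dim for a trace-preserving channel; divide by dim to view it as a state
```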
Applications.
This isomorphism is used to show that the "Prepare and Measure" Quantum Key Distribution (QKD) protocols, such as the BB84 protocol devised by C. H. Bennett and G. Brassard are equivalent to the "Entanglement-Based" QKD protocols, introduced by A. K. Ekert. More details on this can be found e.g. in the book Quantum Information Theory by M. Wilde. | [
{
"math_id": 0,
"text": "\\mathbb{C}^{n \\times n} \\otimes A."
},
{
"math_id": 1,
"text": "\\Phi : L(H_1) \\rightarrow L(H_2) "
},
{
"math_id": 2,
"text": "\\rho_{\\Phi} = (\\Phi(E_{ij}))_{ij} \\in L(H_1) \\otimes L(H_2) "
},
{
"math_id": 3,
"text": "\\Phi \\rightarrow \\rho_{\\Phi}, "
}
] | https://en.wikipedia.org/wiki?curid=9130703 |
9131931 | Z curve | The Z curve (or Z-curve) method is a bioinformatics algorithm for genome analysis. The Z-curve is a three-dimensional curve that constitutes a unique representation of a DNA sequence, i.e., for the Z-curve and the given DNA sequence each can be uniquely reconstructed from the other.
The resulting curve has a zigzag shape, hence the name Z-curve.
Background.
The Z Curve method was first created in 1994 as a way to visually map a DNA or RNA sequence. Different properties of the Z curve, such as its symmetry and periodicity can give unique information on the DNA sequence. The Z curve is generated from a series of nodes, P0, P1...PN, with the coordinates xn, yn, and zn (n=0,1,2...N, with N being the length of the DNA sequence). The Z curve is created by connecting each of the nodes sequentially.
formula_0
formula_1
formula_2
formula_3
Applications.
Information on the distribution of nucleotides in a DNA sequence can be determined from the Z curve. The four nucleotides are combined into six different categories. The nucleotides are placed into each category by some defining characteristic and each category is designated a letter.
The x, y, and z components of the Z curve display the distribution of each of these categories of bases for the DNA sequence being studied. The x-component represents the distribution of purines and pyrimidine bases (R/Y). The y-component shows the distribution of amino and keto bases (M/K) and the z-component shows the distribution of strong-H bond and weak-H bond bases (S/W) in the DNA sequence.
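A minimal sketch (Python) of the cumulative coordinate rules formula_0–formula_2 above; the short input sequence is made up purely for illustration.
```python
def z_curve(sequence):
    """Return the Z-curve nodes (x_n, y_n, z_n) for n = 0..N of a DNA sequence."""
    counts = {"A": 0, "C": 0, "G": 0, "T": 0}
    nodes = [(0, 0, 0)]  # P0 at the origin
    for base in sequence.upper():
        counts[base] += 1
        a, c, g, t = counts["A"], counts["C"], counts["G"], counts["T"]
        x = (a + g) - (c + t)  # purines (R) minus pyrimidines (Y)
        y = (a + c) - (g + t)  # amino (M) minus keto (K) bases
        z = (a + t) - (c + g)  # weak-H-bond (W) minus strong-H-bond (S) bases
        nodes.append((x, y, z))
    return nodes

for node in z_curve("ATGCGTA"):  # illustrative sequence only
    print(node)
```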
The Z-curve method has been used in many different areas of genome research, such as replication origin identification, "ab initio" gene prediction,
isochore identification,
genomic island identification
and comparative genomics. Analysis of the Z curve has also been shown to be able to predict whether a gene contains introns.
Research.
Experiments have shown that the Z curve can be used to identify the replication origin in various organisms. One study analyzed the Z curve for multiple species of Archaea and found that the oriC is located at a sharp peak on the curve followed by a broad base. This region was rich in AT bases and had multiple repeats, which is expected for replication origin sites. This and other similar studies were used to generate a program that could predict the origins of replication using the Z curve.
The Z curve has also been experimentally used to determine phylogenetic relationships. In one study, a novel coronavirus in China was analyzed using sequence analysis and the Z curve method to determine its phylogenetic relationship to other coronaviruses. It was determined that similarities and differences in related species can quickly be determined by visually examining their Z curves. An algorithm was created to identify the geometric center and other trends in the Z curve of 24 species of coronaviruses. The data was used to create a phylogenetic tree. The results matched the tree that was generated using sequence analysis. The Z curve method proved superior because while sequence analysis creates a phylogenetic tree based solely on coding sequences in the genome, the Z curve method analyzed the entire genome.
References.
| [
{
"math_id": 0,
"text": "x_{n} = (A_{n} + G_{n}) - (C_{n} + T_{n})\n"
},
{
"math_id": 1,
"text": "y_{n} = (A_{n} + C_{n}) - (G_{n} + T_{n})"
},
{
"math_id": 2,
"text": "z_{n} = (A_{n} + T_{n}) - (C_{n} + G_{n})"
},
{
"math_id": 3,
"text": "n = 0, 1, 2, ... N"
}
] | https://en.wikipedia.org/wiki?curid=9131931 |
9133418 | Uniformly convex space | Concept in mathematics of vector spaces
In mathematics, uniformly convex spaces (or uniformly rotund spaces) are common examples of reflexive Banach spaces. The concept of uniform convexity was first introduced by James A. Clarkson in 1936.
Definition.
A uniformly convex space is a normed vector space such that, for every formula_0 there is some formula_1 such that for any two vectors with formula_2 and formula_3 the condition
formula_4
implies that:
formula_5
Intuitively, the center of a line segment inside the unit ball must lie deep inside the unit ball unless the segment is short.
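A small numerical illustration (Python): in the Euclidean plane, which is a Hilbert space, the parallelogram law gives the explicit modulus δ(ε) = 1 − √(1 − ε²/4), so the midpoint of any two unit vectors at distance at least ε obeys the bound in the definition. The random sampling below is only a sanity check of that fact, not a proof.
```python
import math
import random

def delta(eps):
    """Modulus of convexity of a Hilbert space, from the parallelogram law."""
    return 1.0 - math.sqrt(1.0 - (eps ** 2) / 4.0)

eps = 0.5
random.seed(0)
for _ in range(10_000):
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    x = (math.cos(a), math.sin(a))  # unit vector
    y = (math.cos(b), math.sin(b))  # unit vector
    if math.hypot(x[0] - y[0], x[1] - y[1]) >= eps:
        mid_norm = math.hypot((x[0] + y[0]) / 2, (x[1] + y[1]) / 2)
        assert mid_norm <= 1.0 - delta(eps) + 1e-12
print("midpoint bound held for all sampled pairs")
```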
References.
Citations.
| [
{
"math_id": 0,
"text": "0<\\varepsilon \\leq 2"
},
{
"math_id": 1,
"text": "\\delta>0"
},
{
"math_id": 2,
"text": "\\|x\\| = 1"
},
{
"math_id": 3,
"text": "\\|y\\| = 1,"
},
{
"math_id": 4,
"text": "\\|x-y\\|\\geq\\varepsilon"
},
{
"math_id": 5,
"text": "\\left\\|\\frac{x+y}{2}\\right\\|\\leq 1-\\delta."
},
{
"math_id": 6,
"text": " X "
},
{
"math_id": 7,
"text": " 0<\\varepsilon\\le 2 "
},
{
"math_id": 8,
"text": " \\delta>0 "
},
{
"math_id": 9,
"text": " x "
},
{
"math_id": 10,
"text": " y "
},
{
"math_id": 11,
"text": " \\|x\\| \\le 1 "
},
{
"math_id": 12,
"text": " \\|y\\| \\le 1 "
},
{
"math_id": 13,
"text": " \\|x-y\\| \\ge \\varepsilon "
},
{
"math_id": 14,
"text": " \\left\\|{\\frac{x+y}{2}}\\right\\| \\le 1-\\delta "
},
{
"math_id": 15,
"text": " \\varepsilon "
},
{
"math_id": 16,
"text": " \\delta "
},
{
"math_id": 17,
"text": " \\{f_n\\}_{n=1}^{\\infty} "
},
{
"math_id": 18,
"text": " f "
},
{
"math_id": 19,
"text": " \\|f_n\\| \\to \\|f\\|,"
},
{
"math_id": 20,
"text": " f_n "
},
{
"math_id": 21,
"text": " \\|f_n - f\\| \\to 0 "
},
{
"math_id": 22,
"text": " X^* "
},
{
"math_id": 23,
"text": " \\|x+y\\| < \\|x\\|+\\|y\\|"
},
{
"math_id": 24,
"text": "x,y"
},
{
"math_id": 25,
"text": "(1<p<\\infty)"
},
{
"math_id": 26,
"text": "L^\\infty"
}
] | https://en.wikipedia.org/wiki?curid=9133418 |
9136071 | Axis–angle representation | Parameterization of a rotation into a unit vector and angle
In mathematics, the axis–angle representation parameterizes a rotation in a three-dimensional Euclidean space by two quantities: a unit vector e indicating the direction of an axis of rotation, and an angle of rotation "θ" describing the magnitude and sense (e.g., clockwise) of the rotation about the axis. Only two numbers, not three, are needed to define the direction of a unit vector e rooted at the origin because the magnitude of e is constrained. For example, the elevation and azimuth angles of e suffice to locate it in any particular Cartesian coordinate frame.
By Rodrigues' rotation formula, the angle and axis determine a transformation that rotates three-dimensional vectors. The rotation occurs in the sense prescribed by the right-hand rule.
The rotation axis is sometimes called the Euler axis. The axis–angle representation is predicated on Euler's rotation theorem, which dictates that any rotation or sequence of rotations of a rigid body in a three-dimensional space is equivalent to a pure rotation about a single fixed axis.
It is one of many rotation formalisms in three dimensions.
Rotation vector.
The axis–angle representation is equivalent to the more concise rotation vector, also called the Euler vector. In this case, both the rotation axis and the angle are represented by a vector codirectional with the rotation axis whose length is the rotation angle θ,
formula_0
It is used for the exponential and logarithm maps involving this representation.
Many rotation vectors correspond to the same rotation. In particular, a rotation vector of length "θ" + 2"πM", for any integer M, encodes exactly the same rotation as a rotation vector of length θ. Thus, there are at least a countable infinity of rotation vectors corresponding to any rotation. Furthermore, all rotations by 2"πM" are the same as no rotation at all, so, for a given integer M, all rotation vectors of length 2"πM", in all directions, constitute a two-parameter uncountable infinity of rotation vectors encoding the same rotation as the zero vector. These facts must be taken into account when inverting the exponential map, that is, when finding a rotation vector that corresponds to a given rotation matrix. The exponential map is "onto" but not "one-to-one".
Example.
Say you are standing on the ground and you pick the direction of gravity to be the negative "z" direction. Then if you turn to your left, you will rotate −π/2 radians (or -90°) about the "-z" axis. Viewing the axis-angle representation as an ordered pair, this would be
formula_1
The above example can be represented as a rotation vector with a magnitude of π/2 pointing in the "z" direction,
formula_2
Uses.
The axis–angle representation is convenient when dealing with rigid-body dynamics. It is useful both to characterize rotations and to convert between different representations of rigid body motion, such as homogeneous transformations and twists.
When a rigid body rotates around a fixed axis, its axis–angle data are a constant rotation axis and the rotation angle continuously dependent on time.
Plugging the three eigenvalues 1 and "e"±"iθ" and their associated three orthogonal axes in a Cartesian representation into Mercer's theorem is a convenient construction of the Cartesian representation of the Rotation Matrix in three dimensions.
Rotating a vector.
Rodrigues' rotation formula, named after Olinde Rodrigues, is an efficient algorithm for rotating a Euclidean vector, given a rotation axis and an angle of rotation. In other words, Rodrigues' formula provides an algorithm to compute the exponential map from formula_3 to SO(3) without computing the full matrix exponential.
If v is a vector in R3 and e is a unit vector rooted at the origin describing an axis of rotation about which v is rotated by an angle θ, Rodrigues' rotation formula to obtain the rotated vector is
formula_4
For the rotation of a single vector it may be more efficient than converting e and θ into a rotation matrix to rotate the vector.
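A minimal sketch (Python with NumPy, an assumed dependency) of Rodrigues' rotation formula exactly as written above; the axis, angle, and test vector are illustrative.
```python
import numpy as np

def rodrigues_rotate(v, axis, theta):
    """Rotate vector v by angle theta (radians) about the unit vector 'axis'."""
    e = np.asarray(axis, dtype=float)
    e = e / np.linalg.norm(e)  # make sure the axis is a unit vector
    v = np.asarray(v, dtype=float)
    return (np.cos(theta) * v
            + np.sin(theta) * np.cross(e, v)
            + (1.0 - np.cos(theta)) * np.dot(e, v) * e)

# Rotating the x axis by 90 degrees about the z axis gives the y axis.
print(np.round(rodrigues_rotate([1, 0, 0], [0, 0, 1], np.pi / 2), 6))
```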
Relationship to other representations.
There are several ways to represent a rotation. It is useful to understand how different representations relate to one another, and how to convert between them. Here the unit vector is denoted ω instead of e.
Exponential map from 𝔰𝔬(3) to SO(3).
The exponential map effects a transformation from the axis-angle representation of rotations to rotation matrices,
formula_5
Essentially, by using a Taylor expansion one derives a closed-form relation between these two representations. Given a unit vector formula_6 representing the unit rotation axis, and an angle, "θ" ∈ R, an equivalent rotation matrix R is given as follows, where K is the cross product matrix of ω, that is, Kv = ω × v for all vectors v ∈ R3,
formula_7
Because K is skew-symmetric, and the sum of the squares of its above-diagonal entries is 1, the characteristic polynomial "P"("t") of K is "P"("t") = det(K − "t"I) = −("t"3 + "t"). Since, by the Cayley–Hamilton theorem, "P"(K) = 0, this implies that
formula_8
As a result, K4 = –K2, K5 = K, K6 = K2, K7 = –K.
This cyclic pattern continues indefinitely, and so all higher powers of K can be expressed in terms of K and K2. Thus, from the above equation, it follows that
formula_9
that is,
formula_10
by the Taylor series formula for trigonometric functions.
This is a Lie-algebraic derivation, in contrast to the geometric one in the article Rodrigues' rotation formula.
Due to the existence of the above-mentioned exponential map, the unit vector ω representing the rotation axis, and the angle "θ" are sometimes called the "exponential coordinates" of the rotation matrix R.
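A sketch (Python with NumPy) of the closed form R = I + (sin θ)K + (1 − cos θ)K² derived above; the axis and angle are illustrative, and the result can be compared with the Rodrigues sketch earlier.
```python
import numpy as np

def cross_matrix(omega):
    """Skew-symmetric matrix K with K @ v equal to cross(omega, v)."""
    wx, wy, wz = omega
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def axis_angle_to_matrix(omega, theta):
    """Rotation matrix from a unit axis omega and an angle theta (radians)."""
    K = cross_matrix(omega)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

R = axis_angle_to_matrix([0.0, 0.0, 1.0], np.pi / 2)
print(np.round(R, 6))                              # 90-degree rotation about z
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # maps the x axis to the y axis
```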
Log map from SO(3) to 𝔰𝔬(3).
Let K continue to denote the 3 × 3 matrix that effects the cross product with the rotation axis ω: K(v) = ω × v for all vectors v in what follows.
To retrieve the axis–angle representation of a rotation matrix, calculate the angle of rotation from the trace of the rotation matrix:
formula_11
and then use that to find the normalized axis,
formula_12
where formula_13 is the component of the rotation matrix, formula_14, in the formula_15-th row and formula_16-th column.
The axis-angle representation is not unique since a rotation of formula_17 about formula_18 is the same as a rotation of formula_19 about formula_20.
The above calculation of the axis vector formula_21 does not work if R is symmetric. For the general case, formula_21 may be found using the null space of R − I; see rotation matrix#Determining the axis.
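A sketch (Python with NumPy) of the extraction described above, taking the angle from the trace and the axis from the skew-symmetric part; as noted, this breaks down when sin θ ≈ 0, so that case is simply reported rather than handled.
```python
import numpy as np

def matrix_to_axis_angle(R):
    """Recover (axis, angle) from a rotation matrix via the trace and skew part."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if np.isclose(np.sin(theta), 0.0):
        raise ValueError("theta near 0 or pi: find the axis from the null space of R - I (see text)")
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return axis, theta

# Round trip with a 90-degree rotation about z (illustrative).
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
print(matrix_to_axis_angle(R))  # axis (0, 0, 1) and angle pi/2
```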
The matrix logarithm of the rotation matrix R is
formula_22
An exception occurs when "R" has eigenvalues equal to −1. In this case, the log is not unique. However, even in the case where "θ" = "π" the Frobenius norm of the log is
formula_23
Given rotation matrices A and B,
formula_24
is the geodesic distance on the 3D manifold of rotation matrices.
For small rotations, the above computation of θ may be numerically imprecise as the derivative of arccos goes to infinity as "θ" → 0. In that case, the off-axis terms will actually provide better information about θ since, for small angles, "R" ≈ "I" + "θ"K. (This is because these are the first two terms of the Taylor series for exp("θ"K).)
This formulation also has numerical problems at "θ" = "π", where the off-axis terms do not give information about the rotation axis (which is still defined up to a sign ambiguity). In that case, we must reconsider the above formula.
formula_25
At "θ" = "π", we have
formula_26
and so let
formula_27
so the diagonal terms of "B" are the squares of the elements of ω and the signs (up to sign ambiguity) can be determined from the signs of the off-axis terms of B.
Unit quaternions.
The following expression transforms axis–angle coordinates to versors (unit quaternions):
formula_28
Given a versor q = "r" + v represented with its scalar r and vector v, the axis–angle coordinates can be extracted using the following:
formula_29
A more numerically stable expression of the rotation angle uses the atan2 function:
formula_30
where |v| is the Euclidean norm of the 3-vector v.
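A sketch (Python with NumPy) of the axis–angle/versor conversions above, including the numerically stable atan2 form of the angle; the test axis and angle are illustrative.
```python
import numpy as np

def axis_angle_to_versor(axis, theta):
    """Versor as (scalar r, vector v) from a unit axis and an angle in radians."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.cos(theta / 2.0), np.sin(theta / 2.0) * axis

def versor_to_axis_angle(r, v):
    """Axis and angle from a versor, using theta = 2*atan2(|v|, r)."""
    v = np.asarray(v, dtype=float)
    theta = 2.0 * np.arctan2(np.linalg.norm(v), r)
    axis = v / np.sin(theta / 2.0) if not np.isclose(theta, 0.0) else np.zeros(3)
    return axis, theta

r, v = axis_angle_to_versor([0.0, 0.0, 1.0], np.pi / 2)
print(r, v)                        # cos(pi/4) and sin(pi/4) times the z axis
print(versor_to_axis_angle(r, v))  # recovers the z axis and pi/2
```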
References.
| [
{
"math_id": 0,
"text": "\\boldsymbol{\\theta} = \\theta \\mathbf{e} \\,."
},
{
"math_id": 1,
"text": "( \\mathrm{axis}, \\mathrm{angle} ) = \\left( \\begin{bmatrix} e_x \\\\ e_y \\\\ e_z \\end{bmatrix},\\theta \\right) = \\left( \\begin{bmatrix} 0 \\\\ 0 \\\\ -1 \\end{bmatrix},\\frac{-\\pi}{2}\\right)."
},
{
"math_id": 2,
"text": "\\begin{bmatrix} 0 \\\\ 0 \\\\ \\frac{\\pi}{2} \\end{bmatrix}."
},
{
"math_id": 3,
"text": "\\mathfrak{so}(3)"
},
{
"math_id": 4,
"text": "\n\\mathbf{v}_\\mathrm{rot} = (\\cos\\theta) \\mathbf{v} + (\\sin\\theta) (\\mathbf{e} \\times \\mathbf{v})\n + (1 - \\cos\\theta) (\\mathbf{e} \\cdot \\mathbf{v}) \\mathbf{e} \\,.\n"
},
{
"math_id": 5,
"text": "\\exp\\colon \\mathfrak{so}(3) \\to \\mathrm{SO}(3) \\,."
},
{
"math_id": 6,
"text": "\\boldsymbol\\omega \\in \\mathfrak{so}(3) = \\R^3"
},
{
"math_id": 7,
"text": "R = \\exp(\\theta \\mathbf{K}) = \\sum_{k=0}^\\infty\\frac{(\\theta \\mathbf{K})^k}{k!} = I + \\theta \\mathbf{K} + \\frac{1}{2!}(\\theta \\mathbf{K})^2 + \\frac{1}{3!}(\\theta \\mathbf{K})^3 + \\cdots"
},
{
"math_id": 8,
"text": "\\mathbf{K}^3 = -\\mathbf{K} \\,."
},
{
"math_id": 9,
"text": "R = I + \\left(\\theta - \\frac{\\theta^3}{3!} + \\frac{\\theta^5}{5!} - \\cdots\\right) \\mathbf{K} + \\left(\\frac{\\theta^2}{2!} - \\frac{\\theta^4}{4!} + \\frac{\\theta^6}{6!} - \\cdots\\right) \\mathbf{K}^2 \\,, "
},
{
"math_id": 10,
"text": "R = I + (\\sin\\theta) \\mathbf{K} + (1-\\cos\\theta) \\mathbf{K}^2\\, ,"
},
{
"math_id": 11,
"text": " \\theta = \\arccos\\left( \\frac{\\operatorname{Tr}(R) - 1}{2} \\right) "
},
{
"math_id": 12,
"text": " \\boldsymbol{\\omega} = \\frac{1}{2 \\sin \\theta} \\begin{bmatrix} R_{32}-R_{23} \\\\ R_{13}-R_{31} \\\\ R_{21}-R_{12} \\end{bmatrix} ~,"
},
{
"math_id": 13,
"text": "R_{ij}"
},
{
"math_id": 14,
"text": "R"
},
{
"math_id": 15,
"text": "i"
},
{
"math_id": 16,
"text": "j"
},
{
"math_id": 17,
"text": "-\\theta"
},
{
"math_id": 18,
"text": "-\\boldsymbol{\\omega} "
},
{
"math_id": 19,
"text": "\\theta"
},
{
"math_id": 20,
"text": "\\boldsymbol{\\omega} "
},
{
"math_id": 21,
"text": "\\omega"
},
{
"math_id": 22,
"text": " \\log R = \\begin{cases}\n0 & \\text{if } \\theta = 0 \\\\\n\\dfrac{\\theta}{2 \\sin \\theta } \\left(R - R^\\mathsf{T}\\right) & \\text{if } \\theta \\ne 0 \\text{ and } \\theta \\in (-\\pi, \\pi)\n\\end{cases}"
},
{
"math_id": 23,
"text": " \\| \\log(R) \\|_\\mathrm{F} = \\sqrt{2} |\\theta |\\,."
},
{
"math_id": 24,
"text": " d_g(A,B) := \\left\\| \\log\\left(A^\\mathsf{T} B\\right)\\right\\|_\\mathrm{F} "
},
{
"math_id": 25,
"text": "R = I + \\mathbf{K} \\sin\\theta + \\mathbf{K}^2 (1-\\cos\\theta)"
},
{
"math_id": 26,
"text": "R = I + 2 \\mathbf{K}^2 = I + 2(\\boldsymbol{\\omega} \\otimes \\boldsymbol{\\omega} - I) = 2 \\boldsymbol{\\omega} \\otimes \\boldsymbol{\\omega} - I"
},
{
"math_id": 27,
"text": "B := \\boldsymbol{\\omega} \\otimes \\boldsymbol{\\omega} = \\frac{1}{2}(R+I) \\,,"
},
{
"math_id": 28,
"text": "\\mathbf q = \\left(\\cos\\tfrac{\\theta}{2}, \\boldsymbol{\\omega} \\sin\\tfrac{\\theta}{2}\\right)"
},
{
"math_id": 29,
"text": "\\begin{align}\n\\theta &= 2\\arccos r \\\\[8px]\n\\boldsymbol{\\omega} &=\n\\begin{cases}\n\\dfrac{\\mathbf{v}}{ \\sin \\tfrac{\\theta}{2} }, & \\text{if } \\theta \\neq 0 \\\\\n0, & \\text{otherwise}.\n\\end{cases}\n\\end{align}"
},
{
"math_id": 30,
"text": "\\theta = 2 \\operatorname{atan2}(|\\mathbf{v}|,r)\\,,"
}
] | https://en.wikipedia.org/wiki?curid=9136071 |
913620 | Touchdown polymerase chain reaction | Type of PCR
The touchdown polymerase chain reaction or touchdown style polymerase chain reaction is a method of polymerase chain reaction by which primers avoid amplifying nonspecific sequences. The annealing temperature during a polymerase chain reaction determines the specificity of primer annealing. The melting point of the primer sets the upper limit on annealing temperature. At temperatures just above this point, only very specific base pairing between the primer and the template will occur. At lower temperatures, the primers bind less specifically. Nonspecific primer binding obscures polymerase chain reaction results, as the nonspecific sequences to which primers anneal in early steps of amplification will "swamp out" any specific sequences because of the exponential nature of polymerase amplification.
Method.
The earliest steps of a touchdown polymerase chain reaction cycle have high annealing temperatures. The annealing temperature is decreased in increments for every subsequent set of cycles. The primer will anneal at the highest temperature which is least-permissive of nonspecific binding that it is able to tolerate. Thus, the first sequence amplified is the one between the regions of greatest primer specificity; it is most likely that this is the sequence of interest. These fragments will be further amplified during subsequent rounds at lower temperatures, and will outcompete the nonspecific sequences to which the primers may bind at those lower temperatures. If the primer initially (during the higher-temperature phases) binds to the sequence of interest, subsequent rounds of polymerase chain reaction can be performed upon the product to further amplify those fragments. Touchdown increases specificity of the reaction at higher temperatures and increases the efficiency towards the end by lowering the annealing temperature.
From a mathematical point of view, a product whose first annealing occurs in cycle formula_1 (at a lower temperature) is disadvantaged by a factor of formula_0 relative to a product whose first annealing occurs in an earlier cycle formula_2, for formula_3.
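A small sketch (Python) of the exponential disadvantage just described: if a specific product is first primed in cycle j but a nonspecific product only begins amplifying in a later, lower-temperature cycle i, the latter lags by a factor of 2^(i−j) thereafter. The cycle numbers below are illustrative.
```python
def disadvantage_factor(i, j):
    """Copy-number deficit of a product first primed in cycle i relative to cycle j (i >= j)."""
    if i < j:
        raise ValueError("requires i >= j")
    return 2 ** (i - j)

print(disadvantage_factor(10, 5))  # starting 5 cycles later -> 32-fold disadvantage
```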
References.
| [
{
"math_id": 0,
"text": " 2^{i-j} "
},
{
"math_id": 1,
"text": " i "
},
{
"math_id": 2,
"text": " j "
},
{
"math_id": 3,
"text": " i,j\\in \\mathbb N , i\\geq j "
}
] | https://en.wikipedia.org/wiki?curid=913620 |
91388 | Inverse gambler's fallacy | Formal fallacy of Bayesian inference
The inverse gambler's fallacy, named by philosopher Ian Hacking, is a formal fallacy of Bayesian inference which is an inverse of the better known gambler's fallacy. It is the fallacy of concluding, on the basis of an unlikely outcome of a random process, that the process is likely to have occurred many times before. For example, if one observes a pair of fair dice being rolled and turning up double sixes, it is wrong to suppose that this lends any support to the hypothesis that the dice have been rolled many times before. We can see this from the Bayesian update rule: letting "U" denote the unlikely outcome of the random process and "M" the proposition that the process has occurred many times before, we have
formula_0
and since "P"("U"|"M") = "P"("U") (the outcome of the process is unaffected by previous occurrences), it follows that "P"("M"|"U") = "P"("M"); that is, our confidence in "M" should be unchanged when we learn "U".
Real-world examples.
The inverse gambler's fallacy is unquestionably a fallacy, but there is disagreement over whether and where it has been committed in practice. In his original paper, Hacking takes as his main example a certain response to the argument from design. The argument from design asserts, first, that the universe is fine tuned to support life, and second, that this fine tuning points to the existence of an intelligent designer. The rebuttal attacked by Hacking consists of accepting the first premise, but rejecting the second on the grounds that our (big bang) universe is just one in a long "sequence" of universes, and that the fine tuning merely shows that there have been many other (poorly tuned) universes preceding this one. Hacking draws a sharp distinction between this argument and the argument that all possible worlds coexist in some non-temporal sense. He proposes that these arguments, often treated as minor variations of one another, should be considered fundamentally different because one is formally invalid while the other is not.
A rebuttal paper by John Leslie points out a difference between the observation of double sixes and the observation of fine tuning, namely that the former is not necessary (the roll could have come out different) while the latter is necessary (our universe must support life, which means "ex hypothesi" that we must see fine tuning). He suggests the following analogy: instead of being summoned into a room to observe a particular roll of the dice, we are told that we will be summoned into the room immediately after a roll of double sixes. In this situation it may be quite reasonable, upon being summoned, to conclude with high confidence that we are not seeing the first roll. In particular, if we know that the dice are fair and that the rolling would not have been stopped before double sixes turned up, then the probability that we are seeing the first roll is at most 1/36. However, the probability will be 1 if the roller has control over the outcome using omnipotence and omniscience which believers attribute to the creator. But if the roller doesn't have such powers, the probability may even be less than 1/36 because we have not assumed that the roller is obliged to summon us the first time double sixes come up.
In 2009, Daniel M. Oppenheimer and Benoît Monin published empirical evidence for the Inverse gambler's fallacy (they called it the retrospective gambler's fallacy). They found that people believe a longer sequence of random events had happened (e.g., coin toss, die roll) before an event perceived to be unrepresentative of the randomness of the generation process (a streak of heads or tails, double-six) than representative events. This fallacy extends to more real-life events such as getting pregnant, getting a hole in one, etc. | [
{
"math_id": 0,
"text": "P(M|U) = P(M) \\frac{P(U|M)}{P(U)}"
}
] | https://en.wikipedia.org/wiki?curid=91388 |
913945 | Hyperfocal distance | Distance beyond which all objects can be brought into an acceptable focus
In optics and photography, hyperfocal distance is a distance from a lens beyond which all objects can be brought into an "acceptable" focus. As the hyperfocal distance is the focus distance giving the maximum depth of field, it is the most desirable distance to set the focus of a fixed-focus camera. The hyperfocal distance is entirely dependent upon what level of sharpness is considered to be acceptable.
The hyperfocal distance has a property called "consecutive depths of field", where a lens focused at an object whose distance from the lens is at the hyperfocal distance H will hold a depth of field from "H"/2 to infinity, if the lens is focused to "H"/2, the depth of field will be from "H"/3 to H; if the lens is then focused to "H"/3, the depth of field will be from "H"/4 to "H"/2, etc.
Thomas Sutton and George Dawson first wrote about hyperfocal distance (or "focal range") in 1867. Louis Derr in 1906 may have been the first to derive a formula for hyperfocal distance. Rudolf Kingslake wrote in 1951 about the two methods of measuring hyperfocal distance.
Some cameras have their hyperfocal distance marked on the focus dial. For example, on the Minox LX focusing dial there is a red dot between and infinity; when the lens is set at the red dot, that is, focused at the hyperfocal distance, the depth of field stretches from to infinity. Some lenses have markings indicating the hyperfocal range for specific f-stops, also called a "depth-of-field scale".
Two methods.
There are two common methods of defining and measuring "hyperfocal distance", leading to values that differ only slightly. The distinction between the two meanings is rarely made, since they have almost identical values. The value computed according to the first definition exceeds that from the second by just one focal length.
The hyperfocal distance is the closest distance at which a lens can be focused while keeping objects at infinity acceptably sharp. When the lens is focused at this distance, all objects at distances from half of the hyperfocal distance out to infinity will be acceptably sharp.
The hyperfocal distance is the distance beyond which all objects are acceptably sharp, for a lens focused at infinity.
Acceptable sharpness.
The hyperfocal distance is entirely dependent upon what level of sharpness is considered to be acceptable. The criterion for the desired acceptable sharpness is specified through the circle of confusion (CoC) diameter limit. This criterion is the largest acceptable spot size diameter that an infinitesimal point is allowed to spread out to on the imaging medium (film, digital sensor, etc.).
Formulae.
For the first definition,
formula_0
where f is the focal length, N is the f-number, and c is the circle of confusion limit for a given image format.
For any practical f-number, the added focal length is insignificant in comparison with the first term, so that
formula_1
This formula is exact for the second definition, if H is measured from a thin lens, or from the front principal plane of a complex lens; it is also exact for the first definition if H is measured from a point that is one focal length in front of the front principal plane. For practical purposes, there is little difference between the first and second definitions.
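A sketch (Python) of the two values discussed above, using the standard forms H = f²/(N·c) + f for the first definition and H = f²/(N·c) for the second (consistent with the one-focal-length difference stated earlier). The 50 mm focal length, f/8 aperture, and 0.03 mm circle of confusion are assumed here purely for illustration.
```python
def hyperfocal_def1(f_mm, n, coc_mm):
    """Hyperfocal distance (first definition), in mm: f^2/(N*c) + f."""
    return f_mm ** 2 / (n * coc_mm) + f_mm

def hyperfocal_def2(f_mm, n, coc_mm):
    """Hyperfocal distance (second definition), in mm: f^2/(N*c)."""
    return f_mm ** 2 / (n * coc_mm)

h1 = hyperfocal_def1(50.0, 8.0, 0.03)  # assumed illustrative values
h2 = hyperfocal_def2(50.0, 8.0, 0.03)
print(h1 / 1000.0, "m")                        # ~10.47 m
print(h2 / 1000.0, "m")                        # ~10.42 m
print(100.0 * (h1 - h2) / h2, "% difference")  # ~0.5 %, i.e. one focal length
```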
Derivation using geometric optics.
The following derivations refer to the accompanying figures. For clarity, half the aperture and circle of confusion are indicated.
Definition 1.
An object at distance H forms a sharp image at distance x (blue line). Here, objects at infinity have images with a circle of confusion indicated by the brown ellipse where the upper red ray through the focal point intersects the blue line.
First using similar triangles hatched in green,
formula_2
Then using similar triangles dotted in purple,
formula_3
as found above.
Definition 2.
Objects at infinity form sharp images at the focal length f (blue line). Here, an object at H forms an image with a circle of confusion indicated by the brown ellipse where the lower red ray converging to its sharp image intersects the blue line.
Using similar triangles shaded in yellow,
formula_4
Example.
(Figure caption) Depths of field of three ideal lenses of focal lengths "f"1, "f"2 and "f"3, and f-numbers "N"1, "N"2, and "N"3, when focused at objects at different distances. "H"1, "H"2, and "H"3 denote their respective hyperfocal distances (using "Definition 1" above) for a given circle of confusion diameter. The darker bars show that, for a fixed subject distance, the depth of field is increased by using a shorter focal length or a smaller aperture. The second topmost bar of each set illustrates the configuration for a fixed-focus camera with the focus permanently set at the hyperfocal distance to maximise the depth of field.
As an example, for a lens at f/8 using a circle of confusion of , which is a value typically used in photography, the hyperfocal distance according to "Definition 1" is
formula_5
If the lens is focused at a distance of , then everything from half that distance () to infinity will be acceptably sharp in our photograph. With the formula for the "Definition 2", the result is , a difference of 0.5%.
Consecutive depths of field.
The hyperfocal distance has a curious property: while a lens focused at H will hold a depth of field from "H"/2 to infinity, if the lens is focused to "H"/2, the depth of field will extend from "H"/3 to H; if the lens is then focused to "H"/3, the depth of field will extend from "H"/4 to "H"/2. This continues through all the successive focus distances "H"/"n", which form a harmonic sequence (1/"x") of fractions of the hyperfocal distance. That is, focusing at "H"/"n" will cause the depth of field to extend from "H"/("n" + 1) to "H"/("n" − 1).
C. Welborne Piper calls this phenomenon "consecutive depths of field" and shows how to test the idea easily. This is also among the earliest publications to use the word "hyperfocal".
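A numerical sketch (Python) of the consecutive-depths-of-field property, using the common approximations D_near ≈ sH/(H + s) and D_far ≈ sH/(H − s) for the near and far limits when focused at a distance s below H; these approximations are an assumption of the sketch, not taken from the text above.
```python
H = 10.0  # hyperfocal distance in metres (illustrative)

def dof_limits(s, hyperfocal):
    """Approximate near and far depth-of-field limits when focused at distance s."""
    return s * hyperfocal / (hyperfocal + s), s * hyperfocal / (hyperfocal - s)

for n in range(2, 6):
    near, far = dof_limits(H / n, H)
    # Prints H/(n+1) ... H/(n-1); e.g. focusing at H/2 gives a zone from H/3 to H.
    print(f"focused at H/{n}: depth of field from H/{H / near:.0f} to H/{H / far:.0f}")
```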
History.
The concepts of the two definitions of hyperfocal distance have a long history, tied up with the terminology for depth of field, depth of focus, circle of confusion, etc. Here are some selected early quotations and interpretations on the topic.
Sutton and Dawson 1867.
Thomas Sutton and George Dawson define "focal range" for what we now call "hyperfocal distance":
Focal Range. In every lens there is, corresponding to a given apertal ratio (that is, the ratio of the diameter of the stop to the focal length), a certain distance of a near object from it, between which and infinity all objects are in equally good focus. For instance, in a single view lens of 6 inch focus, with a 1/4 in. stop (apertal ratio one-twenty-fourth), all objects situated at distances lying between 20 feet from the lens and an infinite distance from it (a fixed star, for instance) are in equally good focus. Twenty feet is therefore called the "focal range" of the lens when this stop is used. The focal range is consequently the distance of the nearest object, which will be in good focus when the ground glass is adjusted for an extremely distant object. In the same lens, the focal range will depend upon the size of the diaphragm used, while in different lenses having the same apertal ratio the focal ranges will be greater as the focal length of the lens is increased.
The terms 'apertal ratio' and 'focal range' have not come into general use, but it is very desirable that they should, in order to prevent ambiguity and circumlocution when treating of the properties of photographic lenses. 'Focal range' is a good term, because it expresses the range within which it is necessary to adjust the focus of the lens to objects at different distances from it – in other words, the range within which focusing becomes necessary.
Their focal range is about 1000 times their aperture diameter, so it makes sense as a hyperfocal distance with CoC value of f/1000, or image format diagonal times 1/1000 assuming the lens is a "normal" lens. What is not clear, however, is whether the focal range they cite was computed, or empirical.
Abney 1881.
Sir William de Wivelesley Abney says:
The annexed formula will approximately give the nearest point p which will appear in focus when the distance is accurately focussed, supposing the admissible disc of confusion to be :
formula_6
when
That is, a is the reciprocal of what we now call the f-number, and the answer is evidently in meters. His 0.41 should obviously be 0.40. Based on his formulae, and on the notion that the "aperture ratio" should be kept fixed in comparisons across formats, Abney says:
It can be shown that an enlargement from a small negative is better than a picture of the same size taken direct as regards sharpness of detail. ... Care must be taken to distinguish between the advantages to be gained in enlargement by the use of a smaller lens, with the disadvantages that ensue from the deterioration in the relative values of light and shade.
Taylor 1892.
John Traill Taylor recalls this word formula for a sort of hyperfocal distance:
We have seen it laid down as an approximative rule by some writers on optics (Thomas Sutton, if we remember aright), that if the diameter of the stop be a fortieth part of the focus of the lens, the depth of focus will range between infinity and a distance equal to four times as many feet as there are inches in the focus of the lens.
This formula implies a stricter CoC criterion than we typically use today.
Hodges 1895.
John Hodges discusses depth of field without formulas but with some of these relationships:
There is a point, however, beyond which everything will be in pictorially good definition, but the longer the focus of the lens used, the further will the point beyond which everything is in sharp focus be removed from the camera. Mathematically speaking, the amount of depth possessed by a lens varies inversely as the square of its focus.
This "mathematically" observed relationship implies that he had a formula at hand, and a parameterization with the f-number or "intensity ratio" in it. To get an inverse-square relation to focal length, you have to assume that the CoC limit is fixed and the aperture diameter scales with the focal length, giving a constant f-number.
Piper 1901.
C. Welborne Piper may be the first to have published a clear distinction between "Depth of Field" in the modern sense and "Depth of Definition" in the focal plane, and implies that "Depth of Focus" and "Depth of Distance" are sometimes used for the former (in modern usage, "Depth of Focus" is usually reserved for the latter). He uses the term "Depth Constant" for H, and measures it from the front principal focus (i. e., he counts one focal length less than the distance from the lens to get the simpler formula), and even introduces the modern term:
It is unclear what distinction he means. Adjacent to Table I in his appendix, he further notes:
If we focus on infinity, the constant is the focal distance of the nearest object in focus. If we focus on an extra-focal distance equal to the constant, we obtain a maximum depth of field from approximately half the constant distance up to infinity. The constant is then the hyper-focal distance.
At this point we do not have evidence of the term "hyperfocal" before Piper, nor the hyphenated "hyper-focal" which he also used, but he obviously did not claim to coin this descriptor himself.
Derr 1906.
Louis Derr may be the first to clearly specify the first definition, which is considered to be the strictly correct one in modern times, and to derive the formula corresponding to it. Using p for hyperfocal distance, D for aperture diameter, d for the diameter that a circle of confusion shall not exceed, and f for focal length, he derives:
formula_7
Since the aperture diameter D is the ratio of the focal length f to the f-number N ("D" = "f"/"N"), and the diameter of the circle of confusion "c" equals his "d", this gives the equation for the first definition above.
formula_8
Johnson 1909.
George Lindsay Johnson uses the term "Depth of Field" for what Abney called "Depth of Focus," and "Depth of Focus" in the modern sense (possibly for the first time), as the allowable distance error in the focal plane. His definitions include hyperfocal distance:
Depth of Focus is a convenient, but not strictly accurate term, used to describe the amount of racking movement (forwards or backwards) which can be given to the screen without the image becoming sensibly blurred, i.e. without any blurring in the image exceeding 1/100 in., or in the case of negatives to be enlarged or scientific work, the 1/10 or 1/100 mm. Then the breadth of a point of light, which, of course, causes blurring on both sides, i.e. (or ).
His drawing makes it clear that his e is the radius of the circle of confusion. He has clearly anticipated the need to tie it to format size or enlargement, but has not given a general scheme for choosing it.
Johnson's use of "former" and "latter" seems to be swapped; perhaps "former" was here meant to refer to the immediately preceding section title "Depth of Focus", and "latter" to the current section title "Depth of Field". Except for an obvious factor-of-2 error in using the ratio of stop diameter to CoC radius, this definition is the same as Abney's hyperfocal distance.
Others, early twentieth century.
The term "hyperfocal distance" also appears in Cassell's "Cyclopaedia" of 1911, "The Sinclair Handbook of Photography" of 1913, and Bayley's "The Complete Photographer" of 1914.
Kingslake 1951.
Rudolf Kingslake is explicit about the two meanings:
Kingslake uses the simplest formulae for DOF near and far distances, which have the effect of making the two different definitions of hyperfocal distance give identical values.
References.
| [
{
"math_id": 0,
"text": "H = \\frac{f^2}{N c} + f"
},
{
"math_id": 1,
"text": "H \\approx \\frac{f^2}{N c}\\,."
},
{
"math_id": 2,
"text": "\\begin{array}{crcl}\n & \\dfrac{x-f}{c/2} & = & \\dfrac{f}{D/2} \\\\\n\\therefore & x-f & = & \\dfrac{cf}{D} \\\\\n\\therefore & x & = & f+\\dfrac{cf}{D}\n\\end{array}"
},
{
"math_id": 3,
"text": "\\begin{array}{crclcl}\n & \\dfrac{H}{D/2} & = & \\dfrac{x}{c/2} \\\\\n\\therefore & H & = & \\dfrac{Dx}{c} & = & \\dfrac{D}{c}\\Big(f+\\dfrac{cf}{D}\\Big) \\\\\n & & = & \\dfrac{Df}{c}+f & = & \\dfrac{f^2}{Nc}+f\n\\end{array}"
},
{
"math_id": 4,
"text": "\\begin{array}{crclcl}\n & \\dfrac{H}{D/2} & = & \\dfrac{f}{c/2} \\\\\n\\therefore & H & = & \\dfrac{Df}{c} & = & \\dfrac{f^2}{Nc}\n\\end{array}"
},
{
"math_id": 5,
"text": "H = \\frac{(50)^2}{(8)(0.03)} + (50) = 10467 \\mbox{ mm}"
},
{
"math_id": 6,
"text": "p = 0.41 \\cdot f^2 \\cdot a"
},
{
"math_id": 7,
"text": "p = \\frac{(D + d) f}{d}\\,."
},
{
"math_id": 8,
"text": "p = \\frac{\\left(\\tfrac{f}{N} + c\\right) f}{c} = \\frac{f^2}{N c} + f"
}
] | https://en.wikipedia.org/wiki?curid=913945 |
9139708 | Two-dimensional singular-value decomposition | Method of decomposing a set of matrices via low-rank approximation
In linear algebra, two-dimensional singular-value decomposition (2DSVD) computes the low-rank approximation of a set of matrices such as 2D images or weather maps in a manner almost identical to SVD (singular-value decomposition) which computes the low-rank approximation of a single matrix (or a set of 1D vectors).
SVD.
Let the matrix formula_0 contain the set of 1D vectors, which have been centered. In PCA/SVD, we construct the covariance matrix formula_1 and the Gram matrix formula_2
formula_3 , formula_4
and compute their eigenvectors formula_5 and formula_6. Since formula_7 and formula_8 we have
formula_9
If we retain only the formula_10 principal eigenvectors in formula_11, this gives a low-rank approximation of formula_12.
2DSVD.
Here we deal with a set of 2D matrices formula_13. Suppose they are centered formula_14. We construct row–row and column–column covariance matrices
formula_15 and formula_16
in exactly the same manner as in SVD, and compute their eigenvectors formula_17 and formula_18. We approximate formula_19 as
formula_20
in an identical fashion to SVD. This gives a near-optimal low-rank approximation of formula_13 with the objective function
formula_21
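A minimal NumPy sketch of the construction just described; the function name two_dsvd and the test data are illustrative only and not part of any standard library. It builds the two covariance matrices, keeps the leading eigenvectors of each, and reconstructs every matrix from its small core M_i.
import numpy as np

def two_dsvd(x_list, k_rows, k_cols):
    # x_list: centered 2D arrays X_1, ..., X_n of equal shape
    f = sum(x @ x.T for x in x_list)           # row-row covariance F
    g = sum(x.T @ x for x in x_list)           # column-column covariance G
    _, u = np.linalg.eigh(f)                   # eigenvectors, eigenvalues ascending
    _, v = np.linalg.eigh(g)
    u = u[:, -k_rows:]                         # keep the leading eigenvectors
    v = v[:, -k_cols:]
    m = [u.T @ x @ v for x in x_list]          # small core matrices M_i
    x_hat = [u @ mi @ v.T for mi in m]         # low-rank approximations of X_i
    return u, v, m, x_hat

rng = np.random.default_rng(0)
xs = [rng.standard_normal((8, 6)) for _ in range(5)]
mean = sum(xs) / len(xs)
xs = [x - mean for x in xs]                    # center the set
u, v, m, x_hat = two_dsvd(xs, k_rows=3, k_cols=3)
Choosing k_rows and k_cols equal to the full matrix dimensions reproduces each matrix exactly; smaller values trade reconstruction error for compression.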
Error bounds similar to the Eckart–Young theorem also exist.
2DSVD is mostly used in image compression and representation. | [
{
"math_id": 0,
"text": " X = [\\mathbf x_1, \\ldots, \\mathbf x_n] "
},
{
"math_id": 1,
"text": " F "
},
{
"math_id": 2,
"text": " G "
},
{
"math_id": 3,
"text": " F = X X^\\mathsf{T} "
},
{
"math_id": 4,
"text": " G = X^\\mathsf{T} X, "
},
{
"math_id": 5,
"text": " U = [\\mathbf u_1, \\ldots, \\mathbf u_n] "
},
{
"math_id": 6,
"text": " V = [\\mathbf v_1, \\ldots, \\mathbf v_n] "
},
{
"math_id": 7,
"text": " VV^\\mathsf{T} = I "
},
{
"math_id": 8,
"text": " UU^\\mathsf{T} = I "
},
{
"math_id": 9,
"text": " X = UU^\\mathsf{T} X VV^\\mathsf{T} = U \\left(U^\\mathsf{T} XV\\right) V^\\mathsf{T} = U \\Sigma V^\\mathsf{T}. "
},
{
"math_id": 10,
"text": " K "
},
{
"math_id": 11,
"text": " U , V"
},
{
"math_id": 12,
"text": " X "
},
{
"math_id": 13,
"text": " (X_1,\\ldots,X_n) "
},
{
"math_id": 14,
"text": " \\sum_i X_i =0 "
},
{
"math_id": 15,
"text": " F = \\sum_i X_i X_i^\\mathsf{T} "
},
{
"math_id": 16,
"text": " G = \\sum_i X_i^\\mathsf{T} X_i "
},
{
"math_id": 17,
"text": " U "
},
{
"math_id": 18,
"text": " V"
},
{
"math_id": 19,
"text": " X_i "
},
{
"math_id": 20,
"text": " X_i = U U^\\mathsf{T} X_i V V^\\mathsf{T} = U \\left(U^\\mathsf{T} X_i V\\right) V^\\mathsf{T} = U M_i V^\\mathsf{T} "
},
{
"math_id": 21,
"text": " J= \\sum_{i=1}^n \\left| X_i - L M_i R^\\mathsf{T}\\right| ^2 "
}
] | https://en.wikipedia.org/wiki?curid=9139708 |
91420 | Existential quantification | Mathematical use of "there exists"
In predicate logic, an existential quantification is a type of quantifier, a logical constant which is interpreted as "there exists", "there is at least one", or "for some". It is usually denoted by the logical operator symbol ∃, which, when used together with a predicate variable, is called an existential quantifier ("∃"x"" or "∃("x")" or "(∃"x")"). Existential quantification is distinct from universal quantification ("for all"), which asserts that the property or relation holds for "all" members of the domain. Some sources use the term existentialization to refer to existential quantification.
Quantification in general is covered in the article on quantification (logic). The existential quantifier is encoded as U+2203 in Unicode, and as \exists in LaTeX and related formula editors.
Basics.
Consider the formal sentence
For some natural number formula_0, formula_1.
This is a single statement using existential quantification. It is roughly analogous to the informal sentence "Either formula_2, or formula_3, or formula_4, or... and so on," but more precise, because it doesn't need us to infer the meaning of the phrase "and so on." (In particular, the sentence explicitly specifies its domain of discourse to be the natural numbers, not, for example, the real numbers.)
This particular example is true, because 5 is a natural number, and when we substitute 5 for "n", we produce the true statement formula_5. It does not matter that "formula_1" is true only for that single natural number, 5; the existence of a single solution is enough to prove this existential quantification to be true.
In contrast, "For some even number formula_0, formula_1" is false, because there are no even solutions. The domain of discourse, which specifies the values the variable "n" is allowed to take, is therefore critical to a statement's truth or falsity. Logical conjunctions are used to restrict the domain of discourse to fulfill a given predicate. For example, the sentence
For some positive odd number formula_0, formula_1
is logically equivalent to the sentence
For some natural number formula_0, formula_0 is odd and formula_1.
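Over a bounded search range the statements above can be checked directly; a small illustrative Python sketch follows (the bound 100 is arbitrary, and a bounded search can only confirm an existential claim by finding a witness, not refute the unbounded one).
# "For some natural number n, n*n = 25" -- finding the witness n = 5 proves it.
print(any(n * n == 25 for n in range(100)))                  # True

# "For some even natural number n, n*n = 25" -- no witness exists.
print(any(n * n == 25 for n in range(0, 100, 2)))            # False

# Restricting the domain is the same as adding a conjunction to the predicate.
print(any(n % 2 == 1 and n * n == 25 for n in range(100)))   # True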
The mathematical proof of an existential statement about "some" object may be achieved either by a constructive proof, which exhibits an object satisfying the "some" statement, or by a nonconstructive proof, which shows that there must be such an object without concretely exhibiting one.
Notation.
In symbolic logic, "∃" (a turned letter "E" in a sans-serif font, Unicode U+2203) is used to indicate existential quantification. For example, the notation formula_6 represents the (true) statement
There exists some formula_0 in the set of natural numbers such that formula_1.
The symbol's first usage is thought to be by Giuseppe Peano in "Formulario mathematico" (1896). Afterwards, Bertrand Russell popularised its use as the existential quantifier. Through his research in set theory, Peano also introduced the symbols formula_7 and formula_8 to respectively denote the intersection and union of sets.
Properties.
Negation.
A quantified propositional function is a statement; thus, like statements, quantified functions can be negated. The formula_9 symbol is used to denote negation.
For example, if "P"("x") is the predicate ""x" is greater than 0 and less than 1", then, for a domain of discourse "X" of all natural numbers, the existential quantification "There exists a natural number "x" which is greater than 0 and less than 1" can be symbolically stated as:
formula_10
This can be demonstrated to be false. To state it truthfully, one must say, "It is not the case that there is a natural number "x" that is greater than 0 and less than 1", or, symbolically:
formula_11.
If there is no element of the domain of discourse for which the statement is true, then it must be false for all of those elements. That is, the negation of
formula_10
is logically equivalent to "For any natural number "x", "x" is not greater than 0 and less than 1", or:
formula_12
Generally, then, the negation of a propositional function's existential quantification is a universal quantification of that propositional function's negation; symbolically,
formula_13
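Over a finite domain the duality can be seen directly with Python's any and all, which play the roles of ∃ and ∀ respectively; an illustrative sketch with a made-up domain and predicate:
domain = range(10)                         # finite stand-in for the natural numbers
p = lambda x: 0 < x < 1                    # no natural number satisfies this

left = not any(p(x) for x in domain)       # corresponds to "not (there exists x with P(x))"
right = all(not p(x) for x in domain)      # corresponds to "for all x, not P(x)"
print(left, right)                         # True True -- both sides agree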
A common error is stating "all persons are not married" (i.e., "there exists no person who is married"), when "not all persons are married" (i.e., "there exists a person who is not married") is intended:
formula_14
Negation is also expressible through a statement of "for no", as opposed to "for some":
formula_15
Unlike the universal quantifier, the existential quantifier distributes over logical disjunctions:
formula_16
Rules of inference.
A rule of inference is a rule justifying a logical step from hypothesis to conclusion. There are several rules of inference which utilize the existential quantifier.
"Existential introduction" (∃I) concludes that, if the propositional function is known to be true for a particular element of the domain of discourse, then it must be true that there exists an element for which the proposition function is true. Symbolically,
formula_17
Existential instantiation, when conducted in a Fitch style deduction, proceeds by entering a new sub-derivation while substituting an existentially quantified variable for a subject—which does not appear within any active sub-derivation. If a conclusion can be reached within this sub-derivation in which the substituted subject does not appear, then one can exit that sub-derivation with that conclusion. The reasoning behind existential elimination (∃E) is as follows: If it is given that there exists an element for which the proposition function is true, and if a conclusion can be reached by giving that element an arbitrary name, that conclusion is necessarily true, as long as it does not contain the name. Symbolically, for an arbitrary "c" and for a proposition "Q" in which "c" does not appear:
formula_18
formula_19 must be true for all values of "c" over the same domain "X"; else, the logic does not follow: If "c" is not arbitrary, and is instead a specific element of the domain of discourse, then stating "P"("c") might unjustifiably give more information about that object.
The empty set.
The formula formula_20 is always false, regardless of "P"("x"). This is because formula_21 denotes the empty set, and no "x" of any description – let alone an "x" fulfilling a given predicate "P"("x") – exists in the empty set. See also Vacuous truth for more information.
As adjoint.
In category theory and the theory of elementary topoi, the existential quantifier can be understood as the left adjoint of a functor between power sets, the inverse image functor of a function between sets; likewise, the universal quantifier is the right adjoint.
References.
| [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n\\times n=25"
},
{
"math_id": 2,
"text": "0\\times 0=25"
},
{
"math_id": 3,
"text": "1\\times 1=25"
},
{
"math_id": 4,
"text": "2\\times 2=25"
},
{
"math_id": 5,
"text": "5\\times 5=25"
},
{
"math_id": 6,
"text": "\\exists{n}{\\in}\\mathbb{N}: n\\times n=25"
},
{
"math_id": 7,
"text": "\\cap"
},
{
"math_id": 8,
"text": "\\cup"
},
{
"math_id": 9,
"text": "\\lnot\\ "
},
{
"math_id": 10,
"text": "\\exists{x}{\\in}\\mathbf{X}\\, P(x)"
},
{
"math_id": 11,
"text": "\\lnot\\ \\exists{x}{\\in}\\mathbf{X}\\, P(x)"
},
{
"math_id": 12,
"text": "\\forall{x}{\\in}\\mathbf{X}\\, \\lnot P(x)"
},
{
"math_id": 13,
"text": "\\lnot\\ \\exists{x}{\\in}\\mathbf{X}\\, P(x) \\equiv\\ \\forall{x}{\\in}\\mathbf{X}\\, \\lnot P(x)"
},
{
"math_id": 14,
"text": "\\lnot\\ \\exists{x}{\\in}\\mathbf{X}\\, P(x) \\equiv\\ \\forall{x}{\\in}\\mathbf{X}\\, \\lnot P(x) \\not\\equiv\\ \\lnot\\ \\forall{x}{\\in}\\mathbf{X}\\, P(x) \\equiv\\ \\exists{x}{\\in}\\mathbf{X}\\, \\lnot P(x)"
},
{
"math_id": 15,
"text": "\\nexists{x}{\\in}\\mathbf{X}\\, P(x) \\equiv \\lnot\\ \\exists{x}{\\in}\\mathbf{X}\\, P(x)"
},
{
"math_id": 16,
"text": " \\exists{x}{\\in}\\mathbf{X}\\, P(x) \\lor Q(x) \\to\\ (\\exists{x}{\\in}\\mathbf{X}\\, P(x) \\lor \\exists{x}{\\in}\\mathbf{X}\\, Q(x))"
},
{
"math_id": 17,
"text": " P(a) \\to\\ \\exists{x}{\\in}\\mathbf{X}\\, P(x)"
},
{
"math_id": 18,
"text": " \\exists{x}{\\in}\\mathbf{X}\\, P(x) \\to\\ ((P(c) \\to\\ Q) \\to\\ Q)"
},
{
"math_id": 19,
"text": "P(c) \\to\\ Q"
},
{
"math_id": 20,
"text": "\\exists {x}{\\in}\\varnothing \\, P(x)"
},
{
"math_id": 21,
"text": "\\varnothing"
}
] | https://en.wikipedia.org/wiki?curid=91420 |
9142932 | Proof sketch for Gödel's first incompleteness theorem | Summary of a mathematical proof
This article gives a sketch of a proof of Gödel's first incompleteness theorem. This theorem applies to any formal theory that satisfies certain technical hypotheses, which are discussed as needed during the sketch. We will assume for the remainder of the article that a fixed theory satisfying these hypotheses has been selected.
Throughout this article the word "number" refers to a natural number (including 0). The key property these numbers possess is that any natural number can be obtained by starting with the number 0 and adding 1 a finite number of times.
Hypotheses of the theory.
Gödel's theorem applies to any formal theory that satisfies certain properties. Each formal theory has a signature that specifies the nonlogical symbols in the language of the theory. For simplicity, we will assume that the language of the theory is composed from the following collection of 15 (and only 15) symbols:
This is the language of Peano arithmetic. A well-formed formula is a sequence of these symbols that is formed so as to have a well-defined reading as a mathematical formula. Thus "x" = "SS"0 is well formed while "x" = ∀+ is not well formed. A theory is a set of well-formed formulas with no free variables.
A theory is consistent if there is no formula "F" such that both "F" and its negation are provable. ω-consistency is a stronger property than consistency. Suppose that "F"("x") is a formula with one free variable "x". In order to be ω-consistent, the theory cannot prove both ∃"m" "F"("m") while also proving ¬"F"("n") for each natural number "n".
The theory is assumed to be effective, which means that the set of axioms must be recursively enumerable. This means that it is theoretically possible to write a finite-length computer program that, if allowed to run forever, would output the axioms of the theory (necessarily including every well-formed instance of the axiom schema of induction) one at a time and not output anything else. This requirement is necessary; there are theories that are complete, consistent, and include elementary arithmetic, but no such theory can be effective.
"For a simplified outline of the proof, see Gödel's incompleteness theorems"
Outline of the proof.
The sketch here is broken into three parts. In the first part, each formula of the theory is assigned a number, known as a Gödel number, in a manner that allows the formula to be effectively recovered from the number. This numbering is extended to cover finite sequences of formulas. In the second part, a specific formula "Proof"("x", "y") is constructed such that for any two numbers "n" and "m", "Proof"("n","m") holds if and only if "n" represents a sequence of formulas that constitutes a proof of the formula that "m" represents. In the third part of the proof, we construct a self-referential formula that, informally, says "I am not provable", and prove that this sentence is neither provable nor disprovable within the theory.
Importantly, all the formulas in the proof can be defined by primitive recursive functions, which themselves can be defined in first-order Peano arithmetic.
Gödel numbering.
The first step of the proof is to represent (well-formed) formulas of the theory, and finite lists of these formulas, as natural numbers. These numbers are called the Gödel numbers of the formulas.
Begin by assigning a natural number to each symbol of the language of arithmetic, similar to the manner in which the ASCII code assigns a unique binary number to each letter and certain other characters. This article will employ the following assignment, very similar to the one Douglas Hofstadter used in his "Gödel, Escher, Bach":
The Gödel number of a formula is obtained by concatenating the Gödel numbers of each symbol making up the formula. The Gödel numbers for each symbol are separated by a zero because by design, no Gödel number of a symbol includes a 0. Hence any formula may be correctly recovered from its Gödel number. Let "G"("F") denote the Gödel number of the formula "F".
Given the above Gödel numbering, the sentence asserting that addition commutes, ∀"x" ∀"x"* ("x" + "x"* = "x"* + "x"), translates as the number:
626 0 262 0 626 0 262 0 163 0 362 0 262 0 112 0 262 0 163 0 111 0 262 0 163 0 112 0 262 0 323
(Spaces have been inserted on each side of every 0 only for readability; Gödel numbers are strict concatenations of decimal digits.) Not all natural numbers represent a sentence. For example, the number
111 0 626 0 112 0 262
translates to "= ∀ + "x"", which is not well-formed.
Because each natural number can be obtained by applying the successor operation "S" to 0 a finite number of times, every natural number has its own Gödel number. For example, the Gödel number corresponding to 4, "SSSS"0, is:
123 0 123 0 123 0 123 0 666.
The assignment of Gödel numbers can be extended to finite lists of formulas. To obtain the Gödel number of a list of formulas, write the Gödel numbers of the formulas in order, separating them by two consecutive zeros. Since the Gödel number of a formula never contains two consecutive zeros, each formula in a list of formulas can be effectively recovered from the Gödel number for the list.
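A small Python sketch of this encoding, using only the symbol codes that can be read off the worked examples above (∀ → 626, "x" → 262, * → 163, ( → 362, + → 112, = → 111, ) → 323, "S" → 123, 0 → 666); the table is partial and the helper names are made up for illustration.
# Partial symbol table recovered from the examples in the text above.
CODES = {'∀': 626, 'x': 262, '*': 163, '(': 362, '+': 112,
         '=': 111, ')': 323, 'S': 123, '0': 666}

def godel_number(symbols):
    # Concatenate the symbols' codes, separated by a single 0.
    return int('0'.join(str(CODES[s]) for s in symbols))

def godel_number_of_list(formulas):
    # Separate the formulas' Gödel numbers by two consecutive zeros.
    return int('00'.join(str(godel_number(f)) for f in formulas))

commutativity = ['∀', 'x', '∀', 'x', '*', '(', 'x', '+', 'x', '*',
                 '=', 'x', '*', '+', 'x', ')']
print(godel_number(commutativity))             # the number shown above, without spaces
print(godel_number_of_list([['S', 'S', '0'], ['0']]))   # purely illustrative list
Because no symbol code contains a 0, and formula numbers therefore never contain two consecutive zeros, both encodings can be decoded unambiguously.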
It is crucial that the formal arithmetic be capable of proving a minimum set of facts. In particular, it must be able to prove that every number "m" has a Gödel number "G"("m"). A second fact that the theory must prove is that given any Gödel number "G"("F"("x")) of a formula "F"("x") with one free variable "x" and any number "m", there is a Gödel number of the formula "F"("m") obtained by replacing all occurrences of "G"("x") in "G"("F"("x")) with "G"("m"), and that this second Gödel number can be effectively obtained from the Gödel number "G"("F"("x")) of "F"("x") as a function of "G"("x"). To see that this is in fact possible, note that given the Gödel number of "F"("x"), one can recreate the original formula "F"("x"), make the substitution of "x" with "m", and then find the Gödel number "G"("F"("m")) of the resulting formula "F"("m"). This is a uniform procedure.
The provability relation.
Deduction rules can then be represented by binary relations on Gödel numbers of lists of formulas. In other words, suppose that there is a deduction rule "D"1, by which one can move from the formulas "S"1,"S"2 to a new formula "S". Then the relation "R"1 corresponding to this deduction rule says that "n" is related to "m" (in other words, "n" "R"1"m" holds) if "n" is the Gödel number of the list of formulas containing "S"1 and "S"2 and "m" is the Gödel number of the list of formulas containing "S"1, "S"2 and "S". Because each deduction rule is concrete, it is possible to effectively determine for any natural numbers "n" and "m" whether they are related by the relation.
The second stage in the proof is to use the Gödel numbering, described above, to show that the notion of provability can be expressed within the formal language of the theory. Suppose the theory has deduction rules: "D"1, "D"2, "D"3, ... . Let "R"1, "R"2, "R"3, ... be their corresponding relations, as described above.
Every provable statement is either an axiom itself, or it can be deduced from the axioms by a finite number of applications of the deduction rules.
A proof of a formula "S" is itself a string of mathematical statements related by particular relations (each is either an axiom or related to former statements by deduction rules), where the last statement is "S". Thus one can define the Gödel number of a proof. Moreover, one may define a statement form "Proof"("x","y"), which for every two numbers "x" and "y" is provable if and only if "x" is the Gödel number of a proof of the statement "S" and "y" = "G"("S").
"Proof"("x","y") is in fact an arithmetical relation, just as ""x" + "y" = 6" is, though a (much) more complicated one. Given such a relation "R"("x","y"), for any two specific numbers "n" and "m", either the formula "R"("m","n"), or its negation ¬"R"("m","n"), but not both, is provable. This is because the relation between these two numbers can be simply "checked". Formally this can be proven by induction, where all these possible relations (whose number is infinite) are constructed one by one.
The detailed construction of the formula "Proof" makes essential use of the assumption that the theory is effective; it would not be possible to construct this formula without such an assumption.
Self-referential formula.
For every number "n" and every formula "F"("y"), where "y" is a free variable, we define "q"("n", "G"("F")), a relation between two numbers "n" and "G"("F"), such that it corresponds to the statement ""n" is not the Gödel number of a proof of "F"("G"("F"))". Here, "F"("G"("F")) can be understood as "F" with its own Gödel number as its argument.
Note that "q" takes as an argument "G"("F"), the Gödel number of "F". In order to prove either "q"("n", "G"("F")), or ¬"q"("n", "G"("F")), it is necessary to perform number-theoretic operations on "G"("F") that mirror the following steps: decode the number "G"("F") into the formula "F", replace all occurrences of "y" in "F" with the number "G"("F"), and then compute the Gödel number of the resulting formula "F"("G"("F")).
Note that for every specific number "n" and formula "F"("y"), "q"("n", "G"("F")) is a straightforward (though complicated) arithmetical relation between two numbers "n" and "G"("F"), building on the relation "Proof" defined earlier. Further, "q"("n", "G"("F")) is provable if the finite list of formulas encoded by "n" is not a proof of "F"("G"("F")), and ¬"q"("n", "G"("F")) is provable if the finite list of formulas encoded by "n" is a proof of "F"("G"("F")). Given any numbers "n" and "G"("F"), either "q"("n", "G"("F")) or ¬"q"("n","G"("F")) (but not both) is provable.
Any proof of "F"("G"("F")) can be encoded by a Gödel number "n", such that "q"("n", "G"("F")) does not hold. If "q"("n", "G"("F")) holds for all natural numbers "n", then there is no proof of "F"("G"("F")). In other words, ∀"y" "q"("y", "G"("F")), a formula about natural numbers, corresponds to "there is no proof of "F"("G"("F"))".
We now define the formula "P"("x") = ∀"y" "q"("y", "x"), where "x" is a free variable. The formula "P" itself has a Gödel number "G"("P"), as does every formula.
This formula has a free variable "x". Suppose we replace it with "G"("F"), the Gödel number of a formula "F"("z"), where "z" is a free variable. Then, "P"("G"("F")) = ∀"y" "q"("y", "G"("F")) corresponds to "there is no proof of "F"("G"("F"))", as we have seen.
Consider the formula "P"("G"("P")) = ∀"y" "q"("y", "G"("P")). This formula concerning the number "G"("P") corresponds to "there is no proof of "P"("G"("P"))". We have here the self-referential feature that is crucial to the proof: A formula of the formal theory that somehow relates to its own provability within that formal theory. Very informally, "P"("G"("P")) says: "I am not provable".
We will now show that neither the formula "P"("G"("P")), nor its negation ¬"P"("G"("P")), is provable.
Suppose "P"("G"("P")) = ∀"y" "q"("y", "G"("P")) is provable. Let "n" be the Gödel number of a proof of "P"("G"("P")). Then, as seen earlier, the formula ¬"q"("n", "G"("P")) is provable. Proving both ¬"q"("n", "G"("P")) and ∀"y" "q"("y", "G"("P")) violates the consistency of the formal theory. We therefore conclude that "P"("G"("P")) is not provable.
Consider any number "n". Suppose ¬"q"("n", "G"("P")) is provable.
Then, "n" must be the Gödel number of a proof of "P"("G"("P")). But we have just proved that "P"("G"("P")) is not provable. Since either "q"("n", "G"("P")) or ¬"q"("n", "G"("P")) must be provable, we conclude that, for all natural numbers "n", "q"("n", "G"("P")) is provable.
Suppose the negation of "P"("G"("P")), ¬"P"("G"("P")) = ∃"x" ¬"q"("x", "G"("P")), is provable. Proving both ∃"x" ¬"q"("x", "G"("P")), and "q"("n", "G"("P")), for all natural numbers "n", violates the ω-consistency of the formal theory. Thus if the theory is ω-consistent, ¬"P"("G"("P")) is not provable.
We have sketched a proof showing that:
For any formal, recursively enumerable (i.e. effectively generated) theory of Peano Arithmetic,
if it is consistent, then there exists an unprovable formula (in the language of that theory).
if it is ω-consistent, then there exists a formula such that both it and its negation are unprovable.
The truth of the Gödel sentence.
The proof of Gödel's incompleteness theorem just sketched is proof-theoretic (also called syntactic) in that it shows that if certain proofs exist (a proof of "P"("G"("P")) or its negation) then they can be manipulated to produce a proof of a contradiction. This makes no appeal to whether "P"("G"("P")) is "true", only to whether it is provable. Truth is a model-theoretic, or semantic, concept, and is not equivalent to provability except in special cases.
By analyzing the situation of the above proof in more detail, it is possible to obtain a conclusion about the truth of "P"("G"("P")) in the standard model formula_0 of natural numbers. As just seen, "q"("n", "G"("P")) is provable for each natural number "n", and is thus true in the model formula_0. Therefore, within this model,
formula_1
holds. This is what the statement ""P"("G"("P")) is true" usually refers to—the sentence is true in the intended model. It is not true in every model, however: If it were, then by Gödel's completeness theorem it would be provable, which we have just seen is not the case.
Boolos's short proof.
George Boolos (1989) vastly simplified the proof of the First Theorem, if one agrees that the theorem is equivalent to:
"There is no algorithm "M" whose output contains all true sentences of arithmetic and no false ones."
"Arithmetic" refers to Peano or Robinson arithmetic, but the proof invokes no specifics of either, tacitly assuming that these systems allow '<' and '×' to have their usual meanings. Boolos proves the theorem in about two pages. His proof employs the language of first-order logic, but invokes no facts about the connectives or quantifiers. The domain of discourse is the natural numbers. The Gödel sentence builds on Berry's paradox.
Let ["n"] abbreviate "n" successive applications of the successor function, starting from 0. Boolos then asserts (the details are only sketched) that there exists a defined predicate "Cxz" that comes out true iff an arithmetic formula containing "z" symbols names the number "x". This proof sketch contains the only mention of Gödel numbering; Boolos merely assumes that every formula can be so numbered. Here, a formula "F" "names" the number "n" iff the following is provable:
formula_2
Boolos then defines the related predicates:
[10] × ["k"]) ∧ "Axy"). "k" = the number of symbols appearing in "Axy".
"Fx" formalizes Berry's paradox. The balance of the proof, requiring but 12 lines of text, shows that the sentence ∀"x"("Fx"↔("x" = ["n"])) is true for some number "n", but no algorithm "M" will identify it as true. Hence in arithmetic, truth outruns proof. QED.
The above predicates contain the only existential quantifiers appearing in the entire proof. The '<' and '×' appearing in these predicates are the only defined arithmetical notions the proof requires. The proof nowhere mentions recursive functions or any facts from number theory, and Boolos claims that his proof dispenses with diagonalization. For more on this proof, see Berry's paradox.
Citations.
| [
{
"math_id": 0,
"text": "\\mathbb{N}"
},
{
"math_id": 1,
"text": "P(G(P)) = \\forall y\\,q(y, G(P))"
},
{
"math_id": 2,
"text": "\\forall x (F(x) \\leftrightarrow x=n)"
}
] | https://en.wikipedia.org/wiki?curid=9142932 |
914498 | John James Waterston | Scottish civil engineer and physicist
John James Waterston (1811 – 18 June 1883) was a Scottish physicist and a neglected pioneer of the kinetic theory of gases.
Early life.
Waterston's father, George, was an Edinburgh sealing wax manufacturer and stationer, a relative of Robert Sandeman and his brother George. John was born, the sixth of nine children, into a family alive with interests in literature, science and music. He was educated at Edinburgh High School before becoming apprenticed as a civil engineer to Messrs. Grainger and Miller. His employers encouraged him to attend lectures at the University of Edinburgh. He studied mathematics and physics under Sir John Leslie as well as attending lectures in chemistry, anatomy and surgery and becoming an active participant in the student literary society.
At age nineteen, Waterston published a paper proposing a mechanical explanation of gravitation, accounting for "action at a distance" in terms of colliding particles and discussing interactions between linear and rotational motion that would play a part in his later kinetic theory. He proposed that ether is made of small cylindrical particles, and collisions with large matter particles would transfer rotational motion into linear motion, locally increasing the speed of the ether particles, which he claimed would lower the density of the ether and create an attractive force.
Waterston moved to London at age twenty-one, where he worked as a railroad surveyor, becoming an associate of the Institution of Civil Engineers and publishing a paper on a graphical method for planning earthworks. The travel and disruption associated with his surveying work left Waterston little time to pursue his studies so he joined the hydrography department of the Admiralty under Francis Beaufort. It was Beaufort who, in 1839, supported Waterston for the post of naval instructor for cadets of the East India Company in Bombay. The posting worked well for Waterston who was able to pursue his reading and research at the library of Grant College.
Kinetic theory.
Building on his theory of the mechanical explanation of gravity, he was the first to develop the kinetic theory, independently of earlier and equally neglected partial accounts by Daniel Bernoulli and John Herapath. He published it, at his own expense, in his book "Thoughts on the Mental Functions" (1843). He correctly derived all the consequences of the premise that gas pressure is a function of the number of molecules per unit volume, "N"; molecular mass, "M"; and molecular mean-squared velocity, formula_0. He established the relationship:
formula_1.
He had been motivated to think of a "wave theory of heat" by analogy with the wave theory of light and some experiments by James Forbes and Macedonio Melloni on radiant heat. His statement that "... in mixed media the mean square molecular velocity is inversely proportional to the specific weight of the molecules" has been seen as the first statement of the equipartition theorem for translational motion. Waterston grasped that, while the kinetic energy of an individual molecule with velocity formula_2 is formula_3, heat energy is proportional to temperature formula_4. That insight led him to derive the ideal gas law:
formula_5.
The publication made little impact, perhaps because of the title. He submitted his theory, under Beaufort's sponsorship, to the Royal Society in 1845 but was rejected. Referee Sir John William Lubbock wrote "The paper is nothing but nonsense."
Unable to retrieve a copy of his paper (he had failed to make a copy for himself before submitting the paper to the Royal Society), he rewrote the work and sought to advertise it elsewhere, attracting little attention other than from William John Macquorn Rankine and Hermann von Helmholtz through whom it may have influenced August Krönig. The theory gained acceptance only when it was proposed by Rudolf Clausius and James Clerk Maxwell in the 1850s by which time Waterston's contribution had been forgotten.
Later life.
He returned to Edinburgh in 1857 to pursue his own novel physical ideas but met with unyielding neglect and discouragement from the scientific establishment. Neglect was exacerbated by his own increasing reclusiveness and hostility to the learned societies. He worked on acoustics, astronomy, fluid mechanics and thermodynamics.
In 1858, 27 years after he published his theory of the mechanical explanation of gravity and 15 years after publishing the kinetic theory, he continued to push for others to explore his ideas on gravity, to little avail.
He left his Edinburgh home on 18 June 1883 and drowned in a nearby canal, possibly falling into the canal due to heat stress from his astronomical observation activities.
Recognition after death.
As discussed above, Waterston's paper submitted to the Royal Society was rejected. Some years after Waterston's death, Lord Rayleigh (Secretary of Royal Society at that time) managed to dig it out from the archives of the Royal Society. Finally, Waterston's paper was published in the Philosophical Transactions of the Royal Society in 1892. (Please see below.)
Rayleigh felt that Waterston's case was not an aberration, but the norm:"The history of [Waterston's] paper suggests that highly speculative investigations, especially by an unknown author, are best brought before the world through some other channel than a scientific society, which naturally hesitates to admit into its printed records matter of uncertain value. Perhaps one may go further, and say that a young author who believes himself capable of great things would usually do well to secure favourable recognition of the scientific world by work whose scope is limited, and whose value is easily judged, before embarking upon higher flights."
References.
| [
{
"math_id": 0,
"text": "\\bar{v^2}"
},
{
"math_id": 1,
"text": "P=NM\\bar{v^2}"
},
{
"math_id": 2,
"text": "v"
},
{
"math_id": 3,
"text": "\\frac{1}{2}mv^2"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "\\frac {PV} {T} = \\mbox{a constant}"
}
] | https://en.wikipedia.org/wiki?curid=914498 |
914554 | Opel Senator | Executive car produced by the German automaker Opel
The Opel Senator is a full-size executive car (E-segment) produced by the German automaker Opel, two generations of which were sold in Europe from 1978 until 1993. A saloon, its first incarnation was also available with a fastback coupé body as the Opel Monza and Vauxhall Royale Coupé. The Senator was, for its entire existence, the flagship saloon model for both Opel and Vauxhall.
Through the international divisions of General Motors, it was also known in various markets as the Chevrolet Senator, Daewoo Imperial (in South Korea), Vauxhall Royale (until 1983) and Vauxhall Senator (which took the place of the Royale on Vauxhall models when the Opel brand was phased out from 1983). It was also sold as the Opel Kikinda in Yugoslavia, where it was produced under licence by IDA-Opel in Kikinda, Serbia, after which it was named.
The original Senator was a "de facto" replacement for Opel's KAD cars (the Opel Kapitän, Admiral and Diplomat), which competed in the F-segment (full-size luxury) in which the KAD cars had sold poorly. Sister company Vauxhall had already abandoned the segment with the demise of its Cresta/Viscount models some years earlier, leaving the Ventora model (a luxury derivative of the FE Victor/VX4) as its flagship offering but this was axed in 1976 with no direct replacement.
The Senator shared its platform with the smaller Opel Rekord, the latter being lengthened to make the Senator. The second generation of that car, from 1987, shared its base with the Rekord's Opel Omega successor, which was again lengthened to produce the Senator.
Senator A (1978–1986).
The Senator A was the last part of a joint model programme executed by GM in the 1970s to develop a common series of vehicle platforms for both its European brands (and also for the Holden brand in Australia). The first two product families of this strategy - the T-Car (Kadett C/Chevette) and the U-Car (Ascona B/Manta B/Cavalier) - had already been released. The V-Car (or V78) platform would simultaneously provide the next-generation Opel Rekord, a replacement for the Vauxhall FE Victor, and a 'stretch' version to replace the unsuccessful 'KAD' cars, and act as a flagship for both Opel and Vauxhall.
The Senator therefore emerged as a long wheelbase version of the Opel Rekord E, complemented by a three-door fastback coupé version on the same platform called the Opel Monza, which was planned as a successor for the Opel Commodore coupé.
Names and markets.
The Senator A and Monza were initially sold in the United Kingdom as the Vauxhall Royale (and Vauxhall Royale Coupé). Unlike other members of the joint Opel/Vauxhall model programme of the period, the Royale was simply a badge engineered version of the Senator with only detail differences from its Opel sister.
Following the merger of the UK Opel and Vauxhall dealer networks in 1982, the Opel marque was repositioned as a performance-luxury brand, and the Vauxhall Royale models were dropped in favour of the Opel Senator/Monza, coinciding with the "A2" mid-cycle facelift. This policy was reversed in late 1984, with the Senator reverting to Vauxhall branding for the 1985 model year, but the Monza remained on sale as an Opel until its discontinuation at the end of 1987.
The vehicle was also available in South Africa as the Chevrolet Senator until 1982, when it was rebadged as an Opel. The Chevrolet Senator was fitted with a locally built version of Chevrolet's 250 inline-six (4,093 cc), with . The post-1982 South African Opel Senator received Australian-built, six-cylinder engines. In Serbia, the locally assembled Senator received the 2.5-litre six and was badged the "Opel Kikinda".
The Senator and Rekord E were used as the base vehicle from which the Holden Commodore was developed for the Australian and New Zealand markets. The later VK Commodore was a hybrid between the two Opel cars, featuring the Senator's six light glasshouse grafted onto the Rekord E derived shell.
Engines.
The engine range for the first phase of the model's life included the 2.8S and the newly developed 3.0E, which had and with fuel injection. The three-speed automatic transmission was Opel's own design introduced in 1969, and was manufactured in Opel's transmission plant in Strasbourg, it was modified to cope with the new and improved power outputs.
Opel's own four speed manual transmission was not up to the job and they turned to transmission producer Getrag, who installed their 264 four speed manual gearbox in the early four cylinder Monzas. This was replaced by the five speed 240 for the 2.5 and 2.8 engines, and the 265 gearbox for the 3.0E.
The straight-six engines were all of the Opel cam-in-head engine design, as used in the earlier Commodore models and originating from the 1.7 and 1.9 litre straight four engines first used in the 1966 Kadett and Rekord. Opel would stick with the CIH engine design up until the 2.4 Frontera in 1993.
With the 3.0 litre engine, the Monza was the fastest car Opel had built, capable of , and 0–100 km/h (0–62 mph) in 8.5 seconds. In June 1981, the fuel injected 2.5E engine also used in the smaller Commodore was added to the Senator/Monza lineup. With it was very close to the now irrelevant 2.8 and its , and the 2.8S was discontinued in 1982.
Facelift (A2).
The original Senator and Monza were facelifted in November 1982, although the Senator "A2" (as it is usually called) only went on sale in March 1983. In the United Kingdom, it was initially sold only as an Opel, before being rebadged as a Vauxhall in 1984. The A2 Monza was only sold as an Opel.
The facelifted car looked similar to its predecessor, with relatively minor changes: smoothed-off headlights increased in size, and chrome parts were changed to a matt black or colour-coded finish. The body was also much more aerodynamically slippery, the drag coefficient formula_0 falling from 0.45 to 0.36. The top-of-the-range 3.0E received upgraded Bosch LE-Jetronic fuel injection.
Interiors were improved with an altered dashboard and the new instrument pack with larger dials used in the Rekord E2, and the engine range changed. Now, the fuel-injected straight-four two-litre cam-in-head unit from the Rekord E2 was available, although with little fanfare; this and the 2.5 essentially replaced the Commodore, which was itself quietly retired in 1982. Power of the 2.0 was soon increased to . In March 1983 a 2.3-litre turbodiesel (shared with the Rekord) became available, and a few months later ABS brakes (hitherto only available for the Senator CD) became an available option across the entire Senator/Monza range. At the Paris Show in September 1984 the 2.5E was given a new LE-Jetronic Bosch fuel injection system; power inched up to . The 2.0E was replaced by the torquier 2.2E, still with the same maximum power. Only the 3.0E engine remained untouched, although its name was changed to 3.0i. On the transmission side, the Strasbourg-built THM180 three-speed automatic was replaced by a four-speed unit. For the 1985 model year, the digital instrument display introduced in the Kadett E was available on the top models, although buyers could opt for the conventional analogue dials as a delete option. The trim surrounding the windows was more blacked out than before as well, although ample chrome remained. The four-cylinder models were never sold in Vauxhall form in the United Kingdom.
Shortly thereafter, in November 1984 a supercharged version (Comprex) was shown - at the time, the only production car in the world to use this technique. Going on sale in 1985, this very rare experimental version (1,000 units planned) was officially built by Irmscher rather than Opel. The Comprex offered and a top speed, and acceleration figures showed a twelve percent improvement over the turbodiesel. Like the other Rekord and Senator diesels it had a pronounced bulge in the bonnet. The Comprex offered marginally higher power than the turbodiesel, but more importantly, 90 percent of the maximum torque was available from 1300 rpm. From September 1985 until the end of production at the end of summer 1986, a catalyzed version of the 3.0E was available, with power down to .
Variants.
A four-wheel drive conversion was also available, engineered by Ferguson, who had previously provided similar modifications for the Jensen FF. Rather expensive, this could also be retrofitted to an existing car. The system uses a viscous coupling to distribute power with a 60/40 rearward bias, to improve traction whilst maintaining the Senator's handling characteristics. These were used by British Forces Germany under the BRIXMIS (British Commanders' in Chief Mission to the Soviet Forces in Germany) operations for the collection of technical intelligence. The same kit was also used by Bitter Cars for a four-wheel-drive version of their SC coupé, beginning in the end of 1981.
A limited-edition convertible was also available in Germany, where the company "Keinath" reinforced the car heavily, and this added considerably to the all-round weight of the car.
Senator B (1987–1993).
A new model, the Senator B (marketed without the "B" suffix), arrived in the spring of 1987, a longer-bodied version of the Opel Omega. There was no Monza equivalent.
There were various versions of the Senator B: twelve-valve 2.5 L and 3.0 L engines were released in 1987, along with a luxury "CD" model with the 3.0 L engine. The CD version boasted electronically adjustable suspension ("ERC"), a first in a mass-produced European car, as well as air conditioning, heated seats front and rear, genuine walnut panels, a leather-covered centre console, a trip computer and cruise control.
The cars were available with either a five-speed manual or a four-speed automatic gearbox, the latter a digitally controlled unit from Aisin-Warner with three different switching programs: Sport, Economy and Winter.
It was also equipped with a brief torque reduction at each shift, called "torque retard", so that gear changes were barely noticeable. In Winter mode the car starts in third gear and shifts to fourth as soon as possible to prevent wheelspin and instability; this mode remains active up to 80 km/h and then automatically switches off. The gearbox also had a built-in diagnostic system and an emergency program. Later Lexus and Volvo used similar versions of this transmission. As a luxury car, there were many options, but much was also standard. Options included leather seats, heated seats both front and rear, and electronic air conditioning including a refrigerator in the glove box.
LCD instrumentation was also an option. Digital electronic power steering ZF-Servotronic, the same as in the BMW 7 Series, was standard, as was a new front axle design which allowed the axle to slide under the car in a crash and thus increasing the length of the deformation zone and prevent deformation of the footwell. The 3.0 24V was equipped with BBS styled multispoke alloy wheels made by Ronal.
A 24-valve version of the 3.0-litre six was introduced in 1989, generating – compared with for the older twelve-valve version. This model was very popular with police forces in the United Kingdom, with several cars supplied to multiple forces in upgraded police specification for traffic policing service, the notable exception being the Metropolitan Police. The main feature of the new engine was a "Dual Ram" system, increasing torque at low engine speeds by means of a redirected air flow system which engages at 4,000 rpm. In police service the engine was capable of a speed of up to , although the bonnet was prone to rippling at such high speeds.
Opel tuner Irmscher introduced a 4-litre version of the Senator, and it went on sale in Germany only in late 1990. Power increased to ; it was equipped with a body kit and alloy wheels, while the car's interior benefitted from buffalo hide, added wood panels, and lots of power equipment. For 1990 the 2.5 L was replaced by a 2.6 L Dual Ram. The twelve-valve 3-litre version was deleted from the range in 1992. CD versions of the 2.6 L (UK market only) and the 24-valve 3.0 L were available up to the model's withdrawal in 1993.
With the second generation Omega presented at the end of 1993 and available for sale from March 1994, Opel considered themselves sufficiently represented in the upper end of the market by the top specification Omega B. Production of the Opel Senator B ended in the autumn of 1993 with only 69,943 cars produced since the car's launch six and a half years earlier. Annual production had slumped from 14,007 in 1990 to 5,952 in 1992, with only 2,688 cars produced in 1993. Following the announcement of the discontinuation of the Senator, the government of the United Kingdom would order a final batch of around 200 Vauxhall Senators in 1993 for diplomatic and policing use prior to moving over to the Vauxhall Omega.
References.
| [
{
"math_id": 0,
"text": "\\scriptstyle C_\\mathrm x\\,"
}
] | https://en.wikipedia.org/wiki?curid=914554 |
914820 | G-test | Statistical test
In statistics, "G"-tests are likelihood-ratio or maximum likelihood statistical significance tests that are increasingly being used in situations where chi-squared tests were previously recommended.
Formulation.
The general formula for "G" is
formula_0
where formula_1 is the observed count in a cell, formula_2 is the expected count under the null hypothesis, formula_3 denotes the natural logarithm, and the sum is taken over all non-empty cells. Under the null hypothesis, the resulting formula_4 is approximately chi-squared distributed.
Furthermore, the total observed count should be equal to the total expected count:formula_5where formula_6 is the total number of observations.
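A minimal numerical sketch in Python/NumPy for a goodness-of-fit setting (the counts are made up for illustration); the explicit sum follows the formula as written, and SciPy's power_divergence with lambda_="log-likelihood" is intended to compute the same statistic.
import numpy as np
from scipy import stats

observed = np.array([30.0, 14.0, 34.0, 45.0, 27.0])
expected = np.full(5, observed.sum() / 5)      # uniform null, same total count

g = 2.0 * np.sum(observed * np.log(observed / expected))
p_value = stats.chi2.sf(g, df=len(observed) - 1)
print(g, p_value)

# The same statistic via SciPy's generalized divergence test.
print(stats.power_divergence(observed, f_exp=expected, lambda_="log-likelihood"))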
Derivation.
We can derive the value of the "G"-test from the log-likelihood ratio test where the underlying model is a multinomial model.
Suppose we had a sample formula_7 where each formula_8 is the number of times that an object of type formula_9 was observed. Furthermore, let formula_10 be the total number of objects observed. If we assume that the underlying model is multinomial, then the test statistic is defined byformula_11where formula_12 is the null hypothesis and formula_13 is the maximum likelihood estimate (MLE) of the parameters given the data. Recall that for the multinomial model, the MLE of formula_14 given some data is defined byformula_15Furthermore, we may represent each null hypothesis parameter formula_16 asformula_17Thus, by substituting the representations of formula_12 and formula_18 in the log-likelihood ratio, the equation simplifies toformula_19Relabel the variables formula_20 with formula_21 and formula_8 with formula_22. Finally, multiply by a factor of formula_23 (used to make the G test formula asymptotically equivalent to the Pearson's chi-squared test formula) to achieve the form
formula_24
Heuristically, one can imagine formula_25 as continuous and approaching zero, in which case formula_26 and terms with zero observations can simply be dropped. However, the expected count in each cell must be strictly greater than zero (formula_27) for the method to apply.
Distribution and use.
Given the null hypothesis that the observed frequencies result from random sampling from a distribution with the given expected frequencies, the distribution of "G" is approximately a chi-squared distribution, with the same number of degrees of freedom as in the corresponding chi-squared test.
For very small samples the multinomial test for goodness of fit, Fisher's exact test for contingency tables, or even Bayesian hypothesis selection are preferable to the "G"-test. McDonald recommends always using an exact test (exact test of goodness-of-fit, Fisher's exact test) if the total sample size is less than 1,000.
There is nothing magical about a sample size of 1000, it's just a nice round number that is well within the range where an exact test, chi-square test, and "G"–test will give almost identical p values. Spreadsheets, web-page calculators, and SAS shouldn't have any problem doing an exact test on a sample size of 1000.
— John H. McDonald
"G"-tests have been recommended at least since the 1981 edition of "Biometry", a statistics textbook by Robert R. Sokal and F. James Rohlf.
Relation to other metrics.
Relation to the chi-squared test.
The commonly used chi-squared tests for goodness of fit to a distribution and for independence in contingency tables are in fact approximations of the log-likelihood ratio on which the "G"-tests are based.
The general formula for Pearson's chi-squared test statistic is
formula_28
The approximation of "G" by chi squared is obtained by a second order Taylor expansion of the natural logarithm around 1 (see #Derivation (chi-squared) below).
We have formula_29 when the observed counts formula_25 are close to the expected counts formula_30 When this difference is large, however, the formula_31 approximation begins to break down. Here, the effects of outliers in data will be more pronounced, and this explains why formula_31 tests fail in situations with little data.
For samples of a reasonable size, the "G"-test and the chi-squared test will lead to the same conclusions. However, the approximation to the theoretical chi-squared distribution for the "G"-test is better than for Pearson's chi-squared test. In cases where formula_32 for some cell, the "G"-test is always better than the chi-squared test.
For testing goodness-of-fit the "G"-test is infinitely more efficient than the chi-squared test in the sense of Bahadur, but the two tests are equally efficient in the sense of Pitman or in the sense of Hodges and Lehmann.
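A minimal Python sketch, using invented counts with one cell deliberately satisfying formula_32, makes the comparison between the two statistics concrete:

```python
import math

def g_statistic(observed, expected):
    return 2.0 * sum(o * math.log(o / e) for o, e in zip(observed, expected) if o > 0)

def chi2_statistic(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Case 1: observed counts close to the expected counts -- the two statistics nearly coincide
obs1, exp1 = [48, 52, 102, 98], [50, 50, 100, 100]
# Case 2: one cell with an observed count more than twice its expectation -- they diverge
obs2, exp2 = [22, 3, 45, 30], [10, 10, 40, 40]

for obs, exp in [(obs1, exp1), (obs2, exp2)]:
    print(f"G = {g_statistic(obs, exp):.3f}   chi-squared = {chi2_statistic(obs, exp):.3f}")
```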
Derivation (chi-squared).
Consider
formula_33
and let formula_34 with formula_35 so that the total number of counts remains the same. Upon substitution we find,
formula_36
A Taylor expansion of the logarithm around 1 can be performed using formula_38, with formula_37 as its argument. The result is
formula_39 and distributing terms we find,
formula_40
Now, using the fact that formula_41 and formula_42 we can write the result,
formula_43
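A minimal symbolic check of this expansion, assuming sympy is available; it expands a single summand of "G" in the small difference between observed and expected counts and recovers the quadratic approximation above:

```python
import sympy as sp

E = sp.symbols('E', positive=True)       # expected count E_i > 0
d = sp.symbols('delta')                  # delta_i = O_i - E_i, treated as small

term = 2 * (E + d) * sp.log(1 + d / E)   # one summand of G after the substitution O_i = E_i + delta_i
print(sp.series(term, d, 0, 3))          # expected output: 2*delta + delta**2/E + O(delta**3)
```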
Relation to Kullback–Leibler divergence.
The "G"-test statistic is proportional to the Kullback–Leibler divergence of the theoretical distribution from the empirical distribution:
formula_44
where "N" is the total number of observations and formula_45 and formula_46 are the empirical and theoretical frequencies, respectively.
Relation to mutual information.
For analysis of contingency tables the value of "G" can also be expressed in terms of mutual information.
Let
formula_47 , formula_48 , formula_49, and formula_50.
Then "G" can be expressed in several alternative forms:
formula_51
formula_52
formula_53
where the entropy of a discrete random variable formula_54 is defined as
formula_55
and where
formula_56
is the mutual information between the row vector "r" and the column vector "c" of the contingency table.
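A minimal Python sketch, using an invented 2×3 contingency table, checking that the entropy-based and direct forms of "G" agree:

```python
import numpy as np

O = np.array([[10, 20, 30],        # invented contingency table of counts O_ij
              [25, 15, 20]], dtype=float)
N = O.sum()
pi = O / N                          # pi_ij
pi_r = pi.sum(axis=1)               # row marginals pi_i.
pi_c = pi.sum(axis=0)               # column marginals pi_.j

def H(p):
    """Shannon entropy in nats, ignoring zero cells."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

G_entropy = 2 * N * (H(pi_r) + H(pi_c) - H(pi.ravel()))   # G = 2 N [H(r) + H(c) - H(r,c)]

mask = pi > 0
G_direct = 2 * N * np.sum(pi[mask] * np.log(pi[mask] / np.outer(pi_r, pi_c)[mask]))

print(G_entropy, G_direct)          # both equal 2 N * MI(r, c)
```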
It can also be shown that the inverse document frequency weighting commonly used for text retrieval is an approximation of "G" applicable when the row sum for the query is much smaller than the row sum for the remainder of the corpus. Similarly, the result of Bayesian inference applied to a choice of single multinomial distribution for all rows of the contingency table taken together versus the more general alternative of a separate multinomial per row produces results very similar to the "G" statistic.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " G = 2\\sum_{i} {O_{i} \\cdot \\ln\\left(\\frac{O_i}{E_i}\\right)}, "
},
{
"math_id": 1,
"text": "O_i \\geq 0"
},
{
"math_id": 2,
"text": "E_i > 0"
},
{
"math_id": 3,
"text": "\\ln"
},
{
"math_id": 4,
"text": "G"
},
{
"math_id": 5,
"text": "\\sum_i O_i = \\sum_i E_i = N"
},
{
"math_id": 6,
"text": "N"
},
{
"math_id": 7,
"text": "x = (x_1, \\ldots, x_m)"
},
{
"math_id": 8,
"text": "x_i"
},
{
"math_id": 9,
"text": "i"
},
{
"math_id": 10,
"text": "n = \\sum_{i=1}^m x_i"
},
{
"math_id": 11,
"text": "\\ln \\left( \\frac{L(\\tilde{\\theta}|x)}{L(\\hat{\\theta}|x)} \\right)\n= \\ln \\left( \\frac{\\prod_{i=1}^m \\tilde{\\theta}_i^{x_i}}{\\prod_{i=1}^m \\hat{\\theta}_i^{x_i}} \\right)"
},
{
"math_id": 12,
"text": "\\tilde{\\theta}"
},
{
"math_id": 13,
"text": "\\hat{\\theta}"
},
{
"math_id": 14,
"text": "\\hat{\\theta}_i"
},
{
"math_id": 15,
"text": "\\hat{\\theta}_i = \\frac{x_i}{n}"
},
{
"math_id": 16,
"text": "\\tilde{\\theta}_i"
},
{
"math_id": 17,
"text": "\\tilde{\\theta}_i = \\frac{e_i}{n}"
},
{
"math_id": 18,
"text": "\\hat{\\theta}"
},
{
"math_id": 19,
"text": "\\begin{align}\n\\ln \\left( \\frac{L(\\tilde{\\theta}|x)}{L(\\hat{\\theta}|x)} \\right)\n&= \\ln \\prod_{i=1}^m \\left(\\frac{e_i}{x_i}\\right)^{x_i} \\\\\n&= \\sum_{i=1}^m x_i \\ln\\left(\\frac{e_i}{x_i}\\right) \\\\\n\\end{align}"
},
{
"math_id": 20,
"text": "e_i"
},
{
"math_id": 21,
"text": "E_i"
},
{
"math_id": 22,
"text": "O_i"
},
{
"math_id": 23,
"text": "-2"
},
{
"math_id": 24,
"text": "\\begin{alignat}{2}\nG & = & \\; -2 \\sum_{i=1}^m O_i \\ln\\left(\\frac{E_i}{O_i}\\right) \\\\\n & = & 2 \\sum_{i=1}^m O_i \\ln\\left(\\frac{O_i}{E_i}\\right)\n\\end{alignat}"
},
{
"math_id": 25,
"text": "~ O_i ~"
},
{
"math_id": 26,
"text": "~ O_i \\ln O_i \\to 0 ~,"
},
{
"math_id": 27,
"text": "~ E_i > 0 ~ \\forall \\, i ~"
},
{
"math_id": 28,
"text": " \\chi^2 = \\sum_{i} {\\frac{\\left(O_i - E_i\\right)^2}{E_i}} ~."
},
{
"math_id": 29,
"text": " G \\approx \\chi^2 "
},
{
"math_id": 30,
"text": "~ E_i ~."
},
{
"math_id": 31,
"text": "~ \\chi^2 ~"
},
{
"math_id": 32,
"text": "~ O_i > 2 \\cdot E_i ~"
},
{
"math_id": 33,
"text": " G = 2\\sum_{i} {O_{i} \\ln\\left(\\frac{O_i}{E_i}\\right)} ~,"
},
{
"math_id": 34,
"text": "O_i = E_i + \\delta_i"
},
{
"math_id": 35,
"text": "\\sum_i \\delta_i = 0 ~,"
},
{
"math_id": 36,
"text": " G = 2\\sum_{i} {(E_i + \\delta_i) \\ln \\left(1+\\frac{\\delta_i}{E_i}\\right)} ~."
},
{
"math_id": 37,
"text": "1+\\frac{\\delta_i}{E_i}"
},
{
"math_id": 38,
"text": " \\ln(1 + x) = x - \\frac{1}{2}x^2 + \\mathcal{O}(x^3) "
},
{
"math_id": 39,
"text": " G = 2\\sum_{i} (E_i + \\delta_i) \\left(\\frac{\\delta_i}{E_i} - \\frac{1}{2}\\frac{\\delta_i^2}{E_i^2} + \\mathcal{O}\\left(\\delta_i^3\\right) \\right) ~,"
},
{
"math_id": 40,
"text": " G = 2\\sum_{i} \\delta_i + \\frac{1}{2}\\frac{\\delta_i^2}{E_i} + \\mathcal{O}\\left(\\delta_i^3\\right)~."
},
{
"math_id": 41,
"text": "~ \\sum_{i} \\delta_i = 0 ~"
},
{
"math_id": 42,
"text": "~ \\delta_i = O_i - E_i ~,"
},
{
"math_id": 43,
"text": "~ G \\approx \\sum_{i} \\frac{\\left(O_i-E_i\\right)^2}{E_i} ~."
},
{
"math_id": 44,
"text": "\n\\begin{align}\nG &= 2\\sum_{i} {O_{i} \\cdot \\ln\\left(\\frac{O_i}{E_i}\\right)} \n = 2 N \\sum_{i} {o_i \\cdot \\ln\\left(\\frac{o_i}{e_i}\\right)} \\\\\n &= 2 N \\, D_{\\mathrm{KL}}(o\\|e),\n\\end{align}"
},
{
"math_id": 45,
"text": "o_i"
},
{
"math_id": 46,
"text": "e_i"
},
{
"math_id": 47,
"text": "N = \\sum_{ij}{O_{ij}} \\; "
},
{
"math_id": 48,
"text": " \\; \\pi_{ij} = \\frac{O_{ij}}{N} \\;"
},
{
"math_id": 49,
"text": "\\; \\pi_{i.} = \\frac{\\sum_j O_{ij}}{N} \\; "
},
{
"math_id": 50,
"text": "\\; \\pi_{. j} = \\frac{\\sum_i O_{ij}}{N} \\;"
},
{
"math_id": 51,
"text": " G = 2 \\cdot N \\cdot \\sum_{ij}{\\pi_{ij} \\left( \\ln(\\pi_{ij})-\\ln(\\pi_{i.})-\\ln(\\pi_{.j}) \\right)} ,"
},
{
"math_id": 52,
"text": " G = 2 \\cdot N \\cdot \\left[ H(r) + H(c) - H(r,c) \\right] , "
},
{
"math_id": 53,
"text": " G = 2 \\cdot N \\cdot \\operatorname{MI}(r,c) \\, ,"
},
{
"math_id": 54,
"text": "X \\,"
},
{
"math_id": 55,
"text": " H(X) = - {\\sum_{x \\in \\text{Supp}(X)} p(x) \\log p(x)} \\, ,"
},
{
"math_id": 56,
"text": " \\operatorname{MI}(r,c)= H(r) + H(c) - H(r,c) \\, "
}
] | https://en.wikipedia.org/wiki?curid=914820 |
9148277 | Mathematical descriptions of the electromagnetic field | Formulations of electromagnetism
There are various mathematical descriptions of the electromagnetic field that are used in the study of electromagnetism, one of the four fundamental interactions of nature. In this article, several approaches are discussed; generally speaking, the equations are written in terms of electric and magnetic fields, potentials, and charges with currents.
Vector field approach.
The most common description of the electromagnetic field uses two three-dimensional vector fields called the electric field and the magnetic field. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as E("x", "y", "z", "t") (electric field) and B("x", "y", "z", "t") (magnetic field).
If only the electric field (E) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (B) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations.
Maxwell's equations in the vector field approach.
The behaviour of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by the Maxwell–Heaviside equations:
where "ρ" is the charge density, which can (and often does) depend on time and position, "ε"0 is the electric constant, "μ"0 is the magnetic constant, and J is the current per unit area, also a function of time and position. The equations take this form with the International System of Quantities.
When dealing with only nondispersive isotropic linear materials, Maxwell's equations are often modified to ignore bound charges by replacing the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. For some materials that have more complex responses to electromagnetic fields, these properties can be represented by tensors, with time-dependence related to the material's ability to respond to rapid field changes (dispersion (optics), Green–Kubo relations), and possibly also field dependencies representing nonlinear and/or nonlocal material responses to large amplitude fields (nonlinear optics).
Potential field approach.
Many times in the use and calculation of electric and magnetic fields, the approach used first computes an associated potential: the electric potential, formula_0, for the electric field, and the magnetic vector potential, A, for the magnetic field. The electric potential is a scalar field, while the magnetic potential is a vector field. This is why sometimes the electric potential is called the scalar potential and the magnetic potential is called the vector potential. These potentials can be used to find their associated fields as follows:
formula_1
formula_2
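A minimal symbolic sketch, assuming sympy is available and using arbitrarily chosen potentials; it computes E and B from the scalar and vector potentials and verifies that the two homogeneous Maxwell equations then hold identically:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# Arbitrarily chosen (illustrative) potentials
phi = x * y * sp.exp(-t)
A = sp.Matrix([sp.sin(y) * t, z * x, sp.cos(x) * t**2])   # components (A_x, A_y, A_z)

def grad(f):
    return sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

E = -grad(phi) - A.diff(t)   # E = -grad(phi) - dA/dt
B = curl(A)                  # B = curl(A)

# Fields built from potentials satisfy the homogeneous Maxwell equations identically:
print(sp.simplify(div(B)))               # Gauss's law for magnetism: 0
print(sp.simplify(curl(E) + B.diff(t)))  # Faraday's law: the zero vector
```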
Maxwell's equations in potential formulation.
These relations can be substituted into Maxwell's equations to express the latter in terms of the potentials. Faraday's law and Gauss's law for magnetism (the homogeneous equations) turn out to be identically true for any potentials. This is because of the way the fields are expressed as gradients and curls of the scalar and vector potentials. The homogeneous equations in terms of these potentials involve the divergence of the curl formula_3 and the curl of the gradient formula_4, which are always zero. The other two of Maxwell's equations (the inhomogeneous equations) are the ones that describe the dynamics in the potential formulation.
Maxwell's equations ("potential formulation")
formula_5
formula_6
These equations taken together are as powerful and complete as Maxwell's equations. Moreover, the problem has been reduced somewhat, as the electric and magnetic fields together had six components to solve for. In the potential formulation, there are only four components: the electric potential and the three components of the vector potential. However, the equations are messier than Maxwell's equations using the electric and magnetic fields.
Gauge freedom.
These equations can be simplified by taking advantage of the fact that the electric and magnetic fields are physically meaningful quantities that can be measured; the potentials are not. There is a freedom, called gauge freedom, to constrain the form of the potentials provided that this does not affect the resultant electric and magnetic fields. Specifically, for these equations, for any choice of a twice-differentiable scalar function of position and time "λ", if ("φ", A) is a solution for a given system, then so is another potential ("φ"′, A′) given by:
formula_7
formula_8
This freedom can be used to simplify the potential formulation. Either of two such scalar functions is typically chosen: the Coulomb gauge and the Lorenz gauge.
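A minimal symbolic check of this gauge freedom, assuming sympy is available and using arbitrarily chosen potentials and gauge function "λ":

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

phi = x**2 * t                                   # arbitrarily chosen potentials
A = sp.Matrix([y * t, x * z, sp.sin(t) * y])
lam = x * y * z * t + sp.cos(x * t)              # arbitrary twice-differentiable gauge function

def grad(f):
    return sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def fields(phi_, A_):
    return -grad(phi_) - A_.diff(t), curl(A_)

phi_prime = phi - sp.diff(lam, t)                # phi' = phi - d(lambda)/dt
A_prime = A + grad(lam)                          # A'   = A + grad(lambda)

E, B = fields(phi, A)
E_prime, B_prime = fields(phi_prime, A_prime)

print(sp.simplify(E - E_prime))   # zero vector: E is unchanged
print(sp.simplify(B - B_prime))   # zero vector: B is unchanged
```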
Coulomb gauge.
The Coulomb gauge is chosen in such a way that formula_9, which corresponds to the case of magnetostatics. In terms of "λ", this means that it must satisfy the equation
formula_10
This choice of function results in the following formulation of Maxwell's equations:
formula_11
formula_12
Several features of Maxwell's equations in the Coulomb gauge are as follows. Firstly, solving for the electric potential is very easy, as the equation is a version of Poisson's equation. Secondly, solving for the magnetic vector potential is particularly difficult. This is the big disadvantage of this gauge. The third thing to note, and something that is not immediately obvious, is that the electric potential changes instantly everywhere in response to a change in conditions in one locality.
For instance, if a charge is moved in New York at 1 pm local time, then a hypothetical observer in Australia who could measure the electric potential directly would measure a change in the potential at 1 pm New York time. This seemingly violates causality in special relativity, i.e. the impossibility of information, signals, or anything travelling faster than the speed of light. The resolution to this apparent problem lies in the fact that, as previously stated, no observers can measure the potentials; they measure the electric and magnetic fields. So, the combination of ∇"φ" and ∂A/∂"t" used in determining the electric field restores the speed limit imposed by special relativity for the electric field, making all observable quantities consistent with relativity.
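A minimal numerical sketch of the Coulomb-gauge electric-potential equation, assuming numpy is available; it uses plain Jacobi relaxation on a small 2D grid with an invented charge distribution and units in which "ε"0 = 1:

```python
import numpy as np

# Solve  laplacian(phi) = -rho / eps0  on a 2D grid with phi = 0 on the boundary,
# using plain Jacobi relaxation (a deliberately simple, slow method).
n, L, eps0 = 101, 1.0, 1.0
h = L / (n - 1)

rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0 / h**2      # crude point-like charge at the centre

phi = np.zeros((n, n))
for _ in range(5000):                 # fixed iteration count keeps the sketch simple
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                              phi[1:-1, 2:] + phi[1:-1, :-2] +
                              h**2 * rho[1:-1, 1:-1] / eps0)

print(phi[n // 2, n // 2], phi[n // 2, n // 2 + 10])   # potential falls off away from the charge
```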
Lorenz gauge condition.
A gauge that is often used is the Lorenz gauge condition. In this, the scalar function "λ" is chosen such that
formula_13
meaning that "λ" must satisfy the equation
formula_14
The Lorenz gauge results in the following form of Maxwell's equations:
formula_15
formula_16
The operator formula_17 is called the d'Alembertian (some authors denote this by only the square formula_18). These equations are inhomogeneous versions of the wave equation, with the terms on the right side of the equation serving as the source functions for the wave. As with any wave equation, these equations lead to two types of solution: advanced potentials (which are related to the configuration of the sources at future points in time), and retarded potentials (which are related to the past configurations of the sources); the former are usually disregarded where the field is to be analyzed from a causality perspective.
As pointed out above, the Lorenz gauge is no more valid than any other gauge since the potentials cannot be directly measured; however, the Lorenz gauge has the advantage that the resulting equations are Lorentz invariant.
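A minimal numerical sketch of the Lorenz-gauge scalar equation reduced to one spatial dimension, assuming numpy is available and using units in which "c" = "ε"0 = 1 and an invented, localized charge density:

```python
import numpy as np

# Toy 1D version of  d2phi/dt2 - c^2 d2phi/dx2 = c^2 rho / eps0, stepped forward in time
# with a simple leapfrog scheme (c = eps0 = 1 here; boundaries held near zero).
nx, nt = 400, 300
dx = 1.0 / nx
dt = 0.5 * dx                                  # CFL-stable time step

x = np.linspace(0.0, 1.0, nx)
rho = np.exp(-((x - 0.5) / 0.02)**2)           # localized (invented) charge density

phi_prev = np.zeros(nx)
phi = np.zeros(nx)
for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
    phi_next = 2 * phi - phi_prev + dt**2 * (lap + rho)
    phi_prev, phi = phi, phi_next

print(phi.max())   # a disturbance sourced by rho has spread outward from x = 0.5
```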
Extension to quantum electrodynamics.
Canonical quantization of the electromagnetic fields proceeds by elevating the scalar and vector potentials, "φ"(x) and A(x), from fields to field operators. Substituting 1/"c"2 = "ε"0"μ"0 into the previous Lorenz gauge equations gives:
formula_19
formula_20
Here, J and "ρ" are the current and charge density of the "matter field". If the matter field is taken so as to describe the interaction of electromagnetic fields with the Dirac electron given by the four-component Dirac spinor field "ψ", the current and charge densities have form:
formula_21
where α are the first three Dirac matrices. Using this, we can re-write Maxwell's equations as:
Maxwell's equations ("QED")
formula_22
formula_23
which is the form used in quantum electrodynamics.
Geometric algebra formulations.
Analogous to the tensor formulation, two objects, one for the electromagnetic field and one for the current density, are introduced. In geometric algebra (GA) these are multivectors, which sometimes follow Ricci calculus.
Algebra of physical space.
In the Algebra of physical space (APS), also known as the Clifford algebra formula_24, the field and current are represented by multivectors.
The field multivector, known as the Riemann–Silberstein vector, is
formula_25
and the four-current multivector is
formula_26
using an orthonormal basis formula_27. Similarly, the unit pseudoscalar is formula_28, because the basis used is orthonormal. These basis vectors share the algebra of the Pauli matrices, but are usually not equated with them, as they are different objects with different interpretations.
After defining the derivative
formula_29
Maxwell's equations are reduced to the single equation
Maxwell's equations "(APS formulation)"
formula_30
In three dimensions, the derivative has a special structure allowing the introduction of a cross product:
formula_31
from which it is easily seen that Gauss's law is the scalar part, the Ampère–Maxwell law is the vector part, Faraday's law is the pseudovector part, and Gauss's law for magnetism is the pseudoscalar part of the equation. After expanding and rearranging, this can be written as
formula_32
Spacetime algebra.
We can identify APS as a subalgebra of the spacetime algebra (STA) formula_33, defining formula_34 and formula_35. The formula_36s have the same algebraic properties as the gamma matrices, but their matrix representation is not needed. The derivative is now
formula_37
The Riemann–Silberstein vector becomes a bivector
formula_38
and the charge and current density become a vector
formula_39
Owing to the identity
formula_40
Maxwell's equations reduce to the single equation
Maxwell's equations "(STA formulation)"
formula_41
Differential forms approach.
In what follows, cgs-Gaussian units, not SI units, are used. (To convert to SI, see here.) By Einstein notation, we implicitly take the sum over all values of the indices that can vary within the dimension.
Field 2-form.
In free space, where "ε" = "ε"0 and "μ" = "μ"0 are constant everywhere, Maxwell's equations simplify considerably once the language of differential geometry and differential forms is used. The electric and magnetic fields are now jointly described by a 2-form F in a 4-dimensional spacetime manifold. The Faraday tensor formula_42 (electromagnetic tensor) can be written as a 2-form in Minkowski space with metric signature (− + + +) as
formula_43
which is the exterior derivative of the electromagnetic four-potential formula_44
formula_45
The source-free equations can be written by the action of the exterior derivative on this 2-form. But for the equations with source terms (Gauss's law and the Ampère–Maxwell equation), the Hodge dual of this 2-form is needed. The Hodge star operator takes a "p"-form to a ("n" − "p")-form, where "n" is the number of dimensions. Here, it takes the 2-form ("F") and gives another 2-form (in four dimensions, "n" − "p" = 4 − 2 = 2). For the basis cotangent vectors, the Hodge dual is given as
formula_46
and so on. Using these relations, the dual of the Faraday 2-form is the Maxwell tensor,
formula_47
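A minimal numerical check of this component pattern, assuming numpy is available, the coordinate ordering ("t", "x", "y", "z"), the metric signature (− + + +), and the convention "ε""txyz" = +1:

```python
import itertools
import numpy as np

def levi_civita(indices):
    """Totally antisymmetric symbol with epsilon_{0123} = +1 (here 0 = t)."""
    if len(set(indices)) < 4:
        return 0
    sign = 1
    idx = list(indices)
    for i in range(4):
        for j in range(i + 1, 4):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # Minkowski metric, coordinates ordered (t, x, y, z)
Ex, Ey, Ez, Bx, By, Bz = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0

F = np.zeros((4, 4))                     # Faraday 2-form components F_{mu nu}
F[1, 0], F[2, 0], F[3, 0] = Ex, Ey, Ez   # E_x dx^dt  ->  F_{xt} = E_x, etc.
F[2, 3], F[3, 1], F[1, 2] = Bx, By, Bz   # B_x dy^dz  ->  F_{yz} = B_x, etc.
F = F - F.T                              # antisymmetrize

F_up = eta @ F @ eta                     # raise both indices: F^{mu nu}

star_F = np.zeros((4, 4))
for mu, nu in itertools.product(range(4), repeat=2):
    star_F[mu, nu] = 0.5 * sum(levi_civita((mu, nu, r, s)) * F_up[r, s]
                               for r, s in itertools.product(range(4), repeat=2))

# Expected from the component pattern in the text: (*F)_{xt} = -B_x and (*F)_{yz} = +E_x
print(star_F[1, 0], star_F[2, 3])        # -> -4.0 and 1.0
```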
Current 3-form, dual current 1-form.
Here, the 3-form J is called the "electric current form" or "current 3-form":
formula_48
That F is a closed form, and the exterior derivative of its Hodge dual is the current 3-form, express Maxwell's equations:
Maxwell's equations
formula_49
formula_50
Here d denotes the exterior derivative – a natural coordinate- and metric-independent differential operator acting on forms, and the (dual) Hodge star operator formula_51 is a linear transformation from the space of 2-forms to the space of (4 − 2)-forms defined by the metric in Minkowski space (in four dimensions even by any metric conformal to this metric). The fields are in natural units where 1/(4"πε"0) = 1.
Since d2 = 0, the 3-form J satisfies the conservation of current (continuity equation):
formula_52
The current 3-form can be integrated over a 3-dimensional space-time region. The physical interpretation of this integral is the charge in that region if it is spacelike, or the amount of charge that flows through a surface in a certain amount of time if that region is a spacelike surface cross a timelike interval.
As the exterior derivative is defined on any manifold, the differential form version of the Bianchi identity makes sense for any 4-dimensional manifold, whereas the source equation is defined if the manifold is oriented and has a Lorentz metric. In particular the differential form version of the Maxwell equations are a convenient and intuitive formulation of the Maxwell equations in general relativity.
"Note:" In much of the literature, the notations formula_53 and formula_54 are switched, so that formula_53 is a 1-form called the current and formula_54 is a 3-form called the dual current.
Linear macroscopic influence of matter.
In a linear, macroscopic theory, the influence of matter on the electromagnetic field is described through more general linear transformation in the space of 2-forms. We call
formula_55
the constitutive transformation. The role of this transformation is comparable to the Hodge duality transformation. The Maxwell equations in the presence of matter then become:
formula_56
formula_57
where the current 3-form J still satisfies the continuity equation dJ = 0.
When the fields are expressed as linear combinations (of exterior products) of basis forms "θ""i",
formula_58
the constitutive relation takes the form
formula_59
where the field coefficient functions and the constitutive coefficients are antisymmetric under swapping of each one's indices. In particular, the Hodge star operator that was used in the above case is obtained by taking
formula_60
in terms of tensor index notation with respect to a (not necessarily orthonormal) basis formula_61 in a tangent space formula_62 and its dual basis formula_63 in formula_64, having the gram metric matrix formula_65 and its inverse matrix formula_66, and formula_67 is the Levi-Civita symbol with formula_68. Up to scaling, this is the only invariant tensor of this type that can be defined with the metric.
In this formulation, electromagnetism generalises immediately to any 4-dimensional oriented manifold or, with small adaptations, any manifold.
Alternative metric signature.
In the particle physicist's sign convention for the metric signature (+ − − −), the potential 1-form is
formula_69
The Faraday curvature 2-form becomes
formula_70
and the Maxwell tensor becomes
formula_71
The current 3-form J is
formula_72
and the corresponding dual 1-form is
formula_73
The current norm is now positive and equals
formula_74
with the canonical volume form formula_75.
Curved spacetime.
Traditional formulation.
Matter and energy generate curvature of spacetime. This is the subject of general relativity. Curvature of spacetime affects electrodynamics. An electromagnetic field having energy and momentum also generates curvature in spacetime. Maxwell's equations in curved spacetime can be obtained by replacing the derivatives in the equations in flat spacetime with covariant derivatives. (Whether this is the appropriate generalization requires separate investigation.) The sourced and source-free equations become (cgs-Gaussian units):
formula_76
and
formula_77
Here,
formula_78
is a Christoffel symbol that characterizes the curvature of spacetime and ∇"α" is the covariant derivative.
Formulation in terms of differential forms.
The formulation of the Maxwell equations in terms of differential forms can be used without change in general relativity. The equivalence of the more traditional general relativistic formulation using the covariant derivative with the differential form formulation can be seen as follows. Choose local coordinates "x""α", which give a basis of 1-forms d"x""α" at every point of the open set where the coordinates are defined. Using this basis and cgs-Gaussian units we define the field 2-form
formula_79
and the current 3-form
formula_80
(The epsilon tensor contracted with the differential 3-form produces 6 times the number of terms required.)
Here "g" is as usual the determinant of the matrix representing the metric tensor, "g""αβ". A small computation that uses the symmetry of the Christoffel symbols (i.e., the torsion-freeness of the Levi-Civita connection) and the covariant constantness of the Hodge star operator then shows that in this coordinate neighborhood we have:
the Bianchi identity
formula_81
the source equation
formula_82
and the continuity equation
formula_83
Classical electrodynamics as the curvature of a line bundle.
An elegant and intuitive way to formulate Maxwell's equations is to use complex line bundles or a principal U(1)-bundle, on the fibers of which U(1) acts regularly. The principal U(1)-connection ∇ on the line bundle has a curvature F = ∇2, which is a two-form that automatically satisfies dF = 0 and can be interpreted as a field strength. If the line bundle is trivial with flat reference connection "d" we can write ∇ = d + A and F = dA with A the 1-form composed of the electric potential and the magnetic vector potential.
In quantum mechanics, the connection itself is used to define the dynamics of the system. This formulation allows a natural description of the Aharonov–Bohm effect. In this experiment, a static magnetic field runs through a long magnetic wire (e.g., an iron wire magnetized longitudinally). Outside of this wire the magnetic induction is zero, in contrast to the vector potential, which essentially depends on the magnetic flux through the cross-section of the wire and does not vanish outside. Since there is no electric field either, the Maxwell tensor F = 0 throughout the space-time region outside the tube, during the experiment. This means by definition that the connection ∇ is flat there.
In the aforementioned Aharonov–Bohm effect, however, the connection depends on the magnetic field through the tube since the holonomy along a non-contractible curve encircling the tube is the magnetic flux through the tube in the proper units. This can be detected quantum-mechanically with a double-slit electron diffraction experiment on an electron wave traveling around the tube. The holonomy corresponds to an extra phase shift, which leads to a shift in the diffraction pattern.
Discussion and other approaches.
Following are the reasons for using each of such formulations.
Potential formulation.
In advanced classical mechanics it is often useful, and in quantum mechanics frequently essential, to express Maxwell's equations in a "potential formulation" involving the electric potential (also called scalar potential) "φ", and the magnetic potential (a vector potential) A. For example, the analysis of radio antennas makes full use of Maxwell's vector and scalar potentials to separate the variables, a common technique used in formulating the solutions of differential equations. The potentials can be introduced by using the Poincaré lemma on the homogeneous equations to solve them in a universal way (this assumes that we consider a topologically simple, e.g. contractible space). The potentials are defined as in the table above. Alternatively, these equations define E and B in terms of the electric and magnetic potentials that then satisfy the homogeneous equations for E and B as identities. Substitution gives the non-homogeneous Maxwell equations in potential form.
Many different choices of A and "φ" are consistent with given observable electric and magnetic fields E and B, so the potentials seem to contain more (classically unobservable) information. The non-uniqueness of the potentials is well understood, however. For every scalar function of position and time "λ"("x", "t"), the potentials can be changed by a gauge transformation as
formula_84
without changing the electric and magnetic field. Two pairs of gauge transformed potentials ("φ", A) and ("φ"′, A′) are called "gauge equivalent", and the freedom to select any pair of potentials in its gauge equivalence class is called gauge freedom. Again by the Poincaré lemma (and under its assumptions), gauge freedom is the only source of indeterminacy, so the field formulation is equivalent to the potential formulation if we consider the potential equations as equations for gauge equivalence classes.
The potential equations can be simplified using a procedure called gauge fixing. Since the potentials are only defined up to gauge equivalence, we are free to impose additional equations on the potentials, as long as for every pair of potentials there is a gauge equivalent pair that satisfies the additional equations (i.e. if the gauge fixing equations define a slice to the gauge action). The gauge-fixed potentials still have a gauge freedom under all gauge transformations that leave the gauge fixing equations invariant. Inspection of the potential equations suggests two natural choices. In the Coulomb gauge, we impose ∇ ⋅ A = 0, which is mostly used in the case of magnetostatics when we can neglect the "c"−2∂2A/∂"t"2 term. In the Lorenz gauge (named after the Dane Ludvig Lorenz), we impose
formula_85
The Lorenz gauge condition has the advantage of being Lorentz invariant and leading to Lorentz-invariant equations for the potentials.
Manifestly covariant (tensor) approach.
Maxwell's equations are exactly consistent with special relativity—i.e., if they are valid in one inertial reference frame, then they are automatically valid in every other inertial reference frame. In fact, Maxwell's equations were crucial in the historical development of special relativity. However, in the usual formulation of Maxwell's equations, their consistency with special relativity is not obvious; it can only be proven by a laborious calculation.
For example, consider a conductor moving in the field of a magnet. In the frame of the magnet, that conductor experiences a "magnetic" force. But in the frame of a conductor moving relative to the magnet, the conductor experiences a force due to an "electric" field. The motion is exactly consistent in these two different reference frames, but it mathematically arises in quite different ways.
For this reason and others, it is often useful to rewrite Maxwell's equations in a way that is "manifestly covariant"—i.e. "obviously" consistent with special relativity, even with just a glance at the equations—using covariant and contravariant four-vectors and tensors. This can be done using the EM tensor F, or the 4-potential A, with the 4-current J.
Differential forms approach.
Gauss's law for magnetism and the Faraday–Maxwell law can be grouped together since the equations are homogeneous, and be seen as geometric "identities" expressing the "field" F (a 2-form), which can be derived from the "4-potential" A. Gauss's law for electricity and the Ampere–Maxwell law could be seen as the "dynamical equations of motion" of the fields, obtained via the Lagrangian principle of least action, from the "interaction term" AJ (introduced through gauge covariant derivatives), coupling the field to matter. For the field formulation of Maxwell's equations in terms of a principle of extremal action, see electromagnetic tensor.
Often, the time derivative in the Faraday–Maxwell equation motivates calling this equation "dynamical", which is somewhat misleading in the sense of the preceding analysis. This is rather an artifact of breaking relativistic covariance by choosing a preferred time direction. To have physical degrees of freedom propagated by these field equations, one must include a kinetic term F ∧ ⋆F for A, and take into account the non-physical degrees of freedom that can be removed by gauge transformation A ↦ A − d"α". See also gauge fixing and Faddeev–Popov ghosts.
Geometric calculus approach.
This formulation uses the algebra that spacetime generates through the introduction of a distributive, associative (but not commutative) product called the geometric product. Elements and operations of the algebra can generally be associated with geometric meaning. The members of the algebra may be decomposed by grade (as in the formalism of differential forms) and the (geometric) product of a vector with a "k"-vector decomposes into a ("k" − 1)-vector and a ("k" + 1)-vector. The ("k" − 1)-vector component can be identified with the inner product and the ("k" + 1)-vector component with the outer product. It is of algebraic convenience that the geometric product is invertible, while the inner and outer products are not. As such, powerful techniques such as Green's functions can be used. The derivatives that appear in Maxwell's equations are vectors and electromagnetic fields are represented by the Faraday bivector F. This formulation is as general as that of differential forms for manifolds with a metric tensor, as then these are naturally identified with "r"-forms and there are corresponding operations. Maxwell's equations reduce to one equation in this formalism. This equation can be separated into parts as is done above for comparative reasons.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varphi"
},
{
"math_id": 1,
"text": "\\mathbf E = - \\mathbf \\nabla \\varphi - \\frac{\\partial \\mathbf A}{\\partial t}"
},
{
"math_id": 2,
"text": "\\mathbf B = \\mathbf \\nabla \\times \\mathbf A"
},
{
"math_id": 3,
"text": "\\nabla \\cdot \\nabla \\times \\mathbf A"
},
{
"math_id": 4,
"text": "\\nabla \\times \\nabla \\varphi"
},
{
"math_id": 5,
"text": "\\nabla^2 \\varphi + \\frac{\\partial}{\\partial t} \\left ( \\mathbf \\nabla \\cdot \\mathbf A \\right ) = - \\frac{\\rho}{\\varepsilon_0}"
},
{
"math_id": 6,
"text": "\\left ( \\nabla^2 \\mathbf A - \\frac{1}{c^2} \\frac{\\partial^2 \\mathbf A}{\\partial t^2} \\right ) - \\mathbf \\nabla \\left ( \\mathbf \\nabla \\cdot \\mathbf A + \\frac{1}{c^2} \\frac{\\partial \\varphi}{\\partial t} \\right ) = - \\mu_0 \\mathbf J"
},
{
"math_id": 7,
"text": "\\varphi' = \\varphi - \\frac{\\partial \\lambda}{\\partial t}"
},
{
"math_id": 8,
"text": "\\mathbf A' = \\mathbf A + \\mathbf \\nabla \\lambda "
},
{
"math_id": 9,
"text": "\\mathbf \\nabla \\cdot \\mathbf A' = 0"
},
{
"math_id": 10,
"text": "\\nabla^2 \\lambda = - \\mathbf \\nabla \\cdot \\mathbf A."
},
{
"math_id": 11,
"text": "\\nabla^2 \\varphi' = -\\frac{\\rho}{\\varepsilon_0}"
},
{
"math_id": 12,
"text": "\\nabla^2 \\mathbf A' - \\mu_0 \\varepsilon_0 \\frac{\\partial^2\\! \\mathbf A'}{\\partial t^2} = - \\mu_0 \\mathbf J + \\mu_0 \\varepsilon_0 \\nabla\\!\\! \\left (\\! \\frac{\\partial \\varphi'}{\\partial t} \\!\\right )"
},
{
"math_id": 13,
"text": "\\mathbf \\nabla \\cdot \\mathbf A' = - \\mu_0 \\varepsilon_0 \\frac{\\partial \\varphi'}{\\partial t} ,"
},
{
"math_id": 14,
"text": "\\nabla^2 \\lambda - \\mu_0 \\varepsilon_0 \\frac{\\partial^2 \\lambda }{\\partial t^2}= - \\mathbf \\nabla \\cdot \\mathbf A - \\mu_0 \\varepsilon_0 \\frac{\\partial \\varphi}{\\partial t} ."
},
{
"math_id": 15,
"text": "\\nabla^2 \\varphi' - \\mu_0 \\varepsilon_0 \\frac{\\partial^2 \\varphi'}{\\partial t^2} = -\\Box^2 \\varphi' = - \\frac{\\rho}{\\varepsilon_0}"
},
{
"math_id": 16,
"text": "\\nabla^2 \\mathbf A' - \\mu_0 \\varepsilon_0 \\frac{\\partial^2 \\mathbf A'}{\\partial t^2} = -\\Box^2 \\mathbf A' = - \\mu_0 \\mathbf J"
},
{
"math_id": 17,
"text": "\\Box^2"
},
{
"math_id": 18,
"text": "\\Box"
},
{
"math_id": 19,
"text": "\\nabla^2 \\mathbf A - \\frac 1 {c^2} \\frac{\\partial^2 \\mathbf A}{\\partial t^2} = - \\mu_0 \\mathbf J"
},
{
"math_id": 20,
"text": "\\nabla^2 \\varphi - \\frac 1 {c^2} \\frac{\\partial^2 \\varphi}{\\partial t^2} = - \\frac{\\rho}{\\varepsilon_0}"
},
{
"math_id": 21,
"text": "\\mathbf{J}=-e\\psi^{\\dagger}\\boldsymbol{\\alpha}\\psi\\,\\quad \\rho=-e\\psi^{\\dagger}\\psi \\,,"
},
{
"math_id": 22,
"text": "\\nabla^2 \\mathbf A - \\frac 1 {c^2} \\frac{\\partial^2 \\mathbf A}{\\partial t^2} = \\mu_0 e \\psi^{\\dagger} \\boldsymbol{\\alpha} \\psi "
},
{
"math_id": 23,
"text": "\\nabla^2 \\varphi - \\frac 1 {c^2} \\frac{\\partial^2 \\varphi}{\\partial t^2} = \\frac{1}{\\varepsilon_0} e \\psi^{\\dagger} \\psi"
},
{
"math_id": 24,
"text": "C\\ell_{3,0}(\\R)"
},
{
"math_id": 25,
"text": " \\mathbf{F} = \\mathbf{E} + Ic\\mathbf{B} = E^k\\sigma_k + IcB^k\\sigma_k ,"
},
{
"math_id": 26,
"text": " c \\rho - \\mathbf{J} = c \\rho - J^k\\sigma_k"
},
{
"math_id": 27,
"text": "\\{\\sigma_k\\}"
},
{
"math_id": 28,
"text": "I=\\sigma_1\\sigma_2\\sigma_3"
},
{
"math_id": 29,
"text": " \\boldsymbol{\\nabla} = \\sigma^k \\partial_k,"
},
{
"math_id": 30,
"text": " \\left(\\frac{1}{c}\\dfrac{\\partial }{\\partial t} + \\boldsymbol{\\nabla}\\right)\\mathbf{F} = \\mu_0 c (c \\rho - \\mathbf{J}). "
},
{
"math_id": 31,
"text": " \\boldsymbol{\\nabla}\\mathbf{F} = \\boldsymbol{\\nabla} \\cdot \\mathbf{F} + \\boldsymbol{\\nabla} \\wedge \\mathbf{F} = \\boldsymbol{\\nabla} \\cdot \\mathbf{F} + I \\boldsymbol{\\nabla} \\times \\mathbf{F}"
},
{
"math_id": 32,
"text": "\n\\left( \\boldsymbol{\\nabla} \\cdot \\mathbf{E} - \\frac{\\rho}{\\varepsilon_0} \\right)- c \\left( \\boldsymbol{\\nabla} \\times \\mathbf{B} - \\mu_0 \\varepsilon_0 \\frac{\\partial {\\mathbf{E}}}{\\partial {t}} - \\mu_0 \\mathbf{J} \\right)+ I \\left( \\boldsymbol{\\nabla} \\times \\mathbf{E} + \\frac{\\partial {\\mathbf{B}}}{\\partial {t}} \\right)+ I c \\left( \\boldsymbol{\\nabla} \\cdot \\mathbf{B} \\right)= 0 \n"
},
{
"math_id": 33,
"text": "C\\ell_{1,3}(\\mathbb{R})"
},
{
"math_id": 34,
"text": "\\sigma_k=\\gamma_k\\gamma_0"
},
{
"math_id": 35,
"text": "I=\\gamma_0\\gamma_1\\gamma_2\\gamma_3"
},
{
"math_id": 36,
"text": "\\gamma_\\mu"
},
{
"math_id": 37,
"text": "\\nabla = \\gamma^\\mu \\partial_\\mu."
},
{
"math_id": 38,
"text": "F = \\mathbf{E} + Ic\\mathbf{B} = E^1\\gamma_1\\gamma_0 + E^2\\gamma_2\\gamma_0 + E^3\\gamma_3\\gamma_0 -c(B^1\\gamma_2\\gamma_3 + B^2\\gamma_3\\gamma_1 + B^3\\gamma_1\\gamma_2),"
},
{
"math_id": 39,
"text": "J = J^\\mu \\gamma_\\mu = c \\rho \\gamma_0 + J^k \\gamma_k = \\gamma_0(c \\rho - J^k \\sigma_k)."
},
{
"math_id": 40,
"text": "\\gamma_0 \\nabla = \\gamma_0\\gamma^0 \\partial_0 + \\gamma_0\\gamma^k\\partial_k = \\partial_0 + \\sigma^k\\partial_k = \\frac{1}{c}\\dfrac{\\partial }{\\partial t} + \\boldsymbol{\\nabla},"
},
{
"math_id": 41,
"text": " \\nabla F = \\mu_0 c J. "
},
{
"math_id": 42,
"text": "F_{\\mu\\nu}"
},
{
"math_id": 43,
"text": " \\begin{align}\n\\mathbf{F} & \\equiv \\frac{1}{2}F_{\\mu\\nu} \\mathrm{d}x^{\\mu} \\wedge \\mathrm{d}x^{\\nu} \\\\\n& = B_x \\mathrm{d}y \\wedge \\mathrm{d}z + B_y \\mathrm{d}z \\wedge \\mathrm{d}x + B_z \\mathrm{d}x \\wedge \\mathrm{d}y + E_x \\mathrm{d}x \\wedge \\mathrm{d}t + E_y \\mathrm{d}y \\wedge \\mathrm{d}t + E_z \\mathrm{d}z \\wedge \\mathrm{d}t\n\\end{align}"
},
{
"math_id": 44,
"text": "\\mathbf{A} :"
},
{
"math_id": 45,
"text": " \\mathbf{A} = - \\phi\\, \\mathrm{d}t + A_x \\mathrm{d}x + A_y \\mathrm{d}y + A_z \\mathrm{d}z ."
},
{
"math_id": 46,
"text": " {\\star} ( \\mathrm{d}x \\wedge \\mathrm{d}y ) = - \\mathrm{d}z \\wedge \\mathrm{d}t ,\\quad {\\star} ( \\mathrm{d}x \\wedge \\mathrm{d}t ) = \\mathrm{d}y \\wedge \\mathrm{d}z, "
},
{
"math_id": 47,
"text": " {\\star} \\mathbf{F} = - B_x \\mathrm{d}x \\wedge \\mathrm{d}t - B_y \\mathrm{d}y \\wedge \\mathrm{d}t - B_z \\mathrm{d}z \\wedge \\mathrm{d}t + E_x \\mathrm{d}y \\wedge \\mathrm{d}z + E_y \\mathrm{d}z \\wedge \\mathrm{d}x + E_z \\mathrm{d}x \\wedge \\mathrm{d}y "
},
{
"math_id": 48,
"text": " \\mathbf{J} = \\rho\\, \\mathrm{d}x \\wedge \\mathrm{d}y \\wedge \\mathrm{d}z - j_x \\mathrm{d}t \\wedge \\mathrm{d}y \\wedge \\mathrm{d}z - j_y \\mathrm{d}t \\wedge \\mathrm{d}z \\wedge \\mathrm{d}x - j_z \\mathrm{d}t \\wedge \\mathrm{d}x \\wedge \\mathrm{d}y ."
},
{
"math_id": 49,
"text": "\\mathrm{d}\\mathbf{F}=0"
},
{
"math_id": 50,
"text": "\\mathrm{d}{\\star}\\mathbf{F}=\\mathbf{J}"
},
{
"math_id": 51,
"text": "{\\star}"
},
{
"math_id": 52,
"text": "\\mathrm{d}{\\mathbf{J}}=\\mathrm{d}^2{\\star}\\mathbf{F}=0."
},
{
"math_id": 53,
"text": "\\mathbf{J}"
},
{
"math_id": 54,
"text": "{\\star}\\mathbf{J}"
},
{
"math_id": 55,
"text": " C:\\Lambda^2\\ni\\mathbf{F}\\mapsto \\mathbf{G}\\in\\Lambda^{(4-2)}"
},
{
"math_id": 56,
"text": " \\mathrm{d}\\mathbf{F} = 0"
},
{
"math_id": 57,
"text": " \\mathrm{d}\\mathbf{G} = \\mathbf{J}"
},
{
"math_id": 58,
"text": " \\mathbf{F} = \\frac{1}{2}F_{pq}\\mathbf{\\theta}^p\\wedge\\mathbf{\\theta}^q."
},
{
"math_id": 59,
"text": " G_{pq} = C_{pq}^{mn}F_{mn}"
},
{
"math_id": 60,
"text": " C_{pq}^{mn} = \\frac{1}{2}g^{ma}g^{nb} \\varepsilon_{abpq} \\sqrt{-g} "
},
{
"math_id": 61,
"text": "\\left\\{\\frac{\\partial}{\\partial x_1}, \\ldots, \\frac{\\partial}{\\partial x_n}\\right\\}"
},
{
"math_id": 62,
"text": "V = T_p M"
},
{
"math_id": 63,
"text": "\\{dx_1,\\ldots,dx_n\\}"
},
{
"math_id": 64,
"text": "V^* = T^*_p M"
},
{
"math_id": 65,
"text": "(g_{ij}) = \\left(\\left\\langle \\frac{\\partial}{\\partial x_i}, \\frac{\\partial}{\\partial x_j}\\right\\rangle\\right)"
},
{
"math_id": 66,
"text": "(g^{ij}) = (\\langle dx^i, dx^j\\rangle)"
},
{
"math_id": 67,
"text": "\\varepsilon_{abpq}"
},
{
"math_id": 68,
"text": "\\varepsilon_{1234} = 1"
},
{
"math_id": 69,
"text": " \\mathbf{A} = \\phi\\, \\mathrm{d}t - A_x \\mathrm{d}x - A_y \\mathrm{d}y - A_z \\mathrm{d}z ."
},
{
"math_id": 70,
"text": " \\begin{align}\n\\mathbf{F} \\equiv & \\frac{1}{2}F_{\\mu\\nu} \\mathrm{d}x^{\\mu} \\wedge \\mathrm{d}x^{\\nu} \\\\\n= & E_x \\mathrm{d}t \\wedge \\mathrm{d}x + E_y \\mathrm{d}t \\wedge \\mathrm{d}y + E_z \\mathrm{d}t \\wedge \\mathrm{d}z - B_x \\mathrm{d}y \\wedge \\mathrm{d}z - B_y \\mathrm{d}z \\wedge \\mathrm{d}x - B_z \\mathrm{d}x \\wedge \\mathrm{d}y\n\\end{align}"
},
{
"math_id": 71,
"text": "{{\\star} \\mathbf{F}} = - E_x \\mathrm{d}y \\wedge \\mathrm{d}z - E_y \\mathrm{d}z \\wedge \\mathrm{d}x - E_z \\mathrm{d}x \\wedge \\mathrm{d}y - B_x \\mathrm{d}t \\wedge \\mathrm{d}x - B_y \\mathrm{d}t \\wedge \\mathrm{d}y - B_z \\mathrm{d}t \\wedge \\mathrm{d}z."
},
{
"math_id": 72,
"text": " \\mathbf{J} = - \\rho\\, \\mathrm{d}x \\wedge \\mathrm{d}y \\wedge \\mathrm{d}z + j_x \\mathrm{d}t \\wedge \\mathrm{d}y \\wedge \\mathrm{d}z + j_y \\mathrm{d}t \\wedge \\mathrm{d}z \\wedge \\mathrm{d}x + j_z \\mathrm{d}t \\wedge \\mathrm{d}x \\wedge \\mathrm{d}y"
},
{
"math_id": 73,
"text": " {{\\star}\\mathbf{J}} = -\\rho\\, \\mathrm{d}t + j_x \\mathrm{d}x + j_y \\mathrm{d}y + j_z \\mathrm{d}z ."
},
{
"math_id": 74,
"text": " {\\mathbf{J} \\wedge {\\star}\\mathbf{J}} = [\\rho^2 + (j_x)^2 + (j_y)^2 + (j_z)^2]\\,{\\star}(1)"
},
{
"math_id": 75,
"text": "{\\star}(1) = \\mathrm{d}t \\wedge \\mathrm{d}x \\wedge \\mathrm{d}y \\wedge \\mathrm{d}z"
},
{
"math_id": 76,
"text": " { 4 \\pi \\over c }j^{\\beta} = \\partial_{\\alpha} F^{\\alpha\\beta} + {\\Gamma^{\\alpha}}_{\\mu\\alpha} F^{\\mu\\beta} + {\\Gamma^{\\beta}}_{\\mu\\alpha} F^{\\alpha \\mu} \\ \\stackrel{\\mathrm{def}}{=}\\ \\nabla_{\\alpha} F^{\\alpha\\beta} \\ \\stackrel{\\mathrm{def}}{=}\\ {F^{\\alpha\\beta}}_{;\\alpha} \\, \\!"
},
{
"math_id": 77,
"text": "0 = \\partial_{\\gamma} F_{\\alpha\\beta} + \\partial_{\\beta} F_{\\gamma\\alpha} + \\partial_{\\alpha} F_{\\beta\\gamma} = \\nabla_{\\gamma} F_{\\alpha\\beta} + \\nabla_{\\beta} F_{\\gamma\\alpha} + \\nabla_{\\alpha} F_{\\beta\\gamma}.\\,"
},
{
"math_id": 78,
"text": "{\\Gamma^{\\alpha}}_{\\mu\\beta}"
},
{
"math_id": 79,
"text": " \\mathbf{F} = \\frac{1}{2}F_{\\alpha\\beta} \\,\\mathrm{d}x^{\\alpha} \\wedge \\mathrm{d}x^{\\beta}."
},
{
"math_id": 80,
"text": " \\mathbf{J} = {4 \\pi \\over c } \\left ( \\frac{1}{6} j^{\\alpha} \\sqrt{-g} \\, \\varepsilon_{\\alpha\\beta\\gamma\\delta} \\mathrm{d}x^{\\beta} \\wedge \\mathrm{d}x^{\\gamma} \\wedge \\mathrm{d}x^{\\delta}. \\right)"
},
{
"math_id": 81,
"text": " \\mathrm{d}\\mathbf{F} = 2(\\partial_{\\gamma} F_{\\alpha\\beta} + \\partial_{\\beta} F_{\\gamma\\alpha} + \\partial_{\\alpha} F_{\\beta\\gamma})\\mathrm{d}x^{\\alpha}\\wedge \\mathrm{d}x^{\\beta} \\wedge \\mathrm{d}x^{\\gamma} = 0,"
},
{
"math_id": 82,
"text": " \\mathrm{d}{\\star \\mathbf{F}} = \\frac{1}{6}{F^{\\alpha\\beta}}_{;\\alpha}\\sqrt{-g} \\, \\varepsilon_{\\beta\\gamma\\delta\\alpha}\\mathrm{d}x^{\\gamma} \\wedge \\mathrm{d}x^{\\delta} \\wedge \\mathrm{d}x^{\\alpha} = \\mathbf{J},"
},
{
"math_id": 83,
"text": " \\mathrm{d}\\mathbf{J} = { 4 \\pi \\over c } {j^{\\alpha}}_{;\\alpha} \\sqrt{-g} \\, \\varepsilon_{\\alpha\\beta\\gamma\\delta}\\mathrm{d}x^{\\alpha}\\wedge \\mathrm{d}x^{\\beta} \\wedge \\mathrm{d}x^{\\gamma} \\wedge \\mathrm{d}x^{\\delta} = 0."
},
{
"math_id": 84,
"text": "\\varphi' = \\varphi - \\frac{\\partial \\lambda}{\\partial t}, \\quad \\mathbf A' = \\mathbf A + \\mathbf \\nabla \\lambda"
},
{
"math_id": 85,
"text": "\\mathbf \\nabla \\cdot \\mathbf A + \\frac{1}{c^2} \\frac{\\partial \\varphi}{\\partial t} = 0\\,."
}
] | https://en.wikipedia.org/wiki?curid=9148277 |
914901 | Sard's theorem | Theorem in mathematical analysis
In mathematics, Sard's theorem, also known as Sard's lemma or the Morse–Sard theorem, is a result in mathematical analysis that asserts that the set of critical values (that is, the image of the set of critical points) of a smooth function "f" from one Euclidean space or manifold to another is a null set, i.e., it has Lebesgue measure 0. This makes the set of critical values "small" in the sense of a generic property. The theorem is named for Anthony Morse and Arthur Sard.
Statement.
More explicitly, let
formula_0
be formula_1 (that is, formula_2 times continuously differentiable), where formula_3. Let formula_4 denote the "critical set" of formula_5 which is the set of points formula_6 at which the Jacobian matrix of formula_7 has rank formula_8. Then the image formula_9 has Lebesgue measure 0 in formula_10.
Intuitively speaking, this means that although formula_11 may be large, its image must be small in the sense of Lebesgue measure: while formula_7 may have many critical "points" in the domain formula_12, it must have few critical "values" in the image formula_10.
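A minimal illustration for a single-variable polynomial, assuming sympy is available; the critical values form a finite set and therefore have Lebesgue measure zero:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x                     # a smooth map f: R -> R

critical_points = sp.solve(sp.diff(f, x), x)           # where the 1x1 Jacobian f'(x) has rank < 1
critical_values = sorted(f.subs(x, p) for p in critical_points)

print(critical_points)   # [-1, 1]
print(critical_values)   # [-2, 2]: a finite set, hence of Lebesgue measure zero in R
```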
More generally, the result also holds for mappings between differentiable manifolds formula_13 and formula_14 of dimensions formula_15 and formula_16, respectively. The critical set formula_11 of a formula_1 function
formula_17
consists of those points at which the differential
formula_18
has rank less than formula_15 as a linear transformation. If formula_19, then Sard's theorem asserts that the image of formula_11 has measure zero as a subset of formula_13. This formulation of the result follows from the version for Euclidean spaces by taking a countable set of coordinate patches. The conclusion of the theorem is a local statement, since a countable union of sets of measure zero is a set of measure zero, and the property of a subset of a coordinate patch having zero measure is invariant under diffeomorphism.
Variants.
There are many variants of this lemma, which plays a basic role in singularity theory among other fields. The case formula_20 was proven by Anthony P. Morse in 1939, and the general case by Arthur Sard in 1942.
A version for infinite-dimensional Banach manifolds was proven by Stephen Smale.
The statement is quite powerful, and the proof involves analysis. In topology it is often quoted — as in the Brouwer fixed-point theorem and some applications in Morse theory — in order to prove the weaker corollary that “a non-constant smooth map has at least one regular value”.
In 1965 Sard further generalized his theorem to state that if formula_17 is formula_1 for formula_3 and if formula_21 is the set of points formula_22 such that formula_23 has rank strictly less than formula_24, then the "r"-dimensional Hausdorff measure of formula_25 is zero. In particular the Hausdorff dimension of formula_25 is at most "r". Caveat: The Hausdorff dimension of formula_25 can be arbitrarily close to "r".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f\\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m"
},
{
"math_id": 1,
"text": "C^k"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "k\\geq \\max\\{n-m+1, 1\\}"
},
{
"math_id": 4,
"text": "X \\subset \\mathbb R^n"
},
{
"math_id": 5,
"text": "f,"
},
{
"math_id": 6,
"text": "x\\in \\mathbb{R}^n"
},
{
"math_id": 7,
"text": "f"
},
{
"math_id": 8,
"text": "<m"
},
{
"math_id": 9,
"text": "f(X)"
},
{
"math_id": 10,
"text": "\\mathbb{R}^m"
},
{
"math_id": 11,
"text": "X"
},
{
"math_id": 12,
"text": "\\mathbb{R}^n"
},
{
"math_id": 13,
"text": "M"
},
{
"math_id": 14,
"text": "N"
},
{
"math_id": 15,
"text": "m"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "f:N\\rightarrow M"
},
{
"math_id": 18,
"text": "df:TN\\rightarrow TM"
},
{
"math_id": 19,
"text": "k\\geq \\max\\{n-m+1,1\\}"
},
{
"math_id": 20,
"text": "m=1"
},
{
"math_id": 21,
"text": "A_r\\subseteq N"
},
{
"math_id": 22,
"text": "x\\in N"
},
{
"math_id": 23,
"text": "df_x"
},
{
"math_id": 24,
"text": "r"
},
{
"math_id": 25,
"text": "f(A_r)"
}
] | https://en.wikipedia.org/wiki?curid=914901 |
915 | Andrey Markov | Russian mathematician (1856–1922)
Andrey Andreyevich Markov (14 June 1856 – 20 July 1922) was a Russian mathematician best known for his work on stochastic processes. A primary subject of his research later became known as the Markov chain. He was also a strong, close to master-level, chess player.
Markov and his younger brother Vladimir Andreevich Markov (1871–1897) proved the Markov brothers' inequality.
His son, another Andrey Andreyevich Markov (1903–1979), was also a notable mathematician, making contributions to constructive mathematics and recursive function theory.
Biography.
Andrey Markov was born on 14 June 1856 in Russia. He attended the St. Petersburg Grammar School, where some teachers saw him as a rebellious student. Academically, he performed poorly in most subjects other than mathematics. Later in life he attended Saint Petersburg Imperial University (now Saint Petersburg State University). Among his teachers were Yulian Sokhotski (differential calculus, higher algebra), Konstantin Posse (analytic geometry), Yegor Zolotarev (integral calculus), Pafnuty Chebyshev (number theory and probability theory), Aleksandr Korkin (ordinary and partial differential equations), Mikhail Okatov (mechanism theory), Osip Somov (mechanics), and Nikolai Budajev (descriptive and higher geometry). He completed his studies at the university and was later asked if he would like to stay and have a career as a mathematician. He later taught at high schools and continued his own mathematical studies. During this time he found a practical use for his mathematical skills: he figured out that he could use chains to model the alternation of vowels and consonants in Russian literature. He also contributed to many other areas of mathematics during his lifetime. He died at age 66 on 20 July 1922.
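A minimal Python sketch in the spirit of that analysis, using an invented English sample text in place of the Russian verse Markov actually studied:

```python
from collections import Counter

# Estimate transition probabilities between vowels (V) and consonants (C)
# from a short stand-in text; the probabilities shown are purely illustrative.
text = "it was the best of times it was the worst of times"
letters = [ch for ch in text.lower() if ch.isalpha()]
states = ['V' if ch in 'aeiou' else 'C' for ch in letters]

pair_counts = Counter(zip(states, states[1:]))
totals = Counter(states[:-1])

for prev in 'VC':
    row = {nxt: pair_counts[(prev, nxt)] / totals[prev] for nxt in 'VC'}
    print(prev, row)    # empirical P(next state | current state)
```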
Timeline.
In 1877, Markov was awarded a gold medal for his outstanding solution of the problem
"About Integration of Differential Equations by Continued Fractions with an Application to the Equation" formula_0.
During the following year, he passed the candidate's examinations, and he remained at the university to prepare for a lecturer's position.
In April 1880, Markov defended his master's thesis "On the Binary Square Forms with Positive Determinant", which was directed by Aleksandr Korkin and Yegor Zolotarev. Four years later in 1884, he defended his doctoral thesis titled "On Certain Applications of the Algebraic Continuous Fractions".
His pedagogical work began after the defense of his master's thesis in autumn 1880. As a privatdozent he lectured on differential and integral calculus. Later he lectured alternately on "introduction to analysis", probability theory (succeeding Chebyshev, who had left the university in 1882) and the calculus of differences. From 1895 through 1905 he also lectured in differential calculus.
One year after the defense of his doctoral thesis, Markov was appointed extraordinary professor (1886) and in the same year he was elected adjunct to the Academy of Sciences. In 1890, after the death of Viktor Bunyakovsky, Markov became an extraordinary member of the academy. His promotion to an ordinary professor of St. Petersburg University followed in the fall of 1894.
In 1896, Markov was elected an ordinary member of the academy as the successor of Chebyshev. In 1905, he was appointed merited professor and was granted the right to retire, which he did immediately. Until 1910, however, he continued to lecture in the calculus of differences.
In connection with student riots in 1908, professors and lecturers of St. Petersburg University were ordered to monitor their students. Markov refused to accept this decree, and he wrote an explanation in which he declined to be an "agent of the governance". Markov was removed from further teaching duties at St. Petersburg University, and hence he decided to retire from the university.
Markov was an atheist. In 1912, he responded to Leo Tolstoy's excommunication from the Russian Orthodox Church by requesting his own excommunication. The Church complied with his request.
In 1913, the council of St. Petersburg elected nine scientists honorary members of the university. Markov was among them, but his election was not affirmed by the minister of education. The affirmation only occurred four years later, after the February Revolution in 1917. Markov then resumed his teaching activities and lectured on probability theory and the calculus of differences until his death in 1922.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " (1+x^2) \\frac{dy}{dx} = n (1+y^2)"
}
] | https://en.wikipedia.org/wiki?curid=915 |
915081 | Differential diagnosis | Method of analysis of a patient's history and physical examination
In healthcare, a differential diagnosis (DDx) is a method of analysis that distinguishes a particular disease or condition from others that present with similar clinical features. Differential diagnostic procedures are used by clinicians to diagnose the specific disease in a patient, or, at least, to consider any imminently life-threatening conditions. Often, each individual option of a possible disease is called a differential diagnosis (e.g., acute bronchitis could be a differential diagnosis in the evaluation of a cough, even if the final diagnosis is common cold).
More generally, a differential diagnostic procedure is a systematic diagnostic method used to identify the presence of a disease entity where multiple alternatives are possible. This method may employ algorithms, akin to the process of elimination, or at least a process of obtaining information that decreases the "probabilities" of candidate conditions to negligible levels, by using evidence such as symptoms, patient history, and medical knowledge to adjust epistemic confidences in the mind of the diagnostician (or, for computerized or computer-assisted diagnosis, the software of the system).
Differential diagnosis can be regarded as implementing aspects of the hypothetico-deductive method, in the sense that the potential presence of candidate diseases or conditions can be viewed as hypotheses that clinicians further determine as being true or false.
A differential diagnosis is also commonly used within the field of psychiatry/psychology, where two different diagnoses can be attached to a patient who is exhibiting symptoms that could fit into either diagnosis. For example, a patient who has been diagnosed with bipolar disorder may also be given a differential diagnosis of borderline personality disorder, given the similarity in the symptoms of both conditions.
Strategies used in preparing a differential diagnosis list vary with the experience of the healthcare provider. While novice providers may work systemically to assess all possible explanations for a patient's concerns, those with more experience often draw on clinical experience and pattern recognition to protect the patient from delays, risks, and cost of inefficient strategies or tests. Effective providers utilize an evidence-based approach, complementing their clinical experience with knowledge from clinical research.
General components.
A differential diagnosis has four general steps. The clinician will:
A mnemonic to help in considering multiple possible pathological processes is VINDICATEM:
Specific methods.
There are several methods for differential diagnostic procedures and several variants among those. Furthermore, a differential diagnostic procedure can be used concomitantly or alternately with protocols, guidelines, or other diagnostic procedures (such as pattern recognition or using medical algorithms).
For example, in case of medical emergency, there may not be enough time to do any detailed calculations or estimations of different probabilities, in which case the ABC protocol (airway, breathing and circulation) may be more appropriate. Later, when the situation is less acute, a more comprehensive differential diagnostic procedure may be adopted.
The differential diagnostic procedure may be simplified if a "pathognomonic" sign or symptom is found (in which case it is almost certain that the target condition is present) or in the absence of a sine qua non sign or symptom (in which case it is almost certain that the target condition is absent).
A diagnostician can be selective, considering first those disorders that are more likely (a probabilistic approach), more serious if left undiagnosed and untreated (a prognostic approach), or more responsive to treatment if offered (a pragmatic approach). Since the subjective probability of the presence of a condition is never exactly 100% or 0%, the differential diagnostic procedure may aim at specifying these various probabilities to form indications for further action.
The following are two methods of differential diagnosis, being based on epidemiology and likelihood ratios, respectively.
Epidemiology-based method.
One method of performing a differential diagnosis by epidemiology aims to estimate the probability of each candidate condition by comparing their probabilities to have occurred in the first place in the individual. It is based on probabilities related both to the presentation (such as pain) and probabilities of the various candidate conditions (such as diseases).
Theory.
The statistical basis for differential diagnosis is Bayes' theorem. As an analogy, when a die has landed the outcome is 100% certain, but the probability that it Would Have Occurred in the First Place (hereafter abbreviated WHOIFP) is still 1/6. In the same way, the probability that a presentation or condition would have occurred in the first place in an individual (WHOIFPI) is not the same as the probability that the presentation or condition "has" occurred in the individual, because the presentation "has" occurred with 100% certainty in the individual. Yet, the contributive probability fractions of each condition are assumed to be the same, relatively:
formula_0
where:
When an individual presents with a symptom or sign, Pr(Presentation has occurred in individual) is 100% and can therefore be replaced by 1, and can be ignored since division by 1 does not make any difference:
formula_1
The total probability of the presentation to have occurred in the individual can be approximated as the sum of the individual candidate conditions:
formula_2
Also, the probability of the presentation to have been caused by any candidate condition is proportional to the probability of the condition, depending on what rate it causes the presentation:
formula_3
where:
The probability that a condition would have occurred in the first place in an individual is approximately equal to that of a population that is as similar to the individual as possible except for the current presentation, compensated where possible by relative risks given by known risk factors that distinguish the individual from the population:
formula_4
where:
The following table demonstrates how these relations can be made for a series of candidate conditions:
One additional "candidate condition" is the instance of there being no abnormality, and the presentation is only a (usually relatively unlikely) appearance of a basically normal state. Its probability in the population ("P(No abnormality in population)") is complementary to the sum of probabilities of "abnormal" candidate conditions.
Example.
This example case demonstrates how this method is applied but does not represent a guideline for handling similar real-world cases. Also, the example uses relatively precise numbers, sometimes with several decimals, while in reality, there are often simply rough estimations, such as likelihoods being "very high", "high", "low" or "very low", but still using the general principles of the method.
For an individual (who becomes the "patient" in this example), a blood test of, for example, serum calcium shows a result above the standard reference range, which, by most definitions, classifies as hypercalcemia, which becomes the "presentation" in this case. A clinician (who becomes the "diagnostician" in this example), who does not currently see the patient, learns of this finding.
For practical reasons, the clinician considers that there is enough test indication to have a look at the patient's medical records. For simplicity, let's say that the only information given in the medical records is a family history of primary hyperparathyroidism (here abbreviated as PH), which may explain the finding of hypercalcemia. For this patient, let's say that the resultant hereditary risk factor is estimated to confer a relative risk of 10 (RRPH = 10).
The clinician considers that there is enough motivation to perform a differential diagnostic procedure for the finding of hypercalcemia. The main causes of hypercalcemia are primary hyperparathyroidism (PH) and cancer, so for simplicity, the list of candidate conditions that the clinician could think of can be given as primary hyperparathyroidism (PH), cancer, other conditions, and no disease.
The probability that 'primary hyperparathyroidism' (PH) would have occurred in the first place in the individual ("P(PH WHOIFPI)") can be calculated as follows:
Let's say that the last blood test taken by the patient was half a year ago and was normal, and that the incidence of primary hyperparathyroidism in a general population that appropriately matches the individual (except for the presentation and the mentioned heredity) is 1 in 4000 per year. Ignoring more detailed retrospective analyses (such as the speed of disease progress and the lag time of medical diagnosis), the time-at-risk for having developed primary hyperparathyroidism can roughly be regarded as the last half-year, because a previously developed hypercalcemia would probably have been caught by the previous blood test. This corresponds to a probability of primary hyperparathyroidism (PH) in the population of:
formula_5
With the relative risk conferred from the family history, the probability that primary hyperparathyroidism (PH) would have occurred in the first place in the individual given from the currently available information becomes:
formula_6
Primary hyperparathyroidism can be assumed to cause hypercalcemia essentially 100% of the time (rPH → hypercalcemia = 1), so this independently calculated probability of primary hyperparathyroidism (PH) can be assumed to be the same as the probability of being a cause of the presentation:
formula_7
For cancer, the same time-at-risk is assumed for simplicity, and let's say that the incidence of cancer in the area is estimated at 1 in 250 per year, giving a population probability of cancer of:
formula_8
For simplicity, let's say that any association between a family history of primary hyperparathyroidism and risk of cancer is ignored, so the relative risk for the individual to have contracted cancer in the first place is similar to that of the population (RRcancer = 1):
formula_9
However, hypercalcemia only occurs in, very approximately, 10% of cancers, (rcancer → hypercalcemia = 0.1), so:
formula_10
The probabilities that hypercalcemia would have occurred in the first place by other candidate conditions can be calculated in a similar manner. However, for simplicity, let's say that the probability that any of these would have occurred in the first place is calculated at 0.0005 in this example.
For the instance of there being no disease, the corresponding probability in the population is complementary to the sum of probabilities for other conditions:
formula_11
The probability that the individual would be healthy in the first place can be assumed to be the same:
formula_12
The rate at which a case with no abnormal condition still ends up with a serum calcium measurement above the standard reference range (thereby classifying as hypercalcemia) is, by the definition of the standard reference range, less than 2.5%. However, this probability can be further specified by considering how much the measurement deviates from the mean of the standard reference range. Let's say that the serum calcium measurement was 1.30 mmol/L, which, with a standard reference range established at 1.05 to 1.25 mmol/L, corresponds to a standard score of 3 and a corresponding probability of 0.14% that such a degree of hypercalcemia would have occurred in the first place in the case of no abnormality:
formula_13
Subsequently, the probability that hypercalcemia would have resulted from no disease can be calculated as:
formula_14
The probability that hypercalcemia would have occurred in the first place in the individual can thus be calculated as:
formula_15
Subsequently, the probability that hypercalcemia is caused by primary hyperparathyroidism (PH) in the individual can be calculated as:
formula_16
Similarly, the probability that hypercalcemia is caused by cancer in the individual can be calculated as:
formula_17
and for other candidate conditions:
formula_18
and the probability that there actually is no disease:
formula_19
For clarification, these calculations are given as the table in the method description:
Thus, this method estimates that the probability that the hypercalcemia is caused by primary hyperparathyroidism, cancer, other conditions or no disease at all are 37.3%, 6.0%, 14.9%, and 41.8%, respectively, which may be used in estimating further test indications.
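The arithmetic of this example can be reproduced with a short script. The following Python sketch simply re-encodes the illustrative figures assumed above (incidences, relative risks, causation rates, and the lumped estimate for "other conditions"); it is a calculation aid, not clinical guidance, and the variable names are chosen here only for readability.

```python
# Re-encodes the illustrative hypercalcemia example above; not clinical guidance.
# Each entry is the assumed probability that the condition "would have occurred in
# the first place in the individual" multiplied by the rate at which it causes
# hypercalcemia, i.e. Pr(hypercalcemia WHOIFPI by condition).
contributions = {
    "primary hyperparathyroidism": 10 * (0.5 / 4000) * 1.0,  # RR = 10, 1/4000 per year, 0.5 years at risk
    "cancer": 1 * (0.5 / 250) * 0.1,                         # RR = 1, 1/250 per year, causes hypercalcemia in ~10%
    "other conditions": 0.0005,                              # lumped estimate from the text
    "no disease": 0.0014,                                    # ~0.997 * 0.0014, rounded as in the text
}

total = sum(contributions.values())                          # Pr(hypercalcemia WHOIFPI) = 0.00335
for name, c in contributions.items():
    print(f"{name}: {c / total:.1%}")
# primary hyperparathyroidism: 37.3%, cancer: 6.0%, other conditions: 14.9%, no disease: 41.8%
```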
This case is continued in the example of the method described in the next section.
Likelihood ratio-based method.
The procedure of differential diagnosis can become extremely complex when additional tests and treatments are fully taken into consideration. One method that is something of a tradeoff between being clinically perfect and being relatively simple to calculate uses likelihood ratios to derive subsequent post-test likelihoods.
Theory.
The initial likelihoods for each candidate condition can be estimated by various methods, such as:
One method of estimating likelihoods even after further tests uses likelihood ratios (which are derived from sensitivities and specificities) as a multiplication factor after each test or procedure. In an ideal world, sensitivities and specificities would be established for all tests for all possible pathological conditions. In reality, however, these parameters may only be established for one of the candidate conditions. Multiplying with likelihood ratios necessitates conversion of likelihoods from probabilities to "odds in favor" (hereafter simply termed "odds") by:
formula_20
However, only the candidate conditions with known likelihood ratio need this conversion. After multiplication, conversion back to probability is calculated by:
formula_21
The rest of the candidate conditions (for which there is no established likelihood ratio for the test at hand) can, for simplicity, be adjusted by subsequently multiplying all candidate conditions with a common factor to again yield a sum of 100%.
The resulting probabilities are used for estimating the indications for further medical tests, treatments or other actions. If there is an indication for an additional test, and it returns with a result, then the procedure is repeated using the likelihood ratio of the additional test. With updated probabilities for each of the candidate conditions, the indications for further tests, treatments, or other actions change as well, and so the procedure can be repeated until an "endpoint" where there no longer is any indication for currently performing further actions. Such an endpoint mainly occurs when one candidate condition becomes so certain that no test can be found that is powerful enough to change the relative probability profile enough to motivate any change in further actions. Tactics for reaching such an endpoint with as few tests as possible include performing tests with high specificity for conditions that already have an outstandingly high profile-relative probability, because the likelihood ratio positive for such tests is high, bringing all less likely conditions to relatively lower probabilities. Alternatively, tests with high sensitivity for competing candidate conditions have a low likelihood ratio negative, so a negative result can potentially bring the probabilities for competing candidate conditions down to negligible levels. If such negligible probabilities are achieved, the clinician can rule out these conditions, and continue the differential diagnostic procedure with only the remaining candidate conditions.
Example.
This example continues for the same patient as in the example for the epidemiology-based method. As with the previous example of the epidemiology-based method, this example case is made to demonstrate how this method is applied but does not represent a guideline for handling similar real-world cases. Also, the example uses relatively precise numbers, while in reality, there are often just rough estimations. In this example, the probabilities for each candidate condition were established by the epidemiology-based method to be as follows: primary hyperparathyroidism (PH) 37.3%, cancer 6.0%, other conditions 14.9%, and no disease 41.8%.
These percentages could also have been established by experience at the particular clinic by knowing that these are the percentages for final diagnosis for people presenting to the clinic with hypercalcemia and having a family history of primary hyperparathyroidism.
The condition of highest profile-relative probability (except "no disease") is primary hyperparathyroidism (PH), but cancer is still of major concern, because if it is the actual causative condition for the hypercalcemia, then the choice of whether to treat or not likely means life or death for the patient, in effect potentially putting the indication at a similar level for further tests for both of these conditions.
Here, let's say that the clinician considers the profile-relative probabilities to be of enough concern to call the patient in for a clinician visit, with an additional visit to the medical laboratory for a further blood test complemented with additional analyses, including parathyroid hormone for the suspicion of primary hyperparathyroidism.
For simplicity, let's say that the clinician first receives the blood test (in formulas abbreviated as "BT") result for the parathyroid hormone analysis and that it showed a parathyroid hormone level that is elevated relative to what would be expected by the calcium level.
Such a constellation can be estimated to have a sensitivity of approximately 70% and a specificity of approximately 90% for primary hyperparathyroidism. This confers a likelihood ratio positive of 7 for primary hyperparathyroidism.
The probability of primary hyperparathyroidism is now termed "Pre-BTPH" because it corresponds to before the blood test (Latin preposition "prae" means before). It was estimated at 37.3%, corresponding to an odds of 0.595. With the likelihood ratio positive of 7 for the blood test, the post-test odds is calculated as:
formula_22
where:
An Odds(PostBTPH) of 4.16 is again converted to the corresponding probability by:
formula_23
The sum of the probabilities for the rest of the candidate conditions should therefore be:
formula_24
Before the blood test for parathyroid hormone, the sum of their probabilities was:
formula_25
Therefore, to conform to a sum of 100% for all candidate conditions, each of the other candidates must be multiplied by a correcting factor:
formula_26
For example, the probability of cancer after the test is calculated as:
formula_27
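The same update can be scripted. The Python sketch below assumes the pre-test probabilities from the epidemiology-based example and the likelihood ratio of 7 assumed above; the helper functions are illustrative, and the values for the conditions without a known likelihood ratio are obtained with the common correcting factor described earlier.

```python
# Post-test update for the parathyroid hormone blood test in the example above.
# Pre-test probabilities are taken from the epidemiology-based example; the
# likelihood ratio of 7 is the value assumed in the text.

def to_odds(p):
    return p / (1 - p)

def to_prob(odds):
    return odds / (odds + 1)

pre = {"PH": 0.373, "cancer": 0.060, "other conditions": 0.149, "no disease": 0.418}
lr_positive_ph = 7

post_ph = to_prob(to_odds(pre["PH"]) * lr_positive_ph)   # ~0.806
factor = (1 - post_ph) / (1 - pre["PH"])                 # ~0.309, common correcting factor
post = {name: post_ph if name == "PH" else p * factor for name, p in pre.items()}

for name, p in post.items():
    print(f"{name}: {p:.1%}")
# PH: 80.6%, cancer: 1.9%, other conditions: 4.6%, no disease: 12.9%
```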
The probabilities for each candidate condition before and after the blood test are given in the following table:
These "new" percentages, including a profile-relative probability of 80% for primary hyperparathyroidism, underlie any indications for further tests, treatments, or other actions. In this case, let's say that the clinician continues the plan for the patient to attend a clinician visit for a further checkup, especially focused on primary hyperparathyroidism.
A clinician visit can, theoretically, be regarded as a series of tests, including both questions in a medical history, as well as components of a physical examination, where the post-test probability of a previous test, can be used as the pre-test probability of the next. The indications for choosing the next test are dynamically influenced by the results of previous tests.
Let's say that the patient in this example is revealed to have at least some of the symptoms and signs of depression, bone pain, joint pain or constipation of more severity than what would be expected by the hypercalcemia itself, supporting the suspicion of primary hyperparathyroidism, and let's say that the likelihood ratios for the tests, when multiplied together, roughly results in a product of 6 for primary hyperparathyroidism.
The presence of unspecific pathologic symptoms and signs in the history and examination is often concurrently indicative of cancer as well, and let's say that the tests gave an overall likelihood ratio estimated at 1.5 for cancer. For other conditions, as well as the instance of not having any disease at all, let's say that it is unknown how they are affected by the tests at hand, as often happens in reality. This gives the following results for the history and physical examination (abbreviated as H&E):
These probabilities after the history and examination may make the physician confident enough to plan the patient for surgery for a parathyroidectomy to resect the affected tissue.
At this point, the probability of "other conditions" is so low that the physician cannot think of any test for them that could make a difference that would be substantial enough to form an indication for such a test, and the physician thereby practically regards "other conditions" as ruled out, in this case not primarily by any specific test for such other conditions that were negative, but rather by the absence of positive tests so far.
For "cancer", the cutoff at which to confidently regard it as ruled out maybe more stringent because of severe consequences of missing it, so the physician may consider that at least a histopathologic examination of the resected tissue is indicated.
This case is continued in the example of "Combinations" in the corresponding section below.
Coverage of candidate conditions.
The validity of both the initial estimation of probabilities by epidemiology and the further workup by likelihood ratios is dependent on the inclusion of candidate conditions that are responsible for as large a part as possible of the probability of having developed the condition, and it is clinically important to include those where relatively fast initiation of therapy is most likely to result in the greatest benefit. If an important candidate condition is missed, no method of differential diagnosis will supply the correct conclusion. The need to find more candidate conditions for inclusion increases with the increasing severity of the presentation itself. For example, if the only presentation is a deviating laboratory parameter and all common harmful underlying conditions have been ruled out, then it may be acceptable to stop searching for more candidate conditions, but this would much more likely be unacceptable if the presentation had been severe pain.
Combinations.
If two conditions get high post-test probabilities, especially if the sum of the probabilities for conditions with known likelihood ratios becomes higher than 100%, then the actual condition may be a combination of the two. In such cases, that combined condition can be added to the list of candidate conditions, and the calculations should start over from the beginning.
To continue the example used above, let's say that the history and physical examination were indicative of cancer as well, with a likelihood ratio of 3, giving an Odds(PostH&E) of 0.057, corresponding to a P(PostH&E) of 5.4%. This would correspond to a "Sum of known P(PostH&E)" of 101.5%. This is an indication for considering a combination of primary hyperparathyroidism and cancer, such as, in this case, a parathyroid hormone-producing parathyroid carcinoma. A recalculation may therefore be needed, with the first two conditions being separated into "primary hyperparathyroidism without cancer", "cancer without primary hyperparathyroidism" as well as "combined primary hyperparathyroidism and cancer", and likelihood ratios being applied to each condition separately. In this case, however, tissue has already been resected, wherein a histopathologic examination can be performed that includes the possibility of parathyroid carcinoma in the examination (which may entail appropriate sample staining).
Let's say that the histopathologic examination confirms primary hyperparathyroidism, but also shows a malignant pattern. By the initial epidemiology-based method, the incidence of parathyroid carcinoma is estimated at 1 in 6 million people per year, giving a very low probability before taking any tests into consideration. In comparison, the probability that non-malignant primary hyperparathyroidism would have occurred at the same time as an unrelated non-carcinoma cancer that presents with malignant cells in the parathyroid gland is calculated by multiplying the probabilities of the two. The resultant probability is, however, much smaller than the 1 in 6 million. Therefore, the probability of parathyroid carcinoma may still be close to 100% after histopathologic examination despite the low probability of its occurring in the first place.
Machine differential diagnosis.
Machine differential diagnosis is the use of computer software to partly or fully make a differential diagnosis. It may be regarded as an application of artificial intelligence. Alternatively, it may be seen as "augmented intelligence" if it meets the FDA criteria, namely that (1) it reveals the underlying data, (2) reveals the underlying logic, and (3) leaves the clinician in charge to shape and make the decision. Machine learning AI is generally seen as a device by the FDA, whereas augmented intelligence applications are not.
Many studies demonstrate improvement of quality of care and reduction of medical errors by using such decision support systems. Some of these systems are designed for a specific medical problem such as schizophrenia, Lyme disease or ventilator-associated pneumonia. Others are designed to cover all major clinical and diagnostic findings to assist physicians with faster and more accurate diagnosis.
However, these tools all still require advanced medical skills to rate symptoms and choose additional tests to deduce the probabilities of different diagnoses. Machine differential diagnosis is also currently unable to diagnose multiple concurrent disorders. Their usage by non-experts is therefore not a substitute for professional diagnosis.
History.
The method of differential diagnosis was first suggested for use in the diagnosis of mental disorders by Emil Kraepelin. It is more systematic than the old-fashioned method of diagnosis by "gestalt" (impression).
Alternative medical meanings.
"Differential diagnosis" is also used more loosely to refer simply to a list of the most common causes of a given symptom, to a list of disorders similar to a given disorder, or to such lists when they are annotated with advice on how to narrow the list down ("French's Index of Differential Diagnosis" is an example). Thus, a differential diagnosis in this sense is medical information specially organized to aid in diagnosis.
Usage apart from in medicine.
Methods similar to those of differential diagnostic processes in medicine are also used by biological taxonomists to identify and classify organisms, living and extinct. For example, after finding an unknown species, there can first be a listing of all potential species, followed by ruling out of one by one until, optimally, only one potential choice remains.
Similar procedures may be used by plant and maintenance engineers and automotive mechanics, and were formerly used in diagnosing faulty electronic circuitry.
In popular culture.
In the American television medical drama "House", the main protagonist Dr. Gregory House leads a team of diagnosticians who regularly use differential diagnostics procedures.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\n& \\frac{\\Pr(\\text{Presentation is caused by condition in individual})}{\\Pr(\\text{Presentation has occurred in individual})}\n= \\frac {\\Pr(\\text{Presentation WHOIFPI by condition})}{\\Pr(\\text{Presentation WHOIFPI})}\n\\end{align}\n"
},
{
"math_id": 1,
"text": " \\Pr(\\text{Presentation is caused by condition in individual}) = \\frac {\\Pr(\\text{Presentation WHOIFPI by condition})}{\\Pr(\\text{Presentation WHOIFPI})}"
},
{
"math_id": 2,
"text": " \\begin{align} \\Pr(\\text{Presentation WHOIFPI}) & = \\Pr(\\text{Presentation WHOIFPI by condition 1}) \\\\\n& {} + \\Pr(\\text{Presentation WHOIFPI by condition 2}) \\\\\n& {} + \\Pr(\\text{Presentation WHOIFPI by condition 3}) + \\text{etc.} \\end{align}"
},
{
"math_id": 3,
"text": " \\Pr(\\text{Presentation WHOIFPI by condition}) = \\Pr(\\text{Condition WHOIFPI}) \\cdot r_{\\text{condition} \\rightarrow \\text{presentation}},"
},
{
"math_id": 4,
"text": " \\Pr(\\text{Condition WHOIFPI}) \\approx RR_\\text{condition} \\cdot \\Pr(\\text{Condition in population}),"
},
{
"math_id": 5,
"text": " \\Pr(\\text{PH in population}) = 0.5\\text{ years} \\cdot \\frac{1}{\\text{4000 per year}} = \\frac{1}{8000}"
},
{
"math_id": 6,
"text": " \\Pr(\\text{PH WHOIFPI}) \\approx RR_{PH}\\cdot \\Pr(\\text{PH in population}) = 10 \\cdot \\frac {1}{8000} = \\frac {1}{800} = 0.00125 "
},
{
"math_id": 7,
"text": "\\begin{align} \\Pr(\\text{Hypercalcemia WHOIFPI by PH}) & = \\Pr(\\text{PH WHOIFPI}) \\cdot r_{\\text{PH} \\rightarrow \\text{hypercalcemia}} \\\\\n& = 0.00125 \\cdot 1 = 0.00125 \\end{align}"
},
{
"math_id": 8,
"text": " \\Pr(\\text{cancer in population}) = 0.5\\text{ years} \\cdot \\frac{1}{\\text{250 per year}} = \\frac{1}{500}"
},
{
"math_id": 9,
"text": " \\Pr(\\text{cancer WHOIFPI}) \\approx RR_\\text{cancer} \\cdot \\Pr(\\text{cancer in population}) = 1 \\cdot \\frac{1}{500} = \\frac{1}{500} = 0.002. "
},
{
"math_id": 10,
"text": "\\begin{align}\n& \\Pr(\\text{Hypercalcemia WHOIFPI by cancer}) \\\\\n= & \\Pr(\\text{cancer WHOIFPI}) \\cdot r_{\\text{cancer} \\rightarrow \\text{hypercalcemia}} \\\\ = & 0.002 \\cdot 0.1 = 0.0002. \\end{align} "
},
{
"math_id": 11,
"text": "\\begin{align}\n\\Pr(\\text{no disease in population}) & = 1 - \\Pr(\\text{PH in population}) - \\Pr(\\text{cancer in population}) \\\\\n& {} \\quad - \\Pr(\\text{other conditions in population}) \\\\\n& {} = 0.997.\n\\end{align}"
},
{
"math_id": 12,
"text": " \\Pr(\\text{no disease WHOIFPI}) = 0.997. \\, "
},
{
"math_id": 13,
"text": " r_{\\text{no disease} \\rightarrow \\text{hypercalcemia}} = 0.0014 "
},
{
"math_id": 14,
"text": " \\begin{align} & \\Pr(\\text{Hypercalcemia WHOIFPI by no disease}) \\\\\n= & \\Pr(\\text{no disease WHOIFPI}) \\cdot r_{\\text{no disease} \\rightarrow \\text{hypercalcemia}} \\\\\n= & 0.997 \\cdot 0.0014 \\approx 0.0014 \\end{align}"
},
{
"math_id": 15,
"text": "\\begin{align}\n& \\Pr(\\text{hypercalcemia WHOIFPI}) \\\\\n= & \\Pr(\\text{hypercalcemia WHOIFPI by PH}) + \\Pr(\\text{hypercalcemia WHOIFPI by cancer}) \\\\\n& {} + \\Pr(\\text{hypercalcemia WHOIFPI by other conditions}) + \\Pr(\\text{hypercalcemia WHOIFPI by no disease}) \\\\\n= & 0.00125 + 0.0002 + 0.0005 + 0.0014 = 0.00335 \\end{align} "
},
{
"math_id": 16,
"text": "\\begin{align} & \\Pr(\\text{hypercalcemia is caused by PH in individual}) \\\\\n= & \\frac {\\Pr(\\text{hypercalcemia WHOIFPI by PH})}{\\Pr(\\text{hypercalcemia WHOIFPI})} \\\\\n= & \\frac {0.00125}{0.00335} = 0.373 = 37.3\\% \\end{align}"
},
{
"math_id": 17,
"text": " \\begin{align} & \\Pr(\\text{hypercalcemia is caused by cancer in individual}) \\\\\n= & \\frac {\\Pr(\\text{hypercalcemia WHOIFPI by cancer})}{\\Pr(\\text{hypercalcemia WHOIFPI})} \\\\\n= & \\frac {0.0002}{0.00335} = 0.060 = 6.0\\%, \\end{align}"
},
{
"math_id": 18,
"text": "\\begin{align} & \\Pr(\\text{hypercalcemia is caused by other conditions in individual}) \\\\\n= & \\frac {\\Pr(\\text{hypercalcemia WHOIFPI by other conditions})}{\\Pr(\\text{hypercalcemia WHOIFPI})} \\\\\n= & \\frac {0.0005}{0.00335} = 0.149 = 14.9\\%, \\end{align}"
},
{
"math_id": 19,
"text": "\\begin{align} & \\Pr(\\text{hypercalcemia is present despite no disease in individual}) \\\\\n= & \\frac {\\Pr(\\text{hypercalcemia WHOIFPI by no disease})}{\\Pr(\\text{hypercalcemia WHOIFPI})} \\\\\n= & \\frac {0.0014}{0.00335} = 0.418= 41.8\\% \\end{align}"
},
{
"math_id": 20,
"text": "\\text{odds} = \\frac{\\text{probability}}{1-\\text{probability}}"
},
{
"math_id": 21,
"text": " \\text{probability} = \\frac{\\text{odds}}{\\text{odds}+1} "
},
{
"math_id": 22,
"text": " \\operatorname{Odds}(\\text{PostBT}_{PH}) = \\operatorname{Odds}(\\text{PreBT}_{PH}) \\cdot LH(BT) = 0.595 \\cdot 7 = 4.16,"
},
{
"math_id": 23,
"text": " \\Pr(\\text{PostBT}_{PH}) = \\frac{\\operatorname{Odds}(\\text{PostBT}_{PH})}{ \\operatorname{Odds}(\\text{PostBT}_{PH}) + 1} = \\frac{4.16}{4.16+1} = 0.806 = 80.6\\%"
},
{
"math_id": 24,
"text": " \\Pr(\\text{PostBT}_{rest}) = 100\\% - 80.6\\% = 19.4\\%"
},
{
"math_id": 25,
"text": " \\Pr(\\text{PreBT}_\\text{rest}) = 6.0\\% + 14.9\\% + 41.8\\% = 62.7\\% "
},
{
"math_id": 26,
"text": " \\text{Correcting factor} = \\frac{\\Pr(\\text{PostBT}_\\text{rest})}{\\Pr(\\text{PreBT}_\\text{rest})} = \\frac{19.4}{62.7} = 0.309"
},
{
"math_id": 27,
"text": " \\Pr(\\text{PostBT}_\\text{cancer}) = \\Pr(\\text{PreBT}_\\text{cancer}) \\cdot \\text{Correcting factor} = 6.0\\% \\cdot 0.309 = 1.9\\%"
}
] | https://en.wikipedia.org/wiki?curid=915081 |
9153180 | Albert algebra | In mathematics, an Albert algebra is a 27-dimensional exceptional Jordan algebra. They are named after Abraham Adrian Albert, who pioneered the study of non-associative algebras, usually working over the real numbers. Over the real numbers, there are three such Jordan algebras up to isomorphism. One of them, which was first mentioned by Pascual Jordan, John von Neumann, and Eugene Wigner (1934) and studied by Albert (1934), is the set of 3×3 self-adjoint matrices over the octonions, equipped with the binary operation
formula_0
where formula_1 denotes matrix multiplication. Another is defined the same way, but using split octonions instead of octonions. The third is constructed from the non-split octonions using a different standard involution.
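The defining product can be illustrated numerically. The following Python sketch applies the same formula formula_0 to ordinary real symmetric 3×3 matrices (a special Jordan algebra) rather than to octonionic self-adjoint matrices, since octonion arithmetic is not implemented here; the random seed and matrix size are arbitrary. It merely shows that the product is commutative, non-associative, and satisfies the Jordan identity.

```python
# Illustrates the product x∘y = (x·y + y·x)/2 from the definition above, applied to
# ordinary real symmetric 3×3 matrices (a special Jordan algebra) instead of
# octonionic self-adjoint matrices, whose nonassociative arithmetic is omitted here.
import numpy as np

rng = np.random.default_rng(0)

def sym(a):
    return (a + a.T) / 2

def jordan(x, y):
    return (x @ y + y @ x) / 2

x, y, z = (sym(rng.normal(size=(3, 3))) for _ in range(3))

print(np.allclose(jordan(x, y), jordan(y, x)))                          # commutative: True
print(np.allclose(jordan(jordan(x, y), z), jordan(x, jordan(y, z))))    # associative: generally False
x2 = jordan(x, x)
print(np.allclose(jordan(jordan(x, y), x2), jordan(x, jordan(y, x2))))  # Jordan identity: True
```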
Over any algebraically closed field, there is just one Albert algebra, and its automorphism group "G" is the simple split group of type F4. (For example, the complexifications of the three Albert algebras over the real numbers are isomorphic Albert algebras over the complex numbers.) Because of this, for a general field "F", the Albert algebras are classified by the Galois cohomology group H1("F","G").
The Kantor–Koecher–Tits construction applied to an Albert algebra gives a form of the E7 Lie algebra. The split Albert algebra is used in a construction of a 56-dimensional structurable algebra whose automorphism group has identity component the simply-connected algebraic group of type E6.
The space of cohomological invariants of Albert algebras over a field "F" (of characteristic not 2) with coefficients in Z/2Z is a free module over the cohomology ring of "F" with a basis 1, "f"3, "f"5, of degrees 0, 3, 5. The cohomological invariants with 3-torsion coefficients have a basis 1, "g"3 of degrees 0, 3. The invariants "f"3 and "g"3 are the primary components of the Rost invariant.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x \\circ y = \\frac12 (x \\cdot y + y \\cdot x),"
},
{
"math_id": 1,
"text": "\\cdot"
}
] | https://en.wikipedia.org/wiki?curid=9153180 |
9154659 | Tolman–Oppenheimer–Volkoff equation | Equation explaining structure of a spherical body of isotropic material
In astrophysics, the Tolman–Oppenheimer–Volkoff (TOV) equation constrains the structure of a spherically symmetric body of isotropic material which is in static gravitational equilibrium, as modeled by general relativity. The equation is
formula_0
Here, formula_1 is a radial coordinate, and formula_2 and formula_3 are the density and pressure, respectively, of the material at radius formula_1. The quantity formula_4, the total mass within formula_1, is discussed below.
The equation is derived by solving the Einstein equations for a general time-invariant, spherically symmetric metric. For a solution to the Tolman–Oppenheimer–Volkoff equation, this metric will take the form
formula_5
where formula_6 is determined by the constraint
formula_7
When supplemented with an equation of state, formula_8, which relates density to pressure, the Tolman–Oppenheimer–Volkoff equation completely determines the structure of a spherically symmetric body of isotropic material in equilibrium. If terms of order formula_9 are neglected, the Tolman–Oppenheimer–Volkoff equation becomes the Newtonian hydrostatic equation, used to find the equilibrium structure of a spherically symmetric body of isotropic material when general-relativistic corrections are not important.
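As a concrete illustration of how an equation of state closes the system, the following Python sketch integrates the TOV equation together with the mass equation for a polytropic equation of state P = Kρ^Γ in geometrized units (G = c = 1). The polytropic constants, central density, step size, and stopping tolerance are illustrative choices rather than a realistic stellar model, and a production code would use an adaptive, higher-order integrator.

```python
# Minimal sketch: integrate the TOV equation with dm/dr = 4*pi*r^2*rho for a
# polytropic equation of state P = K * rho**Gamma, in geometrized units G = c = 1.
# K, Gamma, the central density, step size and stopping criterion are illustrative
# choices, not a realistic neutron-star model.
import numpy as np

K, Gamma = 100.0, 2.0
rho_c = 1.28e-3                      # assumed central density

def rho_of_P(P):
    return (P / K) ** (1.0 / Gamma)

def derivatives(r, P, m):
    rho = rho_of_P(P)
    dP_dr = -(rho + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dm_dr = 4.0 * np.pi * r**2 * rho
    return dP_dr, dm_dr

r, dr = 1e-6, 1e-3
P, m = K * rho_c**Gamma, 0.0
while P > 1e-12:                     # march outward until the pressure vanishes
    dP_dr, dm_dr = derivatives(r, P, m)
    P, m, r = P + dr * dP_dr, m + dr * dm_dr, r + dr

print(f"surface radius R ~ {r:.2f}, gravitational mass M = m(R) ~ {m:.3f} (geometrized units)")
```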
If the equation is used to model a bounded sphere of material in a vacuum, the zero-pressure condition formula_10 and the condition formula_11 should be imposed at the boundary. The second boundary condition is imposed so that the metric at the boundary is continuous with the unique static spherically symmetric solution to the vacuum field equations, the Schwarzschild metric:
formula_12
Total mass.
formula_4 is the total mass contained inside radius formula_1, as measured by the gravitational field felt by a distant observer. It satisfies formula_13.
formula_14
Here, formula_15 is the total mass of the object, again, as measured by the gravitational field felt by a distant observer. If the boundary is at formula_16, continuity of the metric and the definition of formula_4 require that
formula_17
Computing the mass by integrating the density of the object over its volume, on the other hand, will yield the larger value
formula_18
The difference between these two quantities,
formula_19
will be the gravitational binding energy of the object divided by formula_20 and it is negative.
Derivation from general relativity.
Let us assume a static, spherically symmetric perfect fluid. The metric components are similar to those for the Schwarzschild metric:
formula_21
By the perfect fluid assumption, the stress-energy tensor is diagonal (in the central spherical coordinate system), with eigenvalues of energy density and pressure:
formula_22
and
formula_23
Where formula_2 is the fluid density and formula_3 is the fluid pressure.
To proceed further, we solve Einstein's field equations:
formula_24
Let us first consider the formula_25 component:
formula_26
Integrating this expression from 0 to formula_1, we obtain
formula_27
where formula_4 is as defined in the previous section.
Next, consider the formula_28 component. Explicitly, we have
formula_29
which we can simplify (using our expression for formula_30) to
formula_31
We obtain a second equation by demanding continuity of the stress-energy tensor: formula_32. Observing that formula_33 (since the configuration is assumed to be static) and that formula_34 (since the configuration is also isotropic), we obtain in particular
formula_35
Rearranging terms yields:
formula_36
This gives us two expressions, both containing formula_37. Eliminating formula_37, we obtain:
formula_38
Pulling out a factor of formula_39 and rearranging factors of 2 and formula_20 results in the Tolman–Oppenheimer–Volkoff equation stated at the beginning of this article.
History.
Richard C. Tolman analyzed spherically symmetric metrics in 1934 and 1939. The form of the equation given here was derived by J. Robert Oppenheimer and George Volkoff in their 1939 paper, "On Massive Neutron Cores". In this paper, the equation of state for a degenerate Fermi gas of neutrons was used to calculate an upper limit of ~0.7 solar masses for the gravitational mass of a neutron star. Since this equation of state is not realistic for a neutron star, this limiting mass is likewise incorrect. Using gravitational wave observations from binary neutron star mergers (like GW170817) and the subsequent information from electromagnetic radiation (kilonova), the data suggest that the maximum mass limit is close to 2.17 solar masses. Earlier estimates for this limit range from 1.5 to 3.0 solar masses.
Post-Newtonian approximation.
In the post-Newtonian approximation, i.e., for gravitational fields that deviate only slightly from the Newtonian field, the equation can be expanded in powers of formula_9. In other words, we have
formula_40
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{dP}{dr}=-\\frac{G m}{r^2}\\rho\\left(1+\\frac{P}{\\rho c^2}\\right)\\left(1+\\frac{4\\pi r^3P}{mc^2}\\right)\\left(1-\\frac{2Gm}{rc^2}\\right)^{-1}"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": "\\rho(r)"
},
{
"math_id": 3,
"text": "P(r)"
},
{
"math_id": 4,
"text": "m(r)"
},
{
"math_id": 5,
"text": "ds^2=e^{\\nu} c^2 \\,dt^2 - \\left(1-\\frac{2Gm}{rc^2}\\right)^{-1} \\,dr^2 - r^2\\left(d\\theta^2 + \\sin^2 \\theta \\,d\\phi^2\\right) "
},
{
"math_id": 6,
"text": "\\nu (r)"
},
{
"math_id": 7,
"text": "\\frac{d\\nu}{dr}=- \\left(\\frac{2}{P+\\rho c^2} \\right) \\frac{dP}{dr} "
},
{
"math_id": 8,
"text": "F(\\rho,P)=0"
},
{
"math_id": 9,
"text": "1/c^2"
},
{
"math_id": 10,
"text": "P(r)=0"
},
{
"math_id": 11,
"text": "e^{\\nu} = 1 - 2 G m/c^2 r"
},
{
"math_id": 12,
"text": "ds^2=\\left(1-\\frac{2GM}{rc^2}\\right) c^2 \\,dt^2 - \\left(1-\\frac{2GM}{rc^2}\\right)^{-1} \\,dr^2 - r^2(d\\theta^2 + \\sin^2 \\theta \\,d\\phi^2) "
},
{
"math_id": 13,
"text": "m(0) = 0"
},
{
"math_id": 14,
"text": "\\frac{dm}{dr}=4 \\pi r^2 \\rho "
},
{
"math_id": 15,
"text": "M"
},
{
"math_id": 16,
"text": "r = R"
},
{
"math_id": 17,
"text": "M=m(R)=\\int_0^{R} 4\\pi r^2 \\rho \\, dr "
},
{
"math_id": 18,
"text": "M_1=\\int_0^{R} \\frac{4\\pi r^2 \\rho}{\\sqrt{1-\\frac{2Gm}{rc^2}}} \\, dr"
},
{
"math_id": 19,
"text": "\\delta M=\\int_0^{R} 4\\pi r^2 \\rho \\left(1-\\frac{1}\\sqrt{1-\\frac{2Gm}{rc^2}}\\right) \\, dr"
},
{
"math_id": 20,
"text": "c^2"
},
{
"math_id": 21,
"text": "c^2 \\,d\\tau^2 = g_{\\mu\\nu} \\,dx^\\mu \\,dx^\\nu = e^{\\nu} c^2 \\,dt^2 - e^{\\lambda} \\,dr^2 - r^2 \\,d\\theta^2 - r^2 \\sin^2 \\theta \\,d\\phi^2 "
},
{
"math_id": 22,
"text": "T_0^0 = \\rho c^2"
},
{
"math_id": 23,
"text": "T_i^j = - P \\delta_i^j "
},
{
"math_id": 24,
"text": "\\frac{8 \\pi G}{c^4} T_{\\mu\\nu} = G_{\\mu\\nu} "
},
{
"math_id": 25,
"text": "G_{00}"
},
{
"math_id": 26,
"text": "\\frac{8 \\pi G}{c^4} \\rho c^2 e^\\nu = \\frac{e^\\nu}{r^2} \\left(1 - \\frac{d}{dr} [r e^{-\\lambda}] \\right) "
},
{
"math_id": 27,
"text": "e^{-\\lambda} = 1 - \\frac{2 Gm}{r c^2}"
},
{
"math_id": 28,
"text": "G_{11}"
},
{
"math_id": 29,
"text": "- \\frac{8 \\pi G}{c^4} P e^{\\lambda} = \\frac{- r \\nu' + e^{\\lambda} - 1}{r^2} "
},
{
"math_id": 30,
"text": "e^{\\lambda}"
},
{
"math_id": 31,
"text": " \\frac{d \\nu}{d r} = \\frac{1}{r}\\left(1 - \\frac{2 G m}{c^2 r}\\right)^{-1} \\left(\\frac{2 G m}{c^2 r} + \\frac{8 \\pi G}{c^4} r^2 P\\right) "
},
{
"math_id": 32,
"text": "\\nabla_{\\mu} T^{\\mu}_{\\,\\nu} = 0"
},
{
"math_id": 33,
"text": "\\partial_t \\rho = \\partial_t P = 0"
},
{
"math_id": 34,
"text": "\\partial_{\\phi} P = \\partial_{\\theta} P = 0"
},
{
"math_id": 35,
"text": "0 = \\nabla_\\mu T^\\mu_1 = - \\frac{d P}{d r} - \\frac12 \\left(P + \\rho c^2\\right) \\frac{d\\nu}{d r} \\;"
},
{
"math_id": 36,
"text": "\\frac{dP}{dr} = - \\left( \\frac{\\rho c^2 + P}{2} \\right) \\frac{d\\nu}{dr} \\;"
},
{
"math_id": 37,
"text": "d\\nu/dr"
},
{
"math_id": 38,
"text": "\\frac{dP}{dr} = - \\frac{1}{r} \\left( \\frac{\\rho c^2 + P}{2} \\right) \\left(\\frac{2 G m}{c^2 r} + \\frac{8 \\pi G}{c^4} r^2 P\\right) \\left(1 - \\frac{2 G m}{c^2 r}\\right)^{-1} "
},
{
"math_id": 39,
"text": "G/r"
},
{
"math_id": 40,
"text": "\\frac{dP}{dr}=-\\frac{G m}{r^2}\\rho\\left(1+\\frac{P}{\\rho c^2}+\\frac{4\\pi r^3P}{mc^2}+\\frac{2Gm}{rc^2}\\right) + O(c^{-4})."
}
] | https://en.wikipedia.org/wiki?curid=9154659 |
915551 | Carl Neumann | Prussian mathematician (1832–1925)
Carl Gottfried Neumann (also Karl; 7 May 1832 – 27 March 1925) was a German mathematician.
Biography.
Neumann was born in Königsberg, Prussia, as the son of the mineralogist, physicist and mathematician Franz Ernst Neumann (1798–1895), who was professor of mineralogy and physics at Königsberg University. Carl Neumann studied in Königsberg and Halle and was a professor at the universities of Halle, Basel, Tübingen, and Leipzig.
While in Königsberg, he studied physics with his father, and later, as a working mathematician, dealt almost exclusively with problems arising from physics. Stimulated by Bernhard Riemann's work on electrodynamics, Neumann developed a theory founded on the finite propagation of electrodynamic actions, which prompted Wilhelm Eduard Weber and Rudolf Clausius to strike up a correspondence with him. Weber described Neumann's professorship at Leipzig as being for "higher mechanics, which essentially encompasses mathematical physics," and his lectures did treat such topics. Maxwell makes reference to the electrodynamic theory developed by Weber and Neumann in the introduction to "A Dynamical Theory of the Electromagnetic Field" (1864).
Neumann worked on the Dirichlet principle, and can be considered one of the initiators of the theory of integral equations. The Neumann series, which is analogous to the geometric series
formula_0
but for infinite matrices or for bounded operators, is named after him.
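A small numerical illustration of the Neumann series: for a matrix A of norm less than one, the partial sums of the series converge to the inverse of (I − A). In the Python sketch below, the matrix size, the target norm, and the number of terms are arbitrary illustrative choices.

```python
# Numerical illustration of the Neumann series: for a square matrix A with norm
# less than 1, the partial sums I + A + A^2 + ... converge to the inverse of (I - A).
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
A *= 0.4 / np.linalg.norm(A, 2)      # rescale so the spectral norm is 0.4 < 1

I = np.eye(4)
partial_sum = np.zeros_like(A)
term = I.copy()
for _ in range(60):                  # sum of A**k for k = 0 .. 59
    partial_sum += term
    term = term @ A

print(np.allclose(partial_sum, np.linalg.inv(I - A)))   # True
```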
Together with Alfred Clebsch, Neumann founded the mathematical research journal "Mathematische Annalen". He died in Leipzig.
The Neumann boundary condition for certain types of ordinary and partial differential equations is named after him (Cheng and Cheng, 2005).
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\frac{1}{1-x} = 1 + x + x^2 + \\cdots "
}
] | https://en.wikipedia.org/wiki?curid=915551 |
915558 | Property B | In mathematics, Property B is a certain set theoretic property. Formally, given a finite set "X", a collection "C" of subsets of "X" has Property B if we can partition "X" into two disjoint subsets "Y" and "Z" such that every set in "C" meets both "Y" and "Z".
The property gets its name from mathematician Felix Bernstein, who first introduced the property in 1908.
Property B is equivalent to 2-coloring the hypergraph described by the collection "C". A hypergraph with property B is also called 2-colorable. Sometimes it is also called bipartite, by analogy to the bipartite graphs.
Property B is often studied for uniform hypergraphs (set systems in which all subsets of the system have the same cardinality) but it has also been considered in the non-uniform case.
The problem of checking whether a collection "C" has Property B is called the set splitting problem.
Smallest set-families without property B.
The smallest number of sets in a collection of sets of size "n" such that "C" does not have Property B is denoted by "m"("n").
Known values of m(n).
It is known that "m"(1) = 1, "m"(2) = 3, and "m"(3) = 7 (as can by seen by the following examples); the value of "m"(4) = 23 (Östergård), although finding this result was the result of an exhaustive search. An upper bound of 23 (Seymour, Toft) and a lower bound of 21 (Manning) have been proven. At the time of this writing (March 2017), there is no OEIS entry for the sequence "m"("n") yet, due to the lack of terms known.
For "n" = 1, set "X" = {1}, and "C" = <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template <br>Unexpected use of template - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details. - see for details.. Then C does not have Property B.
For "n" = 2, set "X" = {1, 2, 3} and "C" = (a triangle). Then C does not have Property B, so "m"(2) <= 3. However, "C"' = does (set "Y" = {1} and "Z" = {2, 3}), so "m"(2) >= 3.
For "n" = 3, set "X" = {1, 2, 3, 4, 5, 6, 7}, and "C" = (the Steiner triple system "S"7); "C" does not have Property B (so "m"(3) <= 7), but if any element of "C" is omitted, then that element can be taken as "Y", and the set of remaining elements "C"' will have Property B (so for this particular case, "m"(3) >= 7). One may check all other collections of 6 3-sets to see that all have Property B.
Östergård (2014) found "m"(4) = 23 through an exhaustive search. Seymour (1974) constructed a hypergraph on 11 vertices with 23 edges without Property B, which shows that "m"(4) <= 23. Manning (1995) raised the lower bound to show that "m"(4) >= 21.
Asymptotics of "m"("n").
Erdős (1963) proved that for any collection of fewer than formula_0 sets of size "n", there exists a 2-coloring in which all sets are bichromatic. The proof is simple: Consider a uniformly random coloring. The probability that any given set is monochromatic is formula_1. By a union bound, the probability that there exists a monochromatic set is less than formula_2. Therefore, there exists a good coloring.
Erdős (1964) showed the existence of an "n"-uniform hypergraph with formula_3 hyperedges which does not have property B (i.e., does not have a 2-coloring in which all hyperedges are bichromatic), establishing an upper bound.
Schmidt (1963) proved that every collection of at most formula_4 sets of size "n" has property B. Erdős and Lovász conjectured that formula_5. Beck in 1978 improved the lower bound to formula_6, where formula_7 is an arbitrary small positive number. In 2000, Radhakrishnan and Srinivasan improved the lower bound to formula_8. They used a clever probabilistic algorithm.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2^{n-1}"
},
{
"math_id": 1,
"text": "2^{-n+1}"
},
{
"math_id": 2,
"text": "2^{n-1}2^{-n+1} = 1"
},
{
"math_id": 3,
"text": "O(2^n \\cdot n^2)"
},
{
"math_id": 4,
"text": "n/(n+4)\\cdot 2^n"
},
{
"math_id": 5,
"text": "m(n) = \\theta(2^n \\cdot n)"
},
{
"math_id": 6,
"text": "m(n) = \\Omega(n^{1/3 - \\epsilon}2^n)"
},
{
"math_id": 7,
"text": "\\epsilon"
},
{
"math_id": 8,
"text": "m(n) = \\Omega(2^n \\cdot \\sqrt{n / \\log n})"
}
] | https://en.wikipedia.org/wiki?curid=915558 |
9156022 | Odd number theorem | The odd number theorem is a theorem in strong gravitational lensing which comes directly from differential topology.
The theorem states that "the number of multiple images produced by a bounded transparent lens must be odd".
Formulation.
Gravitational lensing can be thought of as a mapping from what is known as the "image plane" to the "source plane", following the formula:
formula_0.
Argument.
If we use direction cosines to describe the bent light rays, we can write a vector field formula_2 on the formula_1 plane.
However, only in some specific directions formula_3 will the bent light rays reach the observer, i.e., images only form where formula_4. Then we can directly apply the Poincaré–Hopf theorem formula_5.
The index of sources and sinks is +1, and that of saddle points is −1. So the Euler characteristic equals the difference between the number of positive indices formula_6 and the number of negative indices formula_7. For the far-field case, there is only one image, i.e., formula_8. So the total number of images is formula_9, i.e., odd. A rigorous proof requires Uhlenbeck's Morse theory of null geodesics.
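The statement can be illustrated numerically with a toy axisymmetric model. In the Python sketch below, the deflection law α(θ) = θ_E²θ/(θ² + θ_c²) is an illustrative choice of a non-singular ("transparent") lens, the parameter values are arbitrary, and images are counted as sign changes of the one-dimensional lens equation on a fine grid; for every source position the count comes out odd.

```python
# Toy numerical illustration: an axisymmetric, non-singular ("transparent") lens
# with the illustrative deflection law alpha(theta) = theta_E**2 * theta / (theta**2 + theta_c**2).
# Images are the solutions theta of the 1-D lens equation beta = theta - alpha(theta),
# counted here as sign changes of f(theta) on a fine grid; parameter values are arbitrary.
import numpy as np

theta_E, theta_c = 1.0, 0.2

def f(theta, beta):
    return theta - theta_E**2 * theta / (theta**2 + theta_c**2) - beta

thetas = np.linspace(-5.0, 5.0, 200001)
for beta in (0.05, 0.5, 3.0):                # source positions
    values = f(thetas, beta)
    n_images = np.count_nonzero(np.sign(values[:-1]) != np.sign(values[1:]))
    print(f"beta = {beta}: {n_images} image(s)")   # 3, 3 and 1 -- always odd
```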
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M: (u,v) \\mapsto (u',v')"
},
{
"math_id": 1,
"text": "(u,v)"
},
{
"math_id": 2,
"text": "V:(s,w)"
},
{
"math_id": 3,
"text": "V_0:(s_0,w_0)"
},
{
"math_id": 4,
"text": " D=\\delta V=0|_{(s_0,w_0)}"
},
{
"math_id": 5,
"text": "\\chi=\\sum \\text{index}_D = \\text{constant}"
},
{
"math_id": 6,
"text": "n_{+}"
},
{
"math_id": 7,
"text": "n_{-}"
},
{
"math_id": 8,
"text": " \\chi=n_{+}-n_{-}=1"
},
{
"math_id": 9,
"text": " N=n_{+}+n_{-}=2n_{-}+1 "
}
] | https://en.wikipedia.org/wiki?curid=9156022 |
9157119 | Feasible region | Mathematical constraints that define ways of finding the best solution
In mathematical optimization and computer science, a feasible region, feasible set, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints. This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down.
For example, consider the problem of minimizing the function formula_0 with respect to the variables formula_1 and formula_2 subject to formula_3 and formula_4 Here the feasible set is the set of pairs ("x", "y") in which the value of "x" is at least 1 and at most 10 and the value of "y" is at least 5 and at most 12. The feasible set of the problem is separate from the objective function, which states the criterion to be optimized and which in the above example is formula_5
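The example above can be solved numerically. The following Python sketch uses scipy's bound-constrained minimizer with an arbitrary feasible starting point and recovers the minimum at the corner (1, 5) of the feasible set.

```python
# The example above: minimize x**2 + y**4 subject to 1 <= x <= 10 and 5 <= y <= 12.
# A sketch using scipy's bound-constrained minimizer; the starting point is an
# arbitrary feasible choice.  The minimum is at the corner (1, 5), with value 626.
from scipy.optimize import minimize

result = minimize(lambda v: v[0] ** 2 + v[1] ** 4,
                  x0=[5.0, 8.0],
                  bounds=[(1, 10), (5, 12)])
print(result.x, result.fun)    # approximately [1. 5.] and 626.0
```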
In many problems, the feasible set reflects a constraint that one or more variables must be non-negative. In pure integer programming problems, the feasible set is the set of integers (or some subset thereof). In linear programming problems, the feasible set is a convex polytope: a region in multidimensional space whose boundaries are formed by hyperplanes and whose corners are vertices.
Constraint satisfaction is the process of finding a point in the feasible region.
Convex feasible set.
A convex feasible set is one in which a line segment connecting any two feasible points goes through only other feasible points, and not through any points outside the feasible set. Convex feasible sets arise in many types of problems, including linear programming problems, and they are of particular interest because, if the problem has a convex objective function that is to be maximized, it will generally be easier to solve in the presence of a convex feasible set and any local optimum will also be a global optimum.
No feasible set.
If the constraints of an optimization problem are mutually contradictory, there are no points that satisfy all the constraints and thus the feasible region is the empty set. In this case the problem has no solution and is said to be "infeasible".
Bounded and unbounded feasible sets.
Feasible sets may be bounded or unbounded. For example, the feasible set defined by the constraint set {"x" ≥ 0, "y" ≥ 0} is unbounded because in some directions there is no limit on how far one can go and still be in the feasible region. In contrast, the feasible set formed by the constraint set {"x" ≥ 0, "y" ≥ 0, "x" + 2"y" ≤ 4} is bounded because the extent of movement in any direction is limited by the constraints.
In linear programming problems with "n" variables, a necessary but insufficient condition for the feasible set to be bounded is that the number of constraints be at least "n" + 1 (as illustrated by the above example).
If the feasible set is unbounded, there may or may not be an optimum, depending on the specifics of the objective function. For example, if the feasible region is defined by the constraint set {"x" ≥ 0, "y" ≥ 0}, then the problem of maximizing "x" + "y" has no optimum since any candidate solution can be improved upon by increasing "x" or "y"; yet if the problem is to "minimize" "x" + "y", then there is an optimum (specifically at ("x", "y") = (0, 0)).
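These two situations can be reproduced with a linear-programming solver. The sketch below uses scipy's linprog, which minimizes by convention, so the maximization is posed by negating the objective; the solver then reports that problem as unbounded.

```python
# The same feasible set {x >= 0, y >= 0}, explored with scipy's linprog.
# linprog minimizes by convention, so the maximization is posed by negating the
# objective; the solver then reports the problem as unbounded (status code 3).
from scipy.optimize import linprog

bounds = [(0, None), (0, None)]               # x >= 0, y >= 0

minimize_result = linprog(c=[1, 1], bounds=bounds)
print(minimize_result.x)                      # optimum at [0. 0.]

maximize_result = linprog(c=[-1, -1], bounds=bounds)
print(maximize_result.status)                 # 3: the problem is unbounded
```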
Candidate solution.
In optimization and other branches of mathematics, and in search algorithms (a topic in computer science), a candidate solution is a member of the set of possible solutions in the feasible region of a given problem. A candidate solution does not have to be a likely or reasonable solution to the problem—it is simply in the set that satisfies all constraints; that is, it is in the set of "feasible solutions". Algorithms for solving various types of optimization problems often narrow the set of candidate solutions down to a subset of the feasible solutions, whose points remain as candidate solutions while the other feasible solutions are henceforth excluded as candidates.
The space of all candidate solutions, before any feasible points have been excluded, is called the feasible region, feasible set, search space, or solution space. This is the set of all possible solutions that satisfy the problem's constraints. Constraint satisfaction is the process of finding a point in the feasible set.
Genetic algorithm.
In the case of the genetic algorithm, the candidate solutions are the individuals in the population being evolved by the algorithm.
Calculus.
In calculus, an optimal solution is sought using the first derivative test: the first derivative of the function being optimized is equated to zero, and any values of the choice variable(s) that satisfy this equation are viewed as candidate solutions (while those that do not are ruled out as candidates). There are several ways in which a candidate solution might not be an actual solution. First, it might give a minimum when a maximum is being sought (or vice versa), and second, it might give neither a minimum nor a maximum but rather a saddle point or an inflection point, at which a temporary pause in the local rise or fall of the function occurs. Such candidate solutions may be ruled out by use of the second derivative test, whose satisfaction is sufficient for the candidate solution to be at least locally optimal. Third, a candidate solution may be a local optimum but not a global optimum.
In taking antiderivatives of monomials of the form formula_6 the candidate solution using Cavalieri's quadrature formula would be formula_7 This candidate solution is in fact correct except when formula_8
Linear programming.
In the simplex method for solving linear programming problems, a vertex of the feasible polytope is selected as the initial candidate solution and is tested for optimality; if it is rejected as the optimum, an adjacent vertex is considered as the next candidate solution. This process is continued until a candidate solution is found to be the optimum.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " x^2+y^4 "
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "y,"
},
{
"math_id": 3,
"text": " 1 \\le x \\le 10 "
},
{
"math_id": 4,
"text": " 5 \\le y \\le 12. \\, "
},
{
"math_id": 5,
"text": " x^2+y^4. "
},
{
"math_id": 6,
"text": "x^n,"
},
{
"math_id": 7,
"text": "\\tfrac{1}{n+1}x^{n+1}+C."
},
{
"math_id": 8,
"text": "n=-1."
}
] | https://en.wikipedia.org/wiki?curid=9157119 |
9158134 | Jackknife resampling | Statistical method for resampling
In statistics, the jackknife (jackknife cross-validation) is a cross-validation technique and, therefore, a form of resampling.
It is especially useful for bias and variance estimation. The jackknife pre-dates other common resampling methods such as the bootstrap. Given a sample of size formula_0, a jackknife estimator can be built by aggregating the parameter estimates from each subsample of size formula_1 obtained by omitting one observation.
The jackknife technique was developed by Maurice Quenouille (1924–1973) from 1949 and refined in 1956. John Tukey expanded on the technique in 1958 and proposed the name "jackknife" because, like a physical jack-knife (a compact folding knife), it is a rough-and-ready tool that can improvise a solution for a variety of problems even though specific problems may be more efficiently solved with a purpose-designed tool.
The jackknife is a linear approximation of the bootstrap.
A simple example: mean estimation.
The jackknife estimator of a parameter is found by systematically leaving out each observation from a dataset and calculating the parameter estimate over the remaining observations and then aggregating these calculations.
For example, if the parameter to be estimated is the population mean of random variable "formula_2", then for a given set of i.i.d. observations formula_3 the natural estimator is the sample mean:
formula_4
where the last sum used another way to indicate that the index formula_5 runs over the set formula_6.
Then we proceed as follows: For each formula_7 we compute the mean formula_8 of the jackknife subsample consisting of all but the "formula_5"-th data point, and this is called the formula_5-th jackknife replicate:
formula_9
It could help to think that these "formula_0" jackknife replicates formula_10 give us an approximation of the distribution of the sample mean formula_11 and the larger the formula_0 the better this approximation will be. Then finally to get the jackknife estimator we take the average of these formula_0 jackknife replicates:
formula_12
One may ask about the bias and the variance of formula_13. From the definition of formula_13 as the average of the jackknife replicates one could try to calculate these quantities explicitly; the bias is a trivial calculation, but the variance of formula_13 is more involved since the jackknife replicates are not independent.
For the special case of the mean, one can show explicitly that the jackknife estimate equals the usual estimate:
formula_14
This establishes the identity formula_15. Then taking expectations we get formula_16, so formula_13 is unbiased, while taking variance we get formula_17. However, these properties do not generally hold for parameters other than the mean.
This simple example for the case of mean estimation is just to illustrate the construction of a jackknife estimator, while the real subtleties (and the usefulness) emerge for the case of estimating other parameters, such as higher moments than the mean or other functionals of the distribution.
formula_13 could be used to construct an empirical estimate of the bias of formula_11, namely formula_18 with some suitable factor formula_19, although in this case we know that formula_15 so this construction does not add any meaningful knowledge, but it gives the correct estimation of the bias (which is zero).
A jackknife estimate of the variance of formula_11 can be calculated from the variance of the jackknife replicates formula_8:
formula_20
The left equality defines the estimator formula_21 and the right equality is an identity that can be verified directly. Then taking expectations we get formula_22, so this is an unbiased estimator of the variance of formula_11.
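A minimal numerical sketch of the construction above, using made-up data: it forms the leave-one-out replicates, confirms the identity formula_15, and confirms that the jackknife variance estimate equals the usual sample variance divided by "n". This code is not part of the original article.

```python
# A minimal sketch, assuming NumPy; the data values are made up.
import numpy as np

x = np.array([2.1, 3.4, 1.9, 5.0, 4.2, 3.3])
n = len(x)

replicates = np.array([np.delete(x, i).mean() for i in range(n)])   # leave-one-out means
x_jack = replicates.mean()                                          # jackknife estimator
var_jack = (n - 1) / n * ((replicates - x_jack) ** 2).sum()         # jackknife variance estimate

print(np.isclose(x_jack, x.mean()))              # True: for the mean, the estimates coincide
print(np.isclose(var_jack, x.var(ddof=1) / n))   # True: equals s^2 / n, as stated above
```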
Estimating the bias of an estimator.
The jackknife technique can be used to estimate (and correct) the bias of an estimator calculated over the entire sample.
Suppose formula_23 is the target parameter of interest, which is assumed to be some functional of the distribution of formula_2. Based on a finite set of observations formula_3, which is assumed to consist of i.i.d. copies of formula_2, the estimator formula_24 is constructed:
formula_25
The value of formula_24 is sample-dependent, so this value will change from one random sample to another.
By definition, the bias of formula_24 is as follows:
formula_26
One may wish to compute several values of formula_24 from several samples, and average them, to calculate an empirical approximation of formula_27, but this is impossible when there are no "other samples": the entire set of available observations formula_3 was used to calculate formula_24. In this kind of situation the jackknife resampling technique may be of help.
We construct the jackknife replicates:
formula_28
formula_29
formula_30
formula_31
where each replicate is a "leave-one-out" estimate based on the jackknife subsample consisting of all but one of the data points:
formula_32
Then we define their average:
formula_33
The jackknife estimate of the bias of formula_24 is given by:
formula_34
and the resulting bias-corrected jackknife estimate of formula_23 is given by:
formula_35
This removes the bias in the special case that the bias is formula_36 and reduces it to formula_37 in other cases.
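As a concrete, hedged illustration of the recipe above (not part of the original text), the sketch below applies the jackknife bias correction to the biased "plug-in" variance estimator; for this particular estimator the corrected value works out to the usual unbiased sample variance. The data values are made up.

```python
# A minimal sketch, assuming NumPy; the data values are made up.
import numpy as np

x = np.array([2.1, 3.4, 1.9, 5.0, 4.2, 3.3])
n = len(x)

def theta_hat(sample):
    # plug-in variance estimator, biased downward by a factor (n-1)/n
    return ((sample - sample.mean()) ** 2).mean()

replicates = np.array([theta_hat(np.delete(x, i)) for i in range(n)])   # leave-one-out estimates
theta_jack = replicates.mean()
bias_jack = (n - 1) * (theta_jack - theta_hat(x))                       # jackknife bias estimate
theta_corrected = theta_hat(x) - bias_jack                              # = n*theta_hat - (n-1)*theta_jack

print(np.isclose(theta_corrected, x.var(ddof=1)))   # True: the correction recovers s^2 here
```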
Estimating the variance of an estimator.
The jackknife technique can be also used to estimate the variance of an estimator calculated over the entire sample.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "(n-1)"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "x_1, ..., x_n"
},
{
"math_id": 4,
"text": "\\bar{x} =\\frac{1}{n} \\sum_{i=1}^{n} x_i =\\frac{1}{n} \\sum_{i \\in [n]} x_i,"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "[n] = \\{ 1,\\ldots,n\\}"
},
{
"math_id": 7,
"text": "i \\in [n]"
},
{
"math_id": 8,
"text": "\\bar{x}_{(i)}"
},
{
"math_id": 9,
"text": "\\bar{x}_{(i)} =\\frac{1}{n-1} \\sum_{j \\in [n], j\\ne i} x_j, \\quad \\quad i=1, \\dots ,n."
},
{
"math_id": 10,
"text": "\\bar{x}_{(1)},\\ldots,\\bar{x}_{(n)}"
},
{
"math_id": 11,
"text": "\\bar{x}"
},
{
"math_id": 12,
"text": "\\bar{x}_{\\mathrm{jack}} = \\frac{1}{n}\\sum_{i=1}^n \\bar{x}_{(i)}."
},
{
"math_id": 13,
"text": "\\bar{x}_{\\mathrm{jack}}"
},
{
"math_id": 14,
"text": "\\frac{1}{n}\\sum_{i=1}^n \\bar{x}_{(i)} = \\bar{x}."
},
{
"math_id": 15,
"text": "\\bar{x}_{\\mathrm{jack}} = \\bar{x}"
},
{
"math_id": 16,
"text": "E[\\bar{x}_{\\mathrm{jack}}] = E[\\bar{x}] =E[x]"
},
{
"math_id": 17,
"text": "V[\\bar{x}_{\\mathrm{jack}}] = V[\\bar{x}] =V[x]/n"
},
{
"math_id": 18,
"text": "\\widehat{\\operatorname{bias}}(\\bar{x})_{\\mathrm{jack}} = c(\\bar{x}_{\\mathrm{jack}} - \\bar{x})"
},
{
"math_id": 19,
"text": "c>0"
},
{
"math_id": 20,
"text": "\\widehat{\\operatorname{var}}(\\bar{x})_{\\mathrm{jack}}\n=\\frac{n-1}{n} \\sum_{i=1}^n (\\bar{x}_{(i)} - \\bar{x}_{\\mathrm{jack}})^2 \n=\\frac{1}{n(n-1)} \\sum_{i=1}^n (x_i - \\bar{x})^2."
},
{
"math_id": 21,
"text": "\\widehat{\\operatorname{var}}(\\bar{x})_{\\mathrm{jack}}"
},
{
"math_id": 22,
"text": "E[\\widehat{\\operatorname{var}}(\\bar{x})_{\\mathrm{jack}}] = V[x]/n = V[\\bar{x}]"
},
{
"math_id": 23,
"text": "\\theta"
},
{
"math_id": 24,
"text": "\\hat{\\theta}"
},
{
"math_id": 25,
"text": "\\hat{\\theta} =f_n(x_1,\\ldots,x_n)."
},
{
"math_id": 26,
"text": "\\text{bias}(\\hat{\\theta}) = E[\\hat{\\theta}] - \\theta."
},
{
"math_id": 27,
"text": "E[\\hat{\\theta}]"
},
{
"math_id": 28,
"text": "\\hat{\\theta}_{(1)} =f_{n-1}(x_{2},x_{3}\\ldots,x_{n})"
},
{
"math_id": 29,
"text": "\\hat{\\theta}_{(2)} =f_{n-1}(x_{1},x_{3},\\ldots,x_{n})"
},
{
"math_id": 30,
"text": "\\vdots"
},
{
"math_id": 31,
"text": "\\hat{\\theta}_{(n)} =f_{n-1}(x_1,x_{2},\\ldots,x_{n-1})"
},
{
"math_id": 32,
"text": "\\hat{\\theta}_{(i)} =f_{n-1}(x_{1},\\ldots,x_{i-1},x_{i+1},\\ldots,x_{n}) \\quad \\quad i=1, \\dots,n."
},
{
"math_id": 33,
"text": "\\hat{\\theta}_\\mathrm{jack}=\\frac{1}{n} \\sum_{i=1}^n \\hat{\\theta}_{(i)}"
},
{
"math_id": 34,
"text": "\\widehat{\\text{bias}}(\\hat{\\theta})_\\mathrm{jack} =(n-1)(\\hat{\\theta}_\\mathrm{jack} - \\hat{\\theta})"
},
{
"math_id": 35,
"text": "\\hat{\\theta}_{\\text{jack}}^{*} \n=\\hat{\\theta} - \\widehat{\\text{bias}}(\\hat{\\theta})_\\mathrm{jack}\n=n\\hat{\\theta} - (n-1)\\hat{\\theta}_\\mathrm{jack}."
},
{
"math_id": 36,
"text": "O(n^{-1})"
},
{
"math_id": 37,
"text": "O(n^{-2})"
}
] | https://en.wikipedia.org/wiki?curid=9158134 |
915822 | Craig retroazimuthal projection | Retroazimuthal compromise map projection
The Craig retroazimuthal map projection was created by James Ireland Craig in 1909. It is a modified cylindrical projection. As a retroazimuthal projection, it preserves directions from everywhere to one location of interest that is configured during construction of the projection. The projection is sometimes known as the Mecca projection because Craig, who had worked in Egypt as a cartographer, created it to help Muslims find their qibla. In such maps, Mecca is the configurable location of interest.
Given latitude "φ" to plot, latitude "φ"0 of the fixed location of interest, longitude "λ" to plot, and the longitude "λ"0 of the fixed location of interest, the projection is defined by:
formula_0
But when "λ" − "λ"0 = 0, "y" above is undefined, so instead use the ratio's continuous completion:
formula_1
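A minimal Python sketch of the plotting equations above, including the continuous completion at "λ" − "λ"0 = 0 (this code is not from the original article); angles are assumed to be in radians and the sample coordinates are arbitrary, with the fixed point set near Mecca.

```python
# A minimal sketch; the input coordinates are illustrative only.
import math

def craig_retroazimuthal(lat, lon, lat0, lon0):
    """Return projected (x, y) for latitude lat and longitude lon, in radians,
    with (lat0, lon0) the fixed location of interest."""
    dlon = lon - lon0
    x = dlon
    if dlon == 0.0:
        # continuous completion of the ratio when lambda - lambda_0 = 0
        y = math.sin(lat) - math.tan(lat0) * math.cos(lat)
    else:
        y = dlon / math.sin(dlon) * (math.sin(lat) * math.cos(dlon)
                                     - math.tan(lat0) * math.cos(lat))
    return x, y

# arbitrary point, projected with the fixed point near Mecca (about 21.42 N, 39.83 E)
print(craig_retroazimuthal(math.radians(27.7), math.radians(85.3),
                           math.radians(21.42), math.radians(39.83)))
```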
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align} x &= \\lambda - \\lambda_0\\\\\ny &= \\frac{\\lambda - \\lambda_0}{\\sin \\left(\\lambda - \\lambda_0\\right)}\\Big(\\sin \\varphi \\cos \\left(\\lambda - \\lambda_0\\right) - \\tan \\varphi_0 \\cos \\varphi\\Big)\\end{align}"
},
{
"math_id": 1,
"text": "y = \\sin \\varphi \\cos \\left(\\lambda - \\lambda_0\\right) - \\tan \\varphi_0 \\cos \\varphi = \\sin \\varphi - \\tan \\varphi_0 \\cos \\varphi"
}
] | https://en.wikipedia.org/wiki?curid=915822 |
91591 | Linearity | Properties of mathematical relationships
In mathematics, the term linear is used in two distinct senses for two different properties: linearity of a function (or mapping), and linearity of a polynomial.
An example of a linear function is the function defined by formula_0 that maps the real line to a line in the Euclidean plane R2 that passes through the origin. An example of a linear polynomial in the variables formula_1 formula_2 and formula_3 is formula_4
Linearity of a mapping is closely related to "proportionality". Examples in physics include the linear relationship of voltage and current in an electrical conductor (Ohm's law), and the relationship of mass and weight. By contrast, more complicated relationships, such as between velocity and kinetic energy, are "nonlinear".
Generalized for functions in more than one dimension, linearity means the property of a function of being compatible with addition and scaling, also known as the superposition principle.
Linearity of a polynomial means that its degree is less than two. The use of the term for polynomials stems from the fact that the graph of a polynomial in one variable is a straight line. In the term "linear equation", the word refers to the linearity of the polynomials involved.
Because a function such as formula_5 is defined by a linear polynomial in its argument, it is sometimes also referred to as being a "linear function", and the relationship between the argument and the function value may be referred to as a "linear relationship". This is potentially confusing, but usually the intended meaning will be clear from the context.
The word linear comes from Latin "linearis", "pertaining to or resembling a line".
In mathematics.
Linear maps.
In mathematics, a linear map or linear function "f"("x") is a function that satisfies the two properties:
These properties are known as the superposition principle. In this definition, "x" is not necessarily a real number, but can in general be an element of any vector space. A more special definition of linear function, not coinciding with the definition of linear map, is used in elementary mathematics (see below).
Additivity alone implies homogeneity for rational α, since formula_6 implies formula_7 for any natural number "n" by mathematical induction, and then formula_8 implies formula_9. The density of the rational numbers in the reals implies that any additive continuous function is homogeneous for any real number α, and is therefore linear.
The concept of linearity can be extended to linear operators. Important examples of linear operators include the derivative considered as a differential operator, and other operators constructed from it, such as del and the Laplacian. When a differential equation can be expressed in linear form, it can generally be solved by breaking the equation up into smaller pieces, solving each of those pieces, and summing the solutions.
Linear polynomials.
In a different usage to the above definition, a polynomial of degree 1 is said to be linear, because the graph of a function of that form is a straight line.
Over the reals, a simple example of a linear equation is given by:
formula_10
where "m" is often called the slope or gradient, and "b" the y-intercept, which gives the point of intersection between the graph of the function and the "y"-axis.
Note that this usage of the term "linear" is not the same as in the section above, because linear polynomials over the real numbers do not in general satisfy either additivity or homogeneity. In fact, they do so if and only if the constant term – "b" in the example – equals 0. If "b" ≠ 0, the function is called an affine function (see in greater generality affine transformation).
Linear algebra is the branch of mathematics concerned with systems of linear equations.
Boolean functions.
In Boolean algebra, a linear function is a function formula_11 for which there exist formula_12 such that
formula_13, where formula_14
Note that if formula_15, the above function is considered affine in linear algebra (i.e. not linear).
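As an illustration of the definition above (this example is not part of the original article), the following Python sketch decides linearity by brute force: it searches for coefficients formula_12 whose XOR-of-ANDs form reproduces a function's truth table. The example functions, exclusive or and conjunction, are chosen for illustration.

```python
# A minimal sketch; is_linear and the sample functions are illustrative names.
from itertools import product

def is_linear(f, n):
    """True if f: {0,1}^n -> {0,1} equals a_0 XOR (a_1 AND b_1) XOR ... XOR (a_n AND b_n)."""
    for coeffs in product((0, 1), repeat=n + 1):
        a0, a = coeffs[0], coeffs[1:]
        def candidate(*bits):
            val = a0
            for ai, bi in zip(a, bits):
                val ^= ai & bi
            return val
        if all(candidate(*bits) == f(*bits) for bits in product((0, 1), repeat=n)):
            return True
    return False

print(is_linear(lambda p, q: p ^ q, 2))   # True: exclusive or is linear
print(is_linear(lambda p, q: p & q, 2))   # False: conjunction is not
```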
A Boolean function is linear if one of the following holds for the function's truth table:
Another way to express this is that each variable always makes a difference in the truth value of the operation or it never makes a difference.
Negation, Logical biconditional, exclusive or, tautology, and contradiction are linear functions.
Physics.
In physics, "linearity" is a property of the differential equations governing many systems; for instance, the Maxwell equations or the diffusion equation.
Linearity of a homogeneous differential equation means that if two functions "f" and "g" are solutions of the equation, then any linear combination "af" + "bg" is, too.
In instrumentation, linearity means that a given change in an input variable gives the same change in the output of the measurement apparatus: this is highly desirable in scientific work. In general, instruments are close to linear over a certain range, and most useful within that range. In contrast, human senses are highly nonlinear: for instance, the brain completely ignores incoming light unless it exceeds a certain absolute threshold number of photons.
Linear motion traces a straight line trajectory.
Electronics.
In electronics, the linear operating region of a device, for example a transistor, is where an output dependent variable (such as the transistor collector current) is directly proportional to an input dependent variable (such as the base current). This ensures that an analog output is an accurate representation of an input, typically with higher amplitude (amplified). A typical example of linear equipment is a high fidelity audio amplifier, which must amplify a signal without changing its waveform. Others are linear filters, and linear amplifiers in general.
In most scientific and technological, as distinct from mathematical, applications, something may be described as linear if the characteristic is approximately but not exactly a straight line; and linearity may be valid only within a certain operating region—for example, a high-fidelity amplifier may distort a small signal, but sufficiently little to be acceptable (acceptable but imperfect linearity); and may distort very badly if the input exceeds a certain value.
Integral linearity.
For an electronic device (or other physical device) that converts a quantity to another quantity, Bertram S. Kolts writes:
There are three basic definitions for integral linearity in common use: independent linearity, zero-based linearity, and terminal, or end-point, linearity. In each case, linearity defines how well the device's actual performance across a specified operating range approximates a straight line. Linearity is usually measured in terms of a deviation, or non-linearity, from an ideal straight line and it is typically expressed in terms of percent of full scale, or in ppm (parts per million) of full scale. Typically, the straight line is obtained by performing a least-squares fit of the data. The three definitions vary in the manner in which the straight line is positioned relative to the actual device's performance. Also, all three of these definitions ignore any gain, or offset errors that may be present in the actual device's performance characteristics.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x)=(ax,bx)"
},
{
"math_id": 1,
"text": "X,"
},
{
"math_id": 2,
"text": "Y"
},
{
"math_id": 3,
"text": "Z"
},
{
"math_id": 4,
"text": "aX+bY+cZ+d."
},
{
"math_id": 5,
"text": "f(x)=ax+b"
},
{
"math_id": 6,
"text": "f(x+x)=f(x)+f(x)"
},
{
"math_id": 7,
"text": "f(nx)=n f(x)"
},
{
"math_id": 8,
"text": "n f(x) = f(nx)=f(m\\tfrac{n}{m}x)= m f(\\tfrac{n}{m}x)"
},
{
"math_id": 9,
"text": "f(\\tfrac{n}{m}x) = \\tfrac{n}{m} f(x)"
},
{
"math_id": 10,
"text": "y = m x + b\\ "
},
{
"math_id": 11,
"text": "f"
},
{
"math_id": 12,
"text": "a_0, a_1, \\ldots, a_n \\in \\{0,1\\}"
},
{
"math_id": 13,
"text": "f(b_1, \\ldots, b_n) = a_0 \\oplus (a_1 \\land b_1) \\oplus \\cdots \\oplus (a_n \\land b_n)"
},
{
"math_id": 14,
"text": "b_1, \\ldots, b_n \\in \\{0,1\\}."
},
{
"math_id": 15,
"text": "a_0 = 1"
}
] | https://en.wikipedia.org/wiki?curid=91591 |
916064 | Sumset | In additive combinatorics, the sumset (also called the Minkowski sum) of two subsets formula_0 and formula_1 of an abelian group formula_2 (written additively) is defined to be the set of all sums of an element from formula_0 with an element from formula_1. That is,
formula_3
The formula_4-fold iterated sumset of formula_0 is
formula_5
where there are formula_4 summands.
Many of the questions and results of additive combinatorics and additive number theory can be phrased in terms of sumsets. For example, Lagrange's four-square theorem can be written succinctly in the form
formula_6
where formula_7 is the set of square numbers. A subject that has received a fair amount of study is that of sets with "small doubling", where the size of the set formula_8 is small (compared to the size of formula_0); see for example Freiman's theorem.
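A minimal Python sketch of the definitions above for made-up sets of integers (an abelian group under addition): it computes the sumset A + B and an iterated sumset, the kind of object whose size is studied in "small doubling" questions. This example is not from the original article.

```python
# A minimal sketch; the sets A and B are made up.
A = {0, 1, 3}
B = {0, 2}

sumset = {a + b for a in A for b in B}   # A + B
print(sorted(sumset))                    # [0, 1, 2, 3, 5]

def iterated_sumset(A, n):
    """The n-fold iterated sumset nA = A + ... + A with n summands (n >= 1)."""
    result = set(A)
    for _ in range(n - 1):
        result = {r + a for r in result for a in A}
    return result

print(sorted(iterated_sumset(A, 2)))     # A + A = [0, 1, 2, 3, 4, 6]
print(len(iterated_sumset(A, 2)))        # 6; "small doubling" asks when this stays close to |A|
```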
See also.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "G"
},
{
"math_id": 3,
"text": "A + B = \\{a+b : a \\in A, b \\in B\\}."
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "nA = A + \\cdots + A,"
},
{
"math_id": 6,
"text": "4\\,\\Box = \\mathbb{N},"
},
{
"math_id": 7,
"text": "\\Box"
},
{
"math_id": 8,
"text": "A+A"
}
] | https://en.wikipedia.org/wiki?curid=916064 |
916157 | Vandermonde's identity | Mathematical theorem on convolved binomial coefficients
In combinatorics, Vandermonde's identity (or Vandermonde's convolution) is the following identity for binomial coefficients:
formula_0
for any nonnegative integers "r", "m", "n". The identity is named after Alexandre-Théophile Vandermonde (1772), although it was already known in 1303 by the Chinese mathematician Zhu Shijie.
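A minimal numerical check of the identity (a sanity check, not a proof, and not part of the original article), using Python's math.comb; the particular values of "m" and "n" are arbitrary.

```python
# A minimal sketch; m and n are arbitrary choices.
from math import comb

m, n = 7, 5
for r in range(m + n + 1):
    lhs = comb(m + n, r)
    rhs = sum(comb(m, k) * comb(n, r - k) for k in range(r + 1))   # comb returns 0 when k exceeds m or n
    assert lhs == rhs
print("Vandermonde's identity verified for m = 7, n = 5 and r = 0, ...,", m + n)
```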
There is a "q"-analog to this theorem called the "q"-Vandermonde identity.
Vandermonde's identity can be generalized in numerous ways, including to the identity
formula_1
Proofs.
Algebraic proof.
In general, the product of two polynomials with degrees "m" and "n", respectively, is given by
formula_2
where we use the convention that "ai" = 0 for all integers "i" > "m" and "bj" = 0 for all integers "j" > "n". By the binomial theorem,
formula_3
Using the binomial theorem also for the exponents "m" and "n", and then the above formula for the product of polynomials, we obtain
formula_4
where the above convention for the coefficients of the polynomials agrees with the definition of the binomial coefficients, because both give zero for all "i" > "m" and "j" > "n", respectively.
By comparing coefficients of "x"^"r", Vandermonde's identity follows for all integers "r" with 0 ≤ "r" ≤ "m" + "n". For larger integers "r", both sides of Vandermonde's identity are zero due to the definition of binomial coefficients.
Combinatorial proof.
Vandermonde's identity also admits a combinatorial double counting proof, as follows. Suppose a committee consists of "m" men and "n" women. In how many ways can a subcommittee of "r" members be formed? The answer is
formula_5
The answer is also the sum over all possible values of "k", of the number of subcommittees consisting of "k" men and "r" − "k" women:
formula_6
Geometrical proof.
Take a rectangular grid of "r" x ("m"+"n"−"r") squares. There are
formula_7
paths that start on the bottom left vertex and, moving only upwards or rightwards, end at the top right vertex (this is because "r" right moves and "m"+"n"-"r" up moves must be made (or vice versa) in any order, and the total path length is "m" + "n"). Call the bottom left vertex (0, 0).
There are formula_8 paths starting at (0, 0) that end at ("k", "m"−"k"), as "k" right moves and "m"−"k" upward moves must be made (and the path length is "m"). Similarly, there are formula_9 paths starting at ("k", "m"−"k") that end at ("r", "m"+"n"−"r"), as a total of "r"−"k" right moves and ("m"+"n"−"r") − ("m"−"k") upward moves must be made and the path length must be "r"−"k" + ("m"+"n"−"r") − ("m"−"k") = "n". Thus there are
formula_10
paths that start at (0, 0), end at ("r", "m"+"n"−"r"), and go through ("k", "m"−"k"). This is a subset of all paths that start at (0, 0) and end at ("r", "m"+"n"−"r"), so sum from "k" = 0 to "k" = "r" (as the point ("k", "m"−"k") is confined to be within the square) to obtain the total number of paths that start at (0, 0) and end at ("r", "m"+"n"−"r").
Generalizations.
Generalized Vandermonde's identity.
One can generalize Vandermonde's identity as follows:
formula_11
This identity can be obtained through the algebraic derivation above when more than two polynomials are used, or through a simple double counting argument.
On the one hand, one chooses formula_12 elements out of a first set of formula_13 elements; then formula_14 out of another set, and so on, through formula_15 such sets, until a total of formula_16 elements have been chosen from the formula_15 sets. One therefore chooses formula_16 elements out of formula_17 in the left-hand side, which is also exactly what is done in the right-hand side.
Chu–Vandermonde identity.
The identity generalizes to non-integer arguments. In this case, it is known as the Chu–Vandermonde identity (see Askey 1975, pp. 59–60) and takes the form
formula_18
for general complex-valued "s" and "t" and any non-negative integer "n". It can be proved along the lines of the algebraic proof above by multiplying the binomial series for formula_19 and formula_20 and comparing terms with the binomial series for formula_21.
This identity may be rewritten in terms of the falling Pochhammer symbols as
formula_22
in which form it is clearly recognizable as an umbral variant of the binomial theorem (for more on umbral variants of the binomial theorem, see binomial type). The Chu–Vandermonde identity can also be seen to be a special case of Gauss's hypergeometric theorem, which states that
formula_23
where formula_24 is the hypergeometric function and formula_25 is the gamma function. One regains the Chu–Vandermonde identity by taking "a" = −"n" and applying the identity
formula_26
liberally.
The Rothe–Hagen identity is a further generalization of this identity.
The hypergeometric probability distribution.
When both sides have been divided by the expression on the left, so that the sum is 1, then the terms of the sum may be interpreted as probabilities. The resulting probability distribution is the hypergeometric distribution. That is the probability distribution of the number of red marbles in "r" draws "without replacement" from an urn containing "n" red and "m" blue marbles.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{m+n \\choose r}=\\sum_{k=0}^r{m \\choose k}{n \\choose r-k}"
},
{
"math_id": 1,
"text": "\n{ n_1+\\dots +n_p \\choose m }= \\sum_{k_1+\\cdots +k_p = m} {n_1\\choose k_1} {n_2\\choose k_2} \\cdots {n_p\\choose k_p}. \n"
},
{
"math_id": 2,
"text": "\\biggl(\\sum_{i=0}^m a_ix^i\\biggr) \\biggl(\\sum_{j=0}^n b_jx^j\\biggr)\n= \\sum_{r=0}^{m+n}\\biggl(\\sum_{k=0}^r a_k b_{r-k}\\biggr) x^r,"
},
{
"math_id": 3,
"text": "(1+x)^{m+n} = \\sum_{r=0}^{m+n} {m+n \\choose r}x^r. "
},
{
"math_id": 4,
"text": "\\begin{align}\n\\sum_{r=0}^{m+n} {m+n \\choose r}x^r\n&= (1+x)^{m+n}\\\\\n&= (1+x)^m (1+x)^n \\\\\n&= \\biggl(\\sum_{i=0}^m {m\\choose i}x^i\\biggr)\n \\biggl(\\sum_{j=0}^n {n\\choose j}x^j\\biggr)\\\\\n&=\\sum_{r=0}^{m+n}\\biggl(\\sum_{k=0}^r {m\\choose k} {n\\choose r-k}\\biggr) x^r,\n\\end{align}\n"
},
{
"math_id": 5,
"text": "{m+n \\choose r}."
},
{
"math_id": 6,
"text": "\\sum_{k=0}^r{m \\choose k}{n \\choose r-k}."
},
{
"math_id": 7,
"text": "\\binom{r+(m+n-r)}{r}=\\binom{m+n}{r}"
},
{
"math_id": 8,
"text": "\\binom{m}{k}"
},
{
"math_id": 9,
"text": "\\binom{n}{r-k}"
},
{
"math_id": 10,
"text": " \\binom{m}{k}\\binom{n}{r-k} "
},
{
"math_id": 11,
"text": "\n\\sum_{k_1+\\cdots +k_p = m} {n_1\\choose k_1} {n_2\\choose k_2} \\cdots {n_p\\choose k_p} = { n_1+\\dots +n_p \\choose m }.\n"
},
{
"math_id": 12,
"text": "\\textstyle k_1"
},
{
"math_id": 13,
"text": "\\textstyle n_1"
},
{
"math_id": 14,
"text": "\\textstyle k_2"
},
{
"math_id": 15,
"text": "\\textstyle p"
},
{
"math_id": 16,
"text": "\\textstyle m"
},
{
"math_id": 17,
"text": "\\textstyle n_1+\\dots +n_p"
},
{
"math_id": 18,
"text": "{s+t \\choose n}=\\sum_{k=0}^n {s \\choose k}{t \\choose n-k}"
},
{
"math_id": 19,
"text": "(1+x)^s"
},
{
"math_id": 20,
"text": "(1+x)^t"
},
{
"math_id": 21,
"text": "(1+x)^{s+t}"
},
{
"math_id": 22,
"text": "(s+t)_n = \\sum_{k=0}^n {n \\choose k} (s)_k (t)_{n-k}"
},
{
"math_id": 23,
"text": "\\;_2F_1(a,b;c;1) = \\frac{\\Gamma(c)\\Gamma(c-a-b)}{\\Gamma(c-a)\\Gamma(c-b)}"
},
{
"math_id": 24,
"text": "\\;_2F_1"
},
{
"math_id": 25,
"text": "\\Gamma(n+1)=n!"
},
{
"math_id": 26,
"text": "{n\\choose k} = (-1)^k {k-n-1 \\choose k}"
}
] | https://en.wikipedia.org/wiki?curid=916157 |
916176 | Span of control | Term in business management
Span of control, also called span of management, is a term used in business management, particularly human resource management. The term refers to the number of direct reports a supervisor is responsible for (the number of people the supervisor supports).
Overview.
In simple terms, span of control refers to the number of subordinates that a superior can effectively manage. The more subordinates a manager controls, the broader his or her span of control.
In hierarchical business organizations of the past it was not uncommon to see average spans of 1-to-4 or even less, i.e. one manager supervised four employees on average. In the 1980s corporate leaders flattened many organizational structures, causing average spans to move closer to 1-to-10. That was made possible primarily by the development of inexpensive information technology. As information technology capable of easing many middle-manager tasks was developed – tasks like collecting, manipulating and presenting operational information – upper managers found they could hire fewer middle managers to do more work managing more subordinates for less money.
The current shift to self-directed cross-functional teams and other forms of non-hierarchical structures has made the concept of span of control less important.
Theories about the optimum span of control go back to V. A. Graicunas. In 1933 he used assumptions about mental capacity and attention span to develop a set of practical heuristics. Lyndall Urwick (1956) developed a theory based on geographical dispersion and the need for face-to-face meetings. In spite of numerous attempts since then, no convincing theories have been presented. This is because the optimum span of control depends on numerous variables, including organizational structure, available technology, the functions being performed, and the competencies of the manager as well as staff. An alternative view, proposed by Elliott Jaques, is that a manager may have as many immediate subordinates as they can know personally, in the sense of being able to assess their personal effectiveness.
Factors considered in deciding span of control.
These are the factors affecting span of control:
Theoretical considerations.
The first to develop a more general theory of management was Henri Fayol, who had gathered empirical experience during his time as general manager of a coal and steel company, the Commentry-Fourchambault Company. He was the first to add a managerial perspective to the problem of organizational governance. The rationale for defining a strict hierarchy of communication channels is found in the need for vertical integration of activities, imposed by management's need for control and information.
However, exercising control over activities performed by subordinates and monitoring their communication would inflict information overload to the nodes at the upper hierarchical levels, since all communication to other branches of the organizational structure would be routed through them. In addition, a larger number of subordinates also requires supervisors to monitor a high number of interactions below their own level; information overload and span of control are positively correlated.
Graicunas (Gulick and Urwick, 1937) distinguished three types of interactions – direct single relationships, cross-relationships, and direct group relationships – each of them contributing to the total amount of interactions within the organization. According to Graicunas, the number of possible interactions can be computed in the following way. Let n be the number of subordinates reporting to a supervisor. Then, the number of relationships of direct single type the supervisor could possibly engage into is
formula_0
The number of interactions between subordinates (cross relationships) that the supervisor has to monitor is
formula_1
and the number of direct group relationships is
formula_2
The sum of these three types of interactions is the number of potential relationships of a supervisor. Graicunas showed with these formulas that each additional subordinate increases the number of potential interactions significantly. It appears natural that no organization can afford to maintain a control structure of a dimension being required for implementing a scalar chain under the unity of command condition. Therefore, other mechanisms had to be found for dealing with the dilemma of maintaining managerial control, while keeping cost and time at a reasonable level, thus making the span of control a critical figure for the organization. Consequently, for a long time, finding the optimum span of control has been a major challenge to organization design. As Mackenzie (1978, p 121) describes it:
”One could argue that with larger spans, the costs of supervision would tend to be reduced, because a smaller percentage of the members of the organization are supervisors. On the other hand, if the span of control is too large, the supervisor may not have the capacity to supervise effectively such large numbers of immediate subordinates. Thus, there is a possible trade-off to be made in an attempt to balance these possibly opposing tendencies.”
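As a quick illustration of Graicunas's counts above (this code and the sample values are not part of the original text), the following Python sketch tabulates the three types of relationships and their total for a few values of "n", showing how sharply the total grows as subordinates are added.

```python
# A minimal sketch of Graicunas's formulas; n is the number of direct subordinates.
def graicunas(n):
    direct = n                        # direct single relationships
    cross = n * (n - 1)               # cross relationships among subordinates
    group = n * (2 ** n // 2 - 1)     # direct group relationships, n(2^n/2 - 1)
    return direct, cross, group, direct + cross + group

for n in (3, 4, 5, 6):
    print(n, graicunas(n))
# totals: 18, 44, 100, 222 - each additional subordinate roughly doubles the total
```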
Fayol proposed that subordinate employees should be allowed to communicate directly with each other, given that their superiors had agreed upon this procedure. This principle became known as "Fayol's Bridge."
The use of Fayol’s Bridge resulted in a number of other aspects needing to be taken into consideration. In order to put this system to work, Taylor’s functional foremanship has to be abandoned, and unity of command needs to be established. At the same time, decision power is distributed to individuals on lower levels in the organization, and only decisions that exceed the pre-defined decision scope of an employee are referred upwards. This, in turn, strengthens the co-equality of authority and responsibility. Since a Fayol Bridge is not limited to a certain functional area within the organization, but can span over functional boundaries, e.g. from purchasing to manufacturing, it can be considered as a first attempt to create a horizontal integration of related activities under a certain level of self-management, an early business process.
Mackenzie and others (Massie 1965, Pugh et al., 1972) also noted that there is no generally applicable optimum span of control. There are instead several factors influencing the balance between the desired level of control and the manageability of the organization.
Firstly, it depends on the capabilities of the organizational members, managers and workers. It was assumed that no manager would be capable of supervising more than 5-6 direct subordinates. However, this conclusion rested on the assumption that the superior must actively monitor the work of all subordinates. Later on, this statement was qualified when Davis (1951) divided managerial work into two categories, one requiring attention to physical work, the other requiring mental activity. Depending on the type of supervision, a span of 3-8 subordinates was considered adequate for managers at higher levels, while first-level supervisors, i.e. those supervising shop floor personnel, could have up to 30 subordinates.
The neoclassical theorists developed a different solution. They assumed that a considerable number of decisions could be delegated to organizational members at lower organizational levels. This solution would be equivalent to the application of Fayol's Bridge combined with the principle of employee initiative that he proposed. As a result, the need for supervision would be reduced from direct control to exception handling. Under this assumption, they considered that the opportunity of having access to a supervising manager would be sufficient to satisfy the need for control in standard situations. Peter Drucker refers to this principle as the span of managerial responsibility.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n."
},
{
"math_id": 1,
"text": "n (n - 1)"
},
{
"math_id": 2,
"text": "n (2^n / 2 - 1)"
}
] | https://en.wikipedia.org/wiki?curid=916176 |
9164216 | Neyman construction | Neyman construction, named after Jerzy Neyman, is a frequentist method to construct an interval at a confidence level formula_0 such that if we repeat the experiment many times the interval will contain the true value of some parameter a fraction formula_1 of the time.
Theory.
Assume formula_2 are random variables with joint pdf formula_3, which depends on k unknown parameters. For convenience, let formula_4 be the sample space defined by the n random variables and subsequently define a sample point in the sample space as formula_5
Neyman originally proposed defining two functions formula_6 and formula_7 such that, for any sample point formula_8, formula_9.
Given an observation formula_11, the probability that formula_12 lies between formula_13 and formula_14 is defined as formula_15, which can only equal formula_16 or formula_17. These calculated probabilities fail to support meaningful inference about formula_12, since the probability is simply zero or unity. Furthermore, under the frequentist construct the model parameters are unknown constants and not permitted to be random variables.
For example, if formula_18, then formula_19. Likewise, if formula_20, then formula_21.
As Neyman describes in his 1937 paper, suppose that we consider all points in the sample space, that is, formula_10, as a system of random variables defined by the joint pdf described above. Since formula_22 and formula_23 are functions of formula_8, they too are random variables, and one can examine the probability of formula_25 and formula_26 for some formula_24. If formula_27 is the true value of formula_12, we can define formula_22 and formula_23 such that the probability of formula_28 and formula_29 is equal to pre-specified confidence levelformula_30.
That is, formula_31 where formula_32, and formula_25 and formula_26 are the lower and upper confidence limits for formula_12.
Coverage probability.
The coverage probability, formula_33, for Neyman construction is the frequency of experiments in which the confidence interval contains the actual value of interest. Generally, the coverage probability is set to a formula_34 confidence. For Neyman construction, the coverage probability is set to some value formula_33 where formula_35.
Implementation.
A Neyman construction can be carried out by performing multiple experiments that construct data sets corresponding to a given value of the parameter. The experiments are fitted with conventional methods, and the space of fitted parameter values constitutes the band from which the confidence interval can be selected.
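The following is a rough Monte Carlo sketch of this band-construction idea, not a reference implementation (it is not from the original article): for the mean of a Gaussian with known standard deviation, it builds a central acceptance band at each trial parameter value and then inverts the band at an observed value. The grid, sample size, number of simulated experiments, and confidence level are all illustrative assumptions.

```python
# A minimal sketch, assuming NumPy; all numerical choices below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, sigma, C = 10, 1.0, 0.95
theta_grid = np.linspace(-2, 2, 401)         # trial values of the parameter

bands = []
for theta in theta_grid:
    fits = rng.normal(theta, sigma, size=(5000, n)).mean(axis=1)   # fitted value per experiment
    lo, hi = np.quantile(fits, [(1 - C) / 2, (1 + C) / 2])         # central acceptance band
    bands.append((lo, hi))

x_obs = 0.3                                   # an observed sample mean
accepted = [t for t, (lo, hi) in zip(theta_grid, bands) if lo <= x_obs <= hi]
print(min(accepted), max(accepted))           # roughly 0.3 +/- 1.96 * sigma / sqrt(n)
```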
Classic example.
Suppose formula_36, where formula_37 and formula_38 are unknown constants, and we wish to estimate formula_37. We can define two single-valued functions, formula_22 and formula_23, by the process above such that, given a pre-specified confidence level formula_33 and a random sample formula_39:
formula_40
formula_41
where formula_42 is the standard error, and the sample mean and standard deviation are:
formula_43
formula_44
The factor formula_45 follows a "t" distribution with (n-1) degrees of freedom: formula_45 ~ "t"formula_46.
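A minimal numerical sketch of the interval given by formula_40 and formula_41, using SciPy's Student "t" quantile (this example is not from the original article); the data values and confidence level are made up.

```python
# A minimal sketch, assuming NumPy and SciPy; the data are made up and C = 0.95.
import numpy as np
from scipy import stats

x = np.array([4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4])
n, C = len(x), 0.95
xbar, s = x.mean(), x.std(ddof=1)
t = stats.t.ppf(1 - (1 - C) / 2, df=n - 1)   # two-sided t quantile with n-1 degrees of freedom

print(xbar - t * s / np.sqrt(n), xbar + t * s / np.sqrt(n))   # L(X*) and U(X*)
```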
Another Example.
Suppose formula_47 are iid random variables, and let formula_48. Suppose formula_49. We now construct a confidence interval with formula_50 level of confidence. We know formula_51 is sufficient for formula_52. So,
formula_53
formula_54
formula_55
This produces a formula_56 confidence interval for formula_52 where,
formula_57
formula_58.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " C, \\,"
},
{
"math_id": 1,
"text": " C\\,"
},
{
"math_id": 2,
"text": " X_{1},X_{2},...X_{n}"
},
{
"math_id": 3,
"text": "f(x_{1},x_{2},...x_{n} | \\theta_{1},\\theta_{2},...,\\theta_{k})"
},
{
"math_id": 4,
"text": "\\Theta"
},
{
"math_id": 5,
"text": "X=(X_{1},X_{2},...X_{n})"
},
{
"math_id": 6,
"text": "L(x)"
},
{
"math_id": 7,
"text": "U(x)"
},
{
"math_id": 8,
"text": "X"
},
{
"math_id": 9,
"text": "L(X)\\leq U(X)"
},
{
"math_id": 10,
"text": "\\forall X\\in\\Theta"
},
{
"math_id": 11,
"text": "X^'"
},
{
"math_id": 12,
"text": "\\theta_{1}"
},
{
"math_id": 13,
"text": "L(X^')"
},
{
"math_id": 14,
"text": "U(X^')"
},
{
"math_id": 15,
"text": "P(L(X^')\\leq\\theta_{1}\\leq U(X^') | X^')"
},
{
"math_id": 16,
"text": "0"
},
{
"math_id": 17,
"text": "1"
},
{
"math_id": 18,
"text": "\\theta_{1}=5"
},
{
"math_id": 19,
"text": "P(2 \\leq 5\\leq 10)=1"
},
{
"math_id": 20,
"text": "\\theta_{1}=11"
},
{
"math_id": 21,
"text": "P(2 \\leq 11 \\leq 10)=0"
},
{
"math_id": 22,
"text": "L"
},
{
"math_id": 23,
"text": "U"
},
{
"math_id": 24,
"text": "X\\in\\Theta"
},
{
"math_id": 25,
"text": "L(X)"
},
{
"math_id": 26,
"text": "U(X)"
},
{
"math_id": 27,
"text": "\\theta_{1}^'"
},
{
"math_id": 28,
"text": "L(X) \\leq\\theta_{1}^'"
},
{
"math_id": 29,
"text": "\\theta_{1}^'\\leq U(X)"
},
{
"math_id": 30,
"text": ", C"
},
{
"math_id": 31,
"text": "P(L(X)\\leq\\theta_{1}^'\\leq U(X) | \\theta_{1}^')=C"
},
{
"math_id": 32,
"text": "0\\leq C \\leq1"
},
{
"math_id": 33,
"text": "C"
},
{
"math_id": 34,
"text": "95\\%"
},
{
"math_id": 35,
"text": "0 < C < 1"
},
{
"math_id": 36,
"text": "X \\sim N( \\theta,\\sigma^2)"
},
{
"math_id": 37,
"text": "\\theta"
},
{
"math_id": 38,
"text": "\\sigma^2"
},
{
"math_id": 39,
"text": "X^*=(x_1,x_2,...x_n)"
},
{
"math_id": 40,
"text": "L(X^*)=\\bar{x} - t \\frac{s}{ \\sqrt{n}}"
},
{
"math_id": 41,
"text": "U(X^*)=\\bar{x} + t \\frac{s}{ \\sqrt{n}}"
},
{
"math_id": 42,
"text": "s/\\sqrt{n}"
},
{
"math_id": 43,
"text": "\\bar{x}=\\frac{1}{n} \\sum_{i=1}^n x_i=\\frac{1}{n}(x_1,x_2,...x_n)"
},
{
"math_id": 44,
"text": "s=\\sqrt{\\frac{1}{n-1} \\sum_{i=1}^n (x_i- \\bar{x})^2}"
},
{
"math_id": 45,
"text": "t"
},
{
"math_id": 46,
"text": "({1-C}/2,n-1)"
},
{
"math_id": 47,
"text": " X_1, X_2, ... , X_n "
},
{
"math_id": 48,
"text": " T = (X_1, X_2,..., X_n) "
},
{
"math_id": 49,
"text": " T\\sim N(\\mu, \\sigma^2) "
},
{
"math_id": 50,
"text": " C "
},
{
"math_id": 51,
"text": " \\bar{x} "
},
{
"math_id": 52,
"text": " \\mu "
},
{
"math_id": 53,
"text": " p(-Z_\\frac{\\alpha}{2} \\le \\frac{\\bar{x} - \\mu}{\\sigma^2} \\le Z_\\frac{\\alpha}{2} ) = C "
},
{
"math_id": 54,
"text": " p(-Z_\\frac{\\alpha}{2} \\sigma^2 \\le \\bar{x} - \\mu \\le Z_\\frac{\\alpha}{2} \\sigma^2 ) = C "
},
{
"math_id": 55,
"text": " p(\\bar{x} - Z_\\frac{\\alpha}{2} \\sigma^2 \\le \\mu \\le \\bar{x} + Z_\\frac{\\alpha}{2} \\sigma^2 ) = C "
},
{
"math_id": 56,
"text": " 100(C)\\% "
},
{
"math_id": 57,
"text": " L(T) = \\bar{x} - Z_\\frac{\\alpha}{2} \\sigma^2 "
},
{
"math_id": 58,
"text": " U(T) = \\bar{x} + Z_\\frac{\\alpha}{2} \\sigma^2 "
}
] | https://en.wikipedia.org/wiki?curid=9164216 |
9165 | Directed set | Mathematical ordering with upper bounds
In mathematics, a directed set (or a directed preorder or a filtered set) is a nonempty set formula_0 together with a reflexive and transitive binary relation formula_1 (that is, a preorder), with the additional property that every pair of elements has an upper bound. In other words, for any formula_2 and formula_3 in formula_0 there must exist formula_4 in formula_0 with formula_5 and formula_6 A directed set's preorder is called a direction.
The notion defined above is sometimes called an <templatestyles src="Template:Visible anchor/styles.css" />upward directed set. A <templatestyles src="Template:Visible anchor/styles.css" />downward directed set is defined analogously, meaning that every pair of elements is bounded below.
Some authors (and this article) assume that a directed set is directed upward, unless otherwise stated. Other authors call a set directed if and only if it is directed both upward and downward.
Directed sets are a generalization of nonempty totally ordered sets. That is, all totally ordered sets are directed sets (contrast partially ordered sets, which need not be directed). Join-semilattices (which are partially ordered sets) are directed sets as well, but not conversely. Likewise, lattices are directed sets both upward and downward.
In topology, directed sets are used to define nets, which generalize sequences and unite the various notions of limit used in analysis. Directed sets also give rise to direct limits in abstract algebra and (more generally) category theory.
Equivalent definition.
In addition to the definition above, there is an equivalent definition. A directed set is a set formula_0 with a preorder such that every finite subset of formula_0 has an upper bound. In this definition, the existence of an upper bound of the empty subset implies that formula_0 is nonempty.
Examples.
The set of natural numbers formula_7 with the ordinary order formula_1 is one of the most important examples of a directed set. Every totally ordered set is a directed set, including formula_8 formula_9 formula_10 and formula_11
A (trivial) example of a partially ordered set that is not directed is the set formula_12 in which the only order relations are formula_13 and formula_14 A less trivial example is like the following example of the "reals directed towards formula_15" but in which the ordering rule only applies to pairs of elements on the same side of formula_15 (that is, if one takes an element formula_2 to the left of formula_16 and formula_3 to its right, then formula_2 and formula_3 are not comparable, and the subset formula_17 has no upper bound).
Product of directed sets.
Let formula_18 and formula_19 be directed sets. Then the Cartesian product set formula_20 can be made into a directed set by defining formula_21 if and only if formula_22 and formula_23 In analogy to the product order this is the product direction on the Cartesian product. For example, the set formula_24 of pairs of natural numbers can be made into a directed set by defining formula_25 if and only if formula_26 and formula_27
Directed towards a point.
If formula_15 is a real number then the set formula_28 can be turned into a directed set by defining formula_29 if formula_30 (so "greater" elements are closer to formula_15). We then say that the reals have been directed towards formula_31 This is an example of a directed set that is neither partially ordered nor totally ordered. This is because antisymmetry breaks down for every pair formula_2 and formula_3 equidistant from formula_16 where formula_2 and formula_3 are on opposite sides of formula_31 Explicitly, this happens when formula_32 for some real formula_33 in which case formula_29 and formula_34 even though formula_35 Had this preorder been defined on formula_36 instead of formula_37 then it would still form a directed set but it would now have a (unique) greatest element, specifically formula_15; however, it still wouldn't be partially ordered. This example can be generalized to a metric space formula_38 by defining on formula_39 or formula_40 the preorder formula_41 if and only if formula_42
Maximal and greatest elements.
An element formula_43 of a preordered set formula_44 is a "maximal element" if for every formula_45 formula_46 implies formula_47
It is a "greatest element" if for every formula_45 formula_47
Any preordered set with a greatest element is a directed set with the same preorder.
For instance, in a poset formula_48 every lower closure of an element; that is, every subset of the form formula_49 where formula_50 is a fixed element from formula_48 is directed.
Every maximal element of a directed preordered set is a greatest element. Indeed, a directed preordered set is characterized by equality of the (possibly empty) sets of maximal and of greatest elements.
Subset inclusion.
The subset inclusion relation formula_51 along with its dual formula_52 define partial orders on any given family of sets.
A non-empty family of sets is a directed set with respect to the partial order formula_53 (respectively, formula_54) if and only if the intersection (respectively, union) of any two of its members contains as a subset (respectively, is contained as a subset of) some third member.
In symbols, a family formula_55 of sets is directed with respect to formula_53 (respectively, formula_54) if and only if
for all formula_56 there exists some formula_57 such that formula_58 and formula_59 (respectively, formula_60 and formula_61)
or equivalently,
for all formula_56 there exists some formula_57 such that formula_62 (respectively, formula_63).
Many important examples of directed sets can be defined using these partial orders.
For example, by definition, a prefilter or filter base is a non-empty family of sets that is a directed set with respect to the partial order formula_53 and that also does not contain the empty set (this condition prevents triviality because otherwise, the empty set would then be a greatest element with respect to formula_53).
Every π-system, which is a non-empty family of sets that is closed under the intersection of any two of its members, is a directed set with respect to formula_64 Every λ-system is a directed set with respect to formula_65 Every filter, topology, and σ-algebra is a directed set with respect to both formula_53 and formula_65
Tails of nets.
By definition, a net is a function from a directed set and a sequence is a function from the natural numbers formula_66 Every sequence canonically becomes a net by endowing formula_7 with formula_67
If formula_68 is any net from a directed set formula_44 then for any index formula_69 the set formula_70 is called the tail of formula_44 starting at formula_71 The family formula_72 of all tails is a directed set with respect to formula_73 in fact, it is even a prefilter.
Neighborhoods.
If formula_74 is a topological space and formula_15 is a point in formula_75 the set of all neighbourhoods of formula_15 can be turned into a directed set by writing formula_76 if and only if formula_77 contains formula_78 For every formula_79 formula_80 and formula_81: formula_82 since formula_77 contains itself; if formula_76 and formula_83 then formula_84 and formula_85 which implies formula_86 That is, formula_87 Finally, because formula_88 and since both formula_89 and formula_90 we have formula_91 and formula_92
Finite subsets.
The set formula_93 of all finite subsets of a set formula_55 is directed with respect to formula_54 since given any two formula_94 their union formula_95 is an upper bound of formula_0 and formula_96 in formula_97 This particular directed set is used to define the sum formula_98 of a generalized series of an formula_55-indexed collection of numbers formula_99 (or more generally, the sum of elements in an abelian topological group, such as vectors in a topological vector space) as the limit of the net of partial sums formula_100 that is:
formula_101
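As a numerical aside (not part of the original article), the sketch below illustrates this limit for the absolutely summable, made-up family r_i = 1/2^i indexed by the natural numbers: partial sums over finite subsets that grow in an arbitrary order still approach the same value, because the finite subsets ordered by inclusion form a directed set.

```python
# A minimal sketch; the family r_i = 1/2**i and the random growth order are illustrative.
import random

r = {i: 0.5 ** i for i in range(50)}    # an indexed collection of numbers

indices = list(r)
random.shuffle(indices)                 # grow the finite subset F in an arbitrary order
partial = 0.0
for i in indices:
    partial += r[i]                     # partial sum over the current finite subset
print(partial)                          # close to 2, independent of the growth order
```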
Logic.
Let formula_102 be a formal theory, which is a set of sentences with certain properties (details of which can be found in the article on the subject). For instance, formula_102 could be a first-order theory (like Zermelo–Fraenkel set theory) or a simpler zeroth-order theory. The preordered set formula_103 is a directed set because if formula_104 and if formula_105 denotes the sentence formed by logical conjunction formula_106 then formula_107 and formula_108 where formula_109
If formula_110 is the Lindenbaum–Tarski algebra associated with formula_102 then formula_111 is a partially ordered set that is also a directed set.
Contrast with semilattices.
Directed set is a more general concept than (join) semilattice: every join semilattice is a directed set, as the join or least upper bound of two elements is the desired formula_112 The converse does not hold however; witness the directed set {1000,0001,1101,1011,1111} ordered bitwise (e.g. formula_113 holds, but formula_114 does not, since in the last bit 1 > 0), where {1000,0001} has three upper bounds but no least upper bound. (Also note that without 1111, the set is not directed.)
Directed subsets.
The order relation in a directed set is not required to be antisymmetric, and therefore directed sets are not always partial orders. However, the term directed set is also used frequently in the context of posets. In this setting, a subset formula_0 of a partially ordered set formula_115 is called a directed subset if it is a directed set according to the same partial order: in other words, it is not the empty set, and every pair of elements has an upper bound. Here the order relation on the elements of formula_0 is inherited from formula_116; for this reason, reflexivity and transitivity need not be required explicitly.
A directed subset of a poset is not required to be downward closed; a subset of a poset is directed if and only if its downward closure is an ideal. While the definition of a directed set is for an "upward-directed" set (every pair of elements has an upper bound), it is also possible to define a downward-directed set in which every pair of elements has a common lower bound. A subset of a poset is downward-directed if and only if its upper closure is a filter.
Directed subsets are used in domain theory, which studies directed-complete partial orders. These are posets in which every upward-directed set is required to have a least upper bound. In this context, directed subsets again provide a generalization of convergent sequences.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "\\,\\leq\\,"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "b"
},
{
"math_id": 4,
"text": "c"
},
{
"math_id": 5,
"text": "a \\leq c"
},
{
"math_id": 6,
"text": "b \\leq c."
},
{
"math_id": 7,
"text": "\\N"
},
{
"math_id": 8,
"text": "(\\N, \\leq),"
},
{
"math_id": 9,
"text": "(\\N, \\geq),"
},
{
"math_id": 10,
"text": "(\\Reals, \\leq),"
},
{
"math_id": 11,
"text": "(\\Reals, \\geq)."
},
{
"math_id": 12,
"text": "\\{a, b\\},"
},
{
"math_id": 13,
"text": "a \\leq a"
},
{
"math_id": 14,
"text": "b \\leq b."
},
{
"math_id": 15,
"text": "x_0"
},
{
"math_id": 16,
"text": "x_0,"
},
{
"math_id": 17,
"text": "\\{ a, b \\}"
},
{
"math_id": 18,
"text": "\\mathbb{D}_1"
},
{
"math_id": 19,
"text": "\\mathbb{D}_2"
},
{
"math_id": 20,
"text": "\\mathbb{D}_1 \\times \\mathbb{D}_2"
},
{
"math_id": 21,
"text": "\\left(n_1, n_2\\right) \\leq \\left(m_1, m_2\\right)"
},
{
"math_id": 22,
"text": "n_1 \\leq m_1"
},
{
"math_id": 23,
"text": "n_2 \\leq m_2."
},
{
"math_id": 24,
"text": "\\N \\times \\N"
},
{
"math_id": 25,
"text": "\\left(n_0, n_1\\right) \\leq \\left(m_0, m_1\\right)"
},
{
"math_id": 26,
"text": "n_0 \\leq m_0"
},
{
"math_id": 27,
"text": "n_1 \\leq m_1."
},
{
"math_id": 28,
"text": "I := \\R \\backslash \\lbrace x_0 \\rbrace"
},
{
"math_id": 29,
"text": "a \\leq_I b"
},
{
"math_id": 30,
"text": "\\left|a - x_0\\right| \\geq \\left|b - x_0\\right|"
},
{
"math_id": 31,
"text": "x_0."
},
{
"math_id": 32,
"text": "\\{a, b\\} = \\left\\{x_0 - r, x_0 + r\\right\\}"
},
{
"math_id": 33,
"text": "r \\neq 0,"
},
{
"math_id": 34,
"text": "b \\leq_I a"
},
{
"math_id": 35,
"text": "a \\neq b."
},
{
"math_id": 36,
"text": "\\R"
},
{
"math_id": 37,
"text": "\\R \\backslash \\lbrace x_0 \\rbrace"
},
{
"math_id": 38,
"text": "(X, d)"
},
{
"math_id": 39,
"text": "X"
},
{
"math_id": 40,
"text": "X \\setminus \\left\\{x_0\\right\\}"
},
{
"math_id": 41,
"text": "a \\leq b"
},
{
"math_id": 42,
"text": "d\\left(a, x_0\\right) \\geq d\\left(b, x_0\\right)."
},
{
"math_id": 43,
"text": "m"
},
{
"math_id": 44,
"text": "(I, \\leq)"
},
{
"math_id": 45,
"text": "j \\in I,"
},
{
"math_id": 46,
"text": "m \\leq j"
},
{
"math_id": 47,
"text": "j \\leq m."
},
{
"math_id": 48,
"text": "P,"
},
{
"math_id": 49,
"text": "\\{a \\in P : a \\leq x\\}"
},
{
"math_id": 50,
"text": "x"
},
{
"math_id": 51,
"text": "\\,\\subseteq,\\,"
},
{
"math_id": 52,
"text": "\\,\\supseteq,\\,"
},
{
"math_id": 53,
"text": "\\,\\supseteq\\,"
},
{
"math_id": 54,
"text": "\\,\\subseteq\\,"
},
{
"math_id": 55,
"text": "I"
},
{
"math_id": 56,
"text": "A, B \\in I,"
},
{
"math_id": 57,
"text": "C \\in I"
},
{
"math_id": 58,
"text": "A \\supseteq C"
},
{
"math_id": 59,
"text": "B \\supseteq C"
},
{
"math_id": 60,
"text": "A \\subseteq C"
},
{
"math_id": 61,
"text": "B \\subseteq C"
},
{
"math_id": 62,
"text": "A \\cap B \\supseteq C"
},
{
"math_id": 63,
"text": "A \\cup B \\subseteq C"
},
{
"math_id": 64,
"text": "\\,\\supseteq\\,."
},
{
"math_id": 65,
"text": "\\,\\subseteq\\,."
},
{
"math_id": 66,
"text": "\\N."
},
{
"math_id": 67,
"text": "\\,\\leq.\\,"
},
{
"math_id": 68,
"text": "x_{\\bull} = \\left(x_i\\right)_{i \\in I}"
},
{
"math_id": 69,
"text": "i \\in I,"
},
{
"math_id": 70,
"text": "x_{\\geq i} := \\left\\{x_j : j \\geq i \\text{ with } j \\in I\\right\\}"
},
{
"math_id": 71,
"text": "i."
},
{
"math_id": 72,
"text": "\\operatorname{Tails}\\left(x_{\\bull}\\right) := \\left\\{x_{\\geq i} : i \\in I\\right\\}"
},
{
"math_id": 73,
"text": "\\,\\supseteq;\\,"
},
{
"math_id": 74,
"text": "T"
},
{
"math_id": 75,
"text": "T,"
},
{
"math_id": 76,
"text": "U \\leq V"
},
{
"math_id": 77,
"text": "U"
},
{
"math_id": 78,
"text": "V."
},
{
"math_id": 79,
"text": "U,"
},
{
"math_id": 80,
"text": "V,"
},
{
"math_id": 81,
"text": "W"
},
{
"math_id": 82,
"text": "U \\leq U"
},
{
"math_id": 83,
"text": "V \\leq W,"
},
{
"math_id": 84,
"text": "U \\supseteq V"
},
{
"math_id": 85,
"text": "V \\supseteq W,"
},
{
"math_id": 86,
"text": "U \\supseteq W."
},
{
"math_id": 87,
"text": "U \\leq W."
},
{
"math_id": 88,
"text": "x_0 \\in U \\cap V,"
},
{
"math_id": 89,
"text": "U \\supseteq U \\cap V"
},
{
"math_id": 90,
"text": "V \\supseteq U \\cap V,"
},
{
"math_id": 91,
"text": "U \\leq U \\cap V"
},
{
"math_id": 92,
"text": "V \\leq U \\cap V."
},
{
"math_id": 93,
"text": "\\operatorname{Finite}(I)"
},
{
"math_id": 94,
"text": "A, B \\in \\operatorname{Finite}(I),"
},
{
"math_id": 95,
"text": "A \\cup B \\in \\operatorname{Finite}(I)"
},
{
"math_id": 96,
"text": "B"
},
{
"math_id": 97,
"text": "\\operatorname{Finite}(I)."
},
{
"math_id": 98,
"text": "{\\textstyle\\sum\\limits_{i \\in I}} r_i"
},
{
"math_id": 99,
"text": "\\left(r_i\\right)_{i \\in I}"
},
{
"math_id": 100,
"text": "F \\in \\operatorname{Finite}(I) \\mapsto {\\textstyle\\sum\\limits_{i \\in F}} r_i;"
},
{
"math_id": 101,
"text": "\\sum_{i \\in I} r_i ~:=~ \\lim_{F \\in \\operatorname{Finite}(I)} \\ \\sum_{i \\in F} r_i ~=~ \\lim \\left\\{\\sum_{i \\in F} r_i \\,: F \\subseteq I, F \\text{ finite }\\right\\}."
},
{
"math_id": 102,
"text": "S"
},
{
"math_id": 103,
"text": "(S, \\Leftarrow)"
},
{
"math_id": 104,
"text": "A, B \\in S"
},
{
"math_id": 105,
"text": "C := A \\wedge B"
},
{
"math_id": 106,
"text": "\\,\\wedge,\\,"
},
{
"math_id": 107,
"text": "A \\Leftarrow C"
},
{
"math_id": 108,
"text": "B \\Leftarrow C"
},
{
"math_id": 109,
"text": "C \\in S."
},
{
"math_id": 110,
"text": "S / \\sim"
},
{
"math_id": 111,
"text": "\\left(S / \\sim, \\Leftarrow\\right)"
},
{
"math_id": 112,
"text": "c."
},
{
"math_id": 113,
"text": "1000 \\leq 1011"
},
{
"math_id": 114,
"text": "0001 \\leq 1000"
},
{
"math_id": 115,
"text": "(P, \\leq)"
},
{
"math_id": 116,
"text": "P"
}
] | https://en.wikipedia.org/wiki?curid=9165 |
91652 | Flag of Nepal | National flag
The national flag of Nepal () is the world's only non-rectangular flag which is used as both the state flag and civil flag of a sovereign country. The flag is a simplified combination of two single pennons (or pennants), known as a double-pennon. Its crimson red is the symbol of bravery and it also represents the color of the rhododendron, Nepal's national flower, while the blue border is the color of peace. Until 1962, the flag's emblems, both the sun and the crescent moon, had human faces, but they were removed to modernize the flag.
The current flag was adopted on 16 December 1962, along with the formation of a new constitutional government. Shankar Nath Rimal, a civil engineer, standardised the flag on the request of King Mahendra. It borrows from the original, traditional design, used throughout the 19th and 20th centuries, and is a combination of the two individual pennons used by rival branches of the ruling dynasty.
History.
Historically, triangular flags were very common in South Asia: their compact shape let them flutter even in the lightest wind, making them visible over long distances. Traces of triangular flags can be found in Hinduism. The flag's history is vague and there are no specific accounts of its creator. Nepal has used both quadrilateral and non-quadrilateral flags throughout its history.
The flags of almost all states in South Asia were once triangular. A 1928 French book about Nepal shows a double-pennant flag with a green border rather than the modern blue. There are other forms of pennant-type flags, mostly used in Hindu and Buddhist temples around Nepal. Many accounts date the creation of the double-pennon to King Prithvi Narayan Shah. The flag of the ancient Gorkha kingdom began as a single triangular war banner of the Shah kings, red in colour and bearing various deities and other symbols. After Prithvi Narayan Shah unified the small principalities of Nepal, the double-pennon flag became the standard. According to some historians, the Rana ruler Jung Bahadur changed the sun and moon symbols into faces of the sun and moon, symbolizing the kings as Rajputs of the Lunar dynasty and the Ranas themselves as Rajputs of the Solar dynasty. Nepal has simply maintained its ancient tradition, while every other state has adopted a rectangular or square version in the European vexillological tradition.
The present flag of Nepal was adopted under the Nepalese constitution of 16 December 1962. The modern flag appears to be a combination of the ancient Mustang Kingdom's flag and the flag already in use by the former Gorkha Kingdom; the colour gradients were adopted from the Mustang Kingdom. Prior to 1962, both symbols on the flag, the sun and moon, had human faces. The constitution dedicated an entire section to the precise size and shape of the flag, since people were drawing it incorrectly. This section has been retained to this day, even though the country has since adopted several new constitutions.
In May 2008 during the drafting of the new constitution, various political parties demanded changes to the flag's design since it symbolized Hinduism and monarchy, but this proposal was rejected.
Symbolism.
In modern times, the flag's symbolism has evolved to incorporate several meanings. The crimson red indicates the bravery of Nepali people and is the country's national color, while the blue border represents peace and harmony. The colors are often found in Nepalese decoration and works of art. One theory holds that the two points represented peace and hard work, using the symbols of the moon and sun respectively. Traditionally, the flag's symbolism derives from Hinduism, as is common in Hindu cultures; however, the modern and government-sanctioned interpretation refers to both Hinduism and Buddhism, the main religions of the country.
The inclusion of the celestial bodies indicates Nepal's permanence and the hope that Nepal will enjoy the same longevity as the Sun and the Moon. The moon also symbolizes the cool weather of the Himalayas, whereas the sun symbolizes the heat and the high temperature of the southern lowlands (Terai). Additionally, the stylized moon represents the calm demeanor and purity of spirit of the Nepali people, while the stylized sun represents their fierce resolve.
Flag layout.
A precise geometrical description of the Nepalese national flag was specified in Article 5, Schedule 1 of the former constitution of the Kingdom of Nepal, adopted on 9 November 1990. Schedule 1 of the Constitution of Nepal, adopted on 20 September 2015, details a specific method of making the national flag of Nepal.
Aspect ratio.
When constructed according to the stated geometric construction law, the ratio of the height of the flag to the longest width is an irrational number:
formula_0 ≈ 1:1.21901033… (OEIS: ).
This ratio is the least root of the quartic polynomial formula_1
and arises from the addition of the blue border after construction of the red field. The bounding rectangle of the red field alone has the rational aspect ratio 3:4 (=1:1.333…).
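As a quick numerical check, the closed-form expression in formula_0 can be evaluated directly. The snippet below is only an illustrative sketch (it is not part of the constitutional construction law) and simply confirms the approximate decimal value quoted above.

```python
from math import sqrt

# Evaluate the closed-form ratio given in formula_0 (height : width = 1 : r).
r = (6136891429688
     - 306253616715 * sqrt(2)
     - sqrt(118 - 48 * sqrt(2)) * (934861968 + 20332617192 * sqrt(2))
     ) / 4506606337686

print(r)  # ≈ 1.21901033..., matching the value quoted above
```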
Variations at the Olympics.
Large-scale production of the Nepalese flag is difficult because of its exact proportions, and flags displayed at large events are often noticeably out of shape. The current Olympic protocol requires that all flags used during the Games be manufactured in a 2:3 ratio. On some occasions, such as at some venues of the 2016 Summer Olympics, the flag has been printed on a white cloth. The protocol manual for the 2020 Summer Olympics, in contrast, specifically called out Nepal as the exception to the standard size requirement, instead requiring the Nepali flag to be the same height as the other flags.
Incorrect versions.
During a 2018 visit of the Indian Prime Minister Narendra Modi to Janakpur, a version of the flag with incorrect shape and geometrical proportions was flown by officials, causing outrage on social media and among national personnel.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1:\\frac{6136891429688 - 306253616715\\sqrt{2} - \\sqrt{118-48 \\sqrt{2}} \\left( 934861968 + 20332617192 \\sqrt{2} \\right) }{4506606337686}"
},
{
"math_id": 1,
"text": "243356742235044 r^4 - 1325568548812608 r^3 + 2700899847521244 r^2 - 2439951444086880 r + 824634725389225,"
}
] | https://en.wikipedia.org/wiki?curid=91652 |
9165727 | Lung compliance | Ratio of volume change per pressure change in the lung
Lung compliance, or pulmonary compliance, is a measure of the lung's ability to stretch and expand (distensibility of elastic tissue). In clinical practice it is separated into two different measurements, static compliance and dynamic compliance. Static lung compliance is the change in volume for any given applied pressure. Dynamic lung compliance is the compliance of the lung at any given time during actual movement of air.
Low compliance indicates a stiff lung (one with high elastic recoil) and can be thought of as a thick balloon – this is the case often seen in fibrosis. High compliance indicates a pliable lung (one with low elastic recoil) and can be thought of as a grocery bag – this is the case often seen in emphysema. Compliance is highest at moderate lung volumes, and much lower at volumes which are very low or very high. The compliance of the lungs demonstrate lung hysteresis; that is, the compliance is different on inspiration and expiration for identical volume.
Calculation.
Pulmonary compliance is calculated using the following equation, where Δ"V" is the change in volume, and Δ"P" is the change in pleural pressure:
formula_0
For example, suppose a patient inhales 500 mL of air from a spirometer, with an intrapleural pressure of −5 cm H2O before inspiration and −10 cm H2O at the end of inspiration. Then:
formula_1
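The worked example above can be reproduced in a few lines of code. The snippet below is a minimal illustrative sketch (the function name and sign convention are choices made here, not a standard clinical API); it uses the magnitude of the pleural-pressure change, as in the example.

```python
def compliance(delta_v_litres, p_before_cmh2o, p_after_cmh2o):
    # C = ΔV / ΔP, using the magnitude of the pleural pressure change.
    return delta_v_litres / abs(p_after_cmh2o - p_before_cmh2o)

# 500 mL inhaled while pleural pressure falls from -5 to -10 cm H2O:
print(compliance(0.5, -5, -10))  # 0.1 L per cm H2O, as in the worked example
```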
Static compliance ("C"stat).
Static compliance represents pulmonary compliance during periods without gas flow, such as during an inspiratory pause. It can be calculated with the formula:
formula_2
where
"V"T = tidal volume;
"P"plat = plateau pressure;
"PEEP" = positive end-expiratory pressure.
"P"plat is measured at the end of inhalation and prior to exhalation by using an inspiratory hold maneuver. During this maneuver, airflow is transiently (~0.5 sec) discontinued, which eliminates the effects of airway resistance. "P"plat is never bigger than PIP and is typically <10 cm H2O lower than PIP when airway resistance is not elevated.
Dynamic compliance ("C"dyn).
Dynamic compliance represents pulmonary compliance during periods of gas flow, such as during active inspiration. Dynamic compliance is always less than or equal to static lung compliance because PIP − PEEP is always greater than or equal to "P"plat − PEEP. It can be calculated using the following equation,
formula_3
where
"C"dyn = Dynamic compliance;
"V"T = tidal volume;
"PIP" = Peak inspiratory pressure (the maximum pressure during inspiration);
"PEEP" = Positive End Expiratory Pressure:
Alterations in airway resistance, lung compliance and chest wall compliance influence "C"dyn.
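For illustration, the two formulas can be computed side by side. The following sketch uses hypothetical ventilator readings (the values of VT, Pplat, PIP and PEEP are examples invented here, not taken from the article).

```python
def static_compliance(tidal_volume_ml, p_plat, peep):
    # C_stat = VT / (Pplat - PEEP)
    return tidal_volume_ml / (p_plat - peep)

def dynamic_compliance(tidal_volume_ml, pip, peep):
    # C_dyn = VT / (PIP - PEEP)
    return tidal_volume_ml / (pip - peep)

# Hypothetical readings: VT = 500 mL, Pplat = 20, PIP = 25, PEEP = 5 (cm H2O)
print(static_compliance(500, 20, 5))   # 33.3 mL per cm H2O
print(dynamic_compliance(500, 25, 5))  # 25.0 mL per cm H2O (never exceeds C_stat)
```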
Dimensionality and physical analogues.
The dimensions of compliance in respiratory physiology are inconsistent with the dimensions of compliance in physics-based applications. In physiology,
formula_4
whereas in newtonian physics, compliance is defined as the inverse of the elastic stiffness constant "k",
formula_5
Pulmonary compliance is analogous to electrical capacitance, with lung volume playing the role of charge and pressure the role of voltage.
Clinical significance.
Lung compliance is an important measurement in respiratory physiology.
Pulmonary surfactant increases compliance by decreasing the surface tension of water. The internal surface of the alveolus is covered with a thin coat of fluid. The water in this fluid has a high surface tension, and provides a force that could collapse the alveolus. The presence of surfactant in this fluid breaks up the surface tension of water, making it less likely that the alveolus can collapse inward. If the alveolus were to collapse, a great force would be required to open it, meaning that compliance would decrease drastically. Lung volume at any given pressure during inhalation is less than the lung volume at any given pressure during exhalation, which is called hysteresis.
Functional significance of abnormally high or low compliance.
Low compliance indicates a stiff lung and means extra work is required to bring in a normal volume of air. This occurs as the lungs in this case become fibrotic, lose their distensibility and become stiffer.
In a highly compliant lung, as in emphysema, the elastic tissue is damaged by enzymes. These enzymes are secreted by leukocytes (white blood cells) in response to a variety of inhaled irritants, such as cigarette smoke. Patients with emphysema have a very high lung compliance due to the poor elastic recoil, and they have extreme difficulty exhaling air. In this condition extra work is required to get air out of the lungs. In addition, patients often have difficulty inhaling air as well, because a highly compliant lung results in many areas of atelectasis (alveolar collapse), which make inflation difficult. Compliance also increases with increasing age.
Both peak inspiratory and plateau pressure increase when elastic resistance increases or when pulmonary compliance decreases (e.g. during abdominal insufflation, ascites, intrinsic lung disease, obesity, pulmonary edema, tension pneumothorax). On the other hand, only peak inspiratory pressure increases (plateau pressure unchanged) when airway resistance increases (e.g. airway compression, bronchospasm, mucous plug, kinked tube, secretions, foreign body).
Compliance decreases in the following cases:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Compliance = \\frac{ \\Delta V}{ \\Delta P} "
},
{
"math_id": 1,
"text": " Compliance = \\frac{\\Delta V}{\\Delta P} = \\frac{.5\\;\\ce L}{-5\\;\\ce{cm \\, H2O} - (-10\\;\\ce{cm \\, H2O})} = \\frac{.5\\;\\ce L}{5\\;\\ce{cm \\, H2O}} = 0.1\\;\\ce L\\;\\times\\;\\ce{cm \\, H2O^{-1}}"
},
{
"math_id": 2,
"text": "C_{stat} = \\frac{V_T}{P_{plat}-\\mathrm{PEEP}}"
},
{
"math_id": 3,
"text": "C_{dyn} = \\frac{V_T}\\mathrm{PIP-PEEP}"
},
{
"math_id": 4,
"text": "\n [C_\\text{pulmonary}] = \\frac{[\\Delta V]}{[\\Delta P]} = \\frac{L^3}{ML^{-1}T^{-2}} = \\frac{L^4T^2}{M},\n"
},
{
"math_id": 5,
"text": "\n [C_\\text{physics}] = \\frac{1}{[k]} = \\frac{[\\delta x]}{[F]} = \\frac{L}{MLT^{-2}} = \\frac{T^2}{M}.\n"
}
] | https://en.wikipedia.org/wiki?curid=9165727 |
9166436 | Multitaper | In signal processing, multitaper analysis is a spectral density estimation technique developed by David J. Thomson. It can estimate the power spectrum "S""X" of a stationary ergodic finite-variance random process "X", given a finite contiguous realization of "X" as data.
Motivation.
The multitaper method overcomes some of the limitations of non-parametric Fourier analysis. When applying the Fourier transform to extract spectral information from a signal, we assume that each Fourier coefficient is a reliable representation of the amplitude and relative phase of the corresponding component frequency. This assumption, however, is not generally valid for empirical data. For instance, a single trial represents only one noisy realization of the underlying process of interest. A comparable situation arises in statistics when estimating measures of central tendency i.e., it is bad practice to estimate qualities of a population using individuals or very small samples. Likewise, a single sample of a process does not necessarily provide a reliable estimate of its spectral properties. Moreover, the naive power spectral density obtained from the signal's raw Fourier transform is a biased estimate of the true spectral content.
These problems are often overcome by averaging over many realizations of the same event after applying a taper to each trial. However, this method is unreliable with small data sets and undesirable when one does not wish to attenuate signal components that vary across trials. Furthermore, even when many trials are available the untapered periodogram is generally biased (with the exception of white noise) and the bias depends upon the length of each realization, not the number of realizations recorded. Applying a single taper reduces bias but at the cost of increased estimator variance due to attenuation of activity at the start and end of each recorded segment of the signal.
The multitaper method partially obviates these problems by obtaining multiple independent estimates from the same sample. Each data taper is multiplied element-wise by the signal to provide a windowed trial from which one estimates the power at each component frequency. As each taper is pairwise orthogonal to all other tapers, the window functions are uncorrelated with one another. The final spectrum is obtained by averaging over all the tapered spectra thus recovering some of the information that is lost due to partial attenuation of the signal that results from applying individual tapers.
This method is especially useful when a small number of trials is available as it reduces the estimator variance beyond what is possible with single taper methods. Moreover, even when many trials are available the multitaper approach is useful as it permits more rigorous control of the trade-off between bias and variance than what is possible in the single taper case.
Thomson chose the Slepian functions or discrete prolate spheroidal sequences as tapers since these vectors are mutually orthogonal and possess desirable spectral concentration properties (see the section on Slepian sequences). In practice, a weighted average is often used to compensate for increased energy loss at higher order tapers.
Formulation.
Consider a p-dimensional zero mean stationary stochastic process
formula_0
Here "T" denotes the matrix transposition. In neurophysiology for example, "p" refers to the total number of channels and
hence formula_1 can represent simultaneous measurement of
electrical activity of those "p" channels. Let the sampling interval
between observations be formula_2, so that the Nyquist frequency is formula_3.
The multitaper spectral estimator utilizes several different data tapers which are orthogonal to each other. The multitaper cross-spectral estimator between channel "l" and "m" is the average of K direct cross-spectral estimators between the same pair of channels ("l" and "m") and hence takes the form
formula_4
Here, formula_5 (for formula_6) is the "k"th direct cross spectral estimator between channel "l" and "m" and is given by
formula_7
where
formula_8
The Slepian sequences.
The sequence formula_9 is the data taper for the "k"th direct cross-spectral estimator formula_10 and is chosen as follows:
We choose a set of "K" orthogonal data tapers such that each one provides good protection against leakage. These are given by the Slepian sequences, after David Slepian (also known in the literature as discrete prolate spheroidal sequences, or DPSS for short), with parameter "W" and orders "k" = 0 to "K" − 1. The maximum order "K" is chosen to be less than the Shannon number formula_11. The quantity 2"W" defines the resolution bandwidth for the spectral concentration problem and formula_12. When "l" = "m", we get the multitaper estimator for the auto-spectrum of the "l"th channel. In recent years, a dictionary based on modulated DPSS was proposed as an overcomplete alternative to DPSS.
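For a single channel, the estimator above can be sketched in a few lines using the DPSS tapers available in SciPy. This is only an illustrative, unweighted implementation (no adaptive weighting, and it uses unit-energy tapers rather than the normalization convention in the formulas above, so constants may differ by a fixed factor).

```python
import numpy as np
from scipy.signal.windows import dpss  # Slepian (DPSS) tapers

def multitaper_psd(x, dt, NW=4.0, K=7):
    """Unweighted average of K direct (eigen)spectral estimates; a sketch only."""
    x = np.asarray(x, dtype=float)
    N = x.size
    tapers = dpss(N, NW, Kmax=K)          # shape (K, N); orthonormal tapers
    J = np.fft.fft(tapers * x, axis=-1)   # eigencoefficients J_k(f)
    S_k = dt * np.abs(J) ** 2             # direct spectral estimators (two-sided)
    freqs = np.fft.fftfreq(N, d=dt)
    return freqs, S_k.mean(axis=0)        # average over the K tapers

# Example: a 30 Hz sinusoid in white noise, sampled at 1 kHz
rng = np.random.default_rng(0)
t = np.arange(2048) / 1000.0
x = np.sin(2 * np.pi * 30 * t) + rng.standard_normal(t.size)
freqs, S = multitaper_psd(x, dt=1e-3)
```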
See also Window function:DPSS or Slepian window
Applications.
Not limited to time series, the multitaper method is easily extensible to multiple Cartesian dimensions using custom Slepian functions, and can be reformulated for spectral estimation on the sphere using Slepian functions constructed from spherical harmonics, for applications in geophysics and cosmology among others. An extensive treatment of the application of this method to the analysis of multi-trial, multi-channel data generated in neuroscience, biomedical engineering and elsewhere can be found here. This technique is currently used in the spectral analysis toolkit of Chronux.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{X}(t) = {\\lbrack X(1,t), X(2,t), \\dots , X(p,t)\n\\rbrack}^T"
},
{
"math_id": 1,
"text": "\\mathbf{X}(t)"
},
{
"math_id": 2,
"text": " \\Delta t"
},
{
"math_id": 3,
"text": "f_N=1/(2 \\Delta t)"
},
{
"math_id": 4,
"text": "\\hat{S}^{lm} (f)= \\frac{1}{K} \\sum_{k=0}^{K-1} \\hat{S}_k^{lm}(f)."
},
{
"math_id": 5,
"text": "\\hat{S}_{k}^{lm}(f)"
},
{
"math_id": 6,
"text": "0 \\leq k \\leq K-1"
},
{
"math_id": 7,
"text": " \\hat{S}_{k}^{lm}(f) = \\frac{1}{N\\Delta t}\n {\\lbrack J_{k}^{l}(f) \\rbrack}^{*} {\\lbrack J_{k}^{m}(f)\n \\rbrack}, \n"
},
{
"math_id": 8,
"text": "J_k^l(f) = \\sum_{t=1}^N h_{t,k}X(l,t) e^{-i 2\\pi ft\\Delta t}.\n"
},
{
"math_id": 9,
"text": "\\lbrace h_{t,k} \\rbrace "
},
{
"math_id": 10,
"text": "\\hat{S}_k^{lm}(f)"
},
{
"math_id": 11,
"text": "2NW\\Delta t"
},
{
"math_id": 12,
"text": "W \\in\n(0,f_{N})"
}
] | https://en.wikipedia.org/wiki?curid=9166436 |
9168589 | Watterson estimator | In population genetics, the Watterson estimator is a method for describing the genetic diversity in a population. It was developed by Margaret Wu and G. A. Watterson in the 1970s. It is estimated by counting the number of polymorphic sites. It is a measure of the "population mutation rate" (the product of the effective population size and the neutral mutation rate) from the observed nucleotide diversity of a population. formula_0, where formula_1 is the effective population size and formula_2 is the per-generation mutation rate of the population of interest ( ). The assumptions made are that there is a sample of formula_3 haploid individuals from the population of interest, that there are infinitely many sites capable of varying (so that mutations never overlay or reverse one another), and that formula_4.
Because the number of segregating sites counted will increase with the number of sequences looked at, the correction factor formula_5 is used.
The estimate of formula_6, often denoted as formula_7, is
formula_8
where formula_9 is the number of segregating sites (an example of a segregating site would be a single-nucleotide polymorphism) in the sample and
formula_10
is the formula_11th harmonic number.
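A direct implementation of the estimator is a one-liner once the harmonic correction factor is computed; the sketch below uses a hypothetical example (the numbers are invented for illustration).

```python
def watterson_theta(num_segregating_sites, n):
    # theta_w = K / a_n, where a_n is the (n - 1)th harmonic number
    # and n is the number of sampled (haploid) sequences.
    a_n = sum(1.0 / i for i in range(1, n))
    return num_segregating_sites / a_n

# Hypothetical sample: 10 segregating sites among n = 5 sequences.
# a_5 = 1 + 1/2 + 1/3 + 1/4 ≈ 2.083, so theta_w ≈ 4.8
print(watterson_theta(10, 5))
```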
This estimate is based on coalescent theory. Watterson's estimator is commonly used for its simplicity. When its assumptions are met, the estimator is unbiased and the variance of the estimator decreases with increasing sample size or recombination rate. However, the estimator can be biased by population structure. For example, formula_12 is downwardly biased in an exponentially growing population. It can also be biased by violation of the infinite-sites mutational model; if multiple mutations can overwrite one another, Watterson's estimator will be biased downward.
Comparing the value of Watterson's estimator to the nucleotide diversity is the basis of Tajima's D, which allows inference of the evolutionary regime of a given locus.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta = 4N_e\\mu"
},
{
"math_id": 1,
"text": "N_e"
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "n \\ll N_e"
},
{
"math_id": 5,
"text": "a_n"
},
{
"math_id": 6,
"text": "\\theta"
},
{
"math_id": 7,
"text": "\\widehat {\\theta\\,}_w"
},
{
"math_id": 8,
"text": "\n\\widehat {\\theta\\,}_w = { K \\over a_n },\n"
},
{
"math_id": 9,
"text": "K"
},
{
"math_id": 10,
"text": "\na_n = \\sum^{n-1}_{i=1} {1 \\over i}\n"
},
{
"math_id": 11,
"text": "(n-1)"
},
{
"math_id": 12,
"text": "\\widehat{\\theta\\,}_w"
}
] | https://en.wikipedia.org/wiki?curid=9168589 |
9169137 | Dirichlet-multinomial distribution | Family of discrete multivariate probability distributions
In probability theory and statistics, the Dirichlet-multinomial distribution is a family of discrete multivariate probability distributions on a finite support of non-negative integers. It is also called the Dirichlet compound multinomial distribution (DCM) or multivariate Pólya distribution (after George Pólya). It is a compound probability distribution, where a probability vector p is drawn from a Dirichlet distribution with parameter vector formula_0, and an observation drawn from a multinomial distribution with probability vector p and number of trials "n". The Dirichlet parameter vector captures the prior belief about the situation and can be seen as a pseudocount: observations of each outcome that occur before the actual data is collected. The compounding corresponds to a Pólya urn scheme. It is frequently encountered in Bayesian statistics, machine learning, empirical Bayes methods and classical statistics as an overdispersed multinomial distribution.
It reduces to the categorical distribution as a special case when "n" = 1. It also approximates the multinomial distribution arbitrarily well for large "α". The Dirichlet-multinomial is a multivariate extension of the beta-binomial distribution, as the multinomial and Dirichlet distributions are multivariate versions of the binomial distribution and beta distributions, respectively.
Specification.
Dirichlet-multinomial as a compound distribution.
The Dirichlet distribution is a conjugate distribution to the multinomial distribution. This fact leads to an analytically tractable compound distribution.
For a random vector of category counts formula_1, distributed according to a multinomial distribution, the marginal distribution is obtained by integrating over the distribution for p, which can be thought of as a random vector following a Dirichlet distribution:
formula_2
which results in the following explicit formula:
formula_3
where formula_4 is defined as the sum formula_5. Another form for this same compound distribution, written more compactly in terms of the beta function, "B", is as follows:
formula_6
The latter form emphasizes the fact that zero count categories can be ignored in the calculation - a useful fact when the number of categories is very large and sparse (e.g. word counts in documents).
Observe that this probability mass function reduces to the beta-binomial distribution when formula_7. It can also be shown that it approaches the multinomial distribution as formula_8 approaches infinity. The parameter formula_8 governs the degree of overdispersion or burstiness relative to the multinomial. Alternative choices to denote formula_8 found in the literature are S and A.
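For numerical work the compound probability is usually evaluated on a log scale with the log-gamma function, since the gamma factors overflow quickly. The following sketch implements the explicit formula above; recent SciPy releases also ship a `scipy.stats.dirichlet_multinomial` object that can serve the same purpose.

```python
import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_logpmf(x, alpha):
    # log Pr(x | n, alpha) following the explicit formula above,
    # with all gamma functions evaluated on the log scale.
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n, a0 = x.sum(), alpha.sum()
    return (gammaln(a0) + gammaln(n + 1) - gammaln(n + a0)
            + np.sum(gammaln(x + alpha) - gammaln(alpha) - gammaln(x + 1)))

# With a flat prior alpha = (1, 1, 1), every count vector of n = 5 is equally
# likely, so the probability is 1/21 ≈ 0.0476:
print(np.exp(dirichlet_multinomial_logpmf([3, 1, 1], [1.0, 1.0, 1.0])))
```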
Dirichlet-multinomial as an urn model.
The Dirichlet-multinomial distribution can also be motivated via an urn model for positive integer values of the vector formula_0, known as the Polya urn model. Specifically, imagine an urn containing balls of formula_9 colors, with formula_10 balls of the "i"th color, from which random draws are made. When a ball is randomly drawn and observed, two balls of the same color are returned to the urn. If this is performed formula_11 times, then the probability of observing the random vector formula_12 of color counts is a Dirichlet-multinomial with parameters formula_11 and formula_13.
If the random draws are with simple replacement (no balls over and above the observed ball are added to the urn), then the distribution follows a multinomial distribution and if the random draws are made without replacement, the distribution follows a multivariate hypergeometric distribution.
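The urn scheme is easy to simulate, which also gives a quick empirical check of the compound formula above. The sketch below is illustrative only; it works for any positive formula_0, although the literal "two balls returned" story requires integer counts.

```python
import numpy as np

def polya_urn_counts(alpha, n, rng):
    # Draw n balls from a Polya urn with initial composition alpha;
    # after each draw the drawn colour gains one extra ball.
    weights = np.array(alpha, dtype=float)
    counts = np.zeros(len(weights), dtype=int)
    for _ in range(n):
        k = rng.choice(len(weights), p=weights / weights.sum())
        counts[k] += 1
        weights[k] += 1.0
    return counts

rng = np.random.default_rng(0)
draws = [tuple(int(c) for c in polya_urn_counts([1, 1, 1], 5, rng))
         for _ in range(20000)]
# Empirical frequency of the count vector (3, 1, 1) should approach 1/21 ≈ 0.0476
print(draws.count((3, 1, 1)) / len(draws))
```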
Properties.
Moments.
Once again, let formula_5 and let formula_14; then the expected number of times the outcome "i" is observed over "n" trials is
formula_15
The covariance matrix is as follows. Each diagonal entry is the variance of a beta-binomially distributed random variable, and is therefore
formula_16
The off-diagonal entries are the covariances:
formula_17
for "i", "j" distinct.
All covariances are negative because for fixed "n", an increase in one component of a Dirichlet-multinomial vector requires a decrease in another component.
This is a "K" × "K" positive-semidefinite matrix of rank "K" − 1.
The entries of the corresponding correlation matrix are
formula_18
formula_19
The sample size drops out of this expression.
Each of the "k" components separately has a beta-binomial distribution.
The support of the Dirichlet-multinomial distribution is the set
formula_20
Its number of elements is
formula_21
Matrix notation.
In matrix notation,
formula_22
and
formula_23
with pT = the row vector transpose of the column vector p. Letting
formula_24, we can write alternatively
formula_25
The parameter formula_26 is known as the "intra class" or "intra cluster" correlation. It is this positive correlation which gives rise to overdispersion relative to the multinomial distribution.
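The mean vector and covariance matrix are cheap to compute directly from the formulas above; the sketch below follows the first (non-ρ) parameterization and is intended only as an illustration.

```python
import numpy as np

def dm_mean_cov(n, alpha):
    # E[X] = n p and var(X) = n (diag(p) - p p^T) (n + a0) / (1 + a0),
    # where p = alpha / a0 and a0 = sum(alpha).
    alpha = np.asarray(alpha, dtype=float)
    a0 = alpha.sum()
    p = alpha / a0
    mean = n * p
    cov = n * (np.diag(p) - np.outer(p, p)) * (n + a0) / (1 + a0)
    return mean, cov

mean, cov = dm_mean_cov(n=10, alpha=[1.0, 2.0, 3.0])
```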
Aggregation.
If
formula_27
then, if the random variables with subscripts "i" and "j" are dropped from the vector and replaced by their sum,
formula_28
This aggregation property may be used to derive the marginal distribution of formula_29.
Likelihood function.
Conceptually, we are making "N" independent draws from a categorical distribution with "K" categories. Let us represent the independent draws as random categorical variables formula_30 for formula_31. Let us denote the number of times a particular category formula_32 has been seen (for formula_33) among all the categorical variables as formula_34, and formula_35. Then, we have two separate views onto this problem: a set of formula_36 individual categorical outcomes formula_37, or a single vector-valued variable of category counts formula_38.
The former case is a set of random variables specifying each "individual" outcome, while the latter is a variable specifying the "number" of outcomes of each of the "K" categories. The distinction is important, as the two cases have correspondingly different probability distributions.
The parameter of the categorical distribution is formula_39 where formula_40 is the probability to draw value formula_32; formula_41 is likewise the parameter of the multinomial distribution formula_42. Rather than specifying formula_41 directly, we give it a conjugate prior distribution, and hence it is drawn from a Dirichlet distribution with parameter vector formula_43.
By integrating out formula_41, we obtain a compound distribution. However, the form of the distribution is different depending on which view we take.
For a set of individual outcomes.
Joint distribution.
For categorical variables formula_44, the marginal joint distribution is obtained by integrating out formula_41:
formula_45
which results in the following explicit formula:
formula_46
where formula_47 is the gamma function, with
formula_48
Note the absence of the multinomial coefficient: the formula gives the probability of a particular sequence of categorical variables rather than the probability of the counts within each category.
Although the variables formula_37 do not appear explicitly in the above formula, they enter in through the formula_34 values.
Conditional distribution.
Another useful formula, particularly in the context of Gibbs sampling, asks what the conditional density of a given variable formula_30 is, conditioned on all the other variables (which we will denote formula_49). It turns out to have an extremely simple form:
formula_50
where formula_51 specifies the number of counts of category formula_32 seen in all variables other than formula_30.
It may be useful to show how to derive this formula. In general, conditional distributions are proportional to the corresponding joint distributions, so we simply start with the above formula for the joint distribution of all the formula_37 values and then eliminate any factors not dependent on the particular formula_30 in question. To do this, we make use of the notation formula_51 defined above, and
formula_52
We also use the fact that
formula_53
Then:
formula_54
In general, it is not necessary to worry about the normalizing constant at the time of deriving the equations for conditional distributions. The normalizing constant will be determined as part of the algorithm for sampling from the distribution (see Categorical distribution#Sampling). However, when the conditional distribution is written in the simple form above, it turns out that the normalizing constant assumes a simple form:
formula_55
Hence
formula_56
This formula is closely related to the Chinese restaurant process, which results from taking the limit as formula_57.
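In code, this conditional makes a collapsed Gibbs update very cheap: only running counts of the category assignments are needed. The sketch below is illustrative (the array and function names are choices made here); it relies on the proportionality above, so no normalizing constant is needed beyond rescaling the weights.

```python
import numpy as np

def resample_assignment(i, z, counts, alpha, rng):
    # One collapsed-Gibbs update of z[i] using
    # Pr(z_i = k | rest) proportional to counts_k^{(-i)} + alpha_k,
    # where counts holds the current category counts of all z's.
    counts[z[i]] -= 1                 # remove the current assignment
    w = counts + alpha                # unnormalized conditional probabilities
    z[i] = rng.choice(len(alpha), p=w / w.sum())
    counts[z[i]] += 1                 # add the new assignment back
```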
In a Bayesian network.
In a larger Bayesian network in which categorical (or so-called "multinomial") distributions occur with Dirichlet distribution priors as part of a larger network, all Dirichlet priors can be collapsed provided that the only nodes depending on them are categorical distributions. The collapsing happens for each Dirichlet-distribution node separately from the others, and occurs regardless of any other nodes that may depend on the categorical distributions. It also occurs regardless of whether the categorical distributions depend on nodes additional to the Dirichlet priors (although in such a case, those other nodes must remain as additional conditioning factors). Essentially, all of the categorical distributions depending on a given Dirichlet-distribution node become connected into a single Dirichlet-multinomial joint distribution defined by the above formula. The joint distribution as defined this way will depend on the parent(s) of the integrated-out Dirichlet prior nodes, as well as any parent(s) of the categorical nodes other than the Dirichlet prior nodes themselves.
In the following sections, we discuss different configurations commonly found in Bayesian networks. We repeat the probability density from above, and define it using the symbol formula_58:
formula_59
Multiple Dirichlet priors with the same hyperprior.
Imagine we have a hierarchical model as follows:
formula_60
In cases like this, we have multiple Dirichlet priors, each of which generates some number of categorical observations (possibly a different number for each prior). The fact that they are all dependent on the same hyperprior, even if this is a random variable as above, makes no difference. The effect of integrating out a Dirichlet prior links the categorical variables attached to that prior, whose joint distribution simply inherits any conditioning factors of the Dirichlet prior. The fact that multiple priors may share a hyperprior makes no difference:
formula_61
where formula_62 is simply the collection of categorical variables dependent on prior "d".
Accordingly, the conditional probability distribution can be written as follows:
formula_63
where formula_64 specifically means the number of variables "among the set" formula_62, excluding formula_65 itself, that have the value formula_32 .
It is necessary to count only the variables having the value "k" that are tied together to the variable in question through having the same prior. We do not want to count any other variables also having the value "k".
Multiple Dirichlet priors with the same hyperprior, with dependent children.
Now imagine a slightly more complicated hierarchical model as follows:
formula_66
This model is the same as above, but in addition, each of the categorical variables has a child variable dependent on it. This is typical of a mixture model.
Again, in the joint distribution, only the categorical variables dependent on the same prior are linked into a single Dirichlet-multinomial:
formula_67
The conditional distribution of the categorical variables dependent only on their parents and ancestors would have the identical form as above in the simpler case. However, in Gibbs sampling it is necessary to determine the conditional distribution of a given node formula_65 dependent not only on formula_68 and ancestors such as formula_69 but on "all" the other parameters.
The simplified expression for the conditional distribution is derived above simply by rewriting the expression for the joint probability and removing constant factors. Hence, the same simplification would apply in a larger joint probability expression such as the one in this model, composed of Dirichlet-multinomial densities plus factors for many other random variables dependent on the values of the categorical variables.
This yields the following:
formula_70
Here the probability density of formula_71 appears directly. To do random sampling over formula_65, we would compute the unnormalized probabilities for all "K" possibilities for formula_65 using the above formula, then normalize them and proceed as normal using the algorithm described in the categorical distribution article.
Strictly speaking, the additional factor that appears in the conditional distribution is derived not from the model specification but directly from the joint distribution. This distinction is important when considering models where a given node with a Dirichlet-prior parent has multiple dependent children, particularly when those children are dependent on each other (e.g. if they share a parent that is collapsed out). This is discussed more below.
Multiple Dirichlet priors with shifting prior membership.
Now imagine we have a hierarchical model as follows:
formula_72
Here we have a tricky situation where we have multiple Dirichlet priors as before and a set of dependent categorical variables, but the relationship between the priors and dependent variables isn't fixed, unlike before. Instead, the choice of which prior to use is dependent on another random categorical variable. This occurs, for example, in topic models, and indeed the names of the variables above are meant to correspond to those in latent Dirichlet allocation. In this case, the set formula_73 is a set of words, each of which is drawn from one of formula_9 possible topics, where each topic is a Dirichlet prior over a vocabulary of formula_74 possible words, specifying the frequency of different words in the topic. However, the topic membership of a given word isn't fixed; rather, it's determined from a set of latent variables formula_75. There is one latent variable per word, a formula_9 -dimensional categorical variable specifying the topic the word belongs to.
In this case, all variables dependent on a given prior are tied together (i.e. correlated) in a group, as before — specifically, all words belonging to a given topic are linked. In this case, however, the group membership shifts, in that the words are not fixed to a given topic but the topic depends on the value of a latent variable associated with the word. However, the definition of the Dirichlet-multinomial density doesn't actually depend on the number of categorical variables in a group (i.e. the number of words in the document generated from a given topic), but only on the counts of how many variables in the group have a given value (i.e. among all the word tokens generated from a given topic, how many of them are a given word). Hence, we can still write an explicit formula for the joint distribution:
formula_76
Here we use the notation formula_77 to denote the number of word tokens whose value is word symbol "v" and which belong to topic "k".
The conditional distribution still has the same form:
formula_78
Here again, only the categorical variables for words belonging to a given topic are linked (even though this linking will depend on the assignments of the latent variables), and hence the word counts need to be over only the words generated by a given topic. Hence the symbol formula_79, which is the count of words tokens having the word symbol "v", but only among those generated by topic "k", and excluding the word itself whose distribution is being described.
A combined example: LDA topic models.
We now show how to combine some of the above scenarios to demonstrate how to Gibbs sample a real-world model, specifically a smoothed latent Dirichlet allocation (LDA) topic model.
The model is as follows:
formula_80
Essentially we combine the previous three scenarios: We have categorical variables dependent on multiple priors sharing a hyperprior; we have categorical variables with dependent children (the latent variable topic identities); and we have categorical variables with shifting membership in multiple priors sharing a hyperprior. In the standard LDA model, the words are completely observed, and hence we never need to resample them. (However, Gibbs sampling would equally be possible if only some or none of the words were observed. In such a case, we would want to initialize the distribution over the words in some reasonable fashion — e.g. from the output of some process that generates sentences, such as a machine translation model — in order for the resulting posterior latent variable distributions to make any sense.)
Using the above formulas, we can write down the conditional probabilities directly:
formula_81
Here we have defined the counts more explicitly to clearly separate counts of words and counts of topics:
formula_82
As in the scenario above with categorical variables with dependent children, the conditional probability of those dependent children appears in the definition of the parent's conditional probability. In this case, each latent variable has only a single dependent child word, so only one such term appears. (If there were multiple dependent children, all would have to appear in the parent's conditional probability, regardless of whether there was overlap between different parents and the same children, i.e. regardless of whether the dependent children of a given parent also have other parents. In a case where a child has multiple parents, the conditional probability for that child appears in the conditional probability definition of each of its parents.)
The definition above specifies only the "unnormalized" conditional probability of the words, while the topic conditional probability requires the "actual" (i.e. normalized) probability. Hence we have to normalize by summing over all word symbols:
formula_83
where
formula_84
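Put together, a single collapsed-Gibbs update for one word's topic needs only three running count arrays. The sketch below is illustrative rather than a reference implementation; the count-array names (`ndk`, `nkv`, `nk`) are invented here and correspond to the document-topic, topic-word and topic-total counts used in the formulas above, with `alpha` a length-K and `beta` a length-V NumPy vector.

```python
import numpy as np

def resample_topic(d, i, w, z, ndk, nkv, nk, alpha, beta, rng):
    # Collapsed-Gibbs update of the topic of word i in document d, whose
    # word symbol is w.  ndk[d, k], nkv[k, v], nk[k] are running counts.
    k_old = z[d][i]
    ndk[d, k_old] -= 1; nkv[k_old, w] -= 1; nk[k_old] -= 1   # exclude this token
    p = (ndk[d] + alpha) * (nkv[:, w] + beta[w]) / (nk + beta.sum())
    k_new = rng.choice(len(p), p=p / p.sum())
    z[d][i] = k_new
    ndk[d, k_new] += 1; nkv[k_new, w] += 1; nk[k_new] += 1   # re-add with new topic
```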
It's also worth making another point in detail, which concerns the second factor above in the conditional probability. Remember that the conditional distribution in general is derived from the joint distribution, and simplified by removing terms not dependent on the domain of the conditional (the part on the left side of the vertical bar). When a node formula_85 has dependent children, there will be one or more factors formula_86 in the joint distribution that are dependent on formula_85. "Usually" there is one factor for each dependent node, and it has the same density function as the distribution appearing in the mathematical definition. However, if a dependent node has another parent as well (a co-parent), and that co-parent is collapsed out, then the node will become dependent on all other nodes sharing that co-parent, and in place of multiple terms for each such node, the joint distribution will have only one joint term. We have exactly that situation here. Even though formula_65 has only one child formula_87, that child has a Dirichlet co-parent that we have collapsed out, which induces a Dirichlet-multinomial over the entire set of nodes formula_88.
It happens in this case that this issue does not cause major problems, precisely because of the one-to-one relationship between formula_65 and formula_87. We can rewrite the joint distribution as follows:
formula_89
where in the set formula_90 (i.e. the set of nodes formula_88 excluding formula_87 ), none of the nodes have formula_65 as a parent. Hence it can be eliminated as a conditioning factor (line 2), meaning that the entire factor can be eliminated from the conditional distribution (line 3).
A second example: Naive Bayes document clustering.
Here is another model, with a different set of issues. This is an implementation of an unsupervised Naive Bayes model for document clustering. That is, we would like to classify documents into multiple categories (e.g. "spam" or "non-spam", or "scientific journal article", "newspaper article about finance", "newspaper article about politics", "love letter") based on textual content. However, we don't already know the correct category of any documents; instead, we want to cluster them based on mutual similarities. (For example, a set of scientific articles will tend to be similar to each other in word use but very different from a set of love letters.) This is a type of unsupervised learning. (The same technique can be used for doing semi-supervised learning, i.e. where we know the correct category of some fraction of the documents and would like to use this knowledge to help in clustering the remaining documents.)
The model is as follows:
formula_91
In many ways, this model is very similar to the LDA topic model described above, but it assumes one topic per document rather than one topic per word (with each document consisting of a mixture of topics, as in LDA). This can be seen clearly in the above model, which is identical to the LDA model except that there is only one latent variable per document instead of one per word. Once again, we assume that we are collapsing all of the Dirichlet priors.
The conditional probability for a given word is almost identical to the LDA case. Once again, all words generated by the same Dirichlet prior are interdependent. In this case, this means the words of all documents having a given label — again, this can vary depending on the label assignments, but all we care about is the total counts. Hence:
formula_92
where
formula_93
However, there is a critical difference in the conditional distribution of the latent variables for the label assignments, which is that a given label variable has multiple children nodes instead of just one — in particular, the nodes for all the words in the label's document. This relates closely to the discussion above about the factor formula_94 that stems from the joint distribution. In this case, the joint distribution needs to be taken over all words in all documents containing a label assignment equal to the value of formula_95, and has the value of a Dirichlet-multinomial distribution. Furthermore, we cannot reduce this joint distribution down to a conditional distribution over a single word. Rather, we can reduce it down only to a smaller joint conditional distribution over the words in the document for the label in question, and hence we cannot simplify it using the trick above that yields a simple sum of expected count and prior. Although it is in fact possible to rewrite it as a product of such individual sums, the number of factors is very large, and is not clearly more efficient than directly computing the Dirichlet-multinomial distribution probability.
Related distributions.
The one-dimensional version of the Dirichlet-multinomial distribution is known as the Beta-binomial distribution.
The Dirichlet-multinomial distribution has a relationship with the negative binomial distribution analogous to the relationship of the multinomial distribution with the Poisson distribution.
Uses.
The Dirichlet-multinomial distribution is used in automated document classification and clustering, genetics, economy, combat modeling, and quantitative marketing.
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\boldsymbol{\\alpha}"
},
{
"math_id": 1,
"text": "\\mathbf{x}=(x_1,\\dots,x_K)"
},
{
"math_id": 2,
"text": "\\Pr(\\mathbf{x}\\mid n,\\boldsymbol{\\alpha})=\\int_{\\mathbf{p}}\\mathrm{Mult}(\\mathbf{x}\\mid n,\\mathbf{p})\\mathrm{Dir}(\\mathbf{p}\\mid\\boldsymbol{\\alpha})\\textrm{d}\\mathbf{p}"
},
{
"math_id": 3,
"text": "\\Pr(\\mathbf{x}\\mid n, \\boldsymbol{\\alpha})=\\frac{\\Gamma\\left(\\alpha_0\\right)\\Gamma\\left(n+1\\right)}\n{\\Gamma\\left(n+\\alpha_0\\right)}\\prod_{k=1}^K\\frac{\\Gamma(x_{k}+\\alpha_{k})}{\\Gamma(\\alpha_{k})\\Gamma\\left(x_{k}+1\\right)}"
},
{
"math_id": 4,
"text": "\\alpha_0"
},
{
"math_id": 5,
"text": "\\alpha_0 = \\sum \\alpha_k"
},
{
"math_id": 6,
"text": "\\Pr(\\mathbf{x}\\mid n,\\boldsymbol{\\alpha})=\\frac{n B\\left(\\alpha_0,n\\right)}\n{\\prod_{k:x_k>0} x_k B\\left(\\alpha_k,x_k \\right)} .\n"
},
{
"math_id": 7,
"text": "K=2"
},
{
"math_id": 8,
"text": "\\alpha_{0}"
},
{
"math_id": 9,
"text": "K"
},
{
"math_id": 10,
"text": "\\alpha_{i}"
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "x"
},
{
"math_id": 13,
"text": "\\boldsymbol\\alpha"
},
{
"math_id": 14,
"text": "p_i =\\frac{\\alpha_i}{\\sum \\alpha_k}=\\frac{\\alpha_i}{\\alpha_0}"
},
{
"math_id": 15,
"text": "\\operatorname{E}(X_i) = n p_i=n\\frac{\\alpha_i}{\\alpha_0}.\\,"
},
{
"math_id": 16,
"text": "\\operatorname{var}(X_i)=np_i(1-p_i)\\left(\\frac{n+\\sum \\alpha_k}{1+\\sum \\alpha_k}\\right)=n\\frac{\\alpha_i}{\\alpha_0}\\left(1-\\frac{\\alpha_i}{\\alpha_0}\\right)\\left(\\frac{n+\\alpha_0}{1+\\alpha_0}\\right).\\,"
},
{
"math_id": 17,
"text": "\\operatorname{cov}(X_i,X_j)=-np_i p_j\\left(\\frac{n+\\sum \\alpha_k}{1+\\sum \\alpha_k}\\right)=-n\\frac{\\alpha_i\\alpha_j}{\\alpha_0^2}\\left(\\frac{n+\\alpha_0}{1+\\alpha_0}\\right)\\,"
},
{
"math_id": 18,
"text": "\\rho(X_i,X_i) = 1."
},
{
"math_id": 19,
"text": "\\rho(X_i,X_j) = \\frac{\\operatorname{cov}(X_i,X_j)}{\\sqrt{\\operatorname{var}(X_i)\\operatorname{var}(X_j)}} = \\frac{-p_i p_j(\\frac{n+\\alpha_0}{1+\\alpha_0})}{\\sqrt{p_i(1-p_i)(\\frac{n+\\alpha_0}{1+\\alpha_0}) p_j(1-p_j)(\\frac{n+\\alpha_0}{1+\\alpha_0})}} = -\\sqrt{\\frac{\\alpha_i \\alpha_j}{(\\alpha_0-\\alpha_i)(\\alpha_0-\\alpha_j)}}."
},
{
"math_id": 20,
"text": "\\{(n_1,\\dots,n_k)\\in \\mathbb{N}^{k}| n_1+\\cdots+n_k=n\\}.\\,"
},
{
"math_id": 21,
"text": "{n+k-1 \\choose k-1}."
},
{
"math_id": 22,
"text": "\\operatorname{E}(\\mathbf{X}) = n \\mathbf{p},\\,"
},
{
"math_id": 23,
"text": "\\operatorname{var}(\\mathbf{X}) = n \\lbrace \\operatorname{diag}(\\mathbf{p}) - \\mathbf{p}\\mathbf{p}^{\\rm T} \\rbrace \\left( \\frac{n+\\alpha_0}{1+ \\alpha_0} \\right) ,\\,"
},
{
"math_id": 24,
"text": "\\alpha_0 = \\frac{1-\\rho^2}{\\rho^2}\\,"
},
{
"math_id": 25,
"text": "\\operatorname{var}(\\mathbf{X}) = n \\lbrace \\operatorname{diag}(\\mathbf{p}) - \\mathbf{p}\\mathbf{p}^{\\rm T} \\rbrace (1+\\rho^2(n-1)) ,\\,"
},
{
"math_id": 26,
"text": " \\rho \\!"
},
{
"math_id": 27,
"text": "X = (X_1, \\ldots, X_K)\\sim\\operatorname{DM}(\\alpha_1,\\cdots,\\alpha_K)"
},
{
"math_id": 28,
"text": "X' = (X_1, \\ldots, X_i + X_j, \\ldots, X_K)\\sim\\operatorname{DM} \\left(\\alpha_1,\\cdots,\\alpha_i+\\alpha_j,\\cdots,\\alpha_K \\right)."
},
{
"math_id": 29,
"text": "X_i"
},
{
"math_id": 30,
"text": "z_n"
},
{
"math_id": 31,
"text": "n = 1 \\dots N"
},
{
"math_id": 32,
"text": "k"
},
{
"math_id": 33,
"text": "k = 1 \\dots K"
},
{
"math_id": 34,
"text": "n_k"
},
{
"math_id": 35,
"text": "\\sum_k n_k = N"
},
{
"math_id": 36,
"text": "N"
},
{
"math_id": 37,
"text": "z_1,\\dots,z_N"
},
{
"math_id": 38,
"text": "\\mathbf{x}=(n_1,\\dots,n_K)"
},
{
"math_id": 39,
"text": "\\mathbf{p} = (p_1,p_2,\\dots,p_K),"
},
{
"math_id": 40,
"text": "p_k"
},
{
"math_id": 41,
"text": "\\mathbf{p}"
},
{
"math_id": 42,
"text": "P(\\mathbf{x}|\\mathbf{p})"
},
{
"math_id": 43,
"text": "\\boldsymbol\\alpha=(\\alpha_1,\\alpha_2,\\ldots,\\alpha_K)"
},
{
"math_id": 44,
"text": "\\mathbb{Z}=z_1,\\dots,z_N"
},
{
"math_id": 45,
"text": "\\Pr(\\mathbb{Z}\\mid\\boldsymbol{\\alpha})=\\int_{\\mathbf{p}}\\Pr(\\mathbb{Z}\\mid \\mathbf{p})\\Pr(\\mathbf{p}\\mid\\boldsymbol{\\alpha})\\textrm{d}\\mathbf{p}"
},
{
"math_id": 46,
"text": "\\Pr(\\mathbb{Z}\\mid\\boldsymbol{\\alpha})=\\frac{\\Gamma\\left(\\alpha_{0}\\right)}\n{\\Gamma\\left(N+\\alpha_{0}\\right)}\\prod_{k=1}^K\\frac{\\Gamma(n_{k}+\\alpha_{k})}{\\Gamma(\\alpha_{k})}"
},
{
"math_id": 47,
"text": "\\Gamma"
},
{
"math_id": 48,
"text": "\\alpha_0=\\sum_k \\alpha_k\\text{ and }N=\\sum_k n_k\\text{, and where }n_k=\\text{number of }z_n\\text{'s with the value }k."
},
{
"math_id": 49,
"text": "\\mathbb{Z}^{(-n)}"
},
{
"math_id": 50,
"text": "\\Pr(z_n=k\\mid\\mathbb{Z}^{(-n)},\\boldsymbol{\\alpha}) \\propto n_k^{(-n)} + \\alpha_k"
},
{
"math_id": 51,
"text": "n_k^{(-n)}"
},
{
"math_id": 52,
"text": "\nn_j=\n\\begin{cases}\n n_j^{(-n)}, & \\text{if }j\\not=k \\\\\n n_j^{(-n)}+1, & \\text{if }j=k\n\\end{cases}\n"
},
{
"math_id": 53,
"text": "\\Gamma(n+1) = n\\Gamma(n)"
},
{
"math_id": 54,
"text": "\n\\begin{align}\n& \\Pr(z_n=k\\mid\\mathbb{Z}^{(-n)},\\boldsymbol{\\alpha})\\\\ \n\\propto\\ & \\Pr(z_n=k,\\mathbb{Z}^{(-n)}\\mid\\boldsymbol{\\alpha}) \\\\\n=\\ &\\ \\frac{\\Gamma\\left(\\alpha_{0}\\right)}{\\Gamma\\left(N+\\alpha_{0}\\right)}\\prod_{j=1}^K\\frac{\\Gamma(n_{j}+\\alpha_{j})}{\\Gamma(\\alpha_{j})} \\\\\n\\propto\\ & \\prod_{j=1}^K\\Gamma(n_{j}+\\alpha_{j}) \\\\\n=\\ & \\Gamma(n_{k}+\\alpha_{k})\\prod_{j\\not=k}\\Gamma(n_{j}+\\alpha_{j}) \\\\\n=\\ & \\Gamma(n_k^{(-n)}+1+\\alpha_{k})\\prod_{j\\not=k}\\Gamma(n_j^{(-n)}+\\alpha_{j}) \\\\\n=\\ & (n_k^{(-n)}+\\alpha_{k}) \\Gamma(n_k^{(-n)}+\\alpha_{k})\\prod_{j\\not=k}\\Gamma(n_j^{(-n)}+\\alpha_{j}) \\\\\n=\\ & (n_k^{(-n)}+\\alpha_{k}) \\prod_{j}\\Gamma(n_j^{(-n)}+\\alpha_{j}) \\\\\n\\propto\\ & n_k^{(-n)}+\\alpha_{k}\\\\\n\\end{align}\n"
},
{
"math_id": 55,
"text": "\\sum_k \\left( n_k^{(-n)} + \\alpha_{k} \\right) = \\alpha_{0} + \\sum_k n_k^{(-n)} = \\alpha_{0} + N - 1"
},
{
"math_id": 56,
"text": "\\Pr(z_n=k\\mid\\mathbb{Z}^{(-n)},\\boldsymbol{\\alpha}) = \\frac{n_k^{(-n)} + \\alpha_{k}}{\\alpha_{0} + N - 1}"
},
{
"math_id": 57,
"text": "K \\to \\infty"
},
{
"math_id": 58,
"text": "\\operatorname{DirMult}(\\mathbb{Z}\\mid\\boldsymbol{\\alpha})"
},
{
"math_id": 59,
"text": "\\Pr(\\mathbb{Z}\\mid\\boldsymbol{\\alpha})=\\operatorname{DirMult}(\\mathbb{Z}\\mid\\boldsymbol{\\alpha})=\\frac{\\Gamma\\left(\\sum_k \\alpha_k\\right)}\n{\\Gamma\\left(\\sum_k n_k+\\alpha_k\\right)}\\prod_{k=1}^K\\frac{\\Gamma(n_{k}+\\alpha_{k})}{\\Gamma(\\alpha_{k})}"
},
{
"math_id": 60,
"text": "\n\\begin{array}{lcl}\n\\boldsymbol\\alpha &\\sim& \\text{some distribution} \\\\\n\\boldsymbol\\theta_{d=1 \\dots M} &\\sim& \\operatorname{Dirichlet}_K(\\boldsymbol\\alpha) \\\\\nz_{d=1 \\dots M,n=1 \\dots N_d} &\\sim& \\operatorname{Categorical}_K(\\boldsymbol\\theta_d)\n\\end{array}\n"
},
{
"math_id": 61,
"text": "\\Pr(\\mathbb{Z}\\mid\\boldsymbol\\alpha) = \\prod_d \\operatorname{DirMult}(\\mathbb{Z}_d\\mid\\boldsymbol\\alpha)"
},
{
"math_id": 62,
"text": "\\mathbb{Z}_d"
},
{
"math_id": 63,
"text": "\\Pr(z_{dn}=k\\mid\\mathbb{Z}^{(-dn)},\\boldsymbol\\alpha)\\ \\propto\\ n_{k,d}^{(-n)} + \\alpha_k"
},
{
"math_id": 64,
"text": "n_{k,d}^{(-n)}"
},
{
"math_id": 65,
"text": "z_{dn}"
},
{
"math_id": 66,
"text": "\n\\begin{array}{lcl}\n\\boldsymbol\\alpha &\\sim& \\text{some distribution} \\\\\n\\boldsymbol\\theta_{d=1 \\dots M} &\\sim& \\operatorname{Dirichlet}_K(\\boldsymbol\\alpha) \\\\\nz_{d=1 \\dots M,n=1 \\dots N_d} &\\sim& \\operatorname{Categorical}_K(\\boldsymbol\\theta_d) \\\\\n\\boldsymbol\\phi &\\sim& \\text{some other distribution} \\\\\nw_{d=1 \\dots M,n=1 \\dots N_d} &\\sim& \\operatorname{F}(w_{dn}\\mid z_{dn},\\boldsymbol\\phi)\n\\end{array}\n"
},
{
"math_id": 67,
"text": "\\Pr(\\mathbb{Z},\\mathbb{W}\\mid\\boldsymbol\\alpha,\\boldsymbol\\phi) = \\prod_d \\operatorname{DirMult}(\\mathbb{Z}_d\\mid\\boldsymbol\\alpha) \\prod_{d=1}^{M} \\prod_{n=1}^{N_d} \\operatorname{F}(w_{dn}\\mid z_{dn},\\boldsymbol\\phi)"
},
{
"math_id": 68,
"text": "\\mathbb{Z}^{(-dn)}"
},
{
"math_id": 69,
"text": "\\alpha"
},
{
"math_id": 70,
"text": "\\Pr(z_{dn}=k\\mid\\mathbb{Z}^{(-dn)},\\mathbb{W},\\boldsymbol\\alpha,\\boldsymbol\\phi)\\ \\propto\\ (n_{k,d}^{(-n)} + \\alpha_k) \\operatorname{F}(w_{dn}\\mid z_{dn},\\boldsymbol\\phi)"
},
{
"math_id": 71,
"text": "\\operatorname{F}"
},
{
"math_id": 72,
"text": "\n\\begin{array}{lcl}\n\\boldsymbol\\theta &\\sim& \\text{some distribution} \\\\\nz_{n=1 \\dots N} &\\sim& \\operatorname{Categorical}_K(\\boldsymbol\\theta) \\\\\n\\boldsymbol\\alpha &\\sim& \\text{some distribution} \\\\\n\\boldsymbol\\phi_{k=1 \\dots K} &\\sim& \\operatorname{Dirichlet}_V(\\boldsymbol\\alpha) \\\\\nw_{n=1 \\dots N} &\\sim& \\operatorname{Categorical}_V(\\boldsymbol\\phi_{z_{n}}) \\\\\n\\end{array}\n"
},
{
"math_id": 73,
"text": "\\mathbb{W}"
},
{
"math_id": 74,
"text": "V"
},
{
"math_id": 75,
"text": "\\mathbb{Z}"
},
{
"math_id": 76,
"text": "\\Pr(\\mathbb{W}\\mid\\boldsymbol\\alpha,\\mathbb{Z}) = \\prod_{k=1}^K \\operatorname{DirMult}(\\mathbb{W}_k\\mid\\mathbb{Z},\\boldsymbol\\alpha) = \\prod_{k=1}^K \\left[\\frac{\\Gamma\\left(\\sum_v \\alpha_v\\right)}\n{\\Gamma\\left(\\sum_v n_v^{k}+\\alpha_v\\right)}\\prod_{v=1}^V\\frac{\\Gamma(n_v^{k}+\\alpha_{v})}{\\Gamma(\\alpha_{v})} \\right]"
},
{
"math_id": 77,
"text": "n_v^{k}"
},
{
"math_id": 78,
"text": "\\Pr(w_n=v\\mid\\mathbb{W}^{(-n)},\\mathbb{Z},\\boldsymbol\\alpha)\\ \\propto\\ n_v^{k,(-n)} + \\alpha_v"
},
{
"math_id": 79,
"text": "n_v^{k,(-n)}"
},
{
"math_id": 80,
"text": "\n\\begin{array}{lcl}\n\\boldsymbol\\alpha &\\sim& \\text{A Dirichlet hyperprior, either a constant or a random variable} \\\\\n\\boldsymbol\\beta &\\sim& \\text{A Dirichlet hyperprior, either a constant or a random variable} \\\\\n\\boldsymbol\\theta_{d=1 \\dots M} &\\sim& \\operatorname{Dirichlet}_K(\\boldsymbol\\alpha) \\\\\n\\boldsymbol\\phi_{k=1 \\dots K} &\\sim& \\operatorname{Dirichlet}_V(\\boldsymbol\\beta) \\\\\nz_{d=1 \\dots M,n=1 \\dots N_d} &\\sim& \\operatorname{Categorical}_K(\\boldsymbol\\theta_d) \\\\\nw_{d=1 \\dots M,n=1 \\dots N_d} &\\sim& \\operatorname{Categorical}_V(\\boldsymbol\\phi_{z_{dn}}) \\\\\n\\end{array}\n"
},
{
"math_id": 81,
"text": "\n\\begin{array}{lcl}\n\\Pr(w_{dn}=v\\mid\\mathbb{W}^{(-dn)},\\mathbb{Z},\\boldsymbol\\beta)\\ &\\propto\\ & \\#\\mathbb{W}_v^{k,(-dn)} + \\beta_v \\\\\n\\Pr(z_{dn}=k\\mid\\mathbb{Z}^{(-dn)},w_{dn}=v,\\mathbb{W}^{(-dn)},\\boldsymbol\\alpha)\\ &\\propto\\ &(\\#\\mathbb{Z}_k^{d,(-dn)} + \\alpha_k) \\Pr(w_{dn}=v\\mid\\mathbb{W}^{(-dn)},\\mathbb{Z},\\boldsymbol\\beta) \\\\\n\\end{array}\n"
},
{
"math_id": 82,
"text": "\n\\begin{array}{lcl}\n\\#\\mathbb{W}_v^{k,(-dn)} &=& \\text{number of words having value }v\\text{ among topic }k\\text{ excluding }w_{dn} \\\\\n\\#\\mathbb{Z}_k^{d,(-dn)} &=& \\text{number of topics having value }k\\text{ among document }d\\text{ excluding }z_{dn} \\\\\n\\end{array}\n"
},
{
"math_id": 83,
"text": "\n\\begin{array}{rcl}\n\\Pr(z_{dn}=k\\mid\\mathbb{Z}^{(-dn)},w_{dn}=v,\\mathbb{W}^{(-dn)},\\boldsymbol\\alpha)\\ &\\propto\\ &\\bigl(\\#\\mathbb{Z}_k^{d,(-dn)} + \\alpha_k\\bigr) \\dfrac{\\#\\mathbb{W}_v^{k,(-dn)} + \\beta_v}{\\sum_{v'=1}^{V} (\\#\\mathbb{W}_{v'}^{k,(-dn)} + \\beta_{v'})} \\\\\n&& \\\\\n&=& \\bigl(\\#\\mathbb{Z}_k^{d,(-dn)} + \\alpha_k\\bigr) \\dfrac{\\#\\mathbb{W}_v^{k,(-dn)} + \\beta_v}{\\#\\mathbb{W}^{k} + B - 1}\n\\end{array}\n"
},
{
"math_id": 84,
"text": "\n\\begin{array}{lcl}\n\\#\\mathbb{W}^{k} &=& \\text{number of words generated by topic }k \\\\\nB &=& \\sum_{v=1}^{V} \\beta_v \\\\\n\\end{array}\n"
},
{
"math_id": 85,
"text": "z"
},
{
"math_id": 86,
"text": "\\operatorname{F}(\\dots\\mid z)"
},
{
"math_id": 87,
"text": "w_{dn}"
},
{
"math_id": 88,
"text": "\\mathbb{W}^{k}"
},
{
"math_id": 89,
"text": "\n\\begin{array}{lcl}\np(\\mathbb{W}^{k}\\mid z_{dn}) &=& p(w_{dn}\\mid\\mathbb{W}^{k,(-dn)},z_{dn})\\,p(\\mathbb{W}^{k,(-dn)}\\mid z_{dn}) \\\\\n&=& p(w_{dn}\\mid\\mathbb{W}^{k,(-dn)},z_{dn})\\,p(\\mathbb{W}^{k,(-dn)}) \\\\\n&\\sim& p(w_{dn}\\mid\\mathbb{W}^{k,(-dn)},z_{dn})\n\\end{array}\n"
},
{
"math_id": 90,
"text": "\\mathbb{W}^{k,(-dn)}"
},
{
"math_id": 91,
"text": "\n\\begin{array}{lcl}\n\\boldsymbol\\alpha &\\sim& \\text{A Dirichlet hyperprior, either a constant or a random variable} \\\\\n\\boldsymbol\\beta &\\sim& \\text{A Dirichlet hyperprior, either a constant or a random variable} \\\\\n\\boldsymbol\\theta_{d=1 \\dots M} &\\sim& \\operatorname{Dirichlet}_K(\\boldsymbol\\alpha) \\\\\n\\boldsymbol\\phi_{k=1 \\dots K} &\\sim& \\operatorname{Dirichlet}_V(\\boldsymbol\\beta) \\\\\nz_{d=1 \\dots M} &\\sim& \\operatorname{Categorical}_K(\\boldsymbol\\theta_d) \\\\\nw_{d=1 \\dots M,n=1 \\dots N_d} &\\sim& \\operatorname{Categorical}_V(\\boldsymbol\\phi_{z_{d}}) \\\\\n\\end{array}\n"
},
{
"math_id": 92,
"text": "\n\\begin{array}{lcl}\n\\Pr(w_{dn}=v\\mid\\mathbb{W}^{(-dn)},\\mathbb{Z},\\boldsymbol\\beta)\\ &\\propto\\ & \\#\\mathbb{W}_v^{k,(-dn)} + \\beta_v \\\\\n\\end{array}\n"
},
{
"math_id": 93,
"text": "\n\\begin{array}{lcl}\n\\#\\mathbb{W}_v^{k,(-dn)} &=& \\text{number of words having value }v\\text{ among documents with label }k\\text{ excluding }w_{dn} \\\\\n\\end{array}\n"
},
{
"math_id": 94,
"text": "\\operatorname{F}(\\dots\\mid z_d)"
},
{
"math_id": 95,
"text": "z_d"
}
] | https://en.wikipedia.org/wiki?curid=9169137 |
917006 | Bijective proof | Technique for proving sets have equal size
In combinatorics, bijective proof is a proof technique for proving that two sets have equally many elements, or that the sets in two combinatorial classes have equal size, by finding a bijective function that maps one set one-to-one onto the other. This technique can be useful as a way of finding a formula for the number of elements of certain sets, by corresponding them with other sets that are easier to count. Additionally, the nature of the bijection itself often provides powerful insights into each or both of the sets.
Basic examples.
Proving the symmetry of the binomial coefficients.
The symmetry of the binomial coefficients states that
formula_0
This means that there are exactly as many combinations of "k" things in a set of size "n" as there are combinations of "n" − "k" things in a set of size "n".
A bijective proof.
The key idea of the proof may be understood from a simple example: selecting "k" children to be rewarded with ice cream cones, out of a group of "n" children, has exactly the same effect as choosing instead the "n" − "k" children to be denied ice cream cones.
More abstractly and generally, the two quantities asserted to be equal count the subsets of size "k" and "n" − "k", respectively, of any "n"-element set "S". Let A be the set of all k-element subsets of S; the set A has size formula_1 Let B be the set of all ("n" − "k")-element subsets of S; the set B has size formula_2. There is a simple bijection between the two sets A and B: it associates every "k"-element subset (that is, a member of A) with its complement, which contains precisely the remaining "n" − "k" elements of "S", and hence is a member of B. More formally, this can be written using functional notation as "f" : "A" → "B" defined by "f"("X") = "X""c" for X any k-element subset of S and the complement taken in S. To show that f is a bijection, first assume that "f"("X"1) = "f"("X"2), that is to say, "X"1"c" = "X"2"c". Take the complements of each side (in S), using the fact that the complement of a complement of a set is the original set, to obtain "X"1 = "X"2. This shows that f is one-to-one. Now take any ("n" − "k")-element subset of S in B, say Y. Its complement in S, "Y""c", is a k-element subset, and so, an element of A. Since "f"("Y""c") = ("Y""c")"c" = "Y", f is also onto and thus a bijection. The result now follows since the existence of a bijection between these finite sets shows that they have the same size, that is, formula_3.
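As an aside, the complement bijection above is easy to check computationally for small cases. The following Python sketch (the function name and the choice of S = {0, ..., n−1} are illustrative, not part of the proof) builds the map "f"("X") = "X""c" and verifies that it is one-to-one and onto, so both collections have the same size.
```python
from itertools import combinations

def complement_bijection(n, k):
    """Illustrate f(X) = S \\ X mapping the k-element subsets of S onto
    the (n - k)-element subsets of S, for S = {0, ..., n-1}."""
    S = frozenset(range(n))
    A = [frozenset(c) for c in combinations(S, k)]        # k-element subsets
    B = {frozenset(c) for c in combinations(S, n - k)}    # (n - k)-element subsets
    f = {X: S - X for X in A}                             # the complement map
    assert all(image in B for image in f.values())        # f maps A into B
    assert len(set(f.values())) == len(A)                 # f is one-to-one
    assert set(f.values()) == B                           # f is onto
    return len(A), len(B)

print(complement_bijection(5, 2))   # (10, 10): C(5, 2) = C(5, 3)
```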
Other examples.
Problems that admit bijective proofs are not limited to binomial coefficient identities. As the complexity of the problem increases, a bijective proof can become very sophisticated. This technique is particularly useful in areas of discrete mathematics such as combinatorics, graph theory, and number theory.
The most classical examples of bijective proofs in combinatorics include:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " {n \\choose k} = {n \\choose n-k}. "
},
{
"math_id": 1,
"text": "\\tbinom{n}{k}."
},
{
"math_id": 2,
"text": "\\tbinom{n}{n-k}"
},
{
"math_id": 3,
"text": "\\tbinom{n}{k} = \\tbinom{n}{n-k}"
}
] | https://en.wikipedia.org/wiki?curid=917006 |
9170159 | Binocular disparity | Cue to determine depth or distance of an object
Binocular disparity refers to the difference in image location of an object seen by the left and right eyes, resulting from the eyes' horizontal separation (parallax). The mind uses binocular disparity to extract depth information from the two-dimensional retinal images in stereopsis. In computer vision, binocular disparity refers to the difference in coordinates of similar features within two stereo images.
A similar disparity can be used in rangefinding by a coincidence rangefinder to determine distance and/or altitude to a target. In astronomy, the disparity between different locations on the Earth can be used to determine various celestial parallaxes, and Earth's orbit can be used for stellar parallax.
Definition.
Human eyes are horizontally separated by about 50–75 mm (interpupillary distance) depending on each individual. Thus, each eye has a slightly different view of the world around it. This can be easily seen when alternately closing one eye while looking at a vertical edge. The binocular disparity can be observed from the apparent horizontal shift of the vertical edge between both views.
At any given moment, the lines of sight of the two eyes meet at a point in space. This point in space projects to the same location (i.e. the center) on the retinae of the two eyes. Because of the different viewpoints observed by the left and right eye, however, many other points in space do not fall on corresponding retinal locations. Visual binocular disparity is defined as the difference between the points of projection in the two eyes and is usually expressed in degrees as the visual angle.
The term "binocular disparity" refers to geometric measurements made external to the eye. The disparity of the images on the actual retina depends on factors internal to the eye, especially the location of the nodal points, even if the cross section of the retina is a perfect circle. Disparity on retina conforms to binocular disparity when measured as degrees, while much different if measured as distance due to the complicated structure inside eye.
Figure 1: The full black circle is the point of fixation. The blue object lies nearer to the observer. Therefore, it has a "near" disparity "dn". Objects lying farther away (green) correspondingly have a "far" disparity "df". Binocular disparity is the angle between two lines of projection: one is the real projection from the object to the actual point of projection, and the other is the imaginary projection running through the nodal point of the fixation point.
In computer vision, binocular disparity is calculated from stereo images taken from a set of stereo cameras. The variable distance between these cameras, called the baseline, can affect the disparity of a specific point on their respective image plane. As the baseline increases, the disparity increases due to the greater angle needed to align the sight on the point. However, in computer vision, binocular disparity is referenced as coordinate differences of the point between the right and left images instead of a visual angle. The units are usually measured in pixels.
Tricking neurons with 2D images.
Brain cells (neurons) in a part of the brain responsible for processing visual information coming from the retinae (primary visual cortex) can detect the existence of disparity in their input from the eyes. Specifically, these neurons will be active, if an object with "their" special disparity lies within the part of the visual field to which they have access (receptive field).
Researchers investigating precise properties of these neurons with respect to disparity present visual stimuli with different disparities to the cells and look at whether they are active or not. One possibility to present stimuli with different disparities is to place objects in varying depth in front of the eyes. However, this method may not be precise enough for objects placed farther away, as they possess smaller disparities, while closer objects will have greater disparities. Instead, neuroscientists use an alternate method as schematised in Figure 2.
Figure 2: The disparity of an object with different depth than the fixation point can alternatively be produced by presenting an image of the object to one eye and a laterally shifted version of the same image to the other eye. The full black circle is the point of fixation. Objects in varying depths are placed along the line of fixation of the left eye. The same disparity produced from a shift in depth of an object (filled coloured circles) can also be produced by laterally shifting the object in constant depth in the picture one eye sees (black circles with coloured margin). Note that for near disparities the lateral shift has to be larger to correspond to the same depth compared with far disparities. This is what neuroscientists usually do with random dot stimuli to study disparity selectivity of neurons since the lateral distance required to test disparities is less than the distances required using depth tests. This principle has also been applied in autostereogram illusions.
Computing disparity using digital stereo images.
The disparity of features between two stereo images are usually computed as a shift to the left of an image feature when viewed in the right image. For example, a single point that appears at the "x" coordinate "t" (measured in pixels) in the left image may be present at the "x" coordinate "t" − 3 in the right image. In this case, the disparity at that location in the right image would be 3 pixels.
Stereo images may not always be correctly aligned to allow for quick disparity calculation. For example, the set of cameras may be slightly rotated off level. Through a process known as image rectification, both images are rotated to allow for disparities in only the horizontal direction (i.e. there is no disparity in the "y" image coordinates). This is a property that can also be achieved by precise alignment of the stereo cameras before image capture.
Computer algorithm.
After rectification, the correspondence problem can be solved using an algorithm that scans both the left and right images for matching image features. A common approach to this problem is to form a smaller image patch around every pixel in the left image. These image patches are compared to all possible disparities in the right image by comparing their corresponding image patches. For example, for a disparity of 1, the patch in the left image would be compared to a similar-sized patch in the right, shifted to the left by one pixel. The comparison between these two patches can be made by attaining a computational measure from one of the following equations that compares each of the pixels in the patches. For all of the following equations, "L" and "R" refer to the left and right images, while "r" and "c" refer to the current row and column of either image being examined. "d" refers to the disparity of the right image.
Normalized correlation: formula_0
Sum of squared differences: formula_1
Sum of absolute differences: formula_2
The disparity with the lowest computed difference score (or, for normalized correlation, the highest score) using one of the above methods is considered the disparity for the image feature. This score indicates that the algorithm has found the best match of corresponding features in both images.
The method described above is a brute-force search algorithm. With large patch and/or image sizes, this technique can be very time consuming as pixels are constantly being re-examined to find the lowest correlation score. However, this technique also involves unnecessary repetition as many pixels overlap. A more efficient algorithm involves remembering all values from the previous pixel. An even more efficient algorithm involves remembering column sums from the previous row (in addition to remembering all values from the previous pixel). Techniques that save previous information can greatly increase the algorithmic efficiency of this image analyzing process.
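A minimal sketch of the brute-force block-matching search described above is given below, assuming rectified grayscale images stored as NumPy arrays; the function name, window size and disparity range are illustrative choices rather than part of any standard.
```python
import numpy as np

def disparity_sad(left, right, max_disp=16, half=3):
    """Brute-force block matching with the sum of absolute differences (SAD):
    for each pixel in the left image, try every disparity d in 0..max_disp and
    keep the d whose (2*half+1)^2 patch comparison gives the lowest score."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for r in range(half, h - half):
        for c in range(half + max_disp, w - half):
            patch_l = left[r - half:r + half + 1, c - half:c + half + 1].astype(np.float64)
            best_d, best_cost = 0, np.inf
            for d in range(max_disp + 1):
                patch_r = right[r - half:r + half + 1,
                                c - d - half:c - d + half + 1].astype(np.float64)
                cost = np.abs(patch_l - patch_r).sum()   # SAD score for this d
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[r, c] = best_d                          # lowest score wins
    return disp
```
Caching row sums and previously computed patch totals, as described above, removes the repeated work without changing the result.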
Uses of disparity from images.
Knowledge of disparity can be used in further extraction of information from stereo images. One case that disparity is most useful is for depth/distance calculation. Disparity and distance from the cameras are inversely related. As the distance from the cameras increases, the disparity decreases. This allows for depth perception in stereo images. Using geometry and algebra, the points that appear in the 2D stereo images can be mapped as coordinates in 3D space.
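As a rough illustration of that mapping, the following sketch back-projects a pixel with known disparity into 3D camera coordinates; it assumes rectified cameras, a focal length "f" expressed in pixels, a known baseline, and a principal point ("cx", "cy"), none of which are given in the text above.
```python
def point_from_disparity(x, y, d, f, baseline, cx, cy):
    """Depth is inversely proportional to disparity: Z = f * baseline / d.
    X and Y follow from the pinhole model for a rectified camera pair."""
    if d <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    Z = f * baseline / d
    X = (x - cx) * Z / f
    Y = (y - cy) * Z / f
    return X, Y, Z

# Example: f = 700 px, 0.12 m baseline, 3 px of disparity -> Z = 28 m.
print(point_from_disparity(400, 300, 3, f=700, baseline=0.12, cx=320, cy=240))
```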
This concept is particularly useful for navigation. For example, the Mars Exploration Rover uses a similar method for scanning the terrain for obstacles. The rover captures a pair of images with its stereoscopic navigation cameras and disparity calculations are performed in order to detect elevated objects (such as boulders). Additionally, location and speed data can be extracted from subsequent stereo images by measuring the displacement of objects relative to the rover. In some cases, this is the best source of this type of information as the encoder sensors in the wheels may be inaccurate due to tire slippage. | [
{
"math_id": 0,
"text": "\\frac{\\sum{\\sum{ L(r,c) \\cdot R(r,c-d) }}}{\\sqrt{(\\sum{\\sum{ L(r,c)^2 }}) \\cdot (\\sum{\\sum{ R(r,c-d)^2 }})}}"
},
{
"math_id": 1,
"text": "\\sum{\\sum{ (L(r,c) - R(r,c-d))^2 }}"
},
{
"math_id": 2,
"text": "\\sum{\\sum{ \\left | L(r,c) - R(r,c-d) \\right \\vert }}"
}
] | https://en.wikipedia.org/wiki?curid=9170159 |
917273 | Basic reproduction number | Metric in epidemiology
In epidemiology, the basic reproduction number, or basic reproductive number (sometimes called basic reproduction ratio or basic reproductive rate), denoted formula_0 (pronounced "R nought" or "R zero"), of an infection is the expected number of cases directly generated by one case in a population where all individuals are susceptible to infection. The definition assumes that no other individuals are infected or immunized (naturally or through vaccination). Some definitions, such as that of the Australian Department of Health, add the absence of "any deliberate intervention in disease transmission". The basic reproduction number is not necessarily the same as the effective reproduction number formula_1 (usually written formula_2 ["t" for time], sometimes formula_3), which is the number of cases generated in the current state of a population, which does not have to be the uninfected state. formula_0 is a dimensionless number (persons infected per person infecting) and not a time rate, which would have units of time−1, or units of time like doubling time.
formula_0 is not a biological constant for a pathogen as it is also affected by other factors such as environmental conditions and the behaviour of the infected population. formula_0 values are usually estimated from mathematical models, and the estimated values are dependent on the model used and values of other parameters. Thus values given in the literature only make sense in the given context and it is not recommended to compare values based on different models. formula_0 does not by itself give an estimate of how fast an infection spreads in the population.
The most important uses of formula_0 are determining if an emerging infectious disease can spread in a population and determining what proportion of the population should be immunized through vaccination to eradicate a disease. In commonly used infection models, when formula_4 the infection will be able to start spreading in a population, but not if formula_5. Generally, the larger the value of formula_0, the harder it is to control the epidemic. For simple models, the proportion of the population that needs to be effectively immunized (meaning not susceptible to infection) to prevent sustained spread of the infection has to be larger than formula_6. This is the so-called "Herd immunity" "threshold" or "herd immunity level". Here, herd immunity means that the disease cannot spread in the population because each infected person, on average, can only transmit the infection to less than one other contact. Conversely, the proportion of the population that remains susceptible to infection in the endemic equilibrium is formula_7. However, this threshold is based on simple models that assume a fully mixed population with no structured relations between the individuals. For example, if there is some correlation between people's immunization (e.g., vaccination) status, then the formula formula_6 may underestimate the herd immunity threshold.
The basic reproduction number is affected by several factors, including the duration of infectivity of affected people, the contagiousness of the microorganism, and the number of susceptible people in the population that the infected people contact.
History.
The roots of the basic reproduction concept can be traced through the work of Ronald Ross, Alfred Lotka and others, but its first modern application in epidemiology was by George Macdonald in 1952, who constructed population models of the spread of malaria. In his work he called the quantity basic reproduction rate and denoted it by formula_8.
Overview of formula_0 estimation methods.
Compartmental models.
Compartmental models are a general modeling technique often applied to the mathematical modeling of infectious diseases. In these models, population members are assigned to 'compartments' with labels – for example, S, I, or R, (Susceptible, Infectious, or Recovered). These models can be used to estimate formula_9.
Epidemic models on networks.
Epidemics can be modeled as diseases spreading over networks of contact and disease transmission between people. Nodes in these networks represent individuals and links (edges) between nodes represent the contact or disease transmission between them. If such a network is a locally tree-like network, then the basic reproduction number can be written in terms of the average excess degree of the transmission network such that:
formula_10
where formula_11 is the mean-degree (average degree) of the network and formula_12 is the second moment of the transmission network degree distribution.
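As a small illustration, the expression above can be evaluated directly from a degree sequence; the function below is a sketch (names are illustrative) that assumes the locally tree-like setting of the formula and that every contact transmits.
```python
import numpy as np

def r0_from_degrees(degrees):
    """R0 = <k^2>/<k> - 1, the mean excess degree of the transmission network."""
    k = np.asarray(degrees, dtype=np.float64)
    mean_k = k.mean()            # <k>, the mean degree
    mean_k2 = (k ** 2).mean()    # <k^2>, the second moment of the degree distribution
    return mean_k2 / mean_k - 1.0

print(r0_from_degrees([3, 3, 3, 3]))   # homogeneous network: 2.0
print(r0_from_degrees([1, 1, 1, 9]))   # same mean degree, heavier tail: 6.0
```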
Heterogeneous populations.
In populations that are not homogeneous, the definition of formula_0 is more subtle. The definition must account for the fact that a typical infected individual may not be an average individual. As an extreme example, consider a population in which a small portion of the individuals mix fully with one another while the remaining individuals are all isolated. A disease may be able to spread in the fully mixed portion even though a randomly selected individual would lead to fewer than one secondary case. This is because the typical infected individual is in the fully mixed portion and thus is able to successfully cause infections. In general, if the individuals infected early in an epidemic are on average either more likely or less likely to transmit the infection than individuals infected late in the epidemic, then the computation of formula_0 must account for this difference. An appropriate definition for formula_0 in this case is "the expected number of secondary cases produced, in a completely susceptible population, by a typical infected individual".
The basic reproduction number can be computed as a ratio of known rates over time: if a contagious individual contacts formula_13 other people per unit time, if all of those people are assumed to contract the disease, and if the disease has a mean infectious period of formula_14, then the basic reproduction number is just formula_15. Some diseases have multiple possible latency periods, in which case the reproduction number for the disease overall is the sum of the reproduction number for each transition time into the disease.
Effective reproduction number.
In reality, varying proportions of the population are immune to any given disease at any given time. To account for this, the effective reproduction number formula_3 or formula_1 is used. formula_2 is the average number of new infections caused by a single infected individual at time "t" in the partially susceptible population. It can be found by multiplying formula_0 by the fraction "S" of the population that is susceptible. When the fraction of the population that is immune increases (i.e. the susceptible population "S" decreases) so much that formula_3 drops below 1, herd immunity has been achieved and the number of cases occurring in the population will gradually decrease to zero.
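The relations above — formula_15, the effective number formula_2 obtained by multiplying formula_0 by the susceptible fraction, and the herd immunity threshold formula_6 — can be sketched in a few lines; the parameter values below are arbitrary examples, not estimates for any real disease.
```python
def basic_reproduction_number(beta, gamma):
    """R0 = beta / gamma for contact/transmission rate beta and mean
    infectious period 1/gamma."""
    return beta / gamma

def effective_reproduction_number(r0, susceptible_fraction):
    """R_t = R0 * S, with S the fraction of the population still susceptible."""
    return r0 * susceptible_fraction

def herd_immunity_threshold(r0):
    """Fraction that must be immune so that R_t falls below 1: 1 - 1/R0."""
    return 1.0 - 1.0 / r0

r0 = basic_reproduction_number(beta=0.6, gamma=0.2)   # R0 = 3.0
print(herd_immunity_threshold(r0))                    # ~0.667
print(effective_reproduction_number(r0, 0.25))        # 0.75 < 1: cases decline
```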
Limitations of formula_0.
Use of formula_0 in the popular press has led to misunderstandings and distortions of its meaning. formula_0 can be calculated from many different mathematical models. Each of these can give a different estimate of formula_0, which needs to be interpreted in the context of that model. Therefore, the contagiousness of different infectious agents cannot be compared without recalculating formula_0 with invariant assumptions. formula_0 values for past outbreaks might not be valid for current outbreaks of the same disease. Generally speaking, formula_0 can be used as a threshold, even if calculated with different methods: if formula_5, the outbreak will die out, and if formula_4, the outbreak will expand. In some cases, for some models, values of formula_5 can still lead to self-perpetuating outbreaks. This is particularly problematic if there are intermediate vectors between hosts (as is the case for zoonoses), such as malaria. Therefore, comparisons between values from the "Values of formula_0 of well-known contagious diseases" table should be conducted with caution.
Although formula_0 cannot be modified through vaccination or other changes in population susceptibility, it can vary based on a number of biological, sociobehavioral, and environmental factors. It can also be modified by physical distancing and other public policy or social interventions, although some historical definitions exclude any deliberate intervention in reducing disease transmission, including nonpharmacological interventions. And indeed, whether nonpharmacological interventions are included in formula_0 often depends on the paper, disease, and what if any intervention is being studied. This creates some confusion, because formula_0 is not a constant; whereas most mathematical parameters with "nought" subscripts are constants.
formula_1 depends on many factors, many of which need to be estimated. Each of these factors adds to uncertainty in estimates of formula_1. Many of these factors are not important for informing public policy. Therefore, public policy may be better served by metrics similar to formula_1, but which are more straightforward to estimate, such as doubling time or half-life (formula_16).
Methods used to calculate formula_0 include the survival function, rearranging the largest eigenvalue of the Jacobian matrix, the next-generation method, calculations from the intrinsic growth rate, existence of the endemic equilibrium, the number of susceptibles at the endemic equilibrium, the average age of infection and the final size equation. Few of these methods agree with one another, even when starting with the same system of differential equations. Even fewer actually calculate the average number of secondary infections. Since formula_0 is rarely observed in the field and is usually calculated via a mathematical model, this severely limits its usefulness.
Sample values for various contagious diseases.
Despite the difficulties in estimating formula_0 mentioned in the previous section, estimates have been made for a number of genera, and are shown in this table. Each genus may be composed of many species, strains, or variants. Estimations of formula_0 for species, strains, and variants are typically less accurate than for genera, and so are provided in separate tables below for diseases of particular interest (influenza and COVID-19).<section begin="r0hittable" />
<section end="r0hittable" />Estimates for strains of influenza.<section begin="Flur0hittable" />
<section end="Flur0hittable" />Estimates for variants of SARS-CoV-2.<section begin="COVIDr0hittable" />
<section end="COVIDr0hittable" />
In popular culture.
In the 2011 film "Contagion", a fictional medical disaster thriller, a blogger's calculations for formula_0 are presented to reflect the progression of a fatal viral infection from isolated cases to a pandemic.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "R_0"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "R_t"
},
{
"math_id": 3,
"text": "R_e"
},
{
"math_id": 4,
"text": "R_0 > 1"
},
{
"math_id": 5,
"text": "R_0 < 1"
},
{
"math_id": 6,
"text": "1 - 1 / R_0"
},
{
"math_id": 7,
"text": "1 / R_0"
},
{
"math_id": 8,
"text": "Z_0"
},
{
"math_id": 9,
"text": "R_0\n"
},
{
"math_id": 10,
"text": "R_0 = \\frac{{\\langle k^2 \\rangle}}{{\\langle k \\rangle}} - 1,"
},
{
"math_id": 11,
"text": "\n{\\langle k \\rangle}\n"
},
{
"math_id": 12,
"text": "\n{\\langle k^2 \\rangle}\n"
},
{
"math_id": 13,
"text": "\\beta"
},
{
"math_id": 14,
"text": "\\dfrac{1}{\\gamma}"
},
{
"math_id": 15,
"text": "R_0 = \\dfrac{\\beta}{\\gamma}"
},
{
"math_id": 16,
"text": "t_{1/2}"
}
] | https://en.wikipedia.org/wiki?curid=917273 |
9175034 | Punchscan | Vote counting system
Punchscan is an optical scan vote counting system invented by cryptographer David Chaum. Punchscan is designed to offer integrity, privacy, and transparency. The system is voter-verifiable, provides an end-to-end (E2E) audit mechanism, and issues a ballot receipt to each voter. The system won grand prize at the 2007 University Voting Systems Competition.
The computer software which Punchscan incorporates is open-source; the source code was released on 2 November 2006 under a revised BSD licence. However, Punchscan is software independent; it draws its security from cryptographic functions instead of relying on software security like DRE voting machines. For this reason, Punchscan can be run on closed source operating systems, like Microsoft Windows, and still maintain unconditional integrity.
The Punchscan team, with additional contributors, has since developed Scantegrity.
Voting procedure.
A Punchscan ballot has two layers of paper. On the top layer, the candidates are listed with a symbol or letter beside their name. Below the candidate list, there are a series of round holes in the top layer of the ballot. Inside the holes on the bottom layer, the corresponding symbols are printed.
To cast a vote for a candidate, the voter must locate the hole with the symbol corresponding to the symbol beside the candidate's name. This hole is marked with a Bingo-style ink dauber, which is purposely larger than the hole. The voter then separates the ballot, chooses either the top or the bottom layer to keep as a receipt, and shreds the other layer. The receipt is scanned at the polling station for tabulation.
The order of the symbols beside the candidate names is generated randomly for each ballot, and thus differs from ballot to ballot. Likewise for the order of the symbols in the holes. For this reason, the receipt does not contain enough information to determine which candidate the vote was cast for. If the top layer is kept, the order of the symbols through the holes is unknown. If the bottom layer is kept, the order of the symbols beside the candidates name is unknown. Therefore, the voter cannot prove to someone else how they voted, which prevents vote buying or voter intimidation.
Tabulation procedure.
As an example, consider a two candidate election between Coke and Pepsi, as illustrated in the preceding diagram. The order of the letters beside the candidates' names could be A and then B, or B and then A. We will call this ordering formula_0, and let formula_0=0 for the former ordering and formula_0=1 for the latter. Therefore,
formula_0: order of symbols beside candidate list,
formula_1.
Likewise we can generalize for other parts of a ballot:
formula_2: order of symbols through the holes,
formula_3.
formula_4: which hole is marked,
formula_5.
formula_6: result of the ballot,
formula_7.
Note that the order of the candidates' names are fixed across all ballots. The result of a ballot can be calculated directly as,
formula_8 (Equation 1)
However, when one layer of the ballot is shredded, either formula_0 or formula_2 is destroyed. Therefore, there is insufficient information to calculate formula_6 from the receipt (which is scanned). In order to calculate the election results, an electronic database is used.
Before the election, the database is created with a series of columns as such. Each row in the database represents a ballot, and the order that the ballots are stored in the database is shuffled (using a cryptographic key that each candidate can contribute to). The first column, formula_9, has the shuffled order of the serial numbers. formula_10 contains a pseudorandom bitstream generated from the key, and it will act as a stream cipher. formula_11 will store an intermediate result. formula_12 contains a bit such that:
formula_13
The result of each ballot will be stored in a separate column, formula_6, where the order of the ballots will be reshuffled again. Thus formula_14 contains the row number in the formula_6 column where the result will be placed.
After the election is run and the formula_4 values have been scanned in, formula_11 is calculated as:
formula_15
And the result is calculated as,
formula_16
This is equivalent to equation 1,
formula_17
The result column is published and given the ballots have been shuffled (twice), the order of the results column does not indicate which result is from which ballot number. Thus the election authority cannot trace votes to serial numbers.
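A minimal sketch of the two-candidate arithmetic described above is given below; it covers only the mod-2 relations between P1, P2, P3 and the database columns D2, D3, D4, leaving out the shuffling, commitments and multiple databases, and all function names are illustrative.
```python
import random

def make_ballot_and_row():
    """One ballot (P1, P2) and its database row (D2, D4), chosen so that
    D2 + D4 = P1 + P2 (mod 2)."""
    p1, p2 = random.randint(0, 1), random.randint(0, 1)   # random symbol orders
    d2 = random.randint(0, 1)                             # pseudorandom mask bit
    d4 = (p1 + p2 + d2) % 2
    return (p1, p2), (d2, d4)

def mark_ballot(p1, p2, intended_result):
    """The voter marks hole P3 so that R = P1 + P2 + P3 (mod 2) equals the
    intended result (0 = Coke, 1 = Pepsi)."""
    return (intended_result - p1 - p2) % 2

def tabulate(p3, d2, d4):
    """The authority computes D3 = P3 + D2 and then R = D3 + D4 (mod 2),
    never needing P1 and P2 together."""
    d3 = (p3 + d2) % 2
    return (d3 + d4) % 2

(p1, p2), (d2, d4) = make_ballot_and_row()
p3 = mark_ballot(p1, p2, intended_result=1)   # a vote for Pepsi
assert tabulate(p3, d2, d4) == 1              # the tallied result matches the intent
```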
Generalized form.
For an election with formula_18 candidates, the above procedure is followed using modulo-n equations.
Basic auditing procedures.
The voter's ballot receipt does not indicate which candidate the voter cast their ballot for, and therefore it is not secret information. After an election, the election authority will post an image of each receipt online. The voter can look up their ballot by typing in the serial number and they can check that information held by the election authority matches their ballot. This way, the voter can be confident that their ballot was "cast as intended".
Any voter or interested party can also inspect part of the database to ensure the results were calculated correctly. They cannot inspect the whole database, otherwise they could link votes to ballot serial numbers. However, half of the database can be safely inspected without breaking privacy. A random choice is made between opening formula_19 or formula_20 (this choice can be derived from the secret key or from a true random source, such as dice or the stock market). This procedure allows the voter to be confident that the set of all ballots were "counted as cast".
If all ballots are "counted as cast" and "cast as intended", then all ballots are "counted as intended". Therefore, the integrity of the election can be proven to a very high probability.
Additional security.
To further increase the integrity of a Punchscan election, several further steps can be taken to protect against a completely corrupt election authority.
Multiple databases.
Since formula_9, formula_10, and formula_14 in the database are all generated pseudorandomly, multiple databases can be created with different random values for these columns. Each database is independent of the others, allowing the first half of some of the databases to be opened and inspected and the second half of others. Each database must produce the same final tally. Thus if an election authority were to tamper with the database to skew the final tally, they would have to tamper with each of the databases. The probability of the tampering being uncovered in the audit increases with the number of independent databases.
Commitments.
Prior to an election, the election authority prints the ballots and creates the database(s). Part of this creation process involves committing to the unique information contained on each ballot and in the databases. This is accomplished by applying a cryptographic one-way function to the information. Though the result of this function, the commitment, is made public, the actual information being committed to remains sealed. Because the function is one-way, it is computationally infeasible to determine the information on the sealed ballot given only its publicly posted commitment.
Ballot inspection.
Prior to an election, twice as many ballots are produced as the number intended to use in the election. Half of these ballots are selected randomly (or each candidate could choose a fraction of the ballots) and opened. The rows in the database corresponding to these selected ballots can be checked to ensure the calculations are correct and not tampered with. Since the election authority does not know "a priori" which ballots will be selected, passing this audit means the database is well formed with a very high probability. Furthermore, the ballots can be checked against their commitments to ensure with high probability that the ballot commitments are correct.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P_1"
},
{
"math_id": 1,
"text": "P_1\\in\\{0,1\\}=\\{\\mbox{AB},\\mbox{BA}\\}\\,"
},
{
"math_id": 2,
"text": "P_2"
},
{
"math_id": 3,
"text": "P_2\\in\\{0,1\\}=\\{\\mbox{AB},\\mbox{BA}\\}\\,"
},
{
"math_id": 4,
"text": "P_3"
},
{
"math_id": 5,
"text": "P_3\\in\\{0,1\\}=\\{\\mbox{1st},\\mbox{2nd}\\}\\,"
},
{
"math_id": 6,
"text": "R"
},
{
"math_id": 7,
"text": "R\\in\\{0,1\\}=\\{\\mbox{Coke},\\mbox{Pepsi}\\}\\,"
},
{
"math_id": 8,
"text": "R = P_1 + P_2 + P_3\\bmod 2\\,"
},
{
"math_id": 9,
"text": "D_1"
},
{
"math_id": 10,
"text": "D_2"
},
{
"math_id": 11,
"text": "D_3"
},
{
"math_id": 12,
"text": "D_4"
},
{
"math_id": 13,
"text": "D_2 + D_4 = P_1 + P_2 \\bmod 2\\,"
},
{
"math_id": 14,
"text": "D_5"
},
{
"math_id": 15,
"text": "D_3 = P_3 + D_2 \\bmod 2\\,"
},
{
"math_id": 16,
"text": "R = D_3 + D_4 \\bmod 2\\,"
},
{
"math_id": 17,
"text": "\\begin{align}\nR &= (D_3) + D_4 \\bmod 2\\\\\n&= (P_3 + D_2) + D_4 \\bmod 2\\\\\n&= P_3 + (D_2 + D_4) \\bmod 2\\\\\n&= P_3 + (P_1 + P_2) \\bmod 2\n\\end{align}"
},
{
"math_id": 18,
"text": "n"
},
{
"math_id": 19,
"text": "\\{D_1,D_2,D_3\\}"
},
{
"math_id": 20,
"text": "\\{D_3,D_4,D_5\\}"
}
] | https://en.wikipedia.org/wiki?curid=9175034 |
917505 | Belleville washer | Type of spring shaped like a washer
A Belleville washer, also known as a coned-disc spring, conical spring washer, disc spring, Belleville spring or cupped spring washer, is a conical shell which can be loaded along its axis either statically or dynamically. A Belleville washer is a type of spring shaped like a washer. It is the shape, a cone frustum, that gives the washer its characteristic spring.
The "Belleville" name comes from the inventor who in Dunkerque, France, in 1867 patented a spring design which already contained the principle of the disc spring. The real inventor of Belleville washers is unknown.
Through the years, many profiles for disc springs have been developed. Today the most used are the profiles with or without
contact flats, while some other profiles, like disc springs with trapezoidal cross-section, have lost importance.
Features and use.
Across the different fields, whether they are used as springs or to apply a flexible pre-load to a bolted joint or bearing, Belleville washers can be used as a single spring or as a stack. In a spring-stack, disc springs can be stacked in the same or in alternating orientations, and it is also possible to stack packets of multiple springs facing the same direction.
Disc springs have a number of advantageous properties compared to other types of springs:
Thanks to these advantageous properties, Belleville washers are today used in a large number of fields; some examples are listed in the following.
In the arms industry, Belleville springs are used, for instance, in a number of landmines e.g. the American M19, M15, M14, M1 and the Swedish Tret-Mi.59. The target (a person or vehicle) exerts pressure on the Belleville spring, causing it to exceed a trigger threshold and flip the adjacent firing pin downwards into a stab detonator, firing both it and the surrounding booster charge and main explosive filling.
Belleville washers have been used as return springs in artillery pieces, one example being the French Canet range of marine/coastal cannon from the late 1800s (75 mm, 120 mm, 152 mm).
Some makers of bolt action target rifles use Belleville washer stacks in the bolt instead of a more traditional spring to release the firing pin, as they reduce the time between trigger actuation and firing pin impact on the cartridge.
Belleville washers, without serrations which can harm the clamping surface, have no significant locking capability in bolted applications.
On aircraft (typically experimental aircraft) with wooden propellers, Belleville washers used on the mounting bolts can be useful as an indicator of swelling or shrinkage of the wood. By torquing their associated bolts to provide a specific gap between sets of washers placed with "high ends" facing each other, a change in relative moisture content in the propeller wood will result in a change of the gaps which is often great enough to be detected visually. As propeller balance depends on the weight of blades being equal, a radical difference in the washer gaps may indicate a difference in moisture content – and thus weight – in the adjacent blades.
In the aircraft and automotive industries (including Formula One cars) disc springs are used as vibration-damping elements because of their extremely detailed tuning ability. The Cirrus SR2x series of airplanes, uses a Belleville washer setup to damp out nose gear oscillations (or "shimmy").
In the building industry, in Japan stacks of disc springs have been used under buildings as vibration dampers for earthquakes.
Belleville washers are used in some high pressure air regulators, such as those found on paintball markers and air tanks.
Stacking.
Multiple Belleville washers may be stacked to modify the spring constant (or spring rate) or the amount of deflection. Stacking in the same direction will add the spring constant in parallel, creating a stiffer joint (with the same deflection). Stacking in an alternating direction is the same as adding common springs in series, resulting in a lower spring constant and greater deflection. Mixing and matching directions allow a specific spring constant and deflection capacity to be designed.
Generally, if n disc springs are stacked in parallel (facing the same direction), then under a given load the deflection of the whole stack is equal to that of one disc spring divided by n; consequently, to obtain the same deflection as a single disc spring, the load to apply has to be n times that of a single disc spring. On the other hand, if n washers are stacked in series (facing in alternating directions), then under a given load the deflection is equal to n times that of one washer, while the load to apply to the whole stack to obtain the same deflection as one disc spring is that of a single disc spring divided by n.
Performance considerations.
In a parallel stack, hysteresis (load losses) will occur due to friction between the springs. The hysteresis losses can be advantageous in some systems because of the added damping and dissipation of vibration energy. This loss due to friction can be calculated using hysteresis methods. Ideally, no more than 4 springs should be placed in parallel. If a greater load is required, then the factor of safety must be increased in order to compensate for loss of load due to friction. Friction loss is not as much of an issue in series stacks.
In a series stack, the deflection is not exactly proportional to the number of springs. This is because of a "bottoming out" effect when the springs are compressed flat: the contact surface area increases once the spring is deflected beyond 95%, which decreases the moment arm, so the spring offers a greater spring resistance. Hysteresis can be used to calculate predicted deflections in a series stack. The number of springs used in a series stack is not as much of an issue as in parallel stacks, although, generally, the stack height should not be greater than three times the outside diameter of the disc spring. If it is not possible to avoid a longer stack, then it should be divided into 2 or possibly 3 partial stacks with suitable washers. These washers should be guided as exactly as possible.
As previously said, Belleville washers are useful for adjustments because different thicknesses can be swapped in and out and they can be configured to achieve essentially infinite tunability of spring rate while only filling up a small part of the technician's tool box. They are ideal in situations where a heavy spring force is required with minimal free length and compression before reaching solid height. The downside, though, is weight, and they are severely travel limited compared to a conventional coil spring when free length is not an issue.
A wave washer also acts as a spring, but wave washers of comparable size do not produce as much force as Belleville washers, nor can they be stacked in series.
Disc springs with contact flats and reduced thickness.
For disc springs with a thickness of more than 6.0 mm, DIN 2093 specifies small contact surfaces at points I and III (that is the point where the load is applied and the point where the load touches the ground) in addition to the rounded corners. These contact flats improve definition of the point of load application and, particularly for spring stacks, reduce friction at the guide rod. The result is a considerable reduction in the lever arm length and a corresponding increase in the spring load. This is in turn compensated for by a reduction in the spring thickness.
The reduced thickness is specified in accordance with the following conditions:
As the overall height is not reduced, springs with reduced thickness inevitably have an increased flank angle and a greater cone height than springs of the same nominal dimension without reduced thickness. Therefore, the characteristic curve is altered and becomes completely different.
Calculation.
Starting from 1936, when J. O. Almen and A. Làszlò published a simplified method of calculation, increasingly accurate and complex methods have appeared, also in order to include disc springs with contact flats and reduced thickness in the calculations. So, although today there are more accurate methods of calculation, the most widely used are the simple and convenient formulas of DIN 2092 as, for standard dimensions, they produce values which correspond well to the measured results.
Considering a Belleville washer with outside diameter formula_0, inside diameter formula_1, height formula_2 and thickness formula_3, where formula_4 is the free height, that is the difference between the height and the thickness, the following coefficients are obtained:
formula_5
formula_6
formula_7
formula_8
The equation to calculate the load to apply to a single disc spring in order to obtain a deflection formula_9 is:
formula_10
Note that for disc springs with constant thickness, formula_11 is equal to formula_3 and consequently formula_12 is 1.
Concerning disc springs with contact flats and reduced thickness, it has to be said that a paper published in July 2013 demonstrated that the formula_12 equation as defined in the standard norms is not correct, as it would result in every reduced thickness being considered valid, which is, of course, impossible. As written in that paper, formula_12 should be replaced with a new coefficient, formula_13, which depends not only on the formula_14 ratio but also on the flank angles of the spring.
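For orientation, the coefficients and the load equation above can be evaluated numerically as in the sketch below. The coefficient formula_12 is taken as defined by DIN 2092 (so the caveat from the 2013 paper applies), and the coefficient K1, which is not defined in this excerpt, is computed from the usual DIN 2092 expression in terms of the diameter ratio; that expression, the function name and the example dimensions are assumptions to be checked against the standard.
```python
import math

def disc_spring_load(s, De, Di, t, l, E, mu, t_reduced=None):
    """Single disc spring load F at deflection s, following the DIN 2092-style
    formulas quoted above (dimensions in mm and E in N/mm^2 give F in N)."""
    tp = t if t_reduced is None else t_reduced     # t' = t for constant thickness
    h0 = l - t                                     # free height
    delta = De / Di                                # diameter ratio
    # Assumed DIN 2092 coefficient K1 (not defined in the text above):
    K1 = (1.0 / math.pi) * ((delta - 1.0) / delta) ** 2 / (
        (delta + 1.0) / (delta - 1.0) - 2.0 / math.log(delta))
    C1 = (tp / t) ** 2 / ((0.25 * l / t - tp / t + 0.75) *
                          (0.625 * l / t - tp / t + 0.375))
    C2 = C1 / (tp / t) ** 3 * (5.0 / 32.0 * (l / t - 1.0) ** 2 + 1.0)
    K4 = math.sqrt(-C1 / 2.0 + math.sqrt((C1 / 2.0) ** 2 + C2))   # equals 1 when t' = t
    return (4.0 * E / (1.0 - mu ** 2) * t ** 4 / (K1 * De ** 2) * K4 ** 2 * (s / t)
            * (K4 ** 2 * (h0 / t - s / t) * (h0 / t - s / (2.0 * t)) + 1.0))

# Example steel spring: De = 50 mm, Di = 25.4 mm, t = 2 mm, l = 3.05 mm.
print(disc_spring_load(s=0.8, De=50.0, Di=25.4, t=2.0, l=3.05, E=206000.0, mu=0.3))
```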
The spring constant (or spring rate) is defined as:
formula_15
If friction and bottoming-out effects are ignored, the spring rate of a stack of identical Belleville washers can be quickly approximated. Counting from one end of the stack, group by the number of adjacent washers in parallel. For example, in the stack of washers to the right, the grouping is 2-3-1-2, because there is a group of 2 washers in parallel, then a group of 3, then a single washer, then another group of 2.
The total spring coefficient is:
formula_16
formula_17
formula_18
Where formula_19 is the number of washers in group "i", formula_20 is the number of groups, and formula_21 is the spring constant of a single washer.
So, a 2-3-1-2 stack (or, since addition is commutative, a 3-2-2-1 stack) gives a spring constant of 3/7 that of a single washer. These same 8 washers can be arranged in a 3-3-2 configuration (formula_22), a 4-4 configuration (formula_23), a 2-2-2-2 configuration (formula_24), and various other configurations. The number of unique ways to stack formula_25 washers is defined by the integer partition function "p"("n") and increases rapidly with large formula_25, allowing fine-tuning of the spring constant. However, each configuration will have a different length, requiring the use of shims in most cases.
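A short sketch of this series-of-parallel-groups calculation (again ignoring friction and bottoming-out, with an illustrative function name) reproduces the figures quoted above:
```python
from fractions import Fraction

def stack_spring_constant(groups, k=1):
    """K = k / sum(1/n_i): the n_i washers inside a group act in parallel and
    the groups act in series, with k the constant of a single washer."""
    return k / sum(Fraction(1, n) for n in groups)

print(stack_spring_constant([2, 3, 1, 2]))   # 3/7 of a single washer
print(stack_spring_constant([3, 3, 2]))      # 6/7
print(stack_spring_constant([4, 4]))         # 2
print(stack_spring_constant([2, 2, 2, 2]))   # 1/2
```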
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{D_e}"
},
{
"math_id": 1,
"text": "{D_i}"
},
{
"math_id": 2,
"text": "{l}"
},
{
"math_id": 3,
"text": "{t}"
},
{
"math_id": 4,
"text": "{h_0}"
},
{
"math_id": 5,
"text": "\\delta=\\frac{D_e}{D_i}"
},
{
"math_id": 6,
"text": "{C_1}=\\frac{\\left(\\frac{t'}{t}\\right)^2}{\\left(\\frac{1}{4}\\cdot\\frac{l}{t}-\\frac{t'}{t}+\\frac{3}{4}\\right)\\cdot{\\left(\\frac{5}{8}\\cdot\\frac{l}{t}-\\frac{t'}{t}+\\frac{3}{8}\\right)} }"
},
{
"math_id": 7,
"text": "{C_2}=\\frac{C_1}{\\left(\\frac{t'}{t}\\right)^3}\\cdot\\left[\\frac{5}{32}\\cdot\\left(\\frac{l}{t}-1\\right)^2+1\\right] "
},
{
"math_id": 8,
"text": "{K_4}=\\sqrt{-\\frac{C_1}{2}+\\sqrt{\\left(\\frac{C_1}{2}\\right)^2+C_2}}"
},
{
"math_id": 9,
"text": "{s}"
},
{
"math_id": 10,
"text": "F=\\frac{4E}{1-\\mu^2}\\cdot\\frac{t^4}{K_1-{D_e}^2}\\cdot{K_4}^2\\cdot\\frac{s}{t}\\cdot\\left\n[{K_4}^2\\cdot\\left(\\frac{h_0}{t}-\\frac{s}{t}\\right)\\cdot\\left(\\frac{h_0}{t}-\\frac{s}{2t}\\right)+1\\right] "
},
{
"math_id": 11,
"text": "{t'}"
},
{
"math_id": 12,
"text": "{K_4}"
},
{
"math_id": 13,
"text": "{R_d}"
},
{
"math_id": 14,
"text": "\\frac{t'}{t}"
},
{
"math_id": 15,
"text": "{k}=\\frac{dF}{ds}"
},
{
"math_id": 16,
"text": "K = \\frac{k}{\\sum_{i=1}^g \\frac{1}{n_i}}"
},
{
"math_id": 17,
"text": "K = \\frac{k}{\\frac{1}{2}+\\frac{1}{3}+\\frac{1}{1}+\\frac{1}{2}}"
},
{
"math_id": 18,
"text": "K = \\frac{3}{7} \\cdot{k}"
},
{
"math_id": 19,
"text": "n_i"
},
{
"math_id": 20,
"text": "{g}"
},
{
"math_id": 21,
"text": "{k}"
},
{
"math_id": 22,
"text": "K = \\frac{6}{7}\\cdot k"
},
{
"math_id": 23,
"text": "K = 2\\cdot k"
},
{
"math_id": 24,
"text": "K = \\frac{1}{2}\\cdot k"
},
{
"math_id": 25,
"text": "{n}"
}
] | https://en.wikipedia.org/wiki?curid=917505 |
9175084 | Steinhaus theorem | Mathematical theorem in real analysis
In the mathematical field of real analysis, the Steinhaus theorem states that the difference set of a set of positive measure contains an open neighbourhood of zero. It was first proved by Hugo Steinhaus.
Statement.
Let "A" be a Lebesgue-measurable set on the real line such that the Lebesgue measure of "A" is not zero. Then the "difference set"
formula_0
contains an open neighbourhood of the origin.
The general version of the theorem, first proved by André Weil, states that if "G" is a locally compact group, and "A" ⊂ "G" a subset of positive (left) Haar measure, then
formula_1
contains an open neighbourhood of unity.
The theorem can also be extended to nonmeagre sets with the Baire property. The proof of these extensions, sometimes also called Steinhaus theorem, is almost identical to the one below.
Proof.
The following simple proof can be found in a collection of problems by the late professor H. M. Martirosian from the Yerevan State University, Armenia (in Russian).
Let's keep in mind that for any formula_2, there exists an open set formula_3, so that formula_4 and formula_5. As a consequence, for a given formula_6, we can find an appropriate interval formula_7 so that taking just an appropriate part of positive measure of the set formula_8 we can assume that formula_9, and that formula_10.
Now assume that formula_11, where formula_12. We'll show that there are common points in the sets formula_13 and formula_8. Otherwise formula_14. But since formula_15, and
formula_16,
we would get formula_17, which contradicts the initial property of the set. Hence, since formula_18 when formula_11, it follows immediately that formula_19, which is what we needed to establish.
Corollary.
A corollary of this theorem is that any measurable proper subgroup of formula_20 is of measure zero.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A-A=\\{a-b\\mid a,b\\in A\\} "
},
{
"math_id": 1,
"text": " AA^{-1} = \\{ ab^{-1} \\mid a,b \\in A \\} "
},
{
"math_id": 2,
"text": "\\varepsilon>0"
},
{
"math_id": 3,
"text": "\\, {\\cal U}"
},
{
"math_id": 4,
"text": "A\\subset{\\cal U}"
},
{
"math_id": 5,
"text": "\\mu ({\\cal U})<\\mu (A)+\\varepsilon"
},
{
"math_id": 6,
"text": "\\alpha \\in (1/2,1)"
},
{
"math_id": 7,
"text": "\\Delta=(a,b)"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "A\\subset\\Delta"
},
{
"math_id": 10,
"text": "\\mu(A)>\\alpha(b-a)"
},
{
"math_id": 11,
"text": "|x|<\\delta"
},
{
"math_id": 12,
"text": "\\delta=(2\\alpha-1)(b-a)"
},
{
"math_id": 13,
"text": "x+A"
},
{
"math_id": 14,
"text": "2\\mu(A)=\\mu \\{(x+A)\\cup A\\}\\leq \\mu \\{(x+\\Delta)\\cup \\Delta\\}"
},
{
"math_id": 15,
"text": "\\delta<b-a"
},
{
"math_id": 16,
"text": " \\mu \\{(x+\\Delta)\\cup \\Delta\\}=b-a+|x|<b-a+\\delta"
},
{
"math_id": 17,
"text": "2\\mu(A)<b-a+\\delta=2\\alpha(b-a)"
},
{
"math_id": 18,
"text": "(x+A)\\cap A\\neq\\varnothing"
},
{
"math_id": 19,
"text": "\\{x; |x|<\\delta\\}\\subset A-A"
},
{
"math_id": 20,
"text": "(\\R,+)"
}
] | https://en.wikipedia.org/wiki?curid=9175084 |
9175375 | Prandtl–Meyer function | In aerodynamics, the Prandtl–Meyer function describes the angle through which a flow turns isentropically from sonic velocity (M=1) to a Mach (M) number greater than 1. The maximum angle through which a sonic ("M" = 1) flow can be turned around a convex corner is calculated for M = formula_2. For an ideal gas, it is expressed as follows,
formula_3
where formula_4 is the Prandtl–Meyer function, formula_0 is the Mach number of the flow and formula_1 is the ratio of the specific heat capacities.
By convention, the constant of integration is selected such that formula_5
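A short numerical sketch of the function, using this convention, is given below; the function name and the sample Mach numbers are illustrative.
```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer function nu(M) in radians for an ideal gas, with the
    constant of integration chosen so that nu(1) = 0."""
    if M < 1.0:
        raise ValueError("nu(M) is defined for M >= 1")
    g = (gamma + 1.0) / (gamma - 1.0)
    return (math.sqrt(g) * math.atan(math.sqrt((M * M - 1.0) / g))
            - math.atan(math.sqrt(M * M - 1.0)))

print(math.degrees(prandtl_meyer(1.0)))   # 0.0
print(math.degrees(prandtl_meyer(2.0)))   # about 26.4 degrees for gamma = 1.4
print(math.degrees(prandtl_meyer(5.0)))   # about 76.9 degrees
```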
As Mach number varies from 1 to formula_2, formula_4 takes values from 0 to formula_6, where
formula_7
For a flow that turns between an initial and a final state, the turning angle is the difference between the values of the Prandtl–Meyer function at the two Mach numbers; here formula_8 is the absolute value of the angle through which the flow turns, formula_0 is the flow Mach number, and the suffixes "1" and "2" denote the initial and final conditions respectively. | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "\\gamma"
},
{
"math_id": 2,
"text": "\\infty"
},
{
"math_id": 3,
"text": "\\begin{align} \\nu(M) \n& = \\int \\frac{\\sqrt{M^2-1}}{1+\\frac{\\gamma -1}{2}M^2}\\frac{\\,dM}{M} \\\\[4pt]\n& = \\sqrt{\\frac{\\gamma + 1}{\\gamma -1}} \\cdot \\arctan \\sqrt{\\frac{\\gamma -1}{\\gamma +1} (M^2 -1)} - \\arctan \\sqrt{M^2 -1}\n\\end{align} "
},
{
"math_id": 4,
"text": "\\nu \\,"
},
{
"math_id": 5,
"text": "\\nu(1) = 0. \\,"
},
{
"math_id": 6,
"text": "\\nu_\\text{max} \\,"
},
{
"math_id": 7,
"text": "\\nu_\\text{max} = \\frac{\\pi}{2} \\bigg( \\sqrt{\\frac{\\gamma+1}{\\gamma-1}} -1 \\bigg)"
},
{
"math_id": 8,
"text": "\\theta "
}
] | https://en.wikipedia.org/wiki?curid=9175375 |
917633 | Sahlqvist formula | In modal logic, Sahlqvist formulas are a certain kind of modal formula with remarkable properties. The Sahlqvist correspondence theorem states that every Sahlqvist formula is canonical, and corresponds to a class of Kripke frames definable by a first-order formula.
Sahlqvist's definition characterizes a decidable set of modal formulas with first-order correspondents. Since it is undecidable, by Chagrova's theorem, whether an arbitrary modal formula has a first-order correspondent, there are formulas with first-order frame conditions that are not Sahlqvist [Chagrova 1991] (see the examples below). Hence Sahlqvist formulas define only a (decidable) subset of modal formulas with first-order correspondents.
Definition.
Sahlqvist formulas are built up from implications, where the consequent is "positive" and the antecedent is of a restricted form.
The formula formula_5 is a Sahlqvist formula. Its first-order corresponding formula is formula_6, and it defines all reflexive frames.
The formula formula_7 is a Sahlqvist formula. Its first-order corresponding formula is formula_8, and it defines all symmetric frames.
The formulas formula_9 and formula_10 are Sahlqvist formulas. Their first-order corresponding formula is formula_11, and they define all transitive frames.
The formulas formula_12 and formula_13 are Sahlqvist formulas. Their first-order corresponding formula is formula_14, and they define all dense frames.
The formula formula_15 is a Sahlqvist formula. Its first-order corresponding formula is formula_16, and it defines all right-unbounded frames (also called serial).
The formula formula_17 is a Sahlqvist formula. Its first-order corresponding formula is formula_18, and it is the Church–Rosser property.
The formula formula_19 is the "McKinsey formula"; it is not a Sahlqvist formula and does not have a first-order frame condition.
The "Löb axiom" formula_20 is not Sahlqvist; again, it does not have a first-order frame condition.
The conjunction formula_21 of the McKinsey formula and the (4) axiom has a first-order frame condition (the conjunction of the transitivity property with the property formula_22) but is not equivalent to any Sahlqvist formula.
Kracht's theorem.
When a Sahlqvist formula is used as an axiom in a normal modal logic, the logic is guaranteed to be complete with respect to the basic elementary class of frames the axiom defines. This result comes from the Sahlqvist completeness theorem [Modal Logic, Blackburn "et al.", Theorem 4.42]. But there is also a converse theorem, namely a theorem that states which first-order conditions are the correspondents of Sahlqvist formulas. Kracht's theorem states that "any Sahlqvist formula locally corresponds to a Kracht formula; and conversely, every Kracht formula is a local first-order correspondent of some Sahlqvist formula which can be effectively obtained from the Kracht formula" [Modal Logic, Blackburn "et al.", Theorem 3.59]. | [
{
"math_id": 0,
"text": "\\Box\\cdots\\Box p"
},
{
"math_id": 1,
"text": "\\Box^i p"
},
{
"math_id": 2,
"text": "0 \\leq i < \\omega"
},
{
"math_id": 3,
"text": "\\Diamond"
},
{
"math_id": 4,
"text": "\\Box"
},
{
"math_id": 5,
"text": "p \\rightarrow \\Diamond p"
},
{
"math_id": 6,
"text": "\\forall x \\; Rxx"
},
{
"math_id": 7,
"text": "p \\rightarrow \\Box\\Diamond p"
},
{
"math_id": 8,
"text": "\\forall x \\forall y [Rxy \\rightarrow Ryx]"
},
{
"math_id": 9,
"text": "\\Diamond \\Diamond p \\rightarrow \\Diamond p"
},
{
"math_id": 10,
"text": "\\Box p \\rightarrow \\Box \\Box p"
},
{
"math_id": 11,
"text": "\\forall x \\forall y \\forall z [(Rxy \\land Ryz) \\rightarrow Rxz]"
},
{
"math_id": 12,
"text": "\\Diamond p \\rightarrow \\Diamond \\Diamond p"
},
{
"math_id": 13,
"text": "\\Box \\Box p \\rightarrow \\Box p"
},
{
"math_id": 14,
"text": "\\forall x \\forall y [Rxy \\rightarrow \\exists z (Rxz \\land Rzy)]"
},
{
"math_id": 15,
"text": "\\Box p \\rightarrow \\Diamond p"
},
{
"math_id": 16,
"text": "\\forall x \\exists y \\; Rxy"
},
{
"math_id": 17,
"text": "\\Diamond\\Box p \\rightarrow \\Box\\Diamond p"
},
{
"math_id": 18,
"text": "\\forall x \\forall x_1 \\forall z_0 [Rxx_1 \\land Rxz_0 \\rightarrow \\exists z_1 (Rx_1z_1 \\land Rz_0z_1)]"
},
{
"math_id": 19,
"text": "\\Box\\Diamond p \\rightarrow \\Diamond \\Box p"
},
{
"math_id": 20,
"text": "\\Box(\\Box p \\rightarrow p) \\rightarrow \\Box p"
},
{
"math_id": 21,
"text": "(\\Box\\Diamond p \\rightarrow \\Diamond \\Box p) \\land (\\Diamond\\Diamond q \\rightarrow \\Diamond q)"
},
{
"math_id": 22,
"text": " \\forall x[\\forall y(Rxy \\rightarrow \\exists z[Ryz]) \\rightarrow \\exists y(Rxy \\wedge \\forall z[Ryz \\rightarrow z = y])] "
}
] | https://en.wikipedia.org/wiki?curid=917633 |
9176412 | Tangential quadrilateral | Polygon whose four sides all touch a circle
In Euclidean geometry, a tangential quadrilateral (sometimes just tangent quadrilateral) or circumscribed quadrilateral is a convex quadrilateral whose sides are all tangent to a single circle within the quadrilateral. This circle is called the incircle of the quadrilateral or its inscribed circle; its center is the "incenter" and its radius is called the "inradius". Since these quadrilaterals can be drawn surrounding or circumscribing their incircles, they have also been called "circumscribable quadrilaterals", "circumscribing quadrilaterals", and "circumscriptible quadrilaterals". Tangential quadrilaterals are a special case of tangential polygons.
Other less frequently used names for this class of quadrilaterals are "inscriptable quadrilateral", "inscriptible quadrilateral", "inscribable quadrilateral", "circumcyclic quadrilateral", and "co-cyclic quadrilateral". Due to the risk of confusion with a quadrilateral that has a circumcircle, which is called a cyclic quadrilateral or inscribed quadrilateral, it is preferable not to use any of the last five names.
Every triangle has an incircle, but not every quadrilateral does. An example of a quadrilateral that cannot be tangential is a non-square rectangle. The section "Characterizations" below states what necessary and sufficient conditions a quadrilateral must satisfy to have an incircle.
Special cases.
Examples of tangential quadrilaterals are the kites, which include the rhombi, which in turn include the squares. The kites are exactly the tangential quadrilaterals that are also orthodiagonal. A right kite is a kite with a circumcircle. If a quadrilateral is both tangential and cyclic, it is called a bicentric quadrilateral, and if it is both tangential and a trapezoid, it is called a tangential trapezoid.
Characterizations.
In a tangential quadrilateral, the four angle bisectors meet at the center of the incircle. Conversely, a convex quadrilateral in which the four angle bisectors meet at a point must be tangential and the common point is the incenter.
According to the Pitot theorem, the two pairs of opposite sides in a tangential quadrilateral add up to the same total length, which equals the semiperimeter "s" of the quadrilateral:
formula_0
Conversely a convex quadrilateral in which "a" + "c" = "b" + "d" must be tangential.
If opposite sides in a convex quadrilateral "ABCD" (that is not a trapezoid) intersect at "E" and "F", then it is tangential if and only if either of
formula_1
or
formula_2
Another necessary and sufficient condition is that a convex quadrilateral "ABCD" is tangential if and only if the incircles in the two triangles "ABC" and "ADC" are tangent to each other.
A characterization regarding the angles formed by diagonal "BD" and the four sides of a quadrilateral "ABCD" is due to Iosifescu. He proved in 1954 that a convex quadrilateral has an incircle if and only if
formula_3
Further, a convex quadrilateral with successive sides "a", "b", "c", "d" is tangential if and only if
formula_4
where "R""a", "R""b", "R""c", "R""d" are the radii in the circles externally tangent to the sides "a", "b", "c", "d" respectively and the extensions of the adjacent two sides for each side.
Several more characterizations are known in the four subtriangles formed by the diagonals.
Contact points and tangent lengths.
The incircle is tangent to each side at one "point of contact". These four points define a new quadrilateral inside of the initial quadrilateral: the "contact quadrilateral," which is cyclic as it is inscribed in the initial quadrilateral's incircle.
The eight "tangent lengths" ("e", "f", "g", "h" in the figure to the right) of a tangential quadrilateral are the line segments from a vertex to the points of contact. From each vertex, there are two congruent tangent lengths.
The two "tangency chords" ("k" and "l" in the figure) of a tangential quadrilateral are the line segments that connect contact points on opposite sides. These are also the diagonals of the contact quadrilateral.
Area.
Non-trigonometric formulas.
The area "K" of a tangential quadrilateral is given by
formula_5
where "s" is the semiperimeter and "r" is the inradius. Another formula is
formula_6
which gives the area in terms of the diagonals "p", "q" and the sides "a", "b", "c", "d" of the tangential quadrilateral.
The area can also be expressed in terms of just the four tangent lengths. If these are "e", "f", "g", "h", then the tangential quadrilateral has the area
formula_7
Furthermore, the area of a tangential quadrilateral can be expressed in terms of the sides "a, b, c, d" and the successive tangent lengths "e, f, g, h" as
formula_8
Since "eg" = "fh" if and only if the tangential quadrilateral is also cyclic and hence bicentric, this shows that the maximal area formula_9 occurs if and only if the tangential quadrilateral is bicentric.
Trigonometric formulas.
A trigonometric formula for the area in terms of the sides "a", "b", "c", "d" and two opposite angles is
formula_10
For given side lengths, the area is maximum when the quadrilateral is also cyclic and hence a bicentric quadrilateral. Then formula_11 since opposite angles are supplementary angles. This can be proved in another way using calculus.
Another formula for the area of a tangential quadrilateral "ABCD" that involves two opposite angles is
formula_12
where "I" is the incenter.
In fact, the area can be expressed in terms of just two adjacent sides and two opposite angles as
formula_13
Still another area formula is
formula_14
where "θ" is either of the angles between the diagonals. This formula cannot be used when the tangential quadrilateral is a kite, since then "θ" is 90° and the tangent function is not defined.
Inequalities.
As indirectly noted above, the area of a tangential quadrilateral with sides "a", "b", "c", "d" satisfies
formula_15
with equality if and only if it is a bicentric quadrilateral.
According to T. A. Ivanova (in 1976), the semiperimeter "s" of a tangential quadrilateral satisfies
formula_16
where "r" is the inradius. There is equality if and only if the quadrilateral is a square. This means that for the area "K" = "rs", there is the inequality
formula_17
with equality if and only if the tangential quadrilateral is a square.
Partition properties.
The four line segments between the center of the incircle and the points where it is tangent to the quadrilateral partition the quadrilateral into four right kites.
If a line cuts a tangential quadrilateral into two polygons with equal areas and equal perimeters, then that line passes through the incenter.
Inradius.
The inradius in a tangential quadrilateral with consecutive sides "a", "b", "c", "d" is given by
formula_18
where "K" is the area of the quadrilateral and "s" is its semiperimeter. For a tangential quadrilateral with given sides, the inradius is maximum when the quadrilateral is also cyclic (and hence a bicentric quadrilateral).
In terms of the tangent lengths, the incircle has radius
formula_19
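A quick consistency check (illustrative only, with arbitrarily chosen tangent lengths): this expression for the inradius agrees with "K"/"s", using the area formula in terms of tangent lengths from above.
```python
# Consistency check: inradius from tangent lengths equals K / s.
from math import sqrt, isclose

e, f, g, h = 1.0, 2.0, 3.0, 4.0
s = e + f + g + h                        # semiperimeter = sum of tangent lengths
sym = e*f*g + f*g*h + g*h*e + h*e*f
K = sqrt(s * sym)                        # area in terms of tangent lengths
r = sqrt(sym / s)                        # inradius in terms of tangent lengths
assert isclose(K, r * s)                 # K = r * s
```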
The inradius can also be expressed in terms of the distances from the incenter "I" to the vertices of the tangential quadrilateral "ABCD". If "u = AI", "v = BI", "x = CI" and "y = DI", then
formula_20
where formula_21.
If the incircles in triangles "ABC", "BCD", "CDA", "DAB" have radii formula_22 respectively, then the inradius of a tangential quadrilateral "ABCD" is given by
formula_23
where formula_24.
Angle formulas.
If "e", "f", "g" and "h" are the tangent lengths from the vertices "A", "B", "C" and "D" respectively to the points where the incircle is tangent to the sides of a tangential quadrilateral "ABCD", then the angles of the quadrilateral can be calculated from
formula_25
formula_26
formula_27
formula_28
The angle between the tangency chords "k" and "l" is given by
formula_29
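As a sanity check (illustrative only, with arbitrarily chosen tangent lengths), the four vertex angles obtained from the half-angle formulas above sum to 360°, and substituting "A" and "C" into the trigonometric area formula reproduces the area computed from the tangent lengths:
```python
# Illustrative check: vertex angles from the tangent lengths sum to 360 degrees,
# and K = sqrt(abcd) * sin((A+C)/2) reproduces the area from tangent lengths.
from math import sqrt, asin, degrees, sin, isclose

e, f, g, h = 1.0, 2.0, 3.0, 4.0
a, b, c, d = e + f, f + g, g + h, h + e
sym = e*f*g + f*g*h + g*h*e + h*e*f

def vertex_angle(t, others):
    # half-angle formula: sin(X/2) = sqrt(sym / ((t+o1)(t+o2)(t+o3)))
    denom = 1.0
    for o in others:
        denom *= (t + o)
    return 2 * asin(sqrt(sym / denom))

A = vertex_angle(e, (f, g, h))
B = vertex_angle(f, (e, g, h))
C = vertex_angle(g, (e, f, h))
D = vertex_angle(h, (e, f, g))

assert isclose(degrees(A) + degrees(B) + degrees(C) + degrees(D), 360.0, abs_tol=1e-9)
K = sqrt((e + f + g + h) * sym)
assert isclose(K, sqrt(a*b*c*d) * sin((A + C) / 2))
```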
Diagonals.
If "e", "f", "g" and "h" are the tangent lengths from "A", "B", "C" and "D" respectively to the points where the incircle is tangent to the sides of a tangential quadrilateral "ABCD", then the lengths of the diagonals "p = AC" and "q = BD" are
formula_30
formula_31
Tangency chords.
If "e", "f", "g" and "h" are the tangent lengths of a tangential quadrilateral, then the lengths of the tangency chords are
formula_32
formula_33
where the tangency chord of length "k" connects the sides of lengths "a" = "e" + "f" and "c" = "g" + "h", and the one of length "l" connects the sides of lengths "b" = "f" + "g" and "d" = "h" + "e". The squared ratio of the tangency chords satisfies
formula_34
The two tangency chords
The tangency chord between the sides "AB" and "CD" in a tangential quadrilateral "ABCD" is longer than the one between the sides "BC" and "DA" if and only if the bimedian between the sides "AB" and "CD" is shorter than the one between the sides "BC" and "DA".
If tangential quadrilateral "ABCD" has tangency points "W" on "AB" and "Y" on "CD", and if tangency chord "WY" intersects diagonal "BD" at "M", then the ratio of tangent lengths formula_35 equals the ratio formula_36 of the segments of diagonal "BD".
Collinear points.
If "M1" and "M2" are the midpoints of the diagonals "AC" and "BD" respectively in a tangential quadrilateral "ABCD" with incenter "I", and if the pairs of opposite sides meet at "J" and "K" with "M3" being the midpoint of "JK", then the points "M3", "M1", "I", and "M2" are collinear. The line containing them is the Newton line of the quadrilateral.
If the extensions of opposite sides in a tangential quadrilateral intersect at "J" and "K", and the extensions of opposite sides in its contact quadrilateral intersect at "L" and "M", then the four points "J", "L", "K" and "M" are collinear.
If the incircle is tangent to the sides "AB", "BC", "CD", "DA" at "T1", "T2", "T3", "T4" respectively, and if "N1", "N2", "N3", "N4" are the isotomic conjugates of these points with respect to the corresponding sides (that is, "AT1" = "BN1" and so on), then the "Nagel point" of the tangential quadrilateral is defined as the intersection of the lines "N1N3" and "N2N4". Both of these lines divide the perimeter of the quadrilateral into two equal parts. More importantly, the Nagel point "N", the "area centroid" "G", and the incenter "I" are collinear in this order, and "NG" = 2"GI". This line is called the "Nagel line" of a tangential quadrilateral.
In a tangential quadrilateral "ABCD" with incenter "I" and where the diagonals intersect at "P", let "HX", "HY", "HZ", "HW" be the orthocenters of triangles "AIB", "BIC", "CID", "DIA". Then the points "P", "HX", "HY", "HZ", "HW" are collinear.
Concurrent and perpendicular lines.
The two diagonals and the two tangency chords are concurrent. One way to see this is as a limiting case of Brianchon's theorem, which states that a hexagon all of whose sides are tangent to a single conic section has three diagonals that meet at a point. From a tangential quadrilateral, one can form a hexagon with two 180° angles, by placing two new vertices at two opposite points of tangency; all six of the sides of this hexagon lie on lines tangent to the inscribed circle, so its diagonals meet at a point. But two of these diagonals are the same as the diagonals of the tangential quadrilateral, and the third diagonal of the hexagon is the line through two opposite points of tangency. Repeating this same argument with the other two points of tangency completes the proof of the result.
If the extensions of opposite sides in a tangential quadrilateral intersect at "J" and "K", and the diagonals intersect at "P", then "JK" is perpendicular to the extension of "IP" where "I" is the incenter.
Incenter.
The incenter of a tangential quadrilateral lies on its Newton line (which connects the midpoints of the diagonals).
The ratio of two opposite sides in a tangential quadrilateral can be expressed in terms of the distances between the incenter "I" and the vertices according to
formula_37
The product of two adjacent sides in a tangential quadrilateral "ABCD" with incenter "I" satisfies
formula_38
If "I" is the incenter of a tangential quadrilateral "ABCD", then
formula_39
The incenter "I" in a tangential quadrilateral "ABCD" coincides with the "vertex centroid" of the quadrilateral if and only if
formula_40
If "Mp" and "Mq" are the midpoints of the diagonals "AC" and "BD" respectively in a tangential quadrilateral "ABCD" with incenter "I", then
formula_41
where "e", "f", "g" and "h" are the tangent lengths at "A", "B", "C" and "D" respectively. Combining the first equality with a previous property, the "vertex centroid" of the tangential quadrilateral coincides with the incenter if and only if the incenter is the midpoint of the line segment connecting the midpoints of the diagonals.
If a four-bar linkage is made in the form of a tangential quadrilateral, then it will remain tangential no matter how the linkage is flexed, provided the quadrilateral remains convex. (Thus, for example, if a square is deformed into a rhombus it remains tangential, though to a smaller incircle). If one side is held in a fixed position, then as the quadrilateral is flexed, the incenter traces out a circle of radius formula_42 where "a,b,c,d" are the sides in sequence and "s" is the semiperimeter.
Characterizations in the four subtriangles.
In the nonoverlapping triangles "APB", "BPC", "CPD", "DPA" formed by the diagonals in a convex quadrilateral "ABCD", where the diagonals intersect at "P", there are the following characterizations of tangential quadrilaterals.
Let "r"1, "r"2, "r"3, and "r"4 denote the radii of the incircles in the four triangles "APB", "BPC", "CPD", and "DPA" respectively. Chao and Simeonov proved that the quadrilateral is tangential if and only if
formula_43
This characterization had already been proved five years earlier by Vaynshtejn.
In the solution to his problem, a similar characterization was given by Vasilyev and Senderov. If "h"1, "h"2, "h"3, and "h"4 denote the altitudes in the same four triangles (from the diagonal intersection to the sides of the quadrilateral), then the quadrilateral is tangential if and only if
formula_44
Another similar characterization concerns the exradii "r""a", "r""b", "r""c", and "r""d" in the same four triangles (the four excircles are each tangent to one side of the quadrilateral and the extensions of its diagonals). A quadrilateral is tangential if and only if
formula_45
If "R"1, "R"2, "R"3, and "R"4 denote the radii in the circumcircles of triangles "APB", "BPC", "CPD", and "DPA" respectively, then the quadrilateral "ABCD" is tangential if and only if
formula_46
In 1996, Vaynshtejn was probably the first to prove another beautiful characterization of tangential quadrilaterals, that has later appeared in several magazines and websites. It states that when a convex quadrilateral is divided into four nonoverlapping triangles by its two diagonals, then the incenters of the four triangles are concyclic if and only if the quadrilateral is tangential. In fact, the incenters form an orthodiagonal cyclic quadrilateral. A related result is that the incircles can be exchanged for the excircles to the same triangles (tangent to the sides of the quadrilateral and the extensions of its diagonals). Thus a convex quadrilateral is tangential if and only if the excenters in these four excircles are the vertices of a cyclic quadrilateral.
A convex quadrilateral "ABCD", with diagonals intersecting at "P", is tangential if and only if the four excenters in triangles "APB", "BPC", "CPD", and "DPA" opposite the vertices "B" and "D" are concyclic. If "Ra", "Rb", "Rc", and "Rd" are the exradii in the triangles "APB", "BPC", "CPD", and "DPA" respectively opposite the vertices "B" and "D", then another condition is that the quadrilateral is tangential if and only if
formula_47
Further, a convex quadrilateral "ABCD" with diagonals intersecting at "P" is tangential if and only if
formula_48
where ∆("APB") is the area of triangle "APB".
Denote the segments that the diagonal intersection "P" divides diagonal "AC" into as "AP" = "p"1 and "PC" = "p"2, and similarly "P" divides diagonal "BD" into segments "BP" = "q"1 and "PD" = "q"2. Then the quadrilateral is tangential if and only if any one of the following equalities is true:
formula_49
or
formula_50
or
formula_51
Conditions for a tangential quadrilateral to be another type of quadrilateral.
Rhombus.
A tangential quadrilateral is a rhombus if and only if its opposite angles are equal.
Kite.
A tangential quadrilateral is a kite if and only if any one of the following conditions is true:
Bicentric quadrilateral.
If the incircle is tangent to the sides "AB", "BC", "CD", "DA" at "W", "X", "Y", "Z" respectively, then a tangential quadrilateral "ABCD" is also cyclic (and hence bicentric) if and only if any one of the following conditions hold:
The first of these three means that the "contact quadrilateral" "WXYZ" is an orthodiagonal quadrilateral.
A tangential quadrilateral is bicentric if and only if its inradius is greater than that of any other tangential quadrilateral having the same sequence of side lengths.
Tangential trapezoid.
If the incircle is tangent to the sides "AB" and "CD" at "W" and "Y" respectively, then a tangential quadrilateral "ABCD" is also a trapezoid with parallel sides "AB" and "CD" if and only if
formula_54
and "AD" and "BC" are the parallel sides of a trapezoid if and only if
formula_55
References. | [
{
"math_id": 0,
"text": "a + c = b + d = \\frac{a + b + c + d}{2} = s."
},
{
"math_id": 1,
"text": "\\displaystyle BE+BF=DE+DF"
},
{
"math_id": 2,
"text": "\\displaystyle AE-EC=AF-FC:"
},
{
"math_id": 3,
"text": "\\tan{\\frac{\\angle ABD}{2}}\\cdot\\tan{\\frac{\\angle BDC}{2}}=\\tan{\\frac{\\angle ADB}{2}}\\cdot\\tan{\\frac{\\angle DBC}{2}}."
},
{
"math_id": 4,
"text": "R_aR_c=R_bR_d"
},
{
"math_id": 5,
"text": "\\displaystyle K = r \\cdot s,"
},
{
"math_id": 6,
"text": "\\displaystyle K = \\tfrac{1}{2}\\sqrt{p^2q^2-(ac-bd)^2}"
},
{
"math_id": 7,
"text": "\\displaystyle K=\\sqrt{(e+f+g+h)(efg+fgh+ghe+hef)}."
},
{
"math_id": 8,
"text": "K=\\sqrt{abcd-(eg-fh)^2}."
},
{
"math_id": 9,
"text": "\\sqrt{abcd}"
},
{
"math_id": 10,
"text": "\\displaystyle K = \\sqrt{abcd} \\sin \\frac{A+C}{2} = \\sqrt{abcd} \\sin \\frac{B+D}{2}."
},
{
"math_id": 11,
"text": "K = \\sqrt{abcd}"
},
{
"math_id": 12,
"text": "K=\\left(IA\\cdot IC+IB\\cdot ID\\right)\\sin\\frac{A+C}{2}"
},
{
"math_id": 13,
"text": "K=ab\\sin{\\frac{B}{2}}\\csc{\\frac{D}{2}}\\sin \\frac{B+D}{2}."
},
{
"math_id": 14,
"text": "K=\\tfrac{1}{2}|(ac-bd)\\tan{\\theta}|,"
},
{
"math_id": 15,
"text": "K\\le\\sqrt{abcd}"
},
{
"math_id": 16,
"text": "s\\ge 4r"
},
{
"math_id": 17,
"text": "K\\ge 4r^2"
},
{
"math_id": 18,
"text": "r=\\frac{K}{s}=\\frac{K}{a+c}=\\frac{K}{b+d}"
},
{
"math_id": 19,
"text": "\\displaystyle r=\\sqrt{\\frac{efg+fgh+ghe+hef}{e+f+g+h}}."
},
{
"math_id": 20,
"text": "r=2\\sqrt{\\frac{(\\sigma-uvx)(\\sigma-vxy)(\\sigma-xyu)(\\sigma-yuv)}{uvxy(uv+xy)(ux+vy)(uy+vx)}}"
},
{
"math_id": 21,
"text": "\\sigma=\\tfrac{1}{2}(uvx+vxy+xyu+yuv)"
},
{
"math_id": 22,
"text": "r_1, r_2, r_3, r_4"
},
{
"math_id": 23,
"text": "r=\\frac{G+\\sqrt{G^2-4r_1r_2r_3r_4(r_1r_3+r_2r_4)}}{2(r_1r_3+r_2r_4)}"
},
{
"math_id": 24,
"text": "G=r_1r_2r_3+r_2r_3r_4+r_3r_4r_1+r_4r_1r_2"
},
{
"math_id": 25,
"text": " \\sin{\\frac{A}{2}}=\\sqrt{\\frac{efg + fgh + ghe + hef}{(e + f)(e + g)(e + h)}},"
},
{
"math_id": 26,
"text": " \\sin{\\frac{B}{2}}=\\sqrt{\\frac{efg + fgh + ghe + hef}{(f + e)(f + g)(f + h)}},"
},
{
"math_id": 27,
"text": " \\sin{\\frac{C}{2}}=\\sqrt{\\frac{efg + fgh + ghe + hef}{(g + e)(g + f)(g + h)}},"
},
{
"math_id": 28,
"text": " \\sin{\\frac{D}{2}}=\\sqrt{\\frac{efg + fgh + ghe + hef}{(h + e)(h + f)(h + g)}}."
},
{
"math_id": 29,
"text": " \\sin{\\varphi}=\\sqrt{\\frac{(e + f + g + h)(efg + fgh + ghe + hef)}{(e + f)(f + g)(g + h)(h + e)}}."
},
{
"math_id": 30,
"text": "\\displaystyle p=\\sqrt{\\frac{e+g}{f+h}\\Big((e+g)(f+h)+4fh\\Big)},"
},
{
"math_id": 31,
"text": "\\displaystyle q=\\sqrt{\\frac{f+h}{e+g}\\Big((e+g)(f+h)+4eg\\Big)}."
},
{
"math_id": 32,
"text": "\\displaystyle k=\\frac{2(efg+fgh+ghe+hef)}{\\sqrt{(e+f)(g+h)(e+g)(f+h)}},"
},
{
"math_id": 33,
"text": "\\displaystyle l=\\frac{2(efg+fgh+ghe+hef)}{\\sqrt{(e+h)(f+g)(e+g)(f+h)}}"
},
{
"math_id": 34,
"text": "\\frac{k^2}{l^2} = \\frac{bd}{ac}."
},
{
"math_id": 35,
"text": "\\tfrac{BW}{DY}"
},
{
"math_id": 36,
"text": "\\tfrac{BM}{DM}"
},
{
"math_id": 37,
"text": "\\frac{AB}{CD}=\\frac{IA\\cdot IB}{IC\\cdot ID},\\quad\\quad \\frac{BC}{DA}=\\frac{IB\\cdot IC}{ID\\cdot IA}."
},
{
"math_id": 38,
"text": "AB\\cdot BC=IB^2+\\frac{IA\\cdot IB\\cdot IC}{ID}."
},
{
"math_id": 39,
"text": "IA\\cdot IC+IB\\cdot ID=\\sqrt{AB\\cdot BC\\cdot CD\\cdot DA}."
},
{
"math_id": 40,
"text": "IA\\cdot IC=IB\\cdot ID."
},
{
"math_id": 41,
"text": "\\frac{IM_p}{IM_q}=\\frac{IA\\cdot IC}{IB\\cdot ID}=\\frac{e+g}{f+h}"
},
{
"math_id": 42,
"text": "\\sqrt{abcd}/s"
},
{
"math_id": 43,
"text": "\\frac{1}{r_1}+\\frac{1}{r_3}=\\frac{1}{r_2}+\\frac{1}{r_4}."
},
{
"math_id": 44,
"text": "\\frac{1}{h_1}+\\frac{1}{h_3}=\\frac{1}{h_2}+\\frac{1}{h_4}."
},
{
"math_id": 45,
"text": "\\frac{1}{r_a}+\\frac{1}{r_c}=\\frac{1}{r_b}+\\frac{1}{r_d}."
},
{
"math_id": 46,
"text": "R_1+R_3=R_2+R_4."
},
{
"math_id": 47,
"text": "\\frac{1}{R_a}+\\frac{1}{R_c}=\\frac{1}{R_b}+\\frac{1}{R_d}."
},
{
"math_id": 48,
"text": "\\frac{a}{\\triangle(APB)}+\\frac{c}{\\triangle(CPD)}=\\frac{b}{\\triangle(BPC)}+\\frac{d}{\\triangle(DPA)}"
},
{
"math_id": 49,
"text": "ap_2q_2 + cp_1q_1 = bp_1q_2 + dp_2q_1"
},
{
"math_id": 50,
"text": "\\frac{(p_1+q_1-a)(p_2+q_2-c)}{(p_1+q_1+a)(p_2+q_2+c)}=\\frac{(p_2+q_1-b)(p_1+q_2-d)}{(p_2+q_1+b)(p_1+q_2+d)}"
},
{
"math_id": 51,
"text": "\\frac{(a+p_1-q_1)(c+p_2-q_2)}{(a-p_1+q_1)(c-p_2+q_2)}=\\frac{(b+p_2-q_1)(d+p_1-q_2)}{(b-p_2+q_1)(d-p_1+q_2)}."
},
{
"math_id": 52,
"text": "AW\\cdot CY=BW\\cdot DY"
},
{
"math_id": 53,
"text": "\\frac{AC}{BD}=\\frac{AW+CY}{BX+DZ}"
},
{
"math_id": 54,
"text": "AW\\cdot DY=BW\\cdot CY"
},
{
"math_id": 55,
"text": "AW\\cdot BW=CY\\cdot DY."
}
] | https://en.wikipedia.org/wiki?curid=9176412 |
9176798 | Quotient of subspace theorem | In mathematics, the quotient of subspace theorem is an important property of finite-dimensional normed spaces, discovered by Vitali Milman.
Let ("X", ||·||) be an "N"-dimensional normed space. There exist subspaces "Z" ⊂ "Y" ⊂ "X" such that the following holds:
formula_0
is uniformly isomorphic to Euclidean. That is, there exists a positive quadratic form ("Euclidean structure") "Q" on "E", such that
formula_1 for formula_2
with "K" > 1 a universal constant.
The statement is relatively easy to prove by induction on the dimension of "Z" (even for "Y" = "X", "Z" = 0, "c" = 1) with a "K" that depends only on "N"; the point of the theorem is that "K" is independent of "N".
In fact, the constant "c" can be made arbitrarily close to 1, at the expense of the
constant "K" becoming large. The original proof allowed
formula_3
Notes. | [
{
"math_id": 0,
"text": "\\| e \\| =\\min_{y \\in e} \\| y \\|, \\quad e \\in E, "
},
{
"math_id": 1,
"text": "\\frac{\\sqrt{Q(e)}}{K} \\leq \\| e \\| \\leq K \\sqrt{Q(e)}"
},
{
"math_id": 2,
"text": "e \\in E,"
},
{
"math_id": 3,
"text": " c(K) \\approx 1 - \\text{const} / \\log \\log K. "
}
] | https://en.wikipedia.org/wiki?curid=9176798 |
9177825 | Dini's theorem | Sufficient criterion for uniform convergence
In the mathematical field of analysis, Dini's theorem says that if a monotone sequence of continuous functions converges pointwise on a compact space and if the limit function is also continuous, then the convergence is uniform.
Formal statement.
If formula_0 is a compact topological space, and formula_1 is a monotonically increasing sequence (meaning formula_2 for all formula_3 and formula_4) of continuous real-valued functions on formula_0 which converges pointwise to a continuous function formula_5, then the convergence is uniform. The same conclusion holds if formula_1 is monotonically decreasing instead of increasing. The theorem is named after Ulisse Dini.
This is one of the few situations in mathematics where pointwise convergence implies uniform convergence; the key is the greater control implied by the monotonicity. The limit function must be continuous, since a uniform limit of continuous functions is necessarily continuous. The continuity of the limit function cannot be inferred from the other hypotheses (consider formula_6 in formula_7).
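A small numerical illustration of both points (a sketch only; the particular sequences and the code are chosen for this example): on formula_7, the monotone sequence formula_6 has a discontinuous pointwise limit and does not converge uniformly, while a monotone sequence of continuous functions with a continuous limit converges uniformly, as the theorem guarantees.
```python
# Illustration on [0, 1]: f_n(x) = sqrt(x**2 + 1/n) decreases pointwise to the
# continuous limit x, and the sup-distance tends to 0 (uniform convergence).
# By contrast, g_n(x) = x**n decreases pointwise to a *discontinuous* limit,
# and the sup-distance stays near 1, so continuity of the limit cannot be dropped.
import numpy as np

x = np.linspace(0.0, 1.0, 10_001)
limit_f = x                                   # continuous limit of f_n
limit_g = np.where(x < 1.0, 0.0, 1.0)         # discontinuous limit of x**n on [0, 1]

for n in (1, 10, 100, 1000):
    sup_f = np.max(np.abs(np.sqrt(x**2 + 1.0 / n) - limit_f))   # -> 0
    sup_g = np.max(np.abs(x**n - limit_g))                      # stays close to 1
    print(f"n={n:5d}  sup|f_n - f| = {sup_f:.4f}   sup|g_n - g| = {sup_g:.4f}")
```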
Proof.
Let formula_8 be given. For each formula_3, let formula_9, and let formula_10 be the set of those formula_4 such that formula_11. Each formula_12 is continuous, and so each formula_10 is open (because each formula_10 is the preimage of the open set formula_13 under formula_12, a continuous function). Since formula_1 is monotonically increasing, formula_14 is monotonically decreasing, so the sequence formula_10 is ascending (i.e. formula_15 for all formula_3). Since formula_1 converges pointwise to formula_16, the collection formula_17 is an open cover of formula_0. By compactness, there is a finite subcover, and since the formula_10 are ascending, the largest of these is a cover too. Thus there is some positive integer formula_18 such that formula_19. That is, if formula_20 and formula_21 is a point in formula_0, then formula_22, as desired.
Notes. | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "(f_n)_{n\\in\\mathbb{N}}"
},
{
"math_id": 2,
"text": "f_n(x)\\leq f_{n+1}(x)"
},
{
"math_id": 3,
"text": "n\\in\\mathbb{N}"
},
{
"math_id": 4,
"text": "x\\in X"
},
{
"math_id": 5,
"text": "f\\colon X\\to \\mathbb{R}"
},
{
"math_id": 6,
"text": "x^n"
},
{
"math_id": 7,
"text": "[0,1]"
},
{
"math_id": 8,
"text": "\\varepsilon > 0"
},
{
"math_id": 9,
"text": "g_n=f-f_n"
},
{
"math_id": 10,
"text": "E_n"
},
{
"math_id": 11,
"text": "g_n(x)<\\varepsilon"
},
{
"math_id": 12,
"text": "g_n"
},
{
"math_id": 13,
"text": "(-\\infty, \\varepsilon)"
},
{
"math_id": 14,
"text": "(g_n)_{n\\in\\mathbb{N}}"
},
{
"math_id": 15,
"text": "E_n\\subset E_{n+1}"
},
{
"math_id": 16,
"text": "f"
},
{
"math_id": 17,
"text": "(E_n)_{n\\in\\mathbb{N}}"
},
{
"math_id": 18,
"text": "N"
},
{
"math_id": 19,
"text": "E_N=X"
},
{
"math_id": 20,
"text": "n>N"
},
{
"math_id": 21,
"text": "x"
},
{
"math_id": 22,
"text": "|f(x)-f_n(x)|<\\varepsilon"
}
] | https://en.wikipedia.org/wiki?curid=9177825 |
9178245 | Complex conjugate root theorem | If a + bi is a root of a real polynomial, then so is a − bi
In mathematics, the complex conjugate root theorem states that if "P" is a polynomial in one variable with real coefficients, and "a" + "bi" is a root of "P" with "a" and "b" real numbers, then its complex conjugate "a" − "bi" is also a root of "P".
It follows from this (and the fundamental theorem of algebra) that, if the degree of a real polynomial is odd, it must have at least one real root. That fact can also be proved by using the intermediate value theorem.
The polynomial
formula_0
has roots
formula_1
and thus can be factored as
formula_2
In computing the product of the last two factors, the imaginary parts cancel, and we get
formula_3
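This can be checked numerically (an illustrative sketch; the use of NumPy is an assumption of this example, not part of the theorem): the roots of the cubic consist of one real root and a complex-conjugate pair, and multiplying the conjugate linear factors back together returns a quadratic with real coefficients.
```python
# Roots of the real cubic above: one real root and a complex-conjugate pair.
import numpy as np

coeffs = [1, -7, 41, -87]                  # x^3 - 7x^2 + 41x - 87
roots = np.roots(coeffs)
print(np.sort_complex(roots))              # approximately [2-5j, 2+5j, 3+0j]

# conjugate pair 2 +/- 5i  ->  real quadratic x^2 - 4x + 29
pair_poly = np.poly([2 + 5j, 2 - 5j])
print(np.real_if_close(pair_poly))         # [1, -4, 29]
```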
The non-real factors come in pairs which when multiplied give quadratic polynomials with real coefficients. Since every polynomial with complex coefficients can be factored into 1st-degree factors (that is one way of stating the fundamental theorem of algebra), it follows that every polynomial with real coefficients can be factored into factors of degree no higher than 2: just 1st-degree and quadratic factors.
A pair of complex conjugate roots "a" + "bi" and "a" − "bi" gives rise to the real quadratic factor
formula_4.
If the third root is "c", this becomes
formula_5
formula_6.
Examples and consequences.
Corollary on odd-degree polynomials.
It follows from the present theorem and the fundamental theorem of algebra that if the degree of a real polynomial is odd, it must have at least one real root.
This can be proved as follows: by the fundamental theorem of algebra, a real polynomial of odd degree has an odd number of complex roots counted with multiplicity; since the non-real roots come in conjugate pairs, they account for an even number of these roots, so at least one root must be real.
This requires some care in the presence of multiple roots; but a complex root and its conjugate do have the same multiplicity (and this lemma is not hard to prove). It can also be worked around by considering only irreducible polynomials; any real polynomial of odd degree must have an irreducible factor of odd degree, which (having no multiple roots) must have a real root by the reasoning above.
This corollary can also be proved directly by using the intermediate value theorem.
Proof.
One proof of the theorem is as follows:
Consider the polynomial
formula_7
where all "a""r" are real. Suppose some complex number "ζ" is a root of "P", that is formula_8. It needs to be shown that
formula_9
as well.
If "P"("ζ"&hairsp;&hairsp;) = 0, then
formula_10
which can be put as
formula_11
Now
formula_12
and given the properties of complex conjugation,
formula_13
Since
formula_14
it follows that
formula_15
That is,
formula_16
Note that this works only because the "a""r" are real, that is, formula_17. If any of the coefficients were non-real, the roots would not necessarily come in conjugate pairs.
Notes. | [
{
"math_id": 0,
"text": "x^3 - 7x^2 + 41x - 87"
},
{
"math_id": 1,
"text": "3,\\, 2 + 5i,\\, 2 - 5i,"
},
{
"math_id": 2,
"text": "(x - 3)(x - 2 - 5i)(x - 2 + 5i)."
},
{
"math_id": 3,
"text": "(x - 3)(x^2 - 4x + 29)."
},
{
"math_id": 4,
"text": "x^2 - 2ax + (a^2 + b^2)"
},
{
"math_id": 5,
"text": "(x^2 - 2ax + (a^2 + b^2))(x-c)"
},
{
"math_id": 6,
"text": "=x^3 + x^2(-2a-c) + x(2ac+a^2+b^2) - c(a^2 + b^2)"
},
{
"math_id": 7,
"text": "P(z) = a_0 + a_1z + a_2z^2 + \\cdots + a_nz^n"
},
{
"math_id": 8,
"text": "P(\\zeta) = 0"
},
{
"math_id": 9,
"text": "P\\big(\\, \\overline{\\zeta} \\,\\big) = 0"
},
{
"math_id": 10,
"text": "a_0 + a_1\\zeta + a_2\\zeta^2 + \\cdots + a_n\\zeta^n = 0"
},
{
"math_id": 11,
"text": "\\sum_{r=0}^n a_r\\zeta^r = 0."
},
{
"math_id": 12,
"text": "P\\big(\\, \\overline{\\zeta} \\,\\big) = \\sum_{r=0}^n a_r \\big(\\, \\overline{\\zeta} \\,\\big)^r"
},
{
"math_id": 13,
"text": "\\sum_{r=0}^n a_r\\big(\\, \\overline{\\zeta} \\,\\big)^r = \\sum_{r=0}^n a_r \\overline{\\zeta^r} = \\sum_{r=0}^n \\overline{a_r\\zeta^r} = \\overline{\\sum_{r=0}^n a_r\\zeta^r}."
},
{
"math_id": 14,
"text": "\\overline{\\sum_{r=0}^n a_r\\zeta^r} = \\overline{0},"
},
{
"math_id": 15,
"text": "\\sum_{r=0}^n a_r\\big(\\, \\overline{\\zeta} \\,\\big)^r = \\overline{0} = 0."
},
{
"math_id": 16,
"text": "P\\big(\\, \\overline{\\zeta} \\,\\big) = a_0 + a_1\\overline{\\zeta} + a_2\\big(\\, \\overline{\\zeta} \\,\\big)^2 + \\cdots + a_n\\big(\\, \\overline{\\zeta} \\,\\big)^n = 0."
},
{
"math_id": 17,
"text": "\\overline{a_r} = a_r"
}
] | https://en.wikipedia.org/wiki?curid=9178245 |
917966 | Data envelopment analysis | Method in operations research and economics
Data envelopment analysis (DEA) is a nonparametric method in operations research and economics for the estimation of production frontiers. DEA has been applied in a large range of fields including international banking, economic sustainability, police department operations, and logistical applications. Additionally, DEA has been used to assess the performance of natural language processing models, and it has found other applications within machine learning.
Description.
DEA is used to empirically measure productive efficiency of decision-making units (DMUs). Although DEA has a strong link to production theory in economics, the method is also used for benchmarking in operations management, whereby a set of measures is selected to benchmark the performance of manufacturing and service operations. In benchmarking, the efficient DMUs, as defined by DEA, may not necessarily form a “production frontier”, but rather lead to a “best-practice frontier.”
In contrast to parametric methods that require the "ex-ante" specification of a production- or cost-function, non-parametric approaches compare feasible input and output combinations based on the available data only. DEA, one of the most commonly used non-parametric methods, owes its name to its enveloping property of the dataset's efficient DMUs, where the empirically observed, most efficient DMUs constitute the production frontier against which all DMUs are compared. DEA's popularity stems from its relative lack of assumptions, its ability to benchmark multi-dimensional inputs and outputs, and its computational ease, owing to its being expressible as a linear program despite its task of calculating efficiency ratios.
History.
Building on the ideas of Farrell, the 1978 work "Measuring the efficiency of decision-making units" by Charnes, Cooper & Rhodes applied linear programming to estimate, for the first time, an empirical, production-technology frontier. In Germany, the procedure had earlier been used to estimate the marginal productivity of R&D and other factors of production. Since then, there have been a large number of books and journal articles written on DEA or about applying DEA to various sets of problems.
Starting with the CCR model, named after Charnes, Cooper, and Rhodes, many extensions to DEA have been proposed in the literature. They range from adapting implicit model assumptions such as input and output orientation, distinguishing technical and allocative efficiency, adding limited disposability
of inputs/outputs or varying returns-to-scale to techniques that utilize DEA results and extend them for more sophisticated analyses, such as stochastic DEA or cross-efficiency analysis.
Techniques.
In a one-input, one-output scenario, efficiency is merely the ratio of output over input that can be produced, while comparing several entities/DMUs based on it is trivial. However, when adding more inputs or outputs the efficiency computation becomes more complex. Charnes, Cooper, and Rhodes (1978) in their basic DEA model (the CCR) define the objective function to find formula_0 efficiency formula_1 as:
formula_2
where the formula_0 known formula_3 outputs formula_4 are multiplied by their respective weights formula_5 and divided by the formula_6 inputs formula_7 multiplied by their respective weights formula_8.
The efficiency score formula_9 is to be maximized, under the constraint that, using those weights on each formula_10, no efficiency score exceeds one:
formula_11
and all inputs, outputs and weights have to be non-negative. To allow for linear optimization, one typically constrains either the sum of outputs or the sum of inputs to equal a fixed value (typically 1; see the example below).
Because this optimization problem's dimensionality is equal to the sum of its inputs and outputs, selecting the smallest number of inputs/outputs that collectively and accurately capture the process one attempts to characterize is crucial. And because the production frontier envelopment is done empirically, several guidelines exist on the minimum required number of DMUs for good discriminatory power of the analysis, given homogeneity of the sample. This minimum number of DMUs varies between twice the sum of inputs and outputs (formula_12) and twice the product of inputs and outputs (formula_13).
Some advantages of the DEA approach are:
Some of the disadvantages of DEA are:
Example.
Assume that we have the following data: unit 1 produces 100 units of output using 10 units of input 1 and 2 units of input 2; unit 2 produces 80 units of output using 8 units of input 1 and 4 units of input 2; and unit 3 produces 120 units of output using 12 units of input 1 and 1.5 units of input 2.
To calculate the efficiency of unit 1, we define the objective function (OF) as
formula_14
which is subject to (ST) the constraint that the efficiency of each unit cannot be larger than 1:
formula_15
formula_16
formula_17
and non-negativity:
formula_18
A fraction with decision variables in the numerator and denominator is nonlinear. Since we are using a linear programming technique, we need to linearize the formulation, such that the denominator of the objective function is constant (in this case 1), then maximize the numerator.
The new formulation would be:
formula_19
formula_20
formula_21
formula_22
formula_23
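The linearized program can be solved with any linear-programming solver. A minimal sketch using SciPy (an illustrative choice; the variable ordering "x" = ["u"1, "v"1, "v"2] and the code itself are assumptions of this example):
```python
# Solving the linearized model above for unit 1 (linprog minimizes, so we
# minimize -100*u1 in order to maximize 100*u1).
from scipy.optimize import linprog

c = [-100.0, 0.0, 0.0]                     # maximize 100*u1
A_ub = [[100.0, -10.0, -2.0],              # 100*u1 - (10*v1 + 2*v2)   <= 0
        [ 80.0,  -8.0, -4.0],              #  80*u1 - ( 8*v1 + 4*v2)   <= 0
        [120.0, -12.0, -1.5]]              # 120*u1 - (12*v1 + 1.5*v2) <= 0
b_ub = [0.0, 0.0, 0.0]
A_eq = [[0.0, 10.0, 2.0]]                  # normalization: 10*v1 + 2*v2 = 1
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("efficiency of unit 1:", -res.fun)   # -> 1.0 for these data: unit 1 is efficient
```
Repeating the calculation for units 2 and 3 only requires replacing the objective and the normalization constraint with that unit's output and inputs.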
Extensions.
A desire to improve upon DEA by reducing its disadvantages or strengthening its advantages has been a major cause for discoveries in the recent literature. The DEA-based method currently used most often to obtain unique efficiency rankings is called "cross-efficiency." Originally developed by Sexton et al. in 1986, it found widespread application ever since Doyle and Green's 1994 publication. Cross-efficiency is based on the original DEA results, but implements a secondary objective where each DMU peer-appraises all other DMUs with its own factor weights. The average of these peer-appraisal scores is then used to calculate a DMU's cross-efficiency score. This approach avoids DEA's disadvantages of having multiple efficient DMUs and potentially non-unique weights. Another approach to remedy some of DEA's drawbacks is Stochastic DEA, which synthesizes DEA and Stochastic Frontier Analysis (SFA).
Footnotes. | [
{
"math_id": 0,
"text": "DMU_j's"
},
{
"math_id": 1,
"text": "(\\theta_j)"
},
{
"math_id": 2,
"text": "\\max \\quad \\theta_j = \\frac{\\sum\\limits_{m=1}^{M}y_m^j u_m^j}{\\sum\\limits_{n=1}^{N}x_n^j v_n^j},"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "y_1^j,...,y_m^j"
},
{
"math_id": 5,
"text": "u_1^j,...,u_m^j"
},
{
"math_id": 6,
"text": "N"
},
{
"math_id": 7,
"text": "x_1^j,...,x_n^j"
},
{
"math_id": 8,
"text": "v_1^j,...,v_n^j"
},
{
"math_id": 9,
"text": "\\theta_j"
},
{
"math_id": 10,
"text": "DMU_k \\quad k=1,...,K"
},
{
"math_id": 11,
"text": "\\frac{\\sum\\limits_{m=1}^{M}y_m^k u_m^j}{\\sum\\limits_{n=1}^{N}x_n^k v_n^j} \\leq 1 \\qquad k = 1,...,K,"
},
{
"math_id": 12,
"text": "2 (M + N)"
},
{
"math_id": 13,
"text": "2 M N"
},
{
"math_id": 14,
"text": "Max Efficiency :(100u_1)/(10v_1+2v_2)"
},
{
"math_id": 15,
"text": "(100u_1)/(10v_1+2v_2)\\leq 1"
},
{
"math_id": 16,
"text": "(80u_1)/(8v_1+4v_2)\\leq 1"
},
{
"math_id": 17,
"text": "(120u_1)/(12v_1+1.5v_2)\\leq 1"
},
{
"math_id": 18,
"text": "u,v \\geq 0"
},
{
"math_id": 19,
"text": "Max Efficiency :100u_1"
},
{
"math_id": 20,
"text": "100u_1-(10v_1+2v_2)\\leq 0"
},
{
"math_id": 21,
"text": "80u_1-(8v_1+4v_2)\\leq 0"
},
{
"math_id": 22,
"text": "120u_1-(12v_1+1.5v_2)\\leq 0"
},
{
"math_id": 23,
"text": "10v_1+2v_2=1"
}
] | https://en.wikipedia.org/wiki?curid=917966 |
9179665 | M. Riesz extension theorem | The M. Riesz extension theorem is a theorem in mathematics, proved by Marcel Riesz during his study of the problem of moments.
Formulation.
Let formula_0 be a real vector space, formula_1 be a vector subspace, and formula_2 be a convex cone.
A linear functional formula_3 is called formula_4-"positive", if it takes only non-negative values on the cone formula_4:
formula_5
A linear functional formula_6 is called a formula_4-positive "extension" of formula_7, if it is identical to formula_7 in the domain of formula_7, and also returns a value of at least 0 for all points in the cone formula_4:
formula_8
In general, a formula_4-positive linear functional on formula_9 cannot be extended to a formula_4-positive linear functional on formula_0. Already in two dimensions one obtains a counterexample. Let formula_10 and formula_9 be the formula_11-axis. The positive functional formula_12 cannot be extended to a positive functional on formula_0: any linear extension has the form "ψ"("x", "y") = "x" + "cy", and since (−"n", 1) lies in the cone for every "n", positivity would require "c" − "n" ≥ 0 for all "n", which is impossible.
However, the extension exists under the additional assumption that formula_13 namely for every formula_14 there exists an formula_15 such that formula_16
Proof.
The proof is similar to the proof of the Hahn–Banach theorem (see also below).
By transfinite induction or Zorn's lemma it is sufficient to consider the case dim formula_17.
Choose any formula_18. Set
formula_19
We will prove below that formula_20. For now, choose any formula_21 satisfying formula_22, and set formula_23, formula_24, and then extend formula_25 to all of formula_0 by linearity. We need to show that formula_25 is formula_4-positive. Suppose formula_26. Then either formula_27, or formula_28 or formula_29 for some formula_30 and formula_31. If formula_27, then "ψ"("z") = 0 ≥ 0. In the first remaining case formula_33, and so
formula_34
by definition. Thus
formula_35
In the second case, formula_36, and so similarly
formula_37
by definition and so
formula_38
In all cases, "ψ"("z") ≥ 0, and so formula_25 is formula_4-positive.
We now prove that formula_20. Notice by assumption there exists at least one formula_31 for which formula_39, and so formula_40. However, it may be the case that there are no formula_31 for which formula_36, in which case formula_41 and the inequality is trivial (in this case notice that the third case above cannot happen). Therefore, we may assume that formula_42 and there is at least one formula_31 for which formula_36. To prove the inequality, it suffices to show that whenever formula_31 and formula_39, and formula_43 and formula_44, then formula_45. Indeed,
formula_46
since formula_4 is a convex cone, and so
formula_47
since formula_7 is formula_4-positive.
Corollary: Krein's extension theorem.
Let "E" be a real linear space, and let "K" ⊂ "E" be a convex cone. Let "x" ∈ "E"/(−"K") be such that R "x" + "K" = "E". Then there exists a "K"-positive linear functional "φ": "E" → R such that "φ"("x") > 0.
Connection to the Hahn–Banach theorem.
The Hahn–Banach theorem can be deduced from the M. Riesz extension theorem.
Let "V" be a linear space, and let "N" be a sublinear function on "V". Let "φ" be a functional on a subspace "U" ⊂ "V" that is dominated by "N":
formula_48
The Hahn–Banach theorem asserts that "φ" can be extended to a linear functional on "V" that is dominated by "N".
To derive this from the M. Riesz extension theorem, define a convex cone "K" ⊂ R×"V" by
formula_49
Define a functional "φ"1 on R×"U" by
formula_50
One can see that "φ"1 is "K"-positive, and that "K" + (R × "U") = R × "V". Therefore "φ"1 can be extended to a "K"-positive functional "ψ"1 on R×"V". Then
formula_51
is the desired extension of "φ". Indeed, if "ψ"("x") > "N"("x"), we have: ("N"("x"), "x") ∈ "K", whereas
formula_52
leading to a contradiction.
References. | [
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "F\\subset E"
},
{
"math_id": 2,
"text": "K\\subset E"
},
{
"math_id": 3,
"text": "\\phi: F\\to\\mathbb{R}"
},
{
"math_id": 4,
"text": "K"
},
{
"math_id": 5,
"text": "\\phi(x) \\geq 0 \\quad \\text{for} \\quad x \\in F \\cap K."
},
{
"math_id": 6,
"text": "\\psi: E\\to\\mathbb{R}"
},
{
"math_id": 7,
"text": "\\phi"
},
{
"math_id": 8,
"text": "\\psi|_F = \\phi \\quad \\text{and} \\quad \\psi(x) \\geq 0\\quad \\text{for} \\quad x \\in K."
},
{
"math_id": 9,
"text": "F"
},
{
"math_id": 10,
"text": "E=\\mathbb{R}^2,\\ K=\\{(x,y): y>0\\}\\cup\\{(x,0): x>0\\},"
},
{
"math_id": 11,
"text": "x"
},
{
"math_id": 12,
"text": "\\phi(x,0)=x"
},
{
"math_id": 13,
"text": "E\\subset K+F,"
},
{
"math_id": 14,
"text": "y\\in E,"
},
{
"math_id": 15,
"text": "x\\in F"
},
{
"math_id": 16,
"text": "y-x\\in K."
},
{
"math_id": 17,
"text": "E/F = 1"
},
{
"math_id": 18,
"text": "y \\in E \\setminus F"
},
{
"math_id": 19,
"text": "a = \\sup \\{\\, \\phi(x) \\mid x \\in F, \\ y-x \\in K \\,\\},\\ b = \\inf \\{\\, \\phi(x) \\mid x \\in F, x-y \\in K \\,\\}."
},
{
"math_id": 20,
"text": "-\\infty < a \\le b"
},
{
"math_id": 21,
"text": "c"
},
{
"math_id": 22,
"text": "a \\le c \\le b"
},
{
"math_id": 23,
"text": "\\psi(y) = c"
},
{
"math_id": 24,
"text": "\\psi|_F = \\phi"
},
{
"math_id": 25,
"text": "\\psi"
},
{
"math_id": 26,
"text": "z \\in K"
},
{
"math_id": 27,
"text": "z = 0"
},
{
"math_id": 28,
"text": "z = p(x + y)"
},
{
"math_id": 29,
"text": "z = p(x - y)"
},
{
"math_id": 30,
"text": "p > 0"
},
{
"math_id": 31,
"text": "x \\in F"
},
{
"math_id": 32,
"text": "\\psi(z) > 0"
},
{
"math_id": 33,
"text": "x + y = y -(-x) \\in K"
},
{
"math_id": 34,
"text": "\\psi(y) = c \\geq a \\geq \\phi(-x) = \\psi(-x)"
},
{
"math_id": 35,
"text": "\\psi(z) = p\\psi(x+y) = p(\\psi(x) + \\psi(y)) \\geq 0."
},
{
"math_id": 36,
"text": "x - y \\in K"
},
{
"math_id": 37,
"text": "\\psi(y) = c \\leq b \\leq \\phi(x) = \\psi(x)"
},
{
"math_id": 38,
"text": "\\psi(z) = p\\psi(x-y) = p(\\psi(x)-\\psi(y)) \\geq 0."
},
{
"math_id": 39,
"text": "y - x \\in K"
},
{
"math_id": 40,
"text": "-\\infty < a"
},
{
"math_id": 41,
"text": "b = \\infty"
},
{
"math_id": 42,
"text": "b < \\infty"
},
{
"math_id": 43,
"text": "x' \\in F"
},
{
"math_id": 44,
"text": "x' - y \\in K"
},
{
"math_id": 45,
"text": "\\phi(x) \\le \\phi(x')"
},
{
"math_id": 46,
"text": "x' -x = (x' - y) + (y-x) \\in K"
},
{
"math_id": 47,
"text": "0 \\leq \\phi(x'-x) = \\phi(x')-\\phi(x)"
},
{
"math_id": 48,
"text": " \\phi(x) \\leq N(x), \\quad x \\in U."
},
{
"math_id": 49,
"text": " K = \\left\\{ (a, x) \\, \\mid \\, N(x) \\leq a \\right\\}."
},
{
"math_id": 50,
"text": " \\phi_1(a, x) = a - \\phi(x)."
},
{
"math_id": 51,
"text": " \\psi(x) = - \\psi_1(0, x) "
},
{
"math_id": 52,
"text": " \\psi_1(N(x), x) = N(x) - \\psi(x) < 0, "
}
] | https://en.wikipedia.org/wiki?curid=9179665 |
918047 | Künneth theorem | Relates the homology of two objects to the homology of their product
In mathematics, especially in homological algebra and algebraic topology, a Künneth theorem, also called a Künneth formula, is a statement relating the homology of two objects to the homology of their product. The classical statement of the Künneth theorem relates the singular homology of two topological spaces "X" and "Y" and their product space formula_0. In the simplest possible case the relationship is that of a tensor product, but for applications it is very often necessary to apply certain tools of homological algebra to express the answer.
A Künneth theorem or Künneth formula is true in many different homology and cohomology theories, and the name has become generic. These many results are named for the German mathematician Hermann Künneth.
Singular homology with coefficients in a field.
Let "X" and "Y" be two topological spaces. In general one uses singular homology; but if "X" and "Y" happen to be CW complexes, then this can be replaced by cellular homology, because that is isomorphic to singular homology. The simplest case is when the coefficient ring for homology is a field "F". In this situation, the Künneth theorem (for singular homology) states that for any integer "k",
formula_1.
Furthermore, the isomorphism is a natural isomorphism. The map from the sum to the homology group of the product is called the "cross product". More precisely, there is a cross product operation by which an "i"-cycle on "X" and a "j"-cycle on "Y" can be combined to create an formula_2-cycle on formula_0; so that there is an explicit linear mapping defined from the direct sum to formula_3.
A consequence of this result is that the Betti numbers, the dimensions of the homology with formula_4 coefficients, of formula_0 can be determined from those of "X" and "Y". If formula_5 is the generating function of the sequence of Betti numbers formula_6 of a space "Z", then
formula_7
Here when there are finitely many Betti numbers of "X" and "Y", each of which is a natural number rather than formula_8, this reads as an identity on Poincaré polynomials. In the general case these are formal power series with possibly infinite coefficients, and have to be interpreted accordingly. Furthermore, the above statement holds not only for the Betti numbers but also for the generating functions of the dimensions of the homology over any field. (If the integer homology is not torsion-free, then these numbers may differ from the standard Betti numbers.)
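For instance, over the field with two elements the real projective plane has one-dimensional homology in degrees 0, 1 and 2, so its Poincaré polynomial is 1 + "t" + "t"2, and the identity above gives the mod-2 homology of the product of two real projective planes by multiplying polynomials. A minimal numerical sketch (illustrative only; the use of NumPy and the variable names are assumptions of this example):
```python
# Kunneth over a field: Poincare polynomials multiply.
# dim H_k(RP^2; Z/2) = 1 for k = 0, 1, 2, so its polynomial is 1 + t + t^2.
import numpy as np

p_rp2 = np.array([1, 1, 1])                # coefficients of 1 + t + t^2
p_product = np.convolve(p_rp2, p_rp2)      # polynomial multiplication
print(p_product)                           # [1 2 3 2 1] = dims of H_k(RP^2 x RP^2; Z/2)
```
The resulting dimensions 1, 2, 3, 2, 1 agree, via the universal coefficient theorem, with the integer homology groups computed in the example below.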
Singular homology with coefficients in a principal ideal domain.
The above formula is simple because vector spaces over a field have very restricted behavior. As the coefficient ring becomes more general, the relationship becomes more complicated. The next simplest case is the case when the coefficient ring is a principal ideal domain. This case is particularly important because the integers formula_9 are a PID.
In this case the equation above is no longer always true. A correction factor appears to account for the possibility of torsion phenomena. This correction factor is expressed in terms of the Tor functor, the first derived functor of the tensor product.
When "R" is a PID, then the correct statement of the Künneth theorem is that for any topological spaces "X" and "Y" there are natural short exact sequences
formula_10
Furthermore, these sequences split, but not canonically.
Example.
The short exact sequences just described can easily be used to compute the homology groups with integer coefficients of the product formula_11 of two real projective planes, in other words, formula_12. These spaces are CW complexes. Denoting the homology group formula_13 by formula_14 for brevity's sake, one knows from a simple calculation with cellular homology that
formula_15,
formula_16,
formula_17 for all other values of "i".
The only non-zero Tor group (torsion product) which can be formed from these values of formula_14 is
formula_18.
Therefore, the Künneth short exact sequence reduces in every degree to an isomorphism, because there is a zero group in each case on either the left or the right side in the sequence. The result is
formula_19
and all the other homology groups are zero.
The Künneth spectral sequence.
For a general commutative ring "R", the homology of "X" and "Y" is related to the homology of their product by a Künneth spectral sequence
formula_20
In the cases described above, this spectral sequence collapses to give an isomorphism or a short exact sequence.
Relation with homological algebra, and idea of proof.
The chain complex of the space "X" × "Y" is related to the chain complexes of "X" and "Y" by a natural quasi-isomorphism
formula_21
For singular chains this is the theorem of Eilenberg and Zilber. For cellular chains on CW complexes, it is a straightforward isomorphism. Then the homology of the tensor product on the right is given by the spectral Künneth formula of homological algebra.
The freeness of the chain modules means that in this geometric case it is not necessary to use any hyperhomology or total derived tensor product.
There are analogues of the above statements for singular cohomology and sheaf cohomology. For sheaf cohomology on an algebraic variety, Alexander Grothendieck found six spectral sequences relating the possible hyperhomology groups of two chain complexes of sheaves and the hyperhomology groups of their tensor product.
Künneth theorems in generalized homology and cohomology theories.
There are many generalized (or "extraordinary") homology and cohomology theories for topological spaces. K-theory and cobordism are the best-known. Unlike ordinary homology and cohomology, they typically cannot be defined using chain complexes. Thus Künneth theorems can not be obtained by the above methods of homological algebra. Nevertheless, Künneth theorems in just the same form have been proved in very many cases by various other methods. The first were Michael Atiyah's Künneth theorem for complex K-theory and Pierre Conner and Edwin E. Floyd's result in cobordism. A general method of proof emerged, based upon a homotopical theory of modules over highly structured ring spectra. The homotopy category of such modules closely resembles the derived category in homological algebra.
References. | [
{
"math_id": 0,
"text": "X \\times Y"
},
{
"math_id": 1,
"text": "\\bigoplus_{i + j = k} H_i(X; F) \\otimes H_j(Y; F) \\cong H_k(X \\times Y; F)"
},
{
"math_id": 2,
"text": "(i+j)"
},
{
"math_id": 3,
"text": "H_k(X \\times Y)"
},
{
"math_id": 4,
"text": "\\Q"
},
{
"math_id": 5,
"text": "p_Z(t)"
},
{
"math_id": 6,
"text": "b_k(Z)"
},
{
"math_id": 7,
"text": "p_{X \\times Y}(t) = p_X(t) p_Y(t)."
},
{
"math_id": 8,
"text": "\\infty"
},
{
"math_id": 9,
"text": "\\Z"
},
{
"math_id": 10,
"text": "0 \\to \\bigoplus_{i + j = k} H_i(X; R) \\otimes_R H_j(Y; R) \\to H_k(X \\times Y; R) \\to \\bigoplus_{i + j = k-1} \\mathrm{Tor}_1^R(H_i(X; R), H_j(Y; R)) \\to 0."
},
{
"math_id": 11,
"text": "\\mathbb{RP}^2 \\times \\mathbb{RP}^2"
},
{
"math_id": 12,
"text": "H_k(\\mathbb{RP}^2 \\times \\mathbb{RP}^2; \\Z)"
},
{
"math_id": 13,
"text": "H_i(\\mathbb{RP}^2;\\Z)"
},
{
"math_id": 14,
"text": "h_i"
},
{
"math_id": 15,
"text": "h_0\\cong \\Z"
},
{
"math_id": 16,
"text": "h_1\\cong \\Z/2\\Z"
},
{
"math_id": 17,
"text": "h_i= 0"
},
{
"math_id": 18,
"text": "\\mathrm{Tor}^{\\Z}_1(h_1, h_1) \\cong \\mathrm{Tor}^{\\Z}_1(\\Z/2\\Z,\\Z/2\\Z)\\cong \\Z/2\\Z"
},
{
"math_id": 19,
"text": "\\begin{align}\nH_0 \\left (\\mathbb{RP}^2 \\times \\mathbb{RP}^2;\\Z \\right )\\; &\\cong \\;h_0 \\otimes h_0 \\;\\cong \\;\\Z \\\\\nH_1 \\left (\\mathbb{RP}^2 \\times \\mathbb{RP}^2;\\Z \\right )\\; &\\cong \\; h_0 \\otimes h_1 \\; \\oplus \\; h_1 \\otimes h_0 \\;\\cong \\;\\Z/2\\Z\\oplus \\Z/2\\Z \\\\\nH_2 \\left (\\mathbb{RP}^2 \\times \\mathbb{RP}^2;\\Z \\right )\\; &\\cong \\;h_1 \\otimes h_1 \\;\\cong \\;\\Z/2\\Z \\\\\nH_3 \\left (\\mathbb{RP}^2 \\times \\mathbb{RP}^2;\\Z \\right )\\; &\\cong \\;\\mathrm{Tor}^{\\Z}_1(h_1,h_1) \\;\\cong \\;\\Z/2\\Z \\\\\n\\end{align} "
},
{
"math_id": 20,
"text": "E_{pq}^2 = \\bigoplus_{q_1 + q_2 = q} \\mathrm{Tor}^R_p(H_{q_1}(X; R), H_{q_2}(Y; R)) \\Rightarrow H_{p+q}(X \\times Y; R)."
},
{
"math_id": 21,
"text": "C_*(X \\times Y) \\cong C_*(X) \\otimes C_*(Y)."
}
] | https://en.wikipedia.org/wiki?curid=918047 |
918131 | Representation theory of diffeomorphism groups | Representation theory of the symmetries of manifolds
In mathematics, a source for the representation theory of the group of diffeomorphisms of a smooth manifold "M" is the initial observation that (for "M" connected) that group acts transitively on "M".
History.
A survey paper from 1975 of the subject by Anatoly Vershik, Israel Gelfand and M. I. Graev attributes the original interest in the topic to research in theoretical physics of the local current algebra, in the preceding years. Research on the "finite configuration" representations was in papers of R. S. Ismagilov (1971), and A. A. Kirillov (1974). The representations of interest in physics are described as a cross product "C"∞("M")·Diff("M").
Constructions.
Let therefore "M" be a "n"-dimensional connected differentiable manifold, and "x" be any point on it. Let Diff("M") be the orientation-preserving diffeomorphism group of "M" (only the identity component of mappings homotopic to the identity diffeomorphism if you wish) and Diff"x"1("M") the stabilizer of "x". Then, "M" is identified as a homogeneous space
Diff("M")/Diff"x"1("M").
From the algebraic point of view instead, formula_0 is the algebra of smooth functions over "M" and formula_1 is the ideal of smooth functions vanishing at "x". Let formula_2 be the ideal of smooth functions which vanish up to the n-1th partial derivative at "x". formula_2 is invariant under the group Diff"x"1("M") of diffeomorphisms fixing x. For "n" > 0 the group Diff"x""n"("M") is defined as the subgroup of Diff"x"1("M") which acts as the identity on formula_3. So, we have a descending chain
Diff("M") ⊃ Diff"x"1(M) ⊃ ... ⊃ Diff"x""n"("M") ⊃ ...
Here Diff"x""n"("M") is a normal subgroup of Diff"x"1("M"), which means we can look at the quotient group
Diff"x"1("M")/Diff"x""n"("M").
Using harmonic analysis, a real- or complex-valued function (with some sufficiently nice topological properties) on the diffeomorphism group can be decomposed into Diff"x"1("M") representation-valued functions over "M".
The supply of representations.
So what are the representations of Diff"x"1("M")? Let's use the fact that if we have a group homomorphism φ:"G" → "H" and an "H"-representation, we can obtain a restricted "G"-representation. So, if we have a rep of
Diff"x"1("M")/Diff"x""n"("M"),
we can obtain a rep of Diff"x"1("M").
Let's look at
Diff"x"1("M")/Diff"x"2("M")
first. This is isomorphic to the general linear group GL+("n", R) (the "+" because we are only considering orientation-preserving diffeomorphisms, so the determinant of the Jacobian is positive). What are the reps of GL+("n", R)?
formula_4.
We know the reps of SL("n", R) are simply tensors over "n" dimensions. How about the R+ part? That corresponds to the "density", or in other words, how the tensor rescales under the determinant of the Jacobian of the diffeomorphism at "x". (Think of it as the conformal weight if you will, except that there is no conformal structure here). (Incidentally, there is nothing preventing us from having a complex density).
So, we have just discovered the tensor reps (with density) of the diffeomorphism group.
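As a minimal numerical illustration of this splitting (the function names and sample matrix below are arbitrary, and NumPy is assumed to be available), an orientation-preserving Jacobian can be factored into its R+ scalar part and its SL("n", R) part, and a density of weight "w" rescales by the corresponding power of the determinant:
```python
import numpy as np

# Decompose a matrix J in GL+(n, R) as (scale) * (unimodular part),
# mirroring GL+(n,R) ~ R+ x SL(n,R).
def decompose(J):
    d = np.linalg.det(J)
    assert d > 0, "J must be orientation preserving (det > 0)"
    n = J.shape[0]
    scale = d ** (1.0 / n)      # the R+ factor
    S = J / scale               # the SL(n,R) factor, det(S) == 1
    return scale, S

# A density of weight w transforms by the factor det(J)**w.
def density_factor(J, w):
    return np.linalg.det(J) ** w

J = np.array([[2.0, 1.0],
              [0.0, 3.0]])      # det = 6 > 0
scale, S = decompose(J)
print(scale, np.linalg.det(S))  # sqrt(6) ~ 2.449..., det(S) ~ 1.0
print(density_factor(J, 0.5))   # a weight-1/2 density rescales by sqrt(6)
```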
Let's look at
Diff"x"1("M")/Diff"x""n"("M").
This is a finite-dimensional group. We have the chain
Diff"x"1("M")/Diff"x"1("M") ⊂ ... ⊂ Diff"x"1("M")/Diff"x""n"("M") ⊂ ...
Here, the "⊂" signs should really be read to mean an injective homomorphism, but since it is canonical, we can pretend these quotient groups are embedded one within the other.
Any rep of
Diff"x"1("M")/Diff"x""m"("M")
can automatically be turned into a rep of
Diff"x"1/Diff"x""n"("M")
if "n" > "m". Let's say we have a rep of
Diff"x"1/Diff"x""p" + 2
which doesn't arise from a rep of
Diff"x"1/Diff"x""p" + 1.
Then, we call the fiber bundle with that rep as the fiber (i.e. Diff"x"1/Diff"x""p" + 2 is the structure group) a jet bundle of order "p".
Side remark: This is really the method of induced representations with the smaller group being Diffx1(M) and the larger group being Diff("M").
Intertwining structure.
In general, the space of sections of the tensor and jet bundles is not an irreducible representation, and we often look at subrepresentations of it. We can study the structure of these reps through the study of the intertwiners between them.
If the fiber is not an irreducible representation of Diff"x"1("M"), then we can have a nonzero intertwiner mapping each fiber pointwise into a smaller quotient representation. Also, the exterior derivative is an intertwiner from the space of differential forms to another of higher order. (Other derivatives are not, because connections aren't invariant under diffeomorphisms, though they are covariant.) The partial derivative isn't diffeomorphism invariant. There is a derivative intertwiner taking sections of a jet bundle of order "p" into sections of a jet bundle of order "p" + 1.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C^\\infty(M)"
},
{
"math_id": 1,
"text": "I_x(M)"
},
{
"math_id": 2,
"text": "I_x^n(M)"
},
{
"math_id": 3,
"text": "I_x(M)/I_x^n(M)"
},
{
"math_id": 4,
"text": "GL^+(n,\\mathbb{R})\\cong \\mathbb{R}^+\\times SL(n,\\mathbb{R})"
}
] | https://en.wikipedia.org/wiki?curid=918131 |
9181701 | McDiarmid's inequality | Probability and computer science concept
In probability theory and theoretical computer science, McDiarmid's inequality (named after Colin McDiarmid) is a concentration inequality which bounds the deviation between the sampled value and the expected value of certain functions when they are evaluated on independent random variables. McDiarmid's inequality applies to functions that satisfy a "bounded differences" property, meaning that replacing a single argument to the function while leaving all other arguments unchanged cannot cause too large of a change in the value of the function.
Statement.
A function formula_0 satisfies the "bounded differences property" if substituting the value of the formula_1th coordinate formula_2 changes the value of formula_3 by at most formula_4. More formally, if there are constants formula_5 such that for all formula_6, and all formula_7,
formula_8
<templatestyles src="Math_theorem/styles.css" />
McDiarmid's Inequality — Let formula_9 satisfy the bounded differences property with bounds formula_5.
Consider independent random variables formula_10 where formula_11 for all formula_1.
Then, for any formula_12,
formula_13
formula_14
and as an immediate consequence,
formula_15
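As an illustration (the choice of function, sample sizes and threshold below are arbitrary, and NumPy is assumed), the two-sided bound can be compared with a Monte Carlo estimate for the sample mean of "n" independent Bernoulli(1/2) variables, for which the bounded-differences constants are "c""i" = 1/"n":
```python
import numpy as np

rng = np.random.default_rng(0)

# f = sample mean of n Bernoulli(1/2) variables; changing one argument
# moves f by at most c_i = 1/n, so McDiarmid's bound applies.
n, trials, eps = 100, 100_000, 0.1
X = rng.integers(0, 2, size=(trials, n))
deviations = np.abs(X.mean(axis=1) - 0.5)

empirical = np.mean(deviations >= eps)
mcdiarmid = 2 * np.exp(-2 * eps**2 / (n * (1.0 / n) ** 2))  # = 2*exp(-2*n*eps^2)

print(f"empirical P(|f - E f| >= {eps}): {empirical:.4f}")
print(f"McDiarmid bound:                 {mcdiarmid:.4f}")
```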
Extensions.
Unbalanced distributions.
A stronger bound may be given when the arguments to the function are sampled from unbalanced distributions, such that resampling a single argument rarely causes a large change to the function value.
<templatestyles src="Math_theorem/styles.css" />
McDiarmid's Inequality (unbalanced) — Let formula_16 satisfy the bounded differences property with bounds formula_5.
Consider independent random variables formula_17 drawn from a distribution where there is a particular value formula_18 which occurs with probability formula_19.
Then, for any formula_12,
formula_20
This may be used to characterize, for example, the value of a function on graphs when evaluated on sparse random graphs and hypergraphs, since in a sparse random graph, it is much more likely for any particular edge to be missing than to be present.
Differences bounded with high probability.
McDiarmid's inequality may be extended to the case where the function being analyzed does not strictly satisfy the bounded differences property, but large differences remain very rare.
<templatestyles src="Math_theorem/styles.css" />
McDiarmid's Inequality (Differences bounded with high probability) — Let formula_9 be a function and formula_21 be a subset of its domain and let formula_22 be constants such that for all pairs formula_23 and formula_24,
formula_25
Consider independent random variables formula_10 where formula_11 for all formula_1.
Let formula_26 and let formula_27.
Then, for any formula_12,
formula_28
and as an immediate consequence,
formula_29
There exist stronger refinements to this analysis in some distribution-dependent scenarios, such as those that arise in learning theory.
Sub-Gaussian and sub-exponential norms.
Let the "formula_30th centered conditional version" of a function formula_3 be
formula_31
so that formula_32 is a random variable depending on random values of formula_33.
<templatestyles src="Math_theorem/styles.css" />
McDiarmid's Inequality (Sub-Gaussian norm) —
Let formula_9 be a function.
Consider independent random variables formula_34 where formula_11 for all formula_1.
Let formula_32 refer to the formula_30th centered conditional version of formula_3.
Let formula_35 denote the sub-Gaussian norm of a random variable.
Then, for any formula_12,
formula_36
<templatestyles src="Math_theorem/styles.css" />
McDiarmid's Inequality (Sub-exponential norm) —
Let formula_9 be a function.
Consider independent random variables formula_34 where formula_11 for all formula_1.
Let formula_32 refer to the formula_30th centered conditional version of formula_3.
Let formula_37 denote the sub-exponential norm of a random variable.
Then, for any formula_12,
formula_38
Bennett and Bernstein forms.
Refinements to McDiarmid's inequality in the style of Bennett's inequality and Bernstein inequalities are made possible by defining a variance term for each function argument. Let
formula_39
<templatestyles src="Math_theorem/styles.css" />
McDiarmid's Inequality (Bennett form) — Let formula_16 satisfy the bounded differences property with bounds formula_5.
Consider independent random variables formula_10 where formula_11 for all formula_1. Let formula_40 and formula_41 be defined as at the beginning of this section.
Then, for any formula_12,
formula_42
<templatestyles src="Math_theorem/styles.css" />
McDiarmid's Inequality (Bernstein form) — Let formula_16 satisfy the bounded differences property with bounds formula_5. Let formula_40 and formula_41 be defined as at the beginning of this section.
Then, for any formula_12,
formula_43
Proof.
The following proof of McDiarmid's inequality constructs the Doob martingale tracking the conditional expected value of the function as more and more of its arguments are sampled and conditioned on, and then applies a martingale concentration inequality (Azuma's inequality).
An alternate argument avoiding the use of martingales also exists, taking advantage of the independence of the function arguments to provide a Chernoff-bound-like argument.
For better readability, we will introduce a notational shorthand: formula_44 will denote formula_45 for any formula_46 and integers formula_47, so that, for example,
formula_48
Pick any formula_49. Then, for any formula_50, by triangle inequality,
formula_51
and thus formula_3 is bounded.
Since formula_3 is bounded, define the Doob martingale formula_52 (each formula_53 being a random variable depending on the random values of formula_54) as
formula_55
for all formula_56 and formula_57, so that formula_58.
Now define the random variables for each formula_1
formula_59
Since formula_60 are independent of each other, conditioning on formula_61 does not affect the probabilities of the other variables, so these are equal to the expressions
formula_62
Note that formula_63. In addition,
formula_64
Then, applying the general form of Azuma's inequality to formula_65, we have
formula_66
The one-sided bound in the other direction is obtained by applying Azuma's inequality to formula_67 and the two-sided bound follows from a union bound. formula_68
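The Doob martingale construction above can also be visualised numerically. The sketch below (a toy choice of "f", with arbitrary sample sizes and seed, NumPy assumed) estimates "Z""i" by Monte Carlo along one sampled path for "f" equal to the number of ones among independent fair bits, and shows that the increments stay within the bounded-differences constant "c""i" = 1:
```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: X_1..X_n i.i.d. uniform on {0,1}, f = number of ones,
# so the bounded-differences constants are c_i = 1.
n = 8
def f(x):
    return int(np.sum(x))

# One sampled path x_1..x_n.
x = rng.integers(0, 2, size=n)

# Monte Carlo estimate of the Doob martingale Z_i = E[f | X_1..X_i]:
# fix the first i coordinates at the observed values, resample the rest.
def Z(i, samples=20_000):
    tails = rng.integers(0, 2, size=(samples, n - i))
    return np.mean([f(np.concatenate([x[:i], t])) for t in tails])

zs = [Z(i) for i in range(n + 1)]   # Z_0 ~ E[f], Z_n = f(x)
increments = np.diff(zs)
print(np.round(zs, 3))
print(np.round(increments, 3))      # each |Z_i - Z_{i-1}| <= c_i = 1
```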
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f: \\mathcal{X}_1 \\times \\mathcal{X}_2 \\times \\cdots \\times \\mathcal{X}_n \\rightarrow \\mathbb{R} "
},
{
"math_id": 1,
"text": "i"
},
{
"math_id": 2,
"text": "x_i"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "c_i"
},
{
"math_id": 5,
"text": "c_1, c_2, \\dots, c_n"
},
{
"math_id": 6,
"text": "i\\in[n]"
},
{
"math_id": 7,
"text": "x_1\\in \\mathcal{X}_1,\\,x_2\\in \\mathcal{X}_2,\\, \\ldots,\\, x_n \\in \\mathcal{X}_n"
},
{
"math_id": 8,
"text": "\n\\sup_{x_i' \\in \\mathcal{X}_i} \\left|f(x_1, \\dots, x_{i-1}, x_i, x_{i+1}, \\ldots, x_n) - f(x_1, \\dots, x_{i-1}, x_i', x_{i+1}, \\ldots, x_n)\\right| \\leq c_i.\n"
},
{
"math_id": 9,
"text": "f: \\mathcal{X}_1 \\times \\mathcal{X}_2 \\times \\cdots \\times \\mathcal{X}_n \\rightarrow \\mathbb{R}"
},
{
"math_id": 10,
"text": "X_1, X_2, \\dots, X_n"
},
{
"math_id": 11,
"text": "X_i \\in \\mathcal{X}_i"
},
{
"math_id": 12,
"text": "\\varepsilon > 0"
},
{
"math_id": 13,
"text": "\n\\text{P}\\left(f(X_1, X_2, \\ldots, X_n) - \\mathbb{E}[f(X_1, X_2, \\ldots, X_n)] \\geq \\varepsilon\\right)\n\\leq\n\\exp \\left(-\\frac{2 \\varepsilon^2}{\\sum_{i=1}^{n} c_i^2} \\right),\n"
},
{
"math_id": 14,
"text": "\n\\text{P}(f(X_1, X_2, \\ldots, X_n) - \\mathbb{E}[f(X_1, X_2, \\ldots, X_n)] \\leq -\\varepsilon)\n\\leq\n\\exp \\left(-\\frac{2 \\varepsilon^2}{\\sum_{i=1}^{n} c_i^2}\\right),\n"
},
{
"math_id": 15,
"text": "\n\\text{P}(|f(X_1, X_2, \\ldots, X_n) - \\mathbb{E}[f(X_1, X_2, \\ldots, X_n)]| \\geq \\varepsilon)\n\\leq\n2 \\exp \\left(-\\frac{2 \\varepsilon^2}{\\sum_{i=1}^{n} c_i^2}\\right).\n"
},
{
"math_id": 16,
"text": "f: \\mathcal{X}^n \\rightarrow \\mathbb{R}"
},
{
"math_id": 17,
"text": "X_1, X_2, \\ldots, X_n \\in \\mathcal{X}"
},
{
"math_id": 18,
"text": "\\chi_0 \\in \\mathcal{X}"
},
{
"math_id": 19,
"text": "1-p"
},
{
"math_id": 20,
"text": "\n\\text{P}(|f(X_1, \\ldots, X_n) - \\mathbb{E}[f(X_1, \\ldots, X_n)]| \\geq \\varepsilon)\n\\leq\n2 \\exp \\left(\\frac{-\\varepsilon^2}{2p(2-p)\\sum_{i=1}^{n} c_i^2 + \\frac{2}{3}\\varepsilon\\max_i c_i}\\right).\n"
},
{
"math_id": 21,
"text": "\\mathcal{Y} \\subseteq \\mathcal{X}_1 \\times \\mathcal{X}_2 \\times \\cdots \\times \\mathcal{X}_n"
},
{
"math_id": 22,
"text": "c_1, c_2, \\dots, c_n \\ge 0"
},
{
"math_id": 23,
"text": "(x_1,\\ldots,x_n)\\in \\mathcal{Y}"
},
{
"math_id": 24,
"text": "(x'_1,\\ldots,x'_n)\\in \\mathcal{Y}"
},
{
"math_id": 25,
"text": "\n\\left|f(x_1, \\ldots, x_n) - f(x'_1, \\ldots, x'_n)\\right| \\leq \\sum_{i: x_i \\ne x'_i} c_i.\n"
},
{
"math_id": 26,
"text": "p = 1 - \\mathrm{P}((X_1, \\ldots, X_n) \\in \\mathcal{Y})"
},
{
"math_id": 27,
"text": "m=\\mathbb{E}[f(X_1, \\ldots, X_n) \\mid (X_1, \\ldots, X_n) \\in \\mathcal{Y}]"
},
{
"math_id": 28,
"text": "\n\\text{P}\\left(f(X_1, \\ldots, X_n) - m \\geq \\varepsilon\\right)\n\\leq p +\n\\exp \\left(-\\frac{2 \\max\\left(0,\\varepsilon-p\\sum_{i=1}^nc_i\\right)^2}{\\sum_{i=1}^{n} c_i^2} \\right),\n"
},
{
"math_id": 29,
"text": "\n\\text{P}(|f(X_1, \\ldots, X_n) - m| \\geq \\varepsilon)\n\\leq\n2p+2\\exp \\left(-\\frac{2 \\max\\left(0,\\varepsilon-p\\sum_{i=1}^nc_i\\right)^2}{\\sum_{i=1}^{n} c_i^2} \\right).\n"
},
{
"math_id": 30,
"text": "k"
},
{
"math_id": 31,
"text": "f_k(X)(x) := f(x_1, \\ldots, x_{k-1}, X_k, x_{k+1}, \\ldots, x_n) - \\mathbb{E}_{X'_k}f(x_1, \\ldots, x_{k-1}, X'_k, x_{k+1}, \\ldots, x_n),"
},
{
"math_id": 32,
"text": "f_k(X)"
},
{
"math_id": 33,
"text": "x_1, \\ldots, x_{k-1}, x_{k+1}, \\ldots, x_n"
},
{
"math_id": 34,
"text": "X = (X_1, X_2, \\dots, X_n)"
},
{
"math_id": 35,
"text": "\\|\\cdot\\|_{\\psi_2}"
},
{
"math_id": 36,
"text": "\n\\text{P}\\left(f(X_1, \\ldots, X_n) - m \\geq \\varepsilon\\right)\n\\leq\n\\exp \\left(\\frac{-\\varepsilon^2}{32e\\left\\|\\sum_{k\\in [n]}\\|f_k(X)\\|_{\\psi_2}^2\\right\\|_{\\infty}} \\right).\n"
},
{
"math_id": 37,
"text": "\\|\\cdot\\|_{\\psi_1}"
},
{
"math_id": 38,
"text": "\n\\text{P}\\left(f(X_1, \\ldots, X_n) - m \\geq \\varepsilon\\right)\n\\leq\n\\exp \\left(\\frac{-\\varepsilon^2}{4e^2\\left\\|\\sum_{k\\in [n]}\\|f_k(X)\\|_{\\psi_1}^2\\right\\|_{\\infty} + 2\\varepsilon e\\max_{k \\in [n]}\\left\\|\\|f_k(X)\\|_{\\psi_1}\\right\\|_{\\infty}} \\right).\n"
},
{
"math_id": 39,
"text": "\\begin{align}\nB &:= \\max_{k \\in [n]} \\sup_{x_1, \\dots, x_{k-1}, x_{k+1}, \\dots, x_{n}} \\left|f(x_1, \\dots, x_{k-1}, X_k, x_{k+1}, \\dots, x_n) - \\mathbb{E}_{X_k}f(x_1, \\dots, x_{k-1}, X_k, x_{k+1}, \\dots, x_n)\\right|, \\\\\nV_k &:= \\sup_{x_1, \\dots, x_{k-1}, x_{k+1}, \\dots, x_{n}} \\mathbb{E}_{X_k} \\left(f(x_1, \\dots, x_{k-1}, X_k, x_{k+1}, \\dots, x_n) - \\mathbb{E}_{X_k}f(x_1, \\dots, x_{k-1}, X_k, x_{k+1}, \\dots, x_n)\\right)^2, \\\\\n\\tilde \\sigma^2 &:= \\sum_{k=1}^n V_k.\n\\end{align}"
},
{
"math_id": 40,
"text": "B"
},
{
"math_id": 41,
"text": "\\tilde\\sigma^2"
},
{
"math_id": 42,
"text": "\n\\text{P}(f(X_1, \\ldots, X_n) - \\mathbb{E}[f(X_1, \\ldots, X_n)] \\geq \\varepsilon)\n\\leq\n\\exp \\left(-\\frac{\\varepsilon}{2B}\\log\\left(1+\\frac{B\\varepsilon}{\\tilde\\sigma^2}\\right)\\right).\n"
},
{
"math_id": 43,
"text": "\n\\text{P}(f(X_1, \\ldots, X_n) - \\mathbb{E}[f(X_1, \\ldots, X_n)] \\geq \\varepsilon)\n\\leq\n\\exp \\left(-\\frac{\\varepsilon^2}{2\\left(\\tilde\\sigma^2 + \\frac{B\\varepsilon}{3}\\right)}\\right)."
},
{
"math_id": 44,
"text": "z_{i \\rightharpoondown j}"
},
{
"math_id": 45,
"text": "z_i, \\dots, z_j"
},
{
"math_id": 46,
"text": "z \\in \\mathcal{X}^n"
},
{
"math_id": 47,
"text": "1 \\le i \\le j \\le n"
},
{
"math_id": 48,
"text": "f(X_{1 \\rightharpoondown (i-1)}, y, x_{(i+1) \\rightharpoondown n}) := f(X_1, \\ldots, X_{i-1}, y, x_{i+1}, \\ldots, x_n)."
},
{
"math_id": 49,
"text": "x_1', x_2', \\ldots, x_n'"
},
{
"math_id": 50,
"text": "x_1, x_2, \\ldots, x_n"
},
{
"math_id": 51,
"text": "\n\\begin{align}\n&|f(x_{1 \\rightharpoondown n}) - f(x'_{1 \\rightharpoondown n})| \\\\[6pt]\n\\leq {} & |f(x_{1 \\rightharpoondown\\, n}) - f(x'_{1 \\rightharpoondown (n-1)}, x_n)| + c_n\\\\\n\\leq {} & |f(x_{1 \\rightharpoondown n}) - f(x'_{1 \\rightharpoondown (n-2)}, x_{(n-1) \\rightharpoondown n})| + c_{n-1} + c_n\\\\\n\\leq {} & \\ldots \\\\\n\\leq {} & \\sum_{i=1}^n c_i ,\n\\end{align}\n"
},
{
"math_id": 52,
"text": "\\{Z_i\\}"
},
{
"math_id": 53,
"text": "Z_i"
},
{
"math_id": 54,
"text": "X_1, \\ldots, X_i"
},
{
"math_id": 55,
"text": "Z_i:=\\mathbb{E}[f(X_{1 \\rightharpoondown n}) \\mid X_{1 \\rightharpoondown i} ]"
},
{
"math_id": 56,
"text": "i\\geq 1"
},
{
"math_id": 57,
"text": "Z_0: = \\mathbb{E}[f(X_{1 \\rightharpoondown n})]"
},
{
"math_id": 58,
"text": "Z_n = f(X_{1 \\rightharpoondown n})"
},
{
"math_id": 59,
"text": "\n\\begin{align}\nU_i &:= \\sup_{x \\in \\mathcal{X}_i} \\mathbb{E}[f(X_{1 \\rightharpoondown (i-1)}, x, X_{(i+1) \\rightharpoondown n}) \\mid X_{1 \\rightharpoondown (i-1)}, X_i = x] - \\mathbb[f(X_{1 \\rightharpoondown (i-1)}, X_{i\\rightharpoondown n}) \\mid X_{1 \\rightharpoondown (i-1)}], \\\\\nL_i &:= \\inf_{x \\in \\mathcal{X}_i} \\mathbb{E}[f(X_{1 \\rightharpoondown (i-1)}, x, X_{(i+1) \\rightharpoondown n}) \\mid X_{1 \\rightharpoondown (i-1)}, X_i = x] - \\mathbb[f(X_{1 \\rightharpoondown (i-1)}, X_{i\\rightharpoondown n}) \\mid X_{1 \\rightharpoondown (i-1)}]. \\\\\n\\end{align}\n"
},
{
"math_id": 60,
"text": "X_i, \\ldots, X_n"
},
{
"math_id": 61,
"text": "X_i = x"
},
{
"math_id": 62,
"text": "\n\\begin{align}\nU_i &= \\sup_{x \\in \\mathcal{X}_i} \\mathbb{E}[f(X_{1 \\rightharpoondown (i-1)}, x, X_{(i+1) \\rightharpoondown n}) - f(X_{1 \\rightharpoondown (i-1)}, X_{i\\rightharpoondown n}) \\mid X_{1 \\rightharpoondown (i-1)}], \\\\\nL_i &= \\inf_{x \\in \\mathcal{X}_i} \\mathbb{E}[f(X_{1 \\rightharpoondown (i-1)}, x, X_{(i+1) \\rightharpoondown n}) - f(X_{1 \\rightharpoondown (i-1)}, X_{i\\rightharpoondown n}) \\mid X_{1 \\rightharpoondown (i-1)}]. \\\\\n\\end{align}\n"
},
{
"math_id": 63,
"text": "L_i \\leq Z_i - Z_{i-1} \\leq U_i"
},
{
"math_id": 64,
"text": "\n\\begin{align}\nU_i - L_i &= \\sup_{u\\in \\mathcal{X}_i, \\ell \\in \\mathcal{X}_i}\n\\mathbb{E}[f(X_{1 \\rightharpoondown (i-1)}, u, X_{(i+1) \\rightharpoondown n}) \\mid X_{1 \\rightharpoondown (i-1)}]\n-\\mathbb{E}[f(X_{1 \\rightharpoondown (i-1)}, \\ell, X_{(i+1) \\rightharpoondown n}) \\mid X_{1 \\rightharpoondown (i-1)}] \\\\[6pt]\n&=\\sup_{u\\in \\mathcal{X}_i, \\ell \\in \\mathcal{X}_i}\n\\mathbb{E}[f(X_{1 \\rightharpoondown (i-1)}, u, X_{(i+1) \\rightharpoondown n}) - f(X_{1 \\rightharpoondown (i-1)}, l, X_{(i+1) \\rightharpoondown n}) \\mid X_{1 \\rightharpoondown (i-1)}] \\\\\n&\\leq \\sup_{x_u\\in \\mathcal{X}_i, x_l \\in \\mathcal{X}_i}\n\\mathbb{E}[c_i \\mid X_{1 \\rightharpoondown (i-1)}] \\\\[6pt]\n&\\leq c_i\n\\end{align}\n"
},
{
"math_id": 65,
"text": "\\left\\{Z_i\\right\\}"
},
{
"math_id": 66,
"text": "\n\\text{P}(f(X_1, \\ldots, X_n) - \\mathbb{E}[f(X_1, \\ldots, X_n) ] \\geq \\varepsilon )\n= \\operatorname{P}(Z_n - Z_0 \\geq \\varepsilon)\n\\leq \\exp \\left(-\\frac{2\\varepsilon^2}{\\sum_{i=1}^n c_i^2}\\right).\n"
},
{
"math_id": 67,
"text": "\\left\\{-Z_i\\right\\}"
},
{
"math_id": 68,
"text": "\\square"
}
] | https://en.wikipedia.org/wiki?curid=9181701 |
918414 | Spectrum of a C*-algebra | Mathematical concept
In mathematics, the spectrum of a C*-algebra or dual of a C*-algebra "A", denoted "Â", is the set of unitary equivalence classes of irreducible *-representations of "A". A *-representation π of "A" on a Hilbert space "H" is irreducible if, and only if, there is no closed subspace "K" different from "H" and {0} which is invariant under all operators π("x") with "x" ∈ "A". We implicitly assume that irreducible representation means "non-null" irreducible representation, thus excluding trivial (i.e. identically 0) representations on one-dimensional spaces. As explained below, the spectrum "Â" is also naturally a topological space; this is similar to the notion of the spectrum of a ring.
One of the most important applications of this concept is to provide a notion of dual object for any locally compact group. This dual object is suitable for formulating a Fourier transform and a Plancherel theorem for unimodular separable locally compact groups of type I and a decomposition theorem for arbitrary representations of separable locally compact groups of type I. The resulting duality theory for locally compact groups is however much weaker than the Tannaka–Krein duality theory for compact topological groups or Pontryagin duality for locally compact "abelian" groups, both of which are complete invariants. That the dual is not a complete invariant is easily seen as the dual of any finite-dimensional full matrix algebra M"n"(C) consists of a single point.
Primitive spectrum.
The topology of "Â" can be defined in several equivalent ways. We first define it in terms of the primitive spectrum .
The primitive spectrum of "A" is the set of primitive ideals Prim("A") of "A", where a primitive ideal is the kernel of a non-zero irreducible *-representation. The set of primitive ideals is a topological space with the hull-kernel topology (or Jacobson topology). This is defined as follows: If "X" is a set of primitive ideals, its hull-kernel closure is
formula_0
Hull-kernel closure is easily shown to be an idempotent operation, that is
formula_1
and it can be shown to satisfy the Kuratowski closure axioms. As a consequence, it can be shown that there is a unique topology τ on Prim("A") such that the closure of a set "X" with respect to τ is identical to the hull-kernel closure of "X".
Since unitarily equivalent representations have the same kernel, the map π ↦ ker(π) factors through a surjective map
formula_2
We use the map "k" to define the topology on "Â" as follows:
Definition. The open sets of "Â" are inverse images "k"−1("U") of open subsets "U" of Prim("A"). This is indeed a topology.
The hull-kernel topology is an analogue for non-commutative rings of the Zariski topology for commutative rings.
The topology on "Â" induced from the hull-kernel topology has other characterizations in terms of states of "A".
Examples.
Commutative C*-algebras.
The spectrum of a commutative C*-algebra "A" coincides with the Gelfand dual of "A" (not to be confused with the dual "A"' of the Banach space "A"). In particular, suppose "X" is a compact Hausdorff space. Then there is a natural homeomorphism
formula_3
This mapping is defined by
formula_4
I("x") is a closed maximal ideal in C("X") so is in fact primitive. For details of the proof, see the Dixmier reference. For a commutative C*-algebra,
formula_5
The C*-algebra of bounded operators.
Let "H" be a separable infinite-dimensional Hilbert space. "L"("H") has two norm-closed *-ideals: "I"0 = {0} and the ideal "K" = "K"("H") of compact operators. Thus as a set, Prim("L"("H")) = {"I"0, "K"}. Now
Thus Prim("L"("H")) is a non-Hausdorff space.
The spectrum of "L"("H") on the other hand is much larger. There are many inequivalent irreducible representations with kernel "K"("H") or with kernel {0}.
Finite-dimensional C*-algebras.
Suppose "A" is a finite-dimensional C*-algebra. It is known "A" is isomorphic to a finite direct sum of full matrix algebras:
formula_6
where min("A") are the minimal central projections of "A". The spectrum of "A" is canonically isomorphic to min("A") with the discrete topology. For finite-dimensional C*-algebras, we also have the isomorphism
formula_5
Other characterizations of the spectrum.
The hull-kernel topology is easy to describe abstractly, but in practice for C*-algebras associated to locally compact topological groups, other characterizations of the topology on the spectrum in terms of positive definite functions are desirable.
In fact, the topology on "Â" is intimately connected with the concept of weak containment of representations as is shown by the following:
Theorem. Let "S" be a subset of "Â". Then the following are equivalent for an irreducible representation π;
# The equivalence class of π in "Â" is in the closure of "S"
# Every state associated to π, that is one of the form
formula_7
with ||ξ|| = 1, is the weak limit of states associated to representations in "S".
The second condition means exactly that π is weakly contained in "S".
The GNS construction is a recipe for associating states of a C*-algebra "A" to representations of "A". By one of the basic theorems associated to the GNS construction, a state "f" is pure if and only if the associated representation π"f" is irreducible. Moreover, the mapping κ : PureState("A") → "Â" defined by "f" ↦ π"f" is a surjective map.
From the previous theorem one can easily prove the following;
Theorem The mapping
formula_8
given by the GNS construction is continuous and open.
The space Irr"n"("A").
There is yet another characterization of the topology on "Â" which arises by considering the space of representations as a topological space with an appropriate pointwise convergence topology. More precisely, let "n" be a cardinal number and let "Hn" be the canonical Hilbert space of dimension "n".
Irr"n"("A") is the space of irreducible *-representations of "A" on "Hn" with the point-weak topology. In terms of convergence of nets, this topology is defined by π"i" → π; if and only if
formula_9
It turns out that this topology on Irr"n"("A") is the same as the point-strong topology, i.e. π"i" → π if and only if
formula_10
Theorem. Let "Ân" be the subset of "Â" consisting of equivalence classes of representations whose underlying Hilbert space has dimension "n". The canonical map Irr"n"("A") → "Ân" is continuous and open. In particular, "Ân" can be regarded as the quotient topological space of Irr"n"("A") under unitary equivalence.
Remark. The piecing together of the various "Ân" can be quite complicated.
Mackey–Borel structure.
"Â" is a topological space and thus can also be regarded as a Borel space. A famous conjecture of G. Mackey proposed that a "separable" locally compact group is of type I if and only if the Borel space is standard, i.e. is isomorphic (in the category of Borel spaces) to the underlying Borel space of a complete separable metric space. Mackey called Borel spaces with this property smooth. This conjecture was proved by James Glimm for separable C*-algebras in the 1961 paper listed in the references below.
Definition. A non-degenerate *-representation π of a separable C*-algebra "A" is a factor representation if and only if the center of the von Neumann algebra generated by π("A") is one-dimensional. A C*-algebra "A" is of type I if and only if any separable factor representation of "A" is a finite or countable multiple of an irreducible one.
Examples of separable locally compact groups "G" such that C*("G") is of type I are connected (real) nilpotent Lie groups and connected real semi-simple Lie groups. Thus the Heisenberg groups are all of type I. Compact and abelian groups are also of type I.
Theorem. If "A" is separable, "Â" is smooth if and only if "A" is of type I.
The result implies a far-reaching generalization of the structure of representations of separable type I C*-algebras and correspondingly of separable locally compact groups of type I.
Algebraic primitive spectra.
Since a C*-algebra "A" is a ring, we can also consider the set of primitive ideals of "A", where "A" is regarded algebraically. For a ring an ideal is primitive if and only if it is the annihilator of a simple module. It turns out that for a C*-algebra "A", an ideal is algebraically primitive if and only if it is primitive in the sense defined above.
Theorem. Let "A" be a C*-algebra. Any algebraically irreducible representation of "A" on a complex vector space is algebraically equivalent to a topologically irreducible *-representation on a Hilbert space. Topologically irreducible *-representations on a Hilbert space are algebraically isomorphic if and only if they are unitarily equivalent.
This is the Corollary of Theorem 2.9.5 of the Dixmier reference.
If "G" is a locally compact group, the topology on dual space of the group C*-algebra C*("G") of "G" is called the Fell topology, named after J. M. G. Fell.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\overline{X} = \\left \\{\\rho \\in \\operatorname{Prim}(A): \\rho \\supseteq \\bigcap_{\\pi \\in X} \\pi \\right \\}. "
},
{
"math_id": 1,
"text": " \\overline{\\overline{X}} = \\overline{X},"
},
{
"math_id": 2,
"text": " \\operatorname{k}: \\hat{A} \\to \\operatorname{Prim}(A). "
},
{
"math_id": 3,
"text": " \\operatorname{I}: X \\cong \\operatorname{Prim}( \\operatorname{C}(X))."
},
{
"math_id": 4,
"text": " \\operatorname{I}(x) = \\{f \\in \\operatorname{C}(X): f(x) = 0 \\}."
},
{
"math_id": 5,
"text": " \\hat{A} \\cong \\operatorname{Prim}(A)."
},
{
"math_id": 6,
"text": " A \\cong \\bigoplus_{e \\in \\operatorname{min}(A)} Ae, "
},
{
"math_id": 7,
"text": " f_\\xi(x) = \\langle \\xi \\mid \\pi(x) \\xi \\rangle "
},
{
"math_id": 8,
"text": " \\kappa: \\operatorname{PureState}(A) \\to \\hat{A} "
},
{
"math_id": 9,
"text": "\\langle \\pi_i(x) \\xi \\mid \\eta \\rangle \\to \\langle \\pi(x) \\xi \\mid \\eta \\rangle \\quad \\forall \\xi, \\eta \\in H_n \\ x \\in A. "
},
{
"math_id": 10,
"text": " \\pi_i(x) \\xi \\to \\pi(x) \\xi \\quad \\mbox{ normwise } \\forall \\xi \\in H_n \\ x \\in A. "
}
] | https://en.wikipedia.org/wiki?curid=918414 |
918466 | Laguerre form | In mathematics, the Laguerre form is generally given as a third degree tensor-valued form, that can be written as,
formula_0. | [
{
"math_id": 0,
"text": "\\mathfrak{L} = (w^{1})^{2} D a_{11} + 2 w^{1}w^{2} D a_{12} + (w^{2})^{2} D a_{22}"
}
] | https://en.wikipedia.org/wiki?curid=918466 |
9185674 | Alveolar gas equation | Formula for the partial pressure of alveolar oxygen
The alveolar gas equation is the method for calculating partial pressure of alveolar oxygen (). The equation is used in assessing if the lungs are properly transferring oxygen into the blood. The alveolar air equation is not widely used in clinical medicine, probably because of the complicated appearance of its classic forms.
The partial pressure of oxygen () in the pulmonary alveoli is required to calculate both the alveolar-arterial gradient of oxygen and the amount of right-to-left cardiac shunt, which are both clinically useful quantities. However, it is not practical to take a sample of gas from the alveoli in order to directly measure the partial pressure of oxygen. The alveolar gas equation allows the calculation of the alveolar partial pressure of oxygen from data that is practically measurable. It was first characterized in 1946.
Assumptions.
The equation relies on the following assumptions:
Equation.
formula_0
If the fraction of inspired oxygen is small, or more specifically if formula_1, then the equation can be simplified to:
formula_2
where pAO2 is the alveolar partial pressure of oxygen, FIO2 is the fraction of inspired oxygen, PATM is the prevailing atmospheric (barometric) pressure, pH2O is the saturated vapour pressure of water at body temperature, paCO2 is the arterial partial pressure of carbon dioxide, and RER is the respiratory exchange ratio (respiratory quotient).
Sample values are given for air at sea level at 37 °C.
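A minimal calculator for the simplified equation is sketched below; the default values (760 mmHg barometric pressure, 47 mmHg water vapour pressure at 37 °C, arterial CO2 of 40 mmHg, respiratory exchange ratio 0.8) are common textbook sea-level figures used here only for illustration:
```python
# Alveolar gas equation (simplified form), with illustrative sea-level defaults.
# All pressures in mmHg.
def alveolar_po2(fio2=0.21, p_atm=760.0, p_h2o=47.0, pa_co2=40.0, rer=0.8):
    return fio2 * (p_atm - p_h2o) - pa_co2 / rer

print(alveolar_po2())           # ~99.7 mmHg breathing room air at sea level
print(alveolar_po2(fio2=1.0))   # ~663 mmHg breathing 100% oxygen
```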
Other possible equations exist to calculate the alveolar air.
formula_3
Abbreviated alveolar air equation.
formula_4
, , and are the partial pressures of oxygen in alveolar, expired, and inspired gas, respectively, and VD/VT is the ratio of physiologic dead space over tidal volume.
Respiratory quotient (R).
formula_5
Physiologic dead space over tidal volume (VD/VT).
formula_6
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n p_A\\ce{O2} = F_I\\ce{O2}(P_\\ce{ATM} - p\\ce{H2O}) - \\frac{p_a\\ce{CO2}(1 - F_I\\ce{O2}(1 - \\ce{RER}))} \\ce{RER}\n"
},
{
"math_id": 1,
"text": "F_I\\ce{O2}(1-\\ce{RER}) \\ll 1"
},
{
"math_id": 2,
"text": "\n p_A\\ce{O2} \\approx F_I\\ce{O2}(P_\\ce{ATM} - p\\ce{H2O}) - \\frac{p_a\\ce{CO2}} \\ce{RER}\n"
},
{
"math_id": 3,
"text": "\\begin{align} \n p_A \\ce{O2} & = F_I \\ce{O2} \\left(PB - p\\ce{H2O}\\right) - p_A \\ce{CO2} \\left(F_I \\ce{O2} + \\frac{1 - F_I \\ce{O2}}{R}\\right) \\\\[4pt]\n & = p_I \\ce{O2} - p_A \\ce{CO2} \\left(F_I \\ce{O2} + \\frac{1 - F_I \\ce{O2}}{R}\\right) \\\\[4pt]\n & = p_I \\ce{O2} - \\frac{V_T}{V_T - V_D}\\left(p_I \\ce{O2} - p_E \\ce{O2}\\right) \\\\[4pt]\n & = \\frac{p_E \\ce{O2} - p_I \\ce{O2} \\left(\\frac{V_D}{V_T}\\right)}{1 - \\frac{V_D}{V_T}}\n\\end{align}"
},
{
"math_id": 4,
"text": "\n p_A \\ce{O2} = \\frac{p_E \\ce{O2} - p_i \\ce{O2} \\frac{V_D}{V_T}}{1- \\frac{V_D}{V_T}}\n"
},
{
"math_id": 5,
"text": "\n R = \\frac{p_E \\ce{CO2} (1 - F_I \\ce{O2})}{p_i \\ce{O2} - p_E \\ce{O2} - (p_E \\ce{CO2} * F_i \\ce{O2})}\n"
},
{
"math_id": 6,
"text": "\n \\frac{V_D}{V_T} = \\frac{p_A \\ce{CO2} - p_E \\ce{CO2} }{p_A\\ce{CO2} }\n"
}
] | https://en.wikipedia.org/wiki?curid=9185674 |
9185794 | Fixation (population genetics) | Change in a gene pool
In population genetics, fixation is the change in a gene pool from a situation where there exists at least two variants of a particular gene (allele) in a given population to a situation where only one of the alleles remains. That is, the allele becomes fixed.
In the absence of mutation or heterozygote advantage, any allele must eventually either be lost completely from the population, or fixed, i.e. permanently established at 100% frequency in the population. Whether a gene will ultimately be lost or fixed is dependent on selection coefficients and chance fluctuations in allelic proportions. Fixation can refer to a gene in general or particular nucleotide position in the DNA chain (locus).
In the process of substitution, a previously non-existent allele arises by mutation and undergoes fixation by spreading through the population by random genetic drift or positive selection. Once the frequency of the allele is at 100%, i.e. being the only gene variant present in any member, it is said to be "fixed" in the population.
Similarly, genetic differences between taxa are said to have been fixed in each species.
History.
The earliest mention of gene fixation in published works was found in Motoo Kimura's 1962 paper "On Probability of Fixation of Mutant Genes in a Population". In the paper, Kimura uses mathematical techniques to determine the probability of fixation of mutant genes in a population. He showed that the probability of fixation depends on the initial frequency of the allele and the mean and variance of the gene frequency change per generation.
Probability.
Neutral alleles.
Under conditions of genetic drift alone, every finite set of genes or alleles has a "coalescent point" at which all descendants converge to a single ancestor ("i.e." they 'coalesce'). This fact can be used to derive the rate of gene fixation of a neutral allele (that is, one not under any form of selection) for a population of varying size (provided that it is finite and nonzero). Because the effect of natural selection is stipulated to be negligible, the probability at any given time that an allele will ultimately become fixed at its locus is simply its frequency formula_0 in the population at that time. For example, if a population includes allele "A" with frequency equal to 20%, and allele "a" with frequency equal to 80%, there is an 80% chance that after an infinite number of generations "a" will be fixed at the locus (assuming genetic drift is the only operating evolutionary force).
For a diploid population of size "N" and neutral mutation rate formula_1, the initial frequency of a novel mutation is simply 1/(2"N"), and the number of new mutations per generation is formula_2. Since the fixation rate is the rate of novel neutral mutation multiplied by their probability of fixation, the overall fixation rate is formula_3. Thus, the rate of fixation for a mutation not subject to selection is simply the rate of introduction of such mutations.
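A small Wright–Fisher sketch (the population size, replicate count and seed below are arbitrary, and NumPy is assumed) illustrates that, under drift alone, the fraction of replicate populations in which an allele ultimately fixes approaches its initial frequency:
```python
import numpy as np

rng = np.random.default_rng(2)

# Wright-Fisher drift for a neutral allele in a diploid population of size N:
# each generation, 2N gene copies are resampled binomially from the current
# allele frequency. The fraction of replicates in which the allele fixes
# should approach its initial frequency p0.
def fixation_fraction(p0=0.2, N=50, replicates=5_000):
    fixed = 0
    for _ in range(replicates):
        count = int(round(p0 * 2 * N))
        while 0 < count < 2 * N:
            count = rng.binomial(2 * N, count / (2 * N))
        fixed += (count == 2 * N)
    return fixed / replicates

print(fixation_fraction(p0=0.2))   # expected ~0.2
print(fixation_fraction(p0=0.8))   # expected ~0.8
```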
Non-neutral alleles.
For fixed population sizes, the probability of fixation for a new allele with selective advantage "s" can be approximated using the theory of branching processes. A population with non-overlapping generations n = 0, 1, 2, 3, ... , and with formula_4 genes (or "individuals") at time n forms a Markov chain under the following assumptions. The introduction of an individual possessing an allele with a selective advantage corresponds to formula_5. The number of offspring of any one individual must follow a fixed distribution and is independently determined. In this framework the generating functions formula_6 for each formula_4 satisfy the recursion relation formula_7 and can be used to compute the probabilities formula_8 of no descendants at time n. It can be shown that formula_9, and furthermore, that the formula_10 converge to a specific value formula_11, which is the probability that the individual will have no descendants. The probability of fixation is then formula_12 since the indefinite survival of the beneficial allele will permit its increase in frequency to a point where selective forces will ensure fixation.
Weakly deleterious mutations can fix in smaller populations through chance, and the probability of fixation will depend on rates of drift (~formula_13) and selection (~formula_14), where formula_15 is the effective population size. The ratio formula_16 determines whether selection or drift dominates, and as long as this ratio is not too negative, there will be an appreciable chance that a mildly deleterious allele will fix. For example, in a diploid population of size formula_15, a deleterious allele with selection coefficient formula_17 has a probability fixation equal to formula_18. This estimate can be obtained directly from Kimura's 1962 work. Deleterious alleles with selection coefficients formula_17 satisfying formula_19 are effectively neutral, and consequently have a probability of fixation approximately equal to formula_20.
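These expressions are special cases of Kimura's diffusion approximation for the fixation probability of an allele at initial frequency "p"; a short sketch (parameter values chosen only for illustration) follows:
```python
import math

# Kimura's diffusion approximation for the fixation probability of an allele
# at initial frequency p with selection coefficient s in a diploid population
# of effective size Ne (reduces to p for a neutral allele).
def fixation_probability(p, Ne, s):
    if abs(s) < 1e-12:              # neutral limit
        return p
    return (1.0 - math.exp(-4.0 * Ne * s * p)) / (1.0 - math.exp(-4.0 * Ne * s))

Ne = 1_000
p_new = 1.0 / (2 * Ne)              # a single new mutant copy
print(fixation_probability(p_new, Ne, 0.0))     # neutral: 1/(2Ne) = 0.0005
print(fixation_probability(p_new, Ne, 0.01))    # beneficial: ~2s ~ 0.02
print(fixation_probability(p_new, Ne, -0.001))  # mildly deleterious: small but nonzero
```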
Effect of growing/shrinking populations.
Probability of fixation is also influenced by population size changes. For growing populations, selection coefficients are more effective. This means that beneficial alleles are more likely to become fixed, whereas deleterious alleles are more likely to be lost. In populations that are shrinking in size, selection coefficients are not as effective. Thus, there is a higher probability of beneficial alleles being lost and deleterious alleles being fixed. This is because if a beneficial mutation is rare, it can be lost purely due to chance of that individual not having offspring, no matter the selection coefficient. In growing populations, the average individual has a higher expected number of offspring, whereas in shrinking populations the average individual has a lower number of expected offspring. Thus, in growing populations it is more likely that the beneficial allele will be passed on to more individuals in the next generation. This continues until the allele flourishes in the population, and is eventually fixed. However, in a shrinking population it is more likely that the allele may not be passed on, simply because the parents produce no offspring. This would cause even a beneficial mutation to be lost.
Time.
Additionally, research has been done into the average time it takes for a neutral mutation to become fixed. Kimura and Ohta (1969) showed that a new mutation that eventually fixes will spend an average of 4Ne generations as a polymorphism in the population, where Ne is the effective population size, the number of individuals in an idealised population under genetic drift required to produce an equivalent amount of genetic diversity. Usually the population statistic used to define effective population size is heterozygosity, but others can be used.
Fixation rates can easily be modeled as well to see how long it takes for a gene to become fixed with varying population sizes and generations. For example, The Biology Project Genetic Drift Simulation allows to model genetic drift and see how quickly the gene for worm color goes to fixation in terms of generations for different population sizes.
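As a sketch of such modelling (population size, replicate count and seed are arbitrary, NumPy assumed), the mean conditional fixation time of a single new neutral mutant under Wright–Fisher drift can be compared with the 4Ne prediction:
```python
import numpy as np

rng = np.random.default_rng(3)

# Mean time to fixation of a new neutral mutation under Wright-Fisher drift,
# conditional on fixation; Kimura and Ohta's prediction is about 4*Ne generations.
def mean_fixation_time(N=50, replicates=40_000):
    times = []
    for _ in range(replicates):
        count, gens = 1, 0                # one new mutant copy among 2N
        while 0 < count < 2 * N:
            count = rng.binomial(2 * N, count / (2 * N))
            gens += 1
        if count == 2 * N:                # keep only replicates that fixed
            times.append(gens)
    return np.mean(times), len(times)

avg, n_fixed = mean_fixation_time()
print(f"mean conditional fixation time ~ {avg:.0f} generations "
      f"(prediction 4*Ne = 200), from {n_fixed} fixations")
```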
Additionally, fixation rates can be modeled using coalescent trees. A coalescent tree traces the descent of alleles of a gene in a population. It aims to trace back to a single ancestral copy called the most recent common ancestor.
Examples in research.
In 1969, Schwartz at Indiana University was able to artificially induce gene fixation in maize by subjecting samples to suboptimal conditions. Schwartz located a mutation in a gene called Adh1, which when homozygous causes maize to be unable to produce alcohol dehydrogenase. Schwartz then subjected seeds, with both normal alcohol dehydrogenase activity and no activity, to flooding conditions and observed whether the seeds were able to germinate or not. He found that when subjected to flooding, only seeds with alcohol dehydrogenase activity germinated. This ultimately caused gene fixation of the Adh1 wild-type allele. The Adh1 mutation was lost in the experimental population.
In 2014, Lee, Langley, and Begun conducted another research study related to gene fixation. They focused on "Drosophila melanogaster" population data and the effects of genetic hitchhiking caused by selective sweeps. Genetic hitchhiking occurs when one allele is strongly selected for and driven to fixation. This causes the surrounding areas to also be driven to fixation, even though they are not being selected for. By looking at the "Drosophila melanogaster" population data, Lee et al. found a reduced amount of heterogeneity within 25 base pairs of focal substitutions. They attribute this to small-scale hitchhiking effects. They also found that neighboring fixations that changed amino acid polarities while maintaining the overall polarity of a protein were under stronger selection pressures. Additionally, they found that substitutions in slowly evolving genes were associated with stronger genetic hitchhiking effects.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "\\mu"
},
{
"math_id": 2,
"text": "2N\\mu"
},
{
"math_id": 3,
"text": "2N\\mu \\times \\frac{1}{2N} = \\mu"
},
{
"math_id": 4,
"text": "X_n"
},
{
"math_id": 5,
"text": "X_0 = 1"
},
{
"math_id": 6,
"text": "p_n (x)"
},
{
"math_id": 7,
"text": "p_n (x) = p_1 ( p_{n-1}(x) )"
},
{
"math_id": 8,
"text": "\\pi_n = P(X_n = 0)"
},
{
"math_id": 9,
"text": "\\pi_n = p_1 ( \\pi_{n-1} )"
},
{
"math_id": 10,
"text": "\\pi_n"
},
{
"math_id": 11,
"text": "\\pi"
},
{
"math_id": 12,
"text": "1-\\pi \\approx 2 s / \\sigma^2"
},
{
"math_id": 13,
"text": "1/N_e"
},
{
"math_id": 14,
"text": "s"
},
{
"math_id": 15,
"text": "N_e"
},
{
"math_id": 16,
"text": "N_e s"
},
{
"math_id": 17,
"text": "-s"
},
{
"math_id": 18,
"text": "(1-e^{-2 s}) /(1-e^{-4 N_e s})"
},
{
"math_id": 19,
"text": "2 N_e s \\ll 1"
},
{
"math_id": 20,
"text": "1/2N_e"
}
] | https://en.wikipedia.org/wiki?curid=9185794 |
918609 | Natural numbers object | In category theory, a natural numbers object (NNO) is an object endowed with a recursive structure similar to natural numbers. More precisely, in a category E with a terminal object 1, an NNO "N" is given by:
such that for any object "A" of E, global element "q" : 1 → "A", and arrow "f" : "A" → "A", there exists a unique arrow "u" : "N" → "A" such that:
In other words, the triangle and square in the following diagram commute.
The pair ("q", "f") is sometimes called the "recursion data" for "u", given in the form of a recursive definition:
The above definition is the universal property of NNOs, meaning they are defined up to canonical isomorphism. If the arrow "u" as defined above merely has to exist, that is, uniqueness is not required, then "N" is called a "weak" NNO.
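In the category of sets, the natural numbers with zero and successor form an NNO, and the unique mediating arrow "u" determined by the recursion data ("q", "f") is ordinary primitive recursion; a minimal sketch (function names chosen only for illustration) follows:
```python
# In Set, the naturals with zero and successor form an NNO: given recursion
# data (q, f) on a set A, the unique mediating map u : N -> A satisfies
# u(0) = q and u(successor(n)) = f(u(n)).
def make_u(q, f):
    def u(n):
        value = q
        for _ in range(n):      # apply f exactly n times, starting from q
            value = f(value)
        return value
    return u

# Example: A = natural numbers, q = 1, f = doubling gives u(n) = 2**n.
u = make_u(1, lambda a: 2 * a)
print([u(n) for n in range(6)])   # [1, 2, 4, 8, 16, 32]
```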
Equivalent definitions.
NNOs in cartesian closed categories (CCCs) or topoi are sometimes defined in the following equivalent way (due to Lawvere): for every pair of arrows "g" : "A" → "B" and "f" : "B" → "B", there is a unique "h" : "N" × "A" → "B" such that the squares in the following diagram commute.
This same construction defines weak NNOs in cartesian categories that are not cartesian closed.
In a category with a terminal object 1 and binary coproducts (denoted by +), an NNO can be defined as the initial algebra of the endofunctor that acts on objects by "X" ↦ 1 + "X" and on arrows by "f" ↦ id1 + "f".
formula_0
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1 \\xrightarrow{~ \\quad q \\quad ~} A \\xrightarrow{~ \\quad f \\quad ~} A"
},
{
"math_id": 1,
"text": " \\mathcal{E} "
},
{
"math_id": 2,
"text": " \\top "
},
{
"math_id": 3,
"text": " \\mathcal{E} \\simeq \\mathbf{Shv}(\\mathfrak{C},J) "
},
{
"math_id": 4,
"text": " J "
},
{
"math_id": 5,
"text": " \\mathfrak{C} "
},
{
"math_id": 6,
"text": " \\Gamma_{\\mathbb{N}} "
},
{
"math_id": 7,
"text": " \\mathbb{N}_{\\mathcal{E}} \\cong \\left(\\Gamma_{\\mathbb{N}}\\right)^{++} \\cong \\coprod_{n \\in \\mathbb{N}} \\top. "
}
] | https://en.wikipedia.org/wiki?curid=918609 |
9190726 | Néron model | In algebraic geometry, the Néron model (or Néron minimal model, or minimal model)
for an abelian variety "AK" defined over the field of fractions "K" of a Dedekind domain "R" is the "push-forward" of "AK" from Spec("K") to Spec("R"), in other words the "best possible" group scheme "AR" defined over "R" corresponding to "AK".
They were introduced by André Néron (1961, 1964) for abelian varieties over the quotient field of a Dedekind domain "R" with perfect residue fields, and extended this construction to semiabelian varieties over all Dedekind domains.
Definition.
Suppose that "R" is a Dedekind domain with field of fractions "K", and suppose that "AK" is a smooth separated scheme over "K" (such as an abelian variety). Then a Néron model of "AK" is defined to be a smooth separated scheme "AR" over "R" with fiber "AK" that is universal in the following sense.
If "X" is a smooth separated scheme over "R" then any "K"-morphism from "X""K" to "AK" can be extended to a unique "R"-morphism from "X" to "AR" (Néron mapping property).
In particular, the canonical map formula_0 is an isomorphism. If a Néron model exists then it is unique up to unique isomorphism.
In terms of sheaves, any scheme "A" over Spec("K") represents a sheaf on the category of schemes smooth over Spec("K") with the smooth Grothendieck topology, and this has a pushforward by the injection map from Spec("K") to Spec("R"), which is a sheaf over Spec("R"). If this pushforward is representable by a scheme, then this scheme is the Néron model of "A".
In general the scheme "AK" need not have any Néron model.
For abelian varieties "AK" Néron models exist and are unique (up to unique isomorphism) and are commutative quasi-projective group schemes over "R". The fiber of a Néron model over a closed point of Spec("R") is a smooth commutative algebraic group, but need not be an abelian variety: for example, it may be disconnected or a torus. Néron models exist as well for certain commutative groups other than abelian varieties such as tori, but these are only locally of finite type. Néron models do not exist for the additive group.
The Néron model of an elliptic curve.
The Néron model of an elliptic curve "A""K" over "K" can be constructed as follows. First form the minimal model over "R" in the sense of algebraic (or arithmetic) surfaces. This is a regular proper surface over "R" but is not in general smooth over "R" or a group scheme over "R". Its subscheme of smooth points over "R" is the Néron model, which is a smooth group scheme over "R" but not necessarily proper over "R". The fibers in general may have several irreducible components, and to form the Néron model one discards all multiple components, all points where two components intersect, and all singular points of the components.
Tate's algorithm calculates the special fiber of the Néron model of an elliptic curve, or more precisely the fibers of the minimal surface containing the Néron model. | [
{
"math_id": 0,
"text": "A_R(R)\\to A_K(K)"
}
] | https://en.wikipedia.org/wiki?curid=9190726 |
9193185 | Countryman line | In mathematics, a Countryman line (named after Roger Simmons Countryman Jr.) is an uncountable linear ordering whose square is the union of countably many chains. The existence of Countryman lines was first proven by Shelah. Shelah also conjectured that, assuming PFA, every Aronszajn line contains a Countryman line. This conjecture, which remained open for three decades, was proven by Justin Moore. | [
{
"math_id": 0,
"text": "\\sigma"
}
] | https://en.wikipedia.org/wiki?curid=9193185 |
9194388 | DVB-T2 | Second revision of the DVB-T standard
DVB-T2 is an abbreviation for "Digital Video Broadcasting – Second Generation Terrestrial"; it is the extension of the television standard DVB-T, issued by the consortium DVB, devised for the broadcast transmission of digital terrestrial television. DVB has been standardized by ETSI.
This system transmits compressed digital audio, video, and other data in "physical layer pipes" (PLPs), using OFDM modulation with concatenated channel coding and interleaving. The higher offered bit rate, with respect to its predecessor DVB-T, makes it a system suited for carrying HDTV signals on the terrestrial TV channel (though many broadcasters still use plain DVB-T for this purpose). As of 2019, it was implemented in broadcasts in the United Kingdom (Freeview HD, eight channels across two multiplexes, plus an extra multiplex in Northern Ireland carrying three SD channels), Italy (Europa 7 HD, twelve channels), Finland (21 channels, five in HD), Germany (six HD (1080p50) channels, with 40 in planning), the Netherlands (Digitenne, 30 HD (1080p50) channels), Sweden (five channels), Thailand (41 SD, 9 HD channels), Flanders (18 SD channels), Serbia (eight channels), Ukraine (40 SD and HD channels in five nationwide multiplexes), Croatia (all national, local and pay-TV channels), Denmark (two pay-TV multiplexes with 20 channels), Romania (8 SD channels, 1 HD channel), and some other countries.
History.
Preliminary investigation.
In March 2006, DVB decided to study options for an upgraded DVB-T standard. In June 2006, a formal study group named TM-T2 (Technical Module on Next Generation DVB-T) was established by the DVB Group to develop an advanced modulation scheme that could be adopted by a second generation digital terrestrial television standard, to be named DVB-T2.
According to the commercial requirements and call for technologies issued in April 2007, the first phase of DVB-T2 would be devoted to provide optimum reception for stationary (fixed) and portable receivers (i.e., units which can be nomadic, but not fully mobile) using existing aerials, whereas a second and third phase would study methods to deliver higher payloads (with new aerials) and the mobile reception issue. The novel system should provide a minimum 30% increase in payload, under similar channel conditions already used for DVB-T.
The BBC, ITV, Channel 4 and Channel 5 agreed with the regulator Ofcom to convert one UK multiplex (B, or PSB3) to DVB-T2 to increase capacity for HDTV via DTT. They expected the first TV region to use the new standard would be Granada in November 2009 (with existing switched over regions being changed at the same time). It was expected that over time there would be enough DVB-T2 receivers sold to switch all DTT transmissions to DVB-T2, and H.264.
Ofcom published its final decision on 3 April 2008, for HDTV using DVB-T2 and H.264: BBC HD would have one HD slot after digital switchover (DSO) at Granada. ITV and C4 had, as expected, applied to Ofcom for the 2 additional HD slots available from 2009 to 2012.
Ofcom indicated that it found an unused channel covering 3.7 million households in London, which could be used to broadcast the DVB-T2 HD multiplex from 2010, i.e., before DSO in London. Ofcom indicated that they would look for more unused UHF channels in other parts of the UK, that can be used for the DVB-T2 HD multiplex from 2010 until DSO.
The DVB-T2 specification.
The DVB-T2 draft standard was ratified by the DVB Steering Board on 26 June 2008, and published on the DVB homepage as "DVB-T2 standard BlueBook". It was handed over to the European Telecommunications Standards Institute (ETSI) by DVB.ORG on 20 June 2008.
The ETSI process resulted in the DVB-T2 standard being adopted on 9 September 2009. The ETSI process had several phases, but the only changes were text clarifications. Since the DVB-T2 physical layer specification was complete, and there would be no further technical enhancements, receiver VLSI chip design started with confidence in stability of specification. A draft PSI/SI (program and system information) specification document was also agreed with the DVB-TM-GBS group.
Tests.
Prototype receivers were shown in September IBC 2008 and more recent version at the IBC 2009 in Amsterdam. A number of other manufacturers demonstrated DVB-T2 at IBC 2009 including Albis Technologies, Arqiva, DekTec, Enensys Technologies, Harris, Pace, Rohde & Schwarz, Tandberg, Thomson Broadcast and TeamCast. As of 2012, Appear TV also produce DVB-T2 receivers, DVB-T2 modulators and DVB-T2 gateways. Other manufacturers planning DVB-T2 equipment launches include Alitronika, CellMetric, Cisco, Digital TV Labs, Humax, NXP Semiconductors, Panasonic, ProTelevision Technologies, Screen Service, SIDSA, Sony, ST Microelectronics and T-VIPS. The first test from a real TV transmitter was performed by the "BBC Research & Development" in the last weeks of June 2008 using channel 53 from the Guildford transmitter, southwest of London: BBC had developed and built the modulator/demodulator prototype in parallel with the DVB-T2 standard being drafted. Other companies like ENKOM or IfN develop software (processor) based decoding.
NORDIG published a DVB-T2 receiver specification and performance requirement on 1 July 2009. In March 2009 the Digital TV Group (DTG), the industry association for digital TV in the UK, published the technical specification for high definition services on digital terrestrial television (Freeview) using the new DVB-T2 standard. The DTG's test house: DTG Testing are testing Freeview HD products against this specification.
Many test broadcasts using this standard are in progress in France, with a local gap filler near Rennes (CCETT).
DVB-T2 was tested in October 2010 in the Geneva region, using the Mont Salève repeater in the UHF band on channel 36. A mobile van measured BER, signal strength and reception quality, with special PCs used as spectrum analysers and constellation testers. The van travelled through the Canton of Geneva (Switzerland) and France (Annemasse, Pays de Gex). However, none of this was demonstrated at TELECOM 2011 at Palexpo.
The standard.
The following characteristics have been devised for the T2 standard:
System differences with DVB-T.
The following table reports a comparison of available modes in DVB-T and DVB-T2.
For instance, a UK MFN DVB-T profile (64-QAM, 8k mode, coding rate 2/3, guard interval 1/32) and a DVB-T2 equivalent (256-QAM, 32k, coding rate 3/5, guard interval 1/128) allows for an increase in bit rate from 24.13 Mbit/s to 35.4 Mbit/s (+46.5%). Another example, for an Italian SFN DVB-T profile (64-QAM, 8k, coding rate 2/3, guard interval 1/4) and a DVB-T2 equivalent (256-QAM, 32k, coding rate 3/5, guard interval 1/16), achieves an increase in bit rate from 19.91 Mbit/s to 33.3 Mbit/s (+67%).
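The DVB-T figures quoted above can be reproduced from the classical DVB-T payload formula for an 8 MHz channel; the sketch below does so and computes the relative gains, taking the DVB-T2 rates from the figures quoted above rather than recomputing them, since DVB-T2 capacity also depends on pilot pattern and frame overhead:
```python
# Useful DVB-T bit rate for an 8 MHz channel (elementary rate 6.75 Msymbol-cells/s,
# 188/204 transport-stream overhead), then the gain of the DVB-T2 figures quoted above.
def dvbt_bitrate(bits_per_symbol, code_rate, guard_interval):
    return 6.75e6 * (188 / 204) * bits_per_symbol * code_rate * (1 / (1 + guard_interval))

uk_t = dvbt_bitrate(6, 2 / 3, 1 / 32)    # 64-QAM, CR 2/3, GI 1/32 -> ~24.13 Mbit/s
it_t = dvbt_bitrate(6, 2 / 3, 1 / 4)     # 64-QAM, CR 2/3, GI 1/4  -> ~19.91 Mbit/s

uk_t2, it_t2 = 35.4e6, 33.3e6            # DVB-T2 figures quoted in the text
print(f"UK:    {uk_t/1e6:.2f} -> {uk_t2/1e6:.1f} Mbit/s (+{100*(uk_t2/uk_t - 1):.1f}%)")
print(f"Italy: {it_t/1e6:.2f} -> {it_t2/1e6:.1f} Mbit/s (+{100*(it_t2/it_t - 1):.1f}%)")
```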
Recommended maximum bit-rate configurations for 8 MHz bandwidth, 32K FFT, guard interval 1/128, pilot pattern 7:
Technical details.
The processing workflow runs through four main stages: input processing (mode and stream adaptation), bit-interleaved coding and modulation (outer BCH and inner LDPC coding, bit interleaving and constellation mapping), frame building, and OFDM generation (pilot insertion, IFFT, guard-interval insertion and preamble insertion).
Market adoption.
When the digital terrestrial HDTV service Freeview HD was launched in December 2009, it was the first DVB-T2 service intended for the general public. As of November 2010, DVB-T2 broadcasts were available in a couple of European countries.
The earliest introductions of T2 have usually been tied with a launch of high-definition television. There are however some countries where HDTV is broadcast using the old DVB-T standard with no immediate plans to switch those broadcasts to DVB-T2. Among countries using DVB-T for nationwide broadcasts of HDTV are France, Ireland, Italy, Norway, Denmark, Spain, and Taiwan. These are usually using MPEG4. Australia started broadcasting HD content over DVB-T with MPEG2, although in 2015, some Australian broadcasters switched to MPEG4.
Countries where DVB-T2 is in use include:
Countries/continents/regions where DVB-T2 is planned in use include:
Afghanistan.
In April 2015, "OQAAB" started DVB-T2 broadcasting in Kabul. As of 2021, the process is at a standstill after the Taliban's return to power, and the previous government never authorized more than test broadcasts. The infrastructure in six more provinces (Herat, Kandahar, Jalalabad, Mazar, Ghazni, Kunduz) had been built out, without transmitter installation.
Albania.
In July 2011, "DigitAlb" started DVB-T2 broadcasting in Durrës, Tirana on UHF with 29 channels (26 HD, 3 in SD).
Belgium.
In April 2013, Telenet started DVB-T2 broadcasting in Flanders. However, it was discontinued one year later, on 31 March 2014. At the end of 2017, TV Vlaanderen started offering DVB-T2 television using Norkring's network. The following centre frequencies are used in Flanders: 650 MHz (UHF ch. 43), 658 MHz (UHF ch. 44), 674 MHz (UHF ch. 46) and 682 MHz (UHF ch. 47).
Colombia.
In 2012, Colombia adopted DVB-T2 (using a bandwidth of 6 MHz) as the national standard for terrestrial television. This replaced DVB-T, the previously selected standard for digital TV, which was chosen after technical evaluation of several digital TV standards. The two standards coexisted until 2015 when DVB-T was turned off.
Digital TV has been deployed gradually across the country, starting with the four main cities, Bogotá, Medellín, Cali and Barranquilla followed by smaller cities such as Armenia, Bucaramanga, Cartagena, Cúcuta, Manizales, Pereira and Santa Marta. By 2014, most main cities had digital TV. Due to the country's topography as well as there being no sharing of masts between the public and private broadcasters, the coverage in rural areas is patchy. There has been talk of using DVB-S2 (satellite) to ensure 100% coverage: as of January 2024 this hasn't happened.
The first two transmissions were by the two private TV channels RCN TV and Caracol TV. RTVC (the national government TV broadcaster) started to broadcast using the standard in 2013.
The digital system is known in Colombia as TDT which means Televisión Digital Terrestre (Digital Terrestrial Television).
Croatia.
On 13 October 2011, the Croatian Post and Electronic Communications Agency granted license for MUX C and MUX E, both in DVB-T2 standard.
Also in October 2011, OiV – Transmitters & Communications started testing on UHF channel 53 (730.00 MHz) from Sljeme.
Two DVB-T2 multiplexes launched in late 2012 by pay TV platform EVO TV.
In addition, in September 2019 DVB-T2 was launched for free-to-air television, using the HEVC/H.265 codec and 720p or 1080p resolution. As of winter 2020, legacy DVB-T broadcasts have ceased. In that time, EVOtv has issued new set-top boxes supporting DVB-T2 with the HEVC/H.265 codec.
Czech Republic.
DVB-T2 was launched in March 2017, using the HEVC/H.265 video format. DVB-T was switched off in October 2020. In 2020, NASA TV was broadcast in 4K resolution as a test, demonstrating that the DVB-T2 system can carry 4K and that receivers can decode it.
Finland.
Finland, the first country in Europe to cease analog terrestrial TV and move to DVB-T, announced that DVB-T2 would be used exclusively from the end of March 2020, although no firm transition date had been set. Many FTA channels are broadcast both in SD via DVB-T and in HD via DVB-T2. All pay-TV channels moved to DVB-T2 in 2017. The DVB-T2 switchover will allow more channels to move to HD as well as freeing bandwidth for new SD channels over DVB-T2.
India.
Digital terrestrial television services providing mobile TV in 19 cities, e.g. Pitampura (Delhi) (578.00 MHz; UHF ch. 34), Mumbai (474.00 MHz and 522.00 MHz; UHF ch. 21 and ch. 27), Kolkata, Chennai, Guwahati, Patna, Ranchi, Cuttack, Lucknow, Jallandhar, Raipur, Indore, Aurangabad, Bhopal, Bangalore, Ahmedabad, Hyderabad, Trivandrum and Srinagar, were started on 25 February 2016. Mobile TV can be received using DVB-T2 dongles in OTG-enabled smartphones and tablets, and via Wi-Fi dongles, as well as on integrated digital TVs (iDTV).
Public and private transportation vehicles and public places are potential environments for mobile television. Currently DD National, DD National HD, DD News, DD Bharati, DD Sports, and DD Regional/DD Kisan are being relayed.
Indonesia.
The project to adopt DVB-T technology in Indonesia began in 2007, with full government support and the government acting as the project initiator. All television broadcasters were offered the chance to transform their analogue broadcasts into the new digital form; some were interested and started testing digital broadcasts, while others remained uninterested at the time.
During the DVB-T testing period, the Indonesian government (via its Ministry of Information & Communication Technology) wanted to switch to DVB-T2 technology, which provides better signal efficiency, capacity and error correction compared to DVB-T. The TV broadcasters still testing their DVB-T broadcasts agreed to join the DVB-T2 conversion programme offered by the government, since they saw significant benefits in switching to DVB-T2 (such as a higher data rate for HD content and better carrier-to-noise ratio management), even though it would introduce additional cost for those who had bought DVB-T equipment. The official switch from DVB-T to DVB-T2 began in February 2012, based on the Menkominfo decree (about five years after DVB-T's introduction and adoption period in Indonesia).
The Indonesian Ministry of Information & Communication Technology expects the final DVB-T2 digital television regulation to be finished in 2020 and the analogue switch off transition will begin in the same year.
Most analogue broadcasts were switched off in August 2023, with several local television stations finally broadcasting in digital on 17 August 2023.
Malaysia.
Malaysia started testing DVB-T in mid-2006 but outlined plans to switch to DVB-T2 in 2011, after which tests of both were run concurrently. The DVB-T test concluded in 2016, and at the same time the license to roll out DVB-T2 transmitters was awarded to Puncak Semangat Sdn. Bhd. Roll-out began in late 2016 in the Borneo states of Malaysia and had mostly concluded by mid-2017. Analog shutdown was planned for mid-2018, but after the opposition won the 14th general election, it was postponed to mid-2019.
Southern and central Peninsular Malaysia switched to fully digital TV on 30 September 2019, and northern and eastern Peninsular Malaysia shut off analog on 14 October 2019. The rest of the country switched over on 31 October 2019.
Nepal.
Currently, a private company called Prabhu TV is operating in Nepal.
Netherlands.
KPN started to switch its digital terrestrial television platform Digitenne to the DVB-T2 HEVC standard in October 2018, this transition completed on 9 July 2019.
Palestine.
On 5 January 2015, the StarCom company switched to DVB-T2 technology, which provided a better signal reaching most regions of Palestine, replacing the limited coverage of the testing period, when the DVB-T signal was functional only in the Gaza Strip.
The Star TV transponder offers a range of entertainment and sports channels over DVB-T2. The package consists of 10 channels on UHF channel 35 (586.00 MHz).
Romania.
Although Romania started DVB-T broadcasting in 2005, with MPEG-2 for SD broadcasts and MPEG-4 for HD, it was only experimental. In June 2011 Romania shifted to MPEG-4 for both SD and HD. In 2012, the Romanian authorities decided that DVB-T2 would be the standard used for terrestrial broadcasts, as it allows a larger number of programs to be broadcast on the same multiplex. Romania's switchover plans were initially delayed for economic and feasibility-related reasons. One of the reasons was that most Romanian consumers already extensively used either cable or satellite services, which developed very quickly and became very popular after 1990. In fact, a technological boom that started around 2003, driven by solid economic development in the field of telecommunications, led several private operators to create large fiber-optic and cable networks covering all of Romania, which are now used to provide TV, telephony, and high-quality broadband internet. As the prices for complete packages (TV, internet, telephony) are low and the quality is quite good (e.g. about 20 EUR for 500 Mbit/s internet, ≈120 SD and HD digital cable TV channels and telephony, with an added 2-4 EUR for mobile telephony), interest in over-the-air TV quickly became very low. There are rumors that commercial broadcasters that traditionally transmitted over-the-air on analogue channels (like MediaPro, Antena GROUP, Prima TV) will give up terrestrial broadcasting and will be available only on pay-TV services such as cable, satellite and IPTV. It is also rumored that the DVB-T standard (with MPEG-4 encoding) will continue until 2018.
On 17 June 2015, analogue terrestrial television was switched off, with the exception of the main public TV program (TVR1) which will continue to be broadcast strictly in the VHF band until the end of 2016.
Free-to-air DVB-T2 broadcasts on MUX1 (provided by the state-owned Radiocom) have been available since June 2015 in Timișoara (UHF channel 21, 474.00 MHz), Cluj-Napoca (UHF channel 26, 514.00 MHz), Iasi (UHF channel 25, 506.00 MHz), and Bucharest (UHF channel 30, 546.00 MHz). Coverage will be extended so that by the end of 2016 over 90% of the territory will be covered. For now (30 June 2015), only five channels are broadcast on MUX1: TVR1, TVR2, TVR News, TVR 3, and TVR HD, with plans to extend this to 14-16 SD and HD programs. Radiocom's MUX2 and MUX4 implementations will also start in 2016. Legacy DVB-T broadcasts are still available in Bucharest: 6 channels can be received on channels 54 and 59, but they will eventually be shut down and replaced by DVB-T2. TVR announced that TVR News and TVR 3 will be closed, and the fate of TVR HD is uncertain; this will lower the number of channels available on DVB-T. On 2 July 2015, Kanal D Romania left the terrestrial platform. The only broadcaster other than TVR that remained on terrestrial is Antena 3, but it is unknown whether it will stay on DVB-T, shift to DVB-T2 or leave the terrestrial platform completely. This would leave only three channels in DVB-T2 and, with many TV sets being only DVB-T compatible (most models sold being equipped with a digital cable tuner), an unattractive terrestrial platform, so more and more people will subscribe to a cable provider, or to a DTH operator in areas where cable TV is not available.
The DVB-T transmitters have been shut down since 1 September 2016, so only the DVB-T2 network remains on air.
As of 1 October 2016, 85% of the population and 78% of the Romanian territory (as stated by the broadcaster) are covered by the DVB-T2 signal. The nine TV channels broadcast at the moment are produced by the national television broadcaster: TVR HD plus eight SD channels (TVR1, TVR2, TVR3, TVR Cluj, TVR Craiova, TVR Iasi, TVR Timișoara, TVR Tg Mures).
Russia.
In September 2011, Russian government authorities approved a decision that, from that date, all newly built terrestrial digital TV networks would use the DVB-T2 standard. In some regions of Russia, DVB-T/MPEG-4 networks (mostly consisting of one multiplex) had already been deployed before this decision was made.
On 1 March 2012, the "Russian Television and Radio Broadcasting Network" started DVB-T2 broadcasting in Tatarstan, the first region in Russia where DVB-T2 was used.
In January 2015, the transition to DVB-T2 was completed, and DVB-T2 is now used throughout the territory of Russia. In 2019, almost all TV in Russia became digital (excluding some regional TV broadcasters).
Serbia.
In May 2009, the Serbian Ministry of Telecommunications and Information Society officially announced that DVB-T2 would be the national digital terrestrial broadcasting standard for both SD and HD, making Serbia one of the first countries to commit to the standard. The first public test of a DVB-T2 signal in Serbia took place during the Telfor 2009 conference in Belgrade. Analog switch-off was originally planned for 4 April 2012, was then postponed to 2013, and the final switch-off was planned to finish on 1 May 2015. On 21 March 2012, JP ETV started trial DVB-T2 transmission across Serbia, offering viewers a total of 10 SD channels and an HD version of the public broadcaster's channel RTS. On 14 November 2013, JP ETV upgraded the initial network for digital terrestrial television, and the DVB-T2 signal is now available to over 90 percent of the population of Serbia.
In June 2015, the transition to DVB-T2 was finished.
Singapore.
MediaCorp TV Mobile was the first channel in the world to pioneer the use of Digital Video Broadcasting (DVB-T) technology to deliver television programmes to commuters on public transport such as buses and taxis; it ceased transmission in 2010. After a small-scale trial of DVB-T carried out by the state-owned Mediacorp (which holds a monopoly on free-to-air broadcasting in the country) and pay-television provider StarHub, Singapore announced in June 2012 that it would instead adopt DVB-T2 as its digital terrestrial television standard, determining that it was best suited to Singapore's urban environment. By December 2013, Mediacorp had launched digital simulcasts of its channels. The analogue switch-off occurred shortly after midnight on 2 January 2019.
South Africa.
On 14 January 2011, the South African Department of Communication officially announced that the DVB-T2 standard will be the national digital terrestrial broadcasting standard. An analog switch-off was planned for December 2013.
Sri Lanka.
With the completion of Colombo's Lotus Tower, a 350 m tall broadcast and leisure tower, DVB-T2 will be implemented in Colombo and other areas of Sri Lanka. Completion is set for Q3 2015. DVB-T2 has already been implemented from the Kakavil transmission station by the SLRC.
Sweden.
On 17 June 2010, the Swedish Radio and TV Authority and the Swedish Government granted a total of nine licenses to broadcast channels in HDTV spread over two multiplexes using DVB-T2.
Broadcasts started on 1 November 2010, with five channels available initially: SVT1 HD, SVT2 HD, MTVN HD, National Geographic HD and Canal+ Sport HD. From this date a coverage of 70% of the population is achieved, with 90% expected by mid-2011 and nationwide coverage by 2012.
Thailand.
On 25 January 2013, the Royal Thai Army Radio and Television station, Channel 5, launched a trial DVB-T2 service broadcasting 6 SD and 2 HD channels. It successfully completed Thailand's first DVB-T2 digital terrestrial TV trial with the help of Harris and one of its Maxiva UAX air-cooled UHF transmitters.
On 4 March 2013, Free TV channels 3, 5, 7, 9, NBT and Thai PBS received temporary permission to broadcast in digital DVB-T2 system until the official launch of Digital TV in Thailand in April 2014.
Ukraine.
Ukraine's national terrestrial TV network (built and maintained by the Zeonbud company) uses the DVB-T2 standard for all four nationwide FTV (cardless CAS "Irdeto Cloaked CA") multiplexes, for both SD and HD broadcasts. Before settling for DVB-T2, Ukraine was testing both DVB-T/MPEG-2 and DVB-T/MPEG-4 options, and some experimental transmitters operating in those standards are still live. Ukraine has never had a full-fledged nationwide DVB-T network, thus not having to do a DVB-T-to-DVB-T2 migration.
Zeonbud's network consists of 167 transmitter sites, each carrying four DVB-T2 multiplexes, with transmitter power ranging from 2 kW to 50 W (all in MFN mode). As of 10 October 2011, 150 of the 167 transmitter sites have officially gone live. The biggest problem of Ukraine's DVB-T2 rollout for now is the acute shortage of inexpensive DVB-T2 set-top-boxes.
The four multiplexes carry in total 28 nationwide channels (same for all transmitter sites, distributed via satellite) and 4 local channels. Up to 8 of those 28 nationwide channels can broadcast in HD format.
As of 2019, there are 32 channels available on the air, up from 4 channels in October 2012.
United Kingdom.
On the terrestrial television system across most of the UK, only one multiplex (the slot corresponding to one channel in analog broadcasting and to many channels in digital broadcasting) is assigned to broadcasting in the DVB-T2 standard. This multiplex is controlled by the service company Freeview HD, which offered to host up to five DVB-T2 HD channels on it.
Freeview HD started its "technical launch" on 2 December 2009, hosting BBC HD, and ITV HD. On 30 March 2010, Freeview HD had its official launch, and added Channel 4 HD to its broadcasts. The fourth channel hosted was BBC One HD, while the fifth slot was used for a high-definition simulcast of CBBC during the daytime and a high-definition simulcast of BBC Three during the evening. The fifth HD stream on the DVB-T2 multiplex was going to be used by Channel 5 for their HD service, but they withdrew their application to Ofcom for the slot in December 2011.
In June 2012, the BBC launched a temporary stream in order to broadcast a high-definition red button service for the 2012 Olympics on Freeview, alongside BBC One HD and BBC HD. At the time, it was still undecided as to the permanent use of the 5th stream after the Olympics.
In Northern Ireland however, a second DVB-T2 multiplex was launched on 24 October 2012. This multiplex carries RTÉ One, RTÉ Two and TG4. All three channels on this multiplex are carried in SD rather than HD.
On 16 March 2013, the BBC announced that it would launch BBC News HD, BBC Three HD, BBC Four HD, CBeebies HD and CBBC HD on all digital television platforms which carry HD channels. On Freeview HD (and YouView), BBC Three HD and CBBC HD would use capacity on the BBC's existing HD multiplex covering 98.5% of UK homes; BBC News HD, BBC Four HD and CBeebies HD will use new HD capacity which will cover part of the UK and grow in coverage over time. These high-definition simulcasts are available on the second multiplex, but the second multiplex is only broadcast from selected transmitters, providing around 70% coverage across the whole of the UK.
On 26 March 2013, BBC HD was replaced by BBC Two HD.
In June 2022, it was announced that com 7 would be closing due to the license expiring and the frequency used being released for 5G. The BBC announced that they have made provisions for a 6th slot for BBC Four HD and CBeebies HD to move into available capacity that has been newly identified on the PSB3 multiplex which the BBC operates. However, BBC News HD would stop being broadcast on Freeview.
Vietnam.
As of 11 November 2011, two DVB-T2 SFN networks of Audio Visual Global JSC had been officially launched in Hanoi and Ho Chi Minh City. Later, the same service was offered in the Mekong Delta, with transmitters in Can Tho and other cities. Each network, with three multiplexes, carries a total of 40 SD, 5 HD and 5 audio channels (MPEG-4/H.264).
Western Asia and North Africa.
Qatar, Saudi Arabia, UAE, Iraq, Egypt, Syria, Lebanon and Tunisia have all adopted DVB-T2. Kuwait has also committed to install the second generation standard. Iraq has already implemented its DVB-T2-based system in parts of the country, while Bahrain, Oman and Yemen are assessing the technology.
Licensing.
Sisvel, a Luxembourg-based company, administers the licenses for patents applying to this standard, as well as other patent pools.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K_{bch}"
},
{
"math_id": 1,
"text": "1+x^{14}+x^{15}"
},
{
"math_id": 2,
"text": "N_{ldpc}"
},
{
"math_id": 3,
"text": "a_i"
},
{
"math_id": 4,
"text": "e^{(1)}_i"
},
{
"math_id": 5,
"text": "e^{(2)}_i"
},
{
"math_id": 6,
"text": "e^{(1)}_i = a_i"
},
{
"math_id": 7,
"text": "e^{(1)}_{i+1} = a_{i+1}"
},
{
"math_id": 8,
"text": "e^{(2)}_i = -a^*_{i+1}"
},
{
"math_id": 9,
"text": "e^{(2)}_{i+1} = a^*_{i}"
}
] | https://en.wikipedia.org/wiki?curid=9194388 |
9194858 | Area of a triangle | In geometry, calculating the area of a triangle is an elementary problem encountered often in many different situations. The best known and simplest formula is formula_0 where "b" is the length of the "base" of the triangle, and "h" is the "height" or "altitude" of the triangle. The term "base" denotes any side, and "height" denotes the length of a perpendicular from the vertex opposite the base onto the line containing the base. Euclid proved that the area of a triangle is half that of a parallelogram with the same base and height in his book "Elements" in 300 BCE. In 499 CE, Aryabhata used this method in the "Aryabhatiya" (section 2.6).
Although simple, this formula is only useful if the height can be readily found, which is not always the case. For example, the land surveyor of a triangular field might find it relatively easy to measure the length of each side, but relatively difficult to construct a 'height'. Various methods may be used in practice, depending on what is known about the triangle. Other frequently used formulas for the area of a triangle use trigonometry, side lengths (Heron's formula), vectors, coordinates, line integrals, Pick's theorem, or other properties.
History.
Heron of Alexandria found what is known as Heron's formula for the area of a triangle in terms of its sides, and a proof can be found in his book, "Metrica", written around 60 CE. It has been suggested that Archimedes knew the formula over two centuries earlier, and since "Metrica" is a collection of the mathematical knowledge available in the ancient world, it is possible that the formula predates the reference given in that work. In 300 BCE Greek mathematician Euclid proved that the area of a triangle is half that of a parallelogram with the same base and height in his book "Elements of Geometry".
In 499 Aryabhata, a great mathematician-astronomer from the classical age of Indian mathematics and Indian astronomy, expressed the area of a triangle as one-half the base times the height in the "Aryabhatiya".
A formula equivalent to Heron's was discovered by the Chinese independently of the Greeks. It was published in 1247 in "Shushu Jiuzhang" ("Mathematical Treatise in Nine Sections"), written by Qin Jiushao.
Using trigonometry.
The height of a triangle can be found through the application of trigonometry.
Using the labels in the image on the right, the altitude is "h" = "a" sin formula_1. Substituting this in the formula formula_2 derived above, the area of the triangle can be expressed as:
formula_3
(where α is the interior angle at "A", β is the interior angle at "B", formula_1 is the interior angle at "C" and "c" is the line AB).
Furthermore, since sin α = sin ("π" − α) = sin (β + formula_1), and similarly for the other two angles:
formula_4
formula_5
and analogously if the known side is "a" or "c".
formula_6
and analogously if the known side is "b" or "c".
Using side lengths (Heron's formula).
A triangle's shape is uniquely determined by the lengths of the sides, so its metrical properties, including area, can be described in terms of those lengths. By Heron's formula,
formula_7
where formula_8 is the semiperimeter, or half of the triangle's perimeter.
Three other equivalent ways of writing Heron's formula are
formula_9
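As a quick illustration, here is a minimal Python sketch of Heron's formula in the semiperimeter form given above; the 3-4-5 right triangle is just an example input.
<syntaxhighlight lang="python">
from math import sqrt

def heron(a, b, c):
    """Area of a triangle from its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2.0                        # semiperimeter
    return sqrt(s * (s - a) * (s - b) * (s - c))

print(heron(3, 4, 5))  # 6.0
</syntaxhighlight>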
Formulas resembling Heron's formula.
Three formulas have the same structure as Heron's formula but are expressed in terms of different variables. First, denoting the medians from sides "a", "b", and "c" respectively as "ma", "mb", and "mc" and their semi-sum ("ma" + "mb" + "mc")/2 as σ, we have
formula_10
Next, denoting the altitudes from sides "a", "b", and "c" respectively as "ha", "hb", and "hc", and denoting the semi-sum of the reciprocals of the altitudes as formula_11 we have
formula_12
And denoting the semi-sum of the angles' sines as "S" = [(sin α) + (sin β) + (sin γ)]/2, we have
formula_13
where "D" is the diameter of the circumcircle: formula_14
Using vectors.
The area of triangle ABC is half of the area of a parallelogram:
formula_15
where "a", "b", and "c" are vectors to the triangle's vertices from any arbitrary origin point, so that "b" − "a" and "c" − "a" are the translation vectors from vertex "a" to each of the others, and ∧ denotes the wedge product. If vertex "a" is taken to be the origin, this simplifies to formula_16.
The oriented relative area of a parallelogram in any affine space, a type of bivector, is defined as "u" ∧ "v", where "u" and "v" are translation vectors from one vertex of the parallelogram to each of the two adjacent vertices. In Euclidean space, the magnitude of this bivector is a well-defined scalar number representing the area of the parallelogram. (For vectors in three-dimensional space, the bivector-valued wedge product has the same magnitude as the vector-valued cross product, but unlike the cross product, which is only defined in three-dimensional Euclidean space, the wedge product is well-defined in an affine space of any dimension.)
The area of triangle "ABC" can also be expressed in terms of dot products. Taking vertex "a" to be the origin and calling the translation vectors to the other vertices "b" and "c",
formula_17
where for any Euclidean vector formula_18. This area formula can be derived from the previous one using the elementary vector identity formula_19.
In two-dimensional Euclidean space, for a vector "b" with coordinates ("xB", "yB") and a vector "c" with coordinates ("xC", "yC"), the magnitude of the wedge product is
formula_20
Using coordinates.
If vertex "A" is located at the origin (0, 0) of a Cartesian coordinate system and the coordinates of the other two vertices are given by "B"
("xB", "yB") and "C"
("xC", "yC"), then the area can be computed as <templatestyles src="Fraction/styles.css" />1⁄2 times the absolute value of the determinant
formula_21
For three general vertices, the equation is:
formula_22
which can be written as
formula_23
If the points are labeled sequentially in the counterclockwise direction, the above determinant expressions are positive and the absolute value signs can be omitted. The above formula is known as the shoelace formula or the surveyor's formula.
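A minimal Python sketch of the shoelace expression above for a single triangle follows; the sample vertices are arbitrary.
<syntaxhighlight lang="python">
def triangle_area(xa, ya, xb, yb, xc, yc):
    """Shoelace (surveyor's) formula for one triangle.
    The signed value is positive when the vertices are listed counterclockwise."""
    signed = 0.5 * ((xa - xc) * (yb - ya) - (xa - xb) * (yc - ya))
    return abs(signed)

print(triangle_area(0, 0, 4, 0, 0, 3))  # 6.0
</syntaxhighlight>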
If we locate the vertices in the complex plane and denote them in counterclockwise sequence as "a" = "xA" + "yAi", "b" = "xB" + "yBi", and "c" = "xC" + "yCi", and denote their complex conjugates as formula_24, formula_25, and formula_26, then the formula
formula_27
is equivalent to the shoelace formula.
In three dimensions, the area of a general triangle "A" = ("xA", "yA", "zA"), "B" = ("xB", "yB", "zB") and "C" = ("xC", "yC", "zC") is the Pythagorean sum of the areas of the respective projections on the three principal planes (i.e. "x" = 0, "y" = 0 and "z" = 0):
formula_28
Using line integrals.
The area within any closed curve, such as a triangle, is given by the line integral around the curve of the algebraic or signed distance of a point on the curve from an arbitrary oriented straight line "L". Points to the right of "L" as oriented are taken to be at negative distance from "L", while the weight for the integral is taken to be the component of arc length parallel to "L" rather than arc length itself.
This method is well suited to computation of the area of an arbitrary polygon. Taking "L" to be the "x"-axis, the line integral between consecutive vertices ("xi","yi") and ("x""i"+1,"y""i"+1) is given by the base times the mean height, namely ("x""i"+1 − "xi")("yi" + "y""i"+1)/2. The sign of the area is an overall indicator of the direction of traversal, with negative area indicating counterclockwise traversal. The area of a triangle then falls out as the case of a polygon with three sides.
While the line integral method has in common with other coordinate-based methods the arbitrary choice of a coordinate system, unlike the others it makes no arbitrary choice of vertex of the triangle as origin or of side as base. Furthermore, the choice of coordinate system defined by "L" commits to only two degrees of freedom rather than the usual three, since the weight is a local distance (e.g. "x""i"+1 − "xi" in the above) whence the method does not require choosing an axis normal to "L".
When working in polar coordinates it is not necessary to convert to Cartesian coordinates to use line integration, since the line integral between consecutive vertices ("ri",θ"i") and ("r""i"+1,θ"i"+1) of a polygon is given directly by "rir""i"+1sin(θ"i"+1 − θ"i")/2. This is valid for all values of θ, with some decrease in numerical accuracy when |θ| is many orders of magnitude greater than π. With this formulation negative area indicates clockwise traversal, which should be kept in mind when mixing polar and Cartesian coordinates. Just as the choice of "y"-axis ("x" = 0) is immaterial for line integration in Cartesian coordinates, so is the choice of zero heading (θ = 0) immaterial here.
Using Pick's theorem.
See Pick's theorem for a technique for finding the area of any arbitrary lattice polygon (one drawn on a grid with vertically and horizontally adjacent lattice points at equal distances, and with vertices on lattice points).
The theorem states:
formula_29
where "formula_30" is the number of internal lattice points and "B" is the number of lattice points lying on the border of the polygon.
Other area formulas.
Numerous other area formulas exist, such as
formula_31
where "r" is the inradius, and "s" is the semiperimeter (in fact, this formula holds for "all" tangential polygons), and
formula_32
where formula_33 are the radii of the excircles tangent to sides "a, b, c" respectively.
We also have
formula_34
and
formula_35
for circumdiameter "D"; and
formula_36
for angle α ≠ 90°.
The area can also be expressed as
formula_37
In 1885, Baker gave a collection of over a hundred distinct area formulas for the triangle. These include:
formula_38
formula_39
formula_40
formula_41
for circumradius (radius of the circumcircle) "R", and
formula_42
Upper bound on the area.
The area "T" of any triangle with perimeter "p" satisfies
formula_43
with equality holding if and only if the triangle is equilateral.
Other upper bounds on the area "T" are given by
formula_44
and
formula_45
both again holding if and only if the triangle is equilateral.
Bisecting the area.
There are infinitely many lines that bisect the area of a triangle. Three of them are the medians, which are the only area bisectors that go through the centroid. Three other area bisectors are parallel to the triangle's sides.
Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter. There can be one, two, or three of these for any given triangle.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T=bh/2,"
},
{
"math_id": 1,
"text": "\\gamma"
},
{
"math_id": 2,
"text": "T=\\tfrac12 bh"
},
{
"math_id": 3,
"text": "T = \\tfrac12 ab\\sin \\gamma = \\tfrac12 bc\\sin \\alpha = \\tfrac12 ca\\sin \\beta"
},
{
"math_id": 4,
"text": "T = \\tfrac12 ab\\sin (\\alpha+\\beta) = \\tfrac12 bc\\sin (\\beta+\\gamma) = \\tfrac12 ca\\sin (\\gamma+\\alpha)."
},
{
"math_id": 5,
"text": "T = \\frac {b^{2}(\\sin \\alpha)(\\sin (\\alpha + \\beta))}{2\\sin \\beta},"
},
{
"math_id": 6,
"text": "T = \\frac{a^{2}}{2(\\cot \\beta + \\cot \\gamma)} = \\frac{a^{2} (\\sin \\beta)(\\sin \\gamma)}{2\\sin(\\beta + \\gamma)},"
},
{
"math_id": 7,
"text": "T = \\sqrt{s(s-a)(s-b)(s-c)}"
},
{
"math_id": 8,
"text": "s= \\tfrac12(a+b+c)"
},
{
"math_id": 9,
"text": "\\begin{align}\nT &= \\tfrac14 \\sqrt{(a^2+b^2+c^2)^2-2(a^4+b^4+c^4)} \\\\[5mu]\n&= \\tfrac14 \\sqrt{2(a^2b^2+a^2c^2+b^2c^2)-(a^4+b^4+c^4)} \\\\[5mu]\n&= \\tfrac14 \\sqrt{(a+b-c) (a-b+c) (-a+b+c) (a+b+c)}.\n\\end{align}"
},
{
"math_id": 10,
"text": "T = \\tfrac43 \\sqrt{\\sigma (\\sigma - m_a)(\\sigma - m_b)(\\sigma - m_c)}."
},
{
"math_id": 11,
"text": "H = (h_a^{-1} + h_b^{-1} + h_c^{-1})/2"
},
{
"math_id": 12,
"text": "T^{-1} = 4 \\sqrt{H(H-h_a^{-1})(H-h_b^{-1})(H-h_c^{-1})}."
},
{
"math_id": 13,
"text": "T = D^{2} \\sqrt{S(S-\\sin \\alpha)(S-\\sin \\beta)(S-\\sin \\gamma)}"
},
{
"math_id": 14,
"text": "D=\\tfrac{a}{\\sin \\alpha} = \\tfrac{b}{\\sin \\beta} = \\tfrac{c}{\\sin \\gamma}."
},
{
"math_id": 15,
"text": "\nT = \\tfrac12\\bigl\\|(\\mathbf{b}-\\mathbf{a}) \\wedge (\\mathbf{c}-\\mathbf{a})\\bigr\\|\n= \\tfrac12\\bigl\\|\\mathbf{b} \\wedge \\mathbf c + \\mathbf b \\wedge \\mathbf c + \\mathbf c \\wedge \\mathbf a \\bigr\\|,\n"
},
{
"math_id": 16,
"text": "\\tfrac12\\| \\mathbf b \\wedge \\mathbf c \\|"
},
{
"math_id": 17,
"text": "T = \\tfrac12 \\sqrt{\\mathbf{b}^2 \\mathbf{c}^2 - (\\mathbf{b} \\cdot \\mathbf{c})^2},"
},
{
"math_id": 18,
"text": "\\mathbf v^2 = \\|\\mathbf v\\|^2 = \\mathbf v \\cdot \\mathbf v"
},
{
"math_id": 19,
"text": " \\mathbf u^2 \\mathbf v^2 = (\\mathbf u \\cdot \\mathbf v)^2 + \\|\\mathbf u \\wedge \\mathbf v \\|^2"
},
{
"math_id": 20,
"text": "\\| \\mathbf b \\wedge \\mathbf c \\| = |x_B y_C - x_C y_B|."
},
{
"math_id": 21,
"text": "T = \\tfrac12\\left|\\det\\begin{pmatrix}x_B & x_C \\\\ y_B & y_C \\end{pmatrix}\\right| = \\tfrac12 |x_B y_C - x_C y_B|."
},
{
"math_id": 22,
"text": "T = \\tfrac12 \\left| \\det\\begin{pmatrix}x_A & x_B & x_C \\\\ y_A & y_B & y_C \\\\ 1 & 1 & 1\\end{pmatrix} \\right| = \\tfrac12 \\big| x_A y_B - x_A y_C + x_B y_C - x_B y_A + x_C y_A - x_C y_B \\big|,"
},
{
"math_id": 23,
"text": "T = \\tfrac12 \\big| (x_A - x_C) (y_B - y_A) - (x_A - x_B) (y_C - y_A) \\big|."
},
{
"math_id": 24,
"text": "\\bar a"
},
{
"math_id": 25,
"text": "\\bar b"
},
{
"math_id": 26,
"text": "\\bar c"
},
{
"math_id": 27,
"text": "T=\\frac{i}{4}\\begin{vmatrix}a & \\bar a & 1 \\\\ b & \\bar b & 1 \\\\ c & \\bar c & 1 \\end{vmatrix}"
},
{
"math_id": 28,
"text": "T = \\tfrac12 \\sqrt{\\begin{vmatrix} x_A & x_B & x_C \\\\ y_A & y_B & y_C \\\\ 1 & 1 & 1 \\end{vmatrix}^2 +\n\\begin{vmatrix} y_A & y_B & y_C \\\\ z_A & z_B & z_C \\\\ 1 & 1 & 1 \\end{vmatrix}^2 +\n\\begin{vmatrix} z_A & z_B & z_C \\\\ x_A & x_B & x_C \\\\ 1 & 1 & 1 \\end{vmatrix}^2 }."
},
{
"math_id": 29,
"text": "T = I + \\tfrac12 B - 1"
},
{
"math_id": 30,
"text": "I"
},
{
"math_id": 31,
"text": "T = r \\cdot s,"
},
{
"math_id": 32,
"text": "T=r_a(s-a)=r_b(s-b)=r_c(s-c)"
},
{
"math_id": 33,
"text": "r_a, \\, r_b,\\, r_c"
},
{
"math_id": 34,
"text": "T = \\tfrac12 D^{2}(\\sin \\alpha)(\\sin \\beta)(\\sin \\gamma)"
},
{
"math_id": 35,
"text": "T = \\frac{abc}{2D} = \\frac{abc}{4R}"
},
{
"math_id": 36,
"text": "T = \\tfrac14(\\tan \\alpha)(b^{2}+c^{2}-a^{2})"
},
{
"math_id": 37,
"text": "T = \\sqrt{rr_ar_br_c}."
},
{
"math_id": 38,
"text": "T = \\tfrac12\\sqrt[3]{abch_ah_bh_c},"
},
{
"math_id": 39,
"text": "T = \\tfrac12 \\sqrt{abh_ah_b},"
},
{
"math_id": 40,
"text": "T = \\frac{a+b}{2(h_a^{-1} + h_b^{-1})},"
},
{
"math_id": 41,
"text": "T = \\frac{Rh_bh_c}{a}"
},
{
"math_id": 42,
"text": "T = \\frac{h_ah_b}{2 \\sin \\gamma}."
},
{
"math_id": 43,
"text": "T\\le \\tfrac{p^2}{12\\sqrt{3}},"
},
{
"math_id": 44,
"text": "4\\sqrt{3}T \\leq a^2+b^2+c^2"
},
{
"math_id": 45,
"text": "4\\sqrt{3}T \\leq \\frac{9abc}{a+b+c}, "
}
] | https://en.wikipedia.org/wiki?curid=9194858 |
9196302 | Sazonov's theorem | In mathematics, Sazonov's theorem, named after Vyacheslav Vasilievich Sazonov (), is a theorem in functional analysis.
It states that a bounded linear operator between two Hilbert spaces is "γ"-radonifying if it is a Hilbert–Schmidt operator. The result is also important in the study of stochastic processes and the Malliavin calculus, since results concerning probability measures on infinite-dimensional spaces are of central importance in these fields. Sazonov's theorem also has a converse: if the map is not Hilbert–Schmidt, then it is not "γ"-radonifying.
Statement of the theorem.
Let "G" and "H" be two Hilbert spaces and let "T" : "G" → "H" be a bounded operator from "G" to "H". Recall that "T" is said to be "γ"-radonifying if the push forward of the canonical Gaussian cylinder set measure on "G" is a "bona fide" measure on "H". Recall also that "T" is said to be a Hilbert–Schmidt operator if there is an orthonormal basis { "e""i" : "i" ∈ "I"} of "G" such that
formula_0
Then Sazonov's theorem is that "T" is "γ"-radonifying if it is a Hilbert–Schmidt operator.
The proof uses Prokhorov's theorem.
Remarks.
The canonical Gaussian cylinder set measure on an infinite-dimensional Hilbert space can never be a "bona fide" measure; equivalently, the identity function on such a space cannot be "γ"-radonifying.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{i \\in I} \\| T(e_i) \\|_H^2 < + \\infty."
}
] | https://en.wikipedia.org/wiki?curid=9196302 |
9200430 | Feldman–Mahalanobis model | Marxist model of economic development
The Feldman–Mahalanobis model is a Marxist model of economic development, created independently by Soviet economist Grigory Feldman in 1928 and Indian statistician Prasanta Chandra Mahalanobis in 1953. Mahalanobis became essentially the key economist of India's Second Five Year Plan, which placed him at the centre of some of India's most dramatic economic debates.
The essence of the model is a shift in the pattern of industrial investment towards building up a domestic consumption goods sector. Thus the strategy suggests that, in order to reach a high standard of consumption, investment in building up capacity in the production of capital goods is needed first. A high enough capacity in the capital goods sector expands, in the long run, the nation's consumer-goods production capacity.
This distinction between the two different types of goods was a clearer formulation of Marx's ideas in "Das Kapital", and also helped people to better understand the extent of the trade-off between the levels of immediate and future consumption. These ideas were first introduced in 1928 by Feldman, then an economist working for the GOSPLAN planning commission, where he presented theoretical arguments for a two-department scheme of growth. There is no evidence that Mahalanobis knew of Feldman's approach, which had been kept behind the borders of the USSR.
Implementation of the model.
The model was created as an analytical framework for India's Second Five-Year Plan in 1955, by appointment of Prime Minister Jawaharlal Nehru, as India felt there was a need to introduce a formal plan model after the First Five-Year Plan (1951–1956). The First Five-Year Plan stressed investment for capital accumulation in the spirit of the one-sector Harrod–Domar model. It argued that production requires capital and that capital can be accumulated through investment: the faster one accumulates capital through investment, the higher the growth rate will be. The most fundamental criticisms of that course came from Mahalanobis himself, who had worked with a variant of it in 1951 and 1952. The criticisms centred on the model's inability to cope with the real constraints of the economy, its ignoring of the fundamental choice problems of planning over time, and the lack of connection between the model and the actual selection of projects for governmental expenditure. Subsequently, Mahalanobis introduced his two-sector model, which he later expanded into the four-sector version.
Assumptions.
The assumptions under which the Mahalanobis model is posited are as follows:
Basics of the model.
The full-capacity output equation is as follows:
formula_0
In the model the growth rate is determined by both the share of investment in the capital goods sector, formula_1, and the share of investment in the consumer goods sector, formula_2. If we choose to make formula_1 larger than formula_2, this initially results in slower growth in the short run, but in the long run it yields a higher growth rate and an ultimately higher level of consumption than the alternative allocation. In other words, with this method, only in the long run does investment in capital goods produce consumer goods, so there are no short-run gains.
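The trade-off described above can be made concrete with a small Python sketch that evaluates the full-capacity output equation for two hypothetical investment allocations; all parameter values are illustrative assumptions, not historical estimates.
<syntaxhighlight lang="python">
def output(t, Y0, alpha0, lam_k, lam_c, beta_k, beta_c):
    """Full-capacity output Y_t of the Feldman-Mahalanobis equation above."""
    factor = (lam_k * beta_k + lam_c * beta_c) / (lam_k * beta_k)
    return Y0 * (1 + alpha0 * factor * ((1 + lam_k * beta_k) ** t - 1))

# Illustrative parameters: beta_k, beta_c are output-capital ratios of the two
# sectors, alpha0 the initial investment share; lam_k + lam_c = 1 splits investment.
Y0, alpha0, beta_k, beta_c = 100.0, 0.1, 0.2, 0.35

for t in (5, 15, 30):
    consumer_heavy = output(t, Y0, alpha0, 0.2, 0.8, beta_k, beta_c)  # lam_k = 0.2
    capital_heavy = output(t, Y0, alpha0, 0.6, 0.4, beta_k, beta_c)   # lam_k = 0.6
    print(t, round(consumer_heavy, 1), round(capital_heavy, 1))
# With these numbers the capital-goods-heavy path starts slightly below the
# consumer-goods-heavy one but overtakes it as t grows, which is the model's
# long-run argument for prioritising the capital goods sector.
</syntaxhighlight>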
Criticisms.
One of the most common criticisms of the model is that Mahalanobis pays hardly any attention to the savings constraint, which he assumes comes from the industrial sector. Developing countries, however, do not have this tendency, as the first stages of saving usually come from the agricultural sector. He also does not mention taxation, an important potential source of capital for the state as viewed by Neoclassical Macroeconomics.
A more serious criticism is the restrictiveness of the assumptions under which the model holds, an example being the exclusion of foreign trade, which is not justifiable for developing countries today. Another criticism is that, to use this model, a country would have to be large enough to contain all the raw resources needed for production to be self-sustaining, so the model would not apply to smaller countries.
Empirical case.
The model was essentially put into practice in 1956 as the theoretical pathway of India's Second Five-Year Plan. However, after two years the first problems started to emerge. Problems such as unexpected and unavoidable costs contributed to an increased money supply and growing inflation. The biggest problem was the fall in foreign exchange reserves due to a liberalised import policy and international tension, leading to modifications of the Second Plan in 1958. It was finally abandoned and replaced by the Third Five-Year Plan in 1961.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nY_{t} = Y_{0} \\left \\lbrace 1 + \\alpha_{0} \\frac{\\lambda_{k}\\beta_{k} + \\lambda_{c}\\beta_{c}}{\\lambda_{k}\\beta_{k}} \\left \\lbrack (1 + \\lambda_{k}\\beta_{k})^t - 1 \\right \\rbrack \\right \\rbrace\n"
},
{
"math_id": 1,
"text": "\\lambda_{k}"
},
{
"math_id": 2,
"text": "\\lambda_{c}"
}
] | https://en.wikipedia.org/wiki?curid=9200430 |
9200590 | Gaussian free field | Concept in statistical mechanics
In probability theory and statistical mechanics, the Gaussian free field (GFF) is a Gaussian random field, a central model of random surfaces (random height functions).
The discrete version can be defined on any graph, usually a lattice in "d"-dimensional Euclidean space. The continuum version is defined on R"d" or on a bounded subdomain of R"d". It can be thought of as a natural generalization of one-dimensional Brownian motion to "d" time (but still one space) dimensions: it is a random (generalized) function from R"d" to R. In particular, the one-dimensional continuum GFF is just the standard one-dimensional Brownian motion or Brownian bridge on an interval.
In the theory of random surfaces, it is also called the harmonic crystal. It is also the starting point for many constructions in quantum field theory, where it is called the Euclidean bosonic massless free field. A key property of the 2-dimensional GFF is conformal invariance, which relates it in several ways to the Schramm–Loewner evolution, see and .
Similarly to Brownian motion, which is the scaling limit of a wide range of discrete random walk models (see Donsker's theorem), the continuum GFF is the scaling limit of not only the discrete GFF on lattices, but of many random height function models, such as the height function of uniform random planar domino tilings, see . The planar GFF is also the limit of the fluctuations of the characteristic polynomial of a random matrix model, the Ginibre ensemble, see .
The structure of the discrete GFF on any graph is closely related to the behaviour of the simple random walk on the graph. For instance, the discrete GFF plays a key role in the proof by of several conjectures about the cover time of graphs (the expected number of steps it takes for the random walk to visit all the vertices).
Definition of the discrete GFF.
Let "P"("x", "y") be the transition kernel of the Markov chain given by a random walk on a finite graph "G"("V", "E"). Let "U" be a fixed non-empty subset of the vertices "V", and take the set of all real-valued functions formula_0 with some prescribed values on "U". We then define a Hamiltonian by
formula_1
Then, the random function with probability density proportional to formula_2 with respect to the Lebesgue measure on formula_3 is called the discrete GFF with boundary "U".
It is not hard to show that the expected value formula_4 is the discrete harmonic extension of the boundary values from "U" (harmonic with respect to the transition kernel "P"), and the covariances formula_5 are equal to the discrete Green's function "G"("x", "y").
So, in one sentence, the discrete GFF is the Gaussian random field on "V" with covariance structure given by the Green's function associated to the transition kernel "P".
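For illustration, here is a minimal Python/NumPy sketch that samples a discrete GFF on an n × n piece of the square lattice with zero boundary values. It is a sketch under simplifying assumptions: it uses unit edge weights (the combinatorial Dirichlet Laplacian) rather than the transition kernel "P" of the simple random walk, which on the regular lattice only rescales the field; the covariance of the sample is then the corresponding discrete Green's function, as described above.
<syntaxhighlight lang="python">
import numpy as np

def dirichlet_laplacian(n):
    """Laplacian of the n x n interior of the square lattice with zero
    (Dirichlet) boundary values, as a dense matrix."""
    L = np.zeros((n * n, n * n))
    idx = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            L[idx(i, j), idx(i, j)] = 4.0  # lattice degree; fixed boundary absorbed
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:
                    L[idx(i, j), idx(a, b)] = -1.0
    return L

def sample_gff(n, seed=0):
    """One sample of the discrete GFF: its covariance L^{-1} is the Green's function."""
    L = dirichlet_laplacian(n)
    C = np.linalg.cholesky(L)                      # L = C C^T
    z = np.random.default_rng(seed).standard_normal(n * n)
    return np.linalg.solve(C.T, z).reshape(n, n)   # Cov = (C C^T)^{-1} = L^{-1}

field = sample_gff(20)
print(field.shape, float(field.var()))
</syntaxhighlight>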
The continuum field.
The definition of the continuum field necessarily uses some abstract machinery, since it does not exist as a random height function. Instead, it is a random generalized function, or in other words, a probability distribution on distributions (with two different meanings of the word "distribution").
Given a domain Ω ⊆ R"n", consider the Dirichlet inner product
formula_6
for smooth functions "ƒ" and "g" on Ω, coinciding with some prescribed boundary function on formula_7, where formula_8 is the gradient vector at formula_9. Then take the Hilbert space closure with respect to this inner product; this is the Sobolev space formula_10.
The continuum GFF formula_0 on formula_11 is a Gaussian random field indexed by formula_10, i.e., a collection of Gaussian random variables, one for each formula_12, denoted by formula_13, such that the covariance structure is formula_14 for all formula_15.
Such a random field indeed exists, and its distribution is unique. Given any orthonormal basis formula_16 of formula_10 (with the given boundary condition), we can form the formal infinite sum
formula_17
where the formula_18 are i.i.d. standard normal variables. This random sum almost surely will not exist as an element of formula_10, since if it did then
formula_19
However, it exists as a random generalized function, since for any formula_12 we have
formula_20
hence
formula_21
is a centered Gaussian random variable with finite variance formula_22
Special case: "n" = 1.
Although the above argument shows that formula_23 does not exist as a random element of formula_10, it still could be that it is a random function on formula_11 in some larger function space. In fact, in dimension formula_24, an orthonormal basis of formula_25 is given by
formula_26 where formula_27 form an orthonormal basis of formula_28
and then formula_29 is easily seen to be a one-dimensional Brownian motion (or Brownian bridge, if the boundary values for formula_30 are set up that way). So, in this case, it is a random continuous function (not belonging to formula_25, however). For instance, if formula_27 is the Haar basis, then this is Lévy's construction of Brownian motion, see, e.g., Section 3 of .
On the other hand, for formula_31 it can indeed be shown to exist only as a generalized function, see .
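Returning to the one-dimensional case, the hypothetical Python sketch below evaluates a truncated version of the random series above, using the particular orthonormal basis φ_k(s) = √2 cos((k − 1/2)πs) of L²[0,1]; its integrals ψ_k give the Karhunen–Loève expansion of Brownian motion, and the truncation at K terms is the approximation.
<syntaxhighlight lang="python">
import numpy as np

def brownian_series(t, K=500, seed=1):
    """Partial sum of B(t) = sum_k xi_k psi_k(t), where
    psi_k(t) = sqrt(2) sin((k - 1/2) pi t) / ((k - 1/2) pi)
    is the integral of the basis function sqrt(2) cos((k - 1/2) pi s)."""
    xi = np.random.default_rng(seed).standard_normal(K)   # i.i.d. N(0, 1)
    k = np.arange(1, K + 1) - 0.5
    psi = np.sqrt(2.0) * np.sin(np.outer(t, k) * np.pi) / (k * np.pi)
    return psi @ xi

t = np.linspace(0.0, 1.0, 201)
B = brownian_series(t)
print(B[0], B[-1])  # B(0) = 0; B(1) is approximately N(0, 1) distributed
</syntaxhighlight>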
Special case: "n" = 2.
In dimension "n" = 2, the conformal invariance of the continuum GFF is clear from the invariance of the Dirichlet inner product. The corresponding two-dimensional conformal field theory describes a massless free scalar boson. | [
{
"math_id": 0,
"text": "\\varphi"
},
{
"math_id": 1,
"text": "H( \\varphi ) = \\frac{1}{2} \\sum_{(x,y)} P(x,y)\\big(\\varphi(x) - \\varphi(y)\\big)^2. "
},
{
"math_id": 2,
"text": "\\exp(-H(\\varphi))"
},
{
"math_id": 3,
"text": "\\R^{V\\setminus U}"
},
{
"math_id": 4,
"text": "\\mathbb{E}[\\varphi(x)]"
},
{
"math_id": 5,
"text": "\\mathrm{Cov}[\\varphi(x),\\varphi(y)]"
},
{
"math_id": 6,
"text": "\\langle f, g\\rangle := \\int_\\Omega (Df(x), Dg(x)) \\, dx "
},
{
"math_id": 7,
"text": "\\partial \\Omega"
},
{
"math_id": 8,
"text": "Df\\,(x)"
},
{
"math_id": 9,
"text": "x\\in \\Omega"
},
{
"math_id": 10,
"text": "H^1(\\Omega)"
},
{
"math_id": 11,
"text": "\\Omega"
},
{
"math_id": 12,
"text": "f \\in H^1(\\Omega)"
},
{
"math_id": 13,
"text": "\\langle \\varphi,f \\rangle"
},
{
"math_id": 14,
"text": "\\mathrm{Cov}[\\langle \\varphi,f \\rangle, \\langle \\varphi,g \\rangle] = \\langle f,g \\rangle"
},
{
"math_id": 15,
"text": "f,g\\in H^1(\\Omega)"
},
{
"math_id": 16,
"text": "\\psi_1, \\psi_2, \\dots"
},
{
"math_id": 17,
"text": " \\varphi := \\sum_{k=1}^\\infty \\xi_k \\psi_k,"
},
{
"math_id": 18,
"text": "\\xi_k"
},
{
"math_id": 19,
"text": " \\langle \\varphi,\\varphi \\rangle = \\sum_{k=1}^\\infty \\xi_k^2=\\infty\\quad \\textrm{a.s.}"
},
{
"math_id": 20,
"text": "f=\\sum_{k=1}^\\infty c_k \\psi_k,\\text{ with }\\sum_{k=1}^\\infty c_k^2 < \\infty,"
},
{
"math_id": 21,
"text": "\\langle \\varphi,f \\rangle := \\sum_{k=1}^\\infty \\xi_k c_k"
},
{
"math_id": 22,
"text": "\\sum_k c_k^2."
},
{
"math_id": 23,
"text": " \\varphi "
},
{
"math_id": 24,
"text": "n=1"
},
{
"math_id": 25,
"text": "H^1[0,1]"
},
{
"math_id": 26,
"text": "\\psi_k (t):= \\int_0^t \\varphi_k(s) \\, ds\\,,"
},
{
"math_id": 27,
"text": "(\\varphi_k)"
},
{
"math_id": 28,
"text": "L^2[0,1]\\,,"
},
{
"math_id": 29,
"text": "\\varphi(t):=\\sum_{k=1}^\\infty \\xi_k \\psi_k(t)"
},
{
"math_id": 30,
"text": "\\varphi_k"
},
{
"math_id": 31,
"text": "n \\geq 2"
}
] | https://en.wikipedia.org/wiki?curid=9200590 |
920110 | Dinitz conjecture | Theorem in combinatorics
In combinatorics, the Dinitz theorem (formerly known as Dinitz conjecture) is a statement about the extension of arrays to partial Latin squares, proposed in 1979 by Jeff Dinitz, and proved in 1994 by Fred Galvin.
The Dinitz theorem is that given an "n" × "n" square array, a set of "m" symbols with "m" ≥ "n", and for each cell of the array an "n"-element set drawn from the pool of "m" symbols, it is possible to choose a way of labeling each cell with one of those elements in such a way that no row or column repeats a symbol.
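For illustration, the small hypothetical Python sketch below takes the array formulation literally: given an "n" × "n" array of "n"-element lists, it finds a labeling with no repeated symbol in any row or column by backtracking. The theorem guarantees that such a labeling always exists; the search itself is practical only for small "n".
<syntaxhighlight lang="python">
def dinitz_labeling(lists):
    """lists[i][j] is the set of allowed symbols for cell (i, j); returns an
    n x n labeling with no repeated symbol in any row or column, or None."""
    n = len(lists)
    grid = [[None] * n for _ in range(n)]

    def fill(pos):
        if pos == n * n:
            return True
        i, j = divmod(pos, n)
        for s in lists[i][j]:
            if all(grid[i][y] != s for y in range(j)) and \
               all(grid[x][j] != s for x in range(i)):
                grid[i][j] = s
                if fill(pos + 1):
                    return True
                grid[i][j] = None
        return False

    return grid if fill(0) else None

# a 3 x 3 example: each cell is given a 3-element list drawn from {1, ..., 5}
example = [[{1, 2, 3}, {2, 3, 4}, {1, 4, 5}],
           [{1, 3, 5}, {1, 2, 5}, {2, 3, 5}],
           [{2, 4, 5}, {1, 3, 4}, {1, 2, 4}]]
print(dinitz_labeling(example))
</syntaxhighlight>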
It can also be formulated as a result in graph theory, that the list chromatic index of the complete bipartite graph formula_0 equals formula_1. That is, if each edge of the complete bipartite graph is assigned a set of formula_1 colors, it is possible to choose one of the assigned colors for each edge
such that no two edges incident to the same vertex have the same color.
Galvin's proof generalizes to the statement that, for every bipartite multigraph, the list chromatic index equals its chromatic index. The more general edge list coloring conjecture states that the same holds not only for bipartite graphs, but also for any loopless multigraph. An even more general conjecture states that the list chromatic number of claw-free graphs always equals their chromatic number. The Dinitz theorem is also related to Rota's basis conjecture.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K_{n, n}"
},
{
"math_id": 1,
"text": "n"
}
] | https://en.wikipedia.org/wiki?curid=920110 |
9202993 | List of PSPACE-complete problems | Here are some of the more commonly known problems that are PSPACE-complete when expressed as decision problems. This list is in no way comprehensive.
Games and puzzles.
Generalized versions of:
<templatestyles src="Div col/styles.css"/>
Logic.
<templatestyles src="Div col/styles.css"/>
Lambda calculus.
Type inhabitation problem for simply typed lambda calculus
Automata and language theory.
Circuit theory.
Integer circuit evaluation
Automata theory.
<templatestyles src="Div col/styles.css"/>
Formal languages.
<templatestyles src="Div col/styles.css"/>
Graph theory.
<templatestyles src="Div col/styles.css"/>
Others.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "T_0"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "n\\times m"
}
] | https://en.wikipedia.org/wiki?curid=9202993 |
9203671 | Fourier–Motzkin elimination | Mathematical algorithm for eliminating variables from a system of linear inequalities
Fourier–Motzkin elimination, also known as the FME method, is a mathematical algorithm for eliminating variables from a system of linear inequalities. It can output real solutions.
The algorithm is named after Joseph Fourier who proposed the method in 1826 and Theodore Motzkin who re-discovered it in 1936.
Elimination.
The elimination of a set of variables, say "V", from a system of relations (here linear inequalities) refers to the creation of another system of the same sort, but without the variables in "V", such that both systems have the same solutions over the remaining variables.
If all variables are eliminated from a system of linear inequalities, then one obtains a system of constant inequalities. It is then trivial to decide whether the resulting system is true or false. It is true if and only if the original system has solutions. As a consequence, elimination of all variables can be used to detect whether a system of inequalities has solutions or not.
Consider a system formula_0 of formula_1 inequalities with formula_2 variables formula_3 to formula_4, with formula_4 the variable to be eliminated. The linear inequalities in the system can be grouped into three classes depending on the sign (positive, negative or null) of the coefficient for formula_4.
The original system is thus equivalent to
formula_13.
Elimination consists in producing a system equivalent to formula_14. Obviously, this formula is equivalent to
formula_15.
The inequality
formula_16
is equivalent to formula_17 inequalities formula_18, for formula_19 and formula_20.
We have therefore transformed the original system into another system where formula_4 is eliminated. Note that the output system has formula_21 inequalities. In particular, if formula_22, then the number of output inequalities is formula_23.
Example.
Consider the following system of inequalities:
formula_24
Since all the inequalities are in the same form (all less-than or all greater-than), we can examine the coefficient signs for each variable.
Eliminating x would yield 2*2 = 4 inequalities on the remaining variables, and so would eliminating y. Eliminating z would yield only 3*1 = 3 inequalities so we use that instead.
formula_25
which gives the 3 inequalities:
formula_26
Simplifying:
formula_27
This system uses only 2 variables instead of 3. Examining the coefficient signs for each variable yields all-positive for y, so we can immediately say that the system is unbounded in y: since all y coefficients are positive and all inequalities are less-than-or-equal, setting y to negative infinity (or any sufficiently large negative number) would satisfy the reduced system, therefore there exist corresponding x and z for the larger systems as well, and there are infinitely many such solutions. E.g. setting y = -1000000, x = 0, z = -2222222 satisfies the original system as well as the reduced ones.
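To make the elimination step concrete, here is a hypothetical Python sketch of a single Fourier–Motzkin step (integer data keeps the arithmetic exact); applied to the example system with "z" as the eliminated variable, it reproduces the three reduced inequalities up to positive scaling.
<syntaxhighlight lang="python">
def eliminate(rows, j):
    """One Fourier-Motzkin step: rows are pairs (a, b) meaning a . x <= b;
    variable j is removed and the reduced, equivalent system is returned."""
    pos = [r for r in rows if r[0][j] > 0]    # upper bounds on x_j
    neg = [r for r in rows if r[0][j] < 0]    # lower bounds on x_j
    out = [r for r in rows if r[0][j] == 0]   # kept unchanged
    for ap, bp in pos:
        for an, bn in neg:
            # positive multipliers -an[j] and ap[j] cancel the x_j terms
            a = [-an[j] * ap[k] + ap[j] * an[k] for k in range(len(ap))]
            out.append((a, -an[j] * bp + ap[j] * bn))
    return out

# the example system above, variables ordered (x, y, z)
system = [([2, -5, 4], 10),
          ([3, -6, 3], 9),
          ([-1, 5, -2], -7),
          ([-3, 2, 6], 12)]

for a, b in eliminate(system, 2):   # eliminate z (index 2)
    print(a, "<=", b)
# [0, 10, 0] <= -8, [3, 3, 0] <= -3, [-12, 34, 0] <= -18, i.e. (after dividing
# by a positive constant) 5y <= -4, x + y <= -1 and -6x + 17y <= -9.
</syntaxhighlight>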
Complexity.
Running an elimination step over formula_1 inequalities can result in at most formula_23 inequalities in the output, thus naively running formula_28 successive steps can result in at most formula_29, a double exponential complexity. This is due to the algorithm producing many redundant constraints implied by other constraints.
By McMullen's upper bound theorem, the number of non-redundant constraints grows as a single exponential. A singly exponential implementation of Fourier–Motzkin elimination, together with complexity estimates, is given in the literature.
Linear programming is well known to solve systems of linear inequalities in polynomial time, which favors it over Fourier–Motzkin elimination.
Imbert's acceleration theorems.
Two "acceleration" theorems due to Imbert permit the elimination of redundant inequalities based solely on syntactic properties of the formula derivation tree, thus curtailing the need to solve linear programs or compute matrix ranks.
Define the "history" formula_30 of an inequality formula_7 as the set of indexes of inequalities from the initial system formula_0 used to produce formula_7. Thus, formula_31 for inequalities formula_32 of the initial system. When adding a new inequality formula_33 (by eliminating formula_4), the new history formula_34 is constructed as formula_35.
Suppose that the variables formula_36 have been "officially" eliminated. Each inequality formula_7 partitions the set formula_37 into formula_38:
A non-redundant inequality has the property that its history is "minimal".
Theorem (Imbert's first acceleration theorem). If the history formula_30 of an inequality formula_7 is minimal, then formula_43.
An inequality that does not satisfy these bounds is necessarily redundant, and can be removed from the system without changing its solution set.
The second acceleration theorem detects minimal history sets:
Theorem (Imbert's second acceleration theorem). If the inequality formula_7 is such that formula_44, then formula_30 is minimal.
This theorem provides a quick detection criterion and is used in practice to avoid more costly checks, such as those based on matrix ranks. See the reference for implementation details.
Applications in information theory.
Information-theoretic achievability proofs result in conditions under which the existence of a well-performing coding scheme is guaranteed. These conditions are often described by a linear system of inequalities. The variables of the system include both the transmission rates (which are part of the problem's formulation) and additional auxiliary rates used in the design of the scheme. Commonly, one aims to describe the fundamental limits of communication in terms of the problem's parameters only. This gives rise to the need to eliminate the aforementioned auxiliary rates, which is done via Fourier–Motzkin elimination. However, the elimination process results in a new system that possibly contains more inequalities than the original. Yet, often some of the inequalities in the reduced system are redundant. Redundancy may be implied by other inequalities or by inequalities in information theory (a.k.a. Shannon-type inequalities). A recently developed open-source software package for MATLAB performs the elimination while identifying and removing redundant inequalities. Consequently, the software outputs a simplified system (without redundancies) that involves the communication rates only.
A redundant constraint can be identified by solving a linear program as follows. Given a system of linear constraints, if the formula_7-th inequality is satisfied for every solution of all the other inequalities, then it is redundant. Similarly, STIs refer to inequalities that are implied by the non-negativity of information-theoretic measures and the basic identities they satisfy. For instance, the STI formula_45 is a consequence of the identity formula_46 and the non-negativity of conditional entropy, i.e., formula_47. Shannon-type inequalities define a cone in formula_48, where formula_1 is the number of random variables appearing in the involved information measures. Consequently, any STI can be proved via linear programming by checking whether it is implied by the basic identities and the non-negativity constraints. The described algorithm first performs Fourier–Motzkin elimination to remove the auxiliary rates. Then it imposes the information-theoretic non-negativity constraints on the reduced output system and removes redundant inequalities.
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "x_1"
},
{
"math_id": 4,
"text": "x_r"
},
{
"math_id": 5,
"text": "x_r \\geq b_i-\\sum_{k=1}^{r-1} a_{ik} x_k"
},
{
"math_id": 6,
"text": "x_r \\geq A_i(x_1, \\dots, x_{r-1})"
},
{
"math_id": 7,
"text": "i"
},
{
"math_id": 8,
"text": "n_A"
},
{
"math_id": 9,
"text": "x_r \\leq b_i-\\sum_{k=1}^{r-1} a_{ik} x_k"
},
{
"math_id": 10,
"text": "x_r \\leq B_i(x_1, \\dots, x_{r-1})"
},
{
"math_id": 11,
"text": "n_B"
},
{
"math_id": 12,
"text": "\\phi"
},
{
"math_id": 13,
"text": "\\max(A_1(x_1, \\dots, x_{r-1}), \\dots, A_{n_A}(x_1, \\dots, x_{r-1})) \\leq x_r \\leq \\min(B_1(x_1, \\dots, x_{r-1}), \\dots, B_{n_B}(x_1, \\dots, x_{r-1})) \\wedge \\phi"
},
{
"math_id": 14,
"text": "\\exists x_r~S"
},
{
"math_id": 15,
"text": "\\max(A_1(x_1, \\dots, x_{r-1}), \\dots, A_{n_A}(x_1, \\dots, x_{r-1})) \\leq \\min(B_1(x_1, \\dots, x_{r-1}), \\dots, B_{n_B}(x_1, \\dots, x_{r-1})) \\wedge \\phi"
},
{
"math_id": 16,
"text": "\\max(A_1(x_1, \\dots, x_{r-1}), \\dots, A_{n_A}(x_1, \\dots, x_{r-1})) \\leq \\min(B_1(x_1, \\dots, x_{r-1}), \\dots, B_{n_B}(x_1, \\dots, x_{r-1}))"
},
{
"math_id": 17,
"text": "n_A n_B"
},
{
"math_id": 18,
"text": "A_i(x_1, \\dots, x_{r-1}) \\leq B_j(x_1, \\dots, x_{r-1})"
},
{
"math_id": 19,
"text": "1 \\leq i \\leq n_A"
},
{
"math_id": 20,
"text": "1 \\leq j \\leq n_B"
},
{
"math_id": 21,
"text": "(n-n_A-n_B)+n_A n_B"
},
{
"math_id": 22,
"text": "n_A = n_B = n/2"
},
{
"math_id": 23,
"text": "n^2/4"
},
{
"math_id": 24,
"text": "\n\\begin{cases} \n 2x - 5y + 4z \\leqslant 10 \\\\\n 3x - 6y + 3z \\leqslant 9 \\\\\n -x + 5y - 2z \\leqslant -7 \\\\\n -3x + 2y + 6z \\leqslant 12 \\\\\n\\end{cases}\n"
},
{
"math_id": 25,
"text": "\n\\begin{cases} \n z \\leqslant \\frac{10 - 2x + 5y}{4} \\\\\n z \\leqslant \\frac{9 - 3x + 6y}{3} \\\\\n \\frac{7 - x + 5y}{2} \\leqslant z \\\\\n z \\leqslant \\frac{12 + 3x - 2y}{6} \\\\\n\\end{cases}\n"
},
{
"math_id": 26,
"text": "\n\\begin{cases} \n \\frac{7 - x + 5y}{2} \\leqslant \\frac{10 - 2x + 5y}{4} \\\\\n \\frac{7 - x + 5y}{2} \\leqslant \\frac{9 - 3x + 6y}{3} \\\\\n \\frac{7 - x + 5y}{2} \\leqslant \\frac{12 + 3x - 2y}{6} \\\\\n\\end{cases}\n"
},
{
"math_id": 27,
"text": "\n\\begin{cases} \n 5y \\leqslant -4 \\\\\n x + y \\leqslant -1 \\\\\n -6x + 17y \\leqslant -9 \\\\\n\\end{cases}\n"
},
{
"math_id": 28,
"text": "d"
},
{
"math_id": 29,
"text": "4(n/4)^{2^d}"
},
{
"math_id": 30,
"text": "H_i"
},
{
"math_id": 31,
"text": "H_i=\\{i\\}"
},
{
"math_id": 32,
"text": "i \\in S"
},
{
"math_id": 33,
"text": "k: A_i(x_1, \\dots, x_{r-1}) \\leq B_j(x_1, \\dots, x_{r-1})"
},
{
"math_id": 34,
"text": "H_k"
},
{
"math_id": 35,
"text": "H_k = H_i \\cup H_j"
},
{
"math_id": 36,
"text": "O_k = \\{x_{r}, \\ldots, x_{r - k + 1}\\}"
},
{
"math_id": 37,
"text": "O_k"
},
{
"math_id": 38,
"text": "E_i \\cup I_i \\cup R_i"
},
{
"math_id": 39,
"text": "E_i"
},
{
"math_id": 40,
"text": "x_j"
},
{
"math_id": 41,
"text": "I_i"
},
{
"math_id": 42,
"text": "R_i"
},
{
"math_id": 43,
"text": " 1 + |E_i| \\ \\leq \\ |H_i| \\ \\leq 1 + \\left| E_i \\cup (I_i \\cap O_k)\\right|"
},
{
"math_id": 44,
"text": "1 + |E_i| = |H_i|"
},
{
"math_id": 45,
"text": "I(X_1;X_2) \\leq H(X_1) "
},
{
"math_id": 46,
"text": " I(X_1;X_2) = H(X_1) - H(X_1 | X_2)"
},
{
"math_id": 47,
"text": "H(X_1|X_2) \\geq 0"
},
{
"math_id": 48,
"text": "\\mathbb R^{2^n-1}"
}
] | https://en.wikipedia.org/wiki?curid=9203671 |
920526 | Silver ratio | Ratio of numbers, approximately 1:2.4
In mathematics, two quantities are in the silver ratio (or silver mean) if the ratio of the larger of those two quantities to the smaller quantity is the same as the ratio of the sum of the smaller quantity plus twice the larger quantity to the larger quantity (see below). This defines the silver ratio as an irrational mathematical constant, whose value of one plus the square root of 2 is approximately 2.4142135623. Its name is an allusion to the golden ratio; analogously to the way the golden ratio is the limiting ratio of consecutive Fibonacci numbers, the silver ratio is the limiting ratio of consecutive Pell numbers. The silver ratio is sometimes denoted by "δS", although other symbols, such as "λ" and "σ", are also used.
Mathematicians have studied the silver ratio since the time of the Greeks (although perhaps without giving a special name until recently) because of its connections to the square root of 2, its convergents, square triangular numbers, Pell numbers, octagons and the like.
The relation described above can be expressed algebraically, for a > b:
formula_0
or equivalently,
formula_1
The silver ratio can also be defined by the simple continued fraction [2; 2, 2, 2, ...]:
formula_2
The convergents of this continued fraction (2/1, 5/2, 12/5, 29/12, 70/29, ...) are ratios of consecutive Pell numbers. These fractions provide accurate rational approximations of the silver ratio, analogous to the approximation of the golden ratio by ratios of consecutive Fibonacci numbers.
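A small numerical illustration of this convergence, as a Python sketch: Pell numbers are generated from their recurrence and the ratio of consecutive terms is compared with 1 + √2.
<syntaxhighlight lang="python">
from math import sqrt

def pell_ratios(terms=12):
    """Ratios of consecutive Pell numbers (1, 2, 5, 12, 29, 70, ...),
    which converge to the silver ratio 1 + sqrt(2)."""
    p_prev, p = 1, 2
    ratios = []
    for _ in range(terms):
        ratios.append(p / p_prev)
        p_prev, p = p, 2 * p + p_prev   # Pell recurrence: P(n) = 2*P(n-1) + P(n-2)
    return ratios

print(pell_ratios()[-1])   # 2.41421356..., agreeing with 1 + sqrt(2) to about eight decimals
print(1 + sqrt(2))
</syntaxhighlight>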
The silver rectangle is connected to the regular octagon. If a regular octagon is partitioned into two isosceles trapezoids and a rectangle, then the rectangle is a silver rectangle with an aspect ratio of 1:"δ""S", and the 4 sides of the trapezoids are in a ratio of 1:1:1:"δ""S". If the edge length of a regular octagon is "t", then the span of the octagon (the distance between opposite sides) is "δ""S""t", and the area of the octagon is 2"δ""S""t"2.
Calculation.
For comparison, two quantities "a", "b" with "a" > "b" > 0 are said to be in the "golden ratio" "φ" if,
formula_3
However, they are in the "silver ratio" "δS" if,
formula_4
Equivalently,
formula_5
Therefore,
formula_6
Multiplying by "δS" and rearranging gives
formula_7
Using the quadratic formula, two solutions can be obtained. Because "δS" is the ratio of positive quantities, it is necessarily positive, so,
formula_8
Properties.
Number-theoretic properties.
The silver ratio is a Pisot–Vijayaraghavan number (PV number), as its conjugate has absolute value less than 1. In fact it is the second smallest quadratic PV number after the golden ratio. This means the distance from the "n"th power of "δ""S" to the nearest integer is ≈ 0.41421"n". Thus, the sequence of fractional parts of the powers of "δ""S", "n" = 1, 2, 3, ... (taken as elements of the torus) converges. In particular, this sequence is not equidistributed mod 1.
Powers.
The lower powers of the silver ratio are
formula_9
formula_10
formula_11
formula_12
formula_13
formula_14
The powers continue in the pattern
formula_15
where
formula_16
For example, using this property:
formula_17
Using "K"0
1 and "K"1
2 as initial conditions, a Binet-like formula results from solving the recurrence relation
formula_16
which becomes
formula_18
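A quick numerical cross-check of the recurrence and the Binet-like formula above (a Python sketch; the index convention follows the initial conditions "K"0 = 1, "K"1 = 2 just stated):
<syntaxhighlight lang="python">
from math import sqrt

delta_s = 1 + sqrt(2)

def k_recurrence(n):
    """K_n from K_n = 2*K_(n-1) + K_(n-2) with K_0 = 1 and K_1 = 2."""
    k_prev, k = 1, 2
    for _ in range(n):
        k_prev, k = k, 2 * k + k_prev
    return k_prev

def k_binet(n):
    """The Binet-like closed form quoted above."""
    return (delta_s ** (n + 1) - (2 - delta_s) ** (n + 1)) / (2 * sqrt(2))

print([k_recurrence(n) for n in range(6)])     # [1, 2, 5, 12, 29, 70]
print([round(k_binet(n)) for n in range(6)])   # the same Pell numbers
</syntaxhighlight>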
Trigonometric properties.
The silver ratio is intimately connected to trigonometric ratios for "π"/8 = 22.5°.
formula_19
formula_20
So the area of a regular octagon with side length "a" is given by
formula_21
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\frac{2a + b}{a} = \\frac{a}{b} \\equiv \\delta_S"
},
{
"math_id": 1,
"text": " 2 + \\frac{b}{a} = \\frac{a}{b} \\equiv \\delta_S"
},
{
"math_id": 2,
"text": " 2 + \\cfrac{1}{2 + \\cfrac{1}{2 + \\cfrac{1}{2 + \\ddots}}} =\\delta_S "
},
{
"math_id": 3,
"text": " \\frac{a+b}{a} = \\frac{a}{b} = \\varphi"
},
{
"math_id": 4,
"text": " \\frac{2a+b}{a} = \\frac{a}{b} = \\delta_S."
},
{
"math_id": 5,
"text": " 2+\\frac{b}{a} = \\frac{a}{b} = \\delta_S"
},
{
"math_id": 6,
"text": " 2 + \\frac{1}{\\delta_S} = \\delta_S. "
},
{
"math_id": 7,
"text": "{\\delta_S}^2 - 2\\delta_S - 1 = 0."
},
{
"math_id": 8,
"text": "\\delta_S = 1 + \\sqrt{2} = 2.41421356237\\dots"
},
{
"math_id": 9,
"text": " \\delta_S^{-1} = 1 \\delta_S - 2 = [0;2,2,2,2,2,\\dots] \\approx 0.41421"
},
{
"math_id": 10,
"text": " \\delta_S^0 = 0 \\delta_S + 1 = [1] = 1"
},
{
"math_id": 11,
"text": " \\delta_S^1 = 1 \\delta_S + 0 = [2;2,2,2,2,2,\\dots] \\approx 2.41421"
},
{
"math_id": 12,
"text": " \\delta_S^2 = 2 \\delta_S + 1 = [5;1,4,1,4,1,\\dots] \\approx 5.82842"
},
{
"math_id": 13,
"text": " \\delta_S^3 = 5 \\delta_S + 2 = [14;14,14,14,\\dots] \\approx 14.07107"
},
{
"math_id": 14,
"text": " \\delta_S^4 = 12\\delta_S + 5 = [33;1,32,1,32,\\dots] \\approx 33.97056 "
},
{
"math_id": 15,
"text": " \\delta_S^n = K_n\\delta_S + K_{n-1} "
},
{
"math_id": 16,
"text": " K_n = 2 K_{n-1} + K_{n-2} "
},
{
"math_id": 17,
"text": " \\delta_S^5 = 29\\delta_S + 12 = [82;82,82,82,\\dots] \\approx 82.01219 "
},
{
"math_id": 18,
"text": " K_n = \\frac{1}{2\\sqrt{2}} \\left(\\delta_S^{n+1} - {(2-\\delta_S)}^{n+1}\\right) "
},
{
"math_id": 19,
"text": "\\tan \\frac{\\pi}{8} = \\sqrt{2}-1= \\frac{1}{\\delta_s} "
},
{
"math_id": 20,
"text": "\\cot \\frac{\\pi}{8} = \\tan \\frac{3\\pi}{8} = \\sqrt{2}+1=\\delta_s "
},
{
"math_id": 21,
"text": "A = 2a^2 \\cot \\frac{\\pi}{8} = 2\\delta_s a^2 \\simeq 4.828427 a^2."
}
] | https://en.wikipedia.org/wiki?curid=920526 |
920575 | Hippasus | 5th-century BC Pythagorean philosopher
Hippasus of Metapontum (Greek: Ἵππασος, "Híppasos"; c. 530 – c. 450 BC) was a Greek philosopher and early follower of Pythagoras. Little is known about his life or his beliefs, but he is sometimes credited with the discovery of the existence of irrational numbers. The discovery of irrational numbers is said to have been shocking to the Pythagoreans, and Hippasus is supposed to have drowned at sea, apparently as a punishment from the gods for divulging this and for crediting it to himself instead of to Pythagoras, to whom the Pythagoreans customarily attributed such discoveries. However, the few ancient sources who describe this story either do not mention Hippasus by name (e.g. Pappus) or alternatively relate that Hippasus drowned because he revealed how to construct a dodecahedron inside a sphere. The discovery of irrationality is not specifically ascribed to Hippasus by any ancient writer.
Life.
Little is known about the life of Hippasus. He may have lived in the late 5th century BC, about a century after the time of Pythagoras. Metapontum in Magna Graecia is usually referred to as his birthplace, although according to Iamblichus (3rd century AD) some claim Metapontum to be his birthplace, while others the nearby city of Croton. Hippasus is recorded under the city of Sybaris in Iamblichus's list of each city's Pythagoreans. He also states that Hippasus was the founder of a sect of the Pythagoreans called the "Mathematici" () in opposition to the "Acusmatici" (); but elsewhere he makes him the founder of the "Acusmatici" in opposition to the "Mathematici".
Iamblichus says about the death of Hippasus:
It is related to Hippasus that he was a Pythagorean, and that, owing to his being the first to publish and describe the sphere from the twelve pentagons, he perished at sea for his impiety, but he received credit for the discovery, though really it all belonged to HIM (for in this way they refer to Pythagoras, and they do not call him by his name).
According to Iamblichus's "The life of Pythagoras",
There were also two forms of philosophy, for the two genera of those that pursued it: the "Acusmatici" and the "Mathematici". The latter are acknowledged to be Pythagoreans by the rest but the Mathematici do not admit that the Acusmatici derived their instructions from Pythagoras but from Hippasus. The philosophy of the Acusmatici consisted in auditions unaccompanied with demonstrations and a reasoning process; because it merely ordered a thing to be done in a certain way and that they should endeavor to preserve such other things as were said by him, as divine dogmas. Memory was the most valued faculty. All these auditions were of three kinds; some signifying what a thing is; others what it especially is, others what ought or ought not to be done. (p. 61)
Doctrines.
Aristotle speaks of Hippasus as holding the element of fire to be the cause of all things; and Sextus Empiricus contrasts him with the Pythagoreans in this respect, that he believed the "arche" to be material, whereas they thought it was incorporeal, namely, number. Diogenes Laërtius tells us that Hippasus believed that "there is a definite time which the changes in the universe take to complete, and that the universe is limited and ever in motion." According to one statement, Hippasus left no writings, according to another he was the author of the "Mystic Discourse", written to bring Pythagoras into disrepute.
A scholium on Plato's "Phaedo" notes him as an early experimenter in music theory, claiming that he made use of bronze disks to discover the fundamental musical ratios, 4:3, 3:2, and 2:1.
Irrational numbers.
Hippasus is sometimes credited with the discovery of the existence of irrational numbers, following which he was drowned at sea. Pythagoreans preached that all numbers could be expressed as the ratio of integers, and the discovery of irrational numbers is said to have shocked them. However, the evidence linking the discovery to Hippasus is unclear.
Pappus (4th century AD) merely says that the knowledge of irrational numbers originated in the Pythagorean school, and that the member who first divulged the secret perished by drowning. Iamblichus (3rd century AD) gives a series of inconsistent reports. In one story he explains how a Pythagorean was merely expelled for divulging the nature of the irrational; but he then cites the legend of the Pythagorean who drowned at sea for making known the construction of the regular dodecahedron in the sphere. In another account he tells how it was Hippasus who drowned at sea for betraying the construction of the dodecahedron and taking credit for this construction himself; but in another story this same punishment is meted out to the Pythagorean who divulged knowledge of the irrational. Iamblichus clearly states that the drowning at sea was a punishment from the gods for impious behaviour.
These stories are usually taken together to ascribe the discovery of irrationals to Hippasus, but whether he did or not is uncertain. In principle, the stories can be combined, since it is possible to discover irrational numbers when constructing dodecahedra. Irrationality, by infinite reciprocal subtraction, can be easily seen in the golden ratio of the regular pentagon.
Some scholars in the early 20th century credited Hippasus with the discovery of the irrationality of formula_0, the square root of 2. Plato in his "Theaetetus", describes how Theodorus of Cyrene (c. 400 BC) proved the irrationality of formula_1, formula_2, etc. up to formula_3, which implies that an earlier mathematician had already proved the irrationality of formula_0. Aristotle referred to the method for a proof of the irrationality of formula_0, and a full proof along these same lines is set out in the proposition interpolated at the end of Euclid's Book X, which suggests that the proof was certainly ancient. The method is a proof by contradiction, or reductio ad absurdum, which shows that if the diagonal of a square is assumed to be commensurable with the side, then the same number must be both odd and even.
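A modern rendering of that parity argument (a sketch in the notation of formula_0, not a reconstruction of any ancient text) can be written as follows:
<syntaxhighlight lang="latex">
% If the diagonal and side of a square were commensurable, sqrt(2) would be a ratio of integers.
\begin{align*}
&\text{Assume } \sqrt{2} = p/q \text{ with } p, q \text{ integers sharing no common factor.} \\
&\text{Then } p^2 = 2q^2, \text{ so } p \text{ is even; write } p = 2r. \\
&\text{Then } 4r^2 = 2q^2, \text{ so } q^2 = 2r^2 \text{ and } q \text{ is even as well.} \\
&\text{This contradicts the assumption that } p \text{ and } q \text{ share no common factor.}
\end{align*}
</syntaxhighlight>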
In the hands of modern writers this combination of vague ancient reports and modern guesswork has sometimes evolved into a much more emphatic and colourful tale. Some writers have Hippasus making his discovery while on board a ship, as a result of which his Pythagorean shipmates toss him overboard; while one writer even has Pythagoras himself "to his eternal shame" sentencing Hippasus to death by drowning, for showing "that formula_0 is an irrational number".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt2"
},
{
"math_id": 1,
"text": "\\sqrt3"
},
{
"math_id": 2,
"text": "\\sqrt5"
},
{
"math_id": 3,
"text": "\\sqrt{17}"
}
] | https://en.wikipedia.org/wiki?curid=920575 |
9206499 | Metal–insulator transition | Change between conductive and non-conductive state
Metal–insulator transitions are transitions of a material from a metal (material with good electrical conductivity of electric charges) to an insulator (material where conductivity of charges is quickly suppressed). These transitions can be achieved by tuning various ambient parameters such as temperature, pressure or, in case of a semiconductor, doping.
History.
The basic distinction between metals and insulators was proposed by Hans Bethe, Arnold Sommerfeld and Felix Bloch in 1928-1929. It distinguished between conducting metals (with partially filled bands) and nonconducting insulators. However, in 1937 Jan Hendrik de Boer and Evert Verwey reported that many transition-metal oxides (such as NiO) with a partially filled d-band were poor conductors, often insulating. In the same year, the importance of the electron-electron correlation was stated by Rudolf Peierls. Since then, these materials as well as others exhibiting a transition between a metal and an insulator have been extensively studied, e.g. by Sir Nevill Mott, after whom the insulating state is named Mott insulator.
The first metal-insulator transition to be found was the Verwey transition of magnetite in the 1940s.
Theoretical description.
The classical band structure of solid state physics predicts the Fermi level to lie in a band gap for insulators and in the conduction band for metals, which means metallic behavior is seen for compounds with partially filled bands. However, some compounds have been found which show insulating behavior even for partially filled bands. This is due to the electron-electron correlation, since electrons cannot be seen as noninteracting. Mott considers a lattice model with just one electron per site. Without taking the interaction into account, each site could be occupied by two electrons, one with spin up and one with spin down. Due to the interaction the electrons would then feel a strong Coulomb repulsion, which Mott argued splits the band in two. Having one electron per-site fills the lower band while the upper band remains empty, which suggests the system becomes an insulator. This interaction-driven insulating state is referred to as a Mott insulator. The Hubbard model is one simple model commonly used to describe metal-insulator transitions and the formation of a Mott insulator.
Elementary mechanisms.
Metal–insulator transitions (MIT) and models for approximating them can be classified based on the origin of their transition.
Polarization catastrophe.
The polarization catastrophe model describes the transition of a material from an insulator to a metal. This model considers the electrons in a solid to act as oscillators and the conditions for this transition to occur is determined by the number of oscillators per unit volume of the material. Since every oscillator has a frequency ("ω"0) we can describe the dielectric function of a solid as,
formula_0
where "ε"("ω") is the dielectric function, "N" is the number of oscillators per unit volume, "ω"0 is the fundamental oscillation frequency, m is the oscillator mass, and "ω" is the excitation frequency.
For a material to be a metal, the excitation frequency ("ω") must be zero by definition; setting "ω" = 0 in equation (1) gives the static dielectric constant "ε"s. If we rearrange equation (1) to isolate the number of oscillators per unit volume, we get the critical concentration of oscillators ("N"c) at which "ε"s becomes infinite, indicating a metallic solid and the transition from an insulator to a metal.
formula_1
This expression creates a boundary that defines the transition of a material from an insulator to a metal. This phenomenon is known as the polarization catastrophe.
The polarization catastrophe model also theorizes that, with a high enough density, and thus a low enough molar volume, any solid could become metallic in character. Predicting whether a material will be metallic or insulating can be done by taking the ratio "R"/"V", where "R" is the molar refractivity, sometimes represented by "A", and "V" is the molar volume. In cases where "R"/"V" is less than 1, the material will have non-metallic, or insulating properties, while an "R"/"V" value greater than one yields metallic character.
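A minimal sketch of that ratio test (the numerical values below are placeholders chosen only to exercise the check, not data for any particular material):
<syntaxhighlight lang="python">
def is_metallic(molar_refractivity, molar_volume):
    """Ratio test described above: R/V > 1 suggests metallic character,
    R/V < 1 suggests an insulator."""
    return molar_refractivity / molar_volume > 1

# Placeholder values, both in cm^3/mol
print(is_metallic(11.0, 10.0))   # True  -> predicted to be metallic
print(is_metallic(4.0, 13.0))    # False -> predicted to be insulating
</syntaxhighlight>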
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\epsilon(\\omega)= 1+\\frac{\\frac{Ne^2}{\\epsilon_{0}m}}{\\omega_0^2-\\frac{Ne^2}{3\\epsilon_0m} -\\omega^2-i\\frac\\omega\\tau} "
},
{
"math_id": 1,
"text": "N_{\\mathrm c} = \\frac{3\\epsilon_0 m \\omega_0^2}{e^2} "
}
] | https://en.wikipedia.org/wiki?curid=9206499 |
920668 | Hqx | hqx ("high quality scale") is a set of 3 image upscaling algorithms developed by Maxim Stepin. The algorithms are hq2x, hq3x, and hq4x, which magnify by a factor of 2, 3, and 4 respectively. It was initially created in 2003 for the Super NES emulator ZSNES, and is used in emulators such as Nestopia, F. CEUXSnes9x., and Snes9x.
Algorithm.
The source image's pixels are iterated through from top-left to bottom-right. For each pixel, the surrounding 8 pixels are compared to the color of the source pixel. Shapes are detected by checking for pixels of similar color according to a YUV threshold. hqx uses the YUV color space to calculate color differences, so that differences in brightness are weighted more heavily in order to mimic human perception. This gives a total of formula_0 combinations of similar or dissimilar neighbors. To expand the single pixel into a 2×2, 3×3, or 4×4 block of pixels, the arrangement of neighbors is looked up in a predefined table which contains the necessary interpolation patterns.
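A rough sketch of the neighbour-classification step just described. The bit ordering of the neighbours and the YUV threshold values are illustrative assumptions, not the exact values used by hqx:
<syntaxhighlight lang="python">
def rgb_to_yuv(c):
    r, g, b = c
    y = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma weights
    return y, 0.492 * (b - y), 0.877 * (r - y)

def similar(c1, c2, ty=48, tu=7, tv=6):
    """Compare Y, U, V channel differences against per-channel thresholds
    (the threshold values here are placeholders)."""
    y1, u1, v1 = rgb_to_yuv(c1)
    y2, u2, v2 = rgb_to_yuv(c2)
    return abs(y1 - y2) <= ty and abs(u1 - u2) <= tu and abs(v1 - v2) <= tv

def pattern_index(center, neighbors):
    """Pack the similar/dissimilar status of the 8 neighbours into one byte,
    giving the 0..255 index used to select an interpolation pattern."""
    index = 0
    for bit, n in enumerate(neighbors):
        if not similar(center, n):
            index |= 1 << bit
    return index
</syntaxhighlight>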
The interpolation data in the lookup tables are constrained by the requirement that continuity of line segments must be preserved, while optimizing for smoothness. Generating these 256-filter lookup tables is relatively slow, and is the major source of complexity in the algorithm: the render stage is very simple and fast, and designed to be capable of being performed in real time on a MMX-capable CPU.
In the source code, the interpolation data is represented as preprocessor macros to be inserted into switch case statements, and there is no source code leading to the generation of a lookup table. The author describes the process of generating a look-up table as:
... for each combination the most probable vector representation of the area has to be determined, with the idea of edges between the different colored areas of the image to be preserved, with the edge direction to be as close to a correct one as possible. That vector representation is then rasterised with higher (3x) resolution using anti-aliasing, and the result is stored in the lookup table.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2^8 = 256"
}
] | https://en.wikipedia.org/wiki?curid=920668 |
9207590 | Precedence graph | A precedence graph, also named conflict graph and serializability graph, is used in the context of concurrency control in databases. It is the directed graph representing precedence of transactions in the schedule, as reflected by precedence of conflicting operations in the transactions. A schedule is "conflict-serializable" if and only if its precedence graph of "committed transactions" is "acyclic".
The precedence graph for a schedule S contains a node for each committed transaction in S, and an arc from node Ti to node Tj whenever an operation of Ti precedes and conflicts with an operation of Tj (two operations conflict when they belong to different transactions, access the same data item, and at least one of them is a write).
Cycles of committed transactions can be prevented by aborting an "undecided" (neither committed, nor aborted) transaction on each cycle in the precedence graph of all the transactions, which can otherwise turn into a cycle of committed transactions (and a committed transaction cannot be aborted). One transaction aborted per cycle is both required and sufficient in number to break and eliminate the cycle (more aborts are possible, and can happen under some mechanisms, but are unnecessary for serializability). The probability of cycle generation is typically low, but, nevertheless, such a situation is carefully handled, typically with a considerable amount of overhead, since correctness is involved. Transactions aborted due to serializability violation prevention are "restarted" and executed again immediately.
formula_0
formula_1 formula_2 formula_3 formula_4 formula_5 formula_6 formula_7 formula_8
Precedence graph examples.
Example 2.
A precedence graph of the schedule D, with 3 transactions. As there is a cycle (of length 2; with two edges) through the committed transactions T1 and T2, this schedule (history) is "not" Conflict serializable.
Notice that the commit of Transaction 2 has no bearing on the construction of the precedence graph.
Example 3.
Algorithm to test "Conflict Serializability" of a Schedule S along with an example schedule.
formula_9
or
formula_10 formula_3 formula_4 formula_5 formula_6 formula_7 formula_8
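A compact sketch of that test: build the precedence graph from the conflicting operations and check it for cycles. The schedule encoding below is an assumption made for this example; the transactions and operations are those of the schedule S just shown.
<syntaxhighlight lang="python">
from collections import defaultdict

def precedence_graph(schedule):
    """schedule: the committed transactions' operations in execution order, as
    (transaction, action, item) tuples.  An edge Ti -> Tj is added when an
    operation of Ti precedes and conflicts with an operation of Tj."""
    edges = defaultdict(set)
    for i, (ti, ai, xi) in enumerate(schedule):
        for tj, aj, xj in schedule[i + 1:]:
            if ti != tj and xi == xj and 'W' in (ai, aj):   # conflicting pair
                edges[ti].add(tj)
    return edges

def has_cycle(edges):
    """Depth-first search for a cycle; a cycle means S is not conflict-serializable."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def dfs(u):
        color[u] = GREY
        for v in edges[u]:
            if color[v] == GREY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(color[u] == WHITE and dfs(u) for u in list(edges))

# Schedule S from Example 3: R1(A) W2(A) Com.2 W1(A) Com.1 W3(A) Com.3
S = [(1, 'R', 'A'), (2, 'W', 'A'), (1, 'W', 'A'), (3, 'W', 'A')]
print(has_cycle(precedence_graph(S)))   # True -> not conflict-serializable
</syntaxhighlight>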
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S = \\begin{bmatrix}\nT1 & T2 \\\\\nR(A) & R(A) \\\\\nA=A*5 & R(B) \\\\\nW(A) & B=B+A \\\\\nR(B) & W(B) \\\\\nB=B*10 & \\\\\nW(B) & \\\\\n\n\\end{bmatrix}"
},
{
"math_id": 1,
"text": "D = R1(A)"
},
{
"math_id": 2,
"text": "R2(B)"
},
{
"math_id": 3,
"text": "W2(A)"
},
{
"math_id": 4,
"text": " Com.2"
},
{
"math_id": 5,
"text": " W1(A)"
},
{
"math_id": 6,
"text": " Com.1"
},
{
"math_id": 7,
"text": " W3(A)"
},
{
"math_id": 8,
"text": " Com.3 "
},
{
"math_id": 9,
"text": "S = \\begin{bmatrix}\nT1 & T2 & T3 \\\\\nR(A) & & \\\\\n & W(A) & \\\\\n & Com. & \\\\\nW(A) & & \\\\\nCom. & & \\\\\n & & W(A)\\\\\n & & Com.\\\\\n\\end{bmatrix}"
},
{
"math_id": 10,
"text": "S = R1(A)"
}
] | https://en.wikipedia.org/wiki?curid=9207590 |
920844 | Deck (ship) | Part of a ship or boat
A deck is a permanent covering over a compartment or a hull of a ship. On a boat or ship, the primary or upper deck is the horizontal structure that forms the "roof" of the hull, strengthening it and serving as the primary working surface. Vessels often have more than one level both within the hull and in the superstructure above the primary deck, similar to the floors of a multi-storey building, that are also referred to as decks, as are certain compartments and decks built over specific areas of the superstructure. Decks for some purposes have specific names.
Structure.
The main purpose of the upper or primary deck is structural, and only secondarily to provide weather-tightness and support people and equipment. The deck serves as the lid to the complex box girder which can be identified as the hull. It resists tension, compression, and racking forces. The deck's scantling is usually the same as the topsides, or might be heavier if the deck is expected to carry heavier loads (for example a container ship). The deck will be reinforced around deck fittings such as the capstan, cleats, or bollards.
On ships with more than one level, 'deck' refers to the level itself. The actual floor surface is called the sole; the term 'deck' refers to a structural member tying the ships frames or ribs together over the keel. In modern ships, the interior decks are usually numbered from the primary deck, which is #1, downward and upward. So the first deck below the primary deck will be #2, and the first above the primary deck will be #A2 or #S2 (for "above" or "superstructure"). Some merchant ships may alternatively designate decks below the primary deck, usually machinery spaces, by numbers, and those above it, in the accommodation block, by letters. Ships may also call decks by common names, or (especially on cruise ships) may invent fanciful and romantic names for a specific deck or area of that specific ship, such as the "lido deck" of the Princess Cruises' "Love Boat".
Equipment mounted on deck, such as the ship's wheel, binnacle, fife rails, and so forth, may be collectively referred to as deck furniture. Weather decks in Western designs evolved from having structures fore (forward or front) and aft (rear) of the ship mostly clear; in the 19th century, pilothouses/wheelhouses and deckhouses began to appear, eventually developing into the superstructure of modern ships. Eastern designs developed earlier, with efficient middle decks and minimalist fore and aft cabin structures across a range of designs.
Common names for decks.
In vessels having more than one deck there are various naming conventions, numerically, alphabetically, etc. However, there are also various common historical names and types of decks:
Construction.
Methods in wood.
A traditional wood deck would consist of planks laid fore and aft over beams and along carlins, the seams of which are caulked and paid with tar. A yacht or other fancy boat might then have the deck canvased, with the fabric laid down in a thick layer of paint or sealant, and additional coats painted over. The wash or apron boards form the joint between the deck planking and that of the topsides, and are caulked similarly.
Modern "constructed decks" are used primarily on fiberglass, composite, and cold-molded hulls. The under structure of beams and carlins is the same as above. The decking itself is usually multiple layers of marine-grade plywood, covered over with layers of fibreglass in a plastic resin such as epoxy or polyester overlapped onto the topsides of the hull.
Methods in metal.
Generally speaking, the method outlined for "constructed decks" is most similar to metal decks. The deck plating is laid over metal beams and carlins and tacked temporarily in place. The difficulty in metal construction is avoiding distortion of the plate while welding due to the high heat involved in the process. Welds are usually double pass, meaning each seam is welded twice, a time-consuming process which may take longer than building the wood deck. However, welds result in a waterproof deck which is strong and easily repairable. The deck structure is welded to the hull, making it structurally a single unit.
Because a metal deck, painted to reduce corrosion, can pick up heat from the sun, and be quite slippery and noisy to work on, a layer of wood decking or thick non-skid paint is often applied to its surface.
Methods in fiberglass.
The process for building a deck in fiberglass is the same as for building a hull: a female mould is built, a layer of gel coat is sprayed in, then layers of fiberglass in resin are built up to the required deck thickness (if the deck has a core, the outer skin layers of fiberglass and resin are laid, then the core material, and finally the inner skin layers). The deck is removed from the mould and usually mechanically fastened to the hull.
Fiberglass decks are quite slick with their mirror-smooth surfaces, so a non-skid texture is often moulded into their surface, or non-skid pads glued down in working areas.
Rules of thumb to determine the deck scantlings.
The thickness of the decking affects how strong the hull is, and is directly related to how thick the skin of the hull itself is, which is of course related to how large the vessel is, the kind of work it is expected to do, and the kind of weather it may reasonably be expected to endure. While a naval engineer or architect may have precise methods of determining what the scantlings should be, traditional builders used previous experiences and simpler rules-of-thumb to determine how thick the deck should be built.
The numbers derived from these formulae give a rough figure for the average thickness of materials based on some crude hull measurements. Below the waterline the thickness should be approximately 115% of the result, while upper topsides and decks might be reduced to 85% of the result.
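A rough illustration of how one such rule of thumb might be applied, combining the (√LOA + beam)/16 formula given with this article with the 115%/85% adjustments above. The units (dimensions in feet, thickness in inches) are an assumption, as such traditional rules rarely state them, and this is not a substitute for a proper scantling calculation:
<syntaxhighlight lang="python">
from math import sqrt

def rough_plank_thickness(loa_ft, beam_ft):
    """Rule-of-thumb average skin thickness (sqrt(LOA) + beam) / 16,
    with the below-waterline and topside/deck adjustments from the text.
    Units are assumed: feet in, inches out."""
    base = (sqrt(loa_ft) + beam_ft) / 16
    return {
        "average": base,
        "below_waterline": 1.15 * base,      # ~115% of the result
        "topsides_and_decks": 0.85 * base,   # ~85% of the result
    }

print(rough_plank_thickness(loa_ft=32, beam_ft=10))
</syntaxhighlight>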
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{} = {\\sqrt{\\text{LOA}}+\\text{beam} \\over 16}"
},
{
"math_id": 1,
"text": " {} = \\left[ \\sqrt{\\text{LOA}\\cdot 3.28}+(\\text{beam} \\cdot 3.28)\\cdot 1.58 \\right] "
},
{
"math_id": 2,
"text": "{} = 0.07 + {\\text{LWL}\\over150}"
},
{
"math_id": 3,
"text": "{} = 1.8 + {\\text{LWL}\\over1.8}"
}
] | https://en.wikipedia.org/wiki?curid=920844 |
9209488 | Rabin fingerprint | Fingerprinting algorithm
The Rabin fingerprinting scheme (aka Polynomial fingerprinting) is a method for implementing fingerprints using polynomials over a finite field. It was proposed by Michael O. Rabin.
Scheme.
Given an "n"-bit message "m"0...,"m"n-1, we view it as a polynomial of degree "n"-1 over the finite field GF(2).
formula_0
We then pick a random irreducible polynomial formula_3 of degree "k" over GF(2), and we define the fingerprint of the message "m" to be the remainder formula_1 after division of formula_2 by formula_3 over GF(2), which can be viewed as a polynomial of degree "k" − 1 or as a "k"-bit number.
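A toy rendering of this computation (a Python sketch; the bit-packing convention, with the first message bit as the highest-degree coefficient, is a choice made here for brevity):
<syntaxhighlight lang="python">
def rabin_fingerprint(bits, p):
    """Remainder of the message polynomial modulo an irreducible polynomial p
    over GF(2).  Polynomials are encoded as Python ints whose binary digits
    are the coefficients; XOR plays the role of coefficient-wise subtraction."""
    k = p.bit_length() - 1        # degree of p
    f = 0
    for b in bits:                # shift in one message bit, reducing mod p as we go
        f = (f << 1) | b
        if f >> k:                # degree reached k: subtract p once
            f ^= p
    return f                      # a polynomial of degree < k, i.e. a k-bit fingerprint

# x^3 + x + 1 (0b1011) is irreducible over GF(2), so fingerprints are 3-bit values
print(rabin_fingerprint([1, 0, 1, 1, 0, 1], p=0b1011))   # prints 1
</syntaxhighlight>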
Applications.
Many implementations of the Rabin–Karp algorithm internally use Rabin fingerprints.
The "Low Bandwidth Network Filesystem" (LBFS) from MIT uses Rabin fingerprints to implement variable size shift-resistant blocks.
The basic idea is that the filesystem computes the cryptographic hash of each block in a file. To save on transfers between the client and server,
they compare their checksums and only transfer blocks whose checksums differ. But one problem with this scheme is that a single insertion at the beginning of the file will cause every checksum to change if fixed-sized (e.g. 4 KB) blocks are used. So the idea is to select blocks not based on a specific offset but rather by some property of the block contents. LBFS does this by sliding a 48 byte window over the file and computing the Rabin fingerprint of each window. When the low 13 bits of the fingerprint are zero LBFS calls those 48 bytes a breakpoint and ends the current block and begins a new one. Since the output of Rabin fingerprints are pseudo-random the probability of any given 48 bytes being a breakpoint is formula_4 (1 in 8192). This has the effect of shift-resistant variable size blocks. "Any" hash function could be used to divide a long file into blocks (as long as a cryptographic hash function is then used to find the checksum of each block): but the Rabin fingerprint is an efficient rolling hash, since the computation of the Rabin fingerprint of region "B" can reuse some of the computation of the Rabin fingerprint of region "A" when regions "A" and "B" overlap.
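A sketch of that block-selection idea in Python. A simple multiplicative rolling hash stands in for a true Rabin fingerprint here, and the base and modulus are arbitrary; only the 48-byte window and the low-13-bits-zero breakpoint rule come from the description above:
<syntaxhighlight lang="python">
def chunk_boundaries(data, window=48, mask=(1 << 13) - 1):
    """Slide a window over the data, hash it, and declare a breakpoint whenever
    the low 13 bits of the hash are zero (roughly one block end per 8192 positions)."""
    B, M = 263, 1 << 32                      # illustrative base and modulus
    if len(data) < window:
        return []
    h = 0
    for byte in data[:window]:
        h = (h * B + byte) % M
    drop = pow(B, window - 1, M)             # weight of the byte leaving the window
    boundaries = []
    for i in range(window, len(data) + 1):
        if h & mask == 0:
            boundaries.append(i)             # end the current block here
        if i < len(data):
            h = ((h - data[i - window] * drop) * B + data[i]) % M   # roll the window
    return boundaries
</syntaxhighlight>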
Note that this is a problem similar to that faced by rsync. | [
{
"math_id": 0,
"text": " f(x) = m_0 + m_1 x + \\ldots + m_{n-1} x^{n-1} "
},
{
"math_id": 1,
"text": "r(x)"
},
{
"math_id": 2,
"text": "f(x)"
},
{
"math_id": 3,
"text": "p(x)"
},
{
"math_id": 4,
"text": "2^{-13}"
}
] | https://en.wikipedia.org/wiki?curid=9209488 |
9209712 | Thermoelectric generator | Device that converts heat flux into electrical energy
A thermoelectric generator (TEG), also called a Seebeck generator, is a solid state device that converts heat (driven by temperature differences) directly into electrical energy through a phenomenon called the "Seebeck effect" (a form of thermoelectric effect). Thermoelectric generators function like heat engines, but are less bulky and have no moving parts. However, TEGs are typically more expensive and less efficient. When the same principle is used in reverse to create a heat gradient from an electric current, it is called a thermoelectric (or Peltier) cooler.
Thermoelectric generators could be used in power plants and factories to convert waste heat into additional electrical power and in automobiles as automotive thermoelectric generators (ATGs) to increase fuel efficiency. Radioisotope thermoelectric generators use radioisotopes to generate the required temperature difference to power space probes. Thermoelectric generators can also be used alongside solar panels.
History.
In 1821, Thomas Johann Seebeck discovered that a thermal gradient formed between two different conductors can produce electricity. At the heart of the thermoelectric effect is that a temperature gradient in a conducting material results in heat flow; this results in the diffusion of charge carriers. The flow of charge carriers between the hot and cold regions in turn creates a voltage difference. In 1834, Jean Charles Athanase Peltier discovered the reverse effect, that running an electric current through the junction of two dissimilar conductors could, depending on the direction of the current, cause it to act as a heater or cooler.
In 1909, George Cove was believed to have invented a photovoltaic panel in the form of a concentrated photovoltaic device, but it was actually a concentrated thermoelectric generator built from thermocouples.
Efficiency.
The typical efficiency of TEGs is around 5–8%, although it can be higher. Older devices used bimetallic junctions and were bulky. More recent devices use highly doped semiconductors made from bismuth telluride (Bi2Te3), lead telluride (PbTe), calcium manganese oxide (Ca2Mn3O8), or combinations thereof, depending on application temperature. These are solid-state devices and unlike dynamos have no moving parts, with the occasional exception of a fan or pump to improve heat transfer. If the hot side is at around 1273 K and materials with ZT values of 3–4 are used, the efficiency is approximately 33–37%, allowing TEGs to compete with the efficiencies of certain heat engines.
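Those figures can be roughly reproduced from the standard expression for the maximum conversion efficiency of a thermoelectric generator (Carnot efficiency times a ZT-dependent factor). The sketch below assumes a cold-side temperature of 300 K and applies the quoted ZT directly, whereas the standard formula strictly uses ZT evaluated at the mean temperature; both are simplifications made here for illustration.
<syntaxhighlight lang="python">
from math import sqrt

def teg_max_efficiency(t_hot, t_cold, zt):
    """Maximum TEG efficiency: (1 - Tc/Th) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th)."""
    m = sqrt(1 + zt)
    return (1 - t_cold / t_hot) * (m - 1) / (m + t_cold / t_hot)

for zt in (3, 4):
    print(zt, round(teg_max_efficiency(t_hot=1273, t_cold=300, zt=zt), 3))   # ~0.34 and ~0.38
</syntaxhighlight>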
As of 2021, there are materials (some containing widely available and inexpensive arsenic and tin) reaching a ZT value > 3; monolayer <chem>AsP3</chem> (ZT = 3.36 on the armchair axis); n-type doped <chem>InP3</chem> (ZT = 3.23); p-type doped <chem>SnP3</chem> (ZT = 3.46); p-type doped <chem>SbP3</chem> (ZT = 3.5).
Construction.
Thermoelectric power generators consist of three major components: thermoelectric materials, thermoelectric modules and thermoelectric systems that interface with the heat source.
Thermoelectric materials.
Thermoelectric materials generate power directly from the heat by converting temperature differences into electric voltage. These materials must have both high electrical conductivity (σ) and low thermal conductivity (κ) to be good thermoelectric materials. Having low thermal conductivity ensures that when one side is made hot, the other side stays cold, which helps to generate a large voltage while in a temperature gradient. The measure of the magnitude of electrons flow in response to a temperature difference across that material is given by the Seebeck coefficient (S). The efficiency of a given material to produce a thermoelectric power is simply estimated by its “figure of merit” zT = S2σT/κ.
For many years, the main three semiconductors known to have both low thermal conductivity and high power factor were bismuth telluride (Bi2Te3), lead telluride (PbTe), and silicon germanium (SiGe). Some of these materials have somewhat rare elements which make them expensive.
Today, the thermal conductivity of semiconductors can be lowered without affecting their high electrical properties using nanotechnology. This can be achieved by creating nanoscale features such as particles, wires or interfaces in bulk semiconductor materials. However, the manufacturing processes of nano-materials are still challenging.
Thermoelectric advantages.
Thermoelectric generators are all-solid-state devices that do not require any fluids for fuel or cooling, making them non-orientation dependent allowing for use in zero-gravity or deep-sea applications. The solid-state design allows for operation in severe environments. Thermoelectric generators have no moving parts which produce a more reliable device that does not require maintenance for long periods. The durability and environmental stability have made thermoelectrics a favorite for NASA's deep space explorers among other applications. One of the key advantages of thermoelectric generators outside of such specialized applications is that they can potentially be integrated into existing technologies to boost efficiency and reduce environmental impact by producing usable power from waste heat.
Thermoelectric module.
A thermoelectric module is a circuit containing thermoelectric materials which generate electricity from heat directly. A thermoelectric module consists of two dissimilar thermoelectric materials joined at their ends: an n-type (with negative charge carriers), and a p-type (with positive charge carriers) semiconductor. Direct electric current will flow in the circuit when there is a temperature difference between the ends of the materials. Generally, the current magnitude is directly proportional to the temperature difference:
formula_0
where formula_1 is the local conductivity, S is the Seebeck coefficient (also known as thermopower), a property of the local material, and formula_2 is the temperature gradient.
In application, thermoelectric modules in power generation work in very tough mechanical and thermal conditions. Because they operate in a very high-temperature gradient, the modules are subject to large thermally induced stresses and strains for long periods. They also are subject to mechanical fatigue caused by a large number of thermal cycles.
Thus, the junctions and materials must be selected so that they survive these tough mechanical and thermal conditions. Also, the module must be designed such that the two thermoelectric materials are thermally in parallel, but electrically in series. The efficiency of a thermoelectric module is greatly affected by the geometry of its design.
Thermoelectric design.
Thermoelectric generators are made of several thermopiles, each consisting of many thermocouples made of a connected n-type and p-type material. The arrangement of the thermocouples is typically in three main designs: planar, vertical, and mixed. Planar design involves thermocouples put onto a substrate horizontally between the heat source and cool side, resulting in the ability to create longer and thinner thermocouples, thereby increasing the thermal resistance and temperature gradient and eventually increasing voltage output. Vertical design has thermocouples arranged vertically between the hot and cool plates, leading to high integration of thermocouples as well as a high output voltage, making this design the most widely-used design commercially. The mixed design has the thermocouples arranged laterally on the substrate while the heat flow is vertical between plates. Microcavities under the hot contacts of the device allow for a temperature gradient, which allows for the substrate’s thermal conductivity to affect the gradient and efficiency of the device.
For microelectromechanical systems, TEGs can be designed on the scale of handheld devices to use body heat in the form of thin films. Flexible TEGs for wearable electronics are able to be made with novel polymers through additive manufacturing or thermal spraying processes. Cylindrical TEGs for using heat from vehicle exhaust pipes can also be made using circular thermocouples arranged in a cylinder. Many designs for TEGs can be made for the different devices they are applied to.
Thermoelectric systems.
Using thermoelectric modules, a thermoelectric system generates power by taking in heat from a source such as a hot exhaust flue. To operate, the system needs a large temperature gradient, which is not easy in real-world applications. The cold side must be cooled by air or water. Heat exchangers are used on both sides of the modules to supply this heating and cooling.
There are many challenges in designing a reliable TEG system that operates at high temperatures. Achieving high efficiency in the system requires extensive engineering design to balance between the heat flow through the modules and maximizing the temperature gradient across them. To do this, designing heat exchanger technologies in the system is one of the most important aspects of TEG engineering. In addition, the system requires to minimize the thermal losses due to the interfaces between materials at several places. Another challenging constraint is avoiding large pressure drops between the heating and cooling sources.
If AC power is required (such as for powering equipment designed to run from AC mains power), the DC power from the TE modules must be passed through an inverter, which lowers efficiency and adds to the cost and complexity of the system.
Materials for TEG.
Only a few materials known to date have been identified as thermoelectric materials. Most thermoelectric materials today have a zT, the figure of merit, value of around 1, such as in bismuth telluride (Bi2Te3) at room temperature and lead telluride (PbTe) at 500–700 K. However, in order to be competitive with other power generation systems, TEG materials should have a zT of 2–3. Most research in thermoelectric materials has focused on increasing the Seebeck coefficient (S) and reducing the thermal conductivity, especially by manipulating the nanostructure of the thermoelectric materials. Because both the thermal and the electrical conductivity depend on the charge carriers, new approaches are needed to reconcile the competing requirements of high electrical conductivity and low thermal conductivity.
When selecting materials for thermoelectric generation, a number of other factors need to be considered. During operation, ideally, the thermoelectric generator has a large temperature gradient across it. Thermal expansion will then introduce stress in the device which may cause fracture of the thermoelectric legs or separation from the coupling material. The mechanical properties of the materials must be considered and the coefficient of thermal expansion of the n and p-type material must be matched reasonably well. In segmented thermoelectric generators, the material's compatibility must also be considered to avoid incompatibility of relative current, defined as the ratio of electrical current to diffusion heat current, between segment layers.
A material's compatibility factor is defined as
formula_3.
When the compatibility factor from one segment to the next differs by more than a factor of about two, the device will not operate efficiently. The material parameters determining s (as well as zT) are temperature-dependent, so the compatibility factor may change from the hot side to the cold side of the device, even in one segment. This behavior is referred to as self-compatibility and may become important in devices designed for wide-temperature application.
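A small illustration of that factor-of-two rule, as a Python sketch; the segment values below are placeholders, not measured material data:
<syntaxhighlight lang="python">
from math import sqrt

def compatibility_factor(zt, seebeck_v_per_k, temperature_k):
    """s = (sqrt(1 + zT) - 1) / (S * T), as defined above."""
    return (sqrt(1 + zt) - 1) / (seebeck_v_per_k * temperature_k)

# Placeholder segments: (zT, Seebeck coefficient in V/K, operating temperature in K)
s_cold_segment = compatibility_factor(1.0, 200e-6, 600)
s_hot_segment  = compatibility_factor(0.8, 150e-6, 900)
ratio = max(s_cold_segment, s_hot_segment) / min(s_cold_segment, s_hot_segment)
print(ratio < 2)   # True -> within the rule-of-thumb compatibility range
</syntaxhighlight>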
In general, thermoelectric materials can be categorized into conventional and new materials:
Conventional materials.
Many TEG materials are employed in commercial applications today. These materials can be divided into three groups based on the temperature range of operation:
Although these materials still remain the cornerstone for commercial and practical applications in thermoelectric power generation, significant advances have been made in synthesizing new materials and fabricating material structures with improved thermoelectric performance. Recent research has focused on improving the material’s figure-of-merit (zT), and hence the conversion efficiency, by reducing the lattice thermal conductivity.
New materials.
Researchers are trying to develop new thermoelectric materials for power generation by improving the figure-of-merit zT. One example of these materials is the semiconductor compound ß-Zn4Sb3, which possesses an exceptionally low thermal conductivity and exhibits a maximum zT of 1.3 at a temperature of 670K. This material is also relatively inexpensive and stable up to this temperature in a vacuum, and can be a good alternative in the temperature range between materials based on Bi2Te3 and PbTe. Among the most exciting developments in thermoelectric materials was the development of single crystal tin selenide which produced a record zT of 2.6 in one direction. Other new materials of interest include Skutterudites, Tetrahedrites, and rattling ions crystals.
Besides improving the figure-of-merit, there is an increasing focus on developing new materials by increasing the electrical power output, decreasing cost and developing environmentally friendly materials. For example, when the fuel cost is low or almost free, such as in waste heat recovery, then the cost per watt is only determined by the power per unit area and the operating period. As a result, a search has been initiated for materials with high power output rather than conversion efficiency. For example, the rare-earth compound YbAl3 has a low figure-of-merit, but it has a power output of at least double that of any other material, and can operate over the temperature range of a waste heat source.
Novel processing.
To increase the figure of merit (zT), a material’s thermal conductivity should be minimized while its electrical conductivity and Seebeck coefficient is maximized. In most cases, methods to increase or decrease one property result in the same effect on other properties due to their interdependence. A novel processing technique exploits the scattering of different phonon frequencies to selectively reduce lattice thermal conductivity without the typical negative effects on electrical conductivity from the simultaneous increased scattering of electrons. In a bismuth antimony tellurium ternary system, liquid-phase sintering is used to produce low-energy semicoherent grain boundaries, which do not have a significant scattering effect on electrons. The breakthrough is then applying a pressure to the liquid in the sintering process, which creates a transient flow of the Te rich liquid and facilitates the formation of dislocations that greatly reduce the lattice conductivity. The ability to selectively decrease the lattice conductivity results in reported zT value of 1.86, which is a significant improvement over the current commercial thermoelectric generators with zT ~ 0.3–0.6. These improvements highlight the fact that in addition to the development of novel materials for thermoelectric applications, using different processing techniques to design microstructure is a viable and worthwhile effort. In fact, it often makes sense to work to optimize both composition and microstructure.
Uses.
Thermoelectric generators (TEG) have a variety of applications. Frequently, thermoelectric generators are used for low power remote applications or where bulkier but more efficient heat engines such as Stirling engines would not be possible. Unlike heat engines, the solid state electrical components typically used to perform thermal to electric energy conversion have no moving parts. The thermal to electric energy conversion can be performed using components that require no maintenance, have inherently high reliability, and can be used to construct generators with long service-free lifetimes. This makes thermoelectric generators well suited for equipment with low to modest power needs in remote uninhabited or inaccessible locations such as mountaintops, the vacuum of space, or the deep ocean.
The main uses of thermoelectric generators are:
Practical limitations.
Besides low efficiency and relatively high cost, practical problems exist in using thermoelectric devices in certain types of applications resulting from a relatively high electrical output resistance, which increases self-heating, and a relatively low thermal conductivity, which makes them unsuitable for applications where heat removal is critical, as with heat removal from an electrical device such as microprocessors.
Future market.
While TEG technology has been used in military and aerospace applications for decades, new TE materials and systems are being developed to generate power using low or high temperatures waste heat, and that could provide a significant opportunity in the near future. These systems can also be scalable to any size and have lower operation and maintenance cost.
The global market for thermoelectric generators is estimated to be US$320 million in 2015 and US$472 million in 2021; up to US$1.44 billion by 2030 with a CAGR of 11.8%. Today, North America captures 66% of the market share and it will continue to be the biggest market in the near future. However, Asia-Pacific and European countries are projected to grow at relatively higher rates. A study found that the Asia-Pacific market would grow at a Compound Annual Growth Rate (CAGR) of 18.3% in the period from 2015 to 2020 due to the high demand of thermoelectric generators by the automotive industries to increase overall fuel efficiency, as well as the growing industrialization in the region.
Small scale thermoelectric generators are also in the early stages of investigation in wearable technologies to reduce or replace charging and boost charge duration. Recent studies focused on the novel development of a flexible inorganic thermoelectric, silver selenide, on a nylon substrate. Thermoelectrics represent particular synergy with wearables by harvesting energy directly from the human body creating a self-powered device. One project used n-type silver selenide on a nylon membrane. Silver selenide is a narrow bandgap semiconductor with high electrical conductivity and low thermal conductivity, making it perfect for thermoelectric applications.
Low power TEG or "sub-watt" (i.e. generating up to 1 Watt peak) market is a growing part of the TEG market, capitalizing on the latest technologies. Main applications are sensors, low power applications and more globally Internet of things applications. A specialized market research company indicated that 100,000 units have been shipped in 2014 and expects 9 million units per year by 2020.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf J = -\\sigma S \\nabla T"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "\\nabla T"
},
{
"math_id": 3,
"text": "s = \\frac{\\sqrt{1+zT}-1}{ST} "
}
] | https://en.wikipedia.org/wiki?curid=9209712 |
920978 | Demand curve | Graph of how much of something a consumer would buy at a certain price
A demand curve is a graph depicting the inverse demand function, a relationship between the price of a certain commodity (the "y"-axis) and the quantity of that commodity that is demanded at that price (the "x"-axis). Demand curves can be used either for the price-quantity relationship for an individual consumer (an individual demand curve), or for all consumers in a particular market (a market demand curve).
It is generally assumed that demand curves slope down, as shown in the adjacent image. This is because of the law of demand: for most goods, the quantity demanded falls if the price rises. Certain unusual situations do not follow this law. These include Veblen goods, Giffen goods, and speculative bubbles where buyers are attracted to a commodity if its price rises.
Demand curves are used to estimate behaviour in competitive markets and are often combined with supply curves to find the equilibrium price (the price at which sellers together are willing to sell the same amount as buyers together are willing to buy, also known as market clearing price) and the equilibrium quantity (the amount of that good or service that will be produced and bought without surplus/excess supply or shortage/excess demand) of that market.
Movement "along the demand curve" refers to how the quantity demanded changes when the price changes.
Shift of the demand curve as a whole occurs when a factor other than price causes the demand curve itself to translate along the x-axis; this may be associated with an advertising campaign or a perceived change in the quality of the good.
Demand curves are estimated by a variety of techniques. The usual method is to collect data on past prices, quantities, and variables such as consumer income and product quality that affect demand and apply statistical methods, variants on multiple regression. Consumer surveys and experiments are alternative sources of data. For the shapes of a variety of goods' demand curves, see the article price elasticity of demand.
Shape of the demand curve.
In most circumstances the demand curve has a negative slope, and therefore slopes downwards. This is due to the law of demand, which states that there is an inverse relationship between the price of a commodity (a good or a service) and the quantity demanded. As the price rises, the quantity demanded falls, and as the price falls, the quantity demanded rises.
Demand curves are often graphed as straight lines, where "a" and "b" are parameters:
formula_0.
The constant "a" embodies the effects of all factors other than price that affect demand. If income were to change, for example, the effect of the change would be represented by a change in the value of "a" and be reflected graphically as a shift of the demand curve. The constant "b" is the slope of the demand curve and shows how the price of the good affects the quantity demanded.
The graph of the demand curve uses the inverse demand function in which price is expressed as a function of quantity. The standard form of the demand equation can be converted to the inverse equation by solving for P:
formula_1.
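A small numerical sketch of such a linear specification and its inverse. The functional form Q = a + bP (with b negative) follows the description of "a" and "b" above; the particular parameter values are placeholders:
<syntaxhighlight lang="python">
def quantity_demanded(p, a=60.0, b=-3.0):
    """Linear demand Q = a + b*P with b < 0 (illustrative parameters)."""
    return a + b * p

def inverse_demand(q, a=60.0, b=-3.0):
    """Solve Q = a + b*P for P: the inverse demand function that is actually plotted,
    with price on the y-axis."""
    return (q - a) / b

print(quantity_demanded(10))   # 30.0 units demanded at a price of 10
print(inverse_demand(30))      # 10.0: the price at which 30 units are demanded
</syntaxhighlight>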
Curvature.
The demand is called "convex" (with respect to the origin) if the (generally down-sloping) curve bends upwards, "concave" otherwise.
The demand curvature is fundamentally hard to estimate from the empirical data, with some researchers suggesting that demand with high convexity is practically improbable.
Three categories of demand curves.
The three categories are the individual consumer's demand curve, the individual firm's (enterprise) demand curve, and the industry (market) demand curve. The slope of the industry (market) demand curve is greater than the slope of the individual demand curve, while the slope of an individual firm's demand curve is less than the slope of the industry demand curve.
Shift of a demand curve.
The shift of a demand curve takes place when there is a change in any non-price determinant of demand, resulting in a new demand curve. Non-price determinants of demand are those things that will cause demand to change even if prices remain the same—in other words, the things whose changes might cause a consumer to buy more or less of a good even if the good's own price remained unchanged.
Some of the more important factors are the prices of related goods (both substitutes and complements), income, population, and expectations. However, demand is the willingness and ability of a consumer to purchase a good "under the prevailing circumstances"; so, any circumstance that affects the consumer's willingness or ability to buy the good or service in question can be a non-price determinant of demand. As an example, weather could be a factor in the demand for beer at a baseball game.
When income increases, the demand curve for normal goods shifts outward as more will be demanded at all prices, while the demand curve for inferior goods shifts inward due to the increased attainability of superior substitutes. With respect to related goods, when the price of a good (e.g. a hamburger) rises, the demand curve for substitute goods (e.g. chicken) shifts out, while the demand curve for complementary goods (e.g. ketchup) shifts in (i.e. there is more demand for substitute goods as they become more attractive in terms of value for money, while demand for complementary goods contracts in response to the contraction of quantity demanded of the underlying good).
Both complementary goods and substitutes thus affect individual demand and, in aggregate, market demand.
Factors affecting market demand.
In addition to the factors which can affect individual demand there are three factors that can cause the market demand curve to shift:
Some circumstances which can cause the demand curve to shift in include:
Movement along a demand curve.
There is movement "along" a demand curve when a change in price causes the quantity demanded to change. It is important to distinguish between movement along a demand curve, and a shift in a demand curve. Movements along a demand curve happen only when the price of the good changes. When a non-price determinant of demand changes, the curve shifts. These "other variables" are part of the demand function. They are "merely lumped into the intercept term of a simple linear demand function." Thus a change in a non-price determinant of demand is reflected in a change in the x-intercept, causing the curve to shift along the x-axis.
Price elasticity of demand.
The price elasticity of demand is a measure of the sensitivity of the quantity variable, Q, to changes in the price variable, P. Its value answers the question of how much the quantity will change in percentage terms after a 1% change in the price. It is thus important in determining how revenue will change. The elasticity is negative because when the price rises, the quantity demanded falls, a consequence of the law of demand.
The elasticity of demand indicates how sensitive the demand for a good is to a price change. If the elasticity's absolute value is between zero and 1, demand is said to be inelastic; if it equals 1, demand is "unitary elastic"; if it is greater than 1, demand is elastic. A small absolute value (inelastic demand) implies that changes in price have little influence on demand. A high elasticity indicates that consumers will respond to a price rise by buying much less of the good. For examples of elasticities of particular goods, see the article section, "Selected price elasticities".
The elasticity of demand usually varies depending on the price. If the demand curve is linear, demand is elastic at high prices and inelastic at low prices, with unitary elasticity somewhere in between. There does exist a family of demand curves with constant elasticity for all prices. They have the demand equation formula_2, where "c" is the elasticity of demand and "a" is a parameter for the size of the market. These demand curves are smoothly curving, with steep slopes for high values of price and gentle slopes for low values.
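As an illustration of these elasticity patterns (a Python sketch; the two demand functions and the prices are invented for the example), the point elasticity (dQ/dP)·(P/Q) can be approximated numerically for a linear and for a constant-elasticity demand curve:

    # Point elasticity of demand: (dQ/dP) * (P/Q), approximated by a central difference
    def elasticity(demand, price, h=1e-6):
        q = demand(price)
        dq_dp = (demand(price + h) - demand(price - h)) / (2 * h)
        return dq_dp * price / q

    linear = lambda p: 100.0 - 2.0 * p            # Q = a + bP with a = 100, b = -2
    constant_elastic = lambda p: 50.0 * p ** -1.5  # Q = a*P^c with c = -1.5

    print(elasticity(linear, 10.0))             # -0.25: inelastic at a low price
    print(elasticity(linear, 40.0))             # -4.0:  elastic at a high price
    print(elasticity(constant_elastic, 10.0))   # -1.5 at every price, by construction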
Taxes and subsidies.
A sales tax on the commodity does not directly change the demand curve, if the price axis in the graph represents the price including tax. Similarly, a subsidy on the commodity does not directly change the demand curve, if the price axis in the graph represents the price after deduction of the subsidy.
If the price axis in the graph represents the price before addition of tax and/or subtraction of subsidy then the demand curve moves inward when a tax is introduced, and outward when a subsidy is introduced.
Derived demand.
The demand for goods can be further divided into the demand markets for final and intermediate goods. An intermediate good is a good utilized in the process of creating another good, the final good. It is important to note that in many circumstances the cooperation of several inputs yields a final good, and thus the demand for these inputs is "derived" from the demand for the final product; this concept is known as derived demand. The relationship between the intermediate goods and the final good is direct and positive: as demand for a final product increases, demand for the intermediate goods used to make it increases as well.
In order to construct a derived demand curve, specific assumptions must be made and values held constant. The supply curves for other inputs, demand curve for the final good, and production conditions must all be held constant to ascertain an effective derived demand curve.
References.
| [
{
"math_id": 0,
"text": "Q = a+bP\\text{ where }b<0"
},
{
"math_id": 1,
"text": "P = \\frac{Q - a}{b}"
},
{
"math_id": 2,
"text": "Q=aP^{c}"
}
] | https://en.wikipedia.org/wiki?curid=920978 |
9210114 | Rhind Mathematical Papyrus | Ancient Egyptian mathematical document
The Rhind Mathematical Papyrus (RMP; also designated as papyrus British Museum 10057, pBM 10058, and Brooklyn Museum 37.1784Ea-b) is one of the best known examples of ancient Egyptian mathematics.
It is one of two well-known mathematical papyri, along with the Moscow Mathematical Papyrus. The Rhind Papyrus is the larger, but younger, of the two.
In the papyrus' opening paragraphs Ahmes presents the papyrus as giving "Accurate reckoning for inquiring into things, and the knowledge of all things, mysteries ... all secrets". He continues:
This book was copied in regnal year 33, month 4 of Akhet, under the majesty of the King of Upper and Lower Egypt, Awserre, given life, from an ancient copy made in the time of the King of Upper and Lower Egypt Nimaatre. The scribe Ahmose writes this copy.
Several books and articles about the Rhind Mathematical Papyrus have been published, and a handful of these stand out. "The Rhind Papyrus" was published in 1923 by the English Egyptologist T. Eric Peet and contains a discussion of the text that followed Francis Llewellyn Griffith's Book I, II and III outline. Chace published a compendium in 1927–29 which included photographs of the text. A more recent overview of the Rhind Papyrus was published in 1987 by Robins and Shute.
History.
The Rhind Mathematical Papyrus dates to the Second Intermediate Period of Egypt. It was copied by the scribe Ahmes (i.e., Ahmose; "Ahmes" is an older transcription favoured by historians of mathematics) from a now-lost text from the reign of the 12th dynasty king Amenemhat III.
It dates to around 1550 BC. The document is dated to Year 33 of the Hyksos king Apophis and also contains a separate later historical note on its verso likely dating from "Year 11" of his successor, Khamudi.
Alexander Henry Rhind, a Scottish antiquarian, purchased two parts of the papyrus in 1858 in Luxor, Egypt; it was stated to have been found in "one of the small buildings near the Ramesseum", near Luxor.
The British Museum, where the majority of the papyrus is now kept, acquired it in 1865 along with the Egyptian Mathematical Leather Roll, also owned by Henry Rhind.
Fragments of the text were independently purchased in Luxor by the American Egyptologist Edwin Smith in the mid-1860s, were donated by his daughter in 1906 to the New York Historical Society, and are now held by the Brooklyn Museum. A central section of the papyrus is missing.
The papyrus began to be transliterated and mathematically translated in the late 19th century. The mathematical-translation aspect remains incomplete in several respects.
Books.
Book I – Arithmetic and Algebra.
The first part of the Rhind papyrus consists of reference tables and a collection of 21 arithmetic and 20 algebraic problems. The problems start out with simple fractional expressions, followed by completion ("sekem") problems and more involved linear equations ("aha" problems).
The first part of the papyrus is taken up by the 2/"n" table. The fractions 2/"n" for odd "n" ranging from 3 to 101 are expressed as sums of unit fractions. For example, formula_0. The decomposition of 2/"n" into unit fractions is never more than 4 terms long as in for example:
formula_1
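How the scribe chose these particular decompositions is still debated. Purely as a point of comparison, and not as the papyrus's method, the classical greedy (Fibonacci-Sylvester) procedure also expands a fraction into unit fractions, but it generally picks different terms; a short Python sketch:

    from fractions import Fraction
    from math import ceil

    def greedy_unit_fraction_denominators(frac):
        """Greedy (Fibonacci-Sylvester) expansion into unit fractions, returned as denominators.
        Shown for comparison only; it does not reproduce the papyrus's choices."""
        denominators = []
        while frac > 0:
            d = ceil(1 / frac)            # smallest n such that 1/n does not exceed the remainder
            denominators.append(d)
            frac -= Fraction(1, d)
        return denominators

    print(greedy_unit_fraction_denominators(Fraction(2, 15)))   # [8, 120]: 1/8 + 1/120, not the table's 1/10 + 1/30
    print(greedy_unit_fraction_denominators(Fraction(2, 101)))  # [51, 5151]: two terms, not the table's four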
This table is followed by a second, much smaller table of fractional expressions for the numbers 1 through 9 divided by 10. For instance the division of 7 by 10 is recorded as:
7 divided by 10 yields 2/3 + 1/30
After these two tables, the papyrus records 91 problems altogether, which have been designated by moderns as problems (or numbers) 1–87, including four other items which have been designated as problems 7B, 59B, 61B and 82B. Problems 1–7, 7B and 8–40 are concerned with arithmetic and elementary algebra.
Problems 1–6 compute divisions of a certain number of loaves of bread by 10 men and record the outcome in unit fractions. Problems 7–20 show how to multiply the expressions 1 + 1/2 + 1/4 = 7/4, and 1 + 2/3 + 1/3 = 2 by different fractions.
Problems 21–23 are problems in completion, which in modern notation are simply subtraction problems. Problems 24–34 are ‘‘aha’’ problems; these are linear equations. Problem 32 for instance corresponds (in modern notation) to solving x + 1/3 x + 1/4 x = 2 for x. Problems 35–38 involve divisions of the heqat, which is an ancient Egyptian unit of volume. Beginning at this point, assorted units of measurement become much more important throughout the remainder of the papyrus, and indeed a major consideration throughout the rest of the papyrus is dimensional analysis. Problems 39 and 40 compute the division of loaves and use arithmetic progressions.
Book II – Geometry.
The second part of the Rhind papyrus, being problems 41–59, 59B and 60, consists of geometry problems. Peet referred to these problems as "mensuration problems".
Volumes.
Problems 41–46 show how to find the volume of both cylindrical and rectangular granaries. In problem 41 Ahmes computes the volume of a cylindrical granary. Given the diameter d and the height h, the volume V is given by:
formula_2
In modern mathematical notation (and using d = 2r) this gives formula_3. The fractional term 256/81 approximates the value of π as being 3.1605..., an error of less than one percent.
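This can be checked numerically (a Python sketch; the diameter and height values are chosen only for illustration):

    import math

    def granary_volume_rmp(d, h):
        """Volume rule of the papyrus: V = [(1 - 1/9) * d]^2 * h."""
        return ((1 - 1/9) * d) ** 2 * h

    def granary_volume_exact(d, h):
        """Modern cylinder volume: V = pi * (d/2)^2 * h."""
        return math.pi * (d / 2) ** 2 * h

    implied_pi = 4 * (8 / 9) ** 2               # 256/81 = 3.1604938...
    error = (implied_pi - math.pi) / math.pi
    print(implied_pi, error)                    # relative error of about 0.6 percent

    print(granary_volume_rmp(9, 10))            # 640 cubic cubits for d = 9, h = 10 (illustrative values)
    print(granary_volume_exact(9, 10))          # about 636.17 cubic cubits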
Problem 47 is a table with fractional equalities which represent the ten situations where the physical volume quantity of "100 quadruple heqats" is divided by each of the multiples of ten, from ten through one hundred. The quotients are expressed in terms of Horus eye fractions, sometimes also using a much smaller unit of volume known as a "quadruple ro". The quadruple heqat and the quadruple ro are units of volume derived from the simpler heqat and ro, such that these four units of volume satisfy the following relationships: 1 quadruple heqat = 4 heqat = 1280 ro = 320 quadruple ro. Thus,
100/10 quadruple heqat = 10 quadruple heqat
100/20 quadruple heqat = 5 quadruple heqat
100/30 quadruple heqat = (3 + 1/4 + 1/16 + 1/64) quadruple heqat + (1 + 2/3) quadruple ro
100/40 quadruple heqat = (2 + 1/2) quadruple heqat
100/50 quadruple heqat = 2 quadruple heqat
100/60 quadruple heqat = (1 + 1/2 + 1/8 + 1/32) quadruple heqat + (3 + 1/3) quadruple ro
100/70 quadruple heqat = (1 + 1/4 + 1/8 + 1/32 + 1/64) quadruple heqat + (2 + 1/14 + 1/21 + 1/42) quadruple ro
100/80 quadruple heqat = (1 + 1/4) quadruple heqat
100/90 quadruple heqat = (1 + 1/16 + 1/32 + 1/64) quadruple heqat + (1/2 + 1/18) quadruple ro
100/100 quadruple heqat = 1 quadruple heqat
Areas.
Problems 48–55 show how to compute an assortment of areas. Problem 48 is notable in that it succinctly computes the area of a circle by approximating π. Specifically, problem 48 explicitly reinforces the convention (used throughout the geometry section) that "a circle's area stands to that of its circumscribing square in the ratio 64/81." Equivalently, the papyrus approximates π as 256/81, as was already noted above in the explanation of problem 41.
Other problems show how to find the area of rectangles, triangles and trapezoids.
Pyramids.
The final six problems are related to the slopes of pyramids.
A seked problem is reported as follows:
If a pyramid is 250 cubits high and the side of its base 360 cubits long, what is its "seked"?
The solution to the problem is given as the ratio of half the side of the base of the pyramid to its height, or the run-to-rise ratio of its face. In other words, the quantity found for the seked is the cotangent of the angle between the base of the pyramid and its face.
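Worked through numerically for the problem quoted above (a Python sketch; the conversion assumes the conventional 7 palms to the royal cubit):

    from fractions import Fraction

    height = Fraction(250)          # cubits
    base_side = Fraction(360)       # cubits

    run_over_rise = (base_side / 2) / height   # 180/250 = 18/25 cubits of run per cubit of rise
    seked_in_palms = run_over_rise * 7         # 7 palms per cubit (conventional assumption)
    print(seked_in_palms)                      # 126/25, i.e. 5 + 1/25 palms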
Book III – Miscellany.
The third part of the Rhind papyrus consists of the remainder of the 91 problems, being 61, 61B, 62–82, 82B, 83–84, and "numbers" 85–87, which are items that are not mathematical in nature. This final section contains more complicated tables of data (which frequently involve Horus eye fractions), several "pefsu" problems which are elementary algebraic problems concerning food preparation, and even an amusing problem (79) which is suggestive of geometric progressions, geometric series, and certain later problems and riddles in history. Problem 79 explicitly cites, "seven houses, 49 cats, 343 mice, 2401 ears of spelt, 16807 hekats." In particular problem 79 concerns a situation in which 7 houses each contain seven cats, which all eat seven mice, each of which would have eaten seven ears of grain, each of which would have produced seven measures of grain. The third part of the Rhind papyrus is therefore a kind of miscellany, building on what has already been presented.
Problem 61 is concerned with multiplications of fractions. Problem 61B, meanwhile, gives a general expression for computing 2/3 of 1/n, where n is odd. In modern notation the formula given is
formula_4
The technique given in 61B is closely related to the derivation of the 2/n table.
Problems 62–68 are general problems of an algebraic nature. Problems 69–78 are all "pefsu" problems in some form or another. They involve computations regarding the strength of bread and beer, with respect to certain raw materials used in their production.
Problem 79 sums five terms in a geometric progression. Its language is strongly suggestive of the more modern riddle and nursery rhyme "As I was going to St Ives".
Problems 80 and 81 compute Horus eye fractions of hinu (or heqats). The last four mathematical items, problems 82, 82B and 83–84, compute the amount of feed necessary for various animals, such as fowl and oxen. However, these problems, especially 84, are plagued by pervasive ambiguity, confusion, and simple inaccuracy.
The final three items on the Rhind papyrus are designated as "numbers" 85–87, as opposed to "problems", and they are scattered widely across the papyrus's back side, or verso. They are, respectively, a small phrase which ends the document (and has a few possibilities for translation, given below), a piece of scrap paper unrelated to the body of the document, used to hold it together (yet containing words and Egyptian fractions which are by now familiar to a reader of the document), and a small historical note which is thought to have been written some time after the completion of the body of the papyrus's writing. This note is thought to describe events during the "Hyksos domination", a period of external interruption in ancient Egyptian society which is closely associated with its Second Intermediate Period. With these non-mathematical yet historically and philologically intriguing errata, the papyrus's writing comes to an end.
Unit concordance.
Much of the Rhind Papyrus's material is concerned with Ancient Egyptian units of measurement and especially the dimensional analysis used to convert between them. A concordance of units of measurement used in the papyrus is given in the image.
Content.
This table summarizes the content of the Rhind Papyrus by means of a concise modern paraphrase. It is based upon the two-volume exposition of the papyrus which was published by Arnold Buffum Chace in 1927, and in 1929. In general, the papyrus consists of four sections: a title page, the 2/n table, a tiny "1–9/10 table", and 91 problems, or "numbers". The latter are numbered from 1 through 87 and include four mathematical items which have been designated by moderns as problems 7B, 59B, 61B, and 82B. Numbers 85–87, meanwhile, are not mathematical items forming part of the body of the document, but instead are respectively: a small phrase ending the document, a piece of "scrap-paper" used to hold the document together (having already contained unrelated writing), and a historical note which is thought to describe a time period shortly after the completion of the body of the papyrus. These three latter items are written on disparate areas of the papyrus's verso (back side), far away from the mathematical content. Chace therefore differentiates them by styling them as "numbers" as opposed to "problems", like the other 88 numbered items.
References.
| [
{
"math_id": 0,
"text": "\\frac{2}{15} = \\frac{1}{10} + \\frac{1}{30} "
},
{
"math_id": 1,
"text": "\\frac{2}{101} = \\frac{1}{101} + \\frac{1}{202} + \\frac{1}{303} + \\frac{1}{606}"
},
{
"math_id": 2,
"text": " V = \\left[\\right(1-1/9\\left) d\\right]^2 h"
},
{
"math_id": 3,
"text": " V = (8/9)^2 d^2 h = (256/81) r^2 h"
},
{
"math_id": 4,
"text": " \\frac{2}{3n} = \\frac{1}{2n} + \\frac{1}{6n} "
}
] | https://en.wikipedia.org/wiki?curid=9210114 |
9210345 | Gaussian adaptation | Evolutionary algorithm designed for maximizing manufacturing yield
Gaussian adaptation (GA), also called normal or natural adaptation (NA) is an evolutionary algorithm designed for the maximization of manufacturing yield due to statistical deviation of component values of signal processing systems. In short, GA is a stochastic adaptive process where a number of samples of an "n"-dimensional vector "x"["x"T = ("x"1, "x"2, ..., "xn")] are taken from a multivariate Gaussian distribution, "N"("m", "M"), having mean "m" and moment matrix "M". The samples are tested for fail or pass. The first- and second-order moments of the Gaussian restricted to the pass samples are "m*" and "M*".
The outcome of "x" as a pass sample is determined by a function "s"("x"), 0 < "s"("x") < "q" ≤ 1, such that "s"("x") is the probability that x will be selected as a pass sample. The average probability of finding pass samples (yield) is
formula_0
Then the theorem of GA states:
For any "s"("x") and for any value of "P "< "q", there always exist a Gaussian p. d. f. [ probability density function ] that is adapted for maximum dispersion. The necessary conditions for a local optimum are "m" = "m"* and "M" proportional to "M"*. The dual problem is also solved: "P" is maximized while keeping the dispersion constant (Kjellström, 1991).
Proofs of the theorem may be found in the papers by Kjellström, 1970, and Kjellström & Taxén, 1981.
Since dispersion is defined as the exponential of entropy/disorder/average information it immediately follows that the theorem is valid also for those concepts. Altogether, this means that Gaussian adaptation may carry out a simultaneous maximisation of yield and average information (without any need for the yield or the average information to be defined as criterion functions).
The theorem is valid for all regions of acceptability and all Gaussian distributions. It may be used by cyclic repetition of random variation and selection (like the natural evolution). In every cycle a sufficiently large number of Gaussian distributed points are sampled and tested for membership in the region of acceptability. The centre of gravity of the Gaussian, "m", is then moved to the centre of gravity of the approved (selected) points, "m"*. Thus, the process converges to a state of equilibrium fulfilling the theorem. A solution is always approximate because the centre of gravity is always determined for a limited number of points.
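A minimal batch sketch of one such cycle, in Python, might look as follows (the region of acceptability, the starting values and the sample size are all invented for the illustration; only the centre of gravity is updated here, as described above):

    import numpy as np

    rng = np.random.default_rng(0)

    def acceptable(x):
        """Hypothetical region of acceptability: a unit disc centred at (1, 1)."""
        return np.sum((x - 1.0) ** 2, axis=1) < 1.0

    m = np.zeros(2)          # centre of gravity of the Gaussian
    M = 0.25 * np.eye(2)     # moment matrix, held fixed in this simplified sketch

    for _ in range(30):
        samples = rng.multivariate_normal(m, M, size=2000)
        passed = samples[acceptable(samples)]
        if len(passed) == 0:
            continue                    # no pass samples, no information gained this cycle
        m = passed.mean(axis=0)         # move m to the centre of gravity of the pass samples

    yield_estimate = len(passed) / 2000
    print(m, yield_estimate)            # m settles near the region's centre, maximizing the yield for this M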
It was used for the first time in 1969 as a pure optimization algorithm making the regions of acceptability smaller and smaller (in analogy to simulated annealing, Kirkpatrick 1983). Since 1970 it has been used for both ordinary optimization and yield maximization.
Natural evolution and Gaussian adaptation.
It has also been compared to the natural evolution of populations of living organisms. In this case "s"("x") is the probability that the individual having an array "x" of phenotypes will survive by giving offspring to the next generation; a definition of individual fitness given by Hartl 1981. The yield, "P", is replaced by the mean fitness determined as a mean over the set of individuals in a large population.
Phenotypes are often Gaussian distributed in a large population and a necessary condition for the natural evolution to be able to fulfill the theorem of Gaussian adaptation, with respect to all Gaussian quantitative characters, is that it may push the centre of gravity of the Gaussian to the centre of gravity of the selected individuals. This may be accomplished by the Hardy–Weinberg law. This is possible because the theorem of Gaussian adaptation is valid for any region of acceptability independent of the structure (Kjellström, 1996).
In this case the rules of genetic variation such as crossover, inversion, transposition etcetera may be seen as random number generators for the phenotypes. So, in this sense Gaussian adaptation may be seen as a genetic algorithm.
How to climb a mountain.
Mean fitness may be calculated provided that the distribution of parameters and the structure of the landscape is known. The real landscape is not known, but the figure below shows a fictitious profile (blue) of a landscape along a line (x) in a space spanned by such parameters. The red curve is the mean based on the red bell curve at the bottom of the figure. It is obtained by letting the bell curve slide along the "x"-axis, calculating the mean at every location. As can be seen, small peaks and pits are smoothed out. Thus, if evolution is started at A with a relatively small variance (the red bell curve), then climbing will take place on the red curve. The process may get stuck for millions of years at B or C, as long as the hollows to the right of these points remain, and the mutation rate is too small.
If the mutation rate is sufficiently high, the disorder or variance may increase and the parameter(s) may become distributed like the green bell curve. Then the climbing will take place on the green curve, which is even more smoothed out. Because the hollows to the right of B and C have now disappeared, the process may continue up to the peaks at D. But of course the landscape puts a limit on the disorder or variability. Besides — dependent on the landscape — the process may become very jerky, and if the ratio between the time spent by the process at a local peak and the time of transition to the next peak is very high, it may as well look like a punctuated equilibrium as suggested by Gould (see Ridley).
Computer simulation of Gaussian adaptation.
Thus far the theory only considers mean values of continuous distributions corresponding to an infinite number of individuals. In reality however, the number of individuals is always limited, which gives rise to an uncertainty in the estimation of "m" and "M" (the moment matrix of the Gaussian). And this may also affect the efficiency of the process. Unfortunately very little is known about this, at least theoretically.
The implementation of normal adaptation on a computer is a fairly simple task. The adaptation of m may be done by one sample (individual) at a time, for example
"m"("i" + 1) = (1 – "a") "m"("i") + "ax"
where "x" is a pass sample, and "a" < 1 a suitable constant so that the inverse of a represents the number of individuals in the population.
"M" may in principle be updated after every step "y" leading to a feasible point
"x" = "m" + "y" according to:
"M"("i" + 1) = (1 – 2"b") "M"("i") + 2"byy"T,
where "y"T is the transpose of "y" and "b" « 1 is another suitable constant. In order to guarantee a suitable increase of average information, "y" should be normally distributed with moment matrix "μ"2"M", where the scalar "μ" > 1 is used to increase average information (information entropy, disorder, diversity) at a suitable rate. But "M" will never be used in the calculations. Instead we use the matrix "W" defined by "WW"T = "M".
Thus, we have "y" = "Wg", where "g" is normally distributed with the moment matrix "μU", and "U" is the unit matrix. "W" and "W"T may be updated by the formulas
"W" = (1 – "b")"W" + "byg"T and "W""T" = (1 – "b")"W"T + "bgy"T
because multiplication gives
"M" = (1 – 2"b")"M" + 2"byy"T,
where terms including "b"2 have been neglected. Thus, "M" will be indirectly adapted with good approximation. In practice it will suffice to update "W" only
"W"("i" + 1) = (1 – "b")"W"("i") + "byg"T.
This is the formula used in a simple 2-dimensional model of a brain satisfying the Hebbian rule of associative learning; see the next section (Kjellström, 1996 and 1999).
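Following the update formulas above, an incremental sketch might look as follows (illustrative Python only; the acceptance test and the constants "a", "b" and "μ" are invented for the example):

    import numpy as np

    rng = np.random.default_rng(1)

    def accepted(x):
        """Hypothetical acceptance test s(x): pass if x lies in a unit disc around (1, 1)."""
        return np.sum((x - 1.0) ** 2) < 1.0

    m = np.zeros(2)
    W = 0.5 * np.eye(2)          # M = W W^T is adapted only indirectly, through W
    a, b, mu = 0.1, 0.01, 1.05   # illustrative constants: a, b < 1, mu > 1 inflates the dispersion

    for _ in range(20000):
        g = rng.normal(0.0, np.sqrt(mu), size=2)   # g drawn with moment matrix mu * U
        y = W @ g
        x = m + y
        if accepted(x):
            m = (1 - a) * m + a * x                # m(i + 1) = (1 - a) m(i) + a x
            W = (1 - b) * W + b * np.outer(y, g)   # W(i + 1) = (1 - b) W(i) + b y g^T

    print(m)                     # m drifts toward the centre of gravity of the accepted points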
The figure below illustrates the effect of increased average information in a Gaussian p.d.f. used to climb a mountain crest (the two lines represent contour lines). Both the red and green clusters have equal mean fitness, about 65%, but the green cluster has a much higher average information, making the green process much more efficient. The effect of this adaptation is not very salient in a 2-dimensional case, but in a high-dimensional case, the efficiency of the search process may be increased by many orders of magnitude.
The evolution in the brain.
In the brain the evolution of DNA-messages is supposed to be replaced by an evolution of signal patterns and the phenotypic landscape is replaced by a mental landscape, the complexity of which will hardly be second to the former. The metaphor with the mental landscape is based on the assumption that certain signal patterns give rise to a better well-being or performance. For instance, the control of a group of muscles leads to a better pronunciation of a word or performance of a piece of music.
In this simple model it is assumed that the brain consists of interconnected components that may add, multiply and delay signal values. This is also the basis of the theory of digital filters and neural networks, and of many brain models (Levine 1991).
In the figure below the brain stem is supposed to deliver Gaussian distributed signal patterns. This may be possible since certain neurons fire at random (Kandel et al.). The stem also constitutes a disordered structure surrounded by more ordered shells (Bergström, 1969), and according to the central limit theorem the sum of signals from many neurons may be Gaussian distributed. The triangular boxes represent synapses and the boxes with the + sign are cell kernels.
In the cortex signals are supposed to be tested for feasibility. When a signal is accepted the contact areas in the synapses are updated according to the formulas below in agreement with the Hebbian theory. The figure shows a 2-dimensional computer simulation of Gaussian adaptation according to the last formula in the preceding section.
"m" and "W" are updated according to:
"m"1 = 0.9 "m"1 + 0.1 "x"1; "m"2 = 0.9 "m"2 + 0.1 "x"2;
"w"11 = 0.9 "w"11 + 0.1 "y"1"g"1; "w"12 = 0.9 "w"12 + 0.1 "y"1"g"2;
"w"21 = 0.9 "w"21 + 0.1 "y"2"g"1; "w"22 = 0.9 "w"22 + 0.1 "y"2"g"2;
As can be seen this is very much like a small brain ruled by the theory of Hebbian learning (Kjellström, 1996, 1999 and 2002).
Gaussian adaptation and free will.
Gaussian adaptation as an evolutionary model of the brain obeying the Hebbian theory of associative learning offers an alternative view of free will due to the ability of the process to maximize the mean fitness of signal patterns in the brain by climbing a mental landscape in analogy with phenotypic evolution.
Such a random process gives us much freedom of choice, but hardly any will. An illusion of will may, however, emanate from the ability of the process to maximize mean fitness, making the process goal-seeking. That is, it prefers higher peaks in the landscape to lower ones, and better alternatives to worse ones. In this way an illusory will may appear. A similar view has been given by Zohar 1990. See also Kjellström 1999.
A theorem of efficiency for random search.
The efficiency of Gaussian adaptation relies on the theory of information due to Claude E. Shannon (see information content). When an event occurs with probability "P", then the information −log("P") may be achieved. For instance, if the mean fitness is "P", the information gained for each individual selected for survival will be −log("P") – on the average - and the work/time needed to get the information is proportional to 1/"P". Thus, if efficiency, E, is defined as information divided by the work/time needed to get it we have:
"E" = −"P" log("P").
This function attains its maximum when "P" = 1/"e" = 0.37. The same result has been obtained by Gaines with a different method.
"E" = 0 if "P" = 0, for a process with infinite mutation rate, and if "P" = 1, for a process with mutation rate = 0 (provided that the process is alive).
This measure of efficiency is valid for a large class of random search processes provided that certain conditions are at hand.
1 The search should be statistically independent and equally efficient in different parameter directions. This condition may be approximately fulfilled when the moment matrix of the Gaussian has been adapted for maximum average information to some region of acceptability, because linear transformations of the whole process do not affect efficiency.
2 All individuals have equal cost and the derivative at "P" = 1 is < 0.
Then, the following theorem may be proved:
All measures of efficiency, that satisfy the conditions above, are asymptotically proportional to –"P" log("P/q") when the number of dimensions increases, and are maximized by "P" = "q" exp(-1) (Kjellström, 1996 and 1999).
The figure above shows a possible efficiency function for a random search process such as Gaussian adaptation. To the left the process is most chaotic when "P" = 0, while there is perfect order to the right where "P" = 1.
In an example by Rechenberg, 1971, 1973, a random walk is pushed through a corridor maximizing the parameter "x"1. In this case the region of acceptability is defined as a ("n" − 1)-dimensional interval in the parameters "x"2, "x"3, ..., "x""n", but an "x"1-value below the last accepted will never be accepted. Since "P" can never exceed 0.5 in this case, the maximum speed towards higher "x"1-values is reached for "P" = 0.5/"e" = 0.18, in agreement with the findings of Rechenberg.
A point of view that also may be of interest in this context is that no definition of information (other than that sampled points inside some region of acceptability give information about the extension of the region) is needed for the proof of the theorem. Then, because the formula may be interpreted as information divided by the work needed to get the information, this is also an indication that −log("P") is a good candidate for being a measure of information.
The Stauffer and Grimson algorithm.
Gaussian adaptation has also been used for other purposes, for instance shadow removal by "The Stauffer-Grimson algorithm", which is equivalent to Gaussian adaptation as used in the section "Computer simulation of Gaussian adaptation" above. In both cases the maximum likelihood method is used for estimation of mean values by adaptation of one sample at a time.
But there are differences. In the Stauffer-Grimson case the information is not used for the control of a random number generator for centering, maximization of mean fitness, average information or manufacturing yield. The adaptation of the moment matrix also differs very much as compared to "the evolution in the brain" above. | [
{
"math_id": 0,
"text": " P(m) = \\int s(x) N(x - m)\\, dx "
}
] | https://en.wikipedia.org/wiki?curid=9210345 |
921168 | Scale factor (cosmology) | Expansion of the universe parameter
The expansion of the universe is parametrized by a dimensionless scale factor formula_0. Also known as the cosmic scale factor or sometimes the Robertson–Walker scale factor, this is a key parameter of the Friedmann equations.
In the early stages of the Big Bang, most of the energy was in the form of radiation, and that radiation was the dominant influence on the expansion of the universe. Later, with cooling from the expansion, the roles of matter and radiation changed and the universe entered a matter-dominated era. Recent results suggest that we have already entered an era dominated by dark energy, but examination of the roles of matter and radiation is most important for understanding the early universe.
Using the dimensionless scale factor to characterize the expansion of the universe, the effective energy densities of radiation and matter scale differently. This leads to a radiation-dominated era in the very early universe but a transition to a matter-dominated era at a later time and, since about 4 billion years ago, a subsequent dark-energy-dominated era.
Detail.
Some insight into the expansion can be obtained from a Newtonian expansion model which leads to a simplified version of the Friedmann equation. It relates the proper distance (which can change over time, unlike the comoving distance formula_1 which is constant and set to today's distance) between a pair of objects, e.g. two galaxy clusters, moving with the Hubble flow in an expanding or contracting FLRW universe at any arbitrary time formula_2 to their distance at some reference time formula_3. The formula for this is:
formula_4
where formula_5 is the proper distance at epoch formula_2, formula_6 is the distance at the reference time formula_3, usually also referred to as comoving distance, and formula_7 is the scale factor. Thus, by definition, formula_8 and formula_9.
The scale factor is dimensionless, with formula_2 counted from the birth of the universe and formula_3 set to the present age of the universe: formula_10 giving the current value of formula_11 as formula_12 or formula_13.
The evolution of the scale factor is a dynamical question, determined by the equations of general relativity, which are presented in the case of a locally isotropic, locally homogeneous universe by the Friedmann equations.
The Hubble parameter is defined as:
formula_14
where the dot represents a time derivative. The Hubble parameter varies with time, not with space, with the Hubble constant formula_15 being its current value.
From the previous equation formula_16 one can see that formula_17, and also that formula_18, so combining these gives formula_19, and substituting the above definition of the Hubble parameter gives formula_20 which is just Hubble's law.
Current evidence suggests that the expansion of the universe is accelerating, which means that the second derivative of the scale factor formula_21 is positive, or equivalently that the first derivative formula_22 is increasing over time. This also implies that any given galaxy recedes from us with increasing speed over time, i.e. for that galaxy formula_23 is increasing with time. In contrast, the Hubble parameter seems to be decreasing with time, meaning that if we were to look at some fixed distance d and watch a series of different galaxies pass that distance, later galaxies would pass that distance at a smaller velocity than earlier ones.
According to the Friedmann–Lemaître–Robertson–Walker metric which is used to model the expanding universe, if at present time we receive light from a distant object with a redshift of "z", then the scale factor at the time the object originally emitted that light is formula_24.
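For example (a short Python illustration):

    def scale_factor_at_emission(z):
        """Scale factor of the universe when light now observed at redshift z was emitted."""
        return 1.0 / (1.0 + z)

    print(scale_factor_at_emission(1.0))     # 0.5: the universe was half its present linear scale
    print(scale_factor_at_emission(1100.0))  # about 1/1101, roughly the epoch of last scattering mentioned below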
Chronology.
Radiation-dominated era.
After Inflation, and until about 47,000 years after the Big Bang, the dynamics of the early universe were set by radiation (referring generally to the constituents of the universe which moved relativistically, principally photons and neutrinos).
For a radiation-dominated universe the evolution of the scale factor in the Friedmann–Lemaître–Robertson–Walker metric is obtained by solving the Friedmann equations:
formula_25
Matter-dominated era.
Between about 47,000 years and 9.8 billion years after the Big Bang, the energy density of matter exceeded both the energy density of radiation and the vacuum energy density.
When the early universe was about 47,000 years old (redshift 3600), mass–energy density surpassed the radiation energy, although the universe remained optically thick to radiation until the universe was about 378,000 years old (redshift 1100). This second moment in time (close to the time of recombination), at which the photons which compose the cosmic microwave background radiation were last scattered, is often mistaken as marking the end of the radiation era.
For a matter-dominated universe the evolution of the scale factor in the Friedmann–Lemaître–Robertson–Walker metric is easily obtained by solving the Friedmann equations:
formula_26
Dark-energy-dominated era.
In physical cosmology, the dark-energy-dominated era is proposed as the last of the three phases of the known universe, the other two being the radiation-dominated era and the matter-dominated era. The dark-energy-dominated era began after the matter-dominated era, i.e. when the Universe was about 9.8 billion years old. In the era of cosmic inflation, the Hubble parameter is also thought to be constant, so the expansion law of the dark-energy-dominated era also holds for the inflationary prequel of the big bang.
The cosmological constant is given the symbol Λ, and, considered as a source term in the Einstein field equation, can be viewed as equivalent to a "mass" of empty space, or dark energy. Since the total amount of this energy grows with the volume of the universe while its density remains constant, the expansion pressure is effectively constant, independent of the scale of the universe, while the other terms decrease with time. Thus, as the density of other forms of matter – dust and radiation – drops to very low concentrations, the cosmological constant (or "dark energy") term will eventually dominate the energy density of the Universe. Recent measurements of the change in Hubble constant with time, based on observations of distant supernovae, show this acceleration in expansion rate, indicating the presence of such dark energy.
For a dark-energy-dominated universe, the evolution of the scale factor in the Friedmann–Lemaître–Robertson–Walker metric is easily obtained by solving the Friedmann equations:
formula_27
Here, the coefficient formula_15 in the exponential, the Hubble constant, is
formula_28
This exponential dependence on time makes the spacetime geometry identical to the de Sitter universe, and only holds for a positive sign of the cosmological constant, which is the case according to the currently accepted value of the cosmological constant, Λ, that is approximately 2 · 10−35 s−2.
The current density of the observable universe is of the order of 9.44 · 10−27 kg m−3 and the age of the universe is of the order of 13.8 billion years, or 4.358 · 1017 s. The Hubble constant, formula_15, is ≈70.88 km s−1 Mpc−1 (The Hubble time is 13.79 billion years).
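These quoted values can be checked against one another (a Python sketch; the unit conversions 1 Mpc ≈ 3.086e19 km and 1 yr ≈ 3.156e7 s are standard approximations):

    KM_PER_MPC = 3.086e19
    SECONDS_PER_YEAR = 3.156e7

    H0 = 70.88 / KM_PER_MPC                   # Hubble constant in s^-1
    hubble_time_s = 1.0 / H0
    hubble_time_gyr = hubble_time_s / SECONDS_PER_YEAR / 1e9

    print(H0)               # about 2.30e-18 s^-1
    print(hubble_time_gyr)  # about 13.8 billion years, matching the quoted Hubble time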
Notes.
References.
| [
{
"math_id": 0,
"text": "a "
},
{
"math_id": 1,
"text": "d_C"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "t_0"
},
{
"math_id": 4,
"text": "d(t) = a(t)d_0,\\,"
},
{
"math_id": 5,
"text": "d(t)"
},
{
"math_id": 6,
"text": "d_0"
},
{
"math_id": 7,
"text": "a(t)"
},
{
"math_id": 8,
"text": "d_0=d(t_0)"
},
{
"math_id": 9,
"text": "a(t_0) = 1"
},
{
"math_id": 10,
"text": "13.799\\pm0.021\\,\\mathrm{Gyr}"
},
{
"math_id": 11,
"text": "a"
},
{
"math_id": 12,
"text": "a(t_0)"
},
{
"math_id": 13,
"text": "1"
},
{
"math_id": 14,
"text": "H(t) \\equiv {\\dot{a}(t) \\over a(t)}"
},
{
"math_id": 15,
"text": "H_0"
},
{
"math_id": 16,
"text": "d(t) = d_0 a(t)"
},
{
"math_id": 17,
"text": "\\dot{d}(t) = d_0 \\dot{a}(t)"
},
{
"math_id": 18,
"text": "d_0 = \\frac{d(t)}{a(t)}"
},
{
"math_id": 19,
"text": "\\dot{d}(t) = \\frac{d(t) \\dot{a}(t)}{a(t)}"
},
{
"math_id": 20,
"text": "\\dot{d}(t) = H(t) d(t)"
},
{
"math_id": 21,
"text": "\\ddot{a}(t)"
},
{
"math_id": 22,
"text": "\\dot{a}(t)"
},
{
"math_id": 23,
"text": "\\dot{d}(t)"
},
{
"math_id": 24,
"text": "a(t) = \\frac{1}{1 + z}"
},
{
"math_id": 25,
"text": "a(t)\\propto t^{1/2}. \\, "
},
{
"math_id": 26,
"text": "a(t)\\propto t^{2/3}"
},
{
"math_id": 27,
"text": "a(t)\\propto \\exp(H_0t)"
},
{
"math_id": 28,
"text": "H_0 = \\sqrt{8\\pi G \\rho_\\mathrm{full} / 3} = \\sqrt{\\Lambda / 3}."
}
] | https://en.wikipedia.org/wiki?curid=921168 |
9211715 | Aronszajn line | In mathematical set theory, an Aronszajn line (named after Nachman Aronszajn) is a linear ordering of cardinality formula_0
which contains no subset order-isomorphic to formula_1 with the usual ordering, the reverse of formula_1, or an uncountable subset of the real numbers with the usual ordering.
Unlike Suslin lines, the existence of Aronszajn lines is provable using the standard axioms of set theory. A linear ordering is an Aronszajn line if and only if it is the lexicographical ordering of some Aronszajn tree. | [
{
"math_id": 0,
"text": "\\aleph_1"
},
{
"math_id": 1,
"text": "\\omega_1"
}
] | https://en.wikipedia.org/wiki?curid=9211715 |
921278 | Lender of last resort | Government guarantee to provide liquidity to financial institutions
In public finance, a lender of last resort (LOLR) is the institution in a financial system that acts as the provider of liquidity to a financial institution which finds itself unable to obtain sufficient liquidity in the interbank lending market when other facilities or such sources have been exhausted. It is, in effect, a government guarantee to provide liquidity to financial institutions. Since the beginning of the 20th century, most central banks have been providers of lender of last resort facilities, and their functions usually also include ensuring liquidity in the financial market in general.
The objective is to prevent economic disruption as a result of financial panics and bank runs spreading from one bank to the others due to a lack of liquidity in the first one.
There are varying definitions of a lender of last resort, but a comprehensive one is that it is "the discretionary provision of liquidity to a financial institution (or the market as a whole) by the central bank in reaction to an adverse shock which causes an abnormal increase in demand for liquidity which cannot be met from an alternative source".
While the concept itself had been used previously, the term "lender of last resort" was supposedly first used in its current context by Sir Francis Baring, in his "Observations on the Establishment of the Bank of England", which was published in 1797.
Classical theory.
Although Alexander Hamilton, in 1792, was the first policymaker to explain and implement a lender of last resort policy, the classical theory of the lender of last resort was mostly developed by two Englishmen in the 19th century: Henry Thornton and Walter Bagehot. Although some of the details remain controversial, their general theory is still widely acknowledged in modern research and provides a suitable benchmark. Thornton and Bagehot were mostly concerned with the reduction of the money supply. That was because they feared that the deflationary tendency caused by a reduction of the money supply could reduce the level of economic activity. If prices did not adjust quickly, it would lead to unemployment and a reduction in output. If the money supply is kept constant, purchasing power remains stable during shocks. When there is a shock-induced panic, two things happen:
Thornton's foundations.
Thornton first published "An Enquiry into the Nature and Effects of the Paper Credit of Great Britain" in 1802. His starting point was that only a central bank could perform the task of lender of last resort because it holds a monopoly in issuing bank notes. Unlike any other bank, the central bank has a responsibility towards the public to keep the money supply constant, thereby preventing negative externalities of monetary instability,
such as unemployment, price instability, bank runs, and financial panic.
Bagehot's contribution.
Bagehot was the second important contributor to the classical theory.
In his book "Lombard Street" (1873), he mostly agreed with Thornton without ever mentioning him but also develops some new points and emphases. Bagehot advocates: "Very large loans at very high rates are the best remedy for the worst malady of the money market when a foreign drain is added to a domestic drain." His main points can be summarized by his famous rule: lend "it most freely... to merchants, to minor bankers, to 'this and that man', whenever the security is good".
Summary of the classical theory.
Thomas M. Humphrey, who has done extensive research on Thornton's and Bagehot's works, summarizes their main proposals as follows: (1) protect the money supply instead of saving individual institutions; (2) rescue solvent institutions only; (3) let insolvent institutions default; (4) charge penalty rates; (5) require good collateral; and (6) announce the conditions before a crisis so that the market knows exactly what to expect.
Many of the points remain controversial today but it seems to be accepted that the Bank of England strictly followed these rules during the last third of the 19th century.
Bank runs and contagion.
Most industrialized countries have had a lender of last resort for many years. Models explaining why propose that a bank run or bank panic can arise in any fractional reserve banking system and that the lender of last resort function is a way of preventing panics from happening. The Diamond and Dybvig model of bank runs has two Nash equilibria: one in which welfare is optimal and one where there is a bank run. The bank run equilibrium is a self-fulfilling prophecy: if individuals expect a run to happen, it is rational for them to withdraw their deposits early, before they actually need them. That makes them lose some interest, but it is better than losing everything in a bank run.
In the Diamond–Dybvig model, introducing a lender of last resort can prevent bank runs from happening so that only the optimal equilibrium remains. That is because individuals are no longer afraid of a liquidity shortage and so have no incentive to withdraw early. The lender of last resort will never come into action because the mere promise is enough to provide the confidence necessary to prevent a panic.
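The logic can be illustrated with a deliberately stylized numerical toy (Python; the payoffs, the liquidation value and the parameter values are invented for illustration and are not the Diamond–Dybvig model's actual specification):

    def late_payoff(fraction_withdrawing_early, R=1.5, liquidation_value=0.6):
        """Stylized payoff to a depositor who waits, per unit deposited.
        Early withdrawers are paid 1 each; liquidating the long asset early recovers
        only liquidation_value per unit, so paying them can exhaust the bank."""
        resources_left = 1.0 - fraction_withdrawing_early / liquidation_value
        if resources_left <= 0:
            return 0.0                        # the bank fails before late depositors are paid
        return R * resources_left / (1.0 - fraction_withdrawing_early)

    print(late_payoff(0.0))   # 1.5 > 1: if nobody runs, waiting beats withdrawing (good equilibrium)
    print(late_payoff(0.5))   # 0.5 < 1: once enough depositors run, waiting pays less than withdrawing
    print(late_payoff(0.7))   # 0.0:     with a large run the bank fails and late depositors get nothing

If nobody runs, waiting pays more than withdrawing, so waiting is a best response; if enough depositors run, waiting pays less, so running becomes a best response as well, which is the second equilibrium. A credible lender of last resort removes the second case by guaranteeing that late withdrawers can always be paid.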
Subsequently, the model has been extended to allow for financial contagion: the spreading of a panic from one bank to another, by Allen and Gale, and Freixas et al. respectively.
Allen and Gale introduced an interbank market into the Diamond–Dybvig model to study contagion of bank panics from one region to another. An interbank market is created by banks because it insures them against a lack of liquidity at certain banks as long as the overall amount of liquidity is sufficient. Liquidity is allocated by the interbank market so that banks that have excess liquidity can provide this to banks that lack liquidity. As long as the total demand for liquidity does not exceed the supply, the interbank market will allocate liquidity efficiently and banks will be better off. However, if demand exceeds supply, it can have disastrous consequences. The interregional cross-holdings of deposits cannot increase the total amount of liquidity. Thus, long-term assets have to be liquidated, which causes loss.
The degree of contagion depends on the interconnectedness of the banks in different regions. In an incomplete market (banks do not exchange deposits with all other banks), a high degree of interconnectedness causes contagion. Contagion is not caused if the market is either complete (banks have exchanged deposits with all other banks) or if the banks are little-connected. In Allen and Gale's model, the role of the central bank is to complete the markets to prevent contagion.
Freixas et al.'s model is similar to the one by Allen and Gale, except that in Freixas et al.'s model, individuals face uncertainty about where they will need their money. There is a fraction of individuals (travelers) who need their money in a region other than home. Without a payment system, an individual has to withdraw his deposit early (when he finds out that he will need the money in a different place in the next period) and simply take the money along. That is inefficient because of the foregone interest payment. Banks therefore establish credit lines to allow individuals to withdraw their deposits in different regions. In the good equilibrium, welfare is increased just as in the Diamond–Dybvig model, but again there is a bank run equilibrium, too. It can arise if some individuals expect too many others to want to withdraw money in the same region in the next period. It is then rational to withdraw money early instead of not receiving any in the next period. It can happen even if all banks are solvent.
Disputed matters.
There is no universal agreement on whether a nation's central bank or any agent of private banking interests should be its lender of last resort. Nor is there on the pros and cons of actions such a lender takes and their consequences.
Moral hazard.
Moral hazard has been an explicit concern in the context of the lender of last resort since the days of Thornton. It is argued, for example, that the existence of a LOLR facility leads to excessive risk-taking by both bankers and investors, which would be dampened if illiquid banks were allowed to fail. Therefore, the LOLR can alleviate current panics in exchange for increasing the likelihood of future panics by risk-taking induced by moral hazard.
That is exactly what the Report of the International Financial Institution Advisory Commission accuses the IMF of doing when it lends to emerging economies: "By preventing or reducing losses by international lenders, the IMF had implicitly signalled that, if local banks and other institutions incurred large foreign liabilities and government guaranteed private debts, the IMF would provide the foreign exchange needed to honour the guarantees." Investors are protected against the downside of their investment and, at the same time, receive higher interest rates to compensate them for their risk. That encourages risk-taking and reduces the necessary diversification and led the Commission to conclude, "The importance of the moral hazard problem cannot be overstated."
However, not having a lender of last resort for fear of moral hazard may have worse consequences than moral hazard itself. Consequently, many countries have a central bank that acts as lender of last resort. These countries then try to prevent moral hazard by other means such as suggested by Stern: "official regulation; encouragement for private sector monitoring and self-regulation; and the imposition of costs on those who make mistakes, including enforcement of bankruptcy procedures when appropriate." Some authors also suggest that moral hazard should not be a concern of the lender of last resort. The task of preventing it should be given to a supervisor or regulator that limits the amount of risk that can be taken.
Macro or micro responsibility.
Whether or not the lender of last resort has a responsibility for saving individual banks has been a very controversial topic. Should the lender of last resort provide liquidity to the market as a whole (through open market operations), or should it (also) make loans to individual banks (through discount window lending)?
There are two main views on this question, the money and the banking view: "the money view", as argued, for example, by Goodfriend and King, and Capie, suggests that the lender of last resort should provide liquidity to the market by open market operations only, because that suffices to limit panics. What they call "banking policy" (discount window lending) may even be harmful because of moral hazard. "The banking view" finds that in reality the market does not allocate liquidity efficiently in times of crisis. Liquidity provided through open market operations is not efficiently distributed among banks in the interbank market, and there is a case for discount window lending. In a well-functioning interbank market only solvent banks can borrow. However, if the market is not functioning, even solvent banks may be unable to borrow, most likely because of asymmetric information.
A model developed by Flannery suggests that the private market for interbank loans can fail if banks face uncertainty about the risk involved in lending to other banks. In times of crisis with less certainty, however, discount window loans are the least costly way of solving the problem of uncertainty.
Rochet and Vives extend the traditional banking view to provide more evidence that interbank markets indeed do not function properly as Goodfriend and King had suggested. "The main contribution of our paper so far has been to show the theoretical possibility of a solvent bank being illiquid, due to a coordination failure on the interbank market."
Goodhart proposes that only discount window lending should be considered lending of last resort. The reason is that a central bank's crisis-time open market operations cannot be separated from its regular open market operations.
Distinction between illiquid and insolvent.
According to Bagehot and, following him, many later writers, the lender of last resort should not lend to insolvent banks. That is reasonable in particular because lending to insolvent banks would encourage moral hazard. The distinction seems logical and is helpful in theoretical models, but some authors find that in reality it is difficult to apply. Especially in times of crisis the distinction is difficult to make.
When an illiquid bank approaches the lender of last resort, there should always be a suspicion of insolvency. However, according to Goodhart, it is a myth that the central bank can determine, under the usual time constraints for arriving at a decision, whether the suspicions are unfounded. Like Obstfeld, he considers insolvency a possibility that arises with a certain amount of probability, not something that is certain.
Penalty rate and collateral requirement.
Bagehot's reasoning behind charging penalty rates (i.e. higher rates than are available in the market) was as follows: (1) it would really make the lender of last resort the very last resort and (2) it would encourage the prompt repayment of the debt.
Some authors suggest that charging a higher rate does not serve the purpose of the lender of last resort because a higher rate could make it too expensive for banks to borrow. Flannery and others mention that the Fed has neither asked for good collateral nor charged rates above the market, in recent years.
Announcement in advance.
If the central bank announces in advance that it will act as lender of last resort in future crises, it can be understood as a credible promise and prevent bank panics. At the same time, it may increase moral hazard. While Bagehot emphasized that the benefit of the promise outweighs the costs, many central banks have intentionally "not" promised anything.
Private alternatives.
Before the founding of the US Federal Reserve System as lender of last resort, its role had been assumed by private banks. Both the clearing-house system of New York and the Suffolk Bank of Boston had provided member banks with liquidity during crises. In the absence of a public solution a private alternative had developed. Advocates of the free banking view suggest that such examples show that there is no necessity for government intervention.
The Suffolk Bank acted as lender of last resort during the Panic of 1837–1839. Rolnick, Smith and Weber "argue that the Suffolk Bank's provision of note-clearing and lender of last resort services (via the Suffolk Banking System) lessened the effects of the Panic of 1837 in New England relative to the rest of the country, where no bank provided such services."
During the Panic of 1857, a policy committee of the New York Clearing House Association (NYCHA) allowed the issuance of the so-called clearing-house loan certificates. While their legality was controversial at the time, the idea of providing additional liquidity eventually led to a public provision of this service that was to be performed by the central bank, founded in 1913.
Some authors view the establishment of clearing-houses as proof that the lender of last resort does not have to be provided by the central bank. Bordo agrees that it does not have to be a central bank. However, historical experience (mainly Canada and US) suggested to him that it has to be a public authority and not a private clearing-house association that provides the service.
Historical experience.
Miron, Bordo, Wood and Goodhart show that the existence of central banks has reduced the frequency of bank runs.
Miron uses data on the crises between 1890 and 1908 and compares it to the period of 1915 to 1933. That allows him to reject the hypothesis that the frequency of panics remained unchanged after the new Federal Reserve began acting as lender of last resort. The conclusion of his discussion is that the "effects of monetary policy... that anticipated open market operations by the Fed probably had real effects."
Bordo analyses historical data by Schwartz and Kindleberger to determine whether a lender of last resort can prevent or reduce the effect of a panic or crisis. Bordo finds that Britain's last panic happened in 1866. Afterwards the Bank of England provided the necessary liquidity. According to Bordo, acting as a lender of last resort prevented panics in 1878, 1890, and 1914. Bordo concludes: "Successful lender of last resort actions prevented panics on numerous occasions. On those occasions when panics were not prevented, either the requisite institutions did not exist, or the authorities did not understand the proper actions to take. Most countries developed an effective LLR mechanism by the last one-third of the nineteenth century. The U.S. was the principal exception. Some public authority must provide the lender of last resort function... Such an authority does not have to be a central bank. This is evident from the experience of Canada and other countries."
Wood compares the reaction of central banks to different crises in England, France, and Italy. When a lender of last resort existed, panics did not turn into crises. When the central bank failed to act, however, crises did occur, as in France in 1848. He concludes "that LOLR action contains a crisis, while absence of such action allows a localized panic to turn into a widespread banking crisis." More recent examples are the crises in Argentina, Mexico and Southeast Asia. There, central banks could not provide liquidity because banks had been borrowing in foreign currencies, which the central banks were unable to provide.
Bank of England.
The Bank of England is often considered the model lender of last resort because it acted according to the classical rules of Thornton and Bagehot. "Banking scholars agree that the Bank of England in the last third of the nineteenth century was the lender of last resort par excellence. More than any central bank before or since, it adhered to the strict classical or Thornton-Bagehot version of the LLR concept."
Federal Reserve System.
The Federal Reserve System in the United States acts rather differently, and at least in some ways not in accordance with Bagehot's advice. Norbert J. Michel, a financial researcher, goes so far as to say that the Federal Reserve made the Great Depression worse by failing to fulfil its role of lender of last resort, a view shared, among others, by Milton Friedman. Critics like Michel nevertheless applaud the Fed's role as LLR in the 1987 crisis and in the one following 9/11 (though concerns about the resulting moral hazard were certainly expressed at the time).
However, the Fed's role during the financial crisis of 2007–2008 continues to polarise opinion. The classical economist Thomas M. Humphrey has identified several ways in which the modern Fed deviates from the traditional rules: (1) "Emphasis on Credit (Loans) as Opposed to Money", (2) "Taking Junk Collateral", (3) "Charging Subsidy Rates", (4) "Rescuing Insolvent Firms Too Big and Interconnected to Fail", (5) "Extension of Loan Repayment Deadlines", (6) "No Pre-announced Commitment".
Indeed, some say its lender of last resort policies have jeopardized its operational independence, and have put taxpayers at risk.
Mervyn King, however, has pointed out that 21st-century banking (and hence the Fed as well) operates in a very different world from Bagehot's, creating new problems for the LLR role Bagehot envisaged and highlighting especially the danger that haircuts on collateral, punitive rates, and the stigma of the discount window can precipitate a bank run or exacerbate a credit crunch: "In extreme cases, the LOLR is the Judas kiss for banks forced to turn to the central bank for support". As a result, other strategies were called for, and were indeed pursued by the Fed. The historian Adam Tooze has stressed how the Fed's new liquidity facilities mapped onto the various elements of the eviscerated shadow banking system, so that the Fed acted as LLR to a credit system that was failing wholesale (a role perhaps morphing into that of a dealer of last resort). Tooze concluded that "In its own terms, as a capitalist stabilization effort...the Fed was remarkably successful".
ECB.
The European Central Bank arguably set itself up (controversially) as a conditional LOLR with its 2012 policy of Outright Monetary Transactions.
Prussia/Imperial Germany.
In 1763, the king was the lender of last resort in Prussia; and in the 19th century, various official bodies, from the Prussian lottery to the Hamburg City Government, worked in consortia as LOLR. After unification, the financial crisis of 1873 forced the formation of the German Reichsbank (1876) to fulfil that role.
International lender of last resort.
Theory.
The matter of whether there is a need for an international lender of last resort is more controversial than for a domestic lender of last resort. Most authors agree that there is a need for a national lender of last resort and argue only about the specific set-up. There is, however, no agreement on the international level. There are mainly two opposing groups: one (Capie and Schwartz) says that an international lender of last resort (ILOLR) is technically impossible, while the other (Fischer, Obstfeld, Goodhart and Huang) wants a modified International Monetary Fund (IMF) to assume this role.
Fischer argues that financial crises have become more interconnected, which requires an international lender of last resort because domestic lenders cannot create foreign currency. Fischer says this role can and should be taken by the IMF even though it is not a central bank, since it has the ability to provide credit to the market despite being unable to create new money in any "international currency". Fischer's central argument, that the ability to create money is not a necessary attribute of the lender of last resort, is highly controversial, and both Capie and Schwartz argue the opposite.
Goodhart and Huang developed a model arguing "the international contagious risk is much higher when there is an international interbank market than otherwise. Our analysis has indicated that an ILOLR can play a useful role in providing international liquidity and reducing such international contagion."
"A lender-of-last-resort is what it is by virtue of the fact that it alone provides the ultimate means of payment. There is no international money and so there can be no international lender-of-last-resort."
That is the most prominent argument put against the international lender of last resort. Besides this point (considered "semantic" by opposing authors), Capie and Schwartz provide arguments for why the IMF is not fit to be an international lender of last resort.
Schwartz explains that the lender of last resort is not the optimal solution to the crises of today, and the IMF cannot replace the necessary government agencies. Schwartz considers a domestic lender of last resort suitable to stabilize the international financial system, but the IMF lacks the properties necessary for the role of an international lender of last resort.
Practice.
Tooze has argued that, during and since the credit crunch, the dollar has extended its reach as a global reserve currency; and suggests further that, at the height of the crisis, through the Central bank liquidity swap lines, the Fed "assured the key players in the global system...there was one actor in the system that would cover marginal imbalances with an unlimited supply of dollar liquidity. That precisely was the role of the global lender of last resort". Concern as to whether the Fed is in a position to repeat its role as global LOLR is one of the forces behind calls for a formal global currency.
In government bond markets.
Although the European Central Bank (ECB) supplied large amounts of liquidity through both open market operations and lending to individual banks in 2008, it was hesitant to supply liquidity during the sovereign debt crisis of 2010. According to Paul De Grauwe, the ECB should be the lender of last resort in the government bond market and supply liquidity to its member countries just as it does to the financial sector, because the reasons a lender of last resort is necessary in the banking sector apply analogously to the government bond market. Just like banks that lend long-term while borrowing short-term, governments hold highly illiquid assets, such as infrastructure, while their debt continually matures. If they do not succeed in rolling over their debt, they become illiquid, just like banks that run out of liquidity and are not supported by a lender of last resort. The distrust of investors can then increase the rates the government has to pay on its debt, which, in a self-fulfilling way, leads to a solvency crisis. Because banks hold the greatest proportion of government debt, not saving the government may in turn make it necessary to save the banks. "The single most important argument for mandating the ECB to be a lender of last resort in the government bond markets is to prevent countries from being pushed into a bad equilibrium."
Arguments put forth against a lender of last resort in the government bond market are the following: (1) inflation risk from an increase in the money supply; (2) losses to taxpayers because in the end they bear the losses of the ECB; (3) moral hazard: governments have an incentive to take more risk; (4) Bagehot's rule of not lending to insolvent institutions; and (5) violation of the statutes of the ECB, which do not allow the ECB to buy government bonds directly.
According to De Grauwe, none of the arguments is valid for the following reason: (1) The money supply does not necessarily increase if the money base is increased. (2) All open market operations generate taxpayer risk, and if the lender of last resort is successful in preventing countries from moving into the bad equilibrium, it will not suffer any losses. (3) The risk of moral hazard is identical to the moral hazard in the financial market and should be overcome by risk-limiting regulation. (4) If the distinction between illiquid and insolvent were possible, the market would not need the support of the lender of last resort, but in practice, the distinction cannot be made. (5) While Article 21 of the treaty prohibits buying debt from national governments directly because it "implies a monetary financing of the government budget deficit", Article 18 allows the ECB to buy and sell "marketable instruments", and government bonds are marketable instruments. Finally, De Grauwe asserts that only the central bank itself has the necessary credibility to act as a lender of last resort and so it should replace the European Financial Stability Facility (and its successor, the European Stability Mechanism). The two institutions cannot guarantee that they will always possess enough liquidity or "fire power" to buy debt from sovereign bond holders.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " M = \\quad \\left \\lbrack \\frac{1+\\frac{C}{D}}{\\frac{C}{D} + \\frac{R}{D}} \\right \\rbrack B "
}
] | https://en.wikipedia.org/wiki?curid=921278 |
921525 | Maxwell relations | Equations involving the partial derivatives of thermodynamic quantities
Maxwell's relations are a set of equations in thermodynamics which are derivable from the symmetry of second derivatives and from the definitions of the thermodynamic potentials. These relations are named for the nineteenth-century physicist James Clerk Maxwell.
Equations.
The structure of Maxwell relations is a statement of equality among the second derivatives for continuous functions. It follows directly from the fact that the order of differentiation of an analytic function of two variables is irrelevant (Schwarz theorem). In the case of Maxwell relations, the function considered is a thermodynamic potential, and formula_4 and formula_5 are two different natural variables for that potential; we have
Schwarz's theorem (general)
formula_6
where the partial derivatives are taken with all other natural variables held constant. For every thermodynamic potential there are formula_7 possible Maxwell relations where formula_8 is the number of natural variables for that potential.
The four most common Maxwell relations.
The four most common Maxwell relations are the equalities of the second derivatives of each of the four thermodynamic potentials, with respect to their thermal natural variable (temperature formula_1, or entropy formula_3) and their "mechanical" natural variable (pressure formula_0, or volume formula_2):
Maxwell's relations "(common)"
formula_9
where the potentials as functions of their natural thermal and mechanical variables are the internal energy formula_10, enthalpy formula_11, Helmholtz free energy formula_12, and Gibbs free energy formula_13. The thermodynamic square can be used as a mnemonic to recall and derive these relations. The usefulness of these relations lies in their quantifying entropy changes, which are not directly measurable, in terms of measurable quantities like temperature, volume, and pressure.
Each equation can be re-expressed using the relationship
formula_14
which are sometimes also known as Maxwell relations.
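As a concrete illustration (a sketch added here, not part of the standard presentation), the third relation above, (∂"S"/∂"V")T = (∂"P"/∂"T")V, can be checked symbolically for an ideal gas using independently known expressions for its pressure and entropy; the symbols "n", "R", "C"v and "S"0 below are ordinary ideal-gas parameters introduced only for the example.

```python
# Minimal SymPy check (illustrative, not from the article) of the Maxwell relation
# (dS/dV)_T = (dP/dT)_V for an ideal gas.
import sympy as sp

T, V, n, R, Cv, S0 = sp.symbols('T V n R C_v S_0', positive=True)

P = n * R * T / V                                  # ideal-gas equation of state
S = S0 + n * Cv * sp.log(T) + n * R * sp.log(V)    # ideal-gas entropy, up to constants

lhs = sp.diff(S, V)   # (dS/dV) at constant T
rhs = sp.diff(P, T)   # (dP/dT) at constant V
print(sp.simplify(lhs - rhs))   # prints 0: the relation holds for this model
```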
Derivations.
Short derivation.
This section is based on chapter 5 of.
Suppose we are given four real variables formula_15, restricted to move on a 2-dimensional formula_16 surface in formula_17. Then, if we know two of them, we can determine the other two uniquely (generically).
In particular, we may take any two variables as the independent variables, and let the other two be the dependent variables, then we can take all these partial derivatives.
Proposition:
formula_18
Proof: This is just the chain rule.
Proposition:
formula_19
Proof: We can ignore formula_20. Then locally the surface is just formula_21. Then formula_22, and the analogous expressions hold for the other two partial derivatives. Multiplying the three together gives −1.
Proof of Maxwell's relations:
There are four real variables formula_23, restricted on the 2-dimensional surface of possible thermodynamic states. This allows us to use the previous two propositions.
It suffices to prove the first of the four relations, as the other three can be obtained by transforming the first relation using the previous two propositions.
Pick formula_24 as the independent variables, and formula_25 as the dependent variable. We have
formula_26.
Now, formula_27 since the surface is formula_16; that is, formula_28 which yields the result.
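The relation just obtained can be made concrete with a short symbolic sketch (added here for illustration); the internal energy below is the monatomic-ideal-gas form "U"("S", "V") = "a" "V"−2/3 exp(2"S"/3"nR"), where the constant "a" is arbitrary and assumed only for the example.

```python
# Sketch: verify (dT/dV)_S = -(dP/dS)_V for an explicit U(S, V)
# (monatomic ideal gas; the constant 'a' is arbitrary and purely illustrative).
import sympy as sp

S, V, a, n, R = sp.symbols('S V a n R', positive=True)

U = a * V**sp.Rational(-2, 3) * sp.exp(2*S / (3*n*R))

T = sp.diff(U, S)      # T = (dU/dS)_V
P = -sp.diff(U, V)     # P = -(dU/dV)_S

print(sp.simplify(sp.diff(T, V) + sp.diff(P, S)))   # 0, i.e. (dT/dV)_S = -(dP/dS)_V
print(sp.simplify(U - sp.Rational(3, 2)*n*R*T))     # 0: U = (3/2) n R T, as expected
print(sp.simplify(P*V - n*R*T))                     # 0: P V = n R T, as expected
```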
Another derivation.
Based on.
Since formula_29, around any cycle we have formula_30 Taking the cycle to be infinitesimal, we find that formula_31. That is, the map is area-preserving. By the chain rule for Jacobians, for any coordinate transform formula_32, we have formula_33 Now setting formula_32 to various values gives us the four Maxwell relations. For example, setting formula_34 gives us formula_35
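The area-preservation statement can also be checked directly for a concrete model (an illustration added here, not part of the derivation): writing "T" and "S" of an ideal gas as functions of "P" and "V", the Jacobian ∂("T","S")/∂("P","V") comes out exactly 1; the symbols "n", "R", "C"v, "S"0 are assumed model parameters.

```python
# Illustrative check that the (P, V) -> (T, S) map of an ideal gas is area-preserving,
# i.e. det d(T,S)/d(P,V) = 1 (equivalently d(P,V)/d(T,S) = 1).
import sympy as sp

P, V, n, R, Cv, S0 = sp.symbols('P V n R C_v S_0', positive=True)

T = P * V / (n * R)                                # from P V = n R T
S = S0 + n * Cv * sp.log(T) + n * R * sp.log(V)    # ideal-gas entropy in terms of P and V

jac = sp.Matrix([[sp.diff(T, P), sp.diff(T, V)],
                 [sp.diff(S, P), sp.diff(S, V)]])
print(sp.simplify(jac.det()))   # prints 1
```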
Extended derivations.
Maxwell relations are based on simple partial differentiation rules, in particular the total differential of a function and the symmetry of evaluating second order partial derivatives.
<templatestyles src="Math_proof/styles.css" />Derivation
Derivation of the Maxwell relation can be deduced from the differential forms of the thermodynamic potentials:
The differential form of internal energy U is
formula_36
This equation resembles total differentials of the form
formula_37
It can be shown, for any equation of the form,
formula_38
that
formula_39
Consider the equation formula_40. We can now immediately see that
formula_41
Since we also know that for functions with continuous second derivatives, the mixed partial derivatives are identical (Symmetry of second derivatives), that is, that
formula_42
we therefore can see that
formula_43
and therefore that
formula_44
Derivation of Maxwell Relation from Helmholtz Free energy
The differential form of Helmholtz free energy is
formula_45
formula_46
From symmetry of second derivatives
formula_47
and therefore that
formula_48
The other two Maxwell relations can be derived from the differential form of the enthalpy formula_49 and the differential form of the Gibbs free energy formula_50 in a similar way. Thus all the Maxwell relations above follow from one of the Gibbs equations.
<templatestyles src="Math_proof/styles.css" />Extended derivation
The combined form of the first and second laws of thermodynamics,
formula_29
is taken as Eq.1. U, S, and V are state functions. Let
formula_51
formula_52
formula_53
with total differentials
formula_54
formula_55
formula_56
Substitute them in Eq.1 and one gets,
formula_57
And also written as,
formula_58
Comparing the coefficients of dx and dy, one gets
formula_59
formula_60
Differentiating the above equations with respect to y and x, respectively, gives Eq.2 and Eq.3. Since U, S, and V are state functions, their differentials are exact, so the mixed second derivatives are equal:
formula_61
formula_62
formula_63
Subtracting Eq.3 from Eq.2 and using these equalities, one gets
formula_64
"Note: The above is called the general expression for Maxwell's thermodynamical relation."
Allow "x" = "S" and "y" = "V" and one gets
formula_65
Allow "x" = "T" and "y" = "V" and one gets
formula_66
Allow "x" = "S" and "y" = "P" and one gets
formula_35
Allow "x" = "T" and "y" = "P" and one gets
formula_67
Allow "x" = "P" and "y" = "V" and one gets
formula_68
Allow "x" = "T" and "y" = "S" and one gets
formula_69
Derivation based on Jacobians.
If we view the first law of thermodynamics,
formula_36
as a statement about differential forms, and take the exterior derivative of this equation, we get
formula_70
since formula_71. This leads to the fundamental identity
formula_72
The physical meaning of this identity can be seen by noting that the two sides are the equivalent ways of writing the work done in an infinitesimal Carnot cycle. An equivalent way of writing the identity is
formula_73
The Maxwell relations now follow directly. For example,
formula_74
The critical step is the penultimate one. The other Maxwell relations follow in similar fashion. For example,
formula_75
General Maxwell relationships.
The above are not the only Maxwell relationships. When other work terms involving other natural variables besides the volume work are considered or when the number of particles is included as a natural variable, other Maxwell relations become apparent. For example, if we have a single-component gas, then the number of particles "N" is also a natural variable of the above four thermodynamic potentials. The Maxwell relationship for the enthalpy with respect to pressure and particle number would then be:
formula_76
where μ is the chemical potential. In addition, there are other thermodynamic potentials besides the four that are commonly used, and each of these potentials will yield a set of Maxwell relations. For example, the grand potential formula_77 yields:
formula_78
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "x_i"
},
{
"math_id": 5,
"text": "x_j"
},
{
"math_id": 6,
"text": "\\frac{\\partial }{\\partial x_j}\\left(\\frac{\\partial \\Phi}{\\partial x_i}\\right)=\n\\frac{\\partial }{\\partial x_i}\\left(\\frac{\\partial \\Phi}{\\partial x_j}\\right)\n"
},
{
"math_id": 7,
"text": "\\frac{1}{2} n(n-1)"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": " \\begin{align}\n+\\left(\\frac{\\partial T}{\\partial V}\\right)_S &=& -\\left(\\frac{\\partial P}{\\partial S}\\right)_V &=& \\frac{\\partial^2 U }{\\partial S \\partial V}\\\\\n\n+\\left(\\frac{\\partial T}{\\partial P}\\right)_S &=& +\\left(\\frac{\\partial V}{\\partial S}\\right)_P &=& \\frac{\\partial^2 H }{\\partial S \\partial P}\\\\\n+\\left(\\frac{\\partial S}{\\partial V}\\right)_T &=& +\\left(\\frac{\\partial P}{\\partial T}\\right)_V &=& -\\frac{\\partial^2 F }{\\partial T \\partial V}\\\\\n\n-\\left(\\frac{\\partial S}{\\partial P}\\right)_T &=& +\\left(\\frac{\\partial V}{\\partial T}\\right)_P &=& \\frac{\\partial^2 G }{\\partial T \\partial P}\n\\end{align}\\,\\!"
},
{
"math_id": 10,
"text": "U(S, V)"
},
{
"math_id": 11,
"text": "H(S, P)"
},
{
"math_id": 12,
"text": "F(T, V)"
},
{
"math_id": 13,
"text": "G(T, P)"
},
{
"math_id": 14,
"text": "\\left(\\frac{\\partial y}{\\partial x}\\right)_z\n=\n1\\left/\\left(\\frac{\\partial x}{\\partial y}\\right)_z\\right."
},
{
"math_id": 15,
"text": "(x, y, z, w)"
},
{
"math_id": 16,
"text": "C^2"
},
{
"math_id": 17,
"text": "\\R^4"
},
{
"math_id": 18,
"text": " \\left(\\frac{\\partial w}{\\partial y}\\right)_{z} = \\left(\\frac{\\partial w}{\\partial x}\\right)_{z} \\left(\\frac{\\partial x}{\\partial y}\\right)_{z} "
},
{
"math_id": 19,
"text": " \\left(\\frac{\\partial x}{\\partial y}\\right)_z \\left(\\frac{\\partial y}{\\partial z}\\right)_x \\left(\\frac{\\partial z}{\\partial x}\\right)_y = -1 "
},
{
"math_id": 20,
"text": "w"
},
{
"math_id": 21,
"text": "ax + by + cz + d = 0"
},
{
"math_id": 22,
"text": "\\left(\\frac{\\partial x}{\\partial y}\\right)_z = -\\frac{b}{a}"
},
{
"math_id": 23,
"text": "(T, S, p, V)"
},
{
"math_id": 24,
"text": "V, S"
},
{
"math_id": 25,
"text": "E"
},
{
"math_id": 26,
"text": " dE = -pdV + TdS "
},
{
"math_id": 27,
"text": "\\partial_{V,S}E = \\partial_{S, V}E"
},
{
"math_id": 28,
"text": " \\left(\\frac{\\partial \\left(\\frac{\\partial E}{\\partial S}\\right)_{V}}{\\partial V}\\right)_{S} = \\left(\\frac{\\partial \\left(\\frac{\\partial E}{\\partial V}\\right)_{S}}{\\partial S}\\right)_{V} "
},
{
"math_id": 29,
"text": "dU = TdS - PdV"
},
{
"math_id": 30,
"text": "0 = \\oint dU = \\oint TdS - \\oint PdV"
},
{
"math_id": 31,
"text": "\\frac{\\partial(P, V)}{\\partial(T, S)} = 1"
},
{
"math_id": 32,
"text": "(x, y)"
},
{
"math_id": 33,
"text": "\\frac{\\partial(P, V)}{\\partial(x, y)} = \\frac{\\partial(T, S)}{\\partial(x, y)} "
},
{
"math_id": 34,
"text": "(x, y) = (P, S)"
},
{
"math_id": 35,
"text": "\\left(\\frac{\\partial T}{\\partial P}\\right)_S = \\left(\\frac{\\partial V}{\\partial S}\\right)_P"
},
{
"math_id": 36,
"text": "dU = T \\, dS - P \\, dV"
},
{
"math_id": 37,
"text": "dz = \\left(\\frac{\\partial z}{\\partial x}\\right)_y\\!dx + \\left(\\frac{\\partial z}{\\partial y}\\right)_x\\! dy"
},
{
"math_id": 38,
"text": "dz = M \\, dx + N \\, dy "
},
{
"math_id": 39,
"text": "M = \\left(\\frac{\\partial z}{\\partial x}\\right)_y, \\quad\n N = \\left(\\frac{\\partial z}{\\partial y}\\right)_x"
},
{
"math_id": 40,
"text": "dU = T \\, dS - P \\, dV"
},
{
"math_id": 41,
"text": "T = \\left(\\frac{\\partial U}{\\partial S}\\right)_V, \\quad\n-P = \\left(\\frac{\\partial U}{\\partial V}\\right)_S"
},
{
"math_id": 42,
"text": "\\frac{\\partial}{\\partial y}\\left(\\frac{\\partial z}{\\partial x}\\right)_y =\n\\frac{\\partial}{\\partial x}\\left(\\frac{\\partial z}{\\partial y}\\right)_x =\n\\frac{\\partial^2 z}{\\partial y \\partial x} = \\frac{\\partial^2 z}{\\partial x \\partial y}"
},
{
"math_id": 43,
"text": " \\frac{\\partial}{\\partial V}\\left(\\frac{\\partial U}{\\partial S}\\right)_V =\n\\frac{\\partial}{\\partial S}\\left(\\frac{\\partial U}{\\partial V}\\right)_S "
},
{
"math_id": 44,
"text": "\\left(\\frac{\\partial T}{\\partial V}\\right)_S = -\\left(\\frac{\\partial P}{\\partial S}\\right)_V"
},
{
"math_id": 45,
"text": "dF = -S \\, dT - P \\, dV"
},
{
"math_id": 46,
"text": "-S = \\left(\\frac{\\partial F}{\\partial T}\\right)_V, \\quad\n-P = \\left(\\frac{\\partial F}{\\partial V}\\right)_T"
},
{
"math_id": 47,
"text": " \\frac{\\partial}{\\partial V}\\left(\\frac{\\partial F}{\\partial T}\\right)_V =\n\\frac{\\partial}{\\partial T}\\left(\\frac{\\partial F}{\\partial V}\\right)_T "
},
{
"math_id": 48,
"text": "\\left(\\frac{\\partial S}{\\partial V}\\right)_T = \\left(\\frac{\\partial P}{\\partial T}\\right)_V"
},
{
"math_id": 49,
"text": "dH = T \\, dS + V \\, dP"
},
{
"math_id": 50,
"text": "dG = V \\, dP - S \\, dT "
},
{
"math_id": 51,
"text": "U = U(x,y)"
},
{
"math_id": 52,
"text": "S = S(x,y)"
},
{
"math_id": 53,
"text": "V = V(x,y)"
},
{
"math_id": 54,
"text": "dU = \\left(\\frac{\\partial U}{\\partial x}\\right)_y\\!dx + \\left(\\frac{\\partial U}{\\partial y}\\right)_x\\!dy"
},
{
"math_id": 55,
"text": "dS = \\left(\\frac{\\partial S}{\\partial x}\\right)_y\\!dx + \\left(\\frac{\\partial S}{\\partial y}\\right)_x\\!dy"
},
{
"math_id": 56,
"text": "dV = \\left(\\frac{\\partial V}{\\partial x}\\right)_y\\!dx + \\left(\\frac{\\partial V}{\\partial y}\\right)_x\\!dy"
},
{
"math_id": 57,
"text": "T\\left(\\frac{\\partial S}{\\partial x}\\right)_y\\!dx +\n T\\left(\\frac{\\partial S}{\\partial y}\\right)_x\\!dy = \\left(\\frac{\\partial U}{\\partial x}\\right)_y\\!dx +\n \\left(\\frac{\\partial U}{\\partial y}\\right)_x\\!dy + P\\left(\\frac{\\partial V}{\\partial x}\\right)_y\\!dx +\n P\\left(\\frac{\\partial V}{\\partial y}\\right)_x\\!dy"
},
{
"math_id": 58,
"text": "\\left(\\frac{\\partial U}{\\partial x}\\right)_y\\!dx +\n \\left(\\frac{\\partial U}{\\partial y}\\right)_x\\!dy = T\\left(\\frac{\\partial S}{\\partial x}\\right)_y\\!dx +\n T\\left(\\frac{\\partial S}{\\partial y}\\right)_x\\!dy - P\\left(\\frac{\\partial V}{\\partial x}\\right)_y\\!dx -\n P\\left(\\frac{\\partial V}{\\partial y}\\right)_x\\!dy"
},
{
"math_id": 59,
"text": "\\left(\\frac{\\partial U}{\\partial x}\\right)_y = T\\left(\\frac{\\partial S}{\\partial x}\\right)_y - P\\left(\\frac{\\partial V}{\\partial x}\\right)_y"
},
{
"math_id": 60,
"text": "\\left(\\frac{\\partial U}{\\partial y}\\right)_x = T\\left(\\frac{\\partial S}{\\partial y}\\right)_x - P\\left(\\frac{\\partial V}{\\partial y}\\right)_x"
},
{
"math_id": 61,
"text": "\\left(\\frac{\\partial^2U}{\\partial y\\partial x}\\right) = \\left(\\frac{\\partial^2U}{\\partial x\\partial y}\\right)"
},
{
"math_id": 62,
"text": "\\left(\\frac{\\partial^2S}{\\partial y\\partial x}\\right) = \\left(\\frac{\\partial^2S}{\\partial x\\partial y}\\right)"
},
{
"math_id": 63,
"text": "\\left(\\frac{\\partial^2V}{\\partial y\\partial x}\\right) = \\left(\\frac{\\partial^2V}{\\partial x\\partial y}\\right)"
},
{
"math_id": 64,
"text": "\\left(\\frac{\\partial T}{\\partial y}\\right)_x \\left(\\frac{\\partial S}{\\partial x}\\right)_y - \\left(\\frac{\\partial P}{\\partial y}\\right)_x \\left(\\frac{\\partial V}{\\partial x}\\right)_y = \\left(\\frac{\\partial T}{\\partial x}\\right)_y \\left(\\frac{\\partial S}{\\partial y}\\right)_x - \\left(\\frac{\\partial P}{\\partial x}\\right)_y \\left(\\frac{\\partial V}{\\partial y}\\right)_x"
},
{
"math_id": 65,
"text": "\\left(\\frac{\\partial T}{\\partial V}\\right)_S = -\\left(\\frac{\\partial P}{\\partial S}\\right)_V"
},
{
"math_id": 66,
"text": "\\left(\\frac{\\partial S}{\\partial V}\\right)_T = \\left(\\frac{\\partial P}{\\partial T}\\right)_V"
},
{
"math_id": 67,
"text": "\\left(\\frac{\\partial S}{\\partial P}\\right)_T = -\\left(\\frac{\\partial V}{\\partial T}\\right)_P"
},
{
"math_id": 68,
"text": "\\left(\\frac{\\partial T}{\\partial P}\\right)_V \\left(\\frac{\\partial S}{\\partial V}\\right)_P - \\left(\\frac{\\partial T}{\\partial V}\\right)_P \\left(\\frac{\\partial S}{\\partial P}\\right)_V = 1"
},
{
"math_id": 69,
"text": "\\left(\\frac{\\partial P}{\\partial T}\\right)_S \\left(\\frac{\\partial V}{\\partial S}\\right)_T - \\left(\\frac{\\partial P}{\\partial S}\\right)_T \\left(\\frac{\\partial V}{\\partial T}\\right)_S = 1"
},
{
"math_id": 70,
"text": " 0 = dT \\, dS - dP \\, dV"
},
{
"math_id": 71,
"text": " d(dU) = 0"
},
{
"math_id": 72,
"text": " dP \\, dV = dT \\, dS. "
},
{
"math_id": 73,
"text": " \\frac{\\partial(T,S)}{\\partial(P,V)} = 1. "
},
{
"math_id": 74,
"text": " \\left(\\frac{\\partial S}{\\partial V} \\right)_T\n = \\frac{\\partial(T,S)}{\\partial(T,V)}\n = \\frac{\\partial(P,V)}{\\partial(T,V)}\n = \\left(\\frac{\\partial P}{\\partial T} \\right)_V,\n"
},
{
"math_id": 75,
"text": " \\left(\\frac{\\partial T}{\\partial V} \\right)_S\n = \\frac{\\partial(T,S)}{\\partial(V,S)}\n = \\frac{\\partial(P,V)}{\\partial(V,S)}\n = - \\left(\\frac{\\partial P}{\\partial S} \\right)_V.\n"
},
{
"math_id": 76,
"text": "\n\\left(\\frac{\\partial \\mu}{\\partial P}\\right)_{S, N} =\n\\left(\\frac{\\partial V}{\\partial N}\\right)_{S, P}\\qquad=\n\\frac{\\partial^2 H }{\\partial P \\partial N}\n"
},
{
"math_id": 77,
"text": "\\Omega(\\mu, V, T)"
},
{
"math_id": 78,
"text": " \\begin{align}\n\\left(\\frac{\\partial N}{\\partial V}\\right)_{\\mu, T} &=& \\left(\\frac{\\partial P}{\\partial \\mu}\\right)_{V,T} &=& -\\frac{\\partial^2 \\Omega }{\\partial \\mu \\partial V}\\\\\n\\left(\\frac{\\partial N}{\\partial T}\\right)_{\\mu, V} &=& \\left(\\frac{\\partial S}{\\partial \\mu}\\right)_{V,T} &=& -\\frac{\\partial^2 \\Omega }{\\partial \\mu \\partial T}\\\\\n\\left(\\frac{\\partial P}{\\partial T}\\right)_{\\mu, V} &=& \\left(\\frac{\\partial S}{\\partial V}\\right)_{\\mu,T} &=& -\\frac{\\partial^2 \\Omega }{\\partial V \\partial T}\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=921525 |
9217017 | Transition state theory | Theory describing the reaction rates of elementary chemical reactions
In chemistry, transition state theory (TST) explains the reaction rates of elementary chemical reactions. The theory assumes a special type of chemical equilibrium (quasi-equilibrium) between reactants and activated transition state complexes.
TST is used primarily to understand qualitatively how chemical reactions take place. TST has been less successful in its original goal of calculating absolute reaction rate constants because the calculation of absolute reaction rates requires precise knowledge of potential energy surfaces, but it has been successful in calculating the standard enthalpy of activation (Δ"H"‡, also written Δ‡"H"ɵ), the standard entropy of activation (Δ"S"‡ or Δ‡"S"ɵ), and the standard Gibbs energy of activation (Δ"G"‡ or Δ‡"G"ɵ) for a particular reaction if its rate constant has been experimentally determined. (The ‡ notation refers to the value of interest "at the transition state"; Δ"H"‡ is the difference between the enthalpy of the transition state and that of the reactants.)
This theory was developed simultaneously in 1935 by Henry Eyring, then at Princeton University, and by Meredith Gwynne Evans and Michael Polanyi of the University of Manchester. TST is also referred to as "activated-complex theory", "absolute-rate theory", and "theory of absolute reaction rates".
Before the development of TST, the Arrhenius rate law was widely used to determine energies for the reaction barrier. The Arrhenius equation derives from empirical observations and ignores any mechanistic considerations, such as whether one or more reactive intermediates are involved in the conversion of a reactant to a product. Therefore, further development was necessary to understand the two parameters associated with this law, the pre-exponential factor ("A") and the activation energy ("E"a). TST, which led to the Eyring equation, successfully addresses these two issues; however, 46 years elapsed between the publication of the Arrhenius rate law, in 1889, and the Eyring equation derived from TST, in 1935. During that period, many scientists and researchers contributed significantly to the development of the theory.
Theory.
The basic ideas behind transition state theory are as follows:
Development.
In the development of TST, three approaches were taken as summarized below
Thermodynamic treatment.
In 1884, Jacobus van 't Hoff proposed the Van 't Hoff equation describing the temperature dependence of the equilibrium constant for a reversible reaction:
<chem> {A} <=> {B} </chem>
formula_0
where Δ"U" is the change in internal energy, "K" is the equilibrium constant of the reaction, "R" is the universal gas constant, and "T" is thermodynamic temperature. Based on experimental work, in 1889, Svante Arrhenius proposed a similar expression for the rate constant of a reaction, given as follows:
formula_1
Integration of this expression leads to the Arrhenius equation
formula_2
where "k" is the rate constant. "A" was referred to as the frequency factor (now called the pre-exponential coefficient), and "E"a is regarded as the activation energy. By the early 20th century many had accepted the Arrhenius equation, but the physical interpretation of "A" and "E"a remained vague. This led many researchers in chemical kinetics to offer different theories of how chemical reactions occurred in an attempt to relate "A" and "E"a to the molecular dynamics directly responsible for chemical reactions.
In 1910, French chemist René Marcelin introduced the concept of standard Gibbs energy of activation. His relation can be written as
formula_3
At about the same time as Marcelin was working on his formulation, Dutch chemists Philip Abraham Kohnstamm, Frans Eppo Cornelis Scheffer, and Wiedold Frans Brandsma introduced standard entropy of activation and the standard enthalpy of activation. They proposed the following rate constant equation
formula_4
However, the nature of the constant was still unclear.
Kinetic-theory treatment.
In the early 1900s, Max Trautz and William Lewis studied reaction rates using collision theory, based on the kinetic theory of gases. Collision theory treats reacting molecules as hard spheres colliding with one another; it neglects entropy changes, since it assumes that collisions between molecules are completely elastic.
Lewis applied his treatment to the following reaction and obtained good agreement with experimental result.
2HI → H2 + I2
However, later when the same treatment was applied to other reactions, there were large discrepancies between theoretical and experimental results.
Statistical-mechanical treatment.
Statistical mechanics played a significant role in the development of TST. However, the application of statistical mechanics to TST developed only slowly, even though by the mid-19th century James Clerk Maxwell, Ludwig Boltzmann, and Leopold Pfaundler had already published several papers discussing reaction equilibria and rates in terms of molecular motions and the statistical distribution of molecular speeds.
It was not until 1912 that the French chemist A. Berthoud used the Maxwell–Boltzmann distribution law to obtain an expression for the rate constant.
formula_5
where "a" and "b" are constants related to energy terms.
Two years later, René Marcelin made an essential contribution by treating the progress of a chemical reaction as a motion of a point in phase space. He then applied Gibbs' statistical-mechanical procedures and obtained an expression similar to the one he had obtained earlier from thermodynamic consideration.
In 1915, another important contribution came from British physicist James Rice. Based on his statistical analysis, he concluded that the rate constant is proportional to the "critical increment". His ideas were further developed by Richard Chace Tolman. In 1919, Austrian physicist Karl Ferdinand Herzfeld applied statistical mechanics to the equilibrium constant and kinetic theory to the rate constant of the reverse reaction, "k"−1, for the reversible dissociation of a diatomic molecule.
<chem>AB <=>[k_1][k_{-1}] {A} + {B}</chem>
He obtained the following equation for the rate constant of the forward reaction
formula_6
where formula_7 is the dissociation energy at absolute zero, "k"B is the Boltzmann constant, "h" is the Planck constant, "T" is thermodynamic temperature, and formula_8 is the vibrational frequency of the bond.
This expression is very important since it is the first time that the factor "k"B"T"/"h", which is a critical component of TST, has appeared in a rate equation.
In 1920, the American chemist Richard Chace Tolman further developed Rice's idea of the critical increment. He concluded that the critical increment (now referred to as activation energy) of a reaction is equal to the average energy of all molecules undergoing reaction minus the average energy of all reactant molecules.
Potential energy surfaces.
The concept of potential energy surface was very important in the development of TST. The foundation of this concept was laid by René Marcelin in 1913. He theorized that the progress of a chemical reaction could be described as a point in a potential energy surface with coordinates in atomic momenta and distances.
In 1931, Henry Eyring and Michael Polanyi constructed a potential energy surface for the reaction below. This surface is a three-dimensional diagram based on quantum-mechanical principles as well as experimental data on vibrational frequencies and energies of dissociation.
H + H2 → H2 + H
A year after the Eyring and Polanyi construction, Hans Pelzer and Eugene Wigner made an important contribution by following the progress of a reaction on a potential energy surface. The importance of this work was that it was the first time that the concept of col or saddle point in the potential energy surface was discussed. They concluded that the rate of a reaction is determined by the motion of the system through that col.
It has been typically assumed that the rate-limiting or lowest saddle point is located on the same energy surface as the initial ground state. However, it was recently found that this could be incorrect for processes occurring in semiconductors and insulators, where an initial excited state could go through a saddle point lower than the one on the surface of the initial ground state.
Kramers theory of reaction rates.
By modeling reactions as Langevin motion along a one-dimensional reaction coordinate, Hendrik Kramers was able to derive a relationship between the shape of the potential energy surface along the reaction coordinate and the transition rates of the system. The formulation relies on approximating the potential energy landscape as a series of harmonic wells. In a two-state system, there will be three wells: a well for state A, an upside-down well representing the potential energy barrier, and a well for state B.
In the overdamped (or "diffusive") regime, the transition rate from state A to B is related to the resonant frequency of the wells via
formula_9
where formula_10 is the frequency of the well for state A, formula_11 is the frequency of the barrier well, formula_12 is the viscous damping, formula_13 is the energy of the top of the barrier, formula_14 is the energy of bottom of the well for state A, and formula_15 is the temperature of the system times the Boltzmann constant.
For general damping (overdamped or underdamped), there is a similar formula.
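A minimal numerical sketch of the overdamped expression follows; every parameter value (well and barrier frequencies, damping, barrier height) is assumed purely for illustration.

```python
# Overdamped Kramers rate, k = (omega_a * omega_H) / (2*pi*gamma) * exp(-dE / (k_B*T)),
# evaluated for invented parameters (illustration only).
import math

k_B = 1.380649e-23     # J/K
T = 300.0              # K

omega_a = 1.0e12       # rad/s, curvature of the well for state A
omega_H = 5.0e11       # rad/s, curvature of the inverted barrier well
gamma = 1.0e13         # s^-1, viscous damping
dE = 10 * k_B * T      # barrier height E_H - E_A (here 10 k_B T)

rate = (omega_a * omega_H) / (2 * math.pi * gamma) * math.exp(-dE / (k_B * T))
print(f"k(A -> B) = {rate:.2e} s^-1")   # about 3.6e5 s^-1 for these values
```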
Justification for the Eyring equation.
One of the most important features introduced by Eyring, Polanyi and Evans was the notion that activated complexes are in quasi-equilibrium with the reactants. The rate is then directly proportional to the concentration of these complexes multiplied by the frequency ("k"B"T"/"h") with which they are converted into products. Below, a non-rigorous plausibility argument is given for the functional form of the Eyring equation. However, the key statistical mechanical factor "k"B"T"/"h" will not be justified, and the argument presented below does not constitute a true "derivation" of the Eyring equation.
Quasi-equilibrium assumption.
Quasi-equilibrium is different from classical chemical equilibrium, but can be described using a similar thermodynamic treatment. Consider the reaction below
<chem> {A} + {B} <=> {[AB]^\ddagger} -> {P} </chem>
where complete equilibrium is achieved between all the species in the system including activated complexes, [AB]‡ . Using statistical mechanics, concentration of [AB]‡ can be calculated in terms of the concentration of A and B.
TST assumes that even when the reactants and products are not in equilibrium with each other, the activated complexes are in quasi-equilibrium with the reactants. As illustrated in Figure 2, at any instant of time, there are a few activated complexes, and some were reactant molecules in the immediate past, which are designated [AB"l"]‡ (since they are moving from left to right). The remainder of them were product molecules in the immediate past ([AB"r"]‡).
In TST, it is assumed that the flux of activated complexes in the two directions are independent of each other. That is, if all the product molecules were suddenly removed from the reaction system, the flow of [AB"r"]‡ stops, but there is still a flow from left to right. Hence, to be technically correct, the reactants are in equilibrium only with [AB"l"]‡, the activated complexes that were reactants in the immediate past.
Plausibility argument.
The activated complexes do not follow a Boltzmann distribution of energies, but an "equilibrium constant" can still be derived from the distribution they do follow. The equilibrium constant "K"‡ for the quasi-equilibrium can be written as
formula_16.
So, the chemical activity of the transition state AB‡ is
formula_17.
Therefore, the rate equation for the production of product is
formula_18,
where the rate constant "k" is given by
formula_19.
Here, "k"‡ is directly proportional to the frequency of the vibrational mode responsible for converting the activated complex to the product; the frequency of this vibrational mode is formula_8. Every vibration does not necessarily lead to the formation of product, so a proportionality constant formula_20, referred to as the transmission coefficient, is introduced to account for this effect. So "k"‡ can be rewritten as
formula_21.
For the equilibrium constant "K"‡ , statistical mechanics leads to a temperature dependent expression given as
formula_22 (formula_23).
Combining the new expressions for "k"‡ and "K"‡, a new rate constant expression can be written, which is given as
formula_24.
Since, by definition, Δ"G"‡ = Δ"H"‡ – "T"Δ"S"‡, the rate constant expression can be expanded to give an alternative form of the Eyring equation:
formula_25.
For correct dimensionality, the equation needs to have an extra factor of ("c"⊖)1–"m" for reactions that are not unimolecular:
formula_26,
where "c"⊖ is the standard concentration 1 mol⋅L−1 and "m" is the molecularity.
Inferences from TST and relationship with Arrhenius theory.
The rate constant expression from transition state theory can be used to calculate the Δ"G"‡, Δ"H"‡, Δ"S"‡, and even Δ"V"‡ (the volume of activation) using experimental rate data. These so-called "activation parameters" give insight into the nature of a transition state, including energy content and degree of order, compared to the starting materials and has become a standard tool for elucidation of reaction mechanisms in physical organic chemistry. The free energy of activation, Δ"G"‡, is "defined" in transition state theory to be the energy such that formula_27 holds. The parameters Δ"H"‡ and Δ"S"‡ can then be inferred by determining Δ"G"‡ = Δ"H"‡ – "T"Δ"S"‡ at different temperatures.
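In practice Δ"H"‡ and Δ"S"‡ are commonly extracted from a linear "Eyring plot" of ln("k"/"T") against 1/"T", whose slope is −Δ"H"‡/"R" and whose intercept is ln("k"B/"h") + Δ"S"‡/"R" (for a unimolecular step with κ = 1). The sketch below uses synthetic rate data generated from assumed activation parameters, purely to illustrate the fit and the relation "E"a = Δ"H"‡ + "RT" discussed next:

```python
# Eyring plot: recover activation parameters from temperature-dependent rate constants.
# The "data" are synthetic, generated from assumed parameters (75 kJ/mol, -40 J/(mol K)).
import math
import numpy as np

k_B, h, R = 1.380649e-23, 6.62607015e-34, 8.314

temps = np.array([280.0, 290.0, 300.0, 310.0, 320.0])           # K
dH_true, dS_true = 75e3, -40.0                                  # J/mol, J/(mol K)
ks = (k_B * temps / h) * np.exp(dS_true / R) * np.exp(-dH_true / (R * temps))

slope, intercept = np.polyfit(1.0 / temps, np.log(ks / temps), 1)
dH_fit = -slope * R                                   # J/mol
dS_fit = (intercept - math.log(k_B / h)) * R          # J/(mol K)

print(f"dH = {dH_fit/1000:.1f} kJ/mol, dS = {dS_fit:.1f} J/(mol K)")
print(f"Ea at 300 K = {(dH_fit + R*300)/1000:.1f} kJ/mol")   # Ea = dH + RT (unimolecular)
```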
Because the functional form of the Eyring and Arrhenius equations are similar, it is tempting to relate the activation parameters with the activation energy and pre-exponential factors of the Arrhenius treatment. However, the Arrhenius equation was derived from experimental data and models the macroscopic rate using only two parameters, irrespective of the number of transition states in a mechanism. In contrast, activation parameters can be found for every transition state of a multistep mechanism, at least in principle. Thus, although the enthalpy of activation, Δ"H"‡, is often equated with Arrhenius's activation energy "E"a, they are not equivalent. For a condensed-phase (e.g., solution-phase) or unimolecular gas-phase reaction step, "E"a = Δ"H"‡ + "RT". For other gas-phase reactions, "E"a = Δ"H"‡ + (1 − Δ"n"‡)"RT", where Δ"n"‡ is the change in the number of molecules on forming the transition state. (Thus, for a bimolecular gas-phase process, "E"a = Δ"H"‡ + 2"RT.")
The entropy of activation, Δ"S"‡, gives the extent to which the transition state (including any solvent molecules involved in or perturbed by the reaction) is more disordered compared to the starting materials. It offers a concrete interpretation of the pre-exponential factor "A" in the Arrhenius equation; for a unimolecular, single-step process, the rough equivalence "A" = ("k"B"T"/"h") exp(1 + Δ"S"‡/"R") (or "A" = ("k"B"T"/"h") exp(2 + Δ"S"‡/"R") for bimolecular gas-phase reactions) holds. For a unimolecular process, a negative value indicates a more ordered, rigid transition state than the ground state, while a positive value reflects a transition state with looser bonds and/or greater conformational freedom. It is important to note that, for reasons of dimensionality, reactions that are bimolecular or higher have Δ"S"‡ values that depend on the standard state chosen (standard concentration, in particular). For most recent publications, 1 mol L−1 or 1 molar is chosen. Since this choice is a human construct, based on our definitions of units for molar quantity and volume, the magnitude and sign of Δ"S"‡ for a single reaction are meaningless by themselves; only comparisons of the value with that of a reference reaction of "known" (or assumed) mechanism, made at the same standard state, are valid.
The volume of activation is found by taking the partial derivative of Δ"G"‡ with respect to pressure (holding temperature constant): formula_28. It gives information regarding the size, and hence, degree of bonding at the transition state. An associative mechanism will likely have a negative volume of activation, while a dissociative mechanism will likely have a positive value.
Given the relationship between equilibrium constant and the forward and reverse rate constants, formula_29, the Eyring equation implies that
formula_30.
Another implication of TST is the Curtin–Hammett principle: the product ratio of a kinetically-controlled reaction from R to two products A and B will reflect the difference in the energies of the respective transition states leading to product, assuming there is a single transition state to each one:
formula_31 (formula_32).
(In the expression for ΔΔ"G"‡ above, there is an extra formula_33 term if A and B are formed from two different species SA and SB in equilibrium.)
For a thermodynamically-controlled reaction, every difference of "RT" ln 10 ≈ (1.987 × 10−3 kcal/mol K)(298 K)(2.303) ≈ 1.36 kcal/mol in the free energies of products A and B results in a factor of 10 in selectivity at room temperature (298 K), a principle known as the "1.36 rule":
formula_34 (formula_35).
Analogously, every 1.36 kcal/mol difference in the free energy of activation results in a factor of 10 in selectivity for a kinetically-controlled process at room temperature:
formula_36 (formula_37).
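The arithmetic behind this "1.36 rule" is shown in the short sketch below (added for illustration only; the ΔΔ"G"‡ values are arbitrary):

```python
# Selectivity from a free-energy difference at 298 K: ratio = exp(ddG / (R*T)).
import math

R_kcal = 1.987e-3      # kcal mol^-1 K^-1
T = 298.0              # K

for ddG in (0.68, 1.36, 2.72, 4.08):   # kcal/mol (arbitrary example values)
    ratio = math.exp(ddG / (R_kcal * T))
    print(f"ddG = {ddG:4.2f} kcal/mol  ->  ratio ~ {ratio:7.1f} : 1")
# 1.36 kcal/mol gives ~10:1, 2.72 gives ~100:1, and so on.
```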
Using the Eyring equation, there is a straightforward relationship between Δ"G"‡, first-order rate constants, and reaction half-life at a given temperature. At 298 K, a reaction with Δ"G"‡ = 23 kcal/mol has a rate constant of "k ≈" 8.4 × 10−5 s−1 and a half life of "t"1/2 ≈ 2.3 hours, figures that are often rounded to "k ~" 10−4 s−1 and "t"1/2 ~ 2 h. Thus, a free energy of activation of this magnitude corresponds to a typical reaction that proceeds to completion overnight at room temperature. For comparison, the cyclohexane chair flip has a Δ"G"‡ of about 11 kcal/mol with "k ~" 105 s−1, making it a dynamic process that takes place rapidly (faster than the NMR timescale) at room temperature. At the other end of the scale, the "cis/trans" isomerization of 2-butene has a Δ"G"‡ of about 60 kcal/mol, corresponding to "k ~" 10−31 s−1 at 298 K. This is a negligible rate: the half-life is 12 orders of magnitude longer than the age of the universe.
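The figures quoted here can be reproduced with a few lines of arithmetic (a sketch added for illustration, with κ taken as 1):

```python
# Reproduce the quoted numbers for a first-order reaction with dG = 23 kcal/mol at 298 K.
import math

k_B, h = 1.380649e-23, 6.62607015e-34   # J/K, J s
R_kcal = 1.987e-3                       # kcal mol^-1 K^-1
T = 298.0                               # K

dG = 23.0                               # free energy of activation, kcal/mol
k = (k_B * T / h) * math.exp(-dG / (R_kcal * T))
t_half = math.log(2) / k

print(f"k = {k:.1e} s^-1")              # ~8.4e-5 s^-1
print(f"t1/2 = {t_half/3600:.1f} h")    # ~2.3 h
```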
Limitations.
In general, TST has provided researchers with a conceptual foundation for understanding how chemical reactions take place. Even though the theory is widely applicable, it does have limitations. For example, when applied to each elementary step of a multi-step reaction, the theory assumes that each intermediate is long-lived enough to reach a Boltzmann distribution of energies before continuing to the next step. When the intermediates are very short-lived, TST fails. In such cases, the momentum of the reaction trajectory from the reactants to the intermediate can carry forward to affect product selectivity. An example of such a reaction is the ring closure of cyclopentane biradicals generated from the gas-phase thermal decomposition of 2,3-diazabicyclo[2.2.1]hept-2-ene.
Transition state theory is also based on the assumption that atomic nuclei behave according to classical mechanics. It is assumed that unless atoms or molecules collide with enough energy to form the transition structure, then the reaction does not occur. However, according to quantum mechanics, for any barrier with a finite amount of energy, there is a possibility that particles can still tunnel across the barrier. With respect to chemical reactions this means that there is a chance that molecules will react, even if they do not collide with enough energy to overcome the energy barrier. While this effect is negligible for reactions with large activation energies, it becomes an important phenomenon for reactions with relatively low energy barriers, since the tunneling probability increases with decreasing barrier height.
Transition state theory fails for some reactions at high temperature. The theory assumes the reaction system will pass over the lowest energy saddle point on the potential energy surface. While this description is consistent for reactions occurring at relatively low temperatures, at high temperatures, molecules populate higher energy vibrational modes; their motion becomes more complex and collisions may lead to transition states far away from the lowest energy saddle point. This deviation from transition state theory is observed even in the simple exchange reaction between diatomic hydrogen and a hydrogen radical.
Given these limitations, several alternatives to transition state theory have been proposed. A brief discussion of these theories follows.
Generalized transition state theory.
Any form of TST, such as microcanonical variational TST, canonical variational TST, and improved canonical variational TST, in which the transition state is not necessarily located at the saddle point, is referred to as generalized transition state theory.
Microcanonical variational TST.
A fundamental flaw of transition state theory is that it counts any crossing of the transition state as a reaction from reactants to products or vice versa. In reality, a molecule may cross this "dividing surface" and turn around, or cross multiple times and only truly react once. As such, unadjusted TST is said to provide an upper bound for the rate coefficients. To correct for this, variational transition state theory varies the location of the dividing surface that defines a successful reaction in order to minimize the rate for each fixed energy. The rate expressions obtained in this microcanonical treatment can be integrated over the energy, taking into account the statistical distribution over energy states, so as to give the canonical, or thermal rates.
Canonical variational TST.
A development of transition state theory in which the position of the dividing surface is varied so as to minimize the rate constant at a given temperature.
Improved canonical variational TST.
A modification of canonical variational transition state theory in which, for energies below the threshold energy, the position of the dividing surface is taken to be that of the microcanonical threshold energy. This forces the contributions to rate constants to be zero if they are below the threshold energy. A compromise dividing surface is then chosen so as to minimize the contributions to the rate constant made by reactants having higher energies.
Nonadiabatic TST.
An expansion of TST to the reactions when two spin-states are involved simultaneously is called nonadiabatic transition state theory (NA-TST).
Semiclassical TST.
Using vibrational perturbation theory, effects such as tunnelling and variational effects can be accounted for within the SCTST formalism.
Applications.
Enzymatic reactions.
Enzymes catalyze chemical reactions at rates that are astounding relative to uncatalyzed chemistry at the same reaction conditions. Each catalytic event requires a minimum of three or often more steps, all of which occur within the few milliseconds that characterize typical enzymatic reactions. According to transition state theory, the smallest fraction of the catalytic cycle is spent in the most important step, that of the transition state. The original proposals of absolute reaction rate theory for chemical reactions defined the transition state as a distinct species in the reaction coordinate that determined the absolute reaction rate. Soon thereafter, Linus Pauling proposed that the powerful catalytic action of enzymes could be explained by specific tight binding to the transition state species. Because reaction rate is proportional to the fraction of the reactant in the transition state complex, the enzyme was proposed to increase the concentration of the reactive species.
This proposal was formalized by Wolfenden and coworkers at University of North Carolina at Chapel Hill, who hypothesized that the rate increase imposed by enzymes is proportional to the affinity of the enzyme for the transition state structure relative to the Michaelis complex. Because enzymes typically increase the non-catalyzed reaction rate by factors of 106-1026, and Michaelis complexes often have dissociation constants in the range of 10−3-10−6 M, it is proposed that transition state complexes are bound with dissociation constants in the range of 10−14 -10−23 M. As substrate progresses from the Michaelis complex to product, chemistry occurs by enzyme-induced changes in electron distribution in the substrate. Enzymes alter the electronic structure by protonation, proton abstraction, electron transfer, geometric distortion, hydrophobic partitioning, and interaction with Lewis acids and bases. Analogs that resemble the transition state structures should therefore provide the most powerful noncovalent inhibitors known.
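The order of magnitude of the implied transition-state affinity follows from dividing a typical Michaelis dissociation constant by the rate enhancement; the sketch below uses illustrative values lying within the ranges quoted above:

```python
# Rough Wolfenden-style estimate: Kd(transition state) ~ Kd(Michaelis) / rate enhancement.
rate_enhancement = 1e15     # assumed enzymatic rate increase over the uncatalyzed reaction
Kd_michaelis = 1e-5         # M, assumed substrate (Michaelis) dissociation constant

Kd_ts = Kd_michaelis / rate_enhancement
print(f"Estimated transition-state Kd ~ {Kd_ts:.0e} M")   # ~1e-20 M
```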
All chemical transformations pass through an unstable structure called the transition state, which is poised between the chemical structures of the substrates and products. The transition states for chemical reactions are proposed to have lifetimes near 10−13 seconds, on the order of the time of a single bond vibration. No physical or spectroscopic method is available to directly observe the structure of the transition state for enzymatic reactions, yet transition state structure is central to understanding enzyme catalysis since enzymes work by lowering the activation energy of a chemical transformation.
It is now accepted that enzymes function to stabilize transition states lying between reactants and products, and that they would therefore be expected to bind strongly any inhibitor that closely resembles such a transition state. Substrates and products often participate in several enzyme catalyzed reactions, whereas the transition state tends to be characteristic of one particular enzyme, so that such an inhibitor tends to be specific for that particular enzyme. The identification of numerous transition state inhibitors supports the transition state stabilization hypothesis for enzymatic catalysis.
Currently there is a large number of enzymes known to interact with transition state analogs, most of which have been designed with the intention of inhibiting the target enzyme. Examples include HIV-1 protease, racemases, β-lactamases, metalloproteinases, cyclooxygenases and many others.
Adsorption on surfaces and reactions on surfaces.
Desorption as well as reactions on surfaces are straightforward to describe with transition state theory. Analysis of adsorption to a surface from a liquid phase can present a challenge due to lack of ability to assess the concentration of the solute near the surface. When full details are not available, it has been proposed that reacting species' concentrations should be normalized to the concentration of active surface sites, an approximation called the surface reactant equi-density approximation (SREA).
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{d\\ln K}{dT} = \\frac{\\Delta U}{RT^{2}}"
},
{
"math_id": 1,
"text": "\\frac{d\\ln k}{dT} = \\frac{\\Delta E}{RT^{2}}"
},
{
"math_id": 2,
"text": "k = Ae^{-E_a/RT}"
},
{
"math_id": 3,
"text": "k\\propto\\exp\\left(\\frac{-\\Delta^\\ddagger G^\\ominus}{RT}\\right)"
},
{
"math_id": 4,
"text": "k\\propto\\exp\\left(\\frac{\\Delta^\\ddagger S^\\ominus}{R}\\right)\\exp\\left(\\frac{-\\Delta^\\ddagger H^\\ominus}{RT}\\right)"
},
{
"math_id": 5,
"text": " \\frac{d\\ln k}{dT} = \\frac{a-bT}{RT^2} "
},
{
"math_id": 6,
"text": " k_1 = \\frac{k_\\mathrm{B}T}{h}\\left(1-e^{-\\frac{h\\nu}{k_\\text{B}T}}\\right)\\exp\\left(\\frac{-E^\\ominus}{RT}\\right) "
},
{
"math_id": 7,
"text": "\\textstyle E^\\ominus"
},
{
"math_id": 8,
"text": "\\nu"
},
{
"math_id": 9,
"text": " k^{A\\rightarrow B} = \\frac{\\omega_a\\omega_H}{2\\pi\\gamma}\\exp\\left(-\\frac{E_H-E_A}{k_\\text{B}T}\\right) "
},
{
"math_id": 10,
"text": " \\omega_a "
},
{
"math_id": 11,
"text": " \\omega_H "
},
{
"math_id": 12,
"text": " \\gamma "
},
{
"math_id": 13,
"text": " E_H "
},
{
"math_id": 14,
"text": " E_A "
},
{
"math_id": 15,
"text": " k_\\text{B}T "
},
{
"math_id": 16,
"text": "K^{\\ddagger} = \\frac\\ce{[AB]^\\ddagger}\\ce{[A][B]}"
},
{
"math_id": 17,
"text": "[\\ce{AB}]^{\\ddagger} = K^{\\ddagger}[\\ce{A}][\\ce{B}] "
},
{
"math_id": 18,
"text": "\\frac{d[\\ce P]}{dt} = k^{\\ddagger}[\\ce{AB}]^{\\Dagger} = k^{\\ddagger}K^{\\Dagger }[\\ce A][\\ce B] = k[\\ce A][\\ce B]"
},
{
"math_id": 19,
"text": " k = k^{\\Dagger}K^{\\Dagger}"
},
{
"math_id": 20,
"text": "\\kappa"
},
{
"math_id": 21,
"text": "k^{\\Dagger } = \\kappa\\nu "
},
{
"math_id": 22,
"text": "K^{\\Dagger } = \\frac{k_\\text{B}T}{h\\nu} K^{\\Dagger '} "
},
{
"math_id": 23,
"text": "K^{\\Dagger '} =: e^{\\frac{- \\Delta G^{\\Dagger }}{RT}}"
},
{
"math_id": 24,
"text": "k = k^{\\Dagger }K^{\\Dagger } =\\kappa\\frac{k_\\text{B}T}{h}e^{\\frac{- \\Delta G^{\\Dagger }}{RT}}=\\kappa\\frac{k_\\text{B}T}{h}K^{\\ddagger'}"
},
{
"math_id": 25,
"text": "k = \\kappa\\frac{k_\\text{B}T}{h}e^{\\frac{\\Delta S^{\\Dagger }}{R}}e^{\\frac{- \\Delta H^{\\Dagger }}{RT}}"
},
{
"math_id": 26,
"text": "k = \\kappa\\frac{k_\\text{B}T}{h}e^{\\frac{\\Delta S^{\\Dagger }}{R}}e^{\\frac{- \\Delta H^{\\Dagger }}{RT}}(c^\\ominus)^{1-m}"
},
{
"math_id": 27,
"text": "\\Delta G^{\\Dagger } = -RT \\ln K^{\\Dagger '} "
},
{
"math_id": 28,
"text": "\\Delta V^\\ddagger :=(\\partial \\Delta G^{\\ddagger}/\\partial P)_T"
},
{
"math_id": 29,
"text": "K=k_1/k_{-1}"
},
{
"math_id": 30,
"text": "\\Delta G^\\circ =\\Delta G^\\ddagger_{1}-\\Delta G^\\ddagger_{-1}"
},
{
"math_id": 31,
"text": "\\frac{[\\mathrm{A}]}{[\\mathrm{B}]}=e^{-\\Delta\\Delta G^\\ddagger/RT}"
},
{
"math_id": 32,
"text": "\\Delta\\Delta G^\\ddagger=\\Delta G^\\ddagger_{\\mathrm{A}}-\\Delta G^\\ddagger_{\\mathrm{B}}+\\Delta G^\\circ"
},
{
"math_id": 33,
"text": "\\Delta G^\\circ=G_{\\mathrm{S}_\\mathrm{A}}^{\\circ}-G_{\\mathrm{S}_\\mathrm{B}}^{\\circ}"
},
{
"math_id": 34,
"text": "\\frac{[\\mathrm{A}]}{[\\mathrm{B}]}=10^{-\\Delta G^\\circ/(1.36\\ \\mathrm{kcal/mol})}"
},
{
"math_id": 35,
"text": "\\Delta G^\\circ=G_{\\mathrm{A}}^{\\circ}-G_{\\mathrm{B}}^{\\circ}"
},
{
"math_id": 36,
"text": "\\frac{[\\mathrm{A}]}{[\\mathrm{B}]}=10^{-\\Delta\\Delta G^\\ddagger/(1.36\\ \\mathrm{kcal/mol})}"
},
{
"math_id": 37,
"text": "\\Delta\\Delta G^\\ddagger=\\Delta G^\\ddagger_{\\mathrm{A}}-\\Delta G^\\ddagger_{\\mathrm{B}}"
}
] | https://en.wikipedia.org/wiki?curid=9217017 |
92193 | Circular dichroism | Dichroism with circularly polarized light
Circular dichroism (CD) is dichroism involving circularly polarized light, i.e., the differential absorption of left- and right-handed light. Left-hand circular (LHC) and right-hand circular (RHC) polarized light represent two possible spin angular momentum states for a photon, and so circular dichroism is also referred to as dichroism for spin angular momentum. This phenomenon was discovered by Jean-Baptiste Biot, Augustin Fresnel, and Aimé Cotton in the first half of the 19th century. Circular dichroism and circular birefringence are manifestations of optical activity. It is exhibited in the absorption bands of optically active chiral molecules. CD spectroscopy has a wide range of applications in many different fields. Most notably, UV CD is used to investigate the secondary structure of proteins. UV/Vis CD is used to investigate charge-transfer transitions. Near-infrared CD is used to investigate geometric and electronic structure by probing metal d→d transitions. Vibrational circular dichroism, which uses light from the infrared energy region, is used for structural studies of small organic molecules, and most recently proteins and DNA.
Physical principles.
Circular polarization of light.
Electromagnetic radiation consists of an electric formula_0 and magnetic formula_1 field that oscillate perpendicular to one another and to the propagating direction, a transverse wave. While linearly polarized light occurs when the electric field vector oscillates only in one plane, circularly polarized light occurs when the direction of the electric field vector rotates about its propagation direction while the vector retains constant magnitude. At a single point in space, the circularly polarized-vector will trace out a circle over one period of the wave frequency, hence the name. The two diagrams below show the electric field vectors of linearly and circularly polarized light, at one moment of time, for a range of positions; the plot of the circularly polarized electric vector forms a helix along the direction of propagation formula_2. For left circularly polarized light (LCP) with propagation towards the observer, the electric vector rotates counterclockwise. For right circularly polarized light (RCP), the electric vector rotates clockwise.
Interaction of circularly polarized light with matter.
When circularly polarized light passes through an absorbing optically active medium, the speeds of the right and left polarizations differ (formula_3), as do their wavelengths (formula_4) and the extent to which they are absorbed (formula_5). "Circular dichroism" is the difference formula_6. The electric field of a light beam causes a linear displacement of charge when interacting with a molecule (electric dipole), whereas its magnetic field causes a circulation of charge (magnetic dipole). These two motions combined cause an excitation of an electron in a helical motion, which includes translation and rotation and their associated operators. The experimentally determined relationship between the rotational strength formula_7 of a sample and formula_8 is given by
formula_9
The rotational strength has also been determined theoretically,
formula_10
We see from these two equations that in order to have non-zero formula_8, the electric and magnetic dipole moment operators (formula_11 and formula_12) must transform as the same irreducible representation. formula_13 and formula_14 are the only point groups where this can occur, making only chiral molecules CD active.
Simply put, since circularly polarized light itself is "chiral", it interacts differently with chiral molecules. That is, the two types of circularly polarized light are absorbed to different extents. In a CD experiment, equal amounts of left and right circularly polarized light of a selected wavelength are alternately radiated into a (chiral) sample. One of the two polarizations is absorbed more than the other one, and this wavelength-dependent difference of absorption is measured, yielding the CD spectrum of the sample. Due to the interaction with the molecule, the electric field vector of the light traces out an elliptical path after passing through the sample.
It is important that the chirality of the molecule can be conformational rather than structural. That is, for instance, a protein molecule with a helical secondary structure can have a CD that changes with changes in the conformation.
Delta absorbance.
By definition,
formula_15
where formula_16 (Delta Absorbance) is the difference between absorbance of left circularly polarized (LCP) and right circularly polarized (RCP) light (this is what is usually measured). formula_16 is a function of wavelength, so for a measurement to be meaningful the wavelength at which it was performed must be known.
Molar circular dichroism.
It can also be expressed, by applying Beer's law, as:
formula_17
where
formula_18 and formula_19 are the molar extinction coefficients for LCP and RCP light,
"formula_20" is the molar concentration,
"formula_21" is the path length in centimeters (cm).
Then
formula_22
is the molar circular dichroism. This intrinsic property is what is usually meant by the circular dichroism of the substance. Since formula_23 is a function of wavelength, a molar circular dichroism value (formula_23) must specify the wavelength at which it is valid.
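As a small numerical illustration of these definitions (a hypothetical example; the absorbance values below are invented solely for the arithmetic):

```python
# Hypothetical numbers illustrating Delta A = (eps_L - eps_R) * C * l (Beer's law form)
A_L, A_R = 0.5010, 0.5000    # assumed absorbances of LCP and RCP light
C = 1e-4                     # molar concentration, mol/L
l = 1.0                      # path length, cm

delta_A = A_L - A_R                  # differential absorbance
delta_eps = delta_A / (C * l)        # molar circular dichroism, L mol^-1 cm^-1
print(delta_A, delta_eps)            # approximately 0.001 and 10
```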
Extrinsic effects on circular dichroism.
In many practical applications of circular dichroism (CD), as discussed below, the measured CD is not simply an intrinsic property of the molecule, but rather depends on the molecular conformation. In such a case the CD may also be a function of temperature, concentration, and the chemical environment, including solvents. In this case the reported CD value must also specify these other relevant factors in order to be meaningful.
In ordered structures lacking two-fold rotational symmetry, optical activity, including differential transmission (and reflection) of circularly polarized waves, also depends on the propagation direction through the material. In this case, so-called extrinsic 3d chirality is associated with the mutual orientation of light beam and structure.
Molar ellipticity.
Although formula_24 is usually measured, for historical reasons most measurements are reported in degrees of ellipticity.
Molar ellipticity is circular dichroism corrected for concentration. Molar circular dichroism and molar ellipticity, formula_25, are readily interconverted by the equation:
formula_26
This relationship is derived by defining the ellipticity of the polarization as:
formula_27
where
formula_28 and formula_29 are the magnitudes of the electric field vectors of the right-circularly and left-circularly polarized light, respectively.
When formula_28 equals formula_29 (when there is no difference in the absorbance of right- and left-circular polarized light), formula_30 is 0° and the light is linearly polarized. When either formula_28 or formula_29 is equal to zero (when there is complete absorbance of the circular polarized light in one direction), formula_30 is 45° and the light is circularly polarized.
Generally, the circular dichroism effect is small, so formula_31 is small and can be approximated as formula_30 in radians. Since the intensity or irradiance, formula_32, of light is proportional to the square of the electric-field vector, the ellipticity becomes:
formula_33
Then by substituting for I using Beer's law in natural logarithm form:
formula_34
The ellipticity can now be written as:
formula_35
Since formula_36, this expression can be approximated by expanding the exponentials in a Taylor series to first-order and then discarding terms of formula_24 in comparison with unity and converting from radians to degrees:
formula_37
The linear dependence on solute concentration and pathlength is removed by defining the molar ellipticity as:
formula_38
Then combining the last two expressions with Beer's law, molar ellipticity becomes:
formula_39
The units of molar ellipticity are historically (deg·cm2/dmol). To calculate molar ellipticity, the sample concentration (g/L), cell pathlength (cm), and the molecular weight (g/mol) must be known.
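The numerical factor of 3298.2 quoted above follows directly from the derivation; the short check below (an illustrative sketch, not library code) reproduces it and wraps the conversion in a helper function.

```python
import math

# [theta] = 100 * (ln 10 / 4) * (180 / pi) * delta_eps  ->  factor of about 3298.2
FACTOR = 100 * (math.log(10) / 4) * (180 / math.pi)
print(round(FACTOR, 1))                 # 3298.2

def molar_ellipticity(delta_eps):
    """Molar ellipticity (deg cm^2/dmol) from the molar circular dichroism."""
    return FACTOR * delta_eps

print(molar_ellipticity(1.0))           # about 3298.2 for delta_eps = 1
```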
If the sample is a protein, the mean residue weight (average molecular weight of the amino acid residues it contains) is often used in place of the molecular weight, essentially treating the protein as a solution of amino acids. Using mean residue ellipticity facilitates comparing the CD of proteins of different molecular weight; use of this normalized CD is important in studies of protein structure.
Mean residue ellipticity.
Methods for estimating secondary structure in polymers, proteins and polypeptides in particular, often require that the measured molar ellipticity spectrum be converted to a normalized value, specifically a value independent of the polymer length. Mean residue ellipticity is used for this purpose; it is simply the measured molar ellipticity of the molecule divided by the number of monomer units (residues) in the molecule.
Application to biological molecules.
In general, this phenomenon will be exhibited in absorption bands of any optically active molecule. As a consequence, circular dichroism is exhibited by biological molecules, because of their dextrorotary and levorotary components. Even more important is that a secondary structure will also impart a distinct CD to its respective molecules. Therefore, the alpha helix of proteins and the double helix of nucleic acids have CD spectral signatures representative of their structures. The capacity of CD to give a representative structural signature makes it a powerful tool in modern biochemistry with applications that can be found in virtually every field of study.
CD is closely related to the optical rotatory dispersion (ORD) technique, and is generally considered to be more advanced. CD is measured in or near the absorption bands of the molecule of interest, while ORD can be measured far from these bands. CD's advantage is apparent in the data analysis. Structural elements are more clearly distinguished since their recorded bands do not overlap extensively at particular wavelengths as they do in ORD. In principle, these two spectral measurements can be interconverted through an integral transform (Kramers–Kronig relation), if all the absorptions are included in the measurements.
The far-UV (ultraviolet) CD spectrum of proteins can reveal important characteristics of their secondary structure. CD spectra can be readily used to estimate the fraction of a molecule that is in the alpha-helix conformation, the beta-sheet conformation, the beta-turn conformation, or some other (e.g. random coil) conformation. These fractional assignments place important constraints on the possible secondary conformations that the protein can be in. CD cannot, in general, say where the alpha helices that are detected are located within the molecule or even completely predict how many there are. Despite this, CD is a valuable tool, especially for showing changes in conformation. It can, for instance, be used to study how the secondary structure of a molecule changes as a function of temperature or of the concentration of denaturing agents, e.g. Guanidinium chloride or urea. In this way it can reveal important thermodynamic information about the molecule (such as the enthalpy and Gibbs free energy of denaturation) that cannot otherwise be easily obtained. Anyone attempting to study a protein will find CD a valuable tool for verifying that the protein is in its native conformation before undertaking extensive and/or expensive experiments with it. Also, there are a number of other uses for CD spectroscopy in protein chemistry not related to alpha-helix fraction estimation. Moreover, CD spectroscopy has been used in bioinorganic interface studies. Specifically it has been used to analyze the differences in secondary structure of an engineered protein before and after titration with a reagent.
The near-UV CD spectrum (>250 nm) of proteins provides information on the tertiary structure. The signals obtained in the 250–300 nm region are due to the absorption, dipole orientation and the nature of the surrounding environment of the phenylalanine, tyrosine, cysteine (or S-S disulfide bridges) and tryptophan amino acids. Unlike in far-UV CD, the near-UV CD spectrum cannot be assigned to any particular 3D structure. Rather, near-UV CD spectra provide structural information on the nature of the prosthetic groups in proteins, e.g., the heme groups in hemoglobin and cytochrome c.
Visible CD spectroscopy is a very powerful technique to study metal–protein interactions and can resolve individual d–d electronic transitions as separate bands. CD spectra in the visible light region are only produced when a metal ion is in a chiral environment; thus, free metal ions in solution are not detected. This has the advantage of only observing the protein-bound metal, so pH dependence and stoichiometries are readily obtained. Optical activity in transition metal ion complexes has been attributed to configurational, conformational and vicinal effects. Klewpatinond and Viles (2007) have produced a set of empirical rules for predicting the appearance of visible CD spectra for Cu2+ and Ni2+ square-planar complexes involving histidine and main-chain coordination.
CD gives less specific structural information than X-ray crystallography and protein NMR spectroscopy, for example, which both give atomic resolution data. However, CD spectroscopy is a quick method that does not require large amounts of proteins or extensive data processing. Thus CD can be used to survey a large number of solvent conditions, varying temperature, pH, salinity, and the presence of various cofactors.
CD spectroscopy is usually used to study proteins in solution, and thus it complements methods that study the solid state. This is also a limitation, in that many proteins are embedded in membranes in their native state, and solutions containing membrane structures are often strongly scattering. CD is sometimes measured in thin films.
CD spectroscopy has also been done using semiconducting materials such as TiO2 to obtain large signals in the UV range of wavelengths, where the electronic transitions for biomolecules often occur.
Experimental limitations.
CD has also been studied in carbohydrates, but with limited success due to the experimental difficulties associated with measurement of CD spectra in the vacuum ultraviolet (VUV) region of the spectrum (100–200 nm), where the corresponding CD bands of unsubstituted carbohydrates lie. Substituted carbohydrates with bands above the VUV region have been successfully measured.
Measurement of CD is also complicated by the fact that typical aqueous buffer systems often absorb in the range where structural features exhibit differential absorption of circularly polarized light. Phosphate, sulfate, carbonate, and acetate buffers are generally incompatible with CD unless made extremely dilute, e.g. in the 10–50 mM range. The TRIS buffer system should be completely avoided when performing far-UV CD. Borate and onium compounds are often used to establish the appropriate pH range for CD experiments. Some experimenters have substituted fluoride for chloride ion because fluoride absorbs less in the far UV, and some have worked in pure water. Another, almost universal, technique is to minimize solvent absorption by using shorter path length cells when working in the far UV; 0.1 mm path lengths are not uncommon in this work.
In addition to measuring in aqueous systems, CD, particularly far-UV CD, can be measured in organic solvents, e.g. ethanol, methanol, trifluoroethanol (TFE). The latter has the advantage of inducing structure formation in proteins, producing beta-sheets in some and alpha helices in others, which they would not show under normal aqueous conditions. Most common organic solvents, such as acetonitrile, THF, chloroform, and dichloromethane, are, however, incompatible with far-UV CD.
It may be of interest to note that the protein CD spectra used in secondary structure estimation are related to the π to π* orbital absorptions of the amide bonds linking the amino acids. These absorption bands lie partly in the "so-called" vacuum ultraviolet (wavelengths less than about 200 nm). The wavelength region of interest is actually inaccessible in air because of the strong absorption of light by oxygen at these wavelengths. In practice these spectra are measured not in vacuum but in an oxygen-free instrument (filled with pure nitrogen gas).
Once oxygen has been eliminated, perhaps the second most important technical factor in working below 200 nm is to design the rest of the optical system to have low losses in this region. Critical in this regard is the use of aluminized mirrors whose coatings have been optimized for low loss in this region of the spectrum.
The usual light source in these instruments is a high pressure, short-arc xenon lamp. Ordinary xenon arc lamps are unsuitable for use in the low UV. Instead, specially constructed lamps with envelopes made from high-purity synthetic fused silica must be used.
Light from synchrotron sources has a much higher flux at short wavelengths, and has been used to record CD down to 160 nm. In 2010 the CD spectrophotometer at the electron storage ring facility ISA at the University of Aarhus in Denmark was used to record solid state CD spectra down to 120 nm.
At the quantum mechanical level, the feature density of circular dichroism and optical rotation are identical. Optical rotary dispersion and circular dichroism share the same quantum information content.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\boldsymbol E"
},
{
"math_id": 1,
"text": "\\boldsymbol B"
},
{
"math_id": 2,
"text": "\\boldsymbol k"
},
{
"math_id": 3,
"text": "c_\\mathrm{L} \\neq c_\\mathrm{R}"
},
{
"math_id": 4,
"text": "\\lambda_\\mathrm{L} \\neq \\lambda_\\mathrm{R}"
},
{
"math_id": 5,
"text": "\\varepsilon_\\mathrm{L} \\neq \\varepsilon_\\mathrm{R}"
},
{
"math_id": 6,
"text": "\\Delta\\varepsilon \\equiv \\varepsilon_\\mathrm{L} - \\varepsilon_\\mathrm{R}"
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "\\Delta\\varepsilon"
},
{
"math_id": 9,
"text": "R_{\\mathrm{exp}} = \\frac{3hc10^{3} \\ln(10)}{32\\pi^{3}N_\\mathrm{A}} \\int \\frac{\\Delta\\varepsilon}{\\nu} \\mathrm d{\\nu}"
},
{
"math_id": 10,
"text": "R_\\mathrm{theo} = \\frac{1}{2mc} \\mathrm{Im} \\int \\Psi_g \\widehat{M}_\\mathrm{(elec. dipole)} \\Psi_e \\mathrm d\\tau \\bullet \\int \\Psi_g \\widehat{M}_\\mathrm{(mag. dipole)} \\Psi_e \\mathrm d\\tau "
},
{
"math_id": 11,
"text": "\\widehat{M}_\\mathrm{(elec. dipole)}"
},
{
"math_id": 12,
"text": "\\widehat{M}_\\mathrm{(mag. dipole)}"
},
{
"math_id": 13,
"text": "\\mathrm C_n"
},
{
"math_id": 14,
"text": "\\mathrm D_n "
},
{
"math_id": 15,
"text": "\\Delta A=A_\\mathrm L-A_\\mathrm R \\,"
},
{
"math_id": 16,
"text": "\\Delta A"
},
{
"math_id": 17,
"text": "\\Delta A = (\\varepsilon_\\mathrm L - \\varepsilon_\\mathrm R)Cl\\,"
},
{
"math_id": 18,
"text": "\\varepsilon_\\mathrm L"
},
{
"math_id": 19,
"text": "\\varepsilon_\\mathrm R"
},
{
"math_id": 20,
"text": "C"
},
{
"math_id": 21,
"text": "l"
},
{
"math_id": 22,
"text": " \\Delta \\varepsilon =\\varepsilon_\\mathrm L-\\varepsilon_\\mathrm R\\,"
},
{
"math_id": 23,
"text": " \\Delta \\varepsilon"
},
{
"math_id": 24,
"text": " \\Delta A "
},
{
"math_id": 25,
"text": " [\\theta]"
},
{
"math_id": 26,
"text": " [\\theta] = 3298.2\\,\\Delta \\varepsilon.\\, "
},
{
"math_id": 27,
"text": " \\tan \\theta = \\frac{E_\\mathrm R - E_\\mathrm L}{E_\\mathrm R + E_\\mathrm L} \\,"
},
{
"math_id": 28,
"text": " E_\\mathrm R"
},
{
"math_id": 29,
"text": " E_\\mathrm L"
},
{
"math_id": 30,
"text": " \\theta"
},
{
"math_id": 31,
"text": " \\tan\\theta"
},
{
"math_id": 32,
"text": " I"
},
{
"math_id": 33,
"text": " \\theta (\\text{radians}) = \\frac{(I_\\mathrm R^{1/2} - I_\\mathrm L^{1/2})}{(I_\\mathrm R^{1/2} + I_\\mathrm L^{1/2})}\\,"
},
{
"math_id": 34,
"text": " I = I_0 \\mathrm e^{-A\\ln 10}\\,"
},
{
"math_id": 35,
"text": " \\theta (\\text{radians}) = \\frac{(\\mathrm e^{\\frac{-A_\\mathrm R}{2}\\ln 10} - \\mathrm e^{\\frac{-A_\\mathrm L}{2}\\ln 10})}{(\\mathrm e^{\\frac{-A_\\mathrm R}{2}\\ln 10} + \\mathrm e^{\\frac{-A_\\mathrm L}{2}\\ln 10})} = \\frac{\\mathrm e^{\\Delta A \\frac{\\ln 10}{2}} - 1}{\\mathrm e^{\\Delta A \\frac{\\ln 10}{2}} + 1} \\,"
},
{
"math_id": 36,
"text": " \\Delta A \\ll 1"
},
{
"math_id": 37,
"text": " \\theta (\\text{degrees}) = \\Delta A \\left( \\frac {\\ln 10}{4} \\right) \\left( \\frac {180}{\\pi} \\right)\\, "
},
{
"math_id": 38,
"text": " [\\theta] = \\frac {100\\theta}{Cl}\\, "
},
{
"math_id": 39,
"text": " [\\theta]= 100 \\,\\Delta \\varepsilon \\left( \\frac {\\ln 10}{4} \\right) \\left( \\frac {180}{\\pi} \\right) = 3298.2\\,\\Delta \\varepsilon \\,"
}
] | https://en.wikipedia.org/wiki?curid=92193 |
9220187 | Lightness | Property of a color
Lightness is a visual perception of the luminance formula_0 of an object. It is often judged relative to a similarly lit object. In colorimetry and color appearance models, lightness is a prediction of how an illuminated color will appear to a standard observer. While luminance is a linear measurement of light, lightness is a linear prediction of the human perception of that light.
This distinction is meaningful because human vision's lightness perception is non-linear relative to light. Doubling the quantity of light does not result in a doubling in perceived lightness, only a modest increase.
The symbol for perceptual lightness is usually either formula_1 as used in CIECAM02 or formula_2 as used in CIELAB and CIELUV. formula_2 ("Lstar") is not to be confused with formula_3 as used for luminance. In some color ordering systems such as Munsell, Lightness is referenced as value.
Chiaroscuro and Tenebrism both take advantage of dramatic contrasts of value to heighten drama in art. Artists may also employ shading, subtle manipulation of value.
Lightness in different colorspaces.
In some colorspaces or color systems such as Munsell, HCL, and CIELAB, the lightness (value) achromatically constrains the maximum and minimum limits, and operates independently of the hue and chroma. For example, Munsell value 0 is pure black, and value 10 is pure white. Colors with a discernible hue must therefore have values in between these extremes.
In a subtractive color model (e.g. paint, dye, or ink) lightness changes to a color through various tints, shades, or tones can be achieved by adding white, black, or grey respectively. This also reduces saturation.
In HSL and HSV, the displayed luminance depends on the hue and chroma for a given lightness value; in other words, the selected lightness value does not predict the actual displayed luminance nor the perception thereof. Both systems use coordinate triples, where many triples can map onto the same color.
In HSV, all triples with value 0 are pure black. If the hue and saturation are held constant, then increasing the value increases the luminance, such that a value of 1 is the lightest color with the given hue and saturation. HSL is similar, except that all triples with lightness 1 are pure white. In both models, all pure saturated colors have the same lightness or value, but this does not correspond to the displayed luminance, which is determined by the hue: yellow has a higher luminance than blue, even when both are set to the same lightness or value.
While HSL, HSV, and similar spaces serve well enough to choose or adjust a single color, they are not perceptually uniform. They trade off accuracy for computational simplicity, as they were created in an era where computer technology was restricted in performance.
If we take an image and extract the hue, saturation, and lightness or value components for a given color space, we will see that they may differ substantially from a different color space or model. For example, examine the following images of a fire breather (fig. 1). The original is in the sRGB color space. CIELAB formula_2 is a perceptually uniform lightness prediction that is derived from luminance formula_4, but discards the formula_5 and formula_6, of the CIE XYZ color space. Notice this appears similar in perceived lightness to the original color image. Luma formula_7 is a gamma-encoded luminance component of some video encoding systems such as formula_8 and formula_9. It is roughly similar, but differs at high chroma, deviating most from an achromatic signal such as linear luminance formula_4 or non-linear lightness formula_2. HSL formula_3 and HSV formula_10 are neither perceptually uniform, nor uniform as to luminance.
Relationship to value and relative luminance.
The Munsell value has long been used as a perceptually uniform lightness scale. A question of interest is the relationship between the Munsell value scale and the relative luminance. Aware of the Weber–Fechner law, Albert Munsell remarked "Should we use a logarithmic curve or curve of squares?" Neither option turned out to be quite correct; scientists eventually converged on a roughly cube-root curve, consistent with Stevens's power law for brightness perception, reflecting the fact that lightness is proportional to the number of nerve impulses per nerve fiber per unit time. The remainder of this section is a chronology of lightness models, leading to CIECAM02.
"Note." – Munsell's "V" runs from 0 to 10, while "Y" typically runs from 0 to 100 (often interpreted as a percentage). Typically, the relative luminance is normalized so that the "reference white" (say, magnesium oxide) has a tristimulus value of "Y" = 100. Since the reflectance of magnesium oxide (MgO) relative to the perfect reflecting diffuser is 97.5%, "V" = 10 corresponds to "Y" = % ≈ 102.6 if MgO is used as the reference.
1920.
Irwin Priest, Kasson Gibson, and Harry McNicholas provide a basic estimate of the Munsell value (with "Y" running from 0 to 1 in this case):
formula_11
1933.
Alexander Munsell, Louise Sloan, and Isaac Godlove launch a study on the Munsell neutral value scale, considering several proposals relating the relative luminance to the Munsell value, and suggest:
formula_12
1943.
Sidney Newhall, Dorothy Nickerson, and Deane Judd prepare a report for the Optical Society of America (OSA) on Munsell renotation. They suggest a quintic parabola (relating the reflectance in terms of the value):
formula_13
1943.
Using Table II of the OSA report, Parry Moon and Domina Spencer express the value in terms of the relative luminance:
formula_14
1944.
Jason Saunderson and B.I. Milner introduce a subtractive constant in the previous expression, for a better fit to the Munsell value. Later, Dorothea Jameson and Leo Hurvich claim that this corrects for simultaneous contrast effects.
formula_15
1955.
Ladd and Pinney of Eastman Kodak are interested in the Munsell value as a perceptually uniform lightness scale for use in television. After considering one logarithmic and five power-law functions (per Stevens' power law), they relate value to reflectance by raising the reflectance to the power of 0.352:
formula_16
Realizing this is quite close to the cube root, they simplify it to:
formula_17
1958.
Glasser "et al." define the lightness as ten times the Munsell value (so that the lightness ranges from 0 to 100):
formula_18
1964.
Günter Wyszecki simplifies this to:
formula_19
This formula approximates the Munsell value function for 1% < "Y" < 98% (it is not applicable for "Y" < 1%) and is used for the CIE 1964 color space.
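As an illustration (an informal sketch, not part of any colorimetry library), the snippet below evaluates a few of the fits listed above at a single reflectance; note that Priest's formula takes "Y" on a 0–1 scale, while the later ones use 0–100.

```python
def priest_1920(Y):         # V = 10 * sqrt(Y), with Y on a 0-1 scale
    return 10 * Y ** 0.5

def ladd_pinney_1955(Y):    # V = 2.468 * Y**(1/3) - 1.636, Y on a 0-100 scale
    return 2.468 * Y ** (1 / 3) - 1.636

def glasser_1958(Y):        # L = 25.29 * Y**(1/3) - 18.38 (lightness, 0-100)
    return 25.29 * Y ** (1 / 3) - 18.38

def wyszecki_1964(Y):       # W* = 25 * Y**(1/3) - 17 (lightness, 0-100)
    return 25 * Y ** (1 / 3) - 17

Y = 18.0                                    # a mid-grey reflectance, in percent
print(priest_1920(Y / 100))                 # ~4.2 on the Munsell value scale
print(ladd_pinney_1955(Y))                  # ~4.8 on the Munsell value scale
print(glasser_1958(Y), wyszecki_1964(Y))    # ~47.9 and ~48.5 on a 0-100 lightness scale
```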
1976.
CIELAB uses the following formula:
formula_20
where "Y"n is the CIE XYZ "Y" tristimulus value of the reference white point (the subscript n suggests "normalized") and is subject to the restriction > 0.01. Pauli removes this restriction by computing a linear extrapolation which maps = 0 to "L"* = 0 and is tangent to the formula above at the point at which the linear extension takes effect. First, the transition point is determined to be = ()3 ≈ 0.008856, then the slope of ()3 ≈ 903.3 is computed. This gives the two-part function:
formula_21
The lightness is then:
formula_22
At first glance, one might approximate the lightness function by a cube root, an approximation that is found in much of the technical literature. However, the linear segment near black is significant, and so are the 116 and 16 coefficients. The best-fit pure power function has an exponent of about 0.42, far from 1/3. An approximately 18% grey card, having an exact reflectance of (33/58)3, has a lightness value of 50. It is called "mid grey" because its lightness is midway between black and white.
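The two-part function above translates directly into code; the sketch below is a plain implementation (the names are arbitrary, and "Y" and "Y"n are assumed to be on the same scale).

```python
def cielab_L_star(Y, Yn=100.0):
    """CIELAB L* from relative luminance Y, given the white point Yn."""
    t = Y / Yn
    delta = 6.0 / 29.0
    if t > delta ** 3:                           # t > ~0.008856: cube-root branch
        f = t ** (1.0 / 3.0)
    else:                                        # linear segment near black
        f = t / (3.0 * delta ** 2) + 4.0 / 29.0
    return 116.0 * f - 16.0

print(cielab_L_star(100.0))                            # 100.0 for the reference white
print(round(cielab_L_star(100 * (33 / 58) ** 3), 1))   # 50.0 for the ~18% grey card
```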
1997.
As early as 1967 a hyperbolic relationship between light intensity and cone cell responses was discovered in fish, in line with the Michaelis–Menten kinetics model of biochemical reactions. In the 1970s the same relationship was found in a number of other vertebrates and in 1982, using microelectrodes to measure cone responses in living rhesus macaques, Valeton and Van Norren found the following relationship:
1 / V ~ 1 + (σ / I)^0.74
where V is the measured potential, I the light intensity and σ a constant.
In 1986 Seim and Valberg realised that this relationship might aid in the construction of a more uniform colour space. This inspired advances in colour modelling and when the International Commission on Illumination held a symposium in 1996, objectives for a new standard colour model were formulated and in 1997 CIECAM97s (International Commission on Illumination, colour appearance model, 1997, simple version) was standardised. CIECAM97s distinguishes between lightness, how light something appears compared to a similarly lit white object, and brightness, how much light appears to shine from something.
According to CIECAM97s the lightness of a sample is:
J = 100 (Asample / Awhite)^(cz)
In this formula, for a small sample under bright conditions in a surrounding field with a relative luminance n compared to white, the product cz has been chosen such that:
formula_23
This models that a sample will appear darker on a light background than on a dark background. See contrast effect for more information on the topic. When n = 1/5, cz = 1, representing the assumption that most scenes have an average relative luminance of 1/5 compared to bright white, and that therefore a sample in such a surround should be perceived at its proper lightness.
The quantity A models the achromatic cone response; it is colour dependent but for a grey sample under bright conditions it works out as:
formula_24
Here Y is the relative luminance compared to white on a scale of 0 to 1 and LA is the average luminance of the adapting visual field as a whole, measured in cd/m2. The achromatic response follows a kind of S-curve, ranging from 1 to 123; these numbers follow from the way the cone responses are averaged and are ultimately based on a rough estimate for the useful range of nerve impulses per second. The curve has a fairly large intermediate range where it roughly follows a square root curve.
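The sketch below strings these two expressions together for a grey sample under the bright conditions described; because lightness is a ratio of achromatic responses, the brightness-induction factor Nbb cancels. Variable names and the example inputs are arbitrary.

```python
def achromatic_response_97s(Y, L_A):
    """A / N_bb for a grey sample under bright conditions (expression above)."""
    return 122.0 / (1.0 + 2.0 * (0.1 * Y * (5.0 * L_A) ** (1.0 / 3.0)) ** -0.73) + 1.0

def lightness_J_97s(Y, L_A, n=0.2):
    """CIECAM97s J = 100 * (A_sample / A_white)**(cz), cz from the surround luminance n."""
    cz = (1.0 + n ** 0.5) / (1.0 + 0.2 ** 0.5)
    ratio = achromatic_response_97s(Y, L_A) / achromatic_response_97s(1.0, L_A)
    return 100.0 * ratio ** cz

print(round(lightness_J_97s(Y=0.2, L_A=100.0), 1))   # lightness of a 20% grey sample
```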
The brightness according to CIECAM97s is then:
Q = (1.24 / c) (J / 100)^0.67 (Awhite + 3)^0.9
The factor 1.24 / c is a surround factor that reflects that scenes appear brighter in dark surrounding conditions.
Suggestions for a more comprehensive model, CIECAM97C, were also formulated, to take into account several effects at extremely dark or bright conditions, coloured lighting, as well as the Helmholtz–Kohlrausch effect, where highly chromatic samples appear lighter and brighter in comparison to a neutral grey. To model the latter effect, in CIECAM97C the formula for J is adjusted as follows:
JHK = J + (100 – J) (C / 300) |sin(h/2 – 45°)|,
where C is the chroma and h the hue angle
Q is then calculated from JHK instead of from J. This formula has the effect of pulling up the lightness and brightness of coloured samples. The larger the chroma, the stronger the effect; for very saturated colours C can be close to 100 or even higher. The absolute sine term has a sharp V-shaped valley with a zero at yellow and a broad plateau in the deep blues.
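A minimal sketch of this adjustment, implementing the formula exactly as written above (hue angle in degrees; the example values are arbitrary):

```python
import math

def J_helmholtz_kohlrausch(J, C, h_deg):
    """CIECAM97C lightness corrected for the Helmholtz-Kohlrausch effect."""
    return J + (100.0 - J) * (C / 300.0) * abs(math.sin(math.radians(h_deg / 2.0 - 45.0)))

print(J_helmholtz_kohlrausch(J=50.0, C=80.0, h_deg=90.0))               # yellow: no lift
print(round(J_helmholtz_kohlrausch(J=50.0, C=80.0, h_deg=270.0), 1))    # deep blue: maximal lift
```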
2002.
The achromatic response in CIECAM97s is a weighted addition of cone responses minus 2.05. Since the total noise term adds up to 3.05, this means that A, and consequently J and Q, are not zero for absolute black. To fix this, Li, Luo & Hunt suggested subtracting 3.05 instead, so the scale starts at zero. Although CIECAM97s was a successful model to spur and direct colorimetric research, Fairchild felt that for practical applications some changes were necessary. Those relevant for lightness calculations were to allow linear interpolation of the surround factor c, rather than using several discrete values, so that the model can be used under intermediate surround conditions, and to simplify z by removing the special case for large stimuli, because he felt it was irrelevant for imaging applications. Based on experimental results, Hunt, Li, Juan and Luo proposed a number of improvements. Relevant for the topic at hand is that they suggested lowering z slightly. Li and Luo found that a colour space based on such a modified CIECAM97s, using lightness as one of the coordinates, was more perceptually uniform than CIELAB.
Because of the shape of the cone response S-curve, when the luminance of a colour is reduced, even if its spectral composition remains the same, the different cone responses do not quite change at the same rate with respect to each other. It is plausible therefore that the perceived hue and saturation will change at low luminance levels. But CIECAM97s predicts much larger deviations than are generally thought likely and therefore Hunt, Li and Luo suggested using a cone response curve which approximates a power curve for a much larger range of stimuli, so hue and saturation are better preserved.
All these proposals, as well as others relating to chromaticity, resulted in a new colour appearance model, CIECAM02. In this model, the formula for lightness remains the same:
J = 100 (Asample / Awhite)^(cz)
But all the quantities that feed into this formula change in some way. The parameter c is now continuously variable as discussed above and z = 1.48 + √n. Although this is higher than z in CIECAM97s, the total effective power factor is very similar because the effective power factor of the achromatic response is much lower:
formula_25
As before, this formula assumes bright conditions. Apart from 1220, which results from an arbitrarily assumed cone response constant, the various constants in CIECAM02 were fitted to experimental data sets. The expression for the brightness has also changed considerably:
formula_26
Note that contrary to the suggestion from CIECAM97C, CIECAM02 contains no provision for the Helmholtz–Kohlrausch effect.
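The same kind of sketch as for CIECAM97s, now using the CIECAM02 achromatic response and exponent z = 1.48 + √n quoted above, again for a grey sample under bright conditions (Nbb cancels in the ratio). The surround factor c = 0.69 used here is the value commonly quoted for an average surround and is an assumption of this example.

```python
def achromatic_response_02(Y, L_A):
    """A / N_bb for a grey sample under bright conditions (CIECAM02 expression above)."""
    x = ((5.0 * L_A) ** (1.0 / 3.0) * Y / 10.0) ** 0.42
    return 1220.0 / (1.0 + 27.13 / x)

def lightness_J_02(Y, L_A, c=0.69, n=0.2):
    """CIECAM02 J = 100 * (A_sample / A_white)**(c*z), with z = 1.48 + sqrt(n)."""
    z = 1.48 + n ** 0.5
    ratio = achromatic_response_02(Y, L_A) / achromatic_response_02(1.0, L_A)
    return 100.0 * ratio ** (c * z)

print(round(lightness_J_02(Y=0.2, L_A=100.0), 1))    # lightness of a 20% grey sample
```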
Other psychological effects.
This non-linear subjective perception of luminance is one thing that makes gamma compression of images worthwhile. Beside this phenomenon there are other effects involving perception of lightness. Chromaticity can affect perceived lightness as described by the Helmholtz–Kohlrausch effect. Though the CIELAB space and relatives do not account for this effect on lightness, it may be implied in the Munsell color model. Light levels may also affect perceived chromaticity, as with the Purkinje effect.
References.
<templatestyles src="Reflist/styles.css" />
External links.
Media related to Lightness at Wikimedia Commons
{
"math_id": 0,
"text": "(L)"
},
{
"math_id": 1,
"text": "J"
},
{
"math_id": 2,
"text": "L^*"
},
{
"math_id": 3,
"text": "L"
},
{
"math_id": 4,
"text": "Y"
},
{
"math_id": 5,
"text": "X"
},
{
"math_id": 6,
"text": "Z"
},
{
"math_id": 7,
"text": "(Y^\\prime)"
},
{
"math_id": 8,
"text": "(Y^\\prime IQ)"
},
{
"math_id": 9,
"text": "(Y^\\prime UV)"
},
{
"math_id": 10,
"text": "V"
},
{
"math_id": 11,
"text": "V = 10 \\sqrt{Y}."
},
{
"math_id": 12,
"text": "V^2 = 1.4742 Y - 0.004743 Y^2."
},
{
"math_id": 13,
"text": "Y = 1.2219 V - 0.23111 V^2 + 0.23951 V^3 - 0.021009 V^4 + 0.0008404 V^5."
},
{
"math_id": 14,
"text": "V = 5\\left(\\frac{Y}{19.77}\\right)^{0.426} = 1.4 Y^{0.426}."
},
{
"math_id": 15,
"text": "V = 2.357 Y^{0.343} - 1.52."
},
{
"math_id": 16,
"text": "V = 2.217 Y^{0.352} - 1.324."
},
{
"math_id": 17,
"text": "V = 2.468 \\sqrt[3]{Y} - 1.636."
},
{
"math_id": 18,
"text": "L^\\star = 25.29 \\sqrt[3]{Y} - 18.38."
},
{
"math_id": 19,
"text": "W^\\star = 25 \\sqrt[3]{Y} - 17."
},
{
"math_id": 20,
"text": "L^\\star = 116\\left(\\frac{Y}{Y_\\mathrm{n}}\\right)^\\frac13 - 16."
},
{
"math_id": 21,
"text": "f(t) = \\begin{cases}\n t^\\frac13 & \\text{if } t > \\left(\\frac{6}{29}\\right)^3 \\\\\n \\frac{1}{3}\\left(\\frac{29}{6}\\right)^2 t + \\frac{4}{29} & \\text{otherwise}\n\\end{cases}"
},
{
"math_id": 22,
"text": "L^\\star = 116 f\\left(\\frac{Y}{Y_\\mathrm{n}}\\right) - 16."
},
{
"math_id": 23,
"text": "\\text{cz} = \\frac{1 + \\sqrt{n}}{1 + \\sqrt{\\frac{1}{5}}}"
},
{
"math_id": 24,
"text": "\n\\frac{\\text{A}}{\\text{N}_\\text{bb}}\n= \\frac{122}{1 + 2\\Bigl(\\tfrac1{10}Y\\sqrt[3]{5L_A}\\Bigr)^{-0.73}} + 1\n"
},
{
"math_id": 25,
"text": "\\frac{\\text{A}}{\\text{N}_\\text{bb}} = \\frac{1220}{1 + \\frac{27.13}{\\left(\\frac{\\sqrt[3]{5\\text{L}_\\text{A}}\\text{Y}}{10}\\right)^{0.42}}}"
},
{
"math_id": 26,
"text": "Q = \\frac{4}{c} \\frac{\\sqrt{J}}{10} (A_{white} + 4) \\left(\\frac{\\sqrt[3]{5L_A}}{10}\\right)^{0.25}"
}
] | https://en.wikipedia.org/wiki?curid=9220187 |
92206 | Magnetic circular dichroism | Magnetic circular dichroism (MCD) is the differential absorption of left and right circularly polarized (LCP and RCP) light, induced in a sample by a strong magnetic field oriented parallel to the direction of light propagation. MCD measurements can detect transitions which are too weak to be seen in conventional optical absorption spectra, and it can be used to distinguish between overlapping transitions. Paramagnetic systems are common analytes, as their near-degenerate magnetic sublevels provide strong MCD intensity that varies with both field strength and sample temperature. The MCD signal also provides insight into the symmetry of the electronic levels of the studied systems, such as metal ion sites.
History.
It was first shown by Faraday that optical activity (the Faraday effect) could be induced in matter by a longitudinal magnetic field (a field in the direction of light propagation). The development of MCD really began in the 1930s when a quantum mechanical theory of MOR (magnetic optical rotatory dispersion) in regions outside absorption bands was formulated. The expansion of the theory to include MCD and MOR effects in the region of absorptions, which were referred to as "anomalous dispersions" was developed soon thereafter. There was, however, little effort made to refine MCD as a modern spectroscopic technique until the early 1960s. Since that time there have been numerous studies of MCD spectra for a very large variety of samples, including stable molecules in solutions, in isotropic solids, and in the gas phase, as well as unstable molecules entrapped in noble gas matrices. More recently, MCD has found useful application in the study of biologically important systems including metalloenzymes and proteins containing metal centers.
Differences between CD and MCD.
In natural optical activity, the difference between the absorption of LCP and RCP light is caused by the asymmetry of the molecules (i.e. chiral molecules): because of the handedness of the molecule, the absorption of LCP light differs from that of RCP light. In MCD, however, it is the applied magnetic field that makes LCP and RCP light interact inequivalently with the absorbing medium, so there is no direct relation between magnetic optical activity and molecular stereochemistry of the kind found in natural optical activity. Natural CD is therefore much rarer than MCD, which does not strictly require the target molecule to be chiral.
Although there is much overlap in the requirements and use of instruments, ordinary CD instruments are usually optimized for operation in the ultraviolet, approximately 170–300 nm, while MCD instruments are typically required to operate in the visible to near infrared, approximately 300–2000 nm. The physical processes that lead to MCD are substantively different from those of CD. However, like CD, it is dependent on the differential absorption of left and right hand circularly polarized light. MCD will only exist at a given wavelength if the studied sample has an optical absorption at that wavelength. This is distinctly different from the related phenomenon of optical rotatory dispersion (ORD), which can be observed at wavelengths far from any absorption band.
Measurement.
The MCD signal ΔA is derived via the absorption of the LCP and RCP light as
formula_0
This signal is often presented as a function of wavelength λ, temperature T or magnetic field H. MCD spectrometers can simultaneously measure absorbance and ΔA along the same light path, which eliminates the error that was previously introduced when the two quantities had to be obtained from separate measurements or different instruments.
The MCD spectrometer example shown below begins with a light source that emits a monochromatic wave of light. This wave is passed through a Rochon prism linear polarizer, which separates the incident wave into two beams that are linearly polarized 90 degrees apart. The two beams follow different paths: one beam (the extraordinary beam) travels directly to a photomultiplier tube (PMT), and the other beam (the ordinary beam) passes through a photoelastic modulator (PEM) oriented at 45 degrees to the direction of the ordinary ray polarization. The PMT for the extraordinary beam detects the light intensity of the input beam. The PEM is adjusted to cause an alternating plus and minus 1/4 wavelength shift of one of the two orthogonal components of the ordinary beam. This modulation converts the linearly polarized light into circularly polarized light at the peaks of the modulation cycle. Linearly polarized light can be decomposed into two circular components with intensity represented as formula_1
The PEM will delay one component of linearly polarized light with a time dependence that advances the other component by 1/4 λ (hence, quarter-wave shift). The departing circularly polarized light oscillates between RCP and LCP in a sinusoidal time-dependence as depicted below:
The light finally travels through a magnet containing the sample, and the transmittance is recorded by another PMT. The schematic is given below:
The intensity of light from the ordinary wave that reaches the PMT is governed by the equation:
formula_2
Here A− and A+ are the absorbances of LCP and RCP light, respectively; ω is the modulator frequency – usually a high acoustic frequency such as 50 kHz; "t" is time; and δ0 is the amplitude of the time-dependent phase shift δ0 sin ωt introduced by the PEM.
This intensity of light passing through the sample is converted into a two-component voltage via a current/voltage amplifier. A DC voltage will emerge corresponding to the intensity of light passed through the sample. If there is a ΔA, then a small AC voltage will be present that corresponds to the modulation frequency, ω. This voltage is detected by the lock in amplifier, which receives its reference frequency, ω, directly from the PEM. From such voltage, ΔA and A can be derived using the following relations:
formula_3
formula_4
where Vex is the (DC) voltage measured by the PMT from the extraordinary wave, and Vdc is the DC component of the voltage measured by the PMT for the ordinary wave (measurement path not shown in the diagram).
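The sketch below simulates the detector intensity given by the equation above and extracts its DC level and its component at the modulation frequency with a lock-in-style projection, showing that the AC/DC ratio tracks ΔA. All numbers (absorbances, peak retardation) are invented for illustration.

```python
import numpy as np

A_minus, A_plus = 0.5005, 0.4995            # assumed absorbances of LCP and RCP light
delta0 = np.pi / 2                          # assumed peak retardation of the PEM
f_mod = 50e3                                # 50 kHz modulation frequency, as in the text
omega = 2 * np.pi * f_mod
t = np.linspace(0.0, 10.0 / f_mod, 20000, endpoint=False)   # ten modulation periods
I0 = 1.0

mod = np.sin(delta0 * np.sin(omega * t))
I = 0.5 * I0 * ((1 - mod) * 10 ** -A_minus + (1 + mod) * 10 ** -A_plus)

V_dc = I.mean()                             # DC level seen by the PMT
V_ac = 2 * np.mean(I * np.sin(omega * t))   # lock-in projection at the frequency omega

print("input Delta A :", A_minus - A_plus)
print("V_ac / V_dc   :", V_ac / V_dc)       # small and proportional to Delta A
```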
Some superconducting magnets have a small sample chamber, far too small to contain the entire optical system. Instead, the magnet sample chamber has windows on two opposite sides. Light from the source enters one side, interacts with the sample (usually also temperature controlled) in the magnetic field, and exits through the opposite window to the detector. Optical relay systems that allow the source and detector each to be about a meter from the sample are typically employed. This arrangement avoids many of the difficulties that would be encountered if the optical apparatus had to operate in the high magnetic field, and also allows for a much less expensive magnet.
Applications.
MCD can be used as an optical technique for the detection of the electronic structure of both ground states and excited states. It is also a strong addition to the more commonly used absorption spectroscopy, for two reasons. First, a transition buried under a stronger transition can appear in MCD if the first derivative of the absorption is much larger for the weaker transition or if it is of the opposite sign. Second, MCD will be found where no absorption is detected at all if ΔA > ΔAmin but A < Amin, where ΔAmin and Amin are the minimum detectable values of ΔA and A. Typically, ΔAmin and Amin are of magnitudes around 10−5 and 10−3, respectively. So, a transition can only be detected in MCD, and not in absorption spectroscopy, if ΔA/A > 10−2. This happens in paramagnetic systems at lower temperatures or in systems with sharp spectral lines.
In biology, metalloproteins are the most likely candidates for MCD measurements, as the presence of metals with degenerate energy levels leads to strong MCD signals. In the case of ferric heme proteins, MCD is capable of determining both oxidation state and spin state with remarkable precision. In regular proteins, MCD can measure the tryptophan content stoichiometrically, assuming there are no other competing absorbers in the spectroscopic system.
In addition, the application of MCD spectroscopy greatly improved the level of understanding in the ferrous non-heme systems because of the direct observation of the d–d transitions, which generally can not be obtained in optical absorption spectroscopy owing to the weak extinction coefficients and are often electron paramagnetic resonance silent due to relatively large ground-state sublevel splittings and fast relaxation times.
Theory.
Consider a system of localized, non-interacting absorbing centers. Based on the semi-classical radiation absorption theory within the electric dipole approximation, the electric vector of the circularly polarized waves propagates along the +z direction. In this system, formula_5 is the angular frequency, and formula_6 = n – ik is the complex refractive index. As the light travels, the attenuation of the beam is expressed as
formula_7
where formula_8 is the intensity of light at position formula_9, formula_10 is the absorption coefficient of the medium in the formula_9 direction, and formula_11 is the speed of light. Circular dichroism (CD) is then defined by the difference between left (formula_12) and right (formula_13) circularly polarized light, formula_14, following the sign convention of natural optical activity. In the presence of a static, uniform external magnetic field applied parallel to the direction of propagation of light, the Hamiltonian for the absorbing center takes the form formula_15 for formula_16 describing the system in the external magnetic field and formula_17 describing the applied electromagnetic radiation. The absorption coefficient for a transition between two eigenstates of formula_16, formula_18 and formula_19, can be described using the electric dipole transition operator formula_20 as
formula_21
formula_22
The formula_23 term is a frequency-independent correction factor allowing for the effect of the medium on the light wave electric field, composed of the permittivity formula_24 and the real refractive index formula_25.
Discrete line spectrum.
In cases of a discrete spectrum, the observed formula_26 at a particular frequency formula_27 can be treated as a sum of contributions from each transition,
formula_28
where formula_29 is the contribution at formula_27 from the formula_30 transition, formula_31 is the absorption coefficient for the formula_30 transition, and formula_32 is a bandshape function (formula_33). Because eigenstates formula_18 and formula_19 depend on the applied external field, the value of formula_34 varies with field. It is frequently useful to compare this value to the absorption coefficient in the absence of an applied field, often denoted
formula_35
When the Zeeman effect is small compared to zero-field state separations, line width, and formula_36 and when the line shape is independent of the applied external field formula_37, first-order perturbation theory can be applied to separate formula_26 into three contributing Faraday terms, called formula_38, formula_39, and formula_40. The subscript indicates the moment such that formula_38 contributes a derivative-shaped signal and formula_39 and formula_40 contribute regular absorptions. Additionally, a zero-field absorption term formula_41 is defined. The relationships between formula_26, formula_42, and these Faraday terms are
formula_43
formula_44
for external field strength formula_37, Boltzmann constant formula_45, temperature formula_46, and a proportionality constant formula_47. This expression requires assumptions that formula_19 is sufficiently high in energy that formula_48, and that the temperature of the sample is high enough that magnetic saturation does not produce nonlinear formula_49 term behavior.
Though one must pay attention to proportionality constants, there is a proportionality between formula_26 and molar extinction coefficient formula_50 and absorbance formula_51 for concentration formula_52 and path length formula_53.
These Faraday terms are the usual language in which MCD spectra are discussed. Their definitions from perturbation theory are
formula_54
where formula_55 is the degeneracy of ground state formula_56, formula_57 labels states other than formula_56 or formula_58, formula_24 and formula_59 and formula_60 label the levels within states formula_56 and formula_58 and formula_57 (respectively), formula_61 is the energy of unperturbed state formula_62, formula_63 is the formula_9 angular momentum operator, formula_64 is the formula_9 spin operator, and formula_65 indicates the real part of the expression.
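A toy numerical illustration of how the three Faraday terms shape an MCD band, following the qualitative structure described above: the A term contributes the derivative of the band-shape function, while the B and C terms contribute the band shape itself, the C term weighted by 1/kT. The Gaussian band shape and all amplitudes below are arbitrary choices, not derived from any real system.

```python
import numpy as np

E = np.linspace(20000.0, 30000.0, 2001)     # energy axis in cm^-1
E0, width = 25000.0, 800.0                  # assumed band position and width
f = np.exp(-((E - E0) / width) ** 2)        # band-shape function f(E)
df_dE = np.gradient(f, E)                   # its first derivative

A1, B0, C0 = 1.0, 0.1, 5.0                  # arbitrary Faraday term magnitudes
kT = 0.695 * 10.0                           # k_B * T in cm^-1 at about 10 K

# Derivative-shaped A-term contribution plus absorption-shaped B and C terms;
# the derivative is scaled by the width only to put it on a comparable scale.
mcd_band = A1 * (-df_dE) * width + (B0 + C0 / kT) * f
absorption_band = f                         # zero-field absorption shape for comparison
```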
Origins of A, B, and C Faraday Terms.
The equations in the previous subsection reveal that the formula_38, formula_39, and formula_40 terms originate through three distinct mechanisms.
The formula_38 term arises from Zeeman splitting of the ground or excited degenerate states. These field-dependent changes in energies of the magnetic sublevels causes small shifts in the bands to higher/lower energy. The slight offsets result in incomplete cancellation of the positive and negative features, giving a net derivative shape in the spectrum. This intensity mechanism is generally independent of sample temperature.
The formula_39 term is due to the field-induced mixing of states. Energetic proximity of a third state formula_66 to either the ground state formula_67 or excited state formula_68 gives appreciable Zeeman coupling in the presence of an applied external field. As the strength of the magnetic field increases, the amount of mixing increases to give growth of an absorption band shape. Like the formula_38 term, the formula_39 term is generally temperature independent. Temperature dependence of formula_39 term intensity can sometimes be observed when formula_66 is particularly low-lying in energy.
The formula_40 term requires the degeneracy of the ground state, often encountered for paramagnetic samples. This happens due to a change in the Boltzmann population of the magnetic sublevels, which is dependent on the degree of field-induced splitting of the sublevel energies and on the sample temperature. Decreasing the temperature and increasing the magnetic field increase the formula_40 term intensity until it reaches its maximum (the saturation limit). Experimentally, the formula_40 term spectrum can be obtained from MCD raw data by subtraction of MCD spectra measured in the same applied magnetic field at different temperatures, while formula_38 and formula_39 terms can be distinguished via their different band shapes.
The relative contributions of A, B and C terms to the MCD spectrum are proportional to the inverse line width, energy splitting, and temperature:
formula_69
where formula_70 is line width and formula_71 is the zero-field state separation. For typical values of formula_70 = 1000 cm−1, formula_71 = 10,000 cm−1 and formula_36 = 6 cm−1 (at 10 K), the three terms make relative contributions 1:0.1:150. So, at low temperature the formula_40 term dominates over formula_38 and formula_39 for paramagnetic samples.
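The order-of-magnitude comparison above can be reproduced directly (the specific numbers are the representative ones quoted in the text):

```python
gamma_width = 1000.0     # line width, cm^-1
delta_W = 10000.0        # zero-field state separation, cm^-1
kT = 6.0                 # k_B * T at roughly 10 K, cm^-1

weights = (1 / gamma_width, 1 / delta_W, 1 / kT)
a, b, c = (w / weights[0] for w in weights)
print(f"A : B : C ~ {a:.0f} : {b:.1f} : {c:.0f}")
# C dominates by more than two orders of magnitude, consistent with the ~1 : 0.1 : 150 quoted above
```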
Example on C terms.
In the visible and near-ultraviolet regions, the hexacyanoferrate(III) ion (Fe(CN)63−) exhibits three strong absorptions at 24500, 32700, and 40500 cm−1, which have been ascribed to ligand-to-metal charge transfer (LMCT) transitions. They all have lower energy than the lowest-energy intense band of the Fe(II) complex Fe(CN)64−, found at 46000 cm−1. The red shift with increasing oxidation state of the metal is characteristic of LMCT bands. Additionally, only A terms, which are temperature independent, should be involved in the MCD structure of closed-shell species.
These features can be explained as follows. The ground state of the anion is 2T2g, which derives from the electronic configuration (t2g)5, so there is an unpaired electron in the d orbitals of Fe3+.
From that, the three bands can be assigned to the transitions 2t2g→2t1u1, 2t2g→2t1u2, and 2t2g→2t2u. Two of the excited states are of the same symmetry and, based on group theory, can mix with each other, so that the two t1u states have no pure σ or π character, while t2u undergoes no intermixing. A terms are also possible from the degenerate excited states, but temperature-dependence studies showed that their contribution is much weaker than that of the C term.
An MCD study of Fe(CN)63− embedded in a thin polyvinyl alcohol (PVA) film revealed a temperature dependence of the C term. The room-temperature C0/D0 values for the three bands in the Fe(CN)63− spectrum are 1.2, −0.6, and 0.6, respectively, and their signs (positive, negative, and positive) establish the energy ordering as 2t2g→2t1u2 < 2t2g→2t2u < 2t2g→2t1u1.
Example on A and B terms.
To have an A- and B-term in the MCD spectrum, a molecule must contain degenerate excited states (A-term) and excited states close enough in energy to allow mixing (B-term). One case exemplifying these conditions is a square planar, d8 complex such as [(n-C4H9)4N]2Pt(CN)4. In addition to containing A- and B-terms, this example demonstrates the effects of spin-orbit coupling in metal-to-ligand charge transfer (MLCT) transitions. As shown in figure 1, the molecular orbital diagram of [(n-C4H9)4N]2Pt(CN)4 reveals MLCT into the antibonding π* orbitals of cyanide. The ground state is diamagnetic (thereby eliminating any C-terms) and the LUMO is the a2u. The dipole-allowed MLCT transitions are a1g→a2u and eg→a2u. Another transition, b2u→a2u, is weak (an orbitally forbidden singlet transition) but can still be observed in MCD.
Because A- and B-terms arise from the properties of states, all singlet and triplet excited states are given in figure 2.
Mixing of all these singlet and triplet states will occur and is attributed to the spin-orbit coupling of the platinum 5d orbitals (ζ ~ 3500 cm−1), as shown in figure 3. The black lines on the figure indicate the mixing of 1A2u with 3Eu to give two A2u states. The red lines show the 1Eu, 3Eu, 3A2u, and 3B1u states mixing to give four Eu states. The blue lines indicate the states remaining after spin-orbit coupling that are not a result of mixing.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta A = \\frac{A_- - A_+}{A_- + A_+}"
},
{
"math_id": 1,
"text": "I_0 = \\frac 12(I_- + I_+)"
},
{
"math_id": 2,
"text": "I_\\Delta=\\frac {I_0}2\\left[\\left(1-\\sin\\left(\\delta_0\\sin \\omega t\\right)\\right)10^{-A_-}+\\left(1+\\sin\\left(\\delta_0\\sin\\omega t\\right)\\right)10^{-A_+}\\right]"
},
{
"math_id": 3,
"text": "\\Delta A= \\frac{V_{ac}}{1.1515V_{dc}\\delta_0\\sin\\omega t} "
},
{
"math_id": 4,
"text": "A=-\\log( \\frac{V_{dc}}{V_{ex}} )"
},
{
"math_id": 5,
"text": "\\omega=2\\pi\\nu"
},
{
"math_id": 6,
"text": "\\tilde{n}"
},
{
"math_id": 7,
"text": " I(z) = I(0) \\exp(-2\\omega kz/c) "
},
{
"math_id": 8,
"text": "I(z)"
},
{
"math_id": 9,
"text": "z"
},
{
"math_id": 10,
"text": "k"
},
{
"math_id": 11,
"text": "c"
},
{
"math_id": 12,
"text": "-"
},
{
"math_id": 13,
"text": "+"
},
{
"math_id": 14,
"text": "\\Delta k = k_- - k_+"
},
{
"math_id": 15,
"text": "\\mathcal{H}(t) = \\mathcal{H}_0 + \\mathcal{H}_1(t)"
},
{
"math_id": 16,
"text": "\\mathcal{H}_0"
},
{
"math_id": 17,
"text": "\\mathcal{H}_1(t)"
},
{
"math_id": 18,
"text": "a"
},
{
"math_id": 19,
"text": "j"
},
{
"math_id": 20,
"text": "m"
},
{
"math_id": 21,
"text": "\n[k_\\pm (a \\to j)] = \\int_{0}^{\\infty} k_\\pm (a \\to j) d \\omega = \\frac{\\pi^2}{\\hbar} (N_a - N_j) \\left(\\frac{\\alpha^2}{n}\\right) \\left| \\langle a | m_\\pm | j \\rangle \\right|^2 \n"
},
{
"math_id": 22,
"text": "\n[\\Delta k (a \\to j)] = \\int_{0}^{\\infty} \\Delta k (a \\to j) d \\omega = \\frac{\\pi^2}{\\hbar} (N_a - N_j) \\left(\\frac{\\alpha^2}{n}\\right) \\left( \\left| \\langle a | m_- | j \\rangle \\right|^2 - \\left| \\langle a | m_+| j \\rangle \\right|^2 \\right) \n"
},
{
"math_id": 23,
"text": "(\\alpha^2/n)"
},
{
"math_id": 24,
"text": "\\alpha"
},
{
"math_id": 25,
"text": "n"
},
{
"math_id": 26,
"text": "\\Delta k"
},
{
"math_id": 27,
"text": "\\omega"
},
{
"math_id": 28,
"text": "\\Delta k_\\mathrm{obs}(\\omega) = \\sum_{a,j} \\Delta k_{a\\to j}(\\omega) = \\sum_{a,j} [\\Delta k_{a\\to j}]f_{ja}(\\omega)"
},
{
"math_id": 29,
"text": "\\Delta k_{a\\to j}(\\omega)"
},
{
"math_id": 30,
"text": "a\\to j"
},
{
"math_id": 31,
"text": "[\\Delta k_{a\\to j}]"
},
{
"math_id": 32,
"text": "f_{ja}(\\omega)"
},
{
"math_id": 33,
"text": "\\textstyle{\\int_{0}^{\\infty} f_{ja}(\\omega) d \\omega = 1}"
},
{
"math_id": 34,
"text": "\\Delta k_\\mathrm{obs}(\\omega)"
},
{
"math_id": 35,
"text": "k^0(\\omega) = \\sum_{a,j} k^0_{a\\to j}(\\omega) = \\sum_{a,j} [k^0_{a\\to j}]f^0_{ja}(\\omega)"
},
{
"math_id": 36,
"text": "kT"
},
{
"math_id": 37,
"text": "H"
},
{
"math_id": 38,
"text": "\\mathcal{A}_1"
},
{
"math_id": 39,
"text": "\\mathcal{B}_0"
},
{
"math_id": 40,
"text": "\\mathcal{C}_0"
},
{
"math_id": 41,
"text": "\\mathcal{D}_0"
},
{
"math_id": 42,
"text": "k^0"
},
{
"math_id": 43,
"text": "\\Delta k_{A\\to J}(\\omega) = -\\frac{4}{3} \\gamma N^0_A \\left\\{\\frac{\\mathcal{A}_1(A\\to J)}{\\hbar} \\frac{\\partial f^0_{ja}(\\omega)}{\\partial \\omega} + \\left[\\mathcal{B}_0(A\\to J) + \\frac{\\mathcal{C}_0(A\\to J)}{k_BT}\\right]f^0_{ja}(\\omega)\\right\\}H "
},
{
"math_id": 44,
"text": "k^0_{A\\to J}(\\omega) = \\frac{2}{3} \\gamma N_A^0 \\mathcal{D}_0(A\\to J) f^0_{ja}(\\omega)"
},
{
"math_id": 45,
"text": "k_B"
},
{
"math_id": 46,
"text": "T"
},
{
"math_id": 47,
"text": "\\gamma"
},
{
"math_id": 48,
"text": "N_j \\approx 0"
},
{
"math_id": 49,
"text": "\\mathcal{C}"
},
{
"math_id": 50,
"text": "\\epsilon"
},
{
"math_id": 51,
"text": "A/Cl"
},
{
"math_id": 52,
"text": "C"
},
{
"math_id": 53,
"text": "l"
},
{
"math_id": 54,
"text": "\\begin{align}\n \\mathcal{A}_1 &= -\\frac{1}{d_A} \\sum_{\\alpha,\\lambda} \\left( \\langle J_\\lambda |L_z+2S_z| J_\\lambda\\rangle - \\langle A_\\alpha |L_z+2S_z| A_\\alpha\\rangle\\right) \\times \\left(|\\langle A_\\alpha|m_-|J_\\lambda\\rangle|^2 - \\langle A_\\alpha|m_+|J_\\lambda\\rangle|^2\\right) \\\\\n \\mathcal{B}_0 &= \\frac{2}{d_A} \\Re \\sum_{\\alpha,\\lambda}\\left[ \\sum_{K\\neq J,\\kappa} \\frac{1}{E_K-E_J} \\langle J_\\lambda |L_z+2S_z| K_\\kappa\\rangle \\times \\left(\\langle A_\\alpha|m_-|J_\\lambda\\rangle\\langle K_\\kappa|m_+|A_\\alpha\\rangle - \\langle A_\\alpha|m_+|J_\\lambda\\rangle\\langle K_\\kappa|m_-|A_\\alpha\\rangle\\right) \\right. \\\\\n &\\qquad \\left. + \\sum_{K\\neq A,\\kappa} \\frac{1}{E_K-E_A} \\langle K_\\kappa |L_z+2S_z| A_\\alpha\\rangle \\times \\left( \\langle A_\\alpha|m_-|J_\\lambda\\rangle\\langle J_\\lambda|m_+|K_\\kappa\\rangle - \\langle A_\\alpha|m_+|J_\\lambda\\rangle\\langle J_\\lambda|m_-|K_\\kappa\\rangle \\right) \\right] \\\\\n \\mathcal{C}_0 &= \\frac{1}{d_A} \\sum_{\\alpha,\\lambda} \\langle A_\\alpha|L_z+2S_z|A_\\alpha\\rangle \\times \\left(|\\langle A_\\alpha|m_-|J_\\lambda\\rangle|^2 - \\langle A_\\alpha|m_+|J_\\lambda\\rangle|^2\\right) \\\\\n \\mathcal{D}_0 &= \\frac{1}{2d_A} \\sum_{\\alpha,\\lambda} \\left(|\\langle A_\\alpha|m_-|J_\\lambda\\rangle|^2 + \\langle A_\\alpha|m_+|J_\\lambda\\rangle|^2\\right) \n\\end{align}"
},
{
"math_id": 55,
"text": "d_A"
},
{
"math_id": 56,
"text": "A"
},
{
"math_id": 57,
"text": "K"
},
{
"math_id": 58,
"text": "J"
},
{
"math_id": 59,
"text": "\\lambda"
},
{
"math_id": 60,
"text": "\\kappa"
},
{
"math_id": 61,
"text": "E_X"
},
{
"math_id": 62,
"text": "X"
},
{
"math_id": 63,
"text": "L_z"
},
{
"math_id": 64,
"text": "S_z"
},
{
"math_id": 65,
"text": "\\Re"
},
{
"math_id": 66,
"text": "|K\\rangle"
},
{
"math_id": 67,
"text": "|A\\rangle"
},
{
"math_id": 68,
"text": "|J\\rangle"
},
{
"math_id": 69,
"text": "A:B:C = \\frac{1} {\\Delta \\Gamma} : \\frac {1} {\\Delta E} : \\frac{1}{kT}"
},
{
"math_id": 70,
"text": "\\Delta \\Gamma"
},
{
"math_id": 71,
"text": "\\Delta E"
}
] | https://en.wikipedia.org/wiki?curid=92206 |
922087 | Bolted joint | Mechanical joint secured by a threaded fastener
A bolted joint is one of the most common elements in construction and machine design. It consists of a male threaded fastener (e.g., a bolt) that captures and joins other parts, secured with a matching female screw thread. There are two main types of bolted joint designs: tension joints and shear joints.
The selection of the components in a threaded joint is a complex process. Careful consideration is given to many factors such as temperature, corrosion, vibration, fatigue, and initial preload.
Joint types.
Tension joint.
In a tension joint, the bolt and clamped components of the joint are designed to transfer an applied tension load through the joint by way of the clamped components by the design of a proper balance of joint and bolt stiffness. The joint should be designed such that the clamp load is never overcome by the external tension forces acting to separate the joint. If the external tension forces overcome the clamp load (bolt preload) the clamped joint components will separate, allowing relative motion of the components.
Shear joint.
The second type of bolted joint transfers the applied load in shear of the bolt shank and relies on the shear strength of the bolt. Tension loads on such a joint are only incidental. A preload is still applied but consideration of joint flexibility is not as critical as in the case where loads are transmitted through the joint in tension. Other such shear joints do not employ a preload on the bolt as they are designed to allow rotation of the joint about the bolt but use other methods of maintaining bolt/joint integrity. Joints that allow rotation include clevis linkages, and rely on a locking mechanism (like lock washers, thread adhesives, and lock nuts).
Proper joint design and bolt preload provide several useful properties.
In both the tension and shear joint design cases, some level of tension preload in the bolt and resulting compression preload in the clamped components is essential to the joint integrity. The preload target can be achieved by a variety of methods: applying a measured torque to the bolt, measuring bolt extension, heating to expand the bolt then turning the nut down, torquing the bolt to the yield point, testing ultrasonically, or by applying a certain number of degrees of relative rotation of the threaded components. Each method has a range of uncertainties associated with it, some of which are very substantial.
Theory.
Typically, a bolt is tensioned (preloaded) by the application of a torque to either the bolt head or the nut. The applied torque causes the bolt to "climb" the thread causing a tensioning of the bolt and an equivalent compression in the components being fastened by the bolt. The preload developed in a bolt is due to the applied torque and is a function of the bolt diameter, the geometry of the threads, and the coefficients of friction that exist in the threads and under the torqued bolt head or nut. The stiffness of the components clamped by the bolt has no relation to the preload that is developed by the torque. The relative stiffness of the bolt and the clamped joint components do, however, determine the fraction of the external tension load that the bolt will carry and that in turn determines preload needed to prevent joint separation and by that means to reduce the range of stress the bolt experiences as the tension load is repeatedly applied. This determines the durability of the bolt when subjected to repeated tension loads. Maintaining a sufficient joint preload also prevents relative slippage of the joint components that would produce fretting wear that could result in a fatigue failure of those parts.
The clamp load, also called preload of a fastener, is created when a torque is applied, and so develops a tensile preload that is generally a substantial percentage of the fastener's proof strength. Fasteners are manufactured to various standards that define, among other things, their strength. "Torque charts" are available to specify the required torque for a given fastener based on its "property class" (fineness of manufacture and fit) and "grade" (tensile strength).
When a fastener is torqued, a tension preload develops in the bolt and an equal compressive preload develops in the parts being fastened. This can be modeled as a spring-like assembly that has some assumed distribution of compressive strain in the clamped joint components. When an external tension load is applied, it relieves the compressive strains induced by the preload in the clamped components, hence the preload acting on the compressed joint components provides the external tension load with a path (through the joint) other than through the bolt. In a well designed joint, perhaps 80-90% of the externally applied tension load will pass through the joint and the remainder through the bolt. This reduces the fatigue loading of the bolt.
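A minimal sketch of this load-sharing picture, using the simple parallel-spring model of a preloaded joint; the stiffness values and external load below are arbitrary illustrations, not data for any particular joint.
<syntaxhighlight lang="python">
def external_load_split(k_bolt, k_members, external_load):
    """Split an external tension load between the bolt and the clamped members
    using the simple parallel-spring model of a preloaded joint
    (valid only while the joint remains clamped, i.e. not separated)."""
    bolt_share = k_bolt / (k_bolt + k_members) * external_load
    member_share = external_load - bolt_share
    return bolt_share, member_share

# Illustrative values: clamped members four times stiffer than the bolt.
bolt_load, member_load = external_load_split(k_bolt=1.0, k_members=4.0,
                                             external_load=1000.0)
print(f"bolt sees {bolt_load:.0f} N extra, joint sees {member_load:.0f} N")
# -> bolt sees 200 N extra, joint sees 800 N, i.e. about 80% passes through the joint.
</syntaxhighlight>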
When the fastened parts are less stiff than the fastener (those that use soft, compressed gaskets for example), this model breaks down and the fastener is subjected to a tension load that is the sum of the tension preload and the external tension load.
In some applications, joints are designed so that the fastener eventually fails before more expensive components. In this case, replacing an existing fastener with a higher strength fastener can result in equipment damage. Thus, it is generally good practice to replace old fasteners with new fasteners of the same grade.
Calculating the torque.
Engineered joints require the torque to be chosen to provide the correct tension preload. Applying the torque to fasteners is commonly achieved using a torque wrench. The required torque value for a particular fastener application may be quoted in the published standard document, defined by the manufacturer or calculated. The side of the threaded fastening having the least friction should receive torque while the other side is counter-held or otherwise prevented from turning.
A common relationship used to calculate the torque for a desired preload takes into account the thread geometry and friction in the threads and under the bolt head or nut. The following assumes standard ISO or National Standard bolts and threads are used:
formula_0
where
formula_1 is the required torque
formula_2 is the nut factor
formula_3 is the desired preload
formula_4 is the bolt diameter
The nut factor K accounts for the thread geometry, friction, and pitch. When ISO and Unified National Standard threads are used, the nut factor is:
formula_5
where
formula_6 = the mean thread diameter, close to pitch diameter.
formula_4 = nominal bolt diameter
formula_7 = (thread pitch)/(pi * dm)
Thread Pitch = 1/N where N is the number of threads per inch or mm
formula_8 = friction coefficient in the threads
formula_9 = half the thread angle (typically 60°) = 30°
formula_10 = friction coefficient under torqued head or nut
When formula_8 = formula_10 = 0.15 (dimensions corresponding to coarse or fine bolts of any size), the nut factor is K ≈ 0.20 and the torque/preload relationship becomes:
formula_11
A study in which two samples of 1/2 in.-20 UNF bolts, one lubricated and the other unlubricated, were torqued to 800 lb-in produced the same mean preload of 7700 lbf. The preloads for the unlubricated bolt sample had a standard deviation from the mean value of 1100 lbf, whereas the lubricated sample had a standard deviation of 680 lbf. If the preload value and torque are used in the above relation to solve for the nut factor, it is found to be K = 0.208, which is very close to the recommended value of 0.20.
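The relations above are straightforward to evaluate. The sketch below computes the nut factor K and the corresponding torque for a desired preload, and also backs K out of the study just quoted; the mean (pitch) diameter used for the 1/2 in.-20 UNF thread is an approximate, assumed value rather than one taken from a standards table.
<syntaxhighlight lang="python">
import math

def nut_factor(d, d_m, pitch, mu_thread=0.15, mu_collar=0.15, half_angle_deg=30.0):
    """Nut factor K for ISO / Unified threads, from the relation quoted above.
    d = nominal diameter, d_m = mean (pitch) diameter, in the same length units."""
    tan_psi = pitch / (math.pi * d_m)                        # thread lead angle
    sec_alpha = 1.0 / math.cos(math.radians(half_angle_deg))
    thread_term = (tan_psi + mu_thread * sec_alpha) / (1.0 - mu_thread * tan_psi * sec_alpha)
    return d_m / (2.0 * d) * thread_term + 0.625 * mu_collar

def torque_for_preload(preload, d, K):
    """Required tightening torque, T = K * F_preload * d."""
    return K * preload * d

# Assumed, approximate dimensions for a 1/2 in.-20 UNF bolt.
d, d_m, pitch = 0.500, 0.4675, 1.0 / 20.0        # inches
K = nut_factor(d, d_m, pitch)
print(f"K ≈ {K:.2f}")                                        # about 0.19 to 0.20
print(f"T for 7700 lbf preload ≈ {torque_for_preload(7700, d, K):.0f} lb-in")

# Backing K out of the study quoted above (800 lb-in gave a 7700 lbf mean preload):
print(f"K from study ≈ {800 / (7700 * d):.3f}")              # ≈ 0.208
</syntaxhighlight>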
The preferred bolt preload for structural applications should be at least 75% of the fastener's proof load for the higher strength fasteners and as high as 90% of the proof load for permanent fasteners. To achieve the benefits of the preloading, the clamping force must be higher than the joint separation load. For some joints, multiple fasteners are required to secure the joint; these are all hand tightened before the final torque is applied to ensure an even joint seating.
The preload achieved by torquing a bolt is caused by the part of the torque that is effective. Friction in the threads and under the nut or bolt head uses up some fraction of the applied torque. Much of the torque applied is lost overcoming friction under the torqued bolt head or nut (50%) and in the threads (40%). The remaining 10% of the applied torque does useful work in stretching the bolt and providing the preload. Initially, as the torque is applied, it must overcome static friction under the head of the bolt or nut (depending on which end is being torqued) and also in the threads. Finally, dynamic friction prevails and the torque is distributed in a 50/40/10 % manner as the bolt is tensioned. The torque value is dependent on the friction produced in the threads and under the torqued bolt head or nut and the fastened material or washer if used. This friction can be affected by the application of a lubricant or any plating (e.g. cadmium or zinc) applied to the threads, and the fastener's standard defines whether the torque value is for dry or lubricated threading, as lubrication can reduce the torque value by 15% to 25%; lubricating a fastener designed to be torqued dry could over-tighten it, which may damage threading or stretch the fastener beyond its elastic limit, thereby reducing its clamping ability.
Either the bolt head or the nut can be torqued. If one has a larger bearing area or coefficient of friction it will require more torque to provide the same target preload. Fasteners should only be torqued if they are fitted in clearance holes.
Torque wrenches do not give a direct measurement of the preload in the bolt.
More accurate methods for determining the preload rely on defining or measuring the "screw extension" from the nut. Alternatively, measurement of the angular rotation of the nut can serve as the basis for defining screw extension based on the fastener's thread pitch. Measuring the screw extension directly allows the clamping force to be very accurately calculated. This can be achieved using a dial test indicator, reading deflection at the fastener tail, using a strain gauge, or ultrasonic length measurement.
Bolt preload can also be controlled by torquing the bolt to the point of yielding. Under some circumstances, a skilled operator can feel the drop off of the work required to turn the torque wrench as the material of the bolt begins to yield. At that point the bolt has a preload determined by the bolt area and the yield strength of the bolt material. This technique can be more accurately executed by specially built machines. Because this method only works for very high preloads and requires comparatively expensive tooling, it is only commonly used for specific applications, primarily in high performance engines.
There is no (as yet) simple method to measure the tension of a fastener in situ. All methods, from the least to most accurate, involve first relaxing the fastener, then applying force to it and quantifying the resultant amount of elongation achieved. This is known as 're-torqueing' or 're-tensioning' depending on which technology is employed.
Technologies employed in this process can be:
An electronic torque wrench is used on the fastener in question, so that the torque applied can be measured as it is increased in magnitude.
Recent technological developments have enabled tensions to be established (± 1%) by using ultrasonic testing. This provides the same accuracy to that of strain gauging without having to set strain gauges on each fastener.
Another method that indicates tension (mainly in erecting steel) involves the use of crush-washers. These are washers that have been drilled and filled with orange RTV. When a given force has been applied (± 10%), orange rubber strands appear.
Large-volume users (such as auto makers) frequently use computer-controlled nut drivers. With such machines, the computer is in control of shutting off the torque mechanism when a predetermined value has been reached. Such machines are often used to fit and tighten wheel nuts on an assembly line, and have also been developed for use in mobile plant tire fitting bays on mine sites.
Thread engagement.
"Thread engagement" is the length or number of threads that are engaged between the screw and the female threads. Bolted joints are designed so that the bolt shank fails in tension before the threads fail in shear, but for this to hold true, a minimum thread engagement must be achieved. The following equation defines this minimum thread engagement:
formula_12
Where Le is the thread engagement length, At is the tensile stress area, D is the major diameter of the screw, and p is the pitch. This equation only holds true if the screw and female thread materials are the same. If they are not the same, then the following equations can be used to determine the additional thread length that is required:
formula_13
formula_14
Where Le2 is the new required thread engagement.
While these formulas give absolute minimum thread engagement, many industries specify that bolted connections be at least fully engaged. For instance, the FAA has determined that in general cases, at least one thread must be protruding from any bolted connection.
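A short sketch evaluating these thread-engagement formulas; the tensile stress area, thread dimensions and material strengths below are illustrative assumptions, not values from a standard.
<syntaxhighlight lang="python">
import math

def min_thread_engagement(A_t, D, p):
    """Minimum thread engagement length L_e for equal screw and nut materials,
    from the first formula above."""
    return 2.0 * A_t / (0.5 * math.pi * (D - 0.64952 * p))

def engagement_for_weaker_nut(L_e, uts_external, uts_internal):
    """Scale L_e by J when the internal (female) thread material is weaker."""
    J = uts_external / uts_internal
    return J * L_e

# Illustrative, assumed numbers: an M10 x 1.5 screw with a tensile stress
# area of roughly 58 mm^2.
A_t, D, p = 58.0, 10.0, 1.5                      # mm^2, mm, mm
L_e = min_thread_engagement(A_t, D, p)
print(f"L_e  ≈ {L_e:.1f} mm")                    # about 8 mm for like materials

# Steel screw (~800 MPa) threaded into a much weaker part (~300 MPa):
print(f"L_e2 ≈ {engagement_for_weaker_nut(L_e, 800.0, 300.0):.1f} mm")
</syntaxhighlight>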
Failure modes.
When doing a failure mode analysis for bolts that have broken, come loose or corroded, careful consideration must be given to the below failure modes:
Overloading occurs when operating forces of the application produce loads that exceed the clamp load, causing the joint to loosen over time or fail catastrophically.
Over-torquing might cause failure by damaging the threads and deforming the fastener, though this can happen over a very long time. Under-torquing can cause failures by allowing a joint to come loose, and it may also allow the joint to flex and thus fail under fatigue.
When axial or transverse loading overcomes the bolt's preload or causes the bolt to slip transversely, movement in the bolt can cause small cracks to build up in the material, eventually leading to fatigue failure of the bolt or male threaded component. According to Bill Eccles of Bolt Science, "in the vast majority of applications, the most effective way to ensure that the bolt is fatigue resistant is to ensure that it is tightened sufficiently...".
Brinelling may occur with poor quality washers, leading to a loss of clamp load and subsequent fatigue failure of the joint.
Other modes of failure include corrosion, embedment, and exceeding the shear stress limit.
Bolted joints may be used intentionally as sacrificial parts, which are intended to fail before other parts, as in a shear pin.
Locking mechanisms.
Locking mechanisms keep bolted joints from coming loose. They are required when vibration or joint movement will cause loss of clamp load and joint failure, and in equipment where the security of bolted joints is essential. A prevalent test for the self-loosening behaviour is the Junker test.
Two nuts, tightened on each other. In this application a thinner nut should be placed adjacent to the joint, and a thicker nut tightened onto it. The thicker nut applies more force to the joint, first relieving the force on the threads of the thinner nut and then applying a force in the opposite direction. In this way the thicker nut presses tightly on the side of the threads away from the joint, while the thinner nut presses on the side of the threads nearest the joint, tightly locking the two nuts against the threads in both directions.
An insert on the internal threads (either metallic or non-metallic, e.g. Nyloc nut) or a plug/patch of non-metallic material on the external threads is installed. This material binds against the threads of the opposing fastener with a friction force and creates a prevailing torque, which resists the backing-out or loosening of the fastener.
The use of a chemical locking compound binds the threads together when the compound cures. Examples of such compounds include anaerobic compounds such as Loctite, which cures in the absence of oxygen and acts as an adhesive to lock the threads of the joint together. Chemical locking methods create friction after the breakaway torque; the prevailing torque is usually higher than zero, since the cured polymer still creates friction when the nut is rotated.
Holes are drilled in nuts and bolt heads, and wire is threaded through the holes to prevent back-rotation. This method of locking is labor intensive, but is still used on critical joints.
Some portion of the nut deforms elastically during tightening to provide a locking action.
A washer that bends axially during tightening. Spring washers create additional axial force, whereas lock washers have parts that engage the surfaces to provide more direct resistance against rotation.
Bolt banging.
"Bolt banging" occurs in buildings when bolted joints slip into "bearing under load", thus causing a loud and potentially frightening noise resembling a rifle shot that is not, however, of structural significance and does not pose any threat to occupants.
A bolted joint between two elements may act as a bearing-type joint, or a friction joint. In the friction joint, the elements are clamped together with enough force that the resultant friction between the clamped surfaces prevents them from slipping laterally over each other.
In the bearing joint, the bolt itself limits lateral movement of the elements by the shank of the bolt bearing upon the sides of the holes in the clamped elements. Such joints require less clamping force, because a high level of friction between the clamped surfaces is not required. The clearance between the bolt and the holes means that some lateral movement may occur before the bolt bears against the sides of the holes.
Even when designed as a bearing joint, the surface friction between the clamped elements may be sufficient to resist movement for some time, especially when the building may not yet be fully loaded – thus it operates initially as a friction joint. When the lateral force becomes sufficient to overcome this friction, the clamped elements move until the sides of the holes bear against the shank of the bolt. This movement – "slip into bearing" – usually starts and stops very suddenly, often releasing elastic energy in the associated elements, resulting in a loud but harmless bang.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T = K P_{pre} d "
},
{
"math_id": 1,
"text": "T "
},
{
"math_id": 2,
"text": "K "
},
{
"math_id": 3,
"text": "P_{pre} "
},
{
"math_id": 4,
"text": " d "
},
{
"math_id": 5,
"text": "K = \\frac{d_{m}}{2 d}\\,\\left(\\frac{ \\tan \\psi +\\mu \\sec \\alpha} { 1 - \\mu \\tan \\psi \\sec \\alpha}\\right) + 0.625 \\mu_{c}"
},
{
"math_id": 6,
"text": " d_{m} "
},
{
"math_id": 7,
"text": " \\tan\\psi "
},
{
"math_id": 8,
"text": " \\mu "
},
{
"math_id": 9,
"text": " \\alpha "
},
{
"math_id": 10,
"text": " \\mu_{c}"
},
{
"math_id": 11,
"text": "T = 0.20 P_{pre} d "
},
{
"math_id": 12,
"text": "L_e = \\frac{2 \\times A_t}{0.5 \\pi \\left( D - 0.64952 p \\right)}"
},
{
"math_id": 13,
"text": "J = \\frac{\\text{tensile strength of external thread material}}{\\text{tensile strength of internal thread material}}"
},
{
"math_id": 14,
"text": "L_{e2} = J \\times L_e"
}
] | https://en.wikipedia.org/wiki?curid=922087 |
9221221 | Ceramic capacitor | Fixed-value capacitor using ceramic
A ceramic capacitor is a fixed-value capacitor where the ceramic material acts as the dielectric. It is constructed of two or more alternating layers of ceramic and a metal layer acting as the electrodes. The composition of the ceramic material defines the electrical behavior and therefore applications. Ceramic capacitors are divided into two application classes:
Ceramic capacitors, especially multilayer ceramic capacitors (MLCCs), are the most produced and used capacitors in electronic equipment, with approximately one trillion (1012) pieces produced per year.
Ceramic capacitors of special shapes and styles are used as capacitors for RFI/EMI suppression, as feed-through capacitors and in larger dimensions as power capacitors for transmitters.
<templatestyles src="Template:TOC limit/styles.css" />
History.
Since the beginning of the study of electricity, non-conductive materials such as glass, porcelain, paper and mica have been used as insulators. Some decades later, these materials also proved well-suited for use as the dielectric of the first capacitors.
Even in the early years of Marconi's wireless transmitting apparatus, porcelain capacitors were used for high voltage and high frequency application in the transmitters. On the receiver side, the smaller mica capacitors were used for resonant circuits. Mica dielectric capacitors were invented in 1909 by William Dubilier. Prior to World War II, mica was the most common dielectric for capacitors in the United States.
Mica is a natural material and not available in unlimited quantities. So in the mid-1920s the scarcity of mica in Germany, together with experience with porcelain (a special class of ceramic), led to the first capacitors using ceramic as the dielectric, founding a new family of ceramic capacitors. Paraelectric titanium dioxide (rutile) was used as the first ceramic dielectric because it has a linear temperature dependence of capacitance, useful for temperature compensation of resonant circuits, and could replace mica capacitors. In 1926 these ceramic capacitors were produced in small quantities, with increasing quantities in the 1940s. The style of these early ceramics was a disc with metallization on both sides contacted with tinned wires. This style predates the transistor and was used extensively in vacuum-tube equipment (e.g., radio receivers) from about 1930 through the 1950s.
But this paraelectric dielectric had relatively low permittivity, so that only small capacitance values could be realized. The expanding market for radios in the 1930s and 1940s created a demand for higher capacitance values (though still below those of electrolytic capacitors) for HF decoupling applications. Discovered in 1921, the ferroelectric ceramic material barium titanate, with a permittivity in the range of 1,000, about ten times greater than that of titanium dioxide or mica, began to play a much larger role in electronic applications.
The higher permittivity resulted in much higher capacitance values, but this was coupled with relatively unstable electrical parameters. Therefore, these ceramic capacitors only could replace the commonly used mica capacitors for applications where stability was less important. Smaller dimensions, as compared to the mica capacitors, lower production costs and independence from mica availability accelerated their acceptance.
The fast-growing broadcasting industry after the Second World War drove deeper understanding of the crystallography, phase transitions and the chemical and mechanical optimization of the ceramic materials. Through the complex mixture of different basic materials, the electrical properties of ceramic capacitors can be precisely adjusted. To distinguish the electrical properties of ceramic capacitors, standardization defined several different application classes (Class 1, Class 2, Class 3). It is remarkable that the separate development during the War and the time afterwards in the US and the European market had led to different definitions of these classes (EIA vs IEC), and only recently (since 2010) has a worldwide harmonization to the IEC standardization taken place.
Besides the disc style, the typical style for ceramic capacitors (at that time called condensers) in radio applications after the War, from the 1950s through the 1970s, was a ceramic tube covered with tin or silver on both the inside and outside surfaces. It had relatively long terminals which, together with resistors and other components, formed a tangle of open circuit wiring.
The easy-to-mold ceramic material facilitated the development of special and large styles of ceramic capacitors for high-voltage, high-frequency (RF) and power applications.
With the development of semiconductor technology in the 1950s, barrier layer capacitors, or IEC class 3/EIA class IV capacitors, were developed using doped ferroelectric ceramics. Because this doped material was not suitable to produce multilayers, they were replaced decades later by Y5V class 2 capacitors.
The early style of the ceramic disc capacitor could be more cheaply produced than the common ceramic tube capacitors in the 1950s and 1970s. An American company in the midst of the Apollo program, launched in 1961, pioneered the stacking of multiple discs to create a monolithic block. This "multi-layer ceramic capacitor" (MLCC) was compact and offered high-capacitance capacitors. The production of these capacitors using the tape casting and ceramic-electrode cofiring processes was a great manufacturing challenge. MLCCs expanded the range of applications to those requiring larger capacitance values in smaller cases. These ceramic chip capacitors were the driving force behind the conversion of electronic devices from through-hole mounting to surface-mount technology in the 1980s. Polarized electrolytic capacitors could be replaced by non-polarized ceramic capacitors, simplifying the mounting.
In 1993, TDK Corporation succeeded in displacing palladium bearing electrodes with much cheaper nickel electrodes, significantly reducing production costs and enabling mass production of MLCCs.
As of 2012, more than 1012 MLCCs are manufactured each year. Along with the style of ceramic chip capacitors, ceramic disc capacitors are often used as safety capacitors in electromagnetic interference suppression applications. Besides these, large ceramic power capacitors for high-voltage or high-frequency transmitter applications are also to be found.
New developments in ceramic materials have been made with anti-ferroelectric ceramics. This material has a nonlinear antiferroelectric/ferroelectric phase change that allows increased energy storage with higher volumetric efficiency. They are used for energy storage (for example, in detonators).
Application classes, definitions.
The different ceramic materials used for ceramic capacitors, paraelectric or ferroelectric ceramics, influence the electrical characteristics of the capacitors. Using mixtures of paraelectric substances based on titanium dioxide results in very stable and linear behavior of the capacitance value within a specified temperature range and low losses at high frequencies. But these mixtures have a relatively low permittivity, so the capacitance values of these capacitors are relatively small.
Higher capacitance values for ceramic capacitors can be attained by using mixtures of ferroelectric materials like barium titanate together with specific oxides. These dielectric materials have much higher permittivities, but at the same time their capacitance values are more or less nonlinear over the temperature range, and losses at high frequencies are much higher. These different electrical characteristics of ceramic capacitors require grouping them into "application classes". The definition of the application classes comes from the standardization. As of 2013, two sets of standards were in use, one from the International Electrotechnical Commission (IEC) and the other from the now-defunct Electronic Industries Alliance (EIA).
The definitions of the application classes given in the two standards are different. The following table shows the different definitions of the application classes for ceramic capacitors:
Manufacturers, especially in the US, preferred Electronic Industries Alliance (EIA) standards. The EIA RS-198 standard, in many parts very similar to the IEC standard, defines four application classes for ceramic capacitors.
The different class numbers within the two standards are the source of many misunderstandings when interpreting the class descriptions in the datasheets of many manufacturers. The EIA ceased operations on February 11, 2011, but the former sectors continue to serve international standardization organizations.
In the following, the definitions of the IEC standard will be preferred and in important cases compared with the definitions of the EIA standard.
Class 1 ceramic capacitors.
Class 1 ceramic capacitors are accurate, temperature-compensating capacitors. They offer the most stable capacitance with respect to applied voltage, temperature and, to some extent, frequency. They have the lowest losses and therefore are especially suited for resonant circuit applications where stability is essential or where a precisely defined temperature coefficient is required, for example to compensate temperature effects in a circuit. The basic materials of class 1 ceramic capacitors are composed of a mixture of finely ground granules of paraelectric materials such as titanium dioxide (TiO2), modified by additives of zinc, zirconium, niobium, magnesium, tantalum, cobalt and strontium, which are necessary to achieve the capacitor's desired linear characteristics.
The general capacitance temperature behavior of class 1 capacitors depends on the basic paraelectric material, for example TiO2. The additives of the chemical composition are used to adjust precisely the desired temperature characteristic.
Class 1 ceramic capacitors have the lowest volumetric efficiency among ceramic capacitors. This is the result of the relatively low permittivity (6 to 200) of the paraelectric materials. Therefore, class 1 capacitors have capacitance values in the lower range.
Class 1 capacitors have a temperature coefficient that is typically fairly linear with temperature. These capacitors have very low electrical losses with a dissipation factor of approximately 0.15%. They undergo no significant aging processes and the capacitance value is nearly independent of the applied voltage. These characteristics allow applications for high Q filters, in resonant circuits and oscillators (for example, in phase-locked loop circuits).
The EIA RS-198 standard codes ceramic class 1 capacitors with a three character code that indicates temperature coefficient. The first letter gives the significant figure of the change in capacitance over temperature (temperature coefficient α) in ppm/K. The second character gives the multiplier of the temperature coefficient. The third letter gives the maximum tolerance from that in ppm/K. All ratings are from 25 to 85 °C:
In addition to the EIA code, the temperature coefficient of the capacitance dependence of class 1 ceramic capacitors is commonly expressed in ceramic names like "NP0", "N220" etc. These names include the temperature coefficient (α). In the IEC/EN 60384-8/21 standard, the temperature coefficient and tolerance are replaced by a two digit letter code (see table) in which the corresponding EIA code is added.
For instance, an "NP0" capacitor with EIA code "C0G" will have 0 drift, with a tolerance of ±30 ppm/K, while an "N1500" with the code "P3K" will have −1500 ppm/K drift, with a maximum tolerance of ±250 ppm/°C. Note that the IEC and EIA capacitor codes are industry capacitor codes and not the same as military capacitor codes.
Class 1 capacitors include capacitors with different temperature coefficients α. In particular, NP0/CG/C0G capacitors with α = 0 ± 30 ppm/K are of great technical interest. These capacitors have a capacitance variation dC/C of ±0.54% within the temperature range −55 to +125 °C. This enables an accurate frequency response over a wide temperature range (in, for example, resonant circuits). The other materials with their special temperature behavior are used to compensate for the opposite temperature behavior of parallel-connected components such as coils in oscillator circuits. Class 1 capacitors exhibit very small tolerances of the rated capacitance.
Class 2 ceramic capacitors.
Class 2 ceramic capacitors have a dielectric with a high permittivity and therefore a better volumetric efficiency than class 1 capacitors, but lower accuracy and stability. The ceramic dielectric is characterized by a nonlinear change of capacitance over the temperature range. The capacitance value also depends on the applied voltage. They are suitable for bypass, coupling and decoupling applications or for frequency discriminating circuits where low losses and high stability of capacitance are less important. They typically exhibit microphony.
Class 2 capacitors are made of ferroelectric materials such as barium titanate (BaTiO3) and suitable additives such as aluminium silicate, magnesium silicate and aluminium oxide. These ceramics have very high permittivity (200 to 14,000), allowing high capacitance values in relatively small packages; class 2 capacitors are significantly smaller than comparable class 1 capacitors. However, the permittivity is nonlinear with respect to field strength, meaning the capacitance varies significantly as the voltage across the terminals increases. Class 2 capacitors also exhibit poor temperature stability and age over time.
Due to these traits, class 2 capacitors are typically used in applications where only a minimum value of capacitance (as opposed to an accurate value) is required, such as the buffering/filtering of inputs and outputs of power supplies, and the coupling of electric signals.
Class 2 capacitors are labeled according to the change in capacitance over the temperature range. The most widely used classification is based on the EIA RS-198 standard and uses a three-digit code. The first character, a letter, denotes the coldest operating temperature; the second character, a numeral, denotes the hottest temperature; and the third character, another letter, denotes the maximum allowed capacitance change over the capacitor's entire specified temperature range:
For instance, a Z5U capacitor will operate from +10 °C to +85 °C with a capacitance change of at most +22% to −56%. An X7R capacitor will operate from −55 °C to +125 °C with a capacitance change of at most ±15%.
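The class 2 code can be read mechanically in the same way; the following partial decoder covers only the characters used in the two examples above (X7R and Z5U) and is not a complete implementation of the EIA RS-198 tables.
<syntaxhighlight lang="python">
# Partial decoder for EIA RS-198 class 2 codes, covering only the characters
# used in the two examples above; the full standard defines more entries.
LOW_TEMP = {"X": -55, "Z": +10}                 # coldest operating temperature, °C
HIGH_TEMP = {"5": +85, "7": +125}               # hottest operating temperature, °C
CAP_CHANGE = {"R": "±15 %", "U": "+22/-56 %"}   # max capacitance change over range

def decode_class2(code):
    return LOW_TEMP[code[0]], HIGH_TEMP[code[1]], CAP_CHANGE[code[2]]

for code in ("X7R", "Z5U"):
    low, high, change = decode_class2(code)
    print(f"{code}: {low} °C to {high} °C, capacitance change {change}")
</syntaxhighlight>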
Commonly used class 2 ceramic capacitor materials include X7R, Z5U and Y5V.
The IEC/EN 60384-9/22 standard uses another two-digit code.
In most cases it is possible to translate the EIA code into the IEC/EN code. Slight translation errors occur, but normally are tolerable.
Because class 2 ceramic capacitors have lower capacitance accuracy and stability, they are specified with wider tolerances.
For military types the class 2 dielectrics specify temperature characteristic (TC) but not temperature-voltage characteristic (TVC). Similar to X7R, military type BX cannot vary more than 15% over temperature, and in addition, must remain within +15%/-25 % at maximum rated voltage. Type BR has a TVC limit of +15%/-40%.
Class 3 ceramic capacitors.
Class 3 barrier layer or semiconductive ceramic capacitors have very high permittivity, up to 50,000 and therefore a better volumetric efficiency than class 2 capacitors. However, these capacitors have worse electrical characteristics, including lower accuracy and stability. The dielectric is characterized by very high nonlinear change of capacitance over the temperature range. The capacitance value additionally depends on the voltage applied. As well, they have very high losses and age over time.
Barrier layer ceramic capacitors are made of doped ferroelectric materials such as barium titanate (BaTiO3). As this ceramic technology improved in the mid-1980s, barrier layer capacitors became available in values of up to 100 μF, and at that time it seemed that they could substitute for smaller electrolytic capacitors.
Because it is not possible to build multilayer capacitors with this material, only leaded single layer types are offered in the market.
Due to advancements in multilayer ceramic capacitors enabling superior performance in a smaller package, barrier layer capacitors as a technology are now considered obsolete and no longer standardized by the IEC.
Construction and styles.
Ceramic capacitors are composed of a mixture of finely ground granules of paraelectric or ferroelectric materials, appropriately mixed with other materials to achieve the desired characteristics. From these powder mixtures, the ceramic is sintered at high temperatures. The ceramic forms the dielectric and serves as a carrier for the metallic electrodes. The minimum thickness of the dielectric layer, which today (2013) for low-voltage capacitors is in the range of 0.5 micrometers, is limited downwards by the grain size of the ceramic powder. The thickness of the dielectric for capacitors with higher voltages is determined by the dielectric strength of the desired capacitor.
The electrodes of the capacitor are deposited on the ceramic layer by metallization. For MLCCs, alternating metallized ceramic layers are stacked one above the other. The exposed electrode metallization at both ends of the body is connected to the contacting terminals. A lacquer or ceramic coating protects the capacitor against moisture and other ambient influences.
Ceramic capacitors come in various shapes and styles. Some of the most common are:
Multi-layer ceramic capacitors (MLCC).
Manufacturing.
An MLCC can be thought of as consisting of many single-layer capacitors stacked together into a single package. The starting material for all MLCC chips is a mixture of finely ground granules of paraelectric or ferroelectric raw materials, modified by accurately determined additives. The composition of the mixture and the size of the powder particles, as small as 10 nm, reflect the manufacturer's expertise.
A thin ceramic foil is cast from a suspension of the powder with a suitable binder. Rolls of foil are cut into equal-sized sheets, which are screen printed with a metal paste layer, which will become the electrodes. In an automated process, these sheets are stacked in the required number of layers and solidified by pressure. Besides the relative permittivity, the size and number of layers determines the later capacitance value. The electrodes are stacked in an alternating arrangement slightly offset from the adjoining layers so that they each can later be connected on the offset side, one left, one right. The layered stack is pressed and then cut into individual components. High mechanical precision is required, for example, to produce a 500 or more layer stack of size "0201" (0.5 mm × 0.3 mm).
After cutting, the binder is burnt out of the stack. This is followed by sintering at temperatures between 1,200 and 1,450 °C producing the final, mainly crystalline, structure. This burning process creates the desired dielectric properties. Burning is followed by cleaning and then metallization of both end surfaces. Through the metallization, the ends and the inner electrodes are connected in parallel and the capacitor gets its terminals. Finally, each capacitor is electrically tested to ensure functionality and adequate performance, and packaged in a tape reel.
Miniaturizing.
The capacitance formula ("C") of a MLCC capacitor is based on the formula for a plate capacitor enhanced with the number of layers:
formula_0
where "ε" stands for dielectric permittivity; "A" for electrode surface area; n for the number of layers; and "d" for the distance between the electrodes.
A thinner dielectric or a larger electrode area each increase the capacitance value, as will a dielectric material of higher permittivity.
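A quick numerical illustration of this formula follows; the permittivity, layer count and dimensions used below are arbitrary example values, not the data of any real part.
<syntaxhighlight lang="python">
# Plate-capacitor estimate C = eps0 * eps_r * n * A / d for an MLCC stack.
# The numbers below are illustrative only, not the data of any real part.
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def mlcc_capacitance(eps_r, n_layers, area_m2, thickness_m):
    return EPS0 * eps_r * n_layers * area_m2 / thickness_m

# Example: class 2 ceramic (eps_r ~ 3000), 500 active layers, about
# 1.0 mm x 0.5 mm of overlapping electrode area, 1 µm dielectric thickness.
C = mlcc_capacitance(eps_r=3000, n_layers=500,
                     area_m2=1.0e-3 * 0.5e-3, thickness_m=1.0e-6)
print(f"C ≈ {C * 1e6:.1f} µF")   # a few microfarads
</syntaxhighlight>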
With the progressive miniaturization of digital electronics in recent decades, the components on the periphery of the integrated logic circuits have been scaled down as well. Shrinking an MLCC involves reducing the dielectric thickness and increasing the number of layers. Both options require huge efforts and are connected with a lot of expertise.
In 1995 the minimum thickness of the dielectric was 4 μm. By 2005 some manufacturers produced MLCC chips with layer thicknesses of 1 μm. As of 2010, the minimum thickness is about 0.5 μm. The field strength in the dielectric increased to 35 V/μm.
The size reduction of these capacitors is achieved by reducing the powder grain size, the prerequisite for making the ceramic layers thinner. In addition, the manufacturing process has become more precisely controlled, so that more and more layers can be stacked.
Between 1995 and 2005, the capacitance of a Y5V MLCC capacitor of size 1206 was increased from 4.7 μF to 100 μF. Meanwhile (2013), many producers can deliver class 2 MLCC capacitors with a capacitance value of 100 μF in chip size 0805.
MLCC case sizes.
MLCCs don't have leads, and as a result they are usually smaller than their counterparts with leads. They don't require through-hole access in a PCB to mount and are designed to be handled by machines rather than by humans. As a result, surface-mount components like MLCCs are typically cheaper.
MLCCs are manufactured in standardized shapes and sizes for comparable handling. Because the early standardization was dominated by American EIA standards the dimensions of the MLCC chips were standardized by EIA in units of inches. A rectangular chip with the dimensions of 0.06-inch length and 0.03-inch width is coded as "0603". This code is international and in common use. JEDEC (IEC/EN), devised a second, metric code. The EIA code and the metric equivalent of the common sizes of multilayer ceramic chip capacitors, and the dimensions in mm are shown in the following table. Missing from the table is the measure of the height "H". This is generally not listed, because the height of MLCC chips depends on the number of layers and thus on the capacitance. Normally, however, the height H does not exceed the width W.
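The inch-based EIA size code can be converted mechanically, as in the sketch below; note that the officially standardized metric dimensions are rounded values and can differ slightly from this naive conversion.
<syntaxhighlight lang="python">
# Convert an inch-based EIA chip size code (e.g. "0603" = 0.06 in x 0.03 in)
# into millimetres.  The height is not encoded, so it is not returned, and the
# officially standardized metric dimensions are rounded values that can differ
# slightly from this naive conversion.
def eia_size_to_mm(code):
    length_in = int(code[:2]) / 100.0
    width_in = int(code[2:]) / 100.0
    return length_in * 25.4, width_in * 25.4

for code in ("0402", "0603", "0805", "1206"):
    l_mm, w_mm = eia_size_to_mm(code)
    print(f"EIA {code}: {l_mm:.2f} mm x {w_mm:.2f} mm")
</syntaxhighlight>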
NME and BME metallization.
Originally, MLCC electrodes were constructed out of noble metals such as silver and palladium which can withstand high sintering temperatures of 1200 to 1400 °C without readily oxidizing. These "noble metal electrode" (NME) capacitors offered very good electrical properties.
However, a surge in prices of noble metals in the late 1990s greatly increased manufacturing costs; these pressures resulted in the development of capacitors that used cheaper metals like copper and nickel. These "base metal electrode" (BME) capacitors possessed poorer electrical characteristics; exhibiting greater shrinkage of capacitance at higher voltages and increased loss factor.
The disadvantages of BME were deemed acceptable for class 2 capacitors, which are primarily used in accuracy-insensitive, low-cost applications such as power supplies. NME still sees use in class 1 capacitors where conformance to specifications are critical and cost is less of a concern.
MLCC capacitance ranges.
Capacitance of MLCC chips depends on the dielectric, the size and the required voltage (rated voltage). Capacitance values start at about 1pF. The maximum capacitance value is determined by the production technique. For X7R that is 47 μF, for Y5V: 100 μF.
The picture right shows the maximum capacitance for class 1 and class 2 multilayer ceramic chip capacitors. The following two tables, for ceramics NP0/C0G and X7R each, list for each common case size the maximum available capacitance value and rated voltage of the leading manufacturers Murata, TDK, KEMET, AVX. (Status April 2017)
Low-ESL styles.
In the region of its resonance frequency, a capacitor has the best decoupling properties for noise or electromagnetic interference. The resonance frequency of a capacitor is determined by the inductance of the component. The inductive parts of a capacitor are summarized in the equivalent series inductance, or ESL. (Note that L is the electrical symbol for inductance.) The smaller the inductance, the higher the resonance frequency.
Because switching frequencies, especially in digital signal processing, have continued to rise, the demand for high-frequency decoupling or filter capacitors has increased. With a simple design change the ESL of an MLCC chip can be reduced: the stacked electrodes are connected to the terminations on the longitudinal side. This reduces the distance that the charge carriers flow over the electrodes, which reduces the inductance of the component.
For example, an 0.1 μF X7R MLCC in a 0805 package resonates at 16 MHz. The same capacitor with leads on its long sides (i.e. an "0508") has a resonance frequency of 22 MHz.
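Taking the self-resonant frequency as f = 1/(2π√(L·C)), the equivalent series inductance implied by the two figures just quoted can be estimated; the sketch below is only an order-of-magnitude illustration that ignores ESR.
<syntaxhighlight lang="python">
import math

def esl_from_resonance(capacitance_f, f_res_hz):
    """Equivalent series inductance implied by a self-resonance frequency,
    using f_res = 1 / (2*pi*sqrt(L*C)) and ignoring ESR."""
    return 1.0 / ((2.0 * math.pi * f_res_hz) ** 2 * capacitance_f)

C = 0.1e-6   # 0.1 µF
for label, f_res in (("0805, end terminations", 16e6),
                     ("0508, long-side terminations", 22e6)):
    print(f"{label}: ESL ≈ {esl_from_resonance(C, f_res) * 1e9:.2f} nH")
</syntaxhighlight>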
Another possibility is to form the device as an array of capacitors. Here, several individual capacitors are built in a common housing. Connecting them in parallel, the resulting ESL as well as ESR values of the components are reduced.
X2Y decoupling capacitor.
A standard multi-layer ceramic capacitor has many opposing electrode layers stacked inside connected with two outer terminations. The X2Y ceramic chip capacitor however is a 4 terminal chip device. It is constructed like a standard two-terminal MLCC out of the stacked ceramic layers with an additional third set of shield electrodes incorporated in the chip. These shield electrodes surround each existing electrode within the stack of the capacitor plates and are low ohmic contacted with two additional side terminations across to the capacitor terminations. The X2Y construction results in a three-node capacitive circuit that provides simultaneous line-to-line and line-to-ground filtering.
Capable of replacing 2 or more conventional devices, the X2Y ceramic capacitors are ideal for high frequency filtering or noise suppression of supply voltages in digital circuits, and can prove invaluable in meeting stringent EMC demands in dc motors, in automotive, audio, sensor and other applications.
The X2Y footprint results in lower mounted inductance. This is particularly of interest for use in high-speed digital circuits with clock rates of several 100 MHz and upwards. There the decoupling of the individual supply voltages on the circuit board is difficult to realize due to parasitic inductances of the supply lines. A standard solution with conventional ceramic capacitors requires the parallel use of many conventional MLCC chips with different capacitance values. Here X2Y capacitors are able to replace up to five equal-sized ceramic capacitors on the PCB. However, this particular type of ceramic capacitor is patented, so these components are still comparatively expensive.
An alternative to X2Y capacitors may be a three-terminal capacitor.
Mechanical susceptibility.
Ceramics are brittle, and MLCC chips surface-mount soldered to a circuit board are often vulnerable to cracking from thermal expansion or mechanical stresses like depanelization, more so than leaded through-hole components.
Cracks can be introduced by the automated assembly process, or can result from high current in the circuit.
Vibration and shock forces on the circuit board are transmitted more or less undamped to the MLCC and its solder joints; excessive force may cause the capacitor to crack ("flex crack"). Excess solder in the joints is undesirable, as it may magnify the forces the capacitor is subjected to.
The capability of MLCC chips to withstand mechanical stress is tested by a so-called substrate bending test, where a PCB with a soldered MLCC is bent by a punch by 1 to 3 mm. Failure occurs if the MLCC becomes a short-circuit or significantly changes in capacitance.
Bending strengths of MLCC chips differ by the ceramic material, the size of the chip, and the physical construction of the capacitors. Without special mitigation, NP0/C0G class 1 ceramic MLCC chips reach a typical bending strength of 2 mm, while larger X7R and Y5V class 2 ceramic chips achieve only a bending strength of approximately 1 mm. Smaller chips, such as size 0402, reach higher bending strength values in all types of ceramics.
With special design features, particularly at the electrodes and terminations, the bending strength can be improved. For example, an internal short circuit can arise from the contact of two electrodes with opposite polarity produced by a break of the ceramic in the region of the terminations. This can be prevented by reducing the overlapping surfaces of the electrodes, as is achieved e.g. by an "Open Mode Design" (OMD). Here a break in the region of the terminations reduces the capacitance value only slightly (AVX, KEMET).
With a similar construction called "Floating Electrode Design" (FED) or "Multi-layer Serial Capacitors" (MLSC), a break of parts of the capacitor body likewise results only in a capacitance reduction. This construction works with floating electrodes that have no conductive connection to the termination; a break does not lead to a short circuit, only to a reduced capacitance.
However, both structures lead to larger designs with respect to a standard MLCC version with the same capacitance value.
The same volume with respect to standard MLCCs is achieved by the introduction of a flexible intermediate layer of a conductive polymer between the electrodes and the termination called "Flexible Terminations" (FT-Cap) or "Soft Terminations". In this construction, the rigid metallic soldering connection can move against the flexible polymer layer, and thus can absorb the bending forces, without resulting in a break in the ceramic.
Some automotive capacitors are specified to adhere to AEC-Q200 and/or VW 80808.
RFI/EMI suppression with X- and Y capacitors.
Suppression capacitors are effective interference reduction components because their electrical impedance decreases with increasing frequency, such that at higher frequencies they appear as short circuits to high-frequency electrical noise and transients between the lines, or to ground. They therefore prevent equipment and machinery (including motors, inverters, and electronic ballasts, as well as solid-state relay snubbers and spark quenchers) from sending and receiving electromagnetic and radio frequency interference as well as transients in across-the-line (X capacitors) and line-to-ground (Y capacitors) connections. X capacitors effectively absorb symmetrical, balanced, or differential interference. Y capacitors are connected in a line bypass between a line phase and a point of zero potential, to absorb asymmetrical, unbalanced, or common-mode interference.
<gallery widths="280" heights="140" caption="RFI/EMI suppression with X- and Y-capacitors for equipment without and with additional safety insulation" class="center">
File:Safety caps-Appliance Class I.svg|Appliance Class I capacitor connection
File:Safety caps-Appliance Class II.svg|Appliance Class II capacitor connection
</gallery>
EMI/RFI suppression capacitors are designed so that any remaining interference or electrical noise does not exceed the limits of EMC directive EN 50081. Suppression components are connected directly to mains voltage for 10 to 20 years or more and are therefore exposed to potentially damaging overvoltages and transients. For this reason, suppression capacitors must comply with the safety and non-flammability requirements of international safety standards such as
RFI capacitors that fulfill all specified requirements are imprinted with the certification mark of various national safety standards agencies. For power line applications, special requirements are placed on the non-flammability of the coating and the epoxy resin impregnating or coating the capacitor body. To receive safety approvals, X and Y powerline-rated capacitors are destructively tested to the point of failure. Even when exposed to large overvoltage surges, these safety-rated capacitors must fail in a fail-safe manner that does not endanger personnel or property.
As of 2012, most ceramic capacitors used for EMI/RFI suppression were leaded types for through-hole mounting on a PCB, but the surface-mount technique is becoming more and more important. For this reason, in recent years many MLCC chips for EMI/RFI suppression from different manufacturers have received approvals and fulfill all requirements of the applicable standards.
Ceramic power capacitors.
Although the materials used for large power ceramic capacitors mostly are very similar to those used for smaller ones, ceramic capacitors with high to very high power or voltage ratings for applications in power systems, transmitters and electrical installations are often classified separately, for historical reasons. The standardization of ceramic capacitors for lower power is oriented toward electrical and mechanical parameters as components for use in electronic equipment. The standardization of power capacitors, contrary to that, is strongly focused on protecting personnel and equipment, given by the local regulating authority.
As modern electronic equipment gained the ability to handle power levels that were previously the exclusive domain of "electrical power" components, the distinction between the "electronic" and "electrical" power ratings has become less distinct. In the past, the boundary between these two families was approximately at a reactive power of 200 volt-amps, but modern power electronics can handle increasing amounts of power.
Power ceramic capacitors are mostly specified for much higher than 200 volt-amps. The great plasticity of ceramic raw material and the high dielectric strength of ceramics deliver solutions for many applications and are the reasons for the enormous diversity of styles within the family of power ceramic capacitors. These power capacitors have been on the market for decades. They are produced according to the requirements as class 1 power ceramic capacitors with high stability and low losses or class 2 power ceramic capacitors with high volumetric efficiency.
Class 1 power ceramic capacitors are used for resonant circuit application in transmitter stations. Class 2 power ceramic capacitors are used for circuit breakers, for power distribution lines, for high voltage power supplies in laser-applications, for induction furnaces and in voltage-doubling circuits. Power ceramic capacitors can be supplied with high rated voltages in the range of 2 kV up to 100 kV.
The dimensions of these power ceramic capacitors can be very large. At high power applications the losses of these capacitors can generate a lot of heat. For this reason some special styles of power ceramic capacitors have pipes for water-cooling.
Electrical characteristics.
Series-equivalent circuit.
All electrical characteristics of ceramic capacitors can be defined and specified by a series equivalent circuit composed of an idealized capacitance and additional electrical components that model all losses and inductive parameters of a capacitor. In this series-equivalent circuit the electrical characteristics of a capacitor are defined by
The use of a series equivalent circuit instead of a parallel equivalent circuit is defined in IEC/EN 60384-1.
Capacitance standard values and tolerances.
The "rated capacitance" CR or "nominal capacitance" CN is the value for which the capacitor has been designed. The actual capacitance depends on the measuring frequency and the ambient temperature. Standardized conditions for capacitors are a low-voltage AC measuring method at a temperature of 20 °C with frequencies of
Capacitors are available in different, geometrically increasing preferred values as specified in the E series standards of IEC/EN 60063. According to the number of values per decade, these are called the E3, E6, E12, E24, etc. series. The units used to specify capacitor values range from picofarad (pF) through nanofarad (nF) and microfarad (μF) up to farad (F).
The percentage of allowed deviation of the capacitance from the rated value is called capacitance tolerance. The actual capacitance value must be within the tolerance limits, or the capacitor is out of specification. For abbreviated marking in tight spaces, a letter code for each tolerance is specified in IEC/EN 60062.
The required capacitance tolerance is determined by the particular application. The narrow tolerances of E24 to E96 are used for high-quality class 1 capacitors in circuits such as precision oscillators and timers. For non-critical applications such as filtering or coupling circuits with class 2 capacitors, the tolerance series E12 down to E3 are sufficient.
Temperature dependence of capacitance.
Capacitance of ceramic capacitors varies with temperature. The different dielectrics of the many capacitor types show great differences in temperature dependence. The temperature coefficient is expressed in parts per million (ppm) per degree Celsius for class 1 ceramic capacitors or in percent (%) over the total temperature range for class 2 capacitors.
Frequency dependence of capacitance.
Most discrete capacitor types have greater or smaller capacitance changes with increasing frequencies. The dielectric strength of class 2 ceramic and plastic film diminishes with rising frequency. Therefore, their capacitance value decreases with increasing frequency. This phenomenon is related to the dielectric relaxation in which the time constant of the electrical dipoles is the reason for the frequency dependence of permittivity. The graph on the right hand side shows typical frequency behavior for class 2 vs class 1 capacitors.
Voltage dependence of capacitance.
Capacitance of ceramic capacitors may also change with applied voltage. This effect is more prevalent in class 2 ceramic capacitors, whose ferroelectric permittivity depends on the applied voltage: the higher the applied voltage, the lower the permittivity. The capacitance measured or applied at higher voltage can drop by up to 80% relative to the value measured with the standardized measuring voltage of 0.5 or 1.0 V. This behavior is a small source of nonlinearity in low-distortion filters and other analog applications. In audio applications this can be the reason for harmonic distortions.
The voltage dependence of capacitance in the two diagrams above shows curves from ceramic capacitors with NME metallization. For capacitors with BME metallization the voltage dependence of capacitance increases significantly.
Voltage proof.
For most capacitors, a physically conditioned dielectric strength or a breakdown voltage can usually be specified for each dielectric material and thickness. This is not possible with ceramic capacitors. The breakdown voltage of a ceramic dielectric layer may vary, depending on the electrode material and the sintering conditions of the ceramic, by up to a factor of 10. A high degree of precision and control of process parameters is necessary to keep the scattering of electrical properties for today's very thin ceramic layers within specified limits.
The voltage proof of ceramic capacitors is specified as rated voltage (UR). This is the maximum DC voltage that may be continuously applied to the capacitor up to the upper temperature limit. This guaranteed voltage proof is tested according to the voltages shown in the adjacent table.
Furthermore, in periodic life time tests (endurance tests) the voltage proof of ceramic capacitors is tested with increased test voltage (120 to 150% of UR) to ensure safe construction.
Impedance.
The frequency dependent AC resistance of a capacitor is called impedance formula_1 and is a complex ratio of voltage to current in an AC circuit. Impedance extends the concept of Ohm's law to AC circuits, and possesses both magnitude and phase at a particular frequency, unlike resistance, which has only magnitude.
Impedance is a measure of the ability of the capacitor to pass alternating currents. In this sense, impedance can be used like Ohm's law
formula_2
to calculate either the peak or the effective value of the current or the voltage.
As shown in the series-equivalent circuit of a capacitor, the real-world component includes an ideal capacitor formula_3, an inductance formula_4 and a resistor formula_5.
To calculate the impedance formula_1, the resistance and the two reactances have to be added geometrically:
formula_6
wherein the capacitive reactance is
formula_7
and the inductive reactance is
formula_8.
In the special case of resonance, in which both reactances have the same magnitude (formula_9), the impedance is determined only by formula_10.
Data sheets of ceramic capacitors only specify the impedance magnitude formula_1. The typical impedance curve shows that with increasing frequency, impedance decreases, down to a minimum. The lower the impedance, the more easily alternating currents can pass through the capacitor. At the minimum point of the curve, the point of resonance, where XC has the same value as XL, the capacitor exhibits its lowest impedance value. Here only the ohmic ESR determines the impedance. With frequencies above the resonance, impedance increases again due to the ESL.
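To illustrate how the formulas above combine, the following Python sketch evaluates the impedance magnitude of the series-equivalent circuit over frequency; the component values (100 nF, 20 mΩ ESR, 1 nH ESL) are assumed for the example and are not taken from any datasheet.
<syntaxhighlight lang="python">
import math

def impedance_magnitude(f_hz, c_farad, esr_ohm, esl_henry):
    """|Z| of the series-equivalent circuit (ESR, C and ESL in series)."""
    omega = 2 * math.pi * f_hz
    x_c = -1.0 / (omega * c_farad)   # capacitive reactance, negative by convention
    x_l = omega * esl_henry          # inductive reactance
    return math.sqrt(esr_ohm**2 + (x_c + x_l)**2)  # reactances cancel at resonance

C, ESR, ESL = 100e-9, 0.02, 1e-9
for f in (1e3, 1e6, 15.9e6, 100e6):
    print(f"{f:>10.3g} Hz  |Z| = {impedance_magnitude(f, C, ESR, ESL):.4f} Ω")
</syntaxhighlight>
At the resonance near 15.9 MHz the two reactances cancel and only the assumed ESR of 20 mΩ remains, reproducing the V-shaped impedance curve described above.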
ESR, dissipation factor, and quality factor.
The summarized losses in ceramic capacitors are ohmic AC losses. DC losses are specified as "leakage current" or "insulating resistance" and are negligible for an AC specification. These AC losses are nonlinear and may depend on frequency, temperature, age, and for some special types, on humidity. The losses result from two physical conditions,
The largest share of these losses in larger capacitors is usually the frequency-dependent ohmic dielectric losses. According to the IEC 60384-1 standard, the ohmic losses of capacitors are measured at the same frequency used to measure capacitance. These are:
Results of the summarized resistive losses of a capacitor may be specified either as equivalent series resistance (ESR), as dissipation factor (DF, tan δ), or as quality factor (Q), depending on the application requirements.
Class 2 capacitors are mostly specified with the dissipation factor, tan δ. The dissipation factor is determined as the ratio of the ESR to the reactance (formula_11 – formula_12), and can be shown as the angle δ between the imaginary axis and the impedance vector in the vector diagram, see the section "Impedance".
If the inductance formula_13 is small, the dissipation factor can be approximated as:
formula_14
Class 1 capacitors with very low losses are specified with a dissipation factor and often with a quality factor (Q). The quality factor is defined as the reciprocal of the dissipation factor.
formula_15
The Q factor represents the effect of electrical resistance, and characterizes a resonator's bandwidth formula_16 relative to its center or resonant frequency formula_17. A high Q value is a mark of the quality of the resonance for resonant circuits.
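A short numerical illustration of these loss figures follows; the 100 pF capacitance and 0.1 Ω ESR at 1 MHz are assumed example values, not figures from any standard.
<syntaxhighlight lang="python">
import math

def dissipation_factor(f_hz, c_farad, esr_ohm):
    """tan(delta) ≈ ESR · ω · C, valid when the ESL contribution is negligible."""
    return esr_ohm * 2 * math.pi * f_hz * c_farad

tan_d = dissipation_factor(1e6, 100e-12, 0.1)
print(f"tan δ ≈ {tan_d:.1e},  Q = 1/tan δ ≈ {1 / tan_d:.0f}")
</syntaxhighlight>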
In accordance with IEC 60384-8/-21/-9/-22 ceramic capacitors may not exceed the following dissipation factors:
The ohmic losses of ceramic capacitors are frequency, temperature and voltage dependent. Additionally, class 2 capacitor measurements change because of aging. Different ceramic materials have differing losses over the temperature range and the operating frequency. The changes in class 1 capacitors are in the single-digit range while class 2 capacitors have much higher changes.
HF use, inductance (ESL) and self-resonant frequency.
Electrical resonance occurs in a ceramic capacitor at a particular resonance frequency where the imaginary parts of the capacitor impedance and admittances cancel each other.
The frequency at which XC has the same magnitude as XL is called the self-resonant frequency and can be calculated with:
formula_18
where ω = 2π"f", in which "f" is the resonance frequency in Hertz, "L" is the inductance in henries, and "C" is the capacitance in farads.
The smaller the capacitance C and the inductance L, the higher the resonance frequency.
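A minimal sketch of this relation, using an assumed mounted inductance of 1 nH for every example value, shows how the self-resonant frequency rises as the capacitance shrinks.
<syntaxhighlight lang="python">
import math

def self_resonant_frequency(c_farad, esl_henry):
    """f_res = 1 / (2π·sqrt(L·C)) for the series-equivalent circuit."""
    return 1.0 / (2 * math.pi * math.sqrt(esl_henry * c_farad))

ESL = 1e-9  # assumed 1 nH parasitic inductance
for c in (10e-12, 1e-9, 100e-9, 10e-6):
    print(f"C = {c:.0e} F  ->  f_res ≈ {self_resonant_frequency(c, ESL) / 1e6:,.1f} MHz")
</syntaxhighlight>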
The self-resonant frequency is the lowest frequency at which impedance passes through a minimum. For any AC application the self-resonant frequency is the highest frequency at which a capacitor can be used as a capacitive component. At frequencies above the resonance, the impedance increases again due to ESL: the capacitor becomes an inductor with inductance equal to capacitor's ESL, and resistance equal to ESR at the given frequency.
ESL in industrial capacitors is mainly caused by the leads and internal connections used to connect the plates to the outside world. Larger capacitors tend to have higher ESL than small ones because the distances to the plates are longer and every millimeter increases the inductance.
Because ceramic capacitors are available with very small capacitance values (from the picofarad range upward), these small capacitance values alone make them suitable for higher frequencies up to several hundred MHz (see the formula above).
Due to the absence of leads and the proximity to the electrodes, MLCC chips have significantly lower parasitic inductance than, for example, leaded types, which makes them suitable for higher-frequency applications. A further reduction of parasitic inductance is achieved by contacting the electrodes on the longitudinal side of the chip instead of the lateral side.
Sample self-resonant frequencies for one set of NP0/C0G and one set of X7R ceramic capacitors are:
Note that the X7R capacitors show better frequency response than the C0G ones. This is to be expected, since class 2 capacitors of a given capacitance are much smaller than class 1 capacitors and therefore have lower parasitic inductance.
Aging.
In ferroelectric class 2 ceramic capacitors capacitance decreases over time. This behavior is called "aging". Aging occurs in ferroelectric dielectrics, where domains of polarization in the dielectric contribute to total polarization. Degradation of the polarized domains in the dielectric decreases permittivity over time so that the capacitance of class 2 ceramic capacitors decreases as the component ages.
The aging follows a logarithmic law. This law defines the decrease of capacitance as a percentage for a time decade after the soldering recovery time at a defined temperature, for example, in the period from 1 to 10 hours at 20 °C. As the law is logarithmic, the percentage loss of capacitance between 1 h and 100 h is twice that between 1 h and 10 h, and between 1 h and 1000 h it is three times as large, and so on. Aging is thus fastest near the beginning, and the capacitance value effectively stabilizes over time.
The rate of aging of class 2 capacitors depends mainly on the materials used. A rule of thumb is: the higher the temperature dependence of the ceramic, the higher the aging percentage. The typical aging of X7R ceramic capacitors is about 2.5% per decade. The aging rate of Z5U ceramic capacitors is significantly higher and can be up to 7% per decade.
The aging process of class 2 capacitors may be reversed by heating the component above the Curie point.
Class 1 capacitors do not experience ferroelectric aging like class 2 capacitors do. But environmental influences such as higher temperature, high humidity and mechanical stress can, over a longer period of time, lead to a small irreversible decline in capacitance, sometimes also called aging. The change of capacitance for P 100 and N 470 class 1 capacitors is lower than 1%; for capacitors with N 750 to N 1500 ceramics it is ≤ 2%.
Insulation resistance and self-discharge constant.
The resistance of the dielectric is never infinite, leading to some level of DC "leakage current", which contributes to self-discharge. For ceramic capacitors this resistance, placed in parallel with the capacitor in the series-equivalent circuit of capacitors, is called "insulation resistance Rins". The insulation resistance must not be confused with the outer isolation with respect to the environment.
The rate of self-discharge with decreasing capacitor voltage follows the formula
formula_19
With the stored DC voltage formula_20 and the self-discharge constant
formula_21
That means that after a time of formula_22, the capacitor voltage formula_20 has dropped to 37% of its initial value.
The insulation resistance, given in MΩ (10^6 Ω), as well as the self-discharge constant in seconds, are important parameters for the quality of the dielectric insulation. These values are important, for example, when a capacitor is used as a timing component for relays or for storing a voltage value, as in sample-and-hold circuits or operational amplifiers.
In accordance with the applicable standards, Class 1 ceramic capacitors have an Rins ≥ 10,000 MΩ for capacitors with CR ≤ 10 nF or τs ≥ 100 s for capacitors with CR > 10 nF. Class 2 ceramic capacitors have an Rins ≥ 4,000 MΩ for capacitors with CR ≤ 25 nF or τs ≥ 100 s for capacitors with CR > 25 nF.
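As an illustration of the self-discharge relation, the following sketch uses the class 1 limit values quoted above together with an assumed initial voltage; these are example figures, not specified values.
<syntaxhighlight lang="python">
import math

R_ins = 10_000e6   # 10,000 MΩ, the class 1 limit quoted above, in ohms
C = 10e-9          # 10 nF
tau_s = R_ins * C  # self-discharge time constant: 100 s
U0 = 5.0           # assumed initial stored voltage in volts

for t in (tau_s, 3 * tau_s):
    u = U0 * math.exp(-t / tau_s)
    print(f"t = {t:5.0f} s:  u = {u:.2f} V  ({u / U0:.0%} of U0)")
</syntaxhighlight>
After one time constant the voltage has dropped to about 37% of its initial value, consistent with the statement above.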
Insulation resistance and thus the self-discharge time rate are temperature dependent and decrease with increasing temperature at about 1 MΩ per 60 °C.
Dielectric absorption (soakage).
Dielectric absorption is the name given to the effect by which a capacitor, which has been charged for a long time, discharges only incompletely. Although an ideal capacitor remains at zero volts after discharge, real capacitors will develop a small voltage coming from time-delayed dipole discharging, a phenomenon that is also called dielectric relaxation, "soakage" or "battery action".
In many applications of capacitors dielectric absorption is not a problem but in some applications, such as long-time-constant integrators, sample-and-hold circuits, switched-capacitor analog-to-digital converters and very low-distortion filters, it is important that the capacitor does not recover a residual charge after full discharge, and capacitors with low absorption are specified. The voltage at the terminals generated by dielectric absorption may in some cases possibly cause problems in the function of an electronic circuit or can be a safety risk to personnel. To prevent shocks, most very large capacitors like power capacitors are shipped with shorting wires that are removed before use.
Microphony.
All class 2 ceramic capacitors using ferroelectric ceramics exhibit piezoelectricity, and have a piezoelectric effect called microphonics, microphony or in audio applications squealing. Microphony describes the phenomenon wherein electronic components transform mechanical vibrations into an electrical signal which in many cases is undesired noise. Sensitive electronic preamplifiers generally use class 1 ceramic and film capacitors to avoid this effect.
In the reverse microphonic effect, the varying electric field between the capacitor plates exerts a physical force, moving them like a speaker membrane. High current impulse loads or high ripple currents can thereby generate audible sound from the capacitor, but this also discharges the capacitor and stresses the dielectric.
Soldering.
Ceramic capacitors may experience changes to their electrical parameters due to soldering stress. The heat of the solder bath, especially for SMD styles, can cause changes of contact resistance between terminals and electrodes. For ferroelectric class 2 ceramic capacitors, the soldering temperature is above the Curie point, so the polarized domains in the dielectric revert and the aging process of class 2 ceramic capacitors starts over again.
Hence after soldering a recovery time of approximately 24 hours is necessary. After recovery, some electrical parameters such as capacitance value, ESR and leakage current may be changed irreversibly. The changes are in the lower percentage range, depending on the style of capacitor.
Additional information.
Standardization.
The standardization for all electrical, electronic components and related technologies follows the rules given by the International Electrotechnical Commission (IEC), a non-profit, non-governmental international standards organization.
The definition of the characteristics and the procedure of the test methods for capacitors for use in electronic equipment are set out in the generic specification:
The tests and requirements to be met by ceramic capacitors for use in electronic equipment for approval as standardized types are set out in the following sectional specifications:
Tantalum capacitor replacement.
Multilayer ceramic capacitors are increasingly used to replace tantalum and low-capacitance aluminium electrolytic capacitors in applications such as bypassing or high-frequency switched-mode power supplies, as their cost, reliability and size become competitive. In many applications, their low ESR allows the use of a lower nominal capacitance value.
Features and disadvantages of ceramic capacitors.
For features and disadvantages of ceramic capacitors see main article Capacitor types#Comparison of types
Marking.
Imprinted markings.
If space permits, ceramic capacitors, like most other electronic components, have imprinted markings that indicate the manufacturer, the type, the electrical and thermal characteristics, and the date of manufacture. In the ideal case, if they are large enough, the capacitor will be marked with:
Smaller capacitors use a shorthand notation, to display all the relevant information in the limited space. The most commonly used format is: XYZ J/K/M VOLTS V, where XYZ represents the capacitance (calculated as XY × 10Z pF), the letters J, K or M indicate the tolerance (±5%, ±10% and ±20% respectively) and VOLTS V represents the working voltage.
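As an illustration of this shorthand, the following Python sketch decodes the capacitance and tolerance part of such a code; the marking "474K" used in the example is hypothetical.
<syntaxhighlight lang="python">
TOLERANCE = {"J": "±5%", "K": "±10%", "M": "±20%"}

def decode_marking(code):
    """Decode the 'XYZ' + tolerance-letter shorthand: XY × 10^Z picofarads."""
    digits, tol_letter = code[:3], code[3]
    picofarads = int(digits[:2]) * 10 ** int(digits[2])
    return picofarads, TOLERANCE.get(tol_letter, "unknown")

pf, tol = decode_marking("474K")
print(f"474K -> {pf} pF = {pf / 1e6:g} µF, tolerance {tol}")
</syntaxhighlight>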
Examples.
Capacitance, tolerance and date of manufacture can be identified with a short code according to IEC/EN 60062. Examples of short-marking of the rated capacitance (microfarads):
The date of manufacture is often printed in accordance with international standards.
Year code: "R" = 2003, "S"= 2004, "T" = 2005, "U" = 2006, "V" = 2007, "W" = 2008, "X" = 2009, "A" = 2010, "B" = 2011, "C" = 2012, "D" = 2013 etc.
Month code: "1" to "9" = Jan. to Sept., "O" = October, "N" = November, "D" = December
"X5" is then "2009, May"
For very small capacitors like MLCC chips no marking is possible. Here only the traceability of the manufacturers can ensure the identification of a type.
Colour coding.
Modern ceramic capacitors are not identified with a detailed colour code.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C=\\varepsilon \\cdot { {n \\cdot A} \\over {d} }"
},
{
"math_id": 1,
"text": "Z"
},
{
"math_id": 2,
"text": "Z = \\frac{\\hat u}{\\hat \\imath} = \\frac{U_\\mathrm{AC}}{I_\\mathrm{AC}}."
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "R"
},
{
"math_id": 6,
"text": "Z=\\sqrt{{ESR}^2 + (X_\\mathrm{C} + (-X_\\mathrm{L}))^2}"
},
{
"math_id": 7,
"text": " X_C= -\\frac{1}{\\omega C}"
},
{
"math_id": 8,
"text": " X_L=\\omega L_{\\mathrm{ESL}}"
},
{
"math_id": 9,
"text": "X_C = X_L"
},
{
"math_id": 10,
"text": "{ESR}"
},
{
"math_id": 11,
"text": "X_C"
},
{
"math_id": 12,
"text": "X_L"
},
{
"math_id": 13,
"text": " ESL "
},
{
"math_id": 14,
"text": "\\tan \\delta = ESR \\cdot \\omega C"
},
{
"math_id": 15,
"text": " Q = \\frac{1}{\\tan \\delta} = \\frac{f_0}{B} \\ "
},
{
"math_id": 16,
"text": "B"
},
{
"math_id": 17,
"text": "f_0"
},
{
"math_id": 18,
"text": "\\omega = \\frac{1}{\\sqrt{LC}}"
},
{
"math_id": 19,
"text": "u(t) = U_0 \\cdot \\mathrm{e}^{-t/\\tau_\\mathrm{s}},"
},
{
"math_id": 20,
"text": "U_0"
},
{
"math_id": 21,
"text": "\\tau_\\mathrm{s} = R_\\mathrm{ins} \\cdot C"
},
{
"math_id": 22,
"text": "\\tau_\\mathrm{s}\\,"
}
] | https://en.wikipedia.org/wiki?curid=9221221 |
922129 | Standard drink | Measure of the pure ethanol in an alcoholic beverage
A standard drink or (in the UK) unit of alcohol is a measure of alcohol consumption representing a fixed amount of pure alcohol. The notion is used in relation to recommendations about alcohol consumption and its relative risks to health. It helps to educate alcohol users. A hypothetical alcoholic beverage sized to one standard drink varies in volume depending on the alcohol concentration of the beverage (for example, a standard drink of spirits takes up much less space than a standard drink of beer), but it always contains the same amount of alcohol and therefore produces the same amount of drunkenness. Many government health guidelines specify low to high risk amounts in units of grams of pure alcohol per day, week, or single occasion. These government guidelines often illustrate these amounts as standard drinks of various beverages, with their serving sizes indicated. Although used for the same purpose, the definition of a standard drink varies from country to country.
Labeling beverages with the equivalent number of standard drinks is common in some countries.
Definitions in various countries.
There is no international consensus on how much pure alcohol is contained in a standard drink; values in different countries range from . The example questionnaire form for the World Health Organization's Alcohol Use Disorders Identification Test (AUDIT) uses , and this definition has been adopted by more countries than any other amount. Some countries choose to base the definition on mass of alcohol (in grams) while others base the unit on the volume (in ml or other volume units). For comparison, both measurements are shown here, as well as the number of standard drinks contained in of 5% ABV beer (a typical large size of beer in Europe, slightly larger than a US pint of 473 mL). The terminology for the unit also varies, as shown in the Notes column.
Calculation of pure alcohol mass in a serving.
In the UK, it is sometimes misleadingly stated that there is one unit per half-pint of beer, or small glass of wine, or single measure of spirits. However, such statements do not take into account the various strengths and volumes supplied in practice. Such approximations can lead to people underestimating their alcohol intake. In some countries, the number of units of alcohol in a beverage can instead be read directly on the label.
In countries without labeling, it is possible to calculate the pure alcohol mass in a serving from the concentration, density of alcohol, and volume:
formula_0
For example, a 350 ml glass of beer with an ABV of 5.5% contains 19.25 ml of pure alcohol, which has a density of 0.78945 g/mL (at 20 °C), and therefore a mass of about 15.2 g:
formula_1
or
formula_2
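A minimal Python sketch of this calculation, reproducing the worked example above (the density value is the one quoted for 20 °C):
<syntaxhighlight lang="python">
DENSITY_ETHANOL_G_PER_ML = 0.78945  # at 20 °C, as used above

def pure_alcohol_grams(volume_ml, abv_fraction):
    """Pure alcohol mass = volume × alcohol by volume × density of ethanol."""
    return volume_ml * abv_fraction * DENSITY_ETHANOL_G_PER_ML

print(f"{pure_alcohol_grams(350, 0.055):.2f} g")   # ≈ 15.20 g for 350 ml at 5.5% ABV
</syntaxhighlight>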
The standard UK units of alcohol in a drink can be determined by multiplying the volume of the drink (in millilitres) by its percentage ABV, and dividing by 1000. For example, an imperial pint (568 ml) of beer at 4% alcohol by volume (ABV) contains:
formula_3
The formula divides the volume in millilitres by 1000, which results in exactly one unit per percentage point of ABV per litre of any alcoholic beverage.
The formula can be simplified for everyday use by expressing the serving size in centilitres and the alcohol content literally as a percentage:
formula_4
Thus, a bottle of wine at 12% ABV contains 75 cl × 12% = 9 units. Alternatively, the serving size in litres can be multiplied by the ABV percentage taken as a plain number, the above example giving 0.75 × 12 = 9 units:
formula_5
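The same arithmetic expressed as a small Python function, reproducing the pint and wine-bottle examples above:
<syntaxhighlight lang="python">
def uk_units(volume_ml, abv_percent):
    """UK units = volume in ml × ABV in percent / 1000 (one unit = 10 ml of pure alcohol)."""
    return volume_ml * abv_percent / 1000

print(uk_units(568, 4))    # imperial pint of 4% beer  -> 2.272 ≈ 2.3 units
print(uk_units(750, 12))   # 75 cl bottle of 12% wine -> 9.0 units
</syntaxhighlight>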
In the UK, both pieces of input data are usually mentioned in this form on the bottle, so are easy to retrieve.
When drink size is in fluid ounces (which differ between the UK and the US), the following conversions can be used:
One should bear in mind that a pint in the UK is 20 imperial fluid ounces, whereas a pint in the US is 16 US fluid ounces. However, as 1 imperial fl. oz. ≈ 0.961 US fl. oz., this means 1 imperial pint ≈ 1.201 US pints (i.e. 0.961 × "20/16") instead of 1.25 US pints.
Reference standard drinks.
A standard drink is often different from a normal serving in the country in which it is served. For example, in the United States, a standard drink is defined as of ethanol per serving, which is about 14 grams of alcohol. This corresponds to a can of 5% beer, a glass of 12% ABV (alcohol by volume) wine, or a so-called "shot" of spirit, assuming that beer is 5% ABV, wine is 12% ABV, and spirits are 40% ABV (80 proof). Most wine today is higher than 12% ABV (the average ABV in Napa Valley in 1971 was 12.5%), hence a typical glass will contain more than a standard drink. Similarly, although 40% ABV is standard for spirits, the amount of spirit in a mixed drink varies widely.
Spirits.
Most spirits sold in the United Kingdom have 35%-40% ABV. In England, a single pub measure () of a spirit contains one unit. However, a larger measure is increasingly used (and in particular is standard in Northern Ireland), which contains 1.4 units of alcohol at 40% ABV. Sellers of spirits by the glass must state the capacity of their standard measure in ml.
In Australia, a shot of spirits (40% ABV) is 0.95 standard drinks.
In the US, one shot of 80 proof liquor is , which is one US standard drink.
Recommended maximum.
From 1992 to 1995, the UK government advised that men should drink no more than 21 units per week, and women no more than 14. (The difference between the sexes was due to the typically lower weight and water-to-body-mass ratio of women). "The Times" claimed in October 2007 that these limits had been "plucked out of the air" and had no scientific basis.
This was changed after a government study showed that many people were in effect "saving up" their units and using them at the end of the week, a form of binge drinking. Since 1995 the advice was that regular consumption of 3–4 units a day for men, or 2–3 units a day for women, would not pose significant health risks, but that consistently drinking four or more units a day (men), or three or more units a day (women), is not advisable.
An international study of about 6,000 men and 11,000 women for a total of 75,000 person-years found that people who reported that they drank more than a threshold value of 2 units of alcohol a day had a higher risk of fractures than non-drinkers. For example, those who drank over 3 units a day had nearly twice the risk of a hip fracture.
Relation to blood alcohol content.
As a rough guide, it takes about one hour for the body to metabolise (break down) one UK unit of alcohol, 10 ml (8 grams). However, this will vary with body weight, sex, age, personal metabolic rate, recent food intake, the type and strength of the alcohol, and medications taken. Alcohol may be metabolised more slowly if liver function is impaired. For other countries, it may be easiest to convert to UK units. For example, in the United States one standard drink contains 14 grams ≈ 1.75 units of alcohol, and so a US standard drink takes the body about an hour and three-quarters to process. Blood alcohol content can more accurately be estimated by using Widmark's formula.
Labeling.
Australia introduced standard drink labelling in the 1990s, and New Zealand followed with a labelling requirement starting in 2002. The labels were criticized for being too small to read. A focus group study found that most student drinkers used the labels to choose stronger drinks and identify the cheapest method of getting drunk, rather than to drink safely.
In the UK in March 2011, alcohol companies voluntarily pledged to the UK Department of Health to implement a health labelling scheme to provide more information about responsible drinking on alcohol labels and containers. The pledge stated:
"We will ensure that over 80% of products on shelf (by December 2013) will have labels with clear unit content, NHS guidelines and a warning about drinking when pregnant."
At the end of 2014, 101 companies had committed to the pledge labelling scheme.
There are five elements included within the overall labelling scheme, the first three being mandatory, and the last two optional:
"Further detailed specifications about the labelling scheme are available from the "Alcohol labelling tool kit"."
Drinks companies had pledged to display the three mandatory items on 80% of drinks containers on shelves in the UK off-trade by the end of December 2013. A report published in November 2014, confirmed that UK drinks producers had delivered on that pledge with a 79.3% compliance with the pledge elements as measured by products on shelf. Compared with labels from 2008 on a like-for-like basis, information on Unit alcohol content had increased by 46%; 91% of products displayed alcohol and pregnancy warnings (18% in 2008); and 75% showed the Chief Medical Officers' lower risk daily guidelines (6% in 2008).
Studies published in 2021 in the UK showed that the label could be further enhanced by including pictures of units and a statement of the drinking guidelines; this would help people understand the recommended limits better.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Pure alcohol mass} = \\text{volume} \\times \\text{alcohol by volume} \\times \\text{density of alcohol}"
},
{
"math_id": 1,
"text": "350\\,\\mathrm{mL} \\times 0.055 \\times 0.78945\\,\\mathrm{g}/\\mathrm{mL} \\approx 15.20\\,\\mathrm{g}"
},
{
"math_id": 2,
"text": "0.35\\,\\mathrm{L} \\times 0.055 \\times 789.45\\,\\mathrm{g}/\\mathrm{L} \\approx 15.20\\,\\mathrm{g}"
},
{
"math_id": 3,
"text": "\\begin{align}\n 568\\mbox{ ml} \\times 4\\% \\times \\frac{1\\mbox{ unit}}{10\\mbox{ ml}} && = && 568\\mbox{ ml} \\times \\frac{4}{100} \\times \\frac{1\\mbox{ unit}}{10\\mbox{ ml}} && = && 568\\mbox{ ml} \\times \\frac{4\\mbox{ units}}{1000\\mbox{ ml}} &= 2.3\\mbox{ units}\n\\end{align}"
},
{
"math_id": 4,
"text": "\\begin{align}\n 75\\mbox{ cl} \\times 12\\% \\times \\frac{1\\mbox{ unit}}{1\\mbox{ cl}} && = && 75\\mbox{ cl} \\times \\frac{12}{100} \\times \\frac{1\\mbox{ unit}}{1\\mbox{ cl}} && = && 75 \\times \\frac{12}{100}\\mbox{ units} &= 9\\mbox{ units}\n\\end{align}"
},
{
"math_id": 5,
"text": "\\begin{align}\n 0.75\\mbox{ l} \\times 12\\% \\times \\frac{100\\mbox{ units}}{1\\mbox{ l}} && = && 0.75\\mbox{ l} \\times \\frac{12}{100} \\times \\frac{100\\mbox{ units}}{1\\mbox{ l}} && = && 0.75 \\times 12\\mbox{ units} &= 9\\mbox{ units}\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=922129 |
9223226 | Gullstrand–Painlevé coordinates | Coordinates suitable for following a free-falling observer of a Schwarzschild black hole
Gullstrand–Painlevé coordinates are a particular set of coordinates for the Schwarzschild metric – a solution to the Einstein field equations which describes a black hole. The ingoing coordinates are such that the time coordinate follows the proper time of a free-falling observer who starts from far away at zero velocity, and the spatial slices are flat. There is no coordinate singularity at the Schwarzschild radius (event horizon). The outgoing ones are simply the time reverse of ingoing coordinates (the time is the proper time along outgoing particles that reach infinity with zero velocity).
The solution was proposed independently by Paul Painlevé in 1921 and Allvar Gullstrand in 1922. It was not explicitly shown that these solutions were simply coordinate transformations of the usual Schwarzschild solution until 1933 in Lemaître's paper, although Einstein immediately believed that to be true.
Derivation.
The derivation of GP coordinates requires defining the following coordinate systems and understanding how data measured for events in one coordinate system is interpreted in another coordinate system.
Convention: The units for the variables are all geometrized. Time and mass have units in meters. The speed of light in flat spacetime has a value of 1. The gravitational constant has a value of 1.
The metric is expressed in the +−−− sign convention.
Schwarzschild coordinates.
A Schwarzschild observer is a far observer or a bookkeeper. He does not directly make measurements of events that occur in different places. Instead, he is far away from the black hole and the events. Observers local to the events are enlisted to make measurements and send the results to him. The bookkeeper gathers and combines the reports from various places. The numbers in the reports are translated into data in Schwarzschild coordinates, which provide a systematic means of evaluating and describing the events globally. Thus, the physicist can compare and interpret the data intelligently. He can find meaningful information from these data. The Schwarzschild form of the Schwarzschild metric using Schwarzschild coordinates is given by
formula_0
where
"G=1=c"
"t", "r", "θ", "φ" are the Schwarzschild coordinates,
"M" is the mass of the black hole.
GP coordinates.
Define a new time coordinate by
formula_1
for some arbitrary function formula_2. Substituting in the Schwarzschild metric one gets
formula_3
where formula_4.
If we now choose formula_2 such that the term multiplying formula_5 is unity, we get
formula_6
and the metric becomes
formula_7
The spatial metric (i.e. the restriction of the metric formula_8 on the surface where formula_9 is constant) is simply the flat metric expressed in spherical polar coordinates. This metric is regular along the horizon where "r=2M", since, although the temporal term goes to zero, the off-diagonal term in the metric is still non-zero and ensures that the metric is still invertible (the determinant of the metric is formula_10).
The function formula_2 is given by
formula_11
where formula_12.
The function formula_2 is clearly singular at "r=2M" as it must be to remove that singularity in the Schwarzschild metric.
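As a quick consistency check (not part of the original derivation), the closed form formula_11 can be compared numerically with the required derivative formula_6; the following Python sketch uses a central finite difference in geometrized units with "M" = 1.
<syntaxhighlight lang="python">
import math

def a_closed_form(r, M):
    """a(r) = 2M*(-2y + ln((y+1)/(y-1))) with y = sqrt(r/(2M)), valid for r > 2M."""
    y = math.sqrt(r / (2 * M))
    return 2 * M * (-2 * y + math.log((y + 1) / (y - 1)))

def a_prime_target(r, M):
    """The derivative required above: a'(r) = -sqrt(2M/r) / (1 - 2M/r)."""
    return -math.sqrt(2 * M / r) / (1 - 2 * M / r)

M, h = 1.0, 1e-6
for r in (2.5, 3.0, 10.0):
    numeric = (a_closed_form(r + h, M) - a_closed_form(r - h, M)) / (2 * h)
    print(f"r = {r:>4}: finite-difference da/dr = {numeric:.6f}, target = {a_prime_target(r, M):.6f}")
</syntaxhighlight>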
Other choices for formula_2 lead to other coordinate charts for the Schwarzschild vacuum; a general treatment is given in Francis & Kosowsky.
Motion of raindrop.
Define a raindrop as an object which plunges radially toward a black hole from rest at infinity.
In Schwarzschild coordinates, the velocity of a raindrop is given by
formula_13
In GP coordinates, the velocity is given by
formula_14
Integrate the equation of motion:
formula_16 The result is formula_17
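The definite integral above can be verified symbolically; a minimal SymPy sketch, assuming geometrized units and a positive mass symbol, is:
<syntaxhighlight lang="python">
import sympy as sp

r, M = sp.symbols("r M", positive=True)
# Coordinate (and proper) time for the raindrop to fall from r = 2M to r = 0,
# integrating dt = dr / sqrt(2M/r) as in the equation above.
T_r = sp.integrate(sp.sqrt(r / (2 * M)), (r, 0, 2 * M))
print(sp.simplify(T_r))   # equals 4*M/3
</syntaxhighlight>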
Using this result for the speed of the raindrop we can find the proper time along the trajectory of the raindrop in terms of the time "formula_9". We have
formula_18
That is, along the raindrop's trajectory, the elapsed time formula_9 is exactly the proper time along the trajectory. One could have defined the GP coordinates by this requirement, rather than by demanding that the spatial surfaces be flat.
A closely related set of coordinates is the Lemaître coordinates, in which the "radial" coordinate is chosen to be constant along the paths of the raindrops. Since "r" changes as the raindrops fall, this metric is time dependent while the GP metric is time independent.
The metric obtained if, in the above, we take the function "a(r)" to be the negative of what was chosen above is also called the GP coordinate system. The only change in the metric is that the cross term changes sign. This metric is regular for outgoing raindrops—i.e. particles which leave the black hole travelling outward with just escape velocity so that their speed at infinity is zero. In the usual GP coordinates, such particles cannot be described for "r<2M". They have a zero value for formula_19 at "r=2M". This is an indication that the Schwarzschild black hole has two horizons, a past horizon and a future horizon. The original form of the GP coordinates is regular across the future horizon (into which particles fall when they fall into a black hole), while the alternative negative version is regular across the past horizon (from which particles come out of the black hole if they do so).
The Kruskal–Szekeres coordinates are regular across both horizons at the expense of making the metric strongly dependent on the time coordinate.
Speeds of light.
Assume radial motion. For light, formula_20. Therefore,
formula_21
formula_22
There are two important points to consider:
The coordinate time for an ingoing light ray to fall from the event horizon to the center follows by integrating the reciprocal of its coordinate speed:
formula_29
The result is formula_30
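This integral can likewise be checked with SymPy; the sketch below substitutes "u" = sqrt("r"/2"M") by hand so that only a rational integrand remains (a convenience for the check, not part of the original text).
<syntaxhighlight lang="python">
import sympy as sp

M, u = sp.symbols("M u", positive=True)
# With u = sqrt(r/(2M)): r = 2M*u**2, dr = 4M*u du and sqrt(2M/r) = 1/u,
# so the integral over r from 0 to 2M becomes an integral over u from 0 to 1.
T_r = sp.integrate(4 * M * u**2 / (1 + u), (u, 0, 1))
print(sp.simplify(T_r))            # equals 4*M*log(2) - 2*M
print(float(T_r.subs(M, 1)))       # ≈ 0.7726, i.e. about 0.77*M
</syntaxhighlight>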
A rain observer's view of the universe.
What does the universe look like as seen by a rain observer plunging into the black hole? The view can be described by the following equations:
formula_31
formula_32
formula_33
where
formula_34 are the rain observer's and shell observer's viewing angles with respect to the radially outward direction.
formula_35 is the angle between the distant star and the radially outward direction.
formula_36 is the impact parameter. Each incoming light ray can be backtraced to a corresponding ray at infinity. The impact parameter for the incoming light ray is the distance between the corresponding ray at infinity and a ray parallel to it that plunges directly into the black hole.
Because of spherical symmetry, the trajectory of light always lies in a plane passing through the center of the sphere, so it is possible to simplify the metric by assuming formula_37.
The impact parameter formula_36 can be computed knowing the rain observer's r-coordinate formula_38 and viewing angle formula_39. Then the actual angle formula_35 of the distant star is determined by numerically integrating formula_40 from formula_38 to infinity. A chart of sample results is shown at right.
History.
Although the publication of Gullstrand's paper came after Painlevé's, Gullstrand's paper was dated 25 May 1921, whereas Painlevé's publication was a writeup of his presentation before the Academie des Sciences in Paris on 24 October 1921. In this way, Gullstrand's work appears to have priority.
Both Painlevé and Gullstrand used this solution to argue that Einstein's theory was incomplete in that it gave multiple solutions for the gravitational field of a spherical body, and moreover gave different physics (they argued that the lengths of rods could sometimes be longer and sometimes shorter in the radial than in the tangential directions). The "trick" of the Painlevé proposal was that he no longer stuck to a full quadratic (static) form but instead allowed a cross time-space product, making the metric form no longer static but stationary, and no longer direction-symmetric but preferentially oriented.
In a second, longer paper (November 14, 1921), Painlevé explains how he derived his solution by directly solving Einstein's equations for a generic spherically symmetric form of the metric.
The result, equation (4) of his paper, depended on two arbitrary functions of the r coordinate yielding a double infinity of solutions. We now know that these simply represent a variety of choices of both the time and radial coordinates.
Painlevé wrote to Einstein to introduce his solution and invited Einstein to Paris for a debate. In Einstein's reply letter (December 7),
he apologized for not being in a position to visit soon and explained why he was not pleased with Painlevé's arguments, emphasising that the coordinates themselves have no meaning. Finally, Einstein came to Paris in early April. On the 5th of April 1922, in a debate at the "Collège de France" with Painlevé, Becquerel, Brillouin, Cartan, De Donder, Hadamard, Langevin and Nordmann on "the infinite potentials", Einstein, baffled by the non quadratic cross term in the line element, rejected the Painlevé solution. | [
{
"math_id": 0,
"text": " g = \\left(1-\\frac{2M}{r} \\right) \\, dt^2 -\\frac{dr^2}{ \\left(1-\\frac{2M}{r} \\right)}- r^2 \\, d\\theta^2-r^2\\sin^2\\theta \\, d\\phi^2\\,"
},
{
"math_id": 1,
"text": " t_r= t+a(r)"
},
{
"math_id": 2,
"text": "a(r)"
},
{
"math_id": 3,
"text": " g = \\left(1-\\frac{2M}{r} \\right)\\, dt_r^2 +2 \\left(1-\\frac{2M}{r} \\right)\\,a' dt_r dr - \\left( {\\frac{1}{1-\\frac{2M}{r} }}-\\left(1-\\frac{2M}{r} \\right)\\,{a'}^2\\right) dr^2 -r^2 \\, d\\theta^2-r^2\\sin^2\\theta \\, d\\phi^2\\,"
},
{
"math_id": 4,
"text": " a'(r)= {\\frac {da}{dr}}"
},
{
"math_id": 5,
"text": "dr^2"
},
{
"math_id": 6,
"text": " a'=-{\\frac{1}{1- {\\frac{2M}{r}}}}\\sqrt{\\frac{2M}{r}} "
},
{
"math_id": 7,
"text": " g = \\left(1-\\frac{2M}{r} \\right)\\, dt_r^2- 2\\sqrt{\\frac{2M}{r}} dt_r dr - dr^2-r^2 \\, d\\theta^2-r^2\\sin^2\\theta \\, d\\phi^2\\,"
},
{
"math_id": 8,
"text": "g|_{t = t_r}"
},
{
"math_id": 9,
"text": "t_r"
},
{
"math_id": 10,
"text": "-r^4 \\sin(\\theta)^2"
},
{
"math_id": 11,
"text": " a(r)= -\\int {\\frac{\\sqrt{\\frac {2M}{r}}}{ 1-{\\frac{2M}{r}}}} dr = 2M\\left(-2y+\\ln \\left({\\frac{y+1}{y-1}}\\right)\\right)\n"
},
{
"math_id": 12,
"text": " y=\\sqrt{\\frac{r}{2M}}"
},
{
"math_id": 13,
"text": "\\frac{dr}{dt}=-\\left(1-\\frac{2M}{r} \\right) \\sqrt{\\frac{2M}{r}}.\\,"
},
{
"math_id": 14,
"text": "\\frac{dr}{dt_r}=-\\beta = -\\sqrt{\\frac{2M}{r}}. \\,"
},
{
"math_id": 15,
"text": "r < 2M\\,\\!"
},
{
"math_id": 16,
"text": "\\int_0^{T_r}\\,dt = -\\int_{2M}^0 \\left ( \\sqrt{\\frac{2M}{r}} \\right )^{-1}\\,dr. \\, "
},
{
"math_id": 17,
"text": " T_r=\\frac{4}{3} M. \\, "
},
{
"math_id": 18,
"text": " d\\tau^2 = g|_{\\text{trajectory}} = dt_r^2 \\left[ 1-{\\frac {2M}{r}} + 2\\sqrt{\\frac{2M}{r}} {\\sqrt{\\frac{2M}{r}}} -{\\sqrt{\\frac{2M}{r}}}^2\\right]=dt_r^2 "
},
{
"math_id": 19,
"text": " \\frac{dr}{dt_r}"
},
{
"math_id": 20,
"text": "d\\tau =0."
},
{
"math_id": 21,
"text": "0 = \\left (dr+\\left ( 1+\\sqrt{\\frac{2M}{r}}\\ \\right ) dt_r \\right)\\left (dr-\\left (1-\\sqrt{\\frac{2M}{r}}\\ \\right ) \\, dt_r\\right ),"
},
{
"math_id": 22,
"text": "\\frac{dr}{dt_r}=\\plusmn 1-\\sqrt{\\frac{2M}{r}}."
},
{
"math_id": 23,
"text": " r \\to \\infty, \\tfrac{dr}{dt_r}=\\pm 1."
},
{
"math_id": 24,
"text": "r=2M,"
},
{
"math_id": 25,
"text": "\\tfrac{dr}{dt_r}=0."
},
{
"math_id": 26,
"text": "r<2M,"
},
{
"math_id": 27,
"text": "\\frac{dr_{ff}}{d\\tau}=\\frac{v}{\\sqrt{1-v^2}} \\ge 1."
},
{
"math_id": 28,
"text": "\\frac{\\left ( \\dfrac{dr}{dt_r}\\right )_\\text{raindrop}} {\\left ( \\dfrac{dr}{dt_r} \\right)_\\text{light}}= \\frac{\\sqrt{\\dfrac{2M}{r}}} {1+\\sqrt{\\dfrac{2M}{r}}} < 1."
},
{
"math_id": 29,
"text": "\\int_0^{T_r}\\,dt=-\\int_{2M}^{0} \\left ( \\sqrt{\\frac{2M}{r}}+1 \\right )^{-1}\\,dr."
},
{
"math_id": 30,
"text": " T_r=4M\\ln 2 -2M \\approx 0.77M."
},
{
"math_id": 31,
"text": "\\cos \\boldsymbol{\\Phi}_r =\\frac{dr_r}{dt_r} = \\frac{ \\sqrt{\\dfrac{2M}{r}}+ \\cos \\boldsymbol{\\Phi}_s } {1+ \\sqrt{\\dfrac{2M}{r}}\\ \\cos \\boldsymbol{\\Phi}_s },\\,"
},
{
"math_id": 32,
"text": "\\cos \\boldsymbol{\\Phi}_s = \\frac{dr_s}{dt_s} =\\pm \\sqrt{1-\\left(1-\\frac{2M}{r}\\right)\\frac{\\mathit{I}^2}{r^2}},\\,\\!"
},
{
"math_id": 33,
"text": " \\phi=\\mathit{I}\\int_{r_0}^{\\infty}\\frac{dr}{r^2 \\ \\cos \\boldsymbol{\\Phi}_s},\\,\\!"
},
{
"math_id": 34,
"text": "\\boldsymbol{\\Phi}_r, \\ \\boldsymbol{\\Phi}_s\\,\\!"
},
{
"math_id": 35,
"text": "\\ \\phi\\,\\!"
},
{
"math_id": 36,
"text": " \\mathit{I}\\,\\!"
},
{
"math_id": 37,
"text": "\\theta = \\frac{\\pi}{2}\\,\\!"
},
{
"math_id": 38,
"text": " r_0\\,\\!"
},
{
"math_id": 39,
"text": "\\ \\boldsymbol{\\Phi}_{r0}\\,\\!"
},
{
"math_id": 40,
"text": "dr\\,\\!"
},
{
"math_id": 41,
"text": " r \\rightarrow 0\\,\\!"
},
{
"math_id": 42,
"text": " \\cos \\boldsymbol{\\Phi}_r \\rightarrow \\sqrt{\\frac{r}{2M}}\\,\\!"
}
] | https://en.wikipedia.org/wiki?curid=9223226 |
92236 | Electron transport chain | Energy-producing metabolic pathway
An electron transport chain (ETC) is a series of protein complexes and other molecules which transfer electrons from electron donors to electron acceptors via redox reactions (both reduction and oxidation occurring simultaneously) and couples this electron transfer with the transfer of protons (H+ ions) across a membrane. Many of the enzymes in the electron transport chain are embedded within the membrane.
The flow of electrons through the electron transport chain is an exergonic process. The energy from the redox reactions creates an electrochemical proton gradient that drives the synthesis of adenosine triphosphate (ATP). In aerobic respiration, the flow of electrons terminates with molecular oxygen as the final electron acceptor. In anaerobic respiration, other electron acceptors are used, such as sulfate.
In an electron transport chain, the redox reactions are driven by the difference in the Gibbs free energy of reactants and products. The free energy released when a higher-energy electron donor and acceptor convert to lower-energy products, while electrons are transferred from a lower to a higher redox potential, is used by the complexes in the electron transport chain to create an electrochemical gradient of ions. It is this electrochemical gradient that drives the synthesis of ATP via coupling with oxidative phosphorylation with ATP synthase.
In eukaryotic organisms, the electron transport chain, and site of oxidative phosphorylation, is found on the inner mitochondrial membrane. The energy released by reactions of oxygen and reduced compounds such as cytochrome "c" and (indirectly) NADH and FADH2 is used by the electron transport chain to pump protons into the intermembrane space, generating the electrochemical gradient over the inner mitochondrial membrane. In photosynthetic eukaryotes, the electron transport chain is found on the thylakoid membrane. Here, light energy drives electron transport through a proton pump and the resulting proton gradient causes subsequent synthesis of ATP. In bacteria, the electron transport chain can vary between species but it always constitutes a set of redox reactions that are coupled to the synthesis of ATP through the generation of an electrochemical gradient and oxidative phosphorylation through ATP synthase.
Mitochondrial electron transport chains.
Most eukaryotic cells have mitochondria, which produce ATP from reactions of oxygen with products of the citric acid cycle, fatty acid metabolism, and amino acid metabolism. At the inner mitochondrial membrane, electrons from NADH and FADH2 pass through the electron transport chain to oxygen, which provides the energy driving the process as it is reduced to water. The electron transport chain comprises an enzymatic series of electron donors and acceptors. Each electron donor will pass electrons to an acceptor of higher redox potential, which in turn donates these electrons to another acceptor, a process that continues down the series until electrons are passed to oxygen, the terminal electron acceptor in the chain. Each reaction releases energy because a higher-energy donor and acceptor convert to lower-energy products. Via the transferred electrons, this energy is used to generate a proton gradient across the mitochondrial membrane by "pumping" protons into the intermembrane space, producing a state of higher free energy that has the potential to do work. This entire process is called oxidative phosphorylation since ADP is phosphorylated to ATP by using the electrochemical gradient that the redox reactions of the electron transport chain have established driven by energy-releasing reactions of oxygen.
Mitochondrial redox carriers.
Energy associated with the transfer of electrons down the electron transport chain is used to pump protons from the mitochondrial matrix into the intermembrane space, creating an electrochemical proton gradient (ΔpH) across the inner mitochondrial membrane. This proton gradient is largely but not exclusively responsible for the mitochondrial membrane potential (ΔΨM). It allows ATP synthase to use the flow of H+ through the enzyme back into the matrix to generate ATP from adenosine diphosphate (ADP) and inorganic phosphate. Complex I (NADH coenzyme Q reductase; labeled I) accepts electrons from the Krebs cycle electron carrier nicotinamide adenine dinucleotide (NADH), and passes them to coenzyme Q (ubiquinone; labeled Q), which also receives electrons from Complex II (succinate dehydrogenase; labeled II). Q passes electrons to Complex III (cytochrome bc1 complex; labeled III), which passes them to cytochrome "c" (cyt "c"). Cyt "c" passes electrons to Complex IV (cytochrome "c" oxidase; labeled IV).
Four membrane-bound complexes have been identified in mitochondria. Each is an extremely complex transmembrane structure that is embedded in the inner membrane. Three of them are proton pumps. The structures are electrically connected by lipid-soluble electron carriers and water-soluble electron carriers. The overall electron transport chain can be summarized as follows:
NADH, H+ → Complex I → Q → Complex III → cytochrome "c" → Complex IV → H2O
Succinate → Complex II → Q
Complex I.
In Complex I (NADH ubiquinone oxidoreductase, Type I NADH dehydrogenase, or mitochondrial complex I; EC 1.6.5.3), two electrons are removed from NADH and transferred to a lipid-soluble carrier, ubiquinone (Q). The reduced product, ubiquinol (QH2), freely diffuses within the membrane, and Complex I translocates four protons (H+) across the membrane, thus producing a proton gradient. Complex I is one of the main sites at which premature electron leakage to oxygen occurs, thus being one of the main sites of production of superoxide.
The pathway of electrons is as follows:
NADH is oxidized to NAD+ by reducing flavin mononucleotide to FMNH2 in one two-electron step. FMNH2 is then oxidized in two one-electron steps, through a semiquinone intermediate. Each electron thus transfers from the FMNH2 to an Fe–S cluster, and from the Fe–S cluster to ubiquinone (Q). Transfer of the first electron results in the free-radical (semiquinone) form of Q, and transfer of the second electron reduces the semiquinone form to the ubiquinol form, QH2. During this process, four protons are translocated from the mitochondrial matrix to the intermembrane space. As the electrons move through the complex, an electron current is produced along the 180 Angstrom width of the complex within the membrane. This current powers the active transport of four protons to the intermembrane space per two electrons from NADH.
Complex II.
In Complex II (succinate dehydrogenase or succinate-CoQ reductase; EC 1.3.5.1) additional electrons are delivered into the quinone pool (Q) originating from succinate and transferred (via flavin adenine dinucleotide (FAD)) to Q. Complex II consists of four protein subunits: succinate dehydrogenase (SDHA); succinate dehydrogenase [ubiquinone] iron–sulfur subunit mitochondrial (SDHB); succinate dehydrogenase complex subunit C (SDHC); and succinate dehydrogenase complex subunit D (SDHD). Other electron donors (e.g., fatty acids and glycerol 3-phosphate) also direct electrons into Q (via FAD). Complex II is a parallel electron transport pathway to Complex I, but unlike Complex I, no protons are transported to the intermembrane space in this pathway. Therefore, the pathway through Complex II contributes less energy to the overall electron transport chain process.
Complex III.
In Complex III (cytochrome "bc1" complex or CoQH2-cytochrome "c" reductase; EC 1.10.2.2), the Q-cycle contributes to the proton gradient by an asymmetric absorption and release of protons. Two electrons are removed from QH2 at the Qo site and sequentially transferred to two molecules of cytochrome "c", a water-soluble electron carrier located within the intermembrane space. The two other electrons sequentially pass across the protein to the Qi site, where the quinone part of ubiquinone is reduced to quinol. A proton gradient is formed because each quinol (carrying 2 H+ and 2 e−) oxidized at the Qo site releases its protons on the intermembrane side, while the quinone re-formed at the Qi site takes up protons (2 H+ per 2 e−) from the matrix side. (In total, four protons are translocated: two protons reduce quinone to quinol and two protons are released from two ubiquinol molecules.)
<chem> QH2 + 2</chem>formula_0<chem>(Fe^{III}) + 2 H</chem>formula_1<chem> -> Q + 2</chem>formula_0<chem>(Fe^{II}) + 4 H</chem>formula_2
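As a sanity check on the bookkeeping in this net reaction, the protons and electrons can be tallied per QH2 oxidized. The sketch below is only that ledger (the dictionary labels are ad hoc); it does not model the enzyme itself.
<syntaxhighlight lang="python">
# Ledger for the Complex III net reaction, per QH2 oxidized:
#   QH2 + 2 cyt c(Fe3+) + 2 H+(matrix) -> Q + 2 cyt c(Fe2+) + 4 H+(intermembrane space)
consumed = {"H+ carried by QH2": 2, "H+ taken from matrix": 2, "e- carried by QH2": 2}
produced = {"H+ released to intermembrane space": 4, "e- delivered to cytochrome c": 2}

# Protons and electrons must balance.
assert consumed["H+ carried by QH2"] + consumed["H+ taken from matrix"] \
    == produced["H+ released to intermembrane space"]
assert consumed["e- carried by QH2"] == produced["e- delivered to cytochrome c"]

print("H+ appearing outside per 2 e- passed to cytochrome c:",
      produced["H+ released to intermembrane space"])
</syntaxhighlight>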
When electron transfer is reduced (by a high membrane potential or respiratory inhibitors such as antimycin A), Complex III may leak electrons to molecular oxygen, resulting in superoxide formation.
This complex is inhibited by dimercaprol (British Anti-Lewisite, BAL), naphthoquinone and antimycin.
Complex IV.
In Complex IV (cytochrome "c" oxidase; EC 1.9.3.1), sometimes called cytochrome "aa3", four electrons are removed from four molecules of cytochrome "c" and transferred to molecular oxygen (O2) and four protons, producing two molecules of water. The complex contains coordinated copper ions and several heme groups. At the same time, eight protons are removed from the mitochondrial matrix (although only four are translocated across the membrane), contributing to the proton gradient. The exact details of proton pumping in Complex IV are still under study. Cyanide is an inhibitor of Complex IV.
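Putting the pumping steps described above together gives a rough proton ledger per NADH oxidized (two electrons traversing the chain). The sketch below simply sums the figures given above (4, 4, and 2 per two electrons); the exact stoichiometries, particularly for Complex IV, remain a subject of research.
<syntaxhighlight lang="python">
# Rough ledger of protons contributing to the gradient per NADH (2 e-).
# Values follow the stoichiometries quoted in the text; they are not settled constants.
protons_per_2e = {
    "Complex I": 4,    # pumped while 2 e- move from NADH to Q
    "Complex III": 4,  # net release into the intermembrane space via the Q-cycle
    "Complex IV": 2,   # pumped; two further matrix protons are consumed making water
}

print("per NADH:", sum(protons_per_2e.values()))                      # 10
print("per succinate/FADH2 (Complex I bypassed):",
      protons_per_2e["Complex III"] + protons_per_2e["Complex IV"])   # 6
</syntaxhighlight>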
Coupling with oxidative phosphorylation.
According to the chemiosmotic coupling hypothesis, proposed by Nobel Prize in Chemistry winner Peter D. Mitchell, the electron transport chain and oxidative phosphorylation are coupled by a proton gradient across the inner mitochondrial membrane. The efflux of protons from the mitochondrial matrix creates an electrochemical gradient (proton gradient). This gradient is used by the FOF1 ATP synthase complex to make ATP via oxidative phosphorylation. ATP synthase is sometimes described as "Complex V" of the electron transport chain. The FO component of ATP synthase acts as an ion channel that provides for a proton flux back into the mitochondrial matrix. It is composed of a, b and c subunits. Protons in the intermembrane space of mitochondria first enter the ATP synthase complex through an "a" subunit channel. Then protons move to the c subunits. The number of c subunits determines how many protons are required to make FO turn one full revolution. For example, in humans, there are 8 c subunits, so 8 protons are required. After the "c" subunits, protons finally enter the matrix through an "a" subunit channel that opens into the mitochondrial matrix. This reflux releases the free energy that was stored, using the oxidizing power of O2, when the electron carriers were returned to their oxidized forms (NAD+ and Q). The free energy is used to drive ATP synthesis, catalyzed by the F1 component of the complex.<br>Coupling with oxidative phosphorylation is a key step for ATP production. However, in specific cases, uncoupling the two processes may be biologically useful. The uncoupling protein thermogenin (present in the inner mitochondrial membrane of brown adipose tissue) provides for an alternative flow of protons back to the mitochondrial matrix. Thyroxine is also a natural uncoupler. This alternative flow results in thermogenesis rather than ATP production.
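Combining the c-ring size with the proton ledger above gives a back-of-the-envelope ATP yield. The sketch assumes three ATP per full FO revolution and roughly one extra proton per ATP to pay for phosphate and ADP transport into the matrix; both are common textbook-style assumptions rather than figures stated in this article, and the helper name is ad hoc.
<syntaxhighlight lang="python">
# Back-of-the-envelope ATP yield per NADH from the c-ring stoichiometry.
# Assumptions: one full FO revolution consumes c_subunits protons and yields
# 3 ATP at F1; ~1 additional proton per ATP pays for Pi/ADP transport.
def atp_per_nadh(c_subunits, protons_per_nadh=10, transport_cost=1.0):
    protons_per_atp = c_subunits / 3.0 + transport_cost
    return protons_per_nadh / protons_per_atp


print(round(atp_per_nadh(c_subunits=8), 2))   # human c8 ring            -> about 2.7 ATP per NADH
print(round(atp_per_nadh(c_subunits=10), 2))  # c10 ring (other species) -> about 2.3 ATP per NADH
</syntaxhighlight>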
Reverse electron flow.
Reverse electron flow is the transfer of electrons through the electron transport chain via reverse redox reactions. This process, which usually requires a significant input of energy, can reduce the oxidized forms of electron donors; for example, NAD+ can be reduced to NADH by Complex I. Several factors have been shown to induce reverse electron flow, although more work is needed to confirm them. One example is blockage of ATP synthase, which results in a build-up of protons and therefore a higher proton-motive force, inducing reverse electron flow.
Prokaryotic electron transport chains.
In eukaryotes, NADH is the most important electron donor. The associated electron transport chain is NADH → "Complex I" → Q → "Complex III" → cytochrome "c" → "Complex IV" → O2 where "Complexes I, III" and" IV" are proton pumps, while Q and cytochrome "c" are mobile electron carriers. The electron acceptor for this process is molecular oxygen.
In prokaryotes (bacteria and archaea) the situation is more complicated, because there are several different electron donors and several different electron acceptors. The generalized electron transport chain in bacteria is:
Donor                     Donor                          Donor
  ↓                         ↓                              ↓
dehydrogenase   →   quinone   →   "bc1"   →   cytochrome
                        ↓                              ↓
              oxidase (reductase)            oxidase (reductase)
                        ↓                              ↓
                    Acceptor                       Acceptor
Electrons can enter the chain at three levels: at the level of a dehydrogenase, at the level of the quinone pool, or at the level of a mobile cytochrome electron carrier. These levels correspond to successively more positive redox potentials, or to successively decreased potential differences relative to the terminal electron acceptor. In other words, they correspond to successively smaller Gibbs free energy changes for the overall redox reaction.
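The link between redox span and free energy is ΔG°' = -nFΔE°', so an entry point closer to the terminal acceptor yields less energy per electron pair. A quick illustration follows; the standard midpoint potentials used are common textbook values assumed for the example, not figures quoted in this article.
<syntaxhighlight lang="python">
# Free energy released when n electrons fall through a redox-potential span:
#   deltaG = -n * F * (E_acceptor - E_donor)
# Assumed standard midpoint potentials: NAD+/NADH about -0.32 V,
# ubiquinone/ubiquinol about +0.045 V, 1/2 O2 / H2O about +0.82 V.
F = 96485.0  # C per mol of electrons


def delta_g_kJ(n_electrons, e_acceptor_V, e_donor_V):
    return -n_electrons * F * (e_acceptor_V - e_donor_V) / 1000.0


print(round(delta_g_kJ(2, 0.82, -0.32), 0))   # entry at the dehydrogenase level: about -220 kJ/mol
print(round(delta_g_kJ(2, 0.82, 0.045), 0))   # entry at the quinone level:       about -150 kJ/mol
</syntaxhighlight>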
Individual bacteria use multiple electron transport chains, often simultaneously. Bacteria can use a number of different electron donors, a number of different dehydrogenases, a number of different oxidases and reductases, and a number of different electron acceptors. For example, "E. coli" (when growing aerobically using glucose and oxygen as an energy source) uses two different NADH dehydrogenases and two different quinol oxidases, for a total of four different electron transport chains operating simultaneously.
A common feature of all electron transport chains is the presence of a proton pump to create an electrochemical gradient over a membrane. Bacterial electron transport chains may contain as many as three proton pumps, like mitochondria, or they may contain only one or two.
Electron donors.
In the current biosphere, the most common electron donors are organic molecules. Organisms that use organic molecules as an electron source are called "organotrophs". Chemoorganotrophs (animals, fungi, protists) and "photolithotrophs" (plants and algae) constitute the vast majority of all familiar life forms.
Some prokaryotes can use inorganic matter as an electron source. Such an organism is called a "(chemo)lithotroph" ("rock-eater"). Inorganic electron donors include hydrogen, carbon monoxide, ammonia, nitrite, sulfur, sulfide, manganese oxide, and ferrous iron. Lithotrophs have been found growing in rock formations thousands of meters below the surface of Earth. Because of their volume of distribution, lithotrophs may actually outnumber organotrophs and phototrophs in our biosphere.
The use of inorganic electron donors such as hydrogen as an energy source is of particular interest in the study of evolution. This type of metabolism must logically have preceded the use of organic molecules and oxygen as an energy source.
Dehydrogenases: equivalents to complexes I and II.
Bacteria can use several different electron donors. When organic matter is the electron source, the donor may be NADH or succinate, in which case electrons enter the electron transport chain via NADH dehydrogenase (similar to "Complex I" in mitochondria) or succinate dehydrogenase (similar to "Complex II"). Other dehydrogenases may be used to process different energy sources: formate dehydrogenase, lactate dehydrogenase, glyceraldehyde-3-phosphate dehydrogenase, H2 dehydrogenase (hydrogenase), and others. Some dehydrogenases are also proton pumps, while others funnel electrons into the quinone pool. Most dehydrogenases show induced expression in the bacterial cell in response to metabolic needs triggered by the environment in which the cells grow. In the case of lactate dehydrogenase in "E. coli", the enzyme is used aerobically and in combination with other dehydrogenases. It is inducible and is expressed when the concentration of DL-lactate in the cell is high.
Quinone carriers.
Quinones are mobile, lipid-soluble carriers that shuttle electrons (and protons) between large, relatively immobile macromolecular complexes embedded in the membrane. Bacteria use ubiquinone (Coenzyme Q, the same quinone that mitochondria use) and related quinones such as menaquinone (Vitamin K2). Archaea in the genus "Sulfolobus" use caldariellaquinone. The use of different quinones is due to slight changes in redox potentials caused by changes in structure. The change in redox potentials of these quinones may be suited to changes in the electron acceptors or variations of redox potentials in bacterial complexes.
Proton pumps.
A "proton pump" is any process that creates a proton gradient across a membrane. Protons can be physically moved across a membrane, as seen in mitochondrial "Complexes I" and "IV". The same effect can be produced by moving electrons in the opposite direction. The result is the disappearance of a proton from the cytoplasm and the appearance of a proton in the periplasm. Mitochondrial "Complex III" is this second type of proton pump, which is mediated by a quinone (the Q cycle).
Some dehydrogenases are proton pumps, while others are not. Most oxidases and reductases are proton pumps, but some are not. Cytochrome "bc1" is a proton pump found in many, but not all, bacteria (not in "E. coli"). As the name implies, bacterial "bc1" is similar to mitochondrial "bc1" ("Complex III").
Cytochrome electron carriers.
Cytochromes are proteins that contain iron. They are found in two very different environments.
Some cytochromes are water-soluble carriers that shuttle electrons to and from large, immobile macromolecular structures embedded in the membrane. The mobile cytochrome electron carrier in mitochondria is cytochrome "c". Bacteria use a number of different mobile cytochrome electron carriers.
Other cytochromes are found within macromolecules such as "Complex III" and "Complex IV". They also function as electron carriers, but in a very different, intramolecular, solid-state environment.
Electrons may enter an electron transport chain at the level of a mobile cytochrome or quinone carrier. For example, electrons from inorganic electron donors (such as nitrite and ferrous iron) enter the electron transport chain at the cytochrome level. When electrons enter at a redox level greater than NADH, the electron transport chain must operate in reverse to produce this necessary, higher-energy molecule.
Electron acceptors and terminal oxidase/reductase.
As there are a number of different electron donors (organic matter in organotrophs, inorganic matter in lithotrophs), there are a number of different electron acceptors, both organic and inorganic. As with other steps of the ETC, an enzyme is required to help with the process.
If oxygen is available, it is most often used as the terminal electron acceptor in aerobic bacteria and facultative anaerobes. An oxidase reduces the O2 to water while oxidizing something else. In mitochondria, the terminal membrane complex ("Complex IV") is cytochrome oxidase, which oxidizes cytochrome "c". Aerobic bacteria use a number of different terminal oxidases. For example, "E. coli" (a facultative anaerobe) does not have a cytochrome oxidase or a "bc1" complex. Under aerobic conditions, it uses two different terminal quinol oxidases (both proton pumps) to reduce oxygen to water.
Bacterial terminal oxidases can be split into classes according to which molecules act as terminal electron acceptors. Class I oxidases are cytochrome oxidases and use oxygen as the terminal electron acceptor. Class II oxidases are quinol oxidases and can use a variety of terminal electron acceptors. Both of these classes can be subdivided into categories based on which redox-active components they contain. For example, heme aa3 Class 1 terminal oxidases are much more efficient than Class 2 terminal oxidases.
Mostly in anaerobic environments different electron acceptors are used, including nitrate, nitrite, ferric iron, sulfate, carbon dioxide, and small organic molecules such as fumarate. When bacteria grow in anaerobic environments, the terminal electron acceptor is reduced by an enzyme called a reductase. "E. coli" can use fumarate reductase, nitrate reductase, nitrite reductase, DMSO reductase, or trimethylamine-N-oxide reductase, depending on the availability of these acceptors in the environment.
Most terminal oxidases and reductases are "inducible". They are synthesized by the organism as needed, in response to specific environmental conditions.
Photosynthetic.
In oxidative phosphorylation, electrons are transferred from an electron donor such as NADH to an acceptor such as O2 through an electron transport chain, releasing energy. In photophosphorylation, the energy of sunlight is used to create a high-energy electron donor which can subsequently reduce oxidized components and couple to ATP synthesis via proton translocation by the electron transport chain.
Photosynthetic electron transport chains, like the mitochondrial chain, can be considered as a special case of the bacterial systems. They use mobile, lipid-soluble quinone carriers (phylloquinone and plastoquinone) and mobile, water-soluble carriers (cytochromes). They also contain a proton pump. The proton pump in "all" photosynthetic chains resembles mitochondrial "Complex III". The commonly-held theory of symbiogenesis proposes that both organelles descended from bacteria.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\text{ cytochrome }c"
},
{
"math_id": 1,
"text": "^+_\\text{in}"
},
{
"math_id": 2,
"text": "^+_\\text{out}"
}
] | https://en.wikipedia.org/wiki?curid=92236 |
922382 | Smith–Volterra–Cantor set |
In mathematics, the Smith–Volterra–Cantor set (SVC), ε-Cantor set, or fat Cantor set is an example of a set of points on the real line that is nowhere dense (in particular it contains no intervals), yet has positive measure. The Smith–Volterra–Cantor set is named after the mathematicians Henry Smith, Vito Volterra and Georg Cantor. In an 1875 paper, Smith discussed a nowhere-dense set of positive measure on the real line, and Volterra introduced a similar example in 1881. The Cantor set as we know it today followed in 1883. The Smith–Volterra–Cantor set is topologically equivalent to the middle-thirds Cantor set.
Construction.
Similar to the construction of the Cantor set, the Smith–Volterra–Cantor set is constructed by removing certain intervals from the unit interval formula_0
The process begins by removing the middle 1/4 from the interval formula_1 (the same as removing 1/8 on either side of the middle point at 1/2) so the remaining set is
formula_2
The following steps consist of removing subintervals of width formula_3 from the middle of each of the formula_4 remaining intervals. So for the second step the intervals formula_5 and formula_6 are removed, leaving
formula_7
Continuing indefinitely with this removal, the Smith–Volterra–Cantor set is then the set of points that are never removed. The image below shows the initial set and five iterations of this process.
Each subsequent iterate in the Smith–Volterra–Cantor set's construction removes proportionally less from the remaining intervals. This stands in contrast to the Cantor set, where the proportion removed from each interval remains constant. Thus, the Smith–Volterra–Cantor set has positive measure while the Cantor set has zero measure.
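A short computational sketch of the construction makes the measure comparison concrete. Exact rational arithmetic keeps the interval endpoints exact; the helper name and the iteration counts below are arbitrary choices for illustration.
<syntaxhighlight lang="python">
# Smith-Volterra-Cantor construction: at step n (n = 1, 2, ...) remove an open
# middle interval of width 1/4**n from each remaining closed interval.
from fractions import Fraction


def svc_intervals(steps):
    intervals = [(Fraction(0), Fraction(1))]
    for n in range(1, steps + 1):
        half_width = Fraction(1, 2 * 4**n)
        next_intervals = []
        for a, b in intervals:
            mid = (a + b) / 2
            next_intervals.append((a, mid - half_width))
            next_intervals.append((mid + half_width, b))
        intervals = next_intervals
    return intervals


print([f"[{a}, {b}]" for a, b in svc_intervals(1)])  # ['[0, 3/8]', '[5/8, 1]']
print([f"[{a}, {b}]" for a, b in svc_intervals(2)])  # ['[0, 5/32]', '[7/32, 3/8]', '[5/8, 25/32]', '[27/32, 1]']

# Total remaining length tends to 1/2, unlike the middle-thirds Cantor set.
for steps in (4, 8, 12):
    print(steps, float(sum(b - a for a, b in svc_intervals(steps))))
</syntaxhighlight>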
Properties.
By construction, the Smith–Volterra–Cantor set contains no intervals and therefore has empty interior. It is also the intersection of a sequence of closed sets, which means that it is closed.
During the process, intervals of total length
formula_8
are removed from formula_9 showing that the set of the remaining points has a positive measure of 1/2. This makes the Smith–Volterra–Cantor set an example of a closed set whose boundary has positive Lebesgue measure.
Other fat Cantor sets.
In general, one can remove formula_10 from each remaining subinterval at the formula_11th step of the algorithm, and end up with a Cantor-like set. The resulting set will have positive measure if and only if the total length removed (summed over every subinterval at every step) is less than the measure of the initial interval. For instance, suppose the middle intervals of length formula_12 are removed from formula_1 for each formula_11th iteration, for some formula_13 Then, the resulting set has Lebesgue measure
formula_14
which goes from formula_15 to formula_16 as formula_17 goes from formula_18 to formula_19 (formula_20 is impossible in this construction.)
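A quick numerical check of this formula against the construction can be sketched as follows; the removal width at step n is a^n for each of the 2^(n-1) remaining intervals, and the truncation depth of 60 steps is an arbitrary choice.
<syntaxhighlight lang="python">
# Compare the truncated measure of the generalized construction with the
# closed-form limit (1 - 3a) / (1 - 2a).
def truncated_measure(a, steps=60):
    removed = sum(2**(n - 1) * a**n for n in range(1, steps + 1))
    return 1.0 - removed


def closed_form(a):
    return (1 - 3 * a) / (1 - 2 * a)


for a in (0.25, 0.2, 0.1, 1 / 3):
    print(a, round(truncated_measure(a), 6), round(closed_form(a), 6))
# a = 1/4 recovers the Smith-Volterra-Cantor measure 1/2;
# a = 1/3 recovers the measure-zero middle-thirds Cantor set.
</syntaxhighlight>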
Cartesian products of Smith–Volterra–Cantor sets can be used to find totally disconnected sets in higher dimensions with nonzero measure. By applying the Denjoy–Riesz theorem to a two-dimensional set of this type, it is possible to find an Osgood curve, a Jordan curve such that the points on the curve have positive area.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[0, 1]."
},
{
"math_id": 1,
"text": "[0, 1]"
},
{
"math_id": 2,
"text": "\\left[0, \\tfrac{3}{8}\\right] \\cup \\left[\\tfrac{5}{8}, 1\\right]."
},
{
"math_id": 3,
"text": "1/4^n"
},
{
"math_id": 4,
"text": "2^{n-1}"
},
{
"math_id": 5,
"text": "(5/32, 7/32)"
},
{
"math_id": 6,
"text": "(25/32, 27/32)"
},
{
"math_id": 7,
"text": "\\left[0, \\tfrac{5}{32}\\right] \\cup \\left[\\tfrac{7}{32}, \\tfrac{3}{8}\\right] \\cup \\left[\\tfrac{5}{8}, \\tfrac{25}{32}\\right] \\cup \\left[\\tfrac{27}{32}, 1\\right]."
},
{
"math_id": 8,
"text": "\\sum_{n=0}^\\infty \\frac{2^n}{2^{2n + 2}} = \\frac{1}{4} + \\frac{1}{8} + \\frac{1}{16} + \\cdots = \\frac{1}{2}\\,"
},
{
"math_id": 9,
"text": "[0, 1],"
},
{
"math_id": 10,
"text": "r_n"
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "a^n"
},
{
"math_id": 13,
"text": "0 \\leq a \\leq \\dfrac{1}{3}."
},
{
"math_id": 14,
"text": "\\begin{align}\n1 - \\sum _{n=0}^\\infty 2^n a ^ {n+1} &= 1 - a \\sum _{n=0}^\\infty (2a)^n \\\\[5pt]\n &= 1 - a \\frac{1}{1 - 2a} \\\\[5pt]\n &= \\frac{1 - 3a}{1 - 2a}\n\\end{align}\n"
},
{
"math_id": 15,
"text": "0"
},
{
"math_id": 16,
"text": "1"
},
{
"math_id": 17,
"text": "a"
},
{
"math_id": 18,
"text": "1/3"
},
{
"math_id": 19,
"text": "0."
},
{
"math_id": 20,
"text": "a > 1/3"
}
] | https://en.wikipedia.org/wiki?curid=922382 |