id | title | text | formulas | url
---|---|---|---|---|
1196215 | Elongated triangular cupola | Polyhedron with triangular cupola and hexagonal prism
In geometry, the elongated triangular cupola is a polyhedron constructed from a hexagonal prism by attaching a triangular cupola. It is an example of a Johnson solid.
Construction.
The elongated triangular cupola is constructed from a hexagonal prism by attaching a triangular cupola onto one of its bases, a process known as elongation. This cupola covers the hexagonal face so that the resulting polyhedron has four equilateral triangles, nine squares, and one regular hexagon. A convex polyhedron in which all of the faces are regular polygons is a Johnson solid. The elongated triangular cupola is one of them, enumerated as the eighteenth Johnson solid formula_0.
Properties.
The surface area formula_1 of an elongated triangular cupola is the sum of the areas of all its polygonal faces. The volume of an elongated triangular cupola can be ascertained by dissecting it into a triangular cupola and a hexagonal prism and then summing their volumes. Given the edge length formula_2, its surface area and volume can be formulated as:
formula_3
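A quick numeric check of the closed forms above, for unit edge length (this snippet is illustrative and not part of the article):

```python
# Numeric check of the surface area and volume formulas above for a = 1.
from math import sqrt

a = 1.0
A = (18 + 5 * sqrt(3)) / 2 * a**2           # surface area
V = (5 * sqrt(2) + 9 * sqrt(3)) / 6 * a**3  # volume
print(round(A, 3), round(V, 3))             # 13.33 3.777
```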
It has the same three-dimensional symmetry as the triangular cupola, the cyclic group formula_4 of order 6. Its dihedral angles can be calculated by adding the angles of a triangular cupola and a hexagonal prism:
Dual polyhedron.
The dual of the elongated triangular cupola has 15 faces: 6 isosceles triangles, 3 rhombi, and 6 quadrilaterals.
Related polyhedra and honeycombs.
The elongated triangular cupola can form a tessellation of space with tetrahedra and square pyramids.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " J_{18} "
},
{
"math_id": 1,
"text": " A "
},
{
"math_id": 2,
"text": " a "
},
{
"math_id": 3,
"text": "\n\\begin{align}\n A &= \\frac{18 + 5\\sqrt{3}}{2}a^2 &\\approx 13.330a^2, \\\\\n V &= \\frac{5\\sqrt{2} + 9\\sqrt{3}}{6}a^3 &\\approx 3.777a^3.\n\\end{align}\n"
},
{
"math_id": 4,
"text": " C_{3\\mathrm{v}} "
}
] | https://en.wikipedia.org/wiki?curid=1196215 |
1196222 | Gyroelongated triangular cupola | In geometry, the gyroelongated triangular cupola is one of the Johnson solids ("J"22). It can be constructed by attaching a hexagonal antiprism to the base of a triangular cupola ("J"3). This is called "gyroelongation", which means that an antiprism is joined to the base of a solid, or between the bases of more than one solid.
The gyroelongated triangular cupola can also be seen as a gyroelongated triangular bicupola ("J"44) with one triangular cupola removed. Like all cupolae, the base polygon has twice as many sides as the top (in this case, the bottom polygon is a hexagon because the top is a triangle).
A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.
Formulae.
The following formulae for volume and surface area can be used if all faces are regular, with edge length "a":
formula_0
formula_1
Dual polyhedron.
The dual of the gyroelongated triangular cupola has 15 faces: 6 kites, 3 rhombi, and 6 pentagons.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V=\\left(\\frac{1}{3}\\sqrt{\\frac{61}{2}+18\\sqrt{3}+30\\sqrt{1+\\sqrt{3}}}\\right)a^3\\approx3.51605...a^3"
},
{
"math_id": 1,
"text": "A=\\left(3+\\frac{11\\sqrt{3}}{2}\\right)a^2\\approx12.5263...a^2"
}
] | https://en.wikipedia.org/wiki?curid=1196222 |
11962384 | Fisher's noncentral hypergeometric distribution | In probability theory and statistics, Fisher's noncentral hypergeometric distribution is a generalization of the hypergeometric distribution where sampling probabilities are modified by weight factors. It can also be defined as the conditional distribution of two or more binomially distributed variables dependent upon their fixed sum.
The distribution may be illustrated by the following urn model. Assume, for example, that an urn contains "m"1 red balls and "m"2 white balls, totalling "N" = "m"1 + "m"2 balls. Each red ball has the weight ω1 and each white ball has the weight ω2. We will say that the odds ratio is ω = ω1 / ω2. Now we are taking balls randomly in such a way that the probability of taking a particular ball is proportional to its weight, but independent of what happens to the other balls. The number of balls taken of a particular color follows the binomial distribution. If the total number "n" of balls taken is known then the conditional distribution of the number of taken red balls for given "n" is Fisher's noncentral hypergeometric distribution. To generate this distribution experimentally, we have to repeat the experiment until it happens to give "n" balls.
If we want to fix the value of "n" prior to the experiment then we have to take the balls one by one until we have "n" balls. The balls are therefore no longer independent. This gives a slightly different distribution known as Wallenius' noncentral hypergeometric distribution. It is far from obvious why these two distributions are different. See the entry for noncentral hypergeometric distributions for an explanation of the difference between these two distributions and a discussion of which distribution to use in various situations.
The two distributions are both equal to the (central) hypergeometric distribution when the odds ratio is 1.
Unfortunately, both distributions are known in the literature as "the" noncentral hypergeometric distribution. It is important to be specific about which distribution is meant when using this name.
Fisher's noncentral hypergeometric distribution was first given the name extended hypergeometric distribution (Harkness, 1965), and some authors still use this name today.
Univariate distribution.
The probability function, mean and variance are given in the adjacent table.
An alternative expression of the distribution has both the number of balls taken of each color and the number of balls not taken as random variables, whereby the expression for the probability becomes symmetric.
The calculation time for the probability function can be high when the sum in "P"0 has many terms. The calculation time can be reduced by calculating the terms in the sum recursively relative to the term for "y" = "x" and ignoring negligible terms in the tails (Liao and Rosen, 2001).
The mean can be approximated by:
formula_0 ,
where formula_1, formula_2, formula_3.
The variance can be approximated by:
formula_4 .
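As a small illustration (the function name and the parameter values below are my own, not from the article), the approximate mean and variance can be evaluated directly from the expressions above:

```python
# Approximate mean and variance of Fisher's noncentral hypergeometric
# distribution, using mu ≈ -2c / (b - sqrt(b^2 - 4ac)) with
# a = omega - 1, b = m1 + n - N - (m1 + n)*omega, c = m1*n*omega.
from math import sqrt

def approx_mean_var(n, m1, N, omega):
    m2 = N - m1
    a = omega - 1
    b = m1 + n - N - (m1 + n) * omega
    c = m1 * n * omega
    mu = -2 * c / (b - sqrt(b * b - 4 * a * c))
    var = N / (N - 1) / (1 / mu + 1 / (m1 - mu) + 1 / (n - mu) + 1 / (mu + m2 - n))
    return mu, var

# With omega = 1 the mean reduces to the central hypergeometric mean m1*n/N = 4.8.
print(approx_mean_var(n=12, m1=10, N=25, omega=1.0))
print(approx_mean_var(n=12, m1=10, N=25, omega=2.0))  # mean shifts toward the heavier color
```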
Better approximations to the mean and variance are given by Levin (1984, 1990), McCullagh and Nelder (1989), Liao (1992), and Eisinga and Pelzer (2011). The saddlepoint methods to approximate the mean and the variance suggested by Eisinga and Pelzer (2011) offer extremely accurate results.
Properties.
The following symmetry relations apply:
formula_5
formula_6
formula_7
Recurrence relation:
formula_8
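A short sketch of how the probability function can be evaluated by generating the terms of the normalizing sum with the recurrence relation above, in the spirit of the recursive scheme of Liao and Rosen (2001); the function name and example values are mine:

```python
# Evaluate the pmf over the support by applying the recurrence
# f(x) = f(x-1) * (m1-x+1)(n-x+1) / (x (m2-n+x)) * omega
# to unnormalized terms, then normalizing.
from math import comb

def fnchypg_pmf(n, m1, N, omega):
    m2 = N - m1
    lo, hi = max(0, n - m2), min(n, m1)
    w = comb(m1, lo) * comb(m2, n - lo) * omega**lo  # weight at the lower end of the support
    weights = {lo: w}
    for x in range(lo + 1, hi + 1):
        w *= (m1 - x + 1) * (n - x + 1) * omega / (x * (m2 - n + x))
        weights[x] = w
    total = sum(weights.values())
    return {x: wx / total for x, wx in weights.items()}

pmf = fnchypg_pmf(n=6, m1=8, N=20, omega=2.0)
print(sum(pmf.values()))  # ~1.0
```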
The distribution is affectionately called "finchy-pig," based on the abbreviation convention above.
Derivation.
The univariate noncentral hypergeometric distribution may be derived alternatively as a conditional distribution in the context of two binomially distributed random variables, for example when considering the response to a particular treatment in two different groups of patients participating in a clinical trial. An important application of the noncentral hypergeometric distribution in this context is the computation of exact confidence intervals for the odds ratio comparing treatment response between the two groups.
Suppose "X" and "Y" are binomially distributed random variables counting the number of responders in two corresponding groups of size "m"X and "m"Y respectively,
formula_9.
Their odds ratio is given as
formula_10.
The responder prevalence formula_11 is fully defined in terms of the odds formula_12, formula_13, which correspond to the sampling bias in the urn scheme above, i.e.
formula_14.
The trial can be summarized and analyzed in terms of the following contingency table.
In the table, formula_15 corresponds to the total number of responders across groups, and "N" to the total number of patients recruited into the trial. The dots denote corresponding frequency counts of no further relevance.
The sampling distribution of responders in group X conditional upon the trial outcome and prevalences,
formula_16,
is noncentral hypergeometric:
formula_17
Note that the denominator is essentially just the numerator, summed over all events of the joint sample space formula_18 for which it holds that formula_19. Terms independent of "X" can be factored out of the sum and cancel out with the numerator.
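As a numerical sanity check (the parameter values are arbitrary choices of mine), the conditional probability computed directly from the two binomials agrees with the final noncentral hypergeometric expression above:

```python
from math import comb

mX, mY, piX, piY, n, x = 9, 11, 0.4, 0.25, 7, 3
omega = (piX / (1 - piX)) / (piY / (1 - piY))

def binom_pmf(k, m, p):
    return comb(m, k) * p**k * (1 - p)**(m - k)

support = range(max(0, n - mY), min(mX, n) + 1)

# Pr(X = x | X + Y = n) computed directly from the two binomial distributions
lhs = binom_pmf(x, mX, piX) * binom_pmf(n - x, mY, piY) / sum(
    binom_pmf(u, mX, piX) * binom_pmf(n - u, mY, piY) for u in support)

# Fisher's noncentral hypergeometric probability from the last line of the derivation
rhs = comb(mX, x) * comb(mY, n - x) * omega**x / sum(
    comb(mX, u) * comb(mY, n - u) * omega**u for u in support)

print(abs(lhs - rhs) < 1e-12)  # True
```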
Multivariate distribution.
The distribution can be expanded to any number of colors "c" of balls in the urn. The multivariate distribution is used when there are more than two colors.
The probability function and a simple approximation to the mean are given to the right. Better approximations to the mean and variance are given by McCullagh and Nelder (1989).
Properties.
The order of the colors is arbitrary so that any colors can be swapped.
The weights can be arbitrarily scaled:
formula_20 for all formula_21
Colors with zero number ("m""i" = 0) or zero weight (ω"i" = 0) can be omitted from the equations.
Colors with the same weight can be joined:
formula_22
where formula_23 is the (univariate, central) hypergeometric distribution probability.
Applications.
Fisher's noncentral hypergeometric distribution is useful for models of biased sampling or biased selection where the individual items are sampled independently of each other with no competition. The bias or odds can be estimated from an experimental value of the mean. Use Wallenius' noncentral hypergeometric distribution instead if items are sampled one by one with competition.
Fisher's noncentral hypergeometric distribution is used mostly for tests in contingency tables where a conditional distribution for fixed margins is desired. This can be useful, for example, for testing or measuring the effect of a medicine. See McCullagh and Nelder (1989). | [
{
"math_id": 0,
"text": "\\mu \\approx \\frac{-2c}{b - \\sqrt{b^2-4ac}} \\,"
},
{
"math_id": 1,
"text": "a=\\omega-1"
},
{
"math_id": 2,
"text": "b=m_1 + n - N -(m_1+n)\\omega"
},
{
"math_id": 3,
"text": "c=m_1 n \\omega"
},
{
"math_id": 4,
"text": "\\sigma^2 \\approx \\frac{N}{N-1} \\bigg/ \\left( \\frac{1}{\\mu}+ \\frac{1}{m_1-\\mu}+ \\frac{1}{n-\\mu}+ \\frac{1}{\\mu+m_2-n} \\right)"
},
{
"math_id": 5,
"text": "\\operatorname{fnchypg}(x;n,m_1,N,\\omega) = \\operatorname{fnchypg}(n-x;n,m_2,N,1/\\omega)\\,."
},
{
"math_id": 6,
"text": "\\operatorname{fnchypg}(x;n,m_1,N,\\omega) = \\operatorname{fnchypg}(x;m_1,n,N,\\omega)\\,."
},
{
"math_id": 7,
"text": "\\operatorname{fnchypg}(x;n,m_1,N,\\omega) = \\operatorname{fnchypg}(m_1-x;N-n,m_1,N,1/\\omega)\\,."
},
{
"math_id": 8,
"text": "\\operatorname{fnchypg}(x;n,m_1,N,\\omega) = \\operatorname{fnchypg}(x-1;n,m_1,N,\\omega) \\frac{(m_1-x+1)(n-x+1)}{x(m_2-n+x)}\\omega\\,."
},
{
"math_id": 9,
"text": "\nX \\sim \\operatorname{Bin}(m_X, \\pi_X),\\quad Y \\sim \\operatorname{Bin}(m_Y, \\pi_Y) \\,\n"
},
{
"math_id": 10,
"text": "\n\\omega = \\frac{\\omega_X}{\\omega_Y} = \\frac{\\pi_X/(1-\\pi_X)}{\\pi_Y/(1-\\pi_Y)}\n"
},
{
"math_id": 11,
"text": "\\pi_i"
},
{
"math_id": 12,
"text": "\\omega_i"
},
{
"math_id": 13,
"text": "i \\in \\{X,Y\\}"
},
{
"math_id": 14,
"text": "\\pi_i = \\frac{\\omega_i}{1+\\omega_i}"
},
{
"math_id": 15,
"text": "n=x+y"
},
{
"math_id": 16,
"text": "Pr(X = x \\; | \\; X+Y = n,m_X,m_Y,\\omega_X,\\omega_Y)"
},
{
"math_id": 17,
"text": " \n\\begin{align}\nF(X,\\omega) :&= Pr(X = x \\; | \\; X+Y = n,m_X,m_Y,\\omega_X,\\omega_Y)\\\\\n&= \\frac{Pr(X = x, X+Y = n \\; | \\; m_X,m_Y,\\omega_X,\\omega_Y)}{Pr(X+Y = n \\; | \\; m_X,m_Y,\\omega_X,\\omega_Y)}\\\\\n&= \\frac{Pr(X = x \\; | \\; m_X,\\omega_X)Pr(Y = n-x \\; | \\; m_Y,\\omega_Y,X=x)}{Pr(X+Y = n \\; | \\; m_X,m_Y,\\omega_X,\\omega_Y)}\\\\\n&= \\frac{\\binom{m_X}{x}\\pi_X^x(1-\\pi_X)^{m_X-x}\\binom{m_Y}{n-x}\\pi_Y^{n-x}(1-\\pi_Y)^{m_Y-(n-x)}}{Pr(X+Y = n \\; | \\; m_X,m_Y,\\omega_X,\\omega_Y)}\\\\\n&= \\frac{\\binom{m_X}{x}\\omega_X^x(1-\\pi_X)^{m_X}\\binom{m_Y}{n-x}\\omega_Y^{n-x}(1-\\pi_Y)^{m_Y}}{Pr(X+Y = n \\; | \\; m_X,m_Y,\\omega_X,\\omega_Y)}\\\\\n&= \\frac{\\binom{m_X}{x}\\binom{m_Y}{n-x}\\omega^x(1-\\pi_X)^{m_X}\\omega_Y^{n}(1-\\pi_Y)^{m_Y}}{(1-\\pi_X)^{m_X}\\omega_Y^{n}(1-\\pi_Y)^{m_Y}\\sum_{u=\\max(0,n-m_Y)}^{\\min(m_X,n)}\\binom{m_X}{u}\\binom{m_Y}{n-u}\\omega^u}\\\\\n&= \\frac{\\binom{m_X}{x}\\binom{m_Y}{n-x}\\omega^x}{\\sum_{u=\\max(0,n-m_Y)}^{\\min(m_X,n)}\\binom{m_X}{u}\\binom{m_Y}{n-u}\\omega^u}\n\\end{align}\n"
},
{
"math_id": 18,
"text": "(X,Y)"
},
{
"math_id": 19,
"text": "X+Y = n"
},
{
"math_id": 20,
"text": "\\operatorname{mfnchypg}(\\mathbf{x};n,\\mathbf{m}, \\boldsymbol{\\omega}) = \\operatorname{mfnchypg}(\\mathbf{x};n,\\mathbf{m}, r\\boldsymbol{\\omega})\\,\\,"
},
{
"math_id": 21,
"text": "r \\in \\mathbb{R}_+."
},
{
"math_id": 22,
"text": "\n\\begin{align}\n& {} \\operatorname{mfnchypg}\\left(\\mathbf{x};n,\\mathbf{m}, (\\omega_1,\\ldots,\\omega_{c-1},\\omega_{c-1})\\right) \\\\\n& {} = \\operatorname{mfnchypg}\\left((x_1,\\ldots,x_{c-1}+x_c); n,(m_1,\\ldots,m_{c-1}+m_c), (\\omega_1,\\ldots,\\omega_{c-1})\\right)\\, \\cdot \\\\\n& \\qquad \\operatorname{hypg}(x_c; x_{c-1}+x_c, m_c, m_{c-1}+m_c)\n\\end{align}\n"
},
{
"math_id": 23,
"text": "\\operatorname{hypg}(x;n,m,N)"
}
] | https://en.wikipedia.org/wiki?curid=11962384 |
1196250 | Triangular orthobicupola | 27th Johnson solid; 2 triangular cupolae joined base-to-base
In geometry, the triangular orthobicupola is one of the Johnson solids ("J"27). As the name suggests, it can be constructed by attaching two triangular cupolas ("J"3) along their bases. It has an equal number of squares and triangles at each vertex; however, it is not vertex-transitive. It is also called an "anticuboctahedron", "twisted cuboctahedron" or "disheptahedron". It is also a canonical polyhedron.
A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.
The "triangular orthobicupola" is the first in an infinite set of orthobicupolae.
Construction.
The "triangular orthobicupola" can be constructed by attaching two triangular cupolas onto their bases. Similar to the cuboctahedron, which would be known as the "triangular gyrobicupola", the difference is that the two triangular cupolas that make up the triangular orthobicupola are joined so that pairs of matching sides abut (hence, "ortho"); the cuboctahedron is joined so that triangles abut squares and vice versa. Given a triangular orthobicupola, a 60-degree rotation of one cupola before the joining yields a cuboctahedron. Hence, another name for the triangular orthobicupola is the "anticuboctahedron". Because the triangular orthobicupola has the property of convexity and its faces are regular polygons—eight equilateral triangles and six squares—it is categorized as a Johnson solid. It is enumerated as the twenty-seventh Johnson solid formula_0
Properties.
The surface area formula_1 and the volume formula_2 of a triangular orthobicupola are the same as those of a cuboctahedron. Its surface area can be obtained by summing the areas of all its polygonal faces, and its volume by slicing it into two triangular cupolas and adding their volumes. With edge length formula_3, they are:
formula_4
The dual polyhedron of a triangular orthobicupola is the trapezo-rhombic dodecahedron. It has 6 rhombic and 6 trapezoidal faces, and is similar to the rhombic dodecahedron.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " J_{27} "
},
{
"math_id": 1,
"text": " A "
},
{
"math_id": 2,
"text": " V "
},
{
"math_id": 3,
"text": " a "
},
{
"math_id": 4,
"text": " \\begin{align}\n A &= 2\\left(3+\\sqrt{3}\\right)a^2 \\approx 9.464a^2, \\\\\n V &= \\frac{5\\sqrt{2}}{3}a^3 \\approx 2.357a^3.\n\\end{align} "
}
] | https://en.wikipedia.org/wiki?curid=1196250 |
11962567 | Caristi fixed-point theorem | In mathematics, the Caristi fixed-point theorem (also known as the Caristi–Kirk fixed-point theorem) generalizes the Banach fixed-point theorem for maps of a complete metric space into itself. Caristi's fixed-point theorem modifies the formula_0-variational principle of Ekeland (1974, 1979). The conclusion of Caristi's theorem is equivalent to metric completeness, as proved by Weston (1977).
The original result is due to the mathematicians James Caristi and William Arthur Kirk.
Caristi fixed-point theorem can be applied to derive other classical fixed-point results, and also to prove the existence of bounded solutions of a functional equation.
Statement of the theorem.
Let formula_1 be a complete metric space. Let formula_2 be a map and let formula_3 be a lower semicontinuous function from formula_4 into the non-negative real numbers. Suppose that, for all points formula_5 in formula_6
formula_7
Then formula_8 has a fixed point in formula_9 that is, a point formula_10 such that formula_11 The proof of this result utilizes Zorn's lemma to guarantee the existence of a minimal element which turns out to be a desired fixed point.
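As a standard illustration (not part of the statement above), the Banach fixed-point theorem can be recovered from Caristi's theorem: for a contraction T with constant 0 ≤ q < 1, a suitable potential function f is

```latex
% Sketch: recovering Banach's theorem from Caristi's theorem.
% Assume d(T(x), T(y)) <= q d(x, y) for all x, y, with 0 <= q < 1.
\begin{align*}
  f(x) &:= \frac{d(x, T(x))}{1-q} \qquad \text{(continuous, hence lower semicontinuous)},\\
  f(x) - f(T(x)) &= \frac{d(x,T(x)) - d(T(x),T^2(x))}{1-q}
      \ge \frac{d(x,T(x)) - q\,d(x,T(x))}{1-q} = d(x, T(x)),
\end{align*}
```

so the Caristi condition holds and T has a fixed point.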
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varepsilon"
},
{
"math_id": 1,
"text": "(X, d)"
},
{
"math_id": 2,
"text": "T : X \\to X"
},
{
"math_id": 3,
"text": "f : X \\to [0, +\\infty)"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "X,"
},
{
"math_id": 7,
"text": "d(x, T(x)) \\leq f(x) - f(T(x))."
},
{
"math_id": 8,
"text": "T"
},
{
"math_id": 9,
"text": "X;"
},
{
"math_id": 10,
"text": "x_0"
},
{
"math_id": 11,
"text": "T(x_0) = x_0."
}
] | https://en.wikipedia.org/wiki?curid=11962567 |
1196275 | Pentagonal orthocupolarotunda | 32nd Johnson solid; pentagonal cupola and rotunda joined base-to-base
In geometry, the pentagonal orthocupolarotunda is one of the Johnson solids ("J"32). As the name suggests, it can be constructed by joining a pentagonal cupola ("J"5) and a pentagonal rotunda ("J"6) along their decagonal bases, matching the pentagonal faces. A 36-degree rotation of one of the halves before the joining yields a pentagonal gyrocupolarotunda ("J"33).
A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.
Formulae.
The following formulae for volume and surface area can be used if all faces are regular, with edge length "a":
formula_0
formula_1
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V=\\frac{5}{12}\\left(11+5\\sqrt{5}\\right)a^3\\approx9.24181...a^3"
},
{
"math_id": 1,
"text": "A=\\left(5+\\frac{1}{4}\\sqrt{1900+490\\sqrt{5}+210\\sqrt{75+30\\sqrt{5}}}\\right)a^2\\approx23.5385...a^2"
}
] | https://en.wikipedia.org/wiki?curid=1196275 |
1196281 | Pentagonal gyrocupolarotunda | 33rd Johnson solid; pentagonal cupola and rotunda joined base-to-base
In geometry, the pentagonal gyrocupolarotunda is one of the Johnson solids ("J"33). Like the pentagonal orthocupolarotunda ("J"32), it can be constructed by joining a pentagonal cupola ("J"5) and a pentagonal rotunda ("J"6) along their decagonal bases. The difference is that in this solid, the two halves are rotated 36 degrees with respect to one another.
A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.
Formulae.
The following formulae for volume and surface area can be used if all faces are regular, with edge length "a":
formula_0
formula_1
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V=\\frac{5}{12}\\left(11+5\\sqrt{5}\\right)a^3\\approx9.24181...a^3"
},
{
"math_id": 1,
"text": "A= \\left(5+\\frac{15}{4}\\sqrt{3}+\\frac{7}{4}\\sqrt{25+10\\sqrt{5}}\\right) a^2\\approx23.5385...a^2"
}
] | https://en.wikipedia.org/wiki?curid=1196281 |
1196304 | Elongated triangular orthobicupola | Johnson solid with 20 faces
In geometry, the elongated triangular orthobicupola is a polyhedron constructed by attaching two regular triangular cupolas to the bases of a regular hexagonal prism. It is an example of a Johnson solid.
Construction.
The elongated triangular orthobicupola can be constructed from a hexagonal prism by attaching two regular triangular cupolae onto its bases, covering its hexagonal faces. This construction process is known as elongation, and the resulting polyhedron has 8 equilateral triangles and 12 squares. A convex polyhedron in which all faces are regular is a Johnson solid, and the elongated triangular orthobicupola is one of them, enumerated as the 35th Johnson solid formula_1.
Properties.
An elongated triangular orthobicupola with a given edge length formula_2 has a surface area obtained by adding the areas of all its regular faces:
formula_3
Its volume can be calculated by dissecting it into two triangular cupolae and a hexagonal prism with regular faces, and then adding up their volumes:
formula_4
It has the same three-dimensional symmetry group as the triangular orthobicupola, the dihedral group formula_0 of order 12. Its dihedral angles can be calculated by adding the angles of the triangular cupola and the hexagonal prism. The dihedral angle of a hexagonal prism between two adjacent squares is the internal angle of a regular hexagon formula_5, and that between its base and square face is formula_6. The dihedral angle of a regular triangular cupola between each triangle and the hexagon is approximately formula_7, that between each square and the hexagon is formula_8, and that between square and triangle is formula_9. The dihedral angles of an elongated triangular orthobicupola between the triangle-to-square and square-to-square, on the edges where the triangular cupola and the prism are attached, are respectively:
formula_10
Related polyhedra and honeycombs.
The elongated triangular orthobicupola forms space-filling honeycombs with tetrahedra and square pyramids.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " D_{3h} "
},
{
"math_id": 1,
"text": " J_{35} "
},
{
"math_id": 2,
"text": " a "
},
{
"math_id": 3,
"text": " \\left(12 + 2\\sqrt{3}\\right)a^2 \\approx 15.464a^2. "
},
{
"math_id": 4,
"text": " \\left(\\frac{5\\sqrt{2}}{3} + \\frac{3\\sqrt{3}}{2}\\right)a^3 \\approx 4.955a^3. "
},
{
"math_id": 5,
"text": " 120^\\circ = 2\\pi/3"
},
{
"math_id": 6,
"text": " \\pi/2 = 90^\\circ "
},
{
"math_id": 7,
"text": " 70.5^\\circ "
},
{
"math_id": 8,
"text": " 54.7^\\circ "
},
{
"math_id": 9,
"text": " 125.3^\\circ "
},
{
"math_id": 10,
"text": " \\begin{align}\n \\frac{\\pi}{2} + 70.5^\\circ &\\approx 160.5^\\circ, \\\\\n \\frac{\\pi}{2} + 54.7^\\circ &\\approx 144.7^\\circ.\n\\end{align} "
}
] | https://en.wikipedia.org/wiki?curid=1196304 |
1196309 | Elongated triangular gyrobicupola | 36th Johnson solid
In geometry, the elongated triangular gyrobicupola is a polyhedron constructed by attaching two regular triangular cupolas to the bases of a regular hexagonal prism, with one of them rotated by formula_0. It is an example of a Johnson solid.
Construction.
The elongated triangular gyrobicupola can be constructed similarly to the elongated triangular orthobicupola: starting from a hexagonal prism, two regular triangular cupolae are attached onto its bases, covering its hexagonal faces. This construction process is known as elongation, and the resulting polyhedron has 8 equilateral triangles and 12 squares. The difference between the two polyhedra is that one of the two triangular cupolas in the elongated triangular gyrobicupola is rotated by formula_0. A convex polyhedron in which all faces are regular is a Johnson solid, and the elongated triangular gyrobicupola is one of them, enumerated as the 36th Johnson solid formula_1.
Properties.
An elongated triangular gyrobicupola with a given edge length formula_2 has a surface area obtained by adding the areas of all its regular faces:
formula_3
Its volume can be calculated by dissecting it into two triangular cupolae and a hexagonal prism with regular faces, and then adding up their volumes:
formula_4
Its three-dimensional symmetry group is the antiprismatic symmetry, the dihedral group formula_5 of order 12. Its dihedral angles can be calculated by adding the angles of the triangular cupola and the hexagonal prism. The dihedral angle of a hexagonal prism between two adjacent squares is the internal angle of a regular hexagon formula_6, and that between its base and square face is formula_7. The dihedral angle of a regular triangular cupola between each triangle and the hexagon is approximately formula_8, that between each square and the hexagon is formula_9, and that between square and triangle is formula_10. The dihedral angles of an elongated triangular gyrobicupola between the triangle-to-square and square-to-square, on the edges where the triangular cupola and the prism are attached, are respectively:
formula_11
Related polyhedra and honeycombs.
The elongated triangular gyrobicupola forms space-filling honeycombs with tetrahedra and square pyramids.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " 60^\\circ "
},
{
"math_id": 1,
"text": " J_{36} "
},
{
"math_id": 2,
"text": " a "
},
{
"math_id": 3,
"text": " \\left(12 + 2\\sqrt{3}\\right)a^2 \\approx 15.464a^2. "
},
{
"math_id": 4,
"text": " \\left(\\frac{5\\sqrt{2}}{3} + \\frac{3\\sqrt{3}}{2}\\right)a^3 \\approx 4.955a^3. "
},
{
"math_id": 5,
"text": " D_{3d} "
},
{
"math_id": 6,
"text": " 120^\\circ = 2\\pi/3"
},
{
"math_id": 7,
"text": " \\pi/2 = 90^\\circ "
},
{
"math_id": 8,
"text": " 70.5^\\circ "
},
{
"math_id": 9,
"text": " 54.7^\\circ "
},
{
"math_id": 10,
"text": " 125.3^\\circ "
},
{
"math_id": 11,
"text": " \\begin{align}\n \\frac{\\pi}{2} + 70.5^\\circ &\\approx 160.5^\\circ, \\\\\n \\frac{\\pi}{2} + 54.7^\\circ &\\approx 144.7^\\circ.\n\\end{align} "
}
] | https://en.wikipedia.org/wiki?curid=1196309 |
1196315 | Gyroelongated triangular bicupola | 44th Johnson solid
In geometry, the gyroelongated triangular bicupola is one of the Johnson solids ("J"44). As the name suggests, it can be constructed by gyroelongating a triangular bicupola (either triangular orthobicupola, "J"27, or the cuboctahedron) by inserting a hexagonal antiprism between its congruent halves.
A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.
The gyroelongated triangular bicupola is one of five Johnson solids which are chiral, meaning that they have a "left-handed" and a "right-handed" form. In the illustration to the right, each square face on the bottom half of the figure is connected by a path of two triangular faces to a square face above it and to the right. In the figure of opposite chirality (the mirror image of the illustrated figure), each bottom square would be connected to a square face above it and to the left. The two chiral forms of "J"44 are not considered different Johnson solids.
Formulae.
The following formulae for volume and surface area can be used if all faces are regular, with edge length "a":
formula_0
formula_1
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V= \\sqrt{2} \\left(\\frac{5}{3}+\\sqrt{1+\\sqrt{3}}\\right) a^3 \\approx 4.69456...a^3"
},
{
"math_id": 1,
"text": "A=\\left(6+5\\sqrt{3}\\right)a^2 \\approx 14.6603...a^2"
}
] | https://en.wikipedia.org/wiki?curid=1196315 |
11964 | Genus–differentia definition | Type of intensional definition
A genus–differentia definition is a type of intensional definition, and it is composed of two parts:
For example, consider these two definitions:
Those definitions can be expressed as one genus and two "differentiae":
The use of a genus (Greek: "genos") and a differentia (Greek: "diaphora") in constructing a definition goes back at least as far as Aristotle (384–322 BCE). Furthermore, a genus may fulfill certain characteristics (described below) that qualify it to be referred to as "a species", a term derived from the Greek word "eidos", which means "form" in Plato's dialogues but should be taken to mean "species" in Aristotle's corpus.
Differentiation and Abstraction.
The process of producing new definitions by "extending" existing definitions is commonly known as differentiation (and also as derivation). The reverse process, by which just part of an existing definition is used itself as a new definition, is called abstraction; the new definition is called "an abstraction" and it is said to have been "abstracted away from" the existing definition.
For instance, consider the following:
A part of that definition may be singled out (using parentheses here):
and with that part, an abstraction may be formed:
Then, the definition of "a square" may be recast with that abstraction as its genus:
Similarly, the definition of "a square" may be rearranged and another portion singled out:
leading to the following abstraction:
Then, the definition of "a square" may be recast with that abstraction as its genus:
In fact, the definition of "a square" may be recast in terms of both of the abstractions, where one acts as the genus and the other acts as the differentia:
Hence, abstraction is crucial in simplifying definitions.
Multiplicity.
When multiple definitions could serve equally well, then all such definitions apply simultaneously. Thus, "a square" is a member of both the genus "[a] rectangle" and the genus "[a] rhombus". In such a case, it is notationally convenient to consolidate the definitions into one definition that is expressed with multiple genera (and possibly no differentia, as in the following):
or completely equivalently:
More generally, a collection of formula_0 equivalent definitions (each of which is expressed with one unique genus) can be recast as one definition that is expressed with formula_1 genera. Thus, the following:
could be recast as:
Structure.
A genus of a definition provides a means by which to specify an "is-a relationship":
The non-genus portion of the differentia of a definition provides a means by which to specify a "has-a relationship":
When a system of definitions is constructed with genera and differentiae, the definitions can be thought of as nodes forming a hierarchy or, more generally, a directed acyclic graph; a node that has no predecessor is "a most general definition"; each node along a directed path is "more differentiated" (or "more derived") than any one of its predecessors, and a node with no successor is "a most differentiated" (or "a most derived") definition.
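As a rough programming analogy (my own, not part of the article), a genus behaves like a base class in an is-a hierarchy, while a differentia adds the distinguishing property; a minimal Python sketch:

```python
# Genus as a base class ("is-a"), differentia as an added constraint/attribute.
from dataclasses import dataclass

@dataclass
class Quadrilateral:                # a most general definition here
    sides: int = 4

@dataclass
class Rectangle(Quadrilateral):     # differentia: all angles are right angles
    right_angles: bool = True

@dataclass
class Square(Rectangle):            # further differentia: all sides are equal
    equal_sides: bool = True

s = Square()
print(isinstance(s, Rectangle), isinstance(s, Quadrilateral))  # True True
```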
When a definition, "S", is the tail of each of its successors (that is, "S" has at least one successor and each direct successor of "S" is a most differentiated definition), then "S" is often called "the species of each of its successors, and each direct successor of "S" is often called "an individual (or "an entity") of the species "S"; that is, the genus of an individual is synonymously called "the species" of that individual. Furthermore, the differentia of an individual is synonymously called "the identity" of that individual. For instance, consider the following definition:
In this case:
As in that example, the identity itself (or some part of it) is often used to refer to the entire individual, a phenomenon that is known in linguistics as a "pars pro toto synecdoche".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n>1"
},
{
"math_id": 1,
"text": "n"
}
] | https://en.wikipedia.org/wiki?curid=11964 |
11965490 | Active EMI reduction | In the field of EMC, active EMI reduction (or active EMI filtering) refers to techniques aimed to reduce or to filter electromagnetic noise (EMI) making use of active electronic components. Active EMI reduction contrasts with "passive" filtering techniques, such as RC filters, LC filters RLC filters, which includes only passive electrical components. Hybrid solutions including both active and passive elements exist.
Standards concerning conducted and radiated emissions published by the IEC and the FCC set the maximum noise level allowed for different classes of electrical devices. The frequency range of interest spans from 150 kHz to 30 MHz for conducted emissions and from 30 MHz to 40 GHz for radiated emissions. Meeting these requirements and guaranteeing the functionality of an electrical apparatus subject to electromagnetic interference are the main reasons to include an EMI filter. In an electrical system, power converters, i.e. DC/DC converters, inverters and rectifiers, are the major sources of conducted EMI, due to their high-frequency switching which gives rise to unwanted fast current and voltage transients. Since power electronics is nowadays spread across many fields, from industrial power applications to the automotive industry, EMI filtering has become necessary. In other fields, such as the telecommunication industry where the major focus is on radiated emissions, other techniques have been developed for EMI reduction, such as spread spectrum clocking, which makes use of digital electronics, or electromagnetic shielding.
Working principle.
The concept behind active EMI reduction has already been implemented previously in acoustics with the active noise control and it can be described considering the following three different blocks:
The active EMI reduction device should not affect the normal operation of the raw system. Active filters are intended to act only on the high-frequency noises produced by the system and should not modify normal operation at DC or power-line frequency.
Filter topologies.
The EMI noise can be categorized as common mode (CM) and differential mode (DM).
Depending on the noise component that should be compensated, different topologies and configurations are possible. Two families of active filter exist, the feedback and the feed forward controlled: the first detects the noise at the receiver and generates a compensation signal to suppress the noise; the latter detects the noise at the noise source and generates an opposite signal to cancel out the noise.
Even though the spectrum of an EMI noise is composed of several spectral components, a single frequency at a time is taken into account to make a simple circuit representation possible, as shown in Fig. 1. The noise source formula_0 is represented as a sinusoidal source in its Norton representation, which delivers a sinusoidal current formula_1 to the load impedance formula_2.
The target of the filter is to suppress every single frequency noise current flowing through the load, and in order to understand how it achieves the task, two very basic circuit elements are introduced: the nullator and the norator.
The nullator is an element whose voltage and current are always zero, while the norator is an element whose voltage and current can assume any value.
For example, by placing the nullator in series or in parallel to the load impedance we can either cancel the single frequency noise current or voltage across formula_2. Then the norator must be placed to satisfy the Kirchhoff's current and voltage laws (KVL and KCL). The active EMI filter always tries to keep a constant value of current or voltage at the load, in this specific case this value is equal to zero. The combination of a nullator and a norator forms a nullor, which is an element that can be represented by an ideal controlled voltage/current source.
The series and parallel combinations of norator and nullator give four possible configurations of ideal controlled sources, which are shown in Fig. 2 for the feedback topology and in Fig. 3 for the feed forward topology.
The four implementations that can be actualized are:
Feedback.
To assess the performance and the effectiveness of the filter, the insertion loss (IL) can be evaluated in each case. The IL, expressed in dB, represents the achievable noise attenuation and is defined as:
formula_3
where formula_4 is the load voltage measured "without" the filter and formula_5 is the load voltage "with" the filter included in the system. By applying KVL, KCL and Ohm's law to the circuit, these two voltages can be calculated.
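A minimal numeric sketch (function name and example values are mine) of the insertion-loss definition above:

```python
# IL = 20*log10(|V_without| / |V_with|), in dB.
import math

def insertion_loss_db(v_without, v_with):
    return 20 * math.log10(abs(v_without) / abs(v_with))

# Example: the filter reduces a noise voltage from 10 mV to 0.5 mV.
print(insertion_loss_db(10e-3, 0.5e-3))  # ~26 dB of attenuation
```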
If formula_6 is the filter's gain, i.e. the transfer function between the sensed and the injected signals, the IL can then be derived for each of the four configurations.
A larger IL implies a greater attenuation, while a smaller-than-unity IL implies an undesired amplification of the noise signal caused by the active filter. For example, type (a) (current sensing and compensation) and type (d) (voltage sensing and compensation) filters, if the mismatch between formula_2 and formula_7 is large enough that one of the two becomes negligible compared to the other, provide ILs that are independent of the system impedances, which means the higher the gain, the better the performance. A large mismatch between formula_2 and formula_7 occurs in most real applications, where the noise source impedance formula_7 is much smaller (for the differential-mode test setup) or much larger (for the common-mode test setup) than the load impedance formula_2, which, in a standard test setup, is equal to the formula_8 LISN impedance. In these two cases the ILs can be approximated accordingly.
On the other hand, in the type (c) (current sensing and voltage compensation) active filter, the gain of the active filter should be larger than the total impedance of the given system to obtain the maximum IL. This means that the filter should provide a high series impedance between the noise source and the receiver to block the noise current. Similar conclusion can be made for a type (b) (voltage detecting and current compensating) active filter; the equivalent admittance of the active filter should be much higher than the total admittance of the system without the filter, so that the active filter reroutes the noise current and minimizes the noise voltage at the receiver port. In this way, active filters try to block and divert the noise propagation path as conventional passive LC filters do. Nevertheless, active filters employing type (b) or (c) topologies require a gain A larger than the total impedance (or admittance) of the raw system and, in other words, their ILs are always dependent on system impedance formula_2 and formula_7, even though the mismatch between them is large.
Feed forward.
While feedback filters sense the noise at the load side and inject the compensation signal at the source side, feed forward devices do the opposite: the sensing is at the source end and the compensation at the load port. For this reason, there cannot be a feed forward implementation of types (b) and (c). Type (a) (current sensing and injecting) and type (d) (voltage sensing and injecting) can be implemented, and their ILs can be calculated accordingly.
In these two cases, the condition for maximum noise reduction, i.e. maximum IL, is achieved when the filter's gain is equal to one. If formula_9, it follows that formula_10. It can also be noted that, if formula_11 or, generally speaking, formula_12, the insertion loss becomes negative and thus the active filter amplifies the noise instead of reducing it.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I_s"
},
{
"math_id": 1,
"text": "I_L"
},
{
"math_id": 2,
"text": "Z_L"
},
{
"math_id": 3,
"text": "IL=20log_{10}\\frac{|V_{without}|}{|V_{with}|}"
},
{
"math_id": 4,
"text": "V_{without}"
},
{
"math_id": 5,
"text": "V_{with}"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "Z_s"
},
{
"math_id": 8,
"text": "50\\Omega"
},
{
"math_id": 9,
"text": "A\\rightarrow 1^-"
},
{
"math_id": 10,
"text": "IL\\rightarrow \\infty"
},
{
"math_id": 11,
"text": "A\\rightarrow 1^+"
},
{
"math_id": 12,
"text": "A > 1"
}
] | https://en.wikipedia.org/wiki?curid=11965490 |
11965603 | Stellar rotation | Angular motion of a star about its axis
Stellar rotation is the angular motion of a star about its axis. The rate of rotation can be measured from the spectrum of the star, or by timing the movements of active features on the surface.
The rotation of a star produces an equatorial bulge due to centrifugal force. As stars are not solid bodies, they can also undergo differential rotation. Thus the equator of the star can rotate at a different angular velocity than the higher latitudes. These differences in the rate of rotation within a star may have a significant role in the generation of a stellar magnetic field.
In its turn, the magnetic field of a star interacts with the stellar wind. As the wind moves away from the star its angular speed decreases. The magnetic field of the star interacts with the wind, which applies a drag to the stellar rotation. As a result, angular momentum is transferred from the star to the wind, and over time this gradually slows the star's rate of rotation.
Measurement.
Unless a star is being observed from the direction of its pole, sections of the surface have some amount of movement toward or away from the observer. The component of movement that is in the direction of the observer is called the radial velocity. For the portion of the surface with a radial velocity component toward the observer, the radiation is shifted to a higher frequency because of Doppler shift. Likewise the region that has a component moving away from the observer is shifted to a lower frequency. When the absorption lines of a star are observed, this shift at each end of the spectrum causes the line to broaden. However, this broadening must be carefully separated from other effects that can increase the line width.
The component of the radial velocity observed through line broadening depends on the inclination of the star's pole to the line of sight. The derived value is given as formula_1, where formula_2 is the rotational velocity at the equator and formula_0 is the inclination. However, formula_0 is not always known, so the result gives a minimum value for the star's rotational velocity. That is, if formula_0 is not a right angle, then the actual velocity is greater than formula_1. This is sometimes referred to as the projected rotational velocity. In fast-rotating stars, polarimetry offers a method of recovering the actual equatorial velocity rather than just the projected value; this technique has so far been applied only to Regulus.
For giant stars, the atmospheric microturbulence can result in line broadening that is much larger than the effects of rotation, effectively drowning out the signal. However, an alternate approach can be employed that makes use of gravitational microlensing events. These occur when a massive object passes in front of the more distant star and functions like a lens, briefly magnifying the image. The more detailed information gathered by this means allows the effects of microturbulence to be distinguished from rotation.
If a star displays magnetic surface activity such as starspots, then these features can be tracked to estimate the rotation rate. However, such features can form at locations other than equator and can migrate across latitudes over the course of their life span, so differential rotation of a star can produce varying measurements. Stellar magnetic activity is often associated with rapid rotation, so this technique can be used for measurement of such stars. Observation of starspots has shown that these features can actually vary the rotation rate of a star, as the magnetic fields modify the flow of gases in the star.
Physical effects.
Equatorial bulge.
Gravity tends to contract celestial bodies into a perfect sphere, the shape where all the mass is as close to the center of gravity as possible. But a rotating star is not spherical in shape; it has an equatorial bulge.
As a rotating proto-stellar disk contracts to form a star its shape becomes more and more spherical, but the contraction doesn't proceed all the way to a perfect sphere. At the poles all of the gravity acts to increase the contraction, but at the equator the effective gravity is diminished by the centrifugal force. The final shape of the star after star formation is an equilibrium shape, in the sense that the effective gravity in the equatorial region (being diminished) cannot pull the star to a more spherical shape. The rotation also gives rise to gravity darkening at the equator, as described by the von Zeipel theorem.
An extreme example of an equatorial bulge is found on the star Regulus A (α Leonis A). The equator of this star has a measured rotational velocity of 317 ± 3 km/s. This corresponds to a rotation period of 15.9 hours, which is 86% of the velocity at which the star would break apart. The equatorial radius of this star is 32% larger than polar radius. Other rapidly rotating stars include Alpha Arae, Pleione, Vega and Achernar.
The break-up velocity of a star is an expression that is used to describe the case where the centrifugal force at the equator is equal to the gravitational force. For a star to be stable the rotational velocity must be below this value.
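As a rough first-order sketch (my own simplification; detailed models use the distorted Roche geometry rather than a spherical star), the break-up velocity can be estimated by equating the centrifugal acceleration at the equator with the surface gravity:

```python
# v_crit ≈ sqrt(G*M/R) for an (idealized) undistorted star of mass M and radius R.
from math import sqrt

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
R_sun = 6.957e8      # m

v_crit = sqrt(G * M_sun / R_sun)
print(v_crit / 1e3)  # ~437 km/s, far above the Sun's ~2 km/s equatorial rotation speed
```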
Differential rotation.
Surface differential rotation is observed on stars such as the Sun when the angular velocity varies with latitude. Typically the angular velocity decreases with increasing latitude. However the reverse has also been observed, such as on the star designated HD 31993. The first such star, other than the Sun, to have its differential rotation mapped in detail is AB Doradus.
The underlying mechanism that causes differential rotation is turbulent convection inside a star. Convective motion carries energy toward the surface through the mass movement of plasma. This mass of plasma carries a portion of the angular velocity of the star. When turbulence occurs through shear and rotation, the angular momentum can become redistributed to different latitudes through meridional flow.
The interfaces between regions with sharp differences in rotation are believed to be efficient sites for the dynamo processes that generate the stellar magnetic field. There is also a complex interaction between a star's rotation distribution and its magnetic field, with the conversion of magnetic energy into kinetic energy modifying the velocity distribution.
Rotation braking.
During formation.
Stars are believed to form as the result of a collapse of a low-temperature cloud of gas and dust. As the cloud collapses, conservation of angular momentum causes any small net rotation of the cloud to increase, forcing the material into a rotating disk. At the dense center of this disk a protostar forms, which gains heat from the gravitational energy of the collapse.
As the collapse continues, the rotation rate can increase to the point where the accreting protostar can break up due to centrifugal force at the equator. Thus the rotation rate must be braked during the first 100,000 years to avoid this scenario. One possible explanation for the braking is the interaction of the protostar's magnetic field with the stellar wind in magnetic braking. The expanding wind carries away the angular momentum and slows down the rotation rate of the collapsing protostar.
Most main-sequence stars with a spectral class between O5 and F5 have been found to rotate rapidly. For stars in this range, the measured rotation velocity increases with mass. This increase in rotation peaks among young, massive B-class stars. "As the expected life span of a star decreases with increasing mass, this can be explained as a decline in rotational velocity with age."
After formation.
For main-sequence stars, the decline in rotation can be approximated by a mathematical relation:
formula_3
where formula_4 is the angular velocity at the equator and formula_5 is the star's age. This relation is named "Skumanich's law" after Andrew P. Skumanich who discovered it in 1972.
Gyrochronology is the determination of a star's age based on the rotation rate, calibrated using the Sun.
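A minimal gyrochronology-style sketch that follows from Skumanich's law (since the angular velocity scales as t^(-1/2), the age scales with the square of the rotation period); the solar calibration numbers are my own round values, and practical relations also include a colour or mass term:

```python
# t ∝ P^2, calibrated so that the solar rotation period corresponds to the solar age.
SUN_AGE_GYR = 4.6
SUN_PERIOD_DAYS = 25.4

def skumanich_age_gyr(period_days):
    return SUN_AGE_GYR * (period_days / SUN_PERIOD_DAYS) ** 2

print(skumanich_age_gyr(12.0))  # a faster-rotating, hence younger, Sun-like star (~1 Gyr)
```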
Stars slowly lose mass by the emission of a stellar wind from the photosphere. The star's magnetic field exerts a torque on the ejected matter, resulting in a steady transfer of angular momentum away from the star. Stars with a rate of rotation greater than 15 km/s also exhibit more rapid mass loss, and consequently a faster rate of rotation decay. Thus as the rotation of a star is slowed because of braking, there is a decrease in rate of loss of angular momentum. Under these conditions, stars gradually approach, but never quite reach, a condition of zero rotation.
At the end of the main sequence.
Ultracool dwarfs and brown dwarfs experience faster rotation as they age, due to gravitational contraction. These objects also have magnetic fields similar to the coolest stars. However, the discovery of rapidly rotating brown dwarfs such as the T6 brown dwarf WISEPC J112254.73+255021.5 lends support to theoretical models that show that rotational braking by stellar winds is over 1000 times less effective at the end of the main sequence.
Close binary systems.
A close binary star system occurs when two stars orbit each other with an average separation that is of the same order of magnitude as their diameters. At these distances, more complex interactions can occur, such as tidal effects, transfer of mass and even collisions. Tidal interactions in a close binary system can result in modification of the orbital and rotational parameters. The total angular momentum of the system is conserved, but the angular momentum can be transferred between the orbital periods and the rotation rates.
Each of the members of a close binary system raises tides on the other through gravitational interaction. However the bulges can be slightly misaligned with respect to the direction of gravitational attraction. Thus the force of gravity produces a torque component on the bulge, resulting in the transfer of angular momentum (tidal acceleration). This causes the system to steadily evolve, although it can approach a stable equilibrium. The effect can be more complex in cases where the axis of rotation is not perpendicular to the orbital plane.
For contact or semi-detached binaries, the transfer of mass from a star to its companion can also result in a significant transfer of angular momentum. The accreting companion can spin up to the point where it reaches its critical rotation rate and begins losing mass along the equator.
Degenerate stars.
After a star has finished generating energy through thermonuclear fusion, it evolves into a more compact, degenerate state. During this process the dimensions of the star are significantly reduced, which can result in a corresponding increase in angular velocity.
White dwarf.
A white dwarf is a star that consists of material that is the by-product of thermonuclear fusion during the earlier part of its life, but lacks the mass to burn those more massive elements. It is a compact body that is supported by a quantum mechanical effect known as electron degeneracy pressure that will not allow the star to collapse any further. Generally most white dwarfs have a low rate of rotation, most likely as the result of rotational braking or by shedding angular momentum when the progenitor star lost its outer envelope. (See planetary nebula.)
A slow-rotating white dwarf star can not exceed the Chandrasekhar limit of 1.44 solar masses without collapsing to form a neutron star or exploding as a Type Ia supernova. Once the white dwarf reaches this mass, such as by accretion or collision, the gravitational force would exceed the pressure exerted by the electrons. If the white dwarf is rotating rapidly, however, the effective gravity is diminished in the equatorial region, thus allowing the white dwarf to exceed the Chandrasekhar limit. Such rapid rotation can occur, for example, as a result of mass accretion that results in a transfer of angular momentum.
Neutron star.
A neutron star is a highly dense remnant of a star that is primarily composed of neutrons—a particle that is found in most atomic nuclei and has no net electrical charge. The mass of a neutron star is in the range of 1.2 to 2.1 times the mass of the Sun. As a result of the collapse, a newly formed neutron star can have a very rapid rate of rotation; on the order of a hundred rotations per second.
Pulsars are rotating neutron stars that have a magnetic field. A narrow beam of electromagnetic radiation is emitted from the poles of rotating pulsars. If the beam sweeps past the direction of the Solar System then the pulsar will produce a periodic pulse that can be detected from the Earth. The energy radiated by the magnetic field gradually slows down the rotation rate, so that older pulsars can require as long as several seconds between each pulse.
Black hole.
A black hole is an object with a gravitational field that is sufficiently powerful that it can prevent light from escaping. When they are formed from the collapse of a rotating mass, they retain all of the angular momentum that is not shed in the form of ejected gas. This rotation causes the space within an oblate spheroid-shaped volume, called the "ergosphere", to be dragged around with the black hole. Mass falling into this volume gains energy by this process and some portion of the mass can then be ejected without falling into the black hole. When the mass is ejected, the black hole loses angular momentum (the "Penrose process"). The rotation rate of a black hole has been measured as high as 98.7% of the speed of light.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "i"
},
{
"math_id": 1,
"text": "v_\\mathrm{e} \\cdot \\sin i"
},
{
"math_id": 2,
"text": "v_\\mathrm{e}"
},
{
"math_id": 3,
"text": "\\Omega_\\mathrm{e} \\propto t^{-\\frac{1}{2}},"
},
{
"math_id": 4,
"text": "\\Omega_\\mathrm{e}"
},
{
"math_id": 5,
"text": "t"
}
] | https://en.wikipedia.org/wiki?curid=11965603 |
1196760 | Rudvalis group | Sporadic simple group
In the area of modern algebra known as group theory, the Rudvalis group "Ru" is a sporadic simple group of order
2^14 · 3^3 · 5^3 · 7 · 13 · 29
= 145,926,144,000
≈ 1×10^11.
History.
"Ru" is one of the 26 sporadic groups and was found by Arunas Rudvalis (1973, 1984) and constructed by John H. Conway and David B. Wales (1973). Its Schur multiplier has order 2, and its outer automorphism group is trivial.
In 1982 Robert Griess showed that "Ru" cannot be a subquotient of the monster group. Thus it is one of the 6 sporadic groups called the pariahs.
Properties.
The Rudvalis group acts as a rank 3 permutation group on 4060 points, with one point stabilizer being the Ree group
2"F"4(2), the automorphism group of the Tits group. This representation implies a strongly regular graph srg(4060, 2304, 1328, 1280). That is, each vertex has 2304 neighbors and 1755 non-neighbors, any two adjacent vertices have 1328 common neighbors, while any two non-adjacent ones have 1280 (Griess 1998, p. 125).
Its double cover acts on a 28-dimensional lattice over the Gaussian integers. The lattice has 4×4060 minimal vectors; if minimal vectors are identified whenever one is 1, "i", –1, or –"i" times another, then the 4060 equivalence classes can be identified with the points of the rank 3 permutation representation. Reducing this lattice modulo the principal ideal
formula_0
gives an action of the Rudvalis group on a 28-dimensional vector space over the field formula_1 with 2 elements. Duncan (2006) used the 28-dimensional lattice to construct a vertex operator algebra acted on by the double cover.
Alternatively, the double cover can be defined abstractly, by starting with the graph and lifting Ru to 2Ru in the double cover 2A4060. This is because 1 of the conjugacy classes of involutions does not fix any points. Such an involution partitions the 4060 points of the graph into 2030 pairs, which can be regarded as 1015 double transpositions in the alternating group A4060. Since 1015 is odd, these involutions are lifted to order 4 elements in the double cover 2A4060. For more information, see Covering groups of the alternating and symmetric groups.
characterized the Rudvalis group by the centralizer of a central involution. gave another characterization as part of their identification of the Rudvalis group as one of the quasithin groups.
Maximal subgroups.
found the 15 conjugacy classes of maximal subgroups of "Ru" as follows:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(1 + i)\\ "
},
{
"math_id": 1,
"text": "\\mathbb F_2"
}
] | https://en.wikipedia.org/wiki?curid=1196760 |
1196772 | Harada–Norton group | Sporadic simple group
In the area of modern algebra known as group theory, the Harada–Norton group "HN" is a sporadic simple group of order
273,030,912,000,000 = 2^14 · 3^6 · 5^6 · 7 · 11 · 19 ≈ 3×10^14.
History and properties.
"HN" is one of the 26 sporadic groups and was found by Harada (1976) and Norton (1975)).
Its Schur multiplier is trivial and its outer automorphism group has order 2.
"HN" has an involution whose centralizer is of the form 2.HS.2, where HS is the Higman-Sims group (which is how Harada found it).
The prime 5 plays a special role in the group. For example, it centralizes an element of order 5 in the Monster group (which is how Norton found it), and as a result acts naturally on a vertex operator algebra over the field with 5 elements. This implies that it acts on a 133-dimensional algebra over F5 with a commutative but nonassociative product, analogous to the Griess algebra.
The full normalizer of a 5A element in the Monster group is (D10 × HN).2, so HN centralizes five involutions alongside the 5-cycle. These involutions are centralized by the Baby monster group, which therefore contains HN as a subgroup.
Generalized monstrous moonshine.
Conway and Norton suggested in their 1979 paper that monstrous moonshine is not limited to the monster, but that similar phenomena may be found for other groups. Larissa Queen and others subsequently found that one can construct the expansions of many Hauptmoduln from simple combinations of dimensions of sporadic groups.
To recall, the prime number 5 plays a special role in the group and for "HN", the relevant McKay-Thompson series is formula_0 where one can set the constant term a(0) = −6 (OEIS: ),
formula_1
and "η"("τ") is the Dedekind eta function.
Maximal subgroups.
found the 14 conjugacy classes of maximal subgroups of "HN" as follows: | [
{
"math_id": 0,
"text": "T_{5A}(\\tau)"
},
{
"math_id": 1,
"text": "\\begin{align}\n j_{5A}(\\tau)\n &= T_{5A}(\\tau)-6\\\\\n &= \\left(\\frac{\\eta(\\tau)}{\\eta(5\\tau)}\\right)^{6}+5^3 \\left(\\frac{\\eta(5\\tau)}{\\eta(\\tau)}\\right)^6\\\\\n &= \\frac{1}{q} - 6 + 134q + 760q^2 + 3345q^3 + 12256q^4 + 39350q^5 + \\dots\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=1196772 |
11969002 | Nobuo Okishio | Nobuo Okishio was a Japanese Marxian economist and emeritus professor of Kobe University. In 1979, he was elected President of the Japan Association of Economics and Econometrics, which is now called the Japanese Economic Association.
Okishio studied mathematical economics under Kazuo Mizutani. In 1950 he graduated from Kobe University and later taught there. He soon began to doubt the premises and results of modern economics, and decided to search for alternatives by studying Marxian economics.
Okishio worked to clarify the logic of Karl Marx's economic system, offering formal and mathematical proofs for many Marxian theorems. For example, in 1955 he gave the world's first proof of the "Marxian fundamental theorem", as it was later named by Michio Morishima: the proposition that the exploitation of surplus labor is the necessary condition for the existence of positive profit. Concerning Marx's Falling Rate of Profit, Okishio held that his famous theorem does not deny it.
Okishio wrote many papers covering various important fields in modern and Marxian economics, for example value and price, accumulation theory, critical analysis of Keynesian economics, trade cycle theory, and the long-run tendency of the capitalist economy. They were published in over twenty books and two hundred papers, almost all in Japanese. About thirty of his published papers have been translated into English, and much of this material is collected in the book (Nobuo Okishio, Michael Kruger and Peter Flaschel, 1993).
Value and exploitation theory.
Formulation of labor embodied value.
Okishio showed how Marxian value is determined quantitatively. The value equation presented by Okishio determines the amount of labor directly and indirectly needed to produce one unit of each commodity as follows.
formula_0
where formula_1 is the amount of the formula_2-th good needed to produce one unit of the formula_4-th good, t_j is the value of the formula_2-th good, and formula_3 is the direct labor input needed to produce one unit of the formula_4-th good. He first got this idea when writing "On Exchange Theory" (in Japanese) in 1954, and a little later, in 1955, stated it more explicitly in the English-language paper "Monopoly and the Rates of Profits" in the Kobe University Economic Review.
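In matrix form the system reads t = At + τ, so the value vector is t = (I − A)^(-1) τ whenever the economy is productive. A minimal numerical sketch (the coefficients below are made-up illustrative numbers, not data from any of Okishio's measurements):

```python
import numpy as np

# A[i][j]: amount of good j needed to produce one unit of good i (illustrative numbers)
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
tau = np.array([1.0, 2.0])   # direct labor per unit of each good

# Solve t = A t + tau, i.e. t = (I - A)^(-1) tau
t = np.linalg.solve(np.eye(2) - A, tau)
print(t)  # labor directly and indirectly embodied in one unit of each good
```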
Fundamental Marxian theorem.
Using this equation Okishio proved Marx's fundamental proposition that the exploitation of surplus labor is the necessary condition for the existence of positive profit, later called the Fundamental Marxian Theorem by Michio Morishima. The proof has some notable characteristics. First, it does not presuppose the acceptance of value theory in advance. Starting from the existence of profit expressed in price terms, the existence of surplus value can be deduced as a logical consequence. This is the opposite direction to Marx, who started from value and reached price and profit. Okishio's proof has helped persuade many more non-Marxian economists of the validity of this Marxian proposition.
Measurement.
According to this value equation, quantitative measurements can be made using the input–output tables developed after World War Two. Okishio himself made the first such attempt for the Japanese economy in 1958, and since then many measurement studies have been carried out in many countries. Measurements of the Japanese economy from 1955 to 1985 show that values and prices move differently in the short run, but that in the long run the two magnitudes move very much together. Thus, at least in the long run, values can be said to act as a center of gravity for prices.
Some propositions of Marxian economics.
Transformation problem.
Okishio's work is related to clarifying the logic of Marx's theory. First, there is the transformation problem. In Book Three of Das Kapital, Marx discussed the transformation from values to prices of production. There he noted that output prices also enter as input prices in the various sectors, and he warned that ignoring this fact leads to error, because at the first stage of the transformation there remains a discrepancy between prices and values: outputs are expressed in prices while inputs are still expressed in values. Marx therefore suggested that this iterative transformation process needs to be carried through to the end; he showed the transformation formula but left the task to others. Okishio executed the iteration to the end using mathematical tools and proved that it converges to a production-price equilibrium with positive profits, equivalent to the Bortkiewicz equations. One important finding of this work is that the equilibrium rate of profit and the production prices are determined by the real wage rate and the technologies of the basic sectors only. This result was received with surprise, because many economists had considered that non-basic sectors also bear some relation to the equilibrium rate of profit. In Japan, heated arguments were held between Okishio and some other Marxian economists on this point.
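In modern notation the production-price system can be written p = (1 + r)(A + l b^T)p, where l is the vector of direct labor coefficients and b is the real wage bundle per unit of labor, so the equilibrium rate of profit is fixed by the dominant eigenvalue of the augmented input matrix. A minimal sketch with made-up coefficients (not Okishio's own computation):

```python
import numpy as np

A = np.array([[0.2, 0.3],        # A[i][j]: input of good j per unit of good i
              [0.1, 0.4]])
l = np.array([1.0, 2.0])         # direct labor per unit of output
b = np.array([0.05, 0.1])        # real wage bundle (goods per unit of labor)

M = A + np.outer(l, b)           # augmented input matrix including wage goods
eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)
r = 1.0 / eigvals.real[k] - 1.0  # uniform (equilibrium) rate of profit
p = np.abs(eigvecs[:, k].real)   # production prices, up to normalization
print(r, p / p[0])
```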
Formal proof of the Marxian theorem.
Next come Marx's propositions about the dynamic movement of the capitalist economy. In the paper "A Formal Proof of Marx's Two Theorems", Okishio tried to prove two of Marx's theorems: first, the tendential fall in the rate of profit and, second, the tendential increase in unemployment. By "formal" Okishio meant asking whether the two propositions can be deduced from Marx's presumption of a rising organic composition of production. He showed that if new technologies with a rising organic composition of production are continuously introduced, then the rate of profit must fall and unemployment must increase. Here the crucial assumption is the introduction of technologies with a rising organic composition. He then proceeded to examine the validity of this assumption from the viewpoint of capitalists' behavior in choosing techniques.
Technical change and the rate of profit.
In the 1961 paper "Technical Change and the Rate of Profit" he presented the famous Okishio theorem. There he showed that under the viability condition (for a new technology to be introduced, it must be cost reducing at current prices and the current wage), new technologies never decrease the rate of profit; if such a technology is introduced into a basic sector, the rate of profit will necessarily increase. His arguments depend on several assumptions: (1) the real wage rate is constant before and after the technical change, (2) the comparison is made between equilibrium rates of profit, (3) the rate of profit is defined by the reproduction-cost principle. This theorem was later extended to the case of "joint production" in Morishima (1974), and later to the case of "fixed capital" by Nakatani (1978) and Roemer (1979). This work stimulated much discussion about its validity and implications for Marxist theory when it was first published, and it has remained a hotly debated subject to this day.
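The logic can be illustrated in a one-commodity "corn model", where the equilibrium condition 1 = (1 + r)(a + w·l) gives r = 1/(a + w·l) − 1. A toy sketch with made-up coefficients: the new technique uses more material input and less labor (a higher organic composition), yet because it is cheaper at the going wage, the equilibrium rate of profit rises.

```python
def profit_rate(a, l, w):
    """Uniform profit rate in a one-good economy: 1 = (1 + r) * (a + w*l)."""
    return 1.0 / (a + w * l) - 1.0

w = 0.2                       # real wage (corn per unit of labor), held fixed
old = dict(a=0.50, l=1.0)     # old technique: material input a, labor input l per unit
new = dict(a=0.55, l=0.5)     # new technique: more mechanized, less labor

print(old["a"] + w * old["l"], new["a"] + w * new["l"])  # unit costs 0.70 vs 0.65 (viable)
print(profit_rate(w=w, **old))  # about 0.43
print(profit_rate(w=w, **new))  # about 0.54: the profit rate rises
```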
Okishio theorem and the falling rate of profit.
Okishio did not believe that his famous theorem rules out entirely the possibility of the Falling Rate of Profit taking effect. A falling rate of profit might be realized in the long run due to competitive pressures among capitalists, the bargaining power of labor, or other reasons. The crux of Okishio's theorem is that, given continuing technological progress in the capitalist system, if the rate of profit falls in the long run then the real wage rate must be increasing. The real wage rate will change in the process of technical change, and it is very doubtful whether this dynamic process converges to a stationary production-price situation. Nevertheless, Okishio's theorem is relevant because it denies that the Falling Rate of Profit is established automatically by technical change alone.
Critical investigation of Keynes.
Keynesian economics compared with classical economics.
Okishio critically investigated non-Marxian economists with great energy, especially Keynes and Harrod. Although Keynes was not sympathetic to Marx, Okishio regarded Keynes as an important critic of neoclassical economics from within modern economics, because Keynes denied the harmonious adjusting mechanism of the market economy. Keynes also emphasized the independent and volatile role of investment demand in the capitalist economy. In these respects Keynes shared a similar viewpoint with Marx. Recent New Keynesians and Neo-Keynesians have been neglecting these fundamental characteristics of Keynes's original theory.
Aggregate supply function.
Okishio's critique of Keynes is that Keynes denied the possibility of changing capitalists' decision making. As is well known, Keynes devoted almost all of his investigation to the demand side; as for the supply side, he said only that there remained almost nothing in it not already known and left it as technically given. Okishio examined the capitalistic properties of Keynes's aggregate supply function Z(N) and showed an alternative way of raising employment by changing the aggregate supply function. His critical examination of Keynesian economics appeared in the jointly published book "Keynesian Economics" in 1957.
Determination of wage rate.
One of Keynes's criticisms of classical economics concerns the idea that a market-clearing real wage is determined in the labor market. In Keynes, on the contrary, the real wage rate is determined in the commodity markets. Many Marxian economists consider the real wage rate to be determined by the labor market. However, the labor market can directly affect only the nominal wage rate, while the commodity market affects nominal prices; so, in order to determine how the real wage rate moves, both markets, namely the economy as a whole, have to be considered. Okishio investigated the movement of the real wage rate in the accumulation process and regarded investment demand as the dominant determinant of the real wage rate in the short run, and the natural growth rate as determining the real wage rate in the long run.
Instability of capitalistic accumulation path.
Instability.
Okishio agreed with Roy F. Harrod that the market economy is unstable not only from a static perspective but also from a dynamic perspective. Harrod's arguments are not necessarily clear, however, about investment decision making. Okishio wrote many papers to clarify the logic of Harrod's instability and showed that instability is an inherent characteristic of the accumulation process of the capitalist economy. The problems examined are (A) to make clear the capitalists' investment decision in Harrod's model and (B) to investigate the instability postulate while taking into consideration other possibilities such as substitutive technical changes, changes in the saving ratio, and movements in relative prices. He reached the conclusion that instability is a robust property of capitalist accumulation.
Crisis theory.
The capitalist accumulation process displays instability. However, for a production system to survive for many years, some kind of equilibrium or near-equilibrium conditions must be satisfied. In the short run the economy diverges from the equilibrium growth path due to Harrod instability, but in the long run it satisfies several conditions, as shown in Marx's reproduction formulas in Book Two. Okishio proceeded to investigate crisis theory by reconciling these two requirements and by introducing crisis as the regulating mechanism. His accumulation theory is published in his main work, Chikusekiron ("Accumulation Theory" in Japanese).
Competition.
Profit and competition.
Okishio scrutinized the relation between profits and competition. Okishio's theorem is the proposition obtained by comparing the equilibrium rate of profit before and after the introduction of a new technology. Whether the economic disturbances due to technical change will smoothly converge to a new stationary state is, however, very much open to question.
Relating production price.
In other words, how can the Marxian production-price constellation be justified in a real economy? Marx considered, of course, that in the long run a positive average rate of profit is realized in a capitalist market economy. What, then, is the logic that guarantees this, taking into account the change of the real wage rate? Adam Smith considered that competition among capitals exerts downward pressure on profits. But, as is well known, Ricardo criticized Smith and claimed that competition can only equalize uneven rates of profit among capitals and never affects the level of the rate of profit itself, a position inherited by Marx. Walras and, more clearly, Schumpeter asserted that competition sweeps out profits completely.
Tentative results.
Okishio's tentative conclusion on this problem is that competition can drive the economy to a zero-profit equilibrium unless there are continuous technical innovations, an increasing labor supply, or independent capitalist consumption. This investigation is still under way.
The long-run processes of a capitalist economy.
Two Requirements.
On this point Okishio's argument is composed of the following two propositions. First, in order for the capitalist economy to work effectively, the production power of humankind in that society should exceed some minimum level, but also should not exceed some maximum level. Second, the production power in the capitalistic economy necessarily advances due to the mechanisms of competition and commercial expansion inherent to the capitalistic mode of production.
Dialectical materialism.
This viewpoint is exactly the same as Marx's historical dialectic. If it is correct, the necessity for capitalist society to be switched to another economic system can be proved by demonstrating the following two points. First, we have to show how production power advances in capitalist society. Next, we have to show what the upper bound of production power is beyond which a capitalist economy can no longer work effectively.
Necessity of a new society.
As for the first, the introduction of new technologies is most important, as shown by many economists such as Schumpeter and others. As for the latter upper bound, Okishio stressed the controllability of the whole economy. We are living in a world where even local economic activity can have global and long-lasting consequences. In this sense, production activities are already socialized in their effects. Decision making, however, is still held exclusively by a small part of the members of society and is executed on the profit-maximizing principle. So he considered that, in order to guarantee the existence of humankind, the capitalist economy has to be changed into an alternative, much more socialized economic system, which is called a socialist economy.
{
"math_id": 0,
"text": "\nt_i = \\sum_j a_{ij}+\\tau_i,\\quad i = 1,\\ldots, n\n"
},
{
"math_id": 1,
"text": "a_{ij}"
},
{
"math_id": 2,
"text": "j"
},
{
"math_id": 3,
"text": "\\tau_i"
},
{
"math_id": 4,
"text": "i"
}
] | https://en.wikipedia.org/wiki?curid=11969002 |
1196909 | Mohr–Mascheroni theorem | Constructions performed by a compass and straightedge can be performed by a compass alone
In mathematics, the Mohr–Mascheroni theorem states that any geometric construction that can be performed by a compass and straightedge can be performed by a compass alone.
It must be understood that "any geometric construction" refers to figures that contain no straight lines, as it is clearly impossible to draw a straight line without a straightedge. It is understood that a line is determined provided that two distinct points on that line are given or constructed, even though no visual representation of the line will be present. The theorem can be stated more precisely as:
"Any Euclidean construction, insofar as the given and required elements are points (or circles), may be completed with the compass alone if it can be completed with both the compass and the straightedge together."
Though the use of a straightedge can make a construction significantly easier, the theorem shows that any set of points that fully defines a constructed figure can be determined with compass alone, and the only reason to use a straightedge is for the aesthetics of seeing straight lines, which for the purposes of construction is functionally unnecessary.
History.
The result was originally published by Georg Mohr in 1672, but his proof languished in obscurity until 1928. The theorem was independently discovered by Lorenzo Mascheroni in 1797 and it was known as "Mascheroni's Theorem" until Mohr's work was rediscovered.
Several proofs of the result are known. Mascheroni's proof of 1797 was generally based on the idea of using reflection in a line as the major tool. Mohr's solution was different. In 1890, August Adler published a proof using the inversion transformation.
An algebraic approach uses the isomorphism between the Euclidean plane and the real coordinate space formula_0. In this way, a stronger version of the theorem was proven in 1990. It also shows the dependence of the theorem on Archimedes' axiom (which cannot be formulated in a first-order language).
Constructive proof.
Outline.
To prove the theorem, each of the basic constructions of compass and straightedge needs to be proven possible using a compass alone, as these are the foundations of, or elementary steps for, all other constructions. These are:
#1: Creating the line through two existing points.
#2: Creating the circle through one point with centre at another point.
#3: Creating the point which is the intersection of two existing, non-parallel lines.
#4: Creating the one or two points in the intersection of a line and a circle (if they intersect).
#5: Creating the one or two points in the intersection of two circles (if they intersect).
Regarding #1, it is understood that a straight line cannot be drawn without a straightedge. A line is considered to be given by any two points, as any such pair defines a unique line. In keeping with the intent of the theorem which we aim to prove, the actual line need not be drawn except for aesthetic reasons.
Regarding #2, this can be done with a compass alone; a straightedge is not required.
Regarding #5, this construction can also be done directly with a compass.
Thus, to prove the theorem, only compass-only constructions for #3 and #4 need to be given.
Notation and remarks.
The following notation will be used throughout this article. A circle whose center is located at point U and that passes through point V will be denoted by "U"("V"). A circle with center U and radius specified by a number, r, or a line segment will be denoted by "U"("r") or "U"("AB"), respectively.
In general constructions there are often several variations that will produce the same result. The choices made in such a variant can be made without loss of generality. However, when a construction is being used to prove that something can be done, it is not necessary to describe all these various choices and, for the sake of clarity of exposition, only one variant will be given below. However, many constructions come in different forms depending on whether or not they use circle inversion and these alternatives will be given if possible.
It is also important to note that some of the constructions below proving the Mohr–Mascheroni theorem require the arbitrary placement of points in space, such as finding the center of a circle when it is not already provided (see construction below). In some construction paradigms, such as the geometric definition of the constructible number, the arbitrary placement of points may be prohibited. In such a paradigm, however, alternative constructions exist so that arbitrary point placement is unnecessary. It is also worth pointing out that no circle could be constructed without the compass, so there is no reason in practice for a center point not to exist.
Some preliminary constructions.
To prove the above constructions #3 and #4, which are included below, a few necessary intermediary constructions are also explained below since they are used and referenced frequently. These are also compass-only constructions. All constructions below rely on #1,#2,#5, and any other construction that is listed prior to it.
Compass equivalence theorem (circle translation).
The ability to translate, or copy, a circle to a new center is vital in these proofs and fundamental to establishing the veracity of the theorem. The creation of a new circle with the same radius as the first, but centered at a different point, is the key feature distinguishing the collapsing compass from the modern, rigid compass. With the rigid compass this is a triviality, but with the collapsing compass it is a question of construction possibility. The equivalence of a collapsing compass and a rigid compass was proved by Euclid (Book I Proposition 2 of "The Elements") using straightedge and collapsing compass when he, essentially, constructs a copy of a circle with a different center. This equivalence can also be established with (collapsing) compass alone, a proof of which can be found in the main article.
Extending the length of a line segment.
This construction can be repeated as often as necessary to find a point Q such that the length of the extended line segment is "n" ⋅ the length of the original line segment, for any positive integer "n".
Inversion in a circle.
Point I is such that the radius r of "B"("r") is to IB as DB is to the radius; or "IB" / "r" = "r" / "DB".
In the event that the above construction fails (that is, the red circle and the black circle do not intersect in two points), find a point Q on the line so that the length of line segment BQ is a positive integral multiple, say n, of the length of BD and is greater than "r" / 2 (this is possible by Archimedes' axiom). Find Q' the inverse of Q in circle "B"("r") as above (the red and black circles must now intersect in two points). The point I is now obtained by extending BQ' so that BI = "n" ⋅ "BQ' ".
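The defining relation IB ⋅ DB = r² also makes the inverse point easy to compute in coordinates, which can be used to check the construction numerically. A small sketch (the coordinates and function name are illustrative, not part of the construction itself):

```python
def invert_in_circle(center, point, r):
    """Return the inverse of `point` in the circle of radius r about `center`,
    using IB * DB = r^2 along the ray from the center through the point."""
    cx, cy = center
    px, py = point
    dx, dy = px - cx, py - cy
    d2 = dx * dx + dy * dy      # DB^2 (the point must not coincide with the center)
    scale = r * r / d2          # IB / DB
    return (cx + scale * dx, cy + scale * dy)

# Inverting D = (2, 0) in the unit circle about B = (0, 0) gives I = (0.5, 0),
# and inverting I again recovers D.
I = invert_in_circle((0.0, 0.0), (2.0, 0.0), 1.0)
print(I, invert_in_circle((0.0, 0.0), I, 1.0))
```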
Intersection of a line and a circle (construction #4).
The compass-only construction of the intersection points of a line and a circle breaks into two cases depending upon whether the center of the circle is or is not collinear with the line.
Circle center is not collinear with the line.
Assume that the center of the circle does not lie on the line.
An alternate construction, using circle inversion, can also be given.
Circle center is collinear with the line.
Thus it has been shown that all of the basic constructions one can perform with a straightedge and compass can be done with a compass alone, provided that it is understood that a line cannot be literally drawn but is merely defined by two points.
Other types of restricted construction.
Restrictions involving the compass.
Renaissance mathematicians Lodovico Ferrari, Gerolamo Cardano and Niccolò Fontana Tartaglia and others were able to show in the 16th century that any ruler-and-compass construction could be accomplished with a straightedge and a fixed-width compass (i.e. a rusty compass).
The compass equivalency theorem shows that in all the constructions mentioned above, the familiar modern compass with its fixable aperture, which can be used to transfer distances, may be replaced with a "collapsible compass", a compass that collapses whenever it is lifted from a page, so that it may not be directly used to transfer distances. Indeed, Euclid's original constructions use a collapsible compass. It is possible to translate any circle in the plane with a collapsing compass using no more than three additional applications of the compass over that of a rigid compass.
Restrictions excluding the compass.
Motivated by Mascheroni's result, in 1822 Jean Victor Poncelet conjectured a variation on the same theme. His work paved the way for the field of projective geometry, and he proposed that any construction possible by straightedge and compass could be done with straightedge alone. The one stipulation, however, is that a single circle with its center identified must be provided. This statement, now known as the Poncelet–Steiner theorem, was proved by Jakob Steiner eleven years later.
A proof later provided in 1904 by Francesco Severi relaxes the requirement that one full circle be provided, and shows that any small arc of the circle, so long as the center is still provided, is still sufficient.
Additionally, the center itself may be omitted instead of portions of the arc, if something else sufficient is provided in its place, such as a second concentric circle, a second intersecting circle, a third circle, or a second circle which is neither intersecting nor concentric, provided that a point on either the centerline or the radical axis between them is given, or that two parallel lines exist in the plane. Other unique conditions may exist.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}^2"
},
{
"math_id": 1,
"text": "D=B"
},
{
"math_id": 2,
"text": "E=E'"
},
{
"math_id": 3,
"text": "D"
},
{
"math_id": 4,
"text": "EB"
},
{
"math_id": 5,
"text": "DB"
},
{
"math_id": 6,
"text": "B"
},
{
"math_id": 7,
"text": "EE'"
},
{
"math_id": 8,
"text": "P=Q"
},
{
"math_id": 9,
"text": "C(r)"
}
] | https://en.wikipedia.org/wiki?curid=1196909 |
11971 | Galaxy formation and evolution | The study of galaxy formation and evolution is concerned with the processes that formed a heterogeneous universe from a homogeneous beginning, the formation of the first galaxies, the way galaxies change over time, and the processes that have generated the variety of structures observed in nearby galaxies. Galaxy formation is hypothesized to occur from structure formation theories, as a result of tiny quantum fluctuations in the aftermath of the Big Bang. The simplest model in general agreement with observed phenomena is the Lambda-CDM model—that is, clustering and merging allows galaxies to accumulate mass, determining both their shape and structure. Hydrodynamics simulation, which simulates both baryons and dark matter, is widely used to study galaxy formation and evolution.
Commonly observed properties of galaxies.
Because of the inability to conduct experiments in outer space, the only way to “test” theories and models of galaxy evolution is to compare them with observations. Explanations for how galaxies formed and evolved must be able to predict the observed properties and types of galaxies.
Edwin Hubble created an early galaxy classification scheme, now known as the Hubble tuning-fork diagram. It partitioned galaxies into ellipticals, normal spirals, barred spirals (such as the Milky Way), and irregulars. These galaxy types exhibit the following properties which can be explained by current galaxy evolution theories:
Astronomers now believe that disk galaxies likely formed first, then evolved into elliptical galaxies through galaxy mergers.
Current models also predict that the majority of mass in galaxies is made up of dark matter, a substance which is not directly observable, and might not interact through any means except gravity. This observation arises because galaxies could not have formed as they have, or rotate as they are seen to, unless they contain far more mass than can be directly observed.
Formation of disk galaxies.
The earliest stage in the evolution of galaxies is their formation. When a galaxy forms, it has a disk shape and is called a spiral galaxy due to spiral-like "arm" structures located on the disk. There are different theories on how these disk-like distributions of stars develop from a cloud of matter; however, at present, none of them exactly predicts the results of observation.
Top-down theories.
Olin Eggen, Donald Lynden-Bell, and Allan Sandage in 1962 proposed a theory that disk galaxies form through a monolithic collapse of a large gas cloud. The distribution of matter in the early universe was in clumps that consisted mostly of dark matter. These clumps interacted gravitationally, putting tidal torques on each other that acted to give them some angular momentum. As the baryonic matter cooled, it dissipated some energy and contracted toward the center. With angular momentum conserved, the matter near the center speeds up its rotation. Then, like a spinning ball of pizza dough, the matter forms into a tight disk. Once the disk cools, the gas is not gravitationally stable, so it cannot remain a singular homogeneous cloud. It breaks up, and these smaller clouds of gas form stars. Since the dark matter does not dissipate, as it only interacts gravitationally, it remains distributed outside the disk in what is known as the dark halo. Observations show that there are stars located outside the disk, which does not quite fit the "pizza dough" model; it was Leonard Searle and Robert Zinn who first proposed instead that galaxies form by the coalescence of smaller progenitors. The monolithic-collapse picture, known as a top-down formation scenario, is quite simple yet no longer widely accepted.
Bottom-up theory.
More recent theories include the clustering of dark matter halos in the bottom-up process. Instead of large gas clouds collapsing to form a galaxy in which the gas breaks up into smaller clouds, it is proposed that matter started out in these “smaller” clumps (mass on the order of globular clusters), and then many of these clumps merged to form galaxies, which then were drawn by gravitation to form galaxy clusters. This still results in disk-like distributions of baryonic matter with dark matter forming the halo for all the same reasons as in the top-down theory. Models using this sort of process predict more small galaxies than large ones, which matches observations.
Astronomers do not currently know what process stops the contraction. In fact, theories of disk galaxy formation are not successful at producing the rotation speed and size of disk galaxies. It has been suggested that the radiation from bright newly formed stars, or from an active galactic nucleus can slow the contraction of a forming disk. It has also been suggested that the dark matter halo can pull the galaxy, thus stopping disk contraction.
The Lambda-CDM model is a cosmological model that explains the formation of the universe after the Big Bang. It is a relatively simple model that predicts many properties observed in the universe, including the relative frequency of different galaxy types; however, it underestimates the number of thin disk galaxies in the universe. The reason is that these galaxy formation models predict a large number of mergers. If disk galaxies merge with another galaxy of comparable mass (at least 15 percent of its mass) the merger will likely destroy, or at a minimum greatly disrupt the disk, and the resulting galaxy is not expected to be a disk galaxy (see next section). While this remains an unsolved problem for astronomers, it does not necessarily mean that the Lambda-CDM model is completely wrong, but rather that it requires further refinement to accurately reproduce the population of galaxies in the universe.
Galaxy mergers and the formation of elliptical galaxies.
Elliptical galaxies (most notably supergiant ellipticals, such as ESO 306-17) are among some of the largest known thus far. Their stars are on orbits that are randomly oriented within the galaxy (i.e. they are not rotating like disk galaxies). A distinguishing feature of elliptical galaxies is that the velocity of the stars does not necessarily contribute to flattening of the galaxy, such as in spiral galaxies. Elliptical galaxies have central supermassive black holes, and the masses of these black holes correlate with the galaxy's mass.
Elliptical galaxies have two main stages of evolution. The first is due to the supermassive black hole growing by accreting cooling gas. The second stage is marked by the black hole stabilizing by suppressing gas cooling, thus leaving the elliptical galaxy in a stable state. The mass of the black hole is also correlated to a property called sigma which is the dispersion of the velocities of stars in their orbits. This relationship, known as the M-sigma relation, was discovered in 2000. Elliptical galaxies mostly lack disks, although some bulges of disk galaxies resemble elliptical galaxies. Elliptical galaxies are more likely found in crowded regions of the universe (such as galaxy clusters).
Astronomers now see elliptical galaxies as some of the most evolved systems in the universe. It is widely accepted that the main driving force for the evolution of elliptical galaxies is mergers of smaller galaxies. Many galaxies in the universe are gravitationally bound to other galaxies, which means that they will never escape their mutual pull. If those colliding galaxies are of similar size, the resultant galaxy will appear similar to neither of the progenitors, but will instead be elliptical. There are many types of galaxy mergers, which do not necessarily result in elliptical galaxies, but result in a structural change. For example, a minor merger event is thought to be occurring between the Milky Way and the Magellanic Clouds.
Mergers between such large galaxies are regarded as violent, and the frictional interaction of the gas between the two galaxies can cause gravitational shock waves, which are capable of forming new stars in the new elliptical galaxy. By sequencing several images of different galactic collisions, one can observe the timeline of two spiral galaxies merging into a single elliptical galaxy.
In the Local Group, the Milky Way and the Andromeda Galaxy are gravitationally bound, and currently approaching each other at high speed. Simulations show that the Milky Way and Andromeda are on a collision course, and are expected to collide in less than five billion years. During this collision, it is expected that the Sun and the rest of the Solar System will be ejected from its current path around the Milky Way. The remnant could be a giant elliptical galaxy.
Galaxy quenching.
One observation that must be explained by a successful theory of galaxy evolution is the existence of two different populations of galaxies on the galaxy color-magnitude diagram. Most galaxies tend to fall into two separate locations on this diagram: a "red sequence" and a "blue cloud". Red sequence galaxies are generally non-star-forming elliptical galaxies with little gas and dust, while blue cloud galaxies tend to be dusty star-forming spiral galaxies.
As described in previous sections, galaxies tend to evolve from spiral to elliptical structure via mergers. However, the current rate of galaxy mergers does not explain how all galaxies move from the "blue cloud" to the "red sequence". It also does not explain how star formation ceases in galaxies. Theories of galaxy evolution must therefore be able to explain how star formation turns off in galaxies. This phenomenon is called galaxy "quenching".
Stars form out of cold gas (see also the Kennicutt–Schmidt law), so a galaxy is quenched when it has no more cold gas. However, it is thought that quenching occurs relatively quickly (within 1 billion years), which is much shorter than the time it would take for a galaxy to simply use up its reservoir of cold gas. Galaxy evolution models explain this by hypothesizing other physical mechanisms that remove or shut off the supply of cold gas in a galaxy. These mechanisms can be broadly classified into two categories: (1) preventive feedback mechanisms that stop cold gas from entering a galaxy or stop it from producing stars, and (2) ejective feedback mechanisms that remove gas so that it cannot form stars.
One theorized preventive mechanism called “strangulation” keeps cold gas from entering the galaxy. Strangulation is likely the main mechanism for quenching star formation in nearby low-mass galaxies. The exact physical explanation for strangulation is still unknown, but it may have to do with a galaxy's interactions with other galaxies. As a galaxy falls into a galaxy cluster, gravitational interactions with other galaxies can strangle it by preventing it from accreting more gas. For galaxies with massive dark matter halos, another preventive mechanism called “virial shock heating” may also prevent gas from becoming cool enough to form stars.
Ejective processes, which expel cold gas from galaxies, may explain how more massive galaxies are quenched. One ejective mechanism is caused by supermassive black holes found in the centers of galaxies. Simulations have shown that gas accreting onto supermassive black holes in galactic centers produces high-energy jets; the released energy can expel enough cold gas to quench star formation.
Our own Milky Way and the nearby Andromeda Galaxy currently appear to be undergoing the quenching transition from star-forming blue galaxies to passive red galaxies.
Hydrodynamics Simulation.
Dark energy and dark matter account for most of the Universe's energy, so it is valid to ignore baryons when simulating large-scale structure formation (using methods such as N-body simulation). However, since the visible components of galaxies consist of baryons, it is crucial to include baryons in the simulation to study the detailed structures of galaxies. At first, the baryon component consists of mostly hydrogen and helium gas, which later transforms into stars during the formation of structures. From observations, models used in simulations can be tested and the understanding of different stages of galaxy formation can be improved.
Euler equations.
In cosmological simulations, astrophysical gases are typically modeled as inviscid ideal gases that follow the Euler equations, which can be expressed mainly in three different ways: Lagrangian, Eulerian, or arbitrary Lagrange-Eulerian methods. Different methods give specific forms of hydrodynamical equations. When using the Lagrangian approach to specify the field, it is assumed that the observer tracks a specific fluid parcel with its unique characteristics during its movement through space and time. In contrast, the Eulerian approach emphasizes particular locations in space that the fluid passes through as time progresses.
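In the Eulerian (conservative) form often used in such codes, the equations for an inviscid ideal gas can be written as below; gravitational source terms and radiative heating/cooling terms, which real simulations add on the right-hand sides, are omitted here for brevity.

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0, \qquad
\frac{\partial (\rho \mathbf{v})}{\partial t} + \nabla \cdot \left(\rho \mathbf{v} \otimes \mathbf{v} + P \, \mathbb{I}\right) = 0, \qquad
\frac{\partial E}{\partial t} + \nabla \cdot \left[(E + P)\,\mathbf{v}\right] = 0,
```

where ρ is the gas density, v the velocity, P the pressure, E = ρu + ρ|v|²/2 the total energy density, and the system is closed by an ideal-gas equation of state such as P = (γ − 1)ρu.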
Baryonic Physics.
To shape the population of galaxies, the hydrodynamical equations must be supplemented by a variety of astrophysical processes mainly governed by baryonic physics.
Gas cooling.
Processes such as collisional excitation, ionization, and inverse Compton scattering can cause the internal energy of the gas to be dissipated. In simulations, cooling processes are realized by coupling cooling functions to the energy equation. Besides the primordial cooling, at high temperatures, formula_0, cooling by heavy elements (metals) dominates. When formula_1, fine-structure and molecular cooling also need to be considered in order to simulate the cold phase of the interstellar medium.
Interstellar medium.
The complex multi-phase structure of the interstellar medium, including relativistic particles and magnetic fields, makes its simulation difficult. In particular, modeling the cold phase of the interstellar medium poses technical difficulties because of the short timescales associated with the dense gas. In early simulations, the dense gas phase was frequently not modeled directly but rather characterized by an effective polytropic equation of state. More recent simulations use a multimodal distribution to describe the gas density and temperature distributions, which directly models the multi-phase structure. However, more detailed physical processes need to be considered in future simulations, since the structure of the interstellar medium directly affects star formation.
Star formation.
As cold and dense gas accumulates, it undergoes gravitational collapse and eventually forms stars. To simulate this process, a portion of the gas is transformed into collisionless star particles, which represent coeval, single-metallicity stellar populations and are described by an underlying initial mass function. Observations suggest that the star formation efficiency in molecular gas is almost universal, with around 1% of the gas being converted into stars per free-fall time. In simulations, the gas is typically converted into star particles using a probabilistic sampling scheme based on the calculated star formation rate, as illustrated below. Some simulations seek an alternative to the probabilistic sampling scheme and aim to better capture the clustered nature of star formation by treating star clusters as the fundamental unit of star formation. This approach permits the growth of star particles by accreting material from the surrounding medium. In addition to this, modern models of galaxy formation track the evolution of these stars and the mass they return to the gas component, leading to an enrichment of the gas with metals.
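A generic sketch of such a stochastic conversion step (this is an illustrative recipe with made-up parameter values, not the scheme of any particular simulation code): a sufficiently dense gas cell is converted into a star particle with probability 1 − exp(−ε Δt / t_ff) per time step, where ε is the star formation efficiency per free-fall time t_ff.

```python
import numpy as np

def maybe_form_star(gas_mass, rho, dt, eps=0.01, rho_threshold=1e-22, G=6.674e-8):
    """Stochastically convert a gas cell into a star particle (illustrative sketch, cgs units)."""
    if rho < rho_threshold:
        return None                                     # not dense enough to form stars
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho))      # local free-fall time
    p = 1.0 - np.exp(-eps * dt / t_ff)                  # conversion probability this step
    if np.random.random() < p:
        return gas_mass                                 # mass of the new star particle
    return None
```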
Stellar feedback.
Stars have an influence on their surrounding gas by injecting energy and momentum. This creates a feedback loop that regulates the process of star formation. To effectively control star formation, stellar feedback must generate galactic-scale outflows that expel gas from galaxies. Various methods are utilized to couple energy and momentum, particularly through supernova explosions, to the surrounding gas. These methods differ in how the energy is deposited, either thermally or kinetically. However, excessive radiative gas cooling must be avoided in the former case. Cooling is expected in dense and cold gas, but it cannot be reliably modeled in cosmological simulations due to low resolution. This leads to artificial and excessive cooling of the gas, causing the supernova feedback energy to be lost via radiation and significantly reducing its effectiveness. In the latter case, kinetic energy cannot be radiated away until it thermalizes. However, using hydrodynamically decoupled wind particles to inject momentum non-locally into the gas surrounding active star-forming regions may still be necessary to achieve large-scale galactic outflows. Recent models explicitly model stellar feedback. These models not only incorporate supernova feedback but also consider other feedback channels such as energy and momentum injection from stellar winds, photoionization, and radiation pressure resulting from radiation emitted by young, massive stars. During the Cosmic Dawn, galaxy formation occurred in short bursts of 5 to 30 Myr due to stellar feedback.
Supermassive black holes.
Supermassive black holes are also included in the simulations, numerically seeded in dark matter haloes, because of their observation in many galaxies and the impact of their mass on the mass density distribution. Their mass accretion rate is frequently modeled by the Bondi-Hoyle model.
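The Bondi–Hoyle rate used in many such models takes the form Ṁ = 4πG²M²ρ / (c_s² + v²)^(3/2), sometimes multiplied by an additional boost factor; a direct transcription (cgs units, with the boost factor as an assumed optional parameter):

```python
import numpy as np

def bondi_hoyle_rate(m_bh, rho, c_s, v_rel, G=6.674e-8, alpha=1.0):
    """Bondi-Hoyle accretion rate in cgs units; alpha is an optional boost factor
    used in some simulations (alpha = 1 gives the plain formula)."""
    return alpha * 4.0 * np.pi * G**2 * m_bh**2 * rho / (c_s**2 + v_rel**2) ** 1.5
```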
Active galactic nuclei.
Active galactic nuclei (AGN) affect the observational phenomenology of supermassive black holes and, further, regulate black hole growth and star formation. In simulations, AGN feedback is usually classified into two modes, namely quasar and radio mode. Quasar mode feedback is linked to the radiatively efficient mode of black hole growth and is frequently incorporated through energy or momentum injection. The regulation of star formation in massive galaxies is believed to be significantly influenced by radio mode feedback, which occurs due to the presence of highly collimated jets of relativistic particles. These jets are typically linked to X-ray bubbles that possess enough energy to counterbalance cooling losses.
Magnetic fields.
The ideal magnetohydrodynamics approach is commonly utilized in cosmological simulations since it provides a good approximation for cosmological magnetic fields. The effect of magnetic fields on the dynamics of gas is generally negligible on large cosmological scales. Nevertheless, magnetic fields are a critical component of the interstellar medium since they provide pressure support against gravity and affect the propagation of cosmic rays.
Cosmic rays.
Cosmic rays play a significant role in the interstellar medium by contributing to its pressure, serving as a crucial heating channel, and potentially driving galactic gas outflows. The propagation of cosmic rays is highly affected by magnetic fields. So in the simulation, equations describing the cosmic ray energy and flux are coupled to magnetohydrodynamics equations.
Radiation Hydrodynamics.
Radiation hydrodynamics simulations are computational methods used to study the interaction of radiation with matter. In astrophysical contexts, radiation hydrodynamics is used to study the epoch of reionization, when the Universe was at high redshift. There are several numerical methods used for radiation hydrodynamics simulations, including ray-tracing, Monte Carlo, and moment-based methods. Ray-tracing involves tracing the paths of individual photons through the simulation and computing their interactions with matter at each step. This method is computationally expensive but can produce very accurate results.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ 10^5K< T < 10^7K\\,"
},
{
"math_id": 1,
"text": "\\ T < 10^4K\\,"
}
] | https://en.wikipedia.org/wiki?curid=11971 |
1197123 | Initial topology | Coarsest topology making certain functions continuous
In general topology and related areas of mathematics, the initial topology (or induced topology or weak topology or limit topology or projective topology) on a set formula_0 with respect to a family of functions on formula_0 is the coarsest topology on formula_1 that makes those functions continuous.
The subspace topology and product topology constructions are both special cases of initial topologies. Indeed, the initial topology construction can be viewed as a generalization of these.
The dual notion is the final topology, which for a given family of functions mapping to a set formula_2 is the finest topology on formula_2 that makes those functions continuous.
Definition.
Given a set formula_1 and an indexed family formula_3 of topological spaces with functions
formula_4
the initial topology formula_5 on formula_1 is the coarsest topology on formula_1 such that each
formula_6
is continuous.
Definition in terms of open sets
If formula_7 is a family of topologies on formula_1 indexed by formula_8 then the least upper bound topology of these topologies is the coarsest topology on formula_1 that is finer than each formula_9 This topology always exists and is equal to the topology generated by formula_10
If for every formula_11 formula_12 denotes the topology on formula_13 then formula_14 is a topology on formula_1, and the initial topology of the formula_15 by the mappings formula_16 is the least upper bound topology of the formula_17-indexed family of topologies formula_18 (for formula_19).
Explicitly, the initial topology is the collection of open sets generated by all sets of the form formula_20 where formula_21 is an open set in formula_15 for some formula_11 under finite intersections and arbitrary unions.
Sets of the form formula_22 are often called cylinder sets. If formula_17 contains exactly one element, then all the open sets of the initial topology formula_23 are cylinder sets.
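As a concrete illustration (a standard example, not tied to any particular reference), consider the initial topology induced by a single map:

```latex
f : \mathbb{R} \to \mathbb{R}, \qquad f(x) = x^2 ,
\qquad \tau = \{ f^{-1}(U) : U \subseteq \mathbb{R} \text{ open} \}
            = \{ V \subseteq \mathbb{R} \text{ open} : V = -V \}.
```

Since preimages preserve unions and finite intersections, the sets f^{-1}(U) already form a topology, namely the open subsets of the real line that are symmetric about the origin. Because f does not separate a point x from −x, this topology is not Hausdorff, in line with the Hausdorffness criterion given below.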
Examples.
Several topological constructions can be regarded as special cases of the initial topology.
Properties.
Characteristic property.
The initial topology on formula_1 can be characterized by the following characteristic property:
A function formula_28 from some space formula_29 to formula_1 is continuous if and only if formula_30 is continuous for each formula_31
Note that, despite looking quite similar, this is not a universal property. A categorical description is given below.
A filter formula_32 on formula_1 converges to a point formula_33 if and only if the prefilter formula_34 converges to formula_35 for every formula_31
Evaluation.
By the universal property of the product topology, we know that any family of continuous maps formula_36 determines a unique continuous map
formula_37
This map is known as the <templatestyles src="Template:Visible anchor/styles.css" />evaluation map.
A family of maps formula_38 is said to "<templatestyles src="Template:Visible anchor/styles.css" />separate points" in formula_1 if for all formula_39 in formula_1 there exists some formula_40 such that formula_41 The family formula_42 separates points if and only if the associated evaluation map formula_43 is injective.
The evaluation map formula_43 will be a topological embedding if and only if formula_1 has the initial topology determined by the maps formula_42 and this family of maps separates points in formula_26
Hausdorffness
If formula_1 has the initial topology induced by formula_44 and if every formula_15 is Hausdorff, then formula_1 is a Hausdorff space if and only if these maps separate points on formula_26
Transitivity of the initial topology.
If formula_1 has the initial topology induced by the formula_17-indexed family of mappings formula_44 and if for every formula_11 the topology on formula_15 is the initial topology induced by some formula_45-indexed family of mappings formula_46 (as formula_47 ranges over formula_45), then the initial topology on formula_1 induced by formula_44 is equal to the initial topology induced by the formula_48-indexed family of mappings formula_49 as formula_40 ranges over formula_17 and formula_47 ranges over formula_50
Several important corollaries of this fact are now given.
In particular, if formula_51 then the subspace topology that formula_52 inherits from formula_1 is equal to the initial topology induced by the inclusion map formula_53 (defined by formula_54). Consequently, if formula_1 has the initial topology induced by formula_44 then the subspace topology that formula_52 inherits from formula_1 is equal to the initial topology induced on formula_52 by the restrictions formula_55 of the formula_16 to formula_56
The product topology on formula_57 is equal to the initial topology induced by the canonical projections formula_58 as formula_40 ranges over formula_59
Consequently, the initial topology on formula_1 induced by formula_44 is equal to the inverse image of the product topology on formula_57 by the evaluation map formula_60 Furthermore, if the maps formula_61 separate points on formula_1 then the evaluation map is a homeomorphism onto the subspace formula_62 of the product space formula_63
Separating points from closed sets.
If a space formula_1 comes equipped with a topology, it is often useful to know whether or not the topology on formula_1 is the initial topology induced by some family of maps on formula_26 This section gives a sufficient (but not necessary) condition.
A family of maps formula_44 "separates points from closed sets" in formula_1 if for all closed sets formula_64 in formula_1 and all formula_65 there exists some formula_40 such that
formula_66
where formula_67 denotes the closure operator.
Theorem. A family of continuous maps formula_44 separates points from closed sets if and only if the cylinder sets formula_68 for formula_69 open in formula_13 form a base for the topology on formula_26
It follows that whenever formula_70 separates points from closed sets, the space formula_1 has the initial topology induced by the maps formula_71 The converse fails, since generally the cylinder sets will only form a subbase (and not a base) for the initial topology.
If the space formula_1 is a T0 space, then any collection of maps formula_70 that separates points from closed sets in formula_1 must also separate points. In this case, the evaluation map will be an embedding.
Initial uniform structure.
If formula_72 is a family of uniform structures on formula_1 indexed by formula_8 then the least upper bound uniform structure of formula_72 is the coarsest uniform structure on formula_1 that is finer than each formula_73 This uniform structure always exists and is equal to the filter on formula_74 generated by the filter subbase formula_75
If formula_76 is the topology on formula_1 induced by the uniform structure formula_77 then the topology on formula_1 associated with least upper bound uniform structure is equal to the least upper bound topology of formula_78
Now suppose that formula_44 is a family of maps and for every formula_11 let formula_77 be a uniform structure on formula_79 Then the initial uniform structure of the formula_15 by the mappings formula_16 is the unique coarsest uniform structure formula_80 on formula_1 making all formula_81 uniformly continuous. It is equal to the least upper bound uniform structure of the formula_17-indexed family of uniform structures formula_82 (for formula_19).
The topology on formula_1 induced by formula_80 is the coarsest topology on formula_1 such that every formula_36 is continuous.
The initial uniform structure formula_80 is also equal to the coarsest uniform structure such that the identity mappings formula_83 are uniformly continuous.
Hausdorffness: The topology on formula_1 induced by the initial uniform structure formula_80 is Hausdorff if and only if whenever formula_84 are distinct (formula_39), there exist some formula_19 and some entourage formula_85 of formula_15 such that formula_86
Furthermore, if for every index formula_11 the topology on formula_15 induced by formula_77 is Hausdorff then the topology on formula_1 induced by the initial uniform structure formula_80 is Hausdorff if and only if the maps formula_44 separate points on formula_1 (or equivalently, if and only if the evaluation map formula_87 is injective)
Uniform continuity: If formula_80 is the initial uniform structure induced by the mappings formula_88 then a function formula_28 from some uniform space formula_29 into formula_89 is uniformly continuous if and only if formula_90 is uniformly continuous for each formula_31
Cauchy filter: A filter formula_32 on formula_1 is a Cauchy filter on formula_89 if and only if formula_91 is a Cauchy prefilter on formula_15 for every formula_31
Transitivity of the initial uniform structure: If the word "topology" is replaced with "uniform structure" in the statement of "transitivity of the initial topology" given above, then the resulting statement will also be true.
Categorical description.
In the language of category theory, the initial topology construction can be described as follows. Let formula_2 be the functor from a discrete category formula_92 to the category of topological spaces formula_93 which maps formula_94. Let formula_21 be the usual forgetful functor from formula_93 to formula_95. The maps formula_96 can then be thought of as a cone from formula_1 to formula_97 That is, formula_98 is an object of formula_99—the category of cones to formula_97 More precisely, this cone formula_98 defines a formula_21-structured cosink in formula_100
The forgetful functor formula_101 induces a functor formula_102. The characteristic property of the initial topology is equivalent to the statement that there exists a universal morphism from formula_103 to formula_104 that is, a terminal object in the category formula_105
Explicitly, this consists of an object formula_106 in formula_107 together with a morphism formula_108 such that for any object formula_109 in formula_107 and morphism formula_110 there exists a unique morphism formula_111 such that the following diagram commutes:
The assignment formula_112 placing the initial topology on formula_1 extends to a functor
formula_113
which is right adjoint to the forgetful functor formula_114 In fact, formula_17 is a right-inverse to formula_103, since formula_115 is the identity functor on formula_116
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X,"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "Y"
},
{
"math_id": 3,
"text": "\\left(Y_i\\right)_{i \\in I}"
},
{
"math_id": 4,
"text": "f_i : X \\to Y_i,"
},
{
"math_id": 5,
"text": "\\tau"
},
{
"math_id": 6,
"text": "f_i : (X, \\tau) \\to Y_i"
},
{
"math_id": 7,
"text": "\\left(\\tau_i\\right)_{i \\in I}"
},
{
"math_id": 8,
"text": "I \\neq \\varnothing,"
},
{
"math_id": 9,
"text": "\\tau_i."
},
{
"math_id": 10,
"text": "{\\textstyle \\bigcup\\limits_{i \\in I} \\tau_i}."
},
{
"math_id": 11,
"text": "i \\in I,"
},
{
"math_id": 12,
"text": "\\sigma_i"
},
{
"math_id": 13,
"text": "Y_i,"
},
{
"math_id": 14,
"text": "f_i^{-1}\\left(\\sigma_i\\right) = \\left\\{f_i^{-1}(V) : V \\in \\sigma_i\\right\\}"
},
{
"math_id": 15,
"text": "Y_i"
},
{
"math_id": 16,
"text": "f_i"
},
{
"math_id": 17,
"text": "I"
},
{
"math_id": 18,
"text": "f_i^{-1}\\left(\\sigma_i\\right)"
},
{
"math_id": 19,
"text": "i \\in I"
},
{
"math_id": 20,
"text": "f_i^{-1}(U),"
},
{
"math_id": 21,
"text": "U"
},
{
"math_id": 22,
"text": "f_i^{-1}(V)"
},
{
"math_id": 23,
"text": "(X, \\tau)"
},
{
"math_id": 24,
"text": "\\left\\{\\tau_i\\right\\}"
},
{
"math_id": 25,
"text": "\\operatorname{id}_i : X \\to \\left(X, \\tau_i\\right)"
},
{
"math_id": 26,
"text": "X."
},
{
"math_id": 27,
"text": "\\left\\{\\tau_i\\right\\}."
},
{
"math_id": 28,
"text": "g"
},
{
"math_id": 29,
"text": "Z"
},
{
"math_id": 30,
"text": "f_i \\circ g"
},
{
"math_id": 31,
"text": "i \\in I."
},
{
"math_id": 32,
"text": "\\mathcal{B}"
},
{
"math_id": 33,
"text": "x \\in X"
},
{
"math_id": 34,
"text": "f_i(\\mathcal{B})"
},
{
"math_id": 35,
"text": "f_i(x)"
},
{
"math_id": 36,
"text": "f_i : X \\to Y_i"
},
{
"math_id": 37,
"text": "\\begin{alignat}{4}\nf :\\;&& X &&\\;\\to \\;& \\prod_i Y_i \\\\[0.3ex]\n && x &&\\;\\mapsto\\;& \\left(f_i(x)\\right)_{i \\in I} \\\\\n\\end{alignat}"
},
{
"math_id": 38,
"text": "\\{f_i : X \\to Y_i\\}"
},
{
"math_id": 39,
"text": "x \\neq y"
},
{
"math_id": 40,
"text": "i"
},
{
"math_id": 41,
"text": "f_i(x) \\neq f_i(y)."
},
{
"math_id": 42,
"text": "\\{f_i\\}"
},
{
"math_id": 43,
"text": "f"
},
{
"math_id": 44,
"text": "\\left\\{f_i : X \\to Y_i\\right\\}"
},
{
"math_id": 45,
"text": "J_i"
},
{
"math_id": 46,
"text": "\\left\\{g_j : Y_i \\to Z_j\\right\\}"
},
{
"math_id": 47,
"text": "j"
},
{
"math_id": 48,
"text": "{\\textstyle \\bigcup\\limits_{i \\in I} J_i}"
},
{
"math_id": 49,
"text": "\\left\\{g_j \\circ f_i : X \\to Z_j\\right\\}"
},
{
"math_id": 50,
"text": "J_i."
},
{
"math_id": 51,
"text": "S \\subseteq X"
},
{
"math_id": 52,
"text": "S"
},
{
"math_id": 53,
"text": "S \\to X"
},
{
"math_id": 54,
"text": "s \\mapsto s"
},
{
"math_id": 55,
"text": "\\left\\{\\left.f_i\\right|_S : S \\to Y_i\\right\\}"
},
{
"math_id": 56,
"text": "S."
},
{
"math_id": 57,
"text": "\\prod_i Y_i"
},
{
"math_id": 58,
"text": "\\operatorname{pr}_i : \\left(x_k\\right)_{k \\in I} \\mapsto x_i"
},
{
"math_id": 59,
"text": "I."
},
{
"math_id": 60,
"text": "f : X \\to \\prod_i Y_i\\,."
},
{
"math_id": 61,
"text": "\\left\\{f_i\\right\\}_{i \\in I}"
},
{
"math_id": 62,
"text": "f(X)"
},
{
"math_id": 63,
"text": "\\prod_i Y_i."
},
{
"math_id": 64,
"text": "A"
},
{
"math_id": 65,
"text": "x \\not\\in A,"
},
{
"math_id": 66,
"text": "f_i(x) \\notin \\operatorname{cl}(f_i(A))"
},
{
"math_id": 67,
"text": "\\operatorname{cl}"
},
{
"math_id": 68,
"text": "f_i^{-1}(V),"
},
{
"math_id": 69,
"text": "V"
},
{
"math_id": 70,
"text": "\\left\\{f_i\\right\\}"
},
{
"math_id": 71,
"text": "\\left\\{f_i\\right\\}."
},
{
"math_id": 72,
"text": "\\left(\\mathcal{U}_i\\right)_{i \\in I}"
},
{
"math_id": 73,
"text": "\\mathcal{U}_i."
},
{
"math_id": 74,
"text": "X \\times X"
},
{
"math_id": 75,
"text": "{\\textstyle \\bigcup\\limits_{i \\in I} \\mathcal{U}_i}."
},
{
"math_id": 76,
"text": "\\tau_i"
},
{
"math_id": 77,
"text": "\\mathcal{U}_i"
},
{
"math_id": 78,
"text": "\\left(\\tau_i\\right)_{i \\in I}."
},
{
"math_id": 79,
"text": "Y_i."
},
{
"math_id": 80,
"text": "\\mathcal{U}"
},
{
"math_id": 81,
"text": "f_i : \\left(X, \\mathcal{U}\\right) \\to \\left(Y_i, \\mathcal{U}_i\\right)"
},
{
"math_id": 82,
"text": "f_i^{-1}\\left(\\mathcal{U}_i\\right)"
},
{
"math_id": 83,
"text": "\\operatorname{id} : \\left(X, \\mathcal{U}\\right) \\to \\left(X, f_i^{-1}\\left(\\mathcal{U}_i\\right)\\right)"
},
{
"math_id": 84,
"text": "x, y \\in X"
},
{
"math_id": 85,
"text": "V_i \\in \\mathcal{U}_i"
},
{
"math_id": 86,
"text": "\\left(f_i(x), f_i(y)\\right) \\not\\in V_i."
},
{
"math_id": 87,
"text": "f : X \\to \\prod_i Y_i"
},
{
"math_id": 88,
"text": "\\left\\{f_i : X \\to Y_i\\right\\},"
},
{
"math_id": 89,
"text": "(X, \\mathcal{U})"
},
{
"math_id": 90,
"text": "f_i \\circ g : Z \\to Y_i"
},
{
"math_id": 91,
"text": "f_i\\left(\\mathcal{B}\\right)"
},
{
"math_id": 92,
"text": "J"
},
{
"math_id": 93,
"text": "\\mathrm{Top}"
},
{
"math_id": 94,
"text": "j\\mapsto Y_j"
},
{
"math_id": 95,
"text": "\\mathrm{Set}"
},
{
"math_id": 96,
"text": "f_j : X \\to Y_j"
},
{
"math_id": 97,
"text": "UY."
},
{
"math_id": 98,
"text": "(X,f)"
},
{
"math_id": 99,
"text": "\\mathrm{Cone}(UY) := (\\Delta\\downarrow{UY})"
},
{
"math_id": 100,
"text": "\\mathrm{Set}."
},
{
"math_id": 101,
"text": "U : \\mathrm{Top} \\to \\mathrm{Set}"
},
{
"math_id": 102,
"text": "\\bar{U} : \\mathrm{Cone}(Y) \\to \\mathrm{Cone}(UY)"
},
{
"math_id": 103,
"text": "\\bar{U}"
},
{
"math_id": 104,
"text": "(X,f);"
},
{
"math_id": 105,
"text": "\\left(\\bar{U}\\downarrow(X,f)\\right)."
},
{
"math_id": 106,
"text": "I(X,f)"
},
{
"math_id": 107,
"text": "\\mathrm{Cone}(Y)"
},
{
"math_id": 108,
"text": "\\varepsilon : \\bar{U} I(X,f) \\to (X,f)"
},
{
"math_id": 109,
"text": "(Z,g)"
},
{
"math_id": 110,
"text": "\\varphi : \\bar{U}(Z,g) \\to (X,f)"
},
{
"math_id": 111,
"text": "\\zeta : (Z,g) \\to I(X,f)"
},
{
"math_id": 112,
"text": "(X,f) \\mapsto I(X,f)"
},
{
"math_id": 113,
"text": "I : \\mathrm{Cone}(UY) \\to \\mathrm{Cone}(Y)"
},
{
"math_id": 114,
"text": "\\bar{U}."
},
{
"math_id": 115,
"text": "\\bar{U}I"
},
{
"math_id": 116,
"text": "\\mathrm{Cone}(UY)."
}
] | https://en.wikipedia.org/wiki?curid=1197123 |
1197201 | Direct digital synthesis | Method for creating waveforms
Direct digital synthesis (DDS) is a method employed by frequency synthesizers used for creating arbitrary waveforms from a single, fixed-frequency reference clock. DDS is used in applications such as signal generation, local oscillators in communication systems, function generators, mixers, modulators, sound synthesizers and as part of a digital phase-locked loop.
Overview.
A basic Direct Digital Synthesizer consists of a frequency reference (often a crystal or SAW oscillator), a numerically controlled oscillator (NCO) and a digital-to-analog converter (DAC) as shown in Figure 1.
The reference oscillator provides a stable time base for the system and determines the frequency accuracy of the DDS. It provides the clock to the "NCO", which produces at its output a discrete-time, quantized version of the desired output waveform (often a sinusoid) whose period is controlled by the digital word contained in the "Frequency Control Register". The sampled, digital waveform is converted to an analog waveform by the "DAC". The output reconstruction filter rejects the spectral replicas produced by the zero-order hold inherent in the analog conversion process.
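The NCO described above is essentially a phase accumulator driving a sine lookup table, with the frequency control register setting the accumulator step. The following Python sketch is a simplified model of that signal path; every name and parameter value in it (clock frequency, bit widths, tuning word) is an illustrative assumption rather than something specified in the article.

```python
import numpy as np

# Simplified model of the DDS signal path: an N-bit phase accumulator is
# advanced by the frequency control word on every reference-clock tick, and
# the truncated phase indexes a quantized sine lookup table (the DAC input).
F_CLK = 100e6        # reference clock frequency in Hz (assumed)
ACC_BITS = 32        # phase-accumulator width in bits
LUT_BITS = 10        # sine lookup-table address width
DAC_BITS = 12        # DAC resolution

fcr = 429_496_730    # frequency control word, ~2**32 * (10 MHz / 100 MHz)
f_out = fcr * F_CLK / 2**ACC_BITS      # output frequency set by the FCR
f_res = F_CLK / 2**ACC_BITS            # tuning resolution (~0.023 Hz here)

lut = np.round((2**(DAC_BITS - 1) - 1) *
               np.sin(2 * np.pi * np.arange(2**LUT_BITS) / 2**LUT_BITS))

phase, samples = 0, []
for _ in range(1000):                          # 1000 reference-clock ticks
    phase = (phase + fcr) % 2**ACC_BITS        # accumulator wraps modulo 2^N
    samples.append(lut[phase >> (ACC_BITS - LUT_BITS)])   # phase truncation

print(f"f_out = {f_out/1e6:.6f} MHz, resolution = {f_res:.3f} Hz")
```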
Performance.
A DDS has many advantages over its analog counterpart, the phase-locked loop (PLL), including much better frequency agility, improved phase noise, and precise control of the output phase across frequency switching transitions. Disadvantages include spurious responses mainly due to truncation effects in the NCO, crossing spurs resulting from high order (>1) Nyquist images, and a higher noise floor at large frequency offsets due mainly to the digital-to-analog converter.
Because a DDS is a sampled system, in addition to the desired waveform at output frequency Fout, Nyquist images are also generated (the primary image is at Fclk-Fout, where Fclk is the reference clock frequency). In order to reject these undesired images, a DDS is generally used in conjunction with an analog reconstruction lowpass filter as shown in Figure 1.
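As a small arithmetic illustration of where these images fall (the clock and output frequencies below are assumed example values, not figures from the article):

```python
# With an assumed F_clk = 100 MHz and F_out = 30 MHz, the sampled output also
# contains images at |k*F_clk +/- F_out|; the primary image sits at
# F_clk - F_out = 70 MHz and must be rejected by the reconstruction filter.
F_CLK, F_OUT = 100e6, 30e6
images = sorted({abs(k * F_CLK + s * F_OUT) for k in (1, 2) for s in (-1, 1)})
print([f / 1e6 for f in images])   # [70.0, 130.0, 170.0, 230.0] MHz
```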
Frequency agility.
The output frequency of a DDS is determined by the value stored in the frequency control register (FCR) (see Fig.1), which in turn controls the NCO's phase accumulator step size. Because the NCO operates in the discrete-time domain, it changes frequency instantaneously at the clock edge coincident with a change in the value stored in the FCR. The DDS output frequency settling time is determined mainly by the phase response of the reconstruction filter. An ideal reconstruction filter with a linear phase response (meaning the output is simply a delayed version of the input signal) would allow instantaneous frequency response at its output because a linear system can not create frequencies not present at its input.
Phase noise and jitter.
The superior close-in phase noise performance of a DDS stems from the fact that it is a feed-forward system. In a traditional phase locked loop (PLL), the frequency divider in the feedback path acts to multiply the phase noise of the reference oscillator and, within the PLL loop bandwidth, impresses this excess noise onto the VCO output. A DDS, on the other hand, reduces the reference clock phase noise by the ratio formula_0 because its output is derived by fractional division of the clock. Reference clock jitter translates directly to the output, but this jitter is a smaller percentage of the output period (by the ratio above). Since the maximum output frequency is limited to formula_1, the output phase noise at close-in offsets is always at least 6 dB below the reference clock phase noise.
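A short arithmetic check of the claims above, using assumed example frequencies:

```python
import math

# Close-in phase noise improves over the reference clock by roughly
# 20*log10(f_clk / f_out) dB; since f_out <= f_clk / 2, the improvement is at
# least 20*log10(2) ~= 6 dB.  The frequencies below are assumed examples.
f_clk, f_out = 100e6, 10e6
improvement_db = 20 * math.log10(f_clk / f_out)
print(f"{improvement_db:.1f} dB below the reference clock phase noise")  # 20.0
```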
At offsets far removed from the carrier, the phase-noise floor of a DDS is determined by the power sum of the DAC quantization noise floor and the reference clock phase noise floor. | [
{
"math_id": 0,
"text": "f_{clk}/f_o"
},
{
"math_id": 1,
"text": "f_{clk}/2"
}
] | https://en.wikipedia.org/wiki?curid=1197201 |
11973292 | Hamada's equation | In corporate finance, Hamada’s equation is an equation used as a way to separate the financial risk of a levered firm from its business risk. The equation combines the Modigliani–Miller theorem with the capital asset pricing model. It is used to help determine the levered beta and, through this, the optimal capital structure of firms. It was named after Robert Hamada, the Professor of Finance behind the theory.
Hamada’s equation relates the beta of a levered firm (a firm financed by both debt and equity) to that of its unlevered (i.e., a firm which has no debt) counterpart. It has proved useful in several areas of finance, including capital structuring, portfolio management and risk management, to name just a few. This formula is commonly taught in MBA Corporate Finance and Valuation classes. It is used to determine the cost of capital of a levered firm based on the cost of capital of comparable firms. Here, the comparable firms would be the ones having similar business risk and, thus, similar unlevered betas as the firm of interest.
Equation.
The equation is
formula_0
where "βL" and "βU" are the levered and unlevered betas, respectively, "T" the tax rate and formula_1 the leverage, defined here as the ratio of debt, "D", to equity, "E", of the firm.
The importance of Hamada's equation is that it separates the risk of the business, reflected here by the beta of an unlevered firm, "βU", from that of its levered counterpart, "βL", which contains the financial risk of leverage. Apart from the effect of the tax rate, which is generally taken as constant, the discrepancy between the two betas can be attributed solely to how the business is financed.
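As a hedged numerical sketch of how equation (1) is used in practice, the snippet below un-levers a comparable firm's observed equity beta and re-levers it at a target capital structure; all figures are hypothetical.

```python
# Hypothetical numbers for illustration only: un-lever a comparable firm's
# equity beta with equation (1), then re-lever at the target firm's D/E.
def unlever(beta_l, tax, d_over_e):
    return beta_l / (1 + (1 - tax) * d_over_e)

def relever(beta_u, tax, d_over_e):
    return beta_u * (1 + (1 - tax) * d_over_e)

beta_comparable = 1.2      # observed levered beta of a comparable firm
tax_rate = 0.30
de_comparable = 0.5        # comparable firm's debt-to-equity ratio
de_target = 1.0            # target firm's debt-to-equity ratio

beta_u = unlever(beta_comparable, tax_rate, de_comparable)   # ~0.889
beta_target = relever(beta_u, tax_rate, de_target)           # ~1.511
print(round(beta_u, 3), round(beta_target, 3))
```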
The equation is often wrongly thought to hold in general. However, there are several key "assumptions" behind the Hamada equation:
Derivation.
This simplified proof is based on Hamada's original paper (Hamada, R.S. 1972). We know that the beta of a company is:
formula_2
We also know that the return on equity of a nonleveraged and a leveraged firm is:
formula_3
formula_4
Where formula_5 is the sum of the net capital expenditure and the change in net working capital. If we substitute equations (3) and (4) into (2), and suppose that the covariances between the market and the components of the equity cash flow are zero (hence "β∆IC=βDebtnew=βInterest=0"), except for the covariance between EBIT and the market, then we get the formulas in (5):
formula_6
formula_7
formula_8
To get the well-known equation, suppose that the value of a firm's assets and the value of the firm's equity are equal if the firm is completely financed by equity and the tax rate is zero. Mathematically this means that, when the tax rate is zero, the value of an unleveraged firm satisfies "VU=VA=EU". If we fix the value of the unleveraged firm and change some equity to debt ("D>0"), the value of the firm is still the same, because there is no corporate tax. In this situation the value of the leveraged firm is (6):
formula_9
If the tax rate is greater than zero ("T>0") and there is financial leverage ("D>0"), then the leveraged and the unleveraged firm are not equal in value, because the value of the leveraged firm is larger by the present value of the tax shield:
formula_10,
so (7):
formula_11
Where "VA" is the value of the unleveraged firm's assets, which we fixed in above. From the (7) equation "EU" is (8)
formula_12
Combining equations (5) and (8) gives the well-known formula relating the leveraged and unleveraged equity betas:
formula_13
Where "I" is the sum of interest payments, "E" is Equity, "D" is Debt, "V" is the value of a firm category (leveraged or non leveraged), "A" is assets, "M" is referred to the market, "L" means leveraged, "U" means non leveraged category, "r" is the return rate and "T" denotes the tax rate.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\beta_{L} = \\beta_{U}[1+(1-T)\\phi]\n \\qquad(1)"
},
{
"math_id": 1,
"text": "\\phi\\,\\!"
},
{
"math_id": 2,
"text": "\n\\beta_{i}=\\frac{cov(r_{i},r_{M})}{\\sigma^{2}(r_{M})}\n\\qquad (2)"
},
{
"math_id": 3,
"text": "\nr_{E,U} = \\frac{EBIT(1-T)-\\Delta IC}{E_{U}}\n\\qquad (3)"
},
{
"math_id": 4,
"text": "\nr_{E,L} = \\frac{EBIT(1-T)-\\Delta IC+Debt_{new}-Interest}{E_{L}}\n\\qquad (4)"
},
{
"math_id": 5,
"text": "\\Delta IC"
},
{
"math_id": 6,
"text": "\n\\beta_{U} = \\frac{cov(\\frac{EBIT(1-T)}{E_{U}},r_{M})}{\\sigma^{2}(r_{M})}\n"
},
{
"math_id": 7,
"text": "\n\\beta_{L} = \\frac{cov(\\frac{EBIT(1-T)}{E_{L}},r_{M})}{\\sigma^{2}(r_{M})}\n"
},
{
"math_id": 8,
"text": "\nE_{L}\\beta_{L} = E_{U}\\beta_{U}\\rightarrow\\beta_{L}=\\frac{E_{U}}{E_{L}}\\beta_{U}\n"
},
{
"math_id": 9,
"text": "\nV_{L} = V_{U} = V_{A}=E_{U} = E_{L|T=0}+D\n"
},
{
"math_id": 10,
"text": "\n\\sum_{i} \\frac{Dr_{D}T}{(1+r_{D})^i}=\\frac{Dr_{D}T}{r_{D}}=DT\n"
},
{
"math_id": 11,
"text": "\nV_{L} = V_{\\{U,A\\}}+DT=E_{U}+DT = E_{L|T=0}+D+DT = E_{L|T>0}+D\n"
},
{
"math_id": 12,
"text": "\nE_{U}=E_{L|T>0}+D-DT\n"
},
{
"math_id": 13,
"text": "\n\\beta_{L}=\\frac{E_{L}+D-DT}{E_{L}}\\beta_{U}=\\left[1+\\frac{D}{E_{L}}(1-T)\\right]\\beta_{U} =\\beta_{U}[1+(1-T)\\phi]\n"
}
] | https://en.wikipedia.org/wiki?curid=11973292 |
1197356 | Leonor Michaelis | German chemist and physician (1875–1949)
Leonor Michaelis (16 January 1875 – 8 October 1949) was a German biochemist, physical chemist, and physician, known for his work with Maud Menten on enzyme kinetics in 1913, as well as for work on enzyme inhibition, pH and quinones.
Early life and education.
Leonor Michaelis was born in Berlin, Germany, on 16 January 1875, and graduated from the humanistic Köllnisches Gymnasium in 1893 after passing the Abiturienten Examen. It was here that Michaelis's interest in physics and chemistry was first sparked as he was encouraged by his teachers to utilize the relatively unused laboratories at his school.
With concerns about the financial stability of a pure scientist, he commenced his study of medicine at Berlin University in 1893. Among his instructors were Emil du Bois-Reymond for physiology, Emil Fischer for chemistry, and Oscar Hertwig for histology and embryology.
During his time at Berlin University, Michaelis worked in the lab of Oscar Hertwig, even receiving a prize for a paper on the histology of milk secretion. Michaelis's doctoral thesis work on cleavage determination in frog eggs led him to write a textbook on embryology. Through his work at Hertwig's lab, Michaelis came to know Paul Ehrlich and his work on blood cytology; he worked as Ehrlich's private research assistant from 1898 to 1899.
He passed his physician's examination in 1896 in Freiburg, and then moved to Berlin, where he received his doctorate in 1897. After receiving his medical degree, Michaelis worked as a private research assistant to Moritz Litten (1899–1902) and for Ernst Viktor von Leyden (1902–1906).
Life and work.
From 1900 to 1904, Michaelis continued his study of clinical medicine at a municipal hospital in Berlin, where he found time to establish a chemical laboratory.
He attained the position of Privatdocent at the University of Berlin in 1903. In 1905 he accepted a position as director of the bacteriology lab in the Klinikum Am Urban, becoming Professor extraordinary at Berlin University in 1908. In 1914 he published a paper suggesting that Emil Abderhalden's pregnancy tests could not be reproduced, a paper which fatally compromised Michaelis's position as an academic in Germany. In addition to that, he feared that being Jewish would make further advancement in the university unlikely, and in 1922, Michaelis moved to the Medical School of the University of Nagoya (Japan) as Professor of biochemistry, becoming one of the first foreign professors at a Japanese university, bringing with him several documents, apparatuses and chemicals from Germany. His research in Japan focussed on potentiometric measurements and the cellular membrane. Nagatsu has provided an account of Michaelis's contributions to biochemistry in Japan.
In 1926, he moved to Johns Hopkins University in Baltimore as resident lecturer in medical research and in 1929 to the Rockefeller Institute of Medical Research in New York City, where he retired in 1941.
The Michaelis-Menten equation.
Michaelis's work with Menten led to the Michaelis–Menten equation; their 1913 paper is now available in English translation.
formula_0
for a steady-state rate formula_1 in terms of the substrate concentration formula_2 and constants formula_3 and formula_4 (written with modern symbols).
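A minimal numerical sketch of the equation follows (the parameter values and units are assumed for illustration); at a substrate concentration equal to the Michaelis constant the rate is exactly half the limiting rate.

```python
# Evaluate v = V*a / (Km + a) for a few substrate concentrations
# (V, Km and the concentrations are assumed illustrative values).
def rate(a, V, Km):
    return V * a / (Km + a)

V, Km = 10.0, 2.0
for a in (0.5, 2.0, 20.0):
    print(a, round(rate(a, V, Km), 3))
# 0.5 -> 2.0, 2.0 -> 5.0 (= V/2 at a = Km), 20.0 -> 9.091
```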
An equation of the same form and with the same meaning appeared in the doctoral thesis of Victor Henri, a decade before Michaelis and Menten. However, Henri did not take it further: in particular he did not discuss the advantages of considering initial rates rather than time courses. Nonetheless, it is historically more accurate to refer to the "Henri–Michaelis–Menten equation".
Classification of Inhibition types.
Michaelis was one of the first to study enzyme inhibition, and to classify inhibition types as "competitive" or "non-competitive". In competitive inhibition the apparent value of formula_4 is increased, and in non-competitive inhibition the apparent value of formula_3 is decreased. Nowadays we consider the apparent value of formula_5 to be decreased in competitive inhibition, with no effect on the apparent value of formula_3: Michaelis's competitive inhibitors are still competitive inhibitors by this definition. However, non-competitive inhibition by his criterion is very rare, but "mixed inhibition", with effects on the apparent values of both formula_5 and formula_3 is important. Some authors call this non-competitive inhibition, but it is not non-competitive inhibition as understood by Michaelis. The remaining important kind of inhibition, "uncompetitive inhibition", in which the apparent value of formula_3 is decreased
with no effect on the apparent value of formula_5, was not considered by Michaelis. Fuller discussion can be found elsewhere.
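To make these distinctions concrete, here is a hedged sketch based on the standard textbook rate laws for competitive, uncompetitive and mixed inhibition; the expressions and the symbols Kic and Kiu are general biochemistry conventions rather than quotations from this article.

```python
# Apparent Michaelis-Menten parameters in the presence of an inhibitor at
# concentration i, using the standard rate laws (illustrative values):
#   competitive component multiplies Km by (1 + i/Kic),
#   uncompetitive component divides both V and Km by (1 + i/Kiu).
def apparent(V, Km, i, Kic=float('inf'), Kiu=float('inf')):
    a_comp = 1 + i / Kic
    a_unc = 1 + i / Kiu
    return V / a_unc, Km * a_comp / a_unc      # (apparent V, apparent Km)

V, Km, i = 10.0, 2.0, 5.0
print("competitive:  ", apparent(V, Km, i, Kic=2.5))           # V unchanged, Km up
print("uncompetitive:", apparent(V, Km, i, Kiu=2.5))           # both down, V/Km fixed
print("mixed:        ", apparent(V, Km, i, Kic=2.5, Kiu=5.0))  # both affected
# Apparent V/Km = V / (Km*(1 + i/Kic)): only the competitive component lowers it.
```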
Hydrogen ion concentration.
Michaelis built virtually immediately on Sørensen's 1909 introduction of the pH scale with a study of the effect of hydrogen ion concentration on invertase, and he became the leading world expert on pH and buffers. His book was the major reference on the subject for decades.
Quinones.
In his later career he worked extensively on quinones, and discovered Janus green as a supravital stain for mitochondria and the Michaelis–Gutmann body in urinary tract infections (1902). He found that thioglycolic acid could dissolve keratin, a discovery that would come to have several implications in the cosmetic industry, including the permanent wave ("perm").
A full discussion of his life and contributions to biochemistry may be consulted for more information.
"Catalysing" the Suzuki method of music teaching.
During his time in Japan Michaelis knew the young Shinichi Suzuki, later famous for the Suzuki method of teaching the violin and other instruments. Suzuki asked his advice about whether he should become a professional violinist. Perhaps more honest than tactful, Michaelis advised him to take up teaching, and thus catalysed the invention of the Suzuki method.
Personal life and death.
Michaelis was married to Hedwig Philipsthal; they had two daughters, Ilse Wolman and Eva M. Jacoby. Leonor Michaelis died on 8 October or 10 October, 1949 in New York City.
Honors.
Michaelis was a Harvey Lecturer in 1924 and a Sigma Xi Lecturer in 1946. He was elected to be a Fellow of the American Association for the Advancement of Science in 1929, a member of the National Academy of Sciences in 1943. In 1945, he received an honorary LL.D. from the University of California, Los Angeles.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "v = \\frac{Va}{K_\\mathrm{m} + a}"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "K_\\mathrm{m}"
},
{
"math_id": 5,
"text": "V/K_\\mathrm{m}"
}
] | https://en.wikipedia.org/wiki?curid=1197356 |
11973947 | Graph partition | Subdivision of vertices into disjoint sets
In mathematics, a graph partition is the reduction of a graph to a smaller graph by partitioning its set of nodes into mutually exclusive groups. Edges of the original graph that cross between the groups will produce edges in the partitioned graph. If the number of resulting edges is small compared to the original graph, then the partitioned graph may be better suited for analysis and problem-solving than the original. Finding a partition that simplifies graph analysis is a hard problem, but one that has applications to scientific computing, VLSI circuit design, and task scheduling in multiprocessor computers, among others. Recently, the graph partition problem has gained importance due to its application for clustering and detection of cliques in social, pathological and biological networks. For a survey on recent trends in computational methods and applications see .
Two common examples of graph partitioning are minimum cut and maximum cut problems.
Problem complexity.
Typically, graph partition problems fall under the category of NP-hard problems. Solutions to these problems are generally derived using heuristics and approximation algorithms. However, uniform graph partitioning or a balanced graph partition problem can be shown to be NP-complete to approximate within any finite factor. Even for special graph classes such as trees and grids, no reasonable approximation algorithms exist, unless P=NP. Grids are a particularly interesting case since they model the graphs resulting from Finite Element Model (FEM) simulations. When not only the number of edges between the components is approximated, but also the sizes of the components, it can be shown that no reasonable fully polynomial algorithms exist for these graphs.
Problem.
Consider a graph "G" = ("V", "E"), where "V" denotes the set of "n" vertices and "E" the set of edges. For a ("k","v") balanced partition problem, the objective is to partition "G" into "k" components of at most size "v" · ("n"/"k"), while minimizing the capacity of the edges between separate components. Also, given "G" and an integer "k" > 1, partition "V" into "k" parts (subsets) "V"1, "V"2, ..., "Vk" such that the parts are disjoint and have equal size, and the number of edges with endpoints in different parts is minimized. Such partition problems have been discussed in literature as bicriteria-approximation or resource augmentation approaches. A common extension is to hypergraphs, where an edge can connect more than two vertices. A hyperedge is not cut if all vertices are in one partition, and cut exactly once otherwise, no matter how many vertices are on each side. This usage is common in electronic design automation.
Analysis.
For a specific ("k", 1 + "ε") balanced partition problem, we seek to find a minimum cost partition of "G" into "k" components with each component containing a maximum of (1 + "ε")·("n"/"k") nodes. We compare the cost of this approximation algorithm to the cost of a ("k",1) cut, wherein each of the "k" components must have the same size of ("n"/"k") nodes each, thus being a more restricted problem. Thus,
formula_0
We already know that (2,1) cut is the minimum bisection problem and it is NP-complete. Next, we assess a 3-partition problem wherein "n" = 3"k", which is also bounded in polynomial time. Now, if we assume that we have a finite approximation algorithm for ("k", 1)-balanced partition, then either the 3-partition instance can be solved using the balanced ("k",1) partition in "G" or it cannot be solved. If the 3-partition instance can be solved, then the ("k", 1)-balanced partitioning problem in "G" can be solved without cutting any edge. Otherwise, if the 3-partition instance cannot be solved, the optimum ("k", 1)-balanced partitioning in "G" will cut at least one edge. An approximation algorithm with a finite approximation factor has to differentiate between these two cases. Hence, it can solve the 3-partition problem in polynomial time, which is a contradiction under the assumption that "P" ≠ "NP". Thus, it is evident that the ("k",1)-balanced partitioning problem has no polynomial-time approximation algorithm with a finite approximation factor unless "P" = "NP".
The planar separator theorem states that any "n"-vertex planar graph can be partitioned into roughly equal parts by the removal of O(√"n") vertices. This is not a partition in the sense described above, because the partition set consists of vertices rather than edges. However, the same result also implies that every planar graph of bounded degree has a balanced cut with O(√"n") edges.
Graph partition methods.
Since graph partitioning is a hard problem, practical solutions are based on heuristics. There are two broad categories of methods, local and global. Well-known local methods are the Kernighan–Lin algorithm, and Fiduccia-Mattheyses algorithms, which were the first effective 2-way cuts by local search strategies. Their major drawback is the arbitrary initial partitioning of the vertex set, which can affect the final solution quality. Global approaches rely on properties of the entire graph and do not rely on an arbitrary initial partition. The most common example is spectral partitioning, where a partition is derived from approximate eigenvectors of the adjacency matrix, or spectral clustering that groups graph vertices using the eigendecomposition of the graph Laplacian matrix.
Multi-level methods.
A multi-level graph partitioning algorithm works by applying one or more stages. Each stage reduces the size of
the graph by collapsing vertices and edges, partitions the smaller graph, then maps back and refines this partition of the original graph. A wide variety of partitioning and refinement methods can be applied within the overall multi-level scheme. In many cases, this approach can give both fast execution times and very high quality results.
One widely used example of such an approach is METIS, a graph partitioner, and hMETIS, the corresponding partitioner for hypergraphs.
An alternative approach, implemented, e.g., in scikit-learn, is spectral clustering with the partitioning determined from eigenvectors of the graph Laplacian matrix for the original graph, computed by the LOBPCG solver with multigrid preconditioning.
Spectral partitioning and spectral bisection.
Given a graph formula_1 with adjacency matrix formula_2, where an entry formula_3 implies an edge between node formula_4 and formula_5, and degree matrix formula_6, which is a diagonal matrix, where each diagonal entry of a row formula_4, formula_7, represents the node degree of node formula_4. The Laplacian matrix formula_8 is defined as formula_9. Now, a ratio-cut partition for graph formula_10 is defined as a partition of formula_11 into disjoint formula_12, and formula_13, minimizing the ratio
formula_14
of the number of edges that actually cross this cut to the number of pairs of vertices that could support such edges. Spectral graph partitioning can be motivated by analogy with partitioning of a vibrating string or a mass-spring system and similarly extended to the case of negative weights of the graph.
Fiedler eigenvalue and eigenvector.
In such a scenario, the second smallest eigenvalue (formula_15) of formula_8, yields a "lower bound" on the optimal cost (formula_16) of ratio-cut partition with formula_17. The eigenvector (formula_18) corresponding to formula_15, called the "Fiedler vector", bisects the graph into only two communities based on the "sign of the corresponding vector entry". Division into a larger number of communities can be achieved by repeated "bisection" or by using "multiple eigenvectors" corresponding to the smallest eigenvalues. The examples in Figures 1,2 illustrate the spectral bisection approach.
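A minimal numpy sketch of this sign-based spectral bisection follows; the small example graph (two triangles joined by a bridge) is made up for illustration.

```python
import numpy as np

# Build L = D - A, take the eigenvector of the second-smallest eigenvalue
# (the Fiedler vector), and split vertices by the sign of their entries.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A

eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                # eigenvector for the second-smallest eigenvalue
part = fiedler >= 0
print("partition:", np.where(part)[0], np.where(~part)[0])  # the two triangles
```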
Modularity and ratio-cut.
Minimum cut partitioning, however, fails when the number of communities to be partitioned or the partition sizes are unknown. For instance, optimizing the cut size for free group sizes puts all vertices in the same community. Additionally, cut size may be the wrong thing to minimize since a good division is not just one with a small number of edges between communities. This motivated the use of Modularity (Q) as a metric to optimize a balanced graph partition. The example in Figure 3 illustrates 2 instances of the same graph such that in "(a)" modularity (Q) is the partitioning metric and in "(b)", ratio-cut is the partitioning metric.
Applications.
Conductance.
Another objective function used for graph partitioning is Conductance which is the ratio between the number of cut edges and the volume of the smallest part. Conductance is related to electrical flows and random walks. The Cheeger bound guarantees that spectral bisection provides partitions with nearly optimal conductance. The quality of this approximation depends on the second smallest eigenvalue of the Laplacian λ2.
Immunization.
Graph partition can be useful for identifying the minimal set of nodes or links that should be immunized in order to stop epidemics.
Other graph partition methods.
Spin models have been used for clustering of multivariate data wherein similarities are translated into coupling strengths. The properties of ground state spin configuration can be directly interpreted as communities. Thus, a graph is partitioned to minimize the Hamiltonian of the partitioned graph. The Hamiltonian (H) is derived by assigning the following partition rewards and penalties.
Additionally, Kernel-PCA-based Spectral clustering takes a form of least squares Support Vector Machine framework, and hence it becomes possible to project the data entries to a kernel induced feature space that has maximal variance, thus implying a high separation between the projected communities.
Some methods express graph partitioning as a multi-criteria optimization problem which can be solved using local methods expressed in a game theoretic framework where each node makes a decision on the partition it chooses.
For very large-scale distributed graphs classical partition methods might not apply (e.g., spectral partitioning, Metis) since they require full access to graph data in order to perform global operations. For such large-scale scenarios distributed graph partitioning is used to perform partitioning through asynchronous local operations only.
Software tools.
scikit-learn implements spectral clustering with the partitioning determined from eigenvectors of the graph Laplacian matrix for the original graph computed by ARPACK, or by LOBPCG solver with multigrid preconditioning.
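A hedged usage sketch of the scikit-learn interface follows; the call pattern shown is a plausible one rather than a prescription from this article, and the tiny example graph is invented.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Spectral clustering on a precomputed affinity (adjacency) matrix.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

sc = SpectralClustering(n_clusters=2,
                        affinity='precomputed',   # treat A as the affinity matrix
                        assign_labels='kmeans',
                        random_state=0)
labels = sc.fit_predict(A)
print(labels)   # e.g. [0 0 0 1 1 1] for the two-triangle example graph
# For large graphs, eigen_solver='lobpcg' or 'amg' selects the eigensolvers
# discussed in the text.
```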
METIS is a graph partitioning family by Karypis and Kumar. Among this family, kMetis aims at greater partitioning speed, hMetis, applies to hypergraphs and aims at partition quality, and ParMetis is a parallel implementation of the Metis graph partitioning algorithm.
KaHyPar is a multilevel hypergraph partitioning framework providing direct k-way and recursive bisection based partitioning algorithms. It instantiates the multilevel approach in its most extreme version, removing only a single vertex in every level of the hierarchy. By using this very fine grained "n"-level approach combined with strong local search heuristics, it computes solutions of very high quality.
Scotch is graph partitioning framework by Pellegrini. It uses recursive multilevel bisection and includes sequential as well as parallel partitioning techniques.
Jostle is a sequential and parallel graph partitioning solver developed by Chris Walshaw.
The commercialized version of this partitioner is known as NetWorks.
Party implements the Bubble/shape-optimized framework and the Helpful Sets algorithm.
The software packages DibaP and its MPI-parallel variant PDibaP by Meyerhenke implement the Bubble framework using diffusion; DibaP also uses AMG-based techniques for coarsening and solving linear systems arising in the diffusive approach.
Sanders and Schulz released a graph partitioning package KaHIP (Karlsruhe High Quality Partitioning) that implements for example flow-based methods, more-localized local searches and several parallel and sequential meta-heuristics.
The tools Parkway by Trifunovic and
Knottenbelt as well as Zoltan by Devine et al. focus on hypergraph
partitioning. | [
{
"math_id": 0,
"text": "\\max_i |V_i| \\le (1+\\varepsilon) \\left\\lceil\\frac{|V|}{k}\\right\\rceil."
},
{
"math_id": 1,
"text": "G=(V,E)"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "A_{ij}"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "D"
},
{
"math_id": 7,
"text": "d_{ii}"
},
{
"math_id": 8,
"text": "L"
},
{
"math_id": 9,
"text": "L = D - A"
},
{
"math_id": 10,
"text": "G = (V, E)"
},
{
"math_id": 11,
"text": "V"
},
{
"math_id": 12,
"text": "U"
},
{
"math_id": 13,
"text": "W"
},
{
"math_id": 14,
"text": "\\frac{|E(G)\\cap(U\\times W)|}{|U|\\cdot|W|}"
},
{
"math_id": 15,
"text": "\\lambda_2"
},
{
"math_id": 16,
"text": "c"
},
{
"math_id": 17,
"text": "c\\geq \\frac{\\lambda_2}{n}"
},
{
"math_id": 18,
"text": "V_2"
}
] | https://en.wikipedia.org/wiki?curid=11973947 |
1197421 | Bipolar coordinates | 2-dimensional orthogonal coordinate system based on Apollonian circles
Bipolar coordinates are a two-dimensional orthogonal coordinate system based on the Apollonian circles. Confusingly, the same term is also sometimes used for two-center bipolar coordinates. There is also a third system, based on two poles (biangular coordinates).
The term "bipolar" is further used on occasion to describe other curves having two singular points (foci), such as ellipses, hyperbolas, and Cassini ovals. However, the term "bipolar coordinates" is reserved for the coordinates described here, and never used for systems associated with those other curves, such as elliptic coordinates.
Definition.
The system is based on two foci "F"1 and "F"2. Referring to the figure at right, the "σ"-coordinate of a point "P" equals the angle "F"1 "P" "F"2, and the "τ"-coordinate equals the natural logarithm of the ratio of the distances "d"1 and "d"2:
formula_0
If, in the Cartesian system, the foci are taken to lie at (−"a", 0) and ("a", 0), the coordinates of the point "P" are
formula_1
The coordinate "τ" ranges from formula_2 (for points close to "F"1) to formula_3 (for points close to "F"2). The coordinate "σ" is only defined modulo "2π", and is best taken to range from "-π" to "π", by taking it as the negative of the acute angle "F"1 "P" "F"2 if "P" is in the lower half plane.
Proof that coordinate system is orthogonal.
The equations for "x" and "y" can be combined to give
formula_4
or
formula_5
This equation shows that "σ" and "τ" are the real and imaginary parts of an analytic function of "x+iy" (with logarithmic branch points at the foci), which in turn proves (by appeal to the general theory of conformal mapping and the Cauchy–Riemann equations) that these particular curves of "σ" and "τ" intersect at right angles, i.e., it is an orthogonal coordinate system.
Curves of constant "σ" and "τ".
The curves of constant "σ" correspond to non-concentric circles
that intersect at the two foci. The centers of the constant-"σ" circles lie on the "y"-axis at formula_6 with radius formula_7. Circles of positive "σ" are centered above the "x"-axis, whereas those of negative "σ" lie below the axis. As the magnitude |"σ"|- "π"/2 decreases, the radius of the circles decreases and the center approaches the origin (0, 0), which is reached when |"σ"| = "π"/2. (From elementary geometry, all triangles on a circle with 2 vertices on opposite ends of a diameter are right triangles.)
The curves of constant formula_8 are non-intersecting circles of different radii
that surround the foci but again are not concentric. The centers of the constant-"τ" circles lie on the "x"-axis at formula_9 with radius formula_10. The circles of positive "τ" lie in the right-hand side of the plane ("x" > 0), whereas the circles of negative "τ" lie in the left-hand side of the plane ("x" < 0). The "τ" = 0 curve corresponds to the "y"-axis ("x" = 0). As the magnitude of "τ" increases, the radius of the circles decreases and their centers approach the foci.
Inverse relations.
The passage from Cartesian coordinates to bipolar coordinates can be done via the following formulas:
formula_11
and
formula_12
The coordinates also have the identities:
formula_13
and
formula_14
which can be derived by solving Eq. (1) and (2) for formula_15 and formula_16, respectively.
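A short numerical sketch checking the forward map against these inverse relations; the sample values of "a", "σ" and "τ" are assumed, and the atan2 branch choice is one convenient way to realise the sign convention described above.

```python
import math

def to_cartesian(sigma, tau, a=1.0):
    d = math.cosh(tau) - math.cos(sigma)
    return a * math.sinh(tau) / d, a * math.sin(sigma) / d

def to_bipolar(x, y, a=1.0):
    tau = 0.5 * math.log(((x + a)**2 + y**2) / ((x - a)**2 + y**2))
    sigma = math.atan2(2 * a * y, x**2 + y**2 - a**2)  # one convenient branch choice
    return sigma, tau

sigma, tau = 1.0, 0.5
x, y = to_cartesian(sigma, tau)   # ~ (0.8872, 1.4327)
print(to_bipolar(x, y))           # recovers (1.0, 0.5) up to rounding
```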
Scale factors.
To obtain the scale factors for bipolar coordinates, we take the differential of the equation for formula_17, which gives
formula_18
Multiplying this equation with its complex conjugate yields
formula_19
Employing the trigonometric identities for products of sines and cosines, we obtain
formula_20
from which it follows that
formula_21
Hence the scale factors for "σ" and "τ" are equal, and given by
formula_22
Many results now follow in quick succession from the general formulae for orthogonal coordinates.
Thus, the infinitesimal area element equals
formula_23
and the Laplacian is given by
formula_24
Expressions for formula_25, formula_26, and formula_27 can be obtained by substituting the scale factors into the general formulae found in orthogonal coordinates.
Applications.
The classic applications of bipolar coordinates are in solving partial differential equations, e.g., Laplace's equation or the Helmholtz equation, for which bipolar coordinates allow a separation of variables. An example is the electric field surrounding two parallel cylindrical conductors with unequal diameters.
Extension to 3-dimensions.
Bipolar coordinates form the basis for several sets of three-dimensional orthogonal coordinates.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\tau = \\ln \\frac{d_1}{d_2}.\n"
},
{
"math_id": 1,
"text": "\nx = a \\ \\frac{\\sinh \\tau}{\\cosh \\tau - \\cos \\sigma}, \\qquad y = a \\ \\frac{\\sin \\sigma}{\\cosh \\tau - \\cos \\sigma}.\n"
},
{
"math_id": 2,
"text": "-\\infty"
},
{
"math_id": 3,
"text": "\\infty"
},
{
"math_id": 4,
"text": "\nx + i y = a i \\cot\\left( \\frac{\\sigma + i \\tau}{2}\\right)\n"
},
{
"math_id": 5,
"text": "\nx + i y = a \\coth\\left( \\frac{\\tau-i\\sigma}{2}\\right).\n"
},
{
"math_id": 6,
"text": "a\\cot \\sigma"
},
{
"math_id": 7,
"text": "\\tfrac{a}{\\sin\\sigma}"
},
{
"math_id": 8,
"text": "\\tau"
},
{
"math_id": 9,
"text": "a \\coth\\tau"
},
{
"math_id": 10,
"text": "\\tfrac{a}{\\sinh\\tau}"
},
{
"math_id": 11,
"text": "\n \\tau = \\frac{1}{2} \\ln \\frac{(x + a)^2 + y^2}{(x - a)^2 + y^2}\n"
},
{
"math_id": 12,
"text": "\n \\pi - \\sigma = 2 \\arctan \\frac{2ay}{a^2 - x^2 - y^2 + \\sqrt{(a^2 - x^2 - y^2)^2 + 4 a^2 y^2} }.\n"
},
{
"math_id": 13,
"text": "\n \\tanh \\tau = \\frac{2 a x}{x^2 + y^2 + a^2}\n"
},
{
"math_id": 14,
"text": "\n \\tan \\sigma = \\frac{2 a y}{x^2 + y^2 - a^2},\n"
},
{
"math_id": 15,
"text": "\\cot \\sigma"
},
{
"math_id": 16,
"text": "\\coth\\tau"
},
{
"math_id": 17,
"text": " x + iy "
},
{
"math_id": 18,
"text": "\ndx + i\\, dy = \\frac{-ia}{\\sin^2\\bigl(\\tfrac{1}{2}(\\sigma + i \\tau)\\bigr)}(d\\sigma +i\\,d\\tau).\n"
},
{
"math_id": 19,
"text": "\n(dx)^2 + (dy)^2 = \\frac{a^2}{\\bigl[2\\sin\\tfrac{1}{2}\\bigl(\\sigma + i\\tau\\bigr) \\sin\\tfrac{1}{2}\\bigl(\\sigma - i\\tau\\bigr)\\bigr]^2} \\bigl((d\\sigma)^2 + (d\\tau)^2\\bigr).\n"
},
{
"math_id": 20,
"text": "\n2\\sin\\tfrac{1}{2}\\bigl(\\sigma + i\\tau\\bigr) \\sin\\tfrac{1}{2}\\bigl(\\sigma - i\\tau\\bigr)\n = \\cos\\sigma - \\cosh\\tau,\n"
},
{
"math_id": 21,
"text": "\n(dx)^2 + (dy)^2 = \\frac{a^2}{(\\cosh \\tau - \\cos\\sigma)^2} \\bigl((d\\sigma)^2 + (d\\tau)^2\\bigr).\n"
},
{
"math_id": 22,
"text": "\nh_\\sigma = h_\\tau = \\frac{a}{\\cosh \\tau - \\cos\\sigma}.\n"
},
{
"math_id": 23,
"text": "\ndA = \\frac{a^2}{\\left( \\cosh \\tau - \\cos\\sigma \\right)^2} \\, d\\sigma\\, d\\tau,\n"
},
{
"math_id": 24,
"text": "\n\\nabla^2 \\Phi =\n\\frac{1}{a^2} \\left( \\cosh \\tau - \\cos\\sigma \\right)^2\n\\left( \n\\frac{\\partial^2 \\Phi}{\\partial \\sigma^2} + \n\\frac{\\partial^2 \\Phi}{\\partial \\tau^2}\n\\right).\n"
},
{
"math_id": 25,
"text": "\\nabla f"
},
{
"math_id": 26,
"text": "\\nabla \\cdot \\mathbf{F}"
},
{
"math_id": 27,
"text": "\\nabla \\times \\mathbf{F}"
}
] | https://en.wikipedia.org/wiki?curid=1197421 |
1197531 | Hamiltonian system | Dynamical system governed by Hamilton's equations
A Hamiltonian system is a dynamical system governed by Hamilton's equations. In physics, this dynamical system describes the evolution of a physical system such as a planetary system or an electron in an electromagnetic field. These systems can be studied in both Hamiltonian mechanics and dynamical systems theory.
Overview.
Informally, a Hamiltonian system is a mathematical formalism developed by Hamilton to describe the evolution equations of a physical system. The advantage of this description is that it gives important insights into the dynamics, even if the initial value problem cannot be solved analytically. One example is the planetary movement of three bodies: while there is no closed-form solution to the general problem, Poincaré showed for the first time that it exhibits deterministic chaos.
Formally, a Hamiltonian system is a dynamical system characterised by the scalar function formula_0, also known as the Hamiltonian. The state of the system, formula_1, is described by the generalized coordinates formula_2 and formula_3, corresponding to generalized momentum and position respectively. Both formula_2 and formula_3 are real-valued vectors with the same dimension "N". Thus, the state is completely described by the 2"N"-dimensional vector
formula_4
and the evolution equations are given by Hamilton's equations:
formula_5
The trajectory formula_6 is the solution of the initial value problem defined by Hamilton's equations and the initial condition formula_7.
Time-independent Hamiltonian systems.
If the Hamiltonian is not explicitly time-dependent, i.e. if formula_8, then the Hamiltonian does not vary with time at all, and thus it is a constant of motion, whose constant equals the total energy of the system: formula_9. Examples of such systems are the undamped pendulum, the harmonic oscillator, and dynamical billiards.
Example.
An example of a time-independent Hamiltonian system is the harmonic oscillator. Consider the system defined by the coordinates formula_10 and formula_11. Then the Hamiltonian is given by
formula_12
The Hamiltonian of this system does not depend on time and thus the energy of the system is conserved.
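A minimal sketch integrating Hamilton's equations for this oscillator with the semi-implicit (symplectic) Euler method follows; the mass, spring constant, step size and initial state are assumed for illustration, and the printed energies illustrate the conservation just stated.

```python
# Integrate dq/dt = p/m, dp/dt = -k*q with semi-implicit (symplectic) Euler;
# the parameters and initial state are assumed illustrative values.
m, k = 1.0, 4.0
dt, steps = 0.001, 10_000

def H(q, p):
    return p * p / (2 * m) + k * q * q / 2

q, p = 1.0, 0.0
E0 = H(q, p)
for _ in range(steps):
    p -= dt * k * q      # dp/dt = -dH/dq
    q += dt * p / m      # dq/dt = +dH/dp
print(E0, H(q, p))       # energies agree to within roughly 0.1% at this step size
```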
Symplectic structure.
One important property of a Hamiltonian dynamical system is that it has a symplectic structure. Writing
formula_13
the evolution equation of the dynamical system can be written as
formula_14
where
formula_15
and "I""N" is the "N"×"N" identity matrix.
One important consequence of this property is that an infinitesimal phase-space volume is preserved. A corollary of this is Liouville's theorem, which states that on a Hamiltonian system, the phase-space volume of a closed surface is preserved under time evolution. This can be verified directly:
formula_16
where the third equality comes from the divergence theorem.
Hamiltonian chaos.
Certain Hamiltonian systems exhibit chaotic behavior. When the evolution of a Hamiltonian system is highly sensitive to initial conditions, and the motion appears random and erratic, the system is said to exhibit Hamiltonian chaos.
Origins.
The concept of chaos in Hamiltonian systems has its roots in the works of Henri Poincaré, who in the late 19th century made pioneering contributions to the understanding of the three-body problem in celestial mechanics. Poincaré showed that even a simple gravitational system of three bodies could exhibit complex behavior that could not be predicted over the long term. His work is considered to be one of the earliest explorations of chaotic behavior in physical systems.
Characteristics.
Hamiltonian chaos is characterized by the following features:
Sensitivity to Initial Conditions: A hallmark of chaotic systems, small differences in initial conditions can lead to vastly different trajectories. This is known as the butterfly effect.
Mixing: Over time, the phases of the system become uniformly distributed in phase space.
Recurrence: Though unpredictable, the system eventually revisits states that are arbitrarily close to its initial state, known as Poincaré recurrence.
Hamiltonian chaos is also associated with the presence of "chaotic invariants" such as the Lyapunov exponent and Kolmogorov-Sinai entropy, which quantify the rate at which nearby trajectories diverge and the complexity of the system, respectively.
Applications.
Hamiltonian chaos is prevalent in many areas of physics, particularly in classical mechanics and statistical mechanics. For instance, in plasma physics, the behavior of charged particles in a magnetic field can exhibit Hamiltonian chaos, which has implications for nuclear fusion and astrophysical plasmas. Moreover, in quantum mechanics, Hamiltonian chaos is studied through quantum chaos, which seeks to understand the quantum analogs of classical chaotic behavior. Hamiltonian chaos also plays a role in astrophysics, where it is used to study the dynamics of star clusters and the stability of galactic structures.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H(\\boldsymbol{q},\\boldsymbol{p},t)"
},
{
"math_id": 1,
"text": "\\boldsymbol{r}"
},
{
"math_id": 2,
"text": "\\boldsymbol{p}"
},
{
"math_id": 3,
"text": "\\boldsymbol{q}"
},
{
"math_id": 4,
"text": "\\boldsymbol{r} = (\\boldsymbol{q},\\boldsymbol{p})"
},
{
"math_id": 5,
"text": "\\begin{align}\n& \\frac{d\\boldsymbol{p}}{dt} = -\\frac{\\partial H}{\\partial \\boldsymbol{q}}, \\\\[5pt]\n& \\frac{d\\boldsymbol{q}}{dt} = +\\frac{\\partial H}{\\partial \\boldsymbol{p}}.\n\\end{align} "
},
{
"math_id": 6,
"text": "\\boldsymbol{r}(t)"
},
{
"math_id": 7,
"text": "\\boldsymbol{r}(t = 0) = \\boldsymbol{r}_0\\in\\mathbb{R}^{2N}"
},
{
"math_id": 8,
"text": "H(\\boldsymbol{q},\\boldsymbol{p},t) = H(\\boldsymbol{q},\\boldsymbol{p})"
},
{
"math_id": 9,
"text": "H = E"
},
{
"math_id": 10,
"text": "\\boldsymbol{p} = m\\dot{x}"
},
{
"math_id": 11,
"text": "\\boldsymbol{q} = x"
},
{
"math_id": 12,
"text": " H = \\frac{p^2}{2m} + \\frac{kq^2}{2}."
},
{
"math_id": 13,
"text": "\\nabla_{\\boldsymbol{r}} H(\\boldsymbol{r}) = \\begin{bmatrix}\n\\frac{\\partial H(\\boldsymbol{q},\\boldsymbol{p})}{\\partial \\boldsymbol{q}} \\\\\n\\frac{\\partial H(\\boldsymbol{q},\\boldsymbol{p})}{\\partial \\boldsymbol{p}} \\\\\n\\end{bmatrix}"
},
{
"math_id": 14,
"text": "\\frac{d\\boldsymbol{r}}{dt} = M_N \\nabla_{\\boldsymbol{r}} H(\\boldsymbol{r})"
},
{
"math_id": 15,
"text": "M_N =\n\\begin{bmatrix}\n0 & I_N \\\\\n-I_N & 0 \\\\\n\\end{bmatrix}"
},
{
"math_id": 16,
"text": "\\begin{align}\n\\frac{d}{dt}\\oint_{\\partial V} d\\boldsymbol{r} &= \\oint_{\\partial V}\\frac{d\\boldsymbol{r}}{dt}\\cdot d\\hat{\\boldsymbol{n}}_{\\partial V} \\\\\n&= \\oint_{\\partial V} \\left(M_N \\nabla_{\\boldsymbol{r}} H(\\boldsymbol{r})\\right) \\cdot d\\hat{\\boldsymbol{n}}_{\\partial V} \\\\\n&= \\int_{V}\\nabla_{\\boldsymbol{r}}\\cdot \\left(M_N \\nabla_{\\boldsymbol{r}} H(\\boldsymbol{r})\\right) \\, dV \\\\\n&= \\int_{V}\\sum_{i=1}^N\\sum_{j=1}^N\\left(\\frac{\\partial^2 H}{\\partial q_i \\partial p_j} - \\frac{\\partial^2 H}{\\partial p_i \\partial q_j}\\right) \\, dV \\\\\n&= 0 \n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=1197531 |
1197818 | Bird strike | Collision between an aircraft and a bird
A bird strike (sometimes called birdstrike, bird ingestion (for an engine), bird hit, or bird aircraft strike hazard (BASH)) is a collision between an airborne animal (usually a bird or bat) and a moving vehicle (usually an aircraft). The term is also used for bird deaths resulting from collisions with structures, such as power lines, towers and wind turbines (see bird–skyscraper collisions and towerkill).
A significant threat to flight safety, bird strikes have caused a number of accidents with human casualties. There are over 13,000 bird strikes annually in the US alone. However, the number of major accidents involving civil aircraft is quite low and it has been estimated that there is only about one accident resulting in human death in one billion (109) flying hours. The majority of bird strikes (65%) cause little damage to the aircraft; however, the collision is usually fatal to the bird(s) involved.
Vultures and geese have been ranked the second and third most hazardous kinds of wildlife to aircraft in the United States, after deer, with approximately 240 goose–aircraft collisions in the United States each year. 80% of all bird strikes go unreported.
Most accidents occur when a bird (or group of birds) collides with the windscreen or is sucked into the engine of jet aircraft. These cause annual damages that have been estimated at $400 million within the United States alone and up to $1.2 billion to commercial aircraft worldwide. In addition to property damage, collisions between man-made structures and conveyances and birds is a contributing factor, among many others, to the worldwide decline of many avian species.
The International Civil Aviation Organization (ICAO) received 65,139 bird strike reports for 2011–14, and the Federal Aviation Administration counted 177,269 wildlife strike reports on civil aircraft between 1990 and 2015, growing 38% in seven years from 2009 to 2015. Birds accounted for 97%.
Event description.
Bird strikes happen most often during takeoff or landing, or during low altitude flight. However, bird strikes have also been reported at high altitudes, some as high as above the ground. Bar-headed geese have been seen flying as high as above sea level. An aircraft over the Ivory Coast collided with a Rüppell's vulture at the altitude of , the current record avian height. The majority of bird collisions occur near or at airports (90%, according to the ICAO) during takeoff, landing and associated phases. According to the FAA wildlife hazard management manual for 2005, less than 8% of strikes occur above and 61% occur at less than .
The point of impact is usually any forward-facing edge of the vehicle such as a wing leading edge, nose cone, jet engine cowling or engine inlet.
Jet engine ingestion is extremely serious due to the rotation speed of the engine fan and engine design. As the bird strikes a fan blade, that blade can be displaced into another blade and so forth, causing a cascading failure. Jet engines are particularly vulnerable during the takeoff phase when the engine is turning at a very high speed and the plane is at a low altitude where birds are more commonly found.
The force of the impact on an aircraft depends on the weight of the animal and the speed difference and direction at the point of impact. The energy of the impact increases with the square of the speed difference. High-speed impacts, as with jet aircraft, can cause considerable damage and even catastrophic failure to the vehicle. The energy of a bird moving at a relative velocity of approximately equals the energy of a weight dropped from a height of . However, according to the FAA only 15% of strikes (ICAO 11%) actually result in damage to the aircraft.
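As a rough arithmetic illustration of this quadratic dependence (the mass and speeds below are assumed example values; the specific figures elided in the article are not reconstructed here):

```python
# Kinetic energy of a bird at two assumed closing speeds: doubling the
# relative speed quadruples the impact energy.
m = 1.8                       # bird mass in kg (assumed)
for v_knots in (140, 280):    # assumed closing speeds
    v = v_knots * 0.514444    # knots -> m/s
    print(v_knots, "kt ->", round(0.5 * m * v * v), "J")
```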
Bird strikes can damage vehicle components, or injure passengers. Flocks of birds are especially dangerous and can lead to multiple strikes, with corresponding damage. Depending on the damage, aircraft at low altitudes or during take-off and landing often cannot recover in time. US Airways Flight 1549 is a classic example of this. The engines on the Airbus A320 used on that flight were torn apart by multiple bird strikes at low altitude. There was no time to make a safe landing at an airport, forcing a water landing in the Hudson River.
Remains of the bird, termed "snarge", are sent to identification centers where forensic techniques may be used to identify the species involved. These samples need to be taken carefully by trained personnel to ensure proper analysis and reduce the risks of infection (zoonoses).
Species.
Most bird strikes involve large birds with big populations, particularly geese and gulls in the United States. In parts of the US, Canada geese and migratory snow geese populations have risen significantly while feral Canada geese and greylag geese have increased in parts of Europe, increasing the risk of these large birds to aircraft. In other parts of the world, large birds of prey such as "Gyps" vultures and "Milvus" kites are often involved. In the US, reported strikes are mainly from waterfowl (30%), gulls (22%), raptors (20%), and pigeons and doves (7%). The Smithsonian Institution's Feather Identification Laboratory has identified turkey vultures as the most damaging birds, followed by Canada geese and white pelicans, all of which are very large birds. In terms of frequency, the laboratory most commonly finds mourning doves and horned larks involved in the strike.
The largest numbers of strikes happen during the spring and fall migrations. Bird strikes above altitude are about 7 times more common at night than during the day during the bird migration season.
Large land animals, such as deer, can also be a problem to aircraft during takeoff and landing. Between 1990 and 2013, civil aircraft experienced more than 1,000 collisions with deer and 440 with coyotes.
An animal hazard reported from London Stansted Airport in England is rabbits: they get run over by ground vehicles and planes, and they pass large amounts of droppings, which attract mice, which in turn attract owls, which then become another birdstrike hazard.
Countermeasures.
There are three approaches to reduce the effect of bird strikes. The vehicles can be designed to be more bird-resistant, the birds can be moved out of the way of the vehicle, or the vehicle can be moved out of the way of the birds.
Vehicle design.
Most large commercial jet engines include design features that ensure they can shut down after ingesting a bird weighing up to . The engine does not have to survive the ingestion, just be safely shut down. This is a standalone requirement, meaning the engine alone, not the aircraft, must pass the test. Multiple strikes (such as from hitting a flock of birds) on twin-engine jet aircraft are very serious events because they can disable multiple aircraft systems. Emergency action may be required to land the aircraft, as in the January 15, 2009 forced ditching of US Airways Flight 1549.
As required by the European Aviation Safety Agency (EASA)'s CS 25.631 or the Federal Aviation Administration (FAA)'s 14 CFR § 25.571(e)(1) post Amdt 25-96, modern jet aircraft structures are designed for continued safe flight and landing after withstanding one bird impact anywhere on the aircraft (including the flight deck windshields). Per the FAA's 14 CFR § 25.631, they must also withstand one bird impact anywhere on the empennage. Flight deck windows on jet aircraft must be able to withstand one bird collision without yielding or spalling. For the empennage, this is usually accomplished by designing redundant structures and protected locations for control system elements or protective devices such as splitter plates or energy-absorbing material. Often, one aircraft manufacturer will use similar protective design features for all of its aircraft models, to minimize testing and certification costs. Transport Canada also pays particular attention to these requirements during aircraft certification, considering there are many documented cases in North America of bird strikes with large Canada geese which weigh approximately on average, and can sometimes weigh as much as .
At first, bird strike testing by manufacturers involved firing a bird carcass from a gas cannon and sabot system into the tested unit. The carcass was soon replaced with suitable density blocks, often gelatin, to ease testing. Current certification efforts are mainly conducted with limited testing, supported by more detailed analysis using computer simulation, although final testing usually involves some physical experiments (see birdstrike simulator).
Based on US National Transportation Safety Board recommendations following US Airways Flight 1549 in 2009, EASA proposed in 2017 that engines should also be capable of sustaining a bird strike in descent. During descent, turbofans turn more slowly than during takeoff and climb. This proposal was echoed a year later by the FAA; new regulations could apply for the Boeing NMA engines.
Wildlife management.
Though there are many methods available to wildlife managers at airports, no single method will work in all instances and with all species. Wildlife management in the airport environment can be grouped into two broad categories: non-lethal and lethal. Integration of multiple non-lethal methods with lethal methods results in the most effective airfield wildlife management strategy.
Non-lethal.
Non-lethal management can be further broken down into habitat manipulation, exclusion, visual, auditory, tactile, or chemical repellents, and relocation.
Habitat manipulation.
One of the primary reasons that wildlife is seen in airports is an abundance of food. Food resources on airports can be either removed or made less desirable. One of the most abundant food resources found on airports is turfgrass. This grass is planted to reduce runoff, control erosion, absorb jet wash, allow passage of emergency vehicles, and to be aesthetically pleasing. However, turfgrass is a preferred food source for species of birds that pose a serious risk to aircraft, chiefly the Canada goose ("Branta canadensis"). Turfgrass planted at airports should be a species that geese do not prefer (e.g. St. Augustine grass) and should be managed in such a way that reduces its attractiveness to other wildlife such as small rodents and raptors. It has been recommended that turfgrass be maintained at a height of 7–14 inches through regular mowing and fertilization.
Wetlands are another major attractant of wildlife in the airport environment. They are of particular concern because they attract waterfowl, which have a high potential to damage aircraft. With large areas of impervious surfaces, airports must employ methods to collect runoff and reduce its flow velocity. These best management practices often involve temporarily ponding runoff. Short of redesigning existing runoff control systems to include non-accessible water such as subsurface flow wetlands, frequent drawdowns and covering of exposed water with floating covers and wire grids should be employed. The implementation of covers and wire grids must not hinder emergency services.
Exclusion.
Though excluding birds (and flying animals in general) from the entire airport environment is virtually impossible, it is possible to exclude deer and other mammals that constitute a small percentage of wildlife strikes. Three-meter-high fences made of chain link or woven wire, with barbed wire outriggers, are the most effective. When used as a perimeter fence, these fences also serve to keep unauthorized people off of the airport. Realistically, every fence must have gates. Gates that are left open allow deer and other mammals onto the airport. 15 foot (4.6 meter) long cattle guards have been shown to be effective at deterring deer up to 98% of the time.
Hangars with open superstructures often attract birds to nest and roost in. Hangar doors are often left open to increase ventilation, especially in the evenings. Birds in hangars are in proximity to the airfield and their droppings are both a health and damage concern. Netting is often deployed across the superstructure of a hangar denying access to the rafters where the birds roost and nest while still allowing the hangar doors to remain open for ventilation and aircraft movements. Strip curtains and door netting may also be used but are subject to improper use (e.g. tying the strips to the side of the door) by those working in the hangar.
Visual repellents.
There have been a variety of visual repellent and harassment techniques used in airport wildlife management. They include using birds of prey and dogs, effigies, landing lights, and lasers. Birds of prey have been used with great effectiveness at landfills where there were large populations of feeding gulls. Dogs have also been used with success as visual deterrents and means of harassment for birds at airfields. Airport wildlife managers must consider the risk of knowingly releasing animals in the airport environment. Both birds of prey and dogs must be monitored by a handler when deployed and must be cared for, when not deployed. Airport wildlife managers must consider the economics of these methods.
Effigies of both predators and conspecifics have been used with success to disperse gulls and vultures. The effigies of conspecifics are often placed in unnatural positions where they can freely move with the wind. Effigies have been found to be the most effective in situations where the nuisance birds have other options (e.g. other forage, loafing, and roosting areas) available. Time to habituation varies.
Lasers have been used with success to disperse several species of birds. However, lasers are species-specific as certain species will only react to certain wavelengths. Lasers become more effective as ambient light levels decrease, thereby limiting effectiveness during daylight hours. Some species show a very short time to habituation. The risks of lasers to aircrews must be evaluated when determining whether or not to deploy lasers on airfields. Southampton Airport utilizes a laser device which disables the laser past a certain elevation, eliminating the risk of the beam being shone directly at aircraft and air traffic control tower.
Auditory repellents.
Auditory repellents are commonly used in both agricultural and aviation contexts. Devices such as propane exploders (cannons), pyrotechnics, and bioacoustics are frequently deployed on airports. Propane exploders are capable of creating noises of approximately 130 decibels. They can be programmed to fire at designated intervals, can be remote controlled, or motion activated. Due to their stationary and often predictable nature, wildlife quickly becomes habituated to propane cannons. Lethal control may be used to extend the effectiveness of propane exploders.
Pyrotechnics utilizing either an exploding shell or a screamer can effectively scare birds away from runways. They are commonly launched from a 12 gauge shotgun or a flare pistol, or from a wireless specialized launcher and as such, can be aimed to allow control personnel to "steer" the species that is being harassed. Birds show varying degrees of habituation to pyrotechnics. Studies have shown that lethal reinforcement of pyrotechnic harassment has extended its usefulness. Screamer type cartridges are still intact at the end of their flight (as opposed to exploding shells that destroy themselves) constituting a foreign object damage hazard and must be picked up. The use of pyrotechnics is considered "take" by the U.S. Fish and Wildlife Service (USFWS) and USFWS must be consulted if federally threatened or endangered species could be affected. Pyrotechnics are a potential fire hazard and must be deployed judiciously in dry conditions.
Bioacoustics, or the playing of conspecific distress or predator calls to frighten animals, is widely used. This method relies on the animal's evolutionary danger response. One limitation is that bioacoustics are species-specific and birds may quickly become habituated to them. They should therefore not be used as a primary means of control.
In 2012, operators at Gloucestershire Airport in England stated that songs by the American-Swiss singer Tina Turner were more effective than animal noises for scaring birds from its runways.
Tactile repellents.
Sharpened spikes to deter perching and loafing are commonly used. Generally, large birds require different applications than small birds do.
Chemical repellents.
There are only two chemical bird repellents registered for use in the United States, methyl anthranilate and anthraquinone. Methyl anthranilate is a primary repellent that produces an immediate unpleasant sensation that is reflexive and does not have to be learned. As such it is most effective for transient populations of birds. Methyl anthranilate has been used with great success at rapidly dispersing birds from flight lines at Homestead Air Reserve Station. Anthraquinone is a secondary repellent that has a laxative effect that is not instantaneous. Because of this it is most effective on resident populations of wildlife that will have time to learn an aversive response.
Relocation.
Relocation of raptors from airports is often considered preferable to lethal control methods by both biologists and the public. There are complex legal issues surrounding the capture and relocation of species protected by the Migratory Bird Treaty Act of 1918 and the Bald and Golden Eagle Protection Act of 1940. Prior to capture, proper permits must be obtained and the high mortality rates as well as the risk of disease transmission associated with relocation must be weighed. Between 2008 and 2010, U.S. Department of Agriculture Wildlife Services personnel relocated 606 red-tailed hawks from airports in the United States after the failure of multiple harassment attempts. The return rate of these hawks was 6%; the relocation mortality rate for these hawks was never determined.
Lethal.
Lethal wildlife control on airports falls into two categories: reinforcement of other non-lethal methods and population control.
Reinforcement.
The premise of effigies, pyrotechnics, and propane exploders is that there be a perceived immediate danger to the species to be dispersed. Initially, the sight of an unnaturally positioned effigy or the sound of pyrotechnics or exploders is enough to elicit a danger response from wildlife. As wildlife become habituated to non-lethal methods the culling of small numbers of wildlife in the presence of conspecifics can restore the danger response.
Population control.
Under certain circumstances, lethal wildlife control is needed to control the population of a species. This control can be localized or regional. Localized population control is often used to control species that are residents of the airfield such as deer that have bypassed the perimeter fence. In this instance sharpshooting would be highly effective, such as is seen at Chicago O'Hare International Airport.
Regional population control has been used on species that cannot be excluded from the airport environment. A nesting colony of laughing gulls at Jamaica Bay Wildlife Refuge contributed to 98–315 bird strikes per year, in 1979–1992, at adjacent John F. Kennedy International Airport (JFK). Though JFK had an active bird management program that precluded birds from feeding and loafing on the airport, it did not stop them from overflying the airport to other feeding sites. U.S. Department of Agriculture Wildlife Services personnel began shooting all gulls that flew over the airport, hypothesizing that eventually, the gulls would alter their flight patterns. They shot 28,352 gulls in two years (approximately half of the population at Jamaica Bay and 5–6% of the nationwide population per year). Strikes with laughing gulls decreased by 89% by 1992. However this was more a function of the population reduction than the gulls altering their flight pattern.
Flight path.
Pilots should not take off or land in the presence of wildlife and should avoid migratory routes, wildlife reserves, estuaries and other sites where birds may congregate. When operating in the presence of bird flocks, pilots should seek to climb above as rapidly as possible as most bird strikes occur below that altitude. Additionally, pilots should slow down their aircraft when confronted with birds. The energy that must be dissipated in the collision is approximately the relative kinetic energy (formula_0) of the bird, defined by the equation formula_1 where formula_2 is the mass of the bird and formula_3 is the relative velocity (the difference of the velocities of the bird and the plane, resulting in a lower absolute value if they are flying in the same direction and higher absolute value if they are flying in opposite directions). Therefore, the speed of the aircraft is much more important than the size of the bird when it comes to reducing energy transfer in a collision. The same can be said for jet engines: the slower the rotation of the engine, the less energy which will be imparted onto the engine at collision.
The body density of the bird is also a parameter that influences the amount of damage caused.
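The quadratic dependence on relative speed can be made concrete with a short calculation. The following Python sketch evaluates the kinetic-energy formula above for an assumed bird mass and a few illustrative relative speeds (the figures are chosen purely for illustration and are not taken from any certification standard):
```python
def impact_energy_joules(bird_mass_kg, relative_speed_ms):
    # E_k = 1/2 * m * v^2, with v the relative speed between bird and aircraft
    return 0.5 * bird_mass_kg * relative_speed_ms ** 2

# Hypothetical 1.8 kg gull met at three different relative speeds (m/s)
for v in (75.0, 150.0, 300.0):
    print(f"relative speed {v:5.0f} m/s -> impact energy {impact_energy_joules(1.8, v):9.0f} J")
# Doubling the relative speed quadruples the energy that must be dissipated,
# which is why slowing the aircraft matters more than the size of the bird.
```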
The United States Air Force (USAF)'s Avian Hazard Advisory System (AHAS) uses near-real-time data from the National Weather Service's NEXRAD system to provide current bird hazard conditions for published military low-level routes, ranges, and military operating areas (MOAs). Additionally, AHAS incorporates weather forecast data with the Bird Avoidance Model (BAM) to predict soaring bird activity within the next 24 hours and then defaults to the BAM for planning purposes when activity is scheduled outside the 24-hour window. The BAM is a static historical hazard model based on many years of bird distribution data from the Christmas Bird Count, the Breeding Bird Survey, and National Wildlife Refuge data. The BAM also incorporates potentially hazardous bird attractions such as landfills and golf courses. AHAS is now an integral part of military low-level mission planning, with aircrew being able to access the current bird hazard conditions at a dedicated website. AHAS will provide relative risk assessments for the planned mission and give aircrew the opportunity to select a less hazardous route should the planned route be rated severe or moderate. Prior to 2003, the USAF BASH Team bird strike database indicated that approximately 25% of all strikes were associated with low-level routes and bombing ranges. More importantly, these strikes accounted for more than 50% of all of the reported damage costs. After a decade of using AHAS for avoiding routes with severe ratings, the strike percentage associated with low-level flight operations has been reduced to 12% and associated costs cut in half.
Avian radar is an important tool for aiding in bird strike mitigation as part of overall safety management systems at civilian and military airfields. Properly designed and equipped avian radars can track thousands of birds simultaneously in real-time, night and day, through 360 degrees of coverage, out to ranges of and beyond for flocks, updating every target's position (longitude, latitude, altitude), speed, heading, and size every 2–3 seconds. Data from these systems can be used to generate information products ranging from real-time threat alerts to historical analyses of bird activity patterns in both time and space. The FAA and United States Department of Defense (DoD) have conducted extensive science-based field testing and validation of commercial avian radar systems for civil and military applications, respectively. The FAA used evaluations of commercial three-dimensional avian radar systems developed and marketed by Accipiter Radar as the basis for an advisory circular and a guidance letter on using Airport Improvement Program funds to acquire avian radar systems at Part 139 airports. Similarly, the DoD-sponsored Integration and Validation of Avian Radars (IVAR) project evaluated the functional and performance characteristics of Accipiter avian radars under operational conditions at Navy, Marine Corps, and Air Force airfields. Accipiter avian radar systems operating at Seattle–Tacoma International Airport, Chicago O'Hare International Airport, and Marine Corps Air Station Cherry Point made significant contributions to the evaluations carried out in the aforementioned FAA and DoD initiatives.
In 2003, a US company, DeTect, developed the only production model bird radar in operational use for real-time, tactical bird–aircraft strike avoidance by air traffic controllers. These systems are operational at both commercial airports and military airfields. The system has widely used technology available for BASH management and for real-time detection, tracking and alerting of hazardous bird activity at commercial airports, military airfields, and military training and bombing ranges. After extensive evaluation and on-site testing, MERLIN technology was chosen by NASA and was ultimately used for detecting and tracking dangerous vulture activity during the 22 Space Shuttle launches from 2006 to the conclusion of the program in 2011. The USAF has contracted DeTect since 2003 to provide the Avian Hazard Advisory System (AHAS) previously mentioned.
The Netherlands Organisation for Applied Scientific Research, a research and development organization, has developed the successful ROBIN (Radar Observation of Bird Intensity) for the Royal Netherlands Air Force (RNLAF). ROBIN is a near real-time monitoring system for flight movements of birds. ROBIN identifies flocks of birds within the signals of large radar systems. This information is used to warn air force pilots during take-off and landing. Years of observation of bird migration with ROBIN have also provided a better insight into bird migration behavior, which has had an influence on averting collisions with birds, and therefore on flight safety. Since the implementation of the ROBIN system at the RNLAF, the number of collisions between birds and aircraft in the vicinity of military airbases has decreased by more than 50%.
There are no civil aviation counterparts to the above military strategies. Some experimentation with small portable radar units has taken place at some airports, but no standard has been adopted for radar warning nor has any governmental policy regarding warnings been implemented.
History.
In aviation.
The Federal Aviation Administration (FAA) estimates bird strikes cost US aviation 400 million dollars annually and have resulted in over 200 worldwide deaths since 1988. In the United Kingdom, the Central Science Laboratory estimated that worldwide, birdstrikes cost airlines around US$1.2 billion annually. This includes repair cost and lost revenue while the damaged aircraft is out of service. In 2003, there were 4,300 bird strikes listed by the United States Air Force and 5,900 by US civil aircraft.
The first reported bird strike was by Orville Wright in 1905. According to the Wright brothers' diaries, "Orville [...] flew 4,751 meters in 4 minutes 45 seconds, four complete circles. Twice passed over the fence into Beard's cornfield. Chased flock of birds for two rounds and killed one which fell on top of the upper surface and after a time fell off when swinging a sharp curve."
During the 1911 Paris to Madrid air race, French pilot Eugène Gilbert encountered an angry mother eagle over the Pyrenees. Gilbert, flying an open-cockpit Blériot XI, was able to ward off the large bird by firing pistol shots at it but did not kill it.
The first recorded bird strike fatality was reported in 1912 when aero-pioneer Calbraith Rodgers collided with a gull which became jammed in his aircraft control cables. He crashed at Long Beach, California, was pinned under the wreckage, and drowned.
The greatest loss of life directly linked to a bird strike was on October 4, 1960, when a Lockheed L-188 Electra, flying from Boston as Eastern Air Lines Flight 375, flew through a flock of common starlings during take-off, damaging all four engines. The aircraft crashed into Boston Harbor shortly after takeoff, with 62 fatalities out of 72 passengers. Subsequently, minimum bird ingestion standards for jet engines were developed by the FAA.
NASA astronaut Theodore Freeman was killed in 1964 when a goose shattered the plexiglass cockpit canopy of his Northrop T-38 Talon. Shards of plexiglass were ingested by the engines, leading to a fatal crash.
On November 12, 1975, the flight crew of Overseas National Airways Flight 032 initiated a rejected takeoff after accelerating through a large flock of gulls at John F. Kennedy International Airport, resulting in a runway excursion. All 139 aircraft occupants survived, but the aircraft was destroyed by an intense post-crash fire. An investigation was carried out on the #3 engine by General Electric Aircraft Engines (GEAE) in Ohio. Disassembly revealed that several engine fan blades were damaged and broken, causing blades to abrade the epoxy fan shroud; as the epoxy combusted, it ignited jet fuel leaking from a broken fuel line. However, GEAE denied that the ingested birds were the underlying cause of the damage. Company investigators speculated that a tire or landing gear failure had occurred prior to the bird strikes, and that tire, wheel or landing gear debris ingested into the engine caused the fan blade damage and cut the fuel line. To demonstrate that the General Electric CF6 engine was capable of withstanding a bird strike, the National Transportation Safety Board conducted a test with a sample engine.
In 1988, Ethiopian Airlines Flight 604 sucked pigeons into both engines during takeoff and then crashed, killing 35 passengers.
In 1995, a Dassault Falcon 20 crashed at Paris–Le Bourget Airport during an emergency landing attempt after sucking lapwings into an engine, which caused an engine failure and a fire in the airplane's fuselage; all 10 people on board were killed.
On September 22, 1995, a U.S. Air Force Boeing E-3 Sentry AWACS aircraft (Callsign Yukla 27, serial number 77-0354), crashed shortly after takeoff from Elmendorf AFB. The aircraft lost power in both port side engines after these engines ingested several Canada geese during takeoff. It crashed about from the runway, killing all 24 crew members on board.
On November 28, 2004, the nose landing gear of KLM Flight 1673, a Boeing 737-400, struck a bird during takeoff at Amsterdam Airport Schiphol. The incident was reported to air traffic control, the landing gear was raised normally, and the flight continued normally to its destination. Upon touching down at Barcelona International Airport, the aircraft started deviating to the left of the runway centreline. The crew applied right rudder, braking, and the nose wheel steering tiller but could not keep the aircraft on the runway. After it veered off the paved surface of the runway at about 100 knots, the jet went through an area of soft sand. The nose landing gear leg collapsed and the left main landing gear leg detached from its fittings shortly before the aircraft came to a stop perched over the edge of a drainage canal. All 140 passengers and six crew evacuated safely, but the aircraft itself had to be written off. The cause was discovered to be a broken cable in the nose wheel steering system caused by the bird collision. Contributing to the snapped cable was the improper application of grease during routine maintenance which led to severe wear of the cable.
During the launch of STS-114 on July 26, 2005, a vulture was hit by the Space Shuttle Discovery shortly after liftoff. The collision proved fatal to the vulture, but the Space Shuttle was undamaged.
In April 2007, a Thomsonfly Boeing 757 from Manchester Airport to Lanzarote Airport suffered a bird strike when at least one bird, thought to be a crow, was ingested by the starboard engine. The plane landed safely back at Manchester Airport a while later. The incident was captured by two plane spotters on opposite sides of the airport, as well as the emergency calls picked up by a plane spotter's radio.
On November 10, 2008, Ryanair Flight 4102 from Frankfurt to Rome made an emergency landing at Ciampino Airport after multiple bird strikes caused both engines to fail. After touchdown, the left main landing gear collapsed, and the aircraft briefly veered off the runway. Passengers and crew were evacuated through the starboard emergency exits.
On January 4, 2009, a Sikorsky S-76 helicopter hit a red-tailed hawk in Louisiana. The hawk hit the helicopter just above the windscreen. The impact forced the activation of the engine fire suppression control handles, retarding the throttles and causing the engines to lose power. Eight of the nine people on board died in the subsequent crash; the survivor, a passenger, was seriously injured.
On January 15, 2009, US Airways Flight 1549 from LaGuardia Airport to Charlotte/Douglas International Airport ditched into the Hudson River after experiencing a loss of both turbines. The engine failure was caused by running into a flock of geese at an altitude of about , shortly after takeoff. All 150 passengers and 5 crew members were safely evacuated after a successful water landing. On May 28, 2010, the NTSB published its final report into the accident.
On August 15, 2019, Ural Airlines Flight 178 from Moscow–Zhukovsky to Simferopol, Crimea, suffered a bird strike after taking off from Zhukovsky and crash-landed in a cornfield 5 kilometers from the airport. 74 people sustained minor injuries.
On September 16, 2023, the Italian Frecce Tricolori Aermacchi MB-339 squadron departed from the Turin Airport for an airshow. One jet experienced a sudden loss of engine power shortly after takeoff, possibly due to a bird strike, and crashed. The pilot ejected before the ground impact and was admitted to the hospital for burn injuries. A five-year-old girl died in the crash and subsequent fireball, and three other people were brought to the hospital for burns.
In ground transportation.
During the 1952 edition of the Carrera Panamericana, Karl Kling and Hans Klenk suffered a bird strike incident when the Mercedes-Benz W194 was struck by a vulture in the windscreen. During a long right-hand bend in the opening stage taken at almost , Kling failed to spot vultures sitting by the side of the road. When the vultures were scattered after hearing the loud W194 coming towards them, one vulture impacted through the windscreen on the passenger side. The impact was severe enough to briefly knock Klenk unconscious. Despite bleeding badly from facial injuries caused by the shattered windscreen, Klenk ordered Kling to maintain speed. He waited until a tire change almost later to clean himself and the car up, and the two eventually won the race. For extra protection, eight vertical steel bars were bolted over the new windscreen. Kling and Klenk discussed the species and size of the dead bird, agreeing that it had a minimum wingspan and weighed as much as five fattened geese.
Alan Stacey's fatal accident during the 1960 Belgian Grand Prix was caused when a bird hit him in the face on lap 25, causing his Lotus 18-Climax to crash at the fast, sweeping right hand Burnenville curve. According to fellow driver Innes Ireland's testimony in a mid-1980s edition of "Road & Track" magazine, spectators claimed that a bird had flown into Stacey's face while he was approaching the curve. Ireland stated that the impact might have knocked him unconscious, or possibly killed him by breaking his neck or inflicting a fatal head injury even before the car crashed.
On lap 2 of the 1991 Daytona 500, driver Dale Earnhardt hit a seagull, causing cosmetic damage to the front of his car. Despite this, he fought his way up to second before spinning on the last lap and finishing fifth.
On March 30, 1999, during the inaugural run of the hypercoaster Apollo's Chariot in Virginia, passenger Fabio Lanzoni suffered a bird strike by a goose and required three stitches to his face. The roller coaster has a height of over 200 feet and reaches speeds over 70 miles per hour.
Bug strikes.
Flying insect strikes, like bird strikes, have been encountered by pilots since aircraft were invented. Future United States Air Force general Henry H. Arnold, as a young officer, nearly lost control of his Wright Model B in 1911 after a bug flew into his eye while he was not wearing goggles, distracting him.
In 1968, North Central Airlines Flight 261, a Convair 580, encountered large concentrations of insects between Chicago and Milwaukee. The accumulated insect remains on the windshield severely impaired the flight crew's forward visibility; as a result, while descending to land at Milwaukee, the aircraft suffered a mid-air collision with a private Cessna 150 that the Convair's flight crew had been unable to see until a split second before the collision, killing the three occupants of the Cessna and severely injuring the Convair's first officer.
In 1986, a Boeing B-52 Stratofortress on a low-level training mission entered a swarm of locusts. The insects' impacts on the aircraft's windscreens rendered the crew unable to see, forcing them to abort the mission and fly using the aircraft's instruments alone. The aircraft eventually landed safely.
In 2010, the Australian Civil Aviation Safety Authority (CASA) issued a warning to pilots about the potential dangers of flying through a locust swarm. CASA warned that the insects could cause loss of engine power, loss of visibility, and blockage of an aircraft's pitot tubes, leading to inaccurate airspeed readings.
Bug strikes can also affect the operation of machinery on the ground, especially motorcycles. The team on the US TV show "MythBusters" – in a 2010 episode entitled "Bug Special" – concluded that death could occur if a motorist were hit by a flying insect of sufficient mass in a vulnerable part of the body. Anecdotal evidence from motorcyclists supports pain, bruising, soreness, stings, and forced dismount caused by collision with an insect at speed.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_{k}"
},
{
"math_id": 1,
"text": "E_{k} = \\frac{1}{2} m v^{2}"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "v"
}
] | https://en.wikipedia.org/wiki?curid=1197818 |
1198015 | Decisional Diffie–Hellman assumption | The decisional Diffie–Hellman (DDH) assumption is a computational hardness assumption about a certain problem involving discrete logarithms in cyclic groups. It is used as the basis to prove the security of many cryptographic protocols, most notably the ElGamal and Cramer–Shoup cryptosystems.
Definition.
Consider a (multiplicative) cyclic group formula_0 of order formula_1, and with generator formula_2. The DDH assumption states that, given formula_3 and formula_4 for uniformly and independently chosen formula_5, the value formula_6 "looks like" a random element in formula_0.
This intuitive notion can be formally stated by saying that the following two probability distributions are computationally indistinguishable (in the security parameter, formula_7):
Triples of the first kind are often called DDH triplets or DDH tuples.
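The two distributions can be written down directly. The following Python sketch samples from each of them; the tiny safe prime p = 2039 (with q = 1019 and g = 4 generating the order-q subgroup of squares) is an assumption chosen purely for illustration and is far too small for any real cryptographic use:
```python
import secrets

# Toy parameters for illustration only -- far too small for real use.
q = 1019                  # prime order of the subgroup
p = 2 * q + 1             # 2039, a safe prime
g = 4                     # 4 = 2^2 generates the order-q subgroup of squares mod p

def ddh_tuple():
    a = secrets.randbelow(q)
    b = secrets.randbelow(q)
    return pow(g, a, p), pow(g, b, p), pow(g, a * b % q, p)

def random_tuple():
    a, b, c = (secrets.randbelow(q) for _ in range(3))
    return pow(g, a, p), pow(g, b, p), pow(g, c, p)

# The DDH assumption (in a suitable group) says that no efficient algorithm can
# tell which of the two samplers produced a given triple with non-negligible advantage.
print(ddh_tuple())
print(random_tuple())
```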
Relation to other assumptions.
The DDH assumption is related to the discrete log assumption. If it were possible to efficiently compute discrete logs in formula_0, then the DDH assumption would not hold in formula_0. Given formula_14, one could efficiently decide whether formula_15 by first taking the discrete formula_16 of formula_3, and then comparing formula_17 with formula_18.
DDH is considered to be a stronger assumption than the discrete logarithm assumption, because there are groups for which computing discrete logs is believed to be hard (and thus the DL assumption is believed to be true), but detecting DDH tuples is easy (and thus DDH is false). Because of this, requiring that the DDH assumption holds in a group is believed to be a more restrictive requirement than DL.
The DDH assumption is also related to the computational Diffie–Hellman assumption (CDH). If it were possible to efficiently compute formula_6 from formula_19, then one could easily distinguish the two probability distributions above. DDH is considered to be a stronger assumption than CDH, because an efficient algorithm for CDH, which computes formula_6 from formula_19, would immediately yield a distinguisher for DDH.
Other properties.
The problem of detecting DDH tuples is random self-reducible, meaning, roughly, that if it is hard for even a small fraction of inputs, it is hard for almost all inputs; if it is easy for even a small fraction of inputs, it is easy for almost all inputs.
Groups for which DDH is assumed to hold.
When using a cryptographic protocol whose security depends on the DDH assumption, it is important that the protocol is implemented using groups where DDH is believed to hold:
Importantly, the DDH assumption does not hold in the multiplicative group formula_33, where formula_32 is prime. This is because if formula_2 is a generator of formula_33, then the Legendre symbol of formula_3 reveals if formula_9 is even or odd. Given formula_3, formula_4 and formula_6, one can thus efficiently compute and compare the least significant bit of formula_9, formula_10 and formula_34, respectively, which provides a probabilistic method to distinguish formula_6 from a random group element.
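The parity argument above translates directly into a distinguisher. A minimal Python sketch, again using assumed toy parameters (p = 2039 and g = 7 as a generator of the full multiplicative group), is:
```python
import secrets

p = 2039   # toy prime, illustration only
g = 7      # assumed generator of the full multiplicative group mod p

def parity_of_exponent(x):
    # x = g^k mod p is a quadratic residue exactly when k is even, and the
    # Legendre symbol x^((p-1)/2) mod p equals 1 exactly for residues.
    return 0 if pow(x, (p - 1) // 2, p) == 1 else 1

def looks_like_ddh(ga, gb, z):
    # parity(a*b) is the AND of parity(a) and parity(b); a uniformly random
    # z fails this consistency check about half of the time.
    return (parity_of_exponent(ga) & parity_of_exponent(gb)) == parity_of_exponent(z)

a, b = secrets.randbelow(p - 1), secrets.randbelow(p - 1)
print(looks_like_ddh(pow(g, a, p), pow(g, b, p), pow(g, a * b, p)))   # always True
print(looks_like_ddh(pow(g, a, p), pow(g, b, p), pow(g, secrets.randbelow(p - 1), p)))  # True ~half the time
```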
The DDH assumption does not hold on elliptic curves over formula_31 with small embedding degree (say, less than formula_35), a class which includes supersingular elliptic curves. This is because the Weil pairing or Tate pairing can be used to solve the problem directly as follows: given formula_36 on such a curve, one can compute formula_37 and formula_38. By the bilinearity of the pairings, the two expressions are equal if and only if formula_39 modulo the order of formula_40. If the embedding degree is large (say around the size of formula_32) then the DDH assumption will still hold because the pairing cannot be computed. Even if the embedding degree is small, there are some subgroups of the curve in which the DDH assumption is believed to hold. | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "q"
},
{
"math_id": 2,
"text": "g"
},
{
"math_id": 3,
"text": "g^a"
},
{
"math_id": 4,
"text": "g^b"
},
{
"math_id": 5,
"text": "a,b \\in \\mathbb{Z}_q"
},
{
"math_id": 6,
"text": "g^{ab}"
},
{
"math_id": 7,
"text": "n=\\log(q)"
},
{
"math_id": 8,
"text": "(g^a,g^b,g^{ab})"
},
{
"math_id": 9,
"text": "a"
},
{
"math_id": 10,
"text": "b"
},
{
"math_id": 11,
"text": "\\mathbb{Z}_q"
},
{
"math_id": 12,
"text": "(g^a,g^b,g^c)"
},
{
"math_id": 13,
"text": "a,b,c"
},
{
"math_id": 14,
"text": "(g^a,g^b,z)"
},
{
"math_id": 15,
"text": "z=g^{ab}"
},
{
"math_id": 16,
"text": "\\log_g"
},
{
"math_id": 17,
"text": "z"
},
{
"math_id": 18,
"text": "(g^b)^a"
},
{
"math_id": 19,
"text": "(g^a,g^b)"
},
{
"math_id": 20,
"text": "\\mathbb{G}_q"
},
{
"math_id": 21,
"text": "k"
},
{
"math_id": 22,
"text": "p=kq+1"
},
{
"math_id": 23,
"text": "k=2"
},
{
"math_id": 24,
"text": "\\mathbb{Z}^*_p/\\{1,-1\\}"
},
{
"math_id": 25,
"text": "p=2q+1"
},
{
"math_id": 26,
"text": "\\{\\{1,-1\\},\\ldots\\{q,-q\\}\\}"
},
{
"math_id": 27,
"text": "\\{x,-x\\}"
},
{
"math_id": 28,
"text": "x"
},
{
"math_id": 29,
"text": "\\mathbb{Z}^*_p/\\{1,-1\\}\\equiv\\{1,\\ldots,q\\}"
},
{
"math_id": 30,
"text": "E"
},
{
"math_id": 31,
"text": "GF(p)"
},
{
"math_id": 32,
"text": "p"
},
{
"math_id": 33,
"text": "\\mathbb{Z}^*_p"
},
{
"math_id": 34,
"text": "ab"
},
{
"math_id": 35,
"text": "\\log^2(p)"
},
{
"math_id": 36,
"text": "P,aP,bP,cP"
},
{
"math_id": 37,
"text": "e(P,cP)"
},
{
"math_id": 38,
"text": "e(aP,bP)"
},
{
"math_id": 39,
"text": "ab=c"
},
{
"math_id": 40,
"text": "P"
}
] | https://en.wikipedia.org/wiki?curid=1198015 |
11980877 | Loewe additivity | In toxicodynamics and pharmacodynamics, Loewe additivity (or dose additivity) is one of several common reference models used for measuring the effects of drug combinations.
Definition.
Let formula_0 and formula_1 be doses of compounds 1 and 2 producing in combination an effect formula_2. We denote by formula_3 and formula_4 the doses of compounds 1 and 2 required to produce effect formula_2 alone (assuming these conditions uniquely define them, i.e. that the individual dose-response functions are bijective).
formula_5 quantifies the potency of compound 1 relatively to that of compound 2.
formula_6 can be interpreted as the dose formula_1 of compound 2 converted into the corresponding dose of compound 1 after accounting for difference in potency.
Loewe additivity is defined as the situation where formula_7 or
formula_8.
Geometrically, Loewe additivity is the situation where isoboles are segments joining the points formula_9 and formula_10 in the domain formula_11.
If we denote by formula_12, formula_13 and formula_14 the dose-response functions of compound 1, compound 2 and of the mixture respectively, then dose additivity holds when
formula_15
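As a minimal numerical sketch, the left-hand side of this equation (often called an interaction index) can be evaluated once the individual dose-response curves are known. The Hill-type curves and all parameter values below are assumptions chosen purely for illustration; a value of 1 corresponds to Loewe additivity.
```python
def hill_inverse(effect, ec50, hill_coef, e_max=1.0):
    # Dose needed to reach `effect` alone, i.e. the inverse of a Hill curve
    # E(d) = e_max * d**h / (ec50**h + d**h); this plays the role of D_e in the text.
    return ec50 * (effect / (e_max - effect)) ** (1.0 / hill_coef)

def interaction_index(d1, d2, observed_effect, params1, params2):
    # Left-hand side of the Loewe equation: d1/D_e1 + d2/D_e2,
    # evaluated at the effect actually produced by the combination.
    return (d1 / hill_inverse(observed_effect, *params1)
            + d2 / hill_inverse(observed_effect, *params2))

# Hypothetical compounds: EC50 = 1.0 and 4.0 dose units, Hill coefficient 1.
params1, params2 = (1.0, 1.0), (4.0, 1.0)
print(interaction_index(0.5, 2.0, observed_effect=0.5,
                        params1=params1, params2=params2))  # 1.0 -> Loewe additive
```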
Testing.
The Loewe additivity equation provides a prediction of the dose combination eliciting a given effect. Departure from Loewe additivity can be assessed informally by comparing this prediction to observations. This approach is known in toxicology as the model deviation ratio (MDR).
This approach can be made more rigorous by deriving approximate p-values with Monte Carlo simulation, as implemented in the R package MDR.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d_1"
},
{
"math_id": 1,
"text": "d_2"
},
{
"math_id": 2,
"text": "e"
},
{
"math_id": 3,
"text": "D_{e1}"
},
{
"math_id": 4,
"text": "D_{e2}"
},
{
"math_id": 5,
"text": "D_{e1}/D_{e2}"
},
{
"math_id": 6,
"text": "d_2 D_{e1}/D_{e2}"
},
{
"math_id": 7,
"text": "d_1 + d_2 D_{e1}/D_{e2} = D_{e1}"
},
{
"math_id": 8,
"text": "d_1 / D_{e1} + d_2/D_{e2} = 1"
},
{
"math_id": 9,
"text": "(D_{e1},0)"
},
{
"math_id": 10,
"text": "(0,D_{e2})"
},
{
"math_id": 11,
"text": "(d_1,d_2)"
},
{
"math_id": 12,
"text": "f_1(d_1)"
},
{
"math_id": 13,
"text": "f_2(d_2)"
},
{
"math_id": 14,
"text": "f_{12}(d_1,d_2)"
},
{
"math_id": 15,
"text": " \\frac{d_1}{f_1^{-1}(f_{12}(d_1,d_2))} + \\frac{d_2}{f_2^{-1}(f_{12}(d_1,d_2))} = 1"
}
] | https://en.wikipedia.org/wiki?curid=11980877 |
1198268 | Subgroup growth | In mathematics, subgroup growth is a branch of group theory, dealing with quantitative questions about subgroups of a given group.
Let formula_0 be a finitely generated group. Then, for each integer formula_1 define formula_2 to be the number of subgroups formula_3 of index formula_1 in formula_0. Similarly, if formula_0 is a topological group, formula_4 denotes the number of open subgroups formula_5 of index formula_1 in formula_0. One similarly defines formula_6 and formula_7 to denote the number of maximal and normal subgroups of index formula_1, respectively.
Subgroup growth studies these functions, their interplay, and the characterization of group theoretical properties in terms of these functions.
The theory was motivated by the desire to enumerate finite groups of given order, and the analogy with Mikhail Gromov's notion of word growth.
Nilpotent groups.
Let formula_8 be a finitely generated torsionfree nilpotent group. Then there exists a central series with infinite cyclic factors, which induces a bijection (though not necessarily a homomorphism).
formula_9
such that group multiplication can be expressed by polynomial functions in these coordinates; in particular, the multiplication is definable. Using methods from the model theory of p-adic integers, F. Grunewald, D. Segal and G. Smith showed that the local zeta function
formula_10
is a rational function in formula_11.
As an example, let formula_12 be the discrete Heisenberg group. This group has a "presentation" with generators formula_13 and relations
formula_14
Hence, elements of formula_12 can be represented as triples formula_15 of integers with group operation given by
formula_16
To each finite index subgroup formula_17 of formula_12, associate the set of all "good bases" of formula_18 as follows. Note that formula_12 has a normal series
formula_19
with infinite cyclic factors. A triple formula_20 is called a "good basis" of formula_17, if formula_21 generate formula_17, and formula_22. In general, it is quite complicated to determine the set of good bases for a fixed subgroup formula_17. To overcome this difficulty, one determines the set of all good bases of all finite index subgroups, and determines how many of these belong to one given subgroup. To make this precise, one has to embed the Heisenberg group over the integers into the group over p-adic numbers. After some computations, one arrives at the formula
formula_23
where formula_24 is the Haar measure on formula_25, formula_26 denotes the p-adic absolute value and formula_27 is the set of tuples of formula_28-adic integers
formula_29
such that
formula_30
is a good basis of some finite-index subgroup. The latter condition can be translated into
formula_31.
Now, the integral can be transformed into an iterated sum to yield
formula_32
where the final evaluation consists of repeated application of the formula for the value of the geometric series. From this we deduce that formula_33 can be expressed in terms of the Riemann zeta function as
formula_34
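The local factor can be sanity-checked by expanding it as a power series in t = p^{-s}, where by definition the coefficient of t^n is the number of subgroups of index p^n. A short Python sketch (assuming SymPy is available) performs the expansion symbolically:
```python
import sympy as sp

p, t = sp.symbols('p t', positive=True)

# Local factor of the Heisenberg group's subgroup zeta function,
# written in the variable t = p**(-s)
local_factor = (1 - p**3 * t**3) / (
    (1 - t) * (1 - p * t) * (1 - p**2 * t**2) * (1 - p**2 * t**3)
)

series = sp.expand(sp.series(local_factor, t, 0, 5).removeO())
for n in range(5):
    # coefficient of t**n = number of subgroups of index p**n
    print(n, sp.factor(series.coeff(t, n)))
# For n = 1 this gives p + 1: every index-p subgroup contains the commutator
# subgroup, so these correspond to the p + 1 index-p subgroups of Z^2.
```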
For more complicated examples, the computations become difficult, and in general one cannot expect a closed expression for formula_35. The local factor
formula_36
can always be expressed as a definable formula_28-adic integral. Applying a result of Macintyre on the model theory of formula_37-adic integers, one deduces again that formula_36 is a rational function in formula_11. Moreover, M. du Sautoy and F. Grunewald showed that the integral can be approximated by Artin L-functions. Using the fact that Artin L-functions are holomorphic in a neighbourhood of the line formula_39, they showed that for any torsionfree nilpotent group, the function formula_35 is meromorphic in the domain
formula_40
where formula_41 is the abscissa of convergence of formula_38, and formula_42 is some positive number, and holomorphic in some neighbourhood of formula_43. Using a Tauberian theorem this implies
formula_44
for some real number formula_41 and a non-negative integer formula_45.
Subgroup growth and coset representations.
Let formula_12 be a group, formula_17 a subgroup of index formula_46. Then formula_12 acts on the set of left cosets of formula_18 in formula_47 by left shift:
formula_48
In this way, formula_17 induces a homomorphism of formula_12 into the symmetric group on formula_49. formula_47 acts transitively on formula_49, and vice versa, given a transitive action of formula_12 on
formula_50
the stabilizer of the point 1 is a subgroup of index formula_46 in formula_12. Since the set
formula_51
can be permuted in
formula_52
ways, we find that formula_4 is equal to the number of transitive formula_47-actions divided by formula_52. Among all formula_8-actions, we can distinguish transitive actions by a sifting argument, to arrive at the following formula
formula_53
where formula_54 denotes the number of homomorphisms
formula_55
In several instances the function formula_54 is easier to approach than formula_4, and, if formula_54 grows sufficiently quickly, the sum is of negligible order of magnitude; hence, one obtains an asymptotic formula for formula_4.
As an example, let formula_56 be the free group on two generators. Then every map of the generators of formula_56 extends to a homomorphism
formula_57
that is
formula_58
From this we deduce
formula_59
For more complicated examples, the estimation of formula_54 involves the representation theory and statistical properties of symmetric groups.
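The recursion is straightforward to implement. The following Python sketch is a direct transcription of the formula above, using the fact that the number of homomorphisms from the free group of rank two into the symmetric group of degree n is (n!)^2:
```python
from math import factorial

def num_subgroups_free_rank2(n_max):
    """s_n(F_2) from the sifting recursion, with h_n(F_2) = (n!)**2."""
    h = lambda n: factorial(n) ** 2          # number of homomorphisms F_2 -> S_n
    s = {}
    for n in range(1, n_max + 1):
        value = h(n) // factorial(n - 1)
        for v in range(1, n):
            value -= h(n - v) * s[v] // factorial(n - v)
        s[n] = value
    return s

print(num_subgroups_free_rank2(6))   # {1: 1, 2: 3, 3: 13, 4: 71, 5: 461, 6: 3447}
```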
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "a_n(G)"
},
{
"math_id": 3,
"text": "H"
},
{
"math_id": 4,
"text": "s_n(G)"
},
{
"math_id": 5,
"text": "U"
},
{
"math_id": 6,
"text": "m_n(G)"
},
{
"math_id": 7,
"text": "s_n^\\triangleleft(G)"
},
{
"math_id": 8,
"text": "G "
},
{
"math_id": 9,
"text": "\\mathbb{Z}^n \\longrightarrow G "
},
{
"math_id": 10,
"text": "\n\\zeta_{G, p}(s) = \\sum_{\\nu=0}^\\infty s_{p^n}(G) p^{-ns}\n"
},
{
"math_id": 11,
"text": "p^{-s} "
},
{
"math_id": 12,
"text": " G "
},
{
"math_id": 13,
"text": "x, \\, y, \\, z "
},
{
"math_id": 14,
"text": "\n[x, y] = z, [x, z] = [y, z] = 1.\n"
},
{
"math_id": 15,
"text": " (a,\\, b, \\, c) "
},
{
"math_id": 16,
"text": "\n(a, b, c)\\circ(a', b', c') = (a+a', b+b', c+c'+ab').\n "
},
{
"math_id": 17,
"text": " U "
},
{
"math_id": 18,
"text": " U"
},
{
"math_id": 19,
"text": "\nG=\\langle x, y, z\\rangle\\triangleright\\langle y, z\\rangle\\triangleright\\langle z\\rangle\\triangleright 1\n"
},
{
"math_id": 20,
"text": "(g_1, g_2, g_3) \\in G "
},
{
"math_id": 21,
"text": "g_1, g_2, g_3 "
},
{
"math_id": 22,
"text": "g_2\\in\\langle y, z\\rangle, g_3\\in\\langle z\\rangle"
},
{
"math_id": 23,
"text": "\n\\zeta_{G, p}(s) = \\frac{1}{(1-p^{-1})^3}\\int_\\mathcal{M} |a_{11}|_p^{s-1} |a_{22}|_p^{s-2} |a_{33}|_p^{s-3}\\;d\\mu,\n"
},
{
"math_id": 24,
"text": "\\mu "
},
{
"math_id": 25,
"text": "\\mathbb{Z}_p "
},
{
"math_id": 26,
"text": "|\\cdot|_p"
},
{
"math_id": 27,
"text": "\\mathcal{M}"
},
{
"math_id": 28,
"text": " p "
},
{
"math_id": 29,
"text": "\n\\{a_{11}, a_{12}, a_{13}, a_{22}, a_{23}, a_{33}\\}\n"
},
{
"math_id": 30,
"text": "\n\\{x^{a_{11}}y^{a_{12}}z^{a_{13}}, y^{a_{22}}z^{a_{23}}, z^{a_{33}}\\}\n"
},
{
"math_id": 31,
"text": "a_{33}|a_{11}\\cdot a_{22}"
},
{
"math_id": 32,
"text": "\n\\zeta_{G, p}(s) = \\sum_{a\\geq 0}\\sum_{b\\geq 0}\\sum_{c=0}^{a+b} p^{-as-b(s-1)-c(s-2)} = \\frac{1-p^{3-3s}}{(1-p^{-s})(1-p^{1-s})(1-p^{2-2s})(1-p^{2-3s})}\n"
},
{
"math_id": 33,
"text": "\\zeta_G (s) "
},
{
"math_id": 34,
"text": "\n\\zeta_G(s) = \\frac{\\zeta(s)\\zeta(s-1)\\zeta(2s-2)\\zeta(2s-3)}{\\zeta(3s-3)}.\n"
},
{
"math_id": 35,
"text": " \\zeta_G(s)"
},
{
"math_id": 36,
"text": "\\zeta_{G, p}(s)"
},
{
"math_id": 37,
"text": " p"
},
{
"math_id": 38,
"text": "\\zeta_G(s) "
},
{
"math_id": 39,
"text": "\\Re (s)=1"
},
{
"math_id": 40,
"text": "\\Re(s)>\\alpha-\\delta "
},
{
"math_id": 41,
"text": "\\alpha "
},
{
"math_id": 42,
"text": " \\delta "
},
{
"math_id": 43,
"text": "\\Re (s)=\\alpha"
},
{
"math_id": 44,
"text": "\n\\sum_{n\\leq x} s_n(G) \\sim x^\\alpha\\log^k x\n"
},
{
"math_id": 45,
"text": " k "
},
{
"math_id": 46,
"text": " n"
},
{
"math_id": 47,
"text": " G"
},
{
"math_id": 48,
"text": "g(hU)=(gh)U."
},
{
"math_id": 49,
"text": "G/U"
},
{
"math_id": 50,
"text": "\\{1, \\ldots, n\\},"
},
{
"math_id": 51,
"text": "\\{2, \\ldots, n\\}"
},
{
"math_id": 52,
"text": "(n-1)!"
},
{
"math_id": 53,
"text": "\ns_n(G) = \\frac{h_n(G)}{(n-1)!} - \\sum_{\\nu=1}^{n-1} \\frac{h_{n-\\nu}(G)s_\\nu(G)}{(n-\\nu)!},\n"
},
{
"math_id": 54,
"text": "h_n(G)"
},
{
"math_id": 55,
"text": "\\varphi:G\\rightarrow S_n."
},
{
"math_id": 56,
"text": "F_2"
},
{
"math_id": 57,
"text": "F_2\\rightarrow S_n,"
},
{
"math_id": 58,
"text": "h_n(F_2)=(n!)^2."
},
{
"math_id": 59,
"text": "s_n(F_2)\\sim n\\cdot n!."
}
] | https://en.wikipedia.org/wiki?curid=1198268 |
1198288 | Group B | Motor racing regulations
Group B was a set of regulations for grand touring (GT) vehicles used in sports car racing and rallying, introduced in 1982 by the Fédération Internationale de l'Automobile (FIA). Although Group B cars were permitted to enter a GT class of the World Sportscar Championship alongside the more popular racing prototypes of Group C, in popular culture they are most commonly associated with the international rallying scene from 1982 to 1986, when Group B was the highest class used across rallying, including the World Rally Championship and regional and national championships.
The Group B regulations fostered some of the fastest, most powerful, and most sophisticated rally cars ever built, and their era is commonly referred to as the golden era of rallying. However, a series of major accidents, some fatal, were blamed on their outright speed and on the lack of crowd control at events. After the deaths of Henri Toivonen and his co-driver Sergio Cresto in the 1986 Tour de Corse, the FIA banned the group from competing in the WRC from the following season, dropped its prior plans to introduce Group S, and designated Group A as the top-line formula with engine limits of 2000 cc and 300 bhp.
In the following years, ex-rally Group B cars found a niche in the European Rallycross Championship until being dropped in 1993. By 1991 the World Sportscar Championship had moved on from Group B and C, with the GT championships formed in the nineties preferring other classes such as the new Group GT. The last cars were homologated in Group B in 1993, though the FIA made provisions for national championships and domestic racing until as late as 2011.
Overview.
New FISA groups.
In 1982 the FISA restructured the production car category of Appendix J to consist of three new groups. The outgoing Group 1 and Group 2 were replaced with Group N and Group A for unmodified and modified production touring cars respectively. These cars had to have four seats (although the minimum size of the rear seats was small enough that some 2+2 cars could qualify) and be produced in large numbers. Their homologation requirement was 5000 units in a 12-month period between 1982 and 1992. From 1993 the requirement was reduced to 2500 units.
Group B was for grand touring (GT) cars with a minimum of two seats, redefined as sports grand touring cars in 1986. It combined and replaced Group 3 and Group 4, two grand touring groups already used in rallying, and the production-derived special builds of Group 5 used in circuit racing. Group 5 had never been permitted in the World Rally Championship for Manufacturers.
Homologation.
The number of cars required for homologation, 200, was just 4% of the other groups' requirement and half what was previously accepted in Group 4. As the homologation periods could be extended by producing only 10% of the initial requirement each subsequent year, 20 in Group B's case compared to 500 for A and N, the group made motorsport and the championships more accessible for car manufacturers before taking the group's technicalities and performance into account. 'Evolutions' could be included within the original homologation without needing to produce a new initial run, allowing manufacturers to tweak various aspects of their competing car within the requirement to produce only 20 'evolved' cars. Together, these homologation rules made Group B 'homologation specials' (cars produced only to satisfy the group quota rather than for sale) extremely rare, if they continued to exist beyond presentation to FIA officials in the first place.
Group B could be used to homologate production sports cars which could not be homologated in Group N or A, because they did not have four seats or were not produced in large enough numbers (e.g. cars like the Ferrari 308, the Porsche 911, etc.). Further, the low production requirement encouraged manufacturers to use space frame chassis instead of bodyshells typically used in most series-production road cars.
Existing cars within Group 2, Group 3 and Group 4 homologation could be transferred to Group B, with many being automatically transferred by the FISA secretariat.
Regulations.
Specific regulations.
Group B followed Articles 252 and 253, which covered such things as safety cages and parts defining a car, such as windscreens or rear-view mirrors. Article 256 covered the specific regulations for Group B in five paragraphs over half a page and incorporated most of the seven pages of Article 255 (Group A).
The section, "3) FITTINGS AND MODIFICATIONS ALLOWED" states, "All those allowed for Group A..." These rules give the base rule sets of what is allowed to be modified, how it can be modified, and what can be removed from the homologation road cars.
If a car has a supercharger (this includes turbochargers), then the engine capacity is considered 1.4 times larger for its other restrictions stated above. If the engine is a rotary or similar, then the capacity is considered to be "twice the volume determined between the maximum and minimum capacity of the combustion chamber." The equivalent capacity, formula_0, for a turbine engine is much more complicated, derived with the formula formula_1 (1982) or formula_2 (1986), where formula_3 is the "high pressure nozzle area" (cm2), and formula_4/formula_5 is the "pressure ratio" of the compressor.
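The displacement equivalence for the common engine types amounts to a simple multiplication. The following Python sketch (function and parameter names are illustrative; the turbine formula is omitted) reproduces the figures used in the class limits listed under "Resulting builds" below:
```python
def equivalent_capacity_cc(actual_cc, forced_induction=False, rotary=False):
    # Turbo- or supercharged engines are rated at 1.4 times their swept volume;
    # rotary engines at twice the difference between maximum and minimum
    # combustion-chamber volume (passed here as actual_cc).
    if rotary:
        return 2.0 * actual_cc
    return actual_cc * 1.4 if forced_induction else actual_cc

print(equivalent_capacity_cc(2142.8, forced_induction=True))  # ~3000 cc class (e.g. Audi Quattro)
print(equivalent_capacity_cc(1785.0, forced_induction=True))  # 2499 cc -> 2500 cc class (e.g. 205 T16)
```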
Resulting builds.
Ultimately, there were few restrictions on technology, design or materials permitted. For example, fibreglass bodywork was used in the case of the Ford RS200, a car without a commercially available counterpart, though silhouette race cars using space frame chassis were still common even when consumer car equivalents were mass produced, for example in the case of the Peugeot 205 T16 or Lancia Delta S4.
The rules provided for manufacturers who wanted to compete in rallying with mid-engine and RWD or 4WD, but their RWD production models had been gradually replaced by FWD counterparts. By reducing the homologation minimum from 400 in Group 4 to 200, FISA enabled manufacturers to design specialized RWD or 4WD rally car homologation specials without the financial commitment of producing their production counterparts in such large numbers.
There were no restrictions on boost, and as a result the power output of the winning cars increased from 250 hp in 1981 to at least two cars producing in excess of 500 hp by 1986, the final year of Group B in rallying. Turbocharged engines were not common in commercial cars and had only been introduced in the early 1960s, but in the early and mid-1980s engineers learnt how to extract extraordinary amounts of power from turbo engines. Some Group B manufacturers went further: Peugeot, for example, fitted an F1-derived system intended to reduce turbo lag to their engine, although the technology was new and not very effective. Lancia twincharged their Delta S4, adding both a supercharger and a turbocharger to the engine. When the Group N, A and B rules were decided, weight and engine displacement restrictions were thought to be the only way to control speed. Nowadays, the power of turbo engines is limited by mandating a restrictor in the intake, and the Groups Rally hierarchy, for example, sets limits on weight per engine power (kg/hp) for each class.
Within all the groups, there were 15 classes based on engine displacement with a 1.4 equivalence factor applied for forced induction engines. Each class had weight limits and wheel sizes. Notable classes for Group B were the 3000 cc class (2142.8 cc with turbo or supercharger), 960 kg minimum weight (Audi Quattro, Lancia 037); and 2500 cc (1785 cc), 890 kg (Peugeot 205 T16, Lancia Delta S4).
The original Renault 5 Turbo had a 1.4 L engine so it was in the 2000 cc class. Renault later increased the size of the engine somewhat for the Turbo Maxi, so as to be able to fit larger tires (at the expense of somewhat higher weight). The Ferrari 288 GTO and the Porsche 959 were in the 4000 cc (2857 cc), 1100 kg class, which would have probably become the normal class for track racing if Group B had seen much use there.
Classes in Group B:
Rallying.
1982–1983.
The existing Groups 1–4 were still permitted in the World Rally Championship during the first year of the new groups. Although some freshly homologated Group B cars were entered from the first round in Monte Carlo, no car from the group podiumed at any of the season's 12 rallies.
Although the Audi Quattro was still in essence a Group 4 car, it carried Hannu Mikkola to the drivers' title in 1983. Lancia had designed a new car to Group B specifications, but the Lancia 037 still had rear-wheel drive and was thus less stable than the Audi over different surfaces (generally the Lancia had the upper hand on tarmac, but the Audi remained superior on looser surfaces such as snow and gravel). Nevertheless, the 037 performed well enough for Lancia to capture the manufacturers' title, which was generally considered more prestigious at the time, with a rally to spare. In fact, so low was Lancia's regard for the drivers' championship that they did not enter a single car into the season finale, the RAC Rally, despite the fact that driver Walter Röhrl was still in the hunt for the title. This may have been, in part, because Röhrl "never dreamed of becoming a world champion."
The low homologation requirements quickly attracted manufacturers to Group B. Opel replaced their production-derived Ascona with the Group B Manta 400, and Toyota built a new car based on their Celica. Like the Lancia 037, both cars were rear wheel drive, and while proving successful in national rallying in various countries, they were less so at the World Championship level, although Toyota won the 1983 Ivory Coast Rally after hiring Swedish desert driving specialist, the late Björn Waldegård.
1984–1985.
In 1984, Audi beat Lancia for both the manufacturers' title and the drivers' title, the latter of which was won by Stig Blomqvist, but received an unexpected new competition midway through the year: Peugeot had joined the rallying scene with its Group B 205 T16. The T16 also had four wheel drive and was smaller and lighter than the Audi Quattro. At the wheel was the 1981 driver's champion Ari Vatanen, with future Ferrari Formula One team manager and FIA President Jean Todt overseeing the operation. A crash prevented the T16 from winning its first rally but the writing was on the wall for Audi.
Despite massive revisions to the Quattro, including a shorter wheelbase, Peugeot dominated the 1985 season, although not without mishap: Vatanen plunged off the road in Argentina and was seriously injured when his seat mountings broke in the ensuing crash. Timo Salonen won the 1985 drivers' title with five wins.
Although the crash was a sign that Group B cars had already become dangerously quick (despite Vatanen having a consistent record of crashing out while leading), several new Group B cars entered the rallying world in 1985:
1986.
For the 1986 season, defending champion Timo Salonen had the new Evolution 2 version of Peugeot's 205 T16 and was joined by ex-Toyota driver Juha Kankkunen. Audi's new Sport Quattro S1 boasted over 600 hp (450 kW) and a huge snowplow-like front end. Lancia's Delta S4 would be in the hands of the Finnish prodigy Henri Toivonen and Markku Alén, and Ford was ready with its high-tech RS200 with Stig Blomqvist and Kalle Grundel.
On the "Lagoa Azul" stage of the Portuguese Rally near Sintra, Portuguese driver Joaquim Santos crested a rise, turning to his right to avoid a small group of spectators. This caused him to lose control of his RS200. The car veered to the right and slid off the road into another group of spectators. Thirty-one people were injured and three were killed. All the top teams immediately pulled out of the rally and Group B was placed in jeopardy.
Disaster struck again in early May at the Tour de Corse. Lancia's Toivonen was the championship favorite, and once the rally got underway he was the pace setter. Seven kilometers into the 18th stage, Toivonen's S4 flew off the unguarded edge of a tightening left hand bend and plunged down a steep wooded hillside. The car landed inverted with the fuel tanks ruptured by the impact. The combination of a red hot turbocharger, Kevlar bodywork, and the ruptured fuel tank ignited the car and set fire to the dry undergrowth. Toivonen and co-driver Sergio Cresto died in their seats. With no witnesses to the accident it was impossible to determine what caused the crash other than Toivonen had left the road at high speed. Some cite Toivonen's ill health at the time (he reportedly was suffering from flu); others suggest mechanical failure, or simply the difficulty of driving the car, although Toivonen, like Vatanen, had a career full of crashing out while leading rallies. Up until that stage he was leading the rally by a large margin, with no other driver challenging him.
The crash came a year after Lancia driver Attilio Bettega had crashed and died in his 037. While that fatality was largely blamed on the unforgiving Corsican scenery (and bad luck, as his co-driver, Maurizio Perissinot, was unharmed), Toivonen and Cresto's deaths, combined with the Portugal tragedy and televised accident of F1 driver Marc Surer in another RS200 which killed co-driver Michel Wyder, compelled the FIA to ban all Group B cars immediately for 1987. Audi decided to quit Group B entirely after the Corsica rally.
The final days of Group B were also controversial. The Peugeots were disqualified from the Rally Sanremo by the Italian scrutineers as the 'skirts' around the bottom of the car were found to be illegal. Peugeot immediately accused the Italians of favouring Lancia. Their case was strengthened at the next event, the RAC Rally, when the British scrutineers passed the Peugeots as legal in identical trim. FISA annulled the result of the Sanremo Rally eleven days after the final round in the United States. As a result, the championship title was passed from Lancia's Markku Alén to Peugeot's Juha Kankkunen. Timo Salonen had won another two rallies during the 1986 season and became the most successful group B era driver with a total of seven wins.
Beyond WRC.
Although 1987 saw the end of Group B rally car development and their appearance on the world rally scene, they did not disappear. They were still permitted in regional championships providing they met the limit of 1600cc for four-wheel-drive or were homologated prior to 1984. Future FIA president Mohammed Ben Sulayem was one privateer who contested rounds of the 1987 Middle East Rally Championship in an Audi Quattro A2 and Opel Manta 400. Independent teams would enter the European Championship too, though the limited options of permitted Group B cars were not as competitive or ubiquitous as newer Group A cars.
Porsche's 959 never entered a WRC event, although it did compete in the Middle East championship and won the Paris-Dakar Rally in 1986. Peugeot adapted their T16 to run in the Dakar Rally. Ari Vatanen won the event in 1987, 1989 and 1990. Improved Peugeot and Audi cars also competed in the Pikes Peak Hillclimb in Colorado. Walter Röhrl's S1 Rally car won the Pikes Peak International Hill Climb in 1987 and set a new record at the time. Audi used their Group B experience to develop a production based racing car for the Trans-Am and IMSA GTO series in 1988 and 1989 respectively.
Many ex-rally cars found homes in European Rallycross events from the beginning of 1987 until the end of 1992. The MG Metro 6R4 and Ford RS200 became frequent entries in national championships. For 1993, the FIA replaced the Group B models in the European Rallycross Championship with prototypes that had to be based on existing Group A models.
Group S.
The cancellation of Group B, coupled with the tragedies of 1986, brought about the scrapping of Group B's proposed replacement: Group S.
Group S rules would have limited car engine power to 300 hp (225 kW). To encourage innovative designs, ten examples of a car would have been required for homologation, rather than the 200 required for Group B. By the time of its cancellation, at least four Group S prototypes had been built: The Lancia ECV, the Toyota MR2-based 222D, the Opel Kadett Rallye 4x4 (a.k.a. Vauxhall Astra 4S) and the Lada Samara S-proto, and new cars were also planned by both Audi (the 002 Quattro) and Ford (a Group S modification of the RS200). The cancellation of Group S angered many rally insiders who believed the new specification to be safer than Group B and more exciting than Group A.
The Group S concept was revived by the FIA in 1997 as the World Rally Car specification, which persisted until 2021. WRC cars were limited to and required 2500 examples of a model but, unlike Group S, also had to share certain parts with their base production models.
Circuit racing.
From their introduction in 1982, Group B cars found a home in the World Endurance Championship, a new name for the World Sports Car Championship, though they were secondary to the racing prototype Group C cars. The 1983 season had the first significant entry list, including Porsche 930, BMW M1 and Ferrari 308 GTB LM vehicles. Porsche won the FIA GT Cup in 1983, handing it over to BMW in 1984 and 1985. From 1986 the championship retired Group B in favor of IMSA-regulated cars, and the championship became known as the World Sports-Prototype Championship the same year.
The Porsche 961 prototype, intended to be the basis for a Group B homologation, won the GTX class at the 1986 24 Hours of Le Mans but crashed and caught fire in 1987. The Ferrari 288 GTO was built and sold to the public in the minimum required numbers, but never saw action in its category. The WSPC grids it was intended for were filled up by Group C cars (there would be no production sports car-based racers in European racing, including Le Mans, until 1993), but it saw limited use in an IMSA GTO race in 1989.
Legacy.
The era of Group B is often considered one of the most competitive and compelling periods in rallying. The combination of a lightweight chassis, sophisticated aerodynamics and massive amounts of horsepower resulted in the development of a class of cars whose performance has not yet been surpassed within their category, even three decades later. In reference to their dubious safety record, the class has also earned an unsavory nickname among rally enthusiasts: "Killer B's". In contrast to this, many enthusiasts refer to the Group B era as the Golden Age of Rallying.
Many racing video games feature Group B cars for the player to drive. The 2017 video game "Gran Turismo Sport" features a rally car category known as "Gr. B", an obvious homage to Group B. This particular category features predominantly fictional rally cars based on newer models, such as the Mitsubishi Lancer Evolution X and the Subaru WRX STI, although it does include the Pikes Peak version of the Audi Quattro. For the game's sequel, "Gran Turismo 7", an actual Group B car (the Peugeot 205) was added to the class.
Cars.
Group B.
This list includes under-development and prototype cars that did not receive homologation.
Notes
<templatestyles src="Reflist/styles.css" />
Notable drivers.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "C = \\frac{S(( 3.10 \\times T ) - 7.63) }{0.09625}"
},
{
"math_id": 2,
"text": "C = \\frac{S( 3.10 \\times R ) - 7.63}{0.09625}"
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "R"
}
] | https://en.wikipedia.org/wiki?curid=1198288 |
11985661 | Finitely generated algebra | In mathematics, a finitely generated algebra (also called an algebra of finite type) is a commutative associative algebra "A" over a field "K" where there exists a finite set of elements "a"1...,"a""n" of "A" such that every element of "A" can be expressed as a polynomial in "a"1...,"a""n", with coefficients in "K".
Equivalently, there exist elements formula_0 such that the evaluation homomorphism at formula_1
formula_2
is surjective; thus, by applying the first isomorphism theorem, formula_3.
Conversely, formula_4 for any ideal formula_5 is a formula_6-algebra of finite type; indeed, any element of formula_7 is a polynomial in the cosets formula_8 with coefficients in formula_6. Therefore, we obtain the following characterisation of finitely generated formula_6-algebras:
formula_7 is a finitely generated formula_6-algebra if and only if it is isomorphic as a formula_6-algebra to a quotient ring of the type formula_9 by an ideal formula_10.
If it is necessary to emphasize the field "K" then the algebra is said to be finitely generated over "K". Algebras that are not finitely generated are called infinitely generated.
Relation with affine varieties.
Finitely generated reduced commutative algebras are basic objects of consideration in modern algebraic geometry, where they correspond to affine algebraic varieties; for this reason, these algebras are also referred to as (commutative) affine algebras. More precisely, given an affine algebraic set formula_11 we can associate a finitely generated formula_6-algebra
formula_12
called the affine coordinate ring of formula_13; moreover, if formula_14 is a regular map between the affine algebraic sets formula_11 and formula_15, we can define a homomorphism of formula_6-algebras
formula_16
then, formula_17 is a contravariant functor from the category of affine algebraic sets with regular maps to the category of reduced finitely generated formula_6-algebras: this functor turns out to be an equivalence of categories
formula_18
and, restricting to affine varieties (i.e. irreducible affine algebraic sets),
formula_19
Finite algebras vs algebras of finite type.
We recall that a commutative formula_20-algebra formula_7 is a commutative ring formula_7 together with a ring homomorphism formula_21; the formula_20-module structure of formula_7 is defined by
formula_22
An formula_20-algebra formula_7 is called finite if it is finitely generated as an formula_20-module, i.e. there is a surjective homomorphism of formula_20-modules
formula_23
Again, there is a characterisation of finite algebras in terms of quotients
An formula_20-algebra formula_7 is finite if and only if it is isomorphic to a quotient formula_24 by an formula_20-submodule formula_25.
By definition, a finite formula_20-algebra is of finite type, but the converse is false: the polynomial ring formula_26 is of finite type but not finite.
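As a concrete sketch of this distinction (an illustration, not part of the article; the base field Q and the ideal are choices made here): in the quotient Q[x]/(x² − 2), every class is represented by a remainder of degree less than two, so the classes of 1 and x generate the quotient as a Q-module and the algebra is finite, whereas no finite set of module generators suffices for Q[x] itself.

import sympy as sp

x = sp.symbols('x')
modulus = x**2 - 2                      # generator of the ideal I = (x**2 - 2) in Q[x]

# Reducing any polynomial modulo I leaves a remainder of degree < 2, i.e. a
# Q-linear combination of 1 and x: Q[x]/I is a finite Q-algebra, not merely
# an algebra of finite type.
p = 3*x**5 - x**3 + 7*x + 1
print(sp.rem(p, modulus, x))            # 17*x + 1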
Finite algebras and algebras of finite type are related to the notions of finite morphisms and morphisms of finite type.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a_1,\\dots,a_n\\in A"
},
{
"math_id": 1,
"text": "{\\bf a}=(a_1,\\dots,a_n)"
},
{
"math_id": 2,
"text": "\\phi_{\\bf a}\\colon K[X_1,\\dots,X_n]\\twoheadrightarrow A"
},
{
"math_id": 3,
"text": "A \\simeq K[X_1,\\dots,X_n]/{\\rm ker}(\\phi_{\\bf a})"
},
{
"math_id": 4,
"text": "A:= K[X_1,\\dots,X_n]/I"
},
{
"math_id": 5,
"text": " I\\subseteq K[X_1,\\dots,X_n]"
},
{
"math_id": 6,
"text": "K"
},
{
"math_id": 7,
"text": "A"
},
{
"math_id": 8,
"text": "a_i:=X_i+I, i=1,\\dots,n"
},
{
"math_id": 9,
"text": "K[X_1,\\dots,X_n]/I"
},
{
"math_id": 10,
"text": "I\\subseteq K[X_1,\\dots,X_n]"
},
{
"math_id": 11,
"text": "V\\subseteq \\mathbb{A}^n"
},
{
"math_id": 12,
"text": "\\Gamma(V):=K[X_1,\\dots,X_n]/I(V)"
},
{
"math_id": 13,
"text": "V"
},
{
"math_id": 14,
"text": "\\phi\\colon V\\to W"
},
{
"math_id": 15,
"text": "W\\subseteq \\mathbb{A}^m"
},
{
"math_id": 16,
"text": "\\Gamma(\\phi)\\equiv\\phi^*\\colon\\Gamma(W)\\to\\Gamma(V),\\,\\phi^*(f)=f\\circ\\phi,"
},
{
"math_id": 17,
"text": "\\Gamma"
},
{
"math_id": 18,
"text": "\\Gamma\\colon\n(\\text{affine algebraic sets})^{\\rm opp}\\to(\\text{reduced finitely generated }K\\text{-algebras}),"
},
{
"math_id": 19,
"text": "\\Gamma\\colon\n(\\text{affine algebraic varieties})^{\\rm opp}\\to(\\text{integral finitely generated }K\\text{-algebras})."
},
{
"math_id": 20,
"text": "R"
},
{
"math_id": 21,
"text": "\\phi\\colon R\\to A"
},
{
"math_id": 22,
"text": " \\lambda \\cdot a := \\phi(\\lambda)a,\\quad\\lambda\\in R, a\\in A."
},
{
"math_id": 23,
"text": " R^{\\oplus_n}\\twoheadrightarrow A."
},
{
"math_id": 24,
"text": "R^{\\oplus_n}/M"
},
{
"math_id": 25,
"text": "M\\subseteq R"
},
{
"math_id": 26,
"text": "R[X]"
}
] | https://en.wikipedia.org/wiki?curid=11985661 |
1198667 | False vacuum | Hypothetical vacuum, less stable than true vacuum
In quantum field theory, a false vacuum is a hypothetical vacuum state that is locally stable but does not occupy the most stable possible ground state. In this condition it is called metastable. It may last for a very long time in this state, but could eventually decay to the more stable one, an event known as false vacuum decay. The most common suggestion of how such a decay might happen in our universe is called bubble nucleation – if a small region of the universe by chance reached a more stable vacuum, this "bubble" (also called "bounce") would spread.
A false vacuum exists at a local minimum of energy and is therefore not completely stable, in contrast to a true vacuum, which exists at a global minimum and is stable.
Definition of true vs. false vacuum.
A vacuum is defined as a space with as little energy in it as possible. Despite the name, the vacuum still has quantum fields. A true vacuum is stable because it is at a global minimum of energy, and is commonly assumed to coincide with the physical vacuum state we live in. It is possible that a physical vacuum state is a configuration of quantum fields representing a local minimum but not global minimum of energy. This type of vacuum state is called a "false vacuum".
Implications.
Existential threat.
If our universe is in a false vacuum state rather than a true vacuum state, then the decay from the less stable false vacuum to the more stable true vacuum (called false vacuum decay) could have dramatic consequences. The effects could range from complete cessation of existing fundamental forces, elementary particles and structures comprising them, to subtle change in some cosmological parameters, mostly depending on the potential difference between true and false vacuum. Some false vacuum decay scenarios are compatible with the survival of structures like galaxies, stars, and even biological life, while others involve the full destruction of baryonic matter or even immediate gravitational collapse of the universe. In this more extreme case, the likelihood of a "bubble" forming is very low (i.e. false vacuum decay may be impossible).
A paper by Coleman and de Luccia that attempted to include simple gravitational assumptions into these theories noted that if this was an accurate representation of nature, then the resulting universe "inside the bubble" in such a case would appear to be extremely unstable and would almost immediately collapse:
<templatestyles src="Template:Blockquote/styles.css" />In general, gravitation makes the probability of vacuum decay smaller; in the extreme case of very small energy-density difference, it can even stabilize the false vacuum, preventing vacuum decay altogether. We believe we understand this. For the vacuum to decay, it must be possible to build a bubble of total energy zero. In the absence of gravitation, this is no problem, no matter how small the energy-density difference; all one has to do is make the bubble big enough, and the volume/surface ratio will do the job. In the presence of gravitation, though, the negative energy density of the true vacuum distorts geometry within the bubble with the result that, for a small enough energy density, there is no bubble with a big enough volume/surface ratio. Within the bubble, the effects of gravitation are more dramatic. The geometry of space-time within the bubble is that of anti-de Sitter space, a space much like conventional de Sitter space except that its group of symmetries is O(3, 2) rather than O(4, 1). Although this space-time is free of singularities, it is unstable under small perturbations, and inevitably suffers gravitational collapse of the same sort as the end state of a contracting Friedmann universe. The time required for the collapse of the interior universe is on the order of ... microseconds or less.
The possibility that we are living in a false vacuum has never been a cheering one to contemplate. Vacuum decay is the ultimate ecological catastrophe; in the new vacuum there are new constants of nature; after vacuum decay, not only is life as we know it impossible, so is chemistry as we know it. Nonetheless, one could always draw stoic comfort from the possibility that perhaps in the course of time the new vacuum would sustain, if not life as we know it, at least some structures capable of knowing joy. This possibility has now been eliminated.
The second special case is decay into a space of vanishing cosmological constant, the case that applies if we are now living in the debris of a false vacuum that decayed at some early cosmic epoch. This case presents us with less interesting physics and with fewer occasions for rhetorical excess than the preceding one. It is now the interior of the bubble that is ordinary Minkowski space ...
In a 2005 paper published in "Nature", as part of their investigation into global catastrophic risks, MIT physicist Max Tegmark and Oxford philosopher Nick Bostrom calculate the natural risks of the destruction of the Earth at less than 1/10^9 per year from all natural (i.e. non-anthropogenic) events, including a transition to a lower vacuum state. They argue that due to observer selection effects, we might underestimate the chances of being destroyed by vacuum decay because any information about this event would reach us only at the instant when we too were destroyed. This is in contrast to events like risks from impacts, gamma-ray bursts, supernovae and hypernovae, the frequencies of which we have adequate direct measures.
Inflation.
A number of theories suggest that cosmic inflation may be an effect of a false vacuum decaying into the true vacuum. The inflation itself may be the consequence of the Higgs field being trapped in a false vacuum state, with the Higgs self-coupling λ and its βλ function very close to zero at the Planck scale. A future electron-positron collider would be able to provide the precise measurements of the top quark needed for such calculations.
Chaotic inflation theory suggests that the universe may be in either a false vacuum or a true vacuum state. Alan Guth, in his original proposal for cosmic inflation, proposed that inflation could end through quantum mechanical bubble nucleation of the sort described above. See history of Chaotic inflation theory. It was soon understood that a homogeneous and isotropic universe could not be preserved through the violent tunneling process. This led Andrei Linde and, independently, Andreas Albrecht and Paul Steinhardt, to propose "new inflation" or "slow roll inflation", in which no tunnelling occurs and the inflationary scalar field instead rolls slowly down a gentle potential slope.
In 2014, researchers at the Chinese Academy of Sciences' Wuhan Institute of Physics and Mathematics suggested that the universe could have been spontaneously created from nothing (no space, time, nor matter) by quantum fluctuations of metastable false vacuum causing an expanding bubble of true vacuum.
Vacuum decay varieties.
Electroweak vacuum decay.
The stability criterion for the electroweak interaction was first formulated in 1979 as a function of the masses of the theoretical Higgs boson and the heaviest fermion. The discovery of the top quark in 1995 and of the Higgs boson in 2012 allowed physicists to validate the criterion against experiment; consequently, since 2012 the electroweak interaction has been considered the most promising candidate for a metastable fundamental force. The corresponding false vacuum hypothesis is called either "Electroweak vacuum instability" or "Higgs vacuum instability". The present false vacuum state is called formula_0 (de Sitter space), while the tentative true vacuum is called formula_1 (Anti-de Sitter space).
The diagrams show the uncertainty ranges of Higgs boson and top quark masses as oval-shaped lines. Underlying colors indicate if the electroweak vacuum state is likely to be stable, merely long-lived or completely unstable for given combination of masses. The "electroweak vacuum decay" hypothesis was sometimes misreported as the Higgs boson "ending" the universe.
A Higgs boson mass of 125.18±0.16 GeV is likely to be on the metastable side of the stable–metastable boundary (estimated in 2012 as 123.8–135.0 GeV). A definitive answer requires much more precise measurements of the top quark's pole mass; however, improved measurement precision of the Higgs boson and top quark masses has, as of 2018, further reinforced the claim that the physical electroweak vacuum is in the metastable state. Nonetheless, new physics beyond the Standard Model of particle physics could drastically change the stability landscape's division lines, rendering previous stability and metastability criteria incorrect.
Reanalysis of 2015-2018 LHC run in 2022 has yielded a slightly lower top quark mass of 171.77±0.38 GeV, close to vacuum stability line but still in the metastable zone.
If measurements of the Higgs boson and top quark suggest that our universe lies within a false vacuum of this kind, this would imply that the bubble's effects will propagate across the universe at nearly the speed of light from its origin in space-time. A direct calculation within the Standard Model of the lifetime of our vacuum state finds that it is greater than formula_2 years with 95% confidence.
Bubble nucleation.
When the false vacuum decays, the lower-energy true vacuum forms through a process known as bubble nucleation. In this process, instanton effects cause a bubble containing the true vacuum to appear. The walls of the bubble (or domain walls) have a positive surface tension, as energy is expended as the fields roll over the potential barrier to the true vacuum. The energy gained by converting a volume of space to the true vacuum scales as the cube of the bubble's radius, while the energy cost of the wall is proportional to the square of its radius, so there is a critical size formula_3 at which the total energy of the bubble is zero; smaller bubbles tend to shrink, while larger bubbles tend to grow. To be able to nucleate, the bubble must overcome an energy barrier of height
where formula_4 is the difference in energy between the true and false vacuums, formula_5 is the unknown (possibly extremely large) surface tension of the domain wall, and formula_6 is the radius of the bubble. Rewriting Eq. 1 gives the critical radius as
A bubble smaller than the critical size can overcome the potential barrier via quantum tunnelling of instantons to lower energy states. For a large potential barrier, the tunneling rate per unit volume of space is given by
where formula_7 is the reduced Planck constant. As soon as a bubble of lower-energy vacuum grows beyond the critical radius defined by Eq. 2, the bubble's wall will begin to accelerate outward. Due to the typically large difference in energy between the false and true vacuums, the speed of the wall approaches the speed of light extremely quickly. The bubble does not produce any gravitational effects because the negative energy density of the bubble interior is cancelled out by the positive kinetic energy of the wall.
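A minimal numerical sketch of this energy balance (an illustration assuming the standard thin-wall form E(R) = 4πγR² − (4/3)πΔΦR³, with gamma and dphi standing in for the surface tension formula_5 and the energy difference formula_4; the numerical values are arbitrary):

import numpy as np

gamma = 1.0          # wall surface tension (arbitrary units), playing the role of formula_5
dphi = 0.1           # energy-density difference between vacua, playing the role of formula_4

def bubble_energy(R):
    # Thin-wall approximation: surface cost minus volume gain
    return 4 * np.pi * gamma * R**2 - (4.0 / 3.0) * np.pi * dphi * R**3

R = np.linspace(0.01, 40, 4000)
E = bubble_energy(R)
print("energy barrier peaks near R =", R[np.argmax(E)])        # ≈ 2*gamma/dphi = 20
print("total energy back to zero at R =", 3 * gamma / dphi)    # the zero-energy radius described above

Under this assumed form, the barrier peaks at R = 2γ/ΔΦ, and the total bubble energy falls back to zero at R = 3γ/ΔΦ.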
Small bubbles of true vacuum can be inflated to critical size by providing energy, although required energy densities are several orders of magnitude larger than what is attained in any natural or artificial process. It is also thought that certain environments can catalyze bubble formation by lowering the potential barrier.
Nucleation seeds.
In general, gravity is believed to stabilize a false vacuum state, at least for transition from formula_0 (de Sitter space) to formula_1 (Anti-de Sitter space), while topological defects including cosmic strings and magnetic monopoles may enhance decay probability.
Black holes as nucleation seeds.
In a study in 2015, it was pointed out that the vacuum decay rate could be vastly increased in the vicinity of black holes, which would serve as a nucleation seed. According to this study, a potentially catastrophic vacuum decay could be triggered at any time by primordial black holes, should they exist. The authors note, however, that if primordial black holes cause a false vacuum collapse, then it should have happened long before humans evolved on Earth. A subsequent study in 2017 indicated that the bubble would collapse into a primordial black hole rather than originate from it, either by ordinary collapse or by bending space in such a way that it breaks off into a new universe. In 2019, it was found that although small non-spinning black holes may increase true vacuum nucleation rate, rapidly spinning black holes will stabilize false vacuums to decay rates lower than expected for flat space-time.
If particle collisions produce mini black holes, then energetic collisions such as the ones produced in the Large Hadron Collider (LHC) could trigger such a vacuum decay event, a scenario that has attracted the attention of the news media. It is likely to be unrealistic, because if such mini black holes can be created in collisions, they would also be created in the much more energetic collisions of cosmic radiation particles with planetary surfaces or during the early life of the universe as tentative primordial black holes. Hut and Rees note that, because cosmic ray collisions have been observed at much higher energies than those produced in terrestrial particle accelerators, these experiments should not, at least for the foreseeable future, pose a threat to our current vacuum. Particle accelerators have reached energies of only approximately eight teraelectronvolts (8×10^12 eV). Cosmic ray collisions have been observed at and beyond energies of 5×10^19 eV, six million times more powerful – the so-called Greisen–Zatsepin–Kuzmin limit – and cosmic rays in the vicinity of their origin may be more powerful yet. John Leslie has argued that if present trends continue, particle accelerators will exceed the energy given off in naturally occurring cosmic ray collisions by the year 2150. Fears of this kind were raised by critics of both the Relativistic Heavy Ion Collider and the Large Hadron Collider at the time of their respective proposal, and determined to be unfounded by scientific inquiry.
In a 2021 paper by Rostislav Konoplich and others, it was postulated that the area between a pair of large black holes on the verge of colliding could provide the conditions to create bubbles of "true vacuum". Intersecting surfaces between these bubbles could then become infinitely dense and form micro-black holes. These would in turn evaporate by emitting Hawking radiation in the 10 milliseconds or so before the larger black holes collided and devoured any bubbles or micro-black holes in their way. The theory could be tested by looking for the Hawking radiation emitted just before the black holes merge.
Bubble propagation.
A bubble wall, propagating outward at nearly the speed of light, has a finite thickness, depending on the ratio between the energy barrier and the energy gain obtained by creating true vacuum. In the case when the potential barrier height between true and false vacua is much smaller than the energy difference between vacua, the bubble wall thickness becomes comparable to the critical radius.
Elementary particles entering the wall will likely decay to other particles or black holes. If all decay paths lead to very massive particles, the energy barrier of such a decay may result in a stable bubble of false vacuum (also known as a Fermi ball) enclosing the false-vacuum particle instead of immediate decay. Multi-particle objects can be stabilized as Q-balls, although these objects will eventually collide and decay either into black holes or true-vacuum particles.
False vacuum decay in fiction.
False vacuum decay event is occasionally used as a plot device in works picturing a doomsday event.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "dS"
},
{
"math_id": 1,
"text": "AdS"
},
{
"math_id": 2,
"text": "10^{65}"
},
{
"math_id": 3,
"text": "R_c"
},
{
"math_id": 4,
"text": "\\Delta\\Phi"
},
{
"math_id": 5,
"text": "\\gamma"
},
{
"math_id": 6,
"text": "R"
},
{
"math_id": 7,
"text": "\\hbar"
}
] | https://en.wikipedia.org/wiki?curid=1198667 |
1198757 | Kruskal–Wallis test | Non-parametric method for testing whether samples originate from the same distribution
The Kruskal–Wallis test by ranks, Kruskal–Wallis formula_0 test (named after William Kruskal and W. Allen Wallis), or one-way ANOVA on ranks is a non-parametric statistical test for testing whether samples originate from the same distribution. It is used for comparing two or more independent samples of equal or different sample sizes. It extends the Mann–Whitney "U" test, which is used for comparing only two groups. The parametric equivalent of the Kruskal–Wallis test is the one-way analysis of variance (ANOVA).
A significant Kruskal–Wallis test indicates that at least one sample stochastically dominates one other sample. The test does not identify where this stochastic dominance occurs or for how many pairs of groups stochastic dominance obtains. For analyzing the specific sample pairs for stochastic dominance, Dunn's test, pairwise Mann–Whitney tests with Bonferroni correction, or the more powerful but less well known Conover–Iman test are sometimes used.
If the treatments do significantly affect the response level, then there is an order among the treatments: one tends to give the lowest response, another gives the next lowest response, and so forth. Since it is a nonparametric method, the Kruskal–Wallis test does not assume a normal distribution of the residuals, unlike the analogous one-way analysis of variance. If the researcher can make the assumption of an identically shaped and scaled distribution for all groups, except for any difference in medians, then the null hypothesis is that the medians of all groups are equal, and the alternative hypothesis is that at least one population median of one group is different from the population median of at least one other group. Otherwise, it is impossible to say whether the rejection of the null hypothesis comes from a shift in locations or in group dispersions. This is the same issue that also arises with the Mann–Whitney test. If the data contain potential outliers, if the population distributions have heavy tails, or if the population distributions are significantly skewed, the Kruskal–Wallis test is more powerful at detecting differences among treatments than the ANOVA F-test. On the other hand, if the population distributions are normal or are light-tailed and symmetric, then the ANOVA F-test will generally have greater power, that is, a higher probability of rejecting the null hypothesis when it should indeed be rejected.
Exact probability tables.
A large amount of computing resources is required to compute exact probabilities for the Kruskal–Wallis test. Existing software only provides exact probabilities for sample sizes of less than about 30 participants. These software programs rely on the asymptotic approximation for larger sample sizes.
Exact probability values for larger sample sizes are available. Spurrier (2003) published exact probability tables for samples as large as 45 participants. Meyer and Seaman (2006) produced exact probability distributions for samples as large as 105 participants.
Exact distribution of "H".
Choi et al. reviewed two methods that had been developed to compute the exact distribution of formula_0, proposed a new one, and compared the exact distribution to its chi-squared approximation.
Example.
Test for differences in ozone levels by month.
The following example uses data from Chambers et al. on daily readings of ozone for May 1 to September 30, 1973, in New York City. The data are in the R data set codice_0, and the analysis is included in the documentation for the R function codice_1. Boxplots of ozone values by month are shown in the figure.
The Kruskal-Wallis test finds a significant difference (p = 6.901e-06) indicating that ozone differs among the 5 months.
kruskal.test(Ozone ~ Month, data = airquality)
Kruskal-Wallis rank sum test
data: Ozone by Month
Kruskal-Wallis chi-squared = 29.267, df = 4, p-value = 6.901e-06
To determine which months differ, post-hoc tests may be performed using a Wilcoxon test for each pair of months, with a Bonferroni (or other) correction for multiple hypothesis testing.
pairwise.wilcox.test(airquality$Ozone, airquality$Month, p.adjust.method = "bonferroni")
Pairwise comparisons using Wilcoxon rank sum test
data: airquality$Ozone and airquality$Month
5 6 7 8
6 1.0000 - - -
7 0.0003 0.1414 - -
8 0.0012 0.2591 1.0000 -
9 1.0000 1.0000 0.0074 0.0325
P value adjustment method: bonferroni
The post-hoc tests indicate that, after Bonferroni correction for multiple testing, the following differences are significant (adjusted p < 0.05).
Implementation.
The Kruskal-Wallis test can be implemented in many programming tools and languages.
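For example, in Python the test is provided by scipy.stats.kruskal; the snippet below uses made-up sample data rather than the airquality data shown above:

from scipy import stats

# Three independent samples (illustrative values only)
g1 = [27.1, 22.3, 19.8, 24.6, 28.2]
g2 = [31.4, 29.9, 35.0, 30.2, 33.7]
g3 = [18.9, 21.5, 17.2, 20.8, 19.4]

H, p = stats.kruskal(g1, g2, g3)
print(H, p)   # reject the null of identical distributions if p is small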
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "\n\\definecolor{Orange}{RGB}{255, 128, 0}\n\\definecolor{ChromeYellow}{RGB}{255, 167, 3}\n\\definecolor{Green}{RGB}{0, 128, 0}\n\\definecolor{green}{RGB}{0, 128, 0}\n\\definecolor{Blue}{RGB}{0, 0, 255}\n\\definecolor{Purple}{RGB}{128, 0, 128}\nH = ({\\color{Red}N}-1)\\frac{\\sum_{i=1}^{\\color{Orange}g} {\\color{ChromeYellow}n_i}({\\color{Blue}\\bar{r}_{i\\cdot}} - {\\color{Purple}\\bar{r}})^2}{\\sum_{i=1}^ {\\color{Orange}g} \\sum_{j=1}^{{\\color{ChromeYellow}n_i}}({\\color{Green}r_{ij}} - {\\color{Purple}\\bar{r}})^2},"
},
{
"math_id": 2,
"text": "\\color{Red}N"
},
{
"math_id": 3,
"text": " \\definecolor{Orange}{RGB}{255, 128, 0}\\color{Orange}g"
},
{
"math_id": 4,
"text": "\\definecolor{ChromeYellow}{RGB}{255, 167, 3}\n\\color{ChromeYellow}n_i"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "\\definecolor{Green}{RGB}{0, 128, 0}\n\\definecolor{green}{RGB}{0, 128, 0}\n\\color{Green}r_{ij}"
},
{
"math_id": 7,
"text": "j"
},
{
"math_id": 8,
"text": "\\definecolor{blue}{RGB}{0, 0, 255}\n{\\color{blue}\\bar{r}_{i\\cdot}} = \\frac{\\sum_{j=1}^{n_i}{r_{ij}}}{n_i}"
},
{
"math_id": 9,
"text": "\\definecolor{Purple}{RGB}{128, 0, 128}\n\n{\\color{Purple}\\bar{r}} =\\tfrac 12 (N+1)"
},
{
"math_id": 10,
"text": "\\definecolor{Green}{RGB}{0, 128, 0}\n\\definecolor{green}{RGB}{0, 128, 0}\n\\color{Green}r_{ij} "
},
{
"math_id": 11,
"text": "(N-1)N(N+1)/12"
},
{
"math_id": 12,
"text": "\\bar{r}=\\tfrac{N+1}{2}"
},
{
"math_id": 13,
"text": "\n\\begin{align}\nH & = \\frac{12}{N(N+1)}\\sum_{i=1}^g n_i \\left(\\bar{r}_{i\\cdot} - \\frac{N+1}{2}\\right)^2 \\\\ & = \\frac{12}{N(N+1)}\\sum_{i=1}^g n_i \\bar{r}_{i\\cdot }^2 -\\ 3(N+1)\n\\end{align}\n"
},
{
"math_id": 14,
"text": "1 - \\frac{\\sum_{i=1}^G (t_i^3 - t_i)}{N^3-N}"
},
{
"math_id": 15,
"text": "G"
},
{
"math_id": 16,
"text": "t_i"
},
{
"math_id": 17,
"text": "i"
},
{
"math_id": 18,
"text": "H"
},
{
"math_id": 19,
"text": "\\bar{a}=\\frac{\\alpha}{\\Bbbk}"
},
{
"math_id": 20,
"text": "\\bar{a}"
},
{
"math_id": 21,
"text": "\\alpha"
},
{
"math_id": 22,
"text": "\\Bbbk"
},
{
"math_id": 23,
"text": "H_c"
},
{
"math_id": 24,
"text": "g-1"
},
{
"math_id": 25,
"text": "n_i"
},
{
"math_id": 26,
"text": "\\chi^2_{\\alpha: g-1}"
}
] | https://en.wikipedia.org/wiki?curid=1198757 |
11988974 | Opial property | In mathematics, the Opial property is an abstract property of Banach spaces that plays an important role in the study of weak convergence of iterates of mappings of Banach spaces, and of the asymptotic behaviour of nonlinear semigroups. The property is named after the Polish mathematician Zdzisław Opial.
Definitions.
Let ("X", || ||) be a Banach space. "X" is said to have the Opial property if, whenever ("x""n")"n"∈N is a sequence in "X" converging weakly to some "x"0 ∈ "X" and "x" ∈ "X" is any point with "x" ≠ "x"0, it follows that
formula_0
Alternatively, using the contrapositive, this condition may be written as
formula_1
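For example, every Hilbert space has the Opial property (a standard observation, included here as an illustration): if ("x""n")"n"∈N converges weakly to "x"0, then for any "x",
\|x_n - x\|^2 = \|x_n - x_0\|^2 + 2\operatorname{Re}\langle x_n - x_0,\, x_0 - x\rangle + \|x_0 - x\|^2,
and the inner-product term tends to zero by weak convergence, so the lim inf of \|x_n - x\| strictly exceeds the lim inf of \|x_n - x_0\| whenever "x" ≠ "x"0. The sequence spaces formula_5 with formula_6 also have the property.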
If "X" is the continuous dual space of some other Banach space "Y", then "X" is said to have the weak-∗ Opial property if, whenever ("x""n")"n"∈N is a sequence in "X" converging weakly-∗ to some "x"0 ∈ "X" and "x" ∈ "X" is any point with "x" ≠ "x"0, it follows that
formula_2
or, as above,
formula_1
A (dual) Banach space "X" is said to have the uniform (weak-∗) Opial property if, for every "c" > 0, there exists an "r" > 0 such that
formula_3
for every "x" ∈ "X" with ||"x"|| ≥ c and every sequence ("x""n")"n"∈N in "X" converging weakly (weakly-∗) to 0 and with
formula_4 | [
{
"math_id": 0,
"text": "\\liminf_{n \\to \\infty} \\| x_{n} - x_{0} \\| < \\liminf_{n \\to \\infty} \\| x_{n} - x \\|."
},
{
"math_id": 1,
"text": "\\liminf_{n \\to \\infty} \\| x_{n} - x \\| \\leq \\liminf_{n \\to \\infty} \\| x_{n} - x_{0} \\| \\implies x = x_{0}."
},
{
"math_id": 2,
"text": "\\liminf_{n \\to \\infty} \\| x_{n} - x_{0} \\| < \\liminf_{n \\to \\infty} \\| x_{n} - x \\|,"
},
{
"math_id": 3,
"text": "1 + r \\leq \\liminf_{n \\to \\infty} \\| x_{n} - x \\|"
},
{
"math_id": 4,
"text": "\\liminf_{n \\to \\infty} \\| x_{n} \\| \\geq 1."
},
{
"math_id": 5,
"text": "\\ell^p"
},
{
"math_id": 6,
"text": "1\\le p<\\infty"
}
] | https://en.wikipedia.org/wiki?curid=11988974 |
11989433 | Scott information system | Logical deductive system
In domain theory, a branch of mathematics and computer science, a Scott information system is a primitive kind of logical deductive system often used as an alternative way of presenting Scott domains.
Definition.
A Scott information system, "A", is an ordered triple formula_0
satisfying
Here formula_9 means formula_10
Examples.
Natural numbers.
The possible outcomes of a partial recursive function, which either returns a natural number or goes into an infinite recursion, can be expressed as a simple Scott information system as follows:
That is, the result can either be a natural number, represented by the singleton set formula_14, or "infinite recursion," represented by formula_15.
Of course, the same construction can be carried out with any other set instead of formula_16.
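A small Python sketch of this particular system (an illustration; the finite range of tokens and the helper names are choices made here, not part of the article):

from itertools import combinations

TOKENS = range(5)                      # a finite slice of the natural numbers

def consistent(X):
    # Con = {∅} ∪ {{n} : n ∈ N}: only the empty set and singletons are consistent
    return len(X) <= 1

def entails(X, a):
    # X ⊢ a  iff  a ∈ X (with X a non-empty consistent set)
    return consistent(X) and a in X

# Spot-check two of the axioms on all small subsets of TOKENS
subsets = [frozenset(c) for r in range(3) for c in combinations(TOKENS, r)]
assert all(consistent({a}) for a in TOKENS)                          # every singleton is consistent
assert all(consistent(X | {a}) for X in subsets if consistent(X)
           for a in TOKENS if X and entails(X, a))                   # X ⊢ a  implies  X ∪ {a} ∈ Con
print("axioms hold on this finite fragment")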
Propositional calculus.
The propositional calculus gives us a very simple Scott information system as follows:
Scott domains.
Let "D" be a Scott domain. Then we may define an information system as follows
Let formula_24 be the mapping that takes us from a Scott domain, "D", to the information system defined above.
Information systems and Scott domains.
Given an information system, formula_25, we can build a Scott domain as follows.
Let formula_29 denote the set of points of "A" with the subset ordering. formula_29 will be a countably based Scott domain when "T" is countable. In general, for any Scott domain "D" and information system "A"
where the second congruence is given by approximable mappings. | [
{
"math_id": 0,
"text": "(T, Con, \\vdash) "
},
{
"math_id": 1,
"text": "T \\mbox{ is a set of tokens (the basic units of information)} "
},
{
"math_id": 2,
"text": "Con \\subseteq \\mathcal{P}_f(T) \\mbox{ the finite subsets of } T"
},
{
"math_id": 3,
"text": "{\\vdash} \\subseteq (Con \\setminus \\lbrace \\emptyset \\rbrace)\\times T"
},
{
"math_id": 4,
"text": "\\mbox{If } a \\in X \\in Con\\mbox{ then }X \\vdash a"
},
{
"math_id": 5,
"text": "\\mbox{If } X \\vdash Y \\mbox{ and }Y \\vdash a \\mbox{, then }X \\vdash a"
},
{
"math_id": 6,
"text": "\\mbox{If }X \\vdash a \\mbox{ then } X \\cup \\{ a \\} \\in Con"
},
{
"math_id": 7,
"text": "\\forall a \\in T : \\{ a\\} \\in Con"
},
{
"math_id": 8,
"text": "\\mbox{If }X \\in Con \\mbox{ and } X^\\prime\\, \\subseteq X \\mbox{ then }X^\\prime \\in Con."
},
{
"math_id": 9,
"text": "X \\vdash Y"
},
{
"math_id": 10,
"text": "\\forall a \\in Y, X \\vdash a."
},
{
"math_id": 11,
"text": "T := \\mathbb{N}"
},
{
"math_id": 12,
"text": "Con := \\{ \\empty \\} \\cup \\{ \\{ n \\} \\mid n \\in \\mathbb{N} \\}"
},
{
"math_id": 13,
"text": "X \\vdash a\\iff a \\in X."
},
{
"math_id": 14,
"text": "\\{n\\}"
},
{
"math_id": 15,
"text": "\\empty"
},
{
"math_id": 16,
"text": "\\mathbb{N}"
},
{
"math_id": 17,
"text": "T := \\{ \\phi \\mid \\phi \\mbox{ is satisfiable} \\}"
},
{
"math_id": 18,
"text": "Con := \\{ X \\in \\mathcal{P}_f(T) \\mid X \\mbox{ is consistent} \\}"
},
{
"math_id": 19,
"text": "X \\vdash a\\iff X \\vdash a \\mbox{ in the propositional calculus}."
},
{
"math_id": 20,
"text": "T := D^0 "
},
{
"math_id": 21,
"text": "D"
},
{
"math_id": 22,
"text": "Con := \\{ X \\in \\mathcal{P}_f(T) \\mid X \\mbox{ has an upper bound} \\}"
},
{
"math_id": 23,
"text": "X \\vdash d\\iff d \\sqsubseteq \\bigsqcup X."
},
{
"math_id": 24,
"text": "\\mathcal{I}"
},
{
"math_id": 25,
"text": "A = (T, Con, \\vdash) "
},
{
"math_id": 26,
"text": "x \\subseteq T"
},
{
"math_id": 27,
"text": "\\mbox{If }X \\subseteq_f x \\mbox{ then } X \\in Con"
},
{
"math_id": 28,
"text": "\\mbox{If }X \\vdash a \\mbox{ and } X \\subseteq_f x \\mbox{ then } a \\in x."
},
{
"math_id": 29,
"text": "\\mathcal{D}(A)"
},
{
"math_id": 30,
"text": "\\mathcal{D}(\\mathcal{I}(D)) \\cong D"
},
{
"math_id": 31,
"text": "\\mathcal{I}(\\mathcal{D}(A)) \\cong A"
}
] | https://en.wikipedia.org/wiki?curid=11989433 |
1198956 | Conditional convergence | A property of infinite series
In mathematics, a series or integral is said to be conditionally convergent if it converges, but it does not converge absolutely.
Definition.
More precisely, a series of real numbers formula_0 is said to converge conditionally if
formula_1 exists (as a finite real number, i.e. not formula_2 or formula_3), but formula_4
A classic example is the alternating harmonic series given by formula_5 which converges to formula_6, but is not absolutely convergent (see Harmonic series).
Bernhard Riemann proved that a conditionally convergent series may be rearranged to converge to any value at all, including ∞ or −∞; see Riemann series theorem. The Lévy–Steinitz theorem identifies the set of values to which a series of terms in R"n" can converge.
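A quick numerical illustration of this rearrangement phenomenon (a sketch; the truncation level is an arbitrary choice):

import math

N = 200_000

# Natural order: 1 - 1/2 + 1/3 - 1/4 + ...
natural = sum((-1) ** (n + 1) / n for n in range(1, N + 1))

# Rearranged: two positive (odd-denominator) terms, then one negative (even) term
rearranged, odd, even = 0.0, 1, 2
for _ in range(N // 3):
    rearranged += 1 / odd + 1 / (odd + 2) - 1 / even
    odd += 4
    even += 2

print(natural, math.log(2))              # close to ln 2 ≈ 0.6931
print(rearranged, 1.5 * math.log(2))     # close to (3/2) ln 2 ≈ 1.0397, a different limit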
A typical conditionally convergent integral is that on the non-negative real axis of formula_7 (see Fresnel integral). | [
{
"math_id": 0,
"text": "\\sum_{n=0}^\\infty a_n"
},
{
"math_id": 1,
"text": "\\lim_{m\\rightarrow\\infty}\\,\\sum_{n=0}^m a_n"
},
{
"math_id": 2,
"text": "\\infty"
},
{
"math_id": 3,
"text": "-\\infty"
},
{
"math_id": 4,
"text": "\\sum_{n=0}^\\infty \\left|a_n\\right| = \\infty."
},
{
"math_id": 5,
"text": "1 - {1 \\over 2} + {1 \\over 3} - {1 \\over 4} + {1 \\over 5} - \\cdots =\\sum\\limits_{n=1}^\\infty {(-1)^{n+1} \\over n},"
},
{
"math_id": 6,
"text": "\\ln (2)"
},
{
"math_id": 7,
"text": "\\sin (x^2)"
}
] | https://en.wikipedia.org/wiki?curid=1198956 |
11990241 | Oil immersion | Light microscopy technique
In light microscopy, oil immersion is a technique used to increase the resolving power of a microscope. This is achieved by immersing both the objective lens and the specimen in a transparent oil of high refractive index, thereby increasing the numerical aperture of the objective lens.
Without oil, light waves reflect off the slide specimen through the glass cover slip, through the air, and into the microscope lens (see the colored figure to the right). Unless a wave comes out at a 90-degree angle, it bends when it hits a new substance, the amount of bend depending on the angle. This distorts the image. Air has a very different index of refraction from glass, making for a larger bend compared to oil, which has an index more similar to glass. Specially manufactured oil can have nearly exactly the same refractive index as glass, making an oil-immersed lens nearly as effective as if the path to the sample were entirely glass (which would be impractical).
Immersion oils are transparent oils that have specific optical and viscosity characteristics necessary for use in microscopy. Typical oils used have an index of refraction of around 1.515. An oil immersion objective is an objective lens specially designed to be used in this way. Many condensers also give optimal resolution when the condenser lens is immersed in oil.
Theoretical background.
Lenses reconstruct the light scattered by an object. To successfully achieve this end, ideally, all the diffraction orders have to be collected. This is related to the opening angle of the lens and its refractive index. The resolution of a microscope is defined as the minimum separation needed between two objects under examination in order for the microscope to discern them as separate objects. This minimum distance is labelled δ. If two objects are separated by a distance shorter than δ, they will appear as a single object in the microscope.
A measure of the resolving power, R.P., of a lens is given by its numerical aperture, NA:
formula_0
where λ is the wavelength of light. From this it is clear that a good resolution (small δ) is connected with a high numerical aperture.
The numerical aperture of a lens is defined as
formula_1
where α0 is half the angle spanned by the objective lens seen from the sample, and "n" is the refractive index of the medium between the lens and specimen (≈1 for air).
State of the art objectives can have a numerical aperture of up to 0.95. Because sin α0 is always less than or equal to unity (the number "1"), the numerical aperture can never be greater than unity for an objective lens in air. If the space between the objective lens and the specimen is filled with oil however, the numerical aperture can obtain values greater than unity. This is because oil has a refractive index greater than 1.
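A short numerical sketch of the two formulas above (the wavelength and half-angles below are illustrative choices):

import math

wavelength_nm = 550.0                     # green light

def numerical_aperture(n, half_angle_deg):
    return n * math.sin(math.radians(half_angle_deg))

def resolution_nm(NA):
    # δ = λ / (2 NA)
    return wavelength_nm / (2 * NA)

na_air = numerical_aperture(1.00, 72)     # dry objective, NA ≈ 0.95
na_oil = numerical_aperture(1.515, 67.5)  # oil immersion, NA ≈ 1.40
print(round(na_air, 2), round(resolution_nm(na_air)))   # ≈ 0.95, δ ≈ 289 nm
print(round(na_oil, 2), round(resolution_nm(na_oil)))   # ≈ 1.40, δ ≈ 196 nm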
Oil immersion objectives.
From the above it is understood that oil between the specimen and the objective lens improves the resolving power by a factor 1/"n". Objectives specifically designed for this purpose are known as oil immersion objectives.
Oil immersion objectives are used only at very large magnifications that require high resolving power. Objectives with high power magnification have short focal lengths, facilitating the use of oil. The oil is applied to the specimen (conventional microscope), and the stage is raised, immersing the objective in oil. (In inverted microscopes the oil is applied to the objective).
The refractive indices of the oil and of the glass in the first lens element are nearly the same, which means that the refraction of light will be small upon entering the lens (the oil and glass are optically very similar). The correct immersion oil for an objective lens has to be used to ensure that the refractive indices match closely. Use of an oil immersion lens with the incorrect immersion oil, or without immersion oil altogether, will suffer from spherical aberration. The strength of this effect depends on the size of the refractive index mismatch.
Oil immersion can generally only be used on rigidly mounted specimens otherwise the surface tension of the oil can move the coverslip and so move the sample underneath. This can also happen on inverted microscopes because the coverslip is below the slide.
Immersion oil.
Before the development of synthetic immersion oils in the 1940s, cedar tree oil was widely used. Cedar oil has an index of refraction of approximately 1.516. The numerical aperture of cedar tree oil objectives is generally around 1.3. Cedar oil has a number of disadvantages however: it absorbs blue and ultraviolet light, yellows with age, has sufficient acidity to potentially damage objectives with repeated use (by attacking the cement used to join lenses), and diluting it with solvent changes its viscosity (and refraction index and dispersion). Cedar oil must be removed from the objective immediately after use before it can harden, since removing hardened cedar oil can damage the lens.
In modern microscopy synthetic immersion oils are more commonly used, as they eliminate most of these problems. NA values of 1.6 can be achieved with different oils. Unlike natural oils, synthetic ones do not harden on the lens and can typically be left on the objective for months at a time, although to best maintain a microscope it is still best to remove the oil daily. Over time, oil can seep past the front lens of the objective or into its barrel and damage the objective. There are different types of immersion oil with different properties, chosen according to the type of microscopy being performed. Type A and Type B are both general-purpose immersion oils with different viscosities. Type F immersion oil is best used for fluorescence imaging at room temperature (23°C), while Type N oil is made to be used at body temperature (37°C) for live-cell imaging applications. All have an nD of 1.515, quite similar to the original cedar oil.
{
"math_id": 0,
"text": "\\delta=\\frac{\\lambda}{\\mathrm{2NA}}"
},
{
"math_id": 1,
"text": "\\mathrm{NA} = n \\sin \\alpha_0\\;"
}
] | https://en.wikipedia.org/wiki?curid=11990241 |
1199161 | Kuhn poker | Kuhn poker is a simplified form of poker developed by Harold W. Kuhn as a simple model zero-sum two-player imperfect-information game, amenable to a complete game-theoretic analysis. In Kuhn poker, the deck includes only three playing cards, for example, a King, Queen, and Jack. One card is dealt to each player, who may then place bets similarly to standard poker. If both players bet or both players pass, the player with the higher card wins; otherwise, the betting player wins. It was recently solved using Perfect Bayesian Equilibrium notions by Loriente and Diez (2023).
Game description.
In conventional poker terms, a game of Kuhn poker proceeds as follows:
Optimal strategy.
The game has a mixed-strategy Nash equilibrium; when both players play equilibrium strategies, the first player should expect to lose at a rate of −1/18 per hand (as the game is zero-sum, the second player should expect to win at a rate of +1/18). There is no pure-strategy equilibrium.
Kuhn demonstrated there are infinitely many equilibrium strategies for the first player, forming a continuum governed by a single parameter. In one possible formulation, player one freely chooses the probability formula_0 with which they will bet when having a Jack (otherwise they check; if the other player bets, they should always fold). When having a King, they should bet with the probability of formula_1 (otherwise they check; if the other player bets, they should always call). They should always check when having a Queen, and if the other player bets after this check, they should call with the probability of formula_2.
The second player has a single equilibrium strategy: Always betting or calling when having a King; when having a Queen, checking if possible, otherwise calling with the probability of 1/3; when having a Jack, never calling and betting with the probability of 1/3.
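The equilibrium just described can be checked by simulation; the sketch below (an illustration, with payoffs counted net of each player's own contribution to the pot) estimates the first player's expected loss of about 1/18 per hand:

import random

def play_hand(alpha, rng):
    # Deal two distinct cards; 0 = Jack, 1 = Queen, 2 = King.
    c1, c2 = rng.sample([0, 1, 2], 2)

    bet1 = {0: alpha, 1: 0.0, 2: 3 * alpha}[c1]          # player 1's opening bet probability
    if rng.random() < bet1:                               # player 1 bets
        call2 = {0: 0.0, 1: 1/3, 2: 1.0}[c2]              # player 2 calls?
        if rng.random() < call2:
            return 2 if c1 > c2 else -2                   # showdown for the larger pot
        return 1                                          # player 2 folds
    bet2 = {0: 1/3, 1: 0.0, 2: 1.0}[c2]                   # player 1 checked; player 2 bets?
    if rng.random() < bet2:
        call1 = {0: 0.0, 1: alpha + 1/3, 2: 1.0}[c1]      # player 1 calls?
        if rng.random() < call1:
            return 2 if c1 > c2 else -2
        return -1                                         # player 1 folds
    return 1 if c1 > c2 else -1                           # check-check showdown

rng = random.Random(0)
n = 1_000_000
value = sum(play_hand(1/3, rng) for _ in range(n)) / n
print(value)   # ≈ -1/18 ≈ -0.056 for player 1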
Generalized versions.
In addition to the basic version invented by Kuhn, other versions appeared adding bigger deck, more players, betting rounds, etc., increasing the complexity of the game.
3-player Kuhn Poker.
A variant for three players was introduced in 2010 by Nick Abou Risk and Duane Szafron. In this version, the deck includes four cards (adding a ten card), from which three are dealt to the players; otherwise, the basic structure is the same: while there is no outstanding bet, a player can check or bet; with an outstanding bet, a player can call or fold. If all players checked or at least one player called, the game proceeds to showdown; otherwise, the betting player wins.
A family of Nash equilibria for 3-player Kuhn poker is known analytically, which makes it the largest game with more than two players that has an analytic solution. The family is parameterized using 4–6 parameters (depending on the chosen equilibrium). In all equilibria, player 1 has a fixed strategy, and they always check as the first action; player 2's utility is constant, equal to –1/48 per hand. The discovered equilibrium profiles show an interesting feature: by adjusting a strategy parameter formula_3 (between 0 and 1), player 2 can freely shift utility between the other two players while still remaining in equilibrium; player 1's utility is equal to formula_4 (which is always worse than player 2's utility), and player 3's utility is formula_5.
It is not known if this equilibrium family covers all Nash equilibria for the game. | [
{
"math_id": 0,
"text": "\\alpha \\isin [0, 1/3]"
},
{
"math_id": 1,
"text": "3\\alpha"
},
{
"math_id": 2,
"text": "\\alpha + 1/3"
},
{
"math_id": 3,
"text": "\\beta"
},
{
"math_id": 4,
"text": "-\\frac{1+2\\beta}{48}"
},
{
"math_id": 5,
"text": "\\frac{1+\\beta}{24}"
}
] | https://en.wikipedia.org/wiki?curid=1199161 |
1199421 | Frequency multiplier | Electronic circuit
In electronics, a frequency multiplier is an electronic circuit that generates an output signal whose frequency is a harmonic (integer multiple) of its input frequency. Frequency multipliers consist of a nonlinear circuit that distorts the input signal and consequently generates harmonics of the input signal. A subsequent bandpass filter selects the desired harmonic frequency and removes the unwanted fundamental and other harmonics from the output.
Frequency multipliers are often used in frequency synthesizers and communications circuits. It can be more economical to develop a lower frequency signal with lower power and less expensive devices, and then use a frequency multiplier chain to generate an output frequency in the microwave or millimeter wave range. Some modulation schemes, such as frequency modulation, survive the nonlinear distortion without ill effect (but schemes such as amplitude modulation do not).
Frequency multiplication is also used in nonlinear optics. The nonlinear distortion in crystals can be used to generate harmonics of laser light.
Theory.
A pure sine wave has a single frequency "f"
formula_0
If the sine wave is applied to a linear circuit, such as a distortion-free amplifier, the output is still a sine wave (but may acquire a phase shift).
However, if the sine wave is applied to a nonlinear circuit, the resulting distortion creates harmonics; frequency components at integer multiples "nf" of the fundamental frequency "f". The distorted signal can be described by a Fourier series in "f".
formula_1
The nonzero "ck" represent the generated harmonics. The Fourier coefficients are given by integrating over the fundamental period "T":
formula_2
So a frequency multiplier can be built from a nonlinear electronic component which generates a series of harmonics, followed by a bandpass filter which passes one of the harmonics to the output and blocks the others.
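As a small numerical illustration of this idea (the signal parameters below are arbitrary choices, not taken from any particular circuit), distorting a sine wave with an even nonlinearity such as full-wave rectification produces energy mainly at even multiples of the input frequency:

import numpy as np

f0 = 1_000.0                      # input frequency, Hz
fs = 1_000_000.0                  # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal
x = np.sin(2 * np.pi * f0 * t)

# Full-wave rectification is an even nonlinearity, so it produces mainly
# even harmonics (2*f0, 4*f0, ...) -- the basis of a frequency doubler.
y = np.abs(x)

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)

for k in range(1, 5):
    idx = np.argmin(np.abs(freqs - k * f0))
    print(f"harmonic {k} ({k * f0:.0f} Hz): amplitude ~ {spectrum[idx]:.4f}")

Running this shows a negligible component at the fundamental and a dominant component at twice the input frequency, which a bandpass filter would then select.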
From a conversion efficiency standpoint, the nonlinear circuit should maximize the coefficient for the desired harmonic and minimize the others. Consequently, the transfer function is often specially chosen. Easy choices are to use an even function to generate even harmonics or an odd function for odd harmonics. See Even and odd functions#Harmonics. A full-wave rectifier, for example, is good for making a doubler. To produce a times-3 multiplier, the original signal may be input to an amplifier that is overdriven to produce nearly a square wave. This signal is rich in 3rd-order harmonics and can be filtered to produce the desired ×3 outcome.
YIG multipliers often want to select an arbitrary harmonic, so they use a stateful distortion circuit that converts the input sine wave into an approximate impulse train. The ideal (but impractical) impulse train generates an infinite number of (weak) harmonics. In practice, an impulse train generated by a monostable circuit will have many usable harmonics. YIG multipliers using step recovery diodes may, for example, take an input frequency of 1 to 2 GHz and produce outputs up to 18 GHz. Sometimes the frequency multiplier circuit will adjust the width of the impulses to improve conversion efficiency for a specific harmonic.
Circuits.
Diode.
Clipping circuits. Full wave bridge doubler.
Class C amplifier and multiplier.
Efficiently generating power becomes more important at high power levels. Linear Class A amplifiers are at best 25 percent efficient. Push-pull Class B amplifiers are at best 50 percent efficient. The basic problem is the amplifying element is dissipating power. Switching Class C amplifiers are nonlinear, but they can be better than 50 percent efficient because an ideal switch does not dissipate any power.
A clever design can use the nonlinear Class C amplifier for both gain and as a frequency multiplier.
Step recovery diode.
Generating a large number of useful harmonics requires a fast nonlinear device.
Step recovery diodes.
Microwave generators may use a step recovery diode impulse generator followed by a tunable YIG filter. The YIG filter has a yttrium iron garnet sphere that is tuned with a magnetic field. The step recovery diode impulse generator is driven at a subharmonic of the desired output frequency. An electromagnet then tunes the YIG filter to select the desired harmonic.
Varactor diode.
Resistive loaded varactors. Regenerative varactors. Penfield.
Frequency multipliers have much in common with frequency mixers, and some of the same nonlinear devices are used for both: transistors operated in Class C and diodes. In transmitting circuits many of the amplifying devices (vacuum tubes or transistors) operate nonlinearly and create harmonics, so an amplifier stage can be made a multiplier by tuning the tuned circuit at the output to a multiple of the input frequency. Usually the power (gain) produced by the nonlinear device drops off rapidly at the higher harmonics, so most frequency multipliers just double or triple the frequency, and multiplication by higher factors is accomplished by cascading doubler and tripler stages.
Previous uses.
Frequency multipliers use circuits tuned to a harmonic of the input frequency. Non-linear elements such as diodes may be added to enhance the production of harmonic frequencies. Since the power in the harmonics declines rapidly, usually a frequency multiplier is tuned to only a small multiple (twice, three times, or five times) of the input frequency. Usually amplifiers are inserted in a chain of frequency multipliers to ensure adequate signal level at the final frequency.
Since the tuned circuits have a limited bandwidth, if the base frequency is changed significantly (more than one percent or so), the multiplier stages may have to be adjusted; this can take significant time if there are many stages.
Microelectromechanical (MEMS) frequency doubler.
An electric-field driven micromechanical cantilever resonator is one of the most fundamental and widely studied structures in MEMS, which can provide a high Q and narrow bandpass filtering function. The inherent square-law nonlinearity of the voltage-to-force transfer function of a cantilever resonator's capacitive transducer can be employed for the realization of a frequency-doubling effect. Due to the low-loss attribute (or equivalently, a high Q) offered by MEMS devices, improved circuit performance can be expected from a micromechanical frequency doubler compared with semiconductor devices utilized for the same task.
Graphene based frequency multipliers.
Graphene-based FETs have also been employed for frequency doubling with more than 90% conversion efficiency.
In fact, all ambipolar transistors can be used for designing frequency multiplier circuits. Graphene can work over a large frequency range due to its unique characteristics.
Phase-locked loops with frequency dividers.
A phase-locked loop (PLL) uses a reference frequency to generate a multiple of that frequency. A voltage controlled oscillator (VCO) is initially tuned roughly to the range of the desired frequency multiple. The signal from the VCO is divided down using frequency dividers by the multiplication factor. The divided signal and the reference frequency are fed into a phase comparator. The output of the phase comparator is a voltage that is proportional to the phase difference. After passing through a low pass filter and being converted to the proper voltage range, this voltage is fed to the VCO to adjust the frequency. This adjustment increases the frequency as the phase of the VCO's signal lags that of the reference signal and decreases the frequency as the lag decreases (or lead increases). The VCO will stabilize at the desired frequency multiple. This type of PLL is a type of frequency synthesizer.
Fractional-N synthesizer.
In some PLLs the reference frequency may also be divided by an integer multiple before being input to the phase comparator. This allows the synthesis of frequencies that are N/M times the reference frequency.
This can be accomplished in a different manner by periodically changing the integer value of an integer-N frequency divider, effectively resulting in a multiplier with both whole number and fractional component. Such a multiplier is called a fractional-N synthesizer after its fractional component. Fractional-N synthesizers provide an effective means of achieving fine frequency resolution with lower values of N, allowing loop architectures with tens of thousands of times less phase noise than alternative designs with lower reference frequencies and higher integer N values. They also allow a faster settling time because of their higher reference frequencies, allowing wider closed and open loop bandwidths.
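As an illustrative example (the numbers here are arbitrary): with a 10 MHz reference and a feedback divider that divides by 100 for three reference cycles and by 101 for every fourth cycle, the average division ratio is 100.25, and the loop settles with the VCO near 100.25 × 10 MHz = 1.0025 GHz, a frequency that a fixed integer divider could not produce from that reference.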
Delta sigma synthesizer.
A delta sigma synthesizer adds randomization to the programmable-N frequency divider of the fractional-N synthesizer. This is done to shrink sidebands created by periodic changes of an integer-N frequency divider.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x(t) = A\\sin(2 \\pi ft)\\,"
},
{
"math_id": 1,
"text": "x(t) = \\sum_{k=-\\infty}^{\\infty} c_k e^{j 2 \\pi k f t}."
},
{
"math_id": 2,
"text": "c_k = \\frac{1}{2\\pi}\\int_{0}^{T} x(t) \\, e^{-j 2 \\pi k t / T}\\, dt"
}
] | https://en.wikipedia.org/wiki?curid=1199421 |
11999755 | Shewhart individuals control chart | For data monitoring in statistical quality control
In statistical quality control, the individual/moving-range chart is a type of control chart used to monitor variables data from a business or industrial process for which it is impractical to use rational subgroups.
The chart is necessary in the following situations:
The "chart" actually consists of a pair of charts: one, the individuals chart, displays the individual measured values; the other, the moving range chart, displays the difference from one point to the next. As with other control charts, these two charts enable the user to monitor a process for shifts in the process that alter the mean or variance of the measured statistic.
Interpretation.
As with other control charts, the individuals and moving range charts consist of points plotted with the control limits, or natural process limits. These limits reflect what the process will deliver without fundamental changes. Points outside of these control limits are signals indicating that the process is not operating as consistently as possible; that some assignable cause has resulted in a change in the process. Similarly, runs of points on one side of the average line should also be interpreted as a signal of some change in the process. When such signals exist, action should be taken to identify and eliminate them. When no such signals are present, no changes to the process control variables (i.e. "tampering") are necessary or desirable.
Assumptions.
The normal distribution is NOT assumed nor required in the calculation of control limits, which makes the IndX/mR chart a very robust tool. This is demonstrated by Wheeler using real-world data, and for a number of highly non-normal probability distributions.
Calculation and plotting.
Calculation of moving range.
The difference between data point, formula_0, and its predecessor, formula_1, is calculated as formula_2. For formula_3 individual values, there are formula_4 ranges.
Next, the arithmetic mean of these values is calculated as
formula_5
If the data are normally distributed with standard deviation formula_6 then the expected value of formula_7 is formula_8, the mean absolute difference of the normal distribution.
Calculation of moving range control limit.
The upper control limit for the range (or upper range limit) is calculated by multiplying the average of the moving range by 3.267:
formula_9.
The value 3.267 is taken from the sample size-specific D4 anti-biasing constant for "n" = 2, as given in most textbooks on statistical process control (see, for example, Montgomery).
Calculation of individuals control limits.
First, the average of the individual values is calculated:
formula_10.
Next, the upper control limit (UCL) and lower control limit (LCL) for the individual values (or upper and lower natural process limits) are calculated by adding or subtracting 2.66 times the average moving range to the process average:
formula_11.
formula_12
The value 2.66 is obtained by dividing 3 by the sample size-specific d2 anti-biasing constant for "n" = 2, as given in most textbooks on statistical process control (see, for example, Montgomery).
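A minimal sketch of these calculations (no plotting; the function name and the sample data are invented for this illustration):

def imr_limits(x):
    """Compute I-MR chart parameters for a list of individual measurements."""
    n = len(x)
    mr = [abs(x[i] - x[i - 1]) for i in range(1, n)]    # moving ranges
    mr_bar = sum(mr) / (n - 1)                          # average moving range
    x_bar = sum(x) / n                                  # process average
    return {
        "x_bar": x_bar,
        "mr_bar": mr_bar,
        "UCL_r": 3.267 * mr_bar,                        # range chart upper limit
        "UCL": x_bar + 2.66 * mr_bar,                   # individuals upper limit
        "LCL": x_bar - 2.66 * mr_bar,                   # individuals lower limit
    }

print(imr_limits([4.9, 5.1, 5.0, 5.3, 4.8, 5.2, 5.0]))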
Creation of graphs.
Once the averages and limits are calculated, all of the individuals data are plotted serially, in the order in which they were recorded. To this plot is added a line at the average value, and lines at the UCL and LCL values.
On a separate graph, the calculated ranges MRi are plotted. A line is added for the average value, and second line is plotted for the range upper control limit (UCLr).
Analysis.
The resulting plots are analyzed as for other control charts, using the rules that are deemed appropriate for the process and the desired level of control. At the least, any points above either upper control limits or below the lower control limit are marked and considered a signal of changes in the underlying process that are worth further investigation.
Potential pitfalls.
The moving ranges involved are serially correlated, so runs or cycles can show up on the moving range chart that do not indicate real problems in the underlying process.
In some cases, it may be advisable to use the median of the moving range rather than its average, as when the calculated range data contains a few large values that may inflate the estimate of the population's dispersion.
Some have alleged that departures from normality in the process output significantly reduce the effectiveness of the charts, to the point where control limits may need to be set based on percentiles of the empirically determined distribution of the process output, although this assertion has been consistently refuted. See Footnote 6.
Many software packages will, given the individuals data, perform all of the needed calculations and plot the results. Care should be taken to ensure that the control limits are correctly calculated, per the above and standard texts on SPC. In some cases, the software's default settings may produce incorrect results; in others, user modifications to the settings could result in incorrect results. Sample data and results are presented by Wheeler for the explicit purpose of testing SPC software. Performing such software validation is generally a good idea with any SPC software.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x_i"
},
{
"math_id": 1,
"text": "x_{i-1}"
},
{
"math_id": 2,
"text": "{MR}_i = \\big| x_i - x_{i - 1} \\big|"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "m-1"
},
{
"math_id": 5,
"text": "\\overline{MR}=\\frac{\\sum_{i=2}^{m}{MR_i}}{m-1}"
},
{
"math_id": 6,
"text": "\\sigma"
},
{
"math_id": 7,
"text": "\\overline{MR}"
},
{
"math_id": 8,
"text": "d_{2} \\sigma= 2\\sigma/\\sqrt \\pi"
},
{
"math_id": 9,
"text": "UCL_r = 3.267\\overline{MR}"
},
{
"math_id": 10,
"text": "\\overline{x}=\\frac{\\sum_{i=1}^{m}{x_i}}{m}"
},
{
"math_id": 11,
"text": "UCL=\\overline{x}+2.66\\overline{MR}"
},
{
"math_id": 12,
"text": "LCL=\\overline{x}-2.66\\overline{MR}"
},
{
"math_id": 13,
"text": "\\overline{x}"
}
] | https://en.wikipedia.org/wiki?curid=11999755 |
1200004 | Elongated pentagonal gyrocupolarotunda | 41st Johnson solid
In geometry, the elongated pentagonal gyrocupolarotunda is one of the Johnson solids ("J"41). As the name suggests, it can be constructed by elongating a pentagonal gyrocupolarotunda ("J"33) by inserting a decagonal prism between its halves. Rotating either the pentagonal cupola ("J"5) or the pentagonal rotunda ("J"6) through 36 degrees before inserting the prism yields an elongated pentagonal orthocupolarotunda ("J"40).
A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.
Formulae.
The following formulae for volume and surface area can be used if all faces are regular, with edge length "a":
formula_0
formula_1
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V=\\frac{5}{12}\\left(11+5\\sqrt{5}+6\\sqrt{5+2\\sqrt{5}}\\right)a^3\\approx16.936...a^3"
},
{
"math_id": 1,
"text": "A=\\frac{1}{4}\\left(60+\\sqrt{10\\left(190+49\\sqrt{5}+21\\sqrt{75+30\\sqrt{5}}\\right)}\\right)a^2\\approx33.5385...a^2"
}
] | https://en.wikipedia.org/wiki?curid=1200004 |
1200006 | Elongated pentagonal orthocupolarotunda | 40th Johnson solid
In geometry, the elongated pentagonal orthocupolarotunda is one of the Johnson solids ("J"40). As the name suggests, it can be constructed by elongating a pentagonal orthocupolarotunda ("J"32) by inserting a decagonal prism between its halves. Rotating either the cupola or the rotunda through 36 degrees before inserting the prism yields an elongated pentagonal gyrocupolarotunda ("J"41).
A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.
Formulae.
The following formulae for volume and surface area can be used if all faces are regular, with edge length "a":
formula_0
formula_1
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V=\\frac{5}{12}\\left(11+5\\sqrt{5}+6\\sqrt{5+2\\sqrt{5}}\\right)a^3\\approx16.936...a^3"
},
{
"math_id": 1,
"text": "A=\\frac{1}{4}\\left(60+\\sqrt{10\\left(190+49\\sqrt{5}+21\\sqrt{75+30\\sqrt{5}}\\right)}\\right)a^2\\approx33.5385...a^2"
}
] | https://en.wikipedia.org/wiki?curid=1200006 |
1200024 | Gyroelongated pentagonal cupolarotunda | 47th Johnson solid
In geometry, the gyroelongated pentagonal cupolarotunda is one of the Johnson solids ("J"47). As the name suggests, it can be constructed by gyroelongating a pentagonal cupolarotunda ("J"32 or "J"33) by inserting a decagonal antiprism between its two halves.
A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.
The gyroelongated pentagonal cupolarotunda is one of five Johnson solids which are chiral, meaning that they have a "left-handed" and a "right-handed" form. In the illustration to the right, each pentagonal face on the bottom half of the figure is connected by a path of two triangular faces to a square face above it and to the left. In the figure of opposite chirality (the mirror image of the illustrated figure), each bottom pentagon would be connected to a square face above it and to the right. The two chiral forms of "J"47 are not considered different Johnson solids.
Area and Volume.
With edge length a, the surface area is
formula_0
and the volume is
formula_1 | [
{
"math_id": 0,
"text": "A=\\frac{1}{4}\\left(20+35\\sqrt{3}+7\\sqrt{25+10\\sqrt{5}}\\right)a^2\\approx32.198786370...a^2,"
},
{
"math_id": 1,
"text": "V=\\left(\\frac{55}{12}+\\frac{25}{12}\\sqrt{5}+ \\frac{5}{6}\\sqrt{2\\sqrt{650+290\\sqrt{5}}-2\\sqrt{5}-2}\\right) a^3\\approx15.991096162...a^3."
}
] | https://en.wikipedia.org/wiki?curid=1200024 |
1200120 | Augmented truncated tetrahedron | 65th Johnson solid
In geometry, the augmented truncated tetrahedron is a polyhedron constructed by attaching a triangular cupola onto a truncated tetrahedron. It is an example of a Johnson solid.
Construction.
The augmented truncated tetrahedron is constructed from a truncated tetrahedron by attaching a triangular cupola. This cupola covers one of the truncated tetrahedron's four hexagonal faces, so that the resulting polyhedron's faces are eight equilateral triangles, three squares, and three regular hexagons. Since it has the property of convexity and has regular polygonal faces, the augmented truncated tetrahedron is a Johnson solid, denoted as the sixty-fifth Johnson solid formula_0.
Properties.
The surface area of an augmented truncated tetrahedron is:
formula_1
the sum of the areas of its faces. Its volume can be calculated by slicing it into a truncated tetrahedron and a triangular cupola, and adding their volumes:
formula_2
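The dissection can be checked numerically; the snippet below (an illustrative sketch) uses the standard volume formulas for a truncated tetrahedron and a triangular cupola with unit edge length:

from math import sqrt, isclose

a = 1.0
v_truncated_tetrahedron = 23 * sqrt(2) / 12 * a**3   # about 2.711
v_triangular_cupola     = 5 * sqrt(2) / 6 * a**3     # about 1.179
v_augmented             = 11 * sqrt(2) / 4 * a**3    # formula above, about 3.889

assert isclose(v_truncated_tetrahedron + v_triangular_cupola, v_augmented)
print(v_augmented)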
It has the same three-dimensional symmetry group as the triangular cupola, the pyramidal symmetry formula_3. Its dihedral angles can be obtained by adding the angles of a triangular cupola and a truncated tetrahedron in the following:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " J_{65} "
},
{
"math_id": 1,
"text": " \\frac{6 + 13 \\sqrt{3}}{2}a^2 \\approx 14.258a^2, "
},
{
"math_id": 2,
"text": " \\frac{11 \\sqrt{2}}{4}a^3 \\approx 3.889a^3. "
},
{
"math_id": 3,
"text": " C_{3 \\mathrm{v}} "
}
] | https://en.wikipedia.org/wiki?curid=1200120 |
12002936 | Displacement–length ratio | Ship measurement
The displacement–length ratio (DLR or D/L ratio) is a calculation used to express how heavy a boat is relative to its waterline length.
DLR was first published in
It is calculated by dividing a boat's displacement in long tons (2,240 pounds) by the cube of one one-hundredth of the waterline length (in feet):
formula_0
DLR can be used to compare the relative mass of various boats no matter what their length. A DLR less than 200 is indicative of a racing boat, while a DLR greater than 300 or so is indicative of a heavy cruising boat.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathit{DLR} = \\frac{\\mathit{displacement}(\\mathrm{lb}) ~/~ 2240} {(0.01 \\times \\mathit{LWL}(\\mathrm{ft}))^3}"
}
] | https://en.wikipedia.org/wiki?curid=12002936 |
12003118 | Loewner's torus inequality | In differential geometry, Loewner's torus inequality is an inequality due to Charles Loewner. It relates the systole and the area of an arbitrary Riemannian metric on the 2-torus.
Statement.
In 1949 Charles Loewner proved that every metric on the 2-torus formula_0 satisfies the optimal inequality
formula_1
where "sys" is its systole, i.e. least length of a noncontractible loop. The constant appearing on the right hand side is the Hermite constant formula_2 in dimension 2, so that Loewner's torus inequality can be rewritten as
formula_3
The inequality was first mentioned in the literature in .
Case of equality.
The boundary case of equality is attained if and only if the metric is flat and homothetic to the so-called "equilateral torus", i.e. torus whose group of deck transformations is precisely the hexagonal lattice spanned by the cube roots of unity in formula_4.
Alternative formulation.
Given a doubly periodic metric on formula_5 (e.g. an imbedding in formula_6 which is invariant by a formula_7 isometric action), there is a nonzero element formula_8 and a point formula_9 such that formula_10, where formula_11 is a fundamental domain for the action, while formula_12 is the Riemannian distance, namely least length of a path joining formula_13 and formula_14.
Proof of Loewner's torus inequality.
Loewner's torus inequality can be proved most easily by using the computational formula for the variance,
formula_15
Namely, the formula is applied to the probability measure defined by the measure of the unit area flat torus in the conformal class of the given torus. For the random variable "X", one takes the conformal factor of the given metric with respect to the flat one. Then the expected value E("X" 2) of "X" 2 expresses the total area of the given metric. Meanwhile, the expected value E("X") of "X" can be related to the systole by using Fubini's theorem. The variance of "X" can then be thought of as the isosystolic defect, analogous to the isoperimetric defect of Bonnesen's inequality. This approach therefore produces the following version of Loewner's torus inequality with isosystolic defect:
formula_16
where "ƒ" is the conformal factor of the metric with respect to a unit area flat metric in its conformal class.
Higher genus.
Whether or not the inequality
formula_17
is satisfied by all surfaces of nonpositive Euler characteristic is unknown. For orientable surfaces of genus 2 and genus 20 and above, the answer is affirmative; see work by Katz and Sabourau below.
{
"math_id": 0,
"text": "\\mathbb T^2"
},
{
"math_id": 1,
"text": " \\operatorname{sys}^2 \\leq \\frac{2}{\\sqrt{3}} \\operatorname{area}(\\mathbb T^2),"
},
{
"math_id": 2,
"text": "\\gamma_2"
},
{
"math_id": 3,
"text": " \\operatorname{sys}^2 \\leq \\gamma_2\\;\\operatorname{area}(\\mathbb T^2)."
},
{
"math_id": 4,
"text": "\\mathbb C"
},
{
"math_id": 5,
"text": "\\mathbb R^2"
},
{
"math_id": 6,
"text": "\\mathbb R^3"
},
{
"math_id": 7,
"text": "\\mathbb Z^2"
},
{
"math_id": 8,
"text": "g\\in \\mathbb Z^2"
},
{
"math_id": 9,
"text": "p\\in \\mathbb R^2"
},
{
"math_id": 10,
"text": "\\operatorname{dist}(p, g.p)^2 \\leq \\frac{2}{\\sqrt{3}} \\operatorname{area} (F)"
},
{
"math_id": 11,
"text": "F"
},
{
"math_id": 12,
"text": "\\operatorname{dist}"
},
{
"math_id": 13,
"text": "p"
},
{
"math_id": 14,
"text": " g . p "
},
{
"math_id": 15,
"text": "\\operatorname{E}(X^2)-(\\operatorname{E}(X))^2=\\mathrm{var}(X). "
},
{
"math_id": 16,
"text": "\\mathrm{area}-\\frac{\\sqrt{3}}{2}(\\mathrm{sys})^2\\geq \\mathrm{var}(f),"
},
{
"math_id": 17,
"text": " (\\mathrm{sys})^2 \\leq \\gamma_2\\,\\mathrm{area}"
}
] | https://en.wikipedia.org/wiki?curid=12003118 |
12003132 | Visibility polygon | Polygonal region of all points visible from a given point in a plane
In computational geometry, the visibility polygon or visibility region for a point p in the plane among obstacles is the possibly unbounded polygonal region of all points of the plane visible from p. The visibility polygon can also be defined for visibility from a segment, or a polygon. Visibility polygons are useful in robotics, video games, and in various optimization problems such as the facility location problem and the art gallery problem.
If the visibility polygon is bounded then it is a star-shaped polygon. A visibility polygon is bounded if all rays shooting from the point eventually terminate in some obstacle. This is the case, e.g., if the obstacles are the edges of a simple polygon and p is inside the polygon. In the latter case the visibility polygon may be found in linear time.
Definitions.
Formally, we can define the planar visibility polygon problem as such. Let formula_0 be a set of obstacles (either segments, or polygons) in formula_1. Let formula_2 be a point in formula_1 that is not within an obstacle. Then, the "point visibility polygon" formula_3 is the set of points in formula_1, such that for every point formula_4 in formula_3, the segment formula_5 does not intersect any obstacle in formula_0.
Likewise, the "segment visibility polygon" or "edge visibility polygon" is the portion visible to any point along a line segment.
Applications.
Visibility polygons are useful in robotics. For example, in robot localization, a robot using sensors such as a lidar will detect obstacles that it can see, which is similar to a visibility polygon.
They are also useful in video games, with numerous online tutorials explaining simple algorithms for implementing it.
Algorithms for point visibility polygons.
Numerous algorithms have been proposed for computing the point visibility polygon. For different variants of the problem (e.g. different types of obstacles), algorithms vary in time complexity.
Naive algorithms.
Naive algorithms are easy to understand and implement, but they are not optimal, since they can be much slower than other algorithms.
Uniform ray casting "physical" algorithm.
In real life, a glowing point illuminates the region visible to it because it emits light in every direction. This can be simulated by shooting rays in many directions. Suppose that the point is formula_2 and the set of obstacles is formula_0. Then, the pseudocode may be expressed in the following way:
algorithm naive_bad_algorithm(formula_2, formula_0) is
    formula_3 := formula_6
    for formula_7:
        // shoot a ray with angle formula_8
        formula_9 := formula_10
        for each obstacle in formula_0:
            formula_9 := min(formula_9, distance from formula_2 to the obstacle in this direction)
        add vertex formula_11 to formula_3
    return formula_3
Now, if it were possible to try all the infinitely many angles, the result would be correct. Unfortunately, it is impossible to really try every single direction due to the limitations of computers. An approximation can be created by casting many, say, 50 rays spaced uniformly apart. However, this is not an exact solution, since small obstacles might be missed by two adjacent rays entirely. Furthermore, it is very slow, since one may have to shoot many rays to gain a roughly similar solution, and the output visibility polygon may have many more vertices in it than necessary.
Ray casting to every vertex.
The previously described algorithm can be significantly improved in both speed and correctness by making the observation that it suffices to only shoot rays to every obstacle's vertices. This is because any bends or corners along the boundary of a visibility polygon must be due to some corner (i.e. a vertex) in an obstacle.
algorithm naive_better_algorithm(formula_2, formula_0) is
    formula_3 := formula_6
    for each obstacle formula_12 in formula_0:
        for each vertex formula_13 of formula_12:
            // shoot a ray from formula_2 to formula_13
            formula_9 := distance from formula_2 to formula_13
            formula_8 := angle of formula_13 with respect to formula_2
            for each obstacle formula_14 in formula_0:
                formula_9 := min(formula_9, distance from formula_2 to formula_14)
            add vertex formula_11 to formula_3
    return formula_3
As stated, this algorithm can miss parts of the visibility polygon: when a ray grazes an obstacle vertex and the region behind that vertex is visible, the edge lying behind it is only found by also casting rays offset slightly to either side of the vertex.
The time complexity of this algorithm is formula_15. This is because the algorithm shoots a ray to every one of the formula_16 vertices, and to check where the ray ends, it has to check for intersection with every one of the formula_17 obstacles. This is sufficient for many simple applications such as video games, and as such many online tutorials teach this method. However, as we shall see later, there are faster formula_18 algorithms (or even faster ones if the obstacle is a simple polygon or if there are a fixed number of polygonal holes).
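A minimal, self-contained Python sketch of the ray-casting-to-vertices approach (not one of the optimal algorithms discussed below); the function names are chosen for this illustration, and the obstacle set is assumed to include a bounding box around the query point so that every ray terminates. Rays are also cast slightly to either side of each vertex, addressing the caveat noted above.

import math

EPS = 1e-9
ANGLE_OFFSET = 1e-5   # cast extra rays just past each vertex

def ray_segment_hit(px, py, dx, dy, ax, ay, bx, by):
    """Distance t >= 0 along ray (p, d) to segment a-b, or None if no hit."""
    rx, ry = bx - ax, by - ay
    denom = dx * ry - dy * rx
    if abs(denom) < EPS:               # ray parallel to segment
        return None
    t = ((ax - px) * ry - (ay - py) * rx) / denom
    u = ((ax - px) * dy - (ay - py) * dx) / denom
    if t >= EPS and -EPS <= u <= 1 + EPS:
        return t
    return None

def visibility_polygon(p, segments):
    """Boundary points of the visibility polygon of p, in counterclockwise order.
    The segments are assumed to include a bounding box around p."""
    px, py = p
    angles = set()
    for (ax, ay), (bx, by) in segments:
        for vx, vy in ((ax, ay), (bx, by)):
            base = math.atan2(vy - py, vx - px)
            # the ray through the vertex plus two rays slightly to each side,
            # so edges hiding just behind a vertex are not missed
            angles.update((base - ANGLE_OFFSET, base, base + ANGLE_OFFSET))
    hits = []
    for theta in sorted(angles):
        dx, dy = math.cos(theta), math.sin(theta)
        best = None
        for (ax, ay), (bx, by) in segments:
            t = ray_segment_hit(px, py, dx, dy, ax, ay, bx, by)
            if t is not None and (best is None or t < best):
                best = t
        if best is not None:
            hits.append((px + best * dx, py + best * dy))
    return hits

# Example: a square room with one obstacle segment inside it.
room = [((0, 0), (10, 0)), ((10, 0), (10, 10)),
        ((10, 10), (0, 10)), ((0, 10), (0, 0))]
obstacle = [((4, 4), (6, 4))]
print(visibility_polygon((5, 2), room + obstacle))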
Optimal algorithms for a point in a simple polygon.
Given a simple polygon formula_19 and a point formula_2, a linear time algorithm is optimal for computing the region in formula_19 that is visible from formula_2. Such an algorithm was first proposed in 1981. However, it is quite complicated. In 1983, a conceptually simpler algorithm was proposed, which had a minor error that was corrected in 1987.
The latter algorithm will be briefly explained here. It simply walks around the boundary of the polygon formula_19, processing the vertices in the order in which they appear, while maintaining a stack of vertices formula_20 where formula_21 is the top of the stack. The stack constitutes the vertices encountered so far which are visible to formula_2. If, later during the execution of the algorithm, some new vertices are encountered that obscure part of formula_22, then the obscured vertices in formula_22 will be popped from the stack. By the time the algorithm terminates, formula_22 will consist of all the visible vertices, i.e. the desired visibility polygon. An efficient implementation was published in 2014.
Optimal algorithms for a point in a polygon with holes.
For a point in a polygon with formula_23 holes and formula_16 vertices in total, it can be shown that in the worst case, a formula_24 algorithm is optimal. Such an algorithm was proposed in 1995 together with its proof of optimality. However, it relies on the linear time polygon triangulation algorithm by Chazelle, which is extremely complex.
Optimal algorithms for a point among segments.
Segments that do not intersect except at their endpoints.
For a point among a set of formula_16 segments that do not intersect except at their endpoints, it can be shown that in the worst case, a formula_25 algorithm is optimal. This is because a visibility polygon algorithm must output the vertices of the visibility polygon in sorted order, hence the problem of sorting can be reduced to computing a visibility polygon.
Notice that any algorithm that computes a visibility polygon for a point among segments can be used to compute a visibility polygon for all other kinds of polygonal obstacles, since any polygon can be decomposed into segments.
Divide and conquer.
A divide-and-conquer algorithm to compute the visibility polygon was proposed in 1987.
Angular sweep.
An "angular sweep", i.e. rotational plane sweep algorithm to compute the visibility polygon was proposed in 1985 and 1986. The idea is to first sort all the segment endpoints by polar angle, and then iterate over them in that order. During the iteration, the event line is maintained as a heap. An efficient implementation was published in 2014.
Generally intersecting segments.
For a point among generally intersecting segments, the visibility polygon problem is reducible, in linear time, to the lower envelope problem. By the Davenport–Schinzel argument, the lower envelope in the worst case has formula_26 number of vertices, where formula_27 is the inverse Ackermann function. A worst case optimal divide-and-conquer algorithm running in formula_25 time was created by John Hershberger in 1989. | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "\\mathbb{R}^2"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "q"
},
{
"math_id": 5,
"text": "pq"
},
{
"math_id": 6,
"text": "\\emptyset"
},
{
"math_id": 7,
"text": "\\theta = 0, \\cdots, 2\\pi"
},
{
"math_id": 8,
"text": "\\theta"
},
{
"math_id": 9,
"text": "r"
},
{
"math_id": 10,
"text": "\\infty"
},
{
"math_id": 11,
"text": "(\\theta, r)"
},
{
"math_id": 12,
"text": "b"
},
{
"math_id": 13,
"text": "v"
},
{
"math_id": 14,
"text": "b'"
},
{
"math_id": 15,
"text": "O(n^2)"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "O(n)"
},
{
"math_id": 18,
"text": "O(n\\log n)"
},
{
"math_id": 19,
"text": "\\mathcal{P}"
},
{
"math_id": 20,
"text": "\\mathcal{S}=s_0, s_1,\\cdots, s_t"
},
{
"math_id": 21,
"text": "s_t"
},
{
"math_id": 22,
"text": "\\mathcal{S}"
},
{
"math_id": 23,
"text": "h"
},
{
"math_id": 24,
"text": "\\Theta(n + h\\log h)"
},
{
"math_id": 25,
"text": "\\Theta(n\\log n)"
},
{
"math_id": 26,
"text": "\\Theta(n\\alpha(n))"
},
{
"math_id": 27,
"text": "\\alpha(n)"
}
] | https://en.wikipedia.org/wiki?curid=12003132 |
1200324 | Convective available potential energy | Measure of instability in the air as a buoyancy force
In meteorology, convective available potential energy (commonly abbreviated as CAPE), is a measure of the capacity of the atmosphere to support upward air movement that can lead to cloud formation and storms. Some atmospheric conditions, such as very warm, moist, air in an atmosphere that cools rapidly with height, can promote strong and sustained upward air movement, possibly stimulating the formation of cumulus clouds or cumulonimbus (thunderstorm clouds). In that situation the potential energy of the atmosphere to cause upward air movement is very high, so CAPE (a measure of potential energy) would be high and positive. By contrast, other conditions, such as a less warm air parcel or a parcel in an atmosphere with a temperature inversion (in which the temperature increases above a certain height) have much less capacity to support vigorous upward air movement, thus the potential energy level (CAPE) would be much lower, as would the probability of thunderstorms.
More technically, CAPE is the integrated amount of work that the upward (positive) buoyancy force would perform on a given mass of air (called an air parcel) if it rose vertically through the entire atmosphere. Positive CAPE will cause the air parcel to rise, while negative CAPE will cause the air parcel to sink.
Nonzero CAPE is an indicator of atmospheric instability in any given atmospheric sounding, a necessary condition for the development of cumulus and cumulonimbus clouds with attendant severe weather hazards.
Mechanics.
CAPE exists within the conditionally unstable layer of the troposphere, the free convective layer (FCL), where an ascending air parcel is warmer than the ambient air. CAPE is measured in joules per kilogram of air (J/kg). Any value greater than 0 J/kg indicates instability and an increasing possibility of thunderstorms and hail. Generic CAPE is calculated by integrating vertically the local buoyancy of a parcel from the level of free convection (LFC) to the equilibrium level (EL):
formula_0
Where formula_1 is the height of the level of free convection and formula_2 is the height of the equilibrium level (neutral buoyancy), where formula_3 is the virtual temperature of the specific parcel, where formula_4 is the virtual temperature of the environment (note that temperatures must be in the Kelvin scale), and where formula_5 is the acceleration due to gravity. This integral is the work done by the buoyant force minus the work done against gravity, hence it's the excess energy that can become kinetic energy.
CAPE for a given region is most often calculated from a thermodynamic or sounding diagram (e.g., a Skew-T log-P diagram) using air temperature and dew point data usually measured by a weather balloon.
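A minimal sketch of how the integral is evaluated on a discrete sounding (the height and virtual-temperature profile below is invented for illustration; the levels are assumed to lie between the LFC and the EL, with temperatures in kelvins and heights in metres):

g = 9.81  # m/s^2

def cape(z, t_parcel, t_env):
    """Trapezoidal integration of g * (Tv_parcel - Tv_env) / Tv_env dz."""
    total = 0.0
    for i in range(len(z) - 1):
        b0 = g * (t_parcel[i] - t_env[i]) / t_env[i]
        b1 = g * (t_parcel[i + 1] - t_env[i + 1]) / t_env[i + 1]
        total += 0.5 * (b0 + b1) * (z[i + 1] - z[i])
    return total  # J/kg

z        = [1500, 3000, 6000, 9000, 11000]
t_parcel = [295.0, 285.0, 266.0, 245.0, 228.0]
t_env    = [293.0, 281.0, 260.0, 240.0, 228.0]
print(round(cape(z, t_parcel, t_env), 1), "J/kg")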
CAPE is effectively positive buoyancy, expressed "B+" or simply "B"; the opposite of convective inhibition (CIN), which is expressed as "B-", and can be thought of as "negative CAPE". As with CIN, CAPE is usually expressed in J/kg but may also be expressed as m2/s2, as the values are equivalent. In fact, CAPE is sometimes referred to as "positive buoyant energy" ("PBE"). This type of CAPE is the maximum energy available to an ascending parcel and to moist convection. When a layer of CIN is present, the layer must be eroded by surface heating or mechanical lifting, so that convective boundary layer parcels may reach their level of free convection (LFC).
On a sounding diagram, CAPE is the "positive area" above the LFC, the area between the parcel's virtual temperature line and the environmental virtual temperature line where the ascending parcel is warmer than the environment. Neglecting the virtual temperature correction may result in substantial relative errors in the calculated value of CAPE for small CAPE values. CAPE may also exist below the LFC, but if a layer of CIN (subsidence) is present, it is unavailable to deep, moist convection until CIN is exhausted. When there is mechanical lift to saturation, cloud base begins at the lifted condensation level (LCL); absent forcing, cloud base begins at the convective condensation level (CCL) where heating from below causes spontaneous buoyant lifting to the point of condensation when the convective temperature is reached. When CIN is absent or is overcome, saturated parcels at the LCL or CCL, which had been small cumulus clouds, will rise to the LFC, and then spontaneously rise until hitting the stable layer of the equilibrium level. The result is deep, moist convection (DMC), or simply, a thunderstorm.
When a parcel is unstable, it will continue to move vertically, in either direction, dependent on whether it receives upward or downward forcing, until it reaches a stable layer (though momentum, gravity, and other forcing may cause the parcel to continue). There are multiple types of CAPE: "downdraft CAPE" ("DCAPE") estimates the potential strength of rain- and evaporatively cooled downdrafts. Other types of CAPE may depend on the depth being considered. Other examples are "surface based CAPE" ("SBCAPE"), "mixed layer" or "mean layer CAPE" ("MLCAPE"), "most unstable" or "maximum usable CAPE" ("MUCAPE"), and "normalized CAPE" ("NCAPE").
Fluid elements displaced upwards or downwards in such an atmosphere expand or compress adiabatically in order to remain in pressure equilibrium with their surroundings, and in this manner become less or more dense.
If the adiabatic decrease or increase in density is "less" than the decrease or increase in the density of the ambient (not moved) medium, then the displaced fluid element will be subject to downwards or upwards pressure, which will function to restore it to its original position. Hence, there will be a counteracting force to the initial displacement. Such a condition is referred to as "convective stability".
On the other hand, if adiabatic decrease or increase in density is "greater" than in the ambient fluid, the upwards or downwards displacement will be met with an additional force in the same direction exerted by the ambient fluid. In these circumstances, small deviations from the initial state will become amplified. This condition is referred to as "convective instability".
Convective instability is also termed "static instability", because the instability does not depend on the existing motion of the air; this contrasts with dynamic instability where instability is dependent on the motion of air and its associated effects such as dynamic lifting.
Significance to thunderstorms.
Thunderstorms form when air parcels are lifted vertically. Deep, moist convection requires a parcel to be lifted to the LFC where it then rises spontaneously until reaching a layer of non-positive buoyancy. The atmosphere is warm at the surface and lower levels of the troposphere where there is mixing (the planetary boundary layer (PBL)), but becomes substantially cooler with height. The temperature profile of the atmosphere, the change in temperature, the degree that it cools with height, is the lapse rate. When the rising air parcel cools more slowly than the surrounding atmosphere, it remains warmer and less dense. The parcel continues to rise freely (convectively; without mechanical lift) through the atmosphere until it reaches an area of air less dense (warmer) than itself.
The amount, and shape, of the positive-buoyancy area modulates the speed of updrafts, thus extreme CAPE can result in explosive thunderstorm development; such rapid development usually occurs when CAPE stored by a capping inversion is released when the "lid" is broken by heating or mechanical lift. The amount of CAPE also modulates how low-level vorticity is entrained and then stretched in the updraft, with importance to tornadogenesis. The most important CAPE for tornadoes is within the lowest 1 to 3 km (0.6 to 1.9 mi) of the atmosphere, whilst deep layer CAPE and the width of CAPE at mid-levels is important for supercells. Tornado outbreaks tend to occur within high CAPE environments. Large CAPE is required for the production of very large hail, owing to updraft strength, although a rotating updraft may be stronger with less CAPE. Large CAPE also promotes lightning activity.
Two notable days for severe weather exhibited CAPE values over 5 kJ/kg. Two hours before the 1999 Oklahoma tornado outbreak occurred on May 3, 1999, the CAPE value sounding at Oklahoma City was at 5.89 kJ/kg. A few hours later, an F5 tornado ripped through the southern suburbs of the city. Also on May 4, 2007, CAPE values of 5.5 kJ/kg were reached and an EF5 tornado tore through Greensburg, Kansas. On these days, it was apparent that conditions were ripe for tornadoes and CAPE wasn't a crucial factor. However, extreme CAPE, by modulating the updraft (and downdraft), can allow for exceptional events, such as the deadly F5 tornadoes that hit Plainfield, Illinois on August 28, 1990, and Jarrell, Texas on May 27, 1997, on days which weren't readily apparent as conducive to large tornadoes. CAPE was estimated to exceed 8 kJ/kg in the environment of the Plainfield storm and was around 7 kJ/kg for the Jarrell storm.
Severe weather and tornadoes can develop in an area of low CAPE values. The surprise severe weather event that occurred in Illinois and Indiana on April 20, 2004, is a good example. Importantly in that case, was that although overall CAPE was weak, there was strong CAPE in the lowest levels of the troposphere which enabled an outbreak of minisupercells producing large, long-track, intense tornadoes.
Example from meteorology.
A good example of convective instability can be found in our own atmosphere. If dry mid-level air is drawn over very warm, moist air in the lower troposphere, a hydrolapse (an area of rapidly decreasing dew point temperatures with height) results in the region where the moist boundary layer and mid-level air meet. As daytime heating increases mixing within the moist boundary layer, some of the moist air will begin to interact with the dry mid-level air above it. Owing to thermodynamic processes, as the dry mid-level air is slowly saturated its temperature begins to drop, increasing the adiabatic lapse rate. Under certain conditions, the lapse rate can increase significantly in a short amount of time, resulting in convection. High convective instability can lead to severe thunderstorms and tornadoes as moist air which is trapped in the boundary layer eventually becomes strongly positively buoyant relative to its environment and escapes as a rapidly rising bubble of humid air, triggering the development of a cumulus or cumulonimbus cloud.
Limitations.
As with most parameters used in meteorology, there are some caveats to keep in mind, one of which is what CAPE represents physically and in what instances CAPE can be used. One example where the more common method for determining CAPE might start to break down is in the presence of tropical cyclones (such as tropical depressions, tropical storms, or hurricanes).
The more common method of determining CAPE can break down near tropical cyclones because CAPE assumes that liquid water is lost instantaneously during condensation; the process is thus irreversible upon adiabatic descent, which is not realistic for tropical cyclones (TCs for short). One way to make the process more realistic for tropical cyclones is to use reversible CAPE (RCAPE for short). RCAPE assumes the opposite extreme to the standard convention of CAPE: that no liquid water is lost during the process. This new process gives parcels a greater density due to water loading.
RCAPE is calculated using the same formula as CAPE, the difference in the formula being in the virtual temperature. In this new formulation, we replace the parcel saturation mixing ratio (which leads to the condensation and vanishing of liquid water) with the parcel water content. This slight change can drastically change the values we get through the integration.
RCAPE does have some limitations, one of which is that it assumes no evaporation; this is consistent with use within a TC, but means that RCAPE should be used sparingly elsewhere.
Another limitation of both CAPE and RCAPE is that currently, both systems do not consider entrainment.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{CAPE} = \\int_{z_\\mathrm{f}}^{z_\\mathrm{n}} g \\left(\\frac{T_\\mathrm{v,parcel} - T_\\mathrm{v,env}}{T_\\mathrm{v,env}}\\right) \\, dz"
},
{
"math_id": 1,
"text": "z_\\mathrm{f}"
},
{
"math_id": 2,
"text": "z_\\mathrm{n}"
},
{
"math_id": 3,
"text": "T_\\mathrm{v,parcel}"
},
{
"math_id": 4,
"text": "T_\\mathrm{v,env}"
},
{
"math_id": 5,
"text": "g"
}
] | https://en.wikipedia.org/wiki?curid=1200324 |
12004717 | Pu's inequality | In differential geometry, Pu's inequality, proved by Pao Ming Pu, relates the area of an arbitrary Riemannian surface homeomorphic to the real projective plane with the lengths of the closed curves contained in it.
Statement.
A student of Charles Loewner, Pu proved in his 1950 thesis that every Riemannian surface formula_0 homeomorphic to the real projective plane satisfies the inequality
formula_1
where formula_2 is the systole of formula_0.
The equality is attained precisely when the metric has constant Gaussian curvature.
In other words, if all noncontractible loops in formula_0 have length at least formula_3, then formula_4 and the equality holds if and only if formula_0 is obtained from a Euclidean sphere of radius formula_5 by identifying each point with its antipodal.
Pu's paper also stated for the first time Loewner's inequality, a similar result for Riemannian metrics on the torus.
Proof.
Pu's original proof relies on the uniformization theorem and employs an averaging argument, as follows.
By uniformization, the Riemannian surface formula_6 is conformally diffeomorphic to a round projective plane. This means that we may assume that the surface formula_0 is obtained from the Euclidean unit sphere formula_7 by identifying antipodal points, and the Riemannian length element at each point formula_8 is
formula_9
where formula_10 is the Euclidean length element and the function formula_11, called the conformal factor, satisfies formula_12.
More precisely, the universal cover of formula_0 is formula_7, a loop formula_13 is noncontractible if and only if its lift formula_14 goes from one point to its opposite, and the length of each curve formula_15 is
formula_16
Subject to the restriction that each of these lengths is at least formula_3, we want to find an formula_17 that minimizes the
formula_18
where formula_19 is the upper half of the sphere.
A key observation is that if we average several different formula_20 that satisfy the length restriction and have the same area formula_21, then we obtain a better conformal factor formula_22, that also satisfies the length restriction and has
formula_23
formula_24
and the inequality is strict unless the functions formula_20 are equal.
A way to improve any non-constant formula_17 is to obtain the different functions formula_20 from formula_17 using rotations of the sphere formula_25, defining formula_26. If we average over all possible rotations, then we get an formula_27 that is constant over the whole sphere. We can further reduce this constant to the minimum value formula_28 allowed by the length restriction. Then we obtain the unique metric that attains the minimum area formula_29.
Reformulation.
Alternatively, every metric on the sphere formula_30 invariant under the antipodal map admits a pair of opposite points formula_31 at Riemannian distance formula_32 satisfying formula_33
A more detailed explanation of this viewpoint may be found at the page Introduction to systolic geometry.
Filling area conjecture.
An alternative formulation of Pu's inequality is the following. Of all possible fillings of the Riemannian circle of length formula_34 by a formula_35-dimensional disk with the strongly isometric property, the round hemisphere has the least area.
To explain this formulation, we start with the observation that the equatorial circle of the unit formula_35-sphere formula_36 is a Riemannian circle formula_37 of length formula_34. More precisely, the Riemannian distance function of formula_37 is induced from the ambient Riemannian distance on the sphere. Note that this property is not satisfied by the standard imbedding of the unit circle in the Euclidean plane. Indeed, the Euclidean distance between a pair of opposite points of the circle is only formula_35, whereas in the Riemannian circle it is formula_38.
We consider all fillings of formula_37 by a formula_35-dimensional disk, such that the metric induced by the inclusion of the circle as the boundary of the disk is the Riemannian metric of a circle of length formula_34. The inclusion of the circle as the boundary is then called a strongly isometric imbedding of the circle.
Gromov conjectured that the round hemisphere gives the "best" way of filling the circle even when the filling surface is allowed to have positive genus.
Isoperimetric inequality.
Pu's inequality bears a curious resemblance to the classical isoperimetric inequality
formula_39
for Jordan curves in the plane, where formula_40 is the length of the curve while formula_41 is the area of the region it bounds. Namely, in both cases a 2-dimensional quantity (area) is bounded by (the square of) a 1-dimensional quantity (length). However, the inequality goes in the opposite direction. Thus, Pu's inequality can be thought of as an "opposite" isoperimetric inequality.
{
"math_id": 0,
"text": " M "
},
{
"math_id": 1,
"text": " \\operatorname{Area}(M) \\geq \\frac{2}{\\pi} \\operatorname{Systole}(M)^2 ,"
},
{
"math_id": 2,
"text": " \\operatorname{Systole}(M) "
},
{
"math_id": 3,
"text": " L "
},
{
"math_id": 4,
"text": " \\operatorname{Area}(M) \\geq \\frac{2}{\\pi} L^2, "
},
{
"math_id": 5,
"text": " r=L/\\pi "
},
{
"math_id": 6,
"text": " (M,g) "
},
{
"math_id": 7,
"text": " S^2 "
},
{
"math_id": 8,
"text": " x "
},
{
"math_id": 9,
"text": " \\mathrm{dLength} = f(x) \\mathrm{dLength}_{\\text{Euclidean}}, "
},
{
"math_id": 10,
"text": " \\mathrm{dLength}_{\\text{Euclidean}} "
},
{
"math_id": 11,
"text": " f: S^2\\to(0,+\\infty) "
},
{
"math_id": 12,
"text": " f(-x)=f(x) "
},
{
"math_id": 13,
"text": "\\gamma\\subseteq M "
},
{
"math_id": 14,
"text": " \\widetilde\\gamma\\subseteq S^2"
},
{
"math_id": 15,
"text": "\\gamma"
},
{
"math_id": 16,
"text": " \\operatorname{Length}(\\gamma)=\\int_{\\widetilde\\gamma} f \\, \\mathrm{dLength}_{\\text{Euclidean}}."
},
{
"math_id": 17,
"text": " f "
},
{
"math_id": 18,
"text": " \\operatorname{Area}(M,g)=\\int_{S^2_+} f(x)^2\\,\\mathrm{dArea}_{\\text{Euclidean}}(x),"
},
{
"math_id": 19,
"text": " S^2_+ "
},
{
"math_id": 20,
"text": " f_i "
},
{
"math_id": 21,
"text": " A "
},
{
"math_id": 22,
"text": " f_{\\text{new}} = \\frac{1}{n} \\sum_{0\\leq i<n} f_i"
},
{
"math_id": 23,
"text": " \\operatorname{Area}(M,g_{\\text{new}}) = \\int_{S^2_+}\\left(\\frac 1n\\sum_i f_i(x)\\right)^2\\mathrm{dArea}_{\\text{Euclidean}}(x) "
},
{
"math_id": 24,
"text": " \\qquad\\qquad \\leq \\frac{1}{n}\\sum_i\\left(\\int_{S^2_+} f_i(x)^2\\mathrm{dArea}_{\\text{Euclidean}}(x)\\right) = A,"
},
{
"math_id": 25,
"text": " R_i\\in SO^3 "
},
{
"math_id": 26,
"text": " f_i(x)=f(R_i(x))"
},
{
"math_id": 27,
"text": " f_{\\text{new}} "
},
{
"math_id": 28,
"text": " r=\\frac L\\pi "
},
{
"math_id": 29,
"text": " 2\\pi r^2 = \\frac 2\\pi L^2 "
},
{
"math_id": 30,
"text": "S^2"
},
{
"math_id": 31,
"text": "p,q\\in S^2"
},
{
"math_id": 32,
"text": "d=d(p,q)"
},
{
"math_id": 33,
"text": "d^2 \\leq \\frac{\\pi}{4} \\operatorname{area} (S^2)."
},
{
"math_id": 34,
"text": "2\\pi"
},
{
"math_id": 35,
"text": "2"
},
{
"math_id": 36,
"text": "S^2 \\subset \\mathbb R^3"
},
{
"math_id": 37,
"text": "S^1"
},
{
"math_id": 38,
"text": "\\pi"
},
{
"math_id": 39,
"text": " L^2 \\geq 4\\pi A "
},
{
"math_id": 40,
"text": "L"
},
{
"math_id": 41,
"text": "A"
}
] | https://en.wikipedia.org/wiki?curid=12004717 |
1200537 | Comodule | In mathematics, a comodule or corepresentation is a concept dual to a module. The definition of a comodule over a coalgebra is formed by dualizing the definition of a module over an associative algebra.
Formal definition.
Let "K" be a field, and "C" be a coalgebra over "K". A (right) comodule over "C" is a "K"-vector space "M" together with a linear map
formula_0
such that
where Δ is the comultiplication for "C", and ε is the counit.
Note that in the second rule we have identified formula_3 with formula_4.
Examples.
A graded vector space "V" can be made into a comodule. Let "I" be the index set for the graded vector space, and let formula_5 be the vector space with basis formula_6 for formula_7. We turn formula_5 into a coalgebra and "V" into a formula_5-comodule, as follows:
# Let the comultiplication on formula_5 be given by formula_8.
# Let the counit on formula_5 be given by formula_9.
# Let the map formula_10 on "V" be given by formula_11, where formula_12 is the "i"-th homogeneous piece of formula_13.
In algebraic topology.
One important result in algebraic topology is the fact that homology formula_14 over the dual Steenrod algebra formula_15 forms a comodule. This comes from the fact that the Steenrod algebra formula_16 has a canonical action on the cohomology
formula_17
When we dualize to the dual Steenrod algebra, this gives a comodule structure
formula_18
This result extends to other cohomology theories as well, such as complex cobordism, and is instrumental in computing its cohomology ring formula_19. The main reason for considering the comodule structure on homology instead of the module structure on cohomology lies in the fact that the dual Steenrod algebra formula_15 is a commutative ring, and the setting of commutative algebra provides more tools for studying its structure.
Rational comodule.
If "M" is a (right) comodule over the coalgebra "C", then "M" is a (left) module over the dual algebra "C"∗, but the converse is not true in general: a module over "C"∗ is not necessarily a comodule over "C". A rational comodule is a module over "C"∗ which becomes a comodule over "C" in the natural way.
Comodule morphisms.
Let "R" be a ring, "M", "N", and "C" be "R"-modules, and
formula_20
be right "C"-comodules. Then an "R"-linear map formula_21 is called a (right) comodule morphism, or (right) C-colinear, if
formula_22
This notion is dual to the notion of a linear map between vector spaces, or, more generally, of a homomorphism between "R"-modules.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho\\colon M \\to M \\otimes C"
},
{
"math_id": 1,
"text": "(\\mathrm{id} \\otimes \\Delta) \\circ \\rho = (\\rho \\otimes \\mathrm{id}) \\circ \\rho"
},
{
"math_id": 2,
"text": "(\\mathrm{id} \\otimes \\varepsilon) \\circ \\rho = \\mathrm{id}"
},
{
"math_id": 3,
"text": "M \\otimes K"
},
{
"math_id": 4,
"text": "M\\,"
},
{
"math_id": 5,
"text": "C_I"
},
{
"math_id": 6,
"text": "e_i"
},
{
"math_id": 7,
"text": "i \\in I"
},
{
"math_id": 8,
"text": "\\Delta(e_i) = e_i \\otimes e_i"
},
{
"math_id": 9,
"text": "\\varepsilon(e_i) = 1\\ "
},
{
"math_id": 10,
"text": "\\rho"
},
{
"math_id": 11,
"text": "\\rho(v) = \\sum v_i \\otimes e_i"
},
{
"math_id": 12,
"text": "v_i"
},
{
"math_id": 13,
"text": "v"
},
{
"math_id": 14,
"text": "H_*(X)"
},
{
"math_id": 15,
"text": "\\mathcal{A}^*"
},
{
"math_id": 16,
"text": "\\mathcal{A}"
},
{
"math_id": 17,
"text": "\\mu: \\mathcal{A}\\otimes H^*(X) \\to H^*(X)"
},
{
"math_id": 18,
"text": "\\mu^*:H_*(X) \\to \\mathcal{A}^*\\otimes H_*(X)"
},
{
"math_id": 19,
"text": "\\Omega_U^*(\\{pt\\})"
},
{
"math_id": 20,
"text": "\\rho_M: M \\rightarrow M \\otimes C,\\ \\rho_N: N \\rightarrow N \\otimes C"
},
{
"math_id": 21,
"text": "f: M \\rightarrow N"
},
{
"math_id": 22,
"text": "\\rho_N \\circ f = (f \\otimes 1) \\circ \\rho_M."
}
] | https://en.wikipedia.org/wiki?curid=1200537 |
1200773 | Midparent | In human genetics, the midparent value of a trait is defined as the average of the trait value of the father and a scaled version of that of the mother. Studying quantitative traits in heritability studies may be complicated by sex differences observed for the trait; the midparent value allows such a data set to be analyzed without heeding sex effects.
Well-known examples include studies of human height, whose midparent value "h""mp" is given by:
formula_0
where "h""f" is the father's height, and "h""m" the mother's.
The coefficient 1.08 serves as a scaling factor. After the 1.08 scaling, the mean of mothers' heights is the same as that of fathers', and the variance is closer to that of fathers'; in this way, the sex difference can be ignored.
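A minimal sketch of the formula in Python (the heights below are arbitrary illustrative values):
```python
def midparent_height(father: float, mother: float) -> float:
    """Midparent value: average of the father's height and 1.08 times the mother's height."""
    return (father + 1.08 * mother) / 2

print(midparent_height(180.0, 165.0))  # 179.1, e.g. with heights in centimetres
```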
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "h_{mp}=\\frac{h_f+(1.08\\times h_m)}{2}"
}
] | https://en.wikipedia.org/wiki?curid=1200773 |
12008116 | Alpha recursion theory | In recursion theory, α recursion theory is a generalisation of recursion theory to subsets of admissible ordinals formula_0. An admissible set is closed under formula_1 functions, where formula_2 denotes a rank of Gödel's constructible hierarchy. formula_0 is an admissible ordinal if formula_3 is a model of Kripke–Platek set theory. In what follows, formula_0 is considered to be fixed.
Definitions.
The objects of study in formula_0 recursion are subsets of formula_0. These sets are said to have some properties:
There are also some similar definitions for functions mapping formula_0 to formula_0:
Additional connections between recursion theory and α recursion theory can be drawn, although explicit definitions may not have yet been written to formalize them:
We say R is a reduction procedure if it is formula_0 recursively enumerable and every member of R is of the form formula_18 where "H", "J", "K" are all α-finite.
"A" is said to be α-recursive in "B" if there exist formula_19 reduction procedures such that:
formula_20
formula_21
If "A" is recursive in "B" this is written formula_22. By this definition "A" is recursive in formula_23 (the empty set) if and only if "A" is recursive. However A being recursive in B is not equivalent to A being formula_24.
We say "A" is regular if formula_25 or in other words if every initial portion of "A" is α-finite.
Work in α recursion.
Shore's splitting theorem: Let A be formula_0 recursively enumerable and regular. There exist formula_0 recursively enumerable formula_26 such that formula_27
Shore's density theorem: Let "A", "C" be α-regular recursively enumerable sets such that formula_28 then there exists a regular α-recursively enumerable set "B" such that formula_29.
Barwise has proved that the sets formula_11-definable on formula_30 are exactly the sets formula_31-definable on formula_6, where formula_32 denotes the next admissible ordinal above formula_0, and formula_33 is from the Levy hierarchy.
There is a generalization of limit computability to partial formula_34 functions.
A computational interpretation of formula_0-recursion exists, using "formula_0-Turing machines" with a two-symbol tape of length formula_0, that at limit computation steps take the limit inferior of cell contents, state, and head position. For admissible formula_0, a set formula_4 is formula_0-recursive iff it is computable by an formula_0-Turing machine, and formula_35 is formula_0-recursively-enumerable iff formula_35 is the range of a function computable by an formula_0-Turing machine.
A problem in α-recursion theory which is open (as of 2019) is the embedding conjecture for admissible ordinals, which is whether for all admissible formula_0, the automorphisms of the formula_0-enumeration degrees embed into the automorphisms of the formula_0-enumeration degrees.
Relationship to analysis.
Some results in formula_0-recursion can be translated into similar results about second-order arithmetic. This is because of the relationship formula_9 has with the ramified analytic hierarchy, an analog of formula_9 for the language of second-order arithmetic, that consists of sets of integers.
In fact, when dealing with first-order logic only, the correspondence can be close enough that for some results on formula_36, the arithmetical and Levy hierarchies can become interchangeable. For example, a set of natural numbers is definable by a formula_37 formula iff it's formula_11-definable on formula_38, where formula_11 is a level of the Levy hierarchy. More generally, definability of a subset of ω over HF with a formula_16 formula coincides with its arithmetical definability using a formula_39 formula. | [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\Sigma_1(L_\\alpha)"
},
{
"math_id": 2,
"text": "L_\\xi"
},
{
"math_id": 3,
"text": "L_{\\alpha}"
},
{
"math_id": 4,
"text": "A\\subseteq\\alpha"
},
{
"math_id": 5,
"text": " \\Sigma_1"
},
{
"math_id": 6,
"text": "L_\\alpha"
},
{
"math_id": 7,
"text": "\\alpha \\setminus A"
},
{
"math_id": 8,
"text": "L_{\\alpha+1}"
},
{
"math_id": 9,
"text": "L"
},
{
"math_id": 10,
"text": "<Math>L_{\\alpha+1}</math>"
},
{
"math_id": 11,
"text": "\\Sigma_1"
},
{
"math_id": 12,
"text": "(L_\\alpha,\\in)"
},
{
"math_id": 13,
"text": "\\Delta_1"
},
{
"math_id": 14,
"text": "f:\\alpha\\rightarrow\\alpha"
},
{
"math_id": 15,
"text": "n\\in\\omega"
},
{
"math_id": 16,
"text": "\\Sigma_n"
},
{
"math_id": 17,
"text": "\\Delta_0"
},
{
"math_id": 18,
"text": " \\langle H,J,K \\rangle "
},
{
"math_id": 19,
"text": "R_0,R_1"
},
{
"math_id": 20,
"text": "K \\subseteq A \\leftrightarrow \\exists H: \\exists J:[\\langle H,J,K \\rangle \\in R_0 \\wedge H \\subseteq B \\wedge J \\subseteq \\alpha / B ],"
},
{
"math_id": 21,
"text": "K \\subseteq \\alpha / A \\leftrightarrow \\exists H: \\exists J:[\\langle H,J,K \\rangle \\in R_1 \\wedge H \\subseteq B \\wedge J \\subseteq \\alpha / B ]."
},
{
"math_id": 22,
"text": "\\scriptstyle A \\le_\\alpha B"
},
{
"math_id": 23,
"text": "\\scriptstyle\\varnothing"
},
{
"math_id": 24,
"text": "\\Sigma_1(L_\\alpha[B])"
},
{
"math_id": 25,
"text": "\\forall \\beta \\in \\alpha: A \\cap \\beta \\in L_\\alpha"
},
{
"math_id": 26,
"text": "B_0,B_1"
},
{
"math_id": 27,
"text": "A=B_0 \\cup B_1 \\wedge B_0 \\cap B_1 = \\varnothing \\wedge A \\not\\le_\\alpha B_i (i<2)."
},
{
"math_id": 28,
"text": "\\scriptstyle A <_\\alpha C"
},
{
"math_id": 29,
"text": "\\scriptstyle A <_\\alpha B <_\\alpha C"
},
{
"math_id": 30,
"text": "L_{\\alpha^+}"
},
{
"math_id": 31,
"text": "\\Pi_1^1"
},
{
"math_id": 32,
"text": "\\alpha^+"
},
{
"math_id": 33,
"text": "\\Sigma"
},
{
"math_id": 34,
"text": "\\alpha\\to\\alpha"
},
{
"math_id": 35,
"text": "A"
},
{
"math_id": 36,
"text": "L_\\omega=\\textrm{HF}"
},
{
"math_id": 37,
"text": "\\Sigma_1^0"
},
{
"math_id": 38,
"text": "L_\\omega"
},
{
"math_id": 39,
"text": "\\Sigma_n^0"
}
] | https://en.wikipedia.org/wiki?curid=12008116 |
12008694 | Junction temperature | Junction temperature, short for transistor junction temperature, is the highest operating temperature of the actual semiconductor in an electronic device. In operation, it is higher than case temperature and the temperature of the part's exterior. The difference is equal to the amount of heat transferred from the junction to case multiplied by the junction-to-case thermal resistance.
Microscopic effects.
Various physical properties of semiconductor materials are temperature dependent. These include the diffusion rate of dopant elements, carrier mobilities and the thermal production of charge carriers. At the low end, sensor diode noise can be reduced by cryogenic cooling. On the high end, the resulting increase in local power dissipation can lead to thermal runaway that may cause transient or permanent device failure.
Maximum junction temperature calculation.
Maximum junction temperature (sometimes abbreviated TJMax) is specified in a part's datasheet and is used when calculating the necessary case-to-ambient thermal resistance for a given power dissipation. This in turn is used to select an appropriate heat sink if applicable. Other cooling methods include thermoelectric cooling and coolants.
In modern processors from manufacturers such as Intel, AMD, and Qualcomm, the core temperature is measured by a network of sensors. Every time the temperature-sensing network determines that a rise above the specified junction temperature (formula_0) is imminent, measures such as clock gating, clock stretching, clock speed reduction and others (commonly referred to as thermal throttling) are applied to prevent the temperature from rising further. If these mechanisms do not compensate enough for the processor to stay below the junction temperature, the device may shut down to prevent permanent damage.
An estimation of the chip-junction temperature formula_0 can be obtained from the following equation:
formula_1
where:
formula_2= ambient temperature for the package [°C]
formula_3= junction to ambient thermal resistance [°C / W]
formula_4= power dissipation in package [W]
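A minimal numerical sketch of this relation (the values below are illustrative, not taken from any datasheet):
```python
def junction_temperature(t_ambient: float, r_theta_ja: float, p_d: float) -> float:
    """T_J = T_A + R_thetaJA * P_D, with temperatures in °C, resistance in °C/W and power in W."""
    return t_ambient + r_theta_ja * p_d

# e.g. 25 °C ambient, 40 °C/W junction-to-ambient resistance, 2 W dissipated -> 105 °C
print(junction_temperature(25.0, 40.0, 2.0))
```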
Measuring junction temperature (TJ).
Many semiconductors and their surrounding optics are small, making it difficult to measure junction temperature with direct methods such as thermocouples and infrared cameras.
Junction temperature may be measured indirectly using the device's inherent voltage/temperature dependency characteristic. Combined with a Joint Electron Device Engineering Council (JEDEC) technique such as JESD 51-1 and JESD 51-51, this method will produce accurate formula_0 measurements. However, this measurement technique is difficult to implement in multi-LED series circuits due to high common mode voltages and the need for fast, high duty cycle current pulses. This difficulty can be overcome by combining high-speed sampling digital multimeters and fast high-compliance pulsed current sources.
Once junction temperature is known, another important parameter, thermal resistance (Rθ), may be calculated using the following equation:
formula_5
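The same relation as a short sketch (illustrative numbers only):
```python
def thermal_resistance(delta_t: float, v_f: float, i_f: float) -> float:
    """R_theta = delta_T / (V_f * I_f): temperature rise per watt of electrical input power."""
    return delta_t / (v_f * i_f)

print(thermal_resistance(30.0, 3.0, 0.5))  # 20.0 °C/W for a 30 °C rise at 1.5 W
```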
Junction temperature of LEDs and laser diodes.
An LED or laser diode’s junction temperature (Tj) is a primary determinant of long-term reliability; it is also a key factor for photometry. For example, a typical white LED output declines 20% for a 50 °C rise in junction temperature. Because of this temperature sensitivity, LED measurement standards, like IESNA’s LM-85, require that the junction temperature be determined when making photometric measurements.
Junction heating can be minimized in these devices by using the Continuous Pulse Test Method specified in LM-85. An L-I sweep conducted with an Osram Yellow LED shows that Single Pulse Test Method measurements yield a 25% drop in luminous flux output and DC Test Method measurements yield a 70% drop.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_J"
},
{
"math_id": 1,
"text": "T_J = T_A + (R_{\\theta JA}P_D)"
},
{
"math_id": 2,
"text": "T_A"
},
{
"math_id": 3,
"text": "R_{\\theta JA}"
},
{
"math_id": 4,
"text": "P_D"
},
{
"math_id": 5,
"text": "R_\\theta = \\frac{\\Delta T}{V_f I_f}"
}
] | https://en.wikipedia.org/wiki?curid=12008694 |
12010034 | Np-chart | In statistical quality control, the np-chart is a type of control chart used to monitor the number of nonconforming units in a sample. It is an adaptation of the p-chart and used in situations where personnel find it easier to interpret process performance in terms of concrete numbers of units rather than the somewhat more abstract proportion.
The np-chart differs from the p-chart in only the three following aspects:
# The control limits are formula_0, where formula_1 is the estimate of the long-term process mean established during control-chart setup and formula_2 is the sample size.
# The number nonconforming (np), rather than the fraction nonconforming (p), is plotted against the control limits.
# The sample size, formula_2, is constant.
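As an illustrative sketch of how the control limits formula_0 are evaluated (the sample size and process fraction below are made up; clamping the lower limit at zero is a common convention):
```python
import math

def np_chart_limits(n: int, p_bar: float) -> tuple[float, float]:
    """Centre line n*p_bar with three-sigma limits n*p_bar +/- 3*sqrt(n*p_bar*(1 - p_bar))."""
    centre = n * p_bar
    sigma = math.sqrt(n * p_bar * (1 - p_bar))
    return max(0.0, centre - 3 * sigma), centre + 3 * sigma

print(np_chart_limits(50, 0.10))  # approximately (0.0, 11.36)
```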
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n\\bar p \\pm 3\\sqrt{n\\bar p(1-\\bar p)}"
},
{
"math_id": 1,
"text": "\\bar p"
},
{
"math_id": 2,
"text": "n"
}
] | https://en.wikipedia.org/wiki?curid=12010034 |
12010787 | Magnetorotational instability | Fluid instability that causes turbulence in accretion disks
The magnetorotational instability (MRI) is a fluid instability that causes an accretion disk orbiting a massive central object to become turbulent. It arises when the angular velocity of a conducting fluid in a magnetic field decreases as the distance from the rotation center increases. It is also known as the Velikhov–Chandrasekhar instability or Balbus–Hawley instability in the literature, not to be confused with the electrothermal Velikhov instability. The MRI is of particular relevance in astrophysics where it is an important part of the dynamics in accretion disks.
Gases or liquids containing mobile electrical charges are subject to the influence of a magnetic field. In addition to hydrodynamical forces such as pressure and gravity, an element of magnetized fluid also feels the Lorentz force formula_0 where formula_1 is the current density and formula_2 is the magnetic field vector. If the fluid is in a state of differential rotation about a fixed origin, this Lorentz force can be surprisingly disruptive, even if the magnetic field is very weak. In particular, if the angular velocity of rotation formula_3 decreases with radial distance formula_4 the motion is unstable: a fluid element undergoing a small displacement from circular motion experiences a destabilizing force that increases at a rate which is itself proportional to the displacement. This process is known as the "Magnetorotational Instability", or "MRI".
In astrophysical settings, differentially rotating systems are very common and magnetic fields are ubiquitous. In particular, thin disks of gas are often found around forming stars or in binary star systems, where they are known as accretion disks. Accretion disks are also commonly present in the centre of galaxies, and in some cases can be extremely luminous: quasars, for example, are thought to originate from a gaseous disk surrounding a very massive black hole. Our modern understanding of the MRI arose from attempts to understand the behavior of accretion disks in the presence of magnetic fields; it is now understood that the MRI is likely to occur in a very wide variety of different systems.
Discovery.
The MRI was first noticed in a non-astrophysical context by Evgeny Velikhov in 1959 when considering the stability of Couette flow of an ideal hydromagnetic fluid. His result was later generalized by Subrahmanyan Chandrasekhar in 1960. This mechanism was proposed by D. J. Acheson and Raymond Hide (1973) to perhaps play a role in the context of the Earth's geodynamo problem. Although there was some follow-up work in later decades (Fricke, 1969; Acheson and Hide 1972; Acheson and Gibbons 1978), the generality and power of the instability were not fully appreciated until 1991, when Steven A. Balbus and John F. Hawley gave a relatively simple elucidation and physical explanation of this important process.
Physical process.
In a magnetized, perfectly conducting fluid, the magnetic forces behave in some very important respects as though the elements of fluid were connected with elastic bands: trying to displace such an element perpendicular to a magnetic line of force causes an attractive force proportional to the displacement, like a spring under tension. Normally, such a force is restoring, a strongly stabilizing influence that would allow a type of magnetic wave to propagate. If the fluid medium is not stationary but rotating, however, attractive forces can actually be destabilizing. The MRI is a consequence of this surprising behavior.
Consider, for example, two masses, mi ("inner") and mo ("outer") connected by a spring under tension, both masses in orbit around a central body, Mc. In such a system, the angular velocity of circular orbits near the center is greater than the angular velocity of orbits farther from the center, but the angular momentum of the inner orbits is smaller than that of the outer orbits. If mi is allowed to orbit a little bit closer to the center than mo, it will have a slightly higher angular velocity. The connecting spring will pull back on mi, and drag mo forward. This means that mi experiences a retarding torque, loses angular momentum, and must fall inward to an orbit of smaller radius, corresponding to a smaller angular momentum. mo, on the other hand, experiences a positive torque, acquires more angular momentum, and moves outward to a higher orbit. The spring stretches yet more, the torques become yet larger, and the motion is unstable! Because magnetic forces act like a spring under tension connecting fluid elements, the behavior of a magnetized fluid is almost exactly analogous to this simple mechanical system. This is the essence of the MRI .
A more detailed explanation.
To see this unstable behavior more quantitatively, consider the equations of motion for a fluid element mass in circular motion with an angular velocity formula_5 In general formula_3 will be a function of the distance from the rotation axis formula_4 and we assume that the orbital radius is formula_6 The centripetal acceleration required to keep the mass in orbit is formula_7; the minus sign indicates a direction toward the center. If this force is gravity from a point mass at the center, then the centripetal acceleration is simply formula_8 where formula_9 is the gravitational constant and formula_10 is the central mass.
Let us now consider small departures from the circular motion of the orbiting mass element caused by some perturbing force. We transform variables into a rotating frame moving with the orbiting mass element at angular velocity formula_11 with origin located at the unperturbed, orbiting location of the mass element. As usual when working in a rotating frame, we need to add to the equations of motion a Coriolis force formula_12 plus a centrifugal force formula_13 The velocity formula_14 is the velocity as measured in the rotating frame. Furthermore, we restrict our attention to a small neighborhood near formula_15 say formula_16 with formula_17 much smaller than formula_18 Then the sum of the centrifugal and centripetal forces is
to linear order in formula_19 With our formula_17 axis pointing radial outward from the unperturbed location of the fluid element and our formula_20 axis pointing in the direction of increasing azimuthal angle (the direction of the unperturbed orbit), the formula_17 and formula_20 equations of motion for a small departure from a circular orbit formula_21 are:
where formula_22 and formula_23 are the forces per unit mass in the formula_17 and formula_20 directions, and a dot indicates a time derivative (i.e., formula_24 is the formula_17 velocity, formula_25 is the formula_17 acceleration, etc.). Provided that formula_22 and formula_23 are either 0 or linear in x and y, this is a system of coupled second-order linear differential equations that can be solved analytically.
In the absence of external forces, formula_26 and formula_27, the equations of motion have solutions with the time dependence formula_28 where the angular frequency formula_29 satisfies the equation
where formula_30 is known as the epicyclic frequency. In our solar system, for example, deviations from a sun-centered circular orbit that are familiar ellipses when viewed by an external viewer at rest, appear instead as small radial and azimuthal oscillations of the orbiting element when viewed by an observer moving with the undisturbed circular motion.
These oscillations trace out a small retrograde ellipse (i.e. rotating in the opposite sense of the large circular orbit), centered on the undisturbed orbital location of the mass element.
The epicyclic frequency may equivalently be written formula_31 which shows that it is proportional to the radial derivative of the angular momentum per unit mass, or specific angular momentum. The specific angular momentum must increase outward if stable epicyclic oscillations are to exist, otherwise displacements would grow exponentially, corresponding to instability. This is a very general result known as the "Rayleigh criterion" (Chandrasekhar 1961) for stability. For orbits around a point mass, the specific angular momentum is proportional to formula_32 so the Rayleigh criterion is well satisfied.
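As an aside, the epicyclic frequency for a power-law rotation profile is easy to evaluate symbolically; the short sketch below (using the SymPy library; the power-law profile is an illustrative assumption) recovers the statements above:
```python
import sympy as sp

R, q = sp.symbols("R q", positive=True)
Omega = R ** (-q)                              # power-law profile, Omega proportional to R**(-q)
kappa2 = sp.simplify(sp.diff(R**4 * Omega**2, R) / R**3)

print(sp.simplify(kappa2 / Omega**2))          # 4 - 2*q: kappa**2 > 0 (Rayleigh-stable) iff q < 2
print(kappa2.subs(q, sp.Rational(3, 2)))       # Keplerian q = 3/2: kappa**2 = R**(-3) = Omega**2
```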
Consider next the solutions to the equations of motion if the mass element is subjected to an external restoring force, formula_33 formula_34 where formula_35 is an arbitrary constant (the "spring constant"). If we now seek solutions for the modal displacements in formula_17 and formula_20 with time dependence formula_28 we find a much more complex equation for formula_36
Even though the spring exerts an attractive force, it may destabilize. For example, if the spring constant formula_35 is sufficiently weak, the dominant balance will be between the final two terms on the left side of the equation. Then, a decreasing outward angular velocity profile will produce negative values for formula_37 and both positive and negative imaginary values for formula_38 The negative imaginary root results not in oscillations, but in exponential growth of very small displacements. A weak spring therefore causes the type of instability described qualitatively at the end of the previous section. A "strong" spring on the other hand, will produce oscillations, as one intuitively expects.
The spring-like nature of magnetic fields.
The conditions inside a perfectly conducting fluid in motion are often a good approximation to those in astrophysical gases. In the presence of a magnetic field formula_39 a moving conductor responds by trying to eliminate the Lorentz force on the free charges. The magnetic force acts in such a way as to locally rearrange these charges to produce an internal electric field of formula_40 In this way, the direct Lorentz force on the charges formula_41 vanishes. (Alternatively, the electric field in the local rest frame of the moving charges vanishes.) This induced electric field can now itself induce further changes in the magnetic field formula_2 according to Faraday's law,
Another way to write this equation is that if in time formula_42 the fluid makes a displacement formula_43 then the magnetic field changes by
The equation of a magnetic field in a perfect conductor in motion has a special property: the combination of Faraday induction and zero Lorentz force makes the field lines behave as though they were painted, or "frozen," into the fluid. In particular, if formula_2 is initially nearly constant and formula_44 is a divergence-free displacement, then our equation reduces to
because of the vector calculus identity formula_45
Out of these 4 terms, formula_46 is one of Maxwell's equations. By the divergence-free assumption, formula_47. formula_48 because B is assumed to be nearly constant. Equation 8 shows that formula_2 changes only when there is a shearing displacement along the field line.
To understand the MRI, it is sufficient to consider the case in which formula_2 is uniform in vertical formula_49 direction, and formula_44 varies as formula_50 Then
where it is understood that the real part of this equation expresses its physical content. (If formula_51 is proportional to formula_52 for example, then formula_53 is proportional to formula_54)
A magnetic field exerts a force per unit volume on an electrically neutral, conducting fluid equal to formula_55 Ampere's circuital law gives formula_56 because Maxwell's correction is neglected in the MHD approximation. The force per unit volume becomes
where we have used the same vector calculus identity. This equation is fully general, and makes no assumptions about the strength or direction of the magnetic field.
The first term on the right is analogous to a pressure gradient. In our problem it may be neglected because it exerts no force in the plane of the disk, perpendicular to formula_57 The second term acts like a magnetic tension force, analogous to a taut string. For a small disturbance formula_58 it exerts an acceleration given by force divided by mass, or equivalently, force per unit volume divided by mass per unit volume:
Thus, a magnetic tension force gives rise to a return force which is directly proportional to the displacement. This means that the oscillation frequency formula_29 for small displacements in the plane of rotation of a disk with a uniform magnetic field in the vertical direction satisfies an equation ("dispersion relation") exactly analogous to equation 5, with the "spring constant" formula_59
As before, if formula_60 there is an exponentially growing root of this equation for wavenumbers formula_61 satisfying formula_62
This corresponds to the MRI.
Notice that the magnetic field appears in equation 12 only as the product formula_63 Thus, even if formula_64 is very small, for very large wavenumbers formula_61 this magnetic tension can be important. This is why the MRI is so sensitive to even very weak magnetic fields: their effect is amplified by multiplication by formula_65 Moreover, it can be shown that MRI is present regardless of the magnetic field geometry, as long as the field is not too strong.
In astrophysics, one is generally interested in the case for which the disk is supported by rotation against the gravitational attraction of a central mass. A balance between the Newtonian gravitational force and the radial centripetal force immediately gives
where formula_9 is the Newtonian gravitational constant, formula_10 is the central mass, and formula_66 is the radial location in the disk. Since formula_67 this so-called "Keplerian disk" is unstable to the MRI. Without a magnetic field, the flow would be stable.
For a Keplerian disk, the maximum growth rate is formula_68 which occurs at a wavenumber satisfying formula_69
formula_70 is very rapid, corresponding to an amplification factor of more than 100 per rotation period.
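The displayed dispersion relation is not reproduced here, so the numerical sketch below assumes the standard local form for a vertical field (as in Balbus and Hawley, 1991); for a Keplerian profile it recovers the maximum growth rate and wavenumber quoted above:
```python
import numpy as np

# Assumed local dispersion relation: w**4 - w**2*(kappa**2 + 2*q) + q*(q + dlnR_Omega2) = 0,
# with q = (k*v_A)**2 = k**2*B**2/(mu_0*rho) and dlnR_Omega2 = R*dOmega**2/dR.
Omega = 1.0
kappa2 = Omega**2                      # Keplerian epicyclic frequency squared
dlnR_Omega2 = -3.0 * Omega**2          # R dOmega^2/dR for Omega proportional to R**(-3/2)

q = np.linspace(1e-4, 3.0, 20001)
w2 = 0.5 * ((kappa2 + 2*q) - np.sqrt((kappa2 + 2*q)**2 - 4*q*(q + dlnR_Omega2)))
growth = np.sqrt(np.clip(-w2, 0.0, None))      # unstable wherever w**2 < 0

i = growth.argmax()
print(growth[i] / Omega, q[i])                 # ~0.75 and ~15/16: gamma_max = 3*Omega/4
```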
The nonlinear development of the MRI into fully developed turbulence may be followed via large scale numerical computation.
Applications and laboratory experiments.
Interest in the MRI is based on the fact that it appears to give an explanation for the origin of turbulent flow in astrophysical accretion disks (Balbus and Hawley, 1991).
A promising model for the compact, intense X-ray sources discovered in the 1960s was that of a neutron star or black hole drawing in ("accreting") gas from its surroundings (Prendergast and Burbidge, 1968). Such gas always accretes with a finite amount of angular momentum relative to the central object, and so it must first form a rotating disk — it cannot accrete directly onto the object without first losing its angular momentum. But how an element of gaseous fluid managed to lose its angular momentum and spiral onto the central object was not at all obvious.
One explanation involved shear-driven turbulence (Shakura and Sunyaev, 1973). There would be significant shear in an accretion disk (gas closer to the centre rotates more rapidly than outer disk regions), and shear layers often break down into turbulent flow. The presence of shear-generated turbulence, in turn, produces the powerful torques needed to transport angular momentum from one (inner) fluid element to another (farther out).
The breakdown of shear layers into turbulence is routinely observed in flows with velocity gradients, but without systematic rotation. This is an important point, because rotation produces strongly stabilizing Coriolis forces, and this is precisely what occurs in accretion disks. As can be seen in equation 5, the K = 0 limit produces Coriolis-stabilized oscillations, not exponential growth. These oscillations are present under much more general conditions as well: a recent laboratory experiment (Ji et al., 2006) has shown stability of the flow profile expected in accretion disks under conditions in which otherwise troublesome dissipation effects are (by a standard measure known as the Reynolds number) well below one part in a million. All of this changes, however, when even a very weak magnetic field is present. The MRI produces torques that are not stabilized by Coriolis forces. Large scale numerical simulations of the MRI indicate that the rotational disk flow breaks down into turbulence (Hawley et al., 1995), with strongly enhanced angular momentum transport properties. This is just what is required for the accretion disk model to work. The formation of stars (Stone et al., 2000), the production of X-rays in neutron star and black hole systems (Blaes, 2004), and the creation of active galactic nuclei (Krolik, 1999) and gamma ray bursts (Wheeler, 2004) are all thought to involve the development of the MRI at some level.
Thus far, we have focused rather exclusively on the dynamical breakdown of laminar flow into turbulence triggered by a weak magnetic field, but it is also the case that the resulting highly agitated flow can act back on this same magnetic field. Embedded magnetic field lines are stretched by the turbulent flow, and it is possible that systematic field amplification could result. The process by which fluid motions are converted to magnetic field energy is known as a dynamo (Moffatt, 1978); the two best studied examples are the Earth's liquid outer core and the layers close to the surface of the Sun. Dynamo activity in these regions is thought to be responsible for maintaining the terrestrial and solar magnetic fields. In both of these cases thermal convection is likely to be the primary energy source, though in the case of the Sun differential rotation may also play an important role. Whether the MRI is an efficient dynamo process in accretion disks is currently an area of active research (Fromang and Papaloizou, 2007).
There may also be applications of the MRI outside of the classical accretion disk venue. Internal rotation in stars (Ogilvie, 2007), and even planetary dynamos (Petitdemange et al., 2008) may, under some circumstances, be vulnerable to the MRI in combination with convective instabilities. These studies are also ongoing.
Finally, the MRI can, in principle, be studied in the laboratory (Ji et al., 2001), though these experiments are very difficult to implement. A typical set-up involves either concentric spherical shells or coaxial cylindrical shells. Between (and confined by) the shells, there is a conducting liquid metal such as sodium or gallium. The inner and outer shells are set in rotation at different rates, and viscous torques compel the trapped liquid metal to differentially rotate. The experiment then investigates whether the differential rotation profile is stable or not in the presence of an applied magnetic field.
A claimed detection of the MRI in a spherical shell experiment (Sisan et al., 2004), in which the underlying state was itself turbulent, awaits confirmation at the time of this writing (2009). A magnetic instability that bears some similarity to the MRI can be excited if both vertical and azimuthal magnetic fields are present in the undisturbed state (Hollerbach and Rüdiger, 2005). This is sometimes referred to as the "helical-MRI," (Liu et al., 2006) though its precise relation to the MRI described above has yet to be fully elucidated. Because it is less sensitive to stabilizing ohmic resistance than is the classical MRI, this helical magnetic instability is easier to excite in the laboratory, and there are indications that it may have been found (Stefani et al., 2006). The detection of the classical MRI in a hydrodynamically quiescent background state has yet to be achieved in the laboratory, however.
The spring-mass analogue of the standard MRI has been demonstrated in rotating Taylor–Couette / Keplerian-like flow (Hung et al. 2019). | [
{
"math_id": 0,
"text": "\\boldsymbol J\\times\\boldsymbol B\\ ,"
},
{
"math_id": 1,
"text": "\\boldsymbol J"
},
{
"math_id": 2,
"text": "\\boldsymbol B"
},
{
"math_id": 3,
"text": "\\Omega"
},
{
"math_id": 4,
"text": "R\\ ,"
},
{
"math_id": 5,
"text": "\\Omega\\ ."
},
{
"math_id": 6,
"text": "r=R_0\\ ."
},
{
"math_id": 7,
"text": "-R\\Omega^2(R)"
},
{
"math_id": 8,
"text": "-GM/R^2,"
},
{
"math_id": 9,
"text": "G"
},
{
"math_id": 10,
"text": "M"
},
{
"math_id": 11,
"text": "\\Omega(R_0)=\\Omega_0\\ ,"
},
{
"math_id": 12,
"text": "-2\\boldsymbol\\Omega_0\\times\\boldsymbol v"
},
{
"math_id": 13,
"text": "R\\Omega_0^2\\ ."
},
{
"math_id": 14,
"text": "v"
},
{
"math_id": 15,
"text": "R_0\\ ,"
},
{
"math_id": 16,
"text": "R_0+x\\ ,"
},
{
"math_id": 17,
"text": "x"
},
{
"math_id": 18,
"text": "R_0\\ ."
},
{
"math_id": 19,
"text": "x\\ ."
},
{
"math_id": 20,
"text": "y"
},
{
"math_id": 21,
"text": "R=R_0"
},
{
"math_id": 22,
"text": "f_x"
},
{
"math_id": 23,
"text": "f_y"
},
{
"math_id": 24,
"text": "\\dot x"
},
{
"math_id": 25,
"text": "\\ddot x"
},
{
"math_id": 26,
"text": "f_x = 0"
},
{
"math_id": 27,
"text": "f_y = 0"
},
{
"math_id": 28,
"text": "e^{i\\omega t}\\ ,"
},
{
"math_id": 29,
"text": "\\omega"
},
{
"math_id": 30,
"text": "\\kappa^2"
},
{
"math_id": 31,
"text": "(1/R^3)(dR^4\\Omega^2/dR)\\ ,"
},
{
"math_id": 32,
"text": "R^{1/2}\\ ,"
},
{
"math_id": 33,
"text": "f_x=-Kx\\ ,"
},
{
"math_id": 34,
"text": "f_y=-Ky"
},
{
"math_id": 35,
"text": "K"
},
{
"math_id": 36,
"text": "\\omega\\ :"
},
{
"math_id": 37,
"text": "\\omega^2\\ ,"
},
{
"math_id": 38,
"text": "\\omega\\ ."
},
{
"math_id": 39,
"text": "\\boldsymbol B\\ ,"
},
{
"math_id": 40,
"text": "\\boldsymbol {E=-{v\\times B}}\\ ."
},
{
"math_id": 41,
"text": "\\boldsymbol {E+v\\times B}"
},
{
"math_id": 42,
"text": "\\delta t"
},
{
"math_id": 43,
"text": "\\boldsymbol \\xi = \\boldsymbol v\\delta t\\ ,"
},
{
"math_id": 44,
"text": "\\xi"
},
{
"math_id": 45,
"text": " \\nabla \\times (\\mathbf{\\xi} \\times \\mathbf{B}) = \\mathbf{\\xi} (\\nabla \\cdot \\mathbf{B}) - \\mathbf{B} (\\nabla \\cdot \\mathbf{\\xi}) + (\\mathbf{B} \\cdot \\nabla) \\mathbf{\\xi} - (\\mathbf{\\xi} \\cdot \\nabla) \\mathbf{B} \\ . "
},
{
"math_id": 46,
"text": "\\nabla \\cdot \\mathbf{B} = 0"
},
{
"math_id": 47,
"text": "\\nabla \\cdot \\mathbf{\\xi} = 0"
},
{
"math_id": 48,
"text": "(\\mathbf{\\xi} \\cdot \\nabla) \\mathbf{B} = 0 "
},
{
"math_id": 49,
"text": "z"
},
{
"math_id": 50,
"text": "e^{ikz}\\ ."
},
{
"math_id": 51,
"text": "\\boldsymbol \\xi"
},
{
"math_id": 52,
"text": "\\cos(kz)\\ ,"
},
{
"math_id": 53,
"text": "\\delta\\boldsymbol B"
},
{
"math_id": 54,
"text": "-\\sin(kz)\\ ."
},
{
"math_id": 55,
"text": "\\boldsymbol J\\times\\boldsymbol B\\ ."
},
{
"math_id": 56,
"text": "\\mu_0\\boldsymbol {J=\\nabla\\times B}\\ ,"
},
{
"math_id": 57,
"text": "z\\ ."
},
{
"math_id": 58,
"text": "\\delta\\boldsymbol B\\ ,"
},
{
"math_id": 59,
"text": "K={k^2B^2/\\mu_0\\rho}\\ :"
},
{
"math_id": 60,
"text": "d\\Omega^2/dR<0\\ ,"
},
{
"math_id": 61,
"text": "k"
},
{
"math_id": 62,
"text": "(k^2B^2/\\mu_0\\rho)< - Rd\\Omega^2/dR\\ ."
},
{
"math_id": 63,
"text": "kB\\ ."
},
{
"math_id": 64,
"text": "B"
},
{
"math_id": 65,
"text": "k\\ ."
},
{
"math_id": 66,
"text": "R"
},
{
"math_id": 67,
"text": "Rd\\Omega^2/dR=-3\\Omega^2<0\\ ,"
},
{
"math_id": 68,
"text": "\\gamma=3\\Omega/4\\ ,"
},
{
"math_id": 69,
"text": "(k^2B^2/\\mu_0\\rho)=15\\Omega^2/16\\ ."
},
{
"math_id": 70,
"text": "\\gamma"
}
] | https://en.wikipedia.org/wiki?curid=12010787 |
12012158 | List of optimization software | Given a transformation between input and output values, described by a mathematical function, optimization deals with generating and selecting the best solution from some set of available alternatives, by systematically choosing input values from within an allowed set, computing the output of the function and recording the best output values found during the process. Many real-world problems can be modeled in this way. For example, the inputs could be design parameters for a motor, the output could be the power consumption. For another optimization, the inputs could be business choices and the output could be the profit obtained.
An optimization problem (in this case a minimisation problem) can be represented in the following way:
"Given:" a function "f" : "A" formula_0 R from some set "A" to the real numbers
"Search for:" an element "x"0 in "A" such that "f"("x"0) ≤ "f"("x") for all "x" in "A".
In continuous optimization, "A" is some subset of the Euclidean space R"n", often specified by a set of "constraints", equalities or inequalities that the members of "A" have to satisfy. In combinatorial optimization, "A" is some subset of a discrete space, like binary strings, permutations, or sets of integers.
The use of optimisation software requires that the function "f" is defined in a suitable programming language and connected at compilation or run time to the optimisation software. The optimisation software will deliver input values in "A", and the software module realizing "f" will deliver the computed value "f"("x") and, in some cases, additional information about the function, such as derivatives.
In this manner, a clear separation of concerns is obtained: different optimisation software modules can be easily tested on the same function "f", or a given optimisation software can be used for different functions "f".
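A minimal sketch of this workflow in Python, using the open-source SciPy optimiser as an example (the quadratic "f" below is arbitrary):
```python
from scipy.optimize import minimize

def f(x):
    """The user-supplied module realizing f : A -> R, here with A = R^2."""
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

result = minimize(f, x0=[0.0, 0.0])   # the optimiser repeatedly supplies inputs and records f(x)
print(result.x, result.fun)           # approximately [1, -2] and 0
```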
The following tables provide a list of notable optimisation software organised according to license and business model type.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\to"
}
] | https://en.wikipedia.org/wiki?curid=12012158 |
12013 | Girth (graph theory) | Length of a shortest cycle contained in the graph
In graph theory, the girth of an undirected graph is the length of a shortest cycle contained in the graph. If the graph does not contain any cycles (that is, it is a forest), its girth is defined to be infinity.
For example, a 4-cycle (square) has girth 4. A grid has girth 4 as well, and a triangular mesh has girth 3. A graph with girth four or more is triangle-free.
Cages.
A cubic graph (all vertices have degree three) of girth g that is as small as possible is known as a g-cage (or as a (3,"g")-cage). The Petersen graph is the unique 5-cage (it is the smallest cubic graph of girth 5), the Heawood graph is the unique 6-cage, the McGee graph is the unique 7-cage and the Tutte eight cage is the unique 8-cage. There may exist multiple cages for a given girth. For instance there are three nonisomorphic 10-cages, each with 70 vertices: the Balaban 10-cage, the Harries graph and the Harries–Wong graph.
Girth and graph coloring.
For any positive integers g and χ, there exists a graph with girth at least g and chromatic number at least χ; for instance, the Grötzsch graph is triangle-free and has chromatic number 4, and repeating the Mycielskian construction used to form the Grötzsch graph produces triangle-free graphs of arbitrarily large chromatic number. Paul Erdős was the first to prove the general result, using the probabilistic method. More precisely, he showed that a random graph on n vertices, formed by choosing independently whether to include each edge with probability "n"^((1–"g")/"g"), has, with probability tending to 1 as n goes to infinity, at most "n"/2 cycles of length g or less, but has no independent set of size "n"/2"k". Therefore, removing one vertex from each short cycle leaves a smaller graph with girth greater than g, in which each color class of a coloring must be small and which therefore requires at least k colors in any coloring.
Explicit, though large, graphs with high girth and chromatic number can be constructed as certain Cayley graphs of linear groups over finite fields. These remarkable "Ramanujan graphs" also have large expansion coefficient.
Related concepts.
The odd girth and even girth of a graph are the lengths of a shortest odd cycle and shortest even cycle respectively.
The <templatestyles src="Template:Visible anchor/styles.css" />circumference of a graph is the length of the "longest" (simple) cycle, rather than the shortest.
Thought of as the least length of a non-trivial cycle, the girth admits natural generalisations as the 1-systole or higher systoles in systolic geometry.
Girth is the dual concept to edge connectivity, in the sense that the girth of a planar graph is the edge connectivity of its dual graph, and vice versa. These concepts are unified in matroid theory by the girth of a matroid, the size of the smallest dependent set in the matroid. For a graphic matroid, the matroid girth equals the girth of the underlying graph, while for a co-graphic matroid it equals the edge connectivity.
Computation.
The girth of an undirected graph can be computed by running a breadth-first search from each node, with complexity formula_0 where formula_1 is the number of vertices of the graph and formula_2 is the number of edges. A practical optimization is to limit the depth of the BFS to a depth that depends on the length of the smallest cycle discovered so far. Better algorithms are known in the case where the girth is even and when the graph is planar. In terms of lower bounds, computing the girth of a graph is at least as hard as solving the triangle finding problem on the graph.
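A plain-Python sketch of the BFS approach just described (an illustrative sketch, without the depth-limiting optimization mentioned above):
```python
from collections import deque

def girth(adj):
    """Shortest cycle length of a simple undirected graph {vertex: neighbours};
    returns float('inf') for a forest. One BFS per vertex, O(nm) overall."""
    best = float("inf")
    for src in adj:
        dist, parent, queue = {src: 0}, {src: None}, deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    queue.append(v)
                elif parent[u] != v:
                    # a non-tree edge closes a cycle of length at most dist[u] + dist[v] + 1
                    best = min(best, dist[u] + dist[v] + 1)
    return best

print(girth({0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}))  # the 5-cycle has girth 5
```
 | [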
{
"math_id": 0,
"text": "O(nm)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "m"
}
] | https://en.wikipedia.org/wiki?curid=12013 |
1201310 | Accumulated cyclone energy | Measure of tropical cyclone activity
Accumulated cyclone energy (ACE) is a metric used to compare overall activity of tropical cyclones, utilizing the available records of windspeeds at six-hour intervals to synthesize storm duration and strength into a single index value. The ACE index may refer to a single storm or to groups of storms such as those within a particular month, a full season or combined seasons. It is calculated by summing the square of tropical cyclones' maximum sustained winds, as recorded every six hours, but only for windspeeds of at least tropical storm strength (≥ 34 kn; 63 km/h; 39 mph); the resulting figure is divided by 10,000 to place it on a more manageable scale.
The calculation originated as the Hurricane Destruction Potential (HDP) index, which sums the squares of tropical cyclones' maximum sustained winds while at hurricane strength, at least 64 knots (≥ 119 km/h; 74 mph) at six-hour recorded intervals across an entire season. The HDP index was later modified to further include tropical storms, that is, all wind speeds of at least 34 knots (≥ 63 km/h; 39 mph), to become the "accumulated cyclone energy" index.
The highest ACE calculated for a single tropical cyclone on record worldwide is 87.01, set by Cyclone Freddy in 2023.
History.
The ACE index is an offshoot of Hurricane Destruction Potential (HDP), an index created in 1988 by William Gray and his associates at Colorado State University who argued the destructiveness of a hurricane's wind and storm surge is better related to the square of the maximum wind speed (formula_0)
than simply to the maximum wind speed (formula_1). The HDP index is calculated by squaring the estimated maximum sustained wind speeds for tropical cyclones while at hurricane strength, that is, wind speeds of at least 64 knots (≥ 119 km/h; 74 mph). The squared windspeeds from six-hourly recorded intervals are then summed across an entire season. This scale was subsequently modified in 1999 by the United States National Oceanic and Atmospheric Administration (NOAA) to include not only hurricanes but also tropical storms, that is, all cyclones while windspeeds are at least 34 knots (≥ 63 km/h; 39 mph). Since the calculation was more broadly adjusted by NOAA, the index has been used in a number of different ways such as to compare individual storms, and by various agencies and researchers including the Australian Bureau of Meteorology and the India Meteorological Department. The purposes of the ACE index include to categorize how active tropical cyclone seasons were as well as to identify possible long-term trends in a certain area such as the Lesser Antilles.
Calculation.
"Accumulated cyclone energy" is calculated by summing the squares of the estimated maximum sustained velocity of tropical cyclones when wind speeds are at least tropical storm strength (≥ 34 kn; 63 km/h; 39 mph) at recorded six-hour intervals. The sums are usually divided by 10,000 to make them more manageable. One unit of ACE equals 10−4 kn2, and for use as an index the unit is assumed. Thus:
formula_2 (for formula_1 ≥ 34 kn),
where formula_1 is estimated sustained wind speed in knots at six-hour intervals.
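A minimal sketch of the calculation (the wind fixes below describe a hypothetical short-lived storm):
```python
def accumulated_cyclone_energy(six_hourly_winds_kt):
    """Sum of squared six-hourly maximum sustained winds of at least 34 kt, divided by 10,000."""
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 34) / 10_000

print(accumulated_cyclone_energy([30, 35, 45, 60, 45, 35, 30]))  # 1.01
```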
Kinetic energy is proportional to the square of velocity. However, unlike the measure defined above, kinetic energy is also proportional to the mass formula_3 (corresponding to the size of the storm). Furthermore, the energy expended by a storm is an integral over time of force, formula_4, times velocity (that is, of power), not a sum of squared wind speeds sampled at regular intervals. Thus, the term applied to the index, "accumulated cyclone energy", is a misnomer, since the index is neither a measure of kinetic energy nor of "accumulated" energy.
Atlantic Ocean.
Within the Atlantic Ocean, the United States National Oceanic and Atmospheric Administration and others use the ACE index of a season to classify the season into one of four categories. These four categories are extremely active, above-normal, near-normal, and below-normal, and are worked out using an approximate quartile partitioning of seasons based on the ACE index over the 70 years between 1951 and 2020. The median value of the ACE index from 1951 to 2020 is 96.7 x 104 kt2.
Individual storms in the Atlantic.
The highest ever ACE estimated for a single storm in the Atlantic is 73.6, for the San Ciriaco hurricane in 1899. A Category 4 hurricane which lasted for four weeks, this single storm had an ACE higher than many whole Atlantic storm seasons. Other Atlantic storms with high ACEs include Hurricane Ivan in 2004, with an ACE of 70.4, Hurricane Irma in 2017, with an ACE of 64.9, the Great Charleston Hurricane in 1893, with an ACE of 63.5, Hurricane Isabel in 2003, with an ACE of 63.3, and the 1932 Cuba hurricane, with an ACE of 59.8.
Since 1950, the highest ACE of a tropical storm was Tropical Storm Philippe in 2023, which attained an ACE of 9.4. The highest ACE of a Category 1 hurricane was Hurricane Nadine in 2012, which attained an ACE of 26.3. The record for lowest ACE of a tropical storm is jointly held by Tropical Storm Chris in 2000 and Tropical Storm Philippe in 2017, both of which were tropical storms for only six hours and had an ACE of just 0.1225. The lowest ACE of any hurricane was 2005's Hurricane Cindy, which was only a hurricane for six hours, and 2007's Hurricane Lorenzo, which was a hurricane for twelve hours; Cindy had an ACE of just 1.5175 and Lorenzo had a lower ACE of only 1.475. The lowest ACE of a major hurricane (Category 3 or higher), was Hurricane Gerda in 1969, with an ACE of 5.3.
The following table shows those storms in the Atlantic basin from 1851–2021 that have attained over 50 points of ACE.
Historical ACE in recorded Atlantic hurricane history.
There is an undercount bias of tropical storms, hurricanes, and major hurricanes before the satellite era (prior to the mid–1960s), due to the difficulty in identifying storms.
Classification criteria
<templatestyles src="Legend/styles.css" /> Extremely active
<templatestyles src="Legend/styles.css" /> Above-normal
<templatestyles src="Legend/styles.css" /> Near-normal
<templatestyles src="Legend/styles.css" /> Below-normal
Eastern Pacific.
Within the Eastern Pacific Ocean, the United States National Oceanic and Atmospheric Administration and others use the ACE index of a season to classify the season into one of three categories. These three categories are above-, near-, and below-normal and are worked out using an approximate tercile partitioning of seasons based on the ACE index and the number of tropical storms, hurricanes, and major hurricanes over the 30 years between 1991 and 2020.
For a season to be defined as above-normal, the ACE index criterion and two or more of the other criteria given in the table below must be satisfied.
The mean value of the ACE index from 1991 to 2020 is 108.7 x 104 kt2, while the median value is 97.2 x 104 kt2.
Individual storms in the Pacific.
The highest ever ACE estimated for a single storm in the Eastern or Central Pacific, while located east of the International Date Line is 62.8, for Hurricane Fico of 1978. Other Eastern Pacific storms with high ACEs include Hurricane John in 1994, with an ACE of 54.0, Hurricane Kevin in 1991, with an ACE of 52.1, and Hurricane Hector of 2018, with an ACE of 50.5.<ref name="HURDAT2 - EPAC/CPAC"></ref>
The following table shows those storms in the Eastern and Central Pacific basins from 1971 through 2023 that have attained over 30 points of ACE.
† – Indicates that the storm formed in the Eastern/Central Pacific, but crossed 180°W at least once; therefore, only the ACE and number of days spent in the Eastern/Central Pacific are included.
Historical ACE in recorded Pacific hurricane history.
Data on ACE is considered reliable starting with the 1971 season.
Classification criteria
<templatestyles src="Legend/styles.css" /> Above-normal
<templatestyles src="Legend/styles.css" /> Near-normal
<templatestyles src="Legend/styles.css" /> Below-normal
Western Pacific.
Historical ACE in recorded Western Pacific typhoon history.
There is an undercount bias of tropical storms, typhoons, and super typhoon before the satellite era (prior to the mid–1950s), due to the difficulty in identifying storms.
Classification criteria
<templatestyles src="Legend/styles.css" /> Extremely active
<templatestyles src="Legend/styles.css" /> Above-normal
<templatestyles src="Legend/styles.css" /> Near-normal
<templatestyles src="Legend/styles.css" /> Below-normal
North Indian.
There are various agencies over the North Indian Ocean that monitor and forecast tropical cyclones, including the United States Joint Typhoon Warning Center, as well as the Bangladesh, Pakistan and India Meteorological Department. As a result, the track and intensity of tropical cyclones differ from each other, and as a result, the accumulated cyclone energy also varies over the region. However, the India Meteorological Department has been designated as the official Regional Specialised Meteorological Centre by the WMO for the region and has worked out the ACE for all cyclonic systems above based on their best track analysis which goes back to 1982.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "v_\\max^2"
},
{
"math_id": 1,
"text": "v_\\max"
},
{
"math_id": 2,
"text": "\\text{ACE} = 10^{-4} \\sum v_\\max^2"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "F = m \\times a"
}
] | https://en.wikipedia.org/wiki?curid=1201310 |
1201321 | Superposition principle | Fundamental physics principle stating that physical solutions of linear systems are linear
The superposition principle, also known as superposition property, states that, for all linear systems, the net response caused by two or more stimuli is the sum of the responses that would have been caused by each stimulus individually. So that if input "A" produces response "X", and input "B" produces response "Y", then input ("A" + "B") produces response ("X" + "Y").
A function formula_0 that satisfies the superposition principle is called a linear function. Superposition can be defined by two simpler properties: additivity
formula_1
and homogeneity
formula_2
for scalar a.
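A minimal numerical illustration of these two properties for a linear map (a sketch with an arbitrarily chosen matrix, using NumPy):
```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])        # an arbitrary linear map F(v) = A @ v
F = lambda v: A @ v

x, y, a = np.array([1.0, 2.0]), np.array([-3.0, 0.5]), 2.5
print(np.allclose(F(x + y), F(x) + F(y)))     # additivity
print(np.allclose(F(a * x), a * F(x)))        # homogeneity
```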
This principle has many applications in physics and engineering because many physical systems can be modeled as linear systems. For example, a beam can be modeled as a linear system where the input stimulus is the load on the beam and the output response is the deflection of the beam. The importance of linear systems is that they are easier to analyze mathematically; there is a large body of mathematical techniques, frequency-domain linear transform methods such as Fourier and Laplace transforms, and linear operator theory, that are applicable. Because physical systems are generally only approximately linear, the superposition principle is only an approximation of the true physical behavior.
The superposition principle applies to "any" linear system, including algebraic equations, linear differential equations, and systems of equations of those forms. The stimuli and responses could be numbers, functions, vectors, vector fields, time-varying signals, or any other object that satisfies certain axioms. Note that when vectors or vector fields are involved, a superposition is interpreted as a vector sum. If the superposition holds, then it automatically also holds for all linear operations applied on these functions (due to definition), such as gradients, differentials or integrals (if they exist).
Relation to Fourier analysis and similar methods.
By writing a very general stimulus (in a linear system) as the superposition of stimuli of a specific and simple form, often the response becomes easier to compute.
For example, in Fourier analysis, the stimulus is written as the superposition of infinitely many sinusoids. Due to the superposition principle, each of these sinusoids can be analyzed separately, and its individual response can be computed. (The response is itself a sinusoid, with the same frequency as the stimulus, but generally a different amplitude and phase.) According to the superposition principle, the response to the original stimulus is the sum (or integral) of all the individual sinusoidal responses.
As another common example, in Green's function analysis, the stimulus is written as the superposition of infinitely many impulse functions, and the response is then a superposition of impulse responses.
Fourier analysis is particularly common for waves. For example, in electromagnetic theory, ordinary light is described as a superposition of plane waves (waves of fixed frequency, polarization, and direction). As long as the superposition principle holds (which is often but not always; see nonlinear optics), the behavior of any light wave can be understood as a superposition of the behavior of these simpler plane waves.
Wave superposition.
Waves are usually described by variations in some parameters through space and time—for example, height in a water wave, pressure in a sound wave, or the electromagnetic field in a light wave. The value of this parameter is called the amplitude of the wave and the wave itself is a function specifying the amplitude at each point.
In any system with waves, the waveform at a given time is a function of the sources (i.e., external forces, if any, that create or affect the wave) and initial conditions of the system. In many cases (for example, in the classic wave equation), the equation describing the wave is linear. When this is true, the superposition principle can be applied. That means that the net amplitude caused by two or more waves traversing the same space is the sum of the amplitudes that would have been produced by the individual waves separately. For example, two waves traveling towards each other will pass right through each other without any distortion on the other side. (See image at the top.)
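As an illustrative sketch (pulse shapes and speed chosen arbitrarily), two counter-propagating travelling-wave solutions of a linear wave equation can simply be added pointwise:
```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
c, t = 1.0, 4.0                        # wave speed and a chosen time

pulse = lambda s: np.exp(-s**2)        # Gaussian pulse shape
right = pulse(x - c * t)               # solution travelling to the right
left = 0.5 * pulse(x + c * t)          # a smaller pulse travelling to the left

net = right + left                     # superposition: the amplitudes simply add
print(np.allclose(net, left + right))  # True: the order of addition is irrelevant
print(round(float(net.max()), 3))      # ~1.0: after crossing, the larger pulse is undistorted
```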
Wave diffraction vs. wave interference.
With regard to wave superposition, Richard Feynman wrote:
<templatestyles src="Template:Blockquote/styles.css" />No-one has ever been able to define the difference between interference and diffraction satisfactorily. It is just a question of usage, and there is no specific, important physical difference between them. The best we can do, roughly speaking, is to say that when there are only a few sources, say two, interfering, then the result is usually called interference, but if there is a large number of them, it seems that the word diffraction is more often used.
Other authors elaborate:
<templatestyles src="Template:Blockquote/styles.css" />The difference is one of convenience and convention. If the waves to be superposed originate from a few coherent sources, say, two, the effect is called interference. On the other hand, if the waves to be superposed originate by subdividing a wavefront into infinitesimal coherent wavelets (sources), the effect is called diffraction. That is the difference between the two phenomena is [a matter] of degree only, and basically, they are two limiting cases of superposition effects.
Yet another source concurs:
<templatestyles src="Template:Blockquote/styles.css" />In as much as the interference fringes observed by Young were the diffraction pattern of the double slit, this chapter [Fraunhofer diffraction] is, therefore, a continuation of Chapter 8 [Interference]. On the other hand, few opticians would regard the Michelson interferometer as an example of diffraction. Some of the important categories of diffraction relate to the interference that accompanies division of the wavefront, so Feynman's observation to some extent reflects the difficulty that we may have in distinguishing division of amplitude and division of wavefront.
Wave interference.
The phenomenon of interference between waves is based on this idea. When two or more waves traverse the same space, the net amplitude at each point is the sum of the amplitudes of the individual waves. In some cases, such as in noise-canceling headphones, the summed variation has a smaller amplitude than the component variations; this is called "destructive interference". In other cases, such as in a line array, the summed variation will have a bigger amplitude than any of the components individually; this is called "constructive interference".
Departures from linearity.
In most realistic physical situations, the equation governing the wave is only approximately linear. In these situations, the superposition principle only approximately holds. As a rule, the accuracy of the approximation tends to improve as the amplitude of the wave gets smaller. For examples of phenomena that arise when the superposition principle does not exactly hold, see the articles nonlinear optics and nonlinear acoustics.
Quantum superposition.
In quantum mechanics, a principal task is to compute how a certain type of wave propagates and behaves. The wave is described by a wave function, and the equation governing its behavior is called the Schrödinger equation. A primary approach to computing the behavior of a wave function is to write it as a superposition (called "quantum superposition") of (possibly infinitely many) other wave functions of a certain type—stationary states whose behavior is particularly simple. Since the Schrödinger equation is linear, the behavior of the original wave function can be computed through the superposition principle this way.
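As a minimal numerical illustration of this linearity (not taken from the article's sources), the sketch below builds an arbitrary Hermitian matrix as a stand-in Hamiltonian and checks that evolving a superposition of two basis states gives the same result as superposing the evolved states.

```python
# A minimal sketch (illustrative only): because the Schrödinger equation is linear,
# time evolution of a superposition equals the superposition of the evolved states.
# H is an arbitrary random Hermitian "Hamiltonian"; hbar is set to 1.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                    # Hermitian matrix
U = expm(-1j * H * 0.7)                     # evolution operator for t = 0.7

phi1, phi2 = np.eye(4)[0], np.eye(4)[1]     # two basis states
c1, c2 = 0.6, 0.8j                          # arbitrary complex coefficients

evolved_superposition = U @ (c1 * phi1 + c2 * phi2)
superposed_evolutions = c1 * (U @ phi1) + c2 * (U @ phi2)
print(np.allclose(evolved_superposition, superposed_evolutions))   # True
```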
The projective nature of quantum-mechanical-state space causes some confusion, because a quantum mechanical state is a "ray" in projective Hilbert space, not a "vector".
According to Dirac: "if the ket vector corresponding to a state is multiplied by any complex number, not zero, the resulting ket vector will correspond to the same state" [italics in original].
However, the sum of two rays to compose a superposed ray is undefined. As a result, Dirac himself uses ket vector representations of states to decompose or split a state, for example, a ket vector formula_3 into a superposition of component ket vectors formula_4 as:
formula_5
where the formula_6.
The equivalence class of the formula_3 allows a well-defined meaning to be given to the relative phases of the formula_7, but an absolute (same amount for all the formula_7) phase change on the formula_7 does not affect the equivalence class of the formula_3.
There are exact correspondences between the superposition presented in the main body of this article and quantum superposition. For example, the Bloch sphere used to represent the pure states of a two-level quantum mechanical system (qubit) is also known as the Poincaré sphere, which represents different types of classical pure polarization states.
Nevertheless, on the topic of quantum superposition, Kramers writes: "The principle of [quantum] superposition ... has no analogy in classical physics".
According to Dirac: "the superposition that occurs in quantum mechanics is of an essentially different nature from any occurring in the classical theory" [italics in original].
Though Dirac's reasoning includes the atomicity of observation, which is valid, as for phase, what is actually meant is phase translation symmetry derived from time translation symmetry, which is also applicable to classical states, as shown above with classical polarization states.
Boundary-value problems.
A common type of boundary value problem is (to put it abstractly) finding a function "y" that satisfies some equation
formula_8
with some boundary specification
formula_9
For example, in Laplace's equation with Dirichlet boundary conditions, "F" would be the Laplacian operator in a region "R", "G" would be an operator that restricts "y" to the boundary of "R", and "z" would be the function that "y" is required to equal on the boundary of "R".
In the case that "F" and "G" are both linear operators, then the superposition principle says that a superposition of solutions to the first equation is another solution to the first equation:
formula_10
while the boundary values superpose:
formula_11
Using these facts, if a list can be compiled of solutions to the first equation, then these solutions can be carefully put into a superposition such that it will satisfy the second equation. This is one common method of approaching boundary-value problems.
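A minimal numerical sketch of this idea, using a discretized one-dimensional Laplace equation with Dirichlet boundary values (all values illustrative):

```python
# A minimal sketch: a discretized 1-D Laplace problem y'' = 0 on (0, 1) with
# Dirichlet boundary values. Both the differential operator F and the boundary
# operator G are linear, so the solutions superpose along with their boundary data.
import numpy as np

n = 99                                        # interior grid points
main = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # scaled second-difference operator

def solve(left, right):
    """Solve the discrete Laplace equation with boundary values y(0)=left, y(1)=right."""
    rhs = np.zeros(n)
    rhs[0], rhs[-1] = left, right             # boundary data enters the right-hand side
    return np.linalg.solve(main, rhs)

y1 = solve(1.0, 0.0)
y2 = solve(0.0, 2.0)
y12 = solve(1.0, 2.0)                         # superposed boundary values
print(np.allclose(y12, y1 + y2))              # True: the solutions superpose too
```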
Additive state decomposition.
Consider a simple linear system:
formula_12
By superposition principle, the system can be decomposed into
formula_13
with
formula_14
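A minimal numerical check of this linear decomposition, with an arbitrary illustrative choice of A, B, inputs and initial condition:

```python
# A minimal sketch: integrate the full linear system and its two parts separately
# and compare the states at the final time. All numerical values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([1.0, 0.5])
u1 = lambda t: np.sin(t)
u2 = lambda t: 0.3 * np.cos(2.0 * t)
x0 = np.array([1.0, 0.0])

def integrate(u, x_init):
    sol = solve_ivp(lambda t, x: A @ x + B * u(t), (0.0, 10.0), x_init,
                    rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]

x_full = integrate(lambda t: u1(t) + u2(t), x0)
x1 = integrate(u1, x0)                         # carries the initial condition
x2 = integrate(u2, np.zeros(2))                # starts from zero
print(np.allclose(x_full, x1 + x2, atol=1e-6)) # True up to integration error
```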
The superposition principle is only available for linear systems. However, the additive state decomposition can be applied to both linear and nonlinear systems. Next, consider a nonlinear system
formula_15
where formula_16 is a nonlinear function. By the additive state decomposition, the system can be additively decomposed into
formula_17
with
formula_14
This decomposition can help to simplify controller design.
History.
According to Léon Brillouin, the principle of superposition was first stated by Daniel Bernoulli in 1753: "The general motion of a vibrating system is given by a superposition of its proper vibrations." The principle was rejected by Leonhard Euler and then by Joseph Lagrange. Bernoulli argued that any sonorous body could vibrate in a series of simple modes with a well-defined frequency of oscillation. As he had earlier indicated, these modes could be superposed to produce more complex vibrations. In his reaction to Bernoulli's memoirs, Euler praised his colleague for having best developed the physical part of the problem of vibrating strings, but denied the generality and superiority of the multi-modes solution.
Later it became accepted, largely through the work of Joseph Fourier.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "F(x)"
},
{
"math_id": 1,
"text": "F(x_1 + x_2) = F(x_1) + F(x_2)"
},
{
"math_id": 2,
"text": "F(ax) = a F(x)"
},
{
"math_id": 3,
"text": "|\\psi_i\\rangle"
},
{
"math_id": 4,
"text": "|\\phi_j\\rangle"
},
{
"math_id": 5,
"text": "|\\psi_i\\rangle = \\sum_{j}{C_j}|\\phi_j\\rangle,"
},
{
"math_id": 6,
"text": "C_j\\in \\textbf{C}"
},
{
"math_id": 7,
"text": "C_j"
},
{
"math_id": 8,
"text": "F(y) = 0"
},
{
"math_id": 9,
"text": "G(y) = z."
},
{
"math_id": 10,
"text": "F(y_1) = F(y_2) = \\cdots = 0 \\quad \\Rightarrow \\quad F(y_1 + y_2 + \\cdots) = 0,"
},
{
"math_id": 11,
"text": "G(y_1) + G(y_2) = G(y_1 + y_2)."
},
{
"math_id": 12,
"text": "\\dot{x} = Ax + B(u_1 + u_2), \\qquad x(0) = x_0."
},
{
"math_id": 13,
"text": "\\begin{align}\n \\dot{x}_1 &= Ax_1 + Bu_1, && x_1(0) = x_0,\\\\\n \\dot{x}_2 &= Ax_2 + Bu_2, && x_2(0) = 0\n\\end{align}"
},
{
"math_id": 14,
"text": "x = x_1 + x_2."
},
{
"math_id": 15,
"text": "\\dot{x} = Ax + B(u_1 + u_2) + \\phi\\left(c^\\mathsf{T} x\\right), \\qquad x(0) = x_0,"
},
{
"math_id": 16,
"text": "\\phi"
},
{
"math_id": 17,
"text": "\\begin{align}\n \\dot{x}_1 &= Ax_1 + Bu_1 + \\phi(y_d), && x_1(0) = x_0, \\\\\n \\dot{x}_2 &= Ax_2 + Bu_2 + \\phi\\left(c^\\mathsf{T} x_1 + c^\\mathsf{T} x_2\\right) - \\phi (y_d), && x_2(0) = 0\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=1201321 |
1201430 | DLVO theory | Theoretical model for aggregation and stability of aqueous dispersions
The DLVO theory (named after Boris Derjaguin and Lev Landau, Evert Verwey and Theodoor Overbeek) explains the aggregation and kinetic stability of aqueous dispersions quantitatively and describes the force between charged surfaces interacting through a liquid medium.
It combines the effects of the van der Waals attraction and the electrostatic repulsion due to the so-called double layer of counterions.
The electrostatic part of the DLVO interaction is computed in the mean field approximation in the limit of low surface potentials - that is when the potential energy of an elementary charge on the surface is much smaller than the thermal energy scale, formula_0. For two spheres of radius formula_1 each having a charge formula_2 (expressed in units of the elementary charge) separated by a center-to-center distance formula_3 in a fluid of dielectric constant formula_4 containing a concentration formula_5 of monovalent ions, the electrostatic potential takes the form of a screened-Coulomb or Yukawa potential,
formula_6
where formula_7 is the Bjerrum length, formula_8 is the electrostatic pair interaction energy, formula_9 is the elementary charge, formula_10 is the inverse of the Debye–Hückel screening length formula_11, given by formula_12, and formula_13 is the thermal energy scale at absolute temperature formula_14.
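As a minimal numerical sketch (with illustrative, not experimental, parameter values), the screened-Coulomb expression above can be evaluated for two identical charged spheres in a monovalent salt solution:

```python
# A minimal sketch of the screened-Coulomb (Yukawa) form above. The particle charge,
# radius and salt concentration are illustrative values, not taken from the article.
import numpy as np

kB, T = 1.380649e-23, 298.0                      # J/K, K
e, eps0 = 1.602176634e-19, 8.8541878128e-12      # C, F/m
eps_r = 78.5                                     # relative permittivity of water (approx.)
lambda_B = e**2 / (4 * np.pi * eps_r * eps0 * kB * T)   # Bjerrum length, ~0.7 nm

n = 2 * 1e-3 * 1e3 * 6.02214076e23               # total monovalent ion density, 1 mM 1:1 salt (m^-3)
kappa = np.sqrt(4 * np.pi * lambda_B * n)        # inverse Debye screening length
Z, a = 100, 50e-9                                # charge number and radius of each sphere

def beta_U(r):
    """Dimensionless screened-Coulomb repulsion between the two spheres."""
    return Z**2 * lambda_B * (np.exp(kappa * a) / (1 + kappa * a))**2 * np.exp(-kappa * r) / r

print(1e9 / kappa, "nm Debye length")            # ~9.6 nm at 1 mM
print(beta_U(2 * a + 10e-9), "k_B T at a 10 nm surface gap")
```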
Overview.
DLVO theory is a theory of colloidal dispersion stability in which zeta potential is used to explain that as two particles approach one another their ionic atmospheres begin to overlap and a repulsion force is developed. In this theory, two forces are considered to impact on colloidal stability: Van der Waals forces and electrical double layer forces.
The total potential energy is described as the sum of the attraction potential and the repulsion potential. When two particles approach each other, electrostatic repulsion increases and the interference between their electrical double layers increases. However, the Van der Waals attraction also increases as they get closer. At each distance, the net potential energy is obtained by subtracting the smaller of the two contributions from the larger one.
At very close distances, the combination of these forces results in a deep attractive well, which is referred to as the primary minimum. At larger distances, the energy profile goes through a maximum, or energy barrier, and subsequently passes through a shallow minimum, which is referred to as the secondary minimum.
At the maximum of the energy barrier, repulsion is greater than attraction. Particles rebound after interparticle contact, and remain dispersed throughout the medium. The maximum energy needs to be greater than the thermal energy. Otherwise, particles will aggregate due to the attraction potential. The height of the barrier indicates how stable the system is. Since particles have to overcome this barrier in order to aggregate, two particles on a collision course must have sufficient kinetic energy due to their velocity and mass. If the barrier is cleared, then the net interaction is all attractive, and as a result the particles aggregate. This inner region is often referred to as an energy trap since the colloids can be considered to be trapped together by Van der Waals forces.
For a colloidal system, the thermodynamic equilibrium state may be reached when the particles are in deep primary minimum. At primary minimum, attractive forces overpower the repulsive forces at low molecular distances. Particles coagulate and this process is not reversible. However, when the maximum energy barrier is too high to overcome, the colloid particles may stay in the secondary minimum, where particles are held together but more weakly than in the primary minimum. Particles form weak attractions but are easily redispersed. Thus, the adhesion at secondary minimum can be reversible.
History.
In 1923, Debye and Hückel reported the first successful theory for the distribution of charges in ionic solutions.
The framework of linearized Debye–Hückel theory subsequently was applied to colloidal dispersions by Levine and Dube
who found that charged colloidal particles should experience a strong medium-range repulsion and a weaker long-range attraction.
This theory did not explain the observed instability of colloidal dispersions against irreversible aggregation in solutions of high ionic strength.
In 1941, Derjaguin and Landau introduced a theory for the stability of colloidal dispersions that invoked a fundamental instability driven by strong but short-ranged van der Waals attractions countered by the stabilizing influence of electrostatic repulsions.
In 1948, Verwey and Overbeek independently arrived at the same result.
This so-called DLVO theory resolved the failure of the Levine–Dube theory to account for the dependence of colloidal dispersions' stability on the ionic strength of the electrolyte.
Derivation.
DLVO theory is the combined effect of the van der Waals and double layer forces. For the derivation, different conditions must be taken into account and different equations are obtained, but some useful assumptions, suitable for ordinary conditions, can effectively simplify the process. The simplified way to derive the theory is to add the two contributions together.
van der Waals attraction.
The van der Waals force is the collective name for the dipole-dipole force, the dipole-induced dipole force and dispersion forces, of which the dispersion forces are the most important contribution because they are always present.
Assume that the pair potential between two atoms or small molecules is purely attractive and of the form w = −C/r^n, where C is a constant for the interaction energy, determined by the molecule's properties, and n = 6 for van der Waals attraction. With another assumption of additivity, the net interaction energy between a molecule and a planar surface made up of like molecules will be the sum of the interaction energies between the molecule and every molecule in the surface body. The net interaction energy for a molecule at a distance D away from the surface is therefore
formula_15
where formula_16 is the number density of molecules in the surface.
Then the interaction energy of a large sphere of radius "R" and a flat surface can be calculated as
formula_17
where formula_18 is the number density of molecules in the sphere.
For convenience, Hamaker constant "A" is given as
formula_19
and the equation becomes
formula_20
With a similar method and according to the Derjaguin approximation, the van der Waals interaction energy between particles with different shapes can be calculated, such as the energy between two spheres of radii R1 and R2, formula_21, between a sphere of radius R and a flat surface, formula_22, and, per unit area, between two flat surfaces, formula_23.
Double layer force.
A surface in a liquid may be charged by dissociation of surface groups (e.g. silanol groups for glass or silica surfaces) or by adsorption of charged molecules such as polyelectrolyte from the surrounding solution. This results in the development of a wall surface potential which will attract counterions from the surrounding solution and repel co-ions. In equilibrium, the surface charge is balanced by oppositely charged counterions in solution. The region near the surface of enhanced
counterion concentration is called the electrical double layer (EDL). The EDL can be approximated by a sub-division into two regions. Ions in the region closest to the charged wall surface are strongly bound to the surface. This immobile layer is called the Stern or Helmholtz layer. The region adjacent to the Stern layer is called the diffuse layer and contains loosely associated ions that are comparatively mobile. The total electrical double layer due to the formation of the counterion layers results in electrostatic screening of the wall charge and minimizes the Gibbs free energy of EDL formation.
The thickness of the diffuse electric double layer is known as the Debye screening length formula_24. At a distance of two Debye screening lengths the electrical potential energy is reduced to 2 percent of the value at the surface wall.
formula_25
with unit of m⁻¹, where formula_26 is the number density of ion i in the bulk solution and formula_27 is the permittivity of free space.
The repulsive free energy per unit area between two planar surfaces is shown as
formula_28
where formula_29 is the reduced surface potential, given by formula_30, and formula_31 is the electric potential on the surface.
The interaction free energy between two spheres of radius "R" is
formula_32
Combining the van der Waals interaction energy and the double layer interaction energy, the interaction between two particles or two surfaces in a liquid can be expressed as
formula_33
where "W"("D")R is the repulsive interaction energy due to electric repulsion, and "W"("D")A is the attractive interaction energy due to van der Waals interaction.
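A minimal numerical sketch of this combination for two identical spheres, using the sphere-sphere forms given above; the Hamaker constant, radius, surface potential and salt concentration are illustrative values chosen only to display the characteristic energy barrier:

```python
# A minimal sketch of the combined DLVO interaction between two identical spheres.
import numpy as np

kB, T, e, eps0, NA = 1.380649e-23, 298.0, 1.602176634e-19, 8.8541878128e-12, 6.02214076e23
eps_r, z = 78.5, 1                              # water, monovalent ions
rho = 10e-3 * 1e3 * NA                          # 10 mM 1:1 salt, ions of each sign per m^3
kappa = np.sqrt(2 * rho * (z * e)**2 / (eps_r * eps0 * kB * T))   # inverse Debye length

A_H = 1e-20                                     # Hamaker constant (J), typical order of magnitude
R = 100e-9                                      # sphere radius
psi0 = 0.025                                    # surface potential, 25 mV
gamma = np.tanh(z * e * psi0 / (4 * kB * T))

D = np.linspace(0.2e-9, 20e-9, 400)             # surface-to-surface separation
W_A = -A_H * R / (12 * D)                       # van der Waals term, two equal spheres
W_R = 64 * np.pi * kB * T * R * rho * gamma**2 * np.exp(-kappa * D) / kappa**2
W = (W_A + W_R) / (kB * T)                      # total interaction in units of k_B T

print(f"Debye length: {1e9 / kappa:.2f} nm")    # ~3 nm at 10 mM
print(f"Barrier height: {W.max():.1f} k_B T at D = {D[W.argmax()] * 1e9:.2f} nm")
```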
Effect of shear flows.
Alessio Zaccone and collaborators investigated the effects of shear flow on particle aggregation, which can play an important role in applications such as microfluidics, chemical reactors, and atmospheric and environmental flows. Their work showed a characteristic lag-time in the shear-induced aggregation of the particles, which decreases exponentially with the shear rate.
Application.
Since the 1940s, the DLVO theory has been used to explain phenomena found in colloidal science, adsorption and many other fields. With the more recent growth of nanoparticle research, DLVO theory has become even more widely used because it can explain the behavior both of material nanoparticles, such as fullerene particles, and of microorganisms.
Shortcomings.
Additional forces beyond the DLVO construct have been reported to also play a major role in determining colloid stability.
DLVO theory is not effective in describing ordering processes such as the evolution of colloidal crystals in dilute dispersions with low salt concentrations. It also cannot explain the relation between the formation of colloidal crystals and salt concentrations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " k_\\text{B} T"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "Z"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "\\epsilon_r"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "\\beta U(r) = Z^2 \\lambda_\\text{B} \\, \\left(\\frac{e^{\\kappa a}}{1 + \\kappa a}\\right)^2 \\, \\frac{e^{-\\kappa r}}{r},\n"
},
{
"math_id": 7,
"text": "\\lambda_\\text{B}"
},
{
"math_id": 8,
"text": "U"
},
{
"math_id": 9,
"text": "e"
},
{
"math_id": 10,
"text": "\\kappa"
},
{
"math_id": 11,
"text": "\\lambda_\\text{D}"
},
{
"math_id": 12,
"text": "\\kappa^2 = 4 \\pi \\lambda_\\text{B} n"
},
{
"math_id": 13,
"text": "\\beta^{-1} = k_\\text{B} T"
},
{
"math_id": 14,
"text": "T"
},
{
"math_id": 15,
"text": "w(D) = -2 \\pi \\, C \\rho _1\\, \\int_{z=D}^{z= \\infty \\,} dz \\int_{x=0}^{x=\\infty \\,}\\frac{x \\, dx}{(z^2+x^2)^3} = \\frac{2 \\pi C \\rho _1}{4} \\int_D^\\infty \\frac{dz}{z^4} = - \\frac{ \\pi C \\rho _1 }{ 6 D^3 }"
},
{
"math_id": 16,
"text": " \\rho_1 "
},
{
"math_id": 17,
"text": "W(D) = -\\frac{2 \\pi C \\rho _1 \\rho _2}{12} \\int_{z=0}^{z=2R}\\frac {(2R-z)zdz}{(D+z)^3} \\approx -\\frac{ \\pi ^2 C \\rho _1 \\rho _2 R}{6D}"
},
{
"math_id": 18,
"text": "\\rho_2"
},
{
"math_id": 19,
"text": " A = \\pi^2C\\rho_1\\rho_2, "
},
{
"math_id": 20,
"text": "W(D) = -\\frac{AR}{6D}. "
},
{
"math_id": 21,
"text": "W(D) = -\\frac{A}{6D} \\frac{R_1 R_2}{(R_1 +R_2 )},"
},
{
"math_id": 22,
"text": "W(D) = -\\frac{AR}{6D},"
},
{
"math_id": 23,
"text": "W(D) = -\\frac{A}{12 \\pi D^2}"
},
{
"math_id": 24,
"text": "1 / \\kappa"
},
{
"math_id": 25,
"text": "\\kappa = \\sqrt{\\sum_i \\frac{\\rho_{\\infty i} e^2z^2_i}{\\epsilon_r \\epsilon_0 k_\\text{B} T}}"
},
{
"math_id": 26,
"text": "\\rho_{\\infty i}"
},
{
"math_id": 27,
"text": "\\varepsilon_0"
},
{
"math_id": 28,
"text": "W = \\frac{64k_\\text{B} T\\rho_{\\infty } \\gamma ^2}{\\kappa}e^{-\\kappa D}"
},
{
"math_id": 29,
"text": "\\gamma"
},
{
"math_id": 30,
"text": "\\gamma = \\tanh\\left(\\frac{ze\\psi_0}{4k_\\text{B}T}\\right)"
},
{
"math_id": 31,
"text": "\\psi_0"
},
{
"math_id": 32,
"text": "W = \\frac{64\\pi k_\\text{B} TR\\rho_{\\infty} \\gamma ^2}{\\kappa ^2}e^{-\\kappa D}."
},
{
"math_id": 33,
"text": "W(D) = W(D)_\\text{A} + W(D)_\\text{R},"
}
] | https://en.wikipedia.org/wiki?curid=1201430 |
12015467 | Kaya identity | Identity regarding anthropogenic carbon dioxide emissions
The Kaya identity is a mathematical identity stating that the total emission level of the greenhouse gas carbon dioxide can be expressed as the product of four factors: human population, GDP per capita, energy intensity (per unit of GDP), and carbon intensity (emissions per unit of energy consumed). It is a concrete form of the more general I = PAT equation relating factors that determine the level of human impact on climate. Although the terms in the Kaya identity would in theory cancel out, it is useful in practice to calculate emissions in terms of more readily available data, namely population, GDP per capita, energy per unit GDP, and emissions per unit energy. It furthermore highlights the elements of the global economy on which one could act to reduce emissions, notably the energy intensity per unit GDP and the emissions per unit energy.
formula_0
Overview.
The Kaya identity was developed by Japanese energy economist Yoichi Kaya. It is the subject of his book "Environment, Energy, and Economy: strategies for sustainability", co-authored with Keiichi Yokobori as the output of the Conference on Global Environment, Energy, and Economic Development (1993: Tokyo, Japan). It is a variation of Paul R. Ehrlich and John Holdren's I = PAT formula that describes the factors of environmental impact.
Kaya identity is expressed in the form:
formula_1
Where:
F is global CO2 emissions from human sources,
P is the global population,
G is the world GDP, and
E is global energy consumption.
And:
G/P is the GDP per capita,
E/G is the energy intensity of the GDP, and
F/E is the emission intensity of energy.
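A minimal numerical sketch of the identity, with purely illustrative input values rather than real-world data:

```python
# A minimal sketch of the Kaya identity; all numbers are illustrative, not measured data.
population = 8.0e9                 # P, people
gdp_per_capita = 1.2e4             # G/P, dollars per person per year
energy_intensity = 5.0e6           # E/G, joules per dollar of GDP
carbon_intensity = 6.0e-11         # F/E, tonnes of CO2 per joule

emissions = population * gdp_per_capita * energy_intensity * carbon_intensity
print(f"F = {emissions:.2e} tonnes of CO2 per year")
```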
Use in IPCC reports.
The Kaya identity plays a core role in the development of future emissions scenarios in the IPCC Special Report on Emissions Scenarios. The scenarios set out a range of assumed conditions for future development of each of the four inputs. Population growth projections are available independently from demographic research; GDP per capita trends are available from economic statistics and econometrics; similarly for energy intensity and emission levels. The projected carbon emissions can drive carbon cycle and climate models to predict future CO2 concentration and global warming.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F = P \\times \\frac{G}{P} \\times \\frac{E}{G} \\times \\frac{F}{E}"
},
{
"math_id": 1,
"text": "F = P \\cdot \\frac{G}{P} \\cdot \\frac{E}{G} \\cdot \\frac{F}{E}"
}
] | https://en.wikipedia.org/wiki?curid=12015467 |
12017057 | Self-focusing | Self-focusing is a non-linear optical process induced by the change in refractive index of materials exposed to intense electromagnetic radiation. A medium whose refractive index increases with the electric field intensity acts as a focusing lens for an electromagnetic wave characterized by an initial transverse intensity gradient, as in a laser beam. The peak intensity of the self-focused region keeps increasing as the wave travels through the medium, until defocusing effects or medium damage interrupt this process. Self-focusing of light was discovered by Gurgen Askaryan.
Self-focusing is often observed when radiation generated by femtosecond lasers propagates through many solids, liquids and gases. Depending on the type of material and on the intensity of the radiation, several mechanisms produce variations in the refractive index which result in self-focusing: the main cases are Kerr-induced self-focusing and plasma self-focusing.
Kerr-induced self-focusing.
Kerr-induced self-focusing was first predicted in the 1960s and experimentally verified by studying the interaction of ruby lasers with glasses and liquids. Its origin lies in the optical Kerr effect, a non-linear process which arises in media exposed to intense electromagnetic radiation, and which produces a variation of the refractive index formula_0 as described by the formula formula_1, where "n"0 and "n"2 are the linear and non-linear components of the refractive index, and "I" is the intensity of the radiation. Since "n"2 is positive in most materials, the refractive index becomes larger in the areas where the intensity is higher, usually at the centre of a beam, creating a focusing density profile which potentially leads to the collapse of a beam on itself. Self-focusing beams have been found to naturally evolve into a Townes profile regardless of their initial shape.
Self-focusing beyond a threshold of power can lead to laser collapse and damage to the medium, which occurs if the radiation power is greater than the critical power
formula_2,
where λ is the radiation wavelength in vacuum and α is a constant which depends on the initial spatial distribution of the beam. Although there is no general analytical expression for α, its value has been derived numerically for many beam profiles. The lower limit is α ≈ 1.86225, which corresponds to Townes beams, whereas for a Gaussian beam α ≈ 1.8962.
For air, n0 ≈ 1, n2 ≈ 4×10⁻²³ m²/W for λ = 800 nm, and the critical power is Pcr ≈ 2.4 GW, corresponding to an energy of about 0.3 mJ for a pulse duration of 100 fs. For silica, n0 ≈ 1.453, n2 ≈ 2.4×10⁻²⁰ m²/W, and the critical power is Pcr ≈ 2.8 MW.
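These figures can be reproduced by evaluating the critical-power formula directly; the short Python sketch below uses the Gaussian-beam value of α quoted above:

```python
# A minimal sketch evaluating the Kerr critical power for the two media quoted in the
# text, using the Gaussian-beam constant alpha ~ 1.8962.
import numpy as np

def critical_power(wavelength, n0, n2, alpha=1.8962):
    return alpha * wavelength**2 / (4 * np.pi * n0 * n2)

lam = 800e-9                                                     # vacuum wavelength (m)
print(critical_power(lam, 1.0, 4e-23) / 1e9, "GW in air")        # ~2.4 GW
print(critical_power(lam, 1.453, 2.4e-20) / 1e6, "MW in silica") # ~2.8 MW
```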
Kerr-induced self-focusing is crucial for many applications in laser physics, both as a key ingredient and as a limiting factor. For example, the technique of chirped pulse amplification was developed to overcome the nonlinearities and damage of optical components that self-focusing would produce in the amplification of femtosecond laser pulses. On the other hand, self-focusing is a major mechanism behind Kerr-lens modelocking, laser filamentation in transparent media, self-compression of ultrashort laser pulses, parametric generation, and many areas of laser-matter interaction in general.
Self-focusing and defocusing in gain medium.
Kelley predicted that homogeneously broadened two-level atoms may focus or defocus light when the carrier frequency formula_3 is detuned below or above the center of the gain line formula_4. Laser pulse propagation with a slowly varying envelope formula_5 is governed in a gain medium by the nonlinear Schrödinger-Frantz-Nodvik equation.
When formula_3 is detuned downward or upward from formula_4 the refractive index is changed. "Red" detuning leads to an increased index of refraction during saturation of the resonant transition, i.e. to self-focusing, while for "blue" detuning the radiation is defocused during saturation:
formula_6
formula_7
formula_8
where formula_9 is the stimulated emission cross section, formula_10 is the population inversion density before pulse arrival, formula_11 and formula_12 are longitudinal and transverse lifetimes of two-level medium and formula_13 is the propagation axis.
Filamentation.
A laser beam with a smooth spatial profile formula_14 is affected by modulational instability. Small perturbations caused by roughness and medium defects are amplified during propagation. This effect is referred to as the Bespalov-Talanov instability.
In the framework of the nonlinear Schrödinger equation:
formula_15.
The rate of perturbation growth, or instability increment formula_16, is linked with the filament size formula_17 via a simple equation:
formula_18. Generalization of this link between Bespalov-Talanov increments and filament size in gain medium as a function of linear gain formula_19 and detuning formula_20
had been realized in
Plasma self-focusing.
Advances in laser technology have recently enabled the observation of self-focusing in the interaction of intense laser pulses with plasmas. Self-focusing in plasma can occur through thermal, relativistic and ponderomotive effects. Thermal self-focusing is due to collisional heating of a plasma exposed to electromagnetic radiation: the rise in temperature induces a hydrodynamic expansion which leads to an increase of the index of refraction and further heating.
Relativistic self-focusing is caused by the mass increase of electrons travelling at speeds approaching the speed of light, which modifies the plasma refractive index "nrel" according to the equation
formula_21,
where ω is the radiation angular frequency and ωp the relativistically corrected plasma frequency formula_22.
Ponderomotive self-focusing is caused by the ponderomotive force, which pushes electrons away from the region where the laser beam is more intense, therefore increasing the refractive index and inducing a focusing effect.
The evaluation of the contribution and interplay of these processes is a complex task, but a reference threshold for plasma self-focusing is the relativistic critical power
formula_23,
where "me" is the electron mass, "c" the speed of light, ω the radiation angular frequency, "e" the electron charge and ωp the plasma frequency. For an electron density of 10¹⁹ cm⁻³ and radiation at the wavelength of 800 nm, the critical power is about 3 TW. Such values are realisable with modern lasers, which can exceed PW powers. For example, a laser delivering 50 fs pulses with an energy of 1 J has a peak power of 20 TW.
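The quoted estimate can be reproduced by evaluating the relativistic critical power directly; the sketch below uses the non-relativistic plasma frequency (setting γ ≈ 1) as an approximation:

```python
# A minimal sketch evaluating P_cr ~ 17 (omega / omega_p)^2 GW for the example in the
# text: electron density 1e19 cm^-3 and a laser wavelength of 800 nm.
import numpy as np

e, m_e = 1.602176634e-19, 9.1093837015e-31
eps0, c = 8.8541878128e-12, 2.99792458e8
n_e = 1e19 * 1e6                                  # electron density in m^-3
omega = 2 * np.pi * c / 800e-9                    # laser angular frequency
omega_p = np.sqrt(n_e * e**2 / (m_e * eps0))      # plasma frequency with gamma ~ 1

P_cr = 17e9 * (omega / omega_p)**2                # in watts
print(P_cr / 1e12, "TW")                          # ~3 TW, as quoted in the text
```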
Self-focusing in a plasma can balance the natural diffraction and channel a laser beam. Such effect is beneficial for many applications, since it helps increasing the length of the interaction between laser and medium. This is crucial, for example, in laser-driven particle acceleration, laser-fusion schemes and high harmonic generation.
Accumulated self-focusing.
Self-focusing can be induced by a permanent refractive index change resulting from a multi-pulse exposure. This effect has been observed in glasses which increase the refractive index during an exposure to ultraviolet laser radiation. Accumulated self-focusing develops as a wave guiding, rather than a lensing effect. The scale of actively forming beam filaments is a function of the exposure dose. Evolution of each beam filament towards a singularity is limited by the maximum induced refractive index change or by laser damage resistance of the glass.
Self-focusing in soft matter and polymer systems.
Self-focusing can also be observed in a number of soft matter systems, such as solutions of polymers and particles as well as photo-polymers. Self-focusing was observed in photo-polymer systems with microscale laser beams of either UV or visible light. The self-trapping of incoherent light was also later observed. Self-focusing can also be observed in wide-area beams, wherein the beam undergoes filamentation, or modulation instability, spontaneously dividing into a multitude of microscale self-focused beams, or filaments. The balance of self-focusing and natural beam divergence results in the beams propagating divergence-free. Self-focusing in photopolymerizable media is possible owing to a photoreaction-dependent refractive index, and the fact that refractive index in polymers is proportional to molecular weight and crosslinking degree, which increase over the duration of photo-polymerization.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n = n_0 + n_2 I"
},
{
"math_id": 2,
"text": "P_{\\text{cr}}= \\alpha \\frac{\\lambda^2}{4 \\pi n_0 n_2}"
},
{
"math_id": 3,
"text": "\\omega"
},
{
"math_id": 4,
"text": "\\omega_0"
},
{
"math_id": 5,
"text": "E(\\vec \\mathbf{r},t)"
},
{
"math_id": 6,
"text": " {\\frac {\\partial {{E}(\\vec \\mathbf{r},t)}} {\\partial z} } + {\\frac {1} {c} } {\\frac {\\partial {{E}(\\vec \\mathbf{r},t)}} {\\partial t} } + {\\frac {i} {2k} } \\nabla_{\\bot}^2 E (\\vec \\mathbf{r},t)\n=+ i k n_2 |E(\\vec \\mathbf{r},t)|^2 {{E}(\\vec \\mathbf{r},t)}+ "
},
{
"math_id": 7,
"text": " \\frac {\\sigma N (\\vec \\mathbf{r},t)}{2} [1 + i ( \\omega_0 - \\omega )T_2] {{E}(\\vec \\mathbf{r},t)} , \\nabla_{\\bot}^2= {\\frac {\\partial^2}{{\\partial x }^2}}+{\\frac {\\partial^2}{{\\partial y }^2}}, "
},
{
"math_id": 8,
"text": " {\\frac {\\partial {{N}(\\vec \\mathbf{r},t)}} {\\partial t} } = -{\\frac {{{N_0}(\\vec \\mathbf{r})}} {T_1} }- \\sigma (\\omega) N (\\vec \\mathbf{r},t) |E(\\vec \\mathbf{r},t)|^2 , \n"
},
{
"math_id": 9,
"text": "\\sigma (\\omega)= \\frac {\\sigma_0}{1+T_2^2 ( \\omega_0 - \\omega )^2}"
},
{
"math_id": 10,
"text": "{N_0}(\\vec \\mathbf{r})"
},
{
"math_id": 11,
"text": "T_1"
},
{
"math_id": 12,
"text": "T_2"
},
{
"math_id": 13,
"text": "z"
},
{
"math_id": 14,
"text": " {E}(\\vec \\mathbf{r},t)"
},
{
"math_id": 15,
"text": " {\\frac {\\partial {{E}(\\vec \\mathbf{r},t)}} {\\partial z} } + {\\frac {1} {c} } {\\frac {\\partial {{E}(\\vec \\mathbf{r},t)}} {\\partial t} } + {\\frac {i} {2k} } \\nabla_{\\bot}^2 E (\\vec \\mathbf{r},t)\n=+ i k n_2 |E(\\vec \\mathbf{r},t)|^2 {{E}(\\vec \\mathbf{r},t)}"
},
{
"math_id": 16,
"text": "h"
},
{
"math_id": 17,
"text": "\\kappa^{-1}"
},
{
"math_id": 18,
"text": "h^2=\\kappa^2(n_2 |E(\\vec \\mathbf{r},t)|^2-\\kappa^2/4k^2) "
},
{
"math_id": 19,
"text": "\n{\\sigma N (\\vec \\mathbf{r},t)}"
},
{
"math_id": 20,
"text": "\\delta \\omega=\\omega_0 - \\omega"
},
{
"math_id": 21,
"text": "n_{rel} = \\sqrt{1 - \\frac{\\omega_p^2}{\\omega^2}}"
},
{
"math_id": 22,
"text": " \\omega_p= \\sqrt{\\frac{n e^{2}}{\\gamma m\\epsilon_0}} "
},
{
"math_id": 23,
"text": "P_{cr}= \\frac{m_e^2 c^5 \\omega^2}{e^2 \\omega_{p}^2} \\simeq 17 \\bigg(\\frac{\\omega}{\\omega_{p}}\\bigg)^2\\ \\textrm{GW}"
}
] | https://en.wikipedia.org/wiki?curid=12017057 |
1201931 | Stative verb | Verb that describes a state of being
According to some linguistic theories, a stative verb is a verb that describes a state of being, in contrast to a dynamic verb, which describes an action. The difference can be categorized by saying that stative verbs describe situations that are static, or unchanging throughout their entire duration, whereas dynamic verbs describe processes that entail change over time. Many languages distinguish between these two types in terms of how they can be used grammatically.
Contrast to dynamic.
Some languages use the same verbs for dynamic and stative situations, and others use different (but often related) verbs with some kind of qualifiers to distinguish between them. Some verbs may act as either stative or dynamic. A phrase like "he plays the piano" may be either stative or dynamic, according to the context. When, in a given context, the verb "play" relates to a state (an interest or a profession), he could be an amateur who enjoys music or a professional pianist. The dynamic interpretation emerges from a specific context in which "play" describes an action: "what does he do on Friday evening? He plays the piano".
The distinction between stative and dynamic verbs can be correlated with:
Progressive aspect.
In English and certain other languages, stative and dynamic verbs differ in whether or not they typically occur in a progressive form. Dynamic verbs such as "go" can be used in the progressive ("I am going to school") whereas stative verbs such as "know" cannot (*"I am knowing the answer"). A verb that has both dynamic and stative uses cannot normally be used in the progressive when a stative meaning is intended: e.g. one cannot normally say, idiomatically, "Every morning, I am going to school". In other languages, statives can be used in the progressive as well: in Korean, for example, the sentence 미나가 인호를 사랑하고 있다 ("Mina is loving Inho") is perfectly valid.
Morphological markers.
In some languages, stative and dynamic verbs will use entirely different morphological markers on the verbs themselves. For example, in the Mantauran dialect of Rukai, an indigenous language of Taiwan, the two types of verbs take different prefixes in their finite forms, with dynamic verbs taking "o-" and stative verbs taking "ma-". Thus, the dynamic verb "jump" is "o-coroko" in the active voice, while the stative verb "love" is "ma-ðalamə". This sort of marking is characteristic of other Formosan languages as well.
Difference from inchoative.
In English, a verb that expresses a state can also express the entrance into a state. This is called inchoative aspect. The simple past is sometimes inchoative. For example, the present-tense verb in the sentence "He understands his friend" is stative, while the past-tense verb in the sentence "Suddenly he understood what she said" is inchoative, because it means "He understood henceforth". On the other hand, the past-tense verb in "At one time, he understood her" is stative.
The only way the difference between stative and inchoative can be expressed in English is through the use of modifiers, as in the above examples ("suddenly" and "at one time").
Likewise, in ancient Greek, a verb that expresses a state (e.g., "ebasíleuon" "I was king") may use the aorist to express entrance into the state (e.g., "ebasíleusa" "I became king"). But the aorist can also simply express the state as a whole, with no focus on the beginning of the state ("eíkosi étē ebasíleusa" "I ruled for twenty years").
Formal definitions.
In some theories of formal semantics, including David Dowty's, stative verbs have a logical form that is the lambda expression
formula_0
Apart from Dowty, Z. Vendler and C. S. Smith have also written influential work on aspectual classification of verbs.
English.
Dowty's analysis.
Dowty gives several tests to decide whether an English verb is stative. They are as follows:
Categories.
Stative verbs are often divided into sub-categories, based on their semantics or syntax.
Semantic divisions mainly involve verbs that express someone's state of mind, or something's properties (of course, things can also be expressed via other language mechanisms as well, particularly adjectives). The precise categories vary by linguist. Huddleston and Pullum, for example, divide stative verbs into the following semantic categories: verbs of perception and sensation ("see, hear"), verbs of hurting ("ache, itch"), stance verbs ("stand, sit"), and verbs of cognition, emotion, and sensation ("believe, regret"). Novakov, meanwhile, uses the slightly different categories: verbs denoting sensations ("feel, hear"), verbs denoting reasoning and mental attitude ("believe, understand"), verbs denoting positions/stance ("lie, surround"), and verbs denoting relations ("resemble, contain").
Syntactic divisions involve the types of clause structures in which a verb may be used. In the following examples, an asterisk (*) indicates that the sentence is ungrammatical:
John believes in Fido barking.
John believes Fido to bark.
Joan depends on Fido barking.
*Joan depends Fido to bark.
*Jim loathes on Fido barking.
*Jim loathes Fido to bark.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda (x): \\ [\\operatorname{STATE} \\ x]"
}
] | https://en.wikipedia.org/wiki?curid=1201931 |
1201933 | Cadherin | Calcium-dependent cell adhesion molecule
Cadherins (named for "calcium-dependent adhesion") are cell adhesion molecules important in forming adherens junctions that let cells adhere to each other. Cadherins are a class of type-1 transmembrane proteins, and they depend on calcium (Ca2+) ions to function, hence their name. Cell-cell adhesion is mediated by extracellular cadherin domains, whereas the intracellular cytoplasmic tail associates with numerous adaptors and signaling proteins, collectively referred to as the cadherin adhesome.
Background.
The cadherin family is essential in maintaining cell-cell contact and regulating cytoskeletal complexes. The cadherin superfamily includes cadherins, protocadherins, desmogleins, desmocollins, and more. In structure, they share "cadherin repeats", which are the extracellular Ca2+-binding domains. There are multiple classes of cadherin molecules, each designated with a prefix for tissues with which it associates. Classical cadherins maintain the tone of tissues by forming a homodimer in cis while desmosomal cadherins are heterodimeric. The intracellular portion of classical cadherins interacts with a complex of proteins that allows connection to the actin cytoskeleton. Although classical cadherins take a role in cell layer formation and structure formation, desmosomal cadherins focus on resisting cell damage. Desmosomal cadherins maintain the function of desmosomes that is to overturn the mechanical stress of the tissues. Similar to classical cadherins, desmosomal cadherins have a single transmembrane domain, five EC repeats, and an intracellular domain. There are two types of desmosomal cadherins: desmogleins and desmocollins. These contain an intracellular anchor and cadherin like sequence (ICS). The adaptor proteins that associate with desmosomal cadherins are plakoglobin (related to formula_0-catenin), plakophilins (p120 catenin subfamily), and desmoplakins. The major function of desmoplakins is to bind to intermediate filament by interacting with plakoglobin, which attach to the ICS of desmogleins, desmocollins and plakophilins. Atypical cadherins, such as CELSR1, retain the extracellular repeats and binding activities of the other cadherins, but may otherwise differ significantly in structure, and are typically involved in transmitting developmental signals rather than adhesion.
Cells containing a specific cadherin subtype tend to cluster together to the exclusion of other types, both in cell culture and during development. For example, cells containing N-cadherin tend to cluster with other N-cadherin-expressing cells. However, mixing speed in cell culture experiments can affect the extent of homotypic specificity. In addition, several groups have observed heterotypic binding affinity (i.e., binding of different types of cadherin together) in various assays. One current model proposes that cells distinguish cadherin subtypes based on kinetic specificity rather than thermodynamic specificity, as different types of cadherin homotypic bonds have different lifetimes.
Structure.
Cadherins are synthesized as polypeptides and undergo many post-translational modifications to become the proteins which mediate cell-cell adhesion and recognition. These polypeptides are approximately 720–750 amino acids long. Each cadherin has a small C-terminal cytoplasmic component, a transmembrane component, and the remaining bulk of the protein is extra-cellular (outside the cell). The transmembrane component consists of single chain glycoprotein repeats. Because cadherins are Ca2+ dependent, they have five tandem extracellular domain repeats that act as the binding site for Ca2+ ions. Their extracellular domain interacts with two separate "trans" dimer conformations: strand-swap dimers (S-dimers) and X-dimers. To date, over 100 types of cadherins in humans have been identified and sequenced.
The functionality of cadherins relies upon the formation of two identical subunits, known as homodimers. The homodimeric cadherins create cell-cell adhesion with cadherins present in the membranes of other cells through changing conformation from "cis"-dimers to "trans"-dimers. Once the cell-cell adhesion between cadherins present in the cell membranes of two different cells has formed, adherens junctions can then be made when protein complexes, usually composed of α-, β-, and γ-catenins, bind to the cytoplasmic portion of the cadherin. Regulatory proteins include p-120 catenin, formula_1-catenin, formula_0-catenin, and vinculin. Binding of p-120 catenin and formula_0-catenin to the homodimer increases the stability of the classical cadherin. formula_1-catenin is engaged by p120-catenin complex, where vinculin is recruited to take a role in indirect association with actin cytoskeleton. However, cadherin-catenin complex can also bind directly to the actin without the help of vinculin. Moreover, the strength of cadherin adhesion can increase by dephosphorylation of p120 catenin and the binding of formula_1-catenin and vinculin.
Function.
Development.
Cadherins behave as both receptors and ligands for other molecules. During development, their behavior assists in properly positioning cells: they are responsible for the separation of the different tissue layers and for cellular migration. In the very early stages of development, E-cadherins (epithelial cadherin) are the most highly expressed. Many cadherins are specified for specific functions in the cell, and they are differentially expressed in a developing embryo. For example, during neurulation, when a neural plate forms in an embryo, the tissues residing near the cranial neural folds have decreased N-cadherin expression. Conversely, the expression of the N-cadherins remains unchanged in other regions of the neural tube located on the anterior-posterior axis of the vertebrate. N-cadherins have different functions that maintain cell structure, cell-cell adhesion, and internal adhesions. They contribute greatly to maintaining the structural ability of the heart to pump and release blood. Because N-cadherins adhere strongly between the cardiomyocytes, the heart can overcome the fracture, deformation, and fatigue that can result from blood pressure. N-cadherin takes part in the development of the heart during embryogenesis, especially in the sorting out of the precardiac mesoderm. N-cadherins are robustly expressed in precardiac mesoderm, but they do not play a role in determining the cardiac lineage. An embryo with an N-cadherin mutation still forms the primitive heart tube; however, N-cadherin-deficient mice have difficulties in maintaining cardiomyocyte development. The myocytes of these mice end up dissociated around the endocardial cell layer when they cannot preserve cell adhesion once the heart starts to pump. As a result, the cardiac outflow tract becomes blocked, causing cardiac swelling. The expression of different types of cadherins in cells varies depending upon the specific differentiation and specification of an organism during development. Cadherins play a vital role in the migration of cells through the epithelial–mesenchymal transition, which requires cadherins to form adherens junctions with neighboring cells. In neural crest cells, which are transient cells that arise in the developing organism during gastrulation and function in the patterning of the vertebrate body plan, the cadherins are necessary to allow migration of cells to form tissues or organs. In addition, cadherins that are responsible for the epithelial–mesenchymal transition event in early development have also been shown to be critical in the reprogramming of specified adult cells into a pluripotent state, forming induced pluripotent stem cells (iPSCs).
After development, cadherins play a role in maintaining cell and tissue structure, and in cellular movement. Regulation of cadherin expression can occur through promoter methylation among other epigenetic mechanisms.
Tumour metastasis.
The E-cadherin–catenin complex plays a key role in cellular adhesion; loss of this function has been associated with increased invasiveness and metastasis of tumors. The suppression of E-cadherin expression is regarded as one of the main molecular events responsible for dysfunction in cell-cell adhesion, which can lead to local invasion and ultimately tumor development. Because E-cadherins play an important role in tumor suppression, they are also referred to as the "suppressors of invasion".
Additionally, the overexpression of type 5, 6, and 17 cadherins alone or in combination can lead to cancer metastasis, and ongoing research aims to block their ability to function as ligands for integral membrane proteins.
Correlation to cancer.
It has been discovered that cadherins and other additional factors are correlated with the formation and growth of some cancers and with how a tumor continues to grow. E-cadherins, known as epithelial cadherins, are on the surface of one cell and can bind with those of the same kind on another cell to form bridges. The loss of these cell adhesion molecules is causally involved in the formation of epithelial types of cancer such as carcinomas. Changes in the expression of any type of cadherin may not only control tumor cell adhesion but may also affect signal transduction, leading to cancer cells growing uncontrollably.
In epithelial cell cancers, disrupted cell-cell adhesion might lead to the development of secondary malignant growths; these are distant from the primary site of the cancer and can result from abnormalities in the expression of E-cadherin or its associated catenins. Cell adhesion molecules such as the cadherin glycoproteins, which normally function as the glue that holds cells together, act as important mediators of cell-cell interactions. E-cadherins, on the surface of all epithelial cells, are linked to the actin cytoskeleton through interactions with catenins in the cytoplasm. Thus, anchored to the cytoskeleton, E-cadherins on the surface of one cell can bind with those on another to form bridges.
Correlation to endometrium and embryogenesis.
This family of glycoproteins is responsible for a calcium-dependent mechanism of intercellular adhesion. E-cadherins are crucial in embryogenesis during several processes, including gastrulation, neurulation, and organogenesis. Furthermore, suppression of E-cadherins impairs intercellular adhesion. The levels of these molecules increase during the luteal phase while their expression is regulated by progesterone with endometrial calcitonin.
Types.
There are said to be over 100 different types of cadherins found in vertebrates, which can be classified into four groups: classical, desmosomal, protocadherins, and unconventional. This large diversity is achieved through multiple cadherin-encoding genes combined with alternative RNA splicing mechanisms. Invertebrates contain fewer than 20 types of cadherins.
Classical.
Different members of the cadherin family are found in different locations.
Protocadherins.
Protocadherins are the largest mammalian subgroup of the cadherin superfamily of homophilic cell-adhesion proteins.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\beta"
},
{
"math_id": 1,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=1201933 |
12020310 | GALLEX | GALLEX or Gallium Experiment was a radiochemical neutrino detection experiment that ran between 1991 and 1997 at the Laboratori Nazionali del Gran Sasso (LNGS). This project was performed by an international collaboration of French, German, Italian, Israeli, Polish and American scientists led by the Max-Planck-Institut für Kernphysik Heidelberg. After brief interruption, the experiment was continued under a new name GNO (Gallium Neutrino Observatory) from May 1998 to April 2003.
It was designed to detect solar neutrinos and prove theories related to the Sun's energy creation mechanism. Before this experiment (and the SAGE experiment that ran concurrently), there had been no observation of low energy solar neutrinos.
Location.
The experiment's main components, the tank and the counters, were located in the underground astrophysical laboratory Laboratori Nazionali del Gran Sasso in the Italian Abruzzo province, near L'Aquila, and situated inside the 2912-metre-high Gran Sasso mountain. Its location under a depth of rock equivalent to 3200 metres of water was important to shield it from cosmic rays. The laboratory is accessible via the A-24 highway, which runs through the mountain.
Detector.
The 54 m³ detector tank was filled with 101 tons of gallium trichloride-hydrochloric acid solution, which contained 30.3 tons of gallium. The gallium in this solution acted as the target for a neutrino-induced nuclear reaction, which transmuted it into germanium through the following reaction:
νe + 71Ga → 71Ge + e−.
The threshold for neutrino detection by this reaction is very low (233.2 keV), and this is also the reason why gallium was chosen: other reactions (as with chlorine-37) have higher thresholds and are thus unable to detect low-energy neutrinos.
In fact, the low energy threshold makes the reaction with gallium suitable to the detection of neutrinos emitted in the initial proton fusion reaction of the proton-proton chain reaction, which have a maximum energy of 420 keV.
The produced germanium-71 was chemically extracted from the detector and converted to germane (71GeH4). Its decay, with a half-life of 11.43 days, was detected by counters. Each detected decay corresponded to one detected neutrino.
Results.
During the period 1991-1997, the detector measured a capture rate of 73.1 SNU (solar neutrino units). The follow-up GNO experiment found a capture rate of 62.9 SNU.
The rate of neutrinos detected by this experiment disagreed with standard solar model predictions. Thanks to the use of gallium, it was the first experiment to observe solar initial pp neutrinos. Another important result was the detection of a smaller number of neutrinos than the standard model predicted (the solar neutrino problem). After detector calibration the amount did not change. This discrepancy has since been explained: such radiochemical neutrino detectors are sensitive only to electron neutrinos, and not to muon neutrinos or tau neutrinos, hence the neutrino oscillation of electron neutrinos emitted from the sun and travelling to the earth accounts for the discrepancy.
Results from GALLEX together with SAGE, later confirmed by the BEST experiment, show a deficit in the expected rate of the capture reaction formula_0, which has been named the gallium anomaly.
Other experiments.
The first solar neutrino detection, the Homestake Experiment, used chlorine-37 to detect neutrinos with energies down to 814 keV.
After the end of GALLEX its successor project, the Gallium Neutrino Observatory or G.N.O., was started at LNGS in April 1998. The project continued until 2003.
A similar experiment detecting solar neutrinos using liquid gallium-71 was the Russian-American Gallium Experiment SAGE.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " {}^{71}\\text{Ga}+\\nu_e \\rightarrow e^{-}+{}^{71}\\text{Ge} \n"
}
] | https://en.wikipedia.org/wiki?curid=12020310 |
1202098 | Voigt profile | Probability distribution
The Voigt profile (named after Woldemar Voigt) is a probability distribution given by a convolution of a Cauchy-Lorentz distribution and a Gaussian distribution. It is often used in analyzing data from spectroscopy or diffraction.
Definition.
Without loss of generality, we can consider only centered profiles, which peak at zero. The Voigt profile is then
formula_0
where "x" is the shift from the line center, formula_1 is the centered Gaussian profile:
formula_2
and formula_3 is the centered Lorentzian profile:
formula_4
The defining integral can be evaluated as:
formula_5
where Re["w"("z")] is the real part of the Faddeeva function evaluated for
formula_6
In the limiting cases of formula_7 and formula_8 then formula_9 simplifies to formula_3 and formula_1, respectively.
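As a minimal numerical sketch, the profile can be evaluated through the Faddeeva function available in scientific libraries (here scipy.special.wofz), using the standard relation between the Voigt profile and Re[w(z)]; the width parameters below are illustrative:

```python
# A minimal sketch evaluating the centred Voigt profile via the Faddeeva function,
# with V(x; sigma, gamma) = Re[w(z)] / (sigma*sqrt(2*pi)) and z = (x + i*gamma)/(sigma*sqrt(2)).
import numpy as np
from scipy.special import wofz
from scipy.integrate import quad

def voigt(x, sigma, gamma):
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

area, _ = quad(voigt, -np.inf, np.inf, args=(1.0, 0.5))
print(area)                                             # ~1: the profile is normalized
print(voigt(0.0, 1.0, 1e-8), 1 / np.sqrt(2 * np.pi))    # Gaussian limit as gamma -> 0
```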
History and applications.
In spectroscopy, a Voigt profile results from the convolution of two broadening mechanisms, one of which alone would produce a Gaussian profile (usually, as a result of the Doppler broadening), and the other would produce a Lorentzian profile. Voigt profiles are common in many branches of spectroscopy and diffraction. Due to the expense of computing the Faddeeva function, the Voigt profile is sometimes approximated using a pseudo-Voigt profile.
Properties.
The Voigt profile is normalized:
formula_10
since it is a convolution of normalized profiles. The Lorentzian profile has no moments (other than the zeroth), and so the moment-generating function for the Cauchy distribution is not defined. It follows that the Voigt profile will not have a moment-generating function either, but the characteristic function for the Cauchy distribution is well defined, as is the characteristic function for the normal distribution. The characteristic function for the (centered) Voigt profile will then be the product of the two:
formula_11
Since normal distributions and Cauchy distributions are stable distributions, they are each closed under convolution (up to change of scale), and it follows that the Voigt distributions are also closed under convolution.
Cumulative distribution function.
Using the above definition for "z", the cumulative distribution function (CDF) can be found as follows:
formula_12
Substituting the definition of the Faddeeva function (scaled complex error function) yields for the indefinite integral:
formula_13
which may be solved to yield
formula_14
where formula_15 is a hypergeometric function. In order for the function to approach zero as "x" approaches negative infinity (as the CDF must do), an integration constant of 1/2 must be added. This gives for the CDF of Voigt:
formula_16
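Since the closed form above involves the generalized hypergeometric function formula_15, which is not available in every numerical library, the CDF can alternatively be obtained by direct numerical integration of the profile. A minimal, self-contained Python sketch (again assuming SciPy's wofz for the Faddeeva function):
import numpy as np
from scipy.integrate import quad
from scipy.special import wofz

def voigt_profile(x, sigma, gamma):
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

def voigt_cdf(x0, sigma, gamma):
    """CDF of the centered Voigt profile by numerical quadrature."""
    val, _ = quad(voigt_profile, -np.inf, x0, args=(sigma, gamma))
    return val

print(voigt_cdf(0.0, 1.0, 0.5))  # approximately 0.5 by symmetry of the centered profile
print(voigt_cdf(3.0, 1.0, 0.5))  # a value between 0.5 and 1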
The uncentered Voigt profile.
If the Gaussian profile is centered at formula_17 and the Lorentzian profile is centered at formula_18, the convolution is centered at formula_19 and the characteristic function is:
formula_20
The probability density function is simply offset from the centered profile by formula_21:
formula_22
where:
formula_23
The mode and median are both located at formula_21.
Derivatives.
Using the definition above for formula_27 and formula_28, the first and second derivatives can be expressed in terms of the Faddeeva function as
formula_29
and
formula_30
respectively.
Often, one or multiple Voigt profiles and/or their respective derivatives need to be fitted to a measured signal by means of non-linear least squares, e.g., in spectroscopy. Then, further partial derivatives can be utilised to accelerate computations. Instead of approximating the Jacobian matrix with respect to the parameters formula_24, formula_25, and formula_26 with the aid of finite differences, the corresponding analytical expressions can be applied. With formula_31 and formula_32, these are given by:
formula_33
formula_34
formula_35
for the original Voigt profile formula_36;
formula_37
formula_38
formula_39
for the first order partial derivative formula_40; and
formula_41
formula_42
formula_43
for the second order partial derivative formula_44. Since formula_24 and formula_26 play a relatively similar role in the calculation of formula_27, their respective partial derivatives also look quite similar in terms of their structure, although they result in totally different derivative profiles. Indeed, the partial derivatives with respect to formula_25 and formula_26 show more similarity since both are width parameters. All these derivatives involve only simple operations (multiplications and additions) because the computationally expensive formula_45 and formula_46 are readily obtained when computing formula_47. Such reuse of previous calculations allows for a derivation at minimal cost. This is not the case for a finite-difference gradient approximation, which requires a separate evaluation of formula_47 for each partial derivative.
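As an illustration of this reuse, the sketch below evaluates the profile and its first derivative with respect to "x" from a single call to the Faddeeva function (SciPy's wofz is assumed), and checks the analytical expression against a central finite difference:
import numpy as np
from scipy.special import wofz

SQRT2PI = np.sqrt(2.0 * np.pi)

def voigt_and_dx(x, mu, sigma, gamma):
    """Value and d/dx of the Voigt profile, reusing one wofz evaluation."""
    xc = x - mu
    z = (xc + 1j * gamma) / (sigma * np.sqrt(2.0))
    w = wofz(z)
    v = w.real / (sigma * SQRT2PI)
    dvdx = (gamma * w.imag - xc * w.real) / (sigma**3 * SQRT2PI)
    return v, dvdx

x, mu, sigma, gamma, h = 0.7, 0.1, 1.2, 0.4, 1e-6
_, dvdx = voigt_and_dx(x, mu, sigma, gamma)
numeric = (voigt_and_dx(x + h, mu, sigma, gamma)[0]
           - voigt_and_dx(x - h, mu, sigma, gamma)[0]) / (2 * h)
print(dvdx, numeric)  # the two values should agree closely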
Voigt functions.
The Voigt functions "U", "V", and "H" (sometimes called the line broadening function) are defined by
formula_48
formula_49
where
formula_50
erfc is the complementary error function, and "w"("z") is the Faddeeva function.
Relation to Voigt profile.
formula_51
with the variables expressed relative to the Gaussian width: formula_52
and formula_53
Numeric approximations.
Tepper-García Function.
The Tepper-García function, named after German-Mexican astrophysicist Thor Tepper-García, is a combination of an exponential function and rational functions that approximates the line broadening function formula_54 over a wide range of its parameters.
It is obtained from a truncated power series expansion of the exact line broadening function.
In its most computationally efficient form, the Tepper-García function can be expressed as
formula_55
where formula_56, formula_57, and formula_58.
Thus the line broadening function can be viewed, to first order, as a pure Gaussian function plus a correction factor that depends linearly on the microscopic properties of the absorbing medium (encoded in formula_59); however, as a result of the early truncation in the series expansion, the error in the approximation is still of order formula_59, i.e. formula_60. This approximation has a relative accuracy of
formula_61
over the full wavelength range of formula_54, provided that formula_62.
In addition to its high accuracy, the function formula_63 is easy to implement as well as computationally fast. It is widely used in the field of quasar absorption line analysis.
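A direct transcription of the expression above into Python (assuming NumPy, and keeping in mind that the term formula_57 diverges at u = 0, so the function is evaluated away from the line center):
import numpy as np

def tepper_garcia(a, u):
    """Approximation T(a, u) of the line broadening function H(a, u), for u != 0."""
    P = u**2
    Q = 1.5 / P
    R = np.exp(-P)
    return R - (a / (np.sqrt(np.pi) * P)) * (R**2 * (4 * P**2 + 7 * P + 4 + Q) - Q - 1)

u = np.linspace(0.5, 5.0, 5)
print(tepper_garcia(1e-4, u))  # close to the pure Gaussian exp(-u**2) plus a small correction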
Pseudo-Voigt approximation.
The pseudo-Voigt profile (or pseudo-Voigt function) is an approximation of the Voigt profile "V"("x") using a linear combination of a Gaussian curve "G"("x") and a Lorentzian curve "L"("x") instead of their convolution.
The pseudo-Voigt function is often used for calculations of experimental spectral line shapes.
The mathematical definition of the normalized pseudo-Voigt profile is given by
formula_64 with formula_65.
formula_66 is a function of the full width at half maximum (FWHM) parameters.
There are several possible choices for the formula_66 parameter. A simple formula, accurate to 1%, is
formula_67
where formula_66 is now a function of the Lorentz (formula_68), Gaussian (formula_69) and total (formula_70) full width at half maximum (FWHM) parameters. The total FWHM (formula_70) parameter is described by:
formula_71
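Putting the two empirical formulas together, a minimal Python sketch of the pseudo-Voigt profile could look as follows; the Gaussian and Lorentzian factors are written here in their FWHM-parameterized, area-normalized forms, which is an assumption about the intended parameterization of G(x, f) and L(x, f):
import numpy as np

def pseudo_voigt(x, f_G, f_L):
    """Normalized pseudo-Voigt profile from the FWHM-based mixing formulas above."""
    f = (f_G**5 + 2.69269 * f_G**4 * f_L + 2.42843 * f_G**3 * f_L**2
         + 4.47163 * f_G**2 * f_L**3 + 0.07842 * f_G * f_L**4 + f_L**5) ** 0.2
    r = f_L / f
    eta = 1.36603 * r - 0.47719 * r**2 + 0.11116 * r**3
    gauss = (2.0 / f) * np.sqrt(np.log(2) / np.pi) * np.exp(-4.0 * np.log(2) * x**2 / f**2)
    lorentz = (f / (2.0 * np.pi)) / (x**2 + f**2 / 4.0)
    return eta * lorentz + (1.0 - eta) * gauss

x = np.linspace(-10.0, 10.0, 5)
print(pseudo_voigt(x, f_G=2.0, f_L=1.0))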
The width of the Voigt profile.
The full width at half maximum (FWHM) of the Voigt profile can be found from the
widths of the associated Gaussian and Lorentzian profiles. The FWHM of the Gaussian profile
is
formula_72
The FWHM of the Lorentzian profile is
formula_73
An approximate relation (accurate to within about 1.2%) between the widths of the Voigt, Gaussian, and Lorentzian profiles is:
formula_74
By construction, this expression is exact for a pure Gaussian or Lorentzian.
A better approximation with an accuracy of 0.02% is given by (originally found by Kielkopf)
formula_75
Again, this expression is exact for a pure Gaussian or Lorentzian.
In the same publication, a slightly more precise (within 0.012%), yet significantly more complicated expression can be found.
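These approximations are simple enough to be used directly in code; a small Python helper implementing the 0.02%-accurate formula quoted above:
def voigt_fwhm(f_G, f_L):
    """Approximate FWHM of a Voigt profile from its Gaussian and Lorentzian FWHMs."""
    return 0.5346 * f_L + (0.2166 * f_L**2 + f_G**2) ** 0.5

print(voigt_fwhm(2.0, 0.0))  # pure Gaussian limit: exactly 2.0
print(voigt_fwhm(0.0, 2.0))  # pure Lorentzian limit: approximately 2.0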
Asymmetric Pseudo-Voigt (Martinelli) function.
The asymmetric pseudo-Voigt (Martinelli) function resembles a split normal distribution by having different widths on each side of the peak position. Mathematically this is expressed as:
formula_64
with formula_65 being the weight of the Lorentzian and the width formula_76 being a split function (formula_77 for formula_78 and formula_79 for formula_80). In the limit formula_81, the Martinelli function reduces to a symmetric pseudo-Voigt function. The Martinelli function has been used to model elastic scattering on resonant inelastic X-ray scattering instruments.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n V(x;\\sigma,\\gamma) \\equiv \\int_{-\\infty}^\\infty G(x';\\sigma)L(x-x';\\gamma)\\, dx',\n"
},
{
"math_id": 1,
"text": "G(x;\\sigma)"
},
{
"math_id": 2,
"text": "\n G(x;\\sigma) \\equiv \\frac{e^{-\\frac{x^2}{2\\sigma^2}}}{\\sqrt{2\\pi}\\,\\sigma},\n"
},
{
"math_id": 3,
"text": "L(x;\\gamma)"
},
{
"math_id": 4,
"text": "\n L(x;\\gamma) \\equiv \\frac{\\gamma}{\\pi(\\gamma^2+x^2)}.\n"
},
{
"math_id": 5,
"text": "\n V(x;\\sigma,\\gamma)=\\frac{\\operatorname{Re}[w(z)]}{\\sqrt{2 \\pi}\\,\\sigma},\n"
},
{
"math_id": 6,
"text": "\nz=\\frac{x+i\\gamma}{\\sqrt{2}\\, \\sigma}.\n"
},
{
"math_id": 7,
"text": "\\sigma=0"
},
{
"math_id": 8,
"text": " \\gamma =0 "
},
{
"math_id": 9,
"text": " V(x;\\sigma,\\gamma) "
},
{
"math_id": 10,
"text": "\n \\int_{-\\infty}^\\infty V(x;\\sigma,\\gamma)\\,dx = 1,\n"
},
{
"math_id": 11,
"text": "\n \\varphi_f(t;\\sigma,\\gamma) = E(e^{ixt}) = e^{-\\sigma^2t^2/2 - \\gamma |t|}.\n"
},
{
"math_id": 12,
"text": "F(x_0;\\mu,\\sigma)\n=\\int_{-\\infty}^{x_0} \\frac{\\operatorname{Re}(w(z))}{\\sigma\\sqrt{2\\pi}}\\,dx\n=\\operatorname{Re}\\left(\\frac{1}{\\sqrt{\\pi}}\\int_{z(-\\infty)}^{z(x_0)} w(z)\\,dz\\right).\n"
},
{
"math_id": 13,
"text": "\n\\frac{1}{\\sqrt{\\pi}}\\int w(z)\\,dz =\\frac{1}{\\sqrt{\\pi}}\n\\int e^{-z^2}\\left[1-\\operatorname{erf}(-iz)\\right]\\,dz,\n"
},
{
"math_id": 14,
"text": "\n\\frac{1}{\\sqrt{\\pi}}\\int w(z)\\,dz = \\frac{\\operatorname{erf}(z)}{2}\n+\\frac{iz^2}{\\pi}\\,_2F_2\\left(1,1;\\frac{3}{2},2;-z^2\\right),\n"
},
{
"math_id": 15,
"text": "{}_2F_2"
},
{
"math_id": 16,
"text": "F(x;\\mu,\\sigma)=\\operatorname{Re}\\left[\\frac{1}{2}+\n\\frac{\\operatorname{erf}(z)}{2}\n+\\frac{iz^2}{\\pi}\\,_2F_2\\left(1,1;\\frac{3}{2},2;-z^2\\right)\\right].\n"
},
{
"math_id": 17,
"text": "\\mu_G"
},
{
"math_id": 18,
"text": "\\mu_L"
},
{
"math_id": 19,
"text": "\\mu_V = \\mu_G+\\mu_L"
},
{
"math_id": 20,
"text": "\n\\varphi_f(t;\\sigma,\\gamma,\\mu_\\mathrm{G},\\mu_\\mathrm{L})= e^{i(\\mu_\\mathrm{G}+\\mu_\\mathrm{L})t-\\sigma^2t^2/2 - \\gamma |t|}.\n"
},
{
"math_id": 21,
"text": "\\mu_V"
},
{
"math_id": 22,
"text": "\nV(x;\\mu_V,\\sigma,\\gamma)=\\frac{\\operatorname{Re}[w(z)]}{\\sigma\\sqrt{2 \\pi}},\n"
},
{
"math_id": 23,
"text": "\nz= \\frac{x-\\mu_V+i \\gamma}{\\sigma\\sqrt{2}}\n"
},
{
"math_id": 24,
"text": "\\mu_{V}"
},
{
"math_id": 25,
"text": "\\sigma"
},
{
"math_id": 26,
"text": "\\gamma"
},
{
"math_id": 27,
"text": "z"
},
{
"math_id": 28,
"text": "x_{c}=x-\\mu_{V}"
},
{
"math_id": 29,
"text": "\n\\begin{aligned}\n \\frac{\\partial}{\\partial x} V(x_{c};\\sigma,\\gamma) &= \n-\\frac{\\operatorname{Re}\\left[z ~w(z)\\right]}{\\sigma^2\\sqrt{\\pi}}\n= -\\frac{x_{c}}{\\sigma^2} \\frac{\\operatorname{Re}\\left[w(z)\\right]}{\\sigma\\sqrt{2\\pi}}+\\frac{\\gamma}{\\sigma^2} \\frac{\\operatorname{Im}\\left[w(z)\\right]}{\\sigma\\sqrt{2\\pi}} \\\\\n&= \\frac{1}{\\sigma^{3}\\sqrt{2\\pi}}\\cdot\\left(\\gamma\\cdot\\operatorname{Im}\\left[w(z)\\right]-x_{c}\\cdot\\operatorname{Re}\\left[w(z)\\right]\\right)\n\\end{aligned}\n"
},
{
"math_id": 30,
"text": "\n\\begin{aligned}\n \\frac{\\partial^2}{\\left(\\partial x\\right)^2} V(x_{c};\\sigma,\\gamma)\n&= \\frac{x_{c}^{2}-\\gamma^2-\\sigma^2}{\\sigma^4} \\frac{\\operatorname{Re}\\left[w(z)\\right]}{\\sigma\\sqrt{2\\pi}}\n-\\frac{2 x_{c} \\gamma}{\\sigma^4} \\frac{\\operatorname{Im}\\left[w(z)\\right]}{\\sigma\\sqrt{2\\pi}}\n+\\frac{\\gamma}{\\sigma^4}\\frac{1}{\\pi} \\\\\n&= -\\frac{1}{\\sigma^{5}\\sqrt{2\\pi}}\\cdot\\left(\\gamma\\cdot\\left(2x_{c}\\cdot\\operatorname{Im}\\left[w(z)\\right] - \\sigma\\cdot\\sqrt{\\frac{2}{\\pi}}\\right) + \\left(\\gamma^{2} + \\sigma^{2} - x_{c}^{2}\\right)\\cdot\\operatorname{Re}\\left[w(z)\\right]\\right),\n\\end{aligned}\n"
},
{
"math_id": 31,
"text": "\\operatorname{Re}\\left[w(z)\\right] = \\Re_{w}"
},
{
"math_id": 32,
"text": "\\operatorname{Im}\\left[w(z)\\right] = \\Im_{w}"
},
{
"math_id": 33,
"text": "\n\\begin{align}\n\\frac{\\partial V}{\\partial \\mu_{V}} = -\\frac{\\partial V}{\\partial x} = \\frac{1}{\\sigma^{3}\\sqrt{2\\pi}}\\cdot\\left(x_{c}\\cdot\\Re_{w} - \\gamma\\cdot\\Im_{w}\\right)\n\\end{align}\n"
},
{
"math_id": 34,
"text": "\n\\begin{align} \n\\frac{\\partial V}{\\partial \\sigma} = \\frac{1}{\\sigma^{4}\\sqrt{2\\pi}}\\cdot\\left(\\left(x_{c}^{2} - \\gamma^{2}-\\sigma^{2}\\right)\\cdot\\Re_{w} - 2x_{c}\\gamma\\cdot\\Im_{w} + \\gamma\\sigma\\cdot\\sqrt{\\frac{2}{\\pi}}\\right)\n\\end{align}\n"
},
{
"math_id": 35,
"text": "\n\\begin{align} \n\\frac{\\partial V}{\\partial \\gamma} = -\\frac{1}{\\sigma^{3}\\sqrt{2\\pi}}\\cdot\\left(\\sigma\\cdot\\sqrt{\\frac{2}{\\pi}} - x_{c}\\cdot\\Im_{w} - \\gamma\\cdot\\Re_{w}\\right)\n\\end{align}\n"
},
{
"math_id": 36,
"text": "V"
},
{
"math_id": 37,
"text": "\n\\begin{align}\n\\frac{\\partial V'}{\\partial \\mu_{V}} = -\\frac{\\partial V'}{\\partial x} = -\\frac{\\partial^{2} V}{\\left(\\partial x\\right)^{2}} = \\frac{1}{\\sigma^{5}\\sqrt{2\\pi}}\\cdot\\left(\\gamma\\cdot\\left(2x_{c}\\cdot\\Im_{w} - \\sigma\\cdot\\sqrt{\\frac{2}{\\pi}}\\right) + \\left(\\gamma^{2} + \\sigma^{2} - x_{c}^{2}\\right)\\cdot\\Re_{w}\\right)\n\\end{align}\n"
},
{
"math_id": 38,
"text": "\n\\begin{align}\n\\frac{\\partial V'}{\\partial \\sigma} = \\frac{3}{\\sigma^{6}\\sqrt{2\\pi}}\\cdot\\left(-\\gamma\\sigma x_{c}\\cdot\\frac{2\\sqrt{2}}{3\\sqrt{\\pi}} + \\left(x_{c}^{2} - \\frac{\\gamma^{2}}{3} - \\sigma^{2}\\right)\\cdot\\gamma\\cdot\\Im_{w} + \\left(\\gamma^{2} + \\sigma^{2} - \\frac{x_{c}^{2}}{3}\\right)\\cdot x_{c}\\cdot\\Re_{w}\\right)\n\\end{align}\n"
},
{
"math_id": 39,
"text": "\n\\begin{align}\n\\frac{\\partial V'}{\\partial \\gamma} = \\frac{1}{\\sigma^{5}\\sqrt{2\\pi}}\\cdot\\left(x_{c}\\cdot\\left(\\sigma\\cdot\\sqrt{\\frac{2}{\\pi}} - 2\\gamma\\cdot\\Re_{w}\\right) + \\left(\\gamma^{2} + \\sigma^{2} - x_{c}^{2}\\right)\\cdot\\Im_{w}\\right)\n\\end{align}\n"
},
{
"math_id": 40,
"text": "V' = \\frac{\\partial V}{\\partial x}"
},
{
"math_id": 41,
"text": "\n\\begin{align}\n\\frac{\\partial V''}{\\partial \\mu_{V}} = -\\frac{\\partial V''}{\\partial x} = -\\frac{\\partial^{3} V}{\\left(\\partial x\\right)^{3}} = -\\frac{3}{\\sigma^{7}\\sqrt{2\\pi}}\\cdot\\left(\\left(x_{c}^{2} - \\frac{\\gamma^{2}}{3} - \\sigma^{2}\\right)\\cdot\\gamma\\cdot\\Im_{w} + \\left(\\gamma^{2} + \\sigma^{2} - \\frac{x_{c}^{2}}{3}\\right)\\cdot x_{c}\\cdot\\Re_{w} - \\gamma\\sigma x_{c}\\cdot\\frac{2\\sqrt{2}}{3\\sqrt{\\pi}}\\right)\n\\end{align}\n"
},
{
"math_id": 42,
"text": "\n\\begin{align}\n& \\frac{\\partial V''}{\\partial \\sigma} = -\\frac{1}{\\sigma^{8}\\sqrt{2\\pi}}\\cdot \\\\ \n& \\left(\\left(-3\\gamma x_{c}\\sigma^{2} + \\gamma x_{c}^{3} - \\gamma^{3} x_{c}\\right)\\cdot 4\\cdot\\Im_{w} + \\left(\\left(2x_{c}^{2} - 2\\gamma^{2} - \\sigma^{2}\\right)\\cdot 3\\sigma^{2} + 6\\gamma^{2} x_{c}^{2} - x_{c}^{4} - \\gamma^{4}\\right)\\cdot\\Re_{w} + \\left(\\gamma^{2} + 5\\sigma^{2} - 3x_{c}^{2}\\right)\\cdot\\gamma\\sigma\\cdot\\sqrt{\\frac{2}{\\pi}}\\right)\n\\end{align}\n"
},
{
"math_id": 43,
"text": "\n\\begin{align}\n\\frac{\\partial V''}{\\partial \\gamma} = -\\frac{3}{\\sigma^{7}\\sqrt{2\\pi}}\\cdot\\left(\\left(\\gamma^{2} + \\sigma^{2} - \\frac{x_{c}^{2}}{3}\\right)\\cdot x_{c}\\cdot\\Im_{w} + \\left(\\frac{\\gamma^{2}}{3} + \\sigma^{2} - x_{c}^{2}\\right)\\cdot \\gamma\\cdot\\Re_{w} + \\left(x_{c}^{2} - \\gamma^{2} - 2\\sigma^{2} \\right)\\cdot\\sigma\\cdot\\frac{\\sqrt{2}}{3\\sqrt{\\pi}}\\right)\n\\end{align}\n"
},
{
"math_id": 44,
"text": "V'' = \\frac{\\partial^{2} V}{\\left(\\partial x\\right)^{2}}"
},
{
"math_id": 45,
"text": "\\Re_{w}"
},
{
"math_id": 46,
"text": "\\Im_{w}"
},
{
"math_id": 47,
"text": "w\\left(z\\right)"
},
{
"math_id": 48,
"text": "U(x,t)+iV(x,t) = \\sqrt \\frac{\\pi}{4t} e^{z^2} \\operatorname{erfc}(z) = \\sqrt \\frac{\\pi}{4t} w(iz),"
},
{
"math_id": 49,
"text": "H(a,u) = \\frac{U(\\frac{u}{a},\\frac{1}{4a^2})}{\\sqrt \\pi\\,a},"
},
{
"math_id": 50,
"text": "z = \\frac{1-ix}{2\\sqrt t},"
},
{
"math_id": 51,
"text": " V(x;\\sigma,\\gamma) = \\frac{H(a,u)}{\\sqrt{2\\pi}\\,\\sigma}, "
},
{
"math_id": 52,
"text": " u = \\frac{x}{\\sqrt 2\\, \\sigma} "
},
{
"math_id": 53,
"text": " a = \\frac{\\gamma}{\\sqrt 2\\,\\sigma}. "
},
{
"math_id": 54,
"text": "H(a,u)"
},
{
"math_id": 55,
"text": "\nT(a,u) = R - \\left(a /\\sqrt{\\pi} P \\right) ~\\left[R^2 ~(4 P^2 + 7 P + 4 + Q) - Q - 1\\right] \\, ,\n"
},
{
"math_id": 56,
"text": "P \\equiv u^2"
},
{
"math_id": 57,
"text": "Q \\equiv 3 / (2 P) "
},
{
"math_id": 58,
"text": "R \\equiv e^{-P}"
},
{
"math_id": 59,
"text": "a"
},
{
"math_id": 60,
"text": "H(a,u) \\approx T(a,u) + \\mathcal{O}(a)"
},
{
"math_id": 61,
"text": "\n\\epsilon \\equiv \\frac{\\vert H(a,u) - T(a,u) \\vert}{H(a,u)} \\lesssim 10^{-4}\n"
},
{
"math_id": 62,
"text": "a \\lesssim 10^{-4}"
},
{
"math_id": 63,
"text": "T(a,u)"
},
{
"math_id": 64,
"text": "\nV_p(x,f) = \\eta \\cdot L(x,f) + (1 - \\eta) \\cdot G(x,f) \n"
},
{
"math_id": 65,
"text": " 0 < \\eta < 1 "
},
{
"math_id": 66,
"text": " \\eta "
},
{
"math_id": 67,
"text": "\n\\eta = 1.36603 (f_L/f) - 0.47719 (f_L/f)^2 + 0.11116(f_L/f)^3,\n"
},
{
"math_id": 68,
"text": " f_L "
},
{
"math_id": 69,
"text": " f_G "
},
{
"math_id": 70,
"text": " f "
},
{
"math_id": 71,
"text": "\nf = [f_G^5 + 2.69269 f_G^4 f_L + 2.42843 f_G^3 f_L^2 + 4.47163 f_G^2 f_L^3 + 0.07842 f_G f_L^4 + f_L^5]^{1/5}.\n"
},
{
"math_id": 72,
"text": "f_\\mathrm{G}=2\\sigma\\sqrt{2\\ln(2)}."
},
{
"math_id": 73,
"text": "f_\\mathrm{L}=2\\gamma."
},
{
"math_id": 74,
"text": "f_\\mathrm{V}\\approx f_\\mathrm{L}/2+\\sqrt{f_\\mathrm{L}^2/4+f_\\mathrm{G}^2}."
},
{
"math_id": 75,
"text": "f_\\mathrm{V}\\approx 0.5346 f_\\mathrm{L}+\\sqrt{0.2166f_\\mathrm{L}^2+f_\\mathrm{G}^2}."
},
{
"math_id": 76,
"text": "f"
},
{
"math_id": 77,
"text": "f=f_1"
},
{
"math_id": 78,
"text": "x<0"
},
{
"math_id": 79,
"text": "f=f_2"
},
{
"math_id": 80,
"text": "x\\geq 0"
},
{
"math_id": 81,
"text": "f_1 \\rightarrow f_2"
}
] | https://en.wikipedia.org/wiki?curid=1202098 |
1202168 | Relational operator | Programming language construct
In computer science, a relational operator is a programming language construct or operator that tests or defines some kind of relation between two entities. These include numerical equality ("e.g.", 5 = 5) and inequalities ("e.g.", 4 ≥ 3).
In programming languages that include a distinct boolean data type in their type system, like Pascal, Ada, or Java, these operators usually evaluate to true or false, depending on whether the conditional relationship between the two operands holds. In languages such as C, relational operators return the integers 0 or 1, where 0 stands for false and any non-zero value stands for true.
An expression created using a relational operator forms what is termed a "relational expression" or a "condition". Relational operators can be seen as special cases of logical predicates.
Equality.
Usage.
Equality is used in many programming language constructs and data types. It is used to test if an element already exists in a set, or to access to a value through a key. It is used in switch statements to dispatch the control flow to the correct branch, and during the unification process in logic programming.
There can be multiple valid definitions of equality, and any particular language might adopt one or more of them, depending on various design aspects. One possible meaning of equality is that "if "a" equals "b", then either "a" or "b" can be used interchangeably in any context without noticing any difference". But this statement does not necessarily hold, particularly when taking into account mutability together with content equality.
Location equality vs. content equality.
Sometimes, particularly in object-oriented programming, the comparison raises questions of data types and inheritance, equality, and identity. It is often necessary to distinguish between:
In many modern programming languages, objects and data structures are accessed through references. In such languages, there arises a need to test for two different types of equality:
* Structural equality (that is, their contents are the same), which may be either shallow (testing only immediate subparts) or deep (testing for equality of subparts recursively). A simple way to achieve this is through representational equality: checking that the values have the same representation.
* Some other tailor-made equality, preserving the external behavior. For example, 1/2 and 2/4 are considered equal when seen as a rational number. A possible requirement would be that "A = B if and only if all operations on objects A and B will have the same result", in addition to reflexivity, symmetry, and transitivity.
The first type of equality usually implies the second (except for things like "not a number" (NaN) which are unequal to themselves), but the converse is not necessarily true. For example, two string objects may be distinct objects (unequal in the first sense) but contain the same sequence of characters (equal in the second sense). See identity for more of this issue.
Real numbers, including many simple fractions, cannot be represented exactly in floating-point arithmetic, and it may be necessary to test for equality within a given tolerance. Such tolerance, however, can easily break desired properties such as transitivity, whereas reflexivity breaks too: the IEEE floating-point standard requires that "NaN ≠ NaN" holds. In contrast, the (2022) private standard for posit arithmetic (posit proponents mean to replace IEEE floats) has a similar concept, NaR (Not a Real), where "NaR = NaR" holds.
Other programming elements such as computable functions, may either have no sense of equality, or an equality that is uncomputable. For these reasons, some languages define an explicit notion of "comparable", in the form of a base class, an interface, a trait or a protocol, which is used either explicitly, by declaration in source code, or implicitly, via the structure of the type involved.
Comparing values of different types.
In JavaScript, PHP, VBScript and a few other dynamically typed languages, the standard equality operator follows so-called "loose typing", that is it evaluates to "true" even if two values are not equal and are of incompatible types, but can be "coerced" to each other by some set of language-specific rules, making the number 4 compare equal to the text string "4", for instance. Although such behaviour is typically meant to make the language easier to use, it can lead to surprising and difficult-to-predict consequences that many programmers are unaware of. For example, JavaScript's loose equality rules can cause equality to be intransitive (i.e. codice_0 and codice_1, but codice_2), or make certain values equal to their own negation.
A strict equality operator is also often available in those languages, returning true only for values with identical or equivalent types (in PHP, codice_3 is false although codice_4 is true). For languages where the number 0 may be interpreted as "false", this operator may simplify things such as checking for zero (as codice_5 would be true for x being either 0 or "0" using the type agnostic equality operator).
Ordering.
"Greater than" and "less than" comparison of non-numeric data is performed according to a sort convention (such as, for text strings, lexicographical order) which may be built into the programming language and/or configurable by a programmer.
When it is desired to associate a numeric value with the result of a comparison between two data items, say "a" and "b", the usual convention is to assign −1 if a < b, 0 if a = b and 1 if a > b. For example, the C function codice_6 performs a three-way comparison and returns −1, 0, or 1 according to this convention, and qsort expects the comparison function to return values according to this convention. In sorting algorithms, the efficiency of comparison code is critical since it is one of the major factors contributing to sorting performance.
Comparison of programmer-defined data types (data types for which the programming language has no in-built understanding) may be carried out by custom-written or library functions (such as codice_6 mentioned above), or, in some languages, by "overloading" a comparison operator – that is, assigning a programmer-defined meaning that depends on the data types being compared. Another alternative is using some convention such as member-wise comparison.
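As an illustration of the −1/0/1 convention and of supplying a custom comparison to a sort routine, the following Python sketch defines a three-way comparison (Python 3 no longer provides a built-in three-way comparison function) and plugs it into sorting via the standard functools.cmp_to_key adapter:
from functools import cmp_to_key

def three_way(a, b):
    """Return -1, 0 or 1 according to the usual comparison convention."""
    return (a > b) - (a < b)

print(three_way(3, 5), three_way(5, 5), three_way(7, 5))  # -1 0 1
print(sorted(["pear", "apple", "fig"], key=cmp_to_key(three_way)))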
Logical equivalence.
Though perhaps unobvious at first, like the boolean logical operators XOR, AND, OR, and NOT, relational operators can be designed to have logical equivalence, such that they can all be defined in terms of one another. The following four conditional statements all have the same logical equivalence "E" (either all true or all false) for any given "x" and "y" values:
formula_0
This relies on the domain being totally ordered.
Standard relational operators.
The most common numerical relational operators used in programming languages are shown below. Standard SQL uses the same operators as BASIC, while many databases allow codice_8 in addition to codice_9 from the standard. SQL follows strict boolean algebra, i.e. doesn't use short-circuit evaluation, which is common to most languages below. E.g. PHP has it, but otherwise it has these same two operators defined as aliases, like many SQL databases.
<templatestyles src="Reflist/styles.css" />
Other conventions are less common: Common Lisp and Macsyma/Maxima use Basic-like operators for numerical values, except for inequality, which is codice_10 in Common Lisp and codice_11 in Macsyma/Maxima. Common Lisp has multiple other sets of equality and relational operators serving different purposes, including codice_12, codice_13, codice_14, codice_15, and codice_16. Older Lisps used codice_14, codice_18, and codice_19; and negated them using codice_20 for the remaining operators.
Syntax.
Relational operators are also used in technical literature instead of words. Relational operators are usually written in infix notation, if supported by the programming language, which means that they appear between their operands (the two expressions being related). For example, the following expression in Python will print the message if "x" is less than "y":
if x < y:
print("x is less than y in this example")
Other programming languages, such as Lisp, use prefix notation, as follows:
(< x y)
Operator chaining.
In mathematics, it is common practice to chain relational operators, such as in 3 < x < y < 20 (meaning 3 < x "and" x < y "and" y < 20). The syntax is clear since these relational operators in mathematics are transitive.
However, many recent programming languages would see an expression like 3 < x < y as consisting of two left- (or right-) associative operators, interpreting it as something like codice_21. If we say that x=4, we then get codice_22, and evaluation will give codice_23, which generally does not make sense. However, it does compile in C/C++ and some other languages, yielding a surprising result (as "true" would be represented by the number 1 here).
It is possible to give the expression codice_24 its familiar mathematical meaning, and some programming languages such as Python and Raku do that. Others, such as C# and Java, do not, partly because it would differ from the way most other infix operators work in C-like languages. The D programming language does not do that since it maintains some compatibility with C, and "Allowing C expressions but with subtly different semantics (albeit arguably in the right direction) would add more confusion than convenience".
Some languages, like Common Lisp, use multiple argument predicates for this. In Lisp codice_25 is true when x is between 1 and 10.
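The difference between the chained and the C-like reading can be seen directly in Python, which implements the mathematical interpretation; the grouping that a C-like language would apply has to be forced with parentheses:
x, y = 4, 10
print(3 < x < y < 20)   # chained: 3 < x and x < y and y < 20, so True
print((3 < x) < y)      # C-like grouping: True < 10, i.e. 1 < 10, so True
x = 2
print(3 < x < 20)       # False: the chain already fails at 3 < x
print((3 < x) < 20)     # True: False < 20, i.e. 0 < 20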
Confusion with assignment operators.
Early FORTRAN (1956–57) was constrained by heavily restricted character sets in which codice_26 was the only relational operator available. There were no codice_27 or codice_28 (and certainly no codice_29 or codice_30). This forced the designers to define symbols such as codice_31, codice_32, codice_33, codice_34, etc., and subsequently made it tempting to use the remaining codice_26 character for copying, despite the obvious incoherence with mathematical usage (codice_36 should be impossible).
International Algebraic Language (IAL, ALGOL 58) and ALGOL (1958 and 1960) thus introduced codice_37 for assignment, leaving the standard codice_26 available for equality, a convention followed by CPL, ALGOL W, ALGOL 68, Basic Combined Programming Language (BCPL), Simula, SET Language (SETL), Pascal, Smalltalk, Modula-2, Ada, Standard ML, OCaml, Eiffel, Object Pascal (Delphi), Oberon, Dylan, VHSIC Hardware Description Language (VHDL), and several other languages.
B and C.
This uniform de facto standard among most programming languages was eventually changed, indirectly, by a minimalist compiled language named B. Its sole intended application was as a vehicle for a first port of (a then very primitive) Unix, but it also evolved into the very influential C language.
B started off as a syntactically changed variant of the systems programming language BCPL, a simplified (and typeless) version of CPL. In what has been described as a "strip-down" process, the codice_39 and codice_40 operators of BCPL were replaced with codice_41 and codice_42 (which would later become codice_43 and codice_44, respectively). In the same process, the ALGOL-style codice_37 of BCPL was replaced by codice_26 in B. The reason for all this is unknown. As variable updates had no special syntax in B (such as codice_47 or similar) and were allowed in expressions, this non-standard meaning of the equal sign meant that the traditional semantics of the equal sign now had to be associated with another symbol. Ken Thompson used the ad hoc codice_48 combination for this.
As a small type system was later introduced, B then became C. The popularity of this language along with its association with Unix, led to Java, C#, and many other languages following suit, syntactically, despite this needless conflict with the mathematical meaning of the equal sign.
Languages.
Assignments in C have a value and since any non-zero scalar value is interpreted as "true" in conditional expressions, the code codice_49 is legal, but has a very different meaning from codice_50. The former code fragment means "assign "y" to "x", and if the new value of "x" is not zero, execute the following statement". The latter fragment means "if and only if "x" is equal to "y", execute the following statement".
int x = 1;
int y = 2;
if (x = y) {
/* This code will always execute if y is anything but 0 */
printf("x is %d and y is %d\n", x, y);
}
Though Java and C# have the same operators as C, this mistake usually causes a compile error in these languages instead, because the if-condition must be of type codice_51, and there is no implicit way to convert from other types ("e.g.", numbers) into codice_51s. So unless the variable that is assigned to has type codice_51 (or wrapper type codice_54), there will be a compile error.
In ALGOL-like languages such as Pascal, Delphi, and Ada (in the sense that they allow nested function definitions), and in Python, and many functional languages, among others, assignment operators cannot appear in an expression (including codice_55 clauses), thus precluding this class of error. Some compilers, such as GNU Compiler Collection (GCC), provide a warning when compiling code containing an assignment operator inside an if statement, though there are some legitimate uses of an assignment inside an if-condition. In such cases, the assignment must be wrapped in an extra pair of parentheses explicitly, to avoid the warning.
Similarly, some languages, such as BASIC use just the codice_26 symbol for both assignment "and" equality, as they are syntactically separate (as with Pascal, Ada, Python, etc., assignment operators cannot appear in expressions).
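For instance, feeding an assignment inside a condition to the Python compiler is rejected outright; since Python 3.8 an intentional assignment in a condition must be written explicitly with the walrus operator :=. A small demonstration:
source = "if x = y:\n    pass\n"
try:
    compile(source, "demo", "exec")
except SyntaxError as err:
    print("rejected at compile time:", err.msg)

# An intentional assignment inside a condition must be spelled with ':=', e.g.
#     if (n := some_value) > 5: ...
# which keeps the accidental '=' / '==' mix-up a syntax error.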
Some programmers get in the habit of writing comparisons against a constant in the reverse of the usual order:
if (2 == a) { /* Mistaken use of = versus == would be a compile-time error */ }
If codice_26 is used accidentally, the resulting code is invalid because 2 is not a variable. The compiler will generate an error message, on which the proper operator can be substituted. This coding style is termed left-hand comparison, or Yoda conditions.
This table lists the different mechanisms to test for these two types of equality in various languages:
<templatestyles src="Reflist/styles.css" />
Ruby uses codice_58 to mean "b is a member of the set a", though the details of what it means to be a member vary considerably depending on the data types involved. codice_59 is here known as the "case equality" or "case subsumption" operator.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nE = \\begin{cases}\nx < y \\\\\ny > x \\\\\nx \\ngeq y \\\\\ny \\nleq x\n\\end{cases}"
}
] | https://en.wikipedia.org/wiki?curid=1202168 |
1202193 | Stopping time | Time at which a random variable stops exhibiting a behavior of interest
In probability theory, in particular in the study of stochastic processes, a stopping time (also Markov time, Markov moment, optional stopping time or optional time) is a specific type of “random time”: a random variable whose value is interpreted as the time at which a given stochastic process exhibits a certain behavior of interest. A stopping time is often defined by a stopping rule, a mechanism for deciding whether to continue or stop a process on the basis of the present position and past events, and which will almost always lead to a decision to stop at some finite time.
Stopping times occur in decision theory, and the optional stopping theorem is an important result in this context. Stopping times are also frequently applied in mathematical proofs to “tame the continuum of time”, as Chung put it in his book (1982).
Definition.
Discrete time.
Let formula_0 be a random variable, which is defined on the filtered probability space formula_1 with values in formula_2. Then formula_0 is called a stopping time (with respect to the filtration formula_3), if the following condition holds:
formula_4 for all formula_5
Intuitively, this condition means that the "decision" of whether to stop at time formula_6 must be based only on the information present at time formula_6, not on any future information.
General case.
Let formula_0 be a random variable, which is defined on the filtered probability space formula_7 with values in formula_8. In most cases, formula_9. Then formula_0 is called a stopping time (with respect to the filtration formula_10), if the following condition holds:
formula_11 for all formula_12
As adapted process.
Let formula_0 be a random variable, which is defined on the filtered probability space formula_7 with values in formula_8. Then formula_0 is called a stopping time if the stochastic process formula_13, defined by
formula_14
is adapted to the filtration formula_15
Comments.
Some authors explicitly exclude cases where formula_0 can be formula_16, whereas other authors allow formula_0 to take any value in the closure of formula_8.
Examples.
To illustrate some examples of random times that are stopping rules and some that are not, consider a gambler playing roulette with a typical house edge, starting with $100 and betting $1 on red in each game:
To illustrate the more general definition of stopping time, consider Brownian motion, which is a stochastic process formula_17, where each formula_18 is a random variable defined on the probability space formula_19. We define a filtration on this probability space by letting formula_20 be the "σ"-algebra generated by all the sets of the form formula_21 where formula_22 and formula_23 is a Borel set. Intuitively, an event "E" is in formula_20 if and only if we can determine whether "E" is true or false just by observing the Brownian motion from time 0 to time "t".
Hitting times like the second example above can be important examples of stopping times. While it is relatively straightforward to show that essentially all stopping times are hitting times, it can be much more difficult to show that a certain hitting time is a stopping time. The latter types of results are known as the Début theorem.
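The hitting-time examples above can be made concrete with a small simulation. The Python sketch below generates a simple ±1 random walk and stops at the first time the walk reaches a prescribed level; the point of the example is that the decision to stop at step n only inspects the path up to step n, which is exactly the stopping-time condition:
import random

def hitting_time(a, max_steps=10_000, seed=0):
    """First time a simple +/-1 random walk hits level a (None if not reached)."""
    rng = random.Random(seed)
    position = 0
    for n in range(1, max_steps + 1):
        position += rng.choice((-1, 1))
        # Only the path observed so far is used to decide whether to stop.
        if position == a:
            return n
    return None

print(hitting_time(5))
print(hitting_time(-3, seed=1))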
Localization.
Stopping times are frequently used to generalize certain properties of stochastic processes to situations in which the required property is satisfied in only a local sense. First, if "X" is a process and τ is a stopping time, then "X""τ" is used to denote the process "X" stopped at time "τ".
formula_32
Then, "X" is said to locally satisfy some property "P" if there exists a sequence of stopping times τ"n", which increases to infinity and for which the processes
formula_33
satisfy property "P". Common examples, with time index set "I" = [0, ∞), are as follows:
Local martingale process. A process "X" is a local martingale if it is càdlàg and there exists a sequence of stopping times τ"n" increasing to infinity, such that
formula_33
is a martingale for each "n".
Locally integrable process. A non-negative and increasing process "X" is locally integrable if there exists a sequence of stopping times "τ""n" increasing to infinity, such that
formula_34
for each "n".
Types of stopping times.
Stopping times, with time index set "I" = [0,∞), are often divided into one of several types depending on whether it is possible to predict when they are about to occur.
A stopping time "τ" is predictable if it is equal to the limit of an increasing sequence of stopping times "τ""n" satisfying "τ""n" < "τ" whenever "τ" > 0. The sequence "τ""n" is said to "announce" "τ", and predictable stopping times are sometimes known as "announceable".
Examples of predictable stopping times are hitting times of continuous and adapted processes. If "τ" is the first time at which a continuous and real valued process "X" is equal to some value "a", then it is announced by the sequence "τ""n", where "τ""n" is the first time at which "X" is within a distance of 1/"n" of "a".
Accessible stopping times are those that can be covered by a sequence of predictable times. That is, stopping time "τ" is accessible if, P("τ" = "τ""n" for some "n") = 1, where "τ""n" are predictable times.
A stopping time "τ" is totally inaccessible if it can never be announced by an increasing sequence of stopping times. Equivalently, P("τ" = "σ" < ∞) = 0 for every predictable time "σ". Examples of totally inaccessible stopping times include the jump times of Poisson processes.
Every stopping time "τ" can be uniquely decomposed into an accessible and totally inaccessible time. That is, there exists a unique accessible stopping time σ and totally inaccessible time υ such that "τ" = "σ" whenever "σ" < ∞, "τ" = "υ" whenever "υ" < ∞, and "τ" = ∞ whenever "σ" = "υ" = ∞. Note that in the statement of this decomposition result, stopping times do not have to be almost surely finite, and can equal ∞.
Stopping rules in clinical trials.
Clinical trials in medicine often perform interim analyses, in order to determine whether the trial has already met its endpoints.
However, interim analyses create the risk of false-positive results, and therefore stopping boundaries are used to determine the number and timing of interim analyses (also known as alpha-spending, to denote the rate of false positives).
At each of R interim tests, the trial is stopped if the likelihood is below a threshold p, which depends on the method used. See Sequential analysis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\tau "
},
{
"math_id": 1,
"text": " (\\Omega, \\mathcal F, (\\mathcal F_n)_{n \\in \\N}, P) "
},
{
"math_id": 2,
"text": " \\mathbb N \\cup \\{ +\\infty \\}"
},
{
"math_id": 3,
"text": " \\mathbb F= (\\mathcal F_n)_{n \\in \\N} "
},
{
"math_id": 4,
"text": " \\{ \\tau =n \\} \\in \\mathcal F_n "
},
{
"math_id": 5,
"text": " n "
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": " (\\Omega, \\mathcal F, (\\mathcal F_t)_{t \\in T}, P) "
},
{
"math_id": 8,
"text": " T"
},
{
"math_id": 9,
"text": " T=[0,+ \\infty) "
},
{
"math_id": 10,
"text": " \\mathbb F= (\\mathcal F_t)_{t \\in T} "
},
{
"math_id": 11,
"text": " \\{ \\tau \\leq t \\} \\in \\mathcal F_t "
},
{
"math_id": 12,
"text": " t \\in T "
},
{
"math_id": 13,
"text": " X=(X_t)_{t \\in T}"
},
{
"math_id": 14,
"text": " X_t:= \\begin{cases} 1 & \\text{ if } t < \\tau \\\\ 0 &\\text{ if } t \\geq \\tau \\end{cases} "
},
{
"math_id": 15,
"text": " \\mathbb F= (\\mathcal F_t)_{t \\in T}"
},
{
"math_id": 16,
"text": " + \\infty "
},
{
"math_id": 17,
"text": "(B_t)_{t\\geq 0}"
},
{
"math_id": 18,
"text": "B_t"
},
{
"math_id": 19,
"text": "(\\Omega, \\mathcal{F}, \\mathbb{P})"
},
{
"math_id": 20,
"text": "\\mathcal{F}_t"
},
{
"math_id": 21,
"text": "(B_s)^{-1}(A)"
},
{
"math_id": 22,
"text": "0\\leq s \\leq t"
},
{
"math_id": 23,
"text": "A\\subseteq \\mathbb{R}"
},
{
"math_id": 24,
"text": "\\tau:=t_0"
},
{
"math_id": 25,
"text": "t_0"
},
{
"math_id": 26,
"text": "a\\in\\mathbb{R}."
},
{
"math_id": 27,
"text": "\\tau:=\\inf \\{t\\geq 0 \\mid B_t = a\\}"
},
{
"math_id": 28,
"text": "\\tau:=\\inf \\{t\\geq 1 \\mid B_s > 0 \\text{ for all } s\\in[t-1,t]\\}"
},
{
"math_id": 29,
"text": "\\left(\\Omega, \\mathcal{F}, \\left\\{ \\mathcal{F}_{t} \\right \\}_{t \\geq 0}, \\mathbb{P}\\right)"
},
{
"math_id": 30,
"text": "\\tau _1 \\wedge \\tau _2"
},
{
"math_id": 31,
"text": "\\tau _1 \\vee \\tau _2"
},
{
"math_id": 32,
"text": " X^\\tau_t=X_{\\min(t,\\tau)}"
},
{
"math_id": 33,
"text": "\\mathbf{1}_{\\{\\tau_n>0\\}}X^{\\tau_n}"
},
{
"math_id": 34,
"text": "\\operatorname{E} \\left [\\mathbf{1}_{\\{\\tau_n>0\\}}X^{\\tau_n} \\right ]<\\infty"
}
] | https://en.wikipedia.org/wiki?curid=1202193 |
12022054 | Cannon's algorithm | Algorithm for matrix multiplication
In computer science, Cannon's algorithm is a distributed algorithm for matrix multiplication for two-dimensional meshes first described in 1969 by Lynn Elliot Cannon.
It is especially suitable for computers laid out in an "N" × "N" mesh. While Cannon's algorithm works well in homogeneous 2D grids, extending it to heterogeneous 2D grids has been shown to be difficult.
The main advantage of the algorithm is that its storage requirements remain constant and are independent of the number of processors.
The Scalable Universal Matrix Multiplication Algorithm (SUMMA)
is a more practical algorithm that requires less workspace and overcomes the need for a square 2D grid. It is used by the ScaLAPACK, PLAPACK, and Elemental libraries.
Algorithm overview.
When multiplying two "n"×"n" matrices A and B, we need "n"×"n" processing elements (PEs) arranged in a 2D grid.
// PE(i , j)
k := (i + j) mod N;
a := a[i][k];
b := b[k][j];
c[i][j] := 0;
for (l := 0; l < N; l++) {
c[i][j] := c[i][j] + a * b;
concurrently {
send a to PE(i, (j + N − 1) mod N);
send b to PE((i + N − 1) mod N, j);
} with {
receive a' from PE(i, (j + 1) mod N);
receive b' from PE((i + 1) mod N, j );
a := a';
b := b';
}
}
We need to select k in every iteration for every processing element (PE) so that processors don't access the same data for computing formula_0.
Therefore, processors in the same row or column must begin the summation with different indices. If, for example, "PE(0,0)" calculates formula_1 in the first step, "PE(0,1)" chooses formula_2 first. The selection of "k := (i + j) mod n" for "PE(i,j)" satisfies this constraint for the first step.
In the first step we distribute the input matrices between the processors based on the previous rule.
In the next iterations we choose a new "k' := (k + 1) mod n" for every processor. This way every processor will continue accessing different values of the matrices. The needed data is then always at the neighbouring processors. A "PE(i,j)" then needs the formula_3 from "PE(i,(j + 1) mod n)" and the formula_4 from "PE((i + 1) mod n,j)" for the next step. This means that formula_3 has to be passed cyclically to the left and formula_4 cyclically upwards. The results of the multiplications are summed up as usual. After n steps, each processor has calculated all formula_0 once, and their sum is thus the desired formula_5.
After the initial distribution to each processor, only the data for the next step has to be stored: the intermediate result of the previous sum, a formula_6 and a formula_7. This means that all three matrices only need to be stored in memory once, evenly distributed across the processors.
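The shift pattern can be simulated on a single machine by treating each matrix element as the block held by one PE. The NumPy sketch below is only a functional model of the communication pattern (cyclic row and column rotations), not a distributed implementation:
import numpy as np

def cannon_matmul(A, B):
    """Simulate Cannon's algorithm: initial skew, then N shift-and-multiply steps."""
    N = A.shape[0]
    a = A.astype(float)
    b = B.astype(float)
    # Initial alignment: row i of A rotated left by i, column j of B rotated up by j,
    # so PE(i, j) starts with a[i][(i + j) mod N] and b[(i + j) mod N][j].
    for i in range(N):
        a[i, :] = np.roll(a[i, :], -i)
    for j in range(N):
        b[:, j] = np.roll(b[:, j], -j)
    C = np.zeros((N, N))
    for _ in range(N):
        C += a * b                  # each PE multiplies its current local pair
        a = np.roll(a, -1, axis=1)  # pass a cyclically to the left
        b = np.roll(b, -1, axis=0)  # pass b cyclically upwards
    return C

rng = np.random.default_rng(0)
A, B = rng.random((4, 4)), rng.random((4, 4))
print(np.allclose(cannon_matmul(A, B), A @ B))  # True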
Generalisation.
In practice we have much fewer processors than the matrix elements. We can replace the matrix elements with submatrices, so that every processor processes more values. The scalar multiplication and addition become sequential matrix multiplication and addition. The width and height of the submatrices will be formula_8.
The runtime of the algorithm is formula_9, where formula_10 is the time of the initial distribution of the matrices in the first step, formula_11 is the calculation of the intermediate results, and formula_12 and formula_13 stand for the time it takes to establish a connection and to transmit a byte, respectively.
A disadvantage of the algorithm is that there are many connection setups, with small message sizes. It would be better to be able to transmit more data in each message. | [
{
"math_id": 0,
"text": "a_{ik} * b_{kj}"
},
{
"math_id": 1,
"text": "a_{00} * b_{00}"
},
{
"math_id": 2,
"text": "a_{01} * b_{11}"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "c_{ij}"
},
{
"math_id": 6,
"text": "a_{ik}"
},
{
"math_id": 7,
"text": " b_{kj}"
},
{
"math_id": 8,
"text": "N=n/\\sqrt {p}"
},
{
"math_id": 9,
"text": "T(n, p) = T_{coll} (n/N, p) + N \\cdot T_{seq}(n/N) + 2(N - 1)(T_{start} + T_{byte}(n/N)^2)"
},
{
"math_id": 10,
"text": "T_{coll}"
},
{
"math_id": 11,
"text": "T_{seq}"
},
{
"math_id": 12,
"text": "T_{start}"
},
{
"math_id": 13,
"text": "T_{byte}"
}
] | https://en.wikipedia.org/wiki?curid=12022054 |
1202314 | Autocovariance | Concept in probability and statistics
In probability theory and statistics, given a stochastic process, the autocovariance is a function that gives the covariance of the process with itself at pairs of time points. Autocovariance is closely related to the autocorrelation of the process in question.
Auto-covariance of stochastic processes.
Definition.
With the usual notation formula_0 for the expectation operator, if the stochastic process formula_1 has the mean function formula_2, then the autocovariance is given by
K_XX(t_1, t_2) = E[(X_{t_1} - μ_{t_1})(X_{t_2} - μ_{t_2})] = E[X_{t_1} X_{t_2}] - μ_{t_1} μ_{t_2},
where formula_3 and formula_4 are two instances in time.
Definition for weakly stationary process.
If formula_1 is a weakly stationary (WSS) process, then the following are true:
formula_5 for all formula_6
and
formula_7 for all formula_8
and
formula_9
where formula_10 is the lag time, or the amount of time by which the signal has been shifted.
The autocovariance function of a WSS process is therefore given by
K_XX(t_1, t_2) = E[(X_{t_1} - μ)(X_{t_2} - μ)] = E[X_{t_1} X_{t_2}] - μ^2,
which is equivalent to
formula_11.
Normalization.
It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably.
The definition of the normalized auto-correlation of a stochastic process is
formula_12.
If the function formula_13 is well-defined, its value must lie in the range formula_14, with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.
For a WSS process, the definition is
formula_15.
where
formula_16.
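In practice the autocovariance and its normalized version are usually estimated from a finite sample. A minimal NumPy sketch using the common biased (1/N) estimator, applied to a simulated AR(1) series whose theoretical autocorrelation at lag τ is 0.8 raised to that lag (the estimator choice and the test series are illustrative assumptions):
import numpy as np

def sample_autocovariance(x, max_lag):
    """Biased (1/N) estimate of K_XX(tau) for tau = 0..max_lag."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    return np.array([np.sum(xc[:n - k] * xc[k:]) / n for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
phi, n = 0.8, 100_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

acov = sample_autocovariance(x, max_lag=5)
print(acov[0])         # estimate of the variance K_XX(0) = sigma^2
print(acov / acov[0])  # normalized autocovariance, roughly 0.8**tau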
Properties.
Symmetry property.
formula_17
respectively for a WSS process:
formula_18
Linear filtering.
The autocovariance of a linearly filtered process formula_19
formula_20
is
formula_21
Calculating turbulent diffusivity.
Autocovariance can be used to calculate turbulent diffusivity. Turbulence in a flow can cause the fluctuation of velocity in space and time. Thus, we are able to identify turbulence through the statistics of those fluctuations.
Reynolds decomposition is used to define the velocity fluctuations formula_22 (assume we are now working with a 1D problem and that formula_23 is the velocity along the formula_24 direction):
formula_25
where formula_23 is the true velocity, and formula_26 is the expected value of velocity. If we choose a correct formula_26, all of the stochastic components of the turbulent velocity will be included in formula_22. To determine formula_26, a set of velocity measurements that are assembled from points in space, moments in time or repeated experiments is required.
If we assume the turbulent flux formula_27 (formula_28, and "c" is the concentration term) can be caused by a random walk, we can use Fick's laws of diffusion to express the turbulent flux term:
formula_29
The velocity autocovariance is defined as
formula_30 or formula_31
where formula_32 is the lag time, and formula_33 is the lag distance.
The turbulent diffusivity formula_34 can be calculated using the following 3 methods:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{E}"
},
{
"math_id": 1,
"text": "\\left\\{X_t\\right\\}"
},
{
"math_id": 2,
"text": "\\mu_t = \\operatorname{E}[X_t]"
},
{
"math_id": 3,
"text": "t_1"
},
{
"math_id": 4,
"text": "t_2"
},
{
"math_id": 5,
"text": "\\mu_{t_1} = \\mu_{t_2} \\triangleq \\mu"
},
{
"math_id": 6,
"text": "t_1,t_2"
},
{
"math_id": 7,
"text": "\\operatorname{E}[|X_t|^2] < \\infty"
},
{
"math_id": 8,
"text": "t"
},
{
"math_id": 9,
"text": "\\operatorname{K}_{XX}(t_1,t_2) = \\operatorname{K}_{XX}(t_2 - t_1,0) \\triangleq \\operatorname{K}_{XX}(t_2 - t_1) = \\operatorname{K}_{XX}(\\tau),"
},
{
"math_id": 10,
"text": "\\tau = t_2 - t_1"
},
{
"math_id": 11,
"text": "\\operatorname{K}_{XX}(\\tau) = \\operatorname{E}[(X_{t+ \\tau} - \\mu_{t +\\tau})(X_{t} - \\mu_{t})] = \\operatorname{E}[X_{t+\\tau} X_t] - \\mu^2 "
},
{
"math_id": 12,
"text": "\\rho_{XX}(t_1,t_2) = \\frac{\\operatorname{K}_{XX}(t_1,t_2)}{\\sigma_{t_1}\\sigma_{t_2}} = \\frac{\\operatorname{E}[(X_{t_1} - \\mu_{t_1})(X_{t_2} - \\mu_{t_2})]}{\\sigma_{t_1}\\sigma_{t_2}}"
},
{
"math_id": 13,
"text": "\\rho_{XX}"
},
{
"math_id": 14,
"text": "[-1,1]"
},
{
"math_id": 15,
"text": "\\rho_{XX}(\\tau) = \\frac{\\operatorname{K}_{XX}(\\tau)}{\\sigma^2} = \\frac{\\operatorname{E}[(X_t - \\mu)(X_{t+\\tau} - \\mu)]}{\\sigma^2}"
},
{
"math_id": 16,
"text": "\\operatorname{K}_{XX}(0) = \\sigma^2"
},
{
"math_id": 17,
"text": "\\operatorname{K}_{XX}(t_1,t_2) = \\overline{\\operatorname{K}_{XX}(t_2,t_1)}"
},
{
"math_id": 18,
"text": "\\operatorname{K}_{XX}(\\tau) = \\overline{\\operatorname{K}_{XX}(-\\tau)}"
},
{
"math_id": 19,
"text": "\\left\\{Y_t\\right\\}"
},
{
"math_id": 20,
"text": "Y_t = \\sum_{k=-\\infty}^\\infty a_k X_{t+k}\\,"
},
{
"math_id": 21,
"text": "K_{YY}(\\tau) = \\sum_{k,l=-\\infty}^\\infty a_k a_l K_{XX}(\\tau+k-l).\\,"
},
{
"math_id": 22,
"text": "u'(x,t)"
},
{
"math_id": 23,
"text": "U(x,t)"
},
{
"math_id": 24,
"text": "x"
},
{
"math_id": 25,
"text": "U(x,t) = \\langle U(x,t) \\rangle + u'(x,t),"
},
{
"math_id": 26,
"text": "\\langle U(x,t) \\rangle"
},
{
"math_id": 27,
"text": "\\langle u'c' \\rangle"
},
{
"math_id": 28,
"text": "c' = c - \\langle c \\rangle"
},
{
"math_id": 29,
"text": "J_{\\text{turbulence}_x} = \\langle u'c' \\rangle \\approx D_{T_x} \\frac{\\partial \\langle c \\rangle}{\\partial x}."
},
{
"math_id": 30,
"text": "K_{XX} \\equiv \\langle u'(t_0) u'(t_0 + \\tau)\\rangle"
},
{
"math_id": 31,
"text": "K_{XX} \\equiv \\langle u'(x_0) u'(x_0 + r)\\rangle,"
},
{
"math_id": 32,
"text": "\\tau"
},
{
"math_id": 33,
"text": "r"
},
{
"math_id": 34,
"text": "D_{T_x}"
}
] | https://en.wikipedia.org/wiki?curid=1202314 |
12024 | General relativity | Theory of gravitation as curved spacetime
General relativity, also known as the general theory of relativity, and as Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in 1915 and is the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time or four-dimensional spacetime. In particular, the "curvature of spacetime" is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of second-order partial differential equations.
Newton's law of universal gravitation, which describes classical gravity, can be seen as a prediction of general relativity for the almost flat spacetime geometry around stationary mass distributions. Some predictions of general relativity, however, are beyond Newton's law of universal gravitation in classical physics. These predictions concern the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light, and include gravitational time dilation, gravitational lensing, the gravitational redshift of light, the Shapiro time delay and singularities/black holes. So far, all tests of general relativity have been shown to be in agreement with the theory. The time-dependent solutions of general relativity enable us to talk about the history of the universe and have provided the modern framework for cosmology, thus leading to the discovery of the Big Bang and cosmic microwave background radiation. Despite the introduction of a number of alternative theories, general relativity continues to be the simplest theory consistent with experimental data.
Reconciliation of general relativity with the laws of quantum physics remains a problem, however, as there is a lack of a self-consistent theory of quantum gravity. It is not yet known how gravity can be unified with the three non-gravitational forces: strong, weak and electromagnetic.
Einstein's theory has astrophysical implications, including the prediction of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape from them. Black holes are the end-state for massive stars. Microquasars and active galactic nuclei are believed to be stellar black holes and supermassive black holes. It also predicts gravitational lensing, where the bending of light results in multiple images of the same distant astronomical phenomenon. Other predictions include the existence of gravitational waves, which have been observed directly by the physics collaboration LIGO and other observatories. In addition, general relativity has provided the base of cosmological models of an expanding universe.
Widely acknowledged as a theory of extraordinary beauty, general relativity has often been described as the most beautiful of all existing physical theories.
<templatestyles src="Template:TOC limit/styles.css" />
History.
Henri Poincaré's 1905 theory of the dynamics of the electron was a relativistic theory which he applied to all forces, including gravity. While others thought that gravity was instantaneous or of electromagnetic origin, he suggested that relativity was "something due to our methods of measurement". In his theory, he showed that gravitational waves propagate at the speed of light. Soon afterwards, Einstein started thinking about how to incorporate gravity into his relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall (FFO), he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations, which form the core of Einstein's general theory of relativity. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. A version of non-Euclidean geometry, called Riemannian geometry, enabled Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity. This idea was pointed out by mathematician Marcel Grossmann and published by Grossmann and Einstein in 1913.
The Einstein field equations are nonlinear and considered difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But in 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, eventually resulting in the Reissner–Nordström solution, which is now associated with electrically charged black holes. In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.
During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein showed in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"), and in 1919 an expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of 29 May 1919, instantly making Einstein famous. Yet the theory remained outside the mainstream of theoretical physics and astrophysics until developments between approximately 1960 and 1975, now known as the golden age of general relativity. Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. Ever more precise solar system tests confirmed the theory's predictive power, and relativistic cosmology also became amenable to direct observational tests.
General relativity has acquired a reputation as a theory of extraordinary beauty. Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon has termed a "strangeness in the proportion" ("i.e." elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time "versus" matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory. Other elements of beauty associated with the general theory of relativity are its simplicity and symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency.
In the preface to "Relativity: The Special and the General Theory", Einstein said "The present book is intended, as far as possible, to give an exact insight into the theory of Relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics. The work presumes a standard of education corresponding to that of a university matriculation examination, and, despite the shortness of the book, a fair amount of patience and force of will on the part of the reader. The author has spared himself no pains in his endeavour to present the main ideas in the simplest and most intelligible form, and on the whole, in the sequence and connection in which they actually originated."
From classical mechanics to general relativity.
General relativity can be understood by examining its similarities with and departures from classical physics. The first step is the realization that classical mechanics and Newton's law of gravity admit a geometric description. The combination of this description with the laws of special relativity results in a heuristic derivation of general relativity.
Geometry of Newtonian gravity.
At the base of classical mechanics is the notion that a body's motion can be described as a combination of free (or inertial) motion, and deviations from this free motion. Such deviations are caused by external forces acting on a body in accordance with Newton's second law of motion, which states that the net force acting on a body is equal to that body's (inertial) mass multiplied by its acceleration. The preferred inertial motions are related to the geometry of space and time: in the standard reference frames of classical mechanics, objects in free motion move along straight lines at constant speed. In modern parlance, their paths are geodesics, straight world lines in curved spacetime.
Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties. A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in an enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is stationary in a gravitational field with the ball accelerating towards the floor, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field, in which case the ball, once released, undergoes no acceleration of its own while the room accelerates towards it.
Given the universality of free fall, there is no observable distinction between inertial motion and motion under the influence of the gravitational force. This suggests the definition of a new class of inertial motion, namely that of objects in free fall under the influence of gravity. This new class of preferred motions, too, defines a geometry of space and time—in mathematical terms, it is the geodesic motion associated with a specific connection which depends on the gradient of the gravitational potential. Space, in this construction, still has the ordinary Euclidean geometry. However, space"time" as a whole is more complicated. As can be shown using simple thought experiments following the free-fall trajectories of different test particles, the result of transporting spacetime vectors that can denote a particle's velocity (time-like vectors) will vary with the particle's trajectory; mathematically speaking, the Newtonian connection is not integrable. From this, one can deduce that spacetime is curved. The resulting Newton–Cartan theory is a geometric formulation of Newtonian gravity using only covariant concepts, i.e. a description which is valid in any desired coordinate system. In this geometric description, tidal effects—the relative acceleration of bodies in free fall—are related to the derivative of the connection, showing how the modified geometry is caused by the presence of mass.
Relativistic generalization.
As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics. In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations, boosts and reflections.) The differences between the two become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena.
With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent. In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure or conformal geometry.
Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall motion, an analogous reasoning as in the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry.
A priori, it is not clear whether the new local frames in free fall coincide with the reference frames in which the laws of special relativity hold—that theory is based on the propagation of light, and thus on electromagnetism, which could have a different set of preferred frames. But using different assumptions about the special-relativistic frames (such as their being earth-fixed, or in free fall), one can derive different predictions for the gravitational redshift, that is, the way in which the frequency of light shifts as the light propagates through a gravitational field (cf. below). The actual measurements show that free-falling frames are the ones in which light propagates as it does in special relativity. The generalization of this statement, namely that the laws of special relativity hold to good approximation in freely falling (and non-rotating) reference frames, is known as the Einstein equivalence principle, a crucial guiding principle for generalizing special-relativistic physics to include gravity.
The same experimental data shows that time as measured by clocks in a gravitational field—proper time, to give the technical term—does not follow the rules of special relativity. In the language of spacetime geometry, it is not measured by the Minkowski metric. As in the Newtonian case, this is suggestive of a more general geometry. At small scales, all reference frames that are in free fall are equivalent, and approximately Minkowskian. Consequently, we are now dealing with a curved generalization of Minkowski space. The metric tensor that defines the geometry—in particular, how lengths and angles are measured—is not the Minkowski metric of special relativity, it is a generalization known as a semi- or pseudo-Riemannian metric. Furthermore, each Riemannian metric is naturally associated with one particular kind of connection, the Levi-Civita connection, and this is, in fact, the connection that satisfies the equivalence principle and makes space locally Minkowskian (that is, in suitable locally inertial coordinates, the metric is Minkowskian, and its first partial derivatives and the connection coefficients vanish).
Einstein's equations.
Having formulated the relativistic, geometric version of the effects of gravity, the question of gravity's source remains. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the energy–momentum tensor, which includes both energy and momentum densities as well as stress: pressure and shear. Using the equivalence principle, this tensor is readily generalized to curved spacetime. Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the energy–momentum tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the energy–momentum tensor, and hence of whatever is on the other side of the equation, is zero—the simplest nontrivial set of equations are what are called Einstein's (field) equations:
Einstein's field equations
formula_0
On the left-hand side is the Einstein tensor, formula_1, which is symmetric and a specific divergence-free combination of the Ricci tensor formula_2 and the metric. In particular,
formula_3
is the curvature scalar. The Ricci tensor itself is related to the more general Riemann curvature tensor as
formula_4
On the right-hand side, formula_5 is a constant and formula_6 is the energy–momentum tensor. All tensors are written in abstract index notation. Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant formula_5 is found to be formula_7, where formula_8 is the Newtonian constant of gravitation and formula_9 the speed of light in vacuum. When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations,
formula_10
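Whether a given metric satisfies these vacuum equations can be checked mechanically: from the metric components one computes the Christoffel symbols and then the Ricci tensor, which must vanish identically. The following sketch (using the Python computer-algebra library sympy, in geometrized units with "G" = "c" = 1; the variable names are chosen here purely for illustration) verifies in this way that the Schwarzschild metric is a vacuum solution.
<syntaxhighlight lang="python">
import sympy as sp

# Schwarzschild metric in coordinates (t, r, theta, phi), geometrized units G = c = 1.
t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (d_c g_{db} + d_b g_{dc} - d_d g_{bc}).
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                           - sp.diff(g[b, c], x[d])) for d in range(n))/2
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ab}
#                       + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ab}.
def ricci(b, c):
    term = sum(sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][a][b], x[c])
               for a in range(n))
    term += sum(Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][a][b]
                for a in range(n) for d in range(n))
    return sp.simplify(term)

# Every component vanishes, so the metric solves the vacuum equations (may take a moment).
print(all(ricci(b, c) == 0 for b in range(n) for c in range(n)))  # True
</syntaxhighlight>
The same machinery, applied to a different metric ansatz, is how many exact solutions are checked in practice.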
In general relativity, the world line of a particle free from all external, non-gravitational force is a particular type of geodesic in curved spacetime. In other words, a freely moving or falling particle always moves along a geodesic.
The geodesic equation is:
formula_11
where formula_12 is a scalar parameter of motion (e.g. the proper time), and formula_13 are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients), which are symmetric in the two lower indices. Greek indices may take the values: 0, 1, 2, 3 and the summation convention is used for repeated indices formula_14 and formula_15. The quantity on the left-hand side of this equation is the acceleration of a particle, and so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. This equation of motion employs the Einstein notation, meaning that repeated indices are summed (i.e. from zero to three). The Christoffel symbols are functions of the four spacetime coordinates, and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.
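As a minimal numerical sketch of how the geodesic equation is used in practice (assuming the Python libraries numpy and scipy; the setup is illustrative), the following fragment integrates the equation in flat two-dimensional space written in polar coordinates, where the curvilinear coordinates produce nonzero Christoffel symbols even though there is no curvature, and confirms that the resulting geodesic is an ordinary straight line.
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import solve_ivp

# Geodesic equation d^2 x^mu/ds^2 = -Gamma^mu_{ab} dx^a/ds dx^b/ds, written out for
# flat 2D space in polar coordinates (r, theta), where the only nonzero Christoffel
# symbols are Gamma^r_{theta theta} = -r and Gamma^theta_{r theta} = 1/r.
def geodesic_rhs(s, y):
    r, theta, dr, dtheta = y
    d2r = r * dtheta**2                 # = -Gamma^r_{theta theta} (dtheta/ds)^2
    d2theta = -2.0 * dr * dtheta / r    # = -2 Gamma^theta_{r theta} (dr/ds)(dtheta/ds)
    return [dr, dtheta, d2r, d2theta]

# Start at (r, theta) = (1, 0) with dr/ds = 0, dtheta/ds = 1; in Cartesian terms this is
# the point (1, 0) moving with unit speed in the y-direction, so the geodesic should be
# the straight line x = 1.
sol = solve_ivp(geodesic_rhs, (0.0, 2.0), [1.0, 0.0, 0.0, 1.0], rtol=1e-10, atol=1e-12)
x_cart = sol.y[0] * np.cos(sol.y[1])
print(np.allclose(x_cart, 1.0, atol=1e-6))  # True: the geodesic is a straight line
</syntaxhighlight>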
Total force in general relativity.
In general relativity, the effective gravitational potential energy of an object of mass "m" revolving around a massive central body "M" is given by
formula_16
A conservative total force can then be obtained as its negative gradient
formula_17
where "L" is the angular momentum. The first term represents the force of Newtonian gravity, which is described by the inverse-square law. The second term represents the centrifugal force in the circular motion. The third term represents the relativistic effect.
Alternatives to general relativity.
There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, "f"("R") gravity and Einstein–Cartan theory.
Definition and basic applications.
The derivation outlined in the previous section contains all the information needed to define general relativity, describe its key properties, and address a question of crucial importance in physics, namely how the theory can be used for model-building.
Definition and basic properties.
General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional pseudo-Riemannian manifold representing spacetime, and the energy–momentum contained in that spacetime. Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow. The curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve.
While general relativity replaces the scalar gravitational potential of classical physics by a symmetric rank-two tensor, the latter reduces to the former in certain limiting cases. For weak gravitational fields and slow speed relative to the speed of light, the theory's predictions converge on those of Newton's law of universal gravitation.
As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems. Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers. Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance.
Model-building.
The core concept of general-relativistic model-building is that of a solution of Einstein's equations. Given both Einstein's equations and suitable equations for the properties of matter, such a solution consists of a specific semi-Riemannian manifold (usually defined by giving the metric in specific coordinates), and specific matter fields defined on that manifold. Matter and geometry must satisfy Einstein's equations, so in particular, the matter's energy–momentum tensor must be divergence-free. The matter must, of course, also satisfy whatever additional equations were imposed on its properties. In short, such a solution is a model universe that satisfies the laws of general relativity, and possibly additional laws governing whatever matter might be present.
Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly. Nevertheless, a number of exact solutions are known, although only a few have direct physical applications. The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe, and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos. Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub–NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture).
Given the difficulty of finding exact solutions, Einstein's field equations are also solved frequently by numerical integration on a computer, or by considering small perturbations of exact solutions. In the field of numerical relativity, powerful computers are employed to simulate the geometry of spacetime and to solve Einstein's equations for interesting situations such as two colliding black holes. In principle, such methods may be applied to any system, given sufficient computer resources, and may address fundamental questions such as naked singularities. Approximate solutions may also be found by perturbation theories such as linearized gravity and its generalization, the post-Newtonian expansion, both of which were developed by Einstein. The latter provides a systematic approach to solving for the geometry of a spacetime that contains a distribution of matter that moves slowly compared with the speed of light. The expansion involves a series of terms; the first terms represent Newtonian gravity, whereas the later terms represent ever smaller corrections to Newton's theory due to general relativity. An extension of this expansion is the parametrized post-Newtonian (PPN) formalism, which allows quantitative comparisons between the predictions of general relativity and alternative theories.
Consequences of Einstein's theory.
General relativity has a number of physical consequences. Some follow directly from the theory's axioms, whereas others have become clear only in the course of many years of research that followed Einstein's initial publication.
Gravitational time dilation and frequency shift.
Assuming that the equivalence principle holds, gravity influences the passage of time. Light sent down into a gravity well is blueshifted, whereas light sent in the opposite direction (i.e., climbing out of the gravity well) is redshifted; collectively, these two effects are known as the gravitational frequency shift. More generally, processes close to a massive body run more slowly when compared with processes taking place farther away; this effect is known as gravitational time dilation.
Gravitational redshift has been measured in the laboratory and using astronomical observations. Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks, while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS). Tests in stronger gravitational fields are provided by the observation of binary pulsars. All results are in agreement with general relativity. However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid.
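The size of these effects in the weak-field regime can be estimated from elementary expressions. The following Python sketch (with approximate, illustrative parameters for a GPS satellite) combines the gravitational blueshift of an orbiting clock with its special-relativistic slowing; the familiar net result is a satellite clock that runs fast by roughly 38 microseconds per day, an offset the system must compensate for.
<syntaxhighlight lang="python">
# Fractional clock-rate difference between a GPS satellite and a clock on the ground,
# in the weak-field approximation (illustrative, approximate parameter values).
GM = 3.986e14          # Earth's gravitational parameter, m^3 s^-2
c = 2.99792458e8       # speed of light, m/s
r_earth = 6.371e6      # mean Earth radius, m
r_orbit = 2.656e7      # GPS orbital radius, m
day = 86400.0          # seconds per day

grav = GM / c**2 * (1.0/r_earth - 1.0/r_orbit)  # gravitational blueshift of the satellite clock
vel = GM / r_orbit / (2.0 * c**2)               # special-relativistic slowing, v^2/(2c^2) on a circular orbit

print(grav * day * 1e6)          # ~ +45.7 microseconds per day
print(-vel * day * 1e6)          # ~  -7.2 microseconds per day
print((grav - vel) * day * 1e6)  # net ~ +38.5 microseconds per day (satellite clock runs fast)
</syntaxhighlight>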
Light deflection and gravitational time delay.
General relativity predicts that the path of light will follow the curvature of spacetime as it passes near a star. This effect was initially confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun.
This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity. As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion), several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light, the angle of deflection resulting from such calculations is only half the value given by general relativity.
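For a ray grazing the Sun, the general-relativistic deflection is 4"GM"/("c"²"b") for impact parameter "b", twice the half-value just mentioned. A short Python sketch (illustrative constants only) reproduces the classic numbers.
<syntaxhighlight lang="python">
import numpy as np

GM_sun = 1.327e20      # Sun's gravitational parameter, m^3 s^-2
c = 2.998e8            # speed of light, m/s
b = 6.957e8            # impact parameter: a ray grazing the solar surface, m

to_arcsec = 180.0 / np.pi * 3600.0
print(4 * GM_sun / (c**2 * b) * to_arcsec)  # ~1.75 arcseconds (general relativity)
print(2 * GM_sun / (c**2 * b) * to_arcsec)  # ~0.87 arcseconds ("Newtonian" half value)
</syntaxhighlight>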
Closely related to light deflection is the Shapiro Time Delay, the phenomenon that light signals take longer to move through a gravitational field than they would in the absence of that field. There have been numerous successful tests of this prediction. In the parameterized post-Newtonian formalism (PPN), measurements of both the deflection of light and the gravitational time delay determine a parameter called γ, which encodes the influence of gravity on the geometry of space.
Gravitational waves.
Gravitational waves, predicted by Albert Einstein in 1916, are ripples in the metric of spacetime that propagate at the speed of light. They illustrate one of several analogies between weak-field gravity and electromagnetism: they are the gravitational counterpart of electromagnetic waves. On 11 February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of merging black holes.
The simplest type of such a wave can be visualized by its action on a ring of freely floating particles. A sine wave propagating through such a ring towards the reader distorts the ring in a characteristic, rhythmic fashion (animated image to the right). Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult. However, linear approximations of gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by formula_18 or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed.
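A short Python sketch (illustrative values only, following the linearized "plus"-polarization displacement pattern) makes this concrete: a strain of formula_18 shifts the particles of a unit ring by about half that fraction of their coordinates, so a kilometre-scale detector arm changes length by far less than the size of an atomic nucleus.
<syntaxhighlight lang="python">
import numpy as np

# Linearized effect of a "plus"-polarized gravitational wave of strain amplitude h on
# free test particles in the transverse plane: a particle at (x, y) is displaced to
# approximately (x + (h/2) x cos(wt), y - (h/2) y cos(wt)).  Illustrative values only.
h = 1e-21                          # typical strain at Earth from a distant merger
omega = 2.0 * np.pi * 100.0        # a 100 Hz wave
phi = np.linspace(0.0, 2.0*np.pi, 8, endpoint=False)
x, y = np.cos(phi), np.sin(phi)    # a unit ring of test particles

def displaced_ring(t):
    s = 0.5 * h * np.cos(omega * t)
    return x * (1.0 + s), y * (1.0 - s)

# Relative length change of a kilometre-scale detector arm is of order h/2:
arm_length = 4000.0                # metres (roughly a LIGO arm)
print(0.5 * h * arm_length)        # ~2e-18 m, far smaller than a proton
</syntaxhighlight>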
Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves. But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models.
Orbital effects and the relativity of direction.
General relativity differs from classical mechanics in a number of predictions concerning orbiting bodies. It predicts an overall rotation (precession) of planetary orbits, as well as orbital decay caused by the emission of gravitational waves and effects related to the relativity of direction.
Precession of apsides.
In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess; the orbit is not an ellipse, but akin to an ellipse that rotates on its focus, resulting in a rose curve-like shape (see image). Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of Mercury's anomalous perihelion shift, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations.
The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass) or the much more general post-Newtonian formalism. It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations). Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth), as well as in binary pulsar systems, where it is larger by five orders of magnitude.
In general relativity the perihelion shift formula_19, expressed in radians per revolution, is approximately given by
formula_20
where:
formula_21 is the semi-major axis of the orbit,
formula_22 is the orbital period, and
formula_23 is the eccentricity of the orbit.
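Evaluated with approximate orbital elements for Mercury, this expression reproduces the anomalous advance of about 43 arcseconds per century, as in the following Python sketch (parameter values are illustrative).
<syntaxhighlight lang="python">
import numpy as np

# Perihelion advance sigma = 24 pi^3 L^2 / (T^2 c^2 (1 - e^2)), evaluated with
# approximate orbital elements for Mercury (illustrative values).
c = 2.998e8        # speed of light, m/s
L = 5.791e10       # semi-major axis, m
T = 7.600e6        # orbital period, s (about 88 days)
e = 0.2056         # eccentricity

sigma = 24.0 * np.pi**3 * L**2 / (T**2 * c**2 * (1.0 - e**2))  # radians per revolution
revolutions_per_century = 100.0 * 365.25 * 86400.0 / T
print(sigma * revolutions_per_century * 180.0 / np.pi * 3600.0)  # ~43 arcseconds per century
</syntaxhighlight>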
Orbital decay.
According to general relativity, a binary system will emit gravitational waves, thereby losing energy. Due to this loss, the distance between the two orbiting bodies decreases, and so does their orbital period. Within the Solar System or for ordinary double stars, the effect is too small to be observable. This is not the case for a close binary pulsar, a system of two orbiting neutron stars, one of which is a pulsar: from the pulsar, observers on Earth receive a regular series of radio pulses that can serve as a highly accurate clock, which allows precise measurements of the orbital period. Because neutron stars are immensely compact, significant amounts of energy are emitted in the form of gravitational radiation.
The first observation of a decrease in orbital period due to the emission of gravitational waves was made by Hulse and Taylor, using the binary pulsar PSR1913+16 they had discovered in 1974. This was the first detection of gravitational waves, albeit indirect, for which they were awarded the 1993 Nobel Prize in physics. Since then, several other binary pulsars have been found, in particular the double pulsar PSR J0737−3039, where both stars are pulsars and which was last reported to also be in agreement with general relativity in 2021 after 16 years of observations.
Geodetic precession and frame-dragging.
Several relativistic effects are directly related to the relativity of direction. One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport"). For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging. More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%.
Near a rotating mass, there are gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable. Such effects can again be tested through their influence on the orientation of gyroscopes in free fall. Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction. The Mars Global Surveyor probe orbiting Mars has also been used.
Astrophysical applications.
Gravitational lensing.
The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing. Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs.
The earliest example was discovered in 1979; since then, more than a hundred gravitational lenses have been observed. Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed.
Gravitational lensing has developed into a tool of observational astronomy. It is used to detect the presence and distribution of dark matter, provide a "natural telescope" for observing distant galaxies, and to obtain an independent estimate of the Hubble constant. Statistical evaluations of lensing data provide valuable insight into the structural evolution of galaxies.
Gravitational-wave astronomy.
Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research. Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO. Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10−9 to 10−6 hertz frequency range, which originate from binary supermassive black holes. A European space-based detector, eLISA / NGO, is currently under development, with a precursor mission (LISA Pathfinder) having launched in December 2015.
Observations of gravitational waves promise to complement observations in the electromagnetic spectrum. They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string. In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger.
Black holes and other compact objects.
Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars. Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center, and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures.
Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation. Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars. In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed.
General relativity plays a central role in modelling all these phenomena, and observations provide strong evidence for the existence of black holes with the properties predicted by the theory.
Black holes are also sought-after targets in the search for gravitational waves (cf. Gravitational waves, above). Merging black hole binaries should lead to some of the strongest gravitational wave signals reaching detectors here on Earth, and the phase directly before the merger ("chirp") could be used as a "standard candle" to deduce the distance to the merger events–and hence serve as a probe of cosmic expansion at large distances. The gravitational waves produced as a stellar black hole plunges into a supermassive one should provide direct information about the supermassive black hole's geometry.
Cosmology.
The current models of cosmology are based on Einstein's field equations, which include the cosmological constant formula_24 since it has important influence on the large-scale dynamics of the cosmos,
formula_25
where "formula_26" is the spacetime metric. Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions, allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase. Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation, further observational data can be used to put the models to the test. Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis, the large-scale structure of the universe, and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation.
Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly. There is no generally accepted description of this new kind of matter, within the framework of known particle physics or otherwise. Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear.
An inflationary phase, an additional phase of strongly accelerated expansion at cosmic times of around 10−33 seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation. Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario. However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations. An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed (cf. the section on quantum gravity, below).
Exotic solutions: time travel, warp drives.
Kurt Gödel showed that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes. Stephen Hawking introduced the chronology protection conjecture, an assumption beyond those of standard general relativity intended to prevent time travel.
Some exact solutions of general relativity, such as the Alcubierre drive, present examples of a warp drive, but these solutions require a distribution of exotic matter and generally suffer from semiclassical instability.
Advanced concepts.
Asymptotic symmetries.
The spacetime symmetry group for special relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in general relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, "viz.", the Poincaré group.
In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at light-like infinity to characterize what it means to say a metric is asymptotically flat, making no "a priori" assumptions about the nature of the asymptotic symmetry group—not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as "supertranslations". This implies the conclusion that General Relativity (GR) does "not" reduce to special relativity in the case of weak fields at long distances. It turns out that the BMS symmetry, suitably modified, could be seen as a restatement of the universal soft graviton theorem in quantum field theory (QFT), which relates universal infrared (soft) QFT with GR asymptotic spacetime symmetries.
Causal structure and global geometry.
In general relativity, no material body can catch up with or overtake a light pulse. No influence from an event "A" can reach any other location "X" before light sent out at "A" to "X". In consequence, an exploration of all light worldlines (null geodesics) yields key information about the spacetime's causal structure. This structure can be displayed using Penrose–Carter diagrams in which infinitely large regions of space and infinite time intervals are shrunk ("compactified") so as to fit onto a finite map, while light still travels along diagonals as in standard spacetime diagrams.
Aware of the importance of causal structure, Roger Penrose and others developed what is known as global geometry. In global geometry, the object of study is not one particular solution (or family of solutions) to Einstein's equations. Rather, relations that hold true for all geodesics, such as the Raychaudhuri equation, and additional non-specific assumptions about the nature of matter (usually in the form of energy conditions) are used to derive general results.
Horizons.
Using global geometry, some spacetimes can be shown to contain boundaries called horizons, which demarcate one region from the rest of spacetime. The best-known examples are black holes: if mass is compressed into a sufficiently compact region of space (as specified in the hoop conjecture, the relevant length scale is the Schwarzschild radius), no light from inside can escape to the outside. Since no object can overtake a light pulse, all interior matter is imprisoned as well. Passage from the exterior to the interior is still possible, showing that the boundary, the black hole's "horizon", is not a physical barrier.
Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. With time they become rather simple objects characterized by eleven parameters specifying: electric charge, mass–energy, linear momentum, angular momentum, and location at a specified time. This is stated by the black hole uniqueness theorem: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple.
Even more remarkably, there is a general set of laws known as black hole mechanics, which is analogous to the laws of thermodynamics. For instance, by the second law of black hole mechanics, the area of the event horizon of a general black hole will never decrease with time, analogous to the entropy of a thermodynamic system. This limits the energy that can be extracted by classical means from a rotating black hole (e.g. by the Penrose process). There is strong evidence that the laws of black hole mechanics are, in fact, a subset of the laws of thermodynamics, and that the black hole area is proportional to its entropy. This leads to a modification of the original laws of black hole mechanics: for instance, as the second law of black hole mechanics becomes part of the second law of thermodynamics, it is possible for the black hole area to decrease as long as other processes ensure that entropy increases overall. As thermodynamical objects with nonzero temperature, black holes should emit thermal radiation. Semiclassical calculations indicate that indeed they do, with the surface gravity playing the role of temperature in Planck's law. This radiation is known as Hawking radiation (cf. the quantum theory section, below).
There are many other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon). Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semiclassical radiation known as Unruh radiation.
Singularities.
Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values. Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole, or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole. The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well.
Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization. The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage and also at the beginning of a wide class of expanding universes. However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture). The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity.
Evolution equations.
Each solution of Einstein's equation encompasses the whole history of a universe—it is not just some snapshot of how things are, but a whole, possibly matter-filled, spacetime. It describes the state of matter and geometry everywhere and at every moment in that particular universe. Due to its general covariance, Einstein's theory is not sufficient by itself to determine the time evolution of the metric tensor. It must be combined with a coordinate condition, which is analogous to gauge fixing in other field theories.
To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism. These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified. Such formulations of Einstein's field equations are the basis of numerical relativity.
Global and quasi-local quantities.
The notion of evolution equations is intimately tied in with another aspect of general relativistic physics. In Einstein's theory, it turns out to be impossible to find a general definition for a seemingly simple property such as a system's total mass (or energy). The main reason is that the gravitational field—like any physical field—must be ascribed a certain energy, but that it proves to be fundamentally impossible to localize that energy.
Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass) or suitable symmetries (Komar mass). If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity. Just as in classical physics, it can be shown that these masses are positive. Corresponding global definitions exist for momentum and angular momentum. There have also been a number of attempts to define "quasi-local" quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture.
Relationship with quantum theory.
If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid-state physics, would be the other. However, how to reconcile quantum theory with general relativity is still an open question.
Quantum field theory in curved spacetime.
Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation leading to the possibility that they evaporate over time. As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes.
Quantum gravity.
The demand for consistency between a quantum description of matter and a geometric description of spacetime, as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics. Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist.
Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity. At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability").
One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects. The theory promises to be a unified description of all particles and interactions, including gravity; the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.
Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value formulation of general relativity (cf. evolution equations above), the result is the Wheeler–DeWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff. However, with the introduction of what are now known as Ashtekar variables, this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps.
Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced, there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman Path Integral approach and Regge calculus, dynamical triangulations, causal sets, twistor models or the path integral based models of quantum cosmology.
All candidate theories still have major formal and conceptual problems to overcome. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests (and thus to decide between the candidates where their predictions vary), although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available.
Current status.
General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications that the theory is incomplete. The problem of quantum gravity and the question of the reality of spacetime singularities remain open. Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics.
Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations, while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes). In February 2016, it was announced that the existence of gravitational waves was directly detected by the Advanced LIGO team on 14 September 2015. A century after its introduction, general relativity remains a highly active area of research.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "G_{\\mu\\nu}\\equiv R_{\\mu\\nu} - {\\textstyle 1 \\over 2}R\\,g_{\\mu\\nu} = \\kappa T_{\\mu\\nu}\\,"
},
{
"math_id": 1,
"text": "G_{\\mu\\nu}"
},
{
"math_id": 2,
"text": "R_{\\mu\\nu}"
},
{
"math_id": 3,
"text": "R=g^{\\mu\\nu}R_{\\mu\\nu}"
},
{
"math_id": 4,
"text": "R_{\\mu\\nu}={R^\\alpha}_{\\mu\\alpha\\nu}."
},
{
"math_id": 5,
"text": "\\kappa"
},
{
"math_id": 6,
"text": "T_{\\mu\\nu}"
},
{
"math_id": 7,
"text": "\\kappa=\\frac{8\\pi G}{c^4}"
},
{
"math_id": 8,
"text": "G"
},
{
"math_id": 9,
"text": "c"
},
{
"math_id": 10,
"text": "R_{\\mu\\nu}=0."
},
{
"math_id": 11,
"text": " {d^2 x^\\mu \\over ds^2}+\\Gamma^\\mu {}_{\\alpha \\beta}{d x^\\alpha \\over ds}{d x^\\beta \\over ds}=0,"
},
{
"math_id": 12,
"text": "s"
},
{
"math_id": 13,
"text": " \\Gamma^\\mu {}_{\\alpha \\beta}"
},
{
"math_id": 14,
"text": "\\alpha"
},
{
"math_id": 15,
"text": "\\beta"
},
{
"math_id": 16,
"text": "U_f(r) =-\\frac{GMm}{r}+\\frac{L^{2}}{2mr^{2}}-\\frac{GML^{2}}{mc^{2}r^{3}}"
},
{
"math_id": 17,
"text": "F_f(r)=-\\frac{GMm}{r^{2}}+\\frac{L^{2}}{mr^{3}}-\\frac{3GML^{2}}{mc^{2}r^{4}}"
},
{
"math_id": 18,
"text": "10^{-21}"
},
{
"math_id": 19,
"text": "\\sigma"
},
{
"math_id": 20,
"text": "\\sigma=\\frac {24\\pi^3L^2} {T^2c^2(1-e^2)} \\ ,"
},
{
"math_id": 21,
"text": "L"
},
{
"math_id": 22,
"text": "T"
},
{
"math_id": 23,
"text": "e"
},
{
"math_id": 24,
"text": "\\Lambda"
},
{
"math_id": 25,
"text": " R_{\\mu\\nu} - {\\textstyle 1 \\over 2}R\\,g_{\\mu\\nu} + \\Lambda\\ g_{\\mu\\nu} = \\frac{8\\pi G}{c^{4}}\\, T_{\\mu\\nu} "
},
{
"math_id": 26,
"text": "g_{\\mu\\nu}"
}
] | https://en.wikipedia.org/wiki?curid=12024 |
12024508 | Small-bias sample space | In theoretical computer science, a small-bias sample space (also known as formula_0-biased sample space, formula_0-biased generator, or small-bias probability space) is a probability distribution that fools parity functions.
In other words, no parity function can distinguish between a small-bias sample space and the uniform distribution with high probability, and hence, small-bias sample spaces naturally give rise to pseudorandom generators for parity functions.
The main useful property of small-bias sample spaces is that they need far fewer truly random bits than the uniform distribution to fool parities. Efficient constructions of small-bias sample spaces have found many applications in computer science, some of which are derandomization, error-correcting codes, and probabilistically checkable proofs.
The connection with error-correcting codes is in fact very strong since formula_0-biased sample spaces are "equivalent" to formula_0-balanced error-correcting codes.
Definition.
Bias.
Let formula_1 be a probability distribution over formula_2.
The "bias" of formula_1 with respect to a set of indices formula_3 is defined as
formula_4
where the sum is taken over formula_5, the finite field with two elements. In other words, the sum formula_6 equals formula_7 if the number of ones in the sample formula_8 at the positions defined by formula_9 is even, and otherwise, the sum equals formula_10.
For formula_11, the empty sum is defined to be zero, and hence formula_12.
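A direct brute-force computation of the bias from this definition is straightforward for small examples. The following Python sketch is illustrative only; the function names are ours, and checking all index sets is exponential in the string length.

```python
from itertools import product

def bias(sample_space, index_set):
    """Bias of a multiset of bit strings with respect to a set of positions:
    |Pr[parity over index_set is 0] - Pr[parity over index_set is 1]|
    under the uniform distribution on the multiset."""
    idx = list(index_set)
    even = sum(1 for x in sample_space if sum(x[i] for i in idx) % 2 == 0)
    return abs(2 * even / len(sample_space) - 1)

def max_bias(sample_space, n):
    """Maximum bias over all non-empty index sets (brute force, small n only)."""
    return max(
        bias(sample_space, [i for i in range(n) if (mask >> i) & 1])
        for mask in range(1, 2 ** n)
    )

# The full cube {0,1}^3 has bias 0 on every non-empty index set;
# a single repeated string has bias 1 on some index set.
cube = list(product([0, 1], repeat=3))
print(max_bias(cube, 3), max_bias([(1, 0, 1)], 3))   # 0.0 1.0
```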
ϵ-biased sample space.
A probability distribution formula_1 over formula_2 is called an "formula_0-biased sample space" if
formula_13
holds for all non-empty subsets formula_14.
ϵ-biased set.
An formula_0-biased sample space formula_1 that is generated by picking a uniform element from a multiset formula_15 is called an "formula_0-biased set".
The "size" formula_16 of an formula_0-biased set formula_1 is the size of the multiset that generates the sample space.
ϵ-biased generator.
An formula_0-biased generator formula_17 is a function that maps strings of length formula_18 to strings of length formula_19 such that the multiset formula_20 is an formula_0-biased set. The "seed length" of the generator is the number formula_18 and is related to the size of the formula_0-biased set formula_21 via the equation formula_22.
Connection with epsilon-balanced error-correcting codes.
There is a close connection between formula_0-biased sets and "formula_0-balanced" linear error-correcting codes.
A linear code formula_23 of message length formula_19 and block length formula_16 is
"formula_0-balanced" if the Hamming weight of every nonzero codeword formula_24 is between formula_25 and formula_26.
Since formula_27 is a linear code, its generator matrix is an formula_28-matrix formula_29 over formula_5 with formula_30.
Then it holds that a multiset formula_31 is formula_0-biased if and only if the linear code formula_32, whose columns are exactly elements of formula_1, is formula_0-balanced.
Constructions of small epsilon-biased sets.
Usually the goal is to find formula_0-biased sets that have a small size formula_16 relative to the parameters formula_19 and formula_0.
This is because a smaller size formula_16 means that the amount of randomness needed to pick a random element from the set is smaller, and so the set can be used to fool parities using few random bits.
Theoretical bounds.
The probabilistic method gives a non-explicit construction that achieves size formula_33.
The construction is non-explicit in the sense that finding the formula_0-biased set requires a lot of true randomness, which does not help towards the goal of reducing the overall randomness.
However, this non-explicit construction is useful because it shows that these efficient codes exist.
On the other hand, the best known lower bound for the size of formula_0-biased sets is formula_34, that is, in order for a set to be formula_0-biased, it must be at least that big.
Explicit constructions.
There are many explicit, i.e., deterministic, constructions of formula_0-biased sets with various parameter settings.
These bounds are mutually incomparable. In particular, none of these constructions yields the smallest formula_0-biased sets for all settings of formula_0 and formula_19.
Application: almost k-wise independence.
An important application of small-bias sets lies in the construction of almost k-wise independent sample spaces.
k-wise independent spaces.
A random variable formula_40 over formula_2 is a "k-wise independent space" if, for all index sets formula_41 of size formula_42, the marginal distribution formula_43 is exactly equal to the uniform distribution over formula_44.
That is, for all such formula_9 and all strings formula_45, the distribution formula_40 satisfies formula_46.
Constructions and bounds.
k-wise independent spaces are fairly well understood: simple constructions, such as Joffe's construction below, give sample spaces of size formula_47, while the best known constructions have size roughly formula_48.
Joffe's construction.
Joffe's construction yields a formula_42-wise independent space formula_40 over the finite field with some prime number formula_49 of elements, i.e., formula_40 is a distribution over formula_50. The initial formula_42 marginals of the distribution are drawn independently and uniformly at random:
formula_51.
For each formula_52 with formula_53, the marginal distribution of formula_54 is then defined as
formula_55
where the calculation is done in formula_56.
Joffe proved that the distribution formula_40 constructed in this way is formula_42-wise independent as a distribution over formula_50.
The distribution formula_40 is uniform on its support, and hence, the support of formula_40 forms a "formula_42-wise independent set".
It contains all formula_47 strings in formula_57 that have been extended to strings of length formula_19 using the deterministic rule above.
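A minimal Python sketch of the sampling rule just described is given below; the function name and interface are ours, and primality of the field size is assumed rather than checked.

```python
import random

def joffe_sample(n, k, rng=random):
    """One sample from Joffe's k-wise independent space: a length-n tuple over F_n.

    n is assumed to be prime with n > k. The first k coordinates Y_0..Y_{k-1}
    are uniform and independent; for k <= i < n the coordinate is
    Y_0 + Y_1*i + Y_2*i^2 + ... + Y_{k-1}*i^(k-1)  (mod n).
    """
    y = [rng.randrange(n) for _ in range(k)]         # the k truly random coordinates
    out = list(y)
    for i in range(k, n):
        out.append(sum(c * pow(i, j, n) for j, c in enumerate(y)) % n)
    return tuple(out)

# Example with the prime n = 7 and k = 3: the support consists of 7**3 = 343 strings of length 7.
print(joffe_sample(7, 3))
```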
Almost k-wise independent spaces.
A random variable formula_40 over formula_2 is a "formula_58-almost k-wise independent space" if, for all index sets formula_41 of size formula_42, the restricted distribution formula_43 and the uniform distribution formula_59 on formula_44 are formula_58-close in 1-norm, i.e., formula_60.
Constructions.
A general framework combines small k-wise independent spaces with small formula_0-biased spaces to obtain formula_58-almost k-wise independent spaces of even smaller size.
In particular, let formula_61 be a linear mapping that generates a k-wise independent space and let formula_62 be a generator of an formula_0-biased set over formula_63.
That is, when given a uniformly random input, the output of formula_64 is a k-wise independent space, and the output of formula_65 is formula_0-biased.
Then formula_66 with formula_67 is a generator of an formula_58-almost formula_42-wise independent space, where formula_68.
As mentioned above, one can construct a generator formula_64 with formula_69 and a generator formula_65 with formula_70.
Hence, the concatenation formula_71 of formula_64 and formula_65 has seed length formula_72.
In order for formula_71 to yield a formula_58-almost k-wise independent space, we need to set formula_73, which leads to a seed length of formula_74 and a sample space of total size formula_75.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\epsilon"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\{0,1\\}^n"
},
{
"math_id": 3,
"text": "I \\subseteq \\{1,\\dots,n\\}"
},
{
"math_id": 4,
"text": "\n\\text{bias}_I(X)\n=\n\\left|\n\\Pr_{x\\sim X} \\left(\\sum_{i\\in I} x_i = 0\\right)\n-\n\\Pr_{x\\sim X} \\left(\\sum_{i\\in I} x_i = 1\\right)\n\\right|\n=\n\\left|\n2 \\cdot \\Pr_{x\\sim X} \\left(\\sum_{i\\in I} x_i = 0\\right)\n-1\n\\right|\n\\,,"
},
{
"math_id": 5,
"text": "\\mathbb F_2"
},
{
"math_id": 6,
"text": "\\sum_{i\\in I} x_i"
},
{
"math_id": 7,
"text": "0"
},
{
"math_id": 8,
"text": "x\\in\\{0,1\\}^n"
},
{
"math_id": 9,
"text": "I"
},
{
"math_id": 10,
"text": "1"
},
{
"math_id": 11,
"text": "I=\\emptyset"
},
{
"math_id": 12,
"text": "\\text{bias}_{\\emptyset} (X) = 1"
},
{
"math_id": 13,
"text": "\n\\text{bias}_I(X) \\leq \\epsilon\n"
},
{
"math_id": 14,
"text": "I \\subseteq \\{1,2,\\ldots,n\\}"
},
{
"math_id": 15,
"text": "X\\subseteq \\{0,1\\}^n"
},
{
"math_id": 16,
"text": "s"
},
{
"math_id": 17,
"text": "G:\\{0,1\\}^\\ell \\to \\{0,1\\}^n"
},
{
"math_id": 18,
"text": "\\ell"
},
{
"math_id": 19,
"text": "n"
},
{
"math_id": 20,
"text": "X_G=\\{G(y) \\;\\vert\\; y\\in\\{0,1\\}^\\ell \\}"
},
{
"math_id": 21,
"text": "X_G"
},
{
"math_id": 22,
"text": "s=2^\\ell"
},
{
"math_id": 23,
"text": "C:\\{0,1\\}^n\\to\\{0,1\\}^s"
},
{
"math_id": 24,
"text": "C(x)"
},
{
"math_id": 25,
"text": "(\\frac{1}{2}-\\epsilon)s"
},
{
"math_id": 26,
"text": "(\\frac{1}{2}+\\epsilon)s"
},
{
"math_id": 27,
"text": "C"
},
{
"math_id": 28,
"text": "(n\\times s)"
},
{
"math_id": 29,
"text": "A"
},
{
"math_id": 30,
"text": "C(x)=x \\cdot A"
},
{
"math_id": 31,
"text": "X\\subset\\{0,1\\}^{n}"
},
{
"math_id": 32,
"text": "C_X"
},
{
"math_id": 33,
"text": "s=O(n/\\epsilon^2)"
},
{
"math_id": 34,
"text": "s=\\Omega(n/ (\\epsilon^2 \\log (1/\\epsilon))"
},
{
"math_id": 35,
"text": "\\displaystyle s= \\frac{n}{\\text{poly}(\\epsilon)}"
},
{
"math_id": 36,
"text": "\\displaystyle s= O\\left(\\frac{n}{\\epsilon \\log (n/\\epsilon)}\\right)^2"
},
{
"math_id": 37,
"text": "\\displaystyle s= O\\left(\\frac{n}{\\epsilon^3 \\log (1/\\epsilon)}\\right)"
},
{
"math_id": 38,
"text": "\\displaystyle s= O\\left(\\frac{n}{\\epsilon^2 \\log (1/\\epsilon)}\\right)^{5/4}"
},
{
"math_id": 39,
"text": "\\displaystyle s= O\\left(\\frac{n}{\\epsilon^{2+o(1)}}\\right)"
},
{
"math_id": 40,
"text": "Y"
},
{
"math_id": 41,
"text": "I\\subseteq\\{1,\\dots,n\\}"
},
{
"math_id": 42,
"text": "k"
},
{
"math_id": 43,
"text": "Y|_I"
},
{
"math_id": 44,
"text": "\\{0,1\\}^k"
},
{
"math_id": 45,
"text": "z\\in\\{0,1\\}^k"
},
{
"math_id": 46,
"text": "\\Pr_Y (Y|_I = z) = 2^{-k}"
},
{
"math_id": 47,
"text": "n^k"
},
{
"math_id": 48,
"text": "n^{k/2}"
},
{
"math_id": 49,
"text": "n>k"
},
{
"math_id": 50,
"text": "\\mathbb F_n^n"
},
{
"math_id": 51,
"text": "(Y_0,\\dots,Y_{k-1}) \\sim\\mathbb F_n^k"
},
{
"math_id": 52,
"text": "i"
},
{
"math_id": 53,
"text": "k \\leq i < n"
},
{
"math_id": 54,
"text": "Y_i"
},
{
"math_id": 55,
"text": "Y_i=Y_0 + Y_1 \\cdot i + Y_2 \\cdot i^2 + \\dots + Y_{k-1} \\cdot i^{k-1}\\,,"
},
{
"math_id": 56,
"text": "\\mathbb F_n"
},
{
"math_id": 57,
"text": "\\mathbb F_n^k"
},
{
"math_id": 58,
"text": "\\delta"
},
{
"math_id": 59,
"text": "U_k"
},
{
"math_id": 60,
"text": "\\Big\\|Y|_I - U_k\\Big\\|_1 \\leq \\delta"
},
{
"math_id": 61,
"text": "G_1:\\{0,1\\}^h\\to\\{0,1\\}^n"
},
{
"math_id": 62,
"text": "G_2:\\{0,1\\}^\\ell \\to \\{0,1\\}^h"
},
{
"math_id": 63,
"text": "\\{0,1\\}^h"
},
{
"math_id": 64,
"text": "G_1"
},
{
"math_id": 65,
"text": "G_2"
},
{
"math_id": 66,
"text": "G : \\{0,1\\}^\\ell \\to \\{0,1\\}^n"
},
{
"math_id": 67,
"text": "G(x) = G_1(G_2(x))"
},
{
"math_id": 68,
"text": "\\delta=2^{k/2} \\epsilon"
},
{
"math_id": 69,
"text": "h=\\tfrac{k}{2} \\log n"
},
{
"math_id": 70,
"text": "\\ell=\\log s=\\log h + O(\\log(\\epsilon^{-1}))"
},
{
"math_id": 71,
"text": "G"
},
{
"math_id": 72,
"text": "\\ell = \\log k + \\log \\log n + O(\\log(\\epsilon^{-1}))"
},
{
"math_id": 73,
"text": "\\epsilon = \\delta 2^{-k/2}"
},
{
"math_id": 74,
"text": "\\ell = \\log \\log n + O(k+\\log(\\delta^{-1}))"
},
{
"math_id": 75,
"text": "2^\\ell \\leq \\log n \\cdot \\text{poly}(2^k \\cdot\\delta^{-1})"
}
] | https://en.wikipedia.org/wiki?curid=12024508 |
1202488 | Prüfer rank | In mathematics, especially in the area of algebra known as group theory, the Prüfer rank of a pro-p group measures the size of a group in terms of the ranks of its elementary abelian sections. The rank is well behaved and helps to define analytic pro-p-groups. The term is named after Heinz Prüfer.
Definition.
The Prüfer rank of a pro-p-group formula_0 is
formula_1
where formula_2 is the rank of the abelian group
formula_3,
where formula_4 is the Frattini subgroup of formula_5.
As the Frattini subgroup of formula_5 can be thought of as the group of non-generating elements of formula_5, it can be seen that formula_2 will be equal to the "size of any minimal generating set" of formula_5.
Properties.
Those profinite groups with finite Prüfer rank are more amenable to analysis.
Specifically, in the case of finitely generated pro-p groups, having finite Prüfer rank is equivalent to having an open normal subgroup that is powerful. In turn these are precisely the class of pro-p groups that are p-adic analytic – that is, groups that can be imbued with a p-adic manifold structure.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "\\sup\\{d(H)|H\\leq G\\}"
},
{
"math_id": 2,
"text": "d(H)"
},
{
"math_id": 3,
"text": "H/\\Phi(H)"
},
{
"math_id": 4,
"text": "\\Phi(H)"
},
{
"math_id": 5,
"text": "H"
}
] | https://en.wikipedia.org/wiki?curid=1202488 |
12025092 | Signal-recognition-particle GTPase | Class of enzymes
Signal-recognition-particle GTPase (EC 3.6.5.4) is an enzyme with systematic name "GTP phosphohydrolase (protein-synthesis-assisting)". This enzyme catalyses the following chemical reaction
GTP + H2O formula_0 GDP + phosphate
Enzyme activity is associated with the signal-recognition particle.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=12025092 |
12025650 | Chaperonin ATPase | Class of enzymes
Chaperonin ATPase (EC 3.6.4.9, "chaperonin") is an enzyme with systematic name "ATP phosphohydrolase (polypeptide-unfolding)". This enzyme catalyses the following chemical reaction
ATP + H2O formula_0 ADP + phosphate
These enzymes are a subclass of molecular chaperones.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=12025650 |
12025656 | Non-chaperonin molecular chaperone ATPase | Class of enzymes
Non-chaperonin molecular chaperone ATPase (EC 3.6.4.10, "molecular chaperone Hsc70 ATPase") is an enzyme with systematic name "ATP phosphohydrolase (polypeptide-polymerizing)". This enzyme catalyses the following chemical reaction
ATP + H2O formula_0 ADP + phosphate
These enzymes perform many functions that are similar to those of chaperonins.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=12025656 |
12025676 | X-bar chart | Statistical chart of arithmetic means
In industrial statistics, the X-bar chart is a type of Shewhart control chart that is used to monitor the arithmetic means of successive samples of constant size, n. This type of control chart is used for characteristics that can be measured on a continuous scale, such as weight, temperature, or thickness. For example, one might take a sample of 5 shafts from production every hour, measure the diameter of each, and then plot, for each sample, the average of the five diameter values on the chart.
For the purposes of control limit calculation, the sample means are assumed to be normally distributed, an assumption justified by the Central Limit Theorem.
The X-bar chart is always used in conjunction with a variation chart such as the formula_0 and R chart or formula_0 and s chart. The R-chart shows sample ranges (difference between the largest and the smallest values in the sample), while the s-chart shows the samples' standard deviation. The R-chart was preferred in times when calculations were performed manually, as the range is far easier to calculate than the standard deviation; with the advent of computers, ease of calculation ceased to be an issue, and the s-chart is preferred these days, as it is statistically more meaningful and efficient. Depending on the type of variation chart used, the average sample range or the average sample standard deviation is used to derive the X-bar chart's control limits. | [
{
"math_id": 0,
"text": "\\bar x"
}
] | https://en.wikipedia.org/wiki?curid=12025676 |
12026454 | Farouk Kamoun | Tunisian computer scientist
Farouk Kamoun (born October 20, 1946) is a Tunisian computer scientist and professor of computer science at the National School of Computer Sciences (ENSI) of Manouba University, Tunisia. He contributed in the late 1970s to significant research in the field of computer networking in relation with the first ARPANET network. He is also one of the pioneers of the development of the Internet in Tunisia in the early 1990s.
Contribution to computer science.
Dr. Kamoun's contribution to the domain of hierarchical routing began in 1979 with his professor at the University of California, Los Angeles (UCLA), Leonard Kleinrock. They argued that the optimal number of levels for an formula_0 router subnet is formula_1, requiring a total of formula_2 entries per router. They also showed that the increase in effective mean path length caused by hierarchical routing is sufficiently small that it is usually acceptable.
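As a rough numerical illustration of this result, the sketch below compares a flat routing table (one entry per router) with the table size at the optimal number of hierarchy levels. It is a minimal Python sketch; the subnet size used is an arbitrary assumption for the example, and the function name is ours.

```python
import math

def optimal_hierarchy(n_routers):
    """Optimal number of hierarchical routing levels (ln N) and the resulting
    number of routing-table entries per router (about e * ln N), following the
    Kleinrock-Kamoun analysis described above."""
    levels = math.log(n_routers)
    entries_per_router = math.e * math.log(n_routers)
    return levels, entries_per_router

n = 4096  # assumed subnet size, for illustration only
levels, entries = optimal_hierarchy(n)
print(f"flat routing: {n} entries per router")
print(f"hierarchical: ~{levels:.1f} levels, ~{entries:.0f} entries per router")
```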
The research work carried out in Tunisia in the 1980s on network flow control based on buffer management is considered a basis for the selective-reject algorithms now used in the Internet.
Education and career.
He received a French engineering degree in 1970, a Master's and a PhD degree from the University of California at Los Angeles Computer Science Department. His PhD work, undertaken under the supervision of Dr Leonard Kleinrock, a pioneer in the area of networking, was related to the design of large computer networks. They were among the first to introduce and evaluate cluster-based hierarchical routing. Results obtained then are still very relevant and have recently inspired considerable work in the area of "ad hoc" networking.
He returned to Tunisia in 1976 and was given the task of creating and chairing the first Computer Science School of the country (ENSI). From 1982 to 1993, he chaired the CNI, Centre National de l'Informatique, a government office in charge of IT policies and strategic development.
He promoted the field of networking in Tunisia, through the organization of several national conferences and workshops dealing with networks. As chairman of CNI, he represented Tunisia at the general assembly and executive board meetings of the Intergovernmental Bureau for Informatics (IBI) in Rome, in the '80s, as well as those of the UNESCO IT Intergovernmental Committee, with a focus on issues related to developing countries.
From 1993 until 1999, he served as Dean of ENSI, Ecole Nationale des Sciences de l'Informatique, a School of Engineering dedicated to the training of computer engineers. Since 1999, he has continued to teach and has served as director of the CRISTAL Research Laboratory of ENSI. He is also an advisor in IT fields to the Tunisian Minister of Higher Education, Scientific Research and Technology.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "ln(N)"
},
{
"math_id": 2,
"text": "e\\cdot ln(N)"
}
] | https://en.wikipedia.org/wiki?curid=12026454 |
12029165 | AN/FPS-17 | The AN/FPS-17 was a ground-based fixed-beam radar system that was installed at three locations worldwide, including Pirinçlik Air Base (formerly Diyarbakir Air Station) in south-eastern Turkey, Laredo, Texas and Shemya Island, Alaska.
This system was deployed to satisfy scientific and technical intelligence collection requirements during the Cold War. The first installation (designated AN/FPS-17, XW-1) at Diyarbakir was originally intended to provide surveillance of the USSR's missile test range at Kapustin Yar south of Stalingrad - especially to detect missile launchings. The data it produced, however, exceeded surveillance requirements, permitting the derivation of missile trajectories, the identification of earth satellite launches, the calculation of a satellite's ephemeris (position and orbit), and the synthesis of booster rocket performance. The success achieved by this fixed-beam radar led to the co-location of a tracking radar (AN/FPS-79), beginning in mid-1964. Together, these radars had the capability for estimating the configuration and dimensions of satellites or missiles and observing the reentry of manned or unmanned vehicles.
A second FPS-17 installation was made at Laredo, Texas, which was used primarily as a research and development site. The final operational installation was made at Shemya Island, Alaska, for missile detection.
Classification of radar systems.
Under the Joint Electronics Type Designation System (JETDS), all U.S. military radar and tracking systems are assigned a unique identifying alphanumeric designation. The letters “AN” (for Army-Navy) are placed ahead of a three-letter code.
Thus, the AN/FPS-17 represents the 17th design of an Army-Navy “Fixed, Radar, Search” electronic device.
Genesis.
Experimentation with the detection of missiles by a modified SCR-270 radar in 1948 and 1949 at Holloman Air Force Base, New Mexico along with U.S. experience in the use of high-power components on other radars, created a basis for believing that a megawatt-rated radar could be fabricated for operation over much longer ranges than ever before. The need for intelligence on Soviet missile activity being acute, a formal requirement for such a radar was established, and Rome Air Development Center was given responsibility for engineering the system.
In October 1954 General Electric, which had experience in producing high-power VHF equipment and radars, was awarded a contract for the fabrication, installation, and testing of what was to be at the time the world's largest and most powerful operational radar. The contract stipulated that the equipment was to be in operation at Site IX near Diyarbakir within nine months: by 1 June 1955. Construction began in February, and the scheduled operational date was missed by fifteen minutes.
The original antenna installation was a large D.S. Kennedy parabolic reflector, high by wide, radiating in the frequency range 175 to 215 megahertz. Standard GE high-power television transmitters, modified for pulse operation, were used initially.
Surveillance was carried out by six horizontal beams over the Kapustin Yar area. In 1958 a second antenna, high by long (called the Cinerama antenna), and new 1.2-megawatt transmitters were installed as part of a modification kit which provided three additional horizontal beams, a seven-beam vertical fan, and greater range capability.
The elaborate system included automatic alarm circuitry, range-finding circuitry, and data-processing equipment; it was equipped to make 35 mm photographic recordings of all signals received. A preliminary reduction of data was accomplished on-site, but the final processing was done in the Foreign Technology Division at Wright Patterson Air Force Base.
From 15 June 1955, when the first Soviet missile was detected, to 1 March 1964, 508 incidents (sightings) were reported, 147 of them during the last two years of the period.
Operation.
The post-1958 Pirinçlik system had eight separate radar sets or channels, each with its own exciter, transmitter, duplexer, receiver, and data display unit. These eight channels fed electromagnetic energy into sixteen fixed beams formed by the two antennas, each channel, or transmitter-receiver combination, being time-shared between two beams. Pneumatically driven switches operated on a three-second cycle to power each beam alternately for 1.5 seconds. There were antenna feeds for two additional beams which could be made to function with some patchwork in the wiring.
The antenna feeds were positioned to produce in space the beam pattern depicted in the figure. Beams 1 and 18 were those not ordinarily energized. Beams 1 through 7 used the older of the two antennas; 8 through 18 were formed by the newer, "cinerama" antenna, whose width gave them their narrow horizontal dimension.
Beams 2 through 9 were projected in horizontal array; 10 through 17 (although 10 actually lies in the horizontal row) were grouped as the vertical component. All beams of each group were powered simultaneously. Except for being controlled by a master timing signal, each of the eight channels operated independently of the others. Each transmitter was on a slightly different frequency to prevent interaction with the others. The transmitted pulse, 2000 microseconds long, was coded, or tagged, by being passed through a tapped delay line which could reverse the phase at 20-microsecond intervals. Upon reception the returned signal was passed through the same tapped delay line and compressed 100:1, to 20 microseconds, in order to increase the accuracy and resolution of the range measurement, which was of course a function of the interval between transmission and return.
A delay line was an artificial transmission line that served to retard the signal, made up of series inductances and parallel capacitances that yielded a constant delay. Pick-off points at 20-microsecond intervals permitted these sub-pulses to be extracted in such sequence that they all arrived together, to achieve the compression effect.
The total azimuthal coverage was from 18° to 49.7°. The system normally detected missiles or satellites launched from Kapustin Yar at a nominal range of ; it tracked one type of missile out as far as . The missiles and satellites were not sensed at their maximum detectable range because the coverage of the fixed beam configuration did not conform with the test range layout.
The electrical characteristics of each of the channels were:
To illustrate how the capability of the system is calculated, we can take typical logs which show channel 4, for example, operating with the following parameters: a peak transmitted power of one megawatt, an antenna gain of 5,000, an operating frequency of 192 megahertz, and a minimum discernible signal of -130 dBm.
Channel 4's maximum range of intercept capability for a target one square meter in cross section is then determined by using these parameters in the radar range equation
formula_0
where:
*formula_1 is the range in meters
*formula_2 is the peak power transmitted in watts
*formula_3 is the antenna gain over isotropic (omnidirectional) radiator
*formula_4 is the wavelength in meters
*formula_5 is the minimum discernible signal in watts
*formula_6 is the target size in square meters
Substituting,
formula_7
where:
*formula_8 is the speed of light in meters per second
*formula_9 is the frequency in hertz (1/s)
formula_10
converting,
formula_11
and
formula_12
Range = .
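The substitution above is easy to check numerically. The following Python sketch simply evaluates the radar range equation with the channel-4 parameters quoted in the text; the function name and output formatting are ours.

```python
import math

def radar_range(p_t, gain, freq_hz, mds_dbm, target_m2=1.0):
    """Maximum range in metres from R = [P G^2 lambda^2 A / ((4 pi)^3 sigma_min)]^(1/4)."""
    wavelength = 3.0e8 / freq_hz                # lambda = c / f
    sigma_min = 10 ** (mds_dbm / 10) / 1000.0   # convert dBm to watts
    numerator = p_t * gain ** 2 * wavelength ** 2 * target_m2
    denominator = (4 * math.pi) ** 3 * sigma_min
    return (numerator / denominator) ** 0.25

# Channel 4 as logged above: 1 MW peak power, gain 5,000, 192 MHz, -130 dBm MDS.
r = radar_range(p_t=1.0e6, gain=5000, freq_hz=192e6, mds_dbm=-130)
print(f"{r:.3e} m  (about {r / 1852:.0f} nautical miles)")   # roughly 4.2e6 m
```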
Sightings made by the fixed-beam system included vertical firings (for upper-atmosphere research vehicles or booster checkout ), ballistic missiles fired to the nominal , , and impact areas, launches of Cosmos satellites, orbiting satellites, and natural abnormalities such as ionospheric disturbances or aurora.
Measurements and processing.
Data on target missiles or satellites were recorded in each radar channel by photographing a five-inch (127 mm) intensity-modulated oscilloscope with the camera shutter open on a 35 mm film moving approximately five inches per minute. The range of an individual target was represented by its location across the width of the film, the time by a dot-dash code along the length. In addition to this positional information, the target's approximate radial velocity (velocity in the direction of observation) was determined by measuring the Doppler frequency shift in the radar signal when it returned. The Doppler shift was found to within 500 cycles by determining which of eighteen frequency filters covering successive bands 500 cycles per second wide passed the return signal. This measurement of radial velocity ran from -4 to + per second in increments of . All these data, together with the elevation and azimuth of the observing beam, were automatically converted to serial form, encoded in standard teleprinter code, and punched on paper tape for transmission.
Data was thus received at Wright-Patterson Foreign Technology Division (FTD) first by teleprinter and then on film, the latter accompanied by logs giving data on the target as read by site personnel and data on equipment performance such as peak transmitted power, frequency, and receiver sensitivity. Upon arrival, the film was then edited and marked to facilitate reading on the "Oscar" (preliminary processing) equipment. Targets were sorted by differences in range and rate of range change, and the returns on each were numbered in time sequence.
The FTD Oscar equipment consisted of a film reader which gave time and range data in analog form, a converter unit which changed them to digital form, and an IBM printing card punch which received the digital data. The Oscar equipment and human operator thus generated a deck of IBM cards for computer processing which contains the history of each target's position through time.
The first step in the computer processing was to translate Oscar units into actual radar range, "Z" (Greenwich mean) time, and beam number, the latter fixing the azimuth and elevation of the return. During this first step, three separate quality-control checks were made on each IBM card to eliminate erroneous data.
Those observations that succeeded in passing all these tests were taken to the second step of computer processing, which consisted of fitting a second-degree polynomial curve to the raw range/time data in accordance with least-squares criteria. In this method, a mathematical function was fit to best approximate a series of observations, such that the sum of squares of its residuals (deviations from the raw data) was least. If there was systematic irregularity in the reliability of the data, the residuals were weighted accordingly.
A standard deviation from this curve was established, and any raw datum point showing a deviation as large as three times the standard was discarded. Then second-degree curves were similarly fitted to the azimuth/time and elevation/time data. The three second-degree polynomials - for range/time, azimuth/time, and elevation/time - were used to generate a value for position and velocity at mean time of observation, and on the basis of these values an initial estimate of the elliptical trajectory was made.
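The fit-and-reject procedure described in these two steps can be illustrated schematically; the NumPy sketch below is a modern reconstruction of the idea, not the original FTD processing code, and the synthetic data are invented for the example.

```python
import numpy as np

def fit_with_rejection(t, r, degree=2):
    """Least-squares polynomial fit with one pass of three-sigma outlier rejection,
    mirroring the range/time processing described above."""
    coeffs = np.polyfit(t, r, degree)
    residuals = r - np.polyval(coeffs, t)
    sigma = residuals.std()
    keep = np.abs(residuals) < 3 * sigma        # discard points deviating by 3 sigma or more
    return np.polyfit(t[keep], r[keep], degree), keep

# Synthetic range/time data (seconds, kilometres) with one spurious return.
t = np.linspace(0, 60, 31)
r = 1500 + 3.2 * t - 0.01 * t ** 2 + np.random.normal(0, 0.5, t.size)
r[10] += 40.0
coeffs, keep = fit_with_rejection(t, r)
print(coeffs, "kept", keep.sum(), "of", keep.size, "points")
```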
In computing the elliptical path, the Earth is physically considered a rotating homogeneous sphere and geometrically considered an ellipsoid; that is, its equatorial bulge is ignored in the gravitational computation but not with respect to intersections of its surface. An ellipse not intersecting the Earth's surface represents a satellite orbit; one intersecting the Earth's surface describes a trajectory above the point of intersection.
The parameters of the ellipse are iterated with the computer, establishing a best-fit ellipse constrained by a weighted least-squares criterion. Along this ellipse the target's track is computed: the history through time of latitude, longitude, altitude, and such velocity and angular parameters as may be of interest. A missile's actual range is probably shorter than that of its computed trajectory because of its non-elliptical thrusting path and atmospheric drag after its reentry. The difference is on the order of to for short and medium range missiles, for ICBMs.
Laredo, Texas.
GE and the Air Force recognized a need to conduct further research, development and testing that would not have been possible at the operational site in Turkey, so a similar FPS-17 was installed near Laredo, Texas, to facilitate that work. The location was sometimes known as Laredo Test Site, Laredo Tracking Site, or Laredo AFS, but is not to be confused with Laredo AFB. The site was declared operational on 29 February 1956 and a mechanical tracker, designated AN/FPS-78 was added around 1960. The site shut down in 1962 or 1963. Some documents claim Laredo was the first FPS-17 but this appears to derive from the period when the existence of Diyarbakir was a closely held secret.
The Laredo FPS-17 underwent numerous reconfigurations over time. The antenna reflector was the same as Diyarbakir’s initial FPS-17 antenna, but the feed horn numbers and configurations changed several times (it is a curiosity that none of the three FPS-17 sites were exactly alike). Laredo tracked missiles from White Sands and conducted experiments in detection, meteor effects, ionospheric propagation effects and hardware testing.
Shemya Island, Alaska.
Soviet rocket tests to Kamchatka during the late 1950s increased interest in Shemya Island, Alaska, in the western Aleutians, as a location for monitoring missile tests from the far northeastern Soviet Union. Old site facilities were rehabilitated and new ones constructed on the island, including a large detection radar (AN/FPS-17), which went into operation in 1960. Each of the three antenna reflectors was similar to the initial FPS-17 antenna at Diyarbakir but employed a different feed horn array and beam scanning method. In 1961, the AN/FPS-80 tracking radar was constructed nearby. Blue Fox refers to a modification of the AN/FPS-80 tracking radar to the AN/FPS-80(M) configuration in 1964. These radars were closed in the 1970s when the Cobra Dane phased array radar was built to monitor missile tests. Shemya was redesignated from an Air Force station to an Air Force base in 1968.
The AN/FPS-17 Detection Radar at the Shemya AFB became operational in May 1960, and the AN/FPS-80 Tracking Radar became operational on April 1, 1962.
Blue Nine refers to the project which produced the AN/FPS-79 Tracking Radar Set built by General Electric, used with the Air Force 466L Electromagnetic Intelligence System (ELINT).
Aftermath.
The Diyarbakir space surveillance site operated a detection radar (FPS-17) and a tracking radar (FPS-79) throughout the 1960s and 1970s. If a new space object was sensed by the detection radar's fans, then the tracking radar could be oriented to achieve lock-on and tracking. The orientation was governed by knowledge of the appropriate "normal" object's astrodynamic laws of motion, or by an assumption as to launch point. Thus, if an unknown was detected, and if it followed an unusual path, it was unlikely that it could, or would, be tracked. Furthermore, the director of the radar could make a decision that the unknown object detected is not of interest (because of the location of the FPS-17 fan penetration or because of the lack of prior information on a possible new launch). In the absence of detection fan penetration (the fan had a rather limited coverage), the FPS-79 tracking radar was tasked to follow other space objects on a schedule provided by the Space Defense Center, and again there was almost no likelihood that an anomalistic object could, or would, be tracked.
The success of the FPS-17 technology led directly to the development of the larger and more powerful Ballistic Missile Early Warning System (BMEWS). BMEWS detection and tracking radars were prototyped on Trinidad Island and the operational installations were made at Thule, Greenland; Clear, Alaska; and Fylingdales Moor, UK.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " R = \\left[ \\frac {P_t \\, G^2 \\, \\lambda^2 \\, A }{(4 \\, \\pi)^3 \\, \\sigma_{min}} \\right] ^{1 \\over 4} \\,\\!"
},
{
"math_id": 1,
"text": " R \\,\\!"
},
{
"math_id": 2,
"text": " P_t \\,\\!"
},
{
"math_id": 3,
"text": " G \\,\\!"
},
{
"math_id": 4,
"text": " \\lambda \\,\\!"
},
{
"math_id": 5,
"text": " \\sigma_{min} \\,\\!"
},
{
"math_id": 6,
"text": " A \\,\\!"
},
{
"math_id": 7,
"text": " \\lambda = \\frac {c} {f} \\,\\!"
},
{
"math_id": 8,
"text": " c \\,\\!"
},
{
"math_id": 9,
"text": " f \\,\\!"
},
{
"math_id": 10,
"text": " \\lambda = \\frac {3 \\times 10^8 \\, \\mathrm{m/s}} {192 \\times 10^6 \\, \\mathrm{Hz}} = 1.56 \\, \\mathrm{m} \\,\\!"
},
{
"math_id": 11,
"text": " \\sigma_{min} = -130 \\, \\mathrm{dBm} = 10 ^{-130/10} \\, \\mathrm{mW} = 10 ^ {-16} \\, \\mathrm{W} \\,\\!"
},
{
"math_id": 12,
"text": " R = \\left[ \\frac {10^6 \\, \\mathrm{W} \\times 5000^2 \\times \\left(1.56 \\, \\mathrm{m} \\right)^2 \\times 1 \\, \\mathrm{m}^2} {12.57^3 \\times 10^{-16} \\, \\mathrm{W}} \\right] ^{1 \\over 4} \\,\\!"
}
] | https://en.wikipedia.org/wiki?curid=12029165 |
1203063 | Level-set method | Conceptual framework used in numerical analysis of surfaces and shapes
The Level-set method (LSM) is a conceptual framework for using level sets as a tool for numerical analysis of surfaces and shapes. LSM can perform numerical computations involving curves and surfaces on a fixed Cartesian grid without having to parameterize these objects. LSM makes it easier to perform computations on shapes with sharp corners and shapes that change topology (such as by splitting in two or developing holes). These characteristics make LSM effective for modeling objects that vary in time, such as an airbag inflating or a drop of oil floating in water.
Overview.
The figure on the right illustrates several ideas about LSM. In the upper left corner is a bounded region with a well-behaved boundary. Below it, the red surface is the graph of a level set function formula_0 determining this shape, and the flat blue region represents the "X-Y" plane. The boundary of the shape is then the zero-level set of formula_0, while the shape itself is the set of points in the plane for which formula_0 is positive (interior of the shape) or zero (at the boundary).
In the top row, the shape's topology changes as it is split in two. It is challenging to describe this transformation numerically by parameterizing the boundary of the shape and following its evolution: an algorithm would have to detect the moment the shape splits in two and then construct parameterizations for the two newly obtained curves. On the bottom row, however, the same change in topology is described simply by translating upwards the plane at which the level-set function is sampled. It is less challenging to work with a shape through its level-set function than with the shape directly, where a method would need to consider all the possible deformations the shape might undergo.
Thus, in two dimensions, the level-set method amounts to representing a closed curve formula_1 (such as the shape boundary in our example) using an auxiliary function formula_0, called the level-set function. The curve formula_1 is represented as the zero-level set of formula_0 by
formula_2
and the level-set method manipulates formula_1 "implicitly" through the function formula_0. This function formula_0 is assumed to take positive values inside the region delimited by the curve formula_1 and negative values outside.
The level-set equation.
If the curve formula_1 moves in the normal direction with a speed formula_3, then by chain rule and implicit differentiation, it can be determined that the level-set function formula_0 satisfies the "level-set equation"
formula_4
Here, formula_5 is the Euclidean norm (denoted customarily by single bars in partial differential equations), and formula_6 is time. This is a partial differential equation, in particular a Hamilton–Jacobi equation, and can be solved numerically, for example, by using finite differences on a Cartesian grid.
However, the numerical solution of the level-set equation may require advanced techniques. Simple finite difference methods fail quickly. Upwinding methods such as the Godunov method are considered better; however, the level-set method does not guarantee preservation of the volume and shape of the level set in an advection field that maintains shape and size, for example, a uniform or rotational velocity field. Instead, the shape of the level set may become distorted, and the level set may disappear over a few time steps. Therefore, high-order finite difference schemes, such as high-order essentially non-oscillatory (ENO) schemes, are often required, and even then the feasibility of long-term simulations is questionable. More advanced methods have been developed to overcome this; for example, combinations of the level-set method with the tracking of marker particles advected by the velocity field.
Example.
Consider a unit circle in formula_7, shrinking in on itself at a constant rate, i.e. each point on the boundary of the circle moves along its inward-pointing normal at some fixed speed. The circle will shrink and eventually collapse down to a point. If an initial distance field is constructed (i.e. a function whose value is the signed Euclidean distance to the boundary, positive in the interior, negative in the exterior) on the initial circle, the normalized gradient of this field will be the circle normal.
If the field has a constant value subtracted from it over time, the zero level (which was the initial boundary) of the new fields will also be circular and will similarly collapse to a point. This is because this evolution is effectively the temporal integration of the eikonal equation with a fixed front velocity.
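This example is easy to reproduce numerically: build the signed distance field of the unit circle on a grid, subtract the travelled distance, and read off the zero level set. The NumPy sketch below is illustrative only; the grid resolution and speed are arbitrary choices.

```python
import numpy as np

# Signed distance to the unit circle, positive inside and negative outside,
# matching the sign convention used in this article.
n = 201
x = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(x, x)
phi0 = 1.0 - np.sqrt(X ** 2 + Y ** 2)

v = 1.0                       # constant inward normal speed
for t in (0.0, 0.5, 0.9):
    phi = phi0 - v * t        # subtracting a constant shrinks the zero level set
    area = (phi > 0).sum() * (x[1] - x[0]) ** 2
    radius = np.sqrt(area / np.pi)
    print(f"t = {t:.1f}: estimated radius {radius:.2f} (exact {1.0 - v * t:.2f})")
```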
History.
The level-set method was developed in 1979 by Alain Dervieux, and subsequently popularized by Stanley Osher and James Sethian. It has since become popular in many disciplines, such as image processing, computer graphics, computational geometry, optimization, computational fluid dynamics, and computational biology.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varphi"
},
{
"math_id": 1,
"text": "\\Gamma"
},
{
"math_id": 2,
"text": "\\Gamma = \\{(x, y) \\mid \\varphi(x, y) = 0 \\},"
},
{
"math_id": 3,
"text": "v"
},
{
"math_id": 4,
"text": "\\frac{\\partial\\varphi}{\\partial t} = v|\\nabla \\varphi|."
},
{
"math_id": 5,
"text": "|\\cdot|"
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": "\\mathbb{R}^2"
}
] | https://en.wikipedia.org/wiki?curid=1203063 |
1203150 | Smith–Purcell effect | The Smith–Purcell effect was the precursor of the free-electron laser (FEL). It was studied by Steve Smith, a graduate student under the guidance of Edward Purcell. In their experiment, they sent an energetic beam of electrons very closely parallel to the surface of a ruled optical diffraction grating, and thereby generated visible light. Smith showed there was negligible effect on the trajectory of the inducing electrons. Essentially, this is a form of Cherenkov radiation where the phase velocity of the light has been altered by the periodic grating. However, unlike Cherenkov radiation, there is no minimum or threshold particle velocity.
Smith–Purcell radiation is particularly attractive for applications involving non-destructive beam diagnostics (bunch-length diagnostics in accelerators, for example) and especially as a viable THz radiation source, which has further broad-range uses in diverse and high-impact fields like materials sciences, biotechnology, security and communications, manufacturing and medicine. Operating at THz frequencies also allows for potentially large accelerating gradients (~10s GeV/m) to be realised. This, paired with plasma-wakefield acceleration methods under development and linear accelerator (linac) technology, could pave the way to next-generation linacs that are compact (and hence cheaper), less prone to RF breakdown (current limits for surface E fields are of the order of 10s-100 MV/m), and capable of high energy output.
Background.
Charged particles usually generate radiation via two different mechanisms: by being accelerated themselves, as in bremsstrahlung and synchrotron radiation, or via polarisation radiation, in which the particle's electromagnetic field dynamically polarises the surrounding medium or structure, which then emits; Cherenkov, transition and Smith–Purcell radiation belong to this second class.
The benefit of using polarisation radiation in particular is the lack of direct effect on the original beam; the beam inducing the radiative emission can continue its original path unaltered and having induced EM radiation. This is unlike the bremsstrahlung or synchrotron effects which actually alter or bend the incoming beam. Due to this non-destructive feature, SPR has become an interesting prospect for beam diagnostics, also offering the possibility of reliable technologies due to theoretically no contact or scattering interactions between the beam and the target.
Dispersion relation.
When a charged particle travels above a periodic grating (or periodic media inhomogeneity), a current is induced on the surface of the grating. This induced current then emits radiation at the discontinuities of the grating due to the scattering of the Coulomb field of the induced charges at the grating boundaries. The dispersion relation for the Smith–Purcell effect (SPE) is given as follows:
formula_0,
where the wavelength formula_1 is observed at an angle formula_2 to the direction of the electron beam for the formula_3 order reflection mode, formula_4 is the grating period, and formula_5 is the relative electron velocity (formula_6). This relation can be derived by considering energy and momentum conservation.
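For a feel for the numbers involved, the dispersion relation can be evaluated directly. The grating period, electron velocity and observation angle in the Python sketch below are arbitrary illustrative assumptions, chosen to land in the THz range discussed above.

```python
import math

def smith_purcell_wavelength(period_m, beta, theta_rad, order=1):
    """Emitted wavelength lambda = (L / n) * (1 / beta - cos(theta))."""
    return (period_m / order) * (1.0 / beta - math.cos(theta_rad))

# Assumed example: 250-micron grating period, beta = 0.95, observation at 90 degrees.
lam = smith_purcell_wavelength(250e-6, 0.95, math.pi / 2)
print(f"lambda = {lam * 1e6:.0f} um, i.e. about {3e8 / lam / 1e12:.1f} THz")
```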
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda = \\frac{L}{n}\\left(\\frac{1}{\\beta}-\\cos{\\theta}\\right)"
},
{
"math_id": 1,
"text": "\\lambda"
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "n^{th}"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "\\beta "
},
{
"math_id": 6,
"text": "v/c"
}
] | https://en.wikipedia.org/wiki?curid=1203150 |
12034549 | Morita conjectures | The Morita conjectures in general topology are certain problems about normal spaces, now solved in the affirmative. The conjectures, formulated by Kiiti Morita in 1976, asked
The answers were believed to be affirmative. Here a normal P-space "Y" is characterised by the property that the product with every metrizable "X" is normal; thus the conjecture was that the converse holds.
Keiko Chiba, Teodor C. Przymusiński, and Mary Ellen Rudin proved conjecture (1) and showed that conjectures (2) and (3) cannot be proven false under the standard ZFC axioms for mathematics (specifically, that the conjectures hold under the axiom of constructibility "V=L").
Fifteen years later, Zoltán Tibor Balogh succeeded in showing that conjectures (2) and (3) are true. | [
{
"math_id": 0,
"text": "X \\times Y"
}
] | https://en.wikipedia.org/wiki?curid=12034549 |
1203577 | C4 | C4, C04, C.IV, C-4, or C-04 may refer to:
<templatestyles src="Template:TOC_right/styles.css" />
See also.
<templatestyles src="Dmbox/styles.css" />
Topics referred to by the same termThis page lists articles associated with the same title formed as a letter–number combination. | [
{
"math_id": 0,
"text": "C_4"
}
] | https://en.wikipedia.org/wiki?curid=1203577 |
12036119 | Gromov's inequality for complex projective space | Optimal stable 2-systolic inequality
In Riemannian geometry, Gromov's optimal stable 2-systolic inequality is the inequality
formula_0,
valid for an arbitrary Riemannian metric on the complex projective space, where the optimal bound is attained by the symmetric Fubini–Study metric, providing a natural geometrisation of quantum mechanics. Here formula_1 is the stable 2-systole, which in this case can be defined as the infimum of the areas of rational 2-cycles representing the class of the complex projective line formula_2 in 2-dimensional homology.
The inequality first appeared, as Theorem 4.36, in work of Gromov.
The proof of Gromov's inequality relies on the Wirtinger inequality for exterior 2-forms.
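The equality case can be checked directly, under the common normalisation (an assumption, since no normalisation is fixed above) in which the Fubini–Study metric gives the projective line formula_2 area π:

```latex
% Equality for the Fubini--Study metric, assuming the line has area \pi:
\mathrm{vol}_{2n}\bigl(\mathbb{CP}^n, g_{FS}\bigr) = \frac{\pi^n}{n!},
\qquad
\mathrm{stsys}_2(g_{FS}) = \pi,
\qquad
\mathrm{stsys}_2{}^n = \pi^n = n!\,\mathrm{vol}_{2n}\bigl(\mathbb{CP}^n, g_{FS}\bigr).
```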
Projective planes over division algebras formula_3.
In the special case n=2, Gromov's inequality becomes formula_4. This inequality can be thought of as an analog of Pu's inequality for the real projective plane formula_5. In both cases, the boundary case of equality is attained by the symmetric metric of the projective plane. Meanwhile, in the quaternionic case, the symmetric metric on formula_6 is not its systolically optimal metric. In other words, the manifold formula_6 admits Riemannian metrics with higher systolic ratio formula_7 than for its symmetric metric. | [
{
"math_id": 0,
"text": "\\mathrm{stsys}_2{}^n \\leq n!\n\\;\\mathrm{vol}_{2n}(\\mathbb{CP}^n)"
},
{
"math_id": 1,
"text": "\\operatorname{stsys_2}"
},
{
"math_id": 2,
"text": "\\mathbb{CP}^1 \\subset \\mathbb{CP}^n"
},
{
"math_id": 3,
"text": " \\mathbb{R,C,H}"
},
{
"math_id": 4,
"text": "\\mathrm{stsys}_2{}^2 \\leq 2 \\mathrm{vol}_4(\\mathbb{CP}^2)"
},
{
"math_id": 5,
"text": "\\mathbb{RP}^2"
},
{
"math_id": 6,
"text": "\\mathbb{HP}^2"
},
{
"math_id": 7,
"text": "\\mathrm{stsys}_4{}^2/\\mathrm{vol}_8"
}
] | https://en.wikipedia.org/wiki?curid=12036119 |
12037102 | Dowker space | In the mathematical field of general topology, a Dowker space is a topological space that is T4 but not countably paracompact. They are named after Clifford Hugh Dowker.
The non-trivial task of providing an example of a Dowker space (and therefore also proving their existence as mathematical objects) helped mathematicians better understand the nature and variety of topological spaces.
Equivalences.
Dowker showed, in 1951, the following:
If "X" is a normal T1 space (that is, a T4 space), then the following are equivalent:
Dowker conjectured that there were no Dowker spaces, and the conjecture was not resolved until Mary Ellen Rudin constructed one in 1971. Rudin's counterexample is a very large space (of cardinality formula_0). Zoltán Balogh gave the first ZFC construction of a small (cardinality continuum) example, which was more well-behaved than Rudin's. Using PCF theory, M. Kojman and S. Shelah constructed a subspace of Rudin's Dowker space of cardinality formula_1 that is also Dowker.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\aleph_\\omega^{\\aleph_0}"
},
{
"math_id": 1,
"text": "\\aleph_{\\omega+1}"
}
] | https://en.wikipedia.org/wiki?curid=12037102 |
12037838 | Lens clock | Mechanical device for measuring lenses
A lens clock is a mechanical dial indicator that is used to measure the dioptric power of a lens. It is a specialized version of a spherometer. A lens clock measures the curvature of a surface, but gives the result as an optical power in diopters, assuming the lens is made of a material with a particular refractive index.
How it works.
The lens clock has three pointed probes that make contact with the surface of the lens. The outer two probes are fixed while the center one moves, retracting as the instrument is pressed down on the lens's surface. As the probe retracts, the hand on the face of the dial turns by an amount proportional to the distance.
The optical power formula_0 of the surface is given by
formula_1
where formula_2 is the index of refraction of the glass, formula_3 is the vertical distance ("sagitta") between the center and outer probes, and formula_4 is the horizontal separation of the outer probes. To calculate formula_0 in diopters, both formula_3 and formula_4 must be specified in meters.
A typical lens clock is calibrated to display the power of a crown glass surface, with a refractive index of 1.523. If the lens is made of some other material, the reading must be adjusted to correct for the difference in refractive index.
Measuring both sides of the lens and adding the surface powers together gives the approximate optical power of the whole lens. (This approximation relies on the assumption that the lens is relatively thin.)
Radius of curvature.
The radius of curvature formula_5 of the surface can be obtained from the optical power given by the lens clock using the formula
formula_6
where formula_2 is the index of refraction "for which the lens clock is calibrated", regardless of the actual index of the lens being measured. If the lens is made of glass with some other index formula_7, the true optical power of the surface can be obtained using
formula_8
Example—correcting for refractive index.
A biconcave lens made of flint glass with an index of 1.7 is measured with a lens clock calibrated for crown glass with an index of 1.523. For this particular lens, the lens clock gives surface powers of −3.0 and −7.0 diopters (dpt). Because the clock is calibrated for a different refractive index the optical power of the lens is "not" the sum of the surface powers given by the clock. The optical power of the lens is instead obtained as follows:
First, the radii of curvature are obtained:
formula_9
formula_10
Next, the optical powers of each surface are obtained:
formula_11
formula_12
Finally, if the lens is thin, the powers of each surface can be added to give the approximate optical power of the whole lens: −13.4 diopters. The actual power, as read by a vertometer or lensometer, might differ by as much as 0.1 diopters.
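The index-correction procedure in this example is mechanical enough to script. The Python sketch below simply chains the two formulas above; the function name and interface are ours, for illustration only.

```python
def true_surface_power(clock_reading_dpt, actual_index, calibration_index=1.523):
    """Convert a lens-clock surface reading (diopters) into the true surface power.

    The clock reports (calibration_index - 1) / R; recover the radius R and
    re-apply the actual glass index: phi = (actual_index - 1) / R.
    """
    radius_m = (calibration_index - 1) / clock_reading_dpt
    return (actual_index - 1) / radius_m

# The flint-glass (n = 1.7) biconcave lens from the example above.
readings = (-3.0, -7.0)
powers = [true_surface_power(p, 1.7) for p in readings]
print([round(p, 2) for p in powers], "thin-lens total =", round(sum(powers), 1), "dpt")
# -> [-4.02, -9.37] thin-lens total = -13.4 dpt
```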
Estimating thickness.
A lens clock can also be used to estimate the thickness of thin objects, such as a hard or gas-permeable contact lens. Ideally, a contact lens dial thickness gauge would be used for this, but a lens clock can be used if a dial thickness gauge is not available. To do this, the contact lens is placed concave side up on a table or other hard surface. The lens clock is then brought down on it such that the center prong contacts the lens as close to its center as possible, and the outer prongs rest on the table. The thickness of the lens is then the sagitta formula_3 in the formula above, and can be calculated from the optical power reading, if the distance between the outer prongs is known. | [
{
"math_id": 0,
"text": "\\phi"
},
{
"math_id": 1,
"text": "\\phi = {2 (n-1)s \\over (D/2)^2},"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "s"
},
{
"math_id": 4,
"text": "D"
},
{
"math_id": 5,
"text": "R"
},
{
"math_id": 6,
"text": "R={(n-1) \\over \\phi},"
},
{
"math_id": 7,
"text": "n_2"
},
{
"math_id": 8,
"text": "\\phi={(n_2-1) \\over R}."
},
{
"math_id": 9,
"text": "R_1={(1.523-1) \\over -3.0\\ \\mathrm{dpt}} =-0.174\\ \\mathrm{m}"
},
{
"math_id": 10,
"text": "R_2={(1.523-1) \\over -7.0\\ \\mathrm{dpt}} =-0.0747\\ \\mathrm{m}"
},
{
"math_id": 11,
"text": "\\phi_1={(1.7-1) \\over -0.174\\ \\mathrm{m}}=-4.02\\ \\mathrm{dpt}"
},
{
"math_id": 12,
"text": "\\phi_2={(1.7-1) \\over -0.0747\\ \\mathrm{m}}=-9.37\\ \\mathrm{dpt}"
}
] | https://en.wikipedia.org/wiki?curid=12037838 |
1204123 | Space gun | Method of launching an object into outer space via a large gun or cannon
A space gun, sometimes called a Verne gun because of its appearance in "From the Earth to the Moon" by Jules Verne, is a method of launching an object into space using a large gun- or cannon-like structure. Space guns could thus potentially provide a method of non-rocket spacelaunch. It has been conjectured that space guns could place satellites into Earth's orbit (although after-launch propulsion of the satellite would be necessary to achieve a stable orbit), and could also launch spacecraft beyond Earth's gravitational pull and into other parts of the Solar System by exceeding Earth's escape velocity of about . However, these speeds are too far into the hypersonic range for most practical propulsion systems and also would cause most objects to burn up due to aerodynamic heating or be torn apart by aerodynamic drag.
Therefore, a more likely future use of space guns would be to launch objects into Low Earth orbit, at which point attached rockets could be fired or the objects could be "collected" by maneuverable orbiting satellites.
In Project HARP, a 1960s joint United States and Canada defence project, a U.S. Navy 100 caliber gun was used to fire a projectile at , reaching an apogee of , hence performing a suborbital spaceflight. However, a space gun has never been successfully used to launch an object into orbit or out of Earth's gravitational pull.
Technical issues.
The large g-force likely to be experienced by a ballistic projectile launched in this manner would mean that a space gun would be incapable of safely launching humans or delicate instruments, rather being restricted to freight, fuel or ruggedized satellites.
Getting to orbit.
A space gun by itself is not capable of placing objects into a stable orbit around the object (planet or otherwise) they are launched from. The resulting trajectory is a parabolic orbit, a hyperbolic orbit, or part of an elliptic orbit which ends at the planet's surface at the point of launch or at another point. This means that an uncorrected ballistic payload will always strike the planet within its first orbit unless the velocity is so high as to reach or exceed escape velocity. As a result, all payloads intended to reach a closed orbit need at least to perform some sort of course correction to create another orbit that does not intersect the planet's surface.
A rocket can be used for additional boost, as planned in both Project HARP and the Quicklaunch project. The magnitude of such correction may be small; for instance, the StarTram Generation 1 reference design involves a total of of rocket burn to raise perigee well above the atmosphere when entering an low Earth orbit.
In a three-body or larger system, a gravity assist trajectory might be available such that a carefully aimed escape velocity projectile would have its trajectory modified by the gravitational fields of other bodies in the system such that the projectile would eventually return to orbit the initial planet using only the launch delta-v.
Isaac Newton avoided this objection in his thought experiment by placing his notional cannon atop a tall mountain and positing negligible air resistance. If in a stable orbit, the projectile would circle the planet and return to the altitude of launch after one orbit (see Newton's cannonball).
Acceleration.
For a space gun with a gun barrel of length (formula_0), and the needed velocity (formula_1), the acceleration (formula_2) is provided by the following formula:
formula_3
For instance, with a space gun with a vertical "gun barrel" through both the Earth's crust and the troposphere, totalling ~ of length (formula_0), and a velocity (formula_1) enough to escape the Earth's gravity (escape velocity, which is on Earth), the acceleration (formula_2) would theoretically be more than , which is more than 100 g-forces, which is about 3 times the human tolerance to g-forces of maximum 20 to 35 "g" during the ~10 seconds such a firing would take. This calculation does not take into account the decreasing escape velocity at higher altitudes.
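The estimate can be reproduced with the constant-acceleration relation given above. In the Python sketch below, the escape velocity of about 11.2 km/s is a standard figure, and the barrel lengths are assumed values chosen purely for illustration.

```python
G0 = 9.81  # standard gravity, m/s^2

def barrel_acceleration(v, length_m):
    """Constant acceleration needed to reach speed v (m/s) over a barrel of length_m metres."""
    return v ** 2 / (2 * length_m)

v_escape = 11_200.0                     # approximate Earth escape velocity at the surface, m/s
for length_km in (1, 10, 60):           # assumed barrel lengths, for illustration only
    a = barrel_acceleration(v_escape, length_km * 1000)
    print(f"{length_km:>3} km barrel: {a:,.0f} m/s^2  (~{a / G0:,.0f} g)")
```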
Practical attempts.
V3 Cannon (1944-45).
The German V-3 cannon program, during World War II was an attempt to build something approaching a space gun. Based in the Pas-de-Calais area of France it was planned to be more devastating than the other Nazi 'Vengeance weapons'. The cannon was capable of launching , diameter shells over a distance of . It was destroyed by RAF bombing using Tallboy blockbuster bombs in July 1944.
Super High Altitude Research Project (1985-95).
The US Ballistic Missile Defense program sponsored the Super High Altitude Research Project (SHARP) in the 1980s. Developed at Lawrence Livermore Laboratory, it is a light-gas gun and has been used to test fire objects at Mach 9.
Project Babylon (1988-90).
The most prominent recent attempt to make a space gun was artillery engineer Gerald Bull's Project Babylon, which was also known as the 'Iraqi supergun' by the media. During Project Babylon, Bull used his experience from Project HARP to build a massive cannon for Saddam Hussein, leader of Ba'athist Iraq. Bull was assassinated before the project was completed.
Quicklaunch (1996-2016).
After cancellation of SHARP, lead developer John Hunter founded the Jules Verne Launcher Company in 1996 and the Quicklaunch company. As of September 2012, Quicklaunch was seeking to raise $500 million to build a gun that could refuel a propellant depot or send bulk materials into space.
Ram accelerators have also been proposed as an alternative to light-gas guns. Other proposals use electromagnetic techniques for accelerating the payload, such as coilguns and railguns.
In fiction.
The first publication of the concept may be Newton's cannonball in his 1728 book "A Treatise of the System of the World", although it was primarily used as a thought experiment regarding gravity.
Perhaps the most famous representations of a space gun appear in Jules Verne's 1865 novel "From the Earth to the Moon" and his 1869 novel "Around the Moon" (loosely interpreted into the 1902 film "Le Voyage dans la Lune"), in which astronauts fly to the Moon aboard a ship launched from a cannon. Another famous example is used by the Martians to launch their invasion in H. G. Wells' 1897 book "The War of the Worlds". Wells also used the concept in the climax of the 1936 film "Things to Come". In one of the first Polish Sci-Fi novels, "On the Silver Globe" by Andrzej Żuławski, published in 1903, astronauts are launched to the Moon using a space gun. The device was featured in films as late as 1967, such as "Jules Verne's Rocket to the Moon".
In the 1991 video game "", Percival Lowell builds a space gun to send a spacecraft to Mars.
The 1992 video game "Steel Empire", a shoot 'em up with steampunk aesthetics, features a space gun in its seventh level that is used by the main villain General Styron to launch himself to the Moon.
In Hannu Rajaniemi's 2012 novel "The Fractal Prince", a space gun at the "Jannah-of-the-cannon", powered by a 150-kiloton nuclear bomb, is used to launch a spaceship from Earth.
The 2015 video game "SOMA" features a space gun used to launch satellites.
Gerald Bull's assassination and the Project Babylon gun were also the starting point for Frederick Forsyth's 1994 novel "The Fist of God". In Larry Bond's 2001 novella and 2015 novel "Lash-Up", China uses a space gun to destroy American GPS satellites.
In the 2004 role-playing game "", a village of Bob-ombs operates a space gun to send Paper Mario and company to the X-Naut's base on the Moon.
Gerald Bull and Project Babylon are integral to the plot of Louise Penny's 2015 novel "The Nature of the Beast".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "l"
},
{
"math_id": 1,
"text": "v_e"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": " a = \\frac{v_e^2}{2l} "
}
] | https://en.wikipedia.org/wiki?curid=1204123 |
1204294 | Gluing axiom | Axiom specifying the requisites of a sheaf on a topological space
In mathematics, the gluing axiom is introduced to define what a sheaf formula_0 on a topological space formula_1 must satisfy, given that it is a presheaf, which is by definition a contravariant functor
formula_2
to a category formula_3 which initially one takes to be the category of sets. Here formula_4 is the partial order of open sets of formula_1 ordered by inclusion maps; and considered as a category in the standard way, with a unique morphism
formula_5
if formula_6 is a subset of formula_7, and none otherwise.
As phrased in the sheaf article, there is a certain axiom that formula_8 must satisfy, for any open cover of an open set of formula_1. For example, given open sets formula_6 and formula_7 with union formula_1 and intersection formula_9, the required condition is that
formula_10 is the subset of formula_11 with equal image in formula_12.
In less formal language, a section formula_13 of formula_8 over formula_1 is equally well given by a pair of sections formula_14 on formula_6 and formula_7 respectively, which 'agree' in the sense that formula_15 and formula_16 have a common image in formula_12 under the respective restriction maps
formula_17
and
formula_18.
The first major hurdle in sheaf theory is to see that this "gluing" or "patching" axiom is a correct abstraction from the usual idea in geometric situations. For example, a vector field is a section of a tangent bundle on a smooth manifold; this says that a vector field on the union of two open sets is (no more and no less than) vector fields on the two sets that agree where they overlap.
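As a deliberately simple illustration of the patching idea, the following Python sketch models the presheaf of arbitrary functions on a finite set: open sets are plain Python sets, a section over an open set is a dictionary defined on its points, and the gluing axiom becomes the statement that local sections agreeing on overlaps patch together into a unique section on the union. This is only a finite toy model, not an implementation of sheaf theory.

```python
# Toy model of the gluing axiom for the "sheaf of functions" on a finite set.
# A section over an open set U is a dict mapping each point of U to a value;
# restriction to a smaller open set is just taking a sub-dictionary.

def restrict(section, subset):
    """Restriction map F(U) -> F(V) for V a subset of U."""
    return {point: section[point] for point in subset}

def glue(sections):
    """Given sections over a cover {U_i}, return the unique glued section
    over the union, or None if two of them disagree on an overlap."""
    for s in sections:
        for t in sections:
            overlap = s.keys() & t.keys()
            if restrict(s, overlap) != restrict(t, overlap):
                return None          # incompatible data: no gluing exists
    glued = {}
    for s in sections:
        glued.update(s)              # compatible, so the union is well defined
    return glued

# Two sections on overlapping "open sets" {1, 2, 3} and {3, 4} that agree at 3:
s_U = {1: 'a', 2: 'b', 3: 'c'}
s_V = {3: 'c', 4: 'd'}
print(glue([s_U, s_V]))               # {1: 'a', 2: 'b', 3: 'c', 4: 'd'}

# Changing the value on the overlap destroys compatibility, so gluing fails:
print(glue([s_U, {3: 'x', 4: 'd'}]))  # None
```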
Given this basic understanding, there are further issues in the theory, and some will be addressed here. A different direction is that of the Grothendieck topology, and yet another is the logical status of 'local existence' (see Kripke–Joyal semantics).
Removing restrictions on "C".
To rephrase this definition in a way that will work in any category formula_3 that has sufficient structure, we note that we can write the objects and morphisms involved in the definition above in a diagram which we will call (G), for "gluing":
formula_19
Here the first map is the product of the restriction maps
formula_20
and each pair of arrows represents the two restrictions
formula_21
and
formula_22.
It is worthwhile to note that these maps exhaust all of the possible restriction maps among formula_6, the formula_23, and the formula_24.
The condition for formula_0 to be a sheaf is that for any open set formula_6 and any collection of open sets formula_25 whose union is formula_6, the diagram (G) above is an equalizer.
One way of understanding the gluing axiom is to notice that formula_6 is the colimit of the following diagram:
formula_26
The gluing axiom says that formula_0 turns colimits of such diagrams into limits.
Sheaves on a basis of open sets.
In some categories, it is possible to construct a sheaf by specifying only some of its sections. Specifically, let formula_1 be a topological space with basis formula_27. We can define a category "O"′("X") to be the full subcategory of formula_4 whose objects are the formula_28. A B-sheaf on formula_1 with values in formula_3 is a contravariant functor
formula_29
which satisfies the gluing axiom for sets in formula_30. That is, on a selection of open sets of formula_1, formula_0 specifies all of the sections of a sheaf, and on the other open sets, it is undetermined.
B-sheaves are equivalent to sheaves (that is, the category of sheaves is equivalent to the category of B-sheaves). Clearly a sheaf on formula_1 can be restricted to a B-sheaf. In the other direction, given a B-sheaf formula_0 we must determine the sections of formula_0 on the other objects of formula_4. To do this, note that for each open set formula_6, we can find a collection formula_31 whose union is formula_6. Categorically speaking, this choice makes formula_6 the colimit of the full subcategory of formula_30 whose objects are formula_31. Since formula_0 is contravariant, we define formula_32 to be the limit of the formula_33 with respect to the restriction maps. (Here we must assume that this limit exists in formula_3.) If formula_6 is a basic open set, then formula_6 is a terminal object of the above subcategory of formula_30, and hence formula_34. Therefore, formula_35 extends formula_0 to a presheaf on formula_1. It can be verified that formula_35 is a sheaf, essentially because every element of every open cover of formula_1 is a union of basis elements (by the definition of a basis), and every pairwise intersection of elements in an open cover of formula_1 is a union of basis elements (again by the definition of a basis).
The logic of "C".
The first needs of sheaf theory were for sheaves of abelian groups; so taking the category formula_3 as the category of abelian groups was only natural. In applications to geometry, for example complex manifolds and algebraic geometry, the idea of a "sheaf of local rings" is central. This, however, is not quite the same thing; one speaks instead of a locally ringed space, because it is not true, except in trite cases, that such a sheaf is a functor into a category of local rings. It is the "stalks" of the sheaf that are local rings, not the collections of "sections" (which are rings, but in general are not close to being "local"). We can think of a locally ringed space formula_1 as a parametrised family of local rings, depending on formula_36 in formula_1.
A more careful discussion dispels any mystery here. One can speak freely of a sheaf of abelian groups, or rings, because those are algebraic structures (defined, if one insists, by an explicit signature). Any category formula_3 having finite products supports the idea of a group object, which some prefer just to call a group "in" formula_3. In the case of this kind of purely algebraic structure, we can talk "either" of a sheaf having values in the category of abelian groups, or an "abelian group in the category of sheaves of sets"; it really doesn't matter.
In the local ring case, it does matter. At a foundational level we must use the second style of definition, to describe what a local ring means in a category. This is a logical matter: axioms for a local ring require use of existential quantification, in the form that for any formula_37 in the ring, one of formula_37 and formula_38 is invertible. This allows one to specify what a 'local ring in a category' should be, in the case that the category supports enough structure.
Sheafification.
To turn a given presheaf formula_39 into a sheaf formula_0, there is a standard device called sheafification or sheaving. The rough intuition of what one should do, at least for a presheaf of sets, is to introduce an equivalence relation, which makes equivalent data given by different covers on the overlaps by refining the covers. One approach is therefore to go to the stalks and recover the sheaf space of the "best possible" sheaf formula_0 produced from formula_39.
This use of language strongly suggests that we are dealing here with adjoint functors. Therefore, it makes sense to observe that the sheaves on formula_1 form a full subcategory of the presheaves on formula_1. Implicit in that is the statement that a morphism of sheaves is nothing more than a natural transformation of the sheaves, considered as functors. Therefore, we get an abstract characterisation of sheafification as left adjoint to the inclusion. In some applications, naturally, one does need a description.
In more abstract language, the sheaves on formula_1 form a reflective subcategory of the presheaves (Mac Lane–Moerdijk "Sheaves in Geometry and Logic" p. 86). In topos theory, for a Lawvere–Tierney topology and its sheaves, there is an analogous result (ibid. p. 227).
Other gluing axioms.
The gluing axiom of sheaf theory is rather general. One can note that the Mayer–Vietoris axiom of homotopy theory, for example, is a special case.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal F"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "{\\mathcal F}:{\\mathcal O}(X) \\rightarrow C"
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "{\\mathcal O}(X)"
},
{
"math_id": 5,
"text": "U \\rightarrow V"
},
{
"math_id": 6,
"text": "U"
},
{
"math_id": 7,
"text": "V"
},
{
"math_id": 8,
"text": "F"
},
{
"math_id": 9,
"text": "W"
},
{
"math_id": 10,
"text": "{\\mathcal F}(X)"
},
{
"math_id": 11,
"text": "{\\mathcal F}(U) \\times {\\mathcal F}(V)"
},
{
"math_id": 12,
"text": "{\\mathcal F}(W)"
},
{
"math_id": 13,
"text": "s"
},
{
"math_id": 14,
"text": "(s', s'')"
},
{
"math_id": 15,
"text": "s'"
},
{
"math_id": 16,
"text": "s''"
},
{
"math_id": 17,
"text": "{\\mathcal F}(U) \\rightarrow {\\mathcal F}(W)"
},
{
"math_id": 18,
"text": "{\\mathcal F}(V) \\rightarrow {\\mathcal F}(W)"
},
{
"math_id": 19,
"text": "{\\mathcal F}(U)\\rightarrow\\prod_i{\\mathcal F}(U_i){{{} \\atop \\longrightarrow}\\atop{\\longrightarrow \\atop {}}}\\prod_{i,j}{\\mathcal F}(U_i\\cap U_j)"
},
{
"math_id": 20,
"text": "{res}_{U,U_{i}}:{\\mathcal F}(U)\\rightarrow{\\mathcal F}(U_{i})"
},
{
"math_id": 21,
"text": "res_{U_i,U_i\\cap U_j}:{\\mathcal F}(U_i)\\rightarrow{\\mathcal F}(U_i\\cap U_j)"
},
{
"math_id": 22,
"text": "res_{U_j,U_i\\cap U_j}:{\\mathcal F}(U_j)\\rightarrow{\\mathcal F}(U_i\\cap U_j)"
},
{
"math_id": 23,
"text": "U_i"
},
{
"math_id": 24,
"text": "U_i\\cap U_j"
},
{
"math_id": 25,
"text": "\\{U_i\\}_{i\\in I}"
},
{
"math_id": 26,
"text": "\\coprod_{i,j}U_i\\cap U_j{{{} \\atop \\longrightarrow}\\atop{\\longrightarrow \\atop {}}}\\coprod_iU_i"
},
{
"math_id": 27,
"text": "\\{ B_i \\}_{i \\in I}"
},
{
"math_id": 28,
"text": "\\{ B_i \\}"
},
{
"math_id": 29,
"text": "{\\mathcal F}:{\\mathcal O}'(X) \\rightarrow C"
},
{
"math_id": 30,
"text": "{\\mathcal O}'(X)"
},
{
"math_id": 31,
"text": "\\{ B_j \\}_{j \\in J}"
},
{
"math_id": 32,
"text": "{\\mathcal F}'(U)"
},
{
"math_id": 33,
"text": "\\{ {\\mathcal F}(B_j) \\}_{j \\in J}"
},
{
"math_id": 34,
"text": "{\\mathcal F}'(U) = {\\mathcal F}(U)"
},
{
"math_id": 35,
"text": "{\\mathcal F}'"
},
{
"math_id": 36,
"text": "x"
},
{
"math_id": 37,
"text": "r"
},
{
"math_id": 38,
"text": "1-r"
},
{
"math_id": 39,
"text": "\\mathcal P"
}
] | https://en.wikipedia.org/wiki?curid=1204294 |
1204310 | Linearizability | Property of some operation(s) in concurrent programming
In concurrent programming, an operation (or set of operations) is linearizable if it consists of an ordered list of invocation and response events, that may be extended by adding response events such that:
# the extended list can be re-expressed as a sequential history (i.e. it is serializable), and
# that sequential history is consistent with the sequential definition of the object and preserves the order of non-overlapping operations in the original list.
Informally, this means that the unmodified list of events is linearizable if and only if its invocations were serializable, but some of the responses of the serial schedule have yet to return.
In a concurrent system, processes can access a shared object at the same time. Because multiple processes are accessing a single object, a situation may arise in which while one process is accessing the object, another process changes its contents. Making a system linearizable is one solution to this problem. In a linearizable system, although operations overlap on a shared object, each operation appears to take place instantaneously. Linearizability is a strong correctness condition, which constrains what outputs are possible when an object is accessed by multiple processes concurrently. It is a safety property which ensures that operations do not complete unexpectedly or unpredictably. If a system is linearizable it allows a programmer to reason about the system.
History of linearizability.
Linearizability was first introduced as a consistency model by Herlihy and Wing in 1987. It encompassed more restrictive definitions of atomic, such as "an atomic operation is one which cannot be (or is not) interrupted by concurrent operations", which are usually vague about when an operation is considered to begin and end.
An atomic object can be understood immediately and completely from its sequential definition, as a set of operations run in parallel which always appear to occur one after the other; no inconsistencies may emerge. Specifically, linearizability guarantees that the invariants of a system are "observed" and "preserved" by all operations: if all operations individually preserve an invariant, the system as a whole will.
Definition of linearizability.
A concurrent system consists of a collection of processes communicating through shared data structures or objects. Linearizability is important in these concurrent systems where objects may be accessed by multiple processes at the same time and a programmer needs to be able to reason about the expected results. An execution of a concurrent system results in a "history", an ordered sequence of completed operations.
A "history" is a sequence of "invocations" and "responses" made of an object by a set of threads or processes. An invocation can be thought of as the start of an operation, and the response being the signaled end of that operation. Each invocation of a function will have a subsequent response. This can be used to model any use of an object. Suppose, for example, that two threads, A and B, both attempt to grab a lock, backing off if it's already taken. This would be modeled as both threads invoking the lock operation, then both threads receiving a response, one successful, one not.
A "sequential" history is one in which all invocations have immediate responses; that is the invocation and response are considered to take place instantaneously. A sequential history should be trivial to reason about, as it has no real concurrency; the previous example was not sequential, and thus is hard to reason about. This is where linearizability comes in.
A history is "linearizable" if there is a linear order formula_0 of the completed operations such that:
# for every completed operation in formula_0, the operation returns the same result in the execution as it would if every operation were completed one by one in the order formula_0; and
# if an operation op1 completes (receives its response) before an operation op2 begins (is invoked), then op1 precedes op2 in formula_0.
In other words:
# its invocations and responses can be reordered to yield a sequential history;
# that sequential history is correct according to the sequential definition of the object;
# if a response preceded an invocation in the original history, it must still precede it in the sequential reordering.
Let us look at two ways of reordering the locking example above.
Reordering B's invocation below A's response yields a sequential history. This is easy to reason about, as all operations now happen in an obvious order. Unfortunately, it doesn't match the sequential definition of the object (it doesn't match the semantics of the program): A should have successfully obtained the lock, and B should have subsequently aborted.
Reordering both of B's events before A's invocation instead, so that B successfully acquires the lock and A then fails, yields another correct sequential history. It is also a linearization! Note that the definition of linearizability only precludes responses that precede invocations from being reordered; since the original history had no responses before invocations, we can reorder it as we wish. Hence the original history is indeed linearizable.
An object (as opposed to a history) is linearizable if all valid histories of its use can be linearized. Note that this is a much harder assertion to prove.
Linearizability versus serializability.
Consider another history in which two processes, A and B, interact with a lock and their operations overlap.
This history is not valid because there is a point at which both A and B hold the lock; moreover, it cannot be reordered to a valid sequential history without violating the ordering rule. Therefore, it is not linearizable. However, under serializability, B's unlock operation may be moved to "before" A's original lock, which is a valid history (assuming the object begins the history in a locked state).
This reordering is sensible provided there is no alternative means of communicating between A and B. Linearizability is better when considering individual objects separately, as the reordering restrictions ensure that multiple linearizable objects are, considered as a whole, still linearizable.
Linearization points.
This definition of linearizability is equivalent to the following:
This alternative is usually much easier to prove. It is also much easier to reason about as a user, largely due to its intuitiveness. This property of occurring instantaneously, or indivisibly, leads to the use of the term "atomic" as an alternative to the longer "linearizable".
In the examples below, the linearization point of the counter built on compare-and-swap is the linearization point of the first (and only) successful compare-and-swap update. The counter built using locking can be considered to linearize at any moment while the locks are held, since any potentially conflicting operations are excluded from running during that period.
Primitive atomic instructions.
Processors have instructions that can be used to implement locking and lock-free and wait-free algorithms. The ability to temporarily inhibit interrupts, ensuring that the currently running process cannot be context switched, also suffices on a uniprocessor. These instructions are used directly by compiler and operating system writers but are also abstracted and exposed as bytecodes and library functions in higher-level languages; typical examples include atomic read/write, test-and-set, fetch-and-add, compare-and-swap, and load-link/store-conditional.
Most processors include store operations that are not atomic with respect to memory. These include multiple-word stores and string operations. Should a high priority interrupt occur when a portion of the store is complete, the operation must be completed when the interrupt level is returned. The routine that processes the interrupt must not modify the memory being changed. It is important to take this into account when writing interrupt routines.
When there are multiple instructions which must be completed without interruption, a CPU instruction which temporarily disables interrupts is used. This must be kept to only a few instructions and the interrupts must be re-enabled to avoid unacceptable response time to interrupts or even losing interrupts. This mechanism is not sufficient in a multi-processor environment since each CPU can interfere with the process regardless of whether interrupts occur or not. Further, in the presence of an instruction pipeline, uninterruptible operations present a security risk, as they can potentially be chained in an infinite loop to create a denial of service attack, as in the Cyrix coma bug.
The C standard and SUSv3 provide codice_0 for simple atomic reads and writes; incrementing or decrementing is not guaranteed to be atomic. More complex atomic operations are available in C11, which provides codice_1. Compilers use the hardware features or more complex methods to implement the operations; an example is libatomic of GCC.
The ARM instruction set provides codice_2 and codice_3 instructions which can be used to implement atomic memory access by using exclusive monitors implemented in the processor to track memory accesses for a specific address. However, if a context switch occurs between calls to codice_2 and codice_3, the documentation notes that codice_3 will fail, indicating the operation should be retried. In the case of 64-bit ARMv8-A architecture, it provides codice_7 and codice_8 instructions for byte, half-word, word, and double-word size.
High-level atomic operations.
The easiest way to achieve linearizability is running groups of primitive operations in a critical section. Strictly, independent operations can then be carefully permitted to overlap their critical sections, provided this does not violate linearizability. Such an approach must balance the cost of large numbers of locks against the benefits of increased parallelism.
Another approach, favoured by researchers (but not yet widely used in the software industry), is to design a linearizable object using the native atomic primitives provided by the hardware. This has the potential to maximise available parallelism and minimise synchronisation costs, but requires mathematical proofs which show that the objects behave correctly.
A promising hybrid of these two is to provide a transactional memory abstraction. As with critical sections, the user marks sequential code that must be run in isolation from other threads. The implementation then ensures the code executes atomically. This style of abstraction is common when interacting with databases; for instance, when using the Spring Framework, annotating a method with @Transactional will ensure all enclosed database interactions occur in a single database transaction. Transactional memory goes a step further, ensuring that all memory interactions occur atomically. As with database transactions, issues arise regarding composition of transactions, especially database and in-memory transactions.
A common theme when designing linearizable objects is to provide an all-or-nothing interface: either an operation succeeds completely, or it fails and does nothing. (ACID databases refer to this principle as atomicity.) If the operation fails (usually due to concurrent operations), the user must retry, usually performing a different operation. For example, if a compare-and-swap fails because another thread changed the location first, the caller typically re-reads the location and retries the update with the fresh value.
Examples of linearizability.
Counters.
To demonstrate the power and necessity of linearizability we will consider a simple counter which different processes can increment.
We would like to implement a counter object which multiple processes can access. Many common systems make use of counters to keep track of the number of times an event has occurred.
The counter object can be accessed by multiple processes and has two available operations: increment, which adds 1 to the value stored in the counter, and read, which returns the current value stored in the counter.
We will attempt to implement this counter object using shared registers.
Our first attempt, which we will see is non-linearizable, has the following implementation using one shared register among the processes.
Non-atomic.
The naive, non-atomic implementation:
Increment:
# Read the value in the register R;
# add one to the value;
# write the new value back into register R.
Read:
Read register R
This simple implementation is not linearizable, as is demonstrated by the following example.
Imagine two processes are running accessing the single counter object initialized to have value 0:
# The first process reads the value in the register R (0) and is then suspended before it can write anything back;
# the second process reads the value in the register R (0), adds one to it, and writes 1 back into the register R.
The second process is finished running and the first process continues running from where it left off:
# The first process adds one to the stale value 0 it read earlier and writes 1 back into the register R.
In the above example, two processes invoked an increment command, however the value of the object only increased from 0 to 1, instead of 2 as it should have. One of the increment operations was lost as a result of the system not being linearizable.
The above example shows the need for carefully thinking through implementations of data structures and how linearizability can have an effect on the correctness of the system.
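The lost update described above is easy to reproduce in practice. The following Python sketch runs two threads that each perform the naive read-add-write increment many times on a shared variable; the call to time.sleep(0) merely yields the processor between the read and the write to widen the race window, so the final total typically falls short of the expected value.

```python
import threading
import time

counter = 0          # the shared "register R"

def naive_increment(times):
    global counter
    for _ in range(times):
        value = counter          # 1. read the value in the register
        time.sleep(0)            # yield, widening the window for a race
        counter = value + 1      # 2. write back value + 1 (updates can be lost)

threads = [threading.Thread(target=naive_increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With two threads doing 100,000 increments each, the correct total is
# 200,000, but lost updates typically leave the counter well short of that.
print("expected 200000, got", counter)
```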
Atomic.
To implement a linearizable or atomic counter object we will modify our previous implementation so each process P_i will use its own register R_i.
Each process increments and reads according to the following algorithm:
Increment:
# Read the value in its own register R_i;
# add one to the value;
# write the new value back into R_i.
Read:
# Read the registers R_1, R_2, ..., R_n of all processes;
# return the sum of the values read.
This implementation solves the problem with our original implementation. In this system the increment operations are linearized at the write step. The linearization point of an increment operation is when that operation writes the new value in its register R_i. The read operations are linearized to a point in the system when the value returned by the read is equal to the sum of all the values stored in each register R_i.
This is a trivial example. In a real system, the operations can be more complex and the errors introduced extremely subtle. For example, reading a 64-bit value from memory may actually be implemented as two sequential reads of two 32-bit memory locations. If a process has only read the first 32 bits, and before it reads the second 32 bits the value in memory gets changed, it will have neither the original value nor the new value but a mixed-up value.
Furthermore, the specific order in which the processes run can change the results, making such an error difficult to detect, reproduce and debug.
Compare-and-swap.
Most systems provide an atomic compare-and-swap instruction that reads from a memory location, compares the value with an "expected" one provided by the user, and writes out a "new" value if the two match, returning whether the update succeeded. We can use this to fix the non-atomic counter algorithm as follows:
# Read the value in the memory location;
# add one to the value;
# use compare-and-swap to write the incremented value back;
# retry if the value read in by the compare-and-swap did not match the value we originally read.
Since the compare-and-swap occurs (or appears to occur) instantaneously, if another process updates the location while we are in-progress, the compare-and-swap is guaranteed to fail.
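Python exposes no hardware compare-and-swap instruction, so the sketch below simulates one with a small helper class whose internal lock merely stands in for the atomicity that the real instruction would provide. The point of the example is the retry-loop structure of the algorithm, not the primitive itself.

```python
import threading

class SimulatedCAS:
    """Stand-in for a hardware compare-and-swap word; the internal lock only
    simulates the atomicity the real instruction would provide."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        """Atomically set the value to `new` if it still equals `expected`.
        Returns True on success, False if another thread got there first."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def cas_increment(word):
    while True:
        old = word.load()                        # 1. read the current value
        if word.compare_and_swap(old, old + 1):  # 2-3. attempt the atomic update
            return                               # success: linearization point
        # 4. another thread updated the word in between, so retry

def worker(word, times):
    for _ in range(times):
        cas_increment(word)

word = SimulatedCAS()
threads = [threading.Thread(target=worker, args=(word, 100_000)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("expected 200000, got", word.load())   # always 200000
```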
Fetch-and-increment.
Many systems provide an atomic fetch-and-increment instruction that reads from a memory location, unconditionally writes a new value (the old value plus one), and returns the old value.
We can use this to fix the non-atomic counter algorithm as follows:
# Use fetch-and-increment to read the old value and write the incremented value back.
Using fetch-and-increment is always better (it requires fewer memory references) for some algorithms, such as the one shown here, than compare-and-swap, even though Herlihy earlier proved that compare-and-swap is better for certain other algorithms that can't be implemented at all using only fetch-and-increment.
So CPU designs with both fetch-and-increment and compare-and-swap (or equivalent instructions) may be a better choice than ones with only one or the other.
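The same caveat applies here: the fetch-and-increment primitive below is simulated with a lock standing in for the hardware instruction. What the sketch is meant to show is that, unlike the compare-and-swap version, the calling code needs no retry loop.

```python
import threading

class SimulatedFetchAndIncrement:
    """Stand-in for a hardware fetch-and-increment word; the lock merely
    simulates the atomicity of the real instruction."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_and_increment(self):
        with self._lock:
            old = self._value
            self._value = old + 1
            return old               # the old value is returned unconditionally

word = SimulatedFetchAndIncrement()

def worker(times):
    for _ in range(times):
        word.fetch_and_increment()   # a single call, no retry loop needed

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("expected 200000, got", word.fetch_and_increment())  # prints 200000
```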
Locking.
Another approach is to turn the naive algorithm into a critical section, preventing other threads from disrupting it, using a lock. Once again fixing the non-atomic counter algorithm:
# Acquire a lock, excluding other threads from running the critical section (steps 2–4) at the same time;
# read the value in the memory location;
# add one to the value;
# write the incremented value back to the memory location;
# release the lock.
This strategy works as expected; the lock prevents other threads from updating the value until it is released. However, when compared with direct use of atomic operations, it can suffer from significant overhead due to lock contention. To improve program performance, it may therefore be a good idea to replace simple critical sections with atomic operations for non-blocking synchronization (as we have just done for the counter with compare-and-swap and fetch-and-increment), instead of the other way around, but unfortunately a significant improvement is not guaranteed and lock-free algorithms can easily become too complicated to be worth the effort.
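Python's built-in threading.Lock makes the lock-based version direct to write down; the critical section in the sketch below corresponds to steps 2-4 above.

```python
import threading

counter = 0
lock = threading.Lock()

def locked_increment(times):
    global counter
    for _ in range(times):
        with lock:               # 1. acquire the lock (enter the critical section)
            value = counter      # 2. read the value
            value += 1           # 3. add one
            counter = value      # 4. write the incremented value back
        # 5. the lock is released automatically on leaving the `with` block

threads = [threading.Thread(target=locked_increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("expected 200000, got", counter)   # always 200000
```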
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma"
}
] | https://en.wikipedia.org/wiki?curid=1204310 |
12044399 | List decoding | In coding theory, list decoding is an alternative to unique decoding of error-correcting codes for large error rates. The notion was proposed by Elias in the 1950s. The main idea behind list decoding is that the decoding algorithm instead of outputting a single possible message outputs a list of possibilities one of which is correct. This allows for handling a greater number of errors than that allowed by unique decoding.
The unique decoding model in coding theory, which is constrained to output a single valid codeword from the received word could not tolerate a greater fraction of errors. This resulted in a gap between the error-correction performance for stochastic noise models (proposed by Shannon) and the adversarial noise model (considered by Richard Hamming). Since the mid 90s, significant algorithmic progress by the coding theory community has bridged this gap. Much of this progress is based on a relaxed error-correction model called list decoding, wherein the decoder outputs a list of codewords for worst-case pathological error patterns where the actual transmitted codeword is included in the output list. In case of typical error patterns though, the decoder outputs a unique single codeword, given a received word, which is almost always the case (However, this is not known to be true for all codes). The improvement here is significant in that the error-correction performance doubles. This is because now the decoder is not confined by the half-the-minimum distance barrier. This model is very appealing because having a list of codewords is certainly better than just giving up. The notion of list-decoding has many interesting applications in complexity theory.
The way the channel noise is modeled plays a crucial role in that it governs the rate at which reliable communication is possible. There are two main schools of thought in modeling the channel behavior:
The highlight of list-decoding is that even under adversarial noise conditions, it is possible to achieve the information-theoretic optimal trade-off between rate and fraction of errors that can be corrected. Hence, in a sense this is like improving the error-correction performance to that possible in case of a weaker, stochastic noise model.
Mathematical formulation.
Let formula_0 be a formula_1 error-correcting code; in other words, formula_0 is a code of length formula_2, dimension formula_3 and minimum distance formula_4 over an alphabet formula_5 of size formula_6. The list-decoding problem can now be formulated as follows:
Input: Received word formula_7, error bound formula_8
Output: A list of all codewords formula_9 whose Hamming distance from formula_10 is at most formula_8.
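The definition above translates directly into a brute-force procedure for small codes: enumerate every codeword and keep those within Hamming distance formula_8 of the received word. The Python sketch below does this for a toy binary code (each 2-bit message repeated three times); it is meant only to illustrate the problem statement, not the efficient list-decoding algorithms discussed later in this article.

```python
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def brute_force_list_decode(code, received, e):
    """Return every codeword within Hamming distance e of the received word."""
    return [c for c in code if hamming(c, received) <= e]

# Toy code: each 2-bit message repeated three times, giving a binary code
# with length n = 6, dimension k = 2 and minimum distance d = 3.
code = [m * 3 for m in product((0, 1), repeat=2)]

received = (0, 1, 0, 1, 0, 0)
print(brute_force_list_decode(code, received, e=1))  # one codeword: unique decoding
print(brute_force_list_decode(code, received, e=2))  # two codewords: a genuine list
```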
Motivation for list decoding.
Given a received word formula_11, which is a noisy version of some transmitted codeword formula_12, the decoder tries to output the transmitted codeword by placing its bet on a codeword that is “nearest” to the received word. The Hamming distance between two codewords is used as a metric in finding the nearest codeword, given the received word by the decoder. If formula_4 is the minimum Hamming distance of a code formula_0, then there exists two codewords formula_13 and formula_14 that differ in exactly formula_4 positions. Now, in the case where the received word formula_11 is equidistant from the codewords formula_13 and formula_14, unambiguous decoding becomes impossible as the decoder cannot decide which one of formula_13 and formula_14 to output as the original transmitted codeword. As a result, the half-the minimum distance acts as a combinatorial barrier beyond which unambiguous error-correction is impossible, if we only insist on unique decoding. However, received words such as formula_11 considered above occur only in the worst-case and if one looks at the way Hamming balls are packed in high-dimensional space, even for error patterns formula_8 beyond half-the minimum distance, there is only a single codeword formula_12 within Hamming distance formula_8 from the received word. This claim has been shown to hold with high probability for a random code picked from a natural ensemble and more so for the case of Reed–Solomon codes which is well studied and quite ubiquitous in the real world applications. In fact, Shannon's proof of the capacity theorem for "q"-ary symmetric channels can be viewed in light of the above claim for random codes.
Under the mandate of list-decoding, for worst-case errors, the decoder is allowed to output a small list of codewords. With some context specific or side information, it may be possible to prune the list and recover the original transmitted codeword. Hence, in general, this seems to be a stronger error-recovery model than unique decoding.
List-decoding potential.
For a polynomial-time list-decoding algorithm to exist, we need the combinatorial guarantee that any Hamming ball of radius formula_15 around a received word formula_16 (where formula_17 is the fraction of errors in terms of the block length formula_2) has a small number of codewords. This is because the list size itself is clearly a lower bound on the running time of the algorithm. Hence, we require the list size to be a polynomial in the block length formula_2 of the code. A combinatorial consequence of this requirement is that it imposes an upper bound on the rate of a code. List decoding promises to meet this upper bound. It has been shown non-constructively that codes of rate formula_18 exist that can be list decoded up to a fraction of errors approaching formula_19. The quantity formula_19 is referred to in the literature as the list-decoding capacity. This is a substantial gain compared to the unique decoding model as we now have the potential to correct twice as many errors. Naturally, we need to have at least a fraction formula_18 of the transmitted symbols to be correct in order to recover the message. This is an information-theoretic lower bound on the number of correct symbols required to perform decoding and with list decoding, we can potentially achieve this information-theoretic limit. However, to realize this potential, we need explicit codes (codes that can be constructed in polynomial time) and efficient algorithms to perform encoding and decoding.
("p", "L")-list-decodability.
For any error fraction formula_20 and an integer formula_21, a code formula_22 is said to be list decodable up to a fraction formula_17 of errors with list size at most formula_23 or formula_24-list-decodable if for every formula_25, the number of codewords formula_26 within Hamming distance formula_27 from formula_11 is at most formula_28
Combinatorics of list decoding.
The relation between list decodability of a code and other fundamental parameters such as minimum distance and rate have been fairly well studied. It has been shown that every code can be list decoded using small lists beyond half the minimum distance up to a bound called the Johnson radius. This is quite significant because it proves the existence of formula_24-list-decodable codes of good rate with a list-decoding radius much larger than formula_29 In other words, the Johnson bound rules out the possibility of having a large number of codewords in a Hamming ball of radius slightly greater than formula_30 which means that it is possible to correct far more errors with list decoding.
Theorem (List-Decoding Capacity). Let formula_31 and formula_32 The following two statements hold for large enough block length formula_2.
i) If formula_33, then there exists a formula_34-list decodable code.
ii) If formula_35, then every formula_24-list-decodable code has formula_36.
Where
formula_37
is the formula_6-ary entropy function defined for formula_38 and extended by continuity to formula_39
List-decoding capacity.
What this means is that for rates approaching the channel capacity, there exists list decodable codes with polynomial sized lists enabling efficient decoding algorithms whereas for rates exceeding the channel capacity, the list size becomes exponential which rules out the existence of efficient decoding algorithms.
The proof for list-decoding capacity is a significant one in that it exactly matches the capacity of a formula_6-ary symmetric channel formula_40. In fact, the term "list-decoding capacity" should actually be read as the capacity of an adversarial channel under list decoding. Also, the proof for list-decoding capacity is an important result that pin points the optimal trade-off between rate of a code and the fraction of errors that can be corrected under list decoding.
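As a quick numerical illustration of the theorem, the Python sketch below evaluates the formula_6-ary entropy function defined above and the resulting quantity 1 - H_q(p) for a few arbitrarily chosen values of q and p.

```python
from math import log

def H(q, p):
    """q-ary entropy function H_q(p) for 0 < p < 1."""
    if p == 0:
        return 0.0
    return (p * log(q - 1, q)
            - p * log(p, q)
            - (1 - p) * log(1 - p, q))

for q, p in [(2, 0.11), (2, 0.25), (4, 0.25)]:
    capacity = 1 - H(q, p)   # best achievable rate under list decoding
    print(f"q={q}, p={p}: H_q(p) = {H(q, p):.4f}, 1 - H_q(p) = {capacity:.4f}")

# For q = 2 and p = 0.11 the capacity comes out to about 0.5: rate-1/2 binary
# codes can, in principle, be list decoded from an 11% fraction of adversarial
# errors, whereas unique decoding can never go beyond half the minimum
# distance of the code.
```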
Sketch of proof.
The idea behind the proof is similar to that of Shannon's proof for capacity of the binary symmetric channel formula_41 where a random code is picked and showing that it is formula_24-list-decodable with high probability as long as the rate formula_42 For rates exceeding the above quantity, it can be shown that the list size formula_23 becomes super-polynomially large.
A "bad" event is defined as one in which, given a received word formula_43 and formula_44 messages formula_45 it so happens that formula_46, for every formula_47 where formula_17 is the fraction of errors that we wish to correct and formula_48 is the Hamming ball of radius formula_49 with the received word formula_50 as the center.
Now, the probability that a codeword formula_51 associated with a fixed message formula_52 lies in a Hamming ball formula_53 is given by
formula_54
where the quantity formula_55 is the volume of a Hamming ball of radius formula_49 with the received word formula_50 as the center. The inequality in the above relation follows from the upper bound on the volume of a Hamming ball. The quantity formula_56 gives a very good estimate on the volume of a Hamming ball of radius formula_17 centered on any word in formula_57 Put another way, the volume of a Hamming ball is translation invariant. To continue with the proof sketch, we invoke the union bound from probability theory, which tells us that the probability of a bad event happening for a given formula_58 is upper bounded by the quantity formula_59.
With the above in mind, the probability of "any" bad event happening can be shown to be less than formula_60. To show this, we work our way over all possible received words formula_61 and every possible subset of formula_23 messages in formula_62
Now turning to the proof of part (ii), we need to show that there are super-polynomially many codewords around every formula_63 when the rate exceeds the list-decoding capacity. We need to show that formula_64 is super-polynomially large if the rate formula_35. Fix a codeword formula_65. Now, for every formula_63 picked at random, we have
formula_66
since Hamming balls are translation invariant. From the definition of the volume of a Hamming ball and the fact that formula_50 is chosen uniformly at random from formula_67 we also have
formula_68
Let us now define an indicator variable formula_69 such that
formula_70
Taking the expectation of the volume of a Hamming ball we have
formula_71
Therefore, by the probabilistic method, we have shown that if the rate exceeds the list-decoding capacity, then the list size becomes super-polynomially large. This completes the proof sketch for the list-decoding capacity.
List decodability of Reed-Solomon Codes.
In 2023, building upon three seminal works, coding theorists showed that, with high probability, Reed-Solomon codes defined over random evaluation points are list decodable up to the list-decoding capacity over linear size alphabets.
List-decoding algorithms.
In the period from 1995 to 2007, the coding theory community developed progressively more efficient list-decoding algorithms. Algorithms exist for Reed–Solomon codes that can decode up to the Johnson radius, which is formula_72 where formula_73 is the normalised distance or relative distance. However, for Reed-Solomon codes, formula_74 which means a fraction formula_75 of errors can be corrected. Some of the most prominent list-decoding algorithms are the following (a numerical comparison of the error fractions they achieve is sketched below):
# Sudan '95: the first polynomial-time algorithm to list decode Reed–Solomon codes beyond half the minimum distance at low rates, correcting up to a fraction formula_76 of errors.
# Guruswami–Sudan '98: an improved interpolation-based algorithm that list decodes Reed–Solomon codes up to the Johnson radius, i.e. a fraction formula_77 of errors.
# Parvaresh–Vardy '05: codes built from formula_78 correlated polynomials whose list-decoding radius improves on formula_77 at low rates.
# Guruswami–Rudra '06: folded Reed–Solomon codes, which achieve the list-decoding capacity by correcting up to a fraction formula_79 of errors for any formula_80.
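The following Python sketch tabulates, for a few example rates, the three error fractions that recur in this discussion for Reed-Solomon codes: the unique-decoding radius (1 - R)/2, the Johnson radius 1 - sqrt(R) attained by the Guruswami–Sudan algorithm, and the list-decoding capacity 1 - R achieved by folded Reed-Solomon codes.

```python
# Fraction of errors correctable for a Reed-Solomon code of rate R under the
# three regimes discussed above.
for R in (0.1, 0.25, 0.5, 0.75):
    unique = (1 - R) / 2      # classical unique-decoding radius
    johnson = 1 - R ** 0.5    # 1 - sqrt(R), reached by Guruswami-Sudan
    capacity = 1 - R          # list-decoding capacity, reached by folded RS codes
    print(f"R={R:.2f}: unique={unique:.3f}, "
          f"Guruswami-Sudan={johnson:.3f}, capacity={capacity:.3f}")
```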
Because of their ubiquity and the nice algebraic properties they possess, list-decoding algorithms for Reed–Solomon codes were a main focus of researchers. The list-decoding problem for Reed–Solomon codes can be formulated as follows:
Input: For an formula_81 Reed-Solomon code, we are given the pair formula_82 for formula_83, where formula_84 is the formula_85th symbol of the received word and the formula_86's are distinct points in the finite field formula_87 and an error parameter formula_88.
Output: The goal is to find all the polynomials formula_89 of degree at most formula_90 (the message length) such that formula_91 for at least formula_92 values of formula_93. Here, we would like to have formula_92 as small as possible so that a greater number of errors can be tolerated.
With the above formulation, the general structure of list-decoding algorithms for Reed-Solomon codes is as follows:
Step 1: (Interpolation) Find a non-zero bivariate polynomial formula_94 such that formula_95 for formula_83.
Step 2: (Root finding/Factorization) Output all degree formula_90 polynomials formula_96 such that formula_97 is a factor of formula_94 i.e. formula_98. For each of these polynomials, check if formula_99 for at least formula_92 values of formula_100. If so, include such a polynomial formula_96 in the output list.
Given the fact that bivariate polynomials can be factored efficiently, the above algorithm runs in polynomial time.
Applications in complexity theory and cryptography.
Algorithms developed for list decoding of several interesting code families have found interesting applications in computational complexity and the field of cryptography. Following is a sample list of applications outside of coding theory:
# Construction of hard-core predicates from one-way functions (the Goldreich–Levin theorem can be viewed as list decoding of the Hadamard code).
# Hardness amplification and worst-case to average-case reductions for Boolean functions.
# Constructions of randomness extractors and pseudorandom generators.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{C}"
},
{
"math_id": 1,
"text": "(n,k,d)_q"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "d"
},
{
"math_id": 5,
"text": "\\Sigma"
},
{
"math_id": 6,
"text": "q"
},
{
"math_id": 7,
"text": "x \\in \\Sigma^{n}"
},
{
"math_id": 8,
"text": "e"
},
{
"math_id": 9,
"text": "x_{1},x_{2},\\ldots,x_{m} \\in \\mathcal{C}"
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "y"
},
{
"math_id": 12,
"text": "c"
},
{
"math_id": 13,
"text": "c_1"
},
{
"math_id": 14,
"text": "c_2"
},
{
"math_id": 15,
"text": "pn "
},
{
"math_id": 16,
"text": "r"
},
{
"math_id": 17,
"text": "p"
},
{
"math_id": 18,
"text": "R"
},
{
"math_id": 19,
"text": "1-R"
},
{
"math_id": 20,
"text": "0 \\leqslant p \\leqslant 1"
},
{
"math_id": 21,
"text": "L \\geqslant 1"
},
{
"math_id": 22,
"text": "\\mathcal{C} \\subseteq \\Sigma^{n}"
},
{
"math_id": 23,
"text": "L"
},
{
"math_id": 24,
"text": "(p, L)"
},
{
"math_id": 25,
"text": "y \\in \\Sigma^{n}"
},
{
"math_id": 26,
"text": " c \\in C "
},
{
"math_id": 27,
"text": "pn"
},
{
"math_id": 28,
"text": "L."
},
{
"math_id": 29,
"text": "\\tfrac{d}{2}."
},
{
"math_id": 30,
"text": "\\tfrac{d}{2}"
},
{
"math_id": 31,
"text": " q \\geqslant 2, 0 \\leqslant p \\leqslant 1 - \\tfrac{1}{q} "
},
{
"math_id": 32,
"text": " \\epsilon \\geqslant 0."
},
{
"math_id": 33,
"text": " R \\leqslant 1 - H_q(p) - \\epsilon "
},
{
"math_id": 34,
"text": "(p, O(1 / \\epsilon))"
},
{
"math_id": 35,
"text": " R \\geqslant 1 - H_q(p) + \\epsilon "
},
{
"math_id": 36,
"text": " L = q^{\\Omega(n)}"
},
{
"math_id": 37,
"text": " H_q(p) = p\\log_q(q - 1) - p\\log_qp - (1 - p)\\log_q (1 - p)"
},
{
"math_id": 38,
"text": "p \\in (0,1)"
},
{
"math_id": 39,
"text": "[0,1]."
},
{
"math_id": 40,
"text": "qSC_{p}"
},
{
"math_id": 41,
"text": " BSC_p "
},
{
"math_id": 42,
"text": " R \\leqslant 1 - H_q(p) - \\tfrac{1}{L}."
},
{
"math_id": 43,
"text": "y \\in [q]^n"
},
{
"math_id": 44,
"text": "L+1"
},
{
"math_id": 45,
"text": "m_0, \\ldots, m_L \\in [q]^k,"
},
{
"math_id": 46,
"text": "\\mathcal{C}(m_i) \\in B(y, pn)"
},
{
"math_id": 47,
"text": " 0 \\leqslant i \\leqslant L "
},
{
"math_id": 48,
"text": "B(y, pn)"
},
{
"math_id": 49,
"text": " pn "
},
{
"math_id": 50,
"text": " y "
},
{
"math_id": 51,
"text": " \\mathcal{C}(m_i)"
},
{
"math_id": 52,
"text": " m_i \\in [q]^k "
},
{
"math_id": 53,
"text": " B(y, pn) "
},
{
"math_id": 54,
"text": " \\Pr \\left [C(m_i) \\in B(y, pn) \\right ] = \\frac{\\mathrm{Vol}_q(y, pn)}{q^n} \\leqslant q^{-n(1 - H_q(p))}, "
},
{
"math_id": 55,
"text": " Vol_q(y, pn)"
},
{
"math_id": 56,
"text": " q^{H_q(p)}"
},
{
"math_id": 57,
"text": "[q]^n."
},
{
"math_id": 58,
"text": " (y, m_0, \\dots , m_L) "
},
{
"math_id": 59,
"text": " q^{-n(L + 1) (1 - H_q(p))} "
},
{
"math_id": 60,
"text": "1"
},
{
"math_id": 61,
"text": " y \\in [q]^n "
},
{
"math_id": 62,
"text": "[q]^k."
},
{
"math_id": 63,
"text": "y \\in [q]^n "
},
{
"math_id": 64,
"text": "|\\mathcal{C} \\cap B(y, pn)| "
},
{
"math_id": 65,
"text": " c \\in \\mathcal{C}"
},
{
"math_id": 66,
"text": " \\Pr[c \\in B(y, pn)] = \\Pr[y \\in B(c, pn)]"
},
{
"math_id": 67,
"text": "[q]^n"
},
{
"math_id": 68,
"text": " \\Pr[c \\in B(y, pn)] = \\Pr[y \\in B(c, pn)] = \\frac{\\mathrm{Vol}(y, pn)}{q^n} \\geqslant q^{-n(1-H_q(p)) - o(n)}"
},
{
"math_id": 69,
"text": " X_c "
},
{
"math_id": 70,
"text": "X_c = \\begin{cases} 1 & c \\in B(y, pn) \\\\ 0 & \\text{otherwise} \\end{cases}"
},
{
"math_id": 71,
"text": "\\begin{align}\nE[|B(y, pn)|] & = \\sum_{c \\in \\mathcal{C}} E[X_c]\\\\[4pt]\n& = \\sum_{c \\in \\mathcal{C}} \\Pr[X_c = 1] \\\\[4pt]\n& \\geqslant \\sum q^{-n(1 - H_q(p) + o(n))} \\\\[4pt]\n& = \\sum q^{n(R - 1 + H_q(p) + o(1))} \\\\[4pt]\n& \\geqslant q^{\\Omega(n)}\n\\end{align} "
},
{
"math_id": 72,
"text": " 1 - \\sqrt{1 - \\delta} "
},
{
"math_id": 73,
"text": " \\delta "
},
{
"math_id": 74,
"text": " \\delta = 1 - R "
},
{
"math_id": 75,
"text": " 1 - \\sqrt{R}"
},
{
"math_id": 76,
"text": " 1 - \\sqrt{2R} "
},
{
"math_id": 77,
"text": "1 - \\sqrt{R}"
},
{
"math_id": 78,
"text": "m \\geqslant 1"
},
{
"math_id": 79,
"text": "1-R-\\epsilon"
},
{
"math_id": 80,
"text": "\\epsilon>0"
},
{
"math_id": 81,
"text": " [n, k + 1]_q "
},
{
"math_id": 82,
"text": " (\\alpha_i, y_i) "
},
{
"math_id": 83,
"text": " 1 \\leq i \\leq n "
},
{
"math_id": 84,
"text": " y_i "
},
{
"math_id": 85,
"text": "i"
},
{
"math_id": 86,
"text": "\\alpha_i "
},
{
"math_id": 87,
"text": " F_q "
},
{
"math_id": 88,
"text": " e = n - t "
},
{
"math_id": 89,
"text": " P(X) \\in F_q[X] "
},
{
"math_id": 90,
"text": " k "
},
{
"math_id": 91,
"text": " p(\\alpha_i) = y_i"
},
{
"math_id": 92,
"text": " t "
},
{
"math_id": 93,
"text": " i "
},
{
"math_id": 94,
"text": "Q(X,Y)"
},
{
"math_id": 95,
"text": " Q(\\alpha_i, y_i) = 0 "
},
{
"math_id": 96,
"text": " p(X) "
},
{
"math_id": 97,
"text": " Y - p(X) "
},
{
"math_id": 98,
"text": "Q(X,p(X)) = 0"
},
{
"math_id": 99,
"text": " p(\\alpha_i) = y_i "
},
{
"math_id": 100,
"text": " i \\in [n] "
}
] | https://en.wikipedia.org/wiki?curid=12044399 |
1205131 | Regge theory | Study of the analytic properties of scattering amplitudes
In quantum physics, Regge theory ( , ) is the study of the analytic properties of scattering as a function of angular momentum, where the angular momentum is not restricted to be an integer multiple of "ħ" but is allowed to take any complex value. The nonrelativistic theory was developed by Tullio Regge in 1959.
Details.
The simplest example of Regge poles is provided by the quantum mechanical treatment of the Coulomb potential formula_0 or, phrased differently, by the quantum mechanical treatment of the binding or scattering of an electron of mass formula_1 and electric charge formula_2 off a proton of mass formula_3 and charge formula_4. The energy formula_5 of the binding of the electron to the proton is negative whereas for scattering the energy is positive. The formula for the binding energy is the expression
formula_6
where formula_7, formula_8 is the Planck constant, and formula_9 is the permittivity of the vacuum. The principal quantum number formula_10 is in quantum mechanics (by solution of the radial Schrödinger equation) found to be given by formula_11, where formula_12 is the radial quantum number and formula_13 the quantum number of the orbital angular momentum. Solving the above equation for formula_14, one obtains the equation
formula_15
Considered as a complex function of formula_5 this expression describes in the complex formula_14-plane a path which is called a Regge trajectory. Thus in this consideration the orbital angular momentum can assume complex values.
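As a sanity check of the numerical value quoted above, the Python sketch below evaluates the binding-energy expression with standard SI values of the constants, approximating the reduced mass by the electron mass (the proton's contribution changes the result by less than 0.1%); the rounded constant values are assumptions of this sketch.

```python
# Numerical check of E_N = -2 m' pi^2 e^4 / (h^2 N^2 (4 pi eps0)^2),
# with the reduced mass m' approximated by the electron mass.
from math import pi

m_e  = 9.109e-31   # electron mass, kg
e    = 1.602e-19   # elementary charge, C
h    = 6.626e-34   # Planck constant, J s
eps0 = 8.854e-12   # vacuum permittivity, F/m
eV   = 1.602e-19   # joules per electronvolt

for N in (1, 2, 3):
    E_N = -2 * m_e * pi**2 * e**4 / (h**2 * N**2 * (4 * pi * eps0)**2)
    print(f"N={N}: E_N = {E_N / eV:.2f} eV")
# Prints approximately -13.6, -3.4 and -1.5 eV, reproducing -13.6 eV / N^2.
```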
Regge trajectories can be obtained for many other potentials, in particular also for the Yukawa potential.
Regge trajectories appear as poles of the scattering amplitude or in the related formula_16-matrix. In the case of the Coulomb potential considered above this formula_16-matrix is given by the following expression as can be checked by reference to any textbook on quantum mechanics:
formula_17
where formula_18 is the gamma function, a generalization of factorial formula_19. This gamma function is a meromorphic function of its argument with simple poles at formula_20. Thus the expression for formula_16 (the gamma function in the numerator) possesses poles at precisely those points which are given by the above expression for the Regge trajectories; hence the name Regge poles.
History and implications.
The main result of the theory is that the scattering amplitude for potential scattering grows as a function of the cosine formula_21 of the scattering angle as a power that changes as the scattering energy changes:
formula_22
where formula_23 is the noninteger value of the angular momentum of a would-be bound state with energy formula_5. It is determined by solving the radial Schrödinger equation and it smoothly interpolates the energy of wavefunctions with different angular momentum but with the same radial excitation number. In the relativistic generalization, the trajectory function becomes a function of formula_24. The expression formula_25 is known as the Regge trajectory function, and when it is an integer, the particles form an actual bound state with this angular momentum. The asymptotic form applies when formula_21 is much greater than one, which is not a physical limit in nonrelativistic scattering.
Shortly afterwards, Stanley Mandelstam noted that in relativity the purely formal limit of formula_21 large is near to a physical limit — the limit of large formula_26. Large formula_26 means large energy in the crossed channel, where one of the incoming particles has an energy momentum that makes it an energetic outgoing antiparticle. This observation turned Regge theory from a mathematical curiosity into a physical theory: it demands that the function that determines the falloff rate of the scattering amplitude for particle-particle scattering at large energies is the same as the function that determines the bound state energies for a particle-antiparticle system as a function of angular momentum.
The switch required swapping the Mandelstam variable formula_27, which is the square of the energy, for formula_26, which is the squared momentum transfer, which for elastic soft collisions of identical particles is proportional to s times one minus the cosine of the scattering angle. The relation in the crossed channel becomes
formula_28
which says that the amplitude has a different power law falloff as a function of energy at different corresponding angles, where corresponding angles are those with the same value of formula_26. It predicts that the function that determines the power law is the same function that interpolates the energies where the resonances appear. The range of angles where scattering can be productively described by Regge theory shrinks into a narrow cone around the beam-line at large energies.
In 1960 Geoffrey Chew and Steven Frautschi conjectured from limited data that the strongly interacting particles had a very simple dependence of the squared-mass on the angular momentum: the particles fall into families where the Regge trajectory functions were straight lines: formula_29 with the same constant formula_30 for all the trajectories. The straight-line Regge trajectories were later understood as arising from massless endpoints on rotating relativistic strings. Since a Regge description implied that the particles were bound states, Chew and Frautschi concluded that none of the strongly interacting particles were elementary.
Experimentally, the near-beam behavior of scattering did fall off with angle as explained by Regge theory, leading many to accept that the particles in the strong interactions were composite. Much of the scattering was "diffractive", meaning that the particles hardly scatter at all — staying close to the beam line after the collision. Vladimir Gribov noted that the Froissart bound combined with the assumption of maximum possible scattering implied there was a Regge trajectory that would lead to logarithmically rising cross sections, a trajectory nowadays known as the pomeron. He went on to formulate a quantitative perturbation theory for near beam line scattering dominated by multi-pomeron exchange.
From the fundamental observation that hadrons are composite, there grew two points of view. Some correctly advocated that there were elementary particles, nowadays called quarks and gluons, which made a quantum field theory in which the hadrons were bound states. Others also correctly believed that it was possible to formulate a theory without elementary particles — where all the particles were bound states lying on Regge trajectories and scatter self-consistently. This was called "S"-matrix theory.
The most successful "S"-matrix approach centered on the narrow-resonance approximation, the idea that there is a consistent expansion starting from stable particles on straight-line Regge trajectories. After many false starts, Richard Dolen, David Horn, and Christoph Schmid understood a crucial property that led Gabriele Veneziano to formulate a self-consistent scattering amplitude, the first string theory. Mandelstam noted that the limit where the Regge trajectories are straight is also the limit where the lifetime of the states is long.
As a fundamental theory of strong interactions at high energies, Regge theory enjoyed a period of interest in the 1960s, but it was largely succeeded by quantum chromodynamics. As a phenomenological theory, it is still an indispensable tool for understanding near-beam line scattering and scattering at very large energies. Modern research focuses both on the connection to perturbation theory and to string theory.
See also.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in physics:
How does Regge theory emerge from quantum chromodynamics at long distances?
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V(r) = -e^2/(4\\pi\\epsilon_0r)"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "-e"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "+e"
},
{
"math_id": 5,
"text": "E"
},
{
"math_id": 6,
"text": "E\\rightarrow E_N = - \\frac{2m'\\pi^2e^4}{h^2N^2(4\\pi\\epsilon_0)^2} = - \\frac{13.6\\,\\mathrm{eV}}{N^2}, \\;\\;\\; m^' = \\frac{mM}{M+m}, "
},
{
"math_id": 7,
"text": "N = 1,2,3,..."
},
{
"math_id": 8,
"text": "h"
},
{
"math_id": 9,
"text": "\\epsilon_0"
},
{
"math_id": 10,
"text": "N"
},
{
"math_id": 11,
"text": "N = n+l+1"
},
{
"math_id": 12,
"text": "n=0,1,2,..."
},
{
"math_id": 13,
"text": "l=0,1,2,3,..."
},
{
"math_id": 14,
"text": "l"
},
{
"math_id": 15,
"text": "l\\rightarrow l(E) = -n +g(E), \\;\\; g(E) = -1+i\\frac{\\pi e^2}{4\\pi\\epsilon_0h}(2m'/E)^{1/2}."
},
{
"math_id": 16,
"text": "S"
},
{
"math_id": 17,
"text": " \nS = \\frac{\\Gamma(l-g(E))}{\\Gamma(l+g(E))}e^{-i\\pi l},\n"
},
{
"math_id": 18,
"text": "\\Gamma(x)"
},
{
"math_id": 19,
"text": "(x-1)!"
},
{
"math_id": 20,
"text": "x=-n, n=0,1,2,..."
},
{
"math_id": 21,
"text": "z"
},
{
"math_id": 22,
"text": "\nA(z) \\propto z^{l(E^2)}\n"
},
{
"math_id": 23,
"text": "l(E^2)"
},
{
"math_id": 24,
"text": "s=E^2"
},
{
"math_id": 25,
"text": "l(s)"
},
{
"math_id": 26,
"text": "t"
},
{
"math_id": 27,
"text": "s"
},
{
"math_id": 28,
"text": "\nA(z) \\propto s^{l(t)}\n"
},
{
"math_id": 29,
"text": "l(s)=ks"
},
{
"math_id": 30,
"text": "k"
}
] | https://en.wikipedia.org/wiki?curid=1205131 |
12052214 | Dynamic stochastic general equilibrium | Macroeconomic method
Dynamic stochastic general equilibrium modeling (abbreviated as DSGE, or DGE, or sometimes SDGE) is a macroeconomic method which is often employed by monetary and fiscal authorities for policy analysis, explaining historical time-series data, as well as future forecasting purposes. DSGE econometric modelling applies general equilibrium theory and microeconomic principles in a tractable manner to postulate economic phenomena, such as economic growth and business cycles, as well as policy effects and market shocks.
Terminology.
As a practical matter, people often use the term "DSGE models" to refer to a particular class of classically quantitative econometric models of business cycles or economic growth called real business cycle (RBC) models. DSGE models were initially proposed by Kydland & Prescott, and Long & Plosser; Charles Plosser described RBC models as a precursor for DSGE modeling.
As mentioned in the Introduction, DSGE models are the predominant framework of macroeconomic analysis. They are multifaceted, and their combination of micro-foundations and optimising economic behaviour of rational agents allows for a comprehensive analysis of macro effects. As indicated by their name, their defining characteristics are as follows:
RBC modeling.
The formulation and analysis of monetary policy has undergone significant evolution in recent decades and the development of DSGE models has played a key role in this process. As mentioned above, DSGE models can be seen as an update of RBC (real business cycle) models.
Early real business-cycle models postulated an economy populated by a representative consumer who operates in perfectly competitive markets. The only sources of uncertainty in these models are "shocks" in technology. RBC theory builds on the neoclassical growth model, under the assumption of flexible prices, to study how real shocks to the economy might cause business cycle fluctuations.
The "representative consumer" assumption can either be taken literally or reflect a Gorman aggregation of heterogenous consumers who are facing idiosyncratic income shocks and complete markets in all assets. These models took the position that fluctuations in aggregate economic activity are actually an "efficient response" of the economy to exogenous shocks.
The models were criticized on a number of issues:
The Lucas critique.
In a 1976 paper, Robert Lucas argued that it is naive to try to predict the effects of a change in economic policy entirely on the basis of relationships observed in historical data, especially highly aggregated historical data. Lucas claimed that the decision rules of Keynesian models, such as the fiscal multiplier, cannot be considered as structural, in the sense that they cannot be invariant with respect to changes in government policy variables, stating:
Given that the structure of an econometric model consists of optimal decision-rules of economic agents, and that optimal decision-rules vary systematically with changes in the structure of series relevant to the decision maker, it follows that any change in policy will systematically alter the structure of econometric models.
This meant that, because the parameters of the models were not structural, i.e. not indifferent to policy, they would necessarily change whenever policy was changed. The so-called Lucas critique followed similar criticism undertaken earlier by Ragnar Frisch, in his critique of Jan Tinbergen's 1939 book "Statistical Testing of Business-Cycle Theories", where Frisch accused Tinbergen of not having discovered autonomous relations, but "coflux" relations, and by Jacob Marschak, in his 1953 contribution to the "Cowles Commission Monograph", where he submitted that
In predicting the effect of its decisions (policies), the government...has to take account of exogenous variables, whether controlled by it (the decisions themselves, if they are exogenous variables) or uncontrolled (e.g. weather), and of structural changes, whether controlled by it (the decisions themselves, if they change the structure) or uncontrolled (e.g. sudden changes in people's attitude).
The Lucas critique is representative of the paradigm shift that occurred in macroeconomic theory in the 1970s towards attempts at establishing micro-foundations.
Response to the Lucas critique.
In the 1980s, macro models emerged that attempted to directly respond to Lucas through the use of rational expectations econometrics.
In 1982, Finn E. Kydland and Edward C. Prescott created a real business cycle (RBC) model to "predict the consequence of a particular policy rule upon the operating characteristics of the economy." The stated, exogenous, stochastic components in their model are "shocks to technology" and "imperfect indicators of productivity." The shocks involve random fluctuations in the productivity level, which shift up or down the trend of economic growth. Examples of such shocks include innovations, the weather, sudden and significant price increases in imported energy sources, stricter environmental regulations, etc. The shocks directly change the effectiveness of capital and labour, which, in turn, affects the decisions of workers and firms, who then alter what they buy and produce. This eventually affects output.
The authors stated that, since fluctuations in employment are central to the business cycle, the "stand-in consumer [of the model] values not only consumption but also leisure," meaning that unemployment movements essentially reflect the changes in the number of people who want to work. "Household-production theory," as well as "cross-sectional evidence" ostensibly support a "non-time-separable utility function that admits greater inter-temporal substitution of leisure, something which is needed," according to the authors, "to explain aggregate movements in employment in an equilibrium model." For the K&P model, monetary policy is irrelevant for economic fluctuations.
The associated policy implications were clear: There is no need for any form of government intervention since, ostensibly, government policies aimed at stabilizing the business cycle are welfare-reducing. Since microfoundations are based on the preferences of decision-makers in the model, DSGE models feature a natural benchmark for evaluating the welfare effects of policy changes. Furthermore, the integration of such microfoundations in DSGE modeling enables the model to accurately adjust to shifts in fundamental behaviour of agents and is thus regarded as an "impressive response" to the Lucas critique. The Kydland/Prescott 1982 paper is often considered the starting point of RBC theory and of DSGE modeling in general and its authors were awarded the 2004 Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel.
DSGE modeling.
Structure.
By applying dynamic principles, dynamic stochastic general equilibrium models contrast with the static models studied in applied general equilibrium models and some computable general equilibrium models.
DSGE models employed by governments and central banks for policy analysis are relatively simple. Their structure is built around three interrelated sections including that of demand, supply, and the monetary policy equation. These three sections are formally defined by micro-foundations and make explicit assumptions about the behavior of the main economic agents in the economy, i.e. households, firms, and the government. The interaction of the agents in markets cover every period of the business cycle which ultimately qualifies the "general equilibrium" aspect of this model. The preferences (objectives) of the agents in the economy must be specified. For example, households might be assumed to maximize a utility function over consumption and labor effort. Firms might be assumed to maximize profits and to have a production function, specifying the amount of goods produced, depending on the amount of labor, capital and other inputs they employ. Technological constraints on firms' decisions might include costs of adjusting their capital stocks, their employment relations, or the prices of their products.
Below is an example of the set of assumptions a DSGE is built upon:
to which the following frictions are added:
The models' general equilibrium nature is presumed to capture the interaction between policy actions and agents' behavior, while the models specify assumptions about the stochastic shocks that give rise to economic fluctuations. Hence, the models are presumed to "trace more clearly the shocks' transmission to the economy." This is exemplified in the below explanation of a simplified DSGE model.
This yields a complete, simplified model of the relationship between three key features. The dynamic interaction between the endogenous variables of output, inflation, and the nominal interest rate is fundamental in DSGE modelling.
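As a purely illustrative sketch, the following Python snippet iterates a stylized three-equation system of this kind: a demand block, a supply (Phillips-curve) block, and a monetary policy rule. It replaces rational expectations with naive, last-period expectations and uses made-up parameter values, so it is not how DSGE models are actually solved or estimated; it only shows how the three blocks feed into one another period by period.
<syntaxhighlight lang="python">
import random

# Deliberately simplified, backward-looking simulation of the three-equation
# structure (demand block, supply / Phillips-curve block, monetary policy rule).
# Expectations are proxied by last period's values, which is NOT how DSGE
# models are solved (they impose rational expectations and are solved by
# perturbation or Bayesian methods); all parameter values are assumptions.

sigma, kappa, beta = 1.0, 0.3, 0.99   # demand slope, Phillips slope, discount factor
phi_pi, phi_y = 1.5, 0.125            # policy-rule reaction coefficients
pi_star = 0.02                        # inflation target

y, pi = 0.0, pi_star                  # start at the steady state
random.seed(0)

for t in range(12):
    y_e, pi_e = y, pi                                      # naive expectations
    shock = random.gauss(0.0, 0.01)                        # demand shock
    i = pi_star + phi_pi * (pi - pi_star) + phi_y * y      # monetary policy rule
    y = y_e - (1.0 / sigma) * (i - pi_e) + shock           # demand block
    pi = pi_star + beta * (pi_e - pi_star) + kappa * y     # supply block
    print(f"t={t:2d}  output gap={y:+.3f}  inflation={pi:.3f}  policy rate={i:.3f}")
</syntaxhighlight>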
Schools.
Two schools of analysis form the bulk of DSGE modeling: the classic RBC models, and the New-Keynesian DSGE models that build on a structure similar to RBC models, but instead assume that prices are set by monopolistically competitive firms, and cannot be instantaneously and costlessly adjusted. Rotemberg & Woodford introduced this framework in 1997. Introductory and advanced textbook presentations of DSGE modeling are given by Galí (2008) and Woodford (2003). Monetary policy implications are surveyed by Clarida, Galí, and Gertler (1999).
The European Central Bank (ECB) has developed a DSGE model, called the Smets–Wouters model, which it uses to analyze the economy of the Eurozone as a whole. The Bank's analysts state that
developments in the construction, simulation and estimation of DSGE models have made it possible to combine a rigorous microeconomic derivation of the behavioural equations of macro models with an empirically plausible calibration or estimation which fits the main features of the macroeconomic time series.
The main difference between "empirical" DSGE models and the "more traditional macroeconometric models, such as the Area-Wide Model", according to the ECB, is that "both the parameters and the shocks to the structural equations are related to deeper structural parameters describing household preferences and technological and institutional constraints."
The Smets-Wouters model uses seven Eurozone area macroeconomic series: real GDP; consumption; investment; employment; real wages; inflation; and the nominal, short-term interest rate. Using Bayesian estimation and validation techniques, the bank's modeling is ostensibly able to compete with "more standard, unrestricted time series models, such as vector autoregression, in out-of-sample forecasting."
Criticism.
Bank of Lithuania Deputy Chairman Raimondas Kuodis disputes the very title of DSGE analysis: The models, he claims, are neither dynamic (since they contain no evolution of stocks of financial assets and liabilities), nor stochastic (because we live in the world of Knightian uncertainty and, since future outcomes or possible choices are unknown, risk analysis or expected utility theory are not very helpful), nor general (they lack a full accounting framework, a stock-flow consistent framework, which would significantly reduce the number of degrees of freedom in the economy), nor even about equilibrium (since markets clear only in a few quarters).
Willem Buiter, Citigroup Chief Economist, has argued that DSGE models rely excessively on an assumption of complete markets, and are unable to describe the highly nonlinear dynamics of economic fluctuations, making training in 'state-of-the-art' macroeconomic modeling "a privately and socially costly waste of time and resources". Narayana Kocherlakota, President of the Federal Reserve Bank of Minneapolis, wrote that
many modern macro models...do not capture an intermediate messy reality in which market participants can trade multiple assets in a wide array of somewhat segmented markets. As a consequence, the models do not reveal much about the benefits of the massive amount of daily or quarterly re-allocations of wealth within financial markets. The models also say nothing about the relevant costs and benefits of resulting fluctuations in financial structure (across bank loans, corporate debt, and equity).
N. Gregory Mankiw, regarded as one of the founders of New Keynesian DSGE modeling, has argued that
New classical and New Keynesian research has had little impact on practical macroeconomists who are charged with [...] policy. [...] From the standpoint of macroeconomic engineering, the work of the past several decades looks like an unfortunate wrong turn.
In the 2010 United States Congress hearings on macroeconomic modeling methods, held on 20 July 2010, and aiming to investigate why macroeconomists failed to foresee the financial crisis of 2007-2010, MIT professor of Economics Robert Solow criticized the DSGE models currently in use:
I do not think that the currently popular DSGE models pass the smell test. They take it for granted that the whole economy can be thought about as if it were a single, consistent person or dynasty carrying out a rationally designed, long-term plan, occasionally disturbed by unexpected shocks, but adapting to them in a rational, consistent way... The protagonists of this idea make a claim to respectability by asserting that it is founded on what we know about microeconomic behavior, but I think that this claim is generally phony. The advocates no doubt believe what they say, but they seem to have stopped sniffing or to have lost their sense of smell altogether.
Commenting on the Congressional session, "The Economist" asked whether agent-based models might better predict financial crises than DSGE models.
Former Chief Economist and Senior Vice President of the World Bank Paul Romer has criticized the "mathiness" of DSGE models and dismisses the inclusion of "imaginary shocks" in DSGE models that ignore "actions that people take." Romer submits a simplified presentation of real business cycle (RBC) modelling, which, as he states, essentially involves two mathematical expressions: The well known formula of the quantity theory of money, and an identity that defines the growth accounting residual A as the difference between growth of output Y and growth of an index X of inputs in production.
Δ%A = Δ%Y − Δ%X
Romer assigned to residual A the label "phlogiston" while he criticized the lack of consideration given to monetary policy in DSGE analysis.
Joseph Stiglitz finds "staggering" shortcomings in the "fantasy world" the models create and argues that "the failure [of macroeconomics] were the wrong microfoundations, which failed to incorporate key aspects of economic behavior". He suggested the models have failed to incorporate "insights from information economics and behavioral economics" and are "ill-suited for predicting or responding to a financial crisis." Oxford University's John Muellbauer put it this way: "It is as if the information economics revolution, for which George Akerlof, Michael Spence and Joe Stiglitz shared the Nobel Prize in 2001, had not occurred. The combination of assumptions, when coupled with the trivialisation of risk and uncertainty...render money, credit and asset prices largely irrelevant... [The models] typically ignore inconvenient truths." Nobel laureate Paul Krugman asked, "Were there any interesting predictions from DSGE models that were validated by events? If there were, I'm not aware of it."
Austrian economists reject DSGE modelling. Critique of DSGE-style macromodeling is at the core of Austrian theory, where, as opposed to RBC and New Keynesian models in which capital is homogeneous, capital is heterogeneous and multi-specific and, therefore, production functions for the multi-specific capital are simply discovered over time. Lawrence H. White concludes that present-day mainstream macroeconomics is dominated by Walrasian DSGE models, with restrictions added to generate Keynesian properties:
Mises consistently attributed the boom-initiating shock to unexpectedly expansive policy by a central bank trying to lower the market interest rate. Hayek added two alternate scenarios. [One is where] fresh producer-optimism about investment raises the demand for loanable funds, and thus raises the natural rate of interest, but the central bank deliberately prevents the market rate from rising by expanding credit. [Another is where,] in response to the same kind of increase the demand for loanable funds, but without central bank impetus, the commercial banking system by itself expands credit more than is sustainable.
Hayek had criticized Wicksell for the confusion of thinking that establishing a rate of interest consistent with intertemporal equilibrium also implies a constant price level. Hayek posited that intertemporal equilibrium requires not a natural rate but the "neutrality of money," in the sense that money does not "distort" (influence) relative prices.
Post-Keynesians reject the notions of macro-modelling typified by DSGE. They consider such attempts as "a chimera of authority," pointing to the 2003 statement by Lucas, the pioneer of modern DSGE modelling:
Macroeconomics in [its] original sense [of preventing the recurrence of economic disasters] has succeeded. Its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades.
A basic Post Keynesian presumption, which Modern Monetary Theory proponents share, and which is central to Keynesian analysis, is that the future is unknowable and so, at best, we can make guesses about it that would be based broadly on habit, custom, gut-feeling, etc. In DSGE modeling, the central equation for consumption supposedly provides a way in which the consumer links decisions to consume "now" with decisions to consume "later" and thus achieves maximum utility in each period. Our marginal utility from consumption today must equal our marginal utility from consumption in the future, with a weighting parameter that refers to the valuation that we place on the future relative to today. And since the consumer is supposed to always satisfy the equation for consumption, this means that all of us do so individually, if this approach is to reflect the DSGE microfoundational notions of consumption. However, post-Keynesians state that: no consumer is the same as another in terms of random shocks and uncertainty of income (since some consumers will spend every cent of any extra income they receive while others, typically higher-income earners, spend comparatively little of any extra income); no consumer is the same as another in terms of access to credit; not every consumer really considers what they will be doing at the end of their life in any coherent way, so there is no concept of a "permanent lifetime income", which is central to DSGE models; and, therefore, trying to "aggregate" all these differences into one, single "representative agent" is impossible. These assumptions are similar to the assumptions made in the so-called Ricardian equivalence, whereby consumers are assumed to be forward looking and to internalize the government's budget constraints when making consumption decisions, and therefore to take decisions on the basis of practically perfect evaluations of available information.
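For concreteness, the consumption equation under discussion equates marginal utility today with the discounted, interest-adjusted marginal utility tomorrow. The sketch below illustrates it for CRRA utility with assumed parameter values and a single representative consumer, which is precisely the construct that post-Keynesians object to; the parameter values and function names are illustrative, not taken from any particular model.
<syntaxhighlight lang="python">
# Sketch of the consumption Euler equation for a representative consumer:
# u'(c_t) = beta * (1 + r) * u'(c_{t+1}), with CRRA utility u(c) = c**(1-g)/(1-g).
# Given c_t, we solve for the implied c_{t+1}; all numbers are illustrative.

beta, r, gamma = 0.96, 0.04, 2.0   # discount factor, interest rate, risk aversion

def marginal_utility(c, g=gamma):
    return c ** (-g)

def next_consumption(c_t):
    # From c_t**(-g) = beta*(1+r)*c_{t+1}**(-g)
    # it follows that c_{t+1} = c_t * (beta*(1+r))**(1/g).
    return c_t * (beta * (1 + r)) ** (1 / gamma)

c = 1.0
for t in range(5):
    print(f"t={t}: c={c:.4f}, u'(c)={marginal_utility(c):.4f}")
    c = next_consumption(c)
</syntaxhighlight>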
Extrinsic unpredictability, post-Keynesians state, has "dramatic consequences" for the standard, macroeconomic, forecasting, DSGE models used by governments and other institutions around the world. The mathematical basis of every DSGE model fails when distributions shift, since general-equilibrium theories rely heavily on "ceteris paribus" assumptions. They point to the Bank of England's explicit admission that none of the models they used and evaluated coped well during the 2007–2008 financial crisis, which, for the Bank, "underscores the role that large structural breaks can have in contributing to forecast failure, even if they turn out to be temporary."
Christian Mueller points out that the fact that DSGE models evolve (see next section) constitutes a contradiction of the modelling approach in its own right and, ultimately, makes DSGE models subject to the Lucas critique. This contradiction arises because the economic agents in the DSGE models fail to account for the fact that the very models on the basis of which they form expectations evolve due to progress in economic research. While the evolution of DSGE models as such is predictable, the direction of this evolution is not. In effect, Lucas' notion of the systematic instability of economic models carries over to DSGE models, proving that they are not solving one of the key problems they are thought to be overcoming.
Evolution of viewpoints.
Federal Reserve Bank of Minneapolis president Narayana Kocherlakota acknowledges that DSGE models were "not very useful" for analyzing the financial crisis of 2007-2010 but argues that the applicability of these models is "improving," and claims that there is growing consensus among macroeconomists that DSGE models need to incorporate both "price stickiness and financial market frictions." Despite his criticism of DSGE modelling, he states that modern models are useful:
In the early 2000s, ...[the] problem of fit disappeared for modern macro models with sticky prices. Using novel Bayesian estimation methods, Frank Smets and Raf Wouters demonstrated that a sufficiently rich New Keynesian model could fit European data well. Their finding, along with similar work by other economists, has led to widespread adoption of New Keynesian models for policy analysis and forecasting by central banks around the world.
Still, Kocherlakota observes that in "terms of fiscal policy (especially short-term fiscal policy), modern macro-modeling seems to have had little impact. ... [M]ost, if not all, of the motivation for the fiscal stimulus was based largely on the long-discarded models of the 1960s and 1970s."
In 2010, Rochelle M. Edge, of the Board of Governors of the Federal Reserve System, contended that the work of Smets & Wouters has "led DSGE models to be taken more seriously by central bankers around the world" so that "DSGE models are now quite prominent tools for macroeconomic analysis at many policy institutions, with forecasting being one of the key areas where these models are used, in conjunction with other forecasting methods."
University of Minnesota professor of economics V.V. Chari has pointed out that state-of-the-art DSGE models are more sophisticated than their critics suppose:
The models have all kinds of heterogeneity in behavior and decisions... people's objectives differ, they differ by age, by information, by the history of their past experiences.
Chari also argued that current DSGE models frequently incorporate frictional unemployment, financial market imperfections, and sticky prices and wages, and therefore imply that the macroeconomy behaves in a suboptimal way which monetary and fiscal policy may be able to improve. Columbia University's Michael Woodford concedes that policies considered by DSGE models might not be Pareto optimal and they may as well not satisfy some other social welfare criterion. Nonetheless, in replying to Mankiw, Woodford argues that the DSGE models commonly used by central banks today and strongly influencing policy makers like Ben Bernanke, do not provide an analysis so different from traditional Keynesian analysis:
It is true that the modeling efforts of many policy institutions can reasonably be seen as an evolutionary development within the macroeconomic modeling program of the postwar Keynesians; thus if one expected, with the early New Classicals, that adoption of the new tools would require building anew from the ground up, one might conclude that the new tools have not been put to use. But in fact they have been put to use, only not with such radical consequences as had once been expected.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Y=f^Y(Y^e,i-\\pi^e,...)"
},
{
"math_id": 1,
"text": "\\pi=f^\\pi(\\pi^e,Y,...)"
},
{
"math_id": 2,
"text": "i=f^i(\\pi-\\pi^*,Y,...)"
}
] | https://en.wikipedia.org/wiki?curid=12052214 |
1205310 | Practical number | Number such that it and all smaller numbers may be represented as sums of its distinct divisors
In number theory, a practical number or panarithmic number is a positive integer formula_0 such that all smaller positive integers can be represented as sums of distinct divisors of formula_0. For example, 12 is a practical number because all the numbers from 1 to 11 can be expressed as sums of its divisors 1, 2, 3, 4, and 6: as well as these divisors themselves, we have 5 = 3 + 2, 7 = 6 + 1, 8 = 6 + 2, 9 = 6 + 3, 10 = 6 + 3 + 1, and 11 = 6 + 3 + 2.
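The definition can be checked directly by brute force: enumerate the divisors, build all subset sums, and test that every smaller positive integer is covered. A minimal Python sketch (the function name is mine):
<syntaxhighlight lang="python">
# Brute-force check of the definition: n is practical if every m in 1..n-1
# is a sum of distinct divisors of n.  Subset sums are accumulated with a
# simple dynamic-programming pass over the divisors.

def is_practical_by_definition(n):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    reachable = {0}
    for d in divisors:
        reachable |= {s + d for s in reachable}
    return all(m in reachable for m in range(1, n))

print([n for n in range(1, 51) if is_practical_by_definition(n)])
# expected to match the start of the listed sequence: 1, 2, 4, 6, 8, 12, 16, ...
</syntaxhighlight>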
The sequence of practical numbers (sequence in the OEIS) begins
<templatestyles src="Block indent/styles.css"/>1, 2, 4, 6, 8, 12, 16, 18, 20, 24, 28, 30, 32, 36, 40, 42, 48, 54, 56, 60, 64, 66, 72, 78, 80, 84, 88, 90, 96, 100, 104, 108, 112, 120, 126, 128, 132, 140, 144, 150...
Practical numbers were used by Fibonacci in his "Liber Abaci" (1202) in connection with the problem of representing rational numbers as Egyptian fractions. Fibonacci does not formally define practical numbers, but he gives a table of Egyptian fraction expansions for fractions with practical denominators.
The name "practical number" is due to . He noted that "the subdivisions of money, weights, and measures involve numbers like 4, 12, 16, 20 and 28 which are usually supposed to be so inconvenient as to deserve replacement by powers of 10." His partial classification of these numbers was completed by and . This characterization makes it possible to determine whether a number is practical by examining its prime factorization. Every even perfect number and every power of two is also a practical number.
Practical numbers have also been shown to be analogous with prime numbers in many of their properties.
Characterization of practical numbers.
The original characterisation by Srinivasan stated that a practical number cannot be a deficient number, that is, one for which the sum of all divisors (including 1 and itself) is less than twice the number, unless the deficiency is one. If the ordered set of all divisors of the practical number formula_0 is formula_1 with formula_2 and formula_3, then Srinivasan's statement can be expressed by the inequality
formula_4
In other words, the ordered sequence of all divisors formula_5 of a practical number has to be a complete sub-sequence.
This partial characterization was later extended and completed, showing that it is straightforward to determine whether a number is practical from its prime factorization.
A positive integer greater than one with prime factorization formula_6 (with the primes in sorted order formula_7) is practical if and only if each of its prime factors formula_8 is small enough for formula_9 to have a representation as a sum of smaller divisors. For this to be true, the first prime formula_10 must equal 2 and, for every i from 2 to k, each successive prime formula_8 must obey the inequality
formula_11
where formula_12 denotes the sum of the divisors of "x". For example, 2 × 3² × 29 × 823 = 429606 is practical, because the inequality above holds for each of its prime factors: 3 ≤ σ(2) + 1 = 4, 29 ≤ σ(2 × 3²) + 1 = 40, and 823 ≤ σ(2 × 3² × 29) + 1 = 1171.
The condition stated above is necessary and sufficient for a number to be practical. In one direction, this condition is necessary in order to be able to represent formula_9 as a sum of divisors of formula_0, because if the inequality failed to be true then even adding together all the smaller divisors would give a sum too small to reach formula_9. In the other direction, the condition is sufficient, as can be shown by induction.
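The characterization translates directly into a fast test. The following Python sketch walks through the prime factorization in increasing order of primes, maintaining the divisor sum of the part already processed and checking the inequality at each new prime; the function name is mine.
<syntaxhighlight lang="python">
# Test practicality via the prime-factorization criterion described above,
# using plain trial division so the sketch is self-contained.

def is_practical(n):
    if n == 1:
        return True
    if n % 2 != 0:
        return False                     # the smallest prime factor must be 2
    sigma_so_far = 1                     # sigma of the prime powers handled so far
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            if p > sigma_so_far + 1:
                return False
            pk = 1
            while m % p == 0:
                m //= p
                pk *= p
            sigma_so_far *= (pk * p - 1) // (p - 1)   # sigma of p**a
        p += 1
    if m > 1:                            # one remaining prime factor
        if m > sigma_so_far + 1:
            return False
        sigma_so_far *= m + 1
    return True

print([n for n in range(1, 51) if is_practical(n)])
# expected: 1, 2, 4, 6, 8, 12, 16, 18, 20, 24, 28, 30, 32, 36, 40, 42, 48
</syntaxhighlight>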
More strongly, if the factorization of formula_0 satisfies the condition above, then any formula_13 can be represented as a sum of divisors of formula_0, by the following sequence of steps:
Properties.
<templatestyles src="Block indent/styles.css"/>1, 2, 6, 20, 28, 30, 42, 66, 78, 88, 104, 140, 204, 210, 220, 228, 260, 272, 276, 304, 306, 308, 330, 340, 342, 348, 364, 368, 380, 390, 414, 460 ...
Relation to other classes of numbers.
Several other notable sets of integers consist only of practical numbers:
Practical numbers and Egyptian fractions.
If formula_0 is practical, then any rational number of the form formula_37 with formula_38 may be represented as a sum formula_39 where each formula_40 is a distinct divisor of formula_0. Each term in this sum simplifies to a unit fraction, so such a sum provides a representation of formula_37 as an Egyptian fraction. For instance,
formula_41
Fibonacci, in his 1202 book "Liber Abaci" lists several methods for finding Egyptian fraction representations of a rational number. Of these, the first is to test whether the number is itself already a unit fraction, but the second is to search for a representation of the numerator as a sum of divisors of the denominator, as described above. This method is only guaranteed to succeed for denominators that are practical. Fibonacci provides tables of these representations for fractions having as denominators the practical numbers 6, 8, 12, 20, 24, 60, and 100.
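One simple way to carry out such a representation (not claimed to be Fibonacci's exact procedure) is to pick divisors of the denominator greedily from the largest down; for a practical denominator this always succeeds, by the completeness property quoted above. A Python sketch, with helper names of my choosing:
<syntaxhighlight lang="python">
# Write m/n as an Egyptian fraction when n is practical: represent m as a sum
# of distinct divisors of n, then each part d contributes the unit fraction
# 1/(n/d).  The greedy largest-first choice succeeds whenever n is practical.

def divisor_sum_representation(m, n):
    divisors = sorted((d for d in range(1, n + 1) if n % d == 0), reverse=True)
    parts = []
    for d in divisors:
        if d <= m:
            parts.append(d)
            m -= d
    if m != 0:
        raise ValueError("no representation found (n is probably not practical)")
    return parts

def egyptian_fraction_denominators(m, n):
    return [n // d for d in divisor_sum_representation(m, n)]

print(divisor_sum_representation(13, 20))      # [10, 2, 1]
print(egyptian_fraction_denominators(13, 20))  # [2, 10, 20] -> 1/2 + 1/10 + 1/20
</syntaxhighlight>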
It has been shown that every rational number formula_42 has an Egyptian fraction representation with formula_43 terms. The proof involves finding a sequence of practical numbers formula_44 with the property that every number less than formula_44 may be written as a sum of formula_45 distinct divisors of formula_44. Then, formula_33 is chosen so that formula_46, and formula_47 is divided by formula_48 giving quotient formula_20 and remainder formula_49. It follows from these choices that formula_50. Expanding both numerators on the right hand side of this formula into sums of divisors of formula_44 results in the desired Egyptian fraction representation. A similar technique, involving a different sequence of practical numbers, shows that every rational number formula_42 has an Egyptian fraction representation in which the largest denominator is formula_51.
According to a September 2015 conjecture by Zhi-Wei Sun, every positive rational number has an Egyptian fraction representation in which every denominator is a practical number. The conjecture was proved by David Eppstein (2021).
Analogies with prime numbers.
One reason for interest in practical numbers is that many of their properties are similar to properties of the prime numbers.
Indeed, theorems analogous to Goldbach's conjecture and the twin prime conjecture are known for practical numbers: every positive even integer is the sum of two practical numbers, and there exist infinitely many triples of practical numbers formula_52. Melfi also showed that there are infinitely many practical Fibonacci numbers (sequence in the OEIS); the analogous question of the existence of infinitely many Fibonacci primes is open. It has also been shown that there always exists a practical number in the interval formula_53 for any positive real formula_54, a result analogous to Legendre's conjecture for primes. Moreover, for all sufficiently large formula_54, the interval formula_55 contains many practical numbers.
Let formula_56 count how many practical numbers are at most formula_54.
Margenstern conjectured that formula_56 is asymptotic to formula_57 for some constant formula_58, a formula which resembles the prime number theorem, strengthening the earlier claim that the practical numbers have density zero in the integers.
Improving on an earlier estimate, it was then shown that formula_56 has order of magnitude formula_59.
Margenstern's conjecture was subsequently proved: we have
formula_60
where formula_61 Thus the practical numbers are about 33.6% more numerous than the prime numbers. The exact value of the constant factor formula_58 is given by
formula_62
where formula_63 is the Euler–Mascheroni constant and formula_64 runs over primes.
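The counting function can be checked empirically against this asymptotic formula. The sketch below reuses the is_practical function from the factorization sketch above, counts practical numbers up to a few cut-offs, and prints the asymptotic estimate alongside; convergence is slow, so at small cut-offs the two columns agree only in order of magnitude.
<syntaxhighlight lang="python">
from math import log

# Empirical check of the density statement: count practical numbers up to x
# and compare with c*x/log(x), c ~ 1.33607.  Assumes is_practical() from the
# earlier sketch is in scope; the cut-offs are small to keep the run short,
# and at this scale the comparison is only order-of-magnitude.

def practical_count(x):
    return sum(1 for n in range(1, x + 1) if is_practical(n))

c = 1.33607
for x in (10**3, 10**4, 10**5):
    px = practical_count(x)
    print(f"x={x:>6}  p(x)={px:>6}  c*x/log x ~ {c * x / log(x):9.1f}")
</syntaxhighlight>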
As with prime numbers in an arithmetic progression, given two natural numbers formula_65 and formula_20,
we have
formula_66
The constant factor formula_67 is positive if, and only if, there is more than one practical number congruent to formula_68.
If formula_69, then formula_70.
For example, about 38.26% of practical numbers have a last decimal digit of 0, while the last digits of 2, 4, 6, 8 each occur with the same relative frequency of 15.43%.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "{d_1, d_2,..., d_j}"
},
{
"math_id": 2,
"text": "d_1=1"
},
{
"math_id": 3,
"text": "d_j=n"
},
{
"math_id": 4,
"text": "2n\\leq1+\\sum_{i=1}^j d_i."
},
{
"math_id": 5,
"text": "{d_1<d_2<...<d_j}"
},
{
"math_id": 6,
"text": "n=p_1^{\\alpha_1}...p_k^{\\alpha_k}"
},
{
"math_id": 7,
"text": "p_1<p_2<\\dots<p_k"
},
{
"math_id": 8,
"text": "p_i"
},
{
"math_id": 9,
"text": "p_i-1"
},
{
"math_id": 10,
"text": "p_1"
},
{
"math_id": 11,
"text": "p_i\\leq1+\\sigma(p_1^{\\alpha_1}p_2^{\\alpha_2}\\dots p_{i-1}^{\\alpha_{i-1}})=1+\\sigma(p_1^{\\alpha_1})\\sigma(p_2^{\\alpha_2})\\dots \\sigma(p_{i-1}^{\\alpha_{i-1}})=1+\\prod_{j=1}^{i-1}\\frac{p_j^{\\alpha_j+1}-1}{p_j-1},"
},
{
"math_id": 12,
"text": "\\sigma(x)"
},
{
"math_id": 13,
"text": "m \\le \\sigma(n)"
},
{
"math_id": 14,
"text": "j\\in[1,\\alpha_k]"
},
{
"math_id": 15,
"text": "p_k^j\\leq 1+\\sigma(n/p_k^{\\alpha_k-(j-1)})"
},
{
"math_id": 16,
"text": "p_k^{\\alpha_k}\\leq 1+\\sigma(n/p_k)"
},
{
"math_id": 17,
"text": "[q p_k^{\\alpha_k}, q p_k^{\\alpha_k}+\\sigma(n/p_k)]"
},
{
"math_id": 18,
"text": "[1,\\sigma(n)]"
},
{
"math_id": 19,
"text": "1\\leq q\\leq \\sigma(n/p_k^{\\alpha_k})"
},
{
"math_id": 20,
"text": "q"
},
{
"math_id": 21,
"text": "r\\in[0,\\sigma(n/p_k)]"
},
{
"math_id": 22,
"text": "m=q p_k^{\\alpha_k}+r"
},
{
"math_id": 23,
"text": "q\\le\\sigma(n/p_k^{\\alpha_k})"
},
{
"math_id": 24,
"text": "n/p_k^{\\alpha_k}"
},
{
"math_id": 25,
"text": "r\\le \\sigma(n/p_k)"
},
{
"math_id": 26,
"text": "n/p_k"
},
{
"math_id": 27,
"text": "p_k^{\\alpha_k}"
},
{
"math_id": 28,
"text": "d"
},
{
"math_id": 29,
"text": "n\\cdot d"
},
{
"math_id": 30,
"text": "2^{\\lfloor\\log_2 n\\rfloor}n"
},
{
"math_id": 31,
"text": "d|n"
},
{
"math_id": 32,
"text": "2^{k-1}(2^k-1)"
},
{
"math_id": 33,
"text": "i"
},
{
"math_id": 34,
"text": "p_{i-1}"
},
{
"math_id": 35,
"text": "p_i<2p_{i-1}"
},
{
"math_id": 36,
"text": "k"
},
{
"math_id": 37,
"text": "m/n"
},
{
"math_id": 38,
"text": "m<n"
},
{
"math_id": 39,
"text": "\\sum d_i/n"
},
{
"math_id": 40,
"text": "d_i"
},
{
"math_id": 41,
"text": "\\frac{13}{20}=\\frac{10}{20}+\\frac{2}{20}+\\frac{1}{20}=\\frac12+\\frac1{10}+\\frac1{20}."
},
{
"math_id": 42,
"text": "x/y"
},
{
"math_id": 43,
"text": "O(\\sqrt{\\log y})"
},
{
"math_id": 44,
"text": "n_i"
},
{
"math_id": 45,
"text": "O(\\sqrt{\\log n_{i-1}})"
},
{
"math_id": 46,
"text": "n_{i-1}<y<n_i"
},
{
"math_id": 47,
"text": "xn_i"
},
{
"math_id": 48,
"text": "y"
},
{
"math_id": 49,
"text": "r"
},
{
"math_id": 50,
"text": "\\frac{x}{y}=\\frac{q}{n_i}+\\frac{r}{yn_i}"
},
{
"math_id": 51,
"text": "O(y\\log^2 y/\\log\\log y)"
},
{
"math_id": 52,
"text": "(x-2,x,x+2)"
},
{
"math_id": 53,
"text": "[x^2,(x+1)^2)]"
},
{
"math_id": 54,
"text": "x"
},
{
"math_id": 55,
"text": "[x-x^{0.4872},x]"
},
{
"math_id": 56,
"text": "p(x)"
},
{
"math_id": 57,
"text": "cx/\\log x"
},
{
"math_id": 58,
"text": "c"
},
{
"math_id": 59,
"text": "x/\\log x"
},
{
"math_id": 60,
"text": "p(x) = \\frac{c x}{\\log x}\\left(1 + O\\!\\left(\\frac{\\log\\log x}{\\log x}\\right)\\right),"
},
{
"math_id": 61,
"text": "c=1.33607..."
},
{
"math_id": 62,
"text": " c= \\frac{1}{1-e^{-\\gamma}} \\sum_{n \\ \\text{practical}} \\frac{1}{n} \\Biggl( \\sum_{p\\le \\sigma(n)+1}\\frac{\\log p}{p-1} - \\log n\\Biggr) \\prod_{p\\le \\sigma(n)+1} \\left(1-\\frac{1}{p}\\right),"
},
{
"math_id": 63,
"text": "\\gamma"
},
{
"math_id": 64,
"text": "p"
},
{
"math_id": 65,
"text": "a"
},
{
"math_id": 66,
"text": "\n|\\{ n \\le x: n \\text{ practical and } n\\equiv a \\bmod q \\}|=\\frac{c_{q,a} x}{\\log x} +O_q\\left(\\frac{x}{(\\log x)^2}\\right).\n"
},
{
"math_id": 67,
"text": "c_{q,a}"
},
{
"math_id": 68,
"text": " a \\bmod q "
},
{
"math_id": 69,
"text": "\\gcd(q,a)=\\gcd(q,b)"
},
{
"math_id": 70,
"text": "c_{q,a}=c_{q,b}"
}
] | https://en.wikipedia.org/wiki?curid=1205310 |
1205475 | Charles Anderson-Pelham, 2nd Earl of Yarborough | British nobleman
Charles Anderson Worsley Anderson-Pelham, 2nd Earl of Yarborough (12 April 1809 – 7 January 1862) was a British nobleman who succeeded to the Earldom of Yarborough in 1846.
Before his accession, he was the Member of Parliament (MP) for Newtown 1830–1831, Lincolnshire 1831–1832 and North Lincolnshire 1835–1846.
Lord Yarborough gave his name to a hand of cards dealt in contract bridge that has no card higher than a nine (see Yarborough). The probability of getting a Yarborough is formula_0, which is formula_1, or about formula_2. The Earl offered £1,000 to anyone who achieved a "Yarborough" – on condition they paid him £1 each time they did not succeed!
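The figure is easy to reproduce: the 32 cards ranked two through nine must supply all 13 cards of the hand. A short check in Python:
<syntaxhighlight lang="python">
from math import comb

# Probability that a 13-card hand contains no card higher than a nine:
# all 13 cards must come from the 32 cards ranked two through nine.
p = comb(32, 13) / comb(52, 13)
print(p)        # roughly 0.000547
print(1 / p)    # roughly 1828, i.e. odds of about 1827 to 1 against
</syntaxhighlight>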
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\binom{32}{13}}{\\binom{52}{13}}"
},
{
"math_id": 1,
"text": "\\frac{347,373,600}{635,013,559,600}"
},
{
"math_id": 2,
"text": "\\frac{1}{1828}"
}
] | https://en.wikipedia.org/wiki?curid=1205475 |
12055125 | Bombieri norm | In mathematics, the Bombieri norm, named after Enrico Bombieri, is a norm on homogeneous polynomials with coefficient in formula_0 or formula_1 (there is also a version for non homogeneous univariate polynomials). This norm has many remarkable properties, the most important being listed in this article.
Bombieri scalar product for homogeneous polynomials.
To start with the geometry, the "Bombieri scalar product" for homogeneous polynomials with "N" variables can be defined as follows using multi-index notation:
formula_2
by definition different monomials are orthogonal, so that
formula_3 if formula_4
while formula_5 by definition formula_6
In the above definition and in the rest of this article the following notation applies:
if formula_7 write formula_8 and formula_9 and formula_10
Bombieri inequality.
The fundamental property of this norm is the Bombieri inequality:
let formula_11 be two homogeneous polynomials respectively of degree formula_12 and formula_13 with formula_14 variables, then, the following inequality holds:
formula_15
Here the Bombieri inequality is the left hand side of the above statement, while the right side means that the Bombieri norm is an algebra norm. Stating the left hand side alone would be meaningless without that constraint, because in that case we could achieve the same result with any norm, simply by multiplying the norm by a well chosen factor.
This multiplicative inequality implies that the product of two polynomials is bounded from below by a quantity that depends on the multiplicand polynomials. Thus, this product can not be arbitrarily small. This multiplicative inequality is useful in metric algebraic geometry and number theory.
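The inequality is easy to check numerically from the definition of the scalar product, since the squared Bombieri norm of a homogeneous polynomial is the weighted sum of its squared coefficients with weights α!/d!. The Python sketch below does this for two small, arbitrarily chosen polynomials in two variables; the example polynomials and helper names are mine.
<syntaxhighlight lang="python">
from math import factorial

# Numerical check of the Bombieri inequality for two explicit homogeneous
# polynomials in two variables, stored as {exponent tuple: coefficient}.

def multinomial_weight(alpha, degree):
    num = 1
    for a in alpha:
        num *= factorial(a)
    return num / factorial(degree)          # alpha! / |alpha|!

def bombieri_norm_sq(poly, degree):
    return sum(c * c * multinomial_weight(alpha, degree) for alpha, c in poly.items())

def multiply(p, q):
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            key = tuple(x + y for x, y in zip(a, b))
            out[key] = out.get(key, 0) + ca * cb
    return out

P = {(2, 0): 1.0, (1, 1): -2.0, (0, 2): 3.0}   # degree 2 in (X1, X2)
Q = {(1, 0): 1.0, (0, 1): 1.0}                 # degree 1
dP, dQ = 2, 1

nP = bombieri_norm_sq(P, dP)
nQ = bombieri_norm_sq(Q, dQ)
nPQ = bombieri_norm_sq(multiply(P, Q), dP + dQ)
lower = factorial(dP) * factorial(dQ) / factorial(dP + dQ) * nP * nQ
print(lower <= nPQ <= nP * nQ)   # expected: True
</syntaxhighlight>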
Invariance by isometry.
Another important property is that the Bombieri norm is invariant under composition with an isometry:
let formula_11 be two homogeneous polynomials of degree formula_16 with formula_14 variables and let formula_17 be an isometry of formula_18 (or formula_19). Then we have formula_20. When formula_21 this implies formula_22.
This result follows from a nice integral formulation of the scalar product:
formula_23
where formula_24 is the unit sphere of formula_19 with its canonical measure formula_25.
Other inequalities.
Let formula_26 be a homogeneous polynomial of degree formula_16 with formula_14 variables and let formula_27. We have:
formula_28
formula_29
where formula_30 denotes the Euclidean norm.
The Bombieri norm is useful in polynomial factorization, where it has some advantages over the Mahler measure, according to Knuth (Exercises 20-21, pages 457-458 and 682-684). | [
{
"math_id": 0,
"text": "\\mathbb R"
},
{
"math_id": 1,
"text": "\\mathbb C"
},
{
"math_id": 2,
"text": "\\forall \\alpha,\\beta \\in \\mathbb{N}^N"
},
{
"math_id": 3,
"text": "\\langle X^\\alpha | X^\\beta \\rangle = 0"
},
{
"math_id": 4,
"text": "\\alpha \\neq \\beta,"
},
{
"math_id": 5,
"text": "\\forall \\alpha \\in \\mathbb{N}^N"
},
{
"math_id": 6,
"text": "\\|X^\\alpha\\|^2 = \\frac{\\alpha!}{|\\alpha|!}."
},
{
"math_id": 7,
"text": "\\alpha = (\\alpha_1,\\dots,\\alpha_N) \\in \\mathbb{N}^N,"
},
{
"math_id": 8,
"text": "|\\alpha| = \\sum_{i=1}^N \\alpha_i"
},
{
"math_id": 9,
"text": "\\alpha! = \\prod_{i=1}^N (\\alpha_i!)"
},
{
"math_id": 10,
"text": "X^\\alpha = \\prod_{i=1}^N X_i^{\\alpha_i}."
},
{
"math_id": 11,
"text": "P,Q"
},
{
"math_id": 12,
"text": "d^\\circ(P)"
},
{
"math_id": 13,
"text": "d^\\circ(Q)"
},
{
"math_id": 14,
"text": "N"
},
{
"math_id": 15,
"text": "\\frac{d^\\circ(P)!d^\\circ(Q)!}{(d^\\circ(P)+d^\\circ(Q))!}\\|P\\|^2 \\, \\|Q\\|^2 \\leq \n \\|P\\cdot Q\\|^2 \\leq \\|P\\|^2 \\, \\|Q\\|^2."
},
{
"math_id": 16,
"text": "d"
},
{
"math_id": 17,
"text": "h"
},
{
"math_id": 18,
"text": "\\mathbb R^N"
},
{
"math_id": 19,
"text": "\\mathbb C^N"
},
{
"math_id": 20,
"text": "\\langle P\\circ h|Q\\circ h\\rangle = \\langle P|Q\\rangle"
},
{
"math_id": 21,
"text": "P=Q"
},
{
"math_id": 22,
"text": "\\|P\\circ h\\|=\\|P\\|"
},
{
"math_id": 23,
"text": "\\langle P|Q\\rangle = {d+N-1 \\choose N-1} \\int_{S^N} P(Z)\\overline{Q(Z)}\\,d\\sigma(Z)"
},
{
"math_id": 24,
"text": "S^N"
},
{
"math_id": 25,
"text": "d\\sigma(Z)"
},
{
"math_id": 26,
"text": "P"
},
{
"math_id": 27,
"text": "Z \\in \\mathbb C^N"
},
{
"math_id": 28,
"text": "|P(Z)| \\leq \\|P\\| \\, \\|Z\\|_E^d"
},
{
"math_id": 29,
"text": "\\|\\nabla P(Z)\\|_E \\leq d \\|P\\| \\, \\|Z\\|_E^d"
},
{
"math_id": 30,
"text": "\\|\\cdot\\|_E"
}
] | https://en.wikipedia.org/wiki?curid=12055125 |
12055142 | Academic authorship | Academic authorship of journal articles, books, and other original works is a means by which academics communicate the results of their scholarly work, establish priority for their discoveries, and build their reputation among their peers.
Authorship is a primary basis that employers use to evaluate academic personnel for employment, promotion, and tenure. In academic publishing, authorship of a work is claimed by those making intellectual contributions to the completion of the research described in the work. In simple cases, a solitary scholar carries out a research project and writes the subsequent article or book. In many disciplines, however, collaboration is the norm and issues of authorship can be controversial. In these contexts, authorship can encompass activities other than writing the article; a researcher who comes up with an experimental design and analyzes the data may be considered an author, even if she or he had little role in composing the text describing the results. According to some standards, even writing the entire article would not constitute authorship unless the writer was also involved in at least one other phase of the project.
Definition.
Guidelines for assigning authorship vary between institutions and disciplines. They may be formally defined or simply cultural norms. Incorrect assignment of authorship occasionally leads to charges of academic misconduct and sanctions for the violator. A 2002 survey of a large sample of researchers who had received funding from the U.S. National Institutes of Health revealed that 10% of respondents claimed to have inappropriately assigned authorship credit within the last three years. This was the first large scale survey concerning such issues. In other fields only limited or no empirical data is available.
Authorship in the natural sciences.
The natural sciences have no universal standard for authorship, but some major multi-disciplinary journals and institutions have established guidelines for work that they publish. The journal "Proceedings of the National Academy of Sciences of the United States of America" ("PNAS") has an editorial policy that specifies "authorship should be limited to those who have contributed substantially to the work" and furthermore, "authors are strongly encouraged to indicate their specific contributions" as a footnote. The American Chemical Society further specifies that authors are those who also "share responsibility and accountability for the results" and the U.S. National Academies specify "an author who is willing to take credit for a paper must also bear responsibility for its contents. Thus, unless a footnote or the text of the paper explicitly assigns responsibility for different parts of the paper to different authors, the authors whose names appear on a paper must share responsibility for all of it."
Authorship in mathematics.
In mathematics, the authors are usually listed in alphabetical order (the so-called Hardy-Littlewood Rule).
Authorship in medicine.
The medical field defines authorship very narrowly. According to the Uniform Requirements for Manuscripts Submitted to Biomedical Journals, designation as an author must satisfy four conditions. The author must have:
Acquisition of funding, or general supervision of the research group alone does not constitute authorship. Biomedical authorship is prone to various misconducts and disputes. Many authors – especially those in the middle of the byline – do not fulfill these authorship criteria. Some medical journals have abandoned the strict notion of author, with the flexible notion of "contributor".
Authorship in the social sciences.
The American Psychological Association (APA) has similar guidelines as medicine for authorship. The APA acknowledge that authorship is not limited to the writing of manuscripts, but must include those who have made substantial contributions to a study such as "formulating the problem or hypothesis, structuring the experimental design, organizing and conducting the statistical analysis, interpreting the results, or writing a major portion of the paper". While the APA guidelines list many other forms of contributions to a study that do not constitute authorship, it does state that combinations of these and other tasks may justify authorship. Like medicine, the APA considers institutional position, such as Department Chair, insufficient for attributing authorship.
Authorship in the humanities.
Neither the Modern Languages Association nor the Chicago Manual of Style define requirements for authorship (because usually humanities works are single-authored and the author is responsible for the entire work).
Growing number of authors per paper.
From the late 17th century to the 1920s, sole authorship was the norm, and the one-paper-one-author model worked well for distributing credit. Today, shared authorship is common in most academic disciplines, with the exception of the humanities, where sole authorship is still the predominant model. Between about 1980 and 2010 the average number of authors in medical papers increased, perhaps tripling. One survey found that in mathematics journals over the first decade of the 2000s, "the number of papers with 2, 3 and 4+ authors increased by approximately 50%, 100% and 200%, respectively, while single author papers decreased slightly."
In particular types of research, including particle physics, genome sequencing and clinical trials, a paper's author list can run into the hundreds. In 1998, the Collider Detector at Fermilab (CDF) adopted a (at that time) highly unorthodox policy for assigning authorship. CDF maintains a "standard author list". All scientists and engineers working at CDF are added to the standard author list after one year of full-time work; names stay on the list until one year after the worker leaves CDF. Every publication coming out of CDF uses the entire standard author list, in alphabetical order. Other big collaborations, including most particle physics experiments, followed this model.
In large, multi-center clinical trials authorship is often used as a reward for recruiting patients. A paper published in the New England Journal of Medicine in 1993 reported on a clinical trial conducted in 1,081 hospitals in 15 different countries, involving a total of 41,021 patients. There were 972 authors listed in an appendix and authorship was assigned to a group. In 2015, an article in high-energy physics was published describing the measurement of the mass of the Higgs boson based on collisions in the Large Hadron Collider; the article boasted 5,154 authors, the printed author list needed 24 pages.
Large authors lists have attracted some criticism. They strain guidelines that insist that each author's role be described and that each author is responsible for the validity of the whole work. Such a system treats authorship more as "credit" for scientific service at the facility in general rather that as an identification of specific contributions. One commentator wrote, "In more than 25 years working as a scientific editor ... I have not been aware of any valid argument for more than three authors per paper, although I recognize that this may not be true for every field." The rise of shared authorship has been attributed to Big Science—scientific experiments that require collaboration and specialization of many individuals.
Alternatively, according to a game-theoretic analysis, the increase in multi-authorship is a consequence of the way scientists are evaluated. Scientists are judged by the number of papers they publish and by the impact of those papers. Both measures are integrated into the most popular single-value measure, the formula_0-index. The formula_0-index correlates with winning the Nobel Prize, being accepted for research fellowships and holding positions at top universities. When each author claims each paper and each citation as his or her own, papers and citations are effectively multiplied by the number of authors. Since it is common and rational to cite one's own papers more than others', a high number of coauthors increases not only the number of one's own papers, but also their impact. As a result, the game rules set by the formula_0-index as a decision criterion for success create a zero-sum formula_0-index ranking game, in which the rational strategy includes maximizing the number of coauthors, up to the majority of the researchers in a field. Data from 189 thousand publications showed that the number of coauthors is strongly correlated with the formula_0-index. Hence, the system heavily rewards multi-authored papers. This problem is openly acknowledged, and it could easily be "corrected" by dividing each paper and its citations by the number of authors, though this practice has not been widely adopted.
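For readers unfamiliar with the metric, the sketch below computes the formula_0-index from a list of citation counts and shows the author-normalized correction just mentioned (dividing each paper's citations by its number of authors); the citation and author counts are made-up illustrative data.
<syntaxhighlight lang="python">
# Sketch of the h-index and of the author-normalized correction discussed
# above.  The (citations, authors) pairs below are invented for illustration.

def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    while h < len(cited) and cited[h] >= h + 1:
        h += 1
    return h

papers = [(25, 1), (18, 3), (9, 2), (7, 5), (3, 4), (1, 2)]  # (citations, authors)
print("h-index:", h_index([c for c, _ in papers]))
print("author-normalized citations:", [round(c / a, 1) for c, a in papers])
</syntaxhighlight>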
Finally, the rise in shared authorship may also reflect increased acknowledgment of the contributions of lower level workers, including graduate students and technicians, as well as honorary authorship, while allowing for such collaborations to make an independent statement about the quality and integrity of a scientific work.
Order of authors in a list.
Rules for the order of multiple authors in a list have historically varied significantly between fields of research. Some fields list authors in order of their degree of involvement in the work, with the most active contributors listed first; other fields, such as mathematics or engineering, sometimes list them alphabetically. Historically, biologists tended to place a principal investigator (supervisor or lab head) last in an author list whereas organic chemists might have put him or her first. Research articles in high energy physics, where the author lists can number in the tens to hundreds, often list authors alphabetically. In the academic fields of economics, business, finance or particle physics, it is also usual to sort the authors alphabetically.
Although listing authors in order of the involvement in the project seems straightforward, it often leads to conflict. A study in the "Canadian Medical Association Journal" found that more than two-thirds of 919 corresponding authors disagreed with their coauthors regarding contributions of each author.
Responsibilities of authors.
Authors' reputations can be damaged if their names appear on a paper that they do not completely understand or with which they were not intimately involved. Numerous guidelines and customs specify that all co-authors must be able to understand and support a paper's major points.
In a notable case, American stem-cell researcher Gerald Schatten had his name listed on a paper co-authored with Hwang Woo-suk. The paper was later exposed as fraudulent and, though Schatten was not accused of participating in the fraud, a panel at his university found that "his failure to more closely oversee research with his name on it does make him guilty of 'research misbehavior.'"
All authors, including co-authors, are usually expected to have made reasonable attempts to check findings submitted for publication. In some cases, co-authors of faked research have been accused of inappropriate behavior or research misconduct for failing to verify reports authored by others or by a commercial sponsor. Examples include the case of Professor Geoffrey Chamberlain named as guest author of papers fabricated by Malcolm Pearce, (Chamberlain was exonerated from collusion in Pearce's deception) and the co-authors of Jan Hendrik Schön at Bell Laboratories. More recent cases include Charles Nemeroff, former editor-in-chief of "Neuropsychopharmacology", and the so-called Sheffield Actonel affair.
Additionally, authors are expected to keep all study data for later examination even after publication. Both scientific and academic censure can result from a failure to keep primary data; the case of Ranjit Chandra of Memorial University of Newfoundland provides an example of this. Many scientific journals also require that authors provide information to allow readers to determine whether the authors may have commercial or non-commercial conflicts of interest. Outlined in the author disclosure statement for the "American Journal of Human Biology", this is a policy more common in scientific fields where funding often comes from corporate sources. Authors are also commonly required to provide information about ethical aspects of research, particularly where research involves human or animal participants or use of biological material. Provision of incorrect information to journals may be regarded as misconduct. Financial pressures on universities have encouraged this type of misconduct. The majority of recent cases of alleged misconduct involving undisclosed conflicts of interest or failure of the authors to have seen scientific data involve collaborative research between scientists and biotechnology companies.
Unconventional types of authorship.
Honorary authorship.
"Honorary authorship" is sometimes granted to those who played no significant role in the work, for a variety of reasons. Until recently, it was standard to list the head of a German department or institution as an author on a paper regardless of input. The United States National Academy of Sciences, however, warns that such practices "dilute the credit due the people who actually did the work, inflate the credentials of those so 'honored,' and make the proper attribution of credit more difficult." The extent to which honorary authorship still occurs is not empirically known. However, it is plausible to expect that it is still widespread, because senior scientists leading large research groups can receive much of their reputation from a long publication list and thus have little motivation to give up honorary authorships.
A possible measure against honorary authorships has been implemented by some scientific journals, in particular by the "Nature" journals. They demand that each new manuscript must include a statement of responsibility that specifies the contribution of every author. The level of detail varies between the disciplines. Senior persons may still make some vague claim to have "supervised the project", for example, even if they were only in the formal position of a supervisor without having delivered concrete contributions. (The truth content of such statements is usually not checked by independent persons.) However, the need to describe contributions can at least be expected to somewhat reduce honorary authorships. In addition, it may help to identify the perpetrator in a case of scientific fraud.
Gift, guest and rolling authorship.
More specific types of honorary authorship are gift, guest and rolling authorship. Gift authorship is authorship granted at the offer of another author (honorary or not) for purposes beyond the research article itself, such as promotion or a favor. Guest authors are included with the specific objective of increasing the probability that the paper will be accepted by a journal. A rolling authorship is a special case of gift authorship in which the honor is granted on the basis of previous research papers (published or not) and collaborations within the same research group. The "rolled" author may (or may not) be imposed by a superior employee for reasons that range from the research group's strategic interests to personal career interests, camaraderie or (professional) concession. For instance, a post-doc researcher working in the same research group where his PhD was awarded may be willing to roll his authorship into any subsequent paper from other researchers in that group, overlooking the criteria for authorship. In itself, this would not cause authorship issues unless the collaboration was imposed by a third party, such as a supervisor or department manager, in which case it is called a "coercive authorship". Still, setting aside the authorship criteria in favor of hierarchy is an unethical practice. Such practices may hinder free thinking and professional independence, and should therefore be addressed by research managers, clear research guidelines and author agreements.
Ghost authorship.
"Ghost authorship" occurs when an individual makes a substantial contribution to the research or the writing of the report, but is not listed as an author. Researchers, statisticians and writers (e.g. medical writers or technical writers) become "ghost authors" when they meet authorship criteria but are not named as an author. Writers who work in this capacity are called ghostwriters.
Ghost authorship has been linked to partnerships between industry and higher education. Two-thirds of industry-initiated randomized trials may have evidence of ghost authorship. Ghost authorship is considered problematic because it may be used to obscure the participation of researchers with conflicts of interest.
Litigation against the pharmaceutical company, Merck over health concerns related to use of their drug, Rofecoxib (brand name Vioxx), revealed examples of ghost authorship. Merck routinely paid medical writing companies to prepare journal manuscripts, and subsequently recruited external, academically affiliated researchers to pose as the authors.
Authors are sometimes included in a list without their permission. Even if this is done with the benign intention to acknowledge some contributions, it is problematic since authors carry responsibility for correctness and thus need to have the opportunity to check the manuscript and possibly demand changes.
Fraudulent paid-for authorship.
Researchers can pay to intentionally and dishonestly list themselves as authors on papers they have not contributed to, usually by using an academic paper mill which specializes in authorship sales.
Anonymous and unclaimed authorship.
Authors occasionally forgo claiming authorship, for a number of reasons. Historically some authors have published anonymously to shield themselves when presenting controversial claims. A key example is Robert Chambers' anonymous publication of Vestiges of the Natural History of Creation, a speculative, pre-Darwinian work on the origins of life and the cosmos. The book argued for an evolutionary view of life in the same spirit as the late Frenchman Jean-Baptiste Lamarck. Lamarck had long been discredited among intellectuals by this time and evolutionary (or development) theories were exceedingly unpopular, except among the political radicals, materialists, and atheists – Chambers hoped to avoid Lamarck's fate.
In the 18th century, Émilie du Châtelet began her career as a scientific author by submitting a paper in an annual competition held by the French Academy of Sciences; papers in this competition were submitted anonymously. Initially presenting her work without claiming authorship allowed her to have her work judged by established scientists while avoiding the bias against women in the sciences. She did not win the competition, but eventually her paper was published alongside the winning submissions, under her real name.
Scientists and engineers working in corporate and military organizations are often restricted from publishing and claiming authorship of their work because their results are considered secret property of the organization that employs them. One notable example is that of William Sealy Gosset, who was forced to publish his work in statistics under the pseudonym "Student" due to his employment at the Guinness brewery. Another account describes the frustration of physicists working in nuclear weapons programs at the Lawrence Livermore Laboratory – years after making a discovery they would read of the same phenomenon being "discovered" by a physicist unaware of the original, secret discovery of the phenomenon.
Satoshi Nakamoto is a pseudonym of a still unknown author or authors' group behind a white paper about bitcoin.
Non-human authorship.
Artificial intelligence systems have been credited with authorship on a handful of academic publications; however, many publishers disallow this on the grounds that "they cannot take responsibility for the content and integrity of scientific papers".
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "h"
}
] | https://en.wikipedia.org/wiki?curid=12055142 |
12055434 | Progressive Graphics File | File format
PGF (Progressive Graphics File) is a wavelet-based bitmapped image format that employs lossless and lossy data compression. PGF was created to improve upon and replace the JPEG format. It was developed at the same time as JPEG 2000 but with a focus on speed over compression ratio.
PGF can operate at higher compression ratios without taking more encoding/decoding time and without generating the characteristic "blocky and blurry" artifacts of the original DCT-based JPEG standard. It also allows more sophisticated progressive downloads.
Color models.
PGF supports a wide variety of color models:
Technical discussion.
PGF claims to achieve an improved compression quality over JPEG, adding or improving features such as scalability. Its compression performance is similar to the original JPEG standard. Very low and very high compression rates (including lossless compression) are also supported in PGF. The ability of the design to handle a very large range of effective bit rates is one of the strengths of PGF. For example, to reduce the number of bits for a picture below a certain amount, the advisable thing to do with the first JPEG standard is to reduce the resolution of the input image before encoding it; this is ordinarily not necessary for that purpose when using PGF, because of its wavelet scalability properties.
The PGF process chain contains the following four steps:
Color components transformation.
Initially, images have to be transformed from the RGB color space to another color space, leading to three "components" that are handled separately. PGF uses a fully reversible modified YUV color transform. The transformation matrices are:
formula_0
The chrominance components can be, but do not necessarily have to be, down-scaled in resolution.
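In an integer implementation, a transform of this kind is typically realized as a lifting scheme so that it is exactly invertible despite the divisions by four. The following Python sketch illustrates the idea; the floor-rounding convention is an assumption made here for illustration and may differ in detail from the actual libPGF code.

def rgb_to_yuv_reversible(r, g, b):
    # Chrominance differences first; the luma step g + floor((u + v) / 4)
    # equals floor((r + 2g + b) / 4), matching the matrix above.
    u = r - g
    v = b - g
    y = g + ((u + v) >> 2)
    return y, u, v

def yuv_to_rgb_reversible(y, u, v):
    # Undo the lifting steps in reverse order; the reconstruction is bit-exact.
    g = y - ((u + v) >> 2)
    return u + g, g, v + g

For example, rgb_to_yuv_reversible(255, 0, 0) gives (63, 255, 0), and yuv_to_rgb_reversible(63, 255, 0) returns (255, 0, 0) exactly.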
Wavelet transform.
The color components are then wavelet transformed to an arbitrary depth. In contrast to JPEG 1992 which uses an 8x8 block-size discrete cosine transform, PGF uses one reversible wavelet transform: a rounded version of the biorthogonal CDF 5/3 wavelet transform. This wavelet filter bank is exactly the same as the reversible wavelet used in JPEG 2000. It uses only integer coefficients, so the output does not require rounding (quantization) and so it does not introduce any quantization noise.
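The reversible 5/3 transform is usually implemented with integer lifting steps, a predict step followed by an update step. The sketch below shows one decomposition level for an even-length one-dimensional signal; an image transform applies the same steps to rows and columns and recurses on the low-pass band. The boundary extension used here is an assumption chosen for illustration and may differ from the one in libPGF.

def cdf53_forward(x):
    # One level of the integer (reversible) 5/3 lifting transform for an
    # even-length 1-D signal; s holds the low-pass band, d the high-pass band.
    n = len(x)
    m = n // 2
    d = []
    for i in range(m):                       # predict step
        right = x[2 * i + 2] if 2 * i + 2 < n else x[n - 2]
        d.append(x[2 * i + 1] - ((x[2 * i] + right) >> 1))
    s = []
    for i in range(m):                       # update step
        d_left = d[i - 1] if i > 0 else d[0]
        s.append(x[2 * i] + ((d_left + d[i] + 2) >> 2))
    return s, d

def cdf53_inverse(s, d):
    # Undoing the lifting steps in reverse order with identical rounding
    # reconstructs the input exactly, so no quantization noise is introduced.
    m = len(s)
    even = []
    for i in range(m):
        d_left = d[i - 1] if i > 0 else d[0]
        even.append(s[i] - ((d_left + d[i] + 2) >> 2))
    x = []
    for i in range(m):
        right = even[i + 1] if i + 1 < m else even[m - 1]
        x.append(even[i])
        x.append(d[i] + ((even[i] + right) >> 1))
    return x

For example, cdf53_inverse(*cdf53_forward([10, 12, 8, 6])) returns [10, 12, 8, 6].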
Quantization.
After the wavelet transform, the coefficients are scalar-quantized to reduce the number of bits needed to represent them, at the expense of a loss of quality. The output is a set of integer numbers which have to be encoded bit-by-bit. The parameter that can be changed to set the final quality is the quantization step: the greater the step, the greater the compression and the loss of quality. With a quantization step equal to 1, no quantization is performed (this is used in lossless compression). In contrast to JPEG 2000, PGF uses only quantization steps that are powers of two, so the parameter value "i" represents a quantization step of 2"i". Restricting the step to powers of two removes the need for integer multiplication and division operations, since quantization reduces to bit shifts.
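A minimal sketch of power-of-two quantization, assuming a magnitude-and-sign convention (the rounding rule actually used by PGF may differ):

def quantize(coefficient, i):
    # Divide by 2**i toward zero using a bit shift on the magnitude.
    magnitude = abs(coefficient) >> i
    return -magnitude if coefficient < 0 else magnitude

def dequantize(value, i):
    # Multiply by 2**i; the i low-order bits discarded above are lost.
    return value << i

With i = 0 both functions are the identity, which corresponds to the lossless case.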
Coding.
The result of the previous process is a collection of "sub-bands" which represent several approximation scales.
A sub-band is a set of "coefficients" — integer numbers which represent aspects of the image associated with a certain frequency range as well as a spatial area of the image.
The quantized sub-bands are split further into "blocks", rectangular regions in the wavelet domain. They are typically selected in a way that the coefficients within them across the sub-bands form approximately spatial blocks in the (reconstructed) image domain and collected in a fixed size "macroblock".
The encoder has to encode the bits of all quantized coefficients of a macroblock, starting with the most significant bits and progressing to less significant bits. In this encoding process, each bit-plane of the macroblock gets encoded in two so-called "coding passes", first encoding bits of significant coefficients, then refinement bits of significant coefficients. Clearly, in lossless mode all bit-planes have to be encoded, and no bit-planes can be dropped.
Only significant coefficients are compressed with an adaptive run-length/Rice (RLR) coder, because they contain long runs of zeros. The RLR coder with parameter "k" (logarithmic length of a run of zeros) is also known as the elementary Golomb code of order 2"k".
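For illustration, a non-adaptive Rice coder can be sketched as follows; the unary convention (ones terminated by a zero) and the omission of the adaptation rule for "k" are assumptions of this sketch.

def rice_encode(n, k):
    # Elementary Golomb code of order 2**k: the quotient n >> k in unary,
    # followed by the k low-order bits of n.
    bits = "1" * (n >> k) + "0"
    if k > 0:
        bits += format(n & ((1 << k) - 1), "0{}b".format(k))
    return bits

For example, rice_encode(9, 2) yields "11001": two ones for the quotient 2, a terminating zero, and "01" for the remainder 1.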
Comparison with other file formats.
There are several self-proclaimed advantages of PGF over the ordinary JPEG standard:
Available software.
The author published "libPGF" on SourceForge under the GNU Lesser General Public License version 2.0. Xeraina offers a free Windows console encoder and decoder, and PGF viewers based on WIC for 32-bit and 64-bit Windows platforms. Other WIC applications, including File Explorer, are able to display PGF images after installing this viewer.
Digikam is a popular open-source image editing and cataloging application that uses "libPGF" for its thumbnails. It makes use of the progressive decoding feature of PGF images to store a single version of each thumbnail, which can then be decoded to different resolutions without loss, thus allowing users to dynamically change the size of the thumbnails without having to recalculate them.
See also.
File extension.
File extension codice_0 and the TLA PGF are also used for unrelated purposes:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{bmatrix}\nY_r \\\\ U_r \\\\ V_r\n\\end{bmatrix} \n= \\begin{bmatrix}\n\\frac{1}{4} & \\frac{1}{2} & \\frac{1}{4} \\\\\n1 & -1 & 0 \\\\\n0 & -1 & 1\n\\end{bmatrix}\n\\begin{bmatrix}\nR \\\\ G \\\\ B\n\\end{bmatrix}; \\qquad \\qquad\n\\begin{bmatrix}\nR \\\\ G \\\\ B\n\\end{bmatrix} \n= \\begin{bmatrix}\n1 & \\frac{3}{4} & -\\frac{1}{4} \\\\\n1 & -\\frac{1}{4} & -\\frac{1}{4} \\\\\n1 & -\\frac{1}{4} & \\frac{3}{4}\n\\end{bmatrix}\n\\begin{bmatrix}\nY_r \\\\ U_r \\\\ V_r\n\\end{bmatrix}\n"
}
] | https://en.wikipedia.org/wiki?curid=12055434 |
12055540 | Shapley–Shubik power index | The Shapley–Shubik power index was formulated by Lloyd Shapley and Martin Shubik in 1954 to measure the powers of players in a voting game.
The constituents of a voting system, such as legislative bodies, executives, shareholders, individual legislators, and so forth, can be viewed as players in an "n"-player game. Players with the same preferences form coalitions. Any coalition that has enough votes to pass a bill or elect a candidate is called winning. The power of a coalition (or a player) is measured by the fraction of the possible voting sequences in which that coalition casts the deciding vote, that is, the vote that first guarantees passage or failure.
The power index is normalized between 0 and 1. A power of 0 means that a coalition has no effect at all on the outcome of the game; and a power of 1 means a coalition determines the outcome by its vote. Also the sum of the powers of all the players is always equal to 1.
There are some algorithms for calculating the power index, e.g., dynamic programming techniques, enumeration methods and Monte Carlo methods.
Since Shapley and Shubik published their paper, several axiomatic approaches have been used to mathematically study the Shapley–Shubik power index, with the anonymity axiom, the null player axiom, the efficiency axiom and the transfer axiom being the most widely used.
Examples.
Suppose decisions are made by majority rule in a body consisting of A, B, C, D, who have 3, 2, 1 and 1 votes, respectively. The majority vote threshold is 4. There are 4! = 24 possible orders for these members to vote:
For each voting sequence, the pivot voter is the one who first raises the cumulative sum to 4 or more. A is pivotal in 12 of the 24 sequences, so A has a power index of 1/2. The others each have a power index of 1/6. Curiously, B has no more power than C and D. When one considers that A's vote determines the outcome unless the others unite against A, it becomes clear that B, C and D play identical roles, and this is reflected in the power indices.
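These values can be reproduced by brute-force enumeration of the 24 orderings. A short Python sketch, using the weights and quota of the example above:

from fractions import Fraction
from itertools import permutations

def shapley_shubik(weights, quota):
    # For each voter, count the orderings in which that voter is pivotal,
    # i.e. first raises the running vote total to the quota or beyond.
    pivots = {p: 0 for p in weights}
    orderings = list(permutations(weights))
    for order in orderings:
        running = 0
        for p in order:
            running += weights[p]
            if running >= quota:
                pivots[p] += 1
                break
    return {p: Fraction(count, len(orderings)) for p, count in pivots.items()}

print(shapley_shubik({"A": 3, "B": 2, "C": 1, "D": 1}, quota=4))
# {'A': Fraction(1, 2), 'B': Fraction(1, 6), 'C': Fraction(1, 6), 'D': Fraction(1, 6)}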
Suppose that another majority-rule voting body has formula_0 members, in which a single strong member has formula_1 votes and the remaining formula_2 members have one vote each. In this case the strong member has a power index of formula_3 (unless formula_4, in which case the power index is simply formula_5). Note that this is more than the fraction of votes which the strong member commands. Indeed, this strong member has only a fraction formula_6 of the votes. Consider, for instance, a company which has 1000 outstanding shares of voting stock. One large shareholder holds 400 shares, while 600 other shareholders hold 1 share each. This corresponds to formula_7 and formula_8. In this case the power index of the large shareholder is approximately 0.666 (or 66.6%), even though this shareholder holds only 40% of the stock. The remaining 600 shareholders each have a power index of less than 0.0006 (or 0.06%). Thus, the large shareholder holds over 1000 times more voting power than each other shareholder, while holding only 400 times as much stock.
The above can be mathematically derived as follows. Note that a majority is reached if at least formula_9 votes are cast in favor. If formula_10, the strong member clearly holds all the power, since in this case formula_11 (i.e., the votes of the strong member alone meet the majority threshold). Suppose now that formula_12 and that in a randomly chosen voting sequence, the strong member votes as the formula_13th member. This means that after the first formula_14 member have voted, formula_14 votes have been cast in favor, while after the first formula_13 members have voted, formula_15 votes have been cast in favor. The vote of strong member is pivotal if the former does not meet the majority threshold, while the latter does. That is, formula_16, and formula_17. We can rewrite this condition as formula_18. Note that our condition of formula_12 ensures that formula_19 and formula_20 (i.e., all of the permitted values of formula_13 are feasible). Thus, the strong member is the pivotal voter if formula_13 takes on one of the formula_1 values of formula_21 up to but not including formula_22. Since each of the formula_0 possible values of formula_13 is associated with the same number of voting sequences, this means that the strong member is the pivotal voter in a fraction formula_3 of the voting sequences. That is, the power index of the strong member is formula_3.
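Because the one-vote members are interchangeable, only the strong member's position in the ordering matters, so the derivation can be checked by counting pivotal positions directly. A short verification sketch for the shareholder example above:

from fractions import Fraction

def strong_member_power(n, k):
    # n one-vote members plus one member with k votes; the strong member in
    # position r (1-based) is pivotal exactly when r - 1 < t <= r - 1 + k.
    t = (n + k) // 2 + 1          # majority threshold t(n, k)
    pivotal = sum(1 for r in range(1, n + 2) if r - 1 < t <= r - 1 + k)
    return Fraction(pivotal, n + 1)

print(strong_member_power(600, 400))  # 400/601, approximately 0.6656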
Applications.
The index has been applied to the analysis of voting in the Council of the European Union.
The index has been applied to the analysis of voting in the United Nations Security Council. The UN Security Council is made up of fifteen member states, of which five (the United States of America, Russia, China, France and the United Kingdom) are permanent members of the council. For a motion to pass in the Council, it needs the support of every permanent member and the support of four non-permanent members. This is equivalent to a voting body in which the five permanent members have eight votes each, the ten other members have one vote each and the quota is forty-four votes: the total is then fifty votes, so a motion needs all five permanent members and at least four other votes to pass.
Note that a non-permanent member is pivotal in a permutation if and only if they are in the ninth position to vote and all five permanent members have already voted. Suppose that we have a permutation in which a non-permanent member is pivotal. Then there are three non-permanent members and five permanent that have to come before this pivotal member in this permutation.
Therefore, there are formula_23 ways of choosing these members and so 8! × formula_23 different orders of the members before the pivotal voter. There would then be 6! ways of ordering the remaining voters after the pivotal voter. As there are a total of 15! permutations of 15 voters, the Shapley–Shubik power index of a non-permanent member is: formula_24.
Hence the power index of a permanent member is formula_25. | [
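Both fractions can be verified with exact rational arithmetic, for instance in Python:

from fractions import Fraction
from math import comb, factorial

# A non-permanent member is pivotal only in ninth position, preceded by all
# five permanent members and three of the other nine non-permanent members.
non_permanent = Fraction(comb(9, 3) * factorial(8) * factorial(6), factorial(15))
print(non_permanent)                 # 4/2145

# The five permanent members share the remaining power equally.
permanent = (1 - 10 * non_permanent) / 5
print(permanent)                     # 421/2145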
{
"math_id": 0,
"text": "n+1"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "\\dfrac{k}{n+1}"
},
{
"math_id": 4,
"text": "k > n+1"
},
{
"math_id": 5,
"text": "1"
},
{
"math_id": 6,
"text": "\\dfrac{k}{n+k}"
},
{
"math_id": 7,
"text": "n = 600"
},
{
"math_id": 8,
"text": "k=400"
},
{
"math_id": 9,
"text": "t(n, k) = \\left\\lfloor\\dfrac{n+k}{2}\\right\\rfloor + 1"
},
{
"math_id": 10,
"text": "k \\geq n+1"
},
{
"math_id": 11,
"text": "k \\geq t(n, k)"
},
{
"math_id": 12,
"text": "k \\leq n+1"
},
{
"math_id": 13,
"text": "r"
},
{
"math_id": 14,
"text": "r-1"
},
{
"math_id": 15,
"text": "r-1+k"
},
{
"math_id": 16,
"text": "r-1 < t(n, k)"
},
{
"math_id": 17,
"text": "r-1+k \\geq t(n, k)"
},
{
"math_id": 18,
"text": "t(n,k) + 1 - k \\leq r < t(n,k) + 1"
},
{
"math_id": 19,
"text": "1 \\leq t(n,k) + 1 - k"
},
{
"math_id": 20,
"text": "t(n,k) + 1 \\leq n + 2"
},
{
"math_id": 21,
"text": "t(n, k) + 1 - k"
},
{
"math_id": 22,
"text": "t(n,k) + 1"
},
{
"math_id": 23,
"text": "\\textstyle\\binom 9 3"
},
{
"math_id": 24,
"text": " \\frac{\\binom{9}{3} (8!) (6!)}{15!} = \\frac{4}{2145}"
},
{
"math_id": 25,
"text": " \\frac{421}{2145}"
}
] | https://en.wikipedia.org/wiki?curid=12055540 |
12056032 | Polygonal chain | Connected series of line segments
In geometry, a polygonal chain is a connected series of line segments. More formally, a polygonal chain is a curve specified by a sequence of points formula_0 called its vertices. The curve itself consists of the line segments connecting the consecutive vertices.
Variations.
Simple.
A simple polygonal chain is one in which only consecutive segments intersect and only at their endpoints.
Closed.
A closed polygonal chain is one in which the first vertex coincides with the last one, or, alternatively, the first and the last vertices are also connected by a line segment.
A simple closed polygonal chain in the plane is the boundary of a simple polygon. Often the term "polygon" is used in the meaning of "closed polygonal chain", but in some cases it is important to draw a distinction between a polygonal area and a polygonal chain.
Monotone.
A polygonal chain is called monotone if there is a straight line "L" such that every line perpendicular to "L" intersects the chain at most once. Every nontrivial monotone polygonal chain is open. In comparison, a monotone polygon is a polygon (a closed chain) that can be partitioned into exactly two monotone chains. The graphs of piecewise linear functions form monotone chains with respect to a horizontal line.
Parametrization.
Each segment of a polygonal chain is typically parametrized linearly, using linear interpolation between successive vertices. For the whole chain, two parametrizations are common in practical applications: Each segment of the chain can be assigned a unit interval of the parameter corresponding to the index of the first vertex; alternately, each segment of the chain can be assigned an interval of the parameter corresponding to the length of the segment, so that the parameter corresponds uniformly to arclength along the whole chain.
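A minimal sketch of the arclength parametrization in Python: given the chain's vertices and a parameter t between 0 and 1, it returns the point located a fraction t of the total length along the chain. The linear scan over segments is a simplification chosen for clarity.

import math

def point_at(vertices, t):
    # Point a fraction t (0 <= t <= 1) of the total arclength along the chain.
    lengths = [math.dist(a, b) for a, b in zip(vertices, vertices[1:])]
    remaining = t * sum(lengths)
    for (x0, y0), (x1, y1), seg in zip(vertices, vertices[1:], lengths):
        if remaining <= seg and seg > 0:
            u = remaining / seg          # linear interpolation within the segment
            return (x0 + u * (x1 - x0), y0 + u * (y1 - y0))
        remaining -= seg
    return vertices[-1]

For example, point_at([(0, 0), (1, 0), (1, 1)], 0.5) returns (1.0, 0.0), the midpoint of the chain by length.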
From point sets.
Every set of at least formula_1 points contains a polygonal path of at least formula_2 edges in which all slopes have the same sign. This is a corollary of the Erdős–Szekeres theorem.
Applications.
Polygonal chains can often be used to approximate more complex curves. In this context, the Ramer–Douglas–Peucker algorithm can be used to find a polygonal chain with few segments that serves as an accurate approximation.
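A compact recursive sketch of the Ramer–Douglas–Peucker algorithm is given below: a vertex is kept only if its perpendicular distance from the chord between the current endpoints exceeds the tolerance epsilon. This is an illustrative implementation, not the one used in any particular library.

import math

def rdp(points, epsilon):
    # Recursively keep the vertex farthest from the chord between the endpoints
    # whenever that distance exceeds epsilon; otherwise keep only the endpoints.
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    def dist(p):
        # Perpendicular distance from p to the line through the endpoints.
        num = abs((x1 - x0) * (y0 - p[1]) - (x0 - p[0]) * (y1 - y0))
        den = math.hypot(x1 - x0, y1 - y0)
        return num / den if den else math.dist(p, (x0, y0))
    index, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                      key=lambda item: item[1])
    if dmax <= epsilon:
        return [points[0], points[-1]]
    return rdp(points[:index + 1], epsilon)[:-1] + rdp(points[index:], epsilon)

For example, rdp([(0, 0), (1, 0.05), (2, 0)], 0.1) returns [(0, 0), (2, 0)], since the middle vertex lies within the tolerance of the chord.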
In graph drawing, polygonal chains are often used to represent the edges of graphs, in drawing styles where drawing the edges as straight line segments would cause crossings, edge-vertex collisions, or other undesired features. In this context, it is often desired to draw edges with as few segments and bends as possible, to reduce the visual clutter in the drawing; the problem of minimizing the number of bends is called bend minimization.
In computer-aided geometric design, smooth curves are often defined by a list of control points, e.g. in defining Bézier curve segments. When connected together, the control points form a polygonal chain called a "control polygon".
Polygonal chains are also a fundamental data type in computational geometry. For instance, a point location algorithm of Lee and Preparata operates by decomposing arbitrary planar subdivisions into an ordered sequence of monotone chains, in which a point location query problem may be solved by binary search; this method was later refined to give optimal time bounds for the point location problem.
In geographic information systems, linestrings may represent any linear geometry, and can be described using the well-known text markup as a codice_0 or codice_1. Linear rings (or codice_2) are closed and simple polygonal chains used to build polygon geometries.
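As a small illustration, a list of vertices can be serialized to well-known text with a short helper. This is only a sketch; real GIS libraries provide complete WKT readers and writers, and support for a stand-alone linear-ring keyword varies between implementations.

def to_wkt(vertices, ring=False):
    # Serialize a polygonal chain as WKT; a linear ring repeats the first
    # vertex at the end so that the chain is closed.
    pts = list(vertices)
    if ring and pts[0] != pts[-1]:
        pts.append(pts[0])
    body = ", ".join("{} {}".format(x, y) for x, y in pts)
    return ("LINEARRING ({})" if ring else "LINESTRING ({})").format(body)

print(to_wkt([(0, 0), (4, 0), (4, 3)]))             # LINESTRING (0 0, 4 0, 4 3)
print(to_wkt([(0, 0), (4, 0), (4, 3)], ring=True))  # LINEARRING (0 0, 4 0, 4 3, 0 0)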
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(A_1, A_2, \\dots, A_n)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\lfloor\\sqrt{n-1}\\rfloor"
}
] | https://en.wikipedia.org/wiki?curid=12056032 |
1205940 | Stein factorization | In algebraic geometry, the Stein factorization, introduced by Karl Stein (1956) for the case of complex spaces, states that a proper morphism can be factorized as a composition of a finite mapping and a proper morphism with connected fibers. Roughly speaking, Stein factorization contracts the connected components of the fibers of a mapping to points.
Statement.
One version for schemes states the following:
Let "X" be a scheme, "S" a locally noetherian scheme and formula_0 a proper morphism. Then one can write
formula_1
where formula_2 is a finite morphism and formula_3 is a proper morphism so that formula_4
The existence of this decomposition itself is not difficult. See below. But, by Zariski's connectedness theorem, the last part in the above says that the fiber formula_5 is connected for any formula_6. It follows:
Corollary: For any formula_7, the set of connected components of the fiber formula_8 is in bijection with the set of points in the fiber formula_9.
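As a toy illustration of the corollary (an added example, not taken from the cited references): let "X" be the disjoint union of two copies of "S", with "f" restricting to the identity on each copy. Then

f_* \mathcal{O}_X \cong \mathcal{O}_S \times \mathcal{O}_S, \qquad S' = \operatorname{Spec}_S(\mathcal{O}_S \times \mathcal{O}_S) \cong S \sqcup S,

so "g" is the finite two-to-one map sending each copy of "S" identically to "S", and "f'" identifies each copy of "S" inside "X" with the corresponding copy in "S'". Every fiber of "f" has two connected components, matching the two points in the corresponding fiber of "g".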
Proof.
Set:
formula_10
where Spec"S" is the relative Spec. The construction gives the natural map formula_2, which is finite since formula_11 is coherent and "f" is proper. The morphism "f" factors through "g" and one gets formula_3, which is proper. By construction, formula_12. One then uses the theorem on formal functions to show that the last equality implies formula_13 has connected fibers. (This part is sometimes referred to as Zariski's connectedness theorem.)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f: X \\to S"
},
{
"math_id": 1,
"text": "f = g \\circ f'"
},
{
"math_id": 2,
"text": "g\\colon S' \\to S"
},
{
"math_id": 3,
"text": "f'\\colon X \\to S'"
},
{
"math_id": 4,
"text": "f'_* \\mathcal{O}_X = \\mathcal{O}_{S'}."
},
{
"math_id": 5,
"text": "f'^{-1}(s)"
},
{
"math_id": 6,
"text": "s \\in S'"
},
{
"math_id": 7,
"text": "s \\in S"
},
{
"math_id": 8,
"text": "f^{-1}(s)"
},
{
"math_id": 9,
"text": "g^{-1}(s)"
},
{
"math_id": 10,
"text": "S' = \\operatorname{Spec}_S f_* \\mathcal{O}_X"
},
{
"math_id": 11,
"text": "\\mathcal{O}_X"
},
{
"math_id": 12,
"text": "f'_* \\mathcal{O}_X = \\mathcal{O}_{S'}"
},
{
"math_id": 13,
"text": "f'"
}
] | https://en.wikipedia.org/wiki?curid=1205940 |