id | title | text | formulas | url
---|---|---|---|---|
885725 | Lens space | 3-manifold that is a quotient of S³ by ℤ/p actions: (z,w) ↦ (exp(2πi/p)z, exp(2πiq/p)w)
A lens space is an example of a topological space, considered in mathematics. The term often refers to a specific class of 3-manifolds, but in general can be defined for higher dimensions.
In the 3-manifold case, a lens space can be visualized as the result of gluing two solid tori together by a homeomorphism of their boundaries. Often the 3-sphere and formula_0, both of which can be obtained as above, are not counted as they are considered trivial special cases.
The three-dimensional lens spaces formula_1 were introduced by Heinrich Tietze in 1908. They were the first known examples of 3-manifolds which were not determined by their homology and fundamental group alone, and the simplest examples of closed manifolds whose homeomorphism type is not determined by their homotopy type. J. W. Alexander in 1919 showed that the lens spaces formula_2 and formula_3 were not homeomorphic even though they have isomorphic fundamental groups and the same homology, though they do not have the same homotopy type. Other lens spaces (such as formula_4 and formula_5) have even the same homotopy type (and thus isomorphic fundamental groups and homology), but not the same homeomorphism type; they can thus be seen as the birth of geometric topology of manifolds as distinct from algebraic topology.
There is a complete classification of three-dimensional lens spaces, by fundamental group and Reidemeister torsion.
Definition.
The three-dimensional lens spaces formula_1 are quotients of formula_6 by formula_7-actions. More precisely, let formula_8 and formula_9 be coprime integers and consider formula_10 as the unit sphere in formula_11. Then the formula_12-action on formula_6 generated by the homeomorphism
formula_13
is free. The resulting quotient space is called the lens space formula_1.
This can be generalized to higher dimensions as follows: Let formula_14 be integers such that the formula_15 are coprime to formula_8 and consider formula_16 as the unit sphere in formula_17. The lens space formula_18 is the quotient of formula_16 by the free formula_19-action generated by
formula_20
In three dimensions we have formula_21
Properties.
The fundamental group of all the lens spaces formula_22 is formula_23 independent of the formula_15.
Lens spaces are locally symmetric spaces, but not (fully) symmetric, with the exception of formula_24 which is symmetric. (Locally symmetric spaces are symmetric spaces that are quotiented by an isometry that has no fixed points; lens spaces meet this definition.)
Alternative definitions of three-dimensional lens spaces.
The three dimensional lens space formula_1 is often defined to be a solid ball with the following identification: first mark "p" equally spaced points on the equator of the solid ball, denote them formula_25 to formula_26, then on the boundary of the ball, draw geodesic lines connecting the points to the north and south pole. Now identify spherical triangles by identifying the north pole to the south pole and the points formula_27 with formula_28 and formula_29 with formula_30. The resulting space is homeomorphic to the lens space formula_1.
Another related definition is to view the solid ball as the following solid bipyramid: construct a planar regular "p" sided polygon. Put two points "n" and "s" directly above and below the center of the polygon. Construct the bipyramid by joining each point of the regular "p" sided polygon to "n" and "s". Fill in the bipyramid to make it solid and give the triangles on the boundary the same identification as above.
Classification of 3-dimensional lens spaces.
Classifications up to homeomorphism and homotopy equivalence are known, as follows. The three-dimensional spaces formula_31 and formula_32 are: 1. homotopy equivalent if and only if formula_33 for some formula_34; and 2. homeomorphic if and only if formula_35.
If formula_35 as in case 2., they are "obviously" homeomorphic, as one can easily produce a homeomorphism. It is harder to show that these are the only homeomorphic lens spaces.
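As a small illustration (added here, not part of the original text), the two arithmetic criteria above can be checked directly for small "p"; the function names below are hypothetical and the Python sketch assumes q coprime to p.

```python
# Sketch: numerically test the two classification criteria for the
# three-dimensional lens spaces L(p;q1) and L(p;q2).
#   homeomorphic        iff  q1 = +/- q2^{+/-1}  (mod p)
#   homotopy equivalent iff  q1*q2 = +/- n^2     (mod p) for some natural n

def homeomorphic(p, q1, q2):
    q2_inv = pow(q2, -1, p)               # modular inverse (Python 3.8+)
    return q1 % p in {q2 % p, (-q2) % p, q2_inv, (-q2_inv) % p}

def homotopy_equivalent(p, q1, q2):
    squares = {(n * n) % p for n in range(1, p)}
    return (q1 * q2) % p in squares or (-q1 * q2) % p in squares

# L(7;1) and L(7;2): homotopy equivalent but not homeomorphic.
print(homeomorphic(7, 1, 2), homotopy_equivalent(7, 1, 2))    # False True
# L(5;1) and L(5;2): neither homeomorphic nor homotopy equivalent.
print(homeomorphic(5, 1, 2), homotopy_equivalent(5, 1, 2))    # False False
```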
The invariant that gives the homotopy classification of 3-dimensional lens spaces is the torsion linking form.
The homeomorphism classification is more subtle, and is given by Reidemeister torsion. This was originally given as a classification up to PL homeomorphism, but it was later shown to be a homeomorphism classification as well. In modern terms, lens spaces are determined by "simple" homotopy type, and there are no normal invariants (like characteristic classes) or surgery obstruction.
There is also a knot-theoretic classification: let "C" be a closed curve in the lens space which lifts to a knot in the universal cover of the lens space. If the lifted knot has a trivial Alexander polynomial, compute the torsion linking form on the pair ("C","C"); this then gives the homeomorphism classification.
Another invariant is the homotopy type of the configuration spaces – it has been shown that homotopy equivalent but not homeomorphic lens spaces may have configuration spaces with different homotopy types, which can be detected by different Massey products. | [
{
"math_id": 0,
"text": "S^2 \\times S^1"
},
{
"math_id": 1,
"text": "L(p;q)"
},
{
"math_id": 2,
"text": "L(5;1)"
},
{
"math_id": 3,
"text": "L(5;2)"
},
{
"math_id": 4,
"text": "L(7;1)"
},
{
"math_id": 5,
"text": "L(7;2)"
},
{
"math_id": 6,
"text": "S^3"
},
{
"math_id": 7,
"text": "\\Z/p"
},
{
"math_id": 8,
"text": "p"
},
{
"math_id": 9,
"text": "q"
},
{
"math_id": 10,
"text": "S^3 "
},
{
"math_id": 11,
"text": " \\Complex^2"
},
{
"math_id": 12,
"text": " \\mathbb{Z}/p"
},
{
"math_id": 13,
"text": "(z_1,z_2) \\mapsto (e^{2\\pi i /p} \\cdot z_1, e^{2\\pi i q/p}\\cdot z_2)"
},
{
"math_id": 14,
"text": "p,q_1,\\ldots,q_n"
},
{
"math_id": 15,
"text": "q_i"
},
{
"math_id": 16,
"text": "S^{2n-1}"
},
{
"math_id": 17,
"text": " \\mathbb C^n"
},
{
"math_id": 18,
"text": "L(p;q_1,\\ldots q_n)"
},
{
"math_id": 19,
"text": "\\mathbb Z/p"
},
{
"math_id": 20,
"text": "(z_1,\\ldots,z_n) \\mapsto (e^{2\\pi iq_1/p} \\cdot z_1,\\ldots, e^{2\\pi i q_n/p}\\cdot z_n)."
},
{
"math_id": 21,
"text": "L(p;q)=L(p;1,q)."
},
{
"math_id": 22,
"text": "L(p;q_1,\\ldots, q_n)"
},
{
"math_id": 23,
"text": "\\Z/p\\Z"
},
{
"math_id": 24,
"text": "L(2;1)"
},
{
"math_id": 25,
"text": "a_0"
},
{
"math_id": 26,
"text": "a_{p-1}"
},
{
"math_id": 27,
"text": "a_i"
},
{
"math_id": 28,
"text": "a_{i+q}"
},
{
"math_id": 29,
"text": "a_{i+1}"
},
{
"math_id": 30,
"text": "a_{i+q+1}"
},
{
"math_id": 31,
"text": "L(p;q_1)"
},
{
"math_id": 32,
"text": "L(p;q_2)"
},
{
"math_id": 33,
"text": "q_1 q_2 \\equiv \\pm n^2 \\pmod{p}"
},
{
"math_id": 34,
"text": "n \\in \\mathbb{N}"
},
{
"math_id": 35,
"text": "q_1 \\equiv \\pm q_2^{\\pm 1} \\pmod{p}"
},
{
"math_id": 36,
"text": "\\S"
}
] | https://en.wikipedia.org/wiki?curid=885725 |
8861079 | Euler's continued fraction formula | Connects a very general infinite series with an infinite continued fraction.
In the analytic theory of continued fractions, Euler's continued fraction formula is an identity connecting a certain very general infinite series with an infinite continued fraction. First published in 1748, it was at first regarded as a simple identity connecting a finite sum with a finite continued fraction in such a way that the extension to the infinite case was immediately apparent. Today it is more fully appreciated as a useful tool in analytic attacks on the general convergence problem for infinite continued fractions with complex elements.
The original formula.
Euler derived the formula as connecting a finite sum of products with a finite continued fraction:
formula_0
The identity is easily established by induction on "n", and is therefore applicable in the limit: if the expression on the left is extended to represent a convergent infinite series, the expression on the right can also be extended to represent a convergent infinite continued fraction.
This is written more compactly using generalized continued fraction notation:
formula_1
Euler's formula.
If "r""i" are complex numbers and "x" is defined by
formula_2
then this equality can be proved by induction
formula_3.
Here equality is to be understood as equivalence, in the sense that the n'th convergent of each continued fraction is equal to the n'th partial sum of the series shown above. So if the series shown is convergent – or "uniformly" convergent, when the "r""i"'s are functions of some complex variable "z" – then the continued fractions also converge, or converge uniformly.
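As a quick numerical sanity check (an added illustration; the sample values are arbitrary), the Python sketch below compares a finite partial sum with the corresponding finite continued fraction using exact rational arithmetic, so the two sides agree exactly.

```python
# Sketch: for sample values a_0, ..., a_n, the nth partial sum of the series
# equals the nth convergent of Euler's continued fraction.
from fractions import Fraction

def partial_sum(a):
    total, prod = Fraction(0), Fraction(1)
    for ai in a:
        prod *= ai
        total += prod
    return total                          # a0 + a0*a1 + ... + a0*a1*...*an

def euler_cf(a):
    tail = Fraction(0)
    for ai in reversed(a[1:]):            # evaluate innermost level first
        tail = ai / (1 + ai - tail)
    return a[0] / (1 - tail)              # a0 / (1 - a1/(1 + a1 - a2/(...)))

a = [Fraction(2), Fraction(1, 3), Fraction(-1, 4), Fraction(5, 7)]
print(partial_sum(a), euler_cf(a), partial_sum(a) == euler_cf(a))
```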
Proof by induction.
Theorem: Let formula_4 be a natural number. For formula_5 complex values formula_6,
formula_7
and for formula_4 complex values formula_8,
formula_9
Proof: We perform a double induction. For formula_10, we have
formula_11
and
formula_12
Now suppose both statements are true for some formula_13.
We have
formula_14 where formula_15
by applying the induction hypothesis to formula_16.
But formula_17 would imply formula_18, which in turn would imply formula_19, a contradiction. Hence
formula_20
completing that induction.
Note that for formula_21,
formula_22
if formula_23, then both sides are zero.
Using
formula_24 and formula_25,
and applying the induction hypothesis to the values formula_26,
formula_27
completing the other induction.
As an example, the expression formula_28 can be rearranged into a continued fraction.
formula_29
This can be applied to a sequence of any length, and will therefore also apply in the infinite case.
Examples.
The exponential function.
The exponential function "e""x" is an entire function with a power series expansion that converges uniformly on every bounded domain in the complex plane.
formula_30
The application of Euler's continued fraction formula is straightforward:
formula_31
Applying an equivalence transformation that consists of clearing the fractions this example is simplified to
formula_32
and we can be certain that this continued fraction converges uniformly on every bounded domain in the complex plane because it is equivalent to the power series for "e""x".
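The following Python sketch (an added illustration; the truncation depth and the test value of "x" are arbitrary choices) evaluates this simplified continued fraction to a finite depth and compares it with math.exp.

```python
# Sketch: finite-depth evaluation of the simplified continued fraction
#   e^x = 1 / (1 - x/(1 + x - x/(2 + x - 2x/(3 + x - 3x/(4 + x - ...)))))
import math

def exp_cf(x, depth=30):
    tail = 0.0
    for k in range(depth, 0, -1):              # level k has denominator k + x
        num = x if k == 1 else (k - 1) * x     # numerators: x, x, 2x, 3x, ...
        tail = num / (k + x - tail)
    return 1.0 / (1.0 - tail)

print(exp_cf(1.5), math.exp(1.5))              # the two values agree closely
```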
The natural logarithm.
The Taylor series for the principal branch of the natural logarithm in the neighborhood of 1 is well known:
formula_33
This series converges when |"x"| < 1 and can also be expressed as a sum of products:
formula_34
Applying Euler's continued fraction formula to this expression shows that
formula_35
and using an equivalence transformation to clear all the fractions results in
formula_36
This continued fraction converges when |"x"| < 1 because it is equivalent to the series from which it was derived.
The trigonometric functions.
The Taylor series of the sine function converges over the entire complex plane and can be expressed as the sum of products.
formula_37
Euler's continued fraction formula can then be applied
formula_38
An equivalence transformation is used to clear the denominators:
formula_39
The same argument can be applied to the cosine function:
formula_40
formula_41
The inverse trigonometric functions.
The inverse trigonometric functions can be represented as continued fractions.
formula_42
An equivalence transformation yields
formula_43
The continued fraction for the inverse tangent is straightforward:
formula_44
A continued fraction for π.
We can use the previous example involving the inverse tangent to construct a continued fraction representation of π. We note that
formula_45
And setting "x" = "1" in the previous result, we obtain immediately
formula_46
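A short Python sketch (added for illustration) evaluates this continued fraction to a finite depth; since it is equivalent to the slowly converging Leibniz series, a large depth is needed for even a handful of correct digits.

```python
# Sketch: finite-depth evaluation of
#   pi = 4 / (1 + 1^2/(2 + 3^2/(2 + 5^2/(2 + 7^2/(2 + ...)))))
# The fraction is equivalent to the Leibniz series, so convergence is slow.
import math

def pi_cf(depth=100_000):
    tail = 0.0
    for k in range(depth, 0, -1):
        tail = (2 * k - 1) ** 2 / (2.0 + tail)
    return 4.0 / (1.0 + tail)

print(pi_cf(), math.pi)    # agreement to roughly five decimal places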
The hyperbolic functions.
Recalling the relationship between the hyperbolic functions and the trigonometric functions,
formula_47
formula_48
and the fact that formula_49 the following continued fractions are easily derived from the ones above:
formula_50
formula_51
The inverse hyperbolic functions.
The inverse hyperbolic functions are related to the inverse trigonometric functions similar to how the hyperbolic functions are related to the trigonometric functions,
formula_52
formula_53
And these continued fractions are easily derived:
formula_54
formula_55
References.
| [
{
"math_id": 0,
"text": "\na_0\\left(1 + a_1\\left(1 + a_2\\left(\\cdots + a_n\\right)\\cdots\\right)\\right) = a_0 + a_0a_1 + a_0a_1a_2 + \\cdots + a_0a_1a_2\\cdots a_n =\n\\cfrac{a_0}{1 - \\cfrac{a_1}{1 + a_1 - \\cfrac{a_2}{1 + a_2 - \\cfrac{\\ddots}{\\ddots \n\\cfrac{a_{n-1}}{1 + a_{n-1} - \\cfrac{a_n}{1 + a_n}}}}}}\\,\n"
},
{
"math_id": 1,
"text": "\na_0 + a_0 a_1 + a_0 a_1 a_2 + \\cdots + a_0 a_1 a_2 \\cdots a_n =\n\\frac{a_0}{1 +} \\, \\frac{-a_1}{1 + a_1 +} \\, \\cfrac{-a_2}{1 + a_2 +} \\cdots \\frac{-a_n}{1 + a_n}.\n"
},
{
"math_id": 2,
"text": "\nx = 1 + \\sum_{i=1}^\\infty r_1r_2\\cdots r_i = 1 + \\sum_{i=1}^\\infty \\left( \\prod_{j=1}^i r_j \\right)\\,,\n"
},
{
"math_id": 3,
"text": "\nx = \\cfrac{1}{1 - \\cfrac{r_1}{1 + r_1 - \\cfrac{r_2}{1 + r_2 - \\cfrac{r_3}{1 + r_3 - \\ddots}}}}\\,\n"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "n+1"
},
{
"math_id": 6,
"text": "a_0, a_1, \\ldots, a_{n}"
},
{
"math_id": 7,
"text": "\n\\sum_{k=0}^n \\prod_{j=0}^k a_j = \\frac{a_0}{1+} \\, \\frac{-a_1}{1+a_1+} \\cdots \\frac{-a_n}{1+a_n} "
},
{
"math_id": 8,
"text": "b_1, \\ldots, b_{n}"
},
{
"math_id": 9,
"text": "\\frac{-b_1}{1+b_1+} \\, \\frac{-b_2}{1+b_2+} \\cdots \\frac{-b_n}{1+b_n} \\ne -1."
},
{
"math_id": 10,
"text": "n=1"
},
{
"math_id": 11,
"text": "\n\\frac{a_0}{1+} \\, \\frac{-a_1}{1+a_1} = \\frac{a_0}{1+\\frac{-a_1}{1+a_1}} = \\frac{a_0(1+a_1)}{1} = a_0 + a_0 a_1\n= \\sum_{k=0}^1 \\prod_{j=0}^k a_j"
},
{
"math_id": 12,
"text": "\n\\frac{-b_1}{1+b_1}\\ne -1."
},
{
"math_id": 13,
"text": "n \\ge 1"
},
{
"math_id": 14,
"text": "\\frac{-b_1}{1+b_1+} \\, \\frac{-b_2}{1+b_2+} \\cdots \\frac{-b_{n+1}}{1+b_{n+1}}\n= \\frac{-b_1}{1+b_1+x}"
},
{
"math_id": 15,
"text": "x = \\frac{-b_2}{1+b_2+} \\cdots \\frac{-b_{n+1}}{1+b_{n+1}} \\ne -1"
},
{
"math_id": 16,
"text": "b_2, \\ldots, b_{n+1}"
},
{
"math_id": 17,
"text": "\\frac{-b_1}{1+b_1+x} = -1"
},
{
"math_id": 18,
"text": "b_1 = 1+b_1+x"
},
{
"math_id": 19,
"text": "x = -1"
},
{
"math_id": 20,
"text": "\\frac{-b_1}{1+b_1+} \\, \\frac{-b_2}{1+b_2+} \\cdots \\frac{-b_{n+1}}{1+b_{n+1}} \\ne -1,"
},
{
"math_id": 21,
"text": "x \\ne -1"
},
{
"math_id": 22,
"text": "\n\\frac{1}{1+} \\, \\frac{-a}{1+a+x} = \\frac{1}{1-\\frac{a}{1+a+x}} = \\frac{1+a+x}{1+x} = 1 + \\frac{a}{1+x};"
},
{
"math_id": 23,
"text": "x=-1-a"
},
{
"math_id": 24,
"text": "a=a_1"
},
{
"math_id": 25,
"text": "x = \\frac{-a_2}{1+a_2+} \\, \\cdots \\frac{-a_{n+1}}{1+a_{n+1}} \\ne -1"
},
{
"math_id": 26,
"text": "a_1, a_2, \\ldots, a_{n+1}"
},
{
"math_id": 27,
"text": "\n\\begin{align}\na_0 + & a_0a_1 + a_0a_1a_2 + \\cdots + a_0a_1a_2a_3 \\cdots a_{n+1} \\\\\n&= a_0 + a_0(a_1 + a_1a_2 + \\cdots + a_1a_2a_3 \\cdots a_{n+1}) \\\\\n&= a_0 + a_0 \\big( \\frac{a_1}{1+} \\, \\frac{-a_2}{1+a_2+} \\, \\cdots \\frac{-a_{n+1}}{1+a_{n+1}} \\big)\\\\\n&= a_0 \\big(1 + \\frac{a_1}{1+} \\, \\frac{-a_2}{1+a_2+} \\, \\cdots \\frac{-a_{n+1}}{1+a_{n+1}} \\big)\\\\\n&= a_0 \\big(\\frac{1}{1+} \\, \\frac{-a_1}{1+a_1+} \\, \\frac{-a_2}{1+a_2+} \\, \\cdots \\frac{-a_{n+1}}{1+a_{n+1}} \\big)\\\\\n&= \\frac{a_0}{1+} \\, \\frac{-a_1}{1+a_1+} \\, \\frac{-a_2}{1+a_2+} \\, \\cdots \\frac{-a_{n+1}}{1+a_{n+1}},\n\\end{align}"
},
{
"math_id": 28,
"text": "a_0 + a_0a_1 + a_0a_1a_2 + a_0a_1a_2a_3"
},
{
"math_id": 29,
"text": " \\begin{align}\na_0 + a_0a_1 + a_0a_1a_2 + a_0a_1a_2a_3 & = a_0(a_1(a_2(a_3 + 1) + 1) + 1) \\\\[8pt]\n& = \\cfrac{a_0}{\\cfrac{1}{a_1(a_2(a_3 + 1) + 1) + 1}} \\\\[8pt]\n& = \\cfrac{a_0}{\\cfrac{a_1(a_2(a_3 + 1) + 1) + 1}{a_1(a_2(a_3 + 1) + 1) + 1} - \\cfrac{a_1(a_2(a_3 + 1) + 1)}{a_1(a_2(a_3 + 1) + 1) + 1}} = \\cfrac{a_0}{1 - \\cfrac{a_1(a_2(a_3 + 1) + 1)}{a_1(a_2(a_3 + 1) + 1) + 1}} \\\\[8pt]\n& = \\cfrac{a_0}{1 - \\cfrac{a_1}{\\cfrac{a_1(a_2(a_3 + 1) + 1) + 1}{a_2(a_3 + 1) + 1}}} \\\\[8pt]\n& = \\cfrac{a_0}{1 - \\cfrac{a_1}{\\cfrac{a_1(a_2(a_3 + 1) + 1)}{a_2(a_3 + 1) + 1} + \\cfrac{a_2(a_3 + 1) + 1}{a_2(a_3 + 1) + 1} - \\cfrac{a_2(a_3 + 1)}{a_2(a_3 + 1) + 1}}} = \\cfrac{a_0}{1 - \\cfrac{a_1}{1 + a_1 - \\cfrac{a_2(a_3 + 1)}{a_2(a_3 + 1) + 1}}} \\\\[8pt]\n& = \\cfrac{a_0}{1 - \\cfrac{a_1}{1 + a_1 - \\cfrac{a_2}{\\cfrac{a_2(a_3 + 1) + 1}{a_3 + 1}}}} \\\\[8pt]\n& = \\cfrac{a_0}{1 - \\cfrac{a_1}{1 + a_1 - \\cfrac{a_2}{\\cfrac{a_2(a_3 + 1)}{a_3 + 1} + \\cfrac{a_3 + 1}{a_3 + 1} - \\cfrac{a_3}{a_3 + 1}}}} = \\cfrac{a_0}{1 - \\cfrac{a_1}{1 + a_1 - \\cfrac{a_2}{1 + a_2 - \\cfrac{a_3}{1 + a_3}}}}\n\\end{align}"
},
{
"math_id": 30,
"text": "\ne^x = 1 + \\sum_{n=1}^\\infty \\frac{x^n}{n!} = 1 + \\sum_{n=1}^\\infty \\left(\\prod_{i=1}^n \\frac{x}{i}\\right)\\,\n"
},
{
"math_id": 31,
"text": "\ne^x = \\cfrac{1}{1 - \\cfrac{x}{1 + x - \\cfrac{\\frac{1}{2}x}{1 + \\frac{1}{2}x - \\cfrac{\\frac{1}{3}x}\n{1 + \\frac{1}{3}x - \\cfrac{\\frac{1}{4}x}{1 + \\frac{1}{4}x - \\ddots}}}}}.\\,\n"
},
{
"math_id": 32,
"text": "\ne^x = \\cfrac{1}{1 - \\cfrac{x}{1 + x - \\cfrac{x}{2 + x - \\cfrac{2x}{3 + x - \\cfrac{3x}{4 + x - \\ddots}}}}}\\,\n"
},
{
"math_id": 33,
"text": "\n\\log(1+x) = x - \\frac{x^2}{2} + \\frac{x^3}{3} - \\frac{x^4}{4} + \\cdots =\n\\sum_{n=1}^\\infty \\frac{(-1)^{n+1}z^{n}}{n}.\\,\n"
},
{
"math_id": 34,
"text": "\n\\log (1+x) = x + (x)\\left(\\frac{-x}{2}\\right) + (x)\\left(\\frac{-x}{2}\\right)\\left(\\frac{-2x}{3}\\right) + (x)\\left(\\frac{-x}{2}\\right)\\left(\\frac{-2x}{3}\\right)\\left(\\frac{-3x}{4}\\right) + \\cdots\n"
},
{
"math_id": 35,
"text": "\n\\log (1+x) = \\cfrac{x}{1 - \\cfrac{\\frac{-x}{2}}{1+\\frac{-x}{2}-\\cfrac{\\frac{-2x}{3}}{1+\\frac{-2x}{3}-\\cfrac{\\frac{-3x}{4}}{1+\\frac{-3x}{4}-\\ddots}}}}\n"
},
{
"math_id": 36,
"text": "\n\\log (1+x) = \\cfrac{x}{1+\\cfrac{x}{2-x+\\cfrac{2^2x}{3-2x+\\cfrac{3^2x}{4-3x+\\ddots}}}}\n"
},
{
"math_id": 37,
"text": " \\begin{align} \n\\sin x = \\sum^{\\infty}_{n=0} \\frac{(-1)^n}{(2n+1)!} x^{2n+1} & = x - \\frac{x^3}{3!} + \\frac{x^5}{5!} - \\frac{x^7}{7!} + \\frac{x^9}{9!} - \\cdots \\\\[8pt]\n& = x + (x)\\left(\\frac{-x^2}{2 \\cdot 3}\\right) + (x)\\left(\\frac{-x^2}{2 \\cdot 3}\\right)\\left(\\frac{-x^2}{4 \\cdot 5}\\right) + (x)\\left(\\frac{-x^2}{2 \\cdot 3}\\right)\\left(\\frac{-x^2}{4 \\cdot 5}\\right)\\left(\\frac{-x^2}{6 \\cdot 7}\\right) + \\cdots \n\\end{align}"
},
{
"math_id": 38,
"text": "\\cfrac{x}{1 - \\cfrac{\\frac{-x^2}{2 \\cdot 3}}{1 + \\frac{-x^2}{2 \\cdot 3} - \\cfrac{\\frac{-x^2}{4 \\cdot 5}}{1 + \\frac{-x^2}{4 \\cdot 5} - \\cfrac{\\frac{-x^2}{6 \\cdot 7}}{1 + \\frac{-x^2}{6 \\cdot 7} - \\ddots}}}}"
},
{
"math_id": 39,
"text": " \\sin x = \\cfrac{x}{1 + \\cfrac{x^2}{2 \\cdot 3 - x^2 + \\cfrac{2 \\cdot 3x^2}{4 \\cdot 5 - x^2 + \\cfrac{4 \\cdot 5x^2}{6 \\cdot 7 - x^2 + \\ddots}}}}."
},
{
"math_id": 40,
"text": " \\begin{align}\n\\cos x = \\sum^{\\infty}_{n=0} \\frac{(-1)^n}{(2n)!} x^{2n} & = 1 - \\frac{x^2}{2!} + \\frac{x^4}{4!} - \\frac{x^6}{6!} + \\frac{x^8}{8!} - \\cdots \\\\[8pt]\n& = 1 + \\frac{-x^2}{2} + \\left(\\frac{-x^2}{2}\\right)\\left(\\frac{-x^2}{ 3 \\cdot 4}\\right) + \\left(\\frac{-x^2}{2}\\right)\\left(\\frac{-x^2}{ 3 \\cdot 4}\\right)\\left(\\frac{-x^2}{ 5 \\cdot 6}\\right) + \\cdots \\\\[8pt] & = \\cfrac{1}{1 - \\cfrac{\\frac{-x^2}{2}}{1 + \\frac{-x^2}{2} - \\cfrac{\\frac{-x^2}{3 \\cdot 4}}{1 + \\frac{-x^2}{3 \\cdot 4} - \\cfrac{\\frac{-x^2}{5 \\cdot 6}}{1 + \\frac{-x^2}{5 \\cdot 6} - \\ddots}}}}\n\\end{align}"
},
{
"math_id": 41,
"text": " \\therefore \\cos x = \\cfrac{1}{1 + \\cfrac{x^2}{2 - x^2 + \\cfrac{2x^2}{3 \\cdot 4 - x^2 + \\cfrac{3 \\cdot 4x^2}{5 \\cdot 6 - x^2 + \\ddots}}}}."
},
{
"math_id": 42,
"text": "\n\\begin{align}\n\\sin^{-1} x = \\sum_{n=0}^\\infty \\frac{(2n-1)!!}{(2n)!!} \\cdot \\frac{x^{2n+1}}{2n+1} & = x + \\left( \\frac{1}{2} \\right) \\frac{x^3}{3} + \\left( \\frac{1 \\cdot 3}{2 \\cdot 4} \\right) \\frac{x^5}{5} + \\left( \\frac{1 \\cdot 3 \\cdot 5}{2 \\cdot 4 \\cdot 6} \\right) \\frac{x^7}{7} + \\cdots \\\\[8pt]\n& = x + x \\left(\\frac{x^2}{2 \\cdot 3}\\right) + x \\left(\\frac{x^2}{2 \\cdot 3}\\right)\\left(\\frac{(3x)^2}{4 \\cdot 5}\\right) + x \\left(\\frac{x^2}{2 \\cdot 3}\\right)\\left(\\frac{(3x)^2}{4 \\cdot 5}\\right)\\left(\\frac{(5x)^2}{6 \\cdot 7}\\right) + \\cdots \\\\[8pt]\n& = \\cfrac{x}{1 - \\cfrac{\\frac{x^2}{2 \\cdot 3}}{1 + \\frac{x^2}{2 \\cdot 3} - \\cfrac{\\frac{(3x)^2}{4 \\cdot 5}}{1 + \\frac{(3x)^2}{4 \\cdot 5} - \\cfrac{\\frac{(5x)^2}{6 \\cdot 7}}{ 1 + \\frac{(5x)^2}{6 \\cdot 7} - \\ddots}}}}\n\\end{align}\n"
},
{
"math_id": 43,
"text": " \\sin^{-1} x = \\cfrac{x}{1 - \\cfrac{x^2}{2 \\cdot 3 + x^2 - \\cfrac{2 \\cdot 3 (3x)^2}{4 \\cdot 5 +(3x)^2 - \\cfrac{4 \\cdot 5 (5x^2)}{6 \\cdot 7 + (5x^2) - \\ddots}}}}."
},
{
"math_id": 44,
"text": "\n\\begin{align}\n\\tan^{-1} x = \\sum_{n=0}^\\infty (-1)^n \\frac{x^{2n + 1}}{2n + 1} & = x - \\frac{x^3}{3} + \\frac{x^5}{5} - \\frac{x^7}{7} + \\cdots \\\\[8pt]\n& = x + x \\left(\\frac{-x^2}{3}\\right) + x \\left(\\frac{-x^2}{3}\\right)\\left(\\frac{-3x^2}{5}\\right) + x \\left(\\frac{-x^2}{3}\\right)\\left(\\frac{-3x^2}{5}\\right)\\left(\\frac{-5x^2}{7}\\right) + \\cdots \\\\[8pt]\n& = \\cfrac{x}{1 - \\cfrac{\\frac{-x^2}{3}}{1 + \\frac{-x^2}{3} - \\cfrac{\\frac{-3x^2}{5}}{1 + \\frac{-3x^2}{5} - \\cfrac{\\frac{-5x^2}{7}}{1 + \\frac{-5x^2}{7} - \\ddots}}}} \\\\[8pt]\n& = \\cfrac{x}{1 + \\cfrac{x^2}{3 - x^2 + \\cfrac{(3x)^2}{5 - 3x^2 + \\cfrac{(5x)^2}{7 - 5x^2 + \\ddots}}}}.\n\\end{align}\n"
},
{
"math_id": 45,
"text": "\n\\tan^{-1} (1) = \\frac\\pi4 ,\n"
},
{
"math_id": 46,
"text": "\n\\pi = \\cfrac{4}{1 + \\cfrac{1^2}{2 + \\cfrac{3^2}{2 + \\cfrac{5^2}{2 + \\cfrac{7^2}{2 + \\ddots}}}}}.\\,\n"
},
{
"math_id": 47,
"text": " \\sin ix = i \\sinh x "
},
{
"math_id": 48,
"text": " \\cos ix = \\cosh x ,"
},
{
"math_id": 49,
"text": " i^2 = -1,"
},
{
"math_id": 50,
"text": " \\sinh x = \\cfrac{x}{1 - \\cfrac{x^2}{2 \\cdot 3 + x^2 - \\cfrac{2 \\cdot 3x^2}{4 \\cdot 5 + x^2 - \\cfrac{4 \\cdot 5x^2}{6 \\cdot 7 + x^2 - \\ddots}}}}"
},
{
"math_id": 51,
"text": " \\cosh x = \\cfrac{1}{1 - \\cfrac{x^2}{2 + x^2 - \\cfrac{2x^2}{3 \\cdot 4 + x^2 - \\cfrac{3 \\cdot 4x^2}{5 \\cdot 6 + x^2 - \\ddots}}}}."
},
{
"math_id": 52,
"text": " \\sin^{-1} ix = i \\sinh^{-1} x "
},
{
"math_id": 53,
"text": " \\tan^{-1} ix = i \\tanh^{-1} x ,"
},
{
"math_id": 54,
"text": " \\sinh^{-1} x = \\cfrac{x}{1 + \\cfrac{x^2}{2 \\cdot 3 - x^2 + \\cfrac{2 \\cdot 3 (3x)^2}{4 \\cdot 5 - (3x)^2 + \\cfrac{4 \\cdot 5 (5x^2)}{6 \\cdot 7 - (5x^2) + \\ddots}}}}"
},
{
"math_id": 55,
"text": " \\tanh^{-1} x = \\cfrac{x}{1 - \\cfrac{x^2}{3 + x^2 - \\cfrac{(3x)^2}{5 + 3x^2 - \\cfrac{(5x)^2}{7 + 5x^2 - \\ddots}}}}."
}
] | https://en.wikipedia.org/wiki?curid=8861079 |
886330 | Strategyproofness | In mechanism design, a strategyproof (SP) mechanism is a game form in which each player has a weakly-dominant strategy, so that no player can gain by "spying" over the other players to know what they are going to play. When the players have private information (e.g. their type or their value to some item), and the strategy space of each player consists of the possible information values (e.g. possible types or values), a truthful mechanism is a game in which revealing the true information is a weakly-dominant strategy for each player. An SP mechanism is also called dominant-strategy-incentive-compatible (DSIC), to distinguish it from other kinds of incentive compatibility.
An SP mechanism is immune to manipulations by individual players (but not by coalitions). In contrast, in a group strategyproof mechanism, no group of people can collude to misreport their preferences in a way that makes every member better off. In a strong group strategyproof mechanism, no group of people can collude to misreport their preferences in a way that makes at least one member of the group better off without making any of the remaining members worse off.
Examples.
Typical examples of SP mechanisms are majority voting between two alternatives, second-price (Vickrey) auctions, and the Vickrey–Clarke–Groves (VCG) mechanism.
Typical examples of mechanisms that are "not" SP are plurality voting between three or more alternatives and first-price auctions.
SP in network routing.
SP is also applicable in network routing. Consider a network as a graph where each edge (i.e. link) has an associated cost of transmission, privately known to the owner of the link. The owner of a link wishes to be compensated for relaying messages. As the sender of a message on the network, one wants to find the least cost path. There are efficient methods for doing so, even in large networks. However, there is one problem: the costs for each link are unknown. A naive approach would be to ask the owner of each link the cost, use these declared costs to find the least cost path, and pay all links on the path their declared costs. However, it can be shown that this payment scheme is not SP, that is, the owners of some links can benefit by lying about the cost. We may end up paying far more than the actual cost. It can be shown that given certain assumptions about the network and the players (owners of links), a variant of the VCG mechanism is SP.
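As an illustration only (the graph, the costs, and the helper names below are invented for this example, and a brute-force path search stands in for a real shortest-path routine), the following Python sketch computes VCG-style payments d(G − e) − (d(G) − c_e) for the edges on the chosen path; each such payment is never less than the declared cost.

```python
# Sketch: VCG-style payments for least-cost routing on a tiny directed graph.
# Each edge e on the chosen s-t path is paid  d(G - e) - (d(G) - c_e).
import itertools, math

edges = {('s', 'a'): 3, ('a', 't'): 4, ('s', 'b'): 5, ('b', 't'): 4, ('s', 't'): 12}

def shortest_path(edge_costs, src='s', dst='t'):
    """Brute force over simple paths (fine for a toy graph)."""
    nodes = {u for e in edge_costs for u in e}
    best = (math.inf, [])
    for r in range(len(nodes)):
        for mid in itertools.permutations(nodes - {src, dst}, r):
            seq = (src,) + mid + (dst,)
            path = list(zip(seq, seq[1:]))
            if all(e in edge_costs for e in path):
                cost = sum(edge_costs[e] for e in path)
                if cost < best[0]:
                    best = (cost, path)
    return best

d_full, chosen = shortest_path(edges)
for e in chosen:
    d_without, _ = shortest_path({k: v for k, v in edges.items() if k != e})
    print(e, 'declared', edges[e], 'paid', d_without - (d_full - edges[e]))
```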
Formal definitions.
There is a set formula_0 of possible outcomes.
There are formula_1 agents which have different valuations for each outcome. The valuation of agent formula_2 is represented as a function:
formula_3
which expresses the value it has for each alternative, in monetary terms.
It is assumed that the agents have Quasilinear utility functions; this means that, if the outcome is formula_4 and in addition the agent receives a payment formula_5 (positive or negative), then the total utility of agent formula_2 is:
formula_6
The vector of all value-functions is denoted by formula_7.
For every agent formula_2, the vector of all value-functions of the "other" agents is denoted by formula_8. So formula_9.
A "mechanism" is a pair of functions:
A mechanism is called strategyproof if, for every player formula_2 and for every value-vector of the other players formula_8:
formula_14
Characterization.
It is helpful to have simple conditions for checking whether a given mechanism is SP or not. This subsection shows two simple conditions that are both necessary and sufficient.
If a mechanism with monetary transfers is SP, then it must satisfy the following two conditions, for every agent formula_2:
1. The payment to agent formula_2 is a function of the chosen outcome and of the valuations of the other agents formula_8 - but "not" a direct function of the agent's own valuation formula_15. Formally, there exists a price function formula_16, that takes as input an outcome formula_11 and a valuation vector for the other agents formula_8, and returns the payment for agent formula_2, such that for every formula_17, if:
formula_18
then:
formula_19
PROOF: If formula_20 then an agent with valuation formula_21 prefers to report formula_15, since it gives him the same outcome and a larger payment; similarly, if formula_22 then an agent with valuation formula_15 prefers to report formula_21.
As a corollary, there exists a "price-tag" function, formula_16, that takes as input an outcome formula_11 and a valuation vector for the other agents formula_8, and returns the payment for agent formula_2. For every formula_23, if:
formula_24
then:
formula_25
2. The selected outcome is optimal for agent formula_2, given the other agents' valuations. Formally:
formula_26
where the maximization is over all outcomes in the range of formula_27.
PROOF: If there is another outcome formula_28 such that formula_29, then an agent with valuation formula_15 prefers to report formula_21, since it gives him a larger total utility.
Conditions 1 and 2 are not only necessary but also sufficient: any mechanism that satisfies conditions 1 and 2 is SP.
PROOF: Fix an agent formula_2 and valuations formula_17. Denote:
formula_30 - the outcome when the agent acts truthfully.
formula_31 - the outcome when the agent acts untruthfully.
By property 1, the utility of the agent when playing truthfully is:
formula_32
and the utility of the agent when playing untruthfully is:
formula_33
By property 2:
formula_34
so it is a dominant strategy for the agent to act truthfully.
Outcome-function characterization.
The actual goal of a mechanism is its formula_10 function; the payment function is just a tool to induce the players to be truthful. Hence, it is useful to know, given a certain outcome function, whether it can be implemented using a SP mechanism or not (this property is also called implementability).
The monotonicity property is necessary for strategyproofness.
Truthful mechanisms in single-parameter domains.
A "single-parameter domain" is a game in which each player formula_2 gets a certain positive value formula_35 for "winning" and a value 0 for "losing". A simple example is a single-item auction, in which formula_35 is the value that player formula_2 assigns to the item.
For this setting, it is easy to characterize truthful mechanisms. Begin with some definitions.
A mechanism is called "normalized" if every losing bid pays 0.
A mechanism is called "monotone" if, when a player raises his bid, his chances of winning (weakly) increase.
For a monotone mechanism, for every player "i" and every combination of bids of the other players, there is a "critical value" in which the player switches from losing to winning.
A normalized mechanism on a single-parameter domain is truthful if the following two conditions hold: the mechanism is monotone, and every winning agent pays exactly his critical value.
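A second-price (Vickrey) auction for a single item is the standard example: it is normalized, monotone, and the winner pays the critical value, namely the highest competing bid. The Python sketch below (an added illustration with invented names and values) also brute-forces over possible misreports to show that lying never beats truth-telling in this instance.

```python
# Sketch: a single-item second-price (Vickrey) auction.  It is normalized
# (losers pay 0), monotone, and the winner pays the critical value: the
# highest competing bid.
def vickrey(bids):
    winner = max(bids, key=bids.get)
    critical = max(b for agent, b in bids.items() if agent != winner)
    return winner, critical

def utility(agent, true_value, reported_value, other_bids):
    bids = dict(other_bids, **{agent: reported_value})
    winner, payment = vickrey(bids)
    return true_value - payment if winner == agent else 0.0

others = {'B': 7.0, 'C': 4.0}
truthful = utility('A', 9.0, 9.0, others)
best_misreport = max(utility('A', 9.0, r, others) for r in range(0, 20))
print(truthful, best_misreport)     # misreporting never beats truth-telling
```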
Truthfulness of randomized mechanisms.
There are various ways to extend the notion of truthfulness to randomized mechanisms. They are, from strongest to weakest: universal truthfulness, strong stochastic-dominance (strong-SD) truthfulness, lexicographic (Lex) truthfulness, and weak stochastic-dominance (weak-SD) truthfulness.
Universal implies strong-SD implies Lex implies weak-SD, and all implications are strict.
Truthfulness with high probability.
For every constant formula_36, a randomized mechanism is called truthful with probability formula_37 if for every agent and for every vector of bids, the probability that the agent benefits by bidding non-truthfully is at most formula_38, where the probability is taken over the randomness of the mechanism.
If the constant formula_38 goes to 0 when the number of bidders grows, then the mechanism is called truthful with high probability. This notion is weaker than full truthfulness, but it is still useful in some cases; see e.g. consensus estimate.
False-name-proofness.
A new type of fraud that has become common with the abundance of internet-based auctions is "false-name bids" – bids submitted by a single bidder using multiple identifiers such as multiple e-mail addresses.
False-name-proofness means that there is no incentive for any of the players to issue false-name-bids. This is a stronger notion than strategyproofness. In particular, the Vickrey–Clarke–Groves (VCG) auction is not false-name-proof.
False-name-proofness is importantly different from group strategyproofness because it assumes that an individual alone can simulate certain behaviors that normally require the collusive coordination of multiple individuals.
References.
| [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "v_i : X \\longrightarrow R_+"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "p_i"
},
{
"math_id": 6,
"text": "u_i := v_i(x) + p_i"
},
{
"math_id": 7,
"text": "v"
},
{
"math_id": 8,
"text": "v_{-i}"
},
{
"math_id": 9,
"text": "v \\equiv (v_i,v_{-i})"
},
{
"math_id": 10,
"text": "Outcome"
},
{
"math_id": 11,
"text": "x\\in X"
},
{
"math_id": 12,
"text": "Payment"
},
{
"math_id": 13,
"text": "(p_1,\\dots,p_n)"
},
{
"math_id": 14,
"text": "v_i(Outcome(v_i,v_{-i})) + Payment_i(v_i,v_{-i}) \\geq v_i(Outcome(v_i',v_{-i})) + Payment_i(v_i',v_{-i})"
},
{
"math_id": 15,
"text": "v_i"
},
{
"math_id": 16,
"text": "Price_i"
},
{
"math_id": 17,
"text": "v_i,v_i',v_{-i}"
},
{
"math_id": 18,
"text": "Outcome(v_i,v_{-i}) = Outcome(v_i',v_{-i})"
},
{
"math_id": 19,
"text": "Payment_i(v_i,v_{-i}) = Payment_i(v_i',v_{-i})"
},
{
"math_id": 20,
"text": "Payment_i(v_i,v_{-i}) > Payment_i(v_i',v_{-i})"
},
{
"math_id": 21,
"text": "v_i'"
},
{
"math_id": 22,
"text": "Payment_i(v_i,v_{-i}) < Payment_i(v_i',v_{-i})"
},
{
"math_id": 23,
"text": "v_i,v_{-i}"
},
{
"math_id": 24,
"text": "Outcome(v_i,v_{-i}) = x"
},
{
"math_id": 25,
"text": "Payment_i(v_i,v_{-i}) = Price_i(x,v_{-i})"
},
{
"math_id": 26,
"text": "Outcome(v_i, v_{-i}) \\in \\arg\\max_{x} [v_i(x) + Price_i(x,v_{-i})]"
},
{
"math_id": 27,
"text": "Outcome(\\cdot,v_{-i})"
},
{
"math_id": 28,
"text": "x' = Outcome(v_i',v_{-i})"
},
{
"math_id": 29,
"text": "v_i(x') + Price_i(x',v_{-i}) > v_i(x) + Price_i(x,v_{-i})"
},
{
"math_id": 30,
"text": "x := Outcome(v_i, v_{-i})"
},
{
"math_id": 31,
"text": "x' := Outcome(v_i', v_{-i})"
},
{
"math_id": 32,
"text": "u_i(v_i) = v_i(x) + Price_i(x, v_{-i})"
},
{
"math_id": 33,
"text": "u_i(v_i') = v_i(x') + Price_i(x', v_{-i})"
},
{
"math_id": 34,
"text": "u_i(v_i) \\geq u_i(v_i')"
},
{
"math_id": 35,
"text": "v_{i}"
},
{
"math_id": 36,
"text": "\\epsilon>0"
},
{
"math_id": 37,
"text": "1-\\epsilon"
},
{
"math_id": 38,
"text": "\\epsilon"
}
] | https://en.wikipedia.org/wiki?curid=886330 |
8864 | Delaunay triangulation | Triangulation method
In computational geometry, a Delaunay triangulation or Delone triangulation of a set of points in the plane subdivides their convex hull into triangles whose circumcircles do not contain any of the points. This maximizes the size of the smallest angle in any of the triangles, and tends to avoid sliver triangles.
The triangulation is named after Boris Delaunay for his work on it from 1934.
If the points all lie on a straight line, the notion of triangulation becomes degenerate and there is no Delaunay triangulation. For four or more points on the same circle (e.g., the vertices of a rectangle) the Delaunay triangulation is not unique: each of the two possible triangulations that split the quadrangle into two triangles satisfies the "Delaunay condition", i.e., the requirement that the circumcircles of all triangles have empty interiors.
By considering circumscribed spheres, the notion of Delaunay triangulation extends to three and higher dimensions. Generalizations are possible to metrics other than Euclidean distance. However, in these cases a Delaunay triangulation is not guaranteed to exist or be unique.
Relationship with the Voronoi diagram.
The Delaunay triangulation of a discrete point set P in general position corresponds to the dual graph of the Voronoi diagram for P.
The circumcenters of Delaunay triangles are the vertices of the Voronoi diagram.
In the 2D case, the Voronoi vertices are connected via edges, that can be derived from adjacency-relationships of the Delaunay triangles: If two triangles share an edge in the Delaunay triangulation, their circumcenters are to be connected with an edge in the Voronoi tesselation.
Special cases where this relationship does not hold, or is ambiguous, include configurations with three or more collinear points, where the circumcircles degenerate to infinite radius, and configurations with four or more points on a common circle, where the triangulation is ambiguous and the corresponding circumcenters coincide.
"d"-dimensional Delaunay.
For a set P of points in the (d-dimensional) Euclidean space, a Delaunay triangulation is a triangulation DT(P) such that no point in P is inside the circum-hypersphere of any d-simplex in DT(P). It is known that there exists a unique Delaunay triangulation for P if P is a set of points in "general position"; that is, the affine hull of P is d-dimensional and no set of "d" + 2 points in P lie on the boundary of a ball whose interior does not intersect P.
The problem of finding the Delaunay triangulation of a set of points in d-dimensional Euclidean space can be converted to the problem of finding the convex hull of a set of points in ("d" + 1)-dimensional space. This may be done by giving each point p an extra coordinate equal to |"p"|2, thus turning it into a hyper-paraboloid (this is termed "lifting"); taking the bottom side of the convex hull (as the top end-cap faces upwards away from the origin, and must be discarded); and mapping back to d-dimensional space by deleting the last coordinate. As the convex hull is unique, so is the triangulation, assuming all facets of the convex hull are simplices. Nonsimplicial facets only occur when "d" + 2 of the original points lie on the same d-hypersphere, i.e., the points are not in general position.
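The lifting construction can be checked numerically in two dimensions; the sketch below uses SciPy (an external-library choice made for this illustration, not prescribed by the article) and should report agreement for points in general position.

```python
# Sketch: 2D Delaunay via the lifting map, using SciPy.  Points are lifted
# onto the paraboloid z = x^2 + y^2; lower facets of the 3D convex hull
# (outward normal with negative z component) project to the Delaunay triangles.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)
pts = rng.random((20, 2))

lifted = np.c_[pts, (pts ** 2).sum(axis=1)]
hull = ConvexHull(lifted)
lower = hull.simplices[hull.equations[:, 2] < 0]

tri = Delaunay(pts)
as_set = lambda s: {tuple(row) for row in np.sort(s, axis=1)}
print(as_set(lower) == as_set(tri.simplices))   # expected True in general position
```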
Properties.
Let n be the number of points and d the number of dimensions.
Visual Delaunay definition: Flipping.
From the above properties an important feature arises: Looking at two triangles △"ABD", △"BCD" with the common edge "BD" (see figures), if the sum of the angles α + γ ≤ 180°, the triangles meet the Delaunay condition.
This is an important property because it allows the use of a "flipping" technique. If two triangles do not meet the Delaunay condition, switching the common edge "BD" for the common edge "AC" produces two triangles that do meet the Delaunay condition:
This operation is called a "flip", and can be generalised to three and higher dimensions.
Algorithms.
Many algorithms for computing Delaunay triangulations rely on fast operations for detecting when a point is within a triangle's circumcircle and an efficient data structure for storing triangles and edges. In two dimensions, one way to detect if point D lies in the circumcircle of A, B, C is to evaluate the determinant:
formula_0
When A, B, C are sorted in a counterclockwise order, this determinant is positive only if D lies inside the circumcircle.
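A direct translation of this predicate into Python (an added sketch; the sample points are arbitrary) is:

```python
# Sketch: the in-circumcircle test via the 3x3 determinant above,
# assuming A, B, C are given in counterclockwise order.
def in_circumcircle(A, B, C, D):
    ax, ay = A[0] - D[0], A[1] - D[1]
    bx, by = B[0] - D[0], B[1] - D[1]
    cx, cy = C[0] - D[0], C[1] - D[1]
    det = ((ax * ax + ay * ay) * (bx * cy - by * cx)
           - (bx * bx + by * by) * (ax * cy - ay * cx)
           + (cx * cx + cy * cy) * (ax * by - ay * bx))
    return det > 0

print(in_circumcircle((0, 0), (2, 0), (0, 2), (1, 1)))   # True: inside
print(in_circumcircle((0, 0), (2, 0), (0, 2), (3, 3)))   # False: outside
```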
Flip algorithms.
As mentioned above, if a triangle is non-Delaunay, we can flip one of its edges. This leads to a straightforward algorithm: construct any triangulation of the points, and then flip edges until no triangle is non-Delaunay. Unfortunately, this can take Ω("n"2) edge flips. While this algorithm can be generalised to three and higher dimensions, its convergence is not guaranteed in these cases, as it is conditioned to the connectedness of the underlying flip graph: this graph is connected for two-dimensional sets of points, but may be disconnected in higher dimensions.
Incremental.
The most straightforward way of efficiently computing the Delaunay triangulation is to repeatedly add one vertex at a time, retriangulating the affected parts of the graph. When a vertex v is added, we split in three the triangle that contains v, then we apply the flip algorithm. Done naïvely, this will take O("n") time: we search through all the triangles to find the one that contains v, then we potentially flip away every triangle. Then the overall runtime is O("n"2).
If we insert vertices in random order, it turns out (by a somewhat intricate proof) that each insertion will flip, on average, only O(1) triangles – although sometimes it will flip many more. This still leaves the point location time to improve. We can store the history of the splits and flips performed: each triangle stores a pointer to the two or three triangles that replaced it. To find the triangle that contains v, we start at a root triangle, and follow the pointer that points to a triangle that contains v, until we find a triangle that has not yet been replaced. On average, this will also take O(log "n") time. Over all vertices, then, this takes O("n" log "n") time. While the technique extends to higher dimension (as proved by Edelsbrunner and Shah), the runtime can be exponential in the dimension even if the final Delaunay triangulation is small.
The Bowyer–Watson algorithm provides another approach for incremental construction. It gives an alternative to edge flipping for computing the Delaunay triangles containing a newly inserted vertex.
Unfortunately the flipping-based algorithms are generally hard to parallelize, since adding some certain point (e.g. the center point of a wagon wheel) can lead to up to O("n") consecutive flips. Blelloch et al. proposed another version of incremental algorithm based on rip-and-tent, which is practical and highly parallelized with polylogarithmic span.
Divide and conquer.
A divide and conquer algorithm for triangulations in two dimensions was developed by Lee and Schachter and improved by Guibas and Stolfi and later by Dwyer. In this algorithm, one recursively draws a line to split the vertices into two sets. The Delaunay triangulation is computed for each set, and then the two sets are merged along the splitting line. Using some clever tricks, the merge operation can be done in time O("n"), so the total running time is O("n" log "n").
For certain types of point sets, such as a uniform random distribution, by intelligently picking the splitting lines the expected time can be reduced to O("n" log log "n") while still maintaining worst-case performance.
A divide and conquer paradigm to performing a triangulation in d dimensions is presented in "DeWall: A fast divide and conquer Delaunay triangulation algorithm in E"d"" by P. Cignoni, C. Montani, R. Scopigno.
The divide and conquer algorithm has been shown to be the fastest DT generation technique sequentially.
Sweephull.
Sweephull is a hybrid technique for 2D Delaunay triangulation that uses a radially propagating sweep-hull, and a flipping algorithm. The sweep-hull is created sequentially by iterating a radially-sorted set of 2D points, and connecting triangles to the visible part of the convex hull, which gives a non-overlapping triangulation. One can build a convex hull in this manner so long as the order of points guarantees no point would fall within the triangle. But, radially sorting should minimize flipping by being highly Delaunay to start. This is then paired with a final iterative triangle flipping step.
Applications.
The Euclidean minimum spanning tree of a set of points is a subset of the Delaunay triangulation of the same points, and this can be exploited to compute it efficiently.
For modelling terrain or other objects given a point cloud, the Delaunay triangulation gives a nice set of triangles to use as polygons in the model. In particular, the Delaunay triangulation avoids narrow triangles (as they have large circumcircles compared to their area). See triangulated irregular network.
Delaunay triangulations can be used to determine the density or intensity of points samplings by means of the Delaunay tessellation field estimator (DTFE).
Delaunay triangulations are often used to generate meshes for space-discretised solvers such as the finite element method and the finite volume method of physics simulation, because of the angle guarantee and because fast triangulation algorithms have been developed. Typically, the domain to be meshed is specified as a coarse simplicial complex; for the mesh to be numerically stable, it must be refined, for instance by using Ruppert's algorithm.
The increasing popularity of finite element method and boundary element method techniques increases the incentive to improve automatic meshing algorithms. However, all of these algorithms can create distorted and even unusable grid elements. Fortunately, several techniques exist which can take an existing mesh and improve its quality. For example, smoothing (also referred to as mesh refinement) is one such method, which repositions nodes to minimize element distortion. The stretched grid method allows the generation of pseudo-regular meshes that meet the Delaunay criteria easily and quickly in a one-step solution.
Constrained Delaunay triangulation has found applications in path planning in automated driving and topographic surveying.
See also.
<templatestyles src="Div col/styles.css"/>
References.
| [
{
"math_id": 0,
"text": "\n\\begin{align}\n& \\begin{vmatrix}\nA_x & A_y & A_x^2 + A_y^2 & 1\\\\\nB_x & B_y & B_x^2 + B_y^2 & 1\\\\\nC_x & C_y & C_x^2 + C_y^2 & 1\\\\\nD_x & D_y & D_x^2 + D_y^2 & 1\n\\end{vmatrix} \\\\[8pt]\n= {} & \\begin{vmatrix}\nA_x - D_x & A_y - D_y & (A_x - D_x)^2 + (A_y - D_y)^2 \\\\\nB_x - D_x & B_y - D_y & (B_x - D_x)^2 + (B_y - D_y)^2 \\\\\nC_x - D_x & C_y - D_y & (C_x - D_x)^2 + (C_y - D_y)^2\n\\end{vmatrix} > 0\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=8864 |
8864159 | Molecular vibration | Periodic motion of the atoms of a molecule
A molecular vibration is a periodic motion of the atoms of a molecule relative to each other, such that the center of mass of the molecule remains unchanged. The typical vibrational frequencies range from less than 1013 Hz to approximately 1014 Hz, corresponding to wavenumbers of approximately 300 to 3000 cm−1 and wavelengths of approximately 30 to 3 μm.
For a diatomic molecule A−B, the vibrational frequency in s−1 is given by formula_0, where k is the force constant in dyne/cm or erg/cm2 and μ is the reduced mass given by formula_1. The vibrational wavenumber in cm−1 is formula_2 where c is the speed of light in cm/s.
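As a worked example (added here; the force constant is an assumed, textbook-level value of roughly 1.9 × 106 dyn/cm for carbon monoxide), the formula gives a harmonic wavenumber somewhat above the observed CO fundamental near 2143 cm−1:

```python
# Sketch: the diatomic formula in CGS units, evaluated for 12C16O with an
# assumed force constant of about 1.9e6 dyn/cm.
import math

C_CM_PER_S = 2.99792458e10       # speed of light, cm/s
AMU_IN_G   = 1.66053907e-24      # atomic mass unit, g

def wavenumber_cm1(k_dyn_per_cm, m_a_amu, m_b_amu):
    mu = (m_a_amu * m_b_amu) / (m_a_amu + m_b_amu) * AMU_IN_G   # reduced mass, g
    return math.sqrt(k_dyn_per_cm / mu) / (2.0 * math.pi * C_CM_PER_S)

# Prints roughly 2.2e3 cm^-1; the observed CO fundamental lies near 2143 cm^-1.
print(wavenumber_cm1(1.9e6, 12.000, 15.995))
```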
Vibrations of polyatomic molecules are described in terms of normal modes, which are independent of each other, but each normal mode involves simultaneous vibrations of different parts of the molecule. In general, a non-linear molecule with "N" atoms has 3"N" – 6 normal modes of vibration, but a "linear" molecule has 3"N" – 5 modes, because rotation about the molecular axis cannot be observed. A diatomic molecule has one normal mode of vibration, since it can only stretch or compress the single bond.
A molecular vibration is excited when the molecule absorbs energy, "ΔE", corresponding to the vibration's frequency, "ν", according to the relation Δ"E" = "hν", where "h" is Planck's constant. A fundamental vibration is evoked when one such quantum of energy is absorbed by the molecule in its ground state. When multiple quanta are absorbed, the first and possibly higher overtones are excited.
To a first approximation, the motion in a normal vibration can be described as a kind of simple harmonic motion. In this approximation, the vibrational energy is a quadratic function (parabola) with respect to the atomic displacements and the first overtone has twice the frequency of the fundamental. In reality, vibrations are anharmonic and the first overtone has a frequency that is slightly lower than twice that of the fundamental. Excitation of the higher overtones involves progressively less and less additional energy and eventually leads to dissociation of the molecule, because the potential energy of the molecule is more like a Morse potential or more accurately, a Morse/Long-range potential.
The vibrational states of a molecule can be probed in a variety of ways. The most direct way is through infrared spectroscopy, as vibrational transitions typically require an amount of energy that corresponds to the infrared region of the spectrum. Raman spectroscopy, which typically uses visible light, can also be used to measure vibration frequencies directly. The two techniques are complementary and comparison between the two can provide useful structural information such as in the case of the rule of mutual exclusion for centrosymmetric molecules.
Vibrational excitation can occur in conjunction with electronic excitation in the ultraviolet-visible region. The combined excitation is known as a vibronic transition, giving vibrational fine structure to electronic transitions, particularly for molecules in the gas state.
Simultaneous excitation of a vibration and rotations gives rise to vibration–rotation spectra.
Number of vibrational modes.
For a molecule with N atoms, the positions of all N nuclei depend on a total of 3N coordinates, so that the molecule has 3N degrees of freedom including translation, rotation and vibration. Translation corresponds to movement of the center of mass whose position can be described by 3 cartesian coordinates.
A nonlinear molecule can rotate about any of three mutually perpendicular axes and therefore has 3 rotational degrees of freedom. For a linear molecule, rotation about the molecular axis does not involve movement of any atomic nucleus, so there are only 2 rotational degrees of freedom which can vary the atomic coordinates.
An equivalent argument is that the rotation of a linear molecule changes the direction of the molecular axis in space, which can be described by 2 coordinates corresponding to latitude and longitude. For a nonlinear molecule, the direction of one axis is described by these two coordinates, and the orientation of the molecule about this axis provides a third rotational coordinate.
The number of vibrational modes is therefore 3N minus the number of translational and rotational degrees of freedom, or 3N–5 for linear and 3N–6 for nonlinear molecules.
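A one-line count of this rule (an added sketch) reproduces, for example, the 12 internal coordinates of ethylene mentioned below:

```python
# Sketch: the 3N-5 / 3N-6 counting rule for vibrational modes.
def vibrational_modes(n_atoms, linear):
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_modes(2, linear=True))     # any diatomic: 1
print(vibrational_modes(3, linear=True))     # CO2: 4
print(vibrational_modes(3, linear=False))    # H2O: 3
print(vibrational_modes(6, linear=False))    # ethylene: 12 (cf. its 12 internal coordinates)
```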
Vibrational coordinates.
The coordinate of a normal vibration is a combination of "changes" in the positions of atoms in the molecule. When the vibration is excited the coordinate changes sinusoidally with a frequency ν, the frequency of the vibration.
Internal coordinates.
"Internal coordinates" are of the following types, illustrated with reference to the planar molecule ethylene,
In a rocking, wagging or twisting coordinate the bond lengths within the groups involved do not change. The angles do. Rocking is distinguished from wagging by the fact that the atoms in the group stay in the same plane.
In ethylene there are 12 internal coordinates: 4 C–H stretching, 1 C–C stretching, 2 H–C–H bending, 2 CH2 rocking, 2 CH2 wagging, 1 twisting. Note that the H–C–C angles cannot be used as internal coordinates in addition to the H–C–H angles, because the angles at each carbon atom cannot all increase at the same time.
Note that these coordinates do not correspond to normal modes (see #Normal coordinates). In other words, they do not correspond to particular frequencies or vibrational transitions.
Vibrations of a methylene group (−CH2−) in a molecule for illustration.
Within the CH2 group, commonly found in organic compounds, the two low mass hydrogens can vibrate in six different ways which can be grouped as 3 pairs of modes: 1. symmetric and asymmetric stretching, 2. scissoring and rocking, 3. wagging and twisting. These are shown here:
(These figures do not represent the "recoil" of the C atoms, which, though necessarily present to balance the overall movements of the molecule, are much smaller than the movements of the lighter H atoms).
Symmetry-adapted coordinates.
Symmetry–adapted coordinates may be created by applying a projection operator to a set of internal coordinates. The projection operator is constructed with the aid of the character table of the molecular point group. For example, the four (un–normalized) C–H stretching coordinates of the molecule ethene are given by
formula_3
where formula_4 are the internal coordinates for stretching of each of the four C–H bonds.
Illustrations of symmetry–adapted coordinates for most small molecules can be found in Nakamoto.
Normal coordinates.
The normal coordinates, denoted as "Q", refer to the positions of atoms away from their equilibrium positions, with respect to a normal mode of vibration. Each normal mode is assigned a single normal coordinate, and so the normal coordinate refers to the "progress" along that normal mode at any given time. Formally, normal modes are determined by solving a secular determinant, and then the normal coordinates (over the normal modes) can be expressed as a summation over the cartesian coordinates (over the atom positions). The normal modes diagonalize the matrix governing the molecular vibrations, so that each normal mode is an independent molecular vibration. If the molecule possesses symmetries, the normal modes "transform as" an irreducible representation under its point group. The normal modes are determined by applying group theory, and projecting the irreducible representation onto the cartesian coordinates. For example, when this treatment is applied to CO2, it is found that the C=O stretches are not independent, but rather there is an O=C=O symmetric stretch and an O=C=O asymmetric stretch: the symmetric stretch is the in-phase combination (sum) of the two C=O stretching coordinates, and the asymmetric stretch is the out-of-phase combination (difference).
When two or more normal coordinates belong to the same irreducible representation of the molecular point group (colloquially, have the same symmetry) there is "mixing" and the coefficients of the combination cannot be determined "a priori". For example, in the linear molecule hydrogen cyanide, HCN, the two stretching vibrations are combinations of C–H stretching and C–N stretching.
The coefficients a and b are found by performing a full normal coordinate analysis by means of the Wilson GF method.
Newtonian mechanics.
Perhaps surprisingly, molecular vibrations can be treated using Newtonian mechanics to calculate the correct vibration frequencies. The basic assumption is that each vibration can be treated as though it corresponds to a spring. In the harmonic approximation the spring obeys Hooke's law: the force required to extend the spring is proportional to the extension. The proportionality constant is known as a "force constant, k". The anharmonic oscillator is considered elsewhere.
formula_5
By Newton's second law of motion this force is also equal to a reduced mass, "μ", times acceleration.
formula_6
Since this is one and the same force the ordinary differential equation follows.
formula_7
The solution to this equation of simple harmonic motion is
formula_8
"A" is the maximum amplitude of the vibration coordinate "Q". It remains to define the reduced mass, "μ". In general, the reduced mass of a diatomic molecule, AB, is expressed in terms of the atomic masses, "mA" and "mB", as
formula_9
The use of the reduced mass ensures that the centre of mass of the molecule is not affected by the vibration. In the harmonic approximation the potential energy of the molecule is a quadratic function of the normal coordinate. It follows that the force-constant is equal to the second derivative of the potential energy.
formula_10
When two or more normal vibrations have the same symmetry a full normal coordinate analysis must be performed (see GF method). The vibration frequencies, "ν"i, are obtained from the eigenvalues, "λ"i, of the matrix product GF. G is a matrix of numbers derived from the masses of the atoms and the geometry of the molecule. F is a matrix derived from force-constant values. Details concerning the determination of the eigenvalues can be found in the literature.
Quantum mechanics.
In the harmonic approximation the potential energy is a quadratic function of the normal coordinates. Solving the Schrödinger wave equation, the energy states for each normal coordinate are given by
formula_11
where "n" is a quantum number that can take values of 0, 1, 2 ... In molecular spectroscopy where several types of molecular energy are studied and several quantum numbers are used, this "vibrational quantum number" is often designated as "v".
The difference in energy when "n" (or "v") changes by 1 is therefore equal to formula_12, the product of the Planck constant and the vibration frequency derived using classical mechanics. For a transition from level "n" to level "n+1" due to absorption of a photon, the frequency of the photon is equal to the classical vibration frequency formula_13 (in the harmonic oscillator approximation).
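A minimal sketch (added here; the vibration frequency is an assumed value of roughly 6.4 × 1013 Hz, corresponding to about 2100 cm−1) evaluates the level energies and shows that adjacent levels are separated by exactly "hν" in the harmonic approximation:

```python
# Sketch: harmonic-oscillator level energies E_n = h (n + 1/2) nu and their
# constant spacing h*nu (an assumed frequency of 6.4e13 Hz is used).
H_PLANCK = 6.62607015e-34        # J s

def level_energy(n, nu_hz):
    return H_PLANCK * (n + 0.5) * nu_hz

nu = 6.4e13
levels = [level_energy(n, nu) for n in range(4)]
spacings = [b - a for a, b in zip(levels, levels[1:])]
print(spacings)                  # every spacing equals h*nu
print(H_PLANCK * nu)
```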
See quantum harmonic oscillator for graphs of the first 5 wave functions, which allow certain selection rules to be formulated. For example, for a harmonic oscillator transitions are allowed only when the quantum number "n" changes by one,
formula_14
but this does not apply to an anharmonic oscillator; the observation of overtones is only possible because vibrations are anharmonic. Another consequence of anharmonicity is that transitions such as between states "n"=2 and "n"=1 have slightly less energy than transitions between the ground state and first excited state. Such a transition gives rise to a hot band. To describe vibrational levels of an anharmonic oscillator, Dunham expansion is used.
Intensities.
In an infrared spectrum the intensity of an absorption band is proportional to the derivative of the molecular dipole moment with respect to the normal coordinate. Likewise, the intensity of Raman bands depends on the derivative of polarizability with respect to the normal coordinate. There is also a dependence on the fourth-power of the wavelength of the laser used.
References.
| [
{
"math_id": 0,
"text": "\\nu = \\frac{1}{2 \\pi} \\sqrt{k / \\mu} "
},
{
"math_id": 1,
"text": "\\frac{1}{\\mu} = \\frac{1}{m_A}+\\frac{1}{m_B}"
},
{
"math_id": 2,
"text": "\\tilde{\\nu} \\;= \\frac{1}{2 \\pi c} \\sqrt{k / \\mu},"
},
{
"math_id": 3,
"text": "\\begin{align}\nQ_{s1} &= q_{1} + q_{2} + q_{3} + q_{4} \\\\\nQ_{s2} &= q_{1} + q_{2} - q_{3} - q_{4} \\\\\nQ_{s3} &= q_{1} - q_{2} + q_{3} - q_{4} \\\\\nQ_{s4} &= q_{1} - q_{2} - q_{3} + q_{4}\n\\end{align}"
},
{
"math_id": 4,
"text": "q_{1} - q_{4}"
},
{
"math_id": 5,
"text": "\\mathrm{F} = - k Q "
},
{
"math_id": 6,
"text": " \\mathrm{F} = \\mu \\frac{d^2Q}{dt^2}"
},
{
"math_id": 7,
"text": "\\mu \\frac{d^2Q}{dt^2} + k Q = 0"
},
{
"math_id": 8,
"text": "Q(t) = A \\cos (2 \\pi \\nu t) ;\\ \\ \\nu = {1\\over {2 \\pi}} \\sqrt{k \\over \\mu}. "
},
{
"math_id": 9,
"text": "\\frac{1}{\\mu} = \\frac{1}{m_A}+\\frac{1}{m_B}."
},
{
"math_id": 10,
"text": "k=\\frac{\\partial ^2V}{\\partial Q^2}"
},
{
"math_id": 11,
"text": "E_n = h \\left( n + {1 \\over 2 } \\right)\\nu=h\\left( n + {1 \\over 2 } \\right) {1\\over {2 \\pi}} \\sqrt{k \\over m} ,"
},
{
"math_id": 12,
"text": "h\\nu"
},
{
"math_id": 13,
"text": "\\nu"
},
{
"math_id": 14,
"text": "\\Delta n = \\pm 1"
}
] | https://en.wikipedia.org/wiki?curid=8864159 |
8864768 | Amoeba (mathematics) | Set associated with a complex-valued polynomial
In complex analysis, a branch of mathematics, an amoeba is a set associated with a polynomial in one or more complex variables. Amoebas have applications in algebraic geometry, especially tropical geometry.
Definition.
Consider the function
formula_0
defined on the set of all "n"-tuples formula_1 of non-zero complex numbers with values in the Euclidean space formula_2 given by the formula
formula_3
Here, log denotes the natural logarithm. If "p"("z") is a polynomial in formula_4 complex variables, its amoeba formula_5 is defined as the image of the set of zeros of "p" under Log, so
formula_6
Amoebas were introduced in 1994 in a book by Gelfand, Kapranov, and Zelevinsky.
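The Log map lends itself to a direct numerical sketch. The following code (an illustration, not taken from the source) approximates the amoeba of the example polynomial p(z, w) = z + w + 1 by sampling z on a logarithmic grid, solving p = 0 for w, and applying Log to each zero:

```python
import numpy as np

# Numerical sketch of the amoeba of p(z, w) = z + w + 1: sample z over a
# log-polar grid, solve p = 0 for w, and map each zero by (log|z|, log|w|).
points = []
for r in np.exp(np.linspace(-4.0, 4.0, 300)):            # |z| on a log scale
    for theta in np.linspace(0.0, 2.0 * np.pi, 300, endpoint=False):
        z = r * np.exp(1j * theta)
        w = -z - 1.0                                      # the unique zero in w
        if w != 0:
            points.append((np.log(abs(z)), np.log(abs(w))))

pts = np.array(points)
print(pts.shape)   # scatter-plot pts[:, 0] against pts[:, 1] to see the three tentacles
```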
Properties.
Let formula_7 be the zero locus of a polynomial
formula_8
where formula_9 is finite, formula_10 and formula_11 if formula_12 and formula_13. Let formula_14 be the Newton polyhedron of formula_15, i.e.,
formula_16
Then
* any connected component of the complement formula_17 is convex;
* the number of connected components of the complement formula_18 is at most formula_19;
* there is an injective map from the set of connected components of the complement formula_20 to formula_21, and the components corresponding to the vertices of formula_22 always occur;
* if formula_23, then the area of the amoeba formula_24 is not greater than formula_25.
Ronkin function.
A useful tool in studying amoebas is the Ronkin function. For "p"("z"), a polynomial in "n" complex variables, one defines the Ronkin function
formula_26
by the formula
formula_27
where formula_28 denotes formula_29 Equivalently, formula_30 is given by the integral
formula_31
where
formula_32
The Ronkin function is convex and affine on each connected component of the complement of the amoeba of formula_33.
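A quick numerical check of this behaviour (a sketch, not from the source) is the one-variable example p(z) = z + 1, whose Ronkin function can be evaluated by averaging log|p| over the torus; by Jensen's formula the result is max(x, 0), affine on each component of the complement of the amoeba, which here is the single point x = 0.

```python
import numpy as np

# Ronkin function of p(z) = z + 1 in one variable, computed from the torus
# integral N_p(x) = (1/2*pi) * Integral_0^{2*pi} log|p(e^{x + i*theta})| d(theta).
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)

def ronkin(x):
    z = np.exp(x + 1j * theta)
    return np.mean(np.log(np.abs(z + 1.0)))

for x in (-2.0, -0.5, 0.5, 2.0):
    print(x, ronkin(x), max(x, 0.0))   # the numerical value matches max(x, 0)
```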
As an example, the Ronkin function of a monomial
formula_34
with formula_35 is
formula_36 | [
{
"math_id": 0,
"text": "\\operatorname{Log}: \\big({\\mathbb C} \\setminus \\{0\\}\\big)^n \\to \\mathbb R^n"
},
{
"math_id": 1,
"text": "z = (z_1, z_2, \\dots, z_n)"
},
{
"math_id": 2,
"text": "\\mathbb R^n,"
},
{
"math_id": 3,
"text": "\\operatorname{Log}(z_1, z_2, \\dots, z_n) = \\big(\\log|z_1|, \\log|z_2|, \\dots, \\log|z_n|\\big)."
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "\\mathcal A_p"
},
{
"math_id": 6,
"text": "\\mathcal A_p = \\left\\{\\operatorname{Log}(z) : z \\in \\big(\\mathbb C \\setminus \\{0\\}\\big)^n, p(z) = 0\\right\\}."
},
{
"math_id": 7,
"text": " V \\subset (\\mathbb{C}^{*})^{n} "
},
{
"math_id": 8,
"text": " f(z) = \\sum_{j \\in A}a_{j}z^{j} "
},
{
"math_id": 9,
"text": " A \\subset \\mathbb{Z}^{n} "
},
{
"math_id": 10,
"text": " a_{j} \\in \\mathbb{C} "
},
{
"math_id": 11,
"text": " z^{j} = z_{1}^{j_{1}}\\cdots z_{n}^{j_{n}} "
},
{
"math_id": 12,
"text": " z = (z_{1},\\dots,z_{n}) "
},
{
"math_id": 13,
"text": " j = (j_{1},\\dots,j_{n}) "
},
{
"math_id": 14,
"text": " \\Delta_{f} "
},
{
"math_id": 15,
"text": "f "
},
{
"math_id": 16,
"text": " \\Delta_{f} = \\text{Convex Hull}\\{j \\in A \\mid a_{j} \\ne 0\\}. "
},
{
"math_id": 17,
"text": "\\mathbb R^n \\setminus \\mathcal A_p"
},
{
"math_id": 18,
"text": " \\mathbb{R}^{n} \\setminus \\mathcal{A}_{p} "
},
{
"math_id": 19,
"text": " \\#(\\Delta_{f} \\cap \\mathbb{Z}^{n}) "
},
{
"math_id": 20,
"text": " \\mathbb{R}^{n} \\setminus \\mathcal{A}_{p}"
},
{
"math_id": 21,
"text": "\\Delta_{f} \\cap \\mathbb{Z}^{n}"
},
{
"math_id": 22,
"text": " \\Delta_{f}"
},
{
"math_id": 23,
"text": " V \\subset (\\mathbb{C}^{*})^{2} "
},
{
"math_id": 24,
"text": " \\mathcal{A}_{p}(V) "
},
{
"math_id": 25,
"text": " \\pi^{2}\\text{Area}(\\Delta_{f}) "
},
{
"math_id": 26,
"text": "N_p : \\mathbb R^n \\to \\mathbb R"
},
{
"math_id": 27,
"text": "N_p(x) = \\frac{1}{(2\\pi i)^n} \\int_{\\operatorname{Log}^{-1}(x)} \\log|p(z)| \\,\\frac{dz_1}{z_1} \\wedge \\frac{dz_2}{z_2} \\wedge \\cdots \\wedge \\frac{dz_n}{z_n},"
},
{
"math_id": 28,
"text": "x"
},
{
"math_id": 29,
"text": "x = (x_1, x_2, \\dots, x_n)."
},
{
"math_id": 30,
"text": "N_p"
},
{
"math_id": 31,
"text": "N_p(x) = \\frac{1}{(2\\pi)^n} \\int_{[0, 2\\pi]^n} \\log|p(z)| \\,d\\theta_1 \\,d\\theta_2 \\cdots d\\theta_n,"
},
{
"math_id": 32,
"text": "z = \\left(e^{x_1+i\\theta_1}, e^{x_2+i\\theta_2}, \\dots, e^{x_n+i\\theta_n}\\right)."
},
{
"math_id": 33,
"text": "p(z)"
},
{
"math_id": 34,
"text": "p(z) = a z_1^{k_1} z_2^{k_2} \\dots z_n^{k_n}"
},
{
"math_id": 35,
"text": "a \\ne 0"
},
{
"math_id": 36,
"text": "N_p(x) = \\log|a| + k_1 x_1 + k_2 x_2 + \\cdots + k_n x_n."
}
] | https://en.wikipedia.org/wiki?curid=8864768 |
886617 | Semiperimeter | Half of the sum of side lengths of a polygon
In geometry, the semiperimeter of a polygon is half its perimeter. Although it has such a simple derivation from the perimeter, the semiperimeter appears frequently enough in formulas for triangles and other figures that it is given a separate name. When the semiperimeter occurs as part of a formula, it is typically denoted by the letter s.
Motivation: triangles.
The semiperimeter is used most often for triangles; the formula for the semiperimeter of a triangle with side lengths a, b, c is
formula_0
Properties.
In any triangle, any vertex and the point where the opposite excircle touches the triangle partition the triangle's perimeter into two equal lengths, thus creating two paths each of which has a length equal to the semiperimeter. If A, B, C, A', B', C' are as shown in the figure, then the segments connecting a vertex with the opposite excircle tangency (AA', BB', CC', shown in red in the diagram) are known as splitters, and
formula_1
The three splitters concur at the Nagel point of the triangle.
A cleaver of a triangle is a line segment that bisects the perimeter of the triangle and has one endpoint at the midpoint of one of the three sides. So any cleaver, like any splitter, divides the triangle into two paths each of whose length equals the semiperimeter. The three cleavers concur at the center of the Spieker circle, which is the incircle of the medial triangle; the Spieker center is the center of mass of all the points on the triangle's edges.
A line through the triangle's incenter bisects the perimeter if and only if it also bisects the area.
A triangle's semiperimeter equals the perimeter of its medial triangle.
By the triangle inequality, the longest side length of a triangle is less than the semiperimeter.
Formulas involving the semiperimeter.
For triangles.
The area A of any triangle is the product of its inradius (the radius of its inscribed circle) and its semiperimeter:
formula_2
The area of a triangle can also be calculated from its semiperimeter and side lengths a, b, c using Heron's formula:
formula_3
The circumradius R of a triangle can also be calculated from the semiperimeter and side lengths:
formula_4
This formula can be derived from the law of sines.
The inradius is
formula_5
The law of cotangents gives the cotangents of the half-angles at the vertices of a triangle in terms of the semiperimeter, the sides, and the inradius.
The length of the internal bisector of the angle opposite the side of length a is
formula_6
In a right triangle, the radius of the excircle on the hypotenuse equals the semiperimeter. The semiperimeter is the sum of the inradius and twice the circumradius. The area of the right triangle is formula_7 where a, b are the legs.
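The triangle formulas above can be checked numerically; the following sketch (illustrative, not from the source) uses a 3-4-5 right triangle:

```python
import math

# Semiperimeter-based triangle formulas checked on a 3-4-5 right triangle.
a, b, c = 3.0, 4.0, 5.0              # legs a, b and hypotenuse c
s = (a + b + c) / 2                  # semiperimeter

area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
r = area / s                         # inradius, from A = r*s
R = a * b * c / (4 * area)           # circumradius

print(s, area, r, R)                 # 6.0 6.0 1.0 2.5
print((s - a) * (s - b))             # right-triangle area from the legs: 6.0
print(r + 2 * R)                     # equals the semiperimeter: 6.0
print(area / (s - c))                # excircle radius on the hypotenuse: 6.0 = s
```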
For quadrilaterals.
The formula for the semiperimeter of a quadrilateral with side lengths a, b, c, d is
formula_8
One of the triangle area formulas involving the semiperimeter also applies to tangential quadrilaterals, which have an incircle and in which (according to Pitot's theorem) pairs of opposite sides have lengths summing to the semiperimeter—namely, the area is the product of the inradius and the semiperimeter:
formula_9
The simplest form of Brahmagupta's formula for the area of a cyclic quadrilateral has a form similar to that of Heron's formula for the triangle area:
formula_10
Bretschneider's formula generalizes this to all convex quadrilaterals:
formula_11
in which α and γ are two opposite angles.
The four sides of a bicentric quadrilateral are the four solutions of a quartic equation parametrized by the semiperimeter, the inradius, and the circumradius.
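As a small illustration (a sketch, not from the source), Bretschneider's formula can be evaluated directly; for a cyclic quadrilateral the angle term vanishes and Brahmagupta's formula is recovered:

```python
import math

# Bretschneider's formula for a convex quadrilateral with sides a, b, c, d and
# opposite angles alpha, gamma (in degrees). For a cyclic quadrilateral
# alpha + gamma = 180 degrees, so the correction term vanishes and the result
# reduces to Brahmagupta's formula.
def bretschneider(a, b, c, d, alpha_deg, gamma_deg):
    s = (a + b + c + d) / 2
    corr = a * b * c * d * math.cos(math.radians((alpha_deg + gamma_deg) / 2)) ** 2
    return math.sqrt((s - a) * (s - b) * (s - c) * (s - d) - corr)

print(bretschneider(1, 1, 1, 1, 90, 90))   # unit square (cyclic): area ~1.0
print(bretschneider(1, 1, 1, 1, 60, 60))   # rhombus with 60-degree angles: ~0.866 = sin(60)
```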
Regular polygons.
The area of a convex regular polygon is the product of its semiperimeter and its apothem.
Circles.
The semiperimeter of a circle, also called the semicircumference, is directly proportional to its radius "r":
formula_12
The constant of proportionality is the number pi, π.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "s = \\frac{a+b+c}{2}."
},
{
"math_id": 1,
"text": "\\begin{align}\ns &= |AB|+|A'B|=|AB|+|AB'|=|AC|+|A'C| \\\\\n &= |AC|+|AC'|=|BC|+|B'C|=|BC|+|BC'|.\n\\end{align}"
},
{
"math_id": 2,
"text": " A = rs."
},
{
"math_id": 3,
"text": "A = \\sqrt{s\\left(s-a\\right)\\left(s-b\\right)\\left(s-c\\right)}."
},
{
"math_id": 4,
"text": "R = \\frac{abc} {4\\sqrt{s(s-a)(s-b)(s-c)}}."
},
{
"math_id": 5,
"text": "r = \\sqrt{\\frac{(s-a)(s-b)(s-c)}{s}}. "
},
{
"math_id": 6,
"text": "t_a= \\frac{2 \\sqrt{bcs(s-a)}}{b+c}."
},
{
"math_id": 7,
"text": "(s-a)(s-b)"
},
{
"math_id": 8,
"text": "s = \\frac{a+b+c+d}{2}."
},
{
"math_id": 9,
"text": " K = rs."
},
{
"math_id": 10,
"text": "K = \\sqrt{\\left(s-a\\right)\\left(s-b\\right)\\left(s-c\\right)\\left(s-d\\right)}."
},
{
"math_id": 11,
"text": " K = \\sqrt {(s-a)(s-b)(s-c)(s-d) - abcd \\cdot \\cos^2 \\left(\\frac{\\alpha + \\gamma}{2}\\right)},"
},
{
"math_id": 12,
"text": "s = \\pi \\cdot r.\\!"
}
] | https://en.wikipedia.org/wiki?curid=886617 |
8867314 | Stefan number | The Stefan number (St or Ste) is defined as the ratio of sensible heat to latent heat. It is given by the formula
formula_0
where
* "c"p is the specific heat of the material,
* Δ"T" is the temperature difference between phases, and
* "L" is the latent heat of melting.
It is a dimensionless parameter that is useful in analyzing a Stefan problem. The parameter was developed from Josef Stefan's calculations of the rate of phase change of water into ice on the polar ice caps, and the name was coined by G.S.H. Lock in 1969. The problem's origin is fully described by Vuik, and further commentary on its place in Josef Stefan's larger career can be found in the references.
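A small worked example (a sketch using assumed, approximate property values for the freezing of water):

```python
# Illustrative Stefan number for the freezing of water (approximate property values).
c_p = 2100.0        # J/(kg*K), specific heat of ice, assumed constant
L = 334_000.0       # J/kg, latent heat of fusion of water
delta_T = 10.0      # K, temperature difference driving the phase change

Ste = c_p * delta_T / L
print(f"Ste = {Ste:.3f}")   # about 0.063: sensible heat is small relative to latent heat
```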
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{Ste} = \\frac{c_p \\Delta T}{L}, "
}
] | https://en.wikipedia.org/wiki?curid=8867314 |
886766 | Bell test | Experiments to test Bell's theorem in quantum mechanics
A Bell test, also known as Bell inequality test or Bell experiment, is a real-world physics experiment designed to test the theory of quantum mechanics in relation to Albert Einstein's concept of local realism. Named for John Stewart Bell, the experiments test whether or not the real world satisfies local realism, which requires the presence of some additional local variables (called "hidden" because they are not a feature of quantum theory) to explain the behavior of particles like photons and electrons. The test empirically evaluates the implications of Bell's theorem. So far, all Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave.
Many types of Bell tests have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests".
Bell inequality violations are also used in some quantum cryptography protocols, whereby a spy's presence is detected when Bell's inequalities "cease" to be violated.
Overview.
The Bell test has its origins in the debate between Einstein and other pioneers of quantum physics, principally Niels Bohr. One feature of the theory of quantum mechanics under debate was the meaning of Heisenberg's uncertainty principle. This principle states that if some information is known about a given particle, there is some other information about it that is impossible to know. An example of this is found in observations of the position and the momentum of a given particle. According to the uncertainty principle, a particle's momentum and its position cannot simultaneously be determined with arbitrarily high precision.
In 1935, Einstein, Boris Podolsky, and Nathan Rosen published a claim that quantum mechanics predicts that more information about a pair of entangled particles could be observed than Heisenberg's principle allowed, which would only be possible if information were travelling instantly between the two particles. This produces a paradox which came to be known as the "EPR paradox" after the three authors. It arises if any effect felt in one location is not the result of a cause that occurred in its past light cone, relative to its location. This action at a distance seems to violate causality, by allowing information between the two locations to travel faster than the speed of light. However, it is a common misconception to think that any information can be shared between two observers faster than the speed of light using entangled particles; the hypothetical information transfer here is between the particles. See no-communication theorem for further explanation.
Based on this, the authors concluded that the quantum wave function does not provide a complete description of reality. They suggested that there must be some local hidden variables at work in order to account for the behavior of entangled particles. In a theory of hidden variables, as Einstein envisaged it, the randomness and indeterminacy seen in the behavior of quantum particles would only be apparent. For example, if one knew the details of all the hidden variables associated with a particle, then one could predict both its position and momentum. The uncertainty that had been quantified by Heisenberg's principle would simply be an artifact of not having complete information about the hidden variables. Furthermore, Einstein argued that the hidden variables should obey the condition of locality: Whatever the hidden variables actually are, the behavior of the hidden variables for one particle should not be able to instantly affect the behavior of those for another particle far away. This idea, called the principle of locality, is rooted in intuition from classical physics that physical interactions do not propagate instantly across space. These ideas were the subject of ongoing debate between their proponents. In particular, Einstein himself did not approve of the way Podolsky had stated the problem in the famous EPR paper.
In 1964, John Stewart Bell proposed his famous theorem, which states that no physical theory of hidden local variables can ever reproduce all the predictions of quantum mechanics. Implicit in the theorem is the proposition that the determinism of classical physics is fundamentally incapable of describing quantum mechanics. Bell expanded on the theorem to provide what would become the conceptual foundation of the Bell test experiments.
A typical experiment involves the observation of particles, often photons, in an apparatus designed to produce entangled pairs and allow for the measurement of some characteristic of each, such as their spin. The results of the experiment could then be compared to what was predicted by local realism and those predicted by quantum mechanics.
In theory, the results could be "coincidentally" consistent with both. To address this problem, Bell proposed a mathematical description of local realism that placed a statistical limit on the likelihood of that eventuality. If the results of an experiment violate Bell's inequality, local hidden variables can be ruled out as their cause. Later researchers built on Bell's work by proposing new inequalities that serve the same purpose and refine the basic idea in one way or another. Consequently, the term "Bell inequality" can mean any one of a number of inequalities satisfied by local hidden-variables theories; in practice, many present-day experiments employ the CHSH inequality. All these inequalities, like the original devised by Bell, express the idea that assuming local realism places restrictions on the statistical results of experiments on sets of particles that have taken part in an interaction and then separated.
To date, all Bell tests have supported the theory of quantum physics, and not the hypothesis of local hidden variables. These efforts to experimentally validate violations of the Bell inequalities resulted in John Clauser, Alain Aspect, and Anton Zeilinger being awarded the 2022 Nobel Prize in Physics.
Conduct of optical Bell test experiments.
In practice most actual experiments have used light, assumed to be emitted in the form of particle-like photons (produced by atomic cascade or spontaneous parametric down conversion), rather than the atoms that Bell originally had in mind. The property of interest is, in the best known experiments, the polarisation direction, though other properties can be used. Such experiments fall into two classes, depending on whether the analysers used have one or two output channels.
A typical CHSH (two-channel) experiment.
The diagram shows a typical optical experiment of the two-channel kind for which Alain Aspect set a precedent in 1982. Coincidences (simultaneous detections) are recorded, the results being categorised as '++', '+−', '−+' or '−−' and corresponding counts accumulated.
Four separate subexperiments are conducted, corresponding to the four terms "E"("a", "b") in the test statistic "S" (equation (2) shown below). The settings "a", "a"′, "b" and "b"′ are generally in practice chosen to be 0, 45°, 22.5° and 67.5° respectively — the "Bell test angles" — these being the ones for which the quantum mechanical formula gives the greatest violation of the inequality.
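The following sketch (not from the source) evaluates the quantum-mechanical prediction at these angles, assuming a maximally entangled polarization state for which the correlation is E(a, b) = cos 2(a − b); the result is the maximal value 2√2 ≈ 2.83:

```python
import numpy as np

# Quantum prediction of the CHSH statistic at the "Bell test angles", assuming
# polarization-entangled photons with correlation E(a, b) = cos(2*(a - b)).
def E(a, b):
    return np.cos(2.0 * (a - b))

a, a_p, b, b_p = np.radians([0.0, 45.0, 22.5, 67.5])
S = E(a, b) - E(a, b_p) + E(a_p, b) + E(a_p, b_p)
print(S, 2.0 * np.sqrt(2.0))   # both about 2.828, above the local-realist bound of 2
```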
For each selected value of "a" and "b", the numbers of coincidences in each category ("N"++, "N"−−, "N"+− and "N"−+) are recorded. The experimental estimate for "E"("a", "b") is then calculated as:
"E"("a", "b") = ("N"++ + "N"−− − "N"+− − "N"−+) / ("N"++ + "N"−− + "N"+− + "N"−+).     (1)
Once all four "E"’s have been estimated, an experimental estimate of the test statistic
"S" = "E"("a", "b") − "E"("a", "b"′) + "E"("a"′, "b") + "E"("a"′, "b"′)     (2)
can be found. If "S" is numerically greater than 2 it has infringed the CHSH inequality. The experiment is declared to have supported the QM prediction and ruled out all local hidden-variable theories.
A strong assumption has had to be made, however, to justify use of expression (2), namely, that the sample of detected pairs is representative of the pairs emitted by the source. Denial of this assumption is called the fair sampling loophole.
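A minimal sketch of the bookkeeping is given below (the coincidence counts are hypothetical, chosen to mimic the cosine correlations expected quantum mechanically; the fair-sampling assumption is implicit):

```python
# Estimating E(a, b) and the CHSH statistic S from coincidence counts, under the
# fair-sampling assumption. The four count dictionaries below are hypothetical.
def correlation(counts):
    n_pp, n_mm, n_pm, n_mp = (counts[k] for k in ("++", "--", "+-", "-+"))
    return (n_pp + n_mm - n_pm - n_mp) / (n_pp + n_mm + n_pm + n_mp)

runs = {  # one sub-experiment per setting pair (a,b), (a,b'), (a',b), (a',b')
    ("a", "b"):   {"++": 427, "--": 423, "+-": 75,  "-+": 75},
    ("a", "b'"):  {"++": 75,  "--": 73,  "+-": 426, "-+": 426},
    ("a'", "b"):  {"++": 428, "--": 424, "+-": 74,  "-+": 74},
    ("a'", "b'"): {"++": 425, "--": 427, "+-": 74,  "-+": 74},
}

S = (correlation(runs[("a", "b")]) - correlation(runs[("a", "b'")])
     + correlation(runs[("a'", "b")]) + correlation(runs[("a'", "b'")]))
print(S)   # a value greater than 2 indicates a violation of the CHSH inequality
```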
A typical CH74 (single-channel) experiment.
Prior to 1982 all actual Bell tests used "single-channel" polarisers and variations on an inequality designed for this setup. The latter is described in Clauser, Horne, Shimony and Holt's much-cited 1969 article as being the one suitable for practical use. As with the CHSH test, there are four subexperiments in which each polariser takes one of two possible settings, but in addition there are other subexperiments in which one or other polariser or both are absent. Counts are taken as before and used to estimate the test statistic
"S" = ["N"("a", "b") − "N"("a", "b"′) + "N"("a"′, "b") + "N"("a"′, "b"′) − "N"("a"′, ∞) − "N"(∞, "b")] / "N"(∞, ∞),
where the symbol ∞ indicates absence of a polariser.
If "S" exceeds 0 then the experiment is declared to have infringed the CH inequality and hence to have refuted local hidden-variables. This inequality is known as CH inequality instead of CHSH as it was also derived in a 1974 article by Clauser and Horne more rigorously and under weaker assumptions.
Experimental assumptions.
In addition to the theoretical assumptions, there are practical ones. There may, for example, be a number of "accidental coincidences" in addition to those of interest. It is assumed that no bias is introduced by subtracting their estimated number before calculating "S", but that this is true is not considered by some to be obvious. There may be synchronisation problems — ambiguity in recognising pairs because in practice they will not be detected at "exactly" the same time.
Nevertheless, despite all the deficiencies of the actual experiments, one striking fact emerges: the results are, to a very good approximation, what quantum mechanics predicts. If imperfect experiments give us such excellent overlap with quantum predictions, most working quantum physicists would agree with John Bell in expecting that, when a perfect Bell test is done, the Bell inequalities will still be violated. This attitude has led to the emergence of a new sub-field of physics known as quantum information theory. One of the main achievements of this new branch of physics is showing that violation of Bell's inequalities leads to the possibility of a secure information transfer, which utilizes the so-called quantum cryptography (involving entangled states of pairs of particles).
Notable experiments.
Over the past half century, a great number of Bell test experiments have been conducted. The experiments are commonly interpreted to rule out local hidden-variable theories, and in 2015 an experiment was performed that is not subject to either the locality loophole or the detection loophole (Hensen et al.). An experiment free of the locality loophole is one where for each separate measurement and in each wing of the experiment, a new setting is chosen and the measurement completed before signals could communicate the settings from one wing of the experiment to the other. An experiment free of the detection loophole is one where close to 100% of the successful measurement outcomes in one wing of the experiment are paired with a successful measurement in the other wing. This percentage is called the efficiency of the experiment. Advancements in technology have led to a great variety of methods to test Bell-type inequalities.
Some of the best known and recent experiments include:
Kasday, Ullman and Wu (1970).
Leonard Ralph Kasday, Jack R. Ullman and Chien-Shiung Wu carried out the first experimental Bell test, using photon pairs produced by positronium decay and analyzed by Compton scattering. The experiment observed photon polarization correlations consistent with quantum predictions and inconsistent with local realistic models that obey the known polarization dependence of Compton scattering. Due to the low polarization selectivity of Compton scattering, the results did not violate a Bell inequality.
Freedman and Clauser (1972).
Stuart J. Freedman and John Clauser carried out the first Bell test that observed a Bell inequality violation, using Freedman's inequality, a variant on the CH74 inequality.
Aspect et al. (1982).
Alain Aspect and his team at Orsay, Paris, conducted three Bell tests using calcium cascade sources. The first and last used the CH74 inequality. The second was the first application of the CHSH inequality. The third (and most famous) was arranged such that the choice between the two settings on each side was made during the flight of the photons (as originally suggested by John Bell).
Tittel et al. (1998).
The Geneva 1998 Bell test experiments showed that distance did not destroy the "entanglement". Light was sent in fibre optic cables over distances of several kilometers before it was analysed. As with almost all Bell tests since about 1985, a "parametric down-conversion" (PDC) source was used.
Weihs et al. (1998): experiment under "strict Einstein locality" conditions.
In 1998 Gregor Weihs and a team at Innsbruck, led by Anton Zeilinger, conducted an experiment that closed the "locality" loophole, improving on Aspect's of 1982. The choice of detector was made using a quantum process to ensure that it was random. This test violated the CHSH inequality by over 30 standard deviations, the coincidence curves agreeing with those predicted by quantum theory.
Pan et al. (2000) experiment on the GHZ state.
This is the first of new Bell-type experiments on more than two particles; this one uses the so-called GHZ state of three particles.
Rowe et al. (2001): the first to close the detection loophole.
The detection loophole was first closed in an experiment with two entangled trapped ions, carried out in the ion storage group of David Wineland at the National Institute of Standards and Technology in Boulder. The experiment had detection efficiencies well over 90%.
Go et al. (Belle collaboration): Observation of Bell inequality violation in B mesons.
Using semileptonic B0 decays of Υ(4S) at the Belle experiment, a clear violation of a Bell inequality in particle-antiparticle correlations was observed.
Gröblacher et al. (2007) test of Leggett-type non-local realist theories.
A specific class of non-local theories suggested by Anthony Leggett is ruled out. Based on this, the authors conclude that any possible non-local hidden-variable theory consistent with quantum mechanics must be highly counterintuitive.
Salart et al. (2008): separation in a Bell Test.
This experiment filled a loophole by providing an 18 km separation between detectors, which is sufficient to allow the completion of the quantum state measurements before any information could have traveled between the two detectors.
Ansmann et al. (2009): overcoming the detection loophole in solid state.
This was the first experiment testing Bell inequalities with solid-state qubits (superconducting Josephson phase qubits were used). This experiment surmounted the detection loophole using a pair of superconducting qubits in an entangled state. However, the experiment still suffered from the locality loophole because the qubits were only separated by a few millimeters.
Giustina et al. (2013), Larsson et al. (2014): overcoming the detection loophole for photons.
The detection loophole for photons has been closed for the first time by Marissa Giustina, using highly efficient detectors. This makes photons the first system for which all of the main loopholes have been closed, albeit in different experiments.
Christensen et al. (2013): overcoming the detection loophole for photons.
The Christensen et al. (2013) experiment is similar to that of Giustina et al. Giustina et al. did just four long runs with constant measurement settings (one for each of the four pairs of settings). The experiment was not pulsed so that formation of "pairs" from the two records of measurement results (Alice and Bob) had to be done after the experiment which in fact exposes the experiment to the coincidence loophole. This led to a reanalysis of the experimental data in a way which removed the coincidence loophole, and fortunately the new analysis still showed a violation of the appropriate CHSH or CH inequality. On the other hand, the Christensen et al. experiment was pulsed and measurement settings were frequently reset in a random way, though only once every 1000 particle pairs, not every time.
Hensen et al., Giustina et al., Shalm et al. (2015): "loophole-free" Bell tests.
In 2015 the first three significant-loophole-free Bell-tests were published within three months by independent groups in Delft, Vienna and Boulder. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them “loophole-free” in the sense that all remaining conceivable loopholes like superdeterminism require truly exotic hypotheses that might never get closed experimentally.
The first published experiment by Hensen et al. used a photonic link to entangle the electron spins of two nitrogen-vacancy defect centres in diamonds 1.3 kilometers apart and measured a violation of the CHSH inequality ("S" = 2.42 ± 0.20). Thereby the local-realist hypothesis could be rejected with a "p"-value of 0.039.
Both simultaneously published experiments by Giustina et al.
and Shalm et al.
used entangled photons to obtain a Bell inequality violation with high statistical significance (p-value ≪ 10⁻⁶). Notably, the experiment by Shalm et al. also combined three types of (quasi-)random number generators to determine the measurement basis choices. One of these methods, detailed in an ancillary file, is the “'Cultural' pseudorandom source” which involved using bit strings from popular media such as the "Back to the Future" films, "", "Monty Python and the Holy Grail", and the television shows "Saved by the Bell" and "Dr. Who".
Schmied et al. (2016): Detection of Bell correlations in a many-body system.
Using a witness for Bell correlations derived from a multi-partite Bell inequality, physicists at the University of Basel were able to conclude for the first time Bell correlation in a many-body system composed by about 480 atoms in a Bose-Einstein condensate. Even though loopholes were not closed, this experiment shows the possibility of observing Bell correlations in the macroscopic regime.
Handsteiner et al. (2017): "Cosmic Bell Test" - Measurement Settings from Milky Way Stars.
Physicists led by David Kaiser of the Massachusetts Institute of Technology and Anton Zeilinger of the Institute for Quantum Optics and Quantum Information and University of Vienna performed an experiment that "produced results consistent with nonlocality" by measuring starlight that had taken 600 years to travel to Earth. The experiment “represents the first experiment to dramatically limit the space-time region in which hidden variables could be relevant.”
Rosenfeld et al. (2017): "Event-Ready" Bell test with entangled atoms and closed detection and locality loopholes.
Physicists at the Ludwig Maximilian University of Munich and the Max Planck Institute of Quantum Optics published results from an experiment in which they observed a Bell inequality violation using entangled spin states of two atoms with a separation distance of 398 meters in which the detection loophole, the locality loophole, and the memory loophole were closed. The violation of S = 2.221 ± 0.033 rejected local realism with a significance value of P = 1.02×10⁻¹⁶ when taking into account 7 months of data and 55000 events or an upper bound of P = 2.57×10⁻⁹ from a single run with 10000 events.
The BIG Bell Test Collaboration (2018): “Challenging local realism with human choices”.
An international collaborative scientific effort used arbitrary human choice to define measurement settings instead of using random number generators. Assuming that human free will exists, this would close the “freedom-of-choice loophole”. Around 100,000 participants were recruited in order to provide sufficient input for the experiment to be statistically significant.
Rauch et al. (2018): measurement settings from distant quasars.
In 2018, an international team used light from two quasars (one whose light was generated approximately eight billion years ago and the other approximately twelve billion years ago) as the basis for their measurement settings. This experiment pushed the timeframe for when the settings could have been mutually determined to at least 7.8 billion years in the past, a substantial fraction of the superdeterministic limit (that being the creation of the universe 13.8 billion years ago).
The 2019 PBS Nova episode "Einstein's Quantum Riddle" documents this "cosmic Bell test" measurement, with footage of the scientific team on-site at the high-altitude Teide Observatory located in the Canary Islands.
Storz et al (2023): Loophole-free Bell inequality violation with superconducting circuits.
In 2023, an international team led by the group of Andreas Wallraff at ETH Zurich demonstrated a loophole-free violation of the CHSH inequality with superconducting circuits deterministically entangled via a cryogenic link spanning a distance of 30 meters.
Loopholes.
Though the series of increasingly sophisticated Bell test experiments has convinced the physics community that local hidden-variable theories are indefensible, they can never be excluded entirely. For example, the hypothesis of superdeterminism, in which all experiments and outcomes (and everything else) are predetermined, cannot be excluded (because it is unfalsifiable).
Up to 2015, the outcome of all experiments that violate a Bell inequality could still theoretically be explained by exploiting the detection loophole and/or the locality loophole. The locality (or communication) loophole means that since in actual practice the two detections are separated by a time-like interval, the first detection may influence the second by some kind of signal. To avoid this loophole, the experimenter has to ensure that particles travel far apart before being measured, and that the measurement process is rapid. More serious is the detection (or unfair sampling) loophole, because particles are not always detected in both wings of the experiment. It can be imagined that the complete set of particles would behave randomly, but instruments only detect a subsample showing quantum correlations, by letting detection be dependent on a combination of local hidden variables and detector setting.
Experimenters had repeatedly voiced that loophole-free tests could be expected in the near future. In 2015, a loophole-free Bell violation was reported using entangled diamond spins over a distance of 1.3 km, and corroborated by two experiments using entangled photon pairs.
The remaining possible theories that obey local realism can be further restricted by testing different spatial configurations, methods to determine the measurement settings, and recording devices. It has been suggested that using humans to generate the measurement settings and observe the outcomes provides a further test. David Kaiser of MIT told the "New York Times" in 2015 that a potential weakness of the "loophole-free" experiments is that the systems used to add randomness to the measurement may be predetermined in a method that was not detected in experiments.
Detection loophole.
A common problem in optical Bell tests is that only a small fraction of the emitted photons are detected. It is then possible that the correlations of the detected photons are unrepresentative: although they show a violation of a Bell inequality, if all photons were detected the Bell inequality would actually be respected. This was first noted by Philip M. Pearle in 1970, who devised a local hidden variable model that faked a Bell violation by letting the photon be detected only if the measurement setting was favourable. The assumption that this does not happen, i.e., that the small sample is actually representative of the whole is called the "fair sampling" assumption.
To do away with this assumption it is necessary to detect a sufficiently large fraction of the photons. This is usually characterized in terms of the detection efficiency formula_0, defined as the probability that a photodetector detects a photon that arrives at it. Anupam Garg and N. David Mermin showed that when using a maximally entangled state and the CHSH inequality an efficiency of formula_1 is required for a loophole-free violation. Later Philippe H. Eberhard showed that when using a "partially" entangled state a loophole-free violation is possible for formula_2, which is the optimal bound for the CHSH inequality. Other Bell inequalities allow for even lower bounds. For example, there exists a four-setting inequality which is violated for formula_3.
Historically, only experiments with non-optical systems have been able to reach high enough efficiencies to close this loophole, such as trapped ions, superconducting qubits, and nitrogen-vacancy centers. These experiments were not able to close the locality loophole, which is easy to do with photons. More recently, however, optical setups have managed to reach sufficiently high detection efficiencies by using superconducting photodetectors, and hybrid setups have managed to combine the high detection efficiency typical of matter systems with the ease of distributing entanglement at a distance typical of photonic systems.
Locality loophole.
One of the assumptions of Bell's theorem is the one of locality, namely that the choice of setting at a measurement site does not influence the result of the other. The motivation for this assumption is the theory of relativity, that prohibits communication faster than light. For this motivation to apply to an experiment, it needs to have space-like separation between its measurements events. That is, the time that passes between the choice of measurement setting and the production of an outcome must be shorter than the time it takes for a light signal to travel between the measurement sites.
The first experiment that strived to respect this condition was Aspect's 1982 experiment. In it the settings were changed fast enough, but deterministically. The first experiment to change the settings randomly, with the choices made by a quantum random number generator, was Weihs et al.'s 1998 experiment. Scheidl et al. improved on this further in 2010 by conducting an experiment between locations separated by a distance of 144 km.
Coincidence loophole.
In many experiments, especially those based on photon polarization, pairs of events in the two wings of the experiment are only identified as belonging to a single pair after the experiment is performed, by judging whether or not their detection times are close enough to one another. This generates a new possibility for a local hidden variables theory to "fake" quantum correlations: delay the detection time of each of the two particles by a larger or smaller amount depending on some relationship between hidden variables carried by the particles and the detector settings encountered at the measurement station.
The coincidence loophole can be ruled out entirely simply by working with a pre-fixed lattice of detection windows which are short enough that most pairs of events occurring in the same window do originate with the same emission and long enough that a true pair is not separated by a window boundary.
Memory loophole.
In most experiments, measurements are repeatedly made at the same two locations. A local hidden variable theory could exploit the memory of past measurement settings and outcomes in order to increase the violation of a Bell inequality. Moreover, physical parameters might be varying in time. It has been shown that, provided each new pair of measurements is done with a new random pair of measurement settings, neither memory nor time inhomogeneity has a serious effect on the experiment.
Superdeterminism.
A necessary assumption to derive Bell's theorem is that the hidden variables are not correlated with the measurement settings. This assumption has been justified on the grounds that the experimenter has "free will" to choose the settings, and that such is necessary to do science in the first place. A (hypothetical) theory where the choice of measurement is determined by the system being measured is known as "superdeterministic".
Many-worlds loophole.
The many-worlds interpretation, also known as the Hugh Everett interpretation, is deterministic and has local dynamics, consisting of the unitary part of quantum mechanics without collapse. Bell's theorem does not apply because of an implicit assumption that measurements have a single outcome.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\eta"
},
{
"math_id": 1,
"text": "\\eta > 2\\sqrt2-2 \\approx 0.83 "
},
{
"math_id": 2,
"text": "\\eta > 2/3 \\approx 0.67 "
},
{
"math_id": 3,
"text": "\\eta > (\\sqrt5-1)/2 \\approx 0.62"
}
] | https://en.wikipedia.org/wiki?curid=886766 |
886856 | Colloidal gold | Suspension of gold nanoparticles in a liquid
Colloidal gold is a sol or colloidal suspension of nanoparticles of gold in a fluid, usually water. The colloid is usually coloured either wine red (for spherical particles less than 100 nm) or blue-purple (for larger spherical particles or nanorods).
Due to their optical, electronic, and molecular-recognition properties, gold nanoparticles are the subject of substantial research, with many potential or promised applications in a wide variety of areas, including electron microscopy, electronics, nanotechnology, materials science, and biomedicine.
The properties of colloidal gold nanoparticles, and thus their potential applications, depend strongly upon their size and shape. For example, rodlike particles have both a transverse and longitudinal absorption peak, and anisotropy of the shape affects their self-assembly.
History.
Used since ancient times as a method of staining glass, colloidal gold was employed in the 4th-century Lycurgus Cup, which changes color depending on the location of the light source.
During the Middle Ages, soluble gold, a solution containing gold salt, had a reputation for its curative property for various diseases. In 1618, Francis Anthony, a philosopher and member of the medical profession, published a book called "Panacea Aurea, sive tractatus duo de ipsius Auro Potabili" (Latin: gold potion, or two treatments of potable gold). The book introduces information on the formation of colloidal gold and its medical uses. About half a century later, English botanist Nicholas Culpepper published a book in 1656, "Treatise of Aurum Potabile", solely discussing the medical uses of colloidal gold.
In 1676, Johann Kunckel, a German chemist, published a book on the manufacture of stained glass. In his book "Valuable Observations or Remarks About the Fixed and Volatile Salts-Auro and Argento Potabile, Spiritu Mundi and the Like", Kunckel assumed that the pink color of Aurum Potabile came from small particles of metallic gold, not visible to human eyes. In 1842, John Herschel invented a photographic process called chrysotype (from the Greek meaning "gold") that used colloidal gold to record images on paper.
Modern scientific evaluation of colloidal gold did not begin until Michael Faraday's work in the 1850s. In 1856, in a basement laboratory of Royal Institution, Faraday accidentally created a ruby red solution while mounting pieces of gold leaf onto microscope slides. Since he was already interested in the properties of light and matter, Faraday further investigated the optical properties of the colloidal gold. He prepared the first pure sample of colloidal gold, which he called 'activated gold', in 1857. He used phosphorus to reduce a solution of gold chloride. The colloidal gold Faraday made 150 years ago is still optically active. For a long time, the composition of the 'ruby' gold was unclear. Several chemists suspected it to be a gold tin compound, due to its preparation. Faraday recognized that the color was actually due to the miniature size of the gold particles. He noted the light scattering properties of suspended gold microparticles, which is now called Faraday-Tyndall effect.
In 1898, Richard Adolf Zsigmondy prepared the first colloidal gold in diluted solution. Apart from Zsigmondy, Theodor Svedberg, who invented ultracentrifugation, and Gustav Mie, who provided the theory for scattering and absorption by spherical particles, were also interested in the synthesis and properties of colloidal gold.
With advances in various analytical technologies in the 20th century, studies on gold nanoparticles have accelerated. Advanced microscopy methods, such as atomic force microscopy and electron microscopy, have contributed the most to nanoparticle research. Due to their comparably easy synthesis and high stability, various gold particles have been studied for their practical uses. Different types of gold nanoparticle are already used in many industries, such as electronics.
Physical properties.
Optical.
Colloidal gold has been used by artists for centuries because of the nanoparticle’s interactions with visible light. Gold nanoparticles absorb and scatter light resulting in colours ranging from vibrant reds (smaller particles) to blues to black and finally to clear and colorless (larger particles), depending on particle size, shape, local refractive index, and aggregation state. These colors occur because of a phenomenon called localized surface plasmon resonance (LSPR), in which conduction electrons on the surface of the nanoparticle oscillate in resonance with incident light.
Effect of size, shape, composition and environment.
As a general rule, the wavelength of light absorbed increases as a function of increasing nanoparticle size. Both the surface plasmon resonance frequency and scattering intensity depend on the size, shape, composition and environment of the nanoparticles. This phenomenon may be quantified by use of the Mie scattering theory for spherical nanoparticles. Nanoparticles with diameters of 30–100 nm may be detected easily by a microscope, and particles with a size of 40 nm may even be detected by the naked eye when the concentration of the particles is 10⁻⁴ M or greater. The scattering from a 60 nm nanoparticle is about 10⁵ times stronger than the emission from a fluorescein molecule.
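As a rough quantitative illustration (a sketch, not from the source), the quasi-static dipole approximation can be used in place of full Mie theory for particles much smaller than the wavelength. The dielectric value assumed below for gold near 520 nm is an approximate literature figure and is used purely for illustration.

```python
import numpy as np

# Quasi-static (dipole) estimate of the optical cross-sections of a small gold
# sphere in water. Valid only for particles much smaller than the wavelength;
# full Mie theory is needed otherwise.
radius = 20e-9                     # m (40 nm diameter sphere)
wavelength = 520e-9                # m
eps_gold = -4.7 + 2.4j             # assumed approximate dielectric function of gold at 520 nm
eps_medium = 1.33**2               # water

k = 2 * np.pi * np.sqrt(eps_medium) / wavelength
alpha = 4 * np.pi * radius**3 * (eps_gold - eps_medium) / (eps_gold + 2 * eps_medium)

c_ext = k * alpha.imag                         # extinction cross-section, m^2
c_sca = k**4 / (6 * np.pi) * abs(alpha)**2     # scattering cross-section, m^2
print(f"C_ext ~ {c_ext:.2e} m^2, C_sca ~ {c_sca:.2e} m^2")
```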
Effect of local refractive index.
Changes in the apparent color of a gold nanoparticle solution can also be caused by the environment in which the colloidal gold is suspended. The optical properties of gold nanoparticles depend on the refractive index near the nanoparticle surface, so the molecules directly attached to the nanoparticle surface (i.e. nanoparticle ligands) and the nanoparticle solvent may both influence the observed optical features. As the refractive index near the gold surface increases, the LSPR shifts to longer wavelengths. In addition to solvent environment, the extinction peak can be tuned by coating the nanoparticles with non-conducting shells such as silica, biomolecules, or aluminium oxide.
Effect of aggregation.
When gold nanoparticles aggregate, the optical properties of the particle change, because the effective particle size, shape, and dielectric environment all change.
Medical research.
Electron microscopy.
Colloidal gold and various derivatives have long been among the most widely used labels for antigens in biological electron microscopy. Colloidal gold particles can be attached to many traditional biological probes such as antibodies, lectins, superantigens, glycans, nucleic acids, and receptors. Particles of different sizes are easily distinguishable in electron micrographs, allowing simultaneous multiple-labelling experiments.
In addition to biological probes, gold nanoparticles can be transferred to various mineral substrates, such as mica, single crystal silicon, and atomically flat gold(111), to be observed under atomic force microscopy (AFM).
Drug delivery system.
Gold nanoparticles can be used to optimize the biodistribution of drugs to diseased organs, tissues or cells, in order to improve and target drug delivery.
Nanoparticle-mediated drug delivery is feasible only if the drug distribution is otherwise inadequate. These cases include the targeting of unstable drugs (proteins, siRNA, DNA), delivery to difficult sites (brain, retina, tumors, intracellular organelles) and drugs with serious side effects (e.g. anti-cancer agents). The performance of the nanoparticles depends on the size and surface functionalities of the particles. Also, the drug release and particle disintegration can vary depending on the system (e.g. biodegradable polymers sensitive to pH). An optimal nanodrug delivery system ensures that the active drug is available at the site of action for the correct time and duration, and its concentration should be above the minimal effective concentration (MEC) and below the minimal toxic concentration (MTC).
Gold nanoparticles are being investigated as carriers for drugs such as Paclitaxel. The administration of hydrophobic drugs requires molecular encapsulation, and it is found that nanosized particles are particularly efficient in evading the reticuloendothelial system.
Tumor detection.
In cancer research, colloidal gold can be used to target tumors and provide detection using SERS (surface enhanced Raman spectroscopy) "in vivo". These gold nanoparticles are surrounded with Raman reporters, which provide light emission that is over 200 times brighter than quantum dots. It was found that the Raman reporters were stabilized when the nanoparticles were encapsulated with a thiol-modified polyethylene glycol coat. This allows for compatibility and circulation "in vivo". To specifically target tumor cells, the PEGylated gold particles are conjugated with an antibody (or an antibody fragment such as scFv) against, e.g., epidermal growth factor receptor, which is sometimes overexpressed in cells of certain cancer types. Using SERS, these PEGylated gold nanoparticles can then detect the location of the tumor.
Gold nanoparticles accumulate in tumors, due to the leakiness of tumor vasculature, and can be used as contrast agents for enhanced imaging in a time-resolved optical tomography system using short-pulse lasers for skin cancer detection in mouse model. It is found that intravenously administered spherical gold nanoparticles broadened the temporal profile of reflected optical signals and enhanced the contrast between surrounding normal tissue and tumors.
Gene therapy.
Gold nanoparticles have shown potential as intracellular delivery vehicles for siRNA oligonucleotides with maximal therapeutic impact.
Gold nanoparticles show potential as intracellular delivery vehicles for antisense oligonucleotides (single and double stranded DNA) by providing protection against intracellular nucleases and ease of functionalization for selective targeting.
Photothermal agents.
Gold nanorods are being investigated as photothermal agents for in-vivo applications. Gold nanorods are rod-shaped gold nanoparticles whose aspect ratios tune the surface plasmon resonance (SPR) band from the visible to near-infrared wavelength. The total extinction of light at the SPR is made up of both absorption and scattering. For the smaller axial diameter nanorods (~10 nm), absorption dominates, whereas for the larger axial diameter nanorods (>35 nm) scattering can dominate. As a consequence, for in-vivo studies, small diameter gold nanorods are being used as photothermal converters of near-infrared light due to their high absorption cross-sections. Since near-infrared light transmits readily through human skin and tissue, these nanorods can be used as ablation components for cancer, and other targets. When coated with polymers, gold nanorods have been observed to circulate in-vivo with half-lives longer than 6 hours, bodily residence times around 72 hours, and little to no uptake in any internal organs except the liver.
Despite the unquestionable success of gold nanorods as photothermal agents in preclinical research, they have yet to obtain approval for clinical use because their size is above the renal excretion threshold. In 2019, the first NIR-absorbing plasmonic ultrasmall-in-nano architecture was reported, jointly combining: (i) a suitable photothermal conversion for hyperthermia treatments, (ii) the possibility of multiple photothermal treatments and (iii) renal excretion of the building blocks after the therapeutic action.
Radiotherapy dose enhancer.
Considerable interest has been shown in the use of gold and other heavy-atom-containing nanoparticles to enhance the dose delivered to tumors. Since the gold nanoparticles are taken up by the tumors more than the nearby healthy tissue, the dose is selectively enhanced. The biological effectiveness of this type of therapy seems to be due to the local deposition of the radiation dose near the nanoparticles. This mechanism is the same as occurs in heavy ion therapy.
Detection of toxic gas.
Researchers have developed simple, inexpensive methods for on-site detection of hydrogen sulfide (H2S) present in air, based on the antiaggregation of gold nanoparticles (AuNPs). Dissolving H2S into a weak alkaline buffer solution leads to the formation of HS−, which can stabilize AuNPs and ensure they maintain their red color, allowing for visual detection of toxic levels of H2S.
Gold nanoparticle based biosensor.
Gold nanoparticles are incorporated into biosensors to enhance their stability, sensitivity, and selectivity. Nanoparticle properties such as small size, high surface-to-volume ratio, and high surface energy allow immobilization of a large range of biomolecules. Gold nanoparticles, in particular, can also act as "electron wires" to transport electrons, and their amplification effect on electromagnetic light allows them to function as signal amplifiers. The main types of gold nanoparticle-based biosensors are optical and electrochemical biosensors.
Optical biosensor.
Gold nanoparticles improve the sensitivity of optical sensors in response to the change in the local refractive index. The angle of the incident light for surface plasmon resonance, an interaction between light waves and conducting electrons in metal, changes when other substances are bound to the metal surface. Because gold is very sensitive to its surroundings' dielectric constant, binding of an analyte significantly shifts the gold nanoparticle's SPR and therefore allows for more sensitive detection. Gold nanoparticles can also amplify the SPR signal. When the plasmon wave passes through the gold nanoparticle, the charge density in the wave and the electrons in the gold interact, resulting in a higher energy response, referred to as electron coupling. When the analyte and bio-receptor both bind to the gold, the apparent mass of the analyte increases and therefore amplifies the signal.
These properties have been used to build a DNA sensor with 1000-fold greater sensitivity than one without the Au NPs. Humidity sensors have also been built: a change in humidity alters the spacing between molecules, and this change in interspacing in turn results in a change of the Au NPs' LSPR.
Electrochemical biosensor.
Electrochemical sensors convert biological information into electrical signals that can be detected. The conductivity and biocompatibility of Au NPs allow them to act as "electron wires", transferring electrons between the electrode and the active site of the enzyme. This can be accomplished in two ways: by attaching the Au NP to either the enzyme or the electrode. A GNP-glucose oxidase monolayer electrode has been constructed using these two methods. The Au NPs allow more freedom in the enzyme's orientation and therefore more sensitive and stable detection. Au NPs also act as immobilization platforms for the enzyme. Most biomolecules denature or lose their activity when they interact directly with the electrode. The biocompatibility and high surface energy of gold allow it to bind a large amount of protein without altering its activity, resulting in a more sensitive sensor. Moreover, Au NPs can also catalyze biological reactions; gold nanoparticles under 2 nm have shown catalytic activity for the oxidation of styrene.
Immunological biosensor.
Gold nanoparticles have been coated with peptides and glycans for use in immunological detection methods. The possibility to use glyconanoparticles in ELISA was unexpected, but the method seems to have a high sensitivity and thus offers potential for development of specific assays for diagnostic identification of antibodies in patient sera.
Thin films.
Gold nanoparticles capped with organic ligands, such as alkanethiol molecules, can self-assemble into large monolayers (>cm²). The particles are first prepared in organic solvent, such as chloroform or toluene, and are then spread into monolayers either on a liquid surface or on a solid substrate. Such interfacial thin films of nanoparticles have a close relationship with Langmuir-Blodgett monolayers made from surfactants.
The mechanical properties of nanoparticle monolayers have been studied extensively. For 5 nm spheres capped with dodecanethiol, the Young's modulus of the monolayer is on the order of GPa. The mechanics of the membranes are guided by strong interactions between ligand shells on adjacent particles. Upon fracture, the films crack perpendicular to the direction of strain at a fracture stress of 11 formula_0 2.6 MPa, comparable to that of cross-linked polymer films. Free-standing nanoparticle membranes exhibit bending rigidity on the order of 10formula_1 eV, higher than what is predicted in theory for continuum plates of the same thickness, due to nonlocal microstructural constraints such as nonlocal coupling of particle rotational degrees of freedom. On the other hand, resistance to bending is found to be greatly reduced in nanoparticle monolayers that are supported at the air/water interface, possibly due to screening of ligand interactions in a wet environment.
Surface chemistry.
In many different types of colloidal gold syntheses, the interface of the nanoparticles can display widely different character – ranging from an interface similar to a self-assembled monolayer to a disordered boundary with no repeating patterns. Beyond the Au-Ligand interface, conjugation of the interfacial ligands with various functional moieties (from small organic molecules to polymers to DNA to RNA) afford colloidal gold much of its vast functionality.
Ligand exchange/functionalization.
After initial nanoparticle synthesis, colloidal gold ligands are often exchanged with new ligands designed for specific applications. For example, Au NPs produced via the Turkevich-style (or Citrate Reduction) method are readily reacted via ligand exchange reactions, due to the relatively weak binding between the carboxyl groups and the surfaces of the NPs. This ligand exchange can produce conjugation with a number of biomolecules from DNA to RNA to proteins to polymers (such as PEG) to increase biocompatibility and functionality. For example, ligands have been shown to enhance catalytic activity by mediating interactions between adsorbates and the active gold surfaces for specific oxygenation reactions. Ligand exchange can also be used to promote phase transfer of the colloidal particles. Ligand exchange is also possible with alkane thiol-arrested NPs produced from the Brust-type synthesis method, although higher temperatures are needed to promote the rate of the ligand detachment. An alternative method for further functionalization is achieved through the conjugation of the ligands with other molecules, though this method can cause the colloidal stability of the Au NPs to break down.
Ligand removal.
In many cases, as in various high-temperature catalytic applications of Au, the removal of the capping ligands produces more desirable physicochemical properties. The removal of ligands from colloidal gold while maintaining a relatively constant number of Au atoms per Au NP can be difficult due to the tendency of these bare clusters to aggregate. The removal of ligands is partially achievable by simply washing away all excess capping ligands, though this method is ineffective in removing all of the capping ligand. More often, ligand removal is achieved under high temperature or light ablation followed by washing. Alternatively, the ligands can be electrochemically etched off.
Surface structure and chemical environment.
The precise structure of the ligands on the surface of colloidal gold NPs impacts the properties of the colloidal gold particles. Binding conformations and surface packing of the capping ligands at the surface of the colloidal gold NPs tend to differ greatly from bulk surface model adsorption, largely due to the high curvature observed at the nanoparticle surfaces. Thiolate-gold interfaces at the nanoscale have been well-studied and the thiolate ligands are observed to pull Au atoms off of the surface of the particles to form “staple” motifs that have significant Thiyl-Au(0) character. The citrate-gold surface, on the other hand, is relatively less-studied due to the vast number of binding conformations of the citrate to the curved gold surfaces. A study performed in 2014 identified that the most-preferred binding of the citrate involves two carboxylic acids, with the hydroxyl group of the citrate binding three surface metal atoms.
Health and safety.
As gold nanoparticles (AuNPs) are further investigated for targeted drug delivery in humans, their toxicity needs to be considered. For the most part, it is suggested that AuNPs are biocompatible, but the concentrations at which they become toxic need to be determined, as does whether those concentrations fall within the range of concentrations actually used. Toxicity can be tested "in vitro" and "in vivo". "In vitro" toxicity results can vary depending on the type of the cellular growth media with different protein compositions, the method used to determine cellular toxicity (cell health, cell stress, how many particles are taken up into a cell), and the capping ligands in solution. "In vivo" assessments can determine the general health of an organism (abnormal behavior, weight loss, average life span) as well as tissue specific toxicology (kidney, liver, blood) and inflammation and oxidative responses. "In vitro" experiments are more popular than "in vivo" experiments because "in vitro" experiments are simpler to perform than "in vivo" experiments.
Toxicity and hazards in synthesis.
While AuNPs themselves appear to have low or negligible toxicity, and the literature shows that the toxicity has much more to do with the ligands than with the particles themselves, their synthesis involves hazardous chemicals. Sodium borohydride, a harsh reagent, is used to reduce the gold ions to gold metal. The gold ions usually come from chloroauric acid, a potent acid. Because of the high toxicity and hazard of the reagents used to synthesize AuNPs, the need for more “green” methods of synthesis arose.
Toxicity due to capping ligands.
Some of the capping ligands associated with AuNPs can be toxic while others are nontoxic. In gold nanorods (AuNRs), it has been shown that a strong cytotoxicity was associated with CTAB-stabilized AuNRs at low concentration, but it is thought that free CTAB was the culprit in toxicity. Modifications that overcoat these AuNRs reduce this toxicity in human colon cancer cells (HT-29) by preventing CTAB molecules from desorbing from the AuNRs back into the solution.
Ligand toxicity can also be seen in AuNPs. Compared to the 90% toxicity of HAuCl4 at the same concentration, AuNPs with carboxylate termini were shown to be non-toxic. Large AuNPs conjugated with biotin, cysteine, citrate, and glucose were not toxic in human leukemia cells (K562) for concentrations up to 0.25 M. Also, citrate-capped gold nanospheres (AuNSs) have been proven to be compatible with human blood and did not cause platelet aggregation or an immune response. However, citrate-capped gold nanoparticles of sizes 8-37 nm were found to be lethally toxic for mice, causing shorter lifespans, severe sickness, loss of appetite and weight, hair discoloration, and damage to the liver, spleen, and lungs; gold nanoparticles accumulated in the spleen and liver after passing through a section of the immune system.
There are mixed-views for polyethylene glycol (PEG)-modified AuNPs. These AuNPs were found to be toxic in mouse liver by injection, causing cell death and minor inflammation. However, AuNPs conjugated with PEG copolymers showed negligible toxicity towards human colon cells (Caco-2).
AuNP toxicity also depends on the overall charge of the ligands. In certain doses, AuNSs that have positively-charged ligands are toxic in monkey kidney cells (Cos-1), human red blood cells, and E. coli because of the AuNSs interaction with the negatively-charged cell membrane; AuNSs with negatively-charged ligands have been found to be nontoxic in these species.
In addition to the previously mentioned "in vivo" and "in vitro" experiments, other similar experiments have been performed. Alkylthiolate-AuNPs with trimethylammonium ligand termini mediate the translocation of DNA across mammalian cell membranes "in vitro" at a high level, which is detrimental to these cells. Corneal haze in rabbits has been healed "in vivo" by using polyethylenimine-capped gold nanoparticles that were transfected with a gene that promotes wound healing and inhibits corneal fibrosis.
Toxicity due to size of nanoparticles.
Toxicity in certain systems can also be dependent on the size of the nanoparticle. AuNSs size 1.4 nm were found to be toxic in human skin cancer cells (SK-Mel-28), human cervical cancer cells (HeLa), mouse fibroblast cells (L929), and mouse macrophages (J774A.1), while 0.8, 1.2, and 1.8 nm sized AuNSs were less toxic by a six-fold amount and 15 nm AuNSs were nontoxic. There is some evidence for AuNP buildup after injection in "in vivo" studies, but this is very size dependent. 1.8 nm AuNPs were found to be almost totally trapped in the lungs of rats. Different sized AuNPs were found to buildup in the blood, brain, stomach, pancreas, kidneys, liver, and spleen.
Biosafety and biokinetics investigations on biodegradable ultrasmall-in-nano architectures have demonstrated that gold nanoparticles are able to avoid metal accumulation in organisms through escaping by the renal pathway.
Synthesis.
Generally, gold nanoparticles are produced in a liquid ("liquid chemical methods") by reduction of chloroauric acid (H[AuCl4]). To prevent the particles from aggregating, stabilizing agents are added. Citrate acts both as the reducing agent and colloidal stabilizer.
They can be functionalized with various organic ligands to create organic-inorganic hybrids with advanced functionality.
Turkevich method.
This simple method was pioneered by J. Turkevich et al. in 1951 and refined by G. Frens in the 1970s. It produces modestly monodisperse spherical gold nanoparticles of around 10–20 nm in diameter. Larger particles can be produced, but at the cost of monodispersity and shape. In this method, hot chloroauric acid is treated with sodium citrate solution, producing colloidal gold. The Turkevich reaction proceeds via formation of transient gold nanowires. These gold nanowires are responsible for the dark appearance of the reaction solution before it turns ruby-red.
Capping agents.
A capping agent is used during nanoparticle synthesis to inhibit particle growth and aggregation. The chemical blocks or reduces reactivity at the periphery of the particle—a good capping agent has a high affinity for the new nuclei. Citrate ions or tannic acid function both as a reducing agent and a capping agent. Less sodium citrate results in larger particles.
Brust-Schiffrin method.
This method was discovered by Brust and Schiffrin in the early 1990s, and can be used to produce gold nanoparticles in organic liquids that are normally not miscible with water (like toluene). It involves the reaction of a chloroauric acid solution with tetraoctylammonium bromide (TOAB) solution in toluene and sodium borohydride as an anti-coagulant and a reducing agent, respectively.
Here, the gold nanoparticles will be around 5–6 nm. NaBH4 is the reducing agent, and TOAB is both the phase transfer catalyst and the stabilizing agent.
TOAB does not bind to the gold nanoparticles particularly strongly, so the particles will aggregate gradually over the course of approximately two weeks. To prevent this, one can add a stronger binding agent, like a thiol (in particular, alkanethiols), which will bind to gold, producing a near-permanent solution. Alkanethiol protected gold nanoparticles can be precipitated and then redissolved. Thiols are better binding agents because there is a strong affinity for the gold-sulfur bonds that form when the two substances react with each other. Tetra-dodecanethiol is a commonly used strong binding agent to synthesize smaller particles.
Some of the phase transfer agent may remain bound to the purified nanoparticles, which may affect physical properties such as solubility. In order to remove as much of this agent as possible, the nanoparticles must be further purified by Soxhlet extraction.
Perrault method.
This approach, discovered by Perrault and Chan in 2009, uses hydroquinone to reduce HAuCl4 in an aqueous solution that contains 15 nm gold nanoparticle seeds. This seed-based method of synthesis is similar to that used in photographic film development, in which silver grains within the film grow through addition of reduced silver onto their surface. Likewise, gold nanoparticles can act in conjunction with hydroquinone to catalyze reduction of ionic gold onto their surface. The presence of a stabilizer such as citrate results in controlled deposition of gold atoms onto the particles, and growth. Typically, the nanoparticle seeds are produced using the citrate method. The hydroquinone method complements that of Frens, as it extends the range of monodispersed spherical particle sizes that can be produced. Whereas the Frens method is ideal for particles of 12–20 nm, the hydroquinone method can produce particles of at least 30–300 nm.
Martin method.
This simple method, discovered by Martin and Eah in 2010, generates nearly monodisperse "naked" gold nanoparticles in water. Precisely controlling the reduction stoichiometry by adjusting the ratio of NaBH4-NaOH ions to HAuCl4-HCl ions within the "sweet zone," along with heating, enables reproducible diameter tuning between 3–6 nm. The aqueous particles are colloidally stable due to their high charge from the excess ions in solution. These particles can be coated with various hydrophilic functionalities, or mixed with hydrophobic ligands for applications in non-polar solvents. In non-polar solvents the nanoparticles remain highly charged, and self-assemble on liquid droplets to form 2D monolayer films of monodisperse nanoparticles.
Nanotech studies.
"Bacillus licheniformis" can be used in synthesis of gold nanocubes with sizes between 10 and 100 nanometres. Gold nanoparticles are usually synthesized at high temperatures in organic solvents or using toxic reagents. The bacteria produce them in much milder conditions.
Navarro et al. method.
For particles larger than 30 nm, control of particle size with a low polydispersity of spherical gold nanoparticles remains challenging. In order to provide maximum control on the NP structure, Navarro and co-workers used a modified Turkevich-Frens procedure using sodium acetylacetonate as the reducing agent and sodium citrate as the stabilizer.
Sonolysis.
Another method for the experimental generation of gold particles is by sonolysis. The first method of this type was invented by Baigent and Müller. This work pioneered the use of ultrasound to provide the energy for the processes involved and allowed the creation of gold particles with a diameter of under 10 nm. In another ultrasound-based method, an aqueous solution of HAuCl4 is reacted with glucose; the reducing agents are hydroxyl radicals and sugar pyrolysis radicals (forming at the interfacial region between the collapsing cavities and the bulk water), and the morphology obtained is that of nanoribbons with width 30–50 nm and length of several micrometers. These ribbons are very flexible and can bend with angles larger than 90°. When glucose is replaced by cyclodextrin (a glucose oligomer), only spherical gold particles are obtained, suggesting that glucose is essential in directing the morphology toward a ribbon.
Block copolymer-mediated method.
An economical, environmentally benign and fast synthesis methodology for gold nanoparticles using block copolymer has been developed by Sakai et al. In this synthesis methodology, block copolymer plays the dual role of a reducing agent as well as a stabilizing agent. The formation of gold nanoparticles comprises three main steps: reduction of gold salt ions by block copolymers in the solution and formation of gold clusters, adsorption of block copolymers on gold clusters and further reduction of gold salt ions on the surfaces of these gold clusters for the stepwise growth of gold particles, and finally stabilization of the particles by block copolymers. But this method usually has a limited yield (nanoparticle concentration), which does not increase with the increase in the gold salt concentration. Ray et al. improved this synthesis method by enhancing the nanoparticle yield manyfold at ambient temperature.
Applications.
Antibiotic conjugated nanoparticle synthesis.
Antibiotic functionalized metal nanoparticles have been widely studied as a mode to treat multi-drug resistant bacterial strains. For example, kanamycin capped gold-nanoparticles (Kan-AuPs) showed broad spectrum dose dependent antibacterial activity against both gram positive and gram negative bacterial strains in comparison to kanamycin alone.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\pm"
},
{
"math_id": 1,
"text": "^{5}"
}
] | https://en.wikipedia.org/wiki?curid=886856 |
886866 | Gelfond–Schneider constant | The Gelfond–Schneider constant or Hilbert number is two to the power of the square root of two:
2^√2 ≈ ...
which was proved to be a transcendental number by Rodion Kuzmin in 1930.
In 1934, Aleksandr Gelfond and Theodor Schneider independently proved the more general "Gelfond–Schneider theorem", which solved the part of Hilbert's seventh problem described below.
Properties.
The square root of the Gelfond–Schneider constant is the transcendental number
formula_0 ...
This same constant can be used to prove that "an irrational elevated to an irrational power may be rational", even without first proving its transcendence. The proof proceeds as follows: either formula_1 is rational, which proves the theorem, or it is irrational (as it turns out to be), and then
formula_2
is an irrational number raised to an irrational power that is rational, which proves the theorem. The proof is not constructive, as it does not say which of the two cases is true, but it is much simpler than Kuzmin's proof.
Hilbert's seventh problem.
Part of the seventh of Hilbert's twenty-three problems posed in 1900 was to prove, or find a counterexample to, the claim that "a"^"b" is always transcendental for algebraic "a" ≠ 0, 1 and irrational algebraic "b". In the address he gave two explicit examples, one of them being the Gelfond–Schneider constant 2^√2.
In 1919, he gave a lecture on number theory and spoke of three conjectures: the Riemann hypothesis, Fermat's Last Theorem, and the transcendence of 2^√2. He mentioned to the audience that he didn't expect anyone in the hall to live long enough to see a proof of this result. But the proof of this number's transcendence was published by Kuzmin in 1930, well within Hilbert's own lifetime. Namely, Kuzmin proved the case where the exponent "b" is a real quadratic irrational, which was later extended to an arbitrary algebraic irrational "b" by Gelfond and by Schneider.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{2^{\\sqrt{2}}}=\\sqrt{2}^{\\sqrt{2}} \\approx "
},
{
"math_id": 1,
"text": "\\sqrt{2}^\\sqrt{2}"
},
{
"math_id": 2,
"text": "\\left(\\sqrt{2}^{\\sqrt{2}}\\right)^{\\sqrt{2}}=\\sqrt{2}^{\\sqrt{2}\\times\\sqrt{2}}=\\sqrt{2}^2=2"
}
] | https://en.wikipedia.org/wiki?curid=886866 |
886876 | Crowd simulation | Model of movement
Crowd simulation is the process of simulating the movement (or dynamics) of a large number of entities or characters. It is commonly used to create virtual scenes for visual media like films and video games, and is also used in crisis training, architecture and urban planning, and evacuation simulation.
Crowd simulation may focus on aspects that target different applications. For realistic and fast rendering of a crowd for visual media or virtual cinematography, reduction of the complexity of the 3D scene and image-based rendering are used, while variations (changes) in appearance help present a realistic population.
In games and applications intended to replicate real-life human crowd movement, like in evacuation simulations, simulated agents may need to navigate towards a goal, avoid collisions, and exhibit other human-like behavior. Many crowd steering algorithms have been developed to lead simulated crowds to their goals realistically. Some more general systems are researched that can support different kinds of agents (like cars and pedestrians), different levels of abstraction (like individual and continuum), agents interacting with smart objects, and more complex physical and social dynamics.
History.
There has always been a deep-seated interest in understanding and gaining control of the motion and behavior of crowds of people. Many major advancements have taken place since the beginnings of research within the realm of crowd simulation, and new findings continue to be made and published that enhance the scalability, flexibility, applicability, and realism of simulations:
In 1987, behavioral animation was introduced and developed by Craig Reynolds. He had simulated flocks of birds alongside schools of fish for the purpose of studying group intuition and movement. All agents within these simulations were given direct access to the respective positions and velocities of their surrounding agents. The theorization and study set forth by Reynolds was improved and built upon in 1994 by Xiaoyuan Tu, Demetri Terzopoulos and Radek Grzeszczuk. The realistic quality of simulation was engaged with as the individual agents were equipped with synthetic vision and a general view of the environment within which they resided, allowing for a perceptual awareness within their dynamic habitats.
Initial research in the field of crowd simulation began in 1997 with Daniel Thalmann's supervision of Soraia Raupp Musse's PhD thesis. They present a new model of crowd behavior in order to create a simulation of generic populations. Here a relation is drawn between the autonomous behavior of the individual within the crowd and the emergent behavior originating from this.
In 1999, individualistic navigation began its course within the realm of crowd simulation via continued research of Craig Reynolds. Steering behaviors are proven to play a large role in the process of automating agents within a simulation. Reynolds states the processes of low-level locomotion to be dependent and reliant on mid-level steering behaviors and higher-level goal states and path finding strategies. Building off of the advanced work of Reynolds, Musse and Thalmann began to study the modeling of real time simulations of these crowds, and their applications to human behavior. The control of human crowds was designated as a hierarchical organization with levels of autonomy amongst agents. This marks the beginnings of modeling individual behavior in its most elementary form on humanoid agents or virtual humans.
Coinciding with publications regarding human behavior models and simulations of group behaviors, Matt Anderson, Eric McDaniel, and Stephen Chenney's proposal of constraints on behavior gained popularity. They showed that constraints could be placed on group animations at any time within the simulation. Applying constraints to the behavioral model is a two-fold process: first the initial set of goal trajectories coinciding with the constraints is determined, and then behavioral rules are applied to these paths to select those which do not violate them.
Correlating and building off of the findings proposed in his work with Musse, Thalmann, working alongside Branislav Ulicny and Pablo de Heras Ciechomski, proposed a new model which allowed for interactive authoring of agents at the level of an individual, a group of agents and the entirety of a crowd. A brush metaphor is introduced to distribute, model and control crowd members in real-time with immediate feedback.
Crowd dynamics.
One of the major goals in crowd simulation is to steer crowds realistically and recreate human dynamic behaviors.
There exist several overarching approaches to crowd simulation and AI, each one providing advantages and disadvantages based on crowd size and time scale. Time scale refers to how the objective of the simulation affects the length of the simulation. For example, researching social questions such as how ideologies are spread amongst a population will result in a much longer running simulation, since such an event can span up to months or years. Using those two characteristics, researchers have attempted to apply classifications to better evaluate and organize existing crowd simulators.
Particle systems.
One way to simulate virtual crowds is to use a particle system. Particle systems were first introduced in computer graphics by W. T. Reeves in 1983. A particle system is a collection of a number of individual elements or "particles". Each particle is able to act autonomously and is assigned a set of physical attributes (such as color, size and velocity).
A particle system is dynamic, in that the movements of the particles change over time. A particle system's movement is what makes it so desirable and easy to implement. Calculating the movements of these particles takes very little time. It simply involves physics: the sum of all the forces acting on a particle determines its motion. These forces include gravity, friction, force from a collision, and social forces such as the attractive force of a goal.
Usually each particle has a velocity vector and a position vector, containing information about the particle's current velocity and position respectively. The particle's next position is calculated by adding its velocity vector to its position vector, a very simple operation (again why particle systems are so desirable). Its velocity vector changes over time, in response to the forces acting on the particle. For example, a collision with another particle will cause it to change direction.
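As a minimal illustrative sketch (not taken from any particular crowd simulator; the force terms and constants are invented for illustration), the per-step update described above can be written as follows in Python:

```python
import numpy as np

def step_particles(positions, velocities, goal, dt=0.1, mass=1.0):
    """Advance a particle crowd by one time step.

    positions, velocities: (N, 2) arrays of particle states.
    goal: (2,) array, an attraction point acting as a simple 'social force'.
    """
    # Sum of forces acting on each particle: goal attraction plus friction.
    goal_force = 1.5 * (goal - positions)      # pull toward the goal
    friction = -0.5 * velocities               # damping
    forces = goal_force + friction

    # Velocity changes in response to the forces (F = m a).
    velocities = velocities + (forces / mass) * dt
    # Next position = position vector plus (scaled) velocity vector.
    positions = positions + velocities * dt
    return positions, velocities

# Example: 100 particles drifting toward a goal at (10, 10).
pos = np.random.rand(100, 2) * 5.0
vel = np.zeros((100, 2))
for _ in range(50):
    pos, vel = step_particles(pos, vel, goal=np.array([10.0, 10.0]))
```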
Particle systems have been widely used in films for effects such as explosions, for water effects in the 2000 movie "The Perfect Storm", and for simulated gas in the 1994 film "The Mask".
Particle systems, however, do have some drawbacks. It can be a bad idea to use a particle system to simulate agents in a crowd that the director will move on command, as determining which particles belong to the agent and which do not is very difficult.
Algorithm by Patil and Van Den Berg.
This algorithm was designed for relatively simplistic crowds, where each agent in the crowd only desires to get to its own goal destination while also avoiding obstacles. This algorithm could be used for simulating a crowd in Times Square.
The most important and distinctive feature of Patil's algorithm is that it utilizes the concept of "navigation fields" for directing agents. This is different from a guidance field; a guidance field is an area around the agent in which the agent is capable of "seeing"/detecting information. Guidance fields are typically used for avoiding obstacles, dynamic obstacles (obstacles that move) in particular. Every agent possesses its own guidance field. A navigation field, on the other hand, is a vector field which calculates the minimum cost path for every agent so that every agent arrives at its own goal position.
The navigation field can only be used properly when a path exists from every free (non-obstacle) position in the environment to one of the goal positions. The navigation field is computed using coordinates of the static objects in the environment, goal positions for each agent, and the guidance field for each agent. In order to guarantee that every agent reaches its own goal the navigation field must be free of local minima, except for the presence of sinks at the specified goals.
The running time of computing the navigation field is formula_0, where m × n is the grid dimension (similar to Dijkstra's algorithm). Thus, the algorithm is only dependent on the grid resolution and not dependent on the number of agents in the environment. However, this algorithm has a high memory cost.
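A hedged sketch of the core idea, not the authors' exact formulation (which also folds in per-agent guidance fields and dynamic obstacles): run a Dijkstra-style expansion outward from the goal cells over an m × n grid, then read the navigation direction at each free cell as the step toward the cheapest neighbor.

```python
import heapq
import numpy as np

def navigation_field(occupancy, goals):
    """Cost-to-go over a grid; occupancy[i, j] == 1 marks an obstacle.

    Returns a cost grid whose downhill direction at every free cell is the
    minimum-cost step toward the nearest goal (Dijkstra-like expansion).
    """
    m, n = occupancy.shape
    cost = np.full((m, n), np.inf)
    heap = []
    for g in goals:                             # goals act as sinks with cost 0
        cost[g] = 0.0
        heapq.heappush(heap, (0.0, g))
    while heap:
        c, (i, j) = heapq.heappop(heap)
        if c > cost[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < m and 0 <= nj < n and occupancy[ni, nj] == 0:
                if c + 1.0 < cost[ni, nj]:
                    cost[ni, nj] = c + 1.0
                    heapq.heappush(heap, (cost[ni, nj], (ni, nj)))
    return cost

def navigation_step(cost, cell):
    """Direction of steepest cost decrease: the agent's next grid step."""
    i, j = cell
    nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    nbrs = [c for c in nbrs
            if 0 <= c[0] < cost.shape[0] and 0 <= c[1] < cost.shape[1]]
    return min(nbrs, key=lambda c: cost[c])

# Example: a 20x20 open room with a single goal cell in the corner.
occ = np.zeros((20, 20), dtype=int)
field = navigation_field(occ, goals=[(19, 19)])
print(navigation_step(field, (0, 0)))
```

Because the field is computed once per goal configuration, the per-agent cost at run time is independent of the number of agents, at the price of storing the full grid, which reflects the high memory cost noted above.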
Individual behavior modelling.
One set of techniques for AI-based crowd simulation is to model crowd behavior by advanced simulation of individual agent motivations and decision-making. Generally, this means each agent is assigned some set of variables that measure various traits or statuses such as stress, personality, or different goals. This results in more realistic crowd behavior though may be more computationally intensive than simpler techniques.
Personality-based models.
One method of creating individualistic behavior for crowd agents is through the use of personality traits. Each agent may have certain aspects of their personality tuned based on a formula that associates aspects such as aggressiveness or impulsiveness with variables that govern the agents' behavior. One way this association can be found is through a subjective study in which agents are randomly assigned values for these variables and participants are asked to describe each agent in terms of these personality traits. A regression may then be done to determine a correlation between these traits and the agent variables. The personality traits can then be tuned and have an appropriate effect on agent behavior.
The OCEAN personality model has been used to define a mapping between personality traits and crowd simulation parameters. Automating crowd parameter tuning with personality traits provides easy authoring of scenarios with heterogeneous crowds.
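A hedged sketch of how such a mapping could be fitted: given per-agent simulation parameters and averaged participant ratings of perceived traits (both data sets invented here purely for illustration), an ordinary least-squares regression yields a linear map that can then be reused to tune new agents.

```python
import numpy as np

# Hypothetical data: rows are agents.
# Columns of params: [preferred_speed, personal_space, pushiness]
params = np.array([[1.2, 0.4, 0.8],
                   [0.8, 0.9, 0.1],
                   [1.5, 0.3, 0.9],
                   [1.0, 0.6, 0.4]])
# Columns of ratings: perceived [aggressiveness, impulsiveness] from a study.
ratings = np.array([[0.7, 0.6],
                    [0.2, 0.3],
                    [0.9, 0.8],
                    [0.4, 0.5]])

# Least-squares fit: ratings ~ params @ W (one column of weights per trait).
W, *_ = np.linalg.lstsq(params, ratings, rcond=None)

# Predict how a new parameter setting would be perceived, or search the
# parameter space for a target personality profile.
new_agent = np.array([1.3, 0.5, 0.7])
predicted_traits = new_agent @ W
print(predicted_traits)
```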
Stress-based model.
The behavior of crowds in high-stress situations can be modeled using General Adaptation Syndrome theory. Agent behavior is affected by various stressors from their environment categorized into four prototypes: time pressure, area pressure, positional stressors, and interpersonal stressors, each with associated mathematical models.
"Time pressure" refers to stressors related to a time limit in reaching a particular goal. An example would be a street crossing with a timed walk signal or boarding a train before the doors are closed. This prototype is modeled by the following formula:
formula_1
where formula_2 is the intensity of the time pressure as a function of the estimated time to reach the goal formula_3 and a time constraint formula_4.
"Area pressure" refers to stressors as a result of an environmental condition. Examples would be noise or heat in an area. The intensity of this stressor is constant over a particular area and is modeled by the following formula:
formula_5
where formula_6 is the intensity of the area pressure, formula_7 is the position of the agent in an area formula_8, and formula_9 is a constant.
"Positional stressors" refer to stressors associated with a local source of stress. The intensity of this stressor increases as an agent approaches the source of the stress. An example would be a fire or a dynamic object such as an assailant. It can be modeled by the following formula:
formula_10
where formula_11 is the intensity of the positional stressor, formula_12 is the position of the agent and formula_13 is the position of the stressor. Alternatively, stressors that generate high stress over a large area (such as a fire) can be modeled using a Gaussian distribution with standard deviation formula_14:
formula_15
"Interpersonal stressors" are stressors as a result of crowding by nearby agents. It can be modeled by the following formula:
formula_16
where formula_17 is the intensity of the interpersonal stressor, formula_18 is the current number of neighbors within a unit space and formula_19 is the preferred number of neighbors within a unit space for that particular agent.
The "perceived stress" follows Steven's Law and is modeled by the formula:
formula_20
where formula_21 is the perceived stress for a stress level formula_22, formula_23 is a scale factor, and formula_24 is an exponent depending on the stressor type.
An agent's "stress response" can be found with the following formula:
formula_25
where formula_26 is the stress response capped at a maximum value of formula_27 and formula_28 is the maximum rate at which an agent's stress response can change.
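The four stressor prototypes, the perceived stress, and the capped stress response can be combined in a straightforward way; the following Python sketch follows the formulas as given above, with illustrative constants and function names that are not taken from the cited work.

```python
import numpy as np

def perceived_stress(intensity, k=1.0, n=1.0):
    """Stevens-style power law: psi(I) = k * I**n."""
    return k * intensity ** n

def stressor_intensities(t_estimated, t_limit, in_area, area_const,
                         agent_pos, stressor_pos, n_neighbors, n_preferred):
    I_time = max(t_estimated - t_limit, 0.0)            # time pressure
    I_area = area_const if in_area else 0.0             # area pressure
    # Positional stressor, using the distance form given in the text.
    I_pos = np.linalg.norm(np.asarray(agent_pos, float)
                           - np.asarray(stressor_pos, float))
    I_inter = max(n_neighbors - n_preferred, 0.0)       # interpersonal crowding
    return I_time, I_area, I_pos, I_inter

def update_stress_response(S, psi, alpha=0.5, beta=10.0, dt=0.1):
    """Move S toward the perceived stress psi at a rate of at most alpha,
    capping the response at the maximum value beta."""
    dS = np.clip(psi - S, -alpha * dt, alpha * dt)
    return min(S + dS, beta)

# One update for a single agent (illustrative numbers only).
I = stressor_intensities(t_estimated=12.0, t_limit=10.0, in_area=True,
                         area_const=0.5, agent_pos=(0, 0), stressor_pos=(3, 4),
                         n_neighbors=6, n_preferred=3)
psi_total = sum(perceived_stress(i) for i in I)
S = update_stress_response(S=0.0, psi=psi_total)
```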
Examples of notable crowd AI simulation can be seen in New Line Cinema's "The Lord of the Rings" films, where AI armies of thousands of characters battle each other. This crowd simulation was done using Weta Digital's Massive software.
Sociology.
"Crowd simulation" can also refer to simulations based on group dynamics, crowd psychology, and even social etiquette. In this case, the focus is on the behavior of the crowd, not necessarily on the visual realism of the simulation. Crowds have been studied as a scientific interest since the end of the 19th century. A lot of research has focused on the collective social behavior of people at social gatherings, assemblies, protests, rebellions, concerts, sporting events and religious ceremonies. Gaining insight into natural human behavior under varying types of stressful situations will allow better models to be created which can be used to develop crowd controlling strategies, often in public safety planning.
"Emergency response teams" such as policemen, the National Guard, military and even volunteers must undergo some type of crowd control training. Using researched principles of human behavior in crowds can give disaster training designers more elements to incorporate to create realistic simulated disasters. Crowd behavior can be observed during both panic and non-panic conditions. Military programs are looking more towards simulated training involving emergency responses due to their cost-effective technology, as well as how effective the learning can be transferred to the real world. Many events that may start out controlled can have a twisting event that turns them into catastrophic situations, where decisions need to be made on the spot. It is these situations in which crowd dynamical understanding could play a vital role in reducing the potential for chaos.
"Modeling" techniques of crowds vary from holistic or network approaches to understanding individualistic or behavioral aspects of each agent. For example, the Social Force Model describes a need for individuals to find a balance between social interaction and physical interaction. An approach that incorporates both aspects, and is able to adapt depending on the situation, would better describe natural human behavior, always incorporating some measure of unpredictability. With the use of multi-agent models understanding these complex behaviors has become a much more comprehensible task. With the use of this type of software, systems can now be tested under extreme conditions, and simulate conditions over long periods of time in the matter of seconds.
In some situations, the behavior of swarms of non-human animals can be used as an experimental model of crowd behavior. The panic behavior of ants when exposed to a repellent chemical in a confined space with limited exit routes has been found to have both similarities and differences to equivalent human behavior.
Modeling individual behaviors.
Hacohen, Shoval and Shvalb formulated the drivers-pedestrians dynamics at congested conflict spots. In such scenarios, the drivers and/or pedestrians do not closely follow the traffic laws. The model is based on the Probabilistic Navigation function (PNF), which was originally developed for robotics motion planning. The algorithm constructs a trajectory according to the probability for collision at each point in the entire crossing area. The pedestrian then follow a trajectory that locally minimizes their perceived probability for collision.
Helbing proposed a model based on physics using a particle system and socio-psychological forces in order to describe human crowd behavior in panic situation, this is now called the Helbing Model. His work is based on how the average person would react in a certain situation. Although this is a good model, there are always different types of people present in the crowd and they each have their own individual characteristics as well as how they act in a group structure. For instance, one person may not react to a panic situation, while another may stop walking and interfere in the crowd dynamics as a whole. Furthermore, depending on the group structure, the individual action can change because the agent is part of a group, for example, returning to a dangerous place in order to rescue a member of that group. Helbing's model can be generalized incorporating individualism, as proposed by Braun, Musse, Oliveira and Bodmann.
In order to tackle this problem, individuality should be assigned to each agent, allowing to deal with different types of behaviors. Another aspect to tackle this problem is the possibility to group people, forming these group causes people to change their behavior as a function of part of the group structure. Each agent (individual) can be defined according to the following parameters:
To model the effect of the dependence parameter with "individual agents", the equation is defined as:
formula_29
When evaluating the speed of the agent, it is clear that if the value of the dependence factor, DE, is equal to one, then the person would be fully disabled making him unable to move. If the dependence factor is equal to zero, then the person is able to run at his max speed.
Group formation is related to the Altruism force which is implemented as an interaction force between two or more agents who are part of the same family. Mathematically, it is described as the following:
formula_30
where:
"dij" represents the distance between two agents with the origin at the position of the agent;
"dip" is the distance vector point from the agents to the door's position "p" of the simulation environment;
"K" is a constant;
"eij" is the unitary vector with the origin at position i.
Consequently, the greater the parameter "ALi" of agent "i", the bigger will be "Fāi" which points to the agent "j" and has the high level of "DEj". When both agents are close enough to each other, the one with high "DE" (agent "j" in this example) adopts the value of agent "i" (formula_31). This means that the evacuation ability of agent "i" is shared with agent "j" and both start moving together.
By applying these equations in model testing using a normally distributed population, the results are fairly similar to those of the Helbing Model.
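A minimal sketch of the two ingredients above, the dependence-limited speed and the pairwise altruism force, might look as follows (the vector conventions and constants are illustrative assumptions, not taken from the cited model):

```python
import numpy as np

def agent_speed(v_max, DE):
    """v_i = (1 - DE) * v_max: full dependence (DE = 1) means no movement."""
    return (1.0 - DE) * v_max

def altruism_force(pos_i, AL_i, others, door, K=1.0):
    """Sum the altruism force that agent i feels toward dependent agents j.

    others: list of (pos_j, DE_j) pairs for family members of agent i.
    door: the exit position p used in the |d_ij - d_ip| term.
    """
    pos_i = np.asarray(pos_i, dtype=float)
    F = np.zeros(2)
    d_ip = np.linalg.norm(np.asarray(door, dtype=float) - pos_i)
    for pos_j, DE_j in others:
        d_vec = np.asarray(pos_j, dtype=float) - pos_i
        d_ij = np.linalg.norm(d_vec)
        if d_ij == 0.0:
            continue
        e_ij = d_vec / d_ij                 # unit vector from i toward j
        F += K * AL_i * DE_j * abs(d_ij - d_ip) * e_ij
    return F

# Agent i (altruistic, able-bodied) is pulled toward a highly dependent relative.
F_i = altruism_force(pos_i=(0.0, 0.0), AL_i=0.9,
                     others=[((4.0, 1.0), 0.8)], door=(10.0, 0.0))
speed_i = agent_speed(v_max=1.4, DE=0.1)
```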
The places where this would be helpful would be in an evacuation scenario. Take for example, an evacuation of a building in the case of a fire. Taking into account the characteristics of individual agents and their group performances, determining the outcome of how the crowd would exit the building is critically important in creating the layout of the building.
Leader behavior during evacuation simulations.
As described earlier, the Helbing Model is used as the basis for crowd behavior. This same type of behavior model is used for evacuation simulations.
In general, the first thing that has to be assumed is that not everyone has knowledge about the environment or about where there are and aren't hazards. From this assumption we can create three types of agents. The first type is a trained leader; this agent knows about the environment and is able to spread knowledge to other agents so they know how to exit from an environment. The next type of agent is an untrained leader; this agent does not know about the environment, but as the agent explores the environment and gets information from other types of leaders, the agent is able to spread the knowledge about the environment. The last type of agent is a follower; this type of agent can only take information from other leaders and is not able to share the information with other agents.
The implementation of these types of agents is fairly straightforward. The leaders in the environment have a map of the environment saved as one of their attributes. Untrained leaders and followers will start out with an empty map as their attribute. Untrained leaders and followers will start exploring an environment by themselves and create a map of walkable and unwalkable locations. Leaders and untrained leaders (once they have the knowledge) will share information with other agents depending on their proximity. They will share information about which points on the grid are blocked, the local sub-graphs and the dangers in the area.
Two types of search algorithms were tried for this implementation: random search and depth-first search. In a random search, each of the agents goes in any direction through the environment and tries to find a pathway out. In depth-first search, agents follow one path as far as it can go, then return and try another path if the path they traversed does not contain an exit. It was found that depth-first search gave a speed-up of 15 times versus random search.
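A hedged sketch of this scheme: leaders carry a map, untrained leaders and followers start with an empty one, agents within a given proximity merge what they know about blocked cells, and the depth-first exploration itself is a standard stack-based search. The data layout, names and thresholds are invented for illustration.

```python
def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def share_knowledge(agents, radius=2.0):
    """Leaders and untrained leaders pass their known blocked cells to
    nearby agents; followers only receive."""
    for a in agents:
        for b in agents:
            if a is b or dist(a["pos"], b["pos"]) > radius:
                continue
            if a["role"] in ("leader", "untrained_leader"):
                b["known_blocked"] |= a["known_blocked"]

def depth_first_escape(start, exits, neighbours):
    """Follow one path as far as possible, backtracking when it dead-ends."""
    stack, visited, parent = [start], {start}, {}
    while stack:
        cell = stack.pop()
        if cell in exits:
            path = [cell]                       # reconstruct the path back
            while cell in parent:
                cell = parent[cell]
                path.append(cell)
            return list(reversed(path))
        for nxt in neighbours(cell):
            if nxt not in visited:
                visited.add(nxt)
                parent[nxt] = cell
                stack.append(nxt)
    return None                                 # no exit reachable

# Example on a 3x3 open grid with an exit at (2, 2).
grid_nbrs = lambda c: [(c[0] + d, c[1] + e)
                       for d, e in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if 0 <= c[0] + d < 3 and 0 <= c[1] + e < 3]
print(depth_first_escape((0, 0), {(2, 2)}, grid_nbrs))
```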
Scalable simulations.
There are many different case situations that come into play in crowd simulations. Recently, crowd simulation has become essential for many virtual environment applications such as education, training, and entertainment. Many situations are based on the environment of the simulation or the behavior of the group of local agents. In virtual reality applications, every agent interacts with many other agents in the environment, calling for complex real-time interactions. Agents must respond to continuous changes in the environment, since agent behaviors allow complex interactions. A scalable architecture can manage large crowds while preserving behavioral detail and interactive rates. These situations indicate how the crowds will act in multiple complex scenarios while several different situations are being applied. A situation can be any circumstance that has typical local behaviors. We can categorize all situations into two different kinds.
"Spatial situation" is a situation that has a region where the environment affects the local agents. For instance, a crowd waiting in line for a ticket booth would be displaying a spatial situation. Other examples may be a bus stop or an ATM where characters act upon their environment. Therefore, we would consider 'bus stop' as the situation if the behavior of the agents are to be getting on or off a bus.
"Non-Spatial situation" has no region in the environment because this only involves the behavior of the crowd. The relationship of the local agents is an important factor to consider when determining behavior. An example would be a group of friends walking together. Typical behavior of characters that are friends would all move along with each other. This means that 'friendship' would be the situation among the typical behavior of walking together.
The structure of any situation is built upon four components, Behavior functions, Sensors, States, and Event Rules. Behavior functions represent what the characters behaviors are specific to the situation. Sensors are the sensing capability for agents to see and respond to events. States are the different motions and state transitions used only for the local behaviors. Event rule is the way to connect different events to their specific behaviors. While a character is being put into a situation, these four components are considered at the same time. For spatial situations, the components are added when the individual initially enters the environment that influences the character. For non-spatial situations, the character is affected only once the user assigns the situation to the character. The four components are removed when the agent is taken away from its situations region or the situation itself is removed. The dynamic adding and removing of the situations lets us achieve scalable agents.
Human-like behaviors and crowd AI.
To simulate more aspects of human activities in a crowd, more is needed than path and motion planning. Complex social interactions, smart object manipulation, and hybrid models are challenges in this area. Simulated crowd behavior is inspired by the flow of real-world crowds. Behavioral patterns, movement speeds and densities, and anomalies are analyzed across many environments and building types. Individuals are tracked and their movements are documented such that algorithms can be derived and implemented into crowd simulations.
Individual entities in a crowd are also called agents. In order for a crowd to behave realistically each agent should act autonomously (be capable of acting independently of the other agents). This idea is referred to as an "agent-based model." Moreover, it is usually desired that the agents act with some degree of intelligence (i.e. the agents should not perform actions that would cause them to harm themselves). For agents to make intelligent and realistic decisions, they should act in accordance with their surrounding environment, react to its changes, and react to the other agents. Terzopoulos and his students have pioneered agent-based models of pedestrians, an approach referred to as multi-human simulation to distinguish it from conventional crowd simulation.
Rule-based AI.
In rule-based AI, virtual agents follow scripts: "if this happens, do that". This is a good approach to take if agents with different roles are required, such as a main character and several background characters. This type of AI is usually implemented with a hierarchy, such as in Maslow's hierarchy of needs, where the lower the need lies in the hierarchy, the stronger it is.
For example, consider a student walking to class who encounters an explosion and runs away. The theory behind this is that initially the first four levels of his needs are met, and the student is acting according to his need for self-actualization. When the explosion happens, his safety is threatened, which is a much stronger need, causing him to act according to that need.
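A hedged sketch of such a prioritized rule set, with the needs ordered Maslow-style from strongest to weakest; the specific needs, thresholds and actions are illustrative only.

```python
# Needs ordered from strongest (checked first) to weakest.
RULES = [
    ("physiological",      lambda s: s["hunger"] > 0.8, "find food"),
    ("safety",             lambda s: s["threat"],       "run away"),
    ("social",             lambda s: s["lonely"],       "join group"),
    ("self_actualization", lambda s: True,              "walk to class"),
]

def choose_action(state):
    """Return the action for the strongest unmet need ("if this, do that")."""
    for _, is_unmet, action in RULES:
        if is_unmet(state):
            return action
    return "idle"

student = {"hunger": 0.1, "threat": False, "lonely": False}
print(choose_action(student))     # -> "walk to class"
student["threat"] = True          # an explosion happens nearby
print(choose_action(student))     # -> "run away"
```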
This approach is scalable, and can be applied to crowds with a large number of agents. Rule-based AI, however, does have some drawbacks. Most notably the behavior of the agents can become very predictable, which may cause a crowd to behave unrealistically.
Learning AI.
In learning AI, virtual characters behave in ways that have been tested to help them achieve their goals. Agents experiment with their environment or a sample environment which is similar to their real one.
Agents perform a variety of actions and learn from their mistakes. Each agent alters its behavior in response to rewards and punishments it receives from the environment. Over time, each agent would develop behaviors that are consistently more likely to earn high rewards.
If this approach is used, along with a large number of possible behaviors and a complex environment, agents will act in a realistic and unpredictable fashion.
Algorithms.
There are a wide variety of machine learning algorithms that can be applied to crowd simulations.
Q-Learning is an algorithm residing under machine learning's subfield known as reinforcement learning. A basic overview of the algorithm is that each action is assigned a Q value and each agent is given the directive to always perform the action with the highest Q value. In this case learning applies to the way in which Q values are assigned, which is entirely reward based. When an agent comes in contact with a state, s, and action, a, the algorithm then estimates the total reward value that the agent would receive for performing that state-action pair. After calculating this data, it is then stored in the agent's knowledge and the agent proceeds to act from there.
The agent will constantly alter its behavior depending on the best Q value available to it. And as it explores more and more of the environment, it will eventually learn the most optimal state action pairs to perform in almost every situation.
The following function outlines the bulk of the algorithm:
"Q(s, a) ←− r + maxaQ(s', a')"
Given a state s and action a, r and s' are the reward and state obtained after performing (s, a), and a' ranges over all possible actions.
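A minimal tabular sketch of this update rule (adding the usual learning rate and discount factor, which the simplified expression above omits; the states, actions and rewards are invented for illustration):

```python
import random
from collections import defaultdict

Q = defaultdict(float)            # Q[(state, action)] -> estimated value

def choose(state, actions, epsilon=0.1):
    """Mostly pick the action with the highest Q value, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions, alpha=0.5, gamma=0.9):
    """Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])

# One learning step for an agent deciding how to cross a corridor.
actions = ["keep_walking", "sidestep", "stop"]
s = "facing_oncoming_agent"
a = choose(s, actions)
r = -1.0 if a == "keep_walking" else 0.5   # collision penalised, avoidance rewarded
update(s, a, r, next_state="clear_path", actions=actions)
```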
Crowd rendering and animation.
Rendering and animating a large number of agents realistically, especially in real time, is challenging. To reduce the complexity of 3D rendering of large-scale crowds, techniques like culling (discarding unimportant objects), impostors (image-based rendering) and decreasing levels of detail have been used.
Variations in appearance, body shape and size, accessories and behavior (social or cultural) exist in real crowds, and lack of variety affects the realism of visual simulations. Existing systems can create virtual crowds with varying texture, color, size, shape and animation.
Real world applications.
Virtual cinematography.
Crowd simulations have been used widely across films as a cost-effective and realistic alternative to hiring actors and capturing shots that would otherwise be unrealistic. A significant example of its use lies in The Lord of the Rings (film series). One of the most glaring problems for the production team in the initial stages was large-scale battles, as the author of the novels, J. R. R. Tolkien, envisioned them to have at least 50,000 participants. Such a number would have been unrealistic had they attempted to hire only real actors and actresses. Instead they decided to use CG to simulate these scenes through the use of the Multiple Agent Simulation System in a Virtual Environment, otherwise known as MASSIVE. The Human Logic Engine based Maya plugin for crowd simulation, Miarmy, was used for the development of these sequences. The software allowed the filmmakers to provide each character model an agent based A.I. that could utilize a library of 350 animations. Based on sight, hearing, and touch parameters generated from the simulation, agents would react uniquely to each situation. Thus each simulation of the scene was unpredictable. The final product clearly displayed the advantages of using crowd simulation software.
Urban planning.
The development of crowd simulation software has become a modern and useful tool in designing urban environments. Whereas the traditional method of urban planning relies on maps and abstract sketches, a digital simulation is more capable of conveying both form and intent of design from architect to pedestrian. For example, street signs and traffic lights are localized visual cues that influence pedestrians to move and behave accordingly. Following this logic, a person is able to move from point A to point B in a way that is efficient and that a collective group of people can operate more effectively as a result. In a broader sense, bus systems and roadside restaurants serve a spatial purpose in their locations through an understanding of human movement patterns. The SimCity video game series exemplifies this concept in a more simplistic manner. In this series, the player assigns city development in designated zones while maintaining a healthy budget. The progression from empty land to a bustling city is fully controlled by the player's choices and the digital citizens behave as according to the city's design and events.
Evacuation and riot handling.
Simulated realistic crowds can be used in training for riots handling, architecture, safety science (evacuation planning).
Military.
Because crowd simulations are so prevalent in public planning and in maintaining general order during chaotic situations, many applications can be drawn for governmental and military simulations. Crowd modeling is essential in police and military simulation in order to train officers and soldiers to deal with mass gatherings of people. Not only do offensive combatants prove to be difficult for these individuals to handle, but noncombatant crowds play significant roles in making these aggressive situations more out of control. Game technology is used in order to simulate these situations for soldiers and technicians to practice their skills.
Sociology.
The behavior of a modeled crowd plays a prominent role in analytical matters. These dynamics rely on the physical behaviors of individual agents within a crowd rather than the visual reality of the model itself. The social behaviors of people within these constructs have been of interest for many years, and the sociological concepts which underpin these interactions are constantly studied. The simulation of crowds in different situations allows for sociological study of real life gatherings in a variety of arrangements and locations. The variations in human behavior in situations varying in stress-levels allows for the further development and creation of crowd control strategies which can be more specifically applied to situations rather than generalized.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(m*n*log(mn))"
},
{
"math_id": 1,
"text": "I_t = max(t_e - t_a, 0)"
},
{
"math_id": 2,
"text": "I_t"
},
{
"math_id": 3,
"text": "t_e"
},
{
"math_id": 4,
"text": "t_a"
},
{
"math_id": 5,
"text": "I_a = \\begin{cases} c & \\text{if }p_a\\in A \\\\ 0 & \\text{if }p_a\\not\\in A \\end{cases}"
},
{
"math_id": 6,
"text": "I_a"
},
{
"math_id": 7,
"text": "p_a"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "c"
},
{
"math_id": 10,
"text": "I_p = \\lVert p_a - p_s \\rVert"
},
{
"math_id": 11,
"text": "I_p"
},
{
"math_id": 12,
"text": "p_a"
},
{
"math_id": 13,
"text": "p_s"
},
{
"math_id": 14,
"text": "\\sigma"
},
{
"math_id": 15,
"text": "I_p = \\mathcal{N}(p_a - p_s, \\sigma)"
},
{
"math_id": 16,
"text": "I_i = max(n_c - n_p, 0)"
},
{
"math_id": 17,
"text": "I_i"
},
{
"math_id": 18,
"text": "n_c"
},
{
"math_id": 19,
"text": "n_p"
},
{
"math_id": 20,
"text": "\\psi(I) = kI^n"
},
{
"math_id": 21,
"text": "\\psi(I)"
},
{
"math_id": 22,
"text": "I"
},
{
"math_id": 23,
"text": "k"
},
{
"math_id": 24,
"text": "n"
},
{
"math_id": 25,
"text": "{dS \\over dt} = \\begin{cases} \\alpha & \\text{if } \\psi > S \\\\ (-\\alpha \\leq {d\\psi \\over dt} \\leq \\alpha) & \\text{if } \\psi = S\\\\ -\\alpha & \\text{if } \\psi < S\\end{cases}"
},
{
"math_id": 26,
"text": "S"
},
{
"math_id": 27,
"text": "\\beta"
},
{
"math_id": 28,
"text": "\\alpha"
},
{
"math_id": 29,
"text": "v_i = (1-DE)v_{max}"
},
{
"math_id": 30,
"text": "F\\overline{a}_i = K\\sum \\left ( AL_iDE_j \\times \\left | d_{ij}-d_{ip} \\right | \\times e_{ij} \\right ) "
},
{
"math_id": 31,
"text": "DE_j = DE_i"
}
] | https://en.wikipedia.org/wiki?curid=886876 |
886930 | Hamiltonian constraint | Key constraint in some theories admitting Hamiltonian formulations
The Hamiltonian constraint arises from any theory that admits a Hamiltonian formulation and is reparametrisation-invariant. The Hamiltonian constraint of general relativity is an important non-trivial example.
In the context of general relativity, the Hamiltonian constraint technically refers to a linear combination of spatial and time diffeomorphism constraints reflecting the reparametrizability of the theory under both spatial as well as time coordinates. However, most of the time the term "Hamiltonian constraint" is reserved for the constraint that generates time diffeomorphisms.
Simplest example: the parametrized clock and pendulum system.
Parametrization.
In its usual presentation, classical mechanics appears to give time a special role as an independent variable. This is unnecessary, however. Mechanics can be formulated to treat the time variable on the same footing as the other variables in an extended phase space, by parameterizing the temporal variable(s) in terms of a common, albeit unspecified, parameter variable, thereby putting the phase space variables on the same footing.
Say our system comprises a pendulum executing simple harmonic motion and a clock. Whereas the system could be described classically by a position x=x(t), with x defined as a function of time, it is also possible to describe the same system as x(formula_0) and t(formula_0), where the relation between x and t is not directly specified. Instead, x and t are determined by the parameter formula_0, which is simply a parameter of the system, possibly having no objective meaning in its own right.
The system would be described by the position of a pendulum from the center, denoted formula_2, and the reading on the clock, denoted formula_1. We put these variables on the same footing by introducing a fictitious parameter formula_0,
formula_3
whose 'evolution' with respect to formula_0 takes us continuously through every possible correlation between the displacement and reading on the clock. Obviously the variable formula_0 can be replaced by any monotonic function, formula_4. This is what makes the system reparametrisation-invariant. Note that by this reparametrisation-invariance the theory cannot predict the value of formula_5 or formula_6 for a given value of formula_0 but only the relationship between these quantities. Dynamics is then determined by this relationship.
Dynamics of this reparametrization-invariant system.
The action for the parametrized Harmonic oscillator is then
formula_7
where formula_2 and formula_1 are canonical coordinates and formula_8 and formula_9 are their conjugate momenta respectively and represent our extended phase space (we will show that we can recover the usual Newton's equations from this expression). Writing the action as
formula_10
we identify the formula_11 as
formula_12
Hamilton's equations for formula_13 are
formula_14
which gives a constraint,
formula_15
formula_16 is our Hamiltonian constraint! It could also be obtained from the Euler–Lagrange equation of motion, noting that the action depends on formula_13 but not its formula_0 derivative. Then the extended phase space variables formula_2, formula_1, formula_8, and formula_9 are constrained to take values on this constraint-hypersurface of the extended phase space. We refer to formula_17 as the `smeared' Hamiltonian constraint where formula_13 is an arbitrary number. The 'smeared' Hamiltonian constraint tells us how an extended phase space variable (or function thereof) evolves with respect to formula_0:
formula_18
(these are actually the other Hamilton's equations). These equations describe a flow or orbit in phase space. In general we have
formula_19
for any phase space function formula_20. As the Hamiltonian constraint Poisson-commutes with itself, it preserves itself and hence the constraint-hypersurface. The possible correlations between measurable quantities like formula_5 and formula_6 then correspond to 'orbits' generated by the constraint within the constraint surface, each particular orbit differentiated from the others by, for example, also measuring the value of formula_21 along with formula_5 and formula_6 at one formula_0-instant; after determining the particular orbit, for each measurement of formula_6 we can predict the value formula_5 will take.
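To make the flow generated by the smeared constraint concrete, here is a hedged numerical sketch. It assumes the standard form of the constraint for the parametrized oscillator, C = p_t + p^2/(2m) + (1/2)mω^2x^2 (consistent with the energy-conservation reading given in the next subsection): integrating the flow for an arbitrary choice of λ(τ) keeps the orbit on the constraint surface, and the resulting correlation between x and t is the familiar oscillation.

```python
import numpy as np

m, omega = 1.0, 2.0

def constraint(x, t, p, pt):
    """Assumed form: C = p_t + p**2/(2m) + (1/2) m omega**2 x**2."""
    return pt + p**2 / (2 * m) + 0.5 * m * omega**2 * x**2

def flow(state, lam):
    """Hamilton's equations generated by the smeared constraint lam * C."""
    x, t, p, pt = state
    return np.array([lam * p / m,                # dx/dtau
                     lam,                        # dt/dtau
                     -lam * m * omega**2 * x,    # dp/dtau
                     0.0])                       # dpt/dtau

# Initial data chosen on the constraint surface: pt = -(energy).
x0, p0 = 1.0, 0.0
state = np.array([x0, 0.0, p0, -(p0**2 / (2 * m) + 0.5 * m * omega**2 * x0**2)])

dtau = 1e-3
for step in range(5000):
    lam = 1.0 + 0.5 * np.sin(0.01 * step)   # arbitrary lapse: a reparametrisation
    state = state + dtau * flow(state, lam)

x, t, p, pt = state
print(constraint(x, t, p, pt))   # stays near 0; the drift is forward-Euler error
print(x, np.cos(omega * t))      # x at the integrated t is close to cos(omega t)
```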
Deparametrization.
The other equations of Hamiltonian mechanics are
formula_22
Upon substitution of our action these give,
formula_23
These represent the fundamental equations governing our system.
In the case of the parametrized clock and pendulum system we can of course recover the usual equations of motion in which formula_1 is the independent variable:
Now formula_24 and formula_25 can be deduced by
formula_26
formula_27
We recover the usual differential equation for the simple harmonic oscillator,
formula_28
We also have formula_29 or formula_30
Our Hamiltonian constraint is then easily seen as the condition of constancy of energy! Deparametrization, the identification of a time variable with respect to which everything evolves, is the opposite process of parametrization. It turns out in general that not all reparametrisation-invariant systems can be deparametrized; general relativity is a prime physical example (here the spacetime coordinates correspond to the unphysical formula_0 and the Hamiltonian is a linear combination of constraints which generate spatial and time diffeomorphisms).
Reason why we could deparametrize here.
The underlying reason why we could deparametrize (aside from the fact that we already know it was an artificial reparametrization in the first place) is the mathematical form of the constraint, namely,
formula_31
Substituting the Hamiltonian constraint into the original action, we obtain
formula_32
which is the standard action for the harmonic oscillator. General relativity is an example of a physical theory where the Hamiltonian constraint isn't of the above mathematical form in general, and so cannot be deparametrized in general.
Hamiltonian of classical general relativity.
In the ADM formulation of general relativity one splits spacetime into spatial slices and time; the basic variables are taken to be the induced metric, formula_33, on the spatial slice (the metric induced on the spatial slice by the spacetime metric), and its conjugate momentum variable related to the extrinsic curvature, formula_34 (this tells us how the spatial slice curves with respect to spacetime and is a measure of how the induced metric evolves in time). These are the metric canonical coordinates.
Dynamics such as time-evolutions of fields are controlled by the Hamiltonian constraint.
The identity of the Hamiltonian constraint is a major open question in quantum gravity, as is extracting of physical observables from any such specific constraint.
In 1986 Abhay Ashtekar introduced a new set of canonical variables, Ashtekar variables to represent an unusual way of rewriting the metric canonical variables on the three-dimensional spatial slices in terms of a SU(2) gauge field and its complementary variable. The Hamiltonian was much simplified in this reformulation. This led to the loop representation of quantum general relativity and in turn loop quantum gravity.
Within the loop quantum gravity representation Thiemann formulated a mathematically rigorous operator as a proposal for such a constraint. Although this operator defines a complete and consistent quantum theory, doubts have been raised as to the physical reality of this theory due to inconsistencies with classical general relativity (the quantum constraint algebra closes, but it is not isomorphic to the classical constraint algebra of GR, which is seen as circumstantial evidence of inconsistency, though not a proof), and so variants have been proposed.
Metric formulation.
The idea was to quantize the canonical variables formula_35 and formula_36, making them into operators acting on wavefunctions on the space of 3-metrics, and then to quantize the Hamiltonian (and other constraints). However, this program soon became regarded as dauntingly difficult for various reasons, one being the non-polynomial nature of the Hamiltonian constraint:
formula_37
where formula_38 is the scalar curvature of the three metric formula_33. Being a non-polynomial expression in the canonical variables and their derivatives it is very difficult to promote to a quantum operator.
Expression using Ashtekar variables.
The configuration variable of Ashtekar's formulation behaves like an formula_39 gauge field or connection formula_40. Its canonically conjugate momentum formula_41 is the densitized "electric" field or triad (densitized as formula_42). What do these variables have to do with gravity? The densitized triads can be used to reconstruct the spatial metric via
formula_43
The densitized triads are not unique, and in fact one can perform a local in space rotation with respect to the internal indices formula_44. This is actually the origin of the formula_39 gauge invariance. The connection can be used to reconstruct the extrinsic curvature. The relation is given by
formula_45
where formula_46 is related to the spin connection, formula_47, by formula_48 and formula_49.
In terms of Ashtekar variables the classical expression of the constraint is given by,
formula_50
where formula_51 is the field strength tensor of the gauge field formula_40. Due to the factor formula_52 this is non-polynomial in the Ashtekar variables. Since we impose the condition
formula_53
we could consider the densitized Hamiltonian instead,
formula_54
This Hamiltonian is now polynomial in the Ashtekar variables. This development raised new hopes for the canonical quantum gravity programme. Although the Ashtekar variables had the virtue of simplifying the Hamiltonian, they have the problem that the variables are complex. When one quantizes the theory, it is a difficult task to ensure that one recovers real general relativity as opposed to complex general relativity. There were also serious difficulties in promoting the densitized Hamiltonian to a quantum operator.
A way of addressing the problem of reality conditions was to note that if we took the signature to be formula_55, that is, Euclidean instead of Lorentzian, then one can retain the simple form of the Hamiltonian but with real variables. One can then define what is called a generalized Wick rotation to recover the Lorentzian theory. It is 'generalized' in that it is a Wick transformation in phase space and has nothing to do with analytic continuation of the time parameter formula_1.
Expression for real formulation of Ashtekar variables.
Thomas Thiemann addressed both the above problems. He used the real connection
formula_56
In real Ashtekar variables the full Hamiltonian is
formula_57
where the constant formula_58 is the Barbero–Immirzi parameter. The constant formula_59 is −1 for Lorentzian signature and +1 for Euclidean signature. The formula_46 have a complicated relationship with the densitized triads and cause serious problems upon quantization. Ashtekar variables can be seen as the choice formula_60, which makes the second, more complicated, term vanish (the first term is denoted formula_61 because for the Euclidean theory this term remains for the real choice formula_62). We also still have the problem of the formula_52 factor.
Thiemann was able to make it work for real formula_58. First he could simplify the troublesome formula_52 by using the identity
formula_63
where formula_64 is the volume,
formula_65
The first term of the Hamiltonian constraint becomes
formula_66
upon using Thiemann's identity. This Poisson bracket is replaced by a commutator upon quantization. It turns out that a similar trick can be used to treat the second term. Why are the formula_46 given by the densitized triads formula_67? It actually comes about from the Gauss law
formula_68
We can solve this in much the same way as the Levi-Civita connection can be calculated from the equation formula_69; by rotating the various indices and then adding and subtracting them. The result is complicated and non-linear. To circumvent the problems introduced by this complicated relationship Thiemann first defines the Gauss gauge invariant quantity
formula_70
where formula_49, and notes that
formula_71
We are then able to write
formula_72
and as such find an expression in terms of the configuration variable formula_40 and formula_73. We obtain for the second term of the Hamiltonian
formula_74
Why is it easier to quantize formula_73? This is because it can be rewritten in terms of quantities that we already know how to quantize. Specifically formula_73 can be rewritten as
formula_75
where we have used that the integrated densitized trace of the extrinsic curvature is the "time derivative of the volume".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tau"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "x (\\tau), \\;\\;\\;\\; t (\\tau)"
},
{
"math_id": 4,
"text": "\\tau' = f(\\tau)"
},
{
"math_id": 5,
"text": "x (\\tau)"
},
{
"math_id": 6,
"text": "t (\\tau)"
},
{
"math_id": 7,
"text": "S = \\int d \\tau \\left[ {dx \\over d \\tau} p + {dt \\over d \\tau} p_t - \\lambda \\left( p_t + {p^2 \\over 2m} + {1 \\over 2} m \\omega^2 x^2 \\right) \\right],"
},
{
"math_id": 8,
"text": "p"
},
{
"math_id": 9,
"text": "p_t"
},
{
"math_id": 10,
"text": "S = \\int d \\tau \\left[ {dx \\over d \\tau} p + {dt \\over d \\tau} p_t - \\mathcal{H} (x,t;p,p_t) \\right]"
},
{
"math_id": 11,
"text": "\\mathcal{H}"
},
{
"math_id": 12,
"text": "\\mathcal{H} (x,t,\\lambda;p,p_t) = \\lambda \\left( p_t + {p^2 \\over 2m} + {1 \\over 2} m \\omega^2 x^2 \\right)."
},
{
"math_id": 13,
"text": "\\lambda"
},
{
"math_id": 14,
"text": "{\\partial \\mathcal{H} \\over \\partial \\lambda} = 0"
},
{
"math_id": 15,
"text": "C = p_t + {p^2 \\over 2m} + {1 \\over 2} m \\omega^2 x^2 = 0."
},
{
"math_id": 16,
"text": "C"
},
{
"math_id": 17,
"text": "\\lambda C"
},
{
"math_id": 18,
"text": "{dx \\over d \\tau} = \\{ x , \\lambda C \\} , \\;\\;\\;\\; {dp \\over d \\tau} = \\{ p , \\lambda C \\} \\;\\;\\;\\;\\;\\; {dt \\over d \\tau} = \\{ t , \\lambda C \\}, \\;\\;\\;\\; {dp_t \\over d \\tau} = \\{ p_t , \\lambda C \\} "
},
{
"math_id": 19,
"text": "{d F (x,p,t,p_t) \\over d \\tau} = \\{ F (x,p,t,p_t) , \\lambda C \\}"
},
{
"math_id": 20,
"text": "F"
},
{
"math_id": 21,
"text": "p (\\tau)"
},
{
"math_id": 22,
"text": "{dx \\over d \\tau} = {\\partial \\mathcal{H} \\over \\partial p}, \\;\\;\\;\\; {dp \\over d \\tau} = - {\\partial \\mathcal{H} \\over \\partial x} ; \\;\\;\\;\\;\\;\\; {dt \\over d \\tau} = {\\partial \\mathcal{H} \\over \\partial p_t}, \\;\\;\\;\\; {dp_t \\over d \\tau} = {\\partial \\mathcal{H} \\over \\partial t}."
},
{
"math_id": 23,
"text": "{dx \\over d \\tau} = \\lambda {p \\over m}, \\;\\;\\;\\; {dp \\over d \\tau} = - \\lambda m \\omega^2 x ; \\;\\;\\;\\;\\;\\; {dt \\over d \\tau} = \\lambda, \\;\\;\\;\\; {dp_t \\over d \\tau} = 0,"
},
{
"math_id": 24,
"text": "dx / dt"
},
{
"math_id": 25,
"text": "dp / dt"
},
{
"math_id": 26,
"text": "{dx \\over dt} = {dx \\over d \\tau} \\Big/ {dt \\over d \\tau} = {\\lambda p/m \\over \\lambda} = {p \\over m}"
},
{
"math_id": 27,
"text": "{dp \\over dt} = {dp \\over d \\tau} \\Big/ {dt \\over d \\tau} = {- \\lambda m \\omega^2 x \\over \\lambda}= - m \\omega^2 x."
},
{
"math_id": 28,
"text": "{d^2 x \\over dt^2} = - \\omega^2 x."
},
{
"math_id": 29,
"text": "dp_t / d t = dp_t / d \\tau \\big/ d t / d \\tau = 0"
},
{
"math_id": 30,
"text": "p_t = \\mathrm{const}."
},
{
"math_id": 31,
"text": "C = p_t + C' (x,p)."
},
{
"math_id": 32,
"text": "\\begin{align}\nS &= \\int d \\tau \\left[ {dx \\over d \\tau} p + {dt \\over d \\tau} p_t - \\lambda (p_t + C' (x,p)) \\right] \\\\\n&= \\int d \\tau \\left[ {dx \\over d \\tau} p - {dt \\over d \\tau} C' (x,p) \\right] \\\\\n&= \\int dt \\left[ {dx \\over dt} p - {p^2 \\over 2m} + {1 \\over 2} m \\omega^2 x^2 \\right]\n\\end{align}"
},
{
"math_id": 33,
"text": "q_{ab} (x)"
},
{
"math_id": 34,
"text": "K^{ab} (x)"
},
{
"math_id": 35,
"text": "q_{ab}"
},
{
"math_id": 36,
"text": "\\pi^{ab} = \\sqrt{q} (K^{ab} - q^{ab} K_c^c)"
},
{
"math_id": 37,
"text": "H = \\sqrt{\\det (q)} (K_{ab} K^{ab} - (K_a^a)^2 -{}^3R)"
},
{
"math_id": 38,
"text": "\\;^3R"
},
{
"math_id": 39,
"text": "SU(2)"
},
{
"math_id": 40,
"text": "A_a^i"
},
{
"math_id": 41,
"text": "\\tilde{E}_i^a"
},
{
"math_id": 42,
"text": "\\tilde{E}_i^a = \\sqrt{\\det (q)} E_i^a"
},
{
"math_id": 43,
"text": "\\det (q) q^{ab} = \\tilde{E}_i^a \\tilde{E}_j^b \\delta^{ij}."
},
{
"math_id": 44,
"text": "i"
},
{
"math_id": 45,
"text": "A_a^i = \\Gamma_a^i - i K_a^i"
},
{
"math_id": 46,
"text": "\\Gamma_a^i"
},
{
"math_id": 47,
"text": "\\Gamma_{a \\;\\; i}^{\\;\\; j}"
},
{
"math_id": 48,
"text": "\\Gamma_a^i = \\Gamma_{ajk} \\epsilon^{jki}"
},
{
"math_id": 49,
"text": "K_a^i = K_{ab} \\tilde{E}^{ai} / \\sqrt{\\det (q)}"
},
{
"math_id": 50,
"text": "H = {\\epsilon_{ijk} F_{ab}^k \\tilde{E}_i^a \\tilde{E}_j^b \\over \\sqrt{\\det (q)}}."
},
{
"math_id": 51,
"text": "F_{ab}^k"
},
{
"math_id": 52,
"text": "1 / \\sqrt{\\det (q)}"
},
{
"math_id": 53,
"text": "H = 0,"
},
{
"math_id": 54,
"text": "\\tilde{H} = \\sqrt{\\det (q)} H = \\epsilon_{ijk} F_{ab}^k \\tilde{E}_i^a \\tilde{E}_j^b = 0."
},
{
"math_id": 55,
"text": "(+,+,+,+)"
},
{
"math_id": 56,
"text": "A_a^i = \\Gamma_a^i + \\beta K_a^i"
},
{
"math_id": 57,
"text": "H = - \\zeta {\\epsilon_{ijk} F_{ab}^k \\tilde{E}_i^a \\tilde{E}_j^b \\over \\sqrt{\\det (q)}} + 2 {\\zeta \\beta^2 - 1 \\over \\beta^2} {(\\tilde{E}_i^a \\tilde{E}_j^b - \\tilde{E}_j^a \\tilde{E}_i^b) \\over \\sqrt{\\det (q)}} (A_a^i - \\Gamma_a^i) (A_b^j - \\Gamma_b^j) = H_E + H'."
},
{
"math_id": 58,
"text": "\\beta"
},
{
"math_id": 59,
"text": "\\zeta"
},
{
"math_id": 60,
"text": "\\beta = i"
},
{
"math_id": 61,
"text": "H_E"
},
{
"math_id": 62,
"text": "\\beta = \\pm 1"
},
{
"math_id": 63,
"text": "\\{ A_c^k , V \\} = {\\epsilon_{abc} \\epsilon^{ijk} \\tilde{E}_i^a \\tilde{E}_j^b \\over \\sqrt{\\det (q)}}"
},
{
"math_id": 64,
"text": "V"
},
{
"math_id": 65,
"text": "V = \\int d^3 x \\sqrt{\\det (q)} = {1 \\over 6} \\int d^3 x \\sqrt{\\left|\\tilde{E}_i^a \\tilde{E}_j^b \\tilde{E}_k^c \\epsilon^{ijk} \\epsilon_{abc}\\right|}."
},
{
"math_id": 66,
"text": "H_E = \\{ A_c^k , V \\} F_{ab}^k \\tilde{\\epsilon}^{abc}"
},
{
"math_id": 67,
"text": "\\tilde{E}^a_i"
},
{
"math_id": 68,
"text": "D_a \\tilde{E}^a_i = 0."
},
{
"math_id": 69,
"text": "\\nabla_c g_{ab} = 0"
},
{
"math_id": 70,
"text": "K = \\int d^3 x K_a^i \\tilde{E}_i^a"
},
{
"math_id": 71,
"text": "K_a^i = \\{ A_a^i , K \\}."
},
{
"math_id": 72,
"text": "A_a^i - \\Gamma_a^i = \\beta K_a^i = \\beta \\{ A_a^i , K \\}"
},
{
"math_id": 73,
"text": "K"
},
{
"math_id": 74,
"text": "H' = \\epsilon^{abc} \\epsilon_{ijk} \\{ A_a^i , K \\} \\{ A_b^j , K \\} \\{ A_c^k , V \\}."
},
{
"math_id": 75,
"text": "K = - \\left\\{ V , \\int d^3 x H_E \\right\\}"
}
] | https://en.wikipedia.org/wiki?curid=886930 |
8869727 | Fixed-point property | Mathematical property
A mathematical object "X" has the fixed-point property if every suitably well-behaved mapping from "X" to itself has a fixed point. The term is most commonly used to describe topological spaces on which every continuous mapping has a fixed point. But another use is in order theory, where a partially ordered set "P" is said to have the fixed point property if every increasing function on "P" has a fixed point.
Definition.
Let "A" be an object in the concrete category C. Then "A" has the "fixed-point property" if every morphism (i.e., every function) formula_0 has a fixed point.
The most common usage is when C = Top is the category of topological spaces. Then a topological space "X" has the fixed-point property if every continuous map formula_1 has a fixed point.
Examples.
Singletons.
In the category of sets, the objects with the fixed-point property are precisely the singletons.
The closed interval.
The closed interval [0,1] has the fixed point property: Let "f": [0,1] → [0,1] be a continuous mapping. If "f"(0) = 0 or "f"(1) = 1, then our mapping has a fixed point at 0 or 1. If not, then "f"(0) > 0 and "f"(1) − 1 < 0. Thus the function "g"("x") = "f"("x") − x is a continuous real valued function which is positive at "x" = 0 and negative at "x" = 1. By the intermediate value theorem, there is some point "x"0 with "g"("x"0) = 0, which is to say that "f"("x"0) − "x"0 = 0, and so "x"0 is a fixed point.
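The bisection sketch below, which is not part of the article, makes the intermediate-value argument concrete. The particular map f(x) = cos(x) is an assumed example; it is continuous and sends [0,1] into itself.

```python
# Sketch: locate a fixed point of a continuous f: [0,1] -> [0,1] by bisecting
# on g(x) = f(x) - x, exactly as in the intermediate-value-theorem argument above.
import math

def fixed_point(f, a=0.0, b=1.0, tol=1e-12):
    g = lambda x: f(x) - x
    if g(a) == 0.0:
        return a
    if g(b) == 0.0:
        return b
    # here g(a) > 0 and g(b) < 0, since f maps [a, b] into itself
    while b - a > tol:
        mid = 0.5 * (a + b)
        if g(mid) > 0.0:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)

x0 = fixed_point(math.cos)          # assumed example map
print(x0, math.cos(x0) - x0)        # ~0.7390851332, residual ~0
```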
The open interval does "not" have the fixed-point property. The mapping "f"("x") = "x"2 has no fixed point on the interval (0,1).
The closed disc.
The closed interval is a special case of the closed disc, which in any finite dimension has the fixed-point property by the Brouwer fixed-point theorem.
Topology.
A retract "A" of a space "X" with the fixed-point property also has the fixed-point property. This is because if formula_2 is a retraction and formula_0 is any continuous function, then the composition formula_3 (where formula_4 is inclusion) has a fixed point. That is, there is formula_5 such that formula_6. Since formula_5 we have that formula_7 and therefore formula_8
A topological space has the fixed-point property if and only if its identity map is universal.
A product of spaces with the fixed-point property in general fails to have the fixed-point property even if one of the spaces is the closed real interval.
The FPP is a topological invariant, i.e. is preserved by any homeomorphism. The FPP is also preserved by any retraction.
According to the Brouwer fixed point theorem, every compact and convex subset of a Euclidean space has the FPP. More generally, according to the Schauder-Tychonoff fixed point theorem every compact and convex subset of a locally convex topological vector space has the FPP. Compactness alone does not imply the FPP and convexity is not even a topological property so it makes sense to ask how to topologically characterize the FPP. In 1932 Borsuk asked whether compactness together with contractibility could be a sufficient condition for the FPP to hold. The problem was open for 20 years until the conjecture was disproved by Kinoshita who found an example of a compact contractible space without the FPP.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f: A \\to A"
},
{
"math_id": 1,
"text": "f: X \\to X"
},
{
"math_id": 2,
"text": "r: X \\to A"
},
{
"math_id": 3,
"text": "i \\circ f \\circ r: X \\to X"
},
{
"math_id": 4,
"text": "i: A \\to X"
},
{
"math_id": 5,
"text": "x \\in A"
},
{
"math_id": 6,
"text": "f \\circ r(x) = x"
},
{
"math_id": 7,
"text": "r(x) = x"
},
{
"math_id": 8,
"text": "f(x) = x."
}
] | https://en.wikipedia.org/wiki?curid=8869727 |
8870800 | Multivariate probit model | In statistics and econometrics, the multivariate probit model is a generalization of the probit model used to estimate several correlated binary outcomes jointly. For example, if it is believed that the decisions of sending at least one child to public school and that of voting in favor of a school budget are correlated (both decisions are binary), then the multivariate probit model would be appropriate for jointly predicting these two choices on an individual-specific basis. J.R. Ashford and R.R. Sowden initially proposed an approach for multivariate probit analysis. Siddhartha Chib and Edward Greenberg extended this idea and also proposed simulation-based inference methods for the multivariate probit model which simplified and generalized parameter estimation.
Example: bivariate probit.
In the ordinary probit model, there is only one binary dependent variable formula_0 and so only one latent variable formula_1 is used. In contrast, in the bivariate probit model there are two binary dependent variables formula_2 and formula_3, so there are two latent variables: formula_4 and formula_5.
It is assumed that each observed variable takes on the value 1 if and only if its underlying continuous latent variable takes on a positive value:
formula_6
formula_7
with
formula_8
and
formula_9
Fitting the bivariate probit model involves estimating the values of formula_10 and formula_11. To do so, the likelihood of the model has to be maximized. This likelihood is
formula_12
Substituting the latent variables formula_13 and formula_14 in the probability functions and taking logs gives
formula_15
After some rewriting, the log-likelihood function becomes:
formula_16
Note that formula_17 is the cumulative distribution function of the bivariate normal distribution. formula_18 and formula_19 in the log-likelihood function are observed variables, each equal to either one or zero.
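As an illustration, the sketch below (not part of the article) evaluates the bivariate probit log-likelihood numerically, with SciPy's bivariate normal CDF playing the role of formula_17. The simulated data, the particular parameter values, and the helper name biprobit_loglik are assumptions made for the example.

```python
# Sketch: bivariate probit log-likelihood.  The sign flips q1, q2 in {-1, +1}
# reproduce the four Phi(...) terms of the log-likelihood above in one expression:
# Phi(q1*X1b1, q2*X2b2, q1*q2*rho).
import numpy as np
from scipy.stats import multivariate_normal

def biprobit_loglik(beta1, beta2, rho, X1, X2, Y1, Y2):
    xb1, xb2 = X1 @ beta1, X2 @ beta2
    q1 = np.where(Y1 == 1, 1.0, -1.0)
    q2 = np.where(Y2 == 1, 1.0, -1.0)
    ll = 0.0
    for a, b, s1, s2 in zip(xb1, xb2, q1, q2):
        cov = [[1.0, s1 * s2 * rho], [s1 * s2 * rho, 1.0]]
        ll += np.log(multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([s1 * a, s2 * b]))
    return ll

# simulated example data (assumed)
rng = np.random.default_rng(0)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
b1, b2, rho = np.array([0.3, 1.0]), np.array([-0.2, 0.8]), 0.5
eps = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
Y1 = (X1 @ b1 + eps[:, 0] > 0).astype(int)
Y2 = (X2 @ b2 + eps[:, 1] > 0).astype(int)
print(biprobit_loglik(b1, b2, rho, X1, X2, Y1, Y2))   # value at the true parameters
```
In practice this function would be handed to a numerical optimizer to obtain the maximum-likelihood estimates of formula_10 and formula_11.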
Multivariate Probit.
For the general case, formula_20 where we can take formula_21 as choices and formula_22 as individuals or observations, the probability of observing choice formula_23 is
formula_24
where formula_25 and
formula_26
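The orthant probability just defined can be estimated by simulation, anticipating the simulation methods discussed at the end of this section. The crude frequency simulator below is only a sketch, not the GHK algorithm mentioned later; the mean vector, covariance matrix, and number of draws are assumed values chosen for illustration.

```python
# Sketch: estimate Pr(y_i | X_i beta, Sigma) by drawing the latent vector
# y* ~ N(X_i beta, Sigma) many times and counting how often it lands in the
# orthant A determined by the observed y_i (A_j = (0, inf) if y_j = 1, else (-inf, 0]).
import numpy as np

def choice_prob(y_obs, mean, Sigma, n_draws=200_000, seed=0):
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(mean, Sigma, size=n_draws)
    signs = np.where(np.asarray(y_obs) == 1, 1.0, -1.0)
    inside = np.all(signs * draws > 0.0, axis=1)   # the boundary has probability zero
    return inside.mean()

Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.0]])      # assumed correlation structure
mean = np.array([0.4, -0.1, 0.7])        # assumed X_i beta for one observation
print(choice_prob([1, 0, 1], mean, Sigma))
```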
The log-likelihood function in this case would be
formula_27
Except for formula_28, there is typically no closed-form solution to the integrals in the log-likelihood equation. Instead, simulation methods can be used to simulate the choice probabilities. Methods using importance sampling include the GHK algorithm, AR (accept-reject), and Stern's method. There are also MCMC approaches to this problem, including CRB (Chib's method with Rao–Blackwellization), CRT (Chib, Ritter, Tanner), ARK (accept-reject kernel), and ASK (adaptive sampling kernel). A variational approach scaling to large datasets is proposed in Probit-LMM. | [
{
"math_id": 0,
"text": "Y"
},
{
"math_id": 1,
"text": "Y^* "
},
{
"math_id": 2,
"text": "Y_1"
},
{
"math_id": 3,
"text": "Y_2"
},
{
"math_id": 4,
"text": " Y^*_1 "
},
{
"math_id": 5,
"text": " Y^*_2 "
},
{
"math_id": 6,
"text": "\nY_1 = \\begin{cases} 1 & \\text{if }Y^*_1>0, \\\\\n0 & \\text{otherwise},\n\\end{cases}\n"
},
{
"math_id": 7,
"text": "\nY_2 = \\begin{cases}\n1 & \\text{if }Y^*_2>0, \\\\\n0 & \\text{otherwise},\n\\end{cases}\n"
},
{
"math_id": 8,
"text": "\n\\begin{cases}\nY_1^* = X_1\\beta_1+\\varepsilon_1 \\\\\nY_2^* = X_2\\beta_2+\\varepsilon_2\n\\end{cases}\n"
},
{
"math_id": 9,
"text": "\n\\begin{bmatrix}\n\\varepsilon_1\\\\\n\\varepsilon_2\n\\end{bmatrix}\n\\mid X\n\\sim \\mathcal{N}\n\\left(\n\\begin{bmatrix}\n0\\\\\n0\n\\end{bmatrix},\n\\begin{bmatrix}\n1&\\rho\\\\\n\\rho&1\n\\end{bmatrix}\n\\right)\n"
},
{
"math_id": 10,
"text": "\\beta_1,\\ \\beta_2,"
},
{
"math_id": 11,
"text": " \\rho "
},
{
"math_id": 12,
"text": "\n\\begin{align}\nL(\\beta_1,\\beta_2) =\\Big( \\prod & P(Y_1=1,Y_2=1\\mid\\beta_1,\\beta_2)^{Y_1Y_2} P(Y_1=0,Y_2=1\\mid\\beta_1,\\beta_2)^{(1-Y_1)Y_2} \\\\[8pt]\n& {}\\qquad P(Y_1=1,Y_2=0\\mid\\beta_1,\\beta_2)^{Y_1(1-Y_2)}\nP(Y_1=0,Y_2=0\\mid\\beta_1,\\beta_2)^{(1-Y_1)(1-Y_2)} \\Big)\n\\end{align}\n"
},
{
"math_id": 13,
"text": "Y_1^* "
},
{
"math_id": 14,
"text": "Y_2^* "
},
{
"math_id": 15,
"text": "\n\\begin{align}\n\\sum & \\Big( Y_1Y_2 \\ln P(\\varepsilon_1>-X_1\\beta_1,\\varepsilon_2>-X_2\\beta_2) \\\\[4pt]\n& {}\\quad{}+(1-Y_1)Y_2\\ln P(\\varepsilon_1<-X_1\\beta_1,\\varepsilon_2>-X_2\\beta_2) \\\\[4pt]\n& {}\\quad{}+Y_1(1-Y_2)\\ln P(\\varepsilon_1>-X_1\\beta_1,\\varepsilon_2<-X_2\\beta_2) \\\\[4pt]\n& {}\\quad{}+(1-Y_1)(1-Y_2)\\ln P(\\varepsilon_1<-X_1\\beta_1,\\varepsilon_2<-X_2\\beta_2) \\Big).\n\\end{align}\n"
},
{
"math_id": 16,
"text": "\n\\begin{align}\n\\sum & \\Big ( Y_1Y_2\\ln \\Phi(X_1\\beta_1,X_2\\beta_2,\\rho) \\\\[4pt]\n& {}\\quad{} + (1-Y_1)Y_2\\ln \\Phi(-X_1\\beta_1,X_2\\beta_2,-\\rho) \\\\[4pt]\n& {}\\quad{} + Y_1(1-Y_2)\\ln \\Phi(X_1\\beta_1,-X_2\\beta_2,-\\rho) \\\\[4pt]\n& {}\\quad{} +(1-Y_1)(1-Y_2)\\ln \\Phi(-X_1\\beta_1,-X_2\\beta_2,\\rho) \\Big).\n\\end{align}\n"
},
{
"math_id": 17,
"text": " \\Phi "
},
{
"math_id": 18,
"text": " Y_1 "
},
{
"math_id": 19,
"text": " Y_2 "
},
{
"math_id": 20,
"text": " \\mathbf{y_i} = (y_1, ..., y_j), \\ (i = 1,...,N)"
},
{
"math_id": 21,
"text": " j "
},
{
"math_id": 22,
"text": " i "
},
{
"math_id": 23,
"text": " \\mathbf{y_i} "
},
{
"math_id": 24,
"text": "\n\\begin{align}\n\\Pr(\\mathbf{y_i}|\\mathbf{X_i\\beta}, \\Sigma) = & \\int_{A_J}\\cdots\\int_{A_1}f_N(\\mathbf{y}^*_i|\\mathbf{X_i\\beta}, \\Sigma) dy^*_1\\dots dy^*_J \\\\\n\\Pr(\\mathbf{y_i}|\\mathbf{X_i\\beta}, \\Sigma) = & \\int \\mathbb{1}_{y^* \\in A} f_N(\\mathbf{y}^*_i|\\mathbf{X_i\\beta}, \\Sigma) d\\mathbf{y}^*_i\n\\end{align} \n"
},
{
"math_id": 25,
"text": " A = A_1 \\times \\cdots \\times A_J "
},
{
"math_id": 26,
"text": " \nA_j = \\begin{cases} (-\\infty,0] & y_j = 0 \\\\\n(0, \\infty) & y_j = 1 \\end{cases} \n"
},
{
"math_id": 27,
"text": " \\sum_{i=1}^N \\log\\Pr(\\mathbf{y_i}|\\mathbf{X_i\\beta}, \\Sigma) "
},
{
"math_id": 28,
"text": " J\\leq2"
}
] | https://en.wikipedia.org/wiki?curid=8870800 |
8871770 | Membrane fluidity | Viscosity of the lipid bilayer of a cell membrane
In biology, membrane fluidity refers to the viscosity of the lipid bilayer of a cell membrane or a synthetic lipid membrane. Lipid packing can influence the fluidity of the membrane. Viscosity of the membrane can affect the rotation and diffusion of proteins and other bio-molecules within the membrane, thereby affecting the functions of these molecules.
Membrane fluidity is affected by fatty acids. More specifically, whether the fatty acids are saturated or unsaturated has an effect on membrane fluidity. Saturated fatty acids have no double bonds in the hydrocarbon chain, and the maximum amount of hydrogen. The absence of double bonds decreases fluidity. Unsaturated fatty acids have at least one double bond, creating a "kink" in the chain. The double bond increases fluidity. While the addition of one double bond raises the melting temperature, research conducted by Xiaoguang Yang et al. supports that four or more double bonds have a direct correlation with membrane fluidity. Membrane fluidity is also affected by cholesterol. Cholesterol can make the cell membrane fluid as well as rigid.
Factors determining membrane fluidity.
Membrane fluidity can be affected by a number of factors. The main factors affecting membrane fluidity are environmental (i.e., temperature) and compositional. One way to increase membrane fluidity is to heat up the membrane. Lipids acquire thermal energy when they are heated up; energetic lipids move around more, arranging and rearranging randomly, making the membrane more fluid. At low temperatures, the lipids are laterally ordered and organized in the membrane, and the lipid chains are mostly in the all-trans configuration and pack well together.
The melting temperature formula_0 of a membrane is defined as the temperature across which the membrane transitions from a crystal-like to a fluid-like organization, or vice versa. This phase transition is not an actual state transition, but the two levels of organizations are very similar to a solid and liquid state.
The composition of a membrane can also affect its fluidity. The membrane phospholipids incorporate fatty acyl chains of varying length and saturation. Lipids with shorter chains are less stiff and less viscous because they are more susceptible to changes in kinetic energy due to their smaller molecular size and they have less surface area to undergo stabilizing London forces with neighboring hydrophobic chains. Molecules with carbon-carbon double bonds (unsaturated) are more rigid than those that are saturated with hydrogens, as double bonds cannot freely turn. As a result, the presence of fatty acyl chains with unsaturated double bonds makes it harder for the lipids to pack together by putting kinks into the otherwise straightened hydrocarbon chain. While unsaturated lipids may have more rigid individual bonds, membranes made with such lipids are more fluid because the individual lipids cannot pack as tightly as saturated lipids and thus have lower melting points: less thermal energy is required to achieve the same level of fluidity as membranes made with lipids with saturated hydrocarbon chains. Incorporation of particular lipids, such as sphingomyelin, into synthetic lipid membranes is known to stiffen a membrane. Such membranes can be described as "a glass state, i.e., rigid but without crystalline order".
Cholesterol acts as a bidirectional regulator of membrane fluidity because at high temperatures, it stabilizes the membrane and raises its melting point, whereas at low temperatures it intercalates between the phospholipids and prevents them from clustering together and stiffening. Some drugs, e.g. Losartan, are also known to alter membrane viscosity. Another way to change membrane fluidity is to change the pressure. In the laboratory, supported lipid bilayers and monolayers can be made artificially. In such cases, one can still speak of membrane fluidity. These membranes are supported by a flat surface, e.g. the bottom of a box. The fluidity of these membranes can be controlled by the lateral pressure applied, e.g. by the side walls of a box.
Heterogeneity in membrane physical property.
Discrete lipid domains with differing composition, and thus membrane fluidity, can coexist in model lipid membranes; this can be observed using fluorescence microscopy. The biological analogue, 'lipid raft', is hypothesized to exist in cell membranes and perform biological functions. Also, a narrow annular lipid shell of membrane lipids in contact with integral membrane proteins has low fluidity compared to bulk lipids in biological membranes, as these lipid molecules stay stuck to the surface of the protein macromolecules.
Measurement methods.
Membrane fluidity can be measured with electron spin resonance, fluorescence, atomic force microscopy-based force spectroscopy, or deuterium nuclear magnetic resonance spectroscopy. Electron spin resonance measurements involve observing spin probe behaviour in the membrane. Fluorescence experiments involve observing fluorescent probes incorporated into the membrane. Atomic force microscopy experiments can measure fluidity on synthetic or isolated patches of native membranes. Solid state deuterium nuclear magnetic resonance spectroscopy involves observing deuterated lipids. The techniques are complementary in that they operate on different timescales.
Membrane fluidity can be described by two different types of motion: rotational and lateral. In electron spin resonance, rotational correlation time of spin probes is used to characterize how much restriction is imposed on the probe by the membrane. In fluorescence, steady-state anisotropy of the probe can be used, in addition to the rotation correlation time of the fluorescent probe. Fluorescent probes show varying degrees of preference for being in an environment of restricted motion. In heterogeneous membranes, some probes will only be found in regions of higher membrane fluidity, while others are only found in regions of lower membrane fluidity. Partitioning preference of probes can also be a gauge of membrane fluidity. In deuterium nuclear magnetic resonance spectroscopy, the average carbon-deuterium bond orientation of the deuterated lipid gives rise to specific spectroscopic features. All three of these techniques can give some measure of the time-averaged orientation of the relevant (probe) molecule, which is indicative of the rotational dynamics of the molecule.
Lateral motion of molecules within the membrane can be measured by a number of fluorescence techniques: fluorescence recovery after photobleaching involves photobleaching a uniformly labelled membrane with an intense laser beam and measuring how long it takes for fluorescent probes to diffuse back into the photobleached spot. Fluorescence correlation spectroscopy monitors the fluctuations in fluorescence intensity measured from a small number of probes in a small space. These fluctuations are affected by the mode of lateral diffusion of the probe. Single particle tracking involves following the trajectory of fluorescent molecules or gold particles attached to a biomolecule and applying statistical analysis to extract information about the lateral diffusion of the tracked particle.
Phospholipid-deficient bio-membranes.
A study of central linewidths of electron spin resonance spectra of thylakoid membranes and aqueous dispersions of their total extracted lipids, labeled with stearic acid spin label (having spin or doxyl moiety at 5,7,9,12,13,14 and 16th carbons, with reference to carbonyl group), reveals a "fluidity gradient". Decreasing linewidth from 5th to 16th carbons represents increasing degree of motional freedom ("fluidity gradient") from headgroup-side to methyl terminal in both native membranes and their aqueous lipid extract (a multilamellar liposomal structure, typical of lipid bilayer organization). This pattern points at similarity of lipid bilayer organization in both native membranes and liposomes. This observation is critical, as thylakoid membranes, comprising largely galactolipids, contain only 10% phospholipid, unlike other biological membranes consisting largely of phospholipids. Proteins in chloroplast thylakoid membranes apparently restrict lipid fatty acyl chain segmental mobility from the 9th to 16th carbons "vis a vis" their liposomal counterparts. Surprisingly, liposomal fatty acyl chains are more restricted at the 5th and 7th carbon positions as compared to these positions in thylakoid membranes. This is explainable as due to a motion-restricting effect at these positions, because of steric hindrance by large chlorophyll headgroups, especially so in liposomes. However, in native thylakoid membranes, chlorophylls are mainly complexed with proteins as light-harvesting complexes and may not largely be free to restrain lipid fluidity as such.
Diffusion coefficients.
Diffusion coefficients of fluorescent lipid analogues are about 10−8 cm2/s in fluid lipid membranes. In gel lipid membranes and natural biomembranes, the diffusion coefficients are about 10−11 cm2/s to 10−9 cm2/s.
Charged lipid membranes.
The melting of charged lipid membranes, such as 1,2-dimyristoyl-sn-glycero-3-phosphoglycerol, can take place over a wide range of temperature. Within this range of temperatures, these membranes become very viscous.
Biological relevance.
Microorganisms subjected to thermal stress are known to alter the lipid composition of their cell membrane (see homeoviscous adaptation). This is one way they can adjust the fluidity of their membrane in response to their environment. Membrane fluidity is known to affect the function of biomolecules residing within or associated with the membrane structure. For example, the binding of some peripheral proteins is dependent on membrane fluidity. Lateral diffusion (within the membrane matrix) of membrane-related enzymes can affect reaction rates. Consequently, membrane-dependent functions, such as phagocytosis and cell signalling, can be regulated by the fluidity of the cell-membrane.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_m "
},
{
"math_id": 1,
"text": "T < T_m"
},
{
"math_id": 2,
"text": "T > T_m"
}
] | https://en.wikipedia.org/wiki?curid=8871770 |
887179 | Colpitts oscillator | LC oscillator design
A Colpitts oscillator, invented in 1918 by Canadian-American engineer Edwin H. Colpitts using vacuum tubes, is one of a number of designs for LC oscillators, electronic oscillators that use a combination of inductors (L) and capacitors (C) to produce an oscillation at a certain frequency. The distinguishing feature of the Colpitts oscillator is that the feedback for the active device is taken from a voltage divider made of two capacitors in series across the inductor.
Overview.
The Colpitts circuit, like other LC oscillators, consists of a gain device (such as a bipolar junction transistor, field-effect transistor, operational amplifier, or vacuum tube) with its output connected to its input in a feedback loop containing a parallel LC circuit (tuned circuit), which functions as a bandpass filter to set the frequency of oscillation. The amplifier will have differing input and output impedances, and these need to be coupled into the LC circuit without overly damping it.
A Colpitts oscillator uses a pair of capacitors to provide voltage division to couple the energy in and out of the tuned circuit. (It can be considered as the electrical dual of a Hartley oscillator, where the feedback signal is taken from an "inductive" voltage divider consisting of two coils in series (or a tapped coil).) Fig. 1 shows the common-base Colpitts circuit. The inductor "L" and the series combination of "C"1 and "C"2 form the resonant tank circuit, which determines the frequency of the oscillator. The voltage across "C"2 is applied to the base-emitter junction of the transistor, as feedback to create oscillations. Fig. 2 shows the common-collector version. Here the voltage across "C"1 provides feedback. The frequency of oscillation is approximately the resonant frequency of the LC circuit, which is the series combination of the two capacitors in parallel with the inductor:
formula_0
The actual frequency of oscillation will be slightly lower due to junction capacitances and resistive loading of the transistor.
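As a quick numerical illustration of the frequency formula (the component values below are assumed, not taken from the figures), one can compute the series capacitance and the resulting resonant frequency:

```python
# Sketch: Colpitts tank frequency f0 = 1 / (2*pi*sqrt(L * C1*C2/(C1+C2)))
# for assumed component values.
import math

L  = 10e-6      # 10 uH   (assumed)
C1 = 100e-12    # 100 pF  (assumed)
C2 = 1000e-12   # 1000 pF (assumed)

Ct = C1 * C2 / (C1 + C2)                      # series combination of C1 and C2
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * Ct))
print(f"Ct = {Ct * 1e12:.1f} pF, f0 = {f0 / 1e6:.2f} MHz")   # Ct ~ 90.9 pF, f0 ~ 5.3 MHz
```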
As with any oscillator, the amplification of the active component should be marginally larger than the attenuation of the resonator losses and its voltage division, to obtain stable operation. Thus, a Colpitts oscillator used as a variable-frequency oscillator (VFO) performs best when a variable inductance is used for tuning, as opposed to tuning just one of the two capacitors. If tuning by variable capacitor is needed, it should be done with a third capacitor connected in parallel to the inductor (or in series as in the Clapp oscillator).
Practical example.
Fig. 3 shows an example with component values. Instead of field-effect transistors, other active components such as bipolar junction transistors or vacuum tubes, capable of producing gain at the desired frequency, could be used.
The common gate amplifier has a low input impedance and a high output impedance. Therefore the amplifier input, the source, is connected to the low impedance tap of the LC circuit L1, C1, C2, C3, and the amplifier output, the drain, is connected to the high impedance top of the LC circuit. The resistor R1 sets the operating point to 0.5 mA drain current in the absence of oscillation. The output is at the low impedance tap and can drive some load. Still, this circuit has low harmonic distortion. An additional variable capacitor between the drain of J1 and ground allows the frequency of the circuit to be changed. The load resistor RL is part of the simulation, not part of the circuit.
Theory.
One method of oscillator analysis is to determine the input impedance of an input port neglecting any reactive components. If the impedance yields a negative resistance term, oscillation is possible. This method will be used here to determine conditions of oscillation and the frequency of oscillation.
An ideal model is shown to the right. This configuration models the common collector circuit in the section above. For initial analysis, parasitic elements and device non-linearities will be ignored. These terms can be included later in a more rigorous analysis. Even with these approximations, acceptable comparison with experimental results is possible.
Ignoring the inductor, the input impedance at the base can be written as
formula_1
where formula_2 is the input voltage, and formula_3 is the input current. The voltage formula_4 is given by
formula_5
where formula_6 is the impedance of formula_7. The current flowing into formula_7 is formula_8, which is the sum of two currents:
formula_9
where formula_10 is the current supplied by the transistor. formula_10 is a dependent current source given by
formula_11
where formula_12 is the transconductance of the transistor. The input current formula_3 is given by
formula_13
where formula_14 is the impedance of formula_15. Solving for formula_4 and substituting above yields
formula_16
The input impedance appears as the two capacitors in series with the term formula_17, which is proportional to the product of the two impedances:
formula_18
If formula_14 and formula_6 are complex and of the same sign, then formula_17 will be a negative resistance. If the impedances for formula_14 and formula_6 are substituted, formula_17 is
formula_19
If an inductor is connected to the input, then the circuit will oscillate if the magnitude of the negative resistance is greater than the resistance of the inductor and any stray elements. The frequency of oscillation is as given in the previous section.
For the example oscillator above, the emitter current is roughly 1 mA. The transconductance is roughly 40 mS. Given all other values, the input resistance is roughly
formula_20
This value should be sufficient to overcome any positive resistance in the circuit. By inspection, oscillation is more likely for larger values of transconductance and smaller values of capacitance. A more complicated analysis of the common-base oscillator reveals that the low-frequency amplifier voltage gain must be at least 4 to achieve oscillation. The low-frequency gain is given by
formula_21
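The two conditions derived above can be checked numerically. In the sketch below the transconductance follows the 40 mS figure quoted for the example, but the capacitances, inductor, tank loss resistance, and equivalent load resistance R_p are assumed values, so the numbers differ from the −30 Ω quoted above.

```python
# Sketch: negative input resistance R_in = -g_m / (omega^2 * C1 * C2) at the tank's
# resonant frequency, and the low-frequency gain condition g_m * R_p >= 4.
import math

g_m = 40e-3                               # 40 mS, as quoted above
C1, C2, L = 100e-12, 1000e-12, 10e-6      # assumed tank components
R_loss = 5.0                              # assumed inductor/stray series resistance, ohms
R_p = 2.2e3                               # assumed equivalent parallel load resistance, ohms

Ct = C1 * C2 / (C1 + C2)
omega = 1.0 / math.sqrt(L * Ct)           # resonant angular frequency of the tank
R_in = -g_m / (omega**2 * C1 * C2)
print(f"R_in = {R_in:.0f} ohm; |R_in| exceeds tank losses: {abs(R_in) > R_loss}")
print(f"A_v = g_m * R_p = {g_m * R_p:.0f}; gain condition A_v >= 4: {g_m * R_p >= 4}")
```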
If the two capacitors are replaced by inductors, and magnetic coupling is ignored, the circuit becomes a Hartley oscillator. In that case, the input impedance is the sum of the two inductors and a negative resistance given by
formula_22
In the Hartley circuit, oscillation is more likely for larger values of transconductance and larger values of inductance.
The above analysis also describes the behavior of the Pierce oscillator. The Pierce oscillator, with two capacitors and one inductor, is equivalent to the Colpitts oscillator. Equivalence can be shown by choosing the junction of the two capacitors as the ground point. An electrical dual of the standard Pierce oscillator using two inductors and one capacitor is equivalent to the Hartley oscillator.
Working Principle.
A Colpitts oscillator is an electronic circuit that generates a sinusoidal waveform, typically in the radio frequency range. It uses an inductor in parallel with two capacitors in series to form a resonant tank circuit, which determines the oscillation frequency. The output signal from the tank circuit is fed back into the input of an amplifier, where it is amplified and fed back into the tank circuit. The feedback signal provides the necessary phase shift for sustained oscillation.
The working principle of a Colpitts oscillator can be summarized as follows: the amplifier replaces the energy dissipated in the tank circuit, while the capacitive voltage divider formed by the two tank capacitors returns a portion of the output to the amplifier input with the phase required for sustained oscillation. The frequency of oscillation is determined by the tank circuit:
formula_27
where formula_28 is the series combination of the two capacitors:
formula_29
The Colpitts oscillator is widely used in various applications, such as RF communication systems, signal generators, and electronic testing equipment. It has better frequency stability than the Hartley oscillator, which uses a tapped inductor instead of a tapped capacitor in the tank circuit. However, the Colpitts oscillator may require a higher supply voltage and a larger coupling capacitor than the Hartley oscillator.
Oscillation amplitude.
The amplitude of oscillation is generally difficult to predict, but it can often be accurately estimated using the describing function method.
For the common-base oscillator in Figure 1, this approach applied to a simplified model predicts an output (collector) voltage amplitude given by
formula_30
where formula_31 is the bias current, and formula_32 is the load resistance at the collector.
This assumes that the transistor does not saturate, the collector current flows in narrow pulses, and that the output voltage is sinusoidal (low distortion).
This approximate result also applies to oscillators employing different active devices, such as MOSFETs and vacuum tubes.
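A short worked estimate of the amplitude formula follows; all values below are assumed for illustration and are not taken from the article's example.

```python
# Sketch: describing-function amplitude estimate V_C = 2 * I_C * R_L * C2 / (C1 + C2).
I_C = 1e-3                     # 1 mA bias current (assumed)
R_L = 2.2e3                    # 2.2 kohm collector load (assumed)
C1, C2 = 100e-12, 1000e-12     # assumed capacitive divider
V_C = 2 * I_C * R_L * C2 / (C1 + C2)
print(f"V_C ~ {V_C:.1f} V peak")   # ~4.0 V with these values
```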
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f_0 = \\frac{1}{2\\pi \\sqrt{L \\frac{C_1 C_2}{C_1 + C_2}}}."
},
{
"math_id": 1,
"text": "Z_\\text{in} = \\frac{v_1}{i_1},"
},
{
"math_id": 2,
"text": "v_1"
},
{
"math_id": 3,
"text": "i_1"
},
{
"math_id": 4,
"text": "v_2"
},
{
"math_id": 5,
"text": "v_2 = i_2 Z_2,"
},
{
"math_id": 6,
"text": "Z_2"
},
{
"math_id": 7,
"text": "C_2"
},
{
"math_id": 8,
"text": "i_2"
},
{
"math_id": 9,
"text": "i_2 = i_1 + i_s,"
},
{
"math_id": 10,
"text": "i_s"
},
{
"math_id": 11,
"text": "i_s = g_m (v_1 - v_2),"
},
{
"math_id": 12,
"text": "g_m"
},
{
"math_id": 13,
"text": "i_1 = \\frac{v_1 - v_2}{Z_1},"
},
{
"math_id": 14,
"text": "Z_1"
},
{
"math_id": 15,
"text": "C_1"
},
{
"math_id": 16,
"text": "Z_\\text{in} = Z_1 + Z_2 + g_m Z_1 Z_2."
},
{
"math_id": 17,
"text": "R_\\text{in}"
},
{
"math_id": 18,
"text": "R_\\text{in} = g_m Z_1 Z_2."
},
{
"math_id": 19,
"text": "R_\\text{in} = \\frac{-g_m}{\\omega^2 C_1 C_2}."
},
{
"math_id": 20,
"text": "R_\\text{in} = -30\\ \\Omega."
},
{
"math_id": 21,
"text": "A_v = g_m R_p \\ge 4."
},
{
"math_id": 22,
"text": "R_\\text{in} = -g_m \\omega^2 L_1 L_2."
},
{
"math_id": 23,
"text": "R_1"
},
{
"math_id": 24,
"text": "R_2"
},
{
"math_id": 25,
"text": "C_\\text{in}"
},
{
"math_id": 26,
"text": "C_\\text{out}"
},
{
"math_id": 27,
"text": "f = \\frac{1}{2 \\pi \\sqrt{LC_t}}"
},
{
"math_id": 28,
"text": "C_t"
},
{
"math_id": 29,
"text": "C_t = \\frac{C_1 C_2}{C_1 + C_2}"
},
{
"math_id": 30,
"text": "\nV_C = 2 I_C R_L \\frac{C_2}{C_1 + C_2},\n"
},
{
"math_id": 31,
"text": "I_C"
},
{
"math_id": 32,
"text": "R_L"
}
] | https://en.wikipedia.org/wiki?curid=887179 |
887197 | Arbelos | Plane region bounded by three semicircles
In geometry, an arbelos is a plane region bounded by three semicircles with three apexes such that each corner of each semicircle is shared with one of the others (connected), all on the same side of a straight line (the "baseline") that contains their diameters.
The earliest known reference to this figure is in Archimedes's "Book of Lemmas", where some of its mathematical properties are stated as Propositions 4 through 8. The word "arbelos" is Greek for 'shoemaker's knife'. The figure is closely related to the Pappus chain.
Properties.
Two of the semicircles are necessarily concave, with arbitrary diameters a and b; the third semicircle is convex, with diameter "a"+"b".
Area.
The area of the arbelos is equal to the area of a circle with diameter AH, where H is the point at which the perpendicular to BC at A meets the outer semicircle.
Proof: For the proof, reflect the arbelos over the line through the points B and C, and observe that twice the area of the arbelos is what remains when the areas of the two smaller circles (with diameters BA, AC) are subtracted from the area of the large circle (with diameter BC). Since the area of a circle is proportional to the square of the diameter (Euclid's Elements, Book XII, Proposition 2; we do not need to know that the constant of proportionality is π/4), the problem reduces to showing that formula_0. The length BC equals the sum of the lengths BA and AC, so this equation simplifies algebraically to the statement that formula_1. Thus the claim is that the length of the segment AH is the geometric mean of the lengths of the segments BA and AC. Now (see Figure) the triangle BHC, being inscribed in the semicircle, has a right angle at the point H (Euclid, Book III, Proposition 31), and consequently AH is indeed a "mean proportional" between BA and AC (Euclid, Book VI, Proposition 8, Porism). This proof approximates the ancient Greek argument; Harold P. Boas cites a paper of Roger B. Nelsen who implemented the idea as the following proof without words.
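A small numerical check of the result (not part of the proof) confirms that the arbelos area equals the area of the circle on AH for arbitrary radii, using formula_1:

```python
# Sketch: verify numerically that the arbelos area equals the area of the circle
# with diameter AH, where |AH|^2 = |BA| * |AC|, for a few random choices of |BA|, |AC|.
import math
import random

random.seed(1)
for _ in range(3):
    a = random.uniform(0.1, 5.0)                          # |BA|
    b = random.uniform(0.1, 5.0)                          # |AC|
    arbelos = (math.pi / 8) * ((a + b)**2 - a**2 - b**2)  # big half-disc minus two small ones
    AH = math.sqrt(a * b)                                 # geometric mean
    circle = math.pi * AH**2 / 4                          # circle with diameter AH
    print(abs(arbelos - circle) < 1e-12)                  # True, True, True
```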
Rectangle.
Let D and E be the points where the segments BH and CH intersect the semicircles AB and AC, respectively. The quadrilateral ADHE is actually a rectangle.
"Proof": ∠BDA, ∠BHC, and ∠AEC are right angles because they are inscribed in semicircles (by Thales's theorem). The quadrilateral ADHE therefore has three right angles, so it is a rectangle. "Q.E.D."
Tangents.
The line DE is tangent to semicircle BA at D and semicircle AC at E.
"Proof": Since ∠BDA is a right angle, ∠DBA equals minus ∠DAB. However, ∠DAH also equals minus ∠DAB (since ∠HAB is a right angle). Therefore triangles DBA and DAH are similar. Therefore ∠DIA equals ∠DOH, where I is the midpoint of and O is the midpoint of . But ∠AOH is a straight line, so ∠DOH and ∠DOA are supplementary angles. Therefore the sum of ∠DIA and ∠DOA is π. ∠IAO is a right angle. The sum of the angles in any quadrilateral is 2π, so in quadrilateral IDOA, ∠IDO must be a right angle. But ADHE is a rectangle, so the midpoint O of (the rectangle's diagonal) is also the midpoint of (the rectangle's other diagonal). As I (defined as the midpoint of ) is the center of semicircle BA, and angle ∠IDE is a right angle, then DE is tangent to semicircle BA at D. By analogous reasoning DE is tangent to semicircle AC at E. "Q.E.D."
Archimedes' circles.
The altitude AH divides the arbelos into two regions, each bounded by a semicircle, a straight line segment, and an arc of the outer semicircle. The circles inscribed in each of these regions, known as the Archimedes' circles of the arbelos, have the same size.
Variations and generalisations.
The parbelos is a figure similar to the arbelos, that uses parabola segments instead of half circles. A generalisation comprising both arbelos and parbelos is the "f"-belos, which uses a certain type of similar differentiable functions.
In the Poincaré half-plane model of the hyperbolic plane, an arbelos models an ideal triangle.
Etymology.
The name "arbelos" comes from Greek ἡ ἄρβηλος "he árbēlos" or ἄρβυλος "árbylos", meaning "shoemaker's knife", a knife used by cobblers from antiquity to the current day, whose blade is said to resemble the geometric figure. | [
{
"math_id": 0,
"text": "2|AH|^2 = |BC|^2 - |AC|^2 - |BA|^2"
},
{
"math_id": 1,
"text": "|AH|^2 = |BA||AC|"
}
] | https://en.wikipedia.org/wiki?curid=887197 |
8874570 | Spray nozzle | Device that facilitates dispersion of liquid into a spray
A spray nozzle or atomizer is a device that facilitates the dispersion of a liquid by the formation of a spray. The production of a spray requires the fragmentation of liquid structures, such as liquid sheets or ligaments, into droplets, often by using kinetic energy to overcome the cost of creating additional surface area. A wide variety of spray nozzles exist, that make use of one or multiple liquid breakup mechanisms, which can be divided into three categories: liquid sheet breakup, jets and capillary waves. Spray nozzles are of great importance for many applications, where the spray nozzle is designed to have the right spray characteristics.
Spray nozzles can have one or more outlets; a multiple outlet nozzle is known as a compound nozzle. Multiple outlets on nozzles are present on spray balls, which have been used in the brewing industry for many years for cleaning casks and kegs. Spray nozzles range from those for heavy duty industrial uses to light duty spray cans or spray bottles.
Single-fluid nozzles.
Single-fluid or hydraulic spray nozzles utilize the kinetic energy imparted to the liquid to break it up into droplets. This most widely used type of spray nozzle is more energy efficient at producing surface area than most other types. As the fluid pressure increases, the flow through the nozzle increases, and the drop size decreases. Many configurations of single fluid nozzles are used depending on the spray characteristics desired.
Plain-orifice.
The simplest single fluid nozzle is a plain orifice nozzle as shown in the diagram. This nozzle often produces little if any atomization, but directs the stream of liquid. If the pressure drop is high, at least , the material is often finely atomized, as in a diesel injector. At lower pressures, this type of nozzle is often used for tank cleaning, either as a fixed position compound spray nozzle or as a rotary nozzle.
Shaped-orifice.
The shaped orifice uses a semi spherical shaped inlet and a "V" notched outlet to cause the flow to spread out on the axis of the V notch. A flat fan spray results which is useful for many spray applications, such as spray painting.
Surface-impingement single-fluid.
A surface impingement nozzle causes a stream of liquid to impinge on a surface resulting in a sheet of liquid that breaks up into small drops. This flat fan spray pattern nozzle is used in many applications ranging from applying agricultural herbicides to painting.
The impingement surface can be formed in a spiral to yield a spiral shaped sheet approximating a full cone spray pattern or a hollow-cone spray pattern.
The spiral design generally produces a smaller drop size than pressure swirl type nozzle design, for a given pressure and flow rate. This design is clog resistant due to the large free passage.
Common applications include gas scrubbing applications (e.g., flue-gas desulfurization where the smaller droplets often offer superior performance) and fire fighting (where the mix of droplet densities allow spray penetration through strong thermal currents).
Pressure-swirl single-fluid.
Pressure-swirl spray nozzles are high-performance (small drop size) devices with one configuration shown. The stationary core induces a rotary fluid motion which causes the swirling of the fluid in the swirl chamber. A film is discharged from the perimeter of the outlet orifice producing a characteristic hollow cone spray pattern. Air or other surrounding gas is drawn inside the swirl chamber to form an air core within the swirling liquid. Many configurations of fluid inlets are used to produce this hollow cone pattern depending on the nozzle capacity and materials of construction. The uses of this nozzle include evaporative cooling and spray drying.
Solid-cone single-fluid.
One of the configurations of the solid cone spray nozzle is shown in a schematic diagram. A swirling liquid motion is induced with the vane structure, however; the discharge flow fills the entire outlet orifice. For the same capacity and pressure drop, a full cone nozzle will produce a larger drop size than a hollow cone nozzle. The coverage is the desired feature for such a nozzle, which is often used for applications to distribute fluid over an area.
Compound.
A compound nozzle is a type of nozzle in which several individual single or two fluid nozzles are incorporated into one nozzle body, as shown below. This allows design control of drop size and spray coverage angle.
Two-fluid nozzles.
Two-fluid nozzles atomize by causing the interaction of high velocity gas and liquid. Compressed air is most often used as the atomizing gas, but sometimes steam or other gases are used. The many varied designs of two-fluid nozzles can be grouped into internal mix or external mix depending on the mixing point of the gas and liquid streams relative to the nozzle face.
Internal-mix two-fluid.
Internal mix nozzles contact fluids inside the nozzle; one configuration is shown in the figure below. Shearing between high velocity gas and low velocity liquid disintegrates the liquid stream into droplets, producing a high velocity spray. This type of nozzle tends to use less atomizing gas than an external mix atomizer and is better suited to higher viscosity streams. Many compound internal-mix nozzles are commercially used; e.g., for fuel oil atomization.
External-mix two-fluid.
External mix nozzles contacts fluids outside the nozzle as shown in the schematic diagram. This type of spray nozzle may require more atomizing air and a higher atomizing air pressure drop because the mixing and atomization of liquid takes place outside the nozzle. The liquid pressure drop is lower for this type of nozzle, sometimes drawing liquid into the nozzle due to the suction caused by the atomizing air nozzles (siphon nozzle). If the liquid to be atomized contains solids an external mix atomizer may be preferred. This spray may be shaped to produce different spray patterns. A flat pattern is formed with additional air ports to flatten or reshape the circular spray cross-section discharge.
Control of two-fluid.
Many applications use two-fluid nozzles to achieve a controlled small drop size over a range of operation. Each nozzle has a performance curve, and the liquid and gas flow rates determine the drop size. Excessive drop size can lead to catastrophic equipment failure or may have an adverse effect on the process or product. For example, the gas conditioning tower in a cement plant often utilizes evaporative cooling caused by water atomized by two-fluid nozzles into the dust laden gas. If drops do not completely evaporate and strike a vessel wall, dust will accumulate, resulting in the potential for flow restriction in the outlet duct, disrupting the plant operation.
Rotary atomizers.
Rotary atomizers use a high speed rotating disk, cup or wheel to discharge liquid at high speed to the perimeter, forming a hollow cone spray. The rotational speed controls the drop size. Spray drying and spray painting are the most important and common uses of this technology. They can also be automatic.
Ultrasonic atomizers.
This type of spray nozzle utilizes high frequency (20–180 kHz) vibration to produce narrow drop-size distribution and low velocity spray from a liquid. The vibration of a piezoelectric crystal causes capillary waves on the nozzle surface liquid film. An ultrasonic nozzle can be key to high transfer efficiency and process stability as they are very hard to clog. They are particularly useful in medical device coatings for their reliability.
Electrostatic.
Electrostatic charging of sprays is very useful for high transfer efficiency. Examples are the industrial spraying of coatings (paint) and applying lubricant oils. The charging is at high voltage (20–40 kV) but low current.
Performance factors.
Liquid properties.
Almost all drop size data supplied by nozzle manufacturers are based on spraying water under laboratory conditions. The effect of liquid properties should be understood and accounted for when selecting a nozzle for a process that is drop size sensitive.
Temperature.
Liquid temperature changes do not directly affect nozzle performance, but can affect viscosity, surface tension, and specific gravity, which can then influence spray nozzle performance.
Specific gravity.
Specific gravity is the ratio of the mass of a given volume of liquid to the mass of the same volume of water. In spraying, the main effect of the specific gravity Sg of a liquid other than water is on the capacity of the spray nozzle. All vendor-supplied performance data for nozzles are based on spraying water. To determine the volumetric flowrate Q of a liquid other than water, the following equation should be used.
formula_0
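As a minimal illustration of this correction (not from the source; the nozzle rating and specific gravity below are hypothetical), the conversion can be computed as follows:

```python
import math

def corrected_flowrate(q_water, specific_gravity):
    """Volumetric flowrate of a liquid other than water, given the nozzle's
    water-rated capacity at the same pressure drop (Q_f = Q_water * sqrt(1/Sg))."""
    return q_water * math.sqrt(1.0 / specific_gravity)

# Hypothetical example: a nozzle rated at 10 L/min on water, spraying a brine with Sg = 1.2.
print(round(corrected_flowrate(10.0, 1.2), 2))  # ~9.13 L/min
```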
Viscosity.
Dynamic viscosity is defined as the property of a liquid that resists change in the shape or arrangement of its elements during flow. Liquid viscosity primarily affects spray pattern formation and drop size. Liquids with a high viscosity require a higher minimum pressure to begin spray pattern formation and yield narrower spray angles compared to water.
Surface tension.
The surface of a liquid tends to assume the smallest possible area, acting as a membrane under tension. Any portion of the liquid surface exerts a tension upon adjacent portions or upon other objects that it contacts. This force is in the plane of the surface, and its amount per unit of length is the surface tension. The value for water is about 0.073 N/m at room temperature. The main effects of surface tension are on minimum operating pressure, spray angle, and drop size. Surface tension is more apparent at low operating pressures. A higher surface tension reduces the spray angle, particularly on hollow cone nozzles. Low surface tensions can allow nozzles to be operated at lower pressures.
Nozzle wear.
Nozzle wear is indicated by an increase in nozzle capacity and by a change in the spray pattern, in which the distribution (uniformity of spray pattern) deteriorates and drop size increases. Choice of a wear-resistant material of construction increases nozzle life. Because many single fluid nozzles are used to meter flows, worn nozzles result in excessive liquid usage.
Material of construction.
The material of construction is selected based on the fluid properties of the liquid that is to be sprayed and the environment surrounding the nozzle. Spray nozzles are most commonly fabricated from metals, such as brass, stainless steel, and nickel alloys, but plastics such as PTFE and PVC and ceramics (alumina and silicon carbide) are also used. Several factors must be considered, including erosive wear, chemical attack, and the effects of high temperature.
Applications.
Automotive coating: Automotive coating demands droplets of controlled size deposited uniformly on the substrate. Spray technology is especially prominent in the base coat and clear coat processes, which form the last stages of automotive coating. Among others, rotary bells mounted on robots and HVLP (high volume, low pressure) sprayers are widely used to paint car bodywork during manufacture. Agricultural spraying may involve hydraulic, twin-fluid and rotary nozzles, discussed further under pesticide application.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{Q_f} = {Q_{water}}\\sqrt \\frac{1}{Sg}"
}
] | https://en.wikipedia.org/wiki?curid=8874570 |
8875186 | Moyal bracket | Suitably normalized antisymmetrization of the phase-space star product
In physics, the Moyal bracket is the suitably normalized antisymmetrization of the phase-space star product.
The Moyal bracket was developed in about 1940 by José Enrique Moyal, but Moyal only succeeded in publishing his work in 1949 after a lengthy dispute with Paul Dirac. In the meantime this idea was independently introduced in 1946 by Hip Groenewold.
Overview.
The Moyal bracket is a way of describing the commutator of observables in the phase space formulation of quantum mechanics when these observables are described as functions on phase space. It relies on schemes for identifying functions on phase space with quantum observables, the most famous of these schemes being the Wigner–Weyl transform. It underlies Moyal’s dynamical equation, an equivalent formulation of Heisenberg’s quantum equation of motion, thereby providing the quantum generalization of Hamilton’s equations.
Mathematically, it is a deformation of the phase-space Poisson bracket (essentially an extension of it), the deformation parameter being the reduced Planck constant ħ. Thus, its group contraction "ħ"→0 yields the Poisson bracket Lie algebra.
Up to formal equivalence, the Moyal Bracket is the "unique one-parameter Lie-algebraic deformation" of the Poisson bracket. Its algebraic isomorphism to the algebra of commutators bypasses the negative result of the Groenewold–van Hove theorem, which precludes such an isomorphism for the Poisson bracket, a question implicitly raised by Dirac in his 1926 doctoral thesis, the "method of classical analogy" for quantization.
For instance, in a two-dimensional flat phase space, and for the Weyl-map correspondence, the Moyal bracket reads,
formula_0
where ★ is the star-product operator in phase space (cf. Moyal product), while f and g are differentiable phase-space functions, and {"f", "g"} is their Poisson bracket.
More specifically, in operational calculus language, this equals
formula_1
The left & right arrows over the partial derivatives denote the left & right partial derivatives. Sometimes the Moyal bracket is referred to as the "Sine bracket".
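For polynomial symbols the series can be evaluated directly. The following SymPy sketch (an illustration, not part of the source; the truncation order and test functions are chosen arbitrarily) expands the sine bracket in powers of ħ and compares the leading term with the Poisson bracket:

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def d(f, var, k):
    """k-fold partial derivative (k may be zero)."""
    return sp.diff(f, var, k) if k else f

def bidiff(f, g, n):
    """n-th power of the Poisson bidifferential operator applied to the pair (f, g)."""
    return sum((-1)**k * sp.binomial(n, k)
               * d(d(f, x, n - k), p, k) * d(d(g, p, n - k), x, k)
               for k in range(n + 1))

def moyal_bracket(f, g, terms=3):
    """Sine (Moyal) bracket, truncated after the given number of odd-order terms."""
    return sp.expand(sum((-1)**j * (hbar/2)**(2*j) / sp.factorial(2*j + 1)
                         * bidiff(f, g, 2*j + 1) for j in range(terms)))

def poisson(f, g):
    return sp.expand(sp.diff(f, x)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, x))

f, g = x**2 * p, x * p**2
print(moyal_bracket(f, g))                               # Poisson bracket plus an O(hbar**2) correction
print(poisson(f, g))                                     # 3*p**2*x**2
print(moyal_bracket(x**2, p**2) - poisson(x**2, p**2))   # 0: exact for quadratic symbols
```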
A popular (Fourier) integral representation for it, introduced by George Baker is
formula_2
Each correspondence map from phase space to Hilbert space induces a characteristic "Moyal" bracket (such as the one illustrated here for the Weyl map). All such Moyal brackets are "formally equivalent" among themselves, in accordance with a systematic theory.
The Moyal bracket specifies the eponymous infinite-dimensional
Lie algebra—it is antisymmetric in its arguments f and g, and satisfies the Jacobi identity.
The corresponding abstract Lie algebra is realized by "Tf ≡ f ★", so that
formula_3
On a 2-torus phase space, "T" 2, with periodic
coordinates x and p, each in [0,2"π"], and integer mode indices "mi" , for basis functions exp("i" ("m"1"x"+"m"2"p")), this Lie algebra reads,
formula_4
which reduces to "SU"("N") for integer "N" ≡ 4"π/ħ".
"SU"("N") then emerges as a deformation of "SU"(∞), with deformation parameter 1/"N".
Generalization of the Moyal bracket for quantum systems with second-class constraints involves an operation on equivalence classes of functions in phase space, which can be considered as a quantum deformation of the Dirac bracket.
Sine bracket and cosine bracket.
Next to the sine bracket discussed, Groenewold further introduced the cosine bracket, elaborated by Baker,
formula_5
Here, again, ★ is the star-product operator in phase space, f and g are differentiable phase-space functions, and "f" "g" is the ordinary product.
The sine and cosine brackets are, respectively, the results of antisymmetrizing and symmetrizing the star product. Thus, as the sine bracket is the Wigner map of the commutator, the cosine bracket is the Wigner image of the anticommutator in standard quantum mechanics. Similarly, as the Moyal bracket equals the Poisson bracket up to higher orders of ħ, the cosine bracket equals the ordinary product up to higher orders of ħ. In the classical limit, the Moyal bracket helps reduction to the Liouville equation (formulated in terms of the Poisson bracket), as the cosine bracket leads to the classical Hamilton–Jacobi equation.
The sine and cosine bracket also stand in relation to equations of a purely algebraic description of quantum mechanics. | [
{
"math_id": 0,
"text": "\\begin{align}\n\\{\\{f,g\\}\\} & \\stackrel{\\mathrm{def}}{=}\\ \\frac{1}{i\\hbar}(f\\star g-g\\star f) \\\\\n & = \\{f,g\\} + O(\\hbar^2), \\\\\n\\end{align}"
},
{
"math_id": 1,
"text": "\\{\\{f,g\\}\\}\\ =\n\\frac{2}{\\hbar} ~ f(x,p)\\ \\sin \\left ( {{\\tfrac{\\hbar }{2}}(\\overleftarrow{\\partial }_x\n\\overrightarrow{\\partial }_{p}-\\overleftarrow{\\partial }_{p}\\overrightarrow{\\partial }_{x})} \\right ) \n\\ g(x,p)."
},
{
"math_id": 2,
"text": "\\{ \\{ f,g \\} \\}(x,p) = {2 \\over \\hbar^3 \\pi^2 } \\int dp' \\, dp'' \\, dx' \\, dx'' f(x+x',p+p') g(x+x'',p+p'')\\sin \\left( \\tfrac{2}{\\hbar} (x'p''-x''p')\\right)~."
},
{
"math_id": 3,
"text": " [ T_f ~, T_g ] = T_{i\\hbar \\{ \\{ f,g \\} \\} }. "
},
{
"math_id": 4,
"text": "[ T_{m_1,m_2} ~ , T_{n_1,n_2} ] = \n2i \\sin \\left (\\tfrac{\\hbar}{2}(n_1 m_2 - n_2 m_1 )\\right ) ~ T_{m_1+n_1,m_2+ n_2}, ~\n"
},
{
"math_id": 5,
"text": "\\begin{align}\n\\{ \\{ \\{f ,g\\} \\} \\} & \\stackrel{\\mathrm{def}}{=}\\ \\tfrac{1}{2}(f\\star g+g\\star f) = f g + O(\\hbar^2). \\\\\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=8875186 |
887545 | Coupling (probability) | Proof technique in probability theory
In probability theory, coupling is a proof technique that allows one to compare two unrelated random variables (distributions) X and Y by creating a random vector W whose marginal distributions correspond to X and Y respectively. The choice of W is generally not unique, and the whole idea of "coupling" is about making such a choice so that X and Y can be related in a particularly desirable way.
Definition.
Using the standard formalism of probability theory, let formula_0 and formula_1 be two random variables defined on probability spaces formula_2 and formula_3. Then a coupling of formula_0 and formula_1 is a "new" probability space formula_4 over which there are two random variables formula_5 and formula_6 such that formula_5 has the same distribution as formula_0 while formula_6 has the same distribution as formula_1.
An interesting case is when formula_5 and formula_6 are "not" independent.
Examples.
Random walk.
Assume two particles "A" and "B" perform a simple random walk in two dimensions, but they start from different points. The simplest way to couple them is simply to force them to walk together. On every step, if "A" walks up, so does "B", if "A" moves to the left, so does "B", etc. Thus, the difference between the two particles' positions stays fixed. As far as "A" is concerned, it is doing a perfect random walk, while "B" is the copycat. "B" holds the opposite view, i.e. that it is, in effect, the original and that "A" is the copy. And in a sense they both are right. In other words, any mathematical theorem, or result that holds for a regular random walk, will also hold for both "A" and "B".
Consider now a more elaborate example. Assume that "A" starts from the point (0,0) and "B" from (10,10). First couple them so that they walk together in the vertical direction, i.e. if "A" goes up, so does "B", etc., but are mirror images in the horizontal direction i.e. if "A" goes left, "B" goes right and vice versa. We continue this coupling until "A" and "B" have the same horizontal coordinate, or in other words are on the vertical line (5,"y"). If they never meet, we continue this process forever (the probability of that is zero, though). After this event, we change the coupling rule. We let them walk together in the horizontal direction, but in a mirror image rule in the vertical direction. We continue this rule until they meet in the vertical direction too (if they do), and from that point on, we just let them walk together.
This is a coupling in the sense that neither particle, taken on its own, can "feel" anything we did. Neither the fact that the other particle follows it in one way or the other, nor the fact that we changed the coupling rule or when we did it. Each particle performs a simple random walk. And yet, our coupling rule forces them to meet almost surely and to continue from that point on together permanently. This allows one to prove many interesting results that say that "in the long run", it is not important where you started in order to obtain that particular result.
Biased coins.
Assume two biased coins, the first with probability "p" of turning up heads and the second with probability "q" > "p" of turning up heads. Intuitively, if both coins are tossed the same number of times, we should expect the first coin turns up fewer heads than the second one. More specifically, for any fixed "k", the probability that the first coin produces at least "k" heads should be less than the probability that the second coin produces at least "k" heads. However proving such a fact can be difficult with a standard counting argument. Coupling easily circumvents this problem.
Let "X"1, "X"2, ..., "X""n" be indicator variables for heads in a sequence of flips of the first coin. For the second coin, define a new sequence "Y"1, "Y"2, ..., "Y""n" such that
Then the sequence of "Yi" has exactly the probability distribution of tosses made with the second coin. However, because "Yi" depends on "Xi", a toss by toss comparison of the two coins is now possible. That is, for any "k" ≤ "n"
formula_7
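A small Monte Carlo sketch of this coupling (illustrative only; the values of "p", "q" and the sample size are arbitrary) builds both sequences on the same probability space and checks that each keeps its own marginal distribution while the second coin dominates the first toss by toss:

```python
import random

def coupled_tosses(p, q, n, seed=0):
    """Couple a p-coin (X) and a q-coin (Y), q > p, so that Y_i >= X_i for every toss."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = 1 if rng.random() < p else 0
        if x == 1:
            y = 1                                             # Y copies every head of X
        else:
            y = 1 if rng.random() < (q - p) / (1 - p) else 0  # extra heads for Y only
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = coupled_tosses(p=0.3, q=0.5, n=100_000)
print(sum(xs) / len(xs), sum(ys) / len(ys))   # ~0.3 and ~0.5: the marginals are preserved
print(all(y >= x for x, y in zip(xs, ys)))    # True: toss-by-toss domination
```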
Convergence of Markov Chains to a stationary distribution.
Initialize one process formula_8 outside the stationary distribution and initialize another process formula_9 inside the stationary distribution. Couple these two independent processes together formula_10. As you let time run these two processes will evolve independently. Under certain conditions, these two processes will eventually meet and can be considered the same process at that point. This means that the process outside the stationary distribution converges to the stationary distribution.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X_1"
},
{
"math_id": 1,
"text": "X_2"
},
{
"math_id": 2,
"text": "(\\Omega_1,F_1,P_1)"
},
{
"math_id": 3,
"text": "(\\Omega_2,F_2,P_2)"
},
{
"math_id": 4,
"text": "(\\Omega,F,P)"
},
{
"math_id": 5,
"text": "Y_1"
},
{
"math_id": 6,
"text": "Y_2"
},
{
"math_id": 7,
"text": " \\Pr(X_1 + \\cdots + X_n > k) \\leq \\Pr(Y_1 + \\cdots + Y_n > k)."
},
{
"math_id": 8,
"text": "X_n"
},
{
"math_id": 9,
"text": " Y_n "
},
{
"math_id": 10,
"text": " (X_n,Y_n) "
}
] | https://en.wikipedia.org/wiki?curid=887545 |
8876082 | Kirkman's schoolgirl problem | Combinatorics problem proposed by Thomas Penyngton Kirkman
Kirkman's schoolgirl problem is a problem in combinatorics proposed by Thomas Penyngton Kirkman in 1850 as Query VI in "The Lady's and Gentleman's Diary" (pg.48). The problem states:
Fifteen young ladies in a school walk out three abreast for seven days in succession: it is required to arrange them daily so that no two shall walk twice abreast.
Solutions.
A solution to this problem is an example of a "Kirkman triple system", which is a Steiner triple system having a "parallelism", that is, a partition of the blocks of the triple system into parallel classes which are themselves partitions of the points into disjoint blocks. Such Steiner systems that have a parallelism are also called "resolvable".
There are exactly seven non-isomorphic solutions to the schoolgirl problem, as originally listed by Frank Nelson Cole in "Kirkman Parades" in 1922. The seven solutions are summarized in the table below, denoting the 15 girls with the letters A to O.
From the number of automorphisms for each solution and the definition of an automorphism group, the total number of solutions "including" isomorphic solutions is therefore:
formula_0
formula_1
formula_2
formula_3.
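This arithmetic is easy to confirm directly, as in the short sketch below (an illustrative check, not part of the original enumeration):

```python
from fractions import Fraction
from math import factorial

total = factorial(15) * (2*Fraction(1, 168) + 2*Fraction(1, 24)
                         + 2*Fraction(1, 12) + Fraction(1, 21))
print(total)                                             # 404756352000
print(total == 2**10 * 3**5 * 5**3 * 7 * 11 * 13**2)     # True
```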
History.
The problem has a long and storied history. This section is based on historical work done at different times by Robin Wilson and by Louise Duffield Cummings.
Sylvester's problem.
James Joseph Sylvester in 1850 asked if 13 disjoint Kirkman systems of 35 triples each could be constructed to use all formula_5 triples on 15 girls. No solution was found until 1974 when RHF Denniston at the University of Leicester constructed it with a computer. Denniston's insight was to create a single-week Kirkman solution in such a way that it could be permuted according to a specific permutation of cycle length 13 to create disjoint solutions for subsequent weeks; he chose a permutation with a single 13-cycle and two fixed points like (1 2 3 4 5 6 7 8 9 10 11 12 13)(14)(15). Under this permutation, a triple like 123 would map to 234, 345, ... (11, 12, 13), (12, 13, 1) and (13, 1, 2) before repeating. Denniston thus classified the 455 triples into 35 rows of 13 triples each, each row being the orbit of a given triple under the permutation. In order to construct a Sylvester solution, no single-week Kirkman solution could use two triples from the same row, otherwise they would eventually collide when the permutation was applied to one of them. Solving Sylvester's problem is equivalent to finding one triple from each of the 35 rows such that the 35 triples together make a Kirkman solution. He then asked an Elliott 4130 computer to do exactly that search, which took 7 hours to find a first-week solution, with the 15 girls labeled "A" to "O".
He stopped the search at that point, not looking to establish uniqueness.
The American minimalist composer Tom Johnson composed a piece of music called "Kirkman's Ladies" based on Denniston's solution.
As of 2021, it is not known whether there are other non-isomorphic solutions to Sylvester's problem, or how many solutions there are.
9 schoolgirls and extensions.
The equivalent of the Kirkman problem for 9 schoolgirls results in S(2,3,9), the affine plane of order three, whose twelve triples fall into four parallel classes of three disjoint triples, one class for each day.
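One convenient representative can be generated from the affine plane AG(2,3) over the integers mod 3, as in the sketch below (illustrative only; the identification of the nine girls with the points of the plane is an arbitrary labelling):

```python
from itertools import combinations, product

points = list(product(range(3), range(3)))       # the 9 girls as points of AG(2,3)
directions = [(0, 1), (1, 0), (1, 1), (1, 2)]    # the 4 slopes; one parallel class each

days = []
for dx, dy in directions:
    lines = {frozenset(((x + t*dx) % 3, (y + t*dy) % 3) for t in range(3))
             for (x, y) in points}
    days.append(sorted(sorted(line) for line in lines))   # 3 disjoint triples per day

# Each day partitions the 9 girls, and every pair walks together exactly once overall.
assert all(len({g for line in day for g in line}) == 9 for day in days)
pairs = [pair for day in days for line in day for pair in combinations(line, 2)]
assert sorted(pairs) == sorted(combinations(points, 2))
print(len(days), "days, each a partition into triples:", days[0])
```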
The corresponding Sylvester problem asks for 7 different S(2,3,9) systems of 12 triples each, together covering all formula_7 triples. A solution was known to Bays (1917) and was found again, from a different direction, by Earl Kramer and Dale Mesner in a 1974 paper titled "Intersections Among Steiner Systems" (J Combinatorial Theory, Vol 16, pp. 273–285). There can indeed be 7 disjoint S(2,3,9) systems, and all such sets of 7 fall into two non-isomorphic categories of sizes 8640 and 6720, with 42 and 54 automorphisms respectively.
Solution 1 has 42 automorphisms, generated by the permutations (A I D C F H)(B G) and (C F D H E I)(B G). Applying the 9! = 362880 permutations of ABCDEFGHI, there are 362880/42 = 8640 different solutions all isomorphic to Solution 1.
Solution 2 has 54 automorphisms, generated by the permutations (A B D)(C H E)(F G I) and (A I F D E H)(B G). Applying the 9! = 362880 permutations of ABCDEFGHI, there are 362880/54 = 6720 different solutions all isomorphic to Solution 2.
Thus there are 8640 + 6720 = 15360 solutions in total, falling into two non-isomorphic categories.
In addition to S(2,3,9), Kramer and Mesner examined other systems that could be derived from S(5,6,12) and found that there could be up to 2 disjoint S(5,6,12) systems, up to 2 disjoint S(4,5,11) systems, and up to 5 disjoint S(3,4,10) systems. All such sets of 2 or 5 are respectively isomorphic to each other.
Larger systems and continuing research.
In the 21st century, analogues of Sylvester's problem have been visited by other authors under terms like "Disjoint Steiner systems" or "Disjoint Kirkman systems" or "LKTS" (Large Sets of Kirkman Triple Systems), for "n" > 15. Similar sets of disjoint Steiner systems have also been investigated for the S(5,8,24) Steiner system in addition to triple systems.
Galois geometry.
In 1910 the problem was addressed using Galois geometry by George Conwell.
The Galois field GF(2) with two elements is used with four homogeneous coordinates to form PG(3,2) which has 15 points, 3 points to a line, 7 points and 7 lines in a plane. A plane can be considered a complete quadrilateral together with the line through its diagonal points. Each point is on 7 lines, and there are 35 lines in all.
The lines of PG(3,2) are identified by their Plücker coordinates in PG(5,2) with 63 points, 35 of which represent lines of PG(3,2). These 35 points form the surface "S" known as the Klein quadric. For each of the 28 points off "S" there are 6 lines through it which do not intersect "S".
As there are seven days in a week, the heptad is an important part of the solution:
<templatestyles src="Template:Blockquote/styles.css" />
A heptad is determined by any two of its points. Each of the 28 points off "S" lies in two heptads. There are 8 heptads. The projective linear group PGL(3,2) is isomorphic to the alternating group on the 8 heptads.
The schoolgirl problem consists in finding seven lines in the 5-space which do not intersect and such that any two lines always have a heptad in common.
Spreads and packing.
In PG(3,2), a partition of the points into lines is called a spread, and a partition of the lines into spreads is called a "<dfn >packing</dfn>" or "<dfn >parallelism</dfn>". There are 56 spreads and 240 packings. When Hirschfeld considered the problem in his "Finite Projective Spaces of Three Dimensions" (1985), he noted that some solutions correspond to packings of PG(3,2), essentially as described by Conwell above, and he presented two of them.
Generalization.
The problem can be generalized to formula_8 girls, where formula_8 must be an odd multiple of 3 (that is formula_9), walking in triplets for formula_10 days, with the requirement, again, that no pair of girls walk in the same row twice. The solution to this generalisation is a Steiner triple system, an S(2, 3, 6"t" + 3) with parallelism (that is, one in which each of the 6"t" + 3 elements occurs exactly once in each block of 3-element sets), known as a "Kirkman triple system". It is this generalization of the problem that Kirkman discussed first, while the famous special case formula_11 was only proposed later. A complete solution to the general case was published by D. K. Ray-Chaudhuri and R. M. Wilson in 1968, though it had already been solved by Lu Jiaxi () in 1965, but had not been published at that time.
Many variations of the basic problem can be considered. Alan Hartman solves a problem of this type with the requirement that no trio walks in a row of four more than once using Steiner quadruple systems.
More recently a similar problem known as the Social Golfer Problem has gained interest; it deals with 32 golfers who want to play with different people each day in groups of 4, over the course of 10 days.
Because all of the groups are orthogonal, this process of organising a large group into smaller groups, in which no two people share the same group twice, can be referred to as orthogonal regrouping.
The Resolvable Coverings problem considers the general formula_8 girls, formula_12 groups case where each pair of girls must be in the same group at some point, but we want to use as few days as possible. This can, for example, be used to schedule a rotating table plan, in which each pair of guests must at some point be at the same table.
The Oberwolfach problem, of decomposing a complete graph into edge-disjoint copies of a given 2-regular graph, also generalizes Kirkman's schoolgirl problem. Kirkman's problem is the special case of the Oberwolfach problem in which the 2-regular graph consists of five disjoint triangles.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "15! \\times \\left(\\frac{1}{168} + \\frac{1}{168} + \\frac{1}{24} + \\frac{1}{24} + \\frac{1}{12} + \\frac{1}{12} + \\frac{1}{21}\\right)"
},
{
"math_id": 1,
"text": "= 15! \\times \\frac{13}{42}"
},
{
"math_id": 2,
"text": "= 404,756,352,000"
},
{
"math_id": 3,
"text": "= 2^{10} \\times 3^5 \\times 5^3 \\times 7 \\times 11 \\times 13^2"
},
{
"math_id": 4,
"text": " \\frac{n!}{q! (n-q)!} \\div \\frac{p!}{q! (p-q)!} "
},
{
"math_id": 5,
"text": "{15 \\choose 3} = 455"
},
{
"math_id": 6,
"text": "455 = 13 \\times 35"
},
{
"math_id": 7,
"text": "{9 \\choose 3} = 84"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "n \\equiv 3 \\pmod{6}"
},
{
"math_id": 10,
"text": "\\frac{1}{2}(n-1)"
},
{
"math_id": 11,
"text": "n=15"
},
{
"math_id": 12,
"text": "g"
}
] | https://en.wikipedia.org/wiki?curid=8876082 |
8877643 | Misner space | Abstract mathematical spacetime
Misner space is an abstract mathematical spacetime, first described by Charles W. Misner. It is also known as the Lorentzian orbifold formula_0. It is a simplified, two-dimensional version of the Taub–NUT spacetime. It contains a non-curvature singularity and is an important counterexample to various hypotheses in general relativity.
Metric.
The simplest description of Misner space is to consider two-dimensional Minkowski space with the metric
formula_1
with the identification of every pair of spacetime points by a constant boost
formula_2
It can also be defined directly on the cylinder manifold formula_3 with coordinates formula_4 by the metric
formula_5
The two coordinates are related by the map
formula_6
formula_7
and
formula_8
formula_9
Causality.
Misner space is a standard example for the study of causality since it contains both closed timelike curves and a compactly generated Cauchy horizon, while still being flat (since it is just Minkowski space). With the coordinates formula_4, the loop defined by formula_10, with tangent vector formula_11, has the norm formula_12, making it a closed null curve. This is the chronology horizon : there are no closed timelike curves in the region formula_13, while every point admits a closed timelike curve through it in the region formula_14.
This is due to the tipping of the light cones which, for formula_13, remain above lines of constant formula_15 but open up beyond that line for formula_14, causing any loop of constant formula_15 to be a closed timelike curve.
Chronology protection.
Misner space was the first spacetime where the notion of chronology protection was used for quantum fields, by showing that in the semiclassical approximation, the expectation value of the stress-energy tensor for the vacuum formula_16 is divergent.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}^{1,1}/\\text{boost}"
},
{
"math_id": 1,
"text": " ds^2= -dt^2 + dx^2,"
},
{
"math_id": 2,
"text": " (t, x) \\to (t \\cosh (\\pi) + x \\sinh(\\pi), x \\cosh (\\pi) + t \\sinh(\\pi))."
},
{
"math_id": 3,
"text": "\\mathbb{R} \\times S"
},
{
"math_id": 4,
"text": "(t', \\varphi)"
},
{
"math_id": 5,
"text": " ds^2= -2dt'd\\varphi + t'd\\varphi^2,"
},
{
"math_id": 6,
"text": " t= 2 \\sqrt{-t'} \\cosh\\left(\\frac{\\varphi}{2}\\right)"
},
{
"math_id": 7,
"text": " x= 2 \\sqrt{-t'} \\sinh\\left(\\frac{\\varphi}{2}\\right)"
},
{
"math_id": 8,
"text": " t'= \\frac{1}{4}(x^2 - t^2)"
},
{
"math_id": 9,
"text": " \\phi= 2 \\tanh^{-1}\\left(\\frac{x}{t}\\right)"
},
{
"math_id": 10,
"text": "t = 0, \\varphi = \\lambda"
},
{
"math_id": 11,
"text": "X = (0,1)"
},
{
"math_id": 12,
"text": "g(X,X) = 0"
},
{
"math_id": 13,
"text": "t < 0"
},
{
"math_id": 14,
"text": "t > 0"
},
{
"math_id": 15,
"text": "t"
},
{
"math_id": 16,
"text": "\\langle T_{\\mu\\nu} \\rangle_\\Omega"
}
] | https://en.wikipedia.org/wiki?curid=8877643 |
888140 | Stock and flow | Types of quantities in financial fields
Economics, business, accounting, and related fields often distinguish between quantities that are stocks and those that are flows. These differ in their units of measurement. A "stock" is measured at one specific time, and represents a quantity existing at that point in time (say, December 31, 2004), which may have accumulated in the past. A "flow" variable is measured over an interval of time. Therefore, a flow would be measured "per unit of time" (say a year). Flow is roughly analogous to rate or speed in this sense.
For example, U.S. nominal gross domestic product refers to a total number of dollars spent over a time period, such as a year. Therefore, it is a flow variable, and has units of dollars/year. In contrast, the U.S. nominal capital stock is the total value, in dollars, of equipment, buildings, and other real productive assets in the U.S. economy, and has units of dollars. The diagram provides an intuitive illustration of how the "stock" of capital currently available is increased by the "flow" of new investment and depleted by the "flow" of depreciation.
Stocks and flows in accounting.
Thus, a stock refers to the value of an asset at a balance date (or point in time), while a flow refers to the total value of transactions (sales or purchases, incomes or expenditures) during an accounting period. If the flow value of an economic activity is divided by the average stock value during an accounting period, we obtain a measure of the number of turnovers (or rotations) of a stock in that accounting period. Some accounting entries are normally always represented as a flow (e.g. profit or income), while others may be represented both as a stock or as a flow (e.g. capital).
A person or country might have stocks of money, financial assets, liabilities, wealth, real means of production, capital, inventories, and human capital (or labor power). Flow magnitudes include income, spending, saving, debt repayment, fixed investment, inventory investment, and labor utilization. These differ in their units of measurement.
Capital is a stock concept which yields a periodic income which is a flow concept.
Comparing stocks and flows.
Stocks and flows have different units and are thus not "commensurable" – they cannot be meaningfully "compared, equated, added, or subtracted." However, one may meaningfully take "ratios" of stocks and flows, or multiply or divide them. This is a point of some confusion for some economics students, as some confuse taking ratios (valid) with comparing (invalid).
The ratio of a stock over a flow has units of (units)/(units/time) = time. For example, the debt to GDP ratio has units of years (as GDP is measured in, for example, dollars per year whereas debt is measured in dollars), which yields the interpretation of the debt to GDP ratio as "number of years to pay off all debt, assuming all GDP devoted to debt repayment".
The ratio of a flow to a stock has units 1/time. For example, the velocity of money is defined as nominal GDP / nominal money supply; it has units of (dollars / year) / dollars = 1/year.
In discrete time, the change in a stock variable from one point in time to another point in time one time unit later (the first difference of the stock) is equal to the corresponding flow variable per unit of time. For example, if a country's stock of physical capital on January 1, 2010 is 20 machines and on January 1, 2011 is 23 machines, then the flow of net investment during 2010 was 3 machines per year. If it then has 27 machines on January 1, 2012, the flow of net investment during 2010 and 2011 averaged formula_0 machines per year.
In continuous time, the time derivative of a stock variable is a flow variable.
More general uses.
Stocks and flows also have natural meanings in many contexts outside of economics, business and related fields. The concepts apply to many conserved quantities such as energy, and to materials such as in stoichiometry, water reservoir management, and greenhouse gases and other durable pollutants that accumulate in the environment or in organisms. Climate change mitigation, for example, is a fairly straightforward stock and flow problem with the primary goal of reducing the stock (the concentration of durable greenhouse gases in the atmosphere) by manipulating the flows (reducing inflows such as greenhouse gas emissions into the atmosphere, and increasing outflows such as carbon dioxide removal). In living systems, such as the human body, energy homeostasis describes the linear relationship between flows (the food we eat and the energy we expend along with the wastes we excrete) and the stock (manifesting as our gain or loss of body weight over time). In Earth system science, many stock and flow problems arise, such as in the carbon cycle, the nitrogen cycle, the water cycle, and Earth's energy budget. Thus stocks and flows are the basic building blocks of system dynamics models. Jay Forrester originally referred to them as "levels" rather than stocks, together with "rates" or "rates of flow".
A stock (or "level variable") in this broader sense is some entity that is accumulated over time by inflows and/or depleted by outflows. Stocks can only be changed via flows. Mathematically a stock can be seen as an accumulation or integration of flows over time – with outflows subtracting from the stock. Stocks typically have a certain value at each moment of time – e.g. the number of population at a certain moment, or the quantity of water in a reservoir.
A flow (or "rate") changes a stock over time. Usually we can clearly distinguish inflows (adding to the stock) and outflows (subtracting from the stock). Flows typically are measured over a certain interval of time – e.g., the number of births over a day or month.
Synonyms for "stock" include "level", and synonyms for "flow" include "rate".
Calculus interpretation.
If the quantity of some "stock" variable at time formula_1 is formula_2, then the derivative formula_3 is the "flow" of changes in the stock. Likewise, the "stock" at some time formula_4 is the integral of the "flow" from some moment set as zero until time formula_4.
For example, if the capital stock formula_5 is increased gradually over time by a flow of gross investment formula_6 and decreased gradually over time by a flow of depreciation formula_7, then the instantaneous rate of change in the capital stock is given by
formula_8
where the notation formula_9 refers to the flow of net investment, which is the difference between gross investment and depreciation.
Example of dynamic stock and flow diagram.
Equations that change the two stocks via the flow are:
formula_10
formula_11
List of all the equations, in their order of execution in each time step, from time = 1 to 36:
formula_12
formula_13
formula_14
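A minimal discrete-time sketch of this example (illustrative only; the initial stock levels are arbitrary) runs the three equations in the stated order:

```python
import math

stock_a, stock_b = 10.0, 0.0                # arbitrary initial levels
for t in range(1, 37):                      # time = 1 to 36
    flow = math.sin(5 * t)                  # 1)   Flow = sin(5t)
    stock_a -= flow                         # 2.1) Stock A is drained by the flow
    stock_b += flow                         # 2.2) Stock B is filled by the flow
    print(t, round(flow, 3), round(stock_a, 3), round(stock_b, 3))

# Note that stock_a + stock_b stays constant: the flow only moves material between the stocks.
```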
History.
The distinction between a stock and a flow variable is elementary, and dates back centuries in accounting practice (the distinction between an asset and income, for instance). In economics, the distinction was formalized, and the terms were set, by Irving Fisher, who formalized capital (as a stock).
Polish economist Michał Kalecki emphasized the centrality of the distinction of stocks and flows, caustically calling economics "the science of confusing stocks with flows" in his critique of the quantity theory of money (circa 1936, frequently quoted by Joan Robinson).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "3 \\tfrac{1}{2}"
},
{
"math_id": 1,
"text": "\\,t\\,"
},
{
"math_id": 2,
"text": "\\,Q(t)\\,"
},
{
"math_id": 3,
"text": "\\,\\frac{dQ(t)}{dt}\\,"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": "\\,K(t)\\,"
},
{
"math_id": 6,
"text": "\\,I^g(t)\\,"
},
{
"math_id": 7,
"text": "\\,D(t)\\,"
},
{
"math_id": 8,
"text": "\\frac{dK(t)}{dt} = I^g(t) - D(t) = I^n(t)"
},
{
"math_id": 9,
"text": "\\,I^n(t)\\,"
},
{
"math_id": 10,
"text": " \\ \\text{Stock A} = \\int_0^t -\\text{Flow }\\,dt "
},
{
"math_id": 11,
"text": " \\ \\text{Stock B} = \\int_0^t \\text{Flow }\\,dt "
},
{
"math_id": 12,
"text": "1) \\ \\text{Flow}= \\sin( 5t ) "
},
{
"math_id": 13,
"text": "2.1) \\ \\text{Stock A}\\ -= \\text{Flow}\\ "
},
{
"math_id": 14,
"text": "2.2) \\ \\text{Stock B}\\ += \\text{Flow}\\ "
}
] | https://en.wikipedia.org/wiki?curid=888140 |
8881835 | Aortic valve area calculation | Measurement of the area of the heart's aortic valve
In cardiology, aortic valve area calculation is an indirect method of determining the area of the aortic valve of the heart. The calculated aortic valve orifice area is currently one of the measures for evaluating the severity of aortic stenosis. A valve area of less than 1.0 cm2 is considered to be severe aortic stenosis.
There are many ways to calculate the valve area of aortic stenosis. The most commonly used methods involve measurements taken during echocardiography. For interpretation of these values, the area is generally divided by the body surface area, to arrive at the patient's optimal aortic valve orifice area.
Planimetry.
Planimetry is the tracing out of the opening of the aortic valve in a still image obtained during echocardiographic acquisition during ventricular systole, when the valve is supposed to be open. While this method directly measures the valve area, the image may be difficult to obtain due to artifacts during echocardiography, and the measurements are dependent on the technician who has to manually trace the perimeter of the open aortic valve. Because of these reasons, planimetry of aortic valve is not routinely performed.
The continuity equation.
The continuity equation states that the flow in one area must equal the flow in a second area if there are no shunts between the two areas. In practical terms, the flow from the left ventricular outflow tract (LVOT) is compared to the flow at the level of the aortic valve. In echocardiography the aortic valve area is calculated using the velocity time integral (VTI) which is the most accurate method and preferred. The flow through the LVOT, or LV stroke volume (in cm3), can be calculated by measuring the LVOT diameter (in cm), squaring that value, multiplying the value by 0.78540 (which is π/4) giving a cross sectional area of the LVOT (in cm2) and multiplying that value by the LVOT VTI (in cm), measured on the spectral Doppler display using pulsed-wave Doppler. From these, it is easy to calculate the area (in cm2) of the aortic valve by simply dividing the LV stroke volume (in cm3) by the AV VTI (in cm) measured on the spectral Doppler display using continuous-wave Doppler.
Stroke volume = 0.785 (i.e. π/4) × LVOT Diameter² × LVOT VTI
Cross-sectional area of LVOT = 0.785 (i.e. π/4) × LVOT Diameter²
formula_0
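A short numerical sketch of the continuity-equation calculation (illustrative only; the measurements below are hypothetical, not patient data):

```python
import math

def aortic_valve_area(lvot_diameter_cm, lvot_vti_cm, av_vti_cm):
    """Continuity-equation aortic valve area, in cm^2."""
    lvot_area = math.pi / 4 * lvot_diameter_cm ** 2    # cross-sectional area of the LVOT
    stroke_volume = lvot_area * lvot_vti_cm            # flow through the LVOT, in cm^3
    return stroke_volume / av_vti_cm                   # the same flow crosses the aortic valve

# Hypothetical measurements: LVOT diameter 2.0 cm, LVOT VTI 20 cm, aortic valve VTI 80 cm.
print(round(aortic_valve_area(2.0, 20.0, 80.0), 2))    # ~0.79 cm^2, i.e. severe stenosis
```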
The weakest aspect of this calculation is the variability in measurement of LVOT area, because it involves squaring the LVOT dimension. Therefore, it is crucial for the sonographer to take great care in measuring the LVOT diameter.
Inaccuracies in using the continuity equation to calculate aortic valve area may arise when there is an error in measurement of the LVOT diameter. This is sometimes difficult to measure depending on the sonographic view and anatomy. If measured incorrectly, the effect on aortic valve area is amplified because the radius of the LVOT is squared. Additionally, estimation of aortic valve area and stenosis may be inaccurate in cases of subvalvular and supravalvular stenosis.
To verify a valve area obtained from echocardiographic and Doppler measures, especially if the value is in the range requiring surgery and cardiac output is low, left heart catheterization (the gold standard for true hemodynamics) should be performed and the Gorlin formula applied for validation, so that the patient does not undergo unneeded surgery.
The Gorlin equation.
The Gorlin equation states that the aortic valve area is equal to the flow through the aortic valve during ventricular systole divided by the systolic pressure gradient across the valve times a constant. The flow across the aortic valve is calculated by taking the cardiac output (measured in liters per minute) and dividing it by the heart rate (to give output per cardiac cycle) and then dividing it by the systolic ejection period measured in seconds per beat (to give flow per ventricular contraction).
formula_1
The Gorlin equation is related to flow across the valve. Because of this, the valve area may be erroneously calculated as stenotic if the flow across the valve is low (i.e. if the cardiac output is low). The measurement of the true gradient is accomplished by temporarily increasing the cardiac output by the infusion of positive inotropic agents, such as dobutamine.
The Hakki equation.
The Hakki equation is a simplification of the Gorlin equation, relying on the observation that in most cases, the numerical value of
formula_2. The resulting simplified formula is:
formula_3
The Agarwal-Okpara-Bao equation.
The Agarwal-Okpara-Bao equation is a new form of AVA evaluation equation named after Ramesh K. Agarwal, Emmanuel C. Okpara, and Guangyu Bao. It was derived from curve fitting of CFD simulation results and 80 clinical data points obtained by Minners, Allgeier, Gohlke-Baerwolf, Kienzle, Neumann, and Jander using a multi-objective genetic algorithm. The comparison of the results calculated from the Gorlin equation, the Agarwal-Okpara-Bao equation, and clinical data is shown in the figures on the right.
formula_4
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Aortic Valve Area (in cm}^2\\text{)} = {\\text{LVOT diameter}^2 \\cdot 0.78540 \\cdot \\text{LVOT VTI}\\over \\text{Aortic Valve VTI}}"
},
{
"math_id": 1,
"text": "\\text{Valve Area (in cm}^2\\text{)} =\n\\frac{\\text{Cardiac Output }(\\frac{\\text{ml}}{\\text{min}})}{\\text{Heart rate }(\\frac{\\text{beats}}{\\text{min}})\\cdot \\text{Systolic ejection period (s)}\\cdot 44.3 \\cdot \\sqrt{\\text{mean Gradient (mmHg)}}}"
},
{
"math_id": 2,
"text": "\\text{heart rate (bpm)} \\cdot \\text{systolic ejection period (s)} \\cdot 44.3 \\approx 1000"
},
{
"math_id": 3,
"text": "\\text{Aortic Valve area (in cm}^2\\text{)}\n\\approx\\frac{\\text{Cardiac Output} (\\frac{\\text{litre}}{\\text{min}})}{\\sqrt{\\text{Peak to Peak Gradient (mmHg)}}}"
},
{
"math_id": 4,
"text": "\\text{Valve Area (in cm}^2\\text{)} = \\text{(0.83}^2 + \\frac{\\frac{\\text{Q (}\\frac{ml}{min})}{\\text{60}}}{\\text{0.35 } \\cdot \\sqrt{\\text{mean Gradient (dynes/cm2)}}}\\text{)}^\\text{0.5} - \\text{0.87} "
}
] | https://en.wikipedia.org/wiki?curid=8881835 |
8883303 | Chip (CDMA) | Digital communications term
In digital communications, a chip is a pulse of a direct-sequence spread spectrum (DSSS) code, such as a pseudo-random noise (PN) code sequence used in direct-sequence code-division multiple access (CDMA) channel access techniques.
In a binary direct-sequence system, each chip is typically a rectangular pulse of +1 or −1 amplitude, which is multiplied by a data sequence (similarly +1 or −1 representing the message bits) and by a carrier waveform to make the transmitted signal. The chips are therefore just the bit sequence out of the code generator; they are called chips to avoid confusing them with message bits.
The chip rate of a code is the number of pulses per second (chips per second) at which the code is transmitted (or received). The chip rate is larger than the symbol rate, meaning that one symbol is represented by multiple chips. The ratio is known as the spreading factor (SF) or processing gain:
formula_0
Orthogonal variable spreading factor.
Orthogonal variable spreading factor (OVSF) is an implementation of code-division multiple access (CDMA) where before each signal is transmitted, the signal is spread over a wide spectrum range through the use of a user's code. Users' codes are carefully chosen to be mutually orthogonal to each other.
These codes are derived from an OVSF code tree, and each user is given a different code. An OVSF code tree is a complete binary tree that reflects the construction of Hadamard matrices.
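A minimal sketch of the usual doubling construction of an OVSF code tree (illustrative only; the ordering of the codes here is one common convention, not tied to any particular standard), together with a check that codes of the same length are mutually orthogonal:

```python
def ovsf_codes(sf):
    """All OVSF codes of spreading factor sf (a power of two), as lists of +1/-1 chips."""
    codes = [[1]]
    while len(codes[0]) < sf:
        next_level = []
        for c in codes:
            next_level.append(c + c)                   # left child:  (C, C)
            next_level.append(c + [-x for x in c])     # right child: (C, -C)
        codes = next_level
    return codes

codes = ovsf_codes(8)
for a in codes:
    for b in codes:
        dot = sum(x * y for x, y in zip(a, b))
        assert dot == (len(a) if a is b else 0)        # orthogonal at the same spreading factor
print(len(codes), "mutually orthogonal codes of length 8")
```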
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mbox{SF} = \\frac{\\mbox{chip rate}}{\\mbox{symbol rate}}"
}
] | https://en.wikipedia.org/wiki?curid=8883303 |
8884790 | Cutler's bar notation | Arithmetic notation system
In mathematics, Cutler's bar notation is a notation system for large numbers, introduced by Mark Cutler in 2004. The idea is based on iterated exponentiation in much the same way that exponentiation is iterated multiplication.
Introduction.
Ordinary exponentiation can be expressed as iterated multiplication:
formula_0
However, these expressions become arbitrarily large when dealing with systems such as Knuth's up-arrow notation. Take the following:
formula_1
Cutler's bar notation shifts these exponentials counterclockwise, forming formula_2. A bar is placed above the variable to denote this change. As such:
formula_3
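Numerically, the barred expression is just a right-associated power tower, as in the sketch below (illustrative only; only very small arguments can actually be evaluated before the values become unmanageable):

```python
def bar(a, b):
    """Cutler's b-bar-a: a power tower of b copies of a, built from the top down."""
    result = 1
    for _ in range(b):
        result = a ** result
    return result

print(bar(2, 1), bar(2, 2), bar(2, 3), bar(2, 4))   # 2, 4, 16, 65536
print(bar(3, 3))                                    # 3**(3**3) = 7625597484987
```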
This system becomes effective with multiple exponents, when regular denotation becomes too cumbersome.
formula_4
At any time, this can be further shortened by rotating the exponential counterclockwise once more.
formula_5
The same pattern could be iterated a fourth time, becoming formula_6. For this reason, it is sometimes referred to as Cutler's circular notation.
Advantages and drawbacks.
The Cutler bar notation can be used to easily express other notation systems in exponent form. It also allows for a flexible summarization of multiple copies of the same exponents, where any number of stacked exponents can be shifted counterclockwise and shortened to a single variable. The bar notation also allows for fairly rapid composition of very large numbers. For instance, the number formula_7 would contain more than a googolplex digits, while remaining fairly simple to write and remember.
However, the system reaches a problem when dealing with different exponents in a single expression. For instance, the expression formula_8 could not be summarized in bar notation. Additionally, the exponent can only be shifted thrice before it returns to its original position, making a five degree shift indistinguishable from a one degree shift. Some have suggested using a double and triple bar in subsequent rotations, though this presents problems when dealing with ten- and twenty-degree shifts.
Other equivalent notations for the same operations already exist without being limited to a fixed number of recursions, notably Knuth's up-arrow notation and hyperoperation notation. | [
{
"math_id": 0,
"text": "\n \\begin{matrix}\n a^b & = & \\underbrace{a_{} \\times a \\times\\dots \\times a} \\\\\n & & b\\mbox{ copies of }a\n \\end{matrix}\n "
},
{
"math_id": 1,
"text": "\n \\begin{matrix}\n & \\underbrace{a_{}^{a^{{}^{.\\,^{.\\,^{.\\,^a}}}}}} & \n\\\\ \n& b\\mbox{ copies of }a\n \\end{matrix}\n "
},
{
"math_id": 2,
"text": "{^b} \\bar a"
},
{
"math_id": 3,
"text": "\n \\begin{matrix}\n {^b} \\bar a = & \\underbrace{a_{}^{a^{{}^{.\\,^{.\\,^{.\\,^a}}}}}} & \n\\\\ \n & b\\mbox{ copies of }a\n \\end{matrix}\n "
},
{
"math_id": 4,
"text": "\n \\begin{matrix}\n ^{^b{b}} \\bar a = & \\underbrace{a_{}^{a^{{}^{.\\,^{.\\,^{.\\,^a}}}}}} & \n\\\\ \n & {{^b} \\bar a}\\mbox{ copies of }a\n \\end{matrix}\n "
},
{
"math_id": 5,
"text": "\n \\begin{matrix}\n \\underbrace{b_{}^{b^{{}^{.\\,^{.\\,^{.\\,^b}}}}}} \\bar a = {_c} \\bar a\n\\\\ \n c \\mbox{ copies of } b\n \\end{matrix}\n "
},
{
"math_id": 6,
"text": "\\bar a_{d}"
},
{
"math_id": 7,
"text": "\\bar {10}_{10}"
},
{
"math_id": 8,
"text": " ^{a^{b^{b^{c}}}}"
}
] | https://en.wikipedia.org/wiki?curid=8884790 |
8884972 | Kirchhoff equations | In fluid dynamics, the Kirchhoff equations, named after Gustav Kirchhoff, describe the motion of a rigid body in an ideal fluid.
formula_0
where formula_1 and formula_2 are the angular and linear velocity vectors at the point formula_3, respectively; formula_4 is the moment of inertia tensor, formula_5 is the body's mass; formula_6 is
a unit normal vector to the surface of the body at the point formula_3;
formula_7 is a pressure at this point; formula_8 and formula_9 are the hydrodynamic
torque and force acting on the body, respectively;
formula_10 and formula_11 likewise denote all other torques and forces acting on the
body. The integration is performed over the fluid-exposed portion of the
body's surface.
If the body is completely submerged in an infinitely large volume of irrotational, incompressible, inviscid fluid that is at rest at infinity, then the vectors formula_8 and formula_9 can be found via explicit integration, and the dynamics of the body is described by the Kirchhoff–Clebsch equations:
formula_12
formula_13
Their first integrals read
formula_14
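These conservation laws are easy to check numerically. The sketch below (illustrative only; it takes the special case B = 0 and k = l = 0 with arbitrary diagonal tensors A and C) integrates the equations in the momenta M = ∂L/∂ω and P = ∂L/∂v and verifies that J1 and J2 stay constant:

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0])     # assumed rotational tensor (arbitrary, positive definite)
C = np.diag([1.0, 1.5, 2.0])     # assumed translational tensor (arbitrary, positive definite)

def rhs(y):
    M, P = y[:3], y[3:]
    omega = np.linalg.solve(A, M)        # omega from M = A omega  (since B = 0, k = 0)
    v = np.linalg.solve(C, P)            # v     from P = C v      (since B = 0, l = 0)
    return np.concatenate([np.cross(M, omega) + np.cross(P, v),   # dM/dt
                           np.cross(P, omega)])                   # dP/dt

def rk4(y, h):
    k1 = rhs(y)
    k2 = rhs(y + h/2*k1)
    k3 = rhs(y + h/2*k2)
    k4 = rhs(y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

y = np.array([0.1, 0.2, 0.3, 1.0, 0.0, 0.5])          # arbitrary initial momenta (M, P)
j1, j2 = np.dot(y[:3], y[3:]), np.dot(y[3:], y[3:])   # initial values of J_1 and J_2
for _ in range(20000):
    y = rk4(y, 1e-3)
print(np.dot(y[:3], y[3:]) - j1, np.dot(y[3:], y[3:]) - j2)   # both drifts remain near zero
```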
Further integration produces explicit expressions for position and velocities. | [
{
"math_id": 0,
"text": "\n\\begin{align}\n{\\mathrm{d}\\over\\mathrm{d}t} {{\\partial T}\\over{\\partial \\boldsymbol\\omega}}\n& = {{\\partial T}\\over{\\partial \\boldsymbol\\omega}} \\times \\boldsymbol\\omega + {{\\partial\nT}\\over{\\partial \\mathbf v}} \\times \\mathbf v + \\mathbf Q_h + \\mathbf Q, \\\\[10pt]\n{\\mathrm{d}\\over\\mathrm{d}t} {{\\partial T}\\over{\\partial \\mathbf v}} \n& = {{\\partial T}\\over{\\partial \\mathbf v}} \\times \\boldsymbol\\omega + \\mathbf F_h + \\mathbf F, \\\\[10pt]\nT & = {1 \\over 2} \\left( \\boldsymbol\\omega^T \\tilde I \\boldsymbol\\omega + m v^2 \\right) \\\\[10pt]\n\\mathbf Q_h & = -\\int p \\mathbf x \\times\\hat\\mathbf n \\, d\\sigma, \\\\[10pt]\n\\mathbf F_h & = -\\int p \\hat\\mathbf n \\, d\\sigma\n\\end{align}\n"
},
{
"math_id": 1,
"text": "\\boldsymbol\\omega"
},
{
"math_id": 2,
"text": "\\mathbf v"
},
{
"math_id": 3,
"text": "\\mathbf x"
},
{
"math_id": 4,
"text": "\\tilde I"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "\\hat\\mathbf n"
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "\\mathbf Q_h"
},
{
"math_id": 9,
"text": "\\mathbf F_h"
},
{
"math_id": 10,
"text": "\\mathbf Q"
},
{
"math_id": 11,
"text": "\\mathbf F"
},
{
"math_id": 12,
"text": "\n{\\mathrm{d}\\over\\mathrm{d}t}\n{{\\partial L}\\over{\\partial \\boldsymbol\\omega}} = {{\\partial L}\\over{\\partial \\boldsymbol\\omega}} \\times \\boldsymbol\\omega + {{\\partial L}\\over{\\partial \\mathbf v}} \\times \\mathbf v, \\quad {\\mathrm{d}\\over\\mathrm{d}t}\n{{\\partial L}\\over{\\partial \\mathbf v}} = {{\\partial L}\\over{\\partial \\mathbf v}} \\times \\boldsymbol\\omega,\n"
},
{
"math_id": 13,
"text": "\nL(\\boldsymbol\\omega, \\mathbf v) = {1 \\over 2} (A \\boldsymbol\\omega,\\boldsymbol\\omega) + (B \\boldsymbol\\omega,\\mathbf v) + {1 \\over 2} (C \\mathbf v,\\mathbf v) + (\\mathbf k,\\boldsymbol\\omega) + (\\mathbf l,\\mathbf v). \n"
},
{
"math_id": 14,
"text": "\nJ_0 = \\left({{\\partial L}\\over{\\partial \\boldsymbol\\omega}}, \\boldsymbol\\omega \\right) + \\left({{\\partial L}\\over{\\partial \\mathbf v}}, \\mathbf v \\right) - L, \\quad\nJ_1 = \\left({{\\partial L}\\over{\\partial \\boldsymbol\\omega}},{{\\partial L}\\over{\\partial \\mathbf v}}\\right), \\quad J_2 = \\left({{\\partial L}\\over{\\partial \\mathbf v}},{{\\partial L}\\over{\\partial \\mathbf v}}\\right)\n. "
}
] | https://en.wikipedia.org/wiki?curid=8884972 |
888587 | Quantum logic gate | Basic circuit in quantum computing
In quantum computing and specifically the quantum circuit model of computation, a quantum logic gate (or simply quantum gate) is a basic quantum circuit operating on a small number of qubits. Quantum logic gates are the building blocks of quantum circuits, like classical logic gates are for conventional digital circuits.
Unlike many classical logic gates, quantum logic gates are reversible. It is possible to perform classical computing using only reversible gates. For example, the reversible Toffoli gate can implement all Boolean functions, often at the cost of having to use ancilla bits. The Toffoli gate has a direct quantum equivalent, showing that quantum circuits can perform all operations performed by classical circuits.
Quantum gates are unitary operators, and are described as unitary matrices relative to some orthonormal basis. Usually the "computational basis" is used, which unless comparing it with something, just means that for a "d"-level quantum system (such as a qubit, a quantum register, or qutrits and qudits) the orthonormal basis vectors are labeled formula_0, or use binary notation.
History.
The current notation for quantum gates was developed by many of the founders of quantum information science including Adriano Barenco, Charles Bennett, Richard Cleve, David P. DiVincenzo, Norman Margolus, Peter Shor, Tycho Sleator, John A. Smolin, and Harald Weinfurter, building on notation introduced by Richard Feynman in 1986.
Representation.
Quantum logic gates are represented by unitary matrices. A gate that acts on formula_1 qubits is represented by a formula_2 unitary matrix, and the set of all such gates with the group operation of matrix multiplication is the unitary group U(2"n"). The quantum states that the gates act upon are unit vectors in formula_3 complex dimensions, with the complex Euclidean norm (the 2-norm).66 The basis vectors (sometimes called "eigenstates") are the possible outcomes if the state of the qubits is measured, and a quantum state is a linear combination of these outcomes. The most common quantum gates operate on vector spaces of one or two qubits, just like the common classical logic gates operate on one or two bits.
Even though the quantum logic gates belong to continuous symmetry groups, real hardware is inexact and thus limited in precision. The application of gates typically introduces errors, and the quantum states' fidelities decrease over time. If error correction is used, the usable gates are further restricted to a finite set. Later in this article, this is ignored as the focus is on the ideal quantum gates' properties.
Quantum states are typically represented by "kets", from a notation known as bra–ket.
The vector representation of a single qubit is
formula_4
Here, formula_5 and formula_6 are the complex probability amplitudes of the qubit. These values determine the probability of measuring a 0 or a 1, when measuring the state of the qubit. See measurement below for details.
The value zero is represented by the ket formula_7, and the value one is represented by the ket formula_8.
The tensor product (or Kronecker product) is used to combine quantum states. The combined state for a qubit register is the tensor product of the constituent qubits. The tensor product is denoted by the symbol formula_9.
The vector representation of two qubits is:
formula_10
The action of the gate on a specific quantum state is found by multiplying the vector formula_11, which represents the state by the matrix formula_12 representing the gate. The result is a new quantum state formula_13:
formula_14
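A small NumPy sketch of this bookkeeping (illustrative only; the gate applied here is the Pauli-"X" gate defined in the next subsection, tensored with the identity so that it acts on the first qubit alone):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)          # |0>
ket1 = np.array([0, 1], dtype=complex)          # |1>

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X ("NOT"), defined below
I = np.eye(2, dtype=complex)

psi = np.kron(ket0, ket1)                       # the two-qubit state |01> = |0> (x) |1>
gate = np.kron(X, I)                            # act with X on the first qubit only
print(gate @ psi)                               # the amplitudes of |11>
```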
Notable examples.
There exists an uncountably infinite number of gates. Some of them have been named by various authors, and below follow some of those most often used in the literature.
Identity gate.
The identity gate is the identity matrix, usually written as "I", and is defined for a single qubit as
formula_15
where "I" is basis independent and does not modify the quantum state. The identity gate is most useful when describing mathematically the result of various gate operations or when discussing multi-qubit circuits.
Pauli gates ("X","Y","Z").
The Pauli gates formula_16 are the three Pauli matrices formula_17 and act on a single qubit. The Pauli "X", "Y" and "Z" equate, respectively, to a rotation around the "x", "y" and "z" axes of the Bloch sphere by formula_18 radians.
The Pauli-"X" gate is the quantum equivalent of the NOT gate for classical computers with respect to the standard basis formula_19, formula_20, which distinguishes the "z" axis on the Bloch sphere. It is sometimes called a bit-flip as it maps formula_19 to formula_20 and formula_20 to formula_19. Similarly, the Pauli-"Y" maps formula_19 to formula_21 and formula_20 to formula_22. Pauli "Z" leaves the basis state formula_19 unchanged and maps formula_20 to formula_23. Due to this nature, Pauli "Z" is sometimes called phase-flip.
These matrices are usually represented as
formula_24
formula_25
formula_26
The Pauli matrices are involutory, meaning that the square of a Pauli matrix is the identity matrix.
formula_27
The Pauli matrices also anti-commute, for example formula_28
The matrix exponential of a Pauli matrix formula_29 is a rotation operator, often written as formula_30
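These algebraic properties can be checked directly (an illustrative sketch):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

for P in (X, Y, Z):
    assert np.allclose(P @ P, I)          # involutory: every Pauli matrix squares to I
assert np.allclose(X @ Y + Y @ X, 0)      # anti-commutation: XY = -YX
assert np.allclose(X @ Y, 1j * Z)         # the Pauli matrices multiply cyclically: XY = iZ
print("Pauli identities verified")
```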
Controlled gates.
Controlled gates act on 2 or more qubits, where one or more qubits act as a control for some operation. For example, the controlled NOT gate (or CNOT or CX) acts on 2 qubits, and performs the NOT operation on the second qubit only when the first qubit is formula_20, and otherwise leaves it unchanged. With respect to the basis formula_31, formula_32, formula_33, formula_34, it is represented by the Hermitian unitary matrix:
formula_35
The CNOT (or controlled Pauli-"X") gate can be described as the gate that maps the basis states formula_36, where formula_37 is XOR.
The CNOT can be expressed in the Pauli basis as:
formula_38
Being a Hermitian unitary operator, CNOT has the property that formula_39 and formula_40, and is involutory.
More generally if "U" is a gate that operates on a single qubit with matrix representation
formula_41
then the "controlled-U gate" is a gate that operates on two qubits in such a way that the first qubit serves as a control. It maps the basis states as follows.
formula_42
formula_43
formula_44
formula_45
The matrix representing the controlled "U" is
formula_46
When "U" is one of the Pauli operators, "X","Y", "Z", the respective terms "controlled-"X"", "controlled-"Y"", or "controlled-"Z"" are sometimes used. Sometimes this is shortened to just C"X", C"Y" and C"Z".
In general, any single qubit unitary gate can be expressed as formula_47, where "H" is a Hermitian matrix, and then the controlled "U" is formula_48
Control can be extended to gates with an arbitrary number of qubits, and to functions in programming languages. Functions can be conditioned on superposition states.
Classical control.
Gates can also be controlled by classical logic. A quantum computer is controlled by a classical computer, and behaves like a coprocessor that receives instructions from the classical computer about what gates to execute on which qubits. Classical control is simply the inclusion, or omission, of gates in the instruction sequence for the quantum computer.
Phase shift gates.
The phase shift is a family of single-qubit gates that map the basis states formula_50 and formula_51. The probability of measuring a formula_19 or formula_20 is unchanged after applying this gate; however, it modifies the phase of the quantum state. This is equivalent to tracing a horizontal circle (a line of constant latitude), or a rotation about the z-axis on the Bloch sphere by formula_52 radians. The phase shift gate is represented by the matrix:
formula_53
where formula_52 is the "phase shift" with the period 2π. Some common examples are the "T" gate where formula_54 (historically known as the formula_55 gate), the phase gate (also known as the S gate, written as "S", though "S" is sometimes used for SWAP gates) where formula_56 and the Pauli-"Z" gate where formula_57
The phase shift gates are related to each other as follows:
formula_58
formula_59
formula_60
Note that the phase gate formula_61 is not Hermitian (except for formula_62). These gates are different from their Hermitian conjugates: formula_63. The two adjoint (or conjugate transpose) gates formula_64 and formula_65 are sometimes included in instruction sets.
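These relations, and the adjoint relation formula_63, can be checked with a short Python/NumPy sketch (illustrative only):
import numpy as np

def P(phi):
    # Phase shift gate: |0> is unchanged, |1> picks up the phase e^{i*phi}.
    return np.array([[1, 0], [0, np.exp(1j * phi)]])

T = P(np.pi / 4)
S = P(np.pi / 2)
Z = P(np.pi)

assert np.allclose(T @ T, S)                    # T^2 = S
assert np.allclose(S @ S, Z)                    # S^2 = Z
assert np.allclose(P(0.3).conj().T, P(-0.3))    # P(phi) dagger = P(-phi)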
Hadamard gate.
The Hadamard or Walsh-Hadamard gate, named after Jacques Hadamard and Joseph L. Walsh, acts on a single qubit. It maps the basis states formula_66 and formula_67 (it creates an equal superposition state if given a computational basis state). The two states formula_68 and formula_69 are sometimes written formula_70 and formula_71 respectively. The Hadamard gate performs a rotation of formula_18 about the axis formula_72 of the Bloch sphere, and is therefore involutory. It is represented by the Hadamard matrix:
formula_73
If the Hermitian (so formula_74) Hadamard gate is used to perform a change of basis, it flips formula_75 and formula_76. For example, formula_77 and formula_78
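A small Python/NumPy sketch (illustrative only) shows the action on the basis states, the involution, and the change-of-basis identity:
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

print(H @ ket0)      # |+> = (|0> + |1>)/sqrt(2)
print(H @ ket1)      # |-> = (|0> - |1>)/sqrt(2)

assert np.allclose(H @ H, np.eye(2))    # involutory
assert np.allclose(H @ Z @ H, X)        # change of basis: HZH = X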
Swap gate.
The swap gate swaps two qubits. With respect to the basis formula_31, formula_32, formula_33, formula_34, it is represented by the matrix
formula_79
The swap gate can be decomposed into summation form:
formula_80
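The decomposition can be verified numerically, for example with the following illustrative Python/NumPy sketch:
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

# SWAP = (I(x)I + X(x)X + Y(x)Y + Z(x)Z) / 2, where (x) is the tensor product.
decomposition = (np.kron(I, I) + np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)) / 2
assert np.allclose(SWAP, decomposition)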
Toffoli (CCNOT) gate.
The Toffoli gate, named after Tommaso Toffoli and also called the CCNOT gate or Deutsch gate formula_81, is a 3-bit gate that is universal for classical computation but not for quantum computation. The quantum Toffoli gate is the same gate, defined for 3 qubits. If we limit ourselves to only accepting input qubits that are formula_19 and formula_20, then if the first two bits are both in the state formula_20 it applies a Pauli-"X" (or NOT) on the third bit, else it does nothing. It is an example of a CC-U (controlled-controlled Unitary) gate. Since it is the quantum analog of a classical gate, it is completely specified by its truth table. The Toffoli gate is universal when combined with the single qubit Hadamard gate.
The Toffoli gate is related to the classical AND (formula_82) and XOR (formula_37) operations as it performs the mapping formula_83 on states in the computational basis.
The Toffoli gate can be expressed using Pauli matrices as
formula_84
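Since the Toffoli gate is completely specified by its truth table, it can be written as an 8×8 permutation matrix and checked against the mapping above; the following Python/NumPy sketch is illustrative only:
import numpy as np

# Toffoli maps |a, b, c> to |a, b, c XOR (a AND b)> in the computational basis.
TOFFOLI = np.eye(8)
TOFFOLI[6:8, 6:8] = [[0, 1], [1, 0]]    # swap |110> and |111>

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            state = np.zeros(8)
            state[(a << 2) | (b << 1) | c] = 1
            out = TOFFOLI @ state
            assert out[(a << 2) | (b << 1) | (c ^ (a & b))] == 1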
Universal quantum gates.
A set of universal quantum gates is any set of gates to which any operation possible on a quantum computer can be reduced, that is, any other unitary operation can be expressed as a finite sequence of gates from the set. Technically, this is impossible with anything less than an uncountable set of gates since the number of possible quantum gates is uncountable, whereas the number of finite sequences from a finite set is countable. To solve this problem, we only require that any quantum operation can be approximated by a sequence of gates from this finite set. Moreover, for unitaries on a constant number of qubits, the Solovay–Kitaev theorem guarantees that this can be done efficiently. Checking whether a set of quantum gates is universal can be done using group theory methods and/or their relation to (approximate) unitary t-designs.
Some universal quantum gate sets include the Clifford set {CNOT, "H", "S"} together with the "T" gate, and the Toffoli gate together with the Hadamard gate.
Deutsch gate.
A single-gate set of universal quantum gates can also be formulated using the parametrized three-qubit Deutsch gate formula_85, named after physicist David Deutsch. It is a general case of "CC-U", or "controlled-controlled-unitary" gate, and is defined as
formula_86
Unfortunately, a working Deutsch gate has remained out of reach, due to the lack of a protocol for implementing it. There are some proposals to realize a Deutsch gate with dipole–dipole interaction in neutral atoms.
A universal logic gate for reversible classical computing, the Toffoli gate, is reducible to the Deutsch gate formula_81, thus showing that all reversible classical logic operations can be performed on a universal quantum computer.
There also exist single two-qubit gates sufficient for universality. In 1996, Adriano Barenco showed that the Deutsch gate can be decomposed using only a single two-qubit gate (Barenco gate), but it is hard to realize experimentally. This feature is exclusive to quantum circuits, as there is no classical two-bit gate that is both reversible and universal. Universal two-qubit gates could be implemented to improve classical reversible circuits in fast low-power microprocessors.
Circuit composition.
Serially wired gates.
Assume that we have two gates "A" and "B" that both act on formula_1 qubits. When "B" is put after "A" in a series circuit, then the effect of the two gates can be described as a single gate "C".
formula_87
where formula_88 is matrix multiplication. The resulting gate "C" will have the same dimensions as "A" and "B". The order in which the gates would appear in a circuit diagram is reversed when multiplying them together.
For example, putting the Pauli "X" gate after the Pauli "Y" gate, both of which act on a single qubit, can be described as a single combined gate "C":
formula_89
The product symbol (formula_88) is often omitted.
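The example above can be reproduced with a short Python/NumPy sketch (illustrative only):
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# "Y first, then X" in the circuit corresponds to the matrix product X @ Y.
C = X @ Y
assert np.allclose(C, 1j * Z)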
Exponents of quantum gates.
All real exponents of unitary matrices are also unitary matrices, and all quantum gates are unitary matrices.
Positive integer exponents are equivalent to sequences of serially wired gates (e.g. formula_90), and real exponents are a generalization of the series circuit. For example, formula_91 and formula_92 are both valid quantum gates.
formula_93 for any unitary matrix formula_12. The identity matrix (formula_94) behaves like a NOP and can be represented as bare wire in quantum circuits, or not shown at all.
All gates are unitary matrices, so that formula_95 and formula_96, where formula_97 is the conjugate transpose. This means that negative exponents of gates are unitary inverses of their positively exponentiated counterparts: formula_98. For example, some negative exponents of the phase shift gates are formula_99 and formula_100.
Note that for a Hermitian matrix formula_101 and because of unitarity, formula_102 so formula_103 for all Hermitian gates. They are involutory. Examples of Hermitian gates are the Pauli gates, Hadamard, CNOT, SWAP and Toffoli. Each Hermitian unitary matrix formula_104 has the property that formula_105 where formula_106
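Fractional and negative powers of a gate can be sketched through its eigendecomposition; the helper below is illustrative and not a library routine:
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def gate_power(U, t):
    # Real (possibly fractional) power of a unitary via its eigendecomposition.
    w, V = np.linalg.eig(U)
    return V @ np.diag(w ** t) @ np.linalg.inv(V)

sqrtX = gate_power(X, 0.5)
assert np.allclose(sqrtX @ sqrtX, X)                   # (X^(1/2))^2 = X
assert np.allclose(gate_power(X, -1), X.conj().T)      # X^(-1) = X dagger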
Parallel gates.
The tensor product (or Kronecker product) of two quantum gates is the gate that is equal to the two gates in parallel.
If we combine the Pauli-"Y" gate with the Pauli-"X" gate in parallel, then this can be written as:
formula_107
Both the Pauli-"X" and the Pauli-"Y" gate act on a single qubit. The resulting gate formula_108 acts on two qubits.
Sometimes the tensor product symbol is omitted, and indexes are used for the operators instead.
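In code, the tensor product corresponds to the Kronecker product; an illustrative Python/NumPy sketch of the example above:
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

# Two gates in parallel: the Kronecker (tensor) product of their matrices.
C = np.kron(Y, X)
print(C)    # a 4x4 gate acting on two qubits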
Hadamard transform.
The gate formula_109 is the Hadamard gate (formula_104) applied in parallel on 2 qubits. It can be written as:
formula_110
This "two-qubit parallel Hadamard gate" will, when applied to, for example, the two-qubit zero-vector (formula_31), create a quantum state that has equal probability of being observed in any of its four possible outcomes; formula_31, formula_32, formula_33, and formula_34. We can write this operation as:
formula_111
Here the amplitude for each measurable state is 1⁄2. The probability of observing any given state is the square of the absolute value of that state's amplitude, which in the above example means that there is a one-in-four chance of observing each of the four individual cases. See measurement for details.
formula_113 performs the Hadamard transform on two qubits. Similarly the gate formula_114 performs a Hadamard transform on a register of formula_1 qubits.
When applied to a register of formula_1 qubits all initialized to formula_19, the Hadamard transform puts the quantum register into a superposition with equal probability of being measured in any of its formula_3 possible states:
formula_115
This state is a "uniform superposition" and it is generated as the first step in some search algorithms, for example in amplitude amplification and phase estimation.
Measuring this state results in a random number between formula_19 and formula_116. How random the number is depends on the fidelity of the logic gates. If not measured, it is a quantum state with equal probability amplitude formula_117 for each of its possible states.
The Hadamard transform acts on a register formula_112 with formula_1 qubits such that formula_118 as follows:
formula_119
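An illustrative Python/NumPy sketch builds the parallel Hadamard gate for a small register and applies it to the zero state; the helper name is chosen here for convenience:
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def hadamard_transform(n):
    # H tensored with itself n times: a 2^n x 2^n matrix.
    return reduce(np.kron, [H] * n)

n = 3
zero_register = np.zeros(2 ** n)
zero_register[0] = 1                        # |00...0>
state = hadamard_transform(n) @ zero_register
print(state)                                # every amplitude equals 1/sqrt(2^n)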
Application on entangled states.
If two or more qubits are viewed as a single quantum state, this combined state is equal to the tensor product of the constituent qubits. Any state that can be written as a tensor product of the constituent subsystems is called a "separable state". On the other hand, an "entangled state" is any state that cannot be tensor-factorized, or in other words: "An entangled state cannot be written as a tensor product of its constituent qubits' states." Special care must be taken when applying gates to constituent qubits that make up entangled states.
If we have a set of "N" qubits that are entangled and wish to apply a quantum gate on "M" < "N" qubits in the set, we will have to extend the gate to take "N" qubits. This application can be done by combining the gate with an identity matrix such that their tensor product becomes a gate that acts on "N" qubits. The identity matrix (formula_94) is a representation of the gate that maps every state to itself (i.e., does nothing at all). In a circuit diagram the identity gate or matrix will often appear as just a bare wire.
For example, the Hadamard gate (formula_104) acts on a single qubit, but if we feed it the first of the two qubits that constitute the entangled Bell state formula_120, we cannot write that operation easily. We need to extend the Hadamard gate formula_104 with the identity gate formula_94 so that we can act on quantum states that span "two" qubits:
formula_121
The gate formula_122 can now be applied to any two-qubit state, entangled or otherwise. The gate formula_122 will leave the second qubit untouched and apply the Hadamard transform to the first qubit. If applied to the Bell state in our example, we may write that as:
formula_123
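An illustrative Python/NumPy sketch of this extension and its action on the Bell state:
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

K = np.kron(H, I)       # Hadamard on the first qubit, identity on the second
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)

print(K @ bell)         # amplitudes [0.5, 0.5, 0.5, -0.5]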
Computational complexity and the tensor product.
The time complexity for multiplying two formula_124-matrices is at least formula_125, if using a classical machine. Because the size of a gate that operates on formula_126 qubits is formula_127 it means that the time for simulating a step in a quantum circuit (by means of multiplying the gates) that operates on generic entangled states is formula_128. For this reason it is believed to be intractable to simulate large entangled quantum systems using classical computers. Subsets of the gates, such as the Clifford gates, or the trivial case of circuits that only implement classical Boolean functions (e.g. combinations of X, CNOT, Toffoli), can however be efficiently simulated on classical computers.
The state vector of a quantum register with formula_1 qubits has formula_3 complex entries. Storing the probability amplitudes as a list of floating-point values is not tractable for large formula_1.
Unitary inversion of gates.
Because all quantum logical gates are reversible, any composition of multiple gates is also reversible. All products and tensor products (i.e. series and parallel combinations) of unitary matrices are also unitary matrices. This means that it is possible to construct an inverse of all algorithms and functions, as long as they contain only gates.
Initialization, measurement, I/O and spontaneous decoherence are side effects in quantum computers. Gates however are purely functional and bijective.
If formula_12 is a unitary matrix, then formula_95 and formula_96. The dagger (formula_97) denotes the conjugate transpose. It is also called the Hermitian adjoint.
If a function formula_129 is a product of formula_130 gates, formula_131, the unitary inverse of the function formula_132 can be constructed:
Because formula_133 we have, after repeated application on itself
formula_134
Similarly if the function formula_135 consists of two gates formula_136 and formula_137 in parallel, then formula_138 and formula_139.
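An illustrative Python/NumPy sketch of these two rules:
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])

F = T @ H                                # serial circuit: H first, then T
F_inverse = H.conj().T @ T.conj().T      # adjoints taken in reverse order
assert np.allclose(F_inverse @ F, np.eye(2))

G = np.kron(T, H)                        # parallel circuit
assert np.allclose(G.conj().T, np.kron(T.conj().T, H.conj().T))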
Gates that are their own unitary inverses are called Hermitian or self-adjoint operators. Some elementary gates such as the Hadamard ("H") and the Pauli gates ("I", "X", "Y", "Z") are Hermitian operators, while others like the phase shift ("S", "T", "P", CPhase) gates generally are not.
For example, an algorithm for addition can be used for subtraction, if it is being "run in reverse", as its unitary inverse. The inverse quantum Fourier transform is the unitary inverse. Unitary inverses can also be used for uncomputation. Programming languages for quantum computers, such as Microsoft's Q#, Bernhard Ömer's QCL, and IBM's Qiskit, contain function inversion as programming concepts.
Measurement.
Measurement (sometimes called "observation") is irreversible and therefore not a quantum gate, because it assigns the observed quantum state to a single value. Measurement takes a quantum state and projects it to one of the basis vectors, with a likelihood equal to the square of the vector's length (in the 2-norm) along that basis vector. This is known as the Born rule and appears as a stochastic non-reversible operation as it probabilistically sets the quantum state equal to the basis vector that represents the measured state. At the instant of measurement, the state is said to "collapse" to the definite single value that was measured. Why and how, or even if the quantum state collapses at measurement, is called the measurement problem.
The probability of measuring a value with probability amplitude formula_49 is formula_140, where formula_141 is the modulus.
Measuring a single qubit, whose quantum state is represented by the vector formula_142, will result in formula_19 with probability formula_143, and in formula_20 with probability formula_144.
For example, measuring a qubit with the quantum state formula_145 will yield with equal probability either formula_19 or formula_20.
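Sampling measurement outcomes according to the Born rule can be sketched as follows; the Python/NumPy helper is illustrative only and not part of any particular library:
import numpy as np

def measure(state, shots=1000):
    # Sample outcomes according to the Born rule:
    # the probability of outcome k is |amplitude_k|^2.
    probabilities = np.abs(state) ** 2
    probabilities /= probabilities.sum()        # guard against rounding
    return np.random.choice(len(state), size=shots, p=probabilities)

qubit = np.array([1, -1j]) / np.sqrt(2)         # (|0> - i|1>)/sqrt(2)
outcomes = measure(qubit)
print(np.bincount(outcomes) / len(outcomes))    # roughly [0.5, 0.5]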
A quantum state formula_146 that spans n qubits can be written as a vector in formula_3 complex dimensions: formula_147. This is because the tensor product of n qubits is a vector in formula_3 dimensions. This way, a register of n qubits can be measured to formula_3 distinct states, similar to how a register of n classical bits can hold formula_3 distinct states. Unlike with the bits of classical computers, quantum states can have non-zero probability amplitudes in multiple measurable values simultaneously. This is called "superposition".
The sum of all probabilities for all outcomes must always be equal to 1. Another way to say this is that the Pythagorean theorem generalized to formula_148 implies that all quantum states formula_146 with n qubits must satisfy formula_149 where formula_150 is the probability amplitude for measurable state formula_151. A geometric interpretation of this is that the possible value-space of a quantum state formula_146 with n qubits is the surface of the unit sphere in formula_148 and that the unitary transforms (i.e. quantum logic gates) applied to it are rotations on the sphere. The rotations that the gates perform form the symmetry group U(2n). Measurement is then a probabilistic projection of the points at the surface of this complex sphere onto the basis vectors that span the space (and labels the outcomes).
In many cases the space is represented as a Hilbert space formula_152 rather than some specific formula_3-dimensional complex space. The number of dimensions (defined by the basis vectors, and thus also the possible outcomes from measurement) is then often implied by the operands, for example as the required state space for solving a problem. In Grover's algorithm, Grover named this generic basis vector set "the database".
The selection of basis vectors against which to measure a quantum state will influence the outcome of the measurement. See change of basis and Von Neumann entropy for details. In this article, we always use the "computational basis", which means that we have labeled the formula_3 basis vectors of an n-qubit register formula_153, or use the binary representation formula_154.
In quantum mechanics, the basis vectors constitute an orthonormal basis.
An example of usage of an alternative measurement basis is in the BB84 cipher.
The effect of measurement on entangled states.
If two quantum states (i.e. qubits, or registers) are entangled (meaning that their combined state cannot be expressed as a tensor product), measurement of one register affects or reveals the state of the other register by partially or entirely collapsing its state too. This effect can be used for computation, and is used in many algorithms.
The Hadamard-CNOT combination acts on the zero-state as follows:
formula_155
This resulting state is the Bell state formula_156. It cannot be described as a tensor product of two qubits. There is no solution for
formula_157
because, for example, w would need to be non-zero (so that xw is non-zero) and at the same time zero (so that yw vanishes, given that y must be non-zero for yz to be non-zero).
The quantum state "spans" the two qubits. This is called "entanglement". Measuring one of the two qubits that make up this Bell state will result in the other qubit logically having the same value; both must be the same: either the pair will be found in the state formula_31, or in the state formula_34. If we measure one of the qubits to be, for example, formula_20, then the other qubit must also be formula_20, because their combined state "became" formula_34. Measurement of one of the qubits collapses the entire quantum state that spans the two qubits.
The GHZ state is a similar entangled quantum state that spans three or more qubits.
This type of value-assignment occurs "instantaneously over any distance" and this has as of 2018 been experimentally verified by QUESS for distances of up to 1200 kilometers. That the phenomenon appears to happen instantaneously, as opposed to the time it would take to traverse the distance separating the qubits at the speed of light, is called the EPR paradox, and it is an open question in physics how to resolve this. Originally it was addressed by giving up the assumption of local realism, but other interpretations have also emerged. For more information see the Bell test experiments. The no-communication theorem proves that this phenomenon cannot be used for faster-than-light communication of classical information.
Measurement on registers with pairwise entangled qubits.
Take a register A with n qubits all initialized to formula_19, and feed it through a parallel Hadamard gate formula_158. Register A will then enter the state formula_159, which when measured has equal probability of being found in any of its formula_3 possible states, formula_19 to formula_116. Take a second register B, also with n qubits initialized to formula_19 and pairwise CNOT its qubits with the qubits in register A, such that for each p the qubits formula_160 and formula_161 form the state formula_162.
If we now measure the qubits in register A, then register B will be found to contain the same value as A. If we however instead apply a quantum logic gate F on A and then measure, then formula_163, where formula_132 is the unitary inverse of F.
Because of how unitary inverses of gates act, formula_164. For example, say formula_165, then formula_166.
The equality will hold no matter in which order measurement is performed (on the registers A or B), assuming that F has run to completion. Measurement can even be randomly and concurrently interleaved qubit by qubit, since the measurement's assignment of one qubit will limit the possible value-space of the other entangled qubits.
Even though the equalities hold, the probabilities for measuring the possible outcomes may change as a result of applying F, as may be the intent in a quantum search algorithm.
This effect of value-sharing via entanglement is used in Shor's algorithm, phase estimation and in quantum counting. Using the Fourier transform to amplify the probability amplitudes of the solution states for some problem is a generic method known as "Fourier fishing".
Logic function synthesis.
Functions and routines that only use gates can themselves be described as matrices, just like the smaller gates. The matrix that represents a quantum function acting on formula_126 qubits has size formula_127. For example, a function that acts on a "qubyte" (a register of 8 qubits) would be represented by a matrix with formula_167 elements.
Unitary transformations that are not in the set of gates natively available at the quantum computer (the primitive gates) can be synthesised, or approximated, by combining the available primitive gates in a circuit. One way to do this is to factor the matrix that encodes the unitary transformation into a product of tensor products (i.e. series and parallel circuits) of the available primitive gates. The group U(2"q") is the symmetry group for the gates that act on formula_126 qubits. Factorization is then the problem of finding a path in U(2"q") from the generating set of primitive gates. The Solovay–Kitaev theorem shows that, given a sufficient set of primitive gates, there exists an efficient approximation of any gate. For the general case with a large number of qubits this direct approach to circuit synthesis is intractable. This puts a limit on how large functions can be brute-force factorized into primitive quantum gates. Typically quantum programs are instead built using relatively small and simple quantum functions, similar to normal classical programming.
Because of the gates' unitary nature, all functions must be reversible and always be bijective mappings of input to output. There must always exist a function formula_168 such that formula_169. Functions that are not invertible can be made invertible by adding ancilla qubits to the input or the output, or both. After the function has run to completion, the ancilla qubits can then either be uncomputed or left untouched. Measuring or otherwise collapsing the quantum state of an ancilla qubit (e.g. by re-initializing its value, or by its spontaneous decoherence) that has not been uncomputed may result in errors, as its state may be entangled with the qubits that are still being used in computations.
Logically irreversible operations, for example addition modulo formula_3 of two formula_1-qubit registers "a" and "b", formula_170, can be made logically reversible by adding information to the output, so that the input can be computed from the output (i.e. there exists a function formula_168). In our example, this can be done by passing on one of the input registers to the output: formula_171. The output can then be used to compute the input (i.e. given the output formula_172 and formula_173, we can easily find the input; formula_173 is given and formula_174) and the function is made bijective.
All Boolean algebraic expressions can be encoded as unitary transforms (quantum logic gates), for example by using combinations of the Pauli-X, CNOT and Toffoli gates. These gates are functionally complete in the Boolean logic domain.
There are many unitary transforms available in the libraries of Q#, QCL, Qiskit, and other quantum programming languages. Such transforms also appear in the literature.
For example, formula_175, where formula_176 is the number of qubits that constitutes the register formula_177, is implemented as the following in QCL:
cond qufunct inc(qureg x) { // increment register
  int i;
  for i = #x-1 to 0 step -1 {
    CNot(x[i], x[0::i]);     // apply controlled-not from
  }                          // MSB to LSB
}
In QCL, decrement is done by "undoing" increment. The prefix codice_0 is used to instead run the unitary inverse of the function. codice_1 is the inverse of codice_2 and instead performs the operation formula_178. The codice_3 keyword means that the function can be conditional.
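Outside QCL, the same increment operation can be sketched directly as a permutation matrix; the Python/NumPy code below is illustrative only and builds the unitary explicitly rather than from controlled-NOT gates:
import numpy as np

def increment_unitary(n):
    # Permutation matrix sending |x> to |x + 1 mod 2^n>.
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for x in range(dim):
        U[(x + 1) % dim, x] = 1
    return U

inc = increment_unitary(3)
dec = inc.conj().T      # the unitary inverse sends |x> to |x - 1 mod 2^n>

state = np.zeros(8)
state[5] = 1            # |5>
assert np.argmax(inc @ state) == 6
assert np.argmax(dec @ state) == 4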
In the model of computation used in this article (the quantum circuit model), a classic computer generates the gate composition for the quantum computer, and the quantum computer behaves as a coprocessor that receives instructions from the classical computer about which primitive gates to apply to which qubits. Measurement of quantum registers results in binary values that the classical computer can use in its computations. Quantum algorithms often contain both a classical and a quantum part. Unmeasured I/O (sending qubits to remote computers without collapsing their quantum states) can be used to create networks of quantum computers. Entanglement swapping can then be used to realize distributed algorithms with quantum computers that are not directly connected. Examples of distributed algorithms that only require the use of a handful of quantum logic gates are superdense coding, the quantum Byzantine agreement and the BB84 cipherkey exchange protocol.
See also.
Notes.
| [
{
"math_id": 0,
"text": "|0\\rangle, |1\\rangle, \\dots, |d-1\\rangle"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "2^n \\times 2^n"
},
{
"math_id": 3,
"text": "2^n"
},
{
"math_id": 4,
"text": "|a\\rangle = v_0 | 0 \\rangle + v_1 | 1 \\rangle \\rightarrow \\begin{bmatrix} v_0 \\\\ v_1 \\end{bmatrix} ."
},
{
"math_id": 5,
"text": "v_0"
},
{
"math_id": 6,
"text": "v_1"
},
{
"math_id": 7,
"text": "|0\\rangle = \\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}"
},
{
"math_id": 8,
"text": "|1\\rangle = \\begin{bmatrix} 0 \\\\ 1\\end{bmatrix}"
},
{
"math_id": 9,
"text": "\\otimes"
},
{
"math_id": 10,
"text": "| \\psi \\rangle = v_{00} | 00 \\rangle + v_{01} | 0 1 \\rangle + v_{10} | 1 0 \\rangle + v_{11} | 1 1 \\rangle \\rightarrow \\begin{bmatrix} v_{00} \\\\ v_{01} \\\\ v_{10} \\\\ v_{11} \\end{bmatrix}."
},
{
"math_id": 11,
"text": "|\\psi_1\\rangle"
},
{
"math_id": 12,
"text": "U"
},
{
"math_id": 13,
"text": "|\\psi_2\\rangle"
},
{
"math_id": 14,
"text": "U|\\psi_1\\rangle = |\\psi_2\\rangle."
},
{
"math_id": 15,
"text": " I = \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix} ,"
},
{
"math_id": 16,
"text": "(X,Y,Z)"
},
{
"math_id": 17,
"text": "(\\sigma_x,\\sigma_y,\\sigma_z)"
},
{
"math_id": 18,
"text": "\\pi"
},
{
"math_id": 19,
"text": "|0\\rangle"
},
{
"math_id": 20,
"text": "|1\\rangle"
},
{
"math_id": 21,
"text": "i|1\\rangle"
},
{
"math_id": 22,
"text": "-i|0\\rangle"
},
{
"math_id": 23,
"text": "-|1\\rangle"
},
{
"math_id": 24,
"text": " X = \\sigma_x =\\operatorname{NOT} = \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix} ,"
},
{
"math_id": 25,
"text": " Y = \\sigma_y = \\begin{bmatrix} 0 & -i \\\\ i & 0 \\end{bmatrix}, "
},
{
"math_id": 26,
"text": " Z = \\sigma_z = \\begin{bmatrix} 1 & 0 \\\\ 0 & -1 \\end{bmatrix}."
},
{
"math_id": 27,
"text": "I^2 = X^2 = Y^2 = Z^2 = -iXYZ = I"
},
{
"math_id": 28,
"text": "ZX=iY=-XZ."
},
{
"math_id": 29,
"text": "\\sigma_j"
},
{
"math_id": 30,
"text": "e^{-i\\sigma_j\\theta/2}."
},
{
"math_id": 31,
"text": "|00\\rangle"
},
{
"math_id": 32,
"text": "|01\\rangle"
},
{
"math_id": 33,
"text": "|10\\rangle"
},
{
"math_id": 34,
"text": "|11\\rangle"
},
{
"math_id": 35,
"text": " \\mbox{CNOT} = \\begin{bmatrix} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 0 & 1 \\\\ 0 & 0 & 1 & 0 \\end{bmatrix} . "
},
{
"math_id": 36,
"text": "|a,b\\rangle \\mapsto |a,a \\oplus b\\rangle"
},
{
"math_id": 37,
"text": "\\oplus"
},
{
"math_id": 38,
"text": " \\mbox{CNOT} = e^{i\\frac{\\pi}{4}(I-Z_1)(I-X_2)}=e^{-i\\frac{\\pi}{4}(I-Z_1)(I-X_2)}. "
},
{
"math_id": 39,
"text": " e^{i\\theta U}=(\\cos \\theta)I+(i\\sin \\theta) U"
},
{
"math_id": 40,
"text": " U =e^{i\\frac{\\pi}{2}(I-U)}=e^{-i\\frac{\\pi}{2}(I-U)}"
},
{
"math_id": 41,
"text": " U = \\begin{bmatrix} u_{00} & u_{01} \\\\ u_{10} & u_{11} \\end{bmatrix} , "
},
{
"math_id": 42,
"text": " | 0 0 \\rangle \\mapsto | 0 0 \\rangle "
},
{
"math_id": 43,
"text": " | 0 1 \\rangle \\mapsto | 0 1 \\rangle "
},
{
"math_id": 44,
"text": " | 1 0 \\rangle \\mapsto | 1 \\rangle \\otimes U |0 \\rangle = | 1 \\rangle \\otimes (u_{00} |0 \\rangle + u_{10} |1 \\rangle) "
},
{
"math_id": 45,
"text": " | 1 1 \\rangle \\mapsto | 1 \\rangle \\otimes U |1 \\rangle = | 1 \\rangle \\otimes (u_{01} |0 \\rangle + u_{11} |1 \\rangle) "
},
{
"math_id": 46,
"text": " \\mbox{C}U = \\begin{bmatrix} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & u_{00} & u_{01} \\\\ 0 & 0 & u_{10} & u_{11} \\end{bmatrix}."
},
{
"math_id": 47,
"text": " U = e^{iH} "
},
{
"math_id": 48,
"text": " CU = e^{i\\frac{1}{2}(I-Z_1)H_2}."
},
{
"math_id": 49,
"text": "\\phi"
},
{
"math_id": 50,
"text": "|0\\rangle \\mapsto |0\\rangle"
},
{
"math_id": 51,
"text": "|1\\rangle \\mapsto e^{i\\varphi}|1\\rangle"
},
{
"math_id": 52,
"text": "\\varphi"
},
{
"math_id": 53,
"text": "P(\\varphi) = \\begin{bmatrix} 1 & 0 \\\\ 0 & e^{i \\varphi} \\end{bmatrix}"
},
{
"math_id": 54,
"text": "\\varphi = \\frac{\\pi}{4}"
},
{
"math_id": 55,
"text": "\\pi /8"
},
{
"math_id": 56,
"text": "\\varphi= \\frac{\\pi}{2}"
},
{
"math_id": 57,
"text": "\\varphi = \\pi."
},
{
"math_id": 58,
"text": " Z = \\begin{bmatrix} 1 & 0 \\\\ 0 & e^{i \\pi} \\end{bmatrix} = \\begin{bmatrix} 1 & 0 \\\\ 0 & -1 \\end{bmatrix} = P\\left(\\pi\\right)"
},
{
"math_id": 59,
"text": " S = \\begin{bmatrix} 1 & 0 \\\\ 0 & e^{i \\frac{\\pi}{2}} \\end{bmatrix} = \\begin{bmatrix} 1 & 0 \\\\ 0 & i \\end{bmatrix} = P\\left(\\frac{\\pi}{2}\\right)=\\sqrt{Z}"
},
{
"math_id": 60,
"text": " T = \\begin{bmatrix} 1 & 0 \\\\ 0 & e^{i \\frac{\\pi}{4}} \\end{bmatrix} =P\\left(\\frac{\\pi}{4}\\right) = \\sqrt{S} = \\sqrt[4]{Z}"
},
{
"math_id": 61,
"text": "P(\\varphi)"
},
{
"math_id": 62,
"text": "\\varphi = n\\pi, n \\in \\mathbb{Z}"
},
{
"math_id": 63,
"text": "P^\\dagger(\\varphi)=P(-\\varphi)"
},
{
"math_id": 64,
"text": "S^\\dagger"
},
{
"math_id": 65,
"text": "T^\\dagger"
},
{
"math_id": 66,
"text": "|0\\rangle \\mapsto \\frac{|0\\rangle + |1\\rangle}{\\sqrt{2}}"
},
{
"math_id": 67,
"text": "|1\\rangle \\mapsto \\frac{|0\\rangle - |1\\rangle}{\\sqrt{2}}"
},
{
"math_id": 68,
"text": "(|0\\rangle + |1\\rangle)/\\sqrt{2}"
},
{
"math_id": 69,
"text": "(|0\\rangle - |1\\rangle)/\\sqrt{2}"
},
{
"math_id": 70,
"text": "|+\\rangle"
},
{
"math_id": 71,
"text": "|-\\rangle"
},
{
"math_id": 72,
"text": "(\\hat{x}+\\hat{z})/\\sqrt{2}"
},
{
"math_id": 73,
"text": " H = \\frac{1}{\\sqrt{2}} \\begin{bmatrix} 1 & 1 \\\\ 1 & -1 \\end{bmatrix} ."
},
{
"math_id": 74,
"text": "H^{\\dagger}=H^{-1}=H"
},
{
"math_id": 75,
"text": "\\hat{x}"
},
{
"math_id": 76,
"text": "\\hat{z}"
},
{
"math_id": 77,
"text": "HZH=X"
},
{
"math_id": 78,
"text": "H\\sqrt{X}\\;H=\\sqrt{Z}=S."
},
{
"math_id": 79,
"text": " \\mbox{SWAP} = \\begin{bmatrix} 1&0&0&0\\\\0&0&1&0\\\\0&1&0&0\\\\0&0&0&1\\end{bmatrix} . "
},
{
"math_id": 80,
"text": "\\mbox{SWAP}=\\frac{I\\otimes I +X\\otimes X+Y\\otimes Y+Z\\otimes Z}{2}"
},
{
"math_id": 81,
"text": "D(\\pi/2)"
},
{
"math_id": 82,
"text": "\\land"
},
{
"math_id": 83,
"text": "|a, b, c\\rangle \\mapsto |a, b, c\\oplus (a \\land b)\\rangle"
},
{
"math_id": 84,
"text": " \\mbox{Toff} = e^{i\\frac{\\pi}{8}(I-Z_1)(I-Z_2)(I-X_3)}= e^{-i\\frac{\\pi}{8}(I-Z_1)(I-Z_2)(I-X_3)}. "
},
{
"math_id": 85,
"text": "D(\\theta)"
},
{
"math_id": 86,
"text": "|a, b, c\\rangle \\mapsto \\begin{cases}\n i \\cos(\\theta) |a, b , c\\rangle + \\sin(\\theta) |a, b, 1 - c\\rangle & \\text{for}\\ a = b = 1, \\\\\n |a, b, c\\rangle & \\text{otherwise}.\n\\end{cases}"
},
{
"math_id": 87,
"text": "C = B \\cdot A"
},
{
"math_id": 88,
"text": "\\cdot"
},
{
"math_id": 89,
"text": "C = X \\cdot Y = \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix} \\cdot \\begin{bmatrix} 0 & -i \\\\ i & 0 \\end{bmatrix} = \\begin{bmatrix} i & 0 \\\\ 0 & -i \\end{bmatrix} = iZ"
},
{
"math_id": 90,
"text": "X^3 = X \\cdot X \\cdot X"
},
{
"math_id": 91,
"text": "X^\\pi"
},
{
"math_id": 92,
"text": "\\sqrt{X}=X^{1/2}"
},
{
"math_id": 93,
"text": "U^0=I"
},
{
"math_id": 94,
"text": "I"
},
{
"math_id": 95,
"text": "U^\\dagger U = UU^\\dagger = I"
},
{
"math_id": 96,
"text": "U^\\dagger = U^{-1}"
},
{
"math_id": 97,
"text": "\\dagger"
},
{
"math_id": 98,
"text": "U^{-n} = (U^n)^{\\dagger}"
},
{
"math_id": 99,
"text": "T^{-1}=T^{\\dagger}"
},
{
"math_id": 100,
"text": "T^{-2}=(T^2)^{\\dagger}=S^{\\dagger}"
},
{
"math_id": 101,
"text": "H^\\dagger=H,"
},
{
"math_id": 102,
"text": "HH^\\dagger=I,"
},
{
"math_id": 103,
"text": "H^2 = I"
},
{
"math_id": 104,
"text": "H"
},
{
"math_id": 105,
"text": "e^{i\\theta H}=(\\cos \\theta)I+(i\\sin \\theta) H"
},
{
"math_id": 106,
"text": "H=e^{i\\frac{\\pi}{2}(I-H)}=e^{-i\\frac{\\pi}{2}(I-H)}."
},
{
"math_id": 107,
"text": "C = Y \\otimes X = \\begin{bmatrix} 0 & -i \\\\ i & 0 \\end{bmatrix} \\otimes \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix} = \\begin{bmatrix} 0 \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix} & -i \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix} \\\\ i \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix} & 0 \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix}\\end{bmatrix} = \\begin{bmatrix} 0 & 0 & 0 & -i \\\\ 0 & 0 & -i & 0 \\\\ 0 & i & 0 & 0 \\\\ i & 0 & 0 & 0 \\end{bmatrix}"
},
{
"math_id": 108,
"text": "C"
},
{
"math_id": 109,
"text": "H_2 = H \\otimes H"
},
{
"math_id": 110,
"text": "H_2 = H \\otimes H = \\frac{1}{\\sqrt{2}} \\begin{bmatrix} 1 & 1 \\\\ 1 & -1 \\end{bmatrix} \\otimes \\frac{1}{\\sqrt{2}} \\begin{bmatrix} 1 & 1 \\\\ 1 & -1 \\end{bmatrix} = \\frac{1}{2} \\begin{bmatrix} 1 & 1 & 1 & 1 \\\\ 1 & -1 & 1 & -1 \\\\ 1 & 1 & -1 & -1 \\\\ 1 & -1 & -1 & 1 \\end{bmatrix}"
},
{
"math_id": 111,
"text": "H_2 |00\\rangle = \\frac{1}{2} \\begin{bmatrix} 1 & 1 & 1 & 1 \\\\ 1 & -1 & 1 & -1 \\\\ 1 & 1 & -1 & -1 \\\\ 1 & -1 & -1 & 1 \\end{bmatrix} \\begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\\\ 0 \\end{bmatrix} = \\frac{1}{2} \\begin{bmatrix} 1 \\\\ 1 \\\\ 1 \\\\ 1 \\end{bmatrix} = \\frac{1}{2} |00\\rangle + \\frac{1}{2} |01\\rangle +\\frac{1}{2} |10\\rangle +\\frac{1}{2} |11\\rangle = \\frac{|00\\rangle + |01\\rangle + |10\\rangle + |11\\rangle}{2}"
},
{
"math_id": 112,
"text": "|\\psi\\rangle"
},
{
"math_id": 113,
"text": "H_2"
},
{
"math_id": 114,
"text": "\\underbrace{ H \\otimes H \\otimes \\dots \\otimes H }_{n\\text{ times}} = \\bigotimes_1^n H = H^{\\otimes n} = H_n"
},
{
"math_id": 115,
"text": "\\bigotimes_0^{n-1}(H|0\\rangle) = \\frac{1}{\\sqrt{2^n}} \\begin{bmatrix} 1 \\\\ 1 \\\\ \\vdots \\\\ 1 \\end{bmatrix} = \\frac{1}{\\sqrt{2^n}} \\Big( |0\\rangle + |1\\rangle + \\dots + |2^n-1\\rangle \\Big)= \\frac{1}{\\sqrt{2^n}}\\sum_{i=0}^{2^{n}-1}|i\\rangle"
},
{
"math_id": 116,
"text": "|2^n-1\\rangle"
},
{
"math_id": 117,
"text": "\\frac{1}{\\sqrt{2^n}}"
},
{
"math_id": 118,
"text": "|\\psi\\rangle = \\bigotimes_{i=0}^{n-1} |\\psi_i\\rangle"
},
{
"math_id": 119,
"text": "\\bigotimes_0^{n-1}H|\\psi\\rangle = \\bigotimes_{i=0}^{n-1}\\frac{|0\\rangle + (-1)^{\\psi_i}|1\\rangle}{\\sqrt{2}} = \\frac{1}{\\sqrt{2^n}}\\bigotimes_{i=0}^{n-1}\\Big(|0\\rangle + (-1)^{\\psi_i}|1\\rangle\\Big) = H|\\psi_0\\rangle \\otimes H|\\psi_1\\rangle \\otimes \\cdots \\otimes H|\\psi_{n-1}\\rangle"
},
{
"math_id": 120,
"text": "\\frac{|00\\rangle + |11\\rangle}{\\sqrt{2}}"
},
{
"math_id": 121,
"text": "K = H \\otimes I = \\frac{1}{\\sqrt{2}} \\begin{bmatrix} 1 & 1 \\\\ 1 & -1 \\end{bmatrix} \\otimes \\begin{bmatrix} 1 & 0 \\\\ 0 & 1\\end{bmatrix} = \\frac{1}{\\sqrt{2}} \\begin{bmatrix} 1 & 0 & 1 & 0 \\\\ 0 & 1 & 0 & 1 \\\\ 1 & 0 & -1 & 0 \\\\ 0 & 1 & 0 & -1\\end{bmatrix}"
},
{
"math_id": 122,
"text": "K"
},
{
"math_id": 123,
"text": "K \\frac{|00\\rangle + |11\\rangle}{\\sqrt{2}} = \\frac{1}{\\sqrt{2}} \\begin{bmatrix} 1 & 0 & 1 & 0 \\\\ 0 & 1 & 0 & 1 \\\\ 1 & 0 & -1 & 0 \\\\ 0 & 1 & 0 & -1\\end{bmatrix} \\frac{1}{\\sqrt{2}} \\begin{bmatrix}1 \\\\ 0 \\\\ 0 \\\\ 1\\end{bmatrix} = \\frac{1}{2} \\begin{bmatrix} 1 \\\\ 1 \\\\ 1 \\\\ -1 \\end{bmatrix} = \\frac{|00\\rangle + |01\\rangle + |10\\rangle - |11\\rangle}{2}"
},
{
"math_id": 124,
"text": "n \\times n"
},
{
"math_id": 125,
"text": "\\Omega(n^2 \\log n)"
},
{
"math_id": 126,
"text": "q"
},
{
"math_id": 127,
"text": "2^q \\times 2^q"
},
{
"math_id": 128,
"text": "\\Omega({2^q}^2 \\log({2^q}))"
},
{
"math_id": 129,
"text": "F"
},
{
"math_id": 130,
"text": "m"
},
{
"math_id": 131,
"text": "F = A_1 \\cdot A_2 \\cdot \\dots \\cdot A_m"
},
{
"math_id": 132,
"text": "F^\\dagger"
},
{
"math_id": 133,
"text": "(UV)^\\dagger = V^\\dagger U^\\dagger"
},
{
"math_id": 134,
"text": "F^\\dagger = \\left(\\prod_{i=1}^{m} A_i\\right)^\\dagger = \\prod_{i=m}^{1} A^\\dagger_{i} = A_m^\\dagger \\cdot \\dots \\cdot A_2^\\dagger \\cdot A_1^\\dagger"
},
{
"math_id": 135,
"text": "G"
},
{
"math_id": 136,
"text": "A"
},
{
"math_id": 137,
"text": "B"
},
{
"math_id": 138,
"text": "G=A\\otimes B"
},
{
"math_id": 139,
"text": "G^\\dagger = (A \\otimes B)^\\dagger = A^\\dagger \\otimes B^\\dagger"
},
{
"math_id": 140,
"text": "1 \\ge |\\phi|^2 \\ge 0"
},
{
"math_id": 141,
"text": "|\\cdot|"
},
{
"math_id": 142,
"text": "a|0\\rangle + b|1\\rangle = \\begin{bmatrix} a \\\\ b \\end{bmatrix}"
},
{
"math_id": 143,
"text": "|a|^2"
},
{
"math_id": 144,
"text": "|b|^2"
},
{
"math_id": 145,
"text": "\\frac{|0\\rangle -i|1\\rangle }{\\sqrt{2}} = \\frac{1}{\\sqrt{2}}\\begin{bmatrix} 1 \\\\ -i \\end{bmatrix}"
},
{
"math_id": 146,
"text": "|\\Psi\\rangle"
},
{
"math_id": 147,
"text": "|\\Psi\\rangle \\in \\mathbb C^{2^n}"
},
{
"math_id": 148,
"text": "\\mathbb C^{2^n}"
},
{
"math_id": 149,
"text": "1 = \\sum_{x=0}^{2^n-1}|a_x|^2,"
},
{
"math_id": 150,
"text": "a_x"
},
{
"math_id": 151,
"text": "|x\\rangle"
},
{
"math_id": 152,
"text": "\\mathcal{H}"
},
{
"math_id": 153,
"text": "|0\\rangle, |1\\rangle, |2\\rangle, \\cdots, |2^n-1\\rangle"
},
{
"math_id": 154,
"text": "|0_{10}\\rangle = |0\\dots 00_{2}\\rangle, |1_{10}\\rangle = |0\\dots01_{2}\\rangle, |2_{10}\\rangle = |0\\dots10_{2}\\rangle, \\cdots, |2^n-1\\rangle = |111\\dots1_{2}\\rangle"
},
{
"math_id": 155,
"text": "\\operatorname{CNOT}(H \\otimes I)|00\\rangle\n= \\left(\n \\begin{bmatrix} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 0 & 1 \\\\ 0 & 0 & 1 & 0 \\end{bmatrix}\n \\left(\n \\frac{1}{\\sqrt{2}}\n \\begin{bmatrix} 1 & 1 \\\\ 1 & -1 \\end{bmatrix}\n \\otimes\n \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}\n \\right)\n \\right)\n \\begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\\\ 0 \\end{bmatrix}\n= \\frac{1}{\\sqrt{2}} \\begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\\\ 1 \\end{bmatrix}\n= \\frac{|00\\rangle + |11\\rangle}{\\sqrt{2}}"
},
{
"math_id": 156,
"text": "\\frac{|00\\rangle + |11\\rangle}{\\sqrt{2}} = \\frac{1}{\\sqrt{2}}\\begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\\\ 1\\end{bmatrix}"
},
{
"math_id": 157,
"text": "\\begin{bmatrix} x \\\\ y \\end{bmatrix} \\otimes \\begin{bmatrix} w \\\\ z \\end{bmatrix} = \\begin{bmatrix} xw \\\\ xz \\\\ yw \\\\ yz \\end{bmatrix} = \\frac{1}{\\sqrt{2}}\\begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\\\ 1 \\end{bmatrix},"
},
{
"math_id": 158,
"text": "H^{\\otimes n}"
},
{
"math_id": 159,
"text": "\\frac{1}{\\sqrt{2^n}} \\sum_{k=0}^{2^{n}-1} |k\\rangle"
},
{
"math_id": 160,
"text": "A_{p}"
},
{
"math_id": 161,
"text": "B_{p}"
},
{
"math_id": 162,
"text": "|A_{p}B_{p}\\rangle = \\frac{|00\\rangle + |11\\rangle}{\\sqrt{2}}"
},
{
"math_id": 163,
"text": "|A\\rangle = F|B\\rangle \\iff F^\\dagger|A\\rangle = |B\\rangle"
},
{
"math_id": 164,
"text": "F^\\dagger |A\\rangle = F^{-1}(|A\\rangle) = |B\\rangle"
},
{
"math_id": 165,
"text": "F(x)=x+3 \\pmod{2^n}"
},
{
"math_id": 166,
"text": "|B\\rangle = |A - 3 \\pmod{2^n}\\rangle"
},
{
"math_id": 167,
"text": "2^8 \\times 2^8 = 256 \\times 256"
},
{
"math_id": 168,
"text": "F^{-1}"
},
{
"math_id": 169,
"text": "F^{-1}(F(|\\psi\\rangle)) = |\\psi\\rangle"
},
{
"math_id": 170,
"text": "F(a, b) = a+b \\pmod{2^n}"
},
{
"math_id": 171,
"text": "F(|a\\rangle \\otimes |b\\rangle) = |a+b \\pmod{2^n}\\rangle \\otimes |a\\rangle"
},
{
"math_id": 172,
"text": "a+b"
},
{
"math_id": 173,
"text": "a"
},
{
"math_id": 174,
"text": "(a+b) - a = b"
},
{
"math_id": 175,
"text": "\\mathrm{inc}(|x\\rangle) = |x + 1 \\pmod{2^{x_\\text{length}}}\\rangle"
},
{
"math_id": 176,
"text": "x_\\text{length}"
},
{
"math_id": 177,
"text": "x"
},
{
"math_id": 178,
"text": "\\mathrm{inc}^\\dagger |x\\rangle = \\mathrm{inc}^{-1}(|x\\rangle) = |x - 1 \\pmod{2^{x_\\text{length}}}\\rangle"
}
] | https://en.wikipedia.org/wiki?curid=888587 |
8887 | Direct product | Generalization of the Cartesian product
In mathematics, one can often define a direct product of objects already known, giving a new one. This induces a structure on the Cartesian product of the underlying sets from that of the contributing objects. More abstractly, one talks about the product in category theory, which formalizes these notions.
Examples are the product of sets, groups (described below), rings, and other algebraic structures. The product of topological spaces is another instance.
There is also the direct sum – in some areas this is used interchangeably, while in others it is a different concept.
Examples.
If formula_0 is thought of as the set of real numbers, then the direct product formula_1 is just the Cartesian product formula_2 If formula_0 is instead thought of as the group of real numbers under addition, then the direct product formula_3 still has formula_4 as its underlying set. The difference is that formula_3 is now a group, with the componentwise operation formula_5 If formula_0 is regarded as the ring of real numbers, then the direct product formula_3 again has formula_4 as its underlying set, with addition defined by formula_6 and multiplication defined by formula_7 However, if formula_0 is regarded as the field of real numbers, the direct product does not exist: naively defining addition and multiplication componentwise would not give a field, since the element formula_8 does not have a multiplicative inverse.
In a similar manner, we can talk about the direct product of finitely many algebraic structures, for example, formula_9 This relies on the direct product being associative up to isomorphism. That is, formula_10 for any algebraic structures formula_11 formula_12 and formula_13 of the same kind. The direct product is also commutative up to isomorphism, that is, formula_14 for any algebraic structures formula_15 and formula_16 of the same kind. We can even talk about the direct product of infinitely many algebraic structures; for example we can take the direct product of countably many copies of formula_17 which we write as formula_18
Direct product of groups.
In group theory one can define the direct product of two groups formula_19 and formula_20 denoted by formula_21 For abelian groups that are written additively, it may also be called the direct sum of two groups, denoted by formula_22
It is defined as follows: the set of the elements of the new group is the Cartesian product of the sets of elements of formula_19 and formula_20 that is formula_24 on these elements an operation is defined element-wise: formula_25
Note that formula_19 may be the same as formula_26
This construction gives a new group. It has a normal subgroup isomorphic to formula_27 (given by the elements of the form formula_28), and one isomorphic to formula_29 (comprising the elements formula_30).
The reverse also holds. There is the following recognition theorem: If a group formula_31 contains two normal subgroups formula_23 such that formula_32 and the intersection of formula_33 contains only the identity, then formula_31 is isomorphic to formula_21 A relaxation of these conditions, requiring only one subgroup to be normal, gives the semidirect product.
As an example, take as formula_33 two copies of the unique (up to isomorphisms) group of order 2, formula_34 say formula_35 Then formula_36 with the operation element by element. For instance, formula_37 and formula_38
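A minimal Python sketch of this example (with illustrative function names) multiplies pairs componentwise and reproduces the products above:
def multiply_order2(x, y, nonidentity):
    # Multiplication in an order-2 group {"1", nonidentity}: equal elements
    # multiply to the identity, unequal elements to the non-identity element.
    return "1" if x == y else nonidentity

def product_multiply(p, q):
    # Componentwise operation in the direct product of {"1", "a"} and {"1", "b"}.
    return (multiply_order2(p[0], q[0], "a"), multiply_order2(p[1], q[1], "b"))

print(product_multiply(("1", "b"), ("a", "1")))    # ('a', 'b')
print(product_multiply(("1", "b"), ("1", "b")))    # ('1', '1')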
With a direct product, we get some natural group homomorphisms for free: the projection maps defined by
formula_39
are called the coordinate functions.
Also, every homomorphism formula_40 to the direct product is totally determined by its component functions formula_41
For any group formula_19 and any integer formula_42 repeated application of the direct product gives the group of all formula_43-tuples formula_44 (for formula_45 this is the trivial group), for example formula_46 and formula_47
Direct product of modules.
The direct product for modules (not to be confused with the tensor product) is very similar to the one defined for groups above, using the Cartesian product with the operation of addition being componentwise, and the scalar multiplication just distributing over all the components. Starting from formula_0 we get Euclidean space formula_48 the prototypical example of a real formula_43-dimensional vector space. The direct product of formula_49 and formula_50 is formula_51
Note that a direct product for a finite index formula_52 is canonically isomorphic to the direct sum formula_53 The direct sum and direct product are not isomorphic for infinite indices, where the elements of a direct sum are zero for all but for a finite number of entries. They are dual in the sense of category theory: the direct sum is the coproduct, while the direct product is the product.
For example, consider formula_54 and formula_55 the infinite direct product and direct sum of the real numbers. Only sequences with a finite number of non-zero elements are in formula_56 For example, formula_57 is in formula_58 but formula_59 is not. Both of these sequences are in the direct product formula_60 in fact, formula_58 is a proper subset of formula_61 (that is, formula_62).
Topological space direct product.
The direct product for a collection of topological spaces formula_63 for formula_64 in formula_65 some index set, once again makes use of the Cartesian product
formula_66
Defining the topology is a little tricky. For finitely many factors, this is the obvious and natural thing to do: simply take as a basis of open sets to be the collection of all Cartesian products of open subsets from each factor:
formula_67
This topology is called the product topology. For example, directly defining the product topology on formula_68 by the open sets of formula_0 (disjoint unions of open intervals), the basis for this topology would consist of all disjoint unions of open rectangles in the plane (as it turns out, it coincides with the usual metric topology).
The product topology for infinite products has a twist, and this has to do with being able to make all the projection maps continuous and to make all functions into the product continuous if and only if all its component functions are continuous (that is, to satisfy the categorical definition of product: the morphisms here are continuous functions): we take as a basis of open sets to be the collection of all Cartesian products of open subsets from each factor, as before, with the proviso that all but finitely many of the open subsets are the entire factor:
formula_69
The more natural-sounding topology would be, in this case, to take products of infinitely many open subsets as before, and this does yield a somewhat interesting topology, the box topology. However, it is not too difficult to find an example of a collection of continuous component functions whose product function is not continuous (see the separate entry box topology for an example and more). The problem that makes the twist necessary is ultimately rooted in the fact that the intersection of open sets is only guaranteed to be open for finitely many sets in the definition of topology.
Products (with the product topology) are nice with respect to preserving properties of their factors; for example, the product of Hausdorff spaces is Hausdorff; the product of connected spaces is connected, and the product of compact spaces is compact. That last one, called Tychonoff's theorem, is yet another equivalence to the axiom of choice.
For more properties and equivalent formulations, see the separate entry product topology.
Direct product of binary relations.
On the Cartesian product of two sets with binary relations formula_70 define formula_71 as formula_72 If formula_73 are both reflexive, irreflexive, transitive, symmetric, or antisymmetric, then formula_74 will be also. Similarly, totality of formula_74 is inherited from formula_75 Combining properties it follows that this also applies for being a preorder and being an equivalence relation. However, if formula_73 are connected relations, formula_74 need not be connected; for example, the direct product of formula_76 on formula_77 with itself does not relate formula_78
Direct product in universal algebra.
If formula_79 is a fixed signature, formula_80 is an arbitrary (possibly infinite) index set, and formula_81 is an indexed family of formula_79 algebras, the direct product formula_82 is a formula_79 algebra defined as follows: the universe set of formula_83 is the Cartesian product of the universe sets formula_84 of formula_85 formally formula_86 and for each formula_43-ary operation symbol formula_87 its interpretation formula_88 in formula_83 is defined componentwise, that is, for all formula_89 and each formula_90 the formula_64th component of formula_91 is defined as formula_92
For each formula_90 the formula_64th projection formula_93 is defined by formula_94 It is a surjective homomorphism between the formula_79 algebras formula_95
As a special case, if the index set formula_96 the direct product of two formula_79 algebras formula_97 is obtained, written as formula_98 If formula_79 just contains one binary operation formula_99 the above definition of the direct product of groups is obtained, using the notation formula_100 formula_101 Similarly, the definition of the direct product of modules is subsumed here.
Categorical product.
The direct product can be abstracted to an arbitrary category. In a category, given a collection of objects formula_102 indexed by a set formula_80, a product of these objects is an object formula_15 together with morphisms formula_103 for all formula_104, such that if formula_16 is any other object with morphisms formula_105 for all formula_104, there exists a unique morphism formula_106 whose composition with formula_107 equals formula_108 for every formula_64.
Such formula_15 and formula_109 do not always exist. If they do exist, then formula_110 is unique up to isomorphism, and formula_15 is denoted formula_111.
In the special case of the category of groups, a product always exists: the underlying set of formula_111 is the Cartesian product of the underlying sets of the formula_84, the group operation is componentwise multiplication, and the (homo)morphism formula_103 is the projection sending each tuple to its formula_64th coordinate.
Internal and external direct product.
Some authors draw a distinction between an internal direct product and an external direct product. For example, if formula_15 and formula_16 are subgroups of an additive abelian group formula_27, such that formula_112 and formula_113, then formula_114 and we say that formula_27 is the "internal" direct product of formula_15 and formula_16. To avoid ambiguity, we can refer to the set formula_115 as the "external" direct product of formula_15 and formula_16.
Notes.
| [
{
"math_id": 0,
"text": "\\R"
},
{
"math_id": 1,
"text": "\\R \\times \\R"
},
{
"math_id": 2,
"text": "\\{(x,y) : x,y \\in \\R\\}."
},
{
"math_id": 3,
"text": "\\R\\times \\R"
},
{
"math_id": 4,
"text": "\\{(x,y) : x,y \\in \\R\\}"
},
{
"math_id": 5,
"text": "(a,b) + (c,d) = (a+c, b+d)."
},
{
"math_id": 6,
"text": "(a,b) + (c,d) = (a+c, b+d)"
},
{
"math_id": 7,
"text": "(a,b) (c,d) = (ac, bd)."
},
{
"math_id": 8,
"text": "(1,0)"
},
{
"math_id": 9,
"text": "\\R \\times \\R \\times \\R \\times \\R."
},
{
"math_id": 10,
"text": "(A \\times B) \\times C \\cong A \\times (B \\times C)"
},
{
"math_id": 11,
"text": "A,"
},
{
"math_id": 12,
"text": "B,"
},
{
"math_id": 13,
"text": "C"
},
{
"math_id": 14,
"text": "A \\times B \\cong B \\times A"
},
{
"math_id": 15,
"text": "A"
},
{
"math_id": 16,
"text": "B"
},
{
"math_id": 17,
"text": "\\mathbb R,"
},
{
"math_id": 18,
"text": "\\R \\times \\R \\times \\R \\times \\dotsb."
},
{
"math_id": 19,
"text": "(G, \\circ)"
},
{
"math_id": 20,
"text": "(H, \\cdot),"
},
{
"math_id": 21,
"text": "G \\times H."
},
{
"math_id": 22,
"text": "G \\oplus H."
},
{
"math_id": 23,
"text": "G \\text{ and } H,"
},
{
"math_id": 24,
"text": "\\{(g, h) : g \\in G, h \\in H\\};"
},
{
"math_id": 25,
"text": "(g, h) \\times \\left(g', h'\\right) = \\left(g \\circ g', h \\cdot h'\\right)"
},
{
"math_id": 26,
"text": "(H, \\cdot)."
},
{
"math_id": 27,
"text": "G"
},
{
"math_id": 28,
"text": "(g, 1)"
},
{
"math_id": 29,
"text": "H"
},
{
"math_id": 30,
"text": "(1, h)"
},
{
"math_id": 31,
"text": "K"
},
{
"math_id": 32,
"text": "K = GH"
},
{
"math_id": 33,
"text": "G \\text{ and } H"
},
{
"math_id": 34,
"text": "C^2:"
},
{
"math_id": 35,
"text": "\\{1, a\\} \\text{ and } \\{1, b\\}."
},
{
"math_id": 36,
"text": "C_2 \\times C_2 = \\{(1,1), (1,b), (a,1), (a,b)\\},"
},
{
"math_id": 37,
"text": "(1,b)^* (a,1) = \\left(1^* a, b^* 1\\right) = (a, b),"
},
{
"math_id": 38,
"text": "(1,b)^* (1, b) = \\left(1, b^2\\right) = (1, 1)."
},
{
"math_id": 39,
"text": "\\begin{align}\n \\pi_1: G \\times H \\to G, \\ \\ \\pi_1(g, h) &= g \\\\\n \\pi_2: G \\times H \\to H, \\ \\ \\pi_2(g, h) &= h\n\\end{align}"
},
{
"math_id": 40,
"text": "f"
},
{
"math_id": 41,
"text": "f_i = \\pi_i \\circ f."
},
{
"math_id": 42,
"text": "n \\geq 0,"
},
{
"math_id": 43,
"text": "n"
},
{
"math_id": 44,
"text": "G^n"
},
{
"math_id": 45,
"text": "n = 0,"
},
{
"math_id": 46,
"text": "\\Z^n"
},
{
"math_id": 47,
"text": "\\R^n."
},
{
"math_id": 48,
"text": "\\R^n,"
},
{
"math_id": 49,
"text": "\\R^m"
},
{
"math_id": 50,
"text": "\\R^n"
},
{
"math_id": 51,
"text": "\\R^{m+n}."
},
{
"math_id": 52,
"text": "\\prod_{i=1}^n X_i"
},
{
"math_id": 53,
"text": "\\bigoplus_{i=1}^n X_i."
},
{
"math_id": 54,
"text": "X = \\prod_{i=1}^\\infty \\R"
},
{
"math_id": 55,
"text": "Y = \\bigoplus_{i=1}^\\infty \\R,"
},
{
"math_id": 56,
"text": "Y."
},
{
"math_id": 57,
"text": "(1, 0, 0, 0, \\ldots)"
},
{
"math_id": 58,
"text": "Y"
},
{
"math_id": 59,
"text": "(1, 1, 1, 1, \\ldots)"
},
{
"math_id": 60,
"text": "X;"
},
{
"math_id": 61,
"text": "X"
},
{
"math_id": 62,
"text": "Y \\subset X"
},
{
"math_id": 63,
"text": "X_i"
},
{
"math_id": 64,
"text": "i"
},
{
"math_id": 65,
"text": "I,"
},
{
"math_id": 66,
"text": "\\prod_{i \\in I} X_i."
},
{
"math_id": 67,
"text": "\\mathcal B = \\left\\{U_1 \\times \\cdots \\times U_n\\ : \\ U_i\\ \\mathrm{open\\ in}\\ X_i\\right\\}."
},
{
"math_id": 68,
"text": "\\R^2"
},
{
"math_id": 69,
"text": "\\mathcal B = \\left\\{ \\prod_{i \\in I} U_i\\ : \\ (\\exists j_1,\\ldots,j_n)(U_{j_i}\\ \\mathrm{open\\ in}\\ X_{j_i})\\ \\mathrm{and}\\ (\\forall i \\neq j_1,\\ldots,j_n)(U_i = X_i) \\right\\}."
},
{
"math_id": 70,
"text": "R \\text{ and } S,"
},
{
"math_id": 71,
"text": "(a, b) T (c, d)"
},
{
"math_id": 72,
"text": "a R c \\text{ and } b S d."
},
{
"math_id": 73,
"text": "R \\text{ and } S"
},
{
"math_id": 74,
"text": "T"
},
{
"math_id": 75,
"text": "R \\text{ and } S."
},
{
"math_id": 76,
"text": "\\,\\leq\\,"
},
{
"math_id": 77,
"text": "\\N"
},
{
"math_id": 78,
"text": "(1, 2) \\text{ and } (2, 1)."
},
{
"math_id": 79,
"text": "\\Sigma"
},
{
"math_id": 80,
"text": "I"
},
{
"math_id": 81,
"text": "\\left(\\mathbf{A}_i\\right)_{i \\in I}"
},
{
"math_id": 82,
"text": "\\mathbf{A} = \\prod_{i \\in I} \\mathbf{A}_i"
},
{
"math_id": 83,
"text": "\\mathbf{A}"
},
{
"math_id": 84,
"text": "A_i"
},
{
"math_id": 85,
"text": "\\mathbf{A}_i,"
},
{
"math_id": 86,
"text": "A = \\prod_{i \\in I} A_i."
},
{
"math_id": 87,
"text": "f \\in \\Sigma,"
},
{
"math_id": 88,
"text": "f^{\\mathbf{A}}"
},
{
"math_id": 89,
"text": "a_1, \\dotsc, a_n \\in A"
},
{
"math_id": 90,
"text": "i \\in I,"
},
{
"math_id": 91,
"text": "f^{\\mathbf{A}}\\!\\left(a_1, \\dotsc, a_n\\right)"
},
{
"math_id": 92,
"text": "f^{\\mathbf{A}_i}\\!\\left(a_1(i), \\dotsc, a_n(i)\\right)."
},
{
"math_id": 93,
"text": "\\pi_i : A \\to A_i"
},
{
"math_id": 94,
"text": "\\pi_i(a) = a(i)."
},
{
"math_id": 95,
"text": "\\mathbf{A} \\text{ and } \\mathbf{A}_i."
},
{
"math_id": 96,
"text": "I = \\{1, 2\\},"
},
{
"math_id": 97,
"text": "\\mathbf{A}_1 \\text{ and } \\mathbf{A}_2"
},
{
"math_id": 98,
"text": "\\mathbf{A} = \\mathbf{A}_1 \\times \\mathbf{A}_2."
},
{
"math_id": 99,
"text": "f,"
},
{
"math_id": 100,
"text": "A_1 = G, A_2 = H,"
},
{
"math_id": 101,
"text": "f^{A_1} = \\circ, \\ f^{A_2} = \\cdot, \\ \\text{ and } f^A = \\times."
},
{
"math_id": 102,
"text": "(A_i)_{i \\in I}"
},
{
"math_id": 103,
"text": "p_i \\colon A \\to A_i"
},
{
"math_id": 104,
"text": "i \\in I"
},
{
"math_id": 105,
"text": "f_i \\colon B \\to A_i"
},
{
"math_id": 106,
"text": "B \\to A"
},
{
"math_id": 107,
"text": "p_i"
},
{
"math_id": 108,
"text": "f_i"
},
{
"math_id": 109,
"text": "(p_i)_{i \\in I}"
},
{
"math_id": 110,
"text": "(A,(p_i)_{i \\in I})"
},
{
"math_id": 111,
"text": "\\prod_{i \\in I} A_i"
},
{
"math_id": 112,
"text": "A + B = G"
},
{
"math_id": 113,
"text": "A \\cap B = \\{0\\}"
},
{
"math_id": 114,
"text": "A \\times B \\cong G,"
},
{
"math_id": 115,
"text": "\\{\\, (a,b) \\mid a \\in A, \\, b \\in B \\,\\}"
}
] | https://en.wikipedia.org/wiki?curid=8887 |
8889938 | Pillai prime | In number theory, a Pillai prime is a prime number "p" for which there is an integer "n" > 0 such that the factorial of "n" is one less than a multiple of the prime, but the prime is not one more than a multiple of "n". To put it algebraically, formula_0 but formula_1. The first few Pillai primes are
23, 29, 59, 61, 67, 71, 79, 83, 109, 137, 139, 149, 193, ... (sequence in the OEIS)
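The definition can be checked directly by building "n"! modulo "p" incrementally. The following minimal Python sketch is illustrative only (the function name and the search bound of 200 are arbitrary choices); it encodes the two conditions above and should reproduce the initial terms of the sequence listed above.
    from math import isqrt

    def is_pillai_prime(p):
        """Some n > 0 has n! ≡ -1 (mod p) while p is not ≡ 1 (mod n)."""
        if p < 2 or any(p % d == 0 for d in range(2, isqrt(p) + 1)):
            return False                    # p must be prime
        f = 1
        for n in range(1, p):               # for n >= p, n! ≡ 0 (mod p)
            f = (f * n) % p                 # running value of n! mod p
            if f == p - 1 and (p - 1) % n != 0:
                return True
        return False

    print([p for p in range(2, 200) if is_pillai_prime(p)])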
Pillai primes are named after the mathematician Subbayya Sivasankaranarayana Pillai, who studied these numbers. Their infinitude has been proven several times, by Subbarao, Erdős, and Hardy & Subbarao. | [
{
"math_id": 0,
"text": "n! \\equiv -1 \\mod p"
},
{
"math_id": 1,
"text": "p \\not\\equiv 1 \\mod n"
}
] | https://en.wikipedia.org/wiki?curid=8889938 |
8890014 | Perfect totient number | In number theory, a perfect totient number is an integer that is equal to the sum of its iterated totients. That is, one applies the totient function to a number "n", apply it again to the resulting totient, and so on, until the number 1 is reached, and adds together the resulting sequence of numbers; if the sum equals "n", then "n" is a perfect totient number.
Examples.
For example, there are six positive integers less than 9 and relatively prime to it, so the totient of 9 is 6; there are two numbers less than 6 and relatively prime to it, so the totient of 6 is 2; and there is one number less than 2 and relatively prime to it, so the totient of 2 is 1; and 9 = 6 + 2 + 1, so 9 is a perfect totient number.
The first few perfect totient numbers are
3, 9, 15, 27, 39, 81, 111, 183, 243, 255, 327, 363, 471, 729, 2187, 2199, 3063, 4359, 4375, ... (sequence in the OEIS).
Notation.
In symbols, one writes
formula_0
for the iterated totient function. Then if "c" is the integer such that
formula_1
one has that "n" is a perfect totient number if
formula_2
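The definition translates directly into code. The sketch below is illustrative only (the helper names and the search bound are arbitrary): it iterates the totient function down to 1, sums the values, and compares the sum with "n"; run over small integers it should reproduce the list of perfect totient numbers given in the Examples section.
    def totient(n):
        """Euler's totient, computed by trial factorisation (fine for small n)."""
        result, m, d = n, n, 2
        while d * d <= m:
            if m % d == 0:
                while m % d == 0:
                    m //= d
                result -= result // d      # multiply result by (1 - 1/d)
            d += 1
        if m > 1:
            result -= result // m
        return result

    def is_perfect_totient(n):
        """Sum the iterated totients of n down to 1 and compare with n."""
        total, k = 0, n
        while k > 1:
            k = totient(k)
            total += k
        return total == n

    print([n for n in range(2, 400) if is_perfect_totient(n)])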
Multiples and powers of three.
It can be observed that many perfect totient numbers are multiples of 3; in fact, 4375 is the smallest perfect totient number that is not divisible by 3. All powers of 3 are perfect totient numbers, as may be seen by induction using the fact that
formula_3
Venkataraman (1975) found another family of perfect totient numbers: if "p" = 4 × 3"k" + 1 is prime, then 3"p" is a perfect totient number. The values of "k" leading to perfect totient numbers in this way are
0, 1, 2, 3, 6, 14, 15, 39, 201, 249, 1005, 1254, 1635, ... (sequence in the OEIS).
More generally if "p" is a prime number greater than 3, and 3"p" is a perfect totient number, then "p" ≡ 1 (mod 4) (Mohan and Suryanarayana 1982). Not all "p" of this form lead to perfect totient numbers; for instance, 51 is not a perfect totient number. Iannucci et al. (2003) showed that if 9"p" is a perfect totient number then "p" is a prime of one of three specific forms listed in their paper. It is not known whether there are any perfect totient numbers of the form 3"k""p" where "p" is prime and "k" > 3.
References.
"This article incorporates material from Perfect Totient Number on PlanetMath, which is licensed under the ." | [
{
"math_id": 0,
"text": "\\varphi^i(n) = \\begin{cases}\n\\varphi(n), &\\text{ if } i = 1 \\\\\n\\varphi(\\varphi^{i-1}(n)), &\\text{ if } i \\geq 2\n\\end{cases}"
},
{
"math_id": 1,
"text": "\\displaystyle\\varphi^c(n)=2,"
},
{
"math_id": 2,
"text": "n = \\sum_{i = 1}^{c + 1} \\varphi^i(n)."
},
{
"math_id": 3,
"text": "\\displaystyle\\varphi(3^k) = \\varphi(2\\times 3^k) = 2 \\times 3^{k-1}."
}
] | https://en.wikipedia.org/wiki?curid=8890014 |
8890983 | Nonlinear acoustics | Nonlinear acoustics (NLA) is a branch of physics and acoustics dealing with sound waves of sufficiently large amplitudes. Large amplitudes require using full systems of governing equations of fluid dynamics (for sound waves in liquids and gases) and elasticity (for sound waves in solids). These equations are generally nonlinear, and their traditional linearization is no longer possible. The solutions of these equations show that, due to the effects of nonlinearity, sound waves are being distorted as they travel.
Introduction.
A sound wave propagates through a material as a localized pressure change. Increasing the pressure of a gas or fluid increases its local temperature. The local speed of sound in a compressible material increases with temperature; as a result, the wave travels faster during the high pressure phase of the oscillation than during the lower pressure phase. This affects the wave's frequency structure; for example, in an initially plain sinusoidal wave of a single frequency, the peaks of the wave travel faster than the troughs, and the pulse becomes cumulatively more like a sawtooth wave. In other words, the wave distorts itself. In doing so, other frequency components are introduced, which can be described by the Fourier series. This phenomenon is characteristic of a nonlinear system, since a linear acoustic system responds only to the driving frequency. This always occurs but the effects of geometric spreading and of absorption usually overcome the self-distortion, so linear behavior usually prevails and nonlinear acoustic propagation occurs only for very large amplitudes and only near the source.
Additionally, waves of different amplitudes will generate different pressure gradients, contributing to the nonlinear effect.
Physical analysis.
The pressure changes within a medium cause the wave energy to transfer to higher harmonics. Since attenuation generally increases with frequency, a countereffect exists that changes the nature of the nonlinear effect over distance. To describe their level of nonlinearity, materials can be given a nonlinearity parameter, formula_0. The values of formula_1 and formula_2 are the coefficients of the first- and second-order terms of the Taylor series expansion of the equation relating the material's pressure to its density. The Taylor series has more terms, and hence more coefficients (C, D, ...) but they are seldom used. Typical values for the nonlinearity parameter in biological media are shown in the following table.
In a liquid, a modified coefficient known as formula_3 is usually used.
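As a small numerical illustration of this modified coefficient, the value B/A ≈ 5 often quoted for water (used here purely as an example figure, not taken from the table referred to above) gives a value of about 3.5:
    B_over_A = 5.0                 # example value of B/A, roughly that quoted for water
    beta = 1.0 + B_over_A / 2.0    # modified nonlinearity coefficient
    print(beta)                    # 3.5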
Mathematical model.
Governing equations to derive Westervelt equation.
Continuity:
formula_4
Conservation of momentum:
formula_5
with Taylor perturbation expansion on density:
formula_6
where ε is a small parameter, i.e. the perturbation parameter, the equation of state becomes:
formula_7
If the second term in the Taylor expansion of pressure is dropped, the viscous wave equation can be derived. If it is kept, the nonlinear term in pressure appears in the Westervelt equation.
Westervelt equation.
The general wave equation that accounts for nonlinearity up to the second-order is given by the Westervelt equation
formula_8
where formula_9 is the sound pressure, formula_10 is the small signal sound speed, formula_11 is the sound diffusivity, formula_12 is the nonlinearity coefficient and formula_13 is the ambient density.
The sound diffusivity is given by
formula_14
where formula_15 is the shear viscosity, formula_16 the bulk viscosity, formula_17 the thermal conductivity, formula_18 and formula_19 the specific heat at constant volume and pressure respectively.
Burgers' equation.
The Westervelt equation can be simplified to take a one-dimensional form with an assumption of strictly forward propagating waves and the use of a coordinate transformation to a retarded time frame:
formula_20
where formula_21 is retarded time. This corresponds to a viscous Burgers equation:
formula_22
in the pressure field (y=p), with a mathematical "time variable":
formula_23
and with a "space variable":
formula_24
and a negative diffusion coefficient:
formula_25.
The Burgers' equation is the simplest equation that describes the combined effects of nonlinearity and losses on the propagation of progressive waves.
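As a rough illustration of how the nonlinear and dissipative terms act together, the following sketch integrates the viscous Burgers equation in its standard form ∂u/∂t + u ∂u/∂x = ν ∂²u/∂x² with simple explicit finite differences on a periodic domain; all numerical parameters are arbitrary choices, and the sketch makes no attempt to reproduce the specific coordinate transformation and sign conventions used above.
    import numpy as np

    N, L_dom = 200, 2 * np.pi
    dx = L_dom / N
    x = np.arange(N) * dx
    nu = 0.05                       # (positive) diffusion coefficient, illustrative
    u = np.sin(x)                   # initial single-frequency wave
    dt = 0.2 * min(dx, dx**2 / (2 * nu))
    for _ in range(int(2.0 / dt)):
        up, um = np.roll(u, -1), np.roll(u, 1)
        dudx = (up - um) / (2 * dx)            # central first derivative
        d2udx2 = (up - 2 * u + um) / dx**2     # central second derivative
        u = u + dt * (-u * dudx + nu * d2udx2)
    # u now shows the sine wave steepening towards a sawtooth, smoothed by diffusion.
Inspecting the Fourier spectrum of u at successive times shows energy flowing from the fundamental into higher harmonics, which is the distortion mechanism described in the Introduction.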
KZK equation.
An augmentation to the Burgers equation that accounts for the combined effects of nonlinearity, diffraction, and absorption in directional sound beams is described by the Khokhlov–Zabolotskaya–Kuznetsov (KZK) equation, named after Rem Khokhlov, Evgenia Zabolotskaya, and V. P. Kuznetsov. Solutions to this equation are generally used to model nonlinear acoustics.
If the formula_26 axis is in the direction of the sound beam path and the formula_27 plane is perpendicular to that, the KZK equation can be written
formula_28
The equation can be solved for a particular system using a finite difference scheme. Such solutions show how the sound beam distorts as it passes through a nonlinear medium.
Common occurrences.
Sonic boom.
The nonlinear behavior of the atmosphere leads to change of the wave shape in a sonic boom. Generally, this makes the boom more 'sharp' or sudden, as the high-amplitude peak moves to the wavefront.
Acoustic levitation.
Acoustic levitation would not be possible without nonlinear acoustic phenomena. The nonlinear effects are particularly evident due to the high-powered acoustic waves involved.
Ultrasonic waves.
Because of their relatively high amplitude to wavelength ratio, ultrasonic waves commonly display nonlinear propagation behavior. For example, nonlinear acoustics is a field of interest for medical ultrasonography because it can be exploited to produce better image quality.
Musical acoustics.
The physical behavior of musical acoustics is mainly nonlinear. Attempts are made to model the sound generation of musical instruments using physical modeling synthesis, emulating their sound from measurements of their nonlinearity.
Parametric arrays.
A parametric array is a nonlinear transduction mechanism that generates narrow, nearly side lobe-free beams of low frequency sound, through the mixing and interaction of high-frequency sound waves. Applications include underwater acoustics and audio.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "B/A"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "\\beta = 1 + \\frac{B}{2A}"
},
{
"math_id": 4,
"text": " \\frac{\\partial \\rho}{\\partial t} + \\nabla \\cdot (\\rho \\textbf{u}) = 0"
},
{
"math_id": 5,
"text": " \\rho \\left( \\frac{\\partial \\textbf{u}}{\\partial t} + \\textbf{u} \\cdot \\nabla \\textbf{u} \\right) + \\nabla p = (\\lambda + 2 \\mu) \\nabla (\\nabla \\cdot \\textbf{u}) "
},
{
"math_id": 6,
"text": "\\rho = \\sum_0^\\infty \\varepsilon^i \\rho_i"
},
{
"math_id": 7,
"text": " p = \\varepsilon \\rho_1 c_0^2 \\left( 1+ \\varepsilon \\frac{B}{2!A}\\frac{\\rho_1}{\\rho_0} + O(\\varepsilon^2) \\right) "
},
{
"math_id": 8,
"text": "\\, \\nabla^{2} p - \\frac{1}{c_{0}^{2}} \\frac{\\partial^{2} p}{\\partial t^{2}} + \\frac{\\delta}{c_{0}^{4}} \\frac{\\partial^{3} p}{\\partial t^{3}} = - \\frac{\\beta}{\\rho_{0} c_{0}^{4}} \\frac{\\partial^{2} p^{2}}{\\partial t^{2}}"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "c_0"
},
{
"math_id": 11,
"text": "\\delta"
},
{
"math_id": 12,
"text": "\\beta"
},
{
"math_id": 13,
"text": "\\rho_0"
},
{
"math_id": 14,
"text": "\\, \\delta = \\frac{1}{\\rho_{0}} \\left(\\frac{4}{3}\\mu+\\mu_{B}\\right) + \\frac{k}{\\rho_{0}} \\left(\\frac{1}{c_{v}} - \\frac{1}{c_{p}}\\right)"
},
{
"math_id": 15,
"text": "\\mu"
},
{
"math_id": 16,
"text": "\\mu_{B}"
},
{
"math_id": 17,
"text": "k"
},
{
"math_id": 18,
"text": "c_{v}"
},
{
"math_id": 19,
"text": "c_{p}"
},
{
"math_id": 20,
"text": "\\frac{\\partial p}{\\partial z} - \\frac{\\beta}{\\rho_{0} c_{0}^{3}} p \\frac{\\partial p}{\\partial \\tau} = \\frac{\\delta}{2 c_{0}^{3}}\\frac{\\partial^{2} p}{\\partial \\tau^{2}}"
},
{
"math_id": 21,
"text": "\\tau = t-z/c_0"
},
{
"math_id": 22,
"text": "\\frac{\\partial y}{\\partial t'} + y \\frac{\\partial y}{\\partial x} = d \\frac{\\partial^2 y}{\\partial x^2}"
},
{
"math_id": 23,
"text": "t' = \\frac z {c_0} "
},
{
"math_id": 24,
"text": "x = - \\frac{\\rho_{0} c_{0}^{2}}{\\beta} \\tau "
},
{
"math_id": 25,
"text": "d = - \\frac{\\rho_{0} c_{0}}{2 \\beta^2} \\delta "
},
{
"math_id": 26,
"text": "z"
},
{
"math_id": 27,
"text": "(x,y)"
},
{
"math_id": 28,
"text": "\\, \\frac{\\partial^2 p}{\\partial z \\partial \\tau} = \\frac{c_0}{2}\\nabla^2_{\\perp}p + \\frac{\\delta}{2c^3_0}\\frac{\\partial^3 p}{\\partial \\tau^3} + \\frac{\\beta}{2\\rho_0 c^3_0}\\frac{\\partial^2 p^2}{\\partial \\tau^2}"
}
] | https://en.wikipedia.org/wiki?curid=8890983 |
8891344 | Gibbs–Duhem equation | Equation in thermodynamics
In thermodynamics, the Gibbs–Duhem equation describes the relationship between changes in chemical potential for components in a thermodynamic system:
formula_0
where formula_1 is the number of moles of component formula_2 the infinitesimal increase in chemical potential for this component, formula_3 the entropy, formula_4 the absolute temperature, formula_5 volume and formula_6 the pressure. formula_7 is the number of different components in the system. This equation shows that in thermodynamics intensive properties are not independent but related, making it a mathematical statement of the state postulate. When pressure and temperature are variable, only formula_8 of formula_7 components have independent values for chemical potential and Gibbs' phase rule follows. The Gibbs−Duhem equation cannot be used for small thermodynamic systems due to the influence of surface effects and other microscopic phenomena.
The equation is named after Josiah Willard Gibbs and Pierre Duhem.
Derivation.
Deriving the Gibbs–Duhem equation from the fundamental thermodynamic equation is straightforward. The total differential of the extensive Gibbs free energy formula_9 in terms of its natural variables is
formula_10
Since the Gibbs free energy is the Legendre transformation of the internal energy, the derivatives can be replaced by their definitions, transforming the above equation into:
formula_11
The chemical potential is simply another name for the partial molar Gibbs free energy (or the partial Gibbs free energy, depending on whether "N" is in units of moles or particles). Thus the Gibbs free energy of a system can be calculated by collecting moles together carefully at a specified "T", "P" and at a constant molar ratio composition (so that the chemical potential does not change as the moles are added together), i.e.
formula_12.
The total differential of this expression is
formula_13
Combining the two expressions for the total differential of the Gibbs free energy gives
formula_14
which simplifies to the Gibbs–Duhem relation:
formula_15
Alternative derivation.
Another way of deriving the Gibbs–Duhem equation can be found by taking the extensivity of energy into account. Extensivity implies that
formula_16
where formula_17 denotes all extensive variables of the internal energy formula_18. The internal energy is thus a first-order homogeneous function. Applying Euler's homogeneous function theorem, one finds the following relation when taking only volume, number of particles, and entropy as extensive variables:
formula_19
Taking the total differential, one finds
formula_20
Finally, one can equate this expression to the definition of formula_21 to find the Gibbs–Duhem equation
formula_22
Applications.
By normalizing the above equation by the extent of a system, such as the total number of moles, the Gibbs–Duhem equation provides a relationship between the intensive variables of the system. For a simple system with formula_7 different components, there will be formula_23 independent parameters or "degrees of freedom". For example, if we know a gas cylinder filled with pure nitrogen is at room temperature (298 K) and 25 MPa, we can determine the fluid density (258 kg/m3), enthalpy (272 kJ/kg), entropy (5.07 kJ/kg⋅K) or any other intensive thermodynamic variable. If instead the cylinder contains a nitrogen/oxygen mixture, we require an additional piece of information, usually the ratio of oxygen-to-nitrogen.
If multiple phases of matter are present, the chemical potentials across a phase boundary are equal. Combining expressions for the Gibbs–Duhem equation in each phase and assuming equilibrium throughout the system (i.e. that the temperature and pressure are constant throughout the system), we recover Gibbs' phase rule.
One particularly useful expression arises when considering binary solutions. At constant P (isobaric) and T (isothermal) it becomes:
formula_24
or, normalizing by total number of moles in the system formula_25 substituting in the definition of activity coefficient formula_26 and using the identity formula_27:
formula_28
This equation is instrumental in the calculation of thermodynamically consistent and thus more accurate expressions for the vapor pressure of a fluid mixture from limited experimental data.
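For instance, an activity-coefficient model must satisfy the isothermal–isobaric relation above at every composition. The short sketch below (illustrative only; the two-suffix Margules model and the parameter value are assumptions chosen for simplicity) checks this numerically: with ln γ1 = A·x2² and ln γ2 = A·x1², the quantity x1 d ln γ1 + x2 d ln γ2 vanishes up to finite-difference error.
    import numpy as np

    A = 1.2                                  # hypothetical Margules parameter
    x1 = np.linspace(0.01, 0.99, 99)
    x2 = 1.0 - x1
    ln_g1 = A * x2**2                        # two-suffix Margules activity coefficients
    ln_g2 = A * x1**2
    residual = x1 * np.gradient(ln_g1, x1) + x2 * np.gradient(ln_g2, x1)
    print(np.max(np.abs(residual)))          # ~0: the model is thermodynamically consistent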
Ternary and multicomponent solutions and mixtures.
Lawrence Stamper Darken has shown that the Gibbs–Duhem equation can be applied to the determination of the chemical potentials of the components of a multicomponent system from experimental data regarding the chemical potential formula_29 of only one component (here component 2) at all compositions. He deduced the following relation
formula_30
where "x"i denotes the amount (mole) fractions of the components.
Making some rearrangements and dividing by (1 – x2)^2 gives:
formula_31
or
formula_32
or
formula_33 as formatting variant
The derivative with respect to one mole fraction x2 is taken at constant ratios of amounts (and therefore of mole fractions) of the other components of the solution representable in a diagram like ternary plot.
Integrating the last equality from formula_34 to formula_35 gives:
formula_36
Applying L'Hôpital's rule gives:
formula_37.
This further becomes:
formula_38.
Expressing the mole fractions of components 1 and 3 as functions of the component 2 mole fraction and the binary mole ratios:
formula_39
formula_40
and the sum of partial molar quantities
formula_41
gives
formula_42
formula_43 and formula_44 are constants which can be determined from the binary systems 1–2 and 2–3. These constants can be obtained from the previous equality by putting the complementary mole fraction x3 = 0 for x1 and vice versa.
Thus
formula_45
and
formula_46
The final expression is given by substitution of these constants into the previous equation:
formula_47
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{i=1}^I N_i \\mathrm{d}\\mu_i = - S \\mathrm{d}T + V \\mathrm{d}p"
},
{
"math_id": 1,
"text": "N_i"
},
{
"math_id": 2,
"text": "i, \\mathrm{d}\\mu_i "
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "V"
},
{
"math_id": 6,
"text": "p"
},
{
"math_id": 7,
"text": "I"
},
{
"math_id": 8,
"text": "I-1 "
},
{
"math_id": 9,
"text": "G"
},
{
"math_id": 10,
"text": "\\mathrm{d}G =\\left. \\frac{\\partial G}{\\partial p}\\right |_{T,N} \\mathrm{d}p +\\left. \\frac{\\partial G}{\\partial T}\\right | _{p,N} \\mathrm{d}T +\\sum_{i=1}^I \\left. \\frac{\\partial G}{\\partial N_i}\\right |_{p,T,N_{j \\neq i}} \\mathrm{d}N_i."
},
{
"math_id": 11,
"text": "\\mathrm{d}G =V \\mathrm{d}p-S \\mathrm{d}T +\\sum_{i=1}^I \\mu_i \\mathrm{d}N_i "
},
{
"math_id": 12,
"text": " G = \\sum_{i=1}^I \\mu_i N_i "
},
{
"math_id": 13,
"text": " \\mathrm{d}G= \\sum_{i=1}^I \\mu_i \\mathrm{d}N_i + \\sum_{i=1}^I N_i \\mathrm{d}\\mu_i"
},
{
"math_id": 14,
"text": " \\sum_{i=1}^I \\mu_i \\mathrm{d}N_i + \\sum_{i=1}^I N_i \\mathrm{d}\\mu_i =V \\mathrm{d}p-S \\mathrm{d}T+\\sum_{i=1}^I \\mu_i \\mathrm{d}N_i "
},
{
"math_id": 15,
"text": " \\sum_{i=1}^I N_i \\mathrm{d}\\mu_i = -S \\mathrm{d}T + V \\mathrm{d}p "
},
{
"math_id": 16,
"text": "U(\\lambda \\mathbf{X}) = \\lambda U (\\mathbf{X})"
},
{
"math_id": 17,
"text": "\\mathbf{X}"
},
{
"math_id": 18,
"text": "U"
},
{
"math_id": 19,
"text": "U = TS - pV + \\sum_{i=1}^I \\mu_i N_i"
},
{
"math_id": 20,
"text": "\\mathrm{d}U = T\\mathrm{d}S + S\\mathrm{d}T - p\\mathrm{d}V - V \\mathrm{d}p + \\sum_{i=1}^I \\mu_i \\mathrm{d} N_i + \\sum_{i=1}^I N_i \\mathrm{d} \\mu_i "
},
{
"math_id": 21,
"text": "\\mathrm{d}U"
},
{
"math_id": 22,
"text": "0 =S\\mathrm{d}T - V \\mathrm{d}p + \\sum_{i=1}^I N_i \\mathrm{d} \\mu_i "
},
{
"math_id": 23,
"text": "I+1 "
},
{
"math_id": 24,
"text": "0= N_1 \\mathrm{d}\\mu_1 + N_2 \\mathrm{d}\\mu_2 "
},
{
"math_id": 25,
"text": "N_1 + N_2,"
},
{
"math_id": 26,
"text": " \\gamma"
},
{
"math_id": 27,
"text": " x_1 + x_2 = 1 "
},
{
"math_id": 28,
"text": "0= x_1 \\mathrm{d}\\ln(\\gamma_1) + x_2 \\mathrm{d}\\ln(\\gamma_2)"
},
{
"math_id": 29,
"text": "\\bar {G_2}"
},
{
"math_id": 30,
"text": "\\bar{G_2}= G + (1-x_2) \\left(\\frac{{\\partial G}}{{\\partial x_2}}\\right)_{\\frac{x_1}{x_3}}"
},
{
"math_id": 31,
"text": "\\frac{G}{(1-x_2)^2} + \\frac{1}{1-x_2} \\left(\\frac{\\partial G}{\\partial x_2}\\right)_{\\frac{x_1}{x_3}} = \\frac{\\bar{G_2}}{(1-x_2)^2}"
},
{
"math_id": 32,
"text": " \\left(\\mathfrak{d} \\frac{G}{\\frac{1 - x_2}{\\mathfrak{d} x_2}}\\right)_{\\frac{x_1}{x_3}} = \\frac{\\bar{G_2}}{(1 - x_2)^2}"
},
{
"math_id": 33,
"text": "\\left(\\frac {\\frac{\\partial G}{1-x_2}}{\\partial x_2}\\right)_{\\frac{x_1}{x_3}} = \\frac{\\bar{G_2}}{(1 - x_2)^2}"
},
{
"math_id": 34,
"text": "x_2 = 1"
},
{
"math_id": 35,
"text": "x_2"
},
{
"math_id": 36,
"text": "G - (1 - x_2) \\lim_{x_2\\to 1} \\frac{G}{1 - x_2} = (1 - x_2) \\int_{1}^{x_2}\\frac{\\bar{G_2}}{(1 - x_2)^2} dx_2 "
},
{
"math_id": 37,
"text": " \\lim_{x_2\\to 1} \\frac{G}{1 - x_2} = \\lim_{x_2\\to 1} \\left(\\frac{\\partial G}{\\partial x_2}\\right)_{\\frac{x_1}{x_3}} "
},
{
"math_id": 38,
"text": " \\lim_{x_2\\to 1} \\frac{G}{1 - x_2} = -\\lim_{x_2\\to 1} \\frac {\\bar{G_2} - G}{1 - x_2}"
},
{
"math_id": 39,
"text": "x_1 = \\frac{1-x_2}{1+\\frac{x_3}{x_1}}"
},
{
"math_id": 40,
"text": "x_3 = \\frac{1-x_2}{1+\\frac{x_1}{x_3}}"
},
{
"math_id": 41,
"text": "G=\\sum _{i=1}^3 x_i \\bar{G_i},"
},
{
"math_id": 42,
"text": "G= x_1 (\\bar {G_1})_{x_2 =1} + x_3 (\\bar {G_3})_{x_2 =1} + (1 - x_2) \\int_{1}^{x_2}\\frac{\\bar{G_2}}{(1 - x_2)^2} dx_2 "
},
{
"math_id": 43,
"text": "(\\bar{G_1})_{x_2 =1}"
},
{
"math_id": 44,
"text": "(\\bar{G_3})_{x_2 =1}"
},
{
"math_id": 45,
"text": "(\\bar {G_1})_{x_2 =1} = - \\left(\\int_{1}^{0}\\frac{\\bar{G_2}}{(1 - x_2)^2} dx_2 \\right)_{x_3=0}"
},
{
"math_id": 46,
"text": "(\\bar {G_3})_{x_2 =1} = - \\left(\\int_{1}^{0}\\frac{\\bar{G_2}}{(1 - x_2)^2} dx_2 \\right)_{x_1=0}"
},
{
"math_id": 47,
"text": "G= (1 - x_2) \\left(\\int_{1}^{x_2}\\frac{\\bar{G_2}}{(1 - x_2)^2} dx_2 \\right)_{\\frac{x_1}{x_3}} - x_1 \\left(\\int_{1}^{0}\\frac{\\bar{G_2}}{(1 - x_2)^2} dx_2 \\right)_{x_3=0} - x_3 \\left(\\int_{1}^{0}\\frac{\\bar{G_2}}{(1 - x_2)^2} dx_2 \\right)_{x_1=0}"
}
] | https://en.wikipedia.org/wiki?curid=8891344 |
8892726 | Negative pedal curve | Mathematical plane curve
In geometry, a negative pedal curve is a plane curve that can be constructed from another plane curve "C" and a fixed point "P". For each point "X" ≠ "P" on the curve "C", the negative pedal curve has a tangent that passes through "X" and is perpendicular to line "XP". Constructing the negative pedal curve is the inverse operation to constructing a pedal curve.
Definition.
In the plane, for every point "X" other than "P" there is a unique line through "X" perpendicular to "XP". For a given curve in the plane and a given fixed point "P", called the pedal point, the negative pedal curve is the envelope of the lines "XP" for which "X" lies on the given curve.
Parameterization.
For a parametrically defined curve, its negative pedal curve with pedal point (0; 0) is defined as
formula_0
formula_1
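A short numerical sketch of this parameterization is given below; it is illustrative only (the ellipse, the sample count, and the use of numerical derivatives are arbitrary choices), with the pedal point taken at the origin as required by the formulas above.
    import numpy as np

    t = np.linspace(0.0, 2 * np.pi, 400)
    x, y = 2.0 * np.cos(t), np.sin(t)               # example curve: an ellipse
    xp, yp = np.gradient(x, t), np.gradient(y, t)   # numerical x'(t), y'(t)
    den = x * yp - y * xp
    X = ((y**2 - x**2) * yp + 2 * x * y * xp) / den
    Y = ((x**2 - y**2) * xp + 2 * x * y * yp) / den
    # (X, Y) traces the negative pedal curve of the ellipse with pedal point (0, 0).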
Properties.
The negative pedal curve of a pedal curve with the same pedal point is the original curve. | [
{
"math_id": 0,
"text": "X[x,y]=\\frac{(y^2-x^2)y'+2xyx'}{xy'-yx'}"
},
{
"math_id": 1,
"text": "Y[x,y]=\\frac{(x^2-y^2)x'+2xyy'}{xy'-yx'}"
}
] | https://en.wikipedia.org/wiki?curid=8892726 |
8893027 | Step recovery diode | Semiconductor diode producing short impulses
In electronics, a step recovery diode (SRD, snap-off diode or charge-storage diode or memory varactor) is a semiconductor junction diode with the ability to generate extremely short pulses. It has a variety of uses in microwave (MHz to GHz range) electronics as pulse generator or parametric amplifier.
When diodes switch from forward conduction to reverse cut-off, a reverse current flows briefly as stored charge is removed. It is the abruptness with which this reverse current ceases which characterises the step recovery diode.
Historical note.
The first published paper on the SRD is : the authors start the brief survey stating that "the recovery characteristics of certain types of pn-junction diodes exhibit a discontinuity which may be used to advantage for the generation of harmonics or for the production of millimicrosecond pulses". They also report that they first observed this phenomenon in February 1959.
Operating the SRD.
Physical principles.
The main phenomenon used in SRDs is the storage of electric charge during forward conduction, which is present in all semiconductor junction diodes and is due to the finite lifetime of minority carriers in semiconductors. Assume that the SRD is forward biased and in "steady state", i.e. the anode bias current does not change with time: since charge transport in a junction diode is mainly due to diffusion, i.e. to a non-constant spatial charge carrier density caused by the bias voltage, a charge "Qs" is stored in the device. This "stored charge" depends on the intensity of the anode bias current and on the minority carrier lifetime.
Quantitatively, if the steady state of forward conduction lasts for a time much greater than τ, the stored charge has the following approximate expression
formula_0
Now suppose that the voltage bias abruptly changes, switching from its stationary positive value to a higher magnitude constant negative value: then, since a certain amount of charge has been stored during forward conduction, diode resistance is still low ("i.e. the anode-to-cathode voltage VAK has nearly the same forward conduction value"). Anode current does not cease but reverses its polarity (i.e. the direction of its flow) and stored charge "Qs" starts to flow out of the device at an almost constant rate IR. All the stored charge is thus removed in a certain amount of time: this time is the "storage time tS" and its approximate expression is
formula_1
When all stored charge has been removed, diode resistance suddenly changes, rising to its cut-off value at reverse bias within a time tTr, the "transition time": this behavior can be used to produce pulses with rise time equal to this time.
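To give a feeling for the orders of magnitude involved, the following back-of-the-envelope calculation applies the two approximate expressions above with purely hypothetical values of the bias currents and of the carrier lifetime.
    I_A = 10e-3      # forward (anode) bias current, A  -- illustrative value
    tau = 100e-9     # minority carrier lifetime, s     -- illustrative value
    I_R = 20e-3      # reverse extraction current, A    -- illustrative value
    Q_s = I_A * tau  # stored charge, here 1 nC
    t_S = Q_s / I_R  # storage time, here 50 ns
    print(Q_s, t_S)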
Operation of the Drift Step Recovery Diode (DSRD).
The Drift Step Recovery Diode (DSRD) was discovered by Russian scientists in 1981 (Grekhov et al., 1981). The principle of the DSRD operation is similar to the SRD, with one essential difference - the forward pumping current should be pulsed, not continuous, because drift diodes function with slow carriers.
The principle of DSRD operation can be explained as follows: A short pulse of current is applied in the forward direction of the DSRD effectively "pumping" the P-N junction, or in other words, “charging” the P-N junction capacitively. When the current direction reverses, the accumulated charges are removed from the base region.
As soon as the accumulated charge decreases to zero, the diode opens rapidly. A high voltage spike can appear due to the self-induction of the diode circuit.
The larger the commutation current and the shorter the transition from forward to reverse conduction, the higher the pulse amplitude and efficiency of the pulse generator (Kardo-Sysoev et al., 1997).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
The following two books contain a comprehensive analysis of the theory of non-equilibrium charge transport in semiconductor diodes, and give also an overview of applications (at least up to the end of the seventies).
The following application notes deal extensively with practical circuits and applications using SRDs.
{
"math_id": 0,
"text": "Q_S\\cong I_A\\cdot\\tau"
},
{
"math_id": 1,
"text": "t_S\\cong\\frac{Q_S}{I_R}"
}
] | https://en.wikipedia.org/wiki?curid=8893027 |
8895763 | Oren–Nayar reflectance model | The Oren–Nayar reflectance model, developed by Michael Oren and Shree K. Nayar, is a reflectivity model for diffuse reflection from rough surfaces. It has been shown to accurately predict the appearance of a wide range of natural surfaces, such as concrete, plaster, sand, etc.
Introduction.
Reflectance is a physical property of a material that describes how it reflects incident light. The appearance of various materials is determined to a large extent by their reflectance properties. Most reflectance models can be broadly classified into two categories: diffuse and specular. In computer vision and computer graphics, the diffuse component is often assumed to be Lambertian. A surface that obeys Lambert's Law appears equally bright from all viewing directions. This model for diffuse reflection was proposed by Johann Heinrich Lambert in 1760 and has been perhaps the most widely used reflectance model in computer vision and graphics. For a large number of real-world surfaces, such as concrete, plaster, sand, etc., however, the Lambertian model is an inadequate approximation of the diffuse component. This is primarily because the Lambertian model does not take the roughness of the surface into account.
Rough surfaces can be modelled as a set of facets with different slopes, where each facet is a small planar patch. Since photo receptors of the retina and pixels in a camera are both finite-area detectors, substantial macroscopic (much larger than the wavelength of incident light) surface roughness is often projected onto a single detection element, which in turn produces an aggregate brightness value over many facets. Whereas Lambert’s law may hold well when observing a single planar facet, a collection of such facets with different orientations is guaranteed to violate Lambert’s law. The primary reason for this is that the foreshortened facet areas will change for different viewing directions, and thus the surface appearance will be view-dependent.
Analysis of this phenomenon has a long history and can be traced back almost a century. Past work has resulted in empirical models designed to fit experimental data as well as theoretical results derived from first principles. Much of this work was motivated by the non-Lambertian reflectance of the moon.
The Oren–Nayar reflectance model, developed by Michael Oren and Shree K. Nayar in 1993, predicts reflectance from rough diffuse surfaces for the entire hemisphere of source and sensor directions. The model takes into account complex physical phenomena such as masking, shadowing and interreflections between points on the surface facets. It can be viewed as a generalization of Lambert’s law. Today, it is widely used in computer graphics and animation for rendering rough surfaces. It also has important implications for human vision and computer vision problems, such as shape from shading, photometric stereo, etc.
Formulation.
The surface roughness model used in the derivation of the Oren-Nayar model is the microfacet model, proposed by Torrance and Sparrow, which assumes the surface to be composed of long symmetric V-cavities. Each cavity consists of two planar facets. The roughness of the surface is specified using a probability function for the distribution of facet slopes. In particular, the Gaussian distribution is often used, and thus the variance of the Gaussian distribution, formula_0, is a measure of the roughness of the surfaces. The standard deviation of the facet slopes (gradient of the surface elevation), formula_1 ranges in formula_2.
In the Oren–Nayar reflectance model, each facet is assumed to be Lambertian in reflectance. If formula_3 is the irradiance when the facet is illuminated head-on, the radiance formula_4 of the light reflected by the faceted surface, according to the Oren-Nayar model, is
formula_5
where the direct illumination term formula_6 and the term formula_7 that describes bounces of light between the facets are defined as follows.
formula_8
formula_9
where
formula_10,
formula_11
formula_12
formula_13,
formula_14,
and formula_15 is the albedo of the surface, and formula_1 is the roughness of the surface. In the case of formula_16 (i.e., all facets in the same plane), we have formula_17, and formula_18, and thus the Oren-Nayar model simplifies to the Lambertian model:
formula_19
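The formulation above can be transcribed directly into code. The sketch below is a straightforward, unoptimized implementation of the terms formula_6 and formula_7 exactly as given, with angles and σ in radians; the function name and argument order are arbitrary. Setting σ = 0 reduces it to the Lambertian value, as noted above.
    import math

    def oren_nayar(theta_i, theta_r, phi_i, phi_r, sigma, rho, E0=1.0):
        """Reflected radiance L1 + L2 of the Oren-Nayar model (angles in radians)."""
        alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
        s2 = sigma * sigma
        cos_dphi = math.cos(phi_i - phi_r)
        C1 = 1.0 - 0.5 * s2 / (s2 + 0.33)
        C2 = 0.45 * s2 / (s2 + 0.09)
        C2 *= math.sin(alpha) if cos_dphi >= 0.0 else (math.sin(alpha) - (2.0 * beta / math.pi) ** 3)
        C3 = 0.125 * s2 / (s2 + 0.09) * (4.0 * alpha * beta / math.pi ** 2) ** 2
        L1 = (rho / math.pi) * E0 * math.cos(theta_i) * (
            C1 + C2 * cos_dphi * math.tan(beta)
            + C3 * (1.0 - abs(cos_dphi)) * math.tan((alpha + beta) / 2.0))
        L2 = 0.17 * rho**2 / math.pi * E0 * math.cos(theta_i) * s2 / (s2 + 0.13) * (
            1.0 - cos_dphi * (2.0 * beta / math.pi) ** 2)
        return L1 + L2

    print(oren_nayar(0.5, 0.3, 0.0, 1.0, sigma=0.0, rho=0.8))   # equals 0.8/pi * cos(0.5)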
Results.
Here is a real image of a matte vase illuminated from the viewing direction, along with versions rendered using the Lambertian and Oren-Nayar models. It shows that the Oren-Nayar model predicts the diffuse reflectance for rough surfaces more accurately than the Lambertian model.
Here are rendered images of a sphere using the Oren-Nayar model, corresponding to different surface roughnesses (i.e. different formula_1 values):
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma^2"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "[0, \\infty)"
},
{
"math_id": 3,
"text": "E_0"
},
{
"math_id": 4,
"text": "L_r"
},
{
"math_id": 5,
"text": "L_r = L_1+L_2,"
},
{
"math_id": 6,
"text": "L_1"
},
{
"math_id": 7,
"text": "L_2"
},
{
"math_id": 8,
"text": "L_1 = \\frac{\\rho}{\\pi} E_0 \\cos \\theta_i \\left(C_1 + C_2 \\cos(\\phi_i-\\phi_r) \\tan\\beta + C_3(1-|\\cos(\\phi_i-\\phi_r)|)\\tan\\frac{\\alpha+\\beta}2\\right),"
},
{
"math_id": 9,
"text": "L_2 = 0.17\\frac{\\rho^2}\\pi E_0 \\cos\\theta_i\\frac{\\sigma^2}{\\sigma^2+0.13}\\left[1-\\cos(\\phi_i-\\phi_r)\\left(\\frac{2\\beta}\\pi\\right)^2\\right],"
},
{
"math_id": 10,
"text": "C_1 = 1-0.5\\frac{\\sigma^2}{\\sigma^2+0.33}"
},
{
"math_id": 11,
"text": "C_2 = \\begin{cases}\n0.45\\frac{\\sigma^2}{\\sigma^2+0.09}\\sin\\alpha & \\text{if }\\cos(\\phi_i-\\phi_r)\\ge0,\\\\\n0.45\\frac{\\sigma^2}{\\sigma^2+0.09}\\left(\\sin\\alpha-\\left(\\frac{2\\beta}\\pi\\right)^3\\right) & \\text{otherwise,}\n\\end{cases}"
},
{
"math_id": 12,
"text": "C_3 = 0.125\\frac{\\sigma^2}{\\sigma^2+0.09}\\left(\\frac{4\\alpha\\beta}{\\pi^2}\\right)^2,"
},
{
"math_id": 13,
"text": "\\alpha = \\max(\\theta_i, \\theta_r)"
},
{
"math_id": 14,
"text": "\\beta = \\min(\\theta_i, \\theta_r)"
},
{
"math_id": 15,
"text": "\\rho"
},
{
"math_id": 16,
"text": "\\sigma=0"
},
{
"math_id": 17,
"text": "C_1=1"
},
{
"math_id": 18,
"text": "C_2=C_3=L_2=0"
},
{
"math_id": 19,
"text": "L_r = \\frac{\\rho}{\\pi} E_0 \\cos \\theta_i."
}
] | https://en.wikipedia.org/wiki?curid=8895763 |
889639 | Default logic | Type of non-monotonic logic
Default logic is a non-monotonic logic proposed by Raymond Reiter to formalize reasoning with default assumptions.
Default logic can express facts like “by default, something is true”; by contrast, standard logic can only express that something is true or that something is false. This is a problem because reasoning often involves facts that are true in the majority of cases but not always. A classical example is: “birds typically fly”. This rule can be expressed in standard logic either by “all birds fly”, which is inconsistent with the fact that penguins do not fly, or by “all birds that are not penguins and not ostriches and ... fly”, which requires all exceptions to the rule to be specified. Default logic aims at formalizing inference rules like this one without explicitly mentioning all their exceptions.
Syntax of default logic.
A default theory is a pair formula_0. W is a set of logical formulas, called "the background theory", that formalize the facts that are known for sure. D is a set of "default rules", each one being of the form:
formula_1
According to this default, if we believe that Prerequisite is true, and each formula_2 for formula_3 is consistent with our current beliefs, we are led to believe that Conclusion is true.
The logical formulae in W and all formulae in a default were originally assumed to be first-order logic formulae, but they can potentially be formulae in an arbitrary formal logic. The case in which they are formulae in propositional logic is one of the most studied.
Examples.
The default rule “birds typically fly” is formalized by the following default:
formula_4
This rule means that, "if X is a bird, and it can be assumed that it flies, then we can conclude that it flies". A background theory containing some facts about birds is the following one:
formula_5.
According to this default rule, a condor flies because the precondition Bird(Condor) is true and the justification Flies(Condor) is not inconsistent with what is currently known. On the contrary, Bird(Penguin) does not allow concluding Flies(Penguin): even if the precondition of the default Bird(Penguin) is true, the justification Flies(Penguin) is inconsistent with what is known. From this background theory and this default, Bird(Bee) cannot be concluded because the default rule only allows deriving Flies("X") from Bird("X"), but not vice versa. Deriving the antecedents of an inference rule from the consequences is a form of explanation of the consequences, and is the aim of abductive reasoning.
A common default assumption is that what is not known to be true is believed to be false. This is known as the Closed-World Assumption, and is formalized in default logic using a default like the following one for every fact F.
formula_6
For example, the computer language Prolog uses a sort of default assumption when dealing with negation: if a negative atom cannot be proved to be true, then it is assumed to be false.
Note, however, that Prolog uses the so-called negation as failure: when the interpreter has to evaluate the atom formula_7, it tries to prove that F is true, and concludes that formula_7 is true if it fails. In default logic, instead, a default having formula_7 as a justification can only be applied if formula_7 is consistent with the current knowledge.
Restrictions.
A default is categorical or prerequisite-free if it has no prerequisite (or, equivalently, its prerequisite is tautological). A default is normal if it has a single justification that is equivalent to its conclusion. A default is supernormal if it is both categorical and normal. A default is seminormal if all its justifications entail its conclusion. A default theory is called categorical, normal, supernormal, or seminormal if all defaults it contains are categorical, normal, supernormal, or seminormal, respectively.
Semantics of default logic.
A default rule can be applied to a theory if its precondition is entailed by the theory and its justifications are all consistent with the theory. The application of a default rule leads to the addition of its consequence to the theory. Other default rules may then be applied to the resulting theory. When the theory is such that no other default can be applied, the theory is called an extension of the default theory. The default rules may be applied in different order, and this may lead to different extensions. The Nixon diamond example is a default theory with two extensions:
formula_8
Since Nixon is both a Republican and a Quaker, both defaults can be applied. However, applying the first default leads to the conclusion that Nixon is not a pacifist, which makes the second default not applicable. In the same way, applying the second default we obtain that Nixon is a pacifist, thus making the first default not applicable. This particular default theory has therefore two extensions, one in which Pacifist(Nixon) is true, and one in which Pacifist(Nixon) is false.
The original semantics of default logic was based on the fixed point of a function. The following is an equivalent algorithmic definition. If a default contains formulae with free variables, it is considered to represent the set of all defaults obtained by giving a value to all these variables. A default formula_9 is applicable to a propositional theory T if formula_10 and all theories formula_11 are consistent. The application of this default to T leads to the theory formula_12. An extension can be generated by applying the following algorithm:
T = W          /* current theory */
A = ∅          /* set of defaults applied so far */
/* apply a sequence of defaults */
while there is a default d that is not in A and is applicable to T
    add the consequence of d to T
    add d to A
/* final consistency check */
if for every default d in A
        T is consistent with all justifications of d
then
    output T
This algorithm is non-deterministic, as several defaults can alternatively be applied to a given theory T. In the Nixon diamond example, the application of the first default leads to a theory to which the second default cannot be applied and vice versa. As a result, two extensions are generated: one in which Nixon is a pacifist and one in which Nixon is not a pacifist.
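A toy implementation of this procedure is sketched below. It is deliberately restricted to the propositional case in which the background theory, prerequisites, and conclusions are literals (an atom or its negation, written here with a leading "-"), so that entailment and consistency reduce to simple set checks; it is not a general theorem prover, and all names are illustrative. Run on the Nixon diamond it returns the two extensions described above.
    def neg(lit):
        return lit[1:] if lit.startswith("-") else "-" + lit

    def consistent(lits):
        return all(neg(l) not in lits for l in lits)

    def extensions(W, D):
        """W: set of literals; D: list of (prerequisite, justifications, conclusion)."""
        results = []

        def expand(T, applied):
            progressed = False
            for i, (pre, justs, concl) in enumerate(D):
                if i in applied or pre not in T or concl in T:
                    continue
                if all(consistent(T | {j}) for j in justs):   # default i is applicable
                    expand(T | {concl}, applied | {i})
                    progressed = True
            if not progressed:
                # final consistency check of the justifications of all applied defaults
                if all(consistent(T | {j}) for k in applied for j in D[k][1]):
                    if T not in results:
                        results.append(T)

        expand(set(W), frozenset())
        return results

    W = {"Republican", "Quaker"}
    D = [("Republican", ["-Pacifist"], "-Pacifist"),
         ("Quaker", ["Pacifist"], "Pacifist")]
    print(extensions(W, D))     # two extensions, one containing "-Pacifist", one "Pacifist"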
The final check of consistency of the justifications of all defaults that have been applied implies that some theories do not have any extensions. In particular, this happens whenever this check fails for every possible sequence of applicable defaults. The following default theory has no extension:
formula_13
Since formula_14 is consistent with the background theory, the default can be applied, thus leading to the conclusion that formula_14 is false. This result however undermines the assumption that has been made for applying the first default. Consequently, this theory has no extensions.
In a normal default theory, all defaults are normal: each default has the form formula_15. A normal default theory is guaranteed to have at least one extension. Furthermore, any two distinct extensions of a normal default theory are mutually inconsistent, i.e., inconsistent with each other.
Entailment.
A default theory can have zero, one, or more extensions. Entailment of a formula from a default theory can be defined in two ways: skeptically, a formula is entailed by the default theory if it is entailed by every one of its extensions; credulously, a formula is entailed if it is entailed by at least one extension.
Thus, the Nixon diamond example theory has two extensions, one in which Nixon is a pacifist and one in which he is not a pacifist. Consequently, neither Pacifist(Nixon) nor ¬Pacifist(Nixon) are skeptically entailed, while both of them are credulously entailed. As this example shows, the credulous consequences of a default theory may be inconsistent with each other.
Alternative default inference rules.
The following alternative inference rules for default logic are all based on the same syntax as the original system.
The justified and constrained versions of the inference rule assign at least an extension to every default theory.
Variants of default logic.
The following variants of default logic differ from the original one on both syntax and semantics.
Translations.
Default theories can be translated into theories in other logics and vice versa. The following conditions on translations have been considered:
Translations are typically required to be faithful or at least consequence-preserving, while the conditions of modularity and same alphabet are sometimes ignored.
The translatability between propositional default logic and the following logics has been studied:
Translations exist or not depending on which conditions are imposed. Translations from propositional default logic to classical propositional logic cannot always generate a polynomially sized propositional theory, unless the polynomial hierarchy collapses. Translations to autoepistemic logic exist or not depending on whether modularity or the use of the same alphabet is required.
Complexity.
The computational complexity of the following problems about default logic is known:
Implementations.
Four systems implementing default logics are DeReS, XRay, GADeL, and . | [
{
"math_id": 0,
"text": "\\langle W, D \\rangle"
},
{
"math_id": 1,
"text": "\\frac{\\mathrm{Prerequisite : Justification}_1, \\dots , \\mathrm{Justification}_n}{\\mathrm{Conclusion}}"
},
{
"math_id": 2,
"text": "\\mathrm{Justification}_i"
},
{
"math_id": 3,
"text": "i = 1, \\dots, n"
},
{
"math_id": 4,
"text": "D = \\left\\{ \\frac{\\mathrm{Bird}(X) : \\mathrm{Flies}(X)}{\\mathrm{Flies}(X)} \\right\\}"
},
{
"math_id": 5,
"text": "W = \\{ \\mathrm{Bird}(\\mathrm{Condor}), \\mathrm{Bird}(\\mathrm{Penguin}), \\neg \\mathrm{Flies}(\\mathrm{Penguin}), \\mathrm{Flies}(\\mathrm{Bee}) \\}"
},
{
"math_id": 6,
"text": "\\frac{:{\\neg}F}{{\\neg}F}"
},
{
"math_id": 7,
"text": "\\neg F"
},
{
"math_id": 8,
"text": "\n\\left\\langle\n\\left\\{\n\\frac{\\mathrm{Republican}(X):\\neg \\mathrm{Pacifist}(X)}{\\neg \\mathrm{Pacifist}(X)},\n\\frac{\\mathrm{Quaker}(X):\\mathrm{Pacifist}(X)}{\\mathrm{Pacifist}(X)}\n\\right\\},\n\\left\\{\\mathrm{Republican}(\\mathrm{Nixon}), \\mathrm{Quaker}(\\mathrm{Nixon})\\right\\}\n\\right\\rangle\n"
},
{
"math_id": 9,
"text": "\\frac{\\alpha:\\beta_1,\\ldots,\\beta_n}{\\gamma}"
},
{
"math_id": 10,
"text": "T \\models \\alpha"
},
{
"math_id": 11,
"text": "T \\cup \\{\\beta_i\\}"
},
{
"math_id": 12,
"text": "T \\cup \\{\\gamma\\}"
},
{
"math_id": 13,
"text": "\n\\left\\langle \n\\left\\{\n\\frac{:A(b)}{\\neg A(b)}\n\\right\\},\n\\emptyset\n\\right\\rangle\n"
},
{
"math_id": 14,
"text": "A(b)"
},
{
"math_id": 15,
"text": "\\frac{\\phi : \\psi}{\\psi}"
},
{
"math_id": 16,
"text": "\\langle p: \\{r_1,\\ldots,r_n\\} \\rangle"
},
{
"math_id": 17,
"text": "r_1,\\ldots,r_n"
},
{
"math_id": 18,
"text": "\\Box x \\rightarrow x"
},
{
"math_id": 19,
"text": "\\Box x"
},
{
"math_id": 20,
"text": "\\Sigma^P_2"
},
{
"math_id": 21,
"text": "\\Pi^P_2"
},
{
"math_id": 22,
"text": "\\Delta^{P[\\log]}_2"
}
] | https://en.wikipedia.org/wiki?curid=889639 |
8898050 | Electron tomography | Electron tomography (ET) is a tomography technique for obtaining detailed 3D structures of sub-cellular, macro-molecular, or materials specimens. Electron tomography is an extension of traditional transmission electron microscopy and uses a transmission electron microscope to collect the data. In the process, a beam of electrons is passed through the sample at incremental degrees of rotation around the center of the target sample. This information is collected and used to assemble a three-dimensional image of the target. For biological applications, the typical resolution of ET systems are in the 5–20 nm range, suitable for examining supra-molecular multi-protein structures, although not the secondary and tertiary structure of an individual protein or polypeptide. Recently, atomic resolution in 3D electron tomography reconstructions has been demonstrated.
BF-TEM and ADF-STEM tomography.
In the field of biology, bright-field transmission electron microscopy (BF-TEM) and high-resolution TEM (HRTEM) are the primary imaging methods for tomography tilt series acquisition. However, there are two issues associated with BF-TEM and HRTEM. First, acquiring an interpretable 3-D tomogram requires that the projected image intensities vary monotonically with material thickness. This condition is difficult to guarantee in BF/HRTEM, where image intensities are dominated by phase-contrast with the potential for multiple contrast reversals with thickness, making it difficult to distinguish voids from high-density inclusions. Second, the contrast transfer function of BF-TEM is essentially a high-pass filter – information at low spatial frequencies is significantly suppressed – resulting in an exaggeration of sharp features. However, the technique of annular dark-field scanning transmission electron microscopy (ADF-STEM), which is typically used on material specimens, more effectively suppresses phase and diffraction contrast, providing image intensities that vary with the projected mass-thickness of samples up to micrometres thick for materials with low atomic number. ADF-STEM also acts as a low-pass filter, eliminating the edge-enhancing artifacts common in BF/HRTEM. Thus, provided that the features can be resolved, ADF-STEM tomography can yield a reliable reconstruction of the underlying specimen which is extremely important for its application in materials science. For 3D imaging, the resolution is traditionally described by the Crowther criterion. In 2010, a 3D resolution of 0.5±0.1×0.5±0.1×0.7±0.2 nm was achieved with a single-axis ADF-STEM tomography.
Atomic Electron Tomography (AET).
Atomic level resolution in 3D electron tomography reconstructions has been demonstrated. Reconstructions of crystal defects such as stacking faults, grain boundaries, dislocations, and twinning in structures have been achieved. This method is relevant to the physical sciences, where cryo-EM techniques cannot always be used to locate the coordinates of individual atoms in disordered materials. AET reconstructions are achieved using the combination of an ADF-STEM tomographic tilt series and iterative algorithms for reconstruction. Currently, algorithms such as the real-space algebraic reconstruction technique (ART) and the fast Fourier transform equal slope tomography (EST) are used to address issues such as image noise, sample drift, and limited data. ADF-STEM tomography has recently been used to directly visualize the atomic structure of screw dislocations in nanoparticles.
AET has also been used to find the 3D coordinates of 3,769 atoms in a tungsten needle with 19 pm precision and 20,000 atoms in a multiply twinned palladium nanoparticle. The combination of AET with electron energy loss spectroscopy (EELS) allows for investigation of electronic states in addition to 3D reconstruction. Challenges to atomic level resolution from electron tomography include the need for better reconstruction algorithms and increased precision of tilt angle required to image defects in non-crystalline samples.
Different tilting methods.
The most popular tilting methods are the single-axis and the dual-axis tilting methods. The geometry of most specimen holders and electron microscopes normally precludes tilting the specimen through a full 180° range, which can lead to artifacts in the 3D reconstruction of the target. Standard single-tilt sample holders have a limited rotation of ±80°, leading to a missing wedge in the reconstruction. A solution is to use needle shaped-samples to allow for full rotation. By using dual-axis tilting, the reconstruction artifacts are reduced by a factor of formula_0 compared to single-axis tilting. However, twice as many images need to be taken. Another method of obtaining a tilt-series is the so-called conical tomography method, in which the sample is tilted, and then rotated a complete turn.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{2}"
}
] | https://en.wikipedia.org/wiki?curid=8898050 |
8898329 | Frequency scaling | In computer architecture, frequency scaling (also known as frequency ramping) is the technique of increasing a processor's frequency so as to enhance the performance of the system containing the processor in question. Frequency ramping was the dominant force in commodity processor performance increases from the mid-1980s until roughly the end of 2004.
The effect of processor frequency on computer speed can be seen by looking at the equation for computer program runtime:
formula_0
where instructions per program is the total instructions being executed in a given program, cycles per instruction is a program-dependent, architecture-dependent average value, and time per cycle is by definition the inverse of processor frequency. An increase in frequency thus decreases runtime.
However, power consumption in a chip is given by the equation
formula_1
where "P" is power consumption, "C" is the capacitance being switched per clock cycle, "V" is voltage, and "F" is the processor frequency (cycles per second). Increases in frequency thus increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm.
Moore's Law was still in effect when frequency scaling ended. Despite power issues, transistor densities were still doubling every 18 to 24 months. With the end of frequency scaling, new transistors (which are no longer needed to facilitate frequency scaling) are used to add extra hardware, such as additional cores, to facilitate parallel computing - a technique that is being referred to as parallel scaling.
The end of frequency scaling as the dominant cause of processor performance gains has caused an industry-wide shift to parallel computing in the form of multicore processors.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathrm{Run time} = \\frac{\\mathrm{Instructions}}{\\mathrm{Program}} \\times \\frac{\\mathrm{Cycles}}{\\mathrm{Instruction}} \\times \\frac {\\mathrm{Time}}{\\mathrm{Cycle}},"
},
{
"math_id": 1,
"text": "P = C \\times V^2 \\times F,"
}
] | https://en.wikipedia.org/wiki?curid=8898329 |
8898866 | Demand for money | Concept in economics
In monetary economics, the demand for money is the desired holding of financial assets in the form of money: that is, cash or bank deposits rather than investments. It can refer to the demand for money narrowly defined as M1 (directly spendable holdings), or for money in the broader sense of M2 or M3.
Money in the sense of M1 is dominated as a store of value (even a temporary one) by interest-bearing assets. However, M1 is necessary to carry out transactions; in other words, it provides liquidity. This creates a trade-off between the liquidity advantage of holding money for near-future expenditure and the interest advantage of temporarily holding other assets. The demand for M1 is a result of this trade-off regarding the form in which a person's funds to be spent should be held. In macroeconomics motivations for holding one's wealth in the form of M1 can roughly be divided into the transaction motive and the precautionary motive. The demand for those parts of the broader money concept M2 that bear a non-trivial interest rate is based on the asset demand. These can be further subdivided into more microeconomically founded motivations for holding money.
Generally, the nominal demand for money increases with the level of nominal output (price level times real output) and decreases with the nominal interest rate. The real demand for money is defined as the nominal amount of money demanded divided by the price level. For a given money supply the locus of income-interest rate pairs at which money demand equals money supply is known as the LM curve.
The magnitude of the volatility of money demand has crucial implications for the optimal way in which a central bank should carry out monetary policy and its choice of a nominal anchor.
Conditions under which the LM curve is flat, so that increases in the money supply have no stimulatory effect (a liquidity trap), play an important role in Keynesian theory. This situation occurs when the demand for money is infinitely elastic with respect to the interest rate.
A typical "money-demand function" may be written as
formula_0
where formula_1 is the nominal amount of money demanded, "P" is the price level, "R" is the nominal interest rate, "Y" is real income, and "L"(.) is real money demand. An alternate name for formula_2 is the "liquidity preference function".
Motives for holding money.
Transaction motive.
The transactions motive for the demand for M1 (directly spendable money balances) results from the need for liquidity for day-to-day transactions in the near future.
This need arises when income is received only occasionally (say once per month) in discrete amounts but expenditures occur continuously.
Quantity theory.
The most basic "classical" transaction motive can be illustrated with reference to the Quantity Theory of Money. According to the equation of exchange "MV" = "PY", where "M" is the stock of money, "V" is its velocity (how many times a unit of money turns over during a period of time), "P" is the price level and "Y" is real income. Consequently, "PY" is nominal income or in other words the number of transactions carried out in an economy during a period of time. Rearranging the above identity and giving it a behavioral interpretation as a demand for money we have
formula_3
or in terms of demand for real balances
formula_4
Hence in this simple formulation demand for money is a function of prices and income, as long as its velocity is constant.
Inventory models.
The amount of money demanded for transactions however is also likely to depend on the nominal interest rate. This arises due to the lack of synchronization in time between when purchases are desired and when factor payments (such as wages) are made. In other words, while workers may get paid only once a month they generally will wish to make purchases, and hence need money, over the course of the entire month.
The most well-known example of an economic model that is based on such considerations is the Baumol-Tobin model. In this model an individual receives her income periodically, for example, only once per month, but wishes to make purchases continuously. The person could carry her entire income with her at all times and use it to make purchases. However, in this case she would be giving up the (nominal) interest rate that she can get by holding her income in the bank. The optimal strategy involves holding a portion of one's income in the bank and portion as liquid money. The money portion is continuously run down as the individual makes purchases and then she makes periodic (costly) trips to the bank to replenish the holdings of money. Under some simplifying assumptions the demand for money resulting from the Baumol-Tobin model is given by
formula_5
where t is the cost of a trip to the bank, R is the nominal interest rate and P and Y are as before.
The key difference between this formulation and the one based on a simple version of the Quantity Theory is that now the demand for real balances depends both on income (positively), which proxies for the desired level of transactions, and on the nominal interest rate (negatively).
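As a minimal numerical sketch of the square-root rule above (not part of the original presentation; the trip cost, income and interest rate are purely illustrative assumptions):
import math
def baumol_tobin_real_balances(t, Y, R):
    # Average real money holdings M^d / P = sqrt(t * Y / (2 * R)).
    return math.sqrt(t * Y / (2 * R))
# Illustrative values: trip cost t = 2, real income Y = 2000, nominal rate R = 5%.
print(baumol_tobin_real_balances(t=2.0, Y=2000.0, R=0.05))   # 200.0
Doubling income raises the demand for real balances only by a factor of the square root of two, while doubling the interest rate lowers it by the same factor, which is the sense in which the model implies economies of scale in money holding.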
Microfoundations for money demand.
While the Baumol–Tobin model provides a microeconomic explanation for the form of the money demand function, it is generally too stylized to be included in modern macroeconomic models, particularly dynamic stochastic general equilibrium models. As a result, most models of this type resort to simpler indirect methods which capture the spirit of the transactions motive. The two most commonly used methods are the cash-in-advance model (sometimes called the Clower constraint model) and the money-in-the-utility-function (MIU) model (also known as the Sidrauski model).
In the cash-in-advance model agents are restricted to carrying out a volume of transactions equal to or less than their money holdings. In the MIU model, money directly enters agents' utility functions, capturing the 'liquidity services' provided by money.
Precautionary demand.
The precautionary demand for M1 is the holding of transaction funds for use if unexpected needs for immediate expenditure arise.
Asset motive.
The asset motive for the demand for broader monetary measures, M2 and M3, states that people demand money as a way to hold wealth. While it is still assumed that money in the sense of M1 is held in order to carry out transactions, this approach focuses on the potential return on various assets (including money broadly defined) as an additional motivation.
Speculative motive.
John Maynard Keynes, in laying out speculative reasons for holding money, stressed the choice between money and bonds. If agents expect the future nominal interest rate (the return on bonds) to be lower than the current rate they will then reduce their holdings of money and increase their holdings of bonds. If the future interest rate falls, then the price of bonds will increase and the agents will have realized a capital gain on the bonds they purchased. This means that the demand for money in any period will depend on both the current nominal interest rate and the expected future interest rate (in addition to the standard transaction motives which depend on income).
The fact that the current demand for money can depend on expectations of the future interest rates has implications for volatility of money demand. If these expectations are formed, as in Keynes' view, by "animal spirits" they are likely to change erratically and cause money demand to be quite unstable.
Portfolio motive.
The portfolio motive also focuses on demand for money over and above that required for carrying out transactions. The basic framework is due to James Tobin, who considered a situation where agents can hold their wealth in a form of a low risk/low return asset (here, money) or high risk/high return asset (bonds or equity). Agents will choose a mix of these two types of assets (their portfolio) based on the risk-expected return trade-off. For a given expected rate of return, more risk averse individuals will choose a greater share for money in their portfolio. Similarly, given a person's degree of risk aversion, a higher expected return (nominal interest rate plus expected capital gains on bonds) will cause agents to shift away from safe money and into risky assets. Like in the other motivations above, this creates a negative relationship between the nominal interest rate and the demand for money. However, what matters additionally in the Tobin model is the subjective rate of risk aversion, as well as the objective degree of risk of other assets, as, say, measured by the standard deviation of capital gains and losses resulting from holding bonds and/or equity.
Empirical estimations of money demand functions.
Is money demand stable?
Friedman and Schwartz in their 1963 work "A Monetary History of the United States" argued that the demand for real balances was a function of income and the interest rate. For the time period they were studying this appeared to be true. However, shortly after the publication of the book, due to changes in financial markets and financial regulation money demand became more unstable. Various researchers showed that money demand became much more unstable after 1975. Ericsson, Hendry and Prestwich (1998) consider a model of money demand based on the various motives outlined above and test it with empirical data. The basic model turns out to work well for the period 1878 to 1975 and there doesn't appear to be much volatility in money demand, in a result analogous to that of Friedman and Schwartz. This is true even though the two world wars during this time period could have led to changes in the velocity of money. However, when the same basic model is used on data spanning 1976 to 1993, it performs poorly. In particular, money demand appears not to be sensitive to interest rates and there appears to be much more exogenous volatility. The authors attribute the difference to technological innovations in the financial markets, financial deregulation, and the related issue of the changing menu of assets considered in the definition of money. Other researchers confirmed this finding with recent data and over a longer period. Money demand appears to be time-varying and also to depend on households' real balance effects.
Laurence M. Ball suggests that the use of adapted aggregates, such as near monies, can produce a more stable demand function. He shows that using the return on near monies produced smaller deviations than previous models.
Importance of money demand volatility for monetary policy.
If the demand for money is stable then a monetary policy which consists of a monetary rule which targets the growth rate of some monetary aggregate (such as M1 or M2) can help to stabilize the economy or at least remove monetary policy as a source of macroeconomic volatility. Additionally, if the demand for money does not change unpredictably then money supply targeting is a reliable way of attaining a constant inflation rate. This can be most easily seen with the quantity theory of money equation given above. When that equation is converted into growth rates we have:
formula_6
which says that the growth rate of money supply plus the growth rate of its velocity equals the inflation rate plus the growth rate of real output. If money demand is stable then velocity is constant and formula_7. Additionally, in the long run real output grows at a constant rate equal to the sum of the rates of growth of population, technological know-how, and technology in place, and as such is exogenous. In this case the above equation can be solved for the inflation rate:
formula_8
Here, given the long-run output growth rate, the only determinant of the inflation rate is the growth rate of the money supply. In this case inflation in the long run is a purely monetary phenomenon; a monetary policy which targets the money supply can stabilize the economy and ensure a non-variable inflation rate.
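A worked example of this growth-rate arithmetic, with purely illustrative numbers:
g_m = 5.0    # growth rate of the money supply, percent per year (assumed)
g_v = 0.0    # growth rate of velocity (zero when money demand is stable)
g_y = 2.0    # long-run growth rate of real output, percent per year (assumed)
inflation = g_m + g_v - g_y
print(inflation)   # 3.0 percent per year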
This analysis however breaks down if the demand for money is not stable – for example, if velocity in the above equation is not constant. In that case, shocks to money demand under money supply targeting will translate into changes in real and nominal interest rates and result in economic fluctuations. An alternative policy of targeting interest rates rather than the money supply can improve upon this outcome as the money supply is adjusted to shocks in money demand, keeping interest rates (and hence, economic activity) relatively constant.
The above discussion implies that the volatility of money demand matters for how monetary policy should be conducted. If most of the aggregate demand shocks which affect the economy come from the expenditure side, the IS curve, then a policy of targeting the money supply will be stabilizing, relative to a policy of targeting interest rates. However, if most of the aggregate demand shocks come from changes in money demand, which influences the LM curve, then a policy of targeting the money supply will be destabilizing.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M^d=P\\times L(R,Y) \\,"
},
{
"math_id": 1,
"text": "M^d"
},
{
"math_id": 2,
"text": "L(R,Y)"
},
{
"math_id": 3,
"text": "M^d=P \\frac {Y} {V} \\,"
},
{
"math_id": 4,
"text": "\\frac {M^d} {P}=\\frac {Y} {V} \\,"
},
{
"math_id": 5,
"text": "\\frac {M^d} {P}=\\sqrt {\\frac {tY} {2R}} \\,"
},
{
"math_id": 6,
"text": "g_m+g_v=\\pi+g_y \\,"
},
{
"math_id": 7,
"text": "g_v=0"
},
{
"math_id": 8,
"text": "\\pi=-g_y+g_m \\,"
}
] | https://en.wikipedia.org/wiki?curid=8898866 |
889936 | Circulant matrix | Linear algebra matrix
In linear algebra, a circulant matrix is a square matrix in which all rows are composed of the same elements and each row is rotated one element to the right relative to the preceding row. It is a particular kind of Toeplitz matrix.
In numerical analysis, circulant matrices are important because they are diagonalized by a discrete Fourier transform, and hence linear equations that contain them may be quickly solved using a fast Fourier transform. They can be interpreted analytically as the integral kernel of a convolution operator on the cyclic group formula_0 and hence frequently appear in formal descriptions of spatially invariant linear operations. This property is also critical in modern software defined radios, which utilize Orthogonal Frequency Division Multiplexing to spread the symbols (bits) using a cyclic prefix. This enables the channel to be represented by a circulant matrix, simplifying channel equalization in the frequency domain.
In cryptography, a circulant matrix is used in the MixColumns step of the Advanced Encryption Standard.
Definition.
An formula_1 circulant matrix formula_2 takes the form
formula_3
or the transpose of this form (by choice of notation). If each formula_4 is a formula_5 square matrix, then the formula_6 matrix formula_2 is called a block-circulant matrix.
A circulant matrix is fully specified by one vector, formula_7, which appears as the first column (or row) of formula_2. The remaining columns (and rows, resp.) of formula_2 are each cyclic permutations of the vector formula_7 with offset equal to the column (or row, resp.) index, if lines are indexed from formula_8 to formula_9. (Cyclic permutation of rows has the same effect as cyclic permutation of columns.) The last row of formula_2 is the vector formula_7 shifted by one in reverse.
Different sources define the circulant matrix in different ways, for example as above, or with the vector formula_7 corresponding to the first row rather than the first column of the matrix; and possibly with a different direction of shift (which is sometimes called an anti-circulant matrix).
The polynomial formula_10 is called the "associated polynomial" of the matrix formula_2.
Properties.
Eigenvectors and eigenvalues.
The normalized eigenvectors of a circulant matrix are the Fourier modes, namely,
formula_11
where formula_12 is a primitive formula_13-th root of unity and formula_14 is the imaginary unit.
The corresponding eigenvalues are given by
formula_15
Determinant.
As a consequence of the explicit formula for the eigenvalues above,
the determinant of a circulant matrix can be computed as:
formula_16
Since taking the transpose does not change the eigenvalues of a matrix, an equivalent formulation is
formula_17
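These formulas can be checked numerically; the following sketch uses NumPy with an arbitrary first column. With the column convention of the display above, the eigenvalue paired with the j-th Fourier mode comes out as the associated polynomial evaluated at the inverse root of unity, and as a set these values coincide with the factors in the determinant product.
import numpy as np
c = np.array([4.0, 1.0, 2.0, 3.0])          # first column (c_0, c_1, c_2, c_3)
n = len(c)
# C[i, j] = c[(i - j) mod n]: each column is the previous one shifted down by one.
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
omega = np.exp(2j * np.pi / n)               # primitive n-th root of unity
k = np.arange(n)
for j in range(n):
    v = omega ** (j * k)                     # Fourier mode (unnormalized)
    lam = np.sum(c * omega ** (-j * k))      # associated polynomial at omega^(-j)
    assert np.allclose(C @ v, lam * v)       # the Fourier mode is an eigenvector
# The determinant equals the product of the associated polynomial over all roots of unity.
det_from_roots = np.prod([np.polyval(c[::-1], omega ** j) for j in range(n)])
print(np.allclose(np.linalg.det(C), det_from_roots.real))   # True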
Rank.
The rank of a circulant matrix formula_2 is equal to formula_18 where formula_19 is the degree of the polynomial formula_20.
Analytic interpretation.
Circulant matrices can be interpreted geometrically, which explains the connection with the discrete Fourier transform.
Consider vectors in formula_42 as functions on the integers with period formula_13 (i.e., as periodic bi-infinite sequences formula_43), or equivalently, as functions on the cyclic group of order formula_13 (denoted formula_0 or formula_44), that is, geometrically, on (the vertices of) the regular formula_13-gon: this is a discrete analog of periodic functions on the real line or circle.
Then, from the perspective of operator theory, a circulant matrix is the kernel of a discrete integral transform, namely the convolution operator for the function formula_45; this is a discrete circular convolution. The formula for the convolution of the functions formula_46 is
formula_47
which is the product of the vector formula_48 by the circulant matrix for formula_49.
The discrete Fourier transform then converts convolution into multiplication, which in the matrix setting corresponds to diagonalization.
The formula_50-algebra of all circulant matrices with complex entries is isomorphic to the group formula_50-algebra of formula_51
Symmetric circulant matrices.
For a symmetric circulant matrix formula_2 one has the extra condition that formula_52.
Thus it is determined by formula_53 elements.
formula_54
The eigenvalues of any real symmetric matrix are real.
The corresponding eigenvalues formula_55 become:
formula_56
for formula_13 even, and
formula_57
for formula_13 odd, where formula_58 denotes the real part of formula_59.
This can be further simplified by using the fact that formula_60 and formula_61, depending on whether formula_62 is even or odd.
Symmetric circulant matrices belong to the class of bisymmetric matrices.
Hermitian circulant matrices.
The complex version of the circulant matrix, ubiquitous in communications theory, is usually Hermitian. In this case formula_63 and its determinant and all eigenvalues are real.
If "n" is even the first two rows necessarily takes the form
formula_64
in which the element formula_65, the first entry of the second half of the top row, is real.
If "n" is odd we get
formula_66
Tee has discussed constraints on the eigenvalues for the Hermitian condition.
Applications.
In linear equations.
Given a matrix equation
formula_67
where formula_2 is a circulant matrix of size formula_13, we can write the equation as the circular convolution
formula_68
where formula_69 is the first column of formula_2, and the vectors formula_69, formula_70 and formula_71 are cyclically extended in each direction. Using the circular convolution theorem, we can use the discrete Fourier transform to transform the cyclic convolution into component-wise multiplication
formula_72
so that
formula_73
This algorithm is much faster than the standard Gaussian elimination, especially if a fast Fourier transform is used.
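A minimal sketch of this FFT-based solver, using NumPy; the vectors c and b below are arbitrary illustrative values, and the method assumes that no discrete Fourier coefficient of c is zero (i.e. that C is invertible).
import numpy as np
def solve_circulant(c, b):
    # Solve C x = b, where C is the circulant matrix whose first column is c,
    # by dividing the DFTs component-wise and transforming back.
    return np.real_if_close(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))
c = np.array([4.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 3.0, 4.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])   # explicit circulant
x = solve_circulant(c, b)
print(np.allclose(C @ x, b))   # True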
In graph theory.
In graph theory, a graph or digraph whose adjacency matrix is circulant is called a circulant graph/digraph. Equivalently, a graph is circulant if its automorphism group contains a full-length cycle. The Möbius ladders are examples of circulant graphs, as are the Paley graphs for fields of prime order.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C_n"
},
{
"math_id": 1,
"text": "n \\times n"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "C = \\begin{bmatrix}\nc_0 & c_{n-1} & \\cdots & c_2 & c_1 \\\\\nc_1 & c_0 & c_{n-1} & & c_2 \\\\\n\\vdots & c_1 & c_0 & \\ddots & \\vdots \\\\\nc_{n-2} & & \\ddots & \\ddots & c_{n-1} \\\\\nc_{n-1} & c_{n-2} & \\cdots & c_1 & c_0 \\\\\n\\end{bmatrix}"
},
{
"math_id": 4,
"text": "c_i"
},
{
"math_id": 5,
"text": "p \\times p"
},
{
"math_id": 6,
"text": "np \\times np"
},
{
"math_id": 7,
"text": "c"
},
{
"math_id": 8,
"text": "0"
},
{
"math_id": 9,
"text": "n-1"
},
{
"math_id": 10,
"text": "f(x) = c_0 + c_1 x + \\dots + c_{n-1} x^{n-1}"
},
{
"math_id": 11,
"text": "v_j=\\frac{1}{\\sqrt{n}} \\left(1, \\omega^j, \\omega^{2j}, \\ldots, \\omega^{(n-1)j}\\right)^{T},\\quad j = 0, 1, \\ldots, n-1,"
},
{
"math_id": 12,
"text": "\\omega=\\exp \\left(\\tfrac{2\\pi i}{n}\\right)"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "i"
},
{
"math_id": 15,
"text": "\\lambda_j = c_0+c_{1} \\omega^j + c_{2} \\omega^{2j} + \\dots + c_{n-1} \\omega^{(n-1)j},\\quad j = 0, 1, \\dots, n-1."
},
{
"math_id": 16,
"text": "\\det C = \\prod_{j=0}^{n-1} (c_0 + c_{n-1} \\omega^j + c_{n-2} \\omega^{2j} + \\dots + c_1\\omega^{(n-1)j})."
},
{
"math_id": 17,
"text": "\\det C\n= \\prod_{j=0}^{n-1} (c_0 + c_1 \\omega^j + c_2 \\omega^{2j} + \\dots + c_{n-1}\\omega^{(n-1)j})\n= \\prod_{j=0}^{n-1} f(\\omega^j)."
},
{
"math_id": 18,
"text": "n - d"
},
{
"math_id": 19,
"text": "d"
},
{
"math_id": 20,
"text": "\\gcd( f(x), x^n - 1)"
},
{
"math_id": 21,
"text": "P"
},
{
"math_id": 22,
"text": " C = c_0 I + c_1 P + c_2 P^2 + \\dots + c_{n-1} P^{n-1} = f(P),"
},
{
"math_id": 23,
"text": "P = \\begin{bmatrix}\n 0&0&\\cdots&0&1\\\\\n 1&0&\\cdots&0&0\\\\\n 0&\\ddots&\\ddots&\\vdots&\\vdots\\\\\n \\vdots&\\ddots&\\ddots&0&0\\\\\n 0&\\cdots&0&1&0\n\\end{bmatrix}."
},
{
"math_id": 24,
"text": "A"
},
{
"math_id": 25,
"text": "B"
},
{
"math_id": 26,
"text": "A + B"
},
{
"math_id": 27,
"text": "AB"
},
{
"math_id": 28,
"text": "AB = BA"
},
{
"math_id": 29,
"text": "A^{-1}"
},
{
"math_id": 30,
"text": "A^+"
},
{
"math_id": 31,
"text": "F_n"
},
{
"math_id": 32,
"text": "F_n \\text{ is unitary}, \\text{ where } F_n = [v_0,v_1,\\dots,v_{n-1}]= \\frac{1}{\\sqrt{n}}(f_{kj}) \\text{ with } f_{kj} = e^{2\\pi i/n \\cdot kj}, \\,\\text{for } 0 \\leq k,j \\leq n-1."
},
{
"math_id": 33,
"text": "U_n"
},
{
"math_id": 34,
"text": "C = F_n\\operatorname{diag}(\\sqrt n \\cdot F_n^{\\dagger} c) F_n^{\\dagger} ,"
},
{
"math_id": 35,
"text": "F_n^{\\dagger} c"
},
{
"math_id": 36,
"text": "D"
},
{
"math_id": 37,
"text": "F_nDF_n^{\\dagger}"
},
{
"math_id": 38,
"text": "p(x)"
},
{
"math_id": 39,
"text": "\\frac{1}{n}p'(x)"
},
{
"math_id": 40,
"text": "(n-1)\\times(n-1)"
},
{
"math_id": 41,
"text": "C_{n-1} = \\begin{bmatrix}\n c_0 & c_{n-1} & \\cdots & c_3 & c_2 \\\\\n c_1 & c_0 & c_{n-1} & & c_3 \\\\\n \\vdots & c_1 & c_0 & \\ddots & \\vdots \\\\\n c_{n-3} & & \\ddots & \\ddots & c_{n-1} \\\\\n c_{n-2} & c_{n-3} & \\cdots & c_{1} & c_0 \\\\\n\\end{bmatrix}"
},
{
"math_id": 42,
"text": "\\R^n"
},
{
"math_id": 43,
"text": "\\dots,a_0,a_1,\\dots,a_{n-1},a_0,a_1,\\dots"
},
{
"math_id": 44,
"text": "\\Z/n\\Z"
},
{
"math_id": 45,
"text": "(c_0,c_1,\\dots,c_{n-1})"
},
{
"math_id": 46,
"text": "(b_i) := (c_i) * (a_i)"
},
{
"math_id": 47,
"text": "b_k = \\sum_{i=0}^{n-1} a_i c_{k-i}"
},
{
"math_id": 48,
"text": "(a_i)"
},
{
"math_id": 49,
"text": "(c_i)"
},
{
"math_id": 50,
"text": "C^*"
},
{
"math_id": 51,
"text": "\\Z/n\\Z."
},
{
"math_id": 52,
"text": "c_{n-i}=c_i"
},
{
"math_id": 53,
"text": "\\lfloor n/2\\rfloor + 1"
},
{
"math_id": 54,
"text": "C = \\begin{bmatrix}\nc_0 & c_1 & \\cdots & c_2 & c_1 \\\\\nc_1 & c_0 & c_1 & & c_2 \\\\\n\\vdots & c_1 & c_0 & \\ddots & \\vdots \\\\\nc_2 & & \\ddots & \\ddots & c_1 \\\\\nc_1 & c_2 & \\cdots & c_1 & c_0 \\\\\n\\end{bmatrix}."
},
{
"math_id": 55,
"text": " \\vec{\\lambda}= \\sqrt n \\cdot F_n^{\\dagger} c"
},
{
"math_id": 56,
"text": "\\begin{array}{lcl} \\lambda_k & = & c_0 + c_{n/2} e^{-\\pi i \\cdot k} + 2\\sum_{j=1}^{\\frac{n}{2}-1} c_j \\cos{(-\\frac{2\\pi}{n}\\cdot k j )} \\\\\n& = & c_0+ c_{n/2} \\omega_k^{n/2} + 2 c_1 \\Re \\omega_k + 2 c_2 \\Re \\omega_k^2 + \\dots + 2c_{n/2-1} \\Re \\omega_k^{n/2-1} \\end{array}"
},
{
"math_id": 57,
"text": "\\begin{array}{lcl} \\lambda_k & = & c_0 + 2\\sum_{j=1}^{\\frac{n-1}{2}} c_j \\cos{(-\\frac{2\\pi}{n}\\cdot k j )} \\\\ \n & = & c_0 + 2 c_1 \\Re \\omega_k + 2 c_2 \\Re \\omega_k^2 + \\dots + 2c_{(n-1)/2} \\Re \\omega_k^{(n-1)/2} \\end{array}\n"
},
{
"math_id": 58,
"text": "\\Re z"
},
{
"math_id": 59,
"text": "z"
},
{
"math_id": 60,
"text": "\\Re \\omega_k^j = \\Re e^{-\\frac{2\\pi i}{n} \\cdot kj} = \\cos(-\\frac{2\\pi}{n} \\cdot kj)"
},
{
"math_id": 61,
"text": "\\omega_k^{n/2}=e^{-\\frac{2\\pi i}{n} \\cdot k \\frac{n}{2}} =e^{-\\pi i \\cdot k}"
},
{
"math_id": 62,
"text": "k"
},
{
"math_id": 63,
"text": "c_{n-i} = c_i^*, \\; i \\le n/2 "
},
{
"math_id": 64,
"text": "\n\\begin{bmatrix}\nr_0 & z_1 & z_2 & r_3 & z_2^* & z_1^* \\\\\nz_1^* & r_0 & z_1 & z_2 & r_3 & z_2^* \\\\\n\\dots \\\\\n\\end{bmatrix}.\n"
},
{
"math_id": 65,
"text": " r_3 "
},
{
"math_id": 66,
"text": "\n\\begin{bmatrix}\nr_0 & z_1 & z_2 & z_2^* & z_1^* \\\\\nz_1^* & r_0 & z_1 & z_2 & z_2^* \\\\\n\\dots\\\\\n\\end{bmatrix}.\n"
},
{
"math_id": 67,
"text": "C \\mathbf{x} = \\mathbf{b},"
},
{
"math_id": 68,
"text": "\\mathbf{c} \\star \\mathbf{x} = \\mathbf{b},"
},
{
"math_id": 69,
"text": "\\mathbf c"
},
{
"math_id": 70,
"text": "\\mathbf x"
},
{
"math_id": 71,
"text": "\\mathbf b"
},
{
"math_id": 72,
"text": "\\mathcal{F}_{n}(\\mathbf{c} \\star \\mathbf{x}) = \\mathcal{F}_{n}(\\mathbf{c}) \\mathcal{F}_{n}(\\mathbf{x}) = \\mathcal{F}_{n}(\\mathbf{b})"
},
{
"math_id": 73,
"text": "\\mathbf{x} = \\mathcal{F}_n^{-1} \\left[ \\left(\n\\frac{(\\mathcal{F}_n(\\mathbf{b}))_{\\nu}}\n{(\\mathcal{F}_n(\\mathbf{c}))_{\\nu}} \n\\right)_{\\!\\nu\\in\\Z}\\, \\right]^{\\rm T}."
}
] | https://en.wikipedia.org/wiki?curid=889936 |
8900795 | String harmonic | String instrument technique
Playing a string harmonic (a flageolet) is a string instrument technique that uses the nodes of natural harmonics of a musical string to isolate overtones. Playing string harmonics produces high pitched tones, often compared in timbre to a whistle or flute. Overtones can be isolated "by lightly touching the string with the finger instead of pressing it down" against the fingerboard (without stopping). For some instruments this is a fundamental technique, such as the Chinese guqin, where it is known as "fan yin" (泛音, lit. "floating sound"), and the Vietnamese đàn bầu.
Overtones.
When a string is plucked or bowed normally, the ear hears the fundamental frequency most prominently, but the overall sound is also colored by the presence of various overtones (frequencies greater than the fundamental frequency). The fundamental frequency and its overtones are perceived by the listener as a single note; however, different combinations of overtones give rise to noticeably different overall tones (see timbre). A harmonic overtone has evenly spaced nodes along the string, where the string does not move from its resting position.
Nodes.
The nodes of natural harmonics are located at the following points along the string:
Above, the length fraction is the point, with respect to the length of the whole string, at which the string is lightly touched. It is expressed as a fraction "n"/"m", where "m" is the mode (2 through 16 are given above), and "n" the node number. The node number for a given mode can be any integer from 1 to "m" − 1. However, certain nodes of higher harmonics are coincident with nodes of lower harmonics, and the lower sounds overpower the higher ones. For example, mode number 4 can be fingered at nodes 1 and 3; it will occur at node 2 but will not be heard over the stronger first harmonic. Ineffective nodes to finger are not listed above.
The fret number, which shows the position of the node in terms of half tones (or frets on a fretted instrument) then is given by:
formula_0
With "s" equal to the twelfth root of two, notated "s" because it's the first letter of the word "semitone".
Artificial harmonics.
When a string is only lightly pressed by one finger (that is, isolating overtones of the open string), the resulting harmonics are called natural harmonics. However, when a string is held down on the neck in addition to being lightly pressed on a node, the resulting harmonics are called artificial harmonics. In this case, as the total length of the string is shortened, the fundamental frequency is raised, and the positions of the nodes shift accordingly (that is, by the same number of frets), thereby raising the frequency of the overtone by the same interval as the fundamental frequency.
<templatestyles src="Template:Blockquote/styles.css" />Artificial harmonics are produced by stopping the string with the first or second finger, and thus making an artificial 'nut,' and then slightly pressing the node with the fourth finger. By this means harmonics in perfect intonation can be produced in all scales.
Artificial harmonics are more difficult to play than natural harmonics, but they are not limited to the overtone series of the open strings, meaning they have much greater flexibility to play chromatic passages. Unlike natural harmonics, they can be played with vibrato.
This technique, like natural harmonics, works by canceling out the fundamental tone and one or more partial tones by deadening their modes of vibration. It is traditionally notated using two or three simultaneous noteheads in one staff: a normal notehead for the position of the firmly held finger, a square notehead for the position of the lightly pressed finger, and sometimes, a small notehead for the resulting pitch.
The most commonly used artificial harmonic, due to its relatively easy and natural fingering, is that in which, "the fourth finger lightly touches the nodal point a perfect fourth above the first finger. (Resulting harmonic sound: two octaves above the first finger or new fundamental.)," followed by the artificial harmonic produced when, "the fourth finger lightly touches the nodal point a perfect fifth above the first finger (Resulting harmonic sound: a twelfth above the first finger or new fundamental.)," and, "the third finger lightly touches the nodal point a major third above the first finger. (Resulting harmonic sound: two octaves and a major third above the first finger or new fundamental.)"
In some cases, especially in the electric guitar technique, it is common to refer to Pinch Harmonics as Artificial Harmonics (AH) and to refer to harmonics produced by other means as Natural Harmonics.
Guitar.
There are a few harmonic techniques unique to guitar.
Pinch harmonics.
A pinch harmonic (also known as squelch picking, pick harmonic or squealy) is a guitar technique to achieve artificial harmonics in which the player's thumb or index finger on the picking hand slightly catches the string after it is picked, canceling (silencing) the fundamental frequency of the string, and letting one of the overtones dominate. This results in a high-pitched sound which is particularly discernible on an electrically amplified guitar as a "squeal".
Tapped harmonics.
Tapped harmonics were popularized by Eddie Van Halen. This technique is an extension of the tapping technique. The note is fretted as usual, but instead of striking the string the excitation energy required to sound the note is achieved by tapping at a harmonic nodal point. The tapping finger bounces lightly on and off the fret. The open string technique can be extended to artificial harmonics. For instance, for an octave harmonic (12-fret nodal point) press at the third fret, and tap the fifteenth fret, as 12 + 3 = 15.
String harmonics driven by a magnetic field.
This technique is used by effect devices producing a magnetic field that can agitate fundamentals and harmonics of steel strings. There are harmonic mode switches as provided by newer versions of the EBow and by guitars with built-in sustainers like the Fernandes Sustainer and the Moog Guitar. Harmonic control by harmonic mode switching and by the playing technique is applied by the Guitar Resonator, where harmonics can be alternated by changing the string driver position at the fretboard while playing.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F = \\log_s \\frac{m}{m-n}"
}
] | https://en.wikipedia.org/wiki?curid=8900795 |
8904 | Double-ended queue | Abstract data type
In computer science, a double-ended queue (abbreviated to deque, ) is an abstract data type that generalizes a queue, for which elements can be added to or removed from either the front (head) or back (tail). It is also often called a head-tail linked list, though properly this refers to a specific data structure "implementation" of a deque (see below).
Naming conventions.
"Deque" is sometimes written "dequeue", but this use is generally deprecated in technical literature or technical writing because "dequeue" is also a verb meaning "to remove from a queue". Nevertheless, several libraries and some writers, such as Aho, Hopcroft, and Ullman in their textbook "Data Structures and Algorithms", spell it "dequeue". John Mitchell, author of "Concepts in Programming Languages," also uses this terminology.
Distinctions and sub-types.
This differs from the queue abstract data type or "first in first out" list (FIFO), where elements can only be added to one end and removed from the other. This general data class has some possible sub-types:
Both the basic and most common list types in computing, queues and stacks can be considered specializations of deques, and can be implemented using deques. A deque is a data structure that allows users to perform push and pop operations at both ends, providing flexibility in managing the order of elements.
Operations.
The basic operations on a deque are "enqueue" and "dequeue" on either end. Also generally implemented are "peek" operations, which return the value at that end without dequeuing it.
Names vary between languages; major implementations include:
Implementations.
There are at least two common ways to efficiently implement a deque: with a modified dynamic array or with a doubly linked list.
The dynamic array approach uses a variant of a dynamic array that can grow from both ends, sometimes called array deques. These array deques have all the properties of a dynamic array, such as constant-time random access, good locality of reference, and inefficient insertion/removal in the middle, with the addition of amortized constant-time insertion/removal at both ends, instead of just one end. Three common implementations include:
Purely functional implementation.
Double-ended queues can also be implemented as a purely functional data structure. Two versions of the implementation exist. The first one, called "real-time deque", is presented below. It allows the queue to be persistent with operations in "O"(1) worst-case time, but requires lazy lists with memoization. The second one, with no lazy lists nor memoization, is presented at the end of the section. Its amortized time is "O"(1) if persistence is not used, but the worst-case time complexity of an operation is "O"("n"), where "n" is the number of elements in the double-ended queue.
Let us recall that, for a list codice_0, codice_1 denotes its length, that codice_2 represents an empty list and codice_3 represents the list whose head is codice_4 and whose tail is codice_5. The functions codice_6 and codice_7 return the list codice_0 without its first codice_9 elements, and the first codice_9 elements of codice_0, respectively. Or, if codice_12, they return the empty list and codice_0 respectively.
Real-time deques via lazy rebuilding and scheduling.
A double-ended queue is represented as a sextuple codice_14 where codice_15 is a linked list which contains the front of the queue of length codice_16. Similarly, codice_17 is a linked list which represents the reverse of the rear of the queue, of length codice_18. Furthermore, it is assured that codice_19 and codice_20 - intuitively, it means that both the front and the rear contains between a third minus one and two thirds plus one of the elements. Finally, codice_21 and codice_22 are tails of codice_15 and of codice_17, they allow scheduling the moment where some lazy operations are forced. Note that, when a double-ended queue contains codice_25 elements in the front list and codice_25 elements in the rear list, then the inequality invariant remains satisfied after codice_9 insertions and codice_28 deletions when codice_29. That is, at most codice_30 operations can happen between each rebalancing.
Let us first give an implementation of the various operations that affect the front of the deque - cons, head and tail. Those implementations do not necessarily respect the invariant. In a second step we will explain how to modify a deque which does not satisfy the invariant into one which satisfies it. However, they use the invariant, in that if the front is empty then the rear has at most one element. The operations affecting the rear of the list are defined similarly by symmetry.
empty = (0, NIL, NIL, 0, NIL, NIL)
fun insert'(x, (len_front, front, tail_front, len_rear, rear, tail_rear)) =
(len_front+1, CONS(x, front), drop(2, tail_front), len_rear, rear, drop(2, tail_rear))
fun head((_, CONS(h, _), _, _, _, _)) = h
fun head((_, NIL, _, _, CONS(h, NIL), _)) = h
fun tail'((len_front, CONS(head_front, front), tail_front, len_rear, rear, tail_rear)) =
(len_front - 1, front, drop(2, tail_front), len_rear, rear, drop(2, tail_rear))
fun tail'((_, NIL, _, _, CONS(h, NIL), _)) = empty
It remains to explain how to define a method codice_31 that rebalances the deque if codice_32 or codice_33 broke the invariant. The methods codice_34 and codice_33 can be defined by first applying codice_32 and codice_37 and then applying codice_31.
fun balance(q as (len_front, front, tail_front, len_rear, rear, tail_rear)) =
let floor_half_len = (len_front + len_rear) / 2 in
let ceil_half_len = len_front + len_rear - floor_half_len in
if len_front > 2*len_rear+1 then
let val front' = take(ceil_half_len, front)
val rear' = rotateDrop(rear, ceil_half_len, front)
in (ceil_half_len, front', front', floor_half_len, rear', rear')
else if len_rear > 2*len_front+1 then
let val rear' = take(floor_half_len, rear)
val front' = rotateDrop(front, floor_half_len, rear)
in (ceil_half_len, front', front', floor_half_len, rear', rear')
else q
where codice_39 returns the concatenation of codice_15 and of codice_41. That is, codice_42 puts into codice_43 the content of codice_15 and the content of codice_17 that is not already in codice_46. Since dropping codice_25 elements takes formula_0 time, laziness is used to ensure that elements are dropped two by two, with two drops being done during each codice_37 and each codice_32 operation.
fun rotateDrop(front, i, rear) =
if i < 2 then rotateRev(front, drop(i, rear), NIL)
else let CONS(x, front') = front in
CONS(x, rotateDrop(front', i-2, drop(2, rear)))
where codice_50 is a function that returns the front, followed by the middle reversed, followed by the rear. This function is also defined using laziness to ensure that it can be computed step by step, with one step executed during each codice_32 and codice_37 and taking a constant time. This function uses the invariant that codice_53 is 2 or 3.
fun rotateRev(NIL, rear, a) =
reverse(rear)++a
fun rotateRev(CONS(x, front), rear, a) =
CONS(x, rotateRev(front, drop(2, rear), reverse(take(2, rear))++a))
where codice_54 is the function concatenating two lists.
Implementation without laziness.
Note that, without the lazy part of the implementation, this would be a non-persistent implementation of queue in "O"(1) amortized time. In this case, the lists codice_21 and codice_22 could be removed from the representation of the double-ended queue.
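The following is an illustrative Python sketch of that non-lazy variant (it is not the implementation above): the deque is an immutable pair of tuples, the front in order and the rear reversed, and rebalancing is done eagerly whenever one side grows more than twice as long as the other. The operations on the rear are symmetric and are omitted.
# A deque is an immutable pair (front, rear): front holds the first elements in
# order, rear holds the last elements in reverse order.
def balance(front, rear):
    if len(front) > 2 * len(rear) + 1:
        half = (len(front) + len(rear) + 1) // 2
        front, rear = front[:half], rear + tuple(reversed(front[half:]))
    elif len(rear) > 2 * len(front) + 1:
        half = (len(front) + len(rear) + 1) // 2
        front, rear = front + tuple(reversed(rear[half:])), rear[:half]
    return front, rear
def push_front(d, x):
    front, rear = d
    return balance((x,) + front, rear)
def pop_front(d):
    front, rear = d
    if front:
        return front[0], balance(front[1:], rear)
    return rear[0], ((), ())        # the invariant leaves at most one element in rear
empty = ((), ())
d = empty
for i in range(5):
    d = push_front(d, i)
x, d = pop_front(d)                 # x == 4, d still holds 3, 2, 1, 0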
Language support.
Ada's containers provides the generic packages codice_57 and codice_58, for the dynamic array and linked list implementations, respectively.
C++'s Standard Template Library provides the class templates codice_59 and codice_60, for the multiple array and linked list implementations, respectively.
As of Java 6, Java's Collections Framework provides a new Deque interface that provides the functionality of insertion and removal at both ends. It is implemented by classes such as ArrayDeque (also new in Java 6) and LinkedList, providing the dynamic array and linked list implementations, respectively. However, the codice_61, contrary to its name, does not support random access.
Javascript's Array prototype & Perl's arrays have native support for both removing (shift and pop) and adding (unshift and push) elements on both ends.
Python 2.4 introduced the codice_62 module with support for deque objects. It is implemented using a doubly linked list of fixed-length subarrays.
As of PHP 5.3, PHP's SPL extension contains the 'SplDoublyLinkedList' class that can be used to implement Deque datastructures. Previously to make a Deque structure the array functions array_shift/unshift/pop/push had to be used instead.
GHC's Data.Sequence module implements an efficient, functional deque structure in Haskell. The implementation uses 2–3 finger trees annotated with sizes. There are other (fast) possibilities to implement purely functional (thus also persistent) double queues (most using heavily lazy evaluation). Kaplan and Tarjan were the first to implement optimal confluently persistent catenable deques. Their implementation was strictly purely functional in the sense that it did not use lazy evaluation. Okasaki simplified the data structure by using lazy evaluation with a bootstrapped data structure and degrading the performance bounds from worst-case to amortized. Kaplan, Okasaki, and Tarjan produced a simpler, non-bootstrapped, amortized version that can be implemented either using lazy evaluation or more efficiently using mutation in a broader but still restricted fashion. Mihaescu and Tarjan created a simpler (but still highly complex) strictly purely functional implementation of catenable deques, and also a much simpler implementation of strictly purely functional non-catenable deques, both of which have optimal worst-case bounds.
Rust's codice_63 includes VecDeque which implements a double-ended queue using a growable ring buffer.
Applications.
One example where a deque can be used is the work stealing algorithm. This algorithm implements task scheduling for several processors. A separate deque with threads to be executed is maintained for each processor. To execute the next thread, the processor gets the first element from the deque (using the "remove first element" deque operation). If the current thread forks, it is put back to the front of the deque ("insert element at front") and a new thread is executed. When one of the processors finishes execution of its own threads (i.e. its deque is empty), it can "steal" a thread from another processor: it gets the last element from the deque of another processor ("remove last element") and executes it. The work stealing algorithm is used by Intel's Threading Building Blocks (TBB) library for parallel programming.
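A stripped-down illustrative sketch of this scheduling idea, using Python's collections.deque (the task names and two-processor setup are invented for the example; real schedulers such as TBB add synchronization and much more):
from collections import deque
# One deque of tasks per processor; processor 0 starts with all the work.
queues = [deque(["a", "b", "c"]), deque()]
def next_task(worker):
    own, other = queues[worker], queues[1 - worker]
    if own:
        return own.popleft()    # take from the front of one's own deque
    if other:
        return other.pop()      # otherwise steal from the back of another deque
    return None
print(next_task(1))   # "c": processor 1 is idle, so it steals from the back
print(next_task(0))   # "a": processor 0 works from the front of its own deque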
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(n)"
}
] | https://en.wikipedia.org/wiki?curid=8904 |
8904174 | Bethe formula | Moving charge energy loss formula found by Hans Bethe
The Bethe formula or Bethe–Bloch formula describes the mean energy loss per distance travelled of swift charged particles (protons, alpha particles, atomic ions) traversing matter (or alternatively the stopping power of the material). For electrons the energy loss is slightly different due to their small mass (requiring relativistic corrections) and their indistinguishability, and since they suffer much larger losses by Bremsstrahlung, terms must be added to account for this. Fast charged particles moving through matter interact with the electrons of atoms in the material. The interaction excites or ionizes the atoms, leading to an energy loss of the traveling particle.
The non-relativistic version was found by Hans Bethe in 1930; the relativistic version (shown below) was found by him in 1932. The most probable energy loss differs from the mean energy loss and is described by the Landau-Vavilov distribution.
The formula.
For a particle with speed "v", charge "z" (in multiples of the electron charge), and energy "E", traveling a distance "x" into a target of electron number density "n" and mean excitation energy "I" (see below), the relativistic version of the formula reads, in SI units:
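Written out in the notation of this section, the relativistic expression (equation 1) is commonly quoted as
(1) \ -\frac{dE}{dx} = \frac{4 \pi}{m_e c^2} \cdot \frac{n z^2}{\beta^2} \cdot \left(\frac{e^2}{4\pi\varepsilon_0}\right)^2 \cdot \left[\ln \left(\frac{2 m_e c^2 \beta^2}{I \cdot (1-\beta^2)}\right) - \beta^2\right]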
where "c" is the speed of light and "ε"0 the vacuum permittivity, formula_0, "e" and "me" the electron charge and rest mass respectively.
Here, the electron density of the material can be calculated by
formula_1
where "ρ" is the density of the material, "Z" its atomic number, "A" its relative atomic mass, "NA" the Avogadro number and "Mu" the Molar mass constant.
In the figure to the right, the small circles are experimental results obtained from measurements of various authors, while the red curve is Bethe's formula. Evidently, Bethe's theory agrees very well with experiment at high energy. The agreement is even better when corrections are applied (see below).
For low energies, i.e., for small velocities of the particle "β" « 1, the Bethe formula reduces to
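In the same notation, this low-energy limit (equation 2) is usually written as
(2) \ -\frac{dE}{dx} = \frac{4 \pi n z^2}{m_e v^2} \cdot \left(\frac{e^2}{4\pi\varepsilon_0}\right)^2 \cdot \ln \left(\frac{2 m_e v^2}{I}\right)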
This can be seen by first replacing "βc" by "v" in eq. (1) and then neglecting "β"2 because of its small size.
At low energy, the energy loss according to the Bethe formula therefore decreases approximately as "v"−2 with increasing energy. It reaches a minimum for approximately "E" = 3"Mc"2, where "M" is the mass of the particle (for protons, this would be about at 3000 MeV). For highly relativistic cases "β" ≈ 1, the energy loss increases again, logarithmically due to the transversal component of the electric field.
The mean excitation energy.
In the Bethe theory, the material is completely described by a single number, the mean excitation energy "I". In 1933 Felix Bloch showed that the mean excitation energy of atoms is approximately given by
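which is usually quoted as the rule of thumb
(3) \ I \simeq (10\ \mathrm{eV}) \cdot Z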
where "Z" is the atomic number of the atoms of the material. If this approximation is introduced into formula (1) above, one obtains an expression which is often called "Bethe-Bloch formula". But since we have now accurate tables of "I" as a function of "Z" (see below), the use of such a table will yield better results than the use of formula (3).
The figure shows normalized values of "I", taken from a table. The peaks and valleys in this figure lead to corresponding valleys and peaks in the stopping power. These are called ""Z2"-oscillations" or ""Z2"-structure" (where "Z2" = "Z" means the atomic number of the target).
Corrections to the Bethe formula.
The Bethe formula is only valid for energies high enough so that the charged atomic particle (the ion) does not carry any atomic electrons with it. At smaller energies, when the ion carries electrons, this reduces its charge effectively, and the stopping power is thus reduced. But even if the atom is fully ionized, corrections are necessary.
Bethe found his formula using quantum mechanical perturbation theory. Hence, his result is proportional to the square of the charge "z" of the particle. The description can be improved by considering corrections which correspond to higher powers of "z". These are: the Barkas-Andersen-effect (proportional to "z"3, after Walter H. Barkas and Hans Henrik Andersen), and the Felix Bloch-correction (proportional to "z"4). In addition, one has to take into account that the atomic electrons of the material traversed are not stationary ("shell correction").
The corrections mentioned have been built into the programs PSTAR and ASTAR, for example, by which one can calculate the stopping power for protons and alpha particles. The corrections are large at low energy and become smaller and smaller as energy is increased.
At very high energies, Fermi's density correction has to be added. | [
{
"math_id": 0,
"text": " \\beta = \\frac{v}{c} "
},
{
"math_id": 1,
"text": "n=\\frac{N_{A}\\cdot Z\\cdot\\rho}{A\\cdot M_{u}}\\,,"
}
] | https://en.wikipedia.org/wiki?curid=8904174 |
890523 | Proportion (mathematics) | A proportion is a mathematical statement expressing equality of two ratios.
formula_0
a and d are called "extremes", b and c are called "means".
Proportion can be written as formula_1, where ratios are expressed as fractions.
Such a proportion is known as geometrical proportion, not to be confused with arithmetical proportion and harmonic proportion.
If formula_2, then formula_3
formula_5,
formula_6.
formula_7,
formula_8.
formula_9,
formula_10.
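A quick numerical check of these rules, with arbitrary illustrative values chosen so that a : b = c : d:
a, b, c, d = 2, 3, 8, 12          # 2 : 3 = 8 : 12
assert a * d == b * c             # product of the extremes equals product of the means
assert a / c == b / d             # exchanging the means preserves the proportion
assert (a + b) / b == (c + d) / d
assert (a - b) / b == (c - d) / d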
History.
The Greek mathematician Eudoxus provided a definition for the meaning of the equality between two ratios. This definition of proportion forms the subject of Euclid's Book V, where we can read:
<templatestyles src="Template:Blockquote/styles.css" />Magnitudes are said to be in the same ratio, the first to the second and the third to the fourth when, if any equimultiples whatever be taken of the first and third, and any equimultiples whatever of the second and fourth, the former equimultiples alike exceed, are alike equal to, or alike fall short of, the latter equimultiples respectively taken in corresponding order.
Later, the realization that ratios are numbers made it possible to switch from solving proportions to solving equations, and from transformations of proportions to algebraic transformations.
Related concepts.
Arithmetic proportion.
An equation of the form formula_11 is called arithmetic proportion or difference proportion.
Harmonic proportion.
If the means of the geometric proportion are equal, and the rightmost extreme is equal to the difference between the leftmost extreme and a mean, then such a proportion is called harmonic: formula_12. In this case the ratio formula_13 is called "golden ratio".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a:b=c:d"
},
{
"math_id": 1,
"text": "\\frac{a}{b}=\\frac{c}{d}"
},
{
"math_id": 2,
"text": "\\ \\frac ab=\\frac cd"
},
{
"math_id": 3,
"text": "\\ ad=bc"
},
{
"math_id": 4,
"text": "\\ \\frac ba=\\frac dc"
},
{
"math_id": 5,
"text": "\\ \\frac ac=\\frac bd"
},
{
"math_id": 6,
"text": "\\ \\frac db=\\frac ca"
},
{
"math_id": 7,
"text": "\\ \\dfrac{a+b}{b}=\\dfrac{c+d}{d}"
},
{
"math_id": 8,
"text": "\\ \\dfrac{a-b}{b}=\\dfrac{c-d}{d}"
},
{
"math_id": 9,
"text": "\\ \\dfrac{a+c}{b+d}=\\frac ab =\\frac cd"
},
{
"math_id": 10,
"text": "\\ \\dfrac{a-c}{b-d}=\\frac ab =\\frac cd"
},
{
"math_id": 11,
"text": "a-b = c-d"
},
{
"math_id": 12,
"text": " a : b = b : (a - b) "
},
{
"math_id": 13,
"text": " a : b "
}
] | https://en.wikipedia.org/wiki?curid=890523 |
8906668 | Acousto-optics | The study of sound and light interaction
Acousto-optics is a branch of physics that studies the interactions between sound waves and light waves, especially the diffraction of laser light by ultrasound (or sound in general) through an ultrasonic grating.
Introduction.
In general, acousto-optic effects are based on the change of the refractive index of a medium due to the presence of sound waves in that medium. Sound waves produce a refractive index grating in the material, and it is this grating that is "seen" by the light wave. These variations in the refractive index, due to the pressure fluctuations, may be detected optically by refraction, diffraction, and interference effects; reflection may also be used.
The acousto-optic effect is extensively used in the measurement and study of ultrasonic waves. However, the principal and growing area of interest is in acousto-optic devices for the deflection, modulation, signal processing and frequency shifting of light beams. This is due to the increasing availability and performance of lasers, which have made the acousto-optic effect easier to observe and measure. Technical progress in both crystal growth and high-frequency piezoelectric transducers has brought valuable improvements to acousto-optic components.
Along with the current applications, acousto-optics presents interesting possible applications. It can be used in nondestructive testing, structural health monitoring and biomedical applications, where optical generation and optical measurement of ultrasound give a non-contact method of imaging.
History.
Optics has had a very long and full history, from ancient Greece, through the renaissance and modern times. As with optics, acoustics has a history of similar duration, again starting with the ancient Greeks. In contrast, the acousto-optic effect has had a relatively short history, beginning with Brillouin predicting the diffraction of light by an acoustic wave propagating in a medium, in 1922. This was then confirmed by experiment in 1932 by Debye and Sears, and also by Lucas and Biquard.
The particular case of diffraction on the first order, under a certain angle of incidence (also predicted by Brillouin), was observed by Rytow in 1935. Raman and Nath (1937) designed a general ideal model of the interaction, taking into account several diffraction orders. This model was developed by Phariseau (1956) for diffraction including only one diffraction order.
Acousto-optic effect.
The acousto-optic effect is a specific case of photoelasticity, where there is a change of a material's permittivity, formula_0, due to a mechanical strain formula_1. Photoelasticity is the variation of the optical indicatrix coefficients formula_2 caused by the strain formula_3 given by,
formula_4
where formula_5 is the photoelastic tensor with components formula_6, formula_7 = 1, 2, ..., 6.
Specifically in the acousto-optic effect, the strains formula_3 are a result of the acoustic wave which has been excited within a transparent medium. This then gives rise to the variation of the refractive index. For a plane acoustic wave propagating along the z axis, the change in the refractive index can be expressed as
formula_8
where formula_9 is the undisturbed refractive index, formula_10 is the angular frequency, formula_11 is the wavenumber of the acoustic wave, and formula_12 is the amplitude of variation in the refractive index generated by the acoustic wave, and is given as,
formula_13
The generated refractive index, (2), gives a diffraction grating moving with the velocity given by the speed of the sound wave in the medium. Light which then passes through the transparent material, is diffracted due to this generated refraction index, forming a prominent diffraction pattern. This diffraction pattern corresponds with a conventional diffraction grating at angles formula_14 from the original direction, and is given by,
formula_15
where formula_16 is the wavelength of the optical wave, formula_17 is the wavelength of the acoustic wave and formula_18 is the integer order maximum.
Light diffracted by an acoustic wave of a single frequency produces two distinct diffraction types. These are Raman–Nath diffraction and Bragg diffraction.
Raman–Nath diffraction is observed with relatively low acoustic frequencies, typically less than 10 MHz, and with a small acousto-optic interaction length, ℓ, which is typically less than 1 cm. This type of diffraction occurs at an arbitrary angle of incidence, formula_19.
In contrast, Bragg diffraction occurs at higher acoustic frequencies, usually exceeding 100 MHz. The observed diffraction pattern generally consists of two diffraction maxima; these are the zeroth and the first orders. However, even these two maxima only appear at definite incidence angles close to the Bragg angle, formula_20. The first order maximum or the Bragg maximum is formed due to a selective reflection of the light from the wave fronts of ultrasonic wave. The Bragg angle is given by the expression,
formula_21
where formula_16 is the wavelength of the incident light wave (in a vacuum), formula_22 is the acoustic frequency, formula_23 is the velocity of the acoustic wave, formula_24 is the refractive index for the incident optical wave, and formula_25 is the refractive index for the diffracted optical waves.
In general, there is no point at which Bragg diffraction takes over from Raman–Nath diffraction. It is simply a fact that as the acoustic frequency increases, the number of observed maxima is gradually reduced due to the angular selectivity of the acousto-optic interaction. Traditionally, the type of diffraction, Bragg or Raman–Nath, is determined by the conditions formula_26 and formula_27 respectively, where Q is given by,
formula_28
which is known as the Klein–Cook parameter. Since, in general, only the first order diffraction maximum is used in acousto-optic devices, Bragg diffraction is preferable due to the lower optical losses. However, the acousto-optic requirements for Bragg diffraction limit the frequency range of acousto-optic interaction. As a consequence, the speed of operation of acousto-optic devices is also limited.
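As an order-of-magnitude illustration, the Bragg angle (in the isotropic case formula_24 = formula_25) and the Klein–Cook parameter can be evaluated directly; the material parameters below are rough assumed values, not tabulated data.
import math
lam = 633e-9      # optical wavelength in vacuum (He-Ne laser), m
f = 200e6         # acoustic frequency, Hz
v = 4200.0        # assumed acoustic velocity in the crystal, m/s
n = 2.2           # assumed refractive index (taken equal for incident and diffracted beams)
length = 10e-3    # assumed acousto-optic interaction length, m
theta_B = math.asin(lam * f / (2 * n * v))          # isotropic Bragg condition
Q = 2 * math.pi * lam * length * f**2 / (n * v**2)  # equation (6)
print(math.degrees(theta_B))   # roughly 0.4 degrees inside the crystal
print(Q)                       # roughly 40, i.e. Q >> 1: Bragg regime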
Acousto-optic devices.
Acousto-optic modulator.
By varying the parameters of the acoustic wave, including the amplitude, phase, frequency and polarization, properties of the optical wave may be modulated. The acousto-optic interaction also makes it possible to modulate the optical beam by both temporal and spatial modulation.
A simple method of modulating the optical beam travelling through the acousto-optic device is to switch the acoustic field on and off. When the field is off, the light beam is undiverted and the intensity of light directed at the Bragg diffraction angle is zero. When the field is switched on and Bragg diffraction occurs, the intensity at the Bragg angle increases. In this way the acousto-optic device modulates the output along the Bragg diffraction angle by switching the acoustic field on and off. The device is operated as a modulator by keeping the acoustic wavelength (frequency) fixed and varying the drive power to vary the amount of light in the deflected beam.
There are several limitations associated with the design and performance of acousto-optic modulators. The acousto-optic medium must be designed carefully to provide maximum light intensity in a single diffracted beam. The time taken for the acoustic wave to travel across the diameter of the light beam gives a limitation on the switching speed, and hence limits the modulation bandwidth. The finite velocity of the acoustic wave means the light cannot be fully switched on or off until the acoustic wave has traveled across the light beam. So to increase the bandwidth the light must be focused to a small diameter at the location of the acousto-optic interaction. This minimum focused size of the beam represents the limit for the bandwidth.
Acousto-optic tunable filter.
The principle behind the operation of acousto-optic tunable filters is based on the wavelength of the diffracted light being dependent on the acoustic frequency. By tuning the frequency of the acoustic wave, the desired wavelength of the optical wave can be diffracted acousto-optically.
There are two types of the acousto-optic filters, the collinear and non-collinear filters. The type of filter depends on geometry of acousto-optic interaction.
The polarization of the incident light can be either ordinary or extraordinary. For the definition, we assume ordinary polarization. Here the following list of symbols is used,
formula_29: the angle between the acoustic wave vector and the crystallographic axis "z" of the crystal;
formula_30: the wedge angle between the input and output faces of the filter cell (the wedge angle is necessary for eliminating the angular shift of the diffracted beam caused by frequency changing);
formula_31: the angle between the incident light wave vector and [110] axis of the crystal;
formula_32: the angle between the input face of the cell and acoustic wave vector;
formula_33: the angle between deflected and non-deflected light at the central frequency;
formula_34: the transducer length.
The incidence angle formula_31 and the central frequency formula_35 of the filter are defined by the following set of equations,
formula_36
formula_37
Refractive indices of the ordinary (formula_9) and extraordinary (formula_38) polarized beams are determined by taking into account their dispersive dependence.
The sound velocity, formula_23, depends on the angle α, such that,
formula_39
formula_40 and formula_41 are the sound velocities along the axes [110] and [001], consecutively. The value of formula_42 is determined by the angles formula_31 and formula_29,
formula_43
The angle formula_33 between the diffracted and non-diffracted beams defines the view field of the filter; it can be calculated from the formula,
formula_44
Input light need not be polarized for a non-collinear design. Unpolarized input light is scattered into orthogonally polarized beams separated by the scattering angle for the particular design and wavelength. If the optical design provides an appropriate beam block for the unscattered light, then two beams (images) are formed in an optical passband that is nearly equivalent in both orthogonally linearly polarized output beams (differing by the Stokes and Anti-Stokes scattering parameter). Because of dispersion, these beams move slightly with scanning rf frequency.
Acousto-optic deflectors.
An acousto-optic deflector spatially controls the optical beam. In the operation of an acousto-optic deflector the power driving the acoustic transducer is kept on, at a constant level, while the acoustic frequency is varied to deflect the beam to different angular positions. The acousto-optic deflector makes use of the acoustic frequency dependent diffraction angle, where a change in the angle formula_45 as a function of the change in frequency formula_46 is given as,
formula_47
where formula_16 is the optical wavelength of the beam and formula_48 is the velocity of the acoustic wave.
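A minimal numerical sketch of equation (12), using placeholder values for the wavelength, acoustic velocity and frequency change:

```python
import math

wavelength = 633e-9   # m, optical wavelength (illustrative)
v_acoustic = 650.0    # m/s, acoustic velocity (illustrative slow shear mode)
delta_f = 50e6        # Hz, change in drive frequency (illustrative)

delta_theta = wavelength / v_acoustic * delta_f   # radians, equation (12)
print(f"deflection change: {delta_theta * 1e3:.2f} mrad "
      f"({math.degrees(delta_theta):.2f} degrees)")
```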
AOD technology has helped make practical the achievement of Bose–Einstein condensation, for which the 2001 Nobel Prize in Physics was awarded to Eric A. Cornell, Wolfgang Ketterle and Carl E. Wieman. Another application of acousto-optic deflection is the optical trapping of small molecules.
AODs are essentially the same as acousto-optic modulators (AOMs). In an AOM, only the amplitude of the sound wave is modulated (to modulate the intensity of the diffracted laser beam), whereas in an AOD, both the amplitude and frequency are adjusted, making the engineering requirements tighter for an AOD than an AOM.
Materials.
All materials display the acousto-optic effect. Fused silica is used as a standard for comparison when measuring photoelastic coefficients. Lithium niobate is often used in high-frequency devices. Softer materials with slow acoustic waves, such as arsenic trisulfide, tellurium dioxide and tellurite glasses, lead silicate, Ge55As12S33, mercury(I) chloride and lead(II) bromide, make high-efficiency devices at lower frequencies and give high resolution.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varepsilon"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "B_i"
},
{
"math_id": 3,
"text": "a_j"
},
{
"math_id": 4,
"text": "(1) \\ \\Delta B_i = p_{ij} a_j, \\,"
},
{
"math_id": 5,
"text": "p_{ij}"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "j"
},
{
"math_id": 8,
"text": "(2) \\ n(z,t)=n_0+\\Delta n \\cos (\\Omega t - Kz), \\,"
},
{
"math_id": 9,
"text": "n_0"
},
{
"math_id": 10,
"text": "\\Omega"
},
{
"math_id": 11,
"text": "K"
},
{
"math_id": 12,
"text": "\\Delta n"
},
{
"math_id": 13,
"text": "(3) \\ \\Delta n = - \\frac{1}{2} \\sum_j n_0^3p_{zj} a_j,"
},
{
"math_id": 14,
"text": "\\theta_n"
},
{
"math_id": 15,
"text": "(4) \\ \\Lambda \\sin (\\theta_m) = m\\lambda,\\,"
},
{
"math_id": 16,
"text": "\\lambda"
},
{
"math_id": 17,
"text": "\\Lambda"
},
{
"math_id": 18,
"text": "m"
},
{
"math_id": 19,
"text": "\\theta_0"
},
{
"math_id": 20,
"text": "\\theta_B"
},
{
"math_id": 21,
"text": "(5) \\ \\sin \\theta_B = - \\frac{\\lambda f}{2 n_i \\nu}\\left[ 1+\\frac{\\nu^2}{\\lambda^2 f^2 } \\left( n_i^2 - n_d^2 \\right) \\right],"
},
{
"math_id": 22,
"text": "f"
},
{
"math_id": 23,
"text": "v"
},
{
"math_id": 24,
"text": "n_i"
},
{
"math_id": 25,
"text": "n_d"
},
{
"math_id": 26,
"text": " Q \\gg 1 "
},
{
"math_id": 27,
"text": " Q \\ll 1 "
},
{
"math_id": 28,
"text": "(6) \\ Q = \\frac{2\\pi\\lambda \\ell f^2}{n \\nu^2},"
},
{
"math_id": 29,
"text": "\\alpha"
},
{
"math_id": 30,
"text": "\\gamma"
},
{
"math_id": 31,
"text": "\\varphi"
},
{
"math_id": 32,
"text": "\\alpha_\\ell"
},
{
"math_id": 33,
"text": "\\beta"
},
{
"math_id": 34,
"text": "\\ell"
},
{
"math_id": 35,
"text": "f_i"
},
{
"math_id": 36,
"text": "(7) \\ n_\\varphi = \\frac{n_0 n_e}{\\sqrt{n_0^2 \\cos \\varphi + n_e^2 \\sin^2 \\varphi}}"
},
{
"math_id": 37,
"text": "(8) \\ f_i(\\varphi)=\\frac{\\nu}{\\lambda}\\left[n_\\varphi\\cos(\\varphi+\\alpha)\\pm\\sqrt{n_0^2 - n_\\varphi^2(\\varphi)\\sin^2(\\varphi+\\alpha)}\\right]"
},
{
"math_id": 38,
"text": "n_e"
},
{
"math_id": 39,
"text": "(9) \\ \\nu (\\alpha) = \\nu_{110} \\sqrt{\\cos^2\\alpha + \\left(\\frac{\\nu_{001}}{\\nu_{110}}\\right)^2 \\sin^2\\alpha}"
},
{
"math_id": 40,
"text": "v_{110}"
},
{
"math_id": 41,
"text": "v_{001}"
},
{
"math_id": 42,
"text": "\\alpha_1"
},
{
"math_id": 43,
"text": "(10) \\ \\alpha_\\ell = \\varphi + \\alpha"
},
{
"math_id": 44,
"text": "(11) \\ \\beta = \\arcsin \\left( \\frac{\\lambda f_0}{n_0 \\nu} \\sin \\alpha + \\varphi \\right)"
},
{
"math_id": 45,
"text": "\\Delta \\theta_d"
},
{
"math_id": 46,
"text": "\\Delta f"
},
{
"math_id": 47,
"text": " (12) \\ \\Delta \\theta_d = \\frac{\\lambda}{\\nu}\\Delta f"
},
{
"math_id": 48,
"text": "\\nu"
}
] | https://en.wikipedia.org/wiki?curid=8906668 |
8907443 | Caccioppoli set | Region with boundary of finite measure
In mathematics, a Caccioppoli set is a set whose boundary is measurable and has (at least locally) a "finite measure". A synonym is set of (locally) finite perimeter. Basically, a set is a Caccioppoli set if its characteristic function is a function of bounded variation.
History.
The basic concept of a Caccioppoli set was first introduced by the Italian mathematician Renato Caccioppoli in the paper : considering a plane set or a surface defined on an open set in the plane, he defined their measure or area as the total variation in the sense of Tonelli of their defining functions, i.e. of their parametric equations, "provided this quantity was bounded". The "measure of the boundary of a set was defined as a functional", precisely a set function, for the first time: also, being defined on open sets, it can be defined on all Borel sets and its value can be approximated by the values it takes on an increasing net of subsets. Another clearly stated (and demonstrated) property of this functional was its "lower semi-continuity".
In the paper , he made this precise by using a "triangular mesh" as an increasing net approximating the open domain, defining "positive and negative variations" whose sum is the total variation, i.e. the "area functional". His inspiring point of view, as he explicitly admitted, was that of Giuseppe Peano, as expressed by the Peano–Jordan measure: "to associate to every portion of a surface an oriented plane area in a similar way as an approximating chord is associated to a curve". Also, another theme found in this theory was the "extension of a functional" from a subspace to the whole ambient space: the use of theorems generalizing the Hahn–Banach theorem is frequently encountered in Caccioppoli's research. However, the restricted meaning of total variation in the sense of Tonelli added much complication to the formal development of the theory, and the use of a parametric description of the sets restricted its scope.
Lamberto Cesari introduced the "right" generalization of functions of bounded variation to the case of several variables only in 1936: perhaps, this was one of the reasons that induced Caccioppoli to present an improved version of his theory only nearly 24 years later, in the talk at the IV UMI Congress in October 1951, followed by five notes published in the Rendiconti of the Accademia Nazionale dei Lincei. These notes were sharply criticized by Laurence Chisholm Young in the Mathematical Reviews.
In 1952 Ennio De Giorgi presented his first results, developing the ideas of Caccioppoli, on the definition of the measure of boundaries of sets at the Salzburg Congress of the Austrian Mathematical Society: he obtained these results by using a smoothing operator, analogous to a mollifier, constructed from the Gaussian function, independently proving some results of Caccioppoli. Probably he was led to study this theory by his teacher and friend Mauro Picone, who had also been the teacher of Caccioppoli and was likewise his friend. De Giorgi met Caccioppoli in 1953 for the first time: during their meeting, Caccioppoli expressed a profound appreciation of his work, starting their lifelong friendship. The same year he published his first paper on the topic i.e. : however, this paper and the closely following one did not attract much interest from the mathematical community. It was only with the paper , reviewed again by Laurence Chisholm Young in the Mathematical Reviews, that his approach to sets of finite perimeter became widely known and appreciated: also, in the review, Young revised his previous criticism of the work of Caccioppoli.
The last paper of De Giorgi on the theory of perimeters was published in 1958: in 1959, after the death of Caccioppoli, he started to call sets of finite perimeter "Caccioppoli sets". Two years later Herbert Federer and Wendell Fleming published their paper , changing the approach to the theory. Basically they introduced two new kinds of currents, respectively normal currents and integral currents: in a subsequent series of papers and in his famous treatise, Federer showed that Caccioppoli sets are normal currents of dimension formula_0 in formula_0-dimensional Euclidean spaces. However, even if the theory of Caccioppoli sets can be studied within the framework of the theory of currents, it is customary to study it through the "traditional" approach using functions of bounded variation, as the sections devoted to it in many important monographs in mathematics and mathematical physics testify.
Formal definition.
In what follows, the definition and properties of functions of bounded variation in the formula_0-dimensional setting will be used.
Caccioppoli definition.
Definition 1. Let "formula_1" be an open subset of formula_2 and let formula_3 be a Borel set. The "perimeter of formula_3 in formula_1" is defined as follows
formula_4
where formula_5 is the characteristic function of formula_3. That is, the perimeter of formula_3 in an open set formula_1 is defined to be the total variation of its characteristic function on that open set. If formula_6, then we write formula_7 for the (global) perimeter.
Definition 2. The Borel set formula_3 is a Caccioppoli set if and only if it has finite perimeter in every bounded open subset formula_1 of formula_8, i.e.
formula_9 whenever formula_10 is open and bounded.
Therefore, a Caccioppoli set has a characteristic function whose total variation is locally bounded. From the theory of functions of bounded variation it is known that this implies the existence of a vector-valued Radon measure formula_11 such that
formula_12
As noted for the case of general functions of bounded variation, this vector measure formula_11 is the distributional or weak gradient of formula_5. The total variation measure associated with formula_11 is denoted by formula_13, i.e. for every open set formula_10 we write formula_14 for formula_15.
De Giorgi definition.
In his papers and , Ennio De Giorgi introduces the following smoothing operator, analogous to the Weierstrass transform in the one-dimensional case
formula_16
As one can easily prove, formula_17 is a smooth function for all formula_18, such that
formula_19
also, its gradient is everywhere well defined, and so is its absolute value
formula_20
Having defined this function, De Giorgi gives the following definition of perimeter:
Definition 3. Let formula_21 be an open subset of formula_22 and let formula_3 be a Borel set. The "perimeter of formula_3 in formula_1" is the value
formula_23
Actually De Giorgi considered the case formula_24: however, the extension to the general case is not difficult. It can be proved that the two definitions are exactly equivalent: for a proof see the papers of De Giorgi cited above or the book . Now having defined what a perimeter is, De Giorgi gives the same Definition 2 of what a set of (locally) finite perimeter is.
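De Giorgi's definition also lends itself to a direct numerical experiment. The Python sketch below (an illustration only, with an arbitrarily chosen grid, radius and smoothing width) mollifies the characteristic function of a disk with a small Gaussian, integrates the modulus of the gradient over the grid, and compares the result with the classical perimeter 2π"r".

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Grid on [-1, 1]^2 and the characteristic function of a disk of radius r
n, r = 800, 0.6
xs = np.linspace(-1.0, 1.0, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
chi = (X**2 + Y**2 <= r**2).astype(float)

# Smoothed characteristic function (Gaussian mollification, analogous to W_lambda chi_E);
# the smoothing width of 3 grid points is small but larger than the grid spacing.
W = gaussian_filter(chi, sigma=3.0)

# The integral of |grad W| approximates the perimeter as the smoothing shrinks
gy, gx = np.gradient(W, h)
perimeter_estimate = np.sum(np.sqrt(gx**2 + gy**2)) * h**2

print(f"estimate: {perimeter_estimate:.4f}   exact 2*pi*r: {2 * np.pi * r:.4f}")
```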
Basic properties.
The following properties are the ordinary properties which the general notion of a perimeter is supposed to have:
If formula_25, then formula_26;
for any two Caccioppoli sets formula_27 and formula_28, formula_29, with equality holding when formula_30, where formula_31 denotes the Euclidean distance between sets;
if the Lebesgue measure of formula_3 is formula_32, then formula_33;
more generally, if the symmetric difference formula_34 of two sets has Lebesgue measure formula_32, then formula_35.
Notions of boundary.
For any given Caccioppoli set formula_36 there exist two naturally associated analytic quantities: the vector-valued Radon measure formula_11 and its total variation measure formula_13. Given that
formula_37
is the perimeter within any open set formula_1, one should expect that formula_11 alone should somehow account for the perimeter of formula_3.
The topological boundary.
It is natural to try to understand the relationship between the objects formula_11, formula_13, and the topological boundary formula_38. There is an elementary lemma that guarantees that the support (in the sense of distributions) of formula_11, and therefore also formula_13, is always contained in formula_38:
Lemma. The support of the vector-valued Radon measure formula_11 is a subset of the topological boundary formula_38 of formula_3.
Proof. To see this choose formula_39: then formula_40 belongs to the open set formula_41 and this implies that it belongs to an open neighborhood formula_42 contained in the interior of formula_3 or in the interior of formula_43. Let formula_44. If formula_45 where formula_46 is the closure of formula_3, then formula_47 for formula_48 and
formula_49
Likewise, if formula_50 then formula_51 for formula_48 so
formula_52
With formula_53 arbitrary it follows that formula_40 is outside the support of formula_11.
The reduced boundary.
The topological boundary formula_38 turns out to be too crude for Caccioppoli sets because its Hausdorff measure overcompensates for the perimeter formula_54 defined above. Indeed, the Caccioppoli set
formula_55
representing a square together with a line segment sticking out on the left has perimeter formula_56, i.e. the extraneous line segment is ignored, while its topological boundary
formula_57
has one-dimensional Hausdorff measure formula_58.
The "correct" boundary should therefore be a subset of formula_38. We define:
Definition 4. The reduced boundary of a Caccioppoli set formula_36 is denoted by formula_59 and is defined to be the collection of points formula_60 at which the limit:
formula_61
exists and has length equal to one, i.e. formula_62.
One can remark that by the Radon-Nikodym Theorem the reduced boundary formula_59 is necessarily contained in the support of formula_11, which in turn is contained in the topological boundary formula_38 as explained in the section above. That is:
formula_63
The inclusions above are not necessarily equalities as the previous example shows. In that example, formula_38 is the square with the segment sticking out, formula_64 is the square, and formula_59 is the square without its four corners.
De Giorgi's theorem.
For convenience, in this section we treat only the case where formula_65, i.e. the set formula_3 has (globally) finite perimeter. De Giorgi's theorem provides geometric intuition for the notion of reduced boundaries and confirms that it is the more natural definition for Caccioppoli sets by showing
formula_66
i.e. that its Hausdorff measure equals the perimeter of the set. The statement of the theorem is quite long because it interrelates various geometric notions in one fell swoop.
Theorem. Suppose formula_67 is a Caccioppoli set. Then at each point formula_60 of the reduced boundary formula_59 there exists a multiplicity one approximate tangent space formula_68 of formula_13, i.e. a codimension-1 subspace formula_68 of formula_69 such that
formula_70
for every continuous, compactly supported formula_71. In fact the subspace formula_68 is the orthogonal complement of the unit vector
formula_72
defined previously. This unit vector also satisfies
formula_73
locally in formula_74, so it is interpreted as an approximate inward pointing unit normal vector to the reduced boundary formula_59. Finally, formula_59 is (n-1)-rectifiable and the restriction of (n-1)-dimensional Hausdorff measure formula_75 to formula_59 is formula_13, i.e.
formula_76 for all Borel sets formula_77.
In other words, up to formula_75-measure zero the reduced boundary formula_59 is the smallest set on which formula_11 is supported.
Applications.
A Gauss–Green formula.
From the definition of the vector Radon measure formula_11 and from the properties of the perimeter, the following formula holds true:
formula_78
This is one version of the divergence theorem for domains with non-smooth boundaries. De Giorgi's theorem can be used to formulate the same identity in terms of the reduced boundary formula_59 and the approximate inward pointing unit normal vector formula_79. Precisely, the following equality holds
formula_80
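As a simple illustration, take formula_3 to be the closed unit disk in the plane and φ("x", "y") = ("x", "y"): the left-hand side equals twice the area of the disk, that is 2π, while on the unit circle the approximate inward unit normal is −("x", "y"), so φ · ν"E" = −1 and the right-hand side equals the length of the circle, again 2π.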
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\Omega"
},
{
"math_id": 2,
"text": "\\R^n "
},
{
"math_id": 3,
"text": "E"
},
{
"math_id": 4,
"text": "P(E,\\Omega) = V\\left(\\chi_E,\\Omega\\right):=\\sup\\left\\{\\int_\\Omega \\chi_E(x) \\mathrm{div}\\boldsymbol{\\phi}(x) \\, \\mathrm{d}x : \\boldsymbol{\\phi}\\in C_c^1(\\Omega,\\R^n),\\ \\|\\boldsymbol{\\phi}\\|_{L^\\infty(\\Omega)}\\le 1\\right\\}\n"
},
{
"math_id": 5,
"text": "\\chi_E"
},
{
"math_id": 6,
"text": "\\Omega = \\R^n"
},
{
"math_id": 7,
"text": "P(E) = P(E,\\R^n)"
},
{
"math_id": 8,
"text": "\\R ^n "
},
{
"math_id": 9,
"text": "P(E,\\Omega)<+\\infty"
},
{
"math_id": 10,
"text": "\\Omega \\subset \\R^n"
},
{
"math_id": 11,
"text": "D\\chi_E"
},
{
"math_id": 12,
"text": "\\int_\\Omega\\chi_E(x)\\mathrm{div}\\boldsymbol{\\phi}(x)\\mathrm{d}x = \\int_E\\mathrm{div}\\boldsymbol{\\phi}(x) \\, \\mathrm{d}x = -\\int_\\Omega \\langle\\boldsymbol{\\phi}, D\\chi_E(x)\\rangle \\qquad \\forall\\boldsymbol{\\phi}\\in C_c^1(\\Omega,\\R ^n)"
},
{
"math_id": 13,
"text": "|D\\chi_E|"
},
{
"math_id": 14,
"text": "|D\\chi_E|(\\Omega)"
},
{
"math_id": 15,
"text": "P(E, \\Omega) = V(\\chi_E, \\Omega)"
},
{
"math_id": 16,
"text": "W_\\lambda\\chi_E(x)=\\int_{\\R ^n}g_\\lambda(x-y)\\chi_E(y)\\mathrm{d}y = (\\pi\\lambda)^{-\\frac{n}{2}}\\int_Ee^{-\\frac{(x-y)^2}{\\lambda}}\\mathrm{d}y"
},
{
"math_id": 17,
"text": "W_\\lambda\\chi(x)"
},
{
"math_id": 18,
"text": "x\\in\\R^n"
},
{
"math_id": 19,
"text": "\\lim_{\\lambda\\to 0}W_\\lambda\\chi_E(x)=\\chi_E(x)"
},
{
"math_id": 20,
"text": "\\nabla W_\\lambda\\chi_E(x) = \\mathrm{grad}W_\\lambda\\chi_E(x) = DW_\\lambda\\chi_E(x) = \n\\begin{pmatrix}\\frac{\\partial W_\\lambda\\chi_E(x)}{\\partial x_1}\\\\ \\vdots\\\\ \\frac{\\partial W_\\lambda\\chi_E(x)}{\\partial x_n}\\\\ \\end{pmatrix} \n\\Longleftrightarrow\n\\left | DW_\\lambda\\chi_E(x)\\right | = \\sqrt{\\sum_{k=1}^n\\left|\\frac{\\partial W_\\lambda\\chi_E(x)}{\\partial x_k}\\right|^2}"
},
{
"math_id": 21,
"text": " \\Omega "
},
{
"math_id": 22,
"text": "\\R^n"
},
{
"math_id": 23,
"text": "P(E,\\Omega) = \\lim_{\\lambda\\to 0}\\int_\\Omega | DW_\\lambda\\chi_E(x) | \\mathrm{d}x"
},
{
"math_id": 24,
"text": "\\Omega=\\R ^n"
},
{
"math_id": 25,
"text": "\\Omega\\subseteq\\Omega_1"
},
{
"math_id": 26,
"text": "P(E,\\Omega)\\leq P(E,\\Omega_1)"
},
{
"math_id": 27,
"text": "E_1"
},
{
"math_id": 28,
"text": "E_2"
},
{
"math_id": 29,
"text": "P(E_1\\cup E_2,\\Omega)\\leq P(E_1,\\Omega) + P(E_2,\\Omega_1)"
},
{
"math_id": 30,
"text": "d(E_1,E_2)>0"
},
{
"math_id": 31,
"text": "d"
},
{
"math_id": 32,
"text": "0"
},
{
"math_id": 33,
"text": "P(E)=0"
},
{
"math_id": 34,
"text": "E_1\\triangle E_2"
},
{
"math_id": 35,
"text": "P(E_1)=P(E_2)"
},
{
"math_id": 36,
"text": "E \\subset \\R ^n"
},
{
"math_id": 37,
"text": " P(E, \\Omega) = \\int_{\\Omega} |D\\chi_E| "
},
{
"math_id": 38,
"text": "\\partial E"
},
{
"math_id": 39,
"text": "x_0 \\notin\\partial E"
},
{
"math_id": 40,
"text": "x_0"
},
{
"math_id": 41,
"text": "\\R ^n\\setminus\\partial E"
},
{
"math_id": 42,
"text": "A"
},
{
"math_id": 43,
"text": "\\R^n\\setminus E"
},
{
"math_id": 44,
"text": "\\phi \\in C^1_c(A; \\R ^n)"
},
{
"math_id": 45,
"text": "A\\subseteq(\\R^n \\setminus E)^\\circ=\\R^n\\setminus E^-"
},
{
"math_id": 46,
"text": "E^-"
},
{
"math_id": 47,
"text": "\\chi_E(x)=0"
},
{
"math_id": 48,
"text": "x \\in A"
},
{
"math_id": 49,
"text": " \\int_\\Omega \\langle\\boldsymbol{\\phi}, D\\chi_E(x)\\rangle =- \\int_A\\chi_E(x) \\, \\operatorname{div}\\boldsymbol{\\phi}(x)\\, \\mathrm{d}x = 0"
},
{
"math_id": 50,
"text": "A\\subseteq E^\\circ "
},
{
"math_id": 51,
"text": "\\chi_E(x)=1"
},
{
"math_id": 52,
"text": "\\int_\\Omega \\langle\\boldsymbol{\\phi}, D\\chi_E(x)\\rangle = -\\int_A\\operatorname{div} \\boldsymbol{\\phi}(x) \\, \\mathrm{d}x = 0\n"
},
{
"math_id": 53,
"text": "\\phi \\in C^1_c(A, \\R^n)"
},
{
"math_id": 54,
"text": "P(E)"
},
{
"math_id": 55,
"text": "E = \\{ (x,y) : 0 \\leq x, y \\leq 1 \\} \\cup \\{ (x, 0) : -1 \\leq x \\leq 1 \\} \\subset \\R^2 "
},
{
"math_id": 56,
"text": "P(E) = 4"
},
{
"math_id": 57,
"text": "\\partial E = \\{(x, 0) : -1 \\leq x \\leq 1 \\} \\cup \\{(x, 1) : 0 \\leq x \\leq 1 \\} \\cup \\{(x, y) : x \\in \\{0, 1\\}, 0 \\leq y \\leq 1 \\} "
},
{
"math_id": 58,
"text": "\\mathcal{H}^1(\\partial E) = 5"
},
{
"math_id": 59,
"text": "\\partial^* E"
},
{
"math_id": 60,
"text": "x"
},
{
"math_id": 61,
"text": " \\nu_E(x) := \\lim_{\\rho \\downarrow 0} \\frac{D\\chi_E(B_\\rho(x))}{|D\\chi_E|(B_\\rho(x))} \\in \\R^n"
},
{
"math_id": 62,
"text": "|\\nu_E(x)| = 1"
},
{
"math_id": 63,
"text": "\\partial^* E \\subseteq \\operatorname{support} D\\chi_E \\subseteq \\partial E"
},
{
"math_id": 64,
"text": "\\operatorname{support} D\\chi_E"
},
{
"math_id": 65,
"text": "\\Omega = \\R ^n"
},
{
"math_id": 66,
"text": " P(E) \\left( = \\int |D\\chi_E| \\right) = \\mathcal{H}^{n-1}(\\partial^* E)"
},
{
"math_id": 67,
"text": "E \\subset \\R^n"
},
{
"math_id": 68,
"text": "T_x"
},
{
"math_id": 69,
"text": "\\R ^n"
},
{
"math_id": 70,
"text": " \\lim_{\\lambda \\downarrow 0} \\int_{\\R^n} f(\\lambda^{-1}(z-x)) |D\\chi_E|(z) = \\int_{T_x} f(y) \\, d\\mathcal{H}^{n-1}(y)"
},
{
"math_id": 71,
"text": "f : \\R ^n \\to \\R "
},
{
"math_id": 72,
"text": "\\nu_E(x) = \\lim_{\\rho \\downarrow 0} \\frac{D\\chi_E(B_\\rho(x))}{|D\\chi_E|(B_\\rho(x))} \\in \\R ^n"
},
{
"math_id": 73,
"text": "\\lim_{\\lambda \\downarrow 0} \\left \\{ \\lambda^{-1}(z - x) : z \\in E \\right \\} \\to \\left \\{ y \\in \\R^n : y \\cdot \\nu_E(x) > 0 \\right \\}"
},
{
"math_id": 74,
"text": "L^1"
},
{
"math_id": 75,
"text": "\\mathcal{H}^{n-1}"
},
{
"math_id": 76,
"text": "|D\\chi_E|(A) = \\mathcal{H}^{n-1}(A \\cap \\partial^* E)"
},
{
"math_id": 77,
"text": "A \\subset \\R^n"
},
{
"math_id": 78,
"text": "\\int_E\\operatorname{div}\\boldsymbol{\\phi}(x) \\, \\mathrm{d}x = -\\int_{\\partial E} \\langle\\boldsymbol{\\phi}, D\\chi_E(x)\\rangle \n\\qquad \\boldsymbol{\\phi}\\in C_c^1(\\Omega, \\R^n)"
},
{
"math_id": 79,
"text": "\\nu_E"
},
{
"math_id": 80,
"text": "\\int_E \\operatorname{div} \\boldsymbol{\\phi}(x) \\, \\mathrm{d}x = - \\int_{\\partial^* E} \\boldsymbol{\\phi}(x) \\cdot \\nu_E(x) \\, \\mathrm{d}\\mathcal{H}^{n-1}(x) \\qquad \\boldsymbol{\\phi} \\in C^1_c(\\Omega, \\R^n)"
}
] | https://en.wikipedia.org/wiki?curid=8907443 |
890862 | L-attributed grammar | L-attributed grammars are a special type of attribute grammars. They allow the attributes to be evaluated in one depth-first left-to-right traversal of the abstract syntax tree. As a result, attribute evaluation in L-attributed grammars can be incorporated conveniently in top-down parsing.
A syntax-directed definition is L-attributed if each "inherited" attribute of formula_0 on the right side of formula_1 depends only on the attributes (inherited or synthesized) of the symbols formula_2 appearing to the left of formula_0 in the production, and on the inherited attributes of formula_3.
Every S-attributed syntax-directed definition is also L-attributed.
Implementing L-attributed definitions in bottom-up parsers requires rewriting them as translation schemes.
Many programming languages are L-attributed. Special types of compilers, the narrow compilers, are based on some form of L-attributed grammar. L-attributed grammars are a strict superset of S-attributed grammars and are used for code synthesis.
Either "inherited attributes" or "synthesized attributes" associated with the occurrence of symbol formula_4.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X_j"
},
{
"math_id": 1,
"text": "A \\rightarrow X_1, X_2, \\dots, X_n "
},
{
"math_id": 2,
"text": "X_1, X_2, \\dots, X_{j-1}"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "X_1,X_2, \\dots, X_n"
}
] | https://en.wikipedia.org/wiki?curid=890862 |
8908740 | Occurrences of Grandi's series | This article lists occurrences of the paradoxical infinite "sum" 1 − 1 + 1 − 1 + · · ·, sometimes called Grandi's series.
Parables.
Guido Grandi illustrated the series with a parable involving two brothers who share a gem.
Thomson's lamp is a supertask in which a hypothetical lamp is turned on and off infinitely many times in a finite time span. One can think of turning the lamp on as adding 1 to its state, and turning it off as subtracting 1. Instead of asking the sum of the series, one asks the final state of the lamp.
One of the best-known classic parables to which infinite series have been applied, Achilles and the tortoise, can also be adapted to the case of Grandi's series.
Numerical series.
The Cauchy product of Grandi's series with itself is 1 − 2 + 3 − 4 + · · ·.
Several series resulting from the introduction of zeros into Grandi's series have interesting properties; for these see Summation of Grandi's series#Dilution.
Grandi's series is just one example of a divergent geometric series.
The rearranged series 1 − 1 − 1 + 1 + 1 − 1 − 1 + · · · occurs in Euler's 1775 treatment of the pentagonal number theorem as the value of the Euler function at "q" = 1.
Power series.
The power series most famously associated with Grandi's series is its ordinary generating function,
formula_0
Fourier series.
Hyperbolic sine.
In his 1822 "Théorie Analytique de la Chaleur", Joseph Fourier obtains what is currently called a Fourier sine series for a scaled version of the hyperbolic sine function,
formula_1
He finds that the general coefficient of sin "nx" in the series is
formula_2
For "n" > 1 the above series converges, while the coefficient of sin "x" appears as 1 − 1 + 1 − 1 + · · · and so is expected to be 1⁄2. In fact, this is correct, as can be demonstrated by directly calculating the Fourier coefficient from an integral:
formula_3
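This integral is easy to confirm numerically, for instance with the following Python sketch (using a standard quadrature routine; the check is an illustration, not part of Fourier's argument):

```python
import math
from scipy.integrate import quad

# Scaled hyperbolic sine from Fourier's example
f = lambda x: math.pi / (2.0 * math.sinh(math.pi)) * math.sinh(x)

# Coefficient of sin(x): b_1 = (2/pi) * integral of f(x) sin(x) over [0, pi]
b1, _ = quad(lambda x: f(x) * math.sin(x), 0.0, math.pi)
b1 *= 2.0 / math.pi
print(b1)   # approximately 0.5
```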
Dirac comb.
Grandi's series occurs more directly in another important series,
formula_4
At "x" = π, the series reduces to −1 + 1 − 1 + 1 − · · · and so one might expect it to meaningfully equal −1⁄2. In fact, Euler held that this series obeyed the formal relation Σ cos "kx" = −1⁄2, while d'Alembert rejected the relation, and Lagrange wondered if it could be defended by an extension of the geometric series similar to Euler's reasoning with Grandi's numerical series.
Euler's claim suggests that
formula_5
for all "x". This series is divergent everywhere, while its Cesàro sum is indeed 0 for almost all "x". However, the series diverges to infinity at "x" = 2π"n" in a significant way: it is the Fourier series of a Dirac comb. The ordinary, Cesàro, and Abel sums of this series involve limits of the Dirichlet, Fejér, and Poisson kernels, respectively.
Dirichlet series.
Multiplying the terms of Grandi's series by 1/"n""z" yields the Dirichlet series
formula_6
which converges only for complex numbers "z" with a positive real part. Grandi's series is recovered by letting "z" = 0.
Unlike the geometric series, the Dirichlet series for "η" is not useful for determining what 1 − 1 + 1 − 1 + · · · "should" be. Even on the right half-plane, "η"("z") is not given by any elementary expression, and there is no immediate evidence of its limit as "z" approaches 0. On the other hand, if one uses stronger methods of summability, then the Dirichlet series for "η" defines a function on the whole complex plane — the Dirichlet eta function — and moreover, this function is analytic. For "z" with real part > −1 it suffices to use Cesàro summation, and so "η"(0) = 1⁄2 after all.
The function "η" is related to a more famous Dirichlet series and function:
formula_7
where ζ is the Riemann zeta function. Keeping Grandi's series in mind, this relation explains why ζ(0) = −1⁄2; see also 1 + 1 + 1 + 1 + · · ·. The relation also implies a much more important result. Since "η"("z") and (1 − 21−"z") are both analytic on the entire plane and the latter function's only zero is a simple zero at "z" = 1, it follows that ζ("z") is meromorphic with only a simple pole at "z" = 1.
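Concretely, since Cesàro (or Abel) summation of Grandi's series gives "η"(0) = 1⁄2, setting "z" = 0 in this relation, where 21−"z" = 2, yields 1⁄2 = (1 − 2)ζ(0) = −ζ(0), so that ζ(0) = −1⁄2.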
Euler characteristics.
Given a CW complex "S" containing one vertex, one edge, one face, and generally exactly one cell of every dimension, Euler's formula "V" − "E" + "F" − · · · for the Euler characteristic of "S" returns 1 − 1 + 1 − · · ·. There are a few motivations for defining a generalized Euler characteristic for such a space that turns out to be 1/2.
One approach comes from combinatorial geometry. The open interval (0, 1) has an Euler characteristic of −1, so its power set 2(0, 1) should have an Euler characteristic of 2−1 = 1/2. The appropriate power set to take is the "small power set" of finite subsets of the interval, which consists of the union of a point (the empty set), an open interval (the set of singletons), an open triangle, and so on. So the Euler characteristic of the small power set is 1 − 1 + 1 − · · ·. James Propp defines a regularized Euler measure for polyhedral sets that, in this example, replaces 1 − 1 + 1 − · · · with 1 − "t" + "t"2 − · · ·, sums the series for |"t"| < 1, and analytically continues to "t" = 1, essentially finding the Abel sum of 1 − 1 + 1 − · · ·, which is 1/2. Generally, he finds χ(2"A") = 2χ("A") for any polyhedral set "A", and the base of the exponent generalizes to other sets as well.
Infinite-dimensional real projective space RP∞ is another structure with one cell of every dimension and therefore an Euler characteristic of 1 − 1 + 1 − · · ·. This space can be described as the quotient of the infinite-dimensional sphere by identifying each pair of antipodal points. Since the infinite-dimensional sphere is contractible, its Euler characteristic is 1, and its 2-to-1 quotient should have an Euler characteristic of 1/2.
This description of RP∞ also makes it the classifying space of Z2, the cyclic group of order 2. Tom Leinster gives a definition of the Euler characteristic of any category which bypasses the classifying space and reduces to 1/|"G"| for any group when viewed as a one-object category. In this sense the Euler characteristic of Z2 is itself 1⁄2.
In physics.
Grandi's series, and generalizations thereof, occur frequently in many branches of physics; most typically in the discussions of quantized fermion fields (for example, the chiral bag model), which have both positive and negative eigenvalues; although similar series occur also for bosons, such as in the Casimir effect.
The general series is discussed in greater detail in the article on spectral asymmetry, whereas methods used to sum it are discussed in the articles on regularization and, in particular, the zeta function regulator.
In art.
The Grandi series has been applied to ballet by Benjamin Jarvis in "The Invariant" journal (PDF: https://invariants.org.uk/assets/TheInvariant_HT2016.pdf). The noise artist Jliat has a 2000 musical single "Still Life #7: The Grandi Series" advertised as "conceptual art"; it consists of nearly an hour of silence.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f(x) = 1-x+x^2-x^3+\\cdots = \\frac{1}{1+x}."
},
{
"math_id": 1,
"text": "f(x) = \\frac{\\pi}{2\\sinh\\pi} \\sinh x."
},
{
"math_id": 2,
"text": "(-1)^{n-1}\\left(\\frac 1 n - \\frac{1}{n^3} + \\frac{1}{n^5} - \\cdots\\right) = (-1)^{n-1}\\frac{n}{1+n^2}."
},
{
"math_id": 3,
"text": "\\frac 2 \\pi \\int_0^\\pi f(x)\\sin x \\;dx = \\left.\\frac 1 {2\\sinh\\pi}(\\cosh x \\sin x - \\sinh x \\cos x)\\right|_0^\\pi = \\frac 1 2."
},
{
"math_id": 4,
"text": "\\cos x + \\cos 2x + \\cos 3x + \\cdots = \\sum_{k=1}^\\infty\\cos(kx)."
},
{
"math_id": 5,
"text": "1 +2\\sum_{k=1}^\\infty\\cos(kx) = 0?"
},
{
"math_id": 6,
"text": "\\eta(z)=1-\\frac{1}{2^z}+\\frac{1}{3^z}-\\frac{1}{4^z}+\\cdots=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z},"
},
{
"math_id": 7,
"text": "\n\\begin{align}\n\\eta(z) & = 1+\\frac 1 {2^z}+\\frac 1 {3^z}+\\frac 1 {4^z}+\\cdots - \\frac 2 {2^z} \\left(1+\\frac 1 {2^z}+\\cdots\\right) \\\\[5pt]\n & = \\left(1-\\frac 2 {2^z}\\right)\\zeta(z),\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=8908740 |
8909028 | Summation of Grandi's series | General considerations.
Stability and linearity.
The formal manipulations that lead to 1 − 1 + 1 − 1 + · · · being assigned a value of 1⁄2 include:
These are all legal manipulations for sums of convergent series, but 1 − 1 + 1 − 1 + · · · is not a convergent series.
Nonetheless, there are many summation methods that respect these manipulations and that do assign a "sum" to Grandi's series. Two of the simplest methods are Cesàro summation and Abel summation.
Cesàro sum.
The first rigorous method for summing divergent series was published by Ernesto Cesàro in 1890. The basic idea is similar to Leibniz's probabilistic approach: essentially, the Cesàro sum of a series is the average of all of its partial sums. Formally one computes, for each "n", the average σ"n" of the first "n" partial sums, and takes the limit of these Cesàro means as "n" goes to infinity.
For Grandi's series, the sequence of arithmetic means is
1, 1⁄2, 2⁄3, 2⁄4, 3⁄5, 3⁄6, 4⁄7, 4⁄8, …
or, more suggestively,
(1⁄2+1⁄2), 1⁄2, (1⁄2+1⁄6), 1⁄2, (1⁄2+1⁄10), 1⁄2, (1⁄2+1⁄14), 1⁄2, …
where
formula_0 for even "n" and formula_1 for odd "n".
This sequence of arithmetic means converges to 1⁄2, so the Cesàro sum of Σ"a""k" is 1⁄2. Equivalently, one says that the Cesàro limit of the sequence 1, 0, 1, 0, … is 1⁄2.
The Cesàro sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3. So the Cesàro sum of a series can be altered by inserting infinitely many 0s as well as infinitely many brackets.
The series can also be summed by the more general fractional (C, a) methods.
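A minimal numerical sketch of the Cesàro means described above (an illustration only):

```python
# Partial sums of Grandi's series and their running averages (Cesaro means)
N = 10000
s, total = 0, 0                 # current partial sum and sum of partial sums
last_partials, last_means = [], []
for n in range(1, N + 1):
    s += (-1) ** (n - 1)        # terms 1, -1, 1, -1, ...
    total += s
    if n > N - 4:
        last_partials.append(s)
        last_means.append(total / n)

print(last_partials)   # the partial sums keep oscillating between 1 and 0
print(last_means)      # the Cesaro means approach 1/2
```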
Abel sum.
Abel summation is similar to Euler's attempted definition of sums of divergent series, but it avoids Callet's and N. Bernoulli's objections by precisely constructing the function to use. In fact, Euler likely meant to limit his definition to power series, and in practice he used it almost exclusively in a form now known as Abel's method.
Given a series "a"0 + "a"1 + "a"2 + · · ·, one forms a new series "a"0 + "a"1"x" + "a"2"x"2 + · · ·. If the latter series converges for 0 < "x" < 1 to a function with a limit as "x" tends to 1, then this limit is called the Abel sum of the original series, after Abel's theorem which guarantees that the procedure is consistent with ordinary summation. For Grandi's series one has
formula_2
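Numerically, the Abel limit can be observed by evaluating the power series at values of "x" approaching 1 (an illustration only; the truncation length is arbitrary):

```python
# Abel summation of Grandi's series: the truncated sum of (-x)^n approximates
# 1/(1+x), which tends to 1/2 as x approaches 1.
for x in (0.9, 0.99, 0.999, 0.9999):
    s = sum((-x) ** n for n in range(200000))
    print(x, s)
```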
Related series.
The corresponding calculation that the Abel sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3 involves the function (1 + "x")/(1 + "x" + "x"2).
Whenever a series is Cesàro summable, it is also Abel summable and has the same sum. On the other hand, taking the Cauchy product of Grandi's series with itself yields a series which is Abel summable but not Cesàro summable:
1 − 2 + 3 − 4 + · · ·
has Abel sum 1⁄4.
Dilution.
Alternating spacing.
That the ordinary Abel sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3 can also be phrased as the (A, λ) sum of the original series 1 − 1 + 1 − 1 + · · · where (λ"n") = (0, 2, 3, 5, 6, …). Likewise the (A, λ) sum of 1 − 1 + 1 − 1 + · · · where (λ"n") = (0, 1, 3, 4, 6, …) is 1⁄3.
Exponential spacing.
The summability of 1 − 1 + 1 − 1 + · · · can be frustrated by separating its terms with exponentially longer and longer groups of zeros. The simplest example to describe is the series where (−1)"n" appears in the rank 2"n":
0 + 1 − 1 + 0 + 1 + 0 + 0 + 0 − 1 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 1 + 0 + · · ·.
This series is not Cesaro summable. After each nonzero term, the partial sums spend enough time lingering at either 0 or 1 to bring the average partial sum halfway to that point from its previous value. Over the interval 22"m"−1 ≤ "n" ≤ 22"m" − 1 following a (− 1) term, the "n"th arithmetic means vary over the range
formula_3
or about 2⁄3 to 1⁄3.
In fact, the exponentially spaced series is not Abel summable either. Its Abel sum is the limit as "x" approaches 1 of the function
"F"("x") = 0 + "x" − "x"2 + 0 + "x"4 + 0 + 0 + 0 − "x"8 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + "x"16 + 0 + · · ·.
This function satisfies a functional equation:
formula_4
This functional equation implies that "F"("x") roughly oscillates around 1⁄2 as "x" approaches 1. To prove that the amplitude of oscillation is nonzero, it helps to separate "F" into an exactly periodic and an aperiodic part:
formula_5
where
formula_6
satisfies the same functional equation as "F". This now implies that Ψ("x") = −Ψ("x"2) = Ψ("x"4), so Ψ is a periodic function of loglog(1/"x"), and it turns out not to be identically constant. Since the Φ part has a limit of 1⁄2, "F" oscillates as well.
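The oscillation can be observed numerically. The Python sketch below evaluates "F" along the sequence "x""k" = exp(−2−"k"), which increases to 1; since "x""k"2 = "x""k"−1, the values settle into a small two-point oscillation about 1⁄2 instead of converging (the number of retained terms is an arbitrary cutoff).

```python
import math

def F(x, terms=60):
    # F(x) = x - x^2 + x^4 - x^8 + ... (exponentially spaced signs)
    return sum((-1) ** n * x ** (2 ** n) for n in range(terms))

for k in range(4, 16):
    x = math.exp(-2.0 ** (-k))
    print(f"k={k:2d}  x={x:.8f}  F(x)={F(x):.8f}")
```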
Separation of scales.
Given any function φ(x) such that φ(0) = 1 and the derivative of φ is integrable over (0, +∞), the generalized φ-sum of Grandi's series exists and is equal to 1⁄2:
formula_7
The Cesaro or Abel sum is recovered by letting φ be a triangular or exponential function, respectively. If φ is additionally assumed to be "continuously" differentiable, then the claim can be proved by applying the mean value theorem and converting the sum into an integral. Briefly:
formula_8
Borel sum.
The Borel sum of Grandi's series is again 1⁄2, since
formula_9
and
formula_10
The series can also be summed by generalized (B, r) methods.
Spectral asymmetry.
The entries in Grandi's series can be paired to the eigenvalues of an infinite-dimensional operator on Hilbert space. Giving the series this interpretation gives rise to the idea of spectral asymmetry, which occurs widely in physics. The value that the series sums to depends on the asymptotic behaviour of the eigenvalues of the operator. Thus, for example, let formula_11 be a sequence of both positive and negative eigenvalues. Grandi's series corresponds to the formal sum
formula_12
where formula_13 is the sign of the eigenvalue. The series can be given concrete values by considering various limits. For example, the heat kernel regulator leads to the sum
formula_14
which, for many interesting cases, is finite for non-zero "t", and converges to a finite value in the limit.
Methods that fail.
The integral function method with "p""n" = exp (−"cn"2) and "c" > 0.
The moment constant method with
formula_15
and "k" > 0.
Geometric series.
The geometric series in formula_16,
formula_17
is convergent for formula_18. Formally substituting formula_19 would give
formula_20
However, formula_19 is outside the radius of convergence, formula_18, so this conclusion cannot be made.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma_n=\\frac12"
},
{
"math_id": 1,
"text": "\\sigma_n=\\frac12+\\frac{1}{2n}"
},
{
"math_id": 2,
"text": "A\\sum_{n=0}^\\infty(-1)^n = \\lim_{x\\rightarrow 1}\\sum_{n=0}^\\infty(-x)^n = \\lim_{x\\rightarrow 1}\\frac{1}{1+x}=\\frac12."
},
{
"math_id": 3,
"text": "\\frac{2}{3} \\left(\\frac{2^{2m}-1}{2^{2m}+2}\\right)\\;\\mathrm{to}\\;\\frac{1}{3}(1-2^{-2m}),"
},
{
"math_id": 4,
"text": "\\begin{array}{rcl}\nF(x) & = &\\displaystyle x-x^2+x^4-x^8+\\cdots \\\\[1em]\n & = & \\displaystyle x - \\left[(x^2)-(x^2)^2+(x^2)^4-\\cdots\\right] \\\\[1em]\n & = & \\displaystyle x-F(x^2).\n\\end{array}"
},
{
"math_id": 5,
"text": "F(x) = \\Psi(x) + \\Phi(x)"
},
{
"math_id": 6,
"text": "\\Phi(x) = \\sum_{n=0}^\\infty\\frac{(-1)^n}{n!(1+2^n)}\\left(\\log\\frac 1x\\right)^n"
},
{
"math_id": 7,
"text": "S_\\varphi = \\lim_{\\delta\\downarrow0}\\sum_{k=0}^\\infty(-1)^k\\varphi(\\delta k) = \\frac12."
},
{
"math_id": 8,
"text": "\\begin{array}{rcl}\nS_\\varphi & = &\\displaystyle \\lim_{\\delta\\downarrow0}\\sum_{k=0}^\\infty\\left[\\varphi(2k\\delta) - \\varphi(2k\\delta+\\delta)\\right] \\\\[1em]\n & = & \\displaystyle \\lim_{\\delta\\downarrow0}\\sum_{k=0}^\\infty\\varphi'(2k\\delta+c_k)(-\\delta) \\\\[1em]\n & = & \\displaystyle-\\frac12\\int_0^\\infty\\varphi'(x) \\,dx = -\\frac12\\varphi(x)|_0^\\infty = \\frac12.\n\\end{array}"
},
{
"math_id": 9,
"text": "1-x+\\frac{x^2}{2!}-\\frac{x^3}{3!}+\\frac{x^4}{4!}-\\cdots=e^{-x}"
},
{
"math_id": 10,
"text": "\\int_0^\\infty e^{-x}e^{-x}\\,dx=\\int_0^\\infty e^{-2x}\\,dx=\\frac12."
},
{
"math_id": 11,
"text": "\\{\\omega_n\\}"
},
{
"math_id": 12,
"text": "\\sum_n \\sgn(\\omega_n)\\;"
},
{
"math_id": 13,
"text": "\\sgn(\\omega_n)=\\pm 1"
},
{
"math_id": 14,
"text": "\\lim_{t\\to 0} \\sum_n \\sgn(\\omega_n) e^{-t|\\omega_n|}"
},
{
"math_id": 15,
"text": "d\\chi = e^{-k(\\log x)^2}x^{-1}dx"
},
{
"math_id": 16,
"text": "(x - 1)"
},
{
"math_id": 17,
"text": "\\frac{1}{x} = 1 - (x-1) + (x-1)^2 - (x-1)^3 + (x-1)^4 - .. ."
},
{
"math_id": 18,
"text": "|x - 1| < 1"
},
{
"math_id": 19,
"text": "x = 2"
},
{
"math_id": 20,
"text": " \\frac{1}{2} = 1 - 1 + 1 - 1 + 1 - ..."
}
] | https://en.wikipedia.org/wiki?curid=8909028 |
8910528 | Zonal polynomial | In mathematics, a zonal polynomial is a multivariate symmetric homogeneous polynomial. The zonal polynomials form a basis of the space of symmetric polynomials. Zonal polynomials appear in special functions with matrix argument, which in turn appear in matrix variate distributions such as the Wishart distribution when integrating over compact Lie groups. The theory was started in multivariate statistics in the 1960s and 1970s in a series of papers by Alan Treleven James and his doctoral student Alan Graham Constantine.
They appear as zonal spherical functions of the Gelfand pairs
formula_0 (here, formula_1 is the hyperoctahedral group) and formula_2, which means that they describe a canonical basis of the double coset algebras formula_3 and formula_4.
The zonal polynomials are the formula_5 case of the C normalization of the Jack function. | [
{
"math_id": 0,
"text": "(S_{2n},H_n)"
},
{
"math_id": 1,
"text": "H_n"
},
{
"math_id": 2,
"text": "(Gl_n(\\mathbb{R}),\nO_n)"
},
{
"math_id": 3,
"text": "\\mathbb{C}[H_n \\backslash S_{2n} / H_n]"
},
{
"math_id": 4,
"text": "\\mathbb{C}[O_d(\\mathbb{R})\\backslash\nM_d(\\mathbb{R})/O_d(\\mathbb{R})]"
},
{
"math_id": 5,
"text": "\\alpha=2"
}
] | https://en.wikipedia.org/wiki?curid=8910528 |
8910573 | Mechanical amplifier | A mechanical amplifier or a mechanical amplifying element is a linkage mechanism that amplifies the magnitude of mechanical quantities such as force, displacement, velocity, acceleration and torque in linear and rotational systems. In some applications, mechanical amplification induced by nature or unintentional oversights in man-made designs can be disastrous, causing situations such as the 1940 Tacoma Narrows Bridge collapse. When employed appropriately, it can help to magnify small mechanical signals for practical applications.
No additional energy can be created from any given mechanical amplifier due to conservation of energy. Claims of using mechanical amplifiers for perpetual motion machines are false, due to either a lack of understanding of the working mechanism or a simple hoax.
Generic mechanical amplifiers.
Amplifiers, in the most general sense, are intermediate elements that increase the magnitude of a signal. These include mechanical amplifiers, electrical/electronic amplifiers, hydraulic/fluidic amplifiers, pneumatic amplifiers, optical amplifiers and quantum amplifiers. The purpose of employing a mechanical amplifier is generally to magnify the mechanical signal fed into a given transducer such as gear trains in generators or to enhance the mechanical signal output from a given transducer such as diaphragm in speakers and gramophones.
Electrical amplifiers increase the power of the signal with energy supplied from an external source. This is generally not the case with most devices described as mechanical amplifiers; all the energy is provided by the original signal and there is no power amplification. For instance a lever can amplify the displacement of a signal, but the force is proportionately reduced. Such devices are more correctly described as transformers, at least in the context of mechanical–electrical analogies.
Transducers are devices that convert energy from one form to another, such as mechanical-to-electrical or vice versa; and mechanical amplifiers are employed to improve the efficiency of this energy conversion from mechanical sources. Mechanical amplifiers can be broadly classified as resonating/oscillating amplifiers (such as diaphragms) or non-resonating/oscillating amplifiers (such as gear trains).
Resonating amplifiers.
Any mechanical body that is not infinitely rigid or infinitely damped can exhibit vibration upon experiencing an external forcing. Most vibrating elements can be represented by a second order mass-spring-damper system governed by the following second order differential equation.
formula_0
where, "x" is the displacement, "m" is the effective mass, "c" is the damping coefficient, "k" is the spring constant of the restoring force, and "F(t)" is external forcing as a function of time.
"A mechanical amplifier is basically a mechanical resonator that resonates at the operating frequency and magnifies the amplitude of the vibration of the transducer at anti-node location."
Resonance is the physical phenomenon where the amplitude of oscillation (output) exhibit a buildup over time when the frequency of the external forcing (input) is in the vicinity of a resonant frequency. The output thus achieved is generally larger than the input in terms of displacement, velocity or acceleration. Although resonant frequency is generally used synonymously with natural frequency, there is in fact a distinction. While resonance can be achieved at the natural frequency, it can also be achieved at several other modes such as flexural modes. Therefore, the term resonant frequency encompasses all frequency bandwidths where some forms of resonance can be achieved; and this includes the natural frequency.
Direct resonators.
All mechanical vibrating systems possess a natural frequency "f"n, which is presented as the following in its most basic form.
formula_1
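For instance, with illustrative values "m" = 0.5 kg and "k" = 800 N/m (numbers chosen only for the example):

```python
import math

m, k = 0.5, 800.0                         # kg, N/m (illustrative values)
f_n = math.sqrt(k / m) / (2.0 * math.pi)  # natural frequency in Hz
print(f"natural frequency: {f_n:.2f} Hz") # about 6.37 Hz
```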
When an external forcing is applied directly (parallel to the plane of the oscillatory displacement) to the system around the frequency of its natural frequency, then the fundamental mode of resonance can be achieved. The oscillatory amplitude outside this frequency region is typically smaller than the resonant peak and the input amplitude. The amplitude of the resonant peak and the bandwidth of resonance is dependent on the damping conditions and is quantified by the dimensionless quantity Q factor. Higher resonant modes and resonant modes at different planes (transverse, lateral, rotational and flexural) are usually triggered at higher frequencies. The specific frequency vicinity of these modes depends on the nature and boundary conditions of each mechanical system. Additionally, subharmonics, superharmonics or subsuperharmonics of each mode can also be excited at the right boundary conditions.
“As a model for a detector we note that if you hang a weight on a spring and then move the upper end of the spring up and down, the amplitude of the weight will be much larger than the driving amplitude if you are at the resonant frequency of the mass and spring assembly. It is essentially a mechanical amplifier and serves as a good candidate for a sensitive detector."
Parametric resonators.
Parametric resonance is the physical phenomenon where an external excitation, at a specific frequency and typically orthogonal to the plane of displacement, introduces a periodic modulation in one of the system parameters resulting in a buildup in oscillatory amplitude. It is governed by the Mathieu equation. The following is a damped Mathieu equation.
formula_2
where "δ" is the squared of the natural frequency and "ε" is the amplitude of the parametric excitation.
The first order or the principal parametric resonance is achieved when the driving/excitation frequency is twice the natural frequency of a given system. Higher orders of parametric resonance are observed either at the natural frequency or at submultiples of it. For direct resonance, the response frequency always matches the excitation frequency. However, regardless of which order of parametric resonance is activated, the response frequency of parametric resonance is always in the vicinity of the natural frequency. Parametric resonance has the ability to exhibit higher mechanical amplification than direct resonance when operating at favourable conditions, but usually has a longer build-up/transient state.
“The parametric resonator provides a very useful instrument that has been developed by a number of researchers, in part because a parametric resonator can serve as a mechanical amplifier, over a narrow band of frequencies.”
Swing analogy.
Direct resonance can be equated to someone pushing a child on a swing. If the frequency of the pushing (external forcing) matches the natural frequency of the child-swing system, direct resonance can be achieved. Parametric resonance, on the other hand, is the child shifting his/her own weight with time (at twice the natural frequency) and building up the oscillatory amplitude of the swing without anyone helping to push. In other words, there is an internal transfer of energy (instead of simply dissipating all available energy) as the system parameter (child's weight) modulates and changes with time.
Other resonators/oscillators.
Other means of signal enhancement, applicable to both mechanical and electrical domains, exist. This include chaos theory, stochastic resonance and many other nonlinear or vibrational phenomena. No new energy is created. However, through mechanical amplification, more of the available power spectrum can be utilised at a more optimal efficiency rather than dissipated.
Non-resonating amplifiers.
Levers and gear trains are classical tools used to achieve mechanical advantage "MA", which is a measure of mechanical amplification.
Lever.
Lever can be used to change the magnitude of a given mechanical signal, such as force or displacement. Levers are widely used as mechanical amplifiers in actuators and generators.
It is a mechanism that usually consists of a rigid beam/rod fixed about a pivot. Levers are balanced when there is a balance of moment or torque about the pivot. Three major classifications exist, depending on the position of the pivot, input and output forces. The fundamental principle of the lever mechanism is governed by the following ratio, dating back to Archimedes.
formula_3
where "FA" is a force acting on point "A" on the rigid lever beam, "FB" is a force acting on point "B" on the rigid lever beam and "a" and "b" are the respective distances from points "A" and "B" to the pivot point.
If "FB" is the output force and "FA" is the input force, then mechanical advantage "MA" is given by the ratio of output force to input force.
formula_4
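For example (with numbers chosen purely for illustration), if the input force "F""A" acts at "a" = 0.4 m from the pivot and the output force "F""B" is taken at "b" = 0.1 m, then "MA" = "a"/"b" = 4, so an input of 10 N can balance an output load of 40 N, while the output point travels only one quarter of the input displacement.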
Gear train.
Gear trains are usually formed by the meshing engagement of two or more gears on a frame to form a transmission. This can provide translation (linear motion) or rotation as well as mechanically alter displacement, speed, velocity, acceleration, direction and torque depending on the type of gears employed, transmission configuration and gearing ratio.
The mechanical advantage of a gear train is given by the ratio of the output torque "TB" and input torque "TA", which is also the same ratio of number of teeth of the output gear "NB" and the number of teeth of the input gear "NA".
formula_5
Therefore, torque can be amplified if the number of teeth of the output gear is larger than that of the input gear.
The ratio of the number of gear teeth is also related to the gear velocities "ωA" and "ωB" as follows.
formula_6
Therefore, if the number of teeth of the output gear is less than that of the input, the output velocity is amplified.
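A short numerical sketch of these two relations, with illustrative tooth counts and input values:

```python
# Gear pair: 12-tooth input gear driving a 36-tooth output gear
N_A, N_B = 12, 36
torque_in = 2.0        # N*m, input torque (illustrative)
speed_in = 900.0       # rpm, input speed (illustrative)

MA = N_B / N_A                        # mechanical advantage = 3
torque_out = torque_in * MA           # torque amplified to 6 N*m
speed_out = speed_in * (N_A / N_B)    # speed reduced to 300 rpm

print(MA, torque_out, speed_out)
```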
Others.
The above-mentioned mechanical quantities can also be amplified and/or converted through combinations of the above or through other mechanical transmission systems, such as cranks, cams, torque amplifiers, hydraulic jacks, mechanical comparators such as the Johansson Mikrokator, and many more.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m\\ddot{x} + c\\dot{x} + kx = F(t)"
},
{
"math_id": 1,
"text": "f_n = {1\\over 2 \\pi} \\sqrt {k\\over m} "
},
{
"math_id": 2,
"text": "\\ddot{x} + c\\dot{x} + [\\delta - 2 \\varepsilon \\cos{2t}]x = 0 "
},
{
"math_id": 3,
"text": "\\frac{F_B}{F_A} = \\frac{a}{b}"
},
{
"math_id": 4,
"text": "MA = \\frac{F_B}{F_A}"
},
{
"math_id": 5,
"text": " MA = \\frac{T_B}{T_A} = \\frac{N_B}{N_A}."
},
{
"math_id": 6,
"text": " \\frac{T_A}{T_B} = \\frac{\\omega_B}{\\omega_A}."
}
] | https://en.wikipedia.org/wiki?curid=8910573 |
891150 | Schrödinger method | In combinatorial mathematics and probability theory, the Schrödinger method, named after the Austrian physicist Erwin Schrödinger, is used to solve some problems of distribution and occupancy.
Suppose
formula_0
are independent random variables that are uniformly distributed on the interval [0, 1]. Let
formula_1
be the corresponding order statistics, i.e., the result of sorting these "n" random variables into increasing order. We seek the probability of some event "A" defined in terms of these order statistics. For example, we might seek the probability that in a certain seven-day period there were at most two days on which only one phone call was received, given that the number of phone calls during that time was 20. This assumes uniform distribution of arrival times.
The Schrödinger method begins by assigning a Poisson distribution with expected value "λt" to the number of observations in the interval [0, "t"], the number of observations in non-overlapping subintervals being independent (see Poisson process). The number "N" of observations is Poisson-distributed with expected value "λ". Then we rely on the fact that the conditional probability
formula_2
does not depend on "λ" (in the language of statisticians, "N" is a sufficient statistic for this parametrized family of probability distributions for the order statistics). We proceed as follows:
formula_3
so that
formula_4
Now the lack of dependence of "P"("A" | "N" = "n") upon "λ" entails that the last sum displayed above is a power series in "λ" and "P"("A" | "N" = "n") is the value of its "n"th derivative at "λ" = 0, i.e.,
formula_5
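As a concrete illustration of this derivative extraction (a hypothetical example, not part of the original derivation): let "A" be the event that no observations fall in [0, 1/2]. Under the Poisson model the number of points in [0, 1/2] is Poisson-distributed with mean "λ"/2, so "P""λ"("A") = "e"−"λ"/2 and "e""λ""P""λ"("A") = "e""λ"/2, whose "n"th derivative at "λ" = 0 is (1/2)"n", which is exactly the probability that all "n" independent uniform points avoid [0, 1/2]. The sympy-based check below verifies this.

```python
import sympy as sp

lam = sp.symbols('lambda', nonnegative=True)
P_lambda_A = sp.exp(-lam / 2)       # probability of no Poisson points in [0, 1/2]
g = sp.exp(lam) * P_lambda_A        # e^lambda * P_lambda(A) = e^(lambda/2)

for n in range(5):
    value = sp.diff(g, lam, n).subs(lam, 0)
    print(n, value, sp.Rational(1, 2) ** n)   # the two columns agree
```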
For this method to be of any use in finding "P"("A" | "N" = "n"), it must be possible to find "P""λ"("A") more directly than "P"("A" | "N" = "n"). What makes that possible is the independence of the numbers of arrivals in non-overlapping subintervals. | [
{
"math_id": 0,
"text": "X_1,\\dots,X_n \\, "
},
{
"math_id": 1,
"text": "X_{(1)},\\dots,X_{(n)} \\, "
},
{
"math_id": 2,
"text": "P(A\\mid N=n) \\, "
},
{
"math_id": 3,
"text": "P_\\lambda(A)=\\sum_{n=0}^\\infty P(A\\mid N=n)P(N=n)=\\sum_{n=0}^\\infty P(A\\mid N=n){\\lambda^n e^{-\\lambda} \\over n!},"
},
{
"math_id": 4,
"text": "e^{\\lambda}\\,P_\\lambda(A)=\\sum_{n=0}^\\infty P(A\\mid N=n){\\lambda^n \\over n!}."
},
{
"math_id": 5,
"text": "P(A\\mid N=n) = \\left[{d^n \\over d\\lambda^n}\\left(e^\\lambda\\, P_\\lambda(A)\\right)\\right]_{\\lambda=0}."
}
] | https://en.wikipedia.org/wiki?curid=891150 |
8912 | Drake equation | Estimate of extraterrestrial civilizations
The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way Galaxy.
The equation was formulated in 1961 by Frank Drake, not for purposes of quantifying the number of civilizations, but as a way to stimulate scientific dialogue at the first scientific meeting on the search for extraterrestrial intelligence (SETI). The equation summarizes the main concepts which scientists must contemplate when considering the question of other radio-communicative life. It is more properly thought of as an approximation than as a serious attempt to determine a precise number.
Criticism related to the Drake equation focuses not on the equation itself, but on the fact that the estimated values for several of its factors are highly conjectural, the combined multiplicative effect being that the uncertainty associated with any derived value is so large that the equation cannot be used to draw firm conclusions.
Equation.
The Drake equation is:
formula_0
where:
"N" = the number of civilizations in the Milky Way galaxy with which communication might be possible;
and
"R"∗ = the average rate of star formation in our galaxy,
"f"p = the fraction of those stars that have planets,
"n"e = the average number of planets that can potentially support life per star that has planets,
"f"l = the fraction of planets that could support life that actually develop life at some point,
"f"i = the fraction of planets with life that go on to develop intelligent life (civilizations),
"f"c = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space,
"L" = the length of time for which such civilizations release detectable signals into space.
This form of the equation first appeared in Drake's 1965 paper.
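Because the equation is a simple product of factors, evaluating it is trivial once values are chosen. The Python sketch below uses purely illustrative inputs (not estimates endorsed by any particular study) to show how directly "N" scales with each factor.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Number of detectable civilizations as the product of the seven factors."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative inputs only -- each factor is highly uncertain.
print(drake(R_star=1.0, f_p=0.5, n_e=2.0, f_l=0.5, f_i=0.1, f_c=0.1, L=1000.0))
# 5.0
```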
History.
In September 1959, physicists Giuseppe Cocconi and Philip Morrison published an article in the journal "Nature" with the provocative title "Searching for Interstellar Communications". Cocconi and Morrison argued that radio telescopes had become sensitive enough to pick up transmissions that might be broadcast into space by civilizations orbiting other stars. Such messages, they suggested, might be transmitted at a wavelength of 21 cm (1,420.4 MHz). This is the wavelength of radio emission by neutral hydrogen, the most common element in the universe, and they reasoned that other intelligences might see this as a logical landmark in the radio spectrum.
Two months later, Harvard University astronomy professor Harlow Shapley speculated on the number of inhabited planets in the universe, saying "The universe has 10 million, million, million suns (10 followed by 18 zeros) similar to our own. One in a million has planets around it. Only one in a million million has the right combination of chemicals, temperature, water, days and nights to support planetary life as we know it. This calculation arrives at the estimated figure of 100 million worlds where life has been forged by evolution."
Seven months after Cocconi and Morrison published their article, Drake began searching for extraterrestrial intelligence in an experiment called Project Ozma. It was the first systematic search for signals from communicative extraterrestrial civilizations. Using the dish of the National Radio Astronomy Observatory, Green Bank in Green Bank, West Virginia, Drake monitored two nearby Sun-like stars: Epsilon Eridani and Tau Ceti, slowly scanning frequencies close to the 21 cm wavelength for six hours per day from April to July 1960. The project was well designed, inexpensive, and simple by today's standards. It detected no signals.
Soon thereafter, Drake hosted the first search for extraterrestrial intelligence conference on detecting their radio signals. The meeting was held at the Green Bank facility in 1961. The equation that bears Drake's name arose out of his preparations for the meeting.
<templatestyles src="Template:Blockquote/styles.css" />As I planned the meeting, I realized a few day[s] ahead of time we needed an agenda. And so I wrote down all the things you needed to know to predict how hard it's going to be to detect extraterrestrial life. And looking at them it became pretty evident that if you multiplied all these together, you got a number, N, which is the number of detectable civilizations in our galaxy. This was aimed at the radio search, and not to search for primordial or primitive life forms.
The ten attendees were conference organizer J. Peter Pearman, Frank Drake, Philip Morrison, businessman and radio amateur Dana Atchley, chemist Melvin Calvin, astronomer Su-Shu Huang, neuroscientist John C. Lilly, inventor Barney Oliver, astronomer Carl Sagan, and radio-astronomer Otto Struve. These participants called themselves "The Order of the Dolphin" (because of Lilly's work on dolphin communication), and commemorated their first meeting with a plaque at the observatory hall.
Usefulness.
The Drake equation results in a summary of the factors affecting the likelihood that we might detect radio-communication from intelligent extraterrestrial life. The last three parameters, "f"i, "f"c, and L, are not known and are very difficult to estimate, with values ranging over many orders of magnitude (see below). Therefore, the usefulness of the Drake equation is not in the solving, but rather in the contemplation of all the various concepts which scientists must incorporate when considering the question of life elsewhere, and it gives that question a basis for scientific analysis. The equation has helped draw attention to some particular scientific problems related to life in the universe, for example abiogenesis, the development of multi-cellular life, and the development of intelligence itself.
Within the limits of existing human technology, any practical search for distant intelligent life must necessarily be a search for some manifestation of a distant technology. After about 50 years, the Drake equation is still of seminal importance because it is a 'road map' of what we need to learn in order to solve this fundamental existential question. It also formed the backbone of astrobiology as a science; although speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit firmly into existing scientific theories. Some 50 years of SETI have failed to find anything, even though radio telescopes, receiver techniques, and computational abilities have improved significantly since the early 1960s. They have, however, at least established that the galaxy is not teeming with very powerful transmitters continuously broadcasting near the 21 cm hydrogen frequency.
Estimates.
Original estimates.
There is considerable disagreement on the values of these parameters, but the 'educated guesses' used by Drake and his colleagues in 1961 were:
"R"∗ = 1 per year (1 star formed per year, on the average over the life of the Galaxy; this was regarded as conservative)
"f"p = 0.2 to 0.5 (one fifth to one half of all stars formed will have planets)
"n"e = 1 to 5 (stars with planets will have between 1 and 5 planets capable of developing life)
"f"l = 1 (100% of these planets will develop life)
"f"i = 1 (100% of which will develop intelligent life)
"f"c = 0.1 to 0.2 (10–20% of which will be able to communicate)
"L" = 1000 to 100,000,000 years (communicative civilizations will last somewhere between 1000 and 100,000,000 years)
Inserting the above minimum numbers into the equation gives a minimum N of 20 (see: Range of results). Inserting the maximum numbers gives a maximum of 50,000,000. Drake states that given the uncertainties, the original meeting concluded that "N" ≈ "L", and there were probably between 1000 and 100,000,000 planets with civilizations in the Milky Way Galaxy.
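A minimal sketch in Python, assuming the Green Bank values listed above, reproduces the quoted minimum and maximum (the function name is arbitrary):

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    # N = R* . fp . ne . fl . fi . fc . L
    return R_star * f_p * n_e * f_l * f_i * f_c * L

print(drake(1, 0.2, 1, 1, 1, 0.1, 1_000))          # minimum values: N = 20
print(drake(1, 0.5, 5, 1, 1, 0.2, 100_000_000))    # maximum values: N = 50,000,000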
Current estimates.
This section discusses and attempts to list the best current estimates for the parameters of the Drake equation.
Rate of star creation in this Galaxy, "R"∗.
Calculations in 2010, from NASA and the European Space Agency indicate that the rate of star formation in this Galaxy is about 0.68–1.45 M☉ of material per year. To get the number of stars per year, we divide this by the initial mass function (IMF) for stars, where the average new star's mass is about 0.5 M☉. This gives a star formation rate of about 1.5–3 stars per year.
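The arithmetic is a simple division; the sketch below (values as quoted above, with 0.5 M☉ being the stated IMF average) reproduces the quoted range:

# Star formation rate in stars per year, from the quoted 0.68-1.45 solar masses of material
# formed per year and an average new-star mass of about 0.5 solar masses.
mass_rate_low, mass_rate_high, mean_star_mass = 0.68, 1.45, 0.5
print(mass_rate_low / mean_star_mass)    # about 1.4 stars per year
print(mass_rate_high / mean_star_mass)   # about 2.9 stars per year, i.e. roughly 1.5-3 overall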
Fraction of those stars that have planets, "f"p.
Analysis of microlensing surveys, in 2012, has found that "f"p may approach 1—that is, stars are orbited by planets as a rule, rather than the exception; and that there are one or more bound planets per Milky Way star.
Average number of planets that might support life per star that has planets, "n"e.
In November 2013, astronomers reported, based on Kepler space telescope data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of sun-like stars and red dwarf stars within the Milky Way Galaxy. 11 billion of these estimated planets may be orbiting sun-like stars. Since there are about 100 billion stars in the galaxy, this implies "f"p · "n"e is roughly 0.4. The nearest known planet in the habitable zone is Proxima Centauri b, about 4.2 light-years away.
The consensus at the Green Bank meeting was that "n"e had a minimum value between 3 and 5. Dutch science journalist Govert Schilling has opined that this is optimistic. Even if planets are in the habitable zone, the number of planets with the right proportion of elements is difficult to estimate. Brad Gibson, Yeshe Fenner, and Charley Lineweaver determined that about 10% of star systems in the Milky Way Galaxy are hospitable to life, by having heavy elements, being far from supernovae and being stable for a sufficient time.
The discovery of numerous gas giants in close orbit with their stars has introduced doubt that life-supporting planets commonly survive the formation of their stellar systems. So-called hot Jupiters may migrate from distant orbits to near orbits, in the process disrupting the orbits of habitable planets.
On the other hand, the variety of star systems that might have habitable zones is not just limited to solar-type stars and Earth-sized planets. It is now estimated that even tidally locked planets close to red dwarf stars might have habitable zones, although the flaring behavior of these stars might speak against this. The possibility of life on moons of gas giants (such as Jupiter's moon Europa, or Saturn's moons Titan and Enceladus) adds further uncertainty to this figure.
The authors of the rare Earth hypothesis propose a number of additional constraints on habitability for planets, including being in galactic zones with suitably low radiation, high star metallicity, and low enough density to avoid excessive asteroid bombardment. They also propose that it is necessary to have a planetary system with large gas giants which provide bombardment protection without a hot Jupiter; and a planet with plate tectonics, a large moon that creates tidal pools, and moderate axial tilt to generate seasonal variation.
Fraction of the above that actually go on to develop life, "f"l.
Geological evidence from the Earth suggests that "f"l may be high; life on Earth appears to have begun around the same time as favorable conditions arose, suggesting that abiogenesis may be relatively common once conditions are right. However, this evidence only looks at the Earth (a single model planet), and contains anthropic bias, as the planet of study was not chosen randomly, but by the living organisms that already inhabit it (ourselves). From a classical hypothesis testing standpoint, without assuming that the underlying distribution of "f"l is the same for all planets in the Milky Way, there are zero degrees of freedom, permitting no valid estimates to be made. If life (or evidence of past life) were to be found on Mars, Europa, Enceladus or Titan that developed independently from life on Earth it would imply a value for "f"l close to 1. While this would raise the number of degrees of freedom from zero to one, there would remain a great deal of uncertainty on any estimate due to the small sample size, and the chance they are not really independent.
Countering this argument is that there is no evidence for abiogenesis occurring more than once on the Earth—that is, all terrestrial life stems from a common origin. If abiogenesis were more common it would be speculated to have occurred more than once on the Earth. Scientists have searched for this by looking for bacteria that are unrelated to other life on Earth, but none have been found yet. It is also possible that life arose more than once, but that other branches were out-competed, or died in mass extinctions, or were lost in other ways. Biochemists Francis Crick and Leslie Orgel laid special emphasis on this uncertainty: "At the moment we have no means at all of knowing" whether we are "likely to be alone in the galaxy (Universe)" or whether "the galaxy may be pullulating with life of many different forms." As an alternative to abiogenesis on Earth, they proposed the hypothesis of directed panspermia, which states that Earth life began with "microorganisms sent here deliberately by a technological society on another planet, by means of a special long-range unmanned spaceship".
In 2020, a paper by scholars at the University of Nottingham proposed an "Astrobiological Copernican" principle, based on the Principle of Mediocrity, and speculated that "intelligent life would form on other [Earth-like] planets like it has on Earth, so within a few billion years life would automatically form as a natural part of evolution". In the authors' framework, "f"l, "f"i, and "f"c are all set to a probability of 1 (certainty). Their resultant calculation concludes there are more than thirty current technological civilizations in the galaxy (disregarding error bars).
Fraction of the above that develops intelligent life, "f"i.
This value remains particularly controversial. Those who favor a low value, such as the biologist Ernst Mayr, point out that of the billions of species that have existed on Earth, only one has become intelligent and, from this, infer a tiny value for "f"i. Likewise, proponents of the Rare Earth hypothesis, notwithstanding their low value for "n"e above, also think a low value for "f"i dominates the analysis. Those who favor higher values note the generally increasing complexity of life over time, concluding that the appearance of intelligence is almost inevitable, implying an "f"i approaching 1. Skeptics point out that the large spread of values in this factor and others makes all estimates unreliable. (See Criticism).
In addition, while it appears that life developed soon after the formation of Earth, the Cambrian explosion, in which a large variety of multicellular life forms came into being, occurred a considerable amount of time after the formation of Earth, which suggests the possibility that special conditions were necessary. Some scenarios such as the snowball Earth or research into extinction events have raised the possibility that life on Earth is relatively fragile. Research on any past life on Mars is relevant since a discovery that life did form on Mars but ceased to exist might raise the estimate of "f"l but would indicate that in half the known cases, intelligent life did not develop.
Estimates of "f"i have been affected by discoveries that the Solar System's orbit is circular in the galaxy, at such a distance that it remains out of the spiral arms for tens of millions of years (evading radiation from novae). Also, Earth's large moon may aid the evolution of life by stabilizing the planet's axis of rotation.
There has been quantitative work to begin to define formula_1. One example is a Bayesian analysis published in 2020. In the conclusion, the author cautions that this study applies to Earth's conditions. In Bayesian terms, the study favors the formation of intelligence on a planet with identical conditions to Earth but does not do so with high confidence.
Planetary scientist Pascal Lee of the SETI Institute proposes that this fraction is very low (0.0002). He based this estimate on how long it took Earth to develop intelligent life (1 million years since "Homo erectus" evolved, compared to 4.6 billion years since Earth formed).
Fraction of the above revealing their existence via signal release into space, "f"c.
For deliberate communication, the one example we have (the Earth) does not do much explicit communication, though there are some efforts covering only a tiny fraction of the stars that might look for human presence. (See Arecibo message, for example). There is considerable speculation why an extraterrestrial civilization might exist but choose not to communicate. However, deliberate communication is not required, and calculations indicate that current or near-future Earth-level technology might well be detectable to civilizations not too much more advanced than present day humans. By this standard, the Earth is a communicating civilization.
Another question is what percentage of civilizations in the galaxy are close enough for us to detect, assuming that they send out signals. For example, existing Earth radio telescopes could only detect Earth radio transmissions from roughly a light year away.
Lifetime of such a civilization wherein it communicates its signals into space, "L".
Michael Shermer estimated "L" as 420 years, based on the duration of sixty historical Earthly civilizations. Using 28 civilizations more recent than the Roman Empire, he calculates a figure of 304 years for "modern" civilizations. It could also be argued from Michael Shermer's results that the fall of most of these civilizations was followed by later civilizations that carried on the technologies, so it is doubtful that they are separate civilizations in the context of the Drake equation. In the expanded version, including "reappearance number", this lack of specificity in defining single civilizations does not matter for the end result, since such a civilization turnover could be described as an increase in the "reappearance number" rather than increase in "L", stating that a civilization reappears in the form of the succeeding cultures. Furthermore, since none could communicate over interstellar space, the method of comparing with historical civilizations could be regarded as invalid.
David Grinspoon has argued that once a civilization has developed enough, it might overcome all threats to its survival. It will then last for an indefinite period of time, making the value for "L" potentially billions of years. If this is the case, then he proposes that the Milky Way Galaxy may have been steadily accumulating advanced civilizations since it formed. He proposes that the last factor "L" be replaced with "f"IC · "T", where "f"IC is the fraction of communicating civilizations that become "immortal" (in the sense that they simply do not die out), and "T" representing the length of time during which this process has been going on. This has the advantage that "T" would be a relatively easy-to-discover number, as it would simply be some fraction of the age of the universe.
It has also been hypothesized that once a civilization has learned of a more advanced one, its longevity could increase because it can learn from the experiences of the other.
The astronomer Carl Sagan speculated that all of the terms, except for the lifetime of a civilization, are relatively high and the determining factor in whether there are large or small numbers of civilizations in the universe is the civilization lifetime, or in other words, the ability of technological civilizations to avoid self-destruction. In Sagan's case, the Drake equation was a strong motivating factor for his interest in environmental issues and his efforts to warn against the dangers of nuclear warfare. Paleobiologist Olev Vinn suggests that the lifetime of most technological civilizations is brief due to inherited behavior patterns present in all intelligent organisms. These behaviors, incompatible with civilized conditions, inevitably lead to self-destruction soon after the emergence of advanced technologies.
An intelligent civilization might not be organic, as some have suggested that artificial general intelligence may replace humanity.
Range of results.
As many skeptics have pointed out, the Drake equation can give a very wide range of values, depending on the assumptions, as the values used in portions of the Drake equation are not well established. In particular, the result can be "N" ≪ 1, meaning we are likely alone in the galaxy, or "N" ≫ 1, implying there are many civilizations we might contact. One of the few points of wide agreement is that the presence of humanity implies a probability of intelligence arising of greater than zero.
As an example of a low estimate, combining NASA's star formation rates, the rare Earth hypothesis value of "f"p · "n"e · "f"l = 10^−5, Mayr's view on intelligence arising, Drake's view of communication, and Shermer's estimate of lifetime:
"R"∗ = 1.5–3 yr^−1, "f"p · "n"e · "f"l = 10^−5, "f"i = 10^−9, "f"c = 0.2 [Drake, above], and "L" = 304 years
gives:
"N" = 1.5 × 10^−5 × 10^−9 × 0.2 × 304 = 9.1 × 10^−13
i.e., suggesting that we are probably alone in this galaxy, and possibly in the observable universe.
On the other hand, with larger values for each of the parameters above, values of "N" can be derived that are greater than 1. The following higher values have been proposed for each of the parameters:
"R"∗ = 1.5–3 yr^−1, "f"p = 1, "n"e = 0.2, "f"l = 0.13, "f"i = 1, "f"c = 0.2 [Drake, above], and "L" = 10^9 years
Use of these parameters gives:
"N" = 3 × 1 × 0.2 × 0.13 × 1 × 0.2 × 10^9 = 15,600,000
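A short Python sketch, using only the parameter values quoted in this section, reproduces both order-of-magnitude results (the function name is arbitrary):

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Low estimate: the combined fp*ne*fl = 1e-5 is folded into f_p below (n_e = f_l = 1),
# with R* = 1.5/yr, f_i = 1e-9, f_c = 0.2 and L = 304 yr.
print(drake(1.5, 1e-5, 1, 1, 1e-9, 0.2, 304))    # about 9.1e-13: probably alone in the galaxy
# High estimate: R* = 3/yr, f_p = 1, n_e = 0.2, f_l = 0.13, f_i = 1, f_c = 0.2, L = 1e9 yr.
print(drake(3, 1, 0.2, 0.13, 1, 0.2, 1e9))       # 15,600,000 communicating civilizations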
Monte Carlo simulations of estimates of the Drake equation factors based on a stellar and planetary model of the Milky Way have resulted in the number of civilizations varying by a factor of 100.
Possible former technological civilizations.
In 2016, Adam Frank and Woodruff Sullivan modified the Drake equation to determine just how unlikely the event of a technological species arising on a given habitable planet must be, to give the result that Earth hosts the "only" technological species that has "ever" arisen, for two cases: (a) this Galaxy, and (b) the universe as a whole. By asking this different question, one removes the lifetime and simultaneous communication uncertainties. Since the numbers of habitable planets per star can today be reasonably estimated, the only remaining unknown in the Drake equation is the probability that a habitable planet "ever" develops a technological species over its lifetime. For Earth to have the only technological species that has ever occurred in the universe, they calculate the probability of any given habitable planet ever developing a technological species must be less than . Similarly, for Earth to have been the only case of hosting a technological species over the history of this Galaxy, the odds of a habitable zone planet ever hosting a technological species must be less than (about 1 in 60 billion). The figure for the universe implies that it is extremely unlikely that Earth hosts the only technological species that has ever occurred. On the other hand, for this Galaxy one must think that fewer than 1 in 60 billion habitable planets develop a technological species for there not to have been at least a second case of such a species over the past history of this Galaxy.
Modifications.
As many observers have pointed out, the Drake equation is a very simple model that omits potentially relevant parameters, and many changes and modifications to the equation have been proposed. One line of modification, for example, attempts to account for the uncertainty inherent in many of the terms.
Combining the estimates of the original six factors by major researchers via a Monte Carlo procedure leads to a best value for the non-longevity factors of 0.85 1/years. This result differs insignificantly from the estimate of unity given both by Drake and the Cyclops report.
Others note that the Drake equation ignores many concepts that might be relevant to the odds of contacting other civilizations. For example, David Brin states: "The Drake equation merely speaks of the number of sites at which ETIs spontaneously arise. The equation says nothing directly about the contact cross-section between an ETIS and contemporary human society". Because it is the contact cross-section that is of interest to the SETI community, many additional factors and modifications of the Drake equation have been proposed.
The reappearance factor "n"r depends on what generally causes civilization extinction. If extinction is generally caused by temporary uninhabitability, for example a nuclear winter, then "n"r may be relatively high. On the other hand, if it is generally caused by permanent uninhabitability, such as stellar evolution, then "n"r may be almost zero. In the case of total life extinction, a similar factor may be applicable to "f"l, that is, "how many times" life may appear on a planet where it has appeared once.
The METI factor is somewhat misleading since active, purposeful transmission of messages by a civilization is not required for them to receive a broadcast sent by another that is seeking first contact. It is merely required they have capable and compatible receiver systems operational; however, this is a variable humans cannot accurately estimate.
The Seager equation looks like this:
formula_2
where:
"N" = the number of planets with detectable signs of life
"N"∗ = the number of stars observed
"F"Q = the fraction of stars that are quiet
"F"HZ = the fraction of stars with rocky planets in the habitable zone
"F"O = the fraction of those planets that can be observed
"F"L = the fraction that have life
"F"S = the fraction on which life produces a detectable signature gas
Seager stresses, "We're not throwing out the Drake Equation, which is really a different topic," explaining, "Since Drake came up with the equation, we have discovered thousands of exoplanets. We as a community have had our views revolutionized as to what could possibly be out there. And now we have a real question on our hands, one that's not related to intelligent life: Can we detect any signs of life in any way in the very near future?"
formula_3
where
and
Criticism.
Criticism of the Drake equation is varied. Firstly, many of the terms in the equation are largely or entirely based on conjecture. Star formation rates are well-known, and the incidence of planets has a sound theoretical and observational basis, but the other terms in the equation become very speculative. The uncertainties revolve around the present day understanding of the evolution of life, intelligence, and civilization, not physics. No statistical estimates are possible for some of the parameters, where only one example is known. The net result is that the equation cannot be used to draw firm conclusions of any kind, and the resulting margin of error is huge, far beyond what some consider acceptable or meaningful.
Others point out that the equation was formulated before our understanding of the universe had matured. Astrophysicist Ethan Siegel, said:
<templatestyles src="Template:Blockquote/styles.css" />The Drake equation, when it was put forth, made an assumption about the Universe that we now know is untrue: It assumed that the Universe was eternal and static in time. As we learned only a few years after Frank Drake first proposed his equation, the Universe doesn’t exist in a steady state, where it’s unchanging in time, but rather has evolved from a hot, dense, energetic, and rapidly expanding state: a hot Big Bang that occurred over a finite duration in our cosmic past.
One reply to such criticisms is that even though the Drake equation currently involves speculation about unmeasured parameters, it was intended as a way to stimulate dialogue on these topics. Then the focus becomes how to proceed experimentally. Indeed, Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference.
Fermi paradox.
A civilization lasting for tens of millions of years could be able to spread throughout the galaxy, even at the slow speeds foreseeable with present day technology. However, no confirmed signs of civilizations or intelligent life elsewhere have been found, either in this Galaxy or in the observable universe of 2 trillion galaxies. According to this line of thinking, the tendency to fill (or at least explore) all available territory seems to be a universal trait of living things, so the Earth should have already been colonized, or at least visited, but no evidence of this exists. Hence Fermi's question "Where is everybody?".
A large number of explanations have been proposed to explain this lack of contact; a book published in 2015 elaborated on 75 different explanations. In terms of the Drake Equation, the explanations can be divided into three classes:
These lines of reasoning lead to the Great Filter hypothesis, which states that since there are no observed extraterrestrial civilizations despite the vast number of stars, at least one step in the process must be acting as a filter to reduce the final value. According to this view, either it is very difficult for intelligent life to arise, or the lifetime of technologically advanced civilizations, or the period of time during which they reveal their existence, must be relatively short.
An analysis by Anders Sandberg, Eric Drexler and Toby Ord suggests "a substantial "ex ante" probability of there being no other intelligent life in our observable universe".
In fiction and popular culture.
The equation was cited by Gene Roddenberry as supporting the multiplicity of inhabited planets shown on "Star Trek", the television series he created. However, Roddenberry did not have the equation with him, and he was forced to "invent" it for his original proposal. The invented equation created by Roddenberry is:
formula_4
Regarding Roddenberry's fictional version of the equation, Drake himself commented that a number raised to the first power is just the number itself.
A commemorative plate on NASA's Europa Clipper mission, planned for launch in October 2024, features a poem by the U.S. Poet Laureate Ada Limón, waveforms of the word 'water' in 103 languages, a schematic of the water hole, the Drake equation, and a portrait of planetary scientist Ron Greeley on it.
The track "Abiogenesis" on the Carbon Based Lifeforms album World of Sleepers features the Drake equation in a spoken voice-over.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N = R_* \\cdot f_\\mathrm{p} \\cdot n_\\mathrm{e} \\cdot f_\\mathrm{l} \\cdot f_\\mathrm{i} \\cdot f_\\mathrm{c} \\cdot L"
},
{
"math_id": 1,
"text": "f_\\mathrm{l} \\cdot f_\\mathrm{i}"
},
{
"math_id": 2,
"text": "N = N_* \\cdot F_\\mathrm{Q} \\cdot F_\\mathrm{HZ} \\cdot F_\\mathrm{O} \\cdot F_\\mathrm{L} \\cdot F_\\mathrm{S}"
},
{
"math_id": 3,
"text": "N = N_\\mathrm{*} \\cdot f_\\mathrm{p} \\cdot n_\\mathrm{e} \\cdot f_\\mathrm{l} \\cdot f_\\mathrm{i} \\cdot f_\\mathrm{c} \\cdot f_\\mathrm{L}"
},
{
"math_id": 4,
"text": "Ff^2 (MgE)-C^1 Ri^1 \\cdot M=L/So "
}
] | https://en.wikipedia.org/wiki?curid=8912 |
8912350 | Discrete measure | In mathematics, more precisely in measure theory, a measure on the real line is called a discrete measure (in respect to the Lebesgue measure) if it is concentrated on an at most countable set. The support need not be a discrete set. Geometrically, a discrete measure (on the real line, with respect to Lebesgue measure) is a collection of point masses.
Definition and properties.
Let formula_0 and formula_1 be two (positive) σ-finite measures on a measurable space formula_2. Then formula_0 is said to be discrete with respect to formula_1 if there exists an at most countable subset formula_3 in formula_4 such that:
1. All singletons formula_5 with formula_6 are measurable (which implies that any subset of formula_7 is measurable),
2. formula_8
3. formula_9
A measure formula_0 on formula_2 is discrete (with respect to formula_1) if and only if formula_0 has the form
formula_10
with formula_11 and Dirac measures formula_12 on the set formula_13 defined as
formula_14
for all formula_15.
One can also define the concept of discreteness for signed measures. Then, instead of conditions 2 and 3 above one should ask that formula_1 be zero on all measurable subsets of formula_7 and formula_0 be zero on measurable subsets of formula_16
Example on R.
A measure formula_0 defined on the Lebesgue measurable sets of the real line with values in formula_17 is said to be discrete if there exists a (possibly finite) sequence of numbers
formula_18
such that
formula_19
Notice that the first two requirements in the previous section are always satisfied for an at most countable subset of the real line if formula_1 is the Lebesgue measure.
The simplest example of a discrete measure on the real line is the Dirac delta function formula_20 One has formula_21 and formula_22
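As a computational aside (an illustration, not from the article), a discrete measure with finitely many point masses can be stored as a dictionary from the points "s""i" to the weights "a""i"; the Python sketch below evaluates such a measure on a set described by a membership test and reproduces the two values of the Dirac measure just stated.

def measure(point_masses, indicator):
    # mu(A) for a discrete measure mu = sum_i a_i * delta_{s_i}, with A given as a membership test
    return sum(a for s, a in point_masses.items() if indicator(s))

dirac_at_0 = {0.0: 1.0}                          # the Dirac measure concentrated at 0
print(measure(dirac_at_0, lambda s: s != 0.0))   # 0.0, i.e. delta(R \ {0}) = 0
print(measure(dirac_at_0, lambda s: s == 0.0))   # 1.0, i.e. delta({0}) = 1

mu = {0.0: 0.5, 1.0: 0.25, 2.0: 0.25}            # a generic (here finite) discrete measure
print(measure(mu, lambda s: s <= 1.0))           # 0.75 = mu((-infinity, 1])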
More generally, one may prove that any discrete measure on the real line has the form
formula_23
for an appropriately chosen (possibly finite) sequence formula_24 of real numbers and a sequence formula_25 of numbers in formula_17 of the same length. | [
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "\\nu"
},
{
"math_id": 2,
"text": "(X, \\Sigma)"
},
{
"math_id": 3,
"text": "S \\subset X"
},
{
"math_id": 4,
"text": "\\Sigma"
},
{
"math_id": 5,
"text": "\\{s\\}"
},
{
"math_id": 6,
"text": "s \\in S"
},
{
"math_id": 7,
"text": "S"
},
{
"math_id": 8,
"text": "\\nu(S)=0\\,"
},
{
"math_id": 9,
"text": "\\mu(X\\setminus S)=0.\\,"
},
{
"math_id": 10,
"text": "\\mu = \\sum_{i=1}^{\\infty} a_i \\delta_{s_i}"
},
{
"math_id": 11,
"text": " a_i \\in \\mathbb{R}_{>0}"
},
{
"math_id": 12,
"text": "\\delta_{s_i}"
},
{
"math_id": 13,
"text": "S=\\{s_i\\}_{i\\in\\mathbb{N}}"
},
{
"math_id": 14,
"text": "\\delta_{s_i}(X) = \n\\begin{cases}\n1 & \\mbox { if } s_i \\in X\\\\ \n0 & \\mbox { if } s_i \\not\\in X\\\\ \n\\end{cases}\n"
},
{
"math_id": 15,
"text": "i\\in\\mathbb{N}"
},
{
"math_id": 16,
"text": "X\\backslash S."
},
{
"math_id": 17,
"text": "[0, \\infty]"
},
{
"math_id": 18,
"text": "s_1, s_2, \\dots \\,"
},
{
"math_id": 19,
"text": "\\mu(\\mathbb R\\backslash\\{s_1, s_2, \\dots\\})=0."
},
{
"math_id": 20,
"text": "\\delta."
},
{
"math_id": 21,
"text": "\\delta(\\mathbb R\\backslash\\{0\\})=0"
},
{
"math_id": 22,
"text": "\\delta(\\{0\\})=1."
},
{
"math_id": 23,
"text": "\\mu = \\sum_{i} a_i \\delta_{s_i}"
},
{
"math_id": 24,
"text": "s_1, s_2, \\dots"
},
{
"math_id": 25,
"text": "a_1, a_2, \\dots"
}
] | https://en.wikipedia.org/wiki?curid=8912350 |
891255 | Supermanifold | Supersymmetric generalization of manifolds
In physics and mathematics, supermanifolds are generalizations of the manifold concept based on ideas coming from supersymmetry. Several definitions are in use, some of which are described below.
Informal definition.
An informal definition is commonly used in physics textbooks and introductory lectures. It defines a supermanifold as a manifold with both bosonic and fermionic coordinates. Locally, it is composed of coordinate charts that make it look like a "flat", "Euclidean" superspace. These local coordinates are often denoted by
formula_0
where "x" is the (real-number-valued) spacetime coordinate, and formula_1 and formula_2 are Grassmann-valued spatial "directions".
The physical interpretation of the Grassmann-valued coordinates is the subject of debate; explicit experimental searches for supersymmetry have not yielded any positive results. However, the use of Grassmann variables allows for the tremendous simplification of a number of important mathematical results. This includes, among other things, a compact definition of functional integrals, the proper treatment of ghosts in BRST quantization, the cancellation of infinities in quantum field theory, Witten's work on the Atiyah-Singer index theorem, and more recent applications to mirror symmetry.
The use of Grassmann-valued coordinates has spawned the field of supermathematics, wherein large portions of geometry can be generalized to super-equivalents, including much of Riemannian geometry and most of the theory of Lie groups and Lie algebras (such as Lie superalgebras, "etc.") However, issues remain, including the proper extension of de Rham cohomology to supermanifolds.
Definition.
Three different definitions of supermanifolds are in use. One definition is as a sheaf over a ringed space; this is sometimes called the "algebro-geometric approach". This approach has a mathematical elegance, but can be problematic in various calculations and intuitive understanding. A second approach can be called a "concrete approach", as it is capable of simply and naturally generalizing a broad class of concepts from ordinary mathematics. It requires the use of an infinite number of supersymmetric generators in its definition; however, all but a finite number of these generators carry no content, as the concrete approach requires the use of a coarse topology that renders almost all of them equivalent. Surprisingly, these two definitions, one with a finite number of supersymmetric generators, and one with an infinite number of generators, are equivalent.
A third approach describes a supermanifold as a base topos of a superpoint. This approach remains the topic of active research.
Algebro-geometric: as a sheaf.
Although supermanifolds are special cases of noncommutative manifolds, their local structure makes them better suited to study with the tools of standard differential geometry and locally ringed spaces.
A supermanifold M of dimension ("p","q") is a topological space "M" with a sheaf of superalgebras, usually denoted "OM" or C∞(M), that is locally isomorphic to formula_3, where the latter is a Grassmann (Exterior) algebra on "q" generators.
A supermanifold M of dimension (1,1) is sometimes called a super-Riemann surface.
Historically, this approach is associated with Felix Berezin, Dimitry Leites, and Bertram Kostant.
Concrete: as a smooth manifold.
A different definition describes a supermanifold in a fashion that is similar to that of a smooth manifold, except that the model space formula_4 has been replaced by the "model superspace" formula_5.
To correctly define this, it is necessary to explain what formula_6 and formula_7 are. These are given as the even and odd real subspaces of the one-dimensional space of Grassmann numbers, which, by convention, are generated by a countably infinite number of anti-commuting variables: i.e. the one-dimensional space is given by formula_8 where "V" is infinite-dimensional. An element "z" is termed "real" if formula_9; real elements consisting of only an even number of Grassmann generators form the space formula_6 of "c-numbers", while real elements consisting of only an odd number of Grassmann generators form the space formula_7 of "a-numbers". Note that "c"-numbers commute, while "a"-numbers anti-commute. The spaces formula_10 and formula_11 are then defined as the "p"-fold and "q"-fold Cartesian products of formula_6 and formula_7.
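The distinction between "c"-numbers and "a"-numbers can be made concrete with a small computational sketch (an illustration only, using finitely many generators rather than the countably infinite set above, and with hypothetical function names): elements are stored as dictionaries from sorted tuples of generator indices to coefficients, products built from an even number of generators commute with everything, and the generators themselves anticommute and square to zero.

def gmul(a, b):
    # Multiply two Grassmann-algebra elements, each stored as a dict mapping sorted tuples
    # of generator indices to real coefficients.
    out = {}
    for k1, c1 in a.items():
        for k2, c2 in b.items():
            if set(k1) & set(k2):
                continue                         # any repeated generator gives zero
            merged = k1 + k2
            inversions = sum(1 for i in range(len(merged))
                             for j in range(i + 1, len(merged)) if merged[i] > merged[j])
            sign = -1.0 if inversions % 2 else 1.0   # sign of the permutation sorting the product
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0.0) + sign * c1 * c2
    return {k: v for k, v in out.items() if v != 0.0}

def xi(i):
    return {(i,): 1.0}                           # the generator xi_i

print(gmul(xi(1), xi(2)), gmul(xi(2), xi(1)))    # {(1, 2): 1.0} and {(1, 2): -1.0}: anticommutation
print(gmul(xi(1), xi(1)))                        # {}: every generator squares to zero
even = gmul(xi(1), xi(2))                        # an even, "c-number"-like element
print(gmul(even, xi(3)) == gmul(xi(3), even))    # True: even elements commute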
Just as in the case of an ordinary manifold, the supermanifold is then defined as a collection of charts glued together with differentiable transition functions. This definition in terms of charts requires that the transition functions have a smooth structure and a non-vanishing Jacobian. This can only be accomplished if the individual charts use a topology that is considerably coarser than the vector-space topology on the Grassmann algebra. This topology is obtained by projecting formula_10 down to formula_4 and then using the natural topology on that. The resulting topology is "not" Hausdorff, but may be termed "projectively Hausdorff".
That this definition is equivalent to the first one is not at all obvious; however, it is the use of the coarse topology that makes it so, by rendering most of the "points" identical. That is, formula_5 with the coarse topology is essentially isomorphic to formula_12
Properties.
Unlike a regular manifold, a supermanifold is not entirely composed of a set of points. Instead, one takes the dual point of view that the structure of a supermanifold M is contained in its sheaf "OM" of "smooth functions". In the dual point of view, an injective map corresponds to a surjection of sheaves, and a surjective map corresponds to an injection of sheaves.
An alternative approach to the dual point of view is to use the functor of points.
If M is a supermanifold of dimension ("p","q"), then the underlying space "M" inherits the structure of a differentiable manifold whose sheaf of smooth functions is "OM/I", where "I" is the ideal generated by all odd functions. Thus "M" is called the underlying space, or the body, of M. The quotient map "OM" → "OM/I" corresponds to an injective map "M" → M; thus "M" is a submanifold of M.
Batchelor's theorem.
Batchelor's theorem states that every supermanifold is noncanonically isomorphic to a supermanifold of the form Π"E". The word "noncanonically" prevents one from concluding that supermanifolds are simply glorified vector bundles; although the functor Π maps surjectively onto the isomorphism classes of supermanifolds, it is not an equivalence of categories. It was published by Marjorie Batchelor in 1979.
The proof of Batchelor's theorem relies in an essential way on the existence of a partition of unity, so it does not hold for complex or real-analytic supermanifolds.
Odd symplectic structures.
Odd symplectic form.
In many physical and geometric applications, a supermanifold comes equipped with a Grassmann-odd symplectic structure. All natural geometric objects on a supermanifold are graded. In particular, the bundle of two-forms is equipped with a grading. An odd symplectic form ω on a supermanifold is a closed, odd form, inducing a non-degenerate pairing on "TM". Such a supermanifold is called a P-manifold. Its graded dimension is necessarily ("n","n"), because the odd symplectic form induces a pairing of odd and even variables. There is a version of the Darboux theorem for P-manifolds, which allows one
to equip a P-manifold locally with a set of coordinates where the odd symplectic form ω is written as
formula_13
where formula_14 are even coordinates, and formula_15 odd coordinates. (An odd symplectic form should not be confused with a Grassmann-even symplectic form on a supermanifold. In contrast, the Darboux version of an even symplectic form is
formula_16
where formula_17 are even coordinates, formula_15 odd coordinates and formula_18 are either +1 or −1.)
Antibracket.
Given an odd symplectic 2-form ω one may define a Poisson bracket known as the antibracket of any two functions "F" and "G" on a supermanifold by
formula_19
Here formula_20 and formula_21 are the right and left derivatives respectively and "z" are the coordinates of the supermanifold. Equipped with this bracket, the algebra of functions on a supermanifold becomes an antibracket algebra.
A coordinate transformation that preserves the antibracket is called a P-transformation. If the Berezinian of a P-transformation is equal to one then it is called an SP-transformation.
P and SP-manifolds.
Using the Darboux theorem for odd symplectic forms one can show that P-manifolds are constructed from open sets of superspaces formula_22 glued together by P-transformations. A manifold is said to be an SP-manifold if these transition functions can be chosen to be SP-transformations. Equivalently one may define an SP-manifold as a supermanifold with a nondegenerate odd 2-form ω and a density function ρ such that on each coordinate patch there exist Darboux coordinates in which ρ is identically equal to one.
Laplacian.
One may define a Laplacian operator Δ on an SP-manifold as the operator which takes a function "H" to one half of the divergence of the corresponding Hamiltonian vector field. Explicitly one defines
formula_23
In Darboux coordinates this definition reduces to
formula_24
where "x""a" and "θ""a" are even and odd coordinates such that
formula_25
The Laplacian is odd and nilpotent
formula_26
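As a sanity check of the Darboux-coordinate formula above (a sketch on a single (1,1)-dimensional patch, not part of the original text), write a superfunction as "H" = "f"("x") + "θ" "g"("x") and represent it by the pair ("f", "g"); the odd Laplacian then sends ("f", "g") to ("g"′, 0), and applying it twice gives zero, illustrating the nilpotency.

import sympy as sp

x = sp.symbols('x')

def odd_laplacian(H):
    # H = (f, g) stands for the superfunction f(x) + theta*g(x) in Darboux coordinates (x, theta);
    # Delta H = d/dx (d_left/d_theta H) = g'(x), which has no theta-component.
    f, g = H
    return (sp.diff(g, x), sp.Integer(0))

H = (sp.sin(x), x * sp.exp(x))
print(odd_laplacian(H))                    # (x*exp(x) + exp(x), 0)
print(odd_laplacian(odd_laplacian(H)))     # (0, 0): Delta squared annihilates H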
One may define the cohomology of functions "H" with respect to the Laplacian. In Geometry of Batalin-Vilkovisky quantization, Albert Schwarz has proven that the integral of a function "H" over a Lagrangian submanifold "L" depends only on the cohomology class of "H" and on the homology class of the body of "L" in the body of the ambient supermanifold.
SUSY.
A pre-SUSY-structure on a supermanifold of dimension ("n","m") is an odd "m"-dimensional distribution formula_27. With such a distribution one associates its Frobenius tensor formula_28 (since "P" is odd, the skew-symmetric Frobenius tensor is a symmetric operation). If this tensor is non-degenerate, e.g. lies in an open orbit of formula_29, "M" is called "a SUSY-manifold". A SUSY-structure in dimension (1, "k") is the same as an odd contact structure.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(x,\\theta,\\bar{\\theta})"
},
{
"math_id": 1,
"text": "\\theta\\,"
},
{
"math_id": 2,
"text": "\\bar{\\theta}"
},
{
"math_id": 3,
"text": "C^\\infty(\\mathbb{R}^p)\\otimes\\Lambda^\\bullet(\\xi_1,\\dots\\xi_q)"
},
{
"math_id": 4,
"text": "\\mathbb{R}^p"
},
{
"math_id": 5,
"text": "\\mathbb{R}^p_c\\times\\mathbb{R}^q_a"
},
{
"math_id": 6,
"text": "\\mathbb{R}_c"
},
{
"math_id": 7,
"text": "\\mathbb{R}_a"
},
{
"math_id": 8,
"text": "\\mathbb{C}\\otimes\\Lambda(V),"
},
{
"math_id": 9,
"text": "z=z^*"
},
{
"math_id": 10,
"text": "\\mathbb{R}^p_c"
},
{
"math_id": 11,
"text": "\\mathbb{R}^q_a"
},
{
"math_id": 12,
"text": "\\mathbb{R}^p\\otimes\\Lambda^\\bullet(\\xi_1,\\dots\\xi_q)"
},
{
"math_id": 13,
"text": "\\omega = \\sum_{i} d\\xi_i \\wedge dx_i , "
},
{
"math_id": 14,
"text": "x_i"
},
{
"math_id": 15,
"text": "\\xi_i"
},
{
"math_id": 16,
"text": "\\sum_i dp_i \\wedge dq_i+\\sum_j \\frac{\\varepsilon_j}{2}(d\\xi_j)^2, "
},
{
"math_id": 17,
"text": "p_i,q_i"
},
{
"math_id": 18,
"text": "\\varepsilon_j"
},
{
"math_id": 19,
"text": "\\{F,G\\}=\\frac{\\partial_rF}{\\partial z^i}\\omega^{ij}(z)\\frac{\\partial_lG}{\\partial z^j}."
},
{
"math_id": 20,
"text": "\\partial_r"
},
{
"math_id": 21,
"text": "\\partial_l"
},
{
"math_id": 22,
"text": "{\\mathcal{R}}^{n|n}"
},
{
"math_id": 23,
"text": "\\Delta H=\\frac{1}{2\\rho}\\frac{\\partial_r}{\\partial z^a}\\left(\\rho\\omega^{ij}(z)\\frac{\\partial_l H}{\\partial z^j}\\right). "
},
{
"math_id": 24,
"text": "\\Delta=\\frac{\\partial_r}{\\partial x^a}\\frac{\\partial_l}{\\partial \\theta_a}"
},
{
"math_id": 25,
"text": "\\omega=dx^a\\wedge d\\theta_a."
},
{
"math_id": 26,
"text": "\\Delta^2=0."
},
{
"math_id": 27,
"text": "P \\subset TM"
},
{
"math_id": 28,
"text": "S^2 P \\mapsto TM/P"
},
{
"math_id": 29,
"text": "GL(P) \\times GL(TM/P)"
}
] | https://en.wikipedia.org/wiki?curid=891255 |
891263 | Poisson supermanifold | Concept in differential geometry
In differential geometry a Poisson supermanifold is a differential supermanifold M such that the supercommutative algebra of smooth functions over it, formula_0, is equipped with a bilinear map called the Poisson superbracket, turning it into a Poisson superalgebra. (To clarify: M is not a point-set space and so does not "really" exist; strictly speaking, this algebra is all we have.)
Every symplectic supermanifold is a Poisson supermanifold but not vice versa. | [
{
"math_id": 0,
"text": "C^\\infty(M)"
}
] | https://en.wikipedia.org/wiki?curid=891263 |
891398 | Factor (programming language) | Stack-oriented programming language
Factor is a stack-oriented programming language created by Slava Pestov. Factor is dynamically typed and has automatic memory management, as well as powerful metaprogramming features. The language has a single implementation featuring a self-hosted optimizing compiler and an interactive development environment. The Factor distribution includes a large standard library.
History.
Slava Pestov created Factor in 2003 as a scripting language for a video game. The initial implementation, now referred to as JFactor, was implemented in Java and ran on the Java Virtual Machine. Though the early language resembled modern Factor superficially in terms of syntax, the modern language is very different in practical terms and the current implementation is much faster.
The language has changed significantly over time. Originally, Factor programs centered on manipulating Java objects with Java's reflection capabilities. From the beginning, the design philosophy has been to modify the language to suit programs written in it. As the Factor implementation and standard libraries grew more detailed, the need for certain language features became clear, and they were added. JFactor did not have an object system where the programmer could define their own classes, and early versions of native Factor were the same; the language was similar to Scheme in this way. Today, the object system is a central part of Factor. Other important language features such as tuple classes, combinator inlining, macros, user-defined parsing words and the modern vocabulary system were only added in a piecemeal fashion as their utility became clear.
The foreign function interface was present from very early versions to Factor, and an analogous system existed in JFactor. This was chosen over creating a plugin to the C part of the implementation for each external library that Factor should communicate with, and has the benefit of being more declarative, faster to compile and easier to write.
The Java implementation initially consisted of just an interpreter, but a compiler to Java bytecode was later added. This compiler only worked on certain procedures. The Java version of Factor was replaced by a version written in C and Factor. Initially, this consisted of just an interpreter, but the interpreter was replaced by two compilers, used in different situations. Over time, the Factor implementation has grown significantly faster.
Description.
Factor is a dynamically typed, functional and object-oriented programming language. Code is structured around small procedures, called words. In typical code, these are 1–3 lines long, and a procedure more than 7 lines long is very rare. Something that would idiomatically be expressed with one procedure in another programming language would be written as several words in Factor.
Each word takes a fixed number of arguments and has a fixed number of return values. Arguments to words are passed on a data stack, using reverse Polish notation. The stack is used just to organize calls to words, and not as a data structure. The stack in Factor is used in a similar way to the stack in Forth; for this, they are both considered stack languages. For example, below is a snippet of code that prints out "hello world" to the current output stream:
"hello world" print
codice_0 is a word in the codice_1 vocabulary that takes a string from the stack and returns nothing. It prints the string to the current output stream (by default, the terminal or the graphical listener).
The factorial function formula_0 can be implemented in Factor in the following way:
: factorial ( n -- n! ) dup 1 > [ [1,b] product ] [ drop 1 ] if ;
Not all data has to be passed around only with the stack. Lexically scoped local variables let one store and access temporaries used within a procedure. Dynamically scoped variables are used to pass things between procedure calls without using the stack. For example, the current input and output streams are stored in dynamically scoped variables.
Factor emphasizes flexibility and the ability to extend the language. There is a system for macros, as well as for arbitrary extension of Factor syntax. Factor's syntax is often extended to allow for new types of word definitions and new types of literals for data structures. It is also used in the XML library to provide literal syntax for generating XML. For example, the following word takes a string and produces an XML document object which is an HTML document emphasizing the string:
: make-html ( string -- xml )
dup
<XML
<html>
<head><title><-></title></head>
<body><h1><-></h1></body>
</html>
XML> ;
The word codice_2 duplicates the top item on the stack. The codice_3 stands for filling in that part of the XML document with an item from the stack.
Implementation and libraries.
Factor includes a large standard library, written entirely in the language. These include
A foreign function interface is built into Factor, allowing for communication with C, Objective-C and Fortran programs. There is also support for executing and communicating with shaders written in GLSL.
Factor is implemented in Factor and C++. It was originally bootstrapped from an earlier Java implementation. Today, the parser and the optimizing compiler are written in the language. Certain basic parts of the language are implemented in C++ such as the garbage collector and certain primitives.
Factor uses an image-based model, analogous to many Smalltalk implementations, where compiled code and data are stored in an image. To compile a program, the program is loaded into an image and the image is saved. A special tool assists in the process of creating a minimal image to run a particular program, packaging the result into something that can be deployed as a standalone application.
The Factor compiler implements many advanced optimizations and has been used as a target for research in new optimization techniques.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n!"
}
] | https://en.wikipedia.org/wiki?curid=891398 |
8914599 | Helix–coil transition model | Helix–coil transition models are formalized techniques in statistical mechanics developed to describe conformations of linear polymers in solution. The models are usually but not exclusively applied to polypeptides as a measure of the relative fraction of the molecule in an alpha helix conformation versus turn or random coil. The main attraction in investigating alpha helix formation is that one encounters many of the features of protein folding but in their simplest version. Most of the helix–coil models contain parameters for the likelihood of helix nucleation from a coil region, and helix propagation along the sequence once nucleated; because polypeptides are directional and have distinct N-terminal and C-terminal ends, propagation parameters may differ in each direction.
The two states are helix and coil.
Common transition models include the Zimm–Bragg model and the Lifson–Roig model, and their extensions and variations.
Energy of host poly-alanine helix in aqueous solution:
formula_0
where "m" is the number of residues in the helix.
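As an illustration of how such a model is evaluated (a sketch only; the weighting convention and parameter values below are assumptions, not taken from this article), the following Python code enumerates all helix/coil configurations of a short chain under a common Zimm–Bragg convention: a coil residue has weight 1, a helical residue extending an existing helix has weight "s", and a helical residue nucleating a new helical stretch has weight "σs". A small nucleation parameter "σ" makes helix initiation rare and lowers the overall helicity.

from itertools import product

def helix_fraction(n, s, sigma):
    # Brute-force Zimm-Bragg partition function over all 2**n helix ('h') / coil ('c') states.
    Z = 0.0
    mean_h = 0.0
    for conf in product('hc', repeat=n):
        w = 1.0
        for i, state in enumerate(conf):
            if state == 'h':
                w *= s if i > 0 and conf[i - 1] == 'h' else sigma * s   # propagation vs. nucleation
        Z += w
        mean_h += w * conf.count('h') / n
    return mean_h / Z                      # average helical fraction of the chain

print(helix_fraction(15, s=1.3, sigma=1e-3))   # rare nucleation: mostly coil
print(helix_fraction(15, s=1.3, sigma=1.0))    # no nucleation penalty: substantially helical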
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\Delta G_\\text{folding} = (m-2)\\Delta H_\\alpha - m T \\Delta S \n"
}
] | https://en.wikipedia.org/wiki?curid=8914599 |
891470 | Eaton Hodgkinson | British engineer
Eaton Hodgkinson FRS (26 February 1789 – 18 June 1861) was an English engineer, a pioneer of the application of mathematics to problems of structural design.
Early life.
Hodgkinson was born in the village of Anderton, near Northwich, Cheshire, to a farming family. His father died when he was six years old, and he was raised with his two sisters by his mother, who maintained the farming business. She sent her son to Witton Grammar School in Northwich where he studied the classics with the intention that he would fulfill the family's ambition that he prepare for a career in the Church of England. Unfortunately, the regime was unsuited to his tastes and talents which were already showing promise in mathematics. His mother moved him to a less prestigious private school in Northwich where his enthusiasm for mathematics was encouraged and fostered but, as the young Hodgkinson grew physically, he became indispensable on the family farm and soon left education to devote himself there.
However, farming was no more to his taste than Greek and Latin and his mother yearned to satisfy her son's appetites. Family friends advised that Hodgkinson might find some more suitable outlet in nearby Manchester and so, in 1811, the family left for Salford to open a pawnbroking business. Hodgkinson used all his spare time in reading science and mathematics and soon introduced himself into Manchester's scientific community, meeting, among others, his future collaborator, Sir William Fairbairn. He became a pupil of John Dalton, studying mathematics, and the two remained firm friends until Dalton's death in 1844. He retired early from the family business to devote a modest pension to his scientific work.
He married twice, to Catherine Johns and to a Miss Holditch. There were no children.
Scientific work.
Hodgkinson measured the strength of columns of materials including cast iron and marble in a series of experiments.
Hodgkinson worked with Sir William Fairbairn in Manchester on the design of iron beams, especially on the Water Street bridge for the Liverpool and Manchester Railway in 1828–30. His improved cross section was published by the Manchester Literary and Philosophical Society in 1830 and influenced much nineteenth century structural engineering. He derived the empirical formula for a concentrated load, "W" (in tons), at which a beam will fail as a function of its length between simple supports, "L" (in inches); its depth, "d" (in inches); and its bottom-flange area, "A" (inch²):
formula_0
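A toy calculation (the beam dimensions below are invented purely for illustration) shows how the rule would be applied in practice:

def hodgkinson_breaking_load(flange_area_in2, depth_in, span_in):
    # W = 26 A d / L, with W in tons, A in square inches, d and L in inches
    return 26.0 * flange_area_in2 * depth_in / span_in

print(hodgkinson_breaking_load(10.0, 12.0, 180.0))   # about 17.3 tons for this hypothetical beam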
His expertise with beams led to his retention, along with Fairbairn, as consultant on the novel tubular design for the Britannia Bridge. Fairbairn built and tested several prototypes, and developed the final form adopted for the bridge. Both Hodgkinson and Robert Stephenson believed that extra chains would be needed to support the heavy spans, so the towers were built with spaces for the chains. Fairbairn, however, insisted that chains would not be necessary, and his opinion prevailed. He was right, and chains were never used, but the towers remain with their empty recesses.
Later years.
Hodgkinson was elected a Fellow of the Royal Society in 1841 and, in 1847, he became professor of the mechanical principles of engineering at University College London. In 1849, he was appointed by the UK Parliament to participate in a Royal Commission to investigate the application of iron in railroad structures, performing some early investigations of metal fatigue.
Towards the end of his life, his mental faculties failed and he died at Higher Broughton, Salford.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "W=\\frac {26Ad} {L}"
}
] | https://en.wikipedia.org/wiki?curid=891470 |
8918323 | Conservative system | In mathematics, a conservative system is a dynamical system which stands in contrast to a dissipative system. Roughly speaking, such systems have no friction or other mechanism to dissipate the dynamics, and thus, their phase space does not shrink over time. Precisely speaking, they are those dynamical systems that have a null wandering set: under time evolution, no portion of the phase space ever "wanders away", never to be returned to or revisited. Alternately, conservative systems are those to which the Poincaré recurrence theorem applies. An important special case of conservative systems are the measure-preserving dynamical systems.
Informal introduction.
Informally, dynamical systems describe the time evolution of the phase space of some mechanical system. Commonly, such evolution is given by some differential equations, or quite often in terms of discrete time steps. However, in the present case, instead of focusing on the time evolution of discrete points, one shifts attention to the time evolution of collections of points. One such example would be Saturn's rings: rather than tracking the time evolution of individual grains of sand in the rings, one is instead interested in the time evolution of the density of the rings: how the density thins out, spreads, or becomes concentrated. Over short time-scales (hundreds of thousands of years), Saturn's rings are stable, and are thus a reasonable example of a conservative system and more precisely, a measure-preserving dynamical system. It is measure-preserving, as the number of particles in the rings does not change, and, per Newtonian orbital mechanics, the phase space is incompressible: it can be stretched or squeezed, but not shrunk (this is the content of Liouville's theorem).
Formal definition.
Formally, a measurable dynamical system is conservative if and only if it is non-singular, and has no wandering sets of positive measure.
A measurable dynamical system ("X", Σ, "μ", "τ") is a Borel space ("X", Σ) equipped with a sigma-finite measure "μ" and a transformation "τ". Here, "X" is a set, and Σ is a sigma-algebra on "X", so that the pair ("X", Σ) is a measurable space. "μ" is a sigma-finite measure on the sigma-algebra. The space "X" is the phase space of the dynamical system.
A transformation (a map) formula_0 is said to be Σ-measurable if and only if, for every "σ" ∈ Σ, one has formula_1. The transformation is a single "time-step" in the evolution of the dynamical system. One is interested in invertible transformations, so that the current state of the dynamical system came from a well-defined past state.
A measurable transformation formula_0 is called non-singular when formula_2 if and only if formula_3. In this case, the system ("X", Σ, "μ", "τ") is called a non-singular dynamical system. The condition of being non-singular is necessary for a dynamical system to be suitable for modeling (non-equilibrium) systems. That is, if a certain configuration of the system is "impossible" (i.e. formula_3) then it must stay "impossible" (was always impossible: formula_2), but otherwise, the system can evolve arbitrarily. Non-singular systems preserve the negligible sets, but are not required to preserve any other class of sets. The sense of the word "singular" here is the same as in the definition of a singular measure in that no portion of formula_4 is singular with respect to formula_5 and vice versa.
A non-singular dynamical system for which formula_6 is called invariant, or, more commonly, a measure-preserving dynamical system.
A non-singular dynamical system is conservative if, for every set formula_7 of positive measure and for every formula_8, one has some integer formula_9 such that formula_10. Informally, this can be interpreted as saying that the current state of the system revisits or comes arbitrarily close to a prior state; see Poincaré recurrence for more.
A non-singular transformation formula_0 is incompressible if, whenever one has formula_11, then formula_12.
Properties.
For a non-singular transformation formula_0, the following statements are equivalent: formula_15 is conservative; formula_15 is incompressible; every wandering set of formula_15 is a null set; and for every set formula_7 of positive measure, formula_13.
The above implies that, if formula_14 and formula_15 is measure-preserving, then the dynamical system is conservative. This is effectively the modern statement of the Poincaré recurrence theorem. A sketch of a proof of the equivalence of these four properties is given in the article on the Hopf decomposition.
Suppose that formula_14 and formula_15 is measure-preserving. Let formula_16 be a wandering set of formula_15. By definition of wandering sets and since formula_15 preserves formula_4, formula_17 would thus contain a countably infinite union of pairwise disjoint sets that have the same formula_4-measure as formula_16. Since it was assumed formula_14, it follows that formula_16 is a null set, and so all wandering sets must be null sets.
This argument fails for even the simplest examples if formula_18. Indeed, consider for instance formula_19, where formula_20 denotes the Lebesgue measure, and consider the shift operator formula_21. Since the Lebesgue measure is translation-invariant, formula_15 is measure-preserving. However, formula_22 is not conservative. In fact, every interval of length strictly less than formula_23 contained in formula_17 is wandering. In particular, formula_17 can be written as a countable union of wandering sets.
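As an informal illustration of the contrast between the finite- and infinite-measure cases (our own example, not part of the formal theory), the following Python sketch compares an irrational rotation of the circle, which preserves a finite measure and therefore exhibits Poincaré recurrence, with the shift on the real line, whose orbits leave any bounded interval for good:

```python
import math

def first_return_time(x0, step, max_iter=10**6):
    """Smallest n >= 1 with tau^n(x0) back in [0, 0.1), for the circle
    rotation tau(x) = (x + step) mod 1, a finite-measure system."""
    x = x0
    for n in range(1, max_iter + 1):
        x = (x + step) % 1.0
        if x < 0.1:
            return n
    return None  # no return observed within max_iter steps

# Finite measure (circle rotation): recurrence occurs, as guaranteed.
print(first_return_time(0.05, math.sqrt(2) % 1.0))  # a small integer

# Infinite measure (shift x -> x + 1 on the real line): the orbit of any
# point leaves [0, 1) after finitely many steps and never comes back,
# so the interval is wandering and the system is not conservative.
```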
Hopf decomposition.
The Hopf decomposition states that every measure space with a non-singular transformation can be decomposed into an invariant conservative set and a wandering (dissipative) set. A commonplace informal example of Hopf decomposition is the mixing of two liquids (some textbooks mention rum and coke): The initial state, where the two liquids are not yet mixed, can never recur after mixing; it is part of the dissipative set. The same is true of any partially mixed state. The result, after mixing (a cuba libre, in the canonical example), is stable, and forms the conservative set; further mixing does not alter it. In this example, the conservative set is also ergodic: if one added one more drop of liquid (say, lemon juice), it would not stay in one place, but would come to mix in everywhere. One word of caution about this example: although mixing systems are ergodic, ergodic systems are not in general mixing systems. Mixing implies an interaction which may not exist. The canonical example of an ergodic system that does not mix is the Bernoulli process: it is the set of all possible infinite sequences of coin flips (equivalently, the set formula_24 of infinite strings of zeros and ones); each individual coin flip is independent of the others.
Ergodic decomposition.
The ergodic decomposition theorem states, roughly, that every conservative system can be split up into components, each component of which is individually ergodic. An informal example of this would be a tub, with a divider down the middle, with liquids filling each compartment. The liquid on one side can clearly mix with itself, and so can the other, but, due to the partition, the two sides cannot interact. Clearly, this can be treated as two independent systems; leakage between the two sides, of measure zero, can be ignored. The ergodic decomposition theorem states that all conservative systems can be split into such independent parts, and that this splitting is unique (up to differences of measure zero). Thus, by convention, the study of conservative systems becomes the study of their ergodic components.
Formally, every ergodic system is conservative. Recall that an invariant set σ ∈ Σ is one for which "τ"("σ") = "σ". For an ergodic system, the only invariant sets are those with measure zero or with full measure (are null or are conull); that they are conservative then follows trivially from this.
When "τ" is ergodic, the following statements are equivalent: formula_15 is conservative and ergodic; for every set formula_28 of positive measure, formula_25 (that is, formula_28 "sweeps out" almost all of formula_17); for every set formula_28 of positive measure and almost every formula_26, there is some formula_8 such that formula_27; for every pair of sets formula_28 and formula_29 of positive measure, there is some formula_8 such that formula_30; and if formula_28 satisfies formula_11, then either formula_3 or formula_31. | [
{
"math_id": 0,
"text": " \\tau: X \\to X "
},
{
"math_id": 1,
"text": "\\tau^{-1}\\sigma \\in \\Sigma"
},
{
"math_id": 2,
"text": "\\mu(\\tau^{-1}\\sigma)=0"
},
{
"math_id": 3,
"text": "\\mu(\\sigma)=0"
},
{
"math_id": 4,
"text": "\\mu"
},
{
"math_id": 5,
"text": "\\mu\\circ\\tau^{-1}"
},
{
"math_id": 6,
"text": "\\mu(\\tau^{-1}\\sigma)=\\mu(\\sigma)"
},
{
"math_id": 7,
"text": "\\sigma\\in\\Sigma"
},
{
"math_id": 8,
"text": "n\\in \\mathbb{N}"
},
{
"math_id": 9,
"text": " p > n "
},
{
"math_id": 10,
"text": "\\mu(\\sigma\\cap\\tau^{-p} \\sigma) > 0 "
},
{
"math_id": 11,
"text": "\\tau^{-1}\\sigma\\subset\\sigma"
},
{
"math_id": 12,
"text": "\\mu(\\sigma \\smallsetminus \\tau^{-1}\\sigma)=0"
},
{
"math_id": 13,
"text": "\\mu\\left(\\sigma\\smallsetminus\\bigcup_{n=1}^\\infty\\tau^{-n}\\sigma\\right)=0"
},
{
"math_id": 14,
"text": "\\mu(X)<\\infty"
},
{
"math_id": 15,
"text": "\\tau"
},
{
"math_id": 16,
"text": "A"
},
{
"math_id": 17,
"text": "X"
},
{
"math_id": 18,
"text": "\\mu(X)=\\infty"
},
{
"math_id": 19,
"text": "(X, \\mathcal A, \\mu)=(\\mathbb R, \\mathcal B(\\mathbb R), \\lambda)"
},
{
"math_id": 20,
"text": "\\lambda"
},
{
"math_id": 21,
"text": "\\tau: X\\to X, x\\mapsto x+1"
},
{
"math_id": 22,
"text": "(X, \\mathcal A, \\mu,\\tau)"
},
{
"math_id": 23,
"text": "1"
},
{
"math_id": 24,
"text": "\\{0,1\\}^\\mathbb{N}"
},
{
"math_id": 25,
"text": "\\mu\\left(X\\smallsetminus\\bigcup_{n=1}^\\infty\\tau^{-n}\\sigma\\right) = 0"
},
{
"math_id": 26,
"text": "x\\in X"
},
{
"math_id": 27,
"text": "\\tau^n x\\in\\sigma"
},
{
"math_id": 28,
"text": "\\sigma"
},
{
"math_id": 29,
"text": "\\rho"
},
{
"math_id": 30,
"text": "\\mu\\left(\\tau^{-n}\\rho\\cap\\sigma\\right)>0"
},
{
"math_id": 31,
"text": "\\mu(\\sigma^c)=0"
}
] | https://en.wikipedia.org/wiki?curid=8918323 |
8918642 | Alperin–Brauer–Gorenstein theorem | In mathematics, the Alperin–Brauer–Gorenstein theorem characterizes the finite simple groups with quasidihedral or wreathed Sylow 2-subgroups. These are isomorphic either to three-dimensional projective special linear groups or projective special unitary groups over a finite field of odd order, depending on a certain congruence, or to the Mathieu group formula_0. Alperin, Brauer, and Gorenstein proved this in the course of 261 pages. The subdivision by 2-fusion is sketched there, given as an exercise in , and presented in some detail in .
{
"math_id": 0,
"text": "M_{11}"
}
] | https://en.wikipedia.org/wiki?curid=8918642 |
8919563 | Russo–Vallois integral | In mathematical analysis, the Russo–Vallois integral is an extension to stochastic processes of the classical Riemann–Stieltjes integral
formula_0
for suitable functions formula_1 and formula_2. The idea is to replace the derivative formula_3 by the difference quotient
formula_4 and to pull the limit out of the integral. In addition one changes the type of convergence.
Definitions.
Definition: A sequence formula_5 of stochastic processes converges uniformly on compact sets in probability to a process formula_6
formula_7
if, for every formula_8 and formula_9
formula_10
One sets:
formula_11
formula_12
and
formula_13
Definition: The forward integral is defined as the ucp-limit of
formula_14: formula_15
Definition: The backward integral is defined as the ucp-limit of
formula_16: formula_17
Definition: The generalized bracket is defined as the ucp-limit of
formula_18: formula_19
For continuous semimartingales formula_20 and a càdlàg function H, the Russo–Vallois integral coincides with the usual Itô integral:
formula_21
In this case the generalized bracket is equal to the classical covariation. In the special case where the two semimartingales coincide, this means that the process
formula_22
is equal to the quadratic variation process.
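As an informal numerical illustration (our own Monte Carlo sketch, not taken from the literature), the discretised bracket can be computed for a simulated Brownian path on [0, 1]; the result is close to 1, the quadratic variation of Brownian motion at time 1:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-4, 1.0
n = int(T / dt)
# A sample Brownian path on a regular grid of spacing dt.
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

def bracket(path, eps, dt):
    """(1/eps) * integral of (path(s+eps) - path(s))^2 ds, discretised."""
    k = int(eps / dt)                  # eps expressed in grid steps
    incr = path[k:] - path[:-k]        # path(s + eps) - path(s)
    return np.sum(incr ** 2) * dt / eps

print(bracket(W, eps=0.01, dt=dt))     # approximately T = 1.0
```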
An Itô formula also holds for the Russo–Vallois integral: if formula_23 is a continuous semimartingale and
formula_24
then
formula_25
By a duality result of Triebel, one can identify optimal classes of Besov spaces in which the Russo–Vallois integral can be defined. The norm in the Besov space
formula_26
is given by
formula_27
with the well-known modification for formula_28. Then the following theorem holds:
Theorem: Suppose
formula_29
formula_30
formula_31
Then the Russo–Vallois integral
formula_32
exists and for some constant formula_33 one has
formula_34
Notice that in this case the Russo–Vallois integral coincides with the Riemann–Stieltjes integral and with the Young integral for functions with finite p-variation. | [
{
"math_id": 0,
"text": "\\int f \\, dg=\\int fg' \\, ds"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "g"
},
{
"math_id": 3,
"text": "g'"
},
{
"math_id": 4,
"text": "g(s+\\varepsilon)-g(s)\\over\\varepsilon"
},
{
"math_id": 5,
"text": "H_n"
},
{
"math_id": 6,
"text": "H,"
},
{
"math_id": 7,
"text": "H=\\text{ucp-}\\lim_{n\\rightarrow\\infty}H_n,"
},
{
"math_id": 8,
"text": "\\varepsilon>0"
},
{
"math_id": 9,
"text": "T>0,"
},
{
"math_id": 10,
"text": "\\lim_{n\\rightarrow\\infty}\\mathbb{P}(\\sup_{0\\leq t\\leq T}|H_n(t)-H(t)|>\\varepsilon)=0."
},
{
"math_id": 11,
"text": "I^-(\\varepsilon,t,f,dg)={1\\over\\varepsilon}\\int_0^tf(s)(g(s+\\varepsilon)-g(s))\\,ds"
},
{
"math_id": 12,
"text": "I^+(\\varepsilon,t,f,dg)={1\\over\\varepsilon}\\int_0^t f(s)(g(s)-g(s-\\varepsilon)) \\, ds"
},
{
"math_id": 13,
"text": "[f,g]_\\varepsilon (t)={1\\over \\varepsilon}\\int_0^t(f(s+\\varepsilon)-f(s))(g(s+\\varepsilon)-g(s))\\,ds."
},
{
"math_id": 14,
"text": "I^-"
},
{
"math_id": 15,
"text": "\int_0^t f \, d^-g=\text{ucp-}\lim_{\varepsilon\rightarrow 0}I^-(\varepsilon,t,f,dg)."
},
{
"math_id": 16,
"text": "I^+"
},
{
"math_id": 17,
"text": "\int_0^t f \, d^+g = \text{ucp-}\lim_{\varepsilon\rightarrow 0}I^+(\varepsilon,t,f,dg)."
},
{
"math_id": 18,
"text": "[f,g]_\\varepsilon"
},
{
"math_id": 19,
"text": "[f,g](t)=\text{ucp-}\lim_{\varepsilon\rightarrow 0}[f,g]_\varepsilon (t)."
},
{
"math_id": 20,
"text": "X,Y"
},
{
"math_id": 21,
"text": "\\int_0^t H_s \\, dX_s=\\int_0^t H \\, d^-X."
},
{
"math_id": 22,
"text": "[X]:=[X,X] \\, "
},
{
"math_id": 23,
"text": "X"
},
{
"math_id": 24,
"text": "f\\in C_2(\\mathbb{R}),"
},
{
"math_id": 25,
"text": "f(X_t)=f(X_0)+\\int_0^t f'(X_s) \\, dX_s + {1\\over 2}\\int_0^t f''(X_s) \\, d[X]_s."
},
{
"math_id": 26,
"text": "B_{p,q}^\\lambda(\\mathbb{R}^N)"
},
{
"math_id": 27,
"text": "||f||_{p,q}^\\lambda=||f||_{L_p} + \\left(\\int_0^\\infty {1\\over |h|^{1+\\lambda q}}(||f(x+h)-f(x)||_{L_p})^q \\, dh\\right)^{1/q}"
},
{
"math_id": 28,
"text": "q=\\infty"
},
{
"math_id": 29,
"text": "f\\in B_{p,q}^\\lambda,"
},
{
"math_id": 30,
"text": "g\\in B_{p',q'}^{1-\\lambda},"
},
{
"math_id": 31,
"text": "1/p+1/p'=1\\text{ and }1/q+1/q'=1."
},
{
"math_id": 32,
"text": "\\int f \\, dg"
},
{
"math_id": 33,
"text": "c"
},
{
"math_id": 34,
"text": "\\left| \\int f \\, dg \\right| \\leq c ||f||_{p,q}^\\alpha ||g||_{p',q'}^{1-\\alpha}."
}
] | https://en.wikipedia.org/wiki?curid=8919563 |
892014 | Carathéodory's theorem (convex hull) | Point in the convex hull of a set P in Rd, is the convex combination of d+1 points in P
Carathéodory's theorem is a theorem in convex geometry. It states that if a point formula_0 lies in the convex hull formula_1 of a set formula_2, then formula_0 lies in some "formula_3"-dimensional simplex with vertices in formula_4. Equivalently, formula_0 can be written as the convex combination of at most formula_5 points in formula_4. Additionally, formula_0 can be written as the convex combination of at most formula_5 "extremal" points in formula_4, as non-extremal points can be removed from formula_4 without changing the membership of "formula_0" in the convex hull.
An equivalent theorem for conical combinations states that if a point formula_0 lies in the conical hull formula_6 of a set formula_2, then formula_0 can be written as the conical combination of at most formula_3 points in formula_4.
Two other theorems of Helly and Radon are closely related to Carathéodory's theorem: the latter theorem can be used to prove the former theorems and vice versa.
The result is named for Constantin Carathéodory, who proved the theorem in 1911 for the case when formula_4 is compact. In 1914 Ernst Steinitz expanded Carathéodory's theorem for arbitrary sets.
Example.
Carathéodory's theorem in 2 dimensions states that we can construct a triangle consisting of points from "P" that encloses any point in the convex hull of "P".
For example, let "P" = {(0,0), (0,1), (1,0), (1,1)}. The convex hull of this set is a square. Let "x" = (1/4, 1/4) in the convex hull of "P". We can then construct a set {(0,0),(0,1),(1,0)} = "P"′, the convex hull of which is a triangle and encloses "x."
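The weights of such a convex combination can be found by solving a small linear system. The following Python sketch (an informal illustration of this particular example) recovers them for "x" = (1/4, 1/4):

```python
import numpy as np

# Vertices of P' and the target point x.
P = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
x = np.array([0.25, 0.25])

# Solve sum_i w_i p_i = x together with sum_i w_i = 1.
A = np.vstack([P.T, np.ones(3)])
b = np.append(x, 1.0)
w = np.linalg.solve(A, b)
print(w)   # [0.5, 0.25, 0.25]; all weights are nonnegative
```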
Proof.
Note: We will only use the fact that formula_7 is an ordered field, so the theorem and its proof work even when formula_7 is replaced by any field formula_8 equipped with a total order.
We first formally state Carathéodory's theorem:
<templatestyles src="Math_theorem/styles.css" />
Carathéodory's theorem — If formula_9, then formula_0 is the nonnegative sum of at most formula_3 points of formula_10.
If formula_11, then formula_0 is the convex sum of at most formula_5 points of formula_10.
The essence of Carathéodory's theorem is in the finite case. This reduction to the finite case is possible because formula_12 is the set of "finite" convex combinations of elements of formula_10 (see the convex hull page for details).
<templatestyles src="Math_theorem/styles.css" />
Lemma — If formula_13, then formula_14 there exist formula_15 such that formula_16, with at most formula_3 of them nonzero.
With the lemma, Carathéodory's theorem is a simple extension:
<templatestyles src="Math_proof/styles.css" />Proof of Carathéodory's theorem
For any formula_17, write formula_18 for some formula_19; then formula_20, and we apply the lemma.
The second part reduces to the first part by "lifting up one dimension", a common trick used to reduce affine geometry to linear algebra, and reduce convex bodies to convex cones.
Explicitly, let formula_21, then identify formula_22 with the subset formula_23. This induces an embedding of formula_10 into formula_24.
Any formula_25, by the first part, has a "lifted" representation formula_26 where at most formula_5 of formula_27 are nonzero. That is, formula_28, and formula_29, which completes the proof.
<templatestyles src="Math_proof/styles.css" />Proof of lemma
This is trivial when formula_30. If we can prove it for all formula_31, then by induction we have proved it for all formula_32. Thus it remains to prove it for formula_31. This we prove by induction on formula_3.
Base case: formula_33, trivial.
Induction case. Represent formula_34. If some formula_35, then the proof is finished. So assume all formula_36.
If formula_37 is linearly dependent, then we can use induction on its linear span to eliminate one nonzero term in formula_38, and thus eliminate it in formula_34 as well.
Else, there exists formula_39, such that formula_40. Then we can interpolate a full interval of representations:
formula_41
If formula_42 for all formula_43, then set formula_44. Otherwise, let formula_45 be the smallest value for which one of the coefficients satisfies formula_46. Then we obtain a nonnegative representation of formula_0 with formula_47 nonzero terms.
Alternative proofs use Helly's theorem or the Perron–Frobenius theorem.
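The argument is constructive: whenever more than formula_5 points carry positive weight, an affine dependence among them can be used to shift weight until one coefficient vanishes. The following Python sketch (our own illustration, which finds the dependence with an SVD) applies this reduction to points in the plane:

```python
import numpy as np

def caratheodory_reduce(points, weights, tol=1e-12):
    """Rewrite x = sum_i w_i q_i (w_i >= 0, sum w_i = 1) so that at most
    d + 1 of the points carry positive weight. `points` is (N, d)."""
    points = np.asarray(points, dtype=float)
    weights = np.array(weights, dtype=float)       # work on a copy
    d = points.shape[1]
    active = np.flatnonzero(weights > tol)
    while active.size > d + 1:
        P = points[active]
        # Affine dependence: u with P.T @ u = 0 and sum(u) = 0, u != 0.
        A = np.vstack([P.T, np.ones(active.size)])
        u = np.linalg.svd(A)[2][-1]
        # Shift weight along the dependence until one coefficient hits 0.
        pos = u > tol
        theta = np.min(weights[active][pos] / u[pos])
        weights[active] = weights[active] - theta * u
        weights[weights < tol] = 0.0
        active = np.flatnonzero(weights > tol)
    return weights

# Example: a planar point written with 6 points is rewritten with <= 3.
pts = np.random.default_rng(1).random((6, 2))
w = np.full(6, 1 / 6)
w2 = caratheodory_reduce(pts, w)
assert np.allclose(w @ pts, w2 @ pts) and np.count_nonzero(w2) <= 3
```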
Variants.
Carathéodory's number.
For any nonempty formula_2, define its Carathéodory number to be the smallest integer formula_48 such that for any formula_49 there exists a representation of formula_0 as a convex sum of at most formula_48 elements of formula_4.
Carathéodory's theorem states that any nonempty subset of formula_22 has Carathéodory number formula_50. This upper bound is not necessarily attained. For example, the unit sphere in formula_22 has Carathéodory number equal to 2, since any point inside the sphere is a convex sum of two points on the sphere.
With additional assumptions on formula_2, upper bounds strictly lower than formula_5 can be obtained.
Dimensionless variant.
Adiprasito, Bárány, Mustafa and Terpai proved a variant of Carathéodory's theorem that does not depend on the dimension of the space.
Colorful Carathéodory theorem.
Let "X"1, ..., "X""d"+1 be sets in R"d" and let "x" be a point contained in the intersection of the convex hulls of all these "d"+1 sets.
Then there is a set "T" = {"x"1, ..., "x""d"+1}, where "x"1 ∈ "X"1, ..., "x""d"+1 ∈ "X""d"+1, such that the convex hull of "T" contains the point "x".
By viewing the sets "X"1, ..., "X""d"+1 as different colors, the set "T" is made by points of all colors, hence the "colorful" in the theorem's name. The set "T" is also called a "rainbow simplex", since it is a "d"-dimensional simplex in which each corner has a different color.
This theorem has a variant in which the convex hull is replaced by the conical hull. Let "X"1, ..., "X""d" be sets in Rd and let "x" be a point contained in the intersection of the "conical hulls" of all these "d" sets. Then there is a set "T" = {"x"1, ..., "x""d"}, where "x"1 ∈ "X"1, ..., "x""d" ∈ "X""d", such that the "conical hull" of "T" contains the point "x".
Mustafa and Ray extended this colorful theorem from points to convex bodies.
The computational problem of finding the colorful set lies in the intersection of the complexity classes PPAD and PLS.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "\\mathrm{Conv}(P)"
},
{
"math_id": 2,
"text": "P\\subset \\R^d"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "d+1"
},
{
"math_id": 6,
"text": "\\mathrm{Cone}(P)"
},
{
"math_id": 7,
"text": "\\R"
},
{
"math_id": 8,
"text": "\\mathbb F"
},
{
"math_id": 9,
"text": "x \\in \\mathrm{Cone}(S) \\subset \\R^d"
},
{
"math_id": 10,
"text": "S"
},
{
"math_id": 11,
"text": "x \\in \\mathrm{Conv}(S) \\subset \\R^d"
},
{
"math_id": 12,
"text": "\\mathrm{Conv}(S)"
},
{
"math_id": 13,
"text": "q_1, ..., q_N \\in \\R^d"
},
{
"math_id": 14,
"text": "\\forall x \\in \\mathrm{Conv}(\\{q_1, ..., q_N\\})"
},
{
"math_id": 15,
"text": "w_1, ..., w_N \\geq 0"
},
{
"math_id": 16,
"text": "x = \\sum_n w_n q_n"
},
{
"math_id": 17,
"text": "x\\in \\mathrm{Conv}(S)"
},
{
"math_id": 18,
"text": "x=\\sum_{n=1}^N w_n q_n"
},
{
"math_id": 19,
"text": "q_1, ..., q_N\\in S"
},
{
"math_id": 20,
"text": "x\\in \\mathrm{Conv}(\\{q_1, ..., q_N\\})"
},
{
"math_id": 21,
"text": "S\\subset \\R^d"
},
{
"math_id": 22,
"text": "\\R^d"
},
{
"math_id": 23,
"text": "\\{w \\in \\R^{d+1}: w_{d+1} = 1\\}"
},
{
"math_id": 24,
"text": "S \\times \\{1\\}\\subset \\R^{d+1}"
},
{
"math_id": 25,
"text": "x\in \mathrm{Conv}(S)"
},
{
"math_id": 26,
"text": "(x, 1) = \\sum_{n=1}^N w_n (q_n, 1)"
},
{
"math_id": 27,
"text": "w_n"
},
{
"math_id": 28,
"text": "x = \\sum_{n=1}^N w_n q_n"
},
{
"math_id": 29,
"text": "1 =\\sum_{n=1}^N w_n"
},
{
"math_id": 30,
"text": "N \\leq d"
},
{
"math_id": 31,
"text": "N = d+1"
},
{
"math_id": 32,
"text": "N \\geq d+1"
},
{
"math_id": 33,
"text": "d=1, N = 2"
},
{
"math_id": 34,
"text": "x = \\sum_{n=1}^{d+1} w_n q_n"
},
{
"math_id": 35,
"text": " w_n = 0"
},
{
"math_id": 36,
"text": "w_n > 0"
},
{
"math_id": 37,
"text": "\\{q_1, ..., q_{d}\\}"
},
{
"math_id": 38,
"text": "\\sum_{n=1}^d \\frac{w_n}{w_1 + \\cdots + w_d}q_n"
},
{
"math_id": 39,
"text": "(u_1, ..., u_{d}) \\in \\R^{d}"
},
{
"math_id": 40,
"text": "\\sum_{n=1}^{d} u_n q_n = q_{d+1}"
},
{
"math_id": 41,
"text": "x = \\sum_{n=1}^{d} (w_n+ \\theta w_{d+1}u_n) q_n + (1-\\theta) w_{d+1} q_{d+1}; \\quad \\theta\\in[0, 1]"
},
{
"math_id": 42,
"text": "w_n + w_{d+1} u_n \\geq 0"
},
{
"math_id": 43,
"text": "n=1, ..., d"
},
{
"math_id": 44,
"text": "\\theta = 1"
},
{
"math_id": 45,
"text": "\\theta"
},
{
"math_id": 46,
"text": "w_n + \\theta w_{d+1} u_n = 0"
},
{
"math_id": 47,
"text": "\\leq d"
},
{
"math_id": 48,
"text": "r"
},
{
"math_id": 49,
"text": "x\\in \\mathrm{Conv}(P)"
},
{
"math_id": 50,
"text": "\\leq d+1"
}
] | https://en.wikipedia.org/wiki?curid=892014 |
8920338 | Kautz filter | In signal processing, the Kautz filter, named after William H. Kautz, is a fixed-pole transversal filter, published in 1954.
Like Laguerre filters, Kautz filters can be implemented using a cascade of all-pass filters, with a one-pole lowpass filter at each tap between the all-pass sections.
Orthogonal set.
Given a set of real poles formula_0, the Laplace transform of the Kautz orthonormal basis is defined as the product of a one-pole lowpass factor with an increasing-order allpass factor:
formula_1
formula_2
formula_3.
In the time domain, this is equivalent to
formula_4,
where "ani" are the coefficients of the partial fraction expansion:
formula_5
For discrete-time Kautz filters, the same formulas are used, with "z" in place of "s".
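As an informal numerical check (our own illustration, not from the original paper), the first two basis functions can be written out in the time domain and their Gram matrix computed by quadrature; it comes out close to the identity, as orthonormality requires:

```python
import numpy as np

a1, a2 = 1.0, 3.0                      # two distinct real poles -a1, -a2
t, dt = np.linspace(0.0, 40.0, 400001, retstep=True)

phi1 = np.sqrt(2 * a1) * np.exp(-a1 * t)

# Partial-fraction expansion of Phi_2(s) = sqrt(2 a2)/(s+a2) * (s-a1)/(s+a1).
A = (a1 + a2) / (a2 - a1)              # weight of exp(-a2 t)
B = -2 * a1 / (a2 - a1)                # weight of exp(-a1 t)
phi2 = np.sqrt(2 * a2) * (A * np.exp(-a2 * t) + B * np.exp(-a1 * t))

gram = np.array([[np.sum(f * g) * dt for g in (phi1, phi2)]
                 for f in (phi1, phi2)])
print(np.round(gram, 3))               # approximately the 2x2 identity
```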
Relation to Laguerre polynomials.
If all poles coincide at "s = -a", then the Kautz series can be written as
formula_6,
where "Lk" denotes Laguerre polynomials.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\{-\\alpha_1, -\\alpha_2, \\ldots, -\\alpha_n\\}"
},
{
"math_id": 1,
"text": "\\Phi_1(s) = \\frac{\\sqrt{2 \\alpha_1}} {(s+\\alpha_1)}"
},
{
"math_id": 2,
"text": "\\Phi_2(s) = \\frac{\\sqrt{2 \\alpha_2}} {(s+\\alpha_2)} \\cdot \\frac{(s-\\alpha_1)}{(s+\\alpha_1)}"
},
{
"math_id": 3,
"text": "\\Phi_n(s) = \\frac{\\sqrt{2 \\alpha_n}} {(s+\\alpha_n)} \\cdot \\frac{(s-\\alpha_1)(s-\\alpha_2) \\cdots (s-\\alpha_{n-1})}\n {(s+\\alpha_1)(s+\\alpha_2) \\cdots (s+\\alpha_{n-1})}"
},
{
"math_id": 4,
"text": "\\phi_n(t) = a_{n1}e^{-\\alpha_1 t} + a_{n2}e^{-\\alpha_2 t} + \\cdots + a_{nn}e^{-\\alpha_n t}"
},
{
"math_id": 5,
"text": "\\Phi_n(s) = \\sum_{i=1}^{n} \\frac{a_{ni}}{s+\\alpha_i}"
},
{
"math_id": 6,
"text": "\\phi_k(t) = \\sqrt{2a}(-1)^{k-1}e^{-at}L_{k-1}(2at)"
}
] | https://en.wikipedia.org/wiki?curid=8920338 |
8921202 | DSSP (algorithm) | The DSSP algorithm is the standard method for assigning secondary structure to the amino acids of a protein, given the atomic-resolution coordinates of the protein. The abbreviation is only mentioned once in the 1983 paper describing this algorithm, where it is the name of the Pascal program that implements the algorithm "Define Secondary Structure of Proteins".
Algorithm.
DSSP begins by identifying the intra-backbone hydrogen bonds of the protein using a purely electrostatic definition, assigning partial charges of −0.42 "e" and +0.20 "e" to the carbonyl oxygen and amide hydrogen respectively, with their opposites assigned to the carbonyl carbon and amide nitrogen. A hydrogen bond is identified if "E" in the following equation is less than −0.5 kcal/mol:
formula_0
where the formula_1 terms indicate the distance between atoms A and B, taken from the carbon (C) and oxygen (O) atoms of the C=O group and the nitrogen (N) and hydrogen (H) atoms of the N-H group.
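A direct transcription of this criterion into Python (an informal sketch; distances are assumed to be in ångströms, with the energy in kcal/mol, exactly as in the formula above) is:

```python
def dssp_hbond_energy(r_on, r_ch, r_oh, r_cn):
    """Electrostatic hydrogen-bond energy (kcal/mol) from the four
    inter-atomic distances between the C=O and N-H groups."""
    return 0.084 * (1.0 / r_on + 1.0 / r_ch - 1.0 / r_oh - 1.0 / r_cn) * 332.0

def is_hbond(r_on, r_ch, r_oh, r_cn, cutoff=-0.5):
    """DSSP counts a hydrogen bond when E is below -0.5 kcal/mol."""
    return dssp_hbond_energy(r_on, r_ch, r_oh, r_cn) < cutoff
```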
Based on this, nine types of secondary structure are assigned. The 310 helix, α helix and π helix have symbols G, H and I and are recognized by having a repetitive sequence of hydrogen bonds in which the residues are three, four, or five residues apart respectively. Two types of beta sheet structure exist: a beta bridge has symbol B, while longer sets of hydrogen bonds and beta bulges have symbol E. T is used for turns, featuring hydrogen bonds typical of helices, and S is used for regions of high curvature (where the angle between formula_2 and formula_3 is at least 70°). As of DSSP version 4, PPII helices are also detected, based on a combination of backbone torsion angles and the absence of hydrogen bonds compatible with other types; PPII helices have symbol P. A blank (or space) is used if no other rule applies, referring to loops. These types are usually grouped into three larger classes: helix (G, H and I), strand (E and B) and loop (S, T, and C, where C is sometimes also represented as a blank space).
π helices.
In the original DSSP algorithm, residues were preferentially assigned to α helices, rather than π helices. In 2011, it was shown that DSSP failed to annotate many "cryptic" π helices, which are commonly flanked by α helices. In 2012, DSSP was rewritten so that the assignment of π helices was given preference over α helices, resulting in better detection of π helices. Versions of DSSP from 2.1.0 onwards therefore produce slightly different output from older versions.
Variants.
In 2002, a continuous DSSP assignment was developed by introducing multiple hydrogen bond thresholds, where the new assignment was found to correlate with protein motion.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nE = 0.084 \\left\\{ \\frac{1}{r_{ON}} + \\frac{1}{r_{CH}} - \\frac{1}{r_{OH}} - \\frac{1}{r_{CN}} \\right\\} \\cdot 332 \\, \\mathrm{kcal/mol}\n"
},
{
"math_id": 1,
"text": "r_{AB}"
},
{
"math_id": 2,
"text": "\\overrightarrow{C_i^\\alpha C_{i+2}^\\alpha}"
},
{
"math_id": 3,
"text": "\\overrightarrow{C_{i-2}^\\alpha C_i^\\alpha}"
}
] | https://en.wikipedia.org/wiki?curid=8921202 |
8921317 | Lifson–Roig model | In polymer science, the Lifson–Roig model
is a helix-coil transition model applied to the alpha helix-random coil transition of polypeptides; it is a refinement of the Zimm–Bragg model that recognizes that a polypeptide alpha helix is stabilized by a hydrogen bond only once three consecutive residues have adopted the helical conformation. To consider three consecutive residues each with two states (helix and coil), the Lifson–Roig model uses a 4x4 transfer matrix instead of the 2x2 transfer matrix of the Zimm–Bragg model, which considers only two consecutive residues. However, the simple nature of the coil state allows this to be reduced to a 3x3 matrix for most applications.
The Zimm–Bragg and Lifson–Roig models are but the first two in a series of analogous transfer-matrix methods in polymer science that have also been applied to nucleic acids and branched polymers. The transfer-matrix approach is especially elegant for homopolymers, since the statistical mechanics may be solved exactly using a simple eigenanalysis.
Parameterization.
The Lifson–Roig model is characterized by three parameters: the statistical weight for nucleating a helix, the weight for propagating a helix and the weight for forming a hydrogen bond, which is granted only if three consecutive residues are in a helical state. Weights are assigned at each position in a polymer as a function of the conformation of the residue in that position and as a function of its two neighbors. A statistical weight of 1 is assigned to the "reference state" of a coil unit whose neighbors are both coils, and a "nucleation" unit is defined (somewhat arbitrarily) as two consecutive helical units neighbored by a coil. A major modification of the original Lifson–Roig model introduces "capping" parameters for the helical termini, in which the N- and C-terminal capping weights may vary independently. The correlation matrix for this modification can be represented as a matrix M, reflecting the statistical weights of the helix state "h" and coil state "c".
The Lifson–Roig model may be solved by the transfer-matrix method using a 4x4 transfer matrix "M", in which "w" is the statistical weight for helix propagation, "v" for initiation, "n" for N-terminal capping, and "c" for C-terminal capping. (In the traditional model "n" and "c" are equal to 1.) The partition function for the helix-coil transition equilibrium is
formula_0
where "V" is the end vector formula_1, arranged to ensure the coil state of the first and last residues in the polymer.
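The mechanics of this calculation are easy to sketch in code. The following Python fragment (our own illustration) evaluates "Z" for a homopolymer by raising a 4x4 transfer matrix to the appropriate power; the entries of "M" in terms of "w", "v", "n" and "c" are given in the references and are not reproduced here, and the helper name "build_lifson_roig_matrix" is purely hypothetical:

```python
import numpy as np

def lifson_roig_partition(M, N):
    """Z = V (product of N+2 copies of M) V~ for a homopolymer of N
    residues, with the end vector V = [0, 0, 0, 1] enforcing coil ends.
    M must be the 4x4 Lifson-Roig matrix taken from the literature."""
    V = np.array([0.0, 0.0, 0.0, 1.0])
    return V @ np.linalg.matrix_power(M, N + 2) @ V

# Hypothetical usage, once M has been filled in with published weights:
# M = build_lifson_roig_matrix(w=1.6, v=0.05, n=1.0, c=1.0)
# Z = lifson_roig_partition(M, N=20)
```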
This strategy for parameterizing helix-coil transitions was originally developed for alpha helices, whose hydrogen bonds occur between residues "i" and "i+4"; however, it is straightforward to extend the model to 310 helices and pi helices, with "i+3" and "i+5" hydrogen bonding patterns respectively. The complete alpha/310/pi transfer matrix includes weights for transitions between helix types as well as between helix and coil states. However, because 310 helices are much more common in the tertiary structures of proteins than pi helices, extension of the Lifson–Roig model to accommodate 310 helices - resulting in a 9x9 transfer matrix when capping is included - has found a greater range of application. Analogous extensions of the Zimm–Bragg model have been put forth but have not accommodated mixed helical conformations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nZ = V \\left(\\prod_{i=0}^{N+1} M(i)\\right)\\tilde{V}\n"
},
{
"math_id": 1,
"text": "V=[0 0 0 1]"
}
] | https://en.wikipedia.org/wiki?curid=8921317 |
8921481 | Multiplicity (statistical mechanics) | Number of microstates for a given macrostate of a thermodynamic system
In statistical mechanics, multiplicity (also called statistical weight) refers to the number of microstates corresponding to a particular macrostate of a thermodynamic system. Commonly denoted formula_0, it is related to the configuration entropy of an isolated system via Boltzmann's entropy formula
formula_1
where formula_2 is the entropy and formula_3 is the Boltzmann constant.
Example: the two-state paramagnet.
A simplified model of the two-state paramagnet provides an example of the process of calculating the multiplicity of a particular macrostate. This model consists of a system of N microscopic dipoles μ which may either be aligned or anti-aligned with an externally applied magnetic field B. Let formula_4 represent the number of dipoles that are aligned with the external field and formula_5 represent the number of anti-aligned dipoles. The energy of a single aligned dipole is formula_6 while the energy of an anti-aligned dipole is formula_7 thus the overall energy of the system is
formula_8
The goal is to determine the multiplicity as a function of U; from there, the entropy and other thermodynamic properties of the system can be determined. However, it is useful as an intermediate step to calculate multiplicity as a function of formula_4 and formula_9 This approach shows that the number of available macrostates is "N" + 1. For example, in a very small system with "N" = 2 dipoles, there are three macrostates, corresponding to formula_10 Since the formula_11 and formula_12 macrostates require both dipoles to be either anti-aligned or aligned, respectively, the multiplicity of either of these states is 1. However, in the case formula_13 either dipole can be chosen as the aligned dipole, so the multiplicity is 2. In the general case, the multiplicity of a state, or the number of microstates, with formula_4 aligned dipoles follows from combinatorics, resulting in
formula_14
where the second step follows from the fact that formula_15
Since formula_16 the energy U can be related to formula_4 and formula_5 as follows:
formula_17
Thus the final expression for multiplicity as a function of internal energy is
formula_18
This can be used to calculate entropy in accordance with Boltzmann's entropy formula; from there one can calculate other useful properties such as temperature and heat capacity.
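For example, a short Python sketch (an informal illustration) computes the multiplicity and the corresponding Boltzmann entropy directly from the binomial coefficient:

```python
from math import comb, log

K_B = 1.380649e-23      # Boltzmann constant in J/K

def multiplicity(N, N_up):
    """Number of microstates with N_up aligned dipoles out of N."""
    return comb(N, N_up)

def entropy(N, N_up):
    """Boltzmann entropy S = k_B ln(Omega) of that macrostate."""
    return K_B * log(multiplicity(N, N_up))

# The tiny N = 2 example from the text has multiplicities 1, 2, 1.
print([multiplicity(2, k) for k in range(3)])
print(entropy(100, 50))  # entropy is largest when N_up = N/2
```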
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Omega"
},
{
"math_id": 1,
"text": "S = k_\\text{B} \\log \\Omega,"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "k_\\text{B}"
},
{
"math_id": 4,
"text": "N_\\uparrow"
},
{
"math_id": 5,
"text": "N_\\downarrow"
},
{
"math_id": 6,
"text": "U_\\uparrow = -\\mu B,"
},
{
"math_id": 7,
"text": "U_\\downarrow = \\mu B;"
},
{
"math_id": 8,
"text": "U = (N_\\downarrow-N_\\uparrow)\\mu B."
},
{
"math_id": 9,
"text": "N_\\downarrow."
},
{
"math_id": 10,
"text": "N_\\uparrow=0, 1, 2."
},
{
"math_id": 11,
"text": "N_\\uparrow = 0"
},
{
"math_id": 12,
"text": "N_\\uparrow = 2"
},
{
"math_id": 13,
"text": "N_\\uparrow = 1,"
},
{
"math_id": 14,
"text": "\\Omega = \\frac{N!}{N_\\uparrow!(N-N_\\uparrow)!} = \\frac{N!}{N_\\uparrow!N_\\downarrow!},"
},
{
"math_id": 15,
"text": "N_\\uparrow+N_\\downarrow = N."
},
{
"math_id": 16,
"text": "N_\\uparrow - N_\\downarrow = -\\tfrac{U}{\\mu B},"
},
{
"math_id": 17,
"text": "\\begin{align}\nN_\\uparrow &= \\frac{N}{2} - \\frac{U}{2\\mu B}\\\\[4pt]\nN_\\downarrow &= \\frac{N}{2} + \\frac{U}{2\\mu B}.\n\\end{align}"
},
{
"math_id": 18,
"text": "\\Omega = \\frac{N!}{ \\left(\\frac{N}{2} - \\frac{U}{2\\mu B} \\right)! \\left( \\frac{N}{2} + \\frac{U}{2\\mu B} \\right)!}."
}
] | https://en.wikipedia.org/wiki?curid=8921481 |
8921722 | Racah parameter | The Racah parameters are a set of parameters used in atomic and molecular spectroscopy to describe the amount of total electrostatic repulsion in an atom that has multiple electrons.
When an atom has more than one electron, there will be some electrostatic repulsion between the electrons. The amount of repulsion varies from atom to atom, depending upon the number of electrons, their spin, and the orbitals that they occupy. The total repulsion can be expressed in terms of three parameters "A", "B" and "C" which are known as the "Racah parameters" after Giulio Racah, who first described them. They are generally obtained empirically from gas-phase spectroscopic studies of atoms.
They are often used in transition-metal chemistry to describe the repulsion energy associated with an electronic term. For example, the interelectronic repulsion of a 3P term is "A" + 7"B", and of a 3F term is "A" - 8"B", and the difference between them is therefore 15"B".
Definition.
The Racah parameters are defined as
formula_0
where formula_1 are Slater integrals
formula_2
and formula_3 are the Slater-Condon parameters
formula_4
where formula_5 is the normalized radial part of an electron orbital, formula_6, and formula_7.
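In practice one often needs to convert tabulated Slater–Condon parameters into Racah parameters; a small Python helper (an informal sketch of the relations above, for d electrons) is:

```python
def racah_from_slater_condon(F0, F2, F4):
    """Racah A, B, C from the Slater-Condon parameters F^0, F^2, F^4,
    using F_0 = F^0, F_2 = F^2/49 and F_4 = F^4/441 as above."""
    f0, f2, f4 = F0, F2 / 49.0, F4 / 441.0
    A = f0 - 49.0 * f4
    B = f2 - 5.0 * f4
    C = 35.0 * f4
    return A, B, C
```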
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \n\\begin{pmatrix}\nA \\\\ B\\\\ C\\\\\n\\end{pmatrix} = \\begin{pmatrix}\n1 & 0 & -49\\\\\n0 & 1 & -5 \\\\\n0 & 0 & 35 \\\\\n\\end{pmatrix}\\begin{pmatrix}\nF_0 \\\\ F_2\\\\ F_4\\\\\n\\end{pmatrix}\n"
},
{
"math_id": 1,
"text": "F_k"
},
{
"math_id": 2,
"text": "\n\\begin{pmatrix}\nF_0 \\\\ F_2\\\\ F_4\\\\\n\\end{pmatrix} = \\begin{pmatrix}\nF^0 \\\\ \\frac{1}{49}F^2\\\\ \\frac{1}{441}F^4\\\\\n\\end{pmatrix}\n"
},
{
"math_id": 3,
"text": "F^k"
},
{
"math_id": 4,
"text": "\nF^k := \\int_0^\\infty r_1^2 dr_1 \\int_0^\\infty r_2^2 dr_2 R^2(r_1) R^2(r_2)\\frac{r_<^k}{r_>^{k+1}} \n"
},
{
"math_id": 5,
"text": "R(r)"
},
{
"math_id": 6,
"text": "r_> = \\max\\{r_1,r_2\\}"
},
{
"math_id": 7,
"text": "r_< = \\min\\{r_1,r_2\\}"
}
] | https://en.wikipedia.org/wiki?curid=8921722 |
8921769 | Measuring poverty |
Poverty is measured in different ways by different bodies, both governmental and nongovernmental. Measurements can be absolute, which references a single standard, or relative, which is dependent on context. Poverty is widely understood to be multidimensional, comprising social, natural and economic factors situated within wider socio-political processes.
The main poverty line used in the OECD and the European Union is a relative poverty measure based on 60% of the median household income. The United States uses an absolute poverty measure based on the U.S. Department of Agriculture's "economy food plan", adjusted for inflation. The World Bank also defines poverty in absolute terms. It defines "extreme poverty" as living on less than US$1.90 per day (PPP), and "moderate poverty" as less than $3.10 a day.
It has been estimated that in 2008, 1.4 billion people had consumption levels below US$1.25 a day and 2.7 billion lived on less than $2 a day.
Absolute vs relative poverty.
When measured, poverty may be absolute or relative. Absolute poverty refers to a set standard which is consistent over time and between countries. An example of an absolute measurement would be the percentage of the population eating less food than is required to sustain the human body (approximately 2000–2500 calories per day).
Another interpretation by the European Anti-Poverty Network reads:
Relative poverty, in contrast, views poverty as socially defined and dependent on social context. One relative measurement would be to compare the total wealth of the poorest one-third of the population with the total wealth of the richest 1% of the population. In this case, the number of people counted as poor could increase while their income rises. There are several different income inequality metrics; one example is the Gini coefficient.
Although absolute poverty is more common in developing countries, poverty and inequality exist across the world.
Measurements.
The main poverty line used in the OECD and the European Union is a relative poverty measure based on "economic distance", a level of income usually set at 60% of the median household income.
The United States, in contrast, uses an absolute poverty measure. The US poverty line was created in 1963–64 and was based on the dollar costs of the U.S. Department of Agriculture's "economy food plan" multiplied by a factor of three. The multiplier was based on research showing that food costs then accounted for about one-third of money income. This one-time calculation has since been annually updated for inflation.
The U.S. line has been critiqued as being either too high or too low. For instance, The Heritage Foundation, a conservative U.S. think tank, objects to the fact that, according to the U.S. Census Bureau, 46% of those defined as being in poverty in the U.S. own their own home (with the average poor person's home having three bedrooms, with one and a half baths, and a garage). Others, such as economist Ellen Frank, argue that the poverty measure is too low as families spend much less of their total budget on food than they did when the measure was established in the 1950s. Further, federal poverty statistics do not account for the widely varying regional differences in non-food costs such as housing, transport, and utilities.
Both absolute and relative poverty measures are usually based on a person's yearly income and frequently take no account of total wealth. Some people argue that this ignores a key component of economic well-being. Major developments and research in this area suggest that standard one dimensional measures of poverty, based mainly on wealth or calorie consumption, are seriously deficient. This is because poverty often involves being deprived on several fronts, which do not necessarily correlate well with wealth. Access to basic needs is an example of a measurement that does not include wealth. Access to basic needs that may be used in the measurement of poverty are clean water, food, shelter, and clothing. It has been established that people may have enough income to satisfy basic needs, but not use it wisely. Similarly, extremely poor people may not be deprived if sufficiently strong social networks, or social service systems exist. For deeper discussions, see also the Wikipedia article on Multidimensional poverty.
Indicators.
Indicator income.
Benefits:
Help in the allocation of resources by the government.
Disadvantages:
Indicator expenditure.
Benefits:
Disadvantages:
Non-monetary indicators.
Some economists, such as Guy Pfeffermann, say that other non-monetary indicators of "absolute poverty" are also improving. Life expectancy has greatly increased in the developing world since World War II and is starting to close the gap to the developed world where the improvement has been smaller. Even in Sub-Saharan Africa, the least developed region, life expectancy increased from 30 years before World War II to a peak of about 50 years — before the HIV pandemic and other diseases started to force it down to the current level of 47 years. Child mortality has decreased in every developing region of the world. The proportion of the world's population living in countries where per-capita food supplies are less than 2,200 calories (9,200 kilojoules) per day decreased from 56% in the mid-1960s to below 10% by the 1990s. Between 1950 and 1999, global literacy increased from 52% to 81% of the world. Women made up much of the gap: Female literacy as a percentage of male literacy has increased from 59% in 1970 to 80% in 2000. The percentage of children not in the labor force has also risen to over 90% in 2000 from 76% in 1960. There are similar trends for electric power, cars, radios, and telephones per capita, as well as the proportion of the population with access to clean water.
Other Measures of Household Welfare.
Even if income and expenditure are measured perfectly, neither captures well-being completely, since well-being also depends on leisure, public goods, health care, education and even peace and security. One alternative criterion is calorie consumption per person, which shows what share of the population suffers from hunger; although the commonly used threshold is around 2,100 calories per day, actual requirements vary from person to person with age, sex and other factors. Another indicator is the share of income spent on food: research shows that the more developed a country is, the smaller the share of the household budget spent on food, though this measure is complicated by differences in food quality and by being computed per person. A further approach is to measure outcomes rather than inputs, focusing on anthropometric status (being underweight, for example) rather than the amount of food; this reflects household welfare more directly, but it is hard to use for comparisons between countries or continents because populations differ in physical stature. A final idea is to leave the assessment to citizens themselves: in Vietnam, for example, some villages are judged by the people who live there, who identify which households need help out of poverty. Although this simple approach works in some places, it is often distorted by various influences. In short, there is as yet no ideal measure of the population's well-being. This is not an argument for abandoning measurement, but rather a warning to minimize errors and take as many factors as possible into account.
Definitions.
The World Bank defines poverty in absolute terms. The bank defines "extreme poverty" as living on less than US$1.90 per day (PPP), and "moderate poverty" as less than $3.10 a day. It has been estimated that in 2008, 1.4 billion people had consumption levels below US$1.25 a day and 2.7 billion lived on less than $2 a day. The proportion of the developing world's population living in extreme economic poverty has fallen from 28 percent in 1990 to 21 percent in 2001. Much of the improvement has occurred in East and South Asia. In Sub-Saharan Africa GDP per capita shrank by 14 percent, and extreme poverty increased from 41 percent in 1981 to 46 percent in 2001. Other regions have seen little or no change. In the early 1990s the transition economies of Europe and Central Asia experienced a sharp drop in income. Poverty rates rose to 6 percent at the end of the decade before beginning to recede. There are criticisms of these measurements.
Common Poverty metrics.
Headcount index.
The headcount index (Po) is a widely used measure which simply indicates the proportion of the population that is poor, although it does not indicate how poor the poor are.
Formula: formula_0, where Np is the number of poor and N is the total population.
Example: If 10 people are poor in a survey that samples 1000 people, then Po = 10/1000 = 0.01 = 1%
It is often helpful to rewrite this as:
formula_1, where I(·) is an indicator function that takes the value 1 if the bracketed expression is true and 0 otherwise. So if expenditure (yi) is less than the poverty line (z), then I(·) equals 1 and the household is counted as poor.
This index is easy to understand, but it has a few disadvantages. It does not take the intensity of poverty into account: the index does not show how poor the poor are, and it does not change if people below the poverty line become poorer. Moreover, the estimate is typically made over households rather than individuals.
Poverty gap index (P1).
The Poverty gap index is the mean distance below the poverty line as a proportion of the poverty line where the mean is taken over the whole population, counting the non-poor as having zero poverty gap.
Using the indicator function, we have: formula_2, where the poverty gap (Gi) is defined as the poverty line (z) less actual income (yi) for poor individuals; the gap is considered to be zero for everyone else.
This can be rewritten as: formula_3
The poverty gap index denotes the extent to which individuals fall below the poverty line (poverty gap) as a proportion of the poverty line. By summing these poverty gaps we derive the minimum cost of eliminating poverty.
This method is only reasonable if the transfers could be made perfectly efficiently, which is unlikely.
Squared Poverty Gap Index (Poverty severity index, P2).
The squared poverty gap index is calculated by averaging the squares of the poverty gaps relative to the poverty line. This measure emphasizes extreme poverty, giving greater weight to observations that fall well below the poverty line. One of its benefits is that the weight placed on the income levels of the poorest members of society can be varied. The poverty severity index can also be disaggregated for population subgroups.
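The three measures above are straightforward to compute from survey microdata. The following Python sketch (an informal illustration with made-up numbers) computes P0, P1 and P2 for a sample of incomes and a poverty line "z":

```python
import numpy as np

def poverty_indices(incomes, z):
    """Headcount (P0), poverty gap (P1) and squared poverty gap (P2)."""
    y = np.asarray(incomes, dtype=float)
    gap = np.where(y < z, z - y, 0.0)     # G_i, zero for the non-poor
    p0 = np.mean(y < z)
    p1 = np.mean(gap / z)
    p2 = np.mean((gap / z) ** 2)
    return p0, p1, p2

# Example: 10 poor people out of 1000 gives P0 = 0.01, as in the text.
incomes = np.concatenate([np.full(10, 60.0), np.full(990, 200.0)])
print(poverty_indices(incomes, z=100.0))  # (0.01, 0.004, 0.0016)
```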
Sen index.
The Sen index combines the number of poor with the depth of their poverty and the distribution of poverty within the sample.
formula_4
Sen-Shorrocks-Thon Index.
The Sen-Shorrocks-Thon index (sometimes referred to as SST index) is an improved version of the Sen index.
formula_5
The Sen-Shorrocks-Thon index takes into account the proportion of poor people, the extent of their poverty, and the distribution of welfare among the poor. This index enables us to decompose poverty into three components and answer these questions: Are there more poor people? Is their depth of poverty worsening? Is there higher inequality among the poor?
Asset-based measures.
Another point of view defines poverty in terms of assets. These asset-based measures may consider real and financial asset holdings, access to the credit market, and poverty related to a household's wealth. Examples include income-net-worth measures, asset poverty, and financial vulnerability.
Being asset poor does not imply being income poor and vice versa. For example, the importance of holding assets is lower in countries where secure employment ensures stable living standards, while in other countries assets may be needed as a cushion against uncertainty and shocks.
High-frequency poverty measures.
Real-time information on poverty and people's well-being is not yet well developed. The World Bank has already conducted a few pilot projects using mobile phones in South Sudan, Peru and Tanzania.
Supplemental and Official Measures in the USA.
The Official Poverty Measure (OPM) takes into account the individual's or family's pretax cash income and a set of thresholds that vary by family size and age, but it is not affected by in-kind programs (e.g. housing and energy programs, nutrition assistance), tax credits or regional differences in living costs.
To better understand the economic well-being of American families and to make the effectiveness of federal policies easier to interpret, the Supplemental Poverty Measure (SPM) was developed by the Census Bureau and the U.S. Bureau of Labor Statistics in 2010. Several methodological improvements were made to the SPM over the following years. The SPM takes into consideration family resources and expenses not included in the OPM, as well as geographical differences in living costs.
The demographic profile of the poverty population differs under the SPM and the OPM. The child poverty rate is lower under the SPM, while a higher poverty rate is found among the elderly (older than 65). For the working-age population, the gap between the two poverty measures fluctuates from year to year. Compared with the Official Poverty Measure, the Supplemental Poverty Measure highlights medical and work-related expenses and gives policymakers a clearer picture of the outcomes of government programs.
Case Studies.
Measuring poverty is a complex and comprehensive task requiring various methods and approaches. Case studies can provide insight into real-world examples of how poverty is measured and addressed. This section examines three case studies showcasing diverse techniques for measuring and addressing poverty: the Multidimensional Poverty Index (MPI) in Mexico, the community-based water supply and sanitation (PAMSIMAS) program in Indonesia, and the National Rural Employment Guarantee Act (NREGA) in India. Examining these case studies can provide a deeper understanding of how poverty is assessed and addressed globally.
Multidimensional Poverty Index (MPI).
The Multidimensional Poverty Index (MPI) in Mexico is a comprehensive approach to assessing poverty that considers a variety of indicators beyond just income. Mexico was the first country to introduce an official multidimensional poverty measure, an index which, in addition to considering the lack of economic resources, includes other dimensions that social policy must address. The measure was developed by the National Council for the Evaluation of Social Development Policy (CONEVAL), the agency responsible for evaluating poverty and social policy in Mexico.
The MPI in Mexico measures poverty using eight indicators: income, educational lag, access to healthcare services, access to social security, access to food, housing quality and space, access to basic services in the dwelling, and degree of social cohesion. The measurement considers income and six dimensions in a social rights approach. It is complemented by the inclusion of social cohesion, recognising the importance of contextual and relational factors, which may be analysed in terms of their impact on society and vice versa.
The MPI considers various factors contributing to poverty and deprivation, offering a more complete understanding of poverty than traditional income-based measurements. The Mexican government uses it to track progress in eradicating poverty over time and to target social services and policies to those most in need. Every two years, the national and state governments in Mexico measure poverty. Municipalities measure poverty every five years.
Mexico's MPI has been a helpful instrument for assessing poverty and promoting social progress. Nonetheless, it has its drawbacks and critics, just like any method of measuring poverty. Some critics contend that the MPI understates the severity of the poverty and marginalisation of particularly marginalised groups, such as indigenous peoples and residents of rural areas. Even so, the multidimensional poverty measure in Mexico is a valuable model for other countries trying to develop comprehensive and multidimensional approaches to measuring poverty.
Community-Based Drinking Water Supply and Sanitation Program.
The Community-based Drinking Water Supply and Sanitation Program, or Penyediaan Air Minum dan Sanitasi Berbasis Masyarakat (PAMSIMAS), in Indonesia is a relevant example because it acknowledges how poor sanitation affects general health conditions and their social and economic consequences. The PAMSIMAS approach motivates individuals to participate in community development by involving them in identifying and fixing their own sanitation problems. As a result, PAMSIMAS offers a comprehensive approach to reducing poverty, considering not only income and assets but also the broader social and environmental factors that contribute to poverty.
This approach is founded on the idea that local communities are better placed than traditional top-down development programmes to identify and address their own sanitation needs. The method involves working with communities to identify practices that lead to poor sanitation, such as open defecation, and to promote awareness of the necessity of sanitation. In addition to technical assistance that strengthens the community's role through capacity building, planning, procurement, and management, including community monitoring with a web-based and mobile monitoring system, the project offers grants directly to communities for local water and sanitation infrastructure. Facilitators and districts are also given access to training and consulting services to help them develop better sanitation and hygiene habits. The facilitators then assist communities in creating options for further development.
The PAMSIMAS approach in Indonesia has successfully reduced open defecation and improved access to sanitation facilities in rural areas. According to the World Bank, the percentage of target communities achieving open defecation-free status increased significantly from 0% to 58%, and roughly 81% of schools in those communities improved their sanitation and hygiene programs. The institutional sustainability of the PAMSIMAS approach also strengthened, with 97% of districts replicating it outside the project's target communities and 86% of communities increasing their spending on ensuring everyone has access to clean water and sanitation.
National Rural Employment Guarantee Act (NREGA).
In 2005, India launched a national anti-poverty program, the Mahatma Gandhi National Rural Employment Guarantee Act (NREGA). It is a flagship program of the Indian government that provides employment opportunities to rural households, guaranteeing 100 days of wage employment to every rural household that demands work.
The NREGA is a demand-driven program, meaning that households must apply for work under the scheme. The government is then obligated to provide employment within 15 days of the application. The scheme pays workers a minimum wage, which is set by the central government and varies from state to state.
The NREGA is intended to give rural households work opportunities during the slow agricultural season when those opportunities are few. The program attempts to decrease poverty and raise rural income by hiring rural households. The program also intends to encourage sustainable development in rural areas by creating job opportunities in forestry, water conservation, and infrastructure building.
The NREGA has been widely recognised as an effective tool for poverty reduction and rural development in India. In 2020, the scheme was extended to cover 116 districts affected by the COVID-19 pandemic, providing much-needed employment opportunities to vulnerable households in rural areas.
In conclusion, the NREGA is a significant social welfare program in India that provides employment opportunities to rural households, promotes social inclusion and contributes to rural development. While the scheme has challenges, its positive impact on poverty reduction and rural development must be considered.
Common survey problems.
These surveys rely on data that have some common problems:
Random sample: All surveys are based on randomly selecting people into a sample, in which each person should have the same chance of being selected. In practice it is not easy to avoid a biased sample, because some groups of people are simply difficult to reach.
Sampling: Surveys provide only estimates, so sampling variability and the way the sampling was actually carried out both matter. In areas with dense population, there is usually under-sampling.
Goods Coverage and Valuation: Questions should go beyond income and expenditure; consumption of goods produced on the family farm must also be valued, and information on housing and durable goods is needed as well.
Variability and the Time Period of Measurement: Income, consumption and other factors change over time. In less developed countries, therefore, the focus is usually on consumption, which is more stable than income.
Comparisons across Households at Similar Consumption Levels: Comparisons between households are difficult because households differ not only in the size of income and expenditure, but also in the environment, leisure, quality of the environment, etc.
Stats.
Even if poverty is lessening for the world as a whole, it remains an enormous problem:
Other factors.
The World Bank's Voices of the Poor initiative, based on research with over 20,000 poor people in 23 countries, identifies a range of factors that poor people consider elements of poverty. Most important are those necessary for material well-being, especially food. Many others relate to social rather than material issues.
Future Directions.
Future directions in measuring poverty continue to evolve as new discussions and trends take hold in the field. Prominent areas include multidimensional poverty measurement and the inclusion of subjective well-being indicators.
Multidimensional metrics are valued for considering several perspectives at once: the approach takes into account factors such as health, education, access to basic services, social and economic empowerment, and environmental conditions. Governments can design policies and interventions based on this more comprehensive view of the underlying causes of poverty.
In cooperation with the United Nations Development Programme, the Oxford Poverty and Human Development Initiative (OPHI) developed the Multidimensional Poverty Index (MPI) in 2010. The index combines three dimensions, health, education, and standard of living, measured through ten indicators such as nutrition, years of schooling, and access to clean water and electricity.
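As an illustration of how such an index is assembled, the following Python sketch computes an MPI-style figure as the product of incidence (H) and intensity (A) in the style of the commonly described Alkire–Foster counting method; the households, indicator set, weights, and the 1/3 cutoff used here are simplified assumptions for demonstration, not the official MPI specification.

# Illustrative Alkire-Foster style computation of a multidimensional poverty figure.
# Weights and household deprivation flags below are made-up assumptions.
weights = {"nutrition": 1/6, "child_mortality": 1/6, "schooling": 1/6,
           "water": 1/12, "electricity": 1/12, "sanitation": 1/12,
           "housing": 1/12, "assets": 1/12, "cooking_fuel": 1/12}
households = [
    {"nutrition": 1, "child_mortality": 0, "schooling": 0, "water": 1, "electricity": 1,
     "sanitation": 1, "housing": 0, "assets": 0, "cooking_fuel": 1},
    {"nutrition": 0, "child_mortality": 0, "schooling": 0, "water": 0, "electricity": 0,
     "sanitation": 1, "housing": 0, "assets": 0, "cooking_fuel": 0},
]
CUTOFF = 1 / 3  # a household counts as multidimensionally poor at a weighted score of 1/3 or more

def deprivation_score(household):
    return sum(weights[k] * household[k] for k in weights)

scores = [deprivation_score(h) for h in households]
poor_scores = [s for s in scores if s >= CUTOFF]
H = len(poor_scores) / len(scores)                                # incidence of poverty
A = sum(poor_scores) / len(poor_scores) if poor_scores else 0.0   # average intensity among the poor
print("H =", H, "A =", A, "MPI =", H * A)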
Another area of emerging debate is the incorporation of subjective measures designed to record people's perceptions of their own well-being; these can offer valuable insight into the impact of poverty on people's lives in ways other than material deprivation. An example of subjective poverty measurement is the Participatory Wealth Ranking (PWR) approach, which uses the ratings of local reference groups concerning the relative poverty status of households in their community.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Po = \\frac{Np}{N}"
},
{
"math_id": 1,
"text": "Po=\\frac{1}{N}\\sum_{i=1}^N I(yi<z)"
},
{
"math_id": 2,
"text": "Gi = (yi- z)\\times I(yi<z)"
},
{
"math_id": 3,
"text": "P1 = \\frac{1}{N} \\sum_{k=1}^N \\frac{Gi}{z}"
},
{
"math_id": 4,
"text": "Ps-Po(1-(1-G^p)\\frac{U^P}{4}"
},
{
"math_id": 5,
"text": "Psst=PoP_1^p(1+G^p)\n"
}
] | https://en.wikipedia.org/wiki?curid=8921769 |
892264 | Wheelbase | Distance between the centers of the front and rear wheels
In both road and rail vehicles, the wheelbase is the horizontal distance between the centers of the front and rear wheels. For road vehicles with more than two axles (e.g. some trucks), the wheelbase is the distance between the steering (front) axle and the centerpoint of the driving axle group. In the case of a tri-axle truck, the wheelbase would be the distance between the steering axle and a point midway between the two rear axles.
Vehicles.
The wheelbase of a vehicle equals the distance between its front and rear wheels. At equilibrium, the total torque of the forces acting on a vehicle is zero. Therefore, the wheelbase is related to the force on each pair of tires by the following formula:
formula_0
formula_1
where formula_2 is the force on the front tires, formula_3 is the force on the rear tires, formula_4 is the wheelbase, formula_5 is the distance from the center of mass (CM) to the rear wheels, formula_6 is the distance from the center of mass to the front wheels (formula_6 + formula_5 = formula_4), formula_7 is the mass of the vehicle, and formula_8 is the gravity constant. So, for example, when a truck is loaded, its center of gravity shifts rearward and the force on the rear tires increases. The vehicle will then ride lower. The amount the vehicle sinks will depend on counteracting forces, like the size of the tires, tire pressure, and the spring rate of the suspension.
If the vehicle is accelerating or decelerating, extra torque is placed on the rear or front tire respectively. The equation relating the wheelbase, height above the ground of the CM, and the force on each pair of tires becomes:
formula_9
formula_10
where formula_2 is the force on the front tires, formula_3 is the force on the rear tires, formula_5 is the distance from the CM to the rear wheels, formula_6 is the distance from the CM to the front wheels, formula_4 is the wheelbase, formula_7 is the mass of the vehicle, formula_8 is the acceleration of gravity (approx. 9.8 m/s2), formula_11 is the height of the CM above the ground, formula_12 is the acceleration (or deceleration if the value is negative). So, as is common experience, when the vehicle accelerates, the rear usually sinks and the front rises depending on the suspension. Likewise, when braking the front noses down and the rear rises.
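The two expressions above can be evaluated directly; the short Python sketch below shows how the front and rear tire forces shift under acceleration and braking. All vehicle numbers are illustrative assumptions, not data for any real vehicle.

# Front and rear axle loads from the wheelbase formulas above (illustrative numbers).
g = 9.8        # gravitational acceleration, m/s^2
m = 1500.0     # vehicle mass, kg (assumed)
L = 2.7        # wheelbase, m (assumed)
d_r = 1.2      # distance from the centre of mass to the rear axle, m (assumed)
d_f = L - d_r  # distance from the centre of mass to the front axle, m
h_cm = 0.55    # height of the centre of mass above the ground, m (assumed)

def axle_loads(a):
    """Return (front, rear) tire forces in newtons for longitudinal acceleration a in m/s^2."""
    F_f = (d_r / L) * m * g - (h_cm / L) * m * a
    F_r = (d_f / L) * m * g + (h_cm / L) * m * a
    return F_f, F_r

for a in (0.0, 3.0, -6.0):  # at rest, accelerating, braking
    F_f, F_r = axle_loads(a)
    print(f"a = {a:+.1f} m/s^2: front = {F_f:.0f} N, rear = {F_r:.0f} N")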
Because of the effect the wheelbase has on the weight distribution of the vehicle, wheelbase dimensions are crucial to the balance and steering. For example, a car with a much greater weight load on the rear tends to understeer due to the lack of the load (force) on the front tires and therefore the grip (friction) from them. This is why it is crucial, when towing a single-axle caravan, to distribute the caravan's weight so that down-thrust on the tow-hook is about 100 pounds force (400 N). Likewise, a car may oversteer or even "spin out" if there is too much force on the front tires and not enough on the rear tires. Also, when turning there is lateral torque placed upon the tires which imparts a turning force that depends upon the length of the tire distances from the CM. Thus, in a car with a short wheelbase ("SWB"), the short lever arm from the CM to the rear wheel will result in a greater lateral force on the rear tire which means greater acceleration and less time for the driver to adjust and prevent a spin out or worse.
Wheelbases provide the basis for one of the most common vehicle size class systems.
Varying wheelbases within nameplate.
Some vehicles are offered with long-wheelbase variants to increase the spaciousness and therefore the luxury of the vehicle. This practice can often be found on full-size cars like the Mercedes-Benz S-Class, but ultra-luxury vehicles such as the Rolls-Royce Phantom and even large family cars like the Rover 75 came with 'limousine' versions. Prime Minister of the United Kingdom Tony Blair was given a long-wheelbase version of the Rover 75 for official use, and even some SUVs like the VW Tiguan and Jeep Wrangler are available with long wheelbases.
In contrast, coupé varieties of some vehicles such as the Honda Accord are usually built on shorter wheelbases than the sedans they are derived from.
Bikes.
The wheelbase on many commercially available bicycles and motorcycles is so short, relative to the height of their centers of mass, that they are able to perform stoppies and wheelies.
Skateboards.
In skateboarding the word 'wheelbase' is used for the distance between the two inner pairs of mounting holes on the deck. This is different from the distance between the rotational centers of the two wheel pairs. A reason for this alternative use is that decks are sold with prefabricated holes, but usually without trucks and wheels. It is therefore easier to use the prefabricated holes for measuring and describing this characteristic of the deck.
A common misconception is that the choice of wheelbase is influenced by the height of the skateboarder. However, the length of the deck would then be a better candidate, because the wheelbase affects characteristics useful at different speeds or on different terrains regardless of the height of the skateboarder. For example, a deck with a long wheelbase will respond slowly to turns, which is often desirable at high speeds. A deck with a short wheelbase will respond quickly to turns, which is often desirable when skating backyard pools or other terrain requiring quick or intense turns.
Rail.
In rail vehicles, the wheelbase follows a similar concept. However, since the wheels may be of different sizes (for example, on a steam locomotive), the measurement is taken between the points where the wheels contact the rail, and not between the centers of the wheels.
On vehicles where the wheelsets (axles) are mounted inside the vehicle frame (mostly in steam locomotives), the wheelbase is the distance between the front-most and rear-most wheelsets.
On vehicles where the wheelsets are mounted on bogies (American: trucks), three wheelbase measurements can be distinguished:
The wheelbase affects the rail vehicle's capability to negotiate curves. Short-wheelbased vehicles can negotiate sharper curves. On some larger wheelbase locomotives, inner wheels may lack flanges in order to pass curves.
The wheelbase also affects the load the vehicle poses to the track, track infrastructure and bridges. All other conditions being equal, a shorter wheelbase vehicle represents a more concentrated load to the track than a longer wheelbase vehicle. As railway lines are designed to take a predetermined maximum load per unit of length (tonnes per meter, or pounds per foot), the rail vehicles' wheelbase is designed according to their intended gross weight. The higher the gross weight, the longer the wheelbase must be.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F_f = {d_r \\over L}mg"
},
{
"math_id": 1,
"text": "F_r = {d_f \\over L}mg"
},
{
"math_id": 2,
"text": "F_f"
},
{
"math_id": 3,
"text": "F_r"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "d_r"
},
{
"math_id": 6,
"text": "d_f"
},
{
"math_id": 7,
"text": "m"
},
{
"math_id": 8,
"text": "g"
},
{
"math_id": 9,
"text": "F_f = {d_r \\over L}mg - {h_{cm} \\over L}ma"
},
{
"math_id": 10,
"text": "F_r = {d_f \\over L}mg + {h_{cm} \\over L}ma"
},
{
"math_id": 11,
"text": "h_{cm}"
},
{
"math_id": 12,
"text": "a"
}
] | https://en.wikipedia.org/wiki?curid=892264 |
8924002 | Operational transformation | Concurrency control method for collaborative software
Operational transformation (OT) is a technology for supporting a range of collaboration functionalities in advanced collaborative software systems. OT was originally invented for consistency maintenance and concurrency control in collaborative editing of plain text documents. Its capabilities have been extended and its applications expanded to include group undo, locking, conflict resolution, operation notification and compression, group-awareness, HTML/XML and tree-structured document editing, collaborative office productivity tools, application-sharing, and collaborative computer-aided media design tools. In 2009 OT was adopted as a core technique behind the collaboration features in Apache Wave and Google Docs.
History.
Operational Transformation was pioneered by C. Ellis and S. Gibbs in the GROVE (GRoup Outline Viewing Edit) system in 1989. Several years later, some correctness issues were identified and several approaches were independently proposed to solve these issues, which was followed by another decade of continuous efforts of extending and improving OT by a community of dedicated researchers. In 1998, a Special Interest Group on Collaborative Editing was set up to promote communication and collaboration among CE and OT researchers. Since then, SIGCE holds annual CE workshops in conjunction with major CSCW (Computer Supported Cooperative Work) conferences, such as ACM CSCW, GROUP, and ECSCW.
System architecture.
Collaboration systems utilizing Operational Transformations typically use replicated document storage, where each client has their own copy of the document; clients operate on their local copies in a lock-free, non-blocking manner, and the changes are then propagated to the rest of the clients; this ensures the client high responsiveness in an otherwise high-latency environment such as the Internet. When a client receives the changes propagated from another client, it typically transforms the changes before executing them; the transformation ensures that application-dependent consistency criteria (invariants) are maintained by all sites. This mode of operation results in a system particularly suited for implementing collaboration features, like simultaneous document editing, in a high-latency environment such as the web.
Basics.
The basic idea of OT can be illustrated by using a simple text editing scenario as follows. Given a text document with a string "abc" replicated at two collaborating sites; and two concurrent operations:
generated by two users at collaborating sites 1 and 2, respectively. Suppose the two operations are executed in the order of O1 and O2 (at site 1). After executing O1, the document becomes "xabc". To execute O2 after O1, O2 must be transformed against O1 to become: O2' = Delete[3, "c"], whose positional parameter is incremented by one due to the insertion of one character "x" by O1. Executing O2' on "xabc" deletes the correct character "c" and the document becomes "xab". However, if O2 is executed without transformation, it incorrectly deletes character "b" rather than "c". The basic idea of OT is to transform (or adjust) the parameters of an editing operation according to the effects of previously executed concurrent operations so that the transformed operation can achieve the correct effect and maintain document consistency.
Consistency models.
One functionality of OT is to support consistency maintenance in collaborative editing systems. A number of consistency models have been proposed in the research community, some generally for collaborative editing systems, and some specifically for OT algorithms.
The CC model.
In Ellis and Gibbs's 1989 paper "Concurrency control in groupware systems", two consistency properties are required for collaborative editing systems:
Since concurrent operations may be executed in different orders and editing operations are not commutative in general, copies of the document at different sites may diverge (inconsistent). The first OT algorithm was proposed in Ellis and Gibbs's paper to achieve convergence in a group text editor; the state-vector (or vector clock in classic distributed computing) was used to preserve the precedence property.
The CCI model.
The CCI model was proposed as a consistency management in collaborative editing systems. Under the CCI model, three consistency properties are grouped together:
The CCI model extends the CC model with a new criterion: intention preservation. The essential difference between convergence and intention preservation is that the former can always be achieved by a serialization protocol, but the latter may not be achieved by any serialization protocol if operations were always executed in their original forms. Achieving the nonserialisable intention preservation property has been a major technical challenge. OT has been found particularly suitable for achieving convergence and intention preservation in collaborative editing systems.
The CCI model is independent of document types or data models, operation types, or supporting techniques (OT, multi-versioning, serialization, undo/redo). It was not intended for correctness verification for techniques (e.g. OT) that are designed for specific data and operation models and for specific applications. In, the notion of intention preservation was defined and refined at three levels: First, it was defined as a generic consistency requirement for collaborative editing systems; Second, it was defined as operation context-based pre- and post- transformation conditions for generic OT functions; Third, it was defined as specific operation verification criteria to guide the design of OT functions for two primitive operations: string-wise insert and delete, in collaborative plain text editors.
The CSM model.
The condition of intention preservation was not formally specified in the CCI model for purposes of formal proofs. The SDT and LBT approaches try to formalize alternative conditions that can be proved. The consistency model proposed in these two approaches consists of the following formal conditions:
The CA model.
The above CSM model requires that a total order of all objects in the system be specified. Effectively, the specification is reduced to new objects introduced by insert operations. However, specification of the total order entails application-specific policies such as those to break insertion ties (i.e., new objects inserted by two concurrent operations at the same position). Consequently, the total order becomes application specific. Moreover, in the algorithm, the total order must be maintained in the transformation functions and control procedure, which increases time/space complexities of the algorithm.
Alternatively, the CA model is based on the admissibility theory. The CA model includes two aspects:
These two conditions imply convergence. All cooperating sites converge in a state in which there is a same set of objects that are in the same order. Moreover, the ordering is effectively determined by the effects of the operations when they are generated. Since the two conditions also impose additional constraints on object ordering, they are actually stronger than convergence. The CA model and the design/prove approach are elaborated in the 2005 paper. It no longer requires that a total order of objects be specified in the consistency model and maintained in the algorithm, which hence results in reduced time/space complexities in the algorithm.
OT system structure.
OT is a system of multiple components. One established strategy of designing OT systems is to separate the high-level transformation control (or integration) algorithms from the low-level transformation functions.
The transformation control algorithm is concerned with determining:
The control algorithm invokes a corresponding set of transformation functions, which determine how to transform one operation against another according to the operation types, positions, and other parameters. The correctness responsibilities of these two layers are formally specified by a set of transformation properties and conditions. Different OT systems with different control algorithms, functions, and communication topologies require maintaining different sets of transformation properties. The separation of an OT system into these two layers allows for the design of generic control algorithms that are applicable to different kinds of application with different data and operation models.
The other alternative approach was proposed in. In their approach, an OT algorithm is correct if it satisfies two formalized correctness criteria:
As long as these two criteria are satisfied, the data replicas converge (with additional constraints) after all operations are executed at all sites. There is no need to enforce a total order of execution for the sake of achieving convergence. Their approach is generally to first identify and prove sufficient conditions for a few transformation functions, and then design a control procedure to ensure those sufficient conditions. This way the control procedure and transformation functions work synergistically to achieve correctness, i.e., causality and admissibility preservation. In their approach, there is no need to satisfy transformation properties such as TP2 because it does not require that the (inclusive) transformation functions work in all possible cases.
OT data and operation models.
There exist two underlying models in each OT system: the data model that defines the way data objects in a document are addressed by operations, and the operation model that defines the set of operations that can be directly transformed by OT functions. Different OT systems may have different data and operation models. For example, the data model of the first OT system is a single linear address space; and its operation model consists of two primitive operations: character-wise insert and delete. The basic operation model has been extended to include a third primitive operation update to support collaborative Word document processing and 3D model editing. The basic OT data model has been extended into a hierarchy of multiple linear addressing domains, which is capable of modeling a broad range of documents. A data adaptation process is often required to map application-specific data models to an OT-compliant data model.
There exist two approaches to supporting application level operations in an OT system:
OT functions.
Various OT functions have been designed for OT systems with different capabilities and used for different applications. OT functions used in different OT systems may be named differently, but they can be classified into two categories:
For example, suppose a type String with an operation ins(p, c, sid) where "p" is the position of insertion, "c" is the character to insert and "sid" is the identifier of the site that has generated the operation. We can write the following inclusion transformation function:
T(ins(formula_2),ins(formula_3)) :-
if (formula_4) return ins(formula_2)
else if (formula_5 and formula_6) return ins(formula_2)
else return ins(formula_7)
We can also write the following exclusion transformation function:
formula_8(ins(formula_2),ins(formula_9)) :-
if (formula_4) return ins(formula_2)
else if (formula_5 and formula_6) return ins(formula_2)
else return ins(formula_10)
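The insert-against-insert functions above translate almost directly into code. In the Python sketch below an operation is represented as a tuple (p, c, sid); for simplicity the exclusion transformation is also given the full tuple of the second operation, a small deviation from the signature above.

# Transcription of the insert-against-insert transformation functions above.
def it_ins_ins(op1, op2):
    """Inclusion transformation: adjust op1 to include the effect of op2."""
    p1, c1, sid1 = op1
    p2, _, sid2 = op2
    if p1 < p2 or (p1 == p2 and sid1 < sid2):
        return (p1, c1, sid1)
    return (p1 + 1, c1, sid1)

def et_ins_ins(op1, op2):
    """Exclusion transformation: adjust op1 to exclude the effect of op2."""
    p1, c1, sid1 = op1
    p2, _, sid2 = op2
    if p1 < p2 or (p1 == p2 and sid1 < sid2):
        return (p1, c1, sid1)
    return (p1 - 1, c1, sid1)

print(it_ins_ins((3, "a", 1), (1, "b", 2)))  # -> (4, 'a', 1): shifted past the earlier insert
print(et_ins_ins((4, "a", 1), (1, "b", 2)))  # -> (3, 'a', 1): the shift is undone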
Some OT systems use both IT and ET functions, and some use only IT functions. The complexity of OT function design is determined by various factors:
Transformation properties.
Various transformation properties for ensuring OT system correctness have been identified. These properties can be maintained by either the transformation control algorithm or by the transformation functions. Different OT system designs have different division of responsibilities among these components. The specifications of these properties and preconditions of requiring them are given below.
Convergence properties.
The following two properties are related to achieving convergence.
Inverse properties.
The following three properties are related to achieving the desired group undo effect. They are:
OT control (integration) algorithms.
Various OT control algorithms have been designed for OT systems with different capabilities and for different applications. The complexity of OT control algorithm design is determined by multiple factors. A key differentiating factor is whether an algorithm is capable of supporting concurrency control (do) and/or group undo. In addition, different OT control algorithm designs make different tradeoffs in:
Most existing OT control algorithms for concurrency control adopt the theory of causality/concurrency as the theoretical basis: causally related operations must be executed in their causal order; concurrent operations must be transformed before their execution. However, it was well known that concurrency condition alone cannot capture all OT transformation conditions. In a recent work, the theory of operation context has been proposed to explicitly represent the notion of a document state, which can be used to formally express OT transformation conditions for supporting the design and verification of OT control algorithms.
The following table gives an overview of some existing OT control/integration algorithms
A continuous total order is a strict total order where it is possible to detect a missing element i.e. 1,2,3,4... is a continuous total order, 1,2,3,5... is not a continuous total order.
The transformation-based algorithms proposed in are based on the alternative consistency models "CSM" and "CA" as described above. Their approaches differ from those listed in the table. They use vector timestamps for causality preservation. The other correctness conditions are "single-"/"multi-" operation effects relation preservation or "admissibility" preservation. Those conditions are ensured by the control procedure and transformation functions synergistically. There is no need to discuss TP1/TP2 in their work. Hence they are not listed in the above table.
There exist some other optimistic consistency control algorithms that seek alternative ways to design transformation algorithms, but do not fit well with the above taxonomy and characterization. For example, Mark and Retrace
The correctness problems of OT led to introduction of transformationless post-OT schemes, such as WOOT, Logoot and Causal Trees (CT). "Post-OT" schemes decompose the document into atomic operations, but they workaround the need to transform operations by employing a combination of unique symbol identifiers, vector timestamps and/or tombstones.
Critique of OT.
While the classic OT approach of defining operations through their offsets in the text seems to be simple and natural, real-world distributed systems raise serious issues. Namely, that operations propagate with finite speed, states of participants are often different, thus the resulting combinations of states and operations are extremely hard to foresee and understand. As Li and Li put it, "Due to the need to consider complicated case coverage, formal proofs are very complicated and error-prone, even for OT algorithms that only treat two characterwise primitives (insert and delete)".
Similarly, Joseph Gentle who is a former Google Wave engineer and an author of the Share.JS library wrote, "Unfortunately, implementing OT sucks. There's a million algorithms with different tradeoffs, mostly trapped in academic papers. […] Wave took 2 years to write and if we rewrote it today, it would take almost as long to write a second time." But later he amends his comment with "I no longer believe that wave would take 2 years to implement now - mostly because of advances in web frameworks and web browsers."
For OT to work, every single change to the data needs to be captured: "Obtaining a snapshot of the state is usually trivial, but capturing edits is a different matter altogether. […] The richness of modern user interfaces can make this problematic, especially within a browser-based environment." An alternative to OT is differential synchronization.
Another alternative to OT is using sequence types of conflict-free replicated data type.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T(op_1,op_2)"
},
{
"math_id": 1,
"text": "T^{-1}(op_1,op_2)"
},
{
"math_id": 2,
"text": "p_1,c_1,sid_1"
},
{
"math_id": 3,
"text": "p_2,c_2,sid_2"
},
{
"math_id": 4,
"text": "p_1 < p_2"
},
{
"math_id": 5,
"text": "p_1=p_2"
},
{
"math_id": 6,
"text": "sid_1 < sid_2"
},
{
"math_id": 7,
"text": "p_1+1,c_1,sid_1"
},
{
"math_id": 8,
"text": "T^{-1}"
},
{
"math_id": 9,
"text": "p_2,sid_2"
},
{
"math_id": 10,
"text": "p_1-1,c_1,sid_1"
},
{
"math_id": 11,
"text": "op_1"
},
{
"math_id": 12,
"text": "op_2"
},
{
"math_id": 13,
"text": " op_1 \\circ T(op_2,op_1) \\equiv op_2 \\circ T(op_1,op_2) "
},
{
"math_id": 14,
"text": "op_i \\circ op_j"
},
{
"math_id": 15,
"text": "op_i"
},
{
"math_id": 16,
"text": "op_j"
},
{
"math_id": 17,
"text": "\\equiv"
},
{
"math_id": 18,
"text": "op_1, op_2"
},
{
"math_id": 19,
"text": "op_3"
},
{
"math_id": 20,
"text": "T(op_3, op_1 \\circ T(op_2,op_1)) = T(op_3, op_2 \\circ T(op_1,op_2))"
},
{
"math_id": 21,
"text": "T(op_2,op_1)"
},
{
"math_id": 22,
"text": "op \\circ \\overline{op}"
},
{
"math_id": 23,
"text": "S \\circ op \\circ \\overline{op} = S"
},
{
"math_id": 24,
"text": "T(op_x, op \\circ \\overline{op})=op_x"
},
{
"math_id": 25,
"text": "op_x"
},
{
"math_id": 26,
"text": "\\overline{op_1}' = T(\\overline{op_1}, T(op_2, op_1))"
},
{
"math_id": 27,
"text": "\\overline{op_1'} = \\overline{T(op_1, op_2)}"
},
{
"math_id": 28,
"text": "\\overline{op_1}' = \\overline{op_1'}"
},
{
"math_id": 29,
"text": "\\overline{op_1}'"
},
{
"math_id": 30,
"text": "\\overline{op_1'}"
},
{
"math_id": 31,
"text": "\\overline{op_1}"
}
] | https://en.wikipedia.org/wiki?curid=8924002 |
89244 | Safe and Sophie Germain primes | A prime pair (p, 2p+1)
In number theory, a prime number "p" is a Sophie Germain prime if 2"p" + 1 is also prime. The number 2"p" + 1 associated with a Sophie Germain prime is called a safe prime. For example, 11 is a Sophie Germain prime and 2 × 11 + 1 = 23 is its associated safe prime. Sophie Germain primes and safe primes have applications in public key cryptography and primality testing. It has been conjectured that there are infinitely many Sophie Germain primes, but this remains unproven.
Sophie Germain primes are named after French mathematician Sophie Germain, who used them in her investigations of Fermat's Last Theorem. One attempt by Germain to prove Fermat’s Last Theorem was to let "p" be a prime number of the form 8"k" + 7 and to let "n" = "p" – 1. In this case, formula_0 is unsolvable. Germain’s proof, however, remained unfinished. Through her attempts to solve Fermat's Last Theorem, Germain developed a result now known as Germain's Theorem which states that if "p" is an odd prime and 2"p" + 1 is also prime, then "p" must divide "x", "y", or "z." Otherwise, formula_1. This case where "p" does not divide "x", "y", or "z" is called the first case. Sophie Germain’s work was the most progress achieved on Fermat’s last theorem at that time. Later work by Kummer and others always divided the problem into first and second cases.
Individual numbers.
The first few Sophie Germain primes (those less than 1000) are
2, 3, 5, 11, 23, 29, 41, 53, 83, 89, 113, 131, 173, 179, 191, 233, 239, 251, 281, 293, 359, 419, 431, 443, 491, 509, 593, 641, 653, 659, 683, 719, 743, 761, 809, 911, 953, ... OEIS:
Hence, the first few safe primes are
5, 7, 11, 23, 47, 59, 83, 107, 167, 179, 227, 263, 347, 359, 383, 467, 479, 503, 563, 587, 719, 839, 863, 887, 983, 1019, 1187, 1283, 1307, 1319, 1367, 1439, 1487, 1523, 1619, 1823, 1907, ... OEIS:
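Both lists can be regenerated with a few lines of code; the Python sketch below uses simple trial division, which is adequate for bounds this small.

# Regenerate the Sophie Germain primes below 1000 and their associated safe primes 2p + 1.
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

sophie_germain = [p for p in range(2, 1000) if is_prime(p) and is_prime(2 * p + 1)]
safe_primes = [2 * p + 1 for p in sophie_germain]
print(sophie_germain[:10])  # [2, 3, 5, 11, 23, 29, 41, 53, 83, 89]
print(safe_primes[:10])     # [5, 7, 11, 23, 47, 59, 83, 107, 167, 179]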
In cryptography much larger Sophie Germain primes like 1,846,389,521,368 + 11600 are required.
Two distributed computing projects, PrimeGrid and Twin Prime Search, include searches for large Sophie Germain primes. Some of the largest known Sophie Germain primes are given in the following table.
On 2 Dec 2019, Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé, and Paul Zimmermann announced the computation of a discrete logarithm modulo the 240-digit (795 bit) prime RSA-240 + 49204 (the first safe prime above RSA-240) using a number field sieve algorithm; see Discrete logarithm records.
Properties.
There is no special primality test for safe primes the way there is for Fermat primes and Mersenne primes. However, Pocklington's criterion can be used to prove the primality of 2"p" + 1 once one has proven the primality of "p".
Just as every term except the last one of a Cunningham chain of the first kind is a Sophie Germain prime, so every term except the first of such a chain is a safe prime. Safe primes ending in 7, that is, of the form 10"n" + 7, are the last terms in such chains when they occur, since 2(10"n" + 7) + 1 = 20"n" + 15 is divisible by 5.
For a safe prime, every quadratic nonresidue, except -1 (if nonresidue), is a primitive root. It follows that for a safe prime, the least positive primitive root is a prime number.
Modular restrictions.
With the exception of 7, a safe prime "q" is of the form 6"k" − 1 or, equivalently, "q" ≡ 5 (mod 6) – as is "p" > 3. Similarly, with the exception of 5, a safe prime "q" is of the form 4"k" − 1 or, equivalently, "q" ≡ 3 (mod 4) — trivially true since ("q" − 1) / 2 must evaluate to an odd natural number. Combining both forms using lcm(6, 4) we determine that a safe prime "q" > 7 also must be of the form 12"k" − 1 or, equivalently, "q" ≡ 11 (mod 12).
It follows that, for any safe prime "q" > 7:
If "p" is a Sophie Germain prime greater than 3, then "p" must be congruent to 2 mod 3. For, if not, it would be congruent to 1 mod 3 and 2"p" + 1 would be congruent to 3 mod 3, impossible for a prime number. Similar restrictions hold for larger prime moduli, and are the basis for the choice of the "correction factor" 2"C" in the Hardy–Littlewood estimate on the density of the Sophie Germain primes.
If a Sophie Germain prime "p" is congruent to 3 (mod 4) (OEIS: , "Lucasian primes"), then its matching safe prime 2"p" + 1 (congruent to 7 modulo 8) will be a divisor of the Mersenne number 2"p" − 1. Historically, this result of Leonhard Euler was the first known criterion for a Mersenne number with a prime index to be composite. It can be used to generate the largest Mersenne numbers (with prime indices) that are known to be composite.
Infinitude and density.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Are there infinitely many Sophie Germain primes?
It is conjectured that there are infinitely many Sophie Germain primes, but this has not been proven. Several other famous conjectures in number theory generalize this and the twin prime conjecture; they include the Dickson's conjecture, Schinzel's hypothesis H, and the Bateman–Horn conjecture.
A heuristic estimate for the number of Sophie Germain primes less than "n" is
formula_2
where
formula_3
is Hardy–Littlewood's twin prime constant. For "n" = 104, this estimate predicts 156 Sophie Germain primes, which has a 20% error compared to the exact value of 190. For "n" = 107, the estimate predicts 50822, which is still 10% off from the exact value of 56032. The form of this estimate is due to G. H. Hardy and J. E. Littlewood, who applied a similar estimate to twin primes.
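The estimate is easy to check numerically; the Python sketch below compares it with an exact count obtained by trial division for n = 10^4 (the constant C is truncated here).

# Compare the heuristic estimate 2*C*n/(ln n)^2 with an exact count for n = 10^4.
import math

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

C = 0.660161  # twin prime constant (truncated)
n = 10**4
estimate = 2 * C * n / math.log(n) ** 2
exact = sum(1 for p in range(2, n) if is_prime(p) and is_prime(2 * p + 1))
print(round(estimate), exact)  # roughly 156 versus 190, as stated above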
A sequence ("p", 2"p" + 1, 2(2"p" + 1) + 1, ...) in which all of the numbers are prime is called a Cunningham chain of the first kind. Every term of such a sequence except the last is a Sophie Germain prime, and every term except the first is a safe prime. Extending the conjecture that there exist infinitely many Sophie Germain primes, it has also been conjectured that arbitrarily long Cunningham chains exist, although infinite chains are known to be impossible.
Strong primes.
A prime number "q" is a strong prime if "q" + 1 and "q" − 1 both have some large (around 500 digits) prime factors. For a safe prime "q" = 2"p" + 1, the number "q" − 1 naturally has a large prime factor, namely "p", and so a safe prime "q" meets part of the criteria for being a strong prime. The running times of some methods of factoring a number with "q" as a prime factor depend partly on the size of the prime factors of "q" − 1. This is true, for instance, of the "p" − 1 method.
Applications.
Cryptography.
Safe primes are also important in cryptography because of their use in discrete logarithm-based techniques like Diffie–Hellman key exchange. If 2"p" + 1 is a safe prime, the multiplicative group of integers modulo 2"p" + 1 has a subgroup of large prime order. It is usually this prime-order subgroup that is desirable, and the reason for using safe primes is so that the modulus is as small as possible relative to "p".
A prime number "p" = 2"q" + 1 is called a "safe prime" if "q" is prime. Thus, "p" = 2"q" + 1 is a safe prime if and only if "q" is a Sophie Germain prime, so finding safe primes and finding Sophie Germain primes are equivalent in computational difficulty. The notion of a safe prime can be strengthened to a strong prime, for which both "p" − 1 and "p" + 1 have large prime factors. Safe and strong primes were useful as the factors of secret keys in the RSA cryptosystem, because they prevent the system being broken by some factorization algorithms such as Pollard's "p" − 1 algorithm. However, with the current factorization technology, the advantage of using safe and strong primes appears to be negligible.
Similar issues apply in other cryptosystems as well, including Diffie–Hellman key exchange and similar systems that depend on the security of the discrete logarithm problem rather than on integer factorization. For this reason, key generation protocols for these methods often rely on efficient algorithms for generating strong primes, which in turn rely on the conjecture that these primes have a sufficiently high density.
In Sophie Germain Counter Mode, it was proposed to use the arithmetic in the finite field of order equal to the safe prime 2128 + 12451, to counter weaknesses in Galois/Counter Mode using the binary finite field GF(2128). However, SGCM has been shown to be vulnerable to many of the same cryptographic attacks as GCM.
Primality testing.
In the first version of the AKS primality test paper, a conjecture about Sophie Germain primes is used to lower the worst-case complexity from O(log12"n") to O(log6"n"). A later version of the paper is shown to have time complexity O(log7.5"n") which can also be lowered to O(log6"n") using the conjecture. Later variants of AKS have been proven to have complexity of O(log6"n") without any conjectures or use of Sophie Germain primes.
Pseudorandom number generation.
Safe primes obeying certain congruences can be used to generate pseudo-random numbers of use in Monte Carlo simulation.
Similarly, Sophie Germain primes may be used in the generation of pseudo-random numbers. The decimal expansion of 1/"q" will produce a stream of "q" − 1 pseudo-random digits, if "q" is the safe prime of a Sophie Germain prime "p", with "p" congruent to 3, 9, or 11 modulo 20. Thus "suitable" prime numbers "q" are 7, 23, 47, 59, 167, 179, etc. (OEIS: ) (corresponding to "p" = 3, 11, 23, 29, 83, 89, etc.) (OEIS: ). The result is a stream of length "q" − 1 digits (including leading zeros). So, for example, using "q" = 23 generates the pseudo-random digits 0, 4, 3, 4, 7, 8, 2, 6, 0, 8, 6, 9, 5, 6, 5, 2, 1, 7, 3, 9, 1, 3. Note that these digits are not appropriate for cryptographic purposes, as the value of each can be derived from its predecessor in the digit-stream.
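The digit stream quoted above for q = 23 can be reproduced by long division, as in the following Python sketch; it is included only to illustrate the construction, not as a recommendation of this generator.

# The q - 1 decimal digits of 1/q, obtained by long division, for the safe prime q = 23.
def decimal_digits(q):
    digits, remainder = [], 1
    for _ in range(q - 1):
        remainder *= 10
        digits.append(remainder // q)
        remainder %= q
    return digits

print(decimal_digits(23))
# -> [0, 4, 3, 4, 7, 8, 2, 6, 0, 8, 6, 9, 5, 6, 5, 2, 1, 7, 3, 9, 1, 3]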
In popular culture.
Sophie Germain primes are mentioned in the stage play "Proof" and the subsequent film.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x^n + y^n = z^n"
},
{
"math_id": 1,
"text": "x^n + y^n \\neq z^n"
},
{
"math_id": 2,
"text": "2C \\frac{n}{(\\ln n)^2} \\approx 1.32032\\frac{n}{(\\ln n)^2}"
},
{
"math_id": 3,
"text": "C=\\prod_{p>2} \\frac{p(p-2)}{(p-1)^2}\\approx 0.660161"
}
] | https://en.wikipedia.org/wiki?curid=89244 |
892446 | Levenberg–Marquardt algorithm | Algorithm used to solve non-linear least squares problems
In mathematics and computing, the Levenberg–Marquardt algorithm (LMA or just LM), also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The LMA interpolates between the Gauss–Newton algorithm (GNA) and the method of gradient descent. The LMA is more robust than the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be slower than the GNA. LMA can also be viewed as Gauss–Newton using a trust region approach.
The algorithm was first published in 1944 by Kenneth Levenberg, while working at the Frankford Army Arsenal. It was rediscovered in 1963 by Donald Marquardt, who worked as a statistician at DuPont, and independently by Girard, Wynne and Morrison.
The LMA is used in many software applications for solving generic curve-fitting problems. By using the Gauss–Newton algorithm it often converges faster than first-order methods. However, like other iterative optimization algorithms, the LMA finds only a local minimum, which is not necessarily the global minimum.
The problem.
The primary application of the Levenberg–Marquardt algorithm is in the least-squares curve fitting problem: given a set of formula_0 empirical pairs formula_1 of independent and dependent variables, find the parameters β of the model curve formula_2 so that the sum of the squares of the deviations formula_3 is minimized:
formula_4 which is assumed to be non-empty.
The solution.
Like other numeric minimization algorithms, the Levenberg–Marquardt algorithm is an iterative procedure. To start a minimization, the user has to provide an initial guess for the parameter vector β. In cases with only one minimum, an uninformed standard guess like formula_5 will work fine; in cases with multiple minima, the algorithm converges to the global minimum only if the initial guess is already somewhat close to the final solution.
In each iteration step, the parameter vector β is replaced by a new estimate β + δ. To determine δ, the function formula_6 is approximated by its linearization:
formula_7
where
formula_8
is the gradient (row-vector in this case) of f with respect to β.
The sum formula_3 of square deviations has its minimum at a zero gradient with respect to β. The above first-order approximation of formula_6 gives
formula_9
or in vector notation,
formula_10
Taking the derivative of this approximation of formula_11 with respect to δ and setting the result to zero gives
formula_12
where formula_13 is the Jacobian matrix, whose &NoBreak;&NoBreak;-th row equals formula_14, and where formula_15 and formula_16 are vectors with &NoBreak;&NoBreak;-th component
formula_17 and formula_18 respectively. The above expression obtained for &NoBreak;&NoBreak; comes under the Gauss–Newton method. The Jacobian matrix as defined above is not (in general) a square matrix, but a rectangular matrix of size formula_19, where formula_20 is the number of parameters (size of the vector formula_21). The matrix multiplication formula_22 yields the required formula_23 square matrix and the matrix-vector product on the right hand side yields a vector of size formula_20. The result is a set of formula_20 linear equations, which can be solved for &NoBreak;&NoBreak;.
Levenberg's contribution is to replace this equation by a "damped version":
formula_24
where I is the identity matrix and δ is the resulting increment to the estimated parameter vector β.
The (non-negative) damping factor λ is adjusted at each iteration. If reduction of S is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss–Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, λ can be increased, giving a step closer to the gradient-descent direction. Note that the gradient of S with respect to β equals formula_25. Therefore, for large values of λ, the step will be taken approximately in the direction opposite to the gradient. If either the length of the calculated step δ or the reduction of the sum of squares from the latest parameter vector β + δ falls below predefined limits, iteration stops, and the last parameter vector β is considered to be the solution.
When the damping factor λ is large relative to formula_26, inverting formula_27 is not necessary, as the update is well-approximated by the small gradient step formula_28.
To make the solution scale invariant, Marquardt's algorithm solved a modified problem with each component of the gradient scaled according to the curvature. This provides larger movement along the directions where the gradient is smaller, which avoids slow convergence in the direction of small gradient. Fletcher in his 1971 paper "A modified Marquardt subroutine for non-linear least squares" simplified the form, replacing the identity matrix I with the diagonal matrix consisting of the diagonal elements of JᵀJ:
formula_29
A similar damping factor appears in Tikhonov regularization, which is used to solve linear ill-posed problems, as well as in ridge regression, an estimation technique in statistics.
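A compact way to see how the pieces fit together is the following Python sketch of the damped update with Fletcher's diagonal scaling. It uses a forward-difference Jacobian and a simple increase-on-failure, decrease-on-success rule for the damping factor, so it is a didactic sketch rather than a production implementation; the step sizes, tolerances, and the factor nu are assumed values.

# Minimal Levenberg-Marquardt sketch: damped normal equations with Fletcher's scaling.
import numpy as np

def levenberg_marquardt(residual, beta0, lam=1e-3, nu=2.0, max_iter=100, tol=1e-10):
    """Minimize sum(residual(beta)**2); `residual` must return a NumPy array such as y - f(x, beta)."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        r = residual(beta)
        # Forward-difference Jacobian of the residual vector.
        J = np.empty((r.size, beta.size))
        for j in range(beta.size):
            step = np.zeros_like(beta)
            step[j] = 1e-7 * max(1.0, abs(beta[j]))
            J[:, j] = (residual(beta + step) - r) / step[j]
        g = J.T @ r
        A = J.T @ J
        while True:  # increase the damping until the sum of squares decreases
            delta = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
            if np.sum(residual(beta + delta) ** 2) < np.sum(r ** 2):
                beta, lam = beta + delta, lam / nu
                break
            lam *= nu
            if lam > 1e12:  # give up if no damping value helps
                return beta
        if np.linalg.norm(delta) < tol:
            break
    return beta

With a residual function such as lambda b: y - model(x, b), the returned vector plays the role of the converged parameter estimate.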
Choice of damping parameter.
Various more or less heuristic arguments have been put forward for the best choice for the damping parameter λ. Theoretical arguments exist showing why some of these choices guarantee local convergence of the algorithm; however, these choices can make the global convergence of the algorithm suffer from the undesirable properties of steepest descent, in particular, very slow convergence close to the optimum.
The absolute values of any choice depend on how well-scaled the initial problem is. Marquardt recommended starting with a value λ0 and a factor ν > 1. One initially sets formula_30 and computes the residual sum of squares formula_3 after one step from the starting point, first with the damping factor formula_30 and secondly with λ0/ν. If both of these are worse than the initial point, then the damping is increased by successive multiplication by ν until a better point is found with a new damping factor of λ0ν^k for some k.
If use of the damping factor λ/ν results in a reduction in the squared residual, then this is taken as the new value of λ (and the new optimum location is taken as that obtained with this damping factor) and the process continues; if using λ/ν resulted in a worse residual, but using λ resulted in a better residual, then λ is left unchanged and the new optimum is taken as the value obtained with λ as the damping factor.
An effective strategy for the control of the damping parameter, called "delayed gratification", consists of increasing the parameter by a small amount for each uphill step, and decreasing by a large amount for each downhill step. The idea behind this strategy is to avoid moving downhill too fast in the beginning of optimization, therefore restricting the steps available in future iterations and therefore slowing down convergence. An increase by a factor of 2 and a decrease by a factor of 3 has been shown to be effective in most cases, while for large problems more extreme values can work better, with an increase by a factor of 1.5 and a decrease by a factor of 5.
Geodesic acceleration.
When interpreting the Levenberg–Marquardt step as the velocity formula_31 along a geodesic path in the parameter space, it is possible to improve the method by adding a second order term that accounts for the acceleration formula_32 along the geodesic
formula_33
where formula_32 is the solution of
formula_34
Since this geodesic acceleration term depends only on the directional derivative formula_35 along the direction of the velocity formula_36, it does not require computing the full second order derivative matrix, requiring only a small overhead in terms of computing cost. Since the second order derivative can be a fairly complex expression, it can be convenient to replace it with a finite difference approximation
formula_37
where formula_38 and formula_39 have already been computed by the algorithm, therefore requiring only one additional function evaluation to compute formula_40. The choice of the finite difference step formula_41 can affect the stability of the algorithm, and a value of around 0.1 is usually reasonable in general.
Since the acceleration may point in the opposite direction to the velocity, to prevent it from stalling the method in case the damping is too small, an additional criterion on the acceleration is added in order to accept a step, requiring that
formula_42
where formula_43 is usually fixed to a value lesser than 1, with smaller values for harder problems.
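In code, the correction costs one extra residual evaluation and one extra linear solve per iteration. The Python sketch below assumes the velocity step delta and the Jacobian J have already been computed as in the sketch above; the acceleration is obtained here from the same damped normal equations used for the velocity, which is one common choice, and alpha = 0.75 is an assumed value.

# Geodesic acceleration sketch: second-order correction to an already computed step `delta`.
import numpy as np

def accelerated_step(residual, beta, delta, J, lam, h=0.1, alpha=0.75):
    r = residual(beta)
    # Finite-difference estimate of the directional second derivative along delta.
    f_vv = (2.0 / h) * ((residual(beta + h * delta) - r) / h - J @ delta)
    A = J.T @ J + lam * np.diag(np.diag(J.T @ J))
    accel = np.linalg.solve(A, -J.T @ f_vv)
    # Accept the correction only while the acceleration stays small relative to the velocity.
    if 2.0 * np.linalg.norm(accel) <= alpha * np.linalg.norm(delta):
        return delta + 0.5 * accel
    return delta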
The addition of a geodesic acceleration term can allow significant increase in convergence speed and it is especially useful when the algorithm is moving through narrow canyons in the landscape of the objective function, where the allowed steps are smaller and the higher accuracy due to the second order term gives significant improvements.
Example.
In this example we try to fit the function formula_44 using the Levenberg–Marquardt algorithm implemented in GNU Octave as the "leasqr" function. The graphs show progressively better fitting for the parameters formula_45, formula_46 used
in the initial curve. Only when the parameters in the last graph are chosen closest to the original do the curves fit exactly. This equation
is an example of very sensitive initial conditions for the Levenberg–Marquardt algorithm. One reason for this sensitivity is the existence of multiple minima: the function formula_47 has minima at the parameter values formula_48 and formula_49.
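In practice such a fit is usually delegated to a library routine. The Python sketch below uses SciPy's curve_fit, whose method="lm" option calls a MINPACK Levenberg–Marquardt implementation; the synthetic data, noise level, and starting guesses are illustrative assumptions chosen to show the sensitivity to initialization described above.

# Fit y = a*cos(b*x) + b*sin(a*x) from near and far starting guesses (illustrative data).
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.cos(b * x) + b * np.sin(a * x)

rng = np.random.default_rng(0)
x = np.linspace(0, 3, 300)
y = model(x, 100, 102) + rng.normal(scale=1.0, size=x.size)  # true parameters a=100, b=102

for guess in [(100.5, 102.5), (90, 95)]:  # near and far initial guesses
    try:
        params, _ = curve_fit(model, x, y, p0=guess, method="lm", maxfev=20000)
        print(guess, "->", params)
    except RuntimeError as err:  # a poor starting guess may fail to converge
        print(guess, "->", err)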
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
External links.
| [
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "\\left (x_i, y_i\\right )"
},
{
"math_id": 2,
"text": "f\\left (x, \\boldsymbol\\beta\\right )"
},
{
"math_id": 3,
"text": "S\\left (\\boldsymbol\\beta\\right )"
},
{
"math_id": 4,
"text": "\\hat{\\boldsymbol\\beta} \\in \\operatorname{argmin}\\limits_{\\boldsymbol\\beta} S\\left (\\boldsymbol\\beta\\right ) \\equiv \\operatorname{argmin}\\limits_{\\boldsymbol\\beta} \\sum_{i=1}^m \\left [y_i - f\\left (x_i, \\boldsymbol\\beta\\right )\\right ]^2,"
},
{
"math_id": 5,
"text": "\\boldsymbol\\beta^\\text{T} = \\begin{pmatrix}1,\\ 1,\\ \\dots,\\ 1\\end{pmatrix}"
},
{
"math_id": 6,
"text": "f\\left (x_i, \\boldsymbol\\beta + \\boldsymbol\\delta\\right )"
},
{
"math_id": 7,
"text": "f\\left (x_i, \\boldsymbol\\beta + \\boldsymbol\\delta\\right ) \\approx f\\left (x _i, \\boldsymbol\\beta\\right ) + \\mathbf J_i \\boldsymbol\\delta,"
},
{
"math_id": 8,
"text": "\\mathbf J_i = \\frac{\\partial f\\left (x_i, \\boldsymbol\\beta\\right )}{\\partial \\boldsymbol\\beta}"
},
{
"math_id": 9,
"text": "S\\left (\\boldsymbol\\beta + \\boldsymbol\\delta\\right ) \\approx \\sum_{i=1}^m \\left [y_i - f\\left (x_i, \\boldsymbol\\beta\\right ) - \\mathbf J_i \\boldsymbol\\delta\\right ]^2,"
},
{
"math_id": 10,
"text": "\\begin{align}\n S\\left (\\boldsymbol\\beta + \\boldsymbol\\delta\\right ) &\\approx \\left \\|\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right ) - \\mathbf J\\boldsymbol\\delta\\right \\|^2\\\\\n &= \\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right ) - \\mathbf J\\boldsymbol\\delta \\right ]^{\\mathrm T}\\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right ) - \\mathbf J\\boldsymbol\\delta\\right ]\\\\\n &= \\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right )\\right ]^{\\mathrm T}\\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right )\\right ] - \\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right )\\right ]^{\\mathrm T} \\mathbf J \\boldsymbol\\delta - \\left (\\mathbf J \\boldsymbol\\delta\\right )^{\\mathrm T} \\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right )\\right ] + \\boldsymbol\\delta^{\\mathrm T} \\mathbf J^{\\mathrm T} \\mathbf J \\boldsymbol\\delta\\\\\n &= \\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right )\\right ]^{\\mathrm T}\\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right )\\right ] - 2\\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right )\\right ]^{\\mathrm T} \\mathbf J \\boldsymbol\\delta + \\boldsymbol\\delta^{\\mathrm T} \\mathbf J^{\\mathrm T} \\mathbf J\\boldsymbol\\delta.\n\\end{align}"
},
{
"math_id": 11,
"text": "S\\left (\\boldsymbol\\beta + \\boldsymbol\\delta\\right )"
},
{
"math_id": 12,
"text": "\\left (\\mathbf J^{\\mathrm T} \\mathbf J\\right )\\boldsymbol\\delta = \\mathbf J^{\\mathrm T}\\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right )\\right ],"
},
{
"math_id": 13,
"text": "\\mathbf J"
},
{
"math_id": 14,
"text": "\\mathbf J_i"
},
{
"math_id": 15,
"text": "\\mathbf f\\left (\\boldsymbol\\beta\\right )"
},
{
"math_id": 16,
"text": "\\mathbf y"
},
{
"math_id": 17,
"text": "f\\left (x_i, \\boldsymbol\\beta\\right )"
},
{
"math_id": 18,
"text": "y_i"
},
{
"math_id": 19,
"text": "m \\times n"
},
{
"math_id": 20,
"text": "n"
},
{
"math_id": 21,
"text": "\\boldsymbol\\beta"
},
{
"math_id": 22,
"text": "\\left (\\mathbf J^{\\mathrm T} \\mathbf J\\right)"
},
{
"math_id": 23,
"text": "n \\times n"
},
{
"math_id": 24,
"text": "\\left (\\mathbf J^{\\mathrm T} \\mathbf J + \\lambda\\mathbf I\\right ) \\boldsymbol\\delta = \\mathbf J^{\\mathrm T}\\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right )\\right],"
},
{
"math_id": 25,
"text": "-2\\left (\\mathbf J^{\\mathrm T}\\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right )\\right ]\\right )^{\\mathrm T}"
},
{
"math_id": 26,
"text": " \\| \\mathbf J^{\\mathrm T} \\mathbf J \\| "
},
{
"math_id": 27,
"text": " \\mathbf J^{\\mathrm T} \\mathbf J + \\lambda \\mathbf I "
},
{
"math_id": 28,
"text": " \\lambda^{-1} \\mathbf J^{\\mathrm T}\\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right )\\right ]"
},
{
"math_id": 29,
"text": "\\left [\\mathbf J^{\\mathrm T} \\mathbf J + \\lambda \\operatorname{diag}\\left (\\mathbf J^{\\mathrm T} \\mathbf J\\right )\\right ] \\boldsymbol\\delta = \\mathbf J^{\\mathrm T}\\left [\\mathbf y - \\mathbf f\\left (\\boldsymbol\\beta\\right )\\right ]."
},
{
"math_id": 30,
"text": "\\lambda = \\lambda_0"
},
{
"math_id": 31,
"text": "\\boldsymbol{v}_k"
},
{
"math_id": 32,
"text": "\\boldsymbol{a}_k"
},
{
"math_id": 33,
"text": "\n\\boldsymbol{v}_k + \\frac{1}{2} \\boldsymbol{a}_k\n"
},
{
"math_id": 34,
"text": "\n\\boldsymbol{J}_k \\boldsymbol{a}_k = -f_{vv} .\n"
},
{
"math_id": 35,
"text": "f_{vv} = \\sum_{\\mu\\nu} v_{\\mu} v_{\\nu} \\partial_{\\mu} \\partial_{\\nu} f (\\boldsymbol{x})"
},
{
"math_id": 36,
"text": "\\boldsymbol{v}"
},
{
"math_id": 37,
"text": "\n\\begin{align}\nf_{vv}^i &\\approx \\frac{f_i(\\boldsymbol{x} + h \\boldsymbol{\\delta}) - 2 f_i(\\boldsymbol{x}) + f_i(\\boldsymbol{x} - h \\boldsymbol{\\delta})}{h^2} \\\\\n &= \\frac{2}{h} \\left( \\frac{f_i(\\boldsymbol{x} + h \\boldsymbol{\\delta}) - f_i(\\boldsymbol{x})}{h} - \\boldsymbol{J}_i \\boldsymbol{\\delta} \\right)\n\\end{align}\n"
},
{
"math_id": 38,
"text": "f(\\boldsymbol{x})"
},
{
"math_id": 39,
"text": "\\boldsymbol{J}"
},
{
"math_id": 40,
"text": "f(\\boldsymbol{x} + h \\boldsymbol{\\delta})"
},
{
"math_id": 41,
"text": "h"
},
{
"math_id": 42,
"text": "\n\\frac{2 \\left\\| \\boldsymbol{a}_k \\right\\|}{\\left\\| \\boldsymbol{v}_k \\right\\|} \\le \\alpha\n"
},
{
"math_id": 43,
"text": "\\alpha"
},
{
"math_id": 44,
"text": "y = a \\cos\\left (bX\\right ) + b \\sin\\left (aX\\right )"
},
{
"math_id": 45,
"text": "a = 100"
},
{
"math_id": 46,
"text": "b = 102"
},
{
"math_id": 47,
"text": "\\cos\\left (\\beta x\\right )"
},
{
"math_id": 48,
"text": "\\hat\\beta"
},
{
"math_id": 49,
"text": "\\hat\\beta + 2n\\pi"
}
] | https://en.wikipedia.org/wiki?curid=892446 |
89246 | Curve | Mathematical idealization of the trace left by a moving point
In mathematics, a curve (also called a curved line in older texts) is an object similar to a line, but that does not have to be straight.
Intuitively, a curve may be thought of as the trace left by a moving point. This is the definition that appeared more than 2000 years ago in Euclid's "Elements": "The [curved] line is […] the first species of quantity, which has only one dimension, namely length, without any width nor depth, and is nothing else than the flow or run of the point which […] will leave from its imaginary moving some vestige in length, exempt of any width."
This definition of a curve has been formalized in modern mathematics as: "A curve is the image of an interval to a topological space by a continuous function". In some contexts, the function that defines the curve is called a "parametrization", and the curve is a parametric curve. In this article, these curves are sometimes called "topological curves" to distinguish them from more constrained curves such as differentiable curves. This definition encompasses most curves that are studied in mathematics; notable exceptions are level curves (which are unions of curves and isolated points), and algebraic curves (see below). Level curves and algebraic curves are sometimes called implicit curves, since they are generally defined by implicit equations.
Nevertheless, the class of topological curves is very broad, and contains some curves that do not look like what one may expect of a curve, or even cannot be drawn. This is the case of space-filling curves and fractal curves. To ensure more regularity, the function that defines a curve is often supposed to be differentiable, and the curve is then said to be a differentiable curve.
A plane algebraic curve is the zero set of a polynomial in two indeterminates. More generally, an algebraic curve is the zero set of a finite set of polynomials, which satisfies the further condition of being an algebraic variety of dimension one. If the coefficients of the polynomials belong to a field k, the curve is said to be "defined over" k. In the common case of a real algebraic curve, where k is the field of real numbers, an algebraic curve is a finite union of topological curves. When complex zeros are considered, one has a "complex algebraic curve", which, from the topological point of view, is not a curve, but a surface, and is often called a Riemann surface. Although not being curves in the common sense, algebraic curves defined over other fields have been widely studied. In particular, algebraic curves over a finite field are widely used in modern cryptography.
History.
Interest in curves began long before they were the subject of mathematical study. This can be seen in numerous examples of their decorative use in art and on everyday objects dating back to prehistoric times. Curves, or at least their graphical representations, are simple to create, for example with a stick on the sand on a beach.
Historically, the term line was used in place of the more modern term curve. Hence the terms straight line and right line were used to distinguish what are today called lines from curved lines. For example, in Book I of Euclid's Elements, a line is defined as a "breadthless length" (Def. 2), while a straight line is defined as "a line that lies evenly with the points on itself" (Def. 4). Euclid's idea of a line is perhaps clarified by the statement "The extremities of a line are points," (Def. 3). Later commentators further classified lines according to various schemes. For example:
The Greek geometers had studied many other kinds of curves. One reason was their interest in solving geometrical problems that could not be solved using standard compass and straightedge construction.
These curves include:
A fundamental advance in the theory of curves was the introduction of analytic geometry by René Descartes in the seventeenth century. This enabled a curve to be described using an equation rather than an elaborate geometrical construction. This not only allowed new curves to be defined and studied, but it enabled a formal distinction to be made between algebraic curves that can be defined using polynomial equations, and transcendental curves that cannot. Previously, curves had been described as "geometrical" or "mechanical" according to how they were, or supposedly could be, generated.
Conic sections were applied in astronomy by Kepler.
Newton also worked on an early example in the calculus of variations. Solutions to variational problems, such as the brachistochrone and tautochrone questions, introduced properties of curves in new ways (in this case, the cycloid). The catenary gets its name as the solution to the problem of a hanging chain, the sort of question that became routinely accessible by means of differential calculus.
In the eighteenth century came the beginnings of the theory of plane algebraic curves, in general. Newton had studied the cubic curves, in the general description of the real points into 'ovals'. The statement of Bézout's theorem showed a number of aspects which were not directly accessible to the geometry of the time, to do with singular points and complex solutions.
Since the nineteenth century, curve theory has been viewed as the special case of dimension one of the theory of manifolds and algebraic varieties. Nevertheless, many questions remain specific to curves, such as space-filling curves, the Jordan curve theorem and Hilbert's sixteenth problem.
Topological curve.
A topological curve can be specified by a continuous function formula_0 from an interval I of the real numbers into a topological space X. Properly speaking, the "curve" is the image of formula_1 However, in some contexts, formula_2 itself is called a curve, especially when the image does not look like what is generally called a curve and does not characterize sufficiently formula_1
For example, the image of the Peano curve or, more generally, a space-filling curve completely fills a square, and therefore does not give any information on how formula_2 is defined.
A curve formula_2 is closed or is a "loop" if formula_3 and formula_4. A closed curve is thus the image of a continuous mapping of a circle. A non-closed curve may also be called an open curve.
If the domain of a topological curve is a closed and bounded interval formula_3, the curve is called a "path", also known as "topological arc" (or just <templatestyles src="Template:Visible anchor/styles.css" />arc).
A curve is simple if it is the image of an interval or a circle by an injective continuous function. In other words, if a curve is defined by a continuous function formula_2 with an interval as a domain, the curve is simple if and only if any two different points of the interval have different images, except, possibly, if the points are the endpoints of the interval. Intuitively, a simple curve is a curve that "does not cross itself and has no missing points" (a continuous non-self-intersecting curve).
A "plane curve" is a curve for which formula_5 is the Euclidean plane—these are the examples first encountered—or in some cases the projective plane. A space curve is a curve for which formula_5 is at least three-dimensional; a skew curve is a space curve which lies in no plane. These definitions of plane, space and skew curves apply also to real algebraic curves, although the above definition of a curve does not apply (a real algebraic curve may be disconnected).
A plane simple closed curve is also called a Jordan curve. It is also defined as a non-self-intersecting continuous loop in the plane. The Jordan curve theorem states that the set complement in a plane of a Jordan curve consists of two connected components (that is, the curve divides the plane into two non-intersecting regions that are both connected). The bounded region inside a Jordan curve is known as a Jordan domain.
The definition of a curve includes figures that can hardly be called curves in common usage. For example, the image of a curve can cover a square in the plane (space-filling curve), and a simple curve may have a positive area. Fractal curves can have properties that are strange for the common sense. For example, a fractal curve can have a Hausdorff dimension bigger than one (see Koch snowflake) and even a positive area. An example is the dragon curve, which has many other unusual properties.
Differentiable curve.
Roughly speaking a differentiable curve is a curve that is defined as being locally the image of an injective differentiable function formula_0 from an interval I of the real numbers into a differentiable manifold X, often formula_6
More precisely, a differentiable curve is a subset C of X where every point of C has a neighborhood U such that formula_7 is diffeomorphic to an interval of the real numbers. In other words, a differentiable curve is a differentiable manifold of dimension one.
Differentiable arc.
In Euclidean geometry, an arc (symbol: ⌒) is a connected subset of a differentiable curve.
Arcs of lines are called segments, rays, or lines, depending on how they are bounded.
A common curved example is an arc of a circle, called a circular arc.
In a sphere (or a spheroid), an arc of a great circle (or a great ellipse) is called a great arc.
Length of a curve.
If formula_8 is the formula_9-dimensional Euclidean space, and if formula_10 is an injective and continuously differentiable function, then the length of formula_11 is defined as the quantity
formula_12
The length of a curve is independent of the parametrization formula_11.
In particular, the length formula_13 of the graph of a continuously differentiable function formula_14 defined on a closed interval formula_15 is
formula_16
which can be thought of intuitively as using the Pythagorean theorem at the infinitesimal scale continuously over the full length of the curve.
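As a concrete illustration (not part of the original article), the following sketch evaluates this integral numerically for the hypothetical choice f(x) = cosh(x) on [0, 1], for which the integrand simplifies to cosh(x) and the exact length is sinh(1):

```python
# Minimal sketch: arc length of the graph of f(x) = cosh(x) on [0, 1]
# via the integral of sqrt(1 + f'(x)^2), approximated with the trapezoid rule.
import numpy as np

def graph_length(fprime, a, b, n=100_000):
    x = np.linspace(a, b, n)
    y = np.sqrt(1.0 + fprime(x) ** 2)
    dx = (b - a) / (n - 1)
    return dx * (y[:-1] + y[1:]).sum() / 2.0   # composite trapezoid rule

approx = graph_length(np.sinh, 0.0, 1.0)       # f'(x) = sinh(x)
print(approx, np.sinh(1.0))                    # both are approximately 1.1752
```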
More generally, if formula_17 is a metric space with metric formula_18, then we can define the length of a curve formula_19 by
formula_20
where the supremum is taken over all formula_21 and all partitions formula_22 of formula_23.
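The following sketch (not part of the original article) illustrates this definition for the unit circle in the Euclidean plane: the chordal sums over finer and finer partitions increase toward the length 2π.

```python
# Minimal sketch: polygonal (chordal) approximations of the length of a curve
# in a metric space, here the unit circle traced by gamma: [0, 1] -> R^2.
import math

def gamma(t):
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

def dist(p, q):                     # Euclidean metric on R^2
    return math.hypot(p[0] - q[0], p[1] - q[1])

def polygonal_length(n):
    ts = [i / n for i in range(n + 1)]
    return sum(dist(gamma(ts[i]), gamma(ts[i - 1])) for i in range(1, n + 1))

for n in (4, 16, 256):
    print(n, polygonal_length(n))   # increases toward 2*pi (about 6.2832)
```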
A rectifiable curve is a curve with finite length. A curve formula_19 is called natural (or unit-speed or parametrized by arc length) if for any formula_24 such that formula_25, we have
formula_26
If formula_19 is a Lipschitz-continuous function, then it is automatically rectifiable. Moreover, in this case, one can define the speed (or metric derivative) of formula_11 at formula_27 as
formula_28
and then show that
formula_29
Differential geometry.
While the first examples of curves that are met are mostly plane curves (that is, in everyday words, "curved lines" in "two-dimensional space"), there are obvious examples such as the helix which exist naturally in three dimensions. The needs of geometry, and also, for example, of classical mechanics, are to have a notion of curve in space of any number of dimensions. In general relativity, a world line is a curve in spacetime.
If formula_5 is a differentiable manifold, then we can define the notion of "differentiable curve" in formula_5. This general idea is enough to cover many of the applications of curves in mathematics. From a local point of view one can take formula_5 to be Euclidean space. On the other hand, it is useful to be more general, in that (for example) it is possible to define the tangent vectors to formula_5 by means of this notion of curve.
If formula_5 is a smooth manifold, a "smooth curve" in formula_5 is a smooth map
formula_0.
This is a basic notion. There are less and more restricted ideas, too. If formula_5 is a formula_30 manifold (i.e., a manifold whose charts are formula_31 times continuously differentiable), then a formula_30 curve in formula_5 is such a curve which is only assumed to be formula_30 (i.e. formula_31 times continuously differentiable). If formula_5 is an analytic manifold (i.e. infinitely differentiable and charts are expressible as power series), and formula_2 is an analytic map, then formula_2 is said to be an "analytic curve".
A differentiable curve is said to be <templatestyles src="Template:Visible anchor/styles.css" />regular if its derivative never vanishes. (In words, a regular curve never slows to a stop or backtracks on itself.) Two formula_30 differentiable curves
formula_32 and
formula_33
are said to be "equivalent" if there is a bijective formula_30 map
formula_34
such that the inverse map
formula_35
is also formula_30, and
formula_36
for all formula_37. The map formula_38 is called a "reparametrization" of formula_39; and this makes an equivalence relation on the set of all formula_30 differentiable curves in formula_5. A formula_30 "arc" is an equivalence class of formula_30 curves under the relation of reparametrization.
Algebraic curve.
Algebraic curves are the curves considered in algebraic geometry. A plane algebraic curve is the set of the points of coordinates "x", "y" such that "f"("x", "y") = 0, where "f" is a polynomial in two variables defined over some field "F". One says that the curve is "defined over" "F". Algebraic geometry normally considers not only points with coordinates in "F" but all the points with coordinates in an algebraically closed field "K".
If "C" is a curve defined by a polynomial "f" with coefficients in "F", the curve is said to be defined over "F".
In the case of a curve defined over the real numbers, one normally considers points with complex coordinates. In this case, a point with real coordinates is a "real point", and the set of all real points is the "real part" of the curve. It is therefore only the real part of an algebraic curve that can be a topological curve (this is not always the case, as the real part of an algebraic curve may be disconnected and contain isolated points). The whole curve, that is, the set of its complex points, is, from the topological point of view, a surface. In particular, the nonsingular complex projective algebraic curves are called Riemann surfaces.
The points of a curve "C" with coordinates in a field "G" are said to be rational over "G" and can be denoted "C"("G"). When "G" is the field of the rational numbers, one simply talks of "rational points". For example, Fermat's Last Theorem may be restated as: "For" "n" > 2, "every rational point of the Fermat curve of degree n has a zero coordinate".
Algebraic curves can also be space curves, or curves in a space of higher dimension, say "n". They are defined as algebraic varieties of dimension one. They may be obtained as the common solutions of at least "n"–1 polynomial equations in "n" variables. If "n"–1 polynomials are sufficient to define a curve in a space of dimension "n", the curve is said to be a complete intersection. By eliminating variables (by any tool of elimination theory), an algebraic curve may be projected onto a plane algebraic curve, which however may introduce new singularities such as cusps or double points.
A plane curve may also be completed to a curve in the projective plane: if a curve is defined by a polynomial "f" of total degree "d", then "w""d""f"("u"/"w", "v"/"w") simplifies to a homogeneous polynomial "g"("u", "v", "w") of degree "d". The values of "u", "v", "w" such that "g"("u", "v", "w") = 0 are the homogeneous coordinates of the points of the completion of the curve in the projective plane and the points of the initial curve are those such that "w" is not zero. An example is the Fermat curve "u""n" + "v""n" = "w""n", which has an affine form "x""n" + "y""n" = 1. A similar process of homogenization may be defined for curves in higher dimensional spaces.
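As a small sketch (assuming SymPy is available; the cubic f = y^2 - x^3 - x is an illustrative choice, not taken from the article), the homogenization step can be carried out symbolically:

```python
# Minimal sketch: homogenize f(x, y) of total degree d into g(u, v, w) = w^d * f(u/w, v/w).
from sympy import symbols, expand, Poly

x, y, u, v, w = symbols('x y u v w')
f = y**2 - x**3 - x                          # affine curve f(x, y) = 0, here of degree d = 3
d = Poly(f, x, y).total_degree()
g = expand(w**d * f.subs({x: u / w, y: v / w}))
print(g)                                      # -u**3 - u*w**2 + v**2*w
print(g.subs(w, 1))                           # setting w = 1 recovers f (in u, v)
```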
Except for lines, the simplest examples of algebraic curves are the conics, which are nonsingular curves of degree two and genus zero. Elliptic curves, which are nonsingular curves of genus one, are studied in number theory, and have important applications to cryptography.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma \\colon I \\rightarrow X"
},
{
"math_id": 1,
"text": "\\gamma."
},
{
"math_id": 2,
"text": "\\gamma"
},
{
"math_id": 3,
"text": "I = [a,\nb]"
},
{
"math_id": 4,
"text": "\\gamma(a) = \\gamma(b)"
},
{
"math_id": 5,
"text": "X"
},
{
"math_id": 6,
"text": "\\mathbb{R}^n."
},
{
"math_id": 7,
"text": "C\\cap U"
},
{
"math_id": 8,
"text": " X = \\mathbb{R}^{n} "
},
{
"math_id": 9,
"text": " n "
},
{
"math_id": 10,
"text": " \\gamma: [a,b] \\to \\mathbb{R}^{n} "
},
{
"math_id": 11,
"text": " \\gamma "
},
{
"math_id": 12,
"text": "\n\\operatorname{Length}(\\gamma) ~ \\stackrel{\\text{def}}{=} ~ \\int_{a}^{b} |\\gamma\\,'(t)| ~ \\mathrm{d}{t}.\n"
},
{
"math_id": 13,
"text": " s "
},
{
"math_id": 14,
"text": " y = f(x) "
},
{
"math_id": 15,
"text": " [a,b] "
},
{
"math_id": 16,
"text": "\ns = \\int_{a}^{b} \\sqrt{1 + [f'(x)]^{2}} ~ \\mathrm{d}{x},\n"
},
{
"math_id": 17,
"text": " X "
},
{
"math_id": 18,
"text": " d "
},
{
"math_id": 19,
"text": " \\gamma: [a,b] \\to X "
},
{
"math_id": 20,
"text": "\n\\operatorname{Length}(\\gamma)\n~ \\stackrel{\\text{def}}{=} ~\n\\sup \\!\n\\left\\{\n\\sum_{i = 1}^{n} d(\\gamma(t_{i}),\\gamma(t_{i - 1})) ~ \\Bigg| ~ n \\in \\mathbb{N} ~ \\text{and} ~ a = t_{0} < t_{1} < \\ldots < t_{n} = b\n\\right\\},\n"
},
{
"math_id": 21,
"text": " n \\in \\mathbb{N} "
},
{
"math_id": 22,
"text": " t_{0} < t_{1} < \\ldots < t_{n} "
},
{
"math_id": 23,
"text": " [a, b] "
},
{
"math_id": 24,
"text": " t_{1},t_{2} \\in [a,b] "
},
{
"math_id": 25,
"text": " t_{1} \\leq t_{2} "
},
{
"math_id": 26,
"text": "\n\\operatorname{Length} \\! \\left( \\gamma|_{[t_{1},t_{2}]} \\right) = t_{2} - t_{1}.\n"
},
{
"math_id": 27,
"text": " t \\in [a,b] "
},
{
"math_id": 28,
"text": "\n{\\operatorname{Speed}_{\\gamma}}(t) ~ \\stackrel{\\text{def}}{=} ~ \\limsup_{s \\to t} \\frac{d(\\gamma(s),\\gamma(t))}{|s - t|}\n"
},
{
"math_id": 29,
"text": "\n\\operatorname{Length}(\\gamma) = \\int_{a}^{b} {\\operatorname{Speed}_{\\gamma}}(t) ~ \\mathrm{d}{t}.\n"
},
{
"math_id": 30,
"text": "C^k"
},
{
"math_id": 31,
"text": "k"
},
{
"math_id": 32,
"text": "\\gamma_1 \\colon I \\rightarrow X"
},
{
"math_id": 33,
"text": "\\gamma_2 \\colon J \\rightarrow X"
},
{
"math_id": 34,
"text": "p \\colon J \\rightarrow I"
},
{
"math_id": 35,
"text": "p^{-1} \\colon I \\rightarrow J"
},
{
"math_id": 36,
"text": "\\gamma_{2}(t) = \\gamma_{1}(p(t))"
},
{
"math_id": 37,
"text": "t"
},
{
"math_id": 38,
"text": "\\gamma_2"
},
{
"math_id": 39,
"text": "\\gamma_1"
}
] | https://en.wikipedia.org/wiki?curid=89246 |
89247 | Greedy algorithm | Sequence of locally optimal choices
A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage. In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time.
For example, a greedy strategy for the travelling salesman problem (which is of high computational complexity) is the following heuristic: "At each step of the journey, visit the nearest unvisited city." This heuristic does not intend to find the best solution, but it terminates in a reasonable number of steps; finding an optimal solution to such a complex problem typically requires unreasonably many steps. In mathematical optimization, greedy algorithms optimally solve combinatorial problems having the properties of matroids and give constant-factor approximations to optimization problems with the submodular structure.
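A minimal sketch of this nearest-neighbour heuristic (the distance matrix below is an illustrative assumption, not taken from any source):

```python
# Minimal sketch: nearest-neighbour heuristic for the travelling salesman problem.
# Starting from a city, repeatedly visit the closest unvisited city.
def nearest_neighbour_tour(dist, start=0):
    """dist[i][j] is the distance between cities i and j (symmetric)."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        current = tour[-1]
        nxt = min(unvisited, key=lambda city: dist[current][city])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Example: 4 cities with a symmetric distance matrix.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(nearest_neighbour_tour(d))  # [0, 1, 3, 2] (greedy, not necessarily optimal)
```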
Specifics.
Greedy algorithms produce good solutions on some mathematical problems, but not on others. Most problems for which they work will have two properties: the "greedy choice property" (a globally optimal solution can be reached by making a locally optimal choice at each step) and "optimal substructure" (an optimal solution to the problem contains optimal solutions to its subproblems).
Cases of failure.
Greedy algorithms fail to produce the optimal solution for many other problems and may even produce the "unique worst possible" solution. One example is the travelling salesman problem mentioned above: for each number of cities, there is an assignment of distances between the cities for which the nearest-neighbour heuristic produces the unique worst possible tour.
For other possible examples, see horizon effect.
Types.
Greedy algorithms can be characterized as being 'short sighted', and also as 'non-recoverable'. They are ideal only for problems that have an 'optimal substructure'. Despite this, for many simple problems, the best-suited algorithms are greedy. It is important, however, to note that the greedy algorithm can be used as a selection algorithm to prioritize options within a search, or branch-and-bound algorithm. There are a few variations to the greedy algorithm:
Theory.
Greedy algorithms have a long history of study in combinatorial optimization and theoretical computer science. Greedy heuristics are known to produce suboptimal results on many problems, and so natural questions are:
A large body of literature exists answering these questions for general classes of problems, such as matroids, as well as for specific problems, such as set cover.
Matroids.
A matroid is a mathematical structure that generalizes the notion of linear independence from vector spaces to arbitrary sets. If an optimization problem has the structure of a matroid, then the appropriate greedy algorithm will solve it optimally.
Submodular functions.
A function formula_0 defined on subsets of a set formula_1 is called submodular if for every formula_2 we have that formula_3.
Suppose one wants to find a set formula_4 which maximizes formula_0. The greedy algorithm, which builds up a set formula_4 by incrementally adding the element which increases formula_0 the most at each step, produces as output a set that is at least formula_5. That is, greedy performs within a constant factor of formula_6 as good as the optimal solution.
Similar guarantees are provable when additional constraints, such as cardinality constraints, are imposed on the output, though often slight variations on the greedy algorithm are required.
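As an illustration of the cardinality-constrained case, the following minimal sketch (not from the article) applies the greedy rule of adding the set with the largest marginal gain to a maximum-coverage instance, a standard monotone submodular objective:

```python
# Minimal sketch: greedy maximization of a monotone submodular function (set coverage)
# under a cardinality constraint k; the classic guarantee is a (1 - 1/e) factor.
def greedy_max_coverage(sets, k):
    chosen, covered = [], set()
    for _ in range(k):
        candidates = [name for name in sets if name not in chosen]
        if not candidates:
            break
        best = max(candidates, key=lambda name: len(set(sets[name]) - covered))
        if not set(sets[best]) - covered:
            break                      # no remaining set adds new elements
        chosen.append(best)
        covered |= set(sets[best])
    return chosen, covered

sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}, "D": {1, 5}}
print(greedy_max_coverage(sets, k=2))  # (['C', 'A'], {1, 2, 3, 4, 5, 6, 7})
```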
Other problems with guarantees.
Other problems for which the greedy algorithm gives a strong guarantee, but not an optimal solution, include the set cover problem, for which the greedy algorithm achieves a logarithmic approximation factor.
Many of these problems have matching lower bounds; i.e., the greedy algorithm does not perform better than the guarantee in the worst case.
Applications.
Greedy algorithms typically (but not always) fail to find the globally optimal solution because they usually do not operate exhaustively on all the data. They can make commitments to certain choices too early, preventing them from finding the best overall solution later. For example, all known greedy coloring algorithms for the graph coloring problem and all other NP-complete problems do not consistently find optimum solutions. Nevertheless, they are useful because they are quick to think up and often give good approximations to the optimum.
If a greedy algorithm can be proven to yield the global optimum for a given problem class, it typically becomes the method of choice because it is faster than other optimization methods like dynamic programming. Examples of such greedy algorithms are Kruskal's algorithm and Prim's algorithm for finding minimum spanning trees and the algorithm for finding optimum Huffman trees.
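For instance, a minimal sketch of Kruskal's algorithm, using a union-find structure to reject cycle-creating edges (the example graph is an illustrative assumption):

```python
# Minimal sketch: Kruskal's algorithm, a greedy method that provably yields a
# minimum spanning tree. Sort edges by weight and keep each edge that joins
# two different components of the forest built so far.
def kruskal_mst(n, edges):
    """n: number of vertices (0..n-1); edges: list of (weight, u, v)."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components: keep it
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal_mst(4, edges))   # [(0, 1, 1), (1, 3, 2), (1, 2, 3)]
```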
Greedy algorithms appear in the network routing as well. Using greedy routing, a message is forwarded to the neighbouring node which is "closest" to the destination. The notion of a node's location (and hence "closeness") may be determined by its physical location, as in geographic routing used by ad hoc networks. Location may also be an entirely artificial construct as in small world routing and distributed hash table.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "\\Omega"
},
{
"math_id": 2,
"text": "S, T \\subseteq \\Omega"
},
{
"math_id": 3,
"text": "f(S)+f(T)\\geq f(S\\cup T)+f(S\\cap T)"
},
{
"math_id": 4,
"text": "S"
},
{
"math_id": 5,
"text": "(1 - 1/e) \\max_{X \\subseteq \\Omega} f(X)"
},
{
"math_id": 6,
"text": "(1 - 1/e) \\approx 0.63"
}
] | https://en.wikipedia.org/wiki?curid=89247 |
8924792 | Racah W-coefficient | Racah's W-coefficients were introduced by Giulio Racah in 1942. These coefficients have a purely mathematical definition. In physics they are used in calculations involving the quantum mechanical description of angular momentum, for example in atomic theory.
The coefficients appear when there are three sources of angular momentum in the problem. For example, consider an atom with one electron in an s orbital and one electron in a p orbital. Each electron has electron spin angular momentum and in addition the p orbital has orbital angular momentum (an s orbital has zero orbital angular momentum). The atom may be described by "LS" coupling or by "jj" coupling as explained in the article on angular momentum coupling. The transformation between the wave functions that correspond to these two couplings involves a Racah W-coefficient.
Apart from a phase factor, Racah's W-coefficients are equal to Wigner's 6-j symbols, so any equation involving Racah's W-coefficients may be rewritten using 6-"j" symbols. This is often advantageous because the symmetry properties of 6-"j" symbols are easier to remember.
Racah coefficients are related to recoupling coefficients by
formula_0
Recoupling coefficients are elements of a unitary transformation and their definition is given in the next section. Racah coefficients have more convenient symmetry properties than the recoupling coefficients (but less convenient than the 6-"j" symbols).
Recoupling coefficients.
Coupling of two angular momenta formula_1 and formula_2 is the construction of simultaneous eigenfunctions of formula_3 and formula_4, where formula_5, as explained in the article on Clebsch–Gordan coefficients. The result is
formula_6
where formula_7 and formula_8.
Coupling of three angular momenta formula_1, formula_2, and formula_9, may be done by first coupling formula_1 and formula_2 to formula_10 and next coupling formula_10 and formula_9 to total angular momentum formula_11:
formula_12
Alternatively, one may first couple formula_2 and formula_9 to formula_13 and next couple formula_1 and formula_13 to formula_11:
formula_14
Both coupling schemes result in complete orthonormal bases for the formula_15 dimensional space spanned by
formula_16
Hence, the two total angular momentum bases are related by a unitary transformation. The matrix elements of this unitary transformation are given by a scalar product and are known as recoupling coefficients. The coefficients are independent of formula_17 and so we have
formula_18
The independence of formula_17 follows readily by writing this equation for formula_19 and applying the lowering operator formula_20 to both sides of the equation.
The definition of Racah W-coefficients lets us write this final expression as
formula_21
Algebra.
Let
formula_22
be the usual triangular factor; then the Racah coefficient is the product of four of these factors times a sum over factorials,
formula_23
where
formula_24
and
formula_25
formula_26
formula_27
formula_28
The sum over formula_29 is finite over the range
formula_30
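The following sketch (not part of the article, and restricted to integer angular momenta so that plain factorials suffice) evaluates W(abcd; ef) directly from the formulas above and, if SymPy is available, cross-checks the result against sympy.physics.wigner.racah:

```python
# Minimal sketch: Racah W coefficient from the triangle factors and the factorial sum.
# Integer arguments only, so that math.factorial can be used throughout.
from math import factorial, sqrt

def delta(a, b, c):
    return sqrt(factorial(a + b - c) * factorial(a - b + c) * factorial(-a + b + c)
                / factorial(a + b + c + 1))

def racah_w(a, b, c, d, e, f):
    alphas = [a + b + e, c + d + e, a + c + f, b + d + f]
    betas = [a + b + c + d, a + d + e + f, b + c + e + f]
    total = 0.0
    for z in range(max(alphas), min(betas) + 1):
        num = (-1) ** (z + betas[0]) * factorial(z + 1)
        den = 1
        for al in alphas:
            den *= factorial(z - al)
        for be in betas:
            den *= factorial(be - z)
        total += num / den
    return delta(a, b, e) * delta(c, d, e) * delta(a, c, f) * delta(b, d, f) * total

print(racah_w(1, 1, 1, 1, 1, 1))                 # 0.1666... = 1/6

from sympy.physics.wigner import racah           # optional cross-check
print(float(racah(1, 1, 1, 1, 1, 1)))            # 1/6 as well
```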
Relation to Wigner's 6-j symbol.
Racah's W-coefficients are related to Wigner's 6-j symbols, which have even more convenient symmetry properties
formula_31
or, equivalently,
formula_32
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n W(j_1j_2Jj_3;J_{12}J_{23}) \\equiv \\frac{\\langle (j_1, (j_2j_3)J_{23}) J | ((j_1j_2)J_{12},j_3)J \\rangle}{\\sqrt{(2J_{12}+1)(2J_{23}+1)}}.\n"
},
{
"math_id": 1,
"text": "\\mathbf{j}_1"
},
{
"math_id": 2,
"text": "\\mathbf{j}_2"
},
{
"math_id": 3,
"text": "\\mathbf{J}^2"
},
{
"math_id": 4,
"text": "J_z"
},
{
"math_id": 5,
"text": "\\mathbf{J}=\\mathbf{j}_1+\\mathbf{j}_2"
},
{
"math_id": 6,
"text": "\n |(j_1j_2)JM\\rangle = \\sum_{m_1=-j_1}^{j_1} \\sum_{m_2=-j_2}^{j_2}\n |j_1m_1\\rangle |j_2m_2\\rangle \\langle j_1m_1j_2m_2|JM\\rangle,\n"
},
{
"math_id": 7,
"text": "J=|j_1-j_2|,\\ldots,j_1+j_2"
},
{
"math_id": 8,
"text": "M=-J,\\ldots,J"
},
{
"math_id": 9,
"text": "\\mathbf{j}_3"
},
{
"math_id": 10,
"text": "\\mathbf{J}_{12}"
},
{
"math_id": 11,
"text": "\\mathbf{J}"
},
{
"math_id": 12,
"text": "\n |((j_1j_2)J_{12}j_3)JM\\rangle = \\sum_{M_{12}=-J_{12}}^{J_{12}} \\sum_{m_3=-j_3}^{j_3}\n |(j_1j_2)J_{12}M_{12}\\rangle |j_3m_3\\rangle \\langle J_{12}M_{12}j_3m_3|JM\\rangle\n"
},
{
"math_id": 13,
"text": "\\mathbf{J}_{23}"
},
{
"math_id": 14,
"text": "\n |(j_1,(j_2j_3)J_{23})JM \\rangle = \\sum_{m_1=-j_1}^{j_1} \\sum_{M_{23}=-J_{23}}^{J_{23}}\n |j_1m_1\\rangle |(j_2j_3)J_{23}M_{23}\\rangle \\langle j_1m_1J_{23}M_{23}|JM\\rangle\n"
},
{
"math_id": 15,
"text": "(2j_1+1)(2j_2+1)(2j_3+1)"
},
{
"math_id": 16,
"text": "\n |j_1 m_1\\rangle |j_2 m_2\\rangle |j_3 m_3\\rangle, \\;\\; m_1=-j_1,\\ldots,j_1;\\;\\; m_2=-j_2,\\ldots,j_2;\\;\\; m_3=-j_3,\\ldots,j_3.\n"
},
{
"math_id": 17,
"text": "M"
},
{
"math_id": 18,
"text": "\n |((j_1j_2)J_{12}j_3)JM\\rangle = \\sum_{J_{23}} |(j_1,(j_2j_3)J_{23})JM \\rangle\n \\langle (j_1,(j_2j_3)J_{23})J |((j_1j_2)J_{12}j_3)J\\rangle.\n"
},
{
"math_id": 19,
"text": "M=J"
},
{
"math_id": 20,
"text": "J_-"
},
{
"math_id": 21,
"text": "\n |((j_1j_2)J_{12}j_3)JM\\rangle = \\sum_{J_{23}} |(j_1,(j_2j_3)J_{23})JM \\rangle\n W(j_1j_2Jj_3; J_{12}J_{23}) \\sqrt{(2J_{12}+1)(2J_{23}+1)}.\n"
},
{
"math_id": 22,
"text": "\\Delta(a,b,c)=[(a+b-c)!(a-b+c)!(-a+b+c)!/(a+b+c+1)!]^{1/2}"
},
{
"math_id": 23,
"text": "W(abcd;ef)=\\Delta(a,b,e)\\Delta(c,d,e)\\Delta(a,c,f)\\Delta(b,d,f)w(abcd;ef)\n"
},
{
"math_id": 24,
"text": "w(abcd;ef)\\equiv\n\\sum_z\\frac{(-1)^{z+\\beta_1}(z+1)!}{(z-\\alpha_1)!(z-\\alpha_2)!(z-\\alpha_3)!\n(z-\\alpha_4)!(\\beta_1-z)!(\\beta_2-z)!(\\beta_3-z)!}"
},
{
"math_id": 25,
"text": "\\alpha_1=a+b+e;\\quad \\beta_1=a+b+c+d;"
},
{
"math_id": 26,
"text": "\\alpha_2=c+d+e;\\quad \\beta_2=a+d+e+f;"
},
{
"math_id": 27,
"text": "\\alpha_3=a+c+f;\\quad \\beta_3=b+c+e+f;"
},
{
"math_id": 28,
"text": "\\alpha_4=b+d+f."
},
{
"math_id": 29,
"text": "z"
},
{
"math_id": 30,
"text": " \\max(\\alpha_1,\\alpha_2,\\alpha_3,\\alpha_4) \\le z \\le \\min(\\beta_1,\\beta_2,\\beta_3). "
},
{
"math_id": 31,
"text": "\n\nW(abcd;ef)(-1)^{a+b+c+d}=\n\\begin{Bmatrix}\n a&b&e\\\\\n d&c&f\n\\end{Bmatrix}.\n"
},
{
"math_id": 32,
"text": "\n W(j_1j_2Jj_3;J_{12}J_{23}) = (-1)^{j_1+j_2+j_3+J}\n\\begin{Bmatrix}\n j_1 & j_2 & J_{12}\\\\\n j_3 & J & J_{23}\n\\end{Bmatrix}.\n"
}
] | https://en.wikipedia.org/wiki?curid=8924792 |
8925452 | 6-j symbol | Wigner's 6-"j" symbols were introduced by Eugene Paul Wigner in 1940 and published in 1965. They are defined as a sum over products of four Wigner 3-j symbols,
formula_0
The summation is over all six "m""i" allowed by the selection rules of the 3-"j" symbols.
They are closely related to the Racah W-coefficients, which are used for recoupling 3 angular momenta, although Wigner 6-"j" symbols have higher symmetry and therefore provide a more efficient means of storing the recoupling coefficients. Their relationship is given by:
formula_1
Symmetry relations.
The 6-"j" symbol is invariant under any permutation of the columns:
formula_2
The 6-"j" symbol is also invariant if upper and lower arguments
are interchanged in any two columns:
formula_3
These equations reflect the 24 symmetry operations of the automorphism group that leave the associated tetrahedral Yutsis graph with 6 edges invariant: mirror operations that exchange two vertices and swap an adjacent pair of edges.
The 6-"j" symbol
formula_4
is zero unless "j"1, "j"2, and "j"3 satisfy triangle conditions,
i.e.,
formula_5
In combination with the symmetry relation for interchanging upper and lower arguments this shows that triangle conditions must also be satisfied for the triads ("j"1, "j"5, "j"6), ("j"4, "j"2, "j"6), and ("j"4, "j"5, "j"3).
Furthermore, the sum of the elements of each triad must be an integer. Therefore, the members of each triad are either all integers or contain one integer and two half-integers.
Special case.
When "j"6 = 0 the expression for the 6-"j" symbol is:
formula_6
The "triangular delta" {"j"1 "j"2 "j"3} is equal to 1 when the triad ("j"1, "j"2, "j"3) satisfies the triangle conditions, and zero otherwise. The symmetry relations can be used to find the expression when another "j" is equal to zero.
Orthogonality relation.
The 6-"j" symbols satisfy this orthogonality relation:
formula_7
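A small numerical check of this relation (a sketch assuming SymPy's sympy.physics.wigner module; the particular angular momenta are arbitrary choices):

```python
# Minimal sketch: verify the 6-j orthogonality relation for one choice of arguments.
from sympy import Rational, simplify
from sympy.physics.wigner import wigner_6j

j1, j2, j4, j5 = 1, 2, 1, 2
j6, j6p = 1, 1                      # set j6p = 2 to see the sum vanish instead

total = sum((2 * j3 + 1)
            * wigner_6j(j1, j2, j3, j4, j5, j6)
            * wigner_6j(j1, j2, j3, j4, j5, j6p)
            for j3 in range(abs(j1 - j2), j1 + j2 + 1))

print(simplify(total), Rational(1, 2 * j6 + 1))   # both equal 1/3 when j6 == j6p
```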
Asymptotics.
A remarkable formula for the asymptotic behavior of the 6-"j" symbol was first conjectured by Ponzano and Regge and later proven by Roberts. The asymptotic formula applies when all six quantum numbers "j"1, ..., "j"6 are taken to be large and associates to the 6-"j" symbol the geometry of a tetrahedron. If the 6-"j" symbol is determined by the quantum numbers "j"1, ..., "j"6, the associated tetrahedron has edge lengths "J"i = "j"i+1/2 ("i" = 1, ..., 6) and the asymptotic formula is given by,
formula_8
The notation is as follows: Each θi is the external dihedral angle about the edge "J"i of the associated tetrahedron and the amplitude factor is expressed in terms of the volume, "V", of this tetrahedron.
Mathematical interpretation.
In representation theory, 6-"j" symbols are matrix coefficients of the associator isomorphism in a tensor category. For example, if we are given three representations "V"i, "V"j, "V"k of a group (or quantum group), one has a natural isomorphism
formula_9
of tensor product representations, induced by coassociativity of the corresponding bialgebra. One of the axioms defining a monoidal category is that associators satisfy a pentagon identity, which is equivalent to the Biedenharn-Elliot identity for 6-"j" symbols.
When a monoidal category is semisimple, we can restrict our attention to irreducible objects, and define multiplicity spaces
formula_10
so that tensor products are decomposed as:
formula_11
where the sum is over all isomorphism classes of irreducible objects. Then:
formula_12
The associativity isomorphism induces a vector space isomorphism
formula_13
and the 6j symbols are defined as the component maps:
formula_14
When the multiplicity spaces have canonical basis elements and dimension at most one (as in the case of "SU"(2) in the traditional setting), these component maps can be interpreted as numbers, and the 6-"j" symbols become ordinary matrix coefficients.
In abstract terms, the 6-"j" symbols are precisely the information that is lost when passing from a semisimple monoidal category to its Grothendieck ring, since one can reconstruct a monoidal structure using the associator. For the case of representations of a finite group, it is well known that the character table alone (which determines the underlying abelian category and the Grothendieck ring structure) does not determine a group up to isomorphism, while the symmetric monoidal category structure does, by Tannaka-Krein duality. In particular, the two nonabelian groups of order 8 have equivalent abelian categories of representations and isomorphic Grothendieck rings, but the 6-"j" symbols of their representation categories are distinct, meaning their representation categories are inequivalent as monoidal categories. Thus, the 6-"j" symbols give an intermediate level of information that in fact uniquely determines the groups in many cases, such as when the group is of odd order or simple.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & j_6\n \\end{Bmatrix}\n = \\sum_{m_1, \\dots, m_6} (-1)^{\\sum_{k = 1}^6 (j_k - m_k)}\n \\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n -m_1 & -m_2 & -m_3\n \\end{pmatrix}\n \\begin{pmatrix}\n j_1 & j_5 & j_6\\\\\n m_1 & -m_5 & m_6\n \\end{pmatrix}\n \\begin{pmatrix}\n j_4 & j_2 & j_6\\\\\n m_4 & m_2 & -m_6\n \\end{pmatrix}\n \\begin{pmatrix}\n j_4 & j_5 & j_3\\\\\n -m_4 & m_5 & m_3\n \\end{pmatrix}\n.\n"
},
{
"math_id": 1,
"text": "\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & j_6\n \\end{Bmatrix}\n = (-1)^{j_1 + j_2 + j_4 + j_5} W(j_1 j_2 j_5 j_4; j_3 j_6).\n"
},
{
"math_id": 2,
"text": "\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & j_6\n \\end{Bmatrix}\n =\n \\begin{Bmatrix}\n j_2 & j_1 & j_3\\\\\n j_5 & j_4 & j_6\n \\end{Bmatrix}\n=\n \\begin{Bmatrix}\n j_1 & j_3 & j_2\\\\\n j_4 & j_6 & j_5\n \\end{Bmatrix}\n=\n \\begin{Bmatrix}\n j_3 & j_2 & j_1\\\\\n j_6 & j_5 & j_4\n \\end{Bmatrix}\n= \\cdots\n"
},
{
"math_id": 3,
"text": "\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & j_6\n \\end{Bmatrix}\n =\n \\begin{Bmatrix}\n j_4 & j_5 & j_3\\\\\n j_1 & j_2 & j_6\n \\end{Bmatrix}\n =\n \\begin{Bmatrix}\n j_1 & j_5 & j_6\\\\\n j_4 & j_2 & j_3\n \\end{Bmatrix}\n =\n \\begin{Bmatrix}\n j_4 & j_2 & j_6\\\\\n j_1 & j_5 & j_3\n \\end{Bmatrix}.\n"
},
{
"math_id": 4,
"text": "\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & j_6\n \\end{Bmatrix}\n"
},
{
"math_id": 5,
"text": "\n j_1 = |j_2-j_3|, \\ldots, j_2+j_3\n"
},
{
"math_id": 6,
"text": "\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & 0\n \\end{Bmatrix}\n = \\frac{\\delta_{j_2,j_4}\\delta_{j_1,j_5}}{\\sqrt{(2j_1+1)(2j_2+1)}} (-1)^{j_1+j_2+j_3} \\begin{Bmatrix} j_1 & j_2 & j_3 \\end{Bmatrix}.\n"
},
{
"math_id": 7,
"text": "\n \\sum_{j_3} (2j_3+1)\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & j_6\n \\end{Bmatrix}\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & j_6'\n \\end{Bmatrix}\n = \\frac{\\delta_{j_6^{}j_6'}}{2j_6+1} \\begin{Bmatrix} j_1 & j_5 & j_6 \\end{Bmatrix} \\begin{Bmatrix} j_4 & j_2 & j_6 \\end{Bmatrix}.\n"
},
{
"math_id": 8,
"text": "\n\\begin{Bmatrix}\nj_1 & j_2 & j_3\\\\\nj_4 & j_5 & j_6\n\\end{Bmatrix}\n\\sim \\frac{1}{\\sqrt{12 \\pi |V|}} \\cos{\\left( \\sum_{i=1}^{6} J_i \\theta_i +\\frac{\\pi}{4}\\right)}.\n"
},
{
"math_id": 9,
"text": "(V_i \\otimes V_j) \\otimes V_k \\to V_i \\otimes (V_j \\otimes V_k)"
},
{
"math_id": 10,
"text": "H_{i,j}^\\ell = \\operatorname{Hom}(V_{\\ell}, V_i \\otimes V_j)"
},
{
"math_id": 11,
"text": "V_i \\otimes V_j = \\bigoplus_\\ell H_{i,j}^\\ell \\otimes V_\\ell"
},
{
"math_id": 12,
"text": "(V_i \\otimes V_j) \\otimes V_k \\cong \\bigoplus_{\\ell,m} H_{i,j}^\\ell \\otimes H_{\\ell,k}^m \\otimes V_m \\qquad \\text{while} \\qquad V_i \\otimes (V_j \\otimes V_k) \\cong \\bigoplus_{m,n} H_{i,n}^m \\otimes H_{j,k}^n \\otimes V_m"
},
{
"math_id": 13,
"text": "\\Phi_{i,j}^{k,m}: \\bigoplus_{\\ell} H_{i,j}^\\ell \\otimes H_{\\ell,k}^m \\to \\bigoplus_n H_{i,n}^m \\otimes H_{j,k}^n"
},
{
"math_id": 14,
"text": "\n \\begin{Bmatrix}\n i & j & \\ell\\\\\n k & m & n\n \\end{Bmatrix}\n= (\\Phi_{i,j}^{k,m})_{\\ell,n}"
}
] | https://en.wikipedia.org/wiki?curid=8925452 |
8925986 | Goodman and Kruskal's gamma | Statistic for rank correlation
In statistics, Goodman and Kruskal's gamma is a measure of rank correlation, i.e., the similarity of the orderings of the data when ranked by each of the quantities. It measures the strength of association of the cross tabulated data when both variables are measured at the ordinal level. It makes no adjustment for either table size or ties. Values range from −1 (100% negative association, or perfect inversion) to +1 (100% positive association, or perfect agreement). A value of zero indicates the absence of association.
This statistic (which is distinct from Goodman and Kruskal's lambda) is named after Leo Goodman and William Kruskal, who proposed it in a series of papers from 1954 to 1972.
Definition.
The estimate of gamma, "G", depends on two quantities:
*"Ns", the number of pairs of cases ranked in the same order on both variables (number of concordant pairs),
*"Nd", the number of pairs of cases ranked in reversed order on both variables (number of reversed pairs),
where "ties" (cases where either of the two variables in the pair are equal) are dropped.
Then
formula_0
This statistic can be regarded as the maximum likelihood estimator for the theoretical quantity formula_1, where
formula_2
and where "P""s" and "P""d" are the probabilities that a randomly selected pair of observations will place in the same or opposite order respectively, when ranked by both variables.
Critical values for the gamma statistic are sometimes found by using an approximation, whereby a transformed value "t" of the statistic is referred to the Student t distribution, where
formula_3
and where "n" is the number of observations (not the number of pairs):
formula_4
Yule's Q.
A special case of Goodman and Kruskal's gamma is Yule's Q, also known as the Yule coefficient of association, which is specific to 2×2 matrices. Consider the following contingency table of events, where each value is a count of an event's frequency:
Yule's Q is given by:
formula_5
Although computed in the same fashion as Goodman and Kruskal's gamma, it has a slightly broader interpretation because the distinction between nominal and ordinal scales becomes a matter of arbitrary labeling for dichotomous distinctions. Thus, whether Q is positive or negative depends merely on which pairings the analyst considers to be concordant, but is otherwise symmetric.
"Q" varies from −1 to +1. −1 reflects total negative association, +1 reflects perfect positive association and 0 reflects no association at all. The sign depends on which pairings the analyst initially considered to be concordant, but this choice does not affect the magnitude.
In terms of the odds ratio OR, Yule's "Q" is given by
formula_6
and so Yule's "Q" and Yule's "Y" (the coefficient of colligation) are related by
formula_7
formula_8
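A small sketch (with illustrative counts, not from the article) computing Yule's "Q" both from the 2×2 counts and from the odds-ratio form above:

```python
# Minimal sketch: Yule's Q for a 2x2 table with counts a, b, c, d,
# plus the equivalent odds-ratio form Q = (OR - 1) / (OR + 1).
def yules_q(a, b, c, d):
    return (a * d - b * c) / (a * d + b * c)

a, b, c, d = 30, 10, 5, 25                      # illustrative counts
odds_ratio = (a * d) / (b * c)
print(yules_q(a, b, c, d))                      # 0.875
print((odds_ratio - 1) / (odds_ratio + 1))      # same value via the odds ratio
```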
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G=\\frac{N_s-N_d}{N_s+N_d}\\ ."
},
{
"math_id": 1,
"text": "\\gamma"
},
{
"math_id": 2,
"text": "\\gamma=\\frac{P_s-P_d}{P_s+P_d}\\ ,"
},
{
"math_id": 3,
"text": "t \\approx G \\sqrt{ \\frac{ N_s+N_d}{n(1-G^2)} }\\ ,"
},
{
"math_id": 4,
"text": "n \\ne N_s+N_d. \\,"
},
{
"math_id": 5,
"text": "Q=\\frac{ad - bc}{ad + bc}\\ ."
},
{
"math_id": 6,
"text": "Q= \\frac{{OR}-1}{{OR}+1}\\ ."
},
{
"math_id": 7,
"text": "Q = \\frac{2Y}{1+Y^2}\\ ,"
},
{
"math_id": 8,
"text": "Y = \\frac{1-\\sqrt{1-Q^2}}{Q}\\ ."
}
] | https://en.wikipedia.org/wiki?curid=8925986 |
8927344 | 9-j symbol | Symbol used in quantum mechanics
In physics, Wigner's 9-"j" symbols were introduced by Eugene Paul Wigner in 1937. They are related to recoupling coefficients in quantum mechanics involving four angular momenta:
formula_0
formula_1
Recoupling of four angular momentum vectors.
Coupling of two angular momenta formula_2 and formula_3 is the construction of simultaneous eigenfunctions of formula_4 and formula_5, where formula_6, as explained in the article on Clebsch–Gordan coefficients.
Coupling of three angular momenta can be done in several ways, as explained in the article on Racah W-coefficients. Using the notation and techniques of that article, total angular momentum states that arise from coupling the angular momentum vectors formula_2, formula_3, formula_7, and formula_8 may be written as
formula_9
Alternatively, one may first couple formula_2 and formula_7 to formula_10 and formula_3 and formula_8 to formula_11, before coupling formula_10 and formula_11 to formula_12:
formula_13
Both sets of functions provide a complete, orthonormal basis for the space with dimension formula_14 spanned by
formula_15
Hence, the transformation between the two sets is unitary and the matrix elements of the transformation are given by the scalar products of the functions.
As in the case of the Racah W-coefficients the matrix elements are independent of the total angular momentum projection quantum number (formula_16):
formula_17
Symmetry relations.
A 9-"j" symbol is invariant under reflection about either diagonal as well as even permutations of its rows or columns:
formula_18
An odd permutation of rows or columns yields a phase factor formula_19, where
formula_20
For example:
formula_21
Reduction to 6j symbols.
The 9-"j" symbols can be calculated as sums over triple-products of 6-"j" symbols where the summation extends over all "x" admitted by the triangle conditions in the factors:
formula_22.
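A numerical check of this reduction (a sketch assuming SymPy's sympy.physics.wigner functions; the particular arguments are arbitrary integers chosen to satisfy the triangle conditions):

```python
# Minimal sketch: check the 6-j reduction of a 9-j symbol for one set of integer arguments.
from sympy import simplify
from sympy.physics.wigner import wigner_6j, wigner_9j

j1, j2, j3, j4, j5, j6, j7, j8, j9 = 1, 1, 2, 1, 1, 2, 2, 2, 2

lhs = wigner_9j(j1, j2, j3, j4, j5, j6, j7, j8, j9)

# x runs over the values admitted by the triangle conditions involving x
xmin = max(abs(j1 - j9), abs(j4 - j8), abs(j2 - j6))
xmax = min(j1 + j9, j4 + j8, j2 + j6)
rhs = sum((-1) ** (2 * x) * (2 * x + 1)
          * wigner_6j(j1, j4, j7, j8, j9, x)
          * wigner_6j(j2, j5, j8, j4, x, j6)
          * wigner_6j(j3, j6, j9, x, j1, j2)
          for x in range(xmin, xmax + 1))

print(simplify(lhs - rhs))    # expected: 0
```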
Special case.
When formula_23 the 9-"j" symbol is proportional to a 6-j symbol:
formula_24
Orthogonality relation.
The 9-"j" symbols satisfy this orthogonality relation:
formula_25
The "triangular delta" {"j"1 "j"2 "j"3} is equal to 1 when the triad ("j"1, "j"2, "j"3) satisfies the triangle conditions, and zero otherwise.
3"n"-j symbols.
The 6-j symbol is the first representative, "n" = 2, of 3"n"-"j" symbols that are defined as sums of products of 2"n" of Wigner's 3-"jm" coefficients. The sums are over all combinations of m that the 3"n"-"j" coefficients admit, i.e., which lead to non-vanishing contributions.
If each 3-"jm" factor is represented by a vertex and each j by an edge, these 3"n"-"j" symbols can be mapped on certain 3-regular graphs with 3"n" edges and 2"n" nodes. The 6-"j" symbol is associated with the K4 graph on 4 vertices, the 9-"j" symbol with the utility graph on 6 vertices ("K"3,3), and the two distinct (non-isomorphic) 12-"j" symbols with the "Q"3 and Wagner graphs on 8 vertices.
Symmetry relations are generally representative of the automorphism group of these graphs. | [
{
"math_id": 0,
"text": "\n \\sqrt{(2j_3+1)(2j_6+1)(2j_7+1)(2j_8+1)}\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & j_6\\\\\n j_7 & j_8 & j_9\n \\end{Bmatrix}\n"
},
{
"math_id": 1,
"text": " \n =\n \\langle ( (j_1j_2)j_3,(j_4j_5)j_6)j_9 | ((j_1 j_4)j_7,(j_2j_5)j_8)j_9\\rangle.\n"
},
{
"math_id": 2,
"text": "\\mathbf{j}_1"
},
{
"math_id": 3,
"text": "\\mathbf{j}_2"
},
{
"math_id": 4,
"text": "\\mathbf{J}^2"
},
{
"math_id": 5,
"text": "J_z"
},
{
"math_id": 6,
"text": "\\mathbf{J}=\\mathbf{j}_1+\\mathbf{j}_2"
},
{
"math_id": 7,
"text": "\\mathbf{j}_4"
},
{
"math_id": 8,
"text": "\\mathbf{j}_5"
},
{
"math_id": 9,
"text": "\n | ((j_1j_2)j_3, (j_4j_5)j_6)j_9m_9\\rangle.\n"
},
{
"math_id": 10,
"text": "\\mathbf{j}_7"
},
{
"math_id": 11,
"text": "\\mathbf{j}_8"
},
{
"math_id": 12,
"text": "\\mathbf{j}_9"
},
{
"math_id": 13,
"text": "\n |((j_1j_4)j_7, (j_2j_5)j_8)j_9m_9\\rangle.\n"
},
{
"math_id": 14,
"text": "(2j_1+1)(2j_2+1)(2j_4+1)(2j_5+1)"
},
{
"math_id": 15,
"text": "\n |j_1 m_1\\rangle |j_2 m_2\\rangle |j_4 m_4\\rangle |j_5 m_5\\rangle, \\;\\; \n m_1=-j_1,\\ldots,j_1;\\;\\; m_2=-j_2,\\ldots,j_2;\\;\\; m_4=-j_4,\\ldots,j_4;\\;\\;m_5=-j_5,\\ldots,j_5.\n"
},
{
"math_id": 16,
"text": "m_9"
},
{
"math_id": 17,
"text": "\n |((j_1j_4)j_7, (j_2j_5)j_8)j_9m_9\\rangle = \\sum_{j_3}\\sum_{j_6}\n | ((j_1j_2)j_3, (j_4j_5)j_6)j_9m_9\\rangle\n \\langle ( (j_1j_2)j_3,(j_4j_5)j_6)j_9 | ((j_1 j_4)j_7,(j_2j_5)j_8)j_9\\rangle.\n"
},
{
"math_id": 18,
"text": "\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & j_6\\\\\n j_7 & j_8 & j_9\n \\end{Bmatrix}\n = \n \\begin{Bmatrix}\n j_1 & j_4 & j_7\\\\\n j_2 & j_5 & j_8\\\\\n j_3 & j_6 & j_9\n \\end{Bmatrix}\n =\n \\begin{Bmatrix}\n j_9 & j_6 & j_3\\\\\n j_8 & j_5 & j_2\\\\\n j_7 & j_4 & j_1\n \\end{Bmatrix}\n =\n \\begin{Bmatrix}\n j_7 & j_4 & j_1\\\\\n j_9 & j_6 & j_3\\\\\n j_8 & j_5 & j_2\n \\end{Bmatrix}.\n"
},
{
"math_id": 19,
"text": "(-1)^S"
},
{
"math_id": 20,
"text": "S=\\sum_{i=1}^9 j_i.\n"
},
{
"math_id": 21,
"text": "\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & j_6\\\\\n j_7 & j_8 & j_9\n \\end{Bmatrix}\n =\n (-1)^S\n \\begin{Bmatrix}\n j_4 & j_5 & j_6\\\\\n j_1 & j_2 & j_3\\\\\n j_7 & j_8 & j_9\n \\end{Bmatrix}\n =\n (-1)^S\n \\begin{Bmatrix}\n j_2 & j_1 & j_3\\\\\n j_5 & j_4 & j_6\\\\\n j_8 & j_7 & j_9\n \\end{Bmatrix}.\n"
},
{
"math_id": 22,
"text": "\n \\begin{Bmatrix}\n j_1 & j_2 & j_3 \\\\\n j_4 & j_5 & j_6 \\\\\n j_7 & j_8 & j_9\n \\end{Bmatrix} = \\sum_x (-1)^{2 x}(2 x + 1)\n \\begin{Bmatrix}\n j_1 & j_4 & j_7 \\\\\n j_8 & j_9 & x\n \\end{Bmatrix}\n \\begin{Bmatrix}\n j_2 & j_5 & j_8 \\\\\n j_4 & x & j_6\n \\end{Bmatrix}\n \\begin{Bmatrix}\n j_3 & j_6 & j_9 \\\\\n x & j_1 & j_2\n \\end{Bmatrix}\n"
},
{
"math_id": 23,
"text": "j_9=0"
},
{
"math_id": 24,
"text": "\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & j_6\\\\\n j_7 & j_8 & 0\n \\end{Bmatrix}\n = \n \\frac{\\delta_{j_3,j_6} \\delta_{j_7,j_8}}{\\sqrt{(2j_3+1)(2j_7+1)}}\n (-1)^{j_2+j_3+j_4+j_7}\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_5 & j_4 & j_7\n \\end{Bmatrix}.\n"
},
{
"math_id": 25,
"text": "\n \\sum_{j_7 j_8} (2j_7+1)(2j_8+1)\n \\begin{Bmatrix}\n j_1 & j_2 & j_3\\\\\n j_4 & j_5 & j_6\\\\\n j_7 & j_8 & j_9\n \\end{Bmatrix}\n \\begin{Bmatrix}\n j_1 & j_2 & j_3'\\\\\n j_4 & j_5 & j_6'\\\\\n j_7 & j_8 & j_9\n \\end{Bmatrix}\n = \\frac{\\delta_{j_3j_3'}\\delta_{j_6j_6'} \\begin{Bmatrix} j_1 & j_2 & j_3 \\end{Bmatrix} \\begin{Bmatrix} j_4 & j_5 & j_6\\end{Bmatrix} \\begin{Bmatrix} j_3 & j_6 & j_9 \\end{Bmatrix}}\n {(2j_3+1)(2j_6+1)}.\n"
}
] | https://en.wikipedia.org/wiki?curid=8927344 |
892803 | Quantum error correction | Process in quantum computing
Quantum error correction (QEC) is a set of techniques used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is theorised as essential to achieve fault tolerant quantum computing that can reduce the effects of noise on stored quantum information, faulty quantum gates, faulty quantum state preparation, and faulty measurements. Effective quantum error correction would allow quantum computers with low qubit fidelity to execute algorithms of higher complexity or greater circuit depth.
Classical error correction often employs redundancy. The simplest albeit inefficient approach is the repetition code. A repetition code stores the desired (logical) information as multiple copies, and—if these copies are later found to disagree due to errors introduced to the system—determines the most likely value for the original data by majority vote. E.g. suppose we copy a bit in the one (on) state three times. Suppose further that noise in the system introduces an error which corrupts the three-bit state so that one of the copied bits becomes zero (off) but the other two remain equal to one. Assuming that errors are independent and occur with some sufficiently low probability "p", it is most likely that the error is a single-bit error and the intended message is three bits in the one state. It is possible that a double-bit error occurs and the transmitted message is equal to three zeros, but this outcome is less likely than the above outcome. In this example, the logical information is a single bit in the one state and the physical information is the three duplicate bits. Creating a physical state that represents the logical state is called "encoding" and determining which logical state is encoded in the physical state is called "decoding". Similar to classical error correction, QEC codes do not always correctly decode logical qubits, but instead reduce the effect of noise on the logical state.
Copying quantum information is not possible due to the no-cloning theorem. This theorem seems to present an obstacle to formulating a theory of quantum error correction. But it is possible to "spread" the (logical) information of one logical qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a "quantum error correcting code" by storing the information of one qubit onto a highly entangled state of nine qubits.
In classical error correction, "syndrome decoding" is used to diagnose which error was the likely source of corruption on an encoded state. An error can then be reversed by applying a corrective operation based on the syndrome. Quantum error correction also employs syndrome measurements. It performs a multi-qubit measurement that does not disturb the quantum information in the encoded state but retrieves information about the error. Depending on the QEC code used, syndrome measurement can determine the occurrence, location and type of errors. In most QEC codes, the type of error is either a bit flip, or a sign (of the phase) flip, or both (corresponding to the Pauli matrices X, Z, and Y). The measurement of the syndrome has the projective effect of a quantum measurement, so even if the error due to the noise was arbitrary, it can be expressed as a combination of basis operations called the error basis (which is given by the Pauli matrices and the identity). To correct the error, the Pauli operator corresponding to the type of error is used on the corrupted qubit to revert the effect of the error.
The syndrome measurement provides information about the error that has happened, but not about the information that is stored in the logical qubit—as otherwise the measurement would destroy any quantum superposition of this logical qubit with other qubits in the quantum computer, which would prevent it from being used to convey quantum information.
Bit flip code.
The repetition code works in a classical channel, because classical bits are easy to measure and to repeat. This approach does not work for a quantum channel in which, due to the no-cloning theorem, it is not possible to repeat a single qubit three times. To overcome this, a different method has to be used, such as the "three-qubit bit flip code" first proposed by Asher Peres in 1985. This technique uses entanglement and syndrome measurements and is comparable in performance with the repetition code.
Consider the situation in which we want to transmit the state of a single qubit formula_0 through a noisy channel formula_1. Let us moreover assume that this channel either flips the state of the qubit, with probability formula_2, or leaves it unchanged. The action of formula_1 on a general input formula_3 can therefore be written as formula_4.
Let formula_5 be the quantum state to be transmitted. With no error correcting protocol in place, the transmitted state will be correctly transmitted with probability formula_6. We can however improve on this number by "encoding" the state into a greater number of qubits, in such a way that errors in the corresponding logical qubits can be detected and corrected. In the case of the simple three-qubit repetition code, the encoding consists in the mappings formula_7 and formula_8. The input state formula_0 is encoded into the state formula_9. This mapping can be realized for example using two CNOT gates, entangling the system with two ancillary qubits initialized in the state formula_10. The encoded state formula_11 is what is now passed through the noisy channel.
The channel acts on formula_11 by flipping some subset (possibly empty) of its qubits. No qubit is flipped with probability formula_12, a single qubit is flipped with probability formula_13, two qubits are flipped with probability formula_14, and all three qubits are flipped with probability formula_15. Note that a further assumption about the channel is made here: we assume that formula_1 acts equally and independently on each of the three qubits in which the state is now encoded. The problem is now how to detect and correct such errors, while not corrupting the transmitted state.
Let us assume for simplicity that formula_2 is small enough that the probability of more than a single qubit being flipped is negligible. One can then detect whether a qubit was flipped, without also querying for the values being transmitted, by asking whether one of the qubits differs from the others. This amounts to performing a measurement with four different outcomes, corresponding to the following four projective measurements: formula_16 This reveals which qubits are different from the others, without at the same time giving information about the state of the qubits themselves. If the outcome corresponding to formula_17 is obtained, no correction is applied, while if the outcome corresponding to formula_18 is observed, then the Pauli "X" gate is applied to the formula_19-th qubit. Formally, this correcting procedure corresponds to the application of the following map to the output of the channel:
formula_20
Note that, while this procedure perfectly corrects the output when zero or one flips are introduced by the channel, if more than one qubit is flipped then the output is not properly corrected. For example, if the first and second qubits are flipped, then the syndrome measurement gives the outcome formula_21, and the third qubit is flipped, instead of the first two. To assess the performance of this error-correcting scheme for a general input, we can study the fidelity formula_22 between the input formula_11 and the output formula_23. Since the output state formula_24 is correct when no more than one qubit is flipped, which happens with probability formula_25, we can write it as formula_26, where the dots denote components of formula_24 resulting from errors not properly corrected by the protocol. It follows that formula_27 This fidelity is to be compared with the corresponding fidelity obtained when no error-correcting protocol is used, which was shown before to equal formula_28. A little algebra then shows that the fidelity "after" error correction is greater than the one without for formula_29. Note that this is consistent with the working assumption that was made while deriving the protocol (of formula_2 being small enough).
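A small sketch (not part of the article) checks this comparison numerically: under independent flips with probability p, majority-vote correction fails with probability 3p^2 - 2p^3, which is smaller than p whenever p < 1/2.

```python
# Minimal sketch: Monte Carlo estimate of the logical error rate of the
# three-qubit bit-flip code under independent flips with probability p,
# compared with the closed form 3p^2 - 2p^3 (i.e. fidelity 1 - 3p^2 + 2p^3).
import random

def logical_error_rate(p, trials=200_000):
    fails = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        if flips >= 2:          # majority-vote / syndrome correction fails
            fails += 1
    return fails / trials

p = 0.1
print(p, 3 * p**2 - 2 * p**3, logical_error_rate(p))
# 0.1  0.028  approximately 0.028; encoding helps whenever p < 1/2
```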
Sign flip code.
Flipped bits are the only kind of error in a classical computer, but there is another possibility of an error with quantum computers, the sign flip. During transmission through a channel, the relative sign between formula_30 and formula_31 can become inverted. For instance, a qubit in the state formula_32 may have its sign flipped to formula_33
The original state of the qubit
formula_34
will be changed into the state
formula_35
In the Hadamard basis, bit flips become sign flips and sign flips become bit flips. Let formula_36 be a quantum channel that can cause at most one phase flip. Then the bit flip code from above can recover formula_37 by transforming into the Hadamard basis before and after transmission through formula_36.
Shor code.
The error channel may induce either a bit flip, a sign flip (i.e., a phase flip), or both. It is possible to correct for both types of errors on a logical qubit using a well-designed QEC code. One example of a code that does this is the Shor code, published in 1995. Since these two types of errors are the only types of errors that can result after a projective measurement, a Shor code corrects arbitrary single-qubit errors.
Let formula_38 be a quantum channel that can arbitrarily corrupt a single qubit. The 1st, 4th and 7th qubits are for the sign flip code, while the three groups of qubits (1,2,3), (4,5,6), and (7,8,9) are designed for the bit flip code. With the Shor code, a qubit state formula_39 will be transformed into the product of 9 qubits formula_40, where
formula_41
formula_42
If a bit flip error happens to a qubit, the syndrome analysis will be performed on each block of qubits (1,2,3), (4,5,6), and (7,8,9) to detect and correct at most one bit flip error in each block.
If the three bit-flip groups (1,2,3), (4,5,6), and (7,8,9) are considered as three inputs, then the Shor code circuit can be reduced to a sign flip code. This means that the Shor code can also repair a sign flip error for a single qubit.
The Shor code can also correct any arbitrary error (both bit flip and sign flip) on a single qubit. If an error is modeled by a unitary transform U, which will act on a qubit formula_37, then formula_43 can be described in the form
formula_44
where formula_45, formula_46, formula_47, and formula_48 are complex constants, I is the identity, and the Pauli matrices are given by
formula_49
If "U" is equal to "I", then no error occurs. If formula_50, a bit flip error occurs. If formula_51, a sign flip error occurs. If formula_52 then both a bit flip error and a sign flip error occur. In other words, the Shor code can correct any combination of bit or phase errors on a single qubit.
Bosonic codes.
Several proposals have been made for storing error-correctable quantum information in bosonic modes. Unlike a two-level system, a quantum harmonic oscillator has infinitely many energy levels in a single physical system. Codes for these systems include cat, Gottesman-Kitaev-Preskill (GKP), and binomial codes. One insight offered by these codes is to take advantage of the redundancy within a single system, rather than to duplicate many two-level qubits.
Binomial code.
Written in the Fock basis, the simplest binomial encoding is
formula_53
where the subscript L indicates a "logically encoded" state. Then if the dominant error mechanism of the system is the stochastic application of the bosonic lowering operator formula_54 the corresponding error states are formula_55 and formula_56 respectively. Since the codewords involve only even photon number, and the error states involve only odd photon number, errors can be detected by measuring the photon number parity of the system. Measuring the odd parity will allow correction by application of an appropriate unitary operation without knowledge of the specific logical state of the qubit. However, the particular binomial code above is not robust to two-photon loss.
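These statements can be verified in a truncated Fock space. In the following sketch the truncation dimension and all names are our own choices; the matrix a implements the lowering operator, and the diagonal operator measures photon number parity:
import numpy as np
d = 6                                          # Fock-space truncation (enough for |4>)
a = np.diag(np.sqrt(np.arange(1, d)), k=1)     # lowering operator: a|n> = sqrt(n)|n-1>
parity = np.diag((-1.0) ** np.arange(d))       # photon-number parity operator
fock = lambda n: np.eye(d)[n]
zero_L = (fock(0) + fock(4)) / np.sqrt(2)      # |0_L>
one_L = fock(2)                                # |1_L>
for name, state in [("0_L", zero_L), ("1_L", one_L)]:
    err = a @ state
    err = err / np.linalg.norm(err)            # error state after a single photon loss
    print(name,
          "codeword parity:", state @ parity @ state,   # +1 (even photon number)
          "error parity:", err @ parity @ err)          # -1 (odd photon number)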
Cat code.
Schrödinger cat states, superpositions of coherent states, can also be used as logical states for error correction codes. The cat code, realized by Ofek et al. in 2016, defined two sets of logical states: formula_57 and formula_58, where each of the states is a superposition of coherent states as follows
formula_59
Those two sets of states differ in photon number parity: states denoted with formula_60 occupy only even photon number states, while states denoted with formula_61 have odd parity. Similar to the binomial code, if the dominant error mechanism of the system is the stochastic application of the bosonic lowering operator formula_62, the error takes the logical states from the even parity subspace to the odd one, and vice versa. Single-photon-loss errors can therefore be detected by measuring the photon number parity operator formula_63 using a dispersively coupled ancillary qubit.
Still, cat qubits are not protected against two-photon loss formula_64, dephasing noise formula_65, photon-gain error formula_66, etc.
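The parity signature of single-photon loss can be illustrated numerically for a cat state as well. In the sketch below the coherent-state amplitude, the truncation dimension and all names are assumptions made only for illustration:
import numpy as np
from math import factorial
d = 30                                          # Fock-space truncation
amp = 2.0                                       # coherent-state amplitude
coh = lambda a_: np.array([a_**n / np.sqrt(factorial(n)) for n in range(d)], dtype=complex)
a_op = np.diag(np.sqrt(np.arange(1, d)), k=1)   # bosonic lowering operator
parity = np.diag((-1.0) ** np.arange(d))        # photon-number parity
cat_even = coh(amp) + coh(-amp)                 # an even-parity cat state (unnormalised)
cat_even = cat_even / np.linalg.norm(cat_even)
print("parity before loss:", np.real(cat_even.conj() @ parity @ cat_even))  # close to +1
lost = a_op @ cat_even                          # single-photon loss
lost = lost / np.linalg.norm(lost)
print("parity after loss: ", np.real(lost.conj() @ parity @ lost))          # close to -1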
General codes.
In general, a "quantum code" for a quantum channel formula_67 is a subspace formula_68, where formula_69 is the state Hilbert space, such that there exists another quantum channel formula_70 with
formula_71
where formula_72 is the orthogonal projection onto formula_73. Here formula_70 is known as the "correction operation".
A "non-degenerate code" is one for which different elements of the set of correctable errors produce linearly independent results when applied to elements of the code. If distinct of the set of correctable errors produce orthogonal results, the code is considered "pure".
Models.
Over time, researchers have come up with several codes:
That these codes indeed allow for quantum computations of arbitrary length is the content of the quantum threshold theorem, found by Michael Ben-Or and Dorit Aharonov. It asserts that all errors can be corrected by concatenating quantum codes such as the CSS codes, i.e. re-encoding each logical qubit by the same code again, and so on, on logarithmically many levels, "provided" that the error rate of individual quantum gates is below a certain threshold; otherwise, the attempts to measure the syndrome and correct the errors would introduce more new errors than they correct.
As of late 2004, estimates for this threshold indicate that it could be as high as 1–3%, provided that there are sufficiently many qubits available.
Experimental realization.
There have been several experimental realizations of CSS-based codes. The first demonstration was with nuclear magnetic resonance qubits. Subsequently, demonstrations have been made with linear optics, trapped ions, and superconducting (transmon) qubits.
In 2016, the lifetime of a quantum bit was prolonged by employing a QEC code for the first time. The error-correction demonstration was performed on Schrödinger cat states encoded in a superconducting resonator, and employed a quantum controller capable of performing real-time feedback operations, including read-out of the quantum information, its analysis, and the correction of its detected errors. The work demonstrated how the quantum-error-corrected system reaches the break-even point at which the lifetime of a logical qubit exceeds the lifetime of the underlying constituents of the system (the physical qubits).
Other error correcting codes have also been implemented, such as one aimed at correcting for photon loss, the dominant error source in photonic qubit schemes.
In 2021, an entangling gate between two logical qubits encoded in topological quantum error-correction codes was first realized using 10 ions in a trapped-ion quantum computer. 2021 also saw the first experimental demonstration of a fault-tolerant Bacon-Shor code in a single logical qubit of a trapped-ion system, i.e. a demonstration for which the addition of error correction suppresses more errors than are introduced by the overhead required to implement it, as well as a demonstration of a fault-tolerant Steane code.
In 2022, researchers at the University of Innsbruck have demonstrated a fault-tolerant universal set of gates on two logical qubits in a trapped-ion quantum computer. They have performed a logical two-qubit controlled-NOT gate between two instances of the seven-qubit colour code, and fault-tolerantly prepared a logical magic state.
In February 2023, researchers at Google claimed to have decreased quantum errors by increasing the qubit number in experiments; using a fault-tolerant surface code, they measured an error rate of 3.028% for a distance-3 qubit array and 2.914% for a distance-5 qubit array.
In April 2024, researchers at Microsoft claimed to have successfully tested a quantum error correction code that allowed them to achieve an error rate with logical qubits that is 800 times better than the underlying physical error rate.
This qubit virtualization system was used to create 4 logical qubits with 30 of the 32 qubits on Quantinuum’s trapped-ion hardware. The system uses an active syndrome extraction technique to diagnose errors and correct them while calculations are underway without destroying the logical qubits.
Quantum error-correction without encoding and parity-checks.
In 2022, research at the University of Engineering and Technology Lahore demonstrated error-cancellation by inserting single-qubit Z-axis rotation gates into strategically chosen locations of superconducting quantum circuits. The scheme has been shown to effectively correct errors that would otherwise rapidly add up under constructive interference of coherent noise. This is a circuit-level calibration scheme that traces deviations (e.g. sharp dips or notches) in the decoherence curve to detect and localize the coherent error, but does not require encoding or parity measurements. However, further investigation is needed to establish the effectiveness of this method for incoherent noise.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vert\\psi\\rangle"
},
{
"math_id": 1,
"text": "\\mathcal E"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "\\rho"
},
{
"math_id": 4,
"text": "\\mathcal E(\\rho) = (1-p) \\rho + p\\ X\\rho X"
},
{
"math_id": 5,
"text": "|\\psi\\rangle = \\alpha_0|0\\rangle + \\alpha_1|1\\rangle"
},
{
"math_id": 6,
"text": "1-p"
},
{
"math_id": 7,
"text": "\\vert0\\rangle\\rightarrow\\vert0_{\\rm L}\\rangle\\equiv\\vert000\\rangle"
},
{
"math_id": 8,
"text": "\\vert1\\rangle\\rightarrow\\vert1_{\\rm L}\\rangle\\equiv\\vert111\\rangle"
},
{
"math_id": 9,
"text": "\\vert\\psi'\\rangle = \\alpha_0 \\vert000\\rangle + \\alpha_1 \\vert111\\rangle"
},
{
"math_id": 10,
"text": "\\vert0\\rangle"
},
{
"math_id": 11,
"text": "\\vert\\psi'\\rangle"
},
{
"math_id": 12,
"text": "(1-p)^3"
},
{
"math_id": 13,
"text": "3p(1-p)^2"
},
{
"math_id": 14,
"text": "3p^2(1-p)"
},
{
"math_id": 15,
"text": "p^3"
},
{
"math_id": 16,
"text": "\\begin{align}\n P_0 &=|000\\rangle\\langle000|+|111\\rangle\\langle111|, \\\\\n P_1 &=|100\\rangle\\langle100|+|011\\rangle\\langle011|, \\\\\n P_2 &=|010\\rangle\\langle010|+|101\\rangle\\langle101|, \\\\\n P_3 &=|001\\rangle\\langle001|+|110\\rangle\\langle110|.\n\\end{align}"
},
{
"math_id": 17,
"text": "P_0"
},
{
"math_id": 18,
"text": "P_i"
},
{
"math_id": 19,
"text": "i"
},
{
"math_id": 20,
"text": "\\mathcal E_{\\operatorname{corr}}(\\rho)=P_0\\rho P_0 + \\sum_{i=1}^3 X_i P_i \\rho\\, P_i X_i."
},
{
"math_id": 21,
"text": "P_3"
},
{
"math_id": 22,
"text": "F(\\psi')"
},
{
"math_id": 23,
"text": "\\rho_{\\operatorname{out}}\\equiv\\mathcal E_{\\operatorname{corr}}(\\mathcal E(\\vert\\psi'\\rangle\\langle\\psi'\\vert))"
},
{
"math_id": 24,
"text": "\\rho_{\\operatorname{out}}"
},
{
"math_id": 25,
"text": "(1-p)^3 + 3p(1-p)^2"
},
{
"math_id": 26,
"text": "[(1-p)^3+3p(1-p)^2]\\,\\vert\\psi'\\rangle\\langle\\psi'\\vert + (...)"
},
{
"math_id": 27,
"text": "F(\\psi')=\\langle\\psi'\\vert\\rho_{\\operatorname{out}}\\vert\\psi'\\rangle\\ge (1-p)^3 + 3p(1-p)^2=1-3p^2+2p^3."
},
{
"math_id": 28,
"text": "{1-p}"
},
{
"math_id": 29,
"text": "p<1/2"
},
{
"math_id": 30,
"text": "|0\\rangle"
},
{
"math_id": 31,
"text": "|1\\rangle"
},
{
"math_id": 32,
"text": "|-\\rangle=(|0\\rangle-|1\\rangle)/\\sqrt{2}"
},
{
"math_id": 33,
"text": "|+\\rangle=(|0\\rangle+|1\\rangle)/\\sqrt{2}."
},
{
"math_id": 34,
"text": "|\\psi\\rangle = \\alpha_0|0\\rangle+\\alpha_1|1\\rangle"
},
{
"math_id": 35,
"text": "|\\psi'\\rangle = \\alpha_0|{+}{+}{+}\\rangle+\\alpha_1|{-}{-}{-}\\rangle."
},
{
"math_id": 36,
"text": "E_\\text{phase}"
},
{
"math_id": 37,
"text": "|\\psi\\rangle"
},
{
"math_id": 38,
"text": "E"
},
{
"math_id": 39,
"text": "|\\psi\\rangle=\\alpha_0|0\\rangle+\\alpha_1|1\\rangle"
},
{
"math_id": 40,
"text": "|\\psi'\\rangle=\\alpha_0|0_S\\rangle+\\alpha_1|1_S\\rangle"
},
{
"math_id": 41,
"text": "|0_{\\rm S}\\rangle=\\frac{1}{2\\sqrt{2}}(|000\\rangle + |111\\rangle) \\otimes (|000\\rangle + |111\\rangle\n) \\otimes (|000\\rangle + |111\\rangle)"
},
{
"math_id": 42,
"text": "|1_{\\rm S}\\rangle=\\frac{1}{2\\sqrt{2}}(|000\\rangle - |111\\rangle) \\otimes (|000\\rangle - |111\\rangle) \\otimes (|000\\rangle - |111\\rangle)"
},
{
"math_id": 43,
"text": "U"
},
{
"math_id": 44,
"text": "U = c_0 I + c_1 X + c_2 Y + c_3 Z"
},
{
"math_id": 45,
"text": "c_0"
},
{
"math_id": 46,
"text": "c_1"
},
{
"math_id": 47,
"text": "c_2"
},
{
"math_id": 48,
"text": "c_3"
},
{
"math_id": 49,
"text": "\\begin{align}\nX &= \\begin{pmatrix}\n 0&1\\\\1&0\n\\end{pmatrix} ; \\\\\nY &= \\begin{pmatrix}\n 0&-i\\\\i&0\n\\end{pmatrix} ; \\\\\nZ &= \\begin{pmatrix}\n 1&0\\\\0&-1\n\\end{pmatrix} .\n\\end{align}"
},
{
"math_id": 50,
"text": "U=X"
},
{
"math_id": 51,
"text": "U=Z"
},
{
"math_id": 52,
"text": "U=iY"
},
{
"math_id": 53,
"text": "|0_{\\rm L}\\rangle=\\frac{|0\\rangle+|4\\rangle}{\\sqrt{2}},\\quad |1_{\\rm L}\\rangle=|2\\rangle,"
},
{
"math_id": 54,
"text": "\\hat{a},"
},
{
"math_id": 55,
"text": "|3\\rangle"
},
{
"math_id": 56,
"text": "|1\\rangle,"
},
{
"math_id": 57,
"text": "\\{|0^+_L\\rangle, |1^+_L\\rangle\\} "
},
{
"math_id": 58,
"text": "\\{|0^-_L\\rangle, |1^-_L\\rangle\\} "
},
{
"math_id": 59,
"text": "\\begin{aligned}\n |0^+_L\\rangle& \\equiv |\\alpha\\rangle + |-\\alpha\\rangle, \\\\\n |1^+_L\\rangle& \\equiv |i\\alpha\\rangle + |-i\\alpha\\rangle, \\\\\n |0^-_L\\rangle& \\equiv |\\alpha\\rangle - |-\\alpha\\rangle, \\\\\n |1^-_L\\rangle& \\equiv |i\\alpha\\rangle - |-i\\alpha\\rangle.\n\\end{aligned}"
},
{
"math_id": 60,
"text": "^+"
},
{
"math_id": 61,
"text": "^-"
},
{
"math_id": 62,
"text": "\\hat{a}"
},
{
"math_id": 63,
"text": "\\exp(i\\pi \\hat{a}^\\dagger\\hat{a}) "
},
{
"math_id": 64,
"text": "\\hat{a}^2"
},
{
"math_id": 65,
"text": "\\hat{a}^\\dagger\\hat{a}"
},
{
"math_id": 66,
"text": "\\hat{a}^\\dagger"
},
{
"math_id": 67,
"text": "\\mathcal{E}"
},
{
"math_id": 68,
"text": "\\mathcal{C} \\subseteq \\mathcal{H}"
},
{
"math_id": 69,
"text": "\\mathcal{H}"
},
{
"math_id": 70,
"text": "\\mathcal{R}"
},
{
"math_id": 71,
"text": " (\\mathcal{R} \\circ \\mathcal{E})(\\rho) = \\rho \\quad \\forall \\rho = P_{\\mathcal{C}}\\rho P_{\\mathcal{C}},"
},
{
"math_id": 72,
"text": "P_{\\mathcal{C}}"
},
{
"math_id": 73,
"text": "\\mathcal{C}"
}
] | https://en.wikipedia.org/wiki?curid=892803 |
892899 | Parsing expression grammar | Type of grammar for describing formal languages
In computer science, a parsing expression grammar (PEG) is a type of analytic formal grammar, i.e. it describes a formal language in terms of a set of rules for recognizing strings in the language. The formalism was introduced by Bryan Ford in 2004 and is closely related to the family of top-down parsing languages introduced in the early 1970s.
Syntactically, PEGs also look similar to context-free grammars (CFGs), but they have a different interpretation: the choice operator selects the first match in PEG, while it is ambiguous in CFG. This is closer to how string recognition tends to be done in practice, e.g. by a recursive descent parser.
Unlike CFGs, PEGs cannot be ambiguous; a string has exactly one valid parse tree or none. It is conjectured that there exist context-free languages that cannot be recognized by a PEG, but this is not yet proven. PEGs are well-suited to parsing computer languages (and artificial human languages such as Lojban) where multiple interpretation alternatives can be disambiguated locally, but are less likely to be useful for parsing natural languages where disambiguation may have to be global.
Definition.
A parsing expression is a kind of pattern that each string may either match or not match. In case of a match, there is a unique prefix of the string (which may be the whole string, the empty string, or something in between) which has been "consumed" by the parsing expression; this prefix is what one would usually think of as having matched the expression. However, whether a string matches a parsing expression "may" (because of look-ahead predicates) depend on parts of it which come after the consumed part. A parsing expression language is a set of all strings that match some specific parsing expression.
A parsing expression grammar is a collection of named parsing expressions, which may reference each other. The effect of one such reference in a parsing expression is as if the whole referenced parsing expression was given in place of the reference. A parsing expression grammar also has a designated starting expression; a string matches the grammar if it matches its starting expression.
An element of a string matched is called a "terminal symbol", or terminal for short. Likewise the names assigned to parsing expressions are called "nonterminal symbols", or nonterminals for short. These terms would be descriptive for generative grammars, but in the case of parsing expression grammars they are merely terminology, kept mostly because of being near ubiquitous in discussions of parsing algorithms.
Syntax.
Both "abstract" and "concrete" syntaxes of parsing expressions are seen in the literature, and in this article. The abstract syntax is essentially a mathematical formula and primarily used in theoretical contexts, whereas concrete syntax parsing expressions could be used directly to control a parser. The primary concrete syntax is that defined by Ford, although many tools have their own dialect of this. Other tools can be closer to using a programming-language native encoding of abstract syntax parsing expressions as their concrete syntax.
Atomic parsing expressions.
The two main kinds of parsing expressions not containing another parsing expression are individual terminal symbols and nonterminal symbols. In concrete syntax, terminals are placed inside quotes (single or double), whereas identifiers not in quotes denote nonterminals:
"terminal" Nonterminal 'another terminal'
In the abstract syntax there is no formalised distinction, instead each symbol is supposedly defined as either terminal or nonterminal, but a common convention is to use upper case for nonterminals and lower case for terminals.
The concrete syntax also has a number of forms for classes of terminals:
In abstract syntax, such forms are usually formalised as nonterminals whose exact definition is elided for brevity; in Unicode, there are tens of thousands of characters that are letters. Conversely, theoretical discussions sometimes introduce atomic abstract syntax for concepts that can alternatively be expressed using composite parsing expressions. Examples of this include:
In the concrete syntax, quoted and bracketed terminals have backslash escapes, so that "line feed or carriage return" may be written codice_5. The abstract syntax counterpart of a quoted terminal of length greater than one would be the sequence of those terminals; codice_6 is the same as codice_7. The primary concrete syntax assigns no distinct meaning to terminals depending on whether they use single or double quotes, but some dialects treat one as case-sensitive and the other as case-insensitive.
Composite parsing expressions.
Given any existing parsing expressions "e", "e"1, and "e"2, a new parsing expression can be constructed using the following operators:
Operator priorities are as follows, based on Table 1 in:
Grammars.
In the concrete syntax, a parsing expression grammar is simply a sequence of nonterminal definitions, each of which has the form
Identifier LEFTARROW Expression
The codice_8 is the nonterminal being defined, and the codice_9 is the parsing expression it is defined as referencing. The codice_10 varies a bit between dialects, but is generally some left-pointing arrow or assignment symbol, such as codice_11, codice_12, codice_13, or codice_14. One way to understand it is precisely as making an assignment or definition of the nonterminal. Another way to understand it is as a contrast to the right-pointing arrow → used in the rules of a context-free grammar; with parsing expressions the flow of information goes from expression to nonterminal, not nonterminal to expression.
As a mathematical object, a parsing expression grammar is a tuple formula_1, where formula_2 is the set of nonterminal symbols, formula_3 is the set of terminal symbols, formula_4 is a function from formula_2 to the set of parsing expressions on formula_5, and formula_6 is the starting parsing expression. Some concrete syntax dialects give the starting expression explicitly, but the primary concrete syntax instead has the implicit rule that the first nonterminal defined is the starting expression.
It is worth noticing that the primary dialect of concrete syntax parsing expression grammars does not have an explicit definition terminator or separator between definitions, although it is customary to begin a new definition on a new line; the codice_10 of the next definition is sufficient for finding the boundary, if one adds the constraint that a nonterminal in an codice_9 must not be followed by a codice_10. However, some dialects may allow an explicit terminator, or outright require it.
Example.
This is a PEG that recognizes mathematical formulas that apply the basic five operations to non-negative integers.
Expr ← Sum
Sum ← Product (('+' / '-') Product)*
Product ← Power (('*' / '/') Power)*
Power ← Value ('^' Power)?
Value ← [0-9]+ / '(' Expr ')'
In the above example, the terminal symbols are characters of text, represented by characters in single quotes, such as codice_18 and codice_19. The range codice_20 is a shortcut for the ten characters from codice_21 to codice_22. (This range syntax is the same as the syntax used by regular expressions.) The nonterminal symbols are the ones that expand to other rules: "Value", "Power", "Product", "Sum", and "Expr". Note that the rules "Sum" and "Product" don't lead to desired left-associativity of these operations (they don't deal with associativity at all, and it has to be handled in a post-processing step after parsing), and the "Power" rule (by referring to itself on the right) results in the desired right-associativity of the exponent. Also note that a rule that instead referred to itself on the left (with the intention to achieve left-associativity) would cause infinite recursion, so it cannot be used in practice even though it can be expressed in the grammar.
Semantics.
The fundamental difference between context-free grammars and parsing expression grammars is that the PEG's choice operator is "ordered". If the first alternative succeeds, the second alternative is ignored. Thus ordered choice is not commutative, unlike unordered choice as in context-free grammars. Ordered choice is analogous to soft cut operators available in some logic programming languages.
The consequence is that if a CFG is transliterated directly to a PEG, any ambiguity in the former is resolved by deterministically picking one parse tree from the possible parses. By carefully choosing the order in which the grammar alternatives are specified, a programmer has a great deal of control over which parse tree is selected.
Parsing expression grammars also add the and- and not- syntactic predicates. Because they can use an arbitrarily complex sub-expression to "look ahead" into the input string without actually consuming it, they provide a powerful syntactic lookahead and disambiguation facility, in particular when reordering the alternatives cannot specify the exact parse tree desired.
Operational interpretation of parsing expressions.
Each nonterminal in a parsing expression grammar essentially represents a parsing function in a recursive descent parser, and the corresponding parsing expression represents the "code" comprising the function. Each parsing function conceptually takes an input string as its argument, and yields one of two kinds of result: "success", in which case the function may have consumed some prefix of the input (possibly none of it), or "failure", in which case no input is consumed.
An atomic parsing expression consisting of a single terminal (i.e. literal) succeeds if the first character of the input string matches that terminal, and in that case consumes the input character; otherwise the expression yields a failure result. An atomic parsing expression consisting of the empty string always trivially succeeds without consuming any input.
An atomic parsing expression consisting of a nonterminal "A" represents a recursive call to the nonterminal-function "A". A nonterminal may succeed without actually consuming any input, and this is considered an outcome distinct from failure.
The sequence operator "e"1 "e"2 first invokes "e"1, and if "e"1 succeeds, subsequently invokes "e"2 on the remainder of the input string left unconsumed by "e"1, and returns the result. If either "e"1 or "e"2 fails, then the sequence expression "e"1 "e"2 fails (consuming no input).
The choice operator "e"1 / "e"2 first invokes "e"1, and if "e"1 succeeds, returns its result immediately. Otherwise, if "e"1 fails, then the choice operator backtracks to the original input position at which it invoked "e"1, but then calls "e"2 instead, returning "e"2's result.
The zero-or-more, one-or-more, and optional operators consume zero or more, one or more, or zero or one consecutive repetitions of their sub-expression "e", respectively. Unlike in context-free grammars and regular expressions, however, these operators "always" behave greedily, consuming as much input as possible and never backtracking. (Regular expression matchers may start by matching greedily, but will then backtrack and try shorter matches if they fail to match.) For example, the expression a* will always consume as many a's as are consecutively available in the input string, and the expression (a* a) will always fail because the first part (a*) will never leave any a's for the second part to match.
The and-predicate expression &"e" invokes the sub-expression "e", and then succeeds if "e" succeeds and fails if "e" fails, but in either case "never consumes any input".
The not-predicate expression !"e" succeeds if "e" fails and fails if "e" succeeds, again consuming no input in either case.
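This operational reading translates almost directly into code. The following is a minimal, illustrative interpreter written in Python (a sketch of our own, not an established library): a parsing expression is represented as a nested tuple, and parse returns the new input position on success or None on failure, which preserves the distinction between succeeding without consuming input and failing.
def parse(e, grammar, s, pos):
    op = e[0]
    if op == "lit":                                  # terminal string
        return pos + len(e[1]) if s.startswith(e[1], pos) else None
    if op == "any":                                  # the dot: any single character
        return pos + 1 if pos < len(s) else None
    if op == "nt":                                   # nonterminal: call its expression
        return parse(grammar[e[1]], grammar, s, pos)
    if op == "seq":                                  # e1 e2
        p = parse(e[1], grammar, s, pos)
        return None if p is None else parse(e[2], grammar, s, p)
    if op == "choice":                               # e1 / e2, ordered
        p = parse(e[1], grammar, s, pos)
        return p if p is not None else parse(e[2], grammar, s, pos)
    if op == "opt":                                  # e?
        p = parse(e[1], grammar, s, pos)
        return pos if p is None else p
    if op == "star":                                 # e*, greedy, never backtracks
        while True:                                  # (a real implementation must guard
            p = parse(e[1], grammar, s, pos)         #  against e succeeding without consuming)
            if p is None:
                return pos
            pos = p
    if op == "and":                                  # &e: succeed without consuming
        return pos if parse(e[1], grammar, s, pos) is not None else None
    if op == "not":                                  # !e: succeed only if e fails
        return pos if parse(e[1], grammar, s, pos) is None else None
    raise ValueError(op)
# start <- AB !.    AB <- ('a' AB 'b')?    -- accepts a^n b^n and nothing else
grammar = {
    "start": ("seq", ("nt", "AB"), ("not", ("any",))),
    "AB": ("opt", ("seq", ("lit", "a"), ("seq", ("nt", "AB"), ("lit", "b")))),
}
for text in ["", "ab", "aabb", "aab"]:
    print(repr(text), parse(grammar["start"], grammar, text, 0) is not None)
Memoizing parse on the pair of parsing expression and input position turns this sketch into the packrat parser discussed below.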
More examples.
The following recursive rule matches standard C-style if/then/else statements in such a way that the optional "else" clause always binds to the innermost "if", because of the implicit prioritization of the '/' operator. (In a context-free grammar, this construct yields the classic dangling else ambiguity.)
S ← 'if' C 'then' S 'else' S / 'if' C 'then' S
The following recursive rule matches Pascal-style nested comment syntax, '(*' ... '*)'. Recall that the dot expression (.) matches any single character.
C ← Begin N* End
Begin ← '(*'
End ← '*)'
N ← C / (!Begin !End .)
The parsing expression 'foo' &'bar' matches and consumes the text "foo" but only if it is followed by the text "bar". The parsing expression 'foo' !'bar' matches the text "foo" but only if it is "not" followed by the text "bar". The expression !('a'+ 'b') 'a' matches a single "a" but only if it is not part of an arbitrarily long sequence of a's followed by a b.
The parsing expression ('a' / 'b')* matches and consumes an arbitrary-length sequence of a's and b's. The production rule S ← 'a' S? 'b' describes the simple context-free "matching language" formula_7.
The following parsing expression grammar describes the classic non-context-free language formula_8:
S ← &(A 'c') 'a'+ B !.
A ← 'a' A? 'b'
B ← 'b' B? 'c'
Implementing parsers from parsing expression grammars.
Any parsing expression grammar can be converted directly into a recursive descent parser. Due to the unlimited lookahead capability that the grammar formalism provides, however, the resulting parser could exhibit exponential time performance in the worst case.
It is possible to obtain better performance for any parsing expression grammar by converting its recursive descent parser into a "packrat parser", which always runs in linear time, at the cost of substantially greater storage space requirements. A packrat parser
is a form of parser similar to a recursive descent parser in construction, except that during the parsing process it memoizes the intermediate results of all invocations of the mutually recursive parsing functions, ensuring that each parsing function is only invoked at most once at a given input position. Because of this memoization, a packrat parser has the ability to parse many context-free grammars and "any" parsing expression grammar (including some that do not represent context-free languages) in linear time. Examples of memoized recursive descent parsers are known from at least as early as 1993.
This analysis of the performance of a packrat parser assumes that enough memory is available to hold all of the memoized results; in practice, if there is not enough memory, some parsing functions might have to be invoked more than once at the same input position, and consequently the parser could take more than linear time.
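As a small self-contained illustration (our own sketch, covering only the operators this example needs), the memo table below is keyed by the pair of parsing expression and input position, so each pair is evaluated at most once; a full packrat parser extends the same idea to every operator of the interpreter sketched earlier.
def packrat_match(grammar, s):
    memo = {}
    def parse(e, pos):
        key = (e, pos)                               # expression tuples are hashable
        if key in memo:
            return memo[key]
        op = e[0]
        if op == "lit":
            result = pos + len(e[1]) if s.startswith(e[1], pos) else None
        elif op == "nt":
            result = parse(grammar[e[1]], pos)
        elif op == "seq":
            p = parse(e[1], pos)
            result = None if p is None else parse(e[2], p)
        else:                                        # "choice": ordered, first success wins
            p = parse(e[1], pos)
            result = p if p is not None else parse(e[2], pos)
        memo[key] = result
        return result
    return parse(("nt", "S"), 0)
# S <- 'x' S 'x' / 'x'   (the rule revisited in the midpoint example further below)
grammar = {"S": ("choice",
                 ("seq", ("lit", "x"), ("seq", ("nt", "S"), ("lit", "x"))),
                 ("lit", "x"))}
print(packrat_match(grammar, "xxxxx"))               # 3: only a prefix of length 3 matches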
It is also possible to build LL parsers and LR parsers from parsing expression grammars, with better worst-case performance than a recursive descent parser without memoization, but the unlimited lookahead capability of the grammar formalism is then lost. Therefore, not all languages that can be expressed using parsing expression grammars can be parsed by LL or LR parsers.
Bottom-up PEG parsing.
A pika parser uses dynamic programming to apply PEG rules bottom-up and right to left, which is the inverse of the normal recursive descent order of top-down, left to right. Parsing in reverse order solves the left recursion problem, allowing left-recursive rules to be used directly in the grammar without being rewritten into non-left-recursive form, and also confers optimal error recovery capabilities upon the parser, which historically proved difficult to achieve for recursive descent parsers.
Advantages.
No compilation required.
Many parsing algorithms require a preprocessing step where the grammar is first compiled into an opaque executable form, often some sort of automaton. Parsing expressions can be executed directly (even if it is typically still advisable to transform the human-readable PEGs shown in this article into a more native format, such as S-expressions, before evaluating them).
Compared to regular expressions.
Compared to pure regular expressions (i.e., describing a language recognisable using a finite automaton), PEGs are vastly more powerful. In particular they can handle unbounded recursion, and so match parentheses down to an arbitrary nesting depth; regular expressions can at best keep track of nesting down to some fixed depth, because a finite automaton (having a finite set of internal states) can only distinguish finitely many different nesting depths. In more theoretical terms, formula_9 (the language of all strings of zero or more formula_10's, followed by an "equal number" of formula_11s) is not a regular language, but it is easily seen to be a parsing expression language, matched by the grammar
start ← AB !.
AB ← ('a' AB 'b')?
Here codice_23 is the starting expression. The codice_4 part enforces that the input ends after the codice_25, by saying “there is no next character”; unlike regular expressions, which have magic constraints codice_26 or codice_27 for this, parsing expressions can express the end of input using only the basic primitives.
The codice_28, codice_29, and codice_30 of parsing expressions are similar to those in regular expressions, but a difference is that these operate strictly in a greedy mode. This is ultimately due to codice_31 being an ordered choice. A consequence is that something can match as a regular expression which does not match as parsing expression:
codice_32
is both a valid regular expression and a valid parsing expression. As a regular expression, it matches codice_33, but as a parsing expression it does not match, because the codice_34 will match the codice_35, then codice_36 will match the codice_37, leaving nothing for the codice_38, so at that point matching the sequence fails. "Trying again" with having codice_34 match the empty string is explicitly against the semantics of parsing expressions; this is not an edge case of a particular matching algorithm, it is the intended behaviour.
Even regular expressions that depend on nondeterminism "can" be compiled into a parsing expression grammar, by having a separate nonterminal for every state of the corresponding DFA and encoding its transition function into the definitions of these nonterminals —
A ← 'x' B / 'y' C
is effectively saying "from state A transition to state B if the next character is x, but to state C if the next character is y" — but this works because nondeterminism can be eliminated within the realm of regular languages. It would not make use of the parsing expression variants of the repetition operations.
Compared to context-free grammars.
PEGs can comfortably be given in terms of characters, whereas context-free grammars (CFGs) are usually given in terms of tokens, thus requiring an extra step of tokenisation in front of parsing proper. An advantage of not having a separate tokeniser is that different parts of the language (for example embedded mini-languages) can easily have different tokenisation rules.
In the strict formal sense, PEGs are likely incomparable to CFGs, but practically there are many things that PEGs can do which pure CFGs cannot, whereas it is difficult to come up with examples of the contrary. In particular PEGs can be crafted to natively resolve ambiguities, such as the "dangling else" problem in C, C++, and Java, whereas CFG-based parsing often needs a rule outside of the grammar to resolve them. Moreover any PEG can be parsed in linear time by using a packrat parser, as described above, whereas parsing according to a general CFG is asymptotically equivalent to boolean matrix multiplication (thus likely between quadratic and cubic time).
One classical example of a formal language which is provably not context-free is the language formula_12: an arbitrary number of formula_10s are followed by an equal number of formula_11s, which in turn are followed by an equal number of formula_13s. This, too, is a parsing expression language, matched by the grammar
start ← AB 'c'*
AB ← 'a' AB 'b' / &(BC !.)
BC ← ('b' BC 'c')?
For codice_25 to match, the first stretch of formula_10s must be followed by an equal number of formula_11s, and in addition codice_41 has to match where the formula_10s switch to formula_11s, which means those formula_11s are followed by an equal number of formula_13s.
Disadvantages.
Memory consumption.
PEG parsing is typically carried out via "packrat parsing", which uses memoization to eliminate redundant parsing steps. Packrat parsing requires internal storage proportional to the total input size, rather than to the depth of the parse tree as with LR parsers. Whether this is a significant difference depends on circumstances; if parsing is a service provided as a function then the parser will have stored the full parse tree up until returning it, and already that parse tree will typically be of size proportional to the total input size. If parsing is instead provided as a generator then one might get away with only keeping parts of the parse tree in memory, but the feasibility of this depends on the grammar. A parsing expression grammar can be designed so that only after consuming the full input will the parser discover that it needs to backtrack to the beginning, which again could require storage proportional to total input size.
For recursive grammars and some inputs, the depth of the parse tree can be proportional to the input size, so both an LR parser and a packrat parser will appear to have the same worst-case asymptotic performance. However in many domains, for example hand-written source code, the expression nesting depth has an effectively constant bound quite independent of the length of the program, because expressions nested beyond a certain depth tend to get refactored. When it is not necessary to keep the full parse tree, a more accurate analysis would take the depth of the parse tree into account separately from the input size.
Computational model.
In order to attain linear overall complexity, the storage used for memoization must furthermore provide amortized constant time access to individual data items memoized. In practice that is no problem (for example a dynamically sized hash table attains this), but that makes use of pointer arithmetic, so it presumes having a random-access machine. Theoretical discussions of data structures and algorithms have an unspoken tendency to presume a more restricted model (possibly that of lambda calculus, possibly that of Scheme), where a sparse table instead has to be built using trees, and data item access is not constant time. Traditional parsing algorithms such as the LL parser are not affected by this, but it becomes a penalty for the reputation of packrat parsers: they rely on operations of seemingly ill repute.
Viewed the other way around, this says packrat parsers tap into computational power readily available in real-life systems that older parsing algorithms do not exploit.
Indirect left recursion.
A PEG is called "well-formed" if it contains no "left-recursive" rules, i.e., rules that allow a nonterminal to expand to an expression in which the same nonterminal occurs as the leftmost symbol. For a left-to-right top-down parser, such rules cause infinite regress: parsing will continually expand the same nonterminal without moving forward in the string. Therefore, to allow packrat parsing, left recursion must be eliminated.
Practical significance.
Direct recursion, be that left or right, is important in context-free grammars, because there recursion is the only way to describe repetition:
Sum → Term | Sum '+' Term | Sum '-' Term
Args → Arg | Arg ',' Args
People trained in using context-free grammars often come to PEGs expecting to use the same idioms, but parsing expressions can do repetition without recursion:
Sum ← Term ( '+' Term / '-' Term )*
Args ← Arg ( ',' Arg )*
A difference lies in the abstract syntax trees generated: with recursion each codice_42 or codice_43 can have at most two children, but with repetition there can be arbitrarily many. If later stages of processing require that such lists of children are recast as trees with bounded degree, for example microprocessor instructions for addition typically only allow two operands, then properties such as left-associativity would be imposed after the PEG-directed parsing stage.
Therefore left-recursion is practically less likely to trouble a PEG packrat parser than, say, an LL(k) context-free parser, unless one insists on using context-free idioms. However, not all cases of recursion are about repetition.
Non-repetition left-recursion.
For example, in the arithmetic grammar above, it could seem tempting to express operator precedence as a matter of ordered choice — codice_44 would mean first try viewing as codice_42 (since we parse top–down), second try viewing as codice_46, and only third try viewing as codice_47 — rather than via nesting of definitions. This (non-well-formed) grammar seeks to keep precedence order only in one line:
Value ← [0-9.]+ / '(' Expr ')'
Product ← Expr (('*' / '/') Expr)+
Sum ← Expr (('+' / '-') Expr)+
Expr ← Sum / Product / Value
Unfortunately matching an codice_48 requires testing if a codice_42 matches, while matching a codice_42 requires testing if an codice_48 matches. Because the term appears in the leftmost position, these rules make up a circular definition that cannot be resolved. (Circular definitions that can be resolved exist—such as in the original formulation from the first example—but such definitions are required not to exhibit pathological recursion.) However, left-recursive rules can always be rewritten to eliminate left-recursion. For example, the following left-recursive CFG rule:
string-of-a ← string-of-a 'a' | 'a'
can be rewritten in a PEG using the plus operator:
string-of-a ← 'a'+
The process of rewriting "indirectly" left-recursive rules is complex in some packrat parsers, especially when semantic actions are involved.
With some modification, traditional packrat parsing can support direct left recursion, but doing so results in a loss of the linear-time parsing property which is generally the justification for using PEGs and packrat parsing in the first place. Only the OMeta parsing algorithm supports full direct and indirect left recursion without additional attendant complexity (but again, at a loss of the linear time complexity), whereas all GLR parsers support left recursion.
Unexpected behaviour.
A common first impression of PEGs is that they look like CFGs with certain convenience features — repetition operators codice_52 as in regular expressions and lookahead predicates codice_53 — plus ordered choice for disambiguation. This understanding can be sufficient when one's goal is to create a parser for a language, but it is not sufficient for more theoretical discussions of the computational power of parsing expressions. In particular the nondeterminism inherent in the unordered choice codice_54 of context-free grammars makes it very different from the deterministic ordered choice codice_31.
The midpoint problem.
PEG packrat parsers cannot recognize some unambiguous nondeterministic CFG rules, such as the following:
S ← 'x' S 'x' | 'x'
Neither LL(k) nor LR(k) parsing algorithms are capable of recognizing this example. However, this grammar can be used by a general CFG parser like the CYK algorithm. The "language" in question can nonetheless be recognised by all these types of parser, since it is in fact a regular language (that of strings of an odd number of x's).
It is instructive to work out exactly what a PEG parser does when attempting to match
S ← 'x' S 'x' / 'x'
against the string codice_56. As expected, it recursively tries to match the nonterminal codice_57 at increasing positions in this string, until failing the match against the codice_58, and after that begins to backtrack. This goes as follows:
Position: 123456
String: xxxxxq
Results: ↑ Pos.6: Neither branch of S matches
↑ Pos.5: First branch of S fails, second branch succeeds, yielding match of length 1.
↑ Pos.4: First branch of S fails, second branch succeeds, yielding match of length 1.
↑ Pos.3: First branch of S succeeds, yielding match of length 3.
↑ Pos.2: First branch of S fails, because after the S match at 3 comes a q.
Second branch succeeds, yielding match of length 1.
↑ Pos.1: First branch of S succeeds, yielding match of length 3.
Matching against a parsing expression is greedy, in the sense that the first success encountered is the only one considered. Even if locally the choices are ordered longest first, there is no guarantee that this greedy matching will find the globally longest match.
Ambiguity detection and influence of rule order on language that is matched.
LL(k) and LR(k) parser generators will fail to complete when the input grammar is ambiguous. This is a feature in the common case that the grammar is intended to be unambiguous but is defective. A PEG parser generator will resolve unintended ambiguities earliest-match-first, which may be arbitrary and lead to surprising parses.
The ordering of productions in a PEG grammar affects not only the resolution of ambiguity, but also the "language matched". For example, consider the first PEG example in Ford's paper
(example rewritten in pegjs.org/online notation, and labelled G1 and G2):
Ford notes that "The second alternative in the latter PEG rule will never succeed because the first choice is always taken if the input string ... begins with 'a'.".
Specifically, the language matched by the single-production grammar includes the input "ab", but the language matched by the grammar with the added alternative does not.
Thus, adding a new option to a PEG grammar can "remove" strings from the language matched: the extended grammar fails to match a string ("ab") that the single-production grammar it was built from does match.
Furthermore, constructing a grammar to match the union of the languages matched by two PEG grammars such as G1 and G2 is not always a trivial task.
This is in stark contrast to CFGs, in which the addition of a new production cannot remove strings (though it can introduce problems in the form of ambiguity),
and a (potentially ambiguous) grammar for the union of the languages of G1 and G2 can be constructed
S → start(G1) | start(G2)
Theory of parsing expression grammars.
It is an open problem to give a concrete example of a context-free language which cannot be recognized by a parsing expression grammar. In particular, it is open whether a parsing expression grammar can recognize the language of palindromes.
The class of parsing expression languages is closed under set intersection and complement, thus also under set union.
Undecidability of emptiness.
In stark contrast to the case for context-free grammars, it is not possible to generate elements of a parsing expression language from its grammar. Indeed, it is algorithmically undecidable whether the language recognised by a parsing expression grammar is empty! One reason for this is that any instance of the Post correspondence problem reduces to an instance of the problem of deciding whether a parsing expression language is empty.
Recall that an instance of the Post correspondence problem consists of a list formula_14 of pairs of strings (of terminal symbols). The problem is to determine whether there exists a sequence formula_15 of indices in the range formula_16 such that formula_17. To reduce this to a parsing expression grammar, let formula_18 be arbitrary pairwise distinct equally long strings of terminal symbols (already with formula_19 distinct symbols in the terminal symbol alphabet, length formula_20 suffices) and consider the parsing expression grammar
formula_21
Any string matched by the nonterminal formula_22 has the form formula_23 for some indices formula_24. Likewise any string matched by the nonterminal formula_25 has the form formula_26. Thus any string matched by formula_27 will have the form formula_28 where formula_29.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\bot"
},
{
"math_id": 1,
"text": "(N,\\Sigma,P,e_S)"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "\\Sigma"
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "N \\cup \\Sigma"
},
{
"math_id": 6,
"text": "e_S"
},
{
"math_id": 7,
"text": " \\{ a^n b^n : n \\ge 1 \\} "
},
{
"math_id": 8,
"text": " \\{ a^n b^n c^n : n \\ge 1 \\} "
},
{
"math_id": 9,
"text": " \\{a^n b^n\\}_{n \\geqslant 0} "
},
{
"math_id": 10,
"text": "a"
},
{
"math_id": 11,
"text": "b"
},
{
"math_id": 12,
"text": " \\{a^n b^n c^n\\}_{n \\geqslant 0} "
},
{
"math_id": 13,
"text": "c"
},
{
"math_id": 14,
"text": " (\\alpha_1,\\beta_1), (\\alpha_2,\\beta_2), \\dotsc, (\\alpha_n,\\beta_n) "
},
{
"math_id": 15,
"text": "\\{k_i\\}_{i=1}^m"
},
{
"math_id": 16,
"text": "\\{1,\\dotsc,n\\}"
},
{
"math_id": 17,
"text": " \\alpha_{k_1} \\alpha_{k_2} \\dotsb \\alpha_{k_m} = \\beta_{k_1} \\beta_{k_2} \\dotsb \\beta_{k_m} "
},
{
"math_id": 18,
"text": " \\gamma_0, \\gamma_1, \\dotsc, \\gamma_n "
},
{
"math_id": 19,
"text": "2"
},
{
"math_id": 20,
"text": "\\lceil \\log_2(n+1) \\rceil"
},
{
"math_id": 21,
"text": "\n \\begin{aligned}\n S &\\leftarrow \\&(A \\, !.) \\&(B \\, !.) (\\gamma_1/\\dotsb/\\gamma_n)^+ \\gamma_0 \\\\\n A &\\leftarrow \\gamma_0 / \\gamma_1 A \\alpha_1 / \\dotsb / \\gamma_n A \\alpha_n \\\\\n B &\\leftarrow \\gamma_0 / \\gamma_1 B \\beta_1 / \\dotsb / \\gamma_n B \\beta_n\n \\end{aligned}\n"
},
{
"math_id": 22,
"text": "A"
},
{
"math_id": 23,
"text": "\\gamma_{k_m} \\dotsb \\gamma_{k_2} \\gamma_{k_1} \\gamma_0 \\alpha_{k_1} \\alpha_{k_2} \\dotsb \\alpha_{k_m}"
},
{
"math_id": 24,
"text": "k_1,k_2,\\dotsc,k_m"
},
{
"math_id": 25,
"text": "B"
},
{
"math_id": 26,
"text": "\\gamma_{k_m} \\dotsb \\gamma_{k_2} \\gamma_{k_1} \\gamma_0 \\beta_{k_1} \\beta_{k_2} \\dotsb \\beta_{k_m}"
},
{
"math_id": 27,
"text": "S"
},
{
"math_id": 28,
"text": "\\gamma_{k_m} \\dotsb \\gamma_{k_2} \\gamma_{k_1} \\gamma_0 \\rho"
},
{
"math_id": 29,
"text": " \\rho = \\alpha_{k_1} \\alpha_{k_2} \\dotsb \\alpha_{k_m} = \\beta_{k_1} \\beta_{k_2} \\dotsb \\beta_{k_m}"
}
] | https://en.wikipedia.org/wiki?curid=892899 |
8929041 | Earth potential rise | Rise of voltage of local earth when a large current flows through an earth grid impedance
In electrical engineering, earth potential rise (EPR), also called ground potential rise (GPR), occurs when a large current flows to earth through an earth grid impedance. The potential relative to a distant point on the Earth is highest at the point where current enters the ground, and declines with distance from the source. Ground potential rise is a concern in the design of electrical substations because the high potential may be a hazard to people or equipment.
The change of voltage over distance (potential gradient) may be so high that a person could be injured due to the voltage developed between two feet, or between the ground on which the person is standing and a metal object. Any conducting object connected to the substation earth ground, such as telephone wires, rails, fences, or metallic piping, may also be energized at the ground potential in the substation. This transferred potential is a hazard to people and equipment outside the substation.
Causes.
Earth potential rise (EPR) is caused by electrical faults that occur at electrical substations, power plants, or high-voltage transmission lines. Short-circuit current flows through the plant structure and equipment and into the grounding electrode. The resistance of the Earth is non-zero, so current injected into the earth at the grounding electrode produces a potential rise with respect to a distant reference point. The resulting potential rise can cause hazardous voltage, many hundreds of metres away from the actual fault location. Many factors determine the level of hazard, including: available fault current, soil type, soil moisture, temperature, underlying rock layers, and clearing time to interrupt a fault.
Earth potential rise is a safety issue in the coordination of power and telecommunications services. An EPR event at a site such as an electrical distribution substation may expose personnel, users or structures to hazardous voltages.
Step, touch, and mesh voltages.
"Step voltage" is the voltage between the feet of a person standing near an energized grounded object. It is equal to the difference in voltage, given by the voltage distribution curve, between two points at different distances from the "electrode". A person could be at risk of injury during a fault simply by standing near the grounding point.
"Touch voltage" is the voltage between the energized object and the feet of a person in contact with the object. It is equal to the difference in voltage between the object and a point some distance away. The touch voltage could be nearly the full voltage across the grounded object if that object is grounded at a point remote from the place where the person is in contact with it. For example, a crane that was grounded to the system neutral and that contacted an energized line would expose any person in contact with the crane or its uninsulated load line to a touch voltage nearly equal to the full fault voltage.
"Mesh voltage" is a factor calculated or measured when a grid of grounding conductors is installed. Mesh voltage is the largest potential difference between metallic objects connected to the grid, and soil within the grid, under worst-case fault conditions. It is significant because a person may be standing inside the grid at a point with a large voltage relative to the grid itself.
Mitigation.
An engineering analysis of the power system under fault conditions can be used to determine whether or not hazardous step and touch voltages will develop. The result of this analysis can show the need for protective measures and can guide the selection of appropriate precautions.
Several methods may be used to protect employees from hazardous ground-potential gradients, including equipotential zones, insulating equipment, and restricted work areas.
The creation of an equipotential zone will protect a worker standing within it from hazardous step and touch voltages. Such a zone can be produced through the use of a metal mat connected to the grounded object. Usually this metal mat (or ground mesh) is connected to buried ground rods to increase contact with the earth and effectively reduce grid impedance. In some cases, a grounding grid can be used to equalize the voltage within the grid. Equipotential zones will not, however, protect employees who are either wholly or partially outside the protected area. Bonding conductive objects in the immediate work area can also be used to minimize the voltage between the objects and between each object and ground. (Bonding an object outside the work area can increase the touch voltage to that object in some cases, however.)
The use of insulating personal protective equipment, such as rubber gloves, can protect employees handling grounded equipment and conductors from hazardous touch voltages. The insulating equipment must be rated for the highest voltage that can be impressed on the grounded objects under fault conditions (rather than for the full system voltage).
Workers may be protected from hazardous step or touch voltages by prohibiting access to areas where dangerous voltages may occur, such as within substation boundaries or areas near transmission towers. Workers required to handle conductors or equipment connected to a grounding system may require protective gloves or other measures to protect them from accidentally energized conductors.
In electrical substations, the surface may be covered with a high-resistivity layer of crushed stone or asphalt. The surface layer provides a high resistance between feet and the ground grid, and is an effective method to reduce the step and touch voltage hazard.
Calculations.
In principle, the potential of the earth grid "V"grid can be calculated using Ohm's law if the fault current ("I"f) and resistance of the grid ("Z"grid) are known.
formula_0
While the fault current from a distribution or transmission system can usually be calculated or estimated with precision, calculation of the earth grid resistance is more complicated. Difficulties in calculation arise from the extended and irregular shape of practical ground grids, and the varying resistivity of soil at different depths.
At points outside the earth grid, the potential rise decreases. The simplest case of the potential at a distance is the analysis of a driven rod electrode in homogeneous earth. The voltage profile is given by the following equation.
formula_1
where formula_3 is the potential (relative to distant earth) at a distance formula_2 from the electrode, formula_4 is the soil resistivity, and formula_5 is the current injected into the ground.
This case is a simplified system; practical earthing systems are more complex than a single rod, and the soil will have varying resistivity. It can, however, reliably be said that the resistance of a ground grid is roughly inversely proportional to the square root of the area it covers; this rule can be used to quickly assess the degree of difficulty for a particular site. Programs running on desktop personal computers can model ground resistance effects and produce detailed calculations of ground potential rise, using various techniques including the finite element method.
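As a concrete illustration, both formulas can be evaluated directly. All numbers in the following Python sketch are assumed values chosen only to show the orders of magnitude involved, not data from any real installation or standard:
from math import pi
I_fault = 10e3                      # fault current into the grid, A
Z_grid = 0.5                        # earth grid impedance, ohm
print("EPR at the grid:", I_fault * Z_grid, "V")             # 5000 V
rho = 100.0                         # soil resistivity, ohm-metre
I_rod = 1e3                         # current into a single driven rod, A
V = lambda r: rho * I_rod / (2 * pi * r)                     # potential at distance r, m
# Step voltage between two feet 1 m apart, 10 m from the electrode
print("Step voltage at 10 m:", round(V(10) - V(11)), "V")    # about 145 V
# Distance at which the potential falls to 300 V
print("300 V point:", round(rho * I_rod / (2 * pi * 300)), "m")   # about 53 m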
Standards and regulations.
The US Occupational Safety and Health Administration (OSHA) has designated EPR as a "known hazard" and has issued regulations governing the elimination of this hazard in the work place.
Protection and isolation equipment is made to national and international standards described by IEEE, National Electrical Codes (UL/CSA), FCC, and Telcordia.
IEEE Std. 80-2000 is a standard that addresses the calculation and mitigation of step and touch voltages to acceptable levels around electrical substations.
High-voltage protection of telecommunication circuits.
In itself, a ground potential rise is not harmful to any equipment or person bonded to the same ground potential. However, where a conductor (such as a metallic telecommunication line) bonded to a remote ground potential (such as the Central Office/Exchange) enters the area subject to GPR, the conflicting difference of potentials can create significant risks. High voltage can damage equipment and present a danger to personnel. To protect wired communication and control circuits in sub stations, protective devices must be applied. Isolation devices prevent the transfer of potential into or out of the GPR area. This protects equipment and personnel who might otherwise be exposed simultaneously to both ground potentials, and also prevents high voltages and currents propagating towards the telephone company's central office or other users connected to the same network. Circuits may be isolated by transformers or by non-conductive fiber optic couplings. (Surge arresting devices such as carbon blocks or gas-tube shunts to ground do not isolate the circuit but divert high voltage currents from the protected circuit to the local ground. This type of protection will not fully protect against the hazards of GPR where the danger is from a remote ground on the same circuit.)
Telecommunication standards define a "zone of influence" around a substation, inside of which equipment and circuits must be protected from the effect of ground potential rise. In North American practice, the zone of influence is considered to be bounded by the "300 volt point", which is the point along a telecommunications circuit at which the GPR reaches 300 volts with respect to distant earth. The 300 volt point defining a zone of influence around a substation is dependent on the ground resistivity and the amount of fault current. It will define a boundary a certain distance from the ground grid of the substation. Each substation has its own zone of influence since the variables explained above are different for each location.
In the UK, any site subject to a rise of earth potential (ROEP) is referred to as a 'Hot-Site'. The zone of influence was historically measured as anywhere within 100 m of the boundary of the high-voltage compound at a Hot-Site. Depending on the size of the overall site, this may mean that parts of a larger site need not be classed as 'Hot', or (conversely) the influence of small sites may extend into areas outside of the land owner's control. Since 2007 it has been allowable to use the Energy Networks Association (ENA) Recommendation S34 ('A Guide for Assessing the Rise of Earth Potential at Substation Sites') to calculate the Hot-Zone. This is now defined as a contour line marking where the ROEP exceeds 430 V for normal-reliability power lines, or 650 V for high-reliability lines. The 'Zone' extends in a radius from any bonded metalwork, such as the site earth electrode system or boundary fence. This may effectively reduce the overall size of the Hot-Zone compared to the previous definition. However, strip earth electrodes, and any non-effectively insulated metallic sheath or armouring of power cables which extend out of this zone, would continue to be considered as 'hot' for a distance of 100 m from the boundary, encompassing a width of two meters on either side of the conductor. It is the responsibility of the owning Electrical Supply Industry (ESI) to calculate the Hot-Zone.
Openreach (a BT Group company tasked with installing and maintaining a significant majority of the physical telephone network in the UK) maintains a Hot-Site Register, updated every 12 months by voluntarily supplied information from the ESI companies in the UK. Any Openreach engineer visiting a site on the register must be Hot-Site trained. Certain working practices and planning considerations must be followed, such as not using armoured telephone cables, fully sealing cable joints to prevent access, over-sleeving of individual pairs of wires beyond the end of the cable sheath, and isolating (outside the Hot-Zone) any line to be worked on. It is assumed to be the responsibility of the party ordering the initial installation of a service to cover the cost of providing isolation links, service isolating devices, and clearly marked trunking for running cables, and all should be part of the planning process.
In some circumstances (such as when a 'cold' site is upgraded to 'hot' status), the Zone of Influence may encompass residential or commercial property which is not within the property of the Electrical Supply Industry. In these cases the cost of retroactively protecting each telephone circuit may be prohibitively high, so a drainage electrode may be supplied to effectively bring the local Earth Potential back to safe levels.
References.
[1] ACIF Working Committee CECRP/WC18, "AS/ACIF S009:2006 Installation Requirements for Customer Cabling (Wiring Rules)", Australian Communications Industry Forum, North Sydney, Australia (2006) | [
{
"math_id": 0,
"text": " V_\\text{grid} = I_\\text{f} \\times Z_\\text{grid}"
},
{
"math_id": 1,
"text": "V_{r} = \\frac{\\rho I}{2 \\pi r_x}"
},
{
"math_id": 2,
"text": "r_x"
},
{
"math_id": 3,
"text": "V_{r}"
},
{
"math_id": 4,
"text": "\\rho"
},
{
"math_id": 5,
"text": "I"
}
] | https://en.wikipedia.org/wiki?curid=8929041 |
893337 | Local hidden-variable theory | Interpretation of quantum mechanics
In the interpretation of quantum mechanics, a local hidden-variable theory is a hidden-variable theory that satisfies the principle of locality. These models attempt to account for the probabilistic features of quantum mechanics via the mechanism of underlying, but inaccessible variables, with the additional requirement that distant events be statistically independent.
The mathematical implications of a local hidden-variable theory with regards to quantum entanglement were explored by physicist John Stewart Bell, who in 1964 proved that broad classes of local hidden-variable theories cannot reproduce the correlations between measurement outcomes that quantum mechanics predicts, a result since confirmed by a range of detailed Bell test experiments.
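The size of the gap between local hidden-variable models and quantum mechanics can be illustrated with the CHSH form of Bell's inequality. The following sketch (Python with NumPy; a numerical illustration, not a derivation from Bell's paper) enumerates every deterministic local assignment of outcomes and finds that the CHSH combination of correlations never exceeds 2, while the singlet state with suitably chosen spin measurements reaches 2√2:
import numpy as np
from itertools import product

# Local deterministic strategies: fixed outcomes ±1 for Alice's settings (a0, a1)
# and Bob's settings (b0, b1).
classical_max = max(abs(a0*b0 + a0*b1 + a1*b0 - a1*b1)
                    for a0, a1, b0, b1 in product((-1, 1), repeat=4))
print("local deterministic bound:", classical_max)            # 2

# Quantum strategy: the singlet state with spin measurements in the x-z plane.
sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])
spin = lambda angle: np.cos(angle) * sz + np.sin(angle) * sx
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

def E(a, b):
    # correlation of the two spin measurements in the singlet state
    return float(singlet @ np.kron(spin(a), spin(b)) @ singlet)

a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print("quantum value |S|:", abs(S))                           # 2*sqrt(2), about 2.828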
Models.
Single qubit.
A collection of related theorems, beginning with Bell's proof in 1964, show that quantum mechanics is incompatible with local hidden variables. However, as Bell pointed out, restricted sets of quantum phenomena "can" be imitated using local hidden-variable models. Bell provided a local hidden-variable model for quantum measurements upon a spin-1/2 particle, or in the terminology of quantum information theory, a single qubit. Bell's model was later simplified by N. David Mermin, and a closely related model was presented by Simon B. Kochen and Ernst Specker. The existence of these models is related to the fact that Gleason's theorem does not apply to the case of a single qubit.
Bipartite quantum states.
Bell also pointed out that up until then, discussions of quantum entanglement focused on cases where the results of measurements upon two particles were either perfectly correlated or perfectly anti-correlated. These special cases can also be explained using local hidden variables.
For separable states of two particles, there is a simple hidden-variable model for any measurements on the two parties. Surprisingly, there are also entangled states for which all von Neumann measurements can be described by a hidden-variable model. Such states are entangled, but do not violate any Bell inequality. The so-called Werner states are a single-parameter family of states that are invariant under any transformation of the type formula_0 where formula_1 is a unitary matrix. For two qubits, they are noisy singlets given as
formula_2
where the singlet is defined as formula_3.
Reinhard F. Werner showed that such states allow for a hidden-variable model for formula_4, while they are entangled if formula_5. The bound for hidden-variable models has since been improved up to formula_6. Hidden-variable models have been constructed for Werner states even if positive operator-valued measurements (POVM) are allowed, not only von Neumann measurements. Hidden-variable models have also been constructed for noisy maximally entangled states, and even extended to arbitrary pure states mixed with white noise. Besides bipartite systems, there are also results for the multipartite case. A hidden-variable model for any von Neumann measurements at the parties has been presented for a three-qubit quantum state.
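The entanglement threshold quoted above can be checked numerically with the Peres–Horodecki (positive partial transpose) criterion; the test is not part of Werner's construction, just a convenient way of detecting the entanglement. The following NumPy sketch builds the state formula_2 and reports the smallest eigenvalue of its partial transpose, which becomes negative exactly when formula_5:
import numpy as np

def werner_state(p):
    # p |psi-><psi-| + (1 - p) I/4 for two qubits
    psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

def partial_transpose(rho):
    # transpose the second qubit of a 4x4 two-qubit density matrix
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

for p in (0.25, 1/3, 0.40, 0.60):
    min_eig = np.linalg.eigvalsh(partial_transpose(werner_state(p))).min()
    verdict = "entangled (NPT)" if min_eig < -1e-12 else "PPT"
    print(f"p = {p:.3f}: smallest eigenvalue of the partial transpose = {min_eig:+.4f} -> {verdict}")

# The smallest eigenvalue works out to (1 - 3p)/4, so it turns negative exactly at p = 1/3.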
Time-dependent variables.
Some hypotheses have been put forward concerning the role of time in constructing hidden-variable theories. One approach, suggested by K. Hess and W. Philipp, relies upon possible consequences of time dependencies of hidden variables; this hypothesis has been criticized by Richard D. Gill, Gregor Weihs, Anton Zeilinger and Marek Żukowski, as well as by D. M. Appleby.
{
"math_id": 0,
"text": "U \\otimes U,"
},
{
"math_id": 1,
"text": "U"
},
{
"math_id": 2,
"text": "\\varrho = p \\vert \\psi^- \\rangle \\langle \\psi^-\\vert + (1 - p) \\frac{\\mathbb{I}}{4},"
},
{
"math_id": 3,
"text": "\\vert \\psi^-\\rangle = \\tfrac{1}{\\sqrt{2}}\\left(\\vert 01\\rangle - \\vert 10\\rangle\\right)"
},
{
"math_id": 4,
"text": "p \\leq 1/2"
},
{
"math_id": 5,
"text": "p > 1/3"
},
{
"math_id": 6,
"text": "p = 2/3"
}
] | https://en.wikipedia.org/wiki?curid=893337 |
893516 | Core (group theory) | Any of certain special normal subgroups of a group
In group theory, a branch of mathematics, a core is any of certain special normal subgroups of a group. The two most common types are the normal core of a subgroup and the "p"-core of a group.
The normal core.
Definition.
For a group "G", the normal core or normal interior of a subgroup "H" is the largest normal subgroup of "G" that is contained in "H" (or equivalently, the intersection of the conjugates of "H"). More generally, the core of "H" with respect to a subset "S" ⊆ "G" is the intersection of the conjugates of "H" under "S", i.e.
formula_0
Under this more general definition, the normal core is the core with respect to "S" = "G". The normal core of any normal subgroup is the subgroup itself.
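For small permutation groups the normal core can be computed directly from this definition by intersecting conjugates. The following sketch (plain Python, with permutations written as tuples; an illustrative computation rather than a library routine) finds the core of the subgroup generated by a transposition in the symmetric group on three points and confirms that this subgroup is core-free:
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]] for permutations written as tuples
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

G = set(permutations(range(3)))      # the symmetric group S_3 acting on {0, 1, 2}
H = {(0, 1, 2), (1, 0, 2)}           # the subgroup generated by the transposition (0 1)

core = set(G)                        # Core_G(H) = intersection of g^-1 H g over all g in G
for g in G:
    core &= {compose(inverse(g), compose(h, g)) for h in H}

print(core)                          # {(0, 1, 2)} -- only the identity, so H is core-free
Consistently with the remarks on group actions below, the action of the group on the three cosets of this core-free subgroup is faithful.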
Significance.
Normal cores are important in the context of group actions on sets, where the normal core of the isotropy subgroup of any point acts as the identity on its entire orbit. Thus, in case the action is transitive, the normal core of any isotropy subgroup is precisely the kernel of the action.
A core-free subgroup is a subgroup whose normal core is the trivial subgroup. Equivalently, it is a subgroup that occurs as the isotropy subgroup of a transitive, faithful group action.
The solution for the hidden subgroup problem in the abelian case generalizes to finding the normal core in case of subgroups of arbitrary groups.
The "p"-core.
In this section "G" will denote a finite group, though some aspects generalize to locally finite groups and to profinite groups.
Definition.
For a prime "p", the p"-core of a finite group is defined to be its largest normal "p"-subgroup. It is the normal core of every Sylow p-subgroup of the group. The "p"-core of "G" is often denoted formula_1, and in particular appears in one of the definitions of the Fitting subgroup of a finite group. Similarly, the p"′-core is the largest normal subgroup of "G" whose order is coprime to "p" and is denoted formula_2. In the area of finite insoluble groups, including the classification of finite simple groups, the 2′-core is often called simply the core and denoted formula_3. This causes only a small amount of confusion, because one can usually distinguish between the core of a group and the core of a subgroup within a group. The "p"′,"p"-core, denoted formula_4 is defined by formula_5. For a finite group, the "p"′,"p"-core is the unique largest normal "p"-nilpotent subgroup.
The "p"-core can also be defined as the unique largest subnormal "p"-subgroup; the "p"′-core as the unique largest subnormal "p"′-subgroup; and the "p"′,"p"-core as the unique largest subnormal "p"-nilpotent subgroup.
The "p"′ and "p"′,"p"-core begin the upper "p"-series. For sets "π"1, "π"2, ..., "π""n"+1 of primes, one defines subgroups O"π"1, "π"2, ..., "π""n"+1("G") by:
formula_6
The upper "p"-series is formed by taking "π"2"i"−1 = "p"′ and "π"2"i" = "p;" there is also a lower "p"-series. A finite group is said to be p"-nilpotent if and only if it is equal to its own "p"′,"p"-core. A finite group is said to be p"-soluble if and only if it is equal to some term of its upper "p"-series; its "p"-length is the length of its upper "p"-series. A finite group "G" is said to be p-constrained for a prime "p" if formula_7.
Every nilpotent group is "p"-nilpotent, and every "p"-nilpotent group is "p"-soluble. Every soluble group is "p"-soluble, and every "p"-soluble group is "p"-constrained. A group is "p"-nilpotent if and only if it has a normal "p"-complement, which is just its "p"′-core.
Significance.
Just as normal cores are important for group actions on sets, "p"-cores and "p"′-cores are important in modular representation theory, which studies the actions of groups on vector spaces. The "p"-core of a finite group is the intersection of the kernels of the irreducible representations over any field of characteristic "p". For a finite group, the "p"′-core is the intersection of the kernels of the ordinary (complex) irreducible representations that lie in the principal "p"-block. For a finite group, the "p"′,"p"-core is the intersection of the kernels of the irreducible representations in the principal "p"-block over any field of characteristic "p". Also, for a finite group, the "p"′,"p"-core is the intersection of the centralizers of the abelian chief factors whose order is divisible by "p" (all of which are irreducible representations over a field of size "p" lying in the principal block). For a finite, "p"-constrained group, an irreducible module over a field of characteristic "p" lies in the principal block if and only if the "p"′-core of the group is contained in the kernel of the representation.
Solvable radicals.
A related subgroup in concept and notation is the solvable radical. The solvable radical is defined to be the largest solvable normal subgroup, and is denoted formula_8. There is some variance in the literature in defining the "p"′-core of "G". A few authors in only a few papers (for instance John G. Thompson's N-group papers, but not his later work) define the "p"′-core of an insoluble group "G" as the "p"′-core of its solvable radical in order to better mimic properties of the 2′-core.
{
"math_id": 0,
"text": "\\mathrm{Core}_S(H) := \\bigcap_{s \\in S}{s^{-1}Hs}."
},
{
"math_id": 1,
"text": "O_p(G)"
},
{
"math_id": 2,
"text": "O_{p'}(G)"
},
{
"math_id": 3,
"text": "O(G)"
},
{
"math_id": 4,
"text": "O_{p',p}(G)"
},
{
"math_id": 5,
"text": "O_{p',p}(G)/O_{p'}(G) = O_p(G/O_{p'}(G))"
},
{
"math_id": 6,
"text": "O_{\\pi_1,\\pi_2,\\dots,\\pi_{n+1}}(G)/O_{\\pi_1,\\pi_2,\\dots,\\pi_{n}}(G) = O_{\\pi_{n+1}}( G/O_{\\pi_1,\\pi_2,\\dots,\\pi_{n}}(G) )"
},
{
"math_id": 7,
"text": "C_G(O_{p',p}(G)/O_{p'}(G)) \\subseteq O_{p',p}(G)"
},
{
"math_id": 8,
"text": "O_\\infty(G)"
}
] | https://en.wikipedia.org/wiki?curid=893516 |
893559 | Split-complex number | The reals with an extra square root of +1 adjoined
In algebra, a split-complex number (or hyperbolic number, also perplex number, double number) is based on a hyperbolic unit j satisfying formula_0 A split-complex number has two real number components x and y, and is written formula_1 The "conjugate" of z is formula_2 Since formula_3 the product of a number z with its conjugate is formula_4 an isotropic quadratic form.
The collection D of all split-complex numbers formula_5 for real x and y forms an algebra over the field of real numbers. Two split-complex numbers w and z have a product wz that satisfies formula_6 This composition of N over the algebra product makes ("D", +, ×, *) a composition algebra.
A similar algebra based on R^2 and component-wise operations of addition and multiplication, where xy is the quadratic form on R^2, also forms a quadratic space. The ring isomorphism
formula_7
relates proportional quadratic forms, but the mapping is not an isometry since the multiplicative identity (1, 1) of R^2 is at a distance √2 from 0, which is normalized in D.
Split-complex numbers have many other names; see the Synonyms section below. See the article "Motor variable" for functions of a split-complex number.
Definition.
A split-complex number is an ordered pair of real numbers, written in the form
formula_8
where x and y are real numbers and the hyperbolic unit j satisfies
formula_9
In the field of complex numbers the imaginary unit i satisfies formula_10 The change of sign distinguishes the split-complex numbers from the ordinary complex ones. The hyperbolic unit j is "not" a real number but an independent quantity.
The collection of all such z is called the split-complex plane. Addition and multiplication of split-complex numbers are defined by
formula_11
This multiplication is commutative, associative and distributes over addition.
Conjugate, modulus, and bilinear form.
Just as for complex numbers, one can define the notion of a split-complex conjugate. If
formula_12
then the conjugate of z is defined as
formula_13
The conjugate is an involution which satisfies similar properties to the complex conjugate. Namely,
formula_14
The squared modulus of a split-complex number formula_15 is given by the isotropic quadratic form
formula_16
It has the composition algebra property:
formula_17
However, this quadratic form is not positive-definite but rather has signature (1, −1), so the modulus is "not" a norm.
The associated bilinear form is given by
formula_18
where formula_15 and formula_19 Here, the "real part" is defined by formula_20. Another expression for the squared modulus is then
formula_21
Since it is not positive-definite, this bilinear form is not an inner product; nevertheless the bilinear form is frequently referred to as an "indefinite inner product". A similar abuse of language refers to the modulus as a norm.
A split-complex number is invertible if and only if its modulus is nonzero (formula_22), thus numbers of the form "x" ± "j x" have no inverse. The multiplicative inverse of an invertible element is given by
formula_23
Split-complex numbers which are not invertible are called null vectors. These are all of the form ("a" ± "j a") for some real number a.
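The arithmetic described so far is easy to experiment with. The following minimal Python class (an illustrative sketch, not a standard library type) implements split-complex multiplication, conjugation and the squared modulus, and checks the multiplicative property formula_6 together with the non-invertibility of a null vector:
class SplitComplex:
    # z = x + y*j with j*j = +1
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, w):
        return SplitComplex(self.x + w.x, self.y + w.y)

    def __mul__(self, w):
        # (x + yj)(u + vj) = (xu + yv) + (xv + yu)j
        return SplitComplex(self.x * w.x + self.y * w.y,
                            self.x * w.y + self.y * w.x)

    def conj(self):
        return SplitComplex(self.x, -self.y)

    def modulus_sq(self):
        # z z* = x^2 - y^2 (may be negative or zero)
        return self.x ** 2 - self.y ** 2

    def __repr__(self):
        return f"{self.x} + {self.y}j"

z, w = SplitComplex(3, 1), SplitComplex(2, 5)
print(z * w)                                                   # 11 + 17j
print((z * w).modulus_sq(), z.modulus_sq() * w.modulus_sq())   # -168 and -168
print(SplitComplex(4, 4).modulus_sq())                         # 0: a null vector, hence not invertible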
The diagonal basis.
There are two nontrivial idempotent elements given by formula_24 and formula_25 Recall that idempotent means that formula_26 and formula_27 Both of these elements are null:
formula_28
It is often convenient to use e and e∗ as an alternate basis for the split-complex plane. This basis is called the diagonal basis or null basis. The split-complex number z can be written in the null basis as
formula_29
If we denote the number formula_30 for real numbers a and b by ("a", "b"), then split-complex multiplication is given by
formula_31
The split-complex conjugate in the diagonal basis is given by
formula_32
and the squared modulus by
formula_33
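A quick numerical check of the diagonal basis (a few self-contained Python lines, purely illustrative) confirms that in the coordinates ("a", "b") = ("x" − "y", "x" + "y") the multiplication becomes componentwise and the squared modulus becomes the product "ab":
def to_diagonal(x, y):
    # z = x + yj  ->  (a, b) = (x - y, x + y)
    return (x - y, x + y)

def split_mul(x, y, u, v):
    # (x + yj)(u + vj) = (xu + yv) + (xv + yu)j
    return (x * u + y * v, x * v + y * u)

x, y, u, v = 3, 1, 2, 5
a1, b1 = to_diagonal(x, y)
a2, b2 = to_diagonal(u, v)

print(to_diagonal(*split_mul(x, y, u, v)))   # (-6, 28)
print((a1 * a2, b1 * b2))                    # (-6, 28): multiplication is componentwise
print(a1 * b1, x**2 - y**2)                  # 8 and 8: the squared modulus is a*b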
Isomorphism.
On the basis {e, e*} it becomes clear that the split-complex numbers are ring-isomorphic to the direct sum R ⊕ R with addition and multiplication defined pairwise.
The diagonal basis for the split-complex number plane can be invoked by using an ordered pair ("x", "y") for formula_34 and making the mapping
formula_35
Now the quadratic form is formula_36 Furthermore,
formula_37
so the two parametrized hyperbolas are brought into correspondence with S.
The action of hyperbolic versor formula_38 then corresponds under this linear transformation to a squeeze mapping
formula_39
Though lying in the same isomorphism class in the category of rings, the split-complex plane and the direct sum of two real lines differ in their layout in the Cartesian plane. The isomorphism, as a planar mapping, consists of a counter-clockwise rotation by 45° and a dilation by √2. The dilation in particular has sometimes caused confusion in connection with areas of a hyperbolic sector. Indeed, hyperbolic angle corresponds to area of a sector in the R ⊕ R plane with its "unit circle" given by formula_40 The contracted unit hyperbola formula_41 of the split-complex plane has only "half the area" in the span of a corresponding hyperbolic sector. Such confusion may be perpetuated when the geometry of the split-complex plane is not distinguished from that of R ⊕ R.
Geometry.
A two-dimensional real vector space with the Minkowski inner product is called (1 + 1)-dimensional Minkowski space, often denoted R^{1,1}. Just as much of the geometry of the Euclidean plane R^2 can be described with complex numbers, the geometry of the Minkowski plane R^{1,1} can be described with split-complex numbers.
The set of points
formula_42
is a hyperbola for every nonzero a in R. The hyperbola consists of a right and a left branch passing through ("a", 0) and (−"a", 0). The case "a" = 1 is called the unit hyperbola. The conjugate hyperbola is given by
formula_43
with an upper and lower branch passing through (0, "a") and (0, −"a"). The hyperbola and conjugate hyperbola are separated by two diagonal asymptotes which form the set of null elements:
formula_44
These two lines (sometimes called the null cone) are perpendicular in R^2 and have slopes ±1.
Split-complex numbers z and w are said to be hyperbolic-orthogonal if ⟨"z", "w"⟩ = 0. While analogous to ordinary orthogonality, particularly as it is known with ordinary complex number arithmetic, this condition is more subtle. It forms the basis for the simultaneous hyperplane concept in spacetime.
The analogue of Euler's formula for the split-complex numbers is
formula_45
This formula can be derived from a power series expansion using the fact that cosh has only even powers while that for sinh has odd powers. For all real values of the hyperbolic angle θ the split-complex number "λ" = exp("jθ") has norm 1 and lies on the right branch of the unit hyperbola. Numbers such as λ have been called hyperbolic versors.
Since λ has modulus 1, multiplying any split-complex number z by λ preserves the modulus of z and represents a "hyperbolic rotation" (also called a Lorentz boost or a squeeze mapping). Multiplying by λ preserves the geometric structure, taking hyperbolas to themselves and the null cone to itself.
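A short numerical check (Python, illustrative) of the split-complex Euler formula and of the fact that multiplication by a hyperbolic versor preserves the squared modulus:
from math import cosh, sinh, isclose

def split_mul(x, y, u, v):
    # (x + yj)(u + vj) = (xu + yv) + (xv + yu)j
    return (x * u + y * v, x * v + y * u)

def exp_j(theta):
    # exp(j*theta) = cosh(theta) + j*sinh(theta)
    return (cosh(theta), sinh(theta))

a, b = 0.7, -0.3
lhs = split_mul(*exp_j(a), *exp_j(b))
print(all(isclose(l, r) for l, r in zip(lhs, exp_j(a + b))))   # True: exp(ja)exp(jb) = exp(j(a+b))

x, y = 3.0, 1.0
xr, yr = split_mul(x, y, *exp_j(0.7))
print(x**2 - y**2, xr**2 - yr**2)            # both 8.0 up to rounding: the modulus is preserved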
The set of all transformations of the split-complex plane which preserve the modulus (or equivalently, the inner product) forms a group called the generalized orthogonal group O(1, 1). This group consists of the hyperbolic rotations, which form a subgroup denoted SO+(1, 1), combined with four discrete reflections given by
formula_46 and formula_47
The exponential map
formula_48
sending θ to rotation by exp("jθ") is a group isomorphism since the usual exponential formula applies:
formula_49
If a split-complex number z does not lie on one of the diagonals, then z has a polar decomposition.
Algebraic properties.
In abstract algebra terms, the split-complex numbers can be described as the quotient of the polynomial ring R["x"] by the ideal generated by the polynomial formula_50
formula_51
The image of x in the quotient is the "imaginary" unit j. With this description, it is clear that the split-complex numbers form a commutative algebra over the real numbers. The algebra is "not" a field since the null elements are not invertible. All of the nonzero null elements are zero divisors.
Since addition and multiplication are continuous operations with respect to the usual topology of the plane, the split-complex numbers form a topological ring.
The algebra of split-complex numbers forms a composition algebra since
formula_52
for any numbers z and w.
From the definition it is apparent that the ring of split-complex numbers is isomorphic to the group ring R[C2] of the cyclic group C2 over the real numbers R.
Matrix representations.
One can easily represent split-complex numbers by matrices. The split-complex number formula_34 can be represented by the matrix formula_53
Addition and multiplication of split-complex numbers are then given by matrix addition and multiplication. The squared modulus of z is given by the determinant of the corresponding matrix.
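This representation is easy to verify numerically (a NumPy sketch, illustrative only): matrix addition and multiplication reproduce split-complex arithmetic, and the determinant reproduces the squared modulus:
import numpy as np

def as_matrix(x, y):
    # z = x + yj  ->  [[x, y], [y, x]]
    return np.array([[x, y], [y, x]])

Z, W = as_matrix(3, 1), as_matrix(2, 5)
print(Z @ W)                              # [[11 17] [17 11]], the matrix of 11 + 17j
print(np.linalg.det(Z), 3**2 - 1**2)      # approximately 8, and 8: det equals the squared modulus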
In fact there are many representations of the split-complex plane in the four-dimensional ring of 2 × 2 real matrices. The real multiples of the identity matrix form a real line in the matrix ring M(2,R). Any hyperbolic unit "m" provides a basis element with which to extend the real line to the split-complex plane. The matrices
formula_54
which square to the identity matrix satisfy formula_55
For example, when "a" = 0, then ("b,c") is a point on the standard hyperbola. More generally, there is a hypersurface in M(2,R) of hyperbolic units, any one of which serves in a basis to represent the split-complex numbers as a subring of M(2,R).
The number formula_34 can be represented by the matrix formula_56
History.
The use of split-complex numbers dates back to 1848 when James Cockle revealed his tessarines. William Kingdon Clifford used split-complex numbers to represent sums of spins. Clifford introduced the use of split-complex numbers as coefficients in a quaternion algebra now called split-biquaternions. He called its elements "motors", a term in parallel with the "rotor" action of an ordinary complex number taken from the circle group. Extending the analogy, functions of a motor variable contrast to functions of an ordinary complex variable.
Since the late twentieth century, the split-complex multiplication has commonly been seen as a Lorentz boost of a spacetime plane. In that model, the number "z" = "x" + "y" "j" represents an event in a spatio-temporal plane, where "x" is measured in seconds and y in light-seconds. The future corresponds to the quadrant of events {"z" : |"y"| < "x"}, which has the split-complex polar decomposition formula_57. The model says that z can be reached from the origin by entering a frame of reference of rapidity a and waiting ρ seconds. The split-complex equation
formula_58
expressing products on the unit hyperbola illustrates the additivity of rapidities for collinear velocities. Simultaneity of events depends on rapidity a;
formula_59
is the line of events simultaneous with the origin in the frame of reference with rapidity "a".
Two events z and w are hyperbolic-orthogonal when formula_60 Canonical events exp("aj") and "j" exp("aj") are hyperbolic orthogonal and lie on the axes of a frame of reference in which the events simultaneous with the origin are proportional to "j" exp("aj").
In 1933 Max Zorn was using the split-octonions and noted the composition algebra property. He realized that the Cayley–Dickson construction, used to generate division algebras, could be modified (with a factor gamma, γ) to construct other composition algebras including the split-octonions. His innovation was perpetuated by Adrian Albert, Richard D. Schafer, and others. The gamma factor, with R as base field, builds split-complex numbers as a composition algebra. Reviewing Albert for Mathematical Reviews, N. H. McCoy wrote that there was an "introduction of some new algebras of order 2^"e" over "F" generalizing Cayley–Dickson algebras." Taking "F" = R and "e" = 1 corresponds to the algebra of this article.
In 1935 J.C. Vignaux and A. Durañona y Vedia developed the split-complex geometric algebra and function theory in four articles in "Contribución a las Ciencias Físicas y Matemáticas", National University of La Plata, República Argentina (in Spanish). These expository and pedagogical essays presented the subject for broad appreciation.
In 1941 E.F. Allen used the split-complex geometric arithmetic to establish the nine-point hyperbola of a triangle inscribed in "zz"∗ = 1.
In 1956 Mieczyslaw Warmus published "Calculus of Approximations" in "Bulletin de l’Académie polonaise des sciences" (see link in References). He developed two algebraic systems, each of which he called "approximate numbers", the second of which forms a real algebra. D. H. Lehmer reviewed the article in Mathematical Reviews and observed that this second system was isomorphic to the "hyperbolic complex" numbers, the subject of this article.
In 1961 Warmus continued his exposition, referring to the components of an approximate number as midpoint and radius of the interval denoted.
Synonyms.
Different authors have used a great variety of names for the split-complex numbers. Some of these include:
{
"math_id": 0,
"text": "j^2=1."
},
{
"math_id": 1,
"text": "z=x+yj ."
},
{
"math_id": 2,
"text": "z^*=x-yj."
},
{
"math_id": 3,
"text": "j^2=1,"
},
{
"math_id": 4,
"text": "N(z) := zz^* = x^2 - y^2,"
},
{
"math_id": 5,
"text": "z=x+yj"
},
{
"math_id": 6,
"text": "N(wz)=N(w)N(z)."
},
{
"math_id": 7,
"text": "\\begin{align}\n D &\\to \\mathbb{R}^2 \\\\\n x + yj &\\mapsto (x - y, x + y)\n\\end{align}"
},
{
"math_id": 8,
"text": "z = x + jy"
},
{
"math_id": 9,
"text": "j^2 = +1"
},
{
"math_id": 10,
"text": "i^2 = -1 ."
},
{
"math_id": 11,
"text": "\\begin{align}\n (x + jy) + (u + jv) &= (x + u) + j(y + v) \\\\\n (x + jy)(u + jv) &= (xu + yv) + j(xv + yu).\n\\end{align}"
},
{
"math_id": 12,
"text": " z = x + jy ~,"
},
{
"math_id": 13,
"text": " z^* = x - jy ~."
},
{
"math_id": 14,
"text": "\\begin{align}\n (z + w)^* &= z^* + w^* \\\\\n (zw)^* &= z^* w^* \\\\\n \\left(z^*\\right)^* &= z.\n\\end{align}"
},
{
"math_id": 15,
"text": "z=x+jy"
},
{
"math_id": 16,
"text": "\\lVert z \\rVert^2 = z z^* = z^* z = x^2 - y^2 ~."
},
{
"math_id": 17,
"text": "\\lVert z w \\rVert = \\lVert z \\rVert \\lVert w \\rVert ~."
},
{
"math_id": 18,
"text": "\\langle z, w \\rangle = \\operatorname\\mathrm{Re}\\left(zw^*\\right) = \\operatorname\\mathrm{Re} \\left(z^* w\\right) = xu - yv ~,"
},
{
"math_id": 19,
"text": "w=u+jv."
},
{
"math_id": 20,
"text": "\\operatorname\\mathrm{Re}(z) = \\tfrac{1}{2}(z + z^*) = x"
},
{
"math_id": 21,
"text": " \\lVert z \\rVert^2 = \\langle z, z \\rangle ~."
},
{
"math_id": 22,
"text": "\\lVert z \\rVert \\ne 0"
},
{
"math_id": 23,
"text": "z^{-1} = \\frac{z^*}{ {\\lVert z \\rVert}^2} ~."
},
{
"math_id": 24,
"text": "e=\\tfrac{1}{2}(1-j)"
},
{
"math_id": 25,
"text": "e^* = \\tfrac{1}{2}(1+j)."
},
{
"math_id": 26,
"text": "ee=e"
},
{
"math_id": 27,
"text": "e^*e^*=e^*."
},
{
"math_id": 28,
"text": "\\lVert e \\rVert = \\lVert e^* \\rVert = e^* e = 0 ~."
},
{
"math_id": 29,
"text": " z = x + jy = (x - y)e + (x + y)e^* ~."
},
{
"math_id": 30,
"text": "z=ae+be^*"
},
{
"math_id": 31,
"text": "\\left( a_1, b_1 \\right) \\left( a_2, b_2 \\right) = \\left( a_1 a_2, b_1 b_2 \\right) ~."
},
{
"math_id": 32,
"text": "(a, b)^* = (b, a)"
},
{
"math_id": 33,
"text": " \\lVert (a, b) \\rVert^2 = ab."
},
{
"math_id": 34,
"text": "z = x + jy"
},
{
"math_id": 35,
"text": "\n(u, v) = (x, y) \\begin{pmatrix}1 & 1 \\\\1 & -1\\end{pmatrix} = (x, y) S ~.\n"
},
{
"math_id": 36,
"text": "uv = (x + y)(x - y) = x^2 - y^2 ~."
},
{
"math_id": 37,
"text": "\n(\\cosh a, \\sinh a)\n\\begin{pmatrix} 1 & 1 \\\\ 1 & -1 \\end{pmatrix}\n= \\left(e^a, e^{-a}\\right)\n"
},
{
"math_id": 38,
"text": "e^{bj} \\!"
},
{
"math_id": 39,
"text": "\n\\sigma: (u, v) \\mapsto \\left(ru, \\frac{v}{r}\\right),\\quad r = e^b ~.\n"
},
{
"math_id": 40,
"text": "\\{(a,b) \\in \\R \\oplus \\R : ab=1\\}."
},
{
"math_id": 41,
"text": "\\{\\cosh a+j\\sinh a : a \\in \\R\\}"
},
{
"math_id": 42,
"text": "\\left\\{ z : \\lVert z \\rVert^2 = a^2 \\right\\}"
},
{
"math_id": 43,
"text": "\\left\\{ z : \\lVert z \\rVert^2 = -a^2 \\right\\}"
},
{
"math_id": 44,
"text": "\\left\\{ z : \\lVert z \\rVert = 0 \\right\\}."
},
{
"math_id": 45,
"text": "\\exp(j\\theta) = \\cosh(\\theta) + j\\sinh(\\theta)."
},
{
"math_id": 46,
"text": "z \\mapsto \\pm z"
},
{
"math_id": 47,
"text": "z \\mapsto \\pm z^*."
},
{
"math_id": 48,
"text": "\\exp\\colon (\\R, +) \\to \\mathrm{SO}^{+}(1, 1)"
},
{
"math_id": 49,
"text": "e^{j(\\theta + \\phi)} = e^{j\\theta}e^{j\\phi}."
},
{
"math_id": 50,
"text": "x^2-1,"
},
{
"math_id": 51,
"text": "\\R[x]/(x^2-1 )."
},
{
"math_id": 52,
"text": "\\lVert zw \\rVert = \\lVert z \\rVert \\lVert w \\rVert ~"
},
{
"math_id": 53,
"text": "z \\mapsto \\begin{pmatrix}x & y \\\\ y & x\\end{pmatrix}."
},
{
"math_id": 54,
"text": "m = \\begin{pmatrix}a & c \\\\ b & -a \\end{pmatrix}"
},
{
"math_id": 55,
"text": "a^2 + bc = 1 ."
},
{
"math_id": 56,
"text": "x\\ I + y\\ m ."
},
{
"math_id": 57,
"text": "z = \\rho e^{aj} \\!"
},
{
"math_id": 58,
"text": "e^{aj} \\ e^{bj} = e^{(a + b)j}"
},
{
"math_id": 59,
"text": "\\{ z = \\sigma j e^{aj} : \\sigma \\isin \\R \\}"
},
{
"math_id": 60,
"text": "z^*w+zw^* = 0."
}
] | https://en.wikipedia.org/wiki?curid=893559 |
893640 | Schreier–Sims algorithm | Algorithm for solving various problems in computational group theory
The Schreier–Sims algorithm is an algorithm in computational group theory, named after the mathematicians Otto Schreier and Charles Sims. This algorithm can find the order of a finite permutation group, determine whether a given permutation is a member of the group, and other tasks in polynomial time. It was introduced by Sims in 1970, based on Schreier's subgroup lemma. The running time was subsequently improved by Donald Knuth in 1991. Later, an even faster randomized version of the algorithm was developed.
Background and timing.
The algorithm is an efficient method of computing a base and strong generating set (BSGS) of a permutation group. In particular, an SGS determines the order of a group and makes it easy to test membership in the group. Since the SGS is critical for many algorithms in computational group theory, computer algebra systems typically rely on the Schreier–Sims algorithm for efficient calculations in groups.
The running time of Schreier–Sims varies with the implementation. Let formula_0 be given by formula_1 generators. For the deterministic version of the algorithm, possible running times are:
formula_2 requiring memory formula_3, and
formula_4 requiring memory formula_5.
The use of Schreier vectors can have a significant influence on the performance of implementations of the Schreier–Sims algorithm.
The Monte Carlo variations of the Schreier–Sims algorithm have the estimated complexity:
formula_6 requiring memory formula_7.
Modern computer algebra systems, such as GAP and Magma, typically use an optimized Monte Carlo algorithm.
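For experimentation outside such systems, SymPy's computational group theory module also implements the algorithm. The sketch below follows SymPy's documented interface (the attribute and method names are SymPy's, not part of the algorithm itself, and the printed values are what a typical run produces):
from sympy.combinatorics import Permutation, PermutationGroup

# The symmetric group S_4, given by a 4-cycle and a transposition.
G = PermutationGroup([Permutation([1, 2, 3, 0]), Permutation([1, 0, 2, 3])])

G.schreier_sims()                 # build a base and strong generating set
print(G.order())                  # 24
print(G.base)                     # the base points chosen by the implementation
print(G.strong_gens)              # a strong generating set relative to that base
print(G.contains(Permutation([3, 2, 1, 0])))   # membership test by sifting -> True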
Outline of basic algorithm.
Following is C++-style pseudo-code for the basic idea of the Schreier-Sims algorithm. It is meant to leave out all finer details, such as memory management or any kind of low-level optimization, so as not to obfuscate the most important ideas of the algorithm. Its goal is not to compile.
struct Group
{
    uint stabPoint; // An index into the base for the point stabilized by this group's subgroup.
    OrbitTree orbitTree; // A tree to keep track of the orbit in our group of the point stabilized by our subgroup.
    TransversalSet transversalSet; // A set of coset representatives of this group's subgroup.
    GeneratorSet generatorSet; // A set of permutations generating this group.
    Group* subGroup; // A pointer to this group's subgroup, or null to mean the trivial group.

    Group(uint stabPoint)
    {
        this->stabPoint = stabPoint;
        subGroup = nullptr;
    }
};

// The given set of generators need not be a strong generating set. It just needs to generate the group at the root of the chain.
Group* MakeStabChain(const GeneratorSet& generatorSet, uint* base)
{
    Group* group = new Group(0);
    for (generator in generatorSet)
        group->Extend(generator, base);
    return group;
}

// Extend the stabilizer chain rooted at this group with the given generator.
void Group::Extend(const Permutation& generator, uint* base)
{
    // This is the major optimization of Schreier-Sims. Weed out redundant Schreier generators.
    if (IsMember(generator))
        return;

    // Our group just got bigger, but the stabilizer chain rooted at our subgroup is still the same.
    generatorSet.Add(generator);

    // Explore all new orbits we can reach with the addition of the new generator.
    // Note that if the tree was empty to begin with, the identity must be returned
    // in the set to satisfy a condition of Schreier's lemma.
    newTerritorySet = orbitTree.Grow(generator, base);

    // By the orbit-stabilizer theorem, the permutations in the returned set are
    // coset representatives of the cosets of our subgroup.
    for (permutation in newTerritorySet)
        transversalSet.Add(permutation);

    // We now apply Schreier's lemma to find new generators for our subgroup.
    // Some iterations of this loop are redundant, but we ignore that for simplicity.
    for (cosetRepresentative in transversalSet)
    {
        for (generator in generatorSet)
        {
            schreierGenerator = CalcSchreierGenerator(cosetRepresentative, generator);

            if (schreierGenerator.IsIdentity())
                continue;

            if (!subGroup)
                subGroup = new Group(stabPoint + 1);

            subGroup->Extend(schreierGenerator, base);
        }
    }
}
Notable details left out here include the growing of the orbit tree and the calculation of each new Schreier generator. In place of the orbit tree, a Schreier vector can be used, but the idea is essentially the same. The tree is rooted at the identity element, which fixes the point stabilized by the subgroup. Each node of the tree can represent a permutation that, when combined with all permutations in the path from the root to it, takes that point to some new point not visited by any other node of the tree. By the orbit-stabilizer theorem, these form a transversal of the subgroup of our group that stabilizes the point whose entire orbit is maintained by the tree. Calculating a Schreier generator is a simple application of the Schreier's subgroup lemma.
Another detail left out is the membership test. This test is based upon the sifting process. A permutation is sifted down the chain at each step by finding the containing coset, then using that coset's representative to find a permutation in the subgroup, and the process is repeated in the subgroup with that found permutation. If the end of the chain is reached (i.e., we reach the trivial subgroup), then the sifted permutation was a member of the group at the top of the chain. | [
{
"math_id": 0,
"text": " G \\leq S_n "
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "O(n^2 \\log^3 |G| + tn \\log |G|) "
},
{
"math_id": 3,
"text": "O(n^2 \\log |G| + tn)"
},
{
"math_id": 4,
"text": "O(n^3 \\log^3 |G| + tn^2 \\log |G|) "
},
{
"math_id": 5,
"text": "O(n \\log^2 |G| + tn) "
},
{
"math_id": 6,
"text": "O(n \\log n \\log^4 |G| + tn \\log |G|)"
},
{
"math_id": 7,
"text": "O(n \\log |G| + tn)"
}
] | https://en.wikipedia.org/wiki?curid=893640 |
893698 | Stone's theorem on one-parameter unitary groups | Theorem relating unitary operators to one-parameter Lie groups
In mathematics, Stone's theorem on one-parameter unitary groups is a basic theorem of functional analysis that establishes a one-to-one correspondence between self-adjoint operators on a Hilbert space formula_0 and one-parameter families
formula_1
of unitary operators that are strongly continuous, i.e.,
formula_2
and are homomorphisms, i.e.,
formula_3
Such one-parameter families are ordinarily referred to as strongly continuous one-parameter unitary groups.
The theorem was proved by Marshall Stone (1930, 1932), and John von Neumann (1932) showed that the requirement that formula_4 be strongly continuous can be relaxed to say that it is merely weakly measurable, at least when the Hilbert space is separable.
This is an impressive result, as it allows one to define the derivative of the mapping formula_5 which is only supposed to be continuous. It is also related to the theory of Lie groups and Lie algebras.
Formal statement.
The statement of the theorem is as follows.
Theorem. Let formula_4 be a strongly continuous one-parameter unitary group. Then there exists a unique (possibly unbounded) operator formula_6, that is self-adjoint on formula_7 and such that
formula_8
The domain of formula_9 is defined by
formula_10
Conversely, let formula_6 be a (possibly unbounded) self-adjoint operator on formula_11 Then the one-parameter family formula_1 of unitary operators defined by
formula_12
is a strongly continuous one-parameter group.
In both parts of the theorem, the expression formula_13 is defined by means of the functional calculus, which uses the spectral theorem for unbounded self-adjoint operators.
The operator formula_9 is called the infinitesimal generator of formula_14 Furthermore, formula_9 will be a bounded operator if and only if the operator-valued mapping formula_15 is norm-continuous.
The infinitesimal generator formula_9 of a strongly continuous unitary group formula_1 may be computed as
formula_16
with the domain of formula_9 consisting of those vectors formula_17 for which the limit exists in the norm topology. That is to say, formula_9 is equal to formula_18 times the derivative of formula_19 with respect to formula_20 at formula_21. Part of the statement of the theorem is that this derivative exists—i.e., that formula_9 is a densely defined self-adjoint operator. The result is not obvious even in the finite-dimensional case, since formula_19 is only assumed (ahead of time) to be continuous, and not differentiable.
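Even in the finite-dimensional case the correspondence can be made concrete. The sketch below (Python with NumPy and SciPy; an illustration, not part of the theorem's proof) takes a random Hermitian matrix formula_9, forms formula_13, and checks the group law, unitarity, and the recovery of the generator from the difference quotient above:
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2                       # a self-adjoint (Hermitian) matrix

U = lambda t: expm(1j * t * A)                 # U_t = exp(itA)

print(np.allclose(U(0.3) @ U(0.7), U(1.0)))                  # group law: True
print(np.allclose(U(0.5).conj().T @ U(0.5), np.eye(4)))      # unitarity: True

eps = 1e-6                                     # the difference quotient recovers A
print(np.max(np.abs(-1j * (U(eps) - np.eye(4)) / eps - A)))  # small, of order eps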
Example.
The family of translation operators
formula_22
is a one-parameter group of unitary operators; the infinitesimal generator of this family is an extension of the differential operator
formula_23
defined on the space of continuously differentiable complex-valued functions with compact support on formula_24 Thus
formula_25
In other words, motion on the line is generated by the momentum operator.
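This can be checked numerically on a discretized line: in Fourier space the generator −i d/dx acts as multiplication by the wavenumber "k", so formula_25 becomes the multiplier exp("ikt"). The NumPy sketch below (the grid and the Gaussian test function are arbitrary illustrative choices) recovers formula_22:
import numpy as np

N, L = 1024, 40.0
x = (np.arange(N) - N // 2) * (L / N)          # grid on [-20, 20)
psi = np.exp(-x**2)                            # a smooth, rapidly decaying test function

t = 3.0
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # discrete wavenumbers

# exp(itA) with A = -i d/dx is the multiplier exp(i*k*t) in Fourier space.
shifted = np.fft.ifft(np.exp(1j * k * t) * np.fft.fft(psi)).real

print(np.max(np.abs(shifted - np.exp(-(x + t)**2))))   # tiny: the result is psi(x + t)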
Applications.
Stone's theorem has numerous applications in quantum mechanics. For instance, given an isolated quantum mechanical system, with Hilbert space of states H, time evolution is a strongly continuous one-parameter unitary group on formula_0. The infinitesimal generator of this group is the system Hamiltonian.
Using Fourier transform.
Stone's Theorem can be recast using the language of the Fourier transform. The real line formula_26 is a locally compact abelian group. Non-degenerate *-representations of the group C*-algebra formula_27 are in one-to-one correspondence with strongly continuous unitary representations of formula_28 i.e., strongly continuous one-parameter unitary groups. On the other hand, the Fourier transform is a *-isomorphism from formula_27 to formula_29 the formula_30-algebra of continuous complex-valued functions on the real line that vanish at infinity. Hence, there is a one-to-one correspondence between strongly continuous one-parameter unitary groups and *-representations of formula_31 As every *-representation of formula_32 corresponds uniquely to a self-adjoint operator, Stone's Theorem holds.
Therefore, the procedure for obtaining the infinitesimal generator of a strongly continuous one-parameter unitary group is as follows: first, let formula_33 be the non-degenerate *-representation of formula_27 on formula_0 defined by
formula_34
then use the Fourier transform to pass from formula_33 to a *-representation formula_35 of formula_36 and finally take the self-adjoint operator associated to formula_35; this operator is the infinitesimal generator of formula_37
The precise definition of formula_27 is as follows. Consider the *-algebra formula_38 the continuous complex-valued functions on formula_26 with compact support, where the multiplication is given by convolution. The completion of this *-algebra with respect to the formula_39-norm is a Banach *-algebra, denoted by formula_40 Then formula_27 is defined to be the enveloping formula_30-algebra of formula_41, i.e., its completion with respect to the largest possible formula_30-norm. It is a non-trivial fact that, via the Fourier transform, formula_27 is isomorphic to formula_31 A result in this direction is the Riemann-Lebesgue Lemma, which says that the Fourier transform maps formula_42 to formula_31
Generalizations.
The Stone–von Neumann theorem generalizes Stone's theorem to a "pair" of self-adjoint operators, formula_43, satisfying the canonical commutation relation, and shows that these are all unitarily equivalent to the position operator and momentum operator on formula_44
The Hille–Yosida theorem generalizes Stone's theorem to strongly continuous one-parameter semigroups of contractions on Banach spaces.
{
"math_id": 0,
"text": "\\mathcal{H}"
},
{
"math_id": 1,
"text": "(U_{t})_{t \\in \\R}"
},
{
"math_id": 2,
"text": "\\forall t_0 \\in \\R, \\psi \\in \\mathcal{H}: \\qquad \\lim_{t \\to t_0} U_t(\\psi) = U_{t_0}(\\psi),"
},
{
"math_id": 3,
"text": "\\forall s,t \\in \\R : \\qquad U_{t + s} = U_t U_s."
},
{
"math_id": 4,
"text": "(U_t)_{t \\in \\R}"
},
{
"math_id": 5,
"text": "t \\mapsto U_t,"
},
{
"math_id": 6,
"text": "A: \\mathcal{D}_A \\to \\mathcal{H}"
},
{
"math_id": 7,
"text": "\\mathcal{D}_A"
},
{
"math_id": 8,
"text": "\\forall t \\in \\R : \\qquad U_t = e^{itA}."
},
{
"math_id": 9,
"text": "A"
},
{
"math_id": 10,
"text": "\\mathcal{D}_A = \\left \\{ \\psi \\in \\mathcal{H} \\left | \\lim_{\\varepsilon \\to 0} \\frac{-i}{\\varepsilon} \\left(U_{\\varepsilon} (\\psi) - \\psi \\right) \\text{ exists} \\right. \\right \\}."
},
{
"math_id": 11,
"text": "\\mathcal{D}_A \\subseteq \\mathcal{H}."
},
{
"math_id": 12,
"text": "\\forall t \\in \\R : \\qquad U_{t} := e^{itA}"
},
{
"math_id": 13,
"text": "e^{itA}"
},
{
"math_id": 14,
"text": "(U_{t})_{t \\in \\R}."
},
{
"math_id": 15,
"text": "t \\mapsto U_{t}"
},
{
"math_id": 16,
"text": "A\\psi = -i\\lim_{\\varepsilon\\to 0}\\frac{U_\\varepsilon\\psi-\\psi}{\\varepsilon},"
},
{
"math_id": 17,
"text": "\\psi"
},
{
"math_id": 18,
"text": "-i"
},
{
"math_id": 19,
"text": "U_t"
},
{
"math_id": 20,
"text": "t"
},
{
"math_id": 21,
"text": "t=0"
},
{
"math_id": 22,
"text": "\\left[ T_t(\\psi) \\right](x) = \\psi(x + t)"
},
{
"math_id": 23,
"text": "-i \\frac{d}{dx}"
},
{
"math_id": 24,
"text": "\\R."
},
{
"math_id": 25,
"text": "T_{t} = e^{t \\frac{d}{dx}}."
},
{
"math_id": 26,
"text": "\\R"
},
{
"math_id": 27,
"text": "C^*(\\R)"
},
{
"math_id": 28,
"text": "\\R,"
},
{
"math_id": 29,
"text": "C_0(\\R),"
},
{
"math_id": 30,
"text": "C^*"
},
{
"math_id": 31,
"text": "C_0(\\R)."
},
{
"math_id": 32,
"text": "C_0(\\R)"
},
{
"math_id": 33,
"text": "\\rho"
},
{
"math_id": 34,
"text": "\\forall f \\in C_c(\\R): \\quad \\rho(f) := \\int_{\\R} f(t) ~ U_{t} dt,"
},
{
"math_id": 35,
"text": "\\tau"
},
{
"math_id": 36,
"text": "C_0(\\R )"
},
{
"math_id": 37,
"text": "(U_{t})_{t \\in \\R }."
},
{
"math_id": 38,
"text": "C_c(\\R),"
},
{
"math_id": 39,
"text": "L^1"
},
{
"math_id": 40,
"text": "(L^1(\\R),\\star)."
},
{
"math_id": 41,
"text": "(L^1(\\R),\\star)"
},
{
"math_id": 42,
"text": "L^1(\\R)"
},
{
"math_id": 43,
"text": "(P,Q)"
},
{
"math_id": 44,
"text": "L^2(\\R)."
}
] | https://en.wikipedia.org/wiki?curid=893698 |
89371 | Combinational logic | Type of digital logic implemented by Boolean circuits
In automata theory, combinational logic (also referred to as time-independent logic) is a type of digital logic that is implemented by Boolean circuits, where the output is a pure function of the present input only. This is in contrast to sequential logic, in which the output depends not only on the present input but also on the history of the input. In other words, sequential logic has "memory" while combinational logic does not.
Combinational logic is used in computer circuits to perform Boolean algebra on input signals and on stored data. Practical computer circuits normally contain a mixture of combinational and sequential logic. For example, the part of an arithmetic logic unit, or ALU, that does mathematical calculations is constructed using combinational logic. Other circuits used in computers, such as half adders, full adders, half subtractors, full subtractors, multiplexers, demultiplexers, encoders and decoders are also made by using combinational logic.
Practical design of combinational logic systems may require consideration of the finite time required for practical logical elements to react to changes in their inputs. Where an output is the result of the combination of several different paths with differing numbers of switching elements, the output may momentarily change state before settling at the final state, as the changes propagate along different paths.
Representation.
Combinational logic is used to build circuits that produce specified outputs from certain inputs. The construction of combinational logic is generally done using one of two methods: a sum of products, or a product of sums. Consider the following truth table, in which the result is true for exactly two combinations of the inputs A, B and C:
A B C Result
F F F F
F F T F
F T F F
F T T F
T F F T
T F T F
T T F F
T T T T
Using sum of products, all logical statements which yield true results are summed, giving the result:
formula_0
Using Boolean algebra, the result simplifies to the following equivalent of the truth table:
formula_1
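Since there are only eight combinations of inputs, the equivalence of the sum-of-products expression and its simplified form can be confirmed exhaustively. A small Python check (illustrative; Python's and, or and not stand in for the Boolean connectives):
from itertools import product

for A, B, C in product((False, True), repeat=3):
    sum_of_products = (A and not B and not C) or (A and B and C)
    simplified = A and ((not B and not C) or (B and C))
    assert sum_of_products == simplified
print("the two expressions agree on all 8 rows of the truth table")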
Logic formula minimization.
Minimization (simplification) of combinational logic formulas is done using the following rules based on the laws of Boolean algebra:
formula_2
formula_3
formula_4
formula_5
formula_6
With the use of minimization (sometimes called logic optimization), a simplified logical function or circuit may be arrived upon, and the logic combinational circuit becomes smaller, and easier to analyse, use, or build.
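Each of these rules can likewise be verified by checking every assignment of truth values. For example, the last pair of identities above (the consensus theorem) can be confirmed with a short illustrative check:
from itertools import product

for A, B, C in product((False, True), repeat=3):
    assert ((A and B) or (not A and C) or (B and C)) == ((A and B) or (not A and C))
    assert ((A or B) and (not A or C) and (B or C)) == ((A or B) and (not A or C))
print("the consensus identities hold for every assignment")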
{
"math_id": 0,
"text": "(A \\wedge \\neg B \\wedge \\neg C) \\vee (A \\wedge B \\wedge C) \\,"
},
{
"math_id": 1,
"text": "A \\wedge ((\\neg B \\wedge \\neg C) \\vee (B \\wedge C)) \\,"
},
{
"math_id": 2,
"text": "\\begin{align}\n(A \\vee B) \\wedge (A \\vee C) &= A \\vee (B \\wedge C) \\\\\n(A \\wedge B) \\vee (A \\wedge C) &= A \\wedge (B \\vee C)\n\\end{align}"
},
{
"math_id": 3,
"text": "\\begin{align}\nA \\vee (A \\wedge B) &= A \\\\\nA \\wedge (A \\vee B) &= A\n\n\\end{align}"
},
{
"math_id": 4,
"text": "\\begin{align} \n\nA \\vee (\\lnot A \\wedge B) &= A \\vee B \\\\\nA \\wedge(\\lnot A \\vee B) &= A \\wedge B\n\n\\end{align}"
},
{
"math_id": 5,
"text": "\\begin{align} \n\n(A \\vee B)\\wedge(\\lnot A \\vee B)&=B \\\\\n(A \\wedge B) \\vee (\\lnot A \\wedge B)&=B\n\n\\end{align}"
},
{
"math_id": 6,
"text": "\\begin{align}\n\n(A \\wedge B) \\vee (\\lnot A \\wedge C) \\vee (B \\wedge C) &= (A \\wedge B) \\vee (\\lnot A \\wedge C) \\\\\n(A \\vee B) \\wedge (\\lnot A \\vee C) \\wedge (B \\vee C) &= (A \\vee B) \\wedge (\\lnot A \\vee C)\n\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=89371 |
893727 | Strong generating set | In abstract algebra, especially in the area of group theory, a strong generating set of a permutation group is a generating set that clearly exhibits the permutation structure as described by a stabilizer chain. A stabilizer chain is a sequence of subgroups, each containing the next and each stabilizing one more point.
Let formula_0 be a group of permutations of the set formula_1 Let
formula_2
be a sequence of distinct integers, formula_3 such that the pointwise stabilizer of formula_4 is trivial (i.e., let formula_4 be a base for formula_5). Define
formula_6
and define formula_7 to be the pointwise stabilizer of formula_8. A strong generating set (SGS) for G relative to the base formula_4 is a set
formula_9
such that
formula_10
for each formula_11 such that formula_12.
The base and the SGS are said to be non-redundant if
formula_13
for formula_14.
A base and strong generating set (BSGS) for a group can be computed using the Schreier–Sims algorithm. | [
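Concretely, a base and strong generating set can be produced by any implementation of that algorithm. For instance, the following sketch uses SymPy's interface (the attribute names are SymPy's own, and the particular base depends on the implementation's choices):
from sympy.combinatorics import Permutation, PermutationGroup

# The symmetric group S_4, generated by a 4-cycle and a transposition.
G = PermutationGroup([Permutation([1, 2, 3, 0]), Permutation([1, 0, 2, 3])])

G.schreier_sims()
print(G.base)           # a base, e.g. [0, 1, 2]
print(G.strong_gens)    # a strong generating set relative to that base
print(G.basic_orbits)   # the orbits of the successive stabilizers in the chain
print([H.order() for H in G.basic_stabilizers])   # orders along the stabilizer chain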
{
"math_id": 0,
"text": "G \\leq S_n"
},
{
"math_id": 1,
"text": "\\{ 1, 2, \\ldots, n \\}."
},
{
"math_id": 2,
"text": " B = (\\beta_1, \\beta_2, \\ldots, \\beta_r) "
},
{
"math_id": 3,
"text": "\\beta_i \\in \\{ 1, 2, \\ldots, n \\} ,"
},
{
"math_id": 4,
"text": " B "
},
{
"math_id": 5,
"text": " G "
},
{
"math_id": 6,
"text": " B_i = (\\beta_1, \\beta_2, \\ldots, \\beta_i),\\, "
},
{
"math_id": 7,
"text": " G^{(i)} "
},
{
"math_id": 8,
"text": " B_i "
},
{
"math_id": 9,
"text": " S \\subseteq G "
},
{
"math_id": 10,
"text": " \\langle S \\cap G^{(i)} \\rangle = G^{(i)} "
},
{
"math_id": 11,
"text": " i "
},
{
"math_id": 12,
"text": " 1 \\leq i \\leq r "
},
{
"math_id": 13,
"text": " G^{(i)} \\neq G^{(j)} "
},
{
"math_id": 14,
"text": " i \\neq j "
}
] | https://en.wikipedia.org/wiki?curid=893727 |
8937649 | Das Kapital, Volume I | 1867 book by Karl Marx
Capital. A Critique of Political Economy. Volume I: The Process of Production of Capital is the first of three treatises that make up "Das Kapital", a critique of political economy by the German philosopher and economist Karl Marx. First published on 14 September 1867, Volume I was the product of a decade of research and redrafting and is the only part of "Das Kapital" to be completed during Marx's life. It focuses on the aspect of capitalism that Marx refers to as the capitalist mode of production, or how capitalism organises society to produce goods and services.
The first two parts of the work deal with the fundamentals of classical economics, including the nature of value, money, and commodities. In these sections, Marx defends and expands upon the labour theory of value as advanced by Adam Smith and David Ricardo. Starting with the next three parts, the focus of Volume I shifts to surplus value (the value of a finished commodity minus the cost of production), which he divides into absolute and relative forms. Marx argues that the relations of production specific to capitalism allow capital owners to accumulate more relative surplus value by material improvements to the means of production, thus driving the Industrial Revolution. However, for Marx, not only does the extraction of surplus value motivate economic growth, but it is also the source of class conflict between workers and the owners of capital. Parts Four, Five, and Six discuss how workers struggle with capital owners over control of the surplus value they produce, punctuated with examples of the horrors of wage slavery.
Moreover, Marx argues that the drive to accumulate more capital creates contradictions within capitalism, such as technological unemployment, various inefficiencies, and crises of overproduction. The penultimate part explains how capitalist systems sustain (or "reproduce") themselves once established. Throughout the work, Marx places capitalism in a historically specific context, considering it not as an abstract ideal but as the result of concrete historical developments. This is the special focus of the final part, which argues that capitalism initially develops not through the future capitalist class being more frugal and hard-working than the future working class (a process called primitive/previous/original accumulation by the pro-capitalist classical political economists, like Adam Smith), but through the violent expropriation of property by those that eventually (through that expropriation) become the capitalist class — hence the sarcastic title of the final part, "So-called Primitive Accumulation".
In Volume I of "Kapital", Marx uses various logical, historical, literary, and other strategies to illustrate his points. His primary analytical tool is historical materialism, which applies the Hegelian method of immanent critique to the material basis of societies. As such, Volume I includes copious amounts of historical data and concrete examples from the industrial societies of the mid-nineteenth century, especially the United Kingdom.
Within Marx's lifetime, he completed three editions of Volume I: the first two in German, the last in French. A third German edition, which was still in progress at the time of his death, was finished and published by Friedrich Engels in 1883. It is disputed among scholars whether the French or third German edition should be considered authoritative, as Marx presented his theories slightly differently in each one.
Book contents.
Part One: Commodities and Money.
Chapters 1, 2, and 3 are a theoretical discussion of the commodity, value, exchange, and the genesis of money. As Marx writes, "[b]eginnings are always difficult in all sciences. [...] [T]he section that contains the analysis of commodities, will, therefore present the greatest difficulty". The modern reader is often perplexed by Marx's saying, "one coat is equal to twenty yards of linen". Professor John Kenneth Galbraith reminds us that "the purchase of a coat by an average citizen was an action comparable in modern times to the purchase of an automobile or even a house".
Chapter 1: The Commodity.
Section 1. The Two Factors of the Commodity.
Marx says a commodity is a use-value and also an exchange-value. He explains that as a use-value, the commodity meets a human want or need of any kind, i.e., it is a useful thing. The use-value of the commodity is determined by how useful the commodity is. However, the actual use-value is immeasurable. He explains that use-value can only be determined "in use or consumption". After determining the commodity as a use-value, Marx explains that a commodity is also an exchange-value. He explains this as the quantity of other commodities that it will exchange for. Marx gives the example of corn and iron. No matter their relationship, there will always be an equation where a certain amount of corn will exchange for a certain amount of iron. He sets up this example to say that all commodities are essentially parallel in that they can always be exchanged for certain quantities of other commodities. He also explains that one cannot determine the commodity's exchange-value simply by looking at it or examining its natural qualities. The exchange-value is not material, but it is instead a measure made by humans. In order to determine the exchange-value, one must see the commodity being exchanged with other commodities. Marx explains that while these two aspects of commodities are separate, at the same time, they are also connected in that one cannot be discussed without the other. He explains that while the use-value of something can only change in quality, the exchange-value can only change in quantity.
Marx then explains that a commodity's exchange-value merely expresses its value. Value is what connects all commodities so that they can all be exchanged with each other. The value of a commodity is determined by its socially necessary labour time, defined as "the labour time required to produce any use-value under the conditions of production, normal for a given society and with the average degree of skill and intensity of labour prevalent in that society". Therefore, Marx explains, the value of a commodity does not stay constant as it advances or it varies in labour productivity which may occur for many reasons. However, value does not mean anything unless it conjoins back to use-value. If a commodity is produced and no one wants it or it has no use, then "the labour does not count as labour", and therefore it has no value. He also says that one can produce use-value without being a commodity. If one produces something solely for their own benefit or need, one has produced use-value, but no commodity. Value can only be derived when the commodity has use-value for others. Marx calls this social use-value. He writes all of this to explain that all aspects of the commodity (use-value, exchange-value and value) are separate from each other but are also essentially connected.
Section 2. The Dual Character of the Labour Embodied in Commodities.
In this section, Marx discusses the relationship between labour and value. He states that if there is a change in the quantity of labour expended to produce an article, the article's value will change. This is a direct correlation. Marx gives the example of linen versus thread to explain the worth of each commodity in a capitalist society. Linen is hypothetically twice as valuable as thread because more socially necessary labour time was used to create it. The use-value of every commodity is produced by useful labour. Use-value measures the actual usefulness of a commodity, whereas value is a measure of its worth in exchange. Objectively speaking, linen and thread have some value. Different forms of labour create different kinds of use-values. The values of the different use-values created by different types of labour can be compared because both are expenditures of human labour. One coat and 20 yards of linen take the same amount of socially necessary labour time to make, so they have the same value. As the productivity of labour rises, however, the capacity of a given quantity of labour to create value in each individual product lessens.
Section 3. The Value-Form or Exchange-Value.
(a) The Simple, Isolated, or Accidental Form of Value.
In this section, Marx explains that commodities come in double form, namely natural form and value-form. We do not know commodities' values until we know how much human labour was put into them. Commodities are traded for one another after their values are decided socially; then, there is value-relation which lets us trade between different kinds of commodities. Marx explains value without using money. He uses 20 yards of linen and a coat to show the value of each other (20 yards of linen = 1 coat, or 20 yards of linen are worth 1 coat). The statement "20 yards of linen are worth 1 coat" labels two forms of value. The first form, the relative form of value, is the commodity that comes first in the statement (the 20 yards of linen in the example). The second form, the equivalent form of value, is the commodity that comes second in the statement (the 1 coat in the example). He adds that comparing 20 yards of linen to itself (20 yards of linen = 20 yards of linen, or 20 yards of linen are worth 20 yards of linen) is meaningless because there is no expression of value. Linen is an object of utility whose value cannot be determined until it is compared to another commodity. Determining the value of a commodity depends on its position in the expression of comparative exchange value.
(b) The Total or Expanded Form of Value.
Marx begins this section with an equation for the expanded form of value in which "z commodity A = u commodity B or = v commodity C or = w commodity D or = x commodity E or = etc." and where the lowercase letters (z, u, v, w, and x) represent quantities of a commodity and upper case letters (A, B, C, D, and E) represent specific commodities so that an example of this could be: "20 yards of linen = 1 coat or = 10 lb. tea or = 40 lb. coffee or = 1 quarter of corn or = 2 ounces of gold or = 1⁄2 ton of iron or = etc." Marx explains that with this example of the expanded form of value, the linen "is now expressed in terms of innumerable other members of the world of commodities. Every other commodity now becomes a mirror of linen's value". At this point, the particular use-value of linen becomes unimportant, but rather it is the magnitude of value (determined by socially necessary labour time) possessed in a quantity of linen which determines its exchange with other commodities. This chain of particular kinds (different commodities) of values is endless as it contains every commodity and changes constantly as new commodities come into being.
(c) The General Form of Value.
Marx begins this section with the table:
<templatestyles src="Block indent/styles.css"/>formula_0
Marx then divides this subsection of section 3 into three parts.
(d) The Money Form.
<templatestyles src="Block indent/styles.css"/>formula_1
Here, Marx illustrates the shift to the money form. The universal equivalent form, or universal exchangeability, has caused gold to replace linen in socially accepted exchange customs. Once it had reached a set value in the world of commodities, gold became the money commodity. The money form is distinct from forms A, B, and C.
Now that gold has a relative value against a commodity (such as linen), it can attain price form as Marx states:
<templatestyles src="Template:Blockquote/styles.css" />The 'price form' of linen is therefore: 20 yards of linen = 2 ounces of gold, or, if 2 ounces of gold when coined are £2, 20 yards of linen = £2.
This illustrates the application of price form as a universal equivalent. Marx concludes this section by pointing out that "the simple commodity-form is therefore the germ of the money-form". The simplified application of this idea is then illustrated as such:
<templatestyles src="Block indent/styles.css"/>x of commodity A = y of commodity B
Section 4. The Fetishism of the Commodity and Its Secret.
Marx's inquiry in this section focuses on the nature of the commodity, apart from its basic use-value. In other words, why does the commodity appear to have an exchange-value as if it were an intrinsic characteristic of the commodity instead of a measurement of the homogeneous human labour spent to create it? Marx explains that this sort of fetishism, which attributes to a thing a characteristic that is in fact a social product, originates in the fact that under a commodity-based society the social labour, the social relations between producers and their mutual interdependence, manifest themselves only in the market, in the process of exchange. The value of the commodity is therefore determined independently of the private producers, so it seems that the market determines the value on the basis of a characteristic of the commodity; it seems as if there are relations between commodities instead of relations between producers.
Marx also explains that due to the historical circumstances of capitalist society, the values of commodities are usually studied by political economists in their most advanced form, i.e. money. These economists see the value of the commodity as something metaphysically autonomous from the social labour that is the actual determinant of value. Marx calls this fetishism—the process whereby the society that originally generated an idea eventually and through the distance of time forgets that the idea is a social and, therefore, all-too-human product. This society will no longer look beneath the veneer of the idea (in this case, the value of commodities) as it currently exists. This society will take the idea as a natural and/or God-given inevitability that they are powerless to alter.
Marx compares this fetishism to the manufacturing of religious beliefs. He argues that people initially create a deity to fulfil whatever desire or need they have in present circumstances; then, these products of the human brain appear as autonomous figures endowed with a life of their own and enter into relations with each other and the human race. Similarly, commodities only enter into relation with each other through exchange which is a purely social phenomenon. Before that, they are simply useful items, but not commodities. Value itself cannot come from use-value because there is no way to compare the usefulness of an item; there are too many potential functions.
Once in exchange, commodities' values are determined by the amount of socially necessary labour-time put into them because labour can be generalised. For example, it takes longer to mine diamonds than to dig quartz; therefore, diamonds are worth more. Fetishism within capitalism occurs once labour has been socially divided and centrally coordinated and the worker no longer owns the means of production. They no longer have access to the knowledge of how much labour goes into a product because they no longer control its distribution. The only obvious determinant of value remaining to the mass of people is the value assigned in the past. Thus, the value of a commodity seems to arise from a mystical property inherent to it rather than from labour-time, the actual determinant of value.
Chapter 2: Exchange.
In this chapter, Marx explains the social and private characteristics of the exchange process. According to Marx, owners of commodities must recognise one another as owners of commodities which embody value. He explains exchange not merely as a swapping of items but as a contract between the two. This exchange also allows the commodity in question to realise its exchange value. Marx explains that the realisation of exchange value always precedes that of use value because one must obtain the item before its actual utility is realised.
Furthermore, Marx explains that the use value in question can only be realised by he who purchases the commodity. In contrast, he who is selling a commodity must find no utility in the item, save the utility of its exchange value. Marx concludes the chapter with an abstraction about the necessitated advent of money wherever exchange takes place, starting between nations and gradually becoming more and more domestic. This money-form, which arises from the necessity of liquidating exchange, becomes the universal equivalent form, set aside from all commodities as a mere measure of value, creating a money-commodities dualism.
Chapter 3: Money, or the Circulation of Commodities.
Section 1. The Measure of Values.
(a) Functions of Metallic Money.
In this chapter, Marx examines the functions of money commodities. According to Marx, the main function of money is to provide commodities with the medium for expressing their values, i.e. labour time. The function of money as a measure of value serves only in an imaginary or ideal capacity. That is, the money that performs the functions of a measure of value is only imaginary because society has given the money its value. For example, the value of one ton of iron is expressed by an imaginary quantity of the money commodity, which contains the same amount of labour as the iron.
(b) Multiple Forms of Metallic Money.
As a measure of value and a standard of price, money performs two functions. First, it measures value as the social incarnation of human labour. Second, as a quantity of metal with a fixed weight, it serves as the standard of price. As in any case where quantities of the same denomination are to be measured, the stability of the measurement is of the utmost importance. Hence, the less the unit of measurement is subject to variations, the better it fulfils its role. Metallic currency can only serve as a measure of value because it is a product of human labour.
Commodities with definite prices appear in this form: a commodity A = x gold; b commodity B = y gold; c commodity C = z gold, etc., where a, b, c represent definite quantities of the commodities A, B, C and x, y, z definite quantities of gold.
Despite the varieties of commodities, their values become magnitudes of the same denomination, namely gold-magnitudes. Since these commodities are all magnitudes of gold, they are comparable and interchangeable.
(c) Price.
Price is the money-name of the labour objectified in a commodity. Like the relative form of value in general, price expresses the value of a commodity by asserting that a given quantity of the equivalent is directly interchangeable. The price form implies the exchangeability of commodities for money and the necessity of exchange. Gold is an ideal measure of value only because it has already established itself as the money commodity in the exchange process.
Section 2. The Means of Circulation.
(a) The Metamorphosis of Commodities.
In this section, Marx further examines the paradoxical nature of the exchange of commodities. The contradictions within the exchange process provide the structure for social metabolism. The process of social metabolism "transfers commodities from hands in which they are non-use-values to hands in which they are use-values". Commodities can only exist as values for a seller and use-values for a buyer. For a commodity to be both a value and a use-value, it must be produced for exchange. The exchange process alienates the ordinary commodity when its antithesis (the money commodity) becomes involved. During exchange, the money commodity confronts the ordinary commodity disguising the true form of the ordinary commodity. Commodities as use-values and money as exchange-value are now on opposite poles and exist as separate entities. In practice, gold or money functions as exchange-value while commodities function as use-values in the exchange process. A commodity's existence is only validated through the form of money, and money is only validated through the form of a commodity. This dualistic phenomenon involving money and commodities directly relates to Marx's concept of use-value and value.
formula_2
Marx examines the two metamorphoses of the commodity through sale and purchase. In this process, "as far as concerns its material content, the movement is C-C, the exchange of one commodity for another, the metabolic interaction of social labour, in whose result the process itself becomes extinguished".
formula_3
In the sale process, the value of a commodity measured by socially necessary labour-time is then measured by the universal equivalent, i.e. gold.
formula_4
Through the purchase process, all commodities lose their form by the universal alienator, i.e. money. Marx states that since "every commodity disappears when it becomes money", it is "impossible to tell from the money itself how it got into the hands of its possessor, or what article has been changed into it".
<templatestyles src="Block indent/styles.css"/>formula_5
A purchase represents a sale, although they are two separate transformations. This process allows for the movement of commodities and the circulation of money.
(b) The Circulation of Money.
The circulation of money is first initiated by the transformation of a commodity into money. The commodity is taken from its natural state and transformed into its monetary state. When this happens, the commodity "falls out of circulation into consumption". The previous commodity, now in its monetary form, replaces a new and different commodity continuing the circulation of money. In this process, money is the means for the movement and circulation of commodities. Money assumes the measure of value of a commodity, i.e. the socially necessary labour-time. The repetition of this process constantly removes commodities from their starting places, taking them out of the sphere of circulation. Money circulates in the sphere and fluctuates with the sum of all the commodities that co-exist within the sphere. The price of commodities, and thus the quantity of money in circulation, varies by three factors: "the movement of prices, the quantity of commodities in circulation, and the velocity of circulation of money".
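The relationship Marx describes here can be sketched with invented figures; on the assumption that values are already expressed as prices, the quantity of money needed as a means of circulation rises with the sum of commodity prices and falls with the velocity of money:

```python
# Illustrative only: all figures are invented.
def money_required_for_circulation(sum_of_prices: float, velocity: float) -> float:
    """Money needed as a circulating medium: the sum of the prices of the
    commodities in circulation divided by the number of times each piece
    of money changes hands in the period (its velocity)."""
    return sum_of_prices / velocity

# Commodities worth 1,000 circulate and each coin changes hands five times.
print(money_required_for_circulation(1000.0, 5.0))  # 200.0
# Rising prices or a slower velocity call for more money in circulation.
print(money_required_for_circulation(1200.0, 4.0))  # 300.0
```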
(c) Coin and the Symbol of Value.
Money takes the shape of a coin because of how it behaves in the sphere of circulation. Gold became the universal equivalent by the measurement of its weight in relation to commodities. This process was a job that belonged to the state. The problem with gold was that it wore down as it circulated from hand to hand, so the state introduced new circulating media such as silver, copper, and inconvertible paper money issued by the state itself as a representation of gold. Marx views money as a "symbolic existence" that haunts the sphere of circulation and arbitrarily measures the product of labour.
Section 3. Money.
(a) Hoarding.
The exchange of money is a continuous flow of sales and purchases. Marx writes that "[i]n order to be able to buy without selling, [one] must have previously sold without buying". This simple illustration demonstrates the essence of hoarding. In order to buy without selling a commodity in one's possession, one must have hoarded some degree of money in the past. Money becomes greatly desired due to its potential purchasing power. If one has money, one can exchange it for commodities and vice versa. However, while satisfying this newly arisen fetish for gold, the hoard forces the hoarder to make personal sacrifices; Marx illustrates its amorality in "doing away [with] all distinctions" by citing "Timon of Athens" by William Shakespeare.
(b) Means of Payment.
In this section, Marx analyses the relationship between debtor and creditor and exemplifies the idea of the transfer of debt. In relation to this, Marx discusses how the money-form has become a means of incremental payment for a service or purchase. He states that the "function of money as means of payment begins to spread out beyond the sphere of circulation of commodities. It becomes the universal material of contracts". Due to fixed payments and the like, debtors are forced to hoard money in preparation for these dates as Marx states: "While hoarding, as a distinct mode of acquiring riches, vanishes with the progress of civil society, the formation of reserves of the means of payment grows with that progress".
(c) World Money.
Countries have reserves of gold and silver for two purposes: (1) home circulation and (2) external circulation in world markets. Marx says that it is essential for countries to hoard as money is needed "as the medium of the home circulation and home payments, and in part out of its function of money of the world". Accounting for this hoarding in the context of hoarded money's inability to contribute to the growth of a capitalist society, Marx states that banks are the relief to this problem:
<templatestyles src="Template:Blockquote/styles.css" />Countries in which the bourgeois form of production is developed to a certain extent, limit the hoards concentrated in the strong rooms of the banks to the minimum required for the proper performance of their peculiar functions. Whenever these hoards are strikingly above their average level, it is, with some exceptions, an indication of stagnation in the circulation of commodities, of an interruption in the even flow of their metamorphoses.
Part Two: The Transformation of Money into Capital.
In part two, Marx explains the three components necessary to create capital through the process of circulation. The first section of Part II, Chapter 4, explains the general formula for capital; Chapter 5 delves further by explaining the contradictions of the general formula; and the last section of Part II, Chapter 6, describes the sale and purchase of labour power within the general formula.
As described by Marx, money can only be transformed into capital through the circulation of commodities. Money originates not as capital but only as means of exchange. Money becomes capital only when it is put into circulation in order to obtain more money. The circulation of commodities has two forms that comprise the general formula: C-M-C and M-C-M. C-M-C represents the process of first selling a commodity for money (C-M) and then using that money to buy another commodity (M-C), or as Marx states, "selling in order to buy". M-C-M describes transacting money for a commodity (M-C) and then selling the commodity for more capital (C-M).
The largest distinction between the two forms appears through the result of each. During C-M-C, a commodity sold will be replaced by a commodity bought. In this form, money only acts as a means of exchange. The transaction ends there, with the exchange of use-values, and according to Marx, the money has "been spent once and for all". The C-M-C form facilitates the exchange of one use-value for another.
On the contrary, money is essentially exchanged for more money during M-C-M. The person who invested money into a commodity sells it for money. The money returns to the initial starting place, so the money is not spent as in the C-M-C form of exchange, but it is instead advanced. The only function of this process lies in its ability to valorise. By withdrawing more money from circulation than the amount put in, money can be reinvested in circulation, creating a repeated accumulation of monetary wealth, a never-ending process. Thus, M-C-M' becomes the objective of M-C-M. M' represents the money originally advanced (M) plus the surplus value gained (ΔM): M' = M + ΔM. Capital can only be created once the process of M-C-M has been completed and money returns to the starting point to be re-entered into circulation.
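The contrast between the two circuits can be put in a short sketch with invented figures, treating C-M-C as money merely spent and M-C-M' as money advanced and returned with an increment:

```python
# Illustrative only: the figures are invented.
def c_m_c(value_of_commodity_sold: float) -> float:
    """Selling in order to buy: the money obtained is spent on another
    commodity of equal value, so nothing is left over."""
    money = value_of_commodity_sold            # C -> M
    value_of_commodity_bought = money          # M -> C
    return value_of_commodity_bought - value_of_commodity_sold  # always 0

def m_c_m(money_advanced: float, resale_price: float) -> float:
    """Buying in order to sell dearer: the increment is M' - M."""
    return resale_price - money_advanced

print(c_m_c(100.0))          # 0.0  -- an exchange of use-values
print(m_c_m(100.0, 110.0))   # 10.0 -- the surplus value, delta-M
```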
Marx points out that "in its pure form, the exchange of commodities is an exchange of equivalents, and thus it is not a method of increasing value", so a contradiction reveals itself. If the participating individuals exchanged equal values, neither of the individuals would increase capital. The needs being satisfied would be the only gain. The creation of surplus-value then becomes rather peculiar for Marx because commodities, in accordance with socially assigned necessary values, should not create surplus-value if traded fairly. Marx investigates the matter and concludes that "surplus-value cannot arise from circulation, and therefore that, for it to be formed, something must take place in the background which is not visible in the circulation itself". According to Marx, labour determines the value of a commodity. Through the example of a piece of leather, Marx then describes how humans can increase the value of a commodity through labour. Turning the leather into boots increases the value of the leather because now more labour has been applied to the leather. Marx then explains the contradiction of the general formula. Capital cannot be created from circulation because equal exchange of commodities creates no surplus value, and unequal exchange of commodities changes the distribution of wealth, but it still does not produce surplus-value. Capital cannot be created without circulation either because labour creates value within the general formula. Thus, Marx writes that "[i]t must have its origin both in circulation and not in circulation". However, a "double result" remains, namely that the capitalist must buy commodities at their value, sell them at their value and yet conclude the process with more money than at the beginning. The profit seemingly originates both inside and outside the general formula.
The intricacies of the general formula relate to the role of labour-power.
In the last section of part two, Marx investigates labour-power as a commodity. Labour-power existing on the market depends on two conditions: the workers must offer it for temporary sale on the market, and the workers must not possess the means of subsistence. As long as the labour-power is sold temporarily, the worker is not considered a slave. Worker dependence for a means of subsistence ensures a large working force necessary for the production of capital. The value of labour bought on the market as a commodity represents the definite amount of socially necessary labour objectified in the worker, or according to Marx, "the labour-time necessary for the production [of the worker]", which means the food, education, shelter, health, etc. required to create and maintain a worker. The capitalists need workers to combine with their means of production to create a sellable commodity, and workers need capitalists to provide a wage that pays for a means of subsistence. Within the capitalist mode of production, it is custom to pay for labour-power only after it has been exercised over a period of time, fixed by a contract (i.e. the work week).
Part Three: The Production of Absolute Surplus-Value.
In part three, Marx explores the production of absolute surplus value. To understand this, one must first understand the labour process itself. According to Marx, absolute surplus value production arises directly from the labour process.
There are two sides to the labour process. On one side is the buyer of labour power or the capitalist. On the other side, there is the worker. For the capitalist, the worker possesses only one use-value: labour power. The capitalist buys from the worker his labour-power, or his ability to do work. In return, the worker receives a wage or a means of subsistence.
Marx says of the labour process: "In the labour process, therefore, man's activity, via the instruments of labour, effects an alteration in the object of labour. [...] The product of the process is a use-value, a piece of natural material adapted to human needs by means of change in its form. Labour has become bound up in its object: labour has been objectified, the object has been worked on". The labour the worker has put forth to produce the object has been transferred to the object, thus giving it value.
Under capitalism, the capitalist owns everything in the production process, such as the raw materials that the commodity is made of, the means of production and the labour power (worker) itself. At the end of the labour process, the capitalist owns the product of their labour, not the workers who produce the commodities. Since the capitalist owns everything in the production process, he can sell it for profit. The capitalist wants to produce "[a]n article destined to be sold, a commodity; and secondly he wants to produce a commodity greater in value than the sum of the values of the commodities used to produce it, namely the means of production and the labour-power he purchased with his good money on the open market. His aim is to produce not only a use-value, but a commodity; not only use-value, but value; and not just value, but also surplus value".
The goal of the capitalist is to produce surplus value. However, producing surplus value proves to be difficult. Profit cannot be made if all goods are purchased at full price. Surplus value cannot arise from buying the inputs of production at a low price and then selling the commodity at a higher price. This is due to the economic law of one price, which states "that if trade were free, then identical goods should sell for about the same price throughout the world". This law means profit cannot be made simply by purchasing and selling goods. Price changes on the open market will force other capitalists to adjust their prices to be more competitive, resulting in one price.
Thus, where does surplus value originate? Quite simply, the origin of surplus value arises from the worker. To better understand how this happens, consider the following example from Marx's "Capital, Volume I". A capitalist hires a worker to spin ten pounds of cotton into yarn. Suppose the value of the cotton is one dollar per pound. The entire value of the cotton is 10 dollars. The production process naturally causes wear and tear on the machinery that is used to help produce the yarn. Suppose this wearing down of machinery costs the capitalist two dollars. The value of labour-power is three dollars per day. Now also suppose that the working day is six hours. In this example, the production process yields 15 dollars and also costs the capitalist 15 dollars; therefore, there is no profit.
Now consider the process again, but this time the working day is 12 hours. In this case, 20 pounds of cotton worth 20 dollars are spun. Wear and tear on machinery now costs the capitalist four dollars. However, the value of labour-power is still only three dollars per day. The entire production process therefore costs the capitalist 27 dollars. The capitalist can now sell the yarn for 30 dollars, because the yarn absorbs 12 hours of socially necessary labour time (equivalent to six dollars of new value).
The key to this is that workers exchange their labour power in return for a means of subsistence. In this example, the means of subsistence have not changed; therefore the wage is still only 3 dollars per day. Notice that while the labour only costs the capitalist 3 dollars, the labour-power produces 12 hours worth of socially necessary labour time. The secret of surplus value resides in the fact that there is a difference between the value of labour-power and what that labour-power can produce in a given amount of time. Labour-power can produce more than its own value.
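The arithmetic of the two working days above can be set out explicitly. A sketch using the figures given in the example (one dollar per pound of cotton, three dollars per day for labour-power, and six hours of labour adding three dollars of new value):

```python
# Figures follow the example above: each hour of spinning adds $0.50 of new value.
NEW_VALUE_PER_HOUR = 0.5
WAGE_PER_DAY = 3.0
COTTON_PRICE_PER_POUND = 1.0

def spinning_day(pounds_of_cotton: float, wear_and_tear: float, hours: float):
    """Return (capitalist's outlay, value of the yarn, surplus value)."""
    cotton_cost = pounds_of_cotton * COTTON_PRICE_PER_POUND
    outlay = cotton_cost + wear_and_tear + WAGE_PER_DAY
    yarn_value = cotton_cost + wear_and_tear + hours * NEW_VALUE_PER_HOUR
    return outlay, yarn_value, yarn_value - outlay

print(spinning_day(10, 2.0, 6))    # (15.0, 15.0, 0.0) -- no surplus value
print(spinning_day(20, 4.0, 12))   # (27.0, 30.0, 3.0) -- three dollars of surplus value
```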
By working on materials during the production process, the worker both preserves the value of the material and adds new value to the material. This value is added because of the labour necessary to transform the raw material into a commodity. According to Marx, value only exists in use-values, so how does the worker transfer value to a good? It is because "[m]an himself, viewed merely as the physical existence of labour-power, is a natural object, a thing, although a living, conscious thing, and labour is the physical manifestation of that power". For commodities to be produced with surplus value, two things must be true. Man must be a living commodity (a commodity that produces labour-power), and it must be the nature of this labour-power to produce more than its own value.
When capitalists begin production, they initially spend their money on two inputs. These inputs can be represented with the capital advanced equation C = c + v, where C is capital advanced, c is constant capital, and v is variable capital. Constant capital is the means of production (factories, machinery, raw materials, etc.). Constant capital has a fixed value which can be transferred to the commodity, although the value added to the commodity can never be more than the value of constant capital itself. The source of surplus value comes instead from variable capital or labour-power. Labour power is the only commodity capable of producing more value than it possesses.
The accumulation of capital occurs after the production process is completed. The equation for the accumulation of capital is C' = c + v + s. Here, C' is the value created during the production process. C' is equal to constant capital plus variable capital plus some extra amount of surplus value (s) which arises out of variable capital. Marx says that surplus value is "merely a congealed quantity of surplus labour-time [...], nothing but objectified surplus labour".
To better understand the rate of surplus value, one must understand that there are two parts to the working day. One part of the working day is the time necessary to produce the value of the worker's labour-power. The second part of the working day is surplus labour time which produces no value for the labourer but produces value for the capitalist. The rate of surplus value is a ratio of surplus labour time (s) to necessary labour time (v). Marx also refers to the rate of surplus value (s/v) as the rate of exploitation.
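These relations can be collected into a short sketch with invented magnitudes, computing the capital advanced, the value produced, and the rate of surplus-value:

```python
# Illustrative only: the magnitudes are invented.
def capital_advanced(c: float, v: float) -> float:
    """C = c + v: constant capital plus variable capital."""
    return c + v

def value_after_production(c: float, v: float, s: float) -> float:
    """C' = c + v + s: the surplus value s arises from variable capital alone."""
    return c + v + s

def rate_of_surplus_value(s: float, v: float) -> float:
    """s/v, which Marx also calls the rate of exploitation."""
    return s / v

c, v, s = 400.0, 100.0, 100.0
print(capital_advanced(c, v))           # 500.0
print(value_after_production(c, v, s))  # 600.0
print(rate_of_surplus_value(s, v))      # 1.0, i.e. a rate of 100 per cent
```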
Capitalists often maximise profits by manipulating the rate of surplus value, which can be done through the increase of surplus labour time. This method is referred to as the production of absolute surplus value. In this case, capitalists merely increase the length of the working day. Although there are physical restrictions to the working day, such as general human needs, the working day is not fixed. This allows for great flexibility in the number of hours worked per day.
This flexibility in working hours leads to a class struggle between capitalists and workers. The capitalist argues that they can extract all the value from a day's labour since that is what they bought. By contrast, the worker demands a limited working day. The worker needs to be able to renew his labour power to sell it anew. The capitalist sees working fewer hours as theft from capital, and the worker sees working too many hours as theft from labourers. This class struggle can be seen throughout history, and eventually, laws such as Factory Acts were put in place to limit the length of a working day and child labour. This forced capitalists to find a new way in which to exploit workers.
Part Four: The Production of Relative Surplus-Value.
Part four of "Capital, Volume I" consists of four chapters:
In Chapter 12, Marx explains how an increase in the productivity of labour brings about a decrease in the value of labour-power. Chapters 13–15 examine ways in which the productivity of labour is increased.
Chapter 12: The Concept of Relative Surplus-Value.
A – – – – – – – – – – B – – C
The section from A to B represents the necessary labour, and the section from B to C represents the surplus labour. Remember, the value of labour-power is "the labour-time necessary to produce labour-power". What is of interest to Marx is "[h]ow can the production of surplus-value be increased, i.e. how can surplus labour be prolonged, without any prolongation, or independently of any prolongation, of the line AC?" Marx says it is in the best interest of the capitalist to divide the working day like this:
A – – – – – – – – – B' – B – – C
This shows that the amount of surplus labour is increased while the amount of necessary labour is decreased. Marx calls this decrease in necessary labour and increase in surplus value relative surplus-value, whereas surplus value produced by an actual lengthening of the working day is called absolute surplus-value. For relative surplus-value to arise, the productivity of labour must increase. According to Marx, the perpetual drive of capital is to increase the productivity of labour, leading to a decrease in the value of commodities. As this happens, the value of the worker's means of subsistence decreases, resulting in a decrease in the value of her labour-power.
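The two routes to a greater surplus can be compared in a small sketch with invented hours: lengthening the working day AC yields absolute surplus-value, while shifting the point B towards A by raising productivity yields relative surplus-value:

```python
# Illustrative only: the hours are invented.
def surplus_labour(working_day_hours: float, necessary_labour_hours: float) -> float:
    """Surplus labour = the whole line AC minus the segment AB."""
    return working_day_hours - necessary_labour_hours

print(surplus_labour(10, 6))   # 4 hours: the starting position
print(surplus_labour(12, 6))   # 6 hours: absolute surplus-value (AC lengthened)
print(surplus_labour(10, 4))   # 6 hours: relative surplus-value (B moved towards A)
```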
Chapter 13: Co-operation.
According to Marx, co-operation happens "when numerous workers work together side by side in accordance with a plan, whether in the same process, or in different but connected processes". Co-operation also shortens the time needed to complete a given task. Marx says that "[i]f the labour process is complicated, then the sheer number of the co-operators permits the apportionment of various operations to different hands, and consequently their simultaneous performance. The time necessary for the completion of the whole work is thereby shortened". The effort by the capitalist to organise co-operation is simply for reasons of increasing production. While this is the case, Marx quickly notes that the collective powers of co-operation are not created by capital. According to Marx, this is a disguise or a fetish. He cites the building of the pyramids, which occurred before the organisation of a capitalist mode of production.
Chapter 14: The Division of Labour and Manufacture.
Section 1. The Dual Origin of Manufacture.
In this section, Marx examines manufacture as a method of production involving specialised workers, or craftsmen, working on their own detailed tasks. He cites the assembly of a carriage as an example of the first way this is brought about. In this, multiple skilled workers are brought together to produce specialised parts once unique to their craft, contributing to the overall production of the commodity. Another way this manufacture arises is by splitting up a single handicraft into multiple specialised areas, further introducing a division of labour.
Section 2. The Specialized Worker and his Tools.
In this section, Marx argues that a worker who performs only one task throughout his life will perform his job faster and more productively, forcing capital to favour the specialised worker over the traditional craftsman. He also states that a specialised worker doing only one task can use a more specialised tool, which cannot do many jobs yet can do the one job well more efficiently than a traditional craftsman using a multi-purpose tool on any specific task.
Section 3. The Two Fundamental Forms of Manufacture: Heterogeneous and Organic.
In this section, Marx argues that a division of labour within production produces a hierarchy of labour, skilled and unskilled, and also a variation in wages. However, according to Marx, this division within the labour process reduces a worker's skills collectively, which devalues their labour power.
Section 4. The Division of Labour in Manufacture and the Division of Labour in Society.
In this section, Marx states that division of labour existed long before the establishment of a capitalist mode of production. He argues that despite its existence prior to capital, division of labour is unique under capital because its goal is to increase the rate and mass of surplus value, not create a "combined product of specialised labours".
Section 5. The Capitalist Character of Division.
In this section, Marx discusses the increased class struggle brought about by capital, in this case by the division of labour. Creating such a division disguises the combined efforts and work of the divided labourers as the power of the capitalist. According to Marx, the division of labour is a subdivision of a worker's potential and sets limitations on his mental and physical capacity, making him reliant upon the capitalist to exercise his specialised skill.
Chapter 15: Machinery and Large-Scale Industry.
Section 1. Development of Machinery.
In this section, Marx explains the significance of machinery to capitalists and how it is applied to the workforce. The goal of introducing machinery into the workforce is to increase productivity. When productivity is increased, the commodity being produced is cheapened. Relative surplus value is amplified because machinery shortens the part of the day that the worker works for his or her means of subsistence and increases the time that the worker produces for the capitalist.
Marx discusses tools and machines and their application to the process of production. Marx claims that many experts, including himself, cannot distinguish between tools and machines. He states they "call a tool a simple machine and a machine a complex tool". Marx continues to elaborate on this misinterpretation of the definition, explaining that some people distinguish between a tool and a machine "by saying that in the case of the tool, man is the motive power, whereas the power behind the machine is a natural force independent of man, for instance an animal, water, wind and so on". Marx explains a flaw with this approach by comparing two examples. He points out that a plough which is powered by an animal would be considered to be a machine, and Claussen's circular loom that can weave at a tremendous speed is powered by one worker and therefore considered to be a tool. Marx precisely defines the machine when he says that "[t]he machine, therefore, is a mechanism that, after being set in motion, performs with its tools the same operation as the worker formerly did with similar tools. Whether the motive power is derived from man, or in turn from a machine, makes no difference here".
There are three parts to fully developed machinery: the motor mechanism, the transmitting mechanism, and the working machine or tool.
Marx believes the working machine is the most important part of developed machinery. It is what began the industrial revolution of the 18th century, and even today, it continues to turn craft into industry.
The machine can replace a worker, who works at one specific job with one tool, with a mechanism that accomplishes the same task, but with many similar tools and at a much faster rate. One machine doing one specific task soon becomes a fleet of cooperating machines accomplishing the entire production process. This aspect of automation enables the capitalist to replace large numbers of human workers with machines, creating a large pool of available workers that the capitalist can choose from to form his human workforce. Workers no longer need to be skilled in a particular trade because their job has been reduced to oversight and maintenance of their mechanical successors.
The development of machinery is an interesting cycle in which inventors started inventing machines to complete necessary tasks. The machine-making industry grew, and workers focused on creating these machines, the objects that steal work from their creators. With so many machines being developed, the need for machines to produce other machines increased. For example, the spinning machine created a need for machine printing and dyeing and for the cotton gin. Marx states, "[w]ithout steam engines, the hydraulic press could not have been made. Along with the press, came the mechanical lathe and an iron cutting machine. Labour assumes a material mode of existence which necessitates the replacement of human force by natural forces".
Section 2. The Value Transferred by Machinery to the Product.
As seen in the previous section, the machine does not replace the tool, which is powered by man. The tool multiplies and expands into the working machine created by man. Workers now go to work not to handle the tools of production but to work with the machine which handles the tools. Large-scale industry increases labour productivity to an extraordinary degree by incorporating its fast-paced efficiency within the production process. What is not as clear is that this new increase in productivity does not require an equal increase in expended labour by the worker. Machinery creates no new value. The machine accumulates value from the labour that went into producing it and merely transfers its value into the product it produces until its value is used up.
Only labour-power, which capitalists buy, can create new value. Machinery transfers its value into the product at a rate dependent upon its total value, with Marx stating: "The less value it gives up, the more productive it is, and the more its services approach those rendered by natural forces". The general rule of machinery is that the labour used to create it must be less than the human labour it replaces when used in production. Otherwise, the machinery would not effectively raise surplus value but would instead diminish it. This is why some machinery is not chosen to replace actual human workers, as it would not be cost-effective.
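The rule amounts to a simple comparison. A sketch with invented figures, on the assumption that both sides are counted directly in labour hours:

```python
# Illustrative only: the figures are invented and value is counted in labour hours.
def machine_raises_surplus_value(labour_to_build_machine: float,
                                 labour_replaced_over_lifetime: float) -> bool:
    """A machine cheapens production only if it costs less labour to make
    than the living labour it displaces while in use."""
    return labour_to_build_machine < labour_replaced_over_lifetime

print(machine_raises_surplus_value(5_000, 20_000))  # True: worth introducing
print(machine_raises_surplus_value(5_000, 3_000))   # False: not cost-effective
```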
Section 3. The Proximate Effects of Machinery on the Workman.
Section three examines some of the effects of the industrial revolution on the individual worker. It is divided into three subsections; the first discusses how industrial equipment enables capitalists to appropriate supplementary labour. Marx notes that since machinery can reduce reliance upon a worker's physical strength, it enables women and children to do work that previously only men could do. Thus, it depreciates an individual's labour-power by introducing many more potential workers into the exploitable pool of labourers.
The second subsection describes how mechanisation can effectively shorten the working time needed to produce an individual commodity by increasing labour productivity. However, because of the need to recoup the capital outlay required to introduce a given machine, it must be productively operated every day for as long as possible.
In the third subsection, Marx discusses how mechanisation influences the intensification of labour. Although the introduction of the Factory Acts limited the allowable length of the work day, it did nothing to halt the drive for more efficiency. Control over workers' tools is transferred to the machine, preventing them from setting their work pace and rhythm. As the machines are continuously adapted and streamlined, the effect is an ever-increasing intensification of the labourer's work activity.
Section 4. The Factory.
Marx begins this section with two descriptions of the factory as a whole:
<templatestyles src="Template:Blockquote/styles.css" />Combined co-operation of many orders of workpeople, adult and young, in tending with assiduous skill, a system of productive machines, continuously impelled by a central power (the prime mover); on the other hand, as a vast automaton, composed of various mechanical and intellectual organs, acting in uninterrupted concert for the production of a common object, all of them being subordinate to a self-regulated moving force.
This twofold description shows the characteristics of the relationship between the collective body of labour power and the machine. In the first description, the workers, or collective labour power, are viewed as separate entities from the machine. In the second description, the machine is the dominant force, with the collective labour acting as mere appendages of the self operating machine. Marx uses the latter description to display the characteristics of the modern factory system under capitalism.
In the factory, the tools of the worker disappear and the worker's skill is passed on to the machine. The division of labour and specialization of skills re-appear in the factory, only now as a more exploitative form of capitalist production (work is still organized into co-operative groups). Work in the factory usually consists of two groups, people who are employed on the machines and those who attend to the machines. The third group outside of the factory is a superior class of workers, trained in the maintenance and repair of the machines.
Factory work begins at childhood to ensure that a person may adapt to the systematic movements of the automated machine, therefore increasing productivity for the capitalist. Marx describes this work as being extremely exhausting to the nervous system and void of intellectual activity. Factory work robs workers of basic working conditions like clean air, light, space and protection. Marx ends this section by asking if Charles Fourier was wrong when he called factories mitigated jails.
Section 5. The Struggle between Worker and Machine.
This section opens with a historical summary of workers' revolts against the imposition of mechanical instruments of production such as ribbon weaving. Marx notes that by the early 19th century the introduction of power looms and other manufacturing equipment resulted in widespread destruction of machinery by the Luddite movement. These attacks in turn gave the government at the time a pretext for severe crackdowns. Marx argues that "[i]t took both time and experience before workers learned to distinguish between machinery and their employment by capital, and therefore to transfer their attacks from the material instruments of production to the form of society which utilizes those instruments".
Marx describes the machine as the instrument of labour for the capitalists' material mode of existence. The machine competes with the worker, diminishing the use-value of the worker's labour-power. Marx also points out that the advance in technology of machines led to the substitution of less skilled work for more skilled work which ultimately led to a change in wages. During the progression of machinery, the numbers of skilled workers decreased while child labour flourished, increasing profits for the capitalist.
Section 6. The Compensation Theory, With Regard to the Workers Displaced by Machinery.
In this section, Marx sets out to illuminate the error within the compensation theory of the political economists. According to this theory, the displacement of workers by machinery necessarily sets free an equal amount of variable capital, previously used for the purchase of labour-power, which remains available for the same purpose. However, Marx argues that the introduction of machinery is simply a shift of variable capital to constant capital. The capital supposedly set free cannot be used for compensation since the displaced variable capital becomes embodied in the machinery purchased.
The capital that may become available for compensation will always be less than the total amount of capital previously used to purchase labour-power before the addition of machinery. Furthermore, the remainder of the variable capital available is directed towards hiring workers with the expertise to operate the new machinery. Therefore, since the greater part of the total capital is now used as constant capital, a reduction of variable capital necessarily follows. As a result of machinery, displaced workers are not quickly compensated by employment in other industries; instead they are forced into an expanding labour-market at a disadvantage, available for greater capitalist exploitation and without the ability to procure the means of subsistence for survival.
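The argument can be traced with invented magnitudes. A sketch of why the capital supposedly "set free" is smaller than the variable capital previously laid out on wages:

```python
# Illustrative only: all magnitudes are invented.
total_capital = 1500.0
c_before, v_before = 500.0, 1000.0   # before machinery: constant and variable capital
machinery = 700.0                    # new machinery purchased out of the same capital
c_after = c_before + machinery       # 1200.0: constant capital grows
v_after = total_capital - c_after    # 300.0: variable capital shrinks

# The compensation theory treats the 1000 formerly paid in wages as "set free"
# to re-employ the displaced workers; in fact most of it is now embodied in
# the machinery, and only v_after remains for the purchase of labour-power.
print(v_before, v_after)             # 1000.0 300.0
```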
Marx also argues that the introduction of machinery may increase employment in other industries, yet this expansion "has nothing in common with the so-called theory of compensation". Greater productivity will necessarily generate an expansion of production into peripheral fields that provide raw materials. Conversely, machinery introduced to industries that produce raw materials will lead to an increase in those industries that consume them. The production of greater surplus-value leads to greater wealth of the ruling classes, an increase in the labour-market and consequently the establishment of new industries. As an example, Marx cites the growth of the domestic service industry, which he equates with greater servitude for the exploited classes.
Section 7. Repulsion and Attraction of Workers Through The Development of Machine Production, Crises in the Cotton Industry.
The political economists' apology for the displacement of workers by machinery asserts that there is a corresponding increase in employment. Marx is quick to cite the example of the silk industry, in which an actual decrease of employment appears simultaneously with an increase of existing machinery. On the other hand, an increase in the number of factory workers employed is the result of "the gradual annexation of neighboring branches of industry" and "the building of more factories or the extension of old factories in a given industry".
Furthermore, Marx argues that any increase in factory workers is only relative, since the displacement of workers opens an ever wider gap between the growth of machinery and the proportionate decrease in the labour required to operate it. The constant expansion of capitalism and the ensuing technical advances lead to the extension of markets until they reach all corners of the globe, thus creating cycles of economic prosperity and crisis. Finally, the "repulsion and attraction" of workers forms a cycle in which the constant displacement of workers by machinery leads to increased productivity, followed by a relative expansion of industry and higher employment of labour. This sequence renews itself as all components of the cycle lead to novel technological innovation for "replacing labour-power".
Part Five: The Production of Absolute and Relative Surplus-Value.
In Chapters 16–18, Marx examines how the capitalist strategies for the production of both absolute and relative surplus-value are combined and can function simultaneously.
Chapter 16: The Rise of Surplus Value.
Marx describes the process of taking the worker's individual productive actions and making them a collective effort of many workers. This action takes the worker further away from the actual production of the commodity and then allows the capitalist to use the worker only to create surplus value. The surplus value is increased first through absolute methods such as extending the work day and then through relative methods such as increasing worker productivity. These actions are the general foundations of capitalism as described by Marx.
The worker's transformation from producer of commodities for use in survival to producer of surplus value is necessary in the progression to capitalism. In production outside the capitalist system, the worker produces everything they need to survive. When the worker can produce more than what they need to survive, they can provide their labour for a wage, creating part of some product in return for a wage with which to buy what they need to survive. Capitalism takes advantage of this extra time by paying the worker a wage that allows them to survive, but it is less than the value the same worker creates. Through large scale manufacturing and economies of scale, the workers are placed progressively further away from manufacturing products themselves and only function as part of a whole collective that creates the commodities. This changes the concept of productive labour from the production of commodities to the production of surplus value. The worker is only productive to the capitalist if they can maximise the surplus value the capitalist is earning.
Not simply content with the transformation of the worker from a creator of commodities into a creator of surplus value, the capitalist must devise new ways to increase the surplus that he is receiving. The first, or absolute, way the capitalist can increase surplus value is through the prolongation of the working day so the worker has more time to create value. The second, or relative, way the capitalist can increase surplus value is to revolutionise the method of production. If the worker could only produce the means for himself in the time he works during the day, there would be no extra time for him to create surplus value for the capitalist. The capitalist must then either enable the worker to complete the paid work time more quickly through relative means, or he must increase the working day in absolute terms. Without enabling unpaid work to exist, the capitalist system would not be able to sustain itself.
With surplus labour resting on a natural basis, there are no natural obstacles preventing one man from imposing his labour burden on another. As a worker looks into the possible ways of getting out of capitalist exploitation or the initial "animal condition", one obvious option is becoming a capitalist himself. This is called socialized labour, which exists when the surplus labour of one worker becomes necessary for the existence of another.
Marx mentions two natural conditions of wealth that are helpful in understanding the progression of socialized labour over time. The two conditions are natural wealth in the means of subsistence and natural wealth in the instruments of labour. Over time, society has moved more from the former to the latter. It was not that long ago that the majority of society produced for themselves and did not have to be concerned about producing surplus labour for others. People did labour for others, but not in an effort to create surplus value; it was to help others.
Marx uses the Egyptians as an example to illustrate a society's potential when there is extra time that does not have to be used toward creating surplus value. The Egyptians lived in a very fertile land (natural wealth in the means of subsistence), so they could raise children at a very low cost. This is the main reason why the population grew so large. One might think all the great Egyptian structures were possible because of the large population, but they were made possible by the availability of labour time.
With regard to capitalism, one might think that a greater natural wealth of subsistence would result in greater growth and capitalist production (as with the Egyptians), but that is not the case. So why is capitalism so strong in many countries that do not have excess natural resources? The answer is the necessity of bringing a natural force under the control of society (irrigation in Persia and India, regulation of the flow of water in Egypt, etc.). As Marx says, "favourable conditions provide the possibility, not the reality of surplus labour".
Marx presents an example of surplus labour occurring in these favourable conditions in the case of the East Indies. An inhabitant would be able to produce enough to satisfy all of his needs with only twelve working hours per week, which provides more than enough leisure time until capitalist production takes hold. He may then be required to work six days per week, and there can be no natural explanation of why it is necessary for him to provide the extra five days of surplus labour.
Marx then critiques the famed economist David Ricardo for failing to address the issue of surplus-value. Ricardo did not take the time to discuss the origin of surplus-value and sidestepped the entire issue. Agreeing with the classical economists, John Stuart Mill finds that the productive power, or surplus value, is the source of profits, but he adds that the necessities of life take less time to produce than is required by society; this, for Mill, is the reason capital will realise a profit. Mill goes on to assert that profit can only be gleaned from productive power and not from exchange, which falls in line with Marx's theories.
The next critique of Mill concerns the percentage that is gained from the labourer. Marx finds Mill's account "absolutely false", because the percentage of surplus labour will always be greater than the percentage of profit, owing to the amount of capital invested. Following this, Marx calls Mill's ideas an optical illusion when he looks into the advancing of capital. Mill regards labourers as a kind of capitalist: they advance the capitalist their labour ahead of time and receive a greater share of the product at the end of the project. Marx dismisses the idea with the analogy of the American peasant being his own slave because he performs forced labour for himself.
Marx examined surplus value and showed it to be a necessity in capitalism. This surplus value is derived from the difference between the value the worker creates and the wage he earns. Chapter 16 looked into the ways in which the capitalist is able to increase surplus-value and mounted a direct attack on the economists Ricardo and Mill.
Chapter 17: Changes of Magnitude in the Price of Labour-Power and in Surplus-Value.
The value of labour power, also known as wage, is the first thing that Marx begins to re-explain in the opening of the chapter, stressing that it is equal to the quantity of the "necessaries of life habitually required by the average labourer". By re-stressing the importance of this concept, Marx is building a foundation on which he can begin to elaborate his argument on the changing price of labour. In order to make his argument, Marx states that he will leave out two certain factors of change (the expenses of labour power that differ with each mode of production and the diversity of labour power between men and women, children and adults) and that he will also be making two assumptions. The two assumptions made are that (1) the commodities are sold at their values; and (2) the price of labour-power occasionally rises above its value, but it never falls beneath it.
Given these assumptions, Marx begins to formulate his argument by first establishing the three determinants of the price of labour power. These three determinants, or circumstances as Marx calls them, are the length of the working day, the normal intensity of labour and the productiveness of labour. Formulating these three circumstances into different combinations of variables and constants, Marx begins to clarify the changes in magnitude in the price of labour-power. The majority of Chapter XVII is dedicated to the chief combinations of these three factors.
I. Length of the working day and Intensity of labour constant; Productiveness of labour variable.
Starting out with these assumptions, Marx explains that there are three laws that determine the value of labour-power. The first of these three laws states that a working day of a given number of hours will always produce the same amount of value. This value is a constant, no matter the productiveness of labour or the price of the commodity produced. The second states that surplus-value and the value of labour-power vary inversely: if the total value produced stays the same, surplus-value can only increase by as much as the value of labour-power decreases. The third of these laws is that a change in surplus-value presupposes a change in the value of labour-power.
Given these three laws, Marx explains how the productiveness of labour, being the variable, changes the magnitude of labour-value. Marx explains that "a change in the magnitude of surplus-value, presupposes a movement in the value of labour-power, which movement is brought about by a variation in the productiveness of labour". This variation in the productiveness of labour is what eventually leads to the developing change in value which is then divided by either the labourers through extra labour-value, or the capitalist through extra surplus value.
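A small numerical sketch may make this concrete (the figures are purely illustrative and not taken from Marx's text): with the length and intensity of the working day fixed, the value produced per day is a constant, so every fall in the value of labour-power brought about by rising productiveness is mirrored by an equal rise in surplus-value.

```python
def split_of_daily_value(total_value, value_of_labour_power):
    """Split a day's constant value product into labour-power and surplus-value."""
    return value_of_labour_power, total_value - value_of_labour_power

TOTAL = 6.0  # hypothetical value produced in one working day (first law: a constant)

for v in (3.0, 2.0, 1.5):  # value of labour-power falls as productiveness rises
    labour_power, surplus = split_of_daily_value(TOTAL, v)
    print(f"value of labour-power {labour_power:.1f}, surplus-value {surplus:.1f}")
# Output: 3.0/3.0, 2.0/4.0, 1.5/4.5 -- the two parts vary inversely while their sum stays fixed.
```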
II. Working-day constant; Productiveness of labour constant; Intensity of labour variable.
The intensity of labour is the expenditure of labour-power that the labourer puts into a commodity. An increase in the intensity of labour results in an increase in the value that the labour produces. This increase is again divided between the capitalist and the labourer in the form of either surplus-value or a rise in the value of labour-power. Although both may increase simultaneously, the addition to the labourer's wage is not a real gain if the extra payment received for the increased intensity does not cover the additional wear and tear on the labourer.
III. Productivity and Intensity of Labour Constant; Length of Working Day variable.
In this case, the length of the working day can be changed by either lengthening or shortening the time spent at work. Leaving the other two variables constant, reducing the length of the working day leaves the value of labour-power the same as it was before, but it reduces surplus labour and surplus-value; the capitalist could only avoid this loss by pushing the price of labour-power below its value.
The other option in changing the workday is to lengthen it. If the labour-power stays the same with a longer workday, then the surplus-value will increase relatively and absolutely. The relative value of labour-power will fall even if it will not fall absolutely. With the lengthening of the workday and the nominal price staying the same, the price of labour-power possibly could fall below its value. The value is estimated to be what is produced by the worker and a longer workday will affect the production and therefore the value.
It is convenient to assume that the other variables stay constant, but in practice a change in the working day will not leave them unchanged: a change in the working day imposed by the capitalists will almost certainly affect the productivity and intensity of the labour.
IV. Simultaneous Variations in the Duration, Productivity and Intensity of Labour.
In the real world, it is almost never possible to isolate each of the aspects of labour. Two or even three of the variables may vary at once and in different directions: one may move up while another moves down, or both may move in the same direction. The combinations are endless, but may be characterized by the first three examples. However, Marx limits his analysis to two cases: diminishing productiveness of labour together with a lengthening of the working day, and increasing intensity and productiveness of labour together with a shortening of the working day.
The price of labour-power is affected by many things that can be broken down. The three main elements of intensity, productivity and length of workday were broken down and analyzed separately and then together. From the examples presented, it is possible to see what would happen in any and all situations.
Part Six: Wages.
In Chapters 19–22, Marx examines the ways in which capital manipulates the money wage as ways of both concealing exploitation and of extorting increased amounts of unpaid labour from workers.
Chapter 19: The Transformation of the Value (and Respective Price) of Labour-Power into Wages.
In this chapter, Marx discusses how the "value of labour-power is represented in its converted form as wages". The form of wages is intended to disguise the division of the working day into necessary labour (labour that is for the value of labour-power) and surplus labour (labour that is totally toward the profit of the capitalist). In other words, paid and unpaid labour for the worker. The worker in this situation feels as though he is using his labour as means of producing surplus for his own consumption, when in reality his labour-power has already been purchased by the capitalist and he merely works as a means to produce surplus value for the capitalist.
There are two distinct forms of wages that are used in the production of capital, namely time-wages and piece-wages. These forms facilitate the illusion of the actual value of labour-power and the increase of productivity from the worker employed.
Chapter 20: Time-Wages.
Marx presents the unit for measurement of time-wages to be the value of the day's labour-power divided by the number of hours in the average working day. However, an extension in the period of labour produces a fall in the price of labour, culminating in the fall in the daily or weekly wage. Yet as Marx specifies, this is to the advantage of the capitalist as more hours of production leads to surplus value for the capitalist, stating: "If one man does the work of 1½ or 2 men, the supply of labour increases, although the supply of labour-power on the market remains constant. The competition thus created between the workers allows the capitalist to force down the price of labour, while the fall in price allows him, on the other hand, to force up the hours of work even further". To make the worker feel his extra time and labour is well spent, the capitalist employs the trick of overtime.
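A minimal sketch with hypothetical figures illustrates the measure Marx describes: the hourly price of labour is the daily value of labour-power divided by the hours in the working day, so a longer day at an unchanged daily value means each hour of labour is paid less.

```python
def hourly_price_of_labour(daily_value_of_labour_power, hours_in_working_day):
    return daily_value_of_labour_power / hours_in_working_day

DAILY_VALUE = 3.0  # assumed daily value of labour-power (illustrative only)

for hours in (10, 12, 15):
    price = hourly_price_of_labour(DAILY_VALUE, hours)
    print(f"{hours}-hour day: price of labour per hour = {price:.3f}")
# The daily value is unchanged, yet the price of each hour of labour falls as the day lengthens.
```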
Chapter 21: Piece-Wages.
Marx explains the exploitative nature of the piece-wage system. Under this system, workers are paid a pre-determined amount for each piece they produce, creating a modified form of the time-wage system. A key difference is in the fact that the piece-wage system provides an exact measure of the intensity of labour, meaning that the capitalists know about how long it takes to produce one piece of finished product. Those who cannot meet these standards of production will not be allowed to keep their jobs. This system also allows for middlemen (wholesaler or reseller) to usurp positions between the capitalists and labourers. These middlemen make their money solely from paying labour less than capitalists are actually allotting, thus bringing about worker on worker exploitation.
Logic would lead a labourer to believe that straining one's labour power "as intensely as possible" works in one's own interests because the more efficiently they produce the more they will be paid. Therefore, the workday will lengthen to the extent that workers allow and necessitate. However, prolongation of the workday causes the price of labour to fall. Marx elucidates that "the piece-wage therefore has a tendency, while raising the wages of individuals above the average, to lower this average itself," and "it is apparent that the piece-wage is the form of wage most appropriate to the capitalist mode of production". He gives examples from the weaving industry around the time of the Anti-Jacobin War, where "piece-wages had fallen so low that in spite of the very great lengthening of the working day, the daily wage was then lower than it had been before". In this example, we are able to see how piece-wages do nothing but decrease the value of labour and better disguise the true way the workers are exploited.
Part Seven: The Process of Accumulation of Capital.
In Chapters 23–25, Marx explores the ways in which profits are used to recreate capitalist class relations on an ever expanding scale and the ways in which this expansion of capitalism creates periodic crises for capitalist accumulation. For Marx, these crises in accumulation are also always crises in the perpetuation of the class relations necessary for capitalist production and so are also opportunities for revolutionary change.
Chapter 23: Simple Reproduction.
<templatestyles src="Template:Blockquote/styles.css" />The economic character of capitalist becomes firmly fixed to a man only if his money constantly functions as capital (p. 711).
<templatestyles src="Template:Blockquote/styles.css" />[S]urplus-value acquires the form of a revenue arising out of capital. If this revenue serves the capitalist only as a fund to provide for his consumption, and if it is consumed as periodically as it is gained, then, other things being equal, simple reproduction takes place (p. 712).
<templatestyles src="Template:Blockquote/styles.css" />When a person consumes the whole of his property, by taking upon himself debts equal to the value of that property, it is clear that his property represents nothing but the sum total of his debts. And so it is with the capitalist; when he has consumed the equivalent of his original capital, the value of his present capital represents nothing but the total amount of surplus-value appropriated by him without payment. Not a single atom of the value of his old capital continues to exist (p. 715).
<templatestyles src="Template:Blockquote/styles.css" />The fact that the worker performs acts of individual consumption in his own interest, and not to please the capitalist, is something entirely irrelevant to the matter. The consumption of food by a beast of burden does not become any less a necessary aspect of the production process because the beast enjoys what it eats (p. 718).
<templatestyles src="Template:Blockquote/styles.css" />The reproduction of the working class implies at the same time the transmission and accumulation of skills from one generation to another (p. 719).
<templatestyles src="Template:Blockquote/styles.css" />In reality, the worker belongs to capital before he has sold himself to the capitalist. His economic bondage is at once mediated through and concealed by, the periodic renewal of the act by which he sells himself, his change of masters, and the oscillations in the market-price of his labour (pp. 723–724).
Chapter 24: The Transformation of Surplus-Value into Capital.
Capitalist production on a progressively increasing scale. The inversion which converts the property laws of commodity production into laws of capitalist appropriation.
<templatestyles src="Template:Blockquote/styles.css" />[S]urplus-value can be transformed into capital only because the surplus product, whose value it is, already comprises the material components of a new quantity of capital (p. 727).
<templatestyles src="Template:Blockquote/styles.css" />All capital needs to do is to incorporate this additional labour-power, annually supplied by the working class in the shape of labour-powers of all ages, with the additional means of production comprised in the annual product (p. 727).
<templatestyles src="Template:Blockquote/styles.css" />[T]he working class creates by the surplus labour of one year the capital destined to employ additional labour in the following year. And this is what is called creating capital out of capital (p. 729).
<templatestyles src="Template:Blockquote/styles.css" />The constant sale and purchase of labour-power is the form; the content is the constant appropriation by the capitalist, without equivalent, of a portion of the labour of others which has already been objectified, and his repeated exchange of this labour for a greater quantity of the living labour of others (p. 730).
The political economists' erroneous conception of reproduction on an increasing scale.
<templatestyles src="Template:Blockquote/styles.css" />The classical economists are therefore quite right to maintain that the consumption of surplus product by productive, instead of unproductive, workers is a characteristic feature of the process of accumulation (p. 736).
<templatestyles src="Template:Blockquote/styles.css" />The movements of the individual capitals and personal revenues cross and intermingle, and become lost in a general alternation of positions, i.e. in the circulation of society's wealth (p. 737).
Division of surplus-value into capital and revenue. The abstinence theory.
<templatestyles src="Template:Blockquote/styles.css" />One part of the surplus-value is consumed by the capitalist as revenue, the other part is employed as capital, i.e. it is accumulated ... the ratio of these parts determines the magnitude of accumulation (p. 738)
<templatestyles src="Template:Blockquote/styles.css" />[T]he development of capitalist production makes it necessary constantly to increase the amount of capital laid out in a given industrial undertaking, and competition subordinates every individual capitalist to the immanent laws of capitalist production as external and coercive laws. It compels him to keep extending his capital, so as to preserve it, and he can only extend it by means of progressive accumulation (p. 739).
<templatestyles src="Template:Blockquote/styles.css" />Accumulation is the conquest of the world of social wealth (p. 739).
<templatestyles src="Template:Blockquote/styles.css" />Accumulation for the sake of accumulation, production for the sake of production: this was the formula in which classical economics expressed the historical mission of the bourgeoisie in the period of its domination (p. 742).
The circumstances which, independently of the proportional division of surplus-value into capital and revenue, determine the extent of accumulation: the degree of exploitation of labour-power, the productivity of labour, the growing difference in amount between capital employed and capital consumed, and the magnitude of the capital advanced.
Chapter 25, Sections 3 and 4: The General Law of Capitalist Accumulation.
Although originally appearing as its quantitative extension only, the accumulation of capital is effected under a progressive qualitative change in its composition and a constant increase of its constant at the expense of its variable constituent. Capitalist production can by no means content itself with the quantity of disposable labour-power which the natural increase of population yields. It requires for its free play an industrial reserve army independent of these natural limits. Up to this point, it has been assumed that the increase or diminution of the variable capital corresponds rigidly with the increase or diminution of the number of labourers employed. The number of labourers commanded by capital may remain the same or even fall while the variable capital increases. This is the case if the individual labourer yields more labour and therefore his wages increase and this, although the price of labour remains the same or even falls, only more slowly than the mass of labour rises. In this case, increase of variable capital becomes an index of more labour, but not of more labourers employed. It is the absolute interest of every capitalist to press a given quantity of labour out of a smaller, rather than a greater number of labourers, if the cost is about the same. In the latter case, the outlay of constant capital increases in proportion to the mass of labour set in action; in the former that increase is much smaller. The more extended the scale of production, the stronger this motive. Its force increases with the accumulation of capital.
We have seen that the development of the capitalist mode of production and of the productive power of labour—at once the cause and effect of accumulation—enables the capitalist, with the same outlay of variable capital, to set in action more labour by greater exploitation (extensive or intensive) of each individual labour-power. We have further seen that the capitalist buys with the same capital a greater mass of labour-power as he progressively replaces skilled labourers by less skilled, mature labour-power by immature, male by female, that of adults by that of young persons or children. On the one hand, with the progress of accumulation a larger variable capital sets more labour in action without enlisting more labourers; on the other, a variable capital of the same magnitude sets in action more labour with the same mass of labour-power; and, finally, a greater number of inferior labour-power by displacement of higher.
The production of a relative surplus-population, or the setting free of labourers, goes on therefore yet more rapidly than the technical revolution of the process of production that accompanies and is accelerated by, the advances of accumulation; and more rapidly than the corresponding diminution of the variable part of capital as compared with the constant. If the means of production, as they increase in extent and effective power, become to a less extent means of employment of labourers, this state of things is again modified by the fact that in proportion as the productiveness of labour increases, capital increases its supply of labour more quickly than its demand for labourers. The over-work of the employed part of the working class swells the ranks of the reserve whilst conversely the greater pressure that the latter by its competition exerts on the former, forces these to submit to over-work and to subjugation under the dictates of capital. The condemnation of one part of the working-class to enforced idleness by the over-work of the other part and the converse becomes a means of enriching the individual capitalists and accelerates at the same time the production of the industrial reserve army on a scale corresponding with the advance of social accumulation. How important is this element in the formation of the relative surplus-population is shown by the example of England. Her technical means for saving labour are colossal. Nevertheless, if to-morrow morning labour generally were reduced to a rational amount and proportioned to the different sections of the working-class according to age and sex, the working population to hand would be absolutely insufficient for the carrying on of national production on its present scale. The great majority of the labourers now unproductive would have to be turned into productive ones.
This is the place to return to one of the grand exploits of economic apologetics. It will be remembered that if through the introduction of new, or the extension of old, machinery, a portion of variable capital is transformed into constant, the economic apologist interprets this operation which fixes capital and by that very act sets labourers free in exactly the opposite way, pretending that it sets free capital for the labourers. Only now can one fully understand the effrontery of these apologists. What are set free are not only the labourers immediately turned out by the machines, but also their future substitutes in the rising generation and the additional contingent that with the usual extension of trade on the old basis would be regularly absorbed. They are now all set free and every new bit of capital looking out for employment can dispose of them. Whether it attracts them or others, the effect on the general labour demand will be nil, if this capital is just sufficient to take out of the market as many labourers as the machines threw upon it. If it employs a smaller number, that of the supernumeraries increases; if it employs a greater, the general demand for labour only increases to the extent of the excess of the employed over those set free. The impulse that additional capital, seeking an outlet, would otherwise have given to the general demand for labour, is therefore in every case neutralised to the extent of the labourers thrown out of employment by the machine. That is to say, the mechanism of capitalistic production so manages matters that the absolute increase of capital is accompanied by no corresponding rise in the general demand for labour. And this the apologist calls a compensation for the misery, the sufferings and the possible death of the displaced labourers during the transition period that banishes them into the industrial reserve army, an outcome of the antagonism of capital accumulation. The demand for labour is not identical with increase of capital, nor supply of labour with increase of the working class. It is not a case of two independent forces working on one another; the dice are loaded ("les dés sont pipés").
Capital works on both sides at the same time. If its accumulation on the one hand increases the demand for labour, it increases on the other the supply of labourers by the setting free of them whilst at the same time the pressure of the unemployed compels those that are employed to furnish more labour and therefore makes the supply of labour, to a certain extent, independent of the supply of labourers. The action of the law of supply and demand of labour on this basis completes the despotism of capital. Therefore, as soon as the labourers learn the secret, how it comes to pass that in the same measure as they work more as they produce more wealth for others and as the productive power of their labour increases so in the same measure even their function as a means of the self-expansion of capital becomes more and more precarious for them; as soon as they discover that the degree of intensity of the competition among themselves depends wholly on the pressure of the relative surplus population; and as soon as they try to organise by trade unions a regular co-operation between employed and unemployed in order to destroy or to weaken the ruinous effects of this natural law of capitalistic production on their class, so soon capital and its sycophant political economy cry out at the infringement of the eternal and so to say sacred law of supply and demand. Every combination of employed and unemployed disturbs the harmonious action of this law. On the other hand, as soon as (e.g. in the colonies) adverse circumstances prevent the creation of an industrial reserve army and with it the absolute dependence of the working class upon the capitalist class, capital, along with its commonplace Sancho Panza, rebels against the sacred law of supply and demand and tries to check its inconvenient action by forcible means and state interference.
Part Eight: So-called Primitive Accumulation.
Chapter 26: The Secret of Primitive Accumulation.
In order to understand the desire for and techniques utilized by the bourgeoisie to accumulate capital before the rise of capitalism itself, one must look to the notion of primitive accumulation as the main impetus for this drastic change in history. Primitive accumulation refers to the essential lucrative method employed by the capitalist class that brought about the transition into the capitalist mode of production following the end of the feudal system. Marx states that the means of production and a bare level of subsistence must be stripped from the common producer to allow for this to take place. The means of production refers to the tools or processes used to create a product or provide a service.
Chapter 27: Expropriation of the Agricultural Population From the Land.
The central process for and secret behind primitive accumulation involved the expropriation of agricultural lands and any form of wealth from the population of commoners by the capitalists which typically was characterized by brutal and violent struggles between the two opposing classes. Since the peasantry was not subjected to the laws of feudalism any longer, they were ultimately freed from their lords and the land to assimilate into this new mode of production as a wage labourer. As a result, every freed proletariat had only their labour power to sell to the bourgeoisie to meet their needs to simply survive. Marx refers to the Tillage Act 1488, the Tillage Act 1533 and the Poor Relief Act 1601.
Chapter 28: Bloody Legislation Against the Expropriated, from the End of the 15th Century. Forcing Down of Wages by Acts of Parliament.
The integration process into this new mode of production came at a cost to the proletariat since the strenuous demands of finding alternative work proved to be too much of a burden for most. As a result, the working class often initially resorted to thievery and begging to meet their needs under this new form of human existence. To make matters worse, harsh legislation seen in England and France declared these individuals to be vagabonds and rogues subject to the laws of the state. Furthermore, the working class also suffered due to legislative measures taken in England to keep working wages astonishingly low while the cost of living rose. In particular, Marx refers to the Vagabonds Act 1530 (22 Hen. 8. c. 12), the Vagabonds Act 1536 (27 Hen. 8. c. 25), the Vagabonds Act 1547 (1 Edw. 6. c. 3), allowing someone to take as a slave the person they accurately denounce as an idler if they refused to work, the Vagabonds Act 1572 (14 Eliz. 1. c. 5), providing unlicensed beggars above 14 years of age are to be severely flogged, the Poor Act 1575 (18 Eliz. 1. c. 3), the Vagabonds Act 1597 (39 Eliz. 1. c. 4), introducing penal transportation, and the Vagabonds Act 1603 (1 Jas. 1. c. 7) which was only repealed by the Vagrants Act 1713 (13 Ann. c. 26). Marx also recounts wage fixing legislation, including the Statute of Labourers 1351, the Statute of Apprentices (which was extended to weavers by King James I), the Journeymen Tailors, London Act 1720 (7 Geo. 1 St. 1. c. 13), the Silk Manufacturers Act 1772 (13 Geo. 3. c. 68) and the Colliers (Scotland) Act 1799 (39 Geo. 3. c. 56).
Chapter 29: Genesis of the Capitalist Farmer.
The origin of the capitalists in England spawned out of the "great landed proprietors" who reaped the benefits of the surplus value made from the expropriated land they had acquired at practically no cost. The progressive fall of the value of precious metals and money brought more profit to the capitalist farmers as the wage labourers beneath them were forced to accept lower wages. It comes as no surprise that the class of capitalist farmers in England became enormously wealthy given the circumstances of the time.
Chapter 30: Reaction of the Agricultural Revolution on Industry. Creation of the Home-Market for Industrial Capital.
The British Agricultural Revolution (17th–19th centuries) not only caused many changes in the way people worked, but in social structure as well. When industrialisation provided the cheapest and most efficient tools for agricultural production, it caused a reduced need for the peasant farm workers which displaced most of the working class from the countryside. Faced with the choice of selling their labour for a wage or becoming a capitalist, there emerged a class of entrepreneurs who through the exploitation of wage labourers became the capitalist class. As the system grew, there became a need for cheaper and more readily available materials, thus colonization was born. By expanding into new territories and enslaving indigenous cultures, primitive accumulation became a source of quick and easy capital. Famine even became a tool for capitalists in 1769–1770 when England raised the price of rice in India so that only the rich could afford it. National debt soon became a tool of control for capitalists who turned unproductive money into capital through lending and exchange. Encouraged to participate in the creation of debt, each worker participates in the creation of "joint-stock companies, the stock-exchange, and modern bankocracy". The international credit system conceals the source of its generation, i.e. the exploitation of slave and wage labourers.
Chapter 31: The Genesis of the Industrial Capitalist.
The shift in the ownership of the means of production from the proletariat to the bourgeoisie left the common producer with only his labour power to sell; he remained a free proprietor of his own labour-power, but no longer of the conditions of his labour. During this process of transference, private property was replaced by capitalist private property through the highest form of exploitation, and the shift from the days of free labour to wage labour took place. Capitalist private property was formed from the capitalist mode of appropriation, which dwindled away the once existent private property founded on the personal labour of workers.
Chapter 32: Historical Tendency of Capitalist Accumulation.
Marx states that as capitalism grows, the number of wage labourers grows exponentially. Therefore, ultimately there will be a revolution in which the capitalists are expropriated from their means of wealth by the majority. In other words, the seeds of destruction are already implanted within capitalism. Marx stresses that the demise of capitalism does not necessarily mean the return of feudalism and private property, but rather "it does indeed establish individual property on the basis of the achievements of the capitalist era, namely co-operation and the possession in common of the land and the means of production produced by labour itself". That is to say, the transformation does not restore the old private property, but turns what was private property into social property.
Chapter 33: The Modern Theory of Colonization.
Marx claims that two types of private property exist in a political economy. The first form is the labour of the producer himself and the other form rests in a capitalist's exploitation of others. In the industrialized capitalist world of Western Europe, this is easily attained through the usage of laws and private property. However, capitalists constantly find obstacles in the colonies where workers work for their own enrichment rather than that of the capitalist. Capitalists overcome this obstacle by the use of force and by the political backing of the motherland. If domination over the workers' free will cannot be achieved, Marx then asks, "how did capital and wage-labour come into existence?" This comes about through the division of workers into owners of capital and owners of labour. This system causes workers to essentially expropriate themselves in order to accumulate capital. This self-expropriation served as primitive accumulation for the capitalists and therefore was the catalyst for capitalism in the colonies.
Publication history.
During his life, Karl Marx oversaw the first and second German language editions as well as a third edition in French. This French edition was to be an important basis of the third German edition that Friedrich Engels instead oversaw after Marx died in 1883.
The "Marx-Engels-Gesamtausgabe" contains critical editions with apparatus the different editions.
There are a number of different English translations. There is some controversy as to the choice of edition that has been translated as representing this work to foreign language readers.
As Marx notes in the afterword to the second German edition of "Capital", the different editions reflect his reworking of published material, especially in the presentation of the work and particularly in the theory of value.
The "Marx-Engels-Gesamtausgabe" contains the four variants of "Capital, Volume I" in their entirety.
In spite of enormous effort, Marx did not live to complete his goal of publishing the remaining volumes of "Capital". After Marx died, Engels, as editor, published volumes II (1885) and III (1894) from Marx's economic manuscripts, in some ways expanding them. Scholars are divided over which of several plans for the work was Marx's final one. Because the project was not definitively completed, volume one's role in the critique of political economy leaves unanswered scientific questions that Marxian political economists continue to debate.
Presentation method and literary form.
Marxian political economists are divided over the methodological character driving Marx's choice of the order in which economic concepts are presented, a question that frustrated the quicker completion of this book during Marx's life.
There are logical, historical, sociological and other interpretations attempting to clarify a method which Marx did not explain, because his planned writing project on dialectics had lower priority than other matters.
Since 1867, scholars have promoted different interpretations of the purpose driving volume one's long and oftentimes expansive argument. Key writers include Louis Althusser, Harry Cleaver, Richard D. Wolff, David Harvey, Michael Lebowitz, Moishe Postone, Fred Moseley, Michael Heinrich and others. In the Arabic world, Marx's ideas were discussed by thinkers such as Sadiq Jalal Al-Azm, Al-Tayyeb Tizini, Rizgar Akrawi and others.
There exist multiple plans for the project of "Capital", and consequently whether or not Marx completed his project is an ongoing debate among Marxian political economists.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
External links.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\n \\begin{bmatrix}\n 1 & \\mbox{coat}\\\\\n 10 & \\mbox{lb. of tea}\\\\\n 40 & \\mbox{lb. of coffee}\\\\\n 1 & \\mbox{quarter of corn}\\\\\n 2 & \\mbox{ounces of gold}\\\\\n 1/2 & \\mbox{ton of iron}\\\\\n x & \\mbox{commodity A, etc.}\\\\\n \\end{bmatrix}\n=\n20 \\mbox{ yards of linen}\n"
},
{
"math_id": 1,
"text": "\n \\begin{bmatrix}\n 1 & \\mbox{coat}\\\\\n 10 & \\mbox{lb. of tea}\\\\\n 40 & \\mbox{lb. of coffee}\\\\\n 1 & \\mbox{quarter of corn}\\\\\n 20 & \\mbox{yards of linen}\\\\\n 1/2 & \\mbox{ton of iron}\\\\\n x & \\mbox{commodity A, etc.}\\\\\n \\end{bmatrix}\n=\n2 \\mbox{ ounces of gold}\n"
},
{
"math_id": 2,
"text": "C \\to M \\to C"
},
{
"math_id": 3,
"text": "C \\to M"
},
{
"math_id": 4,
"text": "M \\to C"
},
{
"math_id": 5,
"text": "M \\to C = C \\to M"
},
{
"math_id": 6,
"text": "C = c + v"
},
{
"math_id": 7,
"text": "C"
},
{
"math_id": 8,
"text": " = c + v + s"
}
] | https://en.wikipedia.org/wiki?curid=8937649 |
89405 | Frustum | Portion of a solid that lies between two parallel planes cutting the solid
In geometry, a frustum (pl.: frusta or frustums) is the portion of a solid (normally a pyramid or a cone) that lies between two parallel planes cutting the solid. In the case of a pyramid, the base faces are polygonal and the side faces are trapezoidal. A right frustum is a right pyramid or a right cone truncated perpendicularly to its axis; otherwise, it is an oblique frustum.
In a "truncated cone" or "truncated pyramid", the truncation plane is not necessarily parallel to the cone's base, as in a frustum.
If all its edges are forced to become of the same length, then a frustum becomes a "prism" (possibly oblique or/and with irregular bases).
Elements, special cases, and related concepts.
A frustum's axis is that of the original cone or pyramid. A frustum is circular if it has circular bases; it is right if the axis is perpendicular to both bases, and oblique otherwise.
The height of a frustum is the perpendicular distance between the planes of the two bases.
Cones and pyramids can be viewed as degenerate cases of frusta, where one of the cutting planes passes through the apex (so that the corresponding base reduces to a point). The pyramidal frusta are a subclass of prismatoids.
Two frusta with two congruent bases joined at these congruent bases make a bifrustum.
Formulas.
Volume.
The formula for the volume of a pyramidal square frustum appears in ancient Egyptian mathematics in what is called the Moscow Mathematical Papyrus, written in the 13th dynasty (c. 1850 BC):
formula_0
where a and b are the base and top side lengths, and h is the height.
The Egyptians knew the correct formula for the volume of such a truncated square pyramid, but no proof of this equation is given in the Moscow papyrus.
The volume of a conical or pyramidal frustum is the volume of the solid before slicing its "apex" off, minus the volume of this "apex":
formula_1
where "B"1 and "B"2 are the base and top areas, and "h"1 and "h"2 are the perpendicular heights from the apex to the base and top planes.
Considering that
formula_2
the formula for the volume can be expressed as the third of the product of this proportionality, formula_3, and of the difference of the cubes of the heights "h"1 and "h"2 only:
formula_4
By using the identity "a"3 − "b"3 = ("a" − "b")("a"2 + "ab" + "b"2), one gets:
formula_5
where "h"1 − "h"2 = "h" is the height of the frustum.
Distributing formula_3 and substituting from its definition, the Heronian mean of areas "B"1 and "B"2 is obtained:
formula_6
the alternative formula is therefore:
formula_7
Heron of Alexandria is noted for deriving this formula, and with it, encountering the imaginary unit: the square root of negative one.
In particular:
formula_8
where "r"1 and "r"2 are the base and top radii.
formula_9
where "a"1 and "a"2 are the base and top side lengths.
Surface area.
For a right circular conical frustum the slant height formula_10 is
<templatestyles src="Block indent/styles.css"/>formula_11
the lateral surface area is
<templatestyles src="Block indent/styles.css"/>formula_12
and the total surface area is
<templatestyles src="Block indent/styles.css"/>formula_13
where "r"1 and "r"2 are the base and top radii respectively.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V = \\frac{h}{3}\\left(a^2 + ab + b^2\\right),"
},
{
"math_id": 1,
"text": "V = \\frac{h_1 B_1 - h_2 B_2}{3},"
},
{
"math_id": 2,
"text": "\\frac{B_1}{h_1^2} = \\frac{B_2}{h_2^2} = \\frac{\\sqrt{B_1B_2}}{h_1h_2} = \\alpha,"
},
{
"math_id": 3,
"text": "\\alpha"
},
{
"math_id": 4,
"text": "V = \\frac{h_1 \\alpha h_1^2 - h_2 \\alpha h_2^2}{3} = \\alpha\\frac{h_1^3 - h_2^3}{3}."
},
{
"math_id": 5,
"text": "V = (h_1 - h_2)\\alpha\\frac{h_1^2 + h_1h_2 + h_2^2}{3},"
},
{
"math_id": 6,
"text": "\\frac{B_1 + \\sqrt{B_1B_2} + B_2}{3};"
},
{
"math_id": 7,
"text": "V = \\frac{h}{3}\\left(B_1 + \\sqrt{B_1B_2} + B_2\\right)."
},
{
"math_id": 8,
"text": "V = \\frac{\\pi h}{3}\\left(r_1^2 + r_1r_2 + r_2^2\\right),"
},
{
"math_id": 9,
"text": "V = \\frac{nh}{12}\\left(a_1^2 + a_1a_2 + a_2^2\\right)\\cot\\frac{\\pi}{n},"
},
{
"math_id": 10,
"text": "s"
},
{
"math_id": 11,
"text": "\\displaystyle s=\\sqrt{\\left(r_1-r_2\\right)^2+h^2},"
},
{
"math_id": 12,
"text": "\\displaystyle \\pi\\left(r_1+r_2\\right)s,"
},
{
"math_id": 13,
"text": "\\displaystyle \\pi\\left(\\left(r_1+r_2\\right)s+r_1^2+r_2^2\\right),"
}
] | https://en.wikipedia.org/wiki?curid=89405 |
894139 | Causal contact | Sharing an event that affects entities in a causal way
Two entities are in causal contact if there may be an event that has affected both in a causal way. Every object of mass in space, for instance, exerts a field force on all other objects of mass, according to Newton's law of universal gravitation. Because this force exerted by one object affects the motion of the other, it can be said that these two objects are in causal contact.
The only objects not in causal contact are those for which there is no event in the history of the universe that could have sent a beam of light to both. For example, if the universe were not expanding and had existed for 10 billion years, anything more than 20 billion light-years away from the earth would not be in causal contact with it. Anything less than 20 billion light-years away "would" because an event occurring 10 billion years in the past that was 10 billion light-years away from both the earth and the object under question could have affected both.
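The static-universe example above amounts to a simple inequality, sketched below (a hypothetical illustration; the function name is not standard): two objects are in causal contact precisely when their separation is at most twice the distance light could have travelled since the beginning of that universe.

```python
# Non-expanding toy universe: an event half-way between two objects can have reached
# both with light signals only if each lies within (age of universe) light-years of it.
def in_causal_contact(separation_ly, age_of_universe_yr):
    return separation_ly <= 2 * age_of_universe_yr

AGE = 10e9  # assumed age: 10 billion years
print(in_causal_contact(15e9, AGE))  # True: 15 billion light-years is within 20 billion
print(in_causal_contact(25e9, AGE))  # False: beyond 20 billion light-years
```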
A good illustration of this principle is the light cone, which is constructed as follows. Taking as event formula_0 a flash of light (light pulse) at time formula_1, all events that can be reached by this pulse from formula_0 form the future light cone of formula_0, whilst those events that can send a light pulse to formula_0 form the past light cone of formula_0.
Given an event formula_2, the light cone classifies all events in spacetime into 5 distinct categories: events on the future light cone of formula_2; events inside the future light cone, which can be affected by formula_2; events on the past light cone of formula_2; events inside the past light cone, which can affect formula_2; and all other events, which lie in the "elsewhere" of formula_2 and can neither affect nor be affected by it.
See the causal structure of Minkowski spacetime for a more detailed discussion.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "t_0"
},
{
"math_id": 2,
"text": "E"
}
] | https://en.wikipedia.org/wiki?curid=894139 |
89425 | Arrow's impossibility theorem | Proof all ranked voting rules have spoilers
Arrow's impossibility theorem is a key result in social choice, discovered by Kenneth Arrow, showing that no ranked voting rule can behave rationally. Specifically, any such rule violates "independence of irrelevant alternatives", the idea that a choice between formula_0 and formula_1 should not depend on the quality of a third, unrelated option formula_2. The result is most often cited in election science and voting theory, where formula_2 is called a spoiler candidate. In this context, Arrow's theorem can be restated as showing that no ranked voting rule can eliminate the spoiler effect.
However, Arrow's theorem is substantially broader, and can be applied to other methods of social decision-making besides voting. It therefore generalizes Nicolas de Condorcet's voting paradox, and shows similar problems will exist for any collective decision-making procedure based on relative comparisons.
Despite this, some ranked methods are much more susceptible to spoilers than others. Plurality-rule methods like first-past-the-post and ranked-choice voting (RCV) in particular are highly sensitive to spoilers, manufacturing them even in some situations where they are not forced. By contrast, majority-rule methods uniquely minimize the possibility of spoilers, limiting them to rare situations called cyclic ties. Under some models of voter behavior (e.g. a left-right spectrum), spoiler effects can vanish entirely for Condorcet methods.
Rated methods, based on cardinal utility, are not affected by Arrow's theorem or IIA failures at all. Arrow initially asserted the information provided by these systems was meaningless and therefore could not be used to prevent paradoxes, leading him to overlook them. However, he and other authors would later recognize this as having been a mistake, with Arrow admitting rules based on cardinal utilities (such as score and approval voting) are not subject to his theorem.
Background.
Arrow's theorem falls under the branch of welfare economics called social choice theory, which deals with aggregating preferences and beliefs to make optimal decisions. The goal of social choice is to identify a social choice rule, a mathematical function that determines which of two outcomes or options is better, according to all members of a society. Such a procedure can be a market, voting system, constitution, or even a moral or ethical framework. Ideally, such a procedure should satisfy the properties of rational choice, avoiding any kind of self-contradiction.
Axioms of voting systems.
Preferences.
In social choice theory, ordinal preferences are modeled as orderings of candidates. If "A" and "B" are different candidates or alternatives, then formula_3 means "A" is preferred to "B". Individual preferences (or ballots) are required to satisfy intuitive properties of orderings, e.g. they must be transitive—if formula_4 and formula_5, then formula_6. The social choice function is then a mathematical function that maps the individual orderings to a new ordering that represents the preferences of all of society.
Basic assumptions.
Arrow's theorem assumes as background that non-degenerate ranked social choice rules satisfy:
Arrow's original statement of the theorem included the assumption of nonperversity, i.e. "increasing" the rank of an outcome should not make them "lose". However, this assumption is not needed or used in his proof, except to derive the weaker Pareto efficiency axiom, and as a result is not related to the paradox. While Arrow considered it an obvious requirement of any proposed social choice rule, ranked-choice runoff (RCV) fails this condition. Arrow later corrected his statement of the theorem to include runoffs and other perverse voting rules.
Rationality.
Among the most important axioms of rational choice is "independence of irrelevant alternatives", which says that when deciding between "A" and "B", one's opinion about a third option "C" should not affect their decision.
IIA is sometimes illustrated with a short joke by philosopher Sidney Morgenbesser:
Morgenbesser, ordering dessert, is told by a waitress that he can choose between blueberry or apple pie. He orders apple. Soon the waitress comes back and explains cherry pie is also an option. Morgenbesser replies "In that case, I'll have blueberry."
Arrow's theorem shows that if a society wishes to make decisions while avoiding such self-contradictions, it cannot use methods that discard cardinal information.
Theorem.
Intuitive argument.
Condorcet's example is already enough to see the impossibility of a fair ranked voting system, given stronger conditions for fairness than Arrow's theorem assumes. Suppose we have three candidates (formula_0, formula_1, and formula_2) and three voters whose preferences are as follows: voter 1 ranks formula_0 over formula_1 over formula_2, voter 2 ranks formula_1 over formula_2 over formula_0, and voter 3 ranks formula_2 over formula_0 over formula_1.
If formula_2 is chosen as the winner, it can be argued any fair voting system would say formula_1 should win instead, since two voters (1 and 2) prefer formula_1 to formula_2 and only one voter (3) prefers formula_2 to formula_1. However, by the same argument formula_0 is preferred to formula_1, and formula_2 is preferred to formula_0, by a margin of two to one on each occasion. Thus, even though each individual voter has consistent preferences, the preferences of society are contradictory: formula_0 is preferred over formula_1 which is preferred over formula_2 which is preferred over formula_0.
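The cycle can be verified by tallying the pairwise majorities directly, as in the following minimal sketch (the labels A, B and C stand for formula_0, formula_1 and formula_2):

```python
# The three ballots from the example above.
ballots = [
    ["A", "B", "C"],  # Voter 1
    ["B", "C", "A"],  # Voter 2
    ["C", "A", "B"],  # Voter 3
]

def prefers(ballots, x, y):
    """Number of voters ranking x above y."""
    return sum(1 for b in ballots if b.index(x) < b.index(y))

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"{x} over {y}: {prefers(ballots, x, y)} to {prefers(ballots, y, x)}")
# Each of A>B, B>C and C>A wins 2 votes to 1: the majority preference is cyclic.
```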
Because of this example, some authors credit Condorcet with having given an intuitive argument that presents the core of Arrow's theorem. However, Arrow's theorem is substantially more general; it applies to methods of making decisions other than one-man-one-vote elections, such as markets or weighted voting, based on ranked ballots.
Formal statement.
Let "A" be a set of "alternatives". A "preference" on "A" is a complete and transitive binary relation on "A" (sometimes called a total preorder), that is, a subset "R" of "A" × "A" satisfying:
The element (a, b) being in "R" is interpreted to mean that alternative a is preferred to alternative b. This situation is often denoted formula_9 or aRb. Denote the set of all preferences on "A" by Π("A").
Let "N" be a positive integer. An "ordinal (ranked)" "social welfare function" is a function
formula_10
which aggregates voters' preferences into a single preference on "A". An "N"-tuple ("R"1, …, "R""N") ∈ Π("A")"N" of voters' preferences is called a "preference profile".
Arrow's impossibility theorem: If there are at least three alternatives, then there is no social welfare function satisfying all three of the conditions listed below:
Pareto efficiency (unanimity): If alternative a is preferred to b for all orderings "R"1 , …, "R""N", then a is preferred to b by F("R"1, "R"2, …, "R""N").
Non-dictatorship: There is no individual "i" whose preferences always prevail. That is, there is no "i" ∈ {1, …, "N"} such that for all ("R"1, …, "R""N") ∈ Π(A)"N" and all a and b, when a is preferred to b by "Ri" then a is preferred to b by F("R"1, "R"2, …, "R""N").
Independence of irrelevant alternatives (IIA): For two preference profiles ("R"1, …, "R""N") and ("S"1, …, "S""N") such that for all individuals "i", alternatives a and b have the same order in "Ri" as in "Si", alternatives a and b have the same order in F("R"1, …, "R""N") as in F("S"1, …, "S""N").
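Although Arrow's proof is more general, an IIA failure for a concrete ranked rule is easy to exhibit. The following minimal sketch (illustrative profiles, not part of Arrow's argument) uses the Borda count: between the two profiles no voter changes their relative ordering of A and B, yet the social ordering of A and B flips because only C's position changes.

```python
def borda_scores(ballots):
    """Give each candidate n-1 points for first place, n-2 for second, ..., 0 for last."""
    scores = {}
    for ballot in ballots:
        n = len(ballot)
        for rank, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - rank)
    return scores

profile_1 = 3 * [["A", "B", "C"]] + 2 * [["B", "C", "A"]]
profile_2 = 3 * [["A", "B", "C"]] + 2 * [["B", "A", "C"]]  # only C is ranked differently

print(borda_scores(profile_1))  # {'A': 6, 'B': 7, 'C': 2} -> B socially above A
print(borda_scores(profile_2))  # {'A': 8, 'B': 7, 'C': 0} -> A socially above B
```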
Generalizations.
Arrow's impossibility theorem still holds if Pareto efficiency is weakened to the following condition:
Non-imposition (citizen sovereignty): For any two alternatives a and b, there exists some preference profile "R"1 , …, "R""N" such that a is preferred to b by F("R"1, "R"2, …, "R""N").
Interpretation and practical solutions.
Arrow's theorem establishes that no ranked voting rule can "always" satisfy independence of irrelevant alternatives, but it says nothing about the frequency of spoilers. This led Arrow to remark that "Most systems are not going to work badly all of the time. All I proved is that all can work badly at times."
Attempts at dealing with the effects of Arrow's theorem take one of two approaches: either accepting his rule and searching for the least spoiler-prone methods, or dropping his assumption of ranked voting to focus on studying rated voting rules.
Minimizing IIA failures: Majority-rule methods.
The first set of methods studied by economists are the majority-rule, or "Condorcet", methods. These rules limit spoilers to situations where majority rule is self-contradictory, called Condorcet cycles, and as a result uniquely minimize the possibility of a spoiler effect among ranked rules. Condorcet believed voting rules should satisfy both independence of irrelevant alternatives and the majority rule principle, i.e. if most voters rank "Alice" ahead of "Bob", "Alice" should defeat "Bob" in the election.
Unfortunately, as Condorcet proved, this rule can be self-contradictory (intransitive), because there can be a rock-paper-scissors cycle with three or more candidates defeating each other in a circle. Thus, Condorcet proved a weaker form of Arrow's impossibility theorem long before Arrow, under the stronger assumption that a voting system in the two-candidate case will agree with a simple majority vote.
Unlike pluralitarian rules such as ranked-choice runoff (RCV) or first-preference plurality, Condorcet methods avoid the spoiler effect in non-cyclic elections, where candidates can be chosen by majority rule. Political scientists have found such cycles to be fairly rare, likely in the range of a few percent, suggesting they may be of limited practical concern. Spatial voting models also suggest such paradoxes are likely to be infrequent or even non-existent.
Left-right spectrum.
Soon after Arrow published his theorem, Duncan Black showed his own remarkable result, the median voter theorem. The theorem proves that if voters and candidates are arranged on a left-right spectrum, Arrow's conditions are all fully compatible, and all will be met by any rule satisfying Condorcet's majority-rule principle.
More formally, Black's theorem assumes preferences are "single-peaked": a voter's happiness with a candidate goes up and then down as the candidate moves along some spectrum. For example, in a group of friends choosing a volume setting for music, each friend would likely have their own ideal volume; as the volume gets progressively too loud or too quiet, they would be increasingly dissatisfied. If the domain is restricted to profiles where every individual has a single-peaked preference with respect to the linear ordering, then social preferences are acyclic. In this situation, Condorcet methods satisfy a wide variety of highly-desirable properties, including being fully spoilerproof.
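Black's result can be illustrated with a small sketch (hypothetical voter and candidate positions, assuming each voter prefers the candidate nearer to their ideal point): the candidate located at the median voter's ideal point beats every rival in a pairwise majority vote.

```python
voters = [0.1, 0.3, 0.4, 0.6, 0.9]                # ideal points on a left-right axis
candidates = {"L": 0.25, "M": 0.4, "R": 0.85}     # "M" sits at the median ideal point

def pairwise_winner(x, y):
    x_votes = sum(abs(v - candidates[x]) < abs(v - candidates[y]) for v in voters)
    y_votes = sum(abs(v - candidates[y]) < abs(v - candidates[x]) for v in voters)
    return x if x_votes > y_votes else y

print(sorted(voters)[len(voters) // 2])   # 0.4, the median voter's ideal point
print(pairwise_winner("M", "L"))          # M
print(pairwise_winner("M", "R"))          # M -- M is the Condorcet winner
```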
The rule does not fully generalize from the political spectrum to the political compass, a result related to the McKelvey-Schofield Chaos Theorem. However, a well-defined Condorcet winner does exist if the distribution of voters is rotationally symmetric or otherwise has a uniquely-defined median. In realistic cases, where voters' opinions often follow a roughly-normal distribution or can be accurately summarized in one or two dimensions, Condorcet cycles tend to be rare.
Generalized stability theorems.
The Campbell-Kelly theorem shows that Condorcet methods are the most spoiler-resistant class of ranked voting systems: whenever it is possible for some ranked voting system to avoid a spoiler effect, a Condorcet method will do so. In other words, replacing a ranked method with its Condorcet variant (i.e. elect a Condorcet winner if they exist, and otherwise run the method) will sometimes prevent a spoiler effect, but never cause a new one.
In 1977, Ehud Kalai and Eitan Muller gave a full characterization of domain restrictions admitting a nondictatorial and strategyproof social welfare function. These correspond to preferences for which there is a Condorcet winner.
Holliday and Pacuit devised a voting system that provably minimizes the number of candidates who are capable of spoiling an election, albeit at the cost of occasionally failing vote positivity (though at a much lower rate than seen in instant-runoff voting).
Eliminating IIA failures: Rated voting.
As shown above, the proof of Arrow's theorem relies crucially on the assumption of ranked voting, and is not applicable to rated voting systems. As a result, systems like score voting and graduated majority judgment pass independence of irrelevant alternatives. These systems ask voters to rate candidates on a numerical scale (e.g. from 0–10), and then elect the candidate with the highest average (for score voting) or median (graduated majority judgment).
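The contrast with ranked rules can be seen in a minimal sketch (hypothetical ballots): under score voting each candidate's total depends only on that candidate's own ratings, so whether C stands has no effect on the comparison between A and B, provided voters keep their ratings of A and B fixed (an assumption discussed further below).

```python
ballots = [
    {"A": 9, "B": 3, "C": 6},
    {"A": 2, "B": 8, "C": 5},
    {"A": 7, "B": 6, "C": 1},
]

def score_totals(ballots, candidates):
    return {c: sum(b[c] for b in ballots) for c in candidates}

print(score_totals(ballots, ["A", "B", "C"]))  # {'A': 18, 'B': 17, 'C': 12}
print(score_totals(ballots, ["A", "B"]))       # A and B totals unchanged: {'A': 18, 'B': 17}
```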
While Arrow's theorem does not apply to graded systems, Gibbard's theorem still does: no voting game can be straightforward (i.e. have a single, clear, always-best strategy), so the informal dictum that "no voting system is perfect" still has some mathematical basis.
Meaningfulness of cardinal information.
Arrow's framework assumed individual and social preferences are orderings or rankings, i.e. statements about which outcomes are better or worse than others. Taking inspiration from the strict behaviorism popular in psychology, some philosophers and economists rejected the idea of comparing internal human experiences of well-being. Such philosophers claimed it was impossible to compare the strength of preferences across people who disagreed; Sen gives as an example that it would be impossible to know whether the Great Fire of Rome was good or bad, because despite killing thousands of Romans, it had the positive effect of letting Nero expand his palace.
Arrow originally agreed with these positions and rejected cardinal utility, leading him to focus his theorem on preference rankings; his goal in adding the independence axiom was, in part, to prevent the social choice function from "sneaking in" cardinal information by attempting to infer it from the rankings. As a result, Arrow initially interpreted his theorem as a kind of mathematical proof of nihilism or egoism. However, he later reversed this opinion, admitting cardinal methods can provide useful information that allows them to evade his theorem. Similarly, Amartya Sen first claimed interpersonal comparability is necessary for IIA, but later came to argue in favor of cardinal methods for assessing social choice, arguing it would only require "rather limited levels of partial comparability" to hold in practice.
Balinski and Laraki dispute the necessity of any genuinely cardinal information for rated voting methods to pass IIA. They argue the availability of a common language with verbal grades is sufficient for IIA by allowing voters to give consistent responses to questions about candidate quality. In other words, they argue most voters will not change their beliefs about whether a candidate is "good", "bad", or "neutral" simply because another candidate joins or drops out of a race.
John Harsanyi noted Arrow's theorem could be considered a weaker version of his own theorem and other utility representation theorems like the VNM theorem, which generally show that rational behavior requires consistent cardinal utilities. Harsanyi and Vickrey each independently derived results showing such interpersonal comparisons of utility could be rigorously defined as individual preferences over the lottery of birth.
Other scholars have noted that interpersonal comparisons of utility are not unique to cardinal voting, but are instead a necessity of any non-dictatorial (or non-egoist) choice procedure, with cardinal voting rules simply making these comparisons explicit. David Pearce argued that Arrow's original interpretation of the theorem as a mathematical proof of nihilism or egoism was effectively circular reasoning, and Hildreth pointed out that "any procedure that extends the partial ordering of [Pareto efficiency] must involve interpersonal comparisons of utility." These observations have led to the rise of implicit utilitarian voting, which identifies ranked procedures with approximations of the utilitarian rule (i.e. score voting), making such implicit comparisons more explicit.
In psychometrics, there is a near-universal scientific consensus for the usefulness and meaningfulness of self-reported ratings, which empirically show greater validity and reliability than rankings in measuring human opinions. Research has consistently found cardinal rating scales (e.g. Likert scales) provide more information than rankings alone. Kaiser and Oswald conducted an empirical review of four decades of research including over 700,000 participants who provided self-reported ratings of utility, with the goal of identifying whether people "have a sense of an actual underlying scale for their innermost feelings". They found responses to these questions were consistent with all expectations of a well-specified quantitative measure. Furthermore, such ratings were highly predictive of important decisions (such as international migration and divorce), with larger effect sizes than standard socioeconomic predictors like income and demographics. Ultimately, the authors concluded "this feelings-to-actions relationship takes a generic form, is consistently replicable, and is fairly close to linear in structure. Therefore, it seems that human beings can successfully operationalize an integer scale for feelings".
Nonstandard spoilers.
Behavioral economists have shown that individual irrationality involves violations of IIA (e.g. with decoy effects), suggesting human behavior might cause IIA failures even if the voting method itself does not. However, past research has found that such effects are typically small, and these psychological spoiler effects can occur regardless of the electoral system in use. Balinski and Laraki discuss techniques of ballot design based on psychometrics that minimize these psychological effects, such as asking voters to give each candidate a verbal grade (e.g. "bad", "neutral", "good", "excellent") and issuing instructions to voters that refer to their ballots as judgments of individual candidates.
Esoteric solutions.
In addition to the above practical resolutions, there exist unusual (less-than-practical) situations where Arrow's requirement of IIA can be satisfied.
Supermajority rules.
Supermajority rules can avoid Arrow's theorem at the cost of being poorly-decisive (i.e. frequently failing to return a result). In this case, a threshold that requires a formula_11 majority for ordering 3 outcomes, formula_12 for 4, etc. does not produce voting paradoxes.
In spatial (n-dimensional ideology) models of voting, this can be relaxed to require only formula_13 (roughly 64%) of the vote to prevent cycles, so long as the distribution of voters is well-behaved (quasiconcave). These results provide some justification for the common requirement of a two-thirds majority for constitutional amendments, which is sufficient to prevent cyclic preferences in most situations.
Uncountable voter sets.
Fishburn shows all of Arrow's conditions can be satisfied for uncountable sets of voters given the axiom of choice; however, Kirman and Sondermann showed this requires disenfranchising almost all members of a society (eligible voters form a set of measure 0), leading them to refer to such societies as "invisible dictatorships".
Common misconceptions.
Arrow's theorem is not related to strategic voting, which does not appear in his framework, though the theorem does have important implications for strategic voting (being used as a lemma to prove Gibbard's theorem). The Arrovian framework of social welfare assumes all voter preferences are known and the only issue is in aggregating them.
Monotonicity (sometimes called non-perversity) is not a condition of Arrow's theorem (contrary to a mistake by Arrow himself, who included the axiom in his original statement of the theorem but did not use it). Dropping the assumption does not allow for constructing a social welfare function that meets his other conditions.
Contrary to a common misconception, Arrow's theorem deals with the limited class of ranked-choice voting systems, rather than voting systems as a whole. | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "A \\succ B"
},
{
"math_id": 4,
"text": "A \\succeq B"
},
{
"math_id": 5,
"text": "B \\succeq C"
},
{
"math_id": 6,
"text": "A \\succeq C"
},
{
"math_id": 7,
"text": "B \\succ A"
},
{
"math_id": 8,
"text": "A \\succ C"
},
{
"math_id": 9,
"text": "\\mathbf a \\succ \\mathbf b"
},
{
"math_id": 10,
"text": " \\mathrm{F} : \\Pi(A)^N \\to \\Pi(A) "
},
{
"math_id": 11,
"text": "2/3"
},
{
"math_id": 12,
"text": "3/4"
},
{
"math_id": 13,
"text": "1-e^{-1}"
}
] | https://en.wikipedia.org/wiki?curid=89425 |
8947106 | Finite von Neumann algebra | In mathematics, a finite von Neumann algebra is a von Neumann algebra in which every isometry is a unitary. In other words, for an operator "V" in a finite von Neumann algebra if formula_0, then formula_1. In terms of the comparison theory of projections, the identity operator is not (Murray-von Neumann) equivalent to any proper subprojection in the von Neumann algebra.
Properties.
Let formula_2 denote a finite von Neumann algebra with center formula_3. One of the fundamental characterizing properties of finite von Neumann algebras is the existence of a center-valued trace. A von Neumann algebra formula_2 is finite if and only if there exists a normal positive bounded map formula_4 with the following properties: formula_5; if formula_6 and formula_7, then formula_8; formula_9 for formula_10; and formula_11 for formula_10 and formula_12.
Examples.
Abelian von Neumann algebras and factors of type formula_15 are examples of finite von Neumann algebras; the finite-dimensional case is described below.
Finite-dimensional von Neumann algebras.
The finite-dimensional von Neumann algebras can be characterized using Wedderburn's theory of semisimple algebras.
Let C"n" × "n" be the "n" × "n" matrices with complex entries. A von Neumann algebra M is a self-adjoint subalgebra of C"n" × "n" that contains the identity operator "I" in C"n" × "n".
Every such M as defined above is a semisimple algebra, i.e. it contains no nilpotent ideals. Suppose "M" ≠ 0 lies in a nilpotent ideal of M. Since "M*" ∈ M by assumption, the positive semidefinite matrix "M*M" lies in that nilpotent ideal. This implies ("M*M")"k" = 0 for some "k". So "M*M" = 0, i.e. "M" = 0.
The center of a von Neumann algebra M will be denoted by "Z"(M). Since M is self-adjoint, "Z"(M) is itself a (commutative) von Neumann algebra. A von Neumann algebra N is called a factor if "Z"(N) is one-dimensional, that is, "Z"(N) consists of multiples of the identity "I".
Theorem. Every finite-dimensional von Neumann algebra M is a direct sum of "m" factors, where "m" is the dimension of "Z"(M).
Proof: By Wedderburn's theory of semisimple algebras, "Z"(M) contains a finite orthogonal set of idempotents (projections) {"Pi"} such that "PiPj" = 0 for "i" ≠ "j", Σ "Pi" = "I", and
formula_13
where each "Z"(M)"Pi" is a commutative simple algebra. Every finite-dimensional complex simple algebra is isomorphic to the full matrix algebra C"k" × "k" for some "k". But "Z"(M)"Pi" is commutative, therefore one-dimensional.
The projections "Pi" "diagonalize" M in a natural way. For "M" ∈ M, "M" can be uniquely decomposed into "M" = Σ "MPi". Therefore,
formula_14
One can see that "Z"(M"Pi") = "Z"(M)"Pi". So "Z"(M"Pi") is one-dimensional and each M"Pi" is a factor. This proves the claim.
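The theorem can also be checked numerically on a small example. The sketch below (an illustration, not part of the proof) takes M to be the block-diagonal copy of C"2" × "2" ⊕ C"3" × "3" inside C"5" × "5", so that M is a direct sum of two factors, and verifies that its center, computed as the set of elements of M commuting with a basis of M, is two-dimensional.
import numpy as np
def block_basis(blocks):
    # Matrix units of each diagonal block: a linear basis of M inside C^(n x n).
    n = sum(blocks)
    basis, offset = [], 0
    for b in blocks:
        for i in range(b):
            for j in range(b):
                E = np.zeros((n, n))
                E[offset + i, offset + j] = 1.0
                basis.append(E)
        offset += b
    return basis
B = block_basis([2, 3])          # dim M = 2*2 + 3*3 = 13
# Write X = sum_k c_k B_k and impose the commutation relations [X, B_j] = 0;
# the null space of the resulting linear system is the center Z(M).
rows = [np.stack([(Bk @ Bj - Bj @ Bk).ravel() for Bk in B], axis=1) for Bj in B]
A = np.concatenate(rows, axis=0)
print("dim Z(M) =", len(B) - np.linalg.matrix_rank(A))   # prints 2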
For general von Neumann algebras, the direct sum is replaced by the direct integral. The above is a special case of the central decomposition of von Neumann algebras. | [
{
"math_id": 0,
"text": "V^*V = I"
},
{
"math_id": 1,
"text": "VV^* = I"
},
{
"math_id": 2,
"text": "\\mathcal{M}"
},
{
"math_id": 3,
"text": "\\mathcal{Z}"
},
{
"math_id": 4,
"text": "\\tau : \\mathcal{M} \\to \\mathcal{Z}"
},
{
"math_id": 5,
"text": "\\tau(AB) = \\tau(BA), A, B \\in \\mathcal{M}"
},
{
"math_id": 6,
"text": "A \\ge 0"
},
{
"math_id": 7,
"text": "\\tau(A) = 0"
},
{
"math_id": 8,
"text": "A = 0"
},
{
"math_id": 9,
"text": "\\tau(C) = C"
},
{
"math_id": 10,
"text": "C \\in \\mathcal{Z}"
},
{
"math_id": 11,
"text": "\\tau(CA) = C\\tau(A)"
},
{
"math_id": 12,
"text": "A \\in \\mathcal{M}"
},
{
"math_id": 13,
"text": "\nZ(\\mathbf M) = \\bigoplus _i Z(\\mathbf M) P_i "
},
{
"math_id": 14,
"text": "{\\mathbf M} = \\bigoplus_i {\\mathbf M} P_i ."
},
{
"math_id": 15,
"text": "II_1"
}
] | https://en.wikipedia.org/wiki?curid=8947106 |
894774 | Emission theory (relativity) | Emission theory, also called emitter theory or ballistic theory of light, was a competing theory for the special theory of relativity, explaining the results of the Michelson–Morley experiment of 1887. Emission theories obey the principle of relativity by having no preferred frame for light transmission, but say that light is emitted at speed "c" relative to its source instead of applying the invariance postulate. Thus, emitter theory combines electrodynamics and mechanics with a simple Newtonian theory. Although there are still proponents of this theory outside the scientific mainstream, this theory is considered to be conclusively discredited by most scientists.
History.
The name most often associated with emission theory is Isaac Newton. In his "corpuscular theory" Newton visualized light "corpuscles" being thrown off from hot bodies at a nominal speed of "c" with respect to the emitting object and obeying the usual laws of Newtonian mechanics; we would then expect light to move towards us with a speed offset by the speed of the distant emitter ("c" ± "v").
In the 20th century, special relativity was created by Albert Einstein to solve the apparent conflict between electrodynamics and the principle of relativity. The theory's geometrical simplicity was persuasive, and the majority of scientists accepted relativity by 1911. However, a few scientists rejected the second basic postulate of relativity: the constancy of the speed of light in all inertial frames. So different types of emission theories were proposed where the speed of light depends on the velocity of the source, and the Galilean transformation is used instead of the Lorentz transformation. All of them can explain the negative outcome of the Michelson–Morley experiment, since the speed of light is constant with respect to the interferometer in all frames of reference. Several such theories were proposed, the best known being that of Walther Ritz in 1908.
Albert Einstein is supposed to have worked on his own emission theory before abandoning it in favor of his special theory of relativity. Many years later R.S. Shankland reports Einstein as saying that Ritz's theory had been "very bad" in places and that he himself had eventually discarded emission theory because he could think of no form of differential equations that described it, since it leads to the waves of light becoming "all mixed up".
Refutations of emission theory.
The following scheme was introduced by de Sitter to test emission theories:
formula_0
where "c" is the speed of light, "v" that of the source, "c' " the resultant speed of light, and "k" a constant denoting the extent of source dependence which can attain values between 0 and 1. According to special relativity and the stationary aether, "k"=0, while emission theories allow values up to 1. Numerous terrestrial experiments have been performed, over very short distances, where no "light dragging" or extinction effects could come into play, and again the results confirm that light speed is independent of the speed of the source, conclusively ruling out emission theories.
Astronomical sources.
In 1910 Daniel Frost Comstock and in 1913 Willem de Sitter wrote that for the case of a double-star system seen edge-on, light from the approaching star might be expected to travel faster than light from its receding companion, and overtake it. If the distance was great enough for an approaching star's "fast" signal to catch up with and overtake the "slow" light that it had emitted earlier when it was receding, then the image of the star system should appear completely scrambled. De Sitter argued that none of the star systems he had studied showed such extreme optical effects, and this was considered the death knell for Ritzian theory and for emission theory in general, with formula_1.
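A rough order-of-magnitude estimate shows how strong this constraint is. The orbital parameters below are assumed purely for illustration (an orbital speed of 100 km/s and a 10-day period, not data for any particular binary): if light kept the speed "c" ± "v" of its source, light emitted during approach would overtake light emitted half an orbit earlier at the distance computed here.
# "Slow" light emitted while receding travels at c - v; "fast" light emitted
# half a period T/2 later, while approaching, travels at c + v and catches up
# when  L/(c - v) = T/2 + L/(c + v),  i.e.  L = T*(c**2 - v**2)/(4*v).
c = 2.998e8             # speed of light, m/s
v = 1.0e5               # assumed orbital speed: 100 km/s
T = 10 * 86400.0        # assumed orbital period: 10 days, in seconds
L = T * (c**2 - v**2) / (4 * v)
light_year = 9.461e15   # metres
print(f"overtaking distance: {L:.2e} m  =  {L / light_year:.1f} light-years")
# About 20 light-years: nearly every real binary is farther away, so its image
# would appear scrambled if light speed depended fully on the source (k = 1).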
The effect of extinction on de Sitter's experiment has been considered in detail by Fox, and it arguably undermines the cogency of de Sitter type evidence based on binary stars. However, similar observations have been made more recently in the x-ray spectrum by Brecher (1977), for which the extinction distance is long enough that it should not affect the results. The observations confirm that the speed of light is independent of the speed of the source, with formula_2.
In 1924, Hans Thirring argued that an atom accelerated by thermal collisions in the Sun during the emission process would emit light whose speed differs between the start and the end of the emitted wave train. One end of the light ray would then overtake the preceding parts, and consequently the distance between the ends would be elongated by up to 500 km before reaching Earth, so that the mere existence of sharp spectral lines in the Sun's radiation disproves the ballistic model.
Terrestrial sources.
Such experiments include that of Sadeh (1963) who used a time-of-flight technique to measure velocity differences of photons traveling in opposite directions, which were produced by positron annihilation. Another experiment was conducted by Alväger et al. (1963), who compared the time of flight of gamma rays from moving and resting sources. Both experiments found no difference, in accordance with relativity.
Filippas and Fox (1964) did not consider Sadeh (1963) and Alväger (1963) to have sufficiently controlled for the effects of extinction. So they conducted an experiment using a setup specifically designed to account for extinction. Data collected from various detector-target distances were consistent with there being no dependence of the speed of light on the velocity of the source, and were inconsistent with modeled behavior assuming c ± v both with and without extinction.
Continuing their previous investigations, Alväger et al. (1964) observed π0-mesons travelling at 99.9% of the speed of light, which decay into photons. The experiment showed that the photons didn't attain the velocity of their sources and still traveled at the speed of light, with formula_3. The investigation of the media which were crossed by the photons showed that the extinction shift was not sufficient to distort the result significantly.
Also measurements of neutrino speed have been conducted. Mesons travelling nearly at light speed were used as sources. Since neutrinos only participate in the electroweak interaction, extinction plays no role. Terrestrial measurements provided upper limits of formula_4.
Interferometry.
The Sagnac effect demonstrates that one beam on a rotating platform covers less distance than the other beam, which creates the shift in the interference pattern. Georges Sagnac's original experiment has been shown to suffer extinction effects, but since then, the Sagnac effect has also been shown to occur in vacuum, where extinction plays no role.
The predictions of Ritz's version of emission theory were consistent with almost all terrestrial interferometric tests save those involving the propagation of light in moving media, and Ritz did not consider the difficulties presented by tests such as the Fizeau experiment to be insurmountable. Tolman, however, noted that a Michelson–Morley experiment using an extraterrestrial light source could provide a decisive test of the Ritz hypothesis. In 1924, Rudolf Tomaschek performed a modified Michelson–Morley experiment using starlight, while Dayton Miller used sunlight. Both experiments were inconsistent with the Ritz hypothesis.
Babcock and Bergman (1964) placed rotating glass plates between the mirrors of a common-path interferometer set up in a static Sagnac configuration. If the glass plates behave as new sources of light so that the total speed of light emerging from their surfaces is "c" + "v", a shift in the interference pattern would be expected. However, there was no such effect which again confirms special relativity, and which again demonstrates the source independence of light speed. This experiment was executed in vacuum, thus extinction effects should play no role.
Albert Abraham Michelson (1913) and Quirino Majorana (1918/9) conducted interferometer experiments with resting sources and moving mirrors (and vice versa), and showed that there is no source dependence of light speed in air. Michelson's arrangement was designed to distinguish between three possible interactions of moving mirrors with light: (1) "the light corpuscles are reflected as projectiles from an elastic wall", (2) "the mirror surface acts as a new source", (3) "the velocity of light is independent of the velocity of the source". His results were consistent with source independence of light speed. Majorana analyzed the light from moving sources and mirrors using an unequal arm Michelson interferometer that was extremely sensitive to wavelength changes. Emission theory asserts that Doppler shifting of light from a moving source represents a frequency shift with no shift in wavelength. Instead, Majorana detected wavelength changes inconsistent with emission theory.
Beckmann and Mandics (1965) repeated the Michelson (1913) and Majorana (1918) moving mirror experiments in high vacuum, finding "k" to be less than 0.09. Although the vacuum employed was insufficient to definitively rule out extinction as the reason for their negative results, it was sufficient to make extinction highly unlikely. Light from the moving mirror passed through a Lloyd interferometer, part of the beam traveling a direct path to the photographic film, part reflecting off the Lloyd mirror. The experiment compared the speed of light hypothetically traveling at "c + v" from the moving mirrors, versus reflected light hypothetically traveling at "c" from the Lloyd mirror.
Other refutations.
Emission theories use the Galilean transformation, according to which time coordinates are invariant when changing frames ("absolute time"). Thus the Ives–Stilwell experiment, which confirms relativistic time dilation, also refutes the emission theory of light. As shown by Howard Percy Robertson, the complete Lorentz transformation can be derived when the Ives–Stilwell experiment is considered together with the Michelson–Morley experiment and the Kennedy–Thorndike experiment.
Furthermore, quantum electrodynamics places the propagation of light in an entirely different, but still relativistic, context, which is completely incompatible with any theory that postulates a speed of light that is affected by the speed of the source. | [
{
"math_id": 0,
"text": "c'=c\\pm kv\\,"
},
{
"math_id": 1,
"text": "k < 2\\times10^{-3}"
},
{
"math_id": 2,
"text": "k < 2\\times10^{-9}"
},
{
"math_id": 3,
"text": "k =(-3\\pm13)\\times10^{-5}"
},
{
"math_id": 4,
"text": "k\\leq10^{-6}"
}
] | https://en.wikipedia.org/wiki?curid=894774 |
894779 | Todd–Coxeter algorithm | In group theory, the Todd–Coxeter algorithm, created by J. A. Todd and H. S. M. Coxeter in 1936, is an algorithm for solving the coset enumeration problem. Given a presentation of a group "G" by generators and relations and a subgroup "H" of "G", the algorithm enumerates the cosets of "H" in "G" and describes the permutation representation of "G" on the space of the cosets (given by the left multiplication action). If the order of a group "G" is relatively small and the subgroup "H" is known to be uncomplicated (for example, a cyclic group), then the algorithm can be carried out by hand and gives a reasonable description of the group "G". Using their algorithm, Coxeter and Todd showed that certain systems of relations between generators of known groups are complete, i.e. constitute systems of defining relations.
The Todd–Coxeter algorithm can be applied to infinite groups and is known to terminate in a finite number of steps, provided that the index of "H" in "G" is finite. On the other hand, for a general pair consisting of a group presentation and a subgroup, its running time is not bounded by any computable function of the index of the subgroup and the size of the input data.
Description of the algorithm.
One implementation of the algorithm proceeds as follows. Suppose that formula_0, where formula_1 is a set of generators and formula_2 is a set of relations and denote by formula_3 the set of generators formula_1 and their inverses. Let formula_4 where the formula_5 are words of elements of formula_3. There are three types of tables that will be used: a coset table, a relation table for each relation in formula_2, and a subgroup table for each generator formula_5 of formula_6. Information is gradually added to these tables, and once they are filled in, all cosets have been enumerated and the algorithm terminates.
The coset table is used to store the relationships between the known cosets when multiplying by a generator. It has rows representing cosets of formula_6 and a column for each element of formula_3. Let formula_7 denote the coset of the "i"th row of the coset table, and let formula_8 denote generator of the "j"th column. The entry of the coset table in row "i", column "j" is defined to be (if known) "k", where "k" is such that formula_9.
The relation tables are used to detect when some of the cosets we have found are actually equivalent. One relation table for each relation in formula_2 is maintained. Let formula_10 be a relation in formula_2, where formula_11. The relation table has rows representing the cosets of formula_6, as in the coset table. It has "t" columns, and the entry in the "i"th row and "j"th column is defined to be (if known) "k", where formula_12. In particular, the formula_13'th entry is initially "i", since formula_14.
Finally, the subgroup tables are similar to the relation tables, except that they keep track of possible relations of the generators of formula_6. For each generator formula_15 of formula_6, with formula_11, we create a subgroup table. It has only one row, corresponding to the coset of formula_6 itself. It has "t" columns, and the entry in the "j"th column is defined (if known) to be "k", where formula_16.
When a row of a relation or subgroup table is completed, a new piece of information formula_17, formula_18, is found. This is known as a "deduction". From the deduction, we may be able to fill in additional entries of the relation and subgroup tables, resulting in possible additional deductions. We can fill in the entries of the coset table corresponding to the equations formula_17 and formula_19.
However, when filling in the coset table, it is possible that we may already have an entry for the equation, but the entry has a different value. In this case, we have discovered that two of our cosets are actually the same, known as a "coincidence". Suppose formula_20, with formula_21. We replace all instances of "j" in the tables with "i". Then, we fill in all possible entries of the tables, possibly leading to more deductions and coincidences.
If there are empty entries in the table after all deductions and coincidences have been taken care of, add a new coset to the tables and repeat the process. We make sure that when adding cosets, if "Hx" is a known coset, then "Hxg" will be added at some point for all formula_18. (This is needed to guarantee that the algorithm will terminate provided formula_22 is finite.)
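For orientation, the sketch below shows the kind of completed coset table the procedure converges to, for the illustrative choice of the presentation with generators x, y and relations x^3 = y^2 = (xy)^2 = 1 (a group isomorphic to the symmetric group on three letters) and the subgroup H generated by y. It is computed here by brute force from permutations rather than by the Todd–Coxeter steps themselves, and only the columns for x and y (not their inverses) are shown.
# Illustration only: the completed coset table for a small example, built by
# brute force from permutations rather than by the Todd-Coxeter procedure.
# Example choices: G generated by x = (0 1 2) and y = (0 1); H = <y>.
def mul(p, q):                       # apply permutation p, then q
    return tuple(q[p[i]] for i in range(len(p)))
def closure(gens):                   # all elements generated by the permutations
    identity = tuple(range(len(gens[0])))
    elems, frontier = {identity}, [identity]
    while frontier:
        new = []
        for g in frontier:
            for s in gens:
                h = mul(g, s)
                if h not in elems:
                    elems.add(h)
                    new.append(h)
        frontier = new
    return elems
x, y = (1, 2, 0), (1, 0, 2)          # x has order 3, y has order 2
G, H = closure([x, y]), closure([y])
cosets = []                          # right cosets Hg, each stored as a frozenset
for g in G:
    c = frozenset(mul(h, g) for h in H)
    if c not in cosets:
        cosets.append(c)
print("index of H in G:", len(cosets))     # 3
for i, c in enumerate(cosets):             # entry: (coset i) * generator = coset j
    row = {name: cosets.index(frozenset(mul(e, s) for e in c))
           for name, s in (("x", x), ("y", y))}
    print(i, row)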
When all the tables are filled, the algorithm terminates. We then have all needed information on the action of formula_23 on the cosets of formula_6. | [
{
"math_id": 0,
"text": " G = \\langle X \\mid R \\rangle "
},
{
"math_id": 1,
"text": " X "
},
{
"math_id": 2,
"text": " R "
},
{
"math_id": 3,
"text": " X' "
},
{
"math_id": 4,
"text": " H = \\langle h_1, h_2, \\ldots, h_s \\rangle "
},
{
"math_id": 5,
"text": " h_i "
},
{
"math_id": 6,
"text": " H "
},
{
"math_id": 7,
"text": " C_i "
},
{
"math_id": 8,
"text": " g_j \\in X' "
},
{
"math_id": 9,
"text": " C_k = C_ig_j "
},
{
"math_id": 10,
"text": " 1 = g_{n_1} g_{n_2} \\cdots g_{n_t} "
},
{
"math_id": 11,
"text": " g_{n_i} \\in X' "
},
{
"math_id": 12,
"text": " C_k = C_i g_{n_1} g_{n_2} \\cdots g_{n_j} "
},
{
"math_id": 13,
"text": "(i,t)"
},
{
"math_id": 14,
"text": " g_{n_1} g_{n_2} \\cdots g_{n_t} = 1"
},
{
"math_id": 15,
"text": " h_n = g_{n_1} g_{n_2} \\cdots g_{n_t} "
},
{
"math_id": 16,
"text": " C_k = H g_{n_1} g_{n_2} \\cdots g_{n_j} "
},
{
"math_id": 17,
"text": " C_i = C_j g "
},
{
"math_id": 18,
"text": " g \\in X' "
},
{
"math_id": 19,
"text": " C_j = C_i g^{-1} "
},
{
"math_id": 20,
"text": " C_i = C_j "
},
{
"math_id": 21,
"text": " i < j "
},
{
"math_id": 22,
"text": " |G : H| "
},
{
"math_id": 23,
"text": " G "
}
] | https://en.wikipedia.org/wiki?curid=894779 |
89480 | Flywheel | Mechanical device for storing rotational energy
A flywheel is a mechanical device that uses the conservation of angular momentum to store rotational energy, a form of kinetic energy proportional to the product of its moment of inertia and the square of its rotational speed. In particular, if the flywheel's moment of inertia is constant (i.e., a flywheel with fixed mass and geometry revolving about a fixed axis), then the stored rotational energy is directly proportional to the square of its rotational speed.
Since a flywheel serves to store mechanical energy for later use, it is natural to consider it as a kinetic energy analogue of an electrical capacitor. Once suitably abstracted, this shared principle of energy storage is described in the generalized concept of an accumulator. As with other types of accumulators, a flywheel inherently smooths sufficiently small deviations in the power output of a system, thereby effectively playing the role of a low-pass filter with respect to the mechanical velocity (angular, or otherwise) of the system. More precisely, a flywheel's stored energy will donate a surge in power output upon a drop in power input and will conversely absorb any excess power input (system-generated power) in the form of rotational energy.
Common uses of a flywheel include smoothing the power output of reciprocating engines, energy storage, delivering energy at rates higher than the source can supply, and controlling the orientation of a mechanical system using gyroscopes and reaction wheels. Flywheels are typically made of steel and rotate on conventional bearings; these are generally limited to a maximum revolution rate of a few thousand RPM. High energy density flywheels can be made of carbon fiber composites and employ magnetic bearings, enabling them to revolve at speeds up to 60,000 RPM (1 kHz).
History.
The principle of the flywheel is found in the Neolithic spindle and the potter's wheel, as well as circular sharpening stones in antiquity. In the early 11th century, Ibn Bassal pioneered the use of flywheel in noria and saqiyah. The use of the flywheel as a general mechanical device to equalize the speed of rotation is, according to the American medievalist Lynn White, recorded in the "De diversibus artibus" (On various arts) of the German artisan Theophilus Presbyter (ca. 1070–1125) who records applying the device in several of his machines.
In the Industrial Revolution, James Watt contributed to the development of the flywheel in the steam engine, and his contemporary James Pickard used a flywheel combined with a crank to transform reciprocating motion into rotary motion.
Physics.
The kinetic energy (or more specifically rotational energy) stored by the flywheel's rotor can be calculated by formula_0, where ω is the angular velocity and formula_1 is the moment of inertia of the flywheel about its axis of symmetry. The moment of inertia is a measure of resistance to torque applied on a spinning object (i.e. the higher the moment of inertia, the slower it will accelerate when a given torque is applied). The moment of inertia can be calculated from the mass (formula_2) and radius (formula_3). For a solid cylinder it is formula_4, for a thin-walled empty cylinder it is approximately formula_5, and for a thick-walled empty cylinder with constant density it is formula_6.
For a given flywheel design, the kinetic energy is proportional to the ratio of the hoop stress to the material density and to the mass. The specific tensile strength of a flywheel can be defined as formula_7. The flywheel material with the highest specific tensile strength will yield the highest energy storage per unit mass. This is one reason why carbon fiber is a material of interest. For a given design the stored energy is proportional to the hoop stress and the volume.
An electric motor-powered flywheel is common in practice. The output power of the electric motor is approximately equal to the output power of the flywheel. It can be calculated by formula_8, where formula_9 is the rotor winding voltage, formula_10 is the stator voltage, formula_11 is the angle between the two voltages, and the quantity in the denominator is the synchronous reactance. Increasing amounts of rotational energy can be stored in the flywheel until the rotor shatters. This happens when the hoop stress within the rotor exceeds the ultimate tensile strength of the rotor material. Tensile stress can be calculated by formula_12, where formula_13 is the density of the cylinder, formula_14 is the radius of the cylinder, and formula_15 is the angular velocity of the cylinder.
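A short worked example helps fix orders of magnitude. The numbers below are assumed purely for illustration (they do not describe any particular flywheel): a 100 kg solid steel disc of 0.5 m radius spinning at 5000 RPM, evaluated with the formulas above.
import math
m, r = 100.0, 0.5                    # assumed mass [kg] and radius [m]
rpm = 5000.0                         # assumed spin rate
rho_steel = 7800.0                   # density of steel [kg/m^3]
omega = rpm * 2 * math.pi / 60       # angular velocity [rad/s]
I = 0.5 * m * r**2                   # moment of inertia of a solid cylinder
E = 0.5 * I * omega**2               # stored rotational energy [J]
sigma = rho_steel * r**2 * omega**2  # thin-rim hoop-stress estimate [Pa]
print(f"I = {I:.1f} kg m^2, E = {E/1e6:.2f} MJ = {E/3.6e6:.2f} kWh")
print(f"hoop stress ~ {sigma/1e6:.0f} MPa")   # ~535 MPa, close to the strength of ordinary steel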
Design.
A rimmed flywheel has a rim, a hub, and spokes. The flywheel's moment of inertia can be analysed more easily by applying various simplifications. One method is to assume the spokes, shaft and hub have zero moments of inertia, so that the flywheel's moment of inertia comes from the rim alone. Another is to estimate the combined moment of inertia of the spokes, hub and shaft as a fraction of the flywheel's total moment of inertia, with the majority contributed by the rim, so that formula_16. For example, if the moments of inertia of hub, spokes and shaft are deemed negligible, and the rim's thickness is very small compared to its mean radius (formula_17), the radius of rotation of the rim is equal to its mean radius and thus formula_18.
A shaftless flywheel eliminates the annulus holes, shaft or hub. It has higher energy density than conventional design but requires a specialized magnetic bearing and control system. The specific energy of a flywheel is determined by formula_19, in which formula_20 is the shape factor, formula_21 the material's tensile strength and formula_22 the density. While a typical flywheel has a shape factor of 0.3, the shaftless flywheel has a shape factor close to 0.6, out of a theoretical limit of about 1.
A superflywheel consists of a solid core (hub) and multiple thin layers of high-strength flexible materials (such as special steels, carbon fiber composites, glass fiber, or graphene) wound around it. Compared to conventional flywheels, superflywheels can store more energy and are safer to operate. In case of failure, a superflywheel does not explode or burst into large shards like a regular flywheel, but instead splits into layers. The separated layers then slow a superflywheel down by sliding against the inner walls of the enclosure, thus preventing any further destruction. Although the exact value of energy density of a superflywheel would depend on the material used, it could theoretically be as high as 1200 Wh (4.4 MJ) per kg of mass for graphene superflywheels. The first superflywheel was patented in 1964 by the Soviet-Russian scientist Nurbei Guilia.
Materials.
Flywheels are made from many different materials; the application determines the choice of material. Small flywheels made of lead are found in children's toys. Cast iron flywheels are used in old steam engines. Flywheels used in car engines are made of cast or nodular iron, steel or aluminum. Flywheels made from high-strength steel or composites have been proposed for use in vehicle energy storage and braking systems.
The efficiency of a flywheel is determined by the maximum amount of energy it can store per unit weight. As the flywheel's rotational speed or angular velocity is increased, the stored energy increases; however, the stresses also increase. If the hoop stress surpasses the tensile strength of the material, the flywheel will break apart. Thus, the tensile strength limits the amount of energy that a flywheel can store.
In this context, using lead for a flywheel in a child's toy is not efficient; however, the flywheel velocity never approaches its burst velocity because the limit in this case is the pulling-power of the child. In other applications, such as an automobile, the flywheel operates at a specified angular velocity and is constrained by the space it must fit in, so the goal is to maximize the stored energy per unit volume. The material selection therefore depends on the application.
Applications.
Flywheels are often used to provide continuous power output in systems where the energy source is not continuous. For example, a flywheel is used to smooth the fast angular velocity fluctuations of the crankshaft in a reciprocating engine. In this case, a crankshaft flywheel stores energy when torque is exerted on it by a firing piston and then returns that energy to the piston to compress a fresh charge of air and fuel. Another example is the friction motor which powers devices such as toy cars. In unstressed and inexpensive cases, to save on cost, the bulk of the mass of the flywheel is toward the rim of the wheel. Pushing the mass away from the axis of rotation heightens rotational inertia for a given total mass.
A flywheel may also be used to supply intermittent pulses of energy at power levels that exceed the abilities of its energy source. This is achieved by accumulating energy in the flywheel over a period of time, at a rate that is compatible with the energy source, and then releasing energy at a much higher rate over a relatively short time when it is needed. For example, flywheels are used in power hammers and riveting machines.
Flywheels can be used to control direction and oppose unwanted motions. Flywheels in this context have a wide range of applications: gyroscopes for instrumentation, ship stability, satellite stabilization (reaction wheel), keeping a spinning toy in motion (friction motor), and stabilizing magnetically levitated objects (spin-stabilized magnetic levitation).
Flywheels may also be used as an electric compensator, like a synchronous compensator, that can either produce or sink reactive power without affecting the real power. The purposes of that application are to improve the power factor of the system or to adjust the grid voltage. Typically, the flywheels used in this field are similar in structure and installation to the synchronous motor (but it is called a synchronous compensator or synchronous condenser in this context). There are also some other kinds of compensator using flywheels, like the single-phase induction machine. The basic idea is the same: the flywheel is controlled to spin at exactly the frequency to be compensated. For a synchronous compensator, the rotor and stator voltages must also be kept in phase, which is the same as keeping the rotor's magnetic field in phase with the total magnetic field (in the rotating reference frame).
References. | [
{
"math_id": 0,
"text": "\\frac{1}{2} I \\omega^2"
},
{
"math_id": 1,
"text": " I "
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "\\frac{1}{2} mr^2"
},
{
"math_id": 5,
"text": "m r^2"
},
{
"math_id": 6,
"text": "\\frac{1}{2} m({r_\\mathrm{external}}^2 + {r_\\mathrm{internal}}^2) "
},
{
"math_id": 7,
"text": "\\frac{\\sigma_t}{\\rho} "
},
{
"math_id": 8,
"text": "(V_i)(V_t)\\left ( \\frac{\\sin(\\delta)}{X_S}\\right )"
},
{
"math_id": 9,
"text": "V_i"
},
{
"math_id": 10,
"text": "V_t"
},
{
"math_id": 11,
"text": "\\delta"
},
{
"math_id": 12,
"text": " \\rho r^2 \\omega^2 "
},
{
"math_id": 13,
"text": " \\rho "
},
{
"math_id": 14,
"text": " r "
},
{
"math_id": 15,
"text": " \\omega "
},
{
"math_id": 16,
"text": "I_\\mathrm{rim}=KI_\\mathrm{flywheel}"
},
{
"math_id": 17,
"text": "R"
},
{
"math_id": 18,
"text": "I_\\mathrm{rim}=M_\\mathrm{rim}R^2"
},
{
"math_id": 19,
"text": "\\frac{E}{M} = K \\frac{\\sigma}{\\rho} "
},
{
"math_id": 20,
"text": "K "
},
{
"math_id": 21,
"text": "\\sigma "
},
{
"math_id": 22,
"text": "\\rho "
}
] | https://en.wikipedia.org/wiki?curid=89480 |
89486 | Inequation | Mathematical statement that two values are not equal
In mathematics, an inequation is a statement that an inequality holds between two values. It is usually written in the form of a pair of expressions denoting the values in question, with a relational sign between them indicating the specific inequality relation. Some examples of inequations are formula_0, formula_1, formula_2, and formula_3.
In some cases, the term "inequation" can be considered synonymous to the term "inequality", while in other cases, an inequation is reserved only for statements whose inequality relation is "not equal to" (≠).
Chains of inequations.
A shorthand notation is used for the conjunction of several inequations involving common expressions, by chaining them together. For example, the chain
formula_4
is shorthand for
formula_5
which also implies that formula_6 and formula_7.
In rare cases, chains without such implications about distant terms are used.
For example, formula_8 is shorthand for formula_9, which does not imply formula_10 Similarly, formula_11 is shorthand for formula_12, which does not imply any order of formula_13 and formula_14.
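The same chaining convention appears in some programming languages. In Python, for instance, a chained comparison is evaluated exactly as the conjunction of its adjacent comparisons, including the rare case just mentioned:
# Python evaluates a chained comparison as the conjunction of adjacent comparisons.
a, b = 0.3, 0.7
print(0 <= a < b <= 1)     # same as (0 <= a) and (a < b) and (b <= 1)
i, j = 5, 5
print(i != 0 != j)         # (i != 0) and (0 != j): True here even though i == j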
Solving inequations.
Similar to equation solving, "inequation solving" means finding what values (numbers, functions, sets, etc.) fulfill a condition stated in the form of an inequation or a conjunction of several inequations. These expressions contain one or more "unknowns", which are free variables for which values are sought that cause the condition to be fulfilled. To be precise, what is sought are often not necessarily actual values, but, more generally, expressions. A "solution" of the inequation is an assignment of expressions to the "unknowns" that satisfies the inequation(s); in other words, expressions such that, when they are substituted for the unknowns, the inequations become true propositions.
Often, an additional "objective" expression (i.e., an objective function) is given that is to be minimized or maximized by an "optimal" solution.
For example,
formula_15
is a conjunction of inequations, partly written as chains (where formula_16 can be read as "and"); the set of its solutions is shown in blue in the picture (the red, green, and orange lines corresponding to the 1st, 2nd, and 3rd conjunct, respectively). For a larger example, see Linear programming#Example.
Computer support in solving inequations is described in constraint programming; in particular, the simplex algorithm finds optimal solutions of linear inequations. The programming language Prolog III also supports algorithms for solving particular classes of inequalities (and other relations) as a basic language feature. For more, see constraint logic programming.
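As a concrete sketch, the system above can be handed to an off-the-shelf linear-programming routine such as SciPy's linprog. The objective 3*x1 + 4*x2 is an arbitrary choice added for illustration; the constraint rows are the three inequations rewritten in the form A_ub · x ≤ b_ub, with the nonnegativity conditions handled by the variable bounds.
from scipy.optimize import linprog
A_ub = [[1, 1.5],     # x1 <= 690 - 1.5*x2
        [1, 1.0],     # x2 <= 530 - x1
        [1, 0.75]]    # x1 <= 640 - 0.75*x2
b_ub = [690, 530, 640]
c = [-3, -4]          # linprog minimizes, so negate to maximize 3*x1 + 4*x2
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # optimal point (210, 320), objective value 1910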
Combinations of meanings.
Because of the properties of certain functions (like square roots), some inequations are equivalent to a combination of several others. For example, the inequation formula_17 is logically equivalent to the three inequations formula_18, formula_19, and formula_20 combined.
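The equivalence can be spot-checked numerically; in the sketch below, f(x) = x + 1 and g(x) = x are arbitrary example functions, and points where f(x) < 0 are treated as non-solutions of the original inequation.
import math
def original(x):
    return x + 1 >= 0 and math.sqrt(x + 1) < x        # sqrt(f(x)) < g(x)
def combined(x):
    return x + 1 >= 0 and x > 0 and x + 1 < x**2      # the three inequations
samples = [i / 10 for i in range(-30, 51)]
assert all(original(x) == combined(x) for x in samples)
print("equivalence holds on all", len(samples), "sample points")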
References. | [
{
"math_id": 0,
"text": "a < b"
},
{
"math_id": 1,
"text": "x+y+z \\leq 1"
},
{
"math_id": 2,
"text": "n > 1"
},
{
"math_id": 3,
"text": "x \\neq 0"
},
{
"math_id": 4,
"text": "0 \\leq a < b \\leq 1"
},
{
"math_id": 5,
"text": "0 \\leq a ~ ~ \\mathrm{and} ~ ~ a < b ~ ~ \\mathrm{and} ~ ~ b \\leq 1"
},
{
"math_id": 6,
"text": "0 < b"
},
{
"math_id": 7,
"text": "a < 1"
},
{
"math_id": 8,
"text": "i \\neq 0 \\neq j"
},
{
"math_id": 9,
"text": "i \\neq 0 ~ ~ \\mathrm{and} ~ ~ 0 \\neq j"
},
{
"math_id": 10,
"text": "i \\neq j."
},
{
"math_id": 11,
"text": "a < b > c"
},
{
"math_id": 12,
"text": "a < b ~ ~ \\mathrm{and} ~ ~ b > c"
},
{
"math_id": 13,
"text": "a"
},
{
"math_id": 14,
"text": "c"
},
{
"math_id": 15,
"text": "0 \\leq x_1 \\leq 690 - 1.5 \\cdot x_2 \\;\\land\\; 0 \\leq x_2 \\leq 530 - x_1 \\;\\land\\; x_1 \\leq 640 - 0.75 \\cdot x_2"
},
{
"math_id": 16,
"text": "\\land"
},
{
"math_id": 17,
"text": "\\textstyle \\sqrt{{f(x)}} < g(x)"
},
{
"math_id": 18,
"text": " f(x) \\ge 0"
},
{
"math_id": 19,
"text": " g(x) > 0"
},
{
"math_id": 20,
"text": " f(x) < \\left(g(x)\\right)^2"
}
] | https://en.wikipedia.org/wiki?curid=89486 |