id | title | text | formulas | url
---|---|---|---|---|
12830014 | Eight-point algorithm | The eight-point algorithm is an algorithm used in computer vision to estimate the essential matrix or the fundamental matrix related to a stereo camera pair from a set of corresponding image points. It was introduced by Christopher Longuet-Higgins in 1981 for the case of the essential matrix. In theory, this algorithm can be used also for the fundamental matrix, but in practice the normalized eight-point algorithm, described by Richard Hartley in 1997, is better suited for this case.
The algorithm's name derives from the fact that it estimates the essential matrix or the fundamental matrix from a set of eight (or more) corresponding image points. However, variations of the algorithm can be used for fewer than eight points.
Coplanarity constraint.
One may express the epipolar geometry of two cameras and a point in space with an algebraic equation. Observe that, no matter where the point formula_0 is in space, the vectors formula_1, formula_2 and formula_3 belong to the same plane. Call formula_4 the coordinates of the point formula_0 in the left eye's reference frame, call formula_5 the coordinates of formula_0 in the right eye's reference frame, and call formula_6 the rotation and translation between the two reference frames, such that formula_7 is the relationship between the coordinates of formula_0 in the two reference frames. The following equation always holds because the vector generated from formula_8 is orthogonal to both formula_9 and formula_4:
formula_10
Because formula_11, we get
formula_12.
Replacing formula_13 with formula_14, we get
formula_15
Observe that formula_16 may be thought of as a matrix; Longuet-Higgins used the symbol formula_17 to denote it. The product formula_18 is often called the essential matrix and is denoted formula_19.
The vectors formula_20 are parallel to the vectors formula_21 and therefore the coplanarity constraint holds if we substitute these vectors. If we call formula_22 the coordinates of the projections of formula_0 onto the left and right image planes, then the coplanarity constraint may be written as
formula_23
Basic algorithm.
The basic eight-point algorithm is here described for the case of estimating the essential matrix formula_24. It consists of three steps. First, it formulates a homogeneous linear equation, where the solution is directly related to formula_24, and then solves the equation, taking into account that it may not have an exact solution. Finally, the internal constraints of the resulting matrix are managed. The first step is described in Longuet-Higgins' paper, the second and third steps are standard approaches in estimation theory.
The constraint defined by the essential matrix formula_24 is
formula_25
for corresponding image points represented in normalized image coordinates formula_26. The problem which the algorithm solves is to determine formula_24 for a set of matching image points. In practice, the image coordinates of the image points are affected by noise and the system may also be over-determined, which means that it may not be possible to find a matrix formula_24 that satisfies the above constraint exactly for all points. This issue is addressed in the second step of the algorithm.
Step 1: Formulating a homogeneous linear equation.
With
formula_27 and formula_28 and formula_29
the constraint can also be rewritten as
formula_30
or
formula_31
where
formula_32 and formula_33
that is, formula_34 represents the essential matrix in the form of a 9-dimensional vector and this vector must be orthogonal to the vector formula_35 which can be seen as a vector representation of the formula_36 matrix formula_37.
Each pair of corresponding image points produces a vector formula_35. Given a set of 3D points formula_38 this corresponds to a set of vectors formula_39 and all of them must satisfy
formula_40
for the vector formula_34. Given sufficiently many (at least eight) linearly independent vectors formula_39 it is possible to determine formula_34 in a straightforward way. Collect all vectors formula_39 as the columns of a matrix formula_41 and it must then be the case that
formula_42
This means that formula_34 is the solution to a homogeneous linear equation.
Step 2: Solving the equation.
A standard approach to solving this equation implies that formula_34 is a left singular vector of formula_41 corresponding to a singular value that equals zero. Provided that at least eight linearly independent vectors formula_39 are used to construct formula_41, it follows that this singular vector is unique (disregarding scalar multiplication) and, consequently, formula_34 and then formula_24 can be determined.
In the case that more than eight corresponding points are used to construct formula_41 it is possible that it does not have any singular value equal to zero. This case occurs in practice when the image coordinates are affected by various types of noise. A common approach to deal with this situation is to describe it as a total least squares problem; find formula_34 which minimizes
formula_43
when formula_44. The solution is to choose formula_34 as the left singular vector corresponding to the "smallest" singular value of formula_41. A reordering of this formula_34 back into a formula_36 matrix gives the result of this step, here referred to as formula_45.
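A minimal Python sketch of Steps 1 and 2 may make this concrete; it assumes NumPy, and the function names and data layout are illustrative rather than part of the algorithm's original description. Each correspondence yields one vector formula_35, the vectors are stacked as the columns of formula_41, and formula_34 is read off from the singular value decomposition.

```python
# Sketch of Steps 1 and 2 (assumes NumPy; function names and data layout are
# illustrative, not part of the original algorithm description).
import numpy as np

def build_y_tilde(y, y_prime):
    # y and y_prime are normalized homogeneous coordinates (y1, y2, 1).
    # Row-major flattening of y' y^T gives exactly the 9-vector defined above.
    return np.outer(y_prime, y).reshape(9)

def estimate_E(points, points_prime):
    # points, points_prime: arrays of shape (N, 3) holding N >= 8 correspondences.
    Y = np.column_stack([build_y_tilde(y, yp)
                         for y, yp in zip(points, points_prime)])  # 9 x N matrix
    # e^T Y = 0; with noise, minimize ||e^T Y|| subject to ||e|| = 1 by taking
    # the left singular vector of Y belonging to the smallest singular value.
    U, s, Vt = np.linalg.svd(Y)
    e = U[:, -1]
    return e.reshape(3, 3)  # the matrix E_est of Step 2
```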
Step 3: Enforcing the internal constraint.
Another consequence of dealing with noisy image coordinates is that the resulting matrix may not satisfy the internal constraint of the essential matrix, that is, two of its singular values are equal and nonzero and the other is zero. Depending on the application, smaller or larger deviations from the internal constraint may or may not be a problem. If it is critical that the estimated matrix satisfies the internal constraints, this can be accomplished by finding the matrix formula_46 of rank 2 which minimizes
formula_47
where formula_45 is the resulting matrix from Step 2 and the Frobenius matrix norm is used. The solution to the problem is given by first computing a singular value decomposition of formula_45:
formula_48
where formula_49 are orthogonal matrices and formula_50 is a diagonal matrix which contains the singular values of formula_45. In the ideal case, one of the diagonal elements of formula_50 should be zero, or at least small compared to the other two which should be equal. In any case, set
formula_51
where formula_52 are the largest and second largest singular values in formula_50 respectively. Finally, formula_46 is given by
formula_53
The matrix formula_46 is the resulting estimate of the essential matrix provided by the algorithm.
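Step 3 admits an equally short sketch (again assuming NumPy and purely illustrative): the singular value decomposition of formula_45 is computed, the smallest singular value is set to zero, and the matrix is reassembled.

```python
# Sketch of Step 3 (assumes NumPy): project E_est onto the rank-2 matrices.
import numpy as np

def enforce_internal_constraint(E_est):
    U, s, Vt = np.linalg.svd(E_est)        # singular values in decreasing order
    S_prime = np.diag([s[0], s[1], 0.0])   # keep the two largest, zero the third
    # Some implementations also average s[0] and s[1] so that the two remaining
    # singular values are exactly equal, as an ideal essential matrix requires.
    return U @ S_prime @ Vt
```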
Normalized algorithm.
The basic eight-point algorithm can in principle be used also for estimating the fundamental matrix formula_54. The defining constraint for formula_54 is
formula_55
where formula_26 are the homogeneous representations of corresponding image coordinates (not necessarily normalized). This means that it is possible to form a matrix formula_41 in a similar way as for the essential matrix and solve the equation
formula_56
for formula_57 which is a reshaped version of formula_54. By following the procedure outlined above, it is then possible to determine formula_54 from a set of eight matching points. In practice, however, the resulting fundamental matrix may not be useful for determining epipolar constraints.
Difficulty.
The problem is that the resulting formula_41 often is ill-conditioned. In theory, formula_41 should have one singular value equal to zero and the rest are non-zero. In practice, however, some of the non-zero singular values can become small relative to the larger ones. If more than eight corresponding points are used to construct formula_41, where the coordinates are only approximately correct, there may not be a well-defined singular value which can be identified as approximately zero. Consequently, the solution of the homogeneous linear system of equations may not be sufficiently accurate to be useful.
Cause.
Hartley addressed this estimation problem in his 1997 article. His analysis of the problem shows that the problem is caused by the poor distribution of the homogeneous image coordinates in their space, formula_58. A typical homogeneous representation of the 2D image coordinate formula_59 is
formula_27
where both formula_60 typically lie in the range 0 to 1000–2000 for a modern digital camera. This means that the first two coordinates in formula_61 vary over a much larger range than the third coordinate. Furthermore, if the image points which are used to construct formula_41 lie in a relatively small region of the image, for example at formula_62, then the vector formula_61 points in more or less the same direction for all points. As a consequence, formula_41 will have one large singular value while the remaining ones are small.
Solution.
As a solution to this problem, Hartley proposed that the coordinate system of each of the two images should be transformed, independently, into a new coordinate system according to the following principle: the origin of the new coordinate system should be placed at the centroid (centre of gravity) of the image points, and the coordinates should then be uniformly scaled so that the mean distance of the points from the origin equals formula_63.
This principle results, normally, in a distinct coordinate transformation for each of the two images. As a result, new homogeneous image coordinates formula_64 are given by
formula_65
formula_66
where formula_67 are the transformations (translation and scaling) from the old to the new "normalized image coordinates". This normalization is only dependent on the image points which are used in a single image and is, in general, distinct from normalized image coordinates produced by a normalized camera.
The epipolar constraint based on the fundamental matrix can now be rewritten as
formula_68
where formula_69. This means that it is possible to use the normalized homogeneous image coordinates formula_64 to estimate the transformed fundamental matrix formula_70 using the basic eight-point algorithm described above.
The purpose of the normalization transformations is that the matrix formula_71, constructed from the normalized image coordinates, in general has a better condition number than formula_41 has. This means that the solution formula_72 is better determined as a solution of the homogeneous equation formula_73 than formula_57 is relative to formula_41. Once formula_72 has been determined and reshaped into formula_70, the latter can be "de-normalized" to give formula_54 according to
formula_74
In general, this estimate of the fundamental matrix is a better one than would have been obtained by estimating from the un-normalized coordinates.
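A sketch of the whole normalized procedure is given below; it assumes NumPy, reuses the two helper functions from the sketches above, and follows the centroid-and-scaling principle stated earlier (everything else, including the function names, is illustrative).

```python
# Sketch of the normalized eight-point algorithm (assumes NumPy and reuses
# estimate_E and enforce_internal_constraint from the sketches above).
import numpy as np

def normalizing_transform(points):
    # Translation and scaling: centroid to the origin, mean distance sqrt(2).
    xy = np.asarray(points, dtype=float)[:, :2]
    centroid = xy.mean(axis=0)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(xy - centroid, axis=1))
    return np.array([[scale, 0.0, -scale * centroid[0]],
                     [0.0, scale, -scale * centroid[1]],
                     [0.0, 0.0, 1.0]])

def normalized_eight_point(points, points_prime):
    T = normalizing_transform(points)
    T_prime = normalizing_transform(points_prime)
    y_bar = (T @ np.asarray(points, dtype=float).T).T
    y_bar_prime = (T_prime @ np.asarray(points_prime, dtype=float).T).T
    # The same machinery used for E estimates the transformed fundamental matrix.
    F_bar = enforce_internal_constraint(estimate_E(y_bar, y_bar_prime))
    return T_prime.T @ F_bar @ T           # de-normalization: F = T'^T Fbar T
```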
Using fewer than eight points.
Each point pair contributes one constraining equation on the elements of formula_24. Since formula_24 has five degrees of freedom, five point pairs should therefore be sufficient to determine formula_24. David Nister proposed an efficient solution to estimate the essential matrix from a set of five paired points, known as the five-point algorithm. Hartley et al. later proposed a modified and more stable five-point algorithm based on Nister's algorithm.
| [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "\\overline{O_L P}"
},
{
"math_id": 2,
"text": "\\overline{O_R P}"
},
{
"math_id": 3,
"text": " \\overline{O_R O_L}"
},
{
"math_id": 4,
"text": "X_L"
},
{
"math_id": 5,
"text": "X_R"
},
{
"math_id": 6,
"text": "R, T"
},
{
"math_id": 7,
"text": "X_R = R (X_L-T) "
},
{
"math_id": 8,
"text": "T \\wedge X_L "
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "\n X_L^T T \\wedge X_L - T^T T \\wedge X_L = (X_L-T)^T T \\wedge X_L = 0\n"
},
{
"math_id": 11,
"text": " I = R^T R "
},
{
"math_id": 12,
"text": "\n (X_L-T)^T R^T R T \\wedge X_L = 0\n"
},
{
"math_id": 13,
"text": "(X_L-T)^TR^T "
},
{
"math_id": 14,
"text": "X_R^T"
},
{
"math_id": 15,
"text": "\n X_R^T R T \\wedge X_L = X_R^T R S X_L = X_R^T E X_L = 0\n"
},
{
"math_id": 16,
"text": "T \\wedge "
},
{
"math_id": 17,
"text": "S"
},
{
"math_id": 18,
"text": " R T \\wedge = R S "
},
{
"math_id": 19,
"text": " E "
},
{
"math_id": 20,
"text": "\\overline{O_L p_L}, \\overline{O_R p_R}"
},
{
"math_id": 21,
"text": " \\overline{O_L P}, \\overline{O_R P}"
},
{
"math_id": 22,
"text": "y, y'"
},
{
"math_id": 23,
"text": "\n y'^T \\mathbf{E} y = 0\n"
},
{
"math_id": 24,
"text": " \\mathbf{E} "
},
{
"math_id": 25,
"text": " (\\mathbf{y}')^{T} \\, \\mathbf{E} \\, \\mathbf{y} = 0"
},
{
"math_id": 26,
"text": " \\mathbf{y}, \\mathbf{y}' "
},
{
"math_id": 27,
"text": " \\mathbf{y} = \\begin{pmatrix} y_{1} \\\\ y_{2} \\\\ 1 \\end{pmatrix} "
},
{
"math_id": 28,
"text": " \\mathbf{y}' = \\begin{pmatrix} y'_{1} \\\\ y'_{2} \\\\ 1 \\end{pmatrix} "
},
{
"math_id": 29,
"text": " \\mathbf{E} = \\begin{pmatrix} e_{11} & e_{12} & e_{13} \\\\ e_{21} & e_{22} & e_{23} \\\\ e_{31} & e_{32} & e_{33} \\end{pmatrix} "
},
{
"math_id": 30,
"text": " y'_1 y_1 e_{11} + y'_1 y_2 e_{12} + y'_1 e_{13} + y'_2 y_1 e_{21} + y'_2 y_2 e_{22} + y'_2 e_{23} + y_1 e_{31} + y_2 e_{32} + e_{33} = 0 \\, "
},
{
"math_id": 31,
"text": " \\mathbf{e} \\cdot \\tilde{\\mathbf{y}} = 0 "
},
{
"math_id": 32,
"text": " \\tilde{\\mathbf{y}} = \\begin{pmatrix} y'_1 y_1 \\\\ y'_1 y_2 \\\\ y'_1 \\\\ y'_2 y_1 \\\\ y'_2 y_2 \\\\ y'_2 \\\\ y_1 \\\\ y_2 \\\\ 1 \\end{pmatrix} "
},
{
"math_id": 33,
"text": " \\mathbf{e} = \\begin{pmatrix} e_{11} \\\\ e_{12} \\\\ e_{13} \\\\ e_{21} \\\\ e_{22} \\\\ e_{23} \\\\ e_{31} \\\\ e_{32} \\\\ e_{33} \\end{pmatrix} "
},
{
"math_id": 34,
"text": " \\mathbf{e} "
},
{
"math_id": 35,
"text": " \\tilde{\\mathbf{y}} "
},
{
"math_id": 36,
"text": " 3 \\times 3 "
},
{
"math_id": 37,
"text": " \\mathbf{y}' \\, \\mathbf{y}^{T} "
},
{
"math_id": 38,
"text": " \\mathbf{P}_k "
},
{
"math_id": 39,
"text": " \\tilde{\\mathbf{y}}_{k} "
},
{
"math_id": 40,
"text": " \\mathbf{e} \\cdot \\tilde{\\mathbf{y}}_{k} = 0 "
},
{
"math_id": 41,
"text": " \\mathbf{Y} "
},
{
"math_id": 42,
"text": " \\mathbf{e}^{T} \\, \\mathbf{Y} = \\mathbf{0} "
},
{
"math_id": 43,
"text": " \\| \\mathbf{e}^{T} \\, \\mathbf{Y} \\| "
},
{
"math_id": 44,
"text": " \\| \\mathbf{e} \\| = 1 "
},
{
"math_id": 45,
"text": " \\mathbf{E}_{\\rm est} "
},
{
"math_id": 46,
"text": " \\mathbf{E}' "
},
{
"math_id": 47,
"text": " \\| \\mathbf{E}' - \\mathbf{E}_{\\rm est} \\| "
},
{
"math_id": 48,
"text": " \\mathbf{E}_{\\rm est} = \\mathbf{U} \\, \\mathbf{S} \\, \\mathbf{V}^{T} "
},
{
"math_id": 49,
"text": " \\mathbf{U}, \\mathbf{V} "
},
{
"math_id": 50,
"text": " \\mathbf{S} "
},
{
"math_id": 51,
"text": " \\mathbf{S}' = \\begin{pmatrix} s_1 & 0 & 0 \\\\ 0 & s_2 & 0 \\\\ 0 & 0 & 0 \\end{pmatrix} ,"
},
{
"math_id": 52,
"text": " s_1, s_2 "
},
{
"math_id": 53,
"text": " \\mathbf{E}' = \\mathbf{U} \\, \\mathbf{S}' \\, \\mathbf{V}^{T} "
},
{
"math_id": 54,
"text": " \\mathbf{F} "
},
{
"math_id": 55,
"text": " (\\mathbf{y}')^{T} \\, \\mathbf{F} \\, \\mathbf{y} = 0"
},
{
"math_id": 56,
"text": " \\mathbf{f}^{T} \\, \\mathbf{Y} = \\mathbf{0} "
},
{
"math_id": 57,
"text": " \\mathbf{f} "
},
{
"math_id": 58,
"text": " \\mathbb{R}^{3} "
},
{
"math_id": 59,
"text": " (y_{1}, y_{2}) \\, "
},
{
"math_id": 60,
"text": " y_{1}, y_{2} \\, "
},
{
"math_id": 61,
"text": " \\mathbf{y} "
},
{
"math_id": 62,
"text": " (700,700) \\pm (100,100) \\, "
},
{
"math_id": 63,
"text": " \\sqrt{2} "
},
{
"math_id": 64,
"text": " \\mathbf{\\bar y}, \\mathbf{\\bar y}' "
},
{
"math_id": 65,
"text": " \\mathbf{\\bar y} = \\mathbf{T} \\, \\mathbf{y} "
},
{
"math_id": 66,
"text": " \\mathbf{\\bar y}' = \\mathbf{T}' \\, \\mathbf{y}' "
},
{
"math_id": 67,
"text": " \\mathbf{T}, \\mathbf{T}' "
},
{
"math_id": 68,
"text": " 0 = (\\mathbf{\\bar y}')^{T} \\, ((\\mathbf{T}')^{T})^{-1} \\, \\mathbf{F} \\, \\mathbf{T}^{-1}\\, \\mathbf{\\bar y} = (\\mathbf{\\bar y}')^{T} \\, \\mathbf{\\bar F} \\, \\mathbf{\\bar y} "
},
{
"math_id": 69,
"text": " \\mathbf{\\bar F} = ((\\mathbf{T}')^{T})^{-1} \\, \\mathbf{F} \\, \\mathbf{T}^{-1} "
},
{
"math_id": 70,
"text": " \\mathbf{\\bar F} "
},
{
"math_id": 71,
"text": " \\mathbf{\\bar Y} "
},
{
"math_id": 72,
"text": " \\mathbf{\\bar f} "
},
{
"math_id": 73,
"text": " \\mathbf{\\bar Y} \\, \\mathbf{\\bar f} "
},
{
"math_id": 74,
"text": " \\mathbf{F} = (\\mathbf{T}')^{T} \\, \\mathbf{\\bar F} \\, \\mathbf{T} "
}
] | https://en.wikipedia.org/wiki?curid=12830014 |
12834472 | Trimethylamine N-oxide reductase | Trimethylamine "N"-oxide reductase (TOR or TMAO reductase, EC 1.7.2.3) is a microbial enzyme that can reduce trimethylamine "N"-oxide (TMAO) into trimethylamine (TMA), as part of the electron transport chain. The enzyme has been purified from "E. coli" and the photosynthetic bacteria "Roseobacter denitrificans".
Trimethylamine oxide is found at high concentrations in the tissues of fish, and the bacterial reduction of this compound to foul-smelling trimethylamine is a major process in the spoilage of fish.
Classification.
TMAO reductase has an Enzyme Commission (EC) number of 1.7.2.3. EC numbers are a system of enzyme nomenclature in which each part of the number gives a progressively finer classification of the enzyme with regard to its reaction: the first number defines the general reaction type, the second provides information on the compounds involved, the third specifies the reaction further (for example, the electron acceptor used), and the fourth is a serial number that uniquely identifies the enzyme.
For trimethylamine N-oxide reductase, the EC number 1.7.2.3 indicates an oxidoreductase (1) that acts on other nitrogenous compounds as donors (1.7) with a cytochrome as acceptor (1.7.2), the final digit (3) being the serial number for this particular enzyme.
Species distribution.
TMAO is an organic osmolyte that has the useful biological function of protecting proteins against denaturing stresses such as high concentrations of urea. Various bacteria grow anaerobically using TMAO as the terminal electron acceptor of an alternative electron transport chain, allowing for growth on non-fermentable carbon sources such as glycerol. Bacteria capable of reducing TMAO to TMA are found in three different ecological niches: TMAO reduction has, to date, been observed in marine bacteria, in photosynthetic bacteria living in shallow ponds, and in enterobacteria.
TMAO reductases have been studied in several organisms, and a common conserved feature is the presence of a molybdenum cofactor in all the known terminal enzymes.
Based on their substrate specificity, these enzymes can be divided into two groups:
The first group consists of species such as "Escherichia coli", "Shewanella putrefaciens", and "Roseobacter denitrificans" while the second group consists of species such as "Proteus vulgaris", "Rhodobacter capsulatus", and "Rhodobacter sphaeroides".
The TMAO respiratory system has been mostly widely studied at the molecular level in "E. coli" and "Rhodobacter" species.
Reaction mechanism.
In "E. coli", TMAO reductase is encoded by the "tor"CAD operon. The torC gene encodes a pentahemic c-type cytochrome (TorC). TorC is likely to transfer electrons directly to the periplasmic TorA terminal enzyme encoded by the torA gene. The anaerobic expression of the torCAD operon is strictly controlled by the presence of TMAO or related compounds.
There are several different metabolic pathways that involve TMAO and TMA. The reduction of TMAO to TMA, catalyzed by TMAO reductase, as part of the electron transport chain follows the following reaction:
NADH + H+ + trimethylamine N-oxide formula_0 NAD+ + trimethylamine + H2O
However, both the "R. denitrificans" and "E. coli" enzymes can accept electrons from cytochromes:
trimethylamine + 2 (ferricytochrome c)-subunit + H2O → trimethylamine "N"-oxide + 2 (ferrocytochrome c)-subunit + 2 H+
Other reactions involving TMAO and TMA include:
Structure.
In "E. coli", it has been shown that an inducible, periplasmic TMAO reductase is responsible for almost all TMAO reduction (with the rest being DMSO reduction). While no structural analysis of this "E. coli" enzyme has been reported, TMAO reductase from "Shewanella massilia" has been isolated and characterized at a resolution of 2.5 Å.
TMAO reductases have been studied in several organisms, and a common feature is the presence of a molybdenum cofactor in all the known terminal enzymes. The common form of the molybdopterin molecule is a tricyclic ring system comprising a pterin group fused to a pyran ring. The role of this pyran ring could be a way of controlling the oxidation state of the molybdenum cofactor and/or facilitating proton diffusion. Furthermore, the arrangement of aromatic residues in the funnel-like entrance leading to the active center is closely related to that of DMSO reductase structures. A hydrophobic pocket, formed by two tryptophan and two tyrosine residues, is also present in the TMAO reductase and contains highly conserved residues.
When comparing the TMAO reductase of "S. massilia" to the DMSO reductases from "R. sphaeroides" and "R. capsulatus", the overall structure is strikingly similar. However, one major difference is that a tyrosine present in the DMSO reductase of "R. capsulatus" (Tyr114) is missing from the TMAO reductase; it is replaced by a threonine (Thr116), and the backbone stretch around this residue, from residue 100 to 116, is not identical to that in the DMSO reductases. A direct consequence of the missing residue is a wider accessible space adjacent to the molybdenum active center, which potentially allows the somewhat bulkier trimethylamine N-oxide molecule to be accommodated more easily than the dimethylsulfoxide molecule. This difference demonstrates how an enzyme's form is almost always directly tied to its function.
However, discrepancies have recently arisen regarding the structure of the TMAO reductase active site. The proposed active site contains several anomalous bond lengths; one Mo-O bond length is too short for a Mo-O single-bond coordination, and the four Mo-S bond lengths are all considerably longer than expected. Moreover, the proposed molybdenum coordination of the active site is extremely crowded, with the distances between several supposedly nonbonding atoms being significantly shorter than the sum of their van der Waals radii and some bond angles being unreasonably small. It is now hypothesized that this overcrowding is due to the cocrystallization of multiple forms of the enzyme.
| [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=12834472 |
12834891 | D-amino acid dehydrogenase | Class of enzymes
D-amino-acid dehydrogenase (EC 1.4.99.1) is a bacterial enzyme that catalyses the oxidation of D-amino acids into their corresponding oxoacids. It contains both flavin and nonheme iron as cofactors. The enzyme has a very broad specificity and can act on most D-amino acids.
D-amino acid + H2O + acceptor <=> a 2-oxo acid + NH3 + reduced acceptor
This reaction is distinct from the oxidation reaction catalysed by D-amino acid oxidase that uses oxygen as a second substrate, as the dehydrogenase can use many different compounds as electron acceptors, with the physiological substrate being coenzyme Q.
D-amino acid dehydrogenase can be used to produce D-amino acids from 2-oxo acids, with NADPH regenerated from NADP+ and D-glucose by glucose dehydrogenase. These amino acids include, but are not limited to, D-leucine, D-isoleucine, and D-valine, essential amino acids that humans cannot synthesize and must therefore obtain from the diet. Moreover, the enzyme catalyzes the oxidation of D-amino acids to the corresponding 2-oxo acids in the presence of DCIP, which serves as an electron acceptor. D-amino acids are used as components of pharmaceutical products, such as antibiotics, anticoagulants, and pesticides, because they have been shown to be not only more potent than their L-enantiomers, but also more resistant to enzymatic degradation. D-amino acid dehydrogenase enzymes have been engineered via mutagenesis with the ability to produce straight-chain, branched, cyclic aliphatic, and aromatic D-amino acids. Solubilized D-amino acid dehydrogenase tends to increase its affinity for D-alanine, D-asparagine, and D-formula_0-amino-n-butyrate.
In "E. coli" K12 D-amino acid dehydrogenase is most active with D-alanine as its substrate, as this amino acid is the sole source of carbon, nitrogen, and energy. The enzyme works optimally at pH 8.9 and has a Michaelis constant for D-alanine equal to 30 mM. DAD discovered in gram-negative "E. coli" B membrane can convert L-amino acids into D-amino acids as well.
Additionally, D-amino acid dehydrogenase is a dye-linked dehydrogenase (Dye-DH); these enzymes use artificial dyes such as 2,6-dichloroindophenol (DCIP) as their electron acceptor rather than their natural electron acceptors. This can accelerate the reaction between the enzyme and the substrate when electrons are being transferred.
Use in synthesis reactions.
D-Amino Acid Dehydrogenase has shown itself to be effective in the synthesis of branched-chain amino acids such as D-Leucine, D-Isoleucine, and D-Valine. In the given study, researchers were successfully able to use D-amino acid dehydrogenase to create high amounts of these products from the starting material of 2-oxo acids, in the presence of ammonia. The conditions for this were variable, though the best results appeared at around 65 °C.
Amino acids obtained through these reactions showed high enantioselectivity (>99%) and high yields (>99%).
Given the nature of this enzyme, it may be possible to use it in order to create non-branched D-amino acids as well as modified D-amino acids.
Obtaining D-Amino Acid Dehydrogenase.
In one study, in order to test the viability of using D-amino acid dehydrogenase in synthesis reactions, researchers used mutant bacteria to obtain and create different strains of the enzyme. They found that only five mutations were required to modify the selective D-amino acid dehydrogenase so that it would act on other D-amino acids. They also found that it retained its highly selective nature, yielding almost exclusively D-enantiomers after mutation, with yields in excess of 95%.
A heat-stable variant of D-amino acid dehydrogenase was found in the bacterium Rhodothermus marinus JCM9785. This variant is involved in the catabolism of trans-4-hydroxy-L-proline.
From the given studies, in order to obtain D-amino acid dehydrogenase one must first introduce and express it within a given bacterial species, some of which have been previously referenced. It must then be purified under favorable conditions. These are based upon the particular species of D-amino acid dehydrogenase used in a given research experiment. Under incorrect conditions, the protein may denature. For example, it was found that specifically D-alanine dehydrogenases from E. coli and P. aeruginosa would lose most of their activity when subjected to conditions of 37 - 42 °C. After this, it is possible to separate and purify through existing methods.
Artificial D-Amino Acid Dehydrogenase.
Due to the drawbacks of current methods, researchers have begun work on creating an artificial enzyme capable of producing the same D-amino acids as enzymes from naturally occurring sources. By introducing five amino acid substitutions into an enzyme isolated from U. thermosphaericus, they succeeded. By modifying the amino acid sequence, researchers were able to change the specificity of the molecule towards certain reactants and products, showing that it may be possible to use artificial D-amino acid dehydrogenase to screen for certain D-amino acid products.
| [
{
"math_id": 0,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=12834891 |
12838118 | Macbeath surface | In Riemann surface theory and hyperbolic geometry, the Macbeath surface, also called Macbeath's curve or the Fricke–Macbeath curve, is the genus-7 Hurwitz surface.
The automorphism group of the Macbeath surface is the simple group PSL(2,8), consisting of 504 symmetries.
Triangle group construction.
The surface's Fuchsian group can be constructed as the principal congruence subgroup of the (2,3,7) triangle group in a suitable tower of principal congruence subgroups. Here the choices of quaternion algebra and Hurwitz quaternion order are described at the triangle group page. Choosing the ideal formula_0 in the ring of integers, the corresponding principal congruence subgroup defines this surface of genus 7. Its systole is about 5.796, and the number of systolic loops is 126 according to R. Vogeler's calculations.
It is possible to realize the resulting triangulated surface as a non-convex polyhedron without self-intersections.
Historical note.
This surface was originally discovered by Robert Fricke (1899), but named after Alexander Murray Macbeath due to his later independent rediscovery of the same curve. Elkies writes that the equivalence between the curves studied by Fricke and Macbeath "may first have been observed by Serre in a 24.vii.1990 letter to Abhyankar".
| [
{
"math_id": 0,
"text": "\\langle 2 \\rangle"
}
] | https://en.wikipedia.org/wiki?curid=12838118 |
1283865 | Hyperbolic triangle | Triangle in hyperbolic geometry
In hyperbolic geometry, a hyperbolic triangle is a triangle in the hyperbolic plane. It consists of three line segments called "sides" or "edges" and three points called "angles" or "vertices".
Just as in the Euclidean case, three points of a hyperbolic space of an arbitrary dimension always lie on the same plane. Hence planar hyperbolic triangles also describe triangles possible in any higher dimension of hyperbolic spaces.
Definition.
A hyperbolic triangle consists of three non-collinear points and the three segments between them.
Properties.
Hyperbolic triangles have some properties that are analogous to those of triangles in Euclidean geometry:
Hyperbolic triangles have some properties that are analogous to those of triangles in spherical or elliptic geometry:
Hyperbolic triangles have some properties that are the opposite of the properties of triangles in spherical or elliptic geometry:
Hyperbolic triangles also have some properties that are not found in other geometries:
Triangles with ideal vertices.
The definition of a triangle can be generalized, permitting vertices on the ideal boundary of the plane while keeping the sides within the plane. If a pair of sides is "limiting parallel" (i.e. the distance between them approaches zero as they tend to the ideal point, but they do not intersect), then they end at an ideal vertex represented as an "omega point".
Such a pair of sides may also be said to form an angle of zero.
A triangle with a zero angle is impossible in Euclidean geometry for straight sides lying on distinct lines. However, such zero angles are possible with tangent circles.
A triangle with one ideal vertex is called an omega triangle.
Special triangles with ideal vertices are:
Triangle of parallelism.
A triangle where one vertex is an ideal point, one angle is right: the third angle is the angle of parallelism for the length of the side between the right and the third angle.
Schweikart triangle.
The triangle where two vertices are ideal points and the remaining angle is right, one of the first hyperbolic triangles (1818) described by Ferdinand Karl Schweikart.
Ideal triangle.
The triangle where all vertices are ideal points, an ideal triangle is the largest possible triangle in hyperbolic geometry because of the zero sum of the angles.
Standardized Gaussian curvature.
The relations among the angles and sides are analogous to those of spherical trigonometry; the length scale for both spherical geometry and hyperbolic geometry can for example be defined as the length of a side of an equilateral triangle with fixed angles.
The length scale is most convenient if the lengths are measured in terms of the absolute length (a special unit of length analogous to a relation between distances in spherical geometry). This choice of length scale makes the formulas simpler.
In terms of the Poincaré half-plane model absolute length corresponds to the infinitesimal metric formula_0 and in the Poincaré disk model to formula_1.
In terms of the (constant and negative) Gaussian curvature K of a hyperbolic plane, a unit of absolute length corresponds to a length of
formula_2.
In a hyperbolic triangle the sum of the angles "A", "B", "C" (respectively opposite to the side with the corresponding letter) is strictly less than a straight angle. The difference between the measure of a straight angle and the sum of the measures of a triangle's angles is called the defect of the triangle. The area of a hyperbolic triangle is equal to its defect multiplied by the square of R:
formula_3.
This theorem, first proven by Johann Heinrich Lambert, is related to Girard's theorem in spherical geometry.
Trigonometry.
In all the formulas stated below the sides a, b, and c must be measured in absolute length, a unit so that the Gaussian curvature K of the plane is −1. In other words, the quantity R in the paragraph above is supposed to be equal to 1.
Trigonometric formulas for hyperbolic triangles depend on the hyperbolic functions sinh, cosh, and tanh.
Trigonometry of right triangles.
If "C" is a right angle then:
formula_4
formula_5
formula_6.
formula_7.
formula_8.
formula_9
Relations between angles.
We also have the following equations:
formula_10
formula_11
formula_12
formula_13
formula_14
Area.
The area of a right angled triangle is:
formula_15
The area for any other triangle is:
formula_16
For a right triangle with legs a and b, the area can also be written as
formula_17
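These relations are easy to check numerically. The short Python sketch below (standard library only; the leg lengths are arbitrary illustrative values) builds a right triangle from its two legs and verifies the angle formulas together with the two area expressions.

```python
# Numerical check of the right-triangle relations and area formulas above
# (Python standard library only; the leg lengths are arbitrary).
import math

a, b = 0.8, 1.3                                       # the two legs (K = -1)
c = math.acosh(math.cosh(a) * math.cosh(b))           # cosh c = cosh a cosh b

A = math.asin(math.sinh(a) / math.sinh(c))            # sin A = sinh a / sinh c
B = math.asin(math.sinh(b) / math.sinh(c))            # sin B = sinh b / sinh c

assert math.isclose(math.cos(A), math.tanh(b) / math.tanh(c))  # cos A = tanh b / tanh c
assert math.isclose(math.tan(A), math.tanh(a) / math.sinh(b))  # tan A = tanh a / sinh b

defect = math.pi / 2 - A - B                          # the area equals the defect
assert math.isclose(defect, 2 * math.atan(math.tanh(a / 2) * math.tanh(b / 2)))
print("angles:", A, B, "  area:", defect)
```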
Angle of parallelism.
The instance of an omega triangle with a right angle provides the configuration to examine the angle of parallelism in the triangle.
In this case angle "B" = 0, a = c = formula_18 and formula_19, resulting in formula_20.
Equilateral triangle.
The trigonometry formulas of right triangles also give the relations between the sides "s" and the angles "A" of an equilateral triangle (a triangle where all sides have the same length and all angles are equal).
The relations are:
formula_21
formula_22
General trigonometry.
Whether "C" is a right angle or not, the following relationships hold:
The hyperbolic law of cosines is as follows:
formula_23
Its dual theorem is
formula_24
There is also a "law of sines":
formula_25
and a four-parts formula:
formula_26
which is derived in the same way as the analogue formula in spherical trigonometry.
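The general relations can be checked in the same way. The sketch below (standard library only, with arbitrary illustrative values) builds a triangle from two sides and the included angle via the law of cosines, then verifies the law of sines and the dual law of cosines.

```python
# Numerical check of the general relations (standard library only; values are
# arbitrary): build a triangle from two sides and the included angle, then
# verify the law of sines and the dual law of cosines.
import math

a, b, C = 1.0, 1.5, 1.2

# Hyperbolic law of cosines gives the third side.
c = math.acosh(math.cosh(a) * math.cosh(b)
               - math.sinh(a) * math.sinh(b) * math.cos(C))

# The remaining angles, from the law of cosines solved for cos A and cos B.
A = math.acos((math.cosh(b) * math.cosh(c) - math.cosh(a))
              / (math.sinh(b) * math.sinh(c)))
B = math.acos((math.cosh(a) * math.cosh(c) - math.cosh(b))
              / (math.sinh(a) * math.sinh(c)))

r = math.sin(A) / math.sinh(a)                        # law of sines: equal ratios
assert math.isclose(r, math.sin(B) / math.sinh(b))
assert math.isclose(r, math.sin(C) / math.sinh(c))

# Dual law of cosines recovers the included angle.
assert math.isclose(math.cos(C), -math.cos(A) * math.cos(B)
                    + math.sin(A) * math.sin(B) * math.cosh(c))

print("area (defect):", math.pi - A - B - C)          # positive and less than pi
```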
| [
{
"math_id": 0,
"text": "ds=\\frac{|dz|}{\\operatorname{Im}(z)}"
},
{
"math_id": 1,
"text": "ds=\\frac{2|dz|}{1-|z|^2}"
},
{
"math_id": 2,
"text": "R=\\frac{1}{\\sqrt{-K}}"
},
{
"math_id": 3,
"text": "(\\pi-A-B-C) R^2{}{}\\!"
},
{
"math_id": 4,
"text": "\\sin A=\\frac{\\textrm{sinh(opposite)}}{\\textrm{sinh(hypotenuse)}}=\\frac{\\sinh a}{\\,\\sinh c\\,}.\\,"
},
{
"math_id": 5,
"text": "\\cos A=\\frac{\\textrm{tanh(adjacent)}}{\\textrm{tanh(hypotenuse)}}=\\frac{\\tanh b}{\\,\\tanh c\\,}.\\,"
},
{
"math_id": 6,
"text": "\\tan A=\\frac{\\textrm{tanh(opposite)}}{\\textrm{sinh(adjacent)}} = \\frac{\\tanh a}{\\,\\sinh b\\,}"
},
{
"math_id": 7,
"text": "\\textrm{cosh(adjacent)}= \\frac{\\cos B}{\\sin A}"
},
{
"math_id": 8,
"text": "\\textrm{cosh(hypotenuse)}= \\textrm{cosh(adjacent)} \\textrm{cosh(opposite)}"
},
{
"math_id": 9,
"text": "\\textrm{cosh(hypotenuse)}= \\frac{\\cos A \\cos B}{\\sin A\\sin B} = \\cot A \\cot B"
},
{
"math_id": 10,
"text": " \\cos A = \\cosh a \\sin B"
},
{
"math_id": 11,
"text": " \\sin A = \\frac{\\cos B}{\\cosh b}"
},
{
"math_id": 12,
"text": " \\tan A = \\frac{\\cot B}{\\cosh c}"
},
{
"math_id": 13,
"text": " \\cos B = \\cosh b \\sin A"
},
{
"math_id": 14,
"text": " \\cosh c = \\cot A \\cot B"
},
{
"math_id": 15,
"text": "\\textrm{Area} = \\frac{\\pi}{2} - \\angle A - \\angle B"
},
{
"math_id": 16,
"text": "\\textrm{Area} = {\\pi} - \\angle A - \\angle B - \\angle C"
},
{
"math_id": 17,
"text": "\\textrm{Area}= 2 \\arctan (\\tanh (\\frac{a}{2})\\tanh (\\frac{b}{2}) )"
},
{
"math_id": 18,
"text": " \\infty "
},
{
"math_id": 19,
"text": "\\textrm{tanh}(\\infty )= 1"
},
{
"math_id": 20,
"text": "\\cos A= \\textrm{tanh(adjacent)}"
},
{
"math_id": 21,
"text": "\\cos A= \\frac{\\textrm{tanh}(\\frac12 s) }{\\textrm{tanh} (s)}"
},
{
"math_id": 22,
"text": "\\cosh( \\frac12 s)= \\frac{\\cos(\\frac12 A)}{\\sin( A)}= \\frac{1}{2 \\sin(\\frac12 A)}"
},
{
"math_id": 23,
"text": "\\cosh c=\\cosh a\\cosh b-\\sinh a\\sinh b \\cos C,"
},
{
"math_id": 24,
"text": "\\cos C= -\\cos A\\cos B+\\sin A\\sin B \\cosh c,"
},
{
"math_id": 25,
"text": "\\frac{\\sin A}{\\sinh a} = \\frac{\\sin B}{\\sinh b} = \\frac{\\sin C}{\\sinh c},"
},
{
"math_id": 26,
"text": "\\cos C\\cosh a=\\sinh a\\coth b-\\sin C\\cot B"
}
] | https://en.wikipedia.org/wiki?curid=1283865 |
1283871 | Maxwell coil | Device used to produce magnetic fields
A Maxwell coil is a device for producing a large volume of almost constant (or constant-gradient) magnetic field. It is named in honour of the Scottish physicist James Clerk Maxwell.
A Maxwell coil is an improvement of a Helmholtz coil: in operation it provides an even more uniform magnetic field (than a Helmholtz coil), but at the expense of more material and complexity.
Description.
A constant-field Maxwell coil set consists of three coils oriented on the surface of a virtual sphere. According to Maxwell's original 1873 design, each of the outer coils should be of radius formula_0 and lie at a distance formula_1 from the plane of the central coil of radius formula_2.
Maxwell specified the number of windings as 64 for the central coil and 49 for the outer coils. Though Maxwell did not specifically state that the current for the coils came from the same source, his work was specifically describing the construction of a sensitive galvanometer designed to detect a single current source. It follows that the ampere-turns for each of the smaller coils must be exactly formula_3 of those of the larger.
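The near-constancy of the field can be illustrated numerically. The sketch below (assuming NumPy; unit current, unit central radius, and the permeability set to 1, so only the shape of the field matters) sums the standard on-axis field of the three circular loops and reports how little the total varies near the centre.

```python
# On-axis field of a constant-field Maxwell coil set (assumes NumPy; unit
# current, unit central radius, and mu_0 = 1, so only the field shape matters).
import numpy as np

def loop_axial_field(z, z0, radius, turns, current=1.0):
    # Standard on-axis field of a circular loop centred at axial position z0.
    return turns * current * radius**2 / (2.0 * (radius**2 + (z - z0)**2) ** 1.5)

R = 1.0                                  # radius of the central coil
r_outer = np.sqrt(4.0 / 7.0) * R         # radius of the two outer coils
d_outer = np.sqrt(3.0 / 7.0) * R         # axial distance of the outer coils

z = np.linspace(-0.3 * R, 0.3 * R, 601)
B = (loop_axial_field(z, 0.0, R, 64)
     + loop_axial_field(z, +d_outer, r_outer, 49)
     + loop_axial_field(z, -d_outer, r_outer, 49))

# Relative variation of the total field over the central +/- 0.3 R of the axis.
print("max relative deviation:", (B.max() - B.min()) / B[len(z) // 2])
```

Running the same calculation for a Helmholtz pair over the same interval gives a noticeably larger deviation, which is the sense in which the three-coil arrangement trades extra material for better uniformity.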
Gradient-field Maxwell coil.
A gradient-field Maxwell coil is essentially the same geometry as the 3-coil configuration above, with the central coil removed to leave only the two smaller coils and with the current in one of them reversed. This produces a uniform-gradient magnetic field near the centre of the two coils. Maxwell describes the use of the 2-coil configuration for the generation of a uniform force on a small test coil. A Maxwell coil of this type is similar to a Helmholtz coil with the coil distance increased from the coil radius formula_2 to formula_4 and with the coils fed with opposite currents.
| [
{
"math_id": 0,
"text": "\\textstyle \\sqrt{\\frac{4}{7}}R "
},
{
"math_id": 1,
"text": "\\textstyle \\sqrt{\\frac{3}{7}}R"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "\\frac{49}{64}"
},
{
"math_id": 4,
"text": "\\sqrt{3}R"
}
] | https://en.wikipedia.org/wiki?curid=1283871 |
12842125 | Group Hopf algebra | In mathematics, the group Hopf algebra of a given group is a certain construct related to the symmetries of group actions. Deformations of group Hopf algebras are foundational in the theory of quantum groups.
Definition.
Let "G" be a group and "k" a field. The "group Hopf algebra" of "G" over "k", denoted "kG" (or "k"["G"]), is as a set (and a vector space) the free vector space on "G" over "k". As an algebra, its product is defined by linear extension of the group composition in "G", with multiplicative unit the identity in "G"; this product is also known as convolution.
Note that while the group algebra of a "finite" group can be identified with the space of functions on the group, for an infinite group these are different. The group algebra, consisting of "finite" sums, corresponds to functions on the group that vanish for cofinitely many points; topologically (using the discrete topology), these are the functions with compact support.
However, the group algebra formula_0 and formula_1 – the commutative algebra of functions of "G" into "k" – are dual: given an element of the group algebra formula_2 and a function on the group formula_3 these pair to give an element of "k" via formula_4 which is a well-defined sum because it is finite.
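A small Python sketch may make the convolution product and this pairing concrete. It represents an element of the group algebra as a finitely supported dictionary from group elements to coefficients and uses the integers modulo n as an illustrative choice of G; all names below are illustrative, not standard notation.

```python
# Sketch: k[G] as finitely supported dictionaries {group element: coefficient},
# with G = Z/n (integers mod n under addition) as an illustrative example.
n = 5
identity = 0

def compose(g, h):
    return (g + h) % n                   # the group operation of G

def convolve(x, y):
    # Product in k[G]: linear extension of the group operation (convolution).
    out = {}
    for g, a in x.items():
        for h, b in y.items():
            gh = compose(g, h)
            out[gh] = out.get(gh, 0) + a * b
    return out

def pair(x, f):
    # Pairing of x in k[G] with a function f: G -> k, namely sum_g a_g f(g).
    return sum(a * f(g) for g, a in x.items())

x = {1: 2, 3: -1}                        # x = 2*[1] - [3]
y = {2: 1, 4: 5}                         # y = [2] + 5*[4]
print(convolve(x, y))                    # {3: 2, 0: 9, 2: -5}
print(pair(x, lambda g: g * g))          # 2*1 - 9 = -7
```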
Hopf algebra structure.
We give "kG" the structure of a cocommutative Hopf algebra by defining the coproduct, counit, and antipode to be the linear extensions of the following maps defined on "G":
formula_5
formula_6
formula_7
The required Hopf algebra compatibility axioms are easily checked. Notice that formula_8, the set of group-like elements of "kG" (i.e. elements formula_9 such that formula_10 and formula_11), is precisely "G".
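Continuing the dictionary representation from the sketch above (and reusing its illustrative n, identity and compose), the coproduct, counit and antipode are obtained by extending the three maps just given linearly; the snippet below also spot-checks the antipode axiom on an element.

```python
# Coproduct, counit and antipode on kG, extended linearly from group elements
# (reuses n, identity and compose from the previous sketch).
def inverse(g):
    return (-g) % n                          # inversion in the group Z/n

def coproduct(x):
    return {(g, g): a for g, a in x.items()}        # Delta(g) = g (x) g

def counit(x):
    return sum(x.values())                          # epsilon(g) = 1

def antipode(x):
    return {inverse(g): a for g, a in x.items()}    # S(g) = g^{-1}

def check_antipode_axiom(x):
    # m((S (x) id)(Delta(x))) should equal epsilon(x) times the group identity.
    total = {}
    for (g, h), a in coproduct(x).items():
        gh = compose(inverse(g), h)
        total[gh] = total.get(gh, 0) + a
    return (total.get(identity, 0) == counit(x)
            and all(v == 0 for g, v in total.items() if g != identity))

print(antipode({1: 2, 3: -1}))                      # {4: 2, 2: -1}
print(check_antipode_axiom({1: 2, 3: -1}))          # True
```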
Symmetries of group actions.
Let "G" be a group and "X" a topological space. Any action formula_12 of "G" on "X" gives a homomorphism formula_13, where "F"("X") is an appropriate algebra of "k"-valued functions, such as the Gelfand–Naimark algebra formula_14 of continuous functions vanishing at infinity. The homomorphism formula_15 is defined by formula_16, with the adjoint formula_17 defined by
formula_18
for formula_19, and formula_20.
This may be described by a linear mapping
formula_21
formula_22
where formula_23, formula_24 are the elements of "G", and formula_25, which has the property that group-like elements in formula_26 give rise to automorphisms of "F"("X").
formula_27 endows "F"("X") with an important extra structure, described below.
Hopf module algebras and the Hopf smash product.
Let "H" be a Hopf algebra. A (left) "Hopf H-module algebra" "A" is an algebra which is a (left) module over the algebra "H" such that formula_28 and
formula_29
whenever formula_30, formula_31 and formula_32 in sumless Sweedler notation. When formula_27 has been defined as in the previous section, this turns "F"("X") into a left Hopf "kG"-module algebra, which allows the following construction.
Let "H" be a Hopf algebra and "A" a left Hopf "H"-module algebra. The "smash product" algebra formula_33 is the vector space formula_34 with the product
formula_35,
and we write formula_36 for formula_37 in this context.
In our case, formula_38 and formula_39, and we have
formula_40.
In this case the smash product algebra formula_41 is also denoted by formula_42.
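For a finite group acting on a finite set this product can be written out explicitly. The sketch below reuses the illustrative n and compose from the earlier sketches, with G = Z/n acting on X = Z/n by translation (an abelian choice, which keeps the action convention from the previous section unambiguous); it implements the product formula and spot-checks its associativity on random elements.

```python
# Sketch of the smash product F(X) # kG for G = Z/n acting on X = Z/n by
# translation (reuses n and compose from the sketches above).  An element is a
# dictionary {g: function on X}, each function being a dictionary {x: value}.
import random

def act(g, f):
    # (g . f)(x) = f(g . x), here with the translation action g . x = g + x mod n.
    return {x: f[(g + x) % n] for x in range(n)}

def fmul(f1, f2):
    # Pointwise product in F(X).
    return {x: f1[x] * f2[x] for x in range(n)}

def smash_mul(u, v):
    # (a # g1)(b # g2) = a * (g1 . b) # g1 g2, extended bilinearly.
    out = {}
    for g1, a in u.items():
        for g2, b in v.items():
            g = compose(g1, g2)
            term = fmul(a, act(g1, b))
            acc = out.setdefault(g, {x: 0 for x in range(n)})
            for x in range(n):
                acc[x] += term[x]
    return out

def random_element():
    return {g: {x: random.randint(-2, 2) for x in range(n)} for g in range(n)}

u, v, w = (random_element() for _ in range(3))
print(smash_mul(smash_mul(u, v), w) == smash_mul(u, smash_mul(v, w)))  # True
```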
The cyclic homology of Hopf smash products has been computed. However, there the smash product is called a crossed product and denoted formula_43; it should not be confused with the crossed product derived from formula_44-dynamical systems.
| [
{
"math_id": 0,
"text": "k[G]"
},
{
"math_id": 1,
"text": "k^G"
},
{
"math_id": 2,
"text": "x = \\sum_{g\\in G} a_g g"
},
{
"math_id": 3,
"text": "f\\colon G \\to k,"
},
{
"math_id": 4,
"text": "(x,f) = \\sum_{g\\in G} a_g f(g),"
},
{
"math_id": 5,
"text": "\\Delta(x) = x \\otimes x;"
},
{
"math_id": 6,
"text": "\\epsilon(x) = 1_{k};"
},
{
"math_id": 7,
"text": "S(x) = x^{-1}. "
},
{
"math_id": 8,
"text": "\\mathcal{G}(kG)"
},
{
"math_id": 9,
"text": "a \\in kG"
},
{
"math_id": 10,
"text": "\\Delta(a) = a \\otimes a"
},
{
"math_id": 11,
"text": "\\epsilon(a)=1"
},
{
"math_id": 12,
"text": "\\alpha\\colon G \\times X \\to X"
},
{
"math_id": 13,
"text": "\\phi_\\alpha\\colon G \\to \\mathrm{Aut}(F(X))"
},
{
"math_id": 14,
"text": "C_0(X)"
},
{
"math_id": 15,
"text": "\\phi_{\\alpha}"
},
{
"math_id": 16,
"text": "\\phi_\\alpha(g) = \\alpha^*_g"
},
{
"math_id": 17,
"text": "\\alpha^*_{g}"
},
{
"math_id": 18,
"text": "\\alpha^*_g(f)x = f(\\alpha(g,x))"
},
{
"math_id": 19,
"text": "g \\in G, f \\in F(X)"
},
{
"math_id": 20,
"text": "x \\in X"
},
{
"math_id": 21,
"text": "\\lambda\\colon kG \\otimes F(X) \\to F(X) "
},
{
"math_id": 22,
"text": "\\lambda((c_1 g_1 + c_2 g_2 + \\cdots ) \\otimes f)(x) = c_1 f(g_1 \\cdot x) + c_2 f(g_2 \\cdot x) + \\cdots"
},
{
"math_id": 23,
"text": "c_1,c_2,\\ldots \\in k"
},
{
"math_id": 24,
"text": "g_1, g_2,\\ldots"
},
{
"math_id": 25,
"text": "g_i \\cdot x := \\alpha(g_i,x)"
},
{
"math_id": 26,
"text": "kG"
},
{
"math_id": 27,
"text": "\\lambda"
},
{
"math_id": 28,
"text": "h \\cdot 1_A = \\epsilon(h)1_A"
},
{
"math_id": 29,
"text": "h \\cdot (ab) = (h_{(1)} \\cdot a)(h_{(2)} \\cdot b)"
},
{
"math_id": 30,
"text": "a, b \\in A"
},
{
"math_id": 31,
"text": "h \\in H"
},
{
"math_id": 32,
"text": "\\Delta(h) = h_{(1)} \\otimes h_{(2)}"
},
{
"math_id": 33,
"text": "A\\mathop{\\#} H"
},
{
"math_id": 34,
"text": "A \\otimes H"
},
{
"math_id": 35,
"text": "(a \\otimes h)(b \\otimes k) := a(h_{(1)} \\cdot b) \\otimes h_{(2)}k"
},
{
"math_id": 36,
"text": "a\\mathop{\\#} h"
},
{
"math_id": 37,
"text": "a \\otimes h"
},
{
"math_id": 38,
"text": "A = F(X)"
},
{
"math_id": 39,
"text": "H = kG"
},
{
"math_id": 40,
"text": "(a\\mathop{\\#} g_1)(b\\mathop{\\#} g_2) = a(g_1 \\cdot b)\\mathop{\\#} g_1 g_2"
},
{
"math_id": 41,
"text": "A\\mathop{\\#} kG"
},
{
"math_id": 42,
"text": "A\\mathop{\\#} G"
},
{
"math_id": 43,
"text": "A \\rtimes H"
},
{
"math_id": 44,
"text": "C^{*}"
}
] | https://en.wikipedia.org/wiki?curid=12842125 |
1284226 | Elementary class | In model theory, a branch of mathematical logic, an elementary class (or axiomatizable class) is a class consisting of all structures satisfying a fixed first-order theory.
Definition.
A class "K" of structures of a signature σ is called an elementary class if there is a first-order theory "T" of signature σ, such that "K" consists of all models of "T", i.e., of all σ-structures that satisfy "T". If "T" can be chosen as a theory consisting of a single first-order sentence, then "K" is called a basic elementary class.
More generally, "K" is a pseudo-elementary class if there is a first-order theory "T" of a signature that extends σ, such that "K" consists of all σ-structures that are reducts to σ of models of "T". In other words, a class "K" of σ-structures is pseudo-elementary if and only if there is an elementary class "K'" such that "K" consists of precisely the reducts to σ of the structures in "K'".
For obvious reasons, elementary classes are also called axiomatizable in first-order logic, and basic elementary classes are called finitely axiomatizable in first-order logic. These definitions extend to other logics in the obvious way, but since the first-order case is by far the most important, axiomatizable implicitly refers to this case when no other logic is specified.
Conflicting and alternative terminology.
While the above is nowadays standard terminology in "infinite" model theory, the slightly different earlier definitions are still in use in finite model theory, where an elementary class may be called a Δ-elementary class, and the terms elementary class and first-order axiomatizable class are reserved for basic elementary classes (Ebbinghaus et al. 1994, Ebbinghaus and Flum 2005). Hodges calls elementary classes axiomatizable classes, and he refers to basic elementary classes as definable classes. He also uses the respective synonyms ECformula_0 class and EC class (Hodges, 1993).
There are good reasons for this diverging terminology. The signatures that are considered in general model theory are often infinite, while a single first-order sentence contains only finitely many symbols. Therefore, basic elementary classes are atypical in infinite model theory. Finite model theory, on the other hand, deals almost exclusively with finite signatures. It is easy to see that for every finite signature σ and for every class "K" of σ-structures closed under isomorphism there is an elementary class formula_1 of σ-structures such that "K" and formula_1 contain precisely the same finite structures. Hence, elementary classes are not very interesting for finite model theorists.
Easy relations between the notions.
Clearly every basic elementary class is an elementary class, and every elementary class is a pseudo-elementary class. Moreover, as an easy consequence of the compactness theorem, a class of σ-structures is basic elementary if and only if it is elementary and its complement is also elementary.
Examples.
A basic elementary class.
Let σ be a signature consisting only of a unary function symbol "f". The class "K" of σ-structures in which "f" is one-to-one is a basic elementary class. This is witnessed by the theory "T", which consists only of the single sentence
formula_2.
An elementary, basic pseudoelementary class that is not basic elementary.
Let σ be an arbitrary signature. The class "K" of all infinite σ-structures is elementary. To see this, consider the sentences
formula_3 "formula_4",
formula_5 "formula_6",
and so on. (So the sentence formula_7 says that there are at least "n" elements.) The infinite σ-structures are precisely the models of the theory
formula_8.
But "K" is not a basic elementary class. Otherwise the infinite σ-structures would be precisely those that satisfy a certain first-order sentence τ. But then the set
formula_9 would be inconsistent. By the compactness theorem, for some natural number "n" the set formula_10 would be inconsistent. But this is absurd, because this theory is satisfied by any finite σ-structure with formula_11 or more elements.
However, there is a basic elementary class "K'" in the signature σ' = σ formula_12 {"f"}, where "f" is a unary function symbol, such that "K" consists exactly of the reducts to σ of σ'-structures in "K'". "K'" is axiomatised by the single sentence formula_13, which expresses that "f" is injective but not surjective. Therefore, "K" is elementary and what could be called basic pseudo-elementary, but not basic elementary.
Pseudo-elementary class that is non-elementary.
Finally, consider the signature σ consisting of a single unary relation symbol "P". Every σ-structure is partitioned into two subsets: Those elements for which "P" holds, and the rest. Let "K" be the class of all σ-structures for which these two subsets have the same cardinality, i.e., there is a bijection between them. This class is not elementary, because a σ-structure in which both the set of realisations of "P" and its complement are countably infinite satisfies precisely the same first-order sentences as a σ-structure in which one of the sets is countably infinite and the other is uncountable.
Now consider the signature formula_14, which consists of "P" along with a unary function symbol "f". Let formula_1 be the class of all formula_14-structures such that "f" is a bijection and "P" holds for "x" iff "P" does not hold for "f(x)". formula_1 is clearly an elementary class, and therefore "K" is an example of a pseudo-elementary class that is not elementary.
Non-pseudo-elementary class.
Let σ be an arbitrary signature. The class "K" of all finite σ-structures is not elementary, because (as shown above) its complement is elementary but not basic elementary. Since this is also true for every signature extending σ, "K" is not even a pseudo-elementary class.
This example demonstrates the limits of expressive power inherent in first-order logic as opposed to the far more expressive second-order logic. Second-order logic, however, fails to retain many desirable properties of first-order logic, such as the completeness and compactness theorems. | [
{
"math_id": 0,
"text": "_\\Delta"
},
{
"math_id": 1,
"text": "K'"
},
{
"math_id": 2,
"text": "\\forall x\\forall y( (f(x)=f(y)) \\to (x=y) )"
},
{
"math_id": 3,
"text": "\\rho_2={}"
},
{
"math_id": 4,
"text": "\\exist x_1\\exist x_2(x_1 \\not =x_2)"
},
{
"math_id": 5,
"text": "\\rho_3={}"
},
{
"math_id": 6,
"text": "\\exist x_1\\exist x_2\\exist x_3((x_1 \\not =x_2) \\land (x_1 \\not =x_3) \\land (x_2 \\not =x_3))"
},
{
"math_id": 7,
"text": "\\rho_n"
},
{
"math_id": 8,
"text": "T_\\infty=\\{\\rho_2, \\rho_3, \\rho_4, \\dots\\}"
},
{
"math_id": 9,
"text": "\\{\\neg\\tau, \\rho_2, \\rho_3, \\rho_4, \\dots\\}"
},
{
"math_id": 10,
"text": "\\{\\neg\\tau, \\rho_2, \\rho_3, \\rho_4, \\dots, \\rho_n\\}"
},
{
"math_id": 11,
"text": "n+1"
},
{
"math_id": 12,
"text": "\\cup"
},
{
"math_id": 13,
"text": "(\\forall x\\forall y(f(x) = f(y) \\rightarrow x=y) \\land \\exists y\\neg\\exists x(y = f(x))),"
},
{
"math_id": 14,
"text": "\\sigma'"
}
] | https://en.wikipedia.org/wiki?curid=1284226 |
1284311 | Johnson's algorithm | Computer-based path-finding method
Johnson's algorithm is a way to find the shortest paths between all pairs of vertices in an edge-weighted directed graph. It allows some of the edge weights to be negative numbers, but no negative-weight cycles may exist. It works by using the Bellman–Ford algorithm to compute a transformation of the input graph that removes all negative weights, allowing Dijkstra's algorithm to be used on the transformed graph. It is named after Donald B. Johnson, who first published the technique in 1977.
A similar reweighting technique is also used in Suurballe's algorithm for finding two disjoint paths of minimum total length between the same two vertices in a graph with non-negative edge weights.
Algorithm description.
Johnson's algorithm consists of the following steps. First, a new vertex q is added to the graph, connected by zero-weight edges to each of the other vertices. Second, the Bellman–Ford algorithm is used, starting from the new vertex q, to find for each vertex v the minimum weight "h"("v") of a path from q to v; if this step detects a negative cycle, the algorithm is terminated. Next, the edges of the original graph are reweighted using the values computed by the Bellman–Ford algorithm: an edge from u to v with length "w"("u","v") is given the new length "w"("u","v") + "h"("u") − "h"("v"). Finally, q is removed, and Dijkstra's algorithm is used to find the shortest paths from each node s to every other vertex in the reweighted graph; the corresponding distance in the original graph is recovered by adding "h"("t") − "h"("s") to the reweighted distance from s to t.
Example.
The first three stages of Johnson's algorithm are depicted in the illustration below.
The graph on the left of the illustration has two negative edges, but no negative cycles. The center graph shows the new vertex q, a shortest path tree as computed by the Bellman–Ford algorithm with q as starting vertex, and the values "h"("v") computed at each other node as the length of the shortest path from q to that node. Note that these values are all non-positive, because q has a length-zero edge to each vertex and the shortest path can be no longer than that edge. On the right is shown the reweighted graph, formed by replacing each edge weight "w"("u","v") by "w"("u","v") + "h"("u") − "h"("v"). In this reweighted graph, all edge weights are non-negative, but the shortest path between any two nodes uses the same sequence of edges as the shortest path between the same two nodes in the original graph. The algorithm concludes by applying Dijkstra's algorithm to each of the four starting nodes in the reweighted graph.
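The whole procedure is compact enough to sketch in Python (standard library only; the adjacency-list representation and the function names are illustrative choices, not part of the original presentation). The values h computed by the Bellman–Ford stage play the role of the potentials "h"("v") described above.

```python
# A compact sketch of Johnson's algorithm (standard library only).  The graph
# is a dict in which every vertex appears as a key: graph[u] = list of
# (v, weight) edges leaving u.
import heapq

def bellman_ford(graph, source):
    dist = {v: float('inf') for v in graph}
    dist[source] = 0.0
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    for u in graph:                        # detect negative-weight cycles
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                raise ValueError("negative-weight cycle")
    return dist

def dijkstra(graph, source):
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def johnson(graph):
    # 1. new vertex q with a zero-weight edge to every vertex
    q = object()
    augmented = {q: [(v, 0.0) for v in graph],
                 **{u: list(es) for u, es in graph.items()}}
    # 2. Bellman-Ford from q gives the potentials h(v)
    h = bellman_ford(augmented, q)
    # 3. reweight: w'(u, v) = w(u, v) + h(u) - h(v), which is non-negative
    reweighted = {u: [(v, w + h[u] - h[v]) for v, w in es]
                  for u, es in graph.items()}
    # 4. Dijkstra from every vertex, then undo the reweighting
    all_pairs = {}
    for u in graph:
        du = dijkstra(reweighted, u)
        all_pairs[u] = {v: d - h[u] + h[v] for v, d in du.items()}
    return all_pairs
```

Calling johnson(graph) on such an adjacency-list dictionary returns, for each vertex, a dictionary of the original (un-reweighted) shortest-path distances to the vertices it can reach.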
Correctness.
In the reweighted graph, all paths between a pair s and t of nodes have the same quantity "h"("s") − "h"("t") added to them. The previous statement can be proven as follows: Let p be a path from s to t. Its weight W in the reweighted graph is given by the following expression:
formula_0
Every formula_1 is cancelled by formula_2 in the previous bracketed expression; therefore, we are left with the following expression for "W":
formula_3
The bracketed expression is the weight of "p" in the original weighting.
Since the reweighting adds the same amount to the weight of every path from s to t, a path is a shortest path in the original weighting if and only if it is a shortest path after reweighting. The weight of edges that belong to a shortest path from "q" to any node is zero, and therefore the lengths of the shortest paths from "q" to every node become zero in the reweighted graph; however, they still remain shortest paths. Therefore, there can be no negative edges: if edge "uv" had a negative weight after the reweighting, then the zero-length path from "q" to "u" together with this edge would form a negative-length path from "q" to "v", contradicting the fact that all vertices have zero distance from "q". The non-existence of negative edges ensures the optimality of the paths found by Dijkstra's algorithm. The distances in the original graph may be calculated from the distances calculated by Dijkstra's algorithm in the reweighted graph by reversing the reweighting transformation.
Analysis.
The time complexity of this algorithm, using Fibonacci heaps in the implementation of Dijkstra's algorithm, is formula_4: the algorithm uses formula_5 time for the Bellman–Ford stage of the algorithm, and formula_6 for each of the formula_7 instantiations of Dijkstra's algorithm. Thus, when the graph is sparse, the total time can be faster than the Floyd–Warshall algorithm, which solves the same problem in time formula_8.
| [
{
"math_id": 0,
"text": "\\left(w(s, p_1) + h(s) - h(p_1)\\right) + \\left(w(p_1, p_2) + h(p_1) - h(p_2)\\right) + ... + \\left(w(p_n, t) + h(p_n) - h(t)\\right)."
},
{
"math_id": 1,
"text": "+h(p_i)"
},
{
"math_id": 2,
"text": "-h(p_i)"
},
{
"math_id": 3,
"text": "\\left(w(s, p_1) + w(p_1, p_2) + \\cdots + w(p_n, t)\\right)+ h(s) - h(t)"
},
{
"math_id": 4,
"text": "O(|V|^2\\log |V| + |V||E|)"
},
{
"math_id": 5,
"text": "O(|V||E|)"
},
{
"math_id": 6,
"text": "O(|V|\\log |V| + |E|)"
},
{
"math_id": 7,
"text": "|V|"
},
{
"math_id": 8,
"text": "O(|V|^3)"
}
] | https://en.wikipedia.org/wiki?curid=1284311 |
12845371 | Borel fixed-point theorem | Fixed-point theorem in algebraic geometry
In mathematics, the Borel fixed-point theorem is a fixed-point theorem in algebraic geometry generalizing the Lie–Kolchin theorem. The result was proved by Armand Borel (1956).
Statement.
If "G" is a connected, solvable, linear algebraic group acting regularly on a non-empty, complete algebraic variety "V" over an algebraically closed field "k", then there is a "G" fixed-point of "V".
A more general version of the theorem holds over a field "k" that is not necessarily algebraically closed. A solvable algebraic group "G" is "split over k" or "k-split" if "G" admits a composition series whose composition factors are isomorphic (over "k") to the additive group formula_0 or the multiplicative group formula_1. If "G" is a connected, "k"-split solvable algebraic group acting regularly on a complete variety "V" having a "k"-rational point, then there is a "G"-fixed point of "V".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb G_a"
},
{
"math_id": 1,
"text": "\\mathbb G_m"
}
] | https://en.wikipedia.org/wiki?curid=12845371 |
1284664 | Big Five personality traits | Personality model consisting of five broad dimensions
In trait theory, the Big Five personality traits (sometimes known as the five-factor model of personality or OCEAN model) are a group of five broad characteristics used to study personality: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism.
When factor analysis is applied to personality survey data, terms describing related aspects of personality are often found to apply to the same person. For example, someone described as conscientious is more likely to be described as "always prepared" rather than "messy". These associations suggest five broad dimensions used in common language to describe the human personality, temperament, and psyche.
Those labels for the five factors may be remembered using the acronyms "OCEAN" or "CANOE". Beneath each proposed global factor, there are a number of correlated and more specific primary factors. For example, extraversion is typically associated with qualities such as gregariousness, assertiveness, excitement-seeking, warmth, activity, and positive emotions. These traits are not black and white; each one is treated as a spectrum.
History.
The Big Five model was built to understand the relationship between personality and academic behaviour. It was defined by several independent sets of researchers who analysed words describing people's behaviour. These researchers first studied relationships between a large number of words related to personality traits. They reduced these word lists by a factor of five to ten and then used factor analysis to group the remaining traits (with data mostly based upon people's estimations, in self-report questionnaires and peer ratings) in order to find the basic factors of personality.
The initial model was advanced in 1958 by Ernest Tupes and Raymond Christal, research psychologists working at Lackland Air Force Base in Texas, but it did not reach a wider scholarly and scientific audience until the 1980s. In 1990, J.M. Digman advanced his five-factor model of personality, which Lewis Goldberg extended to the highest level of organisation. These five overarching domains have been found to contain most known personality traits and are assumed to represent the basic structure behind them all.
At least four sets of researchers have worked independently for decades to reflect personality traits in language and have mainly identified the same five factors: Tupes and Christal were first, followed by Goldberg at the Oregon Research Institute, Cattell at the University of Illinois, and finally Costa and McCrae. These four sets of researchers used somewhat different methods in finding the five traits, making the sets of five factors have varying names and meanings. However, all have been found to be strongly correlated with their corresponding factors. Studies indicate that the Big Five traits are not nearly as powerful in predicting and explaining actual behaviour as the more numerous facets or primary traits.
Each of the Big Five personality traits contains two separate, but correlated, aspects reflecting a level of personality below the broad domains but above the many facet scales also making up part of the Big Five. The aspects are labelled as follows: Volatility and Withdrawal for Neuroticism; Enthusiasm and Assertiveness for Extraversion; Intellect and Openness for Openness to Experience; Industriousness and Orderliness for Conscientiousness; and Compassion and Politeness for Agreeableness.
Finding the five factors.
In 1884, British scientist Sir Francis Galton became the first person known to consider deriving a comprehensive taxonomy of human personality traits by sampling language. The idea that this may be possible is known as the lexical hypothesis. In 1936, American psychologists Gordon Allport of Harvard University and Henry Odbert of Dartmouth College implemented Galton's hypothesis. They organised for three anonymous people to categorise adjectives from Webster's New International Dictionary and a list of common slang words. The result was a list of 4504 adjectives they believed were descriptive of observable and relatively permanent traits.
In 1943, Raymond Cattell of Harvard University took Allport and Odbert's list and reduced this to a list of roughly 160 terms by eliminating words with very similar meanings. To these, he added terms from 22 other psychological categories, and additional "interest" and "abilities" terms. This resulted in a list of 171 traits. From this he used factor analysis to derive 60 "personality clusters or syndromes" and an additional 7 minor clusters. Cattell then narrowed this down to 35 terms, and later added a 36th factor in the form of an IQ measure. Through factor analysis from 1945 to 1948, he created 11 or 12 factor solutions.
In 1947, Hans Eysenck of University College London published his book "Dimensions of Personality". He posited that the two most important personality dimensions were "Extraversion" and "Neuroticism", a term that he coined.
In July 1949, Donald Fiske of the University of Chicago used 22 terms either taken from or adapted from Cattell's 1947 study and, through surveys of male university students and statistical analysis, derived five factors: "Social Adaptability", "Emotional Control", "Conformity", "Inquiring Intellect", and "Confident Self-expression". In the same year, Cattell, with Maurice Tatsuoka and Herbert Eber, found 4 additional factors, which they believed consisted of information that could only be provided through self-rating. With this understanding, they created the sixteen-factor 16PF Questionnaire.
In 1953, John W French of Educational Testing Service published an extensive meta-analysis of personality trait factor studies.
In 1957, Ernest Tupes of the United States Air Force undertook a personality trait study of US Air Force officers. Each was rated by their peers using Cattell's 35 terms (or in some cases, the 30 most reliable terms). In 1958, Tupes and Raymond Christal began a US Air Force study by taking 37 personality factors and other data found in Cattell's 1947 paper, Fiske's 1949 paper, and Tupes' 1957 paper. Through statistical analysis, they derived five factors they labeled "Surgency", "Agreeableness", "Dependability", "Emotional Stability", and "Culture". In addition to the influence of Cattell and Fiske's work, they strongly noted the influence of French's 1953 study. Tupes and Christal further tested and explained their 1958 work in a 1961 paper.
Warren Norman of the University of Michigan replicated Tupes and Christal's work in 1963. He relabeled "Surgency" as "Extroversion or Surgency", and "Dependability" as "Conscientiousness". He also found four subordinate scales for each factor. Norman's paper was much more widely read than Tupes and Christal's papers had been. Norman's later Oregon Research Institute colleague Lewis Goldberg continued this work.
In the 4th edition of the 16PF Questionnaire released in 1968, 5 "global factors" derived from the 16 factors were identified: "Extraversion", "Independence", "Anxiety", "Self-control" and "Tough-mindedness". 16PF advocates have since called these "the original Big 5".
Hiatus in research.
During the 1970s, the changing zeitgeist made publication of personality research difficult. In his 1968 book "Personality and Assessment", Walter Mischel asserted that personality instruments could not predict behavior with a correlation of more than 0.3. Social psychologists like Mischel argued that attitudes and behavior were not stable, but varied with the situation. Predicting behavior from personality instruments was claimed to be impossible.
Renewed attention.
In 1978, Paul Costa and Robert McCrae of the National Institutes of Health published a book chapter describing their Neuroticism-Extroversion-Openness (NEO) model. The model was based on the three factors in its name. They used Eysenck's concept of "Extroversion" rather than Carl Jung's. Each factor had six facets. The authors expanded their explanation of the model in subsequent papers.
Also in 1978, British psychologist Peter Saville of Brunel University applied statistical analysis to 16PF results, and determined that the model could be reduced to five factors, "Anxiety", "Extraversion", "Warmth", "Imagination" and "Conscientiousness".
At a 1980 symposium in Honolulu, Lewis Goldberg, Naomi Takemoto-Chock, Andrew Comrey, and John M. Digman, reviewed the available personality instruments of the day. In 1981, Digman and Takemoto-Chock of the University of Hawaii reanalysed data from Cattell, Tupes, Norman, Fiske and Digman. They re-affirmed the validity of the five factors, naming them "Friendly Compliance vs. Hostile Non-compliance", "Extraversion vs. Introversion", "Ego Strength vs. Emotional Disorganization", "Will to Achieve" and "Intellect". They also found weak evidence for the existence of a sixth factor, "Culture".
Peter Saville and his team included the five-factor "Pentagon" model as part of the Occupational Personality Questionnaires (OPQ) in 1984. This was the first commercially available Big Five test. Its factors are "Extroversion", "Vigorous", "Methodical", "Emotional Stability", and "Abstract".
This was closely followed by another commercial test, the NEO PI three-factor personality inventory, published by Costa and McCrae in 1985. It used the three NEO factors. The methodology employed in constructing the NEO instruments has since been subject to critical scrutiny.
Emerging methodologies increasingly confirmed personality theories during the 1980s. Though generally failing to predict single instances of behavior, researchers found that they could predict patterns of behavior by aggregating large numbers of observations. As a result, correlations between personality and behavior increased substantially, and it became clear that "personality" did in fact exist.
In 1992, the NEO PI evolved into the NEO PI-R, adding the factors "Agreeableness" and "Conscientiousness", and becoming a Big Five instrument. This set the names for the factors that are now most commonly used. The NEO maintainers call their model the "Five Factor Model" (FFM). Each NEO personality dimension has six subordinate facets.
Subsequent developments.
Wim Hofstee at the University of Groningen used a lexical hypothesis approach with the Dutch language to develop what became the International Personality Item Pool in the 1990s. Further development in Germany and the United States saw the pool based on three languages. Its questions and results have been mapped to various Big Five personality typing models.
Kibeom Lee and Michael Ashton released a book describing their HEXACO model in 2004. It adds a sixth factor, "Honesty-Humility" to the five (which it calls "Emotionality", "Extraversion", "Agreeableness", "Conscientiousness", and "Openness to Experience"). Each of these factors has four facets.
In 2007, Colin DeYoung, Lena C. Quilty and Jordan Peterson concluded that the 10 aspects of the Big Five may have distinct biological substrates. This was derived through factor analyses of two data samples with the International Personality Item Pool, followed by cross-correlation with scores derived from 10 genetic factors identified as underlying the shared variance among the Revised NEO Personality Inventory facets.
By 2009, personality and social psychologists generally agreed that both personal and situational variables are needed to account for human behavior.
An FFM-associated test was used by Cambridge Analytica, and was part of the "psychographic profiling" controversy during the 2016 US presidential election.
Descriptions of the particular personality traits.
Openness to experience.
Openness to experience is a general appreciation for art, emotion, adventure, unusual ideas, imagination, curiosity, and variety of experience. People who are open to experience are intellectually curious, open to emotion, sensitive to beauty, and willing to try new things. They tend to be, when compared to closed people, more creative and more aware of their feelings. They are also more likely to hold unconventional beliefs. Open people can be perceived as unpredictable or lacking focus, and more likely to engage in risky behaviour or drug-taking. Moreover, individuals with high openness are said to pursue self-actualisation specifically by seeking out intense, euphoric experiences. Conversely, those with low openness want to be fulfilled by persevering and are characterised as pragmatic and data-driven – sometimes even perceived to be dogmatic and closed-minded. Some disagreement remains about how to interpret and contextualise the openness factor, as there is a lack of biological support for this particular trait. Unlike the other four traits, openness has not shown a significant association with any brain region in brain-imaging studies that relate regional volume to each trait.
Conscientiousness.
Conscientiousness is a tendency to be self-disciplined, act dutifully, and strive for achievement against measures or outside expectations. It is related to people's level of impulse control, regulation, and direction. High conscientiousness is often perceived as being stubborn and focused. Low conscientiousness is associated with flexibility and spontaneity, but can also appear as sloppiness and lack of reliability. High conscientiousness indicates a preference for planned rather than spontaneous behaviour.
Extraversion.
Extraversion is characterised by breadth of activities (as opposed to depth), surgency from external activities/situations, and energy creation from external means. The trait is marked by pronounced engagement with the external world. Extraverts enjoy interacting with people, and are often perceived as energetic. They tend to be enthusiastic and action-oriented. They possess high group visibility, like to talk, and assert themselves. Extraverts may appear more dominant in social settings, as opposed to introverts in that setting.
Introverts have lower social engagement and energy levels than extraverts. They tend to seem quiet, low-key, deliberate, and less involved in the social world. Their lack of social involvement should not be interpreted as shyness or depression, but as greater independence of their social world than extraverts. Introverts need less stimulation and more time alone than extraverts. This does not mean that they are unfriendly or antisocial; rather, they are aloof and reserved in social situations.
Generally, people are a combination of extraversion and introversion, with personality psychologist Hans Eysenck suggesting a model by which differences in their brains produce these traits.
Agreeableness.
Agreeableness is the general concern for social harmony. Agreeable individuals value getting along with others. They are generally considerate, kind, generous, trusting and trustworthy, helpful, and willing to compromise their interests with others. Agreeable people also have an optimistic view of human nature.
Disagreeable individuals place self-interest above getting along with others. They are generally unconcerned with others' well-being and are less likely to extend themselves for other people. Sometimes their skepticism about others' motives causes them to be suspicious, unfriendly, and uncooperative. Disagreeable people are often competitive or challenging, which can be seen as argumentative or untrustworthy.
Because agreeableness is a social trait, research has shown that one's agreeableness positively correlates with the quality of relationships with one's team members. Agreeableness also positively predicts transformational leadership skills. In a study conducted among 169 participants in leadership positions in a variety of professions, individuals were asked to take a personality test and be directly evaluated by supervised subordinates. Very agreeable leaders were more likely to be considered transformational rather than transactional. Although the relationship was not strong ("r=0.32", "β=0.28", "p<0.01"), it was the strongest of the Big Five traits. However, the same study could not predict leadership effectiveness as evaluated by the leader's direct supervisor.
Conversely, agreeableness has been found to be negatively related to transactional leadership in the military. A study of Asian military units showed that agreeable people are more likely to be poor transactional leaders. Therefore, with further research, organisations may be able to determine an individual's potential for performance based on their personality traits. For instance, in their journal article "Which Personality Attributes Are Most Important in the Workplace?" Paul Sackett and Philip Walmsley claim that conscientiousness and agreeableness are "important to success across many different jobs."
Neuroticism.
Neuroticism is the tendency to have strong negative emotions, such as anger, anxiety, or depression. It is sometimes called emotional instability, or is reversed and referred to as emotional stability. According to Hans Eysenck's (1967) theory of personality, neuroticism is associated with low tolerance for stress or strongly disliked changes. Neuroticism is a classic temperament trait that has been studied in temperament research for decades, even before it was adapted by the Five Factor Model.
Neurotic people are emotionally reactive and vulnerable to stress. They are more likely to interpret ordinary situations as threatening. They can perceive minor frustrations as hopelessly difficult. Their negative emotional reactions tend to stay for unusually long periods of time, which means they are often in a bad mood. For instance, neuroticism is connected to pessimism toward work, to certainty that work hinders personal relationships, and to higher levels of anxiety from the pressures at work. Furthermore, neurotic people may display more skin-conductance reactivity than calm and composed people. These problems in emotional regulation can make a neurotic person think less clearly, make worse decisions, and cope less effectively with stress. Being disappointed with one's life achievements can make one more neurotic and increase one's chances of falling into clinical depression. Moreover, neurotic individuals tend to experience more negative life events, but neuroticism also changes in response to positive and negative life experiences. Also, neurotic people tend to have worse psychological well-being.
At the other end of the scale, less neurotic individuals are less easily upset and are less emotionally reactive. They tend to be calm, emotionally stable, and free from persistent negative feelings. Freedom from negative feelings does not mean that low scorers experience a lot of positive feelings; that is related to extraversion instead.
Neuroticism is similar but not identical to being neurotic in the Freudian sense (i.e., neurosis). Some psychologists prefer to call neuroticism by the term emotional instability to differentiate it from the term neurotic in a career test.
Biological and developmental factors.
The factors that influence a personality are called the determinants of personality. These factors determine the traits which a person develops in the course of development from childhood.
Temperament and personality.
There are debates between temperament researchers and personality researchers as to whether or not biologically based differences define a concept of temperament or a part of personality. The presence of such differences in pre-cultural individuals (such as animals or young infants) suggests that they belong to temperament since personality is a socio-cultural concept. For this reason developmental psychologists generally interpret individual differences in children as an expression of temperament rather than personality. Some researchers argue that temperaments and personality traits are age-specific demonstrations of virtually the same internal qualities. Some believe that early childhood temperaments may become adolescent and adult personality traits as individuals' basic genetic characteristics interact with their changing environments to various degrees.
Researchers of adult temperament point out that, similarly to sex, age, and mental illness, temperament is based on biochemical systems whereas personality is a product of socialisation of an individual possessing these four types of features. Temperament interacts with socio-cultural factors, but, similar to sex and age, still cannot be controlled or easily changed by these factors.
Therefore, it is suggested that temperament (neurochemically based individual differences) should be kept as an independent concept for further studies and not be confused with personality (culturally-based individual differences, reflected in the origin of the word "persona" (Lat) as a "social mask").
Moreover, temperament refers to dynamic features of behaviour (energetic, tempo, sensitivity, and emotionality-related), whereas personality is to be considered a psycho-social construct comprising the content characteristics of human behaviour (such as values, attitudes, habits, preferences, personal history, self-image). Temperament researchers point out that the lack of attention to surviving temperament research by the creators of the Big Five model led to an overlap between its dimensions and dimensions described in multiple temperament models much earlier. For example, neuroticism reflects the traditional temperament dimension of emotionality studied by Jerome Kagan's group since the '60s. Extraversion was also first introduced as a temperament type by Jung from the '20s.
Heritability.
A 1996 behavioural genetics study of twins suggested that heritability (the degree of "variation" in a trait within a population that is due to genetic variation in that population) and environmental factors both influence all five factors to the same degree. Among four twin studies examined in 2003, the mean percentage of heritability was calculated for each personality trait, and it was concluded that heritability influenced the five factors broadly. The self-report measures were as follows: openness to experience was estimated to have a 57% genetic influence, extraversion 54%, conscientiousness 49%, neuroticism 48%, and agreeableness 42%.
Non-humans.
The Big Five personality traits have been assessed in some non-human species, but the methodology is debatable. In one series of studies, human ratings of chimpanzees using the Hominoid Personality Questionnaire revealed factors of extraversion, conscientiousness, and agreeableness, as well as an additional factor of dominance, across hundreds of chimpanzees in zoological parks, a large naturalistic sanctuary, and a research laboratory. Neuroticism and openness factors were found in an original zoo sample, but were not replicated in a new zoo sample or in other settings (perhaps reflecting the design of the CPQ). A study review found that markers for the three dimensions extraversion, neuroticism, and agreeableness were found most consistently across different species, followed by openness; only chimpanzees showed markers for conscientious behavior.
A study completed in 2020 concluded that dolphins have some personality traits similar to humans. Both are large-brained, intelligent animals, but they have evolved separately for millions of years.
Development during childhood and adolescence.
Research on the Big Five, and personality in general, has focused primarily on individual differences in adulthood, rather than in childhood and adolescence, and often include temperament traits. Recently, there has been growing recognition of the need to study child and adolescent personality trait development in order to understand how traits develop and change throughout the lifespan.
Recent studies have begun to explore the developmental origins and trajectories of the Big Five among children and adolescents, especially those that relate to temperament. Many researchers have sought to distinguish between personality and temperament. Temperament often refers to early behavioral and affective characteristics that are thought to be driven primarily by genes. Models of temperament often include four trait dimensions: surgency/sociability, negative emotionality, persistence/effortful control, and activity level. Some of these differences in temperament are evident at, if not before, birth. For example, both parents and researchers recognize that some newborn infants are peaceful and easily soothed while others are comparatively fussy and hard to calm. Unlike temperament, however, many researchers view the development of personality as gradually occurring throughout childhood. Contrary to some researchers who question whether children have stable personality traits, Big Five or otherwise, most researchers contend that there are significant psychological differences between children that are associated with relatively stable, distinct, and salient behavior patterns.
The structure, manifestations, and development of the Big Five in childhood and adolescence have been studied using a variety of methods, including parent- and teacher-ratings, preadolescent and adolescent self- and peer-ratings, and observations of parent-child interactions. Results from these studies support the relative stability of personality traits across the human lifespan, at least from preschool age through adulthood. More specifically, research suggests that four of the Big Five – namely Extraversion, Neuroticism, Conscientiousness, and Agreeableness – reliably describe personality differences in childhood, adolescence, and adulthood. However, some evidence suggests that Openness may not be a fundamental, stable part of childhood personality. Although some researchers have found that Openness in children and adolescents relates to attributes such as creativity, curiosity, imagination, and intellect, many researchers have failed to find distinct individual differences in Openness in childhood and early adolescence. Potentially, Openness may (a) manifest in unique, currently unknown ways in childhood or (b) may only manifest as children develop socially and cognitively. Other studies have found evidence for all of the Big Five traits in childhood and adolescence as well as two other child-specific traits: Irritability and Activity. Despite these specific differences, the majority of findings suggest that personality traits – particularly Extraversion, Neuroticism, Conscientiousness, and Agreeableness – are evident in childhood and adolescence and are associated with distinct social-emotional patterns of behavior that are largely consistent with adult manifestations of those same personality traits. Some researchers have proposed the youth personality trait is best described by six trait dimensions: neuroticism, extraversion, openness to experience, agreeableness, conscientiousness, and activity. Despite some preliminary evidence for this "Little Six" model, research in this area has been delayed by a lack of available measures.
Previous research has found evidence that most adults become more agreeable and conscientious and less neurotic as they age. This has been referred to as the maturation effect. Many researchers have sought to investigate how trends in adult personality development compare to trends in youth personality development. Two main population-level indices have been important in this area of research: rank-order consistency and mean-level consistency. Rank-order consistency indicates the relative placement of individuals within a group. Mean-level consistency indicates whether groups increase or decrease on certain traits throughout the lifetime.
Findings from these studies indicate that, consistent with adult personality trends, youth personality becomes increasingly more stable in terms of rank-order throughout childhood. Unlike adult personality research, which indicates that people become agreeable, conscientious, and emotionally stable with age, some findings in youth personality research have indicated that mean levels of agreeableness, conscientiousness, and openness to experience decline from late childhood to late adolescence. The disruption hypothesis, which proposes that biological, social, and psychological changes experienced during youth result in temporary dips in maturity, has been proposed to explain these findings.
Extraversion/positive emotionality.
In Big Five studies, extraversion has been associated with surgency. Children with high Extraversion are energetic, talkative, social, and dominant with children and adults; whereas, children with low Extraversion tend to be quiet, calm, inhibited, and submissive to other children and adults. Individual differences in Extraversion first manifest in infancy as varying levels of positive emotionality. These differences in turn predict social and physical activity during later childhood and may represent, or be associated with, the behavioral activation system. In children, Extraversion/Positive Emotionality includes four sub-traits: three traits that are similar to the previously described traits of temperament – "activity", "sociability", "shyness", and the trait of "dominance".
Development throughout adulthood.
Many studies of longitudinal data, which correlate people's test scores over time, and cross-sectional data, which compare personality levels across different age groups, show a high degree of stability in personality traits during adulthood, especially for Neuroticism, which is often regarded as a temperament trait; this parallels findings from longitudinal temperament research on the same traits. Personality has been shown to stabilise for working-age individuals within about four years after they start working. There is also little evidence that adverse life events can have any significant impact on the personality of individuals. More recent research and meta-analyses of previous studies, however, indicate that change occurs in all five traits at various points in the lifespan. The new research shows evidence for a maturation effect. On average, levels of agreeableness and conscientiousness typically increase with time, whereas extraversion, neuroticism, and openness tend to decrease. Research has also demonstrated that changes in Big Five personality traits depend on the individual's current stage of development. For example, levels of agreeableness and conscientiousness demonstrate a negative trend during childhood and early adolescence before trending upwards during late adolescence and into adulthood. In addition to these group effects, there are individual differences: different people demonstrate unique patterns of change at all stages of life.
In addition, some research (Fleeson, 2001) suggests that the Big Five should not be conceived of as dichotomies (such as extraversion vs. introversion) but as continua. Each individual has the capacity to move along each dimension as circumstances (social or temporal) change. He or she is therefore not simply at one end of each trait dichotomy but is a blend of both, exhibiting some characteristics more often than others.
Research on personality in older age has suggested that as individuals enter their elder years (79–86), those with lower IQ show a rise in extraversion but a decline in conscientiousness and physical well-being.
Group differences.
Gender differences.
Some cross-cultural research has shown some patterns of gender differences on responses to the NEO-PI-R and the Big Five Inventory. For example, women consistently report higher Neuroticism, Agreeableness, warmth (an extraversion facet) and openness to feelings, and men often report higher assertiveness (a facet of extraversion) and openness to ideas as assessed by the NEO-PI-R.
A study of gender differences in 55 nations using the Big Five Inventory found that women tended to be somewhat higher than men in neuroticism, extraversion, agreeableness, and conscientiousness. The difference in neuroticism was the most prominent and consistent, with significant differences found in 49 of the 55 nations surveyed.
Gender differences in personality traits are largest in prosperous, healthy, and more gender-egalitarian nations. The explanation for this, as stated by the researchers of a 2001 paper, is that actions by women in individualistic, egalitarian countries are more likely to be attributed to their personality, rather than being attributed to ascribed gender roles within collectivist, traditional countries.
Measured differences in the magnitude of sex differences between more and less developed world regions were driven by changes in the measured personalities of men, not women, in these respective regions. That is, men in highly developed world regions were less neurotic, less extraverted, less conscientious, and less agreeable compared to men in less developed world regions. Women, on the other hand, tended not to differ in personality traits across regions.
Birth-order differences.
Frank Sulloway argues that firstborns are more conscientious, more socially dominant, less agreeable, and less open to new ideas compared to siblings that were born later. Large-scale studies using random samples and self-report personality tests, however, have found milder effects than Sulloway claimed, or no significant effects of birth order on personality. A study using the Project Talent data, which is a large-scale representative survey of American high school students, with 272,003 eligible participants, found statistically significant but very small effects (the average absolute correlation between birth order and personality was .02) of birth order on personality, such that firstborns were slightly more conscientious, dominant, and agreeable, while also being less neurotic and less sociable. Parental socioeconomic status and participant gender had much larger correlations with personality.
In 2002, the Journal of Psychology published a study on Big Five personality trait differences in which researchers explored the relationship between the five-factor model and the Universal-Diverse Orientation (UDO) in counselor trainees (Thompson, R., Brossart, D., and Mivielle, A., 2002). UDO refers to a social attitude of strong awareness and acceptance of both the similarities and differences among individuals (Miville, M., Romas, J., Johnson, J., and Lon, R., 2002). The study found that counselor trainees who are more open to the idea of creative expression (a facet of Openness to Experience, Openness to Aesthetics) among individuals are more likely to work with a diverse group of clients and to feel comfortable in their role.
Cultural differences.
Individual differences in personality traits are widely understood to be conditioned by cultural context.
Research into the Big Five has been pursued in a variety of languages and cultures, such as German, Chinese, and South Asian. For example, Thompson has claimed to find the Big Five structure across several cultures using an international English language scale.
Cheung, van de Vijver, and Leong (2011) suggest, however, that the Openness factor is particularly unsupported in Asian countries and that a different fifth factor is identified.
Sopagna Eap et al. (2008) found that European-American men scored higher than Asian-American men on extroversion, conscientiousness, and openness, while Asian-American men scored higher than European-American men on neuroticism. Benet-Martínez and Karakitapoglu-Aygün (2003) arrived at similar results.
Recent work has found relationships between Geert Hofstede's cultural factors, Individualism, Power Distance, Masculinity, and Uncertainty Avoidance, with the average Big Five scores in a country. For instance, the degree to which a country values individualism correlates with its average extraversion, whereas people living in cultures which are accepting of large inequalities in their power structures tend to score somewhat higher on conscientiousness.
A 2017 study has found that countries' average personality trait levels are correlated with their political systems. Countries with higher average trait Openness tended to have more democratic institutions, an association that held even after factoring out other relevant influences such as economic development.
Attempts to replicate the Big Five have succeeded in some countries but not in others. Some research suggests, for instance, that Hungarians do not have a single agreeableness factor. Other researchers have found evidence for agreeableness but not for other factors.
Health.
Personality and dementia.
Some diseases cause changes in personality. For example, although gradual memory impairment is the hallmark feature of Alzheimer's disease, a systematic review of personality changes in Alzheimer's disease by Robins Wahlin and Byrne, published in 2011, found systematic and consistent trait changes mapped to the Big Five. The largest change observed was a decrease in conscientiousness. The next most significant changes were an increase in Neuroticism and decrease in Extraversion, but Openness and Agreeableness were also decreased. These changes in personality could assist with early diagnosis.
A study published in 2023 found that the Big Five personality traits may also influence the quality of life experienced by people with Alzheimer's disease and other dementias, post diagnosis. In this study people with dementia with lower levels of Neuroticism self-reported higher quality of life than those with higher levels of Neuroticism while those with higher levels of the other four traits self-reported higher quality of life than those with lower levels of these traits. This suggests that as well as assisting with early diagnosis, the Big Five personality traits could help identify people with dementia potentially more vulnerable to adverse outcomes and inform personalized care planning and interventions.
Personality disorders.
As of 2002, there were over fifty published studies relating the FFM to personality disorders. Since that time, quite a number of additional studies have expanded on this research base and provided further empirical support for understanding the DSM personality disorders in terms of the FFM domains.
In her review of the personality disorder literature published in 2007, Lee Anna Clark asserted that "the five-factor model of personality is widely accepted as representing the higher-order structure of both normal and abnormal personality traits". However, other researchers disagree that this model is widely accepted (see the section Critique below) and suggest that it simply replicates early temperament research. Noticeably, FFM publications never compare their findings to temperament models even though temperament and mental disorders (especially personality disorders) are thought to be based on the same neurotransmitter imbalances, just to varying degrees.
The five-factor model was claimed to significantly predict all ten personality disorder symptoms and outperform the Minnesota Multiphasic Personality Inventory (MMPI) in the prediction of borderline, avoidant, and dependent personality disorder symptoms. However, most predictions related to an increase in Neuroticism and a decrease in Agreeableness, and therefore did not differentiate between the disorders very well.
Common mental disorders.
Converging evidence from several nationally representative studies has established three classes of mental disorders which are especially common in the general population: Depressive disorders (e.g., major depressive disorder (MDD), dysthymic disorder), anxiety disorders (e.g., generalized anxiety disorder (GAD), post-traumatic stress disorder (PTSD), panic disorder, agoraphobia, specific phobia, and social phobia), and substance use disorders (SUDs). The Five Factor personality profiles of users of different drugs may be different. For example, the typical profile for heroin users is formula_0, whereas for ecstasy users the high level of N is not expected but E is higher: formula_1.
These common mental disorders (CMDs) have been empirically linked to the Big Five personality traits, neuroticism in particular. Numerous studies have found that having high scores of neuroticism significantly increases one's risk for developing a common mental disorder. A large-scale meta-analysis (n > 75,000) examining the relationship between all of the Big Five personality traits and common mental disorders found that low conscientiousness yielded consistently strong effects for each common mental disorder examined (i.e., MDD, dysthymic disorder, GAD, PTSD, panic disorder, agoraphobia, social phobia, specific phobia, and SUD). This finding parallels research on physical health, which has established that conscientiousness is the strongest personality predictor of reduced mortality, and is highly negatively correlated with making poor health choices. In regards to the other personality domains, the meta-analysis found that all common mental disorders examined were defined by high neuroticism, most exhibited low extraversion, only SUD was linked to agreeableness (negatively), and no disorders were associated with Openness. A meta-analysis of 59 longitudinal studies showed that high neuroticism predicted the development of anxiety, depression, substance abuse, psychosis, schizophrenia, and non-specific mental distress, also after adjustment for baseline symptoms and psychiatric history.
The personality-psychopathology models.
Five major models have been posed to explain the nature of the relationship between personality and mental illness. There is currently no single "best model", as each of them has received at least some empirical support. These models are not mutually exclusive – more than one may be operating for a particular individual and various mental disorders may be explained by different models.
Physical health.
To examine how the Big Five personality traits are related to subjective health outcomes (positive and negative mood, physical symptoms, and general health concern) and objective health conditions (chronic illness, serious illness, and physical injuries), Jasna Hudek-Knezevic and Igor Kardum conducted a study with a sample of 822 healthy volunteers (438 women and 384 men). Of the Big Five personality traits, they found neuroticism most related to worse subjective health outcomes and optimistic control most related to better subjective health outcomes. For objective health conditions, the associations found were weak, except that neuroticism significantly predicted chronic illness, whereas optimistic control was more closely related to physical injuries caused by accident.
Being highly conscientious may add as much as five years to one's life. The Big Five personality traits also predict positive health outcomes. In an elderly Japanese sample, conscientiousness, extraversion, and openness were related to lower risk of mortality.
Higher conscientiousness is associated with lower obesity risk. In already obese individuals, higher conscientiousness is associated with a higher likelihood of becoming non-obese over a five-year period.
Effect of personality traits through life.
Education.
Academic achievement.
Personality plays an important role in academic achievement. A study of 308 undergraduates who completed the Five Factor Inventory Processes and reported their GPA suggested that conscientiousness and agreeableness have a positive relationship with all types of learning styles (synthesis-analysis, methodical study, fact retention, and elaborative processing), whereas neuroticism shows an inverse relationship. Moreover, extraversion and openness were proportional to elaborative processing. The Big Five personality traits accounted for 14% of the variance in GPA, suggesting that personality traits make some contributions to academic performance. Furthermore, reflective learning styles (synthesis-analysis and elaborative processing) were able to mediate the relationship between openness and GPA. These results indicate that intellectual curiosity significantly enhances academic performance if students combine their scholarly interest with thoughtful information processing.
A recent study of Israeli high-school students found that those in the gifted program systematically scored higher on openness and lower on neuroticism than those not in the gifted program. While not a measure of the Big Five, gifted students also reported less state anxiety than students not in the gifted program. Specific Big Five personality traits predict learning styles in addition to academic success.
Studies conducted on college students have concluded that hope, which is linked to agreeableness, conscientiousness, neuroticism, and openness, has a positive effect on psychological well-being. Individuals high in neurotic tendencies are less likely to display hopeful tendencies and are negatively associated with well-being. Personality can sometimes be flexible, and measuring the Big Five personality traits of individuals as they enter certain stages of life may predict their educational identity. Recent studies have suggested that an individual's personality can affect their educational identity.
Learning styles.
Learning styles have been described as "enduring ways of thinking and processing information".
In 2008, the Association for Psychological Science (APS) commissioned a report that concludes that no significant evidence exists that learning-style assessments should be included in the education system. Thus it is premature, at best, to conclude that the evidence links the Big Five to "learning styles", or "learning styles" to learning itself.
However, the APS report also suggested that all existing learning styles have not been exhausted and that there could exist learning styles worthy of being included in educational practices. There are studies that conclude that personality and thinking styles may be intertwined in ways that link thinking styles to the Big Five personality traits. There is no general consensus on the number or specifications of particular learning styles, but there have been many different proposals.
As one example, Schmeck, Ribich, and Ramanaiah (1997) defined four types of learning styles:
When all four facets are implicated within the classroom, they will each likely improve academic achievement. This model asserts that students develop either agentic/shallow processing or reflective/deep processing. Deep processors are more often found to be more conscientious, intellectually open, and extraverted than shallow processors. Deep processing is associated with appropriate study methods (methodical study) and a stronger ability to analyze information (synthesis analysis), whereas shallow processors prefer structured fact retention learning styles and are better suited for elaborative processing. The main functions of these four specific learning styles are as follows:
Openness has been linked to learning styles that often lead to academic success and higher grades like synthesis analysis and methodical study. Because conscientiousness and openness have been shown to predict all four learning styles, it suggests that individuals who possess characteristics like discipline, determination, and curiosity are more likely to engage in all of the above learning styles.
According to the research carried out by Komarraju, Karau, Schmeck & Avdic (2011), conscientiousness and agreeableness are positively related with all four learning styles, whereas neuroticism was negatively related with those four. Furthermore, extraversion and openness were only positively related to elaborative processing, and openness itself correlated with higher academic achievement.
In addition, a previous study by psychologist Mikael Jensen has shown relationships between the Big Five personality traits, learning, and academic achievement. According to Jensen, all personality traits, except neuroticism, are associated with learning goals and motivation. Openness and conscientiousness influence individuals to learn to a high degree unrecognized, while extraversion and agreeableness have similar effects. Conscientiousness and neuroticism also influence individuals to perform well in front of others for a sense of credit and reward, while agreeableness forces individuals to avoid this strategy of learning. Jensen's study concludes that individuals who score high on the agreeableness trait will likely learn just to perform well in front of others.
Besides openness, all Big Five personality traits helped predict the educational identity of students. Based on these findings, scientists are beginning to see that the Big Five traits might have a large influence on academic motivation, which in turn helps predict a student's academic performance.
Some authors suggested that Big Five personality traits combined with learning styles can help predict some variations in the academic performance and the academic motivation of an individual which can then influence their academic achievements. This may be seen because individual differences in personality represent stable approaches to information processing. For instance, conscientiousness has consistently emerged as a stable predictor of success in exam performance, largely because conscientious students experience fewer study delays. Conscientiousness shows a positive association with the four learning styles because students with high levels of conscientiousness develop focused learning strategies and appear to be more disciplined and achievement-oriented.
<templatestyles src="Template:Blockquote/styles.css" />Personality and learning styles are both likely to play significant roles in influencing academic achievement. College students (308 undergraduates) completed the Five Factor Inventory and the Inventory of Learning Processes and reported their grade point average. Two of the Big Five traits, conscientiousness and agreeableness, were positively related with all four learning styles (synthesis analysis, methodical study, fact retention, and elaborative processing), whereas neuroticism was negatively related with all four learning styles. In addition, extraversion and openness were positively related with elaborative processing. The Big Five together explained 14% of the variance in grade point average (GPA), and learning styles explained an additional 3%, suggesting that both personality traits and learning styles contribute to academic performance. Further, the relationship between openness and GPA was mediated by reflective learning styles (synthesis-analysis and elaborative processing). These latter results suggest that being intellectually curious fully enhances academic performance when students combine this scholarly interest with thoughtful information processing. Implications of these results are discussed in the context of teaching techniques and curriculum design.
Distance learning.
When the relationship between the five-factor personality traits and academic achievement in distance education settings was examined, the openness personality trait was found to be the most important variable, showing a positive relationship with academic achievement in distance education environments. In addition, it was found that the self-discipline, extraversion, and adaptability personality traits are generally in a positive relationship with academic achievement. The personality trait with the most notable negative relationship with academic achievement was neuroticism. The results generally show that individuals who are organised, planned, and determined, and who are oriented to new ideas and independent thinking, have increased success in distance education environments. On the other hand, individuals with tendencies toward anxiety and stress generally have lower academic success.
Employment.
Occupation and personality fit.
Researchers have long suggested that work is more likely to be fulfilling to the individual and beneficial to society when there is alignment between the person and their occupation. For instance, software programmers and scientists often rank high on Openness to experience and tend to be intellectually curious, think in symbols and abstractions, and find repetition boring. Psychologists and sociologists rank higher on Agreeableness and Openness than economists and jurists.
Work success.
It is believed that the Big Five traits are predictors of future performance outcomes to varying degrees. Specific facets of the Big Five traits are also thought to be indicators of success in the workplace, and each individual facet can give a more precise indication as to the nature of a person. Different traits' facets are needed for different occupations. Various facets of the Big Five traits can predict the success of people in different environments. The estimated levels of an individual's success in jobs that require public speaking versus one-on-one interactions will differ according to whether that person has particular traits' facets.
Job outcome measures include job and training proficiency and personnel data. However, research demonstrating such prediction has been criticized, in part because of the apparently low correlation coefficients characterizing the relationship between personality and job performance. A 2007 article states: "The problem with personality tests is ... that the validity of personality measures as predictors of job performance is often disappointingly low. The argument for using personality tests to predict performance does not strike me as convincing in the first place."
Such criticisms were put forward by Walter Mischel, whose publication caused a two-decades' long crisis in personality psychometrics. However, later work demonstrated that the correlations obtained by psychometric personality researchers were actually very respectable by comparative standards, and that the economic value of even incremental increases in prediction accuracy was exceptionally large, given the vast difference in performance by those who occupy complex job positions.
Research has suggested that individuals who are considered leaders typically exhibit lower amounts of neurotic traits, maintain higher levels of openness, balanced levels of conscientiousness, and balanced levels of extraversion. Further studies have linked professional burnout to neuroticism, and extraversion to enduring positive work experience. Studies have linked national innovation, leadership, and ideation to openness to experience and conscientiousness. Occupational self-efficacy has also been shown to be positively correlated with conscientiousness and negatively correlated with neuroticism. Some research has also suggested that the conscientiousness of a supervisor is positively associated with an employee's perception of abusive supervision. Others have suggested that low agreeableness and high neuroticism are traits more related to abusive supervision.
Openness is positively related to proactivity at the individual and the organizational levels and is negatively related to team and organizational proficiency. These effects were found to be completely independent of one another. Openness also shows a negative correlation with Conscientiousness.
Agreeableness is negatively related to individual task proactivity. Typically this is associated with lower career success and being less able to cope with conflict. However there are benefits to the Agreeableness personality trait including higher subjective well-being; more positive interpersonal interactions and helping behavior; lower conflict; lower deviance and turnover. Furthermore, attributes related to Agreeableness are important for workforce readiness for a variety of occupations and performance criteria. Research has suggested that those who are high in agreeableness are not as successful in accumulating income.
Extraversion results in greater leadership emergence and effectiveness, as well as higher job and life satisfaction. However, extraversion can lead to more impulsive behaviors, more accidents, and lower performance in certain jobs.
Conscientiousness is highly predictive of job performance in general, and is positively related to all forms of work role performance, including job performance, job satisfaction, greater leadership effectiveness, and lower turnover and deviant behaviors. However, this trait is also associated with reduced adaptability, slower learning in the initial stages of skill acquisition, and greater interpersonal abrasiveness when agreeableness is also low.
Neuroticism is negatively related to all forms of work role performance, and it increases the chance of engaging in risky behaviors.
Two theories have been integrated in an attempt to account for these differences in work role performance. Trait activation theory posits that trait levels within a person predict future behavior, that trait levels differ between people, and that work-related cues activate traits, which leads to work-relevant behaviors. Role theory suggests that role senders provide cues to elicit desired behaviors: role senders give workers cues for expected behaviors, which in turn activate personality traits and work-relevant behaviors. In essence, expectations of the role sender lead to different behavioral outcomes depending on the trait levels of individual workers, and because people differ in trait levels, responses to these cues will not be universal.
Romantic relationships.
The Big Five model of personality has been used in attempts to predict satisfaction in romantic relationships and relationship quality in dating, engaged, and married couples.
Political identification.
The Big Five Personality Model also has applications in the study of political psychology. Studies have found links between the Big Five personality traits and political identification. Several studies have found that individuals who score high in Conscientiousness are more likely to possess a right-wing political identification. On the opposite end of the spectrum, a strong correlation has been identified between high scores in Openness to Experience and a left-leaning ideology. The traits of agreeableness, extraversion, and neuroticism have not been consistently linked to either conservative or liberal ideology, with studies producing mixed results, but such traits are promising when analyzing the strength of an individual's party identification. However, correlations between the Big Five and political beliefs, while present, tend to be small, with one study finding correlations ranging from 0.14 to 0.24.
Scope of predictive power.
The predictive effects of the Big Five personality traits relate mostly to social functioning and rules-driven behavior and are not very specific for prediction of particular aspects of behavior. For example, it was noted by all temperament researchers that high neuroticism precedes the development of all common mental disorders and is not associated with personality. Further evidence is required to fully uncover the nature and differences between personality traits, temperament and life outcomes. Social and contextual parameters also play a role in outcomes and the interaction between the two is not yet fully understood.
Religiosity.
Though the effect sizes are small, among the Big Five personality traits high Agreeableness, Conscientiousness, and Extraversion relate to general religiosity, while Openness relates negatively to religious fundamentalism and positively to spirituality. High Neuroticism may be related to extrinsic religiosity, whereas intrinsic religiosity and spirituality reflect Emotional Stability.
Measurements.
Several measures of the Big Five exist.
The most frequently used measures of the Big Five comprise either items that are self-descriptive sentences or, in the case of lexical measures, items that are single adjectives. Due to the length of sentence-based and some lexical measures, short forms have been developed and validated for use in applied research settings where questionnaire space and respondent time are limited, such as the 40-item balanced "International English Big-Five Mini-Markers" or a very brief (10 item) measure of the Big Five domains. Research has suggested that some methodologies in administering personality tests are inadequate in length and provide insufficient detail to truly evaluate personality. Usually, longer, more detailed questions will give a more accurate portrayal of personality. The five factor structure has been replicated in peer reports. However, many of the substantive findings rely on self-reports.
Much of the evidence on the measures of the Big 5 relies on self-report questionnaires, which makes self-report bias and falsification of responses difficult to deal with and account for. It has been argued that the Big Five tests do not create an accurate personality profile because the responses given on these tests are not true in all cases and can be falsified. For example, questionnaires are answered by potential employees who might choose answers that paint them in the best light.
Research suggests that a relative-scored Big Five measure in which respondents had to make repeated choices between equally desirable personality descriptors may be a potential alternative to traditional Big Five measures in accurately assessing personality traits, especially when lying or biased responding is present. When compared with a traditional Big Five measure for its ability to predict GPA and creative achievement under both normal and "fake good"-bias response conditions, the relative-scored measure significantly and consistently predicted these outcomes under both conditions; however, the Likert questionnaire lost its predictive ability in the faking condition. Thus, the relative-scored measure proved to be less affected by biased responding than the Likert measure of the Big Five.
Andrew H. Schwartz analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age.
Critique.
The proposed Big Five model has been subjected to considerable critical scrutiny in a number of published studies. One prominent critic of the model has been Jack Block at the University of California, Berkeley. In response to Block, the model was defended in a paper published by Costa and McCrae. This was followed by a number of published critical replies from Block.
It has been argued that there are limitations to the scope of the Big Five model as an explanatory or predictive theory. It has also been argued that measures of the Big Five account for only 56% of the normal personality trait sphere alone (not even considering the abnormal personality trait sphere). Also, the static Big Five is not theory-driven; it is merely a statistically driven investigation of certain descriptors that tend to cluster together, often based on less-than-optimal factor analytic procedures. Measures of the Big Five constructs appear to show some consistency in interviews, self-descriptions and observations, and this static five-factor structure seems to be found across a wide range of participants of different ages and cultures. However, while genotypic temperament trait dimensions might appear across different cultures, the phenotypic expression of personality traits differs profoundly across different cultures as a function of the different socio-cultural conditioning and experiential learning that takes place within different cultural settings.
Moreover, the fact that the Big Five model was based on lexical hypothesis (i.e. on the verbal descriptors of individual differences) indicated strong methodological flaws in this model, especially related to its main factors, Extraversion and Neuroticism. First, there is a natural pro-social bias of language in people's verbal evaluations. After all, language is an invention of group dynamics that was developed to facilitate socialization and the exchange of information and to synchronize group activity. This social function of language therefore creates a sociability bias in verbal descriptors of human behavior: there are more words related to social than physical or even mental aspects of behavior. The sheer number of such descriptors will cause them to group into the largest factor in any language, and such grouping has nothing to do with the way that core systems of individual differences are set up. Second, there is also a negativity bias in emotionality (i.e. most emotions have negative affectivity), and there are more words in language to describe negative rather than positive emotions. Such asymmetry in emotional valence creates another bias in language. Experiments using the lexical hypothesis approach indeed demonstrated that the use of lexical material skews the resulting dimensionality according to a sociability bias of language and a negativity bias of emotionality, grouping all evaluations around these two dimensions. This means that the two largest dimensions in the Big Five model might be just an artifact of the lexical approach that this model employed.
Limited scope.
One common criticism is that the Big Five does not explain all of human personality. Some psychologists have dissented from the model precisely because they feel it neglects other domains of personality, such as religiosity, manipulativeness/machiavellianism, honesty, sexiness/seductiveness, thriftiness, conservativeness, masculinity/femininity, snobbishness/egotism, sense of humour, and risk-taking/thrill-seeking. Dan P. McAdams has called the Big Five a "psychology of the stranger", because they refer to traits that are relatively easy to observe in a stranger; other aspects of personality that are more privately held or more context-dependent are excluded from the Big Five.
There may be debate as to what counts as personality and what does not, and the nature of the questions in a survey greatly influences the outcome. Multiple particularly broad question databases have failed to produce the Big Five as the top five traits.
In many studies, the five factors are not fully orthogonal to one another; that is, the five factors are not independent. Orthogonality is viewed as desirable by some researchers because it minimizes redundancy between the dimensions. This is particularly important when the goal of a study is to provide a comprehensive description of personality with as few variables as possible.
Methodological issues.
Factor analysis, the statistical method used to identify the dimensional structure of observed variables, lacks a universally recognized basis for choosing among solutions with different numbers of factors. A five factor solution depends on some degree of interpretation by the analyst. A larger number of factors may underlie these five factors. This has led to disputes about the "true" number of factors. Big Five proponents have responded that although other solutions may be viable in a single data set, only the five-factor structure consistently replicates across different studies.
Surveys in studies are often online surveys of college students. Results do not always replicate when run on other populations or in other languages.
Moreover, the factor analysis that this model is based on is a linear method incapable of capturing nonlinear, feedback and contingent relationships between core systems of individual differences.
Theoretical status.
A frequent criticism is that the Big Five is not based on any underlying theory; it is merely an empirical finding that certain descriptors cluster together under factor analysis. Although this does not mean that these five factors do not exist, the underlying causes behind them are unknown.
Jack Block's final published work before his death in January 2010 drew together his lifetime perspective on the five-factor model.
He summarized his critique of the model in terms of:
He went on to suggest that repeatedly observed higher order factors hierarchically above the proclaimed Big Five personality traits may promise deeper biological understanding of the origins and implications of these superfactors.
Evidence for six factors rather than five.
It has been noted that even though early lexical studies in the English language indicated five large groups of personality traits, more recent, and more comprehensive, cross-language studies have provided evidence for six large groups rather than five, with the sixth factor being Honesty-Humility. These six groups form the basis of the HEXACO model of personality structure. Based on these findings it has been suggested that the Big Five system should be replaced by HEXACO, or revised to better align with lexical evidence.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\rm N}\\Uparrow, {\\rm O}\\Uparrow, {\\rm A}\\Downarrow, {\\rm C}\\Downarrow"
},
{
"math_id": 1,
"text": "{\\rm E}\\Uparrow, {\\rm O}\\Uparrow, {\\rm A}\\Downarrow, {\\rm C}\\Downarrow"
}
] | https://en.wikipedia.org/wiki?curid=1284664 |
12850812 | Wijsman convergence | Wijsman convergence is a variation of Hausdorff convergence suitable for work with unbounded sets.
Intuitively, Wijsman convergence is to convergence in the Hausdorff metric as pointwise convergence is to uniform convergence.
History.
The convergence was defined by Robert Wijsman.
The same definition was used earlier by Zdeněk Frolík.
Yet earlier, Hausdorff in his book "Grundzüge der Mengenlehre" defined so-called "closed limits";
for proper metric spaces these coincide with Wijsman convergence.
Definition.
Let ("X", "d") be a metric space and let Cl("X") denote the collection of all "d"-closed subsets of "X". For a point "x" ∈ "X" and a set "A" ∈ Cl("X"), set
formula_0
A sequence (or net) of sets "A""i" ∈ Cl("X") is said to be Wijsman convergent to "A" ∈ Cl("X") if, for each "x" ∈ "X",
formula_1
Wijsman convergence induces a topology on Cl("X"), known as the Wijsman topology.
The Hausdorff distance between two sets "A", "B" ∈ Cl("X") can be expressed in terms of these distance functions as
formula_2
The Hausdorff and Wijsman topologies on Cl("X") coincide if and only if ("X", "d") is a totally bounded space.
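As an editorial illustration (not part of the original article), consider the closed sets "A""n" = {("t", "t"/"n") : "t" ∈ R} in the Euclidean plane — lines through the origin of slope 1/"n" — and let "A" be the "x"-axis. For every point "x" one has d("x", "A""n") → d("x", "A"), so "A""n" → "A" in the Wijsman sense, even though sup"x" |d("x", "A""n") − d("x", "A")| is infinite for every "n", so there is no convergence of Hausdorff type. The Python sketch below is hypothetical; the helper names are the editor's.
```python
import math

def dist_to_line_through_origin(p, slope):
    # Distance from p = (x0, y0) to the line y = slope * x,
    # i.e. slope*x - y = 0, via the point-line distance formula.
    x0, y0 = p
    return abs(slope * x0 - y0) / math.sqrt(slope ** 2 + 1)

def dist_to_x_axis(p):
    return abs(p[1])

p = (3.0, 2.0)
for n in (1, 10, 100, 1000):
    print(n, round(dist_to_line_through_origin(p, 1.0 / n), 4))
# The printed distances approach dist_to_x_axis(p) = 2.0, illustrating
# Wijsman convergence of the lines A_n to the x-axis.  By contrast,
# sup over x of |d(x, A_n) - d(x, x-axis)| is infinite for every n
# (take x far out on the x-axis), so there is no Hausdorff convergence.
```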
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d(x, A) = \\inf_{a \\in A} d(x, a)."
},
{
"math_id": 1,
"text": "d(x, A_{i}) \\to d(x, A)."
},
{
"math_id": 2,
"text": "d_{\\mathrm{H}} (A, B) = \\sup_{x \\in X} \\big| d(x, A) - d(x, B) \\big|."
}
] | https://en.wikipedia.org/wiki?curid=12850812 |
12851733 | Berger's isoembolic inequality | Gives a lower bound on the volume of a Riemannian manifold
In mathematics, Berger's isoembolic inequality is a result in Riemannian geometry that gives a lower bound on the volume of a Riemannian manifold and also gives a necessary and sufficient condition for the manifold to be isometric to the m-dimensional sphere with its usual "round" metric. The theorem is named after the mathematician Marcel Berger, who derived it from an inequality proved by Jerry Kazdan.
Statement of the theorem.
Let ("M", "g") be a closed m-dimensional Riemannian manifold with injectivity radius inj("M"). Let vol("M") denote the Riemannian volume of M and let "c""m" denote the volume of the standard m-dimensional sphere of radius one. Then
formula_0
with equality if and only if ("M", "g") is isometric to the m-sphere with its usual round metric. This result is known as Berger's "isoembolic inequality". The proof relies upon an analytic inequality proved by Kazdan. The original work of Berger and Kazdan appears in the appendices of Arthur Besse's book "Manifolds all of whose geodesics are closed." At this stage, the isoembolic inequality appeared with a non-optimal constant. Sometimes Kazdan's inequality is called "Berger–Kazdan inequality".
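As a brief sanity check (an editorial addition, not part of the original text), the equality case can be verified directly on the unit round sphere, for which the injectivity radius equals π and the volume equals "c""m":
```latex
% Sanity check on the unit round sphere S^m, where inj(S^m) = \pi and vol(S^m) = c_m:
\mathrm{vol}(S^m)
  \;\geq\; \frac{c_m \,\bigl(\mathrm{inj}(S^m)\bigr)^m}{\pi^m}
  \;=\; \frac{c_m \, \pi^m}{\pi^m}
  \;=\; c_m
  \;=\; \mathrm{vol}(S^m),
% so the inequality is saturated, matching the stated equality case.
```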
References.
<templatestyles src="Reflist/styles.css" />
Books. | [
{
"math_id": 0,
"text": "\\mathrm{vol} (M) \\geq \\frac{c_m (\\mathrm{inj}(M))^m}{\\pi^m},"
}
] | https://en.wikipedia.org/wiki?curid=12851733 |
12852798 | Berezin transform | In mathematics — specifically, in complex analysis — the Berezin transform is an integral operator acting on functions defined on the open unit disk "D" of the complex plane C. Formally, for a function "ƒ" : "D" → C, the Berezin transform of "ƒ" is a new function "Bƒ" : "D" → C defined at a point "z" ∈ "D" by
formula_0
where "w̄" denotes the complex conjugate of "w" and formula_1 is the area measure. It is named after Felix Alexandrovich Berezin. | [
{
"math_id": 0,
"text": "(B f)(z) = \\int_D \\frac{(1 - |z|^2)^2}{| 1 - z \\bar{w} |^4} f(w) \\, \\mathrm{d}A (w),"
},
{
"math_id": 1,
"text": "\\mathrm{d}A"
}
] | https://en.wikipedia.org/wiki?curid=12852798 |
1285375 | Hyperbolic metric space | Concept in mathematics
In mathematics, a hyperbolic metric space is a metric space satisfying certain metric relations (depending quantitatively on a nonnegative real number δ) between points. The definition, introduced by Mikhael Gromov, generalizes the metric properties of classical hyperbolic geometry and of trees. Hyperbolicity is a large-scale property, and is very useful to the study of certain infinite groups called Gromov-hyperbolic groups.
Definitions.
In this paragraph we give various definitions of a formula_0-hyperbolic space. A metric space is said to be (Gromov-) hyperbolic if it is formula_0-hyperbolic for some formula_1.
Definition using the Gromov product.
Let formula_2 be a metric space. The Gromov product of two points formula_3 with respect to a third one formula_4 is defined by the formula:
formula_5
Gromov's definition of a hyperbolic metric space is then as follows: formula_6 is formula_0-hyperbolic if and only if all formula_7 satisfy the "four-point condition"
formula_8
Note that if this condition is satisfied for all formula_9 and one fixed base point formula_10, then it is satisfied for all base points with the constant formula_12. Thus the hyperbolicity condition only needs to be verified for one fixed base point; for this reason, the subscript for the base point is often dropped from the Gromov product.
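For a finite metric space given by its distance matrix, the smallest formula_0 satisfying the four-point condition can be found by brute force over all quadruples of points. The Python sketch below is an editorial illustration, not part of the original article; the function names and the example (the path metric of a small tree, which should give formula_0 = 0) are hypothetical.
```python
from itertools import product

def gromov_product(d, x, y, w):
    # (x, y)_w = (d(x, w) + d(y, w) - d(x, y)) / 2
    return 0.5 * (d[x][w] + d[y][w] - d[x][y])

def hyperbolicity_delta(d):
    """Smallest delta such that (x, z)_w >= min((x, y)_w, (y, z)_w) - delta
    holds for every quadruple of points of the finite metric space d."""
    n = len(d)
    delta = 0.0
    for x, y, z, w in product(range(n), repeat=4):
        gap = (min(gromov_product(d, x, y, w), gromov_product(d, y, z, w))
               - gromov_product(d, x, z, w))
        delta = max(delta, gap)
    return delta

# Path metric of the star tree with centre c and leaves a, b, e at distance 1:
d_tree = [
    [0, 2, 1, 2],  # a
    [2, 0, 1, 2],  # b
    [1, 1, 0, 1],  # c (centre)
    [2, 2, 1, 0],  # e
]
print(hyperbolicity_delta(d_tree))  # 0.0 -- trees are 0-hyperbolic
```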
Definitions using triangles.
Up to changing formula_0 by a constant multiple, there is an equivalent geometric definition involving triangles when the metric space formula_6 is "geodesic", i.e. any two points formula_13 are end points of a geodesic segment formula_14 (an isometric image of a compact subinterval formula_15 of the reals). Note that the definition via Gromov products does not require the space to be geodesic.
Let formula_16. A geodesic triangle with vertices formula_17 is the union of three geodesic segments formula_18 (where formula_19 denotes a segment with endpoints formula_20 and formula_21).
(Figure: a geodesic triangle with vertices formula_22, formula_23, formula_24 and the formula_0-neighbourhoods formula_25, formula_26, formula_27 of its sides, illustrating the δ-slim triangle condition.)
If for any point formula_28 there is a point in formula_29 at distance less than formula_0 from formula_30, and similarly for points on the other edges (for a given formula_31), then the triangle is said to be "formula_0-slim".
A definition of a formula_0-hyperbolic space is then a geodesic metric space all of whose geodesic triangles are formula_0-slim. This definition is generally credited to Eliyahu Rips.
Another definition can be given using the notion of a formula_32-approximate center of a geodesic triangle: this is a point which is at distance at most formula_32 of any edge of the triangle (an "approximate" version of the incenter). A space is formula_0-hyperbolic if every geodesic triangle has a formula_0-center.
These two definitions of a formula_0-hyperbolic space using geodesic triangles are not exactly equivalent, but there exists formula_33 such that a formula_0-hyperbolic space in the first sense is formula_34-hyperbolic in the second, and vice versa. Thus the notion of a hyperbolic space is independent of the chosen definition.
Examples.
The hyperbolic plane is hyperbolic: in fact the incircle of a geodesic triangle is the circle of largest diameter contained in the triangle and every geodesic triangle lies in the interior of an ideal triangle, all of which are isometric with incircles of diameter 2 log 3. Note that in this case the Gromov product also has a simple interpretation in terms of the incircle of a geodesic triangle. In fact the quantity ("A","B")"C" is just the hyperbolic distance "p" from "C" to either of the points of contact of the incircle with the adjacent sides: for from the diagram "c" = ("a" – "p") + ("b" – "p"), so that
"p" = ("a" + "b" – "c")/2 = ("A","B")"C".
The Euclidean plane is not hyperbolic, for example because of the existence of homotheties.
Two "degenerate" examples of hyperbolic spaces are spaces with bounded diameter (for example finite or compact spaces) and the real line.
Metric trees and more generally real trees are the simplest interesting examples of hyperbolic spaces as they are 0-hyperbolic (i.e. all triangles are tripods).
The 1-skeleton of the triangulation of the plane by Euclidean equilateral triangles is not hyperbolic (it is in fact quasi-isometric to the Euclidean plane). A triangulation of the plane formula_35 has a hyperbolic 1-skeleton if every vertex has degree 7 or more.
The two-dimensional grid is not hyperbolic (it is quasi-isometric to the Euclidean plane). It is the Cayley graph of the fundamental group of the torus; the Cayley graph of the fundamental group of a surface of higher genus is hyperbolic (it is in fact quasi-isometric to the hyperbolic plane).
Hyperbolicity and curvature.
The hyperbolic plane (and more generally any Hadamard manifold of sectional curvature formula_36) is formula_37-hyperbolic. If we scale the Riemannian metric by a factor formula_38, then distances are multiplied by formula_39 and we thus get a space that is formula_40-hyperbolic. Since the curvature is multiplied by formula_41, we see that in this example the more (negatively) curved the space is, the lower the hyperbolicity constant.
Similar examples are CAT spaces of negative curvature. With respect to curvature and hyperbolicity it should be noted however that while curvature is a property that is essentially local, hyperbolicity is a large-scale property which does not see local (i.e. happening in a bounded region) metric phenomena. For example, the union of an hyperbolic space with a compact space with any metric extending the original ones remains hyperbolic.
Important properties.
Invariance under quasi-isometry.
One way to make precise the meaning of "large scale" is to require invariance under quasi-isometry. This is true of hyperbolicity.
"If a geodesic metric space formula_42 is quasi-isometric to a formula_0-hyperbolic space formula_6 then there exists formula_43 such that formula_42 is formula_43-hyperbolic."
The constant formula_43 depends on formula_0 and on the multiplicative and additive constants for the quasi-isometry.
Approximate trees in hyperbolic spaces.
The definition of an hyperbolic space in terms of the Gromov product can be seen as saying that the metric relations between any four points are the same as they would be in a tree, up to the additive constant formula_0. More generally the following property shows that any finite subset of an hyperbolic space looks like a finite tree.
"For any formula_44 there is a constant formula_32 such that the following holds: if formula_45 are points in a formula_0-hyperbolic space formula_6 there is a finite tree formula_46 and an embedding formula_47 such that formula_48 for all formula_49 and "
formula_50
The constant formula_32 can be taken to be formula_51 with formula_52 and this is optimal.
Exponential growth of distance and isoperimetric inequalities.
In an hyperbolic space formula_6 we have the following property:
"There are formula_53 such that for all formula_54 with formula_55, every path formula_56 joining formula_22 to formula_23 and staying at distance at least formula_57 of formula_20 has length at least formula_58. "
Informally this means that the circumference of a "circle" of radius formula_57 grows exponentially with formula_57. This is reminiscent of the isoperimetric problem in the Euclidean plane. Here is a more specific statement to this effect.
"Suppose that formula_6 is a cell complex of dimension 2 such that its 1-skeleton is hyperbolic, and there exists formula_32 such that the boundary of any 2-cell contains at most formula_32 1-cells. Then there is a constant formula_38 such that for any finite subcomplex formula_59 we have "
formula_60
Here the area of a 2-complex is the number of 2-cells and the length of a 1-complex is the number of 1-cells. The statement above is a linear isoperimetric inequality; it turns out that having such an isoperimetric inequality characterises Gromov-hyperbolic spaces. Linear isoperimetric inequalities were inspired by the small cancellation conditions from combinatorial group theory.
Quasiconvex subspaces.
A subspace formula_42 of a geodesic metric space formula_6 is said to be quasiconvex if there is a constant formula_32 such that any geodesic in formula_6 between two points of formula_42 stays within distance formula_32 of formula_42.
"A quasi-convex subspace of an hyperbolic space is hyperbolic."
Asymptotic cones.
All asymptotic cones of an hyperbolic space are real trees. This property characterises hyperbolic spaces.
The boundary of a hyperbolic space.
Generalising the construction of the ends of a simplicial tree there is a natural notion of boundary at infinity for hyperbolic spaces, which has proven very useful for analysing group actions.
In this paragraph formula_6 is a geodesic metric space which is hyperbolic.
Definition using the Gromov product.
A sequence formula_61 is said to "converge to infinity" if for some (or any) point formula_20 we have that formula_62 as both formula_63 and formula_30 go to infinity. Two sequences formula_64 converging to infinity are considered equivalent when formula_65 (for some or any formula_20). The "boundary" of formula_6 is the set of equivalence classes of sequences which converge to infinity, which is denoted formula_66.
If formula_67 are two points on the boundary then their Gromov product is defined to be:
formula_68
which is finite iff formula_69. One can then define a topology on formula_66 using the functions formula_70. This topology on formula_66 is metrisable and there is a distinguished family of metrics defined using the Gromov product.
Definition for proper spaces using rays.
Let formula_71 be two quasi-isometric embeddings of formula_72 into formula_6 ("quasi-geodesic rays"). They are considered equivalent if and only if the function formula_73 is bounded on formula_72. If the space formula_6 is proper then the set of all such embeddings modulo equivalence with its natural topology is homeomorphic to formula_66 as defined above.
A similar realisation is to fix a basepoint and consider only quasi-geodesic rays originating from this point. In case formula_6 is geodesic and proper one can also restrict to genuine geodesic rays.
Examples.
When formula_74 is a simplicial regular tree the boundary is just the space of ends, which is a Cantor set. Fixing a point formula_75 yields a natural distance on formula_76: two points represented by rays formula_71 originating at formula_22 are at distance formula_77.
When formula_6 is the unit disk, i.e. the Poincaré disk model for the hyperbolic plane, the hyperbolic metric on the disk is
formula_78
and the Gromov boundary can be identified with the unit circle.
The boundary of formula_63-dimensional hyperbolic space is homeomorphic to the formula_79-dimensional sphere and the metrics are similar to the one above.
Busemann functions.
If formula_6 is proper then its boundary is homeomorphic to the space of Busemann functions on formula_6 modulo translations.
The action of isometries on the boundary and their classification.
A quasi-isometry between two hyperbolic spaces formula_80 induces a homeomorphism between the boundaries.
In particular the group of isometries of formula_6 acts by homeomorphisms on formula_66. This action can be used to classify isometries according to their dynamical behaviour on the boundary, generalising that for trees and classical hyperbolic spaces. Let formula_81 be an isometry of formula_6; then one of the following cases occurs:
first case: formula_81 has bounded orbits in formula_6 (if formula_6 is proper this implies that it has a fixed point in formula_6); it is then called "elliptic";
second case: formula_81 has exactly two fixed points formula_82 on the boundary and every orbit formula_83 accumulates only at formula_84; it is then called "hyperbolic" (or "loxodromic");
third case: formula_81 has exactly one fixed point on the boundary and all orbits accumulate at this point; it is then called "parabolic".
More examples.
The theory of hyperbolic groups provides further examples of hyperbolic spaces, for instance the Cayley graph of a small cancellation group. It is also known that the Cayley graphs of certain models of random groups (which are in effect randomly generated infinite regular graphs) tend to be hyperbolic very often.
It can be difficult and interesting to prove that certain spaces are hyperbolic. For example, the following hyperbolicity results have led to new phenomena being discovered for the groups acting on them.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\delta"
},
{
"math_id": 1,
"text": "\\delta > 0"
},
{
"math_id": 2,
"text": "(X,d)"
},
{
"math_id": 3,
"text": "y, z \\in X"
},
{
"math_id": 4,
"text": "x \\in X"
},
{
"math_id": 5,
"text": "(y,z)_x = \\frac 1 2 \\left( d(x, y) + d(x, z) - d(y, z) \\right)."
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "x,y,z,w \\in X"
},
{
"math_id": 8,
"text": " (x,z)_w \\ge \\min \\left( (x,y)_w, (y,z)_w \\right) - \\delta"
},
{
"math_id": 9,
"text": "x,y,z \\in X"
},
{
"math_id": 10,
"text": "w_0"
},
{
"math_id": 11,
"text": ""
},
{
"math_id": 12,
"text": "2\\delta"
},
{
"math_id": 13,
"text": "x, y \\in X"
},
{
"math_id": 14,
"text": "[x,y]"
},
{
"math_id": 15,
"text": "[a,b]"
},
{
"math_id": 16,
"text": "x, y, z \\in X"
},
{
"math_id": 17,
"text": "x,y,z"
},
{
"math_id": 18,
"text": "[x,y], [y,z], [z,x]"
},
{
"math_id": 19,
"text": "[p,q]"
},
{
"math_id": 20,
"text": "p"
},
{
"math_id": 21,
"text": "q"
},
{
"math_id": 22,
"text": "x"
},
{
"math_id": 23,
"text": "y"
},
{
"math_id": 24,
"text": "z"
},
{
"math_id": 25,
"text": "B_\\delta([x,y])"
},
{
"math_id": 26,
"text": "B_\\delta([z,x])"
},
{
"math_id": 27,
"text": "B_\\delta([y,z])"
},
{
"math_id": 28,
"text": "m \\in [x,y]"
},
{
"math_id": 29,
"text": "[y,z] \\cup [z,x]"
},
{
"math_id": 30,
"text": "m"
},
{
"math_id": 31,
"text": "\\delta \\ge 0"
},
{
"math_id": 32,
"text": "C"
},
{
"math_id": 33,
"text": "k > 1"
},
{
"math_id": 34,
"text": " k \\cdot \\delta"
},
{
"math_id": 35,
"text": "\\mathbb R^2"
},
{
"math_id": 36,
"text": "\\le -1"
},
{
"math_id": 37,
"text": "2"
},
{
"math_id": 38,
"text": "\\lambda > 0"
},
{
"math_id": 39,
"text": "\\lambda"
},
{
"math_id": 40,
"text": "\\lambda\\cdot\\delta"
},
{
"math_id": 41,
"text": "\\lambda^{-1}"
},
{
"math_id": 42,
"text": "Y"
},
{
"math_id": 43,
"text": "\\delta'"
},
{
"math_id": 44,
"text": "n, \\delta"
},
{
"math_id": 45,
"text": "x_1, \\ldots, x_n"
},
{
"math_id": 46,
"text": "T"
},
{
"math_id": 47,
"text": "f : T \\to X"
},
{
"math_id": 48,
"text": "x_i \\in f(T)"
},
{
"math_id": 49,
"text": "i = 1, \\ldots, n"
},
{
"math_id": 50,
"text": "\\forall i, j : d(f^{-1}(x_i), f^{-1}(x_j)) \\le d(x_i, x_j) \\le d(f^{-1}(x_i), f^{-1}(x_j)) + C "
},
{
"math_id": 51,
"text": "\\delta \\cdot h(n)"
},
{
"math_id": 52,
"text": "h(n) = O(\\log n)"
},
{
"math_id": 53,
"text": "\\mu, K > 0"
},
{
"math_id": 54,
"text": "p, x, y \\in X"
},
{
"math_id": 55,
"text": "d(p, x) = d(p, y) =: r"
},
{
"math_id": 56,
"text": "\\alpha"
},
{
"math_id": 57,
"text": "r"
},
{
"math_id": 58,
"text": "e^{\\mu \\cdot d(x,y)} - K"
},
{
"math_id": 59,
"text": "Y \\subset X"
},
{
"math_id": 60,
"text": " \\operatorname{area}(Y) \\le \\lambda \\cdot \\operatorname{length(\\partial Y)} "
},
{
"math_id": 61,
"text": "(x_n) \\in X^{\\mathbb N}"
},
{
"math_id": 62,
"text": "(x_n, x_m)_p \\rightarrow \\infty"
},
{
"math_id": 63,
"text": "n"
},
{
"math_id": 64,
"text": "(x_n), (y_n)"
},
{
"math_id": 65,
"text": "\\lim_{n\\to +\\infty}(x_n, y_n)_p = +\\infty"
},
{
"math_id": 66,
"text": "\\partial X"
},
{
"math_id": 67,
"text": "\\xi, \\eta"
},
{
"math_id": 68,
"text": " (\\xi, \\eta)_p = \\sup_{(x_n)=\\xi, (y_n)=\\eta} \\left( \\liminf_{n, m\\to +\\infty} (x_n, y_m)_p \\right)"
},
{
"math_id": 69,
"text": "\\xi \\neq \\eta"
},
{
"math_id": 70,
"text": "(\\cdot, \\xi)"
},
{
"math_id": 71,
"text": "\\alpha, \\beta"
},
{
"math_id": 72,
"text": "[0, +\\infty["
},
{
"math_id": 73,
"text": "t \\mapsto d(\\alpha(t), \\beta(t))"
},
{
"math_id": 74,
"text": "X = T"
},
{
"math_id": 75,
"text": "x \\in T"
},
{
"math_id": 76,
"text": "\\partial T"
},
{
"math_id": 77,
"text": "\\exp(-\\operatorname{Length}(\\alpha \\cap \\beta))"
},
{
"math_id": 78,
"text": " ds^2 = {4|dz|^2\\over (1-|z|^2)^2}"
},
{
"math_id": 79,
"text": "n-1"
},
{
"math_id": 80,
"text": "X, Y"
},
{
"math_id": 81,
"text": "g"
},
{
"math_id": 82,
"text": "\\xi_+, \\xi_-"
},
{
"math_id": 83,
"text": "\\{g^n\\xi : n \\ge 0\\}, \\xi \\not= \\xi_-"
},
{
"math_id": 84,
"text": "\\xi_+"
}
] | https://en.wikipedia.org/wiki?curid=1285375 |
1285524 | LHCb experiment | Experiment at the Large Hadron Collider
The LHCb (Large Hadron Collider beauty) experiment is a particle physics detector experiment collecting data at the Large Hadron Collider at CERN. LHCb is a specialized b-physics experiment, designed primarily to measure the parameters of CP violation in the interactions of b-hadrons (heavy particles containing a bottom quark). Such studies can help to explain the matter-antimatter asymmetry of the Universe. The detector is also able to perform measurements of production cross sections, exotic hadron spectroscopy, charm physics and electroweak physics in the forward region. The LHCb collaborators, who built, operate and analyse data from the experiment, are composed of approximately 1650 people from 98 scientific institutes, representing 22 countries. Vincenzo Vagnoni succeeded Chris Parkes (spokesperson 2020–2023) as spokesperson for the collaboration on July 1, 2023. The experiment is located at point 8 of the LHC tunnel close to Ferney-Voltaire, France, just over the border from Geneva. The (small) MoEDAL experiment shares the same cavern.
Physics goals.
The experiment has a wide physics programme covering many important aspects of heavy flavour (both beauty and charm), electroweak and quantum chromodynamics (QCD) physics. Six key measurements involving B mesons have been identified; these are described in a roadmap document that formed the core physics programme for the first high-energy LHC running in 2010–2012.
The LHCb detector.
The fact that the two b-hadrons are predominantly produced in the same forward cone is exploited in the layout of the LHCb detector. The LHCb detector is a single arm forward spectrometer with a polar angular coverage from 10 to 300 milliradians (mrad) in the horizontal and 250 mrad in the vertical plane. The asymmetry between the horizontal and vertical plane is determined by a large dipole magnet with the main field component in the vertical direction.
Subsystems.
The Vertex Locator (VELO) is built around the proton interaction region. It is used to measure the particle trajectories close to the interaction point in order to precisely separate primary and secondary vertices.
The detector operates within about a centimetre of the LHC beam. This implies an enormous flux of particles; the VELO has been designed to withstand integrated fluences of more than 10¹⁴ p/cm² per year for a period of about three years. The detector operates in vacuum and is cooled by a biphase CO2 system. The data of the VELO detector are amplified and read out by the Beetle ASIC.
The RICH-1 detector (Ring imaging Cherenkov detector) is located directly after the vertex detector. It is used for particle identification of low-momentum tracks.
The main tracking system is placed before and after the dipole magnet. It is used to reconstruct the trajectories of charged particles and to measure their momenta. The tracker consists of three subdetectors:
Following the tracking system is RICH-2. It allows the identification of the particle type of high-momentum tracks.
The electromagnetic and hadronic calorimeters provide measurements of the energy of electrons, photons, and hadrons. These measurements are used at trigger level to identify the particles with large transverse momentum (high-Pt particles).
The muon system is used to identify and trigger on muons in the events.
LHCb upgrade (2019–2021).
At the end of 2018, the LHC was shut down for upgrades, with a restart currently planned for early 2022. For the LHCb detector, almost all subdetectors are to be modernised or replaced. It will get a fully new tracking system composed of a modernised vertex locator, upstream tracker (UT) and scintillator fibre tracker (SciFi). The RICH detectors will also be updated, as well as the whole detector electronics. However, the most important change is the switch to the fully software trigger of the experiment, which means that every recorded collision will be analysed by sophisticated software programmes without an intermediate hardware filtering step (which was found to be a bottleneck in the past).
Results.
During the 2011 proton-proton run, LHCb recorded an integrated luminosity of 1 fb−1 at a collision energy of 7 TeV. In 2012, about 2 fb−1 was collected at an energy of 8 TeV. During 2015–2018 (Run 2 of the LHC), about 6 fb−1 was collected at a center-of-mass energy of 13 TeV. In addition, small samples were collected in proton-lead, lead-lead, and xenon-xenon collisions. The LHCb design also allowed the study of collisions of particle beams with a gas (helium or neon) injected inside the VELO volume, making it similar to a fixed-target experiment; this setup is usually referred to as "SMOG". These datasets allow the collaboration to carry out the physics programme of precision Standard Model tests with many additional measurements. As of 2021, LHCb has published more than 500 scientific papers.
Hadron spectroscopy.
LHCb is designed to study beauty and charm hadrons. In addition to precision studies of known particles such as the mysterious X(3872), a number of new hadrons have been discovered by the experiment. As of 2021, the four LHC experiments have discovered about 60 new hadrons in total, the vast majority of them by LHCb. In 2015, analysis of the decay of bottom lambda baryons (Λb) in the LHCb experiment revealed the apparent existence of pentaquarks, in what was described as an "accidental" discovery. Other notable discoveries are those of the "doubly charmed" baryon formula_0 in 2017, the first known baryon with two heavy quarks, and of the fully charmed tetraquark formula_1 in 2020, made of two charm quarks and two charm antiquarks.
CP violation and mixing.
Studies of charge-parity (CP) violation in B-meson decays is the primary design goal of the LHCb experiment. As of 2021, LHCb measurements confirm with a remarkable precision the picture described by the CKM unitarity triangle. The angle formula_2 of the unitarity triangle is now known to about 4°, and is in agreement with indirect determinations.
In 2019, LHCb announced the discovery of CP violation in decays of charm mesons. This was the first time CP violation had been seen in decays of particles other than kaons or B mesons. The rate of the observed CP asymmetry is at the upper edge of existing theoretical predictions, which triggered some interest among particle theorists regarding a possible impact of physics beyond the Standard Model.
In 2020, LHCb announced the discovery of time-dependent CP violation in decays of Bs mesons. The frequency of oscillation of Bs mesons into their antiparticles and vice versa was measured to great precision in 2021.
Rare decays.
Rare decays are decay modes that are strongly suppressed in the Standard Model, which makes them sensitive to potential effects from as yet unknown physics mechanisms.
In 2014, the LHCb and CMS experiments published a joint paper in Nature announcing the discovery of the very rare decay formula_3, whose rate was found to be close to the Standard Model predictions. This measurement severely restricted the possible parameter space of supersymmetry theories, which had predicted a large enhancement of the rate. Since then, LHCb has published several papers with more precise measurements of this decay mode.
Anomalies have been found in several rare decays of B mesons. The most famous example is the so-called formula_4 angular observable, found in the decay formula_5, where the deviation between the data and the theoretical prediction has persisted for years. The rates of several rare decays also differ from the theoretical predictions, though the latter have sizeable uncertainties.
Lepton flavour universality.
In the Standard Model, couplings of charged leptons (electron, muon and tau lepton) to the gauge bosons are expected to be identical, with the only difference emerging from the lepton masses. This postulate is referred to as "lepton flavour universality". As a consequence, in decays of b hadrons, electrons and muons should be produced at similar rates, and the small difference due to the lepton masses is precisely calculable.
LHCb has found deviations from this prediction by comparing the rate of the decay formula_6 to that of formula_7, and in similar processes. However, as the decays in question are very rare, a larger dataset needs to be analysed in order to draw definitive conclusions.
In March 2021, LHCb announced that the anomaly in lepton universality crossed the "3 sigma" statistical significance threshold, which translates to a p-value of 0.1%. The measured value of formula_8, where symbol formula_9 denotes probability of a given decay to happen, was found to be formula_10 while the Standard Model predicts it to be very close to unity. In December 2022 improved measurements discarded this anomaly.
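As a hedged editorial aside (not from the original text), the quoted correspondence between a "3 sigma" deviation and a p-value of about 0.1% follows the one-sided Gaussian-tail convention customarily used for significances in particle physics; a minimal check in Python:
```python
from scipy.stats import norm

# One-sided Gaussian tail probability corresponding to an n-sigma deviation,
# the convention usually quoted for significances in particle physics.
for n_sigma in (3, 5):
    print(n_sigma, norm.sf(n_sigma))
# 3 sigma -> ~1.35e-3 (about 0.1%), the evidence-level threshold mentioned above;
# 5 sigma -> ~2.9e-7, the conventional discovery threshold.
```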
In August 2023, joint searches in the leptonic decays formula_11 by LHCb and the semileptonic decays formula_12 by Belle II (with formula_13) set new limits on universality violations.
Other measurements.
LHCb has contributed to studies of quantum chromodynamics, electroweak physics, and provided cross-section measurements for astroparticle physics.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Xi_{\\rm cc}^{++}"
},
{
"math_id": 1,
"text": "\\mathrm{T}_{\\rm cccc}"
},
{
"math_id": 2,
"text": "\\gamma \\, \\,(\\alpha_3)"
},
{
"math_id": 3,
"text": "\\mathrm{B}^0_{\\rm s} \\to \\mu^+\\mu^-"
},
{
"math_id": 4,
"text": "\\mathrm{P}_5^'"
},
{
"math_id": 5,
"text": "\\mathrm{B}^0 \\to \\mathrm{K}^{*0} \\mu^+\\mu^-"
},
{
"math_id": 6,
"text": "\\mathrm{B}^+ \\to \\mathrm{K}^+ \\mu^+ \\mu^-"
},
{
"math_id": 7,
"text": "\\mathrm{B}^+ \\to \\mathrm{K}^+ \\mathrm{e}^+ \\mathrm{e}^-"
},
{
"math_id": 8,
"text": "R_{\\rm K} = \\frac{\\mathcal{B}(\\mathrm{B}^+ \\to \\mathrm{K}^+ \\mu^+\\mu^-)}{\\mathcal{B}(\\mathrm{B}^+ \\to \\mathrm{K}^+ \\mathrm{e}^+\\mathrm{e}^-)}"
},
{
"math_id": 9,
"text": "\\mathcal{B}"
},
{
"math_id": 10,
"text": "0.846^{+0.044}_{-0.041}"
},
{
"math_id": 11,
"text": "b\\rightarrow s\\ell^+\\ell^-"
},
{
"math_id": 12,
"text": "b\\rightarrow s\\ell\\nu"
},
{
"math_id": 13,
"text": "\\ell=e,\\mu"
}
] | https://en.wikipedia.org/wiki?curid=1285524 |
1285648 | Zirconium tungstate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Zirconium tungstate is the zirconium salt of tungstic acid with the formula ZrW2O8. The phase formed at ambient pressure by reaction of ZrO2 and WO3 is a metastable cubic phase, which has "negative thermal expansion" characteristics, namely it shrinks over a wide range of temperatures when heated. In contrast to most other ceramics exhibiting a negative CTE (coefficient of thermal expansion), the CTE of ZrW2O8 is isotropic and has a large negative magnitude (average CTE of −7.2×10⁻⁶ K⁻¹) over a wide range of temperatures (−273 °C to 777 °C). A number of other phases are formed at high pressures.
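To give a rough sense of scale (an editorial illustration, not part of the original article), the quoted average CTE implies a fractional length change of ΔL/L = α·ΔT; the temperature span below is chosen for illustration only.
```python
# Back-of-envelope estimate of how much a ZrW2O8 bar shrinks on heating,
# using the average CTE quoted above (a rough figure over a wide range).
alpha = -7.2e-6      # 1/K, average coefficient of thermal expansion
delta_T = 1000.0     # K, illustrative heating span within the quoted range
fractional_change = alpha * delta_T
print(f"dL/L = {fractional_change:.2%}")   # about -0.72%, i.e. a slight contraction
```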
Cubic phase.
Cubic zirconium tungstate (alpha-ZrW2O8), one of the several known phases of zirconium tungstate (ZrW2O8) is perhaps one of the most studied materials to exhibit negative thermal expansion. It has been shown to contract continuously over a previously unprecedented temperature range of 0.3 to 1050 K (at higher temperatures the material decomposes). Since the structure is cubic, as described below, the thermal contraction is isotropic - equal in all directions. There is much ongoing research attempting to elucidate why the material exhibits such dramatic negative thermal expansion.
This phase is thermodynamically unstable at room temperature with respect to the binary oxides ZrO2 and WO3, but may be synthesised by heating stoichiometric quantities of these oxides together and then quenching the material by rapidly cooling it from approximately 900 °C to room temperature.
The structure of cubic zirconium tungstate consists of corner-sharing ZrO6 octahedral and WO4 tetrahedral structural units. Its unusual expansion properties are thought to be due to vibrational modes known as Rigid Unit Modes (RUMs), which involve the coupled rotation of the polyhedral units that make up the structure, and lead to contraction.
Detailed crystal structure.
The arrangement of the groups in the structure of cubic ZrW2O8 is analogous to the simple NaCl structure, with ZrO6 octahedra at the Na sites, and W2O8 groups at the Cl sites. The unit cell consists of 44 atoms aligned in a primitive cubic Bravais lattice, with unit cell length 9.15462 Angstroms.
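Since the paragraph above gives the cell contents (44 atoms, i.e. four ZrW2O8 formula units of 11 atoms each) and the cell edge, the crystallographic density of the cubic phase can be estimated as a worked example (an editorial illustration; the standard atomic masses used are not from the article):
```python
# Crystallographic density of cubic ZrW2O8 estimated from the cell data above.
N_A = 6.02214e23          # Avogadro constant, 1/mol
a_cm = 9.15462e-8         # cubic cell edge, 9.15462 Angstrom expressed in cm
molar_mass = 91.224 + 2 * 183.84 + 8 * 15.999   # g/mol for ZrW2O8
formula_units = 4         # 44 atoms per cell / 11 atoms per formula unit
density = formula_units * molar_mass / (N_A * a_cm ** 3)
print(round(density, 2), "g/cm^3")   # about 5.08 g/cm^3
```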
The ZrO6 octahedra are only slightly distorted from a regular conformation, and all oxygen sites in a given octahedron are related by symmetry. The W2O8 unit is made up of two crystallographically distinct WO4 tetrahedra, which are not formally bonded to each other. These two types of tetrahedra differ with respect to the W-O bond lengths and angles. The WO4 tetrahedra are distorted from a regular shape since one oxygen is unconstrained (an atom that is bonded only to the central tungsten (W) atom), and the three other oxygens are each bonded to a zirconium atom ("i.e." the "corner-sharing" of polyhedra).
The structure has "P213" space group symmetry at low temperatures. At higher temperatures, a centre of inversion is introduced by the disordering of the orientation of tungstate groups, and the space group above the phase transition temperature (~180C) is "Paformula_0".
Octahedra and tetrahedra are linked together by sharing an oxygen atom. In the image, note the corner-touching between octahedra and tetrahedra; these are the location of the shared oxygen. The vertices of the tetrahedra and octahedra represent the oxygen, which are spread about the central zirconium and tungsten. Geometrically, the two shapes can "pivot" around these corner-sharing oxygens, without a distortion of the polyhedra themselves. This pivoting is what is thought to lead to the negative thermal expansion, as in certain low frequency normal modes this leads to the contracting 'RUMs' mentioned above.
High pressure forms.
At high pressure, zirconium tungstate undergoes a series of phase transitions, first to an amorphous phase, and then to a U3O8-type phase, in which the zirconium and tungsten atoms are disordered.
Zirconium tungstate-copper system.
Through hot isostatic pressing (HIP), a ZrW2O8–Cu composite (system) can be realized. Work done by C. Verdon and D.C. Dunand in 1997 used similarly sized zirconium tungstate and copper powders in a low-carbon steel can coated with Cu, which were HIPed under 103 MPa pressure for 3 hours at 600 °C. A control experiment was also conducted with only a heat treatment (i.e., no pressing) of the same powder mixture, also at 600 °C for 3 hours, in a quartz tube gettered with titanium.
The X-ray diffraction (XRD) results reported by Verdon and Dunand show the expected products: spectrum (a) is from the as-received zirconium tungstate powder, (b) is the result of the control experiment, and (c) is the ceramic product of the HIP process. Spectrum (c) indicates that new phases formed, with no ZrW2O8 left, while in the control experiment only part of the ZrW2O8 was decomposed.
While complex oxides containing Cu, Zr, and W were believed to be created, selected area diffraction (SAD) of the ceramic product proved the existence of Cu2O precipitates after the reaction. A model consisting of two concurrent processes was surmised: decomposition of the ceramic and loss of oxygen under low oxygen partial pressure at high temperature leads to Cu2O formation, while copper diffuses into the ceramic and forms new oxides that absorb some oxygen upon cooling.
Since only very few oxides (those of the noble metals, which are very expensive) are less stable than Cu2O, and Cu2O was believed to be more stable than ZrW2O8, kinetic control of the reaction must be taken into account. For example, reducing the reaction time and temperature helps alleviate the residual stress caused by the different phases of the ceramic formed during reaction, which could otherwise lead to delamination of the ceramic particles from the matrix and an increase in the CTE.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\bar3"
}
] | https://en.wikipedia.org/wiki?curid=1285648 |
12857474 | Paradoxes of material implication | The paradoxes of material implication are a group of true formulae involving material conditionals whose translations into natural language are intuitively false when the conditional is translated as "if ... then ...". A material conditional formula formula_0 is true unless formula_1 is true and formula_2 is false. If natural language conditionals were understood in the same way, that would mean that the sentence "If the Nazis had won World War Two, everybody would be happy" is vacuously true. Given that such problematic consequences follow from a seemingly correct assumption about logic, they are called "paradoxes". They demonstrate a mismatch between classical logic and robust intuitions about meaning and reasoning.
Paradox of entailment.
As the best known of the paradoxes, and most formally simple, the paradox of entailment makes the best introduction.
In natural language, an instance of the paradox of entailment arises:
"It is raining"
And
"It is not raining"
Therefore
"George Washington is made of rakes."
This arises from the principle of explosion, a law of classical logic stating that inconsistent premises always make an argument valid; that is, inconsistent premises imply any conclusion at all. This seems paradoxical because although the above is a logically valid argument, it is not sound (not all of its premises are true).
Construction.
Validity is defined in classical logic as follows:
"An argument (consisting of premises and a conclusion) is valid if and only if there is no possible situation in which all the premises are true and the conclusion is false. "
For example a valid argument might run:
"If it is raining, water exists" (1st premise)
"It is raining" (2nd premise)
"Water exists" (Conclusion)
In this example there is no possible situation in which the premises are true while the conclusion is false. Since there is no counterexample, the argument is valid.
But one could construct an argument in which the premises are inconsistent. This would satisfy the test for a valid argument since there would be "no possible situation in which all the premises are true" and therefore "no possible situation in which all the premises are true and the conclusion is false".
For example an argument with inconsistent premises might run:
"It is definitely raining" (1st premise; true)
"It is not raining" (2nd premise; false)
"George Washington is made of rakes" (Conclusion)
As there is no possible situation where both premises could be true, then there is certainly no possible situation in which the premises could be true while the conclusion was false. So the argument is valid whatever the conclusion is; inconsistent premises imply all conclusions.
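The validity test described above can be made mechanical by enumerating every assignment of truth values and checking whether any of them makes all premises true and the conclusion false. The Python sketch below is an editorial illustration (the function name "is_valid" and the encoding of the sentences are assumptions); it confirms that with the contradictory premises no such assignment exists, so any conclusion whatsoever passes the test.
```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """Classically valid iff no truth assignment makes every premise true
    while making the conclusion false."""
    for values in product([False, True], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False
    return True

# Variables: r = "it is raining", g = "George Washington is made of rakes".
premises = [lambda r, g: r,        # "It is raining"
            lambda r, g: not r]    # "It is not raining"
conclusion = lambda r, g: g        # "George Washington is made of rakes"
print(is_valid(premises, conclusion, 2))  # True: the premises are never jointly true
```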
Simplification.
The classical paradox formulae are closely tied to conjunction elimination,
formula_3
which can be derived from the paradox formulae, for example from (1) by importation.
In addition, there are serious problems with trying to use material implication as representing the English "if ... then ...". For example, the following are valid inferences:
formula_4
formula_5
but mapping these back to English sentences using "if" gives paradoxes.
The first might be read "If John is in London then he is in England, and if he is in Paris then he is in France. Therefore, it is true that either (a) if John is in London then he is in France, or (b) if he is in Paris then he is in England." Using material implication, if John is "not" in London then (a) is true; whereas if he "is" in London then, because he is not in Paris, (b) is true. Either way, the conclusion that at least one of (a) or (b) is true is valid.
But this does not match how "if ... then ..." is used in natural language: the most likely scenario in which one would say "If John is in London then he is in England" is if one "does not know" where John is, but nonetheless knows that "if" he is in London, he is in England. Under this interpretation, both premises are true, but both clauses of the conclusion are false.
The second example can be read "If both switch A and switch B are closed, then the light is on. Therefore, it is either true that if switch A is closed, the light is on, or that if switch B is closed, the light is on." Here, the most likely natural-language interpretation of the "if ... then ..." statements would be ""whenever" switch A is closed, the light is on," and ""whenever" switch B is closed, the light is on." Again, under this interpretation both clauses of the conclusion may be false (for instance in a series circuit, with a light that comes on only when "both" switches are closed).
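Both inference schemas above can be confirmed by a brute-force truth-table check: in no assignment of truth values are the premises true while the conclusion is false. The short Python sketch below is an editorial illustration (the helper "imp" for the material conditional and the variable names are assumptions).
```python
from itertools import product

def imp(a, b):
    # Material conditional: a -> b is false only when a is true and b is false.
    return (not a) or b

# (p -> q) and (r -> s)   entails   (p -> s) or (r -> q)
ok1 = all(imp(p, s) or imp(r, q)
          for p, q, r, s in product([False, True], repeat=4)
          if imp(p, q) and imp(r, s))

# (p and q) -> r   entails   (p -> r) or (q -> r)
ok2 = all(imp(p, r) or imp(q, r)
          for p, q, r in product([False, True], repeat=3)
          if imp(p and q, r))

print(ok1, ok2)  # True True -- both inferences are classically valid
```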
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " P \\rightarrow Q "
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "(p \\land q) \\to p"
},
{
"math_id": 4,
"text": "(p \\to q) \\land (r \\to s)\\ \\vdash\\ (p \\to s) \\lor (r \\to q)"
},
{
"math_id": 5,
"text": "(p \\land q) \\to r\\ \\vdash\\ (p \\to r) \\lor (q \\to r)"
}
] | https://en.wikipedia.org/wiki?curid=12857474 |
1285781 | Haugh unit | The Haugh unit is a measure of egg protein quality based on the height of its egg white (albumen). The test was introduced by Raymond Haugh in 1937 and is an important industry measure of egg quality next to other measures such as shell thickness and strength.
An egg is weighed, then broken onto a flat surface (breakout method), and a micrometer used to determine the height of the thick albumen (egg white) that immediately surrounds the yolk. The height, correlated with the weight, determines the Haugh unit, or HU, rating. The higher the number, the better the quality of the egg (fresher, higher quality eggs have thicker whites). Although the measurement determines the protein content and freshness of the egg, it does not measure other important nutrient contents such as the micronutrient or vitamins present in the egg.
Formula.
The formula for calculating the Haugh unit is:
formula_0
Where:
HU = the Haugh unit
h = observed height of the albumen in millimeters
w = weight of the egg in grams
Haugh Index :
AA : 72 or more
A : 71 - 60
B : 59 - 31
C : 30 or less
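A direct implementation of the formula and the grading thresholds above is straightforward; the Python sketch below is an editorial illustration (the function names and the sample egg are hypothetical), with the logarithm taken to base 10.
```python
import math

def haugh_units(albumen_height_mm, egg_weight_g):
    # HU = 100 * log10(h - 1.7 * w**0.37 + 7.6)
    h, w = albumen_height_mm, egg_weight_g
    return 100 * math.log10(h - 1.7 * w ** 0.37 + 7.6)

def usda_grade(hu):
    # Thresholds from the Haugh index table above.
    if hu >= 72:
        return "AA"
    if hu >= 60:
        return "A"
    if hu >= 31:
        return "B"
    return "C"

hu = haugh_units(albumen_height_mm=6.0, egg_weight_g=60.0)
print(round(hu, 1), usda_grade(hu))  # about 76.8, grade AA for this sample egg
```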
Below are the USDA's terms describing egg white and its corresponding Haugh unit:
(b) Firm (AA quality). A white that is sufficiently thick or viscous to prevent the yolk outline from being more than slightly defined or indistinctly indicated when the egg is twirled. With respect to a broken-out egg, a firm white has a Haugh unit value of 72 or higher when measured at a temperature between 45formula_1F and 60formula_1F.
(c) Reasonably firm (A quality). A white that is somewhat less thick or viscous than a firm white. A reasonably firm white permits the yolk to approach the shell more closely which results in a fairly well defined yolk outline when the egg is twirled. With respect to a broken-out egg, a reasonably firm white has a Haugh unit value of 60 up to, but not including, 72 when measured at a temperature between 45formula_1F and 60formula_1F.
(d) Weak and watery (B quality). A white that is weak, thin, and generally lacking in viscosity. A weak and watery white permits the yolk to approach the shell closely, thus causing the yolk outline to appear plainly visible and dark when the egg is twirled. With respect to a broken-out egg, a weak and watery white has a Haugh unit value lower than 60 when measured at a temperature between 45formula_1F and 60formula_1F.
United States Standards, Grades, and Weight Classes for Shell Eggs, AMS 56, Effective July 20, 2000
References.
| [
{
"math_id": 0,
"text": "HU = 100 * log(h-1.7w^{0.37} + 7.6)"
},
{
"math_id": 1,
"text": "^o"
}
] | https://en.wikipedia.org/wiki?curid=1285781 |
12859904 | Connexive logic | Connexive logic is a class of non-classical logics designed to exclude the paradoxes of material implication. The characteristic that separates connexive logic from other non-classical logics is its acceptance of Aristotle's thesis, i.e. the formula,
formula_0
as a logical truth. Aristotle's thesis asserts that no statement follows from its own denial. Stronger connexive logics also accept Boethius' thesis,
formula_1
which states that if a statement implies one thing, it does not imply its opposite.
Relevance logic is another logical theory that tries to avoid the paradoxes of material implication.
History.
Connexive logic is arguably one of the oldest approaches to logic. Aristotle's thesis is named after Aristotle because he uses this principle in a passage in the "Prior Analytics".
It is impossible that the same thing should be necessitated by the being and the not-being of the same thing. I mean, for example, that it is impossible that B should necessarily be great if A is white, and that B should necessarily be great if A is not white. For if B is not great A cannot be white. But if, when A is not white, it is necessary that B should be great, it necessarily results that if B is not great, B itself is great. But this is impossible. "An. Pr". ii 4.57b3.
The sense of this passage is to perform a "reductio ad absurdum" proof on the claim that two formulas, (A → B) and (~A → B), can be true simultaneously. The proof is:
1. A → B (premise)
2. ~A → B (premise)
3. ~B → ~A (from 1, by contraposition)
4. ~B → B (from 3 and 2, by hypothetical syllogism)
Aristotle then declares the last line to be impossible, completing the "reductio". But if it is impossible, its denial, ~(~B → B), is a logical truth.
Aristotelian syllogisms (as opposed to Boolean syllogisms) appear to be based on connexive principles. For example, the contrariety of A and E statements, "All S are P," and "No S are P," follows by a "reductio ad absurdum" argument similar to the one given by Aristotle.
Later logicians, notably Chrysippus, are also thought to have endorsed connexive principles. By 100 BCE logicians had divided into four or five distinct schools concerning the correct understanding of conditional ("if...then...") statements. Sextus Empiricus described one school as follows:
And those who introduce the notion of connexion say that a conditional is sound when the contradictory of its consequent is incompatible with its antecedent.
The term "connexivism" is derived from this passage (as translated by Kneale and Kneale).
It is believed that Sextus was here describing the school of Chrysippus. That this school accepted Aristotle's thesis seems clear, because this definition of the conditional requires that Aristotle's thesis be a logical truth, provided we assume that every statement is compatible with itself, which seems fairly fundamental to the concept of compatibility.
The medieval philosopher Boethius also accepted connexive principles. In "De Syllogismo Hypothetico", he argues that from, "If A, then if B then C," and "If B then not-C," we may infer "not-A," by modus tollens. However, this follows only if the two statements, "If B then C," and "If B then not-C," are considered incompatible.
Since Aristotelian logic was the standard logic studied until the 19th century, it could reasonably be claimed that connexive logic was the accepted school of thought among logicians for most of Western history. (Of course, logicians were not necessarily aware of belonging to the connexivist school.) However, in the 19th century Boolean syllogisms, and a propositional logic based on truth functions, became the standard. Since then, relatively few logicians have subscribed to connexivism. These few include Everett J. Nelson and P. F. Strawson.
Connecting antecedent to consequent.
The objection that is made to the truth-functional definition of conditionals is that there is no requirement that the consequent "actually follow" from the antecedent. So long as the antecedent is false or the consequent true, the conditional is considered to be true whether there is any relation between the antecedent and the consequent or not. Hence, as the philosopher Charles Sanders Peirce once remarked, you can cut up a newspaper, sentence by sentence, put all the sentences in a hat, and draw any two at random. It is guaranteed that either the first sentence will imply the second, or vice versa. But when we use the words "if" and "then" we generally mean to assert that there is some relation between the antecedent and the consequent. What is the nature of that relationship? Relevance (or relevant) logicians take the view that, in addition to saying that the consequent cannot be false while the antecedent is true, the antecedent must be "relevant" to the consequent. At least initially, this means that there must be at least some terms (or variables) that appear in both the antecedent and the consequent. Connexivists generally claim instead that there must be some "real connection" between the antecedent and the consequent, such as might be the result of real class inclusion relations. For example, the class relations, "All men are mortal," would provide a real connection that would warrant the conditional, "If Socrates is a man, then Socrates is mortal." However, more remote connections, for example "If she apologized to him, then he lied to me." (suggested by Bennett) still defy connexivist analysis.
Notes.
| [
{
"math_id": 0,
"text": " \\lnot ( \\lnot p \\rightarrow p ) "
},
{
"math_id": 1,
"text": " (p \\rightarrow q) \\rightarrow \\lnot(p \\rightarrow \\lnot q) "
}
] | https://en.wikipedia.org/wiki?curid=12859904 |
1286130 | Pretzel link | Link formed from a finite number of twisted sections
In the mathematical theory of knots, a pretzel link is a special kind of link. It consists of a finite number of tangles made of two intertwined circular helices. The tangles are connected cyclicly, and the first component of the first tangle is connected to the second component of the second tangle, the first component of the second tangle is connected to the second component of the third tangle, and so on. Finally, the first component of the last tangle is connected to the second component of the first. A pretzel link which is also a knot (that is, a link with one component) is a pretzel knot.
Each tangle is characterized by its number of twists: positive if they are counter-clockwise or left-handed, negative if clockwise or right-handed. In the standard projection of the formula_0 pretzel link, there are formula_1 left-handed crossings in the first tangle, formula_2 in the second, and, in general, formula_3 in the "n"th.
A pretzel link can also be described as a Montesinos link with integer tangles.
Some basic results.
The formula_4 pretzel link is a knot iff both formula_5 and all the formula_6 are odd or exactly one of the formula_6 is even.
The formula_0 pretzel link is split if at least two of the formula_6 are zero; but the converse is false.
The formula_7 pretzel link is the mirror image of the formula_0 pretzel link.
The formula_0 pretzel link is isotopic to the formula_8 pretzel link. Thus, too, the formula_0 pretzel link is isotopic to the formula_9 pretzel link.
The formula_10 pretzel link is isotopic to the formula_11 pretzel link. However, if one orients the links in a canonical way, then these two links have opposite orientations.
Some examples.
The (1, 1, 1) pretzel knot is the (right-handed) trefoil; the (−1, −1, −1) pretzel knot is its mirror image.
The (5, −1, −1) pretzel knot is the stevedore knot (61).
If p, q, r are distinct odd integers greater than 1, then the ("p", "q", "r") pretzel knot is a non-invertible knot.
The (2"p", 2"q", 2"r") pretzel link is a link formed by three linked unknots.
The (−3, 0, −3) pretzel knot (the square knot) is the connected sum of two trefoil knots.
The (0, q, 0) pretzel link is the split union of an unknot and another knot.
Montesinos.
A Montesinos link is a special kind of link that generalizes pretzel links (a pretzel link can also be described as a Montesinos link with integer tangles). A Montesinos link which is also a knot (i.e., a link with one component) is a Montesinos knot.
A Montesinos link is composed of several rational tangles. One notation for a Montesinos link is formula_12.
In this notation, formula_13 and all the formula_14 and formula_15 are integers. The Montesinos link given by this notation consists of the sum of the rational tangles given by the integer formula_13 and the rational tangles formula_16
These knots and links are named after the Spanish topologist José María Montesinos Amilibia, who first introduced them in 1973.
Utility.
(−2, 3, 2n + 1) pretzel links are especially useful in the study of 3-manifolds. Many results have been stated about the manifolds that result from Dehn surgery on the (−2,3,7) pretzel knot in particular.
The hyperbolic volume of the complement of the (−2,3,8) pretzel link is 4 times Catalan's constant, approximately 3.66. This pretzel link complement is one of two two-cusped hyperbolic manifolds with the minimum possible volume, the other being the complement of the Whitehead link.
References.
| [
{
"math_id": 0,
"text": "(p_1,\\,p_2,\\dots,\\,p_n)"
},
{
"math_id": 1,
"text": "p_1"
},
{
"math_id": 2,
"text": "p_2"
},
{
"math_id": 3,
"text": "p_n"
},
{
"math_id": 4,
"text": "(p_1,p_2,\\dots,p_n)"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "p_i"
},
{
"math_id": 7,
"text": "(-p_1,-p_2,\\dots,-p_n)"
},
{
"math_id": 8,
"text": "(p_2,\\,p_3,\\dots,\\,p_n,\\,p_1)"
},
{
"math_id": 9,
"text": "(p_k,\\,p_{k+1},\\dots,\\,p_n,\\,p_1,\\,p_2,\\dots,\\,p_{k-1})"
},
{
"math_id": 10,
"text": "(p_1,\\,p_2,\\,\\dots,\\,p_n)"
},
{
"math_id": 11,
"text": "(p_n,\\,p_{n-1},\\dots,\\,p_2,\\,p_1)"
},
{
"math_id": 12,
"text": "K(e;\\alpha_1 /\\beta_1,\\alpha_2 /\\beta_2,\\ldots,\\alpha_n /\\beta_n)"
},
{
"math_id": 13,
"text": "e"
},
{
"math_id": 14,
"text": "\\alpha_i"
},
{
"math_id": 15,
"text": "\\beta_i"
},
{
"math_id": 16,
"text": "\\alpha_1 /\\beta_1,\\alpha_2 /\\beta_2,\\ldots,\\alpha_n /\\beta_n"
}
] | https://en.wikipedia.org/wiki?curid=1286130 |
1286768 | Use-define chain | Data structure that tracks variable use and definitions
Within computer science, a use-definition chain (or UD chain) is a data structure that consists of a use "U" of a variable, and all the definitions "D" of that variable that can reach that use without any other intervening definitions. Here, a definition generally means the assignment of some value to a variable.
A counterpart of a "UD Chain" is a definition-use chain (or DU chain), which consists of a definition "D" of a variable and all the uses "U" reachable from that definition without any other intervening definitions.
Both UD and DU chains are created by using a form of static code analysis known as data flow analysis. Knowing the use-def and def-use chains for a program or subprogram is a prerequisite for many compiler optimizations, including constant propagation and common subexpression elimination.
Purpose.
Making the use-define or define-use chains is a step in liveness analysis, so that logical representations of all the variables can be identified and tracked through the code.
Consider the following snippet of code:
int x = 0; /* A */
x = x + y; /* B */
/* 1, some uses of x */
x = 35; /* C */
/* 2, some more uses of x */
Notice that codice_0 is assigned a value at three points (marked A, B, and C). However, at the point marked "1", the use-def chain for codice_0 should indicate that its current value must have come from line B (and its value at line B must have come from line A). Contrariwise, at the point marked "2", the use-def chain for codice_0 indicates that its current value must have come from line C. Since the value of the codice_0 in block 2 does not depend on any definitions in block 1 or earlier, codice_0 might as well be a different variable there; practically speaking, it "is" a different variable — call it codice_5.
int x = 0; /* A */
x = x + y; /* B */
/* 1, some uses of x */
int x2 = 35; /* C */
/* 2, some uses of x2 */
The process of splitting codice_0 into two separate variables is called live range splitting. See also static single assignment form.
Setup.
The list of statements determines a strong order among statements.
For a variable, such as "v", its declaration is identified as "V" (italic capital letter). In general, a declaration of a variable can be in an outer scope (e.g., a global variable).
Definition of a variable.
When a variable, "v", is on the LHS of an assignment statement, that statement is a definition of "v". Every variable ("v") has at least one definition by its declaration ("V") (or initialization).
Use of a variable.
If a variable, "v", is on the RHS of statement "j", then there is a statement "i" with "i" < "j" that is a definition of "v", and "v" has a use at statement "j".
Execution.
Consider the sequential execution of the list of statements, and what can then be observed as the computation at statement "j".
Execution example for def-use-chain.
This example is based on a Java algorithm for finding the gcd. (It is not important to understand what this function does.)
/**
 * @param(a, b) The values used to calculate the divisor.
 * @return The greatest common divisor of a and b.
 */
int gcd(int a, int b) {
    int c = a;
    int d = b;
    if (c == 0)
        return d;
    while (d != 0) {
        if (c > d)
            c = c - d;
        else
            d = d - c;
    }
    return c;
}
To find all def-use chains for variable d, first locate a write access (definition) of d, such as "d=b" in line 7; then locate each read access of d that this definition can reach, such as "return d"; and record each pair in the style [name of the variable, the concrete write access, the concrete read access], giving for instance [d, d=b, return d].
Repeat these steps in the following style: combine each write access with each read access (but NOT the other way round).
The result should be:
[d, d=b, return d]
[d, d=b, while(d!=0)]
[d, d=b, if(c>d)]
[d, d=b, c=c-d]
[d, d=b, d=d-c]
[d, d=d-c, while(d!=0)]
[d, d=d-c, if(c>d)]
[d, d=d-c, c=c-d]
[d, d=d-c, d=d-c]
Take care when the variable is redefined along the way.
For example, from line 7 down to line 13 in the source code, d is not redefined (changed).
At line 14, d could be redefined, which is why this write access to d has to be recombined with all read accesses that it can reach.
In this case, only the code from line 10 onwards is relevant; line 7, for example, cannot be reached again. To see this, you can imagine two different variables d:
[d1, d1=b, return d1]
[d1, d1=b, while(d1!=0)]
[d1, d1=b, if(c>d1)]
[d1, d1=b, c=c-d1]
[d1, d1=b, d1=d1-c]
[d2, d2=d2-c, while(d2!=0)]
[d2, d2=d2-c, if(c>d2)]
[d2, d2=d2-c, c=c-d2]
[d2, d2=d2-c, d2=d2-c]
As a result, you could get something like the following, where the variable d1 has been replaced by b:
/**
 * @param(a, b) The values used to calculate the divisor.
 * @return The greatest common divisor of a and b.
 */
int gcd(int a, int b) {
    int c = a;
    int d;
    if (c == 0)
        return b;
    if (b != 0) {
        if (c > b) {
            c = c - b;
            d = b;
        } else {
            d = b - c;
        }
        while (d != 0) {
            if (c > d)
                c = c - d;
            else
                d = d - c;
        }
    }
    return c;
}
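The chain-building procedure illustrated above can be sketched in code. The following is a minimal, illustrative Java sketch for straight-line code only (no branches or loops): each statement records which variable it defines and which variables it uses, and every definition is paired with each later use it reaches, i.e., each use before the next redefinition. The class and method names are invented for this example, and the statement descriptions are written by hand rather than produced by a parser.

import java.util.ArrayList;
import java.util.List;

class Stmt {
    final String text;        // e.g. "d=b"
    final String defines;     // variable written by this statement, or null
    final List<String> uses;  // variables read by this statement

    Stmt(String text, String defines, List<String> uses) {
        this.text = text;
        this.defines = defines;
        this.uses = uses;
    }
}

public class DefUseChains {

    // For each definition of v, pair it with every later use of v that it reaches,
    // i.e. every use occurring before the next redefinition of v.
    static List<String> chains(List<Stmt> stmts, String v) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i < stmts.size(); i++) {
            if (!v.equals(stmts.get(i).defines)) continue;   // skip statements that do not define v
            for (int j = i + 1; j < stmts.size(); j++) {
                if (stmts.get(j).uses.contains(v))
                    result.add("[" + v + ", " + stmts.get(i).text + ", " + stmts.get(j).text + "]");
                if (v.equals(stmts.get(j).defines)) break;   // the next definition kills this one
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // One pass through the loop body of the gcd example, written as straight-line code.
        List<Stmt> code = List.of(
            new Stmt("d=b", "d", List.of("b")),
            new Stmt("if(c>d)", null, List.of("c", "d")),
            new Stmt("c=c-d", "c", List.of("c", "d")),
            new Stmt("d=d-c", "d", List.of("d", "c")),
            new Stmt("return c", null, List.of("c")));
        chains(code, "d").forEach(System.out::println);
    }
}

Running this prints the three chains [d, d=b, if(c>d)], [d, d=b, c=c-d] and [d, d=b, d=d-c], a subset of the chains listed above; handling the full example requires a reaching-definitions analysis over the loop's control flow, which this sketch deliberately omits.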
Method of building a "use-def" (or "ud") chain.
With this algorithm, two things are accomplished:
References.
| [
{
"math_id": 0,
"text": "|A(i)|"
}
] | https://en.wikipedia.org/wiki?curid=1286768 |
12868239 | Heteroskedasticity-consistent standard errors | Asymptotic variances under heteroskedasticity
The topic of heteroskedasticity-consistent (HC) standard errors arises in statistics and econometrics in the context of linear regression and time series analysis. These are also known as heteroskedasticity-robust standard errors (or simply robust standard errors), Eicker–Huber–White standard errors (also Huber–White standard errors or White standard errors), to recognize the contributions of Friedhelm Eicker, Peter J. Huber, and Halbert White.
In regression and time-series modelling, basic forms of models make use of the assumption that the errors or disturbances "u""i" have the same variance across all observation points. When this is not the case, the errors are said to be heteroskedastic, or to have heteroskedasticity, and this behaviour will be reflected in the residuals formula_0 estimated from a fitted model. Heteroskedasticity-consistent standard errors are used to allow the fitting of a model that does contain heteroskedastic residuals. The first such approach was proposed by Huber (1967), and further improved procedures have been produced since for cross-sectional data, time-series data and GARCH estimation.
Heteroskedasticity-consistent standard errors that differ from classical standard errors may indicate model misspecification. Substituting heteroskedasticity-consistent standard errors does not resolve this misspecification, which may lead to bias in the coefficients. In most situations, the problem should be found and fixed. Other types of standard error adjustments, such as clustered standard errors or HAC standard errors, may be considered as extensions to HC standard errors.
History.
Heteroskedasticity-consistent standard errors were introduced by Friedhelm Eicker, and popularized in econometrics by Halbert White.
Problem.
Consider the linear regression model for the scalar formula_1.
formula_2
where formula_3 is a "k" × 1 column vector of explanatory variables (features), formula_4 is a "k" × 1 column vector of parameters to be estimated, and formula_5 is the residual error.
The ordinary least squares (OLS) estimator is
formula_6
where formula_7 is a vector of observations formula_8, and formula_9 denotes the matrix of stacked formula_10 values observed in the data.
If the sample errors have equal variance formula_11 and are uncorrelated, then the least-squares estimate of formula_4 is BLUE (best linear unbiased estimator), and its variance is estimated with
formula_12
where formula_13 are the regression residuals.
When the error terms do not have constant variance (i.e., the assumption of formula_14 is untrue), the OLS estimator loses its desirable properties. The formula for variance now cannot be simplified:
formula_15
where formula_16
While the OLS point estimator remains unbiased, it is not "best" in the sense of having minimum mean square error, and the OLS variance estimator formula_17 does not provide a consistent estimate of the variance of the OLS estimates.
For any non-linear model (for instance logit and probit models), however, heteroskedasticity has more severe consequences: the maximum likelihood estimates of the parameters will be biased (in an unknown direction), as well as inconsistent (unless the likelihood function is modified to correctly take into account the precise form of heteroskedasticity). As pointed out by Greene, “simply computing a robust covariance matrix for an otherwise inconsistent estimator does not give it redemption.”
Solution.
If the regression errors formula_18 are independent, but have distinct variances formula_19, then formula_20 which can be estimated with formula_21. This provides White's (1980) estimator, often referred to as "HCE" (heteroskedasticity-consistent estimator):
formula_22
where as above formula_9 denotes the matrix of stacked formula_23 values from the data. The estimator can be derived in terms of the generalized method of moments (GMM).
Also often discussed in the literature (including White's paper) is the covariance matrix formula_24 of the formula_25-consistent limiting distribution:
formula_26
where
formula_27
and
formula_28
Thus,
formula_29
and
formula_30
Precisely which covariance matrix is of concern is a matter of context.
Alternative estimators have been proposed in MacKinnon & White (1985) that correct for unequal variances of regression residuals due to different leverage. Unlike the asymptotic White's estimator, their estimators are unbiased when the data are homoscedastic.
Of the four widely available different options, often denoted as HC0-HC3, the HC3 specification appears to work best, with tests relying on the HC3 estimator featuring better power and closer proximity to the targeted size, especially in small samples. The larger the sample, the smaller the difference between the different estimators.
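As a concrete illustration (a sketch with made-up data, written in plain Java rather than with any statistics library), consider the simple regression of "y" on a constant and a single regressor "x". In this one-regressor case the sandwich formula above reduces, for the slope estimate, to Σi ("x"i − x̄)2 ε̂i2 divided by (Σi ("x"i − x̄)2)2, which the sketch compares with the classical OLS variance; the HC1–HC3 variants would additionally rescale the squared residuals (for example, HC1 multiplies them by "n"/("n" − "k")).

public class RobustSEExample {
    public static void main(String[] args) {
        // Made-up data in which the error variance grows with x (heteroskedastic).
        double[] x = {1, 2, 3, 4, 5, 6, 7, 8};
        double[] y = {1.1, 1.9, 3.4, 3.6, 6.1, 4.8, 8.3, 6.9};
        int n = x.length;

        double xbar = 0, ybar = 0;
        for (int i = 0; i < n; i++) { xbar += x[i]; ybar += y[i]; }
        xbar /= n;
        ybar /= n;

        // OLS slope and intercept.
        double sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sxx += (x[i] - xbar) * (x[i] - xbar);
            sxy += (x[i] - xbar) * (y[i] - ybar);
        }
        double b = sxy / sxx;
        double a = ybar - b * xbar;

        // Residuals and sum of squared residuals.
        double[] e = new double[n];
        double sse = 0;
        for (int i = 0; i < n; i++) {
            e[i] = y[i] - a - b * x[i];
            sse += e[i] * e[i];
        }

        // Classical variance of the slope: s^2 / Sxx, with s^2 = SSE / (n - k) and k = 2.
        double varClassical = (sse / (n - 2)) / sxx;

        // HC0 (White) variance of the slope: sum((x_i - xbar)^2 * e_i^2) / Sxx^2.
        double meat = 0;
        for (int i = 0; i < n; i++) meat += (x[i] - xbar) * (x[i] - xbar) * e[i] * e[i];
        double varHC0 = meat / (sxx * sxx);

        System.out.printf("slope = %.4f%n", b);
        System.out.printf("classical SE = %.4f, HC0 robust SE = %.4f%n",
                Math.sqrt(varClassical), Math.sqrt(varHC0));
    }
}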
An alternative to explicitly modelling the heteroskedasticity is using a resampling method such as the wild bootstrap. Given that the studentized bootstrap, which standardizes the resampled statistic by its standard error, yields an asymptotic refinement, heteroskedasticity-robust standard errors remain nevertheless useful.
Instead of accounting for the heteroskedastic errors, most linear models can be transformed to feature homoskedastic error terms (unless the error term is heteroskedastic by construction, e.g. in a linear probability model). One way to do this is using weighted least squares, which also features improved efficiency properties.
See also.
References.
| [
{
"math_id": 0,
"text": "\\widehat{u}_i "
},
{
"math_id": 1,
"text": "y"
},
{
"math_id": 2,
"text": "\ny = \\mathbf{x}^{\\top} \\boldsymbol{\\beta} + \\varepsilon, \\,\n"
},
{
"math_id": 3,
"text": "\\mathbf{x}"
},
{
"math_id": 4,
"text": "\\boldsymbol{\\beta}"
},
{
"math_id": 5,
"text": "\\varepsilon"
},
{
"math_id": 6,
"text": "\n\\widehat \\boldsymbol{\\beta}_\\mathrm{OLS} = (\\mathbf{X}^{\\top} \\mathbf{X})^{-1} \\mathbf{X}^\\top \\mathbf{y}. \\,\n"
},
{
"math_id": 7,
"text": "\\mathbf{y}"
},
{
"math_id": 8,
"text": "y_i"
},
{
"math_id": 9,
"text": "\\mathbf{X}"
},
{
"math_id": 10,
"text": "\\mathbf{x}_i"
},
{
"math_id": 11,
"text": "\\sigma^2"
},
{
"math_id": 12,
"text": "\\hat{\\mathbb{V}}\\left[\\widehat\\boldsymbol\\beta_\\mathrm{OLS}\\right] = s^2 (\\mathbf{X}^{\\top}\\mathbf{X})^{-1}, \\quad s^2 = \\frac{\\sum_i \\widehat \\varepsilon_i^2}{n-k} "
},
{
"math_id": 13,
"text": "\\widehat \\varepsilon_i = y_i - \\mathbf{x}_i^{\\top} \\widehat \\boldsymbol{\\beta}_\\mathrm{OLS}"
},
{
"math_id": 14,
"text": " \\mathbb{E}[\\mathbf{u}\\mathbf{u}^{\\top}] = \\sigma^2 \\mathbf{I}_n"
},
{
"math_id": 15,
"text": " \\mathbb{V}\\left[\\widehat\\boldsymbol\\beta_\\mathrm{OLS}\\right] = \\mathbb{V}\\big[ (\\mathbf{X}^{\\top}\\mathbf{X})^{-1} \\mathbf{X}^{\\top}\\mathbf{y} \\big] = (\\mathbf{X}^{\\top}\\mathbf{X})^{-1} \\mathbf{X}^{\\top} \\mathbf{\\Sigma} \\mathbf{X} (\\mathbf{X}^{\\top}\\mathbf{X})^{-1}"
},
{
"math_id": 16,
"text": " \\mathbf{\\Sigma} = \\mathbb{V}[\\mathbf{u}]."
},
{
"math_id": 17,
"text": "\\hat{\\mathbb{V}} \\left[ \\widehat \\boldsymbol{\\beta}_\\mathrm{OLS} \\right]"
},
{
"math_id": 18,
"text": "\\varepsilon_i"
},
{
"math_id": 19,
"text": "\\sigma^2_i"
},
{
"math_id": 20,
"text": "\\mathbf{\\Sigma} = \\operatorname{diag}(\\sigma_1^2, \\ldots, \\sigma_n^2)"
},
{
"math_id": 21,
"text": "\\widehat\\sigma_i^2 = \\widehat \\varepsilon_i^2"
},
{
"math_id": 22,
"text": "\n\\begin{align}\n\\hat{\\mathbb{V}}_\\text{HCE} \\big[ \\widehat \\boldsymbol{\\beta}_\\text{OLS} \\big] &= \\frac{1}{n} \\bigg(\\frac{1}{n} \\sum_i \\mathbf{x}_i \\mathbf{x}_i^{\\top} \\bigg)^{-1} \\bigg(\\frac{1}{n} \\sum_i \\mathbf{x}_i \\mathbf{x}_i^\\top \\widehat{\\varepsilon}_i^2 \\bigg) \\bigg(\\frac{1}{n} \\sum_i \\mathbf{x}_i \\mathbf{x}_i^{\\top} \\bigg)^{-1} \\\\\n&= ( \\mathbf{X}^{\\top} \\mathbf{X} )^{-1} ( \\mathbf{X}^{\\top} \\operatorname{diag}(\\widehat \\varepsilon_1^2, \\ldots, \\widehat \\varepsilon_n^2) \\mathbf{X} ) ( \\mathbf{X}^{\\top} \\mathbf{X})^{-1},\n\\end{align}\n"
},
{
"math_id": 23,
"text": "\\mathbf{x}_i^{\\top}"
},
{
"math_id": 24,
"text": "\\widehat\\mathbf{\\Omega}_n"
},
{
"math_id": 25,
"text": "\\sqrt{n}"
},
{
"math_id": 26,
"text": "\n\\sqrt{n}(\\widehat \\boldsymbol{\\beta}_n - \\boldsymbol{\\beta}) \\, \\xrightarrow{d} \\, \\mathcal{N}(\\mathbf{0}, \\mathbf{\\Omega}),\n"
},
{
"math_id": 27,
"text": "\n\\mathbf{\\Omega} = \\mathbb{E}[\\mathbf{X} \\mathbf{X}^{\\top}]^{-1} \\mathbb{V}[\\mathbf{X} \\boldsymbol{\\varepsilon}]\\operatorname \\mathbb{E}[\\mathbf{X} \\mathbf{X}^{\\top}]^{-1},\n"
},
{
"math_id": 28,
"text": "\n\\begin{align}\n\\widehat\\mathbf{\\Omega}_n &= \\bigg(\\frac{1}{n} \\sum_i \\mathbf{x}_i \\mathbf{x}_i^{\\top} \\bigg)^{-1} \\bigg(\\frac{1}{n} \\sum_i \\mathbf{x}_i \\mathbf{x}_i^{\\top} \\widehat \\varepsilon_i^2 \\bigg) \\bigg(\\frac{1}{n} \\sum_i \\mathbf{x}_i \\mathbf{x}_i^{\\top} \\bigg)^{-1} \\\\\n&= n ( \\mathbf{X}^{\\top} \\mathbf{X} )^{-1} ( \\mathbf{X}^{\\top} \\operatorname{diag}(\\widehat \\varepsilon_1^2, \\ldots, \\widehat \\varepsilon_n^2) \\mathbf{X} ) ( \\mathbf{X}^{\\top} \\mathbf{X})^{-1}\n\\end{align}\n"
},
{
"math_id": 29,
"text": "\n\\widehat \\mathbf{\\Omega}_n = n \\cdot \\hat{\\mathbb{V}}_\\text{HCE}[\\widehat \\boldsymbol{\\beta}_\\text{OLS}]\n"
},
{
"math_id": 30,
"text": "\n\\widehat \\mathbb{V}[\\mathbf{X} \\boldsymbol{\\varepsilon}] = \\frac{1}{n} \\sum_i \\mathbf{x}_i \\mathbf{x}_i^{\\top} \\widehat \\varepsilon_i^2 = \\frac{1}{n} \\mathbf{X}^{\\top} \\operatorname{diag}(\\widehat \\varepsilon_1^2, \\ldots, \\widehat \\varepsilon_n^2) \\mathbf{X}.\n"
}
] | https://en.wikipedia.org/wiki?curid=12868239 |
12868362 | Filling radius | In Riemannian geometry, the filling radius of a Riemannian manifold "X" is a metric invariant of "X". It was originally introduced in 1983 by Mikhail Gromov, who used it to prove his systolic inequality for essential manifolds, vastly generalizing Loewner's torus inequality and Pu's inequality for the real projective plane, and creating systolic geometry in its modern form.
The filling radius of a simple loop "C" in the plane is defined as the largest radius, "R" > 0, of a circle that fits inside "C":
formula_0
Dual definition via neighborhoods.
There is a kind of a dual point of view that allows one to generalize this notion in an extremely fruitful way, as shown by Gromov. Namely, we consider the formula_1-neighborhoods of the loop "C", denoted
formula_2
As formula_3 increases, the formula_1-neighborhood formula_4 swallows up more and more of the interior of the loop. The "last" point to be swallowed up is precisely the center of a largest inscribed circle. Therefore, we can reformulate the above definition by defining
formula_5 to be the infimum of formula_6 such that the loop "C" contracts to a point in formula_4.
Given a compact manifold "X" imbedded in, say, Euclidean space "E", we could define the filling radius "relative" to the imbedding, by minimizing the size of the neighborhood formula_7 in which "X" could be homotoped to something smaller dimensional, e.g., to a lower-dimensional polyhedron. Technically it is more convenient to work with a homological definition.
Homological definition.
Denote by "A" the coefficient ring formula_8 or formula_9, depending on whether or not "X" is orientable. Then the fundamental class, denoted "[X]", of a compact "n"-dimensional manifold "X", is a generator of the homology group formula_10, and we set
formula_11
where formula_12 is the inclusion homomorphism.
To define an "absolute" filling radius in a situation where "X" is equipped with a Riemannian metric "g", Gromov proceeds as follows.
One exploits the Kuratowski embedding.
One imbeds "X" in the Banach space formula_13 of bounded Borel functions on "X", equipped with the sup norm formula_14. Namely, we map a point formula_15 to the function formula_16 defined by the formula formula_17
for all formula_18, where "d" is the distance function defined by the metric. By the triangle inequality we have formula_19 and therefore the imbedding is strongly isometric, in the precise sense that internal distance and ambient distance coincide. Such a strongly isometric imbedding is impossible if the ambient space is a Hilbert space, even when "X" is the Riemannian circle (the distance between opposite points must be π, not 2!). We then set formula_20 in the formula above, and define
formula_21 | [
{
"math_id": 0,
"text": "\\mathrm{FillRad}(C\\subset \\mathbb{R}^2) = R."
},
{
"math_id": 1,
"text": "\\varepsilon"
},
{
"math_id": 2,
"text": "U_\\varepsilon C \\subset \\mathbb{R}^2."
},
{
"math_id": 3,
"text": "\\varepsilon>0"
},
{
"math_id": 4,
"text": "U_\\varepsilon C"
},
{
"math_id": 5,
"text": "\\mathrm{FillRad}(C\\subset \\mathbb{R}^2) "
},
{
"math_id": 6,
"text": "\\varepsilon > 0"
},
{
"math_id": 7,
"text": "U_\\varepsilon X\\subset E"
},
{
"math_id": 8,
"text": "\\mathbb{Z}"
},
{
"math_id": 9,
"text": "\\mathbb{Z}_2"
},
{
"math_id": 10,
"text": "H_n(X;A)\\simeq A"
},
{
"math_id": 11,
"text": "\n\\mathrm{FillRad}(X\\subset E) = \\inf \\left\\{ \\varepsilon > 0 \\mid \\iota_\\varepsilon([X])=0\\in H_n(U_\\varepsilon X) \\right\\},\n"
},
{
"math_id": 12,
"text": "\\iota_\\varepsilon"
},
{
"math_id": 13,
"text": "L^\\infty(X)"
},
{
"math_id": 14,
"text": "\\|\\cdot\\|"
},
{
"math_id": 15,
"text": "x\\in X"
},
{
"math_id": 16,
"text": "f_x\\in L^\\infty(X)"
},
{
"math_id": 17,
"text": "f_x(y) = d(x,y)"
},
{
"math_id": 18,
"text": "y\\in X"
},
{
"math_id": 19,
"text": "d(x,y) = \\| f_x - f_y \\|,"
},
{
"math_id": 20,
"text": "E= L^\\infty(X)"
},
{
"math_id": 21,
"text": "\\mathrm{FillRad}(X)=\\mathrm{FillRad} \\left( X\\subset\nL^{\\infty}(X) \\right)."
},
{
"math_id": 22,
"text": "\\mathrm{FillRad} M\\ge \\frac{\\mathrm{InjRad} M}{2(\\dim M+2)}."
}
] | https://en.wikipedia.org/wiki?curid=12868362 |
1286849 | Correspondence theorem | In group theory, the correspondence theorem (also the lattice theorem, and variously and ambiguously the third and fourth isomorphism theorem) states that if formula_0 is a normal subgroup of a group formula_1, then there exists a bijection from the set of all subgroups formula_2 of formula_1 containing formula_0, onto the set of all subgroups of the quotient group formula_3. Loosely speaking, the structure of the subgroups of formula_3 is exactly the same as the structure of the subgroups of formula_1 containing formula_0, with formula_0 collapsed to the identity element.
Specifically, if
"G" is a group,
formula_4, a normal subgroup of "G",
formula_5, the set of all subgroups "A" of "G" that contain "N", and
formula_6, the set of all subgroups of "G"/"N",
then there is a bijective map formula_7 such that
formula_8 for all formula_9
One further has that if "A" and "B" are in formula_10, then:
formula_11 if and only if formula_12;
if formula_11, then formula_13, where formula_14 is the index of "A" in "B";
formula_15 where formula_16 is the subgroup generated by formula_17
formula_18, and
"A" is a normal subgroup of "G" if and only if formula_19 is a normal subgroup of formula_3.
This list is far from exhaustive. In fact, most properties of subgroups are preserved in their images under the bijection onto subgroups of a quotient group.
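For a small worked example (added here for illustration), take "G" = Z/12Z and "N" = {0, 4, 8}, the subgroup of order 3, so that "G"/"N" ≅ Z/4Z. The bijection of the theorem then matches the three subgroups of "G" containing "N" with the three subgroups of the quotient:

\[
\begin{array}{ccc}
\{0,4,8\} = N & \longleftrightarrow & \{\bar 0\} \\
\{0,2,4,6,8,10\} = \langle 2 \rangle & \longleftrightarrow & \{\bar 0, \bar 2\} \\
\mathbb{Z}/12\mathbb{Z} = G & \longleftrightarrow & \mathbb{Z}/4\mathbb{Z}
\end{array}
\]

Inclusions and indices are preserved: for instance, the index of ⟨2⟩ in "G" is 2, which equals the index of {0̄, 2̄} in Z/4Z.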
More generally, there is a monotone Galois connection formula_20 between the lattice of subgroups of formula_1 (not necessarily containing formula_0) and the lattice of subgroups of formula_3: the lower adjoint of a subgroup formula_21 of formula_1 is given by formula_22 and the upper adjoint of a subgroup formula_23 of formula_3 is given by formula_24. The associated closure operator on subgroups of formula_1 is formula_25; the associated kernel operator on subgroups of formula_3 is the identity.
Similar results hold for rings, modules, vector spaces, and algebras. More generally an analogous result that concerns congruence relations instead of normal subgroups holds for any algebraic structure.
References.
| [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "G/N"
},
{
"math_id": 4,
"text": "N \\triangleleft G"
},
{
"math_id": 5,
"text": "\\mathcal{G} = \\{ A \\mid N \\subseteq A < G \\}"
},
{
"math_id": 6,
"text": "\\mathcal{N} = \\{ S \\mid S < G/N \\}"
},
{
"math_id": 7,
"text": "\\phi: \\mathcal{G} \\to \\mathcal{N}"
},
{
"math_id": 8,
"text": "\\phi(A) = A/N"
},
{
"math_id": 9,
"text": "A \\in \\mathcal{G}."
},
{
"math_id": 10,
"text": "\\mathcal{G}"
},
{
"math_id": 11,
"text": "A \\subseteq B"
},
{
"math_id": 12,
"text": "A/N \\subseteq B/N"
},
{
"math_id": 13,
"text": "|B:A| = |B/N:A/N|"
},
{
"math_id": 14,
"text": "|B:A|"
},
{
"math_id": 15,
"text": "\\langle A,B \\rangle / N = \\left\\langle A/N, B/N \\right\\rangle,"
},
{
"math_id": 16,
"text": "\\langle A, B \\rangle"
},
{
"math_id": 17,
"text": "A\\cup B;"
},
{
"math_id": 18,
"text": "(A \\cap B)/N = A/N \\cap B/N"
},
{
"math_id": 19,
"text": "A/N"
},
{
"math_id": 20,
"text": "(f^*, f_*)"
},
{
"math_id": 21,
"text": "H"
},
{
"math_id": 22,
"text": "f^*(H) = HN/N"
},
{
"math_id": 23,
"text": "K/N"
},
{
"math_id": 24,
"text": "f_*(K/N) = K"
},
{
"math_id": 25,
"text": "\\bar H = HN"
}
] | https://en.wikipedia.org/wiki?curid=1286849 |
12873646 | Porous set | Mathematical concept for metric spaces
In mathematics, a porous set is a concept in the study of metric spaces. Like the concepts of meagre and measure zero sets, a porous set can be considered "sparse" or "lacking bulk"; however, porous sets are not equivalent to either meagre sets or measure zero sets, as shown below.
Definition.
Let ("X", "d") be a complete metric space and let "E" be a subset of "X". Let "B"("x", "r") denote the closed ball in ("X", "d") with centre "x" ∈ "X" and radius "r" > 0. "E" is said to be porous if there exist constants 0 < "α" < 1 and "r"0 > 0 such that, for every 0 < "r" ≤ "r"0 and every "x" ∈ "X", there is some point "y" ∈ "X" with
formula_0
A subset of "X" is called "σ"-porous if it is a countable union of porous subsets of "X".
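For a simple illustration (an example added here, taking "X" = R with the usual metric), a single point "E" = {"p"} is porous with "α" = 1/4 and any "r"0 > 0: every closed ball "B"("x", "r") contains the two disjoint closed balls

\[
B\!\left(x - \tfrac{3r}{4}, \tfrac{r}{4}\right)
\quad\text{and}\quad
B\!\left(x + \tfrac{3r}{4}, \tfrac{r}{4}\right),
\]

and "p" lies in at most one of them, so the other is contained in "B"("x", "r") \ "E". In particular, every countable subset of R, such as the rationals, is "σ"-porous.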
If "E" is merely nowhere dense, then for every "x" ∈ "X" and every "r" > 0 there exist some "y" ∈ "X" and "s" > 0 such that
formula_1
However, if "E" is also porous, then it is possible to take "s" = "αr" (at least for small enough "r"), where 0 < "α" < 1 is a constant that depends only on "E". | [
{
"math_id": 0,
"text": "B(y, \\alpha r) \\subseteq B(x, r) \\setminus E."
},
{
"math_id": 1,
"text": "B(y, s) \\subseteq B(x, r) \\setminus E."
}
] | https://en.wikipedia.org/wiki?curid=12873646 |
1287577 | Vertex operator algebra | Algebra used in 2D conformal field theories and string theory
In mathematics, a vertex operator algebra (VOA) is an algebraic structure that plays an important role in two-dimensional conformal field theory and string theory. In addition to physical applications, vertex operator algebras have proven useful in purely mathematical contexts such as monstrous moonshine and the geometric Langlands correspondence.
The related notion of vertex algebra was introduced by Richard Borcherds in 1986, motivated by a construction of an infinite-dimensional Lie algebra due to Igor Frenkel. In the course of this construction, one employs a Fock space that admits an action of vertex operators attached to elements of a lattice. Borcherds formulated the notion of vertex algebra by axiomatizing the relations between the lattice vertex operators, producing an algebraic structure that allows one to construct new Lie algebras by following Frenkel's method.
The notion of vertex operator algebra was introduced as a modification of the notion of vertex algebra, by Frenkel, James Lepowsky, and Arne Meurman in 1988, as part of their project to construct the moonshine module. They observed that many vertex algebras that appear 'in nature' carry an action of the Virasoro algebra, and satisfy a bounded-below property with respect to an energy operator. Motivated by this observation, they added the Virasoro action and bounded-below property as axioms.
We now have post-hoc motivation for these notions from physics, together with several interpretations of the axioms that were not initially known. Physically, the vertex operators arising from holomorphic field insertions at points in two-dimensional conformal field theory admit operator product expansions when insertions collide, and these satisfy precisely the relations specified in the definition of vertex operator algebra. Indeed, the axioms of a vertex operator algebra are a formal algebraic interpretation of what physicists call chiral algebras (not to be confused with the more precise notion with the same name in mathematics) or "algebras of chiral symmetries", where these symmetries describe the Ward identities satisfied by a given conformal field theory, including conformal invariance. Other formulations of the vertex algebra axioms include Borcherds's later work on singular commutative rings, algebras over certain operads on curves introduced by Huang, Kriz, and others, D-module-theoretic objects called chiral algebras introduced by Alexander Beilinson and Vladimir Drinfeld and factorization algebras, also introduced by Beilinson and Drinfeld.
Important basic examples of vertex operator algebras include the lattice VOAs (modeling lattice conformal field theories), VOAs given by representations of affine Kac–Moody algebras (from the WZW model), the Virasoro VOAs, which are VOAs corresponding to representations of the Virasoro algebra, and the moonshine module "V"♮, which is distinguished by its monster symmetry. More sophisticated examples such as affine W-algebras and the chiral de Rham complex on a complex manifold arise in geometric representation theory and mathematical physics.
Formal definition.
Vertex algebra.
A vertex algebra is a collection of data that satisfy certain axioms.
formula_24
Axioms.
These data are required to satisfy the following axioms:
formula_29
formula_30
Equivalent formulations of locality axiom.
The locality axiom has several equivalent formulations in the literature, e.g., Frenkel–Lepowsky–Meurman introduced the Jacobi identity:
formula_31
where we define the formal delta series by:
formula_32
Borcherds initially used the following two identities: for any vectors "u", "v", and "w", and integers "m" and "n" we have
formula_33
and
formula_34.
He later gave a more expansive version that is equivalent but easier to use: for any vectors "u", "v", and "w", and integers "m", "n", and "q" we have
formula_35
Finally, there is a formal function version of locality: For any formula_36, there is an element
formula_37
such that formula_38 and formula_39 are the corresponding expansions of formula_40 in formula_41 and formula_42.
Vertex operator algebra.
A vertex operator algebra is a vertex algebra equipped with a conformal element formula_43, such that the vertex operator formula_44 is the weight two Virasoro field formula_45:
formula_46
and satisfies the following properties:
A homomorphism of vertex algebras is a map of the underlying vector spaces that respects the additional identity, translation, and multiplication structure. Homomorphisms of vertex operator algebras have "weak" and "strong" forms, depending on whether they respect conformal vectors.
Commutative vertex algebras.
A vertex algebra formula_0 is commutative if all vertex operators formula_17 commute with each other. This is equivalent to the property that all products formula_56 lie in formula_57, or that formula_58. Thus, an alternative definition for a commutative vertex algebra is one in which all vertex operators formula_17 are regular at formula_59.
Given a commutative vertex algebra, the constant terms of multiplication endow the vector space with a commutative and associative ring structure, the vacuum vector formula_53 is a unit and formula_5 is a derivation. Hence the commutative vertex algebra equips formula_0 with the structure of a commutative unital algebra with derivation. Conversely, any commutative ring formula_0 with derivation formula_5 has a canonical vertex algebra structure, where we set formula_60, so that formula_61 restricts to a map formula_62 which is the multiplication map formula_63 with formula_64 the algebra product. If the derivation formula_5 vanishes, we may set formula_65 to obtain a vertex operator algebra concentrated in degree zero.
Any finite-dimensional vertex algebra is commutative.
Thus even the smallest examples of noncommutative vertex algebras require significant introduction.
Basic properties.
The translation operator formula_5 in a vertex algebra induces infinitesimal symmetries on the product structure, and satisfies the following properties:
For a vertex operator algebra, the other Virasoro operators satisfy similar properties:
formula_75
given in the definition also expands to formula_76 in formula_77.
The associativity property of a vertex algebra follows from the fact that the commutator of formula_17 and formula_78 is annihilated by a finite power of formula_79, i.e., one can expand it as a finite linear combination of derivatives of the formal delta function in formula_80, with coefficients in formula_81.
Reconstruction: Let formula_0 be a vertex algebra, and let formula_82 be a set of vectors, with corresponding fields formula_83. If formula_0 is spanned by monomials in the positive weight coefficients of the fields (i.e., finite products of operators formula_84 applied to formula_53, where formula_23 is negative), then we may write the operator product of such a monomial as a normally ordered product of divided power derivatives of fields (here, normal ordering means polar terms on the left are moved to the right). Specifically,
formula_85
More generally, if one is given a vector space formula_0 with an endomorphism formula_5 and vector formula_53, and one assigns to a set of vectors formula_86 a set of fields formula_83 that are mutually local, whose positive weight coefficients generate formula_0, and that satisfy the identity and translation conditions, then the previous formula describes a vertex algebra structure.
Operator product expansion.
In vertex algebra theory, due to associativity, we can abuse notation to write, for formula_87
formula_88
This is the operator product expansion. Equivalently,
formula_89
Since the normal ordered part is regular in formula_90 and formula_91, this can be written more in line with physics conventions as
formula_92
where the equivalence relation formula_93 denotes equivalence up to regular terms.
Commonly used OPEs.
Here some OPEs frequently found in conformal field theory are recorded.
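Two standard examples, written out here for illustration rather than as an exhaustive list, are the OPE of a Heisenberg (free boson) field "b"("z") with itself, in the normalization ["b"n, "b"m] = "n" δn,−m used in the next section, and the OPE of a Virasoro field "T"("z") of central charge "c" with itself:

\[
b(z)\,b(w) \sim \frac{1}{(z-w)^{2}},
\qquad
T(z)\,T(w) \sim \frac{c/2}{(z-w)^{4}} + \frac{2\,T(w)}{(z-w)^{2}} + \frac{\partial_w T(w)}{z-w},
\]

where, as above, the relation \sim denotes equality up to terms that are regular at "z" = "w".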
Examples from Lie algebras.
The basic examples come from infinite-dimensional Lie algebras.
Heisenberg vertex operator algebra.
A basic example of a noncommutative vertex algebra is the rank 1 free boson, also called the Heisenberg vertex operator algebra. It is "generated" by a single vector "b", in the sense that by applying the coefficients of the field "b"("z") := "Y"("b","z") to the vector "1", we obtain a spanning set. The underlying vector space is the infinite-variable polynomial ring formula_94, where for positive formula_23, formula_95 acts obviously by multiplication, and formula_96 acts as formula_97. The action of "b"0 is multiplication by zero, producing the "momentum zero" Fock representation "V"0 of the Heisenberg Lie algebra (generated by "b"n for integers "n", with commutation relations ["b"n,"b"m]="n" δn,–m), induced by the trivial representation of the subalgebra spanned by "b"n, n ≥ 0.
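Concretely, in the standard mode convention (written out here for illustration), the field attached to "b" is the formal series

\[
b(z) = Y(b,z) = \sum_{n \in \mathbb{Z}} b_{n}\, z^{-n-1},
\]

whose coefficients are the Heisenberg generators "b"n with the commutation relations ["b"n, "b"m] = "n" δn,−m noted above.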
The Fock space "V"0 can be made into a vertex algebra by the following definition of the state-operator map on a basis formula_98 with each formula_99,
formula_100
where formula_101 denotes normal ordering of an operator formula_102. The vertex operators may also be written as a functional of a multivariable function f as:
formula_103
if we understand that each term in the expansion of f is normal ordered.
The rank "n" free boson is given by taking an "n"-fold tensor product of the rank 1 free boson. For any vector "b" in "n"-dimensional space, one has a field "b"("z") whose coefficients are elements of the rank "n" Heisenberg algebra, whose commutation relations have an extra inner product term: ["b"n,"c"m]="n" (b,c) δn,–m.
The Heisenberg vertex operator algebra has a one-parameter family of conformal vectors: for each parameter formula_104, the conformal vector formula_105 is given by
formula_106
with central charge formula_107.
When formula_108, there is the following formula for the Virasoro character:
formula_109
This is the generating function for partitions, and is also written as "q"1/24 times the weight −1/2 modular form 1/η (the reciprocal of the Dedekind eta function). The rank "n" free boson then has an "n" parameter family of Virasoro vectors, and when those parameters are zero, the character is "q""n"/24 times the weight −"n"/2 modular form η−"n".
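The relation to the Dedekind eta function can be made explicit by the standard identity

\[
\prod_{n \geq 1} \frac{1}{1 - q^{n}} \;=\; \frac{q^{1/24}}{\eta(q)},
\qquad
\eta(q) = q^{1/24} \prod_{n \geq 1} \bigl(1 - q^{n}\bigr),
\]

so the character of the rank "n" free boson (with the parameters set to zero) is the "n"th power of this series, as stated above.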
Virasoro vertex operator algebra.
Virasoro vertex operator algebras are important for two reasons: First, the conformal element in a vertex operator algebra canonically induces a homomorphism from a Virasoro vertex operator algebra, so they play a universal role in the theory. Second, they are intimately connected to the theory of unitary representations of the Virasoro algebra, and these play a major role in conformal field theory. In particular, the unitary Virasoro minimal models are simple quotients of these vertex algebras, and their tensor products provide a way to combinatorially construct more complicated vertex operator algebras.
The Virasoro vertex operator algebra is defined as an induced representation of the Virasoro algebra: If we choose a central charge "c", there is a unique one-dimensional module for the subalgebra C[z]∂z + "K" for which "K" acts by "c"Id, and C[z]∂z acts trivially, and the corresponding induced module is spanned by polynomials in "L"–n = –z−n–1∂z as "n" ranges over integers greater than 1. The module then has partition function
formula_110.
This space has a vertex operator algebra structure, where the vertex operators are defined by:
formula_111
and formula_112. The fact that the Virasoro field "L(z)" is local with respect to itself can be deduced from the formula for its self-commutator:
formula_113
where "c" is the central charge.
Given a vertex algebra homomorphism from a Virasoro vertex algebra of central charge "c" to any other vertex algebra, the vertex operator attached to the image of ω automatically satisfies the Virasoro relations, i.e., the image of ω is a conformal vector. Conversely, any conformal vector in a vertex algebra induces a distinguished vertex algebra homomorphism from some Virasoro vertex operator algebra.
The Virasoro vertex operator algebras are simple, except when "c" has the form 1–6("p"–"q")2/"pq" for coprime integers "p","q" strictly greater than 1 – this follows from Kac's determinant formula. In these exceptional cases, one has a unique maximal ideal, and the corresponding quotient is called a minimal model. When "p" = "q"+1, the vertex algebras are unitary representations of Virasoro, and their modules are known as discrete series representations. They play an important role in conformal field theory in part because they are unusually tractable, and for small "p", they correspond to well-known statistical mechanics systems at criticality, e.g., the Ising model, the tri-critical Ising model, the three-state Potts model, etc. By work of Weiqang Wang concerning fusion rules, we have a full description of the tensor categories of unitary minimal models. For example, when "c"=1/2 (Ising), there are three irreducible modules with lowest "L"0-weight 0, 1/2, and 1/16, and its fusion ring is Z["x","y"]/("x"2–1, "y"2–"x"–1, "xy"–"y").
Affine vertex algebra.
By replacing the Heisenberg Lie algebra with an untwisted affine Kac–Moody Lie algebra (i.e., the universal central extension of the loop algebra on a finite-dimensional simple Lie algebra), one may construct the vacuum representation in much the same way as the free boson vertex algebra is constructed. This algebra arises as the current algebra of the Wess–Zumino–Witten model, which produces the anomaly that is interpreted as the central extension.
Concretely, pulling back the central extension
formula_114
along the inclusion formula_115 yields a split extension, and the vacuum module is induced from the one-dimensional representation of the latter on which a central basis element acts by some chosen constant called the "level". Since central elements can be identified with invariant inner products on the finite type Lie algebra formula_116, one typically normalizes the level so that the Killing form has level twice the dual Coxeter number. Equivalently, level one gives the inner product for which the longest root has norm 2. This matches the loop algebra convention, where levels are discretized by third cohomology of simply connected compact Lie groups.
By choosing a basis "J"a of the finite type Lie algebra, one may form a basis of the affine Lie algebra using "J"a"n" = "J"a "t""n" together with a central element "K". By reconstruction, we can describe the vertex operators by normal ordered products of derivatives of the fields
formula_117
When the level is non-critical, i.e., the inner product is not minus one half of the Killing form, the vacuum representation has a conformal element, given by the Sugawara construction. For any choice of dual bases "J"a, "J"a with respect to the level 1 inner product, the conformal element is
formula_118
and yields a vertex operator algebra whose central charge is formula_119. At critical level, the conformal structure is destroyed, since the denominator is zero, but one may produce operators "L""n" for "n" ≥ –1 by taking a limit as "k" approaches criticality.
Modules.
Much like ordinary rings, vertex algebras admit a notion of module, or representation. Modules play an important role in conformal field theory, where they are often called sectors. A standard assumption in the physics literature is that the full Hilbert space of a conformal field theory decomposes into a sum of tensor products of left-moving and right-moving sectors:
formula_120
That is, a conformal field theory has a vertex operator algebra of left-moving chiral symmetries, a vertex operator algebra of right-moving chiral symmetries, and the sectors moving in a given direction are modules for the corresponding vertex operator algebra.
Definition.
Given a vertex algebra "V" with multiplication "Y", a "V"-module is a vector space "M" equipped with an action "Y"M: "V" ⊗ "M" → "M"(("z")), satisfying the following conditions:
(Identity) "Y"M(1,z) = IdM
(Associativity, or Jacobi identity) For any "u", "v" ∈ "V", "w" ∈ "M", there is an element
formula_121
such that "Y"M("u","z")"Y"M("v","x")"w" and "Y"M("Y"("u","z"–"x")"v","x")"w" are the corresponding expansions of formula_40 in "M"(("z"))(("x")) and "M"(("x"))(("z"–"x")).
Equivalently, the following "Jacobi identity" holds:
formula_122
The modules of a vertex algebra form an abelian category. When working with vertex operator algebras, the previous definition is sometimes given the name "weak formula_0-module", and genuine "V"-modules must respect the conformal structure given by the conformal vector formula_54. More precisely, they are required to satisfy the additional condition that "L"0 acts semisimply with finite-dimensional eigenspaces and eigenvalues bounded below in each coset of Z. Work of Huang, Lepowsky, Miyamoto, and Zhang has shown at various levels of generality that modules of a vertex operator algebra admit a fusion tensor product operation, and form a braided tensor category.
When the category of "V"-modules is semisimple with finitely many irreducible objects, the vertex operator algebra "V" is called rational. Rational vertex operator algebras satisfying an additional finiteness hypothesis (known as Zhu's "C"2-cofiniteness condition) are known to be particularly well-behaved, and are called "regular". For example, Zhu's 1996 modular invariance theorem asserts that the characters of modules of a regular VOA form a vector-valued representation of formula_123. In particular, if a VOA is "holomorphic", that is, its representation category is equivalent to that of vector spaces, then its partition function is formula_123-invariant up to a constant. Huang showed that the category of modules of a regular VOA is a modular tensor category, and its fusion rules satisfy the Verlinde formula.
Heisenberg algebra modules.
Modules of the Heisenberg algebra can be constructed as Fock spaces formula_124 for formula_104 which are induced representations of the Heisenberg Lie algebra, given by a vacuum vector formula_125 satisfying formula_126 for formula_127, formula_128, and being acted on freely by the negative modes formula_95 for formula_129. The space can be written as formula_130. Every irreducible, formula_131-graded Heisenberg algebra module with gradation bounded below is of this form.
These are used to construct lattice vertex algebras, which as vector spaces are direct sums of Heisenberg modules, when the image of formula_61 is extended appropriately to module elements.
The module category is not semisimple, since one may induce a representation of the abelian Lie algebra where "b"0 acts by a nontrivial Jordan block. For the rank "n" free boson, one has an irreducible module "V"λ for each vector λ in complex "n"-dimensional space. Each vector "b" ∈ Cn yields the operator "b"0, and the Fock space "V"λ is distinguished by the property that each such "b"0 acts as scalar multiplication by the inner product ("b", λ).
Twisted modules.
Unlike ordinary rings, vertex algebras admit a notion of twisted module attached to an automorphism. For an automorphism σ of order "N", the action has the form "V" ⊗ "M" → "M"(("z"1/N)), with the following monodromy condition: if "u" ∈ "V" satisfies σ "u" = exp(2π"ik"/"N")"u", then "u"n = 0 unless "n" satisfies "n"+"k"/"N" ∈ Z (there is some disagreement about signs among specialists). Geometrically, twisted modules can be attached to branch points on an algebraic curve with a ramified Galois cover. In the conformal field theory literature, twisted modules are called twisted sectors, and are intimately connected with string theory on orbifolds.
Additional examples.
Vertex operator algebra defined by an even lattice.
The lattice vertex algebra construction was the original motivation for defining vertex algebras. It is constructed by taking a sum of irreducible modules for the Heisenberg algebra corresponding to lattice vectors, and defining a multiplication operation by specifying intertwining operators between them. That is, if Λ is an even lattice (if the lattice is not even, the structure obtained is instead a vertex superalgebra), the lattice vertex algebra "V"Λ decomposes into free bosonic modules as:
formula_132
Lattice vertex algebras are canonically attached to double covers of even integral lattices, rather than the lattices themselves. While each such lattice has a unique lattice vertex algebra up to isomorphism, the vertex algebra construction is not functorial, because lattice automorphisms have an ambiguity in lifting.
The double covers in question are uniquely determined up to isomorphism by the following rule: elements have the form ±eα for lattice vectors "α" ∈ Λ (i.e., there is a map to Λ sending eα to α that forgets signs), and multiplication satisfies the relations "e"α"e"β = (–1)(α,β)"e"β"e"α. Another way to describe this is that given an even lattice Λ, there is a unique (up to coboundary) normalised cocycle "ε"("α", "β") with values ±1 such that (−1)("α","β") = "ε"("α", "β") "ε"("β", "α"), where the normalization condition is that ε(α, 0) = ε(0, α) = 1 for all "α" ∈ Λ. This cocycle induces a central extension of Λ by a group of order 2, and we obtain a twisted group ring C"ε"[Λ] with basis "eα" ("α" ∈ Λ), and multiplication rule "eαeβ" = "ε"("α", "β")"e""α"+"β" – the cocycle condition on ε ensures associativity of the ring.
The vertex operator attached to lowest weight vector vλ in the Fock space Vλ is
formula_133
where zλ is a shorthand for the linear map that takes any element of the α-Fock space Vα to the monomial "z"("λ","α"). The vertex operators for other elements of the Fock space are then determined by reconstruction.
As in the case of the free boson, one has a choice of conformal vector, given by an element "s" of the vector space Λ ⊗ C, but the condition that the extra Fock spaces have integer "L"0 eigenvalues constrains the choice of "s": for an orthonormal basis xi, the vector 1/2 "x"i,12 + "s"2 must satisfy ("s", "λ") ∈ Z for all λ ∈ Λ, i.e., "s" lies in the dual lattice.
If the even lattice Λ is generated by its "root vectors" (those satisfying (α, α)=2), and any two root vectors are joined by a chain of root vectors with consecutive inner products non-zero then the vertex operator algebra is the unique simple quotient of the vacuum module of the affine Kac–Moody algebra of the corresponding simply laced simple Lie algebra at level one. This is known as the Frenkel–Kac (or Frenkel–Kac–Segal) construction, and is based on the earlier construction by Sergio Fubini and Gabriele Veneziano of the tachyonic vertex operator in the dual resonance model. Among other features, the zero modes of the vertex operators corresponding to root vectors give a construction of the underlying simple Lie algebra, related to a presentation originally due to Jacques Tits. In particular, one obtains a construction of all ADE type Lie groups directly from their root lattices. And this is commonly considered the simplest way to construct the 248-dimensional group "E"8.
Monster vertex algebra.
The monster vertex algebra formula_134 (also called the "moonshine module") is the key to Borcherds's proof of the Monstrous moonshine conjectures. It was constructed by Frenkel, Lepowsky, and Meurman in 1988. It is notable because its character is the j-invariant with no constant term, formula_135, and its automorphism group is the monster group. It is constructed by orbifolding the lattice vertex algebra constructed from the Leech lattice by the order 2 automorphism induced by reflecting the Leech lattice in the origin. That is, one forms the direct sum of the Leech lattice VOA with the twisted module, and takes the fixed points under an induced involution. Frenkel, Lepowsky, and Meurman conjectured in 1988 that formula_134 is the unique holomorphic vertex operator algebra with central charge 24, and partition function formula_135. This conjecture is still open.
Chiral de Rham complex.
Malikov, Schechtman, and Vaintrob showed that by a method of localization, one may canonically attach a bcβγ (boson–fermion superfield) system to a smooth complex manifold. This complex of sheaves has a distinguished differential, and the global cohomology is a vertex superalgebra. Ben-Zvi, Heluani, and Szczesny showed that a Riemannian metric on the manifold induces an "N"=1 superconformal structure, which is promoted to an "N"=2 structure if the metric is Kähler and Ricci-flat, and a hyperkähler structure induces an "N"=4 structure. Borisov and Libgober showed that one may obtain the two-variable elliptic genus of a compact complex manifold from the cohomology of the Chiral de Rham complex. If the manifold is Calabi–Yau, then this genus is a weak Jacobi form.
Vertex algebra associated to a surface defect.
A vertex algebra can arise as a subsector of a higher-dimensional quantum field theory which localizes to a two real-dimensional submanifold of the space on which the higher dimensional theory is defined. A prototypical example is the construction of Beem, Lemos, Liendo, Peelaers, Rastelli, and van Rees, which associates a vertex algebra to any 4d "N"=2 superconformal field theory. This vertex algebra has the property that its character coincides with the Schur index of the 4d superconformal theory. When the theory admits a weak coupling limit, the vertex algebra has an explicit description as a BRST reduction of a bcβγ system.
Vertex operator superalgebras.
By allowing the underlying vector space to be a superspace (i.e., a Z/2Z-graded vector space formula_136) one can define a "vertex superalgebra" by the same data as a vertex algebra, with 1 in "V"+ and "T" an even operator. The axioms are essentially the same, but one must incorporate suitable signs into the locality axiom, or one of the equivalent formulations. That is, if "a" and "b" are homogeneous, one compares "Y"("a","z")"Y"("b","w") with ε"Y"("b","w")"Y"("a","z"), where ε is –1 if both "a" and "b" are odd and 1 otherwise. If in addition there is a Virasoro element ω in the even part of "V"2, and the usual grading restrictions are satisfied, then "V" is called a "vertex operator superalgebra".
One of the simplest examples is the vertex operator superalgebra generated by a single free fermion ψ. As a Virasoro representation, it has central charge 1/2, and decomposes as a direct sum of Ising modules of lowest weight 0 and 1/2. One may also describe it as a spin representation of the Clifford algebra on the quadratic space "t"1/2C["t","t"−1]("dt")1/2 with residue pairing. The vertex operator superalgebra is holomorphic, in the sense that all of its modules are direct sums of copies of itself, i.e., the module category is equivalent to the category of vector spaces.
The tensor square of the free fermion is called the free charged fermion, and by boson–fermion correspondence, it is isomorphic to the lattice vertex superalgebra attached to the odd lattice Z. This correspondence has been used by Date–Jimbo–Kashiwara–Miwa to construct soliton solutions to the KP hierarchy of nonlinear PDEs.
Superconformal structures.
The Virasoro algebra has some supersymmetric extensions that naturally appear in superconformal field theory and superstring theory. The "N"=1, 2, and 4 superconformal algebras are of particular importance.
Infinitesimal holomorphic superconformal transformations of a supercurve (with one even local coordinate "z" and "N" odd local coordinates θ1, ..., θN) are generated by the coefficients of a super-stress–energy tensor "T"(z, θ1, ..., θN).
When "N"=1, "T" has odd part given by a Virasoro field "L"("z"), and even part given by a field
formula_137
subject to commutation relations
formula_138
formula_139
By examining the symmetry of the operator products, one finds that there are two possibilities for the field "G": the indices "n" are either all integers, yielding the Ramond algebra, or all half-integers, yielding the Neveu–Schwarz algebra. These algebras have unitary discrete series representations at central charge
formula_140
and unitary representations for all "c" greater than 3/2, with lowest weight "h" only constrained by "h"≥ 0 for Neveu–Schwarz and "h" ≥ "c"/24 for Ramond.
An "N"=1 superconformal vector in a vertex operator algebra "V" of central charge "c" is an odd element τ ∈ "V" of weight 3/2, such that
formula_141
"G"−1/2τ = ω, and the coefficients of "G"("z") yield an action of the "N"=1 Neveu–Schwarz algebra at central charge "c".
For "N"=2 supersymmetry, one obtains even fields "L"("z") and "J"("z"), and odd fields "G"+(z) and "G"−(z). The field "J"("z") generates an action of the Heisenberg algebras (described by physicists as a "U"(1) current). There are both Ramond and Neveu–Schwarz "N"=2 superconformal algebras, depending on whether the indexing on the "G" fields is integral or half-integral. However, the "U"(1) current gives rise to a one-parameter family of isomorphic superconformal algebras interpolating between Ramond and Neveu–Schwartz, and this deformation of structure is known as spectral flow. The unitary representations are given by discrete series with central charge "c" = 3-6/"m" for integers "m" at least 3, and a continuum of lowest weights for "c" > 3.
An "N"=2 superconformal structure on a vertex operator algebra is a pair of odd elements τ+, τ− of weight 3/2, and an even element μ of weight 1 such that τ± generate "G"±(z), and μ generates "J"("z").
For "N"=3 and 4, unitary representations only have central charges in a discrete family, with "c"=3"k"/2 and 6"k", respectively, as "k" ranges over positive integers.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "1\\in V"
},
{
"math_id": 2,
"text": "|0\\rangle"
},
{
"math_id": 3,
"text": "\\Omega"
},
{
"math_id": 4,
"text": "T:V\\rightarrow V"
},
{
"math_id": 5,
"text": "T"
},
{
"math_id": 6,
"text": "Y:V\\otimes V\\rightarrow V((z))"
},
{
"math_id": 7,
"text": "V((z))"
},
{
"math_id": 8,
"text": " \\cdot_n : u \\otimes v \\mapsto u_n v"
},
{
"math_id": 9,
"text": "n \\in \\mathbb{Z}"
},
{
"math_id": 10,
"text": " u_n \\in \\mathrm{End}(V)"
},
{
"math_id": 11,
"text": "v"
},
{
"math_id": 12,
"text": "N"
},
{
"math_id": 13,
"text": "u_n v = 0"
},
{
"math_id": 14,
"text": "n < N"
},
{
"math_id": 15,
"text": "V\\rightarrow \\mathrm{End}(V)[[z^{\\pm 1}]]"
},
{
"math_id": 16,
"text": "u\\in V"
},
{
"math_id": 17,
"text": "Y(u,z)"
},
{
"math_id": 18,
"text": "z^{-n-1}"
},
{
"math_id": 19,
"text": "u_{n}"
},
{
"math_id": 20,
"text": "\\mathrm{End}(V)[[z^{\\pm 1}]]"
},
{
"math_id": 21,
"text": "A(z) = \\sum_{n \\in \\mathbb{Z}}A_n z^n, A_n \\in \\mathrm{End}(V)"
},
{
"math_id": 22,
"text": "v \\in V, A_n v = 0"
},
{
"math_id": 23,
"text": "n"
},
{
"math_id": 24,
"text": "u \\otimes v \\mapsto Y(u,z)v = \\sum_{n \\in \\mathbf{Z}} u_n v z^{-n-1}."
},
{
"math_id": 25,
"text": "u\\in V\\,,\\,Y(1,z)u=u"
},
{
"math_id": 26,
"text": "\\,Y(u,z)1\\in u+zV[[z]]"
},
{
"math_id": 27,
"text": "T(1)=0"
},
{
"math_id": 28,
"text": "u,v\\in V"
},
{
"math_id": 29,
"text": "[T,Y(u,z)]v = TY(u,z)v - Y(u,z)Tv = \\frac{d}{dz}Y(u,z)v"
},
{
"math_id": 30,
"text": " (z-x)^N Y(u, z) Y(v, x) = (z-x)^N Y(v, x) Y(u, z)."
},
{
"math_id": 31,
"text": "\\forall u,v, w \\in V : \\qquad z^{-1}\\delta\\left(\\frac{y-x}{z}\\right)Y(u,x)Y(v,y)w - z^{-1}\\delta\\left(\\frac{-y+x}{z}\\right)Y(v,y)Y(u,x)w = y^{-1}\\delta\\left(\\frac{x+z}{y}\\right)Y(Y(u,z)v,y)w,"
},
{
"math_id": 32,
"text": "\\delta\\left(\\frac{y-x}{z}\\right) := \\sum_{s \\geq 0, r \\in \\mathbf{Z}} \\binom{r}{s} (-1)^s y^{r-s}x^s z^{-r}."
},
{
"math_id": 33,
"text": "(u_m (v))_n (w) = \\sum_{i \\geq 0} (-1)^i \\binom{m}{i} \\left (u_{m-i} (v_{n+i} (w)) - (-1)^m v_{m+n-i} (u_i (w)) \\right)"
},
{
"math_id": 34,
"text": " u_m v=\\sum_{i\\geq 0}(-1)^{m+i+1}\\frac{T^{i}}{i!}v_{m+i}u "
},
{
"math_id": 35,
"text": "\\sum_{i \\in \\mathbf{Z}} \\binom{m}{i} \\left(u_{q+i} (v) \\right )_{m+n-i} (w) = \\sum_{i\\in \\mathbf{Z}} (-1)^i \\binom{q}{i} \\left (u_{m+q-i} \\left(v_{n+i} (w) \\right ) - (-1)^q v_{n+q-i} \\left (u_{m+i} (w) \\right ) \\right)"
},
{
"math_id": 36,
"text": "u,v,w\\in V"
},
{
"math_id": 37,
"text": "X(u,v,w;z,x) \\in V[[z,x]] \\left[z^{-1}, x^{-1}, (z-x)^{-1} \\right]"
},
{
"math_id": 38,
"text": "Y(u,z)Y(v,x)w"
},
{
"math_id": 39,
"text": "Y(v,x)Y(u,z)w"
},
{
"math_id": 40,
"text": "X(u,v,w;z,x)"
},
{
"math_id": 41,
"text": "V((z))((x))"
},
{
"math_id": 42,
"text": "V((x))((z))"
},
{
"math_id": 43,
"text": "\\omega \\in V"
},
{
"math_id": 44,
"text": "Y(\\omega,z)"
},
{
"math_id": 45,
"text": "L(z)"
},
{
"math_id": 46,
"text": "Y(\\omega, z) = \\sum_{n\\in\\mathbf{Z}} \\omega_{n} {z^{-n-1}} = L(z) = \\sum_{n\\in\\mathbf{Z}} L_n z^{-n-2}"
},
{
"math_id": 47,
"text": "[L_m,L_n]=(m-n)L_{m+n}+\\frac{1}{12}\\delta_{m+n,0}(m^3-m)c\\,\\mathrm{Id}_V"
},
{
"math_id": 48,
"text": "c"
},
{
"math_id": 49,
"text": "L_0"
},
{
"math_id": 50,
"text": "u"
},
{
"math_id": 51,
"text": "u_n v"
},
{
"math_id": 52,
"text": "\\mathrm{deg}(u)+\\mathrm{deg}(v)-n-1"
},
{
"math_id": 53,
"text": "1"
},
{
"math_id": 54,
"text": "\\omega"
},
{
"math_id": 55,
"text": "L_{-1}=T"
},
{
"math_id": 56,
"text": "Y(u,z)v"
},
{
"math_id": 57,
"text": "V[[z]]"
},
{
"math_id": 58,
"text": "Y(u, z) \\in \\operatorname{End}[[z]]"
},
{
"math_id": 59,
"text": "z = 0"
},
{
"math_id": 60,
"text": "Y(u,z)v=u_{-1}vz^0=uv"
},
{
"math_id": 61,
"text": "Y"
},
{
"math_id": 62,
"text": "Y:V \\rightarrow \\operatorname{End}(V)"
},
{
"math_id": 63,
"text": "u \\mapsto u \\cdot"
},
{
"math_id": 64,
"text": "\\cdot"
},
{
"math_id": 65,
"text": "\\omega=0"
},
{
"math_id": 66,
"text": "\\,Y(u,z)1=e^{zT}u"
},
{
"math_id": 67,
"text": "\\,Tu=u_{-2}1"
},
{
"math_id": 68,
"text": "\\,Y(Tu,z)=\\frac{\\mathrm{d}Y(u,z)}{\\mathrm{d}z}"
},
{
"math_id": 69,
"text": "\\,e^{xT}Y(u,z)e^{-xT}=Y(e^{xT}u,z)=Y(u,z+x)"
},
{
"math_id": 70,
"text": "Y(u,z)v=e^{zT}Y(v,-z)u"
},
{
"math_id": 71,
"text": "\\,x^{L_0}Y(u,z)x^{-L_0}=Y(x^{L_0}u,xz)"
},
{
"math_id": 72,
"text": "\\,e^{xL_1}Y(u,z)e^{-xL_1}=Y(e^{x(1-xz)L_1}(1-xz)^{-2L_0}u,z(1-xz)^{-1})"
},
{
"math_id": 73,
"text": "[L_m, Y(u,z)] = \\sum_{k=0}^{m+1} \\binom{m+1}{k} z^k Y(L_{m-k}u, z)"
},
{
"math_id": 74,
"text": "m\\geq -1"
},
{
"math_id": 75,
"text": "X(u,v,w;z,x) \\in V[[z,x]][z^{-1}, x^{-1}, (z-x)^{-1}]"
},
{
"math_id": 76,
"text": "Y(Y(u,z-x)v,x)w"
},
{
"math_id": 77,
"text": "V((x))((z-x))"
},
{
"math_id": 78,
"text": "Y(v,z)"
},
{
"math_id": 79,
"text": "z-x"
},
{
"math_id": 80,
"text": "(z-x)"
},
{
"math_id": 81,
"text": "\\mathrm{End}(V)"
},
{
"math_id": 82,
"text": "J_a"
},
{
"math_id": 83,
"text": "J^a(z)\\in \\mathrm{End}(V)[[z^{\\pm 1}]]"
},
{
"math_id": 84,
"text": "J^{a}_{n}"
},
{
"math_id": 85,
"text": "Y(J^{a_1}_{n_1+1}J^{a_2}_{n_2+1}...J^{a_k}_{n_k+1}1, z) = :\\frac{\\partial^{n_1}}{\\partial z^{n_1}}\\frac{J^{a_1}(z)}{n_1!}\\frac{\\partial^{n_2}}{\\partial z^{n_2}}\\frac{J^{a_2}(z)}{n_2!} \\cdots \\frac{\\partial^{n_k}}{\\partial z^{n_k}}\\frac{J^{a_k}(z)}{n_k!}:"
},
{
"math_id": 86,
"text": "J^a"
},
{
"math_id": 87,
"text": "A, B, C \\in V,"
},
{
"math_id": 88,
"text": "Y(A, z)Y(B,w)C = \\sum_{n \\in \\mathbb{Z}}\\frac{Y(A_{(n)}\\cdot B, w)}{(z-w)^{n+1}}C."
},
{
"math_id": 89,
"text": "Y(A, z)Y(B,w) = \\sum_{n \\geq 0}\\frac{Y(A_{(n)}\\cdot B, w)}{(z-w)^{n+1}} + :Y(A,z)Y(B,w):."
},
{
"math_id": 90,
"text": "z"
},
{
"math_id": 91,
"text": "w"
},
{
"math_id": 92,
"text": "Y(A, z)Y(B,w) \\sim \\sum_{n \\geq 0}\\frac{Y(A_{(n)}\\cdot B, w)}{(z-w)^{n+1}},"
},
{
"math_id": 93,
"text": "\\sim"
},
{
"math_id": 94,
"text": "\\mathbb{C}[b_{-1}, b_{-2}, \\cdots]"
},
{
"math_id": 95,
"text": "b_{-n}"
},
{
"math_id": 96,
"text": "b_n"
},
{
"math_id": 97,
"text": "n\\partial_{b_{-n}}"
},
{
"math_id": 98,
"text": "b_{j_1}b_{j_2}...b_{j_k}"
},
{
"math_id": 99,
"text": "j_i < 0"
},
{
"math_id": 100,
"text": "Y( b_{j_1}b_{j_2}...b_{j_k}, z) := \\frac{1}{(-j_1 - 1)!(-j_2 - 1)!\\cdots (-j_k - 1)!}:\\partial^{-j_1 - 1}b(z)\\partial^{-j_2 - 1}b(z)...\\partial^{-j_k - 1}b(z):"
},
{
"math_id": 101,
"text": ":\\mathcal{O}:"
},
{
"math_id": 102,
"text": "\\mathcal{O}"
},
{
"math_id": 103,
"text": " Y[f,z] \\equiv :f\\left(\\frac{b(z)}{0!},\\frac{b'(z)}{1!},\\frac{b''(z)}{2!},...\\right): "
},
{
"math_id": 104,
"text": "\\lambda \\in \\mathbb{C}"
},
{
"math_id": 105,
"text": "\\omega_\\lambda"
},
{
"math_id": 106,
"text": "\\omega_\\lambda = \\frac{1}{2}b_{-1}^2 + \\lambda b_{-2},"
},
{
"math_id": 107,
"text": "c_\\lambda = 1 - 12\\lambda^2"
},
{
"math_id": 108,
"text": "\\lambda = 0"
},
{
"math_id": 109,
"text": "Tr_V q^{L_0} := \\sum_{n \\in \\mathbf{Z}} \\dim V_n q^n = \\prod_{n \\geq 1} (1-q^n)^{-1}"
},
{
"math_id": 110,
"text": "Tr_V q^{L_0} = \\sum_{n \\in \\mathbf{R}} \\dim V_n q^n = \\prod_{n \\geq 2} (1-q^n)^{-1}"
},
{
"math_id": 111,
"text": "Y(L_{-n_1-2}L_{-n_2-2}...L_{-n_k-2}|0\\rangle,z) \\equiv \\frac{1}{n_1!n_2!..n_k!}:\\partial^{n_1}L(z)\\partial^{n_2}L(z)...\\partial^{n_k}L(z):"
},
{
"math_id": 112,
"text": "\\omega = L_{-2}|0\\rangle"
},
{
"math_id": 113,
"text": "[L(z),L(x)] =\\left(\\frac{\\partial}{\\partial x}L(x)\\right)w^{-1}\\delta \\left(\\frac{z}{x}\\right)-2L(x)x^{-1}\\frac{\\partial}{\\partial z}\\delta \\left(\\frac{z}{x}\\right)-\\frac{1}{12}cx^{-1}\\left(\\frac{\\partial}{\\partial z}\\right)^3\\delta \\left(\\frac{z}{x}\\right)"
},
{
"math_id": 114,
"text": "0 \\to \\mathbb{C} \\to \\hat{\\mathfrak{g}} \\to \\mathfrak{g}[t,t^{-1}] \\to 0"
},
{
"math_id": 115,
"text": "\\mathfrak{g}[t] \\to \\mathfrak{g}[t,t^{-1}]"
},
{
"math_id": 116,
"text": "\\mathfrak{g}"
},
{
"math_id": 117,
"text": "J^a(z) = \\sum_{n=-\\infty}^\\infty J^a_n z^{-n-1} = \\sum_{n=-\\infty}^\\infty (J^a t^n) z^{-n-1}."
},
{
"math_id": 118,
"text": "\\omega = \\frac{1}{2(k+h^\\vee)} \\sum_a J_{a,-1} J^a_{-1} 1"
},
{
"math_id": 119,
"text": "k \\cdot \\dim \\mathfrak{g}/(k+h^\\vee)"
},
{
"math_id": 120,
"text": "\\mathcal{H} \\cong \\bigoplus_{i \\in I} M_i \\otimes \\overline{M_i}"
},
{
"math_id": 121,
"text": "X(u,v,w;z,x) \\in M[[z,x]][z^{-1}, x^{-1}, (z-x)^{-1}]"
},
{
"math_id": 122,
"text": "z^{-1}\\delta\\left(\\frac{y-x}{z}\\right)Y^M(u,x)Y^M(v,y)w - z^{-1}\\delta\\left(\\frac{-y+x}{z}\\right)Y^M(v,y)Y^M(u,x)w = y^{-1}\\delta\\left(\\frac{x+z}{y}\\right)Y^M(Y(u,z)v,y)w."
},
{
"math_id": 123,
"text": "\\mathrm{SL}(2, \\mathbb{Z})"
},
{
"math_id": 124,
"text": "\\pi_\\lambda"
},
{
"math_id": 125,
"text": "v_\\lambda"
},
{
"math_id": 126,
"text": "b_nv_\\lambda = 0"
},
{
"math_id": 127,
"text": "n > 0"
},
{
"math_id": 128,
"text": "b_0v_\\lambda = 0"
},
{
"math_id": 129,
"text": "n>0"
},
{
"math_id": 130,
"text": "\\mathbb{C}[b_{-1}, b_{-2}, \\cdots]v_\\lambda"
},
{
"math_id": 131,
"text": "\\mathbb{Z}"
},
{
"math_id": 132,
"text": "V_\\Lambda \\cong \\bigoplus_{\\lambda \\in \\Lambda} V_\\lambda"
},
{
"math_id": 133,
"text": "Y(v_\\lambda,z) = e_\\lambda :\\exp \\int \\lambda(z): = e_\\lambda z^\\lambda \\exp \\left (\\sum_{n<0} \\lambda_n \\frac{z^{-n}}{n} \\right )\\exp \\left (\\sum_{n>0} \\lambda_n \\frac{z^{-n}}{n} \\right ),"
},
{
"math_id": 134,
"text": "V^\\natural"
},
{
"math_id": 135,
"text": "j(\\tau) - 744"
},
{
"math_id": 136,
"text": " V=V_+\\oplus V_-"
},
{
"math_id": 137,
"text": "G(z) = \\sum_n G_n z^{-n-3/2}"
},
{
"math_id": 138,
"text": "[G_m,L_n] = (m-n/2)G_{m+n}"
},
{
"math_id": 139,
"text": "[G_m,G_n] = (m-n)L_{m+n} + \\delta_{m,-n} \\frac{4m^2+1}{12}c"
},
{
"math_id": 140,
"text": "\\hat{c} = \\frac{2}{3}c = 1-\\frac{8}{m(m+2)} \\quad m \\geq 3"
},
{
"math_id": 141,
"text": "Y(\\tau,z) = G(z) = \\sum_{m \\in \\mathbb{Z}+1/2} G_n z^{-n-3/2},"
},
{
"math_id": 142,
"text": "j_*j^*(A \\boxtimes A) \\to \\Delta_* A"
}
] | https://en.wikipedia.org/wiki?curid=1287577 |
1287679 | Proca action | Action of a massive abelian gauge field
In physics, specifically field theory and particle physics, the Proca action describes a massive spin-1 field of mass "m" in Minkowski spacetime. The corresponding equation is a relativistic wave equation called the Proca equation. The Proca action and equation are named after Romanian physicist Alexandru Proca.
The Proca equation is involved in the Standard Model, where it describes the three massive vector bosons, i.e. the Z and W bosons.
This article uses the (+−−−) metric signature and tensor index notation in the language of 4-vectors.
Lagrangian density.
The field involved is a complex 4-potential formula_0, where formula_1 is a kind of generalized electric potential and formula_2 is a generalized magnetic potential. The field formula_3 transforms like a complex four-vector.
The Lagrangian density is given by:
formula_4
where formula_5 is the speed of light in vacuum, formula_6 is the reduced Planck constant, and formula_7 is the 4-gradient.
Equation.
The Euler–Lagrange equation of motion for this case, also called the Proca equation, is:
formula_8
which is equivalent to the conjunction of
formula_9
with (in the massive case)
formula_10
which may be called a generalized Lorenz gauge condition. For non-zero sources, with all fundamental constants included, the field equation is:
formula_11
When formula_12, the source-free equations reduce to Maxwell's equations without charge or current, and the field equation above reduces to the inhomogeneous Maxwell equation. This Proca field equation is closely related to the Klein–Gordon equation, because it is second order in space and time.
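Under the generalized Lorenz condition above, each component satisfies a Klein–Gordon-type equation, so a plane-wave ansatz gives the dispersion relation ω2 = "c"2"k"2 + ("mc"2/ℏ)2 (implied by, though not written out in, the equations above). A minimal numerical sketch, with an illustrative mass roughly that of the Z boson:

```python
import numpy as np

c = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J*s

def proca_omega(k, m):
    """omega(k) from the plane-wave dispersion relation
    omega**2 = (c*k)**2 + (m*c**2/hbar)**2 implied by the Proca equation."""
    return np.sqrt((c * k) ** 2 + (m * c ** 2 / hbar) ** 2)

# Illustrative value only: roughly the Z-boson mass (~91.19 GeV/c^2) in kg.
m_Z = 91.19e9 * 1.602176634e-19 / c ** 2
k = np.array([0.0, 1e17, 1e18, 1e19])   # wavenumbers in 1/m
print(proca_omega(k, m_Z))               # tends to c*k once k >> m*c/hbar
```

For "m" → 0 the relation reduces to the massless dispersion ω = "ck", matching the Maxwell limit described above.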
In vector calculus notation, the source-free equations are:
formula_13
formula_14
where formula_15 is the d'Alembert operator.
Gauge fixing.
The Proca action is the gauge-fixed version of the Stueckelberg action via the Higgs mechanism. Quantizing the Proca action requires the use of second class constraints.
If formula_16, the Proca action and field equation are not invariant under the gauge transformations of electromagnetism
formula_17
where formula_18 is an arbitrary function.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " B^\\mu = \\left (\\frac{\\phi}{c}, \\mathbf{A} \\right)"
},
{
"math_id": 1,
"text": " \\phi "
},
{
"math_id": 2,
"text": " \\mathbf{A} "
},
{
"math_id": 3,
"text": "B^\\mu"
},
{
"math_id": 4,
"text": "\\mathcal{L}=-\\frac{1}{2}(\\partial_\\mu B_\\nu^*-\\partial_\\nu B_\\mu^*)(\\partial^\\mu B^\\nu-\\partial^\\nu B^\\mu)+\\frac{m^2 c^2}{\\hbar^2}B_\\nu^* B^\\nu."
},
{
"math_id": 5,
"text": " c "
},
{
"math_id": 6,
"text": " \\hbar "
},
{
"math_id": 7,
"text": " \\partial_{\\mu}"
},
{
"math_id": 8,
"text": "\\partial_\\mu(\\partial^\\mu B^\\nu - \\partial^\\nu B^\\mu)+\\left(\\frac{mc}{\\hbar}\\right)^2 B^\\nu=0"
},
{
"math_id": 9,
"text": "\\left[\\partial_\\mu \\partial^\\mu+ \\left(\\frac{mc}{\\hbar}\\right)^2\\right]B^\\nu=0"
},
{
"math_id": 10,
"text": "\\partial_\\mu B^\\mu=0 \\!"
},
{
"math_id": 11,
"text": "c{{\\mu }_{0}}{{j}^{\\nu }}=\\left( {{g}^{\\mu \\nu }}\\left( {{\\partial }_{\\sigma }}{{\\partial }^{\\sigma }}+{{m}^{2}}{{c}^{2}}/{{\\hbar }^{2}} \\right)-{{\\partial }^{\\nu }}{{\\partial }^{\\mu }} \\right){{B}_{\\mu }}"
},
{
"math_id": 12,
"text": " m = 0 "
},
{
"math_id": 13,
"text": "\\Box \\phi - \\frac{\\partial }{\\partial t} \\left(\\frac{1}{c^2}\\frac{\\partial \\phi}{\\partial t} + \\nabla\\cdot\\mathbf{A}\\right) =-\\left(\\frac{mc}{\\hbar}\\right)^2\\phi \\!"
},
{
"math_id": 14,
"text": "\\Box \\mathbf{A} + \\nabla \\left(\\frac{1}{c^2}\\frac{\\partial \\phi}{\\partial t} + \\nabla\\cdot\\mathbf{A}\\right) =-\\left(\\frac{mc}{\\hbar}\\right)^2\\mathbf{A}\\!"
},
{
"math_id": 15,
"text": "\\Box "
},
{
"math_id": 16,
"text": "m \\neq 0"
},
{
"math_id": 17,
"text": "B^\\mu \\rightarrow B^\\mu - \\partial^\\mu f "
},
{
"math_id": 18,
"text": " f "
}
] | https://en.wikipedia.org/wiki?curid=1287679 |
12880414 | Appleton–Hartree equation | Mathematical expression
The Appleton–Hartree equation, sometimes also referred to as the Appleton–Lassen equation, is a mathematical expression that describes the refractive index for electromagnetic wave propagation in a cold magnetized plasma. The Appleton–Hartree equation was developed independently by several different scientists, including Edward Victor Appleton, Douglas Hartree and German radio physicist H. K. Lassen. Lassen's work, completed two years prior to Appleton's and five years prior to Hartree's, included a more thorough treatment of collisional plasma, but, because it was published only in German, it has not been widely read in the English-speaking world of radio physics. Further, regarding the derivation by Appleton, it was noted in the historical study by Gillmor that Wilhelm Altar (while working with Appleton) first calculated the dispersion relation in 1926.
Equation.
The dispersion relation can be written as an expression for the frequency (squared), but it is also common to write it as an expression for the index of refraction:
formula_0
The full equation is typically given as follows:
formula_1
or, alternatively, with the damping term set to zero (formula_2) and rearranging terms:
formula_3
Definition of terms:
formula_4: complex refractive index
formula_5: imaginary unit
formula_6
formula_7
formula_8
formula_9: electron collision frequency
formula_10: angular frequency
formula_11: ordinary frequency (cycles per second, or Hertz)
formula_12: electron plasma frequency
formula_13: electron gyro frequency
formula_14: permittivity of free space
formula_15: ambient magnetic field strength
formula_16: electron charge
formula_17: electron mass
formula_18: angle between the ambient magnetic field vector and the wave vector
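The full expression can be evaluated directly for both signs of the root. A minimal Python sketch (function and parameter names are ours; the example values are arbitrary illustrations, not measured data):

```python
import numpy as np

def appleton_hartree_n2(X, Y, Z, theta, sign=+1):
    """Complex n**2 from the full Appleton-Hartree equation.

    X = (omega_0/omega)**2, Y = omega_H/omega, Z = nu/omega,
    theta = angle between the wave vector and B0 (radians).
    sign=+1 / sign=-1 selects the two modes given by the +/- root.
    """
    D = 1 - X - 1j * Z
    root = np.sqrt(0.25 * Y**4 * np.sin(theta)**4
                   + Y**2 * np.cos(theta)**2 * D**2 + 0j)
    denom = 1 - 1j * Z - 0.5 * Y**2 * np.sin(theta)**2 / D + sign * root / D
    return 1 - X / denom

# Example: X = 0.36, Y = 0.28, weak collisions Z = 0.01, theta = 45 degrees.
theta = np.radians(45.0)
for sign in (+1, -1):
    print(sign, appleton_hartree_n2(0.36, 0.28, 0.01, theta, sign))
```

The imaginary part of the resulting refractive index describes the collisional damping of the wave.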
Modes of propagation.
The presence of the formula_19 sign in the Appleton–Hartree equation gives two separate solutions for the refractive index. For propagation perpendicular to the magnetic field, i.e., formula_20, the '+' sign represents the "ordinary mode," and the '−' sign represents the "extraordinary mode." For propagation parallel to the magnetic field, i.e., formula_21, the '+' sign represents a left-hand circularly polarized mode, and the '−' sign represents a right-hand circularly polarized mode. See the article on electromagnetic electron waves for more detail.
formula_22 is the wave vector, which points in the direction of propagation.
Reduced forms.
Propagation in a collisionless plasma.
If the electron collision frequency formula_9 is negligible compared to the wave frequency of interest formula_23, the plasma can be said to be "collisionless." That is, given the condition
formula_24,
we have
formula_25,
so we can neglect the formula_26 terms in the equation. The Appleton–Hartree equation for a cold, collisionless plasma is therefore,
formula_27
Quasi-longitudinal propagation in a collisionless plasma.
If we further assume that the wave propagation is primarily in the direction of the magnetic field, i.e., formula_28, we can neglect the formula_29 term above. Thus, for quasi-longitudinal propagation in a cold, collisionless plasma, the Appleton–Hartree equation becomes,
formula_30
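A quick numerical comparison shows the quasi-longitudinal form agreeing closely with the full collisionless expression at small propagation angles (a sketch; the parameter values are arbitrary):

```python
import numpy as np

def n2_full(X, Y, theta, sign=+1):
    """Full collisionless Appleton-Hartree expression (Z = 0)."""
    s2, c2 = np.sin(theta) ** 2, np.cos(theta) ** 2
    root = np.sqrt(0.25 * Y**4 * s2**2 + Y**2 * c2 * (1 - X) ** 2)
    return 1 - X / (1 - 0.5 * Y**2 * s2 / (1 - X) + sign * root / (1 - X))

def n2_quasi_longitudinal(X, Y, theta, sign=+1):
    """Quasi-longitudinal approximation (theta near 0)."""
    s2 = np.sin(theta) ** 2
    return 1 - X / (1 - 0.5 * Y**2 * s2 / (1 - X) + sign * Y * np.cos(theta))

X, Y = 0.3, 0.2
for deg in (2.0, 10.0, 30.0):
    th = np.radians(deg)
    print(deg, n2_full(X, Y, th), n2_quasi_longitudinal(X, Y, th))
```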
References.
<templatestyles src="Refbegin/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n^2 = \\left(\\frac{ck}{\\omega}\\right)^2."
},
{
"math_id": 1,
"text": "n^2 = 1 - \\frac{X}{1 - iZ - \\frac{\\frac{1}{2}Y^2\\sin^2\\theta}{1 - X - iZ} \\pm \\frac{1}{1 - X - iZ}\\left(\\frac{1}{4}Y^4\\sin^4\\theta + Y^2\\cos^2\\theta\\left(1 - X - iZ\\right)^2\\right)^{1/2}}"
},
{
"math_id": 2,
"text": "Z = 0"
},
{
"math_id": 3,
"text": "n^2 = 1 - \\frac{X\\left(1-X\\right)}{1 - X - {\\frac{1}{2}Y^2\\sin^2\\theta} \\pm \\left(\\left(\\frac{1}{2}Y^2\\sin^2\\theta\\right)^2 + \\left(1-X\\right)^2Y^2\\cos^2\\theta\\right)^{1/2}}"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "i=\\sqrt{-1}"
},
{
"math_id": 6,
"text": "X = \\frac{\\omega_0^2}{\\omega^2}"
},
{
"math_id": 7,
"text": "Y = \\frac{\\omega_H}{\\omega}"
},
{
"math_id": 8,
"text": "Z = \\frac{\\nu}{\\omega}"
},
{
"math_id": 9,
"text": "\\nu"
},
{
"math_id": 10,
"text": "\\omega = 2\\pi f"
},
{
"math_id": 11,
"text": "f"
},
{
"math_id": 12,
"text": "\\omega_0 = 2\\pi f_0 = \\sqrt{\\frac{Ne^2}{\\epsilon_0 m}}"
},
{
"math_id": 13,
"text": "\\omega_H = 2\\pi f_H = \\frac{B_0 |e|}{m}"
},
{
"math_id": 14,
"text": "\\epsilon_0"
},
{
"math_id": 15,
"text": "B_0"
},
{
"math_id": 16,
"text": "e"
},
{
"math_id": 17,
"text": "m"
},
{
"math_id": 18,
"text": "\\theta"
},
{
"math_id": 19,
"text": "\\pm"
},
{
"math_id": 20,
"text": "\\mathbf k\\perp \\mathbf B_0"
},
{
"math_id": 21,
"text": "\\mathbf k\\parallel \\mathbf B_0"
},
{
"math_id": 22,
"text": "\\mathbf k"
},
{
"math_id": 23,
"text": "\\omega"
},
{
"math_id": 24,
"text": "\\nu \\ll \\omega"
},
{
"math_id": 25,
"text": "Z = \\frac{\\nu}{\\omega} \\ll 1"
},
{
"math_id": 26,
"text": "Z"
},
{
"math_id": 27,
"text": "n^2 = 1 - \\frac{X}{1 - \\frac{\\frac{1}{2}Y^2\\sin^2\\theta}{1 - X} \\pm \\frac{1}{1 - X}\\left(\\frac{1}{4}Y^4\\sin^4\\theta + Y^2\\cos^2\\theta\\left(1 - X\\right)^2\\right)^{1/2}}"
},
{
"math_id": 28,
"text": "\\theta \\approx 0"
},
{
"math_id": 29,
"text": "Y^4\\sin^4\\theta"
},
{
"math_id": 30,
"text": "n^2 = 1 - \\frac{X}{1 - \\frac{\\frac{1}{2}Y^2\\sin^2\\theta}{1 - X} \\pm Y\\cos\\theta}"
}
] | https://en.wikipedia.org/wiki?curid=12880414 |
12882 | Gallon | Units of volume
<templatestyles src="Template:Infobox/styles-images.css" />
The gallon is a unit of volume in British imperial units and United States customary units. Three different versions are in current use: the imperial gallon (about 4.546 litres), used in the United Kingdom and some Commonwealth and Caribbean countries; the US liquid gallon (about 3.785 litres), used in the United States and some Latin American and Caribbean countries; and the US dry gallon (about 4.405 litres), which is the least used of the three.
There are two pints in a quart and four quarts in a gallon. Different sizes of pints account for the different sizes of the imperial and US gallons.
The IEEE standard symbol for both US (liquid) and imperial gallon is gal, not to be confused with the gal (symbol: Gal), a CGS unit of acceleration.
Definitions.
The gallon currently has one definition in the imperial system, and two definitions (liquid and dry) in the US customary system. Historically, there were many definitions and redefinitions.
English system gallons.
There were a number of systems of liquid measurements in the United Kingdom prior to the 19th century.
Imperial gallon.
The British imperial gallon (frequently called simply "gallon") is defined as exactly 4.54609 dm3 (4.54609 litres). It is used in some Commonwealth countries, and until 1976 was defined as the volume of 10 pounds of water at a specified temperature. There are four imperial quarts in a gallon, two imperial pints in a quart, and there are 20 imperial fluid ounces in an imperial pint, yielding 160 fluid ounces in an imperial gallon.
US liquid gallon.
The US liquid gallon (frequently called simply "gallon") is legally defined as 231 cubic inches, which is exactly 3.785411784 litres. A US liquid gallon of water weighs about 8.34 pounds (3.78 kg) at room temperature, and the US gallon is about 16.7% smaller than the imperial gallon. There are four quarts in a gallon, two pints in a quart and 16 US fluid ounces in a US pint, which makes the US fluid ounce equal to 1/128 of a US gallon.
In order to overcome the effects of expansion and contraction with temperature when using a gallon to specify a quantity of material for purposes of trade, it is common to define the temperature at which the material will occupy the specified volume. For example, the volumes of petroleum products and alcoholic beverages are both referenced to 60 °F (15.6 °C) in government regulations.
US dry gallon.
Since the dry measure is one-eighth of a US "Winchester" bushel of 2150.42 cubic inches, it is equal to exactly 268.8025 cubic inches, which is about 4.405 litres. The US dry gallon is not used in commerce, and is also not listed in the relevant statute, which jumps from the dry pint to the bushel.
Worldwide usage.
Imperial gallon.
As of 2021, the imperial gallon continues to be used as the standard petrol unit on 10 Caribbean island groups.
All 12 of the Caribbean islands use miles per hour for speed limits signage, and drive on the left side of the road.
The United Arab Emirates ceased selling petrol by the imperial gallon in 2010 and switched to the litre, with Guyana following suit in 2013. In 2014, Myanmar switched from the imperial gallon to the litre.
Antigua and Barbuda has proposed switching to selling petrol by litres since 2015.
In the European Union the gallon was removed from the list of legally defined primary units of measure catalogue in the EU directive 80/181/EEC for trading and official purposes, effective from 31 December 1994. Under the directive the gallon could still be used, but only as a supplementary or secondary unit.
As a result of the EU directive Ireland and the United Kingdom passed legislation to replace the gallon with the litre as a primary unit of measure in trade and in the conduct of public business, effective from 31 December 1993, and 30 September 1995 respectively. Though the gallon has ceased to be a primary unit of trade, it can still be legally used in both the UK and Ireland as a supplementary unit. However, barrels and large containers of beer, oil and other fluids are commonly measured in multiples of an imperial gallon.
Miles per imperial gallon is used as the primary fuel economy unit in the United Kingdom and as a supplementary unit in Canada on official documentation.
US liquid gallon.
Other than the United States, petrol is sold by the US gallon in 13 other countries and one US territory.
The latest country to cease using the gallon is El Salvador in June 2021.
The Imperial and US liquid gallon.
Both the US gallon and imperial gallon are used in the Turks and Caicos Islands (due to an increase in tax duties which was disguised by levying the same duty on the US gallon (3.79 L) as was previously levied on the Imperial gallon (4.55 L)) and the Bahamas.
Legacy.
In some parts of the Middle East, such as the United Arab Emirates and Bahrain, 18.9-litre water cooler bottles are marketed as five-gallon bottles.
Relationship to other units.
Both the US liquid and imperial gallon are divided into four quarts ("quart"er gallons), which in turn are divided into two pints, which in turn are divided into two cups (not in customary use outside the US), which in turn are further divided into two gills. Thus, both gallons are equal to four quarts, eight pints, sixteen cups, or thirty-two gills.
The imperial gill is further divided into five fluid ounces, whereas the US gill is divided into four fluid ounces, meaning an imperial fluid ounce is 1/20 of an imperial pint, or 1/160 of an imperial gallon, while a US fluid ounce is 1/16 of a US pint, or 1/128 of a US gallon. Thus, the imperial gallon, quart, pint, cup and gill are approximately 20% larger than their US counterparts, meaning these are not interchangeable, but the imperial fluid ounce is only approximately 4% smaller than the US fluid ounce, meaning these are often used interchangeably.
Historically, a common bottle size for liquor in the US was the "fifth", i.e. one-fifth of a US gallon (or one-sixth of an imperial gallon). While spirit sales in the US were switched to metric measures in 1976, a 750 mL bottle is still sometimes known as a "fifth".
History.
The term derives most immediately from "galun", "galon" in Old Norman French, but the usage was common in several languages, for example in Old French and (bowl) in Old English. This suggests a common origin in Romance Latin, but the ultimate source of the word is unknown.
The gallon originated as the base of systems for measuring wine and beer in England. The sizes of gallon used in these two systems were different from each other: the first was based on the wine gallon (equal in size to the US gallon), and the second on either the ale gallon or the larger imperial gallon.
By the end of the 18th century, there were three definitions of the gallon in common use: the corn gallon, the wine gallon, and the ale gallon.
The "corn" or "dry gallon" is used (along with the dry quart and pint) in the United States for grain and other dry commodities. It is one-eighth of the (Winchester) bushel, originally defined as a cylindrical measure of inches in diameter and 8 inches in depth, which made the bushel . The bushel was later defined to be 2150.42 cubic inches exactly, thus making its gallon exactly (); in previous centuries, there had been a corn gallon of between 271 and 272 cubic inches.
The "wine", "fluid", or "liquid gallon" has been the standard US gallon since the early 19th century. The wine gallon, which some sources relate to the volume occupied by eight medieval merchant pounds of wine, was at one time defined as the volume of a cylinder 6 inches deep and 7 inches in diameter, i.e. . It was redefined during the reign of Queen Anne in 1706 as 231 cubic inches exactly, the earlier definition with π approximated to .
formula_0
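With π taken as 22/7, the cylinder volume works out to exactly 231 cubic inches, as a quick check with exact rational arithmetic confirms (a minimal sketch):

```python
from fractions import Fraction

# Wine gallon as a cylinder: radius 7/2 in, depth 6 in, with pi ~ 22/7.
pi_approx = Fraction(22, 7)
volume = pi_approx * Fraction(7, 2) ** 2 * 6
print(volume)   # 231, i.e. exactly the statutory 231 cubic inches
```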
Although the wine gallon had been used for centuries for import duty purposes, there was no legal standard of it in the Exchequer, while a smaller gallon (224 cu in) was actually in use, requiring this statute; the 231 cubic inch gallon remains the US definition today.
In 1824, Britain adopted a close approximation to the "ale gallon" known as the "imperial gallon", and abolished all other gallons in favour of it. Inspired by the kilogram-litre relationship, the imperial gallon was based on the volume of 10 pounds of distilled water weighed in air with brass weights with the barometer standing at 30 inches of mercury (14.7345 pound-force per square inch) and at a temperature of 62 °F.
In 1963, this definition was refined as the space occupied by 10 pounds of distilled water of density weighed in air of density against weights of density 8.136 g/mL (the original "brass" was refined as the densities of brass alloys vary depending on metallurgical composition), which was calculated as to ten significant figures.
The precise definition of exactly 4.54609 cubic decimetres (4.54609 litres, about 277.42 cubic inches) came after the litre was redefined in 1964. This was adopted shortly afterwards in Canada, and adopted in 1976 in the United Kingdom.
Sizes of gallons.
Historically, gallons of various sizes were used in many parts of Western Europe. In these localities, it has been replaced as the unit of capacity by the litre.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\pi r^2h \\approx \\frac{22}{7}\\times\\left ( \\frac{7 ~ \\mathrm{in}}{2} \\right )^2\\times6 ~ \\mathrm{in} = 231 ~ \\mathrm{in}^3."
}
] | https://en.wikipedia.org/wiki?curid=12882 |
12883 | Gini coefficient | Measure of inequality of a distribution
In economics, the Gini coefficient ( ), also known as the Gini index or Gini ratio, is a measure of statistical dispersion intended to represent the income inequality, the wealth inequality, or the consumption inequality within a nation or a social group. It was developed by Italian statistician and sociologist Corrado Gini.
The Gini coefficient measures the inequality among the values of a frequency distribution, such as levels of income. A Gini coefficient of 0 reflects perfect equality, where all income or wealth values are the same, while a Gini coefficient of 1 (or 100%) reflects maximal inequality among values, a situation where a single individual has all the income while all others have none.
The Gini coefficient was proposed by Corrado Gini as a measure of inequality of income or wealth. For OECD countries in the late 20th century, considering the effect of taxes and transfer payments, the income Gini coefficient ranged between 0.24 and 0.49, with Slovakia being the lowest and Mexico the highest. African countries had the highest pre-tax Gini coefficients in 2008–2009, with South Africa having the world's highest, estimated to be 0.63 to 0.7. However, this figure drops to 0.52 after social assistance is taken into account, and drops again to 0.47 after taxation. The country with the lowest Gini coefficient is Slovakia, with a Gini coefficient of 0.232. The Gini coefficient of the global income in 2005 has been estimated to be between 0.61 and 0.68 by various sources.
There are some issues in interpreting a Gini coefficient, as the same value may result from many different distribution curves. To mitigate this, the demographic structure should be taken into account. Countries with an aging population, or those with an increased birth rate, experience an increasing pre-tax Gini coefficient even if real income distribution for working adults remains constant. Many scholars have devised over a dozen variants of the Gini coefficient.
History.
The Gini coefficient was developed by the Italian statistician Corrado Gini and published in his 1912 paper "Variabilità e mutabilità" ("Variability and Mutability"). Building on the work of American economist Max Lorenz, Gini proposed that the difference between the hypothetical straight line depicting perfect equality, and the actual line depicting people's incomes, be used as a measure of inequality. In this paper, he introduced the concept of simple mean difference as a measure of variability.
He then applied the simple mean difference of observed variables to income and wealth inequality in his work "On the measurement of concentration and variability of characters" in 1914. Here, he presented the concentration ratio, which further developed into the Gini coefficient used today. Secondly, Gini observed that his proposed ratio can also be obtained by improving methods already introduced by Lorenz, Chatelain, or Séailles.
In 1915, Gaetano Pietra introduced a geometrical interpretation relating Gini's proposed ratio to the ratio between the area of observed concentration and the area of maximum concentration. This altered version of the Gini coefficient became the most commonly used inequality index in the following years.
According to data from the OECD, the Gini coefficient was first officially used country-wide in Canada in the 1970s. The Canadian index of income inequality went from 0.303 in 1976 to 0.284 by the end of the 1980s. The OECD has published data for more countries since the start of the 21st century. The Central European countries Slovenia, Czechia, and Slovakia have had the lowest inequality indices of all OECD countries ever since the 2000s. Scandinavian countries have also frequently appeared near the top of the equality rankings in recent decades.
Definition.
The Gini coefficient is an index for the degree of inequality in the distribution of income/wealth, used to estimate how far a country's wealth or income distribution deviates from an equal distribution.
The Gini coefficient is usually defined mathematically based on the Lorenz curve, which plots the proportion of the total income of the population (y-axis) that is cumulatively earned by the bottom "x" of the population (see diagram). The line at 45 degrees thus represents perfect equality of incomes. The Gini coefficient can then be thought of as the ratio of the area that lies between the line of equality and the Lorenz curve (marked "A" in the diagram) over the total area under the line of equality (marked "A" and "B" in the diagram); i.e., "G" = "A"/("A" + "B"). If there are no negative incomes, it is also equal to 2"A" and to 1 − 2"B" due to the fact that "A" + "B" = 0.5.
Assuming non-negative income or wealth for all, the Gini coefficient's theoretical range is from 0 (total equality) to 1 (absolute inequality). This measure is often rendered as a percentage, spanning 0 to 100. However, if negative values are factored in, as in cases of debt, the Gini index could exceed 1. Typically, we presuppose a positive mean or total, precluding a Gini coefficient below zero.
An alternative approach is to define the Gini coefficient as half of the relative mean absolute difference, which is equivalent to the definition based on the Lorenz curve. The mean absolute difference is the average absolute difference of all pairs of items of the population, and the relative mean absolute difference is the mean absolute difference divided by the average, formula_0, to normalize for scale. If "x""i" is the wealth or income of person "i", and there are "n" persons, then the Gini coefficient "G" is given by:
formula_1
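A direct, if O("n"2), evaluation of this definition for a list of incomes (a minimal sketch; the function name and test values are ours):

```python
import numpy as np

def gini_mean_difference(incomes):
    """Gini coefficient as half the relative mean absolute difference:
    G = sum_i sum_j |x_i - x_j| / (2 * n**2 * mean(x))."""
    x = np.asarray(incomes, dtype=float)
    n = x.size
    mad = np.abs(x[:, None] - x[None, :]).sum()   # sum over all ordered pairs
    return mad / (2.0 * n ** 2 * x.mean())

print(gini_mean_difference([1, 1, 1, 1]))   # 0.0   (perfect equality)
print(gini_mean_difference([0, 0, 0, 10]))  # 0.75  (= 1 - 1/n for n = 4)
```

The quadratic cost is irrelevant for small samples; for large samples the sorted-data expression given below under "Alternative expressions" is cheaper.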
When the income (or wealth) distribution is given as a continuous probability density function "p"("x"), the Gini coefficient is again half of the relative mean absolute difference:
formula_2
where formula_3 is the mean of the distribution, and the lower limits of integration may be replaced by zero when all incomes are positive.
Calculation.
While the income distribution of any particular country will not correspond perfectly to the theoretical models, these models can provide a qualitative explanation of the income distribution in a nation given the Gini coefficient.
Example: Two levels of income.
The extreme cases are represented by the most equal possible society in which every person receives the same income ("G" = 0), and the most unequal society (with "N" individuals) where a single person receives 100% of the total income and the remaining "N" − 1 people receive none ("G" = 1 − 1/"N").
A simple case assumes just two levels of income, low and high. If the high income group is a proportion "u" of the population and earns a proportion "f" of all income, then the Gini coefficient is "f" − "u". A more graded distribution with these same values "u" and "f" will always have a higher Gini coefficient than "f" − "u".
For example, if the wealthiest "u =" 20% of the population has "f =" 80% of all income (see Pareto principle), the income Gini coefficient is at least 60%. In another example, if "u =" 1% of the world's population owns "f =" 50% of all wealth, the wealth Gini coefficient is at least 49%.
Alternative expressions.
In some cases, this equation can be applied to calculate the Gini coefficient without direct reference to the Lorenz curve. For example, (taking "y" to indicate the income or wealth of a person or household):
formula_5
This may be simplified to:
formula_6
The Gini coefficient can also be considered as half the relative mean absolute difference. For a random sample "S" with values formula_4, the sample Gini coefficient
formula_7
is a consistent estimator of the population Gini coefficient, but is not in general unbiased. In simplified form:
formula_8
There does not exist a sample statistic that is always an unbiased estimator of the population Gini coefficient.
Discrete probability distribution.
For a discrete probability distribution with probability mass function formula_9 formula_10, where formula_11 is the fraction of the population with income or wealth formula_12, the Gini coefficient is:
formula_13
where
formula_14
If the points with non-zero probabilities are indexed in increasing order formula_15, then:
formula_16
where
formula_17 and formula_18 These formulas are also applicable in the limit, as formula_19
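A short sketch of this calculation, evaluating the Gini coefficient of a discrete distribution both through the cumulative sums above and as half the relative mean absolute difference (names and test values are ours):

```python
import numpy as np

def gini_pmf(values, probs):
    """Gini via cumulative sums S_i = sum_{j<=i} f(y_j) y_j,
    with the values sorted in increasing order and S_0 = 0."""
    order = np.argsort(values)
    y = np.asarray(values, dtype=float)[order]
    f = np.asarray(probs, dtype=float)[order]
    S = np.concatenate(([0.0], np.cumsum(f * y)))
    return 1.0 - np.sum(f * (S[:-1] + S[1:])) / S[-1]

def gini_pmf_pairwise(values, probs):
    """Same quantity as half the relative mean absolute difference."""
    y = np.asarray(values, dtype=float)
    f = np.asarray(probs, dtype=float)
    mu = np.sum(f * y)
    diffs = np.abs(y[:, None] - y[None, :])
    return np.sum(f[:, None] * f[None, :] * diffs) / (2.0 * mu)

y, f = [1.0, 3.0, 10.0], [0.5, 0.3, 0.2]
print(gini_pmf(y, f), gini_pmf_pairwise(y, f))   # both ~0.476
```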
Continuous probability distribution.
When the population is large, the income distribution may be represented by a continuous probability density function "f"("x") where "f"("x") "dx" is the fraction of the population with wealth or income in the interval "dx" about "x". If "F"("x") is the cumulative distribution function for "f"("x"):
formula_20
and "L"("x") is the Lorenz function:
formula_21
then the Lorenz curve "L"("F") may then be represented as a function parametric in "L"("x") and "F"("x") and the value of "B" can be found by integration:
formula_22
The Gini coefficient can also be calculated directly from the cumulative distribution function of the distribution "F"("y"). Defining μ as the mean of the distribution, and specifying that "F"("y") is zero for all negative values, the Gini coefficient is given by:
formula_23
The latter result comes from integration by parts. "(Note that this formula can be applied when there are negative values if the integration is taken from minus infinity to plus infinity.)"
The Gini coefficient may be expressed in terms of the quantile function "Q"("F") "(inverse of the cumulative distribution function: Q(F(x)) = x)"
formula_24
Since the Gini coefficient is independent of scale, if the distribution function can be expressed in the form "f(x,φ,a,b,c...)" where "φ" is a scale factor and "a, b, c..." are dimensionless parameters, then the Gini coefficient will be a function only of "a, b, c...". For example, for the exponential distribution, which is a function of only "x" and a scale parameter, the Gini coefficient is a constant, equal to 1/2.
For some functional forms, the Gini index can be calculated explicitly. For example, if "y" follows a log-normal distribution with the standard deviation of logs equal to formula_25, then formula_26, where formula_27 is the error function (since formula_28, where formula_29 is the cumulative distribution function of a standard normal distribution). In the table below, some examples for probability density functions with support on formula_30 are shown. The Dirac delta distribution represents the case where everyone has the same wealth (or income); it implies no variations between incomes.
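The closed form for the log-normal case can be checked numerically by comparing it with the Gini coefficient of a large simulated sample (a sketch; sample size and seed are arbitrary):

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
sigma = 0.8
y = np.sort(rng.lognormal(mean=0.0, sigma=sigma, size=200_000))

# Sample Gini from sorted data: G = 2*sum(i*y_i)/(n*sum(y)) - (n+1)/n
n = y.size
i = np.arange(1, n + 1)
gini_sample = 2.0 * np.sum(i * y) / (n * np.sum(y)) - (n + 1) / n

print(gini_sample, erf(sigma / 2))   # both close to ~0.43 for sigma = 0.8
```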
Other approaches.
Sometimes the entire Lorenz curve is not known, and only values at certain intervals are given. In that case, the Gini coefficient can be approximated using various techniques for interpolating the missing values of the Lorenz curve. If ("X""k", "Y""k") are the known points on the Lorenz curve, with the "X""k" indexed in increasing order ("X""k" – 1 < "X""k"), so that "X""k" is the cumulative proportion of the population ("X"0 = 0, "X""n" = 1) and "Y""k" is the cumulative proportion of income ("Y"0 = 0, "Y""n" = 1), with the "Y""k" non-decreasing.
If the Lorenz curve is approximated on each interval as a line between consecutive points, then the area B can be approximated with trapezoids and:
formula_34
is the resulting approximation for G. More accurate results can be obtained using other methods to approximate the area B, such as approximating the Lorenz curve with a quadratic function across pairs of intervals or building an appropriately smooth approximation to the underlying distribution function that matches the known data. If the population mean and boundary values for each interval are also known, these can also often be used to improve the accuracy of the approximation.
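A sketch of the trapezoid approximation for grouped data (the quintile shares below are made-up numbers for illustration):

```python
import numpy as np

def gini_from_lorenz_points(X, Y):
    """G ~ 1 - sum_k (X_k - X_{k-1}) * (Y_k + Y_{k-1}), treating the Lorenz
    curve as piecewise linear; X and Y must run from 0 to 1, non-decreasing."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    return 1.0 - np.sum(np.diff(X) * (Y[1:] + Y[:-1]))

X = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]      # cumulative population shares
Y = [0.0, 0.05, 0.15, 0.30, 0.55, 1.0]  # cumulative income shares
print(gini_from_lorenz_points(X, Y))    # about 0.38 for these shares
```

Because the interpolating chords lie on or above a convex Lorenz curve, this piecewise-linear estimate is a lower bound on the true Gini coefficient.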
The Gini coefficient calculated from a sample is a statistic, and its standard error, or confidence intervals for the population Gini coefficient, should be reported. These can be calculated using bootstrap techniques, which are mathematically complicated and computationally demanding even in an era of fast computers. Economist Tomson Ogwang made the process more efficient by setting up a "trick regression model" in which respective income variables in the sample are ranked, with the lowest income being allocated rank 1. The model then expresses the rank (dependent variable) as the sum of a constant "A" and a normal error term whose variance is inversely proportional to "y""k":
formula_35
Thus, "G" can be expressed as a function of the weighted least squares estimate of the constant "A" and that this can be used to speed up the calculation of the jackknife estimate for the standard error. Economist David Giles argued that the standard error of the estimate of "A" can be used to derive the estimate of "G" directly without using a jackknife. This method only requires using ordinary least squares regression after ordering the sample data. The results compare favorably with the estimates from the jackknife with agreement improving with increasing sample size.
However, it has been argued that this depends on the model's assumptions about the error distributions and the independence of error terms. These assumptions are often not valid for real data sets. There is still ongoing debate surrounding this topic.
Guillermina Jasso and Angus Deaton independently proposed the following formula for the Gini coefficient:
formula_36
where formula_37 is mean income of the population, Pi is the income rank P of person i, with income X, such that the richest person receives a rank of 1 and the poorest a rank of "N". This effectively gives higher weight to poorer people in the income distribution, which allows the Gini to meet the Transfer Principle. Note that the Jasso-Deaton formula rescales the coefficient so that its value is one if all the formula_38 are zero except one. Note however Allison's reply on the need to divide by N² instead.
FAO explains another version of the formula.
Generalized inequality indices.
The Gini coefficient and other standard inequality indices reduce to a common form. Perfect equality—the absence of inequality—exists when and only when the inequality ratio, formula_39, equals 1 for all j units in some population (for example, there is perfect income equality when everyone's income formula_40 equals the mean income formula_41, so that formula_42 for everyone). Measures of inequality, then, are measures of the average deviations of the formula_42 from 1; the greater the average deviation, the greater the inequality. Based on these observations the inequality indices have this common form:
formula_43
where "p""j" weights the units by their population share, and "f"("r""j") is a function of the deviation of each unit's "r""j" from 1, the point of equality. The insight of this generalized inequality index is that inequality indices differ because they employ different functions of the distance of the inequality ratios (the "r""j") from 1.
Of income distributions.
Gini coefficients of income are calculated on a market income and a disposable income basis. The Gini coefficient on market income—sometimes referred to as a pre-tax Gini coefficient—is calculated on income before taxes and transfers. It measures inequality in income without considering the effect of taxes and social spending already in place in a country. The Gini coefficient on disposable income—sometimes referred to as the after-tax Gini coefficient—is calculated on income after taxes and transfers. It measures inequality in income after considering the effect of taxes and social spending already in place in a country.
For OECD countries over the 2008–2009 period, the Gini coefficient (pre-taxes and transfers) for a total population ranged between 0.34 and 0.53, with South Korea the lowest and Italy the highest. The Gini coefficient (after-taxes and transfers) for a total population ranged between 0.25 and 0.48, with Denmark the lowest and Mexico the highest. For the United States, the country with the largest population among OECD countries, the pre-tax Gini index was 0.49, and the after-tax Gini index was 0.38 in 2008–2009. The OECD average for total populations in OECD countries was 0.46 for the pre-tax income Gini index and 0.31 for the after-tax income Gini index. Taxes and social spending that were in place in 2008–2009 period in OECD countries significantly lowered effective income inequality, and in general, "European countries—especially Nordic and Continental welfare states—achieve lower levels of income inequality than other countries."
Using the Gini can help quantify differences in welfare and compensation policies and philosophies. However, it should be borne in mind that the Gini coefficient can be misleading when used to make political comparisons between large and small countries or those with different immigration policies (see limitations section).
The Gini coefficient for the entire world has been estimated by various parties to be between 0.61 and 0.68. The graph shows the values expressed as a percentage in their historical development for a number of countries.
Regional income Gini indices.
According to UNICEF, Latin America and the Caribbean region had the highest net income Gini index in the world at 48.3, on an unweighted average basis in 2008. The remaining regional averages were: sub-Saharan Africa (44.2), Asia (40.4), Middle East and North Africa (39.2), Eastern Europe and Central Asia (35.4), and High-income Countries (30.9). Using the same method, the United States is claimed to have a Gini index of 36, while South Africa had the highest income Gini index score of 67.8.
World income Gini index since 1800s.
Taking the income distribution of all human beings, worldwide income inequality has been increasing since the early 19th century. There was a steady increase in the global income inequality Gini score from 1820 to 2002, with a significant increase between 1980 and 2002. This trend appears to have peaked and begun a reversal with rapid economic growth in emerging economies, particularly in the large populations of BRIC countries.
The table below presents the estimated world income Gini coefficients over the last 200 years, as calculated by Milanovic.
More detailed data from similar sources plots a continuous decline since 1988. This is attributed to globalization increasing incomes for billions of poor people, mostly in countries like China and India. Developing countries like Brazil have also improved basic services like health care, education, and sanitation; others like Chile and Mexico have enacted more progressive tax policies.
Of social development.
The Gini coefficient is widely used in fields as diverse as sociology, economics, health science, ecology, engineering, and agriculture. For example, in social sciences and economics, in addition to income Gini coefficients, scholars have published education Gini coefficients and opportunity Gini coefficients.
Education.
Education Gini index estimates the inequality in education for a given population. It is used to discern trends in social development through educational attainment over time. A study across 85 countries by three World Bank economists, Vinod Thomas, Yan Wang, and Xibo Fan, estimated Mali had the highest education Gini index of 0.92 in 1990 (implying very high inequality in educational attainment across the population), while the United States had the lowest education inequality Gini index of 0.14. Between 1960 and 1990, China, India and South Korea had the fastest drop in education inequality Gini Index. They also claim education Gini index for the United States slightly increased over the 1980–1990 period.
Though India's education Gini Index has been falling from 1960 through 1990, most of the population still has not received any education, while 10 percent of the population received more than 40% of the total educational hours in the nation. This means that a large portion of capable children in the country are not receiving the support necessary to allow them to become positive contributors to society, which leads to a deadweight loss for the national economy through underdeveloped and underutilized human capital.
Opportunity.
Similar in concept to the Gini income coefficient, the Gini opportunity coefficient measures inequality in opportunities. The concept builds on Amartya Sen's suggestion that inequality coefficients of social development should be premised on the process of enlarging people's choices and enhancing their capabilities, rather than on the process of reducing income inequality. Kovacevic, in a review of the Gini opportunity coefficient, explained that the coefficient estimates how well a society enables its citizens to achieve success in life where the success is based on a person's choices, efforts and talents, not their background defined by a set of predetermined circumstances at birth, such as gender, race, place of birth, parent's income and circumstances beyond the control of that individual.
In 2003, Roemer reported Italy and Spain exhibited the largest opportunity inequality Gini index amongst advanced economies.
Income mobility.
In 1978, Anthony Shorrocks introduced a measure based on income Gini coefficients to estimate income mobility. This measure, generalized by Maasoumi and Zandvakili, is now generally referred to as Shorrocks index, sometimes as Shorrocks mobility index or Shorrocks rigidity index. It attempts to estimate whether the income inequality Gini coefficient is permanent or temporary and to what extent a country or region enables economic mobility to its people so that they can move from one (e.g., bottom 20%) income quantile to another (e.g., middle 20%) over time. In other words, the Shorrocks index compares inequality of short-term earnings, such as the annual income of households, to inequality of long-term earnings, such as 5-year or 10-year total income for the same households.
Shorrocks index is calculated in several different ways, a common approach being from the ratio of income Gini coefficients between short-term and long-term for the same region or country.
A 2010 study using social security income data for the United States since 1937 and Gini-based Shorrocks indices concludes that income mobility in the United States has had a complicated history, primarily due to the mass influx of women into the American labor force after World War II. Income inequality and income mobility trends have been different for men and women workers between 1937 and the 2000s. When men and women are considered together, the Gini coefficient-based Shorrocks index trends imply that long-term income inequality has been substantially reduced among all workers in the United States in recent decades. Other scholars, using just 1990s data or other short periods, have come to different conclusions. For example, Sastre and Ayala conclude from their study of income Gini coefficient data between 1993 and 1998 for six developed economies that France had the least income mobility, Italy the highest, and the United States and Germany intermediate levels of income mobility over those five years.
Features.
The Gini coefficient has features that make it useful as a measure of dispersion in a population, and inequalities in particular. The coefficient ranges from 0, for perfect equality, to 1, indicating perfect inequality. The Gini is based on the comparison of cumulative proportions of the population against cumulative proportions of income they receive.
Limitations.
Relative, not absolute.
The Gini coefficient is a relative measure. The Gini coefficient of a developing country can rise (due to increasing inequality of income) even when the number of people in absolute poverty decreases. This is because the Gini coefficient measures relative, not absolute, wealth.
Gini coefficients are simple, and this simplicity can lead to oversights and can confuse the comparison of different populations; for example, while both Bangladesh (per capita income of $1,693) and the Netherlands (per capita income of $42,183) had an income Gini coefficient of 0.31 in 2010, the quality of life, economic opportunity and absolute income in these countries are very different, i.e. countries may have identical Gini coefficients, but differ greatly in wealth. Basic necessities may be available to all in a developed economy, while in an undeveloped economy with the same Gini coefficient, basic necessities may be unavailable to most or unequally available due to lower absolute wealth.
Mathematical limitations.
Gini has some mathematical limitations as well. It is not additive and different sets of people cannot be averaged to obtain the Gini coefficient of all the people in the sets.
Even when the total income of a population is the same, in certain situations two countries with different income distributions can have the same Gini index (e.g. cases when income Lorenz Curves cross). Table A illustrates one such situation. Both countries have a Gini coefficient of 0.2, but the average income distributions for household groups are different. As another example, in a population where the lowest 50% of individuals have no income, and the other 50% have equal income, the Gini coefficient is 0.5; whereas for another population where the lowest 75% of people have 25% of income and the top 25% have 75% of the income, the Gini index is also 0.5. Economies with similar incomes and Gini coefficients can have very different income distributions. Bellù and Liberati claim that ranking income inequality between two populations is not always possible based on their Gini indices. Similarly, computational social scientist Fabian Stephany illustrates that income inequality within the population, e.g., in specific socioeconomic groups of same age and education, also remains undetected by conventional Gini indices.
Income Gini can conceal wealth inequality.
A Gini index does not contain information about absolute national or personal incomes. Populations can simultaneously have very low income Gini indices and very high wealth Gini indices. By measuring inequality in income, the Gini ignores the differential efficiency of the use of household income. By ignoring wealth (except as it contributes to income), the Gini can create the appearance of inequality when the people compared are at different stages in their lives. Wealthy countries such as Sweden can show a low Gini coefficient for disposable income of 0.31, thereby appearing equal, yet have a very high Gini coefficient for wealth of 0.79 to 0.86, suggesting an extremely unequal wealth distribution in society. These factors are not assessed in income-based Gini.
Country size and granularity bias.
The Gini index has a downward bias for small populations. Counties, states, or countries with small populations and less diverse economies will tend to report small Gini coefficients. For economically diverse large population groups, a much higher coefficient is expected than for each of their regions. For example, taking the world economy as a whole and the income distribution for all human beings, different scholars estimate the global Gini index to range between 0.61 and 0.68.
As with other inequality coefficients, the Gini coefficient is influenced by the granularity of the measurements. For example, five 20% quantiles (low granularity) will usually yield a lower Gini coefficient than twenty 5% quantiles (high granularity) for the same distribution. Philippe Monfort has shown that using inconsistent or unspecified granularity limits the usefulness of Gini coefficient measurements.
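The granularity effect can be demonstrated numerically. The sketch below estimates the Gini coefficient of the same simulated income sample from five 20% quantiles and from twenty 5% quantiles, using the trapezoid (Lorenz-curve) formula G = 1 − Σ (X_k − X_{k−1})(Y_k + Y_{k−1}); the lognormal sample and the group counts are arbitrary choices for illustration.
```python
import numpy as np

def gini_from_quantiles(incomes, n_groups):
    """Approximate Gini from grouped data via the trapezoid rule on the
    Lorenz curve: G = 1 - sum_k (X_k - X_{k-1}) * (Y_k + Y_{k-1})."""
    x = np.sort(np.asarray(incomes, dtype=float))
    groups = np.array_split(x, n_groups)                   # equal-population groups
    X = np.concatenate(([0.0], np.cumsum([len(g) for g in groups]) / len(x)))
    Y = np.concatenate(([0.0], np.cumsum([g.sum() for g in groups]) / x.sum()))
    return 1.0 - np.sum((X[1:] - X[:-1]) * (Y[1:] + Y[:-1]))

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)   # skewed "incomes"

print(gini_from_quantiles(sample, 5))    # five 20% quantiles: lower estimate
print(gini_from_quantiles(sample, 20))   # twenty 5% quantiles: closer to the full-sample Gini
```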
Changes in population.
Changing income inequality, measured by Gini coefficients, can be due to structural changes in a society such as growing population (increased birth rates, aging populations, emigration, immigration) and income mobility.
Another limitation of the Gini coefficient is that it is not a proper measure of egalitarianism, as it only measures income dispersion. For example, suppose two equally egalitarian countries pursue different immigration policies. In that case, the country accepting a higher proportion of low-income or impoverished migrants will report a higher Gini coefficient and, therefore, may exhibit more income inequality.
Household vs individual.
The Gini coefficient measure gives different results when applied to individuals instead of households, for the same economy and same income distributions. If household data is used, the measured value of income Gini depends on how the household is defined. The comparison is not meaningful when different populations are not measured with consistent definitions. Furthermore, changes to the household income Gini can be driven by changes in household formation, such as increased divorce rates or extended family households splitting into nuclear families.
Deininger and Squire (1996) show that the income Gini coefficient based on individual income rather than household income is different. For example, for the United States, they found that the individual income-based Gini index was 0.35, while for France, 0.43. According to their individual-focused method, in the 108 countries they studied, South Africa had the world's highest Gini coefficient at 0.62, Malaysia had Asia's highest Gini coefficient at 0.5, Brazil the highest at 0.57 in Latin America and the Caribbean region, and Turkey the highest at 0.5 in OECD countries.
Billionaire Thomas Kwok claimed the income Gini coefficient for Hong Kong has been high (0.434 in 2010), in part because of structural changes in its population. Over recent decades, Hong Kong has witnessed increasing numbers of small households, elderly households, and elderly living alone. The combined income is now split into more households. Many older people live separately from their children in Hong Kong. These social changes have caused substantial changes in household income distribution. The income Gini coefficient, claims Kwok, does not discern these structural changes in its society. Household money income distribution for the United States, summarized in Table C of this section, confirms that this issue is not limited to just Hong Kong. According to the US Census Bureau, between 1979 and 2010, the population of the United States experienced structural changes in overall households; the income for all income brackets increased in inflation-adjusted terms, household income distributions shifted into higher income brackets over time, while the income Gini coefficient increased.
Instantaneous inequality vs lifetime inequality.
The Gini coefficient is unable to discern the effects of structural changes in populations. Expanding on the importance of life-span measures, the Gini coefficient as a point-estimate of equality at a certain time ignores life-span changes in income. Typically, increases in the proportion of young or old members of a society will drive apparent changes in equality simply because people generally have lower incomes and wealth when they are young than when they are old. Because of this, factors such as age distribution within a population and mobility within income classes can create the appearance of inequality when none exists once demographic effects are taken into account. Thus a given economy may have a higher Gini coefficient than another at any single point in time, while its Gini coefficient calculated over individuals' lifetime incomes is actually lower than that of the apparently more equal (at a given point in time) economy. Essentially, what matters is not just inequality in any particular year but the composition of the distribution over time.
Benefits and income in kind.
Inaccuracies in assigning monetary value to income in kind reduce the accuracy of the Gini coefficient as a measurement of true inequality.
While taxes and cash transfers are relatively straightforward to account for, other government benefits can be difficult to value. Benefits such as subsidized housing, medical care, and education are difficult to value objectively, as it depends on the quality and extent of the benefit. In absence of a free market, valuing these income transfers as household income is subjective. The theoretical model of the Gini coefficient is limited to accepting correct or incorrect subjective assumptions.
In subsistence-driven and informal economies, people may have significant income in other forms than money, for example, through subsistence farming or bartering. These forms of income tend to accrue to poor segments of populations in emerging and transitional economy countries such as those in sub-Saharan Africa, Latin America, Asia, and Eastern Europe. Informal economy accounts for over half of global employment and as much as 90 percent of employment in some of the poorer sub-Saharan countries with high official Gini inequality coefficients. Schneider et al., in their 2010 study of 162 countries, report about 31.2%, or about $20 trillion, of world's GDP is informal. In developing countries, the informal economy predominates for all income brackets except the richer, urban upper-income bracket populations. Even in developed economies, 8% (United States) to 27% (Italy) of each nation's GDP is informal. The resulting informal income predominates as a livelihood activity for those in the lowest income brackets. The value and distribution of the incomes from informal or underground economy is difficult to quantify, making true income Gini coefficients estimates difficult. Different assumptions and quantifications of these incomes will yield different Gini coefficients.
Alternatives.
Given the limitations of the Gini coefficient, other statistical methods are used in combination or as an alternative measure of population dispersion. For example, "entropy measures" are frequently used (e.g. the Atkinson index, or the Theil index and mean log deviation as special cases of the generalized entropy index). These measures attempt to compare the distribution of resources by intelligent agents in the market with a maximum entropy random distribution, which would occur if these agents acted like non-interacting particles in a closed system following the laws of statistical physics.
Relation to other statistical measures.
There is a summary measure of the diagnostic ability of a binary classifier system that is also called the "Gini coefficient", which is defined as twice the area between the receiver operating characteristic (ROC) curve and its diagonal. It is related to the AUC (Area Under the ROC Curve) measure of performance given by formula_44 and to Mann–Whitney U. Although both Gini coefficients are defined as areas between certain curves and share certain properties, there is no simple direct relationship between the Gini coefficient of statistical dispersion and the Gini coefficient of a classifier.
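A minimal sketch of the classifier-side Gini coefficient, assuming the Mann–Whitney interpretation of AUC (the probability that a randomly chosen positive case is scored above a randomly chosen negative case) and the relation G = 2·AUC − 1; the scores below are made up for illustration.
```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the probability that a random positive case is scored
    higher than a random negative case (ties counted as 1/2)."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

pos_scores = [0.9, 0.8, 0.7, 0.55]   # classifier scores for positive cases
neg_scores = [0.6, 0.4, 0.3, 0.2]    # classifier scores for negative cases

auc = auc_mann_whitney(pos_scores, neg_scores)
gini_classifier = 2.0 * auc - 1.0     # from AUC = (G + 1) / 2
print(auc, gini_classifier)           # 0.9375  0.875
```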
The Gini index is also related to the Pietra index — both of which measure statistical heterogeneity and are derived from the Lorenz curve and the diagonal line.
In certain fields such as ecology, inverse Simpson's index formula_45 is used to quantify diversity, and this should not be confused with the Simpson index formula_46. These indicators are related to Gini. The inverse Simpson index increases with diversity, unlike the Simpson index and Gini coefficient, which decrease with diversity. The Simpson index is in the range [0, 1], where 0 means maximum and 1 means minimum diversity (or heterogeneity). Since diversity indices typically increase with increasing heterogeneity, the Simpson index is often transformed into inverse Simpson, or using the complement formula_47, known as the Gini-Simpson Index.
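For example, the three related indices can be computed from species abundances as follows; the abundance vector is an arbitrary illustration.
```python
import numpy as np

abundances = np.array([50.0, 30.0, 15.0, 5.0])   # individuals per species
p = abundances / abundances.sum()

simpson = np.sum(p ** 2)            # lambda: chance two random individuals match
inverse_simpson = 1.0 / simpson     # 1/lambda: increases with diversity
gini_simpson = 1.0 - simpson        # 1 - lambda: the Gini-Simpson index

print(simpson, inverse_simpson, gini_simpson)
```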
The Lorenz curve is another method of graphical representation of wealth distribution. It was developed 9 years before the Gini coefficient, which quantifies the extent to which the Lorenz curve deviates from the perfect equality line (with slope of 1). The Hoover index (also known as Robin Hood index) presents the percentage of total population's income that would have to be redistributed to make the Gini coefficient equal to 0 (perfect equality).
Gini coefficients for pre-modern societies.
In recent decades, researchers have attempted to estimate Gini coefficients for pre-20th century societies. In the absence of household income surveys and income taxes, scholars have relied on proxy variables. These include wealth taxes in medieval European city states, patterns of landownership in Roman Egypt, variation of the size of houses in societies from ancient Greece to Aztec Mexico, and inheritance and dowries in Babylonian society. Other data does not directly document variations in wealth or income but are known to reflect inequality, such as the ratio of rents to wages or of labor to capital.
Other uses.
Although the Gini coefficient is most popular in economics, it can, in theory, be applied in any field of science that studies a distribution. For example, in ecology, the Gini coefficient has been used as a measure of biodiversity, where the cumulative proportion of species is plotted against the cumulative proportion of individuals. In health, it has been used as a measure of the inequality of health-related quality of life in a population. In education, it has been used as a measure of the inequality of universities. In chemistry it has been used to express the selectivity of protein kinase inhibitors against a panel of kinases. In engineering, it has been used to evaluate the fairness achieved by Internet routers in scheduling packet transmissions from different flows of traffic.
The Gini coefficient is sometimes used for the measurement of the discriminatory power of rating systems in credit risk management.
A 2005 study accessed US census data to measure home computer ownership and used the Gini coefficient to measure inequalities amongst whites and African Americans. Results indicated that although decreasing overall, home computer ownership inequality was substantially smaller among white households.
A 2016 peer-reviewed study titled Employing the Gini coefficient to measure participation inequality in treatment-focused Digital Health Social Networks illustrated that the Gini coefficient was helpful and accurate in measuring shifts in inequality; however, as a standalone metric it failed to incorporate overall network size.
Discriminatory power refers to a credit risk model's ability to differentiate between defaulting and non-defaulting clients. The formula formula_48, in the calculation section above, may be used for the final model and at the individual model factor level to quantify the discriminatory power of individual factors. It is related to the accuracy ratio in population assessment models.
The Gini coefficient has also been applied to analyze inequality in dating apps.
Kaminskiy and Krivtsov extended the concept of the Gini coefficient from economics to reliability theory and proposed a Gini-type coefficient that helps to assess the degree of aging of non-repairable systems or aging and rejuvenation of repairable systems. The coefficient is defined between −1 and 1 and can be used in both empirical and parametric life distributions. It takes negative values for the class of decreasing failure rate distributions and point processes with decreasing failure intensity rate and is positive for the increasing failure rate distributions and point processes with increasing failure intensity rate. The value of zero corresponds to the exponential life distribution or the Homogeneous Poisson Process.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\bar{x}"
},
{
"math_id": 1,
"text": "G = \\frac{\\displaystyle{\\sum_{i=1}^n \\sum_{j=1}^n \\left| x_i - x_j \\right|}}{\\displaystyle{2 n^2 \\bar{x}}} = \\frac{\\displaystyle{\\sum_{i=1}^n \\sum_{j=1}^n \\left| x_i - x_j \\right|}}{\\displaystyle{2 n \\sum_{i=1}^n x_i}} "
},
{
"math_id": 2,
"text": "G = \\frac{1}{2\\mu}\\int_{-\\infty}^\\infty\\int_{-\\infty}^\\infty p(x)p(y)\\,|x-y|\\,dx\\,dy"
},
{
"math_id": 3,
"text": "\\textstyle\\mu=\\int_{-\\infty}^\\infty x p(x) \\,dx"
},
{
"math_id": 4,
"text": "y_1 \\leq y_2\\leq \\cdots \\leq y_n "
},
{
"math_id": 5,
"text": "G = \\frac{1}{n}\\left ( n+1 - 2 \\left ( \\frac{\\sum_{i=1}^n (n+1-i)y_i}{\\sum_{i=1}^n y_i} \\right ) \\right ). "
},
{
"math_id": 6,
"text": "G = \\frac{2 \\sum_{i=1}^n i y_i}{n \\sum_{i=1}^n y_i} -\\frac{n+1}{n}."
},
{
"math_id": 7,
"text": "G(S) = \\frac{1}{n-1}\\left (n+1 - 2 \\left ( \\frac{\\sum_{i=1}^n (n+1-i)y_i}{\\sum_{i=1}^n y_i}\\right ) \\right )"
},
{
"math_id": 8,
"text": "G(S) = 1 - \\frac{2}{n-1}\\left ( n - \\frac{\\sum_{i=1}^n iy_i}{\\sum_{i=1}^n y_i}\\right ). "
},
{
"math_id": 9,
"text": "f ( y_i ),"
},
{
"math_id": 10,
"text": "i = 1,\\ldots, n"
},
{
"math_id": 11,
"text": "f ( y_i )"
},
{
"math_id": 12,
"text": "y_i >0 "
},
{
"math_id": 13,
"text": "G = \\frac{1}{2\\mu} \\sum\\limits_{i=1}^n \\sum\\limits_{j=1}^n \\, f(y_i) f(y_j)|y_i-y_j|"
},
{
"math_id": 14,
"text": "\\mu=\\sum\\limits_{i=1}^n y_i f(y_i)."
},
{
"math_id": 15,
"text": "(y_i < y_{i+1})"
},
{
"math_id": 16,
"text": "G = 1 - \\frac{\\sum_{i=1}^n f(y_i)(S_{i-1}+S_i)}{S_n}"
},
{
"math_id": 17,
"text": "S_i = \\sum_{j=1}^i f(y_j)\\,y_j\\,"
},
{
"math_id": 18,
"text": "S_0 = 0."
},
{
"math_id": 19,
"text": "n\\rightarrow\\infty."
},
{
"math_id": 20,
"text": "F(x)=\\int_0^x f(x)\\,dx"
},
{
"math_id": 21,
"text": "L(x)=\\frac{\\int_0^x x\\,f(x)\\,dx}{\\int_0^\\infty x\\,f(x)\\,dx}"
},
{
"math_id": 22,
"text": "B = \\int_0^1 L(F) \\,dF. "
},
{
"math_id": 23,
"text": "G = 1 - \\frac{1}{\\mu}\\int_0^\\infty (1-F(y))^2 \\,dy = \\frac{1}{\\mu}\\int_0^\\infty F(y)(1-F(y)) \\,dy"
},
{
"math_id": 24,
"text": "G=\\frac{1}{2 \\mu}\\int_0^1 \\int_0^1 |Q(F_1)-Q(F_2)|\\,dF_1\\,dF_2 ."
},
{
"math_id": 25,
"text": "\\sigma"
},
{
"math_id": 26,
"text": "G = \\operatorname{erf}\\left(\\frac{\\sigma }{2 }\\right)"
},
{
"math_id": 27,
"text": "\\operatorname{erf}"
},
{
"math_id": 28,
"text": " G=2 \\Phi \\left(\\frac{\\sigma }{\\sqrt{2}}\\right)-1"
},
{
"math_id": 29,
"text": "\\Phi"
},
{
"math_id": 30,
"text": "[0,\\infty)"
},
{
"math_id": 31,
"text": "\\Gamma(\\,)"
},
{
"math_id": 32,
"text": "B(\\,)"
},
{
"math_id": 33,
"text": "I_k(\\,)"
},
{
"math_id": 34,
"text": "G_1 = 1 - \\sum_{k=1}^{n} (X_{k} - X_{k-1}) (Y_{k} + Y_{k-1})"
},
{
"math_id": 35,
"text": "k = A + \\ N(0, s^{2}/y_k) "
},
{
"math_id": 36,
"text": "G = \\frac{N+1}{N-1}-\\frac{2}{N(N-1)\\mu}(\\sum_{i=1}^n P_iX_i)"
},
{
"math_id": 37,
"text": "\\mu"
},
{
"math_id": 38,
"text": "X_i"
},
{
"math_id": 39,
"text": "r_j = x_j / \\overline{x}"
},
{
"math_id": 40,
"text": "x_j"
},
{
"math_id": 41,
"text": "\\overline{x}"
},
{
"math_id": 42,
"text": "r_j=1"
},
{
"math_id": 43,
"text": "\\text{Inequality} = \\sum_j p_j \\, f(r_j), "
},
{
"math_id": 44,
"text": "AUC = (G+1)/2"
},
{
"math_id": 45,
"text": "1/\\lambda"
},
{
"math_id": 46,
"text": "\\lambda"
},
{
"math_id": 47,
"text": "1 - \\lambda"
},
{
"math_id": 48,
"text": "G_1"
}
] | https://en.wikipedia.org/wiki?curid=12883 |
12883102 | Digital delay line | A digital delay line (or simply delay line, also called delay filter) is a discrete element in a digital filter, which allows a signal to be delayed by a number of samples. Delay lines are commonly used to delay audio signals feeding loudspeakers to compensate for the speed of sound in air, and to align video signals with accompanying audio, called audio-to-video synchronization. Delay lines may compensate for electronic processing latency so that multiple signals leave a device simultaneously despite having different pathways.
Digital delay lines are widely used building blocks in methods to simulate room acoustics, musical instruments and effects units. Digital waveguide synthesis shows how digital delay lines can be used as sound synthesis methods for various musical instruments such as string instruments and wind instruments.
If a delay line holds a non-integer value smaller than one, it results in a fractional delay line (also called interpolated delay line or fractional delay filter). A series of an integer delay line and a fractional delay filter is commonly used for modelling "arbitrary delay" filters in digital signal processing. The Dattorro scheme is an industry standard implementation of digital filters using fractional delay lines.
Theory.
The standard delay line with integer delay is derived from the Z-transform of a discrete-time signal formula_0 delayed by formula_1 samples:formula_2 formula_3 formula_4
In this case, formula_5 is the integer delay filter with:
formula_6
The discrete-time domain filter for integer delay formula_7 as the inverse Z-transform of formula_8 is trivial, since it is an impulse shifted by formula_7:
formula_9
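In software, an integer delay line is usually realized not by convolving with this shifted impulse but with a circular buffer that stores the last formula_7 input samples. A minimal sketch (the class name and the sample values are illustrative):
```python
import numpy as np

class IntegerDelayLine:
    """Integer delay of M samples via a circular buffer: y[n] = x[n - M]."""
    def __init__(self, M):
        self.buf = np.zeros(M)   # holds the last M input samples
        self.idx = 0

    def process(self, x):
        out = self.buf[self.idx]              # sample written M calls ago
        self.buf[self.idx] = x                # overwrite it with the new input
        self.idx = (self.idx + 1) % len(self.buf)
        return out

delay = IntegerDelayLine(M=3)
print([delay.process(v) for v in [1.0, 2.0, 3.0, 4.0, 5.0]])
# -> [0.0, 0.0, 0.0, 1.0, 2.0]  (the input delayed by 3 samples)
```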
Working in the discrete-time domain with fractional delays is less trivial. In its most general theoretical form, a delay line with arbitrary fractional delay is defined as a standard delay line with delay formula_10, which can be modelled as the sum of an integer component formula_11 and a fractional component formula_12 which is smaller than one sample: (Fractional) Delay Line – formula_13 Domain
This is the formula_14 domain representation of a non-trivial digital filter design problem: the solution is an any time-domain filter that represents or approximates the inverse Z-transform of formula_15.
Filter design solutions.
Naive solution.
The conceptually easiest solution is obtained by sampling the continuous-time domain solution, which is trivial for any delay value. Given a continuous-time signal formula_0 delayed by formula_10 samples, or formula_16 seconds:formula_17 formula_18 formula_19
In this case, formula_20 is the continuous-time domain fractional delay filter with:
formula_21
The naive solution for the sampled filter formula_22 is the sampled inverse Fourier transform of formula_23, which produces a non-causal IIR filter shaped as a Cardinal Sine formula_24 shifted by formula_25:formula_26The continuous-time domain formula_27 is shifted by the fractional delay while the sampling is always aligned to the cartesian plane, therefore:
Truncated causal FIR solution.
The conceptually easiest implementable solution is the causal truncation of the naive solution above. formula_30 Truncating the impulse response might however cause instability, which can be mitigated in a few ways:
formula_34
formula_35 What follows is an expansion of the formula above displaying the resulting filters of order up to formula_36:
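The explicit table of low-order filters is not reproduced here; instead, the coefficients for any order can be computed directly from the Lagrange product formula above. A minimal sketch (the chosen delay D = 1.4 and order N = 3 are arbitrary, and the convolution at the end is just one way to apply the resulting FIR filter):
```python
import numpy as np

def lagrange_fd_coeffs(D, N):
    """FIR taps of the order-N Lagrange fractional-delay filter,
    h_D[n] = prod_{k != n} (D - k) / (n - k),  n = 0..N."""
    h = np.ones(N + 1)
    for n in range(N + 1):
        for k in range(N + 1):
            if k != n:
                h[n] *= (D - k) / (n - k)
    return h

h = lagrange_fd_coeffs(D=1.4, N=3)   # delay of 1.4 samples, cubic interpolation
print(h)

# one way to apply it: y[n] = sum_k h[k] * x[n - k]  approximates x[n - 1.4]
x = np.sin(0.2 * np.arange(32))
y = np.convolve(x, h)[:len(x)]
```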
All-pass IIR phase-approximated solution.
Another approach is designing an IIR filter of order formula_38 with a Z-transform structure that forces it to be an all-pass while still approximating a formula_37 delay:formula_39The reciprocally placed zeros and poles of formula_40 respectively flatten the frequency formula_41 response, while the phase is function of the phase of formula_42. Therefore, the problem becomes designing the FIR filter formula_42, that is finding its coefficients formula_43 as a function of D (note that formula_44 always), so that the phase approximates best the desired value formula_45.
The main solutions are:
formula_46
formula_47
formula_49
What follows is an expansion of the formula above displaying the resulting coefficients of order up to formula_36:
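Again, instead of tabulating the low-order cases, the coefficients can be generated directly from the product formula above (the Thiran all-pass design). A minimal sketch; the delay D = 3.3 and order N = 3 are arbitrary choices, and applying the filter with scipy.signal.lfilter is mentioned only as one option:
```python
import numpy as np
from math import comb

def thiran_coeffs(D, N):
    """Denominator coefficients a_0..a_N of the order-N Thiran all-pass
    fractional-delay filter, a_k = (-1)^k C(N, k) prod_l (D + l)/(D + k + l)."""
    a = np.zeros(N + 1)
    for k in range(N + 1):
        prod = 1.0
        for l in range(N + 1):
            prod *= (D + l) / (D + k + l)
        a[k] = (-1) ** k * comb(N, k) * prod
    return a

a = thiran_coeffs(D=3.3, N=3)   # a[0] == 1 by construction
b = a[::-1]                     # numerator: reversed denominator, i.e. z^{-N} A(1/z)
print(a)
# the filter could then be applied with, e.g., scipy.signal.lfilter(b, a, x)
```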
Commercial history.
Digital delay lines were first used to compensate for the speed of sound in air in 1973 to provide appropriate delay times for the distant speaker towers at the Summer Jam at Watkins Glen rock festival in New York, with 600,000 people in the audience. New York City–based company Eventide Clock Works provided digital delay devices each capable of 200 milliseconds of delay. Four speaker towers were placed at a distance from the stage, their signal delayed 175 ms to compensate for the speed of sound between the main stage speakers and the delay towers. Six more speaker towers were placed 400 feet from the stage, requiring 350 ms of delay, and a further six towers were placed 600 feet away from the stage, fed with 525 ms of delay. Each Eventide DDL 1745 module contained one hundred 1000-bit shift register chips and a bespoke digital-to-analog converter, and cost $3,800.
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "y[n] = x[n-M]"
},
{
"math_id": 3,
"text": "\\xrightarrow[]{\\mathcal{Z} }"
},
{
"math_id": 4,
"text": "Y(z) = \\overbrace{ z^{-M} }^{H_M(z)} X(z). "
},
{
"math_id": 5,
"text": "z^{-M} = H_M(z)"
},
{
"math_id": 6,
"text": "\\begin{cases} |\\centerdot| = 1 = 0dB, & \\text{zero dB gain} \\\\ \\measuredangle = -\\omega M, & \\text{linear phase with } \\omega=2\\pi fT_s \\text{ where } T_s \\text{ is the sampling period in seconds } [s]. \\end{cases}"
},
{
"math_id": 7,
"text": "M "
},
{
"math_id": 8,
"text": "H_{M}(z) "
},
{
"math_id": 9,
"text": "h_m[n] = \\begin{cases} \\text{1}, & \\text{for } n = M \\\\ 0, & \\text{for } n \\neq M. \\end{cases}"
},
{
"math_id": 10,
"text": "D \\in \\mathbb{R}"
},
{
"math_id": 11,
"text": "M \\in \\mathbb{Z}"
},
{
"math_id": 12,
"text": "d \\in \\mathbb{R} "
},
{
"math_id": 13,
"text": " \\mathcal{Z} "
},
{
"math_id": 14,
"text": "\\mathcal{Z}"
},
{
"math_id": 15,
"text": "H_{D}(z) "
},
{
"math_id": 16,
"text": "\\tau = DT_s "
},
{
"math_id": 17,
"text": "y(t) = x(t-D)"
},
{
"math_id": 18,
"text": "\\xrightarrow[]{\\mathcal{F} }"
},
{
"math_id": 19,
"text": "Y(\\omega) = \\overbrace{ e^{-j\\omega D} }^{H_{ideal}(\\omega)} X(\\omega). "
},
{
"math_id": 20,
"text": "e^{-j\\omega D} = H_{ideal}(\\omega) "
},
{
"math_id": 21,
"text": "\\begin{cases} |\\centerdot| = 1 = 0dB, & \\text{zero dB gain} \\\\ \\measuredangle = -\\omega D, & \\text{linear phase} \\\\ \\tau_{gr} = -{d\\measuredangle \\over{d\\omega}} = D, & \\text{constant group delay} \\\\ \\tau_{ph} = -{ \\measuredangle \\over{\\omega}} = -D, & \n \\text{constant phase delay.} \\end{cases}"
},
{
"math_id": 22,
"text": "h_{ideal}[n] "
},
{
"math_id": 23,
"text": "H_{ideal}(\\omega) "
},
{
"math_id": 24,
"text": "sinc()"
},
{
"math_id": 25,
"text": "D"
},
{
"math_id": 26,
"text": "h_{ideal}[n] = \\mathcal{F}^{-1} [H_{ideal}(\\omega)] =\n{1\\over{2\\pi}} \\int\\limits_{-\\pi}^{+\\pi} e^{j\\omega D} e^{j\\omega n} d\\omega = \nsinc(n- D) = \n{sin(\\pi (n-D))\\over{\\pi (n-D)}} "
},
{
"math_id": 27,
"text": "sinc"
},
{
"math_id": 28,
"text": "D \\in \\mathbb{N} "
},
{
"math_id": 29,
"text": "D \\in \\mathbb{R} "
},
{
"math_id": 30,
"text": "h_{\\tau}[n] = \\begin{cases} sinc(n-D) & \\text{for } 0\\leq n\\leq N\\\\ 0 & \\text{otherwise} \\end{cases} \\;\\;\\;\\;\\; \\text{where}\\;\\;\\;\\;\\; {N-1\\over{2}} < D < {N+1\\over{2}}\\;\\;\\;\\;\\;\\text{and}\\;\\;\\;\\;\\;N\\;\\text{is the order of the filter.}"
},
{
"math_id": 31,
"text": "L "
},
{
"math_id": 32,
"text": "sinc() "
},
{
"math_id": 33,
"text": "h_{\\tau}[n] = \\begin{cases} w(n-D)sinc(n-D) & \\text{for } L\\leq n\\leq L + N\\\\ 0 & \\text{otherwise} \\end{cases} \\;\\;\\;\\;\\; \\text{where}\\;\\;\\;\\;\\; L = \\begin{cases} round(D)- {N \\over{2}} & \\text{for even } N\\\\ \\lfloor D \\rfloor - {N-1 \\over{2}} & \\text{for odd } N \\end{cases}"
},
{
"math_id": 34,
"text": "E_{LS} = {1\\over{2\\pi}} \\int\\limits_{-\\alpha\\pi}^{\\alpha\\pi} w(\\omega) \n|H^{truncated}_D(e^{j\\omega})- H^{id}_D(e^{j\\omega})|^2 d\\omega\n\\;\\;\\;\\;\\;\\text{where } 0<\\alpha \\leq 1 \\text{ is the passband width parameter}\n\n "
},
{
"math_id": 35,
"text": "h_{D}[n] = \\prod_{k=0 ,\\; k\\neq n}^N {D-k\\over{n-k}} \\;\\;\\;\\;\\;\\text{where}\\;\\;\\;\\;\\;0\\leq n \\leq N"
},
{
"math_id": 36,
"text": "N=3 "
},
{
"math_id": 37,
"text": "D "
},
{
"math_id": 38,
"text": "N"
},
{
"math_id": 39,
"text": "H_D(z) = {z^{-N} A(z) \\over{A(z^{-1})}} = \n{a_N+a_{N-1}z^{-1}+...+a_1z^{-(N-1)}+z^{-N}\\over{1+a_1z^{-1}+...+a_{N-1}z^{-(N-1)}+a_Nz^{-N}}}\n\\;\\;\\;\\;\\; \\text{which has} \\;\\;\\;\\;\\; \n\\begin{cases} |\\centerdot| = 1 = 0dB & 0dB \\text{ gain} \\\\ \\measuredangle_{H_D(z)} = -N\\omega + 2\\measuredangle_{A(z)} = -D\\omega & \\text{desired value for delay } D \\end{cases} "
},
{
"math_id": 40,
"text": "A(z) \\text{ and } A(z^{-1}) "
},
{
"math_id": 41,
"text": "|\\centerdot| "
},
{
"math_id": 42,
"text": "A(z) "
},
{
"math_id": 43,
"text": "a_k "
},
{
"math_id": 44,
"text": "a_0=1 "
},
{
"math_id": 45,
"text": "\\measuredangle_{H_D(z)} = -D\\omega "
},
{
"math_id": 46,
"text": "E_{LS} = {1\\over{2\\pi}} \\int\\limits_{-\\pi}^{\\pi} w(\\omega) \n\n|\\underbrace{ \\underbrace{-D\\omega}_{\\measuredangle_{ID}} - \\underbrace{(-N\\omega+2\\measuredangle_{A(z)})}_{\\measuredangle_{H}} }_{\\Delta\\measuredangle_{ H_D }}|^2d\\omega\n\n\n "
},
{
"math_id": 47,
"text": "E_{LS} = {1\\over{2\\pi}} \\int\\limits_{-\\pi}^{\\pi} w(\\omega) \n| {{ \\Delta\\measuredangle_{ H_D } }\\over{\\omega}} |^2\n\n\n "
},
{
"math_id": 48,
"text": "D>0 "
},
{
"math_id": 49,
"text": "a_k = (-1)^k\\binom{N}{k}\\prod_{l=0}^N {D+l \\over{D+k+l}}\n\\;\\;\\;\\;\\; \\text{where} \\;\\;\\;\\;\\;\n\\binom{n}{k} = {N! \\over{k! (N-k)! }} "
}
] | https://en.wikipedia.org/wiki?curid=12883102 |
12884582 | Technetium (99mTc) albumin aggregated | Injectable radiopharmaceutical
Technetium 99mTc albumin aggregated (99mTc-MAA) is an injectable radiopharmaceutical used in nuclear medicine. It consists of a sterile aqueous suspension of Technetium-99m (99mTc) labeled to human albumin aggregate particles. It is commonly used for lung perfusion scanning. It is also less commonly used to visualise a peritoneovenous shunt and for isotope venography.
Preparation.
DraxImage MAA kits for preparing 99mTc-MAA are available in the United States from only a single manufacturer, Jubilant DraxImage Inc. The kits are delivered to nuclear pharmacies as lyophilized powders of non-radioactive ingredients sealed under nitrogen. A nuclear pharmacist adds anywhere from 50 to 100 mCi of Na[99mTcO4] to the reaction vial to make the final product, in the pH range of 3.8 to 8.0. After being allowed to react at room temperature for 15 minutes to ensure maximum labeling of the human albumin with 99mTc, the kit can then be diluted with sterile normal saline as needed.
Once prepared the product will have a turbid white appearance.
Quality control.
No less than 90% of MAA particles must be between 10 and 90 micrometres in size, and no particles may exceed 150 micrometres, due to the risk of pulmonary artery blockade.
No less than 90% of the radioactivity present in the product must be tagged to albumin particles. Thus, no more than 10% soluble impurities may be present.
Dosage and imaging.
The typical adult dose for a lung imaging study is 40–150 megabecquerels (1–4 mCi), containing between 100,000 and 200,000 albumin particles. The particle burden should be lowered for most pediatric patients and lowered to 50,000 for infants. The use of more than 250,000 particles in a dose is controversial, as little extra data is acquired from such scans while there is an increased risk of toxicity. Patients with pulmonary hypertension should be administered the minimum number of particles needed to achieve a lung scan (i.e. 60,000). In any patient, administering a greater quantity of particles than necessary for the diagnostic procedure increases the risk of toxicity.
Because of gravity effects, people administered 99mTc MAA should be in the supine position to ensure as even a distribution of particles throughout the lungs as possible.
The total percentage of particles trapped in the lungs can be determined through a whole body scan after the administration of 99mTc MAA through the equation:
formula_0.
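A minimal sketch of this calculation (the count values are hypothetical):
```python
def right_to_left_shunt_percent(total_body_counts, total_lung_counts):
    """Percentage of 99mTc-MAA activity bypassing the pulmonary capillary bed."""
    return (total_body_counts - total_lung_counts) / total_body_counts * 100.0

# hypothetical counts from a whole-body scan
print(right_to_left_shunt_percent(500_000, 460_000))   # -> 8.0
```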
History.
The technetium tc 99m aggregated albumin kit was approved for use in the United States in December 1987.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\%\\ \\text{Right-to-left shunt} = \\left(\\frac{(\\text{Total body counts}) - (\\text{Total lung counts})}{(\\text{Total body counts})}\\right)\\times 100\\%"
}
] | https://en.wikipedia.org/wiki?curid=12884582 |
12885890 | Toric manifold | In mathematics, a toric manifold is a topological analogue of toric variety in algebraic geometry. It is an even-dimensional manifold with an effective smooth action of an formula_0-dimensional compact torus which is locally standard with the orbit space a simple convex polytope.
The aim is to do combinatorics on the quotient polytope and obtain information on the manifold above. For example, the Euler characteristic and the cohomology ring of the manifold can be described in terms of the polytope.
The Atiyah and Guillemin-Sternberg theorem.
This theorem states that the image of the moment map of a Hamiltonian toric action is the convex hull of the set of moments of the points fixed by the action. In particular, this image is a convex polytope.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
}
] | https://en.wikipedia.org/wiki?curid=12885890 |
12886528 | Discontinuous Galerkin method | In applied mathematics, discontinuous Galerkin methods (DG methods) form a class of numerical methods for solving differential equations. They combine features of the finite element and the finite volume framework and have been successfully applied to hyperbolic, elliptic, parabolic and mixed form problems arising from a wide range of applications. DG methods have in particular received considerable interest for problems with a dominant first-order part, e.g. in electrodynamics, fluid mechanics and plasma physics. Indeed, the solutions of such problems may involve strong gradients (and even discontinuities) so that classical finite element methods fail, while finite volume methods are restricted to low order approximations.
Discontinuous Galerkin methods were first proposed and analyzed in the early 1970s as a technique to numerically solve partial differential equations. In 1973 Reed and Hill introduced a DG method to solve the hyperbolic neutron transport equation.
The origin of the DG method for elliptic problems cannot be traced back to a single publication as features such as jump penalization in the modern sense were developed gradually. However, among the early influential contributors were Babuška, J.-L. Lions, Joachim Nitsche and Miloš Zlámal. DG methods for elliptic problems were already developed in a paper by Garth Baker in the setting of 4th order equations in 1977. A more complete account of the historical development and an introduction to DG methods for elliptic problems is given in a publication by Arnold, Brezzi, Cockburn and Marini. A number of research directions and challenges on DG methods are collected in the proceedings volume edited by Cockburn, Karniadakis and Shu.
Overview.
Much like the continuous Galerkin (CG) method, the discontinuous Galerkin (DG) method is a finite element method formulated relative to a weak formulation of a particular model system. Unlike traditional CG methods that are conforming, the DG method works over a trial space of functions that are only piecewise continuous, and thus often comprise more inclusive function spaces than the finite-dimensional inner product subspaces utilized in conforming methods.
As an example, consider the continuity equation for a scalar unknown formula_0 in a spatial domain formula_1 without "sources" or "sinks" :
formula_2
where formula_3 is the flux of formula_0.
Now consider the finite-dimensional space of discontinuous piecewise polynomial functions over the spatial domain formula_1 restricted to a discrete triangulation formula_4, written as
formula_5
for formula_6 the space of polynomials with degrees less than or equal to formula_7 over element formula_8 indexed by formula_9. Then for finite element shape functions formula_10 the solution is represented by
formula_11
Then similarly choosing a test function
formula_12
multiplying the continuity equation by formula_13 and integrating by parts in space, the semidiscrete DG formulation becomes:
formula_14
Scalar hyperbolic conservation law.
A scalar hyperbolic conservation law is of the form
formula_15
where one tries to solve for the unknown scalar function formula_16, and the functions formula_17 are typically given.
Space discretization.
The formula_18-space will be discretized as
formula_19
Furthermore, we need the following definitions
formula_20
Basis for function space.
We derive the basis representation for the function space of our solution formula_21.
The function space is defined as
formula_22
where formula_23 denotes the restriction of formula_24 onto the interval formula_25, and formula_26 denotes the space of polynomials of maximal degree formula_27.
The index formula_28 should show the relation to an underlying discretization given by formula_29.
Note here that formula_24 is not uniquely defined at the intersection points formula_30.
At first we make use of a specific polynomial basis on the interval formula_31, the Legendre polynomials formula_32, i.e.,
formula_33
Note especially the orthogonality relations
formula_34
Transformation onto the interval formula_35 and normalization are achieved by the functions formula_36
formula_37
which fulfill the orthonormality relation
formula_38
Transformation onto an interval formula_25 is given by formula_39
formula_40
which fulfill
formula_41
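The scaled basis and its orthonormality relation can be checked numerically. The sketch below builds formula_39 from Legendre polynomials and verifies formula_41 by Gauss quadrature on an arbitrarily chosen interval; the interval endpoints and the number of quadrature points are illustrative.
```python
import numpy as np
from numpy.polynomial import legendre as leg

def scaled_basis(i, x, x_k, h_k):
    """phi-bar_{ki}(x) = sqrt((2i+1)/h_k) * P_i(2 (x - x_k)/h_k - 1),
    the L2-orthonormal Legendre basis on the cell I_k = (x_k, x_k + h_k)."""
    xi = (x - x_k) / h_k                       # map I_k onto [0, 1]
    return np.sqrt((2 * i + 1) / h_k) * leg.legval(2 * xi - 1, np.eye(i + 1)[i])

x_k, h_k = 0.3, 0.5                            # an arbitrary cell
r, w = leg.leggauss(8)                         # Gauss nodes/weights on [-1, 1]
x = x_k + 0.5 * h_k * (r + 1.0)                # mapped quadrature nodes
G = np.array([[np.sum(0.5 * h_k * w * scaled_basis(i, x, x_k, h_k)
                                    * scaled_basis(j, x, x_k, h_k))
               for j in range(4)] for i in range(4)])
print(np.round(G, 12))                         # approximately the identity matrix
```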
For formula_42-normalization we define formula_43, and for formula_44-normalization we define formula_45, s.t.
formula_46
Finally, we can define the basis representation of our solutions formula_47
formula_48
Note here, that formula_47 is not defined at the interface positions.
In addition, prism bases are employed for planar-like structures, and allow for 2-D/3-D hybridization.
DG-scheme.
The conservation law is transformed into its weak form by multiplying by test functions and integrating over test intervals
formula_49
By using partial integration one is left with
formula_50
The fluxes at the interfaces are approximated by numerical fluxes formula_51 with
formula_52
where formula_53 denotes the left- and right-hand sided limits.
Finally, the "DG-Scheme" can be written as
formula_54
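A minimal sketch of this scheme for the special case of linear advection, i.e. formula_15 with flux f(u) = a·u and a > 0, on a uniform periodic grid. It uses plain Legendre polynomials P_i on each cell instead of the normalized basis above (only the diagonal mass matrix changes), the upwind value a·u⁻ as the numerical flux g, and an SSP Runge–Kutta method for the time integration, which the text above leaves unspecified; the grid size, polynomial degree, CFL number and initial condition are arbitrary choices.
```python
import numpy as np
from numpy.polynomial import legendre as leg

def dg_advection(a=1.0, N=40, p=2, T=1.0, cfl=0.1):
    """Modal DG for u_t + a u_x = 0 on [0,1] with periodic boundaries (a > 0):
    Legendre basis P_0..P_p per cell, upwind flux, SSP-RK3 time stepping."""
    h = 1.0 / N
    xk = np.linspace(0.0, 1.0, N + 1)                     # cell interfaces
    r, w = leg.leggauss(p + 2)                            # quadrature on [-1, 1]
    V  = np.stack([leg.legval(r, np.eye(p + 1)[i]) for i in range(p + 1)], axis=1)
    dV = np.stack([leg.legval(r, leg.legder(np.eye(p + 1)[i])) for i in range(p + 1)], axis=1)
    K = (dV.T * w) @ V                                    # K[j,i] = int P_i P_j' dr
    Minv = (2.0 * np.arange(p + 1) + 1.0) / h             # inverse diagonal mass matrix
    Pm1 = (-1.0) ** np.arange(p + 1)                      # P_j(-1)
    Pp1 = np.ones(p + 1)                                  # P_j(+1)

    # project the initial condition u0(x) = sin(2 pi x) onto each cell
    c = np.zeros((N, p + 1))
    for k in range(N):
        x = xk[k] + 0.5 * h * (r + 1.0)
        c[k] = Minv * 0.5 * h * (V.T @ (w * np.sin(2.0 * np.pi * x)))

    def rhs(c):
        uR = c @ Pp1                          # trace at the right face of each cell
        g = a * np.roll(uR, 1)                # upwind flux at the left face of cell k
        dc = a * (c @ K.T)                    # volume term  a * int u_h P_j' dx
        dc += g[:, None] * Pm1 - np.roll(g, -1)[:, None] * Pp1
        return Minv * dc

    dt = cfl * h / (a * (2 * p + 1))
    t = 0.0
    while t < T:
        dt_ = min(dt, T - t)                  # SSP-RK3 stages
        c1 = c + dt_ * rhs(c)
        c2 = 0.75 * c + 0.25 * (c1 + dt_ * rhs(c1))
        c = c / 3.0 + 2.0 / 3.0 * (c2 + dt_ * rhs(c2))
        t += dt_
    return xk, c

xk, c = dg_advection()
xm = 0.5 * (xk[:-1] + xk[1:])                              # cell midpoints
u_mid = c @ leg.legval(0.0, np.eye(3))                     # u_h at midpoints (p = 2)
print(np.max(np.abs(u_mid - np.sin(2.0 * np.pi * xm))))    # error after one period
```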
Scalar elliptic equation.
A scalar elliptic equation is of the form
formula_55
This equation is the steady-state heat equation, where formula_21 is the temperature. Space discretization is the same as above. We recall that the interval formula_56 is partitioned into formula_57 intervals of length formula_58.
We introduce jump formula_59 and average formula_60 of functions at the node formula_61:
formula_62
The interior penalty discontinuous Galerkin (IPDG) method is: find formula_63 satisfying
formula_64
where the bilinear forms formula_65 and formula_66 are
formula_67
and
formula_68
The linear forms formula_69 and formula_70 are
formula_71
and
formula_72
The penalty parameter formula_73 is a positive constant. Increasing its value will reduce the jumps in the discontinuous solution. The term formula_74 is chosen to be equal to formula_75 for the symmetric interior penalty Galerkin method; it is equal to formula_76 for the non-symmetric interior penalty Galerkin method.
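A minimal sketch of the IPDG method above in one dimension with discontinuous piecewise-linear elements. The element count, the penalty value σ = 10, and the manufactured test problem −u″ = π² sin(πx) with u(0) = u(1) = 0 are arbitrary choices; rows of the assembled matrix correspond to test functions and columns to trial functions.
```python
import numpy as np

def ipdg_poisson_1d(f, g_a, g_b, a=0.0, b=1.0, N=9, sigma=10.0, eps=-1.0):
    """Interior penalty DG for -u'' = f on (a, b) with u(a) = g_a, u(b) = g_b,
    using discontinuous piecewise-linear elements on N + 1 equal intervals.
    eps = -1 gives the symmetric (SIPG) and eps = +1 the non-symmetric (NIPG) variant."""
    ne = N + 1                               # number of intervals, as in the text
    h = (b - a) / ne
    nodes = a + h * np.arange(ne + 1)
    A = np.zeros((2 * ne, 2 * ne))           # two local DOFs (left/right value) per element
    rhs = np.zeros(2 * ne)
    dofs = lambda e: np.array([2 * e, 2 * e + 1])

    # element terms: stiffness int u' v' and load int f v (2-point Gauss quadrature)
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)
    gw = np.array([1.0, 1.0])
    for e in range(ne):
        idx = dofs(e)
        A[np.ix_(idx, idx)] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        xq = nodes[e] + 0.5 * h * (gp + 1.0)
        phi = np.array([(nodes[e + 1] - xq) / h, (xq - nodes[e]) / h])
        rhs[idx] += 0.5 * h * phi @ (gw * f(xq))

    # interior nodes: -{u'}[v] + eps {v'}[u] + (sigma/h) [u][v]
    for k in range(1, ne):
        idx = np.concatenate([dofs(k - 1), dofs(k)])
        u_minus = np.array([0.0, 1.0, 0.0, 0.0])            # trace from element k-1
        u_plus  = np.array([0.0, 0.0, 1.0, 0.0])            # trace from element k
        jump    = u_plus - u_minus                          # [u] = u(x+) - u(x-)
        avg_du  = 0.5 * (np.array([-1.0, 1.0, 0.0, 0.0]) +
                         np.array([0.0, 0.0, -1.0, 1.0])) / h
        A[np.ix_(idx, idx)] += (-np.outer(jump, avg_du)          # rows: test, cols: trial
                                + eps * np.outer(avg_du, jump)
                                + (sigma / h) * np.outer(jump, jump))

    # boundary terms (Dirichlet data imposed weakly)
    first, last = dofs(0), dofs(ne - 1)
    d = np.array([-1.0, 1.0]) / h            # slope of a linear element from its two values
    va, vb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    A[np.ix_(first, first)] += (np.outer(va, d) - eps * np.outer(d, va)
                                + (sigma / h) * np.outer(va, va))
    A[np.ix_(last, last)] += (-np.outer(vb, d) + eps * np.outer(d, vb)
                              + (sigma / h) * np.outer(vb, vb))
    rhs[first] += -eps * g_a * d + (sigma / h) * g_a * va
    rhs[last]  +=  eps * g_b * d + (sigma / h) * g_b * vb

    return nodes, np.linalg.solve(A, rhs).reshape(ne, 2)

# manufactured test: u(x) = sin(pi x), so f = pi^2 sin(pi x) and u(0) = u(1) = 0
nodes, U = ipdg_poisson_1d(lambda x: np.pi ** 2 * np.sin(np.pi * x), 0.0, 0.0)
print(np.max(np.abs(U[:, 1] - np.sin(np.pi * nodes[1:]))))   # endpoint error vs exact solution
```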
Direct discontinuous Galerkin method.
The direct discontinuous Galerkin (DDG) method is a newer discontinuous Galerkin method for solving diffusion problems. In 2009, Liu and Yan first proposed the DDG method for solving diffusion equations. Its advantage over the classical discontinuous Galerkin method is that it derives the numerical scheme by directly prescribing a numerical flux for the function and its first derivative, without introducing intermediate variables. Reasonable numerical results can still be obtained this way, while the derivation is simpler and the amount of computation is greatly reduced.
The direct discontinuous finite element method is a branch of the discontinuous Galerkin methods. Its main steps are transforming the problem into variational form, partitioning the domain into elements, constructing basis functions, forming and solving the discontinuous finite element equations, and carrying out convergence and error analysis.
For example, consider a nonlinear diffusion equation, which is one-dimensional:
formula_77, in which formula_78
Space discretization.
First, define formula_79 and formula_80. This gives the space discretization in formula_81. Also, define formula_82.
We want to find an approximation formula_83 to formula_84 such that formula_85, formula_86, where
formula_87, and formula_88 is the space of polynomials on formula_89 of degree at most formula_90.
Formulation of the scheme.
Flux: formula_91.
formula_92: the exact solution of the equation.
Multiplying the equation by a smooth function formula_93, we obtain the following equations:
formula_94,
formula_95
Here formula_96 is arbitrary. The exact solution formula_84 of the equation is replaced by the approximate solution formula_83; that is to say, the numerical solution we need is obtained by solving these differential equations.
The numerical flux.
Choosing a proper numerical flux is critical for the accuracy of the DDG method.
The numerical flux needs to satisfy the following conditions:
♦ It is consistent with formula_97.
♦ The numerical flux is conservative, taking a single value on formula_98.
♦ It has formula_99-stability.
♦ It can improve the accuracy of the method.
Thus, a general scheme for numerical flux is given:
formula_100
In this flux, formula_90 is the maximum order of the polynomials in the two neighboring computational cells, and formula_101 denotes the jump of a function. Note that on non-uniform grids, formula_102 should be taken as formula_103, and as formula_104 on uniform grids.
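A minimal sketch of evaluating this flux at a single interface for polynomial degree k = 1, where the sum of higher-derivative jump terms is empty; the coefficient value beta0 = 2 is a hypothetical choice, since the admissible range of beta0 is specified in the DDG literature rather than here.
```python
def ddg_flux_k1(b_left, b_right, bx_left, bx_right, dx, beta0=2.0):
    """DDG numerical flux at one interface for degree k = 1 elements,
    where the higher-derivative jump terms vanish:
    flux = beta0 * [b(u)] / dx + average of b(u)_x."""
    jump_b = b_right - b_left           # [b(u)] = value from the right minus the left
    avg_bx = 0.5 * (bx_left + bx_right)
    return beta0 * jump_b / dx + avg_bx

# traces of b(u) and its derivative from the two neighbouring cells (made-up values)
print(ddg_flux_k1(b_left=1.00, b_right=0.96, bx_left=-0.40, bx_right=-0.35, dx=0.1))
```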
Error estimates.
Denote the error between the exact solution formula_84 and the numerical solution formula_83 by formula_105.
We measure the error with the following norm:
formula_106
and we have formula_107 and formula_108.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho"
},
{
"math_id": 1,
"text": "\\Omega"
},
{
"math_id": 2,
"text": " \\frac{\\partial \\rho} {\\partial t} + \\nabla \\cdot \\mathbf{J} = 0,"
},
{
"math_id": 3,
"text": "\\mathbf{J}"
},
{
"math_id": 4,
"text": "\\Omega_h "
},
{
"math_id": 5,
"text": " S_h^p(\\Omega_h)=\\{v_{|\\Omega_{e_i}}\\in P^p(\\Omega_{e_i}), \\ \\ \\forall\\Omega_{e_i}\\in \\Omega_h\\}"
},
{
"math_id": 6,
"text": "P^p(\\Omega_{e_i})"
},
{
"math_id": 7,
"text": " p"
},
{
"math_id": 8,
"text": "\\Omega_{e_i}"
},
{
"math_id": 9,
"text": "i"
},
{
"math_id": 10,
"text": "N_j\\in P^p"
},
{
"math_id": 11,
"text": "\\rho_h^i = \\sum_{j=1}^{\\text{dofs}} \\rho_j^i (t)N_j^i (\\boldsymbol{x}), \\quad \\forall \\boldsymbol{x} \\in \\Omega_{e_i}."
},
{
"math_id": 12,
"text": "\\varphi_h^i(\\boldsymbol{x})=\\sum_{j=1}^{\\text{dofs}} \\varphi_j^i N_j^i(\\boldsymbol{x}), \\quad \\forall \\boldsymbol{x}\\in\\Omega_{e_i},"
},
{
"math_id": 13,
"text": "\\varphi_h^i"
},
{
"math_id": 14,
"text": " \\frac{d}{dt}\\int_{\\Omega_{e_i}}\\rho_h^i\\varphi_h^i \\, d\\boldsymbol{x} + \\int_{\\partial\\Omega_{e_{i}}} \\varphi_h^i \\mathbf{J}_h \\cdot\\boldsymbol{n} \\, d\\boldsymbol{x} = \\int_{\\Omega_{e_i}}\\mathbf{J}_h\\cdot\\nabla\\varphi_h^i \\, d\\boldsymbol{x}."
},
{
"math_id": 15,
"text": "\n\\begin{align}\n \\partial_t u + \\partial_x f(u) &= 0\n \\quad \\text{for} \\quad\n t>0,\\, x\\in \\R\n \\\\\n u(0,x) &= u_0(x)\\,,\n\\end{align}\n"
},
{
"math_id": 16,
"text": " u \\equiv u(t,x) "
},
{
"math_id": 17,
"text": " f,u_0 "
},
{
"math_id": 18,
"text": " x "
},
{
"math_id": 19,
"text": "\n \\R = \\bigcup_k I_k\n \\,, \\quad\n I_k := \\left( x_k, x_{k+1} \\right)\n \\quad \\text{for} \\quad\n x_k<x_{k+1}\\,.\n"
},
{
"math_id": 20,
"text": "\n h_k := | I_k | \\,, \\quad\n h := \\sup_k h_k \\,, \\quad\n \\hat{x}_k := x_k + \\frac{h_k}{2}\\,.\n"
},
{
"math_id": 21,
"text": " u "
},
{
"math_id": 22,
"text": "\n S_h^p := \n \\left\\lbrace \n v \\in L^2(\\R)\n :\n v\\Big|_{I_k} \\in \\Pi_p\n \\right\\rbrace \n \\quad \\text{for} \\quad\n p \\in \\N_0 \\,,\n"
},
{
"math_id": 23,
"text": " {v|}_{I_k} "
},
{
"math_id": 24,
"text": " v "
},
{
"math_id": 25,
"text": " I_k "
},
{
"math_id": 26,
"text": " \\Pi_p "
},
{
"math_id": 27,
"text": " p "
},
{
"math_id": 28,
"text": " h "
},
{
"math_id": 29,
"text": " \\left(x_k\\right)_k "
},
{
"math_id": 30,
"text": " (x_k)_k "
},
{
"math_id": 31,
"text": " [-1,1] "
},
{
"math_id": 32,
"text": " (P_n)_{n\\in\\N_0} "
},
{
"math_id": 33,
"text": "\n P_0(x) = 1\n \\,,\\quad\n P_1(x)=x\n \\,,\\quad\n P_2(x) = \\frac{1}{2} (3x^2-1)\n \\,,\\quad \\dots\n"
},
{
"math_id": 34,
"text": "\n \\left\\langle P_i,P_j \\right\\rangle_{L^2([-1,1])} = \\frac{2}{2i+1} \\delta_{ij}\n \\quad \\forall \\, i,j \\in \\N_0 \\,.\n"
},
{
"math_id": 35,
"text": " [0,1] "
},
{
"math_id": 36,
"text": " (\\varphi_i)_i "
},
{
"math_id": 37,
"text": "\n \\varphi_i (x) := \\sqrt{2i+1} P_i(2x-1)\n \\quad \\text{for} \\quad x\\in [0,1]\\,,\n"
},
{
"math_id": 38,
"text": "\n \\left\\langle \\varphi_i,\\varphi_j \\right\\rangle_{L^2([0,1])} = \\delta_{ij}\n \\quad \\forall \\, i,j \\in \\N_0 \\,.\n"
},
{
"math_id": 39,
"text": " \\left( \\bar{\\varphi}_{ki}\\right)_i "
},
{
"math_id": 40,
"text": "\n \\bar{\\varphi}_{ki} := \\frac{1}{\\sqrt{h_k}} \\varphi_i \\left( \\frac{x-x_k}{h_k} \\right)\n \\quad \\text{for} \\quad x\\in I_k\\,,\n"
},
{
"math_id": 41,
"text": "\n \\left\\langle \\bar{\\varphi}_{ki},\\bar{\\varphi}_{kj} \\right\\rangle_{L^2(I_k)} = \\delta_{ij}\n \\quad \\forall \\, i,j \\in \\N_0 \\forall \\, k \\,.\n"
},
{
"math_id": 42,
"text": " L^\\infty "
},
{
"math_id": 43,
"text": " \\varphi_{ki}:= \\sqrt{h_k} \\bar{\\varphi}_{ki} "
},
{
"math_id": 44,
"text": " L^1 "
},
{
"math_id": 45,
"text": " \\tilde{\\varphi}_{ki}:= \\frac{1}{\\sqrt{h_k}} \\bar{\\varphi}_{ki} "
},
{
"math_id": 46,
"text": "\n \\| \\varphi_{ki} \\|_{L^\\infty (I_k) }\n = \\| \\varphi_i \\|_{L^\\infty ([0,1]) }\n =: c_{i,\\infty}\n \\quad \\text{and} \\quad\n \\| \\tilde{\\varphi}_{ki} \\|_{L^1 (I_k) }\n = \\| \\varphi_i \\|_{L^1 ([0,1]) }\n =: c_{i,1}\n \\,.\n"
},
{
"math_id": 47,
"text": " u_h "
},
{
"math_id": 48,
"text": "\n\\begin{align}\n u_h(t,x) :=& \n \\sum_{i=0}^p u_{ki}(t) \\varphi_{ki} (x)\n \\quad \\text{for} \\quad\n x \\in (x_k,x_{k+1})\n \\\\\n u_{ki} (t) =& \n \\left\\langle u_h(t, \\cdot ),\\tilde{\\varphi}_{ki} \\right\\rangle_{L^2(I_k)} \\,.\n\\end{align}\n"
},
{
"math_id": 49,
"text": "\n\\begin{align}\n \\partial_t u + \\partial_x f(u) &= 0\n \\\\\n \\Rightarrow \\quad \n \\left\\langle \\partial_t u , v \\right\\rangle_{L^2(I_k)}\n + \\left\\langle \\partial_x f(u) , v \\right\\rangle_{L^2(I_k)}\n &= 0\n \\quad \\text{for} \\quad\n \\forall \\, v \\in S_h^p\n \\\\\n \\Leftrightarrow \\quad\n \\left\\langle \\partial_t u , \\tilde{\\varphi}_{ki} \\right\\rangle_{L^2(I_k)}\n + \\left\\langle \\partial_x f(u) , \\tilde{\\varphi}_{ki} \\right\\rangle_{L^2(I_k)}\n &= 0\n \\quad \\text{for} \\quad\n \\forall \\, k \\; \\forall\\, i \\leq p \n \\,.\n\\end{align}\n"
},
{
"math_id": 50,
"text": "\n\\begin{align}\n \\frac{\\mathrm d}{\\mathrm d t} u_{ki}(t)\n + f(u(t, x_{k+1} )) \\tilde{\\varphi}_{ki}(x_{k+1}) \n - f(u(t, x_k )) \\tilde{\\varphi}_{ki}(x_k) \n - \\left\\langle f(u(t,\\,\\cdot\\,)) , \\tilde{\\varphi}_{ki}' \\right\\rangle_{L^2(I_k)}\n =0\n \\quad \\text{for} \\quad\n \\forall \\, k \\; \\forall\\, i \\leq p \n \\,.\n\\end{align}\n"
},
{
"math_id": 51,
"text": "g"
},
{
"math_id": 52,
"text": "\ng_k := g(u_k^-,u_k^+) \n \\,, \\quad\n u_k^\\pm := u(t,x_k^\\pm) \\,,\n"
},
{
"math_id": 53,
"text": "u_k^{\\pm}"
},
{
"math_id": 54,
"text": "\n\\begin{align}\n \\frac{\\mathrm d}{\\mathrm d t} u_{ki}(t)\n + g_{k+1} \\tilde{\\varphi}_{ki}(x_{k+1}) \n - g_k \\tilde{\\varphi}_{ki}(x_k) \n - \\left\\langle f(u(t,\\,\\cdot\\,)) , \\tilde{\\varphi}_{ki}' \\right\\rangle_{L^2(I_k)}\n =0\n \\quad \\text{for} \\quad\n \\forall \\, k \\; \\forall\\, i \\leq p \n \\,.\n\\end{align}\n"
},
{
"math_id": 55,
"text": "\n\\begin{align}\n -\\partial_{xx} u &= f(x)\n \\quad \\text{for} \\quad\n x\\in (a,b)\n \\\\\n u(x) &= g(x)\\,\\quad\\text{for}\\,\\quad x=a,b\n\\end{align}\n"
},
{
"math_id": 56,
"text": "(a,b)"
},
{
"math_id": 57,
"text": "N+1"
},
{
"math_id": 58,
"text": "h"
},
{
"math_id": 59,
"text": "[{}\\cdot{}]"
},
{
"math_id": 60,
"text": "\\{{}\\cdot{}\\}"
},
{
"math_id": 61,
"text": " x_k"
},
{
"math_id": 62,
"text": "\n [v]\\Big|_{x_k} = v(x_k^+)-v(x_k^-), \\quad \\{v\\}\\Big|_{x_k} = 0.5 (v(x_k^+)+v(x_k^-))\n"
},
{
"math_id": 63,
"text": "u_h"
},
{
"math_id": 64,
"text": "\nA(u_h,v_h) + A_{\\partial}(u_h,v_h) = \\ell(v_h) + \\ell_\\partial(v_h)\n"
},
{
"math_id": 65,
"text": "A"
},
{
"math_id": 66,
"text": "A_\\partial"
},
{
"math_id": 67,
"text": "\nA(u_h,v_h) = \\sum_{k=1}^{N+1} \\int_{x_{k-1}}^{x_k}\\partial_x u_h \\partial_x v_h\n-\\sum_{k=1}^N \\{ \\partial_x u_h\\}_{x_k} [v_h]_{x_k}\n+\\varepsilon\\sum_{k=1}^N \\{ \\partial_x v_h\\}_{x_k} [u_h]_{x_k}\n+\\frac{\\sigma}{h} \\sum_{k=1}^N [u_h]_{x_k} [v_h]_{x_k}\n"
},
{
"math_id": 68,
"text": "\nA_\\partial(u_h,v_h) = \\partial_x u_h(a) v_h(a) -\\partial_x u_h(b) v_h(b)\n-\\varepsilon \\partial_x v_h(a) u_h(a) + \\varepsilon\\partial_x v_h(b) u_h(b)\n+\\frac{\\sigma}{h} \\big(u_h(a) v_h(a) + u_h(b) v_h(b)\\big)\n"
},
{
"math_id": 69,
"text": "\\ell"
},
{
"math_id": 70,
"text": "\\ell_\\partial"
},
{
"math_id": 71,
"text": "\n\\ell(v_h) = \\int_a^b f v_h\n"
},
{
"math_id": 72,
"text": "\n\\ell_\\partial(v_h) = -\\varepsilon \\partial_x v_h(a) g(a) + \\varepsilon\\partial_x v_h(b) g(b)\n+\\frac{\\sigma}{h} \\big( g(a) v_h(a) + g(b) v_h(b) \\big)\n"
},
{
"math_id": 73,
"text": "\\sigma"
},
{
"math_id": 74,
"text": "\\varepsilon"
},
{
"math_id": 75,
"text": "-1"
},
{
"math_id": 76,
"text": "+1"
},
{
"math_id": 77,
"text": " U_t - {(a(U)\\cdot U_x)}_x = 0 \\ \\ in \\ (0,1) \\times (0,T)"
},
{
"math_id": 78,
"text": " U(x,0) = U_0(x)\\ \\ on\\ (0,1)"
},
{
"math_id": 79,
"text": "\\left \\{ I_j =\\left ( x_{j-\\frac{1}{2}},\\ x_{j+\\frac{1}{2}} \\right ), j=1...N\\right \\}"
},
{
"math_id": 80,
"text": "\\Delta x_j=x_{j+\\frac{1}{2}}-x_{j-\\frac{1}{2}}"
},
{
"math_id": 81,
"text": "x"
},
{
"math_id": 82,
"text": " \\Delta x=\\max_{1\\leq j< N}\\ \\Delta x_j"
},
{
"math_id": 83,
"text": "u"
},
{
"math_id": 84,
"text": "U"
},
{
"math_id": 85,
"text": " \\forall t\\in \\left [ 0,T \\right ]"
},
{
"math_id": 86,
"text": "u \\in \\mathbb{V}_{\\Delta x}"
},
{
"math_id": 87,
"text": "\\mathbb{V}_{\\Delta x} := \\left \\{ v\\in L^2\\left ( 0,1 \\right ):{v|}_{I_j}\\in P^k\\left ( I_j \\right ), \\ j=1,...,N \\right \\}"
},
{
"math_id": 88,
"text": " P^k\\left ( I_j \\right ) "
},
{
"math_id": 89,
"text": "I_j"
},
{
"math_id": 90,
"text": "k"
},
{
"math_id": 91,
"text": "h:= h\\left ( U,U_x \\right )=a\\left ( U \\right )U_x"
},
{
"math_id": 92,
"text": "U "
},
{
"math_id": 93,
"text": "v\\in H^1\\left (0,1 \\right )"
},
{
"math_id": 94,
"text": "\\int _{I_j} U_tvdx-h_{j+\\frac{1}{2}}v_{j+\\frac{1}{2}}+h_{j-\\frac{1}{2}} v_{j-\\frac{1}{2}} +\\int a\\left ( U \\right )U_xv_xdx=0 "
},
{
"math_id": 95,
"text": " \\int _{I_j} U\\left ( x,0 \\right )v\\left ( x \\right )dx=\\int _{I_j}U_0\\left ( x \\right )v\\left ( x \\right )dx"
},
{
"math_id": 96,
"text": "v"
},
{
"math_id": 97,
"text": "h={b\\left ( u \\right )}_x=a\\left ( u \\right )u_x"
},
{
"math_id": 98,
"text": "x_{j+\\frac{1}{2}}"
},
{
"math_id": 99,
"text": "L^2"
},
{
"math_id": 100,
"text": "\\widehat{h}=D_xb(u)=\\beta_0\\frac{\\left [ b\\left ( u \\right ) \\right ]}{\\Delta x}+\\overline{{b\\left ( u \\right )}_x}+\\sum_{m=1}^{\\frac{k}{2}}\\beta_m{\\left ( \\Delta x \\right )}^{2m-1}\\left [ \\partial _x^{2m}b\\left ( u \\right ) \\right ]"
},
{
"math_id": 101,
"text": "\\left [\\cdot \\right ]"
},
{
"math_id": 102,
"text": "\\Delta x"
},
{
"math_id": 103,
"text": "\\left ( \\frac{\\Delta x_j+\\Delta x_{j+1}}{2} \\right )"
},
{
"math_id": 104,
"text": "\\frac{1}{N}"
},
{
"math_id": 105,
"text": "e= u-U"
},
{
"math_id": 106,
"text": "\\left | \\left | \\left | v(\\cdot ,t) \\right | \\right | \\right | ={\\left (\\int_{0}^{1}v^2dx+\\left ( 1-\\gamma \\right )\\int_{0}^{t}\\sum_{j=1}^{N}\\int_{I_j} v_x^2 dxd\\tau +\\alpha \\int_{0}^{t}\\sum_{j=1}^{N} {\\left [ v \\right ] }^2/\\Delta x\\cdot d\\tau \\right )}^{0.5} "
},
{
"math_id": 107,
"text": "\\left | \\left | \\left | U(\\cdot ,T) \\right | \\right | \\right |\\leq \\left | \\left | \\left | U(\\cdot ,0) \\right | \\right | \\right |"
},
{
"math_id": 108,
"text": "\\left | \\left | \\left | u(\\cdot ,T) \\right | \\right | \\right |\\leq \\left | \\left | \\left | U(\\cdot ,0) \\right | \\right | \\right |"
}
] | https://en.wikipedia.org/wiki?curid=12886528 |
12886758 | Generic property | Property holding for typical examples
In mathematics, properties that hold for "typical" examples are called generic properties. For instance, a generic property of a class of functions is one that is true of "almost all" of those functions, as in the statements, "A generic polynomial does not have a root at zero," or "A generic square matrix is invertible." As another example, a generic property of a space is a property that holds at "almost all" points of the space, as in the statement, "If "f" : "M" → "N" is a smooth function between smooth manifolds, then a generic point of N is not a critical value of f." (This is by Sard's theorem.)
There are many different notions of "generic" (what is meant by "almost all") in mathematics, with corresponding dual notions of "almost none" (negligible set); the two main classes are:
There are several natural examples where those notions are not equal. For instance, the set of Liouville numbers is generic in the topological sense, but has Lebesgue measure zero.
In measure theory.
In measure theory, a generic property is one that holds almost everywhere. The dual concept is a null set, that is, a set of measure zero.
In probability.
In probability, a generic property is an event that occurs almost surely, meaning that it occurs with probability 1. For example, the law of large numbers states that the sample mean converges almost surely to the population mean. This is the definition in the measure theory case specialized to a probability space.
In discrete mathematics.
In discrete mathematics, one uses the term almost all to mean cofinite (all but finitely many), cocountable (all but countably many), for sufficiently large numbers, or, sometimes, asymptotically almost surely. The concept is particularly important in the study of random graphs.
In topology.
In topology and algebraic geometry, a generic property is one that holds on a dense open set, or more generally on a residual set (a countable intersection of dense open sets), with the dual concept being a closed nowhere dense set, or more generally a meagre set (a countable union of nowhere dense closed sets).
However, density alone is not sufficient to characterize a generic property. This can be seen even in the real numbers, where both the rational numbers and their complement, the irrational numbers, are dense. Since it does not make sense to say that both a set and its complement exhibit typical behavior, the rationals and irrationals cannot both be examples of sets large enough to be typical. Consequently, we rely on the stronger definition above, which implies that the irrationals are typical and the rationals are not.
For applications, if a property holds on a residual set, it may not hold for every point, but perturbing any given point slightly will generally land one inside the residual set (by nowhere density of the components of the meagre set); points of the residual set are thus the most important case to address in theorems and algorithms.
In function spaces.
A property is generic in "Cr" if the set holding this property contains a residual subset in the "Cr" topology. Here "C"r is the function space whose members are continuous functions with r continuous derivatives from a manifold "M" to a manifold "N".
The space "C""r"("M", "N"), of "C""r" mappings between "M" and "N", is a Baire space, hence any residual set is dense. This property of the function space is what makes generic properties "typical".
In algebraic geometry.
Algebraic varieties.
A property of an irreducible algebraic variety "X" is said to be true generically if it holds except on a proper Zariski-closed subset of "X", in other words, if it holds on a non-empty Zariski-open subset. This definition agrees with the topological one above, because for irreducible algebraic varieties any non-empty open set is dense.
For example, by the Jacobian criterion for regularity, a generic point of a variety over a field of characteristic zero is smooth. (This statement is known as generic smoothness.) This is true because the Jacobian criterion can be used to find equations for the points which are not smooth: They are exactly the points where the Jacobian matrix of a point of "X" does not have full rank. In characteristic zero, these equations are non-trivial, so they cannot be true for every point in the variety. Consequently, the set of all non-regular points of "X" is a proper Zariski-closed subset of "X".
Here is another example. Let "f" : "X" → "Y" be a regular map between two algebraic varieties. For every point "y" of "Y", consider the dimension of the fiber of "f" over "y", that is, dim "f"−1("y"). Generically, this number is constant. It is not necessarily constant everywhere. If, say, "X" is the blowup of "Y" at a point and "f" is the natural projection, then the relative dimension of "f" is zero except at the point which is blown up, where it is dim "Y" - 1.
Some properties are said to hold "very generically". Frequently this means that the ground field is uncountable and that the property is true except on a countable union of proper Zariski-closed subsets (i.e., the property holds on a dense Gδ set). For instance, this notion of very generic occurs when considering rational connectedness. However, other definitions of very generic can and do occur in other contexts.
Generic point.
In algebraic geometry, a generic point of an algebraic variety is a point whose coordinates do not satisfy any other algebraic relation than those satisfied by every point of the variety. For example, a generic point of an affine space over a field "k" is a point whose coordinates are algebraically independent over "k".
In scheme theory, where the points are the subvarieties, a generic point of a variety is a point whose closure for the Zariski topology is the whole variety.
A generic property is a property of the generic point. For any reasonable property, it turns out that the property is true generically on the subvariety (in the sense of being true on an open dense subset) if and only if the property is true at the generic point. Such results are frequently proved using the methods of limits of affine schemes developed in EGA IV 8.
General position.
A related concept in algebraic geometry is general position, whose precise meaning depends on the context. For example, in the Euclidean plane, three points in general position are not collinear. This is because the property of not being collinear is a generic property of the configuration space of three points in R2.
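For illustration, the following minimal Python sketch (function name and tolerance chosen here, not from any standard library) tests the defining condition: three points fail to be in general position exactly when the signed area they span vanishes, which almost never happens for randomly chosen points.

```python
import random

def collinear(p, q, r, tol=1e-12):
    # Twice the signed area of the triangle pqr; zero iff the points are collinear.
    area2 = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return abs(area2) <= tol

# Three points drawn at random from the unit square are almost surely in
# general position, i.e. not collinear:
pts = [(random.random(), random.random()) for _ in range(3)]
print(collinear(*pts))   # False with probability 1
```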
In computability.
In computability and algorithmic randomness, an infinite string of natural numbers formula_0 is called 1-generic if, for every c.e. set formula_1, either formula_2 has an initial segment formula_3 in formula_4, or formula_2 has an initial segment formula_3 such that every extension formula_5 is "not" in W. 1-generics are important in computability, as many constructions can be simplified by considering an appropriate 1-generic. Some key properties are that no 1-generic is computable, and that every 1-generic formula_2 is generalized low, i.e. formula_6.
1-genericity is connected to the topological notion of "generic", as follows. Baire space formula_7 has a topology with basic open sets formula_8 for every finite string of natural numbers formula_9. Then, an element formula_0 is 1-generic if and only if it is "not" on the boundary of any effectively open set (an open set that is a union of a c.e. collection of basic open sets). In particular, a 1-generic meets every dense effectively open set (meeting every such set is a strictly weaker property, called being "weakly 1-generic").
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f \\in \\omega^\\omega"
},
{
"math_id": 1,
"text": "W \\subseteq \\omega^{<\\omega}"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "\\sigma"
},
{
"math_id": 4,
"text": "W"
},
{
"math_id": 5,
"text": "\\tau \\succcurlyeq \\sigma"
},
{
"math_id": 6,
"text": "f' \\equiv_\\mathrm{T} f \\oplus \\varnothing'"
},
{
"math_id": 7,
"text": "\\omega^\\omega"
},
{
"math_id": 8,
"text": "[\\sigma] = \\{ f: \\sigma \\preccurlyeq f \\}"
},
{
"math_id": 9,
"text": "\\sigma \\in \\omega^{<\\omega}"
},
{
"math_id": 10,
"text": "f\\colon M \\to N"
}
] | https://en.wikipedia.org/wiki?curid=12886758 |
1288985 | Stueckelberg action | Special case of the abelian Higgs mechanism
In field theory, the Stueckelberg action (named after Ernst Stueckelberg) describes a massive spin-1 field as an R (the real numbers are the Lie algebra of U(1)) Yang–Mills theory coupled to a real scalar field formula_0. This scalar field takes on values in a real 1D affine representation of R with formula_1 as the coupling strength.
formula_2
This is a special case of the Higgs mechanism, where, in effect, λ and thus the mass of the Higgs scalar excitation has been taken to infinity, so the Higgs has decoupled and can be ignored, resulting in a nonlinear, affine representation of the field, instead of a linear representation — in contemporary terminology, a U(1) nonlinear σ-model.
Gauge-fixing formula_3 yields the Proca action.
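As a quick check (a standard manipulation, not tied to any particular source), substituting the gauge choice formula_3 into the Lagrangian above leaves

```latex
\mathcal{L}\big|_{\phi=0}
  = -\frac{1}{4}\,F^{\mu\nu}F_{\mu\nu} + \frac{1}{2}\,m^{2}A^{\mu}A_{\mu},
\qquad F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu},
```

which is the Proca Lagrangian for a massive vector field.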
This explains why, unlike the case for non-abelian vector fields, quantum electrodynamics with a massive photon is, in fact, renormalizable, even though it is not manifestly gauge invariant (after the Stückelberg scalar has been eliminated in the Proca action).
Stueckelberg extension of the Standard Model.
The Stueckelberg extension of the Standard Model ("StSM") consists of a gauge invariant kinetic term for a massive U(1) gauge field. Such a term can be implemented into the Lagrangian of the Standard Model without destroying the renormalizability of the theory and further provides a mechanism for mass generation that is distinct from the Higgs mechanism in the context of Abelian gauge theories. The model involves a non-trivial mixing of the Stueckelberg and the Standard Model sectors by including an additional term in the effective Lagrangian of the Standard Model given by
formula_4
The first term above is the Stueckelberg field strength, formula_5 and formula_6 are topological mass parameters and formula_7 is the axion.
After symmetry breaking in the electroweak sector the photon remains massless. The model predicts a new type of gauge boson dubbed formula_8, which has a very distinct narrow decay width in this model. The St sector of the StSM decouples from the SM in the limit formula_9.
Stueckelberg type couplings arise quite naturally in theories involving compactifications of higher-dimensional string theory, in particular, these couplings appear in the dimensional reduction of the ten-dimensional N = 1 supergravity coupled to supersymmetric Yang–Mills gauge fields in the presence of internal gauge fluxes. In the context of intersecting D-brane model building, products of U(N) gauge groups are broken to their SU(N) subgroups via the Stueckelberg couplings and thus the Abelian gauge fields become massive. Further, in a much simpler fashion one may consider a model with only one extra dimension (a type of Kaluza–Klein model) and compactify down to a four-dimensional theory. The resulting Lagrangian will contain massive vector gauge bosons that acquire masses through the Stueckelberg mechanism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "\\mathcal{L}=-\\frac{1}{4}(\\partial^\\mu A^\\nu-\\partial^\\nu A^\\mu)(\\partial_\\mu A_\\nu-\\partial_\\nu A_\\mu)+\\frac{1}{2}(\\partial^\\mu \\phi+m A^\\mu)(\\partial_\\mu \\phi+m A_\\mu) "
},
{
"math_id": 3,
"text": "\\phi=0"
},
{
"math_id": 4,
"text": "\\mathcal{L}_{\\rm St}=-\\frac{1}{4}C_{\\mu \\nu }C^{\\mu\\nu }+g_XC_{\\mu }\\mathcal{J}_X^{\\mu }-\\frac{1}{2}\\left(\\partial _{\\mu }\\sigma +M_1C_{\\mu}+M_2B_{\\mu }\\right)^2."
},
{
"math_id": 5,
"text": "M_1"
},
{
"math_id": 6,
"text": "M_2"
},
{
"math_id": 7,
"text": "\\sigma "
},
{
"math_id": 8,
"text": "Z'_{\\rm St}"
},
{
"math_id": 9,
"text": "M_2/M_1 \\to 0"
}
] | https://en.wikipedia.org/wiki?curid=1288985 |
12891058 | Triangulation (computer vision) | Method of determining a point in 3D space
In computer vision, triangulation refers to the process of determining a point in 3D space given its projections onto two, or more, images. In order to solve this problem it is necessary to know the parameters of the camera projection function from 3D to 2D for the cameras involved, in the simplest case represented by the camera matrices. Triangulation is sometimes also referred to as reconstruction or intersection.
The triangulation problem is in principle trivial. Since each point in an image corresponds to a line in 3D space, all points on the line in 3D are projected to the point in the image. If a pair of corresponding points in two, or more images, can be found it must be the case that they are the projection of a common 3D point x. The set of lines generated by the image points must intersect at x (3D point) and the algebraic formulation of the coordinates of x (3D point) can be computed in a variety of ways, as is presented below.
In practice, however, the coordinates of image points cannot be measured with arbitrary accuracy. Instead, various types of noise, such as geometric noise from lens distortion or interest point detection error, lead to inaccuracies in the measured image coordinates. As a consequence, the lines generated by the corresponding image points do not always intersect in 3D space. The problem, then, is to find a 3D point which optimally fits the measured image points. In the literature there are multiple proposals for how to define optimality and how to find the optimal 3D point. Since they are based on different optimality criteria, the various methods produce different estimates of the 3D point x when noise is involved.
Introduction.
In the following, it is assumed that triangulation is made on corresponding image points from two views generated by pinhole cameras.
The image to the left illustrates the epipolar geometry of a pair of stereo cameras of pinhole model. A point x (3D point) in 3D space is projected onto the respective image plane along a line (green) which goes through the camera's focal point, formula_0 and formula_1, resulting in the two corresponding image points formula_2 and formula_3. If formula_2 and formula_3 are given and the geometry of the two cameras are known, the two projection lines (green lines) can be determined and it must be the case that they intersect at point x (3D point). Using basic linear algebra that intersection point can be determined in a straightforward way.
The image to the right shows the real case. The position of the image points formula_2 and formula_3 cannot be measured exactly. The reason is a combination of factors such as
As a consequence, the measured image points are formula_4 and formula_5 instead of formula_2 and formula_3. However, their projection lines (blue) do not have to intersect in 3D space or come close to x. In fact, these lines intersect if and only if formula_4 and formula_5 satisfy the epipolar constraint defined by the fundamental matrix. Given the measurement noise in formula_4 and formula_5 it is rather likely that the epipolar constraint is not satisfied and the projection lines do not intersect.
This observation leads to the problem which is solved in triangulation. Which 3D point xest is the best estimate of x given formula_4 and formula_5 and the geometry of the cameras? The answer is often found by defining an error measure which depends on xest and then minimizing this error. In the following sections, some of the various methods for computing xest presented in the literature are briefly described.
All triangulation methods produce xest = x in the case that formula_6 and formula_7, that is, when the epipolar constraint is satisfied (except for singular points, see below). It is what happens when the constraint is not satisfied which differs between the methods.
Properties.
A triangulation method can be described in terms of a function formula_8 such that
formula_9
where formula_10 are the homogeneous coordinates of the detected image points and formula_11 are the camera matrices. x (3D point) is the homogeneous representation of the resulting 3D point. The formula_12 sign implies that formula_8 is only required to produce a vector which is equal to x up to a multiplication by a non-zero scalar since homogeneous vectors are involved.
Before looking at the specific methods, that is, specific functions formula_8, there are some general concepts related to the methods that need to be explained. Which triangulation method is chosen for a particular problem depends to some extent on these characteristics.
Singularities.
Some of the methods fail to correctly compute an estimate of x (3D point) if it lies in a certain subset of the 3D space, corresponding to some combination of formula_13. A point in this subset is then a "singularity" of the triangulation method. The reason for the failure can be that some equation system to be solved is under-determined or that the projective representation of xest becomes the zero vector for the singular points.
Invariance.
In some applications, it is desirable that the triangulation is independent of the coordinate system used to represent 3D points; if the triangulation problem is formulated in one coordinate system and then transformed into another the resulting estimate xest should transform in the same way. This property is commonly referred to as "invariance". Not every triangulation method assures invariance, at least not for general types of coordinate transformations.
For a homogeneous representation of 3D coordinates, the most general transformation is a projective transformation, represented by a formula_14 matrix formula_15. If the homogeneous coordinates are transformed according to
formula_16
then the camera matrices must transform as
formula_17
to produce the same homogeneous image coordinates:
formula_18
If the triangulation function formula_19 is invariant to formula_15 then the following relation must be valid
formula_20
from which follows that
formula_21 for all formula_10
For each triangulation method, it can be determined if this last relation is valid. If it is, it may be satisfied only for a subset of the projective transformations, for example, rigid or affine transformations.
Computational complexity.
The function formula_19 is only an abstract representation of a computation which, in practice, may be relatively complex. Some methods result in a formula_19 which is a closed-form continuous function while others need to be decomposed into a series of computational steps involving, for example, SVD or finding the roots of a polynomial. Yet another class of methods results in formula_19 which must rely on iterative estimation of some parameters. This means that both the computation time and the complexity of the operations involved may vary between the different methods.
Methods.
Mid-point method.
Each of the two image points formula_4 and formula_5 has a corresponding projection line (blue in the right image above), here denoted as formula_22 and formula_23, which can be determined given the camera matrices formula_11. Let formula_24 be a distance function between a 3D line L and a 3D point x such that
formula_25 is the Euclidean distance between formula_26 and formula_27.
The "midpoint method" finds the point xest which minimizes
formula_28
It turns out that xest lies exactly at the middle of the shortest line segment which joins the two projection lines.
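A minimal numerical sketch of the mid-point method in Python/NumPy is given below. The parametrisation of each back-projected ray by a camera centre (o1, o2) and a direction vector (d1, d2), as well as the function name, are choices made here for illustration.

```python
import numpy as np

def midpoint_triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment joining two 3D rays.

    o1, o2: camera centres; d1, d2: directions of the back-projected rays
    through the measured image points.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Minimize ||(o1 + s*d1) - (o2 + t*d2)||^2 over (s, t): setting the
    # gradient to zero gives a 2x2 linear system.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    s, t = np.linalg.solve(A, b)      # singular if the rays are parallel
    p1 = o1 + s * d1                  # closest point on the first ray
    p2 = o2 + t * d2                  # closest point on the second ray
    return 0.5 * (p1 + p2)
```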
Via the essential matrix.
The problem to be solved here is how to compute formula_29 given corresponding normalized image coordinates formula_30 and formula_31. If the essential matrix is known and the corresponding rotation and translation transformations have been determined, this algorithm (described in Longuet-Higgins' paper) provides a solution.
Let formula_32 denote row "k" of the rotation matrix formula_33:
formula_34
Combining the above relations between 3D coordinates in the two coordinate systems and the mapping between 3D and 2D points described earlier gives
formula_35
or
formula_36
Once formula_37 is determined, the other two coordinates can be computed as
formula_38
The above derivation is not unique. It is also possible to start with an expression for formula_39 and derive an expression for formula_37 according to
formula_40
In the ideal case, when the camera maps the 3D points according to a perfect pinhole camera and the resulting 2D points can be detected without any noise, the two expressions for formula_37 are equal. In practice, however, they are not and it may be advantageous to combine the two estimates of formula_37, for example, in terms of some sort of average.
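A small Python sketch of the computation just described is shown below; the variable names are illustrative, and the rotation R and translation t are assumed to have been recovered from the essential matrix beforehand.

```python
import numpy as np

def depth_from_essential(y, y_prime, R, t):
    """x3 = ((r1 - y'_1 r3) . t) / ((r1 - y'_1 r3) . y), then (x1, x2) = x3*(y1, y2)."""
    y_h = np.array([y[0], y[1], 1.0])   # homogeneous unprimed image coordinates
    r1, r2, r3 = R                       # rows of the rotation matrix
    a = r1 - y_prime[0] * r3             # the alternative estimate uses r2 - y'_2 * r3
    x3 = (a @ t) / (a @ y_h)
    return np.array([x3 * y[0], x3 * y[1], x3])
```

In practice one might compute both estimates of the depth (from the first and second rows of R) and average them, as suggested above.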
There are also other types of extensions of the above computations which are possible. They started with an expression of the primed image coordinates and derived 3D coordinates in the unprimed system. It is also possible to start with unprimed image coordinates and obtain primed 3D coordinates, which finally can be transformed into unprimed 3D coordinates. Again, in the ideal case the result should be equal to the above expressions, but in practice they may deviate.
A final remark relates to the fact that if the essential matrix is determined from corresponding image coordinates, which often is the case when 3D points are determined in this way, the translation vector formula_41 is known only up to an unknown positive scaling. As a consequence, the reconstructed 3D points, too, are undetermined with respect to a positive scaling.
{
"math_id": 0,
"text": " \\mathbf{O}_{1} "
},
{
"math_id": 1,
"text": " \\mathbf{O}_{2} "
},
{
"math_id": 2,
"text": " \\mathbf{y}_{1} "
},
{
"math_id": 3,
"text": " \\mathbf{y}_{2} "
},
{
"math_id": 4,
"text": " \\mathbf{y}'_{1} "
},
{
"math_id": 5,
"text": " \\mathbf{y}'_{2} "
},
{
"math_id": 6,
"text": " \\mathbf{y}_{1} = \\mathbf{y}'_{1} "
},
{
"math_id": 7,
"text": " \\mathbf{y}_{2} = \\mathbf{y}'_{2} "
},
{
"math_id": 8,
"text": " \\tau \\, "
},
{
"math_id": 9,
"text": " \\mathbf{x} \\sim \\tau(\\mathbf{y}'_{1}, \\mathbf{y}'_{2}, \\mathbf{C}_{1}, \\mathbf{C}_{2}) "
},
{
"math_id": 10,
"text": " \\mathbf{y}'_{1}, \\mathbf{y}'_{2} "
},
{
"math_id": 11,
"text": " \\mathbf{C}_{1}, \\mathbf{C}_{2} "
},
{
"math_id": 12,
"text": " \\sim \\, "
},
{
"math_id": 13,
"text": " \\mathbf{y}'_{1}, \\mathbf{y}'_{2}, \\mathbf{C}_{1}, \\mathbf{C}_{2} "
},
{
"math_id": 14,
"text": " 4 \\times 4 "
},
{
"math_id": 15,
"text": " \\mathbf{T} "
},
{
"math_id": 16,
"text": " \\mathbf{\\bar x} \\sim \\mathbf{T} \\, \\mathbf{x} "
},
{
"math_id": 17,
"text": " \\mathbf{\\bar C}_{k} \\sim \\mathbf{C}_{k} \\, \\mathbf{T}^{-1} "
},
{
"math_id": 18,
"text": " \\mathbf{y}_{k} \\sim \\mathbf{\\bar C}_{k} \\, \\mathbf{\\bar x} = \\mathbf{C}_{k} \\, \\mathbf{x} "
},
{
"math_id": 19,
"text": " \\tau "
},
{
"math_id": 20,
"text": " \\mathbf{\\bar x}_{\\rm est} \\sim \\mathbf{T} \\, \\mathbf{x}_{\\rm est} "
},
{
"math_id": 21,
"text": " \\tau(\\mathbf{y}'_{1}, \\mathbf{y}'_{2}, \\mathbf{C}_{1}, \\mathbf{C}_{2}) \\sim \\mathbf{T}^{-1} \\, \\tau(\\mathbf{y}'_{1}, \\mathbf{y}'_{2}, \\mathbf{C}_{1} \\, \\mathbf{T}^{-1}, \\mathbf{C}_{2} \\, \\mathbf{T}^{-1}), "
},
{
"math_id": 22,
"text": " \\mathbf{L}'_{1} "
},
{
"math_id": 23,
"text": " \\mathbf{L}'_{2} "
},
{
"math_id": 24,
"text": " d\\, "
},
{
"math_id": 25,
"text": " d(\\mathbf{L}, \\mathbf{x})"
},
{
"math_id": 26,
"text": " \\mathbf{L} "
},
{
"math_id": 27,
"text": " \\mathbf{x} "
},
{
"math_id": 28,
"text": " d(\\mathbf{L}'_{1}, \\mathbf{x})^{2} + d(\\mathbf{L}'_{2}, \\mathbf{x})^{2} "
},
{
"math_id": 29,
"text": " (x_{1}, x_{2}, x_{3}) "
},
{
"math_id": 30,
"text": " (y_{1}, y_{2}) "
},
{
"math_id": 31,
"text": " (y'_{1}, y'_{2}) "
},
{
"math_id": 32,
"text": " \\mathbf{r}_{k} "
},
{
"math_id": 33,
"text": " \\mathbf{R} "
},
{
"math_id": 34,
"text": " \\mathbf{R} = \\begin{pmatrix} - \\mathbf{r}_{1} - \\\\ - \\mathbf{r}_{2} - \\\\ - \\mathbf{r}_{3} - \\end{pmatrix} "
},
{
"math_id": 35,
"text": " y'_{1} = \\frac{x'_{1}}{x'_{3}} = \\frac{\\mathbf{r}_{1} \\cdot (\\tilde{\\mathbf{x}} - \\mathbf{t})}{\\mathbf{r}_{3} \\cdot (\\tilde{\\mathbf{x}} - \\mathbf{t})} = \\frac{\\mathbf{r}_{1} \\cdot (\\mathbf{y} - \\mathbf{t}/x_{3})}{\\mathbf{r}_{3} \\cdot (\\mathbf{y} - \\mathbf{t}/x_{3})} "
},
{
"math_id": 36,
"text": "x_{3} = \\frac{ (\\mathbf{r}_{1} - y'_{1} \\, \\mathbf{r}_{3}) \\cdot \\mathbf{t} }{ (\\mathbf{r}_{1} - y'_{1} \\, \\mathbf{r}_{3}) \\cdot \\mathbf{y} } "
},
{
"math_id": 37,
"text": " x_{3} "
},
{
"math_id": 38,
"text": " \\begin{pmatrix} x_1 \\\\ x_2 \\end{pmatrix} = x_3 \\begin{pmatrix} y_1 \\\\ y_2 \\end{pmatrix} "
},
{
"math_id": 39,
"text": " y'_{2} "
},
{
"math_id": 40,
"text": "x_{3} = \\frac{ (\\mathbf{r}_{2} - y'_{2} \\, \\mathbf{r}_{3}) \\cdot \\mathbf{t} }{ (\\mathbf{r}_{2} - y'_{2} \\, \\mathbf{r}_{3}) \\cdot \\mathbf{y} } "
},
{
"math_id": 41,
"text": " \\mathbf{t} "
}
] | https://en.wikipedia.org/wiki?curid=12891058 |
1289256 | Flight plan | Document filed by a pilot or flight dispatcher indicating the aircraft's flight path
Flight plans are documents filed by a pilot or flight dispatcher with the local Air Navigation Service Provider (e.g., the FAA in the United States) prior to departure which indicate the plane's planned route or flight path. Flight plan format is specified in ICAO Doc 4444. They generally include basic information such as departure and arrival points, estimated time en route, alternate airports in case of bad weather, type of flight (whether instrument flight rules [IFR] or visual flight rules [VFR]), the pilot's information, number of people on board, and information about the aircraft itself. In most countries, flight plans are required for flights under IFR, but may be optional for flying VFR unless crossing international borders. Flight plans are highly recommended, especially when flying over inhospitable areas such as water, as they provide a way of alerting rescuers if the flight is overdue. In the United States and Canada, when an aircraft is crossing the Air Defense Identification Zone (ADIZ), either an IFR or a special type of VFR flight plan called a DVFR (Defense VFR) flight plan must be filed. For IFR flights, flight plans are used by air traffic control to initiate tracking and routing services. For VFR flights, their only purpose is to provide needed information should search and rescue operations be required, or for use by air traffic control when flying in a "Special Flight Rules Area."
Route or flight paths.
Routing types used in flight planning are: airway, navaid and direct. A route may be composed of segments of different routing types. For example, a route from Chicago to Rome may include airway routing over the U.S. and Europe, but direct routing over the Atlantic Ocean.
Airway or flight path.
Airway routing occurs along pre-defined pathways called flight paths. Airways can be thought of as three-dimensional highways for aircraft. In most land areas of the world, aircraft are required to fly airways between the departure and destination airports. The rules governing airway routing cover altitude, airspeed, and requirements for entering and leaving the airway (see SIDs and STARs). Most airways are eight nautical miles (14 kilometers) wide, and the airway flight levels keep aircraft separated by at least 1000 vertical feet from aircraft on the flight level above and below. Airways usually intersect at Navaids, which designate the allowed points for changing from one airway to another. Airways have names consisting of one or more letters followed by one or more digits (e.g., V484 or UA419).
The airway structure is divided into high and low altitudes. The low altitude airways in the U.S. which can be navigated using VOR Navaids have names that start with the letter V, and are therefore called Victor Airways. They cover altitudes from approximately 1200 feet above ground level (AGL) up to 18,000 feet above mean sea level (MSL). T routes are low altitude RNAV-only routes which may or may not utilize VOR NAVAIDs. The high altitude airways in the U.S. have names that start with the letter J and are called Jet Routes, or Q for Q routes. Q routes in the U.S. are RNAV-only high altitude airways, whereas J routes use VOR NAVAIDs the same way V routes do. J & Q routes run from 18,000 feet MSL up to FL450. The altitude separating the low and high airway structures varies from country to country; Switzerland and Egypt, for example, use different values.
Navaid.
Navaid routing occurs between Navaids (short for Navigational Aids, see VOR) which are not always connected by airways. Navaid routing is typically only allowed in the continental U.S. If a flight plan specifies Navaid routing between two Navaids which are connected via an airway, the rules for that particular airway must be followed as if the aircraft was flying Airway routing between those two Navaids. Allowable altitudes are covered in Flight Levels.
Direct.
Direct routing occurs when one or both of the route segment endpoints are at a latitude/longitude which is not located at a Navaid. Some flight planning organizations specify that checkpoints generated for a Direct route be a limited distance apart, or limited by time to fly between the checkpoints (i.e. direct checkpoints could be farther apart for a fast aircraft than for a slow one).
SIDs and STARs.
SIDs and STARs are procedures and checkpoints used to enter and leave the airway system by aircraft operating on IFR flight plans. There is a defined transition point at which an airway and a SID or STAR intersect.
A SID, or Standard Instrument Departure, defines a pathway out of an airport and onto the airway structure. A SID is sometimes called a Departure Procedure (DP). SIDs are unique to the associated airport.
A STAR, or Standard Terminal Arrival Route, ('Standard Instrument Arrival' in the UK) defines a pathway into an airport from the airway structure. STARs can be associated with more than one arrival airport, which can occur when two or more airports are in proximity (e.g., San Francisco and San Jose).
Special use airspace.
In general, flight planners are expected to avoid areas called Special Use Airspace (SUA) when planning a flight. In the United States, there are several types of SUA, including Restricted, Warning, Prohibited, Alert, and Military Operations Area (MOA). Examples of Special Use Airspace include a region around the White House in Washington, D.C., and the country of Cuba. Government and military aircraft may have different requirements for particular SUA areas, or may be able to acquire special clearances to traverse through these areas.
Flight levels.
Flight levels (FL) are used by air traffic controllers to simplify the vertical separation of aircraft and one exists every 100 feet relative to an agreed pressure level. Above a transitional altitude, which can vary from country to country and even within a country, the worldwide agreed upon pressure datum of 1013.25 millibars (corresponding to the pressure at sea level for the ICAO Standard Atmosphere, 101.325 kPa) or the equivalent setting of 29.92 inches of mercury is entered into the altimeter and altitude is then referred to as a flight level. The altimeter reading is converted to a flight level by removing the trailing two zeros: for example, 29000 feet becomes FL290. When the pressure at sea level is by chance the international standard then the flight level is also the altitude. To avoid confusion, below the transition altitude, height is referred to as a numeric altitude, for example 'descend 5000 feet' and above the transition altitude, 'climb flight level 250'.
Airways have a set of associated standardized flight levels (sometimes called the "flight model") which must be used when on the airway. On a bi-directional airway, each direction has its own set of flight levels. A valid flight plan must include a legal flight level at which the aircraft will travel the airway. A change in airway may require a change in flight level.
In the US, Canada and Europe for eastbound (heading 0–179 degrees) IFR flights, the flight plan must list an "odd" flight level in 2000 foot increments starting at FL190 (i.e., FL190, FL210, FL230, etc.); Westbound (heading 180–359 degrees) IFR flights must list an "even" flight level in 2000 foot increments starting at FL180 (i.e., FL180, FL200, FL220, etc.). However, Air Traffic Control (ATC) may assign any flight level at any time if traffic situations merit a change in altitude.
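A toy Python illustration of the odd/even cruising-level rule described above is shown below; the function name and output format are invented for this example, and the sketch does not account for ATC discretion or country-specific differences.

```python
def candidate_ifr_levels(magnetic_course_deg, count=3):
    # Eastbound (000-179 degrees): odd levels FL190, FL210, ...
    # Westbound (180-359 degrees): even levels FL180, FL200, ...
    start = 190 if (magnetic_course_deg % 360) <= 179 else 180
    return ["FL%d" % (start + 20 * i) for i in range(count)]

print(candidate_ifr_levels(90))    # ['FL190', 'FL210', 'FL230']
print(candidate_ifr_levels(270))   # ['FL180', 'FL200', 'FL220']
```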
Aircraft efficiency increases with height. Burning fuel decreases the weight of an aircraft which may then choose to increase its flight level to further improve fuel consumption. For example, an aircraft may be able to reach FL290 early in a flight, but step climb to FL370 later in the route after weight has decreased due to fuel burn off.
Alternate airports.
Part of flight planning often involves the identification of one or more airports which can be flown to in case of unexpected conditions (such as weather) at the destination airport. The planning process must be careful to include only alternate airports which can be reached with the anticipated fuel load and total aircraft weight and that have capabilities necessary to handle the type of aircraft being flown.
In Canada, unlike the United States, unless specifically exempted by a company Operating Certificate, IFR flight plans require an alternate airport, regardless of the forecast destination weather. In order to be considered as a legally valid alternate, the airport must be forecast to be at or above certain weather minima at the estimated time of arrival (at the alternate). The minimum weather conditions vary based on the type of approach(es) available at the alternate airport, and may be found in the General section of the Canada Air Pilot (CAP).
Fuel.
Aircraft manufacturers are responsible for generating flight performance data which flight planners use to estimate fuel needs for a particular flight. The fuel burn rate is based on specific throttle settings for climbing and cruising. The planner uses the projected weather and aircraft weight as inputs to the flight performance data to estimate the necessary fuel to reach the destination. The fuel burn is usually given as the weight of the fuel (usually pounds or kilograms) instead of the volume (such as gallons or litres) because aircraft weight is critical.
In addition to standard fuel needs, some organizations require that a flight plan include reserve fuel if certain conditions are met. For example, an over-water flight of longer than a specific duration may require the flight plan to include reserve fuel. The reserve fuel may be planned as extra which is left over on the aircraft at the destination, or it may be assumed to be burned during flight (perhaps due to unaccounted for differences between the actual aircraft and the flight performance data).
In case of an in-flight emergency it may be necessary to determine whether it is quicker to divert to the alternate airfield or continue to the destination. This can be calculated according to the formula (known as the Vir Narain formula) as follows:
formula_0
where "C" is the distance from the Critical Point (equitime point) to the destination, "D" the distance between the destination and the alternate airfield, "O" is the groundspeed, "A" is the airspeed, "θ = Φ +/- d" (where "Φ" is the angle between the track to the destination and the track from the destination to the alternate airfield), and "d" is the drift (plus when the drift and the alternate airfield are on the opposite sides of the track, and minus when they are on the same side).
Flight plan timeline.
Flight plans may be submitted before departure or even after the aircraft is in the air. They may be filed up to 120 hours in advance, either by voice or by data link, though they are usually filled out or submitted just a few hours before departure. The minimum recommended time is one hour before departure for domestic flights, and up to three hours for international flights; this depends on the country the aircraft is flying out of.
Other flight planning considerations.
Holding over the destination or alternate airports is a required part of some flight plans. Holding (circling in a pattern designated by the airport control tower) may be necessary if unexpected weather or congestion occurs at the airport. If the flight plan calls for hold planning, the additional fuel and hold time should appear on the flight plan.
Organized Tracks are a series of paths similar to airways which cross ocean areas. Some organized track systems are fixed and appear on navigational charts (e.g., the NOPAC tracks over the Northern Pacific Ocean). Others change on a daily basis depending on weather, west or eastbound and other factors and therefore cannot appear on printed charts (e.g., the North Atlantic Tracks (NAT) over the Atlantic Ocean).
Glossary.
AGL (above ground level): a measurement of elevation, or "height", above a specific land mass (also see MSL).
ICAO: the specialized agency of the United Nations with a mandate "to ensure the safe, efficient and orderly evolution of international civil aviation." The standards which become accepted by the ICAO member nations "cover all technical and operational aspects of international civil aviation, such as safety, personnel licensing, operation of aircraft, aerodromes, air traffic services, accident investigation and the environment." A simple example of ICAO responsibilities is the unique worldwide names used to identify Navaids, Airways, airports and countries.
Knot: a unit of speed used in navigation equal to one nautical mile per hour.
MSL (mean sea level): the average height of the surface of the sea for all stages of tide; used as a reference for altitude (also see AGL).
Nautical mile: a unit of distance used in aviation and maritime navigation, equal to approximately one minute of arc of latitude on a great circle. It is defined to be 1852 metres exactly, or approximately 1.15 statute mile.
METAR: a format for reporting weather information.
Zero-fuel weight: the weight of the aircraft with crew, cargo, and passengers, but without fuel.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C = D*O*\\sec\\theta / 2A,"
}
] | https://en.wikipedia.org/wiki?curid=1289256 |
1289716 | Word metric | In group theory, a word metric on a discrete group formula_0 is a way to measure distance between any two elements of formula_0. As the name suggests, the word metric is a metric on formula_0, assigning to any two elements formula_1, formula_2 of formula_0 a distance formula_3 that measures how efficiently their difference formula_4 can be expressed as a word whose letters come from a generating set for the group. The word metric on "G" is very closely related to the Cayley graph of "G": the word metric measures the length of the shortest path in the Cayley graph between two elements of "G".
A generating set for formula_5 must first be chosen before a word metric on formula_5 is specified. Different choices of a generating set will typically yield different word metrics. While this seems at first to be a weakness in the concept of the word metric, it can be exploited to prove theorems about geometric properties of groups, as is done in geometric group theory.
Examples.
The group of integers formula_6.
The group of integers formula_6 is generated by the set {-1,+1}. The integer -3 can be expressed as -1-1-1+1-1, a word of length 5 in these generators. But the word that expresses -3 most efficiently is -1-1-1, a word of length 3. The distance between 0 and -3 in the word metric is therefore equal to 3. More generally, the distance between two integers "m" and "n" in the word metric is equal to |"m"-"n"|, because the shortest word representing the difference "m"-"n" has length equal to |"m"-"n"|.
The group formula_7.
For a more illustrative example, the elements of the group formula_8 can be thought of as vectors in the Cartesian plane with integer coefficients. The group formula_8 is generated by the standard unit vectors formula_9, formula_10 and their inverses formula_11, formula_12. The Cayley graph of formula_8 is the so-called taxicab geometry. It can be pictured in the plane as an infinite square grid of city streets, where each horizontal and vertical line with integer coordinates is a street, and each point of formula_8 lies at the intersection of a horizontal and a vertical street. Each horizontal segment between two vertices represents the generating vector formula_13 or formula_14, depending on whether the segment is travelled in the forward or backward direction, and each vertical segment represents formula_15 or formula_16. A car starting from formula_17 and travelling along the streets to formula_18 can make the trip by many different routes. But no matter what route is taken, the car must travel at least |1 - (-2)| = 3 horizontal blocks and at least |2 - 4| = 2 vertical blocks, for a total trip distance of at least 3 + 2 = 5. If the car goes out of its way the trip may be longer, but the minimal distance travelled by the car, equal in value to the word metric between formula_17 and formula_18 is therefore equal to 5.
In general, given two elements formula_19 and formula_20 of formula_8, the distance between formula_21 and formula_22 in the word metric is equal to formula_23.
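In code, this word metric is simply the taxicab distance; the following trivial Python illustration (function name chosen here) reproduces the driving example above.

```python
def word_distance_z2(v, w):
    # Word metric on Z + Z with generating set {±e1, ±e2}.
    return abs(v[0] - w[0]) + abs(v[1] - w[1])

print(word_distance_z2((1, 2), (-2, 4)))   # 5, as in the driving example above
```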
Definition.
Let "G" be a group, let "S" be a generating set for "G", and suppose that "S" is closed under the inverse operation on "G". A word over the set "S" is just a finite sequence formula_24 whose entries formula_25 are elements of "S". The integer "L" is called the length of the word formula_26. Using the group operation in "G", the entries of a word formula_24 can be multiplied in order, remembering that the entries are elements of "G". The result of this multiplication is an element formula_27 in the group "G", which is called the evaluation of the word "w". As a special case, the empty word formula_28 has length zero, and its evaluation is the identity element of "G".
Given an element "g" of "G", its word norm |"g"| with respect to the generating set "S" is defined to be the shortest length of a word formula_26 over "S" whose evaluation formula_27 is equal to "g". Given two elements "g","h" in "G", the distance d(g,h) in the word metric with respect to "S" is defined to be formula_29. Equivalently, d("g","h") is the shortest length of a word "w" over "S" such that formula_30.
The word metric on "G" satisfies the axioms for a metric, and it is not hard to prove this. The proof of the symmetry axiom d("g","h") = d("h","g") for a metric uses the assumption that the generating set "S" is closed under inverse.
Variations.
The word metric has an equivalent definition formulated in more geometric terms using the Cayley graph of "G" with respect to the generating set "S". When each edge of the Cayley graph is assigned a metric of length 1, the distance between two group elements "g","h" in "G" is equal to the shortest length of a path in the Cayley graph from the vertex "g" to the vertex "h".
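The shortest-path description suggests a direct (if naive) way to compute word norms by breadth-first search in the Cayley graph. The following Python sketch assumes the group is given abstractly by a multiplication function and a symmetric generating set, and that the relevant ball of the Cayley graph is small enough to explore; the function names are chosen here for illustration.

```python
from collections import deque

def word_norm(g, generators, multiply, identity):
    """Length of a shortest word over `generators` evaluating to g (BFS in the Cayley graph)."""
    if g == identity:
        return 0
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        h = queue.popleft()
        for s in generators:
            n = multiply(h, s)
            if n not in dist:
                if n == g:
                    return dist[h] + 1
                dist[n] = dist[h] + 1
                queue.append(n)
    raise ValueError("g is not generated by the given generating set")

# The integers with generating set {+1, -1}: the word norm of -3 is 3.
print(word_norm(-3, [1, -1], lambda a, b: a + b, 0))   # 3
```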
The word metric on "G" can also be defined without assuming that the generating set "S" is closed under inverse. To do this, first symmetrize "S", replacing it by a larger generating set consisting of each formula_31 in "S" as well as its inverse formula_32. Then define the word metric with respect to "S" to be the word metric with respect to the symmetrization of "S".
Example in a free group.
Suppose that "F" is the free group on the two element set formula_33. A word "w" in the symmetric generating set formula_34 is said to be reduced if the letters formula_35 do not occur next to each other in "w", nor do the letters formula_36. Every element formula_37 is represented by a unique reduced word, and this reduced word is the shortest word representing "g". For example, since the word formula_38 is reduced and has length 2, the word norm of formula_26 equals 2, so the distance in the word metric between formula_39 and formula_40 equals 2. This can be visualized in terms of the Cayley graph, where the shortest path between "b" and "a" has length 2.
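A short Python sketch of this computation is given below, writing the inverses of a and b as the capital letters A and B (an encoding chosen here for convenience).

```python
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def reduce_word(word):
    # Freely reduce: repeatedly cancel adjacent letters that are inverse to each other.
    stack = []
    for letter in word:
        if stack and stack[-1] == INV[letter]:
            stack.pop()
        else:
            stack.append(letter)
    return ''.join(stack)

def free_group_distance(g, h):
    # Word metric on the free group: length of the reduced form of g^-1 h.
    g_inv = ''.join(INV[x] for x in reversed(g))
    return len(reduce_word(g_inv + h))

print(free_group_distance('b', 'a'))   # 2: the reduced form of b^-1 a is 'Ba'
```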
Theorems.
Isometry of the left action.
The group "G" acts on itself by left multiplication: the action of each formula_41 takes each formula_42 to formula_43. This action is an isometry of the word metric. The proof is simple: the distance between formula_43 and formula_44 equals formula_45, which equals the distance between formula_46 and formula_47.
Bilipschitz invariants of a group.
In general, the word metric on a group "G" is not unique, because different symmetric generating sets give different word metrics. However, finitely generated word metrics are unique up to bilipschitz equivalence: if formula_48, formula_49 are two symmetric, finite generating sets for "G" with corresponding word metrics formula_50, formula_51, then there is a constant formula_52 such that for any formula_53,
formula_54.
This constant "K" is just the maximum of the formula_50 word norms of elements of formula_49 and the formula_51 word norms of elements of formula_48. This proof is also easy: any word over "S" can be converted by substitution into a word over "T", expanding the length of the word by a factor of at most "K", and similarly for converting words over "T" into words over "S".
The bilipschitz equivalence of word metrics implies in turn that the growth rate of a finitely generated group is a well-defined isomorphism invariant of the group, independent of the choice of a finite generating set. This implies in turn that various properties of growth, such as polynomial growth, the degree of polynomial growth, and exponential growth, are isomorphism invariants of groups. This topic is discussed further in the article on the growth rate of a group.
Quasi-isometry invariants of a group.
In geometric group theory, groups are studied by their actions on metric spaces. A principle that generalizes the bilipschitz invariance of word metrics says that any finitely generated word metric on "G" is quasi-isometric to any proper, geodesic metric space on which "G" acts, properly discontinuously and cocompactly. Metric spaces on which "G" acts in this manner are called model spaces for "G".
It follows in turn that any quasi-isometrically invariant property satisfied by the word metric of "G" or by any model space of "G" is an isomorphism invariant of "G". Modern geometric group theory is in large part the study of quasi-isometry invariants. | [
{
"math_id": 0,
"text": " G "
},
{
"math_id": 1,
"text": " g "
},
{
"math_id": 2,
"text": " h "
},
{
"math_id": 3,
"text": " d(g,h) "
},
{
"math_id": 4,
"text": " g^{-1} h "
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "\\mathbb{Z}"
},
{
"math_id": 7,
"text": "\\mathbb{Z} \\oplus \\mathbb{Z}"
},
{
"math_id": 8,
"text": "\\mathbb{Z}\\oplus\\mathbb{Z}"
},
{
"math_id": 9,
"text": "e_1 = \\langle1,0\\rangle"
},
{
"math_id": 10,
"text": "e_2 = \\langle0,1\\rangle"
},
{
"math_id": 11,
"text": " -e_1=\\langle-1,0\\rangle "
},
{
"math_id": 12,
"text": " -e_2=\\langle0,-1\\rangle "
},
{
"math_id": 13,
"text": " e_1 "
},
{
"math_id": 14,
"text": " -e_1"
},
{
"math_id": 15,
"text": " e_2 "
},
{
"math_id": 16,
"text": " -e_2"
},
{
"math_id": 17,
"text": "\\langle1,2\\rangle"
},
{
"math_id": 18,
"text": "\\langle-2,4\\rangle"
},
{
"math_id": 19,
"text": " v = \\langle i,j\\rangle "
},
{
"math_id": 20,
"text": " w = \\langle k,l\\rangle "
},
{
"math_id": 21,
"text": " v "
},
{
"math_id": 22,
"text": " w "
},
{
"math_id": 23,
"text": " |i-k| + |j-l| "
},
{
"math_id": 24,
"text": " w = s_1 \\ldots s_L "
},
{
"math_id": 25,
"text": "s_1, \\ldots, s_L"
},
{
"math_id": 26,
"text": "w"
},
{
"math_id": 27,
"text": "\\bar w"
},
{
"math_id": 28,
"text": "w = \\emptyset"
},
{
"math_id": 29,
"text": "|g^{-1} h|"
},
{
"math_id": 30,
"text": "g \\bar w = h"
},
{
"math_id": 31,
"text": " s "
},
{
"math_id": 32,
"text": " s^{-1} "
},
{
"math_id": 33,
"text": "\\{a,b\\}"
},
{
"math_id": 34,
"text": "\\{a,b,a^{-1},b^{-1}\\}"
},
{
"math_id": 35,
"text": " a,a^{-1}"
},
{
"math_id": 36,
"text": "b,b^{-1}"
},
{
"math_id": 37,
"text": "g \\in F"
},
{
"math_id": 38,
"text": "w = b^{-1} a"
},
{
"math_id": 39,
"text": "b"
},
{
"math_id": 40,
"text": "a"
},
{
"math_id": 41,
"text": "k \\in G"
},
{
"math_id": 42,
"text": "g \\in G"
},
{
"math_id": 43,
"text": "kg"
},
{
"math_id": 44,
"text": "kh"
},
{
"math_id": 45,
"text": "|(kg)^{-1} (kh)| = |g^{-1} h|"
},
{
"math_id": 46,
"text": "g"
},
{
"math_id": 47,
"text": "h"
},
{
"math_id": 48,
"text": "S"
},
{
"math_id": 49,
"text": "T"
},
{
"math_id": 50,
"text": "d_S"
},
{
"math_id": 51,
"text": "d_T"
},
{
"math_id": 52,
"text": "K \\ge 1"
},
{
"math_id": 53,
"text": "g,h \\in G"
},
{
"math_id": 54,
"text": " \\frac{1}{K} \\, d_T(g,h) \\le d_S(g,h) \\le K \\, d_T(g,h) "
}
] | https://en.wikipedia.org/wiki?curid=1289716 |
1289909 | Washburn's equation | Equation describing the penetration length of a liquid into a capillary tube with time
In physics, Washburn's equation describes capillary flow in a bundle of parallel cylindrical tubes; it is extended with some issues also to imbibition into porous materials. The equation is named after Edward Wight Washburn; also known as Lucas–Washburn equation, considering that Richard Lucas wrote a similar paper three years earlier, or the Bell-Cameron-Lucas-Washburn equation, considering J.M. Bell and F.K. Cameron's discovery of the form of the equation in 1906.
Derivation.
In its most general form the Lucas Washburn equation describes the penetration length (formula_0) of a liquid into a capillary pore or tube with time formula_1 as formula_2, where formula_3 is a simplified diffusion coefficient. This relationship, which holds true for a variety of situations, captures the essence of Lucas and Washburn's equation and shows that capillary penetration and fluid transport through porous structures exhibit diffusive behaviour akin to that which occurs in numerous physical and chemical systems. The diffusion coefficient formula_3 is governed by the geometry of the capillary as well as the properties of the penetrating fluid.
A liquid having a dynamic viscosity formula_4 and surface tension formula_5 will penetrate a distance formula_0 into the capillary whose pore radius is formula_6 following the relationship:
formula_7
Where formula_8 is the contact angle between the penetrating liquid and the solid (tube wall).
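A minimal Python sketch of this relationship is shown below; the numerical values in the comment are rough, illustrative figures for a water-like liquid and are not taken from the article.

```python
import math

def washburn_length(gamma, r, t, eta, contact_angle=0.0):
    """Penetration length L = sqrt(gamma * r * t * cos(phi) / (2 * eta)), SI units."""
    return math.sqrt(gamma * r * t * math.cos(contact_angle) / (2.0 * eta))

# e.g. gamma ~ 0.072 N/m, eta ~ 1.0e-3 Pa*s, pore radius 0.1 mm, after 1 s:
print(washburn_length(0.072, 1e-4, 1.0, 1e-3))   # about 0.06 m (gravity neglected)
```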
Washburn's equation is also used commonly to determine the contact angle of a liquid to a powder using a force tensiometer.
In the case of porous materials, many issues have been raised both about the physical meaning of the calculated pore radius formula_6 and the real possibility to use this equation for the calculation of the contact angle of the solid.
The equation is derived for capillary flow in a cylindrical tube in the absence of a gravitational field, but is sufficiently accurate in many cases when the capillary force is still significantly greater than the gravitational force.
In his paper from 1921 Washburn applies Poiseuille's Law for fluid motion in a circular tube. Inserting the expression for the differential volume in terms of the length formula_9 of fluid in the tube formula_10, one obtains
formula_11
where formula_12 is the sum over the participating pressures, such as the atmospheric pressure formula_13, the hydrostatic pressure formula_14 and the equivalent pressure due to capillary forces formula_15. formula_4 is the viscosity of the liquid, and formula_16 is the coefficient of slip, which is assumed to be 0 for wetting materials. formula_6 is the radius of the capillary. The pressures in turn can be written as
formula_17
formula_18
where formula_19 is the density of the liquid and formula_5 its surface tension. formula_20 is the angle of the tube with respect to the horizontal axis. formula_8 is the contact angle of the liquid on the capillary material. Substituting these expressions leads to the first-order differential equation for
the distance the fluid penetrates into the tube formula_9:
formula_21
Washburn's constant.
The Washburn constant may be included in Washburn's equation.
It is calculated as follows:
formula_22
Fluid inertia.
In the derivation of Washburn's equation, the inertia of the liquid is ignored as negligible. This is apparent in the dependence of length formula_0 on the square root of time, formula_23, which gives an arbitrarily large velocity "dL/dt" for small values of "t". An improved version of Washburn's equation, called the Bosanquet equation, takes the inertia of the liquid into account.
Applications.
Inkjet printing.
The penetration of a liquid into the substrate flowing under its own capillary pressure can be calculated using a simplified version of Washburn's equation:
formula_24
where the surface tension-to-viscosity ratio formula_25 represents the speed of ink penetration into the substrate. In reality, the evaporation of solvents limits the extent of liquid penetration in a porous layer and thus, for the meaningful modelling of inkjet printing physics it is appropriate to utilise models which account for evaporation effects in limited capillary penetration.
Food.
According to physicist and Ig Nobel prize winner Len Fisher, the Washburn equation can be extremely accurate for more complex materials including biscuits. Following an informal celebration called national biscuit dunking day, some newspaper articles quoted the equation as "Fisher's equation".
Novel capillary pump.
The flow behaviour in traditional capillaries follows Washburn's equation. Recently, novel capillary pumps with a constant pumping flow rate independent of the liquid viscosity were developed; these have a significant advantage over the traditional capillary pump, whose flow behaviour is Washburn behaviour, namely a flow rate that is not constant. These new concepts of capillary pump are of great potential for improving the performance of lateral flow tests.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": " L=(Dt)^{\\frac{1}{2}}"
},
{
"math_id": 3,
"text": "D"
},
{
"math_id": 4,
"text": "\\eta"
},
{
"math_id": 5,
"text": "\\gamma"
},
{
"math_id": 6,
"text": "r"
},
{
"math_id": 7,
"text": "\nL=\\sqrt{\\frac{\\gamma rt\\cos(\\phi)}{2\\eta}}"
},
{
"math_id": 8,
"text": "\\phi"
},
{
"math_id": 9,
"text": "l"
},
{
"math_id": 10,
"text": "dV=\\pi r^2 dl"
},
{
"math_id": 11,
"text": "\\frac{\\delta l}{\\delta t}=\\frac{\\sum P}{8 r^2 \\eta l}(r^4 +4 \\epsilon r^3)"
},
{
"math_id": 12,
"text": "\\sum P"
},
{
"math_id": 13,
"text": "P_A"
},
{
"math_id": 14,
"text": "P_h"
},
{
"math_id": 15,
"text": "P_c"
},
{
"math_id": 16,
"text": "\\epsilon"
},
{
"math_id": 17,
"text": "P_h=h g \\rho - l g \\rho\\sin\\psi"
},
{
"math_id": 18,
"text": "P_c=\\frac{2\\gamma}{r}\\cos\\phi"
},
{
"math_id": 19,
"text": "\\rho"
},
{
"math_id": 20,
"text": "\\psi"
},
{
"math_id": 21,
"text": "\\frac{\\delta l}{\\delta t}=\\frac{[P_A+g \\rho (h-l\\sin\\psi)+\\frac{2\\gamma}{r}\\cos\\phi](r^4 +4 \\epsilon r^3)}{8 r^2 \\eta l}"
},
{
"math_id": 22,
"text": "\n\\frac{10^4 \\left[ \\mathrm{\\frac{\\mu m}{cm}} \\right] \\left[ \\mathrm{\\frac{N}{m^2}} \\right]}{68947.6 \\left[ \\mathrm{\\frac{dynes}{cm^2}} \\right]} = 0.1450(38)"
},
{
"math_id": 23,
"text": "L \\propto \\sqrt{t}"
},
{
"math_id": 24,
"text": "\nl = \\left[ \\frac{r\\cos\\theta}{2} \\right]^{\\frac{1}{2}} \\left[ \\frac{\\gamma}{\\eta} \\right]^{\\frac{1}{2}} t^{\\frac{1}{2}}\n"
},
{
"math_id": 25,
"text": "\\left[ \\tfrac{\\gamma}{\\eta} \\right]^{\\frac{1}{2}}"
}
] | https://en.wikipedia.org/wiki?curid=1289909 |
1289913 | Next-bit test | Testing method for testing the randomness of pseudo-random number generators
In cryptography and the theory of computation, the next-bit test is a test against pseudo-random number generators. We say that a sequence of bits passes the next-bit test if, at any position formula_0 in the sequence, an attacker who knows the first formula_0 bits (but not the seed) cannot predict the formula_1st bit with reasonable computational power.
Precise statement(s).
Let formula_2 be a polynomial, and formula_3 be a collection of sets such that formula_4 contains formula_5-bit long sequences. Moreover, let formula_6 be the probability distribution of the strings in formula_4.
We now define the next-bit test in two different ways.
Boolean circuit formulation.
A predicting collection formula_7 is a collection of boolean circuits, such that each circuit formula_8 has less than formula_9 gates and exactly formula_0 inputs. Let formula_10 be the probability that, on input the formula_0 first bits of formula_11, a string randomly selected in formula_4 with probability formula_12, the circuit correctly predicts formula_13, i.e. :
formula_14
Now, we say that formula_15 passes the next-bit test if for any predicting collection formula_16, any polynomial formula_17 :
formula_18
Probabilistic Turing machines.
We can also define the next-bit test in terms of probabilistic Turing machines, although this definition is somewhat stronger (see Adleman's theorem). Let formula_19 be a probabilistic Turing machine, working in polynomial time. Let formula_20 be the probability that formula_19 predicts the formula_1st bit correctly, i.e.
formula_21
We say that collection formula_3 passes the next-bit test if for all polynomial formula_17, for all but finitely many formula_22, for all formula_23:
formula_24
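As an informal illustration of the idea (not of the asymptotic definitions above), the following Python sketch compares a deliberately weak bit generator, in which every bit is the XOR of the two preceding ones, with bits from a cryptographic source. The simple predictor achieves success probability far above 1/2 on the former and about 1/2 on the latter; the generator and predictor are toy constructions invented for this example.

```python
import secrets

def weak_stream(n, seed=(1, 0)):
    # Toy generator: every new bit is the XOR of the two preceding bits.
    bits = list(seed)
    while len(bits) < n:
        bits.append(bits[-1] ^ bits[-2])
    return bits[:n]

def success_rate(bits, predictor):
    # Fraction of positions i at which the predictor guesses bit i from the first i bits.
    hits = sum(predictor(bits[:i]) == bits[i] for i in range(2, len(bits)))
    return hits / (len(bits) - 2)

xor_predictor = lambda prefix: prefix[-1] ^ prefix[-2]

print(success_rate(weak_stream(1000), xor_predictor))                           # 1.0 -> fails
print(success_rate([secrets.randbits(1) for _ in range(1000)], xor_predictor))  # close to 0.5
```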
Completeness for Yao's test.
The next-bit test is a particular case of Yao's test for random sequences, and passing it is therefore a necessary condition for passing Yao's test. However, Yao also showed that it is a sufficient condition.
We prove it now in the case of the probabilistic Turing machine, since Adleman has already done the work of replacing randomization with non-uniformity in his theorem. The case of Boolean circuits cannot be derived from this case (since it involves deciding potentially undecidable problems), but the proof of Adleman's theorem can be easily adapted to the case of non-uniform Boolean circuit families.
Let formula_19 be a distinguisher for the probabilistic version of Yao's test, i.e. a probabilistic Turing machine, running in polynomial time, such that there is a polynomial formula_17 such that for infinitely many formula_22
formula_25
Let formula_26. We have: formula_27 and formula_28.
Then, we notice that formula_29. Therefore, at least one of the formula_30 should be no smaller than formula_31.
Next, we consider probability distributions formula_32 and formula_33 on formula_34. Distribution formula_32 is the probability distribution of choosing the formula_0 first bits in formula_4 with probability given by formula_6, and the formula_35 remaining bits uniformly at random. We have thus:
formula_36
formula_37
We thus have formula_38 (a simple calculation shows this), and hence the distributions formula_39 and formula_40 can be distinguished by formula_19. Without loss of generality, we can assume that formula_41, with formula_42 a polynomial.
This gives us a possible construction of a Turing machine predicting the next bit: upon receiving the formula_0 first bits of a sequence, formula_43 pads this input with a guess of bit formula_44 and then formula_45 random bits, chosen with uniform probability. Then it runs formula_19, and outputs formula_44 if the result is formula_46, and formula_47 otherwise.
{
"math_id": 0,
"text": "i"
},
{
"math_id": 1,
"text": "(i+1)"
},
{
"math_id": 2,
"text": "P"
},
{
"math_id": 3,
"text": "S=\\{S_k\\}"
},
{
"math_id": 4,
"text": "S_k"
},
{
"math_id": 5,
"text": "P(k)"
},
{
"math_id": 6,
"text": "\\mu_k"
},
{
"math_id": 7,
"text": "C=\\{C_k^i\\}"
},
{
"math_id": 8,
"text": "C_k^i"
},
{
"math_id": 9,
"text": "P_C(k)"
},
{
"math_id": 10,
"text": "p_{k,i}^C"
},
{
"math_id": 11,
"text": "s"
},
{
"math_id": 12,
"text": "\\mu_k(s)"
},
{
"math_id": 13,
"text": "s_{i+1}"
},
{
"math_id": 14,
"text": "\np_{k,i}^C={\\mathcal P} \\left[ C_k(s_1\\ldots s_i)=s_{i+1} \\right | s\\in S_k\\text{ with probability }\\mu_k(s)]\n"
},
{
"math_id": 15,
"text": "\\{S_k\\}_k"
},
{
"math_id": 16,
"text": "C"
},
{
"math_id": 17,
"text": "Q"
},
{
"math_id": 18,
"text": "p_{k,i}^C<\\frac{1}{2}+\\frac{1}{Q(k)}"
},
{
"math_id": 19,
"text": "\\mathcal M"
},
{
"math_id": 20,
"text": "p_{k,i}^{\\mathcal M}"
},
{
"math_id": 21,
"text": "p_{k,i}^{\\mathcal M}={\\mathcal P}[M(s_1\\ldots s_i)=s_{i+1} | s\\in S_k\\text{ with probability }\\mu_k(s)]"
},
{
"math_id": 22,
"text": "k"
},
{
"math_id": 23,
"text": "0<i<k"
},
{
"math_id": 24,
"text": "\np_{k,i}^{\\mathcal M}<\\frac{1}{2}+\\frac{1}{Q(k)}\n"
},
{
"math_id": 25,
"text": "|p_{k,S}^{\\mathcal M}-p_{k,U}^{\\mathcal M}|\\geq\\frac{1}{Q(k)}"
},
{
"math_id": 26,
"text": "R_{k,i}=\\{s_1\\ldots s_iu_{i+1}\\ldots u_{P(k)}| s\\in S_k, u\\in\\{0,1\\}^{P(k)}\\}"
},
{
"math_id": 27,
"text": "R_{k,0}=\\{0,1\\}^{P(k)}"
},
{
"math_id": 28,
"text": "R_{k,P(k)}=S_k"
},
{
"math_id": 29,
"text": "\\sum_{i=0}^{P(k)}|p_{k,R_{k,i+1}}^{\\mathcal M}-p_{k,R_{k,i}}^{\\mathcal M}|\\geq |p^{\\mathcal M}_{k,R_{k,P(k)}}-p^{\\mathcal M}_{k,R_{k,0}}|=|p_{k,S}^{\\mathcal M}-p_{k,U}^{\\mathcal M}|\\geq\\frac{1}{Q(k)}"
},
{
"math_id": 30,
"text": "|p_{k,R_{k,i+1}}^{\\mathcal M}-p_{k,R_{k,i}}^{\\mathcal M}|"
},
{
"math_id": 31,
"text": "\\frac{1}{Q(k)P(k)}"
},
{
"math_id": 32,
"text": "\\mu_{k,i}"
},
{
"math_id": 33,
"text": "\\overline{\\mu_{k,i}}"
},
{
"math_id": 34,
"text": "R_{k,i}"
},
{
"math_id": 35,
"text": "P(k)-i"
},
{
"math_id": 36,
"text": "\\mu_{k,i}(w_1\\ldots w_{P(k)})=\\left(\\sum_{s\\in S_k, s_1\\ldots s_i=w_1\\ldots w_i}\\mu_k(s)\\right)\\left(\\frac{1}{2}\\right)^{P(k)-i}"
},
{
"math_id": 37,
"text": "\\overline{\\mu_{k,i}}(w_1\\ldots w_{P(k)})=\\left(\\sum_{s\\in S_k, s_1\\ldots s_{i-1}(1-s_i)=w_1\\ldots w_i}\\mu_k(s)\\right)\\left(\\frac{1}{2}\\right)^{P(k)-i}"
},
{
"math_id": 38,
"text": "\\mu_{k,i}=\\frac{1}{2}(\\mu_{k,i+1}+\\overline{\\mu_{k,i+1}})"
},
{
"math_id": 39,
"text": "\\mu_{k,i+1}"
},
{
"math_id": 40,
"text": "\\overline{\\mu_{k,i+1}}"
},
{
"math_id": 41,
"text": "p^{\\mathcal M}_{\\mu_{k,i+1}}-p^{\\mathcal M}_{\\overline{\\mu_{k,i+1}}}\\geq\\frac{1}{2}+\\frac{1}{R(k)}"
},
{
"math_id": 42,
"text": "R"
},
{
"math_id": 43,
"text": "\\mathcal N"
},
{
"math_id": 44,
"text": "l"
},
{
"math_id": 45,
"text": "P(k)-i-1"
},
{
"math_id": 46,
"text": "1"
},
{
"math_id": 47,
"text": "1-l"
}
] | https://en.wikipedia.org/wiki?curid=1289913 |
12901731 | Quantum nondemolition measurement | Quantum nondemolition (QND) measurement is a special type of measurement of a quantum system in which the uncertainty of the measured observable does not increase from its measured value during the subsequent normal evolution of the system. This necessarily requires that the measurement process preserves the physical integrity of the measured system, and moreover places requirements on the relationship between the measured observable and the self-Hamiltonian of the system. In a sense, QND measurements are the "most classical" and least disturbing type of measurement in quantum mechanics.
Most devices capable of detecting a single particle and measuring its position strongly modify the particle's state in the measurement process, e.g. photons are destroyed when striking a screen. Less dramatically, the measurement may simply perturb the particle in an unpredictable way; a second measurement, no matter how quickly after the first, is then not guaranteed to find the particle in the same location. Even for ideal, "first-kind" projective measurements in which the particle is in the measured eigenstate immediately after the measurement, the subsequent free evolution of the particle will cause uncertainty in position to quickly grow.
In contrast, a "momentum" (rather than position) measurement of a free particle can be QND because the momentum distribution is preserved by the particle's self-Hamiltonian "p"2/2"m". Because the Hamiltonian of the free particle commutes with the momentum operator, a momentum eigenstate is also an energy eigenstate, so once momentum is measured its uncertainty does not increase due to free evolution.
Note that the term "nondemolition" does not imply that the wave function fails to collapse.
QND measurements are extremely difficult to carry out experimentally. Much of the investigation into QND measurements was motivated by the desire to avoid the standard quantum limit in the experimental detection of gravitational waves. The general theory of QND measurements was laid out by Braginsky, Vorontsov, and Thorne following much theoretical work by Braginsky, Caves, Drever, Hollenhorst, Khalili, Sandberg, Thorne, Unruh, Vorontsov, and Zimmermann.
Technical definition.
Let formula_0 be an observable for some system formula_1 with self-Hamiltonian formula_2. The system formula_1 is measured by an apparatus formula_3 which is coupled to formula_1 through an interaction Hamiltonian formula_4 for only brief moments. Otherwise, formula_5 evolves freely according to formula_2. A precise measurement of formula_0 is one which brings the global state of formula_1 and formula_3 into the approximate form
formula_6
where formula_7 are the eigenvectors of formula_0 corresponding to the possible outcomes of the measurement, and formula_8 are the corresponding states of the apparatus which record them.
Allow time-dependence to denote the Heisenberg picture observables:
formula_9
A sequence of measurements of formula_0 are said to be QND measurements if and only if
formula_10
for any formula_11 and formula_12 when measurements are made. If this property holds for "any" choice of formula_11 and formula_12, then formula_0 is said to be a "continuous QND variable". If this only holds for certain discrete times, then formula_0 is said to be a "stroboscopic QND variable".
For example, in the case of a free particle, the energy and momentum are conserved and are indeed continuous QND observables, but the position is not. On the other hand, for the harmonic oscillator the position and momentum satisfy commutation relations that are periodic in time, which implies that "x" and "p" are not continuous QND observables. However, if the measurements are made at times separated by an integral number of half-periods (τ = "k"π/"ω"), then the commutators vanish, which means that "x" and "p" are stroboscopic QND observables.
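A small numerical check of this stroboscopic property can be made with a truncated harmonic oscillator. The sketch below is only an illustration under stated assumptions (arbitrary units, truncation dimension, and variable names): it builds the Heisenberg-picture position operator "x"("t") = "x" cos "ωt" + ("p"/"mω") sin "ωt" from truncated ladder operators and evaluates the commutator of "x" at two times on the low-lying levels, which is expected to (approximately) vanish when the time separation is an integer multiple of half a period.

```python
import numpy as np

hbar, m, omega, N = 1.0, 1.0, 1.0, 40            # units and truncation (arbitrary)

# Truncated annihilation operator and the usual x, p combinations.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = np.sqrt(hbar / (2 * m * omega)) * (a + a.T)
p = 1j * np.sqrt(hbar * m * omega / 2) * (a.T - a)

def x_heisenberg(t):
    """Heisenberg-picture position operator of the harmonic oscillator."""
    return x * np.cos(omega * t) + (p / (m * omega)) * np.sin(omega * t)

def commutator_norm(t1, t2, levels=10):
    """Norm of [x(t1), x(t2)] restricted to the lowest `levels` levels,
    to stay away from truncation artifacts at the top of the basis."""
    c = x_heisenberg(t1) @ x_heisenberg(t2) - x_heisenberg(t2) @ x_heisenberg(t1)
    return np.linalg.norm(c[:levels, :levels])

half_period = np.pi / omega
print(commutator_norm(0.0, 0.3))           # generic times: clearly nonzero
print(commutator_norm(0.0, half_period))   # ~0: stroboscopic times tau = k*pi/omega
```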
Discussion.
An observable formula_0 which is conserved under free evolution,
formula_13
is automatically a QND variable. A sequence of ideal projective measurements of formula_0 will automatically be QND measurements.
To implement QND measurements on atomic systems, the measurement strength (rate) competes with atomic decay caused by measurement backaction. The optical depth or the cooperativity is usually used to characterize the relative ratio between the measurement strength and the optical decay. By using nanophotonic waveguides as a quantum interface, it is possible to enhance the atom-light coupling with a relatively weak field, and hence to achieve a precise quantum measurement with little disruption to the quantum system.
Criticism.
It has been argued that the usage of the term "QND" does not add anything to the usual notion of a strong quantum measurement and can moreover be confusing because of the two different meanings of the word "demolition" in a quantum system (losing the quantum state vs. losing the particle).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "\\mathcal{S}"
},
{
"math_id": 2,
"text": "H_{\\mathcal{S}}"
},
{
"math_id": 3,
"text": "\\mathcal{R}"
},
{
"math_id": 4,
"text": "H_{\\mathcal{RS}}"
},
{
"math_id": 5,
"text": "{\\mathcal{S}}"
},
{
"math_id": 6,
"text": "\\vert \\psi \\rangle \\approx \\sum_i \\vert A_i \\rangle_\\mathcal{S} \\vert R_i \\rangle_\\mathcal{R}"
},
{
"math_id": 7,
"text": "\\vert A_i \\rangle_\\mathcal{S}"
},
{
"math_id": 8,
"text": "\\vert R_i \\rangle_\\mathcal{R}"
},
{
"math_id": 9,
"text": "A(t) = e^{-i t H_\\mathcal{S}} A e^{+i t H_\\mathcal{S}}."
},
{
"math_id": 10,
"text": "[A(t_n),A(t_m)] = 0"
},
{
"math_id": 11,
"text": "t_n"
},
{
"math_id": 12,
"text": "t_m"
},
{
"math_id": 13,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}t} A(t) = \\frac{i}{\\hbar} [H_\\mathcal{S} , A ] = 0,"
}
] | https://en.wikipedia.org/wiki?curid=12901731 |
1290319 | Money multiplier | Ratio of money supply to central bank money
In monetary economics, the money multiplier is the ratio of the money supply to the monetary base (i.e. central bank money). If the money multiplier is stable, it implies that the central bank can control the money supply by determining the monetary base.
In some simplified expositions, the monetary multiplier is presented as simply the reciprocal of the reserve ratio, if any, required by the central bank. More generally, the multiplier will depend on the preferences of households, the legal regulation and the business policies of commercial banks - factors which the central bank can influence, but not control completely.
Because the money multiplier theory offers a potential explanation of the ways in which the central bank can control the total money supply, it is relevant when considering monetary policy strategies that target the money supply. Historically, some central banks have tried to conduct monetary policy by targeting the money supply and its growth rate, particularly in the 1970s and 1980s. The results were not considered satisfactory, however, and starting in the early 1990s, most central banks abandoned trying to steer money growth in favour of targeting inflation directly, using changes in interest rates as the main instrument to influence economic activity. As controlling the size of the money supply has ceased being an important goal for central bank policy generally, the money multiplier has correspondingly become less relevant as a tool to understand current monetary policy. It is still often used in introductory economic textbooks, however, as a simple shorthand description of the connections between central bank policies and the money supply.
Definition.
The money multiplier is normally presented in the context of some simple accounting identities: Usually, the money supply ("M") is defined as consisting of two components: (physical) currency ("C") and deposit accounts ("D") held by the general public. By definition, therefore:
formula_0
Additionally, the monetary base ("B") (also known as high-powered money) is normally defined as the sum of currency held by the general public ("C") and the reserves of the banking sector (held either as currency in the vaults of the commercial banks or as deposits at the central bank) ("R"):
formula_1
Rearranging these two definitions result in a third identity:
formula_2
This relation describes the money supply in terms of the level of base money and two ratios: R/D is the ratio of commercial banks' reserves to deposit accounts, and C/D is the general public's ratio of currency to deposits. As the relation is an identity, it holds true by definition, so that a change in the money supply can always be expressed in terms of these three variables alone. This may be advantageous because it is a simple way of summarising money supply changes, but the use of the identity does not in itself provide a behavioural theory of what determines the money supply. If, however, one additionally assumes that the two ratios C/D and R/D are exogenously determined constants, the equation implies that the central bank can control the money supply by controlling the monetary base via open-market operations: In this case, when the monetary base increases by, say, $1, the money supply will increase by
$(1+C/D)/(R/D + C/D). This is the central content of the money multiplier theory, and
formula_3
is the money multiplier, a multiplier being a factor that measures how much an endogenous variable (in this case, the money supply) changes in response to a change in some exogenous variable (in this case, the money base).
In some textbook applications, the relationship is simplified by assuming that cash does not exist so that the public holds money only in the form of bank deposits. In that case, the currency-deposit ratio C/D equals zero, and the money multiplier simplifies to
formula_4
Empirically, the money multiplier can be found as the ratio of some broad money aggregate like M2 over M0 (base money).
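As a purely numerical illustration of the identity formula_3 and of the simplified case formula_4, the sketch below computes the multiplier directly from the two ratios. The ratio values are made up for the example and do not describe any actual economy.

```python
def money_multiplier(currency_deposit_ratio, reserve_deposit_ratio):
    """Money multiplier m = (1 + C/D) / (R/D + C/D)."""
    return (1 + currency_deposit_ratio) / (reserve_deposit_ratio + currency_deposit_ratio)

# Example: C/D = 0.2, R/D = 0.1  ->  m = 1.2 / 0.3 = 4.0
print(money_multiplier(0.2, 0.1))

# Textbook simplification with no currency held (C/D = 0): m = 1 / (R/D)
print(money_multiplier(0.0, 0.1))   # 10.0
```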
Interpretation.
Generally, the currency-deposit ratio C/D reflects the preferences of households about the form of money they wish to hold (currency versus deposits). The reserve-deposit ratio R/D will be determined by the business policies of commercial banks and the laws regulating banks. Benjamin Friedman in his chapter on the money supply in "The New Palgrave Dictionary of Economics" writes that both central bank reserves (supplied by the central bank and demanded by the commercial banks for several reasons) and deposits (supplied by commercial banks and demanded by households and non-financial firms) are traded in markets with equilibria of demand and supply which depend on the interest rate as well as a number of other factors. Consequently, the money multiplier representation should be interpreted as "really just a shorthand simplification that works well or badly depending on the strength of the relevant interest elasticities and the extent of variation in interest rates and the many other factors involved."
The importance of excess reserves.
In some presentations of the money multiplier theory, the further simplification is made that commercial banks only hold the reserves that are legally required by the monetary authorities so that the R/D ratio is determined directly by the central banks. In many countries the monetary authorities maintain reserve requirements that secure a minimum level of reserves at all times. However, commercial banks may often hold excess reserves, i.e. reserves held in excess of the legal reserve requirements. This is for instance the case in countries that do not impose legal reserve requirements at all, like the United States, the United Kingdom, Canada, Australia, New Zealand and the Scandinavian countries. The possibility of banks voluntarily choosing to hold excess reserves, in amounts that may change over time as the opportunity costs for banks change, is one reason why the monetary multiplier may not be stable. For instance, following the introduction of interest rates on excess reserves in the US, excess reserves grew dramatically during the Financial crisis of 2007–2010: US bank excess reserves grew over 500-fold, from under $2 billion in August 2008 to over $1,000 billion in November 2009.
The insight that banks may adjust their reserve/deposit ratio endogenously, making the money multiplier unstable, is old. Paul Samuelson noted in his bestselling textbook in 1948 that:
<templatestyles src="Template:Blockquote/styles.css" />By increasing the volume of their government securities and loans and by lowering Member Bank legal reserve requirements, the Reserve Banks can encourage an increase in the supply of money and bank deposits. They can encourage but, without taking drastic action, they cannot "compel." For in the middle of a deep depression just when we want Reserve policy to be most effective, the Member Banks are likely to be timid about buying new investments or making loans. If the Reserve authorities buy government bonds in the open market and thereby swell bank reserves, the banks will not put these funds to work but will simply hold reserves. Result: no 5 for 1, “no nothing,” simply a substitution on the bank’s balance sheet of idle cash for old government bonds.
Restated, increases in central bank money may not result in commercial bank money because the money is not "required" to be lent out – it may instead result in a growth of unlent (i.e. excess) reserves. This situation has been referred to as "pushing on a string": withdrawal of central bank money "compels" commercial banks to curtail lending (one can "pull" money via this mechanism), but input of central bank money does not compel commercial banks to lend (one cannot "push" via this mechanism).
The amount of its assets that a bank chooses to hold as excess reserves is a decreasing function of the amount by which the market rate for loans to the general public from commercial banks exceeds the interest rate on excess reserves and of the amount by which the market rate for loans to other banks (in the US, the federal funds rate) exceeds the interest rate on excess reserves. Since the money multiplier in turn depends negatively on the desired reserve/deposit ratio, the money multiplier depends positively on these two opportunity costs. Moreover, the public’s choice of the currency/deposit ratio depends negatively on market rates of return on highly liquid substitutes for currency; since the currency ratio negatively affects the money multiplier, the money multiplier is positively affected by the return on these substitutes. Note that when making predictions assuming a constant multiplier, the predictions are valid only if these ratios do not in fact change. Sometimes this holds, and sometimes it does not; for example, increases in central bank money (i.e. base money) may result in increases in commercial bank money – and will, if these ratios (and thus multiplier) stay constant – or may result in increases in excess reserves but little or no change in commercial bank money, in which case the reserve–deposit ratio will grow and the multiplier will fall.
"Loans first" model.
An alternative interpretation of the direction of causality in the identity described above is that the connection between the money supply and the monetary base goes from the former to the latter: Interest-rate-targeting central banks supply whatever amount of reserves that the banking system demands, given the reserve requirements and the amount of deposits that have been created.
In this alternative model of money creation, loans are first extended by commercial banks – say, $1,000 of loans, which may then require that the bank borrow $100 of reserves either from depositors or other private sources of financing, or from the central bank. This view is advanced in endogenous money theories. It is also occasionally referred to as a "Loans first" model as opposed to the traditional multiplier theory, which can be labelled a "Reserves first" model.
Monetary policy in practice.
Although it is used in many textbooks, the realism of the money multiplier theory is questioned by several economists, and it is generally rejected as a useful description of actual central bank behaviour today, partly because major central banks generally have not tried to control the money supply during the last decades, making the theory irrelevant for them, and partly because it is doubtful to what extent the central banks would be able to control the money supply, should they wish to. The latter question is a matter of the stability of the money multiplier.
Historical attempts to steer the money supply.
Historically, central banks have in some periods used strategies of trying to target a certain level or growth rate of money supply, in particular during the late 1970s and 1980s, inspired by monetarist theory and the quantity theory of money. However, these strategies turned out not to work very well and were abandoned again. In the United States, short-term interest rates became fourfold more volatile during the years 1979-1982 when the Federal Reserve adopted a moderate version of monetary base control, and the targeted monetary aggregate at the time, M1, even increased its short-term volatility.
Current monetary policy.
Starting in the early 1990s, a fundamental rethinking of monetary policy took place in major central banks, shifting to targeting inflation rather than monetary growth and generally using interest rates to implement goals rather than quantitative measures like holding the quantity of base money at fixed levels. As a result, modern central banks hardly ever conduct their policies by trying to control the money supply, implying also that the monetary multiplier theory has become more irrelevant as a tool to understand current monetary policy.
Charles Goodhart notes in his chapter on the monetary base in "The New Palgrave" that the banking system has virtually never worked in the way hypothesized by the monetary multiplier theory. Instead, central banks have used their powers to effect a desired level of interest rates rather than achieve a pre-determined quantity of monetary base or of some monetary aggregate. He also mentions that the institutional development of the financial markets, notably interbank lending markets, implies that the monetary base multiplier no longer would, or could, work in the textbook fashion. Instead, he argues that the behavioural process leading to a change in monetary bases runs from an initial change in interest rates to a subsequent readjustment in monetary aggregate quantities, endogenously determining these as well as the accommodating monetary base. Also David Romer notes in his graduate textbook "Advanced Macroeconomics" that it is difficult for central banks to control broad monetary aggregates like M2, causing central banks generally to assign the behaviour of the money supply an unimportant role in policy, focusing instead on adjusting nominal interest rates to stabilize the economy. Gregory Mankiw, author of one of the widely read intermediate textbooks ("Macroeconomics") that present the money multiplier theory, notes in its 11th edition that even though the Federal Reserve can influence the money supply, it cannot control it fully because households' decisions and banks' discretion in the conduct of their business may change the money supply in ways unanticipated by the central bank.
After the financial crisis, several central banks, including the Federal Reserve, Bank of England, Deutsche Bundesbank, the Hungarian National Bank and Danmarks Nationalbank have issued explanations of money creation supporting the view that central banks generally do not control the creation of money, nor do they try to, though their interest rate-setting monetary policies naturally affect the amount of loans and deposits that commercial banks create. The Federal Reserve in 2021 launched several educational resources to facilitate teaching the conduct of current monetary policy, recommending teachers to avoid relying on the money multiplier concept, which was described as obsolete and unusable.
Jaromir Benes and Michael Kumhof of the IMF Research Department argue that the "deposit multiplier" of the undergraduate economics textbook, where monetary aggregates are created at the initiative of the central bank through an initial injection of high-powered money into the banking system that gets multiplied through bank lending, turns the actual operation of the monetary transmission mechanism on its head. At all times, when banks ask for reserves, the central bank obliges. According to this model, reserves therefore impose no constraint, and the deposit multiplier is a myth. The authors therefore argue that private banks are almost fully in control of the money creation process.
Besides the mainstream questioning of the usefulness of the money multiplier theory, the rejection of this theory has also been a theme in the heterodox post-Keynesian school of economic thought.
Example.
As explained above, according to the money multiplier theory, money creation in a fractional-reserve banking system occurs when a given reserve is lent out by a bank and then deposited at a bank (possibly a different one), which lends it out again; the process repeats, and the ultimate result is a geometric series.
The following formula for the money multiplier may be used, explicitly accounting for the fact that the public has a desire to hold some currency in the form of cash and that commercial banks may desire to hold reserves in excess of the legal reserve requirements:
formula_5
Here the Desired Reserve Ratio is the sum of the required reserve ratio and the excess reserve ratio.
The formula above is derived from the following procedure. Let the monetary base be normalized to unity. Define the legal reserve ratio formula_6, the excess reserves ratio formula_7, and the currency/deposit ratio with respect to deposits formula_8; suppose the demand for funds is unlimited; then the theoretical upper limit for deposits is defined by the following series:
formula_9.
Analogously, the theoretical upper limit for the money held by the public is defined by the following series:
formula_10
and the theoretical upper limit for the total loans lent in the market is defined by the following series:
formula_11
By summing up the two quantities, the theoretical money multiplier is defined as
formula_12
where α + β = "Desired Reserve Ratio" and formula_13
The process described by the geometric series above can be represented in the following table, where formula_14 denotes the lending round, formula_15 is the amount lent out in round formula_14, formula_16 is the currency retained by the public in round formula_14, and formula_17 is the resulting new deposit.
Table.
This re-lending process (assuming that no currency is used) can be depicted as follows, assuming a 20% reserve ratio and a $100 initial deposit:
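The table itself is not reproduced here; as a stand-in, the following sketch (an illustrative calculation only, with an arbitrary number of rounds and invented variable names) iterates the re-lending process and compares the running total of deposits with the theoretical limit of $100 / 0.20 = $500 implied by the geometric series.

```python
reserve_ratio = 0.20
deposit = 100.0                       # initial deposit
total_deposits = deposit

print(f"{'round':>5} {'deposit':>10} {'reserves kept':>14} {'amount lent':>12}")
for k in range(1, 11):                # first ten rounds of re-lending
    reserves = reserve_ratio * deposit        # kept as required reserves
    loan = deposit - reserves                 # lent out and then re-deposited
    print(f"{k:>5} {deposit:>10.2f} {reserves:>14.2f} {loan:>12.2f}")
    deposit = loan                            # the loan becomes the next deposit
    total_deposits += deposit

print(f"total deposits after 10 rounds: {total_deposits:.2f}")
print(f"theoretical limit (geometric series): {100.0 / reserve_ratio:.2f}")
```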
Note that no matter how many times the smaller and smaller amounts of money are re-lent, the legal reserve requirement is never violated, because that would be illegal.
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "M = D+C."
},
{
"math_id": 1,
"text": "B = R+C."
},
{
"math_id": 2,
"text": "M =\\frac{1+C/D}{R/D + C/D}B."
},
{
"math_id": 3,
"text": "\\frac{1+C/D}{R/D + C/D}"
},
{
"math_id": 4,
"text": "\\frac{1}{R/D}."
},
{
"math_id": 5,
"text": "m=\\frac{(1+Currency/Deposit Ratio)}{(Currency/Deposit Ratio + Desired Reserve Ratio)}."
},
{
"math_id": 6,
"text": "\\alpha \\in\\left(0, 1\\right)\\;"
},
{
"math_id": 7,
"text": "\\beta \\in\\left(0, 1\\right)\\;"
},
{
"math_id": 8,
"text": "\\gamma \\in\\left(0, 1\\right)\\;"
},
{
"math_id": 9,
"text": "Deposits = \\sum_{n = 0}^{\\infty}\\left[\\left(1 - \\alpha - \\beta - \\gamma\\right)\\right]^{n} = \\frac{1}{\\alpha + \\beta + \\gamma}"
},
{
"math_id": 10,
"text": "Publicly Held Currency = \\gamma \\cdot Deposits = \\frac{\\gamma}{\\alpha + \\beta + \\gamma}"
},
{
"math_id": 11,
"text": "Loans = \\left(1 - \\alpha - \\beta\\right) \\cdot Deposits = \\frac{1 - \\alpha - \\beta}{\\alpha + \\beta + \\gamma}"
},
{
"math_id": 12,
"text": "m = \\frac{Money Stock}{Monetary Base} = \\frac{Deposits + Publicly Held Currency}{Monetary Base} = \\frac{1 + \\gamma}{\\alpha + \\beta + \\gamma}"
},
{
"math_id": 13,
"text": "\\gamma = currency/deposit"
},
{
"math_id": 14,
"text": "k\\;"
},
{
"math_id": 15,
"text": "L_{k} = \\left(1 - \\alpha - \\beta\\right) \\cdot D_{k - 1}"
},
{
"math_id": 16,
"text": "PHM_{k} = \\gamma \\cdot D_{k - 1}"
},
{
"math_id": 17,
"text": "D_{k} = L_{k} - PHM_{k}\\;"
}
] | https://en.wikipedia.org/wiki?curid=1290319 |
12905248 | Peridynamics | Peridynamics is a non-local formulation of continuum mechanics that is oriented toward deformations with discontinuities, especially fractures. Originally, "bond-based" peridynamics was introduced, in which the internal interaction forces between a material point and all the other points with which it can interact are modeled as a central force field. This type of force field can be imagined as a mesh of bonds connecting each point of the body with every other interacting point within a certain distance, which depends on material properties and is called the "peridynamic horizon". Later, to overcome the limitations that the bond-based framework places on the material Poisson’s ratio (formula_0 for plane stress and formula_1 for plane strain in two-dimensional configurations; formula_1 for three-dimensional ones), "state-based" peridynamics was formulated. Its characteristic feature is that the force exchanged between a point and another one is influenced by the deformation state of all the other bonds relative to its interaction zone.
The characteristic feature of peridynamics, which makes it different from classical local mechanics, is the presence of finite-range bonds between any two points of the material body: this feature brings such formulations closer to discrete meso-scale theories of matter.
Etymology.
The term "peridynamic", as an adjective, was proposed in the year 2000 and comes from the prefix "peri," which means "all around", "near", or "surrounding"; and the root "dyna", which means "force" or "power." The term "peridynamics", as a noun, is a shortened form of the phrase "peridynamic model of solid mechanics."
Purpose.
A fracture is a mathematical singularity to which the classical equations of continuum mechanics cannot be applied directly. The peridynamic theory was proposed with the purpose of mathematically modeling the formation and dynamics of fractures in elastic materials. It is founded on integral equations, in contrast with classical continuum mechanics, which is based on partial differential equations. Since partial derivatives do not exist on crack surfaces and other geometric singularities, the classical equations of continuum mechanics cannot be applied directly when such features are present in a deformation. The integral equations of the peridynamic theory hold true also on singularities and can be applied directly, because they do not require partial derivatives. The ability to apply the same equations directly at all points in a mathematical model of a deforming structure helps the peridynamic approach avoid the need for the special techniques of fracture mechanics like xFEM. For example, in peridynamics, there is no need for a separate crack growth law based on a stress intensity factor.
Definition and basic terminology.
In the context of peridynamic theory, physical bodies are treated as being constituted by a continuous mesh of points which can exchange long-range mutual interaction forces, within a maximum and well-established distance formula_3: the "peridynamic horizon" radius. This perspective is much closer to molecular dynamics than to the classical description of macroscopic bodies, and as a consequence, it is not based on the concept of the stress tensor (which is a local concept) but moves instead toward the notion of the "pairwise force" that a material point formula_4 exchanges within its peridynamic horizon. With a Lagrangian point of view, suited for small displacements, the peridynamic horizon is considered fixed in the reference configuration and, then, deforms with the body. Consider a material body represented by formula_5, where formula_6 can be either 1, 2 or 3. The body has a positive density formula_7. Its reference configuration at the initial time is denoted by formula_8. It is important to note that the reference configuration can either be the stress-free configuration or a specific configuration of the body chosen as a reference. In the context of peridynamics, every point in formula_9 interacts with all the points formula_10 within a certain neighborhood defined by formula_11, where formula_3, and where formula_12 represents a suitable distance function on formula_13. This neighborhood is often referred to as formula_14 in the literature. It is commonly known as the "horizon" or the "family" of formula_4.
The kinematics of formula_4 is described in terms of its displacement from the reference position, denoted as formula_15. Consequently, the position of formula_4 at a specific time formula_16 is determined by formula_17. Furthermore, for each pair of interacting points, the change in the length of the bond relative to the initial configuration is tracked over time through the relative strain formula_18, which can be expressed as:
formula_19
where formula_20 denotes the Euclidean norm and formula_21.
The interaction between any formula_2 and formula_22 is referred to as a "bond". These pairwise bonds have varying lengths over time in response to the force per unit volume squared, denoted as
formula_23.
This force is commonly known as the "pairwise force function" or "peridynamic kernel", and it encompasses all the constitutive (material-dependent) properties. It describes how the internal forces depend on the deformation. It's worth noting that the dependence of formula_24 on formula_16 has been omitted here for the sake of simplicity in notation. Additionally, an external forcing term, formula_25, is introduced, which results in the following equation of motion, representing the fundamental equation of peridynamics:
formula_26
where the integral term formula_27 is the sum of all of the internal and external per-unit-volume forces acting on formula_4:
formula_28
The vector valued function formula_29 is the force density that formula_22 exerts on formula_2. This force density depends on the relative displacement and relative position vectors between formula_22 and formula_2. The dimension of formula_29 is formula_30.
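For numerical work, the integral defining formula_27 is typically approximated by a sum over grid nodes lying within the horizon of formula_4. The following Python sketch shows this idea in one dimension under simplifying assumptions made for the example (a linear bond force, one-point quadrature per node, and invented function and variable names); it is not the algorithm of any particular peridynamics code.

```python
import numpy as np

def peridynamic_force(x, u, b, horizon, c, dx):
    """Discretized internal + external force density F at every node of a 1D bar.

    x : reference node positions, u : displacements, b : external force density,
    c : micro-modulus constant, dx : nodal spacing (one-point quadrature volume).
    A simple linear bond force f = c * stretch * direction is assumed.
    """
    n = len(x)
    F = np.array(b, dtype=float)
    for i in range(n):
        for j in range(n):
            xi = x[j] - x[i]                       # reference bond vector
            if i == j or abs(xi) > horizon:
                continue
            eta = u[j] - u[i]                      # relative displacement
            stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
            direction = np.sign(xi + eta)
            F[i] += c * stretch * direction * dx   # bond force times nodal volume
    return F

# Tiny example: a 1D bar under a uniform 1% stretch; interior forces cancel out.
x = np.linspace(0.0, 1.0, 21)
u = 0.01 * x
b = np.zeros_like(x)
F = peridynamic_force(x, u, b, horizon=0.15, c=1.0, dx=x[1] - x[0])
print(F[:3], F[10])                                # edge nodes nonzero, interior ~0
```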
Bond-based peridynamics.
In this formulation of peridynamics, the kernel is determined by the nature of the internal forces and the physical constraints that govern the interaction between only two material points. For the sake of brevity, the following quantities are defined: formula_31 and formula_32, so that
formula_33
Actio et reactio principle.
For any formula_2 and formula_22 belonging to the neighborhood formula_14, the following relationship holds: formula_34 . This expression reflects the principle of action and reaction, commonly known as Newton's Third Law. It guarantees the conservation of linear momentum in a system composed of mutually interacting particles.
Angular momentum conservation.
For any formula_35 and formula_36 belonging to the neighborhood formula_14, the following condition holds: formula_37 . This condition arises from considering the relative deformed ray-vector connecting formula_35 and formula_36 as formula_38 . The condition is satisfied if and only if the pairwise force density vector has the same direction as the relative deformed ray-vector. In other words, formula_39 for all formula_40 and formula_41 , where formula_42 is a scalar-valued function.
Hyperelastic material.
A hyperelastic material is a material with a constitutive relation such that:
formula_43
or, equivalently, by Stokes' theorem
formula_44 ,formula_45
and, thus,
formula_46
In the equation above formula_47 is the scalar valued potential function in formula_48. Due to the necessity of satisfying angular momentum conservation, the condition below on the scalar valued function formula_49 follows
formula_50
where formula_51 is a scalar valued function. Integrating both sides of the equation, the following condition on formula_51 is obtained
formula_52,
for formula_53 a scalar valued function. The elastic nature of formula_54 is evident: the interaction force depends only on the initial relative position between points formula_55 and formula_56 and the modulus of their relative position, formula_57, in the deformed configuration formula_58 at time formula_16. Applying the isotropy hypothesis, the dependence on vector formula_59 can be substituted with a dependence on its modulus formula_60,
formula_61
Bond forces can, thus, be considered as modeling a network of springs that connects each point formula_62 pairwise with formula_21.
Linear elastic material.
If formula_63, the peridynamic kernel can be linearised around formula_64:
formula_65
then, a second-order "micro-modulus" tensor can be defined as
formula_66
where formula_67 and formula_68 is the identity tensor. Following application of linear momentum balance, elasticity and isotropy condition, the micro-modulus tensor can be expressed in this form
formula_69
Therefore, for a linearised hyperelastic material, the peridynamic kernel has the following structure
formula_70
Expressions for the peridynamic kernel.
The peridynamic kernel is a versatile function that characterizes the constitutive behavior of materials within the framework of peridynamic theory. One commonly employed formulation of the kernel is used to describe a class of materials known as "prototype micro-elastic brittle" (PMB) materials. In the case of isotropic PMB materials, the pairwise force is assumed to be linearly proportional to the finite stretch experienced by the material, defined as
formula_71,
so that
formula_72
where
formula_73
and where the scalar function formula_74 is defined as follow
formula_75 with
formula_76
The constant formula_77 is referred to as the "micro-modulus constant", and the function formula_78 serves to indicate whether, at a given time formula_79, the bond stretch formula_80 associated with the pair formula_81 has surpassed the critical value formula_82. If the critical value is exceeded, the bond is considered "broken", and a pairwise force of zero is assigned for all formula_83.
By comparing the strain energy density values obtained under isotropic extension in the peridynamic and in the classical continuum theory frameworks, the physically consistent value of the micro-modulus formula_77 can be found
formula_84
where formula_85 is the material bulk modulus.
Following the same approach the micro-modulus constant formula_77 can be extended to formula_86, where formula_77 is now a "micro-modulus function". This function provides a more detailed description of how the intensity of pairwise forces is distributed over the peridynamic horizon formula_87. Intuitively, the intensity of forces decreases as the distance between formula_2 and formula_88 increases, but the specific manner in which this decrease occurs can vary.
The micro-modulus function is expressed as
formula_89
where the constant formula_90 is obtained by comparing the peridynamic strain energy density with that of the classical mechanical theories; formula_91 is a function defined on formula_92 with the following properties (given the restrictions of momentum conservation and isotropy)
formula_93
where formula_94 is the Dirac Delta function.
Cylindrical micro-modulus.
The simplest expression for the micro-modulus function is
formula_95,
where formula_96: formula_97 is the indicator function of the subset formula_98, defined as
formula_99
Triangular micro-modulus.
It is characterized by formula_91 being a linear function
formula_100
Normal micro-modulus.
To reflect the fact that most common discrete physical systems are characterized by a Maxwell-Boltzmann distribution, and to include this behavior in peridynamics, the following expression for formula_91 can be utilized
formula_101
Quartic micro-modulus.
In the literature one can also find the following expression for the function formula_91
formula_102
Overall, depending on the specific material property to be modeled, there exists a wide range of expressions for the micro-modulus and, in general, for the peridynamic kernel. The above list is, thus, not exhaustive.
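For illustration only, the shape functions formula_91 listed above can be written compactly as follows, assuming a one-dimensional scalar argument and omitting the normalization constant formula_90; the function names are invented for the sketch.

```python
import numpy as np

def k_cylindrical(xi, delta):
    """Constant (cylindrical) shape: 1 inside the horizon, 0 outside."""
    return np.where(np.abs(xi) < delta, 1.0, 0.0)

def k_triangular(xi, delta):
    """Linearly decaying (triangular) shape."""
    return np.where(np.abs(xi) < delta, 1.0 - np.abs(xi) / delta, 0.0)

def k_normal(xi, delta):
    """Gaussian (Maxwell-Boltzmann-like) shape."""
    return np.where(np.abs(xi) < delta, np.exp(-(np.abs(xi) / delta) ** 2), 0.0)

def k_quartic(xi, delta):
    """Quartic shape."""
    return np.where(np.abs(xi) < delta, (1.0 - (np.abs(xi) / delta) ** 2) ** 2, 0.0)

xi = np.linspace(-1.5, 1.5, 7)
print(k_triangular(xi, delta=1.0))   # zero outside the horizon, peaked at xi = 0
```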
Damage.
Damage is incorporated in the pairwise force function by allowing bonds to break when their elongation exceeds some prescribed value. After a bond breaks, it no longer sustains any force, and the endpoints are effectively disconnected from each other. When a bond breaks, the force it was carrying is redistributed to other bonds that have not yet broken. This increased load makes it more likely that these other bonds will break. The process of bond breakage and load redistribution, leading to further breakage, is how cracks grow in the peridynamic model.
Analytically, the bond breaking is specified inside the expression of the peridynamic kernel by the function
formula_76
If the graph of formula_103 versus the bond stretch formula_104 is plotted, the action of the bond-breaking function formula_105 in fracture formation becomes clear. However, not only abrupt fracture can be modeled within the peridynamic framework, and more general expressions for formula_105 can be employed.
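In a discretized setting, the history-dependent function formula_105 is commonly implemented as a per-bond flag that is switched off permanently the first time the bond stretch exceeds the critical value. The sketch below is a minimal illustration under assumed names and an arbitrary critical stretch; it is not tied to any specific peridynamics code.

```python
import numpy as np

class BondDamage:
    """History-dependent bond-breaking function mu(s, t): once a bond's stretch
    exceeds the critical value s0, the bond stays broken for all later times."""

    def __init__(self, n_bonds, s0=0.02):           # s0 is an arbitrary example value
        self.s0 = s0
        self.intact = np.ones(n_bonds, dtype=bool)  # per-bond history flag

    def mu(self, stretch):
        self.intact &= (np.asarray(stretch) < self.s0)
        return self.intact.astype(float)            # 1 for intact bonds, 0 for broken

damage = BondDamage(n_bonds=3)
print(damage.mu([0.001, 0.030, 0.010]))   # [1. 0. 1.]  the middle bond breaks
print(damage.mu([0.001, 0.001, 0.010]))   # [1. 0. 1.]  it never carries force again
```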
State-based peridynamics.
The theory described above assumes that each peridynamic bond responds independently of all the others. This is an oversimplification for most materials and leads to restrictions on the types of materials that can be modeled. In particular, this assumption implies that any isotropic linear elastic solid is restricted to a Poisson ratio of 1/4.
To address this lack of generality, the idea of "peridynamic states" was introduced. This allows the force density in each bond to depend on the stretches in all the bonds connected to its endpoints, in addition to its own stretch. For example, the force in a bond could depend on the net volume changes at the endpoints. The effect of this volume change, relative to the effect of the bond stretch, determines the Poisson ratio. With peridynamic states, any material that can be modeled within the standard theory of continuum mechanics can be modeled as a peridynamic material, while retaining the advantages of the peridynamic theory for fracture.
Mathematically the equation of the internal and external force term
formula_28
used in the bond-based formulations is substituted by formula_106
where formula_107 is the force vector state field.
A general m-order state formula_108 is a mathematical object similar to a tensor, with the exception that it is, in general, neither a linear nor a continuous function of its argument.
Vector states are states of order equal to 2. For a so-called "simple material", formula_109 is defined as
formula_110
where formula_111 is a Riemann-integrable function on formula_14, and formula_112 is called the "deformation vector state field" and is defined by the following relation
formula_113
thus formula_114 is the image of the bond formula_115 under the deformation, such that
formula_116
which means that two distinct particles never occupy the same point as the deformation progresses.
It can be proved that the balance of linear momentum follows from the definition of formula_117, while, if the constitutive relation is such that
formula_118
the force vector state field satisfies the balance of angular momentum.
Applications.
The growing interest in peridynamics comes from its capability to fill the gap between atomistic theories of matter and classical local continuum mechanics. It is applied effectively to micro-scale phenomena, such as crack formation and propagation, wave dispersion, and intra-granular fracture. These phenomena can be described by appropriately adjusting the peridynamic horizon radius, which is directly linked to the extent of non-local interactions between points within the material.
In addition to the aforementioned research fields, peridynamics' non-local approach to discontinuities has found applications in various other areas. In geo-mechanics, it has been employed to study water-induced soil cracks, geo-material failure, rock fragmentation, and so on. In biology, peridynamics has been used to model long-range interactions in living tissues, cellular ruptures, cracking of bio-membranes, and more. Furthermore, peridynamics has been extended to thermal diffusion theory, enabling the modeling of heat conduction in materials with discontinuities, defects, inhomogeneities, and cracks. It has also been applied to study advection-diffusion phenomena in multi-phase fluids and to construct models for transient advection-diffusion problems. With its versatility, peridynamics has been used in various multi-physics analyses, including micro-structural analysis, fatigue and heat conduction in composite materials, galvanic corrosion in metals, electricity-induced cracks in dielectric materials, and more.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1/3"
},
{
"math_id": 1,
"text": "1/4"
},
{
"math_id": 2,
"text": "\\bf x"
},
{
"math_id": 3,
"text": "\\delta > 0"
},
{
"math_id": 4,
"text": "{\\bf x}"
},
{
"math_id": 5,
"text": "\\Omega \\subset \\R^{n}"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "\\rho"
},
{
"math_id": 8,
"text": "\\Omega_{0} \\subset \\R^{n}"
},
{
"math_id": 9,
"text": "\\Omega"
},
{
"math_id": 10,
"text": "{\\bf x}'"
},
{
"math_id": 11,
"text": "d({\\bf x},{\\bf x}')\\leq\\delta"
},
{
"math_id": 12,
"text": "d(\\cdot,\\cdot)"
},
{
"math_id": 13,
"text": "\\Omega_0"
},
{
"math_id": 14,
"text": "B_\\delta({\\bf x})"
},
{
"math_id": 15,
"text": "{\\bf u}({\\bf x}, t): \\Omega_{0} \\times \\mathbb{R}^{+} \\rightarrow \\mathbb{R}^{n}"
},
{
"math_id": 16,
"text": "t"
},
{
"math_id": 17,
"text": "{\\bf y}({\\bf x},t):= {\\bf x}+{\\bf u}({\\bf x}, t)"
},
{
"math_id": 18,
"text": "s({\\bf x},{\\bf x}',t)"
},
{
"math_id": 19,
"text": " s\\left({\\bf x}, {\\bf x}', t\\right)=\\frac{\\left|{\\bf u}\\left({\\bf x}^{\\prime}, t\\right)-{\\bf u}({\\bf x}, t)\\right|}{\\left|{\\bf x}^{\\prime}-{\\bf x}\\right|}, "
},
{
"math_id": 20,
"text": " |\\cdot| "
},
{
"math_id": 21,
"text": " {\\bf x}' \\in B_{\\delta}({\\bf x}) \\cap \\Omega_0 "
},
{
"math_id": 22,
"text": "\\bf x'"
},
{
"math_id": 23,
"text": "{\\bf f}\\equiv {\\bf f}({\\bf x}',{\\bf x},{\\bf u}({\\bf x}'),{\\bf u}({\\bf x}),t)"
},
{
"math_id": 24,
"text": "{\\bf u}"
},
{
"math_id": 25,
"text": "\\mathbf{b}({\\bf x},t)"
},
{
"math_id": 26,
"text": " {\\rho {\\bf u}_{tt}({\\bf x},t) = {\\bf F}({\\bf x},t)}\\, . "
},
{
"math_id": 27,
"text": "{\\bf F}({\\bf x},t)"
},
{
"math_id": 28,
"text": " {{\\bf F}({\\bf x}, t):=\\int_{\\Omega_0 \\cap B_{\\delta}({\\bf x})} {\\bf f}\\left( {\\bf x}', {\\bf x}, {\\bf u} \\left( {\\bf x}' \\right),{\\bf u}({\\bf x}) \\right) dV_{{\\bf x}'}+{\\bf b}({\\bf x}, t)}\\, . "
},
{
"math_id": 29,
"text": "\\bf f"
},
{
"math_id": 30,
"text": "[N/m^6]"
},
{
"math_id": 31,
"text": "{\\bf {\\bf \\xi}} := {\\bf x}'-{\\bf x}"
},
{
"math_id": 32,
"text": "{\\bf \\eta}:={\\bf u}({\\bf x}')-{\\bf u}({\\bf x})"
},
{
"math_id": 33,
"text": "{\\bf f}({\\bf x}'-{\\bf x},{\\bf u}({\\bf x}')-{\\bf u}({\\bf x})) \\equiv \\bf{f}({\\bf \\xi},{\\bf \\eta})"
},
{
"math_id": 34,
"text": "{\\bf f}(-\\eta, -\\xi) = -{\\bf f}(\\eta, \\xi)"
},
{
"math_id": 35,
"text": "\\bf{x}"
},
{
"math_id": 36,
"text": "\\bf{x}'"
},
{
"math_id": 37,
"text": "(\\xi + \\eta) \\times {\\bf f}(\\xi, \\eta) = 0"
},
{
"math_id": 38,
"text": "\\xi + \\eta"
},
{
"math_id": 39,
"text": "{\\bf f}(\\xi, \\eta) = f(\\xi, \\eta)(\\xi + \\eta)"
},
{
"math_id": 40,
"text": "\\xi"
},
{
"math_id": 41,
"text": "\\eta"
},
{
"math_id": 42,
"text": "f(\\xi, \\eta)"
},
{
"math_id": 43,
"text": "\n\n\\int_{\\Gamma} {\\bf f}({\\bf \\xi}, {\\bf \\eta}) \\cdot d {\\bf \\eta}=0\\, , \\quad \\forall \\text{ closed curve } \\Gamma, \\ \\ \\ \\ \\forall{\\bf \\xi}\\neq \\bf{0},\n\n"
},
{
"math_id": 44,
"text": "\n\n\\nabla_{{\\bf \\eta}} \\times {\\bf f}({\\bf \\xi},{\\bf \\eta})=\\bf{0}\\,\n"
},
{
"math_id": 45,
"text": "\\forall \\, {\\bf \\xi}, \\, {\\bf \\eta}"
},
{
"math_id": 46,
"text": "\n\n{\\bf f}({\\bf \\xi},{\\bf \\eta})=\\nabla_{{\\bf \\eta}} \\Phi({\\bf \\xi}, \\, {\\bf \\eta}) \\, \\forall {\\bf \\xi}, \\, {\\bf \\eta} \\, .\n\n"
},
{
"math_id": 47,
"text": " \\Phi({\\bf \\xi},{\\bf \\eta}) "
},
{
"math_id": 48,
"text": " C^2(\\R^n \\setminus\\bf{\\{0\\}} \\times \\R^n) "
},
{
"math_id": 49,
"text": " f({\\bf \\xi},{\\bf \\eta}) "
},
{
"math_id": 50,
"text": "\n\n\\frac{\\partial f({\\bf \\xi},{\\bf \\eta})}{\\partial {\\bf \\eta}}=g({\\bf \\xi},{\\bf \\eta})({\\bf \\xi}+{\\bf \\eta}).\n\n"
},
{
"math_id": 51,
"text": " g({\\bf \\xi},{\\bf \\eta}) "
},
{
"math_id": 52,
"text": "{\\bf f}({\\bf \\xi},{\\bf \\eta})= h(| {\\bf \\xi}+{\\bf \\eta}|,{\\bf \\xi})({\\bf \\xi}+{\\bf \\eta})"
},
{
"math_id": 53,
"text": "h(| {\\bf \\xi}+{\\bf \\eta}|,{\\bf \\xi})"
},
{
"math_id": 54,
"text": " {\\bf f} "
},
{
"math_id": 55,
"text": " {\\bf x} "
},
{
"math_id": 56,
"text": " {\\bf x}' "
},
{
"math_id": 57,
"text": " | {\\bf \\xi}+{\\bf \\eta}| "
},
{
"math_id": 58,
"text": " \\Omega_t"
},
{
"math_id": 59,
"text": " {\\bf \\xi} "
},
{
"math_id": 60,
"text": " |{\\bf \\xi}| "
},
{
"math_id": 61,
"text": "{\\bf f}({\\bf \\xi},{\\bf \\eta})=h(| {\\bf \\xi}+{\\bf \\eta}|,|{\\bf \\xi}|)({\\bf \\xi}+{\\bf \\eta}). "
},
{
"math_id": 62,
"text": " {\\bf x} \\in \\Omega_0 "
},
{
"math_id": 63,
"text": " |{\\bf \\eta}| \\ll 1 "
},
{
"math_id": 64,
"text": " {\\bf \\eta}=\\bf{0} "
},
{
"math_id": 65,
"text": "{\\bf f}({\\bf \\xi},{\\bf \\eta})\\approx {\\bf f}({\\bf \\xi},\\bf{0})+\\left. \\frac{\\partial {\\bf f}({\\bf \\xi},{\\bf \\eta})}{\\partial{\\bf \\eta}}\\right|_{{\\bf \\eta}=\\bf{0}}{\\bf \\eta};"
},
{
"math_id": 66,
"text": "{\\bf C}({\\bf \\xi})=\\left. \\frac{\\partial {\\bf f}({\\bf \\xi},{\\bf \\eta})}{\\partial {\\bf \\eta}}\\right|_{{\\bf \\eta}=\\bf{0}}={\\bf \\xi} \\otimes \\left.\\frac{\\partial f({\\bf \\xi},{\\bf \\eta})}{\\partial {\\bf \\eta}}\\right|_{{\\bf \\eta}=\\bf{0}}+f_0I"
},
{
"math_id": 67,
"text": " f_0:=f({\\bf \\xi},{\\bf 0}) "
},
{
"math_id": 68,
"text": "I"
},
{
"math_id": 69,
"text": "{\\bf C}({\\bf \\xi})=\\lambda(|{\\bf \\xi}|){\\bf \\xi} \\otimes {\\bf \\xi}+f_0I."
},
{
"math_id": 70,
"text": "{\\bf f}({\\bf \\xi},{\\bf \\eta}) \\approx {\\bf f}({\\bf \\xi},{\\bf 0})+\\left(\\lambda(|{\\bf \\xi}|){\\bf \\xi} \\otimes {\\bf \\xi}+f_0I\\right){\\bf \\eta}."
},
{
"math_id": 71,
"text": " s:= (|{\\bf \\xi}+{\\bf \\eta}|-|{\\bf \\xi}|)/|{\\bf \\xi}| "
},
{
"math_id": 72,
"text": "\n\\mathbf{f}({\\bf \\eta}, {\\bf \\xi})=f(|{\\bf \\xi}+{\\bf \\eta}|,|{\\bf \\xi}|) \\bf{n},\n"
},
{
"math_id": 73,
"text": " \\bf{n}:=({\\bf \\xi}+{\\bf \\eta})/|{\\bf \\xi} + {\\bf \\eta}| "
},
{
"math_id": 74,
"text": " f "
},
{
"math_id": 75,
"text": "\nf=cs\\mu(s,t)=c \\; \\frac{|{\\bf \\xi}+{\\bf \\eta}|-|{\\bf \\xi}|}{|{\\bf \\xi}|}\\mu(s,t),\n"
},
{
"math_id": 76,
"text": "\n\\mu(s,t)=\\left\\{\\begin{array}{ll}\n1\\, , & \\text { if } s\\left(t^{\\prime}, {\\bf \\xi}\\right)<s_{0}\\, , \\\\\n0\\, , & \\text { otherwise, }\n\\end{array}\\ \\ \\ \\ \\text { for all } 0 \\leq t^{\\prime} \\leq t\\right.;\n"
},
{
"math_id": 77,
"text": " c "
},
{
"math_id": 78,
"text": " \\mu(s, t) "
},
{
"math_id": 79,
"text": " t'\\leq t "
},
{
"math_id": 80,
"text": " s "
},
{
"math_id": 81,
"text": " ({\\bf x,\\,x'}) "
},
{
"math_id": 82,
"text": " s_0 "
},
{
"math_id": 83,
"text": " t \\geq t' "
},
{
"math_id": 84,
"text": "\nc=\\frac{18 k}{\\pi \\delta^{4}},\n"
},
{
"math_id": 85,
"text": "k"
},
{
"math_id": 86,
"text": " c({\\bf \\xi},\\delta) "
},
{
"math_id": 87,
"text": " B_{\\delta}({\\bf x}) "
},
{
"math_id": 88,
"text": "{\\bf x}' \\in B_{\\delta}({\\bf x}) "
},
{
"math_id": 89,
"text": "\n c({\\bf \\xi},\\delta):=c(\\bf{0},\\delta)k({\\bf \\xi},\\delta)\\, ,\n"
},
{
"math_id": 90,
"text": " c(\\bf{0},\\delta) "
},
{
"math_id": 91,
"text": " k({\\bf \\xi},\\delta) "
},
{
"math_id": 92,
"text": " \\Omega_0 "
},
{
"math_id": 93,
"text": "\n\\left\\{\\begin{array}{l}\nk({\\bf \\xi}, \\delta)=k(-{\\bf \\xi}, \\delta)\\, , \\\\\n\\lim _{{\\bf \\xi} \\rightarrow \\bf{0}} k({\\bf \\xi}, \\delta)=\\max_{{\\bf \\xi}\\ \\in \\R^n}\\{ k({\\bf \\xi},\\delta)\\}\\, , \\\\\n\\lim _{{\\bf \\xi} \\rightarrow \\delta} k({\\bf \\xi}, \\delta)=0 \\, ,\\\\\n\\int_{\\R^n} \\lim _{\\delta \\rightarrow 0} k({\\bf \\xi}, \\delta) d {\\bf x}=\\int_{\\R^n} \\Delta({\\bf \\xi}) d {\\bf x}=1\\, ,\n\\end{array}\\right.\n"
},
{
"math_id": 94,
"text": " \\Delta({\\bf \\xi}) "
},
{
"math_id": 95,
"text": "c(\\bf{0},\\delta)k({\\bf \\xi},\\delta)=c\\bf{1}_{B_{\\delta}({\\bf x}')} "
},
{
"math_id": 96,
"text": " \\bf{1}_{A} "
},
{
"math_id": 97,
"text": " X \\rightarrow \\R "
},
{
"math_id": 98,
"text": " A \\subset X "
},
{
"math_id": 99,
"text": "\n\\mathbf{1}_{A}(x):= \\begin{cases} 1, & x \\in A\\, , \\\\ 0, & x \\notin A\\, , \\end{cases}\\; \\;;\n"
},
{
"math_id": 100,
"text": "\n k({\\bf \\xi},\\delta)= \\left( 1-\\frac{|{\\bf \\xi}|}{\\delta} \\right)\\bf{1}_{B_{\\delta}({\\bf x}')}. \n"
},
{
"math_id": 101,
"text": "\nk({\\bf \\xi},\\delta)=e^{-(|{\\bf \\xi}| / \\delta)^{2}}\\bf{1}_{B_{\\delta}({\\bf x}')};\n"
},
{
"math_id": 102,
"text": "\nk({\\bf \\xi}, \\delta)=\\left(1-\\left(\\frac{\\xi}{\\delta}\\right)^{2}\\right)^{2}\\bf{1}_{B_{\\delta}({\\bf x}')}.\n"
},
{
"math_id": 103,
"text": "{\\bf f}(s,t)"
},
{
"math_id": 104,
"text": "s"
},
{
"math_id": 105,
"text": "\\mu"
},
{
"math_id": 106,
"text": "{\\bf F}({\\bf x}, t) := \\int_{B_\\delta({\\bf x})}\\left\\{\\underline{\\mathbf{T}}[\\mathbf{x}, t]\\left\\langle\\mathbf{x}^{\\prime}-\\mathbf{x}\\right\\rangle-\\underline{\\mathbf{T}}\\left[\\mathbf{x}^{\\prime}, t\\right]\\left\\langle\\mathbf{x}-\\mathbf{x}^{\\prime}\\right\\rangle\\right\\} d V_{\\mathbf{x}^{\\prime}}+\\mathbf{b}(\\mathbf{x}, t), "
},
{
"math_id": 107,
"text": " \\underline{\\mathbf{T}} "
},
{
"math_id": 108,
"text": " \\underline{\\mathbf{A}}\\langle\\cdot\\rangle: B_\\delta({\\bf x}) \\rightarrow \\mathcal{L}_m . "
},
{
"math_id": 109,
"text": "\\underline{\\mathbf{T}}"
},
{
"math_id": 110,
"text": "\\underline{\\mathbf{T}}:=\\underline{\\mathbf{\\hat{T}}}(\\underline{\\mathbf{Y}})"
},
{
"math_id": 111,
"text": "\\underline{\\mathbf{\\hat{T}}}: \\mathcal{V} \\rightarrow \\mathcal{V} "
},
{
"math_id": 112,
"text": "\\underline{\\mathbf{Y}}"
},
{
"math_id": 113,
"text": "\n\n\\underline{\\mathbf{Y}}[\\mathbf{x}, t]\\langle\\boldsymbol{\\xi}\\rangle=\\mathbf{y}(\\mathbf{x}+\\boldsymbol{\\xi}, t)-\\mathbf{y}(\\mathbf{x}, t) \\quad \\forall \\mathbf{x} \\in \\Omega_0, \\xi \\in B_{\\delta}({\\bf x}), t \\geq 0\n\n"
},
{
"math_id": 114,
"text": " \\underline{\\mathbf{Y}}\\left\\langle\\mathbf{x}^{\\prime}-\\mathbf{x}\\right\\rangle "
},
{
"math_id": 115,
"text": " \\mathbf{x}^{\\prime}-\\mathbf{x} "
},
{
"math_id": 116,
"text": "\n\\underline{\\mathbf{Y}}\\langle\\boldsymbol{\\xi}\\rangle=\\mathbf{0} \\text { if and only if } \\boldsymbol{\\xi}=\\mathbf{0},\n\n"
},
{
"math_id": 117,
"text": "{\\bf F}({\\bf x, \\, t })"
},
{
"math_id": 118,
"text": "\\int_{B_\\delta({\\bf x})} \\underline{\\mathbf{Y}}\\langle\\boldsymbol{\\xi}\\rangle \\times \\underline{\\mathbf{T}}\\langle\\boldsymbol{\\xi}\\rangle d V_{\\boldsymbol{\\xi}}=0 \\quad \\forall \\underline{\\mathbf{Y}} \\in \\mathcal{V}"
}
] | https://en.wikipedia.org/wiki?curid=12905248 |
1290557 | Phase correlation | Phase correlation is an approach to estimate the relative translative offset between two similar images (digital image correlation) or other data sets. It is commonly used in image registration and relies on a frequency-domain representation of the data, usually calculated by fast Fourier transforms. The term is applied particularly to a subset of cross-correlation techniques that isolate the phase information from the Fourier-space representation of the cross-correlogram.
Example.
The following image demonstrates the usage of phase correlation to determine relative translative movement between two images corrupted by independent Gaussian noise. The image was translated by (30,33) pixels. Accordingly, one can clearly see a peak in the phase-correlation representation at approximately (30,33).
Method.
Given two input images formula_0 and formula_1:
Apply a window function (e.g., a Hamming window) on both images to reduce edge effects (this may be optional depending on the image characteristics). Then, calculate the discrete 2D Fourier transform of both images.
formula_2
Calculate the cross-power spectrum by taking the complex conjugate of the second result, multiplying the Fourier transforms together elementwise, and normalizing this product elementwise.
formula_3
where formula_4 is the Hadamard product (entry-wise product) and the absolute values are taken entry-wise as well. Written out entry-wise for element index formula_5:
formula_6
Obtain the normalized cross-correlation by applying the inverse Fourier transform.
formula_7
Determine the location of the peak in formula_8.
formula_9
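The steps above translate almost directly into code. The following NumPy sketch is one straightforward way of implementing them; the function name, the Hamming window, and the small constant added to the denominator are choices made for the example rather than part of the definition.

```python
import numpy as np

def phase_correlation(ga, gb, window=True, eps=1e-15):
    """Estimate the integer (row, column) translation of image ga relative to gb."""
    if window:
        # Hamming window to reduce edge effects (optional, see the first step above).
        w = np.hamming(ga.shape[0])[:, None] * np.hamming(ga.shape[1])[None, :]
        ga, gb = ga * w, gb * w

    Ga, Gb = np.fft.fft2(ga), np.fft.fft2(gb)

    # Normalized cross-power spectrum (eps avoids division by zero).
    R = Ga * np.conj(Gb)
    R /= np.abs(R) + eps

    # Inverse transform gives the correlation surface; its peak is the shift.
    r = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)

    # Map peak indices past the midpoint to negative shifts.
    if dy > ga.shape[0] // 2:
        dy -= ga.shape[0]
    if dx > ga.shape[1] // 2:
        dx -= ga.shape[1]
    return dy, dx

# Example: recover a known circular shift of a random test image.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
shifted = np.roll(img, shift=(30, 33), axis=(0, 1))
print(phase_correlation(shifted, img, window=False))   # expected: (30, 33)
```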
Commonly, interpolation methods are used to estimate the peak location in the cross-correlogram to non-integer values, despite the fact that the data are discrete, and this procedure is often termed 'subpixel registration'. A large variety of subpixel interpolation methods are given in the technical literature. Common peak interpolation methods such as parabolic interpolation have been used, and the OpenCV computer vision package uses a centroid-based method, though these generally have inferior accuracy compared to more sophisticated methods.
Because the Fourier representation of the data has already been computed, it is especially convenient to use the Fourier shift theorem with real-valued (sub-integer) shifts for this purpose, which essentially interpolates using the sinusoidal basis functions of the Fourier transform. An especially popular FT-based estimator is given by Foroosh "et al." In this method, the subpixel peak location is approximated by a simple formula involving peak pixel value and the values of its nearest neighbors, where formula_10 is the peak value and formula_11 is the nearest neighbor in the x direction (assuming, as in most approaches, that the integer shift has already been found and the comparand images differ only by a subpixel shift).
formula_12
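As an illustration only (not the authors' reference implementation), such an estimator might be applied as in the sketch below. The function name and the rule of keeping the candidate whose magnitude is below one are choices made here for the example; the same computation with the neighbour in the y direction gives the sub-pixel offset along the other axis.
<syntaxhighlight lang="python">
def subpixel_offset(r_peak, r_side):
    """Sub-pixel shift along one axis from the peak value and one neighbour.

    Implements delta = r_side / (r_side +/- r_peak); both signs are evaluated
    and the solution with magnitude below one is kept (a common convention,
    assumed here rather than stated in the text above).
    """
    candidates = []
    for sign in (+1.0, -1.0):
        denom = r_side + sign * r_peak
        if denom != 0.0:
            candidates.append(r_side / denom)
    valid = [c for c in candidates if abs(c) < 1.0]
    return valid[0] if valid else 0.0
</syntaxhighlight>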
The Foroosh "et al." method is quite fast compared to most methods, though it is not always the most accurate. Some methods shift the peak in Fourier space and apply non-linear optimization to maximize the correlogram peak, but these tend to be very slow since they must apply an inverse Fourier transform or its equivalent in the objective function.
It is also possible to infer the peak location from phase characteristics in Fourier space without the inverse transformation, as noted by Stone. These methods usually use a linear least squares (LLS) fit of the phase angles to a planar model. The long latency of the phase angle computation in these methods is a disadvantage, but the speed can sometimes be comparable to the Foroosh "et al." method depending on the image size. They often compare favorably in speed to the multiple iterations of extremely slow objective functions in iterative non-linear methods.
Since all subpixel shift computation methods are fundamentally interpolative, the performance of a particular method depends on how well the underlying data conform to the assumptions in the interpolator. This fact also may limit the usefulness of high numerical accuracy in an algorithm, since the uncertainty due to interpolation method choice may be larger than any numerical or approximation error in the particular method.
Subpixel methods are also particularly sensitive to noise in the images, and the utility of a particular algorithm is distinguished not only by its speed and accuracy but its resilience to the particular types of noise in the application.
Rationale.
The method is based on the Fourier shift theorem.
Let the two images formula_0 and formula_1 be circularly-shifted versions of each other:
formula_13
(where the images are formula_14 in size).
Then, the discrete Fourier transforms of the images will be shifted relatively in phase:
formula_15
One can then calculate the normalized cross-power spectrum to factor out the phase difference:
formula_16
since the magnitude of an imaginary exponential always is one, and the phase of formula_17 always is zero.
The inverse Fourier transform of a complex exponential is a Dirac delta function, i.e. a single peak:
formula_18
This result could have been obtained by calculating the cross correlation directly. The advantage of this method is that the discrete Fourier transform and its inverse can be performed using the fast Fourier transform, which is much faster than correlation for large images.
Benefits.
Unlike many spatial-domain algorithms, the phase correlation method is resilient to noise, occlusions, and other defects typical of medical or satellite images.
The method can be extended to determine rotation and scaling differences between two images by first converting the images to log-polar coordinates. Due to properties of the Fourier transform, the rotation and scaling parameters can be determined in a manner invariant to translation.
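One way this extension is often realized in code is sketched below; it is an illustration under stated assumptions, not a reference implementation. The magnitudes of the two Fourier transforms, which do not change under translation, are resampled onto a log-polar grid, and phase-correlating the two resampled magnitude spectra then yields a peak whose axes correspond to rotation angle and the logarithm of the scale factor. SciPy's map_coordinates is used for the resampling; the grid sizes and the half-turn angular range (the magnitude spectrum is symmetric) are implementation choices.
<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar_magnitude(img, n_angles=360, n_radii=None):
    """Resample the magnitude spectrum of img onto a (log-radius, angle) grid."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = mag.shape[0] / 2.0, mag.shape[1] / 2.0
    if n_radii is None:
        n_radii = int(min(cy, cx))
    # Logarithmically spaced radii, uniformly spaced angles over half a turn.
    log_r = np.linspace(0.0, np.log(min(cy, cx)), n_radii)
    theta = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    r_grid, t_grid = np.meshgrid(np.exp(log_r), theta, indexing='ij')
    coords = np.array([cy + r_grid * np.sin(t_grid),   # row coordinates
                       cx + r_grid * np.cos(t_grid)])  # column coordinates
    return map_coordinates(mag, coords, order=1, mode='constant')

# Phase-correlating log_polar_magnitude(g_a) against log_polar_magnitude(g_b)
# (for example with the routine sketched in the Method section) gives a peak
# whose offset along the radius axis encodes the scale factor and whose offset
# along the angle axis encodes the rotation between the two images.
</syntaxhighlight>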
Limitations.
In practice, it is more likely that formula_1 will be a simple linear shift of formula_0, rather than a circular shift as required by the explanation above. In such cases, formula_8 will not be a simple delta function, which will reduce the performance of the method. In such cases, a window function (such as a Gaussian or Tukey window) should be employed during the Fourier transform to reduce edge effects, or the images should be zero padded so that the edge effects can be ignored. If the images consist of a flat background, with all detail situated away from the edges, then a linear shift will be equivalent to a circular shift, and the above derivation will hold exactly. The peak can be sharpened by using edge or vector correlation.
For periodic images (such as a chessboard or picket fence), phase correlation may yield ambiguous results with several peaks in the resulting output.
Applications.
Phase correlation is the preferred method for television standards conversion, as it leaves the fewest artifacts.
See also.
General
Television | [
{
"math_id": 0,
"text": "\\ g_a"
},
{
"math_id": 1,
"text": "\\ g_b"
},
{
"math_id": 2,
"text": "\\ \\mathbf{G}_a = \\mathcal{F}\\{g_a\\}, \\; \\mathbf{G}_b = \\mathcal{F}\\{g_b\\}"
},
{
"math_id": 3,
"text": "\\ R = \\frac{ \\mathbf{G}_a \\circ \\mathbf{G}_b^*}{|\\mathbf{G}_a \\circ \\mathbf{G}_b^*|}"
},
{
"math_id": 4,
"text": "\\circ"
},
{
"math_id": 5,
"text": "(j,k)"
},
{
"math_id": 6,
"text": "\\ R_{jk} = \\frac{ G_{a,jk} \\cdot G_{b,jk}^*}{|G_{a,jk} \\cdot G_{b,jk}^* |}"
},
{
"math_id": 7,
"text": "\\ r = \\mathcal{F}^{-1}\\{R\\}"
},
{
"math_id": 8,
"text": "\\ r"
},
{
"math_id": 9,
"text": "\\ (\\Delta x, \\Delta y) = \\arg \\max_{(x, y)}\\{r\\}"
},
{
"math_id": 10,
"text": "r_{(0,0)}"
},
{
"math_id": 11,
"text": "r_{(1,0)}"
},
{
"math_id": 12,
"text": "\\ \\Delta x = \\frac{ r_{(1,0)} }{r_{(1,0)} \\plusmn r_{(0,0)}}"
},
{
"math_id": 13,
"text": "\\ g_b(x,y) \\ \\stackrel{\\mathrm{def}}{=}\\ g_a((x - \\Delta x) \\bmod M, (y - \\Delta y) \\bmod N)"
},
{
"math_id": 14,
"text": "\\ M \\times N"
},
{
"math_id": 15,
"text": "\\mathbf{G}_b(u,v) = \\mathbf{G}_a(u,v) e^{-2 \\pi i (\\frac{u \\Delta x}{M} + \\frac{v \\Delta y}{N}) }"
},
{
"math_id": 16,
"text": "\n\\begin{align}\n R(u,v) &= \\frac{ \\mathbf{G}_a \\mathbf{G}_b^*}{|\\mathbf{G}_a \\mathbf{G}_b^* |} \\\\\n &= \\frac{ \\mathbf{G}_a \\mathbf{G}_a^* e^{2 \\pi i (\\frac{u \\Delta x}{M} + \\frac{v \\Delta y}{N}) }}{|\\mathbf{G}_a \\mathbf{G}_a^* e^{2 \\pi i (\\frac{u \\Delta x}{M} + \\frac{v \\Delta y}{N}) }|} \\\\\n &= \\frac{ \\mathbf{G}_a \\mathbf{G}_a^* e^{2 \\pi i (\\frac{u \\Delta x}{M} + \\frac{v \\Delta y}{N}) }}{|\\mathbf{G}_a \\mathbf{G}_a^*|} \\\\\n &= e^{2 \\pi i (\\frac{u \\Delta x}{M} + \\frac{v \\Delta y}{N}) }\n\\end{align}\n"
},
{
"math_id": 17,
"text": "\\ \\mathbf{G}_a \\mathbf{G}_a^*"
},
{
"math_id": 18,
"text": "\\ r(x,y) = \\delta(x + \\Delta x, y + \\Delta y)"
}
] | https://en.wikipedia.org/wiki?curid=1290557 |
12906191 | Diophantus II.VIII | The eighth problem of the second book of "Arithmetica" by Diophantus (c. 200/214 AD – c. 284/298 AD) is to divide a square into a sum of two squares.
The solution given by Diophantus.
Diophantus takes the square to be 16 and solves the problem as follows:
To divide a given square into a sum of two squares.
To divide 16 into a sum of two squares.
Let the first summand be formula_0, and thus the second formula_1. The latter is to be a square. I form the square of the difference of an arbitrary multiple of "x" diminished by the root [of] 16, that is, diminished by 4. I form, for example, the square of 2"x" − 4. It is formula_2. I put this expression equal to formula_1. I add to both sides formula_3 and subtract 16. In this way I obtain formula_4, hence formula_5.
Thus one number is 256/25 and the other 144/25. The sum of these numbers is 16 and each summand is a square.
Geometrical interpretation.
Geometrically, we may illustrate this method by drawing the circle "x"2 + "y"2 = 42 and the line "y" = 2"x" - 4. The pair of squares sought are then "x"02 and "y"02, where ("x"0, "y"0) is the point not on the "y"-axis where the line and circle intersect. This is shown in the adjacent diagram.
Generalization of Diophantus's solution.
We may generalize Diophantus's solution to solve the problem for any given square, which we will represent algebraically as "a"2. Also, since Diophantus refers to an arbitrary multiple of "x", we will take the arbitrary multiple to be "tx". Then:
formula_6
Therefore, we find that one of the summands is formula_7 and the other is formula_8. The sum of these numbers is formula_9 and each summand is a square. Geometrically, we have intersected the circle "x"2 + "y"2 = "a"2 with the line "y" = "tx" - "a", as shown in the adjacent diagram. Writing the lengths, OB, OA, and AB, of the sides of triangle OAB as an ordered tuple, we obtain the triple
formula_10.
The specific result obtained by Diophantus may be obtained by taking "a" = 4 and "t" = 2:
formula_11
We see that Diophantus' particular solution is in fact a subtly disguised (3, 4, 5) triple. However, as the triple will always be rational as long as "a" and "t" are rational, we can obtain an infinity of rational triples by changing the value of "t", and hence changing the value of the arbitrary multiple of "x".
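A short computation makes the "infinity of rational triples" concrete. The sketch below uses Python's fractions module to evaluate the triple for "a" = 4 and a few arbitrary rational values of "t", checking that the two summands are squares whose sum is "a"2; the function name and the sample values of "t" are chosen only for illustration.
<syntaxhighlight lang="python">
from fractions import Fraction

def diophantus_triple(a, t):
    """Return [a, 2at/(t^2+1), a(t^2-1)/(t^2+1)] as exact rational numbers."""
    a, t = Fraction(a), Fraction(t)
    d = t * t + 1
    return a, 2 * a * t / d, a * (t * t - 1) / d

for t in (2, 3, Fraction(5, 2)):
    a, x, y = diophantus_triple(4, t)
    assert x * x + y * y == a * a        # the two squares always sum to 16
    print(t, x * x, y * y)
# t = 2 reproduces Diophantus' own answer: the squares 256/25 and 144/25.
</syntaxhighlight>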
This algebraic solution needs only one additional step to arrive at the Platonic sequence formula_12, namely multiplying all sides of the above triple by the factor formula_13. Notice also that if "a" = 1, the sides [OB, OA, AB] reduce to
formula_14
In modern notation this is just formula_15 for θ shown in the above graph, written in terms of the cotangent "t" of θ/2. In the particular example given by Diophantus, "t" has a value of 2, the arbitrary multiplier of "x". Upon clearing denominators, this expression will generate Pythagorean triples. Intriguingly, the arbitrary multiplier of "x" has become the cornerstone of the generator expression(s).
Diophantus II.IX reaches the same solution by an even quicker route which is very similar to the 'generalized solution' above. Once again the problem is to divide 16 into two squares.
Let the first number be "N" and the second an arbitrary multiple of "N" diminished by the root (of) 16. For example 2"N" − 4. Then:
formula_16
Fermat's famous comment, which later became Fermat's Last Theorem, appears sandwiched between 'Quaestio VIII' and 'Quaestio IX' in a 1670 edition of Arithmetica.
References.
| [
{
"math_id": 0,
"text": "x^2"
},
{
"math_id": 1,
"text": "16-x^2"
},
{
"math_id": 2,
"text": "4x^2+16-16x"
},
{
"math_id": 3,
"text": "x^2+16x"
},
{
"math_id": 4,
"text": "5x^2=16x"
},
{
"math_id": 5,
"text": "x=16/5"
},
{
"math_id": 6,
"text": "\n\\begin{align}\n& (tx-a)^2 = a^2-x^2\\\\\n\\Rightarrow\\ \\ & t^2x^2-2atx+a^2 = a^2-x^2\\\\\n\\Rightarrow\\ \\ & x^2(t^2+1) = 2atx\\\\\n\\Rightarrow\\ \\ & x = \\frac{2at}{t^2+1}\\text{ or }x=0.\\\\\n\\end{align}\n"
},
{
"math_id": 7,
"text": "x^2=\\left(\\tfrac{2at}{t^2+1}\\right)^2"
},
{
"math_id": 8,
"text": "(tx-a)^2=\\left(\\tfrac{a(t^2-1)}{t^2+1}\\right)^2"
},
{
"math_id": 9,
"text": "a^2"
},
{
"math_id": 10,
"text": " \\left[ a; \\frac{2at}{t^2+1}; \\frac{a(t^2-1)}{t^2+1}\\right]"
},
{
"math_id": 11,
"text": " \\left[ a; \\frac{2at}{t^2+1}; \\frac{a(t^2-1)}{t^2+1}\\right] =\\left[ \\frac{20}{5};\\frac{16}{5};\\frac{12}{5} \\right]=\\frac{4}{5} \\left[5;4;3\\right]."
},
{
"math_id": 12,
"text": "[\\tfrac{t^2+1}{2};t;\\tfrac{t^2-1}{2}]"
},
{
"math_id": 13,
"text": "\\quad \\tfrac{t^2+1}{2a}"
},
{
"math_id": 14,
"text": " \\left[ 1; \\frac{2t}{t^2+1}; \\frac{t^2-1}{t^2+1}\\right]."
},
{
"math_id": 15,
"text": "(1,\\sin\\theta,\\cos\\theta), "
},
{
"math_id": 16,
"text": "\n\\begin{align}\n& N^2 + (2N - 4)^2 = 16\\\\\n\\Rightarrow\\ \\ & 5N^2+16-16N = 16\\\\\n\\Rightarrow\\ \\ & 5N^2 = 16N\\\\\n\\Rightarrow\\ \\ & N = \\frac{16}{5}\\\\\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=12906191 |
12908 | Global warming potential | Potential heat absorbed by a greenhouse gas
Global warming potential (GWP) is an index to measure how much infrared thermal radiation a greenhouse gas would absorb over a given time frame after it has been added to the atmosphere (or "emitted" to the atmosphere). The GWP makes different greenhouse gases comparable with regard to their "effectiveness in causing radiative forcing". It is expressed as a multiple of the radiation that would be absorbed by the same mass of added carbon dioxide (CO2), which is taken as a reference gas. Therefore, the GWP has a value of 1 for CO2. For other gases it depends on how strongly the gas absorbs infrared thermal radiation, how quickly the gas leaves the atmosphere, and the time frame being considered.
For example, methane has a GWP over 20 years (GWP-20) of 81.2 meaning that, for example, a leak of a tonne of methane is equivalent to emitting 81.2 tonnes of carbon dioxide measured over 20 years. As methane has a much shorter atmospheric lifetime than carbon dioxide, its GWP is much less over longer time periods, with a GWP-100 of 27.9 and a GWP-500 of 7.95.
The carbon dioxide equivalent (CO2e or CO2eq or CO2-e or CO2-eq) can be calculated from the GWP. For any gas, it is the mass of CO2 that would warm the earth as much as the mass of that gas. Thus it provides a common scale for measuring the climate effects of different gases. It is calculated as GWP times mass of the other gas.
Definition.
The global warming potential (GWP) is defined as an "index measuring the radiative forcing following an emission of a unit mass of a given substance, accumulated over a chosen time horizon, relative to that of the reference substance, carbon dioxide (CO2). The GWP thus represents the combined effect of the differing times these substances remain in the atmosphere and their effectiveness in causing radiative forcing."
In turn, "radiative forcing" is a scientific concept used to quantify and compare the external drivers of change to Earth's energy balance. Radiative forcing is the change in energy flux in the atmosphere caused by natural or anthropogenic factors of climate change as measured in watts per meter squared.
GWP in policymaking.
As governments develop policies to combat emissions from high-GWP sources, policymakers have chosen to use the 100-year GWP scale as the standard in international agreements. The Kigali Amendment to the Montreal Protocol sets the global phase-down of hydrofluorocarbons (HFCs), a group of high-GWP compounds. It requires countries to use a set of GWP100 values equal to those published in the IPCC’s Fourth Assessment Report (AR4). This allows policymakers to have one standard for comparison instead of changing GWP values in new assessment reports. One exception to the GWP100 standard exists: New York state’s Climate Leadership and Community Protection Act requires the use of GWP20, despite being a different standard from all other countries participating in phase downs of HFCs.
Calculated values.
Current values (IPCC Sixth Assessment Report from 2021).
The global warming potential (GWP) depends on both the efficiency of the molecule as a greenhouse gas and its atmospheric lifetime. GWP is measured relative to the same mass of CO2 and evaluated for a specific timescale. Thus, if a gas has a high (positive) radiative forcing but also a short lifetime, it will have a large GWP on a 20-year scale but a small one on a 100-year scale. Conversely, if a molecule has a longer atmospheric lifetime than CO2, its GWP will increase as longer timescales are considered. Carbon dioxide is defined to have a GWP of 1 over all time periods.
Methane has an atmospheric lifetime of 12 ± 2 years. The 2021 IPCC report lists the GWP as 83 over a time scale of 20 years, 30 over 100 years and 10 over 500 years. A 2014 analysis, however, states that although methane's initial impact is about 100 times greater than that of CO2, because of the shorter atmospheric lifetime, after six or seven decades, the impact of the two gases is about equal, and from then on methane's relative role continues to decline. The decrease in GWP at longer times is because methane decomposes to water and CO2 through chemical reactions in the atmosphere. Similarly the third most important GHG, nitrous oxide (N2O), is a common gas emitted through the denitrification part of the nitrogen cycle. It has a lifetime of 109 years and an even higher GWP level running at 273 over 20 and 100 years.
Examples of the atmospheric lifetime and GWP relative to CO2 for several greenhouse gases are given in the following table:
Estimates of GWP values over 20, 100 and 500 years are periodically compiled and revised in reports from the Intergovernmental Panel on Climate Change. The most recent report is the IPCC Sixth Assessment Report (Working Group I) from 2021.
The IPCC lists many other substances not shown here. Some have high GWP but only a low concentration in the atmosphere.
The values given in the table assume the same mass of compound is analyzed; different ratios will result from the conversion of one substance to another. For instance, burning methane to carbon dioxide would reduce the global warming impact, but by a smaller factor than 25:1 because the mass of methane burned is less than the mass of carbon dioxide released (ratio 1:2.74). For a starting amount of 1 tonne of methane, which has a GWP of 25, after combustion there would be 2.74 tonnes of CO2, each tonne of which has a GWP of 1. This is a net reduction of 22.26 tonnes of GWP, reducing the global warming effect by a ratio of 25:2.74 (approximately 9 times).
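The arithmetic in this comparison can be written out explicitly. The short sketch below simply restates the example, using the same figures quoted in the text (a GWP of 25 for methane and a CH4-to-CO2 combustion mass ratio of 2.74); the function name is chosen for illustration.
<syntaxhighlight lang="python">
def combustion_comparison(mass_ch4_tonnes, gwp_ch4=25.0, co2_per_ch4=2.74):
    """Compare venting methane with burning it to CO2, in tonnes of CO2-equivalent."""
    vented = mass_ch4_tonnes * gwp_ch4      # 1 t CH4 vented -> 25 t CO2e
    burned = mass_ch4_tonnes * co2_per_ch4  # 1 t CH4 burned -> 2.74 t CO2 (GWP 1)
    return vented, burned, vented - burned

print(combustion_comparison(1.0))  # (25.0, 2.74, 22.26): roughly a 9-fold reduction
</syntaxhighlight>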
Earlier values from 2007.
The values provided in the table below are from 2007 when they were published in the IPCC Fourth Assessment Report. These values are still used (as of 2020) for some comparisons.
Importance of time horizon.
A substance's GWP depends on the number of years (denoted by a subscript) over which the potential is calculated. A gas which is quickly removed from the atmosphere may initially have a large effect, but for longer time periods, as it has been removed, it becomes less important. Thus methane has a potential of 25 over 100 years (GWP100 = 25) but 86 over 20 years (GWP20 = 86); conversely sulfur hexafluoride has a GWP of 22,800 over 100 years but 16,300 over 20 years (IPCC Third Assessment Report). The GWP value depends on how the gas concentration decays over time in the atmosphere. This is often not precisely known and hence the values should not be considered exact. For this reason when quoting a GWP it is important to give a reference to the calculation.
The GWP for a mixture of gases can be obtained from the mass-fraction-weighted average of the GWPs of the individual gases.
Commonly, a time horizon of 100 years is used by regulators.
Water vapour.
Water vapour does contribute to anthropogenic global warming, but as the GWP is defined, it is negligible for H2O: an estimate gives a 100-year GWP between -0.001 and 0.0005.
H2O can function as a greenhouse gas because it has a profound infrared absorption spectrum with more and broader absorption bands than CO2. Its concentration in the atmosphere is limited by air temperature, so that radiative forcing by water vapour increases with global warming (positive feedback). But the GWP definition excludes indirect effects. GWP definition is also based on emissions, and anthropogenic emissions of water vapour (cooling towers, irrigation) are removed via precipitation within weeks, so its GWP is negligible.
Calculation methods.
When calculating the GWP of a greenhouse gas, the value depends on the following factors: how strongly the gas absorbs infrared radiation (its radiative efficiency), the wavelengths at which that absorption occurs, and the atmospheric lifetime of the gas.
A high GWP correlates with a large infrared absorption and a long atmospheric lifetime. The dependence of GWP on the wavelength of absorption is more complicated. Even if a gas absorbs radiation efficiently at a certain wavelength, this may not affect its GWP much, if the atmosphere already absorbs most radiation at that wavelength. A gas has the most effect if it absorbs in a "window" of wavelengths where the atmosphere is fairly transparent. The dependence of GWP as a function of wavelength has been found empirically and published as a graph.
Because the GWP of a greenhouse gas depends directly on its infrared spectrum, the use of infrared spectroscopy to study greenhouse gases is centrally important in the effort to understand the impact of human activities on global climate change.
Just as radiative forcing provides a simplified means of comparing the various factors that are believed to influence the climate system to one another, global warming potentials (GWPs) are one type of simplified index based upon radiative properties that can be used to estimate the potential future impacts of emissions of different gases upon the climate system in a relative sense. GWP is based on a number of factors, including the radiative efficiency (infrared-absorbing ability) of each gas relative to that of carbon dioxide, as well as the decay rate of each gas (the amount removed from the atmosphere over a given number of years) relative to that of carbon dioxide.
The radiative forcing capacity (RF) is the amount of energy per unit area, per unit time, absorbed by the greenhouse gas, that would otherwise be lost to space. It can be expressed by the formula:
formula_0
where the subscript "i" represents a wavenumber interval of 10 inverse centimeters. Absi represents the integrated infrared absorbance of the sample in that interval, and Fi represents the RF for that interval.
The Intergovernmental Panel on Climate Change (IPCC) provides the generally accepted values for GWP, which changed slightly between 1996 and 2001, except for methane, which had its GWP almost doubled. An exact definition of how GWP is calculated is to be found in the IPCC's 2001 Third Assessment Report. The GWP is defined as the ratio of the time-integrated radiative forcing from the instantaneous release of 1 kg of a trace substance relative to that of 1 kg of a reference gas:
formula_1
where TH is the time horizon over which the calculation is considered; ax is the radiative efficiency due to a unit increase in atmospheric abundance of the substance (i.e., Wm−2 kg−1) and [x](t) is the time-dependent decay in abundance of the substance following an instantaneous release of it at time t=0. The denominator contains the corresponding quantities for the reference gas (i.e. CO2). The radiative efficiencies ax and ar are not necessarily constant over time. While the absorption of infrared radiation by many greenhouse gases varies linearly with their abundance, a few important ones display non-linear behaviour for current and likely future abundances (e.g., CO2, CH4, and N2O). For those gases, the relative radiative forcing will depend upon abundance and hence upon the future scenario adopted.
Since all GWP calculations are a comparison to CO2 which is non-linear, all GWP values are affected. Assuming otherwise as is done above will lead to lower GWPs for other gases than a more detailed approach would. Clarifying this, while increasing CO2 has less and less effect on radiative absorption as ppm concentrations rise, more powerful greenhouse gases like methane and nitrous oxide have different thermal absorption frequencies to CO2 that are not filled up (saturated) as much as CO2, so rising ppms of these gases are far more significant.
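Setting those complications aside, the structure of the integrals in the GWP definition can be illustrated with a toy calculation in which each gas decays as a single exponential with a fixed radiative efficiency. The numbers below are placeholders chosen only to show the shape of the computation; in particular, CO2 is crudely modelled with one long lifetime instead of the multi-exponential impulse-response function used in the IPCC reports.
<syntaxhighlight lang="python">
import numpy as np

def agwp(radiative_efficiency, lifetime_years, horizon_years, steps=100000):
    """Absolute GWP: time-integrated forcing a * exp(-t/tau) over the horizon."""
    t, dt = np.linspace(0.0, horizon_years, steps, retstep=True)
    return np.sum(radiative_efficiency * np.exp(-t / lifetime_years)) * dt

def gwp(a_x, tau_x, a_co2, tau_co2, horizon_years):
    """Ratio of a gas's AGWP to that of the CO2 reference (toy decay model)."""
    return agwp(a_x, tau_x, horizon_years) / agwp(a_co2, tau_co2, horizon_years)

# Placeholder radiative efficiencies and lifetimes, purely for illustration:
print(gwp(a_x=1.3e-13, tau_x=12.0, a_co2=1.4e-15, tau_co2=200.0, horizon_years=100.0))
</syntaxhighlight>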
Applications.
Carbon dioxide equivalent.
Carbon dioxide equivalent (CO2e or CO2eq or CO2-e) of a quantity of gas is calculated from its GWP. For any gas, it is the mass of CO2 which would warm the earth as much as the mass of that gas. Thus it provides a common scale for measuring the climate effects of different gases. It is calculated as GWP multiplied by mass of the other gas. For example, if a gas has GWP of 100, two tonnes of the gas have CO2e of 200 tonnes, and 9 tonnes of the gas has CO2e of 900 tonnes.
On a global scale, the warming effects of one or more greenhouse gases in the atmosphere can also be expressed as an equivalent atmospheric concentration of CO2. CO2e can then be the atmospheric concentration of CO2 which would warm the earth as much as a particular concentration of some other gas or of all gases and aerosols in the atmosphere. For example, CO2e of 500 parts per million would reflect a mix of atmospheric gases which warm the earth as much as 500 parts per million of CO2 would warm it. Calculation of the equivalent atmospheric concentration of CO2 of an atmospheric greenhouse gas or aerosol is more complex and involves the atmospheric concentrations of those gases, their GWPs, and the ratios of their molar masses to the molar mass of CO2.
CO2e calculations depend on the time-scale chosen, typically 100 years or 20 years, since gases decay in the atmosphere or are absorbed naturally, at different rates.
Commonly used units include millions or billions of tonnes of CO2 equivalent (MtCO2e, GtCO2e) for national and global inventories, and grams of CO2 equivalent per kilometre for vehicles.
For example, the table above shows GWP for methane over 20 years at 86 and nitrous oxide at 289, so emissions of 1 million tonnes of methane or nitrous oxide are equivalent to emissions of 86 or 289 million tonnes of carbon dioxide, respectively.
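Converting an emission inventory into CO2 equivalents is then a single multiplication per gas, as in the minimal sketch below. The 20-year GWP values 86 and 289 are the ones quoted in this paragraph; the emission quantities and variable names are made up for illustration.
<syntaxhighlight lang="python">
GWP20 = {"CO2": 1.0, "CH4": 86.0, "N2O": 289.0}  # 20-year values quoted above

def co2_equivalent(emissions_tonnes, gwp=GWP20):
    """emissions_tonnes: mapping of gas name to mass emitted in tonnes."""
    return sum(mass * gwp[gas] for gas, mass in emissions_tonnes.items())

# One million tonnes each of methane and nitrous oxide:
print(co2_equivalent({"CH4": 1e6, "N2O": 1e6}))  # 375 million tonnes CO2e
</syntaxhighlight>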
Use in Kyoto Protocol and for reporting to UNFCCC.
Under the Kyoto Protocol, in 1997 the Conference of the Parties standardized international reporting, by deciding (see decision number 2/CP.3) that the values of GWP calculated for the IPCC Second Assessment Report were to be used for converting the various greenhouse gas emissions into comparable CO2 equivalents.
After some intermediate updates, in 2013 this standard was updated by the Warsaw meeting of the UN Framework Convention on Climate Change (UNFCCC, decision number 24/CP.19) to require using a new set of 100-year GWP values. They published these values in Annex III, and they took them from the IPCC Fourth Assessment Report, which had been published in 2007. Those 2007 estimates are still used for international comparisons through 2020, although the latest research on warming effects has found other values, as shown in the tables above.
Though recent reports reflect more scientific accuracy, countries and companies continue to use the IPCC Second Assessment Report (SAR) and IPCC Fourth Assessment Report values for reasons of comparison in their emission reports. The IPCC Fifth Assessment Report has skipped the 500-year values but introduced GWP estimations including the climate-carbon feedback (f) with a large amount of uncertainty.
Other metrics to compare greenhouse gases.
The "Global Temperature change Potential" (GTP) is another way to compare gases. While GWP estimates infrared thermal radiation absorbed, GTP estimates the resulting rise in average surface temperature of the world, over the next 20, 50 or 100 years, caused by a greenhouse gas, relative to the temperature rise which the same mass of CO2 would cause. Calculation of GTP requires modelling how the world, especially the oceans, will absorb heat. GTP is published in the same IPCC tables with GWP.
"GWP*" has been proposed to take better account of short-lived climate pollutants (SLCP) such as methane, relating a change in the rate of emissions of SLCPs to a fixed quantity of CO2. However GWP* has itself been criticised both for its suitability as a metric and for inherent design features which can perpetuate injustices and inequity.
References.
| [
{
"math_id": 0,
"text": "\\mathit{RF} = \\sum_{i=1}^{100} \\text{abs}_i \\cdot F_i / \\left(\\text{l} \\cdot \\text{d}\\right)"
},
{
"math_id": 1,
"text": "\\mathit{GWP} \\left(x\\right) = \\frac{a_x}{a_r} \\frac{\\int_0^{\\mathit{TH}} [x](t)\\, dt} {\\int_0^{\\mathit{TH}} [r](t)\\, dt}"
}
] | https://en.wikipedia.org/wiki?curid=12908 |
1290811 | HOM | HOM, Hom or similar may refer to:
See also.
This disambiguation page lists articles associated with the title HOM. | [
{
"math_id": 0,
"text": "\\operatorname{Hom}(V,K)"
}
] | https://en.wikipedia.org/wiki?curid=1290811 |
12910 | Grothendieck topology | In category theory, a branch of mathematics, a Grothendieck topology is a structure on a category "C" that makes the objects of "C" act like the open sets of a topological space. A category together with a choice of Grothendieck topology is called a site.
Grothendieck topologies axiomatize the notion of an open cover. Using the notion of covering provided by a Grothendieck topology, it becomes possible to define sheaves on a category and their cohomology. This was first done in algebraic geometry and algebraic number theory by Alexander Grothendieck to define the étale cohomology of a scheme. It has been used to define other cohomology theories since then, such as ℓ-adic cohomology, flat cohomology, and crystalline cohomology. While Grothendieck topologies are most often used to define cohomology theories, they have found other applications as well, such as to John Tate's theory of rigid analytic geometry.
There is a natural way to associate a site to an ordinary topological space, and Grothendieck's theory is loosely regarded as a generalization of classical topology. Under meager point-set hypotheses, namely sobriety, this is completely accurate—it is possible to recover a sober space from its associated site. However simple examples such as the indiscrete topological space show that not all topological spaces can be expressed using Grothendieck topologies. Conversely, there are Grothendieck topologies that do not come from topological spaces.
The term "Grothendieck topology" has changed in meaning. In it meant what is now called a Grothendieck pretopology, and some authors still use this old meaning. modified the definition to use sieves rather than covers. Much of the time this does not make much difference, as each Grothendieck pretopology determines a unique Grothendieck topology, though quite different pretopologies can give the same topology.
Overview.
André Weil's famous Weil conjectures proposed that certain properties of equations with integral coefficients should be understood as geometric properties of the algebraic variety that they define. His conjectures postulated that there should be a cohomology theory of algebraic varieties that gives number-theoretic information about their defining equations. This cohomology theory was known as the "Weil cohomology", but using the tools he had available, Weil was unable to construct it.
In the early 1960s, Alexander Grothendieck introduced étale maps into algebraic geometry as algebraic analogues of local analytic isomorphisms in analytic geometry. He used étale coverings to define an algebraic analogue of the fundamental group of a topological space. Soon Jean-Pierre Serre noticed that some properties of étale coverings mimicked those of open immersions, and that consequently it was possible to make constructions that imitated the cohomology functor formula_0. Grothendieck saw that it would be possible to use Serre's idea to define a cohomology theory that he suspected would be the Weil cohomology. To define this cohomology theory, Grothendieck needed to replace the usual, topological notion of an open covering with one that would use étale coverings instead. Grothendieck also saw how to phrase the definition of covering abstractly; this is where the definition of a Grothendieck topology comes from.
Definition.
Motivation.
The classical definition of a sheaf begins with a topological space formula_1. A sheaf associates information to the open sets of formula_1. This information can be phrased abstractly by letting formula_2 be the category whose objects are the open subsets formula_3 of formula_1 and whose morphisms are the inclusion maps formula_4 of open sets formula_3 and formula_5 of formula_1. We will call such maps "open immersions", just as in the context of schemes. Then a presheaf on formula_1 is a contravariant functor from formula_2 to the category of sets, and a sheaf is a presheaf that satisfies the gluing axiom (here including the separation axiom). The gluing axiom is phrased in terms of pointwise covering, i.e., formula_6 covers formula_3 if and only if formula_7. In this definition, formula_8 is an open subset of formula_1. Grothendieck topologies replace each formula_8 with an entire family of open subsets; in this example, formula_8 is replaced by the family of all open immersions formula_9. Such a collection is called a "sieve". Pointwise covering is replaced by the notion of a "covering family"; in the above example, the set of all formula_10 as formula_11 varies is a covering family of formula_3. Sieves and covering families can be axiomatized, and once this is done open sets and pointwise covering can be replaced by other notions that describe other properties of the space formula_1.
Sieves.
In a Grothendieck topology, the notion of a collection of open subsets of "U" stable under inclusion is replaced by the notion of a sieve. If "c" is any given object in "C", a sieve on "c" is a subfunctor of the functor Hom(−, "c"); (this is the Yoneda embedding applied to "c"). In the case of "O"("X"), a sieve "S" on an open set "U" selects a collection of open subsets of "U" that is stable under inclusion. More precisely, consider that for any open subset "V" of "U", "S"("V") will be a subset of Hom("V", "U"), which has only one element, the open immersion "V" → "U". Then "V" will be considered "selected" by "S" if and only if "S"("V") is nonempty. If "W" is a subset of "V", then there is a morphism "S"("V") → "S"("W") given by composition with the inclusion "W" → "V". If "S"("V") is non-empty, it follows that "S"("W") is also non-empty.
If "S" is a sieve on "X", and "f": "Y" → "X" is a morphism, then left composition by "f" gives a sieve on "Y" called the pullback of "S" along "f", denoted by "f"formula_12"S". It is defined as the fibered product "S" ×Hom(−, "X") Hom(−, "Y") together with its natural embedding in Hom(−, "Y"). More concretely, for each object "Z" of "C", "f"formula_12"S"("Z") = { "g": "Z" → "Y" | "fg" formula_13"S"("Z") }, and "f"formula_12"S" inherits its action on morphisms by being a subfunctor of Hom(−, "Y"). In the classical example, the pullback of a collection {"V"i} of subsets of "U" along an inclusion "W" → "U" is the collection {"V"i∩W}.
Grothendieck topology.
A Grothendieck topology "J" on a category "C" is a collection, "for each object c of C", of distinguished sieves on "c", denoted by "J"("c") and called covering sieves of "c". This selection will be subject to certain axioms, stated below. Continuing the previous example, a sieve "S" on an open set "U" in "O"("X") will be a covering sieve if and only if the union of all the open sets "V" for which "S"("V") is nonempty equals "U"; in other words, if and only if "S" gives us a collection of open sets that cover "U" in the classical sense.
Axioms.
The conditions we impose on a Grothendieck topology are: (base change) if "S" is a covering sieve on "X" and "f" : "Y" → "X" is any morphism, then the pullback "f"formula_12"S" is a covering sieve on "Y"; (local character) if "S" is a covering sieve on "X", and "T" is any sieve on "X" such that "f"formula_12"T" is a covering sieve on "Y" for every "f" : "Y" → "X" in "S", then "T" is a covering sieve on "X"; and (identity) Hom(−, "X") is a covering sieve on "X" for every object "X" of "C".
The base change axiom corresponds to the idea that if {"Ui"} covers "U", then {"Ui" ∩ "V"} should cover "U" ∩ "V". The local character axiom corresponds to the idea that if {"Ui"} covers "U" and {"Vij"}"j formula_13Ji" covers "Ui" for each "i", then the collection {"Vij"} for all "i" and "j" should cover "U". Lastly, the identity axiom corresponds to the idea that any set is covered by itself via the identity map.
Grothendieck pretopologies.
In fact, it is possible to put these axioms in another form where their geometric character is more apparent, assuming that the underlying category "C" contains certain fibered products. In this case, instead of specifying sieves, we can specify that certain collections of maps with a common codomain should cover their codomain. These collections are called covering families. If the collection of all covering families satisfies certain axioms, then we say that they form a Grothendieck pretopology. These axioms are: (PT 0) for every object "X" of "C", every morphism "Y" → "X", and every covering family {"X""α" → "X"}, the fibered products "X""α" ×"X" "Y" exist; (PT 1) pulling back a covering family {"X""α" → "X"} along any morphism "Y" → "X" gives a covering family {"X""α" ×"X" "Y" → "Y"}; (PT 2) if {"X""α" → "X"} is a covering family and, for each "α", {"X""αβ" → "X""α"} is a covering family, then the family of composites {"X""αβ" → "X""α" → "X"} is a covering family; and (PT 3) if "f" : "Y" → "X" is an isomorphism, then the one-element family {"f"} is a covering family.
For any pretopology, the collection of all sieves that contain a covering family from the pretopology is always a Grothendieck topology.
For categories with fibered products, there is a converse. Given a collection of arrows {"X""α" → "X"}, we construct a sieve "S" by letting "S"("Y") be the set of all morphisms "Y" → "X" that factor through some arrow "X""α" → "X". This is called the sieve generated by {"X""α" → "X"}. Now choose a topology. Say that {"X""α" → "X"} is a covering family if and only if the sieve that it generates is a covering sieve for the given topology. It is easy to check that this defines a pretopology.
(PT 3) is sometimes replaced by a weaker axiom: (PT 3') for every object "X", the one-element family {id"X" : "X" → "X"} is a covering family.
(PT 3) implies (PT 3'), but not conversely. However, suppose that we have a collection of covering families that satisfies (PT 0) through (PT 2) and (PT 3'), but not (PT 3). These families generate a pretopology. The topology generated by the original collection of covering families is then the same as the topology generated by the pretopology, because the sieve generated by an isomorphism "Y" → "X" is Hom(−, "X"). Consequently, if we restrict our attention to topologies, (PT 3) and (PT 3') are equivalent.
Sites and sheaves.
Let "C" be a category and let "J" be a Grothendieck topology on "C". The pair ("C", "J") is called a site.
A presheaf on a category is a contravariant functor from "C" to the category of all sets. Note that for this definition "C" is not required to have a topology. A sheaf on a site, however, should allow gluing, just like sheaves in classical topology. Consequently, we define a sheaf on a site to be a presheaf "F" such that for all objects "X" and all covering sieves "S" on "X", the natural map Hom(Hom(−, "X"), "F") → Hom("S", "F"), induced by the inclusion of "S" into Hom(−, "X"), is a bijection. Halfway in between a presheaf and a sheaf is the notion of a separated presheaf, where the natural map above is required to be only an injection, not a bijection, for all sieves "S". A morphism of presheaves or of sheaves is a natural transformation of functors. The category of all sheaves on "C" is the topos defined by the site ("C", "J").
Using the Yoneda lemma, it is possible to show that a presheaf on the category "O"("X") is a sheaf on the topology defined above if and only if it is a sheaf in the classical sense.
Sheaves on a pretopology have a particularly simple description: For each covering family {"X""α" → "X"}, the diagram
formula_15
must be an equalizer. For a separated presheaf, the first arrow need only be injective.
Similarly, one can define presheaves and sheaves of abelian groups, rings, modules, and so on. One can require either that a presheaf "F" is a contravariant functor to the category of abelian groups (or rings, or modules, etc.), or that "F" be an abelian group (ring, module, etc.) object in the category of all contravariant functors from "C" to the category of sets. These two definitions are equivalent.
Examples of sites.
The discrete and indiscrete topologies.
Let C be any category. To define the discrete topology, we declare all sieves to be covering sieves. If C has all fibered products, this is equivalent to declaring all families to be covering families. To define the indiscrete topology, also known as the coarse or chaotic topology, we declare only the sieves of the form Hom(−, "X") to be covering sieves. The indiscrete topology is generated by the pretopology that has only isomorphisms for covering families. A sheaf on the indiscrete site is the same thing as a presheaf.
The canonical topology.
Let C be any category. The Yoneda embedding gives a functor Hom(−, "X") for each object "X" of C. The canonical topology is the biggest (finest) topology such that every representable presheaf, i.e. presheaf of the form Hom(−, "X"), is a sheaf. A covering sieve or covering family for this site is said to be "strictly universally epimorphic" because it consists of the legs of a colimit cone (under the full diagram on the domains of its constituent morphisms) and these colimits are stable under pullbacks along morphisms in C. A topology that is less fine than the canonical topology, that is, for which every covering sieve is strictly universally epimorphic, is called subcanonical. Subcanonical sites are exactly the sites for which every presheaf of the form Hom(−, "X") is a sheaf. Most sites encountered in practice are subcanonical.
Small site associated to a topological space.
We repeat the example that we began with above. Let "X" be a topological space. We defined "O"("X") to be the category whose objects are the open sets of "X" and whose morphisms are inclusions of open sets. Note that for an open set "U" and a sieve "S" on "U", the set "S"("V") contains either zero or one element for every open set "V". The covering sieves on an object "U" of "O"("X") are those sieves "S" satisfying the following condition: the union of all the open sets "V" for which "S"("V") is nonempty must be all of "U".
This notion of cover matches the usual notion in point-set topology.
This topology can also naturally be expressed as a pretopology. We say that a family of inclusions {"V""α" formula_16 "U"} is a covering family if and only if the union formula_17"V""α" equals "U". This site is called the small site associated to a topological space "X".
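For a finite topological space the covering condition on the small site can be checked mechanically, which may help make the definitions concrete. The sketch below is only an illustration: open sets are modelled as frozensets, and a sieve on "U" is represented by the family of open subsets it selects (those "V" with "S"("V") nonempty).
<syntaxhighlight lang="python">
def is_sieve(selected, opens, U):
    """A sieve on U must select open subsets of U and be stable under inclusion."""
    return all(V <= U for V in selected) and all(
        W in selected
        for V in selected
        for W in opens
        if W <= V
    )

def is_covering(selected, U):
    """The sieve covers U exactly when the union of its selected open sets is U."""
    return frozenset().union(*selected) == U if selected else U == frozenset()

# A two-point space with open sets {}, {1}, {1, 2}:
opens = [frozenset(), frozenset({1}), frozenset({1, 2})]
U = frozenset({1, 2})
S = [frozenset({1}), frozenset()]   # the sieve selecting {1} and the empty set
print(is_sieve(S, opens, U), is_covering(S, U))   # True False: {1} does not cover U
</syntaxhighlight>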
Big site associated to a topological space.
Let "Spc" be the category of all topological spaces. Given any family of functions {"u""α" : "V""α" → "X"}, we say that it is a surjective family or that the morphisms "u""α" are jointly surjective if formula_17 "u""α"("V""α") equals "X". We define a pretopology on "Spc" by taking the covering families to be surjective families all of whose members are open immersions. Let "S" be a sieve on "Spc". "S" is a covering sieve for this topology if and only if:
Fix a topological space "X". Consider the comma category "Spc/X" of topological spaces with a fixed continuous map to "X". The topology on "Spc" induces a topology on "Spc/X". The covering sieves and covering families are almost exactly the same; the only difference is that now all the maps involved commute with the fixed maps to "X". This is the big site associated to a topological space X . Notice that "Spc" is the big site associated to the one point space. This site was first considered by Jean Giraud.
The big and small sites of a manifold.
Let "M" be a manifold. "M" has a category of open sets "O"("M") because it is a topological space, and it gets a topology as in the above example. For two open sets "U" and "V" of "M", the fiber product "U" ×"M" "V" is the open set "U" ∩ "V", which is still in "O"("M"). This means that the topology on "O"("M") is defined by a pretopology, the same pretopology as before.
Let "Mfd" be the category of all manifolds and continuous maps. (Or smooth manifolds and smooth maps, or real analytic manifolds and analytic maps, etc.) "Mfd" is a subcategory of "Spc", and open immersions are continuous (or smooth, or analytic, etc.), so "Mfd" inherits a topology from "Spc". This lets us construct the big site of the manifold "M" as the site "Mfd/M". We can also define this topology using the same pretopology we used above. Notice that to satisfy (PT 0), we need to check that for any continuous map of manifolds "X" → "Y" and any open subset "U" of "Y", the fibered product "U" ×"Y" "X" is in "Mfd/M". This is just the statement that the preimage of an open set is open. Notice, however, that not all fibered products exist in "Mfd" because the preimage of a smooth map at a critical value need not be a manifold.
Topologies on the category of schemes.
The category of schemes, denoted "Sch", has a tremendous number of useful topologies. A complete understanding of some questions may require examining a scheme using several different topologies. All of these topologies have associated small and big sites. The big site is formed by taking the entire category of schemes and their morphisms, together with the covering sieves specified by the topology. The small site over a given scheme is formed by only taking the objects and morphisms that are part of a cover of the given scheme.
The most elementary of these is the Zariski topology. Let "X" be a scheme. "X" has an underlying topological space, and this topological space determines a Grothendieck topology. The Zariski topology on "Sch" is generated by the pretopology whose covering families are jointly surjective families of scheme-theoretic open immersions. The covering sieves "S" for "Zar" are therefore exactly those sieves that contain such a jointly surjective family of scheme-theoretic open immersions.
Despite their outward similarities, the topology on "Zar" is "not" the restriction of the topology on "Spc"! This is because there are morphisms of schemes that are topologically open immersions but that are not scheme-theoretic open immersions. For example, let "A" be a non-reduced ring and let "N" be its ideal of nilpotents. The quotient map "A" → "A/N" induces a map Spec "A/N" → Spec "A", which is the identity on underlying topological spaces. To be a scheme-theoretic open immersion it must also induce an isomorphism on structure sheaves, which this map does not do. In fact, this map is a closed immersion.
The étale topology is finer than the Zariski topology. It was the first Grothendieck topology to be closely studied. Its covering families are jointly surjective families of étale morphisms. It is finer than the Nisnevich topology, but neither finer nor coarser than the "cdh" and l′ topologies.
There are two flat topologies, the "fppf" topology and the "fpqc" topology. "fppf" stands for "fidèlement plate de présentation finie", and in this topology, a morphism of affine schemes is a covering morphism if it is faithfully flat, of finite presentation, and is quasi-finite. "fpqc" stands for "fidèlement plate quasi-compacte", and in this topology, a morphism of affine schemes is a covering morphism if it is faithfully flat. In both categories, a covering family is defined to be a family that is a cover on Zariski open subsets. In the fpqc topology, any faithfully flat and quasi-compact morphism is a cover. These topologies are closely related to descent. The "fpqc" topology is finer than all the topologies mentioned above, and it is very close to the canonical topology.
Grothendieck introduced crystalline cohomology to study the "p"-torsion part of the cohomology of characteristic "p" varieties. In the "crystalline topology", which is the basis of this theory, the underlying category has objects given by infinitesimal thickenings together with divided power structures. Crystalline sites are examples of sites with no final object.
Continuous and cocontinuous functors.
There are two natural types of functors between sites. They are given by functors that are compatible with the topology in a certain sense.
Continuous functors.
If ("C", "J") and ("D", "K") are sites and "u" : "C" → "D" is a functor, then "u" is continuous if for every sheaf "F" on "D" with respect to the topology "K", the presheaf "Fu" is a sheaf with respect to the topology "J". Continuous functors induce functors between the corresponding topoi by sending a sheaf "F" to "Fu". These functors are called pushforwards. If formula_18 and formula_19 denote the topoi associated to "C" and "D", then the pushforward functor is formula_20.
"u""s" admits a left adjoint "u""s" called the pullback. "u""s" need not preserve limits, even finite limits.
In the same way, "u" sends a sieve on an object "X" of "C" to a sieve on the object "uX" of "D". A continuous functor sends covering sieves to covering sieves. If "J" is the topology defined by a pretopology, and if "u" commutes with fibered products, then "u" is continuous if and only if it sends covering sieves to covering sieves and if and only if it sends covering families to covering families. In general, it is "not" sufficient for "u" to send covering sieves to covering sieves (see SGA IV 3, 1.9.3).
Cocontinuous functors.
Again, let ("C", "J") and ("D", "K") be sites and "v" : "C" → "D" be a functor. If "X" is an object of "C" and "R" is a sieve on "vX", then "R" can be pulled back to a sieve "S" as follows: A morphism "f" : "Z" → "X" is in "S" if and only if "v"("f") : "vZ" → "vX" is in "R". This defines a sieve. "v" is cocontinuous if and only if for every object "X" of "C" and every covering sieve "R" of "vX", the pullback "S" of "R" is a covering sieve on "X".
Composition with "v" sends a presheaf "F" on "D" to a presheaf "Fv" on "C", but if "v" is cocontinuous, this need not send sheaves to sheaves. However, this functor on presheaf categories, usually denoted formula_21, admits a right adjoint formula_22. Then "v" is cocontinuous if and only if formula_22 sends sheaves to sheaves, that is, if and only if it restricts to a functor formula_23. In this case, the composite of formula_21 with the associated sheaf functor is a left adjoint of "v"* denoted "v"*. Furthermore, "v"* preserves finite limits, so the adjoint functors "v"* and "v"* determine a geometric morphism of topoi formula_24.
Morphisms of sites.
A continuous functor "u" : "C" → "D" is a morphism of sites "D" → "C" ("not" "C" → "D") if "u""s" preserves finite limits. In this case, "u""s" and "u""s" determine a geometric morphism of topoi formula_24. The reasoning behind the convention that a continuous functor "C" → "D" is said to determine a morphism of sites in the opposite direction is that this agrees with the intuition coming from the case of topological spaces. A continuous map of topological spaces "X" → "Y" determines a continuous functor "O"("Y") → "O"("X"). Since the original map on topological spaces is said to send "X" to "Y", the morphism of sites is said to as well.
A particular case of this happens when a continuous functor admits a left adjoint. Suppose that "u" : "C" → "D" and "v" : "D" → "C" are functors with "u" right adjoint to "v". Then "u" is continuous if and only if "v" is cocontinuous, and when this happens, "u""s" is naturally isomorphic to "v"* and "u""s" is naturally isomorphic to "v"*. In particular, "u" is a morphism of sites.
Notes.
| [
{
"math_id": 0,
"text": "H^1"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "O(X)"
},
{
"math_id": 3,
"text": "U"
},
{
"math_id": 4,
"text": "V\\rightarrow U"
},
{
"math_id": 5,
"text": "V"
},
{
"math_id": 6,
"text": "\\{U_i\\}"
},
{
"math_id": 7,
"text": " \\bigcup_i U_i = U"
},
{
"math_id": 8,
"text": "U_i"
},
{
"math_id": 9,
"text": "V_{ij} \\to U_i"
},
{
"math_id": 10,
"text": "\\{V_{ij} \\to U_i\\}_j"
},
{
"math_id": 11,
"text": "i"
},
{
"math_id": 12,
"text": "^\\ast"
},
{
"math_id": 13,
"text": "\\in"
},
{
"math_id": 14,
"text": "\\ast"
},
{
"math_id": 15,
"text": "F(X) \\rightarrow \\prod_{\\alpha\\in A} F(X_\\alpha) {{{} \\atop \\longrightarrow}\\atop{\\longrightarrow \\atop {}}} \\prod_{\\alpha,\\beta \\in A} F(X_\\alpha\\times_X X_\\beta)"
},
{
"math_id": 16,
"text": "\\sube"
},
{
"math_id": 17,
"text": "\\cup"
},
{
"math_id": 18,
"text": "\\tilde C"
},
{
"math_id": 19,
"text": "\\tilde D"
},
{
"math_id": 20,
"text": "u_s : \\tilde D \\to \\tilde C"
},
{
"math_id": 21,
"text": "\\hat v^*"
},
{
"math_id": 22,
"text": "\\hat v_*"
},
{
"math_id": 23,
"text": "v_* : \\tilde C \\to \\tilde D"
},
{
"math_id": 24,
"text": "\\tilde C \\to \\tilde D"
}
] | https://en.wikipedia.org/wiki?curid=12910 |
1291180 | Linear phase | Filter whose phase response is proportional to frequency
In signal processing, linear phase is a property of a filter where the phase response of the filter is a linear function of frequency. The result is that all frequency components of the input signal are shifted in time (usually delayed) by the same constant amount (the slope of the linear function), which is referred to as the group delay. Consequently, there is no phase distortion due to the time delay of frequencies relative to one another.
For discrete-time signals, perfect linear phase is easily achieved with a finite impulse response (FIR) filter by having coefficients which are symmetric or anti-symmetric. Approximations can be achieved with infinite impulse response (IIR) designs, which are more computationally efficient. Common techniques include designing with a Bessel transfer function, which has a maximally flat group delay, and cascading an all-pass phase equalizer to flatten the group delay of an existing filter.
Definition.
A filter is called a linear phase filter if the phase component of the frequency response is a linear function of frequency. For a continuous-time application, the frequency response of the filter is the Fourier transform of the filter's impulse response, and a linear phase version has the form:
formula_0
where: "A"(ω) is a real-valued amplitude function, and formula_1 is the group delay, i.e. the constant time shift applied to every frequency component.
For a discrete-time application, the discrete-time Fourier transform of the linear phase impulse response has the form:
formula_2
where:
formula_3 is a Fourier series that can also be expressed in terms of the Z-transform of the filter impulse response. I.e.:
formula_4
where the formula_5 notation distinguishes the Z-transform from the Fourier transform.
Examples.
When a sinusoidformula_6 passes through a filter with constant (frequency-independent) group delay formula_7 the result is:
formula_8
where: formula_9 is the frequency-dependent amplitude, and the phase shift formula_10 is a linear function of the frequency formula_11 with slope formula_12.
It follows that a complex exponential function:
formula_13
is transformed into:
formula_14
For approximately linear phase, it is sufficient to have that property only in the passband(s) of the filter, where |A(ω)| has relatively large values. Therefore, both magnitude and phase graphs (Bode plots) are customarily used to examine a filter's linearity. A "linear" phase graph may contain discontinuities of π and/or 2π radians. The smaller ones happen where A(ω) changes sign. Since |A(ω)| cannot be negative, the changes are reflected in the phase plot. The 2π discontinuities happen because of plotting the principal value of formula_15 instead of the actual value.
In discrete-time applications, one only examines the region of frequencies between 0 and the Nyquist frequency, because of periodicity and symmetry. Depending on the frequency units, the Nyquist frequency may be 0.5, 1.0, π, or ½ of the actual sample-rate. Some examples of linear and non-linear phase are shown below.
A discrete-time filter with linear phase may be achieved by an FIR filter which is either symmetric or anti-symmetric. A necessary but not sufficient condition is:
formula_16
for some formula_17.
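The symmetry condition is easy to check numerically. In the sketch below (NumPy assumed; the two 5-tap filters are arbitrary examples), the frequency response of each filter is multiplied by exp(jωk/2) to remove the linear-phase term; for a filter symmetric about its midpoint the result is real, so the size of its imaginary part measures the deviation from exact linear phase. Anti-symmetric filters would instead give a purely imaginary result, i.e. an extra constant phase of π/2, which is the generalized linear phase discussed next.
<syntaxhighlight lang="python">
import numpy as np

def max_phase_residual(h, n_freq=1024):
    """Largest |Im(H(w) * exp(j*w*k/2))|; essentially zero for a symmetric FIR filter."""
    w = np.linspace(0.0, np.pi, n_freq)
    n = np.arange(len(h))
    H = np.exp(-1j * np.outer(w, n)) @ h      # samples of the DTFT of h
    k = len(h) - 1
    # For h symmetric about k/2, H(w)*exp(j*w*k/2) equals the real function A(w),
    # so any imaginary part is numerical noise.
    return np.max(np.abs(np.imag(H * np.exp(1j * w * k / 2))))

print(max_phase_residual(np.array([1.0, 2.0, 3.0, 2.0, 1.0])))  # ~1e-15 (symmetric)
print(max_phase_residual(np.array([1.0, 2.0, 3.0, 4.0, 5.0])))  # order 1 (not symmetric)
</syntaxhighlight>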
Generalized linear phase.
Systems with generalized linear phase have an additional frequency-independent constant formula_18 added to the phase. In the discrete-time case, for example, the frequency response has the form:
formula_19
formula_20 for formula_21
Because of this constant, the phase of the system is not a strictly linear function of frequency, but it retains many of the useful properties of linear phase systems.
Notes.
Citations.
| [
{
"math_id": 0,
"text": "H(\\omega) = A(\\omega)\\ e^{-j \\omega \\tau},"
},
{
"math_id": 1,
"text": "\\tau"
},
{
"math_id": 2,
"text": "H_{2\\pi}(\\omega) = A(\\omega)\\ e^{-j \\omega k/2},"
},
{
"math_id": 3,
"text": "H_{2\\pi}(\\omega)"
},
{
"math_id": 4,
"text": "H_{2\\pi}(\\omega) = \\left. \\widehat H(z) \\, \\right|_{z = e^{j \\omega}} = \\widehat H(e^{j \\omega}),"
},
{
"math_id": 5,
"text": "\\widehat H"
},
{
"math_id": 6,
"text": ",\\ \\sin(\\omega t + \\theta),"
},
{
"math_id": 7,
"text": "\\tau,"
},
{
"math_id": 8,
"text": "A(\\omega)\\cdot \\sin(\\omega (t-\\tau) + \\theta) = A(\\omega)\\cdot \\sin(\\omega t + \\theta - \\omega \\tau),"
},
{
"math_id": 9,
"text": "A(\\omega)"
},
{
"math_id": 10,
"text": "\\omega \\tau"
},
{
"math_id": 11,
"text": "\\omega"
},
{
"math_id": 12,
"text": "-\\tau"
},
{
"math_id": 13,
"text": "e^{i(\\omega t + \\theta)} = \\cos(\\omega t + \\theta) + i\\cdot \\sin(\\omega t + \\theta), "
},
{
"math_id": 14,
"text": "A(\\omega)\\cdot e^{i(\\omega (t-\\tau) + \\theta)} = e^{i(\\omega t + \\theta)}\\cdot A(\\omega) e^{-i\\omega \\tau}"
},
{
"math_id": 15,
"text": "\\omega \\tau,"
},
{
"math_id": 16,
"text": "\\sum_{n =-\\infty}^\\infty h[n] \\cdot \\sin(\\omega \\cdot (n - \\alpha) + \\beta)=0"
},
{
"math_id": 17,
"text": "\\alpha, \\beta \\in \\mathbb{R} "
},
{
"math_id": 18,
"text": "\\beta"
},
{
"math_id": 19,
"text": "H_{2\\pi}(\\omega) = A(\\omega)\\ e^{-j \\omega k/2 + j \\beta},"
},
{
"math_id": 20,
"text": "\\arg \\left[ H_{2\\pi}(\\omega) \\right] = \\beta - \\omega k/2 "
},
{
"math_id": 21,
"text": " -\\pi < \\omega < \\pi "
}
] | https://en.wikipedia.org/wiki?curid=1291180 |
12912917 | Bullough–Dodd model | Integrable 1+1 dimensional quantum field theory
The Bullough–Dodd model is an integrable model in 1+1-dimensional quantum field theory introduced by Robin Bullough and Roger Dodd. Its
Lagrangian density is
formula_0
where formula_1 is a mass parameter, formula_2 is the coupling constant and formula_3 is a real scalar field.
The Bullough–Dodd model belongs to the class of affine Toda field theories.
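Expanding the potential about formula_3 = 0 shows how the parameters enter. The short check below, which assumes the SymPy library is available and is only an illustration, confirms that the quadratic term of the expansion has coefficient m02/2, so the mass parameter formula_1 is the tree-level mass of the particle in the spectrum described below.
<syntaxhighlight lang="python">
import sympy as sp

phi, m0, g = sp.symbols('phi m_0 g', positive=True)
V = m0**2 / (6 * g**2) * (2 * sp.exp(g * phi) + sp.exp(-2 * g * phi))

# Series expansion around phi = 0: the linear term cancels and the quadratic
# term is (m0**2/2) * phi**2, identifying m0 as the tree-level particle mass.
print(sp.series(V, phi, 0, 3))
</syntaxhighlight>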
The spectrum of the model consists of a single massive particle. | [
{
"math_id": 0,
"text": "\\mathcal{L}=\\frac{1}{2}(\\partial_\\mu\\varphi)^2-\\frac{m_0^2}{6g^2}(2e^{g\\varphi}\n+e^{-2g\\varphi})\n"
},
{
"math_id": 1,
"text": "m_0\\,"
},
{
"math_id": 2,
"text": "g\\,"
},
{
"math_id": 3,
"text": "\\varphi\\,"
}
] | https://en.wikipedia.org/wiki?curid=12912917 |
1291319 | Time-invariant system | Dynamical system whose system function is not directly dependent on time
In control theory, a time-invariant (TI) system has a time-dependent system function that is not a direct function of time. Such systems are regarded as a class of systems in the field of system analysis. The time-dependent system function is a function of the time-dependent input function. If this function depends "only" indirectly on the time-domain (via the input function, for example), then that is a system that would be considered time-invariant. Conversely, any direct dependence on the time-domain of the system function could be considered as a "time-varying system".
Mathematically speaking, "time-invariance" of a system is the following property:
"Given a system with a time-dependent output function &NoBreak;&NoBreak;, and a time-dependent input function &NoBreak;&NoBreak;, the system will be considered time-invariant if a time-delay on the input &NoBreak;&NoBreak; directly equates to a time-delay of the output &NoBreak;&NoBreak; function. For example, if time &NoBreak;&NoBreak; is "elapsed time", then "time-invariance" implies that the relationship between the input function &NoBreak;&NoBreak; and the output function &NoBreak;&NoBreak; is constant with respect to time &NoBreak;&NoBreak;"
formula_0
In the language of signal processing, this property can be satisfied if the transfer function of the system is not a direct function of time except as expressed by the input and output.
In the context of a system schematic, this property can also be stated as follows, as shown in the figure to the right:
"If a system is time-invariant then the system block commutes with an arbitrary delay."
If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. Nonlinear time-invariant systems lack a comprehensive, governing theory. Discrete time-invariant systems are known as shift-invariant systems. Systems which lack the time-invariant property are studied as time-variant systems.
Simple example.
To demonstrate how to determine if a system is time-invariant, consider the two systems:
System A: formula_1
System B: formula_2
Since the System Function formula_3 for system A explicitly depends on "t" outside of formula_4, it is not time-invariant because the time-dependence is not explicitly a function of the input function.
In contrast, system B's time-dependence is only a function of the time-varying input formula_4. This makes system B time-invariant.
The Formal Example below shows in more detail that while System B is a Shift-Invariant System as a function of time, "t", System A is not.
Formal example.
A more formal proof of why systems A and B above differ is now presented. To perform this proof, the second definition will be used.
System A: Start with a delay of the input formula_5
formula_1
formula_6
Now delay the output by formula_7
formula_1
formula_8
Clearly formula_9, therefore the system is not time-invariant.
System B: Start with a delay of the input formula_5
formula_2
formula_10
Now delay the output by formula_7
formula_2
formula_11
Clearly formula_12, therefore the system is time-invariant.
More generally, the relationship between the input and output is
formula_13
and its variation with time is
formula_14
For time-invariant systems, the system properties remain constant with time,
formula_15
Applied to Systems A and B above:
formula_16 in general, so it is not time-invariant,
formula_17 so it is time-invariant.
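The shift property can also be checked numerically. The following is a small Python sketch (not part of the article); the test signal and the delay value are arbitrary choices.
```python
import numpy as np

t = np.linspace(0, 10, 1001)
delta = 1.5                                   # arbitrary delay
x = lambda t: np.sin(2 * np.pi * 0.3 * t)     # arbitrary test input

def is_time_invariant(system):
    """Compare y1(t) = response to the delayed input with y2(t) = delayed output."""
    y1 = system(t, x(t + delta))              # delay the input first
    y2 = system(t + delta, x(t + delta))      # delay the output
    return np.allclose(y1, y2)

system_A = lambda time, xt: time * xt         # y(t) = t x(t)
system_B = lambda time, xt: 10 * xt           # y(t) = 10 x(t)

print(is_time_invariant(system_A))            # False: time-variant
print(is_time_invariant(system_B))            # True: time-invariant
```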
Abstract example.
We can denote the shift operator by formula_18 where formula_19 is the amount by which a vector's index set should be shifted. For example, the "advance-by-1" system
formula_20
can be represented in this abstract notation by
formula_21
where formula_22 is a function given by
formula_23
with the system yielding the shifted output
formula_24
So formula_25 is an operator that advances the input vector by 1.
Suppose we represent a system by an operator formula_26. This system is time-invariant if it commutes with the shift operator, i.e.,
formula_27
If our system equation is given by
formula_28
then it is time-invariant if we can apply the system operator formula_26 on formula_22 followed by the shift operator formula_18, or we can apply the shift operator formula_18 followed by the system operator formula_26, with the two computations yielding equivalent results.
Applying the system operator first gives
formula_29
Applying the shift operator first gives
formula_30
If the system is time-invariant, then
formula_31
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y(t) = f( x(t), t ) = f( x(t))."
},
{
"math_id": 1,
"text": "y(t) = t x(t)"
},
{
"math_id": 2,
"text": "y(t) = 10 x(t)"
},
{
"math_id": 3,
"text": "y(t)"
},
{
"math_id": 4,
"text": "x(t)"
},
{
"math_id": 5,
"text": "x_d(t) = x(t + \\delta)"
},
{
"math_id": 6,
"text": "y_1(t) = t x_d(t) = t x(t + \\delta)"
},
{
"math_id": 7,
"text": "\\delta"
},
{
"math_id": 8,
"text": "y_2(t) = y(t + \\delta) = (t + \\delta) x(t + \\delta)"
},
{
"math_id": 9,
"text": "y_1(t) \\ne y_2(t)"
},
{
"math_id": 10,
"text": "y_1(t) = 10 x_d(t) = 10 x(t + \\delta)"
},
{
"math_id": 11,
"text": "y_2(t) = y(t + \\delta) = 10 x(t + \\delta)"
},
{
"math_id": 12,
"text": "y_1(t) = y_2(t)"
},
{
"math_id": 13,
"text": " y(t) = f(x(t), t),"
},
{
"math_id": 14,
"text": "\\frac{\\mathrm{d} y}{\\mathrm{d} t} = \\frac{\\partial f}{\\partial t} + \\frac{\\partial f}{\\partial x} \\frac{\\mathrm{d} x}{\\mathrm{d} t}."
},
{
"math_id": 15,
"text": " \\frac{\\partial f}{\\partial t} =0."
},
{
"math_id": 16,
"text": " f_A = t x(t) \\qquad \\implies \\qquad \\frac{\\partial f_A}{\\partial t} = x(t) \\neq 0 "
},
{
"math_id": 17,
"text": " f_B = 10 x(t) \\qquad \\implies \\qquad \\frac{\\partial f_B}{\\partial t} = 0 "
},
{
"math_id": 18,
"text": "\\mathbb{T}_r"
},
{
"math_id": 19,
"text": "r"
},
{
"math_id": 20,
"text": "x(t+1) = \\delta(t+1) * x(t)"
},
{
"math_id": 21,
"text": "\\tilde{x}_1 = \\mathbb{T}_1 \\tilde{x}"
},
{
"math_id": 22,
"text": "\\tilde{x}"
},
{
"math_id": 23,
"text": "\\tilde{x} = x(t) \\forall t \\in \\R"
},
{
"math_id": 24,
"text": "\\tilde{x}_1 = x(t + 1) \\forall t \\in \\R"
},
{
"math_id": 25,
"text": "\\mathbb{T}_1"
},
{
"math_id": 26,
"text": "\\mathbb{H}"
},
{
"math_id": 27,
"text": "\\mathbb{T}_r \\mathbb{H} = \\mathbb{H} \\mathbb{T}_r \\forall r"
},
{
"math_id": 28,
"text": "\\tilde{y} = \\mathbb{H} \\tilde{x}"
},
{
"math_id": 29,
"text": "\\mathbb{T}_r \\mathbb{H} \\tilde{x} = \\mathbb{T}_r \\tilde{y} = \\tilde{y}_r"
},
{
"math_id": 30,
"text": "\\mathbb{H} \\mathbb{T}_r \\tilde{x} = \\mathbb{H} \\tilde{x}_r"
},
{
"math_id": 31,
"text": "\\mathbb{H} \\tilde{x}_r = \\tilde{y}_r"
}
] | https://en.wikipedia.org/wiki?curid=1291319 |
1291342 | Time-variant system | A time-variant system is a system whose output response depends on moment of observation as well as moment of input signal application. In other words, a time delay or time advance of input not only shifts the output signal in time but also changes other parameters and behavior. Time variant systems respond differently to the same input at different times. The opposite is true for time invariant systems (TIV).
Overview.
There are many well developed techniques for dealing with the response of linear time invariant systems, such as Laplace and Fourier transforms. However, these techniques are not strictly valid for time-varying systems. A system undergoing slow time variation in comparison to its time constants can usually be considered to be time invariant: they are close to time invariant on a small scale. An example of this is the aging and wear of electronic components, which happens on a scale of years, and thus does not result in any behaviour qualitatively different from that observed in a time invariant system: day-to-day, they are effectively time invariant, though year to year, the parameters may change. Other linear time variant systems may behave more like nonlinear systems, if the system changes quickly – significantly differing between measurements.
The following things can be said about a time-variant system:
Linear time-variant systems.
Linear-time variant (LTV) systems are the ones whose parameters vary with time according to previously specified laws. Mathematically, there is a well defined dependence of the system over time and over the input parameters that change over time.
formula_0
In order to solve time-variant systems, the algebraic methods consider the initial conditions of the system, i.e. whether the system is a zero-input or a non-zero-input system.
Examples of time-variant systems.
The following time varying systems cannot be modelled by assuming that they are time invariant: | [
{
"math_id": 0,
"text": "y(t) = f ( x(t), t)"
}
] | https://en.wikipedia.org/wiki?curid=1291342 |
1291534 | Walsh matrix | In mathematics, a Walsh matrix is a specific square matrix of dimensions 2"n", where "n" is some particular natural number. The entries of the matrix are either +1 or −1 and its rows as well as columns are orthogonal. The Walsh matrix was proposed by Joseph L. Walsh in 1923. Each row of a Walsh matrix corresponds to a Walsh function.
The Walsh matrices are a special case of Hadamard matrices where the rows are rearranged so that the number of sign changes in a row is in increasing order. In short, a Hadamard matrix is defined by the recursive formula below and is "naturally ordered", whereas a Walsh matrix is "sequency-ordered". Confusingly, different sources refer to either matrix as the Walsh matrix.
The Walsh matrix (and Walsh functions) are used in computing the Walsh transform and have applications in the efficient implementation of certain signal processing operations.
Formula.
The Hadamard matrices of dimension formula_0 for formula_1 are given by the recursive formula (the lowest order of Hadamard matrix is 2):
formula_2
and in general
formula_3
for 2 ≤ "k" ∈ N, where ⊗ denotes the Kronecker product.
Permutation.
We can obtain a Walsh matrix from a Hadamard matrix. For that, we first generate the Hadamard matrix for a given dimension. Then, we count the number of sign changes of each row. Finally, we re-order the rows of the matrix according to the number of sign changes in ascending order.
For example, let us assume that we have a Hadamard matrix of dimension formula_4
formula_5,
where the successive rows have 0, 3, 1, and 2 sign changes (we count the number of times we switch from a positive 1 to a negative 1, and vice versa). If we rearrange the rows in sequency ordering, we obtain:
formula_6
where the successive rows have 0, 1, 2, and 3 sign changes.
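This construction is easy to carry out numerically. Below is a NumPy sketch (not part of the article) that builds the Hadamard matrix by the Kronecker recursion and then sorts its rows by the number of sign changes; it reproduces the "W"(4) example above.
```python
import numpy as np

def hadamard(k):
    """Naturally ordered Hadamard matrix H(2^k) via the Kronecker recursion."""
    H2 = np.array([[1, 1], [1, -1]])
    H = np.array([[1]])
    for _ in range(k):
        H = np.kron(H2, H)
    return H

def walsh(k):
    """Sequency-ordered Walsh matrix: rows of H(2^k) sorted by sign-change count."""
    H = hadamard(k)
    sign_changes = np.count_nonzero(np.diff(H, axis=1) != 0, axis=1)
    return H[np.argsort(sign_changes)]

print(walsh(2))
# [[ 1  1  1  1]
#  [ 1  1 -1 -1]
#  [ 1 -1 -1  1]
#  [ 1 -1  1 -1]]
```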
Alternative forms of the Walsh matrix.
Sequency ordering.
The sequency ordering of the rows of the Walsh matrix can be derived from the ordering of the Hadamard matrix by first applying the bit-reversal permutation and then the Gray-code permutation:
formula_7
where the successive rows have 0, 1, 2, 3, 4, 5, 6, and 7 sign changes.
Dyadic ordering.
formula_8
where the successive rows have 0, 1, 3, 2, 7, 6, 4, and 5 sign changes.
Natural ordering.
formula_9
where the successive rows have 0, 7, 3, 4, 1, 6, 2, and 5 sign changes (Hadamard matrix).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2^k"
},
{
"math_id": 1,
"text": "k \\in \\mathbb{N}"
},
{
"math_id": 2,
"text": "\\begin{align}\n H\\left(2^1\\right) &= \\begin{bmatrix}\n 1 & 1 \\\\\n 1 & -1\n \\end{bmatrix}, \\\\\n H\\left(2^2\\right) &= \\begin{bmatrix}\n 1 & 1 & 1 & 1 \\\\\n 1 & -1 & 1 & -1 \\\\\n 1 & 1 & -1 & -1 \\\\\n 1 & -1 & -1 & 1 \\\\\n \\end{bmatrix},\n\\end{align}"
},
{
"math_id": 3,
"text": "H\\left(2^k\\right) = \\begin{bmatrix}\n H\\left(2^{k-1}\\right) & H\\left(2^{k-1}\\right) \\\\\n H\\left(2^{k-1}\\right) & -H\\left(2^{k-1}\\right)\n \\end{bmatrix} =\n H(2) \\otimes H\\left(2^{k-1}\\right),\n"
},
{
"math_id": 4,
"text": "2^2"
},
{
"math_id": 5,
"text": "H(4) = \\begin{bmatrix}\n 1 & 1 & 1 & 1 \\\\\n 1 & -1 & 1 & -1 \\\\\n 1 & 1 & -1 & -1 \\\\\n 1 & -1 & -1 & 1 \\\\\n\\end{bmatrix}"
},
{
"math_id": 6,
"text": "W(4) = \\begin{bmatrix}\n 1 & 1 & 1 & 1 \\\\\n 1 & 1 & -1 & -1 \\\\\n 1 & -1 & -1 & 1 \\\\\n 1 & -1 & 1 & -1 \\\\\n\\end{bmatrix},"
},
{
"math_id": 7,
"text": "W(8) = \\begin{bmatrix}\n 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\\\\n 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\\\\n 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\\\\n 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\\\\n 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \\\\\n 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\\\\n 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\\\\n\\end{bmatrix},"
},
{
"math_id": 8,
"text": "W(8) = \\begin{bmatrix}\n 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\\\\n 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\\\\n 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\\\\n 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\\\\n 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\\\\n 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\\\\n 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \\\\\n\\end{bmatrix},"
},
{
"math_id": 9,
"text": "H (8) = \\begin{bmatrix}\n 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\\\\n 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\\\\n 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\\\\n 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\\\\n 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\\\\n 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\\\\n 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \\\\\n\\end{bmatrix},"
}
] | https://en.wikipedia.org/wiki?curid=1291534 |
12916 | Gauss–Legendre algorithm | Quickly converging computation of π
The Gauss–Legendre algorithm is an algorithm to compute the digits of π. It is notable for being rapidly convergent, with only 25 iterations producing 45 million correct digits of π. However, it has some drawbacks (for example, it is computer memory-intensive) and therefore all record-breaking calculations for many years have used other methods, almost always the Chudnovsky algorithm. For details, see Chronology of computation of π.
The method is based on the individual work of Carl Friedrich Gauss (1777–1855) and Adrien-Marie Legendre (1752–1833) combined with modern algorithms for multiplication and square roots. It repeatedly replaces two numbers by their arithmetic and geometric mean, in order to approximate their arithmetic-geometric mean.
The version presented below is also known as the Gauss–Euler, Brent–Salamin (or Salamin–Brent) algorithm; it was independently discovered in 1975 by Richard Brent and Eugene Salamin. It was used to compute the first 206,158,430,000 decimal digits of π on September 18 to 20, 1999, and the results were checked with Borwein's algorithm.
Algorithm.
1. Initial value setting:
formula_0
2. Repeat the following instructions until the difference between formula_1 and formula_2 is within the desired accuracy:
formula_3
3. π is then approximated as:
formula_4
The first three iterations give (approximations given up to and including the first incorrect digit):
formula_5
formula_6
formula_7
formula_8
formula_9
The algorithm has quadratic convergence, which essentially means that the number of correct digits doubles with each iteration of the algorithm.
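A short Python sketch of the iteration (not part of the article), using the standard decimal module for arbitrary precision; the precision setting and iteration count are illustrative.
```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(iterations, digits=50):
    """Approximate pi with the Gauss-Legendre (Brent-Salamin) iteration."""
    getcontext().prec = digits + 10          # a few guard digits
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        p *= 2
        a = a_next
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi(3))   # about 19 correct digits after 3 iterations
```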
Mathematical background.
Limits of the arithmetic–geometric mean.
The arithmetic–geometric mean of two numbers, a0 and b0, is found by calculating the limit of the sequences
formula_10
which both converge to the same limit.
If formula_11 and formula_12 then the limit is formula_13 where formula_14 is the complete elliptic integral of the first kind
formula_15
If formula_16, formula_17, then
formula_18
where formula_19 is the complete elliptic integral of the second kind:
formula_20
Gauss knew of these two results.
Legendre’s identity.
Legendre proved the following identity:
formula_21
for all formula_22.
Elementary proof with integral calculus.
The Gauss–Legendre algorithm can be proven to give results converging to π using only integral calculus.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a_0 = 1\\qquad b_0 = \\frac{1}{\\sqrt{2}}\\qquad t_0 = \\frac{1}{4}\\qquad p_0 = 1."
},
{
"math_id": 1,
"text": "a_n"
},
{
"math_id": 2,
"text": "b_n"
},
{
"math_id": 3,
"text": " \\begin{align}\na_{n+1} & = \\frac{a_n + b_n}{2}, \\\\\n \\\\\n b_{n+1} & = \\sqrt{a_n b_n}, \\\\\n \\\\\n t_{n+1} & = t_n - p_n(a_{n}-a_{n+1})^2, \\\\\n \\\\\n p_{n+1} & = 2p_n.\n \\\\\n \\end{align}\n"
},
{
"math_id": 4,
"text": "\\pi \\approx \\frac{(a_{n+1}+b_{n+1})^2}{4t_{n+1}}."
},
{
"math_id": 5,
"text": "3.140\\dots"
},
{
"math_id": 6,
"text": "3.14159264\\dots"
},
{
"math_id": 7,
"text": "3.1415926535897932382\\dots"
},
{
"math_id": 8,
"text": "3.141592653589793238462643\\dots"
},
{
"math_id": 9,
"text": "3.14159265358979323846264338327\\dots"
},
{
"math_id": 10,
"text": "\\begin{align} a_{n+1} & = \\frac{a_n+b_n}{2}, \\\\[6pt]\n b_{n+1} & = \\sqrt{a_n b_n},\n \\end{align}\n"
},
{
"math_id": 11,
"text": "a_0=1"
},
{
"math_id": 12,
"text": "b_0=\\cos\\varphi"
},
{
"math_id": 13,
"text": "{\\pi \\over 2K(\\sin\\varphi)}"
},
{
"math_id": 14,
"text": "K(k)"
},
{
"math_id": 15,
"text": "K(k) = \\int_0^{\\pi/2} \\frac{d\\theta}{\\sqrt{1-k^2 \\sin^2\\theta}}."
},
{
"math_id": 16,
"text": "c_0 = \\sin\\varphi"
},
{
"math_id": 17,
"text": "c_{i+1} = a_i - a_{i+1}"
},
{
"math_id": 18,
"text": "\\sum_{i=0}^\\infty 2^{i-1} c_i^2 = 1 - {E(\\sin\\varphi)\\over K(\\sin\\varphi)}"
},
{
"math_id": 19,
"text": "E(k)"
},
{
"math_id": 20,
"text": "E(k) = \\int_0^{\\pi/2}\\sqrt {1-k^2 \\sin^2\\theta}\\; d\\theta"
},
{
"math_id": 21,
"text": "K(\\cos \\theta) E(\\sin \\theta ) + K(\\sin \\theta ) E(\\cos \\theta) - K(\\cos \\theta) K(\\sin \\theta) = {\\pi \\over 2},"
},
{
"math_id": 22,
"text": "\\theta"
}
] | https://en.wikipedia.org/wiki?curid=12916 |
1291698 | Optical autocorrelation | Autocorrelation functions realized in optics
In optics, various autocorrelation functions can be experimentally realized. The field autocorrelation may be used to calculate the spectrum of a source of light, while the intensity autocorrelation and the interferometric autocorrelation are commonly used to "estimate" the duration of ultrashort pulses produced by modelocked lasers. The laser pulse duration cannot be easily measured by optoelectronic methods, since the response time of photodiodes and oscilloscopes is at best of the order of 200 femtoseconds, yet laser pulses can be made as short as a few femtoseconds.
In the following examples, the autocorrelation signal is generated by the nonlinear process of second-harmonic generation (SHG). Other techniques based on two-photon absorption may also be used in autocorrelation measurements, as well as higher-order nonlinear optical processes such as third-harmonic generation, in which case the mathematical expressions of the signal will be slightly modified, but the basic interpretation of an autocorrelation trace remains the same. A detailed discussion on interferometric autocorrelation is given in several well-known textbooks.
Field autocorrelation.
For a complex electric field formula_0, the field autocorrelation function is defined by
formula_1
The Wiener-Khinchin theorem states that the Fourier transform of the field autocorrelation is the spectrum of formula_0, i.e., the square of the "magnitude" of the Fourier transform of formula_0. As a result, the field autocorrelation is not sensitive to the spectral "phase".
The field autocorrelation is readily measured experimentally by placing a slow detector at the output of a Michelson interferometer. The detector is illuminated by the input electric field formula_0 coming from one arm, and by the delayed replica formula_2 from the other arm. If the time response of the detector is much larger than the time duration of the signal formula_0, or if the recorded signal is integrated, the detector measures the intensity formula_3 as the delay formula_4 is scanned:
formula_5
Expanding formula_6 reveals that one of the terms is formula_7, proving that a Michelson interferometer can be used to measure the field autocorrelation, or the spectrum of formula_0 (and only the spectrum). This principle is the basis for Fourier transform spectroscopy.
Intensity autocorrelation.
To a complex electric field formula_0 corresponds an intensity formula_8 and an intensity autocorrelation function defined by
formula_9
The optical implementation of the intensity autocorrelation is not as straightforward as for the field autocorrelation. Similarly to the previous setup, two parallel beams with a variable delay are generated, then focused into a second-harmonic-generation crystal (see nonlinear optics) to obtain a signal proportional to formula_10. Only the beam propagating on the optical axis, proportional to the cross-product formula_11, is retained. This signal is then recorded by a slow detector, which measures
formula_12
formula_6 is exactly the intensity autocorrelation formula_7.
The generation of the second harmonic in crystals is a nonlinear process that requires high peak power, unlike the previous setup. However, such high peak power can be obtained from a limited amount of energy by ultrashort pulses, and as a result their intensity autocorrelation is often measured experimentally. Another difficulty with this setup is that both beams must be focused at the same point inside the crystal "as the delay is scanned" in order for the second harmonic to be generated.
It can be shown that the intensity autocorrelation width of a pulse is related to the intensity width. For a Gaussian time profile, the autocorrelation width is formula_13 times longer than the width of the intensity, and it is 1.54 times longer in the case of a hyperbolic secant squared (sech2) pulse. This numerical factor, which depends on the shape of the pulse, is sometimes called the "deconvolution factor". If this factor is known, or assumed, the time duration (intensity width) of a pulse can be measured using an intensity autocorrelation. However, the phase cannot be measured.
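The Gaussian deconvolution factor is easy to verify numerically. The following NumPy sketch (not part of the article) computes the intensity autocorrelation of an arbitrary Gaussian pulse and compares the two widths.
```python
import numpy as np

t = np.linspace(-50, 50, 4001)           # time axis in arbitrary units (e.g. fs)
dt = t[1] - t[0]
fwhm = 10.0                              # intensity FWHM of the pulse (arbitrary)
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
intensity = np.exp(-t**2 / (2 * sigma**2))

# A(tau) = integral I(t) I(t - tau) dt, evaluated by discrete correlation
autocorr = np.correlate(intensity, intensity, mode="same") * dt

def fwhm_of(x, y):
    half = y.max() / 2
    above = x[y >= half]
    return above[-1] - above[0]

print(fwhm_of(t, intensity))                           # ~10
print(fwhm_of(t, autocorr) / fwhm_of(t, intensity))    # ~1.414, i.e. sqrt(2)
```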
Interferometric autocorrelation.
As a combination of both previous cases, a nonlinear crystal can be used to generate the second harmonic at the output of a Michelson interferometer, in a "collinear geometry". In this case, the signal recorded by a slow detector is
formula_14
formula_6 is called the interferometric autocorrelation. It contains some information about the phase of the pulse: the fringes in the autocorrelation trace wash out as the spectral phase becomes more complex.
Pupil function autocorrelation.
The optical transfer function "T"("w") of an optical system is given by the autocorrelation of its pupil function "f"("x","y"):
formula_15
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E(t)"
},
{
"math_id": 1,
"text": "A(\\tau) = \\int_{-\\infty}^{+\\infty}E(t)E^*(t-\\tau)dt"
},
{
"math_id": 2,
"text": "E(t-\\tau)"
},
{
"math_id": 3,
"text": "I_M"
},
{
"math_id": 4,
"text": "\\tau"
},
{
"math_id": 5,
"text": "I_M(\\tau) = \\int_{-\\infty}^{+\\infty}|E(t)+E(t-\\tau)|^2dt"
},
{
"math_id": 6,
"text": "I_M(\\tau)"
},
{
"math_id": 7,
"text": "A(\\tau)"
},
{
"math_id": 8,
"text": "I(t) = |E(t)|^2"
},
{
"math_id": 9,
"text": "A(\\tau) = \\int_{-\\infty}^{+\\infty}I(t)I(t-\\tau)dt"
},
{
"math_id": 10,
"text": "(E(t)+E(t-\\tau))^2"
},
{
"math_id": 11,
"text": "E(t)E(t-\\tau)"
},
{
"math_id": 12,
"text": "I_M(\\tau) = \\int_{-\\infty}^{+\\infty}|E(t)E(t-\\tau)|^2dt = \\int_{-\\infty}^{+\\infty}I(t)I(t-\\tau)dt"
},
{
"math_id": 13,
"text": "\\sqrt{2}"
},
{
"math_id": 14,
"text": "I_M(\\tau) = \\int_{-\\infty}^{+\\infty}|(E(t)+E(t-\\tau))^2|^2dt"
},
{
"math_id": 15,
"text": "T(w) = \\frac{\\int_{w/2}^{1} \\int_{0}^{\\sqrt{1-x^2}} f(x,y) f^*(x-w,y)dy dx}{\\int_{0}^{1}\\int_{0}^{\\sqrt{1-x^2}}f(x,y)^2 dy dx}"
}
] | https://en.wikipedia.org/wiki?curid=1291698 |
1291808 | Competitive Lotka–Volterra equations | Model of multi-species population dynamics
The competitive Lotka–Volterra equations are a simple model of the population dynamics of species competing for some common resource. They can be further generalised to the generalized Lotka–Volterra equation to include trophic interactions.
Overview.
The form is similar to the Lotka–Volterra equations for predation in that the equation for each species has one term for self-interaction and one term for the interaction with other species. In the equations for predation, the base population model is exponential. For the competition equations, the logistic equation is the basis.
The logistic population model, when used by ecologists often takes the following form:
formula_0
Here x is the size of the population at a given time, r is inherent per-capita growth rate, and K is the carrying capacity.
Two species.
Given two populations, "x"1 and "x"2, with logistic dynamics, the Lotka–Volterra formulation adds an additional term to account for the species' interactions. Thus the competitive Lotka–Volterra equations are:
formula_1
Here, "α"12 represents the effect species 2 has on the population of species 1 and "α"21 represents the effect species 1 has on the population of species 2. These values do not have to be equal. Because this is the competitive version of the model, all interactions must be harmful (competition) and therefore all "α"-values are positive. Also, note that each species can have its own growth rate and carrying capacity. A complete classification of this dynamics, even for all sign patterns of above coefficients, is available, which is based upon equivalence to the 3-type replicator equation.
"N" species.
This model can be generalized to any number of species competing against each other. One can think of the populations and growth rates as vectors, α's as a matrix. Then the equation for any species i becomes
formula_2
or, if the carrying capacity is pulled into the interaction matrix (this doesn't actually change the equations, only how the interaction matrix is defined),
formula_3
where N is the total number of interacting species. For simplicity all self-interacting terms "α"ii are often set to 1.
Possible dynamics.
The definition of a competitive Lotka–Volterra system assumes that all values in the interaction matrix are positive or 0 ("αij" ≥ 0 for all i, j). If it is also assumed that the population of any species will increase in the absence of competition unless the population is already at the carrying capacity ("ri" > 0 for all i), then some definite statements can be made about the behavior of the system.
4-dimensional example.
A simple 4-dimensional example of a competitive Lotka–Volterra system has been characterized by Vano "et al." Here the growth rates and interaction matrix have been set to
formula_5
with formula_6 for all formula_7. This system is chaotic and has a largest Lyapunov exponent of 0.0203. From the theorems by Hirsch, it is one of the lowest-dimensional chaotic competitive Lotka–Volterra systems. The Kaplan–Yorke dimension, a measure of the dimensionality of the attractor, is 2.074. This value is not a whole number, indicative of the fractal structure inherent in a strange attractor. The coexisting equilibrium point, the point at which all derivatives are equal to zero but that is not the origin, can be found by inverting the interaction matrix and multiplying by the unit column vector, and is equal to
formula_8
Note that there are always 2"N" equilibrium points, but all others have at least one species' population equal to zero.
The eigenvalues of the system at this point are 0.0414±0.1903"i", −0.3342, and −1.0319. This point is unstable due to the positive value of the real part of the complex eigenvalue pair. If the real part were negative, this point would be stable and the orbit would attract asymptotically. The transition between these two states, where the real part of the complex eigenvalue pair is equal to zero, is called a Hopf bifurcation.
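The system above can be integrated directly. The following SciPy sketch (not part of the article) uses the growth rates and interaction matrix of the 4-dimensional example with formula_6; the initial condition and integration time are arbitrary choices.
```python
import numpy as np
from scipy.integrate import solve_ivp

# Growth rates and interaction matrix of the 4-species example (carrying capacities 1)
r = np.array([1.0, 0.72, 1.53, 1.27])
alpha = np.array([
    [1.00, 1.09, 1.52, 0.00],
    [0.00, 1.00, 0.44, 1.36],
    [2.33, 0.00, 1.00, 0.47],
    [1.21, 0.51, 0.35, 1.00],
])

def competitive_lv(t, x):
    # dx_i/dt = r_i x_i (1 - sum_j alpha_ij x_j)
    return r * x * (1.0 - alpha @ x)

# Coexisting equilibrium: inverse of the interaction matrix applied to the unit vector
print(np.linalg.solve(alpha, np.ones(4)))   # ~[0.3013, 0.4586, 0.1307, 0.3557]

sol = solve_ivp(competitive_lv, (0, 500), [0.3, 0.3, 0.3, 0.3], rtol=1e-9, atol=1e-9)
print(sol.y[:, -1])   # populations stay bounded; the long-run behaviour is chaotic here
```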
A detailed study of the parameter dependence of the dynamics was performed by Roques and Chekroun.
The authors observed that interaction and growth parameters leading respectively to extinction of three species, or coexistence of two, three or four species, are for the most part arranged in large regions with clear boundaries. As predicted by the theory, chaos was also found, taking place however over much smaller islands of the parameter space, which causes difficulties in the identification of their location by a random search algorithm. These regions where chaos occurs are, in the three cases analyzed, situated at the interface between a non-chaotic four species region and a region where extinction occurs. This implies a high sensitivity of biodiversity with respect to parameter variations in the chaotic regions. Additionally, in regions where extinction occurs which are adjacent to chaotic regions, the computation of local Lyapunov exponents revealed that a possible cause of extinction is the overly strong fluctuations in species abundances induced by local chaos.
Spatial arrangements.
Background.
There are many situations where the strength of species' interactions depends on the physical distance of separation. Imagine bee colonies in a field. They will compete for food strongly with the colonies located near to them, weakly with further colonies, and not at all with colonies that are far away. This doesn't mean, however, that those far colonies can be ignored. There is a transitive effect that permeates through the system. If colony "A" interacts with colony "B", and "B" with "C", then "C" affects "A" through "B". Therefore, if the competitive Lotka–Volterra equations are to be used for modeling such a system, they must incorporate this spatial structure.
Matrix organization.
One possible way to incorporate this spatial structure is to modify the nature of the Lotka–Volterra equations to something like a reaction–diffusion system. It is much easier, however, to keep the format of the equations the same and instead modify the interaction matrix. For simplicity, consider a five species example where all of the species are aligned on a circle, and each interacts only with the two neighbors on either side with strength "α"−1 and "α"1 respectively. Thus, species 3 interacts only with species 2 and 4, species 1 interacts only with species 2 and 5, etc. The interaction matrix will now be
formula_9
If each species is identical in its interactions with neighboring species, then each row of the matrix is just a permutation of the first row. A simple, but non-realistic, example of this type of system has been characterized by Sprott "et al." The coexisting equilibrium point for these systems has a very simple form given by the inverse of the sum of the row
formula_10
Lyapunov functions.
A Lyapunov function is a function of the system "f" = "f"("x") whose existence in a system demonstrates stability. It is often useful to imagine a Lyapunov function as the energy of the system. If the derivative of the function is equal to zero for some orbit not including the equilibrium point, then that orbit is a stable attractor, but it must be either a limit cycle or "n"-torus - but not a strange attractor (this is because the largest Lyapunov exponent of a limit cycle and "n"-torus are zero while that of a strange attractor is positive). If the derivative is less than zero everywhere except the equilibrium point, then the equilibrium point is a stable fixed point attractor. When searching a dynamical system for non-fixed point attractors, the existence of a Lyapunov function can help eliminate regions of parameter space where these dynamics are impossible.
The spatial system introduced above has a Lyapunov function that has been explored by Wildenberg "et al." If all species are identical in their spatial interactions, then the interaction matrix is circulant. The eigenvalues of a circulant matrix are given by
formula_11
for "k" = 0"N" − 1 and where formula_12 the "N"th root of unity. Here "cj" is the "j"th value in the first row of the circulant matrix.
The Lyapunov function exists if the real part of the eigenvalues are positive (Re("λk") > 0 for "k" = 0, …, "N"/2). Consider the system where "α"−2 = "a", "α"−1 = "b", "α"1 = "c", and "α"2 = "d". The Lyapunov function exists if
formula_13
for "k" = 0, …, "N" − 1. Now, instead of having to integrate the system over thousands of time steps to see if any dynamics other than a fixed point attractor exist, one need only determine if the Lyapunov function exists (note: the absence of the Lyapunov function doesn't guarantee a limit cycle, torus, or chaos).
Example: Let "α"−2 = 0.451, "α"−1 = 0.5, and "α"2 = 0.237. If "α"1 = 0.5 then all eigenvalues are negative and the only attractor is a fixed point. If "α"1 = 0.852 then the real part of one of the complex eigenvalue pair becomes positive and there is a strange attractor. The disappearance of this Lyapunov function coincides with a Hopf bifurcation.
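The eigenvalue condition in this example can be checked with a few lines of NumPy (not part of the article). The script evaluates formula_11 for the circulant interaction matrix built from the interaction strengths above; the choice "N" = 100 species is arbitrary.
```python
import numpy as np

def circulant_eigs(alphas, N):
    """Eigenvalues lambda_k = sum_j c_j gamma^(k j) of the circulant interaction
    matrix for N species on a circle; `alphas` maps offset -> interaction strength."""
    c = np.zeros(N)
    c[0] = 1.0
    for offset, strength in alphas.items():
        c[offset % N] = strength            # alpha_1 at column 1, alpha_-1 at column N-1, ...
    gamma = np.exp(2j * np.pi / N)
    return np.array([np.sum(c * gamma ** (k * np.arange(N))) for k in range(N)])

N = 100
base = {-2: 0.451, -1: 0.5, 2: 0.237}
for a1 in (0.5, 0.852):
    lams = circulant_eigs({**base, 1: a1}, N)
    # True  -> all Re(lambda_k) > 0, so the Lyapunov function exists (fixed point only)
    # False -> the Lyapunov function disappears, as for alpha_1 = 0.852
    print(a1, lams.real.min() > 0)
```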
Line systems and eigenvalues.
It is also possible to arrange the species into a line. The interaction matrix for this system is very similar to that of a circle except the interaction terms in the lower left and upper right of the matrix are deleted (those that describe the interactions between species 1 and "N", etc.).
formula_14
This change eliminates the Lyapunov function described above for the system on a circle, but most likely there are other Lyapunov functions that have not been discovered.
The eigenvalues of the circle system plotted in the complex plane form a trefoil shape. The eigenvalues from a short line form a sideways Y, but those of a long line begin to resemble the trefoil shape of the circle. This could be due to the fact that a long line is indistinguishable from a circle to those species far from the ends.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{dx \\over dt} = rx\\left(1-{x \\over K}\\right)."
},
{
"math_id": 1,
"text": "\\begin{align}\n{dx_1 \\over dt} &= r_1 x_1\\left(1-\\left({x_1+\\alpha_{12}x_2 \\over K_1}\\right) \\right) \\\\[0.5ex]\n{dx_2 \\over dt} &= r_2 x_2\\left(1-\\left({x_2+\\alpha_{21}x_1 \\over K_2}\\right) \\right).\n\\end{align}"
},
{
"math_id": 2,
"text": "\\frac{dx_i}{dt} = r_i x_i \\left(1- \\frac{\\sum_{j=1}^N \\alpha_{ij}x_j}{K_i} \\right) "
},
{
"math_id": 3,
"text": "\\frac{dx_i}{dt} = r_i x_i \\left( 1 - \\sum_{j=1}^N \\alpha_{ij}x_j \\right) "
},
{
"math_id": 4,
"text": "\\Delta_{N-1} = \\left \\{ x_i : x_i \\ge 0, \\sum_i x_i = 1 \\right \\}"
},
{
"math_id": 5,
"text": "r = \\begin{bmatrix} 1 \\\\ 0.72 \\\\ 1.53 \\\\ 1.27 \\end{bmatrix} \\quad\n\\alpha = \\begin{bmatrix} 1 & 1.09 & 1.52 & 0 \\\\ 0 & 1 & 0.44 & 1.36 \\\\ 2.33 & 0 & 1 & 0.47 \\\\ 1.21 & 0.51 & 0.35 & 1 \\end{bmatrix}"
},
{
"math_id": 6,
"text": "K_i=1"
},
{
"math_id": 7,
"text": "i"
},
{
"math_id": 8,
"text": "\\overline{x} = \\left ( \\alpha \\right )^{-1} \\begin{bmatrix} 1 \\\\ 1 \\\\ 1 \\\\ 1 \\end{bmatrix} = \\begin{bmatrix} 0.3013 \\\\ 0.4586 \\\\ 0.1307 \\\\ 0.3557 \\end{bmatrix}."
},
{
"math_id": 9,
"text": "\\alpha_{ij} = \\begin{bmatrix}1 & \\alpha_1 & 0 & 0 & \\alpha_{-1} \\\\ \\alpha_{-1} & 1 & \\alpha_1 & 0 & 0 \\\\ 0 & \\alpha_{-1} & 1 & \\alpha_1 & 0 \\\\ 0 & 0 & \\alpha_{-1} & 1 & \\alpha_1 \\\\ \\alpha_1 & 0 & 0 & \\alpha_{-1} & 1 \\end{bmatrix}."
},
{
"math_id": 10,
"text": "\\overline{x}_i = \\frac{1}{\\sum_{j=1}^N \\alpha_{ij}} = \\frac{1}{\\alpha_{-1} + 1 + \\alpha_1}."
},
{
"math_id": 11,
"text": "\\lambda_k = \\sum_{j=0}^{N-1} c_j\\gamma^{kj}"
},
{
"math_id": 12,
"text": "\\gamma = e^{i2\\pi/N}"
},
{
"math_id": 13,
"text": "\\begin{align}\n\\operatorname{Re}(\\lambda_k) &= \\operatorname{Re} \\left ( 1+\\alpha_{-2}e^{i2 \\pi k(N-2)/N} + \\alpha_{-1}e^{i2 \\pi k(N-1)/N} + \\alpha_1e^{i2 \\pi k/N} + \\alpha_2e^{i4 \\pi k/N} \\right ) \\\\\n&= 1+(\\alpha_{-2}+\\alpha_2)\\cos \\left ( \\frac{4 \\pi k}{N} \\right ) + (\\alpha_{-1}+\\alpha_1)\\cos \\left ( \\frac{2 \\pi k}{N} \\right ) > 0\n\\end{align}"
},
{
"math_id": 14,
"text": "\\alpha_{ij} = \\begin{bmatrix}1 & \\alpha_1 & 0 & 0 & 0 \\\\ \\alpha_{-1} & 1 & \\alpha_1 & 0 & 0 \\\\ 0 & \\alpha_{-1} & 1 & \\alpha_1 & 0 \\\\ 0 & 0 & \\alpha_{-1} & 1 & \\alpha_1 \\\\ 0 & 0 & 0 & \\alpha_{-1} & 1 \\end{bmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=1291808 |
12918117 | Smart ligand | Sub-type of substance that forms a complex with a biomolecule
Smart ligands are affinity ligands selected with pre-defined equilibrium (formula_0), kinetic (formula_1, formula_2) and thermodynamic (ΔH, ΔS) parameters of biomolecular interaction.
Ligands with desired parameters can be selected from large combinatorial libraries of biopolymers using instrumental separation techniques with well-described kinetic behaviour, such as kinetic capillary electrophoresis (KCE), surface plasmon resonance (SPR), microscale thermophoresis (MST), etc. Known examples of smart ligands include DNA smart aptamers; however, RNA and peptide smart aptamers can also be developed.
Smart ligands can find a set of unique applications in biomedical research, drug discovery and proteomic studies. For example, a panel of DNA smart aptamers has been recently used to develop affinity analysis of proteins with ultra-wide dynamic range of measured concentrations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K_{d}"
},
{
"math_id": 1,
"text": "k_{off}"
},
{
"math_id": 2,
"text": "k_{on}"
}
] | https://en.wikipedia.org/wiki?curid=12918117 |
129222 | Unitary matrix | Complex matrix whose conjugate transpose equals its inverse
In advanced linear algebra, an invertible complex square matrix U is unitary if its matrix inverse "U"−1 equals its conjugate transpose "U"*, that is, if
formula_0
where I is the identity matrix.
In physics, especially in quantum mechanics, the conjugate transpose is referred to as the Hermitian adjoint of a matrix and is denoted by a dagger (†), so the equation above is written
formula_1
A complex matrix U is special unitary if it is unitary and its matrix determinant equals 1.
For real numbers, the analogue of a unitary matrix is an orthogonal matrix. Unitary matrices have significant importance in quantum mechanics because they preserve norms, and thus, probability amplitudes.
Properties.
For any unitary matrix U of finite size, the following hold:
For any nonnegative integer "n", the set of all "n" × "n" unitary matrices with matrix multiplication forms a group, called the unitary group U("n").
Every square matrix with unit Euclidean norm is the average of two unitary matrices.
Equivalent conditions.
If "U" is a square, complex matrix, then the following conditions are equivalent:
Elementary constructions.
2 × 2 unitary matrix.
One general expression of a 2 × 2 unitary matrix is
formula_15
which depends on 4 real parameters (the phase of a, the phase of b, the relative magnitude between a and b, and the angle φ). The form is configured so the determinant of such a matrix is
formula_16
The sub-group of those elements formula_17 with formula_18 is called the special unitary group SU(2).
Among several alternative forms, the matrix U can be written in this form:
formula_19
where formula_20 and formula_21 above, and the angles formula_22 can take any values.
By introducing formula_23 and formula_24, the matrix U has the following factorization:
formula_25
This expression highlights the relation between 2 × 2 unitary matrices and 2 × 2 orthogonal matrices of angle θ.
Another factorization is
formula_26
Many other factorizations of a unitary matrix in basic matrices are possible.
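A brief NumPy check (not part of the article) that the four-parameter form above indeed yields unitary matrices; the random parameter draw is arbitrary.
```python
import numpy as np

def unitary_2x2(phi, alpha, beta, theta):
    """2 x 2 unitary matrix from the four-parameter form given above."""
    return np.exp(1j * phi / 2) * np.array([
        [np.exp(1j * alpha) * np.cos(theta),   np.exp(1j * beta) * np.sin(theta)],
        [-np.exp(-1j * beta) * np.sin(theta),  np.exp(-1j * alpha) * np.cos(theta)],
    ])

rng = np.random.default_rng(0)
U = unitary_2x2(*rng.uniform(0, 2 * np.pi, size=4))

print(np.allclose(U.conj().T @ U, np.eye(2)))     # True: U†U = I
print(np.isclose(abs(np.linalg.det(U)), 1.0))     # True: |det U| = 1
```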
See also.
<templatestyles src="Div col/styles.css"/>
Skew-Hermitian matrix
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U^* U = UU^* = I,"
},
{
"math_id": 1,
"text": "U^\\dagger U = UU^\\dagger = I."
},
{
"math_id": 2,
"text": "U^* U = UU^*"
},
{
"math_id": 3,
"text": "U = VDV^*,"
},
{
"math_id": 4,
"text": "\\left|\\det(U)\\right| = 1"
},
{
"math_id": 5,
"text": "\\det(U)"
},
{
"math_id": 6,
"text": "U"
},
{
"math_id": 7,
"text": "U^*"
},
{
"math_id": 8,
"text": "U^{-1} = U^*"
},
{
"math_id": 9,
"text": "\\Complex^n"
},
{
"math_id": 10,
"text": "U^*U = I"
},
{
"math_id": 11,
"text": "UU^* = I"
},
{
"math_id": 12,
"text": "\\|Ux\\|_2 = \\|x\\|_2"
},
{
"math_id": 13,
"text": "x \\in \\Complex^n"
},
{
"math_id": 14,
"text": "\\|x\\|_2 = \\sqrt{\\sum_{i=1}^n |x_i|^2}"
},
{
"math_id": 15,
"text": "U = \\begin{bmatrix}\n a & b \\\\\n -e^{i\\varphi} b^* & e^{i\\varphi} a^* \\\\\n\\end{bmatrix},\n\\qquad\n\\left| a \\right|^2 + \\left| b \\right|^2 = 1\\ ,"
},
{
"math_id": 16,
"text": " \\det(U) = e^{i \\varphi} ~. "
},
{
"math_id": 17,
"text": "\\ U\\ "
},
{
"math_id": 18,
"text": "\\ \\det(U) = 1\\ "
},
{
"math_id": 19,
"text": "\\ U = e^{i\\varphi / 2} \\begin{bmatrix}\n e^{i\\alpha} \\cos \\theta & e^{i\\beta} \\sin \\theta \\\\\n -e^{-i\\beta} \\sin \\theta & e^{-i\\alpha} \\cos \\theta \\\\\n\\end{bmatrix}\\ ,"
},
{
"math_id": 20,
"text": "\\ e^{i\\alpha} \\cos \\theta = a\\ "
},
{
"math_id": 21,
"text": "\\ e^{i\\beta} \\sin \\theta = b\\ ,"
},
{
"math_id": 22,
"text": "\\ \\varphi, \\alpha, \\beta, \\theta\\ "
},
{
"math_id": 23,
"text": "\\ \\alpha = \\psi + \\delta\\ "
},
{
"math_id": 24,
"text": "\\ \\beta = \\psi - \\delta\\ ,"
},
{
"math_id": 25,
"text": " U = e^{i\\varphi /2} \\begin{bmatrix}\n e^{i\\psi} & 0 \\\\\n 0 & e^{-i\\psi}\n\\end{bmatrix}\n\\begin{bmatrix}\n \\cos \\theta & \\sin \\theta \\\\\n -\\sin \\theta & \\cos \\theta \\\\\n\\end{bmatrix} \n\\begin{bmatrix}\n e^{i\\delta} & 0 \\\\\n 0 & e^{-i\\delta}\n\\end{bmatrix} ~.\n"
},
{
"math_id": 26,
"text": "U = \\begin{bmatrix}\n \\cos \\rho & -\\sin \\rho \\\\\n \\sin \\rho & \\;\\cos \\rho \\\\\n\\end{bmatrix} \n\\begin{bmatrix}\n e^{i\\xi} & 0 \\\\\n 0 & e^{i\\zeta}\n\\end{bmatrix}\n\\begin{bmatrix}\n \\;\\cos \\sigma & \\sin \\sigma \\\\\n -\\sin \\sigma & \\cos \\sigma \\\\\n\\end{bmatrix} ~.\n"
}
] | https://en.wikipedia.org/wiki?curid=129222 |
1292241 | World Chess Solving Championship | Annual chess puzzles competition
The World Chess Solving Championship (WCSC) is an annual competition in the solving of chess problems (also known as chess puzzles) organized by the World Federation for Chess Composition (WFCC), previously by FIDE via the Permanent Commission of the FIDE for Chess Compositions (PCCC).
The participants must solve a series of different types of chess problem in a set amount of time. Points are awarded for correct solutions in the least amount of time. The highest score at the end of the competition is proclaimed the winner.
Format.
The Tournament consists of six rounds over two days, with three rounds each day according to the following table:
Rating.
Formulas.
We assume that the tournament has n solvers, with ratings formula_0 (i = 1, ..., n) and in the tournament their scores are formula_1 (i = 1, ..., n).
For players who didn't already have a rating, a preliminary rating is calculated. This rating is determined at the end of the first tournament in which the solver has participated. The formula that is used to calculate this rating is:
For players who already have a rating, the new rating is calculated as follows.
These formulas are the defaults. If parameters fall outside a predetermined range, correction calculations are being carried out. These can be found in the reference.
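The following Python sketch is a reconstruction of the default calculation from the formulas listed at the end of this article; the variable names (AveRat, AveSco, Slope, Intercept, TC) follow the labels used inside those formulas, and the value of the tournament coefficient TC is not given in the text here.
```python
def regression_line(ratings, scores):
    """AveRat, AveSco, VarRat and Covar, then the Slope and Intercept of the
    line relating a solver's rating to the tournament score."""
    n = len(ratings)
    ave_rat = sum(ratings) / n
    ave_sco = sum(scores) / n
    var_rat = sum((r - ave_rat) ** 2 for r in ratings) / n
    covar = sum(r * s for r, s in zip(ratings, scores)) / n - ave_rat * ave_sco
    slope = covar / var_rat
    intercept = ave_sco - slope * ave_rat
    return slope, intercept

def preliminary_rating(slope, intercept, score):
    # Rating implied by the solver's score on the regression line
    return (score - intercept) / slope

def updated_rating(slope, intercept, rating, score, tc):
    # tc is a tournament coefficient; the rating change is proportional to the
    # difference between the actual and the expected score
    expected_score = slope * rating + intercept
    return rating + tc * (score - expected_score)
```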
Rating list in 2015.
October 1st 2015, Top 10:
Rating list in 2023.
July 1st 2023, Top 10:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R_i"
},
{
"math_id": 1,
"text": "S_i"
},
{
"math_id": 2,
"text": "\\frac1n \\sum_{i=1}^{n} (R_i - \\mbox{AveRat})^2"
},
{
"math_id": 3,
"text": "(\\frac1n \\sum_{i=1}^{n} R_i S_i) - \\mbox{AveRat} \\cdot \\mbox{AveSco}"
},
{
"math_id": 4,
"text": "\\frac\\mbox{Covar}\\mbox{VarRat}"
},
{
"math_id": 5,
"text": "\\mbox{AveSco} - \\mbox{Slope} \\cdot \\mbox{AveRat}"
},
{
"math_id": 6,
"text": "\\frac{\\mbox{Score of player} - \\mbox{Intercept}}\\mbox{Slope}"
},
{
"math_id": 7,
"text": "\\mbox{Slope} \\cdot \\mbox{Rat} + \\mbox{Intercept}"
},
{
"math_id": 8,
"text": "\\mbox{TC} \\cdot (\\mbox{Score of player} - \\mbox{ExpScore})"
},
{
"math_id": 9,
"text": "\\mbox{Rat} + \\mbox{ChangeRat}"
}
] | https://en.wikipedia.org/wiki?curid=1292241 |
12928217 | Banded waveguide synthesis | Banded Waveguides Synthesis is a physical modeling synthesis method to simulate sounds of dispersive sounding objects, or objects with strongly inharmonic resonant frequencies efficiently. It can be used to model the sound of instruments based on elastic solids such as vibraphone and marimba bars, singing bowls and bells. It can also be used for other instruments with inharmonic partials, such as membranes or plates. For example, simulations of tabla drums and cymbals have been implemented using this method. Because banded waveguides retain the dynamics of the system, complex non-linear excitations can be implemented. The method was originally invented in 1999 by Georg Essl and Perry Cook to synthesize the sound of bowed vibraphone bars.
In the case of the standard one-dimensional wave equation formula_0 disturbances of all frequencies travel with the same constant speed formula_1. In dispersive media, the traveling speed of disturbances depends on their frequency and we get formula_2 where formula_3 is the frequency of the disturbance. Many physical systems are dispersive, for example the elastic beams described by the Euler–Bernoulli beam equation formula_4 where formula_5 is a material constant.
Banded waveguides model dispersive behavior by splitting the propagation of disturbances into frequency bands. Each frequency band is modeled using a band-limited version of the standard digital waveguide method. Each frequency band is tuned to the resonant frequencies of the sounding object to be modeled to avoid any discretization error at the dominant and audible frequencies.
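The following Python/NumPy code is a deliberately simplified sketch, not taken from the original banded-waveguide papers: each resonant mode gets its own feedback delay loop whose length is tuned to the mode and whose feedback path is band-limited by a bandpass filter. The mode ratios, feedback gain and bandwidth are illustrative assumptions.
```python
import numpy as np
from scipy.signal import butter, lfilter

def banded_waveguide(mode_freqs, fs=44100, duration=0.5, feedback=0.99, bandwidth=60.0):
    """Simplified banded waveguide: one band-limited feedback delay loop per mode,
    excited by an impulse. Returns the summed output of all bands."""
    n = int(fs * duration)
    excitation = np.zeros(n)
    excitation[0] = 1.0                        # impulsive strike
    out = np.zeros(n)
    for f in mode_freqs:
        delay = max(1, int(round(fs / f)))     # loop length tuned to the mode frequency
        wn = [(f - bandwidth / 2) / (fs / 2), (f + bandwidth / 2) / (fs / 2)]
        b, a = butter(2, wn, btype="bandpass") # band-limits the feedback path
        zi = np.zeros(max(len(a), len(b)) - 1)
        buf = np.zeros(delay)                  # the delay line of this band
        for i in range(n):
            delayed = buf[i % delay]
            sample, zi = lfilter(b, a, [excitation[i] + feedback * delayed], zi=zi)
            buf[i % delay] = sample[0]
            out[i] += delayed
    return out

# Inharmonic, bar-like modal frequencies (ratios roughly 1 : 2.76 : 5.40)
signal = banded_waveguide([440.0, 440.0 * 2.76, 440.0 * 5.40])
```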
Banded waveguide synthesis is implemented in most available sound synthesis libraries and programs such as:
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y_{tt}=c^2y_{xx}"
},
{
"math_id": 1,
"text": "c"
},
{
"math_id": 2,
"text": "c(\\omega)"
},
{
"math_id": 3,
"text": "\\omega"
},
{
"math_id": 4,
"text": "y_{tt}=ky_{xxxx}"
},
{
"math_id": 5,
"text": "k"
}
] | https://en.wikipedia.org/wiki?curid=12928217 |
12928899 | Pollard's kangaroo algorithm | In computational number theory and computational algebra, Pollard's kangaroo algorithm (also Pollard's lambda algorithm, see Naming below) is an algorithm for solving the discrete logarithm problem. The algorithm was introduced in 1978 by the number theorist John M. Pollard, in the same paper as his better-known Pollard's rho algorithm for solving the same problem. Although Pollard described the application of his algorithm to the discrete logarithm problem in the multiplicative group of units modulo a prime "p", it is in fact a generic discrete logarithm algorithm—it will work in any finite cyclic group.
Algorithm.
Suppose formula_0 is a finite cyclic group of order formula_1 which is generated by the element formula_2, and we seek to find the discrete logarithm formula_3 of the element formula_4 to the base formula_2. In other words, one seeks formula_5 such that formula_6. The lambda algorithm allows one to search for formula_3 in some interval formula_7. One may search the entire range of possible logarithms by setting formula_8 and formula_9.
1. Choose a set formula_10 of positive integers of mean roughly formula_11 and define a pseudorandom map formula_12.
2. Choose an integer formula_13 and compute a sequence of group elements formula_14 according to:
3. Compute
formula_17
Observe that:
formula_18
4. Begin computing a second sequence of group elements formula_19 according to:
and a corresponding sequence of integers formula_22 according to:
formula_23.
Observe that:
formula_24
5. Stop computing terms of formula_25 and formula_26 when either of the following conditions is met:
A) formula_27 for some formula_28. If the sequences formula_29 and formula_30 "collide" in this manner, then we have:
formula_31
and so we are done.
B) formula_32. If this occurs, then the algorithm has failed to find formula_3. Subsequent attempts can be made by changing the choice of formula_10 and/or formula_33.
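A minimal Python sketch of the steps above for the multiplicative group of integers modulo a prime "p" (not part of the original paper); the jump-set construction and the number of tame jumps are illustrative choices, and a single attempt may fail, in which case one retries with a different jump set.
```python
from math import isqrt

def pollard_kangaroo(alpha, beta, p, a, b, order):
    """One attempt of the kangaroo method mod p: look for x in [a, b]
    with alpha^x = beta (mod p); return None on failure."""
    # Jump set S: powers of two whose mean is on the order of sqrt(b - a)
    k = isqrt(b - a).bit_length() + 2
    jumps = [2 ** i for i in range(k)]
    f = lambda g: jumps[g % k]              # pseudorandom map from group elements to S

    # Tame kangaroo: N jumps starting from alpha^b; it ends at the trap alpha^(b + d)
    N = 4 * isqrt(b - a) + 4
    x, d = pow(alpha, b, p), 0
    for _ in range(N):
        s = f(x)
        x = x * pow(alpha, s, p) % p
        d += s
    trap = x

    # Wild kangaroo: starts from beta and jumps until it hits the trap or passes it
    y, dy = beta % p, 0
    while dy <= b - a + d:
        if y == trap:
            return (b + d - dy) % order     # alpha^(x + dy) = alpha^(b + d)
        s = f(y)
        y = y * pow(alpha, s, p) % p
        dy += s
    return None                             # condition B: retry with another jump set

# Small demonstration: p = 101, alpha = 2 generates the group of order 100
p, alpha, order = 101, 2, 100
beta = pow(alpha, 53, p)
x = pollard_kangaroo(alpha, beta, p, 0, order - 1, order)
print(x, x is not None and pow(alpha, x, p) == beta)   # e.g. 53 True (may occasionally fail)
```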
Complexity.
Pollard gives the time complexity of the algorithm as formula_34, using a probabilistic argument based on the assumption that formula_33 acts pseudorandomly. Since formula_35 can be represented using formula_36 bits, this is exponential in the problem size (though still a significant improvement over the trivial brute-force algorithm that takes time formula_37). For an example of a subexponential time discrete logarithm algorithm, see the index calculus algorithm.
Naming.
The algorithm is well known by two names.
The first is "Pollard's kangaroo algorithm". This name is a reference to an analogy used in the paper presenting the algorithm, where the algorithm is explained in terms of using a "tame" kangaroo to trap a "wild" kangaroo. Pollard has explained that this analogy was inspired by a "fascinating" article published in the same issue of "Scientific American" as an exposition of the RSA public key cryptosystem. The article described an experiment in which a kangaroo's "energetic cost of locomotion, measured in terms of oxygen consumption at various speeds, was determined by placing kangaroos on a treadmill".
The second is "Pollard's lambda algorithm". Much like the name of another of Pollard's discrete logarithm algorithms, Pollard's rho algorithm, this name refers to the similarity between a visualisation of the algorithm and the Greek letter lambda (formula_38). The shorter stroke of the letter lambda corresponds to the sequence formula_29, since it starts from the position b to the right of x. Accordingly, the longer stroke corresponds to the sequence formula_25, which "collides with" the first sequence (just like the strokes of a lambda intersect) and then follows it subsequently.
Pollard has expressed a preference for the name "kangaroo algorithm", as this avoids confusion with some parallel versions of his rho algorithm, which have also been called "lambda algorithms".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "\\beta"
},
{
"math_id": 5,
"text": "x \\in Z_n"
},
{
"math_id": 6,
"text": "\\alpha^x = \\beta"
},
{
"math_id": 7,
"text": "[a,\\ldots,b]\\subset Z_n"
},
{
"math_id": 8,
"text": "a=0"
},
{
"math_id": 9,
"text": "b=n-1"
},
{
"math_id": 10,
"text": "S"
},
{
"math_id": 11,
"text": "\\sqrt{b-a}"
},
{
"math_id": 12,
"text": "f: G \\rightarrow S"
},
{
"math_id": 13,
"text": "N"
},
{
"math_id": 14,
"text": "\\{x_0,x_1,\\ldots,x_N\\}"
},
{
"math_id": 15,
"text": "x_0 = \\alpha^b\\,"
},
{
"math_id": 16,
"text": "x_{i+1} = x_i\\alpha^{f(x_i)}\\text{ for }i=0,1,\\ldots,N-1"
},
{
"math_id": 17,
"text": "d = \\sum_{i=0}^{N-1}f(x_i)."
},
{
"math_id": 18,
"text": "x_N = x_0\\alpha^d = \\alpha^{b+d}\\, ."
},
{
"math_id": 19,
"text": "\\{y_0,y_1,\\ldots\\}"
},
{
"math_id": 20,
"text": "y_0 = \\beta\\,"
},
{
"math_id": 21,
"text": "y_{i+1} = y_i\\alpha^{f(y_i)}\\text{ for }i=0,1,\\ldots,N-1"
},
{
"math_id": 22,
"text": "\\{d_0,d_1,\\ldots\\}"
},
{
"math_id": 23,
"text": "d_n = \\sum_{i=0}^{n-1}f(y_i)"
},
{
"math_id": 24,
"text": "y_i = y_0\\alpha^{d_i} = \\beta\\alpha^{d_i}\\mbox{ for }i=0,1,\\ldots,N-1"
},
{
"math_id": 25,
"text": "\\{y_i\\}"
},
{
"math_id": 26,
"text": "\\{d_i\\}"
},
{
"math_id": 27,
"text": "y_j = x_N"
},
{
"math_id": 28,
"text": "j"
},
{
"math_id": 29,
"text": "\\{x_i\\}"
},
{
"math_id": 30,
"text": "\\{y_j\\}"
},
{
"math_id": 31,
"text": "x_N = y_j \\Rightarrow \\alpha^{b+d} = \\beta\\alpha^{d_j} \\Rightarrow \\beta = \\alpha^{b+d-d_j} \\Rightarrow \nx \\equiv b+d-d_j \\pmod{n}"
},
{
"math_id": 32,
"text": "d_i > b-a+d"
},
{
"math_id": 33,
"text": "f"
},
{
"math_id": 34,
"text": "O(\\sqrt{b-a})"
},
{
"math_id": 35,
"text": "a, b"
},
{
"math_id": 36,
"text": "O(\\log b)"
},
{
"math_id": 37,
"text": "O(b-a)"
},
{
"math_id": 38,
"text": "\\lambda"
}
] | https://en.wikipedia.org/wiki?curid=12928899 |
12931279 | Functional-theoretic algebra | Mathematical concept
Any vector space can be made into a unital associative algebra, called functional-theoretic algebra, by defining products in terms of two linear functionals. In general, it is a non-commutative algebra. It becomes commutative when the two functionals are the same.
Definition.
Let "AF" be a vector space over a field "F", and let "L"1 and "L"2 be two linear functionals on AF with the property "L"1("e") = "L"2("e") = 1"F" for some "e" in "AF". We define multiplication of two elements "x", "y" in "AF" by
formula_0
It can be verified that the above multiplication is associative and that "e" is the identity of this multiplication.
So, AF forms an associative algebra with unit "e" and is called a "functional theoretic algebra"(FTA).
Suppose the two linear functionals "L"1 and "L"2 are the same, say "L." Then "AF" becomes a commutative algebra with multiplication defined by
formula_1
Example.
"X" is a nonempty set and "F" a field. "F""X" is the set of functions from "X" to "F".
If "f, g" are in "F""X", "x" in "X" and "α" in "F", then define
formula_2
and
formula_3
With addition and scalar multiplication defined as this, "F""X" is a vector space over "F."
Now, fix two elements "a, b" in "X" and define a function "e" from "X" to "F" by "e"("x") = 1"F" for all "x" in "X".
Define "L"1 and "L2" from "F""X" to "F" by "L"1("f") = "f"("a") and "L"2("f") = "f"("b").
Then "L"1 and "L"2 are two linear functionals on "F""X" such that "L"1("e")= "L"2("e")= 1"F"
For "f, g" in "F""X" define
formula_4
Then "F""X" becomes a non-commutative function algebra with the function "e" as the identity of multiplication.
Note that
formula_5
FTA of Curves in the Complex Plane.
Let C denote the field of complex numbers. A continuous function "γ" from the closed interval [0, 1] of real numbers to the field C is called a curve. The complex numbers "γ"(0) and "γ"(1) are, respectively, the initial and terminal points of the curve. If they coincide, the curve is called a "loop". The set "V"[0, 1] of all the curves is a vector space over C.
We can make this vector space of curves into an algebra by defining multiplication as above. Choosing formula_6, we have for "α", "β" in "V"[0, 1],
formula_7
Then, "V"[0, 1] is a non-commutative algebra with "e" as the unity.
We illustrate this with an example.
Example of f-Product of Curves.
Let us take (1) the line segment joining the points (1, 0) and (0, 1) and (2) the unit circle with center at the origin. As curves in "V"[0, 1], their equations can be obtained as
formula_8
Since formula_9 the circle "g"
is a loop.
The line segment "f" starts from :formula_10
and ends at formula_11
Now, we get two "f"-products
formula_12 given by
formula_13
and
formula_14
Observe that formula_15 showing that multiplication is non-commutative. Also, both the products start from formula_16
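A NumPy sketch (not part of the article) that samples the two curves on a grid and checks the two f-products against the closed forms given above.
```python
import numpy as np

t = np.linspace(0, 1, 201)
f = 1 - t + 1j * t                                         # line segment from 1 to i
g = np.cos(2 * np.pi * t) + 1j * np.sin(2 * np.pi * t)     # unit circle (a loop)

def f_product(a, b):
    """f-product a.b = a(0) b + b(1) a - a(0) b(1) e on sampled curves."""
    e = np.ones_like(b)                                    # unit curve e(t) = 1
    return a[0] * b + b[-1] * a - a[0] * b[-1] * e

fg = f_product(f, g)
gf = f_product(g, f)

# The sampled products match the closed forms given above
print(np.allclose(fg, (-t + np.cos(2 * np.pi * t)) + 1j * (t + np.sin(2 * np.pi * t))))       # True
print(np.allclose(gf, (1 - t - np.sin(2 * np.pi * t)) + 1j * (t - 1 + np.cos(2 * np.pi * t))))  # True
print(fg[0], fg[-1], gf[0], gf[-1])    # both products start at 1 and end at i
```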
References.
<templatestyles src="Refbegin/styles.css" />
Book of Beautiful Curves
Certain Number Theoretic Episodes in Algebra | [
{
"math_id": 0,
"text": " x \\cdot y = L_1(x)y + L_2(y)x - L_1(x) L_2(y) e. "
},
{
"math_id": 1,
"text": " x \\cdot y = L(x)y + L(y)x - L(x)L(y)e. "
},
{
"math_id": 2,
"text": " (f+g)(x) = f(x) + g(x)\\,"
},
{
"math_id": 3,
"text": " (\\alpha f)(x)=\\alpha f(x).\\,"
},
{
"math_id": 4,
"text": " f \\cdot g = L_1(f)g + L_2(g)f - L_1(f) L_2(g) e = f(a)g + g(b)f - f(a)g(b)e. "
},
{
"math_id": 5,
"text": " (f \\cdot g)(a) = f(a)g(a)\\mbox{ and } (f \\cdot g)(b) = f(b)g(b). "
},
{
"math_id": 6,
"text": "e(t) = 1, \\forall \\in [0, 1] "
},
{
"math_id": 7,
"text": " {\\alpha} \\cdot {\\beta} = {\\alpha}(0){\\beta} + {\\beta}(1){\\alpha} - {\\alpha}(0){\\beta}(1)e "
},
{
"math_id": 8,
"text": " f(t)=1 - t + it \\mbox{ and } g(t)= \\cos(2\\pi t)+ i\\sin(2\\pi t)\n"
},
{
"math_id": 9,
"text": "g(0)=g(1)=1"
},
{
"math_id": 10,
"text": " f(0)=1 "
},
{
"math_id": 11,
"text": " f(1)= i "
},
{
"math_id": 12,
"text": " f \\cdot g \\mbox{ and } g \\cdot f"
},
{
"math_id": 13,
"text": "(f\\cdot g)(t)=[-t+\\cos (2\\pi t)]+i[t+\\sin(2\\pi t)]"
},
{
"math_id": 14,
"text": "(g\\cdot f)(t)=[1-t - \\sin (2\\pi t)] +i[t-1+\\cos(2\\pi t)]\n"
},
{
"math_id": 15,
"text": " f\\cdot g \\neq g\\cdot f"
},
{
"math_id": 16,
"text": " f(0)g(0)=1 \\mbox{ and ends at }\n f(1)g(1)= i. "
}
] | https://en.wikipedia.org/wiki?curid=12931279 |
12931723 | William A. Bardeen | American theoretical physicist
William Allan Bardeen (born September 15, 1941, in Washington, Pennsylvania) is an American theoretical physicist who worked at the Fermi National Accelerator Laboratory.
He is renowned for his foundational work on the chiral anomaly (the Adler-Bardeen theorem), the Yang-Mills and gravitational anomalies, the development of quantum chromodynamics and the formula_0 scheme frequently used in perturbative analysis of experimentally observable processes such as deep inelastic scattering, high energy collisions and flavor changing processes.
He also played a major role in developing a theory of dynamical breaking of electroweak symmetry via top quark condensates, leading to the first composite Higgs models. His work on the chiral symmetry breaking dynamics of heavy-light quark bound states correctly predicted abnormally long-lived resonances which are chiral symmetry partners of the ground states, such as the formula_1. He also developed an analytic, non-perturbative approach for the calculation of non-leptonic decays of Kaons, known as Dual QCD.
Bardeen is considered one of the world's leading authorities on quantum field theory in its application to real-world physical phenomena.
Biography and Family.
After graduating from Cornell University in 1962, Bardeen earned his Ph.D. degree in physics from the University of Minnesota in 1968. Following research appointments at Stony Brook University and the Institute for Advanced Study in Princeton, he was an Associate Professor in the physics department at Stanford University. In 1975, Bardeen joined the staff of the Fermi National Accelerator Laboratory where he has served as Head of the Theoretical Physics Department from 1987-1993 and 1994-1996.
From 1993-1994, he was Head of Theoretical Physics at the SSC Laboratory before the project was terminated by act of Congress.
Bardeen lives in Warrenville, Illinois, with his wife Marge, who was manager of the Education Department at Fermilab. He has two grown children: Charles, a retired Project Scientist at the National Center for Atmospheric Research, Boulder, CO, and Karen, who taught chemistry at Oak Park and River Forest High School, Oak Park, Illinois.
He is the son of physicist John Bardeen and Jane Maxwell Bardeen.
Scientific Contributions.
Bardeen is co-inventor of the theory of the chiral anomaly, which is of foundational importance in modern theoretical physics. He developed with Stephen L. Adler the "non-renormalization theorem" (known as the Adler–Bardeen theorem). This proves that the anomaly coefficient is not subject to renormalization to all orders in perturbation theory and anticipates the fact that it is associated with topological index theorems.
Bardeen, in a tour de force, computed the full, nontrivial structure of the chiral anomaly
in non-abelian Yang-Mills gauge theories. This leads to the distinction between the "consistent anomaly" and the "covariant anomaly."
The consistent anomaly is a definition of the loop integrals that satisfies
"Wess-Zumino consistency
conditions" and is symmetric between left- and right-handed chiral fermions in the Feynman loop.
The consistent anomaly is generally related to topology; e.g., the structure and coefficient obtained
by Bardeen's calculation turn out
to be equivalent to those generated by a topological Chern-Simons term built of Yang-Mills fields in 5 dimensions.
In this work he introduced the "Bardeen counter-term" which maintains the conservation of
the vector current in the definition of the loop integral,
and places the anomaly in the axial current. This is known as the "covariant anomaly," and is
relevant to physics where the vector current is conserved (and reverts to the Adler result in QED with coefficient increased by a factor of 3 over that of the consistent anomaly).
These distinctions are crucial to the Wess-Zumino-Witten
term, which is an essential part of chiral Lagrangians, describing the anomalous physics of pseudoscalars and vector mesons,
and topological effects in Yang-Mills gauge theories, such as
the instanton physics in QCD.
With Bruno Zumino, Bardeen formulated the theory of the gravitational anomaly
which is of fundamental importance to string theory.
On sabbatical, working at CERN in 1971, Bardeen collaborated with Murray Gell-Mann, Harald Fritzsch, and Heinrich Leutwyler. They considered a gauge theory
of quark interactions, in which each quark comes in one of three varieties called
"colors." They wrote down the theory now known as quantum chromodynamics
(QCD) and, through the chiral anomaly, established the existence of the three quark colors from the rate of
formula_2 decay.
With his colleagues Andrzej Buras, Dennis Duke and Taizo Muta, Bardeen helped to formulate perturbation theory for quantum chromodynamics, introducing the systematic formula_0 scheme for loop-level perturbation theory, frequently used in the analysis of QCD processes in high energy experiments. With Buras and Jean-Marc Gerard he developed an analytical, non-perturbative framework for the calculation
of non-leptonic decays of K-mesons and formula_3 mixing. This approach,
based on QCD with largeformula_4, led to several results,
confirmed by numerical lattice calculations 30 years later.
One of these results is the identification of the dominant QCD dynamics
responsible for the so-called formula_5 rule in formula_6 decays,
a long-standing puzzle since 1955. With coauthors Sherwin Love and Chung Ngoc Leung, Bardeen also explored theoretical mechanisms for the origin of scale breaking in quantum field theory in conjunction with chiral symmetry breaking and the role of the dilaton.
In the early 1990s it became clear that the top quark was much heavier than originally anticipated, and that
top quarks may be strongly coupled at very short distances. This raised the possibility
that pairs of top quarks could form a composite Higgs boson, which led to top quark condensates, and novel dynamical approaches to electroweak symmetry breaking. The theory, developed with Christopher T. Hill and Manfred Lindner, predicted a heavy top quark, governed by the infrared fixed point (about 20% heavier than the observed top quark mass of 175 GeV), but
it tended to predict too heavy a Higgs boson, almost twice the observed mass of 125 GeV.
Nonetheless, this was the first composite Higgs boson model and the general idea remains an intriguing possibility.
Bardeen and Hill, in 1994, recognized that heavy-light mesons, which contain a heavy quark and a light anti-quark, provide a unique window on the chiral dynamics of a single light quark. They
showed that the (spin) formula_7 ground states are split from the formula_8 parity partners by a universal mass gap of about formula_9 due to the light quark chiral symmetry breaking. This correctly predicted an abnormally long-lived resonance ten years before it was discovered by the BABAR collaboration:
the formula_1. The theory was further developed by Bardeen, Hill and Estia Eichten,
and various decay modes were predicted that have
been confirmed by experiment.
Similar phenomena should be seen in the formula_10 mesons and
formula_11 (heavy-heavy-light baryons).
Honors.
William Bardeen was elected a Fellow of the American Physical Society in 1984. In 1985, Bardeen was awarded a John S. Guggenheim Memorial Foundation Fellowship for research on the application of quantum field theory to elementary particle physics. Previously, he had received the Senior Scientist Award of the Alexander von Humboldt Foundation and an Alfred P. Sloan Foundation Fellowship for research in theoretical physics. Bardeen was awarded the 1996 J.J. Sakurai Prize of the American Physical Society for his work on anomalies and perturbative quantum chromodynamics. He was elected a Fellow of the American Academy of Arts and Sciences in 1998 and a member of the National Academy of Sciences in 1999.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\Lambda_{\\overline{MS}}"
},
{
"math_id": 1,
"text": " D_s^*(2317)"
},
{
"math_id": 2,
"text": " \\pi^0 "
},
{
"math_id": 3,
"text": " \\overline{K}K "
},
{
"math_id": 4,
"text": "-N_{color}"
},
{
"math_id": 5,
"text": "\\Delta I = 1/2"
},
{
"math_id": 6,
"text": "K\\rightarrow \\pi\\pi"
},
{
"math_id": 7,
"text": "(0^-,1^-)"
},
{
"math_id": 8,
"text": "(0^+,1^+)"
},
{
"math_id": 9,
"text": "~ \\Delta M \\approx 350 \\text{ MeV,}~"
},
{
"math_id": 10,
"text": "B_s"
},
{
"math_id": 11,
"text": "ccq, bcq, bbq"
}
] | https://en.wikipedia.org/wiki?curid=12931723 |
1293340 | Classical field theory | Physical theory describing classical fields
A classical field theory is a physical theory that predicts how one or more fields in physics interact with matter through field equations, without considering effects of quantization; theories that incorporate quantum mechanics are called quantum field theories. In most contexts, 'classical field theory' is specifically intended to describe electromagnetism and gravitation, two of the fundamental forces of nature.
A physical field can be thought of as the assignment of a physical quantity at each point of space and time. For example, in a weather forecast, the wind velocity during a day over a country is described by assigning a vector to each point in space. Each vector represents the direction of the movement of air at that point, so the set of all wind vectors in an area at a given point in time constitutes a vector field. As the day progresses, the directions in which the vectors point change as the directions of the wind change.
The first field theories, Newtonian gravitation and Maxwell's equations of electromagnetic fields, were developed in classical physics before the advent of relativity theory in 1905, and had to be revised to be consistent with that theory. Consequently, classical field theories are usually categorized as "non-relativistic" and "relativistic". Modern field theories are usually expressed using the mathematics of tensor calculus. A more recent alternative mathematical formalism describes classical fields as sections of mathematical objects called fiber bundles.
Non-relativistic field theories.
Some of the simplest physical fields are vector force fields. Historically, the first time that fields were taken seriously was with Faraday's lines of force when describing the electric field. The gravitational field was then similarly described.
Newtonian gravitation.
The first field theory of gravity was Newton's theory of gravitation in which the mutual interaction between two masses obeys an inverse square law. This was very useful for predicting the motion of planets around the Sun.
Any massive body "M" has a gravitational field g which describes its influence on other massive bodies. The gravitational field of "M" at a point r in space is found by determining the force F that "M" exerts on a small test mass "m" located at r, and then dividing by "m":
formula_0
Stipulating that "m" is much smaller than "M" ensures that the presence of "m" has a negligible influence on the behavior of "M".
According to Newton's law of universal gravitation, F(r) is given by
formula_1
where formula_2 is a unit vector pointing along the line from "M" to "m", and "G" is Newton's gravitational constant. Therefore, the gravitational field of "M" is
formula_3
The experimental observation that inertial mass and gravitational mass are equal to unprecedented levels of accuracy leads to the identification of the gravitational field strength as identical to the acceleration experienced by a particle. This is the starting point of the equivalence principle, which leads to general relativity.
For a discrete collection of masses, "Mi", located at points, r"i", the gravitational field at a point r due to the masses is
formula_4
If we have a continuous mass distribution "ρ" instead, the sum is replaced by an integral,
formula_5
Note that the direction of the field points from the position r to the position of the masses r"i"; this is ensured by the minus sign. In a nutshell, this means all masses attract.
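The superposition formula for a discrete collection of masses translates directly into code. A minimal sketch follows; the masses and positions used are arbitrary illustrative values, not taken from the article.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_field(r, masses, positions):
    """Field g(r) of a discrete set of point masses:
    the sum of -G * M_i * (r - r_i) / |r - r_i|^3 over the masses."""
    r = np.asarray(r, dtype=float)
    g = np.zeros(3)
    for M, r_i in zip(masses, positions):
        d = r - np.asarray(r_i, dtype=float)
        g += -G * M * d / np.linalg.norm(d)**3
    return g

# Two equal masses on the x-axis: the field vanishes at the midpoint by symmetry.
masses = [5.0e24, 5.0e24]
positions = [[-1.0e7, 0.0, 0.0], [1.0e7, 0.0, 0.0]]
print(gravitational_field([0.0, 0.0, 0.0], masses, positions))    # ~ [0, 0, 0]
print(gravitational_field([0.0, 2.0e7, 0.0], masses, positions))  # points toward the masses
```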
In the integral form Gauss's law for gravity is
formula_6
while in differential form it is
formula_7
Therefore, the gravitational field g can be written in terms of the gradient of a gravitational potential "φ"(r):
formula_8
This is a consequence of the gravitational force F being conservative.
Electromagnetism.
Electrostatics.
A charged test particle with charge "q" experiences a force F based solely on its charge. We can similarly describe the electric field E generated by the source charge "Q" so that F = "q"E:
formula_9
Using this and Coulomb's law the electric field due to a single charged particle is
formula_10
The electric field is conservative, and hence is given by the gradient of a scalar potential, "V"(r)
formula_11
Gauss's law for electricity is in integral form
formula_12
while in differential form
formula_13
Magnetostatics.
A steady current "I" flowing along a path "ℓ" will exert a force on nearby charged particles that is quantitatively different from the electric field force described above. The force exerted by "I" on a nearby charge "q" with velocity v is
formula_14
where B(r) is the magnetic field, which is determined from "I" by the Biot–Savart law:
formula_15
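The Biot–Savart integral can be approximated numerically by summing contributions from short straight segments of the current path. The sketch below checks the field at the centre of a circular loop against the known value μ0I/(2R); the loop radius, current and discretisation are illustrative choices.

```python
import numpy as np

mu0 = 4e-7 * np.pi  # vacuum permeability, T m / A

def biot_savart(r, I, path_points):
    """Magnetic field at r from a closed polygonal current path,
    approximating the Biot-Savart integral segment by segment."""
    r = np.asarray(r, dtype=float)
    B = np.zeros(3)
    for p0, p1 in zip(path_points, np.roll(path_points, -1, axis=0)):
        dl = p1 - p0                   # directed segment carrying the current
        sep = r - 0.5 * (p0 + p1)      # vector from the segment to the field point
        B += mu0 * I / (4 * np.pi) * np.cross(dl, sep) / np.linalg.norm(sep)**3
    return B

R, I, n = 0.1, 2.0, 2000
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
loop = np.column_stack([R * np.cos(theta), R * np.sin(theta), np.zeros(n)])

print(biot_savart([0.0, 0.0, 0.0], I, loop))  # ~ [0, 0, mu0*I/(2R)]
print(mu0 * I / (2 * R))
```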
The magnetic field is not conservative in general, and hence cannot usually be written in terms of a scalar potential. However, it can be written in terms of a vector potential, A(r):
formula_16
Gauss's law for magnetism in integral form is
formula_17
while in differential form it is
formula_18
The physical interpretation is that there are no magnetic monopoles.
Electrodynamics.
In general, in the presence of both a charge density "ρ"(r, "t") and current density J(r, "t"), there will be both an electric and a magnetic field, and both will vary in time. They are determined by Maxwell's equations, a set of differential equations which directly relate E and B to the electric charge density (charge per unit volume) "ρ" and current density (electric current per unit area) J.
Alternatively, one can describe the system in terms of its scalar and vector potentials "V" and A. A set of integral equations known as "retarded potentials" allow one to calculate "V" and A from ρ and J, and from there the electric and magnetic fields are determined via the relations
formula_19
formula_20
Continuum mechanics.
Fluid dynamics.
Fluid dynamics has fields of pressure, density, and flow rate that are connected by conservation laws for energy and momentum. The mass continuity equation is a continuity equation, representing the conservation of mass
formula_21
and the Navier–Stokes equations represent the conservation of momentum in the fluid, found from Newton's laws applied to the fluid,
formula_22
if the density ρ, pressure p, deviatoric stress tensor τ of the fluid, as well as external body forces b, are all given. The velocity field u is the vector field to solve for.
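As an illustration of the mass continuity equation, the following one-dimensional finite-difference sketch (constant velocity, periodic domain and a first-order upwind scheme are choices made here, not part of the article) advects a density bump while conserving the discrete total mass.

```python
import numpy as np

nx, L = 200, 1.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
u = 1.0                       # constant, positive velocity
dt = 0.4 * dx / u             # time step respecting the CFL condition

rho = 1.0 + 0.5 * np.exp(-((x - 0.3) / 0.05)**2)   # initial density bump
mass0 = rho.sum() * dx

for _ in range(200):
    flux = rho * u                                   # mass flux rho*u
    rho = rho - dt / dx * (flux - np.roll(flux, 1))  # upwind difference, periodic

print(mass0, rho.sum() * dx)   # total mass is conserved up to round-off
```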
Other examples.
In 1839, James MacCullagh presented field equations to describe reflection and refraction in "An essay toward a dynamical theory of crystalline reflection and refraction".
Potential theory.
The term "potential theory" arises from the fact that, in 19th century physics, the fundamental forces of nature were believed to be derived from scalar potentials which satisfied Laplace's equation. Poisson addressed the question of the stability of the planetary orbits, which had already been settled by Lagrange to the first degree of approximation from the perturbation forces, and derived the Poisson's equation, named after him. The general form of this equation is
formula_23
where "σ" is a source function (as a density, a quantity per unit volume) and ø the scalar potential to solve for.
In Newtonian gravitation; masses are the sources of the field so that field lines terminate at objects that have mass. Similarly, charges are the sources and sinks of electrostatic fields: positive charges emanate electric field lines, and field lines terminate at negative charges. These field concepts are also illustrated in the general divergence theorem, specifically Gauss's law's for gravity and electricity. For the cases of time-independent gravity and electromagnetism, the fields are gradients of corresponding potentials
formula_24
so substituting these into Gauss' law for each case obtains
formula_25
where "ρg" is the mass density, "ρe" the charge density, "G" the gravitational constant and "ke = 1/4πε0" the electric force constant.
Incidentally, this similarity arises from the similarity between Newton's law of gravitation and Coulomb's law.
In the case where there is no source term (e.g. vacuum, or paired charges), these potentials obey Laplace's equation:
formula_26
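In a source-free region the potential can also be computed numerically; a minimal Jacobi-iteration sketch for Laplace's equation on a square grid (grid size and boundary values are illustrative choices):

```python
import numpy as np

n = 50
phi = np.zeros((n, n))
phi[0, :] = 1.0          # potential held at 1 on one edge, 0 on the others

for _ in range(5000):
    # Jacobi update: each interior point becomes the average of its four neighbours.
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])

# The discrete Laplacian of the converged solution is small in the interior.
lap = (phi[:-2, 1:-1] + phi[2:, 1:-1] + phi[1:-1, :-2] + phi[1:-1, 2:]
       - 4.0 * phi[1:-1, 1:-1])
print(np.abs(lap).max())
```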
For a distribution of mass (or charge), the potential can be expanded in a series of spherical harmonics, and the "n"th term in the series can be viewed as a potential arising from the 2"n"-moments (see multipole expansion). For many purposes only the monopole, dipole, and quadrupole terms are needed in calculations.
Relativistic field theory.
Modern formulations of classical field theories generally require Lorentz covariance as this is now recognised as a fundamental aspect of nature. A field theory tends to be expressed mathematically by using Lagrangians. This is a function that, when subjected to an action principle, gives rise to the field equations and a conservation law for the theory. The action is a Lorentz scalar, from which the field equations and symmetries can be readily derived.
Throughout we use units such that the speed of light in vacuum is 1, i.e. "c" = 1.
Lagrangian dynamics.
Given a field tensor formula_27, a scalar called the Lagrangian density formula_28 can be constructed from formula_27 and its derivatives.
From this density, the action functional can be constructed by integrating over spacetime,
formula_29
where formula_30 is the volume form in curved spacetime formula_31.
Therefore, the Lagrangian itself is equal to the integral of the Lagrangian density over all space.
Then by enforcing the action principle, the Euler–Lagrange equations are obtained
formula_32
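As a concrete illustration of the variational machinery, the sketch below applies SymPy's euler_equations to a free scalar-field Lagrangian density in 1+1 dimensions; this particular Lagrangian is chosen only for illustration and is not one discussed in the article.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, x = sp.symbols('t x')
m = sp.symbols('m', positive=True)
phi = sp.Function('phi')(t, x)

# Illustrative Lagrangian density: L = (1/2)(phi_t^2 - phi_x^2) - (1/2) m^2 phi^2
L = (sp.diff(phi, t)**2 - sp.diff(phi, x)**2) / 2 - m**2 * phi**2 / 2

# euler_equations forms dL/dphi - d/dt(dL/dphi_t) - d/dx(dL/dphi_x) = 0.
eqs = euler_equations(L, phi, [t, x])
print(eqs)   # the Klein-Gordon-type equation  -m**2*phi - phi_tt + phi_xx = 0
```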
Relativistic fields.
Two of the most well-known Lorentz-covariant classical field theories are now described.
Electromagnetism.
Historically, the first (classical) field theories were those describing the electric and magnetic fields (separately). After numerous experiments, it was found that these two fields were related, or, in fact, two aspects of the same field: the electromagnetic field. Maxwell's theory of electromagnetism describes the interaction of charged matter with the electromagnetic field. The first formulation of this field theory used vector fields to describe the electric and magnetic fields. With the advent of special relativity, a more complete formulation using tensor fields was found. Instead of using two vector fields describing the electric and magnetic fields, a tensor field representing these two fields together is used.
The electromagnetic four-potential is defined to be "Aa" = (−"φ", A), and the electromagnetic four-current "ja" = (−"ρ", j). The electromagnetic field at any point in spacetime is described by the antisymmetric (0,2)-rank electromagnetic field tensor
formula_33
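A small symbolic sketch of this definition: starting from a hypothetical plane-wave 4-potential (chosen here purely for illustration), the antisymmetric matrix of partial derivatives reproduces the expected electric and magnetic field components.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)

# Hypothetical covariant 4-potential A_a = (-phi, A) with phi = 0 and
# a plane-wave vector potential A = (cos(t - z), 0, 0).
A = [sp.Integer(0), sp.cos(t - z), sp.Integer(0), sp.Integer(0)]

F = sp.Matrix(4, 4, lambda a, b: sp.diff(A[b], coords[a]) - sp.diff(A[a], coords[b]))

print(sp.simplify(F + F.T))   # zero matrix: F_ab is antisymmetric by construction
# With A_a = (-phi, A), the electric field appears as E_i = F_{i0} and the
# magnetic field inside F_{ij}; here E_x = F[1, 0] and B_y = F[3, 1].
print(F[1, 0], F[3, 1])       # both sin(t - z): a plane electromagnetic wave
```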
The Lagrangian.
To obtain the dynamics for this field, we try to construct a scalar from the field. In the vacuum, we have
formula_34
We can use gauge field theory to get the interaction term, and this gives us
formula_35
The equations.
To obtain the field equations, the electromagnetic tensor in the Lagrangian density needs to be replaced by its definition in terms of the 4-potential "A", and it's this potential which enters the Euler-Lagrange equations. The EM field "F" is not varied in the EL equations. Therefore,
formula_36
Evaluating the derivative of the Lagrangian density with respect to the field components
formula_37
and the derivatives of the field components
formula_38
obtains Maxwell's equations in vacuum. The source equations (Gauss' law for electricity and the Maxwell-Ampère law) are
formula_39
while the other two (Gauss' law for magnetism and Faraday's law) are obtained from the fact that "F" is the 4-curl of "A", or, in other words, from the fact that the Bianchi identity holds for the electromagnetic field tensor.
formula_40
where the comma indicates a partial derivative.
Gravitation.
After Newtonian gravitation was found to be inconsistent with special relativity, Albert Einstein formulated a new theory of gravitation called general relativity, which supersedes the Newtonian theory. This treats gravitation as a geometric phenomenon ('curved spacetime') caused by masses and represents the gravitational field mathematically by a tensor field called the metric tensor. The Einstein field equations,
formula_41
describe how this curvature is produced by matter and radiation, where "Gab" is the Einstein tensor,
formula_42
written in terms of the Ricci tensor "Rab" and Ricci scalar "R" = "Rabgab", "Tab" is the stress–energy tensor and "κ" = 8"πG"/"c"4 is a constant. In the absence of matter and radiation (including sources) the "vacuum field equations",
formula_43
can be derived by varying the Einstein–Hilbert action,
formula_44
with respect to the metric, where "g" is the determinant of the metric tensor "gab". Solutions of the vacuum field equations are called vacuum solutions. An alternative interpretation, due to Arthur Eddington, is that formula_45 is fundamental, formula_46 is merely one aspect of formula_45, and formula_47 is forced by the choice of units.
Further examples.
Further examples of Lorentz-covariant classical field theories are
Unification attempts.
Attempts to create a unified field theory based on classical physics are classical unified field theories. During the years between the two World Wars, the idea of unification of gravity with electromagnetism was actively pursued by several mathematicians and physicists like Albert Einstein, Theodor Kaluza, Hermann Weyl, Arthur Eddington, Gustav Mie and Ernst Reichenbacher.
Early attempts to create such a theory were based on incorporation of electromagnetic fields into the geometry of general relativity. The first geometrization of the electromagnetic field was proposed in 1918 by Hermann Weyl.
In 1919, the idea of a five-dimensional approach was suggested by Theodor Kaluza. From that, a theory called Kaluza-Klein Theory was developed. It attempts to unify gravitation and electromagnetism, in a five-dimensional space-time.
There are several ways of extending the representational framework for a unified field theory which have been considered by Einstein and other researchers. These extensions in general are based on two options. The first option is based on relaxing the conditions imposed on the original formulation, and the second is based on introducing other mathematical objects into the theory. An example of the first option is relaxing the restrictions to four-dimensional space-time by considering higher-dimensional representations. That is used in Kaluza-Klein Theory. For the second, the most prominent example arises from the concept of the affine connection that was introduced into the theory of general relativity mainly through the work of Tullio Levi-Civita and Hermann Weyl.
Further development of quantum field theory changed the focus of searching for unified field theory from classical to quantum description. Because of that, many theoretical physicists gave up looking for a classical unified field theory. Quantum field theory would include unification of two other fundamental forces of nature, the strong and weak nuclear force which act on the subatomic level.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathbf{g}(\\mathbf{r}) = \\frac{\\mathbf{F}(\\mathbf{r})}{m}."
},
{
"math_id": 1,
"text": "\\mathbf{F}(\\mathbf{r}) = -\\frac{G M m}{r^2}\\hat{\\mathbf{r}},"
},
{
"math_id": 2,
"text": "\\hat{\\mathbf{r}}"
},
{
"math_id": 3,
"text": "\\mathbf{g}(\\mathbf{r}) = \\frac{\\mathbf{F}(\\mathbf{r})}{m} = -\\frac{G M}{r^2}\\hat{\\mathbf{r}}."
},
{
"math_id": 4,
"text": "\\mathbf{g}(\\mathbf{r})=-G\\sum_i \\frac{M_i(\\mathbf{r}-\\mathbf{r_i})}{|\\mathbf{r}-\\mathbf{r}_i|^3} \\,, "
},
{
"math_id": 5,
"text": "\\mathbf{g}(\\mathbf{r})=-G \\iiint_V \\frac{\\rho(\\mathbf{x})d^3\\mathbf{x}(\\mathbf{r}-\\mathbf{x})}{|\\mathbf{r}-\\mathbf{x}|^3} \\, , "
},
{
"math_id": 6,
"text": "\\iint\\mathbf{g}\\cdot d \\mathbf{S} = -4\\pi G M"
},
{
"math_id": 7,
"text": "\\nabla \\cdot\\mathbf{g} = -4\\pi G\\rho_m "
},
{
"math_id": 8,
"text": "\\mathbf{g}(\\mathbf{r}) = -\\nabla \\phi(\\mathbf{r})."
},
{
"math_id": 9,
"text": " \\mathbf{E}(\\mathbf{r}) = \\frac{\\mathbf{F}(\\mathbf{r})}{q}."
},
{
"math_id": 10,
"text": "\\mathbf{E} = \\frac{1}{4\\pi\\varepsilon_0} \\frac{Q}{r^2} \\hat{\\mathbf{r}} \\,. "
},
{
"math_id": 11,
"text": " \\mathbf{E}(\\mathbf{r}) = -\\nabla V(\\mathbf{r}) \\, . "
},
{
"math_id": 12,
"text": "\\iint\\mathbf{E}\\cdot d\\mathbf{S} = \\frac{Q}{\\varepsilon_0}"
},
{
"math_id": 13,
"text": "\\nabla \\cdot\\mathbf{E} = \\frac{\\rho_e}{\\varepsilon_0} \\,. "
},
{
"math_id": 14,
"text": "\\mathbf{F}(\\mathbf{r}) = q\\mathbf{v} \\times \\mathbf{B}(\\mathbf{r}),"
},
{
"math_id": 15,
"text": "\\mathbf{B}(\\mathbf{r}) = \\frac{\\mu_0 I}{4\\pi} \\int \\frac{d\\boldsymbol{\\ell} \\times d\\hat{\\mathbf{r}}}{r^2}."
},
{
"math_id": 16,
"text": " \\mathbf{B}(\\mathbf{r}) = \\nabla \\times \\mathbf{A}(\\mathbf{r}) "
},
{
"math_id": 17,
"text": "\\iint\\mathbf{B}\\cdot d\\mathbf{S} = 0, "
},
{
"math_id": 18,
"text": "\\nabla \\cdot\\mathbf{B} = 0. "
},
{
"math_id": 19,
"text": " \\mathbf{E} = -\\nabla V - \\frac{\\partial \\mathbf{A}}{\\partial t}"
},
{
"math_id": 20,
"text": " \\mathbf{B} = \\nabla \\times \\mathbf{A}."
},
{
"math_id": 21,
"text": "\\frac{\\partial \\rho}{\\partial t} + \\nabla \\cdot (\\rho \\mathbf u) = 0 "
},
{
"math_id": 22,
"text": "\\frac {\\partial}{\\partial t} (\\rho \\mathbf u) + \\nabla \\cdot (\\rho \\mathbf u \\otimes \\mathbf u + p \\mathbf I) = \\nabla \\cdot \\boldsymbol \\tau + \\rho \\mathbf b "
},
{
"math_id": 23,
"text": "\\nabla^2 \\phi = \\sigma "
},
{
"math_id": 24,
"text": "\\mathbf{g} = - \\nabla \\phi_g \\,,\\quad \\mathbf{E} = - \\nabla \\phi_e "
},
{
"math_id": 25,
"text": "\\nabla^2 \\phi_g = 4\\pi G \\rho_g \\,, \\quad \\nabla^2 \\phi_e = 4\\pi k_e \\rho_e = - {\\rho_e \\over \\varepsilon_0}"
},
{
"math_id": 26,
"text": "\\nabla^2 \\phi = 0."
},
{
"math_id": 27,
"text": "\\phi"
},
{
"math_id": 28,
"text": "\\mathcal{L}(\\phi,\\partial\\phi,\\partial\\partial\\phi, \\ldots ,x)"
},
{
"math_id": 29,
"text": "\\mathcal{S} = \\int{\\mathcal{L}\\sqrt{-g}\\, \\mathrm{d}^4x}."
},
{
"math_id": 30,
"text": "\\sqrt{-g} \\, \\mathrm{d}^4x"
},
{
"math_id": 31,
"text": "(g\\equiv \\det(g_{\\mu\\nu}))"
},
{
"math_id": 32,
"text": "\\frac{\\delta \\mathcal{S}}{\\delta\\phi} = \\frac{\\partial\\mathcal{L}}{\\partial\\phi} -\\partial_\\mu \\left(\\frac{\\partial\\mathcal{L}}{\\partial(\\partial_\\mu\\phi)}\\right)+ \\cdots +(-1)^m\\partial_{\\mu_1} \\partial_{\\mu_2} \\cdots \\partial_{\\mu_{m-1}} \\partial_{\\mu_m} \\left(\\frac{\\partial\\mathcal{L}}{\\partial(\\partial_{\\mu_1} \\partial_{\\mu_2}\\cdots\\partial_{\\mu_{m-1}}\\partial_{\\mu_m} \\phi)}\\right) = 0."
},
{
"math_id": 33,
"text": "F_{ab} = \\partial_a A_b - \\partial_b A_a."
},
{
"math_id": 34,
"text": "\\mathcal{L} = -\\frac{1}{4\\mu_0}F^{ab}F_{ab}\\,."
},
{
"math_id": 35,
"text": "\\mathcal{L} = -\\frac{1}{4\\mu_0}F^{ab}F_{ab} - j^aA_a\\,."
},
{
"math_id": 36,
"text": "\\partial_b\\left(\\frac{\\partial\\mathcal{L}}{\\partial\\left(\\partial_b A_a\\right)}\\right)=\\frac{\\partial\\mathcal{L}}{\\partial A_a} \\,."
},
{
"math_id": 37,
"text": "\\frac{\\partial\\mathcal{L}}{\\partial A_a} = \\mu_0 j^a \\,, "
},
{
"math_id": 38,
"text": "\\frac{\\partial\\mathcal{L}}{\\partial(\\partial_b A_a)} = F^{ab} \\,, "
},
{
"math_id": 39,
"text": "\\partial_b F^{ab}=\\mu_0 j^a \\, . "
},
{
"math_id": 40,
"text": "6F_{[ab,c]} \\, = F_{ab,c} + F_{ca,b} + F_{bc,a} = 0. "
},
{
"math_id": 41,
"text": "G_{ab} = \\kappa T_{ab} "
},
{
"math_id": 42,
"text": "G_{ab} \\, = R_{ab}-\\frac{1}{2} R g_{ab}"
},
{
"math_id": 43,
"text": "G_{ab} = 0 "
},
{
"math_id": 44,
"text": " S = \\int R \\sqrt{-g} \\, d^4x "
},
{
"math_id": 45,
"text": "R"
},
{
"math_id": 46,
"text": "T"
},
{
"math_id": 47,
"text": "\\kappa"
}
] | https://en.wikipedia.org/wiki?curid=1293340 |
12933953 | Stein's method | Stein's method is a general method in probability theory to obtain bounds on the distance between two probability distributions with respect to a probability metric. It was introduced by Charles Stein, who first published it in 1972, to obtain a bound between the distribution of a sum of formula_0-dependent sequence of random variables and a standard normal distribution in the Kolmogorov (uniform) metric and hence to prove not only a central limit theorem, but also bounds on the rates of convergence for the given metric.
History.
At the end of the 1960s, unsatisfied with the by-then known proofs of a specific central limit theorem, Charles Stein developed a new way of proving the theorem for his statistics lecture. His seminal paper was presented in 1970 at the sixth Berkeley Symposium and published in the corresponding proceedings.
Later, his Ph.D. student Louis Chen Hsiao Yun modified the method so as to obtain approximation results for the Poisson distribution; therefore the Stein method applied to the problem of Poisson approximation is often referred to as the Stein–Chen method.
Probably the most important contributions are the monograph by Stein (1986), where he presents his view of the method and the concept of "auxiliary randomisation", in particular using "exchangeable pairs", and the articles by Barbour (1988) and Götze (1991), who introduced the so-called "generator interpretation", which made it possible to easily adapt the method to many other probability distributions. An important contribution was also an article by Bolthausen (1984) on the so-called "combinatorial central limit theorem".
In the 1990s the method was adapted to a variety of distributions, such as Gaussian processes by Barbour (1990), the binomial distribution by Ehm (1991), Poisson processes by Barbour and Brown (1992), the Gamma distribution by Luk (1994), and many others.
The method gained further popularity in the machine learning community in the mid 2010s, following the development of computable Stein discrepancies and the diverse applications and algorithms based on them.
The basic approach.
Probability metrics.
Stein's method is a way to bound the distance between two probability distributions using a specific probability metric.
Let the metric be given in the form
formula_1
Here, formula_2 and formula_3 are probability measures on a measurable space formula_4, formula_5 and formula_6 are random variables with distribution formula_2 and formula_3 respectively, formula_7 is the usual expectation operator and formula_8 is a set of functions from formula_4 to the set of real numbers. The set formula_8 has to be large enough so that the above definition indeed yields a metric.
Important examples are the total variation metric, where we let formula_8 consist of all the indicator functions of measurable sets, the Kolmogorov (uniform) metric for probability measures on the real numbers, where we consider all the half-line indicator functions, and the Lipschitz (first order Wasserstein; Kantorovich) metric, where the underlying space is itself a metric space and we take the set formula_8 to be all Lipschitz-continuous functions with Lipschitz-constant 1. However, note that not every metric can be represented in the form (1.1).
In what follows formula_2 is a complicated distribution (e.g., the distribution of a sum of dependent random variables), which we want to approximate by a much simpler and tractable distribution formula_3 (e.g., the standard normal distribution).
The Stein operator.
We assume now that the distribution formula_3 is a fixed distribution; in what follows we shall in particular consider the case where formula_3 is the standard normal distribution, which serves as a classical example.
First of all, we need an operator formula_9, which acts on functions formula_10 from formula_4 to the set of real numbers and 'characterizes' distribution formula_3 in the sense that the following equivalence holds:
formula_11
We call such an operator the "Stein operator".
For the standard normal distribution, Stein's lemma yields such an operator:
formula_12
Thus, we can take
formula_13
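The characterization (2.2) is easy to probe by simulation; in the sketch below the test function and the non-normal comparison distribution are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# For a standard normal Y, E[f'(Y) - Y f(Y)] vanishes; try f(x) = sin(x).
Y = rng.standard_normal(10**6)
print(np.mean(np.cos(Y) - Y * np.sin(Y)))        # close to 0

# For a non-normal W with mean 0 and variance 1 (here a shifted exponential)
# the same expression does not vanish, which is what the Stein operator detects.
W = rng.exponential(size=10**6) - 1.0
print(np.mean(np.cos(W) - W * np.sin(W)))        # clearly away from 0 (about 0.27)
```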
There are in general infinitely many such operators, and it remains an open question which one to choose. However, it seems that for many distributions there is a particularly "good" one, like (2.3) for the normal distribution.
There are different ways to find Stein operators.
The Stein equation.
formula_2 is close to formula_3 with respect to formula_14 if the difference of expectations in (1.1) is close to 0. We hope now that the operator formula_9 exhibits the same behavior: if formula_15 then formula_16, and hopefully if formula_17 we have formula_18.
It is usually possible to define a function formula_19 such that
formula_20
We call (3.1) the "Stein equation". Replacing formula_21 by formula_5 and taking expectation with respect to formula_5, we get
formula_22
Now all the effort is worthwhile only if the left-hand side of (3.2) is easier to bound than the right-hand side. This is, surprisingly, often the case.
If formula_3 is the standard normal distribution and we use (2.3), then the corresponding Stein equation is
formula_23
If probability distribution Q has an absolutely continuous (with respect to the Lebesgue measure) density q, then
formula_24
Solving the Stein equation.
"Analytic methods". Equation (3.3) can be easily solved explicitly:
formula_25
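The explicit solution (4.1) can be evaluated by numerical quadrature and checked against the Stein equation (3.3); the test function h(x) = |x| below is an illustrative choice.

```python
import numpy as np
from scipy.integrate import quad

Eh = np.sqrt(2.0 / np.pi)                 # E|Y| for a standard normal Y

def f(x):
    """Solution (4.1) of the Stein equation for h(s) = |s|."""
    integrand = lambda s: (abs(s) - Eh) * np.exp(-s**2 / 2)
    val, _ = quad(integrand, -np.inf, x)
    return np.exp(x**2 / 2) * val

for x in (-1.5, 0.3, 2.0):
    eps = 1e-5
    fprime = (f(x + eps) - f(x - eps)) / (2 * eps)    # central difference
    # f'(x) - x f(x) should equal h(x) - E h(Y) = |x| - Eh:
    print(fprime - x * f(x), abs(x) - Eh)             # the columns agree up to quadrature error
```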
"Generator method". If formula_9 is the generator of a Markov process formula_26 (see Barbour (1988), Götze (1991)), then the solution to (3.2) is
formula_27
where formula_28 denotes expectation with respect to the process formula_29 being started in formula_21. However, one still has to prove that the solution (4.2) exists for all desired functions formula_30.
Properties of the solution to the Stein equation.
Usually, one tries to give bounds on formula_10 and its derivatives (or differences) in terms of formula_31 and its derivatives (or differences), that is, inequalities of the form
formula_32
for some specific formula_33 (typically formula_34 or formula_35, respectively, depending on the form of the Stein operator), where often formula_36 is the supremum norm. Here, formula_37 denotes the differential operator, but in discrete settings it usually refers to a difference operator. The constants formula_38 may contain the parameters of the distribution formula_3. If there are any, they are often referred to as "Stein factors".
In the case of (4.1) one can prove for the supremum norm that
formula_39
where the last bound is of course only applicable if formula_31 is differentiable (or at least Lipschitz-continuous, which, for example, is not the case if we regard the total variation metric or the Kolmogorov metric!). As the standard normal distribution has no extra parameters, in this specific case the constants are free of additional parameters.
If we have bounds in the general form (5.1), we usually are able to treat many probability metrics together. One can often start with the next step below, if bounds of the form (5.1) are already available (which is the case for many distributions).
An abstract approximation theorem.
We are now in a position to bound the left hand side of (3.1). As this step heavily depends on the form of the Stein operator, we directly regard the case of the standard normal distribution.
At this point we could directly plug in random variable formula_5, which we want to approximate, and try to find upper bounds. However, it is often fruitful to formulate a more general theorem. Consider here the case of local dependence.
Assume that formula_40 is a sum of random variables such that formula_41 and formula_42. Assume that, for every formula_43, there is a set formula_44 such that formula_45 is independent of all the random variables formula_46 with formula_47. We call this set the 'neighborhood' of formula_45. Likewise, let formula_48 be a set such that all formula_46 with formula_49 are independent of all formula_50, formula_51. We can think of formula_52 as the neighbors in the neighborhood of formula_45, a second-order neighborhood, so to speak. For a set formula_53, define now the sum formula_54.
Using Taylor expansion, it is possible to prove that
formula_55
Note that, if we follow this line of argument, we can bound (1.1) only for functions where formula_56 is bounded because of the third inequality of (5.2) (and in fact, if formula_31 has discontinuities, so will formula_57). To obtain a bound similar to (6.1) which contains only the expressions formula_58 and formula_59, the argument is much more involved and the result is not as simple as (6.1); however, it can be done.
Theorem A. If formula_5 is as described above, we have for the Lipschitz metric formula_60 that
formula_61
Proof. Recall that the Lipschitz metric is of the form (1.1) where the functions formula_31 are Lipschitz-continuous with Lipschitz-constant 1, thus formula_62. Combining this with (6.1) and the last bound in (5.2) proves the theorem.
Thus, roughly speaking, we have proved that, to calculate the Lipschitz-distance between a formula_5 with local dependence structure and a standard normal distribution, we only need to know the third moments of formula_45 and the size of the neighborhoods formula_63 and formula_52.
Application of the theorem.
We can treat the case of sums of independent and identically distributed random variables with Theorem A.
Assume that formula_64, formula_65 and formula_66. We can take formula_67. From Theorem A we obtain that
formula_68
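A Monte Carlo illustration of the bound (7.1); the standardized exponential summands are an arbitrary choice, and the Lipschitz (Wasserstein) distance is estimated from samples.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
n, reps = 30, 10**5

X = rng.exponential(size=(reps, n)) - 1.0   # mean 0, variance 1 summands
W = X.sum(axis=1) / np.sqrt(n)

Z = rng.standard_normal(reps)               # reference standard normal sample
estimate = wasserstein_distance(W, Z)       # empirical estimate of d_W

third_moment = np.mean(np.abs(X)**3)        # Monte Carlo estimate of E|X_1|^3
bound = 5 * third_moment / np.sqrt(n)       # right-hand side of (7.1)
print(estimate, bound)                      # the estimate sits well below the bound
```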
For sums of random variables, another approach related to Stein's method is known as the zero bias transform.
Notes.
<templatestyles src="Reflist/styles.css" />
Literature.
The following text is advanced, and gives a comprehensive overview of the normal case
Another advanced book, but having some introductory character, is
A standard reference is the book by Stein,
which contains a lot of interesting material, but may be a little hard to understand at first reading.
Despite its age, there are few standard introductory books about Stein's method available. The following recent textbook has a chapter (Chapter 2) devoted to introducing Stein's method:
Although the book
is by large parts about Poisson approximation, it contains nevertheless a lot of information about the generator approach, in particular in the context of Poisson process approximation.
The following textbook has a chapter (Chapter 10) devoted to introducing Stein's method of Poisson approximation: | [
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "\n (1.1)\\quad \n d(P,Q) \n = \\sup_{h\\in\\mathcal{H}}\\left|\\int h \\, dP - \\int h \\, dQ \\right|\n = \\sup_{h\\in\\mathcal{H}}\\left|E h(W) - E h(Y) \\right|\n\n"
},
{
"math_id": 2,
"text": "P"
},
{
"math_id": 3,
"text": "Q"
},
{
"math_id": 4,
"text": "\\mathcal{X}"
},
{
"math_id": 5,
"text": "W"
},
{
"math_id": 6,
"text": "Y"
},
{
"math_id": 7,
"text": "E"
},
{
"math_id": 8,
"text": "\\mathcal{H}"
},
{
"math_id": 9,
"text": "\\mathcal{A}"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "\n (2.1)\\quad\n E [(\\mathcal{A}f)](Y) := E ((\\mathcal{A}f)(Y)) = 0\\text{ for all } f \\quad \\iff \\quad Y \\text{ has distribution } Q.\n"
},
{
"math_id": 12,
"text": "\n (2.2)\\quad\n E\\left(f'(Y)-Yf(Y)\\right) = 0\\text{ for all } f\\in C_b^1 \\quad \\iff \\quad Y \\text{ has standard normal distribution.}\n"
},
{
"math_id": 13,
"text": "\n (2.3)\\quad\n(\\mathcal{A}f)(x) = f'(x) - x f(x).\n"
},
{
"math_id": 14,
"text": "d"
},
{
"math_id": 15,
"text": "P=Q"
},
{
"math_id": 16,
"text": "E (\\mathcal{A}f)(W)=0"
},
{
"math_id": 17,
"text": "P\\approx Q"
},
{
"math_id": 18,
"text": "E (\\mathcal{A}f)(W) \\approx 0"
},
{
"math_id": 19,
"text": "f = f_h"
},
{
"math_id": 20,
"text": " (3.1)\\quad \n(\\mathcal{A}f)(x)= h(x) - E[h(Y)] \\qquad\\text{ for all } x .\n"
},
{
"math_id": 21,
"text": "x"
},
{
"math_id": 22,
"text": " (3.2)\\quad\nE(\\mathcal{A}f)(W)=E[h(W)] - E[h(Y)].\n"
},
{
"math_id": 23,
"text": " (3.3)\\quad\nf'(x) - x f(x) = h(x) - E[h(Y)] \\qquad\\text{for all }x . \n"
},
{
"math_id": 24,
"text": " (3.4)\\quad \n (\\mathcal{A}f)(x) = f'(x)+f(x)q'(x)/q(x).\n"
},
{
"math_id": 25,
"text": " (4.1)\\quad\nf(x) = e^{x^2/2}\\int_{-\\infty}^x [h(s)-E h(Y)]e^{-s^2/2} \\, ds.\n"
},
{
"math_id": 26,
"text": "(Z_t)_{t\\geq 0}"
},
{
"math_id": 27,
"text": "\n(4.2)\\quad\nf(x) = -\\int_0^\\infty [E^x h(Z_t)-E h(Y)] \\, dt,\n"
},
{
"math_id": 28,
"text": "E^x"
},
{
"math_id": 29,
"text": "Z"
},
{
"math_id": 30,
"text": "h\\in\\mathcal{H}"
},
{
"math_id": 31,
"text": "h"
},
{
"math_id": 32,
"text": "\n(5.1)\\quad\n\\|D^k f\\| \\leq C_{k,l} \\|D^l h\\|, \n"
},
{
"math_id": 33,
"text": "k,l=0,1,2,\\dots"
},
{
"math_id": 34,
"text": "k\\geq l"
},
{
"math_id": 35,
"text": "k\\geq l-1"
},
{
"math_id": 36,
"text": "\\|\\cdot\\|"
},
{
"math_id": 37,
"text": "D^k"
},
{
"math_id": 38,
"text": "C_{k,l}"
},
{
"math_id": 39,
"text": "\n(5.2)\\quad\n\\|f\\|_\\infty\\leq \\min\\left\\{\\sqrt{\\pi/2}\\|h\\|_\\infty,2\\|h'\\|_\\infty\\right\\},\\quad\n\\|f'\\|_\\infty\\leq \\min\\{2\\|h\\|_\\infty,4\\|h'\\|_\\infty\\},\\quad\n\\|f''\\|_\\infty\\leq 2 \\|h'\\|_\\infty,\n"
},
{
"math_id": 40,
"text": " W=\\sum_{i=1}^n X_i "
},
{
"math_id": 41,
"text": "E[W] = 0"
},
{
"math_id": 42,
"text": "\\operatorname{var}[W] = 1"
},
{
"math_id": 43,
"text": "i=1,\\dots,n"
},
{
"math_id": 44,
"text": "A_i\\subset\\{1,2,\\dots,n\\}"
},
{
"math_id": 45,
"text": "X_i"
},
{
"math_id": 46,
"text": "X_j"
},
{
"math_id": 47,
"text": "j\\not\\in A_i"
},
{
"math_id": 48,
"text": "B_i\\subset\\{1,2,\\dots,n\\}"
},
{
"math_id": 49,
"text": "j\\in A_i"
},
{
"math_id": 50,
"text": "X_k"
},
{
"math_id": 51,
"text": "k\\not\\in B_i"
},
{
"math_id": 52,
"text": "B_i"
},
{
"math_id": 53,
"text": "A\\subset\\{1,2,\\dots,n\\}"
},
{
"math_id": 54,
"text": "X_A := \\sum_{j\\in A} X_j"
},
{
"math_id": 55,
"text": "\n(6.1)\\quad\n\\left|E(f'(W)-Wf(W))\\right| \n\\leq \\|f''\\|_\\infty\\sum_{i=1}^n \\left(\n \\frac{1}{2}E|X_i X_{A_i}^2|\n+ E|X_i X_{A_i}X_{B_i \\setminus A_i}|\n+ E|X_i X_{A_i}| E|X_{B_i}|\n\\right)\n"
},
{
"math_id": 56,
"text": "\\|h'\\|_{\\infty}"
},
{
"math_id": 57,
"text": "f''"
},
{
"math_id": 58,
"text": "\\|f\\|_\\infty"
},
{
"math_id": 59,
"text": "\\|f'\\|_\\infty"
},
{
"math_id": 60,
"text": "d_W"
},
{
"math_id": 61,
"text": "\n(6.2)\\quad\nd_W(\\mathcal{L}(W),N(0,1)) \\leq 2\\sum_{i=1}^n \\left(\n \\frac{1}{2}E|X_i X_{A_i}^2|\n+ E|X_i X_{A_i}X_{B_i \\setminus A_i}|\n+ E|X_i X_{A_i}| E|X_{B_i}|\n\\right).\n"
},
{
"math_id": 62,
"text": "\\|h'\\|\\leq 1"
},
{
"math_id": 63,
"text": "A_i"
},
{
"math_id": 64,
"text": "E X_i = 0"
},
{
"math_id": 65,
"text": " \\operatorname{var} X_i = 1"
},
{
"math_id": 66,
"text": " W=n^{-1/2}\\sum X_i "
},
{
"math_id": 67,
"text": " A_i=B_i=\\{i\\} "
},
{
"math_id": 68,
"text": "\n(7.1)\\quad\nd_W(\\mathcal{L}(W),N(0,1)) \\leq \\frac{5 E|X_1|^3}{n^{1/2}}.\n"
},
{
"math_id": 69,
"text": " E h(X_1+\\cdots+X_n)-E h(Y_1+\\cdots+Y_n) "
},
{
"math_id": 70,
"text": "\\psi(t)"
},
{
"math_id": 71,
"text": "\\psi'(t)+t\\psi(t) = 0"
},
{
"math_id": 72,
"text": "t"
},
{
"math_id": 73,
"text": "\\psi_W(t)"
},
{
"math_id": 74,
"text": "\\psi'_W(t)+t\\psi_W(t)\\approx 0"
},
{
"math_id": 75,
"text": "\\psi_W(t)\\approx \\psi(t)"
}
] | https://en.wikipedia.org/wiki?curid=12933953 |
12935671 | Lune (geometry) | Crescent shape bounded by two circular arcs
In plane geometry, a lune (from Latin "luna" 'moon') is the concave-convex region bounded by two circular arcs. It has one boundary portion for which the connecting segment of any two nearby points moves outside the region and another boundary portion for which the connecting segment of any two nearby points lies entirely inside the region. A convex-convex region is termed a lens.
Formally, a lune is the relative complement of one disk in another (where they intersect but neither is a subset of the other). Alternatively, if formula_0 and formula_1 are disks, then formula_2 is a lune.
Squaring the lune.
In the 5th century BC, Hippocrates of Chios showed that the Lune of Hippocrates and two other lunes could be exactly squared (converted into a square having the same area) by straightedge and compass. In 1766 the Finnish mathematician Daniel Wijnquist, quoting Daniel Bernoulli, listed all five geometrical squarable lunes, adding to those known by Hippocrates. In 1771 Leonhard Euler gave a general approach and obtained a certain equation for the problem. In 1933 and 1947 it was proven by Nikolai Chebotaryov and his student Anatoly Dorodnov that these five are the only squarable lunes.
Area.
The area of a lune formed by circles of radii "a" and "b" ("b>a") with distance "c" between their centers is
formula_3
where formula_4 is the inverse function of the secant function, and where
formula_5
is the area of a triangle with sides "a, b" and "c".
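A numerical sketch of this formula together with a Monte Carlo cross-check. For the check, the circle of radius "a" is placed at the origin and the circle of radius "b" at ("c", 0) (an assumption made only for this sketch); the sampled region is the part of the smaller disk lying outside the larger disk, which matches the value of the formula for the radii tried here.

```python
import numpy as np

def lune_area(a, b, c):
    """Area formula quoted above, with arcsec(x) evaluated as arccos(1/x)."""
    delta = 0.25 * np.sqrt((a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c))
    return (2 * delta
            + a**2 * np.arccos((b**2 - a**2 - c**2) / (2 * a * c))
            - b**2 * np.arccos((b**2 + c**2 - a**2) / (2 * b * c)))

a, b, c = 1.0, 2.0, 2.0
print(lune_area(a, b, c))                     # ~ 1.738

# Monte Carlo: points inside the disk of radius a but outside the disk of radius b.
rng = np.random.default_rng(0)
pts = rng.uniform(-a, a, size=(10**6, 2))
inside_a = (pts**2).sum(axis=1) < a**2
outside_b = (pts[:, 0] - c)**2 + pts[:, 1]**2 > b**2
print((inside_a & outside_b).mean() * (2 * a)**2)   # ~ the same value
```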
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "A \\smallsetminus A \\cap B"
},
{
"math_id": 3,
"text": "A=2\\Delta+a^2\\sec^{-1}\\left(\\frac{2ac}{b^2-a^2-c^2}\\right)-b^2\\sec^{-1}\\left(\\frac{2bc}{b^2+c^2-a^2}\\right),"
},
{
"math_id": 4,
"text": "\\text{sec}^{-1}"
},
{
"math_id": 5,
"text": " \\Delta= \\frac{1}{4} \\sqrt{(a+b+c)(-a+b+c)(a-b+c)(a+b-c)}"
}
] | https://en.wikipedia.org/wiki?curid=12935671 |
1293719 | Thermal de Broglie wavelength | Physical quantity of ideal and quantum gases
In physics, the thermal de Broglie wavelength (formula_0, sometimes also denoted by formula_1) is roughly the average de Broglie wavelength of particles in an ideal gas at the specified temperature. We can take the average interparticle spacing in the gas to be approximately ("V"/"N")1/3 where V is the volume and N is the number of particles. When the thermal de Broglie wavelength is much smaller than the interparticle distance, the gas can be considered to be a classical or Maxwell–Boltzmann gas. On the other hand, when the thermal de Broglie wavelength is on the order of or larger than the interparticle distance, quantum effects will dominate and the gas must be treated as a Fermi gas or a Bose gas, depending on the nature of the gas particles. The critical temperature is the transition point between these two regimes, and at this critical temperature, the thermal wavelength will be approximately equal to the interparticle distance. That is, the quantum nature of the gas will be evident for
formula_2
i.e., when the interparticle distance is less than the thermal de Broglie wavelength; in this case the gas will obey Bose–Einstein statistics or Fermi–Dirac statistics, whichever is appropriate. This is for example the case for electrons in a typical metal at "T" = 300 K, where the electron gas obeys Fermi–Dirac statistics, or in a Bose–Einstein condensate. On the other hand, for
formula_3
i.e., when the interparticle distance is much larger than the thermal de Broglie wavelength, the gas will obey Maxwell–Boltzmann statistics. Such is the case for molecular or atomic gases at room temperature, and for thermal neutrons produced by a neutron source.
Massive particles.
For massive, non-interacting particles, the thermal de Broglie wavelength can be derived from the calculation of the partition function. Assuming a 1-dimensional box of length L, the partition function (using the energy states of the 1D particle in a box) is
formula_4
Since the energy levels are extremely close together, we can approximate this sum as an integral:
formula_5
Hence,
formula_6
where formula_7 is the Planck constant, m is the mass of a gas particle, formula_8 is the Boltzmann constant, and T is the temperature of the gas.
This can also be expressed using the reduced Planck constant formula_9 as
formula_10
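The expression above is straightforward to evaluate; the sketch below uses illustrative particle masses (not values taken from this article) at room temperature.

```python
import numpy as np

h = 6.62607015e-34       # Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K
u = 1.66053906660e-27    # atomic mass unit, kg

def thermal_wavelength(mass, T):
    """Thermal de Broglie wavelength h / sqrt(2 pi m kB T), in metres."""
    return h / np.sqrt(2 * np.pi * mass * kB * T)

T = 298.0
particles = {                      # approximate masses, for illustration only
    "electron": 9.1093837015e-31,
    "H2 (~2 u)": 2.016 * u,
    "He (~4 u)": 4.0026 * u,
}
for name, m in particles.items():
    print(f"{name}: {thermal_wavelength(m, T) * 1e9:.4f} nm")
```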
Massless particles.
For massless (or highly relativistic) particles, the thermal wavelength is defined as
formula_11
where "c" is the speed of light. As with the thermal wavelength for massive particles, this is of the order of the average wavelength of the particles in the gas and defines a critical point at which quantum effects begin to dominate. For example, when observing the long-wavelength spectrum of black body radiation, the classical Rayleigh–Jeans law can be applied, but when the observed wavelengths approach the thermal wavelength of the photons in the black body radiator, the quantum Planck's law must be used.
General definition.
A general definition of the thermal wavelength for an ideal gas of particles having an arbitrary power-law relationship between energy and momentum (dispersion relationship), in any number of dimensions, can be introduced. If n is the number of dimensions, and the relationship between energy (E) and momentum (p) is given by
formula_12
(with a and s being constants), then the thermal wavelength is defined as
formula_13
where Γ is the Gamma function. In particular, for a 3-D ("n" = 3) gas of massive or massless particles we have "E" = "p"2/2"m" ("a" = 1/2"m", "s" = 2) and "E" = "pc" ("a" = "c", "s" = 1), respectively, yielding the expressions listed in the previous sections. Note that for massive non-relativistic particles ("s" = 2), the expression does not depend on "n". This explains why the 1-D derivation above agrees with the 3-D case.
Examples.
Some examples of the thermal de Broglie wavelength at 298 K are given below. | [
{
"math_id": 0,
"text": "\\lambda_{\\mathrm{th}}"
},
{
"math_id": 1,
"text": "\\Lambda"
},
{
"math_id": 2,
"text": "\n \\displaystyle \n \\frac{V}{N\\lambda_{\\mathrm{th}}^3} \\le 1 \n \\ , {\\rm or} \\ \n \\left( \\frac{V}{N} \\right)^{1/3} \\le \\lambda_{\\mathrm{th}}\n"
},
{
"math_id": 3,
"text": "\n \\displaystyle \n \\frac{V}{N\\lambda_{\\mathrm{th}}^3} \\gg 1 \n \\ , {\\rm or} \\ \n \\left( \\frac{V}{N} \\right)^{1/3} \\gg \\lambda_{\\mathrm{th}}\n"
},
{
"math_id": 4,
"text": " Z = \\sum_{n} e^{-E_n/k_{\\mathrm B}T} = \\sum_{n} e^{-h^2 n^2 / 8mL^2k_{\\mathrm B} T} ."
},
{
"math_id": 5,
"text": " Z = \\int_0^\\infty e^{-h^2 n^2 / 8mL^2k_{\\mathrm B}T} dn = \\sqrt{\\frac{2\\pi m k_{\\mathrm B} T}{h^2}} L \\equiv \\frac{L}{\\lambda_{\\rm th}} ."
},
{
"math_id": 6,
"text": " \\lambda_{\\rm th} = \\frac{h}{\\sqrt{2\\pi m k_{\\mathrm B} T}} ,"
},
{
"math_id": 7,
"text": " h "
},
{
"math_id": 8,
"text": "k_{\\mathrm B}"
},
{
"math_id": 9,
"text": "\\hbar= \\frac{h}{2\\pi} "
},
{
"math_id": 10,
"text": "\\lambda_{\\mathrm{th}} = {\\sqrt{\\frac{2\\pi\\hbar^2}{ mk_{\\mathrm B}T}}} ."
},
{
"math_id": 11,
"text": "\\lambda_{\\mathrm{th}}= \\frac{hc}{2 \\pi^{1/3} k_{\\mathrm B} T} = \\frac{\\pi^{2/3}\\hbar c}{ k_{\\mathrm B} T} ,"
},
{
"math_id": 12,
"text": "E=ap^s"
},
{
"math_id": 13,
"text": "\n\\lambda_{\\mathrm{th}}=\\frac{h}{\\sqrt{\\pi}}\\left(\\frac{a}{k_{\\mathrm B}T}\\right)^{1/s}\n\\left[\\frac{\\Gamma(n/2+1)}{\\Gamma(n/s+1)}\\right]^{1/n} ,\n"
}
] | https://en.wikipedia.org/wiki?curid=1293719 |
12939 | Geometric algebra | Algebraic structure designed for geometry
In mathematics, a geometric algebra (also known as a Clifford algebra) is an extension of elementary algebra to work with geometrical objects such as vectors. Geometric algebra is built out of two fundamental operations, addition and the geometric product. Multiplication of vectors results in higher-dimensional objects called multivectors. Compared to other formalisms for manipulating geometric objects, geometric algebra is noteworthy for supporting vector division (though generally not for all elements) and addition of objects of different dimensions.
The geometric product was first briefly mentioned by Hermann Grassmann, who was chiefly interested in developing the closely related exterior algebra. In 1878, William Kingdon Clifford greatly expanded on Grassmann's work to form what are now usually called Clifford algebras in his honor (although Clifford himself chose to call them "geometric algebras"). Clifford defined the Clifford algebra and its product as a unification of the Grassmann algebra and Hamilton's quaternion algebra. Adding the dual of the Grassmann exterior product (the "meet") allows the use of the Grassmann–Cayley algebra, and a conformal version of the latter together with a conformal Clifford algebra yields a conformal geometric algebra (CGA) providing a framework for classical geometries. In practice, these and several derived operations allow a correspondence of elements, subspaces and operations of the algebra with geometric interpretations. For several decades, geometric algebras went somewhat ignored, greatly eclipsed by the vector calculus then newly developed to describe electromagnetism. The term "geometric algebra" was repopularized in the 1960s by David Hestenes, who advocated its importance to relativistic physics.
The scalars and vectors have their usual interpretation and make up distinct subspaces of a geometric algebra. Bivectors provide a more natural representation of the pseudovector quantities of 3D vector calculus that are derived as a cross product, such as oriented area, oriented angle of rotation, torque, angular momentum and the magnetic field. A trivector can represent an oriented volume, and so on. An element called a blade may be used to represent a subspace and orthogonal projections onto that subspace. Rotations and reflections are represented as elements. Unlike a vector algebra, a geometric algebra naturally accommodates any number of dimensions and any quadratic form such as in relativity.
Examples of geometric algebras applied in physics include the spacetime algebra (and the less common algebra of physical space) and the conformal geometric algebra. Geometric calculus, an extension of GA that incorporates differentiation and integration, can be used to formulate other theories such as complex analysis and differential geometry, e.g. by using the Clifford algebra instead of differential forms. Geometric algebra has been advocated, most notably by David Hestenes and Chris Doran, as the preferred mathematical framework for physics. Proponents claim that it provides compact and intuitive descriptions in many areas including classical and quantum mechanics, electromagnetic theory, and relativity. GA has also found use as a computational tool in computer graphics and robotics.
Definition and notation.
There are a number of different ways to define a geometric algebra. Hestenes's original approach was axiomatic, "full of geometric significance" and equivalent to the universal Clifford algebra.
Given a finite-dimensional vector space "V" over a field "F" with a symmetric bilinear form (the "inner product", e.g., the Euclidean or Lorentzian metric) "g" : "V" × "V" → "F", the geometric algebra of the quadratic space ("V", "g") is the Clifford algebra Cl("V", "g"), an element of which is called a multivector. The Clifford algebra is commonly defined as a quotient algebra of the tensor algebra, though this definition is abstract, so the following definition is presented without requiring abstract algebra.
To cover degenerate symmetric bilinear forms, the last condition must be modified. It can be shown that these conditions uniquely characterize the geometric product.
For the remainder of this article, only the real case, "F" = ℝ, will be considered. The notation "G"("p", "q") will be used to denote a geometric algebra for which the bilinear form "g" has the signature ("p", "q").
The product in the algebra is called the "geometric product", and the product in the contained exterior algebra is called the "exterior product" (frequently called the "wedge product" or the "outer product"). It is standard to denote these respectively by juxtaposition (i.e., suppressing any explicit multiplication symbol) and the symbol ∧.
The above definition of the geometric algebra is still somewhat abstract, so we summarize the properties of the geometric product here. For multivectors: the product is closed, associative and distributive over addition, 1 is the multiplicative identity, and the square of every vector is a scalar given by the bilinear form.
The exterior product has the same properties, except that the last property above is replaced by the rule that the exterior product of any vector with itself is zero.
Note that in the last property above, the real number given by the square of a vector need not be nonnegative if the bilinear form is not positive-definite. An important property of the geometric product is the existence of elements that have a multiplicative inverse. For a vector, if formula_0 then formula_1 exists and is equal to the vector divided by its (scalar) square. A nonzero element of the algebra does not necessarily have a multiplicative inverse. For example, if formula_2 is a vector in formula_3 such that formula_2 squares to 1, the element formula_4 is both a nontrivial idempotent element and a nonzero zero divisor, and thus has no inverse.
It is usual to identify formula_5 and formula_3 with their images under the natural embeddings formula_6 and the corresponding embedding of formula_3. In this article, this identification is assumed. Throughout, the terms "scalar" and "vector" refer to elements of formula_5 and formula_3 respectively (and of their images under this embedding).
Geometric product.
For any two vectors formula_7 and formula_17, we may write their geometric product as the sum of a symmetric product and an antisymmetric product:
formula_12
Thus we can define the "inner product" of vectors as
formula_13
so that the symmetric product can be written as
formula_14
Conversely, the inner product is completely determined by the algebra. The antisymmetric part is the exterior product of the two vectors, the product of the contained exterior algebra:
formula_15
Then by simple addition:
formula_16 the ungeneralized or vector form of the geometric product.
The inner and exterior products are associated with familiar concepts from standard vector algebra. Geometrically, formula_7 and formula_17 are parallel if their geometric product is equal to their inner product, whereas formula_7 and formula_17 are perpendicular if their geometric product is equal to their exterior product. In a geometric algebra for which the square of any nonzero vector is positive, the inner product of two vectors can be identified with the dot product of standard vector algebra. The exterior product of two vectors can be identified with the signed area enclosed by a parallelogram the sides of which are the vectors. The cross product of two vectors in formula_11 dimensions with positive-definite quadratic form is closely related to their exterior product.
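As an illustration of this decomposition, the following sketch uses a 2×2 real-matrix representation of G(2,0), in which matrix multiplication plays the role of the geometric product (the particular vectors are illustrative):

```python
import numpy as np

# A faithful 2x2 real-matrix representation of G(2,0): e1^2 = e2^2 = 1 and
# e1 e2 = -e2 e1; the geometric product is matrix multiplication.
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])

a = 3*e1 + 1*e2
b = -2*e1 + 4*e2

ab = a @ b
sym  = (a @ b + b @ a) / 2        # symmetric part = (a . b) * identity
anti = (a @ b - b @ a) / 2        # antisymmetric part = a ^ b

# The symmetric part is the scalar a . b = 3*(-2) + 1*4 = -2
assert np.allclose(sym, -2 * np.eye(2))
# The decomposition recovers the full geometric product
assert np.allclose(ab, sym + anti)
# The exterior product is antisymmetric: a ^ b = -(b ^ a)
assert np.allclose(anti, -(b @ a - a @ b) / 2)
```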
Most instances of geometric algebras of interest have a nondegenerate quadratic form. If the quadratic form is fully degenerate, the inner product of any two vectors is always zero, and the geometric algebra is then simply an exterior algebra. Unless otherwise stated, this article will treat only nondegenerate geometric algebras.
The exterior product is naturally extended as an associative bilinear binary operator between any two elements of the algebra, satisfying the identities
formula_18
where the sum is over all permutations of the indices, with formula_19 the sign of the permutation, and formula_20 are vectors (not general elements of the algebra). Since every element of the algebra can be expressed as the sum of products of this form, this defines the exterior product for every pair of elements of the algebra. It follows from the definition that the exterior product forms an alternating algebra.
The equivalent structure equation for Clifford algebra is
formula_21
where formula_22 is the Pfaffian of &NoBreak;&NoBreak; and formula_23 provides combinations, &NoBreak;&NoBreak;, of &NoBreak;&NoBreak; indices divided into &NoBreak;&NoBreak; and &NoBreak;&NoBreak; parts and &NoBreak;&NoBreak; is the parity of the combination.
The Pfaffian provides a metric for the exterior algebra and, as pointed out by Claude Chevalley, Clifford algebra reduces to the exterior algebra with a zero quadratic form. The role the Pfaffian plays can be understood from a geometric viewpoint by developing Clifford algebra from simplices. This derivation provides a better connection between Pascal's triangle and simplices because it provides an interpretation of the first column of ones.
Blades, grades, and basis.
A multivector that is the exterior product of formula_24 linearly independent vectors is called a "blade", and is said to be of grade &NoBreak;&NoBreak;. A multivector that is the sum of blades of grade formula_24 is called a (homogeneous) multivector of grade &NoBreak;&NoBreak;. From the axioms, with closure, every multivector of the geometric algebra is a sum of blades.
Consider a set of formula_24 linearly independent vectors formula_25 spanning an &NoBreak;&NoBreak;-dimensional subspace of the vector space. With these, we can define a real symmetric matrix (in the same way as a Gramian matrix)
formula_26
By the spectral theorem, formula_27 can be diagonalized to diagonal matrix formula_28 by an orthogonal matrix formula_29 via
formula_30
Define a new set of vectors &NoBreak;}&NoBreak;, known as orthogonal basis vectors, to be those transformed by the orthogonal matrix:
formula_31
Since orthogonal transformations preserve inner products, it follows that formula_32 and thus the formula_33 are perpendicular. In other words, the geometric product of two distinct vectors formula_34 is completely specified by their exterior product, or more generally
formula_35
Therefore, every blade of grade formula_24 can be written as the exterior product of formula_24 vectors. More generally, if a degenerate geometric algebra is allowed, then the orthogonal matrix is replaced by a block matrix that is orthogonal in the nondegenerate block, and the diagonal matrix has zero-valued entries along the degenerate dimensions. If the new vectors of the nondegenerate subspace are normalized according to
formula_36
then these normalized vectors must square to formula_37 or &NoBreak;&NoBreak;. By Sylvester's law of inertia, the total number of &NoBreak;&NoBreak; and the total number of &NoBreak;&NoBreak;s along the diagonal matrix are invariant. By extension, the total number formula_38 of these vectors that square to formula_37 and the total number formula_39 that square to formula_40 are invariant. (The total number of basis vectors that square to zero is also invariant, and may be nonzero if the degenerate case is allowed.) We denote this algebra &NoBreak;&NoBreak;. For example, formula_41 models three-dimensional Euclidean space, formula_42 relativistic spacetime and formula_43 a conformal geometric algebra of a three-dimensional space.
The set of all possible products of formula_8 orthogonal basis vectors with indices in increasing order, including formula_9 as the empty product, forms a basis for the entire geometric algebra (an analogue of the PBW theorem). For example, the following is a basis for the geometric algebra &NoBreak;&NoBreak;:
formula_44
A basis formed this way is called a standard basis for the geometric algebra, and any other orthogonal basis for formula_3 will produce another standard basis. Each standard basis consists of formula_45 elements. Every multivector of the geometric algebra can be expressed as a linear combination of the standard basis elements. If the standard basis elements are formula_46 with formula_47 being an index set, then the geometric product of any two multivectors is
formula_48
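The following is a minimal Python sketch of this basis-blade multiplication for G(3,0); the function names blade_product and gp are illustrative, and only the Euclidean (all +1) signature is handled:

```python
from itertools import product

# Minimal sketch of G(3,0): a multivector is a dict mapping a sorted tuple of
# basis-vector indices (a standard basis blade) to a real coefficient.
# () is the empty product, i.e. the scalar 1.

def blade_product(A, B):
    """Geometric product of two basis blades (index tuples). Returns (sign, blade)."""
    idx = list(A) + list(B)
    sign = 1
    # bubble-sort the concatenated index list; each swap of two distinct,
    # anticommuting basis vectors flips the sign
    changed = True
    while changed:
        changed = False
        for i in range(len(idx) - 1):
            if idx[i] > idx[i + 1]:
                idx[i], idx[i + 1] = idx[i + 1], idx[i]
                sign = -sign
                changed = True
    # cancel adjacent repeated indices using e_i e_i = +1 (Euclidean signature)
    out = []
    for k in idx:
        if out and out[-1] == k:
            out.pop()
        else:
            out.append(k)
    return sign, tuple(out)

def gp(X, Y):
    """Geometric product of multivectors represented as {blade: coeff} dicts."""
    result = {}
    for (bx, cx), (by, cy) in product(X.items(), Y.items()):
        sign, blade = blade_product(bx, by)
        result[blade] = result.get(blade, 0.0) + sign * cx * cy
    return {b: c for b, c in result.items() if c != 0.0}

e1, e2, e3 = {(1,): 1.0}, {(2,): 1.0}, {(3,): 1.0}
assert gp(e1, e1) == {(): 1.0}                     # e1 e1 = 1
assert gp(e1, e2) == {(1, 2): 1.0}                 # e1 e2 = e12
assert gp(e2, e1) == {(1, 2): -1.0}                # e2 e1 = -e12 (anticommute)
assert gp(gp(e1, e2), gp(e1, e2)) == {(): -1.0}    # (e12)^2 = -1
```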
The terminology "formula_49-vector" is often encountered to describe multivectors containing elements of only one grade. In higher dimensional space, some such multivectors are not blades (cannot be factored into the exterior product of formula_49 vectors). By way of example, formula_50 in formula_51 cannot be factored; typically, however, such elements of the algebra do not yield to geometric interpretation as objects, although they may represent geometric quantities such as rotations. Only &NoBreak;&NoBreak;-, &NoBreak;&NoBreak;-, &NoBreak;&NoBreak;- and &NoBreak;&NoBreak;-vectors are always blades in &NoBreak;&NoBreak;-space.
Versor.
A &NoBreak;&NoBreak;-versor is a multivector that can be expressed as the geometric product of formula_49 invertible vectors. Unit quaternions (originally called versors by Hamilton) may be identified with rotors in 3D space in much the same way as real 2D rotors subsume complex numbers; for the details refer to Dorst.
Some authors use the term "versor product" to refer to the frequently occurring case where an operand is "sandwiched" between operators. The descriptions for rotations and reflections, including their outermorphisms, are examples of such sandwiching. These outermorphisms have a particularly simple algebraic form. Specifically, a mapping of vectors of the form
formula_52 extends to the outermorphism formula_53
Since both the operators and the operand are versors, there is potential for alternative examples, such as rotating a rotor or reflecting a spinor, provided that some geometrical or physical significance can be attached to such operations.
By the Cartan–Dieudonné theorem, every isometry can be given as a composition of reflections in hyperplanes; since composed reflections provide rotations, it follows that orthogonal transformations are versors.
In group terms, for a real, non-degenerate &NoBreak;&NoBreak;, having identified the group formula_54 as the group of all invertible elements of &NoBreak;}&NoBreak;, Lundholm gives a proof that the "versor group" formula_55 (the set of invertible versors) is equal to the Lipschitz group formula_56 (also known as the Clifford group, although Lundholm deprecates this usage).
Subgroups of the Lipschitz group.
We denote the grade involution as &NoBreak;}&NoBreak; and reversion as &NoBreak;}&NoBreak;.
Although the Lipschitz group (defined as &NoBreak;}&NoBreak;) and the versor group (defined as &NoBreak;}&NoBreak;) have divergent definitions, they are the same group. Lundholm defines the &NoBreak;}&NoBreak;, &NoBreak;}&NoBreak;, and &NoBreak;}&NoBreak; subgroups of the Lipschitz group.
Multiple analyses of spinors use GA as a representation.
Grade projection.
A &NoBreak;&NoBreak;-graded vector space structure can be established on a geometric algebra by use of the exterior product that is naturally induced by the geometric product.
Since the geometric product and the exterior product are equal on orthogonal vectors, this grading can be conveniently constructed by using an orthogonal basis &NoBreak;}&NoBreak;.
Elements of the geometric algebra that are scalar multiples of formula_9 are of grade formula_57 and are called "scalars". Elements that are in the span of formula_58 are of grade &NoBreak;&NoBreak; and are the ordinary vectors. Elements in the span of formula_59 are of grade formula_10 and are the bivectors. This terminology continues through to the last grade of &NoBreak;&NoBreak;-vectors. Alternatively, &NoBreak;&NoBreak;-vectors are called pseudoscalars, &NoBreak;&NoBreak;-vectors are called pseudovectors, etc. Many of the elements of the algebra are not graded by this scheme since they are sums of elements of differing grade. Such elements are said to be of "mixed grade". The grading of multivectors is independent of the basis chosen originally.
This is a grading as a vector space, but not as an algebra. Because the product of an &NoBreak;&NoBreak;-blade and an &NoBreak;&NoBreak;-blade is contained in the span of formula_57 through &NoBreak;&NoBreak;-blades, the geometric algebra is a filtered algebra.
A multivector formula_60 may be decomposed with the grade-projection operator &NoBreak;&NoBreak;, which outputs the grade-&NoBreak;&NoBreak; portion of &NoBreak;&NoBreak;. As a result:
formula_61
As an example, the geometric product of two vectors formula_62 since formula_63 and formula_64 and &NoBreak;&NoBreak;, for formula_65 other than formula_57 and &NoBreak;&NoBreak;.
A multivector formula_60 may also be decomposed into even and odd components, which may respectively be expressed as the sum of the even and the sum of the odd grade components above:
formula_66
formula_67
This is the result of forgetting structure from a &NoBreak;}&NoBreak;-graded vector space to &NoBreak;&NoBreak;-graded vector space. The geometric product respects this coarser grading. Thus in addition to being a &NoBreak;&NoBreak;-graded vector space, the geometric algebra is a &NoBreak;&NoBreak;-graded algebra, a superalgebra.
Restricting to the even part, the product of two even elements is also even. This means that the even multivectors define an "even subalgebra". The even subalgebra of an &NoBreak;&NoBreak;-dimensional geometric algebra is algebra-isomorphic (without preserving either filtration or grading) to a full geometric algebra of formula_68 dimensions. Examples include formula_69 and &NoBreak;&NoBreak;.
Representation of subspaces.
Geometric algebra represents subspaces of formula_3 as blades, and so they coexist in the same algebra with vectors from &NoBreak;&NoBreak;. A &NoBreak;&NoBreak;-dimensional subspace formula_70 of formula_3 is represented by taking an orthogonal basis formula_71 and using the geometric product to form the blade &NoBreak;&NoBreak;. There are multiple blades representing &NoBreak;&NoBreak;; all those representing formula_70 are scalar multiples of &NoBreak;&NoBreak;. These blades can be separated into two sets: positive multiples of formula_72 and negative multiples of &NoBreak;&NoBreak;. The positive multiples of formula_72 are said to have "the same orientation" as &NoBreak;&NoBreak;, and the negative multiples the "opposite orientation".
Blades are important since geometric operations such as projections, rotations and reflections depend on the factorability via the exterior product that (the restricted class of) &NoBreak;&NoBreak;-blades provide but that (the generalized class of) grade-&NoBreak;&NoBreak; multivectors do not when &NoBreak;&NoBreak;.
Unit pseudoscalars.
Unit pseudoscalars are blades that play important roles in GA. A unit pseudoscalar for a non-degenerate subspace formula_70 of formula_3 is a blade that is the product of the members of an orthonormal basis for &NoBreak;&NoBreak;. It can be shown that if formula_73 and formula_74 are both unit pseudoscalars for &NoBreak;&NoBreak;, then formula_75 and &NoBreak;&NoBreak;. If one doesn't choose an orthonormal basis for &NoBreak;&NoBreak;, then the Plücker embedding gives a vector in the exterior algebra but only up to scaling. Using the vector space isomorphism between the geometric algebra and exterior algebra, this gives the equivalence class of formula_76 for all &NoBreak;&NoBreak;. Orthonormality gets rid of this ambiguity except for the signs above.
Suppose the geometric algebra formula_77 with the familiar positive definite inner product on formula_78 is formed. Given a plane (two-dimensional subspace) of &NoBreak;&NoBreak;, one can find an orthonormal basis formula_79 spanning the plane, and thus find a unit pseudoscalar formula_80 representing this plane. The geometric product of any two vectors in the span of formula_81 and formula_82 lies in &NoBreak;}&NoBreak;, that is, it is the sum of a &NoBreak;&NoBreak;-vector and a &NoBreak;&NoBreak;-vector.
By the properties of the geometric product, &NoBreak;&NoBreak;. The resemblance to the imaginary unit is not incidental: the subspace formula_83 is &NoBreak;&NoBreak;-algebra isomorphic to the complex numbers. In this way, a copy of the complex numbers is embedded in the geometric algebra for each two-dimensional subspace of formula_3 on which the quadratic form is definite.
It is sometimes possible to identify the presence of an imaginary unit in a physical equation. Such units arise from one of the many quantities in the real algebra that square to &NoBreak;&NoBreak;, and these have geometric significance because of the properties of the algebra and the interaction of its various subspaces.
In &NoBreak;&NoBreak;, a further familiar case occurs. Given a standard basis consisting of orthonormal vectors formula_84 of &NoBreak;&NoBreak;, the set of "all" &NoBreak;&NoBreak;-vectors is spanned by
formula_85
Labelling these &NoBreak;&NoBreak;, formula_86 and formula_49 (momentarily deviating from our uppercase convention), the subspace generated by &NoBreak;&NoBreak;-vectors and &NoBreak;&NoBreak;-vectors is exactly &NoBreak;}&NoBreak;. This set is seen to be the even subalgebra of &NoBreak;&NoBreak;, and furthermore is isomorphic as an &NoBreak;&NoBreak;-algebra to the quaternions, another important algebraic system.
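A numerical check of this isomorphism, using the Pauli-matrix representation of G(3,0) introduced later in the article; the labels i, j, k follow the bivectors listed above and are otherwise illustrative:

```python
import numpy as np

# Pauli-matrix representation of G(3,0); the geometric product is matrix
# multiplication.  i, j, k below are the bivectors e3e2, e1e3, e2e1.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

i, j, k = e3 @ e2, e1 @ e3, e2 @ e1

# Hamilton's defining relations for the quaternions
assert np.allclose(i @ i, -I2)
assert np.allclose(j @ j, -I2)
assert np.allclose(k @ k, -I2)
assert np.allclose(i @ j @ k, -I2)
assert np.allclose(i @ j, k)   # and cyclically
```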
Extensions of the inner and exterior products.
It is common practice to extend the exterior product on vectors to the entire algebra. This may be done through the use of the above mentioned grade projection operator:
formula_87 (the "exterior product")
This generalization is consistent with the above definition involving antisymmetrization. Another generalization related to the exterior product is the commutator product:
formula_88 (the "commutator product")
The regressive product is the dual of the exterior product (respectively corresponding to the "meet" and "join" in this context). The dual specification of elements permits, for blades &NoBreak;&NoBreak; and &NoBreak;&NoBreak;, the intersection (or meet) where the duality is to be taken relative to a blade containing both &NoBreak;&NoBreak; and &NoBreak;&NoBreak; (the smallest such blade being the join).
formula_89
with &NoBreak;&NoBreak; the unit pseudoscalar of the algebra. The regressive product, like the exterior product, is associative.
The inner product on vectors can also be generalized, but in more than one non-equivalent way. The paper gives a full treatment of several different inner products developed for geometric algebras and their interrelationships, and the notation is taken from there. Many authors use the same symbol as for the inner product of vectors for their chosen extension (e.g. Hestenes and Perwass). No consistent notation has emerged.
Among these several different generalizations of the inner product on vectors are:
formula_90 (the "left contraction")
formula_91 (the "right contraction")
formula_92 (the "scalar product")
formula_93 (the "(fat) dot" product)
An argument has been made for the use of contractions in preference to Hestenes's inner product: they are algebraically more regular and have cleaner geometric interpretations.
A number of identities incorporating the contractions are valid without restriction of their inputs.
For example,
formula_94
formula_95
formula_96
formula_97
formula_98
formula_99
Benefits of using the left contraction as an extension of the inner product on vectors include that the identity formula_100 is extended to formula_101 for any vector formula_7 and multivector &NoBreak;&NoBreak;, and that the projection operation formula_102 is extended to formula_103 for any blade formula_104 and any multivector formula_60 (with a minor modification to accommodate null &NoBreak;&NoBreak;, given below).
Dual basis.
Let formula_105 be a basis of &NoBreak;&NoBreak;, i.e. a set of formula_8 linearly independent vectors that span the &NoBreak;&NoBreak;-dimensional vector space &NoBreak;&NoBreak;. The basis that is dual to formula_105 is the set of elements of the dual vector space formula_106 that forms a biorthogonal system with this basis, thus being the elements denoted formula_107 satisfying
formula_108
where formula_109 is the Kronecker delta.
Given a nondegenerate quadratic form on &NoBreak;&NoBreak;, formula_106 becomes naturally identified with &NoBreak;&NoBreak;, and the dual basis may be regarded as elements of &NoBreak;&NoBreak;, but are not in general the same set as the original basis.
Given further a GA of &NoBreak;&NoBreak;, let
formula_110
be the pseudoscalar (which does not necessarily square to &NoBreak;&NoBreak;) formed from the basis &NoBreak;}&NoBreak;. The dual basis vectors may be constructed as
formula_111
where the formula_112 denotes that the &NoBreak;&NoBreak;th basis vector is omitted from the product.
A dual basis is also known as a reciprocal basis or reciprocal frame.
A major usage of a dual basis is to separate vectors into components. Given a vector &NoBreak;&NoBreak;, scalar components formula_113 can be defined as
formula_114
in terms of which formula_7 can be separated into vector components as
formula_115
We can also define scalar components formula_20 as
formula_116
in terms of which formula_7 can be separated into vector components in terms of the dual basis as
formula_117
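The following NumPy sketch computes a reciprocal basis by inverting the Gram matrix rather than by the pseudoscalar construction above; for a nondegenerate metric the two constructions give the same vectors, since biorthogonality determines the reciprocal frame uniquely. The basis and sample vector are illustrative:

```python
import numpy as np

# Reciprocal (dual) basis in R^3 with the ordinary dot product: given a
# (not necessarily orthogonal) basis stored as the rows of E, the reciprocal
# basis vectors are the rows of inv(G) @ E, where G is the Gram matrix.
E = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])     # e_1, e_2, e_3 as rows
G = E @ E.T                          # Gram matrix  G_ij = e_i . e_j
E_dual = np.linalg.inv(G) @ E        # rows are e^1, e^2, e^3

# Biorthogonality: e^i . e_j = delta^i_j
assert np.allclose(E_dual @ E.T, np.eye(3))

# Separating a vector into components:  a = sum_i (a . e^i) e_i
a = np.array([2.0, -1.0, 0.5])
coords = E_dual @ a                  # a^i = a . e^i
assert np.allclose(coords @ E, a)
```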
A dual basis as defined above for the vector subspace of a geometric algebra can be extended to cover the entire algebra. For compactness, we'll use a single capital letter to represent an ordered set of vector indices. I.e., writing
formula_118
where &NoBreak;&NoBreak;,
we can write a basis blade as
formula_119
The corresponding reciprocal blade has the indices in opposite order:
formula_120
Similar to the case above with vectors, it can be shown that
formula_121
where formula_122 is the scalar product.
With formula_60 a multivector, we can define scalar components as
formula_123
in terms of which formula_60 can be separated into component blades as
formula_124
We can alternatively define scalar components
formula_125
in terms of which formula_60 can be separated into component blades as
formula_126
Linear functions.
Although a versor is easier to work with because it can be directly represented in the algebra as a multivector, versors are a subgroup of linear functions on multivectors, which can still be used when necessary. The geometric algebra of an &NoBreak;&NoBreak;-dimensional vector space is spanned by a basis of formula_45 elements. If a multivector is represented by a formula_127 real column matrix of coefficients of a basis of the algebra, then all linear transformations of the multivector can be expressed as the matrix multiplication by a formula_128 real matrix. However, such a general linear transformation allows arbitrary exchanges among grades, such as a "rotation" of a scalar into a vector, which has no evident geometric interpretation.
A general linear transformation from vectors to vectors is of interest. With the natural restriction to preserving the induced exterior algebra, the "outermorphism" of the linear transformation is the unique extension of the versor. If formula_129 is a linear function that maps vectors to vectors, then its outermorphism is the function that obeys the rule
formula_130
for a blade, extended to the whole algebra through linearity.
Modeling geometries.
Although much attention has been placed on CGA, GA is not just one algebra: it is one of a family of algebras with the same essential structure.
Vector space model.
The even subalgebra of formula_131 is isomorphic to the complex numbers, as may be seen by writing a vector formula_132 in terms of its components in an orthonormal basis and left multiplying by the basis vector &NoBreak;&NoBreak;, yielding
formula_133
where we identify formula_134 since
formula_135
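A small numerical illustration of this isomorphism using a 2×2 real-matrix representation of G(2,0); the helper name to_even and the sample values are illustrative:

```python
import numpy as np

# Real 2x2 matrix representation of G(2,0); matrix multiplication is the
# geometric product.  The even subalgebra, spanned by 1 and e1e2, behaves
# exactly like the complex numbers.
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])
one = np.eye(2)
i_ga = e1 @ e2                       # plays the role of the imaginary unit

assert np.allclose(i_ga @ i_ga, -one)

def to_even(z):
    """Map a Python complex number into the even subalgebra of G(2,0)."""
    return z.real * one + z.imag * i_ga

z, w = 1 + 2j, 3 - 1j
# The map preserves products, i.e. it is an algebra isomorphism.
assert np.allclose(to_even(z) @ to_even(w), to_even(z * w))

# Left-multiplying a vector P = x e1 + y e2 by e1 lands in the even part:
x, y = 2.0, 5.0
P = x * e1 + y * e2
assert np.allclose(e1 @ P, x * one + y * i_ga)
```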
Similarly, the even subalgebra of formula_41 with basis formula_136 is isomorphic to the quaternions as may be seen by identifying &NoBreak;&NoBreak;, formula_137 and &NoBreak;&NoBreak;.
Every associative algebra has a matrix representation; replacing the three Cartesian basis vectors by the Pauli matrices gives a representation of &NoBreak;&NoBreak;:
formula_138
Dotting the "Pauli vector" (a dyad):
formula_139 with arbitrary vectors formula_140 and formula_141 and multiplying through gives:
formula_142 (Equivalently, by inspection, &NoBreak;&NoBreak;)
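A NumPy check of this identity; the helper name slash and the sample vectors are illustrative, and the bivector part is compared against the cross product via the dual relationship mentioned later in the article:

```python
import numpy as np

# Pauli matrices standing for e1, e2, e3; matrix multiplication is the
# geometric product in this representation.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def slash(v):
    """sigma . v for a 3-component real vector v (illustrative helper)."""
    return v[0]*sx + v[1]*sy + v[2]*sz

a = np.array([1.0, -2.0, 0.5])
b = np.array([3.0, 0.0, 4.0])

prod = slash(a) @ slash(b)

# Scalar (grade-0) part: the coefficient of the identity equals a . b.
scalar_part = np.trace(prod).real / 2
assert np.isclose(scalar_part, np.dot(a, b))

# Bivector (grade-2) part: in this representation it equals 1j * sigma.(a x b).
assert np.allclose(prod - scalar_part * np.eye(2), 1j * slash(np.cross(a, b)))
```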
Spacetime model.
In physics, the main applications are the geometric algebra of Minkowski 3+1 spacetime, &NoBreak;&NoBreak;, called spacetime algebra (STA), or less commonly, &NoBreak;&NoBreak;, interpreted as the algebra of physical space (APS).
While in STA, points of spacetime are represented simply by vectors, in APS, points of &NoBreak;&NoBreak;-dimensional spacetime are instead represented by paravectors, a three-dimensional vector (space) plus a one-dimensional scalar (time).
In spacetime algebra the electromagnetic field tensor has a bivector representation &NoBreak;&NoBreak;. Here, the formula_143 is the unit pseudoscalar (or four-dimensional volume element), formula_144 is the unit vector in time direction, and formula_145 and formula_104 are the classic electric and magnetic field vectors (with a zero time component). Using the four-current &NoBreak;}&NoBreak;, Maxwell's equations then become
In geometric calculus, juxtaposition of vectors such as in formula_146 indicates the geometric product and can be decomposed into parts as &NoBreak;&NoBreak;. Here formula_72 is the covector derivative in any spacetime and reduces to formula_147 in flat spacetime. formula_148 plays the same role in Minkowski &NoBreak;&NoBreak;-spacetime as formula_147 plays in Euclidean &NoBreak;&NoBreak;-space, and is related to the d'Alembertian by &NoBreak;&NoBreak;. Indeed, given an observer represented by a future-pointing timelike vector formula_144 we have
formula_149
formula_150
Boosts in this Lorentzian metric space have the same expression formula_151 as rotations in Euclidean space, where formula_152 is the bivector generated by the time and the space directions involved, whereas in the Euclidean case it is the bivector generated by the two space directions, strengthening the "analogy" almost to an identity.
The Dirac matrices are a representation of &NoBreak;&NoBreak;, showing the equivalence with matrix representations used by physicists.
Homogeneous models.
Homogeneous models generally refer to a projective representation in which the elements of the one-dimensional subspaces of a vector space represent points of a geometry.
In a geometric algebra of a space of formula_8 dimensions, the rotors represent a set of transformations with formula_153 degrees of freedom, corresponding to rotations – for example, formula_11 when formula_154 and formula_155 when &NoBreak;&NoBreak;. Geometric algebra is often used to model a projective space, i.e. as a "homogeneous model": a point, line, plane, etc. is represented by an equivalence class of elements of the algebra that differ by an invertible scalar factor.
The rotors in a space of dimension formula_156 have formula_157 degrees of freedom, the same as the number of degrees of freedom in the rotations and translations combined for an &NoBreak;&NoBreak;-dimensional space.
This is the case in "Projective Geometric Algebra" (PGA), which is used to represent Euclidean isometries in Euclidean geometry (thereby covering the large majority of engineering applications of geometry). In this model, a degenerate dimension is added to the three Euclidean dimensions to form the algebra &NoBreak;&NoBreak;. With a suitable identification of subspaces to represent points, lines and planes, the versors of this algebra represent all proper Euclidean isometries, which are always screw motions in 3-dimensional space, along with all improper Euclidean isometries, which includes reflections, rotoreflections, transflections, and point reflections.
PGA combines formula_158 with a "complement" operator to obtain join, meet, distance, and angle formulas. In effect, the complement switches basis vectors that are present and absent in the expression of each term of the algebraic representation. For example, in the PGA of 3-dimensional space, the complement of the line formula_159 is the line &NoBreak;}&NoBreak;, because formula_160 and formula_161 are basis elements that are "not" contained in formula_159 but "are" contained in &NoBreak;}&NoBreak;. In the PGA of 2-dimensional space, the complement of formula_159 is &NoBreak;&NoBreak;, since there is no formula_161 element.
PGA is a widely used system that combines geometric algebra with homogeneous representations in geometry, but there exist several other such systems. The conformal model discussed below is homogeneous, as is "Conic Geometric Algebra", and see "Plane-based geometric algebra" for discussion of homogeneous models of elliptic and hyperbolic geometry compared with the Euclidean geometry derived from PGA.
Conformal model.
Working within GA, Euclidean space formula_162 (along with a conformal point at infinity) is embedded projectively in the CGA formula_43 via the identification of Euclidean points with 1D subspaces in the 4D null cone of the 5D CGA vector subspace. This allows all conformal transformations to be performed as rotations and reflections and is covariant, extending incidence relations of projective geometry to round objects such as circles and spheres.
Specifically, we add orthogonal basis vectors formula_163 and formula_164 such that formula_165 and formula_166 to the basis of the vector space that generates formula_41 and identify null vectors
formula_167 as the point at the origin and
formula_168 as a conformal point at infinity (see "Compactification"), giving
formula_169
(Some authors set formula_170 and &NoBreak;&NoBreak;.) This procedure has some similarities to the procedure for working with homogeneous coordinates in projective geometry, and in this case allows the modeling of Euclidean transformations of formula_171 as orthogonal transformations of a subset of &NoBreak;}&NoBreak;.
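A coordinate-level sketch of these null vectors and their inner products, using only the metric of G(4,1). The embedding function up for a general Euclidean point is the standard conformal embedding and is stated here as an assumption, since the text above only fixes the origin and the point at infinity:

```python
import numpy as np

# Vectors of G(4,1) as 5-component coordinate arrays over the orthogonal
# basis (e1, e2, e3, e+, e-) with metric diag(1, 1, 1, 1, -1).
metric = np.diag([1.0, 1.0, 1.0, 1.0, -1.0])

def ip(u, v):
    """Inner product of two vectors with respect to the G(4,1) metric."""
    return u @ metric @ v

e_plus  = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
e_minus = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
n_o   = 0.5 * (e_minus - e_plus)          # point at the origin
n_inf = e_minus + e_plus                  # conformal point at infinity

assert np.isclose(ip(n_o, n_o), 0.0)      # null
assert np.isclose(ip(n_inf, n_inf), 0.0)  # null
assert np.isclose(ip(n_inf, n_o), -1.0)   # normalisation used in the text

# Assumed standard conformal embedding of a Euclidean point x:
def up(x):
    return n_o + np.array([x[0], x[1], x[2], 0.0, 0.0]) + 0.5 * np.dot(x, x) * n_inf

X = up(np.array([1.0, 2.0, 3.0]))
assert np.isclose(ip(X, X), 0.0)          # Euclidean points map to null vectors
```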
A fast changing and fluid area of GA, CGA is also being investigated for applications to relativistic physics.
Table of models.
Note in this list that &NoBreak;&NoBreak; and &NoBreak;&NoBreak; can be swapped and the same name applies, with "relatively" little change occurring; see sign convention. For example, formula_172 and formula_173 are "both" referred to as Spacetime Algebra.
Geometric interpretation in the vector space model.
Projection and rejection.
For any vector formula_7 and any invertible vector &NoBreak;&NoBreak;,
formula_175
where the projection of formula_7 onto formula_176 (or the parallel part) is
formula_177
and the rejection of formula_7 from formula_176 (or the orthogonal part) is
formula_178
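A numerical sketch of the projection and rejection for ordinary vectors in R^3 with the dot product; this matches the GA formulas above because m^{-1} = m / (m . m), and the particular vectors are illustrative:

```python
import numpy as np

a = np.array([2.0, -1.0, 3.0])
m = np.array([1.0, 1.0, 0.0])

a_par  = (np.dot(a, m) / np.dot(m, m)) * m   # projection of a onto m
a_perp = a - a_par                           # rejection of a from m

assert np.allclose(a_par + a_perp, a)
assert np.isclose(np.dot(a_perp, m), 0.0)    # rejection is orthogonal to m
assert np.allclose(np.cross(a_par, m), 0.0)  # projection is parallel to m
```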
Using the concept of a &NoBreak;&NoBreak;-blade &NoBreak;&NoBreak; as representing a subspace of &NoBreak;&NoBreak; and every multivector ultimately being expressed in terms of vectors, this generalizes to projection of a general multivector onto any invertible &NoBreak;&NoBreak;-blade &NoBreak;&NoBreak; as
formula_179
with the rejection being defined as
formula_180
The projection and rejection generalize to null blades formula_104 by replacing the inverse formula_181 with the pseudoinverse formula_182 with respect to the contractive product. The outcome of the projection coincides in both cases for non-null blades. For null blades &NoBreak;&NoBreak;, the definition of the projection given here with the first contraction rather than the second being onto the pseudoinverse should be used, as only then is the result necessarily in the subspace represented by &NoBreak;&NoBreak;.
The projection generalizes through linearity to general multivectors &NoBreak;&NoBreak;. The projection is not linear in &NoBreak;&NoBreak; and does not generalize to objects &NoBreak;&NoBreak; that are not blades.
Reflection.
Simple reflections in a hyperplane are readily expressed in the algebra through conjugation with a single vector. These serve to generate the group of general rotoreflections and rotations.
The reflection formula_183 of a vector formula_174 along a vector &NoBreak;&NoBreak;, or equivalently in the hyperplane orthogonal to &NoBreak;&NoBreak;, is the same as negating the component of a vector parallel to &NoBreak;&NoBreak;. The result of the reflection will be
formula_184
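A numerical check that the sandwich product above reproduces the classical reflection formula, using the Pauli-matrix representation of G(3,0); the helper name vec and the sample vectors are illustrative:

```python
import numpy as np

# Pauli-matrix representation of G(3,0); matrix multiplication is the
# geometric product.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)

def vec(v):
    """Matrix representing the vector v = v[0] e1 + v[1] e2 + v[2] e3."""
    return v[0]*e1 + v[1]*e2 + v[2]*e3

c = np.array([1.0, 2.0, -1.0])
m = np.array([0.0, 3.0, 4.0])

M = vec(m)
M_inv = M / np.dot(m, m)                 # m^{-1} = m / m^2
reflected = -M @ vec(c) @ M_inv          # -m c m^{-1}

# Classical formula: negate the component of c parallel to m.
c_reflected = c - 2 * (np.dot(c, m) / np.dot(m, m)) * m
assert np.allclose(reflected, vec(c_reflected))
```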
This is not the most general operation that may be regarded as a reflection when the dimension &NoBreak;&NoBreak;. A general reflection may be expressed as the composite of any odd number of single-axis reflections. Thus, a general reflection formula_185 of a vector formula_7 may be written
formula_186
where
formula_187 and formula_188
If we define the reflection along a non-null vector formula_176 of the product of vectors as the reflection of every vector in the product along the same vector, we get for any product of an odd number of vectors that, by way of example,
formula_189
and for the product of an even number of vectors that
formula_190
Using the concept of every multivector ultimately being expressed in terms of vectors, the reflection of a general multivector formula_60 using any reflection versor formula_191 may be written
formula_192
where formula_193 is the automorphism of reflection through the origin of the vector space (&NoBreak;&NoBreak;) extended through linearity to the whole algebra.
Rotations.
If we have a product of vectors formula_195 then we denote the reverse as
formula_196
As an example, assume that formula_197 we get
formula_198
Scaling formula_199 so that formula_200 then
formula_201
so formula_202 leaves the length of formula_194 unchanged. We can also show that
formula_203
so the transformation formula_202 preserves both length and angle. It therefore can be identified as a rotation or rotoreflection; formula_199 is called a rotor if it is a proper rotation (as it is if it can be expressed as a product of an even number of vectors) and is an instance of what is known in GA as a "versor".
There is a general method for rotating a vector involving the formation of a multivector of the form formula_204 that produces a rotation formula_205 in the plane and with the orientation defined by a &NoBreak;&NoBreak;-blade &NoBreak;&NoBreak;.
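A numerical sketch of such a rotor acting on a vector, in the Pauli-matrix representation of G(3,0); the plane, angle, and vector chosen are illustrative:

```python
import numpy as np

# Rotor R = exp(-B*theta/2) with B = e1e2, acting by the sandwich R v R~.
# Since B^2 = -1, the exponential reduces to cosine and sine.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2)

B = e1 @ e2                          # unit bivector of the e1-e2 plane
theta = 0.7
R  = np.cos(theta/2) * I2 - np.sin(theta/2) * B     # exp(-B theta/2)
Rt = np.cos(theta/2) * I2 + np.sin(theta/2) * B     # reverse of R

assert np.allclose(R @ Rt, I2)       # R R~ = 1, a unit rotor

v  = e1                              # rotate the vector e1
v_rotated = R @ v @ Rt
# The result is e1 rotated by theta in the e1-e2 plane.
assert np.allclose(v_rotated, np.cos(theta)*e1 + np.sin(theta)*e2)
```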
Rotors are a generalization of quaternions to &NoBreak;&NoBreak;-dimensional spaces.
Examples and applications.
Hypervolume of a parallelotope spanned by vectors.
For vectors &NoBreak;&NoBreak; and &NoBreak;&NoBreak; spanning a parallelogram we have
formula_206
with the result that &NoBreak;&NoBreak; is linear in the product of the "altitude" and the "base" of the parallelogram, that is, its area.
Similar interpretations are true for any number of vectors spanning an &NoBreak;&NoBreak;-dimensional parallelotope; the exterior product of vectors &NoBreak;&NoBreak;, that is &NoBreak;&NoBreak;, has a magnitude equal to the volume of the &NoBreak;&NoBreak;-parallelotope. An &NoBreak;&NoBreak;-vector does not necessarily have a shape of a parallelotope – this is a convenient visualization. It could be any shape, although the volume equals that of the parallelotope.
Intersection of a line and a plane.
We may define the line parametrically by &NoBreak;&NoBreak;, where &NoBreak;&NoBreak; and &NoBreak;&NoBreak; are position vectors for points P and T and &NoBreak;&NoBreak; is the direction vector for the line.
Then
formula_207 and formula_208
so
formula_209
and
formula_210
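A numerical sketch of this intersection formula in R^3, where the trivectors B ∧ x are evaluated as determinants; the points, directions, and the helper name wedge3 are illustrative:

```python
import numpy as np

# The plane through q is represented by the bivector B = b1 ^ b2; in 3D the
# trivector B ^ x is a pseudoscalar whose coefficient is det[b1, b2, x], so
# the ratio (B ^ (q - t)) / (B ^ v) is a ratio of determinants.
b1 = np.array([1.0, 0.0, 1.0])
b2 = np.array([0.0, 1.0, 1.0])
q  = np.array([0.0, 0.0, 2.0])      # a point on the plane
t  = np.array([3.0, 1.0, 0.0])      # a point on the line
v  = np.array([1.0, -1.0, 2.0])     # direction vector of the line

def wedge3(x):
    """Coefficient of the pseudoscalar B ^ x."""
    return np.linalg.det(np.column_stack([b1, b2, x]))

alpha = wedge3(q - t) / wedge3(v)
p = t + alpha * v                    # intersection point

# p lies on the plane through q spanned by b1 and b2:
assert np.isclose(wedge3(p - q), 0.0)
```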
Rotating systems.
A rotational quantity such as torque or angular momentum is described in geometric algebra as a bivector. Suppose a circular path in an arbitrary plane containing orthonormal vectors &NoBreak;}&NoBreak; and &NoBreak;}&NoBreak; is parameterized by angle.
formula_211
By designating the unit bivector of this plane as the imaginary number
formula_212
formula_213
this path vector can be conveniently written in complex exponential form
formula_214
and the derivative with respect to angle is
formula_215
For example, torque is generally defined as the magnitude of the perpendicular force component times distance, or work per unit angle. Thus the torque, the rate of change of work &NoBreak;&NoBreak; with respect to angle, due to a force &NoBreak;&NoBreak;, is
formula_216
Rotational quantities are represented in vector calculus in three dimensions using the cross product. Together with a choice of an oriented volume form &NoBreak;&NoBreak;, these can be related to the exterior product with its more natural geometric interpretation of such quantities as bivectors by using the dual relationship
formula_217
Unlike the cross product description of torque, &NoBreak;&NoBreak;, the geometric algebra description does not introduce a vector in the normal direction; such a vector does not exist in two dimensions and is not unique in more than three dimensions. The unit bivector describes the plane and the orientation of the rotation, and the sense of the rotation is relative to the angle between the vectors &NoBreak;}&NoBreak; and &NoBreak;}&NoBreak;.
Geometric calculus.
Geometric calculus extends the formalism to include differentiation and integration including differential geometry and differential forms.
Essentially, the vector derivative is defined so that the GA version of Green's theorem is true,
formula_218
and then one can write
formula_219
as a geometric product, effectively generalizing Stokes' theorem (including the differential form version of it).
In 1D when &NoBreak;&NoBreak; is a curve with endpoints &NoBreak;&NoBreak; and &NoBreak;&NoBreak;, then
formula_218
reduces to
formula_220
or the fundamental theorem of integral calculus.
Also developed are the concept of vector manifold and geometric integration theory (which generalizes differential forms).
History.
Before the 20th century.
Although the connection of geometry with algebra dates back at least to Euclid's "Elements" in the third century B.C. (see Greek geometric algebra), GA in the sense used in this article was not developed until 1844, when it was used in a "systematic way" to describe the geometrical properties and "transformations" of a space. In that year, Hermann Grassmann introduced the idea of a geometrical algebra in full generality as a certain calculus (analogous to the propositional calculus) that encoded all of the geometrical information of a space. Grassmann's algebraic system could be applied to a number of different kinds of spaces, the chief among them being Euclidean space, affine space, and projective space. Following Grassmann, in 1878 William Kingdon Clifford examined Grassmann's algebraic system alongside the quaternions of William Rowan Hamilton. From his point of view, the quaternions described certain "transformations" (which he called "rotors"), whereas Grassmann's algebra described certain "properties" (or "Strecken", such as length, area, and volume). His contribution was to define a new product – the "geometric product" – on an existing Grassmann algebra, which realized the quaternions as living within that algebra. Subsequently, Rudolf Lipschitz in 1886 generalized Clifford's interpretation of the quaternions and applied them to the geometry of rotations in &NoBreak;&NoBreak; dimensions. Later these developments would lead other 20th-century mathematicians to formalize and explore the properties of the Clifford algebra.
Nevertheless, another revolutionary development of the 19th-century would completely overshadow the geometric algebras: that of vector analysis, developed independently by Josiah Willard Gibbs and Oliver Heaviside. Vector analysis was motivated by James Clerk Maxwell's studies of electromagnetism, and specifically the need to express and manipulate conveniently certain differential equations. Vector analysis had a certain intuitive appeal compared to the rigors of the new algebras. Physicists and mathematicians alike readily adopted it as their geometrical toolkit of choice, particularly following the influential 1901 textbook "Vector Analysis" by Edwin Bidwell Wilson, following lectures of Gibbs.
In more detail, there have been three approaches to geometric algebra: quaternionic analysis, initiated by Hamilton in 1843 and geometrized as rotors by Clifford in 1878; geometric algebra, initiated by Grassmann in 1844; and vector analysis, developed out of quaternionic analysis in the late 19th century by Gibbs and Heaviside. The legacy of quaternionic analysis in vector analysis can be seen in the use of &NoBreak;&NoBreak;, &NoBreak;&NoBreak;, &NoBreak;&NoBreak; to indicate the basis vectors of &NoBreak;&NoBreak;: they are thought of as the purely imaginary quaternions. From the perspective of geometric algebra, the even subalgebra of the spacetime algebra is isomorphic to the GA of 3D Euclidean space, and the quaternions are isomorphic to the even subalgebra of the GA of 3D Euclidean space, which unifies the three approaches.
20th century and present.
Progress on the study of Clifford algebras quietly advanced through the twentieth century, although largely due to the work of abstract algebraists such as Élie Cartan, Hermann Weyl and Claude Chevalley. The "geometrical" approach to geometric algebras has seen a number of 20th-century revivals. In mathematics, Emil Artin's "Geometric Algebra" discusses the algebra associated with each of a number of geometries, including affine geometry, projective geometry, symplectic geometry, and orthogonal geometry. In physics, geometric algebras have been revived as a "new" way to do classical mechanics and electromagnetism, together with more advanced topics such as quantum mechanics and gauge theory. David Hestenes reinterpreted the Pauli and Dirac matrices as vectors in ordinary space and spacetime, respectively, and has been a primary contemporary advocate for the use of geometric algebra.
In computer graphics and robotics, geometric algebras have been revived in order to efficiently represent rotations and other transformations. For applications of GA in robotics (screw theory, kinematics and dynamics using versors), computer vision, control and neural computing (geometric learning) see Bayro (2010).
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
References and further reading.
<templatestyles src="Refbegin/styles.css" />
"Arranged chronologically"
External links.
<templatestyles src="Refbegin/styles.css" />
English translations of early books and papers
Research groups | [
{
"math_id": 0,
"text": "a^2 \\ne 0 "
},
{
"math_id": 1,
"text": "a^{-1}"
},
{
"math_id": 2,
"text": "u"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "\\textstyle\\frac{1}{2}(1 + u)"
},
{
"math_id": 5,
"text": "\\R"
},
{
"math_id": 6,
"text": "\\R \\to \\mathcal{G}(p,q)"
},
{
"math_id": 7,
"text": "a"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "1"
},
{
"math_id": 10,
"text": "2"
},
{
"math_id": 11,
"text": "3"
},
{
"math_id": 12,
"text": "ab = \\frac{1}{2} (ab + ba) + \\frac{1}{2} (ab - ba) ."
},
{
"math_id": 13,
"text": "a \\cdot b := g(a,b),"
},
{
"math_id": 14,
"text": "\\frac{1}{2}(ab + ba) = \\frac{1}{2} \\left((a + b)^2 - a^2 - b^2\\right) = a \\cdot b ."
},
{
"math_id": 15,
"text": "a \\wedge b := \\frac{1}{2}(ab - ba) = -(b \\wedge a) ."
},
{
"math_id": 16,
"text": "ab=a \\cdot b + a \\wedge b "
},
{
"math_id": 17,
"text": "b"
},
{
"math_id": 18,
"text": "\\begin{align}\n 1 \\wedge a_i &= a_i \\wedge 1 = a_i \\\\\n a_1 \\wedge a_2\\wedge\\cdots\\wedge a_r &= \\frac{1}{r!}\\sum_{\\sigma\\in\\mathfrak{S}_r} \\operatorname{sgn}(\\sigma) a_{\\sigma(1)}a_{\\sigma(2)} \\cdots a_{\\sigma(r)},\n\\end{align}"
},
{
"math_id": 19,
"text": "\\operatorname{sgn}(\\sigma)"
},
{
"math_id": 20,
"text": "a_i"
},
{
"math_id": 21,
"text": " a_1 a_2 a_3 \\dots a_n = \\sum^{[\\frac{n}2]}_{i=0} \\sum_{\\mu\\in{}\\mathcal{C}}\n(-1)^k \\operatorname{Pf}(a_{\\mu_1}\\cdot a_{\\mu_2},\\dots,a_{\\mu_{2i-1}} \\cdot a_{\\mu_{2i}})\na_{\\mu_{2i+1}}\\land\\dots\\land a_{\\mu_n}"
},
{
"math_id": 22,
"text": "\\operatorname{Pf}(A)"
},
{
"math_id": 23,
"text": "\\mathcal{C} = \\binom{n}{2i}"
},
{
"math_id": 24,
"text": "r"
},
{
"math_id": 25,
"text": "\\{a_1,\\ldots,a_r\\}"
},
{
"math_id": 26,
"text": "[\\mathbf{A}]_{ij} = a_i \\cdot a_j"
},
{
"math_id": 27,
"text": "\\mathbf{A}"
},
{
"math_id": 28,
"text": "\\mathbf{D}"
},
{
"math_id": 29,
"text": "\\mathbf{O}"
},
{
"math_id": 30,
"text": "\\sum_{k,l}[\\mathbf{O}]_{ik}[\\mathbf{A}]_{kl}[\\mathbf{O}^{\\mathrm{T}}]_{lj}=\\sum_{k,l}[\\mathbf{O}]_{ik}[\\mathbf{O}]_{jl}[\\mathbf{A}]_{kl}=[\\mathbf{D}]_{ij}"
},
{
"math_id": 31,
"text": "e_i=\\sum_j[\\mathbf{O}]_{ij}a_j"
},
{
"math_id": 32,
"text": "e_i\\cdot e_j=[\\mathbf{D}]_{ij}"
},
{
"math_id": 33,
"text": "\\{e_1, \\ldots, e_r\\}"
},
{
"math_id": 34,
"text": "e_i \\ne e_j"
},
{
"math_id": 35,
"text": "\\begin{array}{rl}\ne_1e_2\\cdots e_r &= e_1 \\wedge e_2 \\wedge \\cdots \\wedge e_r \\\\\n&= \\left(\\sum_j [\\mathbf{O}]_{1j}a_j\\right) \\wedge \\left(\\sum_j [\\mathbf{O}]_{2j}a_j \\right) \\wedge \\cdots \\wedge \\left(\\sum_j [\\mathbf{O}]_{rj}a_j\\right) \\\\\n&= (\\det \\mathbf{O}) a_1 \\wedge a_2 \\wedge \\cdots \\wedge a_r \n\\end{array}"
},
{
"math_id": 36,
"text": "\\widehat{e_i}=\\frac{1}{\\sqrt{|e_i \\cdot e_i|}}e_i,"
},
{
"math_id": 37,
"text": "+1"
},
{
"math_id": 38,
"text": "p"
},
{
"math_id": 39,
"text": "q"
},
{
"math_id": 40,
"text": "-1"
},
{
"math_id": 41,
"text": "\\mathcal{G}(3,0)"
},
{
"math_id": 42,
"text": "\\mathcal{G}(1,3)"
},
{
"math_id": 43,
"text": "\\mathcal{G}(4,1)"
},
{
"math_id": 44,
"text": "\\{1, e_1, e_2, e_3, e_1e_2, e_2e_3, e_3e_1, e_1e_2e_3\\}"
},
{
"math_id": 45,
"text": "2^n"
},
{
"math_id": 46,
"text": "\\{ B_i \\mid i \\in S \\}"
},
{
"math_id": 47,
"text": "S"
},
{
"math_id": 48,
"text": " \\left( \\sum_i \\alpha_i B_i \\right) \\left( \\sum_j \\beta_j B_j \\right) = \\sum_{i,j} \\alpha_i\\beta_j B_i B_j ."
},
{
"math_id": 49,
"text": "k"
},
{
"math_id": 50,
"text": " e_1 \\wedge e_2 + e_3 \\wedge e_4 "
},
{
"math_id": 51,
"text": "\\mathcal{G}(4,0)"
},
{
"math_id": 52,
"text": " V \\to V : a \\mapsto RaR^{-1}"
},
{
"math_id": 53,
"text": "\\mathcal{G}(V) \\to \\mathcal{G}(V) : A \\mapsto RAR^{-1}."
},
{
"math_id": 54,
"text": "\\mathcal{G}^\\times"
},
{
"math_id": 55,
"text": "\\{ v_1 v_2 \\cdots v_k \\in \\mathcal{G} \\mid v_i \\in V^\\times\\}"
},
{
"math_id": 56,
"text": "\\Gamma"
},
{
"math_id": 57,
"text": "0"
},
{
"math_id": 58,
"text": "\\{e_1,\\ldots,e_n\\}"
},
{
"math_id": 59,
"text": "\\{e_ie_j\\mid 1\\leq i<j\\leq n\\}"
},
{
"math_id": 60,
"text": "A"
},
{
"math_id": 61,
"text": " A = \\sum_{r=0}^{n} \\langle A \\rangle _r "
},
{
"math_id": 62,
"text": " a b = a \\cdot b + a \\wedge b = \\langle a b \\rangle_0 + \\langle a b \\rangle_2"
},
{
"math_id": 63,
"text": "\\langle a b \\rangle_0=a\\cdot b"
},
{
"math_id": 64,
"text": "\\langle a b \\rangle_2 = a\\wedge b"
},
{
"math_id": 65,
"text": "i"
},
{
"math_id": 66,
"text": " A^{[0]} = \\langle A \\rangle _0 + \\langle A \\rangle _2 + \\langle A \\rangle _4 + \\cdots "
},
{
"math_id": 67,
"text": " A^{[1]} = \\langle A \\rangle _1 + \\langle A \\rangle _3 + \\langle A \\rangle _5 + \\cdots "
},
{
"math_id": 68,
"text": "(n-1)"
},
{
"math_id": 69,
"text": "\\mathcal{G}^{[0]}(2,0) \\cong \\mathcal{G}(0,1)"
},
{
"math_id": 70,
"text": "W"
},
{
"math_id": 71,
"text": "\\{b_1,b_2,\\ldots, b_k\\}"
},
{
"math_id": 72,
"text": "D"
},
{
"math_id": 73,
"text": "I"
},
{
"math_id": 74,
"text": "I'"
},
{
"math_id": 75,
"text": "I = \\pm I'"
},
{
"math_id": 76,
"text": "\\alpha I"
},
{
"math_id": 77,
"text": "\\mathcal{G}(n,0)"
},
{
"math_id": 78,
"text": "\\R^n"
},
{
"math_id": 79,
"text": "\\{ b_1, b_2 \\}"
},
{
"math_id": 80,
"text": "I = b_1 b_2"
},
{
"math_id": 81,
"text": "b_1"
},
{
"math_id": 82,
"text": "b_2"
},
{
"math_id": 83,
"text": " \\{ \\alpha_0 + \\alpha_1 I \\mid \\alpha_i \\in \\R \\} "
},
{
"math_id": 84,
"text": "e_i"
},
{
"math_id": 85,
"text": " \\{ e_3 e_2 , e_1 e_3 , e_2 e_1 \\} ."
},
{
"math_id": 86,
"text": "j"
},
{
"math_id": 87,
"text": "C \\wedge D := \\sum_{r,s}\\langle \\langle C \\rangle_r \\langle D \\rangle_s \\rangle_{r+s} "
},
{
"math_id": 88,
"text": "C \\times D := \\tfrac{1}{2}(CD-DC) "
},
{
"math_id": 89,
"text": "C \\vee D := ((CI^{-1}) \\wedge (DI^{-1}))I "
},
{
"math_id": 90,
"text": " C \\;\\rfloor\\; D := \\sum_{r,s}\\langle \\langle C\\rangle_r \\langle D \\rangle_{s} \\rangle_{s-r} "
},
{
"math_id": 91,
"text": " C \\;\\lfloor\\; D := \\sum_{r,s}\\langle \\langle C\\rangle_r \\langle D \\rangle_{s} \\rangle_{r-s} "
},
{
"math_id": 92,
"text": " C * D := \\sum_{r,s}\\langle \\langle C \\rangle_r \\langle D \\rangle_s \\rangle_{0} "
},
{
"math_id": 93,
"text": " C \\bullet D := \\sum_{r,s}\\langle \\langle C\\rangle_r \\langle D \\rangle_{s} \\rangle_{|s-r|} "
},
{
"math_id": 94,
"text": " C \\;\\rfloor\\; D = ( C \\wedge ( D I^{-1} ) ) I "
},
{
"math_id": 95,
"text": " C \\;\\lfloor\\; D = I ( ( I^{-1} C) \\wedge D ) "
},
{
"math_id": 96,
"text": " ( A \\wedge B ) * C = A * ( B \\;\\rfloor\\; C ) "
},
{
"math_id": 97,
"text": " C * ( B \\wedge A ) = ( C \\;\\lfloor\\; B ) * A "
},
{
"math_id": 98,
"text": " A \\;\\rfloor\\; ( B \\;\\rfloor\\; C ) = ( A \\wedge B ) \\;\\rfloor\\; C "
},
{
"math_id": 99,
"text": " ( A \\;\\rfloor\\; B ) \\;\\lfloor\\; C = A \\;\\rfloor\\; ( B \\;\\lfloor\\; C ) ."
},
{
"math_id": 100,
"text": " ab = a \\cdot b + a \\wedge b "
},
{
"math_id": 101,
"text": " aB = a \\;\\rfloor\\; B + a \\wedge B"
},
{
"math_id": 102,
"text": " \\mathcal{P}_b (a) = (a \\cdot b^{-1})b "
},
{
"math_id": 103,
"text": " \\mathcal{P}_B (A) = (A \\;\\rfloor\\; B^{-1}) \\;\\rfloor\\; B"
},
{
"math_id": 104,
"text": "B"
},
{
"math_id": 105,
"text": "\\{ e_1 , \\ldots , e_n \\}"
},
{
"math_id": 106,
"text": "V^{*}"
},
{
"math_id": 107,
"text": "\\{ e^1 , \\ldots , e^n \\}"
},
{
"math_id": 108,
"text": "e^i \\cdot e_j = \\delta^i{}_j,"
},
{
"math_id": 109,
"text": "\\delta"
},
{
"math_id": 110,
"text": "I = e_1 \\wedge \\cdots \\wedge e_n"
},
{
"math_id": 111,
"text": "e^i=(-1)^{i-1}(e_1 \\wedge \\cdots \\wedge \\check{e}_i \\wedge \\cdots \\wedge e_n) I^{-1},"
},
{
"math_id": 112,
"text": "\\check{e}_i"
},
{
"math_id": 113,
"text": "a^i"
},
{
"math_id": 114,
"text": "a^i=a\\cdot e^i\\ ,"
},
{
"math_id": 115,
"text": "a=\\sum_i a^i e_i\\ ."
},
{
"math_id": 116,
"text": "a_i=a\\cdot e_i\\ ,"
},
{
"math_id": 117,
"text": "a=\\sum_i a_i e^i\\ ."
},
{
"math_id": 118,
"text": "J=(j_1,\\dots ,j_n)\\ ,"
},
{
"math_id": 119,
"text": "e_J=e_{j_1}\\wedge e_{j_2}\\wedge\\cdots\\wedge e_{j_n}\\ ."
},
{
"math_id": 120,
"text": "e^J=e^{j_n}\\wedge\\cdots \\wedge e^{j_2}\\wedge e^{j_1}\\ ."
},
{
"math_id": 121,
"text": "e^J * e_K=\\delta^J_K\\ ,"
},
{
"math_id": 122,
"text": "*"
},
{
"math_id": 123,
"text": "A^{ij\\cdots k}=(e^k\\wedge\\cdots\\wedge e^j\\wedge e^i)*A\\ ,"
},
{
"math_id": 124,
"text": "A=\\sum_{i<j<\\cdots<k} A^{ij\\cdots k} e_i\\wedge e_j\\wedge\\cdots \\wedge e_k\\ ."
},
{
"math_id": 125,
"text": "A_{ij\\cdots k}=(e_k\\wedge\\cdots\\wedge e_j\\wedge e_i)*A\\ ,"
},
{
"math_id": 126,
"text": "A=\\sum_{i<j<\\cdots<k} A_{ij\\cdots k} e^i\\wedge e^j\\wedge\\cdots \\wedge e^k\\ ."
},
{
"math_id": 127,
"text": "2^n \\times 1"
},
{
"math_id": 128,
"text": "2^n \\times 2^n"
},
{
"math_id": 129,
"text": "f"
},
{
"math_id": 130,
"text": "\\underline{\\mathsf{f}}(a_1 \\wedge a_2 \\wedge \\cdots \\wedge a_r) = f(a_1) \\wedge f(a_2) \\wedge \\cdots \\wedge f(a_r)"
},
{
"math_id": 131,
"text": "\\mathcal{G}(2,0)"
},
{
"math_id": 132,
"text": "P"
},
{
"math_id": 133,
"text": " Z = e_1 P = e_1 ( x e_1 + y e_2) = x (1) + y ( e_1 e_2) ,"
},
{
"math_id": 134,
"text": "i \\mapsto e_1e_2"
},
{
"math_id": 135,
"text": "(e_1 e_2)^2 = e_1 e_2 e_1 e_2 = -e_1 e_1 e_2 e_2 = -1 ."
},
{
"math_id": 136,
"text": "\\{1, e_2 e_3, e_3 e_1, e_1 e_2 \\}"
},
{
"math_id": 137,
"text": "j \\mapsto -e_3 e_1"
},
{
"math_id": 138,
"text": "\\begin{align}\n e_1 = \\sigma_1 = \\sigma_x &=\n \\begin{pmatrix}\n 0 & 1 \\\\\n 1 & 0\n \\end{pmatrix} \\\\\n e_2 = \\sigma_2 = \\sigma_y &=\n \\begin{pmatrix}\n 0 & -i \\\\\n i & 0\n \\end{pmatrix} \\\\\n e_3 =\\sigma_3 = \\sigma_z &=\n \\begin{pmatrix}\n 1 & 0 \\\\\n 0 & -1\n \\end{pmatrix} \\,.\n\\end{align}"
},
{
"math_id": 139,
"text": "\\sigma = \\sigma_1 e_1 + \\sigma_2 e_2 + \\sigma_3 e_3"
},
{
"math_id": 140,
"text": " a "
},
{
"math_id": 141,
"text": " b "
},
{
"math_id": 142,
"text": "(\\sigma \\cdot a)(\\sigma \\cdot b) = a \\cdot b + a \\wedge b "
},
{
"math_id": 143,
"text": "i = \\gamma_0 \\gamma_1 \\gamma_2 \\gamma_3"
},
{
"math_id": 144,
"text": "\\gamma_0"
},
{
"math_id": 145,
"text": "E"
},
{
"math_id": 146,
"text": "DF"
},
{
"math_id": 147,
"text": "\\nabla"
},
{
"math_id": 148,
"text": "\\bigtriangledown"
},
{
"math_id": 149,
"text": "\\gamma_0\\cdot\\bigtriangledown=\\frac{1}{c}\\frac{\\partial}{\\partial t}"
},
{
"math_id": 150,
"text": "\\gamma_0\\wedge\\bigtriangledown=\\nabla"
},
{
"math_id": 151,
"text": "e^{{\\beta}}"
},
{
"math_id": 152,
"text": "{\\beta}"
},
{
"math_id": 153,
"text": "n(n-1)/2"
},
{
"math_id": 154,
"text": "n=3"
},
{
"math_id": 155,
"text": "6"
},
{
"math_id": 156,
"text": "n+1"
},
{
"math_id": 157,
"text": "n(n-1)/2+n"
},
{
"math_id": 158,
"text": "\\mathcal{G}(3,0,1)"
},
{
"math_id": 159,
"text": "\\boldsymbol{e}_{12}"
},
{
"math_id": 160,
"text": "\\boldsymbol{e}_0"
},
{
"math_id": 161,
"text": "\\boldsymbol{e}_3"
},
{
"math_id": 162,
"text": "\\mathbb E^3"
},
{
"math_id": 163,
"text": "e_+"
},
{
"math_id": 164,
"text": "e_-"
},
{
"math_id": 165,
"text": "e_+^2 = +1"
},
{
"math_id": 166,
"text": "e_-^2 = -1"
},
{
"math_id": 167,
"text": "n_\\text{o} = \\tfrac{1}{2}(e_- - e_+)"
},
{
"math_id": 168,
"text": "n_\\infty = e_- + e_+"
},
{
"math_id": 169,
"text": "n_\\infty \\cdot n_\\text{o} = -1 ."
},
{
"math_id": 170,
"text": "e_4 = n_\\text{o}"
},
{
"math_id": 171,
"text": "\\mathbb{R}^3"
},
{
"math_id": 172,
"text": "\\mathcal{G}(3, 1, 0)"
},
{
"math_id": 173,
"text": "\\mathcal{G}(1, 3, 0)"
},
{
"math_id": 174,
"text": "c"
},
{
"math_id": 175,
"text": " a = amm^{-1} = (a\\cdot m + a \\wedge m)m^{-1} = a_{\\| m} + a_{\\perp m} ,"
},
{
"math_id": 176,
"text": "m"
},
{
"math_id": 177,
"text": " a_{\\| m} = (a \\cdot m)m^{-1} "
},
{
"math_id": 178,
"text": " a_{\\perp m} = a - a_{\\| m} = (a\\wedge m)m^{-1} ."
},
{
"math_id": 179,
"text": " \\mathcal{P}_B (A) = (A \\;\\rfloor\\; B) \\;\\rfloor\\; B^{-1} ,"
},
{
"math_id": 180,
"text": " \\mathcal{P}_B^\\perp (A) = A - \\mathcal{P}_B (A) ."
},
{
"math_id": 181,
"text": "B^{-1}"
},
{
"math_id": 182,
"text": "B^{+}"
},
{
"math_id": 183,
"text": "c'"
},
{
"math_id": 184,
"text": " c' = {-c_{\\| m} + c_{\\perp m}} = {-(c \\cdot m)m^{-1} + (c \\wedge m)m^{-1}}\n= {(-m \\cdot c - m \\wedge c)m^{-1}}\n= -mcm^{-1} "
},
{
"math_id": 185,
"text": "a'"
},
{
"math_id": 186,
"text": " a \\mapsto a' = -MaM^{-1} ,"
},
{
"math_id": 187,
"text": " M = pq \\cdots r"
},
{
"math_id": 188,
"text": " M^{-1} = (pq \\cdots r)^{-1} = r^{-1} \\cdots q^{-1}p^{-1} ."
},
{
"math_id": 189,
"text": " (abc)' = a'b'c' = (-mam^{-1})(-mbm^{-1})(-mcm^{-1}) = -ma(m^{-1}m)b(m^{-1}m)cm^{-1} = -mabcm^{-1} \\,"
},
{
"math_id": 190,
"text": " (abcd)' = a'b'c'd' = (-mam^{-1})(-mbm^{-1})(-mcm^{-1})(-mdm^{-1}) = mabcdm^{-1} ."
},
{
"math_id": 191,
"text": "M"
},
{
"math_id": 192,
"text": " A \\mapsto M\\alpha(A)M^{-1} ,"
},
{
"math_id": 193,
"text": "\\alpha"
},
{
"math_id": 194,
"text": "v"
},
{
"math_id": 195,
"text": "R = a_1a_2 \\cdots a_r"
},
{
"math_id": 196,
"text": "\\widetilde R = a_r\\cdots a_2 a_1."
},
{
"math_id": 197,
"text": " R = ab "
},
{
"math_id": 198,
"text": "R\\widetilde R = abba = ab^2a = a^2b^2 = ba^2b = baab = \\widetilde RR."
},
{
"math_id": 199,
"text": "R"
},
{
"math_id": 200,
"text": "R\\widetilde R = 1"
},
{
"math_id": 201,
"text": "(Rv\\widetilde R)^2 = Rv^{2}\\widetilde R = v^2R\\widetilde R = v^2 "
},
{
"math_id": 202,
"text": "Rv\\widetilde R"
},
{
"math_id": 203,
"text": "(Rv_1\\widetilde R) \\cdot (Rv_2\\widetilde R) = v_1 \\cdot v_2"
},
{
"math_id": 204,
"text": " R = e^{-B \\theta / 2} "
},
{
"math_id": 205,
"text": " \\theta "
},
{
"math_id": 206,
"text": " a \\wedge b = ((a \\wedge b) b^{-1}) b = a_{\\perp b} b "
},
{
"math_id": 207,
"text": "B \\wedge (p-q) = 0"
},
{
"math_id": 208,
"text": "B \\wedge (t + \\alpha v - q) = 0"
},
{
"math_id": 209,
"text": "\\alpha = \\frac{B \\wedge(q-t)}{B \\wedge v} "
},
{
"math_id": 210,
"text": "p = t + \\left(\\frac{B \\wedge (q-t)}{B \\wedge v}\\right) v. "
},
{
"math_id": 211,
"text": "\\mathbf{r} = r(\\widehat{u} \\cos \\theta + \\widehat{\\ \\!v} \\sin \\theta) = r \\widehat{u}(\\cos \\theta + \\widehat{u} \\widehat{\\ \\!v} \\sin \\theta)"
},
{
"math_id": 212,
"text": "{i} = \\widehat{u} \\widehat{\\ \\!v} = \\widehat{u} \\wedge \\widehat{\\ \\!v}"
},
{
"math_id": 213,
"text": "i^2 = -1 "
},
{
"math_id": 214,
"text": " \\mathbf{r} = r \\widehat{u} e^{i\\theta} "
},
{
"math_id": 215,
"text": " \\frac{d \\mathbf{r}}{d\\theta} = r \\widehat{u} i e^{i\\theta} = \\mathbf{r} i ."
},
{
"math_id": 216,
"text": " \\tau = \\frac{dW}{d\\theta} = F \\cdot \\frac{dr}{d\\theta} = F \\cdot (\\mathbf{r} i) ."
},
{
"math_id": 217,
"text": "a \\times b = -I (a \\wedge b) ."
},
{
"math_id": 218,
"text": "\\int_A dA \\,\\nabla f = \\oint_{\\partial A} dx \\, f"
},
{
"math_id": 219,
"text": "\\nabla f = \\nabla \\cdot f + \\nabla \\wedge f"
},
{
"math_id": 220,
"text": "\\int_a^b dx \\, \\nabla f = \\int_a^b dx \\cdot \\nabla f = \\int_a^b df = f(b) -f(a)"
}
] | https://en.wikipedia.org/wiki?curid=12939 |
12940766 | Toric lens | Type of lens
A toric lens is a lens with different optical power and focal length in two orientations perpendicular to each other. One of the lens surfaces is shaped like a "cap" from a torus (see figure at right), and the other one is usually spherical. Such a lens behaves like a combination of a spherical lens and a cylindrical lens. Toric lenses are used primarily in eyeglasses, contact lenses and intraocular lenses to correct astigmatism.
Torus.
A torus is the surface of revolution resulting when a circle with radius "r" rotates around an axis lying within the same plane as the circle, at a distance "R" from the circle's centre (see figure at right). If "R" > "r", a "ring torus" is produced. If "R" = "r", a "horn torus" is produced, where the opening is contracted into a single point. "R" < "r" results in a "spindle torus", where only two "dips" remain from the opening; these dips become less deep as "R" approaches 0. When "R" = 0, the torus degenerates into a sphere with radius "r".
Radius of curvature and optical power.
The greatest radius of curvature of the toric lens surface, "R" + "r", corresponds to the smallest refractive power, "S", given by
formula_0,
where "n" is the index of refraction of the lens material.
The smallest radius of curvature, "r", corresponds to the greatest refractive power, "s", given by
formula_1.
Since "R" + "r" > "r", "S" < "s". The lens behaves approximately like a combination of a spherical lens with optical power "s" and a cylindrical lens with power "s" − "S". In ophthalmology and optometry, "s" − "S" is called the "cylinder power" of the lens.
Note that both the greatest and the smallest curvature have a "circular" shape. Consequently, in contrast with a popular assumption, the toric lens is "not" an ellipsoid of revolution.
Light ray and its refractive power.
Light rays within the ("x","y")-plane of the torus (as defined in the figure above) are refracted according to the greatest radius of curvature, "R" + "r", which means that it has the smallest refractive power, "S".
Light rays within a plane through the axis of revolution (the "z" axis) of the torus are refracted according to the smallest radius of curvature, "r", which means that it has the greatest refractive power, "s".
As a consequence, there are two different refractive powers at orientations perpendicular to each other. At intermediate orientations, the refractive power changes gradually from the greatest to the smallest value, or reverse. This will compensate for the astigmatic aberration of the eye.
Atoric lens.
With modern computer-controlled design, grinding and polishing techniques, good vision correction can be achieved for even wider angles of view by allowing certain deviations from the toric shape. The result is called an "atoric lens" (literally, a non-toric lens). Atoric lenses are related to toric lenses in the same way that aspheric lenses are related to spherical lenses.
Notes.
References.
| [
{
"math_id": 0,
"text": " S = \\frac{n-1}{R+r} "
},
{
"math_id": 1,
"text": " s = \\frac{n-1}{r} "
}
] | https://en.wikipedia.org/wiki?curid=12940766 |
12941407 | Good regulator | Theorem in cybernetics
The good regulator is a theorem conceived by Roger C. Conant and W. Ross Ashby that is central to cybernetics. It was originally stated as "every good regulator of a system must be a model of that system", but more accurately, every good regulator must contain a model of the system. That is, any regulator that is maximally simple among optimal regulators must behave as an image of that system under a homomorphism; while the authors sometimes say 'isomorphism', the mapping they construct is only a homomorphism.
Theorem.
This theorem is obtained by considering the entropy of the variation of the output of the controlled system, and shows that, under very general conditions, the entropy is minimized when there is a (deterministic) mapping formula_0 from the states of the system to the states of the regulator. The authors view this map formula_1 as making the regulator a 'model' of the system.
With regard to the brain, insofar as it is successful and efficient as a regulator for survival, it must proceed, in learning, by the formation of a model (or models) of its environment.
The theorem is general enough to apply to all regulating and self-regulating or homeostatic systems.
Five variables are defined by the authors as involved in the process of system regulation. formula_2 as primary disturbers, formula_3 as a set of events in the regulator, formula_4 as a set of events in the rest of the system outside of the regulator, formula_5 as the total set of events (or outcomes) that may occur, formula_6 as the subset of formula_5 events (or outcomes) that are desirable to the system.
The principal point the authors make is that regulation requires the regulator to account for all variables in the set formula_4 of events concerning the system to be regulated, in order to produce satisfactory outcomes formula_6 of this regulation. If the regulator is instead unable to account for all variables in the set formula_4 of events concerning the system outside of the regulator, then the set formula_3 of events in the regulator may fail to account for the full variety of disturbances formula_2, which in turn may cause errors that lead to outcomes that are not satisfactory to the system (as illustrated by the events in the set formula_5 that are not elements of the set formula_6).
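The flavour of the entropy argument can be illustrated with a toy simulation; everything below (the three-state system, the outcome rule and all names) is an assumption made for illustration and is not taken from the paper. A regulator that is a deterministic function of the system state produces a lower-entropy outcome distribution than one that ignores the system state:

```python
import random
from collections import Counter
from math import log2

# Toy illustration (not from the paper): the outcome Z depends on the system
# state S and the regulator's response R; here Z = (S - R) mod 3, and the
# disturbances simply pick S uniformly at random.
STATES = [0, 1, 2]

def entropy(samples):
    """Shannon entropy (in bits) of an empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return sum((c / n) * log2(n / c) for c in counts.values())

def outcome_entropy(regulator, trials=10_000):
    outcomes = []
    for _ in range(trials):
        s = random.choice(STATES)      # disturbance drives the system state
        r = regulator(s)               # regulator's response
        outcomes.append((s - r) % 3)   # outcome of the regulated system
    return entropy(outcomes)

model_based = lambda s: s                    # deterministic mapping h: S -> R
blind = lambda s: random.choice(STATES)      # ignores the system state

print(outcome_entropy(model_based))  # ~0 bits: a single, predictable outcome
print(outcome_entropy(blind))        # ~log2(3) = 1.58 bits: outcomes vary widely
```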
The theorem does not explain what it takes for the system to become a good regulator. Moreover, although the paper is highly cited, concerns have been raised that its formal proof does not actually fully support the statement in the paper's title.
In cybernetics, the problem of creating good regulators is addressed by the ethical regulator theorem, and by the theory of practopoiesis. The construction of good regulators is a general problem for any system (e.g., an automated information system) that regulates some domain of application.
When restricted to the ordinary differential equation (ODE) subset of control theory, it is referred to as the internal model principle, which was first articulated in 1976 by B. A. Francis and W. M. Wonham. In this form, it stands in contrast to classical control, in that the classical feedback loop fails to explicitly model the controlled system (although the classical controller may contain an implicit model). | [
{
"math_id": 0,
"text": "h:S\\to R"
},
{
"math_id": 1,
"text": "h"
},
{
"math_id": 2,
"text": "D"
},
{
"math_id": 3,
"text": "R"
},
{
"math_id": 4,
"text": "S"
},
{
"math_id": 5,
"text": "Z"
},
{
"math_id": 6,
"text": "G"
}
] | https://en.wikipedia.org/wiki?curid=12941407 |
12941444 | Local flatness | Property of topological submanifolds
In topology, a branch of mathematics, local flatness is a smoothness condition that can be imposed on topological submanifolds. In the category of topological manifolds, locally flat submanifolds play a role similar to that of embedded submanifolds in the category of smooth manifolds. Violations of local flatness describe ridge networks and crumpled structures, with applications to materials processing and mechanical engineering.
Definition.
Suppose a "d" dimensional manifold "N" is embedded into an "n" dimensional manifold "M" (where "d" < "n"). If formula_0 we say "N" is locally flat at "x" if there is a neighborhood formula_1 of "x" such that the topological pair formula_2 is homeomorphic to the pair formula_3, with the standard inclusion of formula_4 That is, there exists a homeomorphism formula_5 such that the image of formula_6 coincides with formula_7. In diagrammatic terms, the following square must commute:
We call "N" locally flat in "M" if "N" is locally flat at every point. Similarly, a map formula_8 is called locally flat, even if it is not an embedding, if every "x" in "N" has a neighborhood "U" whose image formula_9 is locally flat in "M".
In manifolds with boundary.
The above definition assumes that, if "M" has a boundary, "x" is not a boundary point of "M". If "x" is a point on the boundary of "M" then the definition is modified as follows. We say that "N" is locally flat at a boundary point "x" of "M" if there is a neighborhood formula_10 of "x" such that the topological pair formula_2 is homeomorphic to the pair formula_11, where formula_12 is a standard half-space and formula_7 is included as a standard subspace of its boundary.
Consequences.
Local flatness of an embedding implies strong properties not shared by all embeddings. Brown (1962) proved that if "d" = "n" − 1, then "N" is collared; that is, it has a neighborhood which is homeomorphic to "N" × [0,1] with "N" itself corresponding to "N" × 1/2 (if "N" is in the interior of "M") or "N" × 0 (if "N" is in the boundary of "M"). | [
{
"math_id": 0,
"text": "x \\in N,"
},
{
"math_id": 1,
"text": " U \\subset M"
},
{
"math_id": 2,
"text": "(U, U\\cap N)"
},
{
"math_id": 3,
"text": "(\\mathbb{R}^n,\\mathbb{R}^d)"
},
{
"math_id": 4,
"text": "\\mathbb{R}^d\\to\\mathbb{R}^n."
},
{
"math_id": 5,
"text": "U\\to \\mathbb{R}^n"
},
{
"math_id": 6,
"text": "U\\cap N"
},
{
"math_id": 7,
"text": "\\mathbb{R}^d"
},
{
"math_id": 8,
"text": "\\chi\\colon N\\to M"
},
{
"math_id": 9,
"text": "\\chi(U)"
},
{
"math_id": 10,
"text": "U\\subset M"
},
{
"math_id": 11,
"text": "(\\mathbb{R}^n_+,\\mathbb{R}^d)"
},
{
"math_id": 12,
"text": "\\mathbb{R}^n_+"
}
] | https://en.wikipedia.org/wiki?curid=12941444 |
12941912 | CAT(k) space | Type of metric space in mathematics
In mathematics, a formula_0 space, where formula_1 is a real number, is a specific type of metric space. Intuitively, triangles in a formula_2 space (with formula_3) are "slimmer" than corresponding "model triangles" in a standard space of constant curvature formula_1. In a formula_2 space, the curvature is bounded from above by formula_1. A notable special case is formula_4; complete formula_5 spaces are known as "Hadamard spaces" after the French mathematician Jacques Hadamard.
Originally, Aleksandrov called these spaces “formula_6 domains”.
The terminology formula_2 was coined by Mikhail Gromov in 1987 and is an acronym for Élie Cartan, Aleksandr Danilovich Aleksandrov and Victor Andreevich Toponogov (although Toponogov never explored curvature bounded above in publications).
Definitions.
For a real number formula_1, let formula_7 denote the unique complete simply connected surface (real 2-dimensional Riemannian manifold) with constant curvature formula_1. Denote by formula_8 the diameter of formula_7, which is formula_9 if formula_10 and is formula_11 if formula_12.
Let formula_13 be a geodesic metric space, i.e. a metric space for which every two points formula_14 can be joined by a geodesic segment, an arc length parametrized continuous curve formula_15, whose length
formula_16
is precisely formula_17. Let formula_18 be a triangle in formula_19 with geodesic segments as its sides. formula_18 is said to satisfy the formula_0 inequality if there is a comparison triangle formula_20 in the model space formula_7, with sides of the same length as the sides of formula_18, such that distances between points on formula_18 are less than or equal to the distances between corresponding points on formula_20.
The geodesic metric space formula_13 is said to be a formula_0 space if every geodesic triangle formula_18 in formula_19 with perimeter less than formula_21 satisfies the formula_2 inequality. A (not-necessarily-geodesic) metric space formula_22 is said to be a space with curvature formula_23 if every point of formula_19 has a geodesically convex formula_2 neighbourhood. A space with curvature formula_24 may be said to have non-positive curvature.
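For the case formula_4 the model space is the Euclidean plane, and the comparison construction can be made concrete. The sketch below is illustrative and not from the article: it embeds a comparison triangle in the plane from three pairwise distances and computes the comparison distance between two points given by their arc lengths along two sides; in a CAT(0) space, the actual distance between the corresponding points never exceeds this value.

```python
import math

def comparison_triangle(d_xy, d_xz, d_yz):
    """Embed a comparison triangle with the given side lengths in the Euclidean
    plane (assumes the distances satisfy the triangle inequality)."""
    x = (0.0, 0.0)
    y = (d_xy, 0.0)
    # The law of cosines gives the angle at x, which places z.
    cos_a = (d_xy**2 + d_xz**2 - d_yz**2) / (2 * d_xy * d_xz)
    z = (d_xz * cos_a, d_xz * math.sqrt(max(0.0, 1.0 - cos_a**2)))
    return x, y, z

def comparison_distance(d_xy, d_xz, d_yz, s, t):
    """Distance, in the comparison triangle, between the point at arc length s
    from x along side xy and the point at arc length t from x along side xz."""
    x, y, z = comparison_triangle(d_xy, d_xz, d_yz)
    p = (s / d_xy * y[0], s / d_xy * y[1])
    q = (t / d_xz * z[0], t / d_xz * z[1])
    return math.dist(p, q)

# CAT(0) check for one pair of points on the sides of a triangle with side
# lengths 3, 4, 5: the distance measured in the space must not exceed this.
print(comparison_distance(d_xy=3.0, d_xz=4.0, d_yz=5.0, s=1.5, t=2.0))  # 2.5
```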
Hadamard spaces.
As a special case, a complete CAT(0) space is also known as a Hadamard space; this is by analogy with the situation for Hadamard manifolds. A Hadamard space is contractible (it has the homotopy type of a single point) and, between any two points of a Hadamard space, there is a unique geodesic segment connecting them (in fact, both properties also hold for general, possibly incomplete, CAT(0) spaces). Most importantly, distance functions in Hadamard spaces are convex: if formula_44 are two geodesics in "X" defined on the same interval of time "I", then the function formula_45 given by
formula_46
is convex in "t".
Properties of CAT("k") spaces.
Let formula_13 be a formula_2 space. Then the following properties hold:
Surfaces of non-positive curvature.
In a region where the curvature of the surface satisfies "K" ≤ 0, geodesic triangles satisfy the CAT(0) inequalities of comparison geometry, studied by Cartan, Alexandrov and Toponogov, and considered later from a different point of view by Bruhat and Tits. Thanks to the vision of Gromov, this characterisation of non-positive curvature in terms of the underlying metric space has had a profound impact on modern geometry and in particular geometric group theory. Many results known for smooth surfaces and their geodesics, such as Birkhoff's method of constructing geodesics by his curve-shortening process or von Mangoldt and Hadamard's theorem that a simply connected surface of non-positive curvature is homeomorphic to the plane, are equally valid in this more general setting.
Alexandrov's comparison inequality.
The simplest form of the comparison inequality, first proved for surfaces by Alexandrov around 1940, states that
The distance between a vertex of a geodesic triangle and the midpoint of the opposite side is always less than the corresponding distance in the comparison triangle in the plane with the same side-lengths.
The inequality follows from the fact that if "c"("t") describes a geodesic parametrized by arclength and "a" is a fixed point, then
"f"("t") = "d"("a","c"("t"))² − "t"²
is a convex function, i.e.
formula_62
Taking geodesic polar coordinates with origin at "a" so that ‖"c"("t")‖ = "r"("t"), convexity is equivalent to
formula_63
Changing to normal coordinates "u", "v" at "c"("t"), this inequality becomes
"u"² + "H"⁻¹"H"_"r" "v"² ≥ 1,
where ("u","v") corresponds to the unit vector "ċ"("t"). This follows from the inequality "H"_"r" ≥ "H", a consequence of the non-negativity of the derivative of the Wronskian of "H" and "r" from Sturm–Liouville theory.
References.
| [
{
"math_id": 0,
"text": "\\mathbf{\\operatorname{\\textbf{CAT}}}(k)"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "\\operatorname{CAT}(k)"
},
{
"math_id": 3,
"text": "k<0"
},
{
"math_id": 4,
"text": "k=0"
},
{
"math_id": 5,
"text": "\\operatorname{CAT}(0)"
},
{
"math_id": 6,
"text": "\\mathfrak{R}_k"
},
{
"math_id": 7,
"text": "M_k"
},
{
"math_id": 8,
"text": "D_k"
},
{
"math_id": 9,
"text": "\\infty"
},
{
"math_id": 10,
"text": "k \\leq 0"
},
{
"math_id": 11,
"text": "\\frac{\\pi}{\\sqrt{k}}"
},
{
"math_id": 12,
"text": "k>0"
},
{
"math_id": 13,
"text": "(X,d)"
},
{
"math_id": 14,
"text": "x,y\\in X"
},
{
"math_id": 15,
"text": "\\gamma\\colon [a,b] \\to X,\\ \\gamma(a) = x,\\ \\gamma(b) = y"
},
{
"math_id": 16,
"text": "L(\\gamma) = \\sup \\left\\{ \\left. \\sum_{i = 1}^{r} d \\big( \\gamma(t_{i-1}), \\gamma(t_{i}) \\big) \\right| a = t_{0} < t_{1} < \\cdots < t_{r} = b, r\\in \\mathbb{N} \\right\\}"
},
{
"math_id": 17,
"text": "d(x,y)"
},
{
"math_id": 18,
"text": "\\Delta"
},
{
"math_id": 19,
"text": "X"
},
{
"math_id": 20,
"text": "\\Delta'"
},
{
"math_id": 21,
"text": "2D_k"
},
{
"math_id": 22,
"text": "(X,\\,d)"
},
{
"math_id": 23,
"text": "\\leq k"
},
{
"math_id": 24,
"text": "\\leq 0"
},
{
"math_id": 25,
"text": "\\operatorname{CAT}(\\ell)"
},
{
"math_id": 26,
"text": "\\ell>k"
},
{
"math_id": 27,
"text": "n"
},
{
"math_id": 28,
"text": "\\mathbf{E}^n"
},
{
"math_id": 29,
"text": "\\mathbf{H}^n"
},
{
"math_id": 30,
"text": "\\operatorname{CAT}(-1)"
},
{
"math_id": 31,
"text": "\\mathbf{S}^n"
},
{
"math_id": 32,
"text": "\\operatorname{CAT}(1)"
},
{
"math_id": 33,
"text": "r"
},
{
"math_id": 34,
"text": "\\frac{1}{r^2}"
},
{
"math_id": 35,
"text": "\\operatorname{CAT}\\left(\\frac{1}{r^2}\\right)"
},
{
"math_id": 36,
"text": "\\pi r"
},
{
"math_id": 37,
"text": "2r"
},
{
"math_id": 38,
"text": "\\Pi = \\mathbf{E}^2\\backslash\\{\\mathbf{0}\\}"
},
{
"math_id": 39,
"text": "(0,1)"
},
{
"math_id": 40,
"text": "(0,-1)"
},
{
"math_id": 41,
"text": "\\Pi"
},
{
"math_id": 42,
"text": "\\mathbf{E}^3"
},
{
"math_id": 43,
"text": "X = \\mathbf{E}^{3} \\setminus \\{ (x, y, z) \\mid x > 0, y > 0 \\text{ and } z > 0 \\}"
},
{
"math_id": 44,
"text": "\\sigma_1, \\sigma_2"
},
{
"math_id": 45,
"text": "I\\to \\R"
},
{
"math_id": 46,
"text": "t \\mapsto d \\big( \\sigma_{1} (t), \\sigma_{2} (t) \\big)"
},
{
"math_id": 47,
"text": "d(x,y)< D_k"
},
{
"math_id": 48,
"text": "k> 0"
},
{
"math_id": 49,
"text": "x"
},
{
"math_id": 50,
"text": "y"
},
{
"math_id": 51,
"text": "d"
},
{
"math_id": 52,
"text": "D_k/2"
},
{
"math_id": 53,
"text": "\\lambda < D_k"
},
{
"math_id": 54,
"text": "\\epsilon > 0"
},
{
"math_id": 55,
"text": "\\delta = \\delta(k,\\lambda,\\epsilon) > 0"
},
{
"math_id": 56,
"text": "m"
},
{
"math_id": 57,
"text": "d(x,y)\\leq \\lambda"
},
{
"math_id": 58,
"text": "\\max \\bigl\\{ d(x, m'), d(y, m') \\bigr\\} \\leq \\frac1{2} d(x, y) + \\delta,"
},
{
"math_id": 59,
"text": "d(m,m') < \\epsilon"
},
{
"math_id": 60,
"text": "k\\leq 0"
},
{
"math_id": 61,
"text": "k > 0"
},
{
"math_id": 62,
"text": "\\ddot{f}(t) \\ge 0."
},
{
"math_id": 63,
"text": " r\\ddot{r} + \\dot{r}^2 \\ge 1."
}
] | https://en.wikipedia.org/wiki?curid=12941912 |
12942590 | Gromov product | In mathematics, the Gromov product is a concept in the theory of metric spaces named after the mathematician Mikhail Gromov. The Gromov product can also be used to define "δ"-hyperbolic metric spaces in the sense of Gromov.
Definition.
Let ("X", "d") be a metric space and let "x", "y", "z" ∈ "X". Then the Gromov product of "y" and "z" at "x", denoted ("y", "z")_"x", is defined by
formula_0
Motivation.
Given three points "x", "y", "z" in the metric space "X", by the triangle inequality there exist non-negative numbers "a", "b", "c" such that formula_1. The Gromov products are then formula_2. If the points "x", "y", "z" are the outer nodes of a tripod, then these Gromov products are the lengths of its edges.
In the hyperbolic, spherical or Euclidean plane, the Gromov product ("A", "B")_"C" equals the distance "p" between "C" and the point where the incircle of the geodesic triangle "ABC" touches the edge "CB" or "CA". Indeed, "c" = ("a" – "p") + ("b" – "p"), so that "p" = ("a" + "b" – "c")/2 = ("A", "B")_"C". Thus for any metric space, a geometric interpretation of ("A", "B")_"C" is obtained by isometrically embedding (A, B, C) into the Euclidean plane.
formula_3
formula_4
formula_5
formula_6
Points at infinity.
Consider hyperbolic space H^"n". Fix a base point "p" and let formula_7 and formula_8 be two distinct points at infinity. Then the limit
formula_9
exists and is finite, and therefore can be considered as a generalized Gromov product. It is actually given by the formula
formula_10
where formula_11 is the angle between the geodesic rays formula_12 and formula_13.
δ-hyperbolic spaces and divergence of geodesics.
The Gromov product can be used to define "δ"-hyperbolic spaces in the sense of Gromov: ("X", "d") is said to be "δ"-hyperbolic if, for all "p", "x", "y" and "z" in "X",
formula_14
In this case, the Gromov product measures how long geodesics remain close together. Namely, if "x", "y" and "z" are three points of a "δ"-hyperbolic metric space, then the initial segments of length ("y", "z")_"x" of geodesics from "x" to "y" and "x" to "z" are no further than 2"δ" apart (in the sense of the Hausdorff distance between closed sets).
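For a finite metric space given as a distance matrix, the Gromov products and the smallest "δ" satisfying the condition above can be computed directly. The sketch below is illustrative; the example distance matrix encodes the four leaves of a metric tree, for which the condition holds with "δ" = 0:

```python
import itertools

def gromov_product(d, x, y, z):
    """(y, z)_x = (d(x,y) + d(x,z) - d(y,z)) / 2 for a distance matrix d."""
    return 0.5 * (d[x][y] + d[x][z] - d[y][z])

def hyperbolicity_constant(d):
    """Smallest delta with (x, z)_p >= min((x, y)_p, (y, z)_p) - delta
    for all p, x, y, z."""
    n = len(d)
    delta = 0.0
    for p, x, y, z in itertools.product(range(n), repeat=4):
        gap = min(gromov_product(d, p, x, y),
                  gromov_product(d, p, y, z)) - gromov_product(d, p, x, z)
        delta = max(delta, gap)
    return delta

# Distances between the four leaves of a star-shaped metric tree with edge
# lengths 1, 2, 3, 4; trees satisfy the condition with delta = 0.
d = [[0, 3, 4, 5],
     [3, 0, 5, 6],
     [4, 5, 0, 7],
     [5, 6, 7, 0]]
print(hyperbolicity_constant(d))  # 0.0
```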
Notes.
| [
{
"math_id": 0,
"text": "(y, z)_{x} = \\frac1{2} \\big( d(x, y) + d(x, z) - d(y, z) \\big)."
},
{
"math_id": 1,
"text": "d(x,y) = a + b, \\ d(x,z) = a + c, \\ d(y,z) = b + c"
},
{
"math_id": 2,
"text": "(y,z)_x = a, \\ (x,z)_y = b, \\ (x,y)_z = c"
},
{
"math_id": 3,
"text": "d(x, y) = (x, z)_{y} + (y, z)_{x},"
},
{
"math_id": 4,
"text": "0 \\leq (y, z)_{x} \\leq \\min \\big\\{ d(y, x), d(z, x) \\big\\},"
},
{
"math_id": 5,
"text": "\\big| (y, z)_{p} - (y, z)_{q} \\big| \\leq d(p, q),"
},
{
"math_id": 6,
"text": "\\big| (x, y)_{p} - (x, z)_{p} \\big| \\leq d(y, z)."
},
{
"math_id": 7,
"text": "x_\\infty"
},
{
"math_id": 8,
"text": "y_\\infty"
},
{
"math_id": 9,
"text": "\\liminf_{x \\to x_\\infty \\atop y \\to y_\\infty} (x,y)_p"
},
{
"math_id": 10,
"text": "(x_\\infty, y_\\infty)_{p} = \\log \\csc (\\theta/2),"
},
{
"math_id": 11,
"text": "\\theta"
},
{
"math_id": 12,
"text": "px_\\infty"
},
{
"math_id": 13,
"text": "py_\\infty"
},
{
"math_id": 14,
"text": "(x, z)_{p} \\geq \\min \\big\\{ (x, y)_{p}, (y, z)_{p} \\big\\} - \\delta."
}
] | https://en.wikipedia.org/wiki?curid=12942590 |
1294354 | Property P conjecture | Theorem in topology
In geometric topology, the Property P conjecture is a statement about 3-manifolds obtained by Dehn surgery on a knot in the 3-sphere. A knot in the 3-sphere is said to have Property P if every 3-manifold obtained by performing (non-trivial) Dehn surgery on the knot is not simply-connected. The conjecture states that all knots, except the unknot, have Property P.
Research on Property P was started by R. H. Bing, who popularized the name and conjecture.
This conjecture can be thought of as a first step to resolving the Poincaré conjecture, since the Lickorish–Wallace theorem says any closed, orientable 3-manifold results from Dehn surgery on a link.
If a knot formula_0 has Property P, then one cannot construct a counterexample to the Poincaré conjecture by surgery along formula_1.
A proof was announced in 2004, as the combined result of efforts of mathematicians working in several different fields.
Algebraic formulation.
Let formula_2 denote elements corresponding to a preferred longitude and meridian of a tubular neighborhood of formula_1.
formula_1 has Property P if and only if its Knot group is never trivialised by adjoining a relation of the form formula_3 for some formula_4. | [
{
"math_id": 0,
"text": "K \\subset \\mathbb{S}^{3}"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "[l], [m] \\in \\pi_{1}(\\mathbb{S}^{3} \\setminus K)"
},
{
"math_id": 3,
"text": " m = l^{a} "
},
{
"math_id": 4,
"text": " 0 \\ne a \\in \\mathbb{Z}"
}
] | https://en.wikipedia.org/wiki?curid=1294354 |
1294453 | Insular biogeography | Study of the ecology of isolated habitats
Insular biogeography or island biogeography is a field within biogeography that examines the factors that affect the species richness and diversification of isolated natural communities. The theory was originally developed to explain the pattern of the species–area relationship occurring in oceanic islands. Under either name it is now used in reference to any ecosystem (present or past) that is isolated due to being surrounded by unlike ecosystems, and has been extended to mountain peaks, seamounts, oases, fragmented forests, and even natural habitats isolated by human land development. The field was started in the 1960s by the ecologists Robert H. MacArthur and E. O. Wilson, who coined the term "island biogeography" in their inaugural contribution to Princeton's Monograph in Population Biology series, which attempted to predict the number of species that would exist on a newly created island.
Definitions.
For biogeographical purposes, an insular environment or "island" is any area of habitat suitable for a specific ecosystem, surrounded by an expanse of unsuitable habitat. While this may be a traditional island—a mass of land surrounded by water—the term may also be applied to many nontraditional "islands", such as the peaks of mountains, isolated springs or lakes, and non-contiguous woodlands. The concept is often applied to natural habitats surrounded by human-altered landscapes, such as expanses of grassland surrounded by highways or housing tracts, and national parks. Additionally, what is insular for one organism may not be so for others: some organisms located on mountaintops may also be found in the valleys, while others may be restricted to the peaks.
Theory.
The theory of insular biogeography proposes that the number of species found in an undisturbed insular environment ("island") is determined by immigration and extinction, and further that the isolated populations may follow different evolutionary routes, as shown by Darwin's observation of finches in the Galapagos Islands. Immigration and emigration are affected by the distance of an island from a source of colonists (the distance effect). Usually this source is the mainland, but it can also be other islands. Islands that are more isolated are less likely to receive immigrants than islands that are less isolated.
The rate of extinction once a species manages to colonize an island is affected by island size; this is the species-area curve or effect. Larger islands contain larger habitat areas and opportunities for more different varieties of habitat. Larger habitat size reduces the probability of extinction due to chance events. Habitat heterogeneity increases the number of species that will be successful after immigration.
Over time, the countervailing forces of extinction and immigration result in an equilibrium level of species richness.
Modifications.
In addition to having an effect on immigration rates, isolation can also affect extinction rates. Populations on islands that are less isolated are less likely to go extinct because individuals from the source population and other islands can immigrate and "rescue" the population from extinction; this is known as the rescue effect.
In addition to having an effect on extinction, island size can also affect immigration rates. Species may actively target larger islands for their greater number of resources and available niches; or, larger islands may accumulate more species by chance just because they are larger. This is the target effect.
Species-area relationships.
Species–area relationships show the relationship between a given area and the species richness within that area. This concept comes from the theory of island biogeography, and is well illustrated on islands because they are relatively isolated: the species immigrating to and going extinct on an island are more limited and therefore easier to keep track of. Area and species richness are expected to be directly related. For example, as the area of a series of islands increases, the species richness of primary producers increases as well. Island species–area relationships behave somewhat differently from mainland species–area relationships; however, the connections between the two can still prove useful.
The species-area relationship equation is: formula_0.
In this equation, formula_1 represents the measure of species diversity (for example, the number of species), and formula_2 is a constant (its logarithm is the y-intercept in the logarithmic form below). formula_3 represents the area of the island or space being examined, and formula_4 is the slope of the species–area curve.
The relationship can also be expressed in logarithmic form: formula_5 In this form it can be drawn as a straight line. The core meaning is the same: the area of the island dictates the species richness.
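As a rough numerical illustration (the island areas and species counts below are made-up values, not survey data), the parameters formula_2 and formula_4 can be estimated by a least-squares fit in the logarithmic form:

```python
import math

# Hypothetical island areas (km^2) and species counts, for illustration only.
areas = [1, 10, 100, 1000]
species = [10, 18, 32, 56]

# Least-squares fit of log(S) = log(c) + z*log(A), a straight line in log-log space.
xs = [math.log10(a) for a in areas]
ys = [math.log10(s) for s in species]
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
z = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
     / sum((x - x_bar) ** 2 for x in xs))
c = 10 ** (y_bar - z * x_bar)

print(f"z = {z:.2f}, c = {c:.1f}")                   # slope and constant of S = c*A^z
print(f"predicted S for A = 500: {c * 500**z:.0f}")  # interpolated species count
```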
Historical record.
The theory can be studied through the fossils, which provide a record of life on Earth. 300 million years ago, Europe and North America lay on the equator and were covered by steamy tropical rainforests. Climate change devastated these tropical rainforests during the Carboniferous Period and as the climate grew drier, rainforests fragmented. Shrunken islands of forest were uninhabitable for amphibians but were well suited to reptiles, which became more diverse and even varied their diet in the rapidly changing environment; this Carboniferous rainforest collapse event triggered an evolutionary burst among reptiles.
Research experiments.
The theory of island biogeography was experimentally tested by E. O. Wilson and his student Daniel Simberloff in the mangrove islands in the Florida Keys. Species richness on several small mangrove islands was surveyed. The islands were fumigated with methyl bromide to clear their arthropod communities. Following fumigation, the immigration of species onto the islands was monitored. Within a year the islands had been recolonized to pre-fumigation levels. However, Simberloff and Wilson contended that this final species richness was oscillating in quasi-equilibrium. Islands closer to the mainland recovered faster, as predicted by the theory of island biogeography. The effect of island size was not tested, since all islands were of approximately equal size.
Research conducted at the rainforest research station on Barro Colorado Island has yielded a large number of publications concerning the ecological changes following the formation of islands, such as the local extinction of large predators and the subsequent changes in prey populations.
Applications to Island Like Systems (ILS).
The theory of island biogeography was originally used to study oceanic islands, but its concepts can be extrapolated to other areas of study. Island species dynamics give information about how species move and interact within Island Like Systems (ILS). Rather than being actual islands, ILS are primarily defined by their isolation within an ecosystem. In the case of an island, the area referred to as the matrix is usually the body of water surrounding it. The mainland is often the nearest non-island piece of land. Similarly, in an ILS the “mainland” is the source of immigrating species; however, the matrix is far more varied. By considering how different types of isolated ecosystems, for example a pond surrounded by land, are similar to island ecosystems, it can be understood how theories and phenomena that are true of island ecosystems can be applied to ILS. Moreover, the overall immigration and extinction patterns outlined in the theory of island biogeography, as they play out on islands, also play out between ecosystems on the mainland.
The concepts of island area and level of isolation from a mainland, as presented in the theory of island biogeography, apply to ILS. The main difference is in the dynamics of area and isolation. For example, an ILS may have a changing area because of seasons, which may impact its degree of isolation. Resource availability plays an important role in the conditions an island is under. This is another factor that differs between ILS and real islands, since there is generally greater resource availability in some ILS than in true islands.
Species–area relationships, as described above, can be applied to Island Like Systems (ILS) as well. It is typically observed that species richness increases with the area of an ecosystem. One major difference is that formula_4-values are generally lower for ILS than for true islands. Furthermore, formula_2 values also vary between true islands and ILS, and within types of ILS.
Applications in conservation biology.
Within a few years of the publishing of the theory, its potential application to the field of conservation biology had been realised and was being vigorously debated in ecological circles. The idea that reserves and national parks formed islands inside human-altered landscapes (habitat fragmentation), and that these reserves could lose species as they 'relaxed towards equilibrium' (that is they would lose species as they achieved their new equilibrium number, known as ecosystem decay) caused a great deal of concern. This is particularly true when conserving larger species which tend to have larger ranges. A study by William Newmark, published in the journal Nature and reported in "The New York Times", showed a strong correlation between the size of a protected U.S. National Park and the number of species of mammals.
This led to the debate known as single large or several small (SLOSS), described by writer David Quammen in "The Song of the Dodo" as "ecology's own genteel version of trench warfare". In the years after the publication of Wilson and Simberloff's papers, ecologists found more examples of the species-area relationship, and conservation planning took the view that one large reserve could hold more species than several smaller reserves, and that larger reserves should be the norm in reserve design. This view was championed in particular by Jared Diamond. It caused concern among other ecologists, including Dan Simberloff, who considered it an unproven over-simplification that would damage conservation efforts, arguing that habitat diversity was as important as or more important than size in determining the number of species protected.
Island biogeography theory also led to the development of wildlife corridors as a conservation tool to increase connectivity between habitat islands. Wildlife corridors can increase the movement of species between parks and reserves and therefore increase the number of species that can be supported, but they can also allow for the spread of disease and pathogens between populations, complicating the simple prescription that connectivity is good for biodiversity.
With regard to species diversity, island biogeography most directly describes allopatric speciation. Allopatric speciation occurs when new gene pools arise through natural selection acting on isolated gene pools. Island biogeography is also useful in considering sympatric speciation, the idea of different species arising from one ancestral species in the same area. Interbreeding between the two differently adapted species would prevent speciation, but in some species, sympatric speciation appears to have occurred.
References.
| [
{
"math_id": 0,
"text": "S= cA^z"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "c"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "z"
},
{
"math_id": 5,
"text": "log(S)=log(c)+zlog(A)"
}
] | https://en.wikipedia.org/wiki?curid=1294453 |
12946607 | Christopher T. Hill | American theoretical physicist
Christopher T. Hill (born June 19, 1951) is an American theoretical physicist at the Fermi National Accelerator Laboratory who did undergraduate work in physics at M.I.T. (B.S., M.S., 1972), and graduate work at Caltech (Ph.D., 1977, Murray Gell-Mann). Hill's Ph.D. thesis, "Higgs Scalars and the Nonleptonic Weak Interactions" (1977) contains one of the first detailed discussions of the two-Higgs-doublet model
and its impact upon weak interactions. His work mainly focuses on new physics that can be probed in laboratory experiments or cosmology.
Hill is an originator, with William A. Bardeen and Manfred Lindner, of the idea that the Higgs boson is composed
of top and anti-top quarks. This emerges from the concept of the
top quark infrared fixed point, with
which Hill predicted (1981) that the top quark would be very heavy, contrary
to most popular ideas at the time. The fixed point prediction
lies within 20% of the observed top quark mass (1995). This implies
that the top quarks may be strongly coupled at very short
distances and could form a composite Higgs boson, which led to top quark condensates, topcolor, and dimensional deconstruction, a renormalizable lattice description of extra dimensions of space.
The original minimal top condensation model predicted the Higgs boson mass to be
about twice the observed value of 125 GeV, but extensions of the
theory achieve concordance with both the Higgs boson and top quark masses.
Several new heavy Higgs bosons, such as a b-quark scalar bound state,
may be accessible to the LHC.
Hill coauthored (with Elizabeth H. Simmons) a comprehensive review of strong dynamical theories and electroweak symmetry breaking
that has shaped many of the experimental searches for new physics at the Tevatron and LHC.
Heavy-light mesons contain a heavy quark and a light anti-quark, and provide a window on the chiral symmetry dynamics of a single light quark.
Hill and Bardeen showed that the (spin) formula_0 ground states are split from the formula_1 parity partners by a universal mass gap of about formula_2 due to the light quark chiral symmetry breaking. This correctly predicted an abnormally long-lived resonance,
the formula_3 (and the now confirmed formula_4), ten years before its discovery, and numerous decay modes which have
been confirmed by experiment.
Similar phenomena should be seen in the formula_5 mesons and
formula_6 (heavy-heavy-strange baryons).
Hill is a contributor to the theory of topological interactions and, with collaborators,
was first to obtain the full Wess-Zumino-Witten term for the standard model which describes the physics of the
chiral anomaly in Lagrangians, including pseudoscalars, spin-1 vector mesons, and the formula_7 and formula_8. The WZW term requires a non-trivial counter-term to map the "consistent" anomaly into the "covariant" anomaly, as
dictated by the conserved currents of the standard model. With the full WZW-term,
new anomalous interactions were revealed such as the formula_9 vertex. This leads to formula_10
where formula_11 is a heavy nucleus, and may contribute
to excess photons seen in low energy neutrino experiments. The result reproduces B+L violation by the anomaly in the standard model, and predicts numerous other anomalous processes.
Hill has given a derivation of the coefficients of consistent and covariant chiral anomalies (even D), and Chern-Simons terms (odd D), without resorting to fermion loops,
from the Dirac monopole construction and its generalization ("Dirac Branes") to higher dimensions.
Hill is an originator of cosmological models of dark energy and dark matter based upon ultra-low mass pseudo-Nambu-Goldstone bosons associated with
symmetries of neutrino masses. He proposed that the cosmological constant is
connected to the neutrino mass, as formula_12 and developed modern theories of the origin of ultra-high-energy nucleons and neutrinos from grand unification relics. He has shown that a cosmic axion field will induce an effective oscillating electric dipole moment for any magnet.
In an unpublished talk at the Vancouver Workshop on Quantum Cosmology (May, 1990),
Hill discussed possible roles for Nambu-Goldstone bosons in cosmology and suggested that a pseudo-Nambu-Goldstone boson might provide a "natural inflaton," the
particle responsible for cosmic inflation. He noted that this required a spontaneously broken global symmetry, such as U(1), near the Planck scale, and explicit symmetry breaking near the Grand Unification Scale. The idea seemed ad hoc, however subsequent
work on Weyl invariant theories offered a better rationale for a natural inflation scenario
connected to Planck scale physics. Hill collaborated with Graham Ross and Pedro G. Ferreira and focused on spontaneously broken scale symmetry (or Weyl symmetry), where the scale of gravity (Planck mass) and the inflationary phase of the ultra-early universe are generated together as part of a unified phenomenon dubbed "inertial symmetry breaking." The Weyl symmetry breaking occurs because the Noether current is the derivative of a scalar operator, called the "kernel." During
a period of pre-Planckian expansion any conserved current must red-shift
to zero, hence the kernel approaches a constant value
which determines the Planck mass and the Einstein-Hilbert action of General Relativity is emergent. The theory is in good agreement with cosmological observation.
Hill has returned to the issue of composite scalars in relativistic field
theory, developing a novel analytic approach to bound states of chiral fermions
by generalizing the Nambu--Jona-Lasinio model to non-pointlike interactions. He feels the most important challenge to the CERN LHC program is to determine if the Brout-Englert-Higgs boson is
a pointlike fundamental particle or a composite bound state near the TeV energy scale. The former case
may evidence some yet-to-be developed version of Supersymmetry; the latter case would imply new dynamics.
Books and articles.
Hill has authored three popular books with Nobel laureate Leon Lederman
about physics and cosmology, and the commissioning of the Large Hadron Collider.
References.
| [
{
"math_id": 0,
"text": "(0^-,1^-)"
},
{
"math_id": 1,
"text": "(0^+,1^+)"
},
{
"math_id": 2,
"text": "~ \\Delta M \\approx 350 \\text{ MeV,}~"
},
{
"math_id": 3,
"text": " D_{s0^+}^*(2317)"
},
{
"math_id": 4,
"text": " D_{s1^+}^*(2460)"
},
{
"math_id": 5,
"text": "B_s"
},
{
"math_id": 6,
"text": "ccs, bcs, bbs"
},
{
"math_id": 7,
"text": " W^\\pm "
},
{
"math_id": 8,
"text": " Z^0 "
},
{
"math_id": 9,
"text": "\\gamma\\omega Z^0"
},
{
"math_id": 10,
"text": " \\nu + X \\rightarrow \\nu+ \\gamma + X"
},
{
"math_id": 11,
"text": " X "
},
{
"math_id": 12,
"text": "\\Lambda\\sim m_\\nu^4 "
}
] | https://en.wikipedia.org/wiki?curid=12946607 |
12947946 | Flexural modulus | Intensive property in mechanics
In mechanics, the flexural modulus or bending modulus is an intensive property that is computed as the ratio of stress to strain in flexural deformation, or the tendency for a material to resist bending. It is determined from the slope of a stress-strain curve produced by a flexural test (such as the ASTM D790), and uses units of force per area. The flexural modulus defined using the 2-point (cantilever) and 3-point bend tests assumes a linear stress strain response.
For a 3-point test of a rectangular beam behaving as an isotropic linear material, where "w" and "h" are the width and height of the beam, "I" is the second moment of area of the beam's cross-section, "L" is the distance between the two outer supports, and "d" is the deflection due to the load "F" applied at the middle of the beam, the flexural modulus is:
formula_0
From elastic beam theory
formula_1
and for rectangular beam
formula_2
thus formula_3 (Elastic modulus)
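A short numerical sketch of the 3-point formula above; the specimen dimensions and the load-deflection pair are illustrative values, not taken from any standard:

```python
# Flexural modulus from a 3-point bend test: E_flex = L^3 * F / (4 * w * h^3 * d).
# The inputs are illustrative; use consistent units (here mm and N, giving MPa).

def flexural_modulus(L, w, h, F, d):
    """L: support span, w: width, h: thickness, F: load at mid-span,
    d: mid-span deflection under F, all in the linear (elastic) region."""
    return (L**3 * F) / (4 * w * h**3 * d)

# Example: 64 mm span, 12.7 mm x 3.2 mm bar, 100 N producing 1.5 mm deflection.
E = flexural_modulus(L=64.0, w=12.7, h=3.2, F=100.0, d=1.5)
print(f"E_flex = {E:.0f} MPa = {E / 1000:.1f} GPa")  # about 10.5 GPa
```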
For very small strains in isotropic materials – like glass, metal or polymer – flexural or bending modulus of elasticity is equivalent to the tensile modulus (Young's modulus) or compressive modulus of elasticity. However, in anisotropic materials, for example wood, these values may not be equivalent. Moreover, composite materials like fiber-reinforced polymers or biological tissues are inhomogeneous combinations of two or more materials, each with different material properties, therefore their tensile, compressive, and flexural moduli usually are not equivalent.
References.
| [
{
"math_id": 0,
"text": "\nE_{\\mathrm{flex}} = \\frac {L^3 F}{4 w h^3 d}\n"
},
{
"math_id": 1,
"text": "d = \\frac {L^3 F}{48 I E } "
},
{
"math_id": 2,
"text": " I = \\frac{1}{12}wh^3 "
},
{
"math_id": 3,
"text": "E_{\\mathrm{flex}} = E "
}
] | https://en.wikipedia.org/wiki?curid=12947946 |
12951110 | Topcolor | Topcolor is a model in theoretical physics, of dynamical electroweak symmetry breaking in which the top quark and anti-top quark form a composite Higgs boson by a new force arising from massive "top gluons". The solution to composite Higgs models was actually anticipated in 1981, and found to be the Infrared fixed point for the top quark mass.
Analogy with known physics.
The composite Higgs boson made from a bound pair of top-anti-top quarks is analogous to the phenomenon of superconductivity, where Cooper pairs are formed by the exchange of phonons. The pairing dynamics and its solution was treated in the Bardeen-Hill-Lindner model.
The original topcolor naturally involved an extension of the standard model color gauge group to a product group SU(3)×SU(3)×SU(3)×... One of the gauge groups contains
the top and bottom quarks, and has a sufficiently large coupling constant to cause the condensate to form. The topcolor model anticipates the idea of dimensional deconstruction and extra space dimensions, as well as the large mass of the top quark.
In 2019 this was revisited ("scalar democracy") in which many composite Higgs bosons may form at very high energies, composed of the known quarks and leptons, perhaps bound by universal force (e.g., gravity, or an extension of topcolor). The standard model Higgs boson is then a top-anti-top boundstate. The theory predicts many new Higgs doublets, starting at the TeV mass scale, with formula_0 couplings to the known fermions, that may explain their masses and mixing angles. The first sequential new Higgs bosons should be accessible to the LHC.
References.
| [
{
"math_id": 0,
"text": "\\, \\mathcal{O}(1) \\,"
}
] | https://en.wikipedia.org/wiki?curid=12951110 |
12957669 | Victor Andreevich Toponogov | Russian mathematician (1930–2004)
Victor Andreevich Toponogov (; March 6, 1930 – November 21, 2004) was an outstanding Russian mathematician, noted for his contributions to differential geometry and so-called Riemannian geometry "in the large".
Biography.
After finishing secondary school in 1948, Toponogov entered the department of Mechanics and Mathematics at Tomsk State University, graduated with honours in 1953, and continued as a graduate student there until 1956. He moved to an institution in Novosibirsk in 1956 and lived in that city for the rest of his career. Since the institution at Novosibirsk had not yet been fully credentialed, he had defended his Ph.D. thesis at Moscow State University in 1958, on a subject in Riemann spaces. Novosibirsk State University was established in 1959. In 1961, Toponogov became a professor at a newly created Institute of Mathematics and Computing in Novosibirsk affiliated with the state university.
Toponogov's scientific interests were influenced by his advisor Abram Fet, who taught at Tomsk and later at Novosibirsk. Fet was a well-recognized topologist and specialist in variational calculus in the large. Toponogov's work was also strongly influenced by the work of Aleksandr Danilovich Aleksandrov. Later, the class of metric spaces known as CAT("k") spaces would be named after Élie Cartan, Aleksandrov and Toponogov.
Toponogov published over forty papers and some books during his career. His works are concentrated in Riemannian geometry "in the large". A significant number of his students also made notable contributions in this field.
Conjecture on Complete Convex Surfaces.
In 1995 Toponogov made the conjecture:
"On a complete convex surface S homeomorphic to a plane the following equality holds:"
formula_0
"where formula_1 and formula_2 are the principal curvatures of S."
In words, it states that every complete convex surface homeomorphic to a plane must have an umbilic point which may lie at infinity. As such, it is the natural open analog of the Carathéodory conjecture for closed convex surfaces.
In the same paper, Toponogov proved the conjecture under either of two assumptions: the integral of the Gauss curvature is less than formula_3, or the Gauss curvature and the gradients of the curvatures are bounded on "S". The general case remains open.
References.
| [
{
"math_id": 0,
"text": " \\inf_{p\\in S}|\\kappa_2(p)-\\kappa_1(p)|=0,"
},
{
"math_id": 1,
"text": "\\kappa_1"
},
{
"math_id": 2,
"text": "\\kappa_2"
},
{
"math_id": 3,
"text": "2\\pi"
}
] | https://en.wikipedia.org/wiki?curid=12957669 |
1296085 | Virtual finite-state machine | A virtual finite-state machine (VFSM) is a finite-state machine (FSM) defined in a virtual environment. The VFSM concept provides a software specification method to describe the behaviour of a control system using assigned names of input control properties and output actions.
The VFSM method introduces an execution model and facilitates the idea of an executable specification. This technology is mainly used in complex machine control, instrumentation, and telecommunication applications.
Why.
Implementing a state machine necessitates the generation of logical conditions (state transition conditions and action conditions). In the hardware environment, where state machines found their original use, this is trivial: all signals are Boolean. In contrast, state machines specified and implemented in software require logical conditions that are per se multivalued.
In addition, input signals can be unknown due to errors or malfunctions, meaning that even digital input signals (classically considered Boolean values) in fact take three values: Low, High, Unknown.
A Positive Logical Algebra solves this problem via virtualization, by creating a Virtual Environment which allows specification of state machines for software using multivalued variables.
Control properties.
A state variable in the VFSM environment may have one or more values which are relevant for the Control—in such a case it is an input variable. Those values are the control properties of this variable. Control properties are not necessarily specific data values but are rather certain states of the variable. For instance, a digital variable could provide three control properties: TRUE, FALSE and UNKNOWN according to its possible boolean values. A numerical (analog) input variable has control properties such as: LOW, HIGH, OK, BAD, UNKNOWN according to its range of desired values. A timer can have its OVER state (time-out occurred) as its most significant control value; other values could be STOPPED or RUNNING.
Actions.
Other state variables in the VFSM environment may be activated by actions—in such a case it is an output variable. For instance, a digital output has two actions: True and False. A numerical (analog) output variable has an action: Set. A timer which is both: an input and output variable can be triggered by actions like: Start, Stop or Reset.
Virtual environment.
The virtual environment characterises the runtime environment in which a virtual machine operates. It is defined by three sets of names: input names, output names and state names.
The input names build virtual conditions to perform state transitions or input actions. The virtual conditions are built using the positive logic algebra. The output names trigger actions; entry actions, exit actions, input actions or transition actions.
Positive logic algebra.
The rules used to build a virtual condition are described in the following subsections.
Input names and virtual input.
A state of an input is described by Input Names which create a set:
Virtual input codice_3 is a set of mutually exclusive elements of input names. A codice_3 always contains the element codice_5:
Logical operations on input names.
codice_6 (AND) operation is a set of input names:
"A1" & "B3" & "C2" => codice_7
codice_8 (OR) operation is a table of sets of input names:
"A1" | "B3" | "C2" => formula_0
codice_9 (Complement) is a complement of a set of input names:
~"A2" = codice_10
Logical expression.
A logical expression is an OR-table of AND-sets (a disjunctive normal form):
"A1" & "B3" | "A1" & "B2" & "C4" | "C2" => formula_1
Logical expressions are used to express any logical function.
Evaluation of a logical expression.
The logical value (true, false) of a logical expression is calculated by testing whether any of the AND-sets in the OR-table is a subset of codice_3.
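This evaluation rule is straightforward to express in code. In the sketch below, which is an illustration rather than part of the original VFSM tooling, a logical expression is represented as a list of AND-sets and the virtual input as a set of input names:

```python
# A VFSM logical expression is an OR-table of AND-sets; it is true exactly
# when at least one AND-set is a subset of the virtual input VI.

def evaluate(expression, vi):
    """expression: list of AND-sets (sets of input names); vi: set of input names."""
    return any(and_set <= vi for and_set in expression)

# The expression  A1 & B3 | A1 & B2 & C4 | C2  from the text:
expr = [{"A1", "B3"}, {"A1", "B2", "C4"}, {"C2"}]

print(evaluate(expr, {"A1", "B3", "C1"}))  # True: the first AND-set matches
print(evaluate(expr, {"A2", "B1", "C4"}))  # False: no AND-set is a subset of VI
```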
Output names and virtual output.
A state of an output is described by Output Names which create a set:
Virtual output codice_14 is a set of mutually exclusive elements of output names.
Virtual environment.
The Virtual Input and Virtual Output, completed by State Names, create the Virtual Environment codice_15 where the behaviour is specified.
VFSM execution model.
A subset of all defined input names, which can exist only in a certain situation, is called virtual input or codice_3. For instance, temperature can be either "too low", "good" or "too high". Although three input names are defined, only one of them can exist in a real situation; this one builds the codice_3.
A subset of all defined output names, which can exist only in a certain situation, is called virtual output or codice_14. It is built by the current action(s) of the VFSM.
The behavior specification is built by a state table which describes all details of all states of the VFSM.
The VFSM executor is triggered by codice_3 and the current state of the VFSM. In consideration of the behavior specification of the current state, the codice_14 is set.
Figure 2 shows one possible implementation of a VFSM executor. Based on this implementation, typical behavioral characteristics must be considered.
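A minimal executor sketch is given below, following the state-table structure described in the next section. The data layout and the state, input and action names are assumptions made for illustration; they are not taken from the figure or from any particular VFSM implementation:

```python
# Minimal VFSM executor sketch. A state's behaviour holds entry and exit
# actions plus rows of (condition, actions, next_state); a condition is an
# OR-table of AND-sets evaluated against the virtual input VI.

def holds(condition, vi):
    return any(and_set <= vi for and_set in condition)

def step(behaviour, state, vi):
    """Run one execution cycle; return (next_state, virtual_output)."""
    vo = []
    for condition, actions, next_state in behaviour[state]["rows"]:
        if holds(condition, vi):
            if next_state is None:                 # input action
                vo.extend(actions)
            else:                                  # state transition
                vo.extend(behaviour[state]["exit"])
                vo.extend(actions)                 # transition actions
                vo.extend(behaviour[next_state]["entry"])
                return next_state, vo
    return state, vo

behaviour = {
    "Idle":    {"entry": ["Lamp_Off"], "exit": [],
                "rows": [([{"Start_HIGH"}], ["Timer_Start"], "Running")]},
    "Running": {"entry": ["Lamp_On"], "exit": ["Timer_Stop"],
                "rows": [([{"Timer_OVER"}], [], "Idle")]},
}

print(step(behaviour, "Idle", {"Start_HIGH"}))  # ('Running', ['Timer_Start', 'Lamp_On'])
```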
State table.
A "state table" defines all details of the behavior of a state of a VFSM. It consists of three columns; the first column names the state, the second lists virtual conditions built out of input names using the positive logic algebra, and the third column contains the output names:
Read the table as follows: the first two lines define the entry and exit actions of the current state. The following lines, which do not provide the next state, represent the input actions. Finally, the lines providing the next state represent the state transition conditions and transition actions. All fields are optional. A pure combinatorial VFSM is possible only in cases where input actions are used but no state transitions are defined. The transition action can be replaced by the proper use of other actions. | [
{
"math_id": 0,
"text": "\\begin{bmatrix}\n\\{ & A1 & \\} \\\\\n\\{ & B3 & \\} \\\\\n\\{ & C2 & \\} \\\\\n\\end{bmatrix}"
},
{
"math_id": 1,
"text": "\\begin{bmatrix}\n\\{ & A1 & B3 & \\} \\\\\n\\{ & A1 & B2 & C4 & \\} \\\\\n\\{ & C2 & \\} \\\\\n\\end{bmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=1296085 |
12964043 | Merge (linguistics) | Merge is one of the basic operations in the Minimalist Program, a leading approach to generative syntax, when two syntactic objects are combined to form a new syntactic unit (a set). Merge also has the property of recursion in that it may be applied to its own output: the objects combined by Merge are either lexical items or sets that were themselves formed by Merge. This recursive property of Merge has been claimed to be a fundamental characteristic that distinguishes language from other cognitive faculties. As Noam Chomsky (1999) puts it, Merge is "an indispensable operation of a recursive system ... which takes two syntactic objects A and B and forms the new object G={A,B}" (p. 2).
Mechanisms of Merge.
Within the Minimalist Program, syntax is derivational, and Merge is the structure-building operation. Merge is assumed to have certain formal properties constraining syntactic structure, and is implemented with specific mechanisms. In terms of a merge-base theory of language acquisition, complements and specifiers are simply notations for first-merge (read as "complement-of" [head-complement]), and later second-merge (read as "specifier-of" [specifier-head]), with merge always forming to a head. First-merge establishes only a set {a, b} and is not an ordered pair. In its original formulation by Chomsky in 1995, Merge was defined as inherently asymmetric; in Moro 2000 it was first proposed that Merge can generate symmetrical structures provided that they are rescued by movement and asymmetry is restored. For example, an {N, N}-compound of 'boat-house' would allow the ambiguous readings of either 'a kind of house' and/or 'a kind of boat'. It is only with second-merge that order is derived out of a set {a, {a, b}}, which yields the recursive properties of syntax. For example, a 'House-boat' {house, {house, boat}} now reads unambiguously only as a 'kind of boat'. It is this property of recursion that allows for projection and labeling of a phrase to take place; in this case, the Noun 'boat' is the head of the compound, and 'house' acts as a kind of specifier/modifier. External-merge (first-merge) establishes substantive 'base structure' inherent to the VP, yielding theta/argument structure, and may go beyond the lexical-category VP to involve the functional-category light verb vP. Internal-merge (second-merge) establishes more formal aspects related to edge-properties of scope and discourse-related material pegged to CP. In a Phase-based theory, this twin vP/CP distinction follows the "duality of semantics" discussed within the Minimalist Program, and is further developed into a dual distinction regarding a probe-goal relation. As a consequence, at the "external/first-merge-only" stage, young children would show an inability to interpret readings from a given ordered pair, since they would only have access to the mental parsing of a non-recursive set. (See Roeper for a full discussion of recursion in child language acquisition). In addition to word-order violations, other more ubiquitous results of a first-merge stage would show that children's initial utterances lack the recursive properties of inflectional morphology, yielding a strict Non-inflectional stage-1, consistent with an incremental Structure building model of child language.
Binary branching.
Merge takes two objects α and β and combines them, creating a binary structure.
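The recursive, set-forming character of Merge can be sketched as follows; this is a toy illustration only, not a formal implementation of any particular Minimalist proposal:

```python
# Toy sketch of Merge as recursive, unordered set formation (illustration only).
# A syntactic object is either a lexical item (a string) or a frozenset built
# by an earlier application of Merge.

def merge(a, b):
    """Combine two syntactic objects into the unordered set {a, b}."""
    return frozenset({a, b})

vp = merge("eat", "cheesecake")   # first-merge: head + complement
vp2 = merge("she", vp)            # Merge applied to its own output (recursion)

print(vp)   # e.g. frozenset({'eat', 'cheesecake'}); element order may vary
print(vp2)  # a set containing 'she' and the previously built set
```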
Feature checking.
In some variants of the Minimalist Program Merge is triggered by feature checking, e.g. the verb "eat" selects the noun "cheesecake" because the verb has an uninterpretable N-feature [uN] ("u" stands for "uninterpretable"), which must be checked (or deleted) due to full interpretation. By saying that this verb has a nominal uninterpretable feature, we rule out such ungrammatical constructions as *eat beautiful (the verb selects an adjective). Schematically it can be illustrated as:
Strong features.
There are three different accounts of how strong features force movement:
1. "Phonetic Form" (PF) "crash theory" (Chomsky 1993) is conceptually motivated. The argument goes as follows: under the assumption that Logical Form (LF) is invariant, it must be the case that any parametric differences between languages reduce to morphological properties that are reflected at PF (Chomsky 1993:192). Two possible implementations of the PF crash theory are discussed by Chomsky:
PF crash theory: A strong feature that is not checked in overt syntax causes a derivation to crash at PF.
2. "Logical Form" (LF) crash theory (Chomsky 1994) is empirically motivated by VP ellipsis.
LF crash theory: A strong feature that is not checked (and eliminated) in overt syntax causes a derivation to crash at LF.
3. Immediate elimination theory (Chomsky 1995)
Virus theory: A strong feature must be eliminated (almost) immediately upon its introduction into the phrase marker; otherwise, the derivation cancels.
Constraints.
Initially, the cooperation of Last Resort (LR) and the Uniformity Condition (UC) served as the indicator of the structures provided by Bare Phrase Structure which contain labels and are constructed by Move, as well as of the impact of the Structure Preservation Hypothesis.
Projection and labeling.
When we consider the features of the word that provide the label when the word projects, we assume that the categorical feature of the word is always among the features that become the label of the newly created syntactic object. In this example below, Cecchetto demonstrated how projection selects a head as the label.
In this example by Cecchetto (2015), the verb "read" unambiguously labels the structure because "read" is a word, which means it is a probe by definition, in which "read" selects "the book".
The bigger constituent generated by merging the word with the syntactic objects receives the label of the word itself, which allows us to label the tree as demonstrated.
In this tree, the verb "read" is the head selecting the DP "the book", which makes the constituent a VP.
No tampering condition (NTC).
Merge operates blindly, projecting labels in all possible combinations. The subcategorization features of the head act as a filter by admitting only labelled projections that are consistent with the selectional properties of the head. All other alternatives are eliminated. Merge does nothing more than combine two syntactic objects (SO’s) into a unit, but does not affect the properties of the combining elements in any way. This is called the No Tampering Condition (NTC). Therefore, if α (as a syntactic object) has some property before combining with β (which is likewise a syntactic object) it will still have this property after it has combined with β. This allows Merge to account for further merging, which enables structures with movement dependencies (such as wh-movement) to occur. All grammatical dependencies are established under Merge: this means that if α and β are grammatically linked, α and β must have merged.
Bare phrase structure (BPS).
A major development of the Minimalist Program is Bare Phrase Structure (BPS), a theory of phrase structure (structure building operations) developed by Noam Chomsky in 1994. BPS is a representation of the structure of phrases in which syntactic units are not explicitly assigned to categories. The introduction of BPS moves generative grammar towards dependency grammar (discussed below), which operates with significantly less structure than most phrase structure grammars. The constitutive operation of BPS is Merge. Bare phrase structure attempts to: (i) eliminate unnecessary elements; (ii) generate simpler trees; (iii) account for variation across languages.
Bare Phrase Structure defines projection levels according to the following features:
Fundamental properties.
The minimalist program brings into focus four fundamental properties that govern the structure of human language:
Context-sensitive phrase-structure (PS) rules
ABC → ADC
B = single symbol
A, C, D = string of symbols (D = non-null, A and C can be null)
A and C are non-null when the environment in which B needs to be re-written as D is specified.
Context-free phrase-structure (PS) rules
B → D
B = single non-terminal symbol
D = string of non-terminal symbols (can be non-null) where lexical items can be inserted with their subcategorization features
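As a rough illustration of the difference between the two rule formats, the following Python sketch applies a context-free and a context-sensitive rewrite to the same string. The mini-grammar is hypothetical and, for simplicity, treats A, B, C and D as single symbols rather than strings:

```python
# Context-free:      B -> D          (B rewritten as D in any context)
# Context-sensitive: A B C -> A D C  (B rewritten as D only between A and C)

def apply_context_free(string, B, D):
    """Rewrite the first occurrence of symbol B as the string D, regardless of context."""
    for i, sym in enumerate(string):
        if sym == B:
            return string[:i] + D + string[i + 1:]
    return string

def apply_context_sensitive(string, A, B, C, D):
    """Rewrite B as D only when it appears in the environment A _ C."""
    for i in range(1, len(string) - 1):
        if string[i] == B and string[i - 1] == A and string[i + 1] == C:
            return string[:i] + D + string[i + 1:]
    return string

s = ["A", "B", "C", "B"]
print(apply_context_free(s, "B", ["D"]))                 # ['A', 'D', 'C', 'B']  (any context)
print(apply_context_sensitive(s, "A", "B", "C", ["D"]))  # ['A', 'D', 'C', 'B']  (only the B between A and C)
```

The final B in the example string would never be rewritten by the context-sensitive rule, since it does not occur in the specified environment.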
Further developments.
Since the publication of "bare phrase structure" in 1994., other linguists have continued to build on this theory. In 2002, Chris Collins continued research on Chomsky's proposal to eliminate labels, backing up Chomsky's suggestion of a more simple theory of phrase structure. Collins proposed that economy features, such as Minimality, govern derivations and lead to simpler representations.
In more recent work by John Lowe and John Lundstrand, published in 2020, "minimal phrase structure" is formulated as an extension to bare phrase structure and X-bar theory. However, it does not adopt all of the assumptions associated with the Minimalist Program (see above). Lowe and Lundstrand argue that any successful phrase structure theory should include the following seven features:
Although Bare Phrase Structure includes many of these features, it does not include all of them, therefore other theories have attempted to incorporate all of these features in order to present a successful phrase structure theory.
External and internal Merge.
Chomsky (2001) distinguishes between external and internal Merge: if A and B are separate objects then we deal with external Merge; if either of them is part of the other it is internal Merge.
Three controversial aspects of Merge.
As it is commonly understood, standard Merge adopts three key assumptions about the nature of syntactic structure and the faculty of language:
While these three assumptions are taken for granted for the most part by those working within the broad scope of the Minimalist Program, other theories of syntax reject one or more of them.
Merge is commonly seen as merging smaller constituents to greater constituents until the greatest constituent, the sentence, is reached. This bottom-up view of structure generation is rejected by representational (non-derivational) theories (e.g. Generalized Phrase Structure Grammar, Head-Driven Phrase Structure Grammar, Lexical Functional Grammar, most dependency grammars, etc.), and it is contrary to early work in Transformational Grammar. The phrase structure rules of context-free grammar, for instance, generated sentence structure top-down.
The Minimalist view that Merge is strictly binary is justified with the argument that an formula_0-ary Merge where formula_1 would inevitably lead to both under- and overgeneration, and as such Merge must be strictly binary. More formally, the forms of undergeneration given in Marcolli et al. (2023) are such that for any formula_0-ary Merge with formula_1, only strings of length formula_2 for some formula_3 can be generated (so sentences like "it rains" cannot be), and further, there are always strings of length formula_2 that are ambiguous when parsed with binary Merge, which an formula_0-ary Merge with formula_1 would not be able to account for.
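A quick way to see the undergeneration claim is to enumerate the derivable string lengths: if each application of an formula_0-ary Merge combines formula_0 previously built objects, then k applications yield exactly formula_2 lexical items (for some formula_3). A short Python sketch:

```python
# Hypothetical check: string lengths derivable with strictly n-ary Merge are k*(n-1) + 1.
def derivable_lengths(n, max_k=5):
    return [k * (n - 1) + 1 for k in range(1, max_k + 1)]

print(derivable_lengths(2))  # binary Merge:  [2, 3, 4, 5, 6]  -- every length >= 2 is reachable
print(derivable_lengths(3))  # ternary Merge: [3, 5, 7, 9, 11] -- a two-word sentence like "it rains" is unreachable
```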
Further, formula_0-ary Merge where formula_1 is also said to necessarily lead to overgeneration. If we take a binary tree and an formula_0-ary tree with identical sets of leaves, then the binary tree will have a smaller number of accessible pairs of terms compared to the total formula_0-tuples of accessible terms in the formula_0-ary tree. This is responsible for the generation of ungrammatical sentences like "peanuts monkeys children will throw" (as opposed to "children will throw monkeys peanuts") with a ternary Merge. Despite this, there have also been empirical arguments against strictly binary Merge, such as that coming from constituency tests, and so some theories of grammar such as Head-Driven Phrase Structure Grammar still retain formula_0-ary branching in the syntax.
Merge merges two constituents in such a manner that these constituents become sister constituents and are daughters of the newly created mother constituent. This understanding of how structure is generated is constituency-based (as opposed to dependency-based). Dependency grammars (e.g. Meaning-Text Theory, Functional Generative Description, Word grammar) disagree with this aspect of Merge, since they take syntactic structure to be dependency-based.
Comparison to other approaches.
In other approaches to generative syntax, such as Head-driven phrase structure grammar, Lexical functional grammar and other types of unification grammar, the analogue to Merge is the unification operation of graph theory. In these theories, operations over attribute-value matrices (feature structures) are used to account for many of the same facts. Though Merge is usually assumed to be unique to language, the linguists Jonah Katz and David Pesetsky have argued that the harmonic structure of tonal music is also a result of the operation Merge.
This notion of 'merge' may in fact be related to Fauconnier's 'blending' notion in cognitive linguistics.
Phrase structure grammar.
Phrase structure grammar (PSG) represents immediate constituency relations (i.e. how words group together) as well as linear precedence relations (i.e. how words are ordered). In a PSG, a constituent contains at least one member, but has no upper bound. In contrast, with Merge theory, a constituent contains at most two members. Specifically, in Merge theory, each syntactic object is a constituent.
X-bar theory.
X-bar theory is a template that claims that all lexical items project three levels of structure: X, X', and XP. Consequently, there is a three-way distinction between Head, Complement, and Specifier:
While the first application of Merge is equivalent to the Head-Complement relation, the second application of Merge is equivalent to the Specifier-Head relation. However, the two theories differ in the claims they make about the nature of the Specifier-Head-Complement (S-H-C) structure. In X-bar theory, S-H-C is a primitive; an example of this is Kayne's antisymmetry theory. In a Merge theory, S-H-C is derivative.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n \\geq 3"
},
{
"math_id": 2,
"text": "k(n-1) + 1"
},
{
"math_id": 3,
"text": "k \\geq 1"
}
] | https://en.wikipedia.org/wiki?curid=12964043 |
1296442 | Kolchuga passive sensor | Soviet radar detector
The Kolchuga (Кольчуга "Chainmail") passive sensor is an electronic-warfare support measures (ESM) system developed in the Soviet Union and manufactured in Ukraine. Its detection range is limited by line-of-sight but may be up to for very high altitude, very powerful emitters. Frequently referred to as "Kolchuga Radar", the system is not really a radar, but an ESM system comprising three or four receivers, deployed tens of kilometres apart, which detect and track aircraft by triangulation and multilateration of their RF emissions.
History.
Kolchuga was developed in the 1980s by the Rostov military institute of GRU and Topaz radioelectronic factory in Donetsk. Manufactured since 1987, 44 units were produced before 1 January 1992, and 14 of them were left in Ukraine.
After the break up of the Soviet Union, Kolchuga-M was modernized by the Special Radio Device Design Bureau public holding, the Topaz holding, the Donetsk National Technical University, the Ukrspetsexport state company, and the Investment and Technologies Company. It took eight years (1993–2000) to conduct research, develop algorithms, test solutions on experimental specimens, and launch production. The relatively low-cost Ukrainian Kolchuga-M passive radar station is able to detect and identify practically all known active radio devices mounted on ground, airborne, or marine objects.
Mode of operation.
Kolchuga is an electronic support measures system that employs two or more sites to locate emitters by triangulation. The system is vehicle mounted and comprises a large vertical meshed reflector, with two smaller circular parabolic dishes beneath and a pair of VHF-to-microwave log periodic antennas above. The dishes may exploit amplitude monopulse techniques for improved direction finding, whilst the angled spacing of the log-periodic antenna suggests that they may use phase interferometry to improve angle measurements. Various smaller antennas, presumably for inter-site communications are to the side and rear of the dish.
The detection range is one of the best in its class, but it is highly dependent on the emitted power of the transmitter being tracked, and requires satisfaction of the line of sight condition to at least two receiving sites for triangulation (compared with three sites for a multilateration system such as the VERA passive sensor). A Kolchuga complex can detect and locate air and surface targets and trace their movement to a range generally limited only by the "common" line-of-sight of the stations. Assuming no terrain masking, the line-of-sight range of a single Kolchuga station (in km) is approximately:
formula_0
where "hr(km)" is the height of the radar in km, and "ht(km)" is the height of the target in kilometres, and assuming standard atmospheric radio refraction. Thus, for a Kolchuga at 100 m altitude (above local terrain) and a target at 10 km (30 kft), the range of the system would be approximately 450 km. For targets at altitudes of 20 km (60 kft) the line of sight limitation would be 620 km—but few targets fly at such altitudes. Being line-of-sight limited, the system is an effective early warning air defense system against high power emitters.
System parameters.
According to the manufacturer's brochure (from AIDEX 1997), the upgraded Kolchuga-M is equipped:
The brochure also claims that the system provides:
Special inhibitory sorters omit up to 24 interfering signals, and tracking sorters make it possible to synchronously sort out and track signals from 32 targets.
Target identification.
Kolchuga is able to detect and identify many types of radio devices mounted on ground, airborne, or marine objects. Target detection relies only on an emitter having sufficient power and being within Kolchuga's frequency range. Target identification, however, is more complex and is based on the measurement of different parameters of the transmitted signal—such as its frequency, bandwidth, pulse width, pulse repetition interval, etc. Kolchuga has been reported to use around forty different parameters when identifying a target. These parameters are compared to a database in order to identify both the type of emitter and, in some cases, even the specific piece of equipment (by identifying the unique signature or "fingerprint" that most transmitters have, due to the variations and tolerances in individual components). The database within Kolchuga is said to have the capacity to store around three hundred different types of emitter and up to five hundred specific signatures for each type.
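The matching step can be pictured as a nearest-neighbour lookup against a parameter library. The Python sketch below is purely illustrative: the emitter types, parameter names and values are invented for the example, and the real system's forty-odd parameters and signature database are not public.

```python
# Illustrative (hypothetical) parameter-based emitter identification:
# measured signal parameters are compared against a library of known emitter types.
EMITTER_LIBRARY = {
    "airborne fire-control radar": {"freq_ghz": 9.5, "pulse_width_us": 1.2, "pri_us": 250.0},
    "maritime navigation radar":   {"freq_ghz": 3.1, "pulse_width_us": 0.8, "pri_us": 1000.0},
    "ground surveillance radar":   {"freq_ghz": 1.3, "pulse_width_us": 6.0, "pri_us": 2500.0},
}

def identify(measurement):
    """Return the library entry whose parameters are closest (normalized distance) to the measurement."""
    def distance(ref):
        return sum(abs(measurement[k] - ref[k]) / ref[k] for k in ref)
    return min(EMITTER_LIBRARY, key=lambda name: distance(EMITTER_LIBRARY[name]))

print(identify({"freq_ghz": 9.4, "pulse_width_us": 1.1, "pri_us": 260.0}))  # -> "airborne fire-control radar"
```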
Exports.
Pakistan has expressed interest in purchasing the Kolchuga radar. Ukraine has offered it to Pakistan to counter India's Swordfish Long Range Tracking Radar.
In 2002, the U.S. State Department accused Ukraine of selling Kolchuga to Iraq, based on recordings of the then Ukrainian president Leonid Kuchma supposedly made by Mykola Mel'nychenko. This was followed by political steps from the United Kingdom and the United States. No material confirmation has been found in Iraq. See Cassette Scandal for further information.
Unconfirmed reports in September 2006 suggested a sale was made to Iran although this was denied by the Ukrainian government.
On 20 January 2019, Ukrainian news sources confirmed a sale of systems to Israel and Saudi Arabia.
Vietnam.
Vietnam is a confirmed operator of the Kolchuga system. It has reportedly acquired 4 systems at a unit price of $27 million.
Even though the figures mentioned could be inaccurate or vary, the systems have been formally commissioned by the Vietnamese Air Defense - Air Force Service.
Rumours and speculation of performance.
Since becoming publicly known following the Cassette Scandal, the capabilities of Kolchuga have been the source of many rumours and uninformed speculation. Many observers have a tendency to credit it with magical powers of detection. Many of these do not stand up to detailed engineering analysis, or have not been confirmed, but are recorded in this section for completeness, together with reasons for doubting the claim. Note that the material in this section should not be regarded as accurate. Claims include:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d(km) =130(\\sqrt{hr(km)}+\\sqrt{ht(km)})"
}
] | https://en.wikipedia.org/wiki?curid=1296442 |
12965053 | Wigner–Seitz radius | The Wigner–Seitz radius formula_0, named after Eugene Wigner and Frederick Seitz, is the radius of a sphere whose volume is equal to the mean volume per atom in a solid (for first group metals). In the more general case of metals having more valence electrons, formula_0 is the radius of a sphere whose volume is equal to the volume per a free electron. This parameter is used frequently in condensed matter physics to describe the density of a system. Worth to mention, formula_0 is calculated for bulk materials.
Formula.
In a 3-D system with formula_1 free valence electrons in a volume formula_2, the Wigner–Seitz radius is defined by
formula_3
where formula_4 is the particle density. Solving for formula_0 we obtain
formula_5
The radius can also be calculated as
formula_6
where formula_7 is the molar mass, formula_8 is the number of free valence electrons per particle, formula_9 is the mass density, and formula_10 is the Avogadro constant.
This parameter is normally reported in atomic units, i.e., in units of the Bohr radius.
Assuming that each atom in a simple metal cluster occupies the same volume as in a solid, the radius of the cluster is given by
formula_11
where "n" is the number of atoms.
Values of formula_0 for the first group metals:
Wigner–Seitz radius is related to the electronic density by the formula
formula_12
where, "ρ" can be regarded as the average electronic density in the outer portion of the Wigner-Seitz cell.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r_{\\rm s}"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "\\frac{4}{3} \\pi r_{\\rm s}^3 = \\frac{V}{N} = \\frac{1}{n}\\,,"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "r_{\\rm s} = \\left(\\frac{3}{4\\pi n}\\right)^{1/3}."
},
{
"math_id": 6,
"text": "r_{\\rm s}= \\left(\\frac{3M}{4\\pi \\rho N_{V} N_{\\rm A}}\\right)^\\frac{1}{3}\\,,"
},
{
"math_id": 7,
"text": "M"
},
{
"math_id": 8,
"text": "N_{V}"
},
{
"math_id": 9,
"text": "\\rho"
},
{
"math_id": 10,
"text": "N_{\\rm A}"
},
{
"math_id": 11,
"text": "R_0 = r_s n^{1/3}"
},
{
"math_id": 12,
"text": "r_s =0.62035 \\rho^{1/3}"
}
] | https://en.wikipedia.org/wiki?curid=12965053 |
12970 | Gambler's fallacy | Mistaken belief that more frequent chance events will lead to less frequent chance events
The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the belief that, if an event (whose occurrences are independent and identically distributed) has occurred less frequently than expected, it is more likely to happen again in the future (or vice versa). The fallacy is commonly associated with gambling, where it may be believed, for example, that the next dice roll is more than usually likely to be six because there have recently been fewer than the expected number of sixes.
The term "Monte Carlo fallacy" originates from an example of the phenomenon, in which the roulette wheel spun black 26 times in succession at the Monte Carlo Casino in 1913.
Examples.
Coin toss.
The gambler's fallacy can be illustrated by considering the repeated toss of a fair coin. The outcomes in different tosses are statistically independent and the probability of getting heads on a single toss is 1/2 (one in two). The probability of getting two heads in two tosses is 1/4 (one in four) and the probability of getting three heads in three tosses is 1/8 (one in eight). In general, if "Ai" is the event where toss "i" of a fair coin comes up heads, then:
formula_0.
If after tossing four heads in a row, the next coin toss also came up heads, it would complete a run of five successive heads. Since the probability of a run of five successive heads is 1/32 (one in thirty-two), a person might believe that the next flip would be more likely to come up tails rather than heads again. This is incorrect and is an example of the gambler's fallacy. The event "5 heads in a row" and the event "first 4 heads, then a tails" are equally likely, each having probability 1/32. Since the first four tosses turn up heads, the probability that the next toss is a head is:
formula_1.
While a run of five heads has a probability of 1/32 = 0.03125 (a little over 3%), the misunderstanding lies in not realizing that this is the case "only before the first coin is tossed". After the first four tosses in this example, the results are no longer unknown, so their probabilities are at that point equal to 1 (100%). The probability of a run of coin tosses of any length continuing for one more toss is always 0.5. The reasoning that a fifth toss is more likely to be tails because the previous four tosses were heads, with a run of luck in the past influencing the odds in the future, forms the basis of the fallacy.
Why the probability is 1/2 for a fair coin.
If a fair coin is flipped 21 times, the probability of 21 heads is 1 in 2,097,152. The probability of flipping a head after having already flipped 20 heads in a row is 1/2. Assuming a fair coin:
The probability of getting 20 heads then 1 tail, and the probability of getting 20 heads then another head are both 1 in 2,097,152. When flipping a fair coin 21 times, the outcome is equally likely to be 21 heads as 20 heads and then 1 tail. These two outcomes are as likely as any of the other combinations that can be obtained from 21 flips of a coin. All of the 21-flip combinations will have probabilities equal to 0.5^21, or 1 in 2,097,152. Assuming that a change in the probability will occur as a result of the outcome of prior flips is incorrect because every outcome of a 21-flip sequence is as likely as the other outcomes. In accordance with Bayes' theorem, the likely outcome of each flip is the probability of the fair coin, which is 1/2.
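The arithmetic above is easy to verify, both exactly and by simulation. A minimal Python sketch:

```python
from fractions import Fraction
import random

# Exact probabilities for the fair-coin example above.
p_run_of_5 = Fraction(1, 2) ** 5    # 1/32: five heads in a row, computed before any toss
p_4H_then_T = Fraction(1, 2) ** 5   # 1/32: four heads then a tail -- exactly as likely
print(p_run_of_5, p_4H_then_T)      # 1/32 1/32

# Monte Carlo check that P(heads | previous four tosses were heads) stays at 1/2.
random.seed(0)
fifth_after_4_heads = []
for _ in range(1_000_000):
    tosses = [random.random() < 0.5 for _ in range(5)]
    if all(tosses[:4]):                       # condition on the first four tosses being heads
        fifth_after_4_heads.append(tosses[4])
print(sum(fifth_after_4_heads) / len(fifth_after_4_heads))   # ~0.50
```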
Other examples.
The fallacy leads to the incorrect notion that previous failures will create an increased probability of success on subsequent attempts. For a fair 16-sided die, the probability of each outcome occurring is 1/16 (6.25%). If a win is defined as rolling a 1, the probability of a 1 occurring at least once in 16 rolls is:
formula_2
The probability of a loss on the first roll is 15/16 (93.75%). According to the fallacy, the player should have a higher chance of winning after one loss has occurred. The probability of at least one win is now:
formula_3
By losing one toss, the player's probability of winning drops by two percentage points. With 5 losses and 11 rolls remaining, the probability of winning drops to around 0.5 (50%). The probability of at least one win does not increase after a series of losses; indeed, the probability of success "actually decreases", because there are fewer trials left in which to win. The probability of winning will eventually be equal to the probability of winning a single toss, which is 1/16 (6.25%) and occurs when only one toss is left.
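The same calculation for the 16-sided-die example, as a short sketch:

```python
# Probability of rolling at least one 1 ("win") with a fair 16-sided die in the remaining rolls.
def p_at_least_one_win(rolls_remaining, p_win=1/16):
    return 1 - (1 - p_win) ** rolls_remaining

for remaining in (16, 15, 11, 1):
    print(remaining, round(p_at_least_one_win(remaining), 4))
# 16 0.6439   15 0.6202   11 0.5083   1 0.0625
```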
Reverse position.
After a consistent tendency towards tails, a gambler may also decide that tails has become a more likely outcome. This is a rational and Bayesian conclusion, bearing in mind the possibility that the coin may not be fair; it is not a fallacy. Believing the odds to favor tails, the gambler sees no reason to change to heads. However it is a fallacy that a sequence of trials carries a memory of past results which tend to favor or disfavor future outcomes.
The inverse gambler's fallacy described by Ian Hacking is a situation where a gambler entering a room and seeing a person rolling a double six on a pair of dice may erroneously conclude that the person must have been rolling the dice for quite a while, as they would be unlikely to get a double six on their first attempt.
Retrospective gambler's fallacy.
Researchers have examined whether a similar bias exists for inferences about unknown past events based upon known subsequent events, calling this the "retrospective gambler's fallacy".
An example of a retrospective gambler's fallacy would be to observe multiple successive "heads" on a coin toss and conclude from this that the previously unknown flip was "tails". Real world examples of retrospective gambler's fallacy have been argued to exist in events such as the origin of the Universe. In his book "Universes", John Leslie argues that "the presence of vastly many universes very different in their characters might be our best explanation for why at least one universe has a life-permitting character". Daniel M. Oppenheimer and Benoît Monin argue that "In other words, the 'best explanation' for a low-probability event is that it is only one in a multiple of trials, which is the core intuition of the reverse gambler's fallacy." Philosophical arguments are ongoing about whether such arguments are or are not a fallacy, arguing that the occurrence of our universe says nothing about the existence of other universes or trials of universes. Three studies involving Stanford University students tested the existence of a retrospective gamblers' fallacy. All three studies concluded that people have a gamblers' fallacy retrospectively as well as to future events. The authors of all three studies concluded their findings have significant "methodological implications" but may also have "important theoretical implications" that need investigation and research, saying "[a] thorough understanding of such reasoning processes requires that we not only examine how they influence our predictions of the future, but also our perceptions of the past."
Childbirth.
In 1796, Pierre-Simon Laplace described in "A Philosophical Essay on Probabilities" the ways in which men calculated their probability of having sons: "I have seen men, ardently desirous of having a son, who could learn only with anxiety of the births of boys in the month when they expected to become fathers. Imagining that the ratio of these births to those of girls ought to be the same at the end of each month, they judged that the boys already born would render more probable the births next of girls." The expectant fathers feared that if more sons were born in the surrounding community, then they themselves would be more likely to have a daughter. This essay by Laplace is regarded as one of the earliest descriptions of the fallacy. Likewise, after having multiple children of the same sex, some parents may erroneously believe that they are due to have a child of the opposite sex.
Monte Carlo Casino.
An example of the gambler's fallacy occurred in a game of roulette at the Monte Carlo Casino on August 18, 1913, when the ball fell in black 26 times in a row. This was an extremely unlikely occurrence: the probability of a sequence of either red or black occurring 26 times in a row is (18/37)^(26-1), or around 1 in 66.6 million, assuming the mechanism is unbiased. Gamblers lost millions of francs betting against black, reasoning incorrectly that the streak was causing an imbalance in the randomness of the wheel, and that it had to be followed by a long streak of red.
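The quoted figure corresponds to a single-zero wheel, where each colour has probability 18/37 on each spin (an assumption consistent with the 1-in-66.6-million figure):

```python
p = (18 / 37) ** (26 - 1)   # probability that the colour fixed by the first spin repeats 25 more times
print(p, 1 / p)             # ~1.5e-08, i.e. about 1 in 66.6 million
```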
Non-examples.
Non-independent events.
The gambler's fallacy does not apply when the probability of different events is not independent. In such cases, the probability of future events can change based on the outcome of past events, such as the statistical permutation of events. An example is when cards are drawn from a deck without replacement. If an ace is drawn from a deck and not reinserted, the next card drawn is less likely to be an ace and more likely to be of another rank. The probability of drawing another ace, assuming that it was the first card drawn and that there are no jokers, has decreased from 4/52 (7.69%) to 3/51 (5.88%), while the probability for each other rank has increased from 4/52 (7.69%) to 4/51 (7.84%). This effect allows card counting systems to work in games such as blackjack.
Bias.
In most illustrations of the gambler's fallacy and the reverse gambler's fallacy, the trial (e.g. flipping a coin) is assumed to be fair. In practice, this assumption may not hold. For example, if a coin is flipped 21 times, the probability of 21 heads with a fair coin is 1 in 2,097,152. Since this probability is so small, if it happens, it may well be that the coin is somehow biased towards landing on heads, or that it is being controlled by hidden magnets, or similar. In this case, the smart bet is "heads" because Bayesian inference from the empirical evidence — 21 heads in a row — suggests that the coin is likely to be biased toward heads. Bayesian inference can be used to show that when the long-run proportion of different outcomes is unknown but exchangeable (meaning that the random process from which the outcomes are generated may be biased but is equally likely to be biased in any direction) and that previous observations demonstrate the likely direction of the bias, the outcome which has occurred the most in the observed data is the most likely to occur again.
For example, if the "a priori" probability of a biased coin is say 1%, and assuming that such a biased coin would come down heads say 60% of the time, then after 21 heads the probability of a biased coin has increased to about 32%.
The opening scene of the play "Rosencrantz and Guildenstern Are Dead" by Tom Stoppard discusses these issues as one man continually flips heads and the other considers various possible explanations.
Changing probabilities.
If external factors are allowed to change the probability of the events, the gambler's fallacy may not hold. For example, a change in the game rules might favour one player over the other, improving his or her win percentage. Similarly, an inexperienced player's success may decrease after opposing teams learn about and play against their weaknesses. This is another example of bias.
Psychology.
Origins.
The gambler's fallacy arises out of a belief in a law of small numbers, leading to the erroneous belief that small samples must be representative of the larger population. According to the fallacy, streaks must eventually even out in order to be representative. Amos Tversky and Daniel Kahneman first proposed that the gambler's fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic, which states that people evaluate the probability of a certain event by assessing how similar it is to events they have experienced before, and how similar the events surrounding those two processes are. According to this view, "after observing a long run of red on the roulette wheel, for example, most people erroneously believe that black will result in a more representative sequence than the occurrence of an additional red", so people expect that a short run of random outcomes should share properties of a longer run, specifically in that deviations from average should balance out. When people are asked to make up a random-looking sequence of coin tosses, they tend to make sequences where the proportion of heads to tails stays closer to 0.5 in any short segment than would be predicted by chance, a phenomenon known as insensitivity to sample size. Kahneman and Tversky interpret this to mean that people believe short sequences of random events should be representative of longer ones. The representativeness heuristic is also cited behind the related phenomenon of the clustering illusion, according to which people see streaks of random events as being non-random when such streaks are actually much more likely to occur in small samples than people expect.
The gambler's fallacy can also be attributed to the mistaken belief that gambling, or even chance itself, is a fair process that can correct itself in the event of streaks, known as the just-world hypothesis. Other researchers believe that belief in the fallacy may be the result of a mistaken belief in an internal locus of control. When a person believes that gambling outcomes are the result of their own skill, they may be more susceptible to the gambler's fallacy because they reject the idea that chance could overcome skill or talent.
Variations.
Some researchers believe that it is possible to define two types of gambler's fallacy: type one and type two. Type one is the classic gambler's fallacy, where individuals believe that a particular outcome is due after a long streak of another outcome. Type two gambler's fallacy, as defined by Gideon Keren and Charles Lewis, occurs when a gambler underestimates how many observations are needed to detect a favorable outcome, such as watching a roulette wheel for a length of time and then betting on the numbers that appear most often. For events with a high degree of randomness, detecting a bias that will lead to a favorable outcome takes an impractically large amount of time and is very difficult, if not impossible, to do. The two types differ in that type one wrongly assumes that gambling conditions are fair and perfect, while type two assumes that the conditions are biased, and that this bias can be detected after a certain amount of time.
Another variety, known as the retrospective gambler's fallacy, occurs when individuals judge that a seemingly rare event must come from a longer sequence than a more common event does. For example, people believe that an imaginary sequence of die rolls is more than three times as long when a set of three sixes is observed as opposed to when there are only two sixes. This effect can be observed in isolated instances, or even sequentially. Another example would involve hearing that a teenager has unprotected sex and becomes pregnant on a given night, and concluding that she has been engaging in unprotected sex for longer than if we hear she had unprotected sex but did not become pregnant, when the probability of becoming pregnant as a result of each intercourse is independent of the amount of prior intercourse.
Relationship to hot-hand fallacy.
Another psychological perspective states that gambler's fallacy can be seen as the counterpart to basketball's hot-hand fallacy, in which people tend to predict the same outcome as the previous event - known as positive recency - resulting in a belief that a high scorer will continue to score. In the gambler's fallacy, people predict the opposite outcome of the previous event - negative recency - believing that since the roulette wheel has landed on black on the previous six occasions, it is due to land on red the next. Ayton and Fischer have theorized that people display positive recency for the hot-hand fallacy because the fallacy deals with human performance, and that people do not believe that an inanimate object can become "hot." Human performance is not perceived as random, and people are more likely to continue streaks when they believe that the process generating the results is nonrandom. When a person exhibits the gambler's fallacy, they are more likely to exhibit the hot-hand fallacy as well, suggesting that one construct is responsible for the two fallacies.
The difference between the two fallacies is also found in economic decision-making. A study by Huber, Kirchler, and Stockl in 2010 examined how the hot hand and the gambler's fallacy are exhibited in the financial market. The researchers gave their participants a choice: they could either bet on the outcome of a series of coin tosses, use an expert opinion to sway their decision, or choose a risk-free alternative instead for a smaller financial reward. Participants turned to the expert opinion to make their decision 24% of the time based on their past experience of success, which exemplifies the hot-hand. If the expert was correct, 78% of the participants chose the expert's opinion again, as opposed to 57% doing so when the expert was wrong. The participants also exhibited the gambler's fallacy, with their selection of either heads or tails decreasing after noticing a streak of either outcome. This experiment helped bolster Ayton and Fischer's theory that people put more faith in human performance than they do in seemingly random processes.
Neurophysiology.
While the representativeness heuristic and other cognitive biases are the most commonly cited cause of the gambler's fallacy, research suggests that there may also be a neurological component. Functional magnetic resonance imaging has shown that after losing a bet or gamble, known as riskloss, the frontoparietal network of the brain is activated, resulting in more risk-taking behavior. In contrast, there is decreased activity in the amygdala, caudate, and ventral striatum after a riskloss. Activation in the amygdala is negatively correlated with gambler's fallacy, so that the more activity exhibited in the amygdala, the less likely an individual is to fall prey to the gambler's fallacy. These results suggest that gambler's fallacy relies more on the prefrontal cortex, which is responsible for executive, goal-directed processes, and less on the brain areas that control affective decision-making.
The desire to continue gambling or betting is controlled by the striatum, which supports a choice-outcome contingency learning method. The striatum processes the errors in prediction and the behavior changes accordingly. After a win, the positive behavior is reinforced and after a loss, the behavior is conditioned to be avoided. In individuals exhibiting the gambler's fallacy, this choice-outcome contingency method is impaired, and they continue to make risks after a series of losses.
Possible solutions.
The gambler's fallacy is a deep-seated cognitive bias and can be very hard to overcome. Educating individuals about the nature of randomness has not always proven effective in reducing or eliminating any manifestation of the fallacy. Participants in a study by Beach and Swensson in 1967 were shown a shuffled deck of index cards with shapes on them, and were instructed to guess which shape would come next in a sequence. The experimental group of participants was informed about the nature and existence of the gambler's fallacy, and were explicitly instructed not to rely on run dependency to make their guesses. The control group was not given this information. The response styles of the two groups were similar, indicating that the experimental group still based their choices on the length of the run sequence. This led to the conclusion that instructing individuals about randomness is not sufficient in lessening the gambler's fallacy.
An individual's susceptibility to the gambler's fallacy may decrease with age. A study by Fischbein and Schnarch in 1997 administered a questionnaire to five groups: students in grades 5, 7, 9, 11, and college students specializing in teaching mathematics. None of the participants had received any prior education regarding probability. The question asked was: "Ronni flipped a coin three times and in all cases heads came up. Ronni intends to flip the coin again. What is the chance of getting heads the fourth time?" The results indicated that as the students got older, the less likely they were to answer with "smaller than the chance of getting tails", which would indicate a negative recency effect. 35% of the 5th graders, 35% of the 7th graders, and 20% of the 9th graders exhibited the negative recency effect. Only 10% of the 11th graders answered this way, and none of the college students did. Fischbein and Schnarch theorized that an individual's tendency to rely on the representativeness heuristic and other cognitive biases can be overcome with age.
Another possible solution comes from Roney and Trick, Gestalt psychologists who suggest that the fallacy may be eliminated as a result of grouping. When a future event such as a coin toss is described as part of a sequence, no matter how arbitrarily, a person will automatically consider the event as it relates to the past events, resulting in the gambler's fallacy. When a person considers every event as independent, the fallacy can be greatly reduced.
Roney and Trick told participants in their experiment that they were betting on either two blocks of six coin tosses, or on two blocks of seven coin tosses. The fourth, fifth, and sixth tosses all had the same outcome, either three heads or three tails. The seventh toss was grouped with either the end of one block, or the beginning of the next block. Participants exhibited the strongest gambler's fallacy when the seventh trial was part of the first block, directly after the sequence of three heads or tails. The researchers pointed out that the participants that did not show the gambler's fallacy showed less confidence in their bets and bet fewer times than the participants who picked with the gambler's fallacy. When the seventh trial was grouped with the second block, and was perceived as not being part of a streak, the gambler's fallacy did not occur.
Roney and Trick argued that instead of teaching individuals about the nature of randomness, the fallacy could be avoided by training people to treat each event as if it is a beginning and not a continuation of previous events. They suggested that this would prevent people from gambling when they are losing, in the mistaken hope that their chances of winning are due to increase based on an interaction with previous events.
Users.
Types of users.
Within a real-world setting, numerous studies have uncovered that for various decision makers placed in high stakes scenarios, it is likely they will reflect some degree of strong negative autocorrelation in their judgement.
Asylum judges.
In a study aimed at discovering if the negative autocorrelation that exists with the gambler's fallacy existed in the decision made by U.S. asylum judges, results showed that after two successive asylum grants, a judge would be 5.5% less likely to approve a third grant.
Baseball umpires.
In the game of baseball, decisions are made every minute. One particular decision made by umpires which is often subject to scrutiny is the 'strike zone' decision. Whenever a batter does not swing, the umpire must decide if the ball was within a fair region for the batter, known as the strike zone. If outside of this zone, the ball does not count towards outing the batter. In a study of over 12,000 games, results showed that umpires are 1.3% less likely to call a strike if the previous two balls were also strikes.
Loan officers.
In the decision making of loan officers, it can be argued that monetary incentives are a key factor in biased decision making, rendering it harder to examine the gambler's fallacy effect. However, research shows that loan officers who are not incentivised by monetary gain are 8% less likely to approve a loan if they approved one for the previous client.
Lottery players.
Lottery play and jackpots entice gamblers around the globe, with the biggest decision for hopeful winners being what numbers to pick. While most people will have their own strategy, evidence shows that after a number is selected as a winner in the current draw, the same number will experience a significant drop in selections in the following lottery. A popular study by Charles Clotfelter and Philip Cook investigated this effect in 1991, where they concluded bettors would cease to select numbers immediately after they were selected, ultimately recovering selection popularity within three months. Soon after, a 1994 study was constructed by Dek Terrell to test the findings of Clotfelter and Cook. The key change in Terrell's study was the examination of a pari-mutuel lottery in which a number selected with lower total wagers placed on it will result in a higher pay-out. While this examination did conclude that players in both types of lotteries exhibited behaviour in-line with the gambler's fallacy theory, those who took part in pari-mutuel betting seemed to be less influenced.
The effect of the gambler's fallacy can be observed as numbers are chosen far less frequently soon after they are selected as winners, recovering slowly over a two-month period. For example, on the 11th of April 1988, 41 players selected 244 as the winning combination. Three days later only 24 individuals selected 244, a 41.5% decrease. This is the gambler's fallacy in motion, as lottery players believe that the occurrence of a winning combination in previous days will decrease its likelihood of occurring today.
Video game players.
Several video games feature the use of loot boxes, a collection of in-game items awarded on opening with random contents set by rarity metrics, as a monetization scheme. Since around 2018, loot boxes have come under scrutiny from governments and advocates on the basis they are akin to gambling, particularly for games aimed at youth. Some games use a special "pity-timer" mechanism: if the player has opened several loot boxes in a row without obtaining a high-rarity item, subsequent loot boxes will improve the odds of a higher-rarity item drop. This is considered to feed into the gambler's fallacy since it reinforces the idea that a player will eventually obtain a high-rarity item (a win) after only receiving common items from a string of previous loot boxes.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Pr\\left(\\bigcap_{i=1}^n A_i\\right)=\\prod_{i=1}^n \\Pr(A_i)={1\\over2^n}"
},
{
"math_id": 1,
"text": "\\Pr\\left(A_5|A_1 \\cap A_2 \\cap A_3 \\cap A_4 \\right)=\\Pr\\left(A_5\\right)=\\frac{1}{2}"
},
{
"math_id": 2,
"text": "1-\\left[\\frac{15}{16}\\right]^{16} \\,=\\, 64.39\\%"
},
{
"math_id": 3,
"text": "1-\\left[\\frac{15}{16}\\right]^{15} \\,=\\, 62.02\\%"
}
] | https://en.wikipedia.org/wiki?curid=12970 |
1297317 | No free lunch theorem | Mathematical folklore
In mathematical folklore, the "no free lunch" (NFL) theorem (sometimes pluralized) of David Wolpert and William Macready, alludes to the saying "no such thing as a free lunch", that is, there are no easy shortcuts to success. It appeared in the 1997 "No Free Lunch Theorems for Optimization". Wolpert had previously derived no free lunch theorems for machine learning (statistical inference).
In 2005, Wolpert and Macready themselves indicated that the first theorem in their paper "state[s] that any two optimization algorithms are equivalent when their performance is averaged across all possible problems".
The "no free lunch" (NFL) theorem is an easily stated and easily understood consequence of theorems Wolpert and Macready actually prove. It is weaker than the proven theorems, and thus does not encapsulate them. Various investigators have extended the work of Wolpert and Macready substantively. In terms of how the NFL theorem is used in the context of the research area, the no free lunch in search and optimization is a field that is dedicated for purposes of mathematically analyzing data for statistical identity, particularly search and optimization.
While some scholars argue that NFL conveys important insight, others argue that NFL is of little relevance to machine learning research.
Example.
Posit a toy universe that exists for exactly two days and on each day contains exactly one object: a square or a triangle. The universe has exactly four possible histories: (1) square on day 1, then triangle on day 2; (2) square, then square; (3) triangle, then triangle; (4) triangle, then square.
Any prediction strategy that succeeds for history #2, by predicting a square on day 2 if there is a square on day 1, will fail on history #1, and vice versa. If all histories are equally likely, then any prediction strategy will score the same, with the same accuracy rate of 0.5.
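The claim can be checked exhaustively: there are only four deterministic prediction strategies (maps from the day-1 shape to a day-2 guess), and each scores 0.5 when averaged over the four equally likely histories. A minimal Python sketch:

```python
from itertools import product

shapes = ["square", "triangle"]
histories = list(product(shapes, repeat=2))   # the four possible (day 1, day 2) histories

# Every deterministic prediction strategy maps the day-1 shape to a day-2 guess.
strategies = [dict(zip(shapes, guesses)) for guesses in product(shapes, repeat=2)]

for strategy in strategies:
    accuracy = sum(strategy[day1] == day2 for day1, day2 in histories) / len(histories)
    print(strategy, accuracy)   # every strategy scores exactly 0.5
```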
Origin.
Wolpert and Macready give two NFL theorems that are closely related to the folkloric theorem. In their paper, they state:
<templatestyles src="Template:Blockquote/styles.css" />
The first theorem hypothesizes objective functions that do not change while optimization is in progress, and the second hypothesizes objective functions that may change.
<templatestyles src="Math_theorem/styles.css" />
Theorem — For any algorithms "a"1 and "a"2, at iteration step "m"
formula_0
where formula_1 denotes the ordered set of size formula_2 of the cost values formula_3 associated to input values formula_4, formula_5 is the function being optimized and formula_6 is the conditional probability of obtaining a given sequence of cost values from algorithm formula_7 run formula_2 times on function formula_8.
The theorem can be equivalently formulated as follows:
<templatestyles src="Math_theorem/styles.css" />
Theorem — Given a finite set formula_9 and a finite set formula_10 of real numbers, assume that formula_11 is chosen at random according to uniform distribution on the set formula_12 of all possible functions from formula_9 to formula_10. For the problem of optimizing formula_8 over the set formula_9, then no algorithm performs better than blind search.
Here, "blind search" means that at each step of the algorithm, the element formula_13 is chosen at random with uniform probability distribution from the elements of formula_9 that have not been chosen previously.
In essence, this says that when all functions "f" are equally likely, the probability of observing an arbitrary sequence of "m" values in the course of optimization does not depend upon the algorithm. In the analytic framework of Wolpert and Macready, performance is a function of the sequence of observed values (and not e.g. of wall-clock time), so it follows easily that all algorithms have identically distributed performance when objective functions are drawn uniformly at random, and also that all algorithms have identical mean performance. But identical mean performance of all algorithms does not imply Theorem 1, and thus the folkloric theorem is not equivalent to the original theorem.
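The theorem can also be illustrated by brute force on a tiny search space. The sketch below is an illustration, not Wolpert and Macready's own construction: it enumerates every objective function from a four-point domain to three cost values and shows that a fixed-order search and an arbitrary adaptive, non-revisiting search produce every sequence of formula_2 observed cost values equally often (here formula_2 = 3), hence identical performance distributions.

```python
from itertools import product
from collections import Counter

X = [0, 1, 2, 3]   # a four-point search space
Y = [0, 1, 2]      # possible cost values
m = 3              # number of evaluations per run

def fixed_order(f, m):
    """Algorithm a1: evaluate points in a fixed order, ignoring observed values."""
    return tuple(f[x] for x in X[:m])

def adaptive(f, m):
    """Algorithm a2: a deterministic, non-revisiting rule whose next point depends on values seen so far."""
    visited, seq, x = set(), [], 3
    for _ in range(m):
        visited.add(x)
        seq.append(f[x])
        remaining = [p for p in X if p not in visited]
        if remaining:
            x = remaining[sum(seq) % len(remaining)]
    return tuple(seq)

counts = {alg.__name__: Counter() for alg in (fixed_order, adaptive)}
for values in product(Y, repeat=len(X)):          # all |Y|^|X| = 81 objective functions
    f = dict(zip(X, values))
    for alg in (fixed_order, adaptive):
        counts[alg.__name__][alg(f, m)] += 1

# Each algorithm sees every length-m value sequence for exactly |Y|^(|X|-m) = 3 functions,
# so, averaged over all functions, their performance distributions are identical.
print(counts["fixed_order"] == counts["adaptive"])   # True
```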
Theorem 2 establishes a similar, but "more subtle", NFL result for time-varying objective functions.
Motivation.
The NFL theorems were explicitly "not" motivated by the question of what can be inferred (in the case of NFL for machine learning) or found (in the case of NFL for search) when the "environment is uniform random". Rather uniform randomness was used as a tool, to compare the number of environments for which algorithm A outperforms algorithm B to the number of environments for which B outperforms A. NFL tells us that (appropriately weighted) there are just as many environments in both of those sets.
This is true for many definitions of what precisely an "environment" is. In particular, there are just as many prior distributions (appropriately weighted) in which learning algorithm A beats B (on average) as vice versa. This statement about "sets of priors" is what is most important about NFL, not the fact that any two algorithms perform equally for the single, specific prior distribution that assigns equal probability to all environments.
While the NFL is important to understand the fundamental limitation for a set of problems, it does not state anything about each particular instance of a problem that can arise in practice. That is, the NFL states what is contained in its mathematical statements and it is nothing more than that. For example, it applies to the situations where the algorithm is fixed a priori and a worst-case problem for the fixed algorithm is chosen a posteriori. Therefore, if we have a "good" problem in practice or if we can choose a "good" learning algorithm for a given particular problem instance, then the NFL does not mention any limitation about this particular problem instance. Though the NFL might seem contradictory to results from other papers suggesting generalization of learning algorithms or search heuristics, it is important to understand the difference between the exact mathematical logic of the NFL and its intuitive interpretation.
Implications.
To illustrate one of the counter-intuitive implications of NFL, suppose we fix two supervised learning algorithms, C and D. We then sample a target function f to produce a set of input-output pairs, "d". The question is how should we choose whether to train C or D on "d", in order to make predictions for what output would be associated with a point lying outside of "d."
It is common in almost all of science and statistics to answer this question – to choose between C and D – by running cross-validation on "d" with those two algorithms. In other words, to decide whether to generalize from "d" with either C or D"," we see which of them has better out-of-sample performance when tested within "d".
Since C and D are fixed, this use of cross-validation to choose between them is itself an algorithm, i.e., a way of generalizing from an arbitrary dataset. Call this algorithm A. (Arguably, A is a simplified model of the scientific method itself.)
We could also use "anti"-cross-validation to make our choice. In other words, we could choose between C and D based on which has "worse" out-of-sample performance within "d". Again, since C and D are fixed, this use of anti-cross-validation is itself an algorithm. Call that algorithm B.
NFL tells us (loosely speaking) that B must beat A on just as many target functions (and associated datasets "d") as A beats B. In this very specific sense, the scientific method will lose to the "anti" scientific method just as readily as it wins.
NFL only applies if the target function is chosen from a uniform distribution of all possible functions. If this is not the case, and certain target functions are more likely to be chosen than others, then A may perform better than B overall. The contribution of NFL is that it tells us that choosing an appropriate algorithm requires making assumptions about the kinds of target functions the algorithm is being used for. With no assumptions, no "meta-algorithm", such as the scientific method, performs better than random choice.
While some scholars argue that NFL conveys important insight, others argue that NFL is of little relevance to machine learning research. If Occam's razor is correct, for example if sequences of lower Kolmogorov complexity are more probable than sequences of higher complexity, then (as is observed in real life) some algorithms, such as cross-validation, perform better on average on practical problems (when compared with random choice or with anti-cross-validation).
However, there are major formal challenges in using arguments based on Kolmogorov complexity to establish properties of the real world, since it is uncomputable, and undefined up to an arbitrary additive constant. Partly in recognition of these challenges, it has recently been argued that there are ways to circumvent the no free lunch theorems without invoking Turing machines, by using "meta-induction". Moreover, the Kolmogorov complexity of machine learning models can be upper bounded through compressions of their data labeling, and it is possible to produce non-vacuous cross-domain generalization bounds via Kolmogorov complexity.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_f P(d_m^y \\mid f, m, a_1) = \\sum_f P(d_m^y \\mid f, m, a_2),"
},
{
"math_id": 1,
"text": "d_m^y"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "x \\in X"
},
{
"math_id": 5,
"text": "f:X \\rightarrow Y "
},
{
"math_id": 6,
"text": "P(d_m^y \\mid f, m, a)"
},
{
"math_id": 7,
"text": "a"
},
{
"math_id": 8,
"text": "f"
},
{
"math_id": 9,
"text": "V"
},
{
"math_id": 10,
"text": "S"
},
{
"math_id": 11,
"text": "f : V \\to S"
},
{
"math_id": 12,
"text": "S^{V\\!}"
},
{
"math_id": 13,
"text": "v \\in V"
}
] | https://en.wikipedia.org/wiki?curid=1297317 |
1297402 | No free lunch in search and optimization | Average solution cost is the same with any method
In computational complexity and optimization the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same for any solution method. The name alludes to the saying "no such thing as a free lunch", that is, no method offers a "short cut". This holds under the assumption that the objective function is drawn from a probability distribution over all possible functions, so that the search space offers no exploitable structure. It does not apply to the case where the search space has underlying structure (e.g., is a differentiable function) that can be exploited more efficiently (e.g., Newton's method in optimization) than random search or even has closed-form solutions (e.g., the extrema of a quadratic polynomial) that can be determined without search at all. Under such probabilistic assumptions, the outputs of all procedures solving a particular type of problem are statistically identical. A colourful way of describing such a circumstance, introduced by David Wolpert and William G. Macready in connection with the problems of search and optimization, is to say that there is no free lunch. Wolpert had previously derived no free lunch theorems for machine learning (statistical inference).
Before Wolpert's article was published, Cullen Schaffer independently proved a restricted version of one of Wolpert's theorems and used it to critique the current state of machine learning research on the problem of induction.
In the "no free lunch" metaphor, each "restaurant" (problem-solving procedure) has a "menu" associating each "lunch plate" (problem) with a "price" (the performance of the procedure in solving the problem). The menus of restaurants are identical except in one regard – the prices are shuffled from one restaurant to the next. For an omnivore who is as likely to order each plate as any other, the average cost of lunch does not depend on the choice of restaurant. But a vegan who goes to lunch regularly with a carnivore who seeks economy might pay a high average cost for lunch. To methodically reduce the average cost, one must use advance knowledge of a) what one will order and b) what the order will cost at various restaurants. That is, improvement of performance in problem-solving hinges on using prior information to match procedures to problems.
In formal terms, there is no free lunch when the probability distribution on problem instances is such that all problem solvers have identically distributed results. In the case of search, a problem instance in this context is a particular objective function, and a result is a sequence of values obtained in evaluation of candidate solutions in the domain of the function. For typical interpretations of results, search is an optimization process. There is no free lunch in search if and only if the distribution on objective functions is invariant under permutation of the space of candidate solutions. This condition does not hold precisely in practice, but an "(almost) no free lunch" theorem suggests that it holds approximately.
Overview.
Some computational problems are solved by searching for good solutions in a space of candidate solutions. A description of how to repeatedly select candidate solutions for evaluation is called a search algorithm. On a particular problem, different search algorithms may obtain different results, but over all problems, they are indistinguishable. It follows that if an algorithm achieves superior results on some problems, it must pay with inferiority on other problems. In this sense there is no free lunch in search. Alternatively, following Schaffer, search performance is conserved. Usually search is interpreted as optimization, and this leads to the observation that there is no free lunch in optimization.
"The 'no free lunch' theorem of Wolpert and Macready," as stated in plain language by Wolpert and Macready themselves, is that "any two algorithms are equivalent when their performance is averaged across all possible problems." The "no free lunch" results indicate that matching algorithms to problems gives higher average performance than does applying a fixed algorithm to all. Igel and Toussaint and English have established a general condition under which there is no free lunch. While it is physically possible, it does not hold precisely. Droste, Jansen, and Wegener have proved a theorem they interpret as indicating that there is "(almost) no free lunch" in practice.
To make matters more concrete, consider an optimization practitioner confronted with a problem. Given some knowledge of how the problem arose, the practitioner may be able to exploit the knowledge in selection of an algorithm that will perform well in solving the problem. If the practitioner does not understand how to exploit the knowledge, or simply has no knowledge, then he or she faces the question of whether some algorithm generally outperforms others on real-world problems. The authors of the "(almost) no free lunch" theorem say that the answer is essentially no, but admit some reservations as to whether the theorem addresses practice.
Theorems.
A "problem" is, more formally, an objective function that associates candidate solutions with goodness values. A search algorithm takes an objective function as input and evaluates candidate solutions one-by-one. The output of the algorithm is the sequence of observed goodness values.
Wolpert and Macready stipulate that an algorithm never reevaluates a candidate solution, and that algorithm performance is measured on outputs. For simplicity, we disallow randomness in algorithms. Under these conditions, when a search algorithm is run on every possible input, it generates each possible output exactly once. Because performance is measured on the outputs, the algorithms are indistinguishable in how often they achieve particular levels of performance.
Some measures of performance indicate how well search algorithms do at optimization of the objective function. Indeed, there seems to be no interesting application of search algorithms in the class under consideration but to optimization problems. A common performance measure is the least index of the least value in the output sequence. This is the number of evaluations required to minimize the objective function. For some algorithms, the time required to find the minimum is proportional to the number of evaluations.
The original no free lunch (NFL) theorems assume that all objective functions are equally likely to be input to search algorithms. It has since been established that there is NFL if and only if, loosely speaking, "shuffling" objective functions has no impact on their probabilities. Although this condition for NFL is physically possible, it has been argued that it certainly does not hold precisely.
The obvious interpretation of "not NFL" is "free lunch," but this is misleading. NFL is a matter of degree, not an all-or-nothing proposition. If the condition for NFL holds approximately, then all algorithms yield approximately the same results over all objective functions. "Not NFL" implies only that algorithms are inequivalent overall by "some" measure of performance. For a performance measure of interest, algorithms may remain equivalent, or nearly so.
Kolmogorov randomness.
Almost all elements of the set of all possible functions (in the set-theoretic sense of "function") are Kolmogorov random, and hence the NFL theorems apply to a set of functions almost all of which cannot be expressed more compactly than as a lookup table that contains a distinct (and random) entry for each point in the search space. Functions that can be expressed more compactly (for example, by a mathematical expression of reasonable size) are by definition not Kolmogorov random.
Further, within the set of all possible objective functions, levels of goodness are equally represented among candidate solutions, hence good solutions are scattered throughout the space of candidates. Accordingly, a search algorithm will rarely evaluate more than a small fraction of the candidates before locating a very good solution.
Almost all objective functions are of such high Kolmogorov complexity that they cannot be stored in a particular computer. More precisely, if we model a given physical computer as a register machine with a given size memory on the order of the memories of modern computers, then most objective functions cannot be stored in their memories. There is more information in the typical objective function or algorithm than Seth Lloyd estimates the observable universe is capable of registering. For instance, if each candidate solution is encoded as a sequence of 300 0's and 1's, and the goodness values are 0 and 1, then most objective functions have Kolmogorov complexity of at least 2^300 bits, and this is greater than Lloyd's bound of 10^90 ≈ 2^299 bits. It follows that the original "no free lunch" theorem does not apply to what can be stored in a physical computer; instead the so-called "tightened" no free lunch theorems need to be applied. It has also been shown that NFL results apply to incomputable functions.
Formal synopsis.
formula_0 is the set of all objective functions "f":"X"→"Y", where formula_1 is a finite solution space and formula_2 is a finite poset. The set of all permutations of "X" is "J". A random variable "F" is distributed on formula_0. For all "j" in "J", "F" o "j" is a random variable distributed on formula_0, with P("F" o "j" = "f") = P("F" = "f" o "j"−1) for all "f" in formula_0.
Let "a"("f") denote the output of search algorithm "a" on input "f". If "a"("F") and "b"("F") are identically distributed for all search algorithms "a" and "b", then "F" has an "NFL distribution". This condition holds if and only if "F" and "F" o "j" are identically distributed for all "j" in "J". In other words, there is no free lunch for search algorithms if and only if the distribution of objective functions is invariant under permutation of the solution space. Set-theoretic NFL theorems have recently been generalized to arbitrary cardinality formula_1 and formula_2.
Origin.
Wolpert and Macready give two principal NFL theorems, the first regarding objective functions that do not change while search is in progress, and the second regarding objective functions that may change.
"Theorem 1": For any pair of algorithms "a"1 and "a"2
formula_3 where formula_4 denotes the ordered set of size formula_5 of the cost values formula_6 associated to input values formula_7, formula_8 is the function being optimized and formula_9 is the conditional probability of obtaining a given sequence of cost values from algorithm formula_10 run formula_5 times on function formula_11.
In essence, this says that when all functions "f" are equally likely, the probability of observing an arbitrary sequence of "m" values in the course of search does not depend upon the search algorithm.
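For deterministic, non-revisiting algorithms the conditional probability in Theorem 1 is either 0 or 1, so the sum over all "f" simply counts how many objective functions produce a given cost sequence. The theorem can therefore be checked by brute force on a toy search space. The sketch below does this in Python with two arbitrarily chosen algorithms (illustrative inventions, not algorithms discussed in the article) and confirms that their histograms of observed sequences over all objective functions coincide.

```python
# A brute-force illustration of Theorem 1 on a toy search space.
# This is a sketch, not a reference implementation; the two example
# algorithms below are arbitrary choices.
from collections import Counter
from itertools import product

X = [0, 1, 2, 3]          # candidate solutions
Y = [0, 1]                # goodness values
m = 3                     # number of (non-repeating) evaluations

def run(algorithm, f):
    """Run a deterministic, non-revisiting search and return the
    observed cost sequence d_m^y."""
    visited, observed = [], []
    for _ in range(m):
        x = algorithm(visited, observed)
        visited.append(x)
        observed.append(f[x])
    return tuple(observed)

# Algorithm a1: scan the domain left to right.
def a1(visited, observed):
    return next(x for x in X if x not in visited)

# Algorithm a2: scan right to left, but jump back to the smallest
# unvisited point whenever the last observation was a 1.
def a2(visited, observed):
    unvisited = [x for x in X if x not in visited]
    if observed and observed[-1] == 1:
        return min(unvisited)
    return max(unvisited)

hist1, hist2 = Counter(), Counter()
for values in product(Y, repeat=len(X)):      # every objective f: X -> Y
    f = dict(zip(X, values))
    hist1[run(a1, f)] += 1
    hist2[run(a2, f)] += 1

print(hist1 == hist2)   # True: identical distributions of d_m^y over all f
```

Both counters come out uniform: every sequence of three cost values is produced by exactly |Y|^(|X|−m) = 2 objective functions, regardless of which algorithm generated it.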
The second theorem establishes a "more subtle" NFL result for time-varying objective functions.
Interpretations of results.
A conventional, but not entirely accurate, interpretation of the NFL results is that "a general-purpose universal optimization strategy is theoretically impossible, and the only way one strategy can outperform another is if it is specialized to the specific problem under consideration". Several comments are in order:
In practice, only highly compressible (far from random) objective functions fit in the storage of computers, and it is not the case that each algorithm performs well on almost all compressible functions. There is generally a performance advantage in incorporating prior knowledge of the problem into the algorithm. While the NFL results constitute, in a strict sense, full employment theorems for optimization professionals, it is important to bear the larger context in mind. For one thing, humans often have little prior knowledge to work with. For another, incorporating prior knowledge does not give much of a performance gain on some problems. Finally, human time is very expensive relative to computer time. There are many cases in which a company would choose to optimize a function slowly with an unmodified computer program rather than rapidly with a human-modified program.
The NFL results do not indicate that it is futile to take "pot shots" at problems with unspecialized algorithms. No one has determined the fraction of practical problems for which an algorithm yields good results rapidly. And there is a practical free lunch, not at all in conflict with theory. Running an implementation of an algorithm on a computer costs very little relative to the cost of human time and the benefit of a good solution. If an algorithm succeeds in finding a satisfactory solution in an acceptable amount of time, a small investment has yielded a big payoff. If the algorithm fails, then little is lost.
Recently some philosophers of science have argued that there are ways to circumvent the no free lunch theorems, by using "meta-induction". Wolpert addresses these arguments in subsequent work.
Coevolution.
Wolpert and Macready have proved that there are free lunches in coevolutionary optimization. Their analysis "covers 'self-play' problems. In these problems, the set of players work together to produce a champion, who then engages one or more antagonists in a subsequent multiplayer game." That is, the objective is to obtain a good player, but without an objective function. The goodness of each player (candidate solution) is assessed by observing how well it plays against others. An algorithm attempts to use players and their quality of play to obtain better players. The player deemed best of all by the algorithm is the champion. Wolpert and Macready have demonstrated that some coevolutionary algorithms are generally superior to other algorithms in quality of champions obtained. Generating a champion through self-play is of interest in evolutionary computation and game theory. The results are inapplicable to coevolution of biological species, which does not yield champions.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Y^X"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "Y"
},
{
"math_id": 3,
"text": "\\sum_f P(d_m^y | f, m, a_1) = \\sum_f P(d_m^y | f, m, a_2),"
},
{
"math_id": 4,
"text": "d_m^y"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "y \\in Y"
},
{
"math_id": 7,
"text": "x \\in X"
},
{
"math_id": 8,
"text": "f:X \\rightarrow Y "
},
{
"math_id": 9,
"text": "P(d_m^y | f, m, a)"
},
{
"math_id": 10,
"text": "a"
},
{
"math_id": 11,
"text": "f"
}
] | https://en.wikipedia.org/wiki?curid=1297402 |
1297539 | Free particle | Particle that is not bound by an external force
In physics, a free particle is a particle that, in some sense, is not bound by an external force, or equivalently not in a region where its potential energy varies. In classical physics, this means the particle is present in a "field-free" space. In quantum mechanics, it means the particle is in a region of uniform potential, usually set to zero in the region of interest since the potential can be arbitrarily set to zero at any point in space.
Classical free particle.
The classical free particle is characterized by a fixed velocity v. The momentum is given by
formula_0
and the kinetic energy (equal to total energy) by
formula_1
where "m" is the mass of the particle and v is the vector velocity of the particle.
Quantum free particle.
Mathematical description.
A free particle with mass formula_2 in non-relativistic quantum mechanics is described by the free Schrödinger equation:
formula_3
where "ψ" is the wavefunction of the particle at position r and time "t". The solution for a particle with momentum p or wave vector k, at angular frequency "ω" or energy "E", is given by a complex plane wave:
formula_4
with amplitude "A" and has two different rules according to its mass:
The eigenvalue spectrum is infinitely degenerate: to each eigenvalue "E" > 0 there correspond infinitely many eigenfunctions, one for each direction of formula_5.
The De Broglie relations: formula_6, formula_7 apply. Since the potential energy is (stated to be) zero, the total energy "E" is equal to the kinetic energy, which has the same form as in classical physics:
formula_8
As for "all" quantum particles free "or" bound, the Heisenberg uncertainty principles formula_9 apply. It is clear that since the plane wave has definite momentum (definite energy), the probability of finding the particle's location is uniform and negligible all over the space. In other words, the wave function is not normalizable in a Euclidean space, "these stationary states can not correspond to physical realizable states".
Measurement and calculations.
The integral of the probability density function
formula_10
where * denotes complex conjugate, over all space is the probability of finding the particle in all space, which must be unity if the particle exists:
formula_11
This is the normalization condition for the wave function. The wavefunction is not normalizable for a plane wave, but is for a wave packet.
Fourier decomposition.
The free particle wave function may be represented by a superposition of "momentum" eigenfunctions, with coefficients given by the Fourier transform of the initial wavefunction:
formula_12
where the integral is over all k-space and formula_13 (to ensure that the wave packet is a solution of the free particle Schrödinger equation). Here formula_14 is the value of the wave function at time 0 and formula_15 is the Fourier transform of formula_14. (The Fourier transform formula_16 is essentially the momentum wave function of the position wave function formula_17, but written as a function of formula_18 rather than formula_19.)
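The decomposition above translates directly into a numerical scheme: sample ψ0 on a grid, obtain formula_15 with a fast Fourier transform, multiply each mode by its phase factor e^(−iω(k)t), and transform back. The sketch below does this for a Gaussian packet in one dimension; the units ħ = m = 1, the grid, and the packet parameters are arbitrary choices made only for illustration.

```python
# Numerical sketch of the Fourier decomposition: a Gaussian wave packet is
# decomposed into momentum eigenfunctions with the FFT, each component is
# advanced by exp(-i*omega(k)*t) with omega(k) = hbar*k^2/(2m), and the
# packet is resynthesized.  Units hbar = m = 1 are assumed for simplicity.
import numpy as np

hbar, m = 1.0, 1.0
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

k0, sigma = 2.0, 2.0                      # mean wave vector and width
psi0 = np.exp(-(x**2) / (4 * sigma**2)) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * (L / N))   # normalize

def evolve(psi_initial, t):
    """Free evolution via the dispersion relation omega(k) = hbar k^2 / 2m."""
    psi_hat = np.fft.fft(psi_initial)
    psi_hat *= np.exp(-1j * hbar * k**2 / (2 * m) * t)
    return np.fft.ifft(psi_hat)

t = 10.0
psi_t = evolve(psi0, t)
dx = L / N
mean_x0 = np.sum(x * np.abs(psi0)**2) * dx
mean_xt = np.sum(x * np.abs(psi_t)**2) * dx
print("packet-centre velocity estimate:", (mean_xt - mean_x0) / t)   # ~ hbar*k0/m = 2
```

The estimated speed of the packet centre comes out close to ħk0/m, which anticipates the group-velocity discussion in the next subsection.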
The expectation value of the momentum p for the complex plane wave is
formula_20
and for the general wave packet it is
formula_21
The expectation value of the energy E is
formula_22
Group velocity and phase velocity.
The phase velocity is defined to be the speed at which a plane wave solution propagates, namely
formula_23
Note that formula_24 is "not" the speed of a classical particle with momentum formula_25; rather, it is half of the classical velocity.
Meanwhile, suppose that the initial wave function formula_14 is a wave packet whose Fourier transform formula_15 is concentrated near a particular wave vector formula_18. Then the group velocity of the plane wave is defined as
formula_26
which agrees with the formula for the classical velocity of the particle. The group velocity is the (approximate) speed at which the whole wave packet propagates, while the phase velocity is the speed at which the individual peaks in the wave packet move. The figure illustrates this phenomenon, with the individual peaks within the wave packet propagating at half the speed of the overall packet.
Spread of the wave packet.
The notion of group velocity is based on a linear approximation to the dispersion relation formula_27 near a particular value of formula_28. In this approximation, the amplitude of the wave packet moves at a velocity equal to the group velocity "without changing shape". This result is an approximation that fails to capture certain interesting aspects of the evolution of a free quantum particle. Notably, the width of the wave packet, as measured by the uncertainty in the position, grows linearly in time for large times. This phenomenon is called the spread of the wave packet for a free particle.
Specifically, it is not difficult to compute an exact formula for the uncertainty formula_29 as a function of time, where formula_30 is the position operator. Working in one spatial dimension for simplicity, we have:
formula_31
where formula_14 is the time-zero wave function. The expression in parentheses in the second term on the right-hand side is the quantum covariance of formula_30 and formula_32.
Thus, for large positive times, the uncertainty in formula_30 grows linearly, with the coefficient of formula_33 equal to formula_34. If the momentum of the initial wave function formula_14 is highly localized, the wave packet will spread slowly and the group-velocity approximation will remain good for a long time. Intuitively, this result says that if the initial wave function has a very sharply defined momentum, then the particle has a sharply defined velocity and will (to good approximation) propagate at this velocity for a long time.
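For a Gaussian initial packet with no position–momentum correlation the covariance term above vanishes, and the exact formula reduces to (ΔX(t))² = (t/m)²(ΔP)² + (ΔX(0))². The standalone sketch below (again with the arbitrary choice ħ = m = 1) evolves such a packet numerically and compares the measured width with this formula.

```python
# Standalone numerical check of the spread formula for an uncorrelated
# Gaussian packet:  (Delta x(t))^2 = (t/m)^2 (Delta p)^2 + (Delta x(0))^2.
# Units hbar = m = 1 and the packet parameters are illustrative assumptions.
import numpy as np

hbar, m = 1.0, 1.0
N, L = 4096, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

sigma = 2.0
psi0 = np.exp(-(x**2) / (4 * sigma**2)).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

def width(psi):
    """Position uncertainty Delta x of a normalized wave function."""
    prob = np.abs(psi)**2
    mean = np.sum(x * prob) * dx
    return np.sqrt(np.sum((x - mean)**2 * prob) * dx)

dp = hbar / (2 * sigma)            # momentum uncertainty of the Gaussian
for t in (0.0, 5.0, 10.0, 20.0):
    psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 / (2 * m) * t))
    exact = np.sqrt(sigma**2 + (t * dp / m)**2)
    print(f"t={t:5.1f}  numerical={width(psi_t):.4f}  formula={exact:.4f}")
```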
Relativistic quantum free particle.
There are a number of equations describing relativistic particles: see relativistic wave equations. | [
{
"math_id": 0,
"text": "\\mathbf{p}=m\\mathbf{v}"
},
{
"math_id": 1,
"text": "E=\\frac{1}{2}mv^2=\\frac{p^2}{2m}"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": " - \\frac{\\hbar^2}{2m} \\nabla^2 \\ \\psi(\\mathbf{r}, t) = i\\hbar\\frac{\\partial}{\\partial t} \\psi (\\mathbf{r}, t) "
},
{
"math_id": 4,
"text": " \\psi(\\mathbf{r}, t) = Ae^{i(\\mathbf{k}\\cdot\\mathbf{r}-\\omega t)} = Ae^{i(\\mathbf{p}\\cdot\\mathbf{r} - E t)/\\hbar} "
},
{
"math_id": 5,
"text": "\\mathbf{p}"
},
{
"math_id": 6,
"text": " \\mathbf{p} = \\hbar \\mathbf{k}"
},
{
"math_id": 7,
"text": " E = \\hbar \\omega"
},
{
"math_id": 8,
"text": " E = T \\,\\rightarrow \\,\\frac{\\hbar^2 k^2}{2m} =\\hbar \\omega "
},
{
"math_id": 9,
"text": " \\Delta p_x \\Delta x \\geq \\frac{\\hbar}{2}"
},
{
"math_id": 10,
"text": " \\rho(\\mathbf{r},t) = \\psi^*(\\mathbf{r},t)\\psi(\\mathbf{r},t) = |\\psi(\\mathbf{r},t)|^2"
},
{
"math_id": 11,
"text": " \\int_\\mathrm{all\\,space} |\\psi(\\mathbf{r},t)|^2 d^3 \\mathbf{r}=1"
},
{
"math_id": 12,
"text": " \\psi(\\mathbf{r}, t) =\\frac{1}{(\\sqrt{2\\pi})^3} \\int_\\mathrm{all \\, \\mathbf{k} \\, space} \\hat \\psi_0 (\\mathbf{k})e^{i(\\mathbf{k}\\cdot\\mathbf{r}-\\omega t)} d^3 \\mathbf{k} "
},
{
"math_id": 13,
"text": " \\omega = \\omega(\\mathbf{k}) = \\frac{\\hbar \\mathbf{k}^2}{2m}"
},
{
"math_id": 14,
"text": "\\psi_0"
},
{
"math_id": 15,
"text": "\\hat\\psi_0"
},
{
"math_id": 16,
"text": "\\hat\\psi_0(\\mathbf k)"
},
{
"math_id": 17,
"text": "\\psi_0(\\mathbf r)"
},
{
"math_id": 18,
"text": "\\mathbf k"
},
{
"math_id": 19,
"text": "\\mathbf p=\\hbar\\mathbf k"
},
{
"math_id": 20,
"text": " \\langle\\mathbf{p}\\rangle=\\left\\langle \\psi \\left|-i\\hbar\\nabla\\right|\\psi\\right\\rangle = \\hbar\\mathbf{k} ,"
},
{
"math_id": 21,
"text": " \\langle\\mathbf{p}\\rangle = \\int_\\mathrm{all\\,space} \\psi^*(\\mathbf{r},t)(-i\\hbar\\nabla)\\psi(\\mathbf{r},t) d^3 \\mathbf{r} = \\int_\\mathrm{all \\, \\textbf{k} \\, space} \\hbar \\mathbf{k} |\\hat\\psi_0(\\mathbf{k})|^2 d^3 \\mathbf{k}. "
},
{
"math_id": 22,
"text": " \\langle E\\rangle=\\left\\langle \\psi \\left|- \\frac{\\hbar^2}{2m} \\nabla^2 \\right|\\psi\\right\\rangle = \\int_\\text{all space} \\psi^*(\\mathbf{r},t)\\left(- \\frac{\\hbar^2}{2m} \\nabla^2 \\right)\\psi(\\mathbf{r},t) d^3 \\mathbf{r} ."
},
{
"math_id": 23,
"text": " v_p=\\frac{\\omega}{k}=\\frac{\\hbar k}{2m} = \\frac{p}{2m}. "
},
{
"math_id": 24,
"text": "\\frac{p}{2m}"
},
{
"math_id": 25,
"text": "p"
},
{
"math_id": 26,
"text": " v_g= \\nabla\\omega(\\mathbf k)=\\frac{\\hbar\\mathbf k}{m}=\\frac{\\mathbf p}{m},"
},
{
"math_id": 27,
"text": "\\omega(k)"
},
{
"math_id": 28,
"text": "k"
},
{
"math_id": 29,
"text": "\\Delta_{\\psi(t)}X"
},
{
"math_id": 30,
"text": "X"
},
{
"math_id": 31,
"text": "(\\Delta_{\\psi(t)}X)^2 = \\frac{t^2}{m^2}(\\Delta_{\\psi_0}P)^2+\\frac{2t}{m}\\left(\\left\\langle \\tfrac{1}{2}({XP+PX})\\right\\rangle_{\\psi_0} - \\left\\langle X\\right\\rangle_{\\psi_0} \\left\\langle P\\right\\rangle_{\\psi_0} \\right)+(\\Delta_{\\psi_0}X)^2,"
},
{
"math_id": 32,
"text": "P"
},
{
"math_id": 33,
"text": "t"
},
{
"math_id": 34,
"text": "(\\Delta_{\\psi_0}P)/m"
}
] | https://en.wikipedia.org/wiki?curid=1297539 |
1297934 | Immunization (finance) | Strategy to minimize effects of changes in interest rates
In finance, interest rate immunization is a portfolio management strategy designed to take advantage of the offsetting effects of interest rate risk and reinvestment risk.
In theory, immunization can be used to ensure that the value of a portfolio of assets (typically bonds or other fixed income securities) will increase or decrease by the same amount as a designated set of liabilities, thus leaving the equity component of capital unchanged, regardless of changes in the interest rate. It has found applications in financial management of pension funds, insurance companies, banks and savings and loan associations.
Immunization can be accomplished by several methods, including cash flow matching, duration matching, and volatility and convexity matching. It can also be accomplished by trading in bond forwards, futures, or options.
Other types of financial risks, such as foreign exchange risk or stock market risk, can be immunised using similar strategies. If the immunization is incomplete, these strategies are usually called hedging. If the immunization is complete, these strategies are usually called arbitrage.
History.
Immunisation was discovered independently by several researchers in the early 1940s and 1950s. This work was largely ignored before being re-introduced in the early 1970s, whereafter it gained popularity. See Dedicated Portfolio Theory#History for details.
Redington.
Frank Redington is generally considered to be the originator of the immunization strategy. Redington was an actuary from the United Kingdom. In 1952 he published his "Review of the Principle of Life-Office Valuations," in which he defined immunization as "the investment of the assets in such a way that the existing business is immune to a general change in the rate of interest." Redington believed that if a company (for example, a life insurance company) structured its investment portfolio assets to be of the same duration as its liabilities, and market interest rates decreased during the planning horizon, the lower yield earned on reinvested cash flows would be offset by the increased value of portfolio assets remaining at the end of the planning period. On the other hand, if market interest rates increased, the same offset effect would occur: higher yields earned on reinvested cash flows would be offset by a reduction in the value of the portfolio. In either scenario, with offsetting effects on each side of the balance sheet, the shareholders' equity value of the business would be immunized from the effect of changes in interest rates.
Fisher and Weil.
In 1971, Lawrence Fisher and Roman Weil framed the issue as follows: to immunize a portfolio, "the average duration of the bond portfolio must be set equal to the remaining time in the planning horizon, and the market value of assets must be greater than or equal to the present value of the liabilities discounted at the internal rate of return of the portfolio."
Applications.
Pension funds use immunization to lock in current market rates, when they are attractive, over a specified planning horizon, and to fund a future stream of pension benefit payments to retirees. Banks and thrift (savings and loan) associations immunize in order to manage the relationship between assets and liabilities, which affects their capital requirements. Insurance companies construct immunized portfolios to support guaranteed investment contracts, structured financial instruments which are sold to institutional investors.
How portfolios are immunized.
Immunization theory assumes that the yield curve is flat, and that interest rate changes are parallel shifts up or down in that yield curve.
Cash flow matching.
Conceptually, the easiest form of immunization is cash flow matching. For example, if a financial company is obliged to pay 100 dollars to someone in 10 years, it can protect itself by buying and holding a 10-year, zero-coupon bond that matures in 10 years and has a redemption value of $100. Thus, the firm's expected cash inflows would exactly match its expected cash outflows, and a change in interest rates would not affect the firm's ability to pay its obligations. Nevertheless, a firm with many expected cash flows can find that cash flow matching can be difficult or expensive to achieve in practice. Once, that meant that only institutional investors could afford it. But the advent of the Internet and the personal computer relieved much of this difficulty. Dedicated portfolio theory is based on cash flow matching and is being used by personal financial advisors to construct retirement portfolios for private individuals. Withdrawals from the portfolio to pay living expenses represent the stream of expected future cash flows to be matched. Individual bonds with staggered maturities are purchased whose coupon interest payments and redemptions supply the cash flows to meet the withdrawals of the retirees.
Mathematically, this can be expressed as follows. Let the net cash flow at time formula_0 be denoted by formula_1, i.e.:
formula_2
where formula_3 and formula_4 represent cash inflows and outflows or liabilities respectively.
Assuming that the present value of cash inflows from the assets is equal to the present value of the cash outflows from the liabilities, then:
formula_5
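A minimal numerical sketch of this condition: if the asset inflows are chosen to match the liability outflows period by period, the net flows formula_1 are all zero and the present value of the net flows vanishes at every interest rate. The cash flow amounts below are invented for illustration.

```python
# Minimal sketch of cash flow matching: asset inflows A_t are chosen to
# equal liability outflows L_t in every period, so the net flows R_t are
# zero and the present value P(i) of the net flows is zero at any rate i.
# The cash flow amounts are made-up illustrative numbers.

liabilities = {1: 50_000.0, 2: 50_000.0, 10: 100_000.0}   # L_t by year t
assets = dict(liabilities)                                 # A_t matched exactly

def present_value(net_flows, rate):
    """P(i) = sum over t of R_t / (1 + i)^t for net flows R_t = A_t - L_t."""
    return sum(r / (1.0 + rate) ** t for t, r in net_flows.items())

net = {t: assets.get(t, 0.0) - liabilities.get(t, 0.0)
       for t in set(assets) | set(liabilities)}

for i in (0.01, 0.05, 0.10):
    print(f"i={i:.2%}  P(i)={present_value(net, i):+.2f}")   # always 0.00
```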
Duration matching.
Another immunization method is duration matching. Here, a portfolio manager creates a bond portfolio with a duration equal to the duration of the liabilities. To make the match actually profitable under changing interest rates, the assets and liabilities are arranged so that the total convexity of the assets exceeds the convexity of the liabilities. In other words, one can match the first derivatives (with respect to interest rate) of the price functions of the assets and liabilities and make sure that the second derivative of the asset price function is set to be greater than or equal to the second derivative of the liability price function.
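The following sketch illustrates duration matching with a simple barbell: two coupon bonds are weighted so that the portfolio's Macaulay duration equals that of a single liability payment, and the resulting portfolio convexity is compared with the liability's. The instruments, the flat 5% yield and the weights are illustrative assumptions, not data from the article.

```python
# Sketch of duration matching: the Macaulay duration of an asset portfolio
# is set equal to that of a liability, while the asset convexity is kept
# larger.  Cash flows, the flat yield y and the barbell weights are
# illustrative assumptions.

def pv(cfs, y):
    return sum(c / (1 + y) ** t for t, c in cfs)

def duration(cfs, y):
    """Macaulay duration of cash flows [(t, amount), ...] at flat yield y."""
    return sum(t * c / (1 + y) ** t for t, c in cfs) / pv(cfs, y)

def convexity(cfs, y):
    p = pv(cfs, y)
    return sum(t * (t + 1) * c / (1 + y) ** t for t, c in cfs) / (p * (1 + y) ** 2)

y = 0.05
liability = [(7, 1_000_000.0)]                                   # single payment due in 7 years
short_bond = [(t, 60.0) for t in range(1, 3)] + [(3, 1060.0)]    # 3-year, 6% coupon
long_bond  = [(t, 60.0) for t in range(1, 10)] + [(10, 1060.0)]  # 10-year, 6% coupon

d_s, d_l, d_target = duration(short_bond, y), duration(long_bond, y), duration(liability, y)
w_long = (d_target - d_s) / (d_l - d_s)        # market-value weight on the long bond
w_short = 1.0 - w_long

d_port = w_short * d_s + w_long * d_l
c_port = w_short * convexity(short_bond, y) + w_long * convexity(long_bond, y)
print(f"target duration {d_target:.2f}, portfolio duration {d_port:.2f}")
print(f"portfolio convexity {c_port:.2f} vs liability convexity {convexity(liability, y):.2f}")
```

For these made-up instruments the barbell's convexity exceeds that of the single-payment liability, as the matching condition described above requires.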
Rebalancing.
Immunization requires that the average durations of assets and liabilities be kept equal at all times. This makes it necessary to rebalance the portfolio investments regularly, because the years remaining in the planning period grow shorter with each passing year. Coupon income, reinvestment income, proceeds from maturities and sales proceeds must be reinvested in securities that will keep the portfolio's duration equal to the remaining years in the planning period.
Immunization in practice.
An immunization strategy is designed so that as interest rates change, interest-rate risk and reinvestment risk will offset each other. However, as Dr. Frank Fabozzi points out, the Macaulay duration metric and immunization theory are based on the assumption that any shifts in the yield curve during the planning period will be parallel, i.e. equal at each point in the term structure of interest rates. But when a non-parallel shift in the yield curve occurs, there is a risk that the portfolio will not be immunized even if its duration matches the liability duration. Immunization risk can be quantified so that a portfolio that minimizes this risk can be constructed.
A principal component analysis of changes along the U.S. Government Treasury yield curve reveals that more than 90% of yield curve shifts are parallel shifts, followed by a smaller percentage of slope shifts and a small percentage of curvature shifts. Using that knowledge, an immunized portfolio can be created by creating long positions with durations at the long and short end of the curve, and a matching short position with a duration in the middle of the curve. These positions protect against parallel shifts and slope changes, in exchange for exposure to curvature changes.
Immunization can be done in a portfolio of a single asset type, such as government bonds, by creating long and short positions along the yield curve. It is usually possible to immunize a portfolio against the most prevalent risk factors.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "t"
},
{
"math_id": 1,
"text": "R_t"
},
{
"math_id": 2,
"text": "R_t = A_t - L_t \\text{ for } t = 1,2,3,\\ldots,n"
},
{
"math_id": 3,
"text": "A_t"
},
{
"math_id": 4,
"text": "L_t"
},
{
"math_id": 5,
"text": "P(i) = 0"
}
] | https://en.wikipedia.org/wiki?curid=1297934 |
12981660 | CMB cold spot | Region in space
The CMB Cold Spot or WMAP Cold Spot is a region of the sky seen in microwaves that has been found to be unusually large and cold relative to the expected properties of the cosmic microwave background radiation (CMBR). The "Cold Spot" is approximately 70 μK (0.00007 K) colder than the average CMB temperature (approximately 2.7 K), whereas the root mean square of typical temperature variations is only 18 μK. At some points, the "cold spot" is 140 μK colder than the average CMB temperature.
The radius of the "cold spot" subtends about 5°; it is centered at the galactic coordinate "l""II" = 207.8°, "b""II" = −56.3° (equatorial: "α" = 03h 15m 05s, "δ" = ° 35′ 02″). It is, therefore, in the Southern Celestial Hemisphere, in the direction of the constellation Eridanus.
Typically, the largest fluctuations of the primordial CMB temperature occur on angular scales of about 1°. Thus a cold region as large as the "cold spot" appears very unlikely, given generally accepted theoretical models. Various alternative explanations exist, including a so-called Eridanus Supervoid or Great Void that may exist between us and the primordial CMB (foreground voids can cause cold spots against the CMB). Such a void would affect the observed CMB via the integrated Sachs–Wolfe effect, and would be one of the largest structures in the observable universe. This would be an extremely large region of the universe, roughly 150 to 300 Mpc or 500 million to one billion light-years across and 6 to 10 billion light years away, at redshift formula_0, containing a density of matter much smaller than the average density at that redshift.
Discovery and significance.
In the first year of data recorded by the Wilkinson Microwave Anisotropy Probe (WMAP), a region of sky in the constellation Eridanus was found to be colder than the surrounding area. Subsequently, using the data gathered by WMAP over 3 years, the statistical significance of such a large, cold region was estimated. The probability of finding a deviation at least as high in Gaussian simulations was found to be 1.85%. Thus it appears unlikely, but not impossible, that the cold spot was generated by the standard mechanism of quantum fluctuations during cosmological inflation, which in most inflationary models gives rise to Gaussian statistics. The cold spot may also, as suggested in the references above, be a signal of non-Gaussian primordial fluctuations.
Some authors called into question the statistical significance of this cold spot.
In 2013, the CMB Cold Spot was also observed by the Planck satellite at similar significance, discarding the possibility of being caused by a systematic error of the WMAP satellite.
Possible causes other than primordial temperature fluctuation.
The large 'cold spot' forms part of what has been called an 'axis of evil' (so-called because it was unexpected to see a structure like this).
Supervoid.
One possible explanation of the cold spot is a huge void between us and the primordial CMB. A region cooler than surrounding sightlines can be observed if a large void is present, as such a void would cause an increased cancellation between the "late-time" integrated Sachs–Wolfe effect and the "ordinary" Sachs–Wolfe effect. This effect would be much smaller if dark energy were not stretching the void as photons went through it.
"Rudnick et al". found a dip in NVSS galaxy number counts in the direction of the Cold Spot, suggesting the presence of a large void. Since then, some additional works have cast doubt on the "supervoid" explanation. The correlation between the NVSS dip and the Cold Spot was found to be marginal using a more conservative statistical analysis. Also, a direct survey for galaxies in several one-degree-square fields within the Cold Spot found no evidence for a supervoid. However, the supervoid explanation has not been ruled out entirely; it remains intriguing, since supervoids do seem capable of affecting the CMB measurably.
A 2015 study shows the presence of a supervoid that has a diameter of 1.8 billion light years and is centered at 3 billion light-years from our galaxy in the direction of the Cold Spot, likely being associated with it. This would make it the largest void detected, and one of the largest structures known. Later measurements of the Sachs–Wolfe effect show too its likely existence.
Although large voids are known in the universe, a void would have to be exceptionally vast to explain the cold spot, perhaps 1,000 times larger in volume than expected typical voids. It would be 6 billion–10 billion light-years away and nearly one billion light-years across, and would be perhaps even more improbable to occur in the large-scale structure than the WMAP cold spot would be in the primordial CMB.
A 2017 study reported surveys showing no evidence that associated voids in the line of sight could have caused the CMB Cold Spot and concluded that it may instead have a primordial origin.
One important thing to confirm or rule out the late time integrated Sachs–Wolfe effect is the mass profile of galaxies in the area as ISW effect is affected by the galaxy bias which depends on the mass profiles and types of galaxies.
In December 2021, the Dark Energy Survey (DES), analyzing their data, put forward more evidence for the correlation between the Eridanus supervoid and the CMB cold spot.
Cosmic texture.
In late 2007, ("Cruz et al.") argued that the Cold Spot could be due to a cosmic texture, a remnant of a phase transition in the early Universe.
Parallel universe.
A controversial claim by Laura Mersini-Houghton is that it could be the imprint of another universe beyond our own, caused by quantum entanglement between universes before they were separated by cosmic inflation. Laura Mersini-Houghton said, "Standard cosmology cannot explain such a giant cosmic hole" and made the hypothesis that the WMAP cold spot is "... the unmistakable imprint of another universe beyond the edge of our own." If true, this provides the first empirical evidence for a parallel universe (though theoretical models of parallel universes existed previously). It would also support string theory. The team claims that there are testable consequences for its theory. If the parallel-universe theory is true, there will be a similar void in the Celestial sphere's opposite hemisphere (which "New Scientist" reported to be in the Southern celestial hemisphere; the results of the New Mexico array study reported it as being in the Northern).
Other researchers have modeled the cold spot as potentially the result of cosmological bubble collisions, again before inflation.
A sophisticated computational analysis (using Kolmogorov complexity) has derived evidence for a north and a south cold spot in the satellite data: "...among the high randomness regions is the southern non-Gaussian anomaly, the Cold Spot, with a stratification expected for the voids. Existence of its counterpart, a Northern Cold Spot with almost identical randomness properties among other low-temperature regions is revealed."
These predictions and others were made prior to the measurements (see Laura Mersini). However, apart from the Southern Cold Spot, the varied statistical methods in general fail to confirm each other regarding a Northern Cold Spot. The 'K-map' used to detect the Northern Cold Spot was noted to have twice the measure of randomness measured in the standard model. The difference is speculated to be caused by the randomness introduced by voids (unaccounted-for voids were speculated to be the reason for the increased randomness above the standard model).
Sensitivity to finding method.
The cold spot is mainly anomalous because it stands out compared to the relatively hot ring around it; it is not unusual if one only considers the size and coldness of the spot itself. More technically, its detection and significance depends on using a compensated filter like a Mexican hat wavelet to find it.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
External links.
<indicator name="01-sky-coordinates"><templatestyles src="Template:Sky/styles.css" />Coordinates: &de=&zoom=&show_grid=1&show_constellation_lines=1&show_constellation_boundaries=1&show_const_names=1&show_galaxies=1&img_source=IMG_all 03h 15m 05s, −19° 35′ 02″</indicator> | [
{
"math_id": 0,
"text": "z\\simeq 1"
}
] | https://en.wikipedia.org/wiki?curid=12981660 |
12982585 | Serial relation | In set theory a serial relation is a homogeneous relation expressing the connection of an element of a sequence to the following element. The successor function used by Peano to define natural numbers is the prototype for a serial relation.
Bertrand Russell used serial relations in "The Principles of Mathematics" (1903) as he explored the foundations of order theory and its applications. The term "serial relation" was also used by B. A. Bernstein for an article showing that particular common axioms in order theory are nearly incompatible: connectedness, irreflexivity, and transitivity.
A serial relation "R" is an endorelation on a set "U". As stated by Russell, formula_0 where the universal and existential quantifiers refer to "U". In contemporary language of relations, this property defines a total relation. But a total relation may be heterogeneous. Serial relations are of historic interest.
For a relation "R", let {"y": "xRy"} denote the "successor neighborhood" of "x". A serial relation can be equivalently characterized as a relation for which every element has a non-empty successor neighborhood. Similarly, an inverse serial relation is a relation in which every element has non-empty "predecessor neighborhood".
In normal modal logic, the extension of fundamental axiom set K by the serial property results in axiom set D.
Russell's series.
Relations are used to develop series in "The Principles of Mathematics". The prototype is Peano's successor function as a one-one relation on the natural numbers. Russell's series may be finite or generated by a relation giving cyclic order. In that case, the point-pair separation relation is used for description. To define a progression, he requires the generating relation to be a connected relation. Then ordinal numbers are derived from progressions; the finite ones are finite ordinals (Chapter 28: Progressions and ordinal numbers). Distinguishing open and closed series (p. 234) results in four total orders: finite, one end, no end and open, and no end and closed (p. 202).
Contrary to other writers, Russell admits negative ordinals. For motivation, consider the scales of measurement using scientific notation, where a power of ten represents a decade of measure. Informally, this parameter corresponds to orders of magnitude used to quantify physical units. The parameter takes on negative as well as positive values.
Stretch.
Russell adopted the term "stretch" from Meinong, who had contributed to the theory of distance. Stretch refers to the intermediate terms between two points in a series, and the "number of terms measures the distance and divisibility of the whole."181 To explain Meinong, Russell refers to the Cayley–Klein metric, which uses stretch coordinates in anharmonic ratios which determine distance by using logarithm.255
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\forall x \\exists y \\ xRy ,"
}
] | https://en.wikipedia.org/wiki?curid=12982585 |
12983678 | Gross reproduction rate | Average number of daughters per woman if she survived all her childbearing years
The gross reproduction rate (GRR) is the average number of daughters a woman would have if she survived all of her childbearing years, which last roughly until the age of 45, subject to the age-specific fertility rate and sex ratio at birth throughout that period. This rate is a measure of replacement fertility when mortality is left out of the equation. It is often regarded as the extent to which each generation of daughters replaces the preceding generation of women. A value equal to one indicates that women exactly replace themselves; a value greater than one indicates that the next generation of women will outnumber the current one; and a value less than one indicates that the next generation of women will be less numerous than the current one.
The gross reproduction rate is similar to the net reproduction rate (NRR), the average number of daughters a woman would have if she survived her lifetime subject to the age-specific fertility rate and mortality rate throughout that period.
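A worked numerical sketch, using the conventional five-year age groups (GRR = 5 × the sum of the age-specific fertility rates restricted to daughters): the rates and the assumed sex ratio below are invented solely for illustration.

```python
# Worked sketch of the gross reproduction rate with five-year age groups:
# GRR = 5 * sum of the age-specific fertility rates restricted to daughters
# (here approximated with a fixed fraction of female births).  The rates and
# the sex ratio below are invented illustrative values.

# births per woman per year, by five-year age group of the mother
asfr = {
    "15-19": 0.020, "20-24": 0.090, "25-29": 0.110,
    "30-34": 0.080, "35-39": 0.040, "40-44": 0.010,
}
fraction_female = 100 / 205        # assumed sex ratio of ~105 males per 100 females

tfr = 5 * sum(asfr.values())                                    # total fertility rate
grr = 5 * sum(rate * fraction_female for rate in asfr.values())
print(f"TFR = {tfr:.2f} children per woman")
print(f"GRR = {grr:.2f} daughters per woman")
```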
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "GRR=\\Sigma ASFR_{f'}\\times5"
}
] | https://en.wikipedia.org/wiki?curid=12983678 |
12986662 | MagmaFS | Magma is a distributed file system based on a distributed hash table, written in C, compatible with Linux and BSD kernels using FUSE.
Terminology and basic principles.
Magma binds several hosts interconnected by a TCP/IP network to form a common storage space called a "lava ring". Each host (or node) is called a "vulcano". Each vulcano hosts a portion of a common key space, delimited by two SHA1 keys. Each vulcano is also in charge of mirroring the key space of the previous node, to ensure data redundancy. Each key can represent one or more object inside the storage space. These objects are called "flares".
Magma can store a range of different object types: files, directories, symbolic links, block and character devices, and FIFO pipes. Each object is bound to a flare and vice versa. A flare of any of the six types listed above is described by some basic properties common to all flares, like a path and a hash key. But each of the six types also has its own specific properties. For example, directory flares carry specific information that does not apply to symbolic links. A flare with only generic information is called "uncast", while a complete flare is called "cast".
An uncast flare does not contain enough information to operate on data, but has enough information to be moved as a sort of opaque container between vulcano nodes. To be easily movable, each type of flare, including directories, has been reimplemented as a set of two files, the first containing flare information (metadata) and the second containing flare content. Moving flares across the lava ring is called "load balancing" and is done to even out load inequalities between nodes in an attempt to provide the best performance.
Flare system.
The internal engine of Magma is called "flare system" and is implemented as a layered stack.
magma_mkdir() can be used as an example of layer traversal. In this paragraph it will be assumed that a directory called "/example" is being created. magma_mkdir() is part of the "Public API" layer. It is used to create a new directory, as done by its standard libc counterpart mkdir().
magma_mkdir() first routes the request to decide whether it can be managed locally or will require network operations. To perform routing, the path "/example" is translated into the corresponding SHA1 hash key "81f762fd59d88768b06b8e9de56aef8a95962045". If routing determines that another vulcano node must be contacted, the request does not travel further down the local stack: the lava network layer forwards it to the node owning the key, and the flow of operations continues on the remote node. Routing is half the role of the "Lava network" layer, which also includes network monitoring and the creation, update and removal of vulcano nodes.
Whether the request is local or remote, the last step is performed by the "Flare layer". The flare corresponding to key "81f762fd59d88768b06b8e9de56aef8a95962045" is first searched for in the cache. If not found, it is created and, if it already exists on disk, loaded from there. Permission checks are then applied to the resulting flare object. If permission to operate is granted, the initial request is fulfilled: in this example, the flare is cast to a directory if it was not one already, and is saved to disk.
Routing.
Since each vulcano node has the complete network topology available, routing is just a matter of matching flare keys against each node's key space to find the node holding the flare. The network topology is also saved in the distributed directory /.dht/ inside the magma filesystem. Vulcano nodes can periodically check their information against the contents of the /.dht/ directory to know if something has changed. Nodes also periodically save their own information inside the /.dht/ directory.
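The routing step can be sketched in a few lines: hash the path with SHA1 and compare the resulting key against each node's key range. The node names and key-space boundaries below are invented; if the hash of "/example" is the key quoted earlier (81f762fd…), it falls in the middle third of this toy key space and would be routed to the second node.

```python
# Sketch of key-based routing: a path is hashed with SHA1 and the resulting
# key is matched against each node's key range.  The node names and the
# key-space boundaries are invented for illustration; a real lava ring would
# derive them from its own topology.
import hashlib

def flare_key(path):
    return hashlib.sha1(path.encode()).hexdigest()

# each vulcano node owns the half-open key range [start, end)
ring = [
    ("vulcano-a", "0" * 40, "5555555555555555555555555555555555555555"),
    ("vulcano-b", "5555555555555555555555555555555555555555",
                  "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"),
    ("vulcano-c", "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "f" * 40),
]

def route(path):
    key = flare_key(path)
    for node, start, end in ring:
        if start <= key < end:
            return node, key
    return ring[-1][0], key          # keys at the very top wrap to the last node

print(route("/example"))             # prints the owning node and the SHA1 key
```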
Load balancing.
Each vulcano node has some parameters declared at boot, like available bandwidth and storage. A separate thread called the "balancer" is devoted to redistributing keys so as to avoid overloading or underloading nodes. Each node has an associated dynamic load value, which is computed by the formula:
formula_0
where formula_1 is the node key load calculated on logarithmic scale; formula_2 is node bandwidth and formula_3 is average bandwidth; formula_4 is node storage and formula_5 is average storage
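A direct transcription of the load formula, with invented node parameters. The article does not specify how the logarithmic key load formula_1 is computed, so the sketch below simply takes the base-10 logarithm of the number of keys a node holds; that choice is an assumption made for illustration only.

```python
# Transcription of the load formula l_n = kl_n * (b_n / b_avg) * (s_n / s_avg)
# with invented node parameters.  kl_n is assumed here to be log10 of the
# number of keys held, which is only an illustrative choice.
import math

nodes = {
    # name: (keys held, bandwidth in Mbit/s, storage in GB)
    "vulcano-a": (120_000, 100.0, 500.0),
    "vulcano-b": (40_000, 100.0, 250.0),
    "vulcano-c": (80_000, 50.0, 1000.0),
}

b_avg = sum(b for _, b, _ in nodes.values()) / len(nodes)
s_avg = sum(s for _, _, s in nodes.values()) / len(nodes)

for name, (keys, b_n, s_n) in nodes.items():
    kl_n = math.log10(keys)
    load = kl_n * (b_n / b_avg) * (s_n / s_avg)
    print(f"{name}: load = {load:.2f}")
```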
Magma software distribution.
Magma is distributed in the form of a server called "magmad" and a client called "mount.magma".
Magma server.
The Magma server magmad manages intercommunication between DHT nodes and magma clients. The flare system provides a network event loop that accepts incoming connections. Three kinds of connections are accepted.
Magma client.
The Magma client mount.magma is based on FUSE, making it compatible with Linux and BSD kernels. The Magma client uses a flare protocol connection to contact and operate with a nearby Magma server. Network topology and flare location are totally transparent to clients. The client simply queries one server exactly as if all information were located on that host alone.
A cryptographic layer is planned for the Magma client, allowing encryption of file contents only. Cryptography is implemented on the client side for scalability (computational power increases at the same rate as computational demand) and for cryptographic key privacy (keys or passphrases never reach the server).
Alternative NFS interface.
As an alternative to the Magma client, which is supported only by Linux and BSD kernels, the Magma server plans to offer an NFS interface for other Unices. Since NFS is an established standard, no new features can be added. For example, the cryptographic layer will be unavailable to clients mounting Magma shares through NFS.
{
"math_id": 0,
"text": "l_n = kl_n \\cdot \\frac{b_n}{b_a} \\cdot \\frac{s_n}{s_a}"
},
{
"math_id": 1,
"text": "kl_n"
},
{
"math_id": 2,
"text": "b_n"
},
{
"math_id": 3,
"text": "b_a"
},
{
"math_id": 4,
"text": "s_n"
},
{
"math_id": 5,
"text": "s_a"
}
] | https://en.wikipedia.org/wiki?curid=12986662 |
12987179 | Homogeneous relation | Binary relation over a set and itself
In mathematics, a homogeneous relation (also called endorelation) on a set "X" is a binary relation between "X" and itself, i.e. it is a subset of the Cartesian product "X" × "X". This is commonly phrased as "a relation on "X"" or "a (binary) relation over "X"". An example of a homogeneous relation is the relation of kinship, where the relation is between people.
Common types of endorelations include orders, graphs, and equivalences. Specialized studies of order theory and graph theory have developed understanding of endorelations. Terminology particular for graph theory is used for description, with an ordinary (undirected) graph presumed to correspond to a symmetric relation, and a general endorelation corresponding to a directed graph. An endorelation "R" corresponds to a logical matrix of 0s and 1s, where the expression "xRy" corresponds to an edge between "x" and "y" in the graph, and to a 1 in the square matrix of "R". It is called an adjacency matrix in graph terminology.
Particular homogeneous relations.
Some particular homogeneous relations over a set "X" (with arbitrary elements "x"1, "x"2) are the "empty relation" (no "x"1 is related to any "x"2), the "universal relation" "X" × "X" (every "x"1 is related to every "x"2), and the "identity relation" {("x", "x") : "x" ∈ "X"} (every "x"1 is related only to itself).
Example.
Fifteen large tectonic plates of the Earth's crust contact each other in a homogeneous relation. The relation can be expressed as a logical matrix with 1 indicating contact and 0 no contact. This example expresses a symmetric relation.
Properties.
Some important properties that a homogeneous relation R over a set X may have are:
The previous 6 alternatives are far from being exhaustive; e.g., the binary relation "xRy" defined by "y" = "x"² is neither irreflexive, nor coreflexive, nor reflexive, since it contains the pair (0, 0), and (2, 4), but not (2, 2), respectively. The latter two facts also rule out (any kind of) quasi-reflexivity.
Again, the previous 3 alternatives are far from being exhaustive; as an example over the natural numbers, the relation "xRy" defined by "x" > 2 is neither symmetric nor antisymmetric, let alone asymmetric.
Again, the previous 5 alternatives are not exhaustive. For example, the relation "xRy" if ("y" = 0 or "y" = "x"+1) satisfies none of these properties. On the other hand, the empty relation trivially satisfies all of them.
Moreover, all properties of binary relations in general also may apply to homogeneous relations:
A preorder is a relation that is reflexive and transitive. A total preorder, also called linear preorder or weak order, is a relation that is reflexive, transitive, and connected.
A partial order, also called order, is a relation that is reflexive, antisymmetric, and transitive. A strict partial order, also called strict order, is a relation that is irreflexive, antisymmetric, and transitive. A total order, also called linear order, simple order, or chain, is a relation that is reflexive, antisymmetric, transitive and connected. A strict total order, also called strict linear order, strict simple order, or strict chain, is a relation that is irreflexive, antisymmetric, transitive and connected.
A partial equivalence relation is a relation that is symmetric and transitive. An equivalence relation is a relation that is reflexive, symmetric, and transitive. It is also a relation that is symmetric, transitive, and total, since these properties imply reflexivity.
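These defining properties can be checked mechanically on small sets. The sketch below stores a relation as a set of ordered pairs and tests the combinations above; the example relations ("divides" and "has the same parity" on {1, …, 6}) are standard illustrations rather than examples taken from the article.

```python
# Brute-force checker for some of the properties defined above, with a
# relation stored as a set of ordered pairs over a small set U.
from itertools import product

def is_reflexive(U, R):
    return all((x, x) in R for x in U)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R):
    return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(R):
    return all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

def is_partial_order(U, R):
    return is_reflexive(U, R) and is_antisymmetric(R) and is_transitive(R)

def is_equivalence(U, R):
    return is_reflexive(U, R) and is_symmetric(R) and is_transitive(R)

U = range(1, 7)
divides = {(x, y) for x, y in product(U, U) if y % x == 0}
same_parity = {(x, y) for x, y in product(U, U) if (x - y) % 2 == 0}

print(is_partial_order(U, divides), is_equivalence(U, divides))          # True False
print(is_partial_order(U, same_parity), is_equivalence(U, same_parity))  # False True
```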
Operations.
If "R" is a homogeneous relation over a set "X" then each of the following is a homogeneous relation over "X":
All operations defined in "" also apply to homogeneous relations.
Enumeration.
The set of all homogeneous relations formula_0 over a set "X" is the set 2"X"×"X", which is a Boolean algebra augmented with the involution of mapping of a relation to its converse relation. Considering composition of relations as a binary operation on formula_0, it forms a monoid with involution where the identity element is the identity relation.
The number of distinct homogeneous relations over an "n"-element set is 2^("n"²) (sequence in the OEIS):
Note that "S"("n", "k") refers to Stirling numbers of the second kind.
Notes:
The homogeneous relations can be grouped into pairs (relation, complement), except that for "n" = 0 the relation is its own complement. The non-symmetric ones can be grouped into quadruples (relation, complement, inverse, inverse complement).
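The count 2^("n"²) can be confirmed by brute force for very small "n", along with the related counts of symmetric relations (2^("n"("n"+1)/2)) and of equivalence relations (the Bell numbers). The sketch below enumerates every relation on sets of one, two and three elements.

```python
# Brute-force confirmation of the counting claims on very small sets.
from itertools import product

def all_relations(n):
    pairs = list(product(range(n), repeat=2))
    for bits in range(2 ** len(pairs)):
        yield {p for i, p in enumerate(pairs) if bits >> i & 1}

def is_equivalence(n, R):
    U = range(n)
    return (all((x, x) in R for x in U)
            and all((y, x) in R for (x, y) in R)
            and all((x, z) in R for (x, y) in R for (w, z) in R if y == w))

for n in (1, 2, 3):
    rels = list(all_relations(n))
    symmetric = sum(1 for R in rels if all((y, x) in R for (x, y) in R))
    equivalences = sum(1 for R in rels if is_equivalence(n, R))
    print(f"n={n}: total={len(rels)} (2^(n^2)={2**(n*n)}), "
          f"symmetric={symmetric}, equivalences={equivalences}")
```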
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{B}(X)"
}
] | https://en.wikipedia.org/wiki?curid=12987179 |
1298822 | William Goldman (mathematician) | American mathematician
William Mark Goldman (born 1955 in Kansas City, Missouri) is a professor of mathematics at the University of Maryland, College Park (since 1986). He received a B.A. in mathematics from Princeton University in 1977, and a Ph.D. in mathematics from the University of California, Berkeley in 1980.
Research contributions.
Goldman has investigated geometric structures, in various incarnations, on manifolds since his undergraduate thesis, "Affine manifolds and projective geometry on manifolds", supervised by William Thurston and Dennis Sullivan. This work led to work with Morris Hirsch and David Fried on affine structures on manifolds, and work in real projective structures on compact surfaces. In particular he proved that the space of convex real projective structures on a closed orientable surface of genus formula_0 is homeomorphic to an open cell of dimension formula_1. With Suhyoung Choi, he proved that this space is a connected component (the "Hitchin component") of the space of equivalence classes of representations of the fundamental group in formula_2. Combining this result with Suhyoung Choi's convex decomposition theorem, this led to a complete classification of convex real projective structures on compact surfaces.
His doctoral dissertation, "Discontinuous groups and the Euler class" (supervised by Morris W. Hirsch), characterizes discrete embeddings of surface groups in formula_3 in terms of maximal Euler class, proving a converse to the Milnor–Wood inequality for flat bundles. Shortly thereafter he showed that the space of representations of the fundamental group of a closed orientable surface of genus formula_4 in formula_3 has formula_5 connected components, distinguished by the Euler class.
With David Fried, he classified compact quotients of Euclidean 3-space by discrete groups of affine transformations, showing that all such manifolds are finite quotients of torus bundles over the circle. The noncompact case is much more interesting, as Grigory Margulis found complete affine manifolds with nonabelian free fundamental group. In his 1990 doctoral thesis, Todd Drumm found examples which are solid handlebodies using polyhedra which have since been called "crooked planes."
Goldman found examples (non-Euclidean nilmanifolds and solvmanifolds) of closed 3-manifolds which fail to admit flat conformal structures.
Generalizing Scott Wolpert's work on the Weil–Petersson symplectic structure on the space of hyperbolic structures on surfaces, he found an algebraic-topological description of a symplectic structure on spaces of representations of a surface group in a reductive Lie group. Traces of representations of the corresponding curves on the surfaces generate a Poisson algebra, whose Lie bracket has a topological description in terms of the intersections of curves. Furthermore, the Hamiltonian vector fields of these trace functions define flows generalizing the Fenchel–Nielsen flows on Teichmüller space. This symplectic structure is invariant under the natural action of the mapping class group, and using the relationship between Dehn twists and the generalized Fenchel–Nielsen flows, he proved the ergodicity of the action of the mapping class group on the SU(2)-character variety with respect to symplectic Lebesgue measure.
Following suggestions of Pierre Deligne, he and John Millson proved that the variety of representations of the fundamental group of a compact Kähler manifold has singularities defined by systems of homogeneous quadratic equations. This leads to various local rigidity results for actions on Hermitian symmetric spaces.
With John Parker, he examined the complex hyperbolic ideal triangle group representations. These are representations of hyperbolic ideal triangle groups to the group of holomorphic isometries of the complex hyperbolic plane such that each standard generator of the triangle group maps to a complex reflection and the products of pairs of generators to parabolics. The space of representations for a given triangle group (modulo conjugacy) is parametrized by a half-open interval. They showed that the representations in a particular range were discrete and conjectured that a representation would be discrete if and only if it was in a specified larger range. This has become known as the Goldman–Parker conjecture and was eventually proven by Richard Schwartz.
Professional service.
Goldman also heads a research group at the University of Maryland called the Experimental Geometry Lab, a team developing software (primarily in Mathematica) to explore geometric structures and dynamics in low dimensions. He served on the Board of Governors for The Geometry Center at the University of Minnesota from 1994 to 1996.
He served as Editor-In-Chief of Geometriae Dedicata from 2003 until 2013.
Awards and honors.
In 2012 he became a fellow of the American Mathematical Society.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g > 1"
},
{
"math_id": 1,
"text": "16g-16"
},
{
"math_id": 2,
"text": "{\\rm SL}(3,\\R)"
},
{
"math_id": 3,
"text": "{\\rm PSL}(3,\\R)"
},
{
"math_id": 4,
"text": "g>1"
},
{
"math_id": 5,
"text": "4g-3"
}
] | https://en.wikipedia.org/wiki?curid=1298822 |
12989803 | Non-positive curvature | In mathematics, spaces of non-positive curvature occur in many contexts and form a generalization of hyperbolic geometry. In the category of Riemannian manifolds, one can consider the sectional curvature of the manifold and require that this curvature be everywhere less than or equal to zero. The notion of curvature extends to the category of geodesic metric spaces, where one can use comparison triangles to quantify the curvature of a space; in this context, non-positively curved spaces are known as (locally) CAT(0) spaces.
Riemann Surfaces.
If formula_0 is a closed, orientable Riemann surface then it follows from the Uniformization theorem that formula_0 may be endowed with a complete Riemannian metric with constant Gaussian curvature of either formula_1, formula_2 or formula_3. As a result of the Gauss–Bonnet theorem one can determine that the surfaces which have a complete Riemannian metric of constant curvature formula_4 or formula_5, i.e. Riemann surfaces with a complete Riemannian metric of non-positive constant curvature, are exactly those whose genus is at least formula_2. The Uniformization theorem and the Gauss–Bonnet theorem can both be applied to orientable Riemann surfaces with boundary to show that those surfaces which have a non-positive Euler characteristic are exactly those which admit a Riemannian metric of non-positive curvature. There is therefore an infinite family of homeomorphism types of such surfaces whereas the Riemann sphere is the only closed, orientable Riemann surface of constant Gaussian curvature formula_2.
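The genus restriction stated above follows from a short Gauss–Bonnet computation; the display below is a sketch of that standard argument, added here for clarity and not taken from the article.

```latex
% Sketch: Gauss--Bonnet for a closed orientable surface S of genus g
% with constant Gaussian curvature K = c.
\int_S K \,\mathrm{d}A = 2\pi\,\chi(S) = 2\pi\,(2-2g)
\quad\Longrightarrow\quad
c\cdot\operatorname{Area}(S) = 2\pi\,(2-2g).
% If c \le 0, the left-hand side is non-positive, so 2-2g \le 0, i.e. g \ge 1.
% If c > 0, the left-hand side is positive, which forces g = 0 (the sphere).
```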
The definition of curvature above depends upon the existence of a Riemannian metric and therefore lies in the field of geometry. However the Gauss–Bonnet theorem ensures that the topology of a surface places constraints on the complete Riemannian metrics which may be imposed on a surface, so the study of metric spaces of non-positive curvature is of vital interest in both the mathematical fields of geometry and topology. Classical examples of surfaces of non-positive curvature are the Euclidean plane and flat torus (for curvature formula_1) and the hyperbolic plane and pseudosphere (for curvature formula_3). For this reason these metrics, as well as the Riemann surfaces on which they lie as complete metrics, are referred to as Euclidean and hyperbolic respectively.
Generalizations.
The characteristic features of the geometry of non-positively curved Riemann surfaces are used to generalize the notion of non-positive beyond the study of Riemann surfaces. In the study of manifolds or orbifolds of higher dimension, the notion of sectional curvature is used wherein one restricts one's attention to two-dimensional subspaces of the tangent space at a given point. In dimensions greater than formula_6 the Mostow–Prasad rigidity theorem ensures that a hyperbolic manifold of finite area has a unique complete hyperbolic metric so the study of hyperbolic geometry in this setting is integral to the study of topology.
In an arbitrary geodesic metric space the notions of being Gromov hyperbolic or of being a CAT(0) space generalise the notion that on a Riemann surface of non-positive curvature, triangles whose sides are geodesics appear "thin", whereas in settings of positive curvature they appear "fat". This generalised notion of non-positive curvature is most commonly applied to graphs and is therefore of great use in the fields of combinatorics and geometric group theory. | [
{
"math_id": 0,
"text": " S "
},
{
"math_id": 1,
"text": "0"
},
{
"math_id": 2,
"text": "1"
},
{
"math_id": 3,
"text": "-1"
},
{
"math_id": 4,
"text": " 0 "
},
{
"math_id": 5,
"text": "-1 "
},
{
"math_id": 6,
"text": "2"
}
] | https://en.wikipedia.org/wiki?curid=12989803 |
12989981 | Intrinsic dimension | The number of variables needed in a minimal representation of multidimensional data
The intrinsic dimension for a data set can be thought of as the number of variables needed in a minimal representation of the data. Similarly, in signal processing of multidimensional signals, the intrinsic dimension of the signal describes how many variables are needed to generate a good approximation of the signal.
When estimating intrinsic dimension, however, a slightly broader definition based on manifold dimension is often used, where a representation in the intrinsic dimension does only need to exist locally. Such intrinsic dimension estimation methods can thus handle data sets with different intrinsic dimensions in different parts of the data set. This is often referred to as local intrinsic dimensionality.
The intrinsic dimension can be used as a lower bound of what dimension it is possible to compress a data set into through dimension reduction, but it can also be used as a measure of the complexity of the data set or signal. For a data set or signal of "N" variables, its intrinsic dimension "M" satisfies "0 ≤ M ≤ N", although estimators may yield higher values.
Example.
Let "formula_0" be a two-variable function (or signal) which is of the form
formula_1
for some one-variable function "g" which is not constant. This means that "f" varies, in accordance with "g", along the first variable or the first coordinate. On the other hand, "f" is constant with respect to the second variable or along the second coordinate. Only the value of the first variable needs to be known in order to determine the value of "f". Hence it is a two-variable function, but its intrinsic dimension is one.
A slightly more complicated example is formula_2.
"f" is still intrinsic one-dimensional, which can be seen by making a variable transformation
formula_3 and
formula_4
which gives
formula_5.
Since the variation in "f" can be described by the single variable "y1" its intrinsic dimension is one.
For the case that "f" is constant, its intrinsic dimension is zero since no variable is needed to describe variation. For the general case, when the intrinsic dimension of the two-variable function "f" is neither zero or one, it is two.
In the literature, functions which are of intrinsic dimension zero, one, or two are sometimes referred to as "i0D", "i1D" or "i2D", respectively.
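The example above can also be checked numerically. The sketch below is my own illustration, not part of the article: it samples gradients of f(x1, x2) = sin(x1 + x2) — an arbitrary choice of non-constant g — and counts the numerically significant singular values of the gradient matrix; a single significant value indicates that one variable suffices locally, i.e. intrinsic dimension one.

```python
# Numerical illustration (not from the article): for a signal of the form
# f(x1, x2) = g(x1 + x2), every gradient is parallel to (1, 1), so the matrix
# of sampled gradients has numerical rank 1 -- one variable suffices locally.
import numpy as np

def f(x1, x2):
    return np.sin(x1 + x2)  # g = sin is an arbitrary non-constant choice

rng = np.random.default_rng(0)
pts = rng.uniform(-3.0, 3.0, size=(500, 2))
h = 1e-5  # step for central differences

# Estimate the gradient of f at every sample point.
grads = np.column_stack([
    (f(pts[:, 0] + h, pts[:, 1]) - f(pts[:, 0] - h, pts[:, 1])) / (2 * h),
    (f(pts[:, 0], pts[:, 1] + h) - f(pts[:, 0], pts[:, 1] - h)) / (2 * h),
])

# The number of non-negligible singular values estimates the intrinsic dimension.
s = np.linalg.svd(grads, compute_uv=False)
print(int(np.sum(s > 1e-6 * s[0])))  # prints 1 for this f
```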
Formal definition for signals.
For an "N"-variable function "f", the set of variables can be represented as an "N"-dimensional vector x:
formula_6.
If for some "M"-variable function "g" and "M × N" matrix A is it the case that
then the intrinsic dimension of "f" is "M".
The intrinsic dimension is a characterization of "f", it is not an unambiguous characterization of "g" nor of A. That is, if the above relation is satisfied for some "f", "g", and A, it must also be satisfied for the same "f" and "g′" and A′ given by
formula_8
and
formula_9
where B is a non-singular "M × M" matrix, since
formula_10.
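A quick numerical check of this non-uniqueness is easy to write down. The snippet below is illustrative only; the particular g, A and B are arbitrary choices made here, and it simply verifies that replacing g and A by g′ and A′ as defined above leaves f unchanged.

```python
# Illustrative check (not from the article) that the pair (g, A) in f(x) = g(Ax)
# is not unique: for any non-singular M x M matrix B, the pair (g', A') with
# g'(y) = g(By) and A' = B^{-1} A represents the same f.
import numpy as np

M, N = 2, 3
rng = np.random.default_rng(1)
A = rng.normal(size=(M, N))
B = rng.normal(size=(M, M))  # almost surely non-singular

def g(y):  # an arbitrary M-variable function
    return np.sin(y[0]) + y[1] ** 2

def f(x):
    return g(A @ x)

A_prime = np.linalg.inv(B) @ A

def g_prime(y):
    return g(B @ y)

x = rng.normal(size=N)
print(np.isclose(f(x), g_prime(A_prime @ x)))  # True
```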
The Fourier transform of signals of low intrinsic dimension.
An "N" variable function which has intrinsic dimension "M < N" has a characteristic Fourier transform. Intuitively, since this type of function is constant along one or several dimensions its Fourier transform must appear like an impulse (the Fourier transform of a constant) along the same dimension in the frequency domain.
A simple example.
Let "f" be a two-variable function which is i1D. This means that there exists a normalized vector formula_11 and a one-variable function "g" such that
formula_12
for all formula_13. If "F" is the Fourier transform of "f" (both are two-variable functions) it must be the case that
formula_14.
Here "G" is the Fourier transform of "g" (both are one-variable functions), "δ" is the Dirac impulse function and m is a normalized vector in formula_15 perpendicular to n. This means that "F" vanishes everywhere except on a line which passes through the origin of the frequency domain and is parallel to m. Along this line "F" varies according to "G".
The general case.
Let "f" be an "N"-variable function which has intrinsic dimension "M", that is, there exists an "M"-variable function "g" and "M × N" matrix A such that
formula_16.
Its Fourier transform "F" can then be described as follows: "F" vanishes everywhere except on an "M"-dimensional linear subspace through the origin of the frequency domain, namely the subspace spanned by the rows of A, and on this subspace "F" varies in accordance with the Fourier transform of "g".
Generalizations.
The type of intrinsic dimension described above assumes that a linear transformation is applied to the coordinates of the "N"-variable function "f" to produce the "M" variables which are necessary to represent every value of "f". This means that "f" is constant along lines, planes, or hyperplanes, depending on "N" and "M".
In a general case, "f" has intrinsic dimension "M" if there exist "M" functions "a1", "a2", ..., "aM" and an "M"-variable function "g" such that
formula_17
A simple example is transforming a 2-variable function "f" to polar coordinates:
formula_19 ("f" is i1D and is constant along circles centered on the origin), or
formula_20 ("f" is i1D and is constant along rays from the origin).
For the general case, a simple description of either the point sets for which "f" is constant or its Fourier transform is usually not possible.
Local Intrinsic Dimensionality.
Local intrinsic dimensionality (LID) refers to the observation that data is often distributed on a lower-dimensional manifold when only a nearby subset of the data is considered. For example, the function formula_21 can be considered one-dimensional when "y" is close to 0 (with one variable "x"), two-dimensional when "y" is close to 1, and again one-dimensional when "y" is positive and much larger than 1 (with variable "x+y").
Local intrinsic dimensionality is often used with respect to data. It is then usually estimated from the "k" nearest neighbors of a data point, often based on a concept related to the doubling dimension in mathematics. Since the volume of a "d"-sphere grows exponentially in "d", the rate at which new neighbors are found as the search radius is increased can be used to estimate the local intrinsic dimensionality (e.g., GED estimation). However, alternative approaches to estimation have been proposed, for example angle-based estimation.
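As a rough illustration of the radius-doubling idea, the sketch below is a simplification of my own, not the GED or angle-based estimators cited above: it estimates local intrinsic dimensionality from how the neighbour count grows when the search radius is doubled.

```python
# Rough sketch (my own simplification, not the GED or angle-based estimators
# cited above): if the neighbour count grows like n(r) ~ r^d, then
# d ~ log(n(2r) / n(r)) / log 2.
import numpy as np

def local_id_estimate(data, query, r):
    """Estimate the local intrinsic dimension around `query` from neighbour counts."""
    dists = np.linalg.norm(data - query, axis=1)
    n_r = np.sum(dists <= r)
    n_2r = np.sum(dists <= 2 * r)
    if n_r < 2 or n_2r <= n_r:
        return float("nan")  # too few neighbours for a meaningful estimate
    return float(np.log(n_2r / n_r) / np.log(2.0))

# Example: points on a 2-D plane embedded in 5-D space give estimates near 2.
rng = np.random.default_rng(2)
flat = rng.uniform(-1.0, 1.0, size=(5000, 2))
data = np.hstack([flat, np.zeros((5000, 3))])
print(local_id_estimate(data, data[0], r=0.1))
```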
Intrinsic dimension estimation.
The intrinsic dimension of data manifolds can be estimated by many methods, depending on the assumptions made about the data manifold. A review of such methods was published in 2016.
The two-nearest neighbors (TwoNN) method estimates the intrinsic dimension of an immersed Riemannian manifold. The algorithm is as follows: scatter some points on the manifold.
Measure formula_22 for many points, where formula_23 are the distances to the point's two closest neighbors.
Fit the empirical CDF of formula_24 to formula_25.
Return formula_26.
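A compact implementation of these steps might look as follows. This is a sketch under the assumption that the data is given as a NumPy array of points; scikit-learn's NearestNeighbors is used for the neighbour search, and the fit of the empirical CDF to formula_25 is done with the usual log-linear least-squares fit through the origin, discarding the largest (noisiest) ratios.

```python
# Sketch of the TwoNN procedure described above (implementation choices such as
# scikit-learn and the 10% discard fraction are assumptions, not from the article).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_dimension(data, discard_fraction=0.1):
    """Estimate the intrinsic dimension with the two-nearest-neighbours method."""
    # Distances to the two nearest neighbours of each point (column 0 is the point itself).
    dists, _ = NearestNeighbors(n_neighbors=3).fit(data).kneighbors(data)
    mu = np.sort(dists[:, 2] / dists[:, 1])          # mu = r2 / r1 for every point
    mu = mu[: int(len(mu) * (1.0 - discard_fraction))]
    # Empirical CDF F(mu) = i / N should satisfy F = 1 - mu^(-d), i.e.
    # -log(1 - F) = d * log(mu); fit d by least squares through the origin.
    F = np.arange(1, len(mu) + 1) / len(data)
    x = np.log(mu)
    y = -np.log(1.0 - F)
    return float(np.sum(x * y) / np.sum(x * x))

# Example: a 2-D plane linearly embedded in 10-D space gives an estimate near 2.
rng = np.random.default_rng(3)
data = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 10))
print(twonn_dimension(data))
```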
History.
During the 1950s, so-called "scaling" methods were developed in the social sciences to explore and summarize multidimensional data sets. After Shepard introduced non-metric multidimensional scaling in 1962, one of the major research areas within multidimensional scaling (MDS) was estimation of the intrinsic dimension. The topic was also studied in information theory, pioneered by Bennet in 1965, who coined the term "intrinsic dimension" and wrote a computer program to estimate it.
During the 1970s, intrinsic dimensionality estimation methods were constructed that did not depend on dimensionality reductions such as MDS: methods based on local eigenvalues, on distance distributions, and on other dimension-dependent geometric properties.
Estimating intrinsic dimension of sets and probability measures has also been extensively studied since around 1980 in the field of dynamical systems, where dimensions of (strange) attractors have been the subject of interest. For strange attractors there is no manifold assumption, and the dimension measured is some version of fractal dimension — which also can be non-integer. However, definitions of fractal dimension yield the manifold dimension for manifolds.
In the 2000s the "curse of dimensionality" has been exploited to estimate intrinsic dimension.
Applications.
The case of a two-variable signal which is i1D appears frequently in computer vision and image processing and captures the idea of local image regions which contain lines or edges. The analysis of such regions has a long history, but it was not until a more formal and theoretical treatment of such operations began that the concept of intrinsic dimension was established, even though the name has varied.
For example, the concept which here is referred to as an "image neighborhood of intrinsic dimension 1" or "i1D neighborhood" is called "1-dimensional" by Knutsson (1982), "linear symmetric" by Bigün & Granlund (1987) and "simple neighborhood" in Granlund & Knutsson (1995).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x_1, x_2)"
},
{
"math_id": 1,
"text": "f(x_1, x_2) = g(x_1)"
},
{
"math_id": 2,
"text": "f(x_1, x_2) = g(x_1 + x_2)"
},
{
"math_id": 3,
"text": "y_1 = x_1 + x_2"
},
{
"math_id": 4,
"text": "y_2 = x_1 - x_2"
},
{
"math_id": 5,
"text": "f\\left(\\frac{y_1 + y_2}{2}, \\frac{y_1 - y_2}{2}\\right) = g\\left(y_1\\right)"
},
{
"math_id": 6,
"text": "f = f\\left(\\mathbf{x} \\right) \\text{ where } \\mathbf{x} = \\left(x_1, \\dots, x_N \\right)"
},
{
"math_id": 7,
"text": "f(\\mathbf{x}) = g(\\mathbf{Ax}),"
},
{
"math_id": 8,
"text": "g'\\left(\\mathbf{y}\\right) = g \\left(\\mathbf{By}\\right) "
},
{
"math_id": 9,
"text": "\\mathbf{A'} = \\mathbf{B}^{-1} \\mathbf{A}"
},
{
"math_id": 10,
"text": "f\\left(\\mathbf{x}\\right) = \ng'\\left(\\mathbf{A'x}\\right) = g \\left(\\mathbf{BA'x}\\right) = g\\left(\\mathbf{Ax}\\right) "
},
{
"math_id": 11,
"text": "\\mathbf{n} \\in \\reals^{2}"
},
{
"math_id": 12,
"text": "f(\\mathbf{x}) = g(\\mathbf{n}^{\\operatorname {T}} \\mathbf{x})"
},
{
"math_id": 13,
"text": "\\mathbf{x} \\in \\reals^{2}"
},
{
"math_id": 14,
"text": "F \\left(\\mathbf{u}\\right) = G \\left(\\mathbf{n}^{\\mathrm{T}} \\mathbf{u}\\right) \\cdot \\delta \\left(\\mathbf{m}^{\\mathrm{T}} \\mathbf{u}\\right)"
},
{
"math_id": 15,
"text": "\\reals^{2}"
},
{
"math_id": 16,
"text": "f(\\mathbf{x}) = g(\\mathbf{Ax}) \\quad \\forall \\mathbf{x}"
},
{
"math_id": 17,
"text": "f(\\mathbf{x}) = g \\left( a_1(\\mathbf{x}), a_2(\\mathbf{x}), \\dots, a_M(\\mathbf{x}) \\right)"
},
{
"math_id": 18,
"text": "f\\left(\\frac{y_1 + y_2}{2}, \\frac{y_1 - y_2}{2}\\right) = g\\left(y_1\\right)"
},
{
"math_id": 19,
"text": "f(x_1, x_2) = g \\left(\\sqrt{x_1^2 + x_2^2} \\right)"
},
{
"math_id": 20,
"text": "f(x_1, x_2) = g \\left(\\arctan \\left(\\frac{x_2}{x_1}\\right)\\right)"
},
{
"math_id": 21,
"text": "f(x,y) = x + \\max\\{0, |y|-1\\}\n"
},
{
"math_id": 22,
"text": "\\mu = r_2/r_1"
},
{
"math_id": 23,
"text": "r_1, r_2"
},
{
"math_id": 24,
"text": "\\mu"
},
{
"math_id": 25,
"text": "1-\\mu^{-d}"
},
{
"math_id": 26,
"text": "d"
}
] | https://en.wikipedia.org/wiki?curid=12989981 |
12990 | Gall–Peters projection | Cylindrical equal-area map projection
The Gall–Peters projection is a rectangular, equal-area map projection. Like all equal-area projections, it distorts most shapes. It is a cylindrical equal-area projection with latitudes 45° north and south as the regions on the map that have no distortion. The projection is named after James Gall and Arno Peters.
Gall described the projection in 1855 at a science convention and published a paper on it in 1885. Peters brought the projection to a wider audience beginning in the early 1970s through his "Peters World Map". The name "Gall–Peters projection" was first used by Arthur H. Robinson in a pamphlet put out by the American Cartographic Association in 1986.
The Gall–Peters projection achieved notoriety in the late 20th century as the centerpiece of a controversy about the political implications of map design.
Description.
Formula.
The projection is conventionally defined as:
formula_0
where "λ" is the longitude from the central meridian in degrees, "φ" is the latitude, and "R" is the radius of the globe used as the model of the earth for projection. For longitude given in radians, remove the factors.
Simplified formula.
Stripping out unit conversion and uniform scaling, the formulae may be written:
formula_1
where "formula_2" is the longitude from the central meridian (in radians), "formula_3" is the latitude, and "R" is the radius of the globe used as the model of the earth for projection. Hence the sphere is mapped onto the vertical cylinder, and the cylinder is stretched to double its length. The stretch factor, 2 in this case, is what distinguishes the variations of cylindric equal-area projection.
Relation to cylindric equal-area projections.
The various specializations of the cylindric equal-area projection differ only in the ratio of the vertical to horizontal axis. This ratio determines the "standard parallel" of the projection, which is the parallel at which there is no distortion and along which distances match the stated scale. The standard parallels of the Gall–Peters are 45° N and 45° S. Several other specializations of the equal-area cylindric have been described, promoted, or otherwise named.
Origins and naming.
The Gall–Peters projection was first described in 1855 by the Scottish clergyman James Gall, who presented it along with two other projections at the Glasgow meeting of the British Association for the Advancement of Science (the BA). He gave it the name "orthographic" and formally published his work in 1885 in the "Scottish Geographical Magazine". The projection is suggestive of the orthographic projection in that distances between parallels of the Gall–Peters are a constant multiple of the distances between the parallels of the orthographic. That constant is √2.
In 1967, the German filmmaker Arno Peters independently devised a similar projection, which he presented in 1973 as the "Peters world map". Peters's original description of his projection contained a geometric error that, taken literally, implies standard parallels of 46°02′ N/S. However the text accompanying the description made it clear that he had intended the standard parallels to be 45° N/S, making his projection identical to Gall's orthographic. In any case, the difference is negligible in a world map.
The name "Gall–Peters projection" seems to have been used first by Arthur H. Robinson in a pamphlet put out by the American Cartographic Association in 1986. Before 1973 it had been known, when referred to at all, as the "Gall orthographic" or "Gall's orthographic". Most Peters supporters refer to it as the "Peters projection". During the years of controversy, the cartographic articles tended to use one name or the other, while acknowledging both names. In recent years "Gall–Peters" seems to dominate.
Peters world map controversy.
The Gall–Peters projection initially passed unnoticed when presented by Gall in 1855. It achieved more widespread attention after Arno Peters reintroduced it in 1973. He promoted it as a superior alternative to the commonly used Mercator projection, on the basis that the Mercator projection greatly distorts the relative sizes of regions on a map. In particular, he criticized that the Mercator projection causes wealthy Europe and North America to appear very large relative to poorer Africa and South America. These arguments swayed many socially concerned groups to adopt the Gall–Peters projection, including the National Council of Churches and the magazine "New Internationalist".
His campaign was bolstered by the inaccurate claim that the Gall–Peters projection was the only "area-correct" map. In actuality, some of the oldest projections are equal-area (such as the sinusoidal projection), and hundreds have been described. He also inaccurately claimed that it possessed "absolute angle conformality", had "no extreme distortions of form", and was "totally distance-factual". Peters framed his criticisms of the Mercator projection with criticisms of the broader cartographic community. In particular, Peters wrote in "The New Cartography",
<templatestyles src="Template:Blockquote/styles.css" />
As Peters's promotions gained popularity, the cartographic community reacted with hostility to his criticisms, as well as to the inaccuracy and lack of novelty of his claims. They called attention to the long list of cartographers who, over the preceding century, had formally expressed frustration with publishers' overuse of the Mercator and advocated for alternatives. In addition, several scholars criticized the particularly large distortions present in the Gall–Peters projection, and remarked on the irony of its undistorted presentation of the mid latitudes, including Peters's native Germany, at the expense of the low latitudes, which host more of the technologically underdeveloped nations.
The increasing publicity of Peters's claims in 1986 motivated the American Cartographic Association (now Cartography and Geographic Information Society) to produce a series of booklets (including "Which Map Is Best") designed to educate the public about map projections and distortion in maps. In 1989 and 1990, after some internal debate, seven North American geographic organizations adopted a resolution rejecting all rectangular world maps, a category that includes both the Mercator and the Gall–Peters projections, though the North American Cartographic Information Society notably declined to endorse it.
The two camps never made any real attempts toward reconciliation. The Peters camp largely ignored the protests of the cartographers, and did not acknowledge Gall's prior work until the controversy had largely run its course, late in Peters's life. While he likely devised the projection independently, his unscholarly conduct and refusal to engage the cartographic community undoubtedly contributed to the polarization and impasse.
In the ensuing decades, J. Brian Harley credited the Peters phenomenon with demonstrating the social implications of map projections, while the geographer Jeremy Crampton considers all maps to be political, and sees the condemnation from the cartographic community as reactionary and perhaps demonstrative of immaturity in the profession.
Adoption.
Maps based on the projection are promoted by UNESCO, and they are also widely used by British schools. The U.S. state of Massachusetts and Boston Public Schools began phasing in these maps in March 2017, becoming the first public school district and state in the United States to adopt Gall–Peters maps as their standard. Until its dissolution in 2020, Amherst-based ODT Maps Inc. was the exclusive North American publisher of Peters and Hobo–Dyer projection maps. On April 16, 2024, Nebraska Governor Jim Pillen signed a law that requires public schools to display maps based on the Gall–Peters projection, a similar cylindrical equal-area projection, or the AuthaGraph projection beginning in the 2024–2025 school year.
References.
Notes
<templatestyles src="Reflist/styles.css" />
Further reading | [
{
"math_id": 0,
"text": "\\begin{align}\n x &= \\frac{R\\pi\\lambda\\cos 45^\\circ}{180^\\circ} = \\frac{R\\pi\\lambda}{180^\\circ\\sqrt{2}}\\\\\n y &= \\frac{R\\sin \\varphi}{\\cos 45^\\circ} = R \\sqrt{2} \\sin \\varphi\n\\end{align}"
},
{
"math_id": 1,
"text": "\\begin{align}\n x &= R\\lambda\\\\\n y &= 2R\\sin\\varphi\n\\end{align}"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "\\varphi"
}
] | https://en.wikipedia.org/wiki?curid=12990 |
12990367 | Easter Microplate | Very small tectonic plate to the west of Easter Island
Easter Plate is a tectonic microplate located to the west of Easter Island off the west coast of South America in the middle of the Pacific Ocean, bordering the Nazca Plate to the east and the Pacific Plate to the west. It was discovered by looking at earthquake distributions that were offset from the previously perceived Nazca-Pacific divergent boundary. This young plate is 5.25 million years old and is considered a microplate because it is small, with an area of approximately . Seafloor spreading along the Easter microplate's borders has some of the highest global rates, ranging from /yr.
Structure and tectonics (present).
From the 1970s to 1990s, multiple efforts were made to collect data on the area, including several magnetic and gravitational anomaly surveys. These surveys show that the Easter plate is uniquely shallow and is bordered by spreading centers and transform boundaries, with triple junctions located at its northern and southern tips.
Along the eastern border, there are several spreading centers south of 27° S and three northward-propagating rifts to the north of 27° S. The axis further north is a graben reaching a depth of approximately 6000 m. Northward propagation of the eastern rifts is continuous at a speed of /yr. The spreading ridge between 26° S and 27° S has a spreading rate of /yr, but is asymmetrical on the Nazca Plate side. Bathymetry data shows the depth is near 26°30' S and progressively gets deeper to the north, reaching depths of in an axial valley. There is approximately a gap at the northern end of the east rift, with no rift connecting the northern boundary to the eastern boundary.
The northern border has wide ridges, greater than 1 km tall, linked side-by-side with the steeper slopes to the south. The southern trough area sits deeper than the areas to the north. The very eastern end of the northern border has pure strike-slip motion, while the western end is marked by the Northern Pacific-Nazca-Easter triple junction. This triple junction is a stable rift-fracture-fracture zone with anomalous earthquakes occurring to the northeast portion, indicating a possible second spreading axis. The rest of the northern boundary to the east and west of the triple junction are colinear transform boundaries. A trough, approximately deep, borders the north along this transform boundary to the east connecting to a deep hole, called the "Pito Deep" because of its close proximity to the Pito Seamount, at the northeastern limit.
The western border is divided into two parts. The west section has 2 spreading segments running north to south with spreading rates that approximately range from /yr. These segments are connected by sinistrally slipping transform faults around 14°15' S. A relay basin runs north to south along the southernmost segment as a result of past counter-clockwise rotation. The southwest consists of one slower spreading center (/yr) that runs northwest to southeast until joining the southern transform boundary.
Like the western end of the northern border, the southern end also has an inferred rift-rift-fracture triple junction, but no data has yet been gathered to verify its existence. A single transform fault runs west to east and is home to the most rugged and shallow terrain, with high seismic activity.
Evolution.
In 1995, routine magnetic, gravity, and echosounder data, supplemented with data from GLORIA (a long-range side scan sonar), German Sea Beam, SeaMARC II, and data from the World Data Center in Boulder, CO were all utilized to construct a two-stage model for the evolution of the Easter microplate.
Stage 1: 5.25 to 2.25 million years ago.
Approximately 5.25 million years ago, the boundary between the Pacific and Nazca plates was not connected and did not completely separate the two plates. The Easter microplate grew in the north-south direction throughout this period. The eastern rift, having not yet connected to the western rift, began to propagate northward, marked by pseudofaults to the west and east of the rift, and continued until approximately 2.25 million years ago, when the tip reached 23° S. While this was occurring, the west rift was propagating southward, north of the east rift, breaking into segments connected by transform faults that trend towards the southwest. The microplate rotated counter-clockwise at a rate of 15° every million years throughout its history.
Stage 2: 2.25 million years ago to present.
The Easter microplate grew at a slower rate in the east-west dimension during this period, as it stopped growing north-south due to the cessation of east rift propagation. The east rift did continue angular spreading while keeping the same growth rate, but did not propagate any further northward. The west rift continued adjusting with more segmenting until the southwest rift began to open and propagate to the east. The southwest rift continued propagation until the present day southern triple junction was created.
Future predictions.
Though other evolution models have argued that the microplate was created approximately 4.5 million years ago, there is currently only one hypothesis for the future evolution of the Easter microplate. It is believed that, due to the slowing spreading rates at the southwest rift and the northern end of the east rift, the southwest and west rifts will cease spreading activity and completely transfer the microplate from the Nazca to the Pacific Plate. This has been the case for other areas where extensive rift propagation studies have been conducted.
Dynamics.
Driving forces.
Divergence of the Nazca and Pacific plates generates a pulling force acting on the Easter microplate, causing its rotation. Two types of driving forces are believed to act on the Nazca-Pacific plate divergence: shear and tension. Shear driving forces occur along the north and south boundaries, which explains failures due to compression at the northern end of the plate. Tension driving forces occur at the east and west rifts. Because of the fast spreading rates along these boundaries, the Easter microplate has a thin lithosphere. The normal tensional forces applied across the east and west rifts are enough to drive the microplate's rotation. Because the spreading rates along these rifts slow towards the north, the lithosphere is believed to thicken there, and shear forces are thought to contribute to the overall driving force.
Resisting forces.
Mantle basal drag accounts for 20% of the resisting forces applied to the Easter microplate. The mantle basal drag force is calculated using the equation: formula_0, where formula_1 is the mantle drag force per unit area, formula_2 is the proportionality constant, and formula_3 is the absolute velocity of the microplate using a fixed hotspot as the reference frame. The value of formula_1 quantifies the total resisting force that the ductile asthenosphere applies to the brittle lithosphere floating on top.
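As a toy illustration of the relation formula_0, the snippet below multiplies a hypothetical drag coefficient by a hypothetical plate velocity; neither number is a measured value for the Easter microplate.

```python
# Toy illustration of F_D = D * V; both numbers below are placeholders chosen
# only to show the arithmetic, not measured values for the Easter microplate.
drag_coefficient_D = 2.5e12    # hypothetical proportionality constant
plate_velocity_V = 3.0e-9      # hypothetical absolute plate speed in m/s (~9.5 cm/yr)
basal_drag_F_D = drag_coefficient_D * plate_velocity_V
print(basal_drag_F_D)          # resisting basal drag force per unit area
```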
The other 80% of the resisting forces come from the rotation of the Easter microplate. As the microplate is rotating, normal resistances are applied to the microplate at the north and south ends where there are no rifts to help microplate adjustment. Both tension and compression contribute to the resistance, but compressional forces along the ends of the rifts have more of an impact. These compressional forces are what create the elevated regions that surround the "Pito Deep".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec{F}_D = D\\vec{V}"
},
{
"math_id": 1,
"text": "\\vec{F}_D"
},
{
"math_id": 2,
"text": "D"
},
{
"math_id": 3,
"text": "\\vec{V}"
}
] | https://en.wikipedia.org/wiki?curid=12990367 |